url | text | metadata
---|---|---|
https://docs.ocean.dwavesys.com/en/latest/concepts/qm.html | $Ax + By + Cxy$
where $$A$$, $$B$$, and $$C$$ are constants. Single-variable terms—$$Ax$$ and $$By$$ here—are linear with the constant biasing the term’s variable. Two-variable terms—$$Cxy$$ here—are quadratic with a relationship between the variables. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41859567165374756, "perplexity": 492.6196248009235}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00372.warc.gz"} |
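As a small illustration (not from the original page), take binary variables $$x, y \in \{0, 1\}$$ with $$A = 1$$, $$B = 2$$, and $$C = -3$$: the model $x + 2y - 3xy$ equals $2$ at $x = 0, y = 1$ but $1 + 2 - 3 = 0$ at $x = y = 1$, so the quadratic bias $$C$$ controls how the objective changes when the two variables take the value 1 together.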
https://www.andlearning.org/percentage-formula/ |
# Percentage Change Formula | Decrease, Yield, Increase
## Percentage Formula
In simple terms, percent means per hundred. Whenever you need to express a number between zero and one as parts of a hundred, the percentage formula is used. A percentage is a fraction whose denominator is 100. The symbol for percentage is “%”, and a major application of percentages is comparing quantities or finding ratios. Here is the percentage formula in mathematics:
#### The Percentage Formula is given as,
$Percentage = \frac{Value}{Total\;Value}\times 100$
### Percentage Change Formula
When a quantity changes from one value to another, the new value is either greater or smaller than the old one. To compare two values, or to express the difference relative to the original value, we use the percentage change formula. Depending on the direction of the change, it is called a percentage increase or a percentage decrease. Each of these concepts is discussed in the sections below.
#### Percentage change formula is
$Percentage\;Change=\frac{New\;Value-Old\;Value}{Old\;Value}\times 100$
### Percentage Decrease Formula
To find the percentage decrease, first note down the two numbers you are comparing. Compute the difference: Decrease = Original number - New number. Then divide the decrease by the original value and multiply by 100. If the result is negative, the change is actually a percentage increase; otherwise it is a percentage decrease. The same technique can be used to compute percentage differences for any pair of numbers.
The Percentage Decrease Formula is given as,
$Percentage\;Decrease = \frac{Decrease\;in\;Value}{Original\;Value} \times 100$
### Percentage Increase Formula
To find the percentage increase, first note down the two numbers you are comparing. Compute the difference: Increase = New number - Original number. Then divide the increase by the original value and multiply by 100. If the result is negative, the change is actually a percentage decrease; otherwise it is a percentage increase.
The Percentage Increase Formula is given as,
$\large Percentage\;Increase=\frac{Increase\;in\;Value}{Original\;Value}\times 100$
Note that the final value could be positive, negative, or zero; a value of zero means there is no percentage change at all.
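As a quick numeric illustration (a minimal R sketch with made-up numbers, not part of the original article), the formulas above can be applied directly:
old <- 80   # original value
new <- 92   # new value
(new - old) / old * 100   # percentage change: 15, i.e. a 15% increase
(old - new) / old * 100   # percentage decrease formula: -15, the negative sign signals an increase
Changing 80 to 92 is therefore a 15% increase.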
### Percentage Yield Formula
Whenever we run an experiment, the final result usually differs a little from our prediction. In chemistry, this difference is described by the percentage yield. Take the example of an online recipe that lists the exact ingredients and quantities and tells you how many servings to expect; in practice, the amount you actually get is often a bit more or less. There are many reasons for this, such as inaccurate cup measures, food spoilage, or a different stove.
$Percentage\;Yield = \frac{Actual\;Yield}{Theoretical\;Yield} \times 100\%$
The same concept applies when we perform experiments in a chemistry lab to synthesize a compound. Even when the experiment is carried out under ideal conditions, experimental errors usually creep in, so the actual result will be slightly different from the prediction.
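As a short illustration (an R sketch with made-up quantities, not part of the original article), suppose a reaction is predicted to give 12.0 g of product but only 9.6 g is actually recovered:
actual <- 9.6        # grams actually recovered
theoretical <- 12.0  # grams predicted under ideal conditions
actual / theoretical * 100   # percentage yield: 80
Both quantities are measured in grams, so the units cancel as the next paragraph requires.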
Here, is given a quick formula to calculate the percentage yield in mathematics and chemistry. When you apply this formula, make sure that actual yield value and theoretical yield values are in the same units otherwise there would be a definite error in the final value. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499363899230957, "perplexity": 641.2990459501905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527566.44/warc/CC-MAIN-20190419101239-20190419123239-00353.warc.gz"} |
https://math.stackexchange.com/questions/3011647/given-technology-find-number-of-distinguishable-ways-the-letters-can-be-arrang | # Given TECHNOLOGY, find the number of distinguishable ways the letters can be arranged in which the letters T, E and N are together [closed]
Given TECHNOLOGY, find the number of distinguishable ways the letters can be arranged in which the letters T, E and N are together.
This is my working:
$$3! \cdot \frac{7!}{2!}$$
Is this correct?
## closed as off-topic by Don Thousand, MisterRiemann, NCh, Shailesh, John B Dec 1 '18 at 0:45
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Don Thousand, MisterRiemann, NCh, Shailesh
If this question can be reworded to fit the rules in the help center, please edit the question.
• Solutions to such exercises should usually be presented in such a way that one can understand how you reached the answer, i.e. you should explain how you got that particular number, instead of just presenting the final answer. – MisterRiemann Nov 24 '18 at 15:00
• Be careful. TECHNOLOGY has ten letters, so you have a block of three letters and seven other letters to arrange. – N. F. Taussig Nov 24 '18 at 15:04
Treat $$T, E, N$$ as a single letter; then the total number of units to arrange in the word TECHNOLOGY is 8, which can be done in $$8!$$ ways, and $$T, E, N$$ can be arranged among themselves in $$3!$$ ways. Since $$O$$ repeats two times, the answer is $$\frac{8!\cdot 3!}{2!}$$.
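A quick numeric check of this count (a small R sketch, not part of the original thread):
factorial(8) * factorial(3) / factorial(2)
## [1] 120960
so there are 120960 distinguishable arrangements of TECHNOLOGY with T, E and N together.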
Consider $$\text{TEN}$$ together as one block and each of the other letters as a single block. Then you have $$8$$ blocks. So there are $$8!$$ permutations possible and $$3!$$ permutations of $$\text{TEN}$$. Also, the two copies of the letter $$\text{O}$$ are not distinguishable.
So total number of ways is $$\dfrac{8!\cdot 3!}{2!}$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130512237548828, "perplexity": 545.7132811196825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256314.25/warc/CC-MAIN-20190521082340-20190521104340-00032.warc.gz"} |
https://www.est.colpos.mx/web/packages/pseval/vignettes/introduction.html | # Methods
## Introduction
A valid principal surrogate endpoint can be used as a primary endpoint for evaluating treatments in phase II clinical trials and for predicting individual treatment effects post licensure. A surrogate is considered to be valid if it provides reliable predictions of treatment effects on the clinical endpoint of interest. Frangakis and Rubin (2002) introduced the concept of principal stratification and the definition of a principal surrogate (PS). Informally, a post-treatment intermediate response variable is a principal surrogate if causal effects of the treatment on the clinical outcome only exist when causal effects of the treatment on the intermediate variable exist. The criteria for a PS have been modified and extended in more recent works, with most current literature focusing on wide effect modification as the primary criterion of interest; tests for wide effect modification are implemented in the package.
The goal of PS evaluation is estimation and testing of how treatment efficacy on the clinical outcome of interest varies over subgroups defined by possible treatment and surrogate combinations of interest; this is an effect modification objective. The combinations of interest are called the principal strata and they include a set of unobservable counterfactual responses: responses that would have occurred under a set of conditions counter to the observed conditions. To finesse this problem of unobservable responses, a variety of clever trial designs and estimation approaches have been proposed. Several of these have been implemented in the pseval package (Sachs and Gabriel 2016).
## Notation
Let $$Z_i$$ be the treatment indicator for subject $$i$$, where 0 indicates the control or standard treatment, and 1 indicates the experimental treatment. We currently only allow for two levels of treatment and assume that the treatment assignments are randomized. Let $$S_i$$ be the observed value of the intermediate response for subject $$i$$. Since $$S_i$$ can be affected by treatment, there are two naturally occurring counterfactual values of $$S_i$$: $$S_i(1)$$ under treatment, and $$S_i(0)$$ under control. Let $$s_z$$ be the realization of the random variable $$S(z)$$, for $$z \in \{0, 1\}$$. The outcome of interest is denoted $$Y_i$$. We consider the counterfactual values of $$Y_i(0)$$ and $$Y_i(1)$$. We allow for binary, count, and time-to-event outcomes, thus $$Y_i$$ may be a vector containing a time variable and an event/censoring indicator, i.e. $$Y_i = (T_i, \Delta_i)$$ where $$\Delta_i = 1$$ if $$T_i$$ is an event time, and $$\Delta_i = 0$$ if $$T_i$$ is a censoring time. For all of the methods, $$S_i(z)$$ is only defined if the clinical outcome $$Y_i(z)$$ does not occur before the potential surrogate $$S_i(z)$$ is measured at a fixed time $$\tau$$ after entry into the study. The data analyses only include participants who have not experienced the clinical outcome by time $$\tau$$. For interpretability all of the methods assume no individual-level treatment effects on $$Y$$ before $$\tau$$, which we refer to as the “Equal early individual risk” assumption below.
## Estimands
Criteria for $$S$$ to be a good surrogate are based on risk estimands that condition on the potential intermediate responses. The risk is defined as a mapping $$g$$ of the cumulative distribution function of $$Y(z)$$ conditional on the intermediate responses. The joint risk estimands condition on the candidate surrogate under both levels of treatment, $$(S(1), S(0))$$. $risk_1(s_1, s_0) = g\left\{F_{s_1}\left[Y(1) | S(0) = s_0, S(1) = s_1\right]\right\}, \\ risk_0(s_1, s_0) = g\left\{F_{s_1}\left[Y(0) | S(0) = s_0, S(1) = s_1\right]\right\}.$ For instance, for a binary outcome, the risk function may simply be the probability $$risk_z(s_1, s_0) = P(Y(z) = 1 | S(0) = s_0, S(1) = s_1)$$, or for a time-to-event outcome the risk function may be the cumulative distribution function $$risk_z(s_1, s_0) = P(Y(z) \leq t | S(0) = s_0, S(1) = s_1)$$.
Currently we focus only on marginal risk estimands which condition only on $$S(1)$$, the intermediate response or biomarker under active treatment:
$risk_1(s_1) = g\left\{F_{s_1}\left[Y(1) | S(1) = s_1\right]\right\}, \\ risk_0(s_1) = g\left\{F_{s_1}\left[Y(0) | S(1) = s_1\right]\right\}.$
Neither of the joint risk estimands is identifiable in a standard randomized trial, as either S(0) or S(1) or both will be missing for each subject. In the special case where $$S(0)$$ is constant, such as the immune response to HIV antigens or Hep B in the placebo arm of a vaccine trial, the joint and marginal risk estimands are equivalent. This special case is referred to as case constant biomarker (CB) in much of the literature (Gilbert and Hudgens 2008); that is, $$S_i(0) = c$$ for all subjects $$i$$. This may occur outside the vaccine setting when one considers the AUC of a treatment drug as a surrogate; those receiving placebo will have no drug and therefore all placebo AUC values will be 0. Under the assumptions given below, and in the case CB setting, the marginal risk estimand is identifiable in the treatment arm.
In addition, as will be outlined below, there are specific trial augmentations that allow for the measurement or imputation of the missing counterfactual $$S$$ values. Under one of these augmentations, the case CB setting can sometimes be induced by evaluating a function of the candidate surrogate instead of the surrogate itself. Greater detail on this point is given below.
Specification of the distribution of $$Y(z) | S(1)$$ determines the likelihood; we will denote this density as $$f(y | \beta, s_1, z)$$. If $$S(1)$$ were fully observed, simple maximum likelihood estimation could be used. The key challenge in estimating these risk estimands is solving the problem of conditioning on counterfactual values that are not observable for at least a subset of subjects in a randomized trial. This involves integrating out the missing values based on some model and/or set of assumptions.
## Principal Surrogate Criteria
Frangakis and Rubin (2002) gave a single criterion for a biomarker $$S$$ to be a PS: causal effects of the treatment on the clinical outcome only exist when causal effects of the treatment on the intermediate variable exist. In general this can only be evaluated using the joint risk estimands, which consider not only the counterfactual values of the biomarker under treatment, but also those under control, $$S(0)$$. However, in the special case where all $$S(0)$$ values are constant, say at level $$C$$, such as an immune response to HIV in an HIV-negative population pre-vaccination, this criterion, often referred to as average causal necessity (ACN), can be written in terms of the marginal risk estimands as:
$risk_1(C)=risk_0(C)$
More recently, other works by Gilbert and Hudgens (2008), Wolfson and Gilbert (2010), Huang, Gilbert, and Wolfson (2013), Gabriel and Gilbert (2014), and E. E. Gabriel and Follmann (2015) have suggested that this criterion is both too restrictive and in some cases can be vacuously true. Instead, most current works suggest that the wide effect modification (WEM) criterion is of primary importance, with ACN being of secondary importance. WEM is given formally in terms of the risk estimands and a known contrast function $$h$$ satisfying $$h(x, y) = 0$$ if and only if $$x = y$$ by:
$|h(risk_1(s_1),risk_0(s_1)) - h(risk_1(s_1^*),risk_0(s_1^*))|>\delta$
for at least some $$s_1 \neq s_{1}^*$$ and $$\delta>0$$, with the larger the $$\delta$$ the better. To evaluate WEM and ACN we need to identify the risk estimands.
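As a concrete illustration (added here, not in the original text), take $$h$$ to be the risk difference, $$h(x, y) = x - y$$, which satisfies $$h(x, y) = 0$$ if and only if $$x = y$$. Then WEM requires
$\left|\left[risk_1(s_1) - risk_0(s_1)\right] - \left[risk_1(s_1^*) - risk_0(s_1^*)\right]\right| > \delta$
for some $$s_1 \neq s_1^*$$; that is, the treatment effect on the risk-difference scale must vary meaningfully across levels of the surrogate under treatment.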
## Augmentation and Assumptions
We first make three standard assumptions used in much of the literature for absorbing event outcomes:
• Stable Unit Treatment Value Assumption (SUTVA): Observations on the independent units in the trial should be unaffected by the treatment assignment of other units.
• Ignorable Treatment Assignment: The observed treatment assignment does not change the counterfactual clinical outcome.
• Equal individual risk up to the time of candidate surrogate measurement $$\tau$$.
In time-to-event settings one more assumption is needed:
• Non-informative censoring.
It should be noted that the equal individual risk assumption requires that time-to-event analysis start at time $$\tau$$, rather than at randomization.
Wolfson and Gilbert (2010) outlines how these assumptions are needed for identification of the risk estimands. Now to deal with the missing $$S(1)$$ values among those with $$Z = 0$$, we next focus on three trial augmentations: Baseline immunogenicity predictor (BIP), closeout placebo vaccination (CPV), and baseline surrogate measurement (BSM). For further details on these augmentations, we refer you to Follmann (2006), Gilbert and Hudgens (2008), Gabriel and Gilbert (2014), and E. E. Gabriel and Follmann (2015).
### BIP
Briefly, a BIP $$W$$ is any baseline measurement or set of measurements that is highly correlated with $$S$$. It is particularly useful if $$W$$ is unlikely to be associated with the clinical outcome after conditioning on $$S$$, i.e. $$Y \perp W | S(1)$$; some of the methods leverage this assumption. The BIP $$W$$ is used to integrate out the missing $$S(1)$$ among those with $$Z = 0$$ based on a model for $$S(1) | W$$ that is estimated among those with $$Z = 1$$. We describe how this model is used in the next section.
The assumptions needed for a BIP to be useful depend on the risk model used. If the BIP is included in the risk model, only the assumption of no interaction between the BIP and the treatment or the candidate surrogate is needed. However, if the BIP is not included in the risk model, the assumption that the clinical outcome is independent of the BIP given the candidate surrogate is needed. Although not a requirement for identification of the risk estimands, it has been found in most simulation studies that a correlation between the BIP and $$S(1)$$ of greater than 0.7 is needed for unbiased estimation in finite samples.
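As a quick check of this rule of thumb (an illustrative snippet, not part of the original vignette; it uses the fakedata example dataset generated later in this document), one can compute the observed correlation between the BIP and the surrogate in the treated arm, where S.obs equals $$S(1)$$:
with(subset(fakedata, Z == 1), cor(BIP, S.obs, use = "complete.obs"))
A value well above 0.7 suggests the BIP is informative enough for the integration step.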
### CPV
Under a CPV augmented design, control recipients who have not had an event by the end of the follow-up period are given the experimental treatment. Then their intermediate response is measured at some time post treatment. This measurement is then used as a direct imputation for the missing $$S(1)$$. This augmentation was developed in the setting of vaccine trials, where the surrogate is an immune response and the outcome is infection. One set of conservative assumptions under which CPV can be used as a direct imputation for $$S(1)$$, given in Wolfson and Gilbert (2010), is:
• Time constancy of the true intermediate response under active treatment, $$S(1)=S_{CPV}$$ almost surely, for placebo recipients that are crossed over at the end of the trial, where $$S_{CPV}$$ is the measurement of the candidate surrogate after crossover treatment of the placebo subjects.
• No events (infections) during the close-out period
### BSM
Gabriel and Gilbert (2014) suggested the baseline augmentation BSM, which is a pre-treatment measurement of the candidate PS, denoted $$S_B$$. The BSM may be a good predictor of $$S(1)$$ without any further assumptions. It can be used in the same way as a BIP. Alternatively, you can use the difference $$S(1) - S_B$$ as the candidate surrogate, further increasing the association with the BSM/BIP. Under the BSM assumption outlined in Gabriel and Gilbert (2014):
• Time constancy of the true intermediate response under control,
then $$S(0)=S_{BSM}$$ almost surely. You do not need this assumption to use a BSM, but if it holds then it induces the CB case, thus the joint and marginal risk estimands are equivalent.
## Estimated Maximum Likelihood
Let $$f(y | \beta, s_1, z)$$ denote the density of $$Y | S(1), Z$$ with parameters $$\beta$$. Further let $$R_i$$ denote the indicator for missingness in $$S_i(1)$$. We proceed to estimate $$\beta$$ by maximizing
$\prod_{i = 1}^n \left\{f(Y_i | \beta, S_i(1), Z_i)\right\}^{R_i} \left\{\int f(Y_i | \beta, s, Z_i) \, d \hat{F}_{S(1) | W}(s | W_i)\right\}^{1 - R_i}$
with respect to $$\beta$$.
This procedure is called estimated maximum likelihood (EML) and was developed in Pepe and Fleming (1991). The key idea is that we are averaging the likelihood contributions for subjects missing $$S(1)$$ with respect to the estimated distribution of $$S(1) | W$$. A BIP $$W$$ that is strongly associated with $$S(1)$$ is needed for adequate performance.
Closed-form inference is not available for EML estimates; thus we recommend use of the bootstrap for estimation of standard errors. EML was suggested as an approach to principal surrogate evaluation by Gilbert and Hudgens (2008) and Huang and Gilbert (2011).
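To make the EML idea concrete, below is a minimal, self-contained R sketch (not the pseval implementation; the data frame dat with columns Z, Y, S.1, W and the helper eml_loglik are illustrative), assuming a logistic risk model for a binary outcome and a normal linear integration model for $$S(1) | W$$:
eml_loglik <- function(beta, dat, n.draws = 200) {
  # Step 1: integration model for S(1) | W, fit in the treated arm where S.1 is observed
  fit.int <- lm(S.1 ~ W, data = subset(dat, Z == 1))
  mu <- predict(fit.int, newdata = dat)
  sig <- summary(fit.int)$sigma
  # Risk model: logistic in S(1), Z, and their interaction
  risk <- function(s1, z) plogis(beta[1] + beta[2] * s1 + beta[3] * z + beta[4] * s1 * z)
  ll <- numeric(nrow(dat))
  for (i in seq_len(nrow(dat))) {
    if (!is.na(dat$S.1[i])) {
      # S(1) observed: ordinary likelihood contribution
      ll[i] <- dbinom(dat$Y[i], 1, risk(dat$S.1[i], dat$Z[i]), log = TRUE)
    } else {
      # S(1) missing: average the contribution over draws from the estimated F_{S(1)|W}
      s.draws <- rnorm(n.draws, mean = mu[i], sd = sig)
      ll[i] <- log(mean(dbinom(dat$Y[i], 1, risk(s.draws, dat$Z[i]))))
    }
  }
  sum(ll)
}
# Maximize the estimated likelihood, e.g. (mydata is a hypothetical data frame with Z, Y, S.1, W):
# optim(c(0, 0, 0, 0), eml_loglik, dat = mydata, method = "BFGS", control = list(fnscale = -1))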
## Pseudoscore
Huang, Gilbert, and Wolfson (2013) suggest a different estimation procedure that does have a closed form variance estimator. Instead of numerically optimizing the estimated likelihood, the pseudoscore approach iteratively finds the solution to weighted versions of the score equations. Pseudoscore estimates were also suggested in Wolfson (2009) and implemented for several special cases in Huang, Gilbert, and Wolfson (2013). We have implemented here only one of the special cases: categorical $$BIP$$ and binary $$Y$$ ($$S$$ may be continuous or categorical). In addition to having closed form variance estimators, it has been argued that the pseudo-score estimators are more efficient than the EML estimators. The closed form variance estimates are not yet implemented.
## Package features
Typically, users would have to code up the likelihood, integration models, and perform the optimization themselves. This is beyond the reach of many researchers who desire to use these methods. The goal of pseval is to correctly implement these methods with a flexible and user-friendly interface, enabling researchers to implement and interpret a wide variety of models.
The pseval package allows users to specify the type of augmented design that is used in their study, specify the form of the risk model along with the distribution of $$Y | S(1)$$, and specify different integration models to estimate the distribution of $$S(1) | W$$. Then the likelihood can be maximized and bootstraps run. Post-estimation summaries are available to display and analyze the treatment efficacy as a function of $$S(1)$$. All of this is implemented with a flexible and familiar interface.
# Package information
## Installation
pseval is an R package aimed at implementing existing methods for surrogate evaluation using a flexible and common interface. Development takes place on the GitHub page, and the current version of the package can be installed as shown below. First you must install the devtools package, if you haven’t already: install.packages("devtools").
devtools::install_github("sachsmc/pseval")
or it can be installed from CRAN:
install.packages("pseval")
## Usage
Here we will walk through some basic analyses from the point of view of a new R user. Along the way we will highlight the main features of pseval. pseval supports both binary outcomes and time-to-event, thus we will also need to load the survival package.
library(pseval)
library(survival)
## Warning: package 'survival' was built under R version 3.5.1
First let’s create an example dataset. The package provides the function generate_example_data which takes a single argument: the sample size.
set.seed(1492)
fakedata <- generate_example_data(n = 800)
head(fakedata)
Z BIP CPV BSM S.obs time.obs event.obs Y.obs S.obs.cat BIP.cat
0 0.3353179 1.4851399 0.4596161 0.3526810 0.3301972 0 0 (-0.198,0.503] (0.0574,0.766]
0 1.4536863 2.6379400 1.3959104 1.4668891 0.1195136 1 0 (1.36, Inf] (0.766, Inf]
0 -0.7243934 NA -0.6272350 -0.7319076 0.2631222 1 1 (-Inf,-0.198] (-Inf,-0.678]
0 -0.1183592 0.9421504 0.0773831 -0.0183341 0.1373458 1 0 (-0.198,0.503] (-0.678,0.0574]
0 -0.2352566 NA -0.1497145 -0.1847024 0.8543703 1 1 (-0.198,0.503] (-0.678,0.0574]
0 -0.7782851 0.1159434 -0.6572161 -0.6631371 0.2200481 1 0 (-Inf,-0.198] (-Inf,-0.678]
The example data include a time-to-event outcome, a binary outcome, a surrogate, a BIP, a CPV, a BSM, and a categorical version of the surrogate. The true model for the time-to-event outcome is exponential, with parameters (intercept) = -1, S(1) = 0.0, Z = 0.0, S(1):Z = -0.75. The true model for the binary outcome is logistic, with the same parameter values.
In the above table S.obs.cat and BIP.cat are formed as S.obs.cat <- cut(S.obs, c(-Inf, quantile(c(S.0, S.1), c(.25, .5, .75), na.rm = TRUE), Inf)) and similarly for BIP.cat, which produces the interval labels shown above. Alternatively a user could input arbitrary numeric values to represent different discrete subgroups (e.g., 0s and 1s to denote 2 subgroups).
### The "psdesign" object
We begin by creating a "psdesign" object with the synonymous function. This is the object that combines the raw dataset with information about the study design and the structure of the data. Subsequent analysis will operate on this psdesign object. It is designed to be analogous to the svydesign function in the survey package (https://CRAN.R-project.org/package=survey). The first argument is the data frame where the data are stored. All subsequent arguments describe the mappings from the variable names in the data frame to important variables in the PS analysis, using the same notation as above. An optional weights argument describes the sampling weights, if present. Our first analysis will use the binary version of the outcome, with continuous $$S.1$$ and the BIP labeled $$BIP$$. The object has a print method, so we can inspect the result.
binary.ps <- psdesign(data = fakedata, Z = Z, Y = Y.obs, S = S.obs, BIP = BIP)
binary.ps
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 NA 0.3527 1 0.335
## 2 0 0 NA 1.4669 1 1.454
## 3 0 1 NA -0.7319 1 -0.724
## 4 0 0 NA -0.0183 1 -0.118
## 5 0 1 NA -0.1847 1 -0.235
## 6 0 0 NA -0.6631 1 -0.778
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs
## BIP -> BIP
##
## Integration models:
## None present, see ?add_integration for information on integration models.
##
## Risk models:
## None present, see ?add_riskmodel for information on risk models.
## No estimates present, see ?ps_estimate.
## No bootstraps present, see ?ps_bootstrap.
The printout displays a brief description of the data, including the empirical treatment efficacy estimate, the variables used in the analysis and their corresponding variables in the original dataset. Finally the printout invites the user to see the help page for add_integration, in order to add an integration model to the psdesign object, the next step in the analysis.
Missing values in the $$S$$ variable are allowed. Note that any cases where $$S(1)$$ is missing will be integrated over in the likelihood or score equations. Thus any cases that experienced an event prior to the time $$\tau$$ when the surrogate was measured should be excluded from the dataset. The equal individual risk assumption allows us to make causal inferences even after excluding such cases.
psdesign easily accommodates case-control or case-cohort sampling. In this case, the surrogate $$S$$ is only measured on a subset of the data, inducing missingness in $$S$$ by design. Let’s modify the fake dataset to see how it works. We’re going to sample all of the cases, and 20% of the controls for measurement of $$S$$.
fakedata.cc <- fakedata
missdex <- sample((1:nrow(fakedata.cc))[fakedata.cc$Y.obs == 0], size = floor(sum(fakedata.cc$Y.obs == 0) * .8))
fakedata.cc[missdex, ]$S.obs <- NA
fakedata.cc$weights <- ifelse(fakedata.cc$Y.obs == 1, 1, .2)
Now we can create the "psdesign" object, using the entire dataset (including those missing S.obs) and passing the weights to the weights field.
binary.cc <- psdesign(data = fakedata.cc, Z = Z, Y = Y.obs, S = S.obs, BIP = BIP, weights = weights)
The other augmentation types can be defined by mapping variables to the names BIP, CPV, and/or BSM. The augmentations are handled as described in the previous section: CPV is used as a direct imputation for $$S(1)$$, and BSM is used as a direct imputation for $$S(0)$$. BIPs and BSMs are made available in the augmented dataset for use in the integration models, which we describe in the next subsection.
For survival outcomes, a key assumption is that the potential surrogate is measured at a fixed time $$\tau$$ after entry into the study. Any subjects who have a clinical outcome prior to $$\tau$$ will be removed from the analysis, with a warning. If tau is not specified in the psdesign object, then it is assumed to be 0. Survival outcomes are specified by mapping Y to a Surv object, which requires the survival package:
surv.ps <- psdesign(data = fakedata, Z = Z, Y = Surv(time.obs, event.obs), S = S.obs, BIP = BIP, CPV = CPV, BSM = BSM)
## Warning in psdesign(data = fakedata, Z = Z, Y = Surv(time.obs,
## event.obs), : tau missing in psdesign: assuming that the surrogate S was
## measured at time 0.
### Integration models
The EML procedure requires an estimate of $$F_{S(1) | W}$$, and we refer to this as the integration model. Details are available in the help page for add_integration, shown below. Several integration models are implemented, including a parametric model that uses a formula interface to define a regression model, a semiparametric model that specifies a location and a scale model and is robust to the specification of the distribution, and a non-parametric model that uses empirical conditional probability estimates for discrete $$W$$ and $$S(1)$$.
?add_integration
add_integration R Documentation
## Integration models
### Description
Add integration model to a psdesign object
### Usage
add_integration(psdesign, integration)
### Arguments
psdesign A psdesign object
integration An integration object
### Details
This is a list of the available integration models. The fundamental problem in surrogate evaluation is that there are unobserved values of the counterfactual surrogate responses S(1). In the estimated maximum likelihood framework, for subjects missing the S(1) values, we use an auxiliary pre-treatment variable or set of variables W that is observed for every subject to estimate the distribution of S(1) | W. Typically, this W is a BIP. Then for each missing S(1), we integrate likelihood contributions over each non-missing S(1) given their value of W, and average over the contributions.
• integrate_parametric This is a parametric integration model that fits a linear model for the mean of S(1) | W and assumes a Gaussian distribution.
• integrate_bivnorm This is another parametric integration model that assumes that S(1) and W are jointly normally distributed. The user must specify their mean, variances and correlation.
• integrate_nonparametric This is a non-parametric integration model that is only valid for categorical S(1) and W. It uses the observed proportions to estimate the joint distribution of S(1), W.
• integrate_semiparametric This is a semi-parametric model that uses the semi-parametric location scale model of Heagerty and Pepe (1999).
Models are specified for the location of S(1) | W and the scale of S(1) | W. Then integrations are drawn from the empirical distribution of the residuals from that model, which are then transformed to the appropriate location and scale.
### Examples
test <- psdesign(generate_example_data(n = 100), Z = Z, Y = Y.obs, S = S.obs, BIP = BIP)
add_integration(test, integrate_parametric(S.1 ~ BIP))
test + integrate_parametric(S.1 ~ BIP) # same as above
For this first example, let’s use the parametric integration model. We specify the mean model for $$S(1) | W$$ as a formula. The predictor is generally a function of the BIP and the BSM, if available. We can add the integration model directly to the psdesign object and inspect the results. Note that in the formula, we refer to the variable names in the augmented dataset.
binary.ps <- binary.ps + integrate_parametric(S.1 ~ BIP)
binary.ps
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 NA 0.3527 1 0.335
## 2 0 0 NA 1.4669 1 1.454
## 3 0 1 NA -0.7319 1 -0.724
## 4 0 0 NA -0.0183 1 -0.118
## 5 0 1 NA -0.1847 1 -0.235
## 6 0 0 NA -0.6631 1 -0.778
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs
## BIP -> BIP
##
## Integration models:
## integration model for S.1 :
## integrate_parametric(formula = S.1 ~ BIP )
##
## Risk models:
## None present, see ?add_riskmodel for information on risk models.
## No estimates present, see ?ps_estimate.
## No bootstraps present, see ?ps_bootstrap.
We can add multiple integration models to a psdesign object, say we want a model for $$S(0) | W$$:
binary.ps + integrate_parametric(S.0 ~ BIP)
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 NA 0.3527 1 0.335
## 2 0 0 NA 1.4669 1 1.454
## 3 0 1 NA -0.7319 1 -0.724
## 4 0 0 NA -0.0183 1 -0.118
## 5 0 1 NA -0.1847 1 -0.235
## 6 0 0 NA -0.6631 1 -0.778
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs
## BIP -> BIP
##
## Integration models:
## integration model for S.1 :
## integrate_parametric(formula = S.1 ~ BIP )
## integration model for S.0 :
## integrate_parametric(formula = S.0 ~ BIP )
##
## Risk models:
## None present, see ?add_riskmodel for information on risk models.
## No estimates present, see ?ps_estimate.
## No bootstraps present, see ?ps_bootstrap.
In a future version of the package, we will allow for estimation of the joint risk estimands that depend on both $$S(0)$$ and $$S(1)$$. We can also use splines, other transformations, and other variables in the formula:
library(splines)
binary.ps + integrate_parametric(S.1 ~ BIP^2)
binary.ps + integrate_parametric(S.1 ~ bs(BIP, df = 3))
binary.ps + integrate_parametric(S.1 ~ BIP + BSM + BSM^2)
These are shown as examples; we will proceed with the simple linear model for integration. The other integration models are called integrate_bivnorm, integrate_nonparametric, and integrate_semiparametric. See their help files for details on the models and their specification. The next step is to define the risk model. We will proceed with the simple parametric integration model.
### Risk models and likelihoods
The risk model is the specification of the distribution for the outcome $$Y$$ given $$S(1)$$ and $$Z$$. We accommodate a variety of flexible specifications for this model, for binary, time-to-event, and count outcomes.
We have implemented exponential and Weibull survival models, and a flexible specification for binary models, allowing for standard or custom link functions. See the help file for add_riskmodel for more details.
?add_riskmodel
add_riskmodel R Documentation
## Add risk model to a psdesign object
### Description
Add risk model to a psdesign object
### Usage
add_riskmodel(psdesign, riskmodel)
### Arguments
psdesign A psdesign object
riskmodel A risk model object, from the list above
### Details
The risk model component specifies the likelihood for the data. This involves specifying the distribution of the outcome variable, whether it is binary or time-to-event, and specifying how the surrogate S(1) and the treatment Z interact and affect the outcome. We use the formula notation to be consistent with other regression type models in R. Below is a list of available risk models.
• risk_binary This is a generic risk model for binary outcomes. The user can specify the formula and link function using either risk.logit for the logistic link, or risk.probit for the probit link. Custom link functions may also be specified, which take a single numeric vector argument and return a vector of corresponding probabilities.
• risk_weibull This is a parameterization of the Weibull model for time-to-event outcomes that is consistent with that of rweibull. The user specifies the formula for the linear predictor of the scale parameter.
• risk_exponential This is a simple exponential model for a time-to-event outcome.
• risk_poisson This is a Poisson model for count outcomes. It allows for offsets in the formula.
• risk_continuous This is a Gaussian model for continuous outcomes. It assumes that larger values of the outcome are harmful (e.g. blood pressure).
### Examples
test <- psdesign(generate_example_data(n = 100), Z = Z, Y = Y.obs, S = S.obs, BIP = BIP) + integrate_parametric(S.1 ~ BIP)
add_riskmodel(test, risk_binary())
test + risk_binary() # same as above
Let’s add a simple binary risk model using the logit link. The argument D specifies the number of samples to use for the simulated annealing, also known as empirical integration, in the EML procedure. In general, D should be set to something reasonably large, like 2 or 3 times the sample size.
binary.ps <- binary.ps + risk_binary(model = Y ~ S.1 * Z, D = 5, risk = risk.logit)
binary.ps
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 NA 0.3527 1 0.335
## 2 0 0 NA 1.4669 1 1.454
## 3 0 1 NA -0.7319 1 -0.724
## 4 0 0 NA -0.0183 1 -0.118
## 5 0 1 NA -0.1847 1 -0.235
## 6 0 0 NA -0.6631 1 -0.778
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs
## BIP -> BIP
##
## Integration models:
## integration model for S.1 :
## integrate_parametric(formula = S.1 ~ BIP )
##
## Risk models:
## risk_binary(model = Y ~ S.1 * Z, D = 5, risk = risk.logit )
##
## No estimates present, see ?ps_estimate.
## No bootstraps present, see ?ps_bootstrap.
### Estimation and Bootstrap
We estimate the parameters and bootstrap using the same type of syntax. We can add a "ps_estimate" object, which takes optional arguments start for starting values, and other arguments that are passed to the optim function. The method argument determines the optimization method; we have found that “BFGS” works well in these types of problems and it is the default. Use "pseudo-score" as the method argument for pseudo-score estimation for binary risk models with categorical BIPs.
The ps_bootstrap function takes the additional arguments n.boots for the number of bootstrap replicates, and progress.bar, a logical that displays a progress bar in the R console if true. It is helpful to pass the estimates as starting values in the bootstrap resampling. With estimates and bootstrap replicates present, printing the psdesign object displays additional information. In real examples you should use more than 10 bootstrap replicates.
binary.est <- binary.ps + ps_estimate(method = "BFGS")
binary.boot <- binary.est + ps_bootstrap(n.boots = 10, progress.bar = FALSE, start = binary.est$estimates$par, method = "BFGS")
binary.boot
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 NA 0.3527 1 0.335
## 2 0 0 NA 1.4669 1 1.454
## 3 0 1 NA -0.7319 1 -0.724
## 4 0 0 NA -0.0183 1 -0.118
## 5 0 1 NA -0.1847 1 -0.235
## 6 0 0 NA -0.6631 1 -0.778
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs
## BIP -> BIP
##
## Integration models:
## integration model for S.1 :
## integrate_parametric(formula = S.1 ~ BIP )
##
## Risk models:
## risk_binary(model = Y ~ S.1 * Z, D = 5, risk = risk.logit )
##
## Estimated parameters:
## (Intercept) S.1 Z S.1:Z
## -0.921 -0.028 -0.219 -1.133
## Convergence: TRUE
##
## Bootstrap replicates:
## Estimate boot.se lower.CL.2.5. upper.CL.97.5. p.value
## (Intercept) -0.921 0.103 -1.043 -0.757 4.92e-19
## S.1 -0.028 0.099 -0.178 0.124 7.78e-01
## Z -0.219 0.192 -0.453 0.112 2.55e-01
## S.1:Z -1.133 0.169 -1.582 -1.064 2.02e-11
##
## Out of 10 bootstraps, 10 converged ( 100 %)
##
## Test for wide effect modification on 1 degree of freedom. 2-sided p value < .0001
### Do it all at once
The next code chunk shows how the model can be defined and estimated all at once.
binary.est <- psdesign(data = fakedata, Z = Z, Y = Y.obs, S = S.obs, BIP = BIP) + integrate_parametric(S.1 ~ BIP) + risk_binary(model = Y ~ S.1 * Z, D = 50, risk = risk.logit) + ps_estimate(method = "BFGS")
### Plots and summaries
We provide summary and plotting methods for the psdesign object. If bootstrap replicates are present, the summary method does a test for wide effect modification. Under the parametric risk models implemented in this package, the test for wide effect modification is equivalent to a test that the $$S(1):Z$$ coefficient is different from 0. This is implemented using a Wald test with the bootstrap estimate of the variance.
Another way to assess wide effect modification is to compute the standardized total gain (STG) (Gabriel, Sachs, and Gilbert 2015). This is implemented in the calc_STG function. The standardized total gain can be interpreted as the area sandwiched between the risk difference curve and the horizontal line at the marginal risk difference. It is a measure of the spread of the distribution of the risk difference, and is a less parametric way to test for wide effect modification. The calc_STG function computes the STG at the estimated parameters and at the bootstrap samples, if present. The function prints the results and invisibly returns a list containing the observed STG and the bootstrapped STGs.
calc_STG(binary.boot, progress.bar = FALSE)
The summary method also computes the marginal treatment efficacy marginalized over $$S(1)$$ and compares it to the average treatment efficacy conditional on $$S(1)$$. This is an assessment of model fit. A warning will be given if the two estimates are dramatically different.
These estimates are presented in the summary along with the empirical marginal treatment efficacy.
smary <- summary(binary.boot)
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 NA 0.3527 1 0.335
## 2 0 0 NA 1.4669 1 1.454
## 3 0 1 NA -0.7319 1 -0.724
## 4 0 0 NA -0.0183 1 -0.118
## 5 0 1 NA -0.1847 1 -0.235
## 6 0 0 NA -0.6631 1 -0.778
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs
## BIP -> BIP
##
## Integration models:
## integration model for S.1 :
## integrate_parametric(formula = S.1 ~ BIP )
##
## Risk models:
## risk_binary(model = Y ~ S.1 * Z, D = 5, risk = risk.logit )
##
## Estimated parameters:
## (Intercept) S.1 Z S.1:Z
## -0.921 -0.028 -0.219 -1.133
## Convergence: TRUE
##
## Bootstrap replicates:
## Estimate boot.se lower.CL.2.5. upper.CL.97.5. p.value
## (Intercept) -0.921 0.103 -1.043 -0.757 4.92e-19
## S.1 -0.028 0.099 -0.178 0.124 7.78e-01
## Z -0.219 0.192 -0.453 0.112 2.55e-01
## S.1:Z -1.133 0.169 -1.582 -1.064 2.02e-11
##
## Out of 10 bootstraps, 10 converged ( 100 %)
##
## Test for wide effect modification on 1 degree of freedom. 2-sided p value < .0001
##
## Treatment Efficacy:
## empirical marginal model
## 0.526 0.526 0.534
## Model-based average TE is 1.4 % different from the empirical and 1.4 % different from the marginal.
The calc_risk function computes the risk in each treatment arm, and contrasts of the risks. By default it computes the treatment efficacy, but there are other contrast functions available. The contrast function is a function that takes 2 inputs, the $$risk_0$$ and $$risk_1$$, and returns some one dimensional function of those two inputs. It must be vectorized. Some built-in functions are “TE” for treatment efficacy $$= 1 - risk_1(s)/risk_0(s)$$, “RR” for relative risk $$= risk_1(s)/risk_0(s)$$, “logRR” for log of the relative risk, and “RD” for the risk difference $$= risk_1(s) - risk_0(s)$$. You can pass the name of the function, or the function itself, to calc_risk. See ?calc_risk for more information about contrast functions.
Other arguments of the calc_risk function include t, the time at which to calculate the risk for time-to-event outcomes, n.samps, which is the number of samples over the range of S.1 at which the risk will be calculated, and CI.type, which can be set to "pointwise" for pointwise confidence intervals or "band" for a simultaneous confidence band. sig.level is the significance level for the bootstrap confidence intervals. If the outcome is time-to-event and $$t$$ is not present, then it will use the restricted mean survival time.
head(calc_risk(binary.boot, contrast = "TE", n.samps = 20), 3)
S.1 Y R0 R1 Y.boot.se Y.upper.CL.0.95 Y.lower.CL.0.95 R0.boot.se R0.upper.CL.0.95 R0.lower.CL.0.95 R1.boot.se R1.upper.CL.0.95 R1.lower.CL.0.95
V1 -0.4464944 -0.2160029 0.2873305 0.3493948 0.2185294 -0.0128214 -0.4497046 0.0280259 0.3419269 0.2535211 0.0434403 0.4264883 0.3085566
V2 -0.3816933 -0.1586955 0.2869594 0.3324985 0.2029119 0.0329791 -0.3852309 0.0269339 0.3388935 0.2546054 0.0410547 0.4061426 0.2931740
V3 -0.0686187 0.0978651 0.2851703 0.2572621 0.1360893 0.2420804 -0.0864422 0.0220953 0.3244273 0.2598872 0.0296320 0.3133475 0.2255883
head(calc_risk(binary.boot, contrast = function(R0, R1) 1 - R1/R0, n.samps = 20), 3)
S.1 Y R0 R1 Y.boot.se Y.upper.CL.0.95 Y.lower.CL.0.95 R0.boot.se R0.upper.CL.0.95 R0.lower.CL.0.95 R1.boot.se R1.upper.CL.0.95 R1.lower.CL.0.95
V1 -1.0223250 -0.7603983 0.2906411 0.5116441 0.3685964 -0.4239773 -1.0782111 0.0385470 0.3694287 0.2440199 0.0596076 0.6101497 0.4608493
V2 -0.3397777 -0.1223212 0.2867195 0.3217913 0.1930885 0.0622400 -0.3438526 0.0262416 0.3369384 0.2553085 0.0394974 0.3931473 0.2834669
V3 -0.1976601 -0.0034737 0.2859069 0.2869001 0.1616824 0.1587782 -0.2063205 0.0239886 0.3303513 0.2577016 0.0342345 0.3502916 0.2520441
It is easy to plot the risk estimates. By default, the plot method displays the TE contrast, but this can be changed using the same syntax as in calc_risk.
plot(binary.boot, contrast = "TE", lwd = 2)
abline(h = smary$TE.estimates[2], lty = 3)
expit <- function(x) exp(x)/(1 + exp(x))
trueTE <- function(s){
r0 <- expit(-1 - 0 * s)
r1 <- expit(-1 - .75 * s)
1 - r1/r0
}
rug(binary.boot$augdata$S.1)
curve(trueTE(x), add = TRUE, col = "red")
legend("bottomright", legend = c("estimated TE", "95\\% CB", "marginal TE", "true TE"),
col = c("black", "black", "black", "red"), lty = c(1, 2, 3, 1), lwd = c(2, 2, 1, 1))
By default, plots of psdesign objects with bootstrap samples will display simultaneous confidence bands for the curve. These bands $$L_\alpha$$ satisfy
$P\left\{\sup_{s \in B} | \hat{VE}(s) - VE(s) | \leq L_\alpha \right\} \geq 1 - \alpha,$
for confidence level $$1 - \alpha$$. The alternative is to use pointwise confidence intervals, with the option CI.type = "pointwise". These intervals satisfy
$P\left\{\hat{L}_\alpha \leq VE(s) \leq \hat{U}_\alpha\right\} \geq 1 - \alpha, \mbox{ for all } s.$
Different summary measures are available for plotting. The options are “TE” for treatment efficacy = $$1 - risk_1(s)/risk_0(s)$$, “RR” for relative risk = $$risk_1(s)/risk_0(s)$$, “logRR” for log of the relative risk, “risk” for the risk in each treatment arm, and “RD” for the risk difference = $$risk_1(s) - risk_0(s)$$. We can also transform using the log option of plot.
plot(binary.boot, contrast = "logRR", lwd = 2, col = c("black", "grey75", "grey75"))
plot(binary.boot, contrast = "RR", log = "y", lwd = 2, col = c("black", "grey75", "grey75"))
plot(binary.boot, contrast = "RD", lwd = 2, col = c("black", "grey75", "grey75"))
plot(binary.boot, contrast = "risk", lwd = 2, lty = c(1, 0, 0, 2, 0, 0))
legend("topright", legend = c("R0", "R1"), lty = c(1, 2), lwd = 2)
The calc_risk function is the workhorse that creates the plots. You can call this function directly to obtain estimates, standard errors, and confidence intervals for the estimated risk in each treatment arm and transformations of the risk like TE. The parameter n.samps determines the number of points at which to calculate the risk. The points are evenly spaced over the range of S.1. Use this function to compute other summaries, make plots using ggplot2 or lattice and more.
te.est <- calc_risk(binary.boot, CI.type = "pointwise", n.samps = 200)
head(te.est, 3)
S.1 Y R0 R1 Y.boot.se Y.lower.CL.2.5 Y.upper.CL.97.5 R0.boot.se R0.lower.CL.2.5 R0.upper.CL.97.5 R1.boot.se R1.lower.CL.2.5 R1.upper.CL.97.5
V1 -1.427854 -1.1383231 0.2929860 0.6264988 0.4721713 -2.244543 -0.7098834 0.0464966 0.2313938 0.3721349 0.0613676 0.5819195 0.7724227
V2 -1.149462 -0.8820479 0.2913751 0.5483818 0.4017460 -1.821322 -0.5306331 0.0410054 0.2375724 0.3606038 0.0611316 0.5038805 0.6929037
V3 -1.021737 -0.7598320 0.2906377 0.5114735 0.3684418 -1.615792 -0.4408812 0.0385357 0.2404458 0.3553656 0.0595985 0.4677770 0.6515752
# Summary and Conclusion
We have implemented the core methods for principal surrogate evaluation in our pseval package. Our aim was to create a flexible and consistent user interface that allows for the estimation of a wide variety of statistical models in this framework. There has been some other work in this area. The Surrogate package implements the core methods for the evaluation of trial-level surrogates using a meta-analytic framework. It also has a wide variety of models, each implemented in a different function each with a long list of parameters (Elst et al. 2016).
Our package uses the + sign to combine function calls into a single object. This is called “overloading the + operator” and is most famously known from the ggplot2 package (Wickham 2009). Conceptually, this was appealing to us because it allows users to build up analysis objects starting from the design, and ending with the estimation. The distinct analysis concepts of the design, risk model specification, integration model, and estimation/bootstrap approaches are separated into distinct function calls, each with a limited number of parameters. This makes it easier for users to keep track of their models, makes it easier to understand the methods involved, and allows for the specification of a wide variety of models by mixing and matching the function calls. This framework will also make it easier to maintain the codebase, and to extend it in the future as the methods evolve. Our package is useful for novice and expert R users alike, and implements an important set of statistical methods for the first time.
# Appendix
### Plot both types of CI
plot(binary.boot, contrast = "TE", lwd = 2, CI.type = "band")
sbs <- calc_risk(binary.boot, CI.type = "pointwise", n.samps = 200)
lines(Y.lower.CL.2.5 ~ S.1, data = sbs, lty = 3, lwd = 2)
lines(Y.upper.CL.97.5 ~ S.1, data = sbs, lty = 3, lwd = 2)
legend("bottomright", lwd = 2, lty = 1:3,
legend = c("estimate", "simultaneous CI", "pointwise CI"))
### Plot with ggplot2
library(ggplot2)
## Warning: package 'ggplot2' was built under R version 3.5.2
TE.est <- calc_risk(binary.boot, n.samps = 200)
ggplot(TE.est,
aes(x = S.1, y = Y, ymin = Y.lower.CL.0.95, ymax = Y.upper.CL.0.95)) +
geom_line() + geom_ribbon(alpha = .2) + ylab(attr(TE.est, "Y.function"))
### Case-control design
cc.fit <- binary.cc + integrate_parametric(S.1 ~ BIP) +
risk_binary(D = 10) + ps_estimate()
cc.fit
### Survival outcome
surv.fit <- psdesign(fakedata, Z = Z, Y = Surv(time.obs, event.obs),
S = S.obs, BIP = BIP, CPV = CPV) +
integrate_semiparametric(formula.location = S.1 ~ BIP, formula.scale = S.1 ~ 1) +
risk_exponential(D = 10) + ps_estimate(method = "BFGS") + ps_bootstrap(n.boots = 20)
surv.fit
plot(surv.fit)
### Continuous outcome
fakedata$Y.cont <- log(fakedata$time.obs + 0.01)
cont.fit <- psdesign(fakedata, Z = Z, Y = Y.cont,
S = S.obs, BIP = BIP, CPV = CPV) +
integrate_semiparametric(formula.location = S.1 ~ BIP, formula.scale = S.1 ~ 1) +
risk_continuous(D = 10) + ps_estimate(method = "BFGS") #+ ps_bootstrap(n.boots = 20)
cont.fit
## Augmented data frame: 800 obs. by 7 variables.
## Z Y S.1 S.0 cdfweights BIP CPV
## 1 0 -1.078 NA 0.3527 1 0.335 1.485
## 2 0 -2.044 NA 1.4669 1 1.454 2.638
## 3 0 -1.298 NA -0.7319 1 -0.724 NA
## 4 0 -1.915 NA -0.0183 1 -0.118 0.942
## 5 0 -0.146 NA -0.1847 1 -0.235 NA
## 6 0 -1.469 NA -0.6631 1 -0.778 0.116
##
## Empirical TE: -0.688
##
## Mapped variables:
## Z -> Z
## Y -> Y.cont
## S -> S.obs
## BIP -> BIP
## CPV -> CPV
##
## Integration models:
## integration model for S.1 :
## integrate_semiparametric(formula.location = S.1 ~ BIP, formula.scale = S.1 ~ 1 )
##
## Risk models:
## risk_continuous(D = 10 )
##
## Estimated parameters:
## sigma (Intercept) S.1 Z S.1:Z
## 0.0430 -1.4099 -0.0535 -0.1022 -0.8852
## Convergence: TRUE
##
## No bootstraps present, see ?ps_bootstrap.
plot(cont.fit, contrast = "risk")
### Categorical S
S.obs.cat and BIP.cat are factors:
with(fakedata, table(S.obs.cat, BIP.cat))
S.obs.cat/BIP.cat (-Inf,-0.678] (-0.678,0.0574] (0.0574,0.766] (0.766, Inf]
(-Inf,-0.198] 135 65 0 0
(-0.198,0.503] 64 63 73 0
(0.503,1.36] 1 72 72 55
(1.36, Inf] 0 0 55 145
cat.fit <- psdesign(fakedata, Z = Z, Y = Y.obs,
S = S.obs.cat, BIP = BIP.cat) +
integrate_nonparametric(formula = S.1 ~ BIP) +
risk_binary(Y ~ S.1 * Z, D = 10, risk = risk.probit) + ps_estimate(method = "BFGS")
cat.fit
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 <NA> (-0.198,0.503] 1 (0.0574,0.766]
## 2 0 0 <NA> (1.36, Inf] 1 (0.766, Inf]
## 3 0 1 <NA> (-Inf,-0.198] 1 (-Inf,-0.678]
## 4 0 0 <NA> (-0.198,0.503] 1 (-0.678,0.0574]
## 5 0 1 <NA> (-0.198,0.503] 1 (-0.678,0.0574]
## 6 0 0 <NA> (-Inf,-0.198] 1 (-Inf,-0.678]
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs.cat
## BIP -> BIP.cat
##
## Integration models:
## integration model for S.1 :
## integrate_nonparametric(formula = S.1 ~ BIP )
##
## Risk models:
## risk_binary(model = Y ~ S.1 * Z, D = 10, risk = risk.probit )
##
## Estimated parameters:
## (Intercept) S.1(-0.198,0.503] S.1(0.503,1.36]
## -0.427 -0.194 -0.341
## S.1(1.36, Inf] Z S.1(-0.198,0.503]:Z
## -0.101 0.216 -0.461
## S.1(0.503,1.36]:Z S.1(1.36, Inf]:Z
## -0.650 -1.573
## Convergence: TRUE
##
## No bootstraps present, see ?ps_bootstrap.
plot(cat.fit)
### Pseudo-score
Categorical W allows for estimation of the model using the pseudo-score method for binary outcomes. $$S$$ may be continuous or categorical:
cat.fit.ps <- psdesign(fakedata, Z = Z, Y = Y.obs,
S = S.obs, BIP = BIP.cat) +
integrate_nonparametric(formula = S.1 ~ BIP) +
risk_binary(Y ~ S.1 * Z, D = 10, risk = risk.logit) + ps_estimate(method = "pseudo-score") +
ps_bootstrap(n.boots = 20, method = "pseudo-score")
## Bootstrapping 20 replicates:
## ===========================================================================
summary(cat.fit.ps)
## Augmented data frame: 800 obs. by 6 variables.
## Z Y S.1 S.0 cdfweights BIP
## 1 0 0 NA 0.3527 1 (0.0574,0.766]
## 2 0 0 NA 1.4669 1 (0.766, Inf]
## 3 0 1 NA -0.7319 1 (-Inf,-0.678]
## 4 0 0 NA -0.0183 1 (-0.678,0.0574]
## 5 0 1 NA -0.1847 1 (-0.678,0.0574]
## 6 0 0 NA -0.6631 1 (-Inf,-0.678]
##
## Empirical TE: 0.526
##
## Mapped variables:
## Z -> Z
## Y -> Y.obs
## S -> S.obs
## BIP -> BIP.cat
##
## Integration models:
## integration model for S.1 :
## integrate_nonparametric(formula = S.1 ~ BIP )
##
## Risk models:
## risk_binary(model = Y ~ S.1 * Z, D = 10, risk = risk.logit )
##
## Estimated parameters:
## (Intercept) S.1 Z S.1:Z
## -0.9341 -0.0143 -0.2058 -1.1462
## Convergence: TRUE
##
## Bootstrap replicates:
## Estimate boot.se lower.CL.2.5. upper.CL.97.5. p.value
## (Intercept) -0.9341 0.164 -1.113 -0.587 1.17e-08
## S.1 -0.0143 0.126 -0.242 0.203 9.09e-01
## Z -0.2058 0.245 -0.667 0.122 4.00e-01
## S.1:Z -1.1462 0.228 -1.509 -0.737 4.91e-07
##
## Out of 20 bootstraps, 20 converged ( 100 %)
##
## Test for wide effect modification on 1 degree of freedom. 2-sided p value < .0001
##
## Treatment Efficacy:
## empirical marginal model
## 0.526 0.526 0.531
## Model-based average TE is 1.0 % different from the empirical and 1.0 % different from the marginal.
plot(cat.fit.ps)
# References
Elst, Wim Van der, Paul Meyvisch, Ariel Alonso, Hannah M. Ensor, Christopher J. Weir, and Geert Molenberghs. 2016. Surrogate: Evaluation of Surrogate Endpoints in Clinical Trials. https://CRAN.R-project.org/package=Surrogate.
Follmann, D. 2006. “Augmented Designs to Assess Immune Response in Vaccine Trials.” Biometrics 62 (4):1161–9.
Frangakis, CE, and DB Rubin. 2002. “Principal Stratification in Causal Inference.” Biometrics 58 (1):21–29.
Gabriel, Erin E, and Dean Follmann. 2015. “Augmented Trial Designs for Evaluation of Principal Surrogates.” Biostatistics 0 (0):1–25.
Gabriel, Erin E., and Peter B. Gilbert. 2014. “Evaluating Principal Surrogate Endpoints with Time-to-Event Data Accounting for Time-Varying Treatment Efficacy.” Biostatistics 15 (2):251–65.
Gabriel, Erin E., Michael C. Sachs, and Peter B. Gilbert. 2015. “Comparing and Combining Biomarkers as Principle Surrogates for Time-to-Event Clinical Endpoints.” Statistics in Medicine 34 (3):381–95. https://doi.org/10.1002/sim.6349.
Gilbert, PB, and MG Hudgens. 2008. “Evaluating Candidate Principal Surrogate Endpoints.” Biometrics 64 (4):1146–54.
Huang, Y, and PB Gilbert. 2011. “Comparing Biomarkers as Principal Surrogate Endpoints.” Biometrics.
Huang, Ying, Peter B Gilbert, and Julian Wolfson. 2013. “Design and Estimation for Evaluating Principal Surrogate Markers in Vaccine Trials.” Biometrics 69 (2):301–9.
Pepe, MS, and TR Fleming. 1991. “A Nonparametric Method for Dealing with Mismeasured Covariate Data.” Journal of the American Statistical Association 86 (413):108–13.
Sachs, Michael C., and Erin E. Gabriel. 2016. Pseval: Methods for Evaluating Principal Surrogates of Treatment Response.
Wickham, Hadley. 2009. Ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York.
Wolfson, J. 2009. “Statistical Methods for Identifying Surrogate Endpoints in Vaccine Trials.” Doctor of Philosophy Dissertation, Chair: Gilbert, Peter: University of Washington; Department of Biostatistics.
Wolfson, J, and PB Gilbert. 2010. “Statistical Identifiability and the Surrogate Endpoint Problem, with Application to Vaccine Trials.” Biometrics 66 (4):1153–61. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6786560416221619, "perplexity": 3208.4383240770794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055632.65/warc/CC-MAIN-20210917090202-20210917120202-00503.warc.gz"} |
https://math.stackexchange.com/questions/773742/express-skew-commutative-product-of-whitney-sum-of-vector-bundles-in-tensor-prod | # express skew-commutative product of Whitney sum of vector bundles in tensor products
Let $\xi$ and $\eta$ be vector bundles over a paracompact space $B$ and $\xi\oplus\eta$ be their Whitney sum. Can we write $\Lambda(\xi\oplus\eta)\cong \Lambda(\xi)\otimes \Lambda(\eta)$ as (graded) vector bundles?
For vector spaces $V,W$, we have $\Lambda(V\oplus W)\cong \Lambda(V)\otimes \Lambda(W)$ as graded vector spaces. But I am confused about the vector bundle case...
• You just transplant what you know about vector spaces to the trivializations of your bundles, so you're done. :D – Ted Shifrin Apr 29 '14 at 3:56
• In Milnor's book (whose notation you seem to be using), on chap. 3, p. 31, he answers your question: how to transfer linear algebra constructions to vector bundles. It is a bit abstract, but very clear and complete, as is usual with Milnor. – Gil Bor May 1 '14 at 4:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598910808563232, "perplexity": 440.55133254571365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573080.8/warc/CC-MAIN-20190917141045-20190917163045-00079.warc.gz"} |
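To spell out what the comments suggest (my own sketch, not part of the original thread), the point is that the vector-space isomorphism is natural, so it is compatible with transition functions and glues over a trivializing cover:
$$\Phi_{V,W}\colon \Lambda(V\oplus W)\xrightarrow{\;\cong\;}\Lambda(V)\otimes\Lambda(W),\qquad (v_1\wedge\cdots\wedge v_p)\wedge(w_1\wedge\cdots\wedge w_q)\longmapsto (v_1\wedge\cdots\wedge v_p)\otimes(w_1\wedge\cdots\wedge w_q),$$
$$\Phi_{V',W'}\circ\Lambda(f\oplus g)=(\Lambda f\otimes\Lambda g)\circ\Phi_{V,W}\quad\text{for all linear maps } f\colon V\to V',\ g\colon W\to W'.$$
Applying the naturality square to the transition functions $g_{ij}\oplus h_{ij}$ of $\xi\oplus\eta$ over a common trivializing cover $\{U_i\}$ of $B$ shows that the fibrewise maps $\Phi$ intertwine $\Lambda(g_{ij}\oplus h_{ij})$ with $\Lambda(g_{ij})\otimes\Lambda(h_{ij})$, hence they assemble into an isomorphism of graded vector bundles $\Lambda(\xi\oplus\eta)\cong\Lambda(\xi)\otimes\Lambda(\eta)$.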
http://www.mendeley.com/groups/530631/stellar-clusters/ | # Stellar Clusters
In this group: 50 papers · 4 members
## Group activity
The difference between colour magnitude diagrams obtained at two different epochs is impressive http://gclusters.altervista.org/biblio_2.php?ggc=NGC%207089
http://izanbf.es/tags/math/ | # Math
## Mandelbrot Set Visualization
Probably you know the Mandelbrot set, it's so interesting to... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9844419360160828, "perplexity": 10839.96581686593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999783.49/warc/CC-MAIN-20190625011649-20190625033649-00551.warc.gz"} |
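A minimal sketch (my own illustration, not the author's code) of the kind of escape-time rendering such a visualization is usually based on:

```python
import numpy as np

def mandelbrot(width=80, height=30, max_iter=40):
    """Render the Mandelbrot set as ASCII art using the escape-time algorithm."""
    xs = np.linspace(-2.2, 0.8, width)
    ys = np.linspace(-1.2, 1.2, height)
    chars = " .:-=+*#%@"
    rows = []
    for y in ys:
        row = ""
        for x in xs:
            c = complex(x, y)
            z = 0j
            n = 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            row += chars[(n * (len(chars) - 1)) // max_iter]
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())
```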
https://www.arxiv-vanity.com/papers/1305.5939/ | arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.
The common ancestor process revisited
Sandra Kluth Thiemo Hustedt Ellen Baake
Technische Fakultät, Universität Bielefeld, Box 100131, 33501 Bielefeld, Germany
E-mail: {skluth, thustedt, ebaake}@techfak.uni-bielefeld.de
Abstract
Abstract. We consider the Moran model in continuous time with two types, mutation, and selection. We concentrate on the ancestral line and its stationary type distribution. Building on work by Fearnhead (J. Appl. Prob. 39 (2002), 38-54) and Taylor (Electron. J. Probab. 12 (2007), 808-847), we characterise this distribution via the fixation probability of the offspring of all individuals of favourable type (regardless of the offspring’s types). We concentrate on a finite population and stay with the resulting discrete setting all the way through. This way, we extend previous results and gain new insight into the underlying particle picture.
2000 Mathematics Subject Classification: Primary 92D15; Secondary 60J28.
Key words: Moran model, ancestral process with selection, ancestral line, common ancestor process, fixation probabilities.
1 Introduction
Understanding the interplay of random reproduction, mutation, and selection is a major topic of population genetics research. In line with the historical perspective of evolutionary research, modern approaches aim at tracing back the ancestry of a sample of individuals taken from a present population. Generically, in populations that evolve at constant size over a long time span without recombination, the ancestral lines will eventually coalesce backwards in time into a single line of descent. This ancestral line is of special interest. In particular, its type composition may differ substantially from the distribution at present time. This mirrors the fact that the ancestral line consists of those individuals that are successful in the long run; thus, its type distribution is expected to be biased towards the favourable types.
This article is devoted to the ancestral line in a classical model of population genetics, namely, the Moran model in continuous time with two types, mutation, and selection (i.e., one type is ‘fitter’ than the other). We are particularly interested in the stationary distribution of the types along the ancestral line, to be called the ancestral type distribution. We build on previous work by Fearnhead [9] and Taylor [20]. Fearnhead’s approach is based on the ancestral selection graph, or ASG for short [14, 16]. The ASG is an extension of Kingman’s coalescent [12, 13], which is the central tool to describe the genealogy of a finite sample in the absence of selection. The ASG copes with selection by including so-called virtual branches in addition to the real branches that define the true genealogy. Fearnhead calculates the ancestral type distribution in terms of the coefficients of a series expansion that is related to the number of (‘unfit’) virtual branches.
Taylor uses diffusion theory and a backward-forward construction that relies on a description of the full population. He characterises the ancestral type distribution in terms of the fixation probability of the offspring of all ‘fit’ individuals (regardless of the offspring’s types). This fixation probability is calculated via a boundary value problem.
Both approaches rely strongly on analytical tools; in particular, they employ the diffusion limit (which assumes infinite population size, weak selection and mutation) from the very beginning. The results only have partial interpretations in terms of the graphical representation of the model (i.e., the representation that makes individual lineages and their interactions explicit). The aim of this article is to complement these approaches by starting from the graphical representation for a population of finite size and staying with the resulting discrete setting all the way through, performing the diffusion limit only at the very end. This will result in an extension of the results to arbitrary selection strength, as well as a number of new insights, such as an intuitive explanation of Taylor’s boundary value problem in terms of the particle picture, and an alternative derivation of the ancestral type distribution.
The paper is organised as follows. We start with a short outline of the Moran model (Section 2). In Section 3, we introduce the common ancestor type process and briefly recapitulate Taylor’s and Fearnhead’s approaches. We concentrate on a Moran model of finite size and trace the descendants of the initially ‘fit’ individuals forward in time. Decomposition according to what can happen after the first step gives a difference equation, which turns into Taylor’s diffusion equation in the limit. We solve this difference equation and obtain the fixation probability in the finite-size model in closed form. In Section 5, we derive the coefficients of the ancestral type distribution within the discrete setting. Section 6 summarises and discusses the results.
2 The Moran model with mutation and selection
We consider a haploid population of fixed size $N$ in which each individual is characterised by a type $i \in S = \{0,1\}$. If an individual reproduces, its single offspring inherits the parent’s type and replaces a randomly chosen individual, maybe its own parent. This way the replaced individual dies and the population size remains constant.
Individuals of type $0$ reproduce at rate $1+s_N$, whereas individuals of type $1$ reproduce at rate $1$, $0 \leqslant s_N < \infty$. Accordingly, type-$0$ individuals are termed ‘fit’, type-$1$ individuals are ‘unfit’. In line with a central idea of the ASG, we will decompose reproduction events into neutral and selective ones. Neutral ones occur at rate $1$ and happen to all individuals, whereas selective events occur at rate $s_N$ and are reserved for type-$0$ individuals.
Mutation occurs independently of reproduction. An individual of type $i$ mutates to type $j$ at rate $u_N\nu_j$, $u_N \geqslant 0$, $0 \leqslant \nu_j \leqslant 1$, $\nu_0 + \nu_1 = 1$. This is to be understood in the sense that every individual, regardless of its type, mutates at rate $u_N$ and the new type is $j$ with probability $\nu_j$. Note that this includes the possibility of ‘silent’ mutations, i.e., mutations from type $i$ to type $i$.
The Moran model has a well-known graphical representation as an interacting particle system (cf. Fig. 1). The vertical lines represent the individuals, with time running from top to bottom in the figure. Reproduction events are represented by arrows with the reproducing individual at the base and the offspring at the tip. Mutation events are marked by bullets.
We are now interested in the process $(Z^N_t)_{t\geqslant 0}$, where $Z^N_t$ is the number of individuals of type $0$ at time $t$. When the number of type-$0$ individuals is $k$, it increases by one at rate $\lambda^N_k$ and decreases by one at rate $\mu^N_k$, where
$$\lambda^N_k = \frac{k(N-k)}{N}(1+s_N) + (N-k)u_N\nu_0 \quad\text{and}\quad \mu^N_k = \frac{k(N-k)}{N} + k\,u_N\nu_1. \qquad(1)$$
Thus, $(Z^N_t)_{t\geqslant 0}$ is a birth-death process with birth rates $\lambda^N_k$ and death rates $\mu^N_k$. For $u_N\nu_0 > 0$ and $u_N\nu_1 > 0$ its stationary distribution is $\pi^N_Z$ with
$$\pi^N_Z(k) = C_N \prod_{i=1}^{k} \frac{\lambda^N_{i-1}}{\mu^N_i}, \qquad 0 \leqslant k \leqslant N, \qquad(2)$$
where $C_N$ is a normalising constant (cf. [4, p. 19]). (As usual, an empty product is understood as $1$.)
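For illustration, a small numerical sketch (mine, not from the paper; the parameter values are arbitrary) that evaluates the rates in (1) and the stationary distribution (2):

```python
import numpy as np

def rates(N, s, u, nu0, nu1):
    """Birth rates lambda_k and death rates mu_k, k = 0..N, as in (1)."""
    k = np.arange(N + 1)
    lam = k * (N - k) / N * (1 + s) + (N - k) * u * nu0
    mu = k * (N - k) / N + k * u * nu1
    return lam, mu

def stationary(N, s, u, nu0, nu1):
    """Stationary distribution pi_Z^N via the product formula (2)."""
    lam, mu = rates(N, s, u, nu0, nu1)
    w = np.ones(N + 1)
    for k in range(1, N + 1):
        w[k] = w[k - 1] * lam[k - 1] / mu[k]
    return w / w.sum()

# Illustrative parameters only (not taken from the paper).
pi = stationary(N=100, s=0.02, u=0.01, nu0=0.5, nu1=0.5)
print("mean type-0 frequency:", (np.arange(101) / 100) @ pi)
```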
To arrive at a diffusion, we consider the usual rescaling
$$(X^N_t)_{t\geqslant 0} := \frac{1}{N}\big(Z^N_{Nt}\big)_{t\geqslant 0},$$
and assume that $\lim_{N\to\infty} N u_N = \theta$, $0 \leqslant \theta < \infty$, and $\lim_{N\to\infty} N s_N = \sigma$, $0 \leqslant \sigma < \infty$. As $N \to \infty$, we obtain the well-known diffusion limit
$$(X_t)_{t\geqslant 0} := \lim_{N\to\infty} (X^N_t)_{t\geqslant 0}.$$
Given $x \in [0,1]$ and a sequence $(k_N)_{N\in\mathbb{N}}$ with $k_N \in \{0,\ldots,N\}$ and $\lim_{N\to\infty} k_N/N = x$, the limiting diffusion is characterised by the drift coefficient
$$a(x) = \lim_{N\to\infty}\big(\lambda^N_{k_N} - \mu^N_{k_N}\big) = (1-x)\theta\nu_0 - x\theta\nu_1 + (1-x)x\sigma \qquad(3)$$
and the diffusion coefficient
$$b(x) = \lim_{N\to\infty}\frac{1}{N}\big(\lambda^N_{k_N} + \mu^N_{k_N}\big) = 2x(1-x). \qquad(4)$$
Hence, the infinitesimal generator $A$ of the diffusion is defined by
$$Af(x) = (1-x)x\,\frac{\partial^2}{\partial x^2}f(x) + \big[(1-x)\theta\nu_0 - x\theta\nu_1 + (1-x)x\sigma\big]\frac{\partial}{\partial x}f(x), \qquad f \in C^2([0,1]).$$
The stationary density – known as Wright’s formula – is given by
$$\pi_X(x) = C(1-x)^{\theta\nu_1 - 1}x^{\theta\nu_0 - 1}\exp(\sigma x), \qquad(5)$$
where $C$ is a normalising constant. See [5, Ch. 7, 8] or [8, Ch. 4, 5] for reviews of diffusion processes in population genetics and [11, Ch. 15] for a general survey of diffusion theory.
In contrast to our approach starting from the Moran model, [9] and [20] choose the diffusion limit of the Wright-Fisher model as the basis for the common ancestor process. This is, however, of minor importance, since both diffusion limits differ only by a rescaling of time by a factor of $2$ (cf. [5, Ch. 7], [8, Ch. 5] or [11, Ch. 15]).
3 The common ancestor type process
Assume that the population is stationary and evolves according to the diffusion process $(X_t)_{t\geqslant 0}$. Then, at any time $t$, there almost surely exists a unique individual that is, at some time $s > t$, ancestral to the whole population; cf. Fig. 2. (One way to see this is via [14, Thm. 3.2, Corollary 3.4], which shows that the expected time to the ultimate ancestor in the ASG remains bounded if the sample size tends to infinity.) We say that the descendants of this individual become fixed and call it the common ancestor at time $t$. The lineage of these distinguished individuals over time defines the so-called ancestral line. Denoting the type of the common ancestor at time $t$ by $I_t$, $I_t \in S$, we term $(I_t)_{t\geqslant 0}$ the common ancestor type process or CAT process for short. Of particular importance is its stationary type distribution $\alpha = (\alpha_0, \alpha_1)$, to which we will refer as the ancestral type distribution. Unfortunately, the CAT process is not Markovian. But two approaches are available that augment $(I_t)_{t\geqslant 0}$ by a second component to obtain a Markov process. They go back to Fearnhead [9] and Taylor [20]; we will recapitulate them below.
3.1 Taylor’s approach
For ease of exposition, we start with Taylor’s approach [20]. It relies on a description of the full population forward in time (in the diffusion limit of the Moran model as $N\to\infty$) and builds on the so-called structured coalescent [2]. The process is $(I_t, X_t)_{t\geqslant 0}$, with states $(i,x)$, $i \in S$ and $x \in [0,1]$. In [20] this process is termed common ancestor process (CAP).
Define $h(x)$ as the probability that the common ancestor at a given time is of type $0$, provided that the frequency of type-$0$ individuals at this time is $x$. Obviously, $h(0) = 0$, $h(1) = 1$. Since the process is time-homogeneous, $h$ is independent of time. Denote the (stationary) distribution of the CAP by $\pi^T$. Its marginal distributions are $\alpha$ (with respect to the first variable) and $\pi_X$ (with respect to the second variable). $\pi^T$ may then be written as the product of the marginal density $\pi_X$ and the conditional probability $h$ (cf. [20]):
$$\pi^T(0,x)\,\mathrm{d}x = h(x)\,\pi_X(x)\,\mathrm{d}x, \qquad \pi^T(1,x)\,\mathrm{d}x = \big(1-h(x)\big)\,\pi_X(x)\,\mathrm{d}x.$$
Since $\pi_X$ is well known (5), it remains to specify $h$. Taylor uses a backward-forward construction within diffusion theory to derive a boundary value problem for $h$, namely:
$$\frac{1}{2}b(x)h''(x) + a(x)h'(x) - \left(\theta\nu_1\frac{x}{1-x} + \theta\nu_0\frac{1-x}{x}\right)h(x) + \theta\nu_1\frac{x}{1-x} = 0, \qquad h(0)=0,\ h(1)=1. \qquad(6)$$
Taylor shows that (6) has a unique solution. The stationary distribution of the CAP is thus determined in a unique way as well. The function $h$ is smooth in $(0,1)$ and its derivative can be continuously extended to $[0,1]$ (cf. [20, Lemma 2.3, Prop. 2.4]).
In the neutral case (i.e., without selection, $\sigma = 0$), all individuals reproduce at the same rate, independently of their types. For reasons of symmetry, the common ancestor thus is a uniform random draw from the population; consequently, $h(x) = x$. In the presence of selection, Taylor determines the solution of the boundary value problem via a series expansion in $\sigma$ (cf. [20, Sec. 4] and see below), which yields
$$h(x) = x + \sigma x^{-\theta\nu_0}(1-x)^{-\theta\nu_1}\exp(-\sigma x)\int_0^x (\tilde{x}-p)\,p^{\theta\nu_0}(1-p)^{\theta\nu_1}\exp(\sigma p)\,\mathrm{d}p, \qquad(7)$$
$$\text{where}\quad \tilde{x} := \frac{\int_0^1 p\,p^{\theta\nu_0}(1-p)^{\theta\nu_1}\exp(\sigma p)\,\mathrm{d}p}{\int_0^1 p^{\theta\nu_0}(1-p)^{\theta\nu_1}\exp(\sigma p)\,\mathrm{d}p}. \qquad(8)$$
The stationary type distribution of the ancestral line now follows via marginalisation:
$$\alpha_0 = \int_0^1 h(x)\,\pi_X(x)\,\mathrm{d}x \quad\text{and}\quad \alpha_1 = \int_0^1 \big(1-h(x)\big)\,\pi_X(x)\,\mathrm{d}x. \qquad(9)$$
Following [20], we define $\psi(x) := h(x) - x$ and write
$$h(x) = x + \psi(x). \qquad(10)$$
Since $h(x)$ is the conditional probability that the common ancestor is fit, $\psi(x)$ is the part of this probability that is due to selective reproduction.
Substituting (10) into (6) leads to a boundary value problem for $\psi$:
$$\frac{1}{2}b(x)\psi''(x) + a(x)\psi'(x) - \left(\theta\nu_1\frac{x}{1-x} + \theta\nu_0\frac{1-x}{x}\right)\psi(x) + \sigma x(1-x) = 0, \qquad \psi(0)=\psi(1)=0. \qquad(11)$$
Here, the smooth inhomogeneous term $\sigma x(1-x)$ is more favourable as compared to the divergent inhomogeneous term $\theta\nu_1\frac{x}{1-x}$ in (6). Note that Taylor actually derives the boundary value problems (6) and (11) for the more general case of frequency-dependent selection, but restricts himself to frequency-independence to derive solution (7).
3.2 Fearnhead’s approach
We can only give a brief introduction to Fearnhead’s approach [9] here. On the basis of the ASG, the ancestry of a randomly chosen individual from the present (stationary) population is traced backwards in time. More precisely, one considers the type of this individual’s ancestor at time $\tau$ before the present (that is, at forward time $t-\tau$). Obviously, there is a minimal time beyond which this ancestral type coincides with that of the common ancestor (see also Fig. 2), provided the underlying process is extended to all times $t \in \mathbb{R}$.
To make the process Markov, the true ancestor (known as the real branch) is augmented by a collection of virtual branches (see [1, 14, 16, 19] for the details). Following [9, Thm. 1], certain virtual branches may be removed (without compromising the Markov property) and the remaining set of virtual branches contains only unfit ones. We will refer to the resulting construction as the pruned ASG. It is described by the process , where (with values in ) is the number of virtual branches (of type ). is termed common ancestor process in [9] (but keep in mind that it is that is called CAP in [20]). Reversing the direction of time in the pruned ASG yields an alternative augmentation of the CAT process (for ).
Fearnhead provides a representation of the stationary distribution of the pruned process, which we will denote by $\pi_F$. This stationary distribution is expressed in terms of constants $\rho^{(k)}_j$ defined by the following backward recursion:
$$\rho^{(k)}_{k+1} = 0 \quad\text{and}\quad \rho^{(k)}_{j-1} = \frac{\sigma}{j + \sigma + \theta - (j + \theta\nu_1)\rho^{(k)}_j}, \qquad k \in \mathbb{N},\ 2 \leqslant j \leqslant k+1. \qquad(12)$$
The limit $\rho_j := \lim_{k\to\infty}\rho^{(k)}_j$ exists (cf. [9, Lemma 1]) and the stationary distribution of the pruned ASG is given by (cf. [9, Thm. 3])
$$\pi_F(i,n) = \begin{cases} a_n\,\mathbb{E}_{\pi_X}\!\big(X(1-X)^n\big), & \text{if } i = 0,\\[2pt] (a_n - a_{n+1})\,\mathbb{E}_{\pi_X}\!\big((1-X)^{n+1}\big), & \text{if } i = 1, \end{cases} \qquad\text{with}\quad a_n := \prod_{j=1}^{n}\rho_j$$
for all $(i,n) \in S \times \mathbb{N}_0$. Fearnhead proves this result by straightforward verification of the stationarity condition; the calculation is somewhat cumbersome and does not yield insight into the connection with the graphical representation of the pruned ASG. Marginalising over the number of virtual branches results in the stationary type distribution of the ancestral line, namely,
$$\alpha_i = \sum_{n\geqslant 0}\pi_F(i,n). \qquad(13)$$
Furthermore, this reasoning points to an alternative representation of $h$ respectively $\psi$ (cf. [20]):
$$h(x) = x + x\sum_{n\geqslant 1}a_n(1-x)^n \quad\text{respectively}\quad \psi(x) = x\sum_{n\geqslant 1}a_n(1-x)^n. \qquad(14)$$
The $a_n$, to which we will refer as Fearnhead’s coefficients, can be shown [20] to follow the second-order forward recursion
$$(2+\theta\nu_1)a_2 - (2+\sigma+\theta)a_1 + \sigma = 0, \qquad (n+\theta\nu_1)a_n - (n+\sigma+\theta)a_{n-1} + \sigma a_{n-2} = 0, \quad n \geqslant 3. \qquad(15)$$
Indeed, (14) solves the boundary problem (6) and, therefore, equals (7) (cf. [20, Lemma 4.1]).
The forward recursion (15) is greatly preferable to the backward recursion (12), which can only be solved approximately with initial value $\rho^{(k)}_{k+1} = 0$ for some large $k$. What is still missing is the initial value, $a_1$. To calculate it, Taylor defines (cf. [20, Sec. 4.1])
$$v(x) := \frac{h(x)-x}{x} = \frac{\psi(x)}{x} = \sum_{n\geqslant 1}a_n(1-x)^n \qquad(16)$$
and uses (note the missing factor in his equation (28))
$$a_n = \frac{(-1)^n}{n!}\,v^{(n)}(1). \qquad(17)$$
This way a straightforward (but lengthy) calculation (that includes a differentiation of expression (7)) yields
$$a_1 = -v'(1) = -\psi'(1) = \frac{\sigma}{1+\theta\nu_1}\big(1-\tilde{x}\big). \qquad(18)$$
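A small numerical sketch (mine, not from the paper; parameter values are arbitrary) that computes $a_1$ from (18) and then further coefficients from the forward recursion (15), with $\tilde{x}$ obtained by quadrature:

```python
import numpy as np
from scipy.integrate import quad

sigma, theta, nu0, nu1 = 1.0, 0.5, 0.5, 0.5   # illustrative values only

w = lambda p: p**(theta * nu0) * (1 - p)**(theta * nu1) * np.exp(sigma * p)
x_tilde = quad(lambda p: p * w(p), 0, 1)[0] / quad(w, 0, 1)[0]

a = {1: sigma / (1 + theta * nu1) * (1 - x_tilde)}                 # eq. (18)
a[2] = ((2 + sigma + theta) * a[1] - sigma) / (2 + theta * nu1)    # eq. (15), first line
for n in range(3, 11):                                             # eq. (15), second line
    a[n] = ((n + sigma + theta) * a[n - 1] - sigma * a[n - 2]) / (n + theta * nu1)

print({n: round(a[n], 6) for n in sorted(a)})
```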
4 Discrete approach
Our focus is on the stationary type distribution of the CAT process. We have seen so far that it corresponds to the marginal distribution of both $\pi^T$ and $\pi_F$, with respect to the first variable. Our aim now is to establish a closer connection between the properties of the ancestral type distribution and the graphical representation of the Moran model. In a first step we re-derive the differential equations for $h$ and $\psi$ in a direct way, on the basis of the particle picture for a finite population. This derivation will be elementary and, at the same time, it will provide a direct interpretation of the resulting differential equations.
4.1 Difference and differential equations for h and ψ
Equations for $h$. Since it is essential to make the connection with the graphical representation explicit, we start from a population of finite size $N$, rather than from the diffusion limit. Namely, we look at a new Markov process $(M_t, Z^N_t)_{t\geqslant 0}$ with its natural filtration $(\mathcal{F}_t)_{t\geqslant 0}$, where $M_t = (M^0_t, M^1_t)$. $Z^N_t$ is the number of fit individuals as before and $M_t$ holds the number of descendants of types $0$ and $1$ at time $t$ of an unordered sample with composition $M_0$ collected at time $0$. More precisely, we start with an $\mathcal{F}_0$-measurable state $M_0$ (this means that $M_0$ must be independent of the future evolution; but note that it need not be a random sample) and observe the population evolve in forward time. At time $t$, count the type-$0$ descendants and the type-$1$ descendants of our initial sample and summarise the results in the unordered sample $M_t$. Together with $Z^N_t$, this gives the current state $(M_t, Z^N_t)$ (cf. Fig. 3).
As soon as the initial sample is ancestral to all individuals, it clearly will be ancestral to all individuals at all later times. Therefore,
$$A_N := \big\{(m,k) : k \in \{0,\ldots,N\},\ m_0 \leqslant k,\ |m| = N\big\},$$
where $|m| := m_0 + m_1$ for a sample $m = (m_0, m_1)$, is a closed (or invariant) set of the Markov chain. (Given a Markov chain $(Y_t)_{t\geqslant 0}$ in continuous time on a discrete state space $E$, a non-empty subset $A \subseteq E$ is called closed (or invariant) provided that $\mathbb{P}(Y_t \in A \mid Y_0 = y) = 1$ for all $y \in A$ and $t \geqslant 0$ (cf. [17, Ch. 3.2]).)
From now on we restrict ourselves to the initial value $(M_0, Z^N_0) = ((k,0),k)$, i.e. the population consists of $k$ fit individuals and the initial sample contains them all. Our aim is to calculate the probability of absorption in $A_N$, which will also give us the fixation probability of the descendants of the $k$ type-$0$ individuals. In other words, we are interested in the probability that the common ancestor at time 0 belongs to our fit sample. Let us define $h^N_k$ as the equivalent of $h$ in the case of finite population size $N$, that is, $h^N_k$ is the probability that one of the $k$ fit individuals is the common ancestor given $Z^N_0 = k$. Equivalently, $h^N_k$ is the absorption probability of $(M_t, Z^N_t)_{t\geqslant 0}$ in $A_N$, conditional on $(M_0, Z^N_0) = ((k,0),k)$. Obviously, $h^N_0 = 0$, $h^N_N = 1$. It is important to note that, given absorption in $A_N$, the common ancestor is a random draw from the initial sample. Therefore,
$$\mathbb{P}\big(\text{a specific type-0 individual will fix} \mid Z^N_0 = k\big) = \frac{h^N_k}{k}. \qquad(19)$$
Likewise,
$$\mathbb{P}\big(\text{a specific type-1 individual will fix} \mid Z^N_0 = k\big) = \frac{1-h^N_k}{N-k}. \qquad(20)$$
We will now calculate the absorption probabilities with the help of ‘first-step analysis’ (cf. [17, Thm. 3.3.1], see also [5, Thm. 7.5]). Let us recall the method for convenience.
Lemma 1 (‘first-step analysis’).
Assume that $(Y_t)_{t\geqslant 0}$ is a Markov chain in continuous time on a discrete state space $E$, $A \subseteq E$ is a closed set and, for $y \in E$, $T_y$ is the waiting time to leave the state $y$. Then for all $y \in E$,
$$\mathbb{P}\big(Y \text{ absorbs in } A \mid Y(0) = y\big) = \sum_{z \in E:\, z \neq y} \mathbb{P}\big(Y(T_y) = z \mid Y(0) = y\big)\,\mathbb{P}\big(Y \text{ absorbs in } A \mid Y(0) = z\big).$$
So let us decompose the event ‘absorption in $A_N$’ according to the first step away from the initial state. Below we analyse all possible transitions (which are illustrated in Fig. 4), state the transition rates and calculate absorption probabilities, based upon the new state. We assume throughout that $1 \leqslant k \leqslant N-1$.
(a)
$\big((k,0),k\big) \to \big((k+1,0),k+1\big)$:
One of the sample individuals of type $0$ reproduces and replaces a type-$1$ individual. We distinguish according to the kind of the reproduction event.
(a1)
Neutral reproduction, rate: $\frac{k(N-k)}{N}$.
(a2)
Selective reproduction, rate: $\frac{k(N-k)}{N}s_N$.
In both cases, the result is a sample containing all $k+1$ fit individuals. Now $(M_t, Z^N_t)_{t\geqslant 0}$ starts afresh in the new state $\big((k+1,0),k+1\big)$, with absorption probability $h^N_{k+1}$.
(b)
$\big((k,0),k\big) \to \big((k-1,0),k-1\big)$:
A type-$1$ individual reproduces and replaces a (sample) individual of type $0$. This occurs at rate $\frac{k(N-k)}{N}$ and leads to a sample that consists of all $k-1$ fit individuals. The absorption probability, if we start in the new state, is $h^N_{k-1}$.
(c)
$\big((k,0),k\big) \to \big((k-1,1),k-1\big)$:
This transition describes a mutation of a type-$0$ individual to type $1$ and occurs at rate $k u_N \nu_1$. The new sample contains all $k-1$ fit individuals, plus a single unfit one. Starting now from $\big((k-1,1),k-1\big)$, the absorption probability has two contributions: First, by definition, with probability $h^N_{k-1}$, one of the $k-1$ fit individuals will be the common ancestor. In addition, by (20), the single unfit individual has fixation probability $\frac{1-h^N_{k-1}}{N-(k-1)}$, so the probability to absorb in $A_N$ when starting from the new state is
$$\mathbb{P}\big(\text{absorption in } A_N \mid (M_0, Z^N_0) = ((k-1,1),k-1)\big) = h^N_{k-1} + \frac{1-h^N_{k-1}}{N-(k-1)}.$$
(d)
$\big((k,0),k\big) \to \big((k,0),k+1\big)$:
This is a mutation from type $1$ to type $0$, which occurs at rate $(N-k)u_N\nu_0$. We then have $k+1$ fit individuals in the population altogether, but the new sample contains only $k$ of them. Arguing as in (c) and this time using (19), we get
$$\mathbb{P}\big(\text{absorption in } A_N \mid (M_0, Z^N_0) = ((k,0),k+1)\big) = h^N_{k+1} - \frac{h^N_{k+1}}{k+1}.$$
Note that, in steps (c) and (d) (and already in (19) and (20)), we have used the permutation invariance of the fit (respectively unfit) lines to express the absorption probabilities as a function of $k$ (the number of fit individuals in the population) alone. This way, we need not cope with the full state space of $(M_t, Z^N_t)_{t\geqslant 0}$. Taking together the first-step principle with the results of (a)–(d), we obtain the linear system of equations for the $h^N_k$ (with the rates $\lambda^N_k$ and $\mu^N_k$ as in (1)):
$$\big(\lambda^N_k + \mu^N_k\big)h^N_k = \lambda^N_k h^N_{k+1} + \mu^N_k h^N_{k-1} + k u_N \nu_1 \frac{1-h^N_{k-1}}{N-(k-1)} - (N-k)u_N\nu_0\frac{h^N_{k+1}}{k+1}, \qquad(21)$$
$1 \leqslant k \leqslant N-1$, which is complemented by the boundary conditions $h^N_0 = 0$, $h^N_N = 1$. Rearranging results in
$$0 = \frac{1}{2}\big(\lambda^N_k - \mu^N_k\big)\big(h^N_{k+1} - h^N_{k-1}\big) + \frac{1}{2}\big(\lambda^N_k + \mu^N_k\big)\big(h^N_{k+1} - 2h^N_k + h^N_{k-1}\big) + k u_N \nu_1 \frac{1-h^N_{k-1}}{N-(k-1)} - (N-k)u_N\nu_0\frac{h^N_{k+1}}{k+1}. \qquad(22)$$
Let us now consider a sequence $(k_N)_{N\in\mathbb{N}}$ with $k_N \in \{0,\ldots,N\}$ and $\lim_{N\to\infty}k_N/N = x$. The probabilities $h^N_{k_N}$ converge to $h(x)$ as $N\to\infty$ (for the stationary case a proof is given in the Appendix). Equation (22), with $k$ replaced by $k_N$, together with (3) and (4) leads to Taylor’s boundary value problem (6).
Equations for $\psi$. As before, we consider $(M_t, Z^N_t)_{t\geqslant 0}$ with start in $((k,0),k)$, and now introduce the new function $\psi^N_k := h^N_k - \frac{k}{N}$. $\psi^N_k$ is the part of the absorption probability in $A_N$ that goes back to selective reproductions (in comparison to the neutral case). We therefore speak of $\psi^N_k$ (as well as of $\psi$) as the ‘extra’ absorption probability.
Substituting $h^N_k = \frac{k}{N} + \psi^N_k$ in (21) yields the following difference equation for $\psi^N_k$:
$$\big(\lambda^N_k + \mu^N_k\big)\psi^N_k = \lambda^N_k\psi^N_{k+1} + \mu^N_k\psi^N_{k-1} + \frac{k(N-k)}{N^2}s_N - k u_N \nu_1\frac{\psi^N_{k-1}}{N-(k-1)} - (N-k)u_N\nu_0\frac{\psi^N_{k+1}}{k+1}, \qquad(23)$$
$1 \leqslant k \leqslant N-1$, together with the boundary conditions $\psi^N_0 = \psi^N_N = 0$. It has a nice interpretation, which is completely analogous to that of (21) except in case (a2): If one of the fit sample individuals reproduces via a selective reproduction event, the extra absorption probability is $\frac{1}{N} + \psi^N_{k+1}$ (rather than $\psi^N_{k+1}$). Here, $\frac{1}{N}$ is the neutral fixation probability of the individual just created via the selective event; $\psi^N_{k+1}$ is the extra absorption probability of all type-$0$ individuals present after the event. The neutral contribution gives rise to the term $\frac{k(N-k)}{N^2}s_N$ on the right-hand side of (23). Performing $N\to\infty$ in the same way as for $h$, we obtain Taylor’s boundary value problem (11) and now have an interpretation in terms of the graphical representation to go with it.
4.2 Solution of the difference equation
In this Section, we derive an explicit expression for the fixation probabilities $h^N_k$, that is, a solution of the difference equation (21), or equivalently, (23). Although the calculations only involve standard techniques, we perform them here explicitly since this yields additional insight. Since there is no danger of confusion, we omit the subscript (or superscript) $N$ for economy of notation.
The following Lemma specifies the extra absorption probabilities in terms of a recursion.
Lemma 2.
Let $1 \leqslant k \leqslant N-1$. Then
$$\psi_{N-k} = \frac{k(N-k)}{\mu_{N-k}}\left(\frac{\mu_{N-1}}{N-1}\,\psi_{N-1} + \frac{\lambda_{N-k+1}}{(k-1)(N-k+1)}\,\psi_{N-k+1} - \frac{s(k-1)}{N^2}\right). \qquad(24)$$
Remark 1.
The quantity is well defined for all , and is well defined even for .
Proof of Lemma 2.
Let $2 \leqslant i \leqslant N-2$. Set $k = i$ in (23) and divide by $i(N-i)$ to obtain
$$\left(\frac{\lambda_i}{i(N-i)} + \frac{\mu_i}{i(N-i)}\right)\psi_i = \left(\frac{1+s}{N} + \frac{u\nu_0}{i+1}\right)\psi_{i+1} + \left(\frac{1}{N} + \frac{u\nu_1}{N-(i-1)}\right)\psi_{i-1} + \frac{s}{N^2} = \frac{\lambda_{i+1}}{(i+1)(N-i-1)}\,\psi_{i+1} + \frac{\mu_{i-1}}{(i-1)(N-i+1)}\,\psi_{i-1} + \frac{s}{N^2}. \qquad(25)$$
Together with
$$\left(\frac{\lambda_1}{N-1} + \frac{\mu_1}{N-1}\right)\psi_1 = \frac{\lambda_2}{2(N-2)}\,\psi_2 + \frac{s}{N^2}, \qquad(26)$$
$$\left(\frac{\lambda_{N-1}}{N-1} + \frac{\mu_{N-1}}{N-1}\right)\psi_{N-1} = \frac{\mu_{N-2}}{2(N-2)}\,\psi_{N-2} + \frac{s}{N^2}, \qquad(27)$$
and the boundary conditions $\psi_0 = \psi_N = 0$, we obtain a new linear system of equations for the vector $(\psi_1,\ldots,\psi_{N-1})$. Summation over the last $k-1$ equations yields
$$\sum_{i=N-k+1}^{N-1}\left(\frac{\lambda_i}{i(N-i)} + \frac{\mu_i}{i(N-i)}\right)\psi_i = \sum_{i=N-k+1}^{N-2}\frac{\lambda_{i+1}}{(i+1)(N-i-1)}\,\psi_{i+1} + \sum_{i=N-k+1}^{N-1}\frac{\mu_{i-1}}{(i-1)(N-i+1)}\,\psi_{i-1} + \frac{s(k-1)}{N^2},$$
which proves the assertion.
Lemma 2 allows for an explicit solution for the $\psi_{N-k}$.
Theorem 1.
For $1 \leqslant \ell \leqslant N$ and $0 \leqslant n \leqslant N-1$, let
$$\chi^n_\ell := \prod_{i=\ell}^{n}\frac{\lambda_i}{\mu_i} \quad\text{and}\quad K := \sum_{n=0}^{N-1}\chi^n_1. \qquad(28)$$
The solution of recursion (24) is then given by
$$\psi_{N-k} = \frac{k(N-k)}{\mu_{N-k}}\sum_{n=N-k}^{N-1}\chi^n_{N-k+1}\left(\frac{\mu_{N-1}}{N-1}\,\psi_{N-1} - \frac{s(N-1-n)}{N^2}\right) \qquad(29)$$
with
$$\psi_{N-1} = \frac{1}{K}\,\frac{N-1}{\mu_{N-1}}\,\frac{s}{N^2}\sum_{n=0}^{N-2}(N-1-n)\,\chi^n_1. \qquad(30)$$
An alternative representation is given by
$$\psi_{N-k} = \frac{1}{K}\,\frac{k(N-k)}{\mu_{N-k}}\,\frac{s}{N^2}\sum_{\ell=0}^{N-k-1}\sum_{n=N-k}^{N-1}(n-\ell)\,\chi^\ell_1\,\chi^n_{N-k+1}. \qquad(31)$$
Proof.
We first prove (29) by induction over $k$. For $k = 1$, (29) is easily checked to be true. Inserting the induction hypothesis for $k-1$ into recursion (24) yields
$$\psi_{N-k} = \frac{k(N-k)}{\mu_{N-k}}\left[\frac{\mu_{N-1}}{N-1}\,\psi_{N-1} + \frac{\lambda_{N-k+1}}{\mu_{N-k+1}}\sum_{n=N-k+1}^{N-1}\chi^n_{N-k+2}\left(\frac{\mu_{N-1}}{N-1}\,\psi_{N-1} - \frac{s(N-1-n)}{N^2}\right) - \frac{s(k-1)}{N^2}\right],$$
which immediately leads to (29). For $k = N$, (29) gives (30), since $\psi_0 = 0$ and the right-hand side is well defined by Remark 1. We now check (31) by inserting (30) into (29) and then use the expression for $K$ as in (28):
$$\psi_{N-k} = \frac{1}{K}\,\frac{k(N-k)}{\mu_{N-k}}\,\frac{s}{N^2}\sum_{n=N-k}^{N-1}\chi^n_{N-k+1}\left[\sum_{\ell=0}^{N-1}(N-1-\ell)\,\chi^\ell_1 - \sum_{\ell=0}^{N-1}(N-1-n)\,\chi^\ell_1\right] = \frac{1}{K}\,\frac{k(N-k)}{\mu_{N-k}}\,\frac{s}{N^2}\sum_{\ell=0}^{N-1}\sum_{n=N-k}^{N-1}(n-\ell)\,\chi^\ell_1\,\chi^n_{N-k+1}.$$
Then we split the first sum according to whether $\ell \leqslant N-k-1$ or $\ell \geqslant N-k$, and use $\chi^\ell_1 = \chi^{N-k}_1\chi^\ell_{N-k+1}$ in the latter case:
$$\psi_{N-k} = \frac{1}{K}\,\frac{k(N-k)}{\mu_{N-k}}\,\frac{s}{N^2}\left[\sum_{\ell=0}^{N-k-1}\sum_{n=N-k}^{N-1}(n-\ell)\,\chi^\ell_1\,\chi^n_{N-k+1} + \chi^{N-k}_1\sum_{\ell=N-k}^{N-1}\sum_{n=N-k}^{N-1}(n-\ell)\,\chi^\ell_{N-k+1}\,\chi^n_{N-k+1}\right].$$
The first sum is the right-hand side of (31) and the second sum disappears due to symmetry.
Let us note that the fixation probabilities thus obtained have been well known for the case with selection in the absence of mutation (see, e.g., [5, Thm. 6.1]), but to the best of our knowledge, have not yet appeared in the literature for the case with mutation.
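As a consistency check (my own sketch, not from the paper; parameters are arbitrary), one can solve the linear system (21) numerically and compare $h^N_k - k/N$ with the closed form (31):

```python
import numpy as np

N, s, u, nu0, nu1 = 50, 0.05, 0.02, 0.5, 0.5   # illustrative values only
k = np.arange(N + 1)
lam = k * (N - k) / N * (1 + s) + (N - k) * u * nu0
mu = k * (N - k) / N + k * u * nu1

# h via the linear system (21), with boundary conditions h_0 = 0, h_N = 1.
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], A[N, N], b[N] = 1.0, 1.0, 1.0
for i in range(1, N):
    A[i, i] = lam[i] + mu[i]
    A[i, i + 1] = -(lam[i] - (N - i) * u * nu0 / (i + 1))
    A[i, i - 1] = -(mu[i] - i * u * nu1 / (N - (i - 1)))
    b[i] = i * u * nu1 / (N - (i - 1))
h = np.linalg.solve(A, b)

# psi via the closed form (31); chi(l, n) is the product over i = l..n, empty product = 1.
chi = lambda l, n: np.prod(lam[l:n + 1] / mu[l:n + 1])
K = sum(chi(1, n) for n in range(N))
psi = np.zeros(N + 1)
for kk in range(1, N):
    Nk = N - kk
    psi[Nk] = (kk * (N - kk) / mu[Nk] * s / N**2 / K
               * sum((n - l) * chi(1, l) * chi(Nk + 1, n)
                     for l in range(Nk) for n in range(Nk, N)))

print("max |h_k - k/N - psi_k| =", np.max(np.abs(h - k / N - psi)))
```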
4.3 The solution of the differential equation
As a little detour, let us revisit the boundary value problem (6). To solve it, Taylor assumes that $h$ can be expanded in a power series in $\sigma$. This yields a recursive series of boundary value problems (for the various powers of $\sigma$), which are solved by elementary methods and combined into a solution of (6) (cf. [20]).
However, the calculations are slightly long-winded. In what follows, we show that the boundary value problem (6) (or equivalently (11)) may be solved in a direct and elementary way, without the need for a series expansion. Defining
$$c(x) := -\theta\nu_1\frac{x}{1-x} - \theta\nu_0\frac{1-x}{x}$$
and remembering the drift coefficient (cf. (3)) and the diffusion coefficient (cf. (4)), differential equation (11) reads
$$\frac{1}{2}b(x)\psi''(x) + a(x)\psi'(x) + c(x)\psi(x) = -\sigma x(1-x)$$
or, equivalently,
$$\psi''(x) + \frac{2a(x)}{b(x)}\psi'(x) + \frac{2c(x)}{b(x)}\psi(x) = -\sigma. \qquad(32)$$
Since
$$\frac{c(x)}{b(x)} = \frac{\mathrm{d}}{\mathrm{d}x}\,\frac{a(x)}{b(x)}, \qquad(33)$$
(32) is an exact differential equation (for the concept of exactness, see [10, Ch. 3.11] or [3, Ch. 2.6]). Solving it corresponds to solving its primitive
$$\psi'(x) + \frac{2a(x)}{b(x)}\psi(x) = -\sigma(x - \tilde{x}). \qquad(34)$$
The constant $\tilde{x}$ plays the role of an integration constant and will be determined by the initial conditions later. (Obviously, (32) is recovered by differentiating (34) and observing (33).) As usual, we consider the homogeneous equation
$$\varphi'(x) + \frac{2a(x)}{b(x)}\varphi(x) = \varphi'(x) + \left(\sigma - \frac{\theta\nu_1}{1-x} + \frac{\theta\nu_0}{x}\right)\varphi(x) = 0$$
first. According to [5, Ch. 7.4] and [8, Ch. 4.3], its solution is given by
$$\varphi_1(x) = \exp\left(\int^x \frac{-2a(z)}{b(z)}\,\mathrm{d}z\right) = \gamma\,(1-x)^{-\theta\nu_1}x^{-\theta\nu_0}\exp(-\sigma x) = \frac{2C\gamma}{b(x)\,\pi_X(x)}.$$
(Note the link to the stationary distribution provided by the last expression (cf. [5, Thm. 7.8] and [8, Ch. 4.5]).) Of course, the same expression is obtained via separation of variables. Again we will deal with the constant later.
Variation of parameters yields the solution of the inhomogeneous equation (34):
$$\varphi_2(x) = \varphi_1(x)\int_\beta^x \frac{-\sigma(p-\tilde{x})}{\varphi_1(p)}\,\mathrm{d}p = \sigma\,\varphi_1(x)\int_\beta^x \frac{\tilde{x}-p}{\varphi_1(p)}\,\mathrm{d}p. \qquad(35)$$
Finally, it remains to specify the constants of integration $\beta$, $\gamma$ and the constant $\tilde{x}$ to comply with $\psi(0)=\psi(1)=0$. We observe that the factor $\gamma$ cancels in (35), thus its choice is arbitrary. $\varphi_1$ diverges for $x\to 0$ and $x\to 1$, so the choice of $\beta$ and $\tilde{x}$ has to guarantee that the integral in (35) vanishes at both boundary points. Hence, $\beta = 0$ and
$$\tilde{x}\int_0^1\frac{1}{\varphi_1(p)}\,\mathrm{d}p = \int_0^1\frac{p}{\varphi_1(p)}\,\mathrm{d}p \quad\Longleftrightarrow\quad \tilde{x} = \frac{\int_0^1\frac{p}{\varphi_1(p)}\,\mathrm{d}p}{\int_0^1\frac{1}{\varphi_1(p)}\,\mathrm{d}p}.$$
For the sake of completeness, l’Hôpital’s rule can be used to check that $\varphi_2(0) = \varphi_2(1) = 0$. The result indeed coincides with Taylor’s (cf. (7)).
We close this Section with a brief consideration of the initial value $a_1$ of the recursion (15). Since, by (18), $a_1 = -\psi'(1)$, it may be obtained by analysing the limit $x\to 1$ of (34). In the quotient $\frac{a(x)\psi(x)}{b(x)}$, numerator and denominator disappear as $x\to 1$. According to l’Hôpital’s rule, we get
$$\lim_{x\to 1}\frac{a(x)\psi(x)}{b(x)} = \lim_{x\to 1}\frac{\big(-\theta\nu_0 - \theta\nu_1 + \sigma(1-2x)\big)\psi(x) + a(x)\psi'(x)}{2(1-2x)} = \frac{1}{2}\theta\nu_1\psi'(1),$$
therefore, the limit $x\to 1$ of (34) yields
$$-\psi'(1)\big(1+\theta\nu_1\big) = \sigma\big(1-\tilde{x}\big).$$
Thus, we obtain $a_1 = \frac{\sigma}{1+\theta\nu_1}(1-\tilde{x})$ without the need to differentiate expression (7).
5 Derivation of Fearnhead’s coefficients in the discrete setting
Let us now turn to the ancestral type distribution and Fearnhead’s coefficients that characterise it. To this end, we start from the linear system of equations for the $\psi_i$ in (25)-(27). Let
$$\tilde{\psi}^N_k := \frac{\psi^N_k}{k(N-k)}, \qquad(36)$$
for $1 \leqslant k \leqslant N-1$. In terms of these new variables, (27) reads
−μNN−1˜ψNN−1+μNN−2˜ψNN | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475066065788269, "perplexity": 617.4526878164663}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881763.20/warc/CC-MAIN-20200706160424-20200706190424-00389.warc.gz"} |
http://mathhelpforum.com/algebra/218867-cannot-life-me-follow-simplification-print.html | # Cannot for the life of me follow this simplification
• May 13th 2013, 12:33 AM
lukasaurus
Cannot for the life of me follow this simplification
http://i.imgur.com/6HvVncv.jpg
I understand how the sqrt comes down, from the right half of the top, but how does the top left part lose the square root?
After simplifying, I can get
sqrt(x^2+1) - x^2 / (x^2+1)^(3/2)
However, the answer that all online calcs give and my tutorial notes give is
1/ (x^2+1)^(3/2)
And I can see where that is going from the working up the top, but can't figure out how the sqrt(x^2+1) turns into (x^2+1)
• May 13th 2013, 01:16 AM
agentmulder
Re: Cannot for the life of me follow this simplification
One way to see this is to factor, first cancel the 2 with 1/2 at top right then factor
$\frac{(x^2 + 1)^{-\frac{1}{2}}[(x^2 + 1) - x^2]}{x^2 + 1}$
Notice if you distribute, the bases are the same so you add exponents and keep the base, -1/2 + 1 = 1/2, so we're good
Next simplify within brackets = 1, then bring what's left to the denominator so its exponent is positive, finally keep the same base and add the exponents. Let me know if you need more details.
:)
• May 13th 2013, 01:36 AM
lukasaurus
Re: Cannot for the life of me follow this simplification
Yep,I definitely need more details with that first factoring. I have been sitting, trying to work this out for over two hours
• May 13th 2013, 01:40 AM
ibdutt
1 Attachment(s)
Re: Cannot for the life of me follow this simplification
• May 13th 2013, 02:26 AM
agentmulder
Re: Cannot for the life of me follow this simplification
Quote:
Originally Posted by lukasaurus
Yep,I definitely need more details with that first factoring. I have been sitting, trying to work this out for over two hours
Let's just look at only the numerator for a moment , after canceling 1/2 with 2 and let's represent the radical with an equivalent exponent.
$(x^2 + 1)^{\frac{1}{2}} - x^2(x^2 + 1)^{-\frac{1}{2}}$
is it easier to see the same base now?
We must factor from both terms because we have subtraction between the 2 terms.
Factoring from the term on the right is easy, just pull it out; factoring from the term on the left is a bit harder but if you can understand it you save time and space. Essentially, to figure out what exponent to put, you subtract the exponents
We identified the common base
$(x^2 + 1)$
We decide to factor out
$(x^2 + 1)^{-\frac{1}{2}}$
So far we have
$(x^2 + 1)^{-\frac{1}{2}}[ \ \ \ \ \ \ - x^2]$
what do we put in the space? Well, it's going to be the same base with an exponent of...
$\frac{1}{2} - (- \frac{1}{2} ) = 1$
the positive 1/2 is the exponent on the base that is getting factored; the -1/2 is the exponent on the base that is doing the factoring.
$(x^2 + 1)^{-\frac{1}{2}}[(x^2 + 1)^1 - x^2]$
or simply
$(x^2 + 1)^{-\frac{1}{2}}[x^2 + 1 - x^2]$
The rest of it is obvious, i hope to have helped you understand the factoring. Let me know if anything is not clear.
:) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8528382778167725, "perplexity": 738.1014638906853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678687395/warc/CC-MAIN-20140313024447-00071-ip-10-183-142-35.ec2.internal.warc.gz"} |
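As a quick symbolic sanity check (my own addition, not part of the thread; it assumes the function being differentiated was x/sqrt(x^2+1)):

```python
import sympy as sp

x = sp.symbols('x', real=True)
derivative = sp.diff(x / sp.sqrt(x**2 + 1), x)
print(sp.simplify(derivative))   # expected: (x**2 + 1)**(-3/2), up to equivalent forms
```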
https://www.physicsforums.com/threads/hall-effect-data.383060/ | # Hall Effect data
1. Mar 2, 2010
### Kazkek
Hi, I am doing research in Hall Mobility and Half-heusler compounds.
I've taken some data on several samples now and I am kind of perplexed. We use a very stable magnetic field between -.6 Tesla and .6 Tesla to do the measurements.
Currently we take data points at intervals of .1 tesla to acquire a line of Magnetic Field vs. Resistance (or voltage since Current is held constant). And for the most part we take the slope of that line to get a "B / R" ratio and then calculate the carrier concentration.
Most of the samples have run smoothly and we've gotten linear data, but a few of the samples are returning Parabolic data.
This meaning that we start at .6 Tesla and go to -.6 Tesla in .1 Tesla increments. Taking data at each point.
http://img682.imageshack.us/img682/2230/graph3t.png" [Broken]
We are also using a Van der pauw geometry instead of bar geometry (due to the shape of the samples that are made).
My question is: what does a parabolic curve mean for Hall effect data? I thought it should be mostly linear.
If you need any more info that I've left out, just ask I should be able to tell you. Thanks.
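For reference, a sketch of the kind of slope-based extraction described above (my own illustration, not the original analysis; it assumes a single-carrier Hall model, a made-up sample thickness, and mock data):

```python
import numpy as np

e = 1.602e-19    # elementary charge [C]
t = 1.0e-3       # assumed sample thickness [m] (placeholder value)

B = np.linspace(-0.6, 0.6, 13)                         # field sweep [T]
R_xy = 2.5e-3 * B + 1e-5 * np.random.randn(B.size)     # mock Hall resistance [Ohm]

slope, intercept = np.polyfit(B, R_xy, 1)              # dR/dB [Ohm/T]
n = 1.0 / (e * t * slope)                              # R_xy = B/(n e t)  =>  n = 1/(e t dR/dB)
print(f"slope = {slope:.3e} Ohm/T, carrier concentration n = {n:.3e} m^-3")
```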
Last edited by a moderator: May 4, 2017
https://arxiv.org/abs/1504.05661 | math.OC
# Title: Control of Generalized Energy Storage Networks
Abstract: The integration of intermittent and volatile renewable energy resources requires increased flexibility in the operation of the electric grid. Storage, broadly speaking, provides the flexibility of shifting energy over time; network, on the other hand, provides the flexibility of shifting energy over geographical locations. The optimal control of general storage networks in uncertain environments is an important open problem. The key challenge is that, even in small networks, the corresponding constrained stochastic control problems with continuous spaces suffer from curses of dimensionality, and are intractable in general settings. For large networks, no efficient algorithm is known to give optimal or near-optimal performance. This paper provides an efficient and provably near-optimal algorithm to solve this problem in a very general setting. We study the optimal control of generalized storage networks, i.e., electric networks connected to distributed generalized storages. Here generalized storage is a unifying dynamic model for many components of the grid that provide the functionality of shifting energy over time, ranging from standard energy storage devices to deferrable or thermostatically controlled loads. An online algorithm is devised for the corresponding constrained stochastic control problem based on the theory of Lyapunov optimization. We prove that the algorithm is near-optimal, and construct a semidefinite program to minimize the sub-optimality bound. The resulting bound is a constant that depends only on the parameters of the storage network and cost functions, and is independent of uncertainty realizations. Numerical examples are given to demonstrate the effectiveness of the algorithm.
Comments: This report, written in January 2014, is a longer version of the conference paper [1] (see references in the report). This version contains a somewhat more general treatment for the cases with sub-differentiable objective functions and Markov disturbance. arXiv admin note: substantial text overlap with arXiv:1405.7789
Subjects: Optimization and Control (math.OC)
Cite as: arXiv:1504.05661 [math.OC] (or arXiv:1504.05661v1 [math.OC] for this version)
## Submission history
From: Junjie Qin [view email]
[v1] Wed, 22 Apr 2015 05:53:32 GMT (1527kb) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8975402116775513, "perplexity": 915.6503428281238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158766.65/warc/CC-MAIN-20180923000827-20180923021227-00408.warc.gz"} |
https://reference.opcfoundation.org/v104/Core/DataTypes/RationalNumber/ | ## RationalNumber
The fields of the RationalNumber DataType are defined in the following table:
| Name | Type |
| --- | --- |
| RationalNumber | Structure |
| numerator | Int32 |
| denominator | UInt32 |
The representation of the RationalNumber DataType in the address space is shown in the following table:
| Attribute | Value |
| --- | --- |
| NodeId | i=18806 |
| NamespaceUri | http://opcfoundation.org/UA/ |
| BrowseName | RationalNumber |
| IsAbstract | False |
| SubtypeOf | Structure |
| Categories | |
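A hypothetical illustration (not part of the OPC UA specification or any SDK) of how this two-field structure might be mirrored in application code:

```python
from dataclasses import dataclass

@dataclass
class RationalNumber:
    numerator: int    # Int32: signed 32-bit value
    denominator: int  # UInt32: unsigned 32-bit value; 0 would make the ratio undefined

    def as_float(self) -> float:
        return self.numerator / self.denominator

print(RationalNumber(numerator=355, denominator=113).as_float())  # ~3.14159
```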
https://www.deepdyve.com/lp/oxford-university-press/global-structural-properties-of-random-graphs-UIkbSR5ban | # Global Structural Properties of Random Graphs
Global Structural Properties of Random Graphs Abstract We study two global structural properties of a graph $$\Gamma$$, denoted $$\mathcal{AS}$$ and $$\mathcal{CFS}$$, which arise in a natural way from geometric group theory. We study these properties in the Erdős–Rényi random graph model $${\mathcal G}(n,p)$$, proving the existence of a sharp threshold for a random graph to have the $$\mathcal{AS}$$ property asymptotically almost surely, and giving fairly tight bounds for the corresponding threshold for the $$\mathcal{CFS}$$ property. As an application of our results, we show that for any constant $$p$$ and any $$\Gamma\in{\mathcal G}(n,p)$$, the right-angled Coxeter group $$W_\Gamma$$ asymptotically almost surely has quadratic divergence and thickness of order $$1$$, generalizing and strengthening a result of Behrstock–Hagen–Sisto [8]. Indeed, we show that at a large range of densities a random right-angled Coxeter group has quadratic divergence. 1 Introduction In this article, we consider two properties of graphs motivated by geometric group theory. We show that these properties are typically present in random graphs. We repay the debt to geometric group theory by applying our (purely graph-theoretic) results to the large-scale geometry of Coxeter groups. Random graphs Let $${\mathcal G}(n,p)$$ be the random graph model on $$n$$ vertices obtained by including each edge independently at random with probability $$p=p(n)$$. The parameter $$p$$ is often referred to as the density of $${\mathcal G}(n,p)$$. The model $${\mathcal G}(n,p)$$ was introduced by Gilbert [23], and the resulting random graphs are usually referred to as the “Erdős–Rényi random graphs” in honor of Erdős and Rényi’s seminal contributions to the field, and we follow this convention. We say that a property $$\mathcal{P}$$ holds asymptotically almost surely (a.a.s.) in $${\mathcal G}(n,p)$$ if for $$\Gamma\in{\mathcal G}(n,p),$$ we have $$\mathbb{P}(\Gamma \in \mathcal{P})\rightarrow 1$$ as $$n\rightarrow \infty$$. In this article, we will be interested in proving that certain global properties hold a.a.s. in $${\mathcal G}(n,p)$$ both for a wide range of probabilities $$p=p(n)$$. A graph property is (monotone) increasing if it is closed under the addition of edges. A paradigm in the theory of random graphs is that global increasing graph properties exhibit sharp thresholds in $${\mathcal G}(n,p)$$: for many global increasing properties $$\mathcal{P}$$, there is a critical density$$p_c=p_c(n)$$ such that for any fixed $$\epsilon>0$$ if $$p<(1-\epsilon)p_c$$ then a.a.s. $$\mathcal{P}$$ does not hold in $${\mathcal G}(n,p)$$, while if $$p>(1+\epsilon)p_c$$ then a.a.s. $$\mathcal{P}$$ holds in $${\mathcal G}(n,p)$$. A quintessential example is the following classical result of Erdős and Rényi which provides a sharp threshold for connectedness: Theorem (Erdős–Rényi; [21]). There is a sharp threshold for connectivity of a random graph with critical density $$p_c(n)=\frac{\log(n)}{n}$$. □ The local structure of the Erdős–Rényi random graph is well understood, largely due to the assumption of independence between the edges. For example, Erdős–Rényi and others have obtained threshold densities for the existence of certain subgraphs in a random graph (see e.g., [21, Theorem 1, Corollaries 1–5]). 
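As a quick illustration of the connectivity threshold quoted above (my own sketch, not from the paper; it assumes the networkx package is available), one can estimate the probability that a sample of $${\mathcal G}(n,p)$$ is connected just below and just above $$p_c(n)=\log(n)/n$$:

```python
import math
import networkx as nx

def prob_connected(n, p, trials=200):
    """Monte Carlo estimate of P(G(n, p) is connected)."""
    return sum(nx.is_connected(nx.gnp_random_graph(n, p)) for _ in range(trials)) / trials

n = 500
pc = math.log(n) / n
for factor in (0.7, 1.0, 1.3):
    print(f"p = {factor:.1f} * log(n)/n : P(connected) ~ {prob_connected(n, factor * pc):.2f}")
```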
In earlier applications of random graphs to geometric group theory, this feature of the model was successfully exploited in order to analyze the geometry of right-angled Artin and Coxeter groups presented by random graphs; this is notable, for example, in the work of Charney and Farber [14]. In particular, the presence of an induced square implies non-hyperbolicity of the associated right-angled Coxeter group [14, 21, 31]. In this article, we take a more global approach. Earlier work established a correspondence between some fundamental geometric properties of right-angled Coxeter groups and large-scale structural properties of the presentation graph, rather than local properties such as the presence or absence of certain specified subgraphs. The simplest of these properties is the property of being the join of two subgraphs that are not cliques. One large scale graph property relevant in the present context is a property studied in [8] which, roughly, says that the graph is constructed in a particular organized, inductive way from joins. In this article, we discuss a refined version of this property, $$\mathcal{CFS}$$, which is a slightly-modified version of a property introduced by Dani–Thomas [17]. We also study a stronger property, $$\mathcal{AS}$$, and show it is generic in random graphs for a large range of $$p(n)$$, up $$1-\omega(n^{-2})$$. $$\boldsymbol{\mathcal{AS}}$$ graphs The first class of graphs we study is the class of augmented suspensions, which we denote $$\mathcal{AS}$$. A graph is an augmented suspension if it contains an induced subgraph which is a suspension (see Section 2 for a precise definition of this term), and any vertex which is not in that suspension is connected by edges to at least two nonadjacent vertices of the suspension. Theorem 4.4 and 4.5 (Sharp Threshold for $$\boldsymbol{\mathcal{AS}}$$).Let $$\epsilon>0$$ be fixed. If $$p=p(n)$$ satisfies $$p\ge(1+\epsilon)\left(\displaystyle\frac{\log n}{n}\right)^{\frac{1}{3}}$$ and $$(1-p)n^2\to\infty$$, then $$\Gamma\in{\mathcal G}(n,p(n))$$ is a.a.s. in $$\mathcal{AS}$$. On the other hand, if $$p\le(1-\epsilon)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}$$, then $$\Gamma\in{\mathcal G}(n,p)$$ a.a.s. does not lie in $$\mathcal{AS}$$. Intriguingly, Kahle proved that a function similar to the critical density in Theorem 4.4 is the threshold for a random simplicial complex to have vanishing second rational cohomology [28]. Remark (Behavior near $$p=1$$). Note that property $$\mathcal{AS}$$ is not monotone increasing, since it requires the presence of a number of non-edges. In particular, complete graphs are not in $$\mathcal{AS}$$. Thus unlike the global properties typically studied in the theory of random graphs, $$\mathcal{AS}$$ will cease to hold a.a.s. when the density $$p$$ is very close to $$1$$. In fact, [8, Theorem 3.9] shows that if $$p(n)=1-\Omega\left(\frac{1}{n^2}\right)$$, then a.a.s. $$\Gamma$$ is either a clique or a clique minus a fixed number of edges whose endpoints are all disjoint. Thus, with positive probability, $$\Gamma\in\mathcal{AS}$$. However, [14, Theorem 1] shows that if $$(1-p)n^2\to 0$$ then $$\Gamma$$ is asymptotically almost surely a clique, and hence not in $$\mathcal{AS}$$. □ $$\boldsymbol{\mathcal{CFS}}$$ graphs The second family of graphs, which we call $$\mathcal{CFS}$$ graphs (“Constructed From Squares”), arise naturally in geometric group theory in the context of the large–scale geometry of right–angled Coxeter groups, as we explain below and in Section 3. 
A special case of these graphs was introduced by Dani–Thomas to study divergence in triangle-free right-angled Coxeter groups [17]. The graphs we study are intimately related to a property called thickness, a feature of many key examples in geometric group theory and low dimensional topology that is, closely related to divergence, relative hyperbolicity, and a number of other topics. This property is, in essence, a connectivity property because it relies on a space being “connected” through sequences of “large” subspaces. Roughly speaking, a graph is $$\mathcal{CFS}$$ if it can be built inductively by chaining (induced) squares together in such a way that each square overlaps with one of the previous squares along a diagonal (see Section 2 for a precise definition). We explain in the next section how this class of graphs generalizes $$\mathcal{AS}$$. Our next result about genericity of $$\mathcal{CFS}$$ combines with Proposition 3.1 below to significantly strengthen [8, Theorem VI]. This result is an immediate consequence of Theorems 5.1 and 5.7, which, in fact, establish slightly more precise, but less concise, bounds. Theorem 5.1 and 5.7 Suppose $$(1-p)n^2\to\infty$$ and let $$\epsilon>0$$. Then $$\Gamma\in{\mathcal G}(n,p)$$ is a.a.s. in $$\mathcal{CFS}$$ whenever $$p(n)>n^{-\frac{1}{2}+\epsilon}$$. Conversely, $$\Gamma\in{\mathcal G}(n,p)$$ is a.a.s. not in $$\mathcal{CFS}$$, whenever $$p(n)<n^{-\frac{1}{2} -\epsilon}.$$ We actually show, in Theorem 5.1, that at densities above $$5\sqrt{\frac{\log n}{n}}$$, with $$(1-p)n^2\to\infty$$, the random graph is a.a.s. in $$\mathcal{CFS}$$, while in Theorem 5.7 we show a random graph a.a.s. not in $$\mathcal{CFS}$$ at densities below $$\frac{1}{\sqrt{n}\log{n}}$$. Theorem 5.1 applies to graphs in a range strictly larger than that in which Theorem 4.4 holds (though our proof of Theorem 5.1 relies on Theorem 4.4 to deal with the large $$p$$ case). Theorem 5.1 combines with Theorem 4.5 to show that, for densities between $$\left(\log n/n\right)^{\frac{1}{2}}$$ and $$\left(\log n/n\right)^{\frac{1}{3}}$$, a random graph is asymptotically almost surely in $$\mathcal{CFS}$$ but not in $$\mathcal{AS}$$. We also note that Babson–Hoffman–Kahle [3] proved that a function of order $$n^{-\frac{1}{2}}$$ appears as the threshold for simple-connectivity in the Linial–Meshulam model for random 2–complexes [29]. It would be interesting to understand whether there is a connection between genericity of the $$\mathcal{CFS}$$ property and the topology of random 2-complexes. Unlike our results for the $$\mathcal{AS}$$ property, we do not establish a sharp threshold for the $$\mathcal{CFS}$$ property. In fact, we believe that neither the upper nor lower bounds, given in Theorem 5.1 and Theorem 5.7, for the critical density around which $$\mathcal{CFS}$$ goes from a.a.s. not holding to a.a.s. holding are sharp. Indeed, we believe that there is a sharp threshold for the $$\mathcal{CFS}$$ property located at $$p_c(n)=\theta(n^{-\frac{1}{2}})$$. This conjecture is linked to the emergence of a giant component in the “square graph” of $$\Gamma$$ (see the next section for a definition of the square graph and the heuristic discussion after the proof of Theorem 5.7). Applications to geometric group theory Our interest in the structure of random graphs was sparked largely by questions about the large-scale geometry of right-angled Coxeter groups. 
Coxeter groups were first introduced in [15] as a generalization of reflection groups, that is, discrete groups generated by a specified set of reflections in Euclidean space. A reflection group is right-angled if the reflection loci intersect at right angles. An abstract right-angled Coxeter group generalizes this situation: it is defined by a group presentation in which the generators are involutions and the relations are obtained by declaring some pairs of generators to commute. Right-angled Coxeter groups (and more general Coxeter groups) play an important role in geometric group theory and are closely-related to some of that field’s most fundamental objects, for example, CAT(0) cube complexes [18, 27, 33] and (right-angled) Bruhat-Tits buildings (see e.g., [18]). A right-angled Coxeter group is determined by a unique finite simplicial presentation graph: the vertices correspond to the involutions generating the group, and the edges encode the pairs of generators that commute. In fact, the presentation graph uniquely determines the right-angled Coxeter group [32]. In this article, as an application of our results on random graphs, we continue the project of understanding large-scale geometric features of right-angled Coxeter groups in terms of the combinatorics of the presentation graph, begun in [8, 14, 17]. Specifically, we study right-angled Coxeter groups defined by random presentation graphs, focusing on the prevalence of two important geometric properties: relative hyperbolicity and thickness. Relative hyperbolicity, in the sense introduced by Gromov and equivalently formulated by many others [10, 22, 24, 35], when it holds, is a powerful tool for studying groups. On the other hand, thickness of a finitely-generated group (more generally, a metric space) is a property introduced by Behrstock–Druţu–Mosher in [6] as a geometric obstruction to relative hyperbolicity and has a number of powerful geometric applications. For example, thickness gives bounds on divergence (an important quasi-isometry invariant of a metric space) in many different groups and spaces [5, 7, 11, 17, 37]. Thickness is an inductive property: in the present context, a finitely generated group $$G$$ is thick of order $$0$$ if and only if it decomposes as the direct product of two infinite subgroups. The group $$G$$ is thick of order $$n$$ if there exists a finite collection $$\mathcal H$$ of undistorted subgroups of $$G$$, each thick of order $$n-1$$, whose union generates a finite-index subgroup of $$G$$ and which has the following “chaining” property: for each $$g,g'\in G$$, one can construct a sequence $$g\in g_1H_1,g_2H_2,\ldots,g_kH_k\ni g'$$ of cosets, with each $$H_i\in\mathcal H$$, so that consecutive cosets have infinite coarse intersection. Many of the best-known groups studied by geometric group theorists are thick, and indeed thick of order $$1$$: one-ended right-angled Artin groups, mapping class groups of surfaces, outer automorphism groups of free groups, fundamental groups of three-dimensional graph manifolds, etc. [6]. The class of Coxeter groups contains many examples of hyperbolic and relatively hyperbolic groups. There is a criterion for hyperbolicity purely in terms of the presentation graph due to Moussong [31] and an algebraic criterion for relative hyperbolicity due to Caprace [13]. The class of Coxeter groups includes examples which are non-relatively hyperbolic, for instance, those constructed by Davis–Januszkiewicz [19] and, also, ones studied by Dani–Thomas [17]. 
In fact, in [8], this is taken further: it is shown that every Coxeter group is actually either thick, or hyperbolic relative to a canonical collection of thick Coxeter subgroups. Further, there is a simple, structural condition on the presentation graph, checkable in polynomial time, which characterizes thickness. This result is needed to deduce the applications below from our graph theoretic results. Charney and Farber initiated the study of random graph products (including right-angled Artin and Coxeter groups) using the Erdős-Rényi model of random graphs [14]. The structure of the group cohomology of random graph products was obtained in [20]. In [8], various results are proved about which random graphs have the thickness property discussed above, leading to the conclusion that, at certain low densities, random right-angled Coxeter groups are relatively hyperbolic (and thus not thick), while at higher densities, random right-angled Coxeter groups are thick. In this article, we improve significantly on one of the latter results, and also prove something considerably more refined: we isolate not just thickness of random right-angled Coxeter groups, but thickness of a specified order, namely $$1$$: Corollary 3.2 (Random Coxeter groups are thick of order 1.)There exists a constant $$C>0$$ such that if $$p\colon{\mathbb{N}}\rightarrow(0,1)$$ satisfies $$\left(\displaystyle\frac{C\log{n}}{n}\right)^{\frac{1}{2}}\leq p(n)\leq 1-\displaystyle\frac{(1+\epsilon)\log{n}}{n}$$ for some $$\epsilon>0$$, then the random right-angled Coxeter group $$W_{G_{n,p}}$$ is asymptotically almost surely thick of order exactly $$1$$, and in particular has quadratic divergence. Corollary 3.2 significantly improves on Theorem 3.10 of [8], as discussed in Section 3. This theorem follows from Theorems 5.1 and 4.4, the latter being needed to treat the case of large $$p(n)$$, including the interesting special case in which $$p$$ is constant. Remark 1.1. We note that characterizations of thickness of right-angled Coxeter groups in terms of the structure of the presentation graph appear to generalize readily to graph products of arbitrary finite groups and, probably, via the action on a cube complex constructed by Ruane and Witzel in [36], to arbitrary graph products of finitely generated abelian groups, using appropriate modifications of the results in [8]. □ Organization of the article In Section 2, we give the formal definitions of $$\mathcal{AS}$$ and $$\mathcal{CFS}$$ and introduce various other graph-theoretic notions we will need. In Section 3, we discuss the applications of our random graph results to geometric group theory, in particular, to right-angled Coxeter groups and more general graph products. In Section 4, we obtain a sharp threshold result for $$\mathcal{AS}$$ graphs. Section 5 is devoted to $$\mathcal{CFS}$$ graphs. Finally, Section 6 contains some simulations of random graphs with density near the threshold for $$\mathcal{AS}$$ and $$\mathcal{CFS}$$. 2 Definitions Convention 2.1 A graph is a pair of finite sets $$\Gamma=(V,E)$$, where $$V=V(\Gamma)$$ is a set of vertices, and $$E=E(\Gamma)$$ is a collection of pairs of distinct elements of $$V$$, which constitute the set of edges of $$G$$. A subgraph of $$\Gamma$$ is a graph $$\Gamma'$$ with $$V(\Gamma')\subseteq V(\Gamma)$$ and $$E(\Gamma')\subseteq E(\Gamma)$$; $$\Gamma'$$ is said to be an induced subgraph of $$\Gamma$$ if $$E(\Gamma')$$ consists exactly of those edges from $$E(\Gamma)$$ whose vertices lie in $$V(\Gamma')$$. 
In this article, we focus on induced subgraphs, and we generally write “subgraph” to mean “induced subgraph”. In particular, we often identify a subgraph with the set of vertices inducing it, and we write $$\vert \Gamma\vert$$ for the order of $$\Gamma$$, that is, the number of vertices it contains. A clique of size $$t$$ is a complete graph on $$t\geq0$$ vertices. This includes the degenerate case of the empty graph on $$t=0$$ vertices. □ Fig. 1. A graph in $$\mathcal{AS}$$. A block exhibiting inclusion in $$\mathcal{AS}$$ is shown in bold; the two (left-centrally located) ends of the bold block are highlighted. Definition 2.2 (Link, join). Given a graph $$\Gamma$$, the link of a vertex $$v\in\Gamma$$, denoted $${\mathrm Lk}_\Gamma(v)$$, is the subgraph spanned by the set of vertices adjacent to $$v$$. Given graphs $$A,B$$, the join $$A\star B$$ is the graph formed from $$A\sqcup B$$ by joining each vertex of $$A$$ to each vertex of $$B$$ by an edge. A suspension is a join where one of the factors $$A,B$$ is the graph consisting of two vertices and no edges. □ We now describe a family of graphs, denoted $$\mathcal{CFS}$$, which satisfy the global structural property that they are “constructed from squares.” Definition 2.3 ($$\boldsymbol{\mathcal{CFS}}$$). Given a graph $$\Gamma$$, let $$\square(\Gamma)$$ be the auxiliary graph whose vertices are the induced $$4$$–cycles from $$\Gamma$$, with two distinct $$4$$–cycles joined by an edge in $$\square(\Gamma)$$ if and only if they intersect in a pair of non-adjacent vertices of $$\Gamma$$ (i.e., in a diagonal). We refer to $$\square(\Gamma)$$ as the square-graph of $$\Gamma$$. A graph $$\Gamma$$ belongs to $$\mathcal{CFS}$$ if $$\Gamma=\Gamma'\star K$$, where $$K$$ is a (possibly empty) clique and $$\Gamma'$$ is a non-empty subgraph such that $$\square(\Gamma')$$ has a connected component $$C$$ such that the union of the $$4$$–cycles from $$C$$ covers all of $$V(\Gamma')$$. Given a vertex $$F\in\square(\Gamma)$$, we refer to the vertices in the $$4$$–cycle in $$\Gamma$$ associated to $$F$$ as the support of $$F$$. □ Remark 2.4. Dani–Thomas introduced component with full support graphs in [17], a subclass of the class of triangle-free graphs. We note that each component with full support graph is constructed from squares, but the converse is not true. Indeed, since we do not require our graphs to be triangle-free, our definition necessarily only counts induced 4–cycles and allows them to intersect in more ways than in [17]. This distinction is relevant to the application to Coxeter groups, which we discuss in Section 3. □ Definition 2.5 (Augmented suspension). The graph $$\Gamma$$ is an augmented suspension if it contains an induced subgraph $$B=\{w,w'\}\star \Gamma'$$, where $$w,w'$$ are nonadjacent and $$\Gamma'$$ is not a clique, satisfying the additional property that if $$v\in\Gamma-B$$, then $$\mathrm{Lk}_\Gamma(v)\cap \Gamma'$$ is not a clique. Let $$\mathcal{AS}$$ denote the class of augmented suspensions. Figure 1 shows a graph in $$\mathcal{AS}$$. □ Remark 2.6.
Neither the $$\mathcal{CFS}$$ nor the $$\mathcal{AS}$$ properties introduced above are monotone with respect to the addition of edges. This stands in contrast to the most commonly studied global properties of random graphs. □ Definition 2.7 (Block, core, ends). A block in $$\Gamma$$ is a subgraph of the form $$B(w,w')=\{w,w'\}\star \Gamma'$$ where $$\{w,w'\}$$ is a pair of non-adjacent vertices and $$\Gamma'\subset \Gamma$$ is a subgraph of $$\Gamma$$ induced by a set of vertices adjacent to both $$w$$ and $$w'$$. A block is maximal if $$V(\Gamma')=\mathrm{Lk}_{\Gamma}(w)\cap \mathrm{Lk}_{\Gamma}(w')$$. Given a block $$B=B(w,w')$$, we refer to the non-adjacent vertices $$w,w'$$ as the ends of $$B$$, denoted $$\mathrm{end}(B)$$, and the vertices of $$\Gamma'$$ as the core of $$B$$, denoted $$\mathrm{core}(B)$$. □ Note that $$\mathcal{AS}\subsetneq\mathcal{CFS}$$; indeed, Theorems 5.1 and 4.5 show that there must exist graphs in $$\mathcal{CFS}$$ that are not in $$\mathcal{AS}$$. Here we explain how any graph in $$\mathcal{AS}$$ is in $$\mathcal{CFS}$$. Lemma 2.8. Let $$\Gamma$$ be a graph in $$\mathcal{AS}$$. Then $$\Gamma \in \mathcal{CFS}$$. □ Proof Let $$B(w,w')=\{w,w'\}\star \Gamma'$$ be a maximal block in $$\Gamma$$ witnessing $$\Gamma \in \mathcal{AS}$$. Write $$\Gamma'=A\star D$$, where $$D$$ is the collection of all vertices of $$\Gamma'$$ which are adjacent to every other vertex of $$\Gamma'$$. Note that $$D$$ induces a clique in $$\Gamma$$. By definition of the $$\mathcal{AS}$$ property $$\Gamma'$$ is not a clique, whence $$A$$ contains at least one pair of non-adjacent vertices. Furthermore, by the definition of $$D$$, for every vertex $$a\in A$$ there exists $$a' \in A$$ with $$\{a,a'\}$$ non-adjacent. The $$4$$–cycles induced by $$\{w,w',a,a'\}$$ for non-adjacent pairs $$a, a'$$ from $$A$$ are connected in $$\square(\Gamma)$$. Denote the component of $$\square(\Gamma)$$ containing them by $$C$$. Consider now a vertex $$v\in \Gamma - B(w,w')$$. Since $$B(w,w')$$ is maximal, we have that at least one of $$w,w'$$ is not adjacent to $$v$$ — without loss of generality, let us assume that it is $$w$$. By the $$\mathcal{AS}$$ property, $$v$$ must be adjacent to a pair $$a,a'$$ of non-adjacent vertices from $$A$$. Then $$\{w, a,a', v\}$$ induces a $$4$$–cycle, which is adjacent to $$\{w,w',a,a'\}\in\square(\Gamma)$$ and hence lies in $$C$$. Finally, consider a vertex $$d \in D$$. If $$d$$ is adjacent to all vertices of $$\Gamma$$, then $$\Gamma$$ is the join of a graph with a clique containing $$d$$, and we can ignore $$d$$ with respect to establishing the $$\mathcal{CFS}$$ property. Otherwise $$d$$ is not adjacent to some $$v\in \Gamma-B(w,w')$$. By the $$\mathcal{AS}$$ property, $$v$$ is connected by edges to a pair of non-adjacent vertices $$\{a,a'\}$$ from $$A$$. Thus $$\{d,a,a',v\}$$ induces a $$4$$–cycle. Since (as established above) there is some $$4$$–cycle in $$C$$ containing $$\{a,a',v\}$$, we have that $$\{d,a,a',v\} \in C$$ as well. Thus $$\Gamma= \Gamma'' \star K$$, where $$K$$ is a clique and $$V(\Gamma'')$$ is covered by the union of the $$4$$–cycles in $$C$$, so that $$\Gamma \in \mathcal{CFS}$$ as claimed. ■ 3 Geometry of right-angled Coxeter groups If $$\Gamma$$ is a finite simplicial graph, the right-angled Coxeter group $$W_\Gamma$$ presented by $$\Gamma$$ is the group defined by the presentation $$\left\langle \mathrm{Vert}(\Gamma)\;\middle|\;\left\{w^2,\ uvu^{-1}v^{-1} : u,v,w\in\mathrm{Vert}(\Gamma),\ \{u,v\}\in\mathrm{Edge}(\Gamma)\right\}\right\rangle.$$ A result of Mühlherr [32] shows that the correspondence $$\Gamma\leftrightarrow W_\Gamma$$ is bijective.
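As a concrete illustration of this correspondence (a standard example, included here for the reader's convenience), take $$\Gamma$$ to be a single induced $$4$$–cycle with vertices $$a,b,c,d$$ and edges $$\{a,b\},\{b,c\},\{c,d\},\{d,a\}$$. Then $$\Gamma$$ is the join of its two diagonals $$\{a,c\}$$ and $$\{b,d\}$$, and the presentation above gives $$W_\Gamma\cong(\mathbb{Z}/2\ast\mathbb{Z}/2)\times(\mathbb{Z}/2\ast\mathbb{Z}/2)\cong D_\infty\times D_\infty.$$ More generally, $$W_{A\star B}\cong W_A\times W_B$$ for any join, which is why induced squares in the presentation graph give rise to the product subgroups underlying the thickness arguments used below.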
We can thus speak of “the random right-angled Coxeter group” — it is the right-angled Coxeter group presented by the random graph. (We emphasize that the above presentation provides the definition of a right-angled Coxeter group: this definition abstracts the notion of a reflection group – a subgroup of a linear group generated by reflections — but infinite Coxeter groups need not admit representations as reflection groups.) Recent articles have discussed the geometry of Coxeter groups, especially relative hyperbolicity and closely-related quasi-isometry invariants like divergence and thickness, cf. [8, 13, 17]. In particular, Dani–Thomas introduced a property they call having a component of full support for triangle-free graphs (which is exactly the triangle-free version of $$\mathcal{CFS}$$) and they prove that under the assumption $$\Gamma$$ is triangle-free, $$W_\Gamma$$ is thick of order at most $$1$$ if and only if it has quadratic divergence if and only if $$\Gamma$$ is in $$\mathcal{CFS}$$, see [17, Theorem 1.1 and Remark 4.8]. Since the densities where random graphs are triangle-free are also square-free (and thus not $$\mathcal{CFS}$$ — in fact, they are disconnected!), we need the following slight generalization of the result of Dani–Thomas: Proposition 3.1. Let $$\Gamma$$ be a finite simplicial graph. If $$\Gamma$$ is in $$\mathcal{CFS}$$ and $$\Gamma$$ does not decompose as a nontrivial join, then $$W_\Gamma$$ is thick of order exactly $$1$$. □ Proof Theorem II of [8] shows immediately that, if $$\Gamma\in\mathcal{CFS}$$, then $$W_\Gamma$$ is thick, being formed by a series of thick unions of $$4$$–cycles; since each $$4$$–cycle is a join, it follows that $$\Gamma$$ is thick of order at most $$1$$. On the other hand, [8, Proposition 2.11] shows that $$W_\Gamma$$ is thick of order at least $$1$$ provided $$\Gamma$$ is not a join. ■ Our results about random graphs yield: Corollary 3.2. There exists $$k>0$$ so that if $$p\colon{\mathbb{N}}\rightarrow(0,1)$$ and $$\epsilon>0$$ are such that $$\sqrt{\frac{k\log n}{n}}\leq p(n)\leq 1-\displaystyle\frac{(1-\epsilon)\log{n}}{n}$$ for all sufficiently large $$n$$, then for $$\Gamma\in{\mathcal G}(n,p)$$ the group $$W_\Gamma$$ is asymptotically almost surely thick of order exactly $$1$$ and hence has quadratic divergence. □ Proof Theorem 5.1 shows that any such $$\Gamma$$ is asymptotically almost surely in $$\mathcal{CFS}$$, whence $$W_\Gamma$$ is thick of order at most $$1$$. We emphasize that to apply this result for sufficiently large functions $$p(n)$$ the proof of Theorem 5.1 requires an application of Theorem 4.4 to establish that $$\Gamma$$ is a.a.s. in $$\mathcal{AS}$$ and hence in $$\mathcal{CFS}$$ by Lemma 2.8. By Proposition 3.1, to show that the order of thickness is exactly one, it remains to rule out the possibility that $$\Gamma$$ decomposes as a nontrivial join. However, this occurs if and only if the complement graph is disconnected, which asymptotically almost surely does not occur whenever $$p(n)\le 1-\frac{(1-\epsilon)\log{n}}{n}$$, by the sharp threshold for connectivity of $${\mathcal G}(n,1-p)$$ established by Erdős and Rényi in [21]. Since this holds for $$p(n)$$ by assumption, we conclude that asymptotically almost surely, $$W_\Gamma$$ is thick of order at least $$1$$. Since $$W_\Gamma$$ is CAT(0) and thick of order exactly $$1$$, the consequence about divergence now follows from [5]. 
■ This corollary significantly generalizes Theorem 3.10 of [8], which established that, if $$\Gamma\in{\mathcal G}(n,\frac{1}{2})$$, then $$W_\Gamma$$ is asymptotically almost surely thick. Theorem 3.10 of [8] does not provide effective bounds on the order of thickness and its proof is significantly more complicated than the proof of Corollary 3.2 given above — indeed, it required several days of computation (using 2013 hardware) to establish the base case of an inductive argument. Remark 3.3 (Higher-order thickness). A lower bound of $$p(n)=n^{-\frac{5}{6}}$$ for membership in a larger class of graphs whose corresponding Coxeter groups are thick can be found in [8, Theorem 3.4]. In fact, this argument can be adapted to give a simple proof that a.a.s. thickness does not occur at densities below $$n^{-\frac{3}{4}}$$. The correct threshold for a.a.s. thickness is, however, unknown. □ Remark 3.4 (Random graph products versus random presentations). Corollary 3.2 and Remark 3.3 show that the random graph model for producing random right-angled Coxeter groups generates groups with radically different geometric properties. This is in direct contrast to other methods of producing random groups, most notably Gromov’s random presentation model [25, 26] where, depending on the density of relators, groups are either almost surely hyperbolic or finite (with order at most $$2$$). This contrast speaks to the merits of considering a random right-angled Coxeter group as a natural place to study random groups. For instance, Calegari–Wilton recently showed that in the Gromov model a random group contains many subgroups which are isomorphic to the fundamental group of a compact hyperbolic 3–manifold [12]; does the random right-angled Coxeter group also contain such subgroups? Right-angled Coxeter groups, and indeed thick ones, are closely related to Gromov’s random groups in another way. When the parameter for a Gromov random group is $$<\frac{1}{6}$$, such a group is word-hyperbolic [25] and acts properly and cocompactly on a CAT(0) cube complex [34]. Hence the Gromov random group virtually embeds in a right-angled Artin group [4]. Moreover, at such parameters such a random group is one-ended [16], whence the associated right-angled Artin group is as well. By [4] this right-angled Artin group is thick of order 1. Since any right-angled Artin group is commensurable with a right-angled Coxeter group [19], one obtains a thick of order $$1$$ right-angled Coxeter group containing the randomly presented group. □ 4 Genericity of $$\boldsymbol{\mathcal{AS}}$$ We will use the following standard Chernoff bounds (see e.g., [2, Theorems A.1.11 and A.1.13]): Lemma 4.1 (Chernoff bounds). Let $$X_1,\ldots,X_n$$ be independent identically distributed random variables taking values in $$\{0,1\}$$, let $$X$$ be their sum, and let $$\mu=\mathbb E[X]$$. Then for any $$\delta\in(0,2/3)$$, $$\mathbb{P}\left(\vert X-\mu\vert\geq\delta\mu\right)\leq 2e^{-\frac{\delta^2\mu}{3}}.$$ □ Corollary 4.2. Let $$\varepsilon, \delta>0$$ be fixed. (i) If $$p(n)\ge \left(\frac{(6+\varepsilon)\log n}{\delta^2n}\right)^{1/2}$$, then a.a.s. for all pairs of distinct vertices $$\{x,y\}$$ in $$\Gamma \in {\mathcal G}(n,p)$$ we have $$\left\vert \vert \mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)\vert-p^2(n-2)\right\vert < \delta p^2(n-2)$$. (ii) If $$p(n)\ge \left(\frac{(9+\varepsilon)\log n}{\delta^2n}\right)^{1/3}$$, then a.a.s.
for all triples of distinct vertices $$\{x,y, z\}$$ in $$\Gamma \in {\mathcal G}(n,p)$$ we have $$\left\vert \vert \mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)\cap \mathrm{Lk}_{\Gamma}(z)\vert-p^3(n-3)\right\vert < \delta p^3(n-3)$$. □ Proof For (i), let $$\{x,y\}$$ be any pair of distinct vertices. For each vertex $$v \in \Gamma-\{x,y\}$$, set $$X_v$$ to be the indicator function of the event that $$v\in \mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)$$, and set $$X=\sum_v X_v$$ to be the size of $$\mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)$$. We have $$\mathbb{E}X=p^2(n-2)$$ and so by the Chernoff bounds above, $$\Pr\left(\vert X-p^2(n-2)\vert\geq \delta p^2(n-2)\right) \leq 2e^{-\frac{\delta^2p^2(n-2)}{3}}$$. Applying Markov’s inequality, the probability that there exists some “bad pair” $$\{x,y\}$$ in $$\Gamma$$ for which $$\vert\mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)\vert$$ deviates from its expected value by more than $$\delta p^2(n-2)$$ is at most $$\binom{n}{2}\cdot 2e^{-\frac{\delta^2p^2(n-2)}{3}}=o(1),$$ provided $$\delta^2 p^2n\ge (6+\varepsilon) \log n$$ and $$\varepsilon, \delta>0$$ are fixed. Thus for this range of $$p=p(n)$$, a.a.s. no such bad pair exists. The proof of (ii) is nearly identical. ■ Lemma 4.3. (i) Suppose $$1-p\geq \frac{\log n}{2n}$$. Then asymptotically almost surely, the order of a largest clique in $$\Gamma\in{\mathcal G}(n,p)$$ is $$o(n)$$. (ii) Let $$\eta$$ be fixed with $$0<\eta<1$$. Suppose $$1-p\geq \eta$$. Then asymptotically almost surely, the order of a largest clique in $$\Gamma \in {\mathcal G}(n,p)$$ is $$O(\log n)$$. □ Proof For (i), set $$r=\alpha n$$, for some $$\alpha$$ bounded away from $$0$$. Write $$H(\alpha)=\alpha\log\frac{1}{\alpha}+(1-\alpha)\log\frac{1}{1-\alpha}$$. Using the standard entropy bound $$\binom{n}{\alpha n}\leq e^{H(\alpha)n}$$ and our assumption for $$(1-p)$$, we see that the expected number of $$r$$-cliques in $$\Gamma$$ is $$\binom{n}{r}p^{\binom{r}{2}}\leq e^{H(\alpha)n}e^{\log(1-(1-p))\left(\frac{\alpha^2n^2}{2}+O(n)\right)}\leq \exp\left(-\frac{\alpha^2}{4}n\log n+O(n)\right)=o(1).$$ Thus by Markov’s inequality, a.a.s. $$\Gamma$$ does not contain a clique of size $$r$$, and the order of a largest clique in $$\Gamma$$ is $$o(n)$$. The proof of (ii) is similar: suppose $$1-p>\eta$$. Then for any $$r\leq n$$, $$\binom{n}{r}p^{\binom{r}{2}}<n^r(1-\eta)^{r(r-1)/2}=\exp\left(r\left(\log n-\frac{r-1}{2}\log\frac{1}{1-\eta}\right)\right),$$ which for $$\eta>0$$ fixed and $$r-1>\frac{2}{\log (1/(1-\eta))}(1+ \log n )$$ is at most $$n^{-\frac{2}{\log (1/(1-\eta))}}=o(1)$$. We may thus conclude as above that a.a.s. a largest clique in $$\Gamma$$ has order $$O(\log n)$$. ■ Theorem 4.4 (Genericity of $$\mathcal{AS}$$). Suppose $$p(n)\ge(1+\epsilon)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}$$ for some $$\epsilon>0$$ and $$(1-p)n^2\to\infty$$. Then, a.a.s. $$\Gamma\in{\mathcal G}(n,p)$$ is in $$\mathcal{AS}$$. □ Proof Let $$\delta>0$$ be a small constant to be specified later (the choice of $$\delta$$ will depend on $$\epsilon$$). By Corollary 4.2 (i), for $$p(n)$$ in the range we are considering, a.a.s. all joint links have size at least $$(1-\delta)p^2(n-2)$$. Denote this event by $$\mathcal{E}_1$$. We henceforth condition on $$\mathcal{E}_1$$ occurring (note that this only affects the values of probabilities by an additive factor of $$\mathbb{P}(\mathcal{E}_1^c)=O(n^{-\varepsilon})=o(1)$$). With probability $$1-p^{\binom{n}{2}}=1-o(1)$$, $$\Gamma$$ is not a clique, whence there exist non-adjacent vertices in $$\Gamma$$. We henceforth assume $$\Gamma\neq K_n$$, and choose $$v_1, v_2\in \Gamma$$ which are not adjacent. Let $$B$$ be the maximal block associated with the pair $$(v_1,v_2)$$.
We separate the range of $$p$$ into three. Case 1: $$\boldsymbol{p}$$ is “far” from both the threshold and $$1$$. Let $$\alpha>0$$ be fixed, and suppose $$\alpha n^{-1/4}\leq p \leq 1- \frac{\log n}{2n}$$. Let $$\mathcal{E}_2$$ be the event that for every vertex $$v\in\Gamma-B$$ the set $$\mathrm{Lk}_{\Gamma}(v)\cap B$$ has size at least $$\frac{1}{2}p^3(n-3)$$. By Corollary 4.2, a.a.s. event $$\mathcal{E}_2$$ occurs, that is, all vertices in $$\Gamma-B$$ have this property. We claim that a.a.s. there is no clique of order at least $$\frac{1}{2}p^3(n-3)$$ in $$\Gamma$$. Indeed, if $$p<1-\eta$$ for some fixed $$\eta>0$$, then by Lemma 4.3 part (ii), a largest clique in $$\Gamma$$ has order $$O(\log n)=o(p^3n)$$. On the other hand, if $$1-\eta <p\leq 1- \frac{\log n}{2n}$$, then by Lemma 4.3 part (i), a largest clique in $$\Gamma$$ has order $$o(n)=o(p^3n)$$. Thus in either case a.a.s. for every $$v\in \Gamma-B$$, $$\mathrm{Lk}_{\Gamma}(v)\cap B$$ is not a clique and hence $$v \in \overline{B}$$, so that a.a.s. $$\overline{B}=\Gamma$$, and $$\Gamma \in \mathcal{AS}$$ as required. Case 2: $$\boldsymbol{p}$$ is “close” to the threshold. Suppose that $$(1+\epsilon)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}\le p(n)$$ and $$np^4\to 0$$. Let $$\vert B\vert=m+2$$. By our conditioning, we have $$(1-\delta)(n-2)p^2\leq m \leq (1+\delta)(n-2)p^2$$. The probability that a given vertex $$v\in \Gamma$$ is not in $$\overline{B}$$ is given by: $$\mathbb{P}\left(v\notin\overline{B}\,\middle\vert\,\vert B\vert=m+2\right)=(1-p)^m+mp(1-p)^{m-1}+\sum_{r=2}^{m}\binom{m}{r}p^r(1-p)^{m-r}p^{\binom{r}{2}}. \qquad (1)$$ In this equation, the first two terms come from the case where $$v$$ is connected to $$0$$ and $$1$$ vertex in $$B\setminus\{v_1,v_2\}$$ respectively, while the third term comes from the case where the link of $$v$$ in $$B\setminus\{v_1,v_2\}$$ is a clique on $$r\geq 2$$ vertices. As we shall see, in the case $$np^4\to 0$$ which we are considering, the contribution from the first two terms dominates. Let us estimate their order: $$(1-p)^m+mp(1-p)^{m-1}=\left(1+\frac{mp}{1-p}\right)(1-p)^m\leq\left(1+\frac{mp}{1-p}\right)e^{-mp}\leq\left(1+\frac{(1-\delta)(1+\epsilon)^3\log n}{1-p}\right)n^{-(1-\delta)(1+\epsilon)^3}.$$ Taking $$\delta<1-\frac{1}{(1+\epsilon)^3}$$ this expression is $$o(n^{-1})$$. We now treat the sum making up the remaining terms in Equation (1). To do so, we will analyze the quotient of successive terms in the sum. Fixing $$2\le r\le m-1$$ we see: $$\frac{\binom{m}{r+1}p^{r+1}(1-p)^{m-r-1}p^{\binom{r+1}{2}}}{\binom{m}{r}p^{r}(1-p)^{m-r}p^{\binom{r}{2}}}=\frac{m-r}{r+1}\cdot\frac{p^{r+1}}{1-p}\leq mp^{r+1}\leq mp^3.$$ Since $$np^4\to 0$$ (by assumption), this also tends to zero as $$n\to\infty$$. The quotients of successive terms in the sum thus tend to zero uniformly as $$n\to \infty$$, and we may bound the sum by a geometric series: $$\sum_{r=2}^{m}\binom{m}{r}p^r(1-p)^{m-r}p^{\binom{r}{2}}\leq\binom{m}{2}p^3(1-p)^{m-2}\sum_{i=0}^{m-2}(mp^3)^i\leq\left(\frac{1}{2}+o(1)\right)m^2p^3(1-p)^{m-2}.$$ Now, $$m^2p^3(1-p)^{m-2}=\frac{mp^2}{1-p}\cdot mp(1-p)^{m-1}$$. The second factor in this expression was already shown to be $$o(n^{-1})$$, while $$mp^2\le(1+\delta)np^4 \to 0$$ by assumption, so the total contribution of the sum is $$o(n^{-1})$$. Thus for any value of $$m$$ between $$(1-\delta)p^2(n-2)$$ and $$(1+\delta)p^2(n-2)$$, the right hand side of Equation (1) is $$o(n^{-1})$$, and we conclude: $$\mathbb{P}\left(v\notin\overline{B}\,\middle\vert\,\mathcal{E}_1\right)\leq o(n^{-1}).$$ Thus, by Markov’s inequality, $$\mathbb{P}\left(\overline{B}=\Gamma\right)\geq\mathbb{P}(\mathcal{E}_1)\left(1-\sum_v\mathbb{P}\left(v\notin\overline{B}\,\middle\vert\,\mathcal{E}_1\right)\right)=1-o(1),$$ establishing that a.a.s. $$\Gamma\in\mathcal{AS}$$, as claimed. Case 3: $$p$$ is “close” to $$1$$. Suppose $$n^{-2}\ll (1-p)\leq \frac{\log n}{2n}$$. Consider the complement of $$\Gamma$$, $$\Gamma^c\in {\mathcal G}(n, 1-p)$$. In this range of the parameter, $$\Gamma^c$$ a.a.s. has at least two connected components that contain at least two vertices.
In particular, taking complements, we see that $$\Gamma$$ is a.a.s. a join of two subgraphs, neither of which is a clique. It is a simple exercise to see that such a graph is in $$\mathcal{AS}$$, thus a.a.s. $$\Gamma\in\mathcal{AS}$$. ■ As we now show, the bound obtained in the above theorem is actually a sharp threshold. Analogous to the classical proof of the connectivity threshold [21], we consider vertices which are “isolated” from a block to prove that graphs below the threshold strongly fail to be in $$\mathcal{AS}$$. Theorem 4.5. If $$p\le\left(1-\epsilon\right)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}$$ for some $$\epsilon>0$$, then $$\Gamma\in{\mathcal G}(n,p)$$ is asymptotically almost surely not in $$\mathcal{AS}$$. □ Proof We will show that, for $$p$$ as hypothesized, every block has a vertex “isolated” from it. Explicitly, let $$\Gamma\in{\mathcal G}(n,p)$$ and consider $$B=B_{v,w}=\left(\mathrm{Lk}(v)\cap \mathrm{Lk}(w)\right)\cup\{v,w\}$$. Let $$X(v,w)$$ be the event that every vertex of $$\Gamma-B$$ is connected by an edge to some vertex of $$B$$. Clearly $$\Gamma\in \mathcal{AS}$$ only if the event $$X(v,w)$$ occurs for some pair of non-adjacent vertices $$\{v,w\}$$. Set $$X=\bigcup_{\{v,w\}} X(v,w)$$. Note that $$X$$ is a monotone event, closed under the addition of edges, so that the probability it occurs in $$\Gamma\in {\mathcal G}(n,p)$$ is a non-decreasing function of $$p$$. We now show that when $$p=\left(1-\epsilon\right)\left(\log{n}/ n\right)^{\frac{1}{3}}$$, a.a.s. $$X$$ does not occur, completing the proof. Consider a pair of vertices $$\{v,w\}$$, and set $$k=\vert B_{v,w}\vert$$. Conditional on $$B_{v,w}$$ having this size and using the standard inequality $$(1-x)\le e^{-x}$$, we have that $$\mathbb{P}(X(v,w))=\left(1-(1-p)^k\right)^{n-k}\leq e^{-(n-k)(1-p)^k}.$$ Now, the value of $$k$$ is concentrated around its mean: by Corollary 4.2, for any fixed $$\delta>0$$ and all $$\{v,w\}$$, the order of $$B_{v,w}$$ is a.a.s. at most $$(1+\delta)np^2$$. Conditioning on this event $$\mathcal{E}$$, we have that for any pair of vertices $$v,w$$, $$\mathbb{P}\left(X(v,w)\mid\mathcal{E}\right)\leq\max_{k\leq(1+\delta)np^2}e^{-(n-k)(1-p)^k}=e^{-\left(n-(1+\delta)np^2\right)(1-p)^{(1+\delta)np^2}}.$$ Now $$(1-p)^{(1+\delta)np^2} = e^{(1+\delta)np^2\log(1-p)}$$ and by Taylor’s theorem $$\log(1-p)=-p+O(p^2)$$, so that: $$\mathbb{P}\left(X(v,w)\mid\mathcal{E}\right)\leq\exp\left(-n\left(1+O(p^2)\right)e^{-(1+\delta)np^3(1+O(p))}\right)=e^{-n^{1-(1+\delta)(1-\epsilon)^3+o(1)}}.$$ Choosing $$\delta<\displaystyle\frac{1}{(1-\epsilon)^3}-1$$, the expression above is $$o(n^{-2})$$. Thus $$\mathbb{P}(X)\leq\mathbb{P}(\mathcal{E}^c)+\sum_{\{v,w\}}\mathbb{P}\left(X(v,w)\mid\mathcal{E}\right)=o(1)+\binom{n}{2}\cdot o(n^{-2})=o(1).$$ Thus a.a.s. the monotone event $$X$$ does not occur in $$\Gamma\in{\mathcal G}(n,p)$$ for $$p=\left(1-\epsilon\right)\left(\log{n}/ n\right)^{\frac{1}{3}}$$, and hence a.a.s. the property $$\mathcal{AS}$$ does not hold for $$\Gamma\in{\mathcal G}(n,p)$$ and $$p(n)\leq \left(1-\epsilon\right)\left(\log{n}/ n\right)^{\frac{1}{3}}$$. ■ 5 Genericity of $$\boldsymbol{\mathcal{CFS}}$$ The two main results in this section are upper and lower bounds for inclusion in $$\mathcal{CFS}$$. These results are established in Theorems 5.1 and 5.7. Theorem 5.1. If $$p\colon{\mathbb{N}}\rightarrow(0,1)$$ satisfies $$(1-p)n^2\to\infty$$ and $$p(n)\geq 5\sqrt{\frac{\log n}{n}}$$ for all sufficiently large $$n$$, then a.a.s. $$\Gamma \in {\mathcal G}(n,p)$$ lies in $$\mathcal{CFS}$$. □ The proof of Theorem 5.1 divides naturally into two ranges. First of all for large $$p$$, namely for $$p(n)\geq 2\left(\log{n}/n\right)^{\frac{1}{3}}$$, we appeal to Theorem 4.4 to show that a.a.s.
a random graph $$\Gamma\in {\mathcal G}(n,p)$$ is in $$\mathcal{AS}$$ and hence, by Lemma 2.8, in $$\mathcal{CFS}$$. In light of our proof of Theorem 4.4, we may think of this as the case when we can “beam up” every vertex of the graph $$\Gamma$$ to a single block $$B_{x,y}$$ in an appropriate way, and thus obtain a connected component of $$\square(\Gamma)$$ whose support is all of $$V(\Gamma)$$. Secondly, we have the case of “small $$p$$” where $$5\sqrt{\frac{\log n}{n}}\leq p(n)\leq 2\left(\frac{\log n}{n}\right)^{\frac{1}{3}},$$ which is the focus of the remainder of the proof. Here we construct a path of length of order $$n/\log n$$ in $$\square(\Gamma)$$ onto which every vertex $$v\in V(\Gamma)$$ can be “beamed up” by adding a $$4$$–cycle whose support contains $$v$$. This is done in the following manner: we start with an arbitrary pair of non-adjacent vertices, which form the ends of a block $$B_{0}$$. We then pick an arbitrary pair of non-adjacent vertices in the block $$B_0$$ and let $$B_1$$ denote the intersection of the block they define with $$V(\Gamma)\setminus B_0$$. We repeat this procedure, to obtain a chain of blocks $$B_0, B_1, B_2, \ldots, B_t$$, with $$t=O(n/\log n)$$, whose union contains a positive proportion of $$V(\Gamma)$$, and which all belong to the same connected component $$C$$ of $$\square(\Gamma)$$. This common component $$C$$ is then large enough that every remaining vertex of $$V(\Gamma)$$ can be attached to it. The main challenge is showing that our process of recording which vertices are included in the support of a component of the square graph does not die out or slow down too much, that is, that the block sizes $$\vert B_i\vert$$ remain relatively large at every stage of the process and that none of the $$B_i$$ form a clique. Having described our strategy, we now fill in the details, beginning with the following upper bound on the probability of $$\Gamma\in {\mathcal G}(n,p)$$ containing a copy of $$K_{10}$$, the complete graph on $$10$$ vertices. The following lemma is a variant of [21, Corollary 4]: Lemma 5.2. Let $$\Gamma\in{\mathcal G}(n,p)$$. If $$p=o(n^{-\frac{1}{4}})$$, then the probability that $$\Gamma\in {\mathcal G}(n,p)$$ contains a clique with at least $$10$$ vertices is at most $$o(n^{-\frac{5}{4}})$$. □ Proof The expected number of copies of $$K_{10}$$ in $$\Gamma$$ is $$\binom{n}{10}p^{\binom{10}{2}}\leq n^{10}p^{45}=o(n^{-5/4}).$$ The statement of the lemma then follows from Markov’s inequality. ■ Proof of Theorem 5.1. As remarked above, Theorem 4.4 proves Theorem 5.1 for “large” $$p$$, so we only need to deal with the case where $$5\sqrt{\frac{\log n}{n}}\leq p(n)\leq 2\left(\frac{\log n}{n}\right)^{\frac{1}{3}}.$$ We iteratively build a chain of blocks, as follows. Let $$\{x_0,y_0\}$$ be a pair of non-adjacent vertices in $$\Gamma$$, if such a pair exists, and an arbitrary pair of vertices if not. Let $$B_0$$ be the block with ends $$\{x_0, y_0\}$$. Now assume we have already constructed the blocks $$B_0, \ldots, B_i$$, for $$i\geq 0$$. Let $$C_i=\bigcup_{j=0}^{i} B_j$$ (for convenience we let $$C_{-1}=\emptyset$$). We will terminate the process and set $$t=i$$ if any of the three following conditions occur: $$\vert \mathrm{core}(B_i)\vert\leq 6\log n$$ or $$i\geq n/(6\log n)$$ or $$\vert V(\Gamma)\setminus C_i\vert \leq n/2$$. Otherwise, we let $$\{x_{i+1}, y_{i+1}\}$$ be a pair of non-adjacent vertices in $$\mathrm{core}(B_i)$$, if such a pair exists, and an arbitrary pair of vertices from $$\mathrm{core}(B_i)$$ otherwise. Let $$B_{i+1}$$ denote the intersection of the block whose ends are $$\{x_{i+1}, y_{i+1}\}$$ and the set $$\left(V(\Gamma)\setminus C_i\right)\cup\{x_{i+1}, y_{i+1}\}$$. Repeat.
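Since the procedure just described is entirely explicit, it is easy to experiment with it directly. The following Python sketch is purely illustrative (it is not the $$\texttt{C++}$$ software discussed in Section 6, and the helper names and the small parameter values are ours): it builds the chain of blocks greedily and stops under the same three conditions.

```python
import math
import random
from itertools import combinations

def random_graph(n, p, seed=0):
    """Adjacency sets of an Erdos-Renyi random graph G(n, p)."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def nonadjacent_pair(adj, vertices):
    """Some non-adjacent pair from `vertices`, or None if they induce a clique."""
    for u, v in combinations(sorted(vertices), 2):
        if v not in adj[u]:
            return u, v
    return None

def chain_of_blocks(adj):
    """Greedy chain of blocks B_0, B_1, ... following the procedure above.
    Each block is recorded as (ends, core)."""
    n = len(adj)
    pair = nonadjacent_pair(adj, adj.keys())
    if pair is None:                          # Gamma is a clique: nothing to chain
        return []
    x, y = pair
    core = adj[x] & adj[y]                    # core of the maximal block B_0
    covered = {x, y} | core                   # C_0
    blocks = [((x, y), core)]
    while (len(core) > 6 * math.log(n)              # core still large
           and len(blocks) < n / (6 * math.log(n))  # not too many blocks yet
           and n - len(covered) > n / 2):           # over half still uncovered
        pair = nonadjacent_pair(adj, core)
        if pair is None:                      # the core is a clique: stop
            break
        x, y = pair
        core = (adj[x] & adj[y]) - covered    # restrict to vertices not yet used
        covered |= core | {x, y}
        blocks.append(((x, y), core))
    return blocks

n = 400
p = 5 * math.sqrt(math.log(n) / n)
print(len(chain_of_blocks(random_graph(n, p))))
```

At densities of order $$\sqrt{\log n/n}$$ one typically observes that the process stops because at least half of the vertices have been covered, in line with the analysis that follows.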
Eventually this process must terminate, resulting in a chain of blocks $$B_0, B_1, \ldots, B_t$$. We claim that a.a.s. both of the following hold for every $$i$$ satisfying $$0 \leq i \leq t$$: (i) $$\vert\mathrm{core}(B_i)\vert > 6\log n$$; and (ii) $$\{x_i,y_i\}$$ is a non-edge in $$\Gamma$$. Part (i) follows from the Chernoff bound given in Lemma 4.1: for each $$i\geq -1$$ the set $$V(\Gamma)\setminus C_i$$ contains at least $$n/2$$ vertices by construction. For each vertex $$v\in V(\Gamma)\setminus C_i$$, let $$X_v$$ be the indicator function of the event that $$v$$ is adjacent to both of $$\{x_{i+1}, y_{i+1}\}$$. The random variables $$(X_v)$$ are independent identically distributed Bernoulli random variables with mean $$p^2$$. Their sum $$X=\sum_v X_v$$ is exactly the size of the core of $$B_{i+1}$$, and its expectation is at least $$p^2n/2$$. Applying Lemma 4.1, we get that $$\mathbb{P}(X<6\log n)\leq\mathbb{P}\left(X\leq\tfrac{1}{2}\mathbb{E}X\right)\leq 2e^{-\left(\frac{1}{2}\right)^2\frac{25\log n}{6}}=2e^{-\frac{25}{24}\log n}.$$ Thus the probability that $$\vert \mathrm{core}(B_i)\vert <6\log n$$ for some $$i$$ with $$0\leq i \leq t$$ is at most: $$t\cdot 2e^{-\frac{25}{24}\log n}\leq\frac{n}{6\log n}\cdot 2e^{-\frac{25}{24}\log n}=o(1).$$ Part (ii) is a trivial consequence of part (i) and Lemma 5.2: a.a.s. $$\mathrm{core}(B_i)$$ has size at least $$6\log n$$ for every $$i$$ with $$0 \leq i \leq t$$, and a.a.s. $$\Gamma$$ contains no clique on $$10 < \log n$$ vertices, so that a.a.s. at each stage of the process we could choose a non-adjacent pair $$\{x_i,y_i\}$$. From now on we assume that both (i) and (ii) occur, and that $$\Gamma$$ contains no clique of size $$10$$. In addition, we assume that $$\vert \mathrm{core}(B_0)\vert < 8n^{\frac{1}{3}}(\log{n})^{\frac{2}{3}}$$, which occurs a.a.s. by the Chernoff bound. Since $$\vert\mathrm{core}(B_i)\vert\geq 6\log n$$ for every $$i$$, we must have that by time $$0<t\leq n/(6\log n)$$ the process will have terminated with $$C_t=\bigcup_{i=0}^t B_i$$ supported on at least half of the vertices of $$V(\Gamma)$$. Lemma 5.3. Either one of the assumptions above fails or there exists a connected component $$F$$ of $$\square(\Gamma)$$ such that: (i) for every $$i$$ with $$0 \leq i \leq t$$ and every pair of non-adjacent vertices $$\{v,v'\}\in B_i$$, there is a vertex in $$F$$ whose support in $$\Gamma$$ contains the pair $$\{v,v'\}$$; and (ii) the support in $$\Gamma$$ of the $$4$$–cycles corresponding to vertices of $$F$$ contains all of $$C_t$$ with the exception of at most $$9$$ vertices of $$\mathrm{core}(B_0)$$; moreover, these exceptional vertices are each adjacent to all the vertices of $$\mathrm{core}(B_0)$$. □ Proof By assumption the ends $$\{x_0, y_0\}$$ of $$B_0$$ are non-adjacent. Thus, every pair of non-adjacent vertices $$\{v,v'\}$$ in $$\mathrm{core}(B_0)$$ induces a $$4$$–cycle in $$\Gamma$$ when taken together with $$\{x_0,y_0\}$$, and all of these squares clearly lie in the same component $$F$$ of $$\square(\Gamma)$$. Repeating the argument with the non-adjacent pair $$\{x_1, y_1\}\in \mathrm{core}(B_0)$$ and the block $$B_1$$, and then the non-adjacent pair $$\{x_2, y_2\}\in \mathrm{core}(B_1)$$ and the block $$B_2$$, and so on, we see that there is a connected component $$F$$ in $$\square(\Gamma)$$ such that for every $$0\leq i \leq t$$, every pair of non-adjacent vertices $$\{v,v'\} \in B_i$$ lies in a $$4$$–cycle corresponding to a vertex of $$F$$. This establishes (i). We now show that the support of $$F$$ contains all of $$C_t$$ except possibly some vertices in $$B_0$$. We already established that every pair $$\{x_i,y_i\}$$ is in the support of some vertex of $$F$$.
Suppose $$v\in \mathrm{core}(B_i)$$ for some $$i>0$$. By construction, $$v$$ is not adjacent to at least one of $$\{x_{i-1}, y_{i-1}\}$$, say $$x_{i-1}$$. Thus, $$\{x_{i-1},x_i, y_i, v\}$$ induces a $$4$$–cycle which contains $$v$$ and is associated to a vertex of $$F$$. Finally, suppose $$v\in \mathrm{core}(B_0)$$. By (i), $$v$$ fails to be in the support of $$F$$ only if $$v$$ is adjacent to all other vertices of $$\mathrm{core}(B_0)$$. Since, by assumption, $$\Gamma$$ does not contain any clique of size $$10$$, there are at most $$9$$ vertices not in the support of $$F$$, proving (ii). ■ Lemma 5.3 shows that a.a.s. we have a “large” component $$F$$ in $$\square(\Gamma)$$ whose support contains “many” pairs of non-adjacent vertices. In the last part of the proof, we use these pairs to prove that the remaining vertices of $$V(\Gamma)$$ are also supported on our connected component. For each $$i$$ satisfying $$0 \leq i \leq t$$, consider a maximal collection, $$M_{i}$$, of pairwise-disjoint pairs of vertices in $$\mathrm{core}(B_i)\setminus\{x_{i+1},y_{i+1}\}$$. Set $$M=\bigcup_i M_i$$, and let $$M'$$ be the subset of $$M$$ consisting of pairs, $$\{v,v'\}$$, for which $$v$$ and $$v'$$ are not adjacent in $$\Gamma$$. We have $$\vert M\vert=\sum_{i=1}^{t}\left(\left\lfloor\tfrac{1}{2}\vert\mathrm{core}(B_i)\vert\right\rfloor-1\right)\geq\frac{\vert C_t\vert}{2}-2t\geq\frac{n}{4}(1-o(1)).$$ The expected size of $$M'$$ is thus $$(1-p)n(\frac{1}{4}-o(1))=\frac{n}{4}(1-o(1))$$, and by the Chernoff bound from Lemma 4.1 we have $$\mathbb{P}\left(\vert M'\vert\leq\frac{n}{5}\right)\leq 2e^{-\left(\frac{1}{5}+o(1)\right)^2\frac{(1-p)n}{12}}\leq e^{-\left(\frac{1}{300}+o(1)\right)n},$$ which is $$o(1)$$. Thus a.a.s. $$M'$$ contains at least $$n/5$$ pairs, and by Lemma 5.3 each of these lies in some $$4$$–cycle of $$F$$. We now show that we can “beam up” every vertex not yet supported on $$F$$ by a $$4$$–cycle using a pair from $$M'$$. By construction we have at most $$n/2$$ unsupported vertices from $$V(\Gamma)\setminus C_t$$ and at most $$9$$ unsupported vertices from $$\mathrm{core}(B_0)$$. Assume that $$\vert M' \vert \geq n/5$$. Fix a vertex $$w\in V(\Gamma)\setminus C_t$$. For each pair $$\{v,v'\}\in M'$$, let $$X_{v,v'}$$ be the event that $$w$$ is adjacent to both $$v$$ and $$v'$$. We now observe that if $$X_{v,v'}$$ occurs for some pair $$\{v,v'\}\in M'\cap \mathrm{core}(B_i)$$, then $$w$$ is supported on $$F$$. By construction, $$w$$ is not adjacent to at least one of $$\{x_i, y_i\}$$, let us say without loss of generality $$x_i$$. Hence, $$\{x_i,v,v',w\}$$ is an induced $$4$$–cycle in $$\Gamma$$ which contains $$w$$ and which corresponds to a vertex of $$F$$. The probability that $$X_{v,v'}$$ fails to happen for every pair $$\{v,v'\}\in M'$$ is exactly $$(1-p^2)^{\vert M'\vert}\leq(1-p^2)^{n/5}\leq e^{-\frac{p^2n}{5}}=e^{-5\log n}.$$ Thus the expected number of vertices $$w \in V(\Gamma)\setminus C_t$$ which fail to be in the support of $$F$$ is at most $$\frac{n}{2}e^{-5\log n}=o(1)$$, whence by Markov’s inequality a.a.s. no such bad vertex $$w$$ exists. Finally, we deal with the possible $$9$$ left-over vertices $$b_1, b_2, \ldots, b_9$$ from $$\mathrm{core}(B_0)$$ we have not yet supported. We observe that since $$\mathrm{core}(B_0)$$ contains at most $$8n^{\frac{1}{3}}(\log{n})^{\frac{2}{3}}$$ vertices (as we are assuming and as occurs a.a.s.; see the discussion before Lemma 5.3), we do not stop the process with $$B_0$$; hence $$\mathrm{core}(B_1)$$ is non-empty and contains at least $$6\log n$$ vertices. As stated in Lemma 5.3, each unsupported vertex $$b_i$$ is adjacent to all other vertices in $$\mathrm{core}(B_0)$$, and in particular to both of $$\{x_1, y_1\}$$.
If $$b_i$$ fails to be adjacent to some vertex $$v\in \mathrm{core}(B_1)$$, then the set $$\{b_i, x_1, y_1, v\}$$ induces a $$4$$–cycle corresponding to a vertex of $$F$$ and whose support contains $$b_i$$. The probability that there is some $$b_i$$ not supported in this way is at most $$9\,\mathbb{P}\left(b_i\ \text{is adjacent to all of}\ \mathrm{core}(B_1)\right)\leq 9p^{6\log n}=o(1).$$ Thus a.a.s. we can “beam up” each of the vertices $$b_1, \ldots, b_9$$ to $$F$$ using a vertex $$v\in \mathrm{core}(B_1)$$, and the support of the component $$F$$ in the square graph $$\square(\Gamma)$$ contains all vertices of $$\Gamma$$. This shows that a.a.s. $$\Gamma \in \mathcal{CFS}$$, and concludes the proof of the theorem. ■ Remark 5.4. The constant $$5$$ in Theorem 5.1 is not optimal, and indeed it is not hard to improve on it slightly, albeit at the expense of some tedious calculations. We do not try to obtain a better constant, as we believe that the order of the upper bound we have obtained is not sharp. We conjecture that the actual threshold for $$\mathcal{CFS}$$ occurs when $$p(n)$$ is of order $$n^{-1/2}$$ (see the discussion below Theorem 5.7), but a proof of this is likely to require significantly more involved and sophisticated arguments than the present article. □ A simple lower bound for the emergence of the $$\mathcal{CFS}$$ property can be obtained from the fact that if $$\Gamma\in\mathcal{CFS}$$, then $$\Gamma$$ must contain at least $$n-3$$ squares; if $$p(n)\ll n^{-\frac{3}{4}}$$, then by Markov’s inequality a.a.s. a graph in $${\mathcal G}(n,p)$$ contains fewer than $$o(n)$$ squares and thus cannot be in $$\mathcal{CFS}$$. Below, in Theorem 5.7, we prove a better lower bound, showing that the order of the upper bound we proved in Theorem 5.1 is not off by a factor of more than $$(\log n)^{3/2}$$. Lemma 5.5. Let $$\Gamma$$ be a graph and let $$C$$ be the subgraph of $$\Gamma$$ supported on a given connected component of $$\square(\Gamma)$$. Then there exists an ordering $$v_1<v_2< \cdots <v_{\vert C\vert}$$ of the vertices of $$C$$ such that for all $$i\ge3$$, $$v_i$$ is adjacent in $$\Gamma$$ to at least two vertices preceding it in the order. □ Proof As $$C$$ is a component of $$\square(\Gamma)$$, it contains at least one induced $$4$$–cycle. Let $$v_1, v_2$$ be a pair of non-adjacent vertices from such an induced $$4$$–cycle. Then the two other vertices $$\{v_3,v_4\}$$ of the $$4$$–cycle are both adjacent in $$\Gamma$$ to both of $$v_{1}$$ and $$v_{2}$$. If this is all of $$C$$, then we are done. Otherwise, we know that each $$4$$–cycle in $$C$$ is “connected” to the cycle $$F=\{v_{1},v_{2},v_{3},v_{4}\}$$ via a sequence of induced $$4$$–cycles pairwise intersecting in pairwise non-adjacent vertices. In particular, there is some such $$4$$–cycle whose intersection with $$F$$ is either a pair of non-adjacent vertices in $$F$$ or three vertices of $$F$$; either way, we may add the new vertex next in the order. Continuing in this way, and using the fact that the numbers of vertices not yet reached form a strictly decreasing sequence of non-negative integers, the lemma follows. ■ Proposition 5.6. Let $$\delta>0$$. Suppose $$p\leq\frac{1}{\sqrt{n}\log n}$$. Then a.a.s. for $$\Gamma\in{\mathcal G}(n,p)$$, no component of $$\square(\Gamma)$$ has support containing more than $$4\log n$$ vertices of $$\Gamma$$. □ Proof Let $$\delta>0$$. Let $$m=\left\lceil \min\left(4\log n, 4\log \left(\frac{1}{p}\right) \right)\right\rceil$$, with $$p\leq 1/\left(\sqrt{n}\log n\right)$$. We shall show that a.a.s.
there is no ordered $$m$$–tuple of vertices $$v_1<v_2<\cdots <v_m$$ from $$\Gamma$$ such that for every $$i\ge 3$$ the vertex $$v_i$$ is adjacent to at least two vertices from $$\{v_j: 1\leq j <i\}$$. By Lemma 5.5, this is enough to establish our claim. Let $$v_1<v_2<\cdots < v_m$$ be an arbitrary ordered $$m$$–tuple of vertices from $$V(\Gamma)$$. For $$i\geq 3$$, let $$A_i$$ be the event that $$v_i$$ is adjacent to at least two vertices in the set $$\{v_j: 1\leq j <i\}$$. We have: $$\Pr(A_i)=\sum_{j=2}^{i-1}\binom{i-1}{j}p^j(1-p)^{i-j-1}. \qquad (2)$$ As in the proof of Theorem 4.4, we consider the quotients of successive terms in the sum to show that its order is given by the term $$j=2$$. To see this, observe: $$\frac{\binom{i-1}{j+1}p^{j+1}(1-p)^{i-j-2}}{\binom{i-1}{j}p^{j}(1-p)^{i-j-1}}=\frac{i-j-1}{j+1}\cdot\frac{p}{1-p}<mp,$$ where the final inequality holds for $$n$$ sufficiently large and $$p=p(n)$$ satisfying our assumption. Since $$m=O\left(\log n\right)$$ and $$p=o(n^{-1/2})$$ we have, again for $$n$$ large enough, that $$mp=o(1)$$, and we may bound the sum in equation (2) by a geometric series to obtain the bound: $$\Pr(A_i)=\binom{i-1}{2}p^2(1-p)^{i-3}(1+O(mp))\leq\frac{(i-1)^2}{2}p^2(1+O(mp)).$$ Now let $$A=\bigcap_{i=3}^m A_i$$. Note that the events $$A_i$$ are mutually independent, since they are determined by disjoint edge-sets. Thus we have: $$\Pr(A)=\prod_{i=3}^{m}\Pr(A_i)\leq\prod_{i=3}^{m}\left(\frac{(i-1)^2}{2}p^2(1+O(mp))\right)=\frac{\left((m-1)!\right)^2p^{2m-4}}{2^{m-2}}\left(1+O(m^2p)\right),$$ where in the last line we used the equality $$(1+O(mp))^{m-2} =1+O(m^2p)$$ to bound the error term. Thus we have that the expected number $$X$$ of ordered $$m$$–tuples of vertices from $$\Gamma$$ for which $$A$$ holds is at most: $$\mathbb{E}(X)=\frac{n!}{(n-m)!}\Pr(A)\leq n^m\cdot 4e^2\left(\frac{m^2p^{2-4/m}}{2e^2}\right)^m\left(1+O(m^2p)\right)=4e^2\left(\frac{nm^2p^{2-4/m}}{2e^2}\right)^m\left(1+O(m^2p)\right),$$ where in the first line we used the inequality $$(m-1)!\leq e (m/e)^{m}$$. We now consider the quantity $$f(n,m,p)=\frac{nm^2p^{2-4/m}}{2e^2},$$ which is raised to the $$m^{\textrm{th}}$$ power in the inequality above. We claim that $$f(n,m,p)\leq e^{-1+\log 2+o(1)}$$. We have two cases to consider: Case 1: $$m=\lceil 4\log n\rceil$$. Since $$4\log n \leq 4\log(1/p)$$, we deduce that $$p\leq n^{-1}$$. Then $$f(n,m,p)=\frac{n(4\log n)^2p^{2-o(1)}}{2e^2}\leq n^{-1+o(1)}\leq e^{-1+\log 2+o(1)}.$$ Case 2: $$m=\lceil 4\log(1/p)\rceil$$. First, note that $$p^{-4/m}=\exp\left(\frac{4\log(1/p)}{\lceil 4 \log 1/p\rceil}\right)\leq e$$. Also, for $$p$$ in the range $$[0, n^{-1/2}(\log n)^{-1}]$$ and $$n$$ large enough, $$p^2(\log(1/p))^2$$ is an increasing function of $$p$$ and is thus at most: $$\frac{1}{n(\log n)^2}\left(\frac{1}{2}\log n\right)^2\left(1+O\left(\frac{\log\log n}{\log n}\right)\right)=\frac{1}{4}n^{-1}(1+o(1)).$$ Plugging this into the expression for $$f(n,m,p)$$, we obtain: $$f(n,m,p)=(1+o(1))\frac{16n\left(\log(1/p)\right)^2p^{2-4/m}}{2e^2}\leq(1+o(1))\frac{2}{e}=e^{-1+\log 2+o(1)}.$$ Thus, in both cases (1) and (2) we have $$f(n,m,p)\leq e^{-1 +\log 2+o(1)}$$, as claimed, whence $$\mathbb{E}(X)\leq 4e^2\left(f(n,m,p)\right)^m\left(1+O(m^2p)\right)\leq 4e^2e^{-(1-\log 2)m+o(m)}\left(1+O(m^2p)\right)=o(1).$$ It follows from Markov’s inequality that the non-negative, integer-valued random variable $$X$$ is a.a.s. equal to $$0$$. In other words, a.a.s. there is no ordered $$m$$–tuple of vertices in $$\Gamma$$ for which the event $$A$$ holds and, hence by Lemma 5.5, no component in $$\square(\Gamma)$$ covering more than $$m\leq 4\log n$$ vertices of $$\Gamma$$. ■ Theorem 5.7. Suppose $$p\leq\frac{1}{\sqrt{n} \log n}$$. Then a.a.s. $$\Gamma\in{\mathcal G}(n,p)$$ is not in $$\mathcal{CFS}$$. □ Proof To show that $$\Gamma\not\in\mathcal{CFS}$$, we first show that, for $$p\leq \frac{1}{\sqrt{n} \log n}$$, a.a.s. there is no non-empty clique $$K$$ such that $$\Gamma=\Gamma' \star K$$. Indeed the standard Chernoff bound guarantees that we have a.a.s. no vertex in $$\Gamma$$ with degree greater than $$\sqrt{n}$$.
Thus to prove the theorem, it is enough to show that a.a.s. there is no connected component $$C$$ in $$\square(\Gamma)$$ containing all the vertices in $$\Gamma$$. Proposition 5.6 does this by establishing the stronger bound that a.a.s. there is no connected component $$C$$ covering more than $$4\log n$$ vertices. ■ While Theorem 5.7 improves on the trivial lower bound of $$n^{-3/4}$$, it is still off from the upper bound for the emergence of the $$\mathcal{CFS}$$ property established in Theorem 5.1. It is a natural question to ask where the correct threshold is located. Remark 5.8. We strongly believe that there is a sharp threshold for the $$\mathcal{CFS}$$ property analogous to the one we established for the $$\mathcal{AS}$$ property. What is more, we believe this threshold should essentially coincide with the threshold for the emergence of a giant component in the auxiliary square graph $$\square(\Gamma)$$. Indeed, our arguments in Proposition 5.6 and Theorem 5.1 both focus on bounding the growth of a component in $$\square(\Gamma)$$. Heuristically, we would expect a giant component to emerge in $$\square(\Gamma)$$ for $$p(n)=cn^{-1/2}$$, for some constant $$c>0$$, when the expected number of common neighbors of a pair of non-adjacent vertices in $$\Gamma$$ is $$c^2$$, and thus the expected number of distinct vertices in $$4$$–cycles which meet a fixed $$4$$–cycle in a non-edge is $$2c^2$$. What the precise value of $$c$$ should be is not entirely clear (a branching process heuristic suggests $$\sqrt{\displaystyle\frac{\sqrt{17}-3}{2}}$$ as a possible value, see Remark 6.2), however, and the dependencies in the square graph make its determination a delicate matter. □ 6 Experiments Theorems 4.4 and 4.5 show that $$\left(\log n/n\right)^{\frac{1}{3}}$$ is a sharp threshold for the family $$\mathcal{AS}$$, and Theorems 5.1 and 5.7 show that, up to logarithmic factors, $$n^{-\frac{1}{2}}$$ is the right order of magnitude of the threshold for $$\mathcal{CFS}$$. Below we provide some empirical results on the behavior of random graphs near the threshold for $$\mathcal{AS}$$ and the conjectured threshold for $$\mathcal{CFS}$$. We also compare our experimental data with analogous data at the connectivity threshold. Our experiments are based on various algorithms that we implemented in $$\texttt{C++}$$; the source code is available from the authors. At www.wescac.net/research.html or math.columbia.edu/~jason, all source code and data can be downloaded. We begin with the observation that computer simulations of $$\mathcal{AS}$$ and $$\mathcal{CFS}$$ are tractable. Indeed, it is easily seen that there are polynomial-time algorithms for deciding whether a given graph is in $$\mathcal{AS}$$ and/or $$\mathcal{CFS}$$. Testing for $$\mathcal{AS}$$ by examining each block and determining whether it witnesses $$\mathcal{AS}$$ takes $$O(n^5)$$ steps, where $$n$$ is the number of vertices. The $$\mathcal{CFS}$$ property is harder to detect, but essentially reduces to determining the component structure of the square graph. The square graph can be produced in polynomial time and, in polynomial time, one can find its connected components and check the support of these components in the original graph. Using our software, we tested random graphs in $${\mathcal G}(n,p)$$ for membership in $$\mathcal{AS}$$ for $$n\in\{300+100k\mid0\leq k\leq12\}$$ and $$\{p(n)=\alpha\left(\log n/n\right)^{\frac{1}{3}}\mid \alpha=0.80+0.1k,\,0\leq k\leq 9\}$$.
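For concreteness, the following Python sketch implements both membership tests in the naive form just described. It is only a schematic stand-in for the $$\texttt{C++}$$ implementation used in our experiments: the function names are ours, no attempt is made at efficiency, and the $$\mathcal{AS}$$ test examines, for each non-adjacent pair of ends, the maximal block they determine (if some block witnesses $$\mathcal{AS}$$, then so does the maximal block with the same ends).

```python
from collections import defaultdict
from itertools import combinations

def is_clique(adj, S):
    """True if S induces a complete subgraph; the empty set counts as a clique."""
    return all(v in adj[u] for u, v in combinations(sorted(S), 2))

def is_AS(adj):
    """Augmented-suspension test: examine the maximal block of each non-adjacent pair."""
    V = set(adj)
    for w, w2 in combinations(sorted(V), 2):
        if w2 in adj[w]:
            continue                                  # ends must be non-adjacent
        core = adj[w] & adj[w2]
        if is_clique(adj, core):
            continue                                  # the core must not be a clique
        outside = V - (core | {w, w2})
        if all(not is_clique(adj, adj[v] & core) for v in outside):
            return True
    return False

def induced_squares(adj):
    """All induced 4-cycles, each as a frozenset of its four vertices."""
    squares = []
    for a, b in combinations(sorted(adj), 2):
        if b in adj[a]:
            continue                                  # {a, b} is one diagonal
        for c, d in combinations(sorted(adj[a] & adj[b]), 2):
            if d not in adj[c] and (a, b) < (c, d):   # count each square once
                squares.append(frozenset((a, b, c, d)))
    return squares

def is_CFS(adj):
    """CFS test: split off the vertices joined to everything (the clique factor),
    then ask whether one component of the square graph supports every other vertex."""
    V = set(adj)
    rest = {v for v in V if len(adj[v]) < len(V) - 1}
    if not rest:
        return False                                  # Gamma is a clique
    squares = induced_squares(adj)
    parent = list(range(len(squares)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(squares)), 2):
        common = squares[i] & squares[j]
        if any(y not in adj[x] for x, y in combinations(sorted(common), 2)):
            parent[find(i)] = find(j)                 # the squares share a diagonal
    support = defaultdict(set)
    for i, S in enumerate(squares):
        support[find(i)] |= S
    return any(sup >= rest for sup in support.values())
```

Splitting off the vertices adjacent to everything is justified by the observation that such a vertex can never lie in an induced square, so it can only be accounted for by the clique factor $$K$$ in Definition 2.3.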
For each pair $$(n,p)$$ of this type, we generated $$400$$ random graphs and tested each for membership in $$\mathcal{AS}$$. (This number of tests ensures that, with probability approximately $$95\%$$, the measured proportion of $$\mathcal{AS}$$ graphs is within at most $$0.05$$ of the actual proportion.) The results are summarized in Figure 2. The data suggest that, fixing $$n$$, the probability that a random graph is in $$\mathcal{AS}$$ increases monotonically in the range of $$p$$ we are considering, rising sharply from almost zero to almost one. Fig. 2. Experimental prevalence of $$\mathcal{AS}$$ at density $$\alpha\left(\frac{\log n}{n}\right)^{\frac{1}{3}}$$. In Figure 3, we display the results of testing random graphs for membership in $$\mathcal{CFS}$$ for $$n\in\{100+100k\mid 0\leq k\leq 15\}$$ and $$\{p(n)=\alpha n^{-\frac{1}{2}} \mid \alpha=0.700+0.025k,\,0\leq k\leq 8\}$$. For each pair $$(n,p)$$ of this type, we generated $$400$$ random graphs and tested each for membership in $$\mathcal{CFS}$$. (This number of tests ensures that, with probability approximately $$95\%$$, the measured proportion of $$\mathcal{CFS}$$ graphs is within at most $$0.05$$ of the actual proportion.) The data suggest that, fixing $$n$$, the probability that a random graph is in $$\mathcal{CFS}$$ increases monotonically in the considered range of $$p$$: rising sharply from almost zero to almost one inside a narrow window. Fig. 3. Experimental prevalence of $$\mathcal{CFS}$$ membership at density $$\alpha n^{-\frac{1}{2}}$$. Remark 6.1 (Block and core sizes). For each graph $$\Gamma$$ tested, the $$\mathcal{AS}$$ software also keeps track of how many nonadjacent pairs $$\{x,y\}$$ — that is, how many blocks — were tested before finding one sufficient to verify membership in $$\mathcal{AS}$$; if no such block is found, then all non-adjacent pairs have been tested and the graph is not in $$\mathcal{AS}$$. (A set of such data comes with the source code, and more is available upon request.) At densities near the threshold, this number of blocks is generally very large compared to the number of blocks tested at densities above the threshold. For example, in one instance with $$(n,\alpha)=(1000,0.89)$$, verifying that the graph was in $$\mathcal{AS}$$ was accomplished after testing just $$86$$ blocks, while at $$(1000,0.80)$$, a typical test checked all $$422961$$ blocks (expected number: $$423397$$) before concluding that the graph is not in $$\mathcal{AS}$$. At the same $$(n,\alpha)$$, another test found that the graph was in $$\mathcal{AS}$$, but only after $$281332$$ tests. These data are consonant with the spirit of our proofs of Theorems 4.5 and 4.4: in the former case, we showed that no “good” block exists, while in the latter we show that every block is good.
We believe that right at the threshold we should have some intermediate behavior, with the expected number of “good” blocks increasing continuously from $$0$$ to $$(1-p)\binom{n}{2}(1-o(1))$$. What is more, we expect that the more precise threshold for the $$\mathcal{AS}$$ property, coinciding with the appearance of a single “good” block, should be located “closer” to our lower bound than to our upper one, that is, at $$p(n)= \left((1-\epsilon)\log n/n\right)^{1/3}$$, where $$\epsilon(n)$$ is a sequence of strictly positive real numbers tending to $$0$$ as $$n\rightarrow \infty$$ (most likely decaying at a rate just faster than $$(\log n)^{-2}$$, see below). Our experimental data, which exhibit a steep rise in $$\Pr(\mathcal{AS})$$ strictly before the value $$\alpha$$ hits one, give some support to this guess. Finally, our observations on the number of blocks suggest a natural way to understand the influence of higher-order terms on the emergence of the $$\mathcal{AS}$$ property: at exactly the threshold for $$\mathcal{AS}$$, the event $$E(v,w)$$ that a pair of non-adjacent vertices $$\{v,w\}$$ gives rise to a “good” block is rare and, despite some mild dependencies, the number $$N$$ of pairs $$\{v,w\}$$ for which $$E(v,w)$$ occurs is very likely to be distributed approximately like a Poisson random variable. The probability $$\Pr(N\geq 1)$$ would then be a very good approximation for $$\Pr(\mathcal{AS})$$. “Good” blocks would thus play a role for the emergence of the $$\mathcal{AS}$$ property in random graphs analogous to that of isolated vertices for connectivity in random graphs. When $$p=\left((1-\epsilon) \log n/n\right)^{\frac{1}{3}}$$, the expectation of $$N$$ is roughly $$ne^{-n^{\epsilon}(1-\epsilon)\log n}$$. This expectation is $$o(1)$$ when $$0<\epsilon(n)=\Omega(1/n)$$ and is $$1/2$$ when $$\epsilon(n)=(1+o(1)) \log 2/(\log n)^2$$. This suggests that the emergence of $$\mathcal{AS}$$ should occur when $$\epsilon(n)$$ decays just a little faster than $$(\log n)^{-2}$$. □ Remark 6.2. Our data suggest that the prevalence of $$\mathcal{CFS}$$ is closely related to the emergence of a giant component in the square graph. Indeed, below the experimentally observed threshold for $$\mathcal{CFS}$$, not only is the support of the largest component in the square graph not all of $$\Gamma \in {\mathcal G}(n,p)$$, but in fact the size of the support of the largest component is an extremely small proportion of the vertices (see Figure 4). In the Erdős–Rényi random graph, a giant component emerges when $$p$$ is around $$1/n$$, that is, when vertices begin to expect at least one neighbor; this corresponds to a paradigmatic condition of expecting at least one child for survival of a Galton–Watson process (see [9] for a modern treatment of the topic). The heuristic observation that, when $$p=cn^{-1/2}$$, the vertices of a diagonal $$e$$ in a fixed $$4$$-cycle $$F$$ expect to have $$c^2$$ common neighbors outside the $$4$$-cycle, giving rise to an expected $$\binom{c^2+2}{2}-1$$ new $$4$$-cycles connected to $$F$$ through $$e$$ in $$\square(\Gamma)$$, suggests that $$c=\sqrt{\displaystyle\frac{\sqrt{17}-3}{2}}\approx 0.7494$$ could be a reasonable guess for the location of the threshold for the $$\mathcal{CFS}$$ property. Our data, although not definitive, appear somewhat supportive of this guess: see Figure 4, which is based on the same underlying data set as Figure 3.
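For the reader's convenience, the short computation behind this constant is the usual criticality condition for the branching heuristic just described: setting the expected number of offspring squares through a diagonal equal to $$1$$ gives $$\binom{c^2+2}{2}-1=1\iff(c^2+2)(c^2+1)=4\iff c^4+3c^2-2=0\iff c^2=\frac{\sqrt{17}-3}{2},$$ so that $$c=\sqrt{(\sqrt{17}-3)/2}\approx 0.7494$$.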
We note that unlike an Erdős–Rényi random graph, the square graph $$\square(\Gamma)$$ exhibits some strong local dependencies, which may make the determination of the exact location of its phase transition a much more delicate affair. □

Fig. 4. Fraction of vertices supporting the largest $$\mathcal{CFS}$$–subgraph at density $$\alpha n^{-\frac{1}{2}}$$. (A colour version of this figure is available from the journal’s website.)

Remark 6.3. For comparison with the threshold data for $$\mathcal{AS}$$ and $$\mathcal{CFS}$$, we include below a similar figure of experimental data for connectivity for $$\alpha$$ from $$0.8$$ to $$1.4$$, where $$p=\frac{\alpha\log n}{n}$$. Given what we know about the thresholds for connectivity and the $$\mathcal{AS}$$ property, this last set of data together with Figure 2 should serve as a warning not to draw overly strong conclusions: the graphs we tested are sufficiently large for the broader picture to emerge, but probably not large enough to allow us to pinpoint the exact location of the threshold for $$\mathcal{CFS}$$. □

Fig. 5. Experimental prevalence of connectedness at density $$\alpha\frac{\log n}{n}$$. (A colour version of this figure is available from the journal’s website.)

Funding

This work was supported by the National Science Foundation [DMS-0739392 to J.B., 1045119 to M.F.H.] and also supported by a Simons Fellowship to J.B.

Acknowledgments

J.B. thanks the Barnard/Columbia Mathematics Department for their hospitality during the writing of this article. J.B. thanks Noah Zaitlen for introducing him to the joy of cluster computing and for his generous time spent answering remedial questions about programming. Also, thanks to Elchanan Mossel for several interesting conversations about random graphs. M.F.H. and T.S. thank the organizers of the 2015 Geometric Groups on the Gulf Coast conference, at which some of this work was completed. This work benefited from several pieces of software, the results of one of which are discussed in Section 6. Some of the software written by the authors incorporates components previously written by J.B. and M.F.H. jointly with Alessandro Sisto. Another related useful program was written by Robbie Lyman under the supervision of J.B. and T.S. during an REU program supported by NSF Grant DMS-0739392; see [30]. This research was supported, in part, by a grant of computer time from the City University of New York High Performance Computing Center under NSF Grants CNS-0855217, CNS-0958379, and ACI-1126113. We thank the anonymous referee for their helpful comments.

References

[1] Agol, I., Groves, D., and Manning, J. “The virtual Haken conjecture.” Documenta Mathematica 18 (2013): 1045–87.

[2] Alon, N., and Spencer, J. H. The Probabilistic Method, 3rd ed. Wiley-Interscience Series in Discrete Mathematics and Optimization. Hoboken, NJ: John Wiley & Sons, 2008.

[3] Babson, E., Hoffman, C., and Kahle, M. “The fundamental group of random 2-complexes.” Journal of the American Mathematical Society 24, no. 1 (2011): 1–28.
[4] Behrstock, J., and Charney, R. “Divergence and quasimorphisms of right-angled Artin groups.” Mathematische Annalen 352 (2012): 339–56.

[5] Behrstock, J., and Druţu, C. “Divergence, thick groups, and short conjugators.” Illinois Journal of Mathematics 58, no. 4 (2014): 939–80.

[6] Behrstock, J., Druţu, C., and Mosher, L. “Thick metric spaces, relative hyperbolicity, and quasi-isometric rigidity.” Mathematische Annalen 344, no. 3 (2009): 543–95.

[7] Behrstock, J., and Hagen, M. “Cubulated groups: thickness, relative hyperbolicity, and simplicial boundaries.” Geometry, Groups, and Dynamics 10, no. 2 (2016): 649–707.

[8] Behrstock, J., Hagen, M. F., and Sisto, A. “Thickness, relative hyperbolicity, and randomness in Coxeter groups.” Algebraic and Geometric Topology, to appear; with an appendix written jointly with Pierre-Emmanuel Caprace.

[9] Bollobás, B., and Riordan, O. “A simple branching process approach to the phase transition in $$G_{n, p}$$.” The Electronic Journal of Combinatorics 19, no. 4 (2012): P21.

[10] Bowditch, B. H. “Relatively hyperbolic groups.” International Journal of Algebra and Computation 22, no. 3 (2012): 1250016, 66.

[11] Brock, J., and Masur, H. “Coarse and synthetic Weil–Petersson geometry: quasi-flats, geodesics, and relative hyperbolicity.” Geometry & Topology 12 (2008): 2453–95.

[12] Calegari, D., and Wilton, H. “3–manifolds everywhere.” arXiv:math.GR/1404.7043 (accessed on April 29, 2014).

[13] Caprace, P.-E. “Buildings with isolated subspaces and relatively hyperbolic Coxeter groups.” Innovations in Incidence Geometry 10 (2009): 15–31. In proceedings of Buildings & Groups, Ghent.

[14] Charney, R., and Farber, M. “Random groups arising as graph products.” Algebraic and Geometric Topology 12 (2012): 979–96.

[15] Coxeter, H. S. “Discrete groups generated by reflections.” Annals of Mathematics (1934): 588–621.

[16] Dahmani, F., Guirardel, V., and Przytycki, P. “Random groups do not split.” Mathematische Annalen 349, no. 3 (2011): 657–73.

[17] Dani, P., and Thomas, A. “Divergence in right-angled Coxeter groups.” Transactions of the American Mathematical Society 367, no. 5 (2015): 3549–77.

[18] Davis, M. W. The Geometry and Topology of Coxeter Groups. Vol. 32 of London Mathematical Society Monographs Series. Princeton, NJ: Princeton University Press, 2008.

[19] Davis, M. W., and Januszkiewicz, T. “Right-angled Artin groups are commensurable with right-angled Coxeter groups.” Journal of Pure and Applied Algebra 153, no. 3 (2000): 229–35.

[20] Davis, M. W., and Kahle, M. “Random graph products of finite groups are rational duality groups.” Journal of Topology 7, no. 2 (2014): 589–606.

[21] Erdős, P., and Rényi, A. “On the evolution of random graphs. II.” Bulletin de l’Institut International de Statistique 38 (1961): 343–47.

[22] Farb, B. “Relatively hyperbolic groups.” Geometric and Functional Analysis (GAFA) 8, no. 5 (1998): 810–40.

[23] Gilbert, E. “Random graphs.” Annals of Mathematical Statistics 30, no. 4 (1959): 1141–44.

[24] Gromov, M. “Hyperbolic groups.” In Essays in Group Theory, edited by S. Gersten, vol. 8 of MSRI Publications. New York, NY: Springer, 1987.
[25] Gromov, M. “Asymptotic invariants of infinite groups.” In Geometric Group Theory, edited by G. A. Niblo and M. A. Roller, vol. 2, London Mathematical Society Lecture Notes 182 (1993): 1–295.

[26] Gromov, M. “Random walk in random groups.” Geometric and Functional Analysis (GAFA) 13, no. 1 (2003): 73–146.

[27] Haglund, F., and Wise, D. T. “Coxeter groups are virtually special.” Advances in Mathematics 224, no. 5 (2010): 1890–903.

[28] Kahle, M. “Sharp vanishing thresholds for cohomology of random flag complexes.” Annals of Mathematics (2) 179, no. 3 (2014): 1085–107.

[29] Linial, N., and Meshulam, R. “Homological connectivity of random 2-complexes.” Combinatorica 26, no. 4 (2006): 475–87.

[30] Lyman, R. Algorithmic Computation of Thickness in Right-Angled Coxeter Groups. Undergraduate honors thesis, Columbia University, 2015.

[31] Moussong, G. Hyperbolic Coxeter Groups. PhD thesis, Ohio State University, 1988. Available at https://people.math.osu.edu/davis.12/papers/Moussongthesis.pdf (accessed on November 21, 2016).

[32] Mühlherr, B. “Automorphisms of graph-universal Coxeter groups.” Journal of Algebra 200, no. 2 (1998): 629–49.

[33] Niblo, G. A., and Reeves, L. D. “Coxeter groups act on CAT(0) cube complexes.” Journal of Group Theory 6, no. 3 (2003): 399.

[34] Ollivier, Y., and Wise, D. “Cubulating random groups at density less than 1/6.” Transactions of the American Mathematical Society 363, no. 9 (2011): 4701–733.

[35] Osin, D. V. “Relatively hyperbolic groups: intrinsic geometry, algebraic properties, and algorithmic problems.” Memoirs of the American Mathematical Society 179, no. 843 (2006): vi+100.

[36] Ruane, K., and Witzel, S. “CAT(0) cubical complexes for graph products of finitely generated abelian groups.” arXiv preprint arXiv:1310.8646 (2013), to appear in New York Journal of Mathematics.

[37] Sultan, H. “The asymptotic cones of Teichmüller space and thickness.” Algebraic and Geometric Topology 15, no. 5 (2015): 3071–106.

© The Author(s) 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: [email protected]. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).
# Global Structural Properties of Random Graphs

International Mathematics Research Notices, Volume 2018, Issue 5 (March 1, 2018), 31 pages.

Publisher: Oxford University Press
ISSN: 1073-7928; eISSN: 1687-0247
DOI: 10.1093/imrn/rnw287
### Abstract
Abstract We study two global structural properties of a graph $$\Gamma$$, denoted $$\mathcal{AS}$$ and $$\mathcal{CFS}$$, which arise in a natural way from geometric group theory. We study these properties in the Erdős–Rényi random graph model $${\mathcal G}(n,p)$$, proving the existence of a sharp threshold for a random graph to have the $$\mathcal{AS}$$ property asymptotically almost surely, and giving fairly tight bounds for the corresponding threshold for the $$\mathcal{CFS}$$ property. As an application of our results, we show that for any constant $$p$$ and any $$\Gamma\in{\mathcal G}(n,p)$$, the right-angled Coxeter group $$W_\Gamma$$ asymptotically almost surely has quadratic divergence and thickness of order $$1$$, generalizing and strengthening a result of Behrstock–Hagen–Sisto [8]. Indeed, we show that at a large range of densities a random right-angled Coxeter group has quadratic divergence. 1 Introduction In this article, we consider two properties of graphs motivated by geometric group theory. We show that these properties are typically present in random graphs. We repay the debt to geometric group theory by applying our (purely graph-theoretic) results to the large-scale geometry of Coxeter groups. Random graphs Let $${\mathcal G}(n,p)$$ be the random graph model on $$n$$ vertices obtained by including each edge independently at random with probability $$p=p(n)$$. The parameter $$p$$ is often referred to as the density of $${\mathcal G}(n,p)$$. The model $${\mathcal G}(n,p)$$ was introduced by Gilbert [23], and the resulting random graphs are usually referred to as the “Erdős–Rényi random graphs” in honor of Erdős and Rényi’s seminal contributions to the field, and we follow this convention. We say that a property $$\mathcal{P}$$ holds asymptotically almost surely (a.a.s.) in $${\mathcal G}(n,p)$$ if for $$\Gamma\in{\mathcal G}(n,p),$$ we have $$\mathbb{P}(\Gamma \in \mathcal{P})\rightarrow 1$$ as $$n\rightarrow \infty$$. In this article, we will be interested in proving that certain global properties hold a.a.s. in $${\mathcal G}(n,p)$$ both for a wide range of probabilities $$p=p(n)$$. A graph property is (monotone) increasing if it is closed under the addition of edges. A paradigm in the theory of random graphs is that global increasing graph properties exhibit sharp thresholds in $${\mathcal G}(n,p)$$: for many global increasing properties $$\mathcal{P}$$, there is a critical density$$p_c=p_c(n)$$ such that for any fixed $$\epsilon>0$$ if $$p<(1-\epsilon)p_c$$ then a.a.s. $$\mathcal{P}$$ does not hold in $${\mathcal G}(n,p)$$, while if $$p>(1+\epsilon)p_c$$ then a.a.s. $$\mathcal{P}$$ holds in $${\mathcal G}(n,p)$$. A quintessential example is the following classical result of Erdős and Rényi which provides a sharp threshold for connectedness: Theorem (Erdős–Rényi; [21]). There is a sharp threshold for connectivity of a random graph with critical density $$p_c(n)=\frac{\log(n)}{n}$$. □ The local structure of the Erdős–Rényi random graph is well understood, largely due to the assumption of independence between the edges. For example, Erdős–Rényi and others have obtained threshold densities for the existence of certain subgraphs in a random graph (see e.g., [21, Theorem 1, Corollaries 1–5]). 
In earlier applications of random graphs to geometric group theory, this feature of the model was successfully exploited in order to analyze the geometry of right-angled Artin and Coxeter groups presented by random graphs; this is notable, for example, in the work of Charney and Farber [14]. In particular, the presence of an induced square implies non-hyperbolicity of the associated right-angled Coxeter group [14, 21, 31]. In this article, we take a more global approach. Earlier work established a correspondence between some fundamental geometric properties of right-angled Coxeter groups and large-scale structural properties of the presentation graph, rather than local properties such as the presence or absence of certain specified subgraphs. The simplest of these properties is the property of being the join of two subgraphs that are not cliques. One large scale graph property relevant in the present context is a property studied in [8] which, roughly, says that the graph is constructed in a particular organized, inductive way from joins. In this article, we discuss a refined version of this property, $$\mathcal{CFS}$$, which is a slightly-modified version of a property introduced by Dani–Thomas [17]. We also study a stronger property, $$\mathcal{AS}$$, and show it is generic in random graphs for a large range of $$p(n)$$, up $$1-\omega(n^{-2})$$. $$\boldsymbol{\mathcal{AS}}$$ graphs The first class of graphs we study is the class of augmented suspensions, which we denote $$\mathcal{AS}$$. A graph is an augmented suspension if it contains an induced subgraph which is a suspension (see Section 2 for a precise definition of this term), and any vertex which is not in that suspension is connected by edges to at least two nonadjacent vertices of the suspension. Theorem 4.4 and 4.5 (Sharp Threshold for $$\boldsymbol{\mathcal{AS}}$$).Let $$\epsilon>0$$ be fixed. If $$p=p(n)$$ satisfies $$p\ge(1+\epsilon)\left(\displaystyle\frac{\log n}{n}\right)^{\frac{1}{3}}$$ and $$(1-p)n^2\to\infty$$, then $$\Gamma\in{\mathcal G}(n,p(n))$$ is a.a.s. in $$\mathcal{AS}$$. On the other hand, if $$p\le(1-\epsilon)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}$$, then $$\Gamma\in{\mathcal G}(n,p)$$ a.a.s. does not lie in $$\mathcal{AS}$$. Intriguingly, Kahle proved that a function similar to the critical density in Theorem 4.4 is the threshold for a random simplicial complex to have vanishing second rational cohomology [28]. Remark (Behavior near $$p=1$$). Note that property $$\mathcal{AS}$$ is not monotone increasing, since it requires the presence of a number of non-edges. In particular, complete graphs are not in $$\mathcal{AS}$$. Thus unlike the global properties typically studied in the theory of random graphs, $$\mathcal{AS}$$ will cease to hold a.a.s. when the density $$p$$ is very close to $$1$$. In fact, [8, Theorem 3.9] shows that if $$p(n)=1-\Omega\left(\frac{1}{n^2}\right)$$, then a.a.s. $$\Gamma$$ is either a clique or a clique minus a fixed number of edges whose endpoints are all disjoint. Thus, with positive probability, $$\Gamma\in\mathcal{AS}$$. However, [14, Theorem 1] shows that if $$(1-p)n^2\to 0$$ then $$\Gamma$$ is asymptotically almost surely a clique, and hence not in $$\mathcal{AS}$$. □ $$\boldsymbol{\mathcal{CFS}}$$ graphs The second family of graphs, which we call $$\mathcal{CFS}$$ graphs (“Constructed From Squares”), arise naturally in geometric group theory in the context of the large–scale geometry of right–angled Coxeter groups, as we explain below and in Section 3. 
A special case of these graphs was introduced by Dani–Thomas to study divergence in triangle-free right-angled Coxeter groups [17]. The graphs we study are intimately related to a property called thickness, a feature of many key examples in geometric group theory and low dimensional topology that is, closely related to divergence, relative hyperbolicity, and a number of other topics. This property is, in essence, a connectivity property because it relies on a space being “connected” through sequences of “large” subspaces. Roughly speaking, a graph is $$\mathcal{CFS}$$ if it can be built inductively by chaining (induced) squares together in such a way that each square overlaps with one of the previous squares along a diagonal (see Section 2 for a precise definition). We explain in the next section how this class of graphs generalizes $$\mathcal{AS}$$. Our next result about genericity of $$\mathcal{CFS}$$ combines with Proposition 3.1 below to significantly strengthen [8, Theorem VI]. This result is an immediate consequence of Theorems 5.1 and 5.7, which, in fact, establish slightly more precise, but less concise, bounds. Theorem 5.1 and 5.7 Suppose $$(1-p)n^2\to\infty$$ and let $$\epsilon>0$$. Then $$\Gamma\in{\mathcal G}(n,p)$$ is a.a.s. in $$\mathcal{CFS}$$ whenever $$p(n)>n^{-\frac{1}{2}+\epsilon}$$. Conversely, $$\Gamma\in{\mathcal G}(n,p)$$ is a.a.s. not in $$\mathcal{CFS}$$, whenever $$p(n)<n^{-\frac{1}{2} -\epsilon}.$$ We actually show, in Theorem 5.1, that at densities above $$5\sqrt{\frac{\log n}{n}}$$, with $$(1-p)n^2\to\infty$$, the random graph is a.a.s. in $$\mathcal{CFS}$$, while in Theorem 5.7 we show a random graph a.a.s. not in $$\mathcal{CFS}$$ at densities below $$\frac{1}{\sqrt{n}\log{n}}$$. Theorem 5.1 applies to graphs in a range strictly larger than that in which Theorem 4.4 holds (though our proof of Theorem 5.1 relies on Theorem 4.4 to deal with the large $$p$$ case). Theorem 5.1 combines with Theorem 4.5 to show that, for densities between $$\left(\log n/n\right)^{\frac{1}{2}}$$ and $$\left(\log n/n\right)^{\frac{1}{3}}$$, a random graph is asymptotically almost surely in $$\mathcal{CFS}$$ but not in $$\mathcal{AS}$$. We also note that Babson–Hoffman–Kahle [3] proved that a function of order $$n^{-\frac{1}{2}}$$ appears as the threshold for simple-connectivity in the Linial–Meshulam model for random 2–complexes [29]. It would be interesting to understand whether there is a connection between genericity of the $$\mathcal{CFS}$$ property and the topology of random 2-complexes. Unlike our results for the $$\mathcal{AS}$$ property, we do not establish a sharp threshold for the $$\mathcal{CFS}$$ property. In fact, we believe that neither the upper nor lower bounds, given in Theorem 5.1 and Theorem 5.7, for the critical density around which $$\mathcal{CFS}$$ goes from a.a.s. not holding to a.a.s. holding are sharp. Indeed, we believe that there is a sharp threshold for the $$\mathcal{CFS}$$ property located at $$p_c(n)=\theta(n^{-\frac{1}{2}})$$. This conjecture is linked to the emergence of a giant component in the “square graph” of $$\Gamma$$ (see the next section for a definition of the square graph and the heuristic discussion after the proof of Theorem 5.7). Applications to geometric group theory Our interest in the structure of random graphs was sparked largely by questions about the large-scale geometry of right-angled Coxeter groups. 
Coxeter groups were first introduced in [15] as a generalization of reflection groups, that is, discrete groups generated by a specified set of reflections in Euclidean space. A reflection group is right-angled if the reflection loci intersect at right angles. An abstract right-angled Coxeter group generalizes this situation: it is defined by a group presentation in which the generators are involutions and the relations are obtained by declaring some pairs of generators to commute. Right-angled Coxeter groups (and more general Coxeter groups) play an important role in geometric group theory and are closely-related to some of that field’s most fundamental objects, for example, CAT(0) cube complexes [18, 27, 33] and (right-angled) Bruhat-Tits buildings (see e.g., [18]). A right-angled Coxeter group is determined by a unique finite simplicial presentation graph: the vertices correspond to the involutions generating the group, and the edges encode the pairs of generators that commute. In fact, the presentation graph uniquely determines the right-angled Coxeter group [32]. In this article, as an application of our results on random graphs, we continue the project of understanding large-scale geometric features of right-angled Coxeter groups in terms of the combinatorics of the presentation graph, begun in [8, 14, 17]. Specifically, we study right-angled Coxeter groups defined by random presentation graphs, focusing on the prevalence of two important geometric properties: relative hyperbolicity and thickness. Relative hyperbolicity, in the sense introduced by Gromov and equivalently formulated by many others [10, 22, 24, 35], when it holds, is a powerful tool for studying groups. On the other hand, thickness of a finitely-generated group (more generally, a metric space) is a property introduced by Behrstock–Druţu–Mosher in [6] as a geometric obstruction to relative hyperbolicity and has a number of powerful geometric applications. For example, thickness gives bounds on divergence (an important quasi-isometry invariant of a metric space) in many different groups and spaces [5, 7, 11, 17, 37]. Thickness is an inductive property: in the present context, a finitely generated group $$G$$ is thick of order $$0$$ if and only if it decomposes as the direct product of two infinite subgroups. The group $$G$$ is thick of order $$n$$ if there exists a finite collection $$\mathcal H$$ of undistorted subgroups of $$G$$, each thick of order $$n-1$$, whose union generates a finite-index subgroup of $$G$$ and which has the following “chaining” property: for each $$g,g'\in G$$, one can construct a sequence $$g\in g_1H_1,g_2H_2,\ldots,g_kH_k\ni g'$$ of cosets, with each $$H_i\in\mathcal H$$, so that consecutive cosets have infinite coarse intersection. Many of the best-known groups studied by geometric group theorists are thick, and indeed thick of order $$1$$: one-ended right-angled Artin groups, mapping class groups of surfaces, outer automorphism groups of free groups, fundamental groups of three-dimensional graph manifolds, etc. [6]. The class of Coxeter groups contains many examples of hyperbolic and relatively hyperbolic groups. There is a criterion for hyperbolicity purely in terms of the presentation graph due to Moussong [31] and an algebraic criterion for relative hyperbolicity due to Caprace [13]. The class of Coxeter groups includes examples which are non-relatively hyperbolic, for instance, those constructed by Davis–Januszkiewicz [19] and, also, ones studied by Dani–Thomas [17]. 
In fact, in [8], this is taken further: it is shown that every Coxeter group is actually either thick, or hyperbolic relative to a canonical collection of thick Coxeter subgroups. Further, there is a simple, structural condition on the presentation graph, checkable in polynomial time, which characterizes thickness. This result is needed to deduce the applications below from our graph theoretic results. Charney and Farber initiated the study of random graph products (including right-angled Artin and Coxeter groups) using the Erdős-Rényi model of random graphs [14]. The structure of the group cohomology of random graph products was obtained in [20]. In [8], various results are proved about which random graphs have the thickness property discussed above, leading to the conclusion that, at certain low densities, random right-angled Coxeter groups are relatively hyperbolic (and thus not thick), while at higher densities, random right-angled Coxeter groups are thick. In this article, we improve significantly on one of the latter results, and also prove something considerably more refined: we isolate not just thickness of random right-angled Coxeter groups, but thickness of a specified order, namely $$1$$: Corollary 3.2 (Random Coxeter groups are thick of order 1.)There exists a constant $$C>0$$ such that if $$p\colon{\mathbb{N}}\rightarrow(0,1)$$ satisfies $$\left(\displaystyle\frac{C\log{n}}{n}\right)^{\frac{1}{2}}\leq p(n)\leq 1-\displaystyle\frac{(1+\epsilon)\log{n}}{n}$$ for some $$\epsilon>0$$, then the random right-angled Coxeter group $$W_{G_{n,p}}$$ is asymptotically almost surely thick of order exactly $$1$$, and in particular has quadratic divergence. Corollary 3.2 significantly improves on Theorem 3.10 of [8], as discussed in Section 3. This theorem follows from Theorems 5.1 and 4.4, the latter being needed to treat the case of large $$p(n)$$, including the interesting special case in which $$p$$ is constant. Remark 1.1. We note that characterizations of thickness of right-angled Coxeter groups in terms of the structure of the presentation graph appear to generalize readily to graph products of arbitrary finite groups and, probably, via the action on a cube complex constructed by Ruane and Witzel in [36], to arbitrary graph products of finitely generated abelian groups, using appropriate modifications of the results in [8]. □ Organization of the article In Section 2, we give the formal definitions of $$\mathcal{AS}$$ and $$\mathcal{CFS}$$ and introduce various other graph-theoretic notions we will need. In Section 3, we discuss the applications of our random graph results to geometric group theory, in particular, to right-angled Coxeter groups and more general graph products. In Section 4, we obtain a sharp threshold result for $$\mathcal{AS}$$ graphs. Section 5 is devoted to $$\mathcal{CFS}$$ graphs. Finally, Section 6 contains some simulations of random graphs with density near the threshold for $$\mathcal{AS}$$ and $$\mathcal{CFS}$$. 2 Definitions Convention 2.1 A graph is a pair of finite sets $$\Gamma=(V,E)$$, where $$V=V(\Gamma)$$ is a set of vertices, and $$E=E(\Gamma)$$ is a collection of pairs of distinct elements of $$V$$, which constitute the set of edges of $$G$$. A subgraph of $$\Gamma$$ is a graph $$\Gamma'$$ with $$V(\Gamma')\subseteq V(\Gamma)$$ and $$E(\Gamma')\subseteq E(\Gamma)$$; $$\Gamma'$$ is said to be an induced subgraph of $$\Gamma$$ if $$E(\Gamma')$$ consists exactly of those edges from $$E(\Gamma)$$ whose vertices lie in $$V(\Gamma')$$. 
In this article, we focus on induced subgraphs, and we generally write “subgraph” to mean “induced subgraph”. In particular, we often identify a subgraph with the set of vertices inducing it, and we write $$\vert \Gamma\vert$$ for the order of $$\Gamma$$, that is, the number of vertices it contains. A clique of size $$t$$ is a complete graph on $$t\geq0$$ vertices. This includes the degenerate case of the empty graph on $$t= 0$$ vertices. □ Fig. 1. View largeDownload slide A graph in $$\mathcal{AS}$$. A block exhibiting inclusion in $$\mathcal{AS}$$ is shown in bold; the two (left-centrally located) ends of the bold block are highlighted. (A colour version of this figure is available from the journal’s website.) Fig. 1. View largeDownload slide A graph in $$\mathcal{AS}$$. A block exhibiting inclusion in $$\mathcal{AS}$$ is shown in bold; the two (left-centrally located) ends of the bold block are highlighted. (A colour version of this figure is available from the journal’s website.) Definition 2.2 (Link, join). Given a graph $$\Gamma$$, the link of a vertex $$v\in\Gamma$$, denoted $${\mathrm Lk}_\Gamma(v)$$, is the subgraph spanned by the set of vertices adjacent to $$v$$. Given graphs $$A,B$$, the join$$A\star B$$ is the graph formed from $$A\sqcup B$$ by joining each vertex of $$A$$ to each vertex of $$B$$ by an edge. A suspension is a join where one of the factors $$A,B$$ is the graph consisting of two vertices and no edges. □ We now describe a family of graphs, denoted $$\mathcal{CFS}$$, which satisfy the global structural property that they are “constructed from squares.” Definition 2.3 ($$\boldsymbol{\mathcal{CFS}}$$). Given a graph $$\Gamma$$, let $$\square(\Gamma)$$ be the auxiliary graph whose vertices are the induced $$4$$–cycles from $$\Gamma$$, with two distinct $$4$$–cycles joined by an edge in $$\square(\Gamma)$$ if and only if they intersect in a pair of non-adjacent vertices of $$\Gamma$$ (i.e., in a diagonal). We refer to $$\square(\Gamma)$$ as the square-graph of $$\Gamma$$. A graph $$\Gamma$$ belongs to $$\mathcal{CFS}$$ if $$\Gamma=\Gamma'\star K$$, where $$K$$ is a (possibly empty) clique and $$\Gamma'$$ is a non-empty subgraph such that $$\square(\Gamma')$$ has a connected component $$C$$ such that the union of the $$4$$–cycles from $$C$$ covers all of $$V(\Gamma')$$. Given a vertex $$F\in\square(\Gamma)$$, we refer to the vertices in the $$4$$–cycle in $$\Gamma$$ associated to $$F$$ as the support of $$F$$. □ Remark 2.4. Dani–Thomas introduced component with full support graphs in [17], a subclass of the class of triangle-free graphs. We note that each component with full support graph is constructed from squares, but the converse is not true. Indeed, since we do not require our graphs to be triangle-free, our definition necessarily only counts induced 4–cycles and allows them to intersect in more ways than in [17]. This distinction is relevant to the application to Coxeter groups, which we discuss in Section 3. □ Definition 2.5 (Augmented suspension). The graph $$\Gamma$$ is an augmented suspension if it contains an induced subgraph $$B=\{w,w'\}\star \Gamma'$$, where $$w,w'$$ are nonadjacent and $$\Gamma'$$ is not a clique, satisfying the additional property that if $$v\in\Gamma-B$$, then $$\mathrm{Lk}_\Gamma(v)\cap \Gamma'$$ is not a clique. Let $$\mathcal{AS}$$ denote the class of augmented suspensions. Figure 1 shows a graph in $$\mathcal{AS}$$. □ Remark 2.6. 
Neither the $$\mathcal{CFS}$$ nor the $$\mathcal{AS}$$ properties introduced above are monotone with respect to the addition of edges. This stands in contrast to the most commonly studied global properties of random graphs. □ Definition 2.7 (Block, core, ends). A block in $$\Gamma$$ is a subgraph of the form $$B(w,w')=\{w,w'\}\star \Gamma'$$ where $$\{w,w'\}$$ is a pair of non-adjacent vertices and $$\Gamma'\subset \Gamma$$ is a subgraph of $$\Gamma$$ induced by a set of vertices adjacent to both $$w$$ and $$w'$$. A block is maximal if $$V(\Gamma')=\mathrm{Lk}_{\Gamma}(w)\cap \mathrm{Lk}_{\Gamma}(w')$$. Given a block $$B=B(w,w')$$, we refer to the non-adjacent vertices $$w,w'$$ as the ends of $$B$$, denoted $$\mathrm{end}(B)$$, and the vertices of $$\Gamma'$$ as the core of $$B$$, denoted $$\mathrm{core}(B)$$. □ Note that $$\mathcal{AS}\subsetneq\mathcal{CFS}$$, indeed Theorems 5.1 and 4.5 show that there must exist graphs in $$\mathcal{CFS}$$ that are not in $$\mathcal{AS}$$. Here we explain how any graph in $$\mathcal{AS}$$ is in $$\mathcal{CFS}$$. Lemma 2.8. Let $$\Gamma$$ be a graph in $$\mathcal{AS}$$. Then $$\Gamma \in \mathcal{CFS}$$. □ Proof Let $$B(w,w')=\{w,w'\}\star \Gamma'$$ be a maximal block in $$\Gamma$$ witnessing $$\Gamma \in \mathcal{AS}$$. Write $$\Gamma'=A\star D$$, where $$D$$ is the collection of all vertices of $$\Gamma'$$ which are adjacent to every other vertex of $$\Gamma'$$. Note that $$D$$ induces a clique in $$\Gamma$$. By definition of the $$\mathcal{AS}$$ property $$\Gamma'$$ is not a clique, whence, $$A$$ contains at least one pair of non-adjacent vertices. Furthermore, by the definition of $$D$$, for every vertex $$a\in A$$ there exists $$a' \in A$$ with $$\{a,a'\}$$ non-adjacent. The $$4$$–cycles induced by $$\{w,w'a,a'\}$$ for non-adjacent pairs $$a, a'$$ from $$A$$ are connected in $$\square(\Gamma)$$. Denote the component of $$\square(\Gamma)$$ containing them by $$C$$. Consider now a vertex $$v\in \Gamma - B(w,w')$$. Since $$B(w,w')$$ is maximal, we have that at least one of $$w,w'$$ is not adjacent to $$v$$ — without loss of generality, let us assume that it is $$w$$. By the $$\mathcal{AS}$$ property, $$v$$ must be adjacent to a pair $$a,a'$$ of non-adjacent vertices from $$A$$. Then $$\{w, a,a', v\}$$ induces a $$4$$–cycle, which is adjacent to $$\{w,w',a,a'\}\in\square(\Gamma)$$ and hence lies in $$C$$. Finally, consider a vertex $$d \in D$$. If $$v$$ is adjacent to all vertices of $$\Gamma$$, then $$\Gamma$$ is the join of a graph with a clique containing $$d$$, and we can ignore $$d$$ with respect to establishing the $$\mathcal{CFS}$$ property. Otherwise $$d$$ is not adjacent to some $$v\in \Gamma-B(w,w')$$. By the $$\mathcal{AS}$$ property, $$v$$ is connected by edges to a pair of non-adjacent vertices $$\{a,a'\}$$ from $$A$$. Thus $$\{d,a,a',v\}$$ induces a $$4$$–cycle. Since (as established above) there is some $$4$$–cycle in $$C$$ containing $$\{a,a',v\}$$, we have that $$\{d,a,a',v\} \in C$$ as well. Thus $$\Gamma= \Gamma'' \star K$$, where $$K$$ is a clique and $$V(\Gamma'')$$ is covered by the union of the $$4$$–cycles in $$C$$, so that $$\Gamma \in \mathcal{CFS}$$ as claimed. ■ 3 Geometry of right-angled Coxeter groups If $$\Gamma$$ is a finite simplicial graph, the right-angled Coxeter group$$W_\Gamma$$presented by$$\Gamma$$ is the group defined by the presentation ⟨Vert(Γ)∣{w2,uvu−1v−1:u,v,w∈Vert(Γ),{u,v}∈Edge(Γ)⟩. A result of Mühlherr [32] shows that the correspondence $$\Gamma\leftrightarrow W_\Gamma$$ is bijective. 
We can thus speak of “the random right-angled Coxeter group” — it is the right-angled Coxeter group presented by the random graph. (We emphasize that the above presentation provides the definition of a right-angled Coxeter group: this definition abstracts the notion of a reflection group – a subgroup of a linear group generated by reflections — but infinite Coxeter groups need not admit representations as reflection groups.) Recent articles have discussed the geometry of Coxeter groups, especially relative hyperbolicity and closely-related quasi-isometry invariants like divergence and thickness, cf. [8, 13, 17]. In particular, Dani–Thomas introduced a property they call having a component of full support for triangle-free graphs (which is exactly the triangle-free version of $$\mathcal{CFS}$$) and they prove that under the assumption $$\Gamma$$ is triangle-free, $$W_\Gamma$$ is thick of order at most $$1$$ if and only if it has quadratic divergence if and only if $$\Gamma$$ is in $$\mathcal{CFS}$$, see [17, Theorem 1.1 and Remark 4.8]. Since the densities where random graphs are triangle-free are also square-free (and thus not $$\mathcal{CFS}$$ — in fact, they are disconnected!), we need the following slight generalization of the result of Dani–Thomas: Proposition 3.1. Let $$\Gamma$$ be a finite simplicial graph. If $$\Gamma$$ is in $$\mathcal{CFS}$$ and $$\Gamma$$ does not decompose as a nontrivial join, then $$W_\Gamma$$ is thick of order exactly $$1$$. □ Proof Theorem II of [8] shows immediately that, if $$\Gamma\in\mathcal{CFS}$$, then $$W_\Gamma$$ is thick, being formed by a series of thick unions of $$4$$–cycles; since each $$4$$–cycle is a join, it follows that $$\Gamma$$ is thick of order at most $$1$$. On the other hand, [8, Proposition 2.11] shows that $$W_\Gamma$$ is thick of order at least $$1$$ provided $$\Gamma$$ is not a join. ■ Our results about random graphs yield: Corollary 3.2. There exists $$k>0$$ so that if $$p\colon{\mathbb{N}}\rightarrow(0,1)$$ and $$\epsilon>0$$ are such that $$\sqrt{\frac{k\log n}{n}}\leq p(n)\leq 1-\displaystyle\frac{(1-\epsilon)\log{n}}{n}$$ for all sufficiently large $$n$$, then for $$\Gamma\in{\mathcal G}(n,p)$$ the group $$W_\Gamma$$ is asymptotically almost surely thick of order exactly $$1$$ and hence has quadratic divergence. □ Proof Theorem 5.1 shows that any such $$\Gamma$$ is asymptotically almost surely in $$\mathcal{CFS}$$, whence $$W_\Gamma$$ is thick of order at most $$1$$. We emphasize that to apply this result for sufficiently large functions $$p(n)$$ the proof of Theorem 5.1 requires an application of Theorem 4.4 to establish that $$\Gamma$$ is a.a.s. in $$\mathcal{AS}$$ and hence in $$\mathcal{CFS}$$ by Lemma 2.8. By Proposition 3.1, to show that the order of thickness is exactly one, it remains to rule out the possibility that $$\Gamma$$ decomposes as a nontrivial join. However, this occurs if and only if the complement graph is disconnected, which asymptotically almost surely does not occur whenever $$p(n)\le 1-\frac{(1-\epsilon)\log{n}}{n}$$, by the sharp threshold for connectivity of $${\mathcal G}(n,1-p)$$ established by Erdős and Rényi in [21]. Since this holds for $$p(n)$$ by assumption, we conclude that asymptotically almost surely, $$W_\Gamma$$ is thick of order at least $$1$$. Since $$W_\Gamma$$ is CAT(0) and thick of order exactly $$1$$, the consequence about divergence now follows from [5]. 
■ This corollary significantly generalizes Theorem 3.10 of [8], which established that, if $$\Gamma\in{\mathcal G}(n,\frac{1}{2})$$, then $$W_\Gamma$$ is asymptotically almost surely thick. Theorem 3.10 of [8] does not provide effective bounds on the order of thickness and its proof is significantly more complicated than the proof of Corollary 3.2 given above — indeed, it required several days of computation (using 2013 hardware) to establish the base case of an inductive argument. Remark 3.3 (Higher-order thickness). A lower bound of $$p(n)=n^{-\frac{5}{6}}$$ for membership in a larger class of graphs whose corresponding Coxeter groups are thick can be found in [8, Theorem 3.4]. In fact, this argument can be adapted to give a simple proof that a.a.s. thickness does not occur at densities below $$n^{-\frac{3}{4}}$$. The correct threshold for a.a.s. thickness is, however, unknown. □ Remark 3.4 (Random graph products versus random presentations). Corollary 3.2 and Remark 3.3 show that the random graph model for producing random right-angled Coxeter groups generates groups with radically different geometric properties. This is in direct contrast to other methods of producing random groups, most notably Gromov’s random presentation model [25, 26] where, depending on the density of relators, groups are either almost surely hyperbolic or finite (with order at most $$2$$). This contrast speaks to the merits of considering a random right-angled Coxeter group as a natural place to study random groups. For instance, Calegari–Wilton recently showed that in the Gromov model a random group contains many subgroups which are isomorphic to the fundamental group of a compact hyperbolic 3–manifold [12]; does the random right-angled Coxeter group also contain such subgroups? Right-angled Coxeter groups, and indeed thick ones, are closely related to Gromov’s random groups in another way. When the parameter for a Gromov random groups is $$<\frac{1}{6}$$ such a group is word-hyperbolic [25] and acts properly and cocompactly on a CAT(0) cube complex [34]. Hence the Gromov random group virtually embeds in a right-angled Artin group [4]. Moreover, at such parameters such a random group is one-ended [16], whence the associated right-angled Artin group is as well. By [4] this right-angled Artin group is thick of order 1. Since any right-angled Artin group is commensurable with a right-angled Coxeter group [19], one obtains a thick of order $$1$$ right-angled Coxeter group containing the randomly presented group. □ 4 Genericity of $$\boldsymbol{\mathcal{AS}}$$ We will use the following standard Chernoff bounds (see e.g., [2, Theorems A.1.11 and A.1.13]): Lemma 4.1 (Chernoff bounds). Let $$X_1,\ldots,X_n$$ be independent identically distributed random variables taking values in $$\{0,1\}$$, let $$X$$ be their sum, and let $$\mu=\mathbb E[X]$$. Then for any $$\delta\in(0,2/3)$$ P(|X−μ|≥δμ)≤2e−δ2μ3. □ Corollary 4.2. Let $$\varepsilon, \delta>0$$ be fixed. (i) If $$p(n)\ge \left(\frac{(6+\varepsilon)\log n}{\delta^2n}\right)^{1/2}$$, then a.a.s. for all pairs of distinct vertices $$\{x,y\}$$ in $$\Gamma \in {\mathcal G}(n,p)$$ we have $$\left\vert \vert \mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)\vert-p^2(n-2)\right\vert < \delta p^2(n-2)$$. (ii) If $$p(n)\ge \left(\frac{(9+\varepsilon)\log n}{\delta^2n}\right)^{1/3}$$, then a.a.s. 
for all triples of distinct vertices $$\{x,y, z\}$$ in $$\Gamma \in {\mathcal G}(n,p)$$ we have $$\left\vert \vert \mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)\cap \mathrm{Lk}_{\Gamma}(z)\vert-p^3(n-3)\right\vert < \delta p^3(n-3)$$. □ Proof For (i), let $$\{x,y\}$$ be any pair of distinct vertices. For each vertex $$v \in \Gamma-\{x,y\}$$, set $$X_v$$ to be the indicator function of the event that $$v\in \mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)$$, and set $$X=\sum_v X_v$$ to be the size of $$\mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)$$. We have $$\mathbb{E}X=p^2(n-2)$$ and so by the Chernoff bounds above, $$\Pr\left(\vert X-p^2(n-2)\vert\geq \delta p^2(n-2)\right) \leq 2e^{-\frac{\delta^2p^2(n-2)}{3}}$$. Applying Markov’s inequality, the probability that there exists some “bad pair” $$\{x,y\}$$ in $$\Gamma$$ for which $$\vert\mathrm{Lk}_{\Gamma}(x)\cap \mathrm{Lk}_{\Gamma}(y)\vert$$ deviates from its expected value by more than $$\delta p^2(n-2)$$ is at most (n2)2e−δ2p2(n−2)3=o(1), provided $$\delta^2 p^2n\ge (6+\varepsilon) \log n$$ and $$\varepsilon, \delta>0$$ are fixed. Thus for this range of $$p=p(n)$$, a.a.s. no such bad pair exists. The proof of (ii) is nearly identical. ■ Lemma 4.3. (i) Suppose $$1-p\geq \frac{\log n}{2n}$$. Then asymptotically almost surely, the order of a largest clique in $$\Gamma\in{\mathcal G}(n,p)$$ is $$o(n)$$. (ii) Let $$\eta$$ be fixed with $$0<\eta<1$$. Suppose $$1-p\geq \eta$$. Then asymptotically almost surely, the order of a largest clique in $$\Gamma \in {\mathcal G}(n,p)$$ is $$O(\log n)$$. □ Proof For (i), set $$r=\alpha n$$, for some $$\alpha$$ bounded away from $$0$$. Write $$H(\alpha)=\alpha\log\frac{1}{\alpha}+(1-\alpha)\log\frac{1}{1-\alpha}$$. Using the standard entropy bound $$\binom{n}{\alpha n}\leq e^{H(\alpha)n}$$ and our assumption for $$(1-p)$$, we see that the expected number of $$r$$-cliques in $$\Gamma$$ is (nr)p(r2) ≤eH(α)nelog(1−(1−p))(α2n22+O(n))≤exp(−α22nlogn+O(n))=o(1). Thus by Markov’s inequality, a.a.s. $$\Gamma$$ does not contain a clique of size $$r$$, and the order of a largest clique in $$\Gamma$$ is $$o(n)$$. The proof of (ii) is similar: suppose $$1-p>\eta$$. Then for any $$r\leq n$$, (nr)p(r2) <nr(1−η)r(r−1)/2=exp(r(logn−r−12log11−η)), which for $$\eta>0$$ fixed and $$r-1>\frac{2}{\log (1/(1-\eta))}(1+ \log n )$$ is as most $$n^{-\frac{2}{\log (1/(1-\eta))}}=o(1)$$. We may thus conclude as above that a.a.s. a largest clique in $$\Gamma$$ has order $$O(\log n)$$. ■ Theorem 4.4 (Genericity of $$\mathcal{AS}$$). Suppose $$p(n)\ge(1+\epsilon)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}$$ for some $$\epsilon>0$$ and $$(1-p)n^2\to\infty$$. Then, a.a.s. $$\Gamma\in{\mathcal G}(n,p)$$ is in $$\mathcal{AS}$$. □ Proof Let $$\delta>0$$ be a small constant to be specified later (the choice of $$\delta$$ will depend on $$\epsilon$$). By Corollary 4.2 (i) for $$p(n)$$ in the range we are considering, a.a.s. all joint links have size at least $$(1-\delta)p^2(n-2)$$. Denote this event by $$\mathcal{E}_1$$. We henceforth condition on $$\mathcal{E}_1$$ occurring (not this only affects the values of probabilities by an additive factor of $$\mathbb{P}(\mathcal{E}_1^c)=O(n^{-\varepsilon})=o(1)$$). With probability $$1-p^{\binom{n}{2}}=1-o(1)$$, $$\Gamma$$ is not a clique, whence, there there exist non-adjacent vertices in $$\Gamma$$. We henceforth assume $$\Gamma\neq K_n$$, and choose $$v_1, v_2\in \Gamma$$ which are not adjacent. Let $$B$$ be the maximal block associated with the pair $$(v_1,v_2)$$. 
We separate the range of $$p$$ into three. Case 1: $$\boldsymbol{p}$$ is “far” from both the threshold and $$1$$. Let $$\alpha>0$$ be fixed, and suppose $$\alpha n^{-1/4}\leq p \leq 1- \frac{\log n}{2n}$$. Let $$\mathcal{E}_2$$ be the event that for every vertex $$v\in\Gamma-B$$ the set $$\mathrm{Lk}_{\Gamma}(v)\cap B$$ has size at least $$\frac{1}{2}p^3(n-3)$$. By Corollary 4.2, a.a.s. event $$\mathcal{E}_2$$ occurs, that is, all vertices in $$\Gamma-B$$ have this property. We claim that a.a.s. there is no clique of order at least $$\frac{1}{2}p^3(n-3)$$ in $$\Gamma$$. Indeed, if $$p<1-\eta$$ for some fixed $$\eta>0$$, then by Lemma 4.3 part (ii), a largest clique in $$\Gamma$$ has order $$O(\log n)=o(p^3n)$$. On the other hand, if $$1-\eta <p\leq 1- \frac{\log n}{2n}$$, then by Lemma 4.3 part (i), a largest clique in $$\Gamma$$ has order $$o(n)=o(p^3n)$$. Thus in either case a.a.s. for every$$v\in \Gamma-B$$, $$\mathrm{Lk}_{\Gamma}(v)\cap B$$ is not a clique and hence $$v \in \overline{B}$$, so that a.a.s. $$\overline{B}=\Gamma$$, and $$\Gamma \in \mathcal{AS}$$ as required. Case 2: $$\boldsymbol{p}$$ is “close” to the threshold. Suppose that $$(1+\epsilon)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}\le p(n)$$ and $$np^4\to 0$$. Let $$\vert B\vert=m+2$$. By our conditioning, we have $$(1-\delta)(n-2)p^2\leq m \leq (1+\delta)(n-2)p^2$$. The probability that a given vertex $$v\in \Gamma$$ is not in $$\overline{B}$$ is given by: P(v∉B¯|{|B|=m})=(1−p)m+mp(1−p)m+1+∑r=2m(mr)pr(1−p)m−rp(r2). (1) In this equation, the first two terms come from the case where $$v$$ is connected to $$0$$ and $$1$$ vertex in $$B\setminus\{v_1,v_2\}$$ respectively, while the third term comes from the case where the link of $$v$$ in $$B\setminus\{v_1,v_2\}$$ is a clique on $$r\geq 2$$ vertices. As we shall see, in the case $$np^4\to 0$$ which we are considering, the contribution from the first two terms dominates. Let us estimate their order: (1−p)m+mp(1−p)m−1=(1+mp1−p)(1−p)m ≤(1+mp1−p)e−mp ≤(1+(1−δ)(1+ϵ)3logn1−p)n−(1−δ)(1+ϵ)3. Taking $$\delta<1-\frac{1}{(1+\epsilon)^3}$$ this expression is $$o(n^{-1})$$. We now treat the sum making up the remaining terms in Equation 1. To do so, we will analyze the quotient of successive terms in the sum. Fixing $$2\le r\le m-1$$ we see: (mr+1)pr+1(1−p)m−r−1p(r+12)(mr)pr(1−p)m−rp(r2)=m−r−1r+1⋅pr+11−p≤mpr+1≤mp3. Since $$np^4\to 0$$ (by assumption), this also tends to zero as $$n\to\infty$$. The quotients of successive terms in the sum thus tend to zero uniformly as $$n\to \infty$$, and we may bound the sum by a geometric series: ∑r=2m(mr)pr(1−p)m−rp(r2)≤(m2)p3(1−p)m−2∑i=0m−2(mp3)i≤(12+o(1))m2p3(1−p)m−2. Now, $$m^2p^3(1-p)^{m-2}=\frac{mp^2}{1-p}\cdot mp(1-p)^{m-1}$$. The second factor in this expression was already shown to be $$o(n^{-1})$$, while $$mp^2\le(1+\delta)np^4 \to 0$$ by assumption, so the total contribution of the sum is $$o(n^{-1})$$. Thus for any value of $$m$$ between $$(1-\delta)p^2(n-2)$$ and $$(1+\delta)p^2(n-2)$$, the right hand side of Equation (1) is $$o(n^{-1})$$, and we conclude: P(v∉B¯|E1)≤o(n−1). Thus, by Markov’s inequality, P(B¯=Γ)≥P(E1)(1−∑vP(v∉B¯|E1))=1−o(1), establishing that a.a.s. $$\Gamma\in\mathcal{AS}$$, as claimed. Case 3: $$p$$ is “close” to $$1$$. Suppose $$n^{-2}\ll (1-p)\leq \frac{\log n}{2n}$$. Consider the complement of $$\Gamma$$, $$\Gamma^c\in {\mathcal G}(n, 1-p)$$. In the range of the parameter $$\Gamma^c$$ a.a.s. has at least two connected components that contain at least two vertices. 
In particular, taking complements, we see that $$\Gamma$$ is a.a.s. a join of two subgraphs, neither of which is a clique. It is a simple exercise to see that such as graph is in $$\mathcal{AS}$$, thus a.a.s. $$\Gamma\in\mathcal{AS}$$. ■ As we now show, the bound obtained in the above theorem is actually a sharp threshold. Analogous to the classical proof of the connectivity threshold [21], we consider vertices which are “isolated” from a block to prove that graphs below the threshold strongly fail to be in $$\mathcal{AS}$$. Theorem 4.5. If $$p\le\left(1-\epsilon\right)\left(\displaystyle\frac{\log{n}}{n}\right)^{\frac{1}{3}}$$ for some $$\epsilon>0$$, then $$\Gamma\in{\mathcal G}(n,p)$$ is asymptotically almost surely not in $$\mathcal{AS}$$. □ Proof We will show that, for $$p$$ as hypothesized, every block has a vertex “isolated” from it. Explicitly, let $$\Gamma\in{\mathcal G}(n,p)$$ and consider $$B=B_{v,w}=\mathrm{Lk}(v)\cap \mathrm{Lk}(w)\cup\{v,w\}$$. Let $$X(v,w)$$ be the event that every vertex of $$\Gamma-B$$ is connected by an edge to some vertex of $$B$$. Clearly $$\Gamma\in \mathcal{AS}$$ only if the event $$X(v,w)$$ occurs for some pair of non-adjacent vertices $$\{v,w\}$$. Set $$X=\bigcup_{\{v,w\}} X(v,w)$$. Note that $$X$$ is a monotone event, closed under the addition of edges, so that the probability it occurs in $$\Gamma\in {\mathcal G}(n,p)$$ is a non-decreasing function of $$p$$. We now show that when $$p=\left(1-\epsilon\right)\left(\log{n}/ n\right)^{\frac{1}{3}}$$, a.a.s. $$X$$ does not occur, completing the proof. Consider a pair of vertices $$\{v,w\}$$, and set $$k=\vert B_{v,w}\vert$$. Conditional on $$B_{v,w}$$ having this size and using the standard inequality $$(1-x)\le e^{-x}$$, we have that P(X(v,w))=(1−(1−p)k)n−k≤e−(n−k)(1−p)k. Now, the value of $$k$$ is concentrated around its mean: by Corollary 4.2, for any fixed $$\delta>0$$ and all $$\{v,w\}$$, the order of $$B_{v,w}$$ is a.a.s. at most $$(1+\delta)np^2$$. Conditioning on this event $$\mathcal{E}$$, we have that for any pair of vertices $$v,w$$, P(X(v,w)|E)≤maxk≤(1+δ)np2e−(n−k)(1−p)k=e−(n−(1+δ)np2)(1−p)(1+δ)np2. Now $$(1-p)^{(1+\delta)np^2} = e^{(1+\delta)np^2\log(1-p)}$$ and by Taylor’s theorem $$\log(1-p)=-p+O(p^2)$$, so that: P(X(v,w)|E)≤e−n(1+O(p2))e−(1+δ)np3(1+O(p))=e−n1−(1+δ)(1−ϵ)3+o(1). Choosing $$\delta<\displaystyle\frac{1}{(1-\epsilon)^3}-1$$, the expression above is $$o(n^{-2})$$. Thus P(X) ≤P(Ec)+∑{v,w}P(X(v,w)|E) =o(1)+(n2)o(n−2)=o(1). Thus a.a.s. the monotone event $$X$$ does not occur in $$\Gamma\in{\mathcal G}(n,p)$$ for $$p=\left(1-\epsilon\right)\left(\log{n}/ n\right)^{\frac{1}{3}}$$, and hence a.a.s. the property $$\mathcal{AS}$$ does not hold for $$\Gamma\in{\mathcal G}(n,p)$$ and $$p(n)\leq \left(1-\epsilon\right)\left(\log{n}/ n\right)^{\frac{1}{3}}$$. ■ 5 Genericity of $$\boldsymbol{\mathcal{CFS}}$$ The two main results in this section are upper and lower bounds for inclusion in $$\mathcal{CFS}$$. These results are established in Theorems 5.1 and 5.7. Theorem 5.1. If $$p\colon{\mathbb{N}}\rightarrow(0,1)$$ satisfies $$(1-p)n^2\to\infty$$ and $$p(n)\geq 5\sqrt{\frac{\log n}{n}}$$ for all sufficiently large $$n$$, then a.a.s. $$\Gamma \in {\mathcal G}(n,p)$$ lies in $$\mathcal{CFS}$$. □ The proof of Theorem 5.1 divides naturally into two ranges. First of all for large $$p$$, namely for $$p(n)\geq 2\left(\log{n}/n\right)^{\frac{1}{3}}$$, we appeal to Theorem 4.4 to show that a.a.s. 
a random graph $$\Gamma\in {\mathcal G}(n,p)$$ is in $$\mathcal{AS}$$ and hence, by Lemma 2.8, in $$\mathcal{CFS}$$. In light of our proof of Theorem 4.4, we may think of this as the case when we can “beam up” every vertex of the graph $$\Gamma$$ to a single block $$B_{x,y}$$ in an appropriate way, and thus obtain a connected component of $$\square(\Gamma)$$ whose support is all of $$V(\Gamma)$$ Secondly we have the case of “small $$p$$” where 5lognn≤p(n)≤2(lognn)13, which is the focus of the remainder of the proof. Here we construct a path of length of order $$n/\log n$$ in $$\square(\Gamma)$$ on to which every vertex $$v\in V(\Gamma)$$ can be “beamed up” by adding a $$4$$–cycle whose support contains $$v$$. This is done in the following manner: we start with an arbitrary pair of non-adjacent vertices contained in a block $$B_{0}$$. We then pick an arbitrary pair of non-adjacent vertices in the block $$B_0$$ and let $$B_1$$ denote the intersection of the block they define with $$V(\Gamma)\setminus B_0$$. We repeat this procedure, to obtain a chain of blocks $$B_0, B_1, B_2, \ldots, B_t$$, with $$t=O(n/\log n)$$, whose union contains a positive proportion of $$V(\Gamma)$$, and which all belong to the same connected component $$C$$ of $$\square(\Gamma)$$. This common component $$C$$ is then large enough that every remaining vertex of $$V(\Gamma)$$ can be attached to it. The main challenge is showing that our process of recording which vertices are included in the support of a component of the square graph does not die out or slow down too much, that is, that the block sizes $$\vert B_i\vert$$ remains relatively large at every stage of the process and that none of the $$B_i$$ form a clique. Having described our strategy, we now fill in the details, beginning with the following upper bound on the probability of $$\Gamma\in {\mathcal G}(n,p)$$ containing a copy of $$K_{10}$$, the complete graph on $$10$$ vertices. The following lemma is a variant of [21, Corollary 4]: Lemma 5.2. Let $$\Gamma\in{\mathcal G}(n,p)$$. If $$p=o(n^{-\frac{1}{4}})$$, then the probability that $$\Gamma\in {\mathcal G}(n,p)$$ contains a clique with at least $$10$$ vertices is at most $$o(n^{-\frac{5}{4}})$$. □ Proof The expected number of copies of $$K_{10}$$ in $$\Gamma$$ is (n10)p(102)≤n10p45=o(n−5/4). The statement of the lemma then follows from Markov’s inequality. ■ Proof of Theorem 5.1. As remarked above, Theorem 4.4 proves Theorem 5.1 for “large” $$p$$, so we only need to deal with the case where 5lognn≤p(n)≤2(lognn)13. We iteratively build a chain of blocks, as follows. Let $$\{x_0,y_0\}$$ be a pair of non-adjacent vertices in $$\Gamma$$, if such a pair exists, and an arbitrary pair of vertices if not. Let $$B_0$$ be the block with ends $$\{x_0, y_0\}$$. Now assume we have already constructed the blocks $$B_0, \ldots, B_i$$, for $$i\geq 0$$. Let $$C_i=\bigcup_i B_i$$ (for convenience we let $$C_{-1}=\emptyset$$). We will terminate the process and set $$t=i$$ if any of the three following conditions occur: $$\vert \mathrm{core}(B_i)\vert\leq 6\log n$$ or $$i\geq n/6\log n$$ or $$\vert V(\Gamma)\setminus C_i\vert \leq n/2$$. Otherwise, we let $$\{x_{i+1}, y_{i+1}\}$$ be a pair of non-adjacent vertices in $$\mathrm{core}(B_i)$$, if such a pair exists, and an arbitrary pair of vertices from $$\mathrm{core}(B_i)$$ otherwise. Let $$B_{i+1}$$ denote the intersection of the block whose ends are $$\{x_{i+1}, y_{i+1}\}$$ and the set $$\left(V(\Gamma)\setminus(C_i)\right)\cup\{x_{i+1}, y_{i+1}\}$$. Repeat. 
Eventually this process must terminate, resulting in a chain of blocks $$B_0, B_1, \ldots, B_t$$. We claim that a.a.s. both of the following hold for every $$i$$ satisfying $$0 \leq i \leq t$$: (i) $$\vert\mathrm{core}(B_i)\vert > 6\log n$$; and (ii) $$\{x_i,y_i\}$$ is a non-edge in $$\Gamma$$. Part (i) follows from the Chernoff bound given in Lemma 4.1: for each $$i\geq -1$$ the set $$V(\Gamma)\setminus C_i$$ contains at least $$n/2$$ vertices by construction. For each vertex $$v\in V(\Gamma)\setminus C_i$$, let $$X_v$$ be the indicator function of the event that $$v$$ is adjacent to both of $$\{x_{i+1}, y_{i+1}\}$$. The random variables $$(X_v)$$ are independent identically distributed Bernoulli random variables with mean $$p$$. Their sum $$X=\sum_v X_v$$ is exactly the size of the core of $$B_{i+1}$$, and its expectation is at least $$p^2n/2$$. Applying Lemma 4.1, we get that P(X<6logn) ≤P(X≤12EX) ≤2e−(12)225logn6n=2e−2524logn. Thus the probability that $$\vert \mathrm{core}(B_i)\vert <6\log n$$ for some $$i$$ with $$0\leq i \leq t$$ is at most: t2e−2524logn≤4n5logn2e−2524logn=o(1). Part (ii) is a trivial consequence of part (i) and Lemma 5.2: a.a.s. $$\mathrm{core}(B_i)$$ has size at least $$6\log n$$ for every $$i$$ with $$0 \leq i \leq t$$, and a.a.s. $$\Gamma$$ contains no clique on $$10 < \log n$$ vertices, so that a.a.s. at each stage of the process we could choose an non-adjacent pair $$\{x_i,y_i\}$$. From now on we assume that both (i) and (ii) occur, and that $$\Gamma$$ contains no clique of size $$10$$. In addition, we assume that $$\vert \mathrm{core}(B_0)\vert < 8n^{\frac{1}{3}}(\log{n})^{\frac{2}{3}}$$, which occurs a.a.s. by the Chernoff bound. Since $$\mathrm{core}(B_i)\geq 6\log n$$ for every $$i$$, we must have that by time $$0<t\leq n/6\log n$$ the process will have terminated with $$C_t=\bigcup_{i=0}^t B_i$$ supported on at least half of the vertices of $$V(\Gamma)$$. Lemma 5.3. Either one of the assumptions above fails or there exists a connected component $$F$$ of $$\square(\Gamma)$$ such that: (i) for every $$i$$ with $$0 \leq i \leq t$$ and every pair of non-adjacent vertices $$\{v,v'\}\in B_i$$, there is a vertex in $$F$$ whose support in $$\Gamma$$ contains the pair $$\{v,v'\}$$; and (ii) the support in $$\Gamma$$ of the $$4$$–cycles corresponding to vertices of $$F$$ contains all of $$C_t$$ with the exception of at most $$9$$ vertices of $$\mathrm{core}(B_0)$$; moreover, these exceptional vertices are each adjacent to all the vertices of $$\mathrm{core}(B_0)$$. □ Proof By assumption the ends $$\{x_0, y_0\}$$ of $$B_0$$ are non-adjacent. Thus, every pair of non-adjacent vertices $$\{v,v'\}$$ in $$\mathrm{core}(B_0)$$ induces a $$4$$–cycle in $$\Gamma$$ when taken together with $$\{x_0,y_0\}$$, and all of these squares clearly lie in the same component $$F$$ of $$\square(\Gamma)$$. Repeating the argument with the non-adjacent pair $$\{x_1, y_1\}\in \mathrm{core}(B_0)$$ and the block $$B_1$$, and then the non-adjacent pair $$\{x_2, y_2\}\in \mathrm{core}(B_1)$$ and the block $$B_2$$, and so on, we see that there is a connected component $$F$$ in $$\square(\Gamma)$$ such that for every $$0\leq i \leq t$$, every pair of non adjacent vertices $$\{v,v'\} \in B_i$$ lies in a $$4$$–cycle corresponding to a vertex of $$F$$. This establishes (i). We now show that the support of $$F$$ contains all of $$C_t$$ except possibly some vertices in $$B_0$$. We already established that every pair $$\{x_i,y_i\}$$ is in the support of some vertex of $$F$$. 
Suppose $$v\in \mathrm{core}(B_i)$$ for some $$i>0$$. By construction, $$v$$ is not adjacent to at least one of $$\{x_{i-1}, y_{i-1}\}$$, say $$x_{i-1}$$. Thus, $$\{x_{i-1},x_i, y_i, v\}$$ induces a $$4$$–cycle which contains $$v$$ and is associated to a vertex of $$F$$. Finally, suppose $$v\in \mathrm{core}(B_0)$$. By (i), $$v$$ fails to be in the support of $$F$$ only if $$v$$ is adjacent to all other vertices of $$\mathrm{core}(B_0)$$. Since, by assumption, $$\Gamma$$ does not contain any clique of size $$10$$, there are at most $$9$$ vertices not in the support of $$F$$, proving (ii). ■ Lemma 5.3 shows that a.a.s. we have a “large” component $$F$$ in $$\square(\Gamma)$$ whose support contains “many” pairs of non-adjacent vertices. In the last part of the proof, we use these pairs to prove that the remaining vertices of $$V(\Gamma)$$ are also supported on our connected component. For each $$i$$ satisfying $$0 \leq i \leq t$$, consider a maximal collection, $$M_{i}$$, of pairwise-disjoint pairs of vertices in $$\mathrm{core}(B_i)\setminus\{x_{i+1},y_{i+1}\}$$. Set $$M=\bigcup_i M_i$$, and let $$M'$$ be the subset of $$M$$ consisting of pairs, $$\{v,v'\}$$, for which $$v$$ and $$v'$$ are not adjacent in $$\Gamma$$. We have $$\vert M\vert=\sum_{i=1}^{t}\left(\left\lfloor\tfrac{1}{2}\vert\mathrm{core}(B_i)\vert\right\rfloor-1\right)\geq\frac{\vert C_t\vert}{2}-2t\geq\frac{n}{4}(1-o(1)).$$ The expected size of $$M'$$ is thus $$(1-p)n\left(\tfrac{1}{4}-o(1)\right)=\frac{n}{4}(1-o(1))$$, and by the Chernoff bound from Lemma 4.1 we have $$\Pr\left(\vert M'\vert\leq\frac{n}{5}\right)\leq 2e^{-\left(\frac{1}{5}+o(1)\right)^2\frac{(1-p)n}{12}}\leq e^{-\left(\frac{1}{300}+o(1)\right)n},$$ which is $$o(1)$$. Thus a.a.s. $$M'$$ contains at least $$n/5$$ pairs, and by Lemma 5.3 each of these lies in some $$4$$–cycle of $$F$$. We now show that we can “beam up” every vertex not yet supported on $$F$$ by a $$4$$–cycle using a pair from $$M'$$. By construction we have at most $$n/2$$ unsupported vertices from $$V(\Gamma)\setminus C_t$$ and at most $$9$$ unsupported vertices from $$\mathrm{core}(B_0)$$. Assume that $$\vert M' \vert \geq n/5$$. Fix a vertex $$w\in V(\Gamma)\setminus C_t$$. For each pair $$\{v,v'\}\in M'$$, let $$X_{v,v'}$$ be the event that $$w$$ is adjacent to both $$v$$ and $$v'$$. We now observe that if $$X_{v,v'}$$ occurs for some pair $$\{v,v'\}\in M'$$ lying in $$\mathrm{core}(B_i)$$, then $$w$$ is supported on $$F$$. By construction, $$w$$ is not adjacent to at least one of $$\{x_i, y_i\}$$, let us say without loss of generality $$x_i$$. Hence, $$\{x_i,v,v',w\}$$ is an induced $$4$$–cycle in $$\Gamma$$ which contains $$w$$ and which corresponds to a vertex of $$F$$. The probability that $$X_{v,v'}$$ fails to happen for every pair $$\{v,v'\}\in M'$$ is exactly $$(1-p^2)^{\vert M'\vert}\leq(1-p^2)^{n/5}\leq e^{-\frac{p^2n}{5}}=e^{-5\log n}.$$ Thus the expected number of vertices $$w \in V(\Gamma)\setminus C_t$$ which fail to be in the support of $$F$$ is at most $$\frac{n}{2}e^{-5\log n}=o(1)$$, whence by Markov’s inequality a.a.s. no such bad vertex $$w$$ exists. Finally, we deal with the possible $$9$$ left-over vertices $$b_1, b_2, \ldots, b_9$$ from $$\mathrm{core}(B_0)$$ that we have not yet supported. We observe that since $$\mathrm{core}(B_0)$$ contains at most $$8n^{\frac{1}{3}}(\log{n})^{\frac{2}{3}}$$ vertices (as we are assuming and as occurs a.a.s., see the discussion before Lemma 5.3), we do not stop the process with $$B_0$$, so $$\mathrm{core}(B_1)$$ is non-empty and contains at least $$6\log n$$ vertices. As stated in Lemma 5.3, each unsupported vertex $$b_i$$ is adjacent to all other vertices in $$\mathrm{core}(B_0)$$, and in particular to both of $$\{x_1, y_1\}$$.
If $$b_i$$ fails to be adjacent to some vertex $$v\in \mathrm{core}(B_1)$$, then the set $$\{b_i, x_1, y_1, v\}$$ induces a $$4$$–cycle corresponding to a vertex of $$F$$ and whose support contains $$b_i$$. The probability that there is some $$b_i$$ not supported in this way is at most $$9\Pr\left(b_i\ \text{adjacent to all of}\ \mathrm{core}(B_1)\right)\leq 9p^{6\log n}=o(1).$$ Thus a.a.s. we can “beam up” each of the vertices $$b_1, \ldots, b_9$$ to $$F$$ using a vertex $$v\in \mathrm{core}(B_1)$$, and the support of the component $$F$$ in the square graph $$\square(\Gamma)$$ contains all vertices of $$\Gamma$$. This shows that a.a.s. $$\Gamma \in \mathcal{CFS}$$, and concludes the proof of the theorem. ■ Remark 5.4. The constant $$5$$ in Theorem 5.1 is not optimal, and indeed it is not hard to improve on it slightly, albeit at the expense of some tedious calculations. We do not try to obtain a better constant, as we believe that the order of the upper bound we have obtained is not sharp. We conjecture that the actual threshold for $$\mathcal{CFS}$$ occurs when $$p(n)$$ is of order $$n^{-1/2}$$ (see the discussion below Theorem 5.7), but a proof of this is likely to require significantly more involved and sophisticated arguments than the present article. □ A simple lower bound for the emergence of the $$\mathcal{CFS}$$ property can be obtained from the fact that if $$\Gamma\in\mathcal{CFS}$$, then $$\Gamma$$ must contain at least $$n-3$$ squares; if $$p(n)\ll n^{-\frac{3}{4}}$$, then by Markov’s inequality a.a.s. a graph in $${\mathcal G}(n,p)$$ contains only $$o(n)$$ squares and thus cannot be in $$\mathcal{CFS}$$. Below, in Theorem 5.7, we prove a better lower bound, showing that the order of the upper bound we proved in Theorem 5.1 is not off by a factor of more than $$(\log n)^{3/2}$$. Lemma 5.5. Let $$\Gamma$$ be a graph and let $$C$$ be the subgraph of $$\Gamma$$ supported on a given connected component of $$\square(\Gamma)$$. Then there exists an ordering $$v_1<v_2< \cdots <v_{\vert C\vert}$$ of the vertices of $$C$$ such that for all $$i\ge3$$, $$v_i$$ is adjacent in $$\Gamma$$ to at least two vertices preceding it in the order. □ Proof As $$C$$ is the support of a component of $$\square(\Gamma)$$, it contains at least one induced $$4$$–cycle. Let $$v_1, v_2$$ be a pair of non-adjacent vertices from such an induced $$4$$–cycle. Then the two other vertices $$\{v_3,v_4\}$$ of the $$4$$–cycle are both adjacent in $$\Gamma$$ to both of $$v_{1}$$ and $$v_{2}$$. If this is all of $$C$$, then we are done. Otherwise, we know that each $$4$$–cycle in $$C$$ is “connected” to the cycle $$F=\{v_{1},v_{2},v_{3},v_{4}\}$$ via a sequence of induced $$4$$–cycles pairwise intersecting in pairwise non-adjacent vertices. In particular, there is some such $$4$$–cycle whose intersection with $$F$$ is either a pair of non-adjacent vertices in $$F$$ or three vertices of $$F$$; either way, we may add the new vertex next in the order. Continuing in this way and using the fact that the number of vertices not yet reached forms a strictly decreasing sequence of non-negative integers, the lemma follows. ■ Proposition 5.6. Let $$\delta>0$$. Suppose $$p\leq\frac{1}{\sqrt{n}\log n}$$. Then a.a.s. for $$\Gamma\in{\mathcal G}(n,p)$$, no component of $$\square(\Gamma)$$ has support containing more than $$4\log n$$ vertices of $$\Gamma$$. □ Proof Let $$\delta>0$$. Let $$m=\left\lceil \min\left(4\log n, 4\log \left(\frac{1}{p}\right) \right)\right\rceil$$, with $$p\leq 1/\left(\sqrt{n}\log n\right)$$. We shall show that a.a.s.
there is no ordered $$m$$–tuple of vertices $$v_1<v_2<\cdots <v_m$$ from $$\Gamma$$ such that for every $$i\ge 3$$ the vertex $$v_i$$ is adjacent to at least two vertices from $$\{v_j: 1\leq j <i\}$$. By Lemma 5.5, this is enough to establish our claim. Let $$v_1<v_2<\cdots < v_m$$ be an arbitrary ordered $$m$$–tuple of vertices from $$V(\Gamma)$$. For $$i\geq 3$$, let $$A_i$$ be the event that $$v_i$$ is adjacent to at least two vertices in the set $$\{v_j: 1\leq j <i\}$$. We have: $$\Pr(A_i)=\sum_{j=2}^{i-1}\binom{i-1}{j}p^j(1-p)^{i-j-1}. \qquad (2)$$ As in the proof of Theorem 4.5 we consider the quotients of successive terms in the sum to show that its order is given by the term $$j=2$$. To see this, observe: $$\frac{\binom{i-1}{j+1}p^{j+1}(1-p)^{i-j-2}}{\binom{i-1}{j}p^{j}(1-p)^{i-j-1}}\leq\frac{i-j-1}{j+1}\cdot\frac{p}{1-p}<mp,$$ where the final inequality holds for $$n$$ sufficiently large and $$p=p(n)$$ satisfying our assumption. Since $$m=O\left(\log n\right)$$ and $$p=o(n^{-1/2})$$ we have, again for $$n$$ large enough, that $$mp=o(1)$$, and we may bound the sum in equation (2) by a geometric series to obtain the bound: $$\Pr(A_i)=\binom{i-1}{2}p^2(1-p)^{i-3}(1+O(mp))\leq\frac{(i-1)^2}{2}p^2(1+O(mp)).$$ Now let $$A=\bigcap_{i=3}^m A_i$$. Note that the events $$A_i$$ are mutually independent, since they are determined by disjoint edge-sets. Thus we have: $$\Pr(A)=\prod_{i=3}^m\Pr(A_i)\leq\prod_{i=3}^m\left(\frac{(i-1)^2}{2}p^2(1+O(mp))\right)=\frac{\left((m-1)!\right)^2p^{2m-4}}{2^{m-2}}(1+O(m^2p)),$$ where in the last line we used the equality $$(1+O(mp))^{m-2} =1+O(m^2p)$$ to bound the error term. Thus we have that the expected number $$X$$ of ordered $$m$$–tuples of vertices from $$\Gamma$$ for which $$A$$ holds is at most: $$\mathbb{E}(X)=\frac{n!}{(n-m)!}\Pr(A)\leq n^m\cdot 4e^2\left(\frac{m^2p^{2-4/m}}{2e^2}\right)^m(1+O(m^2p))=4e^2\left(\frac{nm^2p^{2-4/m}}{2e^2}\right)^m(1+O(m^2p)),$$ where in the first line we used the inequality $$(m-1)!\leq e (m/e)^{m}$$. We now consider the quantity $$f(n,m,p)=\frac{nm^2p^{2-4/m}}{2e^2},$$ which is raised to the $$m^{\textrm{th}}$$ power in the inequality above. We claim that $$f(n,m,p)\leq e^{-1+\log 2+o(1)}$$. We have two cases to consider: Case 1: $$m=\lceil 4\log n\rceil$$. Since $$4\log n \leq 4\log(1/p)$$, we deduce that $$p\leq n^{-1}$$. Then $$f(n,m,p)=\frac{n(4\log n)^2p^{2-o(1)}}{2e^2}\leq n^{-1+o(1)}\leq e^{-1+\log 2+o(1)}.$$ Case 2: $$m=\lceil 4\log(1/p)\rceil$$. First, note that $$p^{-4/m}=\exp\left(\frac{4\log(1/p)}{\lceil 4 \log 1/p\rceil}\right)\leq e$$. Also, for $$p$$ in the range $$[0, n^{-1/2}(\log n)^{-1}]$$ and $$n$$ large enough, $$p^2(\log(1/p))^2$$ is an increasing function of $$p$$ and is thus at most: $$\frac{1}{n(\log n)^2}\left(\frac{1}{2}\log n\right)^2\left(1+O\left(\frac{\log\log n}{\log n}\right)\right)=\frac{1}{4}n^{-1}(1+o(1)).$$ Plugging this into the expression for $$f(n,m,p)$$, we obtain: $$f(n,m,p)=(1+o(1))\frac{16n(\log(1/p))^2p^{2-4/m}}{2e^2}\leq(1+o(1))\frac{2}{e}=e^{-1+\log 2+o(1)}.$$ Thus, in both cases (1) and (2) we have $$f(n,m,p)\leq e^{-1 +\log 2+o(1)}$$, as claimed, whence $$\mathbb{E}(X)\leq 4e^2\left(f(n,m,p)\right)^m(1+O(m^2p))\leq 4e^2e^{-(1-\log 2)m+o(m)}(1+O(m^2p))=o(1).$$ It follows from Markov’s inequality that the non-negative, integer-valued random variable $$X$$ is a.a.s. equal to $$0$$. In other words, a.a.s. there is no ordered $$m$$–tuple of vertices in $$\Gamma$$ for which the event $$A$$ holds and, hence by Lemma 5.5, no component in $$\square(\Gamma)$$ covering more than $$m\leq 4\log n$$ vertices of $$\Gamma$$. ■ Theorem 5.7. Suppose $$p\leq\frac{1}{\sqrt{n} \log n}$$. Then a.a.s. $$\Gamma\in{\mathcal G}(n,p)$$ is not in $$\mathcal{CFS}$$. □ Proof To show that $$\Gamma\not\in\mathcal{CFS}$$, we first show that, for $$p\leq \frac{1}{\sqrt{n} \log n}$$, a.a.s. there is no non-empty clique $$K$$ such that $$\Gamma=\Gamma' \star K$$. Indeed the standard Chernoff bound guarantees that a.a.s. no vertex in $$\Gamma$$ has degree greater than $$\sqrt{n}$$.
Thus to prove the theorem, it is enough to show that a.a.s. there is no connected component $$C$$ in $$\square(\Gamma)$$ whose support contains all the vertices of $$\Gamma$$. Proposition 5.6 does this by establishing the stronger bound that a.a.s. there is no connected component $$C$$ whose support covers more than $$4\log n$$ vertices. ■ While Theorem 5.7 improves on the trivial lower bound of $$n^{-3/4}$$, it is still off from the upper bound for the emergence of the $$\mathcal{CFS}$$ property established in Theorem 5.1. It is a natural question to ask where the correct threshold is located. Remark 5.8. We strongly believe that there is a sharp threshold for the $$\mathcal{CFS}$$ property analogous to the one we established for the $$\mathcal{AS}$$ property. What is more, we believe this threshold should essentially coincide with the threshold for the emergence of a giant component in the auxiliary square graph $$\square(\Gamma)$$. Indeed, our arguments in Proposition 5.6 and Theorem 5.1 both focus on bounding the growth of a component in $$\square(\Gamma)$$. Heuristically, we would expect a giant component in $$\square(\Gamma)$$ to emerge for $$p(n)=cn^{-1/2}$$, for some constant $$c>0$$, when the expected number of common neighbors of a pair of non-adjacent vertices in $$\Gamma$$ is $$c^2$$, and thus the expected number of distinct vertices in $$4$$–cycles which meet a fixed $$4$$–cycle in a non-edge is $$2c^2$$. What the precise value of $$c$$ should be is not entirely clear (a branching process heuristic suggests $$\sqrt{\displaystyle\frac{\sqrt{17}-3}{2}}$$ as a possible value, see Remark 6.2); the dependencies in the square graph make its determination a delicate matter. □ 6 Experiments Theorems 4.4 and 4.5 show that $$\left(\log n/n\right)^{\frac{1}{3}}$$ is a sharp threshold for the family $$\mathcal{AS}$$ and Theorem 5.1 shows that $$n^{-\frac{1}{2}}$$ is the right order of magnitude of the threshold for $$\mathcal{CFS}$$. Below we provide some empirical results on the behavior of random graphs near the threshold for $$\mathcal{AS}$$ and the conjectured threshold for $$\mathcal{CFS}$$. We also compare our experimental data with analogous data at the connectivity threshold. Our experiments are based on various algorithms that we implemented in $$\texttt{C++}$$; all source code and data can be downloaded from www.wescac.net/research.html or math.columbia.edu/~jason. We begin with the observation that computer simulations of $$\mathcal{AS}$$ and $$\mathcal{CFS}$$ are tractable. Indeed, it is easily seen that there are polynomial-time algorithms for deciding whether a given graph is in $$\mathcal{AS}$$ and/or $$\mathcal{CFS}$$. Testing for $$\mathcal{AS}$$ by examining each block and determining whether it witnesses $$\mathcal{AS}$$ takes $$O(n^5)$$ steps, where $$n$$ is the number of vertices. The $$\mathcal{CFS}$$ property is harder to detect, but essentially reduces to determining the component structure of the square graph. The square graph can be produced in polynomial time and, in polynomial time, one can find its connected components and check the support of these components in the original graph. Using our software, we tested random graphs in $${\mathcal G}(n,p)$$ for membership in $$\mathcal{AS}$$ for $$n\in\{300+100k\mid0\leq k\leq12\}$$ and $$\{p(n)=\alpha\left(\log n/n\right)^{\frac{1}{3}}\mid \alpha=0.80+0.1k,\,0\leq k\leq 9\}$$.
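For concreteness, the following Python sketch illustrates the square-graph test just described (an illustration only, not the $$\texttt{C++}$$ implementation used for the experiments; the function names are placeholders). It expects an adjacency-set dictionary keyed by $$0,\ldots,n-1$$, for example the output of build_gnp from the earlier sketch; it enumerates induced $$4$$–cycles, joins two of them when they share a diagonal (a pair of non-adjacent vertices), and checks whether some component is supported on every vertex. It ignores the clique join factor $$\Gamma=\Gamma'\star K$$ appearing in the definition of $$\mathcal{CFS}$$, which requires a separate check.

```python
from itertools import combinations

def induced_squares(adj):
    """All induced 4-cycles; each square is recorded once per diagonal (so twice)."""
    squares = []
    for a, c in combinations(range(len(adj)), 2):          # candidate diagonal {a, c}
        if c in adj[a]:
            continue
        for b, d in combinations(sorted(adj[a] & adj[c]), 2):
            if d not in adj[b]:                            # second diagonal {b, d}
                squares.append((a, b, c, d))
    return squares

def has_full_support_component(adj):
    """True if some component of the square graph is supported on every vertex."""
    squares = induced_squares(adj)
    parent = list(range(len(squares)))                     # union-find over squares
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    by_diagonal = {}
    for idx, (a, b, c, d) in enumerate(squares):
        by_diagonal.setdefault((a, c), []).append(idx)
        by_diagonal.setdefault((b, d), []).append(idx)
    for idxs in by_diagonal.values():                      # squares sharing a diagonal
        for j in idxs[1:]:
            parent[find(j)] = find(idxs[0])
    support = {}
    for idx, square in enumerate(squares):
        support.setdefault(find(idx), set()).update(square)
    return any(len(s) == len(adj) for s in support.values())
```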
For each pair $$(n,p)$$ of this type, we generated $$400$$ random graphs and tested each for membership in $$\mathcal{AS}$$. (This number of tests ensures that, with probability approximately $$95\%$$, the measured proportion of $$\mathcal{AS}$$ graphs is within at most $$0.05$$ of the actual proportion.) The results are summarized in Figure 2. The data suggest that, fixing $$n$$, the probability that a random graph is in $$\mathcal{AS}$$ increases monotonically in the range of $$p$$ we are considering, rising sharply from almost zero to almost one. Fig. 2. Experimental prevalence of $$\mathcal{AS}$$ at density $$\alpha\left(\frac{\log n}{n}\right)^{\frac{1}{3}}$$. In Figure 3, we display the results of testing random graphs for membership in $$\mathcal{CFS}$$ for $$n\in\{100+100k\mid 0\leq k\leq 15\}$$ and $$\{p(n)=\alpha n^{-\frac{1}{2}} \mid \alpha=0.700+0.025k,\,0\leq k\leq 8\}$$. For each pair $$(n,p)$$ of this type, we generated $$400$$ random graphs and tested each for membership in $$\mathcal{CFS}$$. (This number of tests ensures that, with probability approximately $$95\%$$, the measured proportion of $$\mathcal{CFS}$$ graphs is within at most $$0.05$$ of the actual proportion.) The data suggest that, fixing $$n$$, the probability that a random graph is in $$\mathcal{CFS}$$ increases monotonically in the considered range of $$p$$: rising sharply from almost zero to almost one inside a narrow window. Fig. 3. Experimental prevalence of $$\mathcal{CFS}$$ membership at density $$\alpha n^{-\frac{1}{2}}$$. Remark 6.1 (Block and core sizes). For each graph $$\Gamma$$ tested, the $$\mathcal{AS}$$ software also keeps track of how many non-adjacent pairs $$\{x,y\}$$ — that is, how many blocks — were tested before finding one sufficient to verify membership in $$\mathcal{AS}$$; if no such block is found, then all non-adjacent pairs have been tested and the graph is not in $$\mathcal{AS}$$. (A set of such data comes with the source code, and more is available upon request.) At densities near the threshold, this number of blocks is generally very large compared to the number of blocks tested at densities above the threshold. For example, in one instance with $$(n,\alpha)=(1000,0.89)$$, verifying that the graph was in $$\mathcal{AS}$$ was accomplished after testing just $$86$$ blocks, while at $$(1000,0.80)$$, a typical test checked all $$422961$$ blocks (expected number: $$423397$$) before concluding that the graph is not $$\mathcal{AS}$$. At the same $$(n,\alpha)$$, another test found that the graph was in $$\mathcal{AS}$$, but only after $$281332$$ tests. These data are consonant with the spirit of our proofs of Theorems 4.5 and 4.4: in the former case, we showed that no “good” block exists, while in the latter we show that every block is good.
We believe that right at the threshold we should have some intermediate behavior, with the expected number of “good” blocks increasing continuously from $$0$$ to $$(1-p)\binom{n}{2}(1-o(1))$$. What is more, we expect that the more precise threshold for the $$\mathcal{AS}$$ property, coinciding with the appearance of a single “good” block, should be located “closer” to our lower bound than to our upper one, that is, at $$p(n)= \left((1-\epsilon)\log n/n\right)^{1/3}$$, where $$\epsilon(n)$$ is a sequence of strictly positive real numbers tending to $$0$$ as $$n\rightarrow \infty$$ (most likely decaying at a rate just faster than $$(\log n)^{-2}$$, see below). Our experimental data, which exhibit a steep rise in $$\Pr(\mathcal{AS})$$ strictly before the value $$\alpha$$ hits one, give some support to this guess. Finally, our observations on the number of blocks suggest a natural way to understand the influence of higher-order terms on the emergence of the $$\mathcal{AS}$$ property: at exactly the threshold for $$\mathcal{AS}$$, the event $$E(v,w)$$ that a pair of non-adjacent vertices $$\{v,w\}$$ gives rise to a “good” block is rare and, despite some mild dependencies, the number $$N$$ of pairs $$\{v,w\}$$ for which $$E(v,w)$$ occurs is very likely to be distributed approximately like a Poisson random variable. The probability $$\Pr(N\geq 1)$$ would then be a very good approximation for $$\Pr(\mathcal{AS})$$. “Good” blocks would thus play a role for the emergence of the $$\mathcal{AS}$$ property in random graphs analogous to that of isolated vertices for connectivity in random graphs. When $$p=\left((1-\epsilon) \log n/n\right)^{\frac{1}{3}}$$, the expectation of $$N$$ is roughly $$ne^{-n^{\epsilon}(1-\epsilon)\log n}$$. This expectation is $$o(1)$$ when $$0<\epsilon(n)=\Omega(1/n)$$ and is $$1/2$$ when $$\epsilon(n)=(1+o(1)) \log 2/(\log n)^2$$. This suggests that the emergence of $$\mathcal{AS}$$ should occur when $$\epsilon(n)$$ decays just a little faster than $$(\log n)^{-2}$$. □ Remark 6.2. Our data suggest that the prevalence of $$\mathcal{CFS}$$ is closely related to the emergence of a giant component in the square graph. Indeed, below the experimentally observed threshold for $$\mathcal{CFS}$$, not only is the support of the largest component in the square graph not all of $$\Gamma \in {\mathcal G}(n,p)$$, but in fact the size of the support of the largest component is an extremely small proportion of the vertices (see Figure 4). In the Erdős–Rényi random graph, a giant component emerges when $$p$$ is around $$1/n$$, that is, when vertices begin to expect at least one neighbor; this corresponds to a paradigmatic condition of expecting at least one child for survival of a Galton–Watson process (see [9] for a modern treatment of the topic). The heuristic observation that when $$p=cn^{-1/2}$$ the vertices of a diagonal $$e$$ in a fixed $$4$$-cycle $$F$$ expect to have $$c^2$$ common neighbors outside the $$4$$-cycle, giving rise to an expected $$\binom{c^2+2}{2}-1$$ new $$4$$-cycles connected to $$F$$ through $$e$$ in $$\square(\Gamma)$$, suggests that $$c=\sqrt{\displaystyle\frac{\sqrt{17}-3}{2}}\approx 0.7494$$ could be a reasonable guess for the location of the threshold for the $$\mathcal{CFS}$$ property. Our data, although not definitive, appear somewhat supportive of this guess: see Figure 4, which is based on the same underlying data set as Figure 3.
We note that unlike an Erdős–Rényi random graph, the square graph $$\square(\Gamma)$$ exhibits some strong local dependencies, which may make the determination of the exact location of its phase transition a much more delicate affair. □ Fig. 4. Fraction of vertices supporting the largest $$\mathcal{CFS}$$–subgraph at density $$\alpha n^{-\frac{1}{2}}$$. Remark 6.3. For comparison with the threshold data for $$\mathcal{AS}$$ and $$\mathcal{CFS}$$, we include below a similar figure of experimental data for connectivity for $$\alpha$$ from $$0.8$$ to $$1.4$$, where $$p=\frac{\alpha\log n}{n}$$. Given what we know about the thresholds for connectivity and the $$\mathcal{AS}$$ property, this last set of data together with Figure 2 should serve as a warning not to draw overly strong conclusions: the graphs we tested are sufficiently large for the broader picture to emerge, but probably not large enough to allow us to pinpoint the exact location of the threshold for $$\mathcal{CFS}$$. □ Fig. 5. Experimental prevalence of connectedness at density $$\alpha\frac{\log n}{n}$$. Funding This work was supported by the National Science Foundation [DMS-0739392 to J.B., 1045119 to M.F.H.] and by a Simons Fellowship to J.B. Acknowledgments J.B. thanks the Barnard/Columbia Mathematics Department for their hospitality during the writing of this article. J.B. thanks Noah Zaitlen for introducing him to the joy of cluster computing and for his generous time spent answering remedial questions about programming. Also, thanks to Elchanan Mossel for several interesting conversations about random graphs. M.F.H. and T.S. thank the organizers of the 2015 Geometric Groups on the Gulf Coast conference, at which some of this work was completed. This work benefited from several pieces of software, the results of one of which are discussed in Section 6. Some of the software written by the authors incorporates components previously written by J.B. and M.F.H. jointly with Alessandro Sisto. Another related useful program was written by Robbie Lyman under the supervision of J.B. and T.S. during an REU program supported by NSF Grant DMS-0739392, see [30]. This research was supported, in part, by a grant of computer time from the City University of New York High Performance Computing Center under NSF Grants CNS-0855217, CNS-0958379, and ACI-1126113. We thank the anonymous referee for their helpful comments. References [1] Agol I. Groves D. and Manning. J. “The virtual Haken conjecture.” Documenta Mathematica 18 (2013): 1045–87. [2] Alon N. and Spencer. J. H. The probabilistic method, 3rd ed. Wiley-Interscience Series in Discrete Mathematics and Optimization. Hoboken, NJ: John Wiley & Sons, Inc., 2008. [3] Babson E. Hoffman C. and Kahle. M. “The fundamental group of random 2-complexes.” Journal of the American Mathematical Society 24, no. 1 (2011): 1–28. [4] Behrstock J. and Charney. R.
“Divergence and quasimorphisms of right-angled Artin groups.” Mathematische Annalen 352 (2012): 339–56. [5] Behrstock J. and Druţu. C. “Divergence, thick groups, and short conjugators.” Illinois Journal of Mathematics 58, no. 4 (2014): 939–80. [6] Behrstock J. Druţu C. and Mosher. L. “Thick metric spaces, relative hyperbolicity, and quasi-isometric rigidity.” Mathematische Annalen 344, no. 3 (2009): 543–95. [7] Behrstock J. and Hagen. M. “Cubulated groups: thickness, relative hyperbolicity, and simplicial boundaries.” Geometry, Groups, and Dynamics 10, no. 2 (2016): 649–707. [8] Behrstock J. M. Hagen F. and Sisto. A. “Thickness, relative hyperbolicity, and randomness in Coxeter groups.” Algebraic and Geometric Topology, to appear, with an appendix written jointly with Pierre-Emmanuel Caprace. [9] Bollobás B. and Riordan. O. “A simple branching process approach to the phase transition in $$G_{n, p}$$.” The Electronic Journal of Combinatorics 19, no. 4 (2012): P21. [10] Bowditch B. H. “Relatively hyperbolic groups.” International Journal of Algebra and Computation 22, no. 3 (2012): 1250016, 66. [11] Brock J. and Masur. H. “Coarse and synthetic Weil–Petersson geometry: quasi-flats, geodesics, and relative hyperbolicity.” Geometry & Topology 12 (2008): 2453–95. [12] Calegari D. and Wilton H. 3–manifolds everywhere. ArXiv:math.GR/1404.7043 (accessed on April 29, 2014). [13] Caprace P.-E. “Buildings with isolated subspaces and relatively hyperbolic Coxeter groups.” Innovations in Incidence Geometry 10 (2009): 15–31. In proceedings of Buildings & Groups, Ghent. [14] Charney R. and Farber M. Random groups arising as graph products. Algebraic and Geometric Topology 12 (2012): 979–96. [15] Coxeter H. S. “Discrete groups generated by reflections.” Annals of Mathematics (1934): 588–621. [16] Dahmani F. Guirardel V. and Przytycki. P. “Random groups do not split.” Mathematische Annalen 349, no. 3 (2011): 657–73. [17] Dani P. and Thomas. A. “Divergence in right-angled Coxeter groups.” Transactions of the American Mathematical Society 367, no. 5 (2015): 3549–77. [18] Davis M. W. The geometry and topology of Coxeter groups, vol. 32 of London Mathematical Society Monographs Series. Princeton, NJ: Princeton University Press, 2008. [19] Davis M. W. and Januszkiewicz. T. “Right-angled Artin groups are commensurable with right-angled Coxeter groups.” Journal of Pure and Applied Algebra 153, no. 3 (2000): 229–35. [20] Davis M. W. and Kahle. M. “Random graph products of finite groups are rational duality groups.” Journal of Topology 7, no. 2 (2014): 589–606. [21] Erdős P. and Rényi. A. “On the evolution of random graphs. II.” Bulletin de l’Institut International de Statistique 38 (1961): 343–47. [22] Farb B. “Relatively hyperbolic groups.” Geometric and Functional Analysis (GAFA) 8, no. 5 (1998): 810–40. [23] Gilbert E. “Random graphs.” Annals of Mathematical Statistics 4 (1959): 1141–44. [24] Gromov M. Hyperbolic groups. In Essays in group theory, edited by Gersten S., vol. 8 of MSRI Publications. New York, NY: Springer, 1987. [25] Gromov M.
Asymptotic invariants of infinite groups. Geometric Group Theory, edited by Niblo G. A. and Roller M. A., vol. 2. London Mathematical Society Lecture Notes 182 (1993): 1–295. [26] Gromov M. “Random walk in random groups.” Geometric and Functional Analysis (GAFA) 13, no. 1 (2003): 73–146. [27] Haglund F. and Wise. D. T. “Coxeter groups are virtually special.” Advances in Mathematics 224, no. 5 (2010): 1890–903. [28] Kahle M. “Sharp vanishing thresholds for cohomology of random flag complexes.” Annals of Mathematics (2) 179, no. 3 (2014): 1085–107. [29] Linial N. and Meshulam. R. “Homological connectivity of random 2-complexes.” Combinatorica 26, no. 4 (2006): 475–87. [30] Lyman R. Algorithmic Computation of Thickness in Right-Angled Coxeter Groups. Undergraduate honors thesis, Columbia University, 2015. [31] Moussong G. Hyperbolic Coxeter groups. PhD thesis, Ohio State University, 1988. Available at https://people.math.osu.edu/davis.12/papers/Moussongthesis.pdf (accessed on November 21, 2016). [32] Mühlherr B. “Automorphisms of graph-universal Coxeter groups.” Journal of Algebra 200, no. 2 (1998): 629–49. [33] Niblo G. A. and Reeves. L. D. “Coxeter groups act on CAT(0) cube complexes.” Journal of Group Theory 6, no. 3 (2003): 399. [34] Ollivier Y. and Wise. D. “Cubulating random groups at density less than 1/6.” Transactions of the American Mathematical Society 363, no. 9 (2011): 4701–733. [35] Osin D. V. “Relatively hyperbolic groups: intrinsic geometry, algebraic properties, and algorithmic problems.” Memoirs of the American Mathematical Society 179, no. 843 (2006): vi+100. [36] Ruane K. and Witzel. S. “CAT(0) cubical complexes for graph products of finitely generated abelian groups.” arXiv preprint arXiv:1310.8646 (2013), to appear in New York Journal of Mathematics. [37] Sultan H. “The asymptotic cones of Teichmüller space and thickness.” Algebraic and Geometric Topology 15, no. 5 (2015): 3071–106. © The Author(s) 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: [email protected]. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).
### Journal
International Mathematics Research Notices, Oxford University Press
Published: Mar 1, 2018
Export lists, citations | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9316823482513428, "perplexity": 215.24870620620712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511761.78/warc/CC-MAIN-20181018084742-20181018110242-00060.warc.gz"} |
https://mc-stan.org/docs/2_28/functions-reference/math-functions.html | # 28 Mathematical Functions
This appendix provides the definition of several mathematical functions used throughout the manual. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931914210319519, "perplexity": 1837.907759803501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00385.warc.gz"} |
http://www.maa.org/publications/maa-reviews/real-algebraic-geometry?device=desktop | Real Algebraic Geometry
Publisher:
Springer
Publication Date:
2013
Number of Pages:
100
Format:
Paperback
Series:
Unitext 66
Price:
39.95
ISBN:
9783642362422
Category:
Monograph
[Reviewed by
Fernando Q. Gouvêa
, on
08/15/2013
]
It all starts from such a simple idea: let’s study the curves and surfaces we get by writing down algebraic equations. Nevertheless, algebraic geometry has achieved a rather fearsome reputation for being difficult and arcane. Part of it is the result of its long history, but the truth is that the massive arsenal required for fruitful attacks on some of the main problems really does pose an effective barrier to the non-participating fan.
Enter this little book, which contains notes from lectures by Vladimir Arnold on some very elementary algebraic geometry. This is mathematics that anyone can enjoy, requiring only the rudiments of coordinate geometry and algebra.
The title seems to be intentionally ambiguous. Most of the lectures do deal with the geometry of curves and surfaces defined over $$\mathbb{R}$$, so it is technically correct. The editors suggest, however, that Arnold might also have been quietly insisting that this kind of algebraic geometry is a bit more real than the super-abstract kind.
Arnold begins the book by stating Hilbert’s problem on the topological structure of real algebraic curves, but the book becomes much more elementary after that, with chapters on conics (both their geometry and their role in physics) and on projective geometry. The next lecture breaks the boundary suggested by the title to discuss complex algebraic curves (just the very basics). The book concludes with a very interesting lecture on an elementary problem: if you delete n lines from the plane, how many connected pieces are you left with? A research article about this last question (but for the projective plane) is appended to the lectures, followed by notes from the editors.
In summary, a neat little book that your students will understand, but that still contains meaty ideas and difficult problems for them to work on.
Fernando Q. Gouvêa is planning to ask a group of first-year students about lines on a plane, just to see what they can do with the problem. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6275067329406738, "perplexity": 622.8176426950968}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163998951/warc/CC-MAIN-20131204133318-00088-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/221128/probability-of-paths-to-the-boundary-of-a-tree | Probability of paths to the boundary of a tree
Let $G_n$ be the $4$-regular tree of depth $n$, that is to say the finite graph given by the ball of radius $n$ in the Cayley graph of the free group on two generators. By the root I mean the vertex at the center. If I choose vertices uniformly at random from $G_n$, what is the probability that when I choose the root it will be connected to the boundary of the graph by a path through vertices I have chosen already? Equivalently, if I take a random ordering of the vertices of $G_n$, what is the probability that the segment preceding the root will contain a path to the boundary? More precisely, I would like to know whether this probability approaches $0$ as $n$ goes to infinity.
Perhaps easiest to think about it in this way; assign every vertex an independent time which is uniform on $[0,1]$. If the vertices of $G_n$ are added in increasing order of their times, then this is equivalent to adding them one by one uniformly as you describe. But this way we can easily think about all the vertices in the infinite graph simultaneously.
Now condition on the time of the root. Given that this time is $p$, the set of vertices preceding the root contains each other vertex independently with probability $p$. Call this the set of open vertices. Effectively this is percolation with a random probability $p$ (itself chosen uniformly on $[0,1]$).
If we do percolation with probability $p$, then there exists an infinite open path starting from the root with positive probability iff $p>1/3$. This is almost the same as the probability of survival of a Galton-Watson branching process whose offspring distribution is Binomial($3,p$); the difference is that here the root has $4$ possible offspring, while each other vertex has only $3$. The probability (as a function of $p$) can be quite easily obtained as the solution of a recursive equation. To get the probability that the set of open vertices contains an infinite path from the root, integrate over $p$ from $1/3$ to $1$.
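A quick numerical rendering of this recipe (just a sketch; the function names and the fixed-point iteration are mine, not part of the argument above): the survival probability $s(p)$ of the Binomial($3,p$) process is the largest solution of $s=1-(1-ps)^3$, the root then reaches infinity with probability $1-(1-p\,s(p))^4$, and averaging that over $p$ uniform on $[0,1]$ gives the limiting probability.

```python
def survival(p, iterations=300):
    """Survival probability of a Galton-Watson process with Binomial(3, p) offspring."""
    s = 1.0                      # iterate from 1 to reach the largest fixed point
    for _ in range(iterations):
        s = 1.0 - (1.0 - p * s) ** 3
    return s

def limit_probability(steps=20_000):
    """P(the segment before the root contains a path to infinity), p ~ Uniform[0, 1]."""
    total = 0.0
    for k in range(steps):
        p = (k + 0.5) / steps    # midpoint rule on [0, 1]
        total += 1.0 - (1.0 - p * survival(p)) ** 4
    return total / steps

print(limit_probability())
```

In particular the limit is a positive constant (the integrand is positive for every $p>1/3$), so the probability in the question does not tend to $0$.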
The limit as $n\to\infty$ of the probability that the set of open vertices contains a path from the root to distance $n$ is then just this probability that the set of open vertices contains an infinite path from the root. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9746314883232117, "perplexity": 58.80431288242358}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141723602.80/warc/CC-MAIN-20201203062440-20201203092440-00143.warc.gz"} |
https://mathoverflow.net/questions/378016/integration-by-parts-on-a-k%C3%A4hler-manifold/378022 | # Integration by parts on a Kähler manifold
I am trying to make sense of integration by parts on a Kähler manifold $$X$$ equipped with a Kähler metric $$\omega$$. Given two smooth real functions $$f$$ and $$h$$ on $$X$$, I want to write down the integration by parts formula for the following: $$\int_{X} h \Delta_{\omega} f \omega^n.$$ In local coordinates $$\Delta_{\omega} f = \sum_{i, j} g^{i \bar\jmath} \partial_{\bar\jmath} \partial_{i}f$$. My guess is that $$\int_{X} h \Delta_{\omega} f \omega^n = -\int_{X} g^{i \bar\jmath} \partial_{\bar\jmath} h \partial_{i} f \omega^n.$$ However, this is not quite right since the LHS is real while the right hand side is not necessarily so. What is the correct formula? Is there a general strategy for thinking of such things in the complex case?
• Try to write in terms of the exterior derivative and the Hodge star, since integration by parts is Stokes' theorem applied to an exact form. – Ben McKay Dec 3 '20 at 9:16
• The formula that you wrote is correct, and the RHS is real! The integrand is not always real-valued, but the integral is. – YangMills Dec 3 '20 at 13:55
• Is it obvious that the integral has to be real?@YangMills – penny Dec 3 '20 at 14:05
• It follows from the equation that you wrote, since the LHS is real. Proving your formula is a simple exercise using the divergence theorem (and the definition of covariant derivatives for a Kahler metric). – YangMills Dec 3 '20 at 14:36
Assume $$(X, d = \partial + \bar{\partial})$$ to be a compact Kähler manifold. The Kähler metric $$g$$ induces a metric on all differential forms, which we will also call $$g$$. It follows that $$\omega^n$$ defines a Hilbert space of $$i$$-forms on $$X$$ by $$\langle u, v\rangle = \int_X g(u,v) \omega^n.$$ For functions $$u, v$$, this is $$\langle u, v\rangle = \int_X u \bar{v} \omega^n.$$
The adjoints $$d^*$$ and $$\bar{\partial}^*$$ are the operators such that for all $$(i-1)$$-form $$u$$ and all $$i$$-form $$v$$, $$\langle du,v\rangle = \langle u,d^*v\rangle,\quad\langle\bar{\partial}u,v\rangle = \langle u, \bar{\partial}^* v\rangle.$$ The expression $$\bar{\partial}^*$$ equals $$-* \bar{\partial} *$$, where $$*$$ is the Hodge-$$*$$ and it involves the Kähler metric $$g$$.
The Laplacian is then $$\Delta_d = d^* d + d d^* = 2 (\bar{\partial} \bar{\partial}^* +\bar{\partial}^* \bar{\partial}) = 2 \Delta_{\bar{\partial}}.$$ Since $$h,f$$ are functions, we have $$\langle h, \Delta_{\bar{\partial}} f\rangle = \langle h, (\bar{\partial} \bar{\partial}^* +\bar{\partial}^* \bar{\partial}) f\rangle = \langle\bar{\partial}^* h, \bar{\partial}^* f\rangle + \langle\bar{\partial} h, \bar{\partial} f\rangle = \langle\bar{\partial} h, \bar{\partial} f\rangle.$$ In terms of your notation, this means $$\int_X h \Delta_\omega f \omega^n = \int_X g(\bar{\partial} h, \bar{\partial} f) \omega^n.$$
• Generally people use $\langle u, v\rangle$ rather than $<u, v>$. The former is given by $\langle u, v\rangle$. – Michael Albanese Dec 3 '20 at 12:10
• I made the edit @MichaelAlbanese mentions. Note also that \quad or \qquad or one of the other spacing commands is probably preferable to repeated \ \ \ . – LSpice Dec 3 '20 at 13:58
• Sorry I am a bit confused. Usually $\bar \partial^{\ast}$ would take a $(1, 1)$ form then gives back a smooth function. Here you seem to be acting it on a function directly. – penny Dec 3 '20 at 14:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 33, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980088472366333, "perplexity": 162.8145589174161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369721.76/warc/CC-MAIN-20210305030131-20210305060131-00023.warc.gz"} |
https://huggingface.co/transformers/v4.5.1/main_classes/processors.html | # Processors¶
This library includes processors for several traditional tasks. These processors can be used to process a dataset into examples that can be fed to a model.
## Processors¶
All processors follow the same architecture which is that of the DataProcessor. The processor returns a list of InputExample. These InputExample can be converted to InputFeatures in order to be fed to the model.
class transformers.data.processors.utils.DataProcessor[source]
Base class for data converters for sequence classification data sets.
get_dev_examples(data_dir)[source]
Gets a collection of InputExample for the dev set.
get_example_from_tensor_dict(tensor_dict)[source]
Gets an example from a dict with tensorflow tensors.
Parameters
tensor_dict – Keys and values should match the corresponding Glue tensorflow_dataset examples.
get_labels()[source]
Gets the list of labels for this data set.
get_test_examples(data_dir)[source]
Gets a collection of InputExample for the test set.
get_train_examples(data_dir)[source]
Gets a collection of InputExample for the train set.
tfds_map(example)[source]
Some tensorflow_datasets datasets are not formatted the same way the GLUE datasets are. This method converts examples to the correct format.
class transformers.data.processors.utils.InputExample(guid: str, text_a: str, text_b: Optional[str] = None, label: Optional[str] = None)[source]
A single training/test example for simple sequence classification.
Parameters
• guid – Unique id for the example.
• text_a – string. The untokenized text of the first sequence. For single sequence tasks, only this sequence must be specified.
• text_b – (Optional) string. The untokenized text of the second sequence. Only must be specified for sequence pair tasks.
• label – (Optional) string. The label of the example. This should be specified for train and dev examples, but not for test examples.
to_json_string()[source]
Serializes this instance to a JSON string.
class transformers.data.processors.utils.InputFeatures(input_ids: List[int], attention_mask: Optional[List[int]] = None, token_type_ids: Optional[List[int]] = None, label: Optional[Union[int, float]] = None)[source]
A single set of features of data. Property names are the same names as the corresponding inputs to a model.
Parameters
• input_ids – Indices of input sequence tokens in the vocabulary.
• attention_mask – Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: Usually 1 for tokens that are NOT MASKED, 0 for MASKED (padded) tokens.
• token_type_ids – (Optional) Segment token indices to indicate first and second portions of the inputs. Only some models use them.
• label – (Optional) Label corresponding to the input. Int for classification problems, float for regression problems.
to_json_string()[source]
Serializes this instance to a JSON string.
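Putting the three classes above together, a minimal custom processor for a hypothetical tab-separated file with columns text_a, text_b and label could look as follows. This is an illustrative sketch, not a processor shipped with the library; the file layout, file names and label set are assumptions.

```python
import csv
import os

from transformers.data.processors.utils import DataProcessor, InputExample


class CsvPairProcessor(DataProcessor):
    """Illustrative processor for a hypothetical TSV file: text_a, text_b, label."""

    def get_train_examples(self, data_dir):
        return self._create_examples(os.path.join(data_dir, "train.tsv"), "train")

    def get_dev_examples(self, data_dir):
        return self._create_examples(os.path.join(data_dir, "dev.tsv"), "dev")

    def get_labels(self):
        return ["0", "1"]

    def _create_examples(self, path, set_type):
        examples = []
        with open(path, encoding="utf-8") as f:
            for i, row in enumerate(csv.reader(f, delimiter="\t")):
                examples.append(
                    InputExample(
                        guid=f"{set_type}-{i}",
                        text_a=row[0],
                        text_b=row[1],
                        label=row[2] if set_type != "test" else None,
                    )
                )
        return examples
```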
## GLUE¶
General Language Understanding Evaluation (GLUE) is a benchmark that evaluates the performance of models across a diverse set of existing NLU tasks. It was released together with the paper GLUE: A multi-task benchmark and analysis platform for natural language understanding
This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB, QQP, QNLI, RTE and WNLI.
Those processors are:
• MrpcProcessor
• MnliProcessor
• MnliMismatchedProcessor
• Sst2Processor
• StsbProcessor
• QqpProcessor
• QnliProcessor
• RteProcessor
• WnliProcessor
Additionally, the following method can be used to load values from a data file and convert them to a list of InputExample.
glue.glue_convert_examples_to_features(examples, tokenizer: transformers.tokenization_utils.PreTrainedTokenizer, max_length: Optional[int] = None, task=None, label_list=None, output_mode=None)
Loads a data file into a list of InputFeatures
Parameters
• examples – List of InputExamples or tf.data.Dataset containing the examples.
• tokenizer – Instance of a tokenizer that will tokenize the examples
• max_length – Maximum example length. Defaults to the tokenizer’s max_len
• label_list – List of labels. Can be obtained from the processor using the processor.get_labels() method
• output_mode – String indicating the output mode. Either regression or classification
Returns
If the examples input is a tf.data.Dataset, will return a tf.data.Dataset containing the task-specific features. If the input is a list of InputExamples, will return a list of task-specific InputFeatures which can be fed to the model.
### Example usage¶
An example using these processors is given in the run_glue.py script.
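A condensed sketch of the same flow is shown below (the checkpoint name and data directory are placeholders, and this is not the run_glue.py script itself):

```python
from transformers import BertTokenizer
from transformers.data.processors.glue import MrpcProcessor, glue_convert_examples_to_features

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
processor = MrpcProcessor()

data_dir = "/path/to/glue/MRPC"           # placeholder: directory holding the MRPC files
examples = processor.get_train_examples(data_dir)

features = glue_convert_examples_to_features(
    examples,
    tokenizer,
    max_length=128,
    label_list=processor.get_labels(),
    output_mode="classification",
)
print(len(features), features[0].input_ids[:10], features[0].label)
```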
## XNLI¶
The Cross-Lingual NLI Corpus (XNLI) is a benchmark that evaluates the quality of cross-lingual text representations. XNLI is a crowd-sourced dataset based on MultiNLI (http://www.nyu.edu/projects/bowman/multinli/): pairs of text are labeled with textual entailment annotations for 15 different languages (including both high-resource languages such as English and low-resource languages such as Swahili).
It was released together with the paper XNLI: Evaluating Cross-lingual Sentence Representations
This library hosts the processor to load the XNLI data:
• XnliProcessor
Please note that since the gold labels are available on the test set, evaluation is performed on the test set.
An example using these processors is given in the run_xnli.py script.
## SQuAD¶
The Stanford Question Answering Dataset (SQuAD) is a benchmark that evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version (v1.1) was released together with the paper SQuAD: 100,000+ Questions for Machine Comprehension of Text. The second version (v2.0) was released alongside the paper Know What You Don’t Know: Unanswerable Questions for SQuAD.
This library hosts a processor for each of the two versions:
### Processors¶
Those processors are:
• SquadV1Processor
• SquadV2Processor
They both inherit from the abstract class SquadProcessor
class transformers.data.processors.squad.SquadProcessor[source]
get_dev_examples(data_dir, filename=None)[source]
Returns the evaluation example from the data directory.
Parameters
• data_dir – Directory containing the data files used for training and evaluating.
• filename – None by default, specify this if the evaluation file has a different name than the original one which is dev-v1.1.json and dev-v2.0.json for squad versions 1.1 and 2.0 respectively.
get_examples_from_dataset(dataset, evaluate=False)[source]
Creates a list of SquadExample using a TFDS dataset.
Parameters
• evaluate – Boolean specifying if in evaluation mode or in training mode
Returns
Examples:
>>> import tensorflow_datasets as tfds
>>> dataset = tfds.load("squad")
>>> training_examples = get_examples_from_dataset(dataset, evaluate=False)
>>> evaluation_examples = get_examples_from_dataset(dataset, evaluate=True)
get_train_examples(data_dir, filename=None)[source]
Returns the training examples from the data directory.
Parameters
• data_dir – Directory containing the data files used for training and evaluating.
• filename – None by default, specify this if the training file has a different name than the original one which is train-v1.1.json and train-v2.0.json for squad versions 1.1 and 2.0 respectively.
Additionally, the following method can be used to convert SQuAD examples into SquadFeatures that can be used as model inputs.
squad.squad_convert_examples_to_features(examples, tokenizer, max_seq_length, doc_stride, max_query_length, is_training, padding_strategy='max_length', return_dataset=False, threads=1, tqdm_enabled=True)
Converts a list of examples into a list of features that can be directly given as input to a model. It is model-dependant and takes advantage of many of the tokenizer’s features to create the model’s inputs.
Parameters
• examples – list of SquadExample
• tokenizer – an instance of a child of PreTrainedTokenizer
• max_seq_length – The maximum sequence length of the inputs.
• doc_stride – The stride used when the context is too large and is split across several features.
• max_query_length – The maximum length of the query.
• is_training – whether to create features for model evaluation or model training.
• return_dataset – Default False. Either ‘pt’ or ‘tf’. if ‘pt’: returns a torch.data.TensorDataset, if ‘tf’: returns a tf.data.Dataset
Returns
list of SquadFeatures
Example:
processor = SquadV2Processor()
examples = processor.get_dev_examples(data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=not evaluate,
)
These processors as well as the aforementioned method can be used with files containing the data as well as with the tensorflow_datasets package. Examples are given below.
### Example usage¶
Here is an example using the processors as well as the conversion method using data files:
# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
Using tensorflow_datasets is as easy as using a data file:
# tensorflow_datasets only handle Squad V1. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15252923965454102, "perplexity": 6804.290217072434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00754.warc.gz"} |
http://adas-fusion.eu/element/detail/adf38/nrb05%5D%5Bn/nrb05%5D%5Bn_v16ic1-2.dat | nrb05#n_v16ic1-2.dat
Photoexcitation-autoionisation Rate Coefficients
Ion
V16+
Filename
nrb05#n_v16ic1-2.dat
Full Path
Parent states
1s2 2s2 2p3 4S1.5
1s2 2s2 2p3 2D1.5
1s2 2s2 2p3 2D2.5
1s2 2s2 2p3 2P0.5
1s2 2s2 2p3 2P1.5
1s2 2s1 2p4 4P2.5
1s2 2s1 2p4 4P1.5
1s2 2s1 2p4 4P0.5
1s2 2s1 2p4 2D1.5
1s2 2s1 2p4 2D2.5
1s2 2s1 2p4 2S0.5
1s2 2s1 2p4 2P1.5
1s2 2s1 2p4 2P0.5
1s2 2p5 2P1.5
1s2 2p5 2P0.5
Recombined states
1s2 2s2 2p4 3P2.0
1s2 2s2 2p4 3P0.0
1s2 2s2 2p4 3P1.0
1s2 2s2 2p4 1D2.0
1s2 2s2 2p4 1S0.0
1s2 2s1 2p5 3P2.0
1s2 2s1 2p5 3P1.0
1s2 2s1 2p5 3P0.0
1s2 2s1 2p5 1P1.0
1s2 2p6 1S0.0
1s1 2s2 2p5 3P2.0
1s1 2s2 2p5 3P1.0
1s1 2s2 2p5 3P0.0
1s1 2s2 2p5 1P1.0
1s1 2s1 2p6 3S1.0
1s1 2s1 2p6 1S0.0
-------------------------------------------------------------------------------------------------------------- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9918559193611145, "perplexity": 2.964627690986114}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155676.21/warc/CC-MAIN-20180918185612-20180918205612-00272.warc.gz"} |
http://heattransfer.asmedigitalcollection.asme.org/article.aspx?articleid=1449778 | 0
Research Papers
# A Statistical Model of Bubble Coalescence and Its Application to Boiling Heat Flux Prediction—Part I: Model Development
Author and Article Information
Wen Wu
Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana Champaign, Urbana, IL [email protected]
Barclay G. Jones
Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana Champaign, Urbana, IL [email protected]
Ty A. Newell
Department of Mechanical Science and Engineering, University of Illinois at Urbana Champaign, Urbana, IL [email protected]
J. Heat Transfer 131(12), 121013 (Oct 15, 2009) (11 pages) doi:10.1115/1.4000024 History: Received January 29, 2009; Revised July 16, 2009; Published October 15, 2009
## Abstract
In this work a statistical model is developed by deriving the probability density function (pdf) of bubble coalescence on the boiling surface to describe the distribution of vapor bubble radius. Combining this bubble coalescence model with other existing models in the literature that describe the dynamics of bubble motion and the mechanisms of heat transfer, the surface heat flux in subcooled nucleate boiling can be calculated. By decomposing the surface heat flux into various components due to different heat transfer mechanisms, including forced convection, transient conduction, and evaporation, the effect of the bubble motion is identified and quantified. Predictions of the surface heat flux are validated with R134a data measured in boiling experiments and water data available in the literature, with an overall good agreement observed. Results indicate that there exists a limit of surface heat flux due to the increased bubble coalescence and the reduced vapor bubble lift-off radius as the wall temperature increases. Further investigation confirms the consistency between this limit value and the experimentally measured critical heat flux (CHF), suggesting that a unified mechanistic modeling to predict both the surface heat flux and CHF is possible. In view of the success of this statistical modeling, the authors tend to propose the utilization of probabilistic formulation and stochastic analysis in future modeling attempts on subcooled nucleate boiling.
## Figures
Figure 1: Typical flow boiling curve
Figure 2: Bubble in different stages during its life span
Figure 3: Bubble life span and bubble coalescence
Figure 4: Simplified bubble growth and bubble sliding curves
Figure 5: Bubble relative position
Figure 6: Bubble coalescence probability
Figure 7: Nucleation site arrangement and compaction factor
Figure 8: Influence of bubble during its sliding
Figure 9: Heat transfer mechanisms at and near the wall
Figure 10: Transient conduction duration
Figure 11: (a) Schematic diagram of vapor bubble; (b) free-body diagram of bubble
Figure 12: Data dependency of the model
https://www.physicsforums.com/threads/categorizing-a-physics-demonstration.799338/ | # Categorizing a Physics Demonstration
1. Feb 22, 2015
### Richardj1701
ABSTRACT:
I have a collection of springs and disks (masses) to choose from. I have a solid rod fixed to the ground. I slide a piece of sheet metal into the rod to act as a base. I now slide a disk into the rod. Then a spring. Then another disk. Then another spring. And one final disk. I now lift the system (from the sheet metal base) to a desired height. After releasing the system, it drops and we can observe conservation of momentum since the top-most disk will shoot upwards.
GOAL:
Under what category of physics would you place this? I want to know what I have to research in order to analyze the physics behind this reaction. (My end goal is to maximize the speed at which that top most disk flies off.)
ATTEMPTS:
I've looked into coupled, spring-mass systems, but I don't think that this will help me since my system is not so much coupled but sitting on each other.
2. Feb 22, 2015
### Bystander
First quick dirty impression? Newton's cradle.
3. Feb 23, 2015
### CWatters
Perhaps I misunderstand the description but how does dropping the system cause the top disc "shoot upwards"?
https://plainmath.net/12172/x-2-plus-y-2-equal-25-what-is-dydx-dx-dy | # x^{2} +y^{2} =25 What is dydx dx/dy?
Question
Differential equations
$$x^{2} +y^{2} =25$$
What is dy/dx?
2021-02-10
$$\frac{dy}{dx} = -\frac{x}{y}$$
Since $y$ is not isolated on its own side, we have to differentiate implicitly. We take the derivative with respect to $x$, since the question asks for $\frac{dy}{dx}$. After taking the derivative we get: $$2x + 2y\frac{dy}{dx} = 0$$
Since we want to solve for $\frac{dy}{dx}$, we must isolate it. First, subtract $2x$ from both sides to get: $$2y\frac{dy}{dx} = -2x$$
Then divide both sides by $2y$ to get: $$\frac{dy}{dx} = -\frac{x}{y}$$
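As a quick sanity check (my addition, not part of the original answer), the same derivative can be reproduced with sympy's implicit differentiation helper:

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Symbol('y')

# Implicit differentiation of x**2 + y**2 = 25, treating y as a function of x.
dydx = sp.idiff(x**2 + y**2 - 25, y, x)
print(dydx)   # -x/y
```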
https://en.wikisource.org/wiki/Squaring_the_circle | # Squaring the circle
Squaring the circle
(Journal of the Indian Mathematical Society, v, 1913, 138)
Let $PQR$ be a circle with centre $O$, of which a diameter is $PR$. Bisect $PO$ at $H$ and let $T$ be the point of trisection of $OR$ nearer $R$. Draw $TQ$ perpendicular to $PR$ and place the chord $RS=TQ$.
Join $PS$, and draw $OM$ and $TN$ parallel to $RS$. Place a chord $PK=PM$, and draw the tangent $PL=MN$. Join $RL$, $RK$ and $KL$. Cut off $RC=RH$. Draw $CD$ parallel to $KL$, meeting $RL$ at $D$.
Then the square on $RD$ will be equal to the circle $PQR$ approximately.
For $RS^{2}=\frac{5}{36}d^{2}$,
where $d$ is the diameter of the circle.
Therefore $PS^{2}=\frac{31}{36}d^{2}$.
But $PL$ and $PK$ are equal to $MN$ and $PM$ respectively.
Therefore $PK^{2}=\frac{31}{144}d^{2}$, and $PL^{2}=\frac{31}{324}d^{2}$.
Hence $RK^{2}=PR^{2}-PK^{2}=\frac{113}{144}d^{2}$, and $RL^{2}=PR^{2}+PL^{2}=\frac{355}{324}d^{2}$.
But $\frac{RK}{RL}=\frac{RC}{RD}=\frac{3}{2}\sqrt{\frac{113}{355}}$, and $RC=\frac{3}{4}d$.
Therefore $RD=\frac{d}{2}\sqrt{\frac{355}{113}}=r\sqrt{\pi}$, very nearly.
Note.—If the area of the circle be $140{,}000$ square miles, then $RD$ is greater than the true length by about an inch.
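A quick numerical check (an aside, not part of the original note) shows why the construction is so good: the error in $RD$ is only about $8\times10^{-8}$ of the radius.

```python
import math

r = 1.0                                     # unit radius, so d = 2r
rd_construction = r * math.sqrt(355 / 113)  # RD from the construction above
rd_exact = r * math.sqrt(math.pi)           # side of the square with the circle's area
print(rd_construction - rd_exact)           # ~7.5e-8 * r

# For a circle of area 140,000 square miles the radius is about 211 miles,
# so the error is roughly 211 miles * 7.5e-8, i.e. about an inch.
```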
This work is in the public domain in the United States because it was published before January 1, 1925.
The author died in 1920, so this work is also in the public domain in countries and areas where the copyright term is the author's life plus 99 years or less. This work may also be in the public domain in countries and areas with longer native copyright terms that apply the rule of the shorter term to foreign works. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 43, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7956920266151428, "perplexity": 196.7838035120413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149205.56/warc/CC-MAIN-20200714051924-20200714081924-00373.warc.gz"} |
https://kg15.herokuapp.com/abstracts/315 | # Zigzags on Interval Greedoids
### Yulia Kempner Holon Institute of Technology
#### Vadim E. Levit Ariel University
Minisymposium: GENERAL SESSION TALKS
Content: The pivot operation, i.e., exchanging exactly one element in a basis, is one of the fundamental algorithmic tools in linear algebra. Korte and Lov\'{a}sz introduced a combinatorial analog of this operation for bases of greedoids. Let $X,Y$ be two bases of a greedoid $(U,\mathcal{F})$ such that $\left\vert X-Y\right\vert =1$ and $X\cap Y\in\mathcal{F}$. Then $X$ can be obtained from $Y$ by \textit{a pivot operation} where the element $y\in Y-X$ is pivoted out and the element $x\in X-Y$ is pivoted in. We extend this definition to all feasible sets of the same cardinality and introduce \textit{lower and upper zigzags} comprising these sets. A \textit{zigzag} is a sequence of feasible sets $P_{0},P_{1},...,P_{2m}$ such that: \emph{(i)} these sets have only two different cardinalities; \emph{(ii)} any two consecutive sets in this sequence differ by a single element. Zigzag structures allow us to give new metric characterizations of some subclasses of interval greedoids including antimatroids and matroids.
Back to all abstracts | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503217339515686, "perplexity": 1144.0523171703194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057973.90/warc/CC-MAIN-20210926205414-20210926235414-00108.warc.gz"} |
http://pixel-druid.com/frobenius-kernel.html | § Frobenius Kernel
§ Some facts about conjugates of a subgroup
Let $H$ be a subgroup of $G$. Define $H_g \equiv \{ g h g^{-1} : h \in H \}$.
• We will always have $e \in H_g$ since $geg^{-1} = e \in H_g$.
• Pick $k_1, k_2 \in H_g$. This gives us $k_i = gh_ig^{-1}$. So, $k_1 k_2 = g h_1 g^{-1} g h_2 g^{-1} = g (h_1 h_2) g^{-1} \in H_g$.
• Thus, each conjugate of a subgroup is again a subgroup, and it always intersects the original subgroup (at the very least, both contain $e$).
• For inverse, send $k = ghg^{-1}$ to $k^{-1} = g h^{-1} g^{-1}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975124001502991, "perplexity": 516.5145068829011}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00572.warc.gz"} |
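A small computational illustration (my own addition, with $S_3$ written as permutation tuples) that the conjugate set $H_g$ really is closed under products and inverses:

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]], composition of permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

H = {(0, 1, 2), (1, 0, 2)}   # subgroup {e, (0 1)} of S3
g = (1, 2, 0)                # a 3-cycle
Hg = {compose(compose(g, h), inverse(g)) for h in H}

print(tuple(range(3)) in Hg)                                # contains e
print(all(compose(a, b) in Hg for a in Hg for b in Hg))     # closed under products
print(all(inverse(a) in Hg for a in Hg))                    # closed under inverses
```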
https://brilliant.org/problems/brilliant-elasticity/ | # Brilliant Elasticity
Brilliant Industries creates a consumer good called problems. Suppose that the demand for problems is relatively elastic, and the supply of problems is relatively inelastic. Which one of these scenarios will see the greatest relative increase in the quantity of problems at the equilibrium point? (Assume that each scenario will affect either the supply of or the demand for the product.)
https://worldbuilding.stackexchange.com/questions/224551/how-close-to-themselves-could-be-two-planets-orbits | How close to themselves could be two planets orbits?
There's a pattern within our solar system, and many others as well: the planetary orbits lie at gradually increasing distances. I assume they could be spread less regularly (I'm looking at the Solar System because extrasolar measurements are still clouded by instrumental errors and nobody from my neighborhood has seen their complete picture first-hand), but natural forces kept them away from each other, maybe at the moment of formation, like one stone creating circles on water - I'm not sure. What I would like to know is whether there is any point at which two planets on very tightly spaced circular orbits (say 1 AU and 1.01 AU) would no longer be stable, because when planet A closes in on planet B it suddenly feels a bigger acceleration from B than from the Sun. I assume there is such a point, but how do I calculate it, if my reasoning is right?
When I take the gravitational force equation like this:
# Constants and masses in SI units: G, Jupiter, Sun, Earth, 1 AU, Moon
U = {'G': 6.6743e-11, 'jm': 1.89813e+27, 'sm': 1.98847e+30, 'em': 5.9722e+24,
     'au': 1.495978707e+11, 'mm': 7.34767309e+22}
Fsun = U['G'] * U['sm'] * U['em'] / U['au']**2       # Sun's pull on Earth at 1 AU
Fmoon = U['G'] * U['mm'] * U['em'] / 384000000**2    # Moon's pull on Earth at ~384,000 km
print(Fsun, Fmoon, Fsun / Fmoon)
and get `3.5416715752424943e+22 1.986220425457726e+20 178.31211127668843`, I see that the Sun's pull on the Earth is still almost 200 times stronger than the Moon's. I assume that is why the Moon doesn't orbit the Sun alone but together with the Earth. But what if, for example, the Earth (or an Earth-like body) overtook a planet at 1.01 AU and the conditions were met for the two bodies to bind together? Is this the right equation to decide that, if the gravity of the other body were stronger than the Sun's, their orbits would no longer stay separate?
Is this the right approach? Or is it much more complicated? How close would a Jupiter have to pass by an Earth-like planet to disturb the latter's orbit within a Sun-like system?
** EDIT **
It appears to be the 3-body problem. I found a nice simulation showing what happens when a 500x-mass Jupiter orbits the same star. The 3-body code is taken from [https://github.com/zaman13/Three-Body-Problem-Gravitational-System]
• This kind of sounds like a three-body problem, which is to say, it is extremely complicated, highly chaotic, and no specific equation exists to solve for it, so I'm not sure if an answer is possible here. Great question, though. Feb 20 at 0:24
• Yep, it's like I'm thinking... thank you. I guess it would be answered with real time simulation.. I think I need to search for 3-body solutions. Feb 20 at 0:29
• It won't be easy.. the Github model is nice for a relatively stable setup, not for collisions. The accuracy of the outcome really depends on the time period you want to extend your analysis, and the number of decimals in your calculations. It can be proven that any numerical approximation of the 3-body problem will yield no definitive answer. But scientists are now trying to crack it for stars that collide into binary stars, I found a quite recent story in livescience about it, livescience.com/three-body-problem-statistical-solution.html Feb 20 at 20:35
When two objects orbit a central body, the closer their orbits pass to one-another's, the more likely they are to become co-orbital, collide, or for one to be ejected from the system.
What will happen is dependent upon a great many factors, including the masses of the bodies involved. Given that this is a potentially chaotic situation, there is no easy way to say which will occur other than to simulate the system and see what happens.
• Agreed. This seems to be more complicated. Feb 20 at 0:44
There are a few reasonably plausible ways to design a fictional solar system with planetary orbits quite close together.
Part One: Crazy Co-orbitals.
Epimetheus, a moon of Saturn, orbits the center of Saturn with a semi-major axis of 151,410 kilometers, plus or minus 10 kilometers.
Janus, another moon of Saturn, orbits the center of Saturn with a semi-major axis of 151,460 kilometers, plus or minus 10 kilometers.
That is a difference of approximately 30 to 70 kilometers.
Epimetheus has dimensions of about 129.8 by 114 by 106.2 kilometers.
Janus has dimensions of about 203 by 185 by 152.6 kilometers.
https://en.wikipedia.org/wiki/Epimetheus_(moon)
https://en.wikipedia.org/wiki/Janus_(moon)
So the radii of the two moons in their orbits largely overlap, and when the inner moon Epimetheus catches up with Janus they should collide and destroy each other.
But that doesn't happen to those co-orrbital moons.
Janus's orbit is co-orbital with that of Epimetheus. Janus's mean orbital radius from Saturn was, as of 2006, only 50 km less than that of Epimetheus, a distance smaller than either moon's mean radius. In accordance with Kepler's laws of planetary motion, the closer orbit is completed more quickly. Because of the small difference it is completed in only about 30 seconds less. Each day, the inner moon is an additional 0.25° farther around Saturn than the outer moon. As the inner moon catches up to the outer moon, their mutual gravitational attraction increases the inner moon's momentum and decreases that of the outer moon. This added momentum means that the inner moon's distance from Saturn and orbital period are increased, and the outer moon's are decreased. The timing and magnitude of the momentum exchange is such that the moons effectively swap orbits, never approaching closer than about 10,000 km. At each encounter Janus's orbital radius changes by ~20 km and Epimetheus's by ~80 km: Janus's orbit is less affected because it is four times as massive as Epimetheus. The exchange takes place close to every four years; the last close approaches occurred in January 2006,[15] 2010, 2014, and 2018, and the next in 2022. This is the only such orbital configuration known in the Solar System.[16]
https://en.wikipedia.org/wiki/Janus_(moon)
And it seems possible that the co-orbital status of Janus and Epimetheus, with them switching orbits approximately every four years, may have existed for tens of millions or hundreds of millions of years.
So a science fiction writer might consider designing a solar system where two planets orbit the star in such an orbit about a thousand times larger, with about 50,000 kilometers between the two planetary orbits. And if they have a program that can run orbital simulations, they might possibly determine how long such an orbital configuration would be stable for.
Of course, the orbital periods of the two planets around the star would be about one Earth year long, and it would take thousands of such orbits for the inner planet to catch up with the outer planet and for them to switch orbits.
So in the millennia between such events, a civilization could arise on one of the planets and develop astronomy and discover that the planets orbited around their star, and notice that the inner planet was catching up with the other planet. And if their math wasn't up to calculating what would happen, there might be widespread fear that the planets would collide or that one would be ejected from its orbit.
Part Two: Forbidden Zones.
Stephen H. Dole, in Habitable Planets for Man, 1964, discusses the spacing of the planets in our solar system among many other factors involved with planetary habitability.
https://www.rand.org/content/dam/rand/pubs/commercial_books/2007/RAND_CB179-1.pdf
On pages 48 to 52 Dole discusses the forbidden regions around the orbits of planets. The size of the forbidden region of a planet is calculated from the planet's mass, the mass of the star, and the semi-major axis of the planet's orbit. Within the forbidden region, smaller objects can not have stable orbits and so can not clump together to form planets.
According to Dole's calculation, about half of the Solar System is within the forbidden zones of various planets. Thus at most about twice as many planets of similar size could exist within that distance of the Sun. Of course other planets could have stable orbits farther out than the known planets.
I note that small changes in the mass of main sequence stars cause greater changes in their luminosity. If star A is one percent more or less massive than star B, its luminosity will be more than one percent higher or lower than the luminosity of star B.
So a planet in the habitable zone of a more massive and luminous star will be relatively farther out in the star's gravity field than a planet in the habitable zone of a less massive star. Thus the gravity of the planet should be stronger relative to the gravity of the star at its orbital distance, and the planet should have a larger forbidden region.
And a planet in the habitable zone of a less massive and luminous star should orbit deeper within the gravity of the star and thus the gravity of the star will be stronger relative to the planet's gravity, and that should make the planet's forbidden region smaller.
Or maybe it goes the other way around. I'm not sure.
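To put rough numbers on this kind of spacing argument, a common rule of thumb (my addition, not Dole's formula) is that two planets on circular orbits tend to stay stable when their orbital separation exceeds roughly 2√3 mutual Hill radii (Gladman 1993). A minimal sketch for two Earth-mass planets near 1 AU around a Sun-mass star:

```python
M_SUN = 1.98847e30      # kg
M_EARTH = 5.9722e24     # kg
AU = 1.495978707e11     # m

def mutual_hill_radius(a1, a2, m1, m2, m_star=M_SUN):
    """Mutual Hill radius of two planets on circular orbits, in metres."""
    return ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * 0.5 * (a1 + a2)

r_h = mutual_hill_radius(1.00 * AU, 1.01 * AU, M_EARTH, M_EARTH)
min_sep = 2.0 * 3.0 ** 0.5 * r_h        # ~2*sqrt(3) mutual Hill radii
print(r_h / AU, min_sep / AU)           # ~0.013 AU and ~0.044 AU

# The orbits in the question are only 0.01 AU apart, well inside this limit,
# so two Earth-mass planets at 1.00 and 1.01 AU would not be expected to stay put.
```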
The famous TRAPPIST-1 star system has 7 planets orbiting very close to a small dim star, TRAPPIST-1.
The planetary orbits thus have semi-major axes with small differences between them, hundreds of thousands of kilometers instead of tens or hundreds of millions.
All the planets would be visible from each other and would in many cases appear larger than the Moon in the sky of Earth[68]
https://en.wikipedia.org/wiki/TRAPPIST-1#Skies_and_impact_of_stellar_light
Three or four[42] planets – e, f, and g[129] or d, e, and f – are located inside the habitable zone.[59][x]
https://en.wikipedia.org/wiki/TRAPPIST-1#Habitable_zone
Because they are so close to their star, all the TRAPPIST-1 planets are probably tidally locked to it, and it is uncertain whether tidally locked planets can have life.
TRAPPIST-1 is believed to be older than the Sun, so the planets should have had their present orbits for billions of years.
The Kepler-36 system has the smallest known ratio between planetary orbits.
https://en.wikipedia.org/wiki/List_of_exoplanet_extremes#Orbital_characteristics
Kepler-36 b has an orbital semi-major axis of 0.1153 AU, and Kepler-36 c has an orbital semi-major axis of 0.1283 AU, a difference of 0.013 AU or 1,944,772.3 kilometers. The ratio between the semi-major axes of the orbits is 1.1127493.
https://en.wikipedia.org/wiki/Kepler-36
If a planet orbited its star at a distance of 1,000,000 kilometers, a planet orbiting 1.1127493 times as far out would be at a distance of 1,112,749.3 kilometers, 112,749.3 kilometers farther.
If a planet orbited its star at a distance of 1,000,000,000 kilometers, a planet orbiting 1.1127493 times as far out would be at a distance of 1,112,749,300 kilometers, 112,749,300 kilometers farther.
So one can imagine a solar system with a habitable planet at 0.898 AU, another at 1 AU, another at 1.1127493 AU, a fourth at 1.238211 AU, a fifth at 1.377818 AU, and so on. Though I don't know if such close orbits actually would be stable.
And I don't know how well those close orbits in the TRAPPIST-1 and Kepler-36 system agree with Dole's calculations of planetary forbidden regions.
Part Three: Trojan Planets.
One possible planetary arrangement would be for two planets to share the same orbit around their star, separated by 60 degrees, a trojan orbit.
However, all known trojan orbits in our solar system involve objects with vast mass differences between them.
For example, the mass of the Sun is about 330,000 times the mass of Earth, and thus about 6,000,000 times the mass of Mercury, and about 1,038.3889 times the mass of Jupiter. The largest asteroid in a trojan orbit is 624 Hektor, about 200 kilometers wide, about 0.0157 the diameter of Earth, and thus about 0.0000038 the volume of earth, and presumably having less than 0.0000038 the mass of Earth, which would be less than 0.000000011 the mass of jupiter.
As a rule of thumb, the system is likely to be long-lived if m1 > 100m2 > 10,000m3 (in which m1, m2, and m3 are the masses of the star, planet, and trojan).
https://en.wikipedia.org/wiki/Trojan_(celestial_body)
The dividing line between planets and brown dwarfs is about 13 Jupiter masses or about 4,131.4 Earth masses. A system with a planet near that upper mass limit and an Earth-mass trojan would be unlikely to be stable.
Similarly a system where the larger planet was Earth mass would probably not be stable unless the smaller object had less than 0.0001 Earth mass. And the smallest gravitationally rounded bodies in the solar system that could be called planets and not asteroids or other minor bodies have mass around 0.0001 Earth mass.
The mass range for habitable planets would probably be only about 10 or 100, certainly not enough for a larger planet and its smaller trojan planet to both be habitable.
But something even better than trojan obits has been proposed.
Part Four: Co-orbital Rings.
Astrophysicist Sean Raymond in his PlanetPlanet blog has a section devoted to designing imaginary solar systems with as many planets, preferably habitable, as possible.
In "The Ultimate Retrograde Solar System", Raymond found a paper by Smith and lisseur saying that if alternate planetary orbits were in opposite directions, planets could be packed closer together with stable orbits than if they all orbited in the same direction.
Raymond said that about four planets could orbit the Sun in the habitable zone if they all orbited in the same direction, but about eight could orbit the sun if they orbited in alternating directions.
But Raymond warns science fiction writers:
With the Retrograde Ultimate Solar System we are now swimming in impossible waters. Two planets can end up orbiting the same star in opposite directions, but only if their orbits are widely separated. I don’t know of any way that nature could produce a system of tightly-packed planets with each set of planets orbiting in the exact opposite direction of its immediate neighbors.
This means that the Ultimate Retrograde Solar System would have to be engineered. Created on purpose by some very intelligent and powerful beings.
Such a solar system with closely packed orbits alternating between prograde and retrograde would have to be artificial, and not natural in your story. Characters who know much about planetary formation would have to know that system was artificial.
The good part is coming.
in "The Ultimate Engineered Solar System" Raymond references another paper by Smith and Lissauer.
https://planetplanet.net/2017/05/03/the-ultimate-engineered-solar-system/
Smith and Lissauer show that a ring of co-orbital planets can be stable, if the planets all have the same mass and are all equally spaced along the orbit.
So Raymond designed a system with 42 Earth mass planets sharing the same orbit at 1 AU, equally spaced. Since an orbit with a radius of 1 AU has a circumference of about 940,000,000 kilometers, 42 equally spaced planets would be spaced about 22,400,000 kilometers apart on the orbit.
Then Raymond designed a system with six rings of 42 Earths apiece within the Sun's habitable zone, for a total of 252 Earth like planets.
If the planets were smaller, with about 0.1 times the mass of Earth (about the mass of Mars) there could be 13 rings of 89 such planets each for a total of 1,157 mars like planets in the habitable zone.
Then Raymond designs a system with planets with half of Earth's mass, and so 52 planets in each ring, and with the rings of planets alternating their orbital directions. That gives eight orbits with 52 planets per ring, a total of 416 planets.
But of course such a system could never form naturally.
I can only think of one way our 416-planet system could form. It must have been purposely engineered by a super-intelligent advanced civilization. I’m calling it the Ultimate Engineered Solar System.
We can make such a system millions and billions and trillions of times more plausible by making all the planetary rings orbiting in the same direction, which will reduce the number of rings and the total number of planets.
So that would make the system much more likely to form naturally. But you would probably still have to search millions and billions and trillions and quadrillions of star systems to find one like that which had formed naturally.
So any such star system in fiction would have been made artificially by an advanced civilization.
• You almost wrote a book. Extreme situations scary me and I even could not believe some of them actually exist. Trappist planets are so close to that sun but still outside HZ. Horseshoe orbits are like being engineered. Some orbits with ratios 1:-1 are more stable than in one direction. Simulations give different results according to entry arguments. So many things depends on each other so the simple looking question doesn't have an answer what is great cause we still have 99% topics to clearup, unlike those from Startrek who teleported in few seconds and were bored to the rest of their lives. Feb 21 at 7:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.727889358997345, "perplexity": 1073.5130257227688}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711475.44/warc/CC-MAIN-20221209181231-20221209211231-00028.warc.gz"} |
http://wikimechanics.org/momentum | Momentum
Sir Isaac Newton. Painted by G Kneller 1689.
Momentum is the modern English word used to translate the phrase "quantity of motion" that Newton uses on the very first page of his great book, the Principia.1 So to understand motion, WikiMechanics starts by using sensation to define the momentum as follows. Consider some particle P characterized by its wavevector $\overline{ \kappa }$ and the total number of quarks it contains $N$. Report on any changes relative to a frame of reference F which is characterized using $\tilde{ \kappa }$, the average wavevector of the quarks in F. Definition: the momentum of particle P in reference frame F is the ordered set of three numbers
\begin{align} \overline{p} \equiv \frac{ h }{ 2 \pi } \left( \overline{ \kappa }^{ \sf{P}} \! - N^{ \sf{P}} \, \tilde{ \kappa }^{ \sf{F}} \right) \end{align}
where $h$ and $\pi$ are constants. The norm of the momentum is marked without an overline
$p \equiv \left\| \, \overline{p} \, \right\|$
If $p=0$ we say that P is stationary or at rest in the F-frame. Alternatively, if $p \ne 0$ then we say that P is in motion.
Sensory interpretation: The momentum is defined by a difference between the wavevector of P and a scaled-down version of the frame's wavevector. Recall that the wavevector has previously been interpreted as a mathematical representation of visual sensation. So momentum can be understood as the visual contrast between a particle and its reference frame.
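A small numerical illustration of the definition (my own sketch, with made-up wavevector values purely to show the arithmetic):

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # Planck constant, J*s

def momentum(kappa_P, N_P, kappa_F_avg):
    """p = (h / 2*pi) * (kappa_P - N_P * kappa_F_avg), as defined above."""
    return (H_PLANCK / (2 * np.pi)) * (np.asarray(kappa_P) - N_P * np.asarray(kappa_F_avg))

p = momentum(kappa_P=[3.0, 0.0, 0.0], N_P=2, kappa_F_avg=[1.5, 0.0, 0.0])
print(p, np.linalg.norm(p))   # zero vector and zero norm: this particle is at rest in F
```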
Next step: conservation of momentum.
http://ito.wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=2013&number=1802 | WIAS Preprint No. 1802, (2013)
# Closed-loop optimal experiment design: Solution via moment extension
Authors
• Hildebrand, Roland
• Gevers, Michel
• Solari, Gabriel
2010 Mathematics Subject Classification
• 93E12
Keywords
• Optimal experiment design, Closed-loop identification, Convex programming, Power spectral density, Moment method
Abstract
We consider optimal experiment design for parametric prediction error system identification of linear time-invariant multiple-input multiple-output (MIMO) systems in closed-loop when the true system is in the model set. The optimization is performed jointly over the controller and the spectrum of the external excitation, which can be reparametrized as a joint spectral density matrix. We have shown in [18] that the optimal solution consists of first computing a finite set of generalized moments of this spectrum as the solution of a semi-definite program. A second step then consists of constructing a spectrum that matches this finite set of optimal moments and satisfies some constraints due to the particular closed-loop nature of the optimization problem. This problem can be seen as a moment extension problem under constraints. Here we first show that the so-called central extension always satisfies these constraints, leading to a constructive procedure for the optimal controller and excitation spectrum. We then show that, using this central extension, one can construct a broader set of parametrized optimal solutions that also satisfy the constraints; the additional degrees of freedom can then be used to achieve additional objectives. Finally, our new solution method for the MIMO case allows us to considerably simplify the proofs given in [18] for the single-input single-output case.
Appeared in
• IEEE Trans. Autom. Control, 60 (2015) pp. 1731--1744. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9496574401855469, "perplexity": 643.4079098474473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00131.warc.gz"} |
https://www.physicsforums.com/threads/probability-of-finding-a-particle-in-a-box.671124/ | # Probability of finding a particle in a box
1. Feb 11, 2013
### ahhppull
1. The problem statement, all variables and given/known data
Consider ψ (x) for a particle in a box:
ψ_n(x) = (2/L)^(1/2) sin(nπx/L)
Calculate the probability of finding the particle in the middle half of the box (i.e., L/4 ≤ x ≤ 3L/4). Also, using this solution show that as ''n'' goes to infinity you get the classical solution of 0.5.
2. Relevant equations
3. The attempt at a solution
I integrated and figured out the probability for n=1,2,3. For n=1 I got 1/2 + 1/π, which is about 0.818. For n = 2 I got 1/2, and for n=3 I got 0.430.
I don't understand where the problem asks "Also, using this solution show that as ''n'' goes to infinity you get the classical solution of 0.5." From my calculations, as n goes to infinity, it does not approach a value of 0.5.
2. Feb 11, 2013
### TSny
It will help if you can evaluate the integral for a general (unspecified) value of n and then look at the result as n goes to infinity.
From the 3 values you have obtained, you can't tell whether or not the probability is approaching any specific value as n gets large. (By the way, I agree with your answers for n = 1 and 2, but not for n = 3.)
3. Feb 11, 2013
### clamtrox
Drawing a picture of some of the solutions might give you some insight into what kind of answer you are looking for.
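As an editorial aside (not part of the thread), the general-n integral suggested above can be checked symbolically, for example with sympy; the middle-half probability works out to 1/2 - (sin(3nπ/2) - sin(nπ/2))/(2nπ), which tends to the classical value 1/2 as n grows:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
n = sp.symbols('n', positive=True)

psi = sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)
P = sp.simplify(sp.integrate(psi**2, (x, L / 4, 3 * L / 4)))
print(P)   # equivalent to 1/2 - (sin(3*n*pi/2) - sin(n*pi/2)) / (2*n*pi)

for k in (1, 2, 3, 100):
    print(k, float(P.subs(n, k)))   # 0.818..., 0.5, 0.394..., 0.5 -> approaches 1/2
```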
http://www.matlabtips.com/rounding-errors/ | Rounding errors
The first Ariane 5 rocket launched famously exploded because of a numerical error – Source: Wikipedia
I would like today to talk about one very important concept that is often overlooked when you learn to use a computer for data analysis: rounding errors. In Matlab, as in other languages, numbers are represented as a series of 0s and 1s in a way that depends on their type (double or int16, for instance). Each number type is inherently designed to provide a certain precision. Because of this, mathematical operations on these numbers can behave differently from what you would expect from simple mathematical formulas. In this post, I hope to help you identify these behaviors and avoid all the associated problems.
To get you acquainted with this problem, let’s immediately start with some code. Let’s try the following commands at the command line :
>> x=1;
>> y=x+1e-4;
>> x==y
ans =
0
This is expected. x and y are clearly different numbers so their comparison gives 0.
>> x=1;
>> y=x+1e-15;
>> x==y
ans =
0
Again, expected. y is very slightly different than y (by 1e-15).
Now, and I am sure you see where I am going, let’s try the following :
>> x=1;
>> y=x+1e-16;
>> x==y
ans =
1
y is now different from x by 1e-16 and apparently this is too small for Matlab to notice the difference. Why, you might ask? Simply said, x and y are stored, by default in Matlab, as double precision floating point numbers. This means that their values are stored on 8 bytes, or a series of 64 zeros and ones. As a result of this, they cannot have the infinite precision required for mathematical real numbers. Matlab actually provides you with a function to check for the minimal difference noticeable between two numbers: eps. Given the floating point design, it is different for every number. Please now try :
>> eps(x)
ans =
2.2204e-16
Now we understand why, once we went below 1e-15, Matlab started to mess things up.
I am sure you are thinking, this is way too small, why should I care about this?
First, if you use single instead of double, the precision will actually drop to 1e-7 (for a variable valued at 1).
Second, this sort of problems can happen more easily than you might think.
For instance, let’s suppose you record the value of a particular variable as a double and then at some point in your code, you change it to single to save some space as you create large arrays. I am going to use $\pi$ as it is a good real number example that is commonly used.
>> x=pi;
>> y=single(x);
Now later on in your code you check that y is valued at pi :
>> y==pi
ans =
0
and that can cause all sorts of problems, because here Matlab is, behind the scenes, comparing two numbers with different precisions. In some contexts, you would not be aware at all that there is a rounding error problem here, as checking the numbers would give you :
x =
3.1416
>> y
y =
3.1416
These problems can cause nasty little bugs that can be very hard to find. Sometimes your program might actually run without you noticing the problem. A good historical example of this kind of numerical error (although slightly more complicated) is the famous explosion of the first Ariane 5. Somewhere in the rocket control system, there was a number recording the horizontal velocity of the rocket. For some reason, that speed went over the range of the variable stored on the system. The control program of the rocket compared that truncated number to its expectation and thought that the rocket was going out of balance (which it was not), causing the rocket to falsely correct its trajectory and get out of balance for real.
It is also my experience that these rounding error problems occur when you change the precision of your numbers and forget about it.
Here is an example I found that reveals how this problem could take you a while to figure out :
>> round(256.49999)
ans =
256
>> round(single(256.49999))
ans =
257
Another example using integers :
>> x=int16(1);
>> y=int16(2);
>> x/y
ans =
1
x and y are stored as integers. When you divide x by y, Matlab expects the output to be an integer as well, so the result ends up being rounded to 1 (instead of 0.5). Therefore, if you are acquiring datasets stored as integers, don't forget to convert to the right data type before processing, or you are likely to have a lot of rounding error problems without any actual errors or warnings displayed.
Maybe one of the most famous numerical error examples is the year 2000 bug. At the time, many programs stored the year as 2 digits, so that 1990 would be stored as the integer 90. As a result, it was completely unclear how all these programs would deal with the year 2000. Although many predicted the end of the world, most programs were updated and very few glitches happened.
Maybe it is worth mentioning that the U.S. Naval Observatory, which runs the master clock that keeps the country’s official time, gave the date on its website as 1 Jan 19100.
http://www.investopedia.com/exam-guide/cfa-level-1/quantitative-methods/probability-distribution-properties.asp | # CFA Level 1
## Quantitative Methods - Common Probability Distribution Properties
Normal Distribution
The normal distribution is a continuous probability distribution that, when graphed as a probability density, takes the form of the so-called bell-shaped curve. The bell shape results from the fact that, while the range of possible outcomes is infinite (negative infinity to positive infinity), most of the potential outcomes tend to be clustered relatively close to the distribution's mean value. Just how close they are clustered is given by the standard deviation. In other words, a normal distribution is described completely by two parameters: its mean (μ) and its standard deviation (σ).
Here are other defining characteristics of the normal distribution: it is symmetric, meaning the mean value divides the distribution in half and one side is the exact mirror image of the other -that is, skewness = 0. Symmetry also requires that mean = median = mode. Its kurtosis (measure of peakedness) is 3 and its excess kurtosis (kurtosis - 3) equals 0. Also, if given 2 or more normally distributed random variables, the linear combination must also be normally distributed.
While any normal distribution will share these defining characteristics, the mean and standard deviation will be unique to the random variable, and these differences will affect the shape of the distribution. On the following page are two normal distributions, each with the same mean, but the distribution with the dotted line has a higher standard deviation.
Univariate vs. Multivariate Distributions
A univariate distribution specifies probabilities for a single random variable, while a multivariate distribution combines the outcomes of a group of random variables and summarizes probabilities for the group. For example, a stock will have a distribution of possible return outcomes; those outcomes when summarized would be in a univariable distribution. A portfolio of 20 stocks could have return outcomes described in terms of 20 separate univariate distributions, or as one multivariate distribution.
Earlier we indicated that a normal distribution is completely described by two parameters: its mean and standard deviation. This statement is true of a univariate distribution. For models of multivariate returns, the mean and standard deviation of each variable do not completely describe the multivariate set. A third parameter is required, the correlation, or co-movement, between each pair of variables in the set. For example, if a multivariate return distribution was being assembled for a portfolio of stocks, and a number of pairs were found to be inversely related (i.e. one increases at the same time the other decreases), then we must consider the overall effect on portfolio variance. For a group of assets that are not completely positively related, there is the opportunity to reduce overall risk (variance) as a result of the interrelationships.
For a portfolio distribution with n stocks, the multivariate distribution is completely described by the n mean returns, the n standard deviations and the n*(n - 1)/2 correlations. For a 20-stock portfolio, that's 20 lists of returns, 20 lists of variances of return and 20*19/2, or 190 correlations.
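As a small illustrative snippet (not from the original article), the parameter count grows quickly with portfolio size:

```python
# Number of parameters needed to describe a multivariate normal return model
# for n assets: n means, n standard deviations, and n*(n-1)/2 correlations.
def multivariate_param_counts(n: int):
    return n, n, n * (n - 1) // 2

print(multivariate_param_counts(20))   # (20, 20, 190), matching the 20-stock example above
```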
Confidence Intervals
http://vmu.phys.msu.ru/en/toc/2014/5 | Faculty of Physics
M.V.Lomonosov Moscow State University
Physics of nuclei and elementary particles
## Photonuclear reactions on titanium isotopes $^{46−50}$Ti
### S.S. Belyshev$^1$, L.Z. Dzhilavyan$^2$, B.S. Ishkhanov$^{1,3}$, I.M. Kapitonov$^1$, A.A. Kuznetsov$^3$, A.S. Kurilik$^1$, V.V. Khankin$^3$
Moscow University Physics Bulletin 2014. 69. N 5. P. 363
Yields of photonuclear reactions on the natural mixture of titanium isotopes were measured under exposure to a beam of bremsstrahlung $γ$-quanta with an end-point energy of 55 MeV. The results are compared with computations based on the TALYS model. It is shown that describing the cross sections of photonuclear reactions on Ti isotopes requires properly accounting for the isospin and configuration splitting of the giant dipole resonance.
Show Abstract
## Analytic solution to the problem of the gaussian beam propagation through nonuniform gas
### O.A. Nikolaeva, F.V. Shugaev
Moscow University Physics Bulletin 2014. 69. N 5. P. 374
The analytic solution to the problem of the Gaussian beam propagation through the non-uniform atmosphere has been derived.
Show Abstract
## Acoustic irradiation of a moving uniform linear array with a transverse distribution of quadrupoles from an arbitrary disposition of radiators
### E.Ya. Bubnov
Moscow University Physics Bulletin 2014. 69. N 5. P. 379
A study of the acoustic irradiation of a moving uniform linear array consisting of a transverse distribution of quadrupoles with an arbitrary disposition of radiators is described. Analytic expressions for the acoustic irradiation were obtained. A mathematical simulation of the pressure amplitude-angle curve vs. array speed, aerial element number, and radiation frequency was performed. The findings may be applied to the interpretation of experimental data related to gas-jet acoustics.
Show Abstract
Condensed matter physics
## Impurity magneto-optical absorption with the participation of resonance states of D$^−_2$ centers in quantum wells
### V.Ch. Zhukovskii$^1$, V.D. Krevchik$^2$, A.B. Grunin$^2$, A.V. Razumov$^2$, P.V. Krevchik$^2$
Moscow University Physics Bulletin 2014. 69. N 5. P. 384
The dependence of the average binding energy of the resonance $g$-state of a D$^−_2$ center on the induction of an external magnetic field in a quantum well with a parabolic confining potential is studied using the zero-range potential method. It has been shown that with an increasing exchange interaction, the character of the dependence of the average binding energy of the resonance $g$-state of the D$^−_2$ center on the induction of the external magnetic field changes. It has been assumed that in $GaAs/AlGaAs$ quantum wells alloyed with small Si donors, resonance D$^−_2$ states can exist under conditions of exchange interaction. It has been found that in spectra of impurity magneto-optical absorption in multiwall quantum structures, exchange interaction manifests itself as oscillations of interference origin.
Show Abstract
Biophysics and medical physics
## An urban ecosystem as a superposition of interrelated active media
### A.E. Sidorova, Yu.V. Mukhartova, L.V. Yakovenko
Moscow University Physics Bulletin 2014. 69. N 5. P. 392
A space-time model that treats the urban ecosystem as a superposition of active media is expanded to take the heterogeneity of anthropogenic and natural factors into account. The approach is based on representation of urban ecosystems as conjugated active media and is aimed at identifying the threshold values of control parameters. The theoretical basis of the system analysis of the stability of urban ecosystems is provided by synergetic ideas concerning autowave self-organization in active media.
Show Abstract
## Polarized fluorescence in investigation of rotational diffusion of the fluorescein family markers in bovine serum albumin solutions
### I.M. Vlasova, A.A. Kuleshova, A.A. Vlasov, A.M. Saletskii
Moscow University Physics Bulletin 2014. 69. N 5. P. 401
The analysis of polarized fluorescence of the fluorescein family markers was conducted and parameters of their rotational diffusion in bovine serum albumin solutions (BSA) were determined. The degree of fluorescence anisotropy of the markers increases in the BSA solutions, as well as the time of rotational relaxation of the markers, while the rotational-diffusion coefficient of the markers decreases. The differences in the rotational-diffusion parameters between the markers are determined by the values of the electronegativity of the atoms in their structural formulas: the increase of the electronegativity of the atoms in the structural formulas of the markers results in the increase of the degree of fluorescence anisotropy, a decrease of the rotational-diffusion coefficient, and in the increase of the rotational-relaxation time both in the solutions without the protein and with BSA.
Show Abstract
## Uncertainty in quantum mechanics and biophysics of complex systems
### V.M. Eskov, V.V. Eskov, T.V. Gavrilenko, M.I. Zimin
Moscow University Physics Bulletin 2014. 69. N 5. P. 406
In accordance with the Heisenberg uncertainty principle, a similar dynamics (behavior) for complex biosystems is postulated. Five peculiar properties and thirteen differences of these specific three-type systems (complexity) from traditional deterministic and stochastic systems determine the need for introducing the uncertainty principle for these specific systems. The value of the special boundary volume of the phase space, V$_{min}^G$ (which is considered a quasiattractor), inside of which the state vector of a biosystem moves chaotically and permanently, is the right side of this inequality. The volumes V$_{min}^G$ and V$_{max}^G$ really represent the peculiar properties of each dynamic biosystem (or each three-type system) as complexity. Particular examples from biomechanics prove the impossibility of using the deterministic or stochastic approach for the description of complexity, and only methods of the new theory of chaos (self-organization) can be useful for the description of complex biosystems. The possibilities of the practical application of this theory in various fields of biology and medicine are also discussed.
Show Abstract
Astronomy, astrophysics, and cosmology
## “Meander”-like and “slit”-like variations in the flux of solar cosmic rays
### G.P. Lyubimov, V.I. Tulupov, N.A. Vlasova
Moscow University Physics Bulletin 2014. 69. N 5. P. 412
Results of studying the origin of “meander”-like and “slit”-like variations in fluxes of solar cosmic rays (SCR) are presented as exemplified by the events that occurred on March 19, 1990 and December 14–15, 2006. Experimental data that were obtained by the GRANAT, ACE, Wind, STEREO-A, and STEREO-B spacecraft are used. The analysis is based on the data of observations of dynamic structures in the solar atmosphere and their continuation in the heliosphere, as well as on an empirical “reflection” model of the movement, accumulation, and modulation of cosmic rays. A structural source of these variations on the Sun is discussed, which is shaped as discrete magnetic plasma loops and arches located between active regions. Such structures in the form of magnetic-plasma tubes transferred from the chromosphere and corona and filled with SCR rotate together with the Sun and, when crossing the detector location region, cause the meander-like and slit-like space-time variations in SCR.
Show Abstract
Physics of Earth, atmosphere, and hydrosphere
## Towards the proof of Kolmogorov hypotheses
### V.P. Yushkov
Moscow University Physics Bulletin 2014. 69. N 5. P. 421
A closure scheme for the Kolmogorov spectrum at low and high frequencies is proposed. It allows the second Kolmogorov hypothesis to be validated, provided the first one is extended. The proposed closure scheme adds the energy of turbulence to the list of controlling parameters and explains energy transfer over the spectrum by wave interaction between the incompressible and adiabatic components of turbulence.
Show Abstract
## The evolution of velocity profiles and turbulent viscosity in a system of currents with wind-induced and density flows
### B.I. Samolyubov, I.N. Ivanova
Moscow University Physics Bulletin 2014. 69. N 5. P. 426
The results of an in-situ study and mathematical modeling of a system of currents with wind-induced and density flows are presented. The model is based on the obtained dependences of its key parameters on the density-field characteristics, changes in the water level, and the water-reservoir topography. The revealed features of the shear-velocity profile are indicated and a function of its distribution from the bottom to the open surface of the water is given. The model that was developed is verified by the data from measuring the parameters of currents and composition of water in the Petrozavodsk Bay of Onega Lake in September 2007. The regularities for the evolution of distributions of the coefficient of the turbulent exchange and current velocity over depth and in time are revealed.
Show Abstract | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7992047071456909, "perplexity": 2825.6816861326583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358180.42/warc/CC-MAIN-20211127103444-20211127133444-00194.warc.gz"} |
http://www.gamedev.net/topic/2590-yes-im-another-newbie-please-help/#entry | 19 replies to this topic
### #1Retrep Members - Reputation: 122
Posted 06 December 1999 - 11:08 PM
First of all, I've decided that this forum will be my home from now on for developing games, so you'll see me around quite a bit. Right, on with my query...
Basically, I'm starting from scratch...totally from scratch. What I need to know, is what programs I should get, and what documentation is good. Basically step by step instructions of how to get started. I need reccomendations on C++ Compilers (they need to be free), 3D packages (again free, if there are such things as free 3D packages), good 3D engines (again free) and good documentation sources. I'm a good learner, and want to learn. I know many of you will be thinking I'm mad skipping 2D and going straight for 3D, but I always throw myself into the deep end, and work with the aftermath I'll probably end up starting with 2D, but in the mean time I'd just like a go at 3D. I'm downloading the Direct X 7 SDK right now, and that's the only thing I know that I need. Everything else, I need help with. Suggestions, anyone?
------------------
### #2Gromit Members - Reputation: 144
Posted 28 November 1999 - 06:31 AM
I don't think that starting out doing 3d games is realistic. Stick with 2d games. Also, no one is going to just give you everything, you are just going to have to read everything here http://www.gamedev.net/reference/programming/ before you ask any questions. Sorry if I sound mean, but this is one of the best places to learn everything you are going to need to know before you start posting for help.
### #3Retrep Members - Reputation: 122
Posted 28 November 1999 - 06:47 AM
I'm reading it right now; sorry if it seemed lazy of me asking right out. If I've more questions after I'm done, I'll be back. thanks
### #4Facehat Members - Reputation: 696
Posted 28 November 1999 - 09:01 AM
I know that you probably have your mind made up, but let me make this suggestion: Don't make a 3D game until you have finished a 2D one. Now I'm not saying that you shouldn't study and learn 3D programming -- nothing wrong with that. Go ahead and code up little test apps and whatever you like.
But what you'll find when doing 3D programming is that it's not drawing triangles or transforming objects which is the tough part -- it's the design issues you'll face. You'll find it much easier to finish your 3D game if you've actually finished a 2D one first because you will have learned a lot about how to code games.
So my suggestion is this: learn all you can about whatever subjects you like -- even if there a bit above your head at this point. Just don't commit yourself to making anything but simple games to start with. You'll thak yourself later .
--TheGoop
### #5Retrep Members - Reputation: 122
Posted 28 November 1999 - 09:55 AM
Thanks for the tip; I read through a guide on this site as to what games you should try to program to start off with, and after the advice in there, I'll probably start off with Tetris. I still want to toy around with 3D though; even just getting a box up that I can maybe manipulate would be good. I just love messing around with 3D packages (the shareware versions, anyway) and my head is buzzing with ideas for 3D games. But I will aim low to begin with. I got myself a copy of turbo C, and from a free library, a book on learning C++. Its first lesson is the "Hello World!" program, but I'm already having a bit of a problem. It concerns the "#include <iostream.h>" line. It can't find the file when I try to compile. All the include files are in a folder labeled Include, but when I move the iostream.h file into the directory that the program is located in, it can open the iostream.h file, but then can't open the mem.h file. When I move the mem.h file into the directory where the program is located as with the iostream.h file, it makes no difference. I looked through the help files on this, but couldn't find anything on it. Any ideas?
### #6foofightr Members - Reputation: 130
Posted 28 November 1999 - 01:19 PM
I'm going to assume certain things here, so if I'm making wrong assumptions, I'm sorry.
I'm assuming you're using this method:
code:
#include "iostream.h"
the quotation marks tell the preprocessor to look in the current directory for the file, and if it can't find it, it aborts. Almost all recent compilers take this one step further and also search the \include directory, but it looks like you have an old compiler so it doesn't search there.
The solution is to enclose the filename in angle brackets < and >, but make SURE you don't put spaces before or after the filename. This will not work:
code:
#include < iostream.h >
and this will work:
code:
#include < iostream.h>
(UBB software treats this as an HTML tag, so I couldn't write it as I normally would because it would erase it. In your code, don't leave spaces around the filename)
The rule of thumb is when you're using the compiler's header files, use < > because that's where they're located, and when you're including your own headers, use " " because they're usually in the working directory.
Hope that helps.
[This message has been edited by foofightr (edited November 28, 1999).]
### #7Retrep Members - Reputation: 122
Posted 29 November 1999 - 08:07 AM
Hmmmm....it didn't work. Could my compiler be messed up?
On that note, I'm thinking of getting the academic version of Visual Basic 6.0, as I heard its easier to use. There are 3 versions, however, Enterprise, Professional, and Standard (I think). Professional looks to be the best, because the upgrade options seem good, but what which one would you recommend?
In the mean time I'll struggle on with TurboC. If anyone has a solution to my problem, please help!
### #8Kyle Radue Members - Reputation: 122
Posted 29 November 1999 - 08:23 AM
This is just a shot in the dark, but double-check that the file you are trying to compile has a .cpp extension, not a .c extension. A (slightly) different compiler is used for c++ code than c code.
### #9BrentP4 Members - Reputation: 122
Posted 29 November 1999 - 08:37 AM
I am used to programming with Borland.(what I learned in school) I just bought Visual C++ 6.0 Standard for $50....after a$50 rebate. I knw that if you choose this compiler...it will be a lot easier to get help...and also....DirectX SDK is aimed toward the visual products. Standard edition would be fine for you i think. ALl you really need is the compiler.(well....for now anyways) The enviroment isn't hard to learn(though i do prefer Borland) If money is a problem (guessing, because you want everything free) Just ask for it for christmas. Any graphics programs you would need... Just basic paint...for now anyways...if you want a free 3d program....Truespace version 1.0 has been released freely....legally....Good luck...And never stop asking questions. The only dumb one is the one which hasn't been asked. My biggest problems with coding is just setting up the compiler. I had a friend at RedStorm help me with that....I also recommend Lamothe's book ....his new one....Windows Game Programming Gurus book...(His rehash of his older book) ....bout 40 dollars...It really helps with learning directx, and windows fundamentals.
[This message has been edited by BrentP4 (edited November 29, 1999).]
### #10crazy166 Members - Reputation: 122
Posted 29 November 1999 - 09:15 AM
When I first decided to start programming games, I also needed everything to be "free". So what I would do is find a book (e.g. Teach yourself games, c++, etc.). Most of these books come with free compilers, though older versions. Up until recently I was using Visual C++ 4.2 Standard edition that I got "free". These packages rarely cost more than $30-$40, and MSVC 4 continues to work with DirectX7.
As far as "free" graphics, I got a limited edition of Adobe Photoshop "free" with my new CD burner. This program is great for someone like me to make mediocre graphics look (at best) decent.
Good luck!
### #11Retrep Members - Reputation: 122
Posted 29 November 1999 - 10:20 AM
Kyle - I check the file extensions, but still nothing. Could it be anything else?
Brent & Crazy - Thanks for the tips I'll be sure to look up that free version off Truespace; I've heard good things about the program, and even if it's not very powerful it'll be a good start. AS for Visual C++ 6.0, I just found out that academic discounts are only available in the US...at least if your buying from Microsofts Website I'll probably still get it, but I'm just wondering: have microsoft released any of their Visual C++ as freeware? Like earlier versions of the program?
As for books, I found a free "Learn C++ in 21 days" from an online library, and am following it to start with. Once I get a bit of cash, i'll invest in some more game specific ones
Oh, and about 3D engines; I saw some available for download from this site, but can anyone reccomend one? There is no ratings on them, so it's hard to tell which one is the most stable, best performer etc. Thanks...
[This message has been edited by Retrep (edited November 29, 1999).]
### #12Facehat Members - Reputation: 696
Posted 29 November 1999 - 11:55 AM
Heres a guess: perhaps your compiler doesn't support C++, only C. I say this because as far as I know iostream.h is part of the c++ library, but *not* the c library.
Just a guess though.
--TheGoop
### #13Retrep Members - Reputation: 122
Posted 30 November 1999 - 07:16 AM
I'll get back to you on that; as far as I know it's a C++ compiler...afterall the library containing "iostream.h" came with it. I'll check for sure though.
### #14Gromit Members - Reputation: 144
Posted 30 November 1999 - 09:34 AM
Didn't you say that you moved the iostream.h file into your program directory. You don't have to do that at all. When a file is in the include directory you use the form < file name >. When the file is not in the include directory you have to use the form " file name ". The last one only works if the file is in the same directory as your program that you are compiling. If it is not then you have to include the pathname to that file.
I hope I read you right. If not then disregard this post.
Good luck to you.
### #15Gromit Members - Reputation: 144
Posted 30 November 1999 - 09:36 AM
Did they make a Turbo C or is there only a Turbo C++. I have only seen the latter. What version number is it? Maybe this will help find your problem.
### #16Retrep Members - Reputation: 122
Posted 30 November 1999 - 11:48 AM
Gromit - Thanks for th help, but I had already tried that an it didn't work
The only clue as to the version I have is that it is "Turbo C++". That's all I can find on that....it's been an age since I downloaded it, so you'll have to excuse me if I can't remember the exact version
### #17 Anonymous Poster - Guests - Reputation:
Posted 30 November 1999 - 09:22 PM
In case you still want to check for free compilers, I know borland recently released a bunch of their old stuff.
You should find it somewhere on this site http://community.borland.com
Good luck
### #18rperkins Members - Reputation: 122
Posted 01 December 1999 - 03:37 AM
I'm not sure how the compiler options are on that one but you might want to check the include path. It should point at the directory where the .h files live. Just a thought.
### #19Just6979 Members - Reputation: 122
Posted 06 December 1999 - 08:03 PM
one of the best free compilers (and the one i use when not doing windows work) for the PC is DJGPP. is creates protected-mode DOS programs and is fully POSIX compliant (meaning it behaves _exactly_ like the GNU programs you find on Unix systems). for games, check out Allegro.
you can get DJGPP at http://www.delorie.com/djgpp/ you can also get the basic Allegro package here.
-Justin
------------------
### #20DerekSaw Members - Reputation: 243
Posted 06 December 1999 - 11:08 PM
Where is your Turbo C++? By default, it's under C:\TC, so the include files should be in C:\TC\INCLUDE, and lib files in C:\TC\LIB.
Goto [Options|Directories] menu and check the content of 'Include Directories' and 'Library Directories' -- they should be like the above (by default).
PARTNERS | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15028224885463715, "perplexity": 1620.4108566623563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719079.39/warc/CC-MAIN-20161020183839-00556-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://phys.libretexts.org/Bookshelves/Classical_Mechanics/Variational_Principles_in_Classical_Mechanics_(Cline)/04%3A_Nonlinear_Systems_and_Chaos/4.01%3A_Introduction_to_Nonlinear_Systems_and_Chaos | # 4.1: Introduction to Nonlinear Systems and Chaos
In nature only a subset of systems have equations of motion that are linear. Contrary to the impression given by the analytic solutions presented in undergraduate physics courses, most dynamical systems in nature exhibit non-linear behavior that leads to complicated motion. The solutions of non-linear equations usually do not have analytic solutions, superposition does not apply, and they predict phenomena such as attractors, discontinuous period bifurcation, extreme sensitivity to initial conditions, rolling motion, and chaos. During the past four decades, exciting discoveries have been made in classical mechanics that are associated with the recognition that nonlinear systems can exhibit chaos. Chaotic phenomena have been observed in most fields of science and engineering such as, weather patterns, fluid flow, motion of planets in the solar system, epidemics, changing populations of animals, birds and insects, and the motion of electrons in atoms. The complicated dynamical behavior predicted by non-linear differential equations is not limited to classical mechanics, rather it is a manifestation of the mathematical properties of the solutions of the differential equations involved, and thus is generally applicable to solutions of first or second-order non-linear differential equations. It is important to understand that the systems discussed in this chapter follow a fully deterministic evolution predicted by the laws of classical mechanics, the evolution for which is based on the prior history. This behavior is completely different from a random walk where each step is based on a random process. The complicated motion of deterministic non-linear systems stems in part from sensitivity to the initial conditions. There are many examples of turbulent and laminar flow.
The French mathematician Poincaré is credited with being the first to recognize the existence of chaos during his investigation of the gravitational three-body problem in celestial mechanics. At the end of the nineteenth century Poincaré noticed that such systems exhibit the high sensitivity to initial conditions characteristic of chaotic motion, as well as the nonlinearity that is required to produce chaos. Poincaré’s work received little notice, in part because it was overshadowed by the parallel development of the Theory of Relativity and quantum mechanics at the start of the $$20^{th}$$ century. In addition, solving nonlinear equations of motion is difficult, which discouraged work on nonlinear mechanics and chaotic motion. The field blossomed during the $$1960^{\prime }s$$ when computers became sufficiently powerful to solve the nonlinear equations required to calculate the long-time histories necessary to document the evolution of chaotic behavior.
Laplace, and many other scientists, believed in the deterministic view of nature which assumes that if the position and velocities of all particles are known, then one can unambiguously predict the future motion using Newtonian mechanics. Researchers in many fields of science now realize that this “clockwork universe" is invalid. That is, knowing the laws of nature can be insufficient to predict the evolution of nonlinear systems in that the time evolution can be extremely sensitive to the initial conditions even though they follow a completely deterministic development. There are two major classifications of nonlinear systems that lead to chaos in nature. The first classification encompasses nondissipative Hamiltonian systems such as Poincaré’s three-body celestial mechanics system. The other main classification involves driven, damped, non-linear oscillatory systems.
Nonlinearity and chaos is a broad and active field and thus this chapter will focus only on a few examples that illustrate the general features of non-linear systems. Weak non-linearity is used to illustrate bifurcation and asymptotic attractor solutions for which the system evolves independent of the initial conditions. The common sinusoidally-driven linearly-damped plane pendulum illustrates several features characteristic of the evolution of a non-linear system from order to chaos. The impact of non-linearity on wavepacket propagation velocities and the existence of soliton solutions is discussed. The example of the three-body problem is discussed in chapter $$11$$. The transition from laminar flow to turbulent flow is illustrated by fluid mechanics discussed in chapter $$16.8$$. Analytic solutions of nonlinear systems usually are not available and thus one must resort to computer simulations. As a consequence the present discussion focusses on the main features of the solutions for these systems and ignores how the equations of motion are solved.
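As a minimal illustration of such a simulation, the sketch below numerically integrates the sinusoidally-driven, linearly-damped plane pendulum for two nearly identical initial angles and prints their separation; the parameter values are arbitrary illustrative choices rather than ones taken from this text.
using System;

class DrivenPendulum
{
    // theta'' = -gamma*theta' - omega0^2*sin(theta) + F*cos(omegaD*t)
    const double Gamma = 0.5, Omega0 = 1.5, F = 1.2, OmegaD = 0.667;

    static void Main()
    {
        double dt = 0.001;
        double theta1 = 0.2000, omegaA = 0.0;   // first initial angle (rad)
        double theta2 = 0.2001, omegaB = 0.0;   // second, differing by 1e-4 rad

        for (double t = 0.0; t < 100.0; t += dt)
        {
            Step(ref theta1, ref omegaA, t, dt);
            Step(ref theta2, ref omegaB, t, dt);
        }

        Console.WriteLine("theta1 = {0:F4} rad, theta2 = {1:F4} rad", theta1, theta2);
        Console.WriteLine("separation after 100 s = {0:E2} rad", Math.Abs(theta1 - theta2));
    }

    // One semi-implicit Euler step; crude, but adequate for a qualitative picture.
    static void Step(ref double theta, ref double omega, double t, double dt)
    {
        double accel = -Gamma * omega - Omega0 * Omega0 * Math.Sin(theta)
                       + F * Math.Cos(OmegaD * t);
        omega += accel * dt;
        theta += omega * dt;
    }
}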
This page titled 4.1: Introduction to Nonlinear Systems and Chaos is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Douglas Cline via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8761094212532043, "perplexity": 335.2439235168635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00176.warc.gz"} |
http://jason.diamond.name/weblog/tag/bdd/ | # Asserting without Equals
Arnis suggested that implementing Equals just for NUnit was wrong so I thought I’d try doing without it.
The CollectionAssert.AreEqual method accepts an optional IComparer implementation. If specified, that will be used instead of Equals.
So I put together a class called PartComparer. Since I switched to comparing the state of the objects outside their classes, I had to expose some of that state via read-only properties. I think I can live with that.
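The comparer itself is small. The exact class isn't shown here, but a minimal sketch consistent with that description (it assumes LiteralText exposes its text through a read-only Text property) might look like this:
using System;
using System.Collections;

// Sketch only: the real comparer would handle every Part subclass.
public class PartComparer : IComparer
{
    public int Compare(object x, object y)
    {
        if (ReferenceEquals(x, y)) return 0;
        if (x == null || y == null) return 1;
        if (x.GetType() != y.GetType()) return 1;

        var a = (LiteralText)x;
        var b = (LiteralText)y;

        // Compare the state exposed via read-only properties instead of Equals.
        return string.Compare(a.Text, b.Text, StringComparison.Ordinal);
    }
}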
I then deleted all of my Equals and GetHashCode methods (I wasn’t really using GetHashCode anyways).
Here’s what the test changed to:
[Test]
public void It_scans_literal_text()
{
var scanner = new Scanner();
var parts = scanner.Scan("foo");
CollectionAssert.AreEqual(new Part[]
{
new LiteralText("foo"),
},
parts,
new PartComparer());
}
It works the same as before. It just requires the extra argument.
Custom comparers works even with the newer Assert.That syntax:
[Test]
public void It_scans_literal_text()
{
var scanner = new Scanner();
var parts = scanner.Scan("foo");
Assert.That(parts, Is.EqualTo(new Part[]
{
new LiteralText("foo"),
})
.Using(new PartComparer()));
}
The verbosity of constructing the expected Part array is a bit much. If only C# could construct lists like JavaScript, Python, and Ruby…
I decided to try to hide that in a custom assertion method:
private static void AssertThatPartsAreEqual(
IEnumerable<Part> actualParts,
params Part[] expectedParts)
{
Assert.That(actualParts, Is.EqualTo(expectedParts)
.Using(new PartComparer()));
}
Now my test looks like this:
[Test]
public void It_scans_literal_text()
{
var scanner = new Scanner();
var parts = scanner.Scan("foo");
AssertThatPartsAreEqual(parts, new LiteralText("foo"));
}
One really nice thing about having the custom assertion method is that I can modify how parts are compared in just one spot. For example, Scan is really an iterator. NUnit’s failure messages when the collections aren’t arrays are less than ideal. With the comparisons being done in just one spot, I can modify it to convert the enumerable into an array:
private static void AssertThatPartsAreEqual(
IEnumerable<Part> actualParts,
params Part[] expectedParts)
{
Assert.That(actualParts.ToArray(), Is.EqualTo(expectedParts)
.Using(new PartComparer()));
}
Since I’m using .NET 3.5, I can take this one step further and use an extension method:
internal static class EnumerablePartExtensions
{
public static void IsEqualTo(
this IEnumerable<Part> actualParts,
params Part[] expectedParts)
{
Assert.That(actualParts.ToArray(), Is.EqualTo(expectedParts)
.Using(new PartComparer()));
}
}
With that in place, my test now looks like this:
[Test]
public void It_scans_literal_text()
{
var scanner = new Scanner();
var parts = scanner.Scan("foo");
parts.IsEqualTo(new LiteralText("foo"));
}
Wow, it’s like Ruby but without the monkey patching!
One thing I did leave in my code are all of the ToString overrides. Without those, NUnit’s failure messages would be much less helpful. They’re also very useful while debugging.
Thanks, Arnis. Your comment helped me find an alternative (and quite possibly better!) way to get what I want.
# I fail at TDD?
I actually think I’m pretty good at TDD. Every now and then I get reminded that I’m not as good as I think I am.
I’ve been working on a new project (an implementation of the Mustache template language in C# that I’m calling Nustache) and have been having a lot of fun with it. This is the project I’m going to use as an example of how I fail at TDD.
Since this project involves parsing a string, I decided I would probably need a class to scan the string for tokens so those tokens could be parsed and then evaluated.
While writing the test for my Scanner class, I wrote it so that it would assert on the sequence of tokens it returns. I decided the tokens would be instances of a class called Part. One specific subclass of Part would be LiteralText. It represents a span of characters from the source template that is not supposed to be evaluated and just rendered directly to the output. I figured this would be the easiest way to start testing my Scanner class.
The test probably looked like this (I’m writing this way after the fact):
[Test]
public void It_scans_literal_text()
{
var scanner = new Scanner();
var parts = scanner.Scan("foo");
CollectionAssert.AreEqual(new Part[]
{
new LiteralText("foo"),
},
parts);
}
At this point, the test didn’t compile because I hadn’t defined my Scanner, Part, and LiteralText classes yet.
Having written the test first, I learned a few things about the Scanner class it was trying to test:
• It has a default constructor
• It has a method named Scan
• Its Scan method takes in a string
• Its Scan method returns an IEnumerable<Part> (I know this because of the parameters for CollectionAssert.AreEqual)
I also learned something about the LiteralText class:
• It derives from Part (because I’m adding it to a Part array)
• It has a constructor that accepts a string
• It must override the Equals method or this will never work
Since this test is describing the Scanner class, I decided to work on it first:
public class Scanner
{
public IEnumerable<Part> Scan(string template)
{
return null;
}
}
This wouldn’t compile until I defined Part:
public class Part
{
}
The test still needed LiteralText to be defined:
public class LiteralText : Part
{
public LiteralText(string text)
{
}
}
At this point, I was able to compile and run my test. When I did, NUnit said this:
Test 'Nustache.Tests.Describe_Scanner_Scan.It_scans_literal_text' failed:
Expected: < <Nustache.Core.LiteralText> >
But was: null
I liked that failure message, but I wanted to go a bit further and see what the failure message would be when I return an empty array instead of null since it didn’t make sense to me for Scan to return null. Scan changed to this:
public IEnumerable<Part> Scan(string template)
{
return new Part[] { };
}
The failure message became this:
Test 'Nustache.Tests.Describe_Scanner_Scan.It_scans_literal_text' failed:
Expected is <Nustache.Core.Part[1]>, actual is <Nustache.Core.Part[0]>
Values differ at index [0]
Missing: < <Nustache.Core.LiteralText> >
OK, that wasn’t too bad. Next, I wanted to see if I could get it to pass by doing the simplest thing I could possibly do so I changed Scan to this:
public IEnumerable<Part> Scan(string template)
{
return new Part[]
{
new LiteralText("foo")
};
}
After seeing that pass, I would have implemented it a little more realistically, but I was in for a surprise. Instead of passing (which is what I expected), I got this failure message:
Test 'Nustache.Tests.Describe_Scanner_Scan.It_scans_literal_text' failed:
Expected and actual are both <Nustache.Core.Part[1]>
Values differ at index [0]
Expected: <Nustache.Core.LiteralText>
But was: <Nustache.Core.LiteralText>
Uh… Oh, yeah! LiteralText needs an override of the Equals method or NUnit will never be able to tell if one instance is “equal to” another.
In order to implement that, I need to make sure the string that gets passed in to the LiteralText constructor gets saved in some sort of field or property. Then I could write my Equals override by hand or let ReSharper generate it for me.
I decided to let ReSharper do it (I’m lazy) and got three methods: bool Equals(LiteralText other), bool Equals(object obj), and int GetHashCode().
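The generated methods are roughly what you would write by hand anyway; a sketch of the typical output (not the project's exact code) looks like this:
public class LiteralText : Part
{
    private readonly string _text;

    public LiteralText(string text)
    {
        _text = text;
    }

    public bool Equals(LiteralText other)
    {
        if (ReferenceEquals(null, other)) return false;
        if (ReferenceEquals(this, other)) return true;
        return Equals(other._text, _text);
    }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        if (obj.GetType() != typeof(LiteralText)) return false;
        return Equals((LiteralText)obj);
    }

    public override int GetHashCode()
    {
        // Based on the same field used by Equals.
        return _text != null ? _text.GetHashCode() : 0;
    }
}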
After getting that to work, I added a ToString method to LiteralText to make the failure message even clearer.
See the problem? I went off and started implementing code in LiteralText when I was in the middle of trying to get a test for Scanner to pass! Sure, it’s just the Equals and GetHashCode methods, but it’s still code!
I did all of this in response to test I was trying to get to pass so I was still doing TDD, right?
Right?
At the time I was doing this, I didn’t even notice this “problem”. It wasn’t until much later when I decided to run my tests under NCover to see how I was doing. I was practicing TDD, so my coverage should have been pretty good, if not perfect. Sadly, I found I had a bunch of Equals, GetHashCode, and ToString methods that weren’t fully covered and ruining my flawless code coverage report!
So what’s the big deal? Everybody agrees that 100% code coverage isn’t sufficient to ensure the correctness of your code. I absolutely agree with that. Many people also agree that getting 100% code coverage isn’t even worth it. That, I disagree with. As does Patrick Smacchia (author of NDepend) who described why 100% code coverage is a worthwhile goal here. It’s a great article and I highly recommend you all read it.
To rectify this predicament, I forced myself to write tests for my LiteralText class (writing tests after the fact is so boring!).
Since I originally defined it, I discovered that Part had grown a Render method and LiteralText was overriding it. The method was being covered by other tests, but there was nothing that was directly testing LiteralText. That might not be such a big deal, but one of the oft-touted benefits of unit tests is that they can also act as executable documentation. Since I had no unit tests for my LiteralText class, I had no executable documentation for it! How would I ever re-learn (months from now) how it’s supposed to behave without that?
OK, I’m being a bit silly, but I went for it anyways and I really liked the result. Here’s what I came up with:
[TestFixture]
public class Describe_LiteralText
{
[Test]
public void It_cant_be_constructed_with_null_text()
{
Assert.Throws<ArgumentNullException>(() => new LiteralText(null));
}
[Test]
public void It_renders_its_text()
{
var a = new LiteralText("a");
var writer = new StringWriter();
var context = new RenderContext(null, null, writer, null);
a.Render(context);
Assert.AreEqual("a", writer.GetStringBuilder().ToString());
}
[Test]
public void It_has_a_useful_Equals_method()
{
object a = new LiteralText("a");
object a2 = new LiteralText("a");
object b = new LiteralText("b");
Assert.IsTrue(a.Equals(a));
Assert.IsTrue(a.Equals(a2));
Assert.IsTrue(a2.Equals(a));
Assert.IsFalse(a.Equals(b));
Assert.IsFalse(a.Equals(null));
Assert.IsFalse(a.Equals("a"));
}
[Test]
public void It_has_an_Equals_overload_for_other_LiteralText_objects()
{
var a = new LiteralText("a");
var a2 = new LiteralText("a");
var b = new LiteralText("b");
Assert.IsTrue(a.Equals(a));
Assert.IsTrue(a.Equals(a2));
Assert.IsTrue(a2.Equals(a));
Assert.IsFalse(a.Equals(b));
Assert.IsFalse(b.Equals(a));
Assert.IsFalse(a.Equals(null));
}
[Test]
public void It_has_a_useful_GetHashCode_method()
{
var a = new LiteralText("a");
Assert.AreNotEqual(0, a.GetHashCode());
}
[Test]
public void It_has_a_useful_ToString_method()
{
var a = new LiteralText("a");
Assert.AreEqual("LiteralText(\"a\")", a.ToString());
}
}
As you can probably tell, I’m using a non-standard (for .NET developers) naming scheme for my tests. It’s inspired by RSpec and I really like it.
If you take away all the code, the ugly underscores, and the weird prefixes, you get the documentation:
• LiteralText
• can’t be constructed with null text
• renders its text
• has a useful Equals method
• has an Equals overload for other LiteralText objects
• has a useful GetHashCode method
• has a useful ToString method
I formatted that list by hand, but generating it could easily be automated by processing NUnit’s XML output or using reflection on the test assembly. RSpec has a feature built in to it that can generate this kind of output. (Don’t worry, I don’t plan on switching to Ruby like every other .NET weblogger out there seems to be doing. I do think they have some great ideas, though, and have been really enjoying reading the beta version of the RSpec Book which is what made me try out this new naming scheme.)
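A rough reflection-based sketch of that idea (it assumes NUnit's TestFixture and Test attributes plus the naming scheme above; a real reporting tool would need more polish):
using System;
using System.Linq;
using System.Reflection;
using NUnit.Framework;

static class SpecReporter
{
    // Turns "Describe_LiteralText" / "It_has_a_useful_Equals_method"
    // into readable documentation lines.
    public static void Print(Assembly testAssembly)
    {
        var fixtures = testAssembly.GetTypes()
            .Where(t => t.GetCustomAttributes(typeof(TestFixtureAttribute), false).Any());

        foreach (var fixture in fixtures)
        {
            Console.WriteLine(Humanize(fixture.Name.Replace("Describe_", "")));

            var tests = fixture.GetMethods()
                .Where(m => m.GetCustomAttributes(typeof(TestAttribute), false).Any());

            foreach (var test in tests)
            {
                Console.WriteLine("  - " + Humanize(test.Name.Replace("It_", "")));
            }
        }
    }

    private static string Humanize(string name)
    {
        return name.Replace('_', ' ');
    }
}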
Even though this particular class is trivial, doing this helped set up an example to follow for the other Part subclasses which aren’t as simple. Also, I feel much more confident about this class knowing that there is a suite of tests in a single, well-named fixture that provides 100% code coverage for it.
I’m not saying that every class should be covered by one and only one fixture. If the class demands it, I’ll happily break its tests up into multiple fixtures. I could have one fixture per method or one fixture per context. I’m flexible about that. I’d use the desire to that as a possible smell that the class might be trying to do too much, though.
You can also see that I’m pretty flexible about not limiting myself to just one assertion per test. I strongly believe that most tests should only have one assertion but, in this case, it would have been ridiculous to have written a test case for each of the different ways Equals could be invoked.
I was also a little lax on the Arrange/Act/Assert format. This is another practice that I try to consistently follow. Some tests just don’t need anything to be set up! And NUnit’s Assert.Throws syntax kind of forces you to act and assert at the same time. There’s not a lot I can do about that.
Here’s the one thing that I’m firm on: Ensuring that I have a set of tests that fully cover 100% of the unit that they directly test is a Good Thing.
And here’s the million dollar question: What can I do to prevent these kinds of mistakes in the future? Is it even possible?
To be honest, I’m OK making mistakes as long as I can catch and fix them quickly enough. Did I know about the Part and LiteralText classes before starting work on Scanner? I can’t actually remember. Let’s pretend I didn’t. As soon as I saw that the test for Scanner was referencing other classes, should I have stopped what I was doing, put an Ignore attribute on the test, and started working on tests for those other classes first? I’m not so sure about that. I feel like doing that might have negatively impacted the journey I had already embarked on.
So maybe “failing” at TDD in this way is expected? It’s rare when a class has no dependencies. If I’m using tools like NUnit, NCover, and NDepend, I should be able to catch my “mistakes” pretty quickly. This means that my unit tests have to be fast or I’ll rarely run them and, if that happens, my mistakes won’t be caught until much later. By that point, I’ll have too many mistakes to fix, I’ll be out of time, forced to move on to the next task, and, there I go, abandoning everything I believe in (about developing and testing)!
And my coworkers wonder why I hate our “unit tests” that read from and write to the database so much…
# New BehaveN Release
This is a minor, bug fix release. You can see the changes here.
I’ve also updated the wiki with some more documentation.
# BehaveN
I’ve been using a Cucumber-inspired BDD framework for .NET called BehaveN at work for the past year. Today, I just released the next major version.
There’s a little documentation on the wiki, including a tutorial, but there’s a lot left that I haven’t documented yet. I’ll be getting more and more documentation up as time goes on.
It’s been pretty positively received by our product owners, QA, and most of the developers who’ve been exposed to it. It’s being used by a few outside my company, but I haven’t made any effort on “advertising” it. If anybody’s interested in giving it a try, let me know and I’ll try to fill in any of the blanks you might have.
I took some time tonight to throw together a build script for producing proper releases of AutoRunner. If you don’t feel like compiling it yourself, you can get a pre-compiled version here.
I used ILMerge to merge the Growl for Windows assemblies into the executable so it’s basically a single file now.
By the way, last night on Twitter, Steve Bohlen pointed me to this port of the original autotest to .NET. I took a look at the code and it was much more complex than I was looking for. It actually builds and runs your tests every time you save which is much more often than I want.
I had AutoRunner turned on all day at work today and was loving how it would catch me breaking the tests when I wasn’t expecting it to. =)
# AutoRunner
I recently came across this awesome code kata performance by Corey Haines here.
Besides enjoying and learning from his actual performance, I was really impressed by his use of a Ruby tool called autotest. (I’m not sure, but it looks like it has become autospec.)
Not being a Ruby developer, I wanted the same thing for .NET. I did some searching, but my Google-fu failed me so I spent an hour hacking together my own.
The result is called AutoRunner (I know–way creative) and its source is available on GitHub.
If you want it, you have to download the source and compile it yourself for now. (UPDATE: You can now download it here!) Run it from your favorite console (PowerShell, right?) without any arguments to see what options it accepts.
AutoRunner is a little more general purpose than autotest/autospec is. Basically, it can run any executable when any file changes.
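At its core the idea is just a FileSystemWatcher plus Process.Start. Here is a stripped-down sketch (not the actual AutoRunner source; it ignores debouncing, option parsing, and error handling):
using System;
using System.Diagnostics;
using System.IO;

class MiniAutoRunner
{
    // Usage: MiniAutoRunner <file-to-watch> <exe-to-run>
    static void Main(string[] args)
    {
        string target = Path.GetFullPath(args[0]);   // e.g. the tests DLL
        string exe = args[1];                        // e.g. nunit-console.exe

        var watcher = new FileSystemWatcher(
            Path.GetDirectoryName(target),
            Path.GetFileName(target));

        watcher.Changed += (sender, e) =>
        {
            Console.WriteLine("{0} changed; running {1}", e.Name, exe);
            Process.Start(exe, target).WaitForExit();
        };

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching {0}. Press Enter to quit.", target);
        Console.ReadLine();
    }
}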
What I wanted it for was to run nunit-console.exe whenever my current tests assembly was rebuilt. To do that, I just invoke it with the right arguments.
If you have Growl for Windows running, it will send it a notification which is pure eye candy and not necessary to actually get it to run your tests.
It’s not a Visual Studio add-in. It’s just a plain old console application. Using Visual Studio’s External Tools feature, however, it’s almost as good as an add-in. I set up an external tool with the appropriate arguments and it’s good to go for all of my projects.
To set this up for yourself, you’d create a new external tool with its command set to the path where you built AutoRunner.exe and its arguments set to something like the following (I’ve separated the options on their own lines, but you wouldn’t do that in Visual Studio):
--target $(BinDir)\$(TargetName)$(TargetExt)
--exe C:\path\to\nunit-console.exe
--pass "$(TargetName) FTW!"
--fail "Oh noes! $(TargetName) is FAIL!"
You can use whatever test runner you like, of course. Please note that you must have a file from your tests project open or selected in Solution Explorer when you activate the tool or your AutoRunner instance will be watching the wrong DLL!
It doesn’t support plug-ins the way autotest does and most of its functionality is hard-coded for now. If anybody finds it useful, let me know and maybe we can work on improving it together.
# SharpTestsEx
I’ve been using Fabio Maulo‘s NUnitEx project to get fluent assertions on a personal project recently and have been loving it.
He then went and moved on to a new project called SharpTestsEx, which he intended to be framework-agnostic, but currently only worked with MSTest which prevented me from being able to use it (since I never saw a compelling reason to switch to MSTest).
Fabio was kind enough to let me make the changes necessary to remove the dependency on MSTest. I also made framework-specific versions of SharpTestsEx for MSTest, NUnit, and xUnit. The framework-specific versions aren't really necessary, but they make the error messages a tiny bit prettier if you use the right one for the test framework you're using.
You can read Fabio’s announcement here.
I plan on updating my personal project to using SharpTestsEx next.
I just realized I never posted about that project; I’ll have to get around to that soon. If you’re curious, it’s a behaviour-driven development framework for .NET called BehaveN. I’m using it at my work and we’re loving it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24700161814689636, "perplexity": 2059.772733744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645538.8/warc/CC-MAIN-20180318052202-20180318072202-00155.warc.gz"} |
http://www.gradesaver.com/a-long-way-gone/q-and-a/who-is-gasemu-in-a-long-way-gone-203638 | Who is Gasemu in A Long Way Gone?
Explain who he is, what kind of character he is, and how he knows Ishmael's family. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951601624488831, "perplexity": 3217.9083209799714}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00237-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://www.gradesaver.com/textbooks/science/physics/physics-principles-with-applications-7th-edition/chapter-4-dynamics-newton-s-laws-of-motion-questions-page-99/12 | ## Physics: Principles with Applications (7th Edition)
The Earth has a mass about $10^{25}$ greater than the object, and it accelerates, but for all practical purposes, the resulting motion of the Earth is not detectable. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5614308714866638, "perplexity": 524.2487436892054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120338.97/warc/CC-MAIN-20170423031200-00361-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://brilliant.org/problems/an-algebra-problem-by-matias-bruna/ | # An algebra problem by Matías Bruna
Algebra Level pending
For each $$n \in \mathbb{N}$$, $$f(n)$$ and $$g(n)$$ are defined as follows:
$$f(n)=2n+1$$ and $$g(n)=n^{2}+n$$.
And let $$S=\dfrac{1}{\sqrt{f(1)+2\sqrt{g(1)}}} + \dfrac{1}{\sqrt{f(2)+2\sqrt{g(2)}}} + \cdots + \dfrac{1}{\sqrt{f(2015)+2\sqrt{g(2015)}}}$$
Compute $$\left \lfloor{10S}\right \rfloor$$.
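Solution sketch: since $$f(n)+2\sqrt{g(n)} = 2n+1+2\sqrt{n^{2}+n} = \left(\sqrt{n}+\sqrt{n+1}\right)^{2}$$, each term equals $$\dfrac{1}{\sqrt{n}+\sqrt{n+1}} = \sqrt{n+1}-\sqrt{n}$$, so the sum telescopes to $$S=\sqrt{2016}-1\approx 43.8999$$ and $$\left \lfloor{10S}\right \rfloor = 438$$.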
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9984486699104309, "perplexity": 4054.5655185956907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607786.59/warc/CC-MAIN-20170524035700-20170524055700-00359.warc.gz"} |
https://www.physicsforums.com/threads/a-little-history-of-physics-how-is-physics-involved-in-this.326790/ | # A little history of physics .How is physics involved in this?
1. Jul 24, 2009
### graphicer89
A little history of physics.....How is physics involved in this??
Well I was reading over something interesting and something is puzzling me. It turns out that in 1783 the Montgolfier brothers of France launched what is possibly the first balloon flight carrying passengers, which were a duck, a rooster and a sheep. Their balloon, which was about 35 feet in diameter and constructed of cloth lined with paper, was launched by filling it with smoke. The flight landed safely some 8 minutes later. What I am trying to figure out is the physics that is involved with this flight... for example in terms of its ascent, descent and the landing this "balloon" made. How would you explain how physics was involved in this experiment?
2. Jul 24, 2009
### RoyalCat
Re: A little history of physics.....How is physics involved in this??
The two main things I can think of as relevant are kinetic gas theory and Archimedes' Principle.
According to Archimedes, if an object of density $$\rho _1$$ is put into a fluid of density $$\rho_2 > \rho_1$$ then the object will experience an upwards lift force.
According to kinetic gas theory, a hotter gas is a less dense gas (A gas is one kind of fluid).
Do note, I only have a rudimentary understanding of both, so you should probably wait for someone to confirm what I've written.
Last edited: Jul 24, 2009
3. Jul 24, 2009
### minger
Re: A little history of physics.....How is physics involved in this??
You are correct, RoyalCat. Buoyancy in layman's terms says that an object will see an upward force equal to the weight of the displaced fluid.
Let's take water as an example. If we were to fill up an infinitely thin hollow sphere with water and drop it in more water, what would happen? The principle says that the ball will displace water equal to its own volume, with an associated weight. This weight will exert a force on the ball. In this case, since the densities are equal, the buoyant force equals the ball's weight and the ball can remain in equilibrium.
Now, in your case, let's assume that the hot gas, or smoke, or whatever was in the balloon had a density 1/2 that of air. If the volume of the balloon was 100 cubic feet, then the air would exert an upward force of ~8.07 lbf. The smoke itself has mass and weight, which results in net upward force of ~4.04 lbf.
As said previously, there is an ideal gas law which correlates various properties of a fluid. It says that:
$$\frac{P}{\rho} = RT$$
If we keep the pressure and gas constant equal, then we can rewrite:
$$\rho T = \mbox{constant}$$
So, if the temperature goes up, then the density must go down. As we just saw, decreasing the density will decrease the mass of the fluid, and thus increase the net upward force.
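Just to put rough numbers on that, here is a small Python sketch of the same calculation (the air density, the one-half density ratio and the volume are illustrative assumptions, so the figures come out slightly different from the ones above):
# Net lift on a balloon of hot gas, from Archimedes' principle
rho_air = 0.0765           # lb/ft^3, ambient air near standard conditions (assumed)
rho_hot = 0.5 * rho_air    # assume the gas inside is half as dense as the outside air
volume = 100.0             # ft^3, balloon volume (assumed)
buoyant_force = rho_air * volume        # weight of displaced air, in lbf
gas_weight = rho_hot * volume           # weight of the gas inside, in lbf
net_lift = buoyant_force - gas_weight   # lbf, ignoring the envelope's own weight
print(buoyant_force, gas_weight, net_lift)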
4. Jul 24, 2009
### RoyalCat
Re: A little history of physics.....How is physics involved in this??
Just to clarify, depending on the ratio between the densities of the fluid and the object, the upwards force can either be smaller than, greater than, or equal to the weight of the object ($$mg$$) meaning the object can have a net acceleration upwards, downwards, or no net acceleration at all.
This is, of course, ignoring the effect of drag (Which is very significant, mind you). But either way, you get movement.
http://tex.stackexchange.com/questions/15253/how-to-set-up-a-quartered-layout-in-beamer | # How to set up a quartered layout in beamer
Back in Powerpoint days, I encountered a typical quartered layout, which was just tiling the body of the slide horizontally and vertically into equally sized pieces with arbitrary content:
-----------------------
| *bla | texttext |
| | texttext |
| | texttext |
-----------------------
| | 1. stuff |
| IMAGE | 2. stuff |
| | 3. stuff |
-----------------------
I tried to reproduce this, but failed so far. My first step was to take a column environment and insert the vertical spaces:
\documentclass{beamer}
\begin{document}
\begin{frame}{foo}{bar}
\begin{columns}[t]
\begin{column}{0.5\textwidth}
\begin{minipage}[t][0.35\textheight]{\textwidth}
sdjfls\\
sdjfk\\
sd
\end{minipage}
\begin{minipage}[t][0.35\textheight]{\textwidth}
sd
\end{minipage}
\end{column}
\begin{column}{0.5\textwidth}
\begin{minipage}[t][0.35\textheight]{\textwidth}
sdjfls\\
sd
\end{minipage}
\begin{minipage}[t][0.35\textheight]{\textwidth}
sdjfls\\
sdjfk\\
sd
\end{minipage}
\end{column}
\end{columns}
\end{frame}
\end{document}
But the manual tuning 0.35\textheight is not really what I wanted, since I have to adjust the number 0.35 for every new theme. A better solution would be something like 0.5\columnheight or 0.5\beamerslidebodyheight. There might be no better solution with this ansatz, see question Why does vfill not work inside a beamer column?.
I'm open to completely different solutions.
One possibility could be to define a new length:
\newlength{\restofpage}
\setlength{\restofpage}{\textheight}
and replace all the 0.35\textheight by 0.5\restofpage, but I don't think this works with footlines.
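To make that concrete, this is roughly what I mean; the 2\baselineskip correction for the frame title is only a guess and would still need tuning per theme, which is exactly what I want to avoid:
\newlength{\restofpage}
% guess: frame title costs about 2\baselineskip -- NOT theme-independent
\setlength{\restofpage}{\dimexpr\textheight-2\baselineskip\relax}
% then inside each column:
\begin{minipage}[t][0.5\restofpage]{\textwidth}
content
\end{minipage}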
use a tabular instead of columns – Herbert Apr 7 '11 at 16:51
The tabular doesn't ensure that the space is indeed filled. To make clear what I mean, consider replacing the minipage by \block{caption}{ *MINIPAGE* }. I would want the block to use up all the available space. – Turion Apr 7 '11 at 17:00
IMO that's too much stuff on one slide. – Matthew Leingang Apr 7 '11 at 17:16
Did you try tabular* environment, or tabularx package? – Jan Hlavacek Apr 8 '11 at 3:13
@Jan: The problem is mainly how to fit the height. tabular* and tabularx fit the width, as far as I know. – Turion Apr 9 '11 at 11:22
I just looked at the code for building a frame and discovered that without some really complicated wizardry, this is not possible!
The \pageheight gets divided into the main area plus the bits at the top and bottom. You'd need to know all those bits to define a \restofpage length. It is possible to know some of them, such as the header and footer, but one crucial one is not known until after the frame is built. That is the frame title. The frame title is not built until after the frame has been gathered and put together. This is because it is possible to specify the frame title anywhere in the slide.
If you try the code:
\begin{frame}
\message{building the frame}
\lipsum[1]
\frametitle{Quartered Layout\message{in title}}
\message{more building work going on}
\end{frame}
then you get the messages building the frame, more building work going on, in title (in that order), showing that the \frametitle doesn't get processed until after the frame is processed.
Of course, it might be possible to get around this, perhaps by insisting that the frame title be given as an argument to the \begin{frame} and constructing (and measuring) the frame title at the start of the frame. But I think that there may be easier ways to divide up the content than this.
To counter Seamus' point, just because you can do it in Powerpoint doesn't make it a bad thing to do. One of my earlier forays with beamer used columns to present two sides of an idea: http://www.math.ntnu.no/~stacey/Seminars/infgeom.html (source available on that page). And one of my most recent used blocks to break up a page: http://www.crcg.de/wiki/Higher_Structures_in_Topology_and_Geometry_V (look for my name; the source is not available but for the particular bit of doing the blocks there's no magic involved). If there's something particular in one of those that you like the look of but can't figure out yourself, feel free to ask here or privately.
I'm sorry that this isn't a more positive answer!
https://security.stackexchange.com/questions/14816/is-there-any-security-risk-in-storing-hashed-variations-of-passwords | # Is there any security risk in storing hashed variations of passwords?
For example, if hypothetically an application requirement was to tell users when their password might have failed because caps-lock was turned on, and hypothetically there was no way for the application to actually know that caps-lock was turned on, is there any inherent security risk in storing a hashed password as well as a hashed "caps lock" version of a password so that a failed password can also be compared against the "caps lock" version?
If a database containing user accounts with this setup was breached, would the existence of the "caps lock" password hash in addition to the normal password hash make the passwords any more vulnerable than they would be otherwise?
Note this is a hypothetical and I'm interested more in the security implications than whether this is a good programming practice.
• One quick thought: If the two stored hashes (with and without the caps lock transformation) are the same, we can know that the user has no letters in their password. (Or at least has no lower case letters in their password depending on how you do the transformation.) – Ladadadada May 10 '12 at 6:24
Another approach, if you do wish to allow different password variations.
When a user logins to your application, you would normally get a plaintext version of the password (hopefully over SSL) and then hash it and compare it with your stored hash value.
There's therefore no real need to store different hash versions. You can store one password hash, but when the user tries to login, you perform those transformation on the user-provided password at the time of login and then hash those different variations. e.g. if the provided password failed to compare against the hash, apply the caps-lock transformation, hash and then check again against the stored hash. If it then matches you can decide to give the user access to your application.
This way you store only one (canonical) version of the hash, yet you allow several permutations of the same password.
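As a rough sketch of this flow (the bcrypt calls are just one way to do the hashing, and capslock_variant assumes the case-swapping behaviour discussed in the comments):
import bcrypt

def capslock_variant(password):
    # assumes caps lock swaps the case of each letter (Windows/Linux behaviour)
    return password.swapcase()

def verify(supplied, stored_hash):
    # try the password as typed first, then the caps-lock transformed version
    for candidate in (supplied, capslock_variant(supplied)):
        if bcrypt.checkpw(candidate.encode("utf-8"), stored_hash):
            return True
    return False
# note: returning as soon as one variant matches leaks a little timing
# information; a hardened version would always compute both checks.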
In terms of attack scenarios:
• Online attack trying to authenticate with different passwords - this will become easier relative to the number of possible permutations
However it should usually be easier to limit the number of possible remote brute force or password guess attempts to make it sufficiently hard. You can read more about some techniques here that you should probably implement anyway.
I'd still be very careful with what possible permutations are supported and measure how much they might reduce password entropy of course, but at least you don't need to store multiple hashed versions to do this.
• I think you want to undo the caps-lock transformation, not apply it again. (This might be different.) – D.W. May 10 '12 at 6:04
• I just made a comment on the other answer. As far as my keyboard seems to work, caps-lock simply reverses the case of letters, so applying a caps-lock transformation should be the same as undoing it. But that's a small nuance not important to the discussion anyway. – Yoav Aner May 10 '12 at 6:08
• Good point! Rather strange that Mac implements it differently. I wasn't aware of this. – Yoav Aner May 10 '12 at 20:15
Interesting question. I guess it depends which and how many versions you are storing. The main concern in doing so is how much you are reducing the search-space / entropy.
If, for example, the caps-lock version of the password is an all-uppercase version of the password, then by storing it this way, you are reducing the entropy of the password. The attacker in this case only has to search using the caps-lock version using only uppercase character-set. That gives them a significantly easier job.
If however, the caps-lock version is simply a case-reverse of the letters, i.e. normal password == aXbJklMP and caps-lock version == AxBjKLmp, then you're probably not hugely reducing the search space. There are still two possible passwords instead of one, making the search twice as likely to find one of those passwords though...
Expanding on my earlier comment:
If the two stored hashes (with and without the caps lock transformation) are the same, we can know that the user has no letters in their password. (Or at least has no lower case letters in their password depending on how you do the transformation.)
Since Mac and Windows (and Linux) handle caps lock differently, you will have to store two variations of the caps lock transformation hash: the case reversal (Windows / Linux) and the all uppercase (Mac). Now we can determine even more about the password from the hashes.
1. If all three hashes are the same, the user has no letters (upper case or lower case) in their password.
2. If the Windows transformation hash is different but the Mac transformation hash is the same as the main hash then the user has upper case letters in their password but no lower case letters.
3. If all three hashes are different, the user has both upper and lower case letters in their password.
Another thought: Is it safe to apply the same salt to all three variations? Although I think it should be, I don't see any significant advantage in doing so. If you apply different salt to all three hashes, I think the above information leaks are negated. Note that I am not a cryptographer, although there shouldn't be any information leakage when hashing three related and possibly identical passwords with three different salts, it may require a longer salt to be secure.
Other problems:
Since we have three password hash variations that need to checked whenever a user is logging in, all three hashes will need to be calculated. If you are using bcrypt (or similar) with an appropriate number of hashing rounds, you will want to reduce that number of rounds so that the time taken is roughly 1/3rd of the old value. (On the assumption that you set the number of rounds to the highest you can handle and you now need to calculate three times the number of hashes per login.)
The reason you need to calculate all three hashes even if the first or the second one match is that if you don't it enables credential enumeration via timing attacks. In theory, to prevent credential enumeration you should provide exactly the same response whether the user doesn't exist or it does but the password is wrong. If you return an identical response for both cases but it takes milliseconds when the user doesn't exist and a whole second when the user does exist you have still enabled credential enumeration.
All up, I think it should be relatively safe if you use different salts for all three hashes, always calculate all three variations of the supplied password and reduce the number of rounds so that the timing is reduced by 1/3rd.
However, it might be simpler to only store one hash, check the user-supplied password for lower case letters and if there are none and the hash doesn't match, suggest to them that they might have caps-lock on.
Late thoughts
There's no need to store three different hashes as I suggested earlier if you are planning on accepting the Mac-style transformation. Just uppercase all passwords before they are hashed and only store the all-uppercase version. This also prevents the timing attacks since you only need to calculate one hash per login.
By doing this you have reduced the number of possible letters/numbers/symbols that people can use by 26, so you might want to suggest your users use a slightly longer password to compensate.
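A minimal sketch of that idea (again assuming bcrypt; the only point is that the same normalisation is applied at registration and at login):
import bcrypt

def normalized_hash(password):
    # store only the hash of the upper-cased password
    return bcrypt.hashpw(password.upper().encode("utf-8"), bcrypt.gensalt())

def check(password, stored_hash):
    return bcrypt.checkpw(password.upper().encode("utf-8"), stored_hash)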
• Storing a hash of the allcaps version seems like a bad idea. On the other hand, informing a user that the password attempt he has just entered contained mostly uppercase letters and no lower case ones would pose very little security risk (the only danger would be letting shoulder-snoopers know that). – supercat Mar 18 '14 at 23:43
It so happens that storing both hashes, or reversing the capslock effect and trying both versions, really reduces security by a factor which is between 1 and 2. It is a bit tricky to see, so let's define things clearly.
I am using the caps-reverse notion of capslock. If the capslock effect is really an everything-is-uppercase effect, then you should never accept or store that all-uppercase version; instead, return a warning as @D.W. suggests.
I assume that you use a slow and salted hash function like bcrypt (if you do not then that is a bigger issue and you should fix it first). The "slow" part is configurable with an iteration count that you raise as much as you can, based on two constraints: the CPU budget for the hashing (depending on how much free CPU you have and how many client connections per second), and the user patience (which is never very high). The cost for the attacker is directly proportional to the iteration count. If you store two hash values (for the "normal" and "capslocked" version of the password), then both must have their own salt.
When you "try" the password sent by the user, and then "try again" with a capslocked version of the same password, then you are actually hashing twice, so there is an overhead on your constraints. On average, the CPU effort will be multiplied by 1+f, where f is the proportion of wrong passwords (f = 0 if all the users type perfectly, f is very close to 1 if all the users are chimpanzees who must try a dozen times before typing their password correctly). Also, every time you have to try the capslocked version of the password, then the user has to wait twice longer before being granted access (if the capslocked version turns out to be correct) or being ignominiously rejected (if the password is really wrong, capslock or not). To some extent, average users are a bit more understanding about delays when they feel it is their fault, because they typed wrong, but I would not count on it.
The net effect is that testing for two versions of the password increases the cost by a factor which is between 1 and 2; correspondingly, you must then decrease the iteration count by that factor, and the attacker's effectiveness is multiplied by that factor.
This is really a trade-off between user experience (which means "helpdesk costs" in many cases) and security. If possible, it is probably better to detect that the capslock is pressed and warn the user before the password is entered; however, this may prove difficult (I know a site which succeeds in doing that when the client is Internet Explorer, but fails with Chrome and Firefox; instead, it emits visible warnings for each uppercase letter, which is quite bad because the warnings are visible from afar, so that's a leak of information about the password).
@Yoav Aner has totally nailed it about the security impact of storing both the hash of the password and the hash of the capslock of the password. Also, @cx42net has a brilliant suggestion: use Javascript on the client side to check whether capslock is on, and if so, warn the user.
I don't have anything to add to those answers, so let me instead try to expand on a different approach (inspired by a suggestion from @Yoav Aner). For this, I need to know whether capslock works by uppercasing all letters that you type, or whether it works by reversing the case of all letters that you type. It appears the answer depends upon the particular client, so I'll give you an answer both ways; you might need to apply both, if you're not sure about which kind of system the client is using.
If capslock uppercases all letters: Store just the hash of the password in your database. When you receive a candidate password P from the client, can perform the following steps on the server to validate the password:
1. Compute Hash(P) and see if it matches the user's password hash (as stored in the database). If yes, mark the user as authenticated; you're done. Otherwise, continue to step 2.
2. Check whether P has any lowercase letters in it. If yes, reject the authentication attempt. If not (if all letters in P are in uppercase), reject the authentication attempt but warn the user to check whether they have capslock on and ask them to try again.
This scheme certainly does not negatively impact the security of the system in any way.
If capslock reverses the case of all letters: Store just the hash of the password in your database. When you receive a candidate password P from the client, can perform the following steps on the server to validate the password:
1. Compute Hash(P) and see if it matches the user's password hash (as stored in the database). If yes, mark the user as authenticated; you're done. Otherwise, continue to step 2.
2. Let P' denote the result of reversing the case of all letters in P. Compute Hash(P') and see if it matches the user's password hash (as stored in the database). If yes, mark the user as authenticated but warn them to check whether they have capslock on. Otherwise, reject the authentication attempt.
This scheme reduces the effort needed for an online password guessing attack by a factor of 2; not a big deal. It does not reduce the security of the system against offline attacks at all.
Addendum: Here is a crazy idea you could try if capslock uppercases all letters. I don't recommend it, but I mention it just because it is so crazy.
Store just the hash of the password in your database. When you receive a candidate password P from the client, can perform the following steps on the server to validate the password:
1. Compute Hash(P) and see if it matches the user's password hash (as stored in the database). If yes, mark the user as authenticated; you're done. Otherwise, continue to step 2.
2. At this point, you're going to test the hypothesis that maybe P is the capslock-ified version of the user's password. Check whether P has any lowercase letters. If P has any lowercase letters, then reject the authentication attempt and stop. Otherwise, continue on to step 3, where you will continue to test this hypothesis.
3. You will try un-capslock-ing P, and then check the un-capslocked version. Unfortunately, there are many possible ways to un-capslock P, so you'll have to try them all. In particular, suppose P has letters in n positions, with all of those letters in uppercase. Try all 2^n subsets of those positions. For each subset, let P' denote the result of taking P and then lowercasing just the letters at that subset of positions; compute Hash(P'); check if Hash(P') matches the user's hashed password; if it does, mark the user as authenticated and warn them to turn off capslock; otherwise continue on to the next subset. If no subset is successful, reject the authentication attempt.
This is pretty crazy, for all sorts of reasons. It reduces the security of online guessing attacks fairly substantially (by a factor of 2^n, for passwords with n letters in them). It doesn't reduce the security of offline guessing attacks. But it also significantly increases the complexity of the server. And given the better answers mentioned elsewhere, there seems to be no reason to use this crazy scheme.
• This seems like a great deal of work that a very small amount of JavaScript code can cover. – Ramhound May 11 '12 at 12:41
• @Ramhound, I completely agree! – D.W. May 11 '12 at 15:36
I don't think this is necessary and moreover, it can reduce the security of your application.
For the necessary part, I don't know how you identify your user, but if it's a client/server authentication and the client sends you the password in clear to the server (the server, then, hash the password to compare it with the database), you could check the uppercase here :
// in the server :
// $password contains the original, clear password
$up_pass = invertcase($password); // I'll explain invertcase later

if (hash_and_find_pwd_in_db($password) == null) {
    if ($up_pass == $password) {
        return "Did you have caps lock enabled?";
    }
}
(Security note: the security of the above scheme assumes you implement hash_and_find_pwd_in_db() by hashing the password and comparing it against a stored hash value in the database. This way your database only stores password hashes, not cleartext passwords.)
You could always check using javascript if the capslock is enabled, but again, if JS is disabled, this won't work.
About performance: Imagine you have an "uppercase" version, a "lowercase" version, etc. You will have to check the original hashed password against the correct one, and if it isn't found, you'll have to hit the database as many times as you have alternatives, just to let the user know that what he entered is invalid for a specific reason. I'm sure it's not efficient at all.
Now, as @Yoav Aner said, writing aXbJklMP in caps lock mode will result in this string: AxBjKLmp. That means you'll have to implement an invertcase function that changes the case of each letter to match the hypothetical caps lock mode, without any certainty.
• @D.W. - not necessarily, if find_pwd_in_db hashes the password and compares it against a stored hash value then you don't need to store passwords in plain-text. – Yoav Aner May 10 '12 at 6:06
• I updated my answer by changing the find_pwd_in_db to hash_and_find_pwd_in_db to make it more clear. I also updated the secure part matching the comment of @Yoav Aner (you're right, it invert the case). – Cyril N. May 10 '12 at 7:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3907887041568756, "perplexity": 1116.4025861792145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666467.20/warc/CC-MAIN-20191016063833-20191016091333-00158.warc.gz"} |
https://www.groundai.com/project/gravitational-wave-sources-from-pop-iii-stars-are-preferentially-located-within-the-cores-of-their-host-galaxies/ | The Galactic Location of GW Sources from Pop III Stars
Gravitational Wave Sources from Pop III Stars are Preferentially Located within the Cores of their Host Galaxies
Fabio Pacucci , Abraham Loeb, Stefania Salvadori
Department of Physics, Yale University, New Haven, CT 06511, USA.
Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA.
Dipartimento di Fisica e Astronomia, Università di Firenze, Via G. Sansone 1, Sesto Fiorentino, Italy.
INAF/Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, Firenze, Italy.
GEPI, Observatoire de Paris, PSL Research University, CNRS, Place Jule Janssen 92190, Meudon, France.
[email protected]
Abstract
The detection of gravitational waves (GWs) generated by merging black holes has recently opened up a new observational window into the Universe. The mass of the black holes in the first and third LIGO detections, ( and ), suggests low-metallicity stars as their most likely progenitors. Based on high-resolution N-body simulations, coupled with state-of-the-art metal enrichment models, we find that the remnants of Pop III stars are preferentially located within the cores of galaxies. The probability of a GW signal to be generated by Pop III stars reaches at from the galaxy center, compared to a benchmark value of outside the core. The predicted merger rates inside bulges is ( is the Pop III binarity fraction). To match the credible range of LIGO merger rates, we obtain: . Future advances in GW observatories and the discovery of possible electromagnetic counterparts could allow the localization of such sources within their host galaxies. The preferential concentration of GW events within the bulge of galaxies would then provide an indirect proof for the existence of Pop III stars.
keywords:
gravitational waves - stars: Population III - black hole physics - cosmology: dark ages, reionization, first stars - cosmology: early Universe - galaxies: bulges
1 Introduction
The first detection of gravitational waves (GWs) by the Laser Interferometer Gravitational Wave Observatory (LIGO) has marked the birth of GW astronomy. The event GW150914 (Abbott et al., 2016a) originated from the merger of a binary black hole (BBH) system with masses and at a redshift , corresponding to a luminosity distance of . This discovery was followed by a second (Abbott et al., 2016b) and a third detection (The LIGO Scientific Collaboration et al., 2017), the latter one with inferred source-frame masses of and at . Current predictions (Abbott et al., 2016d) indicate that GW events will be detected regularly with the additional GW detectors (e.g. VIRGO and KAGRA) at a rate of several per month up to . Overall, the inferred merger rate is .
The opening of a new observational window would enable a revolution in our understanding of the Universe. From an astrophysical point of view (Abbott et al., 2016c), the detection provides direct evidence for the existence of BBHs with comparable mass components. This type of BBHs have been predicted in two main types of formation channels, involving isolated stellar binaries in galactic fields or dynamical interactions in dense stellar environments (Belczynski et al., 2016; Rodriguez et al., 2016). Moreover, a high-mass () stellar progenitor favors a low-metallicity formation environment (, see e.g. Eldridge & Stanway 2016). Extremely low metallicity () leads to the formation of high-mass stars because: (i) the process of gas fragmentation is less efficient and results in proto-stellar clouds that are times more massive than in the presence of dust and metals, and (ii) the accretion of gas onto the proto-stellar cores is more efficient (Omukai & Palla, 2002; Bromm et al., 2002; Abel et al., 2002).
The low redshift of the event GW150914, (, and the low metallicity of the stellar progenitors of the BBH suggest two main formation scenarios for this source. The BBH could have formed in the local Universe, possibly in a low-mass galaxy with a low metal content (Belczynski et al., 2016). Another possible formation channel is in globular clusters (Rodriguez et al., 2016; Zevin et al., 2017). This formation channel implies that the BBH underwent a prompt merger, on a time scale much shorter than the Hubble time. Alternatively, the progenitors of the BBH could have formed in the early Universe, possibly from Pop III stars (see e.g. Belczynski et al. 2004). The first population of stars has not been observed so far (see Sobral et al. 2015; Pacucci et al. 2017; Natarajan et al. 2017) and large uncertainties remain about their physical properties. Current theories (Barkana & Loeb, 2001; Bromm, 2013; Loeb & Furlanetto, 2013) suggest that Pop III stars are characterized by: (i) very low metallicities (), and (ii) large masses (). The formation of BBH progenitors at high redshifts from Pop III stars would imply a time delay between formation and merger of . Pop III stars are therefore natural candidates for the progenitors of massive BBHs. For instance, Kinugawa et al. (2014) pointed out that the detection of GW signals from BBHs with masses would strongly indicate that these sources preferentially originated from Pop III stars. Hartwig et al. (2016) estimated the contribution of Pop III stars to the intrinsic merger rate density: owing to their higher masses, the remnants of Pop III stars produce strong GW signals, even if their contribution in number is small and most of them would occur at high .
In this Letter we propose an independent way to assess if the component of BBHs, detected through GWs, originated from Pop III stars. Employing a data-constrained chemical evolution model coupled with high-resolution N-body simulations, we study the location of Pop III star remnants in a galaxy like the Milky Way (MW) and its high- progenitors. Due to the hierarchical build-up of structures, we expect these old and massive black hole relics to be preferentially found inside the bulge of galaxies. Previously, Gao et al. (2010) employed the Aquarius simulation to show that half of Pop III remnants should be localized within of galactic centers. Here we aim at improving this result, tracking the location of Pop III remnants interior to the galactic bulge (). The localization of GW sources by future observations would therefore allow to test their Pop III origin. Such spatial localization of GW events is currently out of reach. For instance, Nissanke et al. (2013) stated that with an array of new detectors it will be possible to reach an accuracy in the spatial localization up to square degrees. This would allow the localization of a GW event within the dimension of a typical galactic bulge () only for a few galaxies in the local Universe. This situation would change if GW events had electromagnetic (EM) counterparts. The Fermi satellite reported the detection of a transient signal at photon energies that lasted and appeared after GW150914 (Connaughton et al., 2016), encompassing of the probability sky map associated with the LIGO event. Similarly, the satellite AGILE might have detected a high-energy () EM counterpart associated with GW170104 (Verrecchia et al., 2017). While the merging of the components of an isolated BBH generates no EM counterpart, other physical situations exist in which the GW is associated with an EM signal. For instance, the merging BBHs may be orbiting inside the disk of a super-massive black hole (Kocsis et al., 2008), inducing a tidal disruption event (see also Perna et al. 2016; Murase et al. 2016). Moreover, the collapse of the massive star forming an inner binary in a dumbbell configuration could appear as a supernova explosion or a GRB (Loeb, 2016; Fedrow et al., 2017). The identification of EM counterparts of GW signals would allow a precise localization of the source.
2 Methods
We start by describing the properties of the simulations employed along with the general assumptions for Pop III stars.
2.1 N-body simulation and chemical evolution
We employed a data-constrained chemical evolution model designed to study the first stellar generations combined with N-body simulations of MW analogues to localize Pop III remnants within the scale of the MW bulge, (Ness et al., 2013).
While the most important features of the N-body simulation are reported here, more details can be found in Scannapieco et al. (2006), Salvadori et al. (2010a). These papers simulated a MW analog with the GCD+ code (Kawata & Gibson, 2003) using a multi-resolution technique to achieve high resolution. The initial conditions are set up at using GRAFIC2 (Bertschinger, 2001). The highest resolution region in the full simulation is a sphere with a radius four times larger than the virial radius of the MW analog at . The dark matter particles mass and spatial resolution are respectively and . The total mass of stars contained in the MW disk and used to calibrate the simulation is . The virial mass and radius of the MW analog, containing about particles, are consistent with observational estimates , (Battaglia et al., 2005).
While the resolution of this N-body simulation is not the highest currently available, the chemical evolution model is unrivaled in studying the metal enrichment of Pop III and Pop I/II stellar populations down to the present day. The star-formation and metal enrichment history of the MW is studied by using a model that self-consistently follows the formation of Pop III and PopI/II stars and that is calibrated to reproduce the observed properties of the MW (Salvadori et al., 2010a). Combined with the N-body simulation, the model naturally reproduces the age-metallicity relation, along with the properties of metal-poor stars, including the metallicity distribution functions and spatial distributions. Furthermore, it matches the properties of higher- galaxies (Salvadori et al., 2010b). So far, no other simulations can simultaneously account for these observations. Hence, even if we are unable to resolve the main formation sites of Pop III stars (i.e. the mini-halos), we are able to localize with great accuracy the descendants of Pop III stars, i.e. the extremely low metallicity Pop II stars. For this reason, the method we employ here is perfectly suitable for pinpointing the location of Pop III stars in the MW and its progenitors.
2.2 General assumptions for Pop III remnants
Regarding the properties of Pop III stars, we make educated guesses, or parametrize the unknown properties. For instance, the initial mass function (IMF) for Pop III stars is predicted to be different from the IMF of local stars. In the simulations, we employ different Pop III IMFs (see Sec. 4 for details) and check that the radial distribution of Pop III stars is nearly independent on these choices. Moreover, not all binaries can produce LIGO sources (Christian & Loeb, 2017), since they need to have suitable initial masses and separations (see Sec. 4). Also the binarity fraction of Pop III stars could be different from the one of local systems. A qualitative constraint on its value can be derived by matching our predictions with the LIGO merger rate (see Sec. 5).
3 Location of Pop III Remnants within their Host Galaxy
Next we analyze the spatial distribution of Pop III stellar remnants for the MW analog and for some of its progenitors at different halo masses and redshifts (see Table 1).
3.1 Pop III remnants in a Milky Way like galaxy
Figure 1 compares the spatial distributions of Pop III and of Pop I/II stars in the simulation of the MW analog.
The kernel density estimation (left panel) suggests that the distribution of Pop III remnants is highly concentrated near the center. Each contour corresponds to of the components of each population. We find that of Pop III stars are located within from the galactic center. This is confirmed by the probability distribution function (PDF, right panel) for the distance from the center on the galactic plane. The radial distribution of the two stellar components is significantly different, as the full widths at half maximum (FWHM) suggest: for Pop III stars and for Pop I/II stars. Figure 2 shows the stellar densities for both populations: the radial distribution of Pop III remnants is steeper, , than the radial distribution of normal stars, . Thus, at increasing radial distances Pop III remnants are rarer:
$$\frac{\rho_\star(\mathrm{Pop\,III})}{\rho_\star(\mathrm{Pop\,I/II})} \sim r^{-1}. \qquad (1)$$
Similar radial profiles are found for smaller galactic systems, such as P1 and P2 in Table 1. This is illustrated in Fig. 3.
4 Modeling the probability of observing a GW signal from Pop III stars
Employing our N-body simulations, we compute the probability that GW signals generated by BBHs originated from Pop III stars. We name this probability and we calculate it as a function of the galactocentric distance . Formally, the probability is the product of three terms.
The first term, , is the probability of having a Pop III remnant at out of the entire sample of stars: , see Fig. 3 for examples of .
The second term, , is the probability of having binary systems. This probability takes into account both the scenario in which a BBH is formed from binary stars and the scenario in which the black holes form separately and then become gravitationally bound. The binarity probability depends on several factors and have been extensively studied for Pop I/II stars. Instead, there are large uncertainties in the extrapolation to the binarity probability of Pop III stars. Here, we parametrize it with a constant value, independent of mass: . Assuming that GW events are generated by Pop III stars in galactic bulges, we derive in Sec. 5 an estimate of the parameter by matching our predicted GW rate with the LIGO statistics, requiring that .
The third term, , is the probability that the progenitor stars end up with remnants of the correct masses to produce a given GW event, such as GW150914 or GW170104. This probability depends on a large number of parameters and is generally impossible to calculate from first principles. The mass ratio distribution, the orbital separation, the mass exchange between companions and the natal kicks after supernova events can only be incorporated by complicated population synthesis models (Eldridge & Stanway, 2016). The use of these models goes beyond the scope of this paper. An improvement in our knowledge of mass ratios could occur in the near future from large surveys, such as GAIA. Mashian & Loeb (2017) predict that up to astrometric binaries hosting black holes and stellar companions brighter than GAIA’s detection threshold should be discovered with sensitivity. Whereas calculating the mass distribution of normal Pop I/II stars is feasible, the IMF for Pop III stars is poorly constrained. Hence, we make the simplifying assumption that depends only on the IMFs of the two populations of stars. In other words, we assume that it depends only on their respective probability of forming stars above a given mass threshold, e.g. . In fact, the mass range for LIGO-detected BBHs so far is , and the initial stellar mass needs to be significantly higher than these values. The characteristic mass of Pop III stars is predicted to be larger than the mass of local stars, and so they are more prone to having the minimum mass necessary to form BBH sources of LIGO signals. With this assumption we are neglecting all the complications that can only be accounted for by population synthesis models. We use the following IMF for the two stellar populations, with a Salpeter exponent and a low mass cutoff that depends on the stellar population:
$$\Phi(m) \propto m^{-2.35}\,\exp\!\left(-\frac{M_c}{m}\right), \qquad (2)$$
where is in solar units. For Pop III stars we assume , while for Pop I/II stars . With we find that Pop III stars are times more common in the mass range than Pop I/II stars. Instead, with , the factor is .
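For illustration, the relative abundance of stars above a given mass threshold follows directly from Eq. (2); the cutoff masses and the 30 solar-mass threshold used below are assumptions chosen only to show the procedure, not the values adopted in the text:
import numpy as np
from scipy.integrate import quad

def imf(m, m_c):
    # Salpeter-like IMF with an exponential low-mass cutoff, Eq. (2)
    return m**(-2.35) * np.exp(-m_c / m)

def fraction_above(threshold, m_c, m_min=0.1, m_max=1000.0):
    # number fraction of stars formed above `threshold` (in solar masses)
    num, _ = quad(imf, threshold, m_max, args=(m_c,))
    den, _ = quad(imf, m_min, m_max, args=(m_c,))
    return num / den

# assumed cutoffs: a massive one for Pop III, a sub-solar one for Pop I/II
ratio = fraction_above(30.0, m_c=10.0) / fraction_above(30.0, m_c=0.35)
print(ratio)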
The final expression for is:
$$P_\mathrm{III,GW}(r) = P_\mathrm{III}(r) \times P_\mathrm{bin,III} \times P(M_1, M_2). \qquad (3)$$
Given our assumptions, the only term that directly depends on position is the first one.
5 Constraints on binarity and radial probabilities
We are now at a position to constrain the binarity probability of Pop III stars given the inferred LIGO merger rate and then compute the overall probability that a GW signal event localized in the bulge of a MW analog originated from Pop III remnants.
The LIGO detection statistics (Abbott et al., 2016d) implies a credible range of merger rates between . In general, is computed as follows:
$$R = \rho_\mathrm{G} \times N_\mathrm{BBH} \times P, \qquad (4)$$
where is the number density of galaxies in the local Universe, is the average number of BBHs in a MW-mass galaxy and is the average merger rate. We assume (Conselice et al., 2016) and (from the distribution of merging times in Rodriguez et al. 2016 for BBHs in globular clusters). A comment on the latter value is warranted. The PDF of semi-major axes in binary orbits for Pop III stars is largely unconstrained. Here we make the educated guess that the PDF should not significantly vary between Pop III and Pop I/II stars (Sana et al., 2012). Thus, it is also reasonable to employ the distribution of merging times for Pop I/II stars. Moreover, the number of binaries within the bulge of our MW analog is (with the binarity probability for Pop III stars). We therefore obtain: . From the LIGO predicted rate we obtain upper and lower limits for the Pop III binarity probability of and . Stacy & Bromm (2013) find an overall binarity fraction of , which is in the middle of our range. The estimated binarity fraction of Pop III stars is large, and somewhat compatible with observations of massive stars in the Milky Way, indicating that about of O-type stars have spectroscopic binary companions. This is also consistent with the fact that primordial stars are predicted to form preferentially in multiple systems (see e.g. Greif et al. 2012; Stacy et al. 2016).
The probability that a GW signal is generated by remnants of binary Pop III stars is shown in Fig. 3 for a MW analog. In the calculation we assumed and different values of the characteristic mass in the Pop III IMF: and . The probability that a GW signal is generated by Pop III stars is enhanced in all cases within the core of the MW analog. In particular, for the probability reaches at from the center, compared to a benchmark value of outside the galactic core. Similarly, in the case, the peak probability inside the core is , with a benchmark value of . Also note that for the probability reaches inside the galactic core.
6 Discussion and Conclusions
The masses of the two merging black holes in the first LIGO detection suggest that their most likely progenitors are massive low-metallicity stars. The BBH could have formed in the local Universe, possibly in a galaxy with a low content of metals, and then merged. Alternatively, the progenitors of the BBH formed in the early Universe, possibly as Pop III stars, and then merged on a Hubble time scale. By using high-resolution N-body simulations of MW-like and smaller galaxies, coupled with a state-of-the-art metal enrichment model, we suggest that the remnants of Pop III stars should be preferentially found within the bulges of galaxies, i.e. from the center of a MW-like galaxy. We predict a merger rate for GW events generated by Pop III stars inside bulges in the local Universe () of , where is the binarity probability of Pop III stars. By matching with the LIGO merger rate, we derive lower and upper limits of . By using and a Salpeter-like IMF with a variable low-mass cutoff for Pop III stars, we predict that the probability for observing GW signals generated by Pop III stars is strongly enhanced in the core of their host galaxies. In particular, in the case , the probability reaches at from the galactic center, compared to a benchmark value of outside it.
A GW signal from within the bulge of galaxies could also originate from BBHs formed in globular clusters and slowly drifting towards the core (Rodriguez et al., 2016). Globular clusters can produce a significant population of massive BBHs that merge in the local Universe, with a merger rate of , with of sources having total masses in the range . They drift towards the center of their galaxy, but the dynamical friction time to reach the innermost is very long. Since only a fraction of BBHs generated in globular clusters would be found inside the innermost core of the galaxy, the merger rate inside the bulge would be , unable to match the LIGO merger rate. We conclude that the Pop III channel is still the preferential one for generating GW events in galactic bulges.
While most Pop III stars are within the core of the galaxy, their total number is small with respect to the Pop I/II stars, whose population had a much longer fraction of the Hubble time to form. Nonetheless, the IMF of Pop III stars is skewed towards more massive stars. It is then up to times more likely to have a Pop III binary star with the right masses to produce GW signal like the first or the third LIGO detections than in the case of Pop I/II stars.
Despite the large uncertainties regarding the physical properties (or even the existence) of Pop III stars, there is one robust conclusion that can be drawn. If the first population of massive () and metal-free () stars exists, then we predict that GW signals generated by their BBHs are preferentially () located within the inner core of galaxies. If the GW signals detected so far by LIGO are indeed generated by Pop III remnants, we predict in addition that their binarity probability is within the range .
The localization of a GW in the core of a galaxy would not by itself pinpoint its origin as Pop III remnants. Nonetheless, the future build-up of a solid statistics of GW events and the possible localization of a large fraction of them within the core of galaxies, coupled with considerations about their masses, could clearly provide an indirect probe of Pop III stars.
FP acknowledges the Chandra grant nr. AR6-17017B and NASA-ADAP grant nr. MA160009 and is grateful to Charles Bailyn, Frank van den Bosch and Daisuke Kawata. SS was supported by the European Commission through a Marie Curie Fellowship (project PRIMORDIAL, nr. 700907). This work was supported in part by the Black Hole Initiative at Harvard University, which is funded by a grant from the John Templeton Foundation.
References
• Abbott et al. (2016a) Abbott B. P., et al., 2016a, Physical Review Letters, 116, 061102
• Abbott et al. (2016b) Abbott B. P., et al., 2016b, Physical Review Letters, 116, 241103
• Abbott et al. (2016c) Abbott B. P., et al., 2016c, ApJ, 818, L22
• Abbott et al. (2016d) Abbott B. P., et al., 2016d, ApJ, 833, L1
• Abel et al. (2002) Abel T., Bryan G. L., Norman M. L., 2002, Science, 295, 93
• Barkana & Loeb (2001) Barkana R., Loeb A., 2001, Phys. Rep., 349, 125
• Battaglia et al. (2005) Battaglia G., et al., 2005, MNRAS, 364, 433
• Belczynski et al. (2004) Belczynski K., Bulik T., Rudak B., 2004, ApJ, 608, L45
• Belczynski et al. (2016) Belczynski K., Holz D. E., Bulik T., O’Shaughnessy R., 2016, Nature, 534, 512
• Bertschinger (2001) Bertschinger E., 2001, ApJS, 137, 1
• Bromm (2013) Bromm V., 2013, Reports on Progress in Physics, 76, 112901
• Bromm et al. (2002) Bromm V., Coppi P. S., Larson R. B., 2002, ApJ, 564, 23
• Christian & Loeb (2017) Christian P., Loeb A., 2017, preprint, (arXiv:1701.01736)
• Connaughton et al. (2016) Connaughton V., et al., 2016, ApJ, 826, L6
• Conselice et al. (2016) Conselice C. J., Wilkinson A., Duncan K., Mortlock A., 2016, ApJ, 830, 83
• Eldridge & Stanway (2016) Eldridge J. J., Stanway E. R., 2016, MNRAS, 462, 3302
• Fedrow et al. (2017) Fedrow J. M., Ott C. D., Sperhake U., Blackman J., Haas R., Reisswig C., De Felice A., 2017, preprint, (arXiv:1704.07383)
• Gao et al. (2010) Gao L., Theuns T., Frenk C. S., Jenkins A., Helly J. C., Navarro J., Springel V., White S. D. M., 2010, MNRAS, 403, 1283
• Greif et al. (2012) Greif T. H., Bromm V., Clark P. C., Glover S. C. O., Smith R. J., Klessen R. S., Yoshida N., Springel V., 2012, MNRAS, 424, 399
• Hartwig et al. (2016) Hartwig T., Volonteri M., Bromm V., Klessen R. S., Barausse E., Magg M., Stacy A., 2016, MNRAS, 460, L74
• Kawata & Gibson (2003) Kawata D., Gibson B. K., 2003, MNRAS, 340, 908
• Kinugawa et al. (2014) Kinugawa T., Inayoshi K., Hotokezaka K., Nakauchi D., Nakamura T., 2014, MNRAS, 442, 2963
• Kocsis et al. (2008) Kocsis B., Haiman Z., Menou K., 2008, ApJ, 684, 870
• Loeb (2016) Loeb A., 2016, ApJ, 819, L21
• Loeb & Furlanetto (2013) Loeb A., Furlanetto S. R., 2013, The First Galaxies in the Universe. Princeton University Press
• Mashian & Loeb (2017) Mashian N., Loeb A., 2017, preprint, (arXiv:1704.03455)
• Murase et al. (2016) Murase K., Kashiyama K., Mészáros P., Shoemaker I., Senno N., 2016, ApJ, 822, L9
• Natarajan et al. (2017) Natarajan P., Pacucci F., Ferrara A., Agarwal B., Ricarte A., Zackrisson E., Cappelluti N., 2017, ApJ, 838, 117
• Ness et al. (2013) Ness M., et al., 2013, MNRAS, 432, 2092
• Nissanke et al. (2013) Nissanke S., Kasliwal M., Georgieva A., 2013, ApJ, 767, 124
• Omukai & Palla (2002) Omukai K., Palla F., 2002, Ap&SS, 281, 71
• Pacucci et al. (2017) Pacucci F., Pallottini A., Ferrara A., Gallerani S., 2017, MNRAS, 468, L77
• Perna et al. (2016) Perna R., Lazzati D., Giacomazzo B., 2016, ApJ, 821, L18
• Rodriguez et al. (2016) Rodriguez C. L., Chatterjee S., Rasio F. A., 2016, Phys. Rev. D, 93, 084029
• Salvadori et al. (2010a) Salvadori S., Ferrara A., Schneider R., Scannapieco E., Kawata D., 2010a, MNRAS, 401, L5
• Salvadori et al. (2010b) Salvadori S., Dayal P., Ferrara A., 2010b, MNRAS, 407, L1
• Sana et al. (2012) Sana H., et al., 2012, Science, 337, 444
• Scannapieco et al. (2006) Scannapieco E., Kawata D., Brook C. B., Schneider R., Ferrara A., Gibson B. K., 2006, ApJ, 653, 285
• Sobral et al. (2015) Sobral D., Matthee J., Darvish B., Schaerer D., Mobasher B., Röttgering H. J. A., Santos S., Hemmati S., 2015, ApJ, 808, 139
• Stacy & Bromm (2013) Stacy A., Bromm V., 2013, MNRAS, 433, 1094
• Stacy et al. (2016) Stacy A., Bromm V., Lee A. T., 2016, MNRAS, 462, 1307
• The LIGO Scientific Collaboration et al. (2017) The LIGO Scientific Collaboration et al., 2017, preprint, (arXiv:1706.01812)
• Verrecchia et al. (2017) Verrecchia F., et al., 2017, preprint, (arXiv:1706.00029)
• Zevin et al. (2017) Zevin M., Pankow C., Rodriguez C. L., Sampson L., Chase E., Kalogera V., Rasio F. A., 2017, preprint, (arXiv:1704.07379)
https://codereview.stackexchange.com/questions/202750/codility-binary-gap-solution-using-indices | Codility binary gap solution using indices
I am seeking a review of my codility solution.
Whilst the problem is fairly simple, my approach differs to many solutions already on this site. My submission passes 100% of the automated correctness tests.
I am looking for feedback regarding the design of my solution, and any comments in relation to how this submission might be graded by a human.
Problem description:
A binary gap within a positive integer N is any maximal sequence of consecutive zeros that is surrounded by ones at both ends in the binary representation of N.
For example, number 9 has binary representation 1001 and contains a binary gap of length 2. The number 529 has binary representation 1000010001 and contains two binary gaps: one of length 4 and one of length 3. The number 20 has binary representation 10100 and contains one binary gap of length 1. The number 15 has binary representation 1111 and has no binary gaps. The number 32 has binary representation 100000 and has no binary gaps.
Write a function:
def solution(N)
that, given a positive integer N, returns the length of its longest binary gap. The function should return 0 if N doesn't contain a binary gap.
For example, given N = 1041 the function should return 5, because N has binary representation 10000010001 and so its longest binary gap is of length 5. Given N = 32 the function should return 0, because N has binary representation '100000' and thus no binary gaps.
Write an efficient algorithm for the following assumptions:
N is an integer within the range [1..2,147,483,647].
Solution:
def solution(N):
    bit_array = [int(bit) for bit in '{0:08b}'.format(N)]
    indices = [bit for bit, x in enumerate(bit_array) if x == 1]
    if len(indices) < 2:
        return 0
    lengths = [end - beg for beg, end in zip(indices, indices[1:])]
    return max(lengths) - 1
3. you don't need to return early, as you can default max to 1.
4. You shouldn't need the 8 in your format, as it shouldn't matter if there's padding or not, as it'll be filtered anyway.
def solution(N): | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35597094893455505, "perplexity": 1346.3607331133962}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255251.1/warc/CC-MAIN-20190520001706-20190520023706-00328.warc.gz"} |
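(The reviewer's revised code is truncated here. A hypothetical reconstruction of what points 3 and 4 imply — no width in the format spec, and max(..., default=1) from Python 3.4+ so the early return goes away — might look like this; it is a sketch, not the reviewer's actual code.)

def solution(N):
    # positions of the 1-bits in the binary representation of N
    indices = [i for i, bit in enumerate('{0:b}'.format(N)) if bit == '1']
    # gaps between consecutive 1-bits; default=1 covers numbers with fewer than two 1-bits
    return max((end - beg for beg, end in zip(indices, indices[1:])), default=1) - 1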
http://export.arxiv.org/abs/1803.00575 | astro-ph.EP
# Title: Planetesimal formation during protoplanetary disk buildup
Abstract: Models of dust coagulation and subsequent planetesimal formation are usually computed on the backdrop of an already fully formed protoplanetary disk model. At the same time, observational studies suggest that planetesimal formation should start early, possibly even before the protoplanetary disk is fully formed. In this paper, we investigate under which conditions planetesimals already form during the disk buildup stage, in which gas and dust fall onto the disk from its parent molecular cloud. We couple our earlier planetesimal formation model at the water snow line to a simple model of disk formation and evolution. We find that under most conditions planetesimals only form after the buildup stage when the disk becomes less massive and less hot. However, there are parameters for which planetesimals already form during the disk buildup. This occurs when the viscosity driving the disk evolution is intermediate ($\alpha_v \sim 10^{-3}-10^{-2}$) while the turbulent mixing of the dust is reduced compared to that ($\alpha_t \lesssim 10^{-4}$), and with the assumption that water vapor is vertically well-mixed with the gas. Such $\alpha_t \ll \alpha_v$ scenario could be expected for layered accretion, where the gas flow is mostly driven by the active surface layers, while the midplane layers, where most of the dust resides, are quiescent.
Comments: 6 pages, 5 figures, accepted for publication in A&A, minor changes due to language edition
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
DOI: 10.1051/0004-6361/201732221
Cite as: arXiv:1803.00575 [astro-ph.EP] (or arXiv:1803.00575v2 [astro-ph.EP] for this version)
## Submission history
From: Joanna Drazkowska [view email]
[v1] Thu, 1 Mar 2018 19:00:03 GMT (276kb,D)
[v2] Thu, 8 Mar 2018 11:16:39 GMT (276kb,D) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291674852371216, "perplexity": 3627.0246309581717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646636.25/warc/CC-MAIN-20180319081701-20180319101701-00506.warc.gz"} |
https://www.dsprelated.com/thread/11518/fractional-delay-filter-for-non-real-time-applications | ## fractional delay filter for non real time applications
Started 3 years ago • 14 replies • latest reply 3 years ago • 303 views
Hi guys,
I would like to create a fractional delay filter that can be applied to a signal for non-real-time applications. My approach is to apply a linear phase shift via multiplication with an ideal filter (rectangular window) with linear phase in the frequency domain. In the next step, I transform the signal back to the time domain using a symmetric ifft to obtain a real-valued signal.
The Matlab code looks like this:
N = 1e3; % number of samples
n0 = 0.3; % fractional delay in integer samples
x = randn(1,N); % time domain signal
X = fft(x); % frequency domain
k = 0:N-1; % frequency index vector
Y = X.*exp(-1i*2*pi*k/N*n0); % linear phase shift wrt to frequency
y = ifft(Y,'symmetric'); % back to time domain
Due to the rectangular window of the filter I will get ringing artifacts at the frequency boundaries. Nevertheless, is this a valid approach to delay a signal by a fractional (non-integer) delay?
Cheers,
Jakob
Reply by June 15, 2020
The ringing you refer to is because your approach creates a filter frequency response which has a discontinuity. I'm using the word discontinuity incorrectly because we are in sampled time, but the point is that your linear phase changes abruptly at the transition between the highest frequency and the zero frequency. For a non-fractional delay, the discontinuity is a multiple of 2 *pi which is OK since the phase is an angle and angles are normally mod 2*pi.
Most people would place that discontinuity at the half-sampling frequency rather than at zero, but I'm not sure that isn't just a matter of taste. But discontinuity in the frequency domain always leads to a Gibbs phenomenon in the time domain and to avoid that you should probably apply a time domain window. That means that the signal you are delaying is the windowed signal.
Reply by June 16, 2020
Does this mean that the approach I showed leads to the Gibbs phenomenon but yields perfect linear phase for the whole band with no group delay error?
For my application I need a very precise delay filter (<10⁻⁴ with respect to the sample interval). The amplitude response isn't that important to me.
Does this make sense?
Reply by June 16, 2020
I think you have to express your requirement more carefully. What exactly is a precise delay filter? If the input and the output of the filter have the same sampling rate then you need a precise definition of what you mean by a delay.
Let's do a thought experiment. Say you have a random number sequence X that is a million samples long, where the assumed sampling rate is 1 microsecond. Now select every ten-thousandth sample, samples 0,10000,20000,..., and store it in another file Y. Imagine some process that creates a 100 sample output file Z which duplicates a different set of samples which is identical to the original million sample sequence X at samples 5,10005,20005,...
Clearly you can't expect to do that with truly random samples.
So instead, start the million sample sequence with a constrained set. One way to constrain it would be to take X's DFT and zero out most of the samples, keeping only DFT bins 0 to 49, and their mirror values -1 to -49 (mod 1 million). The inverse FFT is a million sample sequence X' with only frequency components between +/- 49 Hz so it can be sampled with 100 samples without violating the Nyquist sampling criterion. In other words it should be possible to recover the whole set of a million samples of X' from 100 samples Y'. You just need to put the 100 samples into a million spaces with all the other 999900 spaces having zero, and then convolve it with h(n)= (sin Nn)/ sin n. The result of the convolution is a million sample sequence Z' which matches X' everywhere. But you don't want it everywhere so you don't need to compute all the points of the convolution everywhere, only at points 5,10005,20005, etc.
That's your baseline reference for what you mean by a delay.
Now I think it is fair to assume that your original X and its reduced bandwidth X' were both real sequences. Real sequences have Fourier transforms with real part even and imaginary part odd.
The scheme you proposed will give Z' with some imaginary part. That, at a minimum, represents an error introduced by your technique. By Parseval's theorem the total energy of the output is the total energy of the input so there must be another part of the error in the real part.
How would you modify your scheme to eliminate any imaginary part? You have to multiply the transform by the transform of another real sequence, which has the required symmetry. In other words, let the linear phase function have its break at the folding frequency rather than at frequency 0.
That eliminates the imaginary part, which is all error. But it's not going to reproduce Z' at the desired 100 points. To do that you need to preserve ALL the frequency components, and the Gibbs phenomenon will produce ringing which is strongest near the phase discontinuity at 50 Hz.
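(Not from the thread — a minimal NumPy sketch of this idea: the linear phase is built on signed frequencies, so the break sits at the folding frequency and the spectrum stays Hermitian-symmetric. The length and the 0.3-sample delay are arbitrary assumptions.)

import numpy as np

N = 1001                      # odd length, so there is no lone Nyquist bin to special-case (assumption)
n0 = 0.3                      # fractional delay in samples (assumption)
x = np.random.randn(N)

k = np.fft.fftfreq(N)         # signed frequencies in cycles/sample; the phase "break" sits at +/-0.5
X = np.fft.fft(x)
Y = X * np.exp(-2j * np.pi * k * n0)   # linear phase over signed frequencies keeps Hermitian symmetry
y = np.fft.ifft(Y).real      # imaginary part is only round-off for real x

# Notes: the delay is circular (zero-pad x first if wrap-around matters, as suggested below),
# and for even N the bin at exactly 0.5 cycles/sample has no conjugate partner and is usually forced real.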
Reply by June 15, 2020
you need to zero-pad your x signal before the FFT. delaying the signal by linear phase-shifting the spectrum is exactly an example of using the FFT to do what we sometimes call "fast convolution". now this assumes non-real-time.
for real-time, if you have memory for a large table, i might recommend using a Kaiser-windowed sinc() function and then linear interpolation between the subsamples. Farrow is okay for what it is, but it is limited to Lagrange interpolation, which is a specific polynomial-based interpolation.
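(Not from the thread — a minimal NumPy sketch of the windowed-sinc idea, computing the taps directly rather than from a lookup table. The tap count, Kaiser beta and 0.3-sample delay are assumptions for illustration; a real-time design would precompute such taps for a grid of fractional delays and interpolate between them, as described above.)

import numpy as np

def frac_delay_fir(delay, ntaps=33, beta=8.0):
    # Kaiser-windowed sinc whose bulk delay is ntaps//2 samples plus the fractional part
    n = np.arange(ntaps)
    t = n - ntaps // 2 - delay
    h = np.sinc(t) * np.kaiser(ntaps, beta)
    return h / h.sum()        # normalize the DC gain to 1

h = frac_delay_fir(0.3)       # total delay of roughly 16.3 samples for ntaps = 33
x = np.random.randn(1000)
y = np.convolve(x, h)         # filtered (delayed) output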
Reply by June 21, 2020
This isn't accurate. The Farrow structure is generic and can implement any polynomial type and order. The coefficients determine which polynomial is implemented: Lagrange, hermite, spline or even custom designed to meet given time domain or frequency specifications.
Reply by June 21, 2020
You are absolutely correct. I was reverberating what I read at https://ccrma.stanford.edu/~jos/Interpolation/Farrow_Interpolation_Features.html and I didn't drill in enough to see that it could be any polynomial. But it does have to be a polynomial-based interpolation and I am not sure how easily you can jump over integer sample boundaries using this Farrow structure.
As for me, since all of the applications I have ever done fractional delay could easily provide an 8K word lookup table, I just designed the bestest brickwall FIR filter I could with 32-taps and 512 phases (or fractional delays) and use linear interpolation between adjacent fractional delays. Reasonably efficient (64 MACs plus one more for linear interpolation) and clean as a whistle. And I could jump to whatever precision delay (as long as it was at least 15 samples) I wanted to every single output sample.
Reply by June 22, 2020
Farrow is just generic structure for implementing polynomials (including linear interp).
Now the frequency response of polynomial based filter doesn't improve much for order >=3 so adding a fixed filter prior to farrow is a good idea instead of cranking up the order.
From memory, good fixed filter up by 128x followed by linear interp was about the same as a good 8x followed by cubic (100dB ish image/alias attenuation). Looks like this is in-line with what you are saying.
What is best mostly depends on the hardware.
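(Not from the thread — a minimal Python/NumPy sketch of a Farrow interpolator with cubic-Lagrange branch filters, one output sample per call. The coefficient matrix is the standard 4-tap Lagrange one; treat it as an illustration of the structure, not anyone's production design.)

import numpy as np

# Branch filters of a cubic-Lagrange Farrow interpolator.
# Rows correspond to powers of mu (0..3); columns to taps x[n-1], x[n], x[n+1], x[n+2].
B = np.array([
    [ 0.0,   1.0,  0.0,  0.0 ],   # mu^0
    [-1/3,  -1/2,  1.0, -1/6 ],   # mu^1
    [ 1/2,  -1.0,  1/2,  0.0 ],   # mu^2
    [-1/6,   1/2, -1/2,  1/6 ],   # mu^3
])

def farrow_cubic(x, n, mu):
    # Interpolated value of x at fractional index n + mu, with 0 <= mu < 1
    # (valid for 1 <= n <= len(x) - 3).
    c = B @ x[n-1:n+3]                              # one fixed FIR per branch
    return ((c[3]*mu + c[2])*mu + c[1])*mu + c[0]   # Horner evaluation in mu

Only the Horner evaluation depends on mu, so a time-varying delay just changes mu per output sample while the four branch filters stay fixed — which is the appeal of the structure discussed above.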
Reply by June 16, 2020
RBJ-
That's interesting Farrow has that limitation.
Everyone calls out Farrow anytime decimation/interp factors are impractical or other reason fractional resampling is needed. It's like a reflex. I didn't hear about any drawbacks until now. Thanks.
-Jeff
Reply by June 15, 2020
As others have said, you can do it in the time domain. You could use a Farrow interpolator, or for better accuracy of the amplitude response, you could use a Fractional-delay FIR. See my article on Fractional Delay FIR at:
https://www.dsprelated.com/showarticle/1327.php
regards,
Neil
Reply by June 16, 2020
thanks Neil, that is very valuable for me!
I checked out your scripts and had a look at the group delay error (magnitude of input group delay minus output group delay). The error lies at around 10⁻³ for a fractional delay of 0.123. The error increases with increasing fractional delay.
Do you know how I could create a fractional delay filter that has an error below 10^-4?
Reply by June 16, 2020
Hi JGruber,
You can try changing the dB ripple parameter "r" of the Chebychev window in my function. It is set at 70 dB -- you can improve accuracy by using a higher value, e.g.:
win= chebwin(ntaps,80);
For a given FIR order, as you increase r, the delay ripple decreases at the expense of reduced bandwidth of the filter.
Another option would be to use a Farrow interpolator (piecewise polynomial interpolation), which would give flat, accurate group delay, although the amplitude response is not perfectly flat.
regards,
Neil
Reply by June 17, 2020
Thank you very much Neil.
Reply by June 15, 2020
Hi,
fft then ifft is very expensive for that. I will just use farrow resampler.
Reply by June 15, 2020
Hi,
If you need to vary the fractional delay real-time, farrow resampler is a good choice. If you can afford to recompute the coefficients for each different fractional delay, a sinc interpolation filter could also work. (essentially, the time domain equivalent of your FFT approach). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6494206190109253, "perplexity": 1382.942828202519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00272.warc.gz"} |
http://tex.stackexchange.com/questions/21452/how-to-typeset-this-symbol | # How to typeset this symbol?
It's not \mathscr{I}, which is more italic (slanted)
This looks like a problem for detexify! – Seamus Jun 23 '11 at 12:09
Have a look at “How to look up a math symbol?” for ideas how you can easily find a particular symbol. – Martin Scharrer Jun 23 '11 at 12:10
@Seamus: It would be really great if detexify would accept an image URL as input. It's sometimes difficult to draw them correctly. – Martin Scharrer Jun 23 '11 at 12:12
@Martin yeah I've been trying to draw that for like 3 minutes now! My rollerball mouse is not designed for drawing... – Seamus Jun 23 '11 at 12:14
I've found something fairly similar in the STIX fonts, but it is more slanted than that and as you specifically say that \mathscr{I} is more slanted then I guess that's not what you want. – Loop Space Jun 23 '11 at 12:38
\documentclass{standalone}
\pdfmapfile{+rsfso.map}
\DeclareSymbolFont{rsfso}{U}{rsfso}{m}{n}
\DeclareSymbolFontAlphabet{\mathscr}{rsfso}
\begin{document}
$\mathscr{I}$
\end{document}
The \pdfmapfile is necessary as of today, since it seems that the map file doesn't correctly register into TeX Live. Works with TeX Live 2010 and 2011/testing.
The package mathrsfs defines \mathscr to use the font rsfs10 (or another size), while my definition requests the font rsfso10 (or at different size). This font has been developed by Michael Sharpe (texdoc rsfso), but his package redefines \mathcal instead of using a different command. So I copied the definition from mathrsfs changing rsfs in the font names into rsfso.
The font is just like RSFS, but less slanted. I don't know why the TeX Live manager doesn't add the map file to pdftex.map; but since the trick with \pdfmapline works, why bother?
Well, we should bother if the engine used is Xe(La)TeX, so a bug report will be filed.
That appears to be it.. Could you explain the difference with the results that I (and Herbert) got when using \mathscr{I}? – willem Jun 23 '11 at 16:00
Works for me now without the pdfmapfile, so apparently the bug got fixed. – Ben Crowell Dec 26 '13 at 16:37
@BenCrowell I guess so. – egreg Dec 26 '13 at 16:39
The sensible answer is "find a suitable font" (for example, the STIX script I is pretty close, but is perhaps too slanted). Here's a silly answer.
\documentclass{standalone}
\usepackage{tikz}
\usepackage{calligraphy}
\begin{document}
\begin{tikzpicture}[scale=10]
\calligraphy[copperplate,red,heavy,heavy line width=.2cm,light line width=.1cm]
(0.8,0.51) .. controls +(-.1,-.08) and +(0,-.15) .. (0.5,0.6) .. controls +(0,.15) and +(-.13,-.07) .. (0.85,0.72) +(0,0) .. controls +(-.26,-0.07) and +(.12,.15) .. (0.61,0.3) .. controls +(-.12,-.15) and +(0,-.13) .. (.22,.28);
\end{tikzpicture}
\end{document}
which produces:
Apart from the blob at the end, it's pretty close. I know that it's close because I did it by blowing up the image in the post and drawing on top of it. The line widths probably need tweaking a little, but it's only meant to be a silly answer ...
NICE! However, I need to use this in a journal article, and I'm not sure if editors like this stuff.. – willem Jun 23 '11 at 13:23
You should probably mention that the calligraphy package isn't on CTAN yet and explain where to get it and how awesome it is – Seamus Jun 23 '11 at 13:32
@willem: I did say it was silly! egreg's answer looks more promising. I have to say that I'm curious as to why \mathscr{I} wouldn't be acceptable to you. – Loop Space Jun 23 '11 at 13:33
@Seamus: I was counting on the fact that no-one would be silly enough to actually try to do this. – Loop Space Jun 23 '11 at 13:34
Choose another font if needed.
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[charter]{mathdesign}
\begin{document}
\Huge
$\mathfrak{I} \mathscr{I}$
\end{document}
http://www.r-bloggers.com/search/ggplot2/page/140/ | # 1965 search results for "ggplot2"
## Benford’s Law after converting count data to be in base 5
March 8, 2012
Firstly, I know nothing about election fraud – this isn’t a serious post. But, I do like to do some simple coding. Ben Goldacre posted on using Benford’s Law to look for evidence of Russian election fraud. Then Richie Cotton did the same, but using R. Commenters on both sites suggested that as the data
## A plot of my citations in Google Scholar vs. Web of Science
March 8, 2012
There has been some discussion about whether Google Scholar or one of the proprietary software companies numbers are better for citation counts. I personally think Google Scholar is better for a number of reasons: Higher numbers, but consistently/a...
## How Not To Draw a Probability Distribution
March 7, 2012
If I google for “probability distribution” I find the following extremely bad picture: It’s bad because it conflates ideas and oversimplifies how variable probability distributions can generally be. Most distributions are not unimodal. Most dist...
March 7, 2012
I'm on spring break, and yesterday I took some time to check off some items on my to-do list, namely:Start getting acquainted with all the new features of ggplot2 .Get a handle on dealing with geographic data in R.I've done some furtive geographic...
## Setting Up and Customizing R
March 7, 2012
For the longest time I resisted customizing R for my particular environment. My philosophy has been that each R script for each separate analysis I do should be self contained such that I can rerun the script from top to bottom on any machine and get the same results. This being said, I have now
## Why an inverse-Wishart prior may not be such a good idea
March 7, 2012
While playing around with Bayesian methods for random effects models, it occured to me that inverse-Wishart priors can really bite you in the bum. Inverse Wishart-priors are popular priors over covariance functions. People like them priors because they are conjugate to a Gaussian likelihood, i.e, if you have data with each : so that the
March 5, 2012
Description: Yearly average airline fare since 1995. Data: http://www.bts.gov/ Analysis: When taking the average price of domestic airline fares, there is a clear trend indicating that flying is becoming more affordable - when adjusted for infl...
## Benford’s Law and fraud in the Russian election
March 5, 2012
Earlier today Ben Goldacre posted about using Benford’s Law to try and detect fraud in the Russian elections. Read that now, or the rest of this post won’t make sense. This is a loose R translation of Ben’s Stata code. The data is held in a Google doc. While it is possible to directly retrieve
## Boxplots and Day of Week Effects
March 4, 2012
THIS BLOG DOES NOT CONSTITUTE INVESTMENT ADVICE. ACTING ON IT WILL MOST LIKELY BE DETRIMENTAL TO YOUR FINANCIAL HEALTH.After following some R-related quant finance blogs like Timely Portfolio, Systematic Investor or Quantitative tho... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2209067940711975, "perplexity": 2601.7749293960574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462555.21/warc/CC-MAIN-20150226074102-00046-ip-10-28-5-156.ec2.internal.warc.gz"} |
http://experimentationlab.berkeley.edu/syllabus | ## 111-Lab Faculty Instructors
| Faculty Instructor | Course | Office | Phone | Email |
| --- | --- | --- | --- | --- |
| Kam-Biu Luk | Experimentation Lab | 427 LeConte Hall or at LBL 50-5004 | (510) 642-8162 or at LBL 510-486-7054 | [email protected] |
| Dan Stamper-Kurn | Experimentation Lab | 301D LeConte Hall | (510) 642-9618 | [email protected] |
| Joel Fajans | Instrumentation INS | 431 Birge Hall | (510) 984-3601 | [email protected] |
## 111B-Advanced Lab GSI Student Instructors
Graduate Student Instructors — GSI new office: 275 LeConte; Lab phone: 642-1937
• Prat Prabudhya Bhattacharyya (20) ADV [email protected]
• Hsinzon Tsai (10) ADV [email protected]
• Andrew Aikawa (10) ADV [email protected]
• Antony Lee (20) ADV [email protected]
## 111 Lab Staff Research Engineer 3 and 111-Lab Manager
Donald Orlando 282E LeConte Hall Phone: 642-5328 (with Voice Mail) Email: [email protected]
## Unit credit
• Three (3.0) units of Advanced Lab are required for the Physics major.
• You can also enroll for additional semesters at 1.5 to 3.0 units each (see a faculty or Don Orlando).
## Course Lab Information
• Lab location: 286 LeConte Hall
• Lab hours: Mondays 12-4pm and Tuesday-Friday 1-5pm
• Lab phone: 624-1937 (No answering machine)
• The Physics Department Colloquium is on Mondays from 4:15- 5:15pm in 1 LeConte Hall; all students are strongly encouraged to attend. Also, tea and cookies are served (for a small fee) in 375 LeConte Hall at 4pm every day.
1. All course materials are available from the Experimentation Lab Site or the Instrumentation Lab.
2. Optical Pumping is a required experiment for all students.
4. Please purchase your own USB Drive for file storage before coming to class.
5. For Lost & Found, see Donald Orlando [email protected]
6. Read Physics Campus Computer Policy
## Texts: Required/Recommended
These texts are good references and available on reserve in the Physics 111 Library Site on campus. Please note that you can access the texts only via the campus-network. To set up access from outside the campus see http://www.lib.berkeley.edu/Help/proxy.html.
• Links to all available Physics 111-Lab videos.
• Note that all videos can be viewed online. Note: the safety videos have paperwork attached that you must sign and turn in to the 111 staff.
• References located on the [Physics 111 Library Site]
• You must present one oral report on an experiment you have completed! So plan which experiment you will use for your oral report. All other reports will be in written form.
• Note there is NO eating or drinking in the 111-Lab anywhere except at the bench with the BLUE stripe around it in rooms 282 & 286 LeConte.
Remember you need to complete one (1) oral presentation and turn in three (3) written reports per semester and the error analysis exercise to pass this course.
NOTE: You can do OPT and MOT or MNO in the same time slot, but you must sign up for and complete Optical Pumping first and then complete Atom Trapping or Magneto Optical.
| Experiment Name | First Lab | Abbreviation | Days Allotted | Sign up Consecutive Days needed |
| --- | --- | --- | --- | --- |
| Atomic Force Microscope (AFM) | X | AFM | 8 | Yes |
| Atomic Physics (Balmer Series & Zeeman Effect) | X | ATM | 6 | Yes |
| Atom Trapping with Rubidium (See Note Above) λ* | No | MOT | 9 | Yes |
| Beta–Ray Spectroscopy | X | BRA | 6 | Not Available |
| Brownian Motion in Cells | X | BMC | 8 | Yes |
| CO2 Laser λ | X | CO2 | 8 | Yes |
| Compton Scattering | X | COM | 7 | Yes |
| Gamma–Ray Spectroscopy | X | GMA | 6 | Yes |
| Hall Effect in a Plasma | X | HAL | 6 | Yes |
| Hall Effect in a Semiconductor | X | SHE | 5 | Yes |
| Josephson Junction | X | JOS | 8 | Yes |
| Low Light Signal Measurements | X | LLS | 8 | Yes |
| Magneto Optical and Non Linear Laser (See Note Above) λ* | No | MNO | 9 | Not Available |
| Muon Lifetime | X | MUO | 6 | Yes |
| Nonlinear Dynamics & Chaos $ | X | NLD | 7 | Yes |
| Nuclear Magnetic Resonance & Pulsed NMR | X | NMR | 9 | Yes |
| Optical Trapping (Laser Tweezers) λ* | X | OTZ | 8 | Yes |
| Optical Pumping – REQUIRED (Two Setups available) | X | OPT | 3 | Yes |
| Quantum Interference and Entanglement | X | QIE | 6 | Yes |
| Rutherford Scattering | X | RUT | 6 | Yes |

$ = You will do some LabView programming; λ = Lasers used
## Advanced Lab Report Due Dates
Report Due Dates Fall 2017
0. Error Analysis Exercise – EAX: 11:00 pm, September 1, online submission
1. Oral* or written 1st Lab: 11:00 pm, September 25, online submission for written reports
2. Oral* or written 2nd Lab: 11:00 pm, October 16, online submission for written reports
3. Oral* or Written 3rd Lab: 11:00 pm, November 13, online submission for written reports
4. Written 4th Lab: 8:00 am, December 11th, Note Change of online submission
*You must sign up for one oral report before the due date. Your oral report can be on the first, second or third lab . All other reports are written.
NOTE: Reports must be in PDF format and submitted online through Bcourses (make sure all graphics are readable and the file size is small enough).
YOU MUST TURN IN ALL REQUIRED WORK TO PASS THIS COURSE.
LATENESS POLICY: NO REPORT PAST THE DATE AND TIME OF THE LAST DUE DATE WILL BE ACCEPTED.
Radiation rings and locker keys are all due by the final day of the lab class. Turn them in to the staff in room 286 LeConte Hall.
## Course Material Fee
To offset the cost of expendable items in the Physics 111B-Lab course, students of the Physics 111B-Lab must pay a Course Materials Fee (CMF) of \$165.00. The CMF is a kind of fee approved under the authorities contained in the policies of the Office of the President (October 2014) and the Berkeley Campus (October 2009). The fees are assessed after the fifth week of classes in Fall and Spring, and will be included in the students' CARS (Campus Accounts Receivables System) statements.
## General
The goal of the advanced lab is to become familiar with experimental physics research. It is a test run as an experimental physicist, with all the responsibilities that entails. This includes learning how to conduct meaningful experiments, mastering important experimental instrumentation and methods, analyzing data, drawing meaningful conclusions from them and presenting your results in a succinct manner. For this, you will conduct four experiments chosen from the roughly 20 offered in the lab and complete one error-analysis exercise. Every student must complete the Optical Pumping experiment.
Important note: if you repeat this class (including dropping it), you must talk to an instructor about which experiments you should choose. Repeating experiments you have done in a previous semester is not allowed.
Note that there is NO eating or drinking in the 111-Lab anywhere, except in rooms 282 & 286 LeConte on the benches with the BLUE Stripe around it. Thank You from the Staff.
## Learning Goals
• Learn how to think as an experimentalist
• What am I trying to find out? Why?
• What does my experiment do? What piece of equipment does what? How?
• How precisely/accurately do I need to know something in my experiment?
• What aspect of my experiment and of the background physics is central, and what is peripheral?
• How trustworthy is my measured result?
• What might have caused statistical errors in my measurement? What might have caused systematic errors in my measurement? How can I honestly and convincingly estimate the effect of those errors?
• Learn how to communicate science
## Remarks
There will be a lot of hard work and frustration, but it is a very rewarding experience, and worth the effort. Often there is no satisfactory solution to a particular problem. Thus you will not be penalized for not getting the correct answer, rather your grade will depend on how systematically you approach the tasks and solve the inevitable problems. The lab is equally challenging to the teaching staff who may not be familiar with all the experiments. Note that the goal of this course is not to teach you the right answer but to instruct you how you can figure out the answers. We are here to help and to guide you in this process. We will teach you problem-solving strategies, for instance, by asking questions rather than giving you the answer you might actually seek.
What you do in this course
Complete 5 assignments (1 EAX exercise, 3 written reports, 1 oral report) to receive a grade in this course.
• EAX: Error analysis exercise
• Look for assignment and background materials online
• Complete problem set
• Read lab manual and background materials, watch videos, go over pre-lab questions before coming to class, look at the apparatus before your first assigned day.
• Present the pre-lab with a GSI or professor before your first day (and sometimes a mid-lab check as well).
• Written reports must be submitted on the due date. Oral report is given to a faculty at your signed-up time slot after the due date.
## Choosing Experiments
There are about 20 experiments available for this semester, covering a wide range of fields in Physics, such as atomic physics, condensed-matter physics, optics, nuclear and particle physics. Each experiment has instructions accessible via the navigation bar at the top of the web site but refrain from using them as a recipe. You will be much better off by understanding what you are doing rather than following instructions. You must do four experiments and the exercise on error analysis (the latter in the first week of class) to complete the course requirement. One of the four experiments must be Optical Pumping; The other experiments are divided into two groups based on their overall effort. Please note also that we will take the level of difficulty of the individual experiment into account when we grade; in particular, we expect you to go into much more detail for the "easy" experiments.
You can sign up for the first experiment on the first day of class. For the following labs, we will announce a day from which you can sign up for the next experiment. Sign-up will be by groups, as determined on the first day of class. The order of the groups will rotate so that every group has a good chance to choose its favorite experiment at least once. The first group can start signing up at the beginning of the lab (noon on Mondays, 1pm on other days), the 2nd group 1h later, the 3rd group 2h later, and the 4th group in the last hour of the lab.
Fall 2017, we have the following sign-up days and orders:
lab #1: August 23, group B,C,D,A
lab #2: September 18, group C,D,A,B
lab #3: October 9, group D,A,B,C
lab #4: November 6, group A,B,C,D
## Organization
• You perform experiments with a lab partner. You work together on the experiment. However, the main part of your data analysis and all of your written/oral report must be your own work.
• You must sign up to do an experiment in one of the assigned slots. You cannot split your time between multiple slots. The sign-up list is in LeConte 286 next to the door.
• You sign up for Lab 1 on the first day. You sign up for later labs on later days (it will be announced).
• Some of you will have to start Lab N before turning in the report for Lab N-1.
• If you turn in EAX or a lab report after its due date, you will incur a late penalty.
## Reasons for failing this course
• The main reason for failing this course is poor time management, resulting in not turning in all reports and/or accumulating too many late penalties. A late report will likely mean that the next report will also be late, as you will be busy writing the already-late report. So please avoid turning in reports late; it just does not pay off.
## Preparation for each experiment
• Prepare to do the pre-lab at least one day prior to starting your experiment. Otherwise you might lose valuable experimenting time.
• Download the write-up of the experiment from its web page.
• Read some of the references and the write-up.
• Watch the appropriate videos for the experiment and any lecture series or safety videos that are available. Attention: some of the videos may be out of date and the apparatus and the procedure may have changed. However, the key is to learn the concepts and experimentation.
• For some experiments, either a laser or radiation safety training is required. Take the required training course and the quiz.
## In the Laboratory
For successful experimentation, you must have a good understanding of the underlying physics. The pre-lab questions are there to guide you towards the important concepts, and we require you to go over them with the teaching staff before starting to do the experiment. You do not need to turn in written answers to the pre-lab questions, but you must demonstrate a sufficient understanding of the physics related to the questions; otherwise you will not be allowed to start the experiment.
Before experimenting:
• get familiar with the apparatus.
• ask yourself questions before taking any action.
• be patient and careful. Safety is essential.
• DO NOT ABUSE any piece of equipment.
• if there is any issue, talk to the teaching staff or Don.
• you must be proactive when working together.
• Plan ahead: take some measurements and quickly analyze the data to get some idea whether you are heading in the right direction, then take more data
## Reports
You must complete one oral report per semester on an experiment (see the Oral Report Guidelines and view the How to do an Oral Report video). All other reports are in written format (see Written Report Guidelines).
All written reports must be submitted online through Bcourses.
Oral report (20 minutes presentation + ~10 minutes questions)
• You must give only one oral report for one of the first three labs.
• You must sign up for which lab you will do the oral report during the first week(s) of the semester.
• Watch the video on how to prepare an oral report.
• Hand in the signed pre-lab page and show the faculty instructor copies of your data and analysis.
• You can either use the white board or any kind of software tool for presentation.
• 10 points will be deducted for every 10 minutes late for giving your oral report.
• While we expect a coherent and well-structured presentation, be prepared that questions will be discussed as they come up during your presentation.
Written report (less than 15 double-spaced pages): Note All reports must be submitted online through Bcourses.
Some advice on writing the report:
• Latex is a free powerful word processor that is popular among the physicists and mathematicians. Thus, we encourage you to use it to write your report, but we accept any reasonable format.
• We encourage you to analyze your data with MATLAB which is available in the lab; otherwise consider Octave or SciPy as an alternative to MATLAB. Excel for performing fits is strongly discouraged.
• You don't need to provide long derivations.
• You should cite references in the text. For example, to cite a paper: J. Last, Phys. Rev. Lett. volume number, page number (2013); to cite a book: J. Last, title of the book, page, publisher (2012).
• You should only provide relevant information: think about what a student in your position needs to know to understand what you did.
If you encounter difficulties with the analysis or physics, do not hesitate to contact the staff in the lab; we are there to help you.
## Safety
Use common sense and think before acting.
1. No food or drink is allowed in the lab except for the specified area marked with blue tapes.
2. Some experiments that use radiation or lasers will require safety training.
3. View the Radiation Safety Video on YouTube. Then get a pink Radiation Safety form from a 111-Lab staff person. Fill it out & sign the form for getting a Radiation Ring. Also, complete the Radiation Safety Training. After completion of the training, turn in all forms to Don Orlando or a teaching staff.
4. View the Laser Safety video here.
5. Complete the read and sign laser training Laser Safety Training
Your final semester grade will be determined from the total points you receive for the reports where we will take the difficulty of each experiment into account. Each of the four lab-reports is graded on a 0 to 100 point basis, while for the error analysis report you can receive up to 50 points. There are many factors that go into determining the grade that a report receives, but we offer the following rough grading guidelines, where >50% is considered a passing grade:
• Excellent (80% - 100%): Student completed most parts of the experiment, and report demonstrates a clear understanding of each part and the overall picture. The report is easy to follow (would be clear to another student), and is complete without being padded. Report contains complete error analysis, and contains no or few mistakes.
• Average (60% - 80%): Student completed most parts of the experiment, and report demonstrates a general understanding although student may appear confused over some points. Analysis is difficult to follow, and conclusions drawn from the data are not clearly stated.
• Poor (40% - 60%): Student completed major parts of the experiment, but fails to draw conclusions from the data. Report is difficult to follow, and contains many errors.
• Insufficient (0% - 40%): Student fails to demonstrate an understanding of what the experiment is about and/or major parts of the report are missing.
You must have given an oral report. Remember that students who are missing work will be assigned a grade of "F" for the semester, and that no report will be accepted after the last due date.
Note also that you must have turned in all four reports and the error analysis report by the deadline of the last report.
Pick up your report in the lab. We try to return your graded report in a timely fashion, i.e. in two weeks. For feed-back on presentation style, we encourage you to go through the report together with the GSI/faculty who graded it.
## Lateness
10 points will be deducted for every 10 minutes late in giving your oral report at your sign-up time. All written reports are due by 11:00 pm on the due date, except the last one. Ten (10) points will be deducted for each started week past the due date. No report will be accepted past 08:00 am on the due date of the last lab report, no exceptions. Getting a late start on your report is no excuse for turning in the report late.
## Plagiarism
Both the University and the 111 Lab staff take the subject of plagiarism very seriously. Please make sure you understand completely the following and ask questions if ever in doubt: "All data that you present in your reports must be your own. All written work that you submit, except for acknowledged quotations, is to be in your own words. Work copied from a book, webpages (including the experimental instructions), from another student's report, or from any other source without proper citation will, under University rules, earn the student a grade of 'F' for the semester, and possible disciplinary action by the Student Conduct Committee." Note that a proper citation requires that you mark clearly which text/illustration has been copied as well as its source. This is most easily done by adding a note of the form "Illustration taken from Ref. [<number>]" below the illustration, indicating which reference this excerpt belongs to. In case you quote a text, put the quoted text in quotation marks and add the reference number after the text.
## Working Together
You will probably take your data with a partner, and may work together on analyzing these data. But each person must write his or her own report and submit it to 111-Lab Staff for grading. The text of your report, graphs, figures, and derivations of equations must be your own. (This includes graphs generated using standard software: you must each make your own). Please be sure to acknowledge any sources that you use in your reports, and be careful not to copy another's work.
## End of the semester
All materials and reports are due by the last due day, no exceptions. Any graded Lab Reports not picked up by the first week of the subsequent semester will be thrown away. Please make sure you return your radiation ring if you use one. Please let us know how we can make improvements by completing the course evaluation at the link
Thank you. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.350394070148468, "perplexity": 2446.7262536550907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.92/warc/CC-MAIN-20170823035753-20170823055753-00061.warc.gz"} |
https://www.physicsforums.com/threads/circular-motion-what-is-the-radius-of-the-loop-de-loop-in-meters.603456/ | # Circular motion-what is the radius of the loop de loop in meters
1. May 5, 2012
### dani123
1. The problem statement, all variables and given/known data
Snoopy is flying his vintage warplane in a "loop de loop" path being chased by the Red Baron. His instruments tell him the plane is level (at the bottom of the loop) and travelling at 180 km/hr. He is sitting on a set of bathroom scales that tell him he weighs four times what he normally does. What is the radius of the loop in meters?
2. Relevant equations
Fc = mv²/R
ac = v²/R
3. The attempt at a solution
I am completely lost with this problem but this is what I attempted...
I started by converting 180km/hr into m/s and came up with 648 000 000m/s.
Then I used this value and plugged it into the centripetal acceleration equation and used ac = 9.8 m/s² and then solved for R = 4.28×10¹⁶ m
I know this must be the wrong answer but I am very lost as to what I should do or even how I should be looking at this problem... If anyone could help me out that would be greatly appreciated! Thank you so much in advance!
2. May 5, 2012
### tiny-tim
hi dani123!
:rofl: :rofl: :rofl:
hint: what is the equation for the reaction force between snoopy and the scales?
3. May 7, 2012
### dani123
im confused as to how thats gonna help me find the radius..
4. May 7, 2012
### tiny-tim
but the only information you have is the magnitude of that reaction force
5. May 7, 2012
### dani123
so do i just convert 180km into meters?
6. May 7, 2012
### dani123
nvm! scratch that last post ...
7. May 7, 2012
### tiny-tim
yes, and hours into seconds
8. May 7, 2012
### dani123
i tend to over think the problems that have the simplest answers! haha
9. May 9, 2012
### dani123
after i converted it into m/s... thats the answer? really?
10. May 9, 2012
### Staff: Mentor
Consider what his effective acceleration must be if he weighs 4x normal weight (when the usual acceleration due to gravity is just g). What additional acceleration is operating when he moves in circular motion?
11. May 28, 2012
### dani123
im lost
12. May 28, 2012
### tiny-tim
what is the equation for the reaction force between snoopy and the scales?
13. May 28, 2012
### dani123
i dont know anymore,, im really confused
14. May 28, 2012
### truesearch
edit...delete
15. May 28, 2012
### Staff: Mentor
If the radius were infinite, there would be a 1g upward force exerted by the scale on him. How many additional g's of force are required to keep him moving in a circle? That should be v²/R.
Chet
16. May 28, 2012
### tiny-tim
well, how many forces are there, and what is the acceleration?
17. May 28, 2012
### Staff: Mentor
The net force is 3g's.
18. May 28, 2012
### dani123
how did you come up with these g's ?
19. May 28, 2012
### azizlwl
What is the radius of the loop in meters?
You draw a free body diagram. From this you can calculate the radius needed.
There are 3 forces exerted on the man.
1. Gravitational force.
2. Centripetal force.
3. Normal force. This force shown by the scale.
20. May 29, 2012
### vela
Staff Emeritus
There are only two forces on Snoopy: the gravitational force and normal force. The net force on Snoopy results in his centripetal acceleration.
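(Not part of the original thread — a worked sketch of where these hints lead, using 180 km/hr = 50 m/s: $N - mg = \frac{mv^2}{R}$, and the scale reading $N = 4mg$ gives $3g = \frac{v^2}{R}$, so $R = \frac{v^2}{3g} = \frac{(50\,\mathrm{m/s})^2}{3 \times 9.8\,\mathrm{m/s^2}} \approx 85\,\mathrm{m}$.)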
https://math.stackexchange.com/questions/2923156/understanding-a-proof-that-if-n-geq-3-then-a-n-is-generated-by-all-the-3 | # Understanding a proof that if $n \geq 3$, then $A_n$ is generated by all the $3$-cycles
The following proof (up until the point I got stuck) was given in my class notes
Proof: Let $S$ denote the set of elements in $S_n$ which are the product of two transpositions. Then $\langle S \rangle \subseteq A_n$. Now if $\alpha \in A_n$ then $\alpha = \tau_1 \tau_2 \dots \tau_k$ where the $\tau_i$'s are transpositions and $k$ is even. Now $\alpha = (\tau_1\tau_2)(\tau_3\tau_4) \dots (\tau_{k-1}\tau_k)$ implies that $\alpha \in \langle S \rangle$. Hence $A_n = \langle S \rangle$.
The last line is the part of this proof that I don't understand. For $\alpha$ to be an element in $\langle S \rangle$ we need to show that $\alpha$ is the product of two transpositions, however in the above I don't see how $\alpha$ is the product of two transpositions at all. We'd need to show that $\alpha = f g$ where $f, g \in S_n$ are both $2$-cycles and I don't see how the last line implies that.
Careful here: you need to show that $\alpha\in\left<S\right>$, not that $\alpha\in S$.
$\left<S\right>$ is the subgroup generated by $S$, so it suffices to show that $\alpha$ is a product of elements of $S$. Thus, $$\alpha = \tau_1\tau_2\cdots\tau_k = (\tau_1\tau_2)(\tau_3\tau_4)\cdots(\tau_{k-1}\tau_k)$$ is in $\left<S\right>$ because each of $\tau_1\tau_2$ and $\tau_3\tau_4$, etc. is an element of $S$.
$\langle S \rangle$ is not $S$ itself, it is the group that is generated by the elements of $S$. Every element in $\langle S \rangle$ is a product of elements of $S$. (by definition should be product of elements of $S$ and their inverses but here the inverse of an element of $S$ here is also in $S$ so it doesn't matter). Now, $\tau_1\tau_2, \tau_3\tau_4,\dots,\tau_{k-1}\tau_k$ are all elements in $S$ and $\alpha$ is their product. Hence $\alpha\in\langle S \rangle$.
The notation $\langle S \rangle$ denotes the group generated by the set $S$ which is a subset of some group $G$. This means the proof has achieved it's objective: it's shown that $\alpha$ is a product of elements in $S$, ie elements that are products of two transpositions.
By the way, if you use a proof of the simplicity of $A_n$ that does not depend on $A_n$ being generated by $3$-cycles (and the easiest proof, for example, does not depend on it...get simplicity of $A_5$ from the sizes of its conjugacy classes and for larger $n$ use induction and easy standard results on multiply-transitive permutation groups), then you have a simple way to show that $A_n$ is generated by $3$-cycles, for $n \ge 5$. Since the $3$-cycles form a conjugacy class (you don't even need this....just that they are a union of conjugacy classes), they generate a normal subgroup. Since $A_n$ is simple, that subgroup must be $A_n$.
For $n=3$ or $4$, the proof is also easy: The number of $3$-cycles exceeds $|A_n|/2$ in those cases. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635800123214722, "perplexity": 63.31301447168421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258147.95/warc/CC-MAIN-20190525164959-20190525190959-00372.warc.gz"} |
https://www.lessonplanet.com/teachers/compare-fractions-with-decimals | # Compare Fractions With Decimals
In this comparing fractions learning exercise, students solve ten problems in which fractions and decimal numbers are compared with the <, >, or = signs. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9933909177780151, "perplexity": 3892.615708095163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822822.66/warc/CC-MAIN-20171018070528-20171018090528-00532.warc.gz"} |
https://derek.ma/pubs/ | # Publications
### New! Dual Memory Network Model for Sentiment Analysis of Review Text
In sentiment analysis of product reviews, both user and product information are proven to be useful. Current works handle user profile …
### Implicit Discourse Relation Identification for Open-domain Dialogues
Discourse relation identification has been an active area of research for many years, and the challenge of identifying implicit …
### Dual Memory Network Model for Biased Product Review Classification
In sentiment analysis (SA) of product reviews, both user and product information are proven to be useful. Current tasks handle user …
### BlocHIE: a BLOCkchain-based platform for Healthcare Information Exchange
In this paper, we propose BlocHIE, a Blockchain-based platform for healthcare information exchange. First, we analyze the different …
### A Sentiment Analysis Method to Better Utilize User Profile and Product Information
Sentiment analysis which aims to predict users’ opinions is a huge need for many industrial services. In recent years, many methods … | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1547652781009674, "perplexity": 4670.483436221538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573465.18/warc/CC-MAIN-20190919081032-20190919103032-00325.warc.gz"} |
https://pypi.org/project/oimdp/ | OpenITI mARkdown Parser
# oimdp: OpenITI mARkdown Parser
This Python library will parse an OpenITI mARkdown document and return a python class representation of the document structures.
## Usage
import oimdp
md_file = open("mARkdownfile", "r")
text = md_file.read()
md_file.close()
parsed = oimdp.parse(text)
## Parsed structure
Please see the docs, but here are some highlights:
### Document API
content: a list of content structures
get_clean_text(): get the text stripped of markup
### Content structures
Content classes contain an original value from the document and some extracted content such as a text string or a specific value.
Most other structures are listed in sequence (e.g. a Paragraph is followed by a Line).
Line objects and other line-level structures are divided into PhrasePart objects.
PhrasePart objects are phrase-level tags.
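A rough sketch of how a parsed document might be inspected (the file name is a placeholder and the use of type() for display is illustrative, not part of the library's API):

import oimdp

with open("mARkdownfile", "r") as md_file:
    parsed = oimdp.parse(md_file.read())

print(parsed.get_clean_text())      # text with the mARkdown markup stripped
for structure in parsed.content:    # content structures in document order
    print(type(structure).__name__)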
## Develop
Set up a virtual environment with venv
python3 -m venv .env
Activate the virtual environment
source .env/bin/activate
Install
python setup.py install
## Tests
With the environment activated:
python tests/test.py
Uploaded 1 2 0 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4419093132019043, "perplexity": 24857.974331856294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710777.20/warc/CC-MAIN-20221130225142-20221201015142-00093.warc.gz"} |
http://www.physicspages.com/tag/equation-of-continuity/ | Klein-Gordon equation: probability density and current
Reference: Robert D. Klauber, Student Friendly Quantum Field Theory, (Sandtrove Press, 2013) – Chapter 3, Problem 3.3.
In non-relativistic quantum mechanics governed by the Schrödinger equation, the probability density is given by
$\displaystyle \rho=\Psi^{\dagger}\Psi \ \ \ \ \ (1)$
and the probability current is given by (generalizing our earlier result to 3-d and using natural units):
$\displaystyle \mathbf{J}=\frac{i}{2m}\left(\Psi\nabla\Psi^{\dagger}-\Psi^{\dagger}\nabla\Psi\right) \ \ \ \ \ (2)$
The continuity equation for probability is then
$\displaystyle \frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{J}=0 \ \ \ \ \ (3)$
We’ll now look at how these results appear in relativistic quantum mechanics, using the Klein-Gordon equation:
$\displaystyle \frac{\partial^{2}\phi}{\partial t^{2}}=\left(\nabla^{2}-\mu^{2}\right)\phi \ \ \ \ \ (4)$
We can multiply this equation by ${\phi^{\dagger}}$ and then subtract the hermitian conjugate of the result to get
$\displaystyle \phi^{\dagger}\frac{\partial^{2}\phi}{\partial t^{2}}-\phi\frac{\partial^{2}\phi^{\dagger}}{\partial t^{2}}=\phi^{\dagger}\left(\nabla^{2}-\mu^{2}\right)\phi-\phi\left(\nabla^{2}-\mu^{2}\right)\phi^{\dagger}\ \ \ \ \ (5)$
$\displaystyle =\phi^{\dagger}\nabla^{2}\phi-\phi\nabla^{2}\phi^{\dagger} \ \ \ \ \ (6)$
The LHS can be written as
$\displaystyle \phi^{\dagger}\frac{\partial^{2}\phi}{\partial t^{2}}-\phi\frac{\partial^{2}\phi^{\dagger}}{\partial t^{2}}=\frac{\partial}{\partial t}\left(\phi^{\dagger}\frac{\partial\phi}{\partial t}-\phi\frac{\partial\phi^{\dagger}}{\partial t}\right) \ \ \ \ \ (7)$
(use the product rule on the RHS and cancel terms).
The RHS of 6 can be written as (use the product rule again):
$\displaystyle \phi^{\dagger}\nabla^{2}\phi-\phi\nabla^{2}\phi^{\dagger}=\nabla\cdot\left(\phi^{\dagger}\nabla\phi-\phi\nabla\phi^{\dagger}\right) \ \ \ \ \ (8)$
We can write this as a continuity equation for the Klein-Gordon equation, with the following definitions:
$\displaystyle \rho\equiv i\left(\phi^{\dagger}\frac{\partial\phi}{\partial t}-\phi\frac{\partial\phi^{\dagger}}{\partial t}\right)\ \ \ \ \ (9)$
$\displaystyle \mathbf{j}=-i\left(\phi^{\dagger}\nabla\phi-\phi\nabla\phi^{\dagger}\right) \ \ \ \ \ (10)$
[The extra ${i}$ is introduced to make ${\rho}$ and ${\mathbf{j}}$ real. Note that the factor within the parentheses in both expressions is a complex quantity minus its complex conjugate, which always gives a pure imaginary term. Thus multiplying by ${i}$ ensures the result is real.]
Then
$\displaystyle \frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{j}=0 \ \ \ \ \ (11)$
We can put this in 4-vector form if we use (for some 3-vector ${\mathbf{A}}$):
$\displaystyle \nabla\cdot\mathbf{A}=-\partial^{i}a_{i} \ \ \ \ \ (12)$
where the implied sum over ${i}$ is from ${i=1}$ to ${i=3}$ (spatial coordinates), and the minus sign appears because we’ve raised the index on ${\partial^{i}}$. If we define
$\displaystyle j_{i}=i\left(\phi^{\dagger}\partial_{i}\phi-\phi\partial_{i}\phi^{\dagger}\right) \ \ \ \ \ (13)$
(that is, the negative of 10), then ${\nabla\cdot\mathbf{j}=\partial^{i}j_{i}}$. To make ${j_{\mu}}$ into a 4-vector, we add ${j_{0}=\rho}$ and we get
$\displaystyle \frac{\partial j_{0}}{\partial t}+\partial^{i}j_{i}=\partial^{\mu}j_{\mu}=0 \ \ \ \ \ (14)$
[Note that my definition of ${j_{i}}$ is the negative of the middle term in Klauber’s equation 3-21, although raising the ${i}$ index agrees with the last term in 3-21. I can’t see how his middle and last equations for ${j_{i}}$ and ${j^{i}}$ can both be right, since raising the ${i}$ in the middle equation for ${j_{i}}$ merely raises the ${\phi_{,i}}$ to ${\phi^{,i}}$ without changing the sign.]
The curious thing about the Klein-Gordon equation is that its probability density ${\rho}$ in 9 need not be positive, depending on the values of ${\phi}$ and its time derivative. To see how this can affect the physical meaning of the equation, consider the general plane wave solution to the Klein-Gordon equation
$\displaystyle \phi=\sum_{\mathbf{k}}\frac{1}{\sqrt{2V\omega_{\mathbf{k}}}}\left(A_{\mathbf{k}}e^{-ikx}+B_{\mathbf{k}}^{\dagger}e^{ikx}\right) \ \ \ \ \ (15)$
Klauber explores this starting with his equation 3-24, where he takes a test solution in which all ${B_{\mathbf{k}}^{\dagger}=0}$ and shows that ${\int\rho\;d^{3}x=\sum_{\mathbf{k}}\left|A_{\mathbf{k}}\right|^{2}=1}$ so that in this case, the total probability of finding the system in some state is +1 as it should be. Let’s see what happens if we take all ${A_{\mathbf{k}}=0}$. In that case, 9 becomes
$\displaystyle \rho=i\left[\sum_{\mathbf{k}}\frac{B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]\left[\sum_{\mathbf{k}^{\prime}}\frac{i\omega_{\mathbf{k}^{\prime}}B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]-i\left[\sum_{\mathbf{k}^{\prime}}\frac{B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]\left[\sum_{\mathbf{k}}\frac{-i\omega_{\mathbf{k}}B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]\ \ \ \ \ (16)$
$\displaystyle =-\left[\sum_{\mathbf{k}}\frac{B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]\left[\sum_{\mathbf{k}^{\prime}}\frac{\omega_{\mathbf{k}^{\prime}}B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]-{}\ \ \ \ \ (17)$
$\displaystyle \left[\sum_{\mathbf{k}^{\prime}}\frac{B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]\left[\sum_{\mathbf{k}}\frac{\omega_{\mathbf{k}}B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right] \ \ \ \ \ (18)$
We now wish to calculate ${\int\rho\;d^{3}x}$. We can use the orthonormality of solutions to do the integral. We have
$\displaystyle \frac{1}{V}\int e^{i\left(k^{\prime}-k\right)x}d^{3}x=\delta_{\mathbf{k},\mathbf{k}^{\prime}} \ \ \ \ \ (19)$
We get
$\displaystyle -\int\left[\sum_{\mathbf{k}}\frac{B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]\left[\sum_{\mathbf{k}^{\prime}}\frac{\omega_{\mathbf{k}^{\prime}}B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]d^{3}x=-\sum_{\mathbf{k}}\frac{\left|B_{\mathbf{k}}\right|^{2}}{2}\ \ \ \ \ (20)$
$\displaystyle -\int\left[\sum_{\mathbf{k}^{\prime}}\frac{B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]\left[\sum_{\mathbf{k}}\frac{\omega_{\mathbf{k}}B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]d^{3}x=-\sum_{\mathbf{k}}\frac{\left|B_{\mathbf{k}}\right|^{2}}{2}\ \ \ \ \ (21)$
$\displaystyle \int\rho\;d^{3}x=-\sum_{\mathbf{k}}\left|B_{\mathbf{k}}\right|^{2} \ \ \ \ \ (22)$
Thus the total probability of finding the system in one of the states ${\mathbf{k}}$ is negative.
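As a quick independent check of this sign (not part of Klauber's text or the derivation above), the following sympy sketch takes a single B-type plane wave in 1+1 dimensions with unit box volume and confirms that it satisfies 4, that the density 9 evaluates to ${-1}$, and that the continuity equation 11 holds:

import sympy as sp

t, x, k, mu = sp.symbols('t x k mu', real=True)
omega = sp.sqrt(k**2 + mu**2)                              # on-shell frequency
phi = sp.exp(sp.I*(omega*t - k*x)) / sp.sqrt(2*omega)      # single B-type mode, V = 1
phibar = sp.exp(-sp.I*(omega*t - k*x)) / sp.sqrt(2*omega)  # its complex conjugate

kg = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + mu**2*phi
rho = sp.I*(phibar*sp.diff(phi, t) - phi*sp.diff(phibar, t))
jx = -sp.I*(phibar*sp.diff(phi, x) - phi*sp.diff(phibar, x))

print(sp.simplify(kg))                                     # 0: the Klein-Gordon equation holds
print(sp.simplify(rho))                                    # -1: the density is negative
print(sp.simplify(sp.diff(rho, t) + sp.diff(jx, x)))       # 0: the continuity equation holds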
Stress-energy tensor: conservation equations
Reference: Moore, Thomas A., A General Relativity Workbook, University Science Books (2013) – Chapter 20; Box 20.5.
We can express conservation of energy and momentum in terms of the stress-energy tensor. Recall that the physical meaning of the component ${T^{tt}}$ is the energy density.
To get the conservation laws, consider a small box with dimensions ${dx}$, ${dy}$ and ${dz}$, and restrict our attention to the case of ‘dust’, that is, a fluid containing particles that are all at rest relative to each other. In that case, the tensor has the form
$\displaystyle T^{ij}=\rho_{0}u^{i}u^{j} \ \ \ \ \ (1)$
where ${\rho_{0}}$ is the energy density of the dust in its own rest frame and ${u^{i}}$ is the four-velocity of the fluid as measured in some observer’s local inertial frame. Then the flow of energy into the box over, say, the left-hand face perpendicular to the ${x}$ axis at position ${x}$ in time ${dt}$ is the energy density multiplied by the velocity component in the ${x}$ direction ${v^{x}}$.
$\displaystyle dE_{x}=\left(T_{x}^{tt}v^{x}dt\right)dydz \ \ \ \ \ (2)$
We’ve multiplied by ${dydz}$ since this is the area of the face of the box through which the energy is flowing, and thus the total flow of energy is the density multiplied by the volume that crosses the box’s face, which is ${v^{x}dtdydz}$. The subscript ${x}$ on ${T_{x}^{tt}}$ means that the tensor is evaluated at position ${x}$. We have
$\displaystyle T^{tt}=\rho_{0}u^{t}u^{t} \ \ \ \ \ (3)$
and ${u^{t}v^{x}=\gamma v^{x}=u^{x}}$, so ${T^{tt}v^{x}=T^{tx}}$ using 1, and thus
$\displaystyle dE_{x}=\left(T_{x}^{tx}dt\right)dydz \ \ \ \ \ (4)$
Similarly, the energy flowing across the face at position ${x+dx}$ is then
$\displaystyle dE_{x+dx}=\left(T_{x+dx}^{tx}dt\right)dydz \ \ \ \ \ (5)$
Taking the difference of these two equations we get
$\displaystyle \left(T_{x+dx}^{tx}-T_{x}^{tx}\right)dtdydz=\partial_{x}T^{tx}dxdtdydz \ \ \ \ \ (6)$
We can write similar equations for the ${y}$ and ${z}$ directions:
$\displaystyle \left(T_{y+dy}^{ty}-T_{y}^{ty}\right)dtdxdz=\partial_{y}T^{ty}dxdtdydz\ \ \ \ \ (7)$
$\displaystyle \left(T_{z+dz}^{tz}-T_{z}^{tz}\right)dtdydx=\partial_{z}T^{tz}dxdtdydz \ \ \ \ \ (8)$
Adding these up gives the net total change in energy within the box
$\displaystyle dE$ $\displaystyle =$ $\displaystyle -\left(\partial_{x}T^{tx}+\partial_{y}T^{ty}+\partial_{z}T^{tz}\right)dxdydzdt \ \ \ \ \ (9)$
The minus sign occurs because if, say, ${\partial_{x}T^{tx}<0}$, this indicates that ${T_{x}^{tx}>T_{x+dx}^{tx}}$ so more energy flows in at position ${x}$ than flows out at position ${x+dx}$, resulting in ${dE>0}$.
The net change in energy due to its flow across the boundaries of the box must be reflected in the change of the energy within the box. The energy density is given by ${T^{tt}}$ so we must have
$\displaystyle dE=\left(T_{t+dt}^{tt}-T_{t}^{tt}\right)dxdydz\ \ \ \ \ (10)$
$\displaystyle =\partial_{t}T^{tt}dxdydzdt \ \ \ \ \ (11)$
The energy conservation law is then given by
$\displaystyle \partial_{t}T^{tt}dxdydzdt$ $\displaystyle =$ $\displaystyle -\left(\partial_{x}T^{tx}+\partial_{y}T^{ty}+\partial_{z}T^{tz}\right)dxdydzdt \ \ \ \ \ (12)$
Since this must be true for any choice of differentials, the energy conservation law is expressed in the compact form
$\displaystyle \partial_{j}T^{tj}=0 \ \ \ \ \ (13)$
We can do a similar argument for momentum. The component ${T^{tj}}$ (where ${j}$ is a spatial coordinate) is the density of the ${j}$ component of momentum and the components ${T^{ij}}$ are the rates of flow of the ${j}$ component of momentum in the ${i}$ direction, so the net change in momentum component ${j}$ due to differences in the flow rate at the boundaries of the box is
$\displaystyle dp^{j}=-\left(\partial_{x}T^{xj}+\partial_{y}T^{yj}+\partial_{z}T^{zj}\right)dxdydzdt \ \ \ \ \ (14)$
This must be equal to the net change of ${p^{j}}$ within the box over time ${dt}$, so
$\displaystyle dp^{j}=\partial_{t}T^{tj}dxdydzdt \ \ \ \ \ (15)$
and
$\displaystyle \partial_{i}T^{ij}=0 \ \ \ \ \ (16)$
This is therefore true for all four values of ${j}$ and represents conservation of energy and momentum, or just four-momentum.
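As a small consistency check (not part of Moore's text), the following sympy sketch builds the dust tensor 1 for a flow of constant velocity in 1+1-dimensional flat spacetime, with the rest-frame density an arbitrary profile carried along with the flow, and verifies 16 component by component:

import sympy as sp

t, x, v = sp.symbols('t x v', real=True)
f = sp.Function('f')                 # arbitrary comoving density profile
gamma = 1/sp.sqrt(1 - v**2)          # natural units, c = 1
rho0 = f(x - v*t)                    # dust carried along at constant speed v
u = sp.Matrix([gamma, gamma*v])      # four-velocity (t and x components)
T = rho0 * u * u.T                   # T^{ij} = rho_0 u^i u^j for dust

for j in range(2):                   # partial_t T^{tj} + partial_x T^{xj} for j = t, x
    print(sp.simplify(sp.diff(T[0, j], t) + sp.diff(T[1, j], x)))   # prints 0 twice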
We derived this formula for the special case of a local inertial frame (LIF). We’ve seen that the appropriate generalization of the gradient is the absolute gradient or covariant derivative, so the appropriate tensor equation for conservation of four-momentum is
$\displaystyle \boxed{\nabla_{i}T^{ij}=0} \ \ \ \ \ (17)$
In terms of Christoffel symbols, this is
$\displaystyle \nabla_{i}T^{ij}=\partial_{i}T^{ij}+\Gamma_{ik}^{i}T^{kj}+\Gamma_{ik}^{j}T^{ik}=0 \ \ \ \ \ (18)$
We can apply this equation to the more general case of a perfect fluid in general coordinates, where the tensor is
$\displaystyle T^{ij}=\left(\rho_{0}+P_{0}\right)u^{i}u^{j}+P_{0}g^{ij} \ \ \ \ \ (19)$
We can work out the covariant derivative in a LIF. In a LIF, all Christoffel symbols are zero so we get
$\displaystyle \nabla_{i}T^{ij}=\partial_{i}T^{ij}\ \ \ \ \ (20)$
$\displaystyle 0=u^{i}u^{j}\partial_{i}\left(\rho_{0}+P_{0}\right)+\left(\rho_{0}+P_{0}\right)\left[u^{i}\partial_{i}u^{j}+u^{j}\partial_{i}u^{i}\right]+\eta^{ij}\partial_{i}P_{0} \ \ \ \ \ (21)$
The four-velocity always satisfies the relation ${\mathbf{u}\cdot\mathbf{u}=u_{j}u^{j}=-1}$ so we have
$\displaystyle \partial_{i}\left(\mathbf{u}\cdot\mathbf{u}\right)=\partial_{i}\left(u^{j}u_{j}\right)\ \ \ \ \ (22)$
$\displaystyle =\partial_{i}\left(\eta_{jk}u^{k}u^{j}\right)\ \ \ \ \ (23)$
$\displaystyle =\eta_{jk}\left[u^{k}\partial_{i}u^{j}+u^{j}\partial_{i}u^{k}\right]\ \ \ \ \ (24)$
$\displaystyle =u_{j}\partial_{i}u^{j}+u_{k}\partial_{i}u^{k}\ \ \ \ \ (25)$
$\displaystyle =2u_{j}\partial_{i}u^{j}\ \ \ \ \ (26)$
$\displaystyle =0 \ \ \ \ \ (27)$
We can now multiply 21 by ${u_{j}}$ and use the above result to get
$\displaystyle u^{i}u_{j}u^{j}\partial_{i}\left(\rho_{0}+P_{0}\right)+\left(\rho_{0}+P_{0}\right)\left[u^{i}u_{j}\partial_{i}u^{j}+u_{j}u^{j}\partial_{i}u^{i}\right]+\eta^{ij}u_{j}\partial_{i}P_{0}=-u^{i}\partial_{i}\left(\rho_{0}+P_{0}\right)-\left(\rho_{0}+P_{0}\right)\partial_{i}u^{i}+u^{i}\partial_{i}P_{0}=0 \ \ \ \ \ (28)$
$\displaystyle \left(\rho_{0}+P_{0}\right)\partial_{i}u^{i}+u^{i}\partial_{i}\rho_{0}=0 \ \ \ \ \ (29)$
$\displaystyle \partial_{i}\left(u^{i}\rho_{0}\right)+P_{0}\partial_{i}u^{i}=0 \ \ \ \ \ (30)$
This last equation is known as the equation of continuity. Note that it is valid only in a LIF, since the derivative isn’t covariant.
Now we can multiply 29 by ${u^{j}}$and subtract it from 21:
$\displaystyle u^{i}u^{j}\partial_{i}P_{0}+\left(\rho_{0}+P_{0}\right)u^{i}\partial_{i}u^{j}+\eta^{ij}\partial_{i}P_{0}=0\ \ \ \ \ (31)$
$\displaystyle \left(\rho_{0}+P_{0}\right)u^{i}\partial_{i}u^{j}=-\left(u^{i}u^{j}+\eta^{ij}\right)\partial_{i}P_{0} \ \ \ \ \ (32)$
This is the equation of motion, also valid in a LIF.
In the non-relativistic limit, the density in any LIF will be the same, as will the pressure. Also, ${P_{0}\ll\rho_{0}}$ so we can approximate 30 by
$\displaystyle \partial_{i}\left(u^{i}\rho_{0}\right)\approx0 \ \ \ \ \ (33)$
Using ${\mathbf{u}\approx\left[1,v^{x},v^{y},v^{z}\right]}$ this becomes
$\displaystyle \partial_{t}\rho_{0}=-\vec{\nabla}\cdot\left(\rho_{0}\mathbf{v}\right) \ \ \ \ \ (34)$
where the arrow above ${\vec{\nabla}}$ indicates this is the 3-d gradient, not the covariant derivative. This is the Newtonian equation of continuity for a perfect fluid, which expresses conservation of mass.
We can also approximate 32 by neglecting any products of velocity components, since ${v^{i}v^{j}\ll1}$ if both ${i}$ and ${j}$ are spatial coordinates. The LHS becomes
$\displaystyle \left(\rho_{0}+P_{0}\right)u^{i}\partial_{i}u^{j}\approx\rho_{0}u^{i}\partial_{i}u^{j}\ \ \ \ \ (35)$
$\displaystyle =\rho_{0}\left[\partial_{t}v^{j}+\left(\vec{\mathbf{v}}\cdot\vec{\nabla}\right)v^{j}\right] \ \ \ \ \ (36)$
The term with ${j=t}$ drops out, since ${u^{t}=1}$ and its derivatives are all zero. We can combine the three spatial coordinates into a single vector expression:
$\displaystyle \rho_{0}\left[\partial_{t}\vec{\mathbf{v}}+\left(\vec{\mathbf{v}}\cdot\vec{\nabla}\right)\vec{\mathbf{v}}\right] \ \ \ \ \ (37)$
The RHS is
$\displaystyle -\left(u^{i}u^{j}+\eta^{ij}\right)\partial_{i}P_{0} \ \ \ \ \ (38)$
If ${j=t}$, the ${i=t}$ term in the sum is zero because ${u^{t}u^{t}+\eta^{tt}=+1-1=0}$. If we ignore all other terms that are second order or higher in ${v}$ and/or ${P_{0}}$, we are left with only ${-\eta^{ij}\partial_{i}P_{0}}$. Looking at the 3 terms with ${j}$ being a spatial coordinate, this is ${-\vec{\nabla}P_{0}}$ so we get the approximation
$\displaystyle \rho_{0}\left[\partial_{t}\vec{\mathbf{v}}+\left(\vec{\mathbf{v}}\cdot\vec{\nabla}\right)\vec{\mathbf{v}}\right]=-\vec{\nabla}P_{0} \ \ \ \ \ (39)$
which is Euler’s equation of motion for a perfect fluid. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 212, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9732524156570435, "perplexity": 123.00383216798433}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608936.94/warc/CC-MAIN-20170527094439-20170527114439-00062.warc.gz"} |
https://pyagrum.readthedocs.io/en/latest/lib.notebook.html | # Module notebook¶
tools for BN analysis in jupyter notebook
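In a notebook, this module is conventionally imported alongside pyAgrum itself. A minimal sketch (the fastBN structure string is only an illustration, not part of this module):

import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

bn = gum.fastBN("a->b->c")   # small example network with binary variables
gnb.showBN(bn, size="6")     # renders the DAG inline in the notebook cell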
pyAgrum.lib.notebook.animApproximationScheme(apsc, scale=<ufunc 'log10'>)
show an animated version of an approximation algorithm
Parameters: apsc – the approximation algorithm scale – a function to apply to the figure
pyAgrum.lib.notebook.configuration()
Display the collection of dependencies and versions
pyAgrum.lib.notebook.getBN(bn, size='4', format='svg', nodeColor=None, arcWidth=None, arcColor=None, cmap=None, cmapArc=None)
get a HTML string for a Bayesian network
Parameters: bn – the bayesian network size – size of the rendered graph format – render as “png” or “svg” nodeColor – a nodeMap of values (between 0 and 1) to be shown as color of nodes (with special colors for 0
and 1) :param arcWidth: a arcMap of values to be shown as width of arcs :param arcColor: a arcMap of values (between 0 and 1) to be shown as color of arcs :param cmap: color map to show the colors :param cmapArc: color map to show the arc color if distinction is needed
Returns: the graph
pyAgrum.lib.notebook.getBNDiff(bn1, bn2, size='4', format='png')
get a HTML string representation of a graphical diff between the arcs of _bn1 (reference) with those of _bn2.
• full black line: the arc is common for both
• full red line: the arc is common but inverted in _bn2
• dotted black line: the arc is added in _bn2
• dotted red line: the arc is removed in _bn2
Parameters: bn1 (BayesNet) – referent model for the comparison bn2 (BayesNet) – bn compared to the referent model size – size of the rendered graph format – render as “png” or “svg”
pyAgrum.lib.notebook.getDot(dotstring, size='4', format='png')
get a dot string as a HTML string
Parameters: dotstring – dot string size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the graph
pyAgrum.lib.notebook.getGraph(gr, size='4', format='png')
get a HTML string representation of pydot graph
Parameters: gr – pydot graph size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the graph as a string
pyAgrum.lib.notebook.getInference(bn, engine=None, evs=None, targets=None, size='7', format='png', nodeColor=None, arcWidth=None, cmap=None)
get a HTML string for an inference in a notebook
Parameters: bn (gum.BayesNet) – engine (gum.Inference) – inference algorithm used. If None, LazyPropagation will be used evs (dictionary) – map of evidence targets (set) – set of targets size (string) – size of the rendered graph format (string) – render as “png” or “svg” nodeColor – a nodeMap of values (between 0 and 1) to be shown as color of nodes (with special colors for 0
and 1) :param arcWidth: a arcMap of values to be shown as width of arcs :param cmap: color map to show the color of nodes and arcs :return: the desired representation of the inference
pyAgrum.lib.notebook.getInferenceEngine(ie, inferenceCaption)
display an inference as a BN+ lists of hard/soft evidence and list of targets
Parameters: ie (gum.InferenceEngine) – inference engine inferenceCaption (string) – title for caption
pyAgrum.lib.notebook.getInfluenceDiagram(diag, size='4', format='png')
get a HTML string for an influence diagram as a graph
Parameters: diag – the influence diagram size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the influence diagram
pyAgrum.lib.notebook.getInformation(bn, evs=None, size='4', cmap=<matplotlib.colors.LinearSegmentedColormap object>)
get a HTML string for a bn annotated with results from inference: entropy and mutual information
Parameters: bn – the BN evs – map of evidence size – size of the graph cmap – colour map used the HTML string
pyAgrum.lib.notebook.getJunctionTree(bn, withNames=True, size='4', format='png')
get a HTML string for a junction tree (more specifically a join tree)
Parameters: bn – the bayesian network withNames (boolean) – display the variable names or the node id in the clique size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the graph
pyAgrum.lib.notebook.getPosterior(bn, evs, target)
shortcut for getProba(gum.getPosterior(bn,evs,target))
Parameters: bn (gum.BayesNet) – the BayesNet evs (dict(str->int)) – map of evidence target (str) – name of target variable the matplotlib graph
pyAgrum.lib.notebook.getPotential(pot, digits=4, withColors=None, varnames=None)
return a HTML string of a gum.Potential as a HTML table. The first dimension is special (horizontal) due to the representation of conditional probability table
Parameters: pot (gum.Potential) – the potential to get digits (int) – number of digits to show of strings varnames (list) – the aliases for variables name in the table boolean withColors : bgcolor for proba cells or not the HTML string
pyAgrum.lib.notebook.getSideBySide(*args, **kwargs)
create an HTML table for args as string (using string, _repr_html_() or str())
Parameters: args – HTML fragments as string arg, arg._repr_html_() or str(arg) captions – list of strings (captions) a string representing the table
pyAgrum.lib.notebook.showBN(bn, size='4', format='svg', nodeColor=None, arcWidth=None, arcColor=None, cmap=None, cmapArc=None)
show a Bayesian network
Parameters: bn – the bayesian network size – size of the rendered graph format – render as “png” or “svg” nodeColor – a nodeMap of values (between 0 and 1) to be shown as color of nodes (with special colors for 0
and 1) :param arcWidth: a arcMap of values to be shown as width of arcs :param arcColor: a arcMap of values (between 0 and 1) to be shown as color of arcs :param cmap: color map to show the colors :param cmapArc: color map to show the arc color if distinction is needed :return: the graph
pyAgrum.lib.notebook.showBNDiff(bn1, bn2, size='4', format='png')
show a graphical diff between the arcs of _bn1 (reference) with those of _bn2.
• full black line: the arc is common for both
• full red line: the arc is common but inverted in _bn2
• dotted black line: the arc is added in _bn2
• dotted red line: the arc is removed in _bn2
Parameters: bn1 (BayesNet) – referent model for the comparison bn2 (BayesNet) – bn compared to the referent model size – size of the rendered graph format – render as “png” or “svg”
pyAgrum.lib.notebook.showDot(dotstring, size='4', format='png')
show a dot string as a graph
Parameters: dotstring – dot string size – size of the rendered graph format – render as “png” or “svg” the representation of the graph
pyAgrum.lib.notebook.showGraph(gr, size='4', format='png')
show a pydot graph in a notebook
Parameters: gr – pydot graph size – size of the rendered graph format – render as “png” or “svg” the representation of the graph
pyAgrum.lib.notebook.showInference(bn, engine=None, evs=None, targets=None, size='7', format='png', nodeColor=None, arcWidth=None, cmap=None)
show pydot graph for an inference in a notebook
Parameters: bn (gum.BayesNet) – engine (gum.Inference) – inference algorithm used. If None, LazyPropagation will be used evs (dictionary) – map of evidence targets (set) – set of targets size (string) – size of the rendered graph format (string) – render as “png” or “svg” asString (boolean) – display the graph or return a string containing the corresponding HTML fragment the desired representation of the inference
pyAgrum.lib.notebook.showInfluenceDiagram(diag, size='4', format='png')
show an influence diagram as a graph
Parameters: diag – the influence diagram size – size of the rendered graph format – render as “png” or “svg” the representation of the influence diagram
pyAgrum.lib.notebook.showInformation(bn, evs=None, size='4', cmap=<matplotlib.colors.LinearSegmentedColormap object>)
show a bn annotated with results from inference: entropy and mutual information
Parameters: bn – the BN evs – map of evidence size – size of the graph cmap – colour map used the graph
pyAgrum.lib.notebook.showJunctionTree(bn, withNames=True, size='4', format='png')
Show a junction tree
Parameters: bn – the bayesian network withNames (boolean) – display the variable names or the node id in the clique size – size of the rendered graph format – render as “png” or “svg” the representation of the graph
pyAgrum.lib.notebook.showPosterior(bn, evs, target)
shortcut for showProba(gum.getPosterior(bn,evs,target))
Parameters: bn – the BayesNet evs – map of evidence target – name of target variable
pyAgrum.lib.notebook.showPotential(pot, digits=4, withColors=None, varnames=None)
show a gum.Potential as a HTML table. The first dimension is special (horizontal) due to the representation of conditional probability table
Parameters: pot (gum.Potential) – the potential to get digits (int) – number of digits to show of strings varnames (list) – the aliases for variables name in the table boolean withColors : bgcolor for proba cells or not the display of the potential
pyAgrum.lib.notebook.showProba(p, scale=1.0)
Show a mono-dim Potential
Parameters: p – the mono-dim Potential
pyAgrum.lib.notebook.sideBySide(*args, **kwargs)
display side by side args as HTML fragment (using string, _repr_html_() or str())
Parameters: args – HTML fragments as string arg, arg._repr_html_() or str(arg) captions – list of strings (captions)
## Helpers¶
pyAgrum.lib.notebook.configuration()
Display the collection of dependencies and versions
pyAgrum.lib.notebook.sideBySide(*args, **kwargs)
display side by side args as HTML fragment (using string, _repr_html_() or str())
Parameters: args – HTML fragments as string arg, arg._repr_html_() or str(arg) captions – list of strings (captions)
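A minimal sideBySide sketch (the example network and captions are illustrative):

import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

bn = gum.fastBN("a->b")
gnb.sideBySide(bn.cpt(bn.idFromName("a")), bn.cpt(bn.idFromName("b")),
               captions=["P(a)", "P(b | a)"])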
## Visualization of Potentials¶
pyAgrum.lib.notebook.showProba(p, scale=1.0)
Show a mono-dim Potential
Parameters: p – the mono-dim Potential
pyAgrum.lib.notebook.getPosterior(bn, evs, target)
shortcut for getProba(gum.getPosterior(bn,evs,target))
Parameters: bn (gum.BayesNet) – the BayesNet evs (dict(str->int)) – map of evidence target (str) – name of target variable the matplotlib graph
pyAgrum.lib.notebook.showPosterior(bn, evs, target)
shortcut for showProba(gum.getPosterior(bn,evs,target))
Parameters: bn – the BayesNet evs – map of evidence target – name of target variable
pyAgrum.lib.notebook.getPotential(pot, digits=4, withColors=None, varnames=None)
return a HTML string of a gum.Potential as a HTML table. The first dimension is special (horizontal) due to the representation of conditional probability table
Parameters: pot (gum.Potential) – the potential to get digits (int) – number of digits to show of strings varnames (list) – the aliases for variables name in the table boolean withColors : bgcolor for proba cells or not the HTML string
pyAgrum.lib.notebook.showPotential(pot, digits=4, withColors=None, varnames=None)
show a gum.Potential as a HTML table. The first dimension is special (horizontal) due to the representation of conditional probability table
Parameters: pot (gum.Potential) – the potential to get digits (int) – number of digits to show of strings varnames (list) – the aliases for variables name in the table boolean withColors : bgcolor for proba cells or not the display of the potential
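A short sketch of these helpers in a notebook cell (the network and the evidence are illustrative):

import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

bn = gum.fastBN("a->b->c")
gnb.showPotential(bn.cpt(bn.idFromName("b")))     # CPT of b as an HTML table
gnb.showPosterior(bn, evs={"a": 1}, target="c")   # posterior of c given evidence a=1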
## Visualization of graphs¶
pyAgrum.lib.notebook.getDot(dotstring, size='4', format='png')
get a dot string as a HTML string
Parameters: dotstring – dot string size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the graph
pyAgrum.lib.notebook.showDot(dotstring, size='4', format='png')
show a dot string as a graph
Parameters: dotstring – dot string size – size of the rendered graph format – render as “png” or “svg” the representation of the graph
pyAgrum.lib.notebook.getGraph(gr, size='4', format='png')
get a HTML string representation of pydot graph
Parameters: gr – pydot graph size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the graph as a string
pyAgrum.lib.notebook.showGraph(gr, size='4', format='png')
show a pydot graph in a notebook
Parameters: gr – pydot graph size – size of the rendered graph format – render as “png” or “svg” the representation of the graph
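For instance, any graph written in dot syntax can be rendered directly (the dot string here is arbitrary):

import pyAgrum.lib.notebook as gnb

gnb.showDot("digraph { rankdir=LR; a -> b; b -> c; }", size="6")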
## Visualization of graphical models¶
pyAgrum.lib.notebook.getBN(bn, size='4', format='svg', nodeColor=None, arcWidth=None, arcColor=None, cmap=None, cmapArc=None)
get a HTML string for a Bayesian network
Parameters: bn – the bayesian network size – size of the rendered graph format – render as “png” or “svg” nodeColor – a nodeMap of values (between 0 and 1) to be shown as color of nodes (with special colors for 0
and 1) :param arcWidth: a arcMap of values to be shown as width of arcs :param arcColor: a arcMap of values (between 0 and 1) to be shown as color of arcs :param cmap: color map to show the colors :param cmapArc: color map to show the arc color if distinction is needed
Returns: the graph
pyAgrum.lib.notebook.showBN(bn, size='4', format='svg', nodeColor=None, arcWidth=None, arcColor=None, cmap=None, cmapArc=None)
show a Bayesian network
Parameters: bn – the bayesian network size – size of the rendered graph format – render as “png” or “svg” nodeColor – a nodeMap of values (between 0 and 1) to be shown as color of nodes (with special colors for 0
and 1) :param arcWidth: a arcMap of values to be shown as width of arcs :param arcColor: a arcMap of values (between 0 and 1) to be shown as color of arcs :param cmap: color map to show the colors :param cmapArc: color map to show the arc color if distinction is needed :return: the graph
pyAgrum.lib.notebook.getInference(bn, engine=None, evs=None, targets=None, size='7', format='png', nodeColor=None, arcWidth=None, cmap=None)
get a HTML string for an inference in a notebook
Parameters: bn (gum.BayesNet) – engine (gum.Inference) – inference algorithm used. If None, LazyPropagation will be used evs (dictionary) – map of evidence targets (set) – set of targets size (string) – size of the rendered graph format (string) – render as “png” or “svg” nodeColor – a nodeMap of values (between 0 and 1) to be shown as color of nodes (with special colors for 0
and 1) :param arcWidth: a arcMap of values to be shown as width of arcs :param cmap: color map to show the color of nodes and arcs :return: the desired representation of the inference
pyAgrum.lib.notebook.showInference(bn, engine=None, evs=None, targets=None, size='7', format='png', nodeColor=None, arcWidth=None, cmap=None)
show pydot graph for an inference in a notebook
Parameters: bn (gum.BayesNet) – engine (gum.Inference) – inference algorithm used. If None, LazyPropagation will be used evs (dictionary) – map of evidence targets (set) – set of targets size (string) – size of the rendered graph format (string) – render as “png” or “svg” asString (boolean) – display the graph or return a string containing the corresponding HTML fragment the desired representation of the inference
pyAgrum.lib.notebook.getJunctionTree(bn, withNames=True, size='4', format='png')
get a HTML string for a junction tree (more specifically a join tree)
Parameters: bn – the bayesian network withNames (boolean) – display the variable names or the node id in the clique size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the graph
pyAgrum.lib.notebook.showJunctionTree(bn, withNames=True, size='4', format='png')
Show a junction tree
Parameters: bn – the bayesian network withNames (boolean) – display the variable names or the node id in the clique size – size of the rendered graph format – render as “png” or “svg” the representation of the graph
pyAgrum.lib.notebook.showInformation(bn, evs=None, size='4', cmap=<matplotlib.colors.LinearSegmentedColormap object>)
show a bn annotated with results from inference: entropy and mutual information
Parameters: bn – the BN evs – map of evidence size – size of the graph cmap – colour map used the graph
pyAgrum.lib.notebook.getInformation(bn, evs=None, size='4', cmap=<matplotlib.colors.LinearSegmentedColormap object>)
get a HTML string for a bn annotated with results from inference: entropy and mutual information
Parameters: bn – the BN evs – map of evidence size – size of the graph cmap – colour map used the HTML string
pyAgrum.lib.notebook.showInfluenceDiagram(diag, size='4', format='png')
show an influence diagram as a graph
Parameters: diag – the influence diagram size – size of the rendered graph format – render as “png” or “svg” the representation of the influence diagram
pyAgrum.lib.notebook.getInfluenceDiagram(diag, size='4', format='png')
get a HTML string for an influence diagram as a graph
Parameters: diag – the influence diagram size – size of the rendered graph format – render as “png” or “svg” the HTML representation of the influence diagram
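A sketch combining these functions on a toy model (network, evidence and targets are illustrative):

import pyAgrum as gum
import pyAgrum.lib.notebook as gnb

bn = gum.fastBN("a->b->c;a->c")
gnb.showInference(bn, evs={"a": 1}, targets={"c"}, size="7")   # posteriors drawn on the DAG
gnb.showJunctionTree(bn)                                       # join tree used for inference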
## Visualization of approximation algorithm¶
pyAgrum.lib.notebook.animApproximationScheme(apsc, scale=<ufunc 'log10'>)
show an animated version of an approximation algorithm
Parameters: apsc – the approximation algorithm scale – a function to apply to the figure | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21670207381248474, "perplexity": 12222.146837300603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496080.57/warc/CC-MAIN-20190220190527-20190220212527-00357.warc.gz"} |
http://www.iro.umontreal.ca/~qip2012/abstract.php | QIP2012
### Abstracts
Itai Arad, Zeph Landau and Umesh Vazirani.
An improved area law for 1D frustration-free systems
Abstract: We present a new proof for the 1D area law for frustration-free systems with a constant gap, which exponentially improves the entropy bound in Hastings' 1D proof. For particles of dimension d, a spectral gap \eps>0 and interaction strength of at most J, our entropy bound is S <= O(1)X^3 log^8 X where X=(J log d)/ \eps, versus the e^{O(X)} in Hastings' proof. Our proof is completely combinatorial. It follows the simple proof of the commuting case, combining the detectability lemma with basic tools from approximation theory. In higher dimensions, we manage to slightly improve the bound, showing that the entanglement entropy between a region L and its surrounding is bounded by S <= O(1) |\partial L|^2 log^8|\partial L|, which in 2D is very close to the (trivial) volume-law.
Salman Beigi and Amin Gohari.
Information Causality is a Special Point in the Dual of the Gray-Wyner Region
Abstract: Information Causality puts restrictions on the amount of information learned by a party (Bob) in a one-way communication problem. Bob receives an index b, and after a one-way communication from the other party (Alice), tries to recover a part of Alice's input. Because of the possibility of cloning, this game in its completely classical form is equivalent to one in which there are several Bobs indexed by b, who are interested in recovering different parts of Alice's input string, after receiving a public message from her. Adding a private message from Alice to each Bob, and assuming that the game is played many times, we obtain the Gray-Wyner problem for which a complete characterization of the achievable region is known. In this paper, we first argue that in the classical case Information Causality is only a single point in the dual of the Gray-Wyner region. Next, we show that despite the fact that cloning is impossible in a general physical theory, the result from classical world carries over to any physical theory provided that it satisfies a new property. This new property of the physical theory is called the Accessibility of Mutual Information and holds in the quantum theory. It implies that the Gray-Wyner region completely characterizes all the inequalities corresponding to the game of Information Causality. In other words, we provide infinitely many inequalities that Information Causality is only one of them.
In the second part of the paper we prove that Information Causality leads to a non-trivial lower bound on the communication cost of simulating a given non-local box when the parties are allowed to share entanglement. We also consider the same problem when the parties are provided with preshared randomness. Our non-technical contribution is to comment that information theorists who have been interested in the area of control have independently studied the same problem in a different context from an information theoretic perspective, rather than a communication complexity one. Connecting these two lines of research, we report a formula that, rather surprisingly, gives an exact computable expression for the optimal amount of communication needed for sampling from a non-local distribution given infinite preshared randomness.
Salman Beigi and Robert Koenig.
Simplified instantaneous non-local quantum computation with applications to position-based cryptography
Abstract: Motivated by concerns that non-local measurements may violate causality, Vaidman has shown that any non-local operation can be implemented using local operations and a single round of simultaneously passed classical communication only. His protocols are based on a highly non-trivial recursive use of teleportation. Here we give a simple proof of this fact, reducing the amount of entanglement required from a doubly exponential to an exponential amount. We also prove a linear lower bound on the amount of entanglement consumed for the implementation of a certain non-local measurement.
These results have implications for position-based cryptography: any scheme becomes insecure if the adversaries share an amount of entanglement scaling exponentially in the number of communicated qubits. Furthermore, certain schemes are secure under the assumption that the adversaries have at most a linear amount of entanglement and are required to communicate classically.
Aleksandrs Belovs.
Span Programs for Functions with Constant-Sized 1-certificates
Abstract: Besides the Hidden Subgroup Problem, the second large class of quantum speed-ups is for functions with constant-sized 1-certificates. This includes the OR function, solvable by the Grover algorithm, the distinctness, the triangle and other problems. The usual way to solve them is by quantum walk on the Johnson graph.
We propose a solution for the same problems using span programs. The span program is a computational model equivalent to the quantum query algorithm in its strength, and yet very different in its outfit.
We prove the power of our approach by designing a quantum algorithm for the triangle problem with query complexity O(n^{35/27}) that is better than O(n^{13/10}) of the best previously known algorithm by Magniez et al.
Dominic Berry, Richard Cleve and Sevag Gharibian.
Discrete simulations of continuous-time query algorithms that are efficient with respect to queries, gates and space
Abstract: We show that any continuous-time quantum query algorithm whose total query time is T and whose driving Hamiltonian is implementable with G gates can be simulated by a discrete-query quantum algorithm using the following resources: * O(T log T / loglog T) queries * O(T G polylog T) 1- and 2-qubit gates * O(polylog T) qubits of space. This extends previous results where the query cost is the same or better, but where the orders of the second and third resource costs are at least T^2 polylog T and T polylog T respectively. These new bounds are useful in circumstances where abstract black-box query algorithms are translated into concrete algorithms with subroutines substituted for the black-box queries. In these circumstances, what matters most is the total gate complexity, which can be large if the cost of the operations performed between the queries is large (even if the number of queries is small). Our bound implies that if the implementation cost of the driving Hamiltonian is small, the total gate complexity is not much more than the query complexity.
Robin Blume-Kohout.
Paranoid tomography: Confidence regions for quantum hardware
Abstract: Many "paranoid" quantum information processing protocols, such as fault tolerance and cryptography, require rigorously validated quantum hardware. We use tomography to characterize and calibrate such devices. But point estimates of the state or gate implemented by a device -- which are the end result of all current tomographic protocols -- can never provide the rigorously reliable validation required for fault tolerance or QKD. We need region estimators. Here, I introduce likelihood-ratio confidence region estimators, and show that (unlike ad hoc techniques such as the bootstrap) they are absolutely reliable and near-optimally powerful, as well as convenient and simple to implement.
Fernando Brandao, Aram Harrow and Michal Horodecki.
Local random quantum circuits are approximate polynomial-designs
Abstract: We prove that local random quantum circuits acting on n qubits composed of polynomially many nearest neighbor two-qubit gates form an approximate unitary poly(n)-design. Previously it was unknown whether random quantum circuits were a t-design for any t > 3.
The proof is based on an interplay of techniques from quantum many-body theory, representation theory, and the theory of Markov chains. In particular we employ a result of Nachtergaele for lower bounding the spectral gap of frustration-free quantum local Hamiltonians; a quasi-orthogonality property of permutation matrices; and a result of Oliveira which extends to the unitary group the path-coupling method for bounding the mixing time of random walks.
We also consider pseudo-randomness properties of local random quantum circuits of small depth and prove they constitute a quantum poly(n)-copy tensor product expander. The proof also rests on techniques from quantum many-body theory, in particular on the detectability lemma of Aharonov, Arad, Landau, and Vazirani.
We give two applications of the results. Firstly we show the following pseudo- randomness property of efficiently generated quantum states: almost every state of n qubits generated by circuits of size n.k cannot be distinguished from the maximally mixed state by circuits of size n^((k+4)/6); this provides a data-hiding scheme against computationally bounded adversaries. Secondly, we reconsider a recent argument of Masanes, Roncaglia, and Acin concerning local equilibration of time-envolving quantum systems, and strengthen the connection between fast equilibration of small subsystems and the circuit complexity of the unitary which diagonalizes the Hamiltonian.
Gilles Brassard, Peter Høyer, Kassem Kalach, Marc Kaplan, Sophie Laplante and Louis Salvail.
Merkle Puzzles in a Quantum World
Abstract: In 1974, Ralph Merkle proposed the first unclassified scheme for secure communications over insecure channels. When legitimate communicating parties are willing to spend an amount of computational effort proportional to some parameter N, an eavesdropper cannot break into their communication without spending a time proportional to N^2, which is quadratically more than the legitimate effort. We showed in an earlier paper that Merkle's schemes are completely insecure against a quantum adversary, but that their security can be partially restored if the legitimate parties are also allowed to use quantum computation: the eavesdropper needed to spend a time proportional to N^{3/2} to break our earlier quantum scheme. Furthermore, all previous classical schemes could be broken completely by the onslaught of a quantum eavesdropper and we conjectured that this is unavoidable. We give two novel key establishment schemes in the spirit of Merkle's. The first one can be broken by a quantum adversary that makes an effort proportional to N^{5/3} to implement a quantum random walk in a Johnson graph reminiscent of Andris Ambainis' quantum algorithm for the element distinctness problem. This attack is optimal up to logarithmic factors. Our second scheme is purely classical, yet it cannot be broken by a quantum eavesdropper who is only willing to expend effort proportional to that of the legitimate parties.
Sergey Bravyi.
Topological qubits: stability against thermal noise
Abstract: A big open question in the quantum information theory concerns feasibility of a self-correcting quantum memory. A user of such memory would need quantum computing capability only to write and read information. The storage itself requires no action from the user, as long as the memory is put in contact with a cold enough thermal bath.
In this talk I will review toy models of a quantum memory based on stabilizer code Hamiltonians with the topological order. These models describe quantum spin lattices with short-range interactions and a quantum code degenerate ground state. I will show how to derive a lower bound on the memory time for stabilizer Hamiltonians that have no string-like logical operators, such as the recently discovered 3D Cubic Code. This bound applies when the interaction between the spin lattice and the thermal bath can be described by a Markovian master equation. Our results demonstrate that the 3D Cubic Code is a marginally self-correcting memory: for a fixed temperature T the maximum memory time that can be achieved by increasing the lattice size grows exponentially with 1/T^2, whereas the optimal lattice size grows exponentially with 1/T.
We also compute the memory time of the 3D Cubic Code numerically using a novel error correction algorithm to simulate the readout step. The numerics suggests that our lower bound on the memory time is tight.
A unique feature of the studied stabilizer Hamiltonians responsible for the self-correction is the energy landscape that penalizes any sequence of local errors resulting in a logical error. The energy barrier that must be overcome to implement a bit-flip or phase-flip on the encoded qubit grows logarithmically with the lattice size. Such energy landscape also renders diffusion of topological defects over large distances energetically unfavorable.
This is a joint work with Jeongwan Haah (Caltech)
Sergey Bravyi and Robert Koenig.
Disorder-assisted error correction in Majorana chains
Abstract: It was recently realized that quenched disorder may enhance the reliability of topological qubits by reducing the mobility of anyons at zero temperature. Here we compute storage times with and without disorder for quantum chains with unpaired Majorana fermions - the simplest toy model of a quantum memory. Disorder takes the form of a random site-dependent chemical potential. The corresponding one-particle problem is a one-dimensional Anderson model with disorder in the hopping amplitudes. We focus on the zero-temperature storage of a qubit encoded in the ground state of the Majorana chain. Storage and retrieval are modeled by a unitary evolution under the memory Hamiltonian with an unknown weak perturbation followed by an error-correction step. Assuming dynamical localization of the one-particle problem, we show that the storage time grows exponentially with the system size. We give supporting evidence for the required localization property by estimating Lyapunov exponents of the one-particle eigenfunctions. We also simulate the storage process for chains with a few hundred sites. Our numerical results indicate that in the absence of disorder, the storage time grows only as a logarithm of the system size. We provide numerical evidence for the beneficial effect of disorder on storage times and show that suitably chosen pseudorandom potentials can outperform random ones.
Jop Briet and Thomas Vidick.
Explicit lower and upper bounds on the entangled value of multiplayer XOR games
Abstract: XOR games are the simplest model in which the nonlocal properties of entanglement manifest themselves. When there are two players, it is well known that the bias --- the maximum advantage over random play --- of entangled players is at most a constant times greater than that of classical players. Using tools from operator space theory, Perez-Garcia et al. [Comm. Math. Phys. 279 (2), 2008] showed that no such bound holds when there are three or more players: in that case the ratio of the entangled and classical biases can become unbounded and scale with the size of the game.
We give a new, simple and explicit (though still probabilistic) construction of a family of three-player XOR games for which entangled players have a large advantage over classical players. Our game has N^2 questions per player and entangled players have a factor of order sqrt(N) (up to log factors) advantage over classical players. Moreover, the entangled players only need to share a state of local dimension N and measure observables defined by tensor products of the Pauli matrices.
Additionally, we give the first upper bounds on the maximal violation in terms of the number of questions per player, showing that our construction is only quadratically off in that respect. Our results rely on probabilistic estimates on the norm of random matrices and higher-order tensors.
Harry Buhrman, Serge Fehr, Christian Schaffner and Florian Speelman.
The Garden-Hose Game and Application to Position-Based Quantum Cryptography
Abstract: We study position-based cryptography in the quantum setting. We examine a class of protocols that only require the communication of a single qubit and 2n bits of classical information. To this end, we define a new model of communication complexity, the garden-hose model, which enables us to prove upper bounds on the number of EPR pairs needed to attack such schemes. This model furthermore opens up a way to link the security of quantum position-based cryptography to traditional complexity theory.
Josh Cadney, Noah Linden and Andreas Winter.
Infinitely many constrained inequalities for the von Neumann entropy
Abstract: We exhibit infinitely many new, constrained inequalities for the von Neumann entropy, and show that they are independent of each other and the known inequalities obeyed by the von Neumann entropy (basically strong subadditivity). The new inequalities were proved originally by Makarychev et al. [Commun. Inf. Syst., 2(2):147-166, 2002] for the Shannon entropy, using properties of probability distributions. Our approach extends the proof of the inequalities to the quantum domain, and includes their independence for the quantum and also the classical cases.
André Chailloux and Iordanis Kerenidis.
Optimal Bounds for Quantum Bit Commitment
Abstract: Bit commitment is a fundamental cryptographic primitive with numerous applications. Quantum information allows for bit commitment schemes in the information theoretic setting where no dishonest party can perfectly cheat. The previously best-known quantum protocol by Ambainis achieved a cheating probability of at most 3/4[Amb01]. On the other hand, Kitaev showed that no quantum protocol can have cheating probability less than 1/sqrt{2}[Kit03] (his lower bound on coin flipping can be easily extended to bit commitment). Closing this gap has since been an important and open question.
In this paper, we provide the optimal bound for quantum bit commitment. We first show a lower bound of approximately 0.739, improving Kitaev's lower bound. We then present an optimal quantum bit commitment protocol which has cheating probability arbitrarily close to 0.739. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + \eps in order to achieve a quantum bit commitment protocol with cheating probability 0.739 + O(\eps). We then use the optimal quantum weak coin flipping protocol described by Mochon[Moc07]. To stress the fact that our protocol uses quantum effects beyond the weak coin flip, we show that any classical bit commitment protocol with access to perfect weak (or strong) coin flipping has cheating probability at least 3/4.
André Chailloux and Or Sattath.
The Complexity of the Separable Hamiltonian Problem
Abstract: In this paper, we study variants of the canonical local hamiltonian problem where we have the additional promise that the witness is separable. We define two variants of the local problem. In the separable sparse hamiltonian problem, the hamiltonians are not necessarily local, but rather sparse. We show that this problem is QMA(2) complete. On the other hand, we consider another problem, the separable local hamiltonian problem and show that it is QMA complete. This should be compared to the local hamiltonian problem, and to the sparse hamiltonian problem which are both QMA complete. This is the first study of separable Hamiltonian problems which leads to new complete problems for both QMA and QMA(2) and might give some new ways of comparing these two classes.
Eric Chitambar, Wei Cui and Hoi-Kwong Lo.
Increasing Entanglement by Separable Operations and New Monotones for W-type Entanglement
Abstract: In this talk, we seek to better understand the structure of local operations and classical communication (LOCC) and its relationship to separable operations (SEP). To this end, we compare the abilities of LOCC and SEP for distilling EPR entanglement from one copy of an N-qubit W-class state (i.e. that of the form sqrt{x_0}|00...0> + sqrt{x_1}|10...0> +...+ sqrt{x_n}|00...1>). In terms of transformation success probability, we are able to quantify a gap as large as 37% between the two classes. Our work involves constructing new analytic entanglement monotones for W-class states which can increase on average by separable operations. Additionally, we are able to show that the set of LOCC operations, considered as a subset of the most general quantum measurements, is not closed. Extended Version: arXiv:1106.1208
Matthias Christandl and Renato Renner.
Reliable Quantum State Tomography
Abstract: Quantum state tomography is the task of inferring the state of a quantum system by appropriate measurements. Since the frequency distributions of the outcomes obtained from any finite number of measurements will generally deviate from their asymptotic limits, the estimation of the state can never be perfectly accurate, thus requiring the specification of error bounds. Furthermore, the individual reconstruction of matrix elements of the density operator representation of a state may lead to inconsistent results (e.g., operators with negative eigenvalues). Here we introduce a framework for quantum state tomography that enables the computation of accurate and consistent estimates and reliable error bars from a finite set of data and show that these have a well-defined and universal operational significance. The method does not require any prior assumptions about the distribution of the possible states or a specific parametrization of the state space. The resulting error bars are tight, corresponding to the fundamental limits that quantum theory imposes on the precision of measurements. At the same time, the technique is practical and particularly well suited for tomography on systems consisting of a small number of qubits, which are currently in the focus of interest in experimental quantum information science.
Toby Cubitt, Martin Schwarz, Frank Verstraete, Or Sattath and Itai Arad.
Three Proofs of a Constructive Commuting Quantum Lovasz Local Lemma
Abstract: The recently proven Quantum Lovasz Local Lemma generalises the well-known Lovasz Local Lemma. It states that, if a collection of subspace constraints are "weakly dependent", there necessarily exists a state satisfying all constraints. It implies e.g. that certain instances of the quantum kQSAT satisfiability problem are necessarily satisfiable, or that many-body systems with "not too many" interactions are never frustrated. However, the QLLL only asserts existence; it says nothing about how to find the state. Inspired by Moser's breakthrough classical results, we present a constructive version of the QLLL in the setting of commuting constraints, proving that a simple quantum algorithm converges efficiently to the required state. In fact, we provide three different proofs, all of which are independent of the original QLLL proof. So these results also provide independent, constructive proofs of the commuting QLLL itself, but strengthen it significantly by giving an efficient algorithm for finding the state whose existence is asserted by the QLLL.
Marcus P. Da Silva, Steven T. Flammia, Olivier Landon-Cardinal, Yi-Kai Liu and David Poulin.
Practical characterization of quantum devices without tomography
Abstract: Quantum tomography is the main method used to assess the quality of quantum information processing devices, but its complexity presents a major obstacle for the characterization of even moderately large systems. However, tomography generates much more information than is often sought. Taking a more targeted approach, we develop schemes that enable (i) estimating the fidelity of an experiment to a theoretical ideal description, (ii) learning which description within a reduced subset best matches the experimental data. Both these approaches yield a significant reduction in resources compared to tomography. In particular, we show how to estimate the fidelity between a predicted pure state and an arbitrary experimental state using only a constant number of Pauli expectation values selected at random according to an importance-weighting rule. In addition, we propose methods for certifying quantum circuits and learning continuous-time quantum dynamics that are described by local Hamiltonians or Lindbladians. This extended abstract is a synthesis of arXiv:1104.3835 and arXiv:1104.4695, which the reader can consult for complete details on the results, methods and proofs.
Nilanjana Datta, Min-Hsiu Hsieh and Mark Wilde.
Quantum rate distortion, reverse Shannon theorems, and source-channel separation
Abstract: We derive quantum counterparts of two key theorems of classical information theory, namely, the rate distortion theorem and the source-channel separation theorem. The rate-distortion theorem gives the ultimate limits on lossy data compression, and the source-channel separation theorem implies that a two-stage protocol consisting of compression and channel coding is optimal for transmitting a memoryless source over a memoryless channel. In spite of their importance in the classical domain, there has been surprisingly little work in these areas for quantum information theory. In the present work, we prove that the quantum rate distortion function is given in terms of the regularized entanglement of purification. Although this formula is regularized, at the very least it demonstrates that Barnum's conjecture on the achievability of the coherent information for quantum rate distortion is generally false. We also determine single-letter expressions for entanglement-assisted quantum rate distortion. Moreover, we prove several quantum source channel separation theorems. The strongest of these are in the entanglement-assisted setting, in which we establish a necessary and sufficient condition for transmitting a memoryless source over a memoryless quantum channel up to a given distortion.
Thomas Decker, Gábor Ivanyos, Miklos Santha and Pawel Wocjan.
Hidden Symmetry Subgroup Problems
Abstract: We advocate a new approach of addressing hidden structure problems and finding efficient quantum algorithms. We introduce and investigate the Hidden Symmetry Subgroup Problem (HSSP), which is a generalization of the well-studied Hidden Subgroup Problem (HSP). Given a group acting on a set and an oracle whose level sets define a partition of the set, the task is to recover the subgroup of symmetries of this partition inside the group. The HSSP provides a unifying framework that, besides the HSP, encompasses a wide range of algebraic oracle problems, including quadratic hidden polynomial problems. While the HSSP can have provably exponential quantum query complexity, we obtain efficient quantum algorithms for various interesting cases. To achieve this, we present a general method for reducing the HSSP to the HSP, which works efficiently in several cases related to symmetries of polynomials. The HSSP therefore connects in a rather surprising way certain hidden polynomial problems with the HSP. Using this connection, we obtain the first efficient quantum algorithm for the hidden polynomial problem for multivariate quadratic polynomials over fields of constant characteristic. We also apply the new methods to polynomial function graph problems and present an efficient quantum procedure for constant degree multivariate polynomials over any field. This result improves in several ways the currently known algorithms.
Guillaume Duclos-Cianci, Héctor Bombin and David Poulin.
Equivalence of Topological Codes and Fast Decoding Algorithms
Abstract: Two topological phases are equivalent if they are connected by a local unitary transformation. In this sense, classifying topological phases amounts to classifying long-range entanglement patterns. We show that all 2D topological stabilizer codes are equivalent to several copies of one universal phase: Kitaev's topological code. Error correction benefits from the corresponding local mappings.
Omar Fawzi, Patrick Hayden, Ivan Savov, Pranab Sen and Mark Wilde.
Advances in classical communication for network quantum information theory
Abstract: Our group has developed new techniques that have yielded significant advances in network quantum information theory. We have established the existence of a quantum simultaneous decoder for two-sender quantum multiple access channels by using novel methods to deal with the non-commutativity of the many operators involved, and we have also applied this result in various scenarios, including unassisted and assisted classical communication over quantum multiple access channels, quantum broadcast channels, and quantum interference channels. Prior researchers have already considered classical communication over quantum multiple-access and broadcast channels, but our work extends and in some cases improves upon this prior work. Also, we are the first to make progress on the capacity of the quantum interference channel, which is a channel with two senders and two receivers, where one sender is interested in communicating with one receiver and the other sender with the other receiver. The aim of the proposed talk at QIP 2012 is to summarize this recent work and its applications as well as to discuss new avenues for network quantum information theory that may make use of these results.
Rodrigo Gallego, Lars Würflinger, Antonio Acín and Miguel Navascués.
An operational framework for nonlocality
Abstract: Since the advent of the first quantum information protocols, entanglement was recognized as a key ingredient for quantum information purposes, necessary for quantum computation or cryptography. A framework was developed to characterize and quantify entanglement as a resource based on the following operational principle: entanglement among N parties cannot be created by local operations and classical communication, even when N-1 parties collaborate. More recently, nonlocality has been identified as another resource, alternative to entanglement and necessary for device-independent quantum information protocols. We introduce a novel framework for nonlocality based on a similar principle: nonlocality among N parties cannot be created by local operations and shared randomness even when N-1 parties collaborate. We then show that the standard definition of multipartite nonlocality, due to Svetlichny, is inconsistent with this operational approach: according to it, genuine tripartite nonlocality could be created by two collaborating parties. We then discuss alternative definitions for which consistency is recovered.
Rodrigo Gallego, Lars Erik Würflinger, Antonio Acín and Miguel Navascués.
Quantum correlations require multipartite information principles
Abstract: Identifying which correlations among distant observers are possible within our current description of Nature, based on quantum mechanics, is a fundamental problem in Physics. Recently, information concepts have been proposed as the key ingredient to characterize the set of quantum correlations. Novel information principles, such as, information causality or non-trivial communication complexity, have been introduced in this context and successfully applied to some concrete scenarios. We show in this work a fundamental limitation of this approach: no principle based on bipartite information concepts is able to single out the set of quantum correlations for an arbitrary number of parties. Our results reflect the intricate structure of quantum correlations and imply that new and intrinsically multipartite information concepts are needed for their full understanding.
Sevag Gharibian and Julia Kempe.
Hardness of approximation for quantum problems
Abstract: The polynomial hierarchy plays a central role in classical complexity theory. Here, we define a quantum generalization of the polynomial hierarchy, and initiate its study. We show that not only are there natural complete problems for the second level of this quantum hierarchy, but that these problems are in fact strongly hard to approximate. Our results thus yield the first known hardness of approximation results for a quantum complexity class. Our approach is based on the use of dispersers, and is inspired by the classical results of Umans regarding hardness of approximation for the second level of the classical polynomial hierarchy [Umans, FOCS 1999].
Gus Gutoski and Xiaodi Wu.
Parallel approximation of min-max problems with applications to classical and quantum zero-sum games
Abstract: This paper presents an efficient parallel algorithm for a new class of min-max problems based on the matrix multiplicative weight (MMW) update method. Our algorithm can be used to find near-optimal strategies for competitive two-player classical or quantum games in which a referee exchanges any number of messages with one player followed by any number of additional messages with the other. This algorithm considerably extends the class of games which admit parallel solutions and demonstrates for the first time the existence of a parallel algorithm for any game (classical or quantum) in which one player reacts adaptively to the other.
As a direct consequence, we prove that several competing-provers complexity classes collapse to PSPACE such as QRG(2), SQG and two new classes called DIP and DQIP. A special case of our result is a parallel approximation scheme for a new class of semidefinite programs whose feasible region consists of n-tuples of semidefinite matrices that satisfy a "transcript-like" consistency condition. Applied to this special case, our algorithm yields a direct polynomial-space simulation of multi-message quantum interactive proofs resulting in a first-principles proof of QIP = PSPACE. It is noteworthy that our algorithm establishes a new way, called the min-max approach, to solve SDPs in contrast to the primal-dual approach to SDPs used in the original proof of QIP = PSPACE.
Jeongwan Haah.
Local stabilizer codes in three dimensions without string logical operators
Abstract: We suggest concrete models for self-correcting quantum memory by reporting examples of local stabilizer codes in 3D that have no string logical operators. Previously known local stabilizer codes in 3D all have string-like logical operators, which make the codes non-self-correcting. We introduce a notion of "logical string segments" to avoid difficulties in defining one dimensional objects in discrete lattices. We prove that every string-like logical operator of our code can be deformed to a disjoint union of short segments, and each segment is in the stabilizer group.
Esther Haenggi and Marco Tomamichel.
The Link between Uncertainty Relations and Non-Locality
Abstract: Two of the most intriguing features of quantum physics are the uncertainty principle and the occurrence of non-local correlations. The uncertainty principle states that there exist pairs of non-compatible measurements on quantum systems such that their outcomes cannot be simultaneously predicted by any observer. Non-local correlations of measurement outcomes at different locations cannot be explained by classical physics, but appear in quantum mechanics in the presence of entanglement. Here, we show that these two essential properties of quantum mechanics are quantitatively related. Namely, we provide an entropic uncertainty relation that gives a lower bound on the uncertainty of the binary outcomes of two measurements in terms of the maximum Clauser-Horne-Shimony-Holt value that can be achieved using the same measurements. We discuss an application of this uncertainty relation to certify a quantum source using untrusted devices.
Rahul Jain and Nayak Ashwin.
A quantum information cost trade-off for the Augmented Index
Abstract: In this work we establish a trade-off between the amount of (classical and) quantum information the two parties necessarily reveal about their inputs in the process of computing Augmented Index, a natural variant of the Index function. A surprising feature of this trade-off is that it holds even under a distribution on inputs on which the function value is 'known in advance'. In fact, this is the price paid by any protocol that works correctly on a "hard" distribution. We show that in any quantum protocol that computes the Augmented Index function correctly with constant error on the uniform distribution, either Alice reveals Omega(n/t) information about her n-bit input, or Bob reveals Omega(1/t) information about his (log n)-bit input, where t is the number of messages in the protocol, even when the inputs are drawn from an "easy" distribution, the uniform distribution over inputs which evaluate to 0.
The classical version of this result has implications for the space required by streaming algorithms---algorithms that scan the input sequentially only a few times, while processing each input symbol quickly using a small amount of space. It implies that certain context free properties need space Omega(sqrt(n)/T) on inputs of length n, when allowed T unidirectional passes over the input. The quantum version would have similar consequences, provided a certain information inequality holds.
Andrew Landahl, Jonas Anderson and Patrick Rice.
Fault-tolerant quantum computing with color codes
Abstract: We present and analyze protocols for fault-tolerant quantum computing using color codes. To process these codes, no qubit movement is necessary; nearest-neighbor gates in two spatial dimensions suffices. Our focus is on the color codes defined by the 4.8.8 semiregular lattice, as they provide the best error protection per physical qubit among color codes. We present circuit-level schemes for extracting the error syndrome of these codes fault-tolerantly. We further present an integer-program-based decoding algorithm for identifying the most likely error given the (possibly faulty) syndrome. We simulated our syndrome extraction and decoding algorithms against three physically-motivated noise models using Monte Carlo methods, and used the simulations to estimate the corresponding accuracy thresholds for fault-tolerant quantum error correction. We also used a self-avoiding walk analysis to lower-bound the accuracy threshold for two of these noise models. We present two methods for fault-tolerantly computing with these codes. In the first, many of the operations are transversal and therefore spatially local if two-dimensional arrays of qubits are stacked atop each other. In the second, code deformation techniques are used so that all quantum processing is spatially local in just two dimensions. In both cases, the accuracy threshold for computation is comparable to that for error correction. Our analysis demonstrates that color codes perform slightly better than Kitaev's surface codes when circuit details are ignored. When these details are considered, we estimate that color codes achieve a threshold of 0.082(3)%, which is higher than the threshold of 1.3 times 10^{-5} achieved by concatenated coding schemes restricted to nearest-neighbor gates in two dimensions [Spedalieri and Roychowdhury, Quant. Inf. Comp. 9, 666 (2009)] but lower than the threshold of 0.75% to 1.1% reported for the Kitaev codes subject to the same restrictions [Raussendorf and Harrington, Phys. Rev. Lett. 98, 190504 (2007); Wang et al., Phys. Rev. A 83, 020302(R) (2011)]. Finally, because the behavior of our decoder's performance for two of the noise models we consider maps onto an order-disorder phase transition in the three-body random-bond Ising model in 2D and the corresponding random-plaquette gauge model in 3D, our results also answer the Nishimori conjecture for these models in the negative: the statistical-mechanical classical spin systems associated to the 4.8.8 color codes are counterintuitively more ordered at positive temperature than at zero temperature.
Troy Lee, Rajat Mittal, Ben Reichardt, Robert Spalek and Mario Szegedy.
Quantum query complexity for state conversion
Abstract: State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm.
In the special case of function evaluation, the norm is closely related to the general adversary bound, a semi-definite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.
Troy Lee and Jérémie Roland.
A strong direct product theorem for quantum query complexity
Abstract: We show that quantum query complexity satisfies a strong direct product theorem. This means that computing k copies of a function with less than k times the quantum queries needed to compute one copy of the function implies that the overall success probability will be exponentially small in k. For a boolean function f we also show an XOR lemma---computing the parity of k copies of f with less than k times the queries needed for one copy implies that the advantage over random guessing will be exponentially small.
We do this by showing that the multiplicative adversary method, which inherently satisfies a strong direct product theorem, is always at least as large as the additive adversary method, which is known to characterize quantum query complexity.
Francois Le Gall.
Improved Output-Sensitive Quantum Algorithms for Boolean Matrix Multiplication
Abstract: We present new quantum algorithms for Boolean Matrix Multiplication in both the time complexity and the query complexity settings. As far as time complexity is concerned, our results show that the product of two n x n Boolean matrices can be computed on a quantum computer in time O(n^{3/2}+nk^{3/4}), where k is the number of non-zero entries in the product, improving over the output-sensitive quantum algorithm by Buhrman and Spalek (SODA'06) that runs in O(n^{3/2}k^{1/2}) time. This is done by constructing a quantum version of a recent classical algorithm by Lingas (ESA'09), using quantum techniques such as quantum counting to exploit the sparsity of the output matrix. As far as query complexity is concerned, our results improve over the quantum algorithm by Vassilevska Williams and Williams (FOCS'10) based on a reduction to the triangle finding problem. One of the main contributions leading to this improvement is the construction of a quantum algorithm for triangle finding tailored especially for the tripartite graphs appearing in the reduction.
Spyridon Michalakis and Justyna Pytel.
Stability of Frustration-Free Hamiltonians
Abstract: We prove stability of the spectral gap for gapped, frustration-free Hamiltonians under general, quasi-local perturbations. We present a necessary and sufficient condition for stability, which we call Local Topological Quantum Order. This result extends previous work by Bravyi et al. on the stability of topological quantum order for Hamiltonians composed of commuting projections with a common zero-energy subspace.
Abel Molina and John Watrous.
Hedging bets with correlated quantum strategies
Abstract: This paper studies correlations among independently administered hypothetical tests of a simple interactive type, and demonstrates that correlations arising in quantum information theoretic variants of these tests can exhibit a striking non-classical behavior. When viewed in a game-theoretic setting, these correlations are suggestive of a perfect form of hedging, where the risk of a loss in one game of chance is perfectly offset by one's actions in a second game. This type of perfect hedging is quantum in nature; it is not possible in classical variants of the tests we consider.
Sandu Popescu.
The smallest possible thermal machines and the foundations of thermodynamics
Abstract: In my talk I raise the question on the fundamental limits to the size of thermal machines - refrigerators, heat pumps and work producing engines - and I will present the smallest possible ones. I will then discuss the issue of a possible complementarity between size and efficiency and show that even the smallest machines could be maximally efficient and I will also present a new point of view over what is work and what do thermal machines actually do. Finally I will present a completely new approach to the foundations of thermodynamics that follows from these results, which in their turn are inspired from quantum information concepts.
Joseph M. Renes, Frederic Dupuis and Renato Renner.
Quantum Polar Coding
Abstract: Polar coding, introduced in 2008 by Arikan, is the first efficiently encodable and decodable coding scheme that provably achieves the Shannon bound for the rate of information transmission over classical discrete memoryless channels (in the asymptotic limit of large block sizes). Here we study the use of polar codes for the efficient coding and decoding of quantum information. Focusing on the case of qubit channels we construct a coding scheme which, using some pre-shared entanglement, asymptotically achieves a net transmission rate equal to the coherent information. Furthermore, for channels with sufficiently low noise level, no pre-shared entanglement is required.
Jérémie Roland.
Quantum rejection sampling
Abstract: Rejection sampling is a well-known method to sample from a target distribution, given the ability to sample from a given distribution. The method has been first formalized by von Neumann (1951) and has many applications in classical computing. We define a quantum analogue of rejection sampling: given a black box producing a coherent superposition of (possibly unknown) quantum states with some amplitudes, the problem is to prepare a coherent superposition of the same states, albeit with different target amplitudes. The main result of this paper is a tight characterization of the query complexity of this quantum state generation problem. We exhibit an algorithm, which we call quantum rejection sampling, and analyze its cost using semidefinite programming. Our proof of a matching lower bound is based on the automorphism principle which allows to symmetrize any algorithm over the automorphism group of the problem.
Furthermore, we illustrate how quantum rejection sampling may be used as a primitive in designing quantum algorithms, by providing three different applications. We first show that it was implicitly used in the quantum algorithm for linear systems of equations by Harrow, Hassidim and Lloyd. Secondly, we show that it can be used to speed up the main step in the quantum Metropolis sampling algorithm by Temme et al.. Finally, we derive a new quantum algorithm for the hidden shift problem of an arbitrary Boolean function.
Joint work with Maris Ozols and Martin Roetteler, to appear in ITCS'12
Norbert Schuch.
Complexity of commuting Hamiltonians on a square lattice of qubits
Abstract: We consider the computational complexity of Hamiltonians which are sums of commuting terms acting on plaquettes in a square lattice of qubits, and we show that deciding whether the ground state minimizes the energy of each local term individually is in the complexity class NP. That is, if the ground state has this property, this can be proven using a classical certificate which can be efficiently verified on a classical computer. Different to previous results on commuting Hamiltonians, our certificate proves the existence of such a state without giving instructions on how to prepare it.
Martin Schwarz, Kristan Temme, Frank Verstraete, Toby Cubitt and David Perez-Garcia.
Preparing projected entangled pair states on a quantum computer
Abstract: We present a quantum algorithm to prepare injective PEPS on a quantum computer, a problem raised by Verstraete, Wolf, Perez-Garcia, and Cirac [PRL 96, 220601 (2006)]. To be efficient, our algorithm requires well-conditioned PEPS projectors and, essentially, an inverse-polynomial spectral gap of the PEPS' parent Hamiltonian. Injective PEPS are the unique groundstates of their parent Hamiltonians and capture groundstates of many physically relevant many-body Hamiltonians, such as e.g. the 2D AKLT state. Even more general is the class of G-injective PEPS which have parent Hamiltonians with a ground state space of degeneracy |G|, the order of the discrete symmetry group G. As our second result we show how to prepare G-injective PEPS under similar assumptions as well. The class of G-injective PEPS contains topologically ordered states, such as Kitaev's toric code which our algorithm is thus able to prepare.
Yaoyun Shi and Xiaodi Wu.
Epsilon-net method for optimizations over separable states
Abstract: In this paper we study the algorithms for the linear optimization over separable quantum states, which is an NP-hard problem. Precisely, the objective function is <Q, \rho> for some hermitian Q where \rho is a separable quantum state. Our strategy is to enumerate (via epsilon-nets) more cleverly with the help of certain structure of some interesting Qs. As a result, we obtain efficient algorithms either in time or space for the following cases.
Firstly, we provide a polynomial time (or space) algorithm when Q can be decomposed into the form Q = sum_{i=1}^M Q^1_i tensor Q^2_i with small M. As a direct consequence, we prove that a variant of the complexity class QMA(2), where the verifier performs only a logarithmic number of unit gates acting on both proofs simultaneously, is contained in PSPACE. We also initiate the study of the natural extension of the local Hamiltonian problem to the k-partite case. By the same algorithm, we conclude that those problems also remain inside PSPACE.
Secondly, for a positive semidefinite Q we obtain an algorithm with running time exponential in the Frobenius norm of Q, which reproves one of the main results of Brandao, Christandl and Yard [STOC pp.343(2011)]. Note that this result was originally proved by making use of many non-trivial results in quantum information, whereas our algorithm only utilizes fundamental operations of matrices and the Schmidt decomposition of bipartite pure states.
Graeme Smith, John A. Smolin and Jon Yard.
Quantum communication with gaussian channels of zero quantum capacity
Abstract: Superactivation of channel capacity occurs when two channels have zero capacity separately, but can have nonzero capacity when used together. We present a family of simple and natural examples of superactivation of quantum capacity using gaussian channels that can potentially be realized with current technologies. This demonstrates the richness of the set of gaussian channels and the complexity of their capacity-achieving protocols. Superactivation is therefore not merely an oddity confined to unrealistic models but is in fact necessary for a proper characterization of realistic communication settings.
Rolando Somma and Sergio Boixo.
Spectral Gap Amplification
Abstract: Many problems in quantum information reduce to preparing a specific eigenstate of some Hamiltonian H. The generic cost of quantum algorithms for these problems is determined by the inverse spectral gap of H for that eigenstate and the cost of evolving with H for some fixed time. The goal of spectral gap amplification is therefore to construct a Hamiltonian H' with the same eigenstate as that of H but a bigger spectral gap, requiring that constant-time evolutions with H' and H can be implemented with nearly the same cost. We show that a quadratic spectral gap amplification is possible when H satisfies a frustration-free property and give H' for this case. This results in quantum speedups for some adiabatic evolutions. Defining a suitable oracle model, we establish that the quadratic amplification is optimal for frustration-free Hamiltonians and that no spectral gap amplification is possible if the frustration-free property is removed. A corollary is that finding a similarity transformation between a stochastic Hamiltonian and the corresponding stochastic matrix is hard in the oracle model, setting strong limits in the power of some classical methods that simulate quantum adiabatic evolutions. Implications of spectral gap amplification for quantum speedups of optimization problems and the preparation of projected entangled pair states (PEPS) are discussed.
Nengkun Yu, Runyao Duan and Quanhua Xu.
Bounds on the distance between a unital quantum channel and the convex hull of unitary channels, with applications to the asymptotic quantum Birkhoff conjecture
Abstract: Motivated by the recent resolution of Asymptotic Quantum Birkhoff Conjecture (AQBC), we attempt to estimate the distance between a given unital quantum channel and the convex hull of unitary channels. We provide two lower bounds on this distance by employing techniques from quantum information and operator algebra, respectively. We then show how to apply these results to construct some explicit counterexamples to AQBC.
QIP2012 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211532235145569, "perplexity": 692.7029127669462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864798.12/warc/CC-MAIN-20180522151159-20180522171159-00020.warc.gz"} |
http://en.wikipedia.org/wiki/Parrondo's_paradox | Parrondo's paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996. A more explanatory description is:
There exist pairs of games, each with a higher probability of losing than winning, for which it is possible to construct a winning strategy by playing the games alternately.
Parrondo devised the paradox in connection with his analysis of the Brownian ratchet, a thought experiment about a machine that can purportedly extract energy from random heat motions popularized by physicist Richard Feynman. However, the paradox disappears when rigorously analyzed.[1]
## Illustrative examples
### The saw-tooth example
Figure 1
Consider an example in which there are two points A and B having the same altitude, as shown in Figure 1. In the first case, we have a flat profile connecting them. Here, if we leave some round marbles in the middle that move back and forth in a random fashion, they will roll around randomly but towards both ends with an equal probability. Now consider the second case where we have a saw-tooth-like region between them. Here also, the marbles will roll towards either ends with equal probability (if there were a tendency to move in one direction, marbles in a ring of this shape would tend to spontaneously extract thermal energy to revolve, violating the second law of thermodynamics). Now if we tilt the whole profile towards the right, as shown in Figure 2, it is quite clear that both these cases will become biased towards B.
Now consider the game in which we alternate the two profiles while judiciously choosing the time between alternating from one profile to the other.
Figure 2
When we leave a few marbles on the first profile at point E, they distribute themselves on the plane showing preferential movements towards point B. However, if we apply the second profile when some of the marbles have crossed the point C, but none have crossed point D, we will end up having most marbles back at point E (where we started from initially) but some also in the valley towards point A given sufficient time for the marbles to roll to the valley. Then we again apply the first profile and repeat the steps (points C, D and E now shifted one step to refer to the final valley closest to A). If no marbles cross point C before the first marble crosses point D, we must apply the second profile shortly before the first marble crosses point D, to start over.
It easily follows that eventually we will have marbles at point A, but none at point B. Hence for a problem defined with having marbles at point A being a win and having marbles at point B a loss, we clearly win by playing two losing games.
### The coin-tossing example
A second example of Parrondo's paradox is drawn from the field of gambling. Consider playing two games, Game A and Game B with the following rules. For convenience, define $C_t$ to be our capital at time t, immediately before we play a game.
1. Winning a game earns us $1 and losing requires us to surrender $1. It follows that $C_{t+1} = C_t +1$ if we win at step t and $C_{t+1} = C_t -1$ if we lose at step t.
2. In Game A, we toss a biased coin, Coin 1, with probability of winning $P_1=(1/2)-\epsilon$. If $\epsilon > 0$, this is clearly a losing game in the long run.
3. In Game B, we first determine if our capital is a multiple of some integer $M$. If it is, we toss a biased coin, Coin 2, with probability of winning $P_2=(1/10)-\epsilon$. If it is not, we toss another biased coin, Coin 3, with probability of winning $P_3=(3/4)-\epsilon$. The modulo-$M$ rule provides the periodicity, as in the ratchet teeth.
It is clear that by playing Game A, we will almost surely lose in the long run. Harmer and Abbott[2] show via simulation that if $M=3$ and $\epsilon = 0.005,$ Game B is an almost surely losing game as well. In fact, Game B is a Markov chain, and an analysis of its state transition matrix (again with M=3) shows that the steady state probability of using coin 2 is 0.3836, and that of using coin 3 is 0.6164.[3] As coin 2 is selected nearly 40% of the time, it has a disproportionate influence on the payoff from Game B, and results in it being a losing game.
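The quoted steady-state probabilities can be checked with a few lines of code. The sketch below is purely illustrative (it is not taken from the cited references); it simply writes out the transition matrix of the capital-modulo-3 Markov chain implied by the rules above, with $\epsilon = 0.005$, and computes its stationary distribution:

```python
import numpy as np

eps = 0.005
p2 = 0.10 - eps   # coin 2, used when the capital is a multiple of M = 3
p3 = 0.75 - eps   # coin 3, used otherwise

# States are the capital modulo 3; a win moves the capital +1, a loss -1.
P = np.array([
    [0.0,    p2,     1 - p2],   # from state 0 (coin 2 is tossed)
    [1 - p3, 0.0,    p3    ],   # from state 1 (coin 3 is tossed)
    [p3,     1 - p3, 0.0   ],   # from state 2 (coin 3 is tossed)
])

# Stationary distribution = left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

print("P(coin 2 is used):", round(pi[0], 4))            # ~0.3836
print("P(coin 3 is used):", round(pi[1] + pi[2], 4))    # ~0.6164

# Expected gain per round of Game B in steady state; it comes out
# slightly negative, so Game B on its own is a losing game.
p_win = pi[0] * p2 + (pi[1] + pi[2]) * p3
print("E[gain per round]:", round(2 * p_win - 1, 4))
```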
However, when these two losing games are played in some alternating sequence - e.g. two games of A followed by two games of B (AABBAABB...), the combination of the two games is, paradoxically, a winning game. Not all alternating sequences of A and B result in winning games. For example, one game of A followed by one game of B (ABABAB...) is a losing game, while one game of A followed by two games of B (ABBABB...) is a winning game. This coin-tossing example has become the canonical illustration of Parrondo's paradox – two games, both losing when played individually, become a winning game when played in a particular alternating sequence. The apparent paradox has been explained using a number of sophisticated approaches, including Markov chains,[4] flashing ratchets,[5] Simulated Annealing[6] and information theory.[7] One way to explain the apparent paradox is as follows:
• While Game B is a losing game under the probability distribution that results for $C_t$ modulo $M$ when it is played individually ($C_t$ modulo $M$ is the remainder when $C_t$ is divided by $M$), it can be a winning game under other distributions, as there is at least one state in which its expectation is positive.
• As the distribution of outcomes of Game B depends on the player's capital, the two games cannot be independent. If they were, playing them in any sequence would lose as well.
The role of $M$ now comes into sharp focus. It serves solely to induce a dependence between Games A and B, so that a player is more likely to enter states in which Game B has a positive expectation, allowing it to overcome the losses from Game A. With this understanding, the paradox resolves itself: The individual games are losing only under a distribution that differs from that which is actually encountered when playing the compound game. In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence. A more detailed exposition of this point, along with several related examples, can be found in Philips and Feldman.[8]
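The effect can also be seen numerically. The following Monte Carlo sketch is an illustration only (the pattern strings, the number of rounds and the seeds are arbitrary choices; $\epsilon = 0.005$ and $M = 3$ are taken from the example above):

```python
import random
from statistics import mean

def parrondo(pattern, rounds=100_000, eps=0.005, M=3, seed=0):
    """Play the coin-tossing games in the cyclic order given by `pattern`
    (a string over {'A', 'B'}) and return the final capital."""
    rng = random.Random(seed)
    capital = 0
    for t in range(rounds):
        if pattern[t % len(pattern)] == 'A':
            p = 0.50 - eps            # coin 1
        elif capital % M == 0:
            p = 0.10 - eps            # coin 2
        else:
            p = 0.75 - eps            # coin 3
        capital += 1 if rng.random() < p else -1
    return capital

# Average the final capital over a few independent runs; single runs fluctuate.
for pattern in ('A', 'B', 'AB', 'ABB', 'AABB'):
    print(pattern, mean(parrondo(pattern, seed=s) for s in range(10)))
```

According to the description above, the 'A', 'B' and 'AB' patterns should drift downwards on average, while 'ABB' and 'AABB' should drift upwards.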
### A simplified example
For a simpler example of how and why the paradox works, again consider two games Game A and Game B, this time with the following rules:
1. In Game A, you simply lose $1 every time you play.
2. In Game B, you count how much money you have left. If it is an even number, you win $3. Otherwise you lose $5.

Say you begin with $100 in your pocket. If you start playing Game A exclusively, you will obviously lose all your money in 100 rounds. Similarly, if you decide to play Game B exclusively, you will also lose all your money in 100 rounds.
However, consider playing the games alternately, starting with Game B, followed by A, then by B, and so on (BABABA...). It should be easy to see that you will steadily earn a total of $2 for every two games.
Thus, even though each game is a losing proposition if played alone, because the results of Game B are affected by Game A, the sequence in which the games are played can affect how often Game B earns you money, and subsequently the result is different from the case where either game is played by itself.
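The bookkeeping can be made explicit with a short, purely illustrative snippet (the starting capital of $100 and the BABAB... order follow the description above):

```python
capital = 100
history = []
for step in range(10):
    if step % 2 == 0:                 # Game B: +$3 if capital is even, else -$5
        capital += 3 if capital % 2 == 0 else -5
    else:                             # Game A: always lose $1
        capital -= 1
    history.append(capital)
print(history)   # [103, 102, 105, 104, 107, 106, 109, 108, 111, 110]
```

Each B–A pair of games ends $2 higher than the previous pair, even though neither game is winning on its own.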
## Application
Parrondo's paradox is used extensively in game theory, and its applications in engineering, population dynamics,[9] financial risk, etc., are also being looked into as demonstrated by the reading lists below. Parrondo's games are of little practical use such as for investing in stock markets[10] as the original games require the payoff from at least one of the interacting games to depend on the player's capital. However, the games need not be restricted to their original form and work continues in generalizing the phenomenon. Similarities to volatility pumping and the two-envelope problem[11] have been pointed out. Simple finance textbook models of security returns have been used to prove that individual investments with negative median long-term returns may be easily combined into diversified portfolios with positive median long-term returns.[12] Similarly, a model that is often used to illustrate optimal betting rules has been used to prove that splitting bets between multiple games can turn a negative median long-term return into a positive one.[13]
## Name
In the early literature on Parrondo's paradox, it was debated whether the word 'paradox' is an appropriate description given that the Parrondo effect can be understood in mathematical terms. The 'paradoxical' effect can be mathematically explained in terms of a convex linear combination.
Parrondo's paradox does not seem that paradoxical if one notes that it is actually a combination of three simple games: two of which have losing probabilities and one of which has a high probability of winning. To suggest that one can create a winning strategy with three such games is neither counterintuitive nor paradoxical.
## References
1. ^ Shu, Jian-Jun; Wang, Q.-W. (2014). "Beyond Parrondo’s paradox". Scientific Reports 4 (4244): 1–9. doi:10.1038/srep04244.
2. ^ G. P. Harmer and D. Abbott, "Losing strategies can win by Parrondo's paradox", Nature 402 (1999), 864
3. ^ D. Minor, "Parrondo's Paradox - Hope for Losers!", The College Mathematics Journal 34(1) (2003) 15-20
4. ^ G. P. Harmer and D. Abbott, "Parrondo's paradox", Statistical Science 14 (1999) 206-213
5. ^ G. P. Harmer, D. Abbott, P. G. Taylor, and J. M. R. Parrondo, in Proc. 2nd Int. Conf. Unsolved Problems of Noise and Fluctuations, D. Abbott, and L. B. Kish, eds., American Institute of Physics, 2000
6. ^ G. P. Harmer, D. Abbott, and P. G. Taylor, The Paradox of Parrondo's games, Proc. Royal Society of London A 456 (2000), 1-13
7. ^ G. P. Harmer, D. Abbott, P. G. Taylor, C. E. M. Pearce and J. M. R. Parrondo, Information entropy and Parrondo's discrete-time ratchet, in Proc. Stochastic and Chaotic Dynamics in the Lakes, Ambleside, U.K., P. V. E. McClintock, ed., American Institute of Physics, 2000
8. ^ Thomas K. Philips and Andrew B. Feldman, Parrondo's Paradox is not Paradoxical, Social Science Research Network (SSRN) Working Papers, August 2004
9. ^ V. A. A. Jansen and J. Yoshimura "Populations can persist in an environment consisting of sink habitats only". Proceedings of the National Academy of Sciences USA, 95(1998), 3696-3698 .
10. ^ R. Iyengar and R. Kohli, "Why Parrondo's paradox is irrelevant for utility theory, stock buying, and the emergence of life," Complexity, 9(1), pp. 23-27, 2004
11. ^
12. ^ M. Stutzer, The Paradox of Diversification, The Journal of Investing, Vol. 19, No.1, 2010.
13. ^ M. Stutzer, "A Simple Parrondo Paradox", Mathematical Scientist, V.35, 2010. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5977214574813843, "perplexity": 1054.3846145722043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00238-ip-10-33-131-23.ec2.internal.warc.gz"} |
http://www.maa.org/programs/faculty-and-departments/course-communities/browse?term_node_tid_depth=40572&page=12 | # Browse Course Communities
Displaying 121 - 130 of 235
This is one in a series of lessons on probability from the Math Goodies site.
This is a short article on Laplace's rule of succession. The rule is derived in the context of a ball and urn model.
This is a short article on conditional independence although the term conditional recurrence is used as the title.
This is a short article on the principle of proportionality.
This is an HTML page that is a part of a larger exposition on probability. Bayes' theorem is derived, first for two events and then for a partition of the sample space into $$n$$ events.
This is an html page that is a part of a larger exposition on probability.
This applet shows a discrete sample space as a rectangle with a grid of $$N_x$$ by $$N_y$$ dots. The underlying probability space is the uniform distribution on this finite set of points.
Answers but no solutions are given for user input.
Overview of Monty Hall problem including several alternate Host and Player strategies. Links to interactive game are embedded in text.
Extensive discussion of problem, solution, history, variants and potential host behaviors . | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6046034693717957, "perplexity": 548.1811114431288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535922089.6/warc/CC-MAIN-20140901014522-00239-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://forum.knime.com/t/meaning-of-distance-matrix-output/1418 | # meaning of distance matrix output
hello!
I'm using the Distance Matrix node in my workflow. I calculate fingerprints and then I use the Tanimoto function of the Distance Matrix node to calculate a distance matrix of the fingerprints.
Now the distances are listed in the output column, but I don't understand between which row values each distance is meant. OK, in the first row there is no distance because it compares the first entry with itself. In the second row there is the distance between the first and the second entry. OK, but what is the first entry of the third row? Is it the distance between molecule 3 and 1 or between 3 and 2? And what is then the second entry of the third row (the distance between which row entries)? And so on...
It is a lower triangular distance matrix.
e.g. in DataRow i at position j you find the distance between the molecule in Row i and the molecule in Row j (only for j < i)
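To make that layout concrete, here is a small plain-Python illustration (this is not the KNIME node itself; the three toy fingerprints are invented, and each fingerprint is represented simply as the set of its 'on' bit positions):

```python
def tanimoto_distance(a, b):
    """a, b: sets of 'on' bit positions of two fingerprints."""
    common = len(a & b)
    union = len(a) + len(b) - common
    return 1.0 - (common / union if union else 1.0)

fingerprints = [{0, 2, 5}, {0, 2, 7}, {1, 2, 5, 7}]   # molecules in Row 1, 2, 3

# Lower triangular layout: Row i holds the distances to Rows 1 .. i-1.
for i, fp in enumerate(fingerprints):
    row = [round(tanimoto_distance(fp, fingerprints[j]), 3) for j in range(i)]
    print(f"Row {i + 1}: {row}")
# Row 1: []            (nothing to compare with)
# Row 2: [d(2,1)]
# Row 3: [d(3,1), d(3,2)]
```

So the first entry of the third row is the distance between molecule 3 and molecule 1, and the second entry is the distance between molecule 3 and molecule 2.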
Hi
If you don't want a matrix of all the molecules being compared with one another and simply want to get a Tanimoto similarity between one molecule (or a set of molecules) and a list of molecules, then you can use either the Indigo Fingerprint Similarity or the Erlwood Fingerprint Similarity.
If you are using more than one molecule to compare against and want an overall average similarity, then simply tick Multifusion query in the Erlwood node, or average aggregation type in the Indigo node.
Simon. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114053010940552, "perplexity": 782.3881798241897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104054564.59/warc/CC-MAIN-20220702101738-20220702131738-00660.warc.gz"} |
https://www.zigya.com/study/book?class=11&board=ahsec&subject=Biology&book=Biology&chapter=Chemical+Coordination+and+Integration&q_type=&q_topic=Human+Endocrine+System&q_category=&question_id=BIEN11007626 | ## Chapter Chosen
Chemical Coordination and Integration
Briefly explain the structure of adrenal gland and hormones secreted by its different parts.
The adrenal glands are present in pairs, one at the anterior part of each kidney. The adrenal gland is made of two kinds of tissues, namely:
1. Adrenal cortex : It is the outer part of the adrenal gland. It is further differentiated into three parts :
(a) Zona glomerulosa (Outer zone)
(b) Zona fasciculata (Middle zone)
(c) Zona reticularis (inner zone, lying adjacent to the adrenal medulla).
Hormones secreted - The adrenal cortex secretes the following three steroid hormones:
(i) Aldosterone or mineralocorticoids.
(ii) Cortisol or glucocorticoids.
(iii) Sexcorticoids or androgens.
2. Adrenal medulla : It lies inner to cortex.
Hormones secreted- It secretes two hormones namely:
(a) Epinephrine
(b) Norepinephrine.
Why is the Parathyroid hormone called hypercalcemic hormone?
PTH or parathyroid hormone is also called hypercalcemic hormone as it increases the calcium level in the blood.
Name the largest endocrine gland.
Thyroid gland.
Name the endocrine gland which is H shaped.
Thyroid gland.
Name the cells that the thyroid gland is composed of.
The thyroid gland is composed of the following :
i. Follicles
ii. Stromal tissue
What is goitre?
Goitre is the condition in which the thyroid gland is enlarged and the neck portion becomes swollen. It is due to the deficiency of iodine in our diet, which results in reduced production and secretion of the thyroid hormones, that is, hypothyroidism.
467 Views | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6049273610115051, "perplexity": 20773.566164931966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584431529.98/warc/CC-MAIN-20190123234228-20190124020228-00038.warc.gz"} |
https://www.encyclopediaofmath.org/index.php/Complexification_of_a_Lie_group | # Complexification of a Lie group
2010 Mathematics Subject Classification: Primary: 22E [MSN][ZBL]
The complexification of a Lie group $G$ over $\R$ is a complex Lie group $G_\C$ containing $G$ as a real Lie subgroup such that the Lie algebra $\def\fg{ {\mathfrak g}}\fg$ of $G$ is a real form of the Lie algebra $\fg_\C$ of $G_\C$ (see Complexification of a Lie algebra). One then says that the group $G$ is a real form of the Lie group $G_\C$. For example, the group $\def\U{ {\rm U}}\U(n)$ of all unitary matrices of order $n$ is a real form of the group $\def\GL{ {\rm GL}}\GL(n,\C)$ of all non-singular matrices of order $n$ with complex entries.
There is a one-to-one correspondence between the complex-analytic linear representations of a connected simply-connected complex Lie group $G_\C$ and the real-analytic representations of its connected real form $G$, under which irreducible representations correspond to each other. This correspondence is set up in the following way: If $\rho$ is an (irreducible) finite-dimensional complex-analytic representation of $G_\C$, then the restriction of $\rho$ to $G$ is an (irreducible) real-analytic representation of $G$.
Not every real Lie group has a complexification. In particular, a connected semi-simple Lie group $G$ has a complexification if and only if $G$ is linear, that is, isomorphic to a subgroup of some group $\GL(n,\C)$. For example, the universal covering of the group of real second-order matrices with determinant 1 does not have a complexification. On the other hand, every compact Lie group has a complexification.
The non-existence of complexifications for certain real Lie groups inspired the introduction of the more general notion of a universal complexification $(\tilde G,\tau)$ of a real Lie group $G$. Here $\tilde G$ is a complex Lie group and $\tau : G\to \tilde G$ is a real-analytic homomorphism such that for every complex Lie group $H$ and every real-analytic homomorphism $\alpha : G\to H$ there exists a unique complex-analytic homomorphism $\beta : \tilde G\to H$ such that $\alpha=\beta\circ \tau$. The universal complexification of a Lie group always exists and is uniquely defined [Bo]. Uniqueness means that if $(\tilde G',\tau')$ is another universal complexification of $G$, then there is a natural isomorphism $\lambda : \tilde G\to \tilde G'$ such that $\lambda\circ\tau = \tau'$. In general, $\dim_\C \tilde G \le \dim_\R G$, but if $G$ is simply connected, then $\dim_\C \tilde G = \dim_\R G$ and the kernel of $\tau$ is discrete.
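As a concrete illustration of the universal property (an example added here, not part of the original article): for the compact group $G=\U(1)$ the universal complexification is $\tilde G=\GL(1,\C)=\C^*$ with $\tau$ the inclusion, and every real-analytic homomorphism $\alpha : \U(1)\to H$ into a complex Lie group factors uniquely as
$$\alpha \;=\; \beta\circ\tau, \qquad \beta : \C^*\to H \ \text{complex-analytic}.$$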
https://cn.vosi.biz/bbs/getmsg.aspx/bbsID110/msg_id1146953 | Forum Index \ DriveHQ Customer Support Forum \
• vanoordt
• (12 posts)
Hi,
I have a serious problem with restore of encrypted files. The restore tab of the backup utility doesn't show files, even though it does show the used space. Also the browser shows the files, however, since they are encrypted, it makes no sense to download them.
I have this problem with 3 of my 13 tasks now. I have already recreated one of these tasks before because of this problem. The new tasks performed well for some time and then did the same.
What I see is this:
and when I double click on it:
(Hope these pictures show in the post.)
5/19/2007 3:32:06 AM
• DriveHQ Webmaster
• (1098 posts)
Subject: Re: Restore doesn't show files but shows space used, explorer shows the files
If you can see the files from Internet Explorer, then the files should be there. If you have backed up a folder, in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files.
If you are not sure about it, you can also use DriveHQ FileManager 3.8 to download the files. The first time when you download encrypted files, it will prompt you to input the encryption key.
5/19/2007 4:37:38 AM
• vanoordt
• (12 posts)
Subject: Re: Re: Restore doesn't show files but shows space used, explorer shows the file...
User: DriveHQ Webmaster - 5/19/2007 4:37:38 AM
If you can see the files from Internet Explorer, then the files should be there. If you have backed up a folder, in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files.
If you are not sure about it, you can also use DriveHQ FileManager 3.8 to download the files. The first time when you download encrypted files, it will prompt you to input the encryption key.
I understand that "in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files", however, if I double click it shows no files. Even though they are there as stated in my first post. I can send two screenshots to illustrate this.
The contents of other tasks, encrypted with the same key, does show, and thus can be downloaded.
Thanks for your attention to this problem.
5/19/2007 1:27:31 PM
• DriveHQ Webmaster
• (1098 posts)
Subject: Re: Re: Re: Restore doesn't show files but shows space used, explorer shows the ...
User: vanoordt - 5/19/2007 1:27:31 PM
User: DriveHQ Webmaster - 5/19/2007 4:37:38 AM
If you can see the files from Internet Explorer, then the files should be there. If you have backed up a folder, in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files.
If you are not sure about it, you can also use DriveHQ FileManager 3.8 to download the files. The first time when you download encrypted files, it will prompt you to input the encryption key.
I understand that "in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files", however, if I double click it shows no files. Even though they are there as stated in my first post. I can send two screenshots to illustrate this.
The contents of other tasks, encrypted with the same key, does show, and thus can be downloaded.
Thanks for your attention to this problem.
Please double check the tasks were backed up properly. You can manually restart the task as needed. You can delete an existing task without deleting the backup sets on server, and then recreate the same backup task to fix any possible problems.
5/19/2007 3:49:08 PM
• vanoordt
• (12 posts)
Subject: Re: Re: Re: Re: Restore doesn't show files but shows space used, explorer shows ...
User: DriveHQ Webmaster - 5/19/2007 3:49:08 PM
User: vanoordt - 5/19/2007 1:27:31 PM
User: DriveHQ Webmaster - 5/19/2007 4:37:38 AM
If you can see the files from Internet Explorer, then the files should be there. If you have backed up a folder, in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files.
If you are not sure about it, you can also use DriveHQ FileManager 3.8 to download the files. The first time when you download encrypted files, it will prompt you to input the encryption key.
I understand that "in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files", however, if I double click it shows no files. Even though they are there as stated in my first post. I can send two screenshots to illustrate this.
The contents of other tasks, encrypted with the same key, does show, and thus can be downloaded.
Thanks for your attention to this problem.
Please double check the tasks were backed up properly. You can manually restart the task as needed. You can delete an existing task without deleting the backup sets on server, and then recreate the same backup task to fix any possible problems.
The files are properly backed up. I can use Filemanager to download them, however, since they are encrypted I can't use them.
5/21/2007 1:08:19 AM
• DriveHQ Webmaster
• (1098 posts)
Subject: Re: Re: Re: Re: Re: Restore doesn't show files but shows space used, explorer sh...
User: vanoordt - 5/21/2007 1:08:19 AM
User: DriveHQ Webmaster - 5/19/2007 3:49:08 PM
User: vanoordt - 5/19/2007 1:27:31 PM
User: DriveHQ Webmaster - 5/19/2007 4:37:38 AM
If you can see the files from Internet Explorer, then the files should be there. If you have backed up a folder, in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files.
If you are not sure about it, you can also use DriveHQ FileManager 3.8 to download the files. The first time when you download encrypted files, it will prompt you to input the encryption key.
I understand that "in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files", however, if I double click it shows no files. Even though they are there as stated in my first post. I can send two screenshots to illustrate this.
The contents of other tasks, encrypted with the same key, does show, and thus can be downloaded.
Thanks for your attention to this problem.
Please double check the tasks were backed up properly. You can manually restart the task as needed. You can delete an existing task without deleting the backup sets on server, and then recreate the same backup task to fix any possible problems. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8555731177330017, "perplexity": 2105.1123003987555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00517.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=2021_AMC_12A_Problems/Problem_22&diff=next&oldid=146099 | # Difference between revisions of "2021 AMC 12A Problems/Problem 22"
## Problem
Suppose that the roots of the polynomial are and , where angles are in radians. What is ?
## Solution
Part 1: solving for c
Notice that
is the negation of the product of roots by Vieta's formulas
Multiply by
Then use sine addition formula backwards:
Part 2: starting to solve for b
is the sum of roots two at a time by Vieta's
We know that
By plugging all the parts in we get:
Which ends up being:
Which is shown in the next part to equal , so
Part 3: solving for a and b as the sum of roots
is the negation of the sum of roots
The real values of the 7th roots of unity are: and they sum to .
If we subtract 1, and condense identical terms, we get:
Therefore, we have
Finally multiply or .
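The displayed formulas above were images and did not survive extraction; as a sketch of how the three parts combine — assuming, as in the 2021 AMC 12A, that the polynomial is $x^3+ax^2+bx+c$ with roots $\cos\tfrac{2\pi}{7}$, $\cos\tfrac{4\pi}{7}$ and $\cos\tfrac{6\pi}{7}$ — one gets
$$c=-\cos\tfrac{2\pi}{7}\cos\tfrac{4\pi}{7}\cos\tfrac{6\pi}{7}=-\tfrac18,\qquad a=-\Bigl(\cos\tfrac{2\pi}{7}+\cos\tfrac{4\pi}{7}+\cos\tfrac{6\pi}{7}\Bigr)=\tfrac12,\qquad b=-\tfrac12,$$
so that $abc=\tfrac12\cdot\bigl(-\tfrac12\bigr)\cdot\bigl(-\tfrac18\bigr)=\tfrac{1}{32}$.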
~Tucker
~ pi_is_3.14
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
https://2022.help.altair.com/2022.1/activate/business/en_us/topics/reference/oml_language/hwscpBdeAPIs_oml/bdeGetJacobianMethod.htm | # bdeGetJacobianMethod
Takes model and returns the Jacobian method being used.
## Syntax
jacobianMethod = bdeGetJacobianMethod(model)
## Inputs
model
Model used to get the Jacobian method from.
Type: diagram
## Outputs
jacobianMethod
The Jacobian method of the model. Returns 1 for analytical and 2 for numerical.
Type: string
## Examples
Get the Jacobian method being used:
model = bdeGetCurrentModel();
jacobianMethod = bdeGetJacobianMethod(model)
jacobianMethod = 1 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1576128751039505, "perplexity": 14333.239421859551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334855.91/warc/CC-MAIN-20220926082131-20220926112131-00430.warc.gz"} |
https://www.lptmc.jussieu.fr/users/guillot | ## de la Matière Condensée
### Publications
1. Dielectric behaviour of polar liquids in the far infrared spectral range : a theory.
B. Guillot, and S. Bratos, Mol. Phys. 33, p.593 (1977).
2. Theoretical analysis of dielectric properties of polar liquids in the far infrared spectral range.
B. Guillot, and S. Bratos, Phys. Rev. A 16, p.424 (1977).
3. Comparaison des fonctions de correlation mono et multimoléculaires des liquides et des solutions.
B. Guillot, et S. Bratos, Mol. Phys. 37, p.991 (1979).
4. Theoretical study of spectra of depolarized light scattered from dense rare-gas fluids.
B. Guillot, S. Bratos and G. Birnbaum, Phys. Rev. A 22, p.2230 (1980).
5. Theory of collision induced absorption in dense rare gas mixture.
B. Guillot, S. Bratos et G. Birbaum. Mol. Phys. 44, p.1021 (1981).
6. Theoretical investigation and experimental detection of rattling motions in atomic and molecular fluids,
S. Bratos and. B Guillot. J. Mol. Struc. 84, p.195 (1982).
7. Theory of collision induced line shapes : absorption and light scattering at low density.
G. Birnbaum, B. Guillot and S. Bratos, Advances in Chemical Physics, Vol. 51, John Wiley Ed. (1982), p.49-113.
8. Theoretical interpretation of the far infrared absorption spectrum of dense nitrogen.
B. Guillot and G. Birnbaum, J. Chem. Phys. 29, 686 (1983)
9. Theoretical study of the far infrared absorption spectrum in molecular liquids.
B. Guillot and G. Birnbaum, in Proceedings of the Nato Advanced Research Workshop Phenomena induced by intermolecular interactions, Ed. G. Birnbaum, Plenum Press,p. 437 (1985).
10. Theory of collision induced light scattering and absorption in dense rare gas fluids.
S. Bratos, B. Guillot and G. Birnbaum, ibid ref.10, p.363.
11. Investigation of the chemical potential by molecular dynamics simulation.
B. Guillot and Y. Guissani. Mol. Phys. 54, p.455 (1985).
12. Chemical potential of triatomic polar liquids: a computer simulation study .
Y. Guissani, B. Guillot and F. Sokolic, Chem. Phys. 96, p.271 (1985).
13. Molecular dynamics simulations of thermodynamic and structural properties of liquid .
F. Sokolic, Y. Guissani and B. Guillot, Mol. Phys. 56, p.239 (1985).
14. Computer simulation of liquid sulphur dioxide : comparaison of model potentials.
F. Sokolic, Y. Guissani and B. Guillot, J. Phys. Chem. 89, p.3023 (1985).
15. Density effects and relative diffusion in the far infrared absorption spectrum of compressed liquid nitrogen.
Ph. Marteau , J. Obriot, F. Fondere and B. Guillot, Mol. Phys. 59 (1986).
16. Theoretical investigation of the dip in the far infrared absorption spectrum of dense rare gas mixture.
B. Guillot, J. Chem. Phys. 87, p.1952 (1987).
17. The statistical theory of the ionic equilibrium of water. Basic principles and practical realization .
S. Bratos, Y. Guissani and B. Guillot, in Chemical reactivity in Liquids, ed. by M. Moreau et P. Turq, Plenum Press, p. 5850 (1988).
18. The statistical mechanics of the ionic equilibrium of water: A computer simulation study .
Y. Guissani, B. Guillot and S. Bratos, J. Chem. Phys. 88, p.5850 (1988).
19. The statistical theory of the ionic equilibrium of water. Theory and experiment ,
B. Guillot, Y. Guissani and S. Bratos, in Synergetics, order and Chaos, ed. by M.G. Velarde, World Scientific, p. 473, (1988).
20. Theoretical study of the three-body absorption spectrum in pure rare gas fluids,
B. Guillot, R.D. Mountain and G. Birnbaum, Mol. Phys. 64, p.747 (1988).
21. Far infrared absorption in nitrogen-rare gas compressed mixtures. An experimental and theoretical survey.
B. Guillot, Ph. Marteau and J. Obriot, Mol. Phys. 65, p.765 (1988).
22. Triplet dipoles in the absorption spectra of dense rare gas mixtures. (1) Short range interactions.
B. Guillot, R.D. Mountain and G. Birnbaum, J. Chem. Phys. 90, p.650 (1989).
23. Dipole moments in rare gas interactions.
M. Krauss and B. Guillot, Chem. Phys. Lett. 158, p.142 (1989).
24. Interaction induced absorption in simple to complex liquids,
B. Guillot and G. Birnbaum, in Reactive and Flexible Molecules in Liquids, Ed. Th. Dorfmüller, Kluwer Academic Publishers, p.1 (1989).
25. Structures and energetics of ion-solvent microclusters (n=1, .,8) :, and with, , and
B. Guillot, Y. Guissani, D. Borgis and S. Bratos, in Reactive and Flexible Molecules in Liquids, Ed. Th. Dorfmüller, Kluwer Academic Publishers, p.47 (1989).
26. Etude par Monte Carlo de la solvatation en phase gazeuse des ions halogénures par des molécules protiques et aprotiques ,
B. Guillot, Y. Guissani, D. Borgis et S. Bratos, dans Méthodes Théoriques en Chimie, Journal de Chimie Physique, n°/spécial SFC88, p.977 (1989).
27. Triplet dipoles in the absorption spectra of dense rare gas fluids. (II) Long range interactions.
B. Guillot, J. Chem. Phys. 91, p .3456 (1989).
28. The statistical mechanics of the ionic equilibrium of water. The force law.
S . Bratos, B. Guillot and Y. Guissani, in Static and Dynamic Properties of Liquids, ed. by M. Davidovic and A.K. Soper, Springer, Vol. 40, p.48 (1989)
29. Investigation of ionic solvation dynamics by far infrared spectroscopy,
B. Guillot, Ph. Marteau and J. Obriot, in Modeling of Molecular Structures and Properties, Ed. J.L. Rivail, Elsevier, p. 363 (1990).
30. Investigation of very fast motions in electrolytes solutions by far infrared spectroscopy,
B. Guillot, Ph. Marteau and J. Obriot, J. Chem. Phys. 93, p.6148 (1990).
31. Computer simulation of chemical equilibria,
B. Guillot, Y. Guissani and S. Bratos, J. Phys.: Condensed Matter 2, supp.A, p.165 (1990).
32. Line shapes in dense fluids ; the problem, some answers, future directions,
B. Guillot in SpectralLine Shapes, Vol 6, AIP Series, p.453 (1990).
33. A molecular dynamics study of the far infrared spectrum of liquid water,
B. Guillot, J. Chem. Phys. 95, p.1543 (1991).
34. A computer simulation study of hydrophobic hydration of rare gases and of methane. (I) Thermodynamic and structural properties.
B. Guillot, Y. Guissani, and S. Bratos, J. Chem. Phys. 95, p.3643 (1991).
35. Investigation of charge-transfer complexes by computer simulation. (I) Iodine in benzene solution.
Y. Danten, B. Guillot and Y. Guissani. J. Chem. Phys. 96, p.3783 (1992).
36. Investigation of charge-transfer complexes by computer simulation. (II) Iodine in pyridine solution,
Y. Danten, B. Guillot, and Y. Guissani., J. Chem. Phys. 96, p.3795 (1992).
37. Structure of liquid cyclopropane,
M.I. Cabaco, M. Besnard, M.C. Bellissent-Funel, Y. Guissani, and B. Guillot., in Molecular Liquids : New perspectives in Physics and Chemistry, J. Teixeira-Dias ed., Kluwer Academic Publishers, NATO ASI Series C. Vol 379, p. 513 (1992).
38. Water Solutions of non polar gases: a computer simulation.
B. Guillot, Y. Guissani and S. Bratos, Zhurnal Fizicheskoi Khimii 67, p.30 (1993"); Russian J. of Phys. Chem. 67, p.25 (1993).
39. A computer simulation study of the liquid-vapor coexistence curve of water.
Y. Guissani, and B. Guillot., J. chem. Phys. 98, p.8221 (1993).
40. A computer simulation study of the temperature dependence of the hydrophobic hydration.
B. Guillot, and Y. Guissani., J. Chem. Phys. 99, p.8075 (1993)
41. Temperature dependence of the solubility of non polar gases in liquids.
B. Guillot and Y. Guissani, Mol. Phys 79, p.53 (1993)
42. Coexisting phases and criticality in NaCl by computer simulation,
Y. Guissani, and B. Guillot, J. Chem. Phys 101, p.490 (1994).
43. A far infrared study of water diluted in hydrophobic solvents,
T. Tassaing, Y. Danten, M. Besnard, E. Zoidis, J. Yarwood, Y. Guissani, and B. Guillot., Mol. Phys 84, p.769 (1995).
44. Cancellation effects in collision-induced phenomena.
G. Birnbaum, and B. Guillot, in Collision and Interaction-Induced Spectroscopy, G.C. Tabisz and M.N. Neuman eds, Kluwer Academic Publisher, p1, (1995).
45. Simulation of the far infrared spectrum of liquid water and steam along the coexistence curve.
B. Guillot and Y. Guissani, in Collision and Interaction-Induced Spectroscopy , G.C. Tabisz and M.N. Neuman eds, Kluwer Academic Publisher, p.129, (1995).
46. Thermodynamics and structure of hydrophobic hydration by computer simulation,
B. Guillot and Y. Guissani, in proceedings of the 12th International Conference on the Properties of Water and Steam, H.J. White, Jr. J.V. Sengers, D.B. Neumann and J.C. Bellows eds, Begell House, p. 269 (1995).
47. Towards a theory of coexistence and critically in real molten salts.
B. Guillot, and Y. Guissani, Mol. Phys. 87, p.37 (1996).
48. A numerical investigation of the liquid-vapor coexistence curve of silica.
Y. Guissani and B. Guillot, J. Chem. Phys. 104, p.7633 (1996).
49. The solubility of rare gases in fused silica : A numerical evaluation.
B. Guillot and Y. Guissani, J. Chem. Phys. 105, 255 (1996).
50. Interaction-induced dipoles and polarizabilities in diverse phenomena.
G. Birnbaum, and B. Guillot, in Spectral Line Shapes, AIP Press, Vol 9, p.1 (1997).
51. Evidence of dimer formation in neat liquid 1,3,5-trifluorobenzene.
M.I.Cabaço, Y. Danten, M. Besnard, Y. Guissani, and B. Guillot, Chem. Phys. Lett .262, p.120 (1996).
52. Boson peak and high frequency modes in amorphous silica.
B. Guillot and Y. Guissani, Phys. Rev. Lett.78, p.2401 (1997).
53. Neutron diffraction and molecular dynamics investigations of the temperature dependence of the local ordering in liquid cyclopropane ,
M.I. Cabaço, Y. Danten, M. Besnard, Y. Guissani and B. Guillot, Mol Phys.90, p.817 (1997).
54. Structural studies of liquid cyclopropane : from room temperature up to supercritical conditions ,
M.I. Cabaço, Y. Danten, M. Besnard, M.C. Bellissent-Funel, Y. Guissani and B. Guillot, Mol. Phys.90, p.829 (1997).
55. Neutron diffraction and molecular dynamics investigation of the structural evolution of liquid cyclopropane from the melting point up to the supercritical domain,
M. Cabaço, Y. Danten, M. Besnard, M. Cl. Bellissent-Funel, Y. Guissani and B. Guillot, in proceedings of IAEA on Neutron Beam Research (Lisbonne, 1997), p. 98.
56. The structure of supercritical heavy water as studied by neutron diffraction,
M.C. Bellissent-Funel, T. Tassaing, H. Zhao, D. Beysens, Y. Guissani and B. Guillot, J. Chem. Phys.107, p.2942 (1997).
57. A molecular dynamics study of the vibrational spectra of silica polyamorphs,
B. Guillot and Y. Guissani, Mol. Sim.20, p.41 (1997).
58. Neutron diffraction and molecular dynamics study of liquid benzene and its fluorinated derivatives as a function of temperature.
M.I, Cabaço, Y. Danten, M . Besnard, Y. Guissani and B. Guillot J. Phys. Chem. B.101, p.6977 (1997).
59. The partial pair correlation functions of dense supercritical water,
T. Tassaing, M.C. Bellissent-Funel, B. Guillot and Y. Guissani, Europhysics Lett.42, p.265 (1998).
60. Transport of rare gases and molecular water in fused silica by molecular dynamics simulation,
Y. Guissani and B. Guillot, Mol. Phys.95, p.151 (1998).
61. Quantum effects in simulated water by the Feynman-Hibbs approach,
B. Guillot and Y. Guissani, J. Chem. Phys 108, p.1062 (1998).
62. Hydrogen-bonding in light and heavy water under normal and extreme conditions,
B. Guillot and Y. Guissani, Fluid Phase Equilibria, 150-151, p.19 (1998).
63. Structural investigations of liquid binary mixtures: neutron diffraction and molecular dynamics studies of benzene, hexafluorobenzene and 1,3,5-trifluorobenzene ,
M.I. Cabaço, Y. Danten, M. Besnard, Y. Guissani and B. Guillot, J. Phys. Chem.102, p.10712 (1998).
64. An Accurate Pair Potential for Simulated Water, by
B. Guillot, and Y. Guissani, in Steam, Water, and Hydrothermal Systems : Physics and Chemistry Meeting the Needs of Industry, Proceedings of the 13th International Conference on the Porperties of Water and Steam, Ed. P.R. Tremaine, P.G.
Hill, D.E. Irish, and P.V. Balakrishnan, (NRC Press, Ottawa, 2000).
65. Computer Simulation of Phase Equilibria in Molten Salts: The Case of NH4Cl ,
B. Guillot, and Y. Guissani, ibid ref. 65.
66. How to build a better potential for water,
B. Guillot and Y. Guissani, J. Chem.Phys. 114, p.6720 (2001).
67. Simulation of the liquid-liquid coexistence curve of the tetrahydrofuran + water mixture in the Gibbs ensemble,
I. Brovchenko and B. Guillot, Fluid Phase Equilibria 45/46, p.1 (2001).
68. Chemical reactivity and phase behaviour of NH4Cl by molecular dynamics simulations.I. Solid-solid and solid-fluid equilibria,
B. Guillot and Y. Guissani, J.Chem.Phys.116, p.2047 (2002).
69. Chemical reactivity and phase behaviour of NH4Cl by molecular dynamics simulations. II. The liquid-vapour coexistence curve,
Y. Guissani and B. Guillot, J. Chem. Phys.116, p.2058, (2002)
70. A reappraisal of what we have learnt during three decades of computer simulations on water,
B. Guillot, J. Mol. Liq. 101/1-3, p.219 (2002)
71. Percolation of water in aqueous solution and liquid-liquid immiscibility,
A. Oleinikova, I. Brovchenko, A. Geiger and B. Guillot, J. Chem. Phys. 117, p.3296 (2002)
72. Polyamorphism in low temperature water: a simulation study,
B. Guillot and Y. Guissani, J. Chem. Phys. 119, p.11740 (2003)
73. Investigation of vapour-deposited amorphous ice and irradiated ice by molecular dynamics simulation,
Y. Guissani and B. Guillot, J. Chem. Phys. 120, p.4366 (2004)
74. Breaking of Henry's law for noble gas and solubility in silicate melt under pressure,
Ph. Sarda and B. Guillot, Nature 436, p.95 (2005)
75. The effect of compression on noble gas solubility in silicate melts and consequences for degassing at mid-ocean ridges,
B. Guillot and Ph. Sarda, Geochimica et Cosmochimica Acta 70, p.1215 (2006)
76. Simulated structural and thermal properties of glassy and liquid germania,
M. Micoulaut, Y. Guissani and B. Guillot, Phys. Rev. E 73, 031504 (2006) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883836567401886, "perplexity": 22095.24806444546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141183514.25/warc/CC-MAIN-20201125154647-20201125184647-00101.warc.gz"} |
https://itectec.com/ubuntu/ubuntu-networkmanager-dispatcher-d-pre-down-d-is-not-executed-on-shutdown-anymore/ | # Ubuntu – Networkmanager: dispatcher.d/pre-down.d is not executed on shutdown anymore
network-managershutdown
I am using this (https://askubuntu.com/a/674106/39966) solution to unmount a NFS on shutdown.
But now I discovered that in most cases, when I shut down the computer via the XFCE menu, the pre-down script is not executed (I can tell because a logger message it writes does not appear).
• Others were having the same problem.
It looks like there was a change in Network Manager so that it no longer closes the connection when Network Manager itself is shut down. I was able to add a systemd service that is executed when the network goes offline.
I created a file /etc/systemd/system/networkdown.service with the content:
[Unit]
Wants=network-online.target
After=network.target network-online.target
[Service]
Type=oneshot
ExecStart=/bin/true
ExecStop=/bin/umount /media/media
RemainAfterExit=yes
This seems to work. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6674270629882812, "perplexity": 4282.0282961140465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00480.warc.gz"} |
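One detail the post does not show: for the ExecStop= line to run at shutdown, the unit must already be active, so presumably it also needs an [Install] section and has to be enabled. A sketch under that assumption (standard systemd practice, not taken from the original answer):

[Install]
WantedBy=multi-user.target

and then

sudo systemctl daemon-reload
sudo systemctl enable --now networkdown.service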
https://www.physicsforums.com/threads/electron-vector-problem.42849/ | # Homework Help: Electron - Vector Problem
1. Sep 12, 2004
### 0aNoMaLi7
One part of this problem has me confused... I'd appreciate any guidance. I have parts (a) and (b), but (c) and (d) are TOTALLY losing me. I don't even know where to begin. THANKS
"An electron's position is given by r=3.00t i - 5.00t^2 j + 3.00 k, with t in seconds and r in meters"
(a) In unit-vector notation, what is the electron's velocity v(t)?
My answer: 3.00 i - 10.0t j+ 0.00 k
(b) What is v in unit-vector notation at t=6.00s?
My answer: 3.00 i - 60.0 j+ 0.00 k
(c) What is the magnitude of v at t = 6.00 s?
(d) What angle does v make with the positive direction of the x axis at t = 6.00 s?
Thank you.
2. Sep 12, 2004
### Tide
To find the magnitude of a vector just square each component, add them up and find the square root.
You can find the angle between two vectors using the "dot product:"
$$\vec A \cdot \vec B = A B \cos \phi$$
where A and B are the magnitudes of the vectors and $\phi$ is the angle between them.
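Worked out with the numbers from part (b) — added here as a check, not part of the original exchange:
$$|\vec v| = \sqrt{(3.00)^2 + (-60.0)^2}\ \text{m/s} \approx 60.1\ \text{m/s}, \qquad \theta = \tan^{-1}\!\left(\frac{-60.0}{3.00}\right) \approx -87.1^\circ,$$
i.e. the velocity points about $87^\circ$ below the positive x axis.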
3. Sep 12, 2004
### 0aNoMaLi7
thanks.... solved it :-) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8133678436279297, "perplexity": 1135.6473371542845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219495.97/warc/CC-MAIN-20180822045838-20180822065838-00590.warc.gz"} |
http://openstudy.com/updates/50e34d11e4b0e36e35142ce6 | ## Got Homework?
## sylinan: 2 over 3 next to x + 1 = 27
1. hba
$\frac{ 2 }{ 3}(x+1)=27$ Right ?
2. sylinan
@hba yes(:
3. hba
1) multiply both sides by 3
4. hba
@sylinan Start working :)
5. Loujoelou
so we have 2/3(x+1)=27. We multiply both sides by the reciprocal of 2/3, which is 3/2, so the fraction cancels out, and that means we have $3/2 \cdot 2/3(x+1)=27 \cdot 3/2$ $x+1=27 \cdot 3/2$ Now we multiply 27/1*3/2 and that gives us 81/2, and 81/2=40.5 $x+1=40.5$ Now we finally subtract 1 from both sides, which gives us $x=39.5$
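A quick check of the result (added here): $\frac{2}{3}(39.5+1)=\frac{2}{3}\cdot 40.5=27$, as required.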
6. sylinan
@Loujoelou thank you(:
https://physics.stackexchange.com/questions/346923/are-indices-conventionally-raised-inside-or-outside-of-partial-derivatives-in-ge | # Are indices conventionally raised inside or outside of partial derivatives in general relativity?
If $A_\mu$ is a one-form, then is there a widely accepted convention among physicists about whether the notation $$\partial_\mu A^\mu \tag{1}$$ means "the partial-derivative four-divergence of the four-vector $A^\mu$ corresponding to $A_\mu$", i.e. $$\partial_\mu (g^{\mu \nu} A_\nu),\tag{2}$$ or just $$g^{\mu \nu} \partial_\mu A_\nu~?\tag{3}$$
The former definition corresponds more naturally to our usual definition of the partial derivative, but has the unfortunate property that $\partial_\mu A^\mu \neq \partial^\mu A_\mu$. For higher partial derivatives, do we adopt the convention that all partial derivatives are taken before raising or lowering any indices, so that the the contractions are invariant under the interchange of which index is raised and which is lowered? Or are partial (as opposed to covariant) derivatives used rarely enough in GR that there's no need to adopt a general convention for how they work (after all, the quantity $\partial_\mu A^\mu$ is non-tensorial under either convention)?
(Please don't close this question as being about math rather than physics. This question is asking whether there is a notational convention accepted among physicists, and has nothing to do with math.)
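For concreteness (an illustration added here, not part of the original question): by the product rule the two candidate readings differ by a term involving derivatives of the metric components,
$$\partial_\mu (g^{\mu \nu} A_\nu) = g^{\mu \nu} \partial_\mu A_\nu + (\partial_\mu g^{\mu \nu}) A_\nu,$$
so definitions (2) and (3) agree precisely when the components $g^{\mu\nu}$ are constant (e.g. Cartesian coordinates on flat spacetime).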
• I don't know of any conventions and I don't think there are any, precisely for the reason you state. But if I had to guess, I'd say that most people would agree that the metric goes inside the derivative. Jul 21, 2017 at 21:51
• Perhaps the question could be better-posed/more interesting from the math pov if you replaced those partials by covariant derivatives, but with a connection that need not be metric compatible. Thoughts? Jul 21, 2017 at 22:10
• @AccidentalFourierTransform Partial derivatives are covariant derivatives with a connection that need not be metric compatible. Technically, any system of coordinates defines a connection in which parallel-transport is defined by simply keeping the partials with respect to each coordinate direction constant. That's just not usually a very physically useful connection (except in the case of Cartesian coordinates on flat spacetime). Jul 21, 2017 at 23:05
• @tparker good point. Jul 22, 2017 at 9:37
Most of the doubts in your questions can be solved if you avoid calling a vector (or a form) their coordinates.
$A_{\mu}$ is not a one-form: $A = A_{\mu}dx^{\mu}$ is.
$g^{\mu\nu}$ is not a tensor: $g= g^{\mu\nu}e_{\mu}\otimes e_{\nu}$ is.
As such one thing is just taking partial derivatives of some functions with respect to their variables, namely $$\sum_{\mu}\frac{\partial}{\partial x^{\mu}} A_{\mu}(x)$$ one other thing is the contraction of a tensor, namely making the tensor act on some dual basis, that is $$\sum_{\mu\nu\sigma}(g^{\mu\nu}e_{\mu}\otimes e_{\nu})(A_{\sigma}dx^{\sigma}) = \sum_{\mu\nu\sigma}(g^{\mu\nu}A_{\sigma})\, e_{\mu}\, e_{\nu}(dx^{\sigma})$$ The convention is that you just have to carry the bases things act upon and that is it.
• So what is your answer to my specific question? Jul 22, 2017 at 14:12
• The answer to your question is that there is no "raising or lowering of the indices", nor is there a need to define which derivatives to take first, because you are doing two different things that you are mistaking for the same one: the former is taking a divergence, the latter is contracting a tensor. Jul 22, 2017 at 14:19
• I'm not doing any things, I'm simply asking about the interpretation of the notation $\partial_\mu A^\mu$. I never actually did any mathematical operations. Jul 22, 2017 at 14:34
• $A_\mu$ is a one-form if you use abstract index notation, which is the correct thing to do. The indices are just type annotations. Jul 28, 2017 at 22:37
• ...indices all around $(A,B,...), (a,b,...)$ and so forth. In my opinion this is much clunkier than the standard one, to be honest. Jul 30, 2017 at 22:30
There isn't one, because partial derivatives are not meaningful in GR.
Partial derivatives can appear in two places:
• Exterior derivatives
• Lie derivatives.
Obviously they can also appear if you expand a covariant derivative, but you really shouldn't raise or lower individual indices then.
For covariant derivatives, it doesn't matter, because $\nabla g=0$, so you can freely move $g$ in or out of the derivative and then we have $\nabla_\mu A^\mu=\nabla^\mu A_\mu$.
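Spelled out (a one-line check added here): since $\nabla_\mu g^{\nu\rho}=0$,
$$\nabla_\mu A^\mu=\nabla_\mu\left(g^{\mu\nu}A_\nu\right)=g^{\mu\nu}\nabla_\mu A_\nu=\nabla^\nu A_\nu.$$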
For Lie-derivatives, you can express them with covariant derivatives. However it does matter, because Lie-derivatives do not commute with $g$, unless your vector field is a Killing-field, so we have $\mathcal L_X A^\mu\neq(\mathcal L_X A_\nu)g^{\mu\nu}$, in this case, you need to specify whether you raise/lower before or after the Lie-derivative. However I need to say that the index notation meshes really badly with the Lie-derivative notation anyways.
For exterior derivatives, you can express that with covariant derivatives, and also, the exterior derivative is meaningful if and only if, you calculate it on a differential form, which are, by definition, lower-indexed.
As AccidentalFourierTransform said in the comments, the issue is more interesting if you have multiple connections and/or multiple metrics and/or a non-compatible connection. Every time I have seen such situations in the physics literature, the raisings/lowerings were written out explicitly, or a convention was declared beforehand, but because these occurrences are rather specific, one cannot really make a definitive convention in general.
• @AccidentalFourierTransform I have noted non-metric connections in the last paragraph. Jul 22, 2017 at 14:01
• Note that, as I mentioned in a comment to the OP, partial derivatives technically are covariant derivatives with respect to a connection that is not necessarily metric compatible. Jul 22, 2017 at 15:34
• @tparker And that is not necessarily globally defined, and whose existence entirely depends on the whim of choosing a chart. While technically $\partial_\mu$ is indeed a local connection, in terms of function it has no internal meaning. It is only used as a "reference device" because we know how to calculate it. Jul 22, 2017 at 15:39
1. If $A^{\mu}$ is supposed to be (components of) a vector field, i.e. a (1,0) contravariant tensor field, then the expression (1) is not a divergence. A divergence of a vector field in a pseudo-Riemannian manifold is a scalar field, i.e. a (0,0) tensor field, and has the local form $${\rm div} A~=~ \frac{1}{\sqrt{|g|}}\partial_{\mu} (\sqrt{|g|} A^{\mu}) \tag{A}$$
2. Similarly, if $A_{\nu}$ is supposed to be (components of) a co-vector field, i.e. a (0,1) covariant tensor field, then the expression (3) is not a (0,0) tensor field.
3. Apart from the important objection about not working with non-covariant quantities, if OP is merely asking about conventions for a notational short-hand for working with a partial derivative $$\partial^{\mu}\tag{B}$$ with raised index, say, in a general relativistic context, it seems most convenient to let the metric be outside, i.e. $$\partial^{\mu}~:=~g^{\mu\nu}\partial_{\nu}.\tag{C}$$ E.g. the Laplace-Beltrami operator would then become $$\Delta~=~\frac{1}{\sqrt{|g|}}\partial_{\mu}\sqrt{|g|}\partial^{\mu}.\tag{D}$$ But we cannot really recommend the notation (B) outside a special relativistic context in order not to create unnecessary confusion. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287242889404297, "perplexity": 367.5859773548891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00532.warc.gz"} |
https://scipost.org/SciPostPhys.10.1.016 | ## Gravity loop integrands from the ultraviolet
Alex Edison, Enrico Herrmann, Julio Parra-Martinez, Jaroslav Trnka
SciPost Phys. 10, 016 (2021) · published 25 January 2021
### Abstract
We demonstrate that loop integrands of (super-)gravity scattering amplitudes possess surprising properties in the ultraviolet (UV) region. In particular, we study the scaling of multi-particle unitarity cuts for asymptotically large momenta and expose an improved UV behavior of four-dimensional cuts through seven loops as compared to standard expectations. For N=8 supergravity, we show that the improved large momentum scaling combined with the behavior of the integrand under BCFW deformations of external kinematics uniquely fixes the loop integrands in a number of non-trivial cases. In the integrand construction, all scaling conditions are homogeneous. Therefore, the only required information about the amplitude is its vanishing at particular points in momentum space. This homogeneous construction gives indirect evidence for a new geometric picture for graviton amplitudes similar to the one found for planar N=4 super Yang-Mills theory. We also show how the behavior at infinity is related to the scaling of tree-level amplitudes under certain multi-line chiral shifts which can be used to construct new recursion relations.
https://www.gamedev.net/forums/topic/190320-console-input-and-output/ | #### Archived
# Console input and output
## Recommended Posts
What functions are ANSI, work with devc++4, and can do the following in console (for example the windows one)?
1) Get input from the keyboard in realtime (= not like where you have to type something and then press enter, but something that can see what key is pressed while the program is running its loop)
2) Draw a single character at any location on the screen (including the lowest characters like the smileys 1 and 2)
3) Draw a string of text at any location
4) Put the console into 80x50 mode (instead of the default 80x25)
5) Draw text or characters in color, change foreground and background color, etc...
If no ANSI ones exist, do there exist some that work with DevC++4? Thanks!
[EDIT] Might it be possible that something is wrong with conio.h in devc++4? [edited by - Boops on November 9, 2003 8:07:08 PM]
you can use conio.h or you can use bios.h. Under Windows I strongly recommend conio.h - note: no unix versions
I know that at least Borland supports it for Windows consoles. I assume the rest should. If they don't - go and buy a new compiler.
quote:
Original post by Boops
1) Get input from the keyboard in realtime (= not like where you have to type something and then press enter, but something that can see what key is pressed while the program is running its loop)
for a loop
char a;
while(!kbhit());
a=getch();
in your program if it constantly loops
char a;
if(kbhit())a=getch();
2) Draw a single character at any location on the screen (including the lowest characters like the smileys 1 and 2)
gotoxy(x,y);
textcolor(c);
then either use
cprintf();
or
putch();
3) Draw a string of text at any location
above - use cprintf
4) Put the console into 80x50 mode (instead of the default 80x25)
textmode(C4350);
5) Draw text or characters in color, change foreground and background color, etc...
textbackground(c);
textcolor(c);
for input, bios.h has a lot of control over the input. You can get all the other things like shift states and so on, and control the input so you can see what's in the keyboard buffer, and leave them there, take them out, ignore them, and so on.
anyway, have fun.
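For anyone reading along, here is how those pieces fit together in one small program. This is only a sketch assuming a Borland/Turbo-style conio.h (as discussed above, Dev-C++'s MinGW port may lack parts of it, e.g. textmode()):

#include <conio.h>

int main(void)
{
    char a = 0;

    textmode(C4350);            /* 80x50 text mode, if supported */
    textbackground(BLUE);       /* background color */
    textcolor(YELLOW);          /* foreground color */

    gotoxy(10, 5);              /* column 10, row 5 */
    cprintf("Press keys, ESC quits");

    while (a != 27) {           /* 27 == ESC */
        if (kbhit()) {          /* a key is waiting (non-blocking) */
            a = getch();        /* read it without echo or Enter */
            gotoxy(40, 25);
            putch(a);           /* draw the raw character, smileys included */
        }
    }
    return 0;
}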
Cool! I found a way to get conio.h to work with DevC++ and its compiler that wants to be so standard but doesn't want to support this cool stuff the normal way.
Most of the functions you mentioned are working, except textmode! Is that one from conio.h as well, and in what compiler did you get it to work?
And what, pray tell, do you seem to think that 'ANSI' means, exactly?
Because ANSI C goes back a few decades, while ISO C was updated in 1999.
quote:
Original post by C-Junkie
And what, pray tell, do you seem to think that 'ANSI' means, exactly?
By ANSI, I just mean something that is supported by DevC++, since apparently its compiler only wants ANSI things.
Please don't waste this topic by discussing anything else about ANSI.
for text mode;
conio.h
void textmode(int newmode);
Enum: Standard video modes
Constant  | Value | Text Mode
----------+-------+-----------------------------------
LASTMODE  |  -1   | Previous text mode
BW40      |   0   | Black and white 40 columns
C40       |   1   | Color 40 columns
BW80      |   2   | Black and white 80 columns
C80       |   3   | Color 80 columns
MONO      |   7   | Monochrome 80 columns
C4350     |  64   | EGA 43-line and VGA 50-line
check INSIDE your conio.h file to see if they have textmode() defined. Maybe in the DevC++ compiler they don't have it in there.
In a standard conio library, it should be in there.
There is another way; I think it's using either dos.h or bios.h that lets you use a function like mode(); to change it.
https://lotharlorraine.wordpress.com/category/probabilities/ | # Are miracles improbable natural events?
Deutsche Version: Sind Wunder unwahrscheinliche Naturereignisse?
Stefan Hartmann is one of the most prominent scholars who deal with the philosophy of probability.
In an interview for the University of Munich, he went into a well-known faith story of the Old Testament in order to illustrate some concepts in a provocative way.
*****
Interviewer: let us start at the very beginning in the Old Testament. In the Book of Genesis, God reveals to the hundred-year-old Abraham that he'd become a father. Why should Abraham believe this?
Hartmann: if we receive a new piece of information and wonder how we should integrate it into our belief system, we start out analysing it according to different criteria.
Three of them are especially important: the initial plausibility of the new information, the coherence of the new information and the reliability of the information source.
These factors often point towards the same direction, but sometimes there are tensions. Like in this example.
We are dealing with a highly reliable source, namely God, who always speaks the truth.
However, the information itself is very implausible: hundred-year-old people don't get children. And it is incoherent: becoming a father at the age of a hundred doesn't match our belief system.
Now we have to weigh all these considerations and come to a decision about whether or not we should take this information into our belief system. When God speaks, we are left with no choice but to do that. But if anyone else were to come up with this information, we'd presumably not do it, because the lack of coherence and plausibility would be overwhelming.
The problem for epistemology consists in how to weigh these three factors against each other.
*****
It must be clearly emphasised that neither the interviewer nor Hartmann believe in the historicity of this story between God and Abraham. It is only used as an illustration for epistemological (i.e. knowledge-related) problems.
As a progressive Christian, I consider that this written tradition has shown up rather late so that its historical foundations are uncertain.
Still, from the standpoint of the philosophy of religion it represents a vital text and lies at the very core of the “leap of faith” of Danish philosopher Søren Kierkegaard.
For that reason, I want to go into Hartmann’s interpretation for I believe that it illustrates a widespread misunderstanding among modern intellectuals.
I am concerned with the following sentence I underscored:
However, the information itself is very implausible: hundred-year-old people don't get children. And it is incoherent: becoming a father at the age of a hundred doesn't match our belief system.
According to Hartmann’s explanation, it looks like as if the Lord had told to Abraham: “Soon you’ll get a kid in a wholly natural way.”
And in that case I can figure out why there would be a logical conflict.
But this isn’t what we find in the original narrative:
Background knowledge: hundred-year-old people don't get children in a natural way.
New information: a mighty supernatural being promised Abraham that he would become a father through a miracle.
Put that way, there is no longer any obvious logical tension.
The “father of faith” can only conclude out of his prior experience (and that of countless other people) that such an event would be extremely unlikely under purely natural circumstances.
This doesn’t say anything about God’s abilities to bring about the promised son in another way.
Interestingly enough, one could say the same thing about advanced aliens who would make the same assertion.
The utter natural implausibility of such a birth is absolutely no argument against the possibility that superior creatures might be able to perform it.
## Did ancient people believe in miracles because they didn't understand natural processes well?
A closely related misconception consists of thinking that religious people from the past believed in miracles because their knowledge of the laws of Nature was extremely limited.
As C.S. Lewis pointed out, it is misleading to say that the first Christians believed in the virgin birth of Jesus because they didn’t know how pregnancy works.
On the contrary, they were very well aware of these states of affairs and viewed this event as God’s intervention for that very reason.
Saint Joseph would not have come to the thought of repudiating his fiancée if he hadn't known that a pregnancy without prior sexual intercourse goes against the laws of nature.
Although professor Hartmann is doubtlessly an extremely intelligent person, I think he missed the main point.
Are we open to the existence of a God whose actions do not always correspond to the regular patterns of nature? And whose preferences might not always be understood by human reason?
But as progressive Evangelical theologian Randal Rauser argued, I think that the true epistemological and moral conflict only begins when God, many years later, demands that Abraham sacrifice his son, which overthrows very deep moral intuitions.
Like the earlier German philosopher Immanuel Kant, Rauser strongly doubts that such a command is compatible with God's perfection.
# The crazy bookmaker and the Cult of probability
## A Critique of the Dutch Book Argument
Many neutral observers agree that we are witnessing the formation of a new religion among hopelessly nerdy people.
I'm thinking of course of what has been called hardcore Bayesianism, the epistemology according to which each proposition ("Tomorrow it'll rain", "String theory is the true description of the world", "There is no god" etc.) has a probability which can and should be computed under almost every conceivable circumstance.
In a previous post I briefly explained the two main theories of probabilities, frequentism and Bayesianism. In another post, I laid out my own alternative view called “knowledge-dependent frequentism” which attempts at keeping the objectivity of frequentism while including the limited knowledge of the agent. An application to the Theory of Evolution can be found here.
It is not rare to hear Bayesians talk about their own view of probability as a life-saving truth you cannot live without, or a bit more modestly as THE “key to the universe“.
While trying to win new converts, they often put it as if it were all about accepting Bayes' theorem, whose truth is certain since it has been mathematically proven. This is a tactic I've seen Richard Carrier employ repeatedly.
I wrote this post as a reply to show that frequentists accept Bayes' theorem as well, and that the dispute isn't about its mathematical demonstration but about whether or not one accepts that for every proposition there exists a rational degree of belief behaving like a probability.
## Establishing the necessity of probabilistic coherence
One very popular argument aiming at establishing this is the “Dutch Book Argument” (DBA). I think it is no exaggeration to state that many committed Bayesians venerate it with almost the same degree of devotion a Conservative Evangelical feels towards the doctrine of Biblical inerrancy.
Put forward by Ramsey and de Finetti, it defines a very specific betting game whose participants are threatened with a sure loss ("being Dutch booked") if the betting quotients they set do not fulfill the basic axioms of probability, the so-called Kolmogorov axioms (I hope my non-geeky readers will forgive me one day for becoming so shamelessly boring…):
1) the probability of an event is always a real positive number
2) the probability of an event regrouping all possibilities is equal to 1
3) the probability of the sum of disjoint events is equal to the sum of the probability of each event
The betting game upon which the DBA lies is defined as follows: (You can skip this more technical green part whose comprehension isn’t necessary for following the basic thrust of my criticism of the DBA).
## A not very wise wager
Let us consider an event E upon which it must be wagered.
The bookmaker determines a sum of money S (say 100 €) that a person R (Receiver) will get from a person G (Giver) if E comes true. But the person R has to give p*S to the person G beforehand.
The bookmaker determines himself who is going to be R and who is going to be G.
Holding fast to these rules, it is possible to demonstrate that a clever bookmaker can set things up in such a way that any bettor not choosing p in accordance with the laws of probability will lose money regardless of the outcome of the event.
Let us consider for example that a bettor wagers upon the propositions
1) “Tomorrow it will snow” with P1 = 0.65 and upon
2) “Tomorrow it will not snow” with P2 = 0.70.
P1 and P2 violate the laws of probability because the sum of the probabilities of these two mutually exclusive events should be 1 instead of 1.35
In this case, the bookmaker would choose to be G and first get P1*S + P2*S = 100*(1.35) = 135 € from his bettor R. Afterwards, he wins in both cases:
– It snows. He must give 100 € to R because of 1). The bookmaker’s gain is 135 € – 100 = 35 €
– It doesn’t snow. He must give 100 € to R because of 2). The bookmaker’s gain is also 135 € – 100 = 35 €
Let us consider the same example where this time the bettor comes up with P1 = 0.20 and P2 = 0.30, whose sum is well below 1.
The bookmaker would choose to be R, giving 0.20*100 = 20 € about the snow and 0.30*100 = 30 € about the absence of snow. Again, he wins in both cases:
– It snows. The bettor must give 100 € to R (the bookmaker) because of 1). The bookmaker's gain is -30 – 20 + 100 = 50 €
– It does not snow. The bettor must give 100 € to R (the bookmaker) because of 2). The bookmaker's gain is -30 – 20 + 100 = 50 €
In both cases, having P1 and P2 fulfill the probability axioms would have been BOTH a necessary and a sufficient condition for keeping the sure loss from happening.
The same demonstration can be generalized to all other basic axioms of probabilities.
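To make the arithmetic of the two examples easy to check, here is a small illustrative script (my own addition, not part of the original argument) which replays the snow wager for arbitrary betting quotients P1 and P2 with a stake of 100 €. It shows that the bookmaker's gain is the same whatever the weather as soon as P1 + P2 differs from 1:

```python
# Toy replay of the snow example: a bettor posts quotients p1 for "it will snow"
# and p2 for "it will not snow"; the stake is S. The bookmaker then picks the side
# (Giver or Receiver) that guarantees him a profit whatever the outcome.

def bookmaker_gain(p1, p2, S=100.0):
    """Return the bookmaker's gain (in EUR) for the two outcomes (snow, no snow)."""
    if p1 + p2 > 1:
        # Bookmaker plays G: collects (p1 + p2)*S up front, then pays S on the winning bet.
        gain = (p1 + p2) * S - S
    elif p1 + p2 < 1:
        # Bookmaker plays R: pays (p1 + p2)*S up front, then receives S on the winning bet.
        gain = S - (p1 + p2) * S
    else:
        # Coherent quotients: no side offers the bookmaker a guaranteed profit.
        gain = 0.0
    return gain, gain   # same gain whether it snows or not

print(bookmaker_gain(0.65, 0.70))   # (35.0, 35.0): the bettor of the first example surely loses 35 EUR
print(bookmaker_gain(0.20, 0.30))   # (50.0, 50.0): the bettor of the second example surely loses 50 EUR
print(bookmaker_gain(0.40, 0.60))   # (0.0, 0.0): coherent quotients, no sure loss
```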
## The thrust of the argument and its shortcomings
The Dutch Book Argument can be formulated as follows:
1) It is irrational to be involved in a bet where you’re bound to lose
2) One can make up a betting game such that for every proposition, you're doomed to lose if the sums you set do not satisfy the rules of probability. Otherwise you're safe.
3) Thus you’d be irrational if the amounts you set broke the rules of probabilities.
4) The amounts you set are identical to your psychological degrees of belief
5) Hence you’d be irrational if your psychological degrees of beliefs do not behave like probabilities
Now I could bet any amount you wish there are demonstrably countless flaws in this reasoning.
### I’m not wagering
One unmentioned premise of this purely pragmatic argument is that the agent is willing to wager in the first place. In the large majority of situations where there will be no opportunity for him to do so, he wouldn’t be irrational if his degrees of beliefs were non-probabilistic because there would be no monetary stakes whatsoever.
Moreover, a great number of human beings always refuse to bet on principle and would of course face no such threat of "sure loss".
Since it is a thought experiment, one could of course modify it in such a way that:
“If you don’t agree to participate, I’ll bring you to Guatemala where you’ll be water-boarded until you’ve given up”.
But to my eyes and that of many observers, this would make the argument look incredibly silly and convoluted.
### I don’t care about money
Premise 1) is far from being airtight.
Let us suppose you're a billionaire who happens to enjoy betting moderate amounts of money for various psychological reasons. Let us further assume your sums do not respect the axioms of probability and as a consequence you lose 300 €, that is 0.00003% of your wealth, while enjoying the whole game. One must use an extraordinarily question-begging notion of rationality to call you "irrational" in such a situation.
### Degrees of belief and actions
It is absolutely not true that our betting amounts HAVE to be identical or even closely related to our psychological degrees of belief.
Let us say that a lunatic bookie threatens to kill my children if I don't agree to engage in a series of bets concerning insignificant political events in some Chinese provinces I had never heard of previously.
Being in a situation of total ignorance, my psychological degrees of belief are undefined and keep fluctuating in my brain. But since I want to avoid a sure loss, I make up amounts behaving like probabilities which will prevent me from getting "Dutch-booked", i.e. amounts having nothing to do with my psychology.
So I avoid the sure loss even though my psychological states didn't behave like probabilities at any moment.
### Propositions whose truth we’ll never discover
There are countless things we will never know (at least assuming atheism is true, as do most Bayesians.)
Let us consider the proposition "There exists an unreachable parallel universe which is fundamentally governed by a rotation between string theory and loop quantum gravity" and many related assertions.
Let us suppose I ask a Bayesian friend: "Why am I irrational if my corresponding degrees of belief in my brain do not fulfill the basic rules of probability?"
The best thing he could answer me (based on the DBA) would be:
“Imagine we NOW had to set odds about each of these propositions. It is true we’ll never know anything about that during our earthly life. But imagine my atheism was wrong: there is a hell, we are both stuck in it, and the devil DEMANDS us to abide by the sums we had set at that time.
You're irrational because the non-probabilistic degrees of belief you're having right now mean you'll get Dutch-booked by me in hell in front of the malevolent laughter of fiery demons."
Now I have no doubt this might be a good joke for impressing a geeky girl being not too picky (which is truly an extraordinarily unlikely combination).
But it is incredibly hard to take this as a serious philosophical argument, to say the least.
## A more modest Bayesianism is probably required
To their credit, many more moderate Bayesians have started backing away from the alleged strength and scope of the DBA and state instead that:
“First of all, pretty much no serious Bayesian that I know of uses the Dutch book argument to justify probability. Things like the Savage axioms are much more popular, and much more realistic. Therefore, the scheme does not in any way rest on whether or not you find the Dutch book scenario reasonable. These days you should think of it as an easily digestible demonstration that simple operational decision making principles can lead to the axioms of probability rather than thinking of it as the final story. It is certainly easier to understand than Savage, and an important part of it, namely the “sure thing principle”, does survive in more sophisticated approaches.”
Given that the Savage axioms rely heavily on risk assessment, they're bound to be related to events very well treatable through my own knowledge-dependent frequentism, and I don't see how they could justify the existence and probabilistic nature of degrees of belief having no connection with our current concerns (such as the evolutionary path through which a small sub-species of dinosaurs evolved countless years ago).
To conclude, I think there is a gigantic gap between:
– the fragility of the arguments for radical Bayesianism, its serious problems such as magically turning utter ignorance into specific knowledge.
and
– the boldness, self-righteousness and terrible arrogance of its most ardent defenders.
I am myself not a typical old-school frequentist and do find valuable elements in Bayesian epistemology but I find it extremely unpleasant to discuss with disagreeable folks who are much more interested in winning an argument than in humbly improving human epistemology.
Thematic list of ALL posts on this blog (regularly updated)
My other blog on Unidentified Aerial Phenomena (UAP)
# On the probability of evolution
In the following post, I won’t try to calculate specific values but rather to explicate my own Knowledge-dependent frequentist probabilities by using particular examples.
The great evolutionary biologist Stephen Jay Gould was famous for his view that Evolution follows utterly unpredictable paths so that the emergence of any species can be viewed as a “cosmic accident”.
He wrote:
We are glorious accidents of an unpredictable process with no drive to complexity, not the expected results of evolutionary principles that yearn to produce a creature capable of understanding the mode of its own necessary construction.
“We are here because one odd group of fishes had a peculiar fin anatomy that could transform into legs for terrestrial creatures; because the earth never froze entirely during an ice age; because a small and tenuous species, arising in Africa a quarter of a million years ago, has managed, so far, to survive by hook and by crook. We may yearn for a ‘higher answer’– but none exists”
“Homo sapiens [are] a tiny twig on an improbable branch of a contingent limb on a fortunate tree.”
Dr. Stephen Jay Gould, the late Harvard paleontologist, crystallized the question in his book ”Wonderful Life.” What would happen, he asked, if the tape of the history of life were rewound and replayed? For many, including Dr. Gould, the answer was clear. He wrote that ”any replay of the tape would lead evolution down a pathway radically different from the road actually taken.”
You’re welcome to complement my list by adding other quotations. 🙂
## Evolution of man
So, according to Stephen Jay Gould, the probability that human life would have evolved on our planet was extremely low, because countless other outcomes would have been possible as well.
Here, I’m interested to know what this probability p(Homo) means ontologically.
### Bayesian interpretation
For a Bayesian, p(Homo) means the degree of belief we should have that a young planet having exactly the same features as ours back then would harbor a complex evolution leading to our species.
Many Bayesians like to model their degrees of belief in terms of betting amount, but in that situation this seems rather awkward since none of them would still be alive when the outcome of the wager will be known.
Let us consider (for the sake of the argument) an infinite space which also necessarily contains an infinite number of planets perfectly identical to our earth (according to the law of large numbers).
According to traditional frequentism, the probability p(Homo) that a planet identical to our world would produce mankind is given as the ratio of primitive earths having brought about humans divided by the total number of planets identical to ours for a large enough (actually endless) number of samples:
p(Homo) ≈ f(Homo) = N(Homo) / N(Primitive_Earths).
### Knowledge-dependent frequentism
According to my own version of frequentism, the planets considered in the definition of the probability do not have to be identical to our earth; they only have to share ALL PAST characteristics of our earth we're aware of.
Let PrimiEarths be the name of such a planet back then.
The probability of the evolution of human life would be defined as the limit p'(Homo) of
f'(Homo) = N'(Homo) / N(PrimiEarths‘)
whereby N(PrimiEarths‘) are all primitive planets in our hypothetical endless universe encompassing all features we are aware of on our own planet back then and N'(Homo) is the number of such planets where human beings evolved.
It is my contention that if this quantity exists (that is the ratio converges to a fixed value whereas the size of the sample is enlarged), all Bayesians would adopt p'(Homo) as their own degree of belief.
But what if there were no such convergence? In other words, as one considered more and more planets N(PrimiEarths'), f'(Homo) would keep fluctuating between 0 and 1 without homing in on a fixed value.
If that is the case, this means that the phenomenon "Human life evolving on a planet gathering the features we know" is completely unpredictable and cannot therefore be associated with a Bayesian degree of belief either; such a degree of belief would mean nothing more than a purely subjective psychological state.
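Since no one can of course sample real planets, the following toy script (entirely my own illustration, with made-up numbers) only shows what the difference between a converging and a non-converging relative frequency looks like as the sample is enlarged, which is the criterion my knowledge-dependent frequentism relies on:

```python
import math
import random

def running_frequency(prob_of_success, n_samples=100_000):
    """Relative frequency of 'success' over an enlarging sample; the chance may depend on the index."""
    successes, trace = 0, []
    for i in range(1, n_samples + 1):
        successes += random.random() < prob_of_success(i)
        if i in (100, 1_000, 10_000, 100_000):
            trace.append((i, successes / i))
    return trace

# Convergent case: a fixed (purely made-up) chance of 0.002 per sampled planet.
print(running_frequency(lambda i: 0.002))

# Contrived non-convergent case: the chance swings between 0.9 and 0.1 on geometrically
# growing blocks, so the running frequency keeps wandering instead of settling down.
print(running_frequency(lambda i: 0.9 if int(math.log10(i)) % 2 == 0 else 0.1))
```

In the first case the trace homes in on a stable value which could play the role of p'(Homo); in the second it keeps oscillating, which is exactly the situation where I say no objective probability (and hence no rational degree of belief) exists.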
## Evolution of bird
I want to further illustrate the viability of my probabilistic ontology by considering another evolutionary event, namely the appearance of the first birds.
Let us define D as: "Dinosaurs were the forefathers of all modern birds", a view which has apparently become mainstream over the last decades.
For a Bayesian, p(D) is the degree of belief about this event every rational agent ought to have.
Since this is a unique event of the past, many Bayesians keep arguing that it can't be grasped by frequentism and can only be studied if one adopts a Bayesian epistemology.
It is my contention this can be avoided by resorting to my Knowledge-Dependent Frequentism (KDF).
Let us define N(Earths') as the number of planets encompassing all features we are aware of on our modern earth (including, of course, the countless birds crowding the sky, and the numerous fossils found under the ground).
Let us define N(Dino’) as the number of these planets where all birds originated from dinosaurs.
According to my frequentism, f(D) = N(Dino’) / N(Earths’), and p(D) is the limit of f(D) as the sample is increasingly enlarged.
If p(D) is strong, this means that on most earth-like planets containing birds, the ancestors of birds were gruesome reptilians.
But if p(D) is weak (such as 0.05), it means that among the birds of 100 planets having exactly the known features of our earth, only 5 would descend from the grand dragons of Jurassic Park.
Again, what would occur if p(D) didn't exist because f(D) doesn't converge as the sample is increased?
This would mean that given our current knowledge, bird evolution is an entirely unpredictable phenomenon for which there can be no objective degree of belief every rational agent ought to satisfy.
## A physical probability dependent on one’s knowledge
In my whole post, my goal was to argue for an alternative view of probability which can combine both strengths of traditional Frequentism and Bayesianism.
Like Frequentism, it is a physical or objective view of probability which isn’t defined in terms of the psychological or neurological state of the agent.
But like Bayesianism, it takes into account the fact that the knowledge of a real agent is always limited, and includes this limitation in the definition of the probability.
To my mind, Knowledge-Dependent Frequentism (KDF) seems promising in that it allows one to handle the probabilities of single events while upholding a solid connection to the objectivity of the real world.
In future posts I’ll start out applying this concept to the probabilistic investigations of historical problems, as Dr. Richard Carrier is currently doing.
Thematic list of ALL posts on this blog (regularly updated)
My other blog on Unidentified Aerial Phenomena (UAP)
# Did Jesus think he was God?
The outstanding liberal Biblical scholar James McGrath wrote a thought-provoking post on this very topic.
I mentioned a few posts about Bart Ehrman’s recent book yesterday, and there are already a couple more. Larry Hurtado offered some amendments to his post, in light of feedback from Bart Ehrman himself. And Ken Schenck blogged about chapter 3 and whether Jesus thought he was God. In it he writes:
I think we can safely assume that, in his public persona, Jesus did not go around telling everyone he was the Messiah, let alone God.
But one must then ask whether there is good reason to regard the process that follows, in which Jesus comes to be viewed as the second person of the Trinity, as a legitimate or necessary one.
Schenck also criticizes Ehrman for giving voice to older formulations of scholarly views, as though things had not moved on.
The only people who think that Jesus was viewed as a divine figure from the beginning are some very conservative Christians on the one hand, and mythicists on the other. That in itself is telling.
I’d be very interested to see further exploration of the idea that, in talking about the “son of man,” Jesus was alluding to a future figure other than himself, and that it was only his followers who merged the two, coming up with the notion of a “return” of Jesus. It is a viewpoint that was proposed and then set aside decades ago, and I don’t personally feel like either case has been explored to the fullest extent possible. Scholarship on the Parables of Enoch has shifted since those earlier discussions occurred, and the possibility that that work could have influenced Jesus can no longer be dismissed.
But either way, we are dealing with the expectations of a human being, either regarding his own future exaltation, or the arrival of another figure. We simply do not find in Paul or in our earliest Gospels a depiction of Jesus as one who thought he was God.
Here was my response to that:
Well I’m not really a Conservative Christian (since I reject a fixed Canon and find some forms of pan-en-theism interesting philosophically) but I do believe that Jesus was more than a mere prophet. Along with N.T. Wright I think He viewed Himself as the new temple embodying God’s presence on earth.
I once defended the validity of C.S. Lewis' trilemma, provided Jesus viewed himself as God.
I'm well aware that Jesus' divine sayings in John's Gospel are theological creations.
But there is something curious going on here.
Many critical scholars think that the historical Jesus falsely predicted the end of the world in the Gospel of Matthew:
“Truly I say to you, this generation will not pass away until all these things take place. ” Matthew 24:34
But if one does this, why could we not also accept the following saying:
"Jerusalem, Jerusalem, who kills the prophets and stones those who are sent to her! How often I wanted to gather your children together, the way a hen gathers her chicks under her wings, and you were unwilling. Behold, your house is being left to you desolate!…" Matthew 23:37-38
which is located just several verses before Matthew 24:34. It seems rather arbitrary to accept the latter while rejecting the former.
This verse is intriguing in many respects.
In it, Jesus implies his divinity while not stating it explicitly, and if it was a theological creation such as in John’s Gospel, it seems strange that Matthew did not make this point much more often and clearly at other places, if such was his agenda.
What’s more, the presence of Matthew 24:34 (provided it was a false prophecy) has some interesting consequences about the dating and intention of the author.
1) Let us consider that Matthew made up the whole end of his Gospel out of his theological wishful thinking for proving that Christ is the divine Messiah.
If that is the case, it seems extremely unlikely that he would have written it one or two generations AFTER Jesus had perished.
This fact strongly militates in favor of dating Matthew's Gospel quite early.
2) Let us now suppose that Matthew wrote His Gospel long after Jesus’s generation had passed away.
He would certainly not have invented a saying where his Messiah made a false prediction.
It appears much more natural to assume he reported a historical saying of Jesus as it was because he deeply cared for truth, however embarrassing it might prove to be.
And if that is the case, we have good grounds for thinking he did not make up Matthew 23:37 either.
I’m not saying that what I have presented here is an air-tight case, it just seems the most natural way to go about this.
I think that historical events possess objective probabilities; geekily minded readers might be interested in my own approach."
To which James replied:
“Thanks for making this interesting argument! How would you respond to the suggestion that Jesus there might be speaking as other prophets had, addressing people in the first person as though God were speaking, but without believing his own identity to be that of God’s? I think that might also fit the related saying, “I will destroy this temple, and in three days rebuild it.””
Lotharson:
“That’s an interesting reply, James! Of course I cannot rule this out.
Still, in the preceding verses Jesus uses the third person when talking about God:
“And anyone who swears by the temple swears by it and by the one who dwells in it. 22 And anyone who swears by heaven swears by God’s throne and by the one who sits on it.”
and verse 36, "Truly I tell you, all this will come on this generation", is a typical saying of Jesus that he attributes to himself.
And so it seems to me more natural that Jesus would have said something like:
For Truly God says: "Jerusalem, Jerusalem, you who kill the prophets and stone those sent to you, how often I have longed to gather your children together, as a hen gathers her chicks under her wings, and you were not willing…"
James:
“Well, the same sort of switching back and forth between first person of God and the first person of the prophet is found in other prophetic literature, so I don’t see that as a problem. Of course, it doesn’t demonstrate that that is the best way to account for the phenomenon, but I definitely think it is one interpretative option that needs to be considered.”
I mentioned our conversation because I think it is a nice example of how one can disagree about a topic without being disagreeable towards one another.
Would not the world be in a much better state if everyone began striving for this ideal?
Thematic list of ALL posts on this blog (regularly updated)
My other blog on Unidentified Aerial Phenomena (UAP)
# Invisible burden of proof
Progressive Evangelical apologist Randal Rauser has just written a fascinating post about the way professional Skeptics systematically deny a claim they deem extraordinary.
I’ve talked about God and the burden of proof in the past. (See, for example, “God’s existence: where does the burden of proof lie?” and “Atheist, meet Burden of Proof. Burden of Proof, meet Atheist.”) Today we’ll return to the question beginning with a humorous cartoon.
This cartoon appears to be doing several things. But the point I want to focus on is a particular assumption about the nature of burden of proof. The assumption seems to be this:
Burden of Proof Assumption (BoPA): The person who makes a positive existential claim (i.e. who makes a claim that some thing exists) has a burden of proof to provide evidence to sustain that positive existential claim.
### Two Types of Burden of Proof
Admittedly, it isn’t entirely clear how exactly BoPA is to be understood. So far as I can see, there are two immediate interpretations which we can call the strong and weak interpretations. According to the strong interpretation, BoPA claims that assent to a positive existential claim is only rational if it is based on evidence. In other words, for a person to believe rationally that anything at all exists, one must have evidence for that claim. I call this a “strong” interpretation because it proposes a very high evidential demand on rational belief.
The “weak” interpretation of BoPA refrains from extending the evidential demand to every positive existential claim a person accepts. Instead, it restricts it to every positive existential claim a person proposes to another person.
To illustrate the difference, let’s call the stickmen in the cartoon Jones and Chan. Jones claims he has the baseball, and Chan is enquiring into his evidence for believing this. A strong interpretation of BoPA would render the issue like this: for Jones to be rational in believing that he has a baseball (i.e. that a baseball exists in his possession), Jones must have evidence of this claim.
A weak interpretation of BoPA shifts the focus away from Jones’ internal rationality for believing he has a baseball and on to the rationality that Chan has for accepting Jones’ claim. According to this reading, Chan cannot rationally accept Jones’ testimony unless Jones can provide evidence for it, irrespective of whether Jones himself is rational to believe the claim.
So it seems to me that the cartoon is ambiguous between the weak and strong claims. Moreover, it is clear that each claim carries different epistemological issues in its train.
### Does a theist have a special burden of proof?
Regardless, let’s set that aside and focus in on the core claim shared by both the weak and strong interpretations which is stated above in BoPA. In the cartoon a leap is made from belief about baseballs to belief about religious doctrines. The assumption is thus that BoPA is a claim that extends to any positive existential claim.
I have two reasons for rejecting BoPA as stated. First, there are innumerable examples where rational people recognize that it is not the acceptance of an existential claim which requires evidence. Indeed, in many cases the opposite is the case: it is the denial of an existential claim which requires evidence.
Consider, for example, belief in a physical world which exists external to and independent of human minds. This view (often called “realism”) makes a positive existential claim above and beyond the alternative of idealism. (Idealism is the view that only minds and their experiences exist.) Regardless, when presented with the two positions of realism and idealism, the vast majority of people will recognize that if there is a burden of proof in this question, it is borne by the idealist who denies a positive existential claim.
Second, BoPA runs afoul of the fact that one person’s existential denial is another person’s existential affirmation. The idealist may deny the existence of a world external to the mind. But by doing so, the idealist affirms the existence of a wholly mental world. So while the idealist may seem at first blush to be making a mere denial, from another perspective she is making a positive existential claim.
With that in mind, think about the famous mid-twentieth century debate between Father Copleston (Christian theist) and Lord Russell (atheist) on the existence of God. Copleston defended a cosmological argument according to which God was invoked to explain the origin of the universe. Russell retorted: “I should say that the universe is just there, and that’s all.” With that claim, Russell is not simply denying a positive existential claim (i.e. “God exists”), but he is also making a positive existential claim not made by Copleston (i.e. “the universe is just there, and that’s all”).
In conclusion, the atheist makes novel positive existential claims as surely as the theist. And so it follows that if the latter has a burden to defend her positive existential claim that God does exist, then the former has an equal burden to defend her positive existential claim that the universe is just there and that’s all.
Here was my response.
This is another of your excellent posts, Randal!
Unlike most Evangelical apologists, you're a true philosopher of religion and don't seem to be ideology-driven as John Loftus (for instance) obviously is. This makes it a constant delight to read your new insights.
I think that when one is confronted with an uncertain claim, there are three possible attitudes:
1) believing it (beyond any reasonable doubt)
2) believing its negation (without the shadow of a doubt).
3) not knowing what to think.
Most professional Skeptics automatically assume that if your opponent cannot prove his position (1), he or she is automatically wrong (2), thereby utterly disregarding option 3).
All these stances can be moderated by probabilities, but since I believe that only events have probabilities, I don’t think one can apply a probabilistic reasoning to God’s existence and to the reality of moral values.
While assessing a worldview, my method consists of comparing its predictions with the data of the real world. And if it makes no prediction at all (such as Deism), agnosticism is the most reasonable position unless you can develop cogent reasons for favoring another worldview.
Anyway, the complexity of reality and the tremendous influence of one's cultural and personal presuppositions make it very unlikely that we can know the truth with rational warrant, and should force us to adopt a profound intellectual humility.
This is why I define faith as HOPE in the face of insufficient evidence.
I believe we have normal, decent (albeit not extraordinary) evidence for the existence of transcendent beings. These clues would be deemed conclusive in mundane domains of inquiry such as drug trafficking or military espionage.
But many people consider the existence of a realm (or beings) out of the ordinary to be extremely unlikely to begin with.
This is why debates between true believers and hardcore deniers tend to be extraordinarily counter-productive and loveless.
The evidence is the same, but Skeptics consider a coincidence of hallucinations, illusions and radar deficits to be astronomically more plausible than visitors from another planet, universe, realm, or something else completely unknown.
In the future, I'll argue that there really are a SMALL number of UFOs out there (if you stick to the definition "UNKNOWN Flying Objects" instead of a starship populated by gray aliens).
Of course, the same thing can be said about (a small number of) genuine miracles.
# A mathematical proof of Ockham’s razor?
Ockham’s razor is a principle often used to dismiss out of hand alleged phenomena deemed to be too complex. In the philosophy of religion, it is often invoked for arguing that God’s existence is extremely unlikely to begin with owing to his alleged incredible complexity. A geeky brain is desperately required before entering this sinister realm.
In an earlier post I dealt with some of the most popular justifications for the razor and made the following distinction:
Methodological Razor: if theory A and theory B do the same job of describing all known facts C, it is preferable to use the simplest theory for the next investigations.
Epistemological Razor: if theory A and theory B do the same job of describing all known facts C, the simplest theory is ALWAYS more likely.
Like the last time, I won't address the validity of the Methodological Razor (MR), which might be a useful tool in many situations.
My attention will be focused on the epistemological blade of the razor and its alleged mathematical grounding.
## Example: prior probabilities of models having discrete variables
To illustrate how this is supposed to work, I built up the following example. Let us consider the result $Y$ of a random experiment depending on a measured random variable $X$. We are now searching for a good model (i.e. a function $f(X)$) such that the distance $d = Y - f(X)$ is minimized with respect to the constant parameters appearing in $f$. Let us consider the following functions: $f1(X,a1)$, $f2(X,a1,a2)$, $f3(X,a1,a2,a3)$ and $f4(X,a1,a2,a3,a4)$, which are the only possible models aiming at representing the relation between $Y$ and $X$. Let n1 = 1, n2 = 2, n3 = 3 and n4 = 4 be their numbers of parameters. In what follows, I will neutrally describe how objective Bayesians justify Ockham's razor in that situation.
### The objective Bayesian reasoning
Objective Bayesians apply the principle of indifference, according to which in utterly unknown situations every rational agent assigns the same probability to each possibility.
Let $p1_{total} = p(f1)$, $p2_{total} = p(f2)$, $p3_{total} = p(f3)$ and $p4_{total} = p(f4)$ be the probabilities that the corresponding function is the correct description of reality. It follows from that assumption that $p1_{total} = p2_{total} = p3_{total} = p4_{total} = p = \frac{1}{4}$, owing to the additivity of the probabilities.

Let us consider that each constant coefficient $ai$ can only take on five discrete values: 1, 2, 3, 4 and 5. Let us call $p1$, $p2$, $p3$ and $p4$ the probabilities that one of the four models is right with very specific values of the coefficients (a1, a2, a3, a4). By applying once again the principle of indifference, one gets

$p1(1) = p1(2) = p1(3) = p1(4) = p1(5) = \frac{1}{5}p1_{total} = 5^{-n1}p$

In the case of the second function, which depends on two coefficients, 5*5 = 25 doublets of values are possible: (1,1), (1,2), …, (3,4), …, (5,5). From indifference, it follows that

$p2(1,1) = p2(1,2) = ... = p2(3,4) = ... = p2(5,5) = \frac{1}{25}p2_{total} = 5^{-n2}p$

There are 5*5*5 = 125 possible triplets for f3, and indifference entails that

$p3(1,1,1) = p3(1,1,2) = ... = p3(3,2,4) = ... = p3(5,5,5) = \frac{1}{125}p3_{total} = 5^{-n3}p$

f4 is characterized by four parameters, so that a similar procedure leads to

$p4(1,1,1,1) = p4(1,1,1,2) = ... = p4(3,2,1,4) = ... = p4(5,5,5,5) = \frac{1}{625}p4_{total} = 5^{-n4}p$

Let us now consider four candidate solutions to the parameter identification problem: S1 = {a1}, S2 = {b1, b2}, S3 = {c1, c2, c3} and S4 = {d1, d2, d3, d4}, each member being an integer between 1 and 5. The prior probabilities of these solutions are equal to the quantities we have just calculated above. Thus

$p(S1) = 5^{-n1}p$, $p(S2) = 5^{-n2}p$, $p(S3) = 5^{-n3}p$ and $p(S4) = 5^{-n4}p$

From this, it follows that

$O(i,j) = \frac{p(Si)}{p(Sj)} = 5^{nj - ni}$

If one compares the first and the second model, $O(1,2) = 5^{2-1} = 5$, which means that the fit with the first model is (a priori) 5 times as likely as that with the second one.

Likewise, O(1,3) = 25 and O(1,4) = 125, showing that the first model is (a priori) 25 and 125 times more likely than the third and the fourth model, respectively. If the four models fit the data with the same quality (in that, for example, fi(X,ai) is perfectly identical to Y), Bayes' theorem will preserve these ratios in the computation of the posterior probabilities.
In other words, all things being equal, the simplest model f1(X,a1) is five times more likely than f2(X,a1,a2), 25 times more likely than f3(X,a1,a2,a3) and 125 times more likely than f4(X,a1,a2,a3,a4) because the others contain a greater number of parameters.
For this reason O(i,j) is usually referred to as an Ockham factor, because it penalizes the prior likelihood of complex models. If you are interested in the case of models with continuous real parameters, you can take a look at this publication. The sticking point of the whole demonstration is its heavy reliance on the principle of indifference.
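For readers who prefer to see the bookkeeping spelled out, here is a short script (my own illustration of the reasoning above, not taken from the linked publication) which reproduces the prior Ockham factors for the four discrete-parameter models:

```python
from itertools import product

values = (1, 2, 3, 4, 5)            # the five admissible values of each coefficient
n_params = {"f1": 1, "f2": 2, "f3": 3, "f4": 4}
p_total = 1 / len(n_params)         # indifference between the four models: p = 1/4

# Indifference again *within* each model: every specific parameter combination
# receives p_total divided by the number of combinations, i.e. 5**(-n) * p_total.
prior_of_specific_fit = {
    name: p_total / len(list(product(values, repeat=n)))
    for name, n in n_params.items()
}
print(prior_of_specific_fit)   # {'f1': 0.05, 'f2': 0.01, 'f3': 0.002, 'f4': 0.0004}

# Ockham factors O(1, j) = prior(one specific fit of f1) / prior(one specific fit of fj)
for name in ("f2", "f3", "f4"):
    print(name, prior_of_specific_fit["f1"] / prior_of_specific_fit[name])   # ≈ 5, 25, 125
```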
## The trouble with the principle of indifference
I already argued against the principle of indifference in an older post. Here I will repeat and reformulate my criticism.
### Turning ignorance into knowledge
The principle of indifference is not only unproven but also often leads to absurd consequences. Let us suppose that I want to know the probability that certain coins land odd. After having carried out 10000 trials, I find that the relative frequency tends to converge towards a given value, which was 0.35, 0.43, 0.72 and 0.93 for the last four coins I investigated. Let us now suppose that I find a new coin I'll never have the opportunity to test more than one time. According to the principle of indifference, before having ever started the trial, I should think something like this:
Since I know absolutely nothing about this coin, I know (or consider here extremely plausible) it is as likely to land odd as even.
I think this is magical thinking in its purest form. I am not alone in that assessment.
The great philosopher of science Wesley Salmon (who was himself a Bayesian) wrote what follows: "Knowledge of probabilities is concrete knowledge about occurrences; otherwise it is useless for prediction and action. According to the principle of indifference, this kind of knowledge can result immediately from our ignorance of reasons to regard one occurrence as more probable than another. This is epistemological magic. Of course, there are ways of transforming ignorance into knowledge – by further investigation and the accumulation of more information. It is the same with all "magic": to get the rabbit out of the hat you first have to put him in. The principle of indifference tries to perform "real magic"."
Objective Bayesians often use the following syllogism for grounding the principle of indifference.
1) If we have no reason for favoring one of the outcomes, we should assign the same probability to each of them.
2) In an utterly unknown situation, we have no reason for favoring one of the outcomes
3) Thus all of them have the same probability.
The problem is that (in a situation of utter ignorance) we have not only no reason for favoring one of the outcomes, but also no grounds for thinking that they are equally probable.
The necessary condition in proposition 1) is obviously not sufficient.
This absurdity (and other paradoxes) led the philosopher of science John Norton to conclude:
“The epistemic state of complete ignorance is not a probability distribution.”
The Dempster-Shafer theory of evidence offers us an elegant way to express indifference while avoiding absurdities and self-contradictions. According to it, a conviction is not represented by a probability (a real value between 0 and 1) but by an uncertainty interval [belief(h); 1 – belief(non h)], belief(h) and belief(non h) being the degrees of trust one has in the hypothesis h and in its negation.
For an unknown coin, indifference according to this epistemology would entail belief(odd) = belief(even) = 0, leading to the probability interval [0 ; 1].
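A minimal sketch of what this representation could look like in code (my own illustration; the full Dempster-Shafer machinery of course contains far more, notably Dempster's rule of combination):

```python
from dataclasses import dataclass

@dataclass
class BeliefInterval:
    """A conviction represented as the interval [belief(h), 1 - belief(not h)]."""
    bel_h: float       # degree of support for the hypothesis h
    bel_not_h: float   # degree of support for its negation

    @property
    def interval(self):
        return (self.bel_h, 1.0 - self.bel_not_h)

# A well-tested coin whose relative frequency of landing "odd" converged to 0.35:
print(BeliefInterval(bel_h=0.35, bel_not_h=0.65).interval)   # (0.35, 0.35): a sharp value

# A completely unknown coin: indifference expressed as total lack of commitment.
print(BeliefInterval(bel_h=0.0, bel_not_h=0.0).interval)     # (0.0, 1.0): the whole interval, not a point at 0.5
```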
### Non-existing prior probabilities
Philosophically speaking, it is controversial to speak of the probability of a theory before any observation has been taken into account. The great philosopher of evolutionary biology Elliott Sober has a nice way to put it: "Newton's universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton's law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible."
It is hard to see how prior probabilities of theories can be something more than just subjective brain states.
## Conclusion
The alleged mathematical demonstration of Ockham's razor rests on extremely shaky ground because:
1) it relies on the principle of indifference, which is not only unproven but leads to absurd and unreliable results as well;
2) it assumes that a model already has a probability before any observation.
Philosophically this is very questionable. Now if you are aware of other justifications for Ockham’s razor, I would be very glad if you were to mention them.
# John Loftus, probabilities and the Outsider Test of Faith
John Loftus is a former fundamentalist who has become an outspoken opponent of Christianity which he desires to debunk.
He has created what he calls the “Outsider Test of Faith” which he described as follows:
“This whole inside/outside perspective is quite a dilemma and prompts me to propose and argue on behalf of the OTF, the result of which makes the presumption of skepticism the preferred stance when approaching any religious faith, especially one’s own. The outsider test is simply a challenge to test one’s own religious faith with the presumption of skepticism, as an outsider. It calls upon believers to “Test or examine your religious beliefs as if you were outsiders with the same presumption of skepticism you use to test or examine other religious beliefs.” Its presumption is that when examining any set of religious beliefs skepticism is warranted, since the odds are good that the particular set of religious beliefs you have adopted is wrong.”
But why are the odds very low (instead of unknown) to begin with? His reasoning seems to be as follows:
1) Before we start our investigation, we should consider each religion to possess the same likelihood.
2) Thus if there are (say) N = 70000 religions, the prior probability of any given religion being true is (1/70000)·p(R), p(R) being the total probability of a religious worldview being true.
(I could not find a text where Loftus explicitly says that, but it seems to be what he means. I could, however, find one of the supporters of the OTF taking that line of reasoning.)
## Objective Bayesianism and the principle of indifference
This is actually a straightforward application of the principle of indifference followed by objective Bayesians:
In completely unknown situations, every rational agent should assign the same probability to all outcomes or theory he is aware of.
While this principle can seem pretty intuitive to many people, it is highly problematic.
In the prestigious Stanford Encyclopedia of Philosophy, one can read in the article about Bayesian epistemology:
“it is generally agreed by both objectivists and subjectivists that ignorance alone cannot be the basis for assigning prior probabilities.”
To illustrate the problem, I concocted the following story.
Once upon a time, King Lothar of Lorraine had 1000 treasures he wanted to share with his people. He had at his disposal 50000 red balls and 50000 white balls.
Frederic the Knight (the hero of my trilingual Christmas tale) has to choose one of these balls in the hope of getting one of the "goldenen Wundern" (golden wonders).
On Monday, Lothar distributes his treasures in a perfectly random fashion.
Frederic knows that the probability of finding the treasure in a red or in a white ball is the same: p(r) = p(w) = 0.5
On Tuesday, the great king puts 10% of the treasure within red balls and 90% within white ones.
Frederic knows that the probabilities are p(r) = 0.10 and p(w) = 0.90
On Wednesday, the sovereign lord of Lorraine puts 67% of the treasures in red balls and 33% in white ones.
Frederic knows that the probabilities are p(r) = 0.67 and p(w) = 0.33
On Thursday, Frederic does not know what the wise king did with his treasures. He could have distributed them in the same way he did on one of the previous days, but he could also have chosen a completely different method.
Therefore Frederic does not know the probabilities: p(r) = ? and p(w) = ?
According to the principle of indifference, Fred would be irrational to suspend judgment: he ought to believe that p(r) = 0.5 and p(w) = 0.5 on the grounds that it is an unknown situation.
This is an extremely strong claim, and I could not find in the literature any hint of why Frederic would be irrational in accepting his ignorance of the probabilities.
Actually, I believe that quite the contrary is the case.
If the principle of indifference were true, Fred should reason like this:
“I know that on Monday my Lord mixed the treasures randomly so that p(r) = p(w) = 0.5
I know that on Tuesday He put 10% of them in red balls and 90% in white ones, so that p(r) = 0.10 and p(w) = 0.90
I know that on Wednesday He put 67% of them in red balls and 33% in white ones, so that p(r) = 0.67 and p(w) = 0.33
AND
I know absolutely nothing about what He did on Thursday; therefore I know that the probabilities are p(r) = p(w) = 0.5, exactly like on Monday."
Now I think that this seems intuitively silly and even absurd to many people. There seems to be just no way one can transform utter ignorance into specific knowledge.
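To see concretely why the Thursday assignment is not harmless, here is a tiny simulation (my own toy example, nothing more) in which the king's actual but unknown method differs from what indifference dictates:

```python
import random

def observed_frequency(p_red, n_treasures=1000):
    """Fraction of the king's treasures ending up in red balls, if each treasure
    is placed in a red ball with probability p_red."""
    in_red = sum(random.random() < p_red for _ in range(n_treasures))
    return in_red / n_treasures

print("Monday   :", observed_frequency(0.5))    # ~0.5, as Frederic knows
print("Tuesday  :", observed_frequency(0.1))    # ~0.1
print("Wednesday:", observed_frequency(0.67))   # ~0.67

# Thursday: the king's method is unknown. Any value fed in here is *our* invention,
# not something Frederic knows. If he nevertheless assumes 0.5 by indifference while
# the king in fact reused (say) Tuesday's method, his "known" probability is off by a factor of five.
thursday_truth = 0.1   # hidden from Frederic; an arbitrary stand-in
print("Thursday :", observed_frequency(thursday_truth), "whereas indifference dictates 0.5")
```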
### Degrees of belief of a rational agent
More moderate Bayesians will probably agree with me that it is misguided to speak of a knowledge of probabilities in the fourth case. Nevertheless they might insist he should have the same confidence that the treasure is in a white ball as in a red one.
I'm afraid this changes nothing about the problem. On Monday Fred has a perfect warrant for feeling the same confidence.
How can he have the same confidence on Thursday if he knows absolutely nothing about the distribution?
So Frederic would be perfectly rational in believing that he does not know the probabilities p(r) = ? and p(w) = ?
Likewise, an alien having just landed on earth would be perfectly rational not to know the initial likelihood of the religions:
p(Christianity) = ? p(Islam) = ? p(Mormonism) = ? and so on and so forth.
But there is an additional problem here.
The proposition "religion x is the true one" is not related to any event, and non-Bayesian (and moderate Bayesian) philosophers doubt that it is warranted to speak of probabilities in such a situation.
Either x is true or false and this cannot be related to any kind of frequency.
The great philosopher of science Elliott Sober (who is sympathetic to Bayesian epistemology) wrote this about the probability of a theory BEFORE any data has been taken into account:
Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible.”
He rightly reminds us at the beginning of his article that "it is not inevitable that all propositions should have probabilities. That depends on what one means by probability, a point to which I'll return. The claim that all propositions have probabilities is a philosophical doctrine, not a theorem of mathematics."
So it would be perfectly warranted for the alien either to confess his ignorance of the prior likelihoods of the various religions or perhaps even to consider that these prior probabilities do not exist, as Elliott Sober did with the theory of gravitation.
In future posts, I will lay out a non-Bayesian way to evaluate the goodness of a theory which only depends on the set of all known facts and doesn't assume the existence of a prior probability before any data has been considered.
As we shall see, many of the probabilistic challenges of Dr. Richard Carrier against Christianity largely dissolve if one drops the assertion that all propositions have objective prior probabilities.
To conclude, I think I have shown in this post that the probabilistic defense of the Outsider Test of Faith is unsound and depends on very questionable assumptions.
I have not, however, shown at all that the OTF is flawed, for it might very well be successfully defended on pragmatic grounds. This will be the topic of future conversations.
http://mathhelpforum.com/algebra/11301-equation.html | 1. ## equation
x-366= -415
89+y=112
-27=w-14
2. Originally Posted by dianna
x-366= -415
89+y=112
-27=w-14
We've done a bunch of these for you. I think it would be better to see YOU do them. So I'll work out the first one, and you post your solutions to the second two and we'll look them over.
$x - 366 = -415$
$x - 366 + 366 = -415 + 366$
$x = -49$
The last two are done using almost exactly the same method.
-Dan | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9031322002410889, "perplexity": 720.2489719708989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607963.70/warc/CC-MAIN-20170525025250-20170525045250-00563.warc.gz"} |
https://rd.springer.com/chapter/10.1007/978-3-662-03877-2_17 | # Squeezed States of Light
• Pierre Meystre
• Murray Sargent III
## Abstract
The Heisenberg uncertainty principle $$\Delta A\Delta B \geqslant \frac{1} {2}\left| {\left\langle {\left[ {A,B} \right]} \right\rangle } \right|$$ between the standard deviations of two arbitrary observables, $$\Delta A = \left\langle {\left( {A - \left\langle A \right\rangle } \right)^2 } \right\rangle ^{1/2}$$ and similarly for ΔB, has a built-in degree of freedom: one can squeeze the standard deviation of one observable provided one “stretches” that for the conjugate observable. For example, the position and momentum standard deviations obey the uncertainty relation
$$\Delta x\Delta p \geqslant \hbar /2$$
(17.1)
and we can squeeze Δx to an arbitrarily small value at the expense of accordingly increasing the standard deviation Δp. All quantum mechanics requires is that the product be bounded from below. As discussed in Sect. 13.1, the electric and magnetic fields form a pair of observables analogous to the position and momentum of a simple harmonic oscillator. Accordingly, they obey a similar uncertainty relation
$$\Delta E\Delta B \geqslant \left( {{\text{constant}}} \right)\hbar /2$$
(17.2)
.
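As a worked step (my own addition, not part of the abstract): inserting the canonical commutator $$\left[ {x,p} \right] = i\hbar$$ into the general relation gives $$\Delta x\Delta p \geqslant \frac{1}{2}\left| {\left\langle {\left[ {x,p} \right]} \right\rangle } \right| = \frac{\hbar }{2}$$ which is exactly (17.1).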
http://math.stackexchange.com/questions/844042/homeomorphism-proof | # Homeomorphism proof
$\mathbb{R}^2-(0 \times \mathbb{R}_+) \approx \mathbb{R}^2$
Now consider the map that sends the line $(-1 \times \mathbb{R}_+)$ to $(0 \times \mathbb{R}_+)$. And then continue this inductively. Every function is a map, because it is a translation. The composition of any finite number of maps is a map. The question I have is whether the countably infinite composition of these maps is a map. Thank you.
There's no meaningful definition of an infinite composition of maps. If $s(n)=n+1$ then what does the map $s^{\infty}$ map $n$ to? – Dan Rust Jun 22 '14 at 22:46
In the plane you can find simple homeomorphisms: first contract $R^2$ onto $R^*_+\times R$ via $(x,y)\mapsto(e^x,y)$, and then take the square by identifying the real plane with the complex numbers: $z\mapsto z^2$. This will give you a homeomorphism $R^2\to R^2\setminus R_-\times 0$ which you can then rotate to where you want it. – Olivier Bégassat Jun 22 '14 at 22:50
I guess, @Mike meant the map $(n,x)\mapsto (n+1,x)$ if $n\in\Bbb Z,\ n<0$ and $x>0$. – Berci Jun 22 '14 at 22:50
@Berci: yeah, that's basically what I meant. Additionally, all other points are mapped to themselves. – Mike Jun 22 '14 at 23:00
But this is not continuous on points of $\{n\}\times\Bbb R^{\ge 0}$ for $n<0,\,n\in\Bbb Z$. – Berci Jun 22 '14 at 23:02
Hint: Use polar coordinates, the angle measured starting from the given ray $\{0\}\times\Bbb R^+$ (i.e. the $y$-axis), then deform the angles from $(0,360^\circ)$ to $(90^\circ,270^\circ)$, thus you get the open half plane.
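To make the hint explicit (my own formulas, assuming the deleted set is the closed ray $\{0\}\times[0,\infty)$, as the later comments about $\Bbb R^{\ge 0}$ suggest): write each remaining point in polar form with radius $r>0$ and angle $\theta\in(0,2\pi)$ measured counterclockwise from the positive $y$-axis. Then $$h_1(r,\theta)=\left(r,\ \frac{\pi}{2}+\frac{\theta}{2}\right)$$ halves the angular opening and maps the slit plane homeomorphically onto the open lower half-plane $\{(x,y):y<0\}$, and composing with $$h_2(x,y)=(x,\ \ln(-y))$$ gives a homeomorphism onto $\Bbb R^2$; both maps are continuous with continuous inverses (angle-doubling and $y\mapsto -e^{y}$, respectively).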
What is the image of the point $(0,-1)$ under this 'map'? If you can't answer this question then it is not a map. – Dan Rust Jun 22 '14 at 22:56
@Mike a small ball around $(0,0)$ has non-open preimage because the preimage has two connected components, one of which is the single point $\{(-1,0)\}$. – Dan Rust Jun 22 '14 at 23:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9146397709846497, "perplexity": 255.35598363449512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257829970.64/warc/CC-MAIN-20160723071029-00093-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://astronomy.stackexchange.com/questions/44893/would-we-have-spotted-the-ascent-stage-of-apollo-11s-eagle-if-it-was-still-in-o/44896 | # Would we have spotted the ascent stage of Apollo 11's Eagle if it was still in orbit around the Moon?
This video: Is Apollo 11's Lunar Module Still In Orbit Around The Moon 52 Years Later? claims, based on orbital simulations, that there is a chance that the lunar module of Apollo 11 might still be orbiting the Moon.
Is this claim reasonable from an astronomical point of view? I.e., are there telescopes on Earth which would have spotted this presumably shiny object in lunar orbit? After all, there are space rocks we know about that are smaller than this spacecraft.
A related but different question in Space SE: Could the ascent stage of Apollo 11's Eagle still orbit the Moon?
### Background/Video
Starting around minute 6 of the video frozen orbits are discussed, and some example GMAT simulations are run to explain how lunar orbits can be simulated including up to order 160 in the Moon's gravitational spherical harmonics, which have been very accurately mapped by the GRAIL mission.
The video then links to the study itself:
The Apollo 11 “Eagle” Lunar Module ascent stage was abandoned in lunar orbit after the historic landing in 1969. Its fate is unknown. Numerical analysis described here provides evidence that this object might have remained in lunar orbit to the present day. The simulations show a periodic variation in eccentricity of the orbit, correlated to the selenographic longitude of the apsidal line. The rate of apsidal precession is correlated to eccentricity. These two factors appear to interact to stabilize the orbit over the long term.
### "Would we have spotted the ascent stage of Apollo 11's Eagle if it was still in orbit around the Moon?"
It's unclear; maybe.
Here's how it might have been spotted.
The Space SE question Why was the 100m Green Bank dish needed together with DSN's 70m Goldstone dish to detect Chandrayaan-1 in lunar orbit? and its answers explain that delay-doppler radar from the very large diameter (= narrow beam) radar transmitter at Arecibo and the very large diameter (= narrow beam + sensitive) receiver at Green Bank were able to pick up the tiny reflection of Chandrayaan-1 in its orbit 200 km away from the surface of the Moon.
This was done by separating the spacecraft's reflection from that of the $$10^{12}$$ times larger Moon both spatially (using these large $$\lambda/D$$ dishes) and in frequency space, since the Doppler shift of a 200 km circular lunar orbit with a velocity of about 1600 m/s at 2380 MHz will be about 25 kHz.
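That Doppler figure is easy to check (a quick sketch of mine, not from the original answer; the factor 2 accounts for the radar round trip):

```python
# Two-way Doppler shift of a low lunar orbiter seen by a ground-based radar
c = 299_792_458.0   # speed of light, m/s
f_tx = 2380e6       # transmit frequency, Hz (Arecibo S-band planetary radar)
v = 1600.0          # approximate orbital velocity in low lunar orbit, m/s

doppler = 2 * v * f_tx / c
print(f"maximum two-way Doppler shift: {doppler / 1e3:.1f} kHz")  # ~25.4 kHz
```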
note: I don't know yet if they did in fact use the highest frequency available at Arecibo; there are lower ERPs at 430 and 47 MHz as well, and Green Bank can receive down to 290 MHz.
They found Chandrayaan-1 this way, and it's unclear if that data would have picked up the much larger Eagle as well or not. 1.22 $$\lambda/D$$ for Arecibo's radar is only 1.7 arc minutes, so unless they really looked for the Eagle they could have easily missed it.
From the linked question in Space SE:
above: "Radar imagery acquired of the Chandrayaan-1 spacecraft as it flew over the moon's south pole on July 3, 2016. The imagery was acquired using NASA's 70-meter (230-foot) antenna at the Goldstone Deep Space Communications Complex in California. This is one of four detections of Chandrayaan-1 from that day." Credit: NASA/JPL-Caltech. From here
above: Cropped section of the previous figure to draw attention to "The white box in the upper-right corner of the animation depicts the strength of echo." Credit: NASA/JPL-Caltech. From here
There are a couple of inconsistencies with the theory presented in the video and the paper it is based on. Apparently, the author(s) are suggesting that Eagle did not crash as it was/is in a 'frozen' orbit. However, frozen orbits are thought to occur only for orbits with higher inclinations (in fact only for 4 specific inclinations: 27, 50, 76 and 86 deg) (see https://science.nasa.gov/science-news/science-at-nasa/2006/06nov_loworbit/ ). But the Apollo 11 module had a nearly equatorial orbit, and a small satellite called PFS-2 released by a later Apollo mission, having a small inclination as well, crashed after only about 1 month. So the argument doesn't quite add up, with both objects having a very similar orbit. The author(s) should run their calculations for PFS-2 as well, which we know crashed, and see whether they can replicate this in their calculations. If not, it would show that their calculations are erroneous and their conclusion therefore unfounded.
The brightness of the Eagle ascent module should make it in principle detectable in parts of the orbit where it is not in front of the moon. From its distance and size one can estimate that it would be about one million times less bright than the Hubble Space Telescope (which appears as about a star of magnitude 1), so it should be like a star of magnitude 16. With a telescope of 20 inch diameter or more, this should theoretically be spottable if a sufficient magnification is used (the surface brightness of the background will be diminished with the inverse of the magnification squared, whilst the object, being essentially a point source, will stay at the same brightness). I have heard from some amateur astronomers that they can see magnitude 3 stars during broad daylight with their telescope if they know where to look. Now the moon is 14 magnitudes less bright than the sun, so one should be able to see objects with magnitude 17 during broad (full)moonlight, maybe even fainter if increased magnification is used (provided of course the telescope is powerful enough for this in the first place, i.e. if it has a diameter of 20 inches or more).
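The magnitude arithmetic in the previous paragraph can be checked in one line (taking the assumed HST magnitude of about 1 as the reference):

```python
import math

hst_mag = 1.0      # apparent magnitude assumed above for the Hubble Space Telescope
flux_ratio = 1e6   # "about one million times less bright"

eagle_mag = hst_mag + 2.5 * math.log10(flux_ratio)
print(eagle_mag)   # 16.0 - i.e. roughly a magnitude 16 star
```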
And with radar, other satellites orbiting the moon have indeed been detected (see https://moon.nasa.gov/news/12/new-radar-technique-finds-lost-lunar-spacecraft/ ) but not the Eagle module.
(note: I have substantially edited my answer, so some of the comments below may now not apply anymore)
• @zabop The landing modules were crashed deliberately as a seismic experiment, but in case of Apollo 11 the seismometer failed after 3 weeks, which is why it may have missed the crash (see references in my edited answer). The estimate regarding the optical visibility is my own calculation. In any case, satellites of this size (or even smaller) orbiting the moon can and have been detected by radar (see edited answer) Jul 17, 2021 at 22:27
• @JohnHoltz It may be borderline for viewing through the atmosphere, but 60 miles above the moon would be about an arcminute, so it is a fair bit from the moon's disc if you have a sufficient magnification, and with maybe some post-processing you could make it better visible (I am just making an educated guess here; I am not in a position to confirm this from practical experience (my last telescope observation sessions date many years back)). It definitely should not be a problem from outside the earth's atmosphere. And with radar you should be able to detect it as well (see my edited answer) Jul 17, 2021 at 22:51
• According to Wikipedia, at least, the ascent stages of the LMs for Apollo 12 and 14–17 were crashed into the lunar surface, whereas the ascent stage of Apollo 11's LM was left in lunar orbit, so this means that even if we restrict ourselves to those Apollo missions that actually landed on the moon, it is still not correct that all LMs were crashed into the moon. Eagle's ascent stage may have crashed into the moon as well during the last 50 years, but it was not deliberate, and there are in fact papers showing that it may not have crashed at all. Jul 18, 2021 at 9:20
• @Thomas, you should have watched the OP's video before posting this. It is well-reasoned and informative, especially about lunar orbit degradation. Jul 18, 2021 at 9:35
• So, what you are saying is that when you say in your answer that "all the lunar ascent modules were crashed deliberately back onto the moon", what you really mean is "all the lunar ascent modules were crashed deliberately back onto the moon, except the ones from Apollo 4, 5, 6, 8, 10, 11, and 13"? In that case, it might be a good idea to clarify your answer, since the question is explicitly about Apollo 11's LM, which was left in lunar orbit and not "crashed deliberately back onto the moon", and your answer is explicitly only about the LMs that were crashed deliberately into the moon. Jul 18, 2021 at 9:57
I first wanted to edit this into my other answer here, but since this, even though being highly relevant for the issue, does not directly address the OP's question(s), I decided to add this as a separate answer :
The orbital elements used in the cited work for simulating the orbit of the Eagle ascent stage after being jettisoned from Apollo 11 are apparently those of the command module as calculated from the orbit data in the Apollo 11 mission report (Table 7-II) (the lunar module was not really of any interest anymore at this point, so it probably was not systematically tracked after that). This resulted in the following orbital elements they tried for their simulations for the lunar module
The eccentricity of the orbit for these three cases is practically the same: 0.0037, 0.0038, 0.0035 for the nominal, maximum and minimum case respectively.
However, in the Apollo 11 Flight Journal they mention these figures explicitly for the lunar module shortly after 'Ignition of Trans-Earth Injection burn' (about 5 and 7 hours after jettison of the LM)
just before 135:47:24 mission time: Perilune 100.7 km, Apolune 118.7 km
just before 137:30:12 mission time: Perilune 100.7 km Apolune 119.3 km
(they are saying '-cynthion' instead of '-lune' there)
This results in eccentricities 0.0049 and 0.0050 respectively, so substantially higher than assumed for the simulations based on the command module orbit at the time of separation.
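For reference, the eccentricities follow directly from those perilune/apolune altitudes (sketch of mine; a mean lunar radius of 1737.4 km is assumed):

```python
R_MOON = 1737.4  # mean lunar radius in km (assumed)

def eccentricity(peri_alt_km, apo_alt_km):
    """Eccentricity from perilune/apolune altitudes above the mean lunar radius."""
    r_p = R_MOON + peri_alt_km
    r_a = R_MOON + apo_alt_km
    return (r_a - r_p) / (r_a + r_p)

print(f"{eccentricity(100.7, 118.7):.4f}")  # 0.0049
print(f"{eccentricity(100.7, 119.3):.4f}")  # 0.0050
```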
So the author may want to revise the orbital parameters in this sense, and also apply the simulation to the PFS-2 satellite in order to remove any ambiguities here and make his results more conclusive. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6869977116584778, "perplexity": 1342.9147707568027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00733.warc.gz"} |
http://archives.datapages.com/data/bulletns/2014/02feb/BLTN12226/BLTN12226.HTM | # AAPG Bulletin
Abstract
AAPG Bulletin, V. 98, No. 2 (February 2014), P. 213–226.
DOI:10.1306/06171312226
## A surprising asymmetric paleothermal anomaly around El Gordo diapir, La Popa Basin, Mexico
### Andrew D. Hanson1
1Department of Geoscience, University of Nevada Las Vegas, Las Vegas 89154-4010, Nevada; [email protected]
## ABSTRACT
Thirty-seven mudstone samples were collected from the uppermost Lower Mudstone Member of the Potrerillos Formation in El Gordo minibasin within La Popa Basin, Mexico. The unit is exposed in a circular pattern at the earth's surface and is intersected by El Gordo diapir in the northeast part of the minibasin. Vitrinite reflectance (Ro) results show that samples along the eastern side of the minibasin (i.e., south of the diapir) are mostly thermally immature to low maturity (Ro ranges from 0.53% to 0.64%). Vitrinite values along the southern, western, and northwestern part of the minibasin range between 0.67% and 0.85%. Values of Ro immediately northwest of the diapir are the highest, reaching a maximum of 1.44%. The results are consistent with two different possibilities: (1) that the diapir plunges to the northwest, or (2) that a focused high-temperature heat flow existed along just the northwest margin of the diapir. If the plunging diapir interpretation is correct, then the thermally immature area south of the diapir was in a subsalt position, and the high-maturity area northwest of the diapir was in a suprasalt position prior to Tertiary uplift and erosion. If a presumed salt source at depth to the northwest of El Gordo also fed El Papalote diapir, which is located just to the north of El Gordo diapir, then the tabular halokinetic sequences that are found only along the east side of El Papalote may be subsalt features. However, if the diapir is subvertical and the high-maturity values northwest of the diapir are caused by prolonged, high-temperature fluid flow along just the northwestern margin of the diapir, then both of these scenarios are in disagreement with previously published numerical models. This disagreement arises because the models predict that thermal anomalies will extend outward from a diapir a distance roughly 1.5 times the radius of the diapir, but the results reported here show that the anomalous values on one side of the diapir are about two times the radius, whereas they are as much as five times the radius on the other side of the diapir. The results indicate that strata adjacent to salt margins may experience significantly different heat histories adjacent to different margins of diapirs that result in strikingly different diagenetic histories, even at the same depth.
http://hal.in2p3.fr/in2p3-00749283 | # Observation of D0-D0bar oscillations
Abstract : We report a measurement of the time-dependent ratio of D0->K+pi- to D0->K-pi+ decay rates in D*+-tagged events using 1.0 fb^{-1} of integrated luminosity recorded by the LHCb experiment. We measure the mixing parameters x'^2 = (-0.9 ± 1.3)×10^{-4}, y' = (7.2 ± 2.4)×10^{-3} and the ratio of doubly-Cabibbo-suppressed to Cabibbo-favored decay rates R_D = (3.52 ± 0.15)×10^{-3}, where the uncertainties include statistical and systematic sources. The result excludes the no-mixing hypothesis with a probability corresponding to 9.1 standard deviations and represents the first observation of D0-D0bar oscillations from a single measurement.
Document type:
Journal article
Physical Review Letters, American Physical Society, 2013, 110, pp.101802. 〈10.1103/PhysRevLett.110.101802〉
http://hal.in2p3.fr/in2p3-00749283
Contributor: Sabine Starita
Submitted on: Wednesday, November 7, 2012 - 11:00:33
Last modified on: Wednesday, March 21, 2018 - 18:56:57
### Citation
S. Barsuk, O. Callot, J. He, B. Jean-Marie, J. Lefrançois, et al.. Observation of D0-D0bar oscillations. Physical Review Letters, American Physical Society, 2013, 110, pp.101802. 〈10.1103/PhysRevLett.110.101802〉. 〈in2p3-00749283〉
https://pbiecek.github.io/ema/InstanceLevelExploration.html | # 5 Introduction to Instance-level Exploration
Instance-level exploration methods help to understand how does a model yield a prediction for a particular single observation. We may consider the following situations as examples:
• We may want to evaluate effects of explanatory variables on the model’s predictions. For instance, we may be interested in predicting the risk of heart attack based on a person’s age, sex, and smoking habits. A model may be used to construct a score (for instance, a linear combination of the explanatory variables representing age, sex, and smoking habits) that could be used for the purposes of prediction. For a particular patient, we may want to learn how much do the different variables contribute to the score?
• We may want to understand how would the model’s predictions change if values of some of the explanatory variables changed? For instance, what would be the predicted risk of heart attack if the patient cut the number of cigarettes smoked per day by half?
• We may discover that the model is providing incorrect predictions and we may want to find the reason. For instance, a patient with a very low risk-score experienced a heart attack. What has driven the wrong prediction?
In this part of the book, we describe the most popular approaches to instance-level exploration. They can be divided into three classes:
• One approach is to analyze how does the model’s prediction for a particular instance differ from the average prediction and how can the difference be distributed among explanatory variables? This method is often called the “variable attributions” approach. An example is provided in panel A of Figure 5.1. Chapters 6-8 present various methods for implementing this approach.
• Another approach uses the interpretation of the model as a function and investigates the local behaviour of this function around the point (observation) of interest $$\underline{x}_*$$. In particular, we analyze the curvature of the model response (prediction) surface around $$\underline{x}_*$$. In case of a black-box model, we may approximate it with a simpler glass-box model around $$\underline{x}_*$$. An example is provided in panel B of Figure 5.1. Chapter 9 presents the Local Interpretable Model-agnostic Explanations (LIME) method that exploits the concept of a “local model.”
• Yet another approach is to investigate how does the model’s prediction change if the value of a single explanatory variable changes? The approach is useful in the so-called “What-if” analyses. In particular, we can construct plots presenting the change in model-based predictions induced by a change of a single explanatory variable. Such plots are usually called ceteris-paribus (CP) profiles. An example is provided in panel C in Figure 5.1. Chapters 10-12 introduce the CP profiles and methods based on them. A minimal code sketch of this idea follows right after this list.
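To make the CP idea concrete, here is a minimal sketch (ours for illustration, not from the book; the model, the patient observation, and the scikit-learn-style predict() interface are assumptions):

```python
import numpy as np
import pandas as pd

def ceteris_paribus_profile(model, observation: pd.Series, variable: str, grid):
    """Vary one explanatory variable over a grid while keeping all other
    variables of the observation of interest fixed, and record predictions."""
    profile = pd.DataFrame([observation] * len(grid))
    profile[variable] = grid
    profile["prediction"] = model.predict(profile[observation.index])
    return profile[[variable, "prediction"]]

# e.g. how would the predicted risk change with age for one particular patient?
# cp = ceteris_paribus_profile(risk_model, patient, "age", np.linspace(20, 80, 61))
```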
Each method has its own merits and limitations. They are briefly discussed in the corresponding chapters. Chapter 13 offers a comparison of the methods. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5499217510223389, "perplexity": 466.5047356608505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107900860.51/warc/CC-MAIN-20201028191655-20201028221655-00430.warc.gz"} |
https://schneide.blog/category/software-development/web-applications/ | ## Redux-Toolkit & Solving “ReferenceError: Access lexical declaration … before initialization”
Last week, I had a really annoying error in one of our React-Redux applications. It started with a believed-to-be-minor cleanup in our code, culminated in four developers staring at our code in disbelief and quite some research, and resulted in some rather feasible solutions that, in hindsight, look quite obvious (as is usually the case).
The tech landscape we are talking about here is a React webapp that employs state management via Redux-Toolkit / RTK, the abstraction layer above Redux to simplify the majority of standard use cases one has to deal with in current-day applications. Personally, I happen to find that useful, because it means a perceptible reduction of boilerplate Redux code (and some dependencies that you would use all the time anyway, like redux-thunk) while maintaining compatibility with the really useful Redux DevTools, and not requiring many new concepts. As our application makes good use of URL routing in order to display very different subparts, we implemented our own middleware that does the data fetching upfront in a major step (sometimes called „hydration“).
One of the basic ideas in Redux-Toolkit is the management of your state in substates called slices that aim to unify the handling of actions, action creators and reducers, essentially what was previously described as Ducks pattern.
We provide unit tests with the jest framework, and generally speaking, it is more productive to test general logic instead of React components or Redux state updates (although we sometimes make use of that, too). Jest is very modular in the sense that you can add tests for any JavaScript function from wherever in your testing codebase, the only thing, of course, is that these functions need to be exported from their respective files. This means that a single jest test only needs to resolve the imports that it depends on, recursively (i.e. the dependency tree), not the full application.
Now my case was as follows: I wrote a test that essentially was just testing a small switch/case decision function. I noticed there was something fishy when this test resulted in errors of the kind
• Target container is not a DOM element. (pointing to ReactDOM.render)
• No reducer provided for key “user” (pointing to node_modules redux/lib/redux.js)
• Store does not have a valid reducer. Make sure the argument passed to combineReducers is an object whose values are reducers. (also …/redux.js)
This meant there was too much going on. My unit test should neither know of React nor Redux, and as the culprit, I found that one of the imports in the test file used another import that marginally depended on a slice definition, i.e.
///////////////////////////////
// test.js
///////////////////////////////
import {helper} from "./Helpers.js"
...
///////////////////////////////
// Helpers.js
///////////////////////////////
import {SOME_CONSTANT} from "./state/generalSlice.js"
...
Now I only needed some constant located in generalSlice, so one could easily move this to some “./const.js”. Or so I thought.
When I removed the generalSlice.js dependency from Helpers.js, the React application broke. That is, in a place totally unrelated:
ReferenceError: can't access lexical declaration 'loadPage' before initialization
http:/.../static/js/main.chunk.js:11198:100
./src/state/topicSlice.js/<
C:/.../src/state/topicSlice.js:140
> [loadPage.pending]: (state, action) => {...}
From my past failures, I instantly recalled: This is a problem with circular dependencies.
Alas, topicSlice.js imports loadPage.js and loadPage.js imports topicSlice.js, and while some cases allow such a circle to be handled by webpack or similar bundlers, in general, such import loops can cause problems. And while I knew that before, this case was just difficult for me, because of the very nature of RTK.
So this is a thing with the RTK way of organizing files:
• Every action that clearly belongs to one specific slice, can directly be defined in this state file as a property of the “reducers” in createSlice().
• Every action that is shared across files or consumed in more than one reducer (in more than one slice), can be defined as one of the “extraReducers” in that call.
• Async logic like our loadPage is defined in thunks via createAsyncThunk(), which gives you a place suited for data fetching etc. that always comes with three action creators like loadPage.pending, loadPage.fulfilled and loadPage.rejected
• This looks like
///////////////////////////////
// topicSlice.js
///////////////////////////////
const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        setTopic: (state, action) => {
            state.topic = action.payload;
        },
        ...
    },
    extraReducers: {
        // loadPage is imported from './loadPage.js' – together with the
        // import in loadPage.js this forms the cycle described below
        [loadPage.pending]: (state, action) => {
            state.topic = initialState.topic;
        },
    },
    ...
});
export const { setTopic, ... } = topicSlice.actions;
And loadPage itself was a rather complex action creator (thunk), as it could cause state dispatches as well. In simplified form, it was built as:
///////////////////////////////
// loadPage.js
///////////////////////////////
import {createAsyncThunk} from "@reduxjs/toolkit";
import {setTopic} from './topicSlice.js';

export const loadPage = createAsyncThunk('loadPage', async (arg, thunkAPI) => {
    const response = await fetchAllOurData();
    if (someCondition(response)) {
        await thunkAPI.dispatch(setTopic(SOME_TOPIC));
    }
    return response;
});
You clearly see that import loop: loadPage needs setTopic from topicSlice.js, topicSlice needs loadPage from loadPage.js. This was rather old code that worked before, so it appeared to me that this is no problem per se – but solving that completely different dependency in Helpers.js (SOME_CONSTANT from generalSlice.js), made something break.
That was quite weird. It looked like this not-really-required import of SOME_CONSTANT made ./generalSlice.js load first, and along with it a certain set of imports, including some of the dependencies of either loadPage.js or topicSlice.js, so that by the time their dependencies were loaded, there was no import loop required anymore. However, it did not appear advisable to trace that fact to its core because the application has grown a bit already. We needed a solution.
As I told before, it required the brainstorming of multiple developers to find a way of dealing with this. After all, RTK appeared mature enough for me to dismiss “that thing just isn’t fully thought through yet”. Still, code-splitting is such a basic feature that one would expect some answer to that. What we did come up with was
1. One could address the action creators like loadPage.pending (which is created as a result of RTK’s createAsyncThunk) by their string equivalent, i.e. ["loadPage/pending"] instead of [loadPage.pending] as key in the extraReducers of topicSlice. This will be a problem if one ever renames the action from “loadPage” to whatever (and your IDE and linter can’t help you as much with errors), which could be solved by writing one’s own action name factory that can be stashed away in a file with no own imports (a small sketch of this follows after the list).
2. One could re-think the idea that setTopic should be among the normal reducers in topicSlice, i.e. being created automatically. Instead, it can be created in its own file and then being referenced by loadPage.js and topicSlice.js in a non-circular manner as export const setTopic = createAction('setTopic'); and then you access it in extraReducers as [setTopic]: ... .
3. One could think hard about the construction of loadPage. This whole thing is actually a hint that loadPage does too many things on too many different levels (i.e. it violates at least the principles of Single Responsibility and Single Level of Abstraction).
1. One fix would be to at least do away with the automatically created loadPage.pending / loadPage.fulfilled / loadPage.rejected actions and instead define custom createAction("loadPage.whatever") with whatever describes your intention best, and put all these in your own file (as in idea 2).
2. Another fix would be splitting the parts of loadPage into other thunks, and then being able to react to the automatically created pending / fulfilled / rejected actions of each.
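For completeness, a tiny sketch of the action name factory mentioned in idea 1 (my own illustration, not code from our project) – kept in a file without any imports of its own:

```javascript
// actionNames.js – dependency-free helper for the string keys of RTK thunks
export const thunkActionType = (name, phase) => `${name}/${phase}`;

// in topicSlice.js the extraReducers key would then read:
// [thunkActionType("loadPage", "pending")]: (state, action) => { /* ... */ },
```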
I chose idea 2 because it was the quickest, while allowing myself to let idea 3.1 rest a bit. I guess that next time, I should favor that because it makes the developer’s intention (as in… mine) more clear and the Redux DevTools output even more descriptive.
In case you’re still lost, my solution looks as follows:
///////////////////////////////
// sharedTopicActions.js
///////////////////////////////
import {createAction} from "@reduxjs/toolkit";
export const setTopic = createAction('topic/set');
//...
///////////////////////////////
// topicSlice.js
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
import {loadPage} from "./loadPage"; // no longer circular – loadPage.js does not import from this file anymore
const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        ...
    },
    extraReducers: {
        [setTopic]: (state, action) => {
            state.topic = action.payload;
        },
        [loadPage.pending]: (state, action) => {
            state.topic = initialState.topic;
        },
    },
    ...
});
///////////////////////////////
// loadPage.js, only change in this line:
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
// ... Rest unchanged
So there’s a simple tool to break circular dependencies in more complex Redux-Toolkit slice structures. It was weird that it never occurred to me before, i.e. up until this day, I always was able to solve circular dependencies by shuffling other parts of the import.
My problem is fixed. The application works as expected and now all the tests work as they should, everything is modular enough and the required change was not a major structural redesign. It required some hard thinking but had a rather simple solution. I have trust in RTK again, and one can be safe again in the assumption that JavaScript imports are at least deterministic. Although I will never do the work to analyse what it actually was with my SOME_CONSTANT import that unknowingly fixed the problem beforehand.
Is there any reason to disfavor idea 3.1, though? Feel free to comment your own thoughts on that issue 🙂
## The do-it-yourself rickroll
This is a funny story from a while ago when we were tasked to play audio content in a web application and we used the opportunity to rickroll our web frontend developer. Well, we didn’t exactly rickroll him, we made him rickroll himself.
Our application architecture consisted of a serverside API that could answer a broad range of requests and a client side web application that sends requests to this API. This architecture was sufficient for previous requirements that mostly consisted of data delivery and display on behalf of the user. But it didn’t cut it for the new requirement that needed audio messages, which were played to alert the operators on site, to be sent through the web and played in the browser application, preferably without noticeable delay.
The audio messages were created by text-to-speech synthesis and contained various warnings and alerts that informed the operators of important incidents happening in their system. Because the existing system played the alerts “on site” and all operators suddenly had to work from home (you can probably guess the date range of this story now), the alerts had to follow them to their new main platform, the web application.
We introduced a web socket channel from the server to each connected client application and sent update “news” through the socket. One type of news should be the “audio alert” that contains a payload of a Base64-encoded wave file. We wanted the new functionality to be up and usable on short notice. So we developed the server side first, emitting faked audio alerts on a 30 seconds trigger.
The only problem was that we didn’t have a realistic payload at hand, so we created one. It was a lengthy Base64 string that could be transported to the client application without problem. The frontend developer printed it to the browser console and went on to transform it back to waveform and play it as sound.
Just some moments later, we got some irreproducible messages in the team chat. The transformation succeeded on the first try. Our developer heard the original audio content. This is what he heard, every 30 seconds, again and again:
Yes, you’ve probably recognized the URL right away. But there was no URL in our case. Even if you are paranoid enough to recognize the wave bytes, they were Base64 encoded. Nobody expects a rickroll in Base64!
Our frontend developer had developed the ingredients for his own rickroll and didn’t suspect a thing until it was too late.
This confirmed that our new feature worked. Everybody was happy, maybe a little bit too happy for the occasion. But the days back then lacked some funny moments, so we appreciated it even more.
There are two things that I want to explain in more detail:
The tradition of rickrolling is a strange internet culture thing. Typically, it consists of a published link and an irritated overhasty link clicker. There are some instances were the prank is more elaborate, but oftentimes, it relies on the reputation of the link publisher. To have the “victim” assemble the prank by himself is quite hilarious if you already find “normal” rickrolls funny.
Our first attempt to deliver the whole wave file in one big Base64 string got rejected really fast by the customer organization’s proxy server. We had to make the final implementation even more complex: The server sends an “audio alert” news with a unique token that the client can use to request the Base64 content from the classic API. The system works with this architecture, but nobody ever dared to try what the server returns for the token “dQw4w9WgXcQ” until this day…
## Running a Micronaut Service in Docker
Micronaut is a state-of-the-art micro web framework targeting the Java Virtual Machine (JVM). I quite like that you can implement your web application or service using either Java, Groovy or Kotlin – all using micronaut as their foundation.
You can easily tailor it specifically to your own needs and add features as need be. Micronaut Launch provides a great starting point and get you up and running within minutes.
## Prepared for Docker
Web services and containers are a perfect match. Containers and their management software ease running and supervising web services a lot. It is feasible to migrate or scale services and run them separately or in some kind of stack.
Micronaut comes prepared for Docker and allows you to build Dockerfiles and Docker images for your application out-of-the-box. The gradle build files of a micronaut application offer handy tasks like dockerBuild or dockerfile to aid you.
A thing that simplifies deploying one service in multiple scenarios is configuration using environment variables (env vars, ENV).
I am a big fan of using environment variables because this basic mechanism is supported almost everywhere: Windows, Linux, docker, systemd, all (?) programming languages and many IDEs.
It is so simple everyone can deal with it and allows for customization by developers and operators alike.
## Configuration using environment variables
Usually, micronaut configuration is stored in YAML format in a file called application.yml. This file is packaged in the application’s executable jar file making it ready to run. Most of the configuration will be fixed for all your deployments but things like the database connection settings or URLs of other services may change with each deployment and are most likely different in development.
Gladly, micronaut provides a mechanism for externalizing configuration.
That way you can use environment variables in application.yml while providing defaults for development for example. Note that you need to put values containing : in backticks. See an example of a database configuration below:
datasources:
  default:
    url: ${JDBC_URL:`jdbc:postgresql://localhost:5432/supermaster`}
    driverClassName: org.postgresql.Driver
    username: ${JDBC_USER:db_user}
    dialect: POSTGRES
Having prepared your application this way you can run it using the command line (CLI), via gradle run or with docker run/docker-compose providing environment variables as needed.
# Running the app "my-service" using docker run
docker run --name my-service -p 80:8080 -e JDBC_URL=jdbc:postgresql://prod.databasehost.local:5432/servicedb -e JDBC_USER=mndb -e JDBC_PASSWORD="mnrocks!" my-service
# Running the app using java and CLI
java -DJDBC_URL=jdbc:postgresql://prod.databasehost.local:5432/servicedb -DJDBC_USER=mndb -DJDBC_PASSWORD="mnrocks!" -jar application.jar
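The same variables can also be provided from a compose file when using docker-compose; a minimal sketch (image name, host name and credentials are placeholders, not taken from this post):

```yaml
# docker-compose.yml
version: "3.8"
services:
  my-service:
    image: my-service
    ports:
      - "80:8080"
    environment:
      JDBC_URL: jdbc:postgresql://db:5432/servicedb
      JDBC_USER: mndb
      JDBC_PASSWORD: "mnrocks!"
```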
## CSS: z-index can be weird.
Before I start this post, there are three things I want to state:
1. If you think the “z-index” is quite simple, you probably never bothered to care.
2. When in doubt, one can always read the official specification
3. There are multiple good elaborations available already (see bottom of this post), but I was missing a comprehensive list of the most important points.
##### Quick Motivation (skip that if you only want the facts)
Yes. We know: The web has become a place which it never intended to be. Nowadays, it seems to accommodate everything. You want live control of measurement devices? 3D camera applications? Advanced data wizardry? Or in my case, a kind of sophisticated layout engine? … Web Dev in 2021 gives you the impression that it is all merely a matter of time (or cost).
But then, there are always the caveats. Some semi-suggestive idea turns out to be not that accessible at all. Our user experience considerations made us implement “kind of basic” windows (in the operating system sense), that appear at times and disappear at other times, and give the user maximum information while maintaining minimum clutter.
Very early on, I noticed that I had to implement my own drag’n’drop functionality, because HTML5 isn’t really there yet. But I consider that as something advanced, which also has its idiosyncrasy in every conceivable use case, so that’s ok.
But then again, a somewhat-native-feeling windowing system (even if they are only rectangles with text) makes use of a seemingly simple thing: That stuff gets drawn over other stuff in the right order. And this comes with certain peculiarities.
The painting order of HTML elements is divided into stacking contexts. Stacking contexts can be stacked above or below each other, and most of the time they behave as expected, but sometimes they do not. So, for the roundup…
##### Stacking Context – Essential Rules
(This assumes CSS knowledge, but don’t hesitate to comment if you have any questions.)
• If you set any z-index, you set that z-index within the current stacking context.
• You can never enter an outside stacking context, only create new ones inside
• One stacking context as a whole is always either above or below other stacking contexts as a whole
• The root stacking order (from the <html> element) is as you expect:
• Further down in the HTML source means more upfront
• Higher z-index means “more upfront”, but
• z-index doesn’t mean a thing if your neighboring elements do not live inside the same stacking context! (see the small example below this list)
• Within any parent stacking context, new child stacking contexts are created by
• Setting CSS “position” to something other than “static”
• Setting a z-index different from the default value “auto”
• For clarity: “z-index: 0;” is nearly the same as “z-index: auto;”, but the latter doesn’t open up a child stacking context, while the former does.
• Setting a z-index other than “auto” on a child of a “display: flex;” or “display: grid;” container
• Setting CSS: “isolation: isolate;” (what is that even?)
• Setting CSS: “will-change” to something non-initial (what is that even?)
• Setting one of the “graphically advanced” CSS properties like
• opacity, transform, filter, mix-blend-mode, clip-path, mask, …
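A small self-contained example of the “z-index doesn’t mean a thing across stacking contexts” rule (my own illustration): the 9999 loses against the 2, because it only counts inside its parent’s context.

```html
<div style="position: relative; z-index: 1;">
  <!-- lives in the stacking context of its parent (z-index: 1) -->
  <div style="position: absolute; z-index: 9999; background: gold;">A (z-index 9999)</div>
</div>
<!-- sibling context with z-index 2 – paints above everything inside the first one -->
<div style="position: relative; z-index: 2; background: tomato;">B (z-index 2)</div>
```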
There is more, but maybe this can help you bugtracing. And two meta-points:
• CSS evolves, so with new features, always have stacking context in mind
• In a framework context (like the React biosphere), you might not know what your imported dependencies do under the hood. Maybe better isolate them.
They have nice illustrations, too.
If everything fails, go back to square one:
https://www.w3.org/TR/CSS2/visuren.html#propdef-z-index
Be aware.
## Flexible React-Redux Hook Mocks in jest & React Testing Library
Best practices in mocking React components aren’t entirely unheard of, even in connection with a Redux state, and even in connection with the quite convenient Hooks description ({ useSelector, useDispatch }).
So, of course the knowledge of a proper approach is at hand. And in many scenarios, it makes total sense to follow their principle of exactly arranging your Redux state in your test as you would in your real-world app.
Nevertheless, there are reasons why one wants to introduce a quick, non-overwhelming unit test of a particular component, e.g. when your system is in a state of high fluctuation because multiple parties are still converging on their interfaces and requirements; and a complete mock would be quite premature, more of a major distraction than of any help.
Proponents of strict TDD would now object, of course. Anyway – Fortunately, the combination of jest with React Testing Library is flexible enough to give you the tools to drill into any of your state-connected components without much knowledge of the rest of your React architecture*
(*) of course, these tests presume knowledge of your current Redux store structure. In the long run, I’d also consider this bad style, but during highly fluctuating phases of development, I’d favour the explicit “this is how the store is intended to look” as safety by documentation.
On a basic test frame, I want to show you three things:
1. Mocking useSelector in a way that allows for multiple calls
2. Mocking useDispatch in a way that allows expecting a specific action creator to be called.
3. Mocking useSelector in a way that allows for mocking a custom selector without its actual implementation
(Upcoming in a future blog post: Mocking useDispatch in a way to allow for async dispatch-chaining as known from Thunk / Redux Toolkit. But I’m still figuring out how to exactly do it…)
So consider your component e.g. as a simple as:
import {useDispatch, useSelector} from "react-redux";
import {importantAction} from "./place_where_these_are_defined";
const TargetComponent = () => {
const dispatch = useDispatch();
const simpleThing1 = useSelector(store => store.thing1);
const simpleThing2 = useSelector(store => store.somewhere.thing2);
return <>
<div>{simpleThing1}</div>
<div>{simpleThing2}</div>
<button title={"button title!"} onClick={() => dispatch(importantAction())}>Do Important Action!</button>
</>;
};
## Multiple useSelector() calls
If we had a single call to useSelector, we’d be as easily done as making useSelector a jest.fn() with a mockReturnValue(). But we don’t want to constrain ourselves to that. So, what works in our example is to actually construct a mock store as a plain object, and give our mocked useSelector a mockImplementation that applies its argument (which, as a selector, is a function of the store) to that store.
Note that for this simple example, I did not concern myself with useDispatch() that much. It just returns a dispatch function of () => {}, i.e. it won’t throw an error but also doesn’t do anything else.
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent;
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';
jest.mock("react-redux", () => ({
useSelector: jest.fn(),
useDispatch: jest.fn(),
}));
describe('Test TargetComponent', () => {
beforeEach(() => {
useDispatchMock.mockImplementation(() => () => {});
useSelectorMock.mockImplementation(selector => selector(mockStore));
})
afterEach(() => {
useDispatchMock.mockClear();
useSelectorMock.mockClear();
})
const useSelectorMock = reactRedux.useSelector;
const useDispatchMock = reactRedux.useDispatch;
const mockStore = {
thing1: 'this is thing1',
somewhere: {
thing2: 'and I am thing2!',
}
};
it('shows thing1 and thing2', () => {
render(<TargetComponent/>);
expect(screen.getByText('this is thing1')).toBeInTheDocument();
expect(screen.getByText('and I am thing2!')).toBeInTheDocument();
});
});
This is surprisingly simple considering that one doesn’t find this example scattered all over the internet. If, for some reason, one would require more stuff from react-redux, you can always spread it in there,
jest.mock("react-redux", () => ({
...jest.requireActual("react-redux"),
useSelector: jest.fn(),
useDispatch: jest.fn(),
}));
but remember that in case you want to build full-fledged test suites, why not go the extra mile to construct your own Test store (cf. link above)? Let’s stay simple here.
## Assert execution of a specific action
We don’t even have to change much to look for the call of a specific action. Remember, we presume that our action creator is doing the right thing already, for this example we just want to know that our button actually dispatches it. E.g. you could have connected that to various conditions, the button might be disabled or whatever, … so that could be less trivial than our example.
We just need to know what the original action creator looked like. In jest language, this is known as spying. We add the blue parts:
// ... next to the other imports...
import * as ourActions from './actions';
//... and below this block
const useSelectorMock = reactRedux.useSelector;
const useDispatchMock = reactRedux.useDispatch;
const importantAction = jest.spyOn(ourActions, 'importantAction');
//...
//... other tests...
it('dispatches importantAction', () => {
render(<TargetComponent/>);
const button = screen.getByTitle("button title!"); // there are many ways to get the Button itself. i.e. screen.getByRole('button') if there is only one button, or in order to be really safe, with screen.getByTestId() and the data-testid="..." attribute.
fireEvent.click(button);
expect(importantAction).toHaveBeenCalled();
});
That’s basically it. Remember, that we really disfigured our dispatch() function. What we can not do this way, is a form of
// arrangement
const mockDispatch = jest.fn();
useDispatchMock.mockImplementation(() => mockDispatch);
// test case:
expect(mockDispatch).toHaveBeenCalledWith(importantAction()); // won't work
Because even if we get a mocked version of dispatch() that way, the spied-on importantAction() call is not the same as the one that happened inside render(). So again. In our limited sense, we just don’t do it. Dispatch() doesn’t do anything, importantAction just gets called once inside.
## Mock a custom selector
Consider now that there are custom selectors which we don’t care about much, we just need them to not throw any error. I.e. below the definition of simpleThing2, this could look like
import {useDispatch, useSelector} from "react-redux";
import {importantAction, ourSuperComplexCustomSelector} from "./place_where_these_are_defined";
const TargetComponent = () => {
const dispatch = useDispatch();
const simpleThing1 = useSelector(store => store.thing1);
const simpleThing2 = useSelector(store => store.somewhere.thing2);
const complexThing = useSelector(ourSuperComplexCustomSelector);
//... whathever you want to do with it....
};
Here, we want to keep it open how exactly complexThing is gained. This selector is considered to already be tested in its own unit test, we just want its value to not-fail and we can really do it like this, blue parts added / changed:
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent;
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';
import {ourSuperComplexCustomSelector} from "./place_where_these_are_defined";
jest.mock("react-redux", () => ({
useSelector: jest.fn(),
useDispatch: jest.fn(),
}));
const mockSelectors = (selector, store) => {
if (selector === ourSuperComplexCustomSelector) {
return true; // or what we want to
}
return selector(store);
}
describe('Test TargetComponent', () => {
beforeEach(() => {
useDispatchMock.mockImplementation(() => () => {});
useSelectorMock.mockImplementation(selector => mockSelectors(selector, mockStore));
})
afterEach(() => {
useDispatchMock.mockClear();
useSelectorMock.mockClear();
})
const useSelectorMock = reactRedux.useSelector;
const useDispatchMock = reactRedux.useDispatch;
const mockStore = {
thing1: 'this is thing1',
somewhere: {
thing2: 'and I am thing2!',
}
};
// ... rest stays as it is
});
This wasn’t as obvious to me as you never know what jest is doing behind the scenes. But indeed, you don’t have to spy on anything for this simple test, there is really functional identity of ourSuperComplexCustomSelector inside the TargetComponent and the argument of useSelector.
## So, yeah.
The combination of jest with React Testing Library is obviously quite flexible in allowing you to choose what you actually want to test. This was good news for me, as testing frameworks in general might try to impose their opinions on your style, which isn’t always bad – but in a highly changing environment as is anything that involves React and Redux, sometimes you just want to have a very basic test case in order to concern yourself with other stuff.
So, without wanting you to lower your style to such basic constructs, I hope this was of some insight for you. In a more production-ready state, I would still go the way of the krawaller.se blog post stated above; it makes sense. I was just here to get you started 😉
A few weeks ago, I heard a nice story about the hidden cost of new features. Imagine a website, driven by a content management system, consisting of text, pictures and fancy styling. When the content management system gets an update, the website developer takes a look at the release notes and finds that a lot of new and cool features are included that you’ll get for free once you update.
So he updates the site, tries it out and publishes it onto the web. A few days later, the customer and owner of the website sends a bug report of some arbitrarily flipped images. There are just short of a hundred images on the website and a handful of them now show up upside down.
Who would update a website and randomly rotate some images?
Why would a content management system decide that exactly these images need a spin?
The answer is not as obvious as one might think.
The latest cause of the effect was a change of the imaging library the content management system uses to deliver the image content. It got upgraded to a new engine that essentially does the same thing as the old one: Take the image file content and put it on the web. But, it does it more thoroughly.
One feature of JPEG images are the EXIF metadata properties. Examples of useful properties are the photography time, the geolocation or the camera model. Some cameras add even more information into the metadata, like exposure time or the camera’s orientation (rotation) during the photographing process. There are cameras that notice if you hold them upside down and store this circumstance into the picture.
Then, there are imaging libraries that just take the pixels and put them on the screen. And there are libraries that know about their domain, read the EXIF metadata, interpret the rotation data and accommodate for that fact. Because, who would like to look at pictures that are displayed totally wrong?
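To illustrate what such a domain-aware library does, here is a sketch using Pillow in Python – not the imaging library of the CMS in this story:

```python
from PIL import Image, ImageOps

ORIENTATION_TAG = 0x0112  # EXIF "Orientation"

def load_upright(path):
    """Open an image and apply the rotation/flip stored in its EXIF metadata."""
    image = Image.open(path)
    orientation = image.getexif().get(ORIENTATION_TAG)
    print(f"EXIF orientation: {orientation}")  # e.g. 3 means "rotated by 180 degrees"
    return ImageOps.exif_transpose(image)      # honours the flag, like the new library
```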
The first version of the content management system’s imaging library didn’t care much about metadata. The new version takes rotation into account.
So, the cause of the suddenly rotated pictures originates with the photographer that happened to work during a workout session or in Australia. This fact was registered and stored by the camera and promptly ignored by the picture editing software and the earlier content management system. It was rediscovered only when the new version went live.
For the customer, this is a random regression. It worked just fine all those years! For the developer, this is a minefield. Every picture could contain an evil rotation information that gets applied someday.
For a security engineer, this is a harmless but perfect example of a persistence attack. You embed malicious payloads into data that do nothing for a long time, but are activated suddenly, without outside intervention, by an unrelated change of system parts towards a “lucky” constellation.
Guess what you can embed into EXIF metadata, too? Javascript or any other form of executable code. And then you wait.
To end this blog entry on a light note, sometimes the payload may just happen to be your last name – True!
## React for the algebra enthusiast – Part 2
In Part 1, I explained how algebra can shed some light on a quite restricted class of react-apps. Today, I will lift one of the restrictions. This step needs a new kind of algebraic structure:
## Categories
Category theory is a large branch of pure mathematics, with many facets and applications. Most of the latter are internal to pure mathematics. Since I have a very special application in mind, I will give you a definition which is less general than the most common ones.
Categories can be thought of as generalized monoids. At the same time, a Category is a labelled, directed multigraph with some extra structure. Here is a picture of a labelled directed multigraph – its nodes are labelled with upper case letters and its edges are labelled with lower case letters:
If such a graph happens to be a category, the nodes are called objects and the edges morphisms. The idea is, that the objects are changed or morphed into other objects by the morphisms. We will write $h:A\to B$ for a morphism from object $A$ to object $B$.
But I said something about extra structure and that categories generalize monoids. This extra structure is essentially a monoid structure on the morphisms of a category, except that there is a unit for each object called identity and the operation “$\_\circ\_$” can only be applied to morphisms, if they form “a line”. For example, if we have morphisms like k and i in the picture below, in a category, there will be a new morphism “$k\circ i$“:
Note that “$i$” is on the right in “$k\circ i$” but it is the first morphism if you follow the direction indicated by the arrows. This comes from function composition in mathematics, which suffers from the same weirdness by some historical accident. For now that just means that chains of morphisms have to be read from right to left to make sense of them.
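Function composition in JavaScript shows the same right-to-left reading; a tiny sketch:

```javascript
// compose(k, i) applies i first and k second - exactly the "k ∘ i" reading order.
const compose = (k, i) => (x) => k(i(x));

const addOne = (x) => x + 1;
const double = (x) => x * 2;

compose(double, addOne)(3);  // double(addOne(3)) === 8
```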
For the identities and the operation “$\_\circ\_$”, we can ask for the same laws to hold as in a monoid, which will complete the definition of a category:
## Definition (not as general as it could be…)
A category consists of the following data:
• A set of objects A,B,…
• A set of morphisms $f : A_1\to B_1, g:A_2\to B_2,\dots$
• An operation “$\_\circ\_$” which for all (consecutive) pairs of morphisms $f:A\to B$ and $g:B\to C$ returns a morphism $g \circ f : A \to C$
• For any object $A$, a morphism $\mathrm{id}_A : A\to A$
Such that the following laws hold:
• “$\_\circ\_$” is associative: For all morphisms $f : A \to B$, $g : B\to C$ and $h : C\to D$, we have: $h \circ (g \circ f) = (h \circ g) \circ f$
• The identities are left and right neutral: For all morphisms $f: A\to B$ we have: $f \circ \mathrm{id}_A = f = \mathrm{id}_B \circ f$
## Examples
Before we go to our example of interest, let us look at some examples:
• Any monoid is a category with one object O and for each element m of the monoid a morphism $m:O\to O$. “$m\circ n$” is defined to be $m\cdot n$.
• The graph below can be extended to a category by adding the morphisms $ef: B\to B, fe: A\to A, efe: A\to B, fef: B\to A, \dots$ and an identity for $A$ and $B$. The operation “$\_\circ\_$” is defined as juxtaposition, where we treat the identities as empty sequences. So for example, $ef\circ efe$ is $efefe: A\to B$.
• More generally: Let $G$ be a labelled directed graph with edges $e_1,\dots,e_r$ and nodes $n_1,\dots,n_l$. Then there is a category $C_G$ with objects $n_1,\dots,n_l$ and morphisms all sequences of consecutive edges – including the empty sequence for any node.
## Action Categories
So let’s generalize Part 1 with our new tool. Our new scope is react-apps which have actions without parameters, but now actions cannot necessarily be applied in any order. Whether an action can be fired may now depend on the state of the app.
The smallest example I can think of, where we can see what’s new, is an app with two states, let’s call them ON and OFF, and two actions, let’s say SWITCH_ON and SWITCH_OFF:
Let us also say that the action SWITCH_ON can only be fired in state OFF and SWITCH_OFF only in state ON. The category for that graph has as its morphisms the possible sequences of actions. Now, if we follow the path of part 1, the obvious next step is to say that SWITCH_ON after SWITCH_OFF (and the other way around) is the same as the empty action-sequence — which leads us to…
## Quotients
We made a pretty hefty generalization from monoids to categories, but the theory for quotients remains essentially the same. As we defined equivalence relations on the elements of a monoid, we can define equivalence relations on the morphisms of a category. As last time, this is problematic in general, but turns out to just work if we only identify sequences of morphisms in the action category with matching source and target.
So in the example above, it is ok to say that SWITCH_ON SWITCH_OFF is the empty sequence on ON and SWITCH_OFF SWITCH_ON is the empty sequence on OFF (keep in mind that the first action to be executed is on the right). Then any action sequence can be reduced to simply SWITCH_ON, SWITCH_OFF or an empty sequence (not the empty sequence, because we have two of them with different source and target). And in this case, the quotient category will be what we drew above, but as a category.
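In code, this quotient is essentially what a reducer for the two-state app implements. Here is a minimal sketch, where an action that is not fireable in the current state is treated as a no-op (one possible reading of the restriction):

```javascript
// The two-state app: any dispatched sequence collapses to ON or OFF,
// which is exactly the quotient category described above.
function switchReducer(state = 'OFF', action) {
  switch (action.type) {
    case 'SWITCH_ON':
      return state === 'OFF' ? 'ON' : state;   // only fireable in OFF
    case 'SWITCH_OFF':
      return state === 'ON' ? 'OFF' : state;   // only fireable in ON
    default:
      return state;
  }
}
```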
Of course, this is not an example where any high-powered math is needed to get any insights. So far, these posts were just about understanding how the math works. For the next part of this series, my plan is to show how existing tools can be used to calculate larger examples.
## Bridging Eons in Web Dev with Polyfills
Indeed, web development is kind of peculiar. On the one hand, there’s seldom a field in which new technologies overturn each other at that pace, creating very exciting opportunities ranging from quickly sketching out proof-of-concepts to the efficient construction of real-world applications. On the other hand, there is this strange air of browser dependency, and with any new technology one acquires, there’s always the question of whether this is just some temporary fashion or here to stay.
Which is why it happens that one would like to quickly scaffold a web application on the base of React and its ecosystem, but has the requirement that the customer is – either voluntarily or forced by higher powers – using some legacy browser like Internet Explorer 11, for which Microsoft has recently announced the end of life support for 30th November this year. Which doesn’t sound nice for the… *searching quickly* … 5% of desktop/laptop users that still use this old horse, but then again, how long can you cling to an outdated thing?
For the daily life of a web developer, with a mind full of the peculiarities that the evolution of the ECMAScript standard (which basically is JavaScript) has brought along, there is the practical helper of caniuse.com, telling you, for every item of your code you want to know about, which browser / device has support and which doesn’t.
But what about whole frameworks? When I recently had my quest for an IE11-compatible React app, I already feared that at every corner I would need to double-check everything I did, especially given that for the development itself one is certainly advised to use one of the browsers that come with quite some helpful developer tools, like extensions for React, Redux, etc. — but also the features of the built-in console, where it makes your life a lot easier whether a certain state gets logged merely as the string “[object Object]” or as a fully interactive display of object properties. Sorry IE11, there are reasons why you have to go.
But then I figured that my request is maybe not that far outside the range of rather widespread use cases. Thus, the chances that someone has already tried to tackle the problem aren’t so hopeless. And indeed, this works pretty straightforwardly:
• Install “react-app-polyfill”, e.g. via npm:
npm install react-app-polyfill
• At the very top of your index.js, add for good measure:
import "react-app-polyfill/ie11";
import "react-app-polyfill/stable";
• Include “IE 11” (with quotes) in your package.json under the “browserslist” key as a new entry under “production” and “development”
That should do it. There are people on the internet that advise removing the “node_modules/.cache” directory when doing this in an existing project.
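For reference, the relevant part of package.json would then look roughly like this; it is a sketch, and the surrounding entries are whatever create-react-app generated for your project:

```json
"browserslist": {
  "production": [
    ">0.2%",
    "not dead",
    "not op_mini all",
    "IE 11"
  ],
  "development": [
    "last 1 chrome version",
    "last 1 firefox version",
    "last 1 safari version",
    "IE 11"
  ]
}
```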
The term polyfill is actually derived from some kind of putty, which is a nice picture. It’s all about allowing a developer to use accustomed features while still supporting the actual production environment.
Another very useful polyfill in this undertaking was…
// install
npm install --save-dev @babel/plugin-transform-arrow-functions
// then add to the "babel" > "plugins" config array:
"babel": {
"plugins": [
"@babel/plugin-transform-arrow-functions"
]
}
… as I find the new-fashioned arrow function notation quite useful.
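Just to illustrate what that plugin does, here is a rough sketch of the transformation (not its literal output; other plugins take care of const, template literals and the like):

```javascript
// What you write:
const double = (x) => x * 2;

// Roughly what the arrow-function transform produces for IE11
// (shown here as a second binding so the snippet stays valid):
const doubleTransformed = function (x) {
  return x * 2;
};
```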
So, this seems to bridge (most of) the worries one encounters in this web dev world where use cases span eons of technology evolution. Now, do you know any more useful polyfills that make your life easier?
## React for the algebra enthusiast – Part 1
When I learned to use the react framework, I always had the feeling that it is written in a very mathy way. Since simple googling did not give me any hints if this was a consideration in the design, I thought it might be worth sharing my thoughts on that. I should mention that I am sure others have made the same observations, but it might help algebraists to understand react faster and mathy computer scientists to remember some algebra.
## Free monoids
In abstract algebra, a monoid is a set M together with a binary operation “$\cdot$” satisfying these two laws:
• There is a neutral element “e”, such that: $\forall x \in M: x \cdot e = e \cdot x = x$
• The operation is associative, i.e. $\forall x,y,z \in M: x \cdot (y\cdot z) = (x\cdot y) \cdot z$
Here are some examples:
• Any set with exactly one element together with the unique choice of operation on it.
• The natural numbers $\mathbb{N}=\{0,1,2,\dots \}$ with addition.
• The one-based natural numbers $\mathbb{N}_1=\{1,2,3,\dots\}$ with multiplication.
• The Integers $\mathbb Z$ with addition.
• For any set M, the set of maps from M to M is a monoid with composition of maps.
• For any set A, we can construct the set List(A), consisting of all finite lists of elements of A. List(A) is a monoid with concatenation of lists. We will denote lists like this: $[1,2,3,\dots]$
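In JavaScript terms, the last example is just arrays under concatenation; a quick sketch to make the monoid laws concrete:

```javascript
// List(A) in JavaScript: arrays with concat; the empty array is the neutral element.
const e = [];
const op = (xs, ys) => xs.concat(ys);

op(e, [1, 2]);            // [1, 2]     - left neutral
op([1, 2], e);            // [1, 2]     - right neutral
op(op([1], [2]), [3]);    // [1, 2, 3]
op([1], op([2], [3]));    // [1, 2, 3]  - associativity
```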
Monoids of the form List(A) are called free. With “of the form” I mean that the elements of the sets can be renamed so that sets and operations are the same. For example, the monoid $\mathbb{N}$ with addition and List({1}) are of the same form, witnessed by the following renaming scheme:
$0 \mapsto []$
$1 \mapsto [1]$
$2 \mapsto [1,1]$
$3 \mapsto [1,1,1]$
$\dots$
— so addition and appending lists are the same operation under this identification.
With the exception of $\mathbb{N}_1$, the integers and the monoid of maps on a set, all of the examples above are free monoids. There is also a nice abstract definition of “free”, but for the purpose at hand to describe a special kind of monoid, it is good enough to say, that a monoid M is free, if there is a set A such that M is of the form List(A).
## Action monoids
A react-app (and by that I really mean a react+redux app) has a set of actions. An action always has a type, which is usually a string, and a possibly empty list of arguments.
Let us stick to a simple app for now, where each action just has a type and nothing else. And let us further assume that actions can appear in arbitrary sequences. That means any action can be fired in any state. The latter simplification will keep us clear from more advanced algebra for now.
For a react-app, sequences of actions form a free monoid. Let us look at a simple example: Suppose our app is a counter which starts with “0” and has an increment (I) and decrement (D) action. Then the sequences of action can be represented by strings like
ID, IIDID, DDD, IDI, …
which form a free monoid with juxtaposition of strings. I have to admit, so far this is not very helpful for a practitioner – but I am pretty sure the next step has at least some potential to help in a complicated situation:
## Quotients
Quotients of sets by an equivalence relation are a very basic tool of modern math. For a monoid, it is not clear if a quotient of its underlying set will still be a monoid with the “same” operations.
Let us look at an example, where everything goes well. In the example from above, the counter should show the same integer if we decrement and then increment (or the other way around). So we could say that the two action sequences
• ID and
• DI
do really nothing and should be considered equivalent to the empty action sequence. So let’s say that any sequence of actions is equivalent to the same sequence with any occurence of “DI” or “ID” deleted. So for example we get:
IIDIIDD $\sim$ I
With this rule, we can reduce any sequence to an equivalent one that is a sequence of Is, a sequence of Ds or empty. So the quotient monoid can be identified with the integers (in two different ways, but that’s ok) and addition corresponds to juxtaposition of action sequences.
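To make this concrete in code, here is a sketch with the action types I and D from the text: the reducer gives the semantics, and the reduction to a normal form can be spelled out directly on the action strings.

```javascript
// Semantics of the counter app as a redux-style reducer.
function counterReducer(state = 0, action) {
  switch (action.type) {
    case 'I': return state + 1;
    case 'D': return state - 1;
    default:  return state;
  }
}

// Syntax: reduce an action string to its normal form by deleting "ID" and "DI".
function normalize(sequence) {
  let previous;
  do {
    previous = sequence;
    sequence = sequence.replace('ID', '').replace('DI', '');
  } while (sequence !== previous);
  return sequence;
}

normalize('IIDIIDD');   // 'I' - the reduction from the text
// Equivalent sequences lead to the same state, which is the point of the quotient:
['IIDIIDD', 'I'].map(s =>
  s.split('').reduce((state, type) => counterReducer(state, { type }), 0)
);                      // [1, 1]
```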
The point of this example and the moral of this post is, that we can take a syntactic description (the monoid of action sequences), which is easy to derive from the source code and look at a quotient of the action monoid by a reasonable relation to arrive at some algebraic structure which has a lot to do with the semantic of the app.
So the question remains, if this works just well for an example or if we have a general recipe.
Here is a problem in the general situation: Let $x,y,z\in M$ be elements of a monoid $M$ with operation “$\cdot$” and $\sim$ be an equivalence relation such that $x$ is identified with $y$. Then, denoting equivalence classes with $[\_]$, it is not clear if $[x] \cdot [z]$ should be defined to be $[x\cdot z]$ or $[y\cdot z]$.
Fortunately problems like that disappear for free monoids like our action monoid and equivalence relations constructed in a specific way. As you can see on wikipedia, it is always ok to take the equivalence relation generated by the same kind of identifications we made above: Pick some pairs of sequences which are known “to do the same” from a semantic point of view (like “ID” and “DI” did the same as the empty sequence) and declare sequences to be equivalent, if they arise by replacing sequences known to be the same.
So the approach is that general: It works for apps, where actions do not have parameters and can be fired in any order and for equivalence relations generated by defining finitely many action sequences to do the same. The “any order” is a real restriction, but this post also has a “Part 1” in the title…
## Changing the keyboard navigation behaviour of form inputs
The default behaviour in HTML forms is that you can move the focus from one input element to the next via the tab key and submit the form via the enter key. This is also how dialogs work on most operating systems when using the native UI components. This behaviour is consistent across all browsers, and changing it messes with the user’s expectations and reduces accessibility. So I would normally advise against changing this behaviour without good reasons.
However, one of our customers wanted a different behaviour for an application developed by us. This application replaced an older application where the enter key did not submit the form, but moved the focus to the next input element. The ‘muscle memory’ effect made users accidentally submit the form by hitting the enter key, causing frustration. Since this application is not a public web site, but merely a web technology based intranet application with a small and specialized user base, changing the default behaviour is acceptable if the users want it.
So here’s how to do it. The following JavaScript function focusNextInputOnEnter takes a form element as a parameter and changes the focus behaviour on the input elements within this form.
function focusNextInputOnEnter(form) {
  var inputs = form.querySelectorAll('input, select, textarea');
  for (var i = 0; i < inputs.length; i++) {
    var input = inputs[i];
    // register a keypress handler; the IIFE captures the current index for the closure
    input.addEventListener('keypress', (function(index) {
      return function(event) {
        if (!isEnter(event.which)) {
          return;
        }
        // move the focus to the next input element, skipping disabled ones
        var nextIndex = index + 1;
        while (nextIndex < inputs.length) {
          var nextInput = inputs[nextIndex];
          if (nextInput.disabled) {
            nextIndex++;
            continue;
          }
          nextInput.focus();
          break;
        }
      };
    })(i));
  }
  function isEnter(keyCode) {
    return keyCode === 13;
  }
}
It works by handling the keypress events on the input elements and checking the key code for the enter key (code 13). It has an additional check so that disabled input elements are skipped.
To apply this change in behaviour to a form we have to call the function when the DOM content is loaded:
<form id="demo-form">
<input type="text">
<input type="text" disabled="disabled">
<input type="checkbox">
<select>
<option>A</option>
<option>B</option>
</select>
<textarea></textarea>
<input type="text">
<input type="text">
</form>
<script>
  document.addEventListener('DOMContentLoaded', function() {
    focusNextInputOnEnter(document.getElementById('demo-form'));
  });
</script>
https://chemistry.stackexchange.com/questions/49827/gases-produced-by-pyrolysis-of-cellulose | # Gases produced by pyrolysis of cellulose
I heated cotton in a sealed container (with a small hole) over a natural gas flame. Some gases and smoke were produced. What would they probably be? I can come up with some guesses based on the composition of cellulose: $\ce{CO2}$, $\ce{CH4}$ or possibly other hydrocarbons, $\ce{CO}$, $\ce{H2}$, $\ce{H2O}$, however I do not know which of those they are. Obviously, soot ($\ce{C}$) was also formed, due to the visible smoke particles.
During pyrolysis, organic compounds are thermally decomposed in the absence of oxygen. The pyrolysis products are classified into categories based on their physical state of existence: char (solid), bio-oil (liquid) and non-condensable gases (gas). The relative proportions of these three product fractions significantly vary depending upon the process conditions, as is shown in the table below.
$$\small \begin{array}{lcccccc} \hline \text{Pyrolysis Technology} & \text{Residence Time} & \text{Heating Rate} & \text{Temperature} & \text{Char} & \text{Bio-Oil} & \text{Gases} \\ \hline \text{Conventional} & \text{5-30}\ \mathrm{min} & \text{<50} ^\circ \mathrm{C\ min^{-1}} & \text{400-600} ^\circ \mathrm{C} & \text{<35%} & \text{<30%} & \text{<40%}\\ \text{Fast Pyrolysis} & \text{<5}\ \mathrm{s} & \text{~1000} ^\circ \mathrm{C\ s^{-1}} & \text{400-600} ^\circ \mathrm{C} & \text{<25%} & \text{<75%} & \text{<20%}\\ \text{Flash Pyrolysis} & \text{<0.1}\ \mathrm{s} & \text{~1000} ^\circ \mathrm{C\ s^{-1}} & \text{650-900} ^\circ \mathrm{C} & \text{<20%} & \text{<20%} & \text{<70%}^{[1]}\\ \hline \end{array}$$
The exact compositions of the products of cellulose pyrolysis at different temperatures can be seen below.
$$\small \begin{array}{lcc} \hline \text{Products} & \text{Peak Temp,}\ 500 ^\circ \mathrm{C} & \text{Holding Temp,}\ 400 ^\circ \mathrm{C} & \text{Peak Temp,}\ 750 ^\circ \mathrm{C} & \text{Peak Temp,}\ 1000 ^\circ \mathrm{C}\\ \hline \ce{CO} & \text{0.99%} & \text{0.25%} & \text{15.82%} & \text{22.57%}\\ \ce{CO2} & \text{0.3%} & \text{1.45%} & \text{2.38%} & \text{3.36%}\\ \ce{H2O} & \text{3.55%} & \text{6.49%} & \text{8.72%} & \text{9.22%}\\ \ce{CH4} & \text{0%} & \text{0%} & \text{1.11%} & \text{2.62%}\\ \ce{C2H4} & \text{0%} & \text{0%} & \text{1.05%} & \text{2.18%}\\ \ce{C2H6} & \text{0%} & \text{0%} & \text{0.17%} & \text{0.28%}\\ \ce{C3H6} & \text{0%} & \text{0%} & \text{0.70%} & \text{0.80%}\\ \ce{H2} & \text{0%} & \text{0%} & \text{0.36%} & \text{1.18%}\\ \ce{CH3OH} & \text{0.25%} & \text{0.21%} & \text{1.03%} & \text{0.98%}\\ \ce{CH3CHO} & \text{0.01%} & \text{0.05%} & \text{1.58%} & \text{1.7%}\\ \text{tar} & \text{16.37%} & \text{83.35%} & \text{59.92%} & \text{49.12%}\\ \text{char} & \text{83.63%} & \text{6.17%} & \text{3.32%} & \text{3.91%}\\ \text{other} & \text{0.19%} & \text{0.16%} & \text{2.14%} & \text{1.78%}\\ \text{total} & \text{105.25%} & \text{98.36%} & \text{98.8%} & \text{99.86%}\\ \hline \end{array}$$
The holding time for each of these reactions was $30\ \mathrm{s}^{[2]}$. As shown in the table, $\ce{CO}$, $\ce{H2O}$, and $\ce{CO2}$ are the major gaseous products, with $\ce{H2}$ and hydrocarbons being produced in considerably smaller proportion.
$^{[1]}$ Patwardhan, Pushkaraj Ramchandra, "Understanding the product distribution from biomass fast pyrolysis" (2010). Graduate Theses and Dissertations. Paper 11767.
$^{[2]}$ Hajaligol, M. R.; Howard, J. B.; Longwell, J. P.; Peters, W. A. Product Compositions and Kinetics for Rapid Pyrolysis of Cellulose. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 457–465.
• Impressive. Do you make the tables by hand or is there some tool for making them? – CowperKettle May 9 '16 at 7:31
• @CopperKettle I did have to enter the data manually, but the formatting was taken from this meta post. Digging around the sandboxes can yield some really neat $\LaTeX$ tools :) – ringo May 9 '16 at 8:02
• As usual, the most interesting part (nearly everything that gives color and smell) is also the hardest to analyze, so it is just swept under the rug and called "tar". – Ivan Neretin May 9 '16 at 8:11
• @IvanNeretin On page 30 of the first source, there is a detailed analysis of exactly what this tar is. This question isn't about tar though, so I didn't mention it. – ringo May 9 '16 at 8:14
• Wow. Now that's very impressive indeed. – Ivan Neretin May 9 '16 at 8:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.718140184879303, "perplexity": 2596.9879199394245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998716.67/warc/CC-MAIN-20190618103358-20190618125358-00444.warc.gz"} |
https://brilliant.org/problems/inversely-proportional/ | # The answer is simply 25, right?
Algebra Level 3
Suppose that $$x$$ and $$y$$ are inversely proportional and are positive quantities. By what percent does $$y$$ decrease if $$x$$ is increased by $$25 \%$$?
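One way to see why the answer is not simply 25: inverse proportionality means $$xy = k$$ for some constant $$k$$, so increasing $$x$$ by $$25 \%$$ gives

$$y_{\text{new}} = \frac{k}{1.25\,x} = \frac{y}{1.25} = 0.8\,y,$$

that is, $$y$$ decreases by $$20 \%$$.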
https://dsp.stackexchange.com/questions/15113/why-does-the-inverse-fourier-transform-of-a-lowpass-filter-have-complex-componen | # Why does the inverse fourier transform of a lowpass filter have complex components in matlab?
I am quite confused whether the following numerical differences I find are just severe round-off errors made by MATLAB, or something I am doing wrong. The following happened when trying to see what a low-pass filter looks like in the spatial domain.
By mathematics, the inverse Fourier transform of a real symmetric image ($I(x,y) = I(-x,-y)$, where $I$ gives the value of the image at the given pixel) should again be real-valued.
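For reference, the discrete counterpart of this statement is the DFT symmetry condition (written here in one dimension; the 2-D case is analogous). The indices are taken modulo the transform length, so the symmetry has to hold about sample 0, not about the centre of the image:

$$x[n] \in \mathbb{R} \ \text{ for all } n \quad\Longleftrightarrow\quad X[k] = \overline{X[(-k) \bmod N]} \ \text{ for all } k.$$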
However, when starting out with a low pass filter I defined as a disc of ones centered around the origin, surrounded by zeroes, the matlab command ifft2(ifftshift(I)) gave an image with pretty nontrivial complex component.
When explicitly telling Matlab to utilize the symmetry, the command ifft2(ifftshift(I),'symmetric') did indeed give a matrix with only real entries. However, the numerical difference between the two commands was pretty big compared to what I expected: Letting $I$ be our symmetric image in the frequency domain, I entered the following commands:
J = ifft2( ifftshift(I) );
K = ifft2( ifftshift(I), 'symmetric' );
mean(mean(abs(real(J) - K)))
ans =
3.2851e-04
mean(mean(abs(K)))
ans =
7.4830e-04
As you see, the difference between the real part of J and K is not that big per se, but it is big compared to the mean values of K. So it is quite nontrivial.
Questions:
1. Why do these complex components show up in J? Are they really round-off errors made by MATLAB?
2. Where does this numerical difference come from when I use this 'symmetric' command? Again just round-off errors?
3. In practice, when dealing with symmetric real matrices (or more generally conjugate symmetric matrices), is it advisable to use the 'symmetric' command?
Thanks a lot!
• Why do you expect ifft2( ifftshift(I) ); to give anything sensible? You might be better off defining I the right way first... using fftshift on the image before inverting the FFT is fraught with peril. – Peter K. Mar 19 '14 at 16:01
• @PeterK. thanks for the comment! I was under the complete assumption that this was the right thing to do. What would be the right way to define I? – Joachim Mar 19 '14 at 16:08
• In any case ifftshift(I) should still be symmetric, so its inverse fourier transform should still be real, right? – Joachim Mar 19 '14 at 16:11
• Is the xy dimension of I even or odd? – hotpaw2 Mar 19 '14 at 17:27
• @hotpaw2, sorry i am not sure what the exact meaning of xy dimension is, but i believe i used a 100 by 100 pixel image. So even, i guess. – Joachim Mar 19 '14 at 22:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6739606261253357, "perplexity": 720.1067795930721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998986.11/warc/CC-MAIN-20190619123854-20190619145854-00040.warc.gz"} |
https://andrescaicedo.wordpress.com/2011/01/20/507-problem-list-iii/ | ## 507- Problem list (III)
For Part II, see here.
(Many thanks to Robert Balmer, Nick Davidson, and Amy Griffin for help with this list.)
• The Erdös-Turán conjecture on additive bases of order 2.
• If $R(n)$ is the $n$-th Ramsey number, does $\lim_{n\to\infty}R(n)^{1/n}$ exist?
• Hindman’s problem: Is it the case that for every finite coloring of the positive integers, there are $x$ and $y$ such that $x$, $y$, $x + y$, and $xy$ are all of the same color?
• Does the polynomial Hirsch conjecture hold?
• Does $P=NP$? (See also this post (in Spanish) by Javier Moreno.)
• Mahler’s conjecture on convex bodies.
• Nathanson’s conjecture: Is it true that ${}|A+A|\le|A-A|$ for “almost all” finite sets of integers $A$?
• The (bounded) Burnside’s problem: For which $m,n$ is the free Burnside group $B(m,n)$ finite?
• Is the frequency of 1s in the Kolakoski sequence asymptotically equal to $1/2$? (And related problems.)
• A question on Narayana numbers: Find a combinatorial interpretation of identity 6.C7(d) in Stanley’s “Catalan addendum” to Enumerative combinatorics. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9705550670623779, "perplexity": 1131.2324761154157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930895.96/warc/CC-MAIN-20150521113210-00139-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/ode/nonstiff/AdamsMoultonIntegrator.html | org.apache.commons.math3.ode.nonstiff
• All Implemented Interfaces:
FirstOrderIntegrator, ODEIntegrator
public class AdamsMoultonIntegrator
extends AdamsIntegrator
This class implements implicit Adams-Moulton integrators for Ordinary Differential Equations.
Adams-Moulton methods (in fact due to Adams alone) are implicit multistep ODE solvers. This implementation is a variation of the classical one: it uses adaptive stepsize to implement error control, whereas classical implementations are fixed step size. The value of the state vector at step n+1 is a simple combination of the value at step n and of the derivatives at steps n+1, n, n-1 ... Since $y'_{n+1}$ is needed to compute $y_{n+1}$, another method must be used to compute a first estimate of $y_{n+1}$, then compute $y'_{n+1}$, then compute a final estimate of $y_{n+1}$ using the following formulas. Depending on the number k of previous steps one wants to use for computing the next value, different formulas are available for the final estimate:
• k = 1: $y_{n+1} = y_n + h\,y'_{n+1}$
• k = 2: $y_{n+1} = y_n + h\,(y'_{n+1}+y'_n)/2$
• k = 3: $y_{n+1} = y_n + h\,(5y'_{n+1}+8y'_n-y'_{n-1})/12$
• k = 4: $y_{n+1} = y_n + h\,(9y'_{n+1}+19y'_n-5y'_{n-1}+y'_{n-2})/24$
• ...
A k-steps Adams-Moulton method is of order k+1.
### Implementation details
We define scaled derivatives $s_i(n)$ at step n as:
$s_1(n) = h\,y'_n$ for first derivative
$s_2(n) = \frac{h^2}{2}\,y''_n$ for second derivative
$s_3(n) = \frac{h^3}{6}\,y'''_n$ for third derivative
...
$s_k(n) = \frac{h^k}{k!}\,y^{(k)}_n$ for $k$th derivative
The definitions above use the classical representation with several previous first derivatives. Let's define
$q_n = [\; s_1(n-1)\;\; s_1(n-2)\;\; \dots\;\; s_1(n-(k-1)) \;]^T$
(we omit the k index in the notation for clarity). With these definitions, Adams-Moulton methods can be written:
• k = 1: $y_{n+1} = y_n + s_1(n+1)$
• k = 2: $y_{n+1} = y_n + \tfrac{1}{2}\, s_1(n+1) + [\; \tfrac{1}{2} \;]\, q_{n+1}$
• k = 3: $y_{n+1} = y_n + \tfrac{5}{12}\, s_1(n+1) + [\; \tfrac{8}{12}\;\; -\tfrac{1}{12} \;]\, q_{n+1}$
• k = 4: $y_{n+1} = y_n + \tfrac{9}{24}\, s_1(n+1) + [\; \tfrac{19}{24}\;\; -\tfrac{5}{24}\;\; \tfrac{1}{24} \;]\, q_{n+1}$
• ...
Instead of using the classical representation with first derivatives only ($y_n$, $s_1(n+1)$ and $q_{n+1}$), our implementation uses the Nordsieck vector with higher degrees scaled derivatives all taken at the same step ($y_n$, $s_1(n)$ and $r_n$) where $r_n$ is defined as:
$r_n = [\; s_2(n)\;\; s_3(n)\;\; \dots\;\; s_k(n) \;]^T$
(here again we omit the k index in the notation for clarity)
Taylor series formulas show that for any index offset i, $s_1(n-i)$ can be computed from $s_1(n), s_2(n), \dots, s_k(n)$, the formula being exact for degree k polynomials.
$s_1(n-i) = s_1(n) + \sum_{j>1} j\,(-i)^{j-1} s_j(n)$
The previous formula can be used with several values for i to compute the transform between classical representation and Nordsieck vector. The transform between $r_n$ and $q_n$ resulting from the Taylor series formulas above is:
$q_n = s_1(n)\,u + P\,r_n$
where $u$ is the $[\; 1\;\; 1\;\; \dots\;\; 1 \;]^T$ vector and $P$ is the $(k-1)\times(k-1)$ matrix built with the $j\,(-i)^{j-1}$ terms:
[ -2 3 -4 5 ... ]
[ -4 12 -32 80 ... ]
P = [ -6 27 -108 405 ... ]
[ -8 48 -256 1280 ... ]
[ ... ]
Using the Nordsieck vector has several advantages:
• it greatly simplifies step interpolation as the interpolator mainly applies Taylor series formulas,
• it simplifies step changes that occur when discrete events that truncate the step are triggered,
• it allows to extend the methods in order to support adaptive stepsize.
The predicted Nordsieck vector at step n+1 is computed from the Nordsieck vector at step n as follows:
• $Y_{n+1} = y_n + s_1(n) + u^T r_n$
• $S_1(n+1) = h\,f(t_{n+1}, Y_{n+1})$
• $R_{n+1} = (s_1(n) - S_1(n+1))\,P^{-1} u + P^{-1} A P\, r_n$
where A is a rows shifting matrix (the lower left part is an identity matrix):
[ 0 0 ... 0 0 | 0 ]
[ ---------------+---]
[ 1 0 ... 0 0 | 0 ]
A = [ 0 1 ... 0 0 | 0 ]
[ ... | 0 ]
[ 0 0 ... 1 0 | 0 ]
[ 0 0 ... 0 1 | 0 ]
From this predicted vector, the corrected vector is computed as follows:
• $y_{n+1} = y_n + S_1(n+1) + [\; -1\;\; +1\;\; -1\;\; +1\;\; \dots\;\; \pm 1 \;]\, r_{n+1}$
• $s_1(n+1) = h\,f(t_{n+1}, y_{n+1})$
• $r_{n+1} = R_{n+1} + (s_1(n+1) - S_1(n+1))\,P^{-1} u$
where the upper case $Y_{n+1}$, $S_1(n+1)$ and $R_{n+1}$ represent the predicted states whereas the lower case $y_{n+1}$, $s_{n+1}$ and $r_{n+1}$ represent the corrected states.
The $P^{-1}u$ vector and the $P^{-1} A P$ matrix do not depend on the state, they only depend on k and therefore are precomputed once for all.
Since:
2.0
• ### Nested classes/interfaces inherited from class org.apache.commons.math3.ode.MultistepIntegrator
MultistepIntegrator.NordsieckTransformer
• ### Fields inherited from class org.apache.commons.math3.ode.MultistepIntegrator
nordsieck, scaled
• ### Fields inherited from class org.apache.commons.math3.ode.nonstiff.AdaptiveStepsizeIntegrator
mainSetDimension, scalAbsoluteTolerance, scalRelativeTolerance, vecAbsoluteTolerance, vecRelativeTolerance
• ### Fields inherited from class org.apache.commons.math3.ode.AbstractIntegrator
isLastStep, resetOccurred, stepHandlers, stepSize, stepStart
• ### Constructor Summary
Constructors
Constructor and Description
AdamsMoultonIntegrator(int nSteps, double minStep, double maxStep, double[] vecAbsoluteTolerance, double[] vecRelativeTolerance)
Build an Adams-Moulton integrator with the given order and error control parameters.
AdamsMoultonIntegrator(int nSteps, double minStep, double maxStep, double scalAbsoluteTolerance, double scalRelativeTolerance)
Build an Adams-Moulton integrator with the given order and error control parameters.
• ### Method Summary
Methods
Modifier and Type Method and Description
void integrate(ExpandableStatefulODE equations, double t)
Integrate a set of differential equations up to the given time.
• ### Methods inherited from class org.apache.commons.math3.ode.nonstiff.AdamsIntegrator
initializeHighOrderDerivatives, updateHighOrderDerivativesPhase1, updateHighOrderDerivativesPhase2
• ### Methods inherited from class org.apache.commons.math3.ode.MultistepIntegrator
computeStepGrowShrinkFactor, getMaxGrowth, getMinReduction, getSafety, getStarterIntegrator, setMaxGrowth, setMinReduction, setSafety, setStarterIntegrator, start
• ### Methods inherited from class org.apache.commons.math3.ode.nonstiff.AdaptiveStepsizeIntegrator
filterStep, getCurrentStepStart, getMaxStep, getMinStep, initializeStep, resetInternalState, sanityChecks, setInitialStepSize, setStepSizeControl, setStepSizeControl
• ### Methods inherited from class org.apache.commons.math3.ode.AbstractIntegrator
acceptStep, addEventHandler, addEventHandler, addStepHandler, clearEventHandlers, clearStepHandlers, computeDerivatives, getCurrentSignedStepsize, getEvaluations, getEvaluationsCounter, getEventHandlers, getExpandable, getMaxEvaluations, getName, getStepHandlers, initIntegration, integrate, setEquations, setMaxEvaluations, setStateInitialized
• ### Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• ### Constructor Detail
public AdamsMoultonIntegrator(int nSteps,
double minStep,
double maxStep,
double scalAbsoluteTolerance,
double scalRelativeTolerance)
throws NumberIsTooSmallException
Build an Adams-Moulton integrator with the given order and error control parameters.
Parameters:
nSteps - number of steps of the method excluding the one being computed
minStep - minimal step (sign is irrelevant, regardless of integration direction, forward or backward), the last step can be smaller than this
maxStep - maximal step (sign is irrelevant, regardless of integration direction, forward or backward), the last step can be smaller than this
scalAbsoluteTolerance - allowed absolute error
scalRelativeTolerance - allowed relative error
Throws:
NumberIsTooSmallException - if order is 1 or less
public AdamsMoultonIntegrator(int nSteps,
double minStep,
double maxStep,
double[] vecAbsoluteTolerance,
double[] vecRelativeTolerance)
throws IllegalArgumentException
Build an Adams-Moulton integrator with the given order and error control parameters.
Parameters:
nSteps - number of steps of the method excluding the one being computed
minStep - minimal step (sign is irrelevant, regardless of integration direction, forward or backward), the last step can be smaller than this
maxStep - maximal step (sign is irrelevant, regardless of integration direction, forward or backward), the last step can be smaller than this
vecAbsoluteTolerance - allowed absolute error
vecRelativeTolerance - allowed relative error
Throws:
IllegalArgumentException - if order is 1 or less
• ### Method Detail
• #### integrate
public void integrate(ExpandableStatefulODE equations,
double t)
throws NumberIsTooSmallException,
DimensionMismatchException,
MaxCountExceededException,
NoBracketingException
Integrate a set of differential equations up to the given time.
This method solves an Initial Value Problem (IVP).
The set of differential equations is composed of a main set, which can be extended by some sets of secondary equations. The set of equations must be already set up with initial time and partial states. At integration completion, the final time and partial states will be available in the same object.
Since this method stores some internal state variables made available in its public interface during integration (AbstractIntegrator.getCurrentSignedStepsize()), it is not thread-safe.
Specified by:
integrate in class AdamsIntegrator
Parameters:
equations - complete set of differential equations to integrate
t - target time for the integration (can be set to a value smaller than t0 for backward integration)
Throws:
NumberIsTooSmallException - if integration step is too small
DimensionMismatchException - if the dimension of the complete state does not match the complete equations sets dimension
MaxCountExceededException - if the number of functions evaluations is exceeded
NoBracketingException - if the location of an event cannot be bracketed | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7030781507492065, "perplexity": 4510.754297842583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297195.79/warc/CC-MAIN-20150323172137-00195-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://assignment-daixie.com/tag/phys3202%E4%BB%A3%E8%80%83/ | # Fluids and Plasmas | PHYS3202 Fluids and Plasma assignment writing
“Fluids and Plasmas” is an applied physics course designed to give the students experience in working with, predicting and measuring the behaviour of fluid flows and plasmas. The course begins with an outline of the fluid equations of motion, which lead to solutions for waves in fluids, convection and buoyancy-driven flows.
Here we consider some special cases of $(100)$ obtained by specializing $a, b, c$, and $d$ in $H$ of (77). Our choices for these four functions will determine the structure of the first integrals $x_{0}$ and $y_{0}$ through (99). For the cases we consider, their structure will be easy to discern and will give some insight into the behavior of $\xi$. How $\mathbf{B}_{p}$ will propagate in each case is pointed out to make the discussion more physically concrete. To conclude, a physical interpretation for the terms of $H$ and the role they play in determining how solutions propagate are discussed as well.
The first case we consider is a rather drastic simplification of the general result (99): we take $a, b, c$, and $d$ all to be zero, getting rid of $H$ entirely. Then we are simply left with
$$x_{0}=x \quad \text { and } \quad y_{0}=y .$$
Thus, in this case, the general solution for $\xi$ is of the form
$$\xi=\xi(x, y, z-\gamma \tau),$$
which corresponds to a structure propagating toroidally with speed $\gamma$.
$$\psi=(1 / \gamma)\left(\xi-\alpha \nabla_{1}^{2} \xi\right)(x, y, z-\gamma \tau) .$$
The arguments in parentheses stress that $\psi$ moves in exactly the same way as $\xi$: surfaces of constant poloidal flux simply propagate in the $z$ direction with constant velocity $\gamma$. Applying $\mathbf{B}_{p}=-\epsilon B_{T} \hat{\mathbf{z}} \times \nabla_{1} \psi$ to (105) shows that the disturbance $\mathbf{B}_{p}$ also propagates in the same way: if we follow a point moving along a characteristic curve, $\mathbf{B}_{p}$ at the point will be a constant vector. However, from $(105)$ and the arguments given at the end of Sec. III F, the solution is not necessarily an Alfvén-like wave because, in general, $\mathbf{B}_{p}$ will not be proportional to $\mathbf{v}_{t}$ for this case.
## PHYS3202 COURSE NOTES :
Having introduced the fluid equations, we next discuss a method for arriving at exact solutions of them.
We denote the partial derivative of a quantity by a subscript, e.g., $\partial U / \partial \tau \equiv U_{\tau}$. Then, after rearranging the terms of $(9)$ and $(10)$ and subtracting (14) from (9), we can write
$$U_{\tau}+[\phi, U]+J_{z}+[J, \psi]=0,$$
$$\psi_{\tau}+(\phi-\alpha \chi)_{z}+[\phi-\alpha \chi, \psi]=0 .$$
This is the nonlinear system we will study. Note that we are taking $\hat{\eta}=0$ in (16); the resistivity of the plasma is neglected for all that follows. To satisfy (17) we take
$$\chi=g(z)+U,$$
where $g$ is an arbitrary function of $z$. This is by no means the general solution to (17); it is simply a special case that satisfies (17) with little effort. Defining
$$\xi \equiv \phi-\alpha g(z),$$
and recasting (15) and (16) in terms of $\xi$ gives
$$U_{\tau}+[\xi, U]+J_{z}+[J, \psi]=0$$
and
$$\psi_{\tau}+(\xi-\alpha U)_{z}+[\xi-\alpha U, \psi]=0,$$
where (18) has been used. We note in passing that from (19) and (6), the definition of $U$, we have
$$U=\nabla_{1}^{2} \xi,$$
a relation that will be used often in what follows.
Now we have to find solutions to (20) and (21). Let us first consider the simpler case of axisymmetric equilibrium. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9589707851409912, "perplexity": 343.70918052080793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711114.3/warc/CC-MAIN-20221206192947-20221206222947-00846.warc.gz"} |
https://docs.unraveldata.com/unravel-v475x/en/cdp-install-precheck.html | ## Home
#### Cloudera Data Platform (CDP)
Before installing Unravel, check and complete the installation requirements. See the following instructions to download, install, and set up Unravel for the CDH platform.
### Notice
The following instructions are for a single cluster environment. For installing Unravel on a multi-cluster environment, refer to Multi-cluster install.
##### 2. Deploy Unravel binaries
Unravel binaries are available as a tar file or RPM package. You can deploy the Unravel binaries in any directory on the server. However, the user who installs Unravel must have the write permissions to the directory where the Unravel binaries are deployed.
If the binaries are deployed to <Unravel_installation_directory>, Unravel will be available in <Unravel_installation_directory>/unravel. The directory layout for the Tar and RPM will be unravel/versions/<Directories and files>.
The following steps to deploy Unravel from a tar file must be performed by a user who will run Unravel.
1. Create an Installation directory.
mkdir /path/to/installation/directory
For example: mkdir /opt/unravel
### Note
Some locations may require root access to create a directory. In such a case, after the directory is created, change the ownership to unravel user and continue with the installation procedure as the unravel user.
chown -R username:groupname /path/to/installation/directory
For example: chown -R unravel:unravelgroup /opt/unravel
2. Extract and copy the Unravel tar file to the installation directory, which was created in the first step. After you extract the contents of the tar file, unravel directory is created within the installation directory.
tar -zxf unravel-<version>tar.gz -C /<Unravel-installation-directory>
For example: tar -zxf unravel-4.7.x.x.tar.gz -C /opt The Unravel directory will be available within /opt
### Important
A root user should perform the following steps to deploy Unravel from an RPM package. After the RPM package is deployed, the remaining installation procedures should be performed by the unravel user.
1. Create an installation directory.
mkdir /usr/local/unravel
2. Run the following command:
rpm -i unravel-<version>.rpm
For example: rpm -i unravel-4.7.x.x.rpm
In case you want to provide a different location, you can do so by using the --prefix command. For example:
mkdir /opt/unravel
chown -R username:groupname /opt/unravel
rpm -i unravel-4.7.0.0.rpm --prefix /opt
The Unravel directory is available in /opt.
3. Grant ownership of the directory to a user who runs Unravel. This user executes all the processes involved in Unravel installation.
chown -R username:groupname /usr/local/unravel
For example: chown -R unravel:unravelgroup /usr/local/unravel The Unravel directory is available in /usr/local.
4. Continue with the installation procedures as Unravel user.
##### 3. Install Unravel
You can install Unravel either with the Interactive Precheck utility or manually.
### Note
Unravel recommends installation with the Interactive Precheck utility. However, if you miss setting any configuration with this method, you can set it later manually.
To install Unravel manually, do the following:
The setup command allows you to do the following:
• Runs Precheck automatically to detect possible issues that prevent a successful installation. Suggestions are provided to resolve issues. Refer to Precheck filters for the expected value for each filter.
• Lets you pass extra parameters to integrate the database of your choice.
The setup command allows you to use a managed database shipped with Unravel or an external database. When you run the setup command without any additional parameters, the Unravel managed PostgreSQL database is used. Otherwise, you can specify any of the following databases supported by Unravel with the setup command:
• MySQL (Unravel managed as well as external MySQL database)
• PostgreSQL (External PostgreSQL)
Refer to Integrate database for details.
• Lets you specify a separate path for the data directory other than the default path.
The Unravel data and configurations are located in the data directory. By default, the installer maintains the data directory under <Unravel installation directory>/data. You can also change the data directory's default location by passing additional parameters with the setup command.
• Provides more options for setup.
### Notice
The Unravel user who owns the installation directory should run the setup command to install Unravel.
To install Unravel with the setup command, do the following:
1. Switch to Unravel user.
su - <unravel user>
2. Run setup command:
### Note
Refer to setup Options for all the additional parameters that you can run with the setup command
Refer to Integrate database topic and complete the pre-requisites before running the setup command with any other database other than Unravel managed PostgreSQL, which is shipped with the product. Extra parameters must be passed with the setup command when using another database.
### Tip
Optionally, if you want to provide a different data directory, you can pass an extra parameter (--data-directory) with the setup command as follows:
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup --data-directory /the/data/directory
Similarly, you can configure separate directories for other unravel directories —contact support for assistance.
• PostgreSQL
• Unravel managed PostgreSQL
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup
• External PostgreSQL
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup --external-database postgresql <HOST> <PORT> <SCHEMA> <USERNAME> <PASSWORD>
The HOST, PORT, SCHEMA, USERNAME, and PASSWORD are optional fields and are prompted if missing. For example: /opt/unravel/versions/abcd.992/setup --external-database postgresql xyz.unraveldata.com 5432 unravel_db_prod unravel unraveldata
• MySQL
• Unravel managed MySQL
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup --extra /tmp/mysql
• External MySQL
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup --extra /tmp/<MySQL-directory> --external-database mysql <HOST> <PORT> <SCHEMA> <USERNAME> <PASSWORD>
The HOST, PORT, SCHEMA, USERNAME, and PASSWORD are optional fields and are prompted if missing.
• MariaDB
• Unravel managed MariaDB
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup --extra /tmp/mariadb
• External MariaDB
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup --extra /tmp/<MariaDB-directory> --external-database mariadb <HOST> <PORT> <SCHEMA> <USERNAME> <PASSWORD>
The HOST, PORT, SCHEMA, USERNAME, and PASSWORD are optional fields and are prompted if missing.
Precheck is automatically run when you run the setup command. Refer to Precheck filters for the expected value for each filter.
3. Apply the changes.
<Unravel installation directory>/unravel/manager config apply
4. Start all the services.
<unravel_installation_directory>/unravel/manager start
5. Check the status of services.
<unravel_installation_directory>/unravel/manager report
The following service statuses are reported:
• OK: Service is up and running.
• Not Monitored: Service is not running. (Has stopped or has failed to start)
• Initializing: Services are starting up.
• Does not exist: The process unexpectedly disappeared. A restart will be attempted ten times.
You can also get the status and information for a specific service. Run the manager report command as follows:
<unravel_installation_directory>/unravel/manager report <service>
For example: /opt/unravel/manager report auto_action
The Precheck output displays the issues that prevent a successful installation and provides suggestions to resolve them. You must resolve each of the issues before proceeding. See Precheck filters.
After resolving the precheck issues, you must re-login or reload the shell to execute the setup command again.
### Note
You can skip the precheck using the setup --skip-precheck command in certain situations.
For example:
/opt/unravel/versions/<Unravel version>/setup --skip-precheck
You can also skip the checks that you know can fail. For example, if you want to skip the Check limits option and the Disk freespace option, pick the command within the parenthesis corresponding to these failed options and run the setup command as follows:
setup --filter-precheck ~check_limits,~check_freespace
### Tip
Run --help with the setup command and any combination of the setup command for complete usage details.
<unravel_installation_directory>/unravel/versions/<Unravel version>/setup --help
/opt/unravel/versions/abcd.1004/setup
2021-04-05 15:51:30 Sending logs to: /tmp/unravel-setup-20210405-155130.log
2021-04-05 15:51:30 Running preinstallation check...
2021-04-05 15:51:31 Gathering information ................. Ok
2021-04-05 15:51:51 Running checks .................. Ok
--------------------------------------------------------------------------------
system
Check limits : PASSED
Clock sync : PASSED
CPU requirement : PASSED, Available cores: 8 cores
Disk access : PASSED, /opt/unravel/versions/develop.1004/healthcheck/healthcheck/plugins/system is writable
Disk freespace : PASSED, 229 GB of free disk space is available for precheck dir.
Kerberos tools : PASSED
Memory requirement : PASSED, Available memory: 79 GB
Network ports : PASSED
OS libraries : PASSED
OS release : PASSED, OS release version: centos 7.6
OS settings : PASSED
SELinux : PASSED
--------------------------------------------------------------------------------
Healthcheck report bundle: /tmp/healthcheck-20210405155130-xyz.unraveldata.com.tar.gz
2021-04-05 15:51:53 Prepare to install with: /opt/unravel/versions/abcd.1004/installer/installer/../installer/conf/presets/default.yaml
2021-04-05 15:51:57 Sending logs to: /opt/unravel/logs/setup.log
2021-04-05 15:51:57 Instantiating templates ................................................................................................................................................................................................................................ Ok
2021-04-05 15:52:05 Creating parcels .................................... Ok
2021-04-05 15:52:20 Installing sensors file ............................ Ok
2021-04-05 15:52:20 Installing pgsql connector ... Ok
2021-04-05 15:52:22 Starting service monitor ... Ok
2021-04-05 15:52:27 Request start for elasticsearch_1 .... Ok
2021-04-05 15:52:27 Waiting for elasticsearch_1 for 120 sec ......... Ok
2021-04-05 15:52:35 Request start for zookeeper .... Ok
2021-04-05 15:52:35 Request start for kafka .... Ok
2021-04-05 15:52:35 Waiting for kafka for 120 sec ...... Ok
2021-04-05 15:52:37 Waiting for kafka to be alive for 120 sec ..... Ok
2021-04-05 15:52:42 Initializing pgsql ... Ok
2021-04-05 15:52:46 Request start for pgsql .... Ok
2021-04-05 15:52:46 Waiting for pgsql for 120 sec ..... Ok
2021-04-05 15:52:47 Creating database schema ................. Ok
2021-04-05 15:52:50 Generating hashes .... Ok
2021-04-05 15:52:55 Creating kafka topics .................... Ok
2021-04-05 15:53:36 Creating schema objects ....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................... Ok
2021-04-05 15:54:03 Request stop ....................................................... Ok
2021-04-05 15:54:16 Done
[unravel@xyz ~]$
1. Run the manager config auto command to automatically pull in all the Hadoop configurations. You will be prompted to provide the location and credentials for Cloudera Manager or the Ambari UI.
   <unravel_installation_directory>/unravel/manager config auto
   If more than one cluster is handled by Cloudera Manager or Ambari, you will be prompted to enable the cluster that you want to monitor. Run the following command to enable a cluster:
   <unravel_installation_directory>/unravel/manager config cluster enable <CLUSTER_KEY>
   Example: /opt/unravel/manager config cluster enable cluster1
   ### Tip
   Here <CLUSTER_KEY> is the name of the cluster that you want to enable for Unravel monitoring. It can be retrieved from the output of the manager config auto command.
2. The Hive metastore database password can be recovered automatically only for a cluster manager with an administrative account. Otherwise, it must be set manually as follows:
   <Unravel installation directory>/unravel/manager config hive metastore password <CLUSTER_KEY> <HIVE_KEY> <PASSWORD>
   Example: /opt/unravel/manager config hive metastore password cluster1 HIVE p@P@SsWorD
   ### Tip
   Here <CLUSTER_KEY> is the name of the cluster where you want to set the Hive configurations. Also, refer to Connecting to Hive metastore in a single cluster environment.
3. Optional: Set up Kerberos to authenticate Hadoop services.
   • If you are using Kerberos authentication, set the keytab path and principal, enable Kerberos authentication, and apply the changes.
     <Unravel installation directory>/unravel/manager config kerberos set --keytab </path/to/keytab file> --principal <[email protected]>
     <Unravel installation directory>/unravel/manager config kerberos enable
     <unravel_installation_directory>/manager config apply
   • If you are using Truststore certificates, run the following steps from the manager tool to add certificates to the Truststore:
     1. Download the certificates to a directory.
     2. Give the user who installs Unravel permission to access the certificates directory.
        chown -R username:groupname /path/to/certificates/directory
     3. Upload the certificates.
        ## Option 1
        <unravel_installation_directory>/unravel/manager config tls trust add </path/to/the/certificate/files>
        or
        ## Option 2
        <unravel_installation_directory>/unravel/manager config tls trust add --pem </path/to/the/certificate/files>
        <unravel_installation_directory>/unravel/manager config tls trust add --jks </path/to/the/certificate/files>
        <unravel_installation_directory>/unravel/manager config tls trust add --pkcs12 </path/to/the/certificate/files>
     4. Enable the Truststore.
        <unravel_installation_directory>/unravel/manager config tls trust <enable|disable>
        <unravel_installation_directory>/unravel/manager config apply
     5. Verify the connection.
        <unravel_installation_directory>/unravel/manager verify connect <Cluster Manager-host> <Cluster Manager-port>
        For example: /opt/unravel/manager verify connect xyz.unraveldata.com 7180
        -- Running: verify connect xyz.unraveldata.com 7180
        - Resolved IP: 111.17.4.123
        - Reverse lookup: ('xyz.unraveldata.com', [], ['111.17.4.123'])
        - Connection: OK
        - TLS: No
        -- OK
   • If you are using the TLS protocol, refer to Enabling Transport Layer Security (TLS) for Unravel UI.
4. Apply the changes.
   <unravel_installation_directory>/unravel/manager config apply
5. Start all the services.
   <unravel_installation_directory>/unravel/manager start
6. Check the status of services.
   <unravel_installation_directory>/unravel/manager report
   The following service statuses are reported:
   • OK: The service is up and running.
   • Not Monitored: The service is not running (it has stopped or has failed to start).
   • Initializing: The service is starting up.
   • Does not exist: The process unexpectedly disappeared. Restarts will be attempted 10 times.
   You can also get the status and information for a specific service. Run the manager report command as follows:
   <unravel_installation_directory>/unravel/manager report <service>
   For example: /opt/unravel/manager report auto_action
7. Set additional configurations, if required.
8. Optionally, you can run healthcheck at this point to verify that all the configurations and services are running successfully.
   <unravel_installation_directory>/unravel/manager healthcheck
   Healthcheck also runs automatically on an hourly basis in the backend. You can set your email to receive the healthcheck reports.
##### 4. Enable additional instrumentation for CDP
This topic explains how to enable additional instrumentation on the gateway/edge/client nodes that are used to submit jobs to your big data platform. Additional instrumentation can include:
• Hive queries in Hadoop that are pushed to Unravel Server by the Hive Hook sensor, a JAR file.
• Spark job performance metrics that are pushed to Unravel Server by the Spark sensor, a JAR file.
• Impala queries that are pulled from Cloudera Manager or the Impala daemon.
• Tez DAG information that is pushed to Unravel Server by the Tez sensor, a JAR file.
The sensor JARs are packaged in a parcel on Unravel Server. Run the following steps from Cloudera Manager to download, distribute, and activate this parcel.
### Note
Ensure that Unravel is up and running before you perform the following steps.
1. In Cloudera Manager, click the Parcels icon. The Parcel page is displayed.
2. On the Parcel page, click Configuration or Parcel Repositories & Network Settings. The Parcel Configurations dialog box is displayed.
3. Go to the Remote Parcel Repository URLs section, click +, and enter the Unravel host along with the exact directory name for your CDH version.
   http://<unravel-host>:<port>/parcels/<cdh major.minor version>/
   For example: http://xyz.unraveldata.com:3000/parcels/cdh 7.1
   • <unravel-host> is the hostname or LAN IP address of Unravel. In a multi-cluster scenario, this is the host where the log_receiver daemon is running.
   • <port> is the Unravel UI port. The default is 3000. If you have customized the default port, add that port number instead.
   • <cdh-version> is your version of CDP, for example cdh7.1. You can go to the http://<unravel-host>:<port>/parcels/ directory (for example: http://xyznode46.unraveldata.com:3000/parcels) and copy the exact directory name of your CDH version (CDH<major.minor>).
   ### Note
   If you're using Active Directory Kerberos, unravel-host must be a fully qualified domain name or IP address.
   ### Tip
   If you're running more than one version of CDP (for example, you have multiple clusters), you can add more than one parcel entry for unravel-host.
4. Click Save Changes.
5. In Cloudera Manager, click Check for New Parcels, find the UNRAVEL_SENSOR parcel that you want to distribute, and click the corresponding Download button.
6. After the parcel is downloaded, click the corresponding Distribute button. This distributes the parcel to all the hosts.
7. After the parcel is distributed, click the corresponding Activate button.
The status column will now display Distributed, Activated.
### Note
If you have an old sensor parcel from Unravel, you must deactivate it now.
1. In Cloudera Manager, select the target cluster from the drop-down, click Hive on Tez > Configuration, and search for Service Environment.
2. In Hive on Tez Service Environment Advanced Configuration Snippet (Safety Valve), enter the following exactly as shown, with no substitutions:
   AUX_CLASSPATH=${AUX_CLASSPATH}:/opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/unravel_hive_hook.jar
3. Ensure that the Unravel hive hook JAR has read/execute access for the user running the hive server.
1. In Cloudera Manager, select the target cluster from the drop-down, click Oozie > Configuration, and check the path shown in ShareLib Root Directory.
2. From a terminal application on the Unravel node (the edge node in a multi-cluster deployment), find the lib directory with the latest timestamp under the ShareLib Root Directory path.
hdfs dfs -ls <path to ShareLib directory>
For example: hdfs dfs -ls /user/oozie/share/lib/
### Important
The JARs must be copied to the lib directory with the latest timestamp under the path shown in ShareLib Root Directory.
3. Copy the Hive Hook JAR /opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/unravel_hive_hook.jar and the Btrace JAR, /opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar to the specified path in ShareLib Root Directory.
For example, if the path specified in ShareLib Root Directory is /user/oozie, run the following commands to copy the JAR files.
hdfs dfs -copyFromLocal /opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/unravel_hive_hook.jar /user/oozie/share/lib/<latest timestamp lib directory>/
For example: hdfs dfs -copyFromLocal /opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/unravel_hive_hook.jar /user/oozie/share/lib/lib_20210326035616/
hdfs dfs -copyFromLocal /opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar /user/oozie/share/lib/<latest timestamp lib directory>/
For example: hdfs dfs -copyFromLocal /opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar /user/oozie/share/lib/lib_20210326035616/
### Caution
Jobs controlled by Oozie 2.3+ fail if you do not copy the Hive Hook and BTrace JARs to the HDFS shared library path.
1. In Cloudera Manager, go to Tez > Configuration and search for the following properties:
• tez.am.launch.cmd-opts
• tez.task.launch.cmd-opts
2. Append the following to both the tez.am.launch.cmd-opts and tez.task.launch.cmd-opts properties:
-javaagent:/opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar=libs=mr,config=tez -Dunravel.server.hostport=<unravel_host>:4043
### Note
For unravel-host, specify the FQDN or the logical hostname of Unravel or of the edge node in case of multi-cluster.
### Note
If you are using JDK version 9 or later, make sure you add the following to the existing Java options:
--add-exports java.base/jdk.internal.perf=ALL-UNNAMED
--add-exports java.management/sun.management.counter=ALL-UNNAMED
For example, the complete JAVA options are specified as follows:
-javaagent:/opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar=libs=mr -Dunravel.server.hostport=unravel-host:4043 --add-exports java.base/jdk.internal.perf=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED --add-exports java.management/sun.management.counter.perf=ALL-UNNAMED --add-exports java.management/sun.management.counter=ALL-UNNAMED
3. Click the Stale Configurations icon to deploy the client configuration and restart the Tez services.
1. On the Cloudera Manager, click Hive on Tez > Configuration tab.
2. Search for hive-site.xml, which will lead to the Hive Client Advanced Configuration Snippet (Safety Valve) for hive-site.xml section.
3. Specify the hive hook configurations. You have the option to either use the XML text field or Editor to specify the hive hook configuration.
• Option 1: XML text field
Click View as XML to open the XML text field and copy-paste the following.
<property>
<name>com.unraveldata.host</name>
<value><UNRAVEL HOST NAME></value>
<description>Unravel hive-hook processing host</description>
</property>
<property>
<name>com.unraveldata.hive.hook.tcp</name>
<value>true</value>
</property>
<property>
<name>com.unraveldata.hive.hdfs.dir</name>
<value>/user/unravel/HOOK_RESULT_DIR</value>
<description>destination for hive-hook, Unravel log processing</description>
</property>
<property>
<name>hive.exec.driver.run.hooks</name>
<value>com.unraveldata.dataflow.hive.hook.UnravelHiveHook</value>
<description>for Unravel, from unraveldata.com</description>
</property>
<property>
<name>hive.exec.pre.hooks</name> <value>com.unraveldata.dataflow.hive.hook.UnravelHiveHook</value>
<description>for Unravel, from unraveldata.com</description>
</property>
<property>
<name>hive.exec.post.hooks</name> <value>com.unraveldata.dataflow.hive.hook.UnravelHiveHook</value>
<description>for Unravel, from unraveldata.com</description>
</property>
<property>
<name>hive.exec.failure.hooks</name> <value>com.unraveldata.dataflow.hive.hook.UnravelHiveHook</value>
<description>for Unravel, from unraveldata.com</description>
</property>
Make sure you replace UNRAVEL HOST NAME with the Unravel hostname (in a multi-cluster deployment, use the hostname of the edge node instead).
• Option 2: Editor:
Click + and enter the property, value, and description (optional).
| Property | Value | Description |
| --- | --- | --- |
| com.unraveldata.host | Unravel hostname (in a multi-cluster deployment, the hostname of the edge node) | Unravel hive-hook processing host |
| com.unraveldata.hive.hook.tcp | true | Hive hook TCP protocol |
| com.unraveldata.hive.hdfs.dir | /user/unravel/HOOK_RESULT_DIR | Destination directory for hive-hook, Unravel log processing |
| hive.exec.driver.run.hooks | com.unraveldata.dataflow.hive.hook.UnravelHiveHook | Hive hook |
| hive.exec.pre.hooks | com.unraveldata.dataflow.hive.hook.UnravelHiveHook | Hive hook |
| hive.exec.post.hooks | com.unraveldata.dataflow.hive.hook.UnravelHiveHook | Hive hook |
| hive.exec.failure.hooks | com.unraveldata.dataflow.hive.hook.UnravelHiveHook | Hive hook |
### Note
If you configure CDP with Cloudera Navigator's safety valve setting, you must edit the following keys and append the value com.unraveldata.dataflow.hive.hook.UnravelHiveHook without any space.
• hive.exec.post.hooks
• hive.exec.pre.hooks
• hive.exec.failure.hooks
For example:
<property>
<name>hive.exec.post.hooks</name>
<value><existing hooks>,com.unraveldata.dataflow.hive.hook.UnravelHiveHook</value>
<description>for Unravel, from unraveldata.com</description>
</property>
4. Similarly, make sure you add the same hive hook configurations in HiveServer2 Advanced Configuration Snippet (Safety Valve) for hive-site.xml.
5. Optionally, add a comment in Reason for change and then click Save Changes.
6. From the Cloudera Manager page, click the Stale Configurations icon to deploy the configuration and restart the Hive services.
7. Check Unravel UI to see if all Hive queries are running.
• If queries are running fine and appearing in Unravel UI, then you have successfully added the hive hooks configurations.
• If queries are failing with a class not found error or permission problems:
• Undo the hive-site.xml changes in Cloudera Manager.
• Deploy the hive client configuration.
• Restart the Hive service.
• Follow the steps in Troubleshooting.
1. In Cloudera Manager, select the target cluster, click Kafka service > Configuration, and search for broker_java_opts.
2. In Additional Broker Java Options enter the following:
-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+DisableExplicitGC -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.local.only=true -Djava.rmi.server.useLocalHostname=true -Dcom.sun.management.jmxremote.rmi.port=9393
3. Click Save Changes.
1. In Cloudera Manager, select the target cluster and then click Spark.
2. Select Configuration.
3. Search for spark-defaults.
4. In Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-defaults.conf, enter the following text, replacing placeholders with your particular values:
spark.unravel.server.hostport=unravel-host:port
spark.driver.extraJavaOptions=-javaagent:/opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar=config=driver,libs=spark-version
spark.executor.extraJavaOptions=-javaagent:/opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar=config=executor,libs=spark-version
spark.eventLog.enabled=true
### Note
If you are using JDK version 9 or later, make sure you add the following to the existing Java options:
--add-exports java.base/jdk.internal.perf=ALL-UNNAMED
--add-exports java.management/sun.management.counter=ALL-UNNAMED
For example, the complete JAVA options are specified as follows:
-javaagent:/opt/cloudera/parcels/UNRAVEL_SENSOR/lib/java/btrace-agent.jar=libs=mr -Dunravel.server.hostport=unravel-host:4043 --add-exports java.base/jdk.internal.perf=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED --add-exports java.management/sun.management.counter.perf=ALL-UNNAMED --add-exports java.management/sun.management.counter=ALL-UNNAMED
• <unravel-host>: Specify the Unravel hostname. In the case of multi-cluster deployment use the FQDN or logical hostname of the edge node for unravel-host.
• <Port>: 4043 is the default port. If you have customized the ports, you can specify that port number here.
• <spark-version>: For spark-version, use a Spark version that is compatible with this version of Unravel. You can check the Spark version with the spark-submit --version command and specify the same version.
5. Click Save changes.
6. Click the Stale Configurations icon to deploy the client configuration and restart the Spark services. Your spark-shell will ensure new JVM containers are created with the necessary extraJavaOptions for the Spark drivers and executors.
7. Check Unravel UI to see if all Spark jobs are running.
• If jobs are running and appearing in Unravel UI, you have deployed the Spark jar successfully.
• If queries are failing with a class not found error or permission problems:
• Undo the spark-defaults.conf changes in Cloudera Manager.
• Deploy the client configuration.
• Investigate and fix the issue.
• Follow the steps in Troubleshooting.
### Note
If you have YARN-client mode applications, the default Spark configuration is not sufficient, because the driver JVM starts before the configuration set through the SparkConf is applied. For more information, see Apache Spark Configuration. In this case, configure the Unravel Sensor for Spark to profile specific Spark applications only (in other words, per-application profiling rather than cluster-wide profiling).
Impala properties are automatically configured. Refer to Impala properties for the list of properties that are configured automatically. If they are not already set by auto-configuration, set them as follows:
<Unravel installation directory>/manager config properties set <PROPERTY> <VALUE>
For example,
<Unravel installation directory>/manager config properties set com.unraveldata.data.source cm
<Unravel installation directory>/manager config properties set com.unraveldata.cloudera.manager.url http://my-cm-url
<Unravel installation directory>/manager config properties set com.unraveldata.cloudera.manager.username mycmname
<Unravel installation directory>/manager config properties set com.unraveldata.cloudera.manager.password mycmpassword
For multi-cluster, use the following format and set these on the edge node:
<Unravel installation directory>/manager config properties set com.unraveldata.data.source cm
<Unravel installation directory>/manager config properties set com.unraveldata.cloudera.manager.url http://my-cm-url
<Unravel installation directory>/manager config properties set com.unraveldata.cloudera.manager.username mycmname
<Unravel installation directory>/manager config properties set com.unraveldata.cloudera.manager.password mycmpassword
### Note
By default, the Impala sensor task is enabled. To disable it, you can edit the following property as follows:
<Unravel installation directory>/manager config properties set com.unraveldata.sensor.tasks.disabled iw
Optionally, you can change the Impala lookback window. By default, when Unravel Server starts, it retrieves the last 5 minutes of Impala queries. To change this, do the following:
Change the value for com.unraveldata.cloudera.manager.impala.look.back.minutes property.
<Unravel installation directory>/manager config properties set com.unraveldata.cloudera.manager.impala.look.back.minutes -<period>
For example: <Unravel installation directory>manager config properties set com.unraveldata.cloudera.manager.impala.look.back.minutes -7 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20402513444423676, "perplexity": 26511.014154004755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945317.85/warc/CC-MAIN-20230325064253-20230325094253-00073.warc.gz"} |
http://openstudy.com/updates/51681806e4b050ab14bf6a53 | ## anonymous 3 years ago The Express won 80% of the 30 games that they played this season. How many games did they win?
1. anonymous: do u know how to start?
2. anonymous: turn80% to a decimal
3. anonymous: umm..no.. ok see any PERCENT means PER CENT =PER 100
4. anonymous: so 80 percent is 80 per 100 = 80/100 so 80% of 30 = (80/100) X 30 understand?
5. anonymous: $\huge\frac{ 80 }{ 100}\times 30$
6. anonymous: 100*30/80
7. anonymous: Yeah ... so divide then multiply?
8. anonymous: yes...so can u calculate the answer and check it with me?
9. anonymous: sure thing...hold on for a sec
10. anonymous: Ok so my answer would be 24
11. anonymous: yeah its 24 80*30/100
12. anonymous
13. anonymous: yeah thats right!!
14. anonymous: Thanks!!
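The arithmetic in the thread is easy to sanity-check with a couple of lines of Python (purely an illustration of the percent-of computation; the variable names are made up):

```python
games_played = 30
win_rate = 80 / 100          # "per cent" literally means per hundred
games_won = win_rate * games_played
print(games_won)             # 24.0
```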
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999798536300659, "perplexity": 19523.233917702928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721558.87/warc/CC-MAIN-20161020183841-00217-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://researchonline.ljmu.ac.uk/id/eprint/15658/ | The survival of globular clusters in a cuspy Fornax
Shao, S, Cautun, M, Frenk, CS, Reina-Campos, M, Deason, AJ, Crain, RA, Kruijssen, JMD and Pfeffer, JL (2021) The survival of globular clusters in a cuspy Fornax. Monthly Notices of the Royal Astronomical Society, 507 (2). pp. 2339-2353. ISSN 0035-8711
The survival of globular clusters in a cuspy Fornax.pdf - Published Version
It has long been argued that the radial distribution of globular clusters (GCs) in the Fornax dwarf galaxy requires its dark matter halo to have a core of size $\sim 1$ kpc. We revisit this argument by investigating analogues of Fornax formed in E-MOSAICS, a cosmological hydrodynamical simulation that self-consistently follows the formation and evolution of GCs in the EAGLE galaxy formation model. In EAGLE, Fornax-mass haloes are cuspy and well described by the Navarro-Frenk-White profile. We post-process the E-MOSAICS to account for GC orbital decay by dynamical friction, which is not included in the original model. Dynamical friction causes 33 per cent of GCs with masses $M_{\rm GC}\geq4\times10^4 {~\rm M_\odot}$ to sink to the centre of their host where they are tidally disrupted. Fornax has a total of five GCs, an exceptionally large number compared to other galaxies of similar stellar mass. In the simulations, we find that only 3 per cent of the Fornax analogues have five or more GCs, while 30 per cent have only one and 35 per cent have none. We find that GC systems in satellites are more centrally concentrated than in field dwarfs, and that those formed in situ (45 per cent) are more concentrated than those that were accreted. The survival probability of a GC increases rapidly with the radial distance at which it formed ($r_{\rm init}$): it is 37 per cent for GCs with $r_{\rm init} \leq 1$ kpc and 92 per cent for GCs with $r_{\rm init} \geq 1$ kpc. The present-day radial distribution of GCs in E-MOSAICS turns out to be indistinguishable from that in Fornax, demonstrating that, contrary to claims in the literature, the presence of five GCs in the central kiloparsec of Fornax does not exclude a cuspy DM halo. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.769301176071167, "perplexity": 2266.3201664164703}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00019.warc.gz"} |
https://tex.stackexchange.com/questions/356553/problems-with-newtxmath-math-fonts | # Problems with newtxmath math fonts [closed]
I use newtxtext and newtxmath.
The problems are as follows:
1. $\triangleright$ yields something like \mathcal{F}. With \show\triangleright I get \triangleright=\mathchar"2246
2. The exclamation point doesn't show up when you type $n!$.
• Please provide a full minimal working example which reproduces the issue, possibly starting with \documentclass{...} and ending with \end{document}. In this way we can copy-paste the code and look at the problem without resorting to wild guesses about what you are doing. – campa Mar 2 '17 at 14:44
• In particular, do tell us which document class you use and which packages you load in addition to newtxmath and newtxtext. – Mico Mar 2 '17 at 14:51
A minimal example is:
\documentclass[10pt]{book}
\usepackage[mtphrb]{mtpro2}
\usepackage{newtxmath}
\begin{document}
$\triangleright$ $n!$
\end{document}
By removing \usepackage[mtphrb]{mtpro2}, I solved the problem.
• The piece of information that you were loading both mtpro2 and newtxmath is absolutely crucial, and it should have been mentioned in the original query. Without this piece of information, did you expect anyone to come up with a diagnosis, let alone a cure? – Mico Mar 13 '17 at 17:57
• How can one close a question, or suppress it when it is a silly question? – André Bellaïche Apr 22 '17 at 17:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5935277938842773, "perplexity": 1520.9596606301952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540544696.93/warc/CC-MAIN-20191212153724-20191212181724-00516.warc.gz"} |
http://blog.brucemerry.org.za/2017/06/extra-dcj-2017-r2-analysis.html?showComment=1502261014711 | ## Monday, June 12, 2017
### Flagpoles
My solution during the contest was essentially the same as the official analysis. Afterwards I realised a potential slight simplification: if one starts by computing the second-order differences (i.e., the differences of the differences), then one is looking for the longest run of zeros, rather than the longest run of the same value. That removes the need to communicate the value used in the runs at the start and end of each section.
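As a rough single-machine illustration of that simplification (assuming the underlying task is to find the longest contiguous segment of values forming an arithmetic progression; the distributed splitting into per-node sections and the boundary handling are omitted, and the function name is made up):

```python
def longest_arithmetic_run(heights):
    """Length of the longest contiguous segment whose values form an
    arithmetic progression, found via second-order differences."""
    n = len(heights)
    if n <= 2:
        return n
    first = [heights[i + 1] - heights[i] for i in range(n - 1)]
    second = [first[i + 1] - first[i] for i in range(n - 2)]
    best = run = 0
    for d in second:
        run = run + 1 if d == 0 else 0
        best = max(best, run)
    # a run of k zeros in the second differences covers k + 2 original elements
    return best + 2

print(longest_arithmetic_run([1, 2, 3, 5, 7, 9, 8]))  # 4, e.g. the segment 3, 5, 7, 9
```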
### Number Bases
I missed the trick of being able to uniquely determine the base from the first point at which X[i] + Y[i] ≠ Z[i]. Instead, at every point where X[i] + Y[i] ≠ Z[i], I determine two candidate bases (depending on whether there is a carry or not). Then I collect the candidates and test each of them. If more than three candidates are found, then the test case is impossible, since there must be two disjoint candidate pairs.
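A small Python sketch of this candidate-collection idea (equal-length digit lists, most significant digit first; both candidate formulas assume a carry out of the mismatching position, and the function names are made up). If no position mismatches at all there are no carries and any base larger than every digit works, which this sketch does not handle:

```python
def candidate_bases(X, Y, Z):
    """Candidate bases from every position where X[i] + Y[i] != Z[i]."""
    cands = set()
    for x, y, z in zip(X, Y, Z):
        if x + y != z:
            cands.add(x + y - z)      # carry out of this position, no carry in
            cands.add(x + y + 1 - z)  # carry out and carry in
    return {b for b in cands if b >= 2}

def adds_up(X, Y, Z, base):
    """Check that X + Y == Z when the digit lists are interpreted in `base`."""
    def value(digits):
        return sum(d * base ** i for i, d in enumerate(reversed(digits)))
    return max(X + Y + Z) < base and value(X) + value(Y) == value(Z)

# 23 + 14 = 41 holds in base 6 (15 + 10 = 25 in decimal)
X, Y, Z = [2, 3], [1, 4], [4, 1]
print([b for b in candidate_bases(X, Y, Z) if adds_up(X, Y, Z, b)])  # [6]
```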
### Broken Memory
My approach was slightly different. Each node binary searches for its broken value, using two other nodes to help (and simultaneously helping two other nodes). Let's say we know the broken value is in a particular interval. Split that interval in half, and compute hashes for each half on the node (h1 and h2) and on two other nodes (p1 and p2, q1 and q2). If h1 equals p1 or q1, then the broken value must be in interval 2, or vice versa. If neither applies, then nodes p and q both have broken values, in the opposite interval to that of the current node. We can tell which by checking whether p1 = q1 or p2 = q2.
This does rely on not having collisions in the hash function. In the contest I relied on the contest organisers not breaking my exact choice of hash function, but it is actually possible to write a solution that works on all test data. Let P be a prime greater than $$10^{18}$$. To hash an interval, compute the sums $$\sum m_i$$ and $$\sum i m_i$$, both mod P, giving a 128-bit hash. Suppose two sequences p and q collide, but differ in at most two positions. The sums are the same, so they must differ in exactly two positions j and k, with $$p_j - q_j = q_k - p_k$$ (all mod P). But then the second sums will differ by
$$jp_j + kp_k - jq_j - kq_k = (j - k)(p_j - q_j)$$, and since P is prime and each factor is less than P, this will be non-zero.
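For concreteness, here is a small Python sketch of such a hash, using the Mersenne prime 2^61 - 1 as P (it exceeds 10^18; any prime above 10^18 works for the argument, and the function name is made up):

```python
P = (1 << 61) - 1  # the Mersenne prime 2^61 - 1, larger than 10^18

def interval_hash(values, start=0):
    """Return (sum of m_i, sum of i * m_i) mod P over the interval,
    where i is the absolute index of each element."""
    h1 = h2 = 0
    for offset, v in enumerate(values):
        i = start + offset
        h1 = (h1 + v) % P
        h2 = (h2 + i * v) % P
    return h1, h2

a = [5, 7, 11, 13]
b = [5, 9, 11, 11]   # differs from a in exactly two positions, same plain sum
print(interval_hash(a), interval_hash(b))    # (36, 68) (36, 64): second components differ
print(interval_hash(a) == interval_hash(b))  # False, as the argument above guarantees
```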
ronaldo said...
nice | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8560407161712646, "perplexity": 530.6389535794445}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669730.38/warc/CC-MAIN-20191118080848-20191118104848-00136.warc.gz"} |
https://gilkalai.wordpress.com/2009/01/28/mathematics-science-and-blogs/?like=1&source=post_flair&_wpnonce=81d32a40f7 | ## Mathematics, Science, and Blogs
Michael Nielsen wrote a lovely essay entitled “Doing science online” about mathematics, science, and blogs. Michael’s primary example is a post over Terry Tao’s blog about the Navier-Stokes equation, and he suggests blogs as a way of scaling up scientific conversation. Michael is writing a book called “The Future of Science.” He is a strong advocate of doing science in the open, and regards these changes as truly revolutionary. (The term “Science 2.0” is mentioned in the remarks.)
Michael’s post triggered Tim Gowers to present his thoughts about massive collaboration in mathematics, and that post is also very interesting, with engaging follow-up remarks. Tim Gowers mentioned the n-category cafe as a place where a whole research programme is advanced on a blog. Terry Tao mentioned comments on posts in his open-problems series as having some value. He mentioned, in particular, the post on Mahler’s conjecture. (Also, I think some discussions over Scott Aaronson’s blog had the nature of discussing specific technical math problems coming from CS.)
Tim actually proposes an experiment: trying to collectively solve a specific math problem. This would be interesting!!! I suppose we need to give such an effort over a blog a longer than usual life-span – a few months perhaps. (And maybe not start with a terribly difficult problem.) (What would be an appropriate control experiment, though?)
Ben Webster in “Secret blogging seminar” mentioned, in this context, earlier interesting related posts about “Working in secret“.
Christian Elsholtz mentioned on Gowers’s blog an intermediate problem (called “Moser’s cube problem”) where you look not for combinatorial lines (where the undetermined coordinates should be 1 in x, 2 in y and 3 in z), and not for an affine line (where they should be 1, 2, 3 in x, y and z in any order), but for a geometric line: it can be 1 in x, 2 in y and 3 in z, or 3 in x, 2 in y and 1 in z.
Update: Things are moving fast regarding Gowers’s massive collaboration experiment. He proposes to study together a new approach to the $k=3$ “density Hales-Jewett theorem”. A background post appears here. Hillel Furstenberg and Izzy Katznelson’s proof of the density Hales-Jewett theorem was a crowning achievement of the ergodic theory method towards Szemeredi’s theorem. As with Furstenberg’s proof of Szemeredi’s theorem itself, the case $k=3$ was considerably simpler and had appeared in an earlier paper by Hillel and Izzy. The recent extensions of the Szemeredi regularity lemma that led to a simpler combinatorial proof of Szemeredi’s theorem have so far not led to simpler proofs for the density Hales-Jewett case. If you look at Tim’s background post, let me ask you this: What is the case $k=2$ of the density Hales-Jewett theorem? It is something familiar that we talked about!
Here is a particularly silly problem that I suggested at some point along the discussions: How large can a family of subsets of an $n$-element set be without having two sets A and B such that the number of elements in A but not in B is twice the number of elements in B but not in A?
Update: This problem was completely resolved by Imre Leader and Eoin Long; their paper Tilted Sperner families also contains related results and conjectures.
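For very small $n$ one can get a feel for this problem by brute force; here is a quick Python sketch (illustrative only, with made-up function names; it enumerates all families of subsets, so it is feasible only for tiny $n$):

```python
from itertools import combinations

def violates(A, B):
    """Ordered pair (A, B): |A \\ B| is exactly twice |B \\ A|."""
    return len(A - B) == 2 * len(B - A)

def largest_good_family(n):
    """Largest family of subsets of {0,...,n-1} with no ordered pair (A, B)
    violating the condition. There are 2**(2**n) families, so keep n tiny."""
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    best = 0
    for mask in range(1 << len(subsets)):
        family = [s for i, s in enumerate(subsets) if mask >> i & 1]
        if all(not violates(A, B) and not violates(B, A)
               for A, B in combinations(family, 2)):
            best = max(best, len(family))
    return best

print(largest_good_family(3))
```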
Massive collaboration in art
Q: What is the case $k=2$ of the density Hales-Jewett theorem? A: It is Sperner’s theorem! (which we discussed in this post.)
I will keep updating with news from Tim’s project. [Last update is from October 21.] More updates: Tim’s project is quickly getting off the ground. A useful wiki was established. More updates: it is probably successful.
There are several new posts on Gowers’s blog describing the project, its rules and motivation, and there are interesting discussions also over What’s new and In theory. The project itself is fairly focused; let me mention another connection which is a little beside its defined scope. Combinatorial lines for $k=3$ are simply three vectors $x,y,z$ that agree in some coordinates, while in the remaining coordinates the entry is ‘1’ in $x$, ‘2’ in $y$ and ‘3’ in $z$. If instead you ask that $x,y,z$ form an affine line or, equivalently, an arithmetic progression (in $Z_3^n$), getting a density theorem is easier. This just means that in every coordinate where x, y and z are different, one of them is ‘1’, one is ‘2’ and one is ‘3’, but you don’t insist which is which. (There is a famous problem regarding the density needed.) This problem is reviewed by Terry Tao here. I am not sure if the regularity lemma approach works for this problem; Hillel and Izzy proved density results for affine lines using the ergodic theory approach before they moved to the more complicated (less structured) case of combinatorial lines.
And more: there are already almost 100 comments on Gowers’s main post. I will try to keep updating about some highlights in this post from time to time; in my opinion, a good time scale to examine it will be, say, once every 1/2 year. I find it interesting in various ways, and exciting, and nevertheless I am also somewhat skeptical about some of the perhaps too strong and too romantic interpretations.
And more (7/2): The discussion is now divided into three threads. The original project of finding a proof of density Hales-Jewett using some form of a regularity lemma is continued as a separate thread. (The plan is much more detailed than that.) The same thread also discusses issues related to Sperner’s theorem, which is sort of a baby case for HJT. Terry Tao hosts a thread about upper and lower bounds on DHJ and related problems. A third thread, about obstructions to uniformity and density-increment strategies, is alive and kicking on Gowers’s blog.
And more (2/3): The discussion moves on in several directions. A wiki was built to describe some background, variants, approaches, proofs of partial results, links to the original threads, and more. (Also a sort of time-table.) Tim Gowers launched a slower-going polymath2 aimed at reaching a useful notion of an “explicitly defined” Banach space. A well-known example of Tsirelson (described in the post) is the archetypal example of a “non explicitly defined” space, and Tim wants to collectively reach a conjecture that “explicitly defined Banach spaces” contain some simple classical Banach spaces. (Now, if Tim launches polymath3 dealing with expanders, property T, growth in groups, and the congruence subgroup problem, these three projects will come close to covering the major interests of many of my colleagues here.)
Sort of update (July 2009): Tim Gowers plans to propose a list of ten problems, one of which will be chosen for a polymath project in October; a polymath3 dedicated to the Hirsch conjecture is proposed over this blog. A mini polymath is taking place on Terry Tao’s blog. Terry set up (with Tim Gowers, Michael Nielsen and me) a new polymath blog. I will probably stop updating this post here. When the project started I was both skeptical and enthusiastic; the success of polymath1 surprised me, it went well beyond my expectations. I think we should remain skeptical and enthusiastic. (August 2009) Polymath4, dedicated to finding primes deterministically, was launched over the polymath blog. (Polymath4 was very active for several months. It led to some fruitful discussions but not to a definite result.) (September 2009) Tim Gowers wrote a preliminary post about possible problems for the October 2009 polymath.
(October 09) The paper with the DHJ proof is now uploaded. Ryan O’Donnell, who performed the lion’s share of the writing, left a little mystery: what was Varnavides’s first name? (A theorem of Varnavides’s played a role in the new proof, and the only reference to his first name was the initial P.) Terry Tao asked it on his blog, and in a short time the mystery was resolved when Thomas Sauvaget found the answer. The next day another participant, Andreas Varnavides, wrote: “Yes his first name was Panayiotis, born in Paphos Cyprus. He was my uncle.”
(January 2010) Polymath5 devoted to Erdos discrepency problem is launched on Gowers’s blog.
(January 2010) A draft of a second paper devoted to the study of DHJ and Moser numbers can be found here.
This entry was posted in Blogging, What is Mathematics. Bookmark the permalink.
### 5 Responses to Mathematics, Science, and Blogs
1. Pingback: Polymath « Maxwell’s Demon
3. Gil Kalai says:
The problem about Sperner theorem which is mentioned in the post was completely reolved by Imre Leader and Eoin Long, their paper Tilted Sperner families contains also related results and conjectures. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 12, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6797023415565491, "perplexity": 1943.1418802583019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461529.84/warc/CC-MAIN-20151124205421-00268-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://cdsweb.cern.ch/collection/NA61%20Papers?ln=el |
# NA61 Papers
Recent additions:
2020-01-08
18:20
$K^{*}(892)^0$ meson production in inelastic p+p interactions at 158 GeV/$c$ beam momentum measured by NA61/SHINE at the CERN SPS / Aduszkieiwicz, A (the NA61/SHINE Collaboration) /NA61 $K^{*}(892)^0$ resonance production via its $K^{+}\pi^{-}$ decay mode in inelastic p+p collisions at beam momentum 158 GeV/$c$ ($\sqrt{s_{NN}}=17.3$ GeV) is presented. The data were recorded by the NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron [...] CERN-EP-2020-002.- Geneva : CERN, 2020 - 36. Draft (restricted): PDF; Fulltext: PDF;
2019-12-16
13:47
Search for an exotic S=-2, Q=-2 baryon resonance in proton-proton interactions at $\sqrt{s_{NN}}$ = 17.3~GeV / Aduszkieiwicz, A (NA61/SHINE Collaboration) /NA61 Pentaquark states have been extensively investigated theoretically in the context of the constituent quark model. In this paper experimental searches in the $\Xi^-$$\pi^-, \Xi^-$$\pi^+$, $\bar{\Xi}^+$$\pi^- and \bar{\Xi}^+$$\pi^+$ invariant mass spectra in proton-proton interactions at $\sqrt{s}$=17.3~GeV are presented [...] CERN-EP-2019-283.- Geneva : CERN, 2019 - 9. Draft (restricted): PDF; Fulltext: PDF;
2019-09-12
12:25
Measurements of hadron production in $\pi^{+}$ + C and $\pi^{+}$ + Be interactions at 60 GeV/$c$ / NA61/SHINE Collaboration Precise knowledge of hadron production rates in the generation of neutrino beams is necessary for accelerator-based neutrino experiments to achieve their physics goals. NA61/SHINE, a large-acceptance hadron spectrometer, has recorded hadron+nucleus interactions relevant to ongoing and future long-baseline neutrino experiments at Fermi National Accelerator Laboratory. [...] arXiv:1909.06294; CERN-EP-2019-198.- 2019-12-11 - 28 p. - Published in : Phys. Rev. D 100 (2019) 112004 Article from SCOAP3: PDF; Draft (restricted): PDF; Fulltext: 1909.06294 - PDF; fermilab-pub-19-657-ad-nd-scd - PDF; CERN-EP-2019-198 - PDF; External link: FERMILABPUB
2019-09-06
17:12
Measurements of production and inelastic cross sections for $\mbox{p}+\mbox{C}$, $\mbox{p}+\mbox{Be}$, and $\mbox{p}+\mbox{Al}$ at 60 GeV/$c$ and $\mbox{p}+\mbox{C}$ and $\mbox{p}+\mbox{Be}$ at 120 GeV/$c$ / NA61/SHINE Collaboration This paper presents measurements of production cross sections and inelastic cross sections for the following reactions: 60~GeV/$c$ protons with C, Be, Al targets and 120~GeV/$c$ protons with C and Be targets. The analysis was performed using the NA61/SHINE spectrometer at the CERN SPS. [...] arXiv:1909.03351; CERN-EP-2019-193.- 2019-12-03 - 15 p. - Published in : Phys. Rev. D 100 (2019) 112001 Article from SCOAP3: PDF; CERN-EP-2019-193: PDF; Draft (restricted): PDF; Fulltext: 1909.03351 - PDF; fermilab-pub-19-658-ad-nd-scd - PDF; External link: FERMILABPUB
2019-07-30
12:02
Measurement of $\phi$ meson production in p+p interactions at 40, 80 and 158 GeV/$c$ with the NA61/SHINE spectrometer at the CERN SPS / Aduszkiewicz, A /NA61/SHINE Collaboration Results on $\phi$ meson production in inelastic p+p collisions at CERN SPS energies are presented. They are derived from data collected by the NA61/SHINE fixed target experiment, by means of invariant mass spectra fits in the $\phi$ to $K^+ K^-$ decay channel. [...] CERN-EP-2019-165.- Geneva : CERN, 2019 - 26. Draft (restricted): PDF; Fulltext: PDF;
2019-04-25
16:19
Proton-proton interactions and onset of deconfinement / Aduszkiewicz, A (NA61/SHINE Collaboration) The NA61/SHINE experiment at the CERN SPS is performing a unique study of the phase diagram of strongly interacting matter by varying collision energy and nuclear mass number of colliding nuclei. In central Pb+Pb collisions the NA49 experiment found structures in the energy dependence of several observables in the CERN SPS energy range that had been predicted for the transition to a deconfined phase. [...] CERN-EP-2019-086.- Geneva : CERN, 2019 - 12. Draft (restricted): PDF; Fulltext: PDF;
2018-08-10
12:00
Measurements of $\pi^\pm$, $K^\pm$ and proton yields from the surface of the T2K replica target for incoming 31 GeV/$c$ protons with the NA61/SHINE spectrometer at the CERN SPS / NA61/SHINE Collaboration Measurements of the $\pi^{\pm}$, $K^{\pm}$, and proton double differential yields emitted from the surface of the 90-cm-long carbon target (T2K replica) were performed for the incoming 31 GeV/c protons with the NA61/SHINE spectrometer at the CERN SPS using data collected during 2010 run. The double differential $\pi^{\pm}$ yields were measured with increased precision compared to the previously published NA61/SHINE results, while the $K^{\pm}$ and proton yields were obtained for the first time. [...] arXiv:1808.04927; FERMILAB-PUB-18-441-AD-CD-ND; CERN-EP-2018-222.- Geneva : CERN, 2019-01-31 - 43 p. - Published in : Eur. Phys. J. C 79 (2019) 100 Article from SCOAP3: PDF; Draft (restricted): PDF; Fulltext: 1808.04927 - PDF; fermilab-pub-18-441-ad-cd-nd - PDF; CERN-EP-2018-222 - PDF; Fulltext from Publisher: PDF; External link: Fermilab Library Server (fulltext available)
2018-05-02
18:26
Measurements of total production cross sections for pi+ + C, pi+ + Al, K+ + C, and K+ + Al at 60 GeV/c and pi+ + C and pi+ + Al at 31 GeV/c / Aduszkiewicz, A. (Warsaw U.) ; Andronov, E.V. (St. Petersburg State U.) ; Antićić, T. (Boskovic Inst., Zagreb) ; Baatar, B. (Dubna, JINR) ; Baszczyk, M. (AGH-UST, Cracow) ; Bhosale, S. (Cracow, INP) ; Blondel, A. (Geneva U.) ; Bogomilov, M. (Sofiya U.) ; Brandin, A. (Moscow Phys. Eng. Inst.) ; Bravar, A. (Geneva U.) et al. This paper presents several measurements of total production cross sections and total inelastic cross sections for the following reactions: pi+ + C, pi+ + Al, K+ + C, K+ + Al at 60 GeV/c, pi+ + C and pi+ + Al at 31 GeV/c. The measurements were made using the NA61/SHINE spectrometer at the CERN SPS. [...] arXiv:1805.04546; CERN-EP-2018-099; FERMILAB-PUB-18-210-AD-CD-ND; CERN Preprint CERN-EP-2018-099.- Geneva : CERN, 2018-09-11 - 11 p. - Published in : 10.1103/PhysRevD.98.052001 Article from SCOAP3: PDF; Draft (restricted): PDF; Fulltext: openaccess_PhysRevD.98.052001 - PDF; CERN-EP-2018-099 - PDF; 1805.04546 - PDF; fermilab-pub-18-210-ad-cd-nd - PDF; Preprint: PDF; External link: Fermilab Library Server (fulltext available)
2017-05-09
22:16
Measurement of meson resonance production in $\pi ^-+$ C interactions at SPS energies / NA61/SHINE Collaboration We present measurements of $\rho^0$, $\omega$ and K$^{*0}$ spectra in $\pi^{-} +$C production interactions at 158 GeV/c and $\rho^0$ spectra at 350 GeV/c using the NA61/SHINE spectrometer at the CERN SPS. Spectra are presented as a function of the Feynman's variable $x_\text{F}$ in the range $0 < x_\text{F} < 1$ and $0 < x_\text{F} < 0.5$ for 158 GeV/c and 350 GeV/c respectively. [...] CERN-EP-2017-105; FERMILAB-PUB-17-268-AD-ND; arXiv:1705.08206.- Geneva : CERN, 2017-09-20 - 34. - Published in : Eur. Phys. J. C 77 (2017) 626 Article from SCOAP3: PDF; Draft (restricted): PDF; Fulltext: fermilab-pub-17-268-ad-nd - PDF; CERN-EP-2017-105 - PDF; Preprint: PDF; External link: FERMILABPUB
2017-04-06
17:58
Measurements of $\pi ^\pm$ , K$^\pm$ , p and ${\bar{\text {p}}}$ spectra in proton-proton interactions at 20, 31, 40, 80 and 158 $\text{ GeV}/c$ with the NA61/SHINE spectrometer at the CERN SPS / NA61/SHINE Collaboration Measurements of inclusive spectra and mean multiplicities of $\pi^\pm$, K$^\pm$, p and $\bar{\textrm{p}}$ produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c ($\sqrt{s} =$ 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively) were performed at the CERN Super Proton Synchrotron using the large acceptance NA61/SHINE hadron spectrometer. Spectra are presented as function of rapidity and transverse momentum and are compared to predictions of current models. [...] CERN-EP-2017-066; FERMILAB-PUB-17-185-AD-ND; arXiv:1705.02467.- Geneva : CERN, 2017-10-10 - 54 p. - Published in : Eur. Phys. J. C 77 (2017) 671 Article from SCOAP3: PDF; Draft (restricted): PDF; Fulltext: fermilab-pub-17-185-ad-nd - PDF; CERN-EP-2017-066 - PDF; Preprint: PDF; External link: FERMILABPUB | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868729114532471, "perplexity": 13049.309098221482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143505.60/warc/CC-MAIN-20200218025323-20200218055323-00180.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/118794-find-square-root-3x3-matrix.html | # Math Help - Find the square root of a 3x3 matrix
1. ## Find the square root of a 3x3 matrix
I apologize if my notation isn't clear, newbie to this forum
I'm trying to find out how to find the square root of a 3x3 matrix.
For A=
[1, 1, 1
0, 1, 1
0 , 0, 1]
I know that, in general, A^x = (P^-1) (D^x) (P) for some invertible P. In the case of linearly independent eigenvectors, P should form a basis of A's eigenspace. But the eigenvalues of A here are all 1, and it only has one eigenvector, [1, 0, 0], and its scalar multiples. So that method isn't going to work.
There is a method using spectral decomposition that I don't fully understand. It starts with the following:
for an $n \times n$ matrix $A$ with eigenvalues $v_1, \dots, v_s$ of multiplicities $m_1, \dots, m_s$, there exist $n$ uniquely defined constituent matrices $E_{i,k}$, $i = 1, \dots, s$, $k = 0, \dots, m_i - 1$,
s.t. for any analytic function $f(x)$ we have
$f(A) = \sum_{i=1}^{s} \sum_{k=0}^{m_i - 1} f^{(k)}(v_i) E_{i,k}$
Anyways, if you can decode that, it seems to me you can arrive at the constituent matrices of A by the following equations (taking $f(x) = (x-1)^2$, whose value and first derivative vanish at 1):
$(A-I)(A-I) = 0 + 0 + 2 E_{1,2}$
so $(A-I)(A-I) = 2 E_{1,2}$
which works out to
[ 0, 0, 1/2
0, 0, 0
0, 0, 0 ]
A-I = E 1,1
which is of course
[ 0, 1, 1
0, 0, 1
0, 0, 0]
and finally
I = E1,0
So we have 3 constituent matrices for A, let's say
X E1,0 + Y E 1,1 + Z E 1,2
It turns out for values X=1, Y= 1/2, and Z = -1/4 you get
[ 1, 1/2, 3/8
0, 1, 1/2
0, 0, 1]
whose square is A. So somehow (I don't know) we have to use the constituent matrices in a linear combination to generate the square root of A. How to get the values of X, Y, Z I do not know.
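Numerically, the matrix above does square back to A, and the mystery coefficients are just the derivatives of the square-root function at the eigenvalue: with f(x) = sqrt(x) we get f(1) = 1, f'(1) = 1/2, f''(1) = -1/4, exactly the X, Y, Z that work, which is what the spectral-decomposition formula above prescribes. A quick check in Python/NumPy (only an illustration, re-typing the matrices from the post):

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])
E10 = np.eye(3)                      # E_{1,0} = I
E11 = A - E10                        # E_{1,1} = A - I
E12 = (A - E10) @ (A - E10) / 2      # E_{1,2} = (A - I)^2 / 2

# X, Y, Z are f(1) = 1, f'(1) = 1/2, f''(1) = -1/4 for f(x) = sqrt(x)
B = 1 * E10 + 0.5 * E11 - 0.25 * E12
print(B)                       # [[1, 0.5, 0.375], [0, 1, 0.5], [0, 0, 1]]
print(np.allclose(B @ B, A))   # True
```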
2. Originally Posted by Gchan
I'm trying to find out how to find the square root of a 3x3 matrix. [...] How to get the values of X,Y,Z I do not know.
finding a square root of a $3 \times 3$ upper triangular matrix $A=[a_{ij}]$ is not hard (i'll assume that the entries on the diagonal of $A$ are real and positive). define the $3 \times 3$ matrix $B=[b_{ij}]$ by:
for $i=1,2,3$ let $b_{ii}=\sqrt{a_{ii}}.$ also define $b_{21}=b_{31}=b_{32}=0.$ finally let $b_{12}=\frac{a_{12}}{b_{11} + b_{22}}, \ b_{23}=\frac{a_{23}}{b_{22}+b_{33}}$ and $b_{13}=\frac{a_{13} - b_{12}b_{23}}{b_{11}+b_{33}}.$ it's easy to see that $B^2=A.$
of course this formula will also work for matrices with complex entries if you choose a square root for each $a_{ii}$ and if the denominators in $b_{12}, \ b_{23}$ and $b_{13}$ are non-zero. finally, since every
square matrix with complex entries is similar to an upper triangular matrix, you can use the above to find a square root of an arbitrary $3 \times 3$ matrix.
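A direct transcription of this recipe (a sketch in Python/NumPy; the function name and the check at the end are mine, not from the post):

```python
import numpy as np

def sqrt_upper_triangular_3x3(A):
    """Square root of a 3x3 upper triangular matrix with positive diagonal,
    following the formulas in the reply above."""
    B = np.zeros((3, 3))
    for i in range(3):
        B[i, i] = np.sqrt(A[i, i])
    B[0, 1] = A[0, 1] / (B[0, 0] + B[1, 1])
    B[1, 2] = A[1, 2] / (B[1, 1] + B[2, 2])
    B[0, 2] = (A[0, 2] - B[0, 1] * B[1, 2]) / (B[0, 0] + B[2, 2])
    return B

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
B = sqrt_upper_triangular_3x3(A)
print(B)       # [[1, 0.5, 0.375], [0, 1, 0.5], [0, 0, 1]]
print(B @ B)   # recovers A
```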
3. Thank you very much.
Is there a way to then determine the Jordan form of A?
If eigenvalues are 1, then J=
[ 1, 0, 0
0, 1, 0
0, 0, 1]
or
[1,1,0
0,1,0
0,0,1]
or
[1,1,0
0,1,1
0,0,1]
or
[1,0,0
0,1,1
0,0,1]
Can we determine which is the Jordan form without finding P st A = (P^-1) (J) (P) ? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9457170963287354, "perplexity": 789.3106519550767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398468396.75/warc/CC-MAIN-20151124205428-00134-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/108213/the-multiplicative-system-in-a-symmetric-monoidal-category | # The multiplicative system in a symmetric monoidal category
Let $\mathcal{C}$ be a symmetric monoidal category. In the 1973 paper "Note on monoidal localisation", Brian Day discusses multiplicative systems of morphisms in $\mathcal{C}$. See also this mathoverflow question by Martin Brandenburg.
My question is: can we consider a multiplicative system consisting of both objects and morphisms in $\mathcal{C}$? This means that we have a collection of objects $x_i$ and a collection of morphisms $f_i$ such that $x_i \otimes x_j$ is still in the collection of objects, and the $x_i$ and $f_j$ satisfy some "compatibility condition". And can we define a localization along this more general multiplicative system?
Notice that in this viewpoint the case in the first paragraph can be considered as the multplicative system with only one object $1$ (and a system of morphisms).
This is a bit above my categorical pay grade, so I will leave a hopefully helpful comment rather than an answer. A general strategy to working with objects in any category is to encode them via their identity morphisms. Is it enough in your case to use the theory of monoidal localization but with some identity morphisms in the mix? – Theo Johnson-Freyd Sep 27 '12 at 14:02
@Theo: Yes I need some morphisms in the mix. But still I'm interested in the case you mentioned: we consider a multiplicative system of objects and the identity morphisms of each object. Then what should be the requirement on the collection of objects to make them a multiplicative system? – Zhaoting Wei Sep 27 '12 at 15:31
http://hal.in2p3.fr/in2p3-00271254 | Untangling supernova-neutrino oscillations with beta-beam data
Abstract : Recently, we suggested that low-energy beta-beam neutrinos can be very useful for the study of supernova neutrino interactions. In this paper, we examine the use of a such experiment for the analysis of a supernova neutrino signal. Since supernova neutrinos are oscillating, it is very likely that the terrestrial spectrum of supernova neutrinos of a given flavor will not be the same as the energy distribution with which these neutrinos were first emitted. We demonstrate the efficacy of the proposed method for untangling multiple neutrino spectra. This is an essential feature of any model aiming at gaining information about the supernova mechanism, probing proto-neutron star physics, and understanding supernova nucleosynthesis, such as the neutrino process and the r-process. We also consider the efficacy of different experimental approaches including measurements at multiple beam energies and detector configurations.
Document type :
Journal articles
Physical Review C, American Physical Society, 2008, 77, pp.055501. <10.1103/PhysRevC.77.055501>
Contributor : Suzanne Robert <>
Submitted on : Tuesday, April 8, 2008 - 4:24:21 PM
Last modification on : Thursday, June 5, 2008 - 10:58:18 AM
Citation
N. Jachowicz, G. C. Mclaughlin, C. Volpe. Untangling supernova-neutrino oscillations with beta-beam data. Physical Review C, American Physical Society, 2008, 77, pp.055501. <10.1103/PhysRevC.77.055501>. <in2p3-00271254>
Metrics
Record views | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369293451309204, "perplexity": 3723.8222983392793}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00129-ip-10-164-35-72.ec2.internal.warc.gz"}
https://infoscience.epfl.ch/record/212049 | Infoscience
Journal article
# Phase derivative estimation from a single interferogram using a Kalman smoothing algorithm
We report a technique for direct phase derivative estimation from a single recording of a complex interferogram. In this technique, the interference field is represented as an autoregressive model with spatially varying coefficients. Estimates of these coefficients are obtained using the Kalman filter implementation. The Rauch-Tung-Striebel smoothing algorithm further improves the accuracy of the coefficient estimation. These estimated coefficients are utilized to compute the spatially varying phase derivative. Stochastic evolution of the coefficients is considered, which allows estimating the phase derivative with any type of spatial variation. The simulation and experimental results are provided to substantiate the noise robustness and applicability of the proposed method in phase derivative estimation. (C) 2015 Optical Society of America | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82281494140625, "perplexity": 690.7905834777484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189214.2/warc/CC-MAIN-20170322212949-00328-ip-10-233-31-227.ec2.internal.warc.gz"} |
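The abstract above describes estimating phase derivatives by modelling the interference field as an autoregressive process and refining the coefficient estimates with a Rauch-Tung-Striebel (RTS) smoothing pass. As a generic illustration of that smoothing step only (a minimal linear-Gaussian sketch in Python/NumPy; the state-space model, function name, and parameter values are my own stand-ins, not the authors' AR-coefficient model):

```python
import numpy as np

def kalman_rts(y, F, H, Q, R, x0, P0):
    """Kalman filter forward pass + Rauch-Tung-Striebel backward smoother for
        x[k] = F x[k-1] + w,   w ~ N(0, Q)
        y[k] = H x[k]   + v,   v ~ N(0, R)
    y is an (n_steps, n_obs) array; returns the smoothed state means."""
    n, d = len(y), x0.shape[0]
    xp = np.zeros((n, d)); Pp = np.zeros((n, d, d))   # one-step predictions
    xf = np.zeros((n, d)); Pf = np.zeros((n, d, d))   # filtered estimates
    x, P = x0, P0
    for k in range(n):
        # predict
        x, P = F @ x, F @ P @ F.T + Q
        xp[k], Pp[k] = x, P
        # update with the k-th measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (y[k] - H @ x)
        P = P - K @ H @ P
        xf[k], Pf[k] = x, P
    # backward (RTS) smoothing pass
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(n - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs

# Toy usage: a noisy random walk smoothed with the routine above (synthetic data).
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(size=200))
y = (truth + rng.normal(scale=0.5, size=200)).reshape(-1, 1)
smoothed = kalman_rts(y, F=np.eye(1), H=np.eye(1), Q=0.1 * np.eye(1),
                      R=0.25 * np.eye(1), x0=np.zeros(1), P0=np.eye(1))
```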
http://language.world.coocan.jp/scripts/?cmd=backup&page=QuickLookPscript&age=2&action=nowdiff | • The added line is THIS COLOR.
• The deleted line is THIS COLOR.
#freeze
[[FrontPage]]
#logparanoia()
* A Quick Look at Praat Scripting [#ncd89e32]
- This page is intended as a brief but somewhat "unorthodox" introduction to Praat scripting.
-- The primary target readers are ones who have some (partial) knowledge about programming/script language (e.g., C, perl, awk, python, VBA), and thus know, say, what ''array/if (conditional)/for (iteration, loop)'' mean. If you are new to programming, see other sources((Highly recommended: Ingmar Steiner (2006-2007) '''Automatic Speech Data Processing with Praat'''. Lecture Notes. (PDF file is available at: http://www.coli.uni-saarland.de/~steiner/praat/lecturenotes.pdf .) )).
-- The contents are fragmentary. For unknown commands, consult ''Help'' in Praat first. It provides you valuable information.
The contents are fragmentary. Consult other sources if you want to learn Praat Scripting from scratch((Highly recommended: Ingmar Steiner (2006-2007) '''Automatic Speech Data Processing with Praat'''. Lecture Notes. (PDF file is available at www.coli.uni-saarland.de/~steiner/praat/lecturenotes.pdf.) )).
//PDF file is [[here>www.coli.uni-saarland.de/~steiner/praat/lecturenotes.pdf]].
#contents
** Hello world [#xbb42d70]
+ Choose ''New Praat script'' from the ''Praat menu'' to open the ScriptEditor window.
+ Type:
echo Hello world
+ Choose ''Run'' from the ''Run menu'' (or type C-r) to execute.
- '''Cf.''' [[Scripting 3.1. Hello world>http://www.fon.hum.uva.nl/praat/manual/Scripting_3_1__Hello_world.html]] (official site)
** Language Specification Worth Mentioning [#b3a53f90]
- comment
-- If you want to write a comment at the end of a statement, for example, you can use ";", instead of "#"
str$= "abcdefg" ; string variable num = 3 ; numeric variable echo 'str$''tab$''num' # An error occurs if ";" is replaced with "#" *** General [#k6026e40] //- No declaration is required to use a variable. - variable types: numerics and strings - array - Output to the Info Window: ''echo'', ''printline'', and ''print'' --The three scripts below work in the same way (''clearinfo'' clears the Info window): --- echo Hello world --- clearinfo printline Hello world --- clearinfo print Hello world'newline$'
-- The two scripts below, however, do not work in the same way.
---
echo This is the first sentence.
echo This is the second sentence.
---
clearinfo
printline This is the first sentence.
printline This is the second sentence.
-- "''#''" and "'';''" are comment characters. Praat ignores the comment characters and after them, to the end of the line.
-- Use ";" instead of "#", if you want to write a comment after a statement in the same line.
n = 0
if n = 0 ;conditional
clearinfo
printline Hello world
endif
# An error occurs if ";" is replaced with "#".
--- Some commands, however, cannot be followed by a comment in the same line. For example,
n = 0
if n = 0 ;conditional
clearinfo
printline Hello world ; This sentence is also displayed on the Info window.
endif
// # Hello world script in Praat
// echo Hello world ; This sentence is also displayed on the Info window.
- index numbers start at 1 (not 0).
- Variable Types
-- numerics and strings
--- Variables (either numeric or string) must begin with lower cases.
--- String variables must end with ''$''.
str$ = "foo"
num = 3
### statements below causes an error. ###
#Str$ = "foo" ; it begins with Upper case.
#Num = 3
#str = "foo" ; it lacks $, though it has a string.
-- Arrays are also available.
--- The indices are enclosed with ''"["'' and ''"]"''
--- Like scalars, arrays must begin with lower cases.
--- Similarly, arrays which store strings must have ''$'' before ''"["''
str$[1] = "foo"
str$[2] = "bar"
str$[3] = "hoge"
str$[4] = "piyo"
num[1] = 3
num[2] = 34
num[3] = 22
num[4] = 98
### statements below causes an error. ###
#str[1] = "foo" ; it lacks $, though it has a string.
#Num[4] = 98 ; it begins with Upper case.
-- Unfortunately, hashes (or associative arrays) are not available.
- Comparison Operators
-- You can use "=", in addition to "=="
- Variable Substitution
-- Variables are substituted when put between single quotes.
fruit$= "apple(s)" count = 3 echo I bought 'count' 'fruit$'.
--- ''NOTE1:'' No double quotations are required to literals in ''print'' (and ''printline'' and ''echo''). Compare
fruit$= "apple(s)" count = 3 echo "I bought" 'count' 'fruit$' "."
--- ''NOTE2:'' If the index in an array is also a variable, a dummy variable is required.
str$[1]= "apple" str$[2]= "orange"
str$[3]= "melon" clearinfo for i from 1 to 3 hop$ = str$[i] ;; a dummy variable printline 'i''tab$''hop$' endfor - Operators -- Comparison Operators --- The "equal" operator is "="((In fact, "==" can also be used as an equal operator.)), identical with the assignment operator. num = 3 str$ = "hoge"
clearinfo
if num = 3
printline 'num'
endif
if str$= "hoge" printline 'str$'
endif
- ''elsif'' (neither ''elseif'' nor ''else if'')
-- Strings
--- Basically, operators used for numerics are also applicable to strings.
--- Interestingly, not only string "addition" (i.e., concatenation), but also string "subtraction" (i.e., truncation) is possible, using "+" and "-".
str1$= "abc" str2$ = "def"
str3$= "c" str4$ = str1$+ str2$
str5$= str1$ - str3$clearinfo printline 'str4$'
printline 'str5$' --- result: abcdef ab -- autoincrement/autodecrement operators (e.g., i++, i--) are not available. - Regular expressions are available in many functions. - Conditional statements -- ''if(-elsif)(-else)-endif'' --- ''NOTE:'' ''elsif'' (or ''elif''), rather than ''elseif'' and ''else if''. - ''form'' and ''endform'' - Loops -- ''for i (from m) to n-endfor'', ''while-endwhile'', ''repeat-until'' --- ''from m'' in ''for'' can be omitted when m = 1. --- It is not possible to decrement the loop counter in ''for''-loop. Use ''while-endwhile'', instead. n = 10 clearinfo while n >= 1 printline 'n' n = n-1 endwhile --- There seems to be no statement which escapes from the loop (''break'' in C, or ''last'' in perl). Instead, you can substitute the loop counter (or something) with a value which does not satisfy the loop condition, for example. clearinfo max = 10 for i to max if i = 5 i = max+1 ; escape from the loop else printline 'i' endif endfor - No double quotations are required in ''print'' (and ''printline'' and ''echo'') - ''goto'' is %%not%% available. &color(red){(modified July 17, 2012)}; # the "goto" command in Praat scripts clearinfo for i to 10 if i mod 2 = 0 goto next endif printline 'i' label next endfor - ''system'' - Subroutines: ''procedure-endproc'' - procedures (or subroutines/functions) - Regular expressions are available in some string functions: -- ''index_regex(str$,regex$)'' -- ''rindex_regex(str$,regex$)'' -- ''replace_regex$(str$,regex$,substr$,n)'' --- ''NOTE:'' Functions which return strings (not numerics) require ''$''.
str$= "abcdefg" regex$ = "b[^e]*"
clearinfo
if index_regex(str$,regex$) <> 0
printline match
else
printline not match
endif
- decrement in ''for''-loop
- Generally, index numbers start at 1 (not 0).
-- Check, for example, the value of the following string functions:
--- ''mid$(str$,index,len)''
str$= "abcdefg" len = length(str$)
clearinfo
for i from 1 to len
hop$= mid$(str$,i,1) printline 'i''tab$''hop$' endfor --- ''index(str$,substr$)'' str$ = "abcdefg"
substr$= "bcd" clearinfo n = index(str$,substr$) printline 'n' - Consider the use of ''Table Object'' when you refer to the contents in text files. - File IO -- ''text$ <'' '''filename''' (read the contents of '''filename''')
file$= "d:\foo.txt" text$ < 'file$' clearinfo print 'text$'
-- ''text$>'' '''filename''' (make a new file and write the contents of text$)
-- ''text$>>'' '''filename''' (add the contents of text$ to '''filename''')
file$= "d:\bar.txt" text$ = "praat scripting is fun." + "'newline$'" text$ > 'file$' #text$ >> 'file\$'
-- ''NOTE:'' Consider the use of ''Object'', rather than "''<''" or "''>''". Very often, Object manipulation is easier and more straightforward. For Object, see below.
//, especially when it contains multiple lines .
- [[Scripting 3.1. Hello world>http://www.fon.hum.uva.nl/praat/manual/Scripting_3_1__Hello_world.html]] (official site)
*** Particular to Praat (1): Object Manipulation [#e34387b5]
RIGHT:&color(red){Those who conquer object manipulation techniques conquer Praat scripting. (The present author)};
*** Particular to Praat (2): form-endform [#z692053a]
- ''form'' and ''endform''
//- ''system''
// # Windows
// system dir > hoge.txt && start notepad.exe hoge.txt
//
//- ''nocheck'' '''command'''
//-- equivalent to ''on error resume next'' in some scripts
//
//- ''exit'' ('''message''') and ''pause'' ('''message''')
** Functions of ScriptEditor [#m4b8a142]
- History
- Run a script
** Other ways to execute a script [#c753a9f4]
- sendpraat.exe
- praatcon.exe (in Windows)
-- scripts can be executed by using sendpraat.exe or praatcon.exe.
-- You may use option -a in praatcon.exe if you handle Unicode. (cf. [[Re: [praat-users] Bug in latest praatcon version (by Paul Boersma)>http://uk.groups.yahoo.com/group/praat-users/message/4950]] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3611851632595062, "perplexity": 18915.792816337213}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573258.74/warc/CC-MAIN-20190918065330-20190918091330-00225.warc.gz"} |
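To illustrate the batch-mode use of praatcon.exe mentioned just above, here is a hedged sketch in Python driving it via subprocess. The script name "analysis.praat" and the "input.wav" argument are placeholders I made up (passing an argument assumes the script declares a matching form), and -a is the Unicode-related switch referenced in the page.

```python
import subprocess

# Batch-run a Praat script with the Windows console binary praatcon.exe.
result = subprocess.run(
    ["praatcon.exe", "-a", "analysis.praat", "input.wav"],
    capture_output=True,
    text=True,
)
# praatcon prints whatever the script writes to the Info window on standard output.
print(result.stdout)
```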
https://math.stackexchange.com/questions/1938100/how-do-i-find-999-mod-1000 | # How do I find $999!$ (mod $1000$)?
I came across the following question in a list of number theory exercises
Find $999!$ (mod $1000$)
I have to admit that I have no idea where to start. My first instinct was to use Wilson's Theorem, but the issue is that $1000$ is not prime.
• Hint: $15!$ is divisible by $1000.$ – bof Sep 23 '16 at 6:53
• If that takes too much work, it is at least clear that $50!$ is divisible by $1000$. – Brian Tung Sep 23 '16 at 6:56
• Goodness I just thought of something simple that may work. $999!$ surely 'contains' a '100' and a '10' inside it. So whatever number we are left with is surely a multiple of $1000$. Hence the residue is $0$. Although a technique very much specific to this question, can somebody verify if this is valid? – Trogdor Sep 23 '16 at 7:08
• $999!$ is indeed divisible by $10!$ and by $100!$, but you need to show that it is also divisible by $10!\cdot100!$, which is a little less trivial (though not that difficult). In short, you can take every decomposition of $1000$ (except for $1\cdot1000$), and use it in order to prove that $1000$ divides $999!$. For example, $1000=500\cdot2$, and $999!$ is divisible by $502!$, which is equal to $500!\cdot501\cdot502$ and is therefore divisible by $500\cdot2$. In order to find the smallest factorial for which this holds, you need to use the prime factorization of $1000$. – barak manos Sep 23 '16 at 9:30
$1000=2\cdot2\cdot2\cdot5\cdot5\cdot5$
$2\cdot2\cdot2$ divides $2\cdot4\cdot6$ without remainder
$5\cdot5\cdot5$ divides $5\cdot10\cdot15$ without remainder
$2\cdot4\cdot6\cdot5\cdot10\cdot15$ divides $15!$ without remainder
$15!$ divides $999!$ without remainder
Therefore $1000$ divides $999!$ without remainder
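Both divisibility claims are easy to confirm mechanically (a quick check in Python; not part of the original thread):

```python
import math

print(math.factorial(15) % 1000)    # 0, so 15! is already a multiple of 1000
print(math.factorial(999) % 1000)   # 0, since 999! = 15! * (16 * 17 * ... * 999)
```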
What are you guys doing? Due to the fact that $500\times2 = 1000$ it follows trivially that $999!$ is congruent to $0 \space \text{(mod 1000)}$. Actually we know for certain that for every $n$ larger than or equal to $500$, $n!$ will be congruent to $0 \space \text{(mod 1000)}$.
Why? Because you can always rewrite the factorial as
$(500\times2)\times(\text{the remaining factors})$
thus,
$0\times(\text{the remaining factors}) = 0 \equiv 0 \space \text{(mod 1000)}$.
Short answer: $999!$ is a multiple of $10\cdot20\cdot30$.
For every occurrence of a multiple of 5 in the number you are building up to 999!, an additional zero becomes part of the ending sequence of digits of that number, never to leave again. Your question is tantamount to asking what the final three digits of 999! are. The first factorial to end in three zeroes, then and forevermore, is 15! . So, all following factorials will have the three zeroes. So, 000. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8741517066955566, "perplexity": 171.8852502165069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145746.24/warc/CC-MAIN-20200223032129-20200223062129-00205.warc.gz"} |