The DAMA/LIBRA set-up (about 250 kg of highly radiopure NaI(Tl)) is running at the Gran Sasso National Laboratory of the I.N.F.N. This experiment is mainly dedicated to the investigation of Dark Matter (DM) particles in the galactic halo by exploiting the model-independent DM annual modulation signature. In its first phase DAMA/LIBRA collected data over 7 annual cycles, corresponding to an exposure of 1.04 ton $\times$ yr (DAMA/LIBRA--phase1). The DAMA/LIBRA--phase1 and the former DAMA/NaI data (cumulative exposure $1.33$ ton $\times$ yr, corresponding to 14 annual cycles) give evidence at 9.3 $\sigma$ C.L. for the presence of DM particles in the galactic halo on the basis of the exploited model-independent signature, using a highly radiopure NaI(Tl) target. No systematic effect or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. After a relevant upgrade at the end of 2010, DAMA/LIBRA--phase2 has been taking data in the new configuration, equipped with new high-quantum-efficiency PMTs. The aim of the upgrade has been to lower the software energy threshold to 1 keV in order to improve the knowledge of corollary aspects of the signal. Here the results, implications and experimental perspectives of the presently running DAMA/LIBRA--phase2 are discussed. | CommonCrawl |
A Celtic knot pattern $4\times4$ looks like this. You can see how it is built up on a grid.
A Celtic knot pattern $8\times8$ looks like this on a grid.
A Celtic knot pattern $6\times6$ looks like this.
Can you build it up? There is a copy of the design to download here.
You can do it by making a set of cards to use on a $6\times6$ grid in a $24$ cm square with four copies of this sheet. There is also a second page with two copies of the design.
Or you could try doing it using this interactivity.
| CommonCrawl |
Password Depot Crack is a powerful and very user-friendly password manager for PC that helps you organize all of your passwords – but also, for instance, information from your credit cards or software licenses. The software provides security for your passwords in three respects: it safely stores your passwords, guarantees secure data use and helps you to create secure passwords. However, Password Depot does not only guarantee security: it also stands for convenient use, high customizability, marked flexibility in terms of interaction with other devices and, last but not least, extreme functional versatility.
Password Depot Key is very convenient to use and offers high customizability, extreme functional versatility, and advanced flexibility in terms of interaction with other devices. Password Depot Keygen allows you to store and share passwords, software licenses, credit card information, documents, and other information easily and securely. You can store the database locally, or you can keep it on internet servers or on cloud services such as Google Drive, OneDrive, Dropbox, etc.
You can not only save your passwords locally, but also on a USB device, mobile phone, in the network or on an FTP server. With Password Depot 2019 Server you can manage password files in the network to use them together in a team. Password Depot 12 Crack is very easy to use and spares you a lot of work. You can configure Password Depot Free Download individually and in this way adapt it precisely to your needs. Password Depot 12 is able to work together with a range of other applications, flexibly and without problems.
In Password Depot, your information is encrypted not merely once but in fact twice, thanks to the algorithm AES or Rijndael 256. In the US, this algorithm is approved for state documents of utmost secrecy!
You can secure your passwords files twice. To start with, you select a master password that has to be entered in order to be able to open the file. Additionally, you can choose to protect your data by means of a key file that must be uploaded to open the file.
After every time the master password is entered incorrectly, the program is locked for three seconds. This renders attacks that rely on the sheer testing of possible passwords – so called "brute-force attacks" – virtually impossible.
Password Depot generates backup copies of your passwords files. The backups may be stored optionally on FTP servers on the Internet (also via SFTP) or on external hard drives. You can individually define the time interval between the backup copies' creation.
All password fields within the program are internally protected against different types of keystroke interception (key logging). This prevents your sensitive data entries from being spied on.
Dealing with your passwords, Password Depot does not leave any traces in your PC's working memory. Therefore, even a hacker sitting directly at your computer and searching through its memory dumps cannot find any passwords.
Password Depot automatically detects any active clipboard viewers and masks its changes to the clipboard; after performing auto-complete, all sensitive data is automatically cleared from the clipboard.
The ultimate protection against keylogging. With this tool you can enter your master password or other confidential information without even touching the keyboard. Password Depot does not simulate keystrokes but instead uses an internal cache, so that the entries can be intercepted neither by software nor by hardware.
When typing on the program's virtual keyboard, you can also set the program to show multiple fake mouse cursors instead of your usual single cursor. This additionally makes it impossible to discern your keyboard activities.
The integrated Password Generator creates virtually uncrackable passwords for you. Thus, in future you will no longer have to use passwords such as "sweetheart", which may be cracked within minutes, but can use e.g. "g\/:1bmVuz/z7ewß5T$x_sb}@<i". Even the latest PCs would take a millennium to crack such a password!
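As a hedged illustration only (this is not Password Depot's actual generator, whose internals are not described here), a cryptographically strong random password of this kind can be produced in a few lines of Python using the standard-library secrets module:

```python
# Minimal sketch of a strong password generator; Password Depot's own
# algorithm is not documented here, so this is only an illustration.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 26) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. a 26-character string like the example above
```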
Let Password Depot check your passwords' quality and security! Intelligent algorithms will peruse your passwords and warn you against 'weak' passwords which you can subsequently replace with the help of the Passwords Generator.
Password Depot protects and manages not only your passwords but also your information from credit cards, EC cards, software licenses and identities. Each information type offers a separate model, with e.g. the credit card window featuring a PIN field.
You may add file attachments containing e.g. additional information to your password entries. These attachments can be opened directly from within Password Depot and may additionally be saved on data storage media.
You can both import password entries from other password managers into Password Depot and export entries from Password Depot. For this purpose the software offers special wizards that facilitate importing and exporting password information.
Password Depot's user interface is similar to that of Windows Explorer. This allows you to effectively navigate through your password lists and to quickly find any password you happen to be searching for.
If you wish, Password Depot automatically fills in your password data into websites opened within the common browsers. This function runs via an internal setting on the one hand, and via so called Browser Add-ons on the other hand.
You can set the program to automatically recognize which password information corresponds to the website you have called up and to then pre-select this password entry for you – as well as, if desired, to finally automatically fill this information into the website.
Thanks to many program options, Password Depot may be individually configured to the slightest detail – not only in view of its external layout, but also regarding its internal functions such as the use of browsers or networks.
You can determine yourself which browsers you would like to use within the program. This way, you are not bound to common browsers such as Firefox or Internet Explorer but can also use Opera, for example.
As a new user, you can work with only a few functions in the Beginner Mode, whereas as an expert you can use all functions in the Expert Mode or can outline your own demands in the Custom Mode.
Password Depot features a separate server model enabling several users to access the same passwords simultaneously. Accessing password files may run either via a local network or via the Internet.
Password Depot supports web services, among them GoogleDrive, OneDrive, Dropbox and Box. In this way, Password Depot enables you to quickly and easily enter the Cloud!
Improved and modernized user interface.
Improved performance on large databases with thousands of entries.
New trial and freeware mode: The trial now works for 30 days without any restrictions. The freeware now has only one limitation: It can only be used with databases with max. 20 entries.
Improved and reworked browser addons.
Reworked password strength estimation with details on how the result was calculated.
Reminder when the beginner mode is in use, with a reference to switching to the expert mode.
New actions for selected folders, such as search, print or export.
Search results, Favorites or any folder can now optionally be exported as well.
New entry type 'PuTTY connection' with support for protected sign-on.
Revised database cleanup dialog box.
The "Advanced Search" now also allows you to search for entries with "History" and "Attachments".
Offline mode: Databases from the Enterprise Server can now be conveniently used in the new offline mode, i.e. when the connection to the server is disconnected. If the connection to the server is renewed, an automatic synchronization takes place.
New option to start the application with a time delay.
Import options from numerous other password managers, as well as from old Password Depot databases themselves.
Revised and improved online help.
Assigned tags for better filtering of database entries.
Options for displaying user name and password of the selected entry in Topbar.
Improved Update Manager: Existing updates can now be installed with a single click.
How To Install Password Depot?
Download Password Depot 12.0.5 from below.
Password Depot 12.0.5 Crack Patch Incl Serial Keygen [Mac+Win] Link is Given Below! | CommonCrawl |
1(a) Compare DFS, BFS, iterative deepening and bidirectional search.
1(c) Explain modus ponens with a suitable example.
1(d) Draw and explain the general model of a learning agent.
1(e) Explain the limitations of propositional logic with a suitable example.
2(a) Explain hill climbing and simulated annealing with suitable examples.
2(b) Explain goal-based and utility-based agents with block diagrams.
3(a) Consider the given game tree. Apply $\alpha$-$\beta$ pruning, where $\square$ denotes a max node and $\circ$ a min node.
3(b) Explain rote learning and inductive learning with suitable examples.
4(a) Consider the following sentences.
Prove that Tom is mortal using modus ponens and resolution.
4(b) Draw and explain the expert system architecture.
5(a) Consider the given tree. Apply the breadth-first search algorithm and write the order in which the nodes are expanded.
5(b) Write the planning algorithm for the spare tyre problem.
Q6) Write short notes on any four. | CommonCrawl |
As we are looking for finite-energy solutions to the field equations, the SU(2) gauge field must tend to a pure gauge and the Higgs field to its vacuum value. This means we can map the sphere at infinity $S^2_\infty$ into the Higgs vacuum manifold $SU(2) \sim S^3$.
$$S^1\wedge S^2_\infty\sim S^3_\infty$$ and the map $$S^3_\infty\rightarrow S^3$$ where the target space is the Higgs vacuum manifold three-sphere, now leads to nontrivial topology.
1) If a "point" in configuration space corresponds to some gauge and Higgs field configuration (in a particular gauge), then a path in this space must be a series of continuously varying configurations with respect to some parameter. What is this parameter? Do two neighboring points along this path somehow correspond to a slightly varied field in physical space?
2) Similarly, a "path" in the space of mappings of $$S^2_\infty\rightarrow S^3\sim SU(2)$$ must mean a set of continuously varying maps from spatial infinity to the Higgs vacuum manifold. What exactly is parametrizing this path?
3) How does a loop in configuration space induce a loop in the space of the above maps?
4) Why does considering a loop in configuration space mean considering $$S^1\times S^2_\infty$$?
@DavidBarMoshe: Thanks for the input. The issue I have with Manton's book is that given my current level of knowledge, it usually creates more questions than answers! The toy model he considers there, i.e sphalerons on a circle, is basically what I've been trying to visualize for the past few weeks. I may be wrong, but I think the heart of my problem lies in my inability to form a decent visualization of the configuration space of a field. I'm still eagerly looking forward to your response though. | CommonCrawl |
Given $1\le p<\infty$ and $k$, what is the minimal $n$ such that $\ell_p^n$ almost isometrically contains all $k$-dimensional subspaces of $L_p$? I'll survey what is known about this problem and then concentrate on a recent result, basically solving the problem for even $p$. The proof uses a recent result of Batson, Spielman and Srivastava. | CommonCrawl |
Base-10 Blocks to a Billion!
In case any of my readers do not meet the above criterion, these blue (sometime orange) plastic blocks are manipulatives used in classrooms to help teach kids all sorts of mathematical concepts, from addition and subtraction to place value and volume. The smallest discrete unit is $1 cm^3$ and represents the one's place: 10 of these singletons form a stick (sometimes called a "long") of $10 cm^3$ that represents the ten's place, 10 "longs" make a "flat" of $100 cm^3$ representing the hundred's place, and 10 "flats" make the largest commercially sold denomination, the big cube of $1000 cm^3$ (called a "block") representing the thousands place.
One child asked me the other day how big $1,000,000$ would be using these blocks. I really wanted to show him, but we only have one "block" (the big $1000 cm^3$ cube) and saying "1000 times as big as this" really doesn't cut it. I told him to imagine that we now take the big 1000-cube and pretend it's the unit cube. First, we need to build a "long" from these, so how many 1000-cubes do we need? "10." Good, now, 10 of these 1000-cubes is how many units total? He said, "10 thousand?" I said, yes, exactly! Now that we have a long made of 1000-cubes, how many longs do we use to make a flat? He said, "10." OK, great! Each flat is how many longs? He said, "10." And each long is how many 1000-cubes? "10." So 10 tens makes how many 1000-cubes? He said, "100 1000-cubes!" Awesome! How many units is that? He said (after a pause) "100,000 units". Now how many flats make a block? "10". Good, so we just need ten of these flats; each flat is worth 100 thousand, so 10 of them is how much. He said, after another pause "10 hundred thousand?" I said well, when you have 10 hundred thousands, what do you actually have? You know 700 thousand, 800 thousand, 900 thousand… right? What's next? "1 million!" he said triumphantly.
Here's a 174 cm (~5'7") tall figure to scale with the $1 cm^3$ base ten block. The million cube (1 cubic meter) is only 100 cm tall ($10^6 = 100^3 = 100\times 100 \times 100 = 1,000,000$), or about 3.3 feet (1 meter). So, not quite as tall as they are (WolframAlpha's growth curves show that the average 7-year-old male is already ~40 inches tall). Still, "as big as you are" definitely holds, per unit volume! The average volume of an adult human, measured by water displacement, is $66,400 cm^3$… about 1/15th (0.066) of the million cube!
This shows the billion cube on the far left, with the same human figure to scale (the picture above it is the cut-out indicated by the red rectangular border). Woah! The billion cube is enormous! It's 1000 cubic meters, or a cubic decameter. Now it's 1,000 cm tall, or as tall as the highest Olympic diving platform (10 m, ~33 ft), because $(10^3)^3 = 10^9 = 1000^3 = 1000\times 1000 \times 1000 = 1,000,000,000$.
For comparison, if you drilled a hole in the billion cube and filled it with soda, you would need exactly 1,000,000 one-liter bottles. If the average school classroom is $25ft \times 25ft \times 10ft$ (seems reasonable), then that's $6250 ft^3$ in volume, which is ~$176,980,291 cm^3$. Since that goes into a billion about 5.6 times, you could fit more than five typical classrooms in the volume of the billion cube! If only they sold them that big!
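A quick check of these numbers (my own sketch, not part of the original post), in Python:

```python
# Verify the unit conversions used above.
CM_PER_FT = 30.48
classroom_cm3 = 25 * 25 * 10 * CM_PER_FT**3     # 25ft x 25ft x 10ft in cm^3
print(round(classroom_cm3))                     # ~176,980,291 cm^3
print(1_000_000_000 / classroom_cm3)            # ~5.65 classrooms per billion cube
print(10**9 // 1000)                            # 1,000,000 one-liter bottles
for power in (0, 3, 6, 9):                      # side length of a 10**power cm^3 cube
    print(f"10^{power} cm^3 cube is {10 ** (power // 3)} cm on a side")
```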
That's the neat thing about exponents; all we've really done here is taken 1 thing, a little plastic cube $1 cm^3$, multiplied it by 10 just 9 times, and boom, $10^9 = 1 \ billion \ cm^3$. | CommonCrawl |
Abstract: We estimate the potential energy for a system of three static gluons in Lattice QCD. This is relevant for the different models of three-body glueballs have been proposed in the literature, either for gluons with a constituent mass, or for massless ones. A Wilson loop adequate to the static hybrid three-body system is developed. We study different spacial geometries, to compare the starfish model with the triangle model, for the three-gluon potential. We also study two different colour structures, symmetric and antisymmetric, and compare the respective static potentials. A first simulation is performed in a $24^3 \times 48$ periodic Lattice, with $\beta=6.2$ and $a \sim 0.072$ fm. | CommonCrawl |
You are given an array $A[1...N]$ with integers in decreasing order and a list of pairs $(a_1, b_1)$, $(a_2, b_2), \ldots, (a_K, b_K)$. You wish to sort the array $A$ in increasing order; each turn you choose an $i$ ($i$ can be chosen multiple times) and swap $A[a_i]$ with $A[b_i]$. Determine whether this is possible.
The first line contains two integers, representing $N$ and $K$ respectively ($1 \leq N, K \leq 10^6$). The following $K$ lines each contain two integers, representing $a_i$ and $b_i$ respectively ($1 \leq a_i < b_i \leq N$).
Output "Yes" if it is possible to sort the array in increasing order, "No" otherwise.
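A sketch of one standard approach (my own; the source gives only the problem statement): treat the $K$ pairs as edges of a graph on positions $1,\dots,N$, since swaps along the edges of a connected component can realize any permutation of the entries within that component. Because $A$ is decreasing, the entry at position $i$ must end up at position $N+1-i$, which is possible exactly when $i$ and $N+1-i$ lie in the same component. A union-find sketch in Python:

```python
import sys

def main():
    data = sys.stdin.buffer.read().split()
    n, k = int(data[0]), int(data[1])
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    idx = 2
    for _ in range(k):
        a, b = int(data[idx]), int(data[idx + 1])
        idx += 2
        parent[find(a)] = find(b)            # union the two positions

    # Position i must be able to exchange its value with position n+1-i.
    ok = all(find(i) == find(n + 1 - i) for i in range(1, n // 2 + 1))
    print("Yes" if ok else "No")

if __name__ == "__main__":
    main()
```

With path halving the whole check runs comfortably within the $10^6$ bounds on $N$ and $K$. | CommonCrawl |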
Abstract: A regular topological space $X$ is defined to be a $\mathfrak P_0$-space if it has a countable Pytkeev network. A network $\mathcal N$ for $X$ is called a Pytkeev network if for any point $x\in X$, neighborhood $O_x\subset X$ of $x$ and subset $A\subset X$ accumulating at $x$ there is a set $N\in\mathcal N$ such that $N\subset O_x$ and $N\cap A$ is infinite. The class of $\mathfrak P_0$-spaces contains all metrizable separable spaces and is (properly) contained in Michael's class of $\aleph_0$-spaces. It is closed under many topological operations: taking subspaces, countable Tychonoff products, small countable box-products, countable direct limits, and hyperspaces of compact subsets. For an $\aleph_0$-space $X$ and a $\mathfrak P_0$-space $Y$ the function space $C_k(X,Y)$ endowed with the compact-open topology is a $\mathfrak P_0$-space. For any sequential $\aleph_0$-space $X$ the free abelian topological group $A(X)$ and the free locally convex linear topological space $L(X)$ are both $\mathfrak P_0$-spaces. A sequential space is a $\mathfrak P_0$-space if and only if it is an $\aleph_0$-space. A topological space is metrizable and separable if and only if it is a $\mathfrak P_0$-space with countable fan tightness. | CommonCrawl |
Information-theoretic approaches provide methods for model selection and (multi)model inference that differ quite a bit from more traditional methods based on null hypothesis testing (e.g., Anderson, 2007; Burnham & Anderson, 2002). These methods can also be used in the meta-analytic context when model fitting is based on likelihood methods. Below, I illustrate how to use the metafor package in combination with the glmulti package that provides the necessary functionality for model selection and multimodel inference using an information-theoretic approach.
Variable yi contains the effect size estimates (standardized mean differences) and vi the corresponding sampling variances. There are 48 rows of data in this dataset.
The dataset now includes 41 rows of data (nrow(dat)), so we have lost 7 data points for the analyses. One could consider methods for imputation to avoid this problem, but this would be the topic for another day. So, for now, we will proceed with the analysis of the 41 estimates.
With level = 1, we stick to models with main effects only. This implies that there are $2^7 = 128$ possible models in the candidate set to consider. Since I want to keep the results for all these models (the default is to only keep up to 100 model fits), I set confsetsize=128 (or I could have set this to some very large value). With crit="aicc", we select the information criterion (in this example: the AICc or corrected AIC) that we would like to compute for each model and that should be used for model selection and multimodel inference. For more information about the AIC (and AICc), see, for example, the entry for the Akaike Information Criterion on Wikipedia. As the function runs, you should receive information about the progress of the model fitting. Fitting the 128 models should only take a few seconds.
"yi ~ 1 + imag"
10 models within 2 IC units.
77 models to reach 95% of evidence weight.
The horizontal red line differentiates between models whose AICc is less versus more than 2 units away from that of the "best" model (i.e., the model with the lowest AICc). The output above shows that there are 10 models whose AICc is less than 2 units away from that of the best model. Sometimes this is taken as a cutoff, so that models with values more than 2 units away are considered substantially less plausible than those with AICc values closer to that of the best model. However, we should not get too hung up about such (somewhat arbitrary) divisions (and there are critiques of this rule; e.g., Anderson, 2007).
We see that the "best" model is the one that only includes imag as a moderator. The second best includes imag and meta. And so on. The values under weights are the model weights (also called "Akaike weights"). From an information-theoretic perspective, the Akaike weight for a particular model can be regarded as the probability that the model is the best model (in a Kullback-Leibler sense of minimizing the loss of information when approximating full reality by a fitted model). So, while the "best" model has the highest weight/probability, its weight in this example is not substantially larger than that of the second model (and also the third, fourth, and so on). So, we shouldn't be all too certain here that we have really found the best model. Several models are almost equally plausible (in other examples, one or two models may carry most of the weight, but not here).
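For readers unfamiliar with how these weights arise, here is a small language-agnostic illustration (shown in Python rather than R, with made-up AICc values, not the ones from this analysis):

```python
# Akaike weights from a set of AICc values (hypothetical numbers).
import math

aicc = [24.91, 25.63, 26.10, 26.32, 26.57]        # hypothetical AICc values
delta = [a - min(aicc) for a in aicc]             # differences from the best model
rel_lik = [math.exp(-d / 2) for d in delta]       # relative likelihoods
weights = [r / sum(rel_lik) for r in rel_lik]     # Akaike weights, summing to 1
print([round(w, 3) for w in weights])
```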
And here, we see that imag is indeed a significant predictor of the treatment effect (and since it is a dummy variable, it can change the treatment effect from $.1439$ to $.1439 + .4437 = .5876$), which is also practically relevant (for standardized mean differences, some would interpret this as changing a small effect into at least a medium-sized one). However, now I am starting to mix the information-theoretic approach with classical null hypothesis testing, and I will probably go to hell for all eternity if I do so. Also, other models in the candidate set have model probabilities that are almost as large as the one for this model, so why only focus on this one model?
The importance value for a particular predictor is equal to the sum of the weights/probabilities for the models in which the variable appears. So, a variable that shows up in lots of models with large weights will receive a high importance value. In that sense, these values can be regarded as the overall support for each variable across all models in the candidate set. The vertical red line is drawn at .80, which is sometimes used as a cutoff to differentiate between important and not so important variables, but this is again a more or less arbitrary division.
This method properly works with models that are fit with or without the Knapp and Hartung method (the default for rma() is test="z", but this could be set to test="knha", in which case standard errors are computed in a slightly different way, and tests and confidence intervals are based on the t-distribution).
I rounded the results to 4 digits to make the results easier to interpret. Note that the table again includes the importance values. In addition, we get unconditional estimates of the model coefficients (first column). These are model-averaged parameter estimates, which are weighted averages of the model coefficients across the various models (with weights equal to the model probabilities). These values are called "unconditional" as they are not conditional on any one model (but they are still conditional on the 128 models that we have fitted to these data; but not as conditional as fitting a single model and then making all inferences conditional on that one single model). Moreover, we get estimates of the unconditional variances of these model-averaged values. These variance estimates take two sources of uncertainty into account: (1) uncertainty within a given model (i.e., the standard error of a particular model coefficient shown in the output when fitting a model; as an example, see the output from the "best" model shown earlier) and (2) uncertainty with respect to which model is actually the best approximation to reality (so this source of variability examines how much the size of a model coefficient varies across the set of candidate models). The model-averaged parameter estimates and the unconditional variances can be used for multimodel inference. For example, adding and subtracting the values in the last column from the model-averaged parameter estimates yields approximate 95% confidence intervals for each coefficient that are based not on any one model, but all models in the candidate set.
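For reference (the formulas are not shown in the post itself), the model-averaged estimate and its unconditional variance are usually written, following Burnham & Anderson (2002), as
$$\bar{\hat\beta} \;=\; \sum_{i} w_i\,\hat\beta_i, \qquad \widehat{\operatorname{Var}}\bigl(\bar{\hat\beta}\bigr) \;=\; \Biggl[\sum_{i} w_i \sqrt{\widehat{\operatorname{Var}}\bigl(\hat\beta_i \mid M_i\bigr) + \bigl(\hat\beta_i - \bar{\hat\beta}\bigr)^{2}}\,\Biggr]^{2},$$
where $w_i$ is the weight of model $M_i$; the two terms under the square root correspond to the within-model and between-model sources of uncertainty described above.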
We can also use multimodel methods for computing a predicted value and corresponding confidence interval. Again, we do not want to base our inference on a single model, but all models in the candidate set. Doing so requires a bit more manual work, as I have not (yet) found a way to use the predict() function from the glmulti package in combination with metafor for this purpose. So, we have to loop through all models, compute the predicted value based on each model, and then we can compute a weighted average (using the model weights) of the predicted values across all models.
TASK: Diagnostic of candidate set.
Your candidate set contains 268435456 models.
So, over $2 \times 10^8$ possible models. Fitting all of these models would not only test our patience (and would be a waste of valuable CPU cycles), it would also be a pointless exercise (even fitting the 128 models above could be critiqued by some as a mindless hunting expedition – although if one does not get too fixated on the best model, but considers all the models in the set as part of a multimodel inference approach, this critique loses some of its force). So, I won't consider this any further in this example.
The same principle can of course be applied when fitting other types of models, such as those that can be fitted with the rma.mv() or rma.glmm() functions. One just has to write an appropriate rma.glmulti function and, for multimodel inference, a corresponding getfit method.
For multivariate/multilevel models fitted with the rma.mv() function, one can also consider model selection with respect to the random effects structure. Making this work would require a bit more work. Time permitting, I might write up an example illustrating this at some point in the future.
Anderson, D. R. (2007). Model based inference in the life sciences: A primer on evidence. New York: Springer.
Bangert-Drowns, R. L., Hurley, M. M., & Wilkinson, B. (2004). The effects of school-based writing-to-learn interventions on academic achievement: A meta-analysis. Review of Educational Research, 74(1), 29–58.
Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach (2nd ed.). New York: Springer. | CommonCrawl |
"Our subject starts with homology, homomorphisms, and tensors. Homology provides an algebraic 'picture' of topological spaces, assigning to each space X a family of abelian groups $ H_0(X), \ldots, H_n(X) $, to each continuous map $ f : X \rightarrow Y $ a family of group homomorphisms $ f_n : H_n(X) \rightarrow H_n(Y) $. Properties of the space or the map can often be effectively found from properties of the groups $ H_n $ or the homomorphisms $ f_n $. A similar process associates homology groups to other mathematical objects; for example, to a group $ \Pi $ or to an associative algebra $ \Lambda $. Homology in all such cases is our concern."
The symbol $ \varkappa $ is an alternate way of drawing a kappa. (It's \varkappa in TeX).
The symbol $ \varrho $ is an alternate way of drawing a rho. (It's \varrho in TeX).
Example 7: The segments $q \times I$ and $p \times I$ should be oriented upwards.
this means more exactly that $ \mu_0 y $ is that function on $ \Pi $ to Z for which...: i.e., if we regard elements of $ \Pi(Z) $ as functions $ \Pi \rightarrow Z $ sending $ x \mapsto m(x) $.
The motivation for studying $ f_a = xa - a $ is to look at fixed points of modules. | CommonCrawl |
There are a large number of different definitions used for sample quantiles in statistical computer packages. Often within the same package one definition will be used to compute a quantile explicitly while other definitions may be used when producing a boxplot, a probability plot or a QQ-plot. We compare the most commonly implemented sample quantile definitions by writing them in a common notation and investigating their motivation and some of their properties. We argue that there is a need to adopt a standard definition for sample quantiles so that the same answers are produced by different packages and within each package. We conclude by recommending that the median-unbiased estimator is used since it has most of the desirable properties of a quantile estimator and can be defined independently of the underlying distribution.
Keywords: sample quantiles, percentiles, quartiles, statistical computer packages.
R code: The quantile() function in R from version 2.0.0 onwards implements all the methods in this paper.
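Outside R, a hedged sketch of the same idea: recent versions of NumPy (1.22 or later, if I recall the version correctly) expose the same family of definitions, and the recommended median-unbiased estimator (definition 8) is available as follows.

```python
# Median-unbiased sample quantiles (Hyndman-Fan definition 8) via NumPy.
import numpy as np

x = np.array([1.2, 3.4, 2.2, 5.9, 4.1, 7.3, 0.8, 6.5])
q = np.quantile(x, [0.25, 0.5, 0.75], method="median_unbiased")
print(q)
```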
Table 1, p361. P2 should have lower bound equal to $\lfloor np\rfloor$.
p363, left column. P2 is satisfied if and only if $\alpha\ge0$ and $\beta\le1$.
Thanks to Eric Langford and Alan Dorfman for pointing out the errors. 8 May 2007.
For further discussion, see Sample quantiles 20 years later on Hyndsight. | CommonCrawl |
Is there a theory of "hybrid" geometries?
Standard 2D geometries, elliptic, Euclidean and hyperbolic, can all be derived from the same basic idea: start with projective geometry formed by lines and planes through the origin in $R^3$ and then put some quadric in its way. $x^2 + y^2 + z^2 = 1$ for elliptic, $z^2 = 1$ for Euclidean, and $-x^2 - y^2 + z^2 = 1$ for hyperbolic. These can be rewritten as $1x^2 + 1y^2 + z^2 = 1$, $0x^2 + 0y^2 + z^2 = 1$, and $(-1)x^2 + (-1)y^2 + z^2 = 1$.
I tried to vary the parameters for the projection surface and found some possibilities for hybrid geometries that are no longer isotropic because they contain non-isomorphic lines.
Cylinder $x^2 + z^2 = 1$ leads to a hybrid of elliptic and Euclidean geometry, hyperbolic cylinder $x^2 - z^2 = 1$ to a hybrid of Euclidean and hyperbolic geometry and one-sheet hyperboloid $x^2 - y^2 + z^2 = 1$ to a hybrid of elliptic and hyperbolic geometry.
If we consider elliptic geometry to contain no ideal points, Euclidean geometry to contain an ideal line (line at infinity), and hyperbolic geometry to contain an ideal conic (the absolute), then these three hybrids have a single ideal point, two ideal lines that intersect, and an ideal conic, respectively. The difference between the last one and hyperbolic geometry is that hyperbolic geometry has real points inside the conic and ultra-ideal points outside of it, while the elliptic/hyperbolic hybrid is reversed: its real points are outside the conic.
This is as far as I got -- there should be a way to impose metric on these geometries so that straight lines would be shortest distances between points. I'd be interested in knowing whether someone has studied this further?
I would place your question strongly in the realm of projective geometry. There you'd more likely embed the plane at some offset to the origin (conventionally at $z=1$), and instead have your cone have its apex at the origin. This gives you homogeneous coordinates, and also homogeneous equations for the conics. I believe the question you are asking is equivalent to one in that setup.
Using a projective formalism, and following Perspectives on Projective Geometry and lectures by its author J. Richter-Gebert, I'd summarize what you describe under Cayley–Klein geometries. Your definition is not capturing the complete picture. To measure angles, you need not only the primal conic, as a set of incident points, but also the dual conic, as a set of tangent lines. For non-degenerate conics, one can simply compute one from the other. But for conics that factor into a single component with multiplicity two, as in the case of $z^2=0$, you need to explicitly state the dual conic. For Euclidean geometry, that conic consists of the ideal circle points. In standard coordinates (homogenized using $(x,y)\mapsto[x:y:1]$) those ideal circle points would be $[1:\pm i:0]$. So they are two points on the line at infinity with complex coordinates. They are incident with every Euclidean circle.
The general recipe for distance measurements in any of these is the following: given two points, draw the line through them. Intersect that line with the fundamental conic; these intersections might be complex. Compute the cross ratio of these four points, then take the natural logarithm. Apply a cosmetic factor to match conventions, in particular to get real values in common cases. For angles you do the dual procedure: find the point of intersection, then draw tangents to the fundamental conic and compute the cross ratio of these four lines.
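For concreteness, one common normalization of this recipe (my own addition, with constants chosen as in the Beltrami–Klein model; the book may use slightly different conventions) is
$$d(P,Q) \;=\; \frac{1}{2}\,\Bigl|\ln (P,Q;X,Y)\Bigr|,\qquad (P,Q;X,Y)\;=\;\frac{|PX|\,|QY|}{|QX|\,|PY|},$$
where $X,Y$ are the (possibly complex) intersections of the line $PQ$ with the fundamental conic. The dual statement for angles is Laguerre's formula: the Euclidean angle between lines $l,m$ is $\frac{1}{2i}\ln (l,m;x,y)$, where $x,y$ are the two isotropic lines through $l\cap m$, i.e. the lines joining that point to the ideal circle points $[1:\pm i:0]$.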
Chapter 20.6 of Perspectives classifies geometries based on the signature of the fundamental conic, given as a primal-dual pair.
1. $(+,+,+),(+,+,+)$ is a purely complex nondegenerate conic leading to elliptic geometry.
2. $(+,+,-),(+,+,-)$ is a real nondegenerate conic leading to hyperbolic geometry.
3. $(+,+,0),(+,0,0)$ is a pair of complex lines intersecting in a real point of multiplicity two. This is dual Euclidean geometry, i.e. Euclidean geometry but with the roles of lines and points swapped.
4. $(+,-,0),(+,0,0)$ is a pair of real lines intersecting in a point of multiplicity two. You can consider this the dual of case 6.
5. $(+,0,0),(+,+,0)$ is a double line with two designated complex conjugate points, leading to Euclidean geometry.
6. $(+,0,0),(+,-,0)$ is a double line with two designated real points. This is sometimes called pseudo-Euclidean geometry or Minkowski geometry. It is also relevant as the geometry of special relativity, since angles add as relativistic speeds do.
7. $(+,0,+),(+,0,0)$ is a double line with a designated double point. Perspectives calls this Galilean geometry.
For the case 2, it is important to note that you only get traditional hyperbolic geometry if you restrict yourself to points on the inside. Inside in this sense is the set of real points through which every real line will intersect the fundamental conic in two distinct points. In this sense, every conic is equivalent to every other, but the geometry on the outside is different from that on the inside.
You can also classify based on how many real intersections / tangents you get for the measuring process described above. This describes distance and angle measurement each as elliptic, parabolic or hyperbolic. This leads to $3\times3=9$ cases. But three of them are covered by case 2 above, since you get no real tangents for points on the inside, and depending on where the line joining points on the outside goes you may get real intersections or not. For comparison, Euclidean geometry has elliptic angle measure and parabolic distance.
All right, so what I've learned is that I essentially approached Cayley-Klein geometries. My approach is a layman's and pretty combinatorial, but I can eventually replicate the geometric structures. The most useful thing I got is a deeper understanding of the dualities between various geometries.
Elliptic geometry (my symbol EE), which is self-dual and whose absolute set is empty.
Euclidean geometry and dual Euclidean geometry (symbols PP;e and EP). Euclidean geometry has line as an absolute set, with no special points on that line. Dual Euclidean geometry has a point as an absolute set.
Hyperbolic geometry and dual hyperbolic geometry (symbols HH and EH). They have a conic as an absolute set and they differ in whether the real points are inside or outside of the conic. They could be considered self-dual or dual to each other. Dual hyperbolic geometry has apparently unavoidable imaginary distances, which means it's more of a spacetime geometry than planar geometry.
Galilean geometry (symbol PP;p). Absolute set is a line with one "lightlike" point. This is basically a spacetime geometry in a world with infinite speed of light. It's self-dual.
Minkowski geometry and its dual (symbols PP;h and PH). Minkowski geometry has absolute set formed by a line with two lightlike points that divide the line into spacelike and timelike portion; the dual has absolute set formed by two lines and their intersection point.
EPP has a line as an absolute set, and that line can have an additional structure in three ways. PPP has a plane as an absolute set, and there are six ways to introduce lightlike structures into that plane. PPP;hh, for example, leads to a Minkowski space with two spacelike dimensions and one timelike dimension; each point will have lightlike lines through it that intersect the ideal plane in its lightlike conic, which naturally forms the light cone.
However, what of PPH geometry? I'm not quite sure whether it can be subdivided or not.
| CommonCrawl |
The objective of this research was to provide a vaccine for the control of brucellosis in reindeer that allows serologic discrimination between vaccinated and infected animals. Three vaccines were tested: (1) Brucella suis 1, (2) B. suis 3, and (3) a rough mutant of the infective strain, B. suis 4. All were heat-killed and prepared in Freund's incomplete adjuvant. Each vaccine was administered to four animals. All vaccines stimulated the production of high levels of antibody in Rangifer that were maintained for the 483-day experiment. Significant delayed-type hypersensitivity reactions were seen in all vaccinated Rangifer. Both B. suis 1 and B. suis 3 vaccines allowed serologic discrimination between vaccinated and infected Rangifer. This was accomplished by means of an indirect ELISA (enzyme-linked immunosorbent assay). This test used whole-cell B. melitensis and B. abortus as A- and M-dominant antigens. Distinction could be made between vaccinated and infected reindeer based on a percentage difference in spectrophotometric absorbance values obtained with these antigens. The B. suis 3 vaccine provided the best discrimination. Eighty-nine percent of 117 reindeer were correctly classified as either B. suis 3-vaccinated or B. suis 4-infected. Discrimination between vaccinated and infected reindeer was sufficient to allow assessment of the prevalence of brucellosis in vaccinated herds. In addition, the ELISA was more sensitive than standard agglutination tests in identifying reindeer with exposure to B. suis. The B. suis 3 vaccine was further evaluated in a challenge of 7 vaccinated reindeer. The vaccinated group consisted of 5 pregnant adults and two 8-month-old female calves. These reindeer were challenged with $3.16\times10^7$ colony-forming units of B. suis 4 at 63 days post-vaccination. Five pregnant adults and 1 female calf served as experimental controls. B. suis 4 was isolated from 3 of 7 vaccinated reindeer (43%) at the time of necropsy. B. suis 4 was isolated from the aborted fetus of 1 of the infected vaccinates. Another infected vaccinate bore a healthy calf from which B. suis 4 could not be isolated. All control reindeer were infected and all 5 adults aborted. B. suis 4 was isolated from all 5 fetuses. The B. suis 3 vaccine provided significant protection against infection and abortion in reindeer challenged with B. suis 4. | CommonCrawl |
In the following two propositions we will see the connection between a linear map $T$ being injective/surjective and the corresponding adjoint map $T^*$ being surjective/injective.
Proposition 1: Let $V$ and $W$ be finite-dimensional nonzero inner product spaces and let $T \in \mathcal L (V, W)$. Then $T$ is injective if and only if $T^*$ is surjective.
Proof: $\Rightarrow$ Suppose that $T$ is injective. We first note that $T^* : W \to V$.
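One standard way to complete the argument (not spelled out above) uses the identity $\operatorname{null} T = (\operatorname{range} T^*)^{\perp}$ together with finite-dimensionality:
$$T \text{ injective} \iff \operatorname{null} T = \{0\} \iff (\operatorname{range} T^*)^{\perp} = \{0\} \iff \operatorname{range} T^* = V \iff T^* \text{ surjective},$$
and reading the chain of equivalences in both directions gives the converse as well. $\blacksquare$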
Proposition 2: Let $V$ and $W$ be finite-dimensional nonzero inner product spaces and let $T \in \mathcal L (V, W)$. Then $T$ is surjective if and only if $T^*$ is injective. | CommonCrawl |
Plotting curves of various types, such as straight lines, functions, parametric functions and polygons of various types form the basis of most graphic functionalities.
To understand how to deal with plotting curves, first we need to understand how a screen works. In most cases, the interaction between the computer itself and the screen is done via an array representing individual pixels. If we have to draw on an area of width $w$ and height $h$, this will be a $w \times h$ array. Most commonly, the $0$ of this array corresponds to the top left of the screen, with incrementing values going first to the right, up to the value of the pixel $w - 1$, corresponding to the top right of the screen. The following pixel $w$ will then be the pixel right underneath pixel $0$. This convention is a leftover from the era where the pixels on the physical screen were displayed in that order because that was the direction of the scanning by the electron beams of a CRT monitor.
This suggests a very simple method to plot both horizontal and vertical lines, by simply iterating over increasing values of the array, either by increments of $1$ (for a horizontal line), or increments of $w$ (for vertical lines).
A problem appears: when the pixel index leaves the row $[n \times w, n \times w + (w - 1)]$, a horizontal line will continue on the next or previous row, or even run beyond the array itself. The same thing will happen with vertical lines, which will always go beyond the array if they are too long. Since this can cause an out-of-bounds error, as well as not being what we want to display, we'll need to either make sure that the line is of proper length beforehand, or clamp the boundaries when drawing.
In this case the start of the line cannot exceed any boundary, but it may also be necessary to check it in some circumstances. Also, if the pixel integer is unsigned, it may be useful to check for any underflow, to avoid a line going from $1$ to $-1$ turning into a line from $1$ to $255$, especially since, with such code, the for loop would never terminate.
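A minimal sketch of these two primitives (my own illustration in Python; the original article's own code listing is not reproduced here): the screen is a flat list of $w \times h$ pixels, with pixel $(x, y)$ stored at index $y \times w + x$, and both endpoints clamped to the drawing area.

```python
def hline(buf, w, h, x0, x1, y, color):
    """Horizontal line from (x0, y) to (x1, y), clamped to the row."""
    x0 = max(0, min(x0, w - 1))
    x1 = max(0, min(x1, w - 1))
    row = y * w
    for x in range(x0, x1 + 1):        # step of 1 pixel to the right
        buf[row + x] = color

def vline(buf, w, h, x, y0, y1, color):
    """Vertical line from (x, y0) to (x, y1), clamped to the column."""
    y0 = max(0, min(y0, h - 1))
    y1 = max(0, min(y1, h - 1))
    for y in range(y0, y1 + 1):        # step of w pixels in the flat buffer
        buf[y * w + x] = color

# Usage: a 320x200 framebuffer initialised to colour 0.
w, h = 320, 200
buf = [0] * (w * h)
hline(buf, w, h, 10, 300, 20, 255)
vline(buf, w, h, 50, 5, 150, 255)
```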
The other class of line that remains possible to draw very simply with a single instruction in a loop is a diagonal line. This is simply done by combining the two methods, and every turn will increment the pixel by either $w + 1$, $w-1$, $-w+1$ or $-w-1$, depending on the direction of the diagonal.
With those three types of lines, we can already draw on screen many shapes, the most important one being a rectangle. This is quite practical as it is a very common shape in graphics and this is a very fast algorithm compared to more general line drawing algorithms. Rectangles can be simply implemented by the simultaneous plotting of two horizontal and two vertical lines.
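Continuing the sketch above (same assumptions), a rectangle outline is then just two calls to each primitive:

```python
def rect(buf, w, h, x0, y0, x1, y1, color):
    """Axis-aligned rectangle outline built from the line primitives above."""
    hline(buf, w, h, x0, x1, y0, color)   # top edge
    hline(buf, w, h, x0, x1, y1, color)   # bottom edge
    vline(buf, w, h, x0, y0, y1, color)   # left edge
    vline(buf, w, h, x1, y0, y1, color)   # right edge
```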
The other convex polygons possible with these are diamonds (or any variety of rectangle rotated by $45°$), isosceles right triangles and octagons.
While it is possible to implement by hand many more types of lines, it will be more interesting beyond those basic cases to get a generic algorithm to plot any line.
The most obvious method to plot a curve, including lines, would be to treat it as a basic mathematical function, where we plot a point at the coordinate $(x, f(x))$. With this method we can try to plot a variety of curves beyond straight lines, although one important omission is that, as with mathematical functions, we cannot plot anything with a vertical line. This will be a major issue with this approach, as we will see.
Up to some eventual rotations or translations (having the $0$ at the top left of the screen isn't necessarily the most convenient). | CommonCrawl |
You are standing on a roof in Israel at the top of the Ladder. You notice that there is a small package taped to the top rung of the ladder. You pick it up and take it to your hotel.
The image above has two prime numbers associated with it. Multiply them together and you have the key for the cryptic-clue that comes with this paper.
Where is the next clue, and which country do you have to go to? Also, where and what is the "Trail of Tears"?
I figured out the country and have a guess to the location.
The first thing I noticed is the picture is 397 x 431 px, which are both prime numbers. This gives us the key 171107.
The cipher is referring to the country Pitcairn Islands.
Google maps seems to recognize a Trail of Tears on Pitcairn Island (including a street view), but I can't seem to find any other information about it. It seems just to be the name of a trail on the island so maybe this is the exact location.
There is a red dot (a square containing 4 smaller squares) on the blank paper, which can be replicated by clicking on "Red" in MS Paint and clicking once (not dragging). The RGB for this colour is 237-28-36 (HEX#ED1C24). However, $237 = 79 \times 3$; $28 = 2 \times 2 \times 7$; and $36 = 2 \times 2 \times 3 \times 3$. None of the RGB numbers are prime.
In another vein, @Reibello has suggested that the associated prime numbers are the height and width of the image. The image is $397 px \times 431 px$, both of which are prime. This means that the key to the cryptic clue is 171107.
From @Bennett Bernardoni, decoding the cryptic clue as a Vigenère cipher with this key gives us "Cairn loved swimming around in the island full of pits (8)".
PITCAIRN Island. Is this where the next clue can be found?
| CommonCrawl |
A small block carrying a charge $+Q$, placed on a frictionless horizontal table, is attached to one end of a spring of force constant $k$. The other end of the spring is attached to a fixed support and the spring stays horizontal in its relaxed state.
Another small block carrying a negative charge $-q$ is brought very slowly from a great distance towards the former block, along the line coinciding with the axis of the spring.
Find extension in the spring when the blocks collide.
Start by trying to understand what is happening here.
Block #2 is moved to the left very slowly from infinity, held in place by some external force. At each step block #1 is in quasi-static equilibrium: the electrostatic (ES) attractive force to the right is matched by the elastic force to the left.
The natural response of block #1 to a sudden change in the position of block #2 is to oscillate: the ES force is suddenly bigger while the elastic force remains the same. Block #1 oscillates until it loses its KE somehow and finds a new equilibrium position. Regarding the spring and the two blocks as the system, the external force does -ve work on this system, taking away the kinetic energy as it develops.
So energy is not conserved, and since it is not obvious how the unknown external force varies, the work-energy theorem is not useful here. What we can say is that the KE removed equals the total PE stored at equilibrium (which will be -ve), but that is not useful either.
Your observation that $r=0$ when the blocks collide is correct and warns us that block #1 cannot be in equilibrium at all times. At some point the equilibrium becomes unstable. We have to find out when that happens.
Suppose at this point the separation between the blocks is $r_0$ and the extension of the spring is $x_0$. At this point of unstable equilibrium a small displacement of block #1 causes it to accelerate the rest of the distance $r_0$ towards block #2 while block #2 is held stationary by the external force. Block #1 collides with block #2 and the KE it gained is again dissipated somehow, but we don't need to know what this KE is. The final extension of the spring is $x_0+r_0$.
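A sketch of how $r_0$ and $x_0$ might then be computed (this step is not worked out above; writing $k_e = 1/4\pi\varepsilon_0$): quasi-static equilibrium requires $kx = k_eQq/r^2$, and the equilibrium turns unstable when a small displacement of block #1 towards block #2 increases the attraction as fast as the restoring force, i.e. when
$$\frac{2k_eQq}{r_0^{3}} = k \quad\Longrightarrow\quad r_0=\Bigl(\frac{2k_eQq}{k}\Bigr)^{1/3},\qquad kx_0=\frac{k_eQq}{r_0^{2}}=\frac{kr_0}{2}\;\Longrightarrow\; x_0=\frac{r_0}{2}.$$
The extension when the blocks collide is therefore $x_0+r_0=\tfrac{3}{2}\,r_0=\tfrac{3}{2}\bigl(Qq/2\pi\varepsilon_0 k\bigr)^{1/3}$.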
If you are bringing the small charge $-q$ in slowly, then why would there be a sudden increase in force?
I used the idea of sudden small steps to illustrate the need to remove energy from the system. Even if block #2 is moved smoothly but relatively fast, block #1 will oscillate. With slow small smooth steps there are still oscillations, but in practice these die out without being detected because of friction or hysteresis in the spring.
If block #1 has reached a position of unstable equilibrium then a small force towards block #2 is all that is needed to make block #1 move. | CommonCrawl |
…the public key $(n,e)$ used to encrypt the plaintexts $m_0$, $m_1$.
…the two plaintexts' difference (that is, $\delta := (m_1 - m_0)\bmod n$).
…the ciphertexts $c_0=m_0^e\bmod n$, $c_1=m_1^e\bmod n$.
This should yield the linear polynomial $X-m$ (except possibly in rare cases).
This means: Assuming the given ciphertexts are not an instance of "rare cases", the greatest common divisor $d\in(\mathbb Z/n\mathbb Z)[X]$ of $X^e-c_0$ and $(X+\delta)^e-c_1$ equals $X-m$ multiplied by some constant in $\mathbb Z/n\mathbb Z$ (as $d$ is only unique up to multiplication by units). Dividing $d$ by its leading coefficient will result in $X-m$.
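For the common special case $e=3$ the polynomial gcd collapses to a closed-form expression, since $\delta\,(c_1+2c_0-\delta^3)=3\delta m(m^2+m\delta+\delta^2)$ and $c_1-c_0+2\delta^3=3\delta(m^2+m\delta+\delta^2)$. A hedged Python sketch of that special case only (the general-$e$ attack would instead carry out the gcd of $X^e-c_0$ and $(X+\delta)^e-c_1$ over $\mathbb Z/n\mathbb Z$):

```python
from math import gcd

def recover_m_e3(n, c0, c1, delta):
    """Franklin-Reiter related-message recovery, special case e = 3.

    c0 = m^3 mod n and c1 = (m + delta)^3 mod n; returns m.
    """
    num = (delta * (c1 + 2 * c0 - pow(delta, 3, n))) % n
    den = (c1 - c0 + 2 * pow(delta, 3, n)) % n
    if gcd(den, n) != 1:
        raise ValueError("denominator not invertible mod n (a 'rare case')")
    return (num * pow(den, -1, n)) % n    # modular inverse needs Python >= 3.8

# Toy self-check with small, purely illustrative numbers (not a real modulus):
n = 3233 * 4097
m, delta = 12345, 678
c0, c1 = pow(m, 3, n), pow((m + delta) % n, 3, n)
assert recover_m_e3(n, c0, c1, delta) == m
```
| CommonCrawl |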
It's a simplified model. Suppose $U_t$ is a random variable following a Lognormal($x_1$, $z_1^2$) distribution and $V_t$ is a random variable following a Lognormal($x_2$, $z_2^2$) distribution. Suppose they are independent here. The payoff of the heat rate-linked derivative is $\max(U_T - V_T, 0)$. How do you price this option? It's an integration problem.
This might be a surprise to you, but you can evaluate the option using Black-Scholes.
The key concept is change your numéraire from dollar to the asset associated with $V$. The $V$ in your payout $\max(U_t-V_t,0)$ will effectively get replaced by a constant, the par forward of asset $V$ at maturity $t$.
This is nothing but the future value of a call option with strike $V_F$ on an asset with par forward $U_F$ and standard deviation $z$ at maturity. You can finish the integral using Black-Scholes.
If $U_t$ and $V_t$ are not independent of each other, you can still transform the F.V. to an integral of the form $(*2)$. The only difference is that $U_F$ and $z$ there will be adjusted by some factors.
If you want to learn how to deal with the case with correlation, pickup any standard textbook on option pricing and look for the pricing of quanto option. The issues you encountered in pricing a quanto option is similar to the one you need to price your heat-linked option under the log normal model.
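As a concrete (hedged) illustration of where this lands, the change-of-numéraire argument gives a Margrabe-type closed form; the sketch below is my own, written directly in the question's parameters, with $\rho$ the correlation of $\ln U$ and $\ln V$ ($\rho=0$ for the independent case) and discounting omitted:

```python
from math import exp, log, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def heat_rate_option(x1, z1, x2, z2, rho=0.0):
    """E[max(U - V, 0)] for U ~ Lognormal(x1, z1^2), V ~ Lognormal(x2, z2^2)."""
    U_F = exp(x1 + 0.5 * z1**2)          # par forward E[U_T]
    V_F = exp(x2 + 0.5 * z2**2)          # par forward E[V_T]
    s = sqrt(z1**2 + z2**2 - 2.0 * rho * z1 * z2)
    d1 = (log(U_F / V_F) + 0.5 * s**2) / s
    d2 = d1 - s
    return U_F * norm_cdf(d1) - V_F * norm_cdf(d2)

print(heat_rate_option(x1=4.6, z1=0.3, x2=4.5, z2=0.2))   # made-up inputs
```

Setting $z_2=0$ reduces this to the usual Black formula with strike $V_F$, which is a consistency check on the answer above.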
| CommonCrawl |
Results for "L. Ya. Glozman"
Chiral symmetry restoration in excited hadrons (Sep 30 2003). The evidence, theoretical justification and implications of chiral symmetry restoration in excited hadrons are presented.
Description of the normal basis of boundary algebras (Dec 24 2014). We investigate normal bases of algebras with small growth. The paper was supported by the Grant RFBR N 14-01-00548.
Describing the set of words generated by interval exchange transformation (Nov 15 2007; Jan 31 2008). Let $W$ be an infinite word over a finite alphabet $A$. We get combinatorial criteria for the existence of interval exchange transformations that generate the word $W$.
Comment on ``Parity Doubling and $SU(2)_L \times SU(2)_R$ Restoration in the Hadronic Spectrum'' (Mar 29 2006). This comment critically discusses certain aspects of a recent paper on parity doubling in the hadronic spectrum and its possible connection to chiral symmetry restoration.
Low-Lying Nucleons from Chirally Improved Fermions (Sep 10 2003). We report on our preliminary results on the low-lying excited nucleon spectra which we obtain through a variational basis formed with three different interpolators.
Role of electronic shell in the double beta decay (Feb 11 2016; Feb 18 2016). We demonstrate that the limiting energy available for ejected electrons in double beta decay is diminished by about 400 eV due to inelastic processes in the electronic shell. | CommonCrawl |
Someone called James Davis has found a counterexample to John H. Conway's "Climb to a Prime" conjecture, for which Conway was offering \$1,000 for a solution.
Let $n$ be a positive integer. Write the prime factorization in the usual way, e.g. $60 = 2^2 \cdot 3 \cdot 5$, in which the primes are written in increasing order, and exponents of $1$ are omitted. Then bring exponents down to the line and omit all multiplication signs, obtaining a number $f(n)$. Now repeat.
So, for example, $f(60) = f(2^2 \cdot 3 \cdot 5) = 2235$. Next, because $2235 = 3 \cdot 5 \cdot 149$, it maps, under $f$, to $35149$, and since $35149$ is prime, we stop there forever.
The conjecture, in which I seem to be the only believer, is that every number eventually climbs to a prime. The number 20 has not been verified to do so. Observe that $20 \to 225 \to 3252 \to 223271 \to \ldots$, eventually getting to more than one hundred digits without reaching a prime!
Well, James, who says he is "not a mathematician by any stretch", had a hunch that a counterexample would be of the form $n = x \cdot p = f(x) \cdot 10^y+p$, where $p$ is the largest prime factor of $n$, which in turn motivates looking for $x$ of the form $x=m \cdot 10^y + 1$, and $m=1407$, $y=5$, $p=96179$ "fell out immediately". It's not at all obvious to me where that hunch came from, or why it worked.
The number James found was $13\,532\,385\,396\,179 = 13 \cdot 53^2 \cdot 3853 \cdot 96179$, which maps onto itself under Conway's function $f$ – it's a fixed point of the function. So, $f$ will never map this composite number onto a prime, disproving the conjecture. Finding such a simple counterexample against such stratospherically poor odds is like deciding to look for Lord Lucan and bumping into him on your doorstep as you leave the house.
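For readers who want to poke at this themselves, here is a small sketch (mine, not from the post) of Conway's map $f$ and a check of the fixed point, using sympy for the factorization:

```python
from sympy import factorint, isprime

def f(n):
    """Concatenate the prime factorization: primes ascending, exponents > 1 kept."""
    parts = []
    for p, e in sorted(factorint(n).items()):
        parts.append(str(p))
        if e > 1:
            parts.append(str(e))
    return int("".join(parts))

assert f(60) == 2235 and f(2235) == 35149 and isprime(35149)
assert f(13532385396179) == 13532385396179   # Davis's composite fixed point
assert not isprime(13532385396179)
```

(Iterating $f$ starting from 20, as in Conway's quote above, is not recommended: the numbers quickly become too large to factor comfortably.)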
A lovely bit of speculative maths spelunking!
via Hans Havermann, whom James originally contacted with his discovery.
One problem: that's Wolfram's Rule 135, not the Game of Life. You can tell because of the pixels.
Rule 135 is a 1-dimensional automaton: you start with a row of black or white pixels, and the rule tells you how the colour of each pixel changes based on the colours of the neighbouring pixels. The Cambridge North design shows the evolution of a rule 135 pattern as a distinct row of pixels for each time step. Conway's Game of Life follows the same idea but in two dimensions – a pixel's colour changes depending on the nearby pixels in every compass direction.
Either way, it's a lovely pattern. I suspect the designers went with Rule 135 instead of the Game of Life so that they'd get a roughly even mix of white and black pixels, which is hard to achieve under Conway's rules.
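If you want to reproduce the pattern, an elementary cellular automaton in Wolfram's numbering takes only a few lines; this is a generic sketch (the station panels' actual starting row and boundary handling are, of course, my guesses):

```python
def step(cells, rule=135):
    """One time step of an elementary CA; cells is a list of 0/1, wrapping at the edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # value 0..7
        out.append((rule >> neighbourhood) & 1)               # bit of the rule number
    return out

row = [0] * 31
row[15] = 1                      # start from a single live cell
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```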
Delayed £50m Cambridge North railway station opens on BBC News.
Cambridge North Station information from Atkins Group, the design consultancy responsible for the station building.
Press release from Greater Anglia trains.
The Game of Life: a beginner's guide by Alex Bellos in the Guardian.
Brought to our attention by @Quendus on Twitter.
Just how big is a big proof?
With news that a recent proof of the Boolean Pythagorean Triples Theorem is the 'largest proof ever', we collect and run-down some of the biggest, baddest, proofiest chunks of monster maths.
Featuring on Pizza Hut's blog as the inaugural post under the optimistic tags 'math' and 'John Conway', the company explains that three maths puzzles set by Conway will be posted on Pi Day, "varying in level of difficulty from high school to Ph.D. level". Residents of the 48 contiguous US states can leave their answers in the comments when the puzzles are posted, and the winners receive a 3.14-year supply of pizza (or, as the rules clarify, a somewhat more prosaic $1600 Pizza Hut gift card).
Obviously we will have to wait for the questions to be unveiled to be able to judge the appropriate level of excitement for this promotion, but with Conway involved, no maths-is-really-hard nonsense in the blog post, and not a formula for the perfect anything in sight, things are looking promising for a nice bit of harmless maths/poor childhood diet fun.
The author Siobhan Roberts has sent us a copy of her new book, Genius at Play. There was a strong implication that we should review it. I've now read the book, so I'll do that: I enjoyed it.
Numberphile, the supremum over all YouTube channels, has scored a bit of a coup – Brady has sat down and recorded an interview with the famously Internet-reclusive John Conway.
In this first video (there's a bonus one linked at the end of this one, and I hope there'll be more) John talks about his love/hate relationship with his Game of Life.
By the way, I notice from the video's description that the Mathematical Sciences Research Institute is paying for Numberphile these days. Thanks, MSRI! | CommonCrawl |
Yan Gao, Zhiqiang Xu, Lei Wang, Honglei Xu. Preface. Numerical Algebra, Control & Optimization, 2015, 5(1): i-ii. doi: 10.3934/naco.2015.5.1i.
Yanli Han, Yan Gao. Determining the viability for hybrid control systems on a region with piecewise smooth boundary. Numerical Algebra, Control & Optimization, 2015, 5(1): 1-9. doi: 10.3934/naco.2015.5.1.
Li-Min Wang, Jing-Xian Yu, Jia Shi, Fu-Rong Gao. Delay-range dependent $H_\infty$ control for uncertain 2D-delayed systems. Numerical Algebra, Control & Optimization, 2015, 5(1): 11-23. doi: 10.3934/naco.2015.5.11.
Liyan Qi, Xiantao Xiao, Liwei Zhang. On the global convergence of a parameter-adjusting Levenberg-Marquardt method. Numerical Algebra, Control & Optimization, 2015, 5(1): 25-36. doi: 10.3934/naco.2015.5.25.
Siqi Li, Weiyi Qian. Analysis of complexity of primal-dual interior-point algorithms based on a new kernel function for linear optimization. Numerical Algebra, Control & Optimization, 2015, 5(1): 37-46. doi: 10.3934/naco.2015.5.37.
Wei Lv, Ruirui Sui. Optimality of piecewise thermal conductivity in a snow-ice thermodynamic system. Numerical Algebra, Control & Optimization, 2015, 5(1): 47-57. doi: 10.3934/naco.2015.5.47.
Jingang Zhai, Guangmao Jiang, Jianxiong Ye. Optimal dilution strategy for a microbial continuous culture based on the biological robustness. Numerical Algebra, Control & Optimization, 2015, 5(1): 59-69. doi: 10.3934/naco.2015.5.59.
Chengxin Luo. Single machine batch scheduling problem to minimize makespan with controllable setup and jobs processing times. Numerical Algebra, Control & Optimization, 2015, 5(1): 71-77. doi: 10.3934/naco.2015.5.71.
Sanming Liu, Zhijie Wang, Chongyang Liu. Proximal iterative Gaussian smoothing algorithm for a class of nonsmooth convex minimization problems. Numerical Algebra, Control & Optimization, 2015, 5(1): 79-89. doi: 10.3934/naco.2015.5.79.
Given $n$ line segments in the plane, we need to check whether at least two of them intersect. If the answer is yes, we print one such pair of intersecting segments; if there are several answers, any one of them will do.
The naive algorithm iterates over all pairs of segments in $O(n^2)$ and checks each pair for intersection. This article describes an algorithm with running time $O(n \log n)$, based on the sweep line technique.
Let us mentally draw a vertical line $x = -\infty$ and start moving it to the right. In the course of its movement, this line will meet the segments, and whenever a segment crosses the line it does so in exactly one point (we will assume for now that there are no vertical segments).
Thus, for each segment its intersection point with the sweep line appears at some moment, then moves as the line moves, and finally the segment disappears from the line.
We are interested in the relative order of the segments along the vertical. Namely, we will store a list of segments crossing the sweep line at a given time, where the segments will be sorted by their $y$-coordinate on the sweep line.
To find an intersecting pair, it is sufficient to consider only adjacent segments at each fixed position of the sweep line.
It is enough to consider the sweep line not in all possible real positions $(-\infty \ldots +\infty)$, but only in those positions when new segments appear or old ones disappear. In other words, it is enough to limit yourself only to the positions equal to the abscissas of the end points of the segments.
When a new line segment appears, it is enough to insert it at the proper position in the list obtained for the previous sweep line position. We only need to check the added segment for intersection with its immediate neighbors in the list, the one above and the one below.
If a segment disappears, it is enough to remove it from the current list. After that, it is necessary to check its former upper and lower neighbors in the list for intersection with each other.
No other changes to the order of segments in the list can occur, so no other intersection checks are required.
Two disjoint segments never change their relative order.
In fact, if one segment was first higher than the other, and then became lower, then between these two moments there was an intersection of these two segments.
Two non-intersecting segments also never have the same $y$-coordinate on the sweep line.
From this it follows that at the moment a segment appears we can find the position for this segment in the queue, and we will never have to move it within the queue afterwards: its order relative to the other segments in the queue will not change.
Two intersecting segments become neighbors of each other in the queue at the moment the sweep line reaches their intersection point.
Therefore, to find a pair of intersecting segments it is sufficient to check for intersection exactly those pairs of segments that, at some moment during the movement of the sweep line, were neighbors of each other.
It is easy to notice that it is enough only to check the added segment with its upper and lower neighbors, as well as when removing the segment — its upper and lower neighbors (which after removal will become neighbors of each other).
It should be noted that at a fixed position of the sweep line, we must first add all the segments that start at this x-coordinate, and only then remove all the segments that end here.
Thus, we do not miss intersections at a shared endpoint, i.e. cases when two segments have a common vertex.
Note that vertical segments do not actually affect the correctness of the algorithm.
These segments are distinguished by the fact that they appear and disappear at the same time. However, due to the previous comment, we know that all segments will be added to the queue first, and only then they will be deleted. Therefore, if the vertical segment intersects with some other segment opened at that moment (including the vertical one), it will be detected.
Where in the queue should a vertical segment be placed? After all, a vertical segment does not have a single specific $y$-coordinate: it spans an entire range of $y$ values. However, it is easy to see that any coordinate from this range can be taken as its $y$-coordinate.
Thus, the entire algorithm will perform no more than $2n$ tests on the intersection of a pair of segments, and will perform $O(n)$ operations with a queue of segments ($O(1)$ operations at the time of appearance and disappearance of each segment).
The final asymptotic behavior of the algorithm is thus $O(n \log n)$.
The main function here is solve(), which returns the indices of a pair of intersecting segments, or $(-1, -1)$ if there is no intersection.
Checking two segments for intersection is carried out by the intersect() function, using an algorithm based on the oriented area of a triangle.
The queue of segments is the global variable s, a set<event>. Iterators that specify the position of each segment in the queue (for convenient removal of segments from the queue) are stored in the global array where.
Two auxiliary functions prev() and next() are also introduced, which return iterators to the previous and next elements (or end(), if one does not exist).
The constant EPS denotes the error of comparing two real numbers (it is mainly used when checking two segments for intersection). | CommonCrawl |
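The implementation described above is a C++ one (hence the std::set and iterators), and the code itself is not reproduced here. Below is a rough Python adaptation of the same event-driven scheme; it returns the pair of indices or None instead of $(-1,-1)$, and for simplicity it keeps the active segments in a plain list kept sorted by the $y$-coordinate at the current sweep position, so insertions cost $O(n)$ rather than $O(\log n)$. Swapping that list for a balanced search tree recovers the stated $O(n \log n)$ bound.

```python
from fractions import Fraction

class Seg:
    def __init__(self, p, q):
        self.p, self.q = (p, q) if p <= q else (q, p)   # p is the left (or, if vertical, lower) endpoint

    def y_at(self, x):
        (x1, y1), (x2, y2) = self.p, self.q
        if x1 == x2:                                    # vertical segment: any y on it works
            return Fraction(y1)
        return y1 + Fraction(y2 - y1) * (x - x1) / (x2 - x1)

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def on_segment(p, q, r):      # r is collinear with pq: does it lie between them?
    return min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and min(p[1], q[1]) <= r[1] <= max(p[1], q[1])

def intersect(s1, s2):
    p1, q1, p2, q2 = s1.p, s1.q, s2.p, s2.q
    d1, d2 = cross(p1, q1, p2), cross(p1, q1, q2)
    d3, d4 = cross(p2, q2, p1), cross(p2, q2, q1)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    return (d1 == 0 and on_segment(p1, q1, p2)) or (d2 == 0 and on_segment(p1, q1, q2)) \
        or (d3 == 0 and on_segment(p2, q2, p1)) or (d4 == 0 and on_segment(p2, q2, q1))

def any_intersection(segs):
    # events: (x, kind, id); kind 0 = segment starts, 1 = segment ends,
    # so at equal x all openings are processed before all closings
    events = [(s.p[0], 0, i) for i, s in enumerate(segs)] + \
             [(s.q[0], 1, i) for i, s in enumerate(segs)]
    events.sort()

    active = []               # segment ids, ordered by y at the current sweep position
    for x, kind, i in events:
        if kind == 0:
            y = segs[i].y_at(x)
            pos = 0
            while pos < len(active) and segs[active[pos]].y_at(x) < y:
                pos += 1
            for j in (pos - 1, pos):          # check the two would-be neighbours
                if 0 <= j < len(active) and intersect(segs[i], segs[active[j]]):
                    return (i, active[j])
            active.insert(pos, i)
        else:
            pos = active.index(i)             # the two segments around it become adjacent
            if 0 < pos < len(active) - 1 and intersect(segs[active[pos - 1]], segs[active[pos + 1]]):
                return (active[pos - 1], active[pos + 1])
            active.pop(pos)
    return None

segs = [Seg((0, 0), (4, 4)), Seg((0, 4), (4, 0)), Seg((5, 5), (6, 7))]
print(any_intersection(segs))   # (1, 0): the two diagonals cross
```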
Chern-Simons terms describe topological properties of systems. A topological property is something that remains unchanged under small geometric changes.
Chern-Simons terms are known under different names in different branches of physics. In fluid mechanics it is usually called "fluid helicity", in plasma physics and magnetohydrodynamics "magnetic helicity". In the context of field theories it is usually called Chern-Simons term.
In the beginning people tried to make a mechanical model of electrodynamics. For example, Maxwell thought of Faraday's electric and magnetic field lines as "fine tubes of variable section carrying an incompressible fluid".
This idea is made precise by the notion "kinetic helicity".
In fluid dynamics, the kinetic helicity is a measure of the degree of knottedness and/or linkage of the vortex lines of the flow.
$$ H \equiv \int_\Omega u \cdot \omega \, d^3x, $$ where $u$ is the velocity field defined on the domain $\Omega$, $\omega = \nabla \times u$ is the vorticity, and $x$ is the position vector.
[H]elicity, is an integral over the fluid domain that expresses the correlation between velocity and vorticity, and an invariant of the classical Euler equations of ideal (inviscid) fluid flow. […] It is precisely because vortex lines are frozen in the fluid, thus conserving their topology, that helicity is conserved too. As Kelvin recognized, if two vortex tubes are linked, then that linkage survives in an ideal fluid for all time; if a vortex tube is knotted, then that knot survives in the same way for all time. Helicity is the integral manifestation of this invariance: For two linked tubes, it is proportional to the product of the two circulations (each conserved by Kelvin's circulation theorem), whereas for a single knotted tube, or a deformed unknotted tube, it is proportional to its "writhe plus twist," as encountered in differential geometry—a property of knotted ribbons that is invariant under continuous deformation (4, 5). Thus, for example, if an untwisted ribbon that goes twice round a circle before closing on itself is unfolded and untwisted back to circular form, then its writhe will decrease continuously from 1 to 0, and its twist will increase continuously from 0 to 1, by way of compensation. Such conversion of writhe to twist is familiar to anyone seeking to straighten out a coiled garden hose.
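As a rough numerical illustration of the helicity integral defined above, here is a small sketch (my own; the ABC-type Beltrami test flow and the grid resolution are arbitrary choices):

```python
import numpy as np

n = 32
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# ABC-type (Beltrami) flow, whose curl equals the velocity field itself
u = np.stack([np.sin(Z) + np.cos(Y),
              np.sin(X) + np.cos(Z),
              np.sin(Y) + np.cos(X)])

def curl(v, dx):
    # finite-difference curl; np.gradient uses central differences in the interior
    dvz_dy = np.gradient(v[2], dx, axis=1); dvy_dz = np.gradient(v[1], dx, axis=2)
    dvx_dz = np.gradient(v[0], dx, axis=2); dvz_dx = np.gradient(v[2], dx, axis=0)
    dvy_dx = np.gradient(v[1], dx, axis=0); dvx_dy = np.gradient(v[0], dx, axis=1)
    return np.stack([dvz_dy - dvy_dz, dvx_dz - dvz_dx, dvy_dx - dvx_dy])

dx = L / n
omega = curl(u, dx)
H = np.sum(u * omega) * dx**3   # discrete version of the integral of u . omega
print(H)                        # positive for this flow, since omega is aligned with u
```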
In electrodynamics the Chern-Simons term is known as helicity and describes the linking of magnetic flux lines. This can be seen by interpreting the magnetic field as an incompressible fluid flow, with vector potential $\vec A$: $\vec B = \nabla \times \vec A$.
It has been suggested that primordial magnetic fields can develop large correlation lengths provided they carry nonvanishing "magnetic helicity" $\int d^3r\, \mathbf{a} \cdot \mathbf{b}$, a quantity known to particle physicists as the Abelian, Euclidean Chern-Simons term. Here $\mathbf{a}$ is an Abelian gauge potential for the magnetic field $\mathbf{b} = \nabla \times \mathbf{a}$. If there exists a period of decaying turbulence in the early universe, which can occur after a first-order phase transition, a magnetic field with nonvanishing helicity could have relaxed to a large-scale configuration, which enjoys force-free dynamics (source currents for the magnetic fields proportional to the fields themselves) thereby avoiding dissipation [39]. — Creation and evolution of magnetic helicity, R. Jackiw et al.
Ref 39 is C. Adam, B. Muratori, and C. Nash, Particle creation via relaxing hypermagnetic knots, Phys.Rev. D62 (2000) 105027, [hep-th/0006230]. | CommonCrawl |
Yes, if there are non-zero fixed costs, and constant marginal cost, then average cost decreases strictly monotonically with quantity, asymptotic to the marginal cost.
Decreasing average cost implies that marginal cost is less than average cost ($MC<AC$, which can be proved by simply taking the first derivative of $C(q)/q$). With constant marginal cost, there exists a simple linear cost function $C(q)=F+a\times q$ that satisfies the constant-$MC$ condition, where the constant $F$ is the fixed cost, $a \times q$ is the variable cost, and the constant $a$ is the marginal cost. Therefore $AC=F/q+a$ is greater than $MC=a$ whenever $F>0$.
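For completeness, the one-line derivation behind that claim (standard calculus, not part of the original answer) is
$$\frac{d\,AC(q)}{dq}=\frac{d}{dq}\left(\frac{C(q)}{q}\right)=\frac{C'(q)\,q-C(q)}{q^{2}}=\frac{MC(q)-AC(q)}{q},$$
so average cost is strictly decreasing exactly where $MC<AC$; with $C(q)=F+aq$ this derivative equals $-F/q^{2}<0$ for every $q>0$ whenever $F>0$.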
When can one safely talk about decreasing marginal utility? | CommonCrawl |
Abstract: Electroweak gauge boson pair production is one of the most important Standard Model processes at the LHC, not only because it is a benchmark process but also because of its ability to probe the electroweak interaction directly. We present full next-to-leading order predictions for the production cross sections and distributions of on-shell massive gauge boson pair production in the Standard Model. This includes the QCD and electroweak (EW) corrections. We study the hierarchy between the different channels when looking at the size of the QCD gluon-induced processes and the EW photon-induced processes and provide the first comprehensive explanation of this hierarchy thanks to analytical leading-logarithmic results. We also provide a detailed study of the theoretical uncertainties affecting the total cross section predictions that stem from scale variation, parton distribution function and $\alpha_s$ errors. We then compare with the present LHC data.
The Tenth Alexander Friedmann International Seminar on Gravitation and Cosmology and the Fourth Symposium on the Casimir Effect will be held from June 23, 2019 (date of arrival) to June 29, 2019 (date of departure) in Saint Petersburg, Russia.
The working language of the Seminar and Symposium is English.
The venue is the Institute of Physics, Nanotechnology and Telecommunications, Peter the Great Saint Petersburg Polytechnic University, where Alexander Friedmann worked as a Professor from 1920 to 1925, i.e., at the time when he created the theory of the expanding Universe.
We will confirm receipt of each abstract and Registration Form.
The deadline for registration is March 30, 2019.
Please provide the following information about yourself. Using this information, we will issue the official invitation which you might need to obtain a Russian visa from the Russian Consulate (if necessary) or to apply for funding of your participation from your national funding agencies.
Please attach a PDF file of the passport pages containing this information.
The Proceedings of the Tenth Friedmann Seminar and Fourth Casimir Symposium will be published in two special issues of the Journals Int. J. Mod. Phys. A and Mod. Phys. Lett. A (World Scientific Publishing Company) both of which are indexed by the Scopus and Web of Science. The deadline for submission of full manuscripts for publication is September 20, 2019. All details concerning the preparation rules and submission procedure of the manuscripts will be posted in April of 2019 at this website.
The Organizing Committee will pay the major part of registration fees for participants from the countries of former Soviet Union which do not belong to the European Community. The decreased registration fee for participants from these countries is Rubles 2000. The registration fee covers a Conference bag, participation in all sessions and Conference events (coffee breaks, welcome party and excursion), publication of the Program and the special issues of the Journal containing the Conference Proceedings.
Under construction. The Program will contain longer and shorter oral talks, and the poster sessions. Selection among the submitted abstracts will be made by the Scientific Organizing Committee.
— A. A. Starobinsky (Russia), Inflation, pre-inflation and quantum gravity.
— D. Konkowski (USA), Resolution of singularities in relativistic spacetimes.
— A. D. Dolgov (Russia), Contemporary black holes from the early Universe.
— J. C. Fabris (Brazil), Viscous models and the dark sector of the universe.
— D. V. Gal'tsov (Russia), Disformal dualities and nature of singularities in scalar-tensor gravity.
— S. Capozziello (Italy), The emission of GRBs as test bed for Extended Gravity.
— B. E. Meierovich (Russia), Black hole and dark matter: Phase equilibrium.
— S. V. Ketov (Japan), Supergravity-based cosmology in Friedmann Universe.
— V. M. Mostepanenko (Russia), Axion-like particles in the Universe.
— G. Esposito (Italy), How to realize trapped surfaces in Friedmann models?
— L. Marochnik (USA), Cosmological acceleration from super horizon gravitational waves: Dark energy and inflation.
— K. A. Bronnikov (Russia), The stability problem for regular black holes and wormholes.
— V. P. Frolov (Canada), Integrability and separability in black-hole spacetime.
— A. V. Astashenok, A. V. Yurov, and V. A. Yurov (Russia), The cosmological models with jump discontinuites.
— I. Shapiro (Brazil), Effective action induced by anomaly: new results.
— I. Dymnikova (Russia), Regular black holes and solitons with de Sitter interiors.
— M. Hohmann (Estonia), Hamiltonian formulation of teleparallel gravity.
— A. Burinskii (Russia), Unification gravity with quantum theory: bag-like source of the Kerr-Newman spinning particle and anomalous magnetic moment of electron.
— C. Romero (Brazil), A new approach to Weyl unified field theory.
— D. G. Yakovlev (Russia), Weak interaction and nonequilibrium neutron stars.
— M. Maia (Brazil), The impact of the Higgs on Einstein's gravity.
— M. Maia (Brazil), The gravitational compass.
— A. A. Grib, Yu. V. Pavlov (Russia), 50 years of calculation of the density of particles created in the early Friedmann Universe.
— C. Laemmerzahl (Germany), Signatures of black holes star motion, accretion disks, and shadows.
— V. D. Ivashchuk and A. A. Kobtsev (Russia), Stable exponential cosmological solutions with two factor spaces in the Einstein-Gauss-Bonnet model with a Lambda term.
— P. Kuusk (Estonia), Scalar fields in generalized theories of gravity.
— S. V. Chervon (Russia), Effective chiral cosmological models of higher order theories of gravity.
— Pengshun Luo, Jihua Ding, Xiaofang Ren, Rui Luo and Jianbo Wang (China), Test of spin-dependent exotic interactions at the micrometer range.
— M. L. Fil'chenkov, Yu. P. Laptev (Russia), On gravitational radiation of microsystems.
— M. Moumni, M. Falek (Algeria), Minimal length corrections to FLRW solutions.
— A. S. Baigashov, A. V. Astashenok (Russia), Neutron stars in frames of R-square gravity and gravitational waves.
— F. F. Faria (Brazil), Cosmology in massive conformal gravity.
— D. Yu. Tsipenyuk, W. B. Belayev (Russia), Bubble structures in microphysical objects in 5-D extended space model.
— O. B. Zaslavskii (Ukraine), Super-Penrose process.
— A. A. Grib and Yu. V. Pavlov (Russia), On high energy particles from the rotating black holes.
— V. F. Panov, O. V. Sandakova and E. V. Kuvshinova (Russia), Cosmology with rotation in Bianchi's models.
— S. D. Odintsov (Spain), Cosmology and modified gravity: An overview.
— V. V. Kassandrov, N. V. Markova (Russia), Energy spectrum of the regular solutions to the Schroedinger-Newton system of equations.
— A. Beesham (South Africa), First observational test of General Relativity.
— I. L. Erokhin (Russia), To the question on accelerated universe expansion.
— I. V. Fomin (Russia), The Gauss-Bonnet scalar in Friedmann cosmology.
— L. I. Petrova (Russia), Correspondence between equations of field theory and the evolutionary relation of mathematical physics equations: Interpretation of Einstein's equations.
— A. V. Minkevich (Belarus), Gravitation interaction in astrophysics in Riemann-Cartan space-time.
— S. V. Sushkov (Russia), Cosmological perturbations during the kinetic inflation in the Horndeski theory.
— E. G. Mychelkin, M. A. Makukov (Kazakhstan), Antiscalar-modified Kerr spacetime.
— R. Konoplya (Russia), Echoes of compact objects: new physics near the surface and matter at a distance.
— V. B. Bezerra (Brazil), Some remarks on topology and gravitation.
— V. N. Rudenko (Russia), Current status of the GW experiment and Russian participation.
— K. G. Zloshchastiev (South Africa), Resolving cosmological singularity problem in a logarithmic superfluid theory of physical vacuum.
— Yu. N. Eroshenko (Russia), Evolution of matter around primordial black hole at radiation-dominated epoch.
— R. Sedmik, M. Pitschmann and H. Abele (Austria), Parallel plate force metrology as a tool to probe dark sector interactions.
— S. Yu. Vernov (Russia), Static spherically symmetric solutions in scalar-tensor gravity models.
— T. Prokopec (Netherlands), The role of Weyl symmetry in high energy physics.
— V. P. Neznamov and M. V. Gorbatenko (Russia), Quantum mechanics of stationary states of particles in external singular spherically and axially symmetric gravitational fields.
— S. N. Pandey (India), From vacuum to Friedmann universe with a higher order theory of gravity.
— E. O. Pozdeeva (Russia), Exact solutions in non-local Gauss-Bonnet gravity.
— M. Bhatti (Pakistan), Instability constraints: Stability of celestial objects.
— M. Yu. Piotrovich, V. L. Afanasiev, S. D. Buliga and T. M. Natsvlishvili (Russia), Determination of supermassive black hole spins in active galactic nuclei.
— I. Damiao Soares (Brazil), Boosted Kerr black holes in general relativity.
— V. Dzhunushaliev, V. Folomeev, B. Kleihaus and J. Kunz (Kazakhstan), Thin-shell toroidal T2-wormhole.
— R. K. Dubey and S. Pathak (India), Gravitational waves through binary formation in merging galaxies.
— V. Dzhunushaliev, G. K. Nurtayeva and A. A. Serikbolova (Kazakhstan), Domain and branes in multidimensional $\alpha R^n$ gravities.
— Z. Yousaf (Pakistan), Irregular energy density and compact objects.
— J.-P. Petit (France), Mass inversion through the Schwarzschild sphere.
— A. S. Lukyanenko (Russia), A "stationary" Schroedinger equation in quantum cosmology.
— M. Zaeem-ul-Haq Bhatti (Pakistan), Instability constraints: Stability of celestial objects.
— O. I. Chashchina, A. Sen and Z. K. Silagadze (Russia), Planck-scale modification of classical mechanics.
— B. Mishra and S. Tarai (India), Dynamics of Bianchi $VI_h$ universe with bulk viscous fluid in modified gravity.
— E. T. Akhmedov, P. A. Anempodistov and I. D. Ivanova (Russia), Corrections to the Aretakis type behaviour of the metric due to an infalling particle.
— K. Boshkayev (Kazakhstan), Dark matter core at the Galactic center.
— B. Saha (Russia), Interacting self-consistent system of spinor and gravitational field.
— A. Alimgozha (Kazakhstan), Hot rotating white dwarfs with different nuclear composition.
— Yu. G. Ignat'ev (Russia), Limit Euclidean cycles in minimal cosmological models based on scalar fields with a Higgs type potential.
— M. Sharif (Pakistan), Dynamics of scalar shell for black holes.
— M. A. Skugoreva, M. Sami and N. Jaman (Russia), Emergence of cosmological scaling behavior in asymptotic regime.
— C. Parmeggiani (Italy), Quantum fields and gravity. Expanding space-times.
— A. B. Arbuzov (Russia), Conformally coupled general relativity.
— A. Zinhailo (Czech Republic), Quasinormal modes of the four dimensional black hole in Einstein-Weyl gravity.
— A. B. Arbuzov and A. E. Pavlov (Russia), Intrinsic time in geometrodynamics of closed manifolds.
— U. Ualikhanova (Kazakhstan), Post-Newtonian limit of extended teleparallel theories of gravity.
— F. Zaripov (Russia), Oscillatory solutions in cosmological models and in a centrally symmetric space in a modified theory of gravity. Estimated observational effects.
— A. Urazalina (Kazakhstan), Phantom and ordinary solutions with scalar fields in GR with different potentials.
— Yu. V. Dumin (Russia), A unified model of dark energy based on the quantum-mechanical uncertainty relation.
— S. G. Rubin (Russia), Evolution of subspaces from high to low energies.
— L. V. Grunskaya, V. V. Isakevich and D. V. Isakevich (Russia), Anomalous behavior of the terrestrial electric field intensity at multiple frequencies of relativistic binary stellar systems.
— E. Arbuzova (Russia), Early universe in modified gravity.
— P. V. Tretyakov (Russia), On spin connection and cosmological perturbations in teleparallel gravity.
— E. E. Kholupenko (Russia), On the possible lost anisotropy of the Unruh radiation.
— A. P. Lelyakov (Russia), Influence of the gravitational field of a null string domain that radially changes its size on the dynamics of a test null string.
— B. N. Latosh (Russia), Effective field theory application for Fab Four based scalar-tensor gravity.
— B. L. Ikhlov (Russia), Vacuum as a source of rotation.
— A. Toporensky, D. Muller and S. Mistra (Russia), On generality of Starobinsky inflation in Einstein and Jordan frames.
— N. N. Gorobey, A. S. Lukyanenko and A. Shavrin (Russia), Ground state and quantization of the energy of space in an anisotropic model of the universe.
— X. P. H. Calmet (United Kingdom), Quantum gravitational corrections to cosmological and astrophysical processes.
— N. Khusnutdinov (Brazil), The self-force for particle in plane wave space-time.
— B. P. Brassel (South Africa), Collapsing radiation shells in Einstein-Gauss-Bonnet gravity.
— A. V. Nesterenok (Russia), Modelling of non-dissociative shock waves in dense interstellar clouds.
— A. Nikolaev and S. D. Maharaj (South Africa), Embedding with Vaidya geometry.
— S. D. Maharaj (South Africa), Models for bounded radiating structures.
— B. Aliyev (Russia), The new physics in 5D theory.
— A. N. Golubiatnikov, D. B. Lyuboshits (Russia), On the inhomogeneous annihilation stage of the Universe expansion.
— N. Arakelyan (Russia), Impact of the accretion of Sagittarius dwarf on the distribution of Milky Way's globular clusters.
— M. J. Guzman, R. Ferraro (Argentina), Degrees of freedom and Hamiltonian formalism for $f(T)$ gravity.
— S. Pilipenko (Russia), On the initial conditions for cosmological N-body simulations.
— A. Parnachev (Ireland), Black holes, phase shift and holography.
— I. A. Babenko (Russia), Astrophysical objects magnetic fields in three physical concepts.
— A. Yu. Kamenshchik, A. A. Starobinsky, A. Tronconi, T. Vardanyan, G. Venturi (Italy, Russia), Pauli–Zeldovich cancellation of the vacuum energy divergences, auxiliary fields and supersymmetry.
— T. Vardanyan, A. Yu. Kamenshchik (Italy), Exact solutions of the Einstein equations for an infinite slab with constant energy density.
— A. V. Aminova (Russia), Cylindrically symmetric dyonic wormholes in six-dimensional Kaluza-Klein theory.
— A. V. Aminova, D. R. Khakimov (Russia), On mechanical conservation laws in 5-dimensional theory of gravity.
— Z. Nekooei, J. Sadeghi, M. Shokri (Iran), The Lagrangian of charged test particle in Horava-Lifshitz black hole and deformed phase space.
— A. A. Sheykin, S. A. Paston (Russia), Polyakov-like approach to the modified gravity theories.
— E. Griv (Israel), Gravitational many-body simulations of stellar disks in galaxies with non-Newtonian interactions.
— V. Vertogradov (Russia), The diagonalization of generalized Vaidya spacetime.
— M. Shokri, F. Renzi, A. Melchiorri (Iran), Cosmic Microwave Background constraints on non-minimal couplings in inflationary models with power law potentials.
— S. Alexandrov (France), Bigravity with single graviton.
— T. P. Shestakova (Russia), On a probable cosmological scenario in the extended phase space approach to quantization of gravity.
— A. R. Cisterna Roa (Chile), Homogenous anti de-Sitter black strings in General Relativity and Lovelock Gravity.
— R. R. Cuzinatto, L. G. Medeiros, P. J. Pompeia (Brazil), Modified Starobinsky inflation: a higher order approach.
— R. R. Cuzinatto, C. A. M. de Melo, L. G. Medeiros, P. J. Pompeia (Brazil), Higher order theories of gravity and modified Starobinsky inflation model in the Palatini approach.
— S. O. Alexeev (Russia), Extended gravity at galaxy cluster's scales.
— A. A. Kirillov, E. P. Savelova (Russia), Wormhole as a possible accelerator of high-energy cosmic-ray particles.
— E. P. Savelova, A. A. Kirillov (Russia), Virtual wormholes and Dynamical Pauli-Villars Regularization.
— U. Mohideen (USA), Eliminating electrostatic forces in precision Casimir force measurements.
— L. B. Boinovich, K. A. Emelyanenko and A. M. Emelyanenko (Russia), Van der Waals forces in thin liquid interlayers: Impact on the properties and stability of wetting and free films.
— Ho Bun Chan (Hong Kong), Strong geometry dependence of Casimir forces between two rectangular gratings.
— A. I. Volokitin (Russia), Fluctuation-induced electromagnetic phenomena under dynamic and thermal nonequilibrium conditions.
— G. Palasantzas, F. Tajik, Z. Babamahdi, V. B. Svetovoy (Netherlands), Casimir actuation between real materials towards chaos.
— G. L. Klimchitskaya (Russia), Casimir force in graphene systems and the Nernst heat theorem.
— R. S. Decca (USA), Old experiments and a new proposal: Status of Casimir physics at IUPUI.
— D. Vassilevich (Brazil), Quest for Casimir repulsion between Chern-Simons surfaces.
— V. N. Marachevsky (Russia), The Casimir effect for materials with Chern-Simons surface layers.
— R. Esquivel-Sirvent (Mexico), Quantum friction for nanoparticles of arbitrary shape.
— G. V. Dedkov and A. A. Kyasov (Russia), Dynamically and thermally nonequilibrium fluctuation-electromagnetic interactions: Recent results and trends.
— T. Emig (France), Conformal field theory of Casimir forces.
— S. P. Gavrilov, D. M. Gitman and A. A. Shishmarev (Russia), Vacuum mean values in the presence of weakly inhomogeneous critical potential steps.
— M. Bordag (Germany), Casimir effect and entropy.
— I. Pirozhenko (Russia), On the stress-energy tensor near a boundary in CFTs.
— V. Svetovoy, G. Palasantzas (Netherlands), A new method to measure the Casimir forces at short distances.
— Yu. V. Gratz (Russia), Vacuum polarization near higher-dimensional defects.
— V. P. Frolov (Canada), Vacuum polarization effect in a ghost free theory.
— G. L. Klimchitskaya and V. M. Mostepanenko (Russia), Recent measurements of the Casimir force: Comparison between experiment and theory.
— R. Podgornik (China), Thermal Casimir interactions in quadrupolar media.
— N. Khusnutdinov (Brazil), The Casimir and Casimir-Polder effects for layered planar structures.
— M. Karim (USA), Auto-stabilized electron.
— S. O. Gladkov (Russia), About a common field theory.
— Yu. Sitenko (Ukraine), Zero-point oscillations of quantized charged massive matter fields and the Casimir effect.
— C. Villarreal (Mexico), Casimir forces between high-Tc superconductors.
— S. Y. Buhmann (Germany), Casimir forces as probes of exotic optical properties.
— L. M. Woods (USA), Casimir interactions in novel materials.
— M. Antezza (France), Fluctuation-induced forces on an atom near a photonic topological material.
— R. Sedmik, and H. Abele (Austria), Prospects on measuring quantum vacuum forces between parallel plates.
— M. R. R. Good (Kazakhstan), Planck-distributed, finite particle creation of Casimir light.
— A. B. Arbuzov and A. E. Pavlov (Russia), Static Casimir condensates of massive fields in Friedmann Universe.
— A. S. Kuraptsev (Russia), The influence of a metallic surface on the spectrum of eigenstates of dense polyatomic clusters.
— E. G. Drukarev and A. I. Mikhailov (Russia), Quantum electrodynamics in the fields of complex nanostructures.
— X. Ren, R. Luo, J. Wang, X. Jia, P. Luo (China), Test of exotic monopole-dipole interaction at micrometer range.
— D. V. Doroshenko, V. V. Dubov, S. P. Roshchupkin (Russia), Resonant annihilation and production of high-energy electron-positron pairs in an external electromagnetic field.
— N. R. Larin, V. V. Dubov, S. P. Roshchupkin (Russia), Resonant production of electron-positron pairs by a hard gamma-ray on a nucleus in an external electromagnetic field.
— A. V. Dubov, V. V. Dubov, S. P. Roshchupkin (Russia), Resonant emission of hard gamma-quanta at scattering of ultrarelativistic electrons by a nucleus in the presence of the external electromagnetic field.
— A. A. Pustyntsev, V. V. Dubov, S. P. Roshchupkin (Russia), Resonant Breit-Wheeler process in an external electromagnetic field.
— C. Henkel (Germany), Thermally excited quasiparticles in metals, dispersion forces and heat transfer.
— E. N. Velichko, G. L. Klimchitskaya and E. K. Nepomnyashchaya (Russia), The Casimir repulsion through a water-based magnetic fluid.
— D. Fermi (Italy), Scalar Casimir effect for delta-type potentials.
— F. Intravaia (Germany), Frictional dispersion forces.
— E. N. Velichko, M. A. Baranov and V. M. Mostepanenko (Russia), The change of sign in the Casimir free energy of peptide films deposited on a dielectric substrate.
— B. Miao (China), Casimir forces in Ising and Brazovskii fluctuation media within a Gaussian model.
The beginning of the Conference is approaching and it is due time already to make the reservation of a Hotel at St.Petersburg.
Participants may stay at any city Hotel in St.Petersburg or at the business Hotels (hostels) belonging to the Peter the Great St.Petersburg Polytechnic University.
2. The 4-star city Hotel "Oktiabrskaia", Ligovsky prospekt, 10/118 is situated near the metro station "Ploshchad' Vosstaniya" in the downtown of St.Petersburg, close to all places of interest. From this Hotel one can reach the conference site at the Polytechnic University by metro in about 25 minutes with no connections. For the period of June 23-29 the price for a room per night (suitable for one person or for a couple) is Rubles 8600, i.e., approximately US$130 (breakfast is included).
no later than May 20, 2019. In this letter, please indicate your name and the exact date and time of your arrival and departure. You will make payment for your stay at the Hotel registration desk on your arrival in St.Petersburg (the Organizing Committee has a special agreement with the Hotel "Oktiabrskaia").
PLEASE MAKE YOUR RESERVATION AT THE CITY HOTELS AS SOON AS POSSIBLE BECAUSE THE END OF JUNE IS THE VERY TOP SEASON OF WHITE NIGHTS WHEN ALL HOTELS AT ST.PETERSBURG ARE FULLY BOOKED.
1. At Grazhdanskii prospect, 28. From this hostel one can reach the conference site in about a 15-minute walk. The prices for single and double rooms per night are Rubles 1500 and 2000, respectively, i.e., approximately US$23 and 30 (breakfast is not included). The bathroom and kitchen are shared with another room in the same apartment.
2. At Lesnoi prospect 65/1, metro station "Lesnaya". From this hostel one can reach the conference site by metro in about 20 minutes (two stations). The prices for single and double rooms per night are Rubles 1500 and 2000, respectively, i.e., approximately US$23 and 30 (breakfast is not included). The bathroom and kitchen are shared with another room in the same apartment.
3. At Lesnoi prospect 67/2, metro station "Lesnaya". From this hostel one can reach the conference site by metro in about 20 minutes (two stations). The price for a separate small apartment with all facilities, designed for 1 or 2 persons at your choice, is Rubles 2000, i.e., approximately US$30 (breakfast is not included). Larger one- and two-room separate apartments designed for up to 3 and 4 persons are also available for Rubles 3000 and 4000 per night, respectively (i.e., approximately US$46 and 61).
4. At Khlopina street 9/2, metro station "Ploshchad' Muzhestva". From this hostel one can reach the conference site by metro in about 15 minutes (one station) or by a bus and a trolley-bus. The prices for single and double rooms per night are Rubles 1200 and 1600, respectively, i.e., approximately US$18 and 25 (breakfast is not included). The bathroom is shared with another room in the same apartment. A kitchen shared with several other apartments is available.
In this letter, please indicate your name, the desirable hostel from the above list, the type of the room, apartment (including price), and the dates of your arrival and departure. Please be aware that for late reservations some of the options might be already unavailable.
We are looking forward to seeing you in St.Petersburg in June.
Saint Petersburg is well known all over the world as a great city offering outstanding architectural attractions combined with gripping views of the mighty Neva River and its granite embankments.
The cultural attractions of Saint Petersburg include more than two hundred rich museums, starting with the splendid Hermitage and the Russian Museum, numerous theatres led by the famous Mariinsky Theatre, concert halls, and picturesque gardens.
Around Saint Petersburg there are several palace and park ensembles, each of which is of greater scale than, for instance, Versailles.
The Tenth Friedmann Seminar and Fourth Casimir Symposium will take place during the best period of the so-called "white nights", when darkness retreats and even at night one can easily read a newspaper with no artificial illumination. During this period Saint Petersburg is immersed in a mysterious and promising atmosphere so expressively described in classic Russian literature.
All participants of the Friedmann Seminar and Casimir Symposium will have an opportunity to take part in the bus city tour and to have individual excursions to various places of interest of their choice.
The First Alexander Friedmann International Seminar on Gravitation and Cosmology devoted to the centenary of his birth took place in 1988, June 22-26, at Leningrad (now St.Petersburg). The Proceedings of this event were published in A. A. Friedmann Centenary Volume, Eds. M. A. Markov, V. A. Berezin and V. F. Mukhanov (World Scientific, Singapore, 1989). There was a decision to organize the Alexander Friedmann Laboratory for Theoretical Physics at St.Petersburg which was formally realized in 1991.
In September 12-19, 1993 the Central Astronomical Observatory at Pulkovo of the Russian Academy of Sciences in collaboration with Alexander Friedmann Laboratory of Theoretical Physics organized at St. Petersburg the Second Alexander Friedmann International Seminar on Gravitation and Cosmology. The Proceedings of this event were published in 1994 (Eds. Yu. N. Gnedin, A. A. Grib and V. M. Mostepanenko, Central Astronomical Observatory at Pulkovo of the Russian Academy of Sciences and Friedmann Laboratory Publishing, St.Petersburg, Russia).
The Third Alexander Friedmann International Seminar on Gravitation and Cosmology took place on July 4-12, 1995 at St. Petersburg. It was organized by the same organizers. The Proceedings of this Seminar were edited by Yu. N. Gnedin, A. A. Grib and V. M. Mostepanenko (Friedmann Laboratory Publishing, St.Petersburg, 1995).
The Fourth Alexander Friedmann International Seminar on Gravitation and Cosmology was organized at St.Petersburg, June 17-25, 1998 by the Pulkovo Astronomical Observatory of the Russian Academy of Sciences and A. Friedmann Laboratory for Theoretical Physics. The Proceedings of this Seminar (Eds. Yu. N. Gnedin, A. A. Grib, V. M. Mostepanenko and W. A. Rodrigues Jr.) were published by the Instituto de Matematica, Estatistica e Computacao Cientifica, Universidade Estadual de Campinas, Campinas, Brasil, Pulkovo Astronomical Observatory of the Russian Academy of Sciences, and Friedmann Laboratory for Theoretical Physics, St. Petersburg, Russia in 1999.
The Fifth Alexander Friedmann International Seminar on Gravitation and Cosmology was held at Joao Pessoa (Brazil), 24-30 April 2002. It was organized by the Federal University of Paraiba and by the Alexander Friedmann Laboratory for Theoretical Physics. The Proceedings of this event (Eds. V. M. Mostepanenko and C. Romero) were published as a special issue of the International Journal of Modern Physics A, Vol. 17, No 29 (2002).
The Sixth Alexander Friedmann International Seminar on Gravitation and Cosmology was held at Cargese, France, from June 28 to July 3, 2004. The Proceedings of this event (Eds. V. M. Mostepanenko and R. Triay) were published as a special issue of the International Journal of Modern Physics A, Vol. 20, No 11 (2005).
The Seventh Alexander Friedmann International Seminar on Gravitation and Cosmology accompanied by the Satellite Symposium on the Casimir Effect was held at Joao Pessoa (Brazil) from June 29 to July 5, 2008. It was organized by the Federal University of Paraiba. The Proceedings were published as a special issue of the International Journal of Modern Physics A, Vol. 24, No 8&9 (2009) and were edited by V. B. Bezerra, V. M. Mostepanenko and C. Romero.
The Eighth Alexander Friedmann International Seminar on Gravitation and Cosmology accompanied by the Second Satellite Symposium on the Casimir Effect was held at Rio de Janeiro (Brazil) from May 30 to June 3, 2011. It was organized by the Brazilian Center for Physical Research, Institute for Cosmology, Relativity and Astrophysics. The Proceedings were published as the special issues of the International Journal of Modern Physics A, Vol. 26 , No 22 (2011) and International Journal of Modern Physics: Conference Series, Vol. 3 (2011), Editors V. M. Mostepanenko and M. Novello.
The Ninth Alexander Friedmann International Seminar on Gravitation and Cosmology accompanied by the Third Satellite Symposium on the Casimir Effect was held at St.Petersburg (Russia) from June 21 to June 27, 2015. It was organized by the Peter the Great Saint Petersburg Polytechnic University and Central Astronomical Observatory at Pulkovo of the Russian Academy of Sciences. The Proceedings of this event were published as the special issues of the International Journal of Modern Physics A, Vol. 31 , No 2&3 (2016) and International Journal of Modern Physics: Conference Series, Vol. 41 (2016), Editors V. M. Mostepanenko and V. M. Petrov.
In this paper we study semi-Markov modulated M/M/$\infty$ queues, which are to be understood as infinite-server systems in which the Poisson input rate is modulated by a Markovian background process (where the times spent in each of its states are assumed deterministic), and the service times are exponential. Two specific scalings are considered, both in terms of transient and steady-state behavior. In the former the transition times of the background process are divided by $N$, and then $N$ is sent to $\infty$; a Poisson limit is obtained. In the latter both the transition times and the Poissonian input rates are scaled, but the background process is sped up more than the arrival process; here a central-limit type regime applies. The accuracy and convergence rate of the limiting results are demonstrated with numerical experiments.
Blom, J.G, Mandjes, M.R.H, & Thorsdottir, H. (2013). Time-scaling limits for Markov-modulated infinite-server queues. Stochastic Models, 29, 112–127. | CommonCrawl |
The dual space of $L^\infty$ is not $L^1$; is there an example to show this? I was going to use the Riesz representation theorem, but it cannot be applied here because $p = \infty$.
What is the dual of the space L-infinity ($L^\infty$)?
Why is the space of finite Borel-measure dual to the space of finite continuous function. | CommonCrawl |
ElGamal is a public key encryption scheme with security based on the discrete logarithm problem.
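For readers who just want to see the moving parts, here is a toy sketch of textbook ElGamal. This is purely illustrative: the tiny prime and the choice of generator are assumptions made for the demo and are completely insecure; real deployments use large, carefully chosen groups.

```python
import random

p = 467                      # small prime, for illustration only -- NOT secure
g = 2                        # assumed generator of a sufficiently large subgroup mod p

# key generation (the recipient's long-term key pair)
x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def encrypt(m, h):
    k = random.randrange(2, p - 1)            # fresh ephemeral exponent for every message
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                         # shared secret g^(k*x)
    return (c2 * pow(s, p - 2, p)) % p        # divide by s via Fermat inversion mod prime

m = 123
c1, c2 = encrypt(m, h)
assert decrypt(c1, c2, x) == m
```

The fresh ephemeral exponent $k$ chosen for every message is exactly why, in the questions below, the sender randomizes each encryption while the recipient's key pair stays fixed.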
How to motivate that in ElGamal, Alice the recipient keeps her keys fixed, while Bob the sender changes the key each time - why not have both of them change their keys each time?
Can I perform a division of two integers homomorphically using ElGamal?
El-Gamal like encryption, how can i guess the key?
When choosing the public key for ElGamal, $\alpha$ must be chosen as a primitive root mod p. What if $\alpha$ is not a primitive root? How will it influence the encryption and decryption?
Does this scheme break under a DHP adversary?
Is this problem still as hard as discrete logarithm (modified ElGamal)?
For ElGamal-based key encapsulation, is it necessary to hash before using as AES key?
What is an example of elgamal homomorphic double encryption?
Is EC El Gamal the only option?
How to encode messages in $\Bbb Z_p^*$ to be encrypted with ElGamal scheme?
Is it possible to re-cipher with Paillier?
How to recover secret $x$ from Elgamal signatures with repeated $k$?
Subgroup generation: should we check that order is not 2? or g in not too small?
Can I use modulo $n^2$ arithmetic where $n=p \cdot q$ for ElGamal encryption?
Are there any natural ways to transform ElGamal encryption system or ElGamal signature scheme into an authentication protocol? | CommonCrawl |
Abstract: We study the positive recurrence of multi-dimensional birth-and-death processes describing the evolution of a large class of stochastic systems, a typical example being the randomly varying number of flow-level transfers in a telecommunication wire-line or wireless network.
We first provide a generic method to construct a Lyapunov function when the drift can be extended to a smooth function on $\mathbb R^N$, using an associated deterministic dynamical system. This approach gives an elementary proof of ergodicity without needing to establish the convergence of the scaled version of the process towards a fluid limit and then proving that the stability of the fluid limit implies the stability of the process. We also provide a counterpart result proving instability conditions.
We then show how discontinuous drifts change the nature of the stability conditions and we provide generic sufficient stability conditions having a simple geometric interpretation. These conditions turn out to be necessary (outside a negligible set of the parameter space) for piece-wise constant drifts in dimension 2. | CommonCrawl |
Over the past few years, Batch-Normalization has been commonly used in deep networks, allowing faster training and high performance for a wide variety of applications. However, the reasons behind its merits remained unanswered, with several shortcomings that hindered its use for certain tasks. In this work, we present a novel view on the purpose and function of normalization methods and weight-decay, as tools to decouple weights' norm from the underlying optimized objective. This property highlights the connection between practices such as normalization, weight decay and learning-rate adjustments. We suggest several alternatives to the widely used $L^2$ batch-norm, using normalization in $L^1$ and $L^\infty$ spaces that can substantially improve numerical stability in low-precision implementations as well as provide computational and memory benefits. We demonstrate that such methods enable the first batch-norm alternative to work for half-precision implementations. Finally, we suggest a modification to weight-normalization, which improves its performance on large-scale tasks. | CommonCrawl |
This short presentation on generalization by one of the co-authors, Sasha Rakhlin, is also worth looking at - though I have to confess many of the references to learning theory are lost on me.
While I can't claim to have understood all the bounding and proofs going on in Section 4, I think I got the big picture so I will try and summarize the main points in the section below. In addition, I wanted to add some figures I did which helped me understand the restricted model class the authors worked with and to understand the "gradient structure" this restriction gives rise to. Feel free to point out if anything I say here is wrong or incomplete.
The main mantra of this paper is along the lines of results by Bartlett (1998) who observed that in neural networks, generalization is about the size of the weights, not the number of weights. This theory underlies the use of techniques such as weight decay and even early stopping, since both can be seen as ways to keep the neural network's weight vector small. Reasoning about a neural network's generalization ability in terms of the size, or norm, of its weight vector is called norm-based capacity control.
There are various versions of the Fisher information matrix, and therefore of the Fisher-Rao norm, depending on which distribution the expectation is taken under. The empirical form samples both $x$ and $y$ from the empirical data distribution. The model form samples $x$ from the data, but assumes that the loss is a log-loss of a probabilistic model, and we sample $y$ from this model.
Importantly, the Fisher-Rao norm is something which depends on the data distribution (at least the distribution of $x$). It is also invariant under reparametrization, which means that if there are two parameters $\theta_1$ and $\theta_2$ which implement the same function, their FR-norm is the same. Finally, it is a measure related to flatness inasmuch as the Fisher-information matrix approximates the Hessian at a minimum of the loss under certain conditions.
for linear networks without rectification (which in fact is just a linear function) the Rademacher complexity can be bounded by the FR-norm and the bound does not depend on the number of layers or number of units per layer. This suggests that the Fisher-Rao norm may be a good, modelsize-independent proxy measure of generalization.
the authors do not prove bounds for more general classes of models, but provide intuitive arguments based on its relationship to other norms.
the authors also did a bunch of experiments to show how the FR-norm correlates with generalization performance. They looked at both vanilla SGD and the second-order stochastic method K-FAC. They looked at what happens if we mix random labels into the training data, and found that the FR-norm of the final solution seems to track the generalization gap.
There are still open questions, for example explaining what, specifically, makes SGD converge to better minima, and how this changes with increasing batchsize.
The one thing I wanted to add to this paper, is a little bit more detail on the particular model class - rectified linear networks without bias - that the authors studied here. This restriction turns out to guarantee some very interesting properties, without hurting the empirical performance of the networks (so the authors claim and to some degree demonstrate).
The left-hand panel shows the function itself. The panels next to it show the gradients with respect to $x_1$ and $x_2$ respectively. The function is piecewise linear (which is hard to see because there are many, many linear pieces), which means that the gradients are piecewise constant (which is more visually apparent).
These functions are clearly very flexible and by adding more layers, the number of linear pieces grows exponentially.
Importantly, the above plot would look very similar had I plotted the function's output as a function of two components of $\theta$, keeping $x$ fixed. This is significantly more difficult to plot though, so I'm hoping you'll just believe me.
It's less clear from these plots why a function like this can model data just as well as the more general piece-wise linear one we get if we enable biases. One thing that helps is dimensionality: in high dimensions, the probability that two randomly sampled datapoints fall into the same "pyramid", i.e. share the same linear region, is extremely small. Unless your data has some structure that makes this likely to happen for many datapoints at once, you don't really have to worry about it, I guess.
Furthermore, if my network had three input dimensions, but I only use two dimensions $x_1$ and $x_2$ to encode data and fix the third coordinate $x_3=1$, I can implement the same kind of functions over my inputs. This is called using homogeneous coordinates, and a bias-less network with homogeneous coordinates can be nearly as powerful as one with biases in terms of the functions it can model. Below is an example of a function a rectified network with no biases can implement when using homogeneous coordinates.
This is because the third variable $x_3=1$ multiplied by its weights practically becomes a bias for the first hidden layer.
The second observation is that we can consider $f_\theta(x)$ as a function of the weight matrix of a particular layer, keeping all other weights and the input the same: the function then behaves exactly the same way as it behaves with respect to the input $x$. The same radial pattern would be observed in $f$ if I plotted it as a function of a weight matrix (though weight matrices are rarely 2-D so I can't really plot that).
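The displayed equation seems to have been lost here; for a bias-free rectified network, the identity being referred to is presumably the Euler-homogeneity relation
$$\langle \theta, \nabla_\theta f_\theta(x)\rangle \;=\; \sum_{l=1}^{L+1}\big\langle W_l,\ \nabla_{W_l} f_\theta(x)\big\rangle \;=\; (L+1)\, f_\theta(x),$$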
where $L$ is the number of layers.
We got the $L+1$ from the $L$ hidden layers plus the output layer.
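Plugging this into the definition of the Fisher-Rao norm, and using the chain rule $\nabla_\theta \ell = \frac{\partial \ell}{\partial f}\,\nabla_\theta f_\theta(x)$ for a scalar-output network, gives (up to the exact conventions used in the paper) something like
$$\|\theta\|^2_{\mathrm{fr}} \;=\; \theta^\top F(\theta)\,\theta \;=\; \mathbb{E}\big[\langle\theta, \nabla_\theta \ell\rangle^2\big] \;=\; (L+1)^2\,\mathbb{E}\left[\left(\frac{\partial \ell}{\partial f}\, f_\theta(x)\right)^{2}\right].$$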
It can be seen very clearly in this form that the Fisher-Rao norm only depends on the output of the function $f_\theta(x)$ and properties of the loss function. This means that if two parameters $\theta_1$ and $\theta_2$ implement the same input-output function $f$, their F-R norm will be the same.
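Here is a quick numpy sanity check of the homogeneity identity behind this (my own sketch; the layer sizes and random data are arbitrary, and with $L=2$ hidden layers plus a scalar output we expect a factor of $L+1=3$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 5, 16
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(h, h))
w3 = rng.normal(size=h)
x = rng.normal(size=d)

# forward pass of a bias-free ReLU network with a scalar output
z1 = W1 @ x;  h1 = np.maximum(z1, 0.0)
z2 = W2 @ h1; h2 = np.maximum(z2, 0.0)
f = w3 @ h2

# gradients of the scalar output with respect to each weight matrix
g3 = h2
dz2 = w3 * (z2 > 0)
g2 = np.outer(dz2, h1)
dz1 = (W2.T @ dz2) * (z1 > 0)
g1 = np.outer(dz1, x)

lhs = np.sum(W1 * g1) + np.sum(W2 * g2) + np.sum(w3 * g3)
print(lhs, 3 * f)   # the two numbers agree up to floating-point error
```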
I think this paper presented a very interesting insight into the geometry of rectified linear neural networks, and highlighted some interesting connections between information geometry and norm-based generalization arguments.
What I think is still missing is the kind of insight which would explain why SGD finds solutions with low F-R norm, or how the F-R norm of a solution is affected by the batchsize of SGD, if at all it is. The other thing missing is whether the F-R norm can be an effective regularizer. It seems that for this particular class of networks which don't have any bias parameters, the model F-R norm could be calculated relatively cheaply and added as a regularizer, since we already calculate the forward pass of the network anyway.
With more and more sensors readily available, data collection becomes ever more ubiquitous and enables machine-to-machine communication (a.k.a. the Internet of Things), so time series signals play a more and more important role both in the data collection process and, naturally, in the data analysis. Data aggregation from different sources and from many people makes time-series analysis crucially important in these settings. Detecting trends and patterns in time series signals enables people to respond to these changes and take action intelligently. Historically, trend estimation has been useful in macroeconomics, financial time series analysis, revenue management and many more fields to reveal the underlying trends in time series signals.
Trend estimation is a family of methods for detecting and predicting tendencies and regularities in time series signals without any a priori information about the signal. Trend estimation is not only useful for trends but can also yield the seasonality (cycles) of the data. Robust estimation of increasing and decreasing trends not only extracts useful information from the signal but also prepares us to take action accordingly, and more intelligently, where the time to respond and act is important.
When you have a quite volatile signal and want to see a mid-to-long-term range ignoring the short-term or seasonal effects in the time series signal, you could use a band-pass filter in order to get a medium-range signal change over time. There are also trend filters that economists have been using in order to separate seasonality from the mid-to-long-term range. Consider a product sales time series signal: it will definitely show seasonality (in the holiday season, the product is sold much more than in any other period). When you visualize this over several years, you see an actual cyclical effect in the time series signal.
There are various ways to do trend estimation. One way is to decompose the signal into two components: a cycle part (which is short-term) and a trend part (which is medium-to-long term), which is what the Hodrick-Prescott filter tries to do.
The Hodrick-Prescott filter is a band-pass filter that tries to decompose the time-series signal into a trend $x_t$ (mid-term growth) and a cyclical component $c_t$ (the recurring and seasonal part of the signal).
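The objective being minimized (the display seems to have dropped out of the original; writing $y_t$ for the observed series) is the standard HP formulation:

$$\min_{x_t}\;\sum_{t=1}^{T} (y_t - x_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \big[(x_{t+1}-x_t)-(x_t-x_{t-1})\big]^2, \qquad c_t = y_t - x_t.$$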
The first term is the squared difference between the original signal and the trend (this difference is the cyclical component), and $\lambda$ is the smoothing parameter.
Based on the smoothing parameter, you can change what type of effects you want to capture: if you want to capture some variation and volatility in the short-term signal, you may want to use a smaller smoothing parameter so that the trend is less smooth; if you want to capture only the long-term range, the smoothing parameter can be chosen arbitrarily large. However, in order to still see some changes, we should not choose a very large smoothing parameter.
In the following section, I will look at the revenue numbers and the stock price of Apple to see if we can use trend estimation to detect an increase or a decrease in the time-series signals over time.
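A minimal sketch of how such a decomposition might be produced with statsmodels (the file name, column names and the quarterly-data smoothing value are illustrative assumptions, not taken from the original post):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical quarterly revenue series indexed by date.
revenue = pd.read_csv("apple_revenue.csv", index_col="date", parse_dates=True)["revenue"]

# hpfilter returns (cycle, trend); lamb is the smoothing parameter lambda
# (1600 is the conventional choice for quarterly data).
cycle, trend = sm.tsa.filters.hpfilter(revenue, lamb=1600)

decomposition = pd.DataFrame({"observed": revenue, "trend": trend, "cycle": cycle})
print(decomposition.head())
```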
In this plot, we can easily see the trend and the cycle obtained from the trend estimation. The fit is particularly good and is not affected by short-term volatility. Another application of trend estimation is capturing the seasonality or periodicity of the signal, if you are more interested in the seasonality than in the trend.
If the cyclical behavior is not that obvious, the filter does not do as good a job as on a series with clear cyclical behavior. We can see here that it removes some volatility from the signal, but it is not as effective at extracting the medium-range signal as in the graph above.
This fits well and also captures some of the subtle short-term changes. As you increase the smoothing parameter, you get a much smoother trend signal, while the cyclical component picks up more and more of the high-frequency components.
The trend signal is much smoother, and we are capturing more and more short-term volatility in the cycling component. Let's increase the smoothing parameter one more time in order to make the signal a little bit smoother.
This signal only captures the medium-range changes, and only when they actually persist over time. Even significant jumps (e.g., around 2012) do not affect the trend too much.
In general, if the cyclical behavior is weak and there is a lot of short-term volatility in the signal, you should choose a larger smoothing parameter to further smooth the signal and then extract the trend from it.
A prime number is a number greater than 1 that is not divisible by any number except 1 and itself. Prime numbers are positive whole numbers bigger than 1.
Every number can be written as a product of prime numbers. These numbers are called the prime factors of that number. Every number has a unique set of prime factors.
$6=2\times 3$, where 2 and 3 are referred to as the prime factors of 6.
The method of finding the prime factors of a number is known as prime factorization. In other words, prime factorization is the technique of determining the prime numbers which multiply together to give the required number.
Step 1 : Find a prime number by which the given number is divisible.
Step 2: Divide the given number by that prime number and write down the result.
Step 3: Now determine a prime number by which the result obtained (in step 2) is divisible. Divide it by that number.
Step 4: Repeat this process until you reach 1.
Step 5: All the prime numbers obtained in this process will be the prime factors of the given number. Let us understand the prime factorization method with an example.
Example: Evaluate prime factors of 384 using prime factorization method. | CommonCrawl |
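A small sketch of the procedure above in Python, which also works out the example (the function name is my own):

```python
def prime_factors(n):
    """Return the prime factors of n (with multiplicity) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor as long as it divides n
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(384))   # [2, 2, 2, 2, 2, 2, 2, 3], i.e. 384 = 2^7 x 3
```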
Krishnamurthy, Hanumanthappa and Kumar, Prasanna KM and Joshi, Chirag V and Krishnamurthy, Hegganahalli N and Moudgal, Raghuveer N and Sairam, Ram M (2000) Alterations in Sperm Characteristics of Follicle-Stimulating Hormone (FSH)-Immunized Men Are Similar to Those of FSH-Deprived Infertile Male Bonnet Monkeys. In: Journal of Andrology, 21 (2). pp. 316-327.
The quality of sperm ejaculated by bonnet monkeys and normal, healthy proven fertile volunteer men, both actively immunized with ovine follicle-stimulating hormone (oFSH), was examined at different times of study for chromatin packaging and acrosomal glycoprotein concentration by flow cytometry. Susceptibility of sperm nuclear DNA to dithiothreitol (DTT)-induced decondensation, as measured by ethidium bromide binding, was markedly high compared with values at day 0 in men and monkeys during periods when FSH antibody titer was high. Sperm chromatin structure assay yields $\alpha t$ values, which is another index of chromatin packaging. Higher $\alpha t$ values, signifying poor packaging, occurred in both species following immunization with heterologous pituitary FSH. The binding of fluorosceinated pisum sativum agglutinin (PSA-FITC) to acrosome of sperm of monkeys and men was significantly low, compared with values at day 0 (control) during periods when cross-reactive FSH antibody titer was high and endogenous FSH was not detectable. Blockade of FSH function in monkeys by active immunization with a recombinant oFSH receptor protein corresponding to a naturally occurring messenger RNA (mRNA) also resulted in production of sperm with similar defects in chromatin packaging and reduced acrosomal glycoprotein concentration. Thus, it appears that in monkeys and men, lack of FSH signaling results in production of sperm that exhibit defective chromatin packaging and reduction in acrosomal glycoprotein content. These characteristics are similar to that exhibited by sperm of some class of infertile men. Interestingly, these alterations in sperm quality occur well ahead of decreased sperm counts in the ejaculate.
Copyright of this article belongs to The American Society of Andrology. | CommonCrawl |
The betatron is a particle accelerator. It is cyclic in nature and is used to accelerate electrons. The betatron is named after electrons, which are also called beta particles. It was developed by Donald Kerst in 1940 at the University of Illinois.
Due to the variation in mass, the time period and frequency also start changing their values, and hence it is not able to accelerate the particle.
The concept originated with Rolf Wideröe, who had failed to build an induction accelerator because of the lack of transverse focusing. The betatron was the first machine to produce high-energy electrons.
The betatron is essentially a transformer. It has a torus-shaped vacuum tube which acts as the secondary coil of the transformer. An alternating current is applied to the primary coil of the transformer. This accelerates the electrons in the vacuum tube around a circular path. The betatron works with a constant electric field and a variable magnetic field.
Provides a high-energy beam of electrons of about 300 MeV.
It can be used as a source of X-rays and gamma rays if the electron beam is directed onto a metal plate.
The X-rays produced with the help of a betatron can be used in industrial and medical fields.
High energy electrons can be used in particle physics.
The maximum energy that it can impart to electrons is limited by the strength of the magnetic field (because of the saturation of iron) and, in practice, by the size of the magnetic core.
Synchrotrons overcome these limitations of Betatron.
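The displayed equation that the next lines refer to appears to be missing; the standard equation of betatron oscillations (a Hill-type equation) is presumably of the form

$$\frac{d^{2}x}{ds^{2}} + K\,x = 0,$$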
where $K$ is the restoring force.
The number of oscillations per turn is $Q_x$ or $Q_y$ (this is known as the betatron tune). To restore the oscillation back towards the equilibrium orbit, a restoring force is required. This restoring force is provided by focusing components in the magnetic field which bend the particle back towards the equilibrium orbit. If the restoring force $K$ is constant in $s$, then this is just simple harmonic motion; $s$ is the longitudinal displacement around the ring or torus tube.
In synchrotron accelerators of modern (strong focusing) design there are several cycles of betatron oscillation per revolution of beam particles.
The betatron is a type of transformer in which a ring of electrons acts as the secondary coil. This secondary coil, or ring, is a torus-shaped vacuum tube. An alternating current is applied to the primary coil of the transformer, which in turn accelerates the electrons present in the vacuum tube. This makes the electrons move in a circle inside the tube. They move in the same way as a current is induced in the secondary coil by the primary coil, as per Faraday's law.
The magnetic field which is used to make the electrons follow the circular path is also responsible for accelerating them.
The magnets should be properly designed such that the average field strength at the orbit radius is equal to half of the average field strength linking the orbit.
$B$ is the average field strength at the orbit radius.
$\bar{B}$ is the average field strength linking the orbit.
$B$ is the magnetic field at radius $r$.
$\phi$ is the flux inside the area enclosed by the orbit of the electron.
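Written out (the displayed condition itself seems to have dropped out of the original), this is presumably

$$B \;=\; \tfrac{1}{2}\,\bar{B}, \qquad\text{equivalently}\qquad \phi \;=\; 2\pi r^{2} B,$$

and since it must hold at every instant, the same relation holds for the rates of change of $B$ and $\bar{B}$.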
The above condition is known as the Wideröe condition.
If the magnetic field increases, the changing flux links the loop of electrons. This induces an electric field which further accelerates the electrons. As the electrons gain more speed, they require a larger magnetic field to move at a constant radius. This magnetic field is provided by the increasing field. The effect is proportional; therefore the field is always strong enough to keep the electrons at the same orbit radius.
The magnetic field is changed by passing an alternating current through the primary coils. Particle acceleration therefore occurs in the first quarter of the voltage sine wave's cycle. The last quarter of the cycle also has a changing field that could accelerate the particles (electrons), but it is in the wrong direction for them to move in the correct circle. The target is bombarded with pulses of particles at the frequency of the AC supply.
Betatron: provides a high-energy beam of electrons of about 300 MeV.
Large Electron-Positron collider: about $8\times10^{4}$ MeV.
What happens if a certain dataset contains different "groups" that follow different linear models?
For example, let's imagine that examining the scatterplot of a certain feature $x_i$ against $y$ we can see that some points follow a linear relationship with a coefficient $\beta_A<0$ while other points clearly have $\beta_B>0$. We can infer that these points belong to two different populations, population $A$ responds negatively to high values of feature $x_i$ while population $B$ responds positively. We then create a categorical feature (or one hot encoding) to show which population each row belongs to.
Is splitting the dataset required or are commonly used algorithms able to recognize the different relations between features from different categorical variables?
You can't really do that: there may be some factor which binds certain "groups" of data together, but there are many possible reasons for this. Your relationship may be nonlinear, or the "groups" of data may represent subjects/objects within which a stronger correlation exists. Unless you know for a fact that these points belong to different populations, you shouldn't do that; use the data that you have to model these groupings.
For the case of unobservable groups, you could use mixture models, in your case a mixture of linear regression models. Mixture models identify latent (=unobserved) clusters in the data so that each cluster has the same parameters in the consequent part of the model. The text book example are mixed Gaussians, where each individual observation comes from a Normal distribution, but the mean is different for each group. In your case, a mixture model would infer clusters of individuals that share regression coefficients and estimate the coefficients for each cluster in one step.
Finite mixture models require the number of latent groups to be specified (e.g. domain knowledge or cross-validation). Infinite mixture models find a good number of groups from the data.
These models typically do not give you clear rules as to why an individual belongs to a cluster and consequently cannot be used for unknown individuals, but could possibly be extended by a prior that explicitly models cluster probabilities based on observed data.
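A rough numpy sketch of what a two-component mixture of linear regressions fitted by EM could look like (everything here, including variable names, the Gaussian noise model and the number of components, is an illustrative assumption rather than a prescribed implementation):

```python
import numpy as np

def fit_mixture_of_regressions(X, y, n_components=2, n_iter=100, rng=None):
    """EM for a mixture of linear regressions with Gaussian noise."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    Xb = np.column_stack([np.ones(n), X])            # add intercept column
    betas = rng.normal(size=(n_components, d + 1))   # per-component coefficients
    sigma2 = np.full(n_components, y.var() + 1e-6)   # per-component noise variance
    weights = np.full(n_components, 1.0 / n_components)

    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resid = y[:, None] - Xb @ betas.T                          # shape (n, K)
        log_lik = -0.5 * (np.log(2 * np.pi * sigma2) + resid**2 / sigma2)
        log_r = np.log(weights) + log_lik
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: weighted least squares per component
        for k in range(n_components):
            W = r[:, k]
            XtWX = Xb.T @ (W[:, None] * Xb)
            XtWy = Xb.T @ (W * y)
            betas[k] = np.linalg.solve(XtWX + 1e-8 * np.eye(d + 1), XtWy)
            sigma2[k] = (W * (y - Xb @ betas[k])**2).sum() / W.sum()
        weights = r.mean(axis=0)

    return betas, sigma2, weights, r
```

The returned responsibilities `r` play the role of the (soft) latent cluster assignments described above.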
Recall from the Compact Sets in a Metric Space page that if $(M, d)$ is a metric space then a subset $S \subseteq M$ is said to be compact in $M$ if for every open covering of $S$ there exists a finite subcovering of $S$.
We will now look at a rather important theorem which will tell us that if $S$ is a compact subset of $M$ then we can further deduce that $S$ is also a bounded subset.
Theorem 1: If $(M, d)$ be a metric space and $S \subseteq M$ is a compact subset of $M$ then $S$ is bounded.
It should not be hard to see that $\mathcal F$ is an open covering of $S$, since for all $s \in S$ we have that $d(x_0, s) = r_s > 0$, so $s \in B(x_0, r_s) \in \mathcal F$. | CommonCrawl |
Abstract: The aim of the MIMAC project is to detect non-baryonic Dark Matter with a directional TPC. The recent Micromegas efforts towards building a large size detector will be described, in particular the characterization measurements of a prototype detector of 10 $\times$ 10 cm$^2$ with a 2 dimensional readout plane. Track reconstruction with alpha particles will be shown. | CommonCrawl |
Abstract We measured the density of vibrational states (DOS) and the specific heat of various glassy and crystalline polymorphs of SiO2. The typical (ambient) glass shows a well-known excess of specific heat relative to the typical crystal ($\alpha$-quartz). This, however, holds when comparing a lower-density glass to a higher-density crystal. For glassy and crystalline polymorphs with matched densities, the DOS of the glass appears as the smoothed counterpart of the DOS of the corresponding crystal; it reveals the same number of the excess states relative to the Debye model, the same number of all states in the low-energy region, and it provides the same specific heat. This shows that glasses have higher specific heat than crystals not due to disorder, but because the typical glass has lower density than the typical crystal. | CommonCrawl |
We already know how to find the distribution of the sum of any two discrete random variables.
By induction, we can extend this to the sum of any finite number of independent variables.
So in principle, we know how to find the distribution of the sum of $n$ independent random variables for $n > 1$. However, this method can be hard to put into practice for large $n$.
In this section we examine another way of approaching the problem of finding the distribution of a sum. It is easier to automate, as you will see, though it too comes up against computational barriers eventually.
Let $X$ be a random variable with possible values $0, 1, 2, \ldots, N$ for some fixed integer $N$. For brevity, let $p_k = P(X = k)$ for $k$ in the range 0 through $N$.
For the extension to random variables with infinitely many non-negative integer values, see the Technical Note at the end of the section.
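The definition itself (presumably the display that went missing here) is the standard one:

$$G_X(s) \;=\; E[s^X] \;=\; \sum_{k=0}^{N} p_k s^k.$$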
You can see that $G_X$ is a polynomial of degree $N$, and that the coefficient of $s^k$ is $p_k = P(X=k)$.
So if you were given the pgf of a random variable, you could read off the distribution of the random variable by simply listing all the powers and the corresponding coefficients: the powers are the possible values, and the coefficients are the corresponding probabilities.
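The omitted derivation that the next sentence refers to is the usual one: for independent $X$ and $Y$,

$$G_{X+Y}(s) \;=\; E[s^{X+Y}] \;=\; E[s^X s^Y] \;=\; E[s^X]\,E[s^Y] \;=\; G_X(s)\,G_Y(s).$$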
We have used the fact that for independent random variables, the expectation of the product is the product of the expectations.
The result says that the pgf of the sum of two independent random variables is the product of the two pgfs. This extends easily to more than two random variables and yields a simple formula for the pgf of the sum of i.i.d. variables.
We now have an algorithm for finding the distribution of $S_n$.
Start with the pgf of $X_1$.
Raise it to the power $n$. That's the pgf of $S_n$.
Read the distribution of $S_n$ off the pgf.
Wonderful! We're done! Except that actually doing this involves raising a polynomial to a power. That is a daunting task if the power is large.
Fortunately, as you will see in the next section, NumPy comes to our rescue with a set of polynomial methods.
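As a preview of that approach, here is a rough sketch using repeated convolution, which is the same thing as raising the pgf to the $n$-th power (the example distribution is just an illustration):

```python
import numpy as np

def dist_of_sum(p, n):
    """Distribution of the sum of n i.i.d. copies of X, where p[k] = P(X = k)."""
    s = np.array([1.0])            # pgf of the "empty sum": point mass at 0
    for _ in range(n):
        s = np.convolve(s, p)      # polynomial multiplication = adding one more copy of X
    return s                       # s[k] = P(S_n = k)

die = np.full(6, 1/6)              # a "die" taking values 0..5 here, for simplicity
print(dist_of_sum(die, 3).round(4))
```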
Technical Note. We have defined the probability generating function for random variables that have finitely many non-negative integer values. The definition can be extended to random variables that have infinitely many non-negative integer values. But in that case the pgf is an infinite series and we have to be careful about convergence. Typically the pgf is defined only on the domain $|s| \le 1$ so that it converges. | CommonCrawl |
There is an array that consists of $n$ integers. Some values of the array will be updated, and after each update, your task is to report the maximum subarray sum in the array.
The first input line contains integers $n$ and $m$: the size of the array and the number of updates. The array is indexed $1,2,\ldots,n$.
The next line has $n$ integers: $x_1,x_2,\ldots,x_n$: the initial contents of the array.
Then there are $m$ lines that describe the changes. Each line has two integers $k$ and $x$: the value at position $k$ becomes $x$.
After each update, print the maximum subarray sum. Empty subarrays (with sum $0$) are allowed. | CommonCrawl |
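One standard way to support these updates is a segment tree in which each node stores the total sum, best prefix sum, best suffix sum, and best subarray sum of its segment. A rough Python sketch follows (the I/O handling is minimal and not tuned for the tight time limits such problems usually have):

```python
import sys

def solve():
    data = sys.stdin.read().split()
    n, m = int(data[0]), int(data[1])
    xs = list(map(int, data[2:2 + n]))

    size = 1
    while size < n:
        size *= 2
    # each node: (total, best_prefix, best_suffix, best_subarray), empty subarrays allowed
    ZERO = (0, 0, 0, 0)
    tree = [ZERO] * (2 * size)

    def make(v):
        return (v, max(v, 0), max(v, 0), max(v, 0))

    def merge(a, b):
        total = a[0] + b[0]
        pref = max(a[1], a[0] + b[1])
        suf = max(b[2], b[0] + a[2])
        best = max(a[3], b[3], a[2] + b[1])
        return (total, pref, suf, best)

    for i, v in enumerate(xs):
        tree[size + i] = make(v)
    for i in range(size - 1, 0, -1):
        tree[i] = merge(tree[2 * i], tree[2 * i + 1])

    out = []
    idx = 2 + n
    for _ in range(m):
        k, x = int(data[idx]), int(data[idx + 1]); idx += 2
        i = size + (k - 1)             # positions are 1-indexed in the statement
        tree[i] = make(x)
        i //= 2
        while i >= 1:
            tree[i] = merge(tree[2 * i], tree[2 * i + 1])
            i //= 2
        out.append(tree[1][3])         # best subarray sum of the whole array
    print("\n".join(map(str, out)))

solve()
```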
The minimum cost flow problem asks for the cheapest way to deliver the maximum amount of flow possible in the network. It can be seen as an extension of the maximum flow problem with an added constraint on the cost (per unit flow) of each edge. Another difference between min-cost flow and ordinary max flow is that here the source and sink have strict bounds on the flow they can produce or absorb, respectively: $$B(s) > 0, B(t) < 0$$. Intermediate nodes have no bounds, i.e. $$B(x) = 0$$.
This algorithm is used to find min cost flow along the flow network. Pseudo code for this algorithm is provided below.
A negative cycle in the cost network $$(G_c)$$ is a cycle in which the sum of the costs of all edges is negative. Such cycles can be detected using the Bellman-Ford algorithm. They should be eliminated because, practically, flow through such cycles cannot be allowed. Consider a negative cost cycle: if flow passes through it, the total cost keeps decreasing with every pass around the cycle, so the attempt to minimize the total cost would result in an infinite loop. Hence, whenever the cost network contains a negative cycle, the cost can be reduced further (by routing flow through the other side of the cycle instead of the side currently used). Once detected, a negative cycle is removed by pushing the bottleneck capacity through all the edges in the cycle.
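A rough sketch of one negative-cycle-cancelling pass using Bellman-Ford on the residual graph (the residual-edge representation is my own choice, and a real implementation would also need a max-flow phase first):

```python
def cancel_one_negative_cycle(n, graph):
    """graph[u] = list of residual edges [v, capacity, cost, rev], where rev is the
    index of the reverse edge inside graph[v]. Returns True if a negative-cost
    cycle was found and saturated, otherwise False."""
    dist = [0] * n                       # zero everywhere, so any negative cycle is detected
    parent = [None] * n                  # parent[v] = (u, edge u->v) used in the last relaxation
    x = None
    for _ in range(n):
        x = None
        for u in range(n):
            for e in graph[u]:
                v, cap, cost, _rev = e
                if cap > 0 and dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
                    parent[v] = (u, e)
                    x = v
        if x is None:                    # a full pass with no relaxation: no negative cycle
            return False

    for _ in range(n):                   # walk n parents back to make sure we land on the cycle
        x = parent[x][0]

    cycle = []                           # collect the edges of the cycle
    v = x
    while True:
        u, e = parent[v]
        cycle.append(e)
        v = u
        if v == x:
            break

    bottleneck = min(e[1] for e in cycle)
    for e in cycle:                      # cancel the cycle by pushing the bottleneck around it
        e[1] -= bottleneck
        graph[e[0]][e[3]][1] += bottleneck
    return True
```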
There are various applications of the minimum cost flow problem. One of them is solving minimum weighted bipartite matching. A bipartite graph $$B$$ is a graph whose nodes can be divided into two disjoint sets $$P$$ and $$Q$$ such that every edge of the graph joins a node in $$P$$ to a node in $$Q$$. Matching means that no two edges in the final flow touch each other (share a common node). It can be considered a multi-source, multi-destination graph. Convert such a graph to a single-source, single-destination one by creating a source node $$S$$ joined to all nodes in set $$P$$, and a destination node $$T$$ joined to all nodes in set $$Q$$. Now the above algorithm can be applied to find the min cost max flow in graph $$B$$.
A variant of the weighted bipartite matching problem is known as the assignment problem. In simple terms, the assignment problem can be described as having $$N$$ jobs and $$N$$ workers, where each worker does a job for a particular cost. Each worker should be given exactly one job and each job should be assigned to exactly one worker. This can be solved using the Hungarian algorithm. Pseudocode for this problem is given below.
Input will be an $$N \times N$$ matrix showing cost charged by each worker for each job.
FindMinCost makes an optimal selection of $$0$$s in matrix $$X$$ such that $$N$$ cells are selected and none of them lie in the same row or column. The values of the cells in $$C$$ corresponding to the selected cells in $$X$$ are added up and returned as the answer for the minimum cost to be calculated.
Journal: Journal of Math. Analysis and Appl.
We establish a Liouville-type theorem for a subcritical nonlinear problem, involving a fractional power of the sub-Laplacian in the Heisenberg group. To prove our result we will use the local realization of fractional CR covariant operators, which can be constructed as the Dirichlet-to-Neumann operator of a degenerate elliptic equation in the spirit of Caffarelli and Silvestre. The main tools in our proof are the CR inversion and the moving plane method, applied to the solution of the lifted problem in the half-space $\mathbb H^n\times \mathbb R^+$. | CommonCrawl |
Can I get a random element of a user-defined class of objects? For example, suppose I define xyz(n) to generate all the $n \times n$ matrices with a particular property. How can I get a random matrix in my class xyz(5)?
You have to implement a random_element method by yourself, since Sage will not discover which measure has to be sampled. If the property happens frequently among all matrices, you can use the rejection method: pick a random matrix (in the larger space) until you find one with the property and return that matrix.
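A minimal sketch of that rejection approach in a Sage session (the function name, the base ring and the example property are placeholders, since the actual class xyz was not specified):

```python
# Sage session
def xyz_random_element(n, has_property, tries=10000):
    """Sample random n x n matrices until one satisfies `has_property`."""
    for _ in range(tries):
        M = random_matrix(ZZ, n, n)   # or another base ring, depending on the class
        if has_property(M):
            return M
    raise RuntimeError("property too rare for naive rejection sampling")

# example: a random 5 x 5 integer matrix that happens to be invertible over QQ
M = xyz_random_element(5, lambda A: A.det() != 0)
```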
It really depends on the property. Could you provide some examples of some xyz(n) that you would like to sample ? | CommonCrawl |
Let $M_1$ and $M_2$ be two symmetric $d\times d$ matrices. What is the relationship between $tr(M_1M_2M_1M_2)$ and $tr(M_1^2 M_2^2 )$?
P.S. I tried a few examples and found that $$ tr(M_1M_2M_1M_2) \le tr(M_1^2 M_2^2 ) $$ always seems to hold. Is there a theorem?
Your conjecture is a special case of the following result which essentially follows from the Lieb-Thirring inequality.
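For the symmetric case there is also an elementary argument (a sketch): set $C = M_1M_2 - M_2M_1$. Since $M_1$ and $M_2$ are symmetric, $C^{\mathsf T} = -C$, and therefore

$$0 \;\le\; \operatorname{tr}(C^{\mathsf T} C) \;=\; -\operatorname{tr}(C^2) \;=\; 2\operatorname{tr}(M_1^2M_2^2) - 2\operatorname{tr}(M_1M_2M_1M_2),$$

which gives $\operatorname{tr}(M_1M_2M_1M_2) \le \operatorname{tr}(M_1^2M_2^2)$, with equality exactly when $M_1$ and $M_2$ commute.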
$B$ is a filtered colimit of étale $K$-algebras.
Moreover, every finitely generated $K$-subalgebra of $B$ is étale over $K$.
Assume (1) and (2) hold. We will prove (3) and the finite statement of the lemma. A field is an absolutely flat ring, hence $B$ is an absolutely flat ring by Lemma 15.91.8. Hence $B$ is reduced and every local ring is a field, see Lemma 15.91.5.
Let $\mathfrak q \subset B$ be a prime. The ring map $B \to B_\mathfrak q$ is weakly étale, hence $B_\mathfrak q$ is weakly étale over $K$ (Lemma 15.91.9). Thus $B_\mathfrak q$ is a separable algebraic extension of $K$ by Lemma 15.91.15.
Given a connected weighted graph with non-negative edges (might have cycles), find the shortest path from a vertex s to a vertex t with at most k edges.
I have done some research on this problem. All proposed solutions point to using Bellman-Ford's algorithm by modifying its outer loop to perform k iterations. This will yield a worst case time complexity of O(VE).
I wish to know if it is possible to solve this in O(k * (V+E)LogV) or better using Dijkstra's algorithm?
I have seen this post that discusses the same problem. Dijkstra's algorithm to compute shortest paths using k edges?
However, I don't know how to prove the correctness of the solution that uses product construction.
Use breadth-first search to label the distances of all the nodes within k hops of the source vertex.
Now, if the destination can be reached within k hops, run Dijkstra's algorithm to find the shortest path, using all the labelled nodes, to the destination vertex.
Modify Dijkstra's algorithm such that it will run with a path length counter.
Every time an edge to a vertex is relaxed, mark the distance of that vertex with the counter.
(b) Else we update the counter with the distance value of that vertex.
Now the algorithm should give us the shortest path of max length k.
I am not sure of the correctness and efficiency of my algorithms either. Could someone please advise me if there is a better solution to this problem?
The product construction means that you don't just store the shortest distance to a given node, but the shortest distance to reach a node using a given number of edges (up to $k$). You certainly don't need to explicitly construct the edges of the product graph. But since the algorithm will produce $k\cdot|V|$ shortest distances of this form, it seems as if you must at least construct an array of this size to hold those distances. However, this is not exactly true, because only the distances different from $\infty$ must be stored.
So you could either generate the nodes of the product graph on the fly and store them in a map (i.e. something like a red black tree), or first determine the reachable part of the product graph by a suitable algorithm. If you only care about the complexity and not about the practical efficiency, then the solution with the map is simpler and still good enough. That map can then also be used for Dijkstra's algorithm itself to determine the next node on which the algorithm continues.
The disadvantage compared to Bellman-Ford's algorithm is that you can't reuse the memory from the previous iteration for the current iteration, because Dijkstra's algorithm will not produce all distance $j-1$ paths before producing distance $j$ paths. But maybe it is possible to modify Dijkstra's algorithm to do exactly this, and this might indeed yield the most efficient algorithm for your problem.
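For concreteness, a rough sketch of Dijkstra on the product graph, where a state is (vertex, number of edges used so far) and states are generated on the fly as the answer describes (the adjacency-list format and names are my own assumptions):

```python
import heapq

def shortest_path_at_most_k_edges(adj, s, t, k):
    """adj[u] = list of (v, w) with w >= 0. Shortest s-t distance using at most k edges."""
    INF = float("inf")
    dist = {(s, 0): 0}                       # distances over product states (vertex, edges used)
    pq = [(0, s, 0)]
    best = INF
    while pq:
        d, u, used = heapq.heappop(pq)
        if d > dist.get((u, used), INF):
            continue                          # stale heap entry
        if u == t:
            best = min(best, d)
        if used == k:
            continue                          # cannot extend this path any further
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get((v, used + 1), INF):
                dist[(v, used + 1)] = nd
                heapq.heappush(pq, (nd, v, used + 1))
    return best

# toy example
adj = {0: [(1, 1), (2, 10)], 1: [(2, 1)], 2: []}
print(shortest_path_at_most_k_edges(adj, 0, 2, 1))  # 10: the cheaper 0-1-2 path needs 2 edges
```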
Abstract: We study the problem of constructing a block-cipher on a "possibly-strange" set $\mathcal S$ using a block-cipher on a larger set $\mathcal T$. Such constructions are useful in format-preserving encryption, where for example the set $\mathcal S$ might contain "valid 9-digit social security numbers" while $\mathcal T$ might be the set of 30-bit strings. Previous work has solved this problem using a technique called cycle walking, first formally analyzed by Black and Rogaway. Assuming the size of $\mathcal S$ is a constant fraction of the size of $\mathcal T$, cycle walking allows one to encipher a point $x \in \mathcal S$ by applying the block-cipher on $\mathcal T$ a small /expected/ number of times and $O(N)$ times in the worst case, where $N = |\mathcal T|$, without any degradation in security. We introduce an alternative to cycle walking that we call /reverse cycle walking/, which lowers the worst-case number of times we must apply the block-cipher on $\mathcal T$ from $O(N)$ to $O(\log N)$. Additionally, when the underlying block-cipher on $\mathcal T$ is secure against $q = (1-\epsilon)N$ adversarial queries, we show that applying reverse cycle walking gives us a cipher on $\mathcal S$ secure even if the adversary is allowed to query all of the domain points. Such fully-secure ciphers have been the target of numerous recent papers.
I got this problem from Rustan Leino, who got it from Vladislav Shcherbina, but changed gloves into T-shirts to emphasize the people rather than the spaces between them.
Each of ten friends goes into a private dressing room where he chooses either a white T-shirt or a black T-shirt to wear. Then, the other T-shirt is taken away and he's given a hat with a real number written on it. Before selecting his T-shirt, he's given a list of the other nine friends; next to each name on that list is what that friend's hat number will be. All ten hats will have different numbers.
The friends then put on their hats and T-shirts and gather in a room. They may not remove or exchange hats or T-shirts, but must immediately line up in order of hat number. The desired property is that the T-shirt colors alternate.
The friends are allowed to decide on a strategy before walking into their dressing rooms, but may not otherwise communicate. Design a strategy that lets the friends always end up with alternating T-shirt colors.
Each friend will make an estimate about what $C$ is (the number of correctly-ordered pairs, in the sense discussed below). He can mostly figure this out, except he can't tell whether pairs that include himself are correctly ordered. So, for his estimate, he'll assume that his hat number is less than everyone else's (as if his hat said $-\infty$). If his estimate is odd, he chooses a white T-shirt. If his estimate is even, he chooses a black T-shirt.
The reason this works is as follows.
Each friend will make a certain number of mistakes when evaluating the correctly-orderedness of pairs. He will never make mistakes for pairs that don't include himself, since his list tells him both the numbers in such pairs. For pairs that do include himself, he'll make a mistake if and only if the other friend in that pair has a hat number less than his hat's number.
This means that the number of mistakes each friend makes is equal to the number of friends with lower hat numbers. The person with the lowest hat number will make 0 mistakes, the person with the second-lowest hat number will make 1 mistake, the person with the third-lowest hat number will make 2 mistakes, etc. Thus, when they line up they will be lining up in order of number of mistakes made.
Each mistake induces an error of $+1$ or $-1$ in the estimate. Thus, mistakes can cancel, but only in pairs. As a result, the error in the estimate will have the same parity as the number of mistakes. In other words, a friend's error will be odd if his mistake count is odd, and it will be even if his mistake count is even. | CommonCrawl |
We will make it more routine to discuss some of the Daily Magic Spell problems that the majority of students who attempted the question on a given day did not get correct. We are interested in hearing your thoughts and interpretation of the problem, and we will offer feedback on your approach to determining the solution. We encourage everyone to participate in the discussion! Feel free to post your questions if you have any!
Without further ado, we will begin with the question given on January 11, 2017.
Suppose that a restaurant sells $7$ different burgers and $4$ different salads. Two people decide to order different things, but both order a burger or both order a salad. How many different ways can this happen?
To approach this problem, note that two people must each order a burger, or a salad. That is, if Person $A$ orders a burger, Person $B$ orders a burger. If Person $A$ orders a salad, Person $B$ must order a salad.
In the case when Person $A$ orders a burger, Person $A$ has $7$ options to select from. After Person $A$ made his choice, there are $6$ remaining burgers for Person $B$ to select from. By the Product Rule, this yields $7 \times 6 = 42$ ways that two people can order different burgers.
In the case when Person $A$ orders a salad, Person $A$ has $4$ options to select from. After Person $A$ has made his choice, there are $3$ remaining salads for Person $B$ to select from. Again, by the Product Rule, this yields $4 \times 3 = 12$ ways that two people can order different salads.
By the Sum Rule, this implies that there is a total of $42 + 12 = 54$ ways to do this.
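A quick brute-force check of that count (purely an illustration):

```python
from itertools import permutations

burgers = range(7)
salads = range(7, 11)            # 4 salads, labelled differently from the burgers

orders = [p for p in permutations(list(burgers) + list(salads), 2)
          if (p[0] in burgers) == (p[1] in burgers)]   # both burgers or both salads
print(len(orders))               # 54
```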
I believe that the most common mistake with the question is the interpretation of the problem. This is very common in counting problems. One possible misinterpretation is the phrase "two people decide to order different things." This phrase may suggest that the $7$ different burgers and $4$ different salads are all free to choose from, and therefore each person could potentially order either a burger or a salad independently. This is incorrect, since either both people must order a burger or both people must order a salad.
Do you also have some thoughts about the problem? How did you misinterpret the problem? Let us know below! | CommonCrawl |
Abstract: The scattering of dark matter (DM) particles with sub-GeV masses off nuclei is difficult to detect using liquid xenon-based DM search instruments because the energy transfer during nuclear recoils is smaller than the typical detector threshold. However, the tree-level DM-nucleus scattering diagram can be accompanied by simultaneous emission of a Bremsstrahlung photon or a so-called "Migdal" electron. These provide an electron recoil component to the experimental signature at higher energies than the corresponding nuclear recoil. The presence of this signature allows liquid xenon detectors to use both the scintillation and the ionization signals in the analysis where the nuclear recoil signal would not be otherwise visible. We report constraints on spin-independent DM-nucleon scattering for DM particles with masses of 0.4-5 GeV/c$^2$ using 1.4$\times10^4$ kg$\cdot$day of search exposure from the 2013 data from the Large Underground Xenon (LUX) experiment for four different classes of mediators. This analysis extends the reach of liquid xenon-based DM search instruments to lower DM masses than has been achieved previously. | CommonCrawl |
A temporal graph is a graph that changes over time. Assuming discrete time and a fixed set $V$ of vertices, a temporal graph can be viewed as a discrete sequence $G_1, G_2, \ldots$ of static graphs, each with vertex set $V$. Research in this area is motivated by the fact that many modern systems are highly dynamic and relations (edges) between objects (vertices) vary with time. Although static graphs have been extensively studied for decades from an algorithmic point of view, we are still far from having a concrete set of structural and algorithmic principles for temporal graphs. Many notions and algorithms from the static case can be naturally transferred in a meaningful way to their temporal counterpart, while in other cases new approaches are needed to define the appropriate temporal notions. In particular, some problems become radically different and substantially more difficult when the time dimension is additionally taken into account. In this talk we will introduce temporal graphs and we will survey recent algorithmic results. | CommonCrawl |
The calendars can be marked with events with date ranges, with different markers and styles (updated Dec 18, 2016). Can now be compiled with xelatex (updated Dec 18, 2018).
The corresponding calendar to fit into a 3.5" floppy disk jewel case can be found here; while the one to fit into a CD jewel case can be found here.
% Here are the actual printed calendars. The smaller calendar (9\,cm $\times$ 9.5\,cm) fits floppy disk jewel cases; while the bigger one (11.7\,cm $\times$ 13.65\,cm) fits CD jewel cases. | CommonCrawl |
The factorial of $4$ is $4\times 3\times 2\times 1=24$.
As for usage, the factorial is written $n!$, so "the factorial of 3" would be written as $3!$, and you are meant to compute $3!$ by taking $3\times 2\times 1=6$. So the factorial of a number is found by multiplying the number by all of the integers smaller than it, stopping at $1$. One interpretation of the result is the number of ways $n$ different objects can be arranged. For example, the three letters a, b, and c can be arranged in $3!=6$ different ways.
In this lesson, as more practice in writing functions, we'll implement a factorial function. Remember, given a number $n$, the factorial is $n\times (n-1)\times (n-2)\times\cdots\times 1$. But if you look at it a bit differently, you'll also see that $n!=1\times 2\times 3\times 4\times\cdots\times n$. This second definition is easiest to implement using a single for-loop. Use a for-loop to multiply the numbers from 1 to $n$ together.
See if you can complete the factorial function below. As for testing, be sure you get that fact(5)=120 and fact(10)=3628800. Note: We have an actual use for the factorial in a future lesson.
Now you try. Fix the for i=??? do line and the p=p* line to handle the factorial as described above.
This code will not run! Fix the for loop. What will you have it count over? Next, fix the p=p * line to keep track of the rolling factorial product (p for "product").
(English) To control immune functions by electrostimulation, we have studied the effect of electrostimulation on macrophage functions in vitro. We have reported that ELF electrostimulation affected the endocytic activity of macrophages and that humoral factors mediated this effect. In this study, we examined the change in the amount of scavenger receptors and the possible involvement of humoral factors in this change. As one of the humoral factors, we measured the change in the release of TNF-$\alpha$ caused by electrostimulation. The results of the experiments suggested that the amount of scavenger receptor increases with statistical significance and that humoral factors mediate this increase. However, we could not find statistical significance in the change in TNF-$\alpha$ release.
Bibliographic information: IEICE Technical Report, vol. 108, no. 98, MBE2008-19, pp. 25-28, June 2008.
A Lambert quadrilateral is a quadrilateral three of whose angles are right angles. And in 2-d hyperbolic space $\mathbb H^2$, we have nice formulas for the fourth angle.
By this I mean that if we consider another Lambert quadrilateral $A'O'B'F'$ with $|O'A'|=|OA|$ and $|O'B'|=|OB|$ on a compact surface $(M^2,g)$ with curvature $\kappa<-1$, can we compare the two acute angles $\angle F$ and $\angle F'$? In this case it seems natural to expect $\angle F'\le \angle F$.
The workshop 'Group Theory, Measure, and Asymptotic Invariants' organized by Miklos Abert (Budapest), Damien Gaboriau (Lyon) and Andreas Thom (Leipzig) was held 18 - 24 August 2013. The event was a continuation of the previous Oberwolfach workshop 'Actions and Invariants of Residually Finite Groups: Asymptotic Methods' organized by Miklos Abert (Budapest), Damien Gaboriau (Lyon) and Fritz Grunewald (Dusseldorf) that was held September 5 - September 11, 2010. Fritz Grunewald passed away in March 2010 and Andreas Thom joined the organizing team.
The workshop aimed to study finitely generated groups and group actions using ergodic and measure theoretic methods, incorporating asymptotic invariants, such as $\ell^2$-invariants, the rank gradient, cost, torsion growth, entropy-type invariants and invariants coming from random walks and percolation theory.
The participant body came from a wide range of areas: finite and infinite group theory, geometry, ergodic theory, graph theory, topology, probability theory, representation theory, von Neumann algebras and $\ell^2$-theory. The participants typically did not speak each other's mathematical dialect fluently. To address this situation, the organizers asked the speakers to put a special emphasis on the first, introductory part of their talks. This aspect worked very well.
As a general rule, the organizers asked speakers to talk about specific subjects, not just any nice piece of their research. In some cases, this meant sacrificing hearing about some new results from excellent mathematicians that were further away from the workshop's main directions. | CommonCrawl |
Matilde Yáñez, Lucía Galán, Jorge Matías-Guiu, Alvaro Vela, Antonio Guerrero and Antonio G García.
CSF from amyotrophic lateral sclerosis patients produces glutamate independent death of rat motor brain cortical neurons: protection by resveratrol but not riluzole.. Brain research 1423:77–86, 2011.
Abstract The neurotoxic effects of cerebrospinal fluid (CSF) from patients suffering amyotrophic lateral sclerosis (ALS), have been reported by various authors. However, variable results have been communicated and the mechanism of such neurotoxicity has been attributed to excess glutamate concentrations in ALS/CSF. We have studied here the properties of 14 CSFs from control patients and 29 CSFs from patients of ALS. We found that while ALS/CSF impairs the viability of rat brain cortical motoneurons maintained in primary cultures, this effect seemed to be exerted through a glutamate-independent mechanism. Resveratrol protected against such neurotoxic effects and antagonized the [Ca(+2)](c) elevation produced by ALS/CSF. However, riluzole did not afford protection and antagonized the resveratrol-elicited neuroprotective effects. We conclude that ALS/CSF elicited neurotoxicity on in vitro cultures of rat brain cortical motor neurons may become a sound microassay to test available novel multitargeted neuroprotective compounds with potential therapeutic application in ALS patients.
Anti-inflammatory Effects of Resveratrol, (-)-Epigallocatechin-3-gallate and Curcumin by the Modulation of Toll-like Receptor Signaling Pathways. Korean Journal of Food Science and Technology 39(5), January 2007.
Abstract Toll-like receptors (TLRs) induce innate immune responses that are essential for host defenses against invading microbial pathogens, thus leading to the activation of adaptive immune responses. In general, TLRs have two major downstream signaling pathways: the MyD88- and TRIF-dependent pathways, which lead to the activation of NF-κB and IRF3. Numerous studies have demonstrated that certain phytochemicals possessing anti-inflammatory effects inhibit NF-κB activation induced by pro-inflammatory stimuli, including lipopolysaccharides and . However, the direct molecular targets for such anti-inflammatory phytochemicals have not been fully identified. Identifying the direct targets of phytochemicals within the TLR pathways is important because the activation of TLRs by pro-inflammatory stimuli can induce inflammatory responses that are the key etiological conditions in the development of many chronic inflammatory diseases. In this paper we discuss the molecular targets of resveratrol, (-)-epigallocatechin-3-gallate (EGCG), and curcumin in the TLR signaling pathways. Resveratrol specifically inhibited the TRIF pathway in TLR3 and TLR4 signaling, by targeting TBK1 and RIP1 in the TRIF complex. Furthermore, EGCG suppressed the activation of IRF3 by targeting TBK1 in the TRIF-dependent signaling pathways. In contrast, the molecular target of curcumin within the TLR signaling pathways is the receptor itself, in addition to . Together, certain dietary phytochemicals can modulate TLR-derived signaling and inflammatory target gene expression, and in turn, alter susceptibility to microbial infection and chronic inflammatory diseases.
Dohoon Kim, Minh Dang Nguyen, Matthew M Dobbin, Andre Fischer, Farahnaz Sananbenesi, Joseph T Rodgers, Ivana Delalle, Joseph A Baur, Guangchao Sui, Sean M Armour, Pere Puigserver, David A Sinclair and Li-Huei Tsai.
SIRT1 deacetylase protects against neurodegeneration in models for Alzheimer's disease and amyotrophic lateral sclerosis. The EMBO Journal 26(13):3169–3179, 2007.
Abstract A progressive loss of neurons with age underlies a variety of debilitating neurological disorders, including Alzheimer's disease (AD) and amyotrophic lateral sclerosis (ALS), yet few effective treatments are currently available. The SIR2 gene promotes longevity in a variety of organisms and may underlie the health benefits of caloric restriction, a diet that delays aging and neurodegeneration in mammals. Here, we report that a human homologue of SIR2, SIRT1, is upregulated in mouse models for AD, ALS and in primary neurons challenged with neurotoxic insults. In cell-based models for AD/tauopathies and ALS, SIRT1 and resveratrol, a SIRT1-activating molecule, both promote neuronal survival. In the inducible p25 transgenic mouse, a model of AD and tauopathies, resveratrol reduced neurodegeneration in the hippocampus, prevented learning impairment, and decreased the acetylation of the known SIRT1 substrates PGC-1alpha and p53. Furthermore, injection of SIRT1 lentivirus in the hippocampus of p25 transgenic mice conferred significant protection against neurodegeneration. Thus, SIRT1 constitutes a unique molecular link between aging and human neurodegenerative disorders and provides a promising avenue for therapeutic intervention.
Eduardo Candelario-Jalil, Antonio Pinheiro C Oliveira, Sybille Gräf, Harsharan S Bhatia, Michael Hüll, Eduardo Muñoz and Bernd L Fiebich.
Resveratrol potently reduces prostaglandin E2 production and free radical formation in lipopolysaccharide-activated primary rat microglia.. Journal of neuroinflammation 4:25, 2007.
Abstract BACKGROUND: Neuroinflammatory responses are triggered by diverse ethiologies and can provide either beneficial or harmful results. Microglial cells are the major cell type involved in neuroinflammation, releasing several mediators, which contribute to the neuronal demise in several diseases including cerebral ischemia and neurodegenerative disorders. Attenuation of microglial activation has been shown to confer protection against different types of brain injury. Recent evidence suggests that resveratrol has anti-inflammatory and potent antioxidant properties. It has been also shown that resveratrol is a potent inhibitor of cyclooxygenase (COX)-1 activity. Previous findings have demonstrated that this compound is able to reduce neuronal injury in different models, both in vitro and in vivo. The aim of this study was to examine whether resveratrol is able to reduce prostaglandin E2 (PGE2) and 8-iso-prostaglandin F2alpha (8-iso-PGF2 alpha) production by lipopolysaccharide (LPS)-activated primary rat microglia. METHODS: Primary microglial cell cultures were prepared from cerebral cortices of neonatal rats. Microglial cells were stimulated with 10 ng/ml of LPS in the presence or absence of different concentrations of resveratrol (1-50 microM). After 24 h incubation, culture media were collected to measure the production of PGE2 and 8-iso-PGF2 alpha using enzyme immunoassays. Protein levels of COX-1, COX-2 and microsomal prostaglandin E synthase-1 (mPGES-1) were studied by Western blotting after 24 h of incubation with LPS. Expression of mPGES-1 at the mRNA level was investigated using reverse transcription-polymerase chain reaction (RT-PCR) analysis. RESULTS: Our results indicate that resveratrol potently reduced LPS-induced PGE2 synthesis and the formation of 8-iso-PGF2 alpha, a measure of free radical production. Interestingly, resveratrol dose-dependently reduced the expression (mRNA and protein) of mPGES-1, which is a key enzyme responsible for the synthesis of PGE2 by activated microglia, whereas resveratrol did not affect the expression of COX-2. Resveratrol is therefore the first known inhibitor which specifically prevents mPGES-1 expression without affecting COX-2 levels. Another important observation of the present study is that other COX-1 selective inhibitors (SC-560 and Valeroyl Salicylate) potently reduced PGE2 and 8-iso-PGF2 alpha production by LPS-activated microglia. CONCLUSION: These findings suggest that the naturally occurring polyphenol resveratrol is able to reduce microglial activation, an effect that might help to explain its neuroprotective effects in several in vivo models of brain injury.
Ozgur Kutuk, Giuseppe Poli and Huveyda Basaga.
Resveratrol protects against 4-hydroxynonenal-induced apoptosis by blocking JNK and c-JUN/AP-1 signaling.. Toxicological sciences : an official journal of the Society of Toxicology 90(1):120–32, 2006.
Abstract In the present study we have studied the effect of resveratrol in signal transduction mechanisms leading to apoptosis in 3T3 fibroblasts when exposed to 4-hydroxynonenal (HNE). In order to gain insight into the mechanisms of apoptotic response by HNE, we followed MAP kinase and caspase activation pathways; HNE induced early activation of JNK and p38 proteins but downregulated the basal activity of ERK (1/2). We were also able to demonstrate HNE-induced release of cytochrome c from mitochondria, caspase-9, and caspase-3 activation. Resveratrol effectively prevented HNE-induced JNK and caspase activation, and hence apoptosis. Activation of AP-1 along with increased c-Jun and phospho-c-Jun levels could be inhibited by pretreatment of cells with resveratrol. Moreover, Nrf2 downregulation by HNE could also be blocked by resveratrol. Overexpression of dominant negative c-Jun and JNK1 in 3T3 fibroblasts prevented HNE-induced apoptosis, which indicates a role for JNK-c-Jun/AP-1 pathway. In light of the JNK-dependent induction of c-Jun/AP-1 activation and the protective role of resveratrol, these data may show a critical potential role for JNK in the cellular response against toxic products of lipid peroxidation. In this respect, resveratrol acting through MAP kinase pathways and specifically on JNK could have a role other than acting as an antioxidant-quenching reactive oxygen intermediate.
J A Baur, K J Pearson, N L Price, H A Jamieson, C Lerin, A Kalra, V V Prabhu, J S Allard, G Lopez-Lluch and K Lewis.
Resveratrol improves health and survival of mice on a high-calorie diet. Nature, 2006.
Abstract Resveratrol (3,5,4'-trihydroxystilbene) extends the lifespan of diverse species including Saccharomyces cerevisiae, Caenorhabditis elegans and Drosophila melanogaster. In these organisms, lifespan extension is dependent on Sir2, a conserved deacetylase proposed to underlie the beneficial effects of caloric restriction. Here we show that resveratrol shifts the physiology of middle-aged mice on a high-calorie diet towards that of mice on a standard diet and significantly increases their survival. Resveratrol produces changes associated with longer lifespan, including increased insulin sensitivity, reduced insulin-like growth factor-1 (IGF-I) levels, increased AMP-activated protein kinase (AMPK) and peroxisome proliferator-activated receptor-gamma coactivator 1$\alpha$ (PGC-1$\alpha$) activity, increased mitochondrial number, and improved motor function. Parametric analysis of gene set enrichment revealed that resveratrol opposed the effects of the high-calorie diet in 144 out of 153 significantly altered pathways. These data show that improving general health in mammals using small molecules is an attainable goal, and point to new approaches for treating obesity-related disorders and diseases of ageing. | CommonCrawl |
This is Problem 15.47 from V. V. Prasolov, Problems in Geometry, V 2 (Nauka, 1986).
Two equal rectangles intersect in eight points.
Prove that the area of their intersection (the common area both of them enclose) is greater than half the area of either of them.
Thus $AC$ is the bisector of the angle between the two long sides. Similarly, $BD$ is the bisector of the angle between two short sides. They intersect, and are, therefore, perpendicular.
We have $AC\times BD=2[ABCD],$ i.e., the area of the quadrilateral $ABCD$ is half the product $AC\times BD.$ Since one of the diagonals is not shorter than the short side, and the other is not shorter than the long side, of the rectangles, half their product is not less than half the area of each of the rectangles. But $ABCD$ is only a part of the common area enclosed by both rectangles. Thus the proof is complete.
Abstract: The stochastic variational method is applied to excitonic formations within semiconducting transition metal dichalcogenides using a correlated Gaussian basis. The energy and structure of two- to six-particle systems are investigated along with their dependence on the effective screening length of the two-dimensional Keldysh potential and the electron-hole effective mass ratio. Excited state biexcitons are shown to be bound, with binding energies of the L=0 state showing good agreement with experimental measurements of biexciton binding energies. Ground and newly discussed excited state exciton-trions are predicted to be bound and their structures are investigated.
Abstract: The Stochastic Variational Method (SVM) is used to show that the effective mass model correctly estimates the binding energies of excitons and trions but fails to predict the experimental binding energy of the biexciton. Using high-accuracy variational calculations, it is demonstrated that the biexciton binding energy in transition metal dichalcogenides is smaller than the trion binding energy, contradicting experimental findings. It is also shown that the biexciton has bound excited states and that the binding energy of the $L=0$ excited state is in very good agreement with experimental data. This excited state corresponds to a hole attached to a negative trion and may be a possible resolution of the discrepancy between theory and experiment.
See conference program listing and published abstract.
Abstract: Recently, experimental measurements and theoretical modeling have been in a disagreement concerning the binding energy of biexctions in transition metal dichalcogenides. While theory predicts a smaller binding energy (∼20 meV) that is, as logically expected, lower than that of the trion, experiment finds values much larger (∼60 meV), actually exceeding those for the trion. In this work, we show that there exists an excited state of the biexciton which yields binding energies that match well with experimental findings and thus gives a plausible explanation for the apparent discrepancy. Furthermore, it is shown that the electron-hole correlation functions of the ground state biexciton and trion are remarkably similar, possibly explaining why a distinct signature of ground state biexcitons would not have been noticed experimentally.
Talk delivered March 22, 2014, 3:15 p.m.
Abstract: Traditionally, linear multistep methods (LMMs) for the numerical solution of initial value problems, such as Adams methods and backward differentiation formulas, have been derived through the use of polynomial interpolation and collocation through continuous schemes. While these methods can be implemented in modern computer algebra systems, they require the use of highly expensive operations such as symbolic matrix inversion. This imposes a severe limit on the complexity of LMMs that can be derived. In this presentation, we present a generalized algorithm for deriving LMMs based upon Taylor series expansion. By our approach, we show that the derivation of a LMM containing $k + 1$ terms is reducible to the numerical solution of a $k \times k$ linear system, allowing for the efficient derivation of methods including hundreds or thousands of terms. Furthermore, we show that this algorithm is trivially generalizable to methods including arbitrarily many off-grid points, and that it can be generalized to create LMMs for directly solving initial value problems of arbitrarily high order, with the inclusion of all intermediate derivative terms. Specific methods are stated and tested numerically on well-known problems given in the literature. | CommonCrawl |
Let $u$ and $v$ be vectors in $\mathbb R^n$. The exercise is to prove that if $||u+tv|| \ge ||u||$ for all real $t$, then $u\cdot v=0$ ($u$ and $v$ are perpendicular).
I tried writing $ v$ as $ (n+xu)$ , where $ u\cdot n=0$ , and then try to prove that $ x$ must be zero, but was unable to develop this solution. | CommonCrawl |
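One standard route (a hint added here, not part of the original post) is to expand the squared norm as a function of $t$:
$$\|u+tv\|^2 = \|u\|^2 + 2t\,(u\cdot v) + t^2\|v\|^2 \ge \|u\|^2 \quad\Longleftrightarrow\quad 2t\,(u\cdot v) + t^2\|v\|^2 \ge 0 \ \text{ for all } t.$$
If $u\cdot v \neq 0$, choosing $t$ small and of sign opposite to $u\cdot v$ makes the linear term dominate and the left-hand side negative, a contradiction; hence $u\cdot v = 0$.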
Abstract: Lov\'asz and Schrijver have constructed semidefinite relaxations for the stable set polytope of a graph $G=(V,E)$ by a sequence of lift-and-project operations; their procedure finds the stable set polytope in at most $\alpha(G)$ steps, where $\alpha(G)$ is the stability number of $G$. Two other hierarchies of semidefinite bounds for the stability number have been proposed by Lasserre and by de Klerk and Pasechnik, which are based on relaxing nonnegativity of a polynomial by requiring the existence of a sum of squares decomposition. The hierarchy of Lasserre is known to converge in $\alpha(G)$ steps as it refines the hierarchy of Lov\'asz and Schrijver, and de Klerk and Pasechnik conjecture that their hierarchy also finds the stability number after $\alpha(G)$ steps. We prove this conjecture for graphs with stability number at most 8 and we show that the hierarchy of Lasserre refines the hierarchy of de Klerk and Pasechnik.
However, the $\mathrm{p}K_\mathrm{a}$ of aniline is 27, which means that it is, if at all, a very very poor acid, making it in my eyes impossible to deprotonate. Maybe it could be possible with n-BuLi, but that was not part of the reaction. The main topic was a nucleophilic aromatic substitution on 1-bromo-4-nitrobenzene, with aniline.
Is it even hypothetically possible for such a deprotonation in pure aniline to occur?
I'm going to side with you @TheChemist.
Your TA might be suggesting this because you often need a negatively charged nucleophile in order to carry out nucleophilic aromatic substitutions.
Having the nitro group in the 4 position makes your electrophile much more receptive to attack, since the negatively charged intermediate will be resonance stabilized.
Even if one aniline does deprotonate another, the concentration of the deprotonated anilide will be so low (on the order of $5 \times 10^{-28}$ mol/L in pure aniline, if the pKa is 27) that this will have no impact on the reaction.
We study the nonlinear stability of viscous, immiscible multilayer flows in channels driven both by a pressure gradient and/or gravity in a slightly inclined channel. Three fluid phases are present with two internal interfaces. Novel weakly nonlinear models of coupled evolution equations are derived and we concentrate on inertialess flows with stably stratified fluids, with and without surface tension. These are $2\times 2$ systems of second-order semilinear parabolic PDEs that can exhibit inertialess instabilities due to resonances between the interfaces - mathematically this is manifested by a transition from hyperbolic to elliptic behavior of the nonlinear flux functions. We consider flows that are linearly stable (i.e the nonlinear fluxes are hyperbolic initially) and use the theory of nonlinear systems of conservation laws to obtain a criterion (which can be verified easily) that can predict nonlinear stability or instability (i.e. nonlinear fluxes encounter ellipticity as they evolve spatiotemporally) at large times. In the former case the solution decays asymptotically to its base state, and in the latter nonlinear traveling waves emerge. | CommonCrawl |
A triangular sheet of paper has been folded and punched as shown below. How will it appear when opened?
If 2 $\times$ 3 = -1, 3 $\times$ 4 = - 5 and 4 $\times$ 6 = - 14, then 6 $\times$ 9 = ?
6. In each of the following questions, select the related letter/word/number from the given alternatives.
123 : 169 : : 126 : ?
7. In each of the following questions, select the related letter/word/number from the given alternatives.
Lucknow : Uttar Pradesh : : ?
8. In each of the following questions, select the related letter/word/number from the given alternatives.
9. In each of the following questions, select the related letter/word/number from the given alternatives.
22 : 495 : : 24 : ?
10. Find the odd word/letters/number pair/number from the given alternatives.
This challenge is a game for two players. The first player chooses two numbers in this grid and either multiplies or divides them.
He or she then marks the answer to the calculation on the number line. The second player then chooses two numbers and either $\times$ or $\div$, and marks that number in a different colour on the number line.
If the answer is too big or too small to be marked on the number line, the player misses a go.
The winner is the player to get four marks in a row with none of their opponent's marks in between.
What good ways do you have of winning the game?
Does it matter if you go first or second?
Calculators. PrimaryResourceful. Interactivities. PrimaryGames-Strategy. Working systematically. Addition & subtraction. Games. PrimaryGames-Number. Estimating and approximating. Visualising. | CommonCrawl |
The von Neumann architecture has a single memory space, shared for data and program instructions. The Harvard architecture has separate memory spaces for data and instructions (so you cannot execute from the data memory).
It is important to know that the hardware does all arithmetic in two's complement. It is up to the programmer to interpret the number as signed or unsigned.
To go the other way, from say -1 to the two's complement form 11111111, we use that $2^p - X$ formula. I'm not exactly sure how it's supposed to work, so I've hacked it to make it work.
If the number you wish to convert is negative, let $X = -n$ so that X is positive, then take $2^p$, where p is the number of bits you are using (say 8), and subtract X. If the number to convert is non-negative (less than $2^{p-1}$), then leave it as is; that is your two's complement representation.
Now that was complicated. But it's the only way I can get that advertised $2^p - X$ formula to work with the given set of sample data (as in that table above).
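As a sanity check of the $2^p - X$ rule, here is a small Python illustration (my own, not from the original notes):

def to_twos_complement(n, p=8):
    # Return the p-bit two's complement bit pattern of n as a string.
    if n < 0:
        value = 2**p - (-n)   # the 2^p - X rule, with X = -n
    else:
        value = n             # representable non-negative numbers are stored as-is
    return format(value, '0%db' % p)

print(to_twos_complement(-1))    # 11111111
print(to_twos_complement(-128))  # 10000000
print(to_twos_complement(5))     # 00000101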
Why do we need sign extension? We need it in order to do operations on numbers that have different bit lengths (the number of bits used to represent the number).
From a human kind of approach, to convert 221 to binary we see that $2^7 = 128$, that is, $2^7$ is the largest power of 2 less than 221, so we have $1 \times 2^7$. That gives us 128, so we still have 93 (221-128) to go. We try $2^6$; this is less than 93. So far we have $1 \times 2^7 + 1 \times 2^6$. 29 left now, but $2^5$ is greater than 29, so we put a zero in that digit, i.e. $1 \times 2^7 + 1 \times 2^6 + 0 \times 2^5$. If we go on we get $1 \times 2^7 + 1 \times 2^6 + 0 \times 2^5 + 1 \times 2^4 + 1 \times 2^3 + 1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0$. Taking the coefficients of the $\times 2^x$ terms we get the number 221 in binary, 11011101.
An instruction such as ADD a, b computes a + b; the result is stored in a (the first operand).
There are 3 multiplication operations: MUL (Multiply Unsigned), MULS (Multiply Signed) and MULSU (Multiply Signed with Unsigned). Each multiplies two registers; notice the 16-bit result is stored in r1:r0.
;* From AVR Instruction Set Guide, pg 99-100.
;* Signed multiply of two 16-bit numbers with 32-bit result.
brge is Branch if Greater or Equal, Signed: if ($N \oplus V = 0$) then branch. When you do cp Rd, Rr then brge, the branch will be taken if Rd $\ge$ Rr, where Rd and Rr are taken to be signed numbers.
brsh is Branch if Same or Higher: if ($C = 0$) then branch. When you do cp Rd, Rr then brsh, the branch will be taken if Rd $\ge$ Rr, where Rd and Rr are taken to be unsigned numbers.
A C program consists of five functions. Their calling relations are shown as follows (the arguments and irrelevant C statements are omitted).
func1() is a recursive function and calls itself 15 times for the actual parameters given in main(). Both func3() and func4() do not call any function. The sizes of all stack frames are shown as follows.
The path with the most total weight is main() > func2() > func3(), so this is the total stack space needed.
When a switch makes contact, its mechanical springiness will cause the contact to bounce, or make and break, for a few milliseconds (typically 5 to 10 ms). Two software solutions are "wait and see" and counter-based polling.
If we detect it as closed, wait for a little bit and check again.
Poll the switch constantly. For each poll, if the switch is closed, increment the counter. If we reach a certain value in a certain time, then the switch was closed (or the button was pressed).
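Here is a rough Python simulation of the counter-based idea (just an illustration of the polling logic — the threshold and the reset-on-open rule are my own choices, not something from the notes):

def debounced_press(samples, threshold=5):
    # samples: 0/1 switch readings taken at a fixed polling interval.
    # Count consecutive closed readings; accept a press once the count
    # reaches `threshold`. An open reading resets the count.
    count = 0
    for reading in samples:
        count = count + 1 if reading else 0
        if count >= threshold:
            return True
    return False

print(debounced_press([1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1]))  # True: bounce, then stable
print(debounced_press([1, 0, 1, 0, 1, 0, 1, 0]))           # False: never stable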
;add the contents of the array.
For some reason in my lecture notes I have "e.g. find 2nd or 3rd smallest or largest", so here is a modification to do something like that.
To explain interrupts, Wu used an example of a network card that is downloading a file. The network card has a buffer, and only once this buffer is full (or data stream is complete) should the CPU then copy the contents from the buffer to the RAM. So how does the CPU know when the network card's buffer is full and when to execute the copy? He described two ways here interrupt and polling.
Polling involves the CPU periodically asking the network card, are you full? Two problems with this method are a) there may be a delay as you have to wait for the poll request to be made and b) it wastes a lot of CPU time. Polling is implemented in software, not hardware.
An alternative to polling is using interrupts whereby the network card will send an interrupt signal to the CPU to get its attention. This needs special hardware to implement, however it is very efficient compared with polling.
Wait for the current instruction to finish before taking care of the interrupt.
Return to the interrupted program at the point where it was interrupted.
Signal the interrupting device with an acknowledge signal when the interrupt has been recognised.
IRQ is an interrupt request signal.
A daisy chain arrangement (as seen below) allows multiple devices to send an IRQ. However the CPU cannot determine from the IRQ line which device sent the interrupt. So in a daisy chain system, when the CPU receives an IRQ it will send a signal to IO1 asking "did you send the IRQ?"; if IO1 sent the request it will reply "yes", if not it will pass the question on to the next IO device and so on. The response is passed back in the same way.
Reset is an interrupt in AVR, and in the AVR Mega64 there are five different sources of reset. There is a flag in the MCU Control Register for each of these, which can be used to determine the source of a reset interrupt. The watchdog timer is one source of reset.
The watchdog timer is used to try to reset the system if an error such as a hang occurs. The watchdog timer in AVR can be enabled or disabled.
If the Watchdog timer is enabled, it needs to be periodically reset using the wdr instruction. When (if) the Watchdog times out, it will generate a short reset pulse.
Some general notes from an AVR Tutorial that I have found useful and need to reiterate.
"Only the registers from R16 to R31 load a constant immediately with the LDI command, R0 to R15 don't do that.
The first register is always the target register where the result is written to!
Define names for registers with the .DEF directive, never use them with their direct name Rx.
If you need pointer access reserve R26 to R31 for that purpose.
16-bit-counter are best located R25:R24.
If you need to read from the program memory, e.g. fixed tables, reserve Z (R31:R30) and R0 for that purpose.
Sections marked with ¹ are ©2002-2009 by http://www.avr-asm-tutorial.net, from Beginners Introduction to the Assembly Language of ATMEL AVR Microprocessors by Gerhard Schmidt 2003. Used under the supplied license, "You may use, copy and distribute these pages as long as you keep the copyright information with it."
This week we learnt some of the things that separate assembly language from machine code in the context of AVR (or should I say AVR Studio).
AVR Assembly (using AVR studio) has some additional commands that are not part of the AVR instruction set, but the assembler (that is part of AVR studio) interprets into machine code.
A directive such as .equ CONST = 31 will, upon assembly, go through the code and replace CONST with 31.
Here are the AVR assembly Pseudo Instructions.
.byte - Reserve some space (only allowed in dseg). eg.
There are three segments of an AVR assembly program. When writing an assembly program you need to be able to specify which segment you are referring to, so you need to declare this using an AVR assembler directive, shown in brackets below.
The machine code executable produced by AVR Studio was 24 bytes long; the question was why. It's actually quite simple: all of the instructions here are 1 word (2 bytes = 16 bits), and there are 12 instructions, so 24 bytes in total. One thing that initially confused me was the weird encoding. Back in COMP1917 I got the impression that if you have something like mov r16, r17 then this would be represented in machine code as some number for the mov operation followed by two more numbers of the same bit size for the registers. However this can vary depending on the architecture.
For example the mov operation, MOV Rd, Rr has encoding 0010 11rd dddd rrrr. The instruction takes up 6 bits, the source register takes up 5 bits and the destination takes up 5 bits (total 16 bits). The reason that the source and destination bits are intertwined is that it makes things easier on the hardware implementation to do it this way.
Last week we saw how signed and unsigned numbers are stored, so if you look at the program above you will see that the last part just increments a register infinitely, over and over. Stepping through this shows us what the status register does as we do this. Remember that AVR does all arithmetic in two's complement.
As you can see, the values that are negative under two's complement (128-255) have the N (negative) flag set. Also, going from 127 to 128 under two's complement we have gone from 127 to -128, so the V (two's complement overflow indicator) flag is set. 0 has the zero flag.
The lecture notes go over some of the AVR instructions but I think the docs provided by Atmel are fine. What I do think needs explaining (and me learning) is the carry flag and the difference between operations like add without carry (ADD) and add with carry (ADC).
Here is how I understand the carry bit. Say we have 4-bit registers and try to add (in binary) 1000 and 1000; the answer is 10000, however this is 5 bits and we only have 4 bits available to store the result. An overflow has occurred. To introduce some terminology, the most significant bit (or byte) (msb) is the one with the highest place value (conventionally written leftmost), and vice versa for the least significant bit (or byte) (lsb). The carry bit (C flag) is the carry out of the msb. This indicates overflow for unsigned arithmetic, but not always for signed arithmetic; in those cases the V flag (two's complement overflow indicator) is set.
So getting back to ADD and ADC: ADD will just add the two numbers, ignoring the current carry bit, whereas ADC will also add the carry bit into the result. At least this is what I've observed, though I'm not 100% sure about this.
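A quick way to convince yourself of the difference without a simulator is to model 8-bit registers in Python (an illustration of the idea, not a cycle-accurate model of AVR):

def add8(a, b, carry_in=0):
    # 8-bit addition; returns (result, carry flag). ADD uses carry_in = 0,
    # ADC passes in the current C flag.
    total = a + b + carry_in
    return total & 0xFF, int(total > 0xFF)

# 16-bit addition 0x01FF + 0x0001, done byte by byte as AVR would:
lo, c = add8(0xFF, 0x01)      # ADD on the low bytes  -> 0x00, carry set
hi, c = add8(0x01, 0x00, c)   # ADC on the high bytes -> 0x02, carry clear
print(hex(hi), hex(lo), c)    # 0x2 0x0 0, i.e. 0x0200 = 512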
Just some reflections on the first week of my 09s1 COMP2121 Lectures and other materials. I will base a lot of this material on the course lecture notes by Hui Wu. Actually if you read this and the lecture notes you might see that they are pretty much identical.
This first lecture clarified a lot of the things I'd learnt about low level computing in COMP1917, which was good. Back in COMP1917 we were introduced to the 4917 microprocessor (a hypothetical one designed just for this course). It was 4-bit, had 16 memory locations and 4 registers (instruction pointer, instruction store, general register 0 and general register 1). Each memory location could store a number between 0 and 15, and there were 16 instructions.
The instruction at the address given by the instruction pointer is loaded into the instruction store.
The instruction pointer is incremented by 1 or 2 depending on whether the instruction store is a 1 or 2 byte instruction.
The instruction in the instruction store is executed.
So for example the following 4917 machine code program would print 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15.
This is understandable but was difficult for me to tie into my desktop computer level, for instance the CPU and the RAM. This became a bit clearer today.
Another key thing is the Arithmetic and Logic Unit (ALU) and the Control Unit are collectively called the CPU (Central Processing Unit). A microprocessor is just a CPU on a single chip. A microcontroller is basically a computer on a chip, so the CPU, memory, etc. are all on a single chip.
I learnt a lot from experimenting with this architecture and the 4917 (and its successors). Though when I began to write more complex programs I found myself constantly putting a "GOTO instruction number n" command at the very beginning, and using that skipped space as memory for data. I also came to see that this architecture allows for the buffer overflow vulnerability, as program data can be executed as instructions if there is a vulnerability. These two observations tend to lead to the Harvard architecture (which I am new to).
This is the architecture used for the Atmel AVR microprocessor, which is what we will focus on in this course (I think). I'll come back to this when I talk about address space.
This is the computer memory hierarchy diagram from the lecture notes.
This helps clear up a lot of my misunderstandings about how the 4917 from COMP1917 relates to modern processors, as I didn't quite see if the instructions and data were stored in RAM or on the CPU. But really this is just an implementation issue, so it didn't really matter to the 4917.
In the CPU there are registers which are really fast, but there are few of them (apparently in x86 (eg. Pentium, Core 2) there are only 8 integer and 8 floating point registers). Then there is the cache (on-chip memory); this is that 4MB cache I see advertised for my CPU. This cache can be separated for program code and data. It is faster than reading from RAM (the off-chip memory), so currently active program code is moved here from the RAM when necessary to speed things up (this is apparently implemented on the hardware level (which is always nice for a software developer ;) ). Then there is off-chip memory and auxiliary storage. This fits in nicely with the picture I get when I open up my computer.
The material on the types of RAM and ROM in the lecture notes needs no further commentary, so I'll skip that part.
This sums up the situation nicely, and makes perfect sense.
Problem: How do we represent negative numbers in a computer? 4 main methods (from Wikipedia).
This raises an issue for comparison operations. eg. Is $1111\,1100_2 > 0000\,0010_2$? If those two numbers are unsigned then YES; if they are signed then NO. As such the hardware uses the S flag in AVR to indicate the result of the signed comparison, and the C flag to indicate the result of unsigned comparison.
Representing Strings. We saw one method back in COMP1917, though it's nice to see that the other methods that come to mind were used also.
How do we extend a binary number of m bits to an equivalent binary number of m + n bits? If the number is positive, add n 0's to the most significant side (usually left) of the binary number. If the number is negative, add n 1's to the most significant side of the binary number. This is called sign extension. To add two binary numbers of different lengths, just sign extend the shorter one so that both numbers have the same bit length, then add as usual.
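A small Python check of the rule (again my own illustration), which also shows the signed/unsigned interpretation issue from above:

def sign_extend(bits, new_width):
    # Sign-extend a two's complement bit string by replicating the top bit.
    return bits[0] * (new_width - len(bits)) + bits

def as_signed(bits):
    # Interpret a two's complement bit string as a signed integer.
    value = int(bits, 2)
    return value - (1 << len(bits)) if bits[0] == '1' else value

print(sign_extend('1100', 8))                                # 11111100
print(as_signed('1100'), as_signed(sign_extend('1100', 8)))  # -4 -4 (value preserved)
print(int('11111100', 2))                                    # 252 if read as unsigned instead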
A one step method for the attachment of a single fluorescent label to peptides through their N-terminus is described. The method employs fluorescein-5-isothiocyanate as the label of choice and a lower than normal derivatization buffer pH that discriminates against $\varepsilon$-amino derivatization in favor of $\alpha$-amino derivatization provided that this group may act as an efficient nucleophile. A kinetic study was performed in order to identify the optimal derivatization buffer pH for $\alpha$-amino group selectivity, using $\rm N\alpha$-t-BOC-L-lysine and $\rm N\varepsilon$-t-BOC-L-lysine as model compounds for derivatization through $\varepsilon$-amino and $\alpha$-amino groups, respectively. The optimal pH was found to be 8.5. Six small peptides, Arg-Lys, Lys-Trp-Lys, Lys-Lys, Lys-Lys-Lys, Lys-Lys-Lys-Lys and Arg-Pro-Lys-Pro, were derivatized at pH 8.5 in an effort to corroborate the results of the kinetic study. It was found that this method of single label derivatization via buffer pH control was successful for peptides with fewer than four lysine residues.
It's the time of year for Farmer John to plant grass in all of his fields. The entire farm consists of $N$ fields ($1 \leq N \leq 10^5$), conveniently numbered $1 \ldots N$ and conveniently connected by $N-1$ bidirectional pathways in such a way that every field can reach every other field via some collection of pathways.
Farmer John can potentially plant a different type of grass in each field, but he wants to minimize the number of grass types he uses in total, since the more types of grass he uses, the more expense he incurs.
Unfortunately, his cows have grown rather snobbish about their selection of grass on the farm. If the same grass type is planted in two adjacent fields (directly connected by a pathway) or even two nearly-adjacent fields (both directly connected to a common field with pathways), then the cows will complain about lack of variety in their dining options. The last thing Farmer John needs is complaining cows, given how much mischief they have been known to create when dissatisfied.
Please help Farmer John determine the minimum number of types of grass he needs for his entire farm.
The first line of input contains $N$. Each of the remaining $N-1$ lines describes a pathway in terms of the two fields it connects.
Print the minimum number of types of grass that Farmer John needs to use.
In this simple example, there are 4 fields all connected in a linear fashion. A minimum of three grass types are needed. For example, Farmer John could plant the fields with grass types A, B, and C as A - B - C - A. | CommonCrawl |
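A common way to attack this problem (a sketch added here, not part of the original statement): a field with $d$ pathways, together with its $d$ neighbours, must all receive pairwise different grass types, so at least $d+1$ types are needed; since the farm is a tree, this lower bound is also known to be achievable, which gives a very short program.

import sys
from collections import defaultdict

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    degree = defaultdict(int)
    for i in range(n - 1):
        a, b = data[1 + 2 * i], data[2 + 2 * i]
        degree[a] += 1
        degree[b] += 1
    # Answer: 1 + maximum number of pathways meeting at any single field.
    print(1 + max(degree.values()) if n > 1 else 1)

main()

On the sample input the degrees are 1, 2, 2, 1, so the program prints 3, matching the expected output.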
Let $X,Y$ be two compact, smooth, orientable 3-manifolds, each with an incompressible boundary component diffeomorphic to some genus $g$ surface $S_g$. Under an orientation-reversing diffeomorphism $f:S_g \to S_g$, those two manifolds can be glued together to obtain a new smooth, orientable manifold $X \cup_f Y$. I wonder now to what extent the diffeomorphism type of this result depends on the choice of $f$. Using a collar, one can show that if $f$ and $g$ are two isotopic diffeomorphisms of $S_g$, then the corresponding gluings are diffeomorphic. Is this also a necessary condition?
Some thoughts I have made so far: 1) If both $X$ and $Y$ are irreducible, then so is $X \cup_f Y$, and if $X \cup_f Y$ still has some boundary component, it must be an aspherical manifold. Hence, all important information is contained in the fundamental group. Is it true that the homeomorphism type of aspherical 3-manifolds obtained this way is already determined by their fundamental group?
2) Also, I wonder if it's true that the isotopy type of two orientation-reversing diffeomorphisms of a surface $S_g$ is determined by their action on the fundamental group. Edit: This is probably false, since mapping class groups of surfaces are very distinct from the corresponding fundamental groups.
Edit: I apologize for missing and/or inaccurately placed capital letters. I wrote this question yesterday evening on my phone, and it was impossible to fix all the mistakes made by auto-correct (I am writing on a german phone).
Edit 2: I have also updated this question, according to what has already been solved by the answers and what is still open.
Using a collar, one can show that if $f$ and $g$ are two isotopic diffeomorphisms of $S_g$, then the corresponding gluings are diffeomorphic. Is this also a necessary condition?
No, it is not. Suppose we have glued to obtain $M = X \cup_f Y$. Suppose that $X$ admits a self-homeomorphism $\Phi$. Define $\phi = \Phi|\partial X$. Then the map $g = \phi \circ f$ gives the manifold $N = X \cup_g Y$ and this is homeomorphic to $M$.
Now, it is simple to find such $\Phi$ if $X$ has compressible boundary - namely we can do a Dehn twist on a disk. You've ruled that out. But we can still find examples by twisting along an essential properly embedded annulus in $X$. For example, if $X$ is a twisted $I$-bundle over a non-orientable surface.
If you further assume that $X$ is "acylindrical" then there are still examples, but they are harder to find. We can build a hyperbolic manifold $X$ which has a self-homeomorphism $\Phi$ of finite order (eg Thurston's knotted Y).
If you further assume that $X$ (and $Y$) has no symmetries, then examples should still exist, but they will be very hard to find. Basically, we need to find a manifold $M$ that contains homeomorphic surfaces $S$ and $S'$, but where there is no homeomorphism of $M$ taking $S$ to $S'$. We then need to "get lucky" and find that $M - n(S)$ and $M - n(S')$ are homeomorphic and win. One way to do this is by a search through one of the many censuses of closed three-manifolds (eg snappy or regina). Another way that should work is to think deeply about hyperbolic three-manifolds with "corners".
Ad (2): the mapping class group of surfaces is isomorphic to the outer automorphism group of its fundamental group - this is a theorem of Baer-Dehn-Nielsen. In particular there is even a finite set of closed loops such that an element of the mapping class group is determined by its action on this set.
Ad (1): irreducible, orientable 3-manifolds (with the exception of $S^2\times S^1$) have $\pi_2=0$ and - assuming $\mid \pi_1\mid=\infty$ - it is an exercise in algebraic topology to prove that they are then aspherical. It is then a classical fact that the homotopy type is determined by $\pi_1$. From the answer to 3-manifolds with isomorphic fundamental groups one sees that even the homeomorphism type is determined.
This is intuitively clear. Being an interior point, the first two lines follow from the definition. The problem is with the last two sentences. Can someone elaborate on them more mathematically? Thanks in advance.
There are obviously many other valid choices for $\xi$'s. This explains the third sentence.
To understand the last sentence, just apply what we just did. Suppose by contradiction that $\alpha \in \Omega$ is such that $\forall \xi \in \Omega : \vert \alpha \vert \geq \vert \xi \vert$ but that $\alpha \not\in \partial \Omega$. Then $\alpha$ is in the interior of $\Omega$ and there is $\rho > 0$ and $\xi_1 \in B(\alpha, \rho) \subset \Omega$ such that $$\vert \xi_1 \vert > \vert \alpha \vert,$$ a contradiction!
I've written a small routine using the LAPACK routine dsyev, which is fine for small matrices but takes a while for the above dimensions I am interested in.
Would anyone suggest a more appropriate routine? I am only interested in the lower eigenvalues, so an iterative procedure would probably be suitable.
Or are there any (ready-made) routines that exploit sparse matrices? LAPACK seems to be primarily for dense matrices.
As nicoguaro mentioned, ARPACK has routines that will naturally handle banded matrices that are stored in LAPACK band format. Moreover, since ARPACK uses matrix-vector products (MVP) to find eigenvalues, you might "connect" it to your own MVP subroutine without the need to reorganize matrix storage.
As a primarily C++ user, at some point I tried using Eigen's unsupported ARPACK module, but have not checked it for a while. Eigen will provide you with convenient wrappers for your matrix storage and basic linear algebra operations. Though you can use ARPACK via Matlab, Python, and many other ways.
The python scipy page on Sparse Eigenvalue Problems with ARPACK gives an idea of what to do for your problem of finding $n$ smallest eigenvalues by switching to a corresponding computation mode. See references on the which parameter: SM (smallest magnitude), SR (smallest real part), SI (smallest imaginary part), SA (smallest algebraic).
$2500 \times 2500$ does not sound like a large problem; however, if the scaling to larger sizes becomes an issue, one might also look into SLEPc.
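For completeness, here is a minimal SciPy sketch along the lines suggested above (the random banded matrix is only a stand-in for the actual matrix in question):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, bandwidth = 2500, 5
rng = np.random.default_rng(0)

# Build a random symmetric banded matrix in sparse format as a stand-in.
diagonals = [rng.standard_normal(n - k) for k in range(bandwidth + 1)]
A = sp.diags(diagonals, offsets=list(range(bandwidth + 1)), shape=(n, n))
A = (A + A.T).tocsc()

# Ten lowest eigenvalues: which='SA' means smallest algebraic.
vals = eigsh(A, k=10, which='SA', return_eigenvectors=False)
print(np.sort(vals))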
If you are given the graph of a function $z=f(x,y)$, how can you find the graph of the function $z=a\,f(x-x_0,y-y_0)+c$ for some numbers $a$, $c$, $x_0$, and $y_0$?
This is exactly analogous to what you've probably already done in one-variable calculus. Changing these numbers (we'll call them parameters) results in a translation, rescaling, and/or reflection of the graph.
Below is an applet that illustrates these manipulations using the function $f(x,y)=x^2+y^2$. Before you play with the sliders, can you tell what will happen as you change the parameters? Once you start to move them, it is obvious. Make sure you understand why the parameters behave as they do. In particular, why do we need to put a minus sign in front of $x_0$ and $y_0$ but not in front of $c$? After all, they each introduce translations.
Translation, rescaling, and reflection. You can translate, rescale, and reflect of the graph $f(x,y)=x^2+y^2$, resulting in the graph of $z=af(x-x_0,y-y_0)+c$. Drag the sliders with your mouse to change $a$, $c$, $x_0$, and $y_0$. Changing $c$, $x_0$, and $y_0$ translates the graph. Changing the magnitude of $a$ rescales the graph; changing the sign of $a$ reflects the graph across the plane $z=c$.
Changing $x_0$, $y_0$, or $c$ translates the graph. If we change $a$ but keep its sign constant (for example, keep $a$ positive), then we rescale the figure. But if we change the sign of $a$, then we reflect the graph across the plane $z=c$. For example, the graph of $z=-f(x,y)$ is the reflection of the graph of $z=f(x,y)$ across the $xy$-plane (the plane $z=0$).
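If you prefer to experiment outside the applet, a few lines of Python reproduce the same manipulations (an illustrative sketch of my own; the parameter names follow the text):

import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    return x**2 + y**2

a, c, x0, y0 = -0.5, 2.0, 1.0, -1.0      # try changing these
x, y = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))
z = a * f(x - x0, y - y0) + c            # z = a f(x - x0, y - y0) + c

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(x, y, z, cmap='viridis')
plt.show()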
Translation, rescaling, and reflection by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us. | CommonCrawl |
Abstract: Let $L$ be a linear elliptic, a pseudomonotone or a generalized monotone operator (in the sense of F. E. Browder and I. V. Skrypnik), and let $F$ be the nonlinear Nemytskij superposition operator generated by a vector-valued function $f$. We give two general existence theorems for solutions of boundary value problems for the equation $Lx=Fx$. These theorems are based on a new functional-theoretic approach to the pair $(L,F)$, on the one hand, and on recent results on the operator $F$, on the other hand. We treat the above mentioned problems in the case of strong non-linearity $F$, i.e. in the case of lack of compactness of the operator $L-F$. In particular, we do not impose the usual growth conditions on the nonlinear function $f$; this allows us to treat elliptic systems with rapidly growing coefficients or exponential non-linearities. Concerning solutions, we consider existence in the classical weak sense, in the so-called $L_\infty$-weakened sense in both Sobolev and Sobolev-Orlicz spaces, and in a generalized weak sense in Sobolev-type spaces which are modelled by means of Banach $L_\infty$-modules. Finally, we illustrate the abstract results by some applied problems occurring in nonlinear mechanics.
The Environmental Protection Agency (EPA) continues to establish criteria for the reduction of pathogens in environmental waters used for drinking water. The EPA also states criteria on the amount of chlorine by-products allowed in drinking water. Alternative removal methods that are used in conjunction with chlorine are needed to assure safe drinking water.
The Surface Water Treatment Rule (SWTR) states that a 99.99% reduction of viruses and a 99.9% reduction in Giardia lamblia cysts needs to be achieved by a treatment facility that uses surface water for the production of drinking water. The SWTR also states Maximum Contaminant Levels (MCL) on trihalomethane production. The Groundwater Disinfection Rule (GWDR) states that a 99.99% reduction of viruses must be achieved in treatment facilities using groundwater for the production of drinking water.
The ability of polysulfone hollow fiber (HF) ultrafiltration (UF) membranes to remove bacteriophage MS2, Giardia lamblia cysts, and Cryptosporidium parvum oocysts from MilliQ water was examined. The membranes were placed in a Koch 5 bench scale disinfection unit. Operating conditions such as transmembrane pressure, pH, and temperature were varied to assess their effects on removal capability. A 100,000D PMPW membrane consistently removed 99.99% of MS2 titer and 99.999% of G. lamblia cysts and C. parvum oocysts under different operating conditions. The PMPW membrane achieved a 99.99% removal of MS2 in trials where MS2 was continually added to the influent at a concentration of $1.73\times10^4$ pfu/ml-min.
Ultraviolet (UV) irradiation was assessed for its ability to inactivate MS2, poliovirus LSc-1, hepatitis A virus HM-175, and rotavirus Wa in bench scale petri dish assays. Known concentrations of these viruses were added to MilliQ water and different groundwaters and UV irradiated. The least susceptible virus to UV irradiation was the rotavirus Wa strain. In MilliQ water and groundwater, 99.99% reductions in titer occurred at 97.0 mWs/cm$^2$. A 99.99% reduction in MS2 occurred at 80 mWs/cm$^2$ in MilliQ water and between 64.0 mWs/cm$^2$ and 93.0 mWs/cm$^2$ in groundwaters. Poliovirus and hepatitis A virus were considerably less resistant to UV inactivation. A cell culture-RT PCR assay was used to determine rotavirus Wa titers before and after UV irradiation.
Filtration was capable of meeting the pathogen removal criteria stated in the SWTR. UV irradiation met the virus inactivation criteria stated in the GWDR. Both methodologies could be used in full scale treatment plants under controlled conditions to determine their ability to remove pathogens from drinking water.
Hogan, Shannon Patrick, "Alternative removal methodologies for environmental waters" (1998). Doctoral Dissertations. 2013. | CommonCrawl |
For any function, the average rate of change between two points is the same as the gradient of a straight line between those two points.
Note $\Delta$ (delta) is the Greek symbol for capital D and is used to denote "change."
i.e. $\Delta x$ is read as "change in x."
The derivative (gradient function) $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ is defined by the limit of the above gradient rule as $h \rightarrow 0$.
It also represents the gradient of the tangent to the curve at that point.
This gradient function is usually referred to as the derivative by first principles.
For example, $f'(x) = \lim_{h \to 0} \frac{((x+h)^2 + 2) - (x^2 + 2)}{h} = \lim_{h \to 0} \frac{2xh + h^2}{h} = 2x$. The derivative of $f(x) = x^2 + 2$ is therefore $f'(x) = 2x$.
The gradient at any point on the graph of $f(x)$ can be found using the rule $f'(x) = 2 \times x$.
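The first-principles computation can also be checked with a computer algebra system (an illustrative snippet, not part of the original notes):

import sympy as sp

x, h = sp.symbols('x h')
f = x**2 + 2
derivative = sp.limit((f.subs(x, x + h) - f) / h, h, 0)
print(derivative)             # 2*x
print(derivative.subs(x, 3))  # 6, the gradient of the tangent at x = 3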
A function can only be differentiated at a point if its graph is continuous and smooth at that point.
Also, we can not differentiate the endpoint of a function.
If the graph is discontinuous, or an endpoint, or not smooth then the tangent is undefined.
See here for the conditions of differentiation. | CommonCrawl |
Lai, Ming-Jun and Yin, Wotao. "Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm." (2012) https://hdl.handle.net/1911/102192.
This paper studies the models of minimizing $||x||_1+1/(2\alpha)||x||_2^2$ where $x$ is a vector, as well as those of minimizing $||X||_*+1/(2\alpha)||X||_F^2$ where $X$ is a matrix and $||X||_*$ and $||X||_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing $||x||_1$ and $||X||_*$ under the conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $||x||_1+1/(2\alpha)||x||_2^2$ returns (nearly) the same solution as minimizing $||x||_1$ almost whenever $\alpha\ge 10||x^0||_\infty$. The same relation also holds between minimizing $||X||_*+1/(2\alpha)||X||_F^2$ and minimizing $||X||_*$ for recovering a (nearly) low-rank matrix $X^0$, if $\alpha\ge 10||X^0||_2$. Furthermore, we show that the linearized Bregman algorithm for minimizing $||x||_1+1/(2\alpha)||x||_2^2$ subject to $Ax=b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a solution solution or any properties on $A$. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms. | CommonCrawl |
Proteins are the machines of life. In order to perform their functions, they must move continuously. The motions correspond to equilibrium fluctuations and to non-equilibrium relaxations. At least three different fluctuation processes occur: $\alpha$- and $\beta$-fluctuations and processes that occur even below one Kelvin. The $\alpha$-fluctuations can be approximated by the Vogel-Tammann-Fulcher relation, while the $\beta$-fluctuations appear to follow a conventional Arrhenius law (but may in some cases be better characterized by a Ferry law). Both are usually nonexponential in time. These phenomena are similar in proteins and glasses, but there is a fundamental difference between fluctuations in glasses and proteins: In glasses, they are independent of the environment, in proteins the $\alpha$-fluctuations are slaved to the $\alpha$-fluctuations in the solvent surrounding the protein; they follow their rate coefficients but they are entropically slowed. The studies of the protein motions are actually still in their infancy, but we can expect that future work will not only help understanding protein functions, but will also feed back to the physics of glasses. | CommonCrawl |
In the study of statistical inference, entropy is a measure of uncertainty, or lack of information. But now we can interpret it as a measure of biodiversity: it's zero when just one species is present, and small when a few species have much larger populations than all the rest, but gets big otherwise.
Last time I showed that in this case, the entropy will eventually decrease. It will go to zero as $t \to +\infty$ whenever one species is fitter than all the rest and starts out with a nonzero population—since then this species will eventually take over.
However, I never said the entropy is always decreasing, because that's false! Even in this pathetically simple case, entropy can increase.
This seems to conflict with our idea that the population's entropy should decrease as it acquires information about its environment. But in fact this phenomenon is familiar in the study of statistical inference. If you start out with strongly held false beliefs about a situation, the first effect of learning more is to become less certain about what's going on!
Get it? Say you start out by assigning a high probability to some wrong guess about a situation. The entropy of your probability distribution is low: you're quite certain about what's going on. But you're wrong. When you first start suspecting you're wrong, you become more uncertain about what's going on. Your probability distribution flattens out, and the entropy goes up.
So, sometimes learning involves a decrease in information—false information. There's nothing about the mathematical concept of information that says this information is true.
So, the $i$th term contributes positively to the change in entropy if the $i$th species is fitter than average, but negatively if it's less fit than average.
At what rate does the entropy change at $t = 0$? Which species is responsible for most of this change?
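A quick numerical experiment makes the transient increase visible (my own sketch, assuming the standard replicator equation $\dot p_i = p_i(f_i - \langle f \rangle)$ with constant fitnesses, which is the setting discussed in this series):

import numpy as np

f = np.array([1.0, 2.0, 3.0])        # constant fitnesses; species 3 is the fittest
p = np.array([0.98, 0.01, 0.01])     # start almost certain -- but about the wrong species
dt, steps = 0.01, 1500

def entropy(p):
    return -np.sum(p * np.log(p))

history = []
for _ in range(steps):
    p = p + dt * p * (f - p @ f)     # Euler step of the replicator equation
    p = p / p.sum()                  # guard against round-off drift
    history.append(entropy(p))

print(history[0], max(history), history[-1])   # entropy first rises, then decays toward 0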
I had to work through these examples to understand what's going on. Now I do, and it all makes sense.
Often there is such a quantity. But it's not the naive entropy: it's the relative entropy. I'll talk about that next time. In the meantime, if you want to prepare, please reread Part 6 of this series, where I explained this concept. Back then, I argued that whenever you're tempted to talk about entropy, you should talk about relative entropy. So, we should try that here.
There's a big idea lurking here: information is relative. How much information a signal gives you depends on your prior assumptions about what that signal is likely to be. If this is true, perhaps biodiversity is relative too. | CommonCrawl |
This is a question about Rubik's Cube and generalizations of this puzzle, such as Rubik's Revenge, Professor's cube or in general the $n \times n \times n$ cube.
Let $g(n)$ be the smallest number $m$, such that every realizable arrangement of the $n \times n \times n$ cube can be solved with $m$ moves. In other words, this is the "radius" of the Cayley graph of the $n \times n \times n$ cube group with respect to the canonical generating system.
We have $g(1)=0$, $g(2)=11$ and - quite recently - in 2010 it was proven that $g(3)=20$: God's number is 20.
Question. Is anything known about $g(4)$ or $g(5)$?
I expect that the precise number is unknown, since the calculation for Rubik's cube already took three decades. Nevertheless, is there any work in progress? Are any lower or upper bounds known?
Question. Is anything known about the asymptotic value of $g(n)$?
I am not an expert, but I remember seeing this press release from MIT not too long ago. The corresponding arXiv article should answer your second question.
Based on this discussion from 2015, God's number for the 4x4x4 cube lies between 35 and 55 (inclusive) for the outer block turn metric, with the corresponding bounds being 32 and 53 for the single slice turn metric, and 29 and 53 for the block turn metric. Scroll down to the comments for the optimal bounds.
The choice of metric is irrelevant for the asymptotic results of Demaine, Demaine, Eisenstat, Lubiw and Winslow which gives $\Theta(n^2/\log(n))$ as the answer to question two.
Since this question was first asked, cube20.org claims to have also shown that God's number for the quarter turn metric is 26.
As for the 5x5x5, one can find this post claiming an upper bound of 130 for the outer block turning metric.
Abstract: Using the relation between distance modulus $(m - M)$ and redshift $z$, deduced from the Friedman-Robertson-Walker (FRW) metric, and assuming different values of the deceleration parameter $q_0$, we constrained the Hubble parameter $h$. The estimates of the Hubble parameter we obtained, using the median values of the data from the NASA Extragalactic Database (NED), are: $h = 0.7\pm0.3$ for $q_0 = 0$, $h = 0.6\pm0.3$ for $q_0 = 1$ and $h = 0.8\pm0.3$ for $q_0 = -1$. The corresponding age $\tau$ and size $R$ of the observable universe were also estimated as: $\tau = 15\pm1$ $Gyr$, $R = (5\pm2) \times 10^3$ $Mpc$; $\tau = 18\pm1$ $Gyr$, $R=(6\pm2)\times 10^3$ $Mpc$; and $\tau=13\pm1$ $Gyr$, $R = (4 \pm 2)\times10^3$ $Mpc$ for $q_0 = 0$, $q_0 = 1$ and $q_0 = -1$ respectively.
What are the debts and how did the mathematician determine them?
4, 4, 3 and 1 dollar.
Now, the mathematician knows his own debt, so if he's unsure of the distribution of the others, there must be multiple partitions of 12 in four parts with a product equal to his debt. That's only the case for 48, which can be split as $6 \times 2 \times 2 \times 2$ and $4 \times 4 \times 3 \times 1$. Since there is a single least number among those (corresponding to the statistician's debt), it must be the second option.
Nothing is said about the individual debts being unique, only that the lowest one must be.
The mathematician can't owe \$45 because there is only one way to sum to 12 and multiply to 45.
The mathematician can't owe \$36 because there is only one way to sum to 12 and multiply to 36.
The mathematician can't owe \$28 because there is only one way to sum to 12 and multiply to 28.
The mathematician could owe \$48 because 2,2,2,6 is another valid solution that sums to 12 and multiplies to 48, but is ruled out when the discovery is made.
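The case analysis is easy to verify exhaustively (an illustrative snippet):

from itertools import combinations_with_replacement
from math import prod
from collections import defaultdict

by_product = defaultdict(list)
for parts in combinations_with_replacement(range(1, 13), 4):
    if sum(parts) == 12:
        by_product[prod(parts)].append(parts)

ambiguous = {k: v for k, v in by_product.items() if len(v) > 1}
print(ambiguous)   # {48: [(1, 3, 4, 4), (2, 2, 2, 6)]} -- the only ambiguous product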
The following question I jotted down from an online quantum chemistry pdf.
in which $P_j$ is the projection operator and $H$ is the Hamiltonian. We are given the usual time-dependent form of Schrodinger's equation.
If this works (I don't assume that), my question is whether there is a direct way to do this, using the definition of $P_j$, and if so whether it would not also be an appeal to this theorem?
$^*$ Using eq. (9) from this note.
Before I go into how this result can be derived, I want to point out a mistake in your attempt. $|\psi\rangle\langle\psi|\neq1$, but rather $\sum_i|i\rangle\langle i|=1$, where the sum is over a complete set of states. Basically, this result only holds when you sum over all the basis vectors that span the space you are studying, not just for any single vector in the space. It's easiest to understand this in analogy to more conventional vectors. Assuming a column vector $v$ is normalized, then $v^\dagger v=1$, which is the inner/dot product of the vector with itself. This is analogous to $\langle\psi|\psi\rangle=1$ in bra-ket notation. If we reverse the order of the product, $vv^\dagger=\mathbf A$ where $\mathbf A$ is some matrix. This operation is called an outer product and is akin to $|\psi\rangle\langle\psi|$. As it turns out, for a complete orthonormal basis, the sum of these outer products has to equal the identity matrix $\mathbf1$. We can make sense of this by recognizing that the sum is a sum of projectors onto all the basis vectors, which in QM might be all of the possible eigenstates of the system. Your wavefunction has to be some linear combination of these basis vectors, so if you project your wavefunction onto all the eigenstates and then sum those projections, you should get your original wavefunction back; hence the sum of all the projectors is the identity matrix.
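The distinction is easy to see numerically with a random orthonormal basis (an illustrative snippet, real-valued for simplicity so the bra is just the transpose):

import numpy as np

rng = np.random.default_rng(1)
dim = 4

# Columns of Q form a random orthonormal basis |1>, ..., |dim>.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
psi = Q[:, 0]                                         # one normalized state

print(np.isclose(psi @ psi, 1.0))                     # <psi|psi> = 1 (a number): True
print(np.allclose(np.outer(psi, psi), np.eye(dim)))   # |psi><psi| is NOT the identity: False
completeness = sum(np.outer(Q[:, i], Q[:, i]) for i in range(dim))
print(np.allclose(completeness, np.eye(dim)))         # sum_i |i><i| = 1: True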
Abstract: These lectures provide an overview of Quantum Chromodynamics (QCD), the SU(3) gauge theory of the strong interactions. The running of the strong coupling and the associated property of Asymptotic Freedom are analyzed. Some selected experimental tests and the present knowledge of $\alpha_s$ and the quark masses are briefly summarized. A short description of the QCD flavour symmetries and the dynamical breaking of chiral symmetry is also given.
Description: 50 pages, 24 figures, 1 table. Work presented at the ICTP Summer School in Particle Physics, held from 21 June to 9 July 1999 in Trieste (Italy). Editors: G. Senjanovc and A. Yu. Smirnov.
Abstract: The simple $m^2\phi^2$ potential as an inflationary model is coming under increasing tension with limits on the tensor-to-scalar ratio $r$ and measurements of the scalar spectral index $n_s$. Cubic Galileon interactions in the context of the Horndeski action can potentially reconcile the observables. However, we show that this cannot be achieved with only a constant Galileon mass scale because the interactions turn off too slowly, leading also to gradient instabilities after inflation ends. Allowing for a more rapid transition can reconcile the observables but moderately breaks the slow-roll approximation leading to a relatively large and negative running of the tilt $\alpha_s$ that can be of order $n_s-1$. We show that the observables on CMB and large scale structure scales can be predicted accurately using the optimized slow-roll approach instead of the traditional slow-roll expansion. Upper limits on $|\alpha_s|$ place a lower bound of $r\gtrsim 0.005$ and conversely a given $r$ places a lower bound on $|\alpha_s|$, both of which are potentially observable with next generation CMB and large scale structure surveys. | CommonCrawl |
For any given $p$ such that the $p$ dependent part of the argument is a negative integer one can do the usual Laurent expansion of the Gamma function in $\epsilon$ for each of the 3 factors and then multiply. But what can be said about the Laurent expansion in general as a function of $p$?
I wish one could write down the Laurent expansion in $\epsilon$ as a function of $p$!
One sees that there are special cases: if $p$ is such that there are two integers $N$ and $M$ satisfying $p = 1-2N^2 = -5 -2M^2$, then the latter two Gamma functions can have poles simultaneously (like $N=2, M =1, p= -7$). Existence of such special $p$ naively seems to make things more tricky.
Are you missing a square root in the definition of your product? Otherwise I can't imagine why it should matter that the argument of $\Gamma$ is near a square.
@Greg Martin Are you referring to the fact that I have written $N^2$ and $M^2$? That I did so that, $(p-1)/2 = - N^2$ is a negative definite integer and hence the second Gamma function has a pole. Similarly, $(5+p)/2 = -M^2$ is again a negative definite integer and hence the 3rd Gamma function has a pole. It would be great if you can help with the question.
Anirbit: the Gamma function has poles for every non-positive integer argument. Squares do not seem to enter the picture anywhere.
@S.Carnahan As you can see in my comment above to Greg, I have explained why I parametrized using squares - to ensure negative definiteness.
Forget the $p$'s and consider a product $\Gamma(x-\epsilon)\Gamma(y-\epsilon)\ldots\Gamma(z-\epsilon)$. If one of $x,y,\ldots,z$ is not a negative integer, it is regular, and you can put $\epsilon=0$ in the factor. So it is enough to consider the case where all of $x,y,\ldots,z$ are negative integers. In this case, if there are $k$ factors, $\epsilon^k\Gamma(x-\epsilon)\Gamma(y-\epsilon)\ldots\Gamma(z-\epsilon)$ is analytic at $\epsilon=0$. | CommonCrawl |
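The pole structure described above is easy to probe numerically, using $\Gamma(z) \approx \frac{(-1)^n}{n!\,(z+n)}$ near $z=-n$ (an illustrative check of my own, not part of the original answer):

from mpmath import mp, gamma, mpf

mp.dps = 30
eps = mpf('1e-12')

# Leading pole coefficient: eps * Gamma(-n - eps) -> (-1)**(n+1) / n!
for n in (1, 2, 3):
    print(n, eps * gamma(-n - eps))      # ~ 1, -0.5, 0.1666...

# Hence eps**3 times a product of three such factors stays finite:
print(eps**3 * gamma(-1 - eps) * gamma(-2 - eps) * gamma(-3 - eps))   # ~ -1/12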
by Eric Worrall so it may be a good idea to recall the story and mention several rather predictable events that took place in recent days.
Nuclear fusion cannot be "cold". This fact must be comprehensible to every good undergraduate student who has gone at least through the first semester of some nuclear physics course or anything equivalent to it. The energy needed for two nuclei to overcome the repulsive Coulomb barrier and get very close – which is what the word "fusion" really means – is as high as many megaelectronvolts.
The electrostatic potential energy is \(Q_1 Q_2 / 4\pi\varepsilon_0 R\). In the \(\varepsilon_0=\hbar=c=1\) units, if you want \(R\) to be comparable to the QCD scale (nuclear radius), some \(1/150\MeV\), you will need the energy around \(Q^2 \times 150\MeV\) which is still megaelectronvolts because \(Q^2\sim \alpha\sim 1/137\). The energy gets even higher as \(Z_1 Z_2\) if there are many protons in the nuclei.
Because of the estimate \(E\sim kT\) in statistical physics, a single nucleus with this energy had to have the temperature corresponding to megaelectronvolts – many millions of degrees – to start with. Without this energy, the Coulomb barrier has to be "quantum tunneled through" and the resulting rates are negligibly tiny. If you want to increase the probability of tunneling to "reasonably visible" values, you simply need to equip the nuclei with the initial (kinetic) energy that is a sizeable fraction of the barrier, so it's still megaelectronvolts and millions of degrees.
Even if the nuclei were slow (cold) before the fusion, the very fact that megaelectronvolts of energy are produced per nucleus pair means that lots of energy per particle is produced in the fusion, to the neighborhood of the macroscopically fusing matter becomes hot afterwards – millions of degrees.
The Sun's core has 15 million degrees Celsius but the reactions proceed slowly – it's OK, the Sun lives for billions of years and the power is still high because the Sun is large – but realistic fusion reactors on Earth typically demand higher rates so they need temperatures comparable to 100 million degrees Celsius.
Nuclear physics and its events – especially fission and fusion – simply do take place at some energy scale that is about 1 million times higher than the energy scale of atomic physics. That's what makes the "energy in uranium" 1 million times more concentrated than the "energy in coal or lithium", for example. But it also means that the conditions in which the reactions are taking place are much more extreme.
I've explained these matters in 2011 when I reacted to Watts' article Andrea Rossi's E-cat fusion device on target that has really shocked me (Watts hopefully agrees that the device is not "on target" now in 2016 LOL). Anthony wasn't the only "typically sensible" man who has promoted this self-evident nonsense. As recently as in 2013, CMS particle physicist Tommaso Dorigo was writing enthusiastic reports on these Rossi cold fusion papers, therefore showing Dorigo's complete incompetence in nuclear physics.
Most of the people who have supported this crap were some kind of "extraterrestrials are everywhere around us" and "restrictions sold as constraints from physics are just an evil propaganda – no laws of Nature can ever stop geniuses led by saviors such as Rossi". So with this concentration of a mixture of stupidity and fanaticism, you may perhaps understand that even a life-long criminal such as Andrea Rossi may keep on cheating and stealing and lying for years.
Incidentally, if you have forgotten, the experiments that have claimed to "prove" that lots of energy is produced in Rossi's gadgets were claiming to have vaporized lots of water, much more water than could have been vaporized from the electricity that the experimenters admitted to have consumed. However, the reality is that almost no water was vaporized. They just saw "just a little bit of vapor" and screamed that all the water had to be above the boiling point – while neglecting the fact that the water was actually safely below the boiling point because the boiling point is higher at higher pressure.
OK, nevertheless, some people think that even a self-evident charlatan such as Rossi should get the money to become important and some three years ago, the nutty Rossi cultists boasted that "they" have secured $100 million from a company called Industrial Heat LLC which defines itself by "ambitious" goals – mostly a "rigorous verification and development of low-energy fusion technologies" – but at least, the capital in the company is damn real.
When scammers such as Rossi start to build companies and team up with others, it may become confusing and all the companies may look similar but that shouldn't obscure the point that some companies are the thieves while others are the victims. So there may be lots of companies that are basically just "sock puppets of Rossi himself" (e.g. the company named Defkalion; only a small Canadian branch of it has survived). But Industrial Heat LLC was clearly a victim, the only company cooperating with Rossi, not led by Rossi, but paying him the real money of some independent entrepreneurs. When the thieves and victims sign an agreement, they cooperate. But it's obvious that nothing good could have come out of this cooperation because it's unscientific nonsense. So the actual difference between the two kinds of companies becomes clear later.
where you can learn that as recently as Fall 2015, Industrial Heat LLC folks were talking about "isotopes they observed" and other promising signs.
You may want to read a short statement by the victim company, Industrial Heat LLC.
What happened is that after 3 years, Industrial Heat LLC has finally figured out that Rossi's "ingenious reactor" is complete fraud and doesn't produce a microjoule of energy. They have previously paid $11 million to Rossi and because the remaining $89 million in their contract depended on a successful replication etc., they obviously decided not to pay the remaining $89 million.
Rossi has abused the fact that he has managed to avoid the electric chair so far and sued them. According to his complaint, they want to steal his "ingenious" idea that has been "proven" by many "independent" researchers, and they are probably just pretending that it doesn't work. Industrial Heat LLC just states that the thing doesn't work and that Rossi and pals have repeatedly breached their agreements in other ways, too.
But you see that the situation into which Industrial Heat LLC have maneuvered themselves is potentially dangerous for them because in other situations, it could be plausible that someone's business partner just pretends that an idea doesn't work, so that he can steal it for free. Obviously, Rossi's idea really doesn't work because if it did, it would be far more profitable for Industrial Heat LLC to make it work, mass produce the reactors, and pay the modest $89 million to Rossi. But it's plausible and this giant Italian criminal who should have been executed for many years is simply an experienced aßhole who has learned to deal with the courts, too.
Now, the Industrial Heat LLC is clearly not innocent. To some extent, they are parts of the same bogus industry as Rossi himself – and given their "mission", nothing good can ever come out of their work, either. So the wealthy men should have sent 10% of the money they have already paid to Rossi ($1 million or so) to your humble correspondent via PayPal instead. That would be a far more promising way to have a chance to develop new cool energy sources.
However, despite this incompetence of the Industrial Heat LLC which makes you think that they may deserve this hassle, I cannot overlook the fact that they are victims. They may have just idealistically wanted to find the truth and discover something useful that "could" work, and they just didn't understand why it couldn't, while Rossi is a cold-blooded criminal who just decided to steal some $100 million. Some of the money that Rossi has already stolen – and may steal, if his dirty tricks succeed – may have been paid by very good people who have worked hard for a long time, and so on.
So I obviously wish Industrial Heat LLC a lot of good luck in their new efforts to liquidate the Italian criminal, their former pal. But this sympathy cannot change the fact that people and companies simply have to be more careful when they pay lots of money to folks that others consider self-evidently and demonstrably fraudulent. | CommonCrawl |
Recently I decided to give Julia a try. Julia is a relatively new scientific programming language that is free, open source, and fast. Besides the huge appeal of a performant high level scripting language, Julia intrigued me because the language actually has a reasonable type system and seems to be well-designed (unlike Matlab, which Julia is quite similar to). I recently implemented low-rank matrix approximations (in Matlab) for my numerical linear algebra class, so I figured reimplementing part of the algorithm in Julia would be a good way to get my feet on the ground in Julia land.
Rather than implementing all of the routines required to compute the low-rank approximation to a matrix, I just implemented the QR decomposition of a tridiagonal matrix using Householder reflections. To learn more about this algorithm, I highly recommend checking out section 3.4.1 of "Applied Numerical Linear Algebra" by Demmel. I was quite interested in the performance of Julia, so I iteratively optimized my algorithm and compared each version to an implementation of the same algorithm using Python and numpy.
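A minimal sketch of the naive version follows (assuming a recent Julia release; names and details are illustrative and may differ from the code that was actually benchmarked):

```julia
using LinearAlgebra

# Naive Householder QR: forms the full n-by-n reflector at every step.
function naive_qr(A::Matrix{Float64})
    n = size(A, 1)
    R = copy(A)
    Q = Matrix{Float64}(I, n, n)
    for i in 1:n-1
        x = R[i:n, i]
        v = copy(x)
        v[1] += (x[1] >= 0 ? 1.0 : -1.0) * norm(x)   # Householder vector
        v /= norm(v)
        P = Matrix{Float64}(I, n, n)
        P[i:n, i:n] -= 2.0 * (v * v')                # full projection matrix -- wasteful
        R = P * R                                    # zeros column i below the diagonal
        Q = Q * P                                    # accumulate Q = P_1 P_2 ... P_{n-1}
    end
    return Q, R
end
```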
This extremely inefficient implementation mainly serves to illustrate how to compute the QR decomposition of a matrix using Householder projections.
In iteration $i$ of the loop, the Householder projection $P$ makes elements below the diagonal of column $i$ equal to 0. This leads to $R$ being upper triangular by the end of the loop.
Note that this loop explicitly forms the full $P$ matrix in memory. This leads to the poor performance of this implementation: ~0.31s for 200x200 matrix.
This very small "optimization" caused no significant change in run-time.
Here we'll take advantage of the tridiagonal form of input matrix $T$. To recall, a matrix is tridiagonal if it is 0 everywhere except for its superdiagonal, main diagonal, and subdiagonal.
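A sketch of the tridiagonal-aware version (again illustrative, assuming a recent Julia release): each reflector only needs to act on rows $i$ and $i+1$, and only columns $i$ through $i+2$ of $R$ can change.

```julia
using LinearAlgebra

# Householder QR for a tridiagonal matrix: the reflector M is 2x2 and
# only a 2x3 block of R is touched at each step.
function td_qr(T::Matrix{Float64})
    n = size(T, 1)
    R = copy(T)
    Qt = Matrix{Float64}(I, n, n)                 # accumulates Q^T
    for i in 1:n-1
        x = R[i:i+1, i]
        v = copy(x)
        v[1] += (x[1] >= 0 ? 1.0 : -1.0) * norm(x)
        v /= norm(v)
        M = Matrix{Float64}(I, 2, 2) - 2.0 * (v * v')
        jmax = min(i + 2, n)                      # R stays banded: two superdiagonals
        R[i:i+1, i:jmax] = M * R[i:i+1, i:jmax]
        Qt[i:i+1, :] = M * Qt[i:i+1, :]           # rows i, i+1 of Q^T
    end
    return Qt', R
end
```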
This optimization gives huge performance gains both in terms of forming the matrix $M$ (as it is just $2 \times 2$ rather than $n \times n$) as well as the matrix multiplications by $R$ and $Q^T$. This implementation takes only ~0.005s for 200x200 matrix, making it ~23 times faster than the previous implementation. As the code just became much faster, all future benchmarking will be done with 1500x1500 tridiagonal matrices. This implementation takes ~0.36s for 1500x1500 matrix.
Profiling this code gives ~0.19s for 1500x1500 matrix.
One small difficulty I had with this translation was the difference in indexing between Julia and Python. Slicing in Python does not include the end index, but slicing in Julia does. Profiling this code gives ~0.16s for 1500x1500 matrix (edit: This previously read 0.25s, that was for numpy.linalg.qr). This is quite similar to the 0.19s from the equivalent Julia code, and this is not surprising since most of the run-time is spent in BLAS calls.
I do not know of any way to speed up this Python code (without using either another language or an external library). The optimizations that I make to the Julia code below would not work in Python due to its lack of a JIT (and its relatively poor performance).
Besides the hopefully optimized machine code the JIT is generating, inlining the matrix multiplication also allowed me to fuse the loops required for the two separate matrix multiplications. The profiler confirms that Julia's JIT did a good job, as this implementation takes ~0.05s for a 1500x1500 matrix. This is a 4x speed increase just from unrolling a loop!
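A sketch of what such an inlined version can look like (the $2\times 2$ product written out by hand, with the updates of $R$ and $Q^T$ sharing one loop over columns; function and variable names are illustrative):

```julia
using LinearAlgebra

function td_qr_inlined(T::Matrix{Float64})
    n = size(T, 1)
    R = copy(T)
    Qt = Matrix{Float64}(I, n, n)
    for i in 1:n-1
        a, b = R[i, i], R[i+1, i]
        nrm = hypot(a, b)
        nrm == 0.0 && continue
        v1 = a + (a >= 0 ? nrm : -nrm)
        v2 = b
        vn = hypot(v1, v2)
        v1 /= vn; v2 /= vn
        m11 = 1.0 - 2.0*v1*v1; m12 = -2.0*v1*v2    # entries of M = I - 2vv'
        m22 = 1.0 - 2.0*v2*v2
        for j in 1:n                               # one fused pass over columns
            if i <= j <= min(i + 2, n)             # R only changes in columns i..i+2
                r1, r2 = R[i, j], R[i+1, j]
                R[i, j]   = m11*r1 + m12*r2
                R[i+1, j] = m12*r1 + m22*r2
            end
            q1, q2 = Qt[i, j], Qt[i+1, j]
            Qt[i, j]   = m11*q1 + m12*q2
            Qt[i+1, j] = m12*q1 + m22*q2
        end
    end
    return Qt', R
end
```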
Julia arrays are stored in column-major order. This means that A[i, j] is directly next to A[i+1, j] in memory and that we want to iterate over columns rather than rows whenever possible to take full advantage of CPU caching. In both the computation of $R$ and $Q^T$, I was iterating over rows. However, I really want $Q$, not $Q^T$. Thus I can have the double-win of getting to iterate over columns rather than rows, and not needing to transpose $Q^T$ to get $Q$ after finishing the loop.
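A sketch of this cache-friendlier variant: it differs from the previous sketch only in how the orthogonal factor is accumulated. $Q$ is built directly, so each reflector touches two contiguous columns and no final transpose is needed.

```julia
using LinearAlgebra

function td_qr_colmajor(T::Matrix{Float64})
    n = size(T, 1)
    R = copy(T)
    Q = Matrix{Float64}(I, n, n)
    for i in 1:n-1
        a, b = R[i, i], R[i+1, i]
        nrm = hypot(a, b)
        nrm == 0.0 && continue
        v1 = a + (a >= 0 ? nrm : -nrm)
        v2 = b
        vn = hypot(v1, v2)
        v1 /= vn; v2 /= vn
        m11 = 1.0 - 2.0*v1*v1; m12 = -2.0*v1*v2
        m22 = 1.0 - 2.0*v2*v2
        for j in i:min(i + 2, n)                   # banded update of R
            r1, r2 = R[i, j], R[i+1, j]
            R[i, j]   = m11*r1 + m12*r2
            R[i+1, j] = m12*r1 + m22*r2
        end
        for k in 1:n                               # columns i, i+1 of Q: contiguous memory
            q1, q2 = Q[k, i], Q[k, i+1]
            Q[k, i]   = q1*m11 + q2*m12
            Q[k, i+1] = q1*m12 + q2*m22
        end
    end
    return Q, R
end
```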
Benchmarking this code shows each iteration takes ~0.03 seconds for a 1500x1500 matrix. This is about ~2x as fast as without cache optimizations, and about 6x as fast as td_qr4, which performs the exact same math.
Overall, this project gave me a very favorable impression of Julia. The language was no slower to write than Python and was overall pretty pleasant to program in. Critically, I was able to perform optimizations in Julia that were not possible in Python. Julia seems to hit the awesome sweet spot of being a high level scripting language with performance more comparable to C or Fortran. I'm going to try doing the rest of the assignments for my numerical analysis class in Julia (particularly with IJulia, which merges the best aspects of IPython notebooks and Julia).
How to calculate this Feynman diagram?
I have looked on the internet and at quite a few references, and could not figure it out. Any help is appreciated.
It is interesting to view your question above not only in light of the Goldstone-Wilczek (G-W) approach (G-W provided a method for computing the fermion charge induced by a classical background profile), but also by computing the $1/2$ fermion charge found by [Jackiw-Rebbi](http://journals.aps.org/prd/abstract/10.1103/PhysRevD.13.3398) using the G-W method. For simplicity, let us consider the 1+1D case, and let us consider the $Z_2$ domain wall and the $1/2$-charge found by Jackiw-Rebbi. The construction, valid for 1+1D systems, works as follows.
Consider a scalar background $\lambda(x)$ with a kink profile, for instance $\lambda(x)=\lambda_0\tanh\big(\mu(x-x_0)\big)$, separating regions with opposite values of the v.e.v. of $\lambda$, where $x_0$ denotes the center of the domain wall.
You can try to extend to other dimensions, but then you may need to be careful and you may not be able to use the bosonization.
See more details of the derivation on [page 13](https://arxiv.org/abs/1403.5256) of [this paper](http://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.195134).
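For reference, the bosonized result takes the standard Goldstone-Wilczek form (a sketch, assuming a coupling of the type $\bar\psi(\phi_1+i\gamma^5\phi_2)\psi$ of the fermion to two scalar backgrounds):
$$\langle j^\mu\rangle=\frac{1}{2\pi}\,\epsilon^{\mu\nu}\partial_\nu\theta,\qquad \theta=\arctan\!\left(\frac{\phi_2}{\phi_1}\right),$$
so the charge bound to a profile is
$$Q=\int dx\,\langle j^0\rangle=\frac{\theta(+\infty)-\theta(-\infty)}{2\pi},$$
which reduces to the Jackiw-Rebbi value $Q=\pm 1/2$ when $\theta$ winds by $\pm\pi$ across the $Z_2$ domain wall.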
Thanks for the reply. Indeed G-W's current becomes very clear in the language of bosonization. It is possible as well to come to the same formula via dimensional reduction, see for example http://journals.aps.org/prb/abstract/10.1103/PhysRevB.78.195424. Basically the angle $\theta$ in G-W's current corresponds to the gauge field in the Quantum Hall response, which is kind of an "extension" to higher dimensions.
But I am just curious how G-W came up with this diagram in the first place. Why do we have to insert five scalar fields in the diagram, and not one or two, for example?
It seems related to the soliton background, for which I could not find a suitable reference... Any ideas?
1) Yes, the dimensional reduction may work.
2) The Feynman diagram shall come up for any number of scalar field background insertions. The 5 dashed lines are just schematic. You can consider just 1 dashed line, which is good enough. I remember that I computed it directly through this Feynman diagram before, but it is a bit of a long calculation. I have to dig it out; it is somewhere in my files. In any case, the two approaches shall give the same result.
That would make much more sense; I was struggling to understand why five insertions give only one derivative of the scalar field... Having said that, it would be great if you could sketch your calculation here, just briefly. Thank you.
In the lectures I will formulate a conjecture asserting that there is a hidden action of certain motivic cohomology groups on the cohomology of arithmetic groups. One can construct this action, tensored with $\mathbb C$, using differential forms. Also one can construct it, tensored with $\mathbb Q_p$, by using a derived version of the Hecke algebra (or a derived version of the Galois deformation rings).
I will describe these constructions and the evidence for the conjecture. The ideas and results presented here are based on papers with Prasanna and Galatius and Harris (all on the arXiv) and on work in progress with Darmon, Harris, and Rotger.
A fun way to teach first year students the different methods of proof is to play a game with chocolate bars, Chomp.
The players take turns to choose one chocolate block and "eat it", together with all other blocks that are below it and to its right. There is a catch: the top left block contains poison, so the first player forced to eat it dies, that is, loses the game.
Let's prove some results about Chomp illustrating the strengths and weaknesses of different methods of proof.
By far the most satisfying method is the 'direct proof'. Once you understand it, the truth of the statement is inescapable.
Theorem 1: The first player to move has a winning strategy for a $2 \times n$ chocolate bar.
Here's the winning move: you take the lower right block!
What can the other player do? She can bring the bar back to a rectangular shape of strictly smaller size (in which case you eat the lower right block again), or she can eat more blocks from the lower row (but then, after her move, you eat just enough blocks from the top row to bring the bar back to a shape in which the top row has exactly one more block than the lower one). You're bound to win!
Theorem 2: Given any Chomp position, either the first player to move has a winning strategy or the second player has one.
A proof by induction relies on simplifying the situation at hand. Here, each move reduces the number of blocks in the chocolate bar, so we can apply induction on the number of blocks.
If there is just one block, it must contain the poison and so the first player has to eat it and loses the game. That is, here the second player has a winning strategy and we indicate this by labelling the position with $0$. This one-block position is the 'basis' for our induction.
Now, take a position $P$ for which you want to prove the claim. Look at all possible positions $X$ you get after one move. All these positions have strictly less blocks, so 'by induction' we may assume that the result is true for each one of them. So we can label each of these positions with $0$ if it is a second player win, or with $\ast$ if it is a first player win.
How to label $P$? Well, if all positions in $X$ are labeled $\ast$ we label $P$ with $0$ because it is a second player win. The first player has to move to a starred position (which is a first player win) and so the second player has a winning strategy from it (by moving to a $0$-position). If there is at least one position in $X$ labeled $0$ then we label $P$ with $\ast$. Indeed, the first player can move to the $0$-position and then has a winning strategy, playing second.
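To make the induction concrete, here is a small memoized labelling routine (a sketch in Julia; the encoding of a position as a non-increasing vector of row lengths is just one convenient choice, not part of the original post):

```julia
# A Chomp position is a non-increasing vector of row lengths; [4, 4, 4] is a 3x4 bar.
# The poisoned block sits at row 1, column 1.
const chomp_cache = Dict{Vector{Int},Bool}()

# Return true if the player to move has a winning strategy from `rows`
# (label *), false if the position is a second-player win (label 0).
function iswin(rows::Vector{Int})
    haskey(chomp_cache, rows) && return chomp_cache[rows]
    result = false
    for i in 1:length(rows), j in 1:rows[i]
        (i, j) == (1, 1) && continue                  # eating the poison loses on the spot
        newrows = [k < i ? rows[k] : min(rows[k], j - 1) for k in 1:length(rows)]
        filter!(x -> x > 0, newrows)                  # drop rows that became empty
        if !iswin(newrows)                            # a move to a 0-position wins
            result = true
            break
        end
    end
    chomp_cache[rows] = result
    return result
end

iswin([1])        # false: only the poison is left, so the mover loses
iswin([5, 5])     # true, as in Theorem 1
iswin([4, 4, 4])  # true, as Theorem 3 predicts
```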
What to do if neither a direct proof nor a reduction argument allowing a proof by induction can be found? Virtually all professional mathematicians have no objection whatsoever to resorting to a very drastic tactic: proof by contradiction.
Theorem 3: The first player has a winning strategy for any rectangular starting position larger than $1\times1$.
If the first player does not have a winning strategy for that rectangular chocolate bar, the second player must have one by Theorem 2. That is, for every possible first move the second player has a winning response.
Assume the first player eats the lower right block.
The second player must have some winning response to this. Whatever her move, the resulting position could have been obtained by the first player already after the first move.
So, the second player cannot have a winning strategy.
This strategy stealing argument is a cheap cheat. We have not the slightest idea of what the winning strategy for the first player is or how to find it.
Sooner or later in their career they will hear arguments in favour of 'constructive mathematics', which does not accept the law of the excluded middle.
Andrej Bauer has described what happens next. They will go through the five stages people need to pass through to come to terms with life's traumatising events: denial, anger, bargaining, depression, and finally acceptance.
His paper, Five stages of accepting constructive mathematics, has just appeared in the Bulletin of the AMS.
This paper is an extended version of a talk he gave at the IAS.
Theorem 3 fails in the 1×1 case! | CommonCrawl |
A convex polygon, or prototile, in the regular decomposition, or tiling, of the plane by equal polygons, i.e. in a decomposition for which there is a group of motions of the plane mapping the decomposition into itself and acting transitively on the set of tiles. In the Euclidean plane there are 11 combinatorial types of tilings, called Shubnikov–Laves tilings (see Fig.). However, the symmetry group for a single combinatorial type can act in different ways. The relationship between the combinatorial type and the symmetry group is characterized by the so-called adjacency symbol. In the Euclidean plane, there are 46 general regular tilings with different adjacency symbols.
The planigons in the Lobachevskii plane are regular polygons with an arbitrary number of sides $k$ such that an arbitrary fixed number $\alpha$ of them meet at each vertex of the planigon. For numbers of sides $k=3,4,5,6$, and $>6$, one may choose a planigon such that $\alpha\geq7$, $\geq5$, $\geq4$, $\geq4$, and $\geq3$. The multi-dimensional analogue of a planigon is a stereohedron.
For more general types of periodic patterns the relationship between combinatorial type and symmetry group can be more conveniently described by the so-called Delone symbols, also spelled Delaney symbols, [a2]–[a3].
A general survey and a modern classification of tilings including planigons, i.e. prototiles of isohedral tilings, is given in [a1].
Sankara-Ramakrishnan, R and Vishveshwara, Saraswathi (1989) A hydrogen bonded chain in bacteriorhodopsin by computer modeling approach. In: Journal of Biomolecular Structure & Dynamics, 7 (1). pp. 187-205.
The 7 $\alpha$-helical segments of bacteriorhodopsin (BR) passing through the membrane are investigated for a continuous hydrogen bonded chain (HBC). The study is carried out by a computer modeling approach. It is assumed that the 7 helices are placed as (AGFEDCB), which has been accepted as the best model by several groups. Helices A, D, E and G are considered to be present in right-handed $\alpha$-helical conformation. The relative orientations of these helices are represented by the Eulerian angles $\alpha$, $\beta$, and $\gamma$. For the helices B, C and F, which contain a proline in the middle, several conformational possibilities were considered. In these cases, apart from the Eulerian angles $\alpha$, $\beta$ and $\gamma$, the dihedral angles $\Phi_{p-1}$ and $\Psi_{p-1}$ of the residues immediately preceding the proline residue in the helical regions were also used in fixing the positions of the helices with respect to each other. All these parameters were varied to fit the top, middle, and bottom distances reported by electron diffraction studies. A good fit was obtained for all right-handed $\alpha$-helical conformations and also for helices B, C and F with a left-handed turn at the residue preceding proline. Hence 2 structures were analyzed for a continuous HBC: structure I, which contained all 7 helices in right-handed $\alpha$-helical conformation, and structure II, which had the helices A, D, E and G in right-handed conformation and the helices B, C and F in right-handed $\alpha$-helical conformation with a left-handed turn at the residue preceding proline. All possible staggered conformations were considered for the side chains and the inter-atomic distances were analyzed for hydrogen bonds. It was possible to obtain a continuous chain in both structures which includes most of the residues found to be important by the experiments. However, lysine-216 has to be considered in 2 different conformations to connect the cytoplasmic side with the extracellular side. The overall height spanned by the HBC is about 25 Å. The chains obtained for both structures I and II are analyzed in terms of the conformational parameters. It has also been possible to place the retinal in the region predicted by the experiments. The tryptophan residues which affect the spectral characteristics can be aligned on either side of the retinal.
Abstract: In recent work, globally well-defined Type IIB supergravity solutions with geometry $AdS_6 \times S^2$ warped over a Riemann surface $\Sigma$ were constructed and conjectured to describe the near-horizon geometry of $(p,q)$ five-brane webs in the conformal limit. In the present paper, we offer more evidence for this interpretation of the supergravity solutions in terms of five-brane webs. In particular, we explore the behavior of probe $(p,q)$-strings in certain families of these $AdS_6 \times S^2\times \Sigma$ backgrounds and compare this behavior to that predicted by microscopic, brane web considerations. In the microscopic picture, we argue that the embedding of a probe string may give rise to the formation of string junctions involving open strings anchored on the branes of the web. We then identify a quantity on the supergravity side that is conjectured to be equivalent to the total junction tension in a class of backgrounds corresponding to brane webs with four semi-infinite external five-branes. In the process, we will show that for general brane web backgrounds, the minimal energy probe string embeddings do not coincide with the embeddings preserving half of the background supersymmetries. | CommonCrawl |