We are concerned with the asymptotic behavior of global solutions for a class of reaction-diffusion systems under homogeneous Neumann boundary conditions. An example of the systems we consider in this paper is what we call a diffusive epidemic model. After showing that every global solution converges uniformly to the corresponding constant function as $t \to \infty$, we investigate the rate of this convergence. We obtain it by means of $L^p$-estimates, integral equations via analytic semigroups, fractional powers of operators, and some imbedding relations. Differential Integral Equations, Volume 9, Number 4 (1996), 761-778.
CommonCrawl
A monoid is an algebraic structure with a single associative binary operation and an identity element. Is (topology, union, empty set), with a basis of the topology as generating set, a monoid? That is: the set is $\tau$, the generators are a basis of $\tau$, the operation is $\cup$, and the unit is the empty set. Is this a monoid? (A small sanity check on a finite topology is sketched after the list of related questions below.)
Related questions: What is the Krull dimension of the Burnside ring of $\mathbb N$? Is every summation structure a complete monoid? Why are monoids not treated in most algebra courses? What properties of $R$ does the monoid ring $R[M]$ inherit? How to check if the identity function lies in a given set of functions to verify monoids? How to think about this monoid? Can every monoid be turned into a ring? Are simple commutative monoids monogeneous? How to classify $\mathbb N^2$-orbits? Question about construction of the Grothendieck group. Are multiplicative monoids of different rings isomorphic? Neutral element for a Cantor set. Underlying set of the free monoid: does it contain the empty string? Consider the set F under the operation of composition of functions ◦. Classify monoids that are generated by one element. Is a monoid commutative if $(ab)^2=a^2b^2$? A proof for a condition under which a monoid must also be a group: is it correct? Is there a name for pairs of elements $(a,b)$ of a semigroup $S$ satisfying $\forall x,y \in S : axbayb = axyb$? Terminology for a "subgroup" that has a different identity element. In a Cayley table, which group axioms fail when an entry appears twice in a row or a column? Is the monoid ring of a Noetherian monoid Noetherian?
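Here is the promised sanity check: a minimal sketch, assuming a small example topology on $\{1, 2\}$ chosen purely for illustration (it is not part of the original question), that verifies the monoid axioms for $(\tau, \cup, \emptyset)$.

```python
from itertools import product

# Illustrative (hypothetical) topology on the set {1, 2}: tau = {∅, {1}, {1, 2}}.
# Open sets are stored as frozensets so they can be members of a Python set.
tau = {frozenset(), frozenset({1}), frozenset({1, 2})}

# Closure: the union of any two open sets must again be open.
closed_under_union = all(a | b in tau for a, b in product(tau, repeat=2))

# Associativity of union (true for sets in general; verified here on the finite example).
associative = all((a | b) | c == a | (b | c) for a, b, c in product(tau, repeat=3))

# Identity: the empty set is a two-sided unit for union.
has_identity = all(a | frozenset() == a and frozenset() | a == a for a in tau)

print(closed_under_union, associative, has_identity)  # expected: True True True
```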
CommonCrawl
The paper is about squeezing the number of parameters in a convolutional neural network. The number of parameters in a convolutional layer is given by (number of input channels)$\times$(number of filters)$\times$(filter size$\times$filter size). The paper proposes two strategies: (i) replace 3x3 filters with 1x1 filters and (ii) decrease the number of input channels. They assume the filter budget is given, i.e., they do not tinker with the number of filters. Decreasing the number of parameters tends to reduce accuracy; to compensate, the authors propose to downsample late in the network. The results are quite impressive: compared to AlexNet, they achieve a 50x reduction in model size while preserving accuracy. Their model can be further compressed with existing methods like Deep Compression, which are orthogonal to this paper's approach, for a total reduction of around 510x while still preserving AlexNet-level accuracy. $\bf Question$: The impact on running time (especially in the feed-forward phase, which may be more typical on embedded devices) is not clear to me. Is it certain to be reduced as well, or at least be *no worse* than the baseline models?
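To make the parameter arithmetic above concrete, here is a small sketch; the layer sizes (256 input channels, 256 filters) are illustrative assumptions, not numbers taken from the paper.

```python
def conv_params(in_channels: int, num_filters: int, k: int) -> int:
    """Parameters in one conv layer (ignoring biases): in_channels * num_filters * k * k."""
    return in_channels * num_filters * k * k

# Hypothetical layer with 256 input channels and 256 filters.
baseline = conv_params(256, 256, k=3)     # 3x3 filters
strategy_i = conv_params(256, 256, k=1)   # (i)  replace 3x3 with 1x1 filters
strategy_ii = conv_params(64, 256, k=3)   # (ii) shrink the input channels from 256 to 64

print(baseline, strategy_i, strategy_ii)              # 589824 65536 147456
print(baseline / strategy_i, baseline / strategy_ii)  # 9.0x and 4.0x fewer parameters
```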
CommonCrawl
The following theorem tells us that the union of every finite collection of Lebesgue measurable sets is also Lebesgue measurable. Theorem 1: The union of a finite collection of Lebesgue measurable sets is Lebesgue measurable. The key step is the case of two measurable sets $E_1$ and $E_2$; once that case is established, $(E_1 \cup E_2)$ is Lebesgue measurable and the general case follows by induction on the number of sets. Corollary 2: The set $\mathcal M$ of Lebesgue measurable sets is an algebra.
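A standard way to fill in the two-set step (not spelled out in the excerpt above, so this is only a sketch using the Carathéodory criterion with outer measure $m^*$): for any test set $A$, measurability of $E_1$, and then measurability of $E_2$ applied to $A \cap E_1^c$, give $m^*(A) = m^*(A \cap E_1) + m^*(A \cap E_1^c) = m^*(A \cap E_1) + m^*(A \cap E_1^c \cap E_2) + m^*(A \cap E_1^c \cap E_2^c)$. Since $A \cap (E_1 \cup E_2) = (A \cap E_1) \cup (A \cap E_1^c \cap E_2)$ and $A \cap E_1^c \cap E_2^c = A \cap (E_1 \cup E_2)^c$, subadditivity of $m^*$ yields $m^*(A) \ge m^*(A \cap (E_1 \cup E_2)) + m^*(A \cap (E_1 \cup E_2)^c)$, which is exactly the Carathéodory criterion for $E_1 \cup E_2$.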
CommonCrawl
Two people have to spend exactly 15 consecutive minutes in a bar on a given day, between 12:00 and 13:00. Assuming uniform arrival times, what is the probability they will meet? My methods felt a little ad hoc to me, and I would like to learn how to make them more formal. Also, I'm curious whether people think the problem is formulated unambiguously. For instance, I added the assumption of independent arrivals myself, because I think without such an assumption the problem is not well defined.
This is a great question to answer graphically. First note that neither of the two can arrive after 12:45, since they have to spend at least 15 minutes in the bar. Second, note that they meet if their arrival times differ by less than 15 minutes. If we plot the arrival time of person 1 on the x-axis and person 2 on the y-axis, then they meet if the point representing their arrival times lies between the two blue lines in the figure below. So we just need to calculate that area relative to the area of the whole box.
I'd say that each of them arrives at a time uniformly distributed on [12:00, 12:45], where the times are independent. They meet if their arrival times differ by less than fifteen minutes. In the interest of not having annoying numbers all over the place, measure time in units of fifteen minutes, starting at noon. Let the first person's arrival time be $X$ and the second person's arrival time be $Y$. Then $X$ and $Y$ are independent and uniform on $[0,3]$ and we want $P(|X-Y|<1)$. This becomes a geometry problem: the area in the square $[0,3] \times [0,3]$ satisfying $|x-y|<1$ is $5$, the area of the whole square is $9$, so the answer is $5/9$.
I had a hard time envisioning both A's and B's times simultaneously, so I considered the probability as a function of A's arrival time. Given that both arrival times are in the range 12:00-12:45, A can arrive at any time in those 45 minutes. If A arrives at 12:00, B is met iff B arrives in the first 15 minutes, so the probability is 1/3. If A arrives at 12:15, B can arrive at any time from 0 to 30 minutes past noon, so p = 2/3. This value of 2/3 persists until 12:30, because there remains a 30-minute window in which their times could overlap. After 12:30, p decreases until it reaches 1/3 at 12:45 (symmetric to the situation at 12:00). (Figure not included because this is my first math post.)
Rather than measure time continuously, let's start with discrete time to try to get a handle on an approach. Divide the hour into n equal steps d, 2d, 3d, ..., nd, where nd = 1 hr = 60 min. Say the first person arrives at time jd. Ignoring end points, the probability that the other person arrives within 15d of that time corresponds to the window from (j-15)d to (j+15)d, a span of 30d. Since j can be any of n values, we get, ignoring end points, a probability of roughly 30d/60d = 50%. That gives us an approach, but we have to take the end points, i.e., the time restriction, into account. With probability 1/n the first arrival is at 1d, and the window is then restricted to d through 16d, a span of only 15d; for an arrival at 2d the span is 16d, and so on. So instead of a constant 30d we have a ramp function starting at 15d, rising to 30d, staying there for a while, and coming back down to 15d. Add those contributions up with the proper weights.
What is the probability of two people meeting?
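As a quick check of the $5/9$ answer above, here is a small Monte Carlo sketch (my own addition, not part of the original thread), using the same model of independent arrivals uniform on [12:00, 12:45].

```python
import random

def estimate_meeting_probability(trials: int = 1_000_000, seed: int = 0) -> float:
    """Both arrivals uniform on [0, 45] minutes after noon; they meet iff |X - Y| < 15."""
    rng = random.Random(seed)
    meetings = sum(abs(rng.uniform(0, 45) - rng.uniform(0, 45)) < 15 for _ in range(trials))
    return meetings / trials

print(estimate_meeting_probability())  # should be close to 5/9 ~ 0.5556
```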
CommonCrawl
Suppose that each person in a group of n people votes for exactly two people from a set of candidates to fill two positions on a committee. The top two finishers both win positions as long as each receives more than n/2 votes. Describe a divide-and-conquer algorithm that determines the top two candidates and whether these two candidates received more than n/2 votes. Our algorithm will take a sequence of 2n names (two different names provided by each of n voters) and determine whether the two top vote-getters occur on our list more than n/2 times, and if so, who they are. Actually, for technical reasons we will need the top 3 vote-getters. The votes of each voter are adjacent. Note that we can have at most 3 people (but not 4) with more than half of the votes. Divide the list into two parts, the first half and the second half. (No one could have gotten more than n/2 votes on this list without getting more than half of the votes in one half or the other, since if a candidate got at most half of the votes in each half, then he got at most half of the votes overall.) Thus apply the algorithm recursively to each half to come up with at most six names (three from each half). Then run through the entire list to count the number of occurrences of each of these names and decide which, if any, are the winners. This requires at most 12n additional comparisons for a list of length 2n. I don't understand why we are coming up with at most six names. I also don't get where the 12n comes from. To be clearer, I sort of understand why the 12n is there given the 6 names, but I don't understand why it's 6, and whether it means 6 names at each step of the recursion or 6 names after all of the recursion (and if the latter, what happens during each step of the recursion?). Here is one correct version as it appears as exercise 18 in section 8.3 of the book Discrete Mathematics and Its Applications, 7th Edition by Kenneth H. Rosen, with slight modification. Suppose that each person in a group of $n$ people votes for exactly two people from a set of candidates to fill two positions on a committee. The top two finishers both win positions as long as each receives more than $n/2$ votes. Devise a divide-and-conquer algorithm that determines whether the two candidates who received the most votes each received more than $n/2$ votes. If so, determine who these two candidates are. What is the difference? As I have emphasized, the desired algorithm is required to determine the two candidates who received the most votes (the top two candidates) only when it has been determined that each of them has received more than $n/2$ votes. In other words, if it has been determined that at least one of the top two candidates received at most $n/2$ votes, the algorithm is not required to determine these candidates. The catch here is that it is possible to determine whether the top candidate received more than $n/2$ votes without identifying who the top candidate is. It is also possible to determine whether the second top candidate received more than $n/2$ votes without identifying who the second candidate is. For example, if we are able to ascertain that the most votes received by any single candidate is at most $n/2$, which can happen when we have checked enough but not all of the votes (so we might not be able to identify who the top candidates are), then we are sure that none of the top two candidates received more than $n/2$ votes.
On the other hand, by its very nature, any (usual) divide-and-conquer algorithm cannot find the top (two) candidates unconditionally. The idea is that it can happen that the top candidate overall is not among the top candidates in either half of the list. In fact, it can happen that the top candidate overall is the bottom candidate in each half of the list. For example, 10 people can vote in the following way: $(A,B), (A,B), (A,B), (C,D), (C,D), (E,F), (E,F), (E,F), (C,D), (C,D)$. While $C$ and $D$ are the top candidates overall, they are the bottom candidates among the votes of the first 5 people and also the bottom candidates among the votes of the other 5 people. The "answer" given by the OP, the fourth paragraph in the question, which starts with "our algorithm" and ends before the separator line, is in fact part of an answer key. For brevity, I will refer to it as "the answer key". You can check the other cool answer written by D.W., which includes a very detailed analysis of the answer key. D.W.'s answer describes in great detail, following the answer key, an algorithm that returns a list of three candidates which contains all the candidates who got more than $n/2$ votes and possibly other candidates. More specifically, if a candidate got more than $n/2$ votes, then he/she must be in the returned list. However, the list may contain people who did not get more than $n/2$ votes. It can also happen that none of the people in the returned list is the top candidate, or the second top candidate, or the third top candidate. D.W.'s answer also explains clearly "why we are coming up with at most six names" as well as "where the $12n$ comes from". I have confirmed that D.W.'s algorithm is correct (since he, in his modest way, claims he does not "know whether this algorithm is actually correct") in that it conforms to the answer key and is a full answer to the original problem, except that it omits the easy steps such as "if so, determine who these two candidates are". It turns out, as you must have noticed by now, to be somewhat subtle to understand exactly what the original problem requires. Here is my version of the problem, which should be much clearer, at least to me. Suppose that each person in a group of $n$ people votes for exactly two people from a set of candidates. Devise a divide-and-conquer algorithm that determines all candidates who received more than $n/2$ votes. If you are careful, you may point out that my version omits "... to fill two positions on a committee. The top two finishers both win positions as long as each receives more than n/2 votes". Well, although it is interesting to know the election goal and the election rule, they have nothing to do with the specification of the desired algorithm. You may also point out that my version does not require "if so, determine who these two candidates are". Well, if we have determined all candidates who received more than n/2 votes, of whom there are at most 3, then by just counting the number of votes received by each of those candidates we can easily "if so, determine who these two candidates are". There is an algorithm using linear time and constant space that determines all candidates who received more than $n/2$ votes. The construction of such an algorithm is left as an intriguing and challenging exercise for the readers who have read this far. Thanks to Tom van der Zanden, who pointed out a typo in my previous formulation of the exercise.
2. Set $S_L := F(A[1..n/2][1..2])$ and $S_R := F(A[n/2+1..n][1..2])$ and $S := S_L \cup S_R$.
3. For each $x \in S$, count the number of times that $x$ appears in $A[1..n][1..2]$ and sort the $x$'s by that count.
4. Return the three values that occur most often in $S$ (breaking ties arbitrarily).
Here $A[i][1]$ denotes the first vote from the $i$th voter and $A[i][2]$ the second vote from the $i$th voter. At the end, after running $F(A[1..n][1..2])$, we take the three values returned by this algorithm and check which of them (if any) received more than $n/2$ votes. (Why? It's easy to see that $F$ returns at most 3 elements, so $|S| \le 6$. Each iteration of step 3 can be done in $O(n)$ time, and we do only $6=O(1)$ iterations of the loop, so the total running time for steps 3 and 4 is $O(n)$.) The running time therefore satisfies the recurrence $T(n) = 2T(n/2) + O(n)$, which solves to $T(n) = O(n \log n)$. So this gives an $O(n \log n)$ time algorithm for the problem. I don't know whether this algorithm is actually correct. I'm just trying to explain what algorithm I believe they are proposing, and to explain their running time analysis. I haven't tried to prove the algorithm correct. The solution you listed appears to sketch the key ideas behind such a proof (or at least that's the claim). I haven't tried to fill in the details and verify carefully that the proof works out, so you should do that if you care, but it looks plausible to me -- all the ideas make sense to me. Now, to answer your questions: The reason we come up with only 6 names is that each recursive call returns at most 3 names, and we perform two recursive calls, so we end up with at most $3+3=6$ different names in $S$. Why $12n$? Because each iteration of the loop in step 3 takes $2n$ comparisons (you compare $x$ to each element of $A[1..n][1..2]$, and there are $2n$ such elements), and you do at most 6 iterations of the loop, so the total number of comparisons in step 3 is at most $6 \times 2n = 12n$. I hope this answers your questions and makes the proposed solution clearer.
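For readers who want to experiment, here is a minimal Python sketch of the recursive procedure $F$ described above together with the final counting step. It is my own rendering of the described approach, not code from the thread; in particular, the single-voter base case is an assumption.

```python
from collections import Counter
from typing import List, Tuple

Vote = Tuple[str, str]  # the two (distinct) names chosen by one voter

def f(votes: List[Vote]) -> List[str]:
    """Return at most 3 names guaranteed to include every candidate with more than
    len(votes)/2 votes; the list may also contain candidates below that threshold."""
    n = len(votes)
    if n == 1:  # assumed base case: a single voter's two names
        return list(votes[0])
    left, right = f(votes[: n // 2]), f(votes[n // 2 :])
    candidates = set(left) | set(right)                        # at most 3 + 3 = 6 names
    counts = Counter(name for pair in votes for name in pair)  # one pass over the 2n names
    return sorted(candidates, key=lambda c: counts[c], reverse=True)[:3]

def winners(votes: List[Vote]) -> List[str]:
    """All candidates with more than n/2 votes (there are at most 3 of them)."""
    counts = Counter(name for pair in votes for name in pair)
    return [c for c in f(votes) if counts[c] > len(votes) / 2]

# Example: 5 voters; only A exceeds 5/2 votes.
print(winners([("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"), ("B", "C")]))  # ['A']
```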
CommonCrawl
For the spiked population model, we investigate the large-dimension $N$ and large-sample-size $M$ asymptotic behavior of the Support Vector Machine (SVM) classification method in the limit $N,M\rightarrow\infty$ at fixed $\alpha=M/N$. We focus on the generalization performance by analytically evaluating the angle between the normal direction vectors of the SVM separating hyperplane and the corresponding Bayes optimal separating hyperplane. This is analogous to the results of Paul (2007) and Nadler (2008) for the angle between the sample eigenvector and the population eigenvector in random matrix theory. We provide not just a bound but a sharp prediction of the asymptotic behavior of the SVM, determined by a set of nonlinear equations. Based on the analytical results, we propose a new method of selecting the tuning parameter which significantly reduces the computational cost. A surprising finding is that the SVM achieves its best performance at small values of the tuning parameter under the spiked population model. These results are confirmed by comparison with numerical simulations on finite-size systems. We also apply our formulas to an actual breast cancer dataset and find agreement between the analytical derivations and numerical computations based on cross validation.
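As informal context for the quantity studied above, here is a rough simulation sketch (entirely my own, not from the paper, and with arbitrary assumed parameters) that measures the angle between a fitted linear-SVM normal vector and the Bayes-optimal direction for simple Gaussian class-conditional data.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N, M = 200, 400                      # dimension and sample size (assumed values)
mu = np.zeros(N)
mu[0] = 2.0                          # class-mean shift along the first coordinate

# Two Gaussian classes with identity covariance; the Bayes-optimal normal direction
# of the separating hyperplane is then proportional to mu.
X = rng.standard_normal((M, N))
y = rng.integers(0, 2, size=M) * 2 - 1   # labels in {-1, +1}
X += np.outer((y + 1) / 2, mu)           # shift the +1 class by mu

clf = LinearSVC(C=0.1, max_iter=20000).fit(X, y)   # C plays the role of the tuning parameter
w = clf.coef_.ravel()

cosine = w @ mu / (np.linalg.norm(w) * np.linalg.norm(mu))
print("angle to Bayes direction (degrees):", np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))
```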
CommonCrawl
Peptide Grafting to Aminated Cellulose and Related Aminated Condensation Polymers. Natural and synthetic macroinitiators with primary amino substituents have been synthesized by one of the following techniques: (1) cyanoethylation of cellulose followed by diborane reduction to produce O-(3-aminopropyl)cellulose, (1), (2) reduction of 6-azido-6-deoxycellulose acetate with 1,3-propanedithiol to 6-amino-6-deoxycellulose acetate, (2), (3) reduction of 2,3-O-diphenylcarbamoyl-6-azido-6-deoxycellulose with 1,3-propanedithiol to 2,3-O-diphenylcarbamoyl-6-amino-6-deoxycellulose, (3), (4) nitration then SnCl$_2$ reduction of poly(arylene ether sulfone) to produce poly(3-amino-arylene ether sulfone), (4), (5) phthalimidomethylation followed by hydrazinolysis to yield poly(3-aminomethylarylene ether sulfone), (5), and (6) LiAlH$_4$ reduction of poly(2-cyano-1,3-phenylene arylene ether) to poly(2-aminomethyl-1,3-phenylene arylene ether), (6). Heterogeneous grafting of $\gamma$-benzyl-L-glutamate-N-carboxyanhydride, (BLG-NCA), onto polymer (1) resulted in a non-random distribution of peptide residues; $\alpha$-helical conformations were detected at low BLG-NCA/amine feed ratios ($<$5). Homogeneous grafting of BLG-NCA onto soluble polymer (2) in DMF at room temperature was carried out with high grafting efficiency ($>$80%). In that case, the concentration of macroinitiator (2) in DMF was an important factor in controlling the grafting efficiency. Macroinitiator, (4), which contains aromatic amino functions, is not nucleophilic enough to initiate BLG-NCA polymerization even under homogeneous conditions. Using molar ratios ranging from 1 to 100 of BLG-NCA relative to the amine concentration, grafting to polymers, (5) and (6), was effected in anhydrous THF at room temperature under homogeneous conditions. If reaction times between 24 and 48 h were utilized, high grafting efficiencies ($>$80%) were also obtained. The conformation of the polypeptide chain was evaluated by NMR and IR spectroscopies. Polypeptides grafted to polymers, (5) and (6), appeared to adopt the expected conformation for the chain length predicted, i.e. a progression from random coil (D.P. $<$ 4) to $\beta$-pleated sheet (9.3 $<$ D.P. $<$ 15). The benzyl ester functions on the BLG grafts are subject to direct modification with amine nucleophiles. Water-soluble graft copolymer, (53), was obtained upon aminolysis with tris(hydroxymethyl)aminomethane in DMSO at 65$^\circ$C for 180 h. Studies with n-butylamine correlate reaction conditions with extent of ester modification vs peptide cleavage. In the presence of 1-hydroxybenzotriazole, aminolysis of the ester is effected without peptide cleavage. Completely hydrolyzed BLG grafts, (55), are obtained with 1.25N NaOH within 5 h at room temperature. Lee, Soo, "Peptide Grafting to Aminated Cellulose and Related Aminated Condensation Polymers." (1988). LSU Historical Dissertations and Theses. 4579.
CommonCrawl
Abstract: In this work we consider a topological module $\mathcal P(a;b)$ of entire functions, which is the isomorphic image of the Schwartz space of distributions with compact supports in a finite or infinite interval $(a;b)\subset\mathbb R$ under the Fourier–Laplace transform. We prove that each weakly localizable module in $\mathcal P (a;b)$ is either generated by two of its elements or is equal to the closure of two submodules of special form. We also provide dual results on subspaces in $C^\infty(a;b)$ invariant w.r.t. the differentiation operator. Keywords: entire functions, subharmonic functions, Fourier–Laplace transform, finitely generated submodules, description of submodules, local description of submodules, invariant subspaces, spectral synthesis.
CommonCrawl
Several mechanical systems are modeled by the static momentum balance for the displacement $u$ coupled with a rate-independent flow rule for some internal variable $z$. We consider a class of abstract systems of ODEs which have the same structure, albeit in a finite-dimensional setting, and regularize both the static equation and the rate-independent flow rule by adding viscous dissipation terms with coefficients $\varepsilon^\alpha$ and $\varepsilon$, where $0<\varepsilon \ll 1$ and $\alpha>0$ is a fixed parameter. Therefore, for $\alpha \neq 1$, $u$ and $z$ have different relaxation rates. We address the vanishing-viscosity analysis of the viscous system as $\varepsilon \downarrow 0$. We prove that, up to a subsequence, (reparameterized) viscous solutions converge to a parameterized curve yielding a Balanced Viscosity solution to the original rate-independent system, and providing an accurate description of the system behavior at jumps. We also give a reformulation of the notion of Balanced Viscosity solution in terms of a system of subdifferential inclusions, showing that the viscosity in $u$ and the one in $z$ are involved in the jump dynamics in different ways, according to whether $\alpha>1$, $\alpha=1$, or $\alpha \in (0,1)$.
CommonCrawl
I will report on new joint work with Leandro Arosio (University of Rome, Tor Vergata). Complex manifolds can be thought of as laid out across a spectrum characterised by rigidity at one end and flexibility at the other. On the rigid side, Kobayashi-hyperbolic manifolds have at most a finite-dimensional group of symmetries. On the flexible side, there are manifolds with an extremely large group of holomorphic automorphisms, the prototypes being the affine spaces $\mathbb C^n$ for $n \geq 2$. From a dynamical point of view, hyperbolicity does not permit chaos. An endomorphism of a Kobayashi-hyperbolic manifold is non-expansive with respect to the Kobayashi distance, so every family of endomorphisms is equicontinuous. We show that not only does flexibility allow chaos: under a strong anti-hyperbolicity assumption, chaotic automorphisms are generic. A special case of our main result is that if $G$ is a connected complex linear algebraic group of dimension at least 2, not semisimple, then chaotic automorphisms are generic among all holomorphic automorphisms of $G$ that preserve a left- or right-invariant Haar form. For $G=\mathbb C^n$, this result was proved (although not explicitly stated) some 20 years ago by Fornaess and Sibony. Our generalisation follows their approach. I will give plenty of context and background, as well as some details of the proof of the main result.
Suppose M is a smooth Riemannian manifold on which a Lie group G acts properly and isometrically. In this talk I will explore properties of a particular class of G-invariant operators on M, called G-Callias-type operators. These are Dirac operators that have been given an additional Z_2-grading and a perturbation so as to be "invertible outside of a cocompact set in M". It turns out that G-Callias-type operators are equivariantly Fredholm and so have an index in the K-theory of the maximal group C*-algebra of G. This index can be expressed as a KK-product of a class in K-homology and a class in the K-theory of the Higson G-corona. In fact, one can show that the K-theory of the Higson G-corona is highly non-trivial, and thus the index theory of G-Callias-type operators is not obviously trivial. As an application of the index theory of G-Callias-type operators, I will mention an obstruction to the existence of G-invariant metrics of positive scalar curvature on M.
CommonCrawl
Starting at (0,0) at the top left, the objective is to find a Dijkstra path to the bottom right. We must go through each color exactly once, and once we go outside a color, we can't go back to the same one. As per Dijkstra's algorithm, we update the distance at a node if d[current] + weight(this_node, next_node) < d[next_node]. Usually these weights are given to us, but in this case we must create a weight function such that, given any two pixels (x1,y1), (x2,y2), our path follows something like what I have drawn in white. You can assume all the colors are indeed different, even though they might look similar because of shades. What weights can I assign so that Dijkstra's algorithm finds the path shown in white? Summary of your problem: You have a graph and a particular path through the graph, and you want to assign weights to the edges so that running Dijkstra's algorithm on that graph will give you that path. Solution to your problem: assign a weight of 1 to each edge in the path, and a weight of $\infty$ (or some very large number) to each edge not in the path. (It suffices to choose a weight that is larger than the number of vertices in the graph.) You can easily verify that the shortest path only uses edges of weight 1, since any path that includes any other edge will have a total distance larger than that of the desired path.
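To illustrate the accepted idea, here is a small sketch (my own, with a made-up grid and target path) that assigns weight 1 to edges along the desired path and a large weight to every other edge, then runs Dijkstra's algorithm to recover that path.

```python
import heapq

def dijkstra(nodes, weight, source, target):
    """Plain Dijkstra over an implicit 4-neighbour grid graph."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if v not in nodes:
                continue
            nd = d + weight(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # reconstruct the path from target back to source
    path, u = [target], target
    while u != source:
        u = prev[u]
        path.append(u)
    return path[::-1]

# Hypothetical 3x3 grid and a desired path "drawn in white".
nodes = {(x, y) for x in range(3) for y in range(3)}
white_path = [(0, 0), (1, 0), (1, 1), (1, 2), (2, 2)]
path_edges = {frozenset(e) for e in zip(white_path, white_path[1:])}
BIG = len(nodes) + 1  # anything larger than the number of vertices works

def weight(u, v):
    return 1 if frozenset((u, v)) in path_edges else BIG

print(dijkstra(nodes, weight, (0, 0), (2, 2)))  # recovers white_path
```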
CommonCrawl
Suppose that $f(\theta)=g(\theta)/c$, where $g(\theta)$ is known but we cannot compute the integral $c=\int g(\theta)\,d\theta$. Given a sample $\theta_1,\ldots,\theta_n\sim f$, our interest is to estimate $c$. The naive approach of treating $c$ as an unknown parameter, putting a prior on it, and computing a posterior is useless: the result doesn't depend on the data. That treatment acted as if we had a family of densities $f(\theta\mid c)$ indexed by $c$. But we don't: $f(\theta)=g(\theta)/c$ is a valid density only for one value of $c$, namely $c=\int g(\theta)\,d\theta$. What is a valid Bayes estimator of $c$? Pretending I don't know $g$, or simply declaring this to be a non-statistical problem, seems like giving up. I really think there should be a good Bayesian estimator here, but I don't know what it is. In addition to the post, there are many insightful comments. It is worth checking out the discussion on Stack Exchange – "Bayesians: slaves of the likelihood function?" – and Xi'an's Og – "estimating a constant".
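This is not an answer to the Bayesian question above, but for concreteness here is a standard sketch of estimating such a normalizing constant by importance sampling; the unnormalized density $g$ (a Gaussian kernel, so the true $c=\sqrt{2\pi}\approx 2.5066$) and the uniform proposal are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(theta):
    """Unnormalized density; its true normalizing constant is sqrt(2*pi) ~ 2.5066."""
    return np.exp(-0.5 * theta**2)

# Importance sampling: c = integral of g = E_q[ g(X) / q(X) ] for a proposal density q
# covering (essentially all of) the mass of g. Here q = Uniform(-10, 10), with density 1/20.
x = rng.uniform(-10.0, 10.0, size=500_000)
c_hat = np.mean(g(x) * 20.0)

print(c_hat, np.sqrt(2 * np.pi))   # Monte Carlo estimate vs. exact value
```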
CommonCrawl
Mariana P Torrente, Edward Chuang, Megan M Noll, Meredith E Jackrel, Michelle S Go and James Shorter. Mechanistic Insights into Hsp104 Potentiation.. The Journal of biological chemistry 291(10):5101–15, March 2016. Abstract Potentiated variants of Hsp104, a protein disaggregase from yeast, can dissolve protein aggregates connected to neurodegenerative diseases such as Parkinson disease and amyotrophic lateral sclerosis. However, the mechanisms underlying Hsp104 potentiation remain incompletely defined. Here, we establish that 2-3 subunits of the Hsp104 hexamer must bear an A503V potentiating mutation to elicit enhanced disaggregase activity in the absence of Hsp70. We also define the ATPase and substrate-binding modalities needed for potentiated Hsp104(A503V) activity in vitro and in vivo. Hsp104(A503V) disaggregase activity is strongly inhibited by the Y257A mutation that disrupts substrate binding to the nucleotide-binding domain 1 (NBD1) pore loop and is abolished by the Y662A mutation that disrupts substrate binding to the NBD2 pore loop. Intriguingly, Hsp104(A503V) disaggregase activity responds to mixtures of ATP and adenosine 5'-($\gamma$-thio)-triphosphate (a slowly hydrolyzable ATP analogue) differently from Hsp104. Indeed, an altered pattern of ATP hydrolysis and altered allosteric signaling between NBD1 and NBD2 are likely critical for potentiation. Hsp104(A503V) variants bearing inactivating Walker A or Walker B mutations in both NBDs are inoperative. Unexpectedly, however, Hsp104(A503V) retains potentiated activity upon introduction of sensor-1 mutations that reduce ATP hydrolysis at NBD1 (T317A) or NBD2 (N728A). Hsp104(T317A/A503V) and Hsp104(A503V/N728A) rescue TDP-43 (TAR DNA-binding protein 43), FUS (fused in sarcoma), and $\alpha$-synuclein toxicity in yeast. Thus, Hsp104(A503V) displays a more robust activity that is unperturbed by sensor-1 mutations that greatly reduce Hsp104 activity in vivo. Indeed, ATPase activity at NBD1 or NBD2 is sufficient for Hsp104 potentiation. Our findings will empower design of ameliorated therapeutic disaggregases for various neurodegenerative diseases. Korrie L Mack and James Shorter. Engineering and Evolution of Molecular Chaperones and Protein Disaggregases with Enhanced Activity.. Frontiers in molecular biosciences 3:8, January 2016. Abstract Cells have evolved a sophisticated proteostasis network to ensure that proteins acquire and retain their native structure and function. Critical components of this network include molecular chaperones and protein disaggregases, which function to prevent and reverse deleterious protein misfolding. Nevertheless, proteostasis networks have limits, which when exceeded can have fatal consequences as in various neurodegenerative disorders, including Parkinson's disease and amyotrophic lateral sclerosis. A promising strategy is to engineer proteostasis networks to counter challenges presented by specific diseases or specific proteins. Here, we review efforts to enhance the activity of individual molecular chaperones or protein disaggregases via engineering and directed evolution. Remarkably, enhanced global activity or altered substrate specificity of various molecular chaperones, including GroEL, Hsp70, ClpX, and Spy, can be achieved by minor changes in primary sequence and often a single missense mutation. 
Likewise, small changes in the primary sequence of Hsp104 yield potentiated protein disaggregases that reverse the aggregation and buffer toxicity of various neurodegenerative disease proteins, including $\alpha$-synuclein, TDP-43, and FUS. Collectively, these advances have revealed key mechanistic and functional insights into chaperone and disaggregase biology. They also suggest that enhanced chaperones and disaggregases could have important applications in treating human disease as well as in the purification of valuable proteins in the pharmaceutical sector. Meredith E Jackrel, Keolamau Yee, Amber Tariq, Annie I Chen and James Shorter. Disparate Mutations Confer Therapeutic Gain of Hsp104 Function.. ACS chemical biology 10(12):2672–9, December 2015. Abstract Hsp104, a protein disaggregase from yeast, can be engineered and potentiated to counter TDP-43, FUS, or $\alpha$-synuclein misfolding and toxicity implicated in neurodegenerative disease. Here, we reveal that extraordinarily disparate mutations potentiate Hsp104. Remarkably, diverse single missense mutations at 20 different positions interspersed throughout the middle domain (MD) and small domain of nucleotide-binding domain 1 (NBD1) confer a therapeutic gain of Hsp104 function. Moreover, potentiation emerges from deletion of MD helix 3 or 4 or via synergistic missense mutations in the MD distal loop and helix 4. We define the most critical aspect of Hsp104 potentiation as enhanced disaggregase activity in the absence of Hsp70 and Hsp40. We suggest that potentiation likely stems from a loss of a fragilely constrained autoinhibited state that enables precise spatiotemporal regulation of disaggregase activity. Meredith E Jackrel and James Shorter. Engineering enhanced protein disaggregases for neurodegenerative disease.. Prion 9(2):90–109, January 2015. Abstract Protein misfolding and aggregation underpin several fatal neurodegenerative diseases, including Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal dementia (FTD). There are no treatments that directly antagonize the protein-misfolding events that cause these disorders. Agents that reverse protein misfolding and restore proteins to native form and function could simultaneously eliminate any deleterious loss-of-function or toxic gain-of-function caused by misfolded conformers. Moreover, a disruptive technology of this nature would eliminate self-templating conformers that spread pathology and catalyze formation of toxic, soluble oligomers. Here, we highlight our efforts to engineer Hsp104, a protein disaggregase from yeast, to more effectively disaggregate misfolded proteins connected with PD, ALS, and FTD. Remarkably subtle modifications of Hsp104 primary sequence yielded large gains in protective activity against deleterious $\alpha$-synuclein, TDP-43, FUS, and TAF15 misfolding. Unusually, in many cases loss of amino acid identity at select positions in Hsp104 rather than specific mutation conferred a robust therapeutic gain-of-function. Nevertheless, the misfolding and toxicity of EWSR1, an RNA-binding protein with a prion-like domain linked to ALS and FTD, could not be buffered by potentiated Hsp104 variants, indicating that further amelioration of disaggregase activity or sharpening of substrate specificity is warranted. We suggest that neuroprotection is achievable for diverse neurodegenerative conditions via surprisingly subtle structural modifications of existing chaperones. 
Meredith E Jackrel, Amber Tariq, Keolamau Yee, Rachel Weitzman and James Shorter. Isolating potentiated Hsp104 variants using yeast proteinopathy models.. Journal of visualized experiments : JoVE (93):e52089, January 2014. Abstract Many protein-misfolding disorders can be modeled in the budding yeast Saccharomyces cerevisiae. Proteins such as TDP-43 and FUS, implicated in amyotrophic lateral sclerosis, and $\alpha$-synuclein, implicated in Parkinson's disease, are toxic and form cytoplasmic aggregates in yeast. These features recapitulate protein pathologies observed in patients with these disorders. Thus, yeast are an ideal platform for isolating toxicity suppressors from libraries of protein variants. We are interested in applying protein disaggregases to eliminate misfolded toxic protein conformers. Specifically, we are engineering Hsp104, a hexameric AAA+ protein from yeast that is uniquely capable of solubilizing both disordered aggregates and amyloid and returning the proteins to their native conformations. While Hsp104 is highly conserved in eukaryotes and eubacteria, it has no known metazoan homologue. Hsp104 has only limited ability to eliminate disordered aggregates and amyloid fibers implicated in human disease. Thus, we aim to engineer Hsp104 variants to reverse the protein misfolding implicated in neurodegenerative disorders. We have developed methods to screen large libraries of Hsp104 variants for suppression of proteotoxicity in yeast. As yeast are prone to spontaneous nonspecific suppression of toxicity, a two-step screening process has been developed to eliminate false positives. Using these methods, we have identified a series of potentiated Hsp104 variants that potently suppress the toxicity and aggregation of TDP-43, FUS, and $\alpha$-synuclein. Here, we describe this optimized protocol, which could be adapted to screen libraries constructed using any protein backbone for suppression of toxicity of any protein that is toxic in yeast. Reversing deleterious protein aggregation with re-engineered protein disaggregases.. Cell cycle (Georgetown, Tex.) 13(9):1379–83, January 2014. Abstract Aberrant protein folding is severely problematic and manifests in numerous disorders, including amyotrophic lateral sclerosis (ALS), Parkinson disease (PD), Huntington disease (HD), and Alzheimer disease (AD). Patients with each of these disorders are characterized by the accumulation of mislocalized protein deposits. Treatments for these disorders remain palliative, and no available therapeutics eliminate the underlying toxic conformers. An intriguing approach to reverse deleterious protein misfolding is to upregulate chaperones to restore proteostasis. We recently reported our work to re-engineer a prion disaggregase from yeast, Hsp104, to reverse protein misfolding implicated in human disease. These potentiated Hsp104 variants suppress TDP-43, FUS, and $\alpha$-synuclein toxicity in yeast, eliminate aggregates, reverse cellular mislocalization, and suppress dopaminergic neurodegeneration in an animal model of PD. Here, we discuss this work and its context, as well as approaches for further developing potentiated Hsp104 variants for application in reversing protein-misfolding disorders. Meredith E Jackrel, Morgan E DeSantis, Bryan A Martinez, Laura M Castellano, Rachel M Stewart, Kim A Caldwell, Guy A Caldwell and James Shorter. Potentiated Hsp104 variants antagonize diverse proteotoxic misfolding events.. Cell 156(1-2):170–82, January 2014. 
Abstract There are no therapies that reverse the proteotoxic misfolding events that underpin fatal neurodegenerative diseases, including amyotrophic lateral sclerosis (ALS) and Parkinson's disease (PD). Hsp104, a conserved hexameric AAA+ protein from yeast, solubilizes disordered aggregates and amyloid but has no metazoan homolog and only limited activity against human neurodegenerative disease proteins. Here, we reprogram Hsp104 to rescue TDP-43, FUS, and $\alpha$-synuclein proteotoxicity by mutating single residues in helix 1, 2, or 3 of the middle domain or the small domain of nucleotide-binding domain 1. Potentiated Hsp104 variants enhance aggregate dissolution, restore proper protein localization, suppress proteotoxicity, and in a C. elegans PD model attenuate dopaminergic neurodegeneration. Potentiating mutations reconfigure how Hsp104 subunits collaborate, desensitize Hsp104 to inhibition, obviate any requirement for Hsp70, and enhance ATPase, translocation, and unfoldase activity. Our work establishes that disease-associated aggregates and amyloid are tractable targets and that enhanced disaggregases can restore proteostasis and mitigate neurodegeneration. Parveen Salahuddin, Gulam Rabbani and Rizwan Hasan Khan. The role of advanced glycation end products in various types of neurodegenerative disease: a therapeutic approach.. Cellular & molecular biology letters 19(3):407–37, 2014. Abstract Protein glycation is initiated by a nucleophilic addition reaction between the free amino group from a protein, lipid or nucleic acid and the carbonyl group of a reducing sugar. This reaction forms a reversible Schiff base, which rearranges over a period of days to produce ketoamine or Amadori products. The Amadori products undergo dehydration and rearrangements and develop a cross-link between adjacent proteins, giving rise to protein aggregation or advanced glycation end products (AGEs). A number of studies have shown that glycation induces the formation of the $\beta$-sheet structure in $\beta$-amyloid protein, $\alpha$-synuclein, transthyretin (TTR), copper-zinc superoxide dismutase 1 (Cu, Zn-SOD-1), and prion protein. Aggregation of the $\beta$-sheet structure in each case creates fibrillar structures, respectively causing Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, familial amyloid polyneuropathy, and prion disease. It has been suggested that oligomeric species of glycated $\alpha$-synuclein and prion are more toxic than fibrils. This review focuses on the pathway of AGE formation, the synthesis of different types of AGE, and the molecular mechanisms by which glycation causes various types of neurodegenerative disease. It discusses several new therapeutic approaches that have been applied to treat these devastating disorders, including the use of various synthetic and naturally occurring inhibitors. Modulation of the AGE-RAGE axis is now considered promising in the prevention of neurodegenerative diseases. Additionally, the review covers several defense enzymes and proteins in the human body that are important anti-glycating systems acting to prevent the development of neurodegenerative diseases. Helen R Broom, Jessica A O Rumfeldt and Elizabeth M Meiering. Many roads lead to Rome? Multiple modes of Cu,Zn superoxide dismutase destabilization, misfolding and aggregation in amyotrophic lateral sclerosis.. Essays in biochemistry 56:149–65, January 2014. 
Abstract ALS (amyotrophic lateral sclerosis) is a fatal neurodegenerative syndrome characterized by progressive paralysis and motor neuron death. Although the pathological mechanisms that cause ALS remain unclear, accumulating evidence supports that ALS is a protein misfolding disorder. Mutations in Cu,Zn-SOD1 (copper/zinc superoxide dismutase 1) are a common cause of familial ALS. They have complex effects on different forms of SOD1, but generally destabilize the protein and enhance various modes of misfolding and aggregation. In addition, there is some evidence that destabilized covalently modified wild-type SOD1 may be involved in disease. Among the multitude of misfolded/aggregated species observed for SOD1, multiple species may impair various cellular components at different disease stages. Newly developed antibodies that recognize different structural features of SOD1 represent a powerful tool for further unravelling the roles of different SOD1 structures in disease. Evidence for similar cellular targets of misfolded/aggregated proteins, loss of cellular proteostasis and cell-cell transmission of aggregates point to common pathological mechanisms between ALS and other misfolding diseases, such as Alzheimer's, Parkinson's and prion diseases, as well as serpinopathies. The recent progress in understanding the molecular basis for these devastating diseases provides numerous avenues for developing urgently needed therapeutics. Ali Chaari, Jessica Hoarau-Véchot and Moncef Ladjimi. Applying chaperones to protein-misfolding disorders: molecular chaperones against $\alpha$-synuclein in Parkinson's disease.. International journal of biological macromolecules 60:196–205, September 2013. Abstract Parkinson's disease (PD) is a neurodegenerative disorder characterized by the accumulation of a protein called $\alpha$-synuclein ($\alpha$-syn) into inclusions known as Lewy bodies (LB) within neurons. This accumulation is also due to insufficient formation and activity of dopamine produced in certain neurons within the substantia nigra. Lewy bodies are the pathological hallmark of the idiopathic disorder and the cascade that allows $\alpha$-synuclein to misfold, aggregate and form these inclusions has been the subject of intensive research. Targeting these early steps of oligomerization is one of the main therapeutic approaches in order to develop neurodegenerative-modifying agents. Because the folding and refolding of alpha synuclein is the key point of this cascade, we are interested in this review to summarize the role of some molecular chaperone proteins such as Hsp70, Hsp90 and small heat shock proteins (sHsp) and Hsp104. Hsp70, its co-chaperone, and small heat shock proteins can prevent neurodegeneration by preventing $\alpha$-syn misfolding, oligomerization and aggregation in vitro and in Parkinson disease animal models. Hsp104 is able to resolve disordered protein aggregates and cross beta amyloid conformers. Together, these chaperones have a complementary effect and can be a target for therapeutic intervention in PD. Qian Ma, Ji-Ying Hu, Jie Chen and Yi Liang. The role of crowded physiological environments in prion and prion-like protein aggregation.. International journal of molecular sciences 14(11):21339–52, January 2013. Abstract Prion diseases and prion-like protein misfolding diseases are related to the accumulation of abnormal aggregates of the normal host proteins including prion proteins and Tau protein.
These proteins possess self-templating and transmissible characteristics. The crowded physiological environments where the aggregation of these amyloidogenic proteins takes place can be imitated in vitro by the addition of macromolecular crowding agents such as inert polysaccharides. In this review, we summarize the aggregation of prion proteins in crowded physiological environments and discuss the role of macromolecular crowding in prion protein aggregation. We also summarize the aggregation of prion-like proteins including human Tau protein, human $\alpha$-synuclein, and human copper, zinc superoxide dismutase under macromolecular crowding environments and discuss the role of macromolecular crowding in prion-like protein aggregation. The excluded-volume effects caused by macromolecular crowding could accelerate the aggregation of neurodegenerative disease-associated proteins while inhibiting the aggregation of the proteins that are not neurodegenerative disease-associated. Sergio Camero, María J Benítez and Juan S Jiménez. Anomalous protein-DNA interactions behind neurological disorders.. Advances in protein chemistry and structural biology 91:37–63, January 2013. Abstract Aggregation, nuclear location, and nucleic acid interaction are common features shared by a number of proteins related to neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, transmissible spongiform encephalopathy, Huntington's disease, spinobulbar muscular atrophy, dentatorubro-pallidoluysian atrophy, and several spinocerebellar ataxias. $\beta$-Amyloid peptides, tau protein, $\alpha$-synuclein, superoxide dismutase1, prion protein, huntingtin, atrophin1, androgen receptor, and several ataxins are proteins prone to becoming aggregated, to translocate inside cell nucleus, and to bind DNA. In this chapter, we review those common features suggesting that neurological diseases too may share a transcriptional disorder, making it an important contribution to the origin of the disease. Dmitry Kryndushkin, Gudrun Ihrke, Tetsade C Piermartiri and Frank Shewmaker. A yeast model of optineurin proteinopathy reveals a unique aggregation pattern associated with cellular toxicity.. Molecular microbiology 86(6):1531–47, December 2012. Abstract Many neurodegenerative diseases including amyotrophic lateral sclerosis (ALS) are linked to the accumulation of specific protein aggregates in affected regions of the nervous system. SOD1, TDP-43, FUS and optineurin (OPTN) proteins were identified to form intraneuronal inclusions in ALS patients. In addition, mutations in OPTN are associated with both ALS and glaucoma. As the pathological role of OPTN in neuronal degeneration remains unresolved, we created a yeast model to study its potential for aggregation and toxicity. We observed that both wild type and disease-associated mutants of OPTN form toxic non-amyloid aggregates in yeast. Similar to reported cell culture and mouse models, the OPTN E50K mutant shows enhanced toxicity in yeast, implying a conserved gain-of-function mechanism. Furthermore, OPTN shows a unique aggregation pattern compared to other disease-related proteins in yeast. OPTN aggregates colocalize only partially with the insoluble protein deposit (IPOD) site markers, but coincide perfectly with the prion seed-reducing protein Btn2 and several other aggregation-prone proteins, suggesting that protein aggregates are not limited to a single IPOD site.
Importantly, changes in the Btn2p level modify OPTN toxicity and aggregation. This study generates a mechanistic framework for investigating how OPTN may trigger pathological changes in ALS and other OPTN-linked neurodegenerative disorders. Lori Krim Gavrin, Rajiah Aldrin Denny and Eddine Saiah. Small molecules that target protein misfolding.. Journal of medicinal chemistry 55(24):10823–43, 2012. Abstract Protein misfolding is a process in which proteins are unable to attain or maintain their biologically active conformation. Factors contributing to protein misfolding include missense mutations and intracellular factors such as pH changes, oxidative stress, or metal ions. Protein misfolding is linked to a large number of diseases such as cystic fibrosis, Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and less familiar diseases such as Gaucher's disease, nephrogenic diabetes insipidus, and Creutzfeldt-Jakob disease. In this Perspective, we report on small molecules that bind to and stabilize the aberrant protein, thereby helping it to attain a native or near-native conformation and restoring its function. The following targets will be specifically discussed: transthyretin, p53, superoxide dismutase 1, lysozyme, serum amyloid A, prions, vasopressin receptor 2, and $\alpha$-1-antitrypsin. Pathological roles of wild-type cu, zn-superoxide dismutase in amyotrophic lateral sclerosis.. Neurology research international 2012:323261, January 2012. Abstract Dominant mutations in a Cu, Zn-superoxide dismutase (SOD1) gene cause a familial form of amyotrophic lateral sclerosis (ALS). While it remains controversial how SOD1 mutations lead to onset and progression of the disease, many in vitro and in vivo studies have supported a gain-of-toxicity mechanism where pathogenic mutations contribute to destabilizing a native structure of SOD1 and thus facilitate misfolding and aggregation. Indeed, abnormal accumulation of SOD1-positive inclusions in spinal motor neurons is a pathological hallmark in SOD1-related familial ALS. Furthermore, similarities in clinical phenotypes and neuropathology of ALS cases with and without mutations in sod1 gene have implied a disease mechanism involving SOD1 common to all ALS cases. Although pathogenic roles of wild-type SOD1 in sporadic ALS remain controversial, recent developments of novel SOD1 antibodies have made it possible to characterize wild-type SOD1 under pathological conditions of ALS. Here, I have briefly reviewed recent progress on biochemical and immunohistochemical characterization of wild-type SOD1 in sporadic ALS cases and discussed possible involvement of wild-type SOD1 in a pathomechanism of ALS. Shayne A Bellingham, Belinda B Guo, Bradley M Coleman and Andrew F Hill. Exosomes: vehicles for the transfer of toxic proteins associated with neurodegenerative diseases?. Frontiers in physiology 3:124, 2012. Abstract Exosomes are small membranous vesicles secreted by a number of cell types including neurons and can be isolated from conditioned cell media or bodily fluids such as urine and plasma. Exosome biogenesis involves the inward budding of endosomes to form multivesicular bodies (MVB). When fused with the plasma membrane, the MVB releases the vesicles into the extracellular environment as exosomes. Proposed functions of these vesicles include roles in cell-cell signaling, removal of unwanted proteins, and the transfer of pathogens between cells. 
One such pathogen which exploits this pathway is the prion, the infectious particle responsible for the transmissible neurodegenerative diseases such as Creutzfeldt-Jakob disease (CJD) of humans or bovine spongiform encephalopathy (BSE) of cattle. Similarly, exosomes are also involved in the processing of the amyloid precursor protein (APP) which is associated with Alzheimer's disease. Exosomes have been shown to contain full-length APP and several distinct proteolytically cleaved products of APP, including A$\beta$. In addition, these fragments can be modulated using inhibitors of the proteases involved in APP cleavage. These observations provide further evidence for a novel pathway in which PrP and APP fragments are released from cells. Other proteins such as superoxide dismutase I and alpha-synuclein (involved in amyotrophic lateral sclerosis and Parkinson's disease, respectively) are also found associated with exosomes. This review will focus on the role of exosomes in neurodegenerative disorders and discuss the potential of these vesicles for the spread of neurotoxicity, therapeutics, and diagnostics for these diseases. Magdalini Polymenidou and Don W Cleveland. The seeds of neurodegeneration: prion-like spreading in ALS.. Cell 147(3):498–508, October 2011. Abstract Misfolded proteins accumulating in several neurodegenerative diseases (including Alzheimer, Parkinson, and Huntington diseases) can cause aggregation of their native counterparts through a mechanism similar to the infectious prion protein's induction of a pathogenic conformation onto its cellular isoform. Evidence for such a prion-like mechanism has now spread to the main misfolded proteins, SOD1 and TDP-43, implicated in amyotrophic lateral sclerosis (ALS). The major neurodegenerative diseases may therefore have mechanistic parallels for non-cell-autonomous spread of disease within the nervous system. Petra Steinacker, Andreas Hawlik, Stefan Lehnert, Olaf Jahn, Stephen Meier, Evamaria Görz, Kerstin E Braunstein, Marija Krzovska, Birgit Schwalenstöcker, Sarah Jesse, Christian Pröpper, Tobias Böckers, Albert Ludolph and Markus Otto. Neuroprotective function of cellular prion protein in a mouse model of amyotrophic lateral sclerosis.. The American journal of pathology 176(3):1409–20, 2010. Abstract Transgenic mice expressing human mutated superoxide dismutase 1 (SOD1) linked to familial forms of amyotrophic lateral sclerosis are frequently used as a disease model. We used the SOD1G93A mouse in a cross-breeding strategy to study the function of physiological prion protein (Prp). SOD1G93APrp-/- mice exhibited a significantly reduced life span, and an earlier onset and accelerated progression of disease, as compared with SOD1G93APrp+/+ mice. Additionally, during disease progression, SOD1G93APrp-/- mice showed impaired rotarod performance, lower body weight, and reduced muscle strength. Histologically, SOD1G93APrp-/- mice showed reduced numbers of spinal cord motor neurons and extended areas occupied by large vacuoles early in the course of the disease. Analysis of spinal cord homogenates revealed no differences in SOD1 activity. Using an unbiased proteomic approach, a marked reduction of glial fibrillary acidic protein and enhanced levels of collapsing response mediator protein 2 and creatine kinase were detected in SOD1G93APrp-/- versus SOD1G93A mice. In the course of disease, Bcl-2 decreases, nuclear factor-kappaB increases, and Akt is activated, but these changes were largely unaffected by Prp expression. 
Exclusively in double-transgenic mice, we detected a significant increase in extracellular signal-regulated kinase 2 activation at clinical onset. We propose that Prp has a beneficial role in the SOD1G93A amyotrophic lateral sclerosis mouse model by influencing neuronal and/or glial factors involved in antioxidative defense, rather than anti-apoptotic signaling. Protein-DNA interaction at the origin of neurological diseases: a hypothesis.. Journal of Alzheimer's disease : JAD 22(2):375–91, January 2010. Abstract A number of neurodegenerative diseases, including Alzheimer's disease, tauopathies, Parkinson's disease, and synucleinopathies, polyglutamine diseases, including Huntington's disease, amyotrophic lateral sclerosis, and transmissible spongiform encephalopathy, are characterized by the existence of a protein or peptide prone to aggregation specific to the disease: amyloid-$\beta$, tau protein, $\alpha$-synuclein, atrophin 1, androgen receptor, prion protein, copper-zinc superoxide dismutase, $\alpha$ 1A subunit of CaV2.1, TATA-box binding protein, huntingtin, and ataxins 1, 2, 3, and 7. Beside this common molecular feature, we have found three additional main properties related to the disease-connected protein or peptide, which are shared by all those neurological disorders: first, proneness to aggregation, which, in many cases, seems to be bound to the lack of a clearly defined secondary structure; second, reported presence of the disease-related protein inside the nucleus; and finally, an apparently unspecific interaction with DNA. These findings, together with the lack of clear details to explain the molecular origin of these neurodegenerative diseases, invite a hypothesis that, together with other plausible molecular explanations, may contribute to find the molecular basis of these diseases: I propose here the hypothesis that many neurological disorders may be the consequence, at least in part, of an aberrant interaction of the disease-related protein with nucleic acids, therefore affecting the normal DNA expression and giving place to a genetic stress which, in turn, alters the expression of proteins needed for the normal cellular function and regulation. Ruth Chia, Howard M Tattum, Samantha Jones, John Collinge, Elizabeth M C Fisher and Graham S Jackson. Superoxide dismutase 1 and tgSOD1 mouse spinal cord seed fibrils, suggesting a propagative cell death mechanism in amyotrophic lateral sclerosis.. PloS one 5(5):e10627, January 2010. Abstract BACKGROUND: Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease that specifically affects motor neurons and leads to a progressive and ultimately fatal loss of function, resulting in death typically within 3 to 5 years of diagnosis. The disease starts with a focal centre of weakness, such as one limb, and appears to spread to other parts of the body. Mutations in superoxide dismutase 1 (SOD1) are known to cause disease and it is generally accepted they lead to pathology not by loss of enzymatic activity but by gain of some unknown toxic function(s). Although different mutations lead to varying tendencies of SOD1 to aggregate, we suggest abnormal proteins share a common misfolding pathway that leads to the formation of amyloid fibrils. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that misfolding of superoxide dismutase 1 leads to the formation of amyloid fibrils associated with seeding activity, which can accelerate the formation of new fibrils in an autocatalytic cascade. 
The time limiting event is nucleation to form a stable protein "seed" before a rapid linear polymerisation results in amyloid fibrils analogous to other protein misfolding disorders. This phenomenon was not confined to fibrils of recombinant protein as here we show, for the first time, that spinal cord homogenates obtained from a transgenic mouse model that overexpresses mutant human superoxide dismutase 1 (the TgSOD1(G93A) mouse) also contain amyloid seeds that accelerate the formation of new fibrils in both wildtype and mutant SOD1 protein in vitro. CONCLUSIONS/SIGNIFICANCE: These findings provide new insights into ALS disease mechanism and in particular a mechanism that could account for the spread of pathology throughout the nervous system. This model of disease spread, which has analogies to other protein misfolding disorders such as prion disease, also suggests it may be possible to design assays for therapeutics that can inhibit fibril propagation and hence, possibly, disease progression. P G Ince, J Tomkins, J Y Slade, N M Thatcher and P J Shaw. Amyotrophic lateral sclerosis associated with genetic abnormalities in the gene encoding Cu/Zn superoxide dismutase: molecular pathology of five new cases, and comparison with previous reports and 73 sporadic cases of ALS.. Journal of neuropathology and experimental neurology 57(10):895–904, 1998. Abstract Molecular pathology has identified 2 distinct forms of neuronal inclusion body in Amyotrophic Lateral Sclerosis (ALS). ALS-type inclusions are skeins or small dense filamentous aggregates which can only be demonstrated by ubiquitin immunocytochemistry (ICC). In contrast hyaline conglomerates (HC) are large multifocal accumulations of neurofilaments. Previous reports have failed to clarify the distinction and relationship between these inclusions. Correlation of molecular pathology with sporadic and familial cases of ALS will detect specific associations between molecular lesions and defined genetic abnormalities; and determine the relevance of molecular events in familial cases to the pathogenesis of sporadic disease. We describe the molecular pathology of 5 ALS cases linked to abnormalities of the SOD1 gene, in comparison with a series of 73 sporadic cases in which SOD1-gene abnormalities were excluded. Hyaline conglomerate inclusions were detected only in the 2 cases with the SOD1 I113T mutation and showed a widespread multisystem distribution. In contrast ALS-type inclusions characterized sporadic cases (70/73) and were restricted to lower motor neurons. Hyaline conglomerates were not seen in sproadic cases. Confocal microscopic analysis and ICC shows that HC contain equally abundant phosphorylated and nonphosphorylated neurofilament epitopes, indicating that phosphorylation is not essential for their formation. In contrast neurofilament immunoreactivity is virtually absent from typical ALS-type inclusions. The SOD1-related cases all had marked corticospinal tract and dorsal column myelin loss. In 4 cases the motor cortex was normal or only minimally affected. This further illustrates the extent to which upper motor neuron damage in ALS is usually a distal axonopathy. Previously reported pathological accounts of SOD1-related familial ALS (FALS) are reviewed. Hyaline conglomerates are so far described in cases with mutations A4V, I113T and H48Q. In only 1 of 12 cases (H48Q) reported were both HC and ALS-type inclusions present in the same case. 
These findings suggest the possibility that the molecular pathology of neuronal inclusions in ALS indicates 2 distinct pathogenetic cascades.
CommonCrawl
Abstract: We investigate dynamo action in global compressible solar-like convective dynamos in the framework of mean-field theory. We simulate a solar-type star in a wedge-shaped spherical shell, where the interplay between convection and rotation self-consistently drives a large-scale dynamo. To analyze the dynamo mechanism we apply the test-field method for azimuthally ($\phi$) averaged fields to determine the 27 turbulent transport coefficients of the electromotive force, of which six are related to the $\alpha$ tensor. This method has previously been used either in simulations in Cartesian coordinates or in the geodynamo context and is applied here for the first time to fully compressible simulations of solar-like dynamos. We find that the $\phi\phi$-component of the $\alpha$ tensor does not follow the profile expected from that of kinetic helicity. The turbulent pumping velocities significantly alter the effective mean flows acting on the magnetic field and therefore challenge the flux transport dynamo concept. All coefficients are significantly affected by dynamically important magnetic fields. Quenching as well as enhancement are being observed. This leads to a modulation of the coefficients with the activity cycle. The temporal variations are found to be comparable to the time-averaged values and seem to be responsible for a nonlinear feedback on the magnetic field generation. Furthermore, we quantify the validity of the Parker-Yoshimura rule for the equatorward propagation of the mean magnetic field in the present case.
CommonCrawl
Abstract: We consider the problem of morphing between two planar drawings of the same triangulated graph, maintaining straight-line planarity. A paper in SODA 2013 gave a morph that consists of $O(n^2)$ steps where each step is a linear morph that moves each of the $n$ vertices in a straight line at uniform speed. However, their method imitates edge contractions so the grid size of the intermediate drawings is not bounded and the morphs are not good for visualization purposes. Using Schnyder embeddings, we are able to morph in $O(n^2)$ linear morphing steps and improve the grid size to $O(n)\times O(n)$ for a significant class of drawings of triangulations, namely the class of weighted Schnyder drawings. The morphs are visually attractive. Our method involves implementing the basic "flip" operations of Schnyder woods as linear morphs.
CommonCrawl
Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data. Although most attacks usually change values of many image's pixels, it has been shown that deep networks are also vulnerable to sparse alterations of the input. However, no efficient method has been proposed to compute sparse perturbations. In this paper, we exploit the low mean curvature of the decision boundary, and propose SparseFool, a geometry inspired sparse attack that controls the sparsity of the perturbations. Extensive evaluations show that our approach outperforms related methods, and scales to high dimensional data. We further analyze the transferability and the visual effects of the perturbations, and show the existence of shared semantic information across the images and the networks. Finally, we show that adversarial training using $\ell_\infty$ perturbations can slightly improve the robustness against sparse additive perturbations.
CommonCrawl
Bandaru, Narasimha Murthy and Sampath, S and Jayaraman, Narayanaswamy (2005) Synthesis and Langmuir Studies of Bivalent and Monovalent $\alpha$-D-Mannopyranosides with Lectin Con A. In: Langmuir, 21 (21). pp. 9591-9596. Highly avid interaction between carbohydrate ligands and lectin receptors nominally requires the ligand presentation in a clustered form. We present herein an approach involving Langmuir monolayer formation of the sugar ligands and the assessment of their lectin binding at the air-water interface. Bivalent $\alpha$-D-mannopyranoside-containing glycolipid ligand was used to study its binding profiles with lectin Con A, in comparison to the corresponding monovalent glycolipid. In addition to the bivalent and monovalent nature of the glycolipid ligands at the molecular level, the ligand densities at the monolayer level were varied with the aid of a nonsugar lipid molecule so as to obtain mixed monolayers with various sugar-nonsugar ratios. Lectin binding of bivalent and monovalent ligands at different ratios was monitored by differential changes in the surface area per molecule of the mixed monolayer, with and without the lectin. The present study shows that maximal binding of the lectin to the bivalent ligand occurs at lower sugar densities at the interface ($\sim 10\%$ sugar in the mixed monolayer) than for that of the monovalent ligand ($\sim 20\%$ sugar in the mixed monolayer). It is observed that complete coverage of the monolayer with only the sugar ligands does not allow all of the sugars to be functionally active.
CommonCrawl
Cobalt stabilized gamma-chloroalkynones and alkynoates as gamma-carbonyl cation synthons. $\gamma$-Cationic-$\alpha,\beta$-unsaturated carbonyl synthons have been prepared via hexacarbonyldicobalt complexed $\gamma$-chloroalkynoates and alkynones (100). Treatment of 100 with silver salts in the presence of silyl enol ethers (102-104) generated 2-alkynyl-1,6-dicarbonyl compounds (134-149) in poor to good yields. Products of 101, with a non hydrogen substituent at the propargyl site, with 102 and 103 generated syn and anti diastereomers. The ratios of syn:anti diastereomers range from 1.3: 1 to 2.1: 1 for 102 to 6.9: 1 to 15: 1 for 103. These ratios are similar to literature compounds. Decomplexation to the alkyne effected in good to excellent yields. A dimer 151 was formed accidentally in poor yield with structural characteristics similar to compounds in its class. Source: Masters Abstracts International, Volume: 34-06, page: 2379. Adviser: J. R. Green. Thesis (M.Sc.)--University of Windsor (Canada), 1995. Vizniowski, Charles Stephen., "Cobalt stabilized gamma-chloroalkynones and alkynoates as gamma-carbonyl cation synthons." (1995). Electronic Theses and Dissertations. 1455.
CommonCrawl
How does the topology of a graph's Riemann surface relate to its knot representation? Looking at the orientation of the edges around the vertices, it is obvious that left and right are oriented oppositely, where I stuck to the convention that I flipped every fat graph edge in the same direction. The resulting knot is a trefoil. Further, the bicubic planar graphs can be related to Riemann surfaces (see here and references therein). Is there a relation between the Riemann surfaces and the knot? Definition 2.1 A left-hand turn path on $(\Gamma, \mathcal O)$ is a closed path on [the cubic graph] $\Gamma$ such that, at each vertex, the path turns left in the orientation $\mathcal O$.
CommonCrawl
While one can imagine that a 3D surface could exist that realizes all pairs of principal curvatures, because the set of all pairs of principal curvatures is, loosely speaking, also 2D, it is counterintuitive that a 3D curve could also exist that realizes all pairs of curvature $\kappa(s)$ and torsion $\tau(s)$ for $s \in (-\infty,+\infty)$. However, interpreting $(\kappa(s),\tau(s))$ as a parametric curve in the Euclidean plane, it is fairly easy to conclude that $(\kappa(s),\tau(s))$ must be space-filling. What is the fractal dimension of 3D curves with a space-filling $(\kappa(s),\tau(s))$ trajectory? What is the radius of the smallest 3D sphere into which such a 3D curve fits? What about self-intersection of the 3D curve? What about 3D curves whose $(\kappa(s),\tau(s))$ trajectory has fractal dimension between $1$ and $2$, maybe resembling one of the prominent fractal 2D curves (cf. e.g. http://en.wikipedia.org/wiki/Category:Fractal_curves)? Have such 3D curves been described or studied? One could use a point sampling of the $(\kappa(s),\tau(s))$ trajectory, calculate the length of the trajectory between two adjacent sampling points and use that as the parameter range for a segment of a helix whose curvature and torsion equal the coordinate values of the sample point. The desired 3D curve is then obtained by letting the number of sampling points tend to $\infty$. However, in that construction curvature and torsion are not continuous for the approximate 3D curves, but I'm not sure if that is an issue for the limit curve. One could also use rectilinear approximations of $(\kappa(s),\tau(s))$ and smoothly piece together so-called Salkowski and anti-Salkowski curves (cf. e.g. http://www.uv.es/~monterde/pdfsarticlesmeus/CAGD-6.pdf); here I don't know whether it can be guaranteed that $s$ simultaneously parameterizes the 2D trajectory and the associated 3D curve by length. This construction would fit Hilbert curves as space-filling planar trajectories. For piecewise linear approximations of the trajectory one could use segments of 3D curves for which both curvature and torsion are linear functions of $s$; however, I could not find a description of such curves. I hope that the above suggestions help in visualizing such strange curves and, at least approximately, allow answering other questions related to such strange curves. Concerning only "the task of constructing a 3D space curve from the planar (κ(s),τ(s)) trajectory," see this MSE question: one needs to solve a system of 1st-order DiffEqs (as you realize).
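As a concrete illustration of that last point, here is a minimal numerical sketch (not from the original post) that reconstructs a space curve from a prescribed pair $(\kappa(s),\tau(s))$ by integrating the Frenet-Serret system; the particular curvature and torsion profiles below are arbitrary placeholders rather than a space-filling trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Frenet-Serret system: given curvature kappa(s) and torsion tau(s), integrate
#   r' = T,  T' = kappa*N,  N' = -kappa*T + tau*B,  B' = -tau*N.
def kappa(s):            # placeholder curvature profile
    return 1.0 + 0.5 * np.sin(s)

def tau(s):              # placeholder torsion profile
    return 0.5 * np.cos(s)

def rhs(s, y):
    T, N, B = y[3:6], y[6:9], y[9:12]
    k, t = kappa(s), tau(s)
    return np.concatenate([T, k * N, -k * T + t * B, -t * N])

# Start at the origin with the standard orthonormal frame (T, N, B).
y0 = np.concatenate([np.zeros(3), [1, 0, 0], [0, 1, 0], [0, 0, 1]]).astype(float)

sol = solve_ivp(rhs, (0.0, 20.0), y0, max_step=0.01)
curve = sol.y[0:3]       # the reconstructed curve r(s), sampled along sol.t
print(curve[:, -1])      # endpoint of the curve
```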
CommonCrawl
Abstract: MDS matrices are important building blocks providing diffusion functionality for the design of many symmetric-key primitives. In recent years, continuous efforts are made on the construction of MDS matrices with small area footprints in the context of lightweight cryptography. Just recently, Duval and Leurent (ToSC 2018/FSE 2019) reported some $32 \times 32$ binary MDS matrices with branch number 5, which can be implemented with only 67 XOR gates, whereas the previously known lightest ones of the same size cost 72 XOR gates. In this article, we focus on the construction of lightweight involutory MDS matrices, which are even more desirable than ordinary MDS matrices, since the same circuit can be reused when the inverse is required. In particular, we identify some involutory MDS matrices which can be realized with only 78 XOR gates with depth 4, whereas the previously known lightest involutory MDS matrices cost 84 XOR gates with the same depth. Notably, the involutory MDS matrix we find is much smaller than the AES MixColumns operation, which requires 97 XOR gates with depth 8 when implemented as a block of combinatorial logic that can be computed in one clock cycle. However, with respect to latency, the AES MixColumns operation is superior to our 78-XOR involutory matrices, since the AES MixColumns can be implemented with depth 3 by using more XOR gates. We prove that the depth of a $32\times 32$ MDS matrix with branch number 5 (e.g., the AES MixColumns operation) is at least 3. Then, we enhance Boyar's SLP-heuristic algorithm with circuit depth awareness, such that the depth of its output circuit is limited. Along the way, we give a formula for computing the minimum achievable depth of a circuit implementing the summation of a set of signals with given depths, which is of independent interest. We apply the new SLP heuristic to a large set of lightweight involutory MDS matrices, and we identify a depth 3 involutory MDS matrix whose implementation costs 88 XOR gates, which is superior to the AES MixColumns operation with respect to both lightweightness and latency, and enjoys the extra involution property.
CommonCrawl
Here is an interesting picture with two arrangements of four shapes. How can they make a different area with the same shapes? This is a famous physical puzzle that can be tied to the Fibonacci series. To answer the question as posed, the issue is that the two slopes are different ($\frac25$ vs $\frac38$). Note that all those numbers are in the Fibonacci series ($1,1,2,3,5,8,13,21,\ldots$). Successive fractions are closer approximations to $\varphi$, alternating between above and below. Diagrams like this can be generated by making a square with sides equal to a number in the Fibonacci series (in this question 8), then dividing it into two rectangles with widths of the two Fibonacci numbers that make up the first one chosen (3 and 5). Cut the smaller one down the diagonal, and cut the bigger one down the middle at a diagonal, such that the width of the diagonal cut is the next smallest number (2 in this case). Note that this will leave a trapezoid whose smaller parallel side matches the original small rectangle's smaller side (3 in this case), and whose larger parallel side matches the original larger rectangle's smaller side (5 in this case). Since $\frac25\approx\frac38$, and from the above construction, the pieces can be rearranged into a rectangle (as shown), the area of which will always be one away from the original square, but which will look approximately correct, since the slopes almost match. Edit: Since this answer received so many up-votes (thank you!), I suppose people are very interested in it, so I thought I'd draw up a few images! The diagram is misleading, as it hides a gap in the middle of the second configuration. This is what we actually get if we rearrange the shapes in question. Notice that the diagonal "bows" slightly, leaving some extra space between the shapes; this is where the extra unit of area creeps in. But you shouldn't trust me any more than the person who drew the original picture! As we see here, pictures can be misleading, so my diagram isn't proof that the original diagram was wrong. This just gives an intuitive sense of where the extra space has come from. Since the gradients don't match, we can't arrange them side by side like this without some blank space between them. But because they're close, the eye can be tricked into thinking they form a single continuous line, and doesn't notice the slope on the triangle changing midway down. Substituting the values into the formula gives exactly 0.5 for $A$. There are two such triangles, so that's a total of 1, the expected discrepancy. It's a misleading diagram. In reality, the angles do not match up: the larger interior angle of the orange triangle is about 69.5 degrees, whereas it's 68.2 for the grey quadrilateral. (Correct me if I'm wrong; dusting off my trig here.) In the diagram with area 65, the orange areas are actually quadrilaterals. If you look closely, you can see that they have a slight inflection where they meet the other orange section. So that extra area comes from expanding them just a bit. The triangles don't have the same slope; you can see that the large diagonal line through the "larger" rectangle bends. It's covered up by the thick lines around the triangles, but there is a very thin hole that has a total area of one square - the same square that supposedly "appeared out of nowhere". Just enlarge the image and you'll see the answer. Those shapes (in orange) at the right side of the picture are not triangles at all! They are two quadrilaterals, and thus they have an area greater than visually expected.
So there is no equality here. They are different and thus have different total areas. The picture of the bottom rectangle is misleading, because it fools people into incorrectly assuming the width of the triangles to be exactly 3 units. The real width can easily be calculated: it's a fraction of the total width, defined by the height of the point on the diagonal, or exactly 8/13 of 5, i.e. 3.076923077 (and not 3), q.e.d.
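A quick numerical check of the slope mismatch (a sketch, assuming the standard 8x8 square rearranged into a 5x13 rectangle that the answers above describe):

```python
from fractions import Fraction

# The two edges that appear to form one straight diagonal in the 5x13 rectangle.
slope_triangle = Fraction(3, 8)     # hypotenuse of the 3-by-8 right triangle
slope_trapezoid = Fraction(2, 5)    # slanted edge of the trapezoid piece
print(slope_triangle == slope_trapezoid)   # False: 3/8 != 2/5

# Shoelace area of the thin parallelogram-shaped gap along the fake "diagonal",
# whose corners are where the triangle and trapezoid vertices land.
pts = [(0, 0), (8, 3), (13, 5), (5, 2)]
twice_area = sum(x1 * y2 - x2 * y1
                 for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
print(abs(Fraction(twice_area, 2)))        # 1 -- exactly the "extra" unit square
```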
CommonCrawl
After applying the Goertzel algorithm, there will be 512 power spectrum values at $k=1,2,\ldots,512$. Looking at the results, it appears that in the upper half of the scale ($257 \leq k \leq 512$), the real (vs. imag) power spikes for frequencies 1/260 and 1/333 are negative. In addition, in the lower half of the range ($1 \leq k \leq 256$), the power value for $k=60$ is zero, and there are false positive spikes at spurious frequency values. However, if I double the length of the generated signal to 1024, and only apply the Goertzel algorithm to $k=1,2,\ldots,512$, all of the power spectrum values at $k=30, 60, 120, 260$, and $330$ are positive, there are no false positive values below 512, and the value at $k=60$ is non-zero and positive. My understanding of the Nyquist requirement is that the power value at $k=1$ is unreliable. So what I believe I have observed is that if you want to use the real (vs. imag) power spectrum values at frequencies between 1/2 and 1/512, you need to provide a signal with twice (1024) the number of samples. Another way of saying this is that if you have a discrete signal with 1024 samples, you can only determine the power spectrum at frequencies greater than 1/512, i.e., 1/511, 1/510, ..., 1/2. Is there a law or equation that states that the sample size $N$ needs to be at least twice as long as the longest wavelength to be assessed? Or is this really the meaning of Nyquist, i.e. $2N$? However, if you used an equivalent complex signal, then $N$ samples would be sufficient.
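For reference, a minimal Goertzel power computation looks like the following sketch (plain Python, not tied to the implementation used in the question). It returns the squared magnitude of DFT bin $k$ of an $N$-sample block, i.e. the power at frequency $k/N$ cycles per sample; for a real-valued signal only bins up to $N/2$ carry independent information, which is the Nyquist limit at work here.

```python
import math

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k of the real sequence x, via the Goertzel recurrence."""
    n = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

# Example: a 1024-sample sine completing 60 cycles puts its power in bin 60.
N = 1024
x = [math.sin(2.0 * math.pi * 60.0 * t / N) for t in range(N)]
print(goertzel_power(x, 60), goertzel_power(x, 61))
```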
CommonCrawl
I'm trying to implement Metropolis light transport based on this paper, and I have the basic thing working. I run the algorithm multiple times with different starting points, add the results for each starting point together and multiply them by a scaling factor to approximate the real image. In the resulting image, the starting points are noticeably brighter (they are aligned in a grid below). The paper I was reading doesn't have any advice on avoiding this start-up bias. I found this other paper that has advice on this, but I find the paper a little hard to understand. It says each initial path should be assigned a weight $f(x_0) / p_0(x_0)$. $f(x_0)$ should be the luminance of the sample at the starting point $x_0$, but is it raw luminance or normalized? $p_0$ is supposed to be a path distribution (possibly sampled by path tracing). How do I get the value of a path distribution at a specified point? Is it the ratio of the point's luminance over the average luminance of the distribution (i.e. average brightness of the image)?
CommonCrawl
Hi, I am not sure on how to do questions 5.28 and 5.29, and I do not know how to finish 3.27 after putting the 6 red balls in place. Thanks! For 5.28 try using complementary counting (proceed similar to example 5.8). To count how many ways are there to sum to $200$ with repeated digits consider cases: all numbers repeated and two numbers repeated. For 5.29, proceed similar to example 5.9. Note $40 = 2^3 \times 5$. So any factor of $40$ is of the form $2^a \times 5^b$. Consider three factors of this form and multiply them so they equal $40$. What can you say about the exponents of the numbers? For 5.27 pretend for a second that all black cards are identical. How would you proceed? Once you find the answer to the problem pretending all black cards are identical, arrange the black cards in some order and arrange them with the red cards as you did before.
CommonCrawl
As volatility goes to infinity, the delta of a call option goes to 1. The delta approximates the probability that the option expires in the money. So it seems that the probability of expiring in the money is very close to 1. However, the price of the put option approaches the constant function at the strike price. This is what one would see if the probability of the call expiring out of the money is close to 1. This seems to suggest that the probability that the call option expires in the money and out of the money are both close to 1. Put another way, as the volatility increases, the probability of a call expiring in the money increases as well. But so does the price of a put option. So it would seem that the price of a put increases, even though the probability of the put expiring in the money decreases. Where you are right: because of call-put parity $$C-P=DF(F-K)$$ with $F$ representing the forward price, the difference between the European call and European put prices is independent of the volatility. This suggests that when $C$ increases due to increasing volatility, $P$ should therefore increase by the same amount, all other things equal. And indeed the maximum put price is $DF\times K$, at which point the call is worth $DF\times F$. Where you are wrong is that, under the risk-neutral measure, $\Delta$ (or rather $N(d_1)$ in the BS equation) is not the probability of expiring in the money for a call; $N(d_2)$ is. See also these questions: Probability of exercise in the Black-Scholes Model and Yet another question about the risk-neutral measure.
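A small numerical illustration of both points (a sketch with assumed parameters $S=K=100$, $r=2\%$, $T=1$ year, using the plain Black-Scholes formulas): as $\sigma$ grows, $N(d_1)\to 1$ while $N(d_2)\to 0$, the call tends to $S$, the put tends to the discounted strike, and $C-P$ stays fixed.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, r, T, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put, norm_cdf(d1), norm_cdf(d2)

S, K, r, T = 100.0, 100.0, 0.02, 1.0
for sigma in (0.2, 1.0, 5.0, 20.0):
    c, p, nd1, nd2 = black_scholes(S, K, r, T, sigma)
    print(f"sigma={sigma:5.1f}  C={c:7.2f}  P={p:7.2f}  C-P={c - p:5.2f}  "
          f"N(d1)={nd1:.3f}  N(d2)={nd2:.3f}")
```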
CommonCrawl
We have a $5\times 5$ square table and $n$ different colours. We cut out a $2\times 2$ square at each corner, and $9$ unit squares remain. We will paint the $9$ squares with the $n$ colours; some squares can be the same colour. If colourings that differ by a rotation about the centre of the table, or by a reflection in the horizontal symmetry axis, the vertical symmetry axis, or a diagonal, are considered identical, then how many distinct colourings are there (in terms of $n$)? The following short script computes the first five entries of this sequence by applying Burnside's lemma directly.
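A minimal sketch in Python (written here as an illustration; it assumes the symmetry group is the dihedral group of order 8 acting on the nine cells of the cross left after removing the corners):

```python
# The nine remaining unit squares form a cross; label them by coordinates
# with the centre of the table at (0, 0).
cells = [(0, 0), (1, 0), (2, 0), (-1, 0), (-2, 0), (0, 1), (0, 2), (0, -1), (0, -2)]

def rotate(p):  return (-p[1], p[0])    # rotation by 90 degrees about the centre
def reflect(p): return (p[0], -p[1])    # reflection in the horizontal axis

def make_symmetry(quarter_turns, mirrored):
    def g(p):
        for _ in range(quarter_turns):
            p = rotate(p)
        return reflect(p) if mirrored else p
    return g

# Dihedral group of order 8: four rotations, each optionally followed by a reflection.
symmetries = [make_symmetry(k, m) for k in range(4) for m in (False, True)]

def cycle_count(g):
    seen, cycles = set(), 0
    for start in cells:
        if start not in seen:
            cycles += 1
            p = start
            while p not in seen:
                seen.add(p)
                p = g(p)
    return cycles

def distinct_colourings(n):
    # Burnside's lemma: average over the group of n**(number of cycles on the cells).
    return sum(n ** cycle_count(g) for g in symmetries) // len(symmetries)

print([distinct_colourings(n) for n in range(1, 6)])
# [1, 110, 3105, 37264, 264875], i.e. (n**9 + 2*n**7 + 3*n**5 + 2*n**3) / 8
```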
CommonCrawl
Abstract: Bayesian inference is the workhorse of gravitational-wave astronomy, for example, determining the mass and spins of merging black holes, revealing the neutron star equation of state, and unveiling the population properties of compact binaries. The science enabled by these inferences comes with a computational cost that can limit the questions we are able to answer. This cost is expected to grow. As detectors improve, the detection rate will go up, allowing less time to analyze each event. Improvement in low-frequency sensitivity will yield longer signals, increasing the number of computations per event. The growing number of entries in the transient catalog will drive up the cost of population studies. While Bayesian inference calculations are not entirely parallelizable, key components are embarrassingly parallel: calculating the gravitational waveform and evaluating the likelihood function. Graphical processor units (GPUs) are adept at such parallel calculations. We report on progress porting gravitational-wave inference calculations to GPUs. Using a single code - which takes advantage of GPU architecture if it is available - we compare computation times using modern GPUs (NVIDIA P100) and CPUs (Intel Gold 6140). We demonstrate speed-ups of $\sim 50 \times$ for compact binary coalescence gravitational waveform generation and likelihood evaluation and more than $100\times$ for population inference within the lifetime of current detectors. Further improvement is likely with continued development. Our python-based code is publicly available and can be used without familiarity with the parallel computing platform, CUDA.
CommonCrawl
Previously, we saw how a regression line can help us describe the relationship between an input variable like a movie budget and an output variable like the expected revenue from a movie with that budget. Let's take a look at that line again. We also showed how we can describe a line with the formula $y = mx + b$, where $m$ is the slope and $b$ is the y-intercept of the line. One of the benefits of representing our line as a formula is that we can then calculate the $y$ value of our line for any input of $x$. We know what these $m$ and $b$ values represent. However, we still need to learn how to derive these values from an input line. Say the following is a list of points along a line. How do we calculate the slope $m$ given these points along the line? This is the technique. We can determine the slope by taking any two points along the line and looking at the **ratio of the vertical distance travelled to the horizontal distance travelled**. Rise over run: $m = \frac{\Delta y}{\Delta x}$. The $\Delta$ is the capital version of the Greek letter delta, and delta means change. So you can read the above formula as $m$ equals change in $y$ divided by change in $x$. In other words, change in $x$ means our ending $x$ value minus our starting $x$ value, and change in $y$ means our ending $y$ value minus our initial $y$ value. Written with coordinates, $m = \frac{y_1 - y_0}{x_1 - x_0}$, where $y_1$ is our ending point's $y$ value, $y_0$ is our initial point's $y$ value, and $x_1$ and $x_0$ are our ending and initial $x$ values, respectively. So that is how we calculate the slope of a line. Rise over run. Take any two points along that line and divide the distance travelled vertically by the distance travelled horizontally. Change in $y$ divided by change in $x$. Now that we know how to calculate the slope, let's turn our attention to calculating the y-intercept. For example, look at the line below. If you look at the far left of the x-axis you will see that our $y$ value is no longer 0 when $x$ is 0. So we should calculate the value of our y-intercept, $b$. Here's what we can do. Now to solve for $b$, we need to fill in values for $y$ and $x$. Let's see how well we did by providing a value of $x$, and seeing if the $y$ value lines up with the $y$ value of the line in our chart. When plugging an $x$ value of 20 million into our formula, we see that $y$ equals 88 million. Let's look at our graph above and compare this result to the $y$ value where $x$ is 20 million. It seems we did a good job of getting the slope by calculating $m$ and using an $x$ and $y$ value pair to solve for the y-intercept. In this lesson, we saw how to calculate the slope and y-intercept variables that describe a line. We can take any two points along the line to calculate our slope variable. This is because, given two points along the straight line, we can divide the change in $y$ over those two points by the change in $x$ over those two points to get the slope. Then we can take that slope ($m$), pick a point along the line, plug that point's $x$ value and $y$ value into the $y = mx + b$ formula, and finally use algebra to solve for $b$ to discover the y-intercept.
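Here is a short code version of that procedure (a sketch; the two points below are hypothetical budget/revenue pairs, in millions, chosen so that the resulting line passes through the point (20, 88) used as the check above; the actual numbers behind the charts may differ):

```python
def build_line(point_1, point_2):
    """Return the slope m and y-intercept b of the line through two (x, y) points."""
    x0, y0 = point_1
    x1, y1 = point_2
    m = (y1 - y0) / (x1 - x0)   # rise over run: change in y over change in x
    b = y0 - m * x0             # solve y = m*x + b for b using either point
    return m, b

m, b = build_line((10, 48), (30, 128))   # hypothetical (budget, revenue) points
print(m, b)                              # 4.0 8.0
print(m * 20 + b)                        # 88.0 -- the y value at x = 20 million
```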
CommonCrawl
Just as circumference of circle will remain $\pi$ for unit diameter, no matter what standard unit we take, are the speeds of light and sound irrational or rational in nature ? I'm talking about theoretical speeds and not empirical, which of course are rational numbers. "Rational" and "irrational" are properties of numbers. Quantities with units aren't numbers, so they're neither rational nor irrational. A quantity with units is the product of a number and something else (the unit) that isn't a number. By choosing the unit you use to express a quantity, you can arrange for the numeric part of the quantity to be pretty much any number you want (though switching units won't let you change its sign or direction). In particular, it can be rational or irrational. And choices of units are a human convention, so it wouldn't make any sense to extend the idea of rationality or irrationality to the quantity itself. By the way, empirical measurements always have some uncertainty associated with them, so they're not really numbers either and are also neither rational nor irrational. A measurement is probably better thought of as a range (or better yet, a probability distribution) which will necessarily include both rational and irrational numbers. It depends on the unit you want to express it. If you choose c/100 as the speed unit, c will be expressed with a rational number. If you choose c/π, you'll have an irrational one. That depends on measure, not on nature. Well it's a tricky question in some way. You can for example consider the second as a rational number because its definition (a number of times the time needed for some atom to change state) is rational in nature (you can see it like this at least): you're technically just counting a number of occurrences of an event. For the speed of sound I guess it's harder to see it as rational as there's hardly "the" speed of sound since it depends on environmental parameters (no speed of sound in the void of space as everyone knows) so it's harder to associate it with something like a rational number. I do agree with the previous answer saying that physical quantities are not really rational or irrational. in any case, it all comes down to how you see things. In the underlying physics, c = 1 (Planck units). 1 is rational. But your unit system might not have a rational length. Speed of sound is rational in nature if macroscopic quantum mechanics holds (this is still open to debate that I will not enter). We should be able prove given macroscopic quantum that speed of sound is an integer multiple of Planck length / Plank Time because of the way particle interactions drive the speed of sound. I am also puzzled by this question. I would like to pose it in an alternative fashion. The question posed by kaka is simple and clear but the answers are too complicated. Rational numbers and irrational numbers are mutually exclusive sets of numbers. Speed of light in vacuum has a constant numerical value,say in units of m/s. The question is does the numerical value fall in the set of rational numbers or the set of irrational numbers? We talk of speed of mass to be 2m/s, sqrt2m/s etc., when we ask students to solve simple problems. Thus we treat speeds to be both rational and irrational numbers. But the numerical value of the speed of light which is a universal constant must fall either in the category of rational numbers or irrational numbers. To which category does it belong is the question. 
The circumference of a circle will be $\pi$ for a circle of unit diameter geometrically, but not physically; essentially because infinite precision doesn't obtain in physics. The question whether the speed of light or sound is rational or irrational is similarly ill-posed physically.
CommonCrawl
Abstract: These lecture notes form an expanded account of a course given at the Summer School on Topology and Field Theories held at the Center for Mathematics at the University of Notre Dame, Indiana during the Summer of 2012. A similar lecture series was given in Hamburg in January 2013. The lecture notes are divided into two parts. The first part, consisting of the bulk of these notes, provides an expository account of the author's joint work with Christopher Douglas and Noah Snyder on dualizability in low-dimensional higher categories and the connection to low-dimensional topology. The cobordism hypothesis provides bridge between topology and algebra, establishing important connections between these two fields. One example of this is the prediction that the $n$-groupoid of so-called `fully-dualizable' objects in any symmetric monoidal $n$-category inherits an O(n)-action. However the proof of the cobordism hypothesis outlined by Lurie is elaborate and inductive. Many consequences of the cobordism hypothesis, such as the precise form of this O(n)-action, remain mysterious. The aim of these lectures is to explain how this O(n)-action emerges in a range of low category numbers ($n \leq 3$). The second part of these lecture notes focuses on the author's joint work with Clark Barwick on the Unicity Theorem, as presented in arXiv:1112.0040. This theorem and the accompanying machinery provide an axiomatization of the theory of $(\infty,n)$-categories and several tools for verifying these axioms. The aim of this portion of the lectures is to provide an introduction to this material.
CommonCrawl
Abstract: The GT200 is a device that has been extensively used by the Mexican armed forces to remotely detect and identify substances such as drugs and explosives. A double blind experiment has been performed to test its effectivity. In seventeen out of twenty attempts, the GT200 failed in the hands of certified operators to find more than 1600 amphetamine pills and four bullets hidden in a randomly chosen cardboard box out of eight identical boxes distributed within a 90m$\times$20m ballroom. This result is compatible with the 1/8 probability expected for a completely ineffectual device, and is incompatible with even a moderately effective working one.
CommonCrawl
Why did the cow cross the road? Well, one reason is that Farmer John's farm simply has a lot of roads, making it impossible for his cows to travel around without crossing many of them. FJ's farm is arranged as an $N \times N$ square grid of fields ($3 \leq N \leq 100$), with a set of $N-1$ north-south roads and $N-1$ east-west roads running through the interior of the farm serving as dividers between the fields. A tall fence runs around the external perimeter, preventing cows from leaving the farm. Bessie the cow can move freely from any field to any other adjacent field (north, east, south, or west), as long as she carefully looks both ways before crossing the road separating the two fields. It takes her $T$ units of time to cross a road ($0 \leq T \leq 1,000,000$). One day, FJ invites Bessie to visit his house for a friendly game of chess. Bessie starts out in the north-west corner field and FJ's house is in the south-east corner field, so Bessie has quite a walk ahead of her. Since she gets hungry along the way, she stops at every third field she visits to eat grass (not including her starting field, but including possibly the final field in which FJ's house resides). Some fields are grassier than others, so the amount of time required for stopping to eat depends on the field in which she stops. Please help Bessie determine the minimum amount of time it will take to reach FJ's house. The first line of input contains $N$ and $T$. The next $N$ lines each contain $N$ positive integers (each at most 100,000) describing the amount of time required to eat grass in each field. The first number of the first line is the north-west corner. Print the minimum amount of time required for Bessie to travel to FJ's house. The optimal solution for this example involves moving east 3 squares (eating the "10"), then moving south twice and west once (eating the "5"), and finally moving south and east to the goal.
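One way to attack this (a sketch of a standard approach, not an official solution): because the grass stop happens on every third field Bessie visits, track her position together with the number of moves made so far modulo 3, and run Dijkstra's algorithm on that expanded state graph; since she may deliberately take extra moves to shift the modulo-3 phase, a plain DP over positions alone is not enough.

```python
import heapq, sys

def solve():
    data = sys.stdin.read().split()
    n, t = int(data[0]), int(data[1])
    grass = [[int(data[2 + i * n + j]) for j in range(n)] for i in range(n)]

    INF = float("inf")
    # dist[r][c][k]: cheapest cost to stand on field (r, c) having made k moves mod 3
    dist = [[[INF] * 3 for _ in range(n)] for _ in range(n)]
    dist[0][0][0] = 0
    pq = [(0, 0, 0, 0)]                       # (cost, row, col, moves mod 3)
    while pq:
        cost, r, c, k = heapq.heappop(pq)
        if cost > dist[r][c][k]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n:
                nk = (k + 1) % 3
                # every crossing costs t; every third field visited adds its grass time
                ncost = cost + t + (grass[nr][nc] if nk == 0 else 0)
                if ncost < dist[nr][nc][nk]:
                    dist[nr][nc][nk] = ncost
                    heapq.heappush(pq, (ncost, nr, nc, nk))
    print(min(dist[n - 1][n - 1]))

solve()
```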
CommonCrawl
We give necessary and sufficient conditions on a presentable $\infty$-category $C$ so that families of objects of $C$ form an $\infty$-topos. In particular, we prove a conjecture of Joyal that this is the case whenever $C$ is stable. Theory and Applications of Categories, Vol. 34, 2019, No. 9, pp 243-248.
CommonCrawl
In the context of geometric quantisation, one starts with the data of a symplectic manifold together with a pre-quantum line bundle, and obtains a quantum Hilbert space by means of the auxiliary structure of a polarisation, i.e. typically a Lagrangian foliation or a Kähler structure. One common and widely studied problem is that of quantising Hamiltonian flows which do not preserve it. Principal bundles and their moduli have been important in various aspects of physics and geometry for many decades. It is perhaps not so well-known that a substantial portion of the original motivation for studying them came from number theory, namely the study of Diophantine equations. I will describe a bit of this history and some recent developments. I discuss a geometric interpretation of the twisted indexes of 3d (softly broken) $\cN=4$ gauge theories on $S^1 \times \Sigma$ where $\Sigma$ is a closed genus $g$ Riemann surface, mainly focussing on quivers with unitary gauge groups. The path integral localises to a moduli space of solutions to generalised vortex equations on $\Sigma$, which can be understood algebraically as quasi-maps to the Higgs branch. I demonstrate that the twisted indexes computed in previous work reproduce the virtual Euler characteristic of the moduli spaces of twisted quasi-maps. I will review the geometric approach to the description of Coulomb branches and Chern-Simons terms of gauge theories coming from compactifications of M-theory on elliptically fibered Calabi-Yau threefolds. Mathematically, this involves finding all the crepant resolutions of a given Weierstrass model and understanding the network of flops connecting them together with computing certain topological invariants. I will further check that the uplifted theory in 6d is anomaly-free using Green-Schwartz mechanism. Continued discussion to the two informal talks given by Dylan Butson on January 21st and 28th. The Heisenberg algebra plays an important role in many areas of mathematics and physics. Khovanov constructed a categorical analogue of this algebra which emphasizes its connections to representation theory and combinatorics. Recently, Brundan, Savage, and Webster have shown that the Grothendieck group of this category is isomorphic to the Heisenberg algebra. However, applying an alternative decategorification functor called the trace to the Heisenberg category yields a richer structure: a W-algebra, an infinite-dimensional Lie algebra related to conformal field theory.
CommonCrawl
The tip calculator can be enhanced by asking the user for the meal cost and desired tip percentage using input from the keyboard. You might want to review the input function from an earlier lesson. Here's the base code for the tip calculator with some interaction from the user (remember that $tip=per/100\times meal$, where $per$ and $meal$ are the percentage of tip to leave and the cost of the meal, respectively). Now you try. Try finishing this tip calculator by filling in the tip = line.
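For reference, a completed version might look like the following sketch (this assumes the base code read the meal cost and tip percentage with input and converted them to floats, which is one reasonable reconstruction; variable names follow the formula above):

```python
meal = float(input("What was the cost of the meal? "))
per = float(input("What percent would you like to tip? "))

tip = per / 100 * meal            # the line you were asked to fill in

print("Tip amount:", round(tip, 2))
print("Total bill:", round(meal + tip, 2))
```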
CommonCrawl
Abstract: The Higgs field mass term, being superrenormalizable, has a unique status within the standard model. Through the opening it affords, $SU(3) \times SU(2) \times U(1)$ singlet fields can have renormalizable couplings to standard model fields. We present examples that are neither grotesque nor unnatural. A possible consequence is to spread the Higgs particle resonance into several weaker ones, or to afford it additional, effectively invisible decay channels.
CommonCrawl
The multiplicative identity property holds that $a\times1=a$, for some real number $a$. In other words, the property holds that the product of 1 and any real number $a$ will be equal to $a$. In this example, we can set $a\times 1=7\times 1$. Therefore, the other side of the equation will be $a=7$.
CommonCrawl
This post is an overview of different optimization algorithms for neural networks. In this post, we focus on two main streams of one-stage object detection methods: the YOLO family and the SSD family. Compared to two-stage methods (like the R-CNN series), these models skip the region proposal stage and directly extract detection results from feature maps. For that reason, one-stage models are faster, but at the cost of reduced accuracy. In this post, we discuss computationally efficient DCNN architectures, such as MobileNet, ShuffleNet and their variants. In this post, we look into two high-resolution image generation models: ProGAN and StyleGAN. They generate artificial images gradually, starting from a very low resolution and continuing to a high resolution (finally $1024\times 1024$).
CommonCrawl
What happens when you cut a biconvex lens in half? Specifically, does the focal length change? How can this be rationalized? It might help to think about the symmetry of a sliced biconvex lens. If the biconvex lens can focus light from point F1 to F2, both a distance $f$ from the lens, then when you cut the lens in half, each half will have a focal length equal to $f$. The focal length of the biconvex lens is $f/2$. One can calculate the ratio of focal lengths before and after cutting by taking $R_2\rightarrow\infty$ and $d\rightarrow d/2$. The focal power of the second surface, which is now plane, will be smaller (actually 0) and, unless this is compensated by a huge decrease in thickness, the lens will have a longer focal length. That's what intuitively seems to happen with the dimensions used in real-life lenses.
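To make the second answer quantitative: in the thin-lens limit the lensmaker's equation for a symmetric biconvex lens of radius $R$ and index $n$, and for one of its plano-convex halves ($R_2\rightarrow\infty$), gives

$$\frac{1}{f_{\text{biconvex}}}=(n-1)\left(\frac{1}{R}-\frac{1}{-R}\right)=\frac{2(n-1)}{R},\qquad \frac{1}{f_{\text{half}}}=(n-1)\left(\frac{1}{R}-\frac{1}{\infty}\right)=\frac{n-1}{R},$$

so $f_{\text{half}} = 2\,f_{\text{biconvex}}$: each half has twice the focal length of the original lens. The thickness term is neglected here, which is exactly the caveat the second answer raises for real, thick lenses.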
CommonCrawl
Farmer John's $N$ cows, conveniently numbered $1 \ldots N$ ($2 \leq N \leq 10^5$), have a complex social structure revolving around "moo networks" --- smaller groups of cows that communicate within their group but not with other groups. Each cow is situated at a distinct $(x,y)$ location on the 2D map of the farm, and we know that $M$ pairs of cows $(1 \leq M < 10^5)$ moo at each-other. Two cows that moo at each-other belong to the same moo network. In an effort to update his farm, Farmer John wants to build a rectangular fence, with its edges parallel to the $x$ and $y$ axes. Farmer John wants to make sure that at least one moo network is completely enclosed by the fence (cows on the boundary of the rectangle count as being enclosed). Please help Farmer John determine the smallest possible perimeter of a fence that satisfies this requirement. It is possible for this fence to have zero width or zero height. The first line of input contains $N$ and $M$. The next $N$ lines each contain the $x$ and $y$ coordinates of a cow (nonnegative integers of size at most $10^8$). The next $M$ lines each contain two integers $a$ and $b$ describing a moo connection between cows $a$ and $b$. Every cow has at least one moo connection, and no connection is repeated in the input. Please print the smallest perimeter of a fence satisfying Farmer John's requirements.
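A sketch of one straightforward approach (not an official solution): group the cows into moo networks with a union-find structure, compute each network's axis-aligned bounding box, and output the smallest resulting perimeter.

```python
import sys

def solve():
    data = sys.stdin.buffer.read().split()
    n, m = int(data[0]), int(data[1])
    xs = [int(data[2 + 2 * i]) for i in range(n)]
    ys = [int(data[3 + 2 * i]) for i in range(n)]

    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for i in range(m):
        a = int(data[2 + 2 * n + 2 * i]) - 1
        b = int(data[3 + 2 * n + 2 * i]) - 1
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Axis-aligned bounding box of each moo network.
    box = {}
    for i in range(n):
        r = find(i)
        x1, y1, x2, y2 = box.get(r, (xs[i], ys[i], xs[i], ys[i]))
        box[r] = (min(x1, xs[i]), min(y1, ys[i]), max(x2, xs[i]), max(y2, ys[i]))

    print(min(2 * ((x2 - x1) + (y2 - y1)) for x1, y1, x2, y2 in box.values()))

solve()
```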
CommonCrawl
Abstract. We consider a model of a leaky quantum wire with the Hamiltonian $-\Delta -\alpha \delta(x-\Gamma)$ in $L^2(\R^2)$, where $\Gamma$ is a compact deformation of a straight line. The existence of wave operators is proven and the S-matrix is found for the negative part of the spectrum. Moreover, we conjecture that the scattering at negative energies becomes asymptotically purely one-dimensional, being determined by the local geometry in the leading order, if $\Gamma$ is a smooth curve and $\alpha \to\infty$.
CommonCrawl
Abstract. We establish upper bounds for the spectral gap of the stochastic Ising model at low temperatures in an $l \times l$ box with boundary conditions which are not purely plus or minus; specifically, we assume the magnitude of the sum of the boundary spins over each interval of length $l$ in the boundary is bounded by $\delta l$, where $\delta < 1$. We show that for any such boundary condition, when the temperature is sufficiently low (depending on $\delta$), the spectral gap decreases exponentially in $l$.
CommonCrawl
Book — xxii, 300 pages : illustrations, maps ; 23 cm. Chapter 18 : Local Adaptation to Climate Change: What Comes Next? Don Albrecht and Paul Lachapelle. The concept of community, in all its diverse definitions and manifestations, provides a unique approach to learn more about how groups of individuals and organizations are addressing the challenges posed by climate change. This new volume highlights specific cases of communities developing innovative approaches to climate mitigation and adaptation around the United States. Defining community more comprehensively than just spatial geography to include also communities of interest, identity and practice, this book highlights how individuals and organizations are addressing the challenges posed by climate change through more resilient social processes, government policies and sustainable practices. Through close examinations of community efforts across the United States, including agricultural stakeholder engagement and permaculture projects, coastal communities and prolonged drought areas, and university extension and local governments, this book shows the influence of building individual and institutional capacity toward addressing climate change issues at the community level. It will be useful to community development students, scholars and practitioners learning to respond to unexpected shocks and address chronic stress associated with climate change and its impacts. Book — xii, 238 pages : illustrations, maps ; 24 cm. Chapter 5. Hurricanes vs. "Mass Idleness" Chapter 7. Adaptive Practices, Past and Present Bibliography Index. Hurricanes have been a constant in the history of New Orleans. Since before its settlement as a French colony in the eighteenth century, the land entwined between Lake Pontchartrain and the Mississippi River has been lashed by powerful Gulf storms that have wrought immeasurable loss and devastation, prompting near-constant reinvention and ingenuity on the part of its inhabitants. Changes in the Air offers a rich and exhaustively researched history of how hurricanes have shaped and reshaped New Orleans from the colonial era to the present day, focusing on how its residents have continually adapted to a uniquely unpredictable and destructive environment across more than three centuries. Book — xiv, 288 pages : illustrations ; 29 cm. Exploring environmental changes through Earth's geological history using chemostratigraphy Chemostratigraphy is the study of the chemical characteristics of different rock layers. Decoding this geochemical record across chronostratigraphic boundaries can provide insights into geological history, past climates, and sedimentary processes. Chemostratigraphy Across Major Chronological Boundaries presents state-of-the-art applications of chemostratigraphic methods and demonstrates how chemical signatures can decipher past environmental conditions. 
Volume highlights include: Presents a global perspective on chronostratigraphic boundaries Describes how different proxies can reveal distinct elemental and isotopic events in the geologic past Examines the Archaean-Paleoproterozoic, Proterozoic-Paleozoic, Paleozoic-Mesozoic, and Mesozoic-Paleogene boundaries Explores cause-and-effect through major, trace, PGE, and REE elemental, stable, and radiogenic isotopes Offers solutions to persistent chemostratigraphic problems on a micro-global scale Geared toward academic and researchgeoscientists, particularly in the fields of sedimentary petrology, stratigraphy, isotope geology, geochemistry, petroleum geology, atmospheric science, oceanography, climate change and environmental science, Chemostratigraphy Across Major Chronological Boundaries offers invaluable insights into environmental evolution and climatic change. 11. The Consequential Intersection of Social Inequality and Climate Change: Health, Coping, and Community Organizing. The year 2016 was the hottest year on record and the third consecutive record-breaking year in planet temperatures. The following year was the hottest in a non-El Nino year. Of the seventeen hottest years ever recorded, sixteen have occurred since 2000, indicating the trend in climate change is toward an ever warmer Earth. However, climate change does not occur in a social vacuum; it reflects relations between social groups and forces us to contemplate the ways in which we think about and engage with the environment and each other. Employing the experience-near anthropological lens to consider human social life in an environmental context, this book examines the fateful global intersection of ongoing climate change and widening social inequality. Over the course of the volume, Singer argues that the social and economic precarity of poorer populations and communities-from villagers to the urban disadvantaged in both the global North and global South-is exacerbated by climate change, putting some people at considerably enhanced risk compared to their wealthier counterparts. Moreover, the book adopts and supports the argument that the key driver of global climatic and environmental change is the global economy controlled primarily by the world's upper class, which profits from a ceaseless engine of increased production for national middle classes who have been converted into constant consumers. Drawing on case studies from Alaska, Ecuador, Bangladesh, Haiti and Mali, Climate Change and Social Inequality will be of great interest to students and scholars of climate change and climate science, environmental anthropology, medical ecology and the anthropology of global health. Book — xv, 138 pages : illustrations, maps ; 25 cm. 8. Conclusion 8.1 Summary of contribution 8.2 Research Implications 8.3 Future directions. Very few studies have been conducted to explore the vulnerability of women in the context of climate change. This book addresses this absence by investigating the structure of women's livelihoods and coping capacity in a disaster vulnerable coastal area of Bangladesh. The research findings suggest that the distribution of livelihood capitals of vulnerable women in rural Bangladesh is heavily influenced by several climatic events, such as cyclones, floods and seasonal droughts that periodically affect the region. 
Women face several challenges in their livelihoods, including vulnerability to their income, household assets, lives and health, food security, education, water sources, sanitation and transportation systems, because of ongoing climate change impacts. The findings have important policy relevance for all involved in disaster and risk management, both within Bangladesh and the developing countries facing climate change impacts. Based on the research findings, the book also provides recommendations to improving the livelihoods of women in the coastal communities. This book will appeal to academics, researchers and professionals in environmental management, gender and development, and climate change governance looking at the effects of and adaptation to climate change, gender issues and natural disaster management strategies. Women in rural Bangladesh are heavily influenced by several climatic events, such as cyclones, floods and seasonal droughts that periodically affect the region. Yet very few studies focus on the vulnerability of women in the context of climate change. This book addresses this gap by investigating the structure of women's livelihoods and coping capacity in a disaster-prone coastal area of Bangladesh. Climate change impacts mean women face several challenges in their livelihoods, such as income, household assets, health, food security, education, water sources, sanitation and transportation systems. Chapters look at the importance of relevant policy for all of those involved in disaster and risk management, both on a rural Bangladeshi and international level. This book provides recommendations to improving the livelihoods of women in coastal communities. It will appeal to academics, researchers and professionals in environmental management, gender and development. It will be useful to individuals studying climate change governance who are looking at the effects of and adaptation to climate change, gender issues and natural disaster management strategies. London, UK ; New York, NY, USA : Bloomsbury Academic, 2019. Book — 287 pages ; 25 cm. 7. What We've Learned From Climate Sceptics Notes Bibliography Index. Climate Change Scepticism is the first ecocritical study to examine the cultures and rhetoric of climate scepticism in the UK, Germany, the USA and France. Collaboratively written by leading scholars from Europe and North America, the book considers climate skeptical-texts as literature, teasing out differences and challenging stereotypes as a way of overcoming partisan political paralysis on the most important cultural debate of our time. Book — vii, 97 pages : illustrations ; 26 cm. Introduction Preliminaries A $2\times 2$ matrix trick Ultrapowers of trivial $\mathrm W^*$-bundles Property (SI) and its consequences Unitary equivalence of totally full positive elements $2$-coloured equivalence Nuclear dimension and decomposition rank Quasidiagonal traces Kirchberg algebras Addendum Bibliography. The authors introduce the concept of finitely coloured equivalence for unital $^*$-homomorphisms between $\mathrm C^*$-algebras, for which unitary equivalence is the $1$-coloured case. 
They use this notion to classify $^*$-homomorphisms from separable, unital, nuclear $\mathrm C^*$-algebras into ultrapowers of simple, unital, nuclear, $\mathcal Z$-stable $\mathrm C^*$-algebras with compact extremal trace space up to $2$-coloured equivalence by their behaviour on traces; this is based on a $1$-coloured classification theorem for certain order zero maps, also in terms of tracial data. As an application the authors calculate the nuclear dimension of non-AF, simple, separable, unital, nuclear, $\mathcal Z$-stable $\mathrm C^*$-algebras with compact extremal trace space: it is 1. In the case that the extremal trace space also has finite topological covering dimension, this confirms the remaining open implication of the Toms-Winter conjecture. Inspired by homotopy-rigidity theorems in geometry and topology, the authors derive a ``homotopy equivalence implies isomorphism'' result for large classes of $\mathrm C^*$-algebras with finite nuclear dimension. Book — xi, 119 pages ; 23 cm. Appendix. Chapter headings from Du Châtelet's foundations of physics, and from several early 18th century Newtonian textbooks. The influence of Foundations of Physics (also called Institutions de physique), published in Paris in 1740, written by Gabrielle Emilie Le Tonnelier de Breteuil Du Châtelet (or Emilie Du Châtelet (1706-1749)). "A firsthand chronicle of the catastrophic reality of our planet's changing ecosystems and the necessity of relishing this vulnerable, fragile Earth while we still can"-- Provided by publisher. Book — xii, 352 pages : illustrations ; 29 cm. 22. Using natural analogs to design and engineer effective sealing materials TBA Appendix: Supplemental materials. Geological Carbon Storage Subsurface Seals and Caprock Integrity Seals and caprocks are an essential component of subsurface hydrogeological systems, guiding the movement and entrapment of hydrocarbon and other fluids. Geological Carbon Storage: Subsurface Seals and Caprock Integrity offers a survey of the wealth of recent scientific work on caprock integrity with a focus on the geological controls of permanent and safe carbon dioxide storage, and the commercial deployment of geological carbon storage. Volume highlights include: Low-permeability rock characterization from the pore scale to the core scale Flow and transport properties of low-permeability rocks Fundamentals of fracture generation, self-healing, and permeability Coupled geochemical, transport and geomechanical processes in caprock Analysis of caprock behavior from natural analogues Geochemical and geophysical monitoring techniques of caprock failure and integrity Potential environmental impacts of carbon dioxide migration on groundwater resources Carbon dioxide leakage mitigation and remediation techniques Geological Carbon Storage: Subsurface Seals and Caprock Integrity is an invaluable resource for geoscientists from academic and research institutions with interests in energy and environment-related problems, as well as professionals in the field. Book — xxi, 106 pages : illustrations (some color), color map ; 22 cm. Chapter Six. Where to From Here?: Learning from our Pacific Neighbours. Situating Maori Ecological Knowledge (MEK) within traditional environmental knowledge (TEK) frameworks, this book recognizes that indigenous ecological knowledge contributes to our understanding of how we live in our world (our world views), and in turn, the ways in which humans adapt to climate change. 
As an industrialized nation, Aotearoa/New Zealand (A/NZ) has responsibilities and obligations to other Pacific dwellers, including its indigenous populations. In this context, this book seeks to discuss how A/NZ can benefit from the wider Pacific strategies already in place; how to meet its global obligations to reducing GHG; and how A/NZ can utilize MEK to achieve substantial inroads into adaptation strategies and practices. In all respects, Maori tribal groups here are well-placed to be key players in adaptation strategies, policies, and practices that are referenced through Maori/Iwi traditional knowledge. Book — xvii, 154 pages ; 24 cm. Paris : Classiques Garnier, 2019. Book — 339 pages ; 22 cm. La structure de la matière modelisée d'après une "mécanique du feu" Nollet dans le projet de diffusion du XVIIIe siècle. "Au côté du Denis Diderot de la Description des arts et du Jean-Jacques Rousseau de l'Émile, Jean-Antoine Nollet participe à l'éveil d'une conscience technique. La reproduction des faits naturels, s'adossant au paradigme mécaniste, permet chez lui d'impliquer les arts mécaniques dans la modélisation théorique et de lier la machine expérimentale à la construction du savoir. Le génie technique, qui réalise l'équilibre entre abstraction mathématique et contraintes de la matière, mobilise également la sensorialité, qui se révèle à la fois condition et objet de l'investigation. Nous découvrons enfin chez Nollet une stratégie discursive fondée sur le dévoilement de la technique, précieux éclairage sur la socialisation des acquis scientifiques au XVIIIe siècle."--Page 4 of cover. Hoboken, NJ : John Wiley & Sons ; Washington, D.C. : American Geophysical Union, 2019. Book — ix, 208 pages, 16 unnumbered pages of plates : illustrations (some color), maps ; 29 cm. Cratonic lithosphere discontinuities : dynamics of small-volume melting, metacratonization, and a possible role for brines / Sonja Aulbach. A multidisciplinary update on continental plate tectonics and plate boundary discontinuities Understanding the origin and evolution of the continental crust continues to challenge Earth scientists. Lithospheric Discontinuities offers a multidisciplinary review of fine scale layering within the continental lithosphere to aid the interpretation of geologic layers. Once Earth scientists can accurately decipher the history, internal dynamics, and evolution of the continental lithosphere, we will have a clearer understanding of how the crust formed, how plate tectonics began, and how our continents became habitable. Volume highlights: Theories and observations of the current state of tectonic boundaries and discontinuities Contributions on field observations, laboratory experiments, and geodynamic predictions from leading experts in the field Mantle fabrics in response to various mantle deformation processes Insights on fluid distribution using geophysical observations, and thermal and viscosity constraints from dynamic modeling Discontinuities associated with lithosphere and lithosphere-asthenosphere boundary An integrated study of the evolving physical and chemical processes associated with lithosphere asthenosphere interaction Written for academic and researchgeoscientists, particularly in the field of tectonophysics, geophysicists, geodynamics, seismology, structural geology, environmental geology, and geoengineering, Lithospheric Discontinuities is a valuable resource that sheds light on the origin and evolution of plate interaction processes. First edition. 
- New York : MCD/Farrar, Straus and Giroux, 2019. "By 1979, we knew nearly everything we understand today about climate change--including how to stop it. Over the next decade, a handful of scientists, politicians, and strategists, led by two unlikely heroes, risked their careers in a desperate, escalating campaign to convince the world to act before it was too late. [This] is their story"-- Publisher marketing. Book — viii, 365 pages : illustrations ; 29 cm. A rigorous mathematical problem-solving framework for analyzing the Earth's energy resources GeoEnergy encompasses the range of energy technologies and sources that interact with the geological subsurface. Fossil fuel availability studies have historically lacked concise modeling, tending instead toward heuristics and overly-complex processes. Mathematical GeoEnergy: Oil Discovery, Depletion and Renewal details leading-edge research based on a mathematically-oriented approach to geoenergy analysis. Volume highlights include: Applies a formal mathematical framework to oil discovery, depletion, and analysis Employs first-order applied physics modeling, decreasing computational resource requirements Illustrates model interpolation and extrapolation to fill out missing or indeterminate data Covers both stochastic and deterministic mathematical processes for historical analysis and prediction Emphasizes the importance of up-to-date data, accessed through the companion website Demonstrates the advantages of mathematical modeling over conventional heuristic and empirical approaches Accurately analyzes the past and predicts the future of geoenergy depletion and renewal using models derived from observed production data Intuitive mathematical models and readily available algorithms make Mathematical GeoEnergy: Oil Discovery, Depletion and Renewal an insightful and invaluable resource for scientists and engineers using robust statistical and analytical tools applicable to oil discovery, reservoir sizing, dispersion, production models, reserve growth, and more. Although hints of a crisis appeared as early as the 1570s, the temperature by the end of the sixteenth century plummeted so drastically that Mediterranean harbors were covered with ice, birds literally dropped out of the sky, and "frost fairs" were erected on a frozen Thames-with kiosks, taverns, and even brothels that become a semi-permanent part of the city. Recounting the deep legacy and far-ranging consequences of this "Little Ice Age, " acclaimed historian Philipp Blom reveals how the European landscape had suddenly, but ineradicably, changed by the mid-seventeenth century. While apocalyptic weather patterns destroyed entire harvests and incited mass migrations, they gave rise to the growth of European cities, the emergence of early capitalism, and the vigorous stirrings of the Enlightenment. A timely examination of how a society responds to profound and unexpected change, Nature's Mutiny will transform the way we think about climate change in the twenty-first century and beyond. Newman, William R., 1955- author. A book that finally demystifies Newton's experiments in alchemy When Isaac Newton's alchemical papers surfaced at a Sotheby's auction in 1936, the quantity and seeming incoherence of the manuscripts were shocking. No longer the exemplar of Enlightenment rationality, the legendary physicist suddenly became "the last of the magicians." 
Newton the Alchemist unlocks the secrets of Newton's alchemical quest, providing a radically new understanding of the uncommon genius who probed nature at its deepest levels in pursuit of empirical knowledge. In this evocative and superbly written book, William Newman blends in-depth analysis of newly available texts with laboratory replications of Newton's actual experiments in alchemy. He does not justify Newton's alchemical research as part of a religious search for God in the physical world, nor does he argue that Newton studied alchemy to learn about gravitational attraction. Newman traces the evolution of Newton's alchemical ideas and practices over a span of more than three decades, showing how they proved fruitful in diverse scientific fields. A precise experimenter in the realm of "chymistry, " Newton put the riddles of alchemy to the test in his lab. He also used ideas drawn from the alchemical texts to great effect in his optical experimentation. In his hands, alchemy was a tool for attaining the material benefits associated with the philosopher's stone and an instrument for acquiring scientific knowledge of the most sophisticated kind. Newton the Alchemist provides rare insights into a man who was neither Enlightenment rationalist nor irrational magus, but rather an alchemist who sought through experiment and empiricism to alter nature at its very heart. Hoboken, NJ, USA : John Wiley & Sons, Inc. ; Washington, D.C. : American Geophysical Union, 2019. Book — ix, 508 pages : illustrations ; 29 cm. A comprehensive and practical guide to methods for solving complex petroleum engineering problems Petroleum engineering is guided by overarching scientific and mathematical principles, but there is sometimes a gap between theoretical knowledge and practical application. Petroleum Engineering: Principles, Calculations, and Workflows presents methods for solving a wide range of real-world petroleum engineering problems. Each chapter deals with a specific issue, and includes formulae that help explain primary principles of the problem before providing an easy to follow, practical application. Volume highlights include: A robust, integrated approach to solving inverse problems In-depth exploration of workflows with model and parameter validation Simple approaches to solving complex mathematical problems Complex calculations that can be easily implemented with simple methods Overview of key approaches required for software and application development Formulae and model guidance for diagnosis, initial modeling of parameters, and simulation and regression Petroleum Engineering: Principles, Calculations, and Workflows is a valuable and practical resource to a wide community of geoscientists, earth scientists, exploration geologists, and engineers. This accessible guide is also well-suited for graduate and postgraduate students, consultants, software developers, and professionals as an authoritative reference for day-to-day petroleum engineering problem solving.
CommonCrawl
In this paper, an elastic-plastic fracture toughness test under mixed mode loading is proposed using a single edge-cracked specimen subjected to a bending moment ($M$), a shearing force ($F$), and a twisting moment ($T$). The J-integral of a crack in the specimen is expressed in the form $J = J_I + J_{II} + J_{III}$, where $J_I$, $J_{II}$ and $J_{III}$ are the components corresponding to mode I, mode II and mode III deformation, respectively. $J_I$, $J_{II}$ and $J_{III}$ can be estimated from the $M$-$\theta$ ($\theta$: crack opening angle), $F$-$U$ ($U$: crack shear displacement) and $T$-$\alpha$ ($\alpha$: crack twisting angle) diagrams. In order to obtain the $M$-$\theta$, $F$-$U$ and $T$-$\alpha$ diagrams in real time, a new deformation gage for mixed mode loading is proposed using an optical position sensing device (PSD). The elastic-plastic fracture toughness test was carried out on an aluminum alloy, and the loading apparatus was designed and manufactured for this experiment. Regarding the loading condition at crack initiation in mixed mode, the MMT-3 specimen (mode I + mode II + mode III) has the lowest value among all the specimens. This implies that crack initiation is possible at a lower load for MMT-3, i.e., when a torque acts on the specimen in addition to the other loads under the same loading condition. The elastic-plastic fracture toughness test using the PSD proved successful in measuring the crack deformation (mode I + mode II + mode III).
CommonCrawl
I've been using Google Earth Engine recently to scale up my remote sensing analyses, particularly by leveraging the full Landsat archive available on Google's servers. I've followed Earth Engine's development for years, and published results from the platform, but, before now, never had a compelling reason to use it. Now, without hubris, I can say that some of the methods I'm using (radiometric rectification of thousands of images in multi-decadal time series) are already straining the limits of the freely available computing resources on Google's platform. After an intensive pipeline that merely normalizes the time series data I want to work with, I don't seem to have the resources to perform, say, a pixel-level time series regression on my image stack. Whatever the underlying issue (it is never quite clear with Earth Engine), regressions at the scale of 30 meters (a Landsat pixel) for the study area I'm working on, following the necessary pre-processing, haven't been working. I started wondering if I could calculate the regressions myself in (client-side) Python. Image exports from Earth Engine used to be infeasible but they have vastly improved and, recently, I've been able to schedule export "tasks," monitor them using a command-line interface, and download the results directly from Google Drive. With the pre-processed rasters downloaded to my computer, I turned to NumPy to develop a vectorized regression over each pixel in a time series image stack. Here, I describe the general procedure I used and how it can be scaled up using Python's concurrency support, pointing out some potential pitfalls associated with using multiple processes in Python. I recently attended part of a workshop run by XSEDE, a collaborative organization funded by the National Science Foundation to further high performance computing (HPC) projects. The introductory material was very interesting to me, not because I was unfamiliar with HPC, but because of the many compelling reasons for learning and using concurrency in scientific computing applications. First, and perhaps best known among these reasons, is that Moore's Law, which predicts a doubling in the number of transistors on a commercially available chip every two years, started to level off around 2004. The rate is slowing, and in the graph at the top of Wikipedia's page on the Law, you can already see the right-hand turn this trajectory is making. This is largely due to real physical limitations that chip developers are starting to encounter. The gains in the number of transistors per chip have come from making transistors progressively smaller; the smaller the transistor, the more heat (density) that must be dissipated. According to one of XSEDE's instructors during the recent "Summer Bootcamp" that I attended, computer chips today are tasked with dissipating heat, in terms of watts-per-square meter, on the order of a nuclear reactor! Keeping things from melting has become the main concern of chip design. But, as we reduce the clock rate and the voltage required to run the chip, we can reduce the amount of heat generated. There is a reduction in performance, but if we add a second chip and run both at a lower voltage, we can get more performance for the same total amount of voltage used by the single chip. Lower voltage means less power consumed and less heat generated.
In short, because of physical limitations to chip design and the demand for both power and heat dissipation, commercially available computers today almost exclusively ship with two or more central processing units (CPUs or "cores") running at a lower clock speed (measured in GHz) than many chips sold just a few years ago. When I was building computers as a kid in the early 2000s, I could buy Intel chips with clock speeds in excess of 3 GHz. A quick search on any vendor's website (say, Dell Corporation's) today, however, reveals that clock speeds haven't really budged: their new desktops ship with cores clocked at 3-4 GHz. The new computers are still "faster" for many applications, however, for the reasons I just discussed: multiple cores running simultaneously. However, if an application isn't designed to take advantage of multiple cores, you won't see that performance gain: you'll have a single-threaded or single-core (more on this) or, more generally, serial set of instructions (computer code) running on a single, slower core. So, how do we take advantage of multiple cores? First, Clay Breshears helpfully disambiguates some terminology for us: the author argues that "parallel" programming should be considered a special case of "concurrent" programming. Whereas concurrent programming specifies any practice where multiple tasks can be "in progress" at the same time, parallel programming describes a practice where the tasks are proceeding simultaneously. More concretely, a single-core system can be said to allow concurrency if the concurrent code can queue tasks (e.g., as threads) but it cannot be said to allow parallel computation. Because threads are somewhat complicated and, perhaps, also because of historical developments I'm not aware of, CPython (the standard Python implementation) has in place a feature termed the Global Interpreter Lock (GIL). If your multi-threaded Python program were like a discussion group with multiple participants (threads), the GIL is like a ball, stone, or other totem that participants must be holding before they can speak. If you don't have the ball, you're not allowed to speak; the ball can be passed from one person to another to allow that person to speak. Similarly, a thread without the GIL cannot be executed. In short, spawning multiple threads in Python does not improve performance for CPU-intensive tasks because only one thread can run in the interpreter at a time. Multiple processes, however, can be spun up from a single Python program and, by running simultaneously, get the total amount of work done in a shorter span of time. You can see examples both of using multiple threads and of using multiple cores in Python in this article by Brendan Fortuner. In this example of linear regression at the pixel-level of a raster image, our parallel execution is that of linear regression on a single pixel: with more cores/processes, we can compute more linear regressions simultaneously, thereby more quickly exhausting the finite number of regressions (and pixels) we have to do. For each pixel's time series, calculate the slope of a regression line. I have a time series of maximum NDVI from 1995 through 2015 (21 images) covering the City of Detroit. So, to start with, I need to read in each raster file (for each year) and concatenate them together into a single, large, \(N\)-dimensional array, where \(N=21\) for 21 years, in this case.
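A minimal sketch of this reading-and-stacking step is given below; it assumes the rasterio package and a placeholder file path (the original post may well have used the GDAL bindings instead), so treat the names here as illustrative.

```python
import glob
import numpy as np
import rasterio  # assumption: any GeoTIFF reader (e.g., GDAL) works equally well

# One max-NDVI raster per year; sorting is essential so the stack order
# matches the year order (1995, 1996, ..., 2015)
file_list = glob.glob('/path/to/rasters/*.tiff')  # placeholder path
file_list.sort()

layers = []
for path in file_list:
    with rasterio.open(path) as src:
        layers.append(src.read(1))  # band 1: the max-NDVI layer for that year

# Stack into an (N x Y x X) array: N years, Y rows, X columns
stack = np.stack(layers, axis=0)
n_years, n_rows, n_cols = stack.shape  # here N = 21
```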
Strictly speaking, my array ends up being of a size \(N\times Y\times X\) where \(Y,X\) are the number of rows and columns in any given image, respectively (note: each image must have the same number of rows and columns). Below, I demonstrate this directly; I use the glob library to get a list of files that match a certain pattern: *.tiff. Note that calling the file list's sort() method is essential: if the files are not in the right order, our regression line won't be fit properly (we will be assigning dependent variables to the wrong independent variable/ wrong year). Our target array from regression over \(N\) years (files) is a \(P\times N\) array, where \(P = Y\times X\). That is, \(P\) is the product of the number of rows and the number of columns. This array can be thought of as a collection of 1-D subarrays with \(N\) items: the measured outcome (here, maximum NDVI) in each year. Now that we have a suitably shaped array of our dependent variable data, maximum NDVI, we want to generate an array of identical shape of our independent variable, the year. Finally, we can combine both our dependent and independent variables into a single \((P\times N\times 2)\) array. If only x is given (and y=None), then it must be a two-dimensional array where one dimension has length 2. The two sets of measurements are then found by splitting the array along the length-2 dimension. This is why we created a combined \((P\times N\times 2)\) array; each pixel is then a \((N\times 2)\) subarray that is already set up for the stats.linregress() function. The last part, since we want to calculate regressions on every one of \(P\) total pixels is to map the stats.linregress() function over all pixels. We'll define a function to do just this, as below. The first (zeroth) element in the sequence that is returned is the slope, which is what we want. Potential pitfall: It may seem straightforward now to farm out a range of pixels, e.g., \([i,j] \in [0\cdots P]\) for \(P\) total pixels. However, with multiple processes, each process gets a complete copy of the resources required to get the job done. For instance, if you spin up 2 processes asking one to take pixel indices \(0\cdots P/2\) and the other to take pixel indices \(P/2\cdots P\), then each process needs a complete copy of the master (\(P\times N\times 2\)) array. See the issue here? For large rasters, that's a huge duplication of working memory. A better practice is to literally divide the master array into chunks before farming out those pixel (ranges) to each process. Dividing rectangular arrays in Python based on the number of processes you want to spin up may seem tricky at first. Below, I use some idiomatic Python to calculate the range of pixel indices each process would get based on P processes (note: here, P is the number of processes, whereas earlier I referred \(P\) as the number of pixels). It might be useful if you see what work contains. In my example, I had 730331 total pixels and I wanted to farm them out, evenly, to 4 processes. Note that the last range ends on 730332, since the Python range() function does not include the ending number (that is, we want to make sure we count up to, but not including pixel 730332). Finally, to farm out these subarrays to multiple processes, we need to use the ProcessPoolExecutor that ships with Python 3, available in the concurrent.futures module. 
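A sketch of this setup, under the same assumptions as before (the variable names stack, years, xy, and work are illustrative, not the author's), might look like the following; it builds the \(P\times N\times 2\) array, defines a globally defined mapping function, and chunks the pixels before farming them out.

```python
import numpy as np
from scipy import stats

# Reshape the (N x Y x X) stack into a (P x N) array: one row of N yearly
# max-NDVI values per pixel, where P = Y * X
n_years, n_rows, n_cols = stack.shape
pixels = stack.reshape(n_years, n_rows * n_cols).T

# Independent variable: the year, repeated for every pixel -> also (P x N)
years = np.tile(np.arange(1995, 1995 + n_years), (pixels.shape[0], 1))

# Combined (P x N x 2) array: each pixel is an (N x 2) block of (year, NDVI)
# pairs, which is exactly the shape stats.linregress() accepts as a single argument
xy = np.stack([years, pixels], axis=-1)

def pixelwise_slope(subarray):
    """Map stats.linregress() over every (N x 2) pixel block in `subarray`;
    the slope is the first element of the returned tuple."""
    return np.array([stats.linregress(px)[0] for px in subarray])

# Divide the P pixels as evenly as possible among the processes *before*
# farming them out, so each process receives only its own chunk
procs = 4
bounds = np.linspace(0, xy.shape[0], procs + 1).astype(int)
work = [xy[bounds[i]:bounds[i + 1]] for i in range(procs)]
```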
If you do need an anonymous or dynamically created function, like a lambda function, you can still use such a pattern with concurrency in Python; you just need to use the partial() function as a wrapper. The ProcessPoolExecutor creates a context in which we can map a (globally defined, picklable) function over a subset of data. Because it creates a context, we invoke it using the with statement. After the processes terminate, their results are stored as a sequence which we can coerce to a list using the list() function. In our case, in particular, because we split up the \(P\) pixels into 4 sets (for 4 processes), we want to concatenate() them back together as a single array. And, ultimately, if we want to write the pixel-wise regression out as a raster file, we need to reshape it to a 2-dimensional, \(Y\times X\) raster, for \(Y\) rows and \(X\) columns. No discussion of concurrency would be complete without an analysis of the performance gain. If you're not already aware, Python's built-in timeit module is de rigueur for timing Python code; below, I use it to time our pixel-wise regression in both its serial and parallel (multiple-process) forms. $ python -m timeit -s "from my_regression_example import main" -n 3 -r 3 "main('~/Desktop/*.tiff')" As you can see, with 4 processes we're finishing the work in about one-third of the time it takes with only one process. You might have expected us to finish in a quarter of the time, but because of the overhead associated with spinning up 4 processes and collecting their results, we never quite get a \(1/P\) reduction in time for \(P\) processes. This speed-up is still quite an achievement, however. No matter how many processes we use, the regression results are, of course, the same; below is the image we created, with colors mapped to regression slope quintiles. DON'T use a lambda function as the function to map over the array; instead, use regular, globally defined Python functions, with or without functools.partial, as needed. DO chunk up the array into subarrays, passing each process only its respective subarray. In case my walkthrough above was overwhelming, a general pattern for parallel processing of raster array chunks is presented below, after the references. Breshears, C. 2009. The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications. O'Reilly Media Inc. Sebastopol, CA, U.S.A. Beazley, D. 2010. "Understanding the Python GIL." PyCon 2010. Atlanta, Georgia. Fortuner, B. 2017. "Intro to Threads and Processes in Python."
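The general pattern referred to just above appears to have been lost from this copy of the post, so here is a reconstruction sketched under the same assumptions (it is not the author's original code). Note that pixelwise_slope must be a regular, module-level function so that it can be pickled; functools.partial() can wrap it if extra fixed arguments are ever needed.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from scipy import stats

def pixelwise_slope(subarray):
    """Regression slope for each (N x 2) block of (year, value) pairs in a chunk."""
    return np.array([stats.linregress(px)[0] for px in subarray])

def parallel_slopes(xy, shape, procs=4):
    """xy is the (P x N x 2) array of (year, value) pairs; shape is (Y, X)."""
    # Chunk the pixels first, so each process only receives its own subarray
    bounds = np.linspace(0, xy.shape[0], procs + 1).astype(int)
    work = [xy[bounds[i]:bounds[i + 1]] for i in range(procs)]
    with ProcessPoolExecutor(max_workers=procs) as executor:
        results = executor.map(pixelwise_slope, work)  # one chunk per process
    # Coerce the results to a list, concatenate the chunks, and reshape to a raster
    return np.concatenate(list(results)).reshape(shape)
```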
CommonCrawl
During germband retraction in the early embryonic development of fruit fly embryos, the epithelial cells of the amnioserosa (AS) undergo a dramatic change in cell shape. The average cell aspect ratio reduces from $\alpha \sim 10$ to $\sim 1$ within three hours. We performed laser hole-drilling and confocal microscopy to investigate the mechanics of this process in live fly embryos. We find that the laser-induced recoil dynamics of AS cells during germband retraction (when $\alpha \sim 10$) is dramatically different from that during the later dorsal closure stage (when $\alpha \sim 1$). First, in the earliest stage of germband retraction, some AS cells actually shrink instead of expand in the first second after ablation. After this point, the cells do slowly expand. Second, in either phase, the cell speeds were much slower, in the range of $\pm 1\ \mu$m/s (compared with speeds in excess of $10\ \mu$m/s during dorsal closure). These results suggest a much smaller tensile (and in some cases, compressive) stress in the whole cell sheet in early germband retraction. As retraction proceeds towards dorsal closure, the stresses increase.
CommonCrawl
The parameters $a,b,c,d,e,f,g,h$ are known, $x_1,x_2,x_3,\dot x_1,\dot x_2,\dot x_3$ are known for $t = 0$, and $u$ is known $\forall t$. Also, $u$ is constant $\forall t$. So is there a way to find $x_1,x_2,x_3$ by using linear algebra? I just got started with Armadillo and C++, and then I realized that Armadillo does not have an ODE solver. But Armadillo is optimized for linear algebra.
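The displayed system of equations seems to have been lost from this copy of the question, but assuming it is linear with constant coefficients, i.e. it can be written in state-space form $\dot z = A z + B u$ with state $z = (x_1, x_2, x_3, \dot x_1, \dot x_2, \dot x_3)^T$ and with $A$, $B$ built from $a,\dots,h$, then for constant $u$ the exact solution is a matrix exponential, which is pure linear algebra. Below is a sketch of the idea in Python/SciPy; the same computation can be expressed in Armadillo, which provides a matrix exponential (expmat) in recent versions, though that is worth verifying against its documentation. The matrices here are placeholders, not the actual system.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def solve_constant_input(A, B, z0, u, t):
    """Exact solution of z' = A z + B u for constant u, via the augmented system
    d/dt [z; u] = [[A, B], [0, 0]] [z; u], so [z(t); u] = expm(M t) [z0; u]."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B          # the bottom rows stay zero because u is constant
    w0 = np.concatenate([np.asarray(z0, float), np.atleast_1d(u).astype(float)])
    return (expm(M * t) @ w0)[:n]

# Placeholder example: a 6x6 A and 6x1 B would be filled in from a, b, ..., h
A = np.zeros((6, 6))
B = np.zeros((6, 1))
z0 = np.zeros(6)           # x1, x2, x3, x1', x2', x3' at t = 0
print(solve_constant_input(A, B, z0, u=1.0, t=2.0))
```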
CommonCrawl
We have been in the process of developing the ARICH detector for identifying charged $\pi $ and $K$ mesons in a super-B factory experiment (Belle II) to be performed at the High Energy Accelerator Research Organization (KEK), Japan. The ARICH detector is a ring-imaging Cherenkov counter that uses silica aerogel as a radiator and hybrid avalanche photo-detectors as position-sensitive photo-sensors which are installed at the endcap of the Belle II spectrometer. The particle identification performance of the ARICH detector is basically measured by the Cherenkov angular resolution and the number of detected photoelectrons. At momenta below 4 GeV/$c$, to achieve high angular resolution, the refractive index of the aerogel must be approximately 1.05. A scheme for focusing the propagation pass of emitted Cherenkov photons on the photo-detectors is introduced by using multiple layers of aerogel tiles with different refractive indices. To increase the number of detected photoelectrons, the aerogel is expected to be highly transparent. A support module to install the aerogel tiles is comprised of a cylindrical shape with a diameter of approximately 2.3 m. It is important to reduce adjacent boundaries between the aerogel tiles where particles cannot be clearly identified. Accordingly, larger-sized, crack-free aerogel tiles are therefore preferred. Installing the tiles to the module by trimming them with a water jet cutter and avoiding optical degradation of the aerogel by moisture adsorption during long-term experiments should ultimately result in highly hydrophobic conditions. By 2013, our group established a method for producing, with high yield, large-area aerogel tiles (18 cm $\times $ 18 cm $\times $ 2 cm; approximately tripled) that fulfilled optical performance level requirements (transmission length ~40 mm at 400-nm wavelength; almost doubled). This enabled us to divide the module into 124 segments to install the trimmed aerogel tiles. Two aerogel tiles with refractive indices of 1.045 and 1.055 were installed to each segment (total of 248 tiles), thus resulting in a radiator thickness of 4 cm. By 2014, 450 aerogel tiles were mass-produced and optically characterized. After water jet machining, the optical parameters were re-investigated. Ultimately, selected aerogel tiles were successfully installed to the module by the end of 2016.
CommonCrawl
Abstract: We study the simple hypothesis testing problem for the drift coefficient for stochastic fractional heat equation driven by additive noise. We introduce the notion of asymptotically the most powerful test, and find explicit forms of such tests in two asymptotic regimes: large time asymptotics, and increasing number of Fourier modes. The proposed statistics are derived based on Maximum Likelihood Ratio. Additionally, we obtain a series of important technical results of independent interest: we find the cumulant generating function of the log-likelihood ratio; obtain sharp large deviation type results for $T\to\infty$ and $N\to\infty$.
CommonCrawl
Implementation of <a href = "https://doi.org/10.1016/S0166-218X(96)00010-8"> Parberry's algorithm</a> for <a href = "https://en.wikipedia.org/wiki/Knight%27s_tour">closed knight's tour problem</a>. This algorithm was firstly introduced in "Discrete Applied Mathematics" Volume 73, Issue 3, 21 March 1997, Pages 251-260. A knight's tour is a sequence of moves of a knight on a chessboard such that the knight visits every square only once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is closed, otherwise it is open. The knight's tour problem is the mathematical problem of finding a knight's tour. The time complexity of the algorithm is linear in the size of the board, i.e. it is equal to $O(n^2)$, where $n$ is one dimension of the board. The Parberry's algorithm finds CLOSED knight's tour for all boards with size $n \times n$ and $n \times n + 2$, where $n$ is even and $n \geq 6$. The knight's tour is said to be structured if it contains the following $8$ UNDIRECTED moves: Knight's tour on board of size $n \times m$ is called structured if it contains the following $8$ UNDIRECTED moves: 1). $(1, 0) \to (0, 2)$ - denoted as $1$ on the picture below. 2). $(2, 0) \to (0, 1)$ - denoted as $2$ on the picture below. 3). $(n - 3, 0) \to (n - 1, 1)$ - denoted as $3$ on the picture below. 4). $(n - 2, 0) \to (n - 1, 2)$ - denoted as $4$ on the picture below. 5). $(0, m - 3) \to (1, m - 1)$ - denoted as $5$ on the picture below. 6). $(0, m - 2) \to (2, m - 1)$ - denoted as $6$ on the picture below. 7). $(n - 3, m - 1) \to (n - 1, m - 2)$ - denoted as $7$ on the picture below. 8). $(n - 2, m - 1) \to (n - 1, m - 3)$ - denoted as $8$ on the picture below. ######################################### #*12*********************************34*# #2*************************************3# #1*************************************4# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #5*************************************7# #4*************************************6# #*54*********************************67*# ######################################### If you are confused with the formal definition of the structured knight's tour please refer to the illustration on the page 3 of the paper <a href = "https://doi.org/10.1016/S0166-218X(96)00010-8"> "An efficient algorithm for the Knight's tour problem" </a> by Ian Parberry. Algorithm description: Split the initial board on $4$ boards as evenly as possible. Solve the problem for these $4$ boards recursively. Delete the edges which contract the start and the finish cell of the tour on each board, so that on each on $4$ boards closed knight's tour became open knight's tour. Contract these $4$ boards by adding $4$ additional edges between the quadrants. Returns a closed knight's tour. n - width of the board. m - height of the board.
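As an illustration of the "structured" condition defined above (and not part of the implementation being documented here), the following sketch checks whether a tour, given as a collection of undirected knight moves between (x, y) cells, contains the 8 required corner moves for an n-by-m board.

```python
def is_structured(moves, n, m):
    """moves: iterable of undirected knight moves, each a pair of (x, y) cells.
    Returns True if all 8 corner moves listed above are present."""
    move_set = {frozenset(edge) for edge in moves}
    required = [
        ((1, 0), (0, 2)),                  # move 1
        ((2, 0), (0, 1)),                  # move 2
        ((n - 3, 0), (n - 1, 1)),          # move 3
        ((n - 2, 0), (n - 1, 2)),          # move 4
        ((0, m - 3), (1, m - 1)),          # move 5
        ((0, m - 2), (2, m - 1)),          # move 6
        ((n - 3, m - 1), (n - 1, m - 2)),  # move 7
        ((n - 2, m - 1), (n - 1, m - 3)),  # move 8
    ]
    return all(frozenset(edge) in move_set for edge in required)
```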
CommonCrawl
This is exercise 2.7.6 of the book Understanding Analysis by Abbott; I want a check of my proof, and to know whether additional information is needed to complete it. Your counter-example in the second part is good.
CommonCrawl
(1) The class wi(ξ) ∈ Hi(B; \(\mathbb Z\)2), i ∈ \(\mathbb N\), with w0(ξ) = 1 and wi(ξ) = 0, for all i > n. (2) (Naturality) If f : B → B' is covered by a map ξ → ξ', then wi(ξ) = f *wi(ξ'). wk(E ⊕ F) = ∑i=1k wi(E) ∪ wk – i(F) , or w(E ⊕ F) = w(E) ∪ w(F) . (4) (They are not all trivial) The class w1(γ11) ≠ 0, where γ11 is the open Möbius strip. $ Total Stiefel-Whitney class: For a bundle ξ, it is w(ξ):= 1 + w1(ξ) + w2(ξ) + ... ∈ H*(B; \(\mathbb Z\)2). @ References: in Milnor & Stasheff 74. * When the base space dimension is even, they coincide with the Euler class. * \(\mathbb R\)n-bundle: In general, all n classes may be non-zero; However, if ξ is a \(\mathbb R\)n-bundle with k nowhere-dependent cross-sections, then wi(ξ) = 0 for i = n – k + 1, ..., n. * For a tangent bundle TM, the w(TM)s are topological invariants of M. * For a trivial bundle ξ, wi(ξ) = 0, for all i > 0, and wi(ξ ⊕ η) = wi(η), for all i and η. * w(TM) = 0 iff M is orientable. * w(TM) ≠ 0 iff M has no spin structure. * ξ ~ η iff wi(ξ) = wi(η), for all i. * ξ ⊕ η is trivial iff w(η) can be expressed in terms of w(ξ), as w(η) = w(ξ)–1 [> see in particular the Whitney Duality theorem]. * w(T(\(\mathbb R\)Pn)) = (1+a)n+1, where a is a generator of H1(T(\(\mathbb R\)Pn); \(\mathbb Z\)2). * w1(T(\(\mathbb C\)Pn)) = 0, w2(T(\(\mathbb C\)Pn)) = 0 for n odd, and = x for n even, where x is a generator of H2(\(\mathbb C\)Pn; \(\mathbb Z\)2). * w(TSn) = 1 (i.e., same as for the trivial bundle). * w(γn1) = 1 + a, where a is a generator of H1(\(\mathbb R\)Pn; \(\mathbb Z\)2). Applications > s.a. characteristic classes. * And physics: The first two are used to establish whether a manifold admits a spin structure, and one can define spinor fields on it (this has been known since the 1960s); The third one is related to chirality; The (vanishing of the) highest Stiefel-Whitney class of a spacetime manifold is related to stable causality. @ And physics: Nielsen Flagga & Antonsen IJTP(02) [spin and chirality], IJTP(04) [causality]. where μM is the fundamental homology class of M. * And bordism: Two closed n-manifolds M and N are bordant if and only if all their Stiefel-Whitney numbers agree [@ Thom CMH(54)]. * And boundaries: All Stiefel-Whitney numbers of a manifold M vanish iff M is the boundary of some smooth compact manifold.
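* Worked example: As a quick check of the formula w(T(\(\mathbb R\)Pn)) = (1+a)n+1 above (all computations mod 2, with a^k = 0 for k > n), one finds w(T(\(\mathbb R\)P2)) = (1+a)^3 = 1 + 3a + 3a^2 + a^3 ≡ 1 + a + a^2, so w1 ≠ 0 and \(\mathbb R\)P2 is non-orientable; while w(T(\(\mathbb R\)P3)) = (1+a)^4 = 1 + 4a + 6a^2 + 4a^3 + a^4 ≡ 1 (since a^4 = 0), so all Stiefel-Whitney classes of \(\mathbb R\)P3 vanish, consistent with \(\mathbb R\)P3 being orientable, admitting a spin structure, and (by the boundary criterion above) having all Stiefel-Whitney numbers zero.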
CommonCrawl
The tags on convexity are convex-geometry ($\times$560), convex-analysis ($\times$ 266), convexity ($\times$ 420). Here the number is the current (2019/02/23) number of uses, and I ignore some more specific tags whose meaning is quite well-identified, such as convex-polytopes or convex-optimization. The tag convex-geometry currently (2019/02/23) has the usage guidance: A branch of geometry dealing with convex sets and functions. Polytopes, convex bodies, discrete geometry, linear programming, antimatroids, ... Also, convexity and convex-analysis have no usage guidance at this date. Adding tag guidance would require having a more precise idea of how to distribute the roles of these tags. But I see no coherent way to do so. I see two essential meanings for "convexity": convexity of functions (closer to convex analysis) and convexity of bodies (closer to convex geometry, which also encompasses convexity in metric spaces). The distinction is clear in a number of cases, but the two are also well-intertwined, since convex functions are usually defined on convex subsets, while convex sets can usually be defined as sub-level sets of convex functions. At this moment, a quick look at the actual use of convexity suggests a slight majority of questions on convex functions and a large minority of convex geometry questions, and it's regularly used in combination with one of the other two tags (40%? I made no serious count). (1) Keep these 3 tags. I guess this considers that convexity is the union of convex-analysis and convex-geometry. Improve usage guidance accordingly (how?) to avoid too random a use. Clarify when convexity should be used: when both others are applicable? when at least one is applicable? (2) Deprecate convexity and use the two others (one or both). (3) Make both convex-analysis and convex-geometry synonyms of convexity, considering that the difference is not significant enough and that a broader tag would be beneficial. Other options are welcome. The only thing I'm convinced of is that the status quo is not a good option. "convexity is perplexing since it's almost a meta-tag. My preference would be to have these questions remapped to convex-analysis, convex-geometry, and similar. However, this seems very implausible at the moment." The closest option above is the technically easy (2), solving the issue for future questions. If one wants to solve the issue for past questions, i.e. if one wants to completely delete convexity, one could imagine (if technically doable): remove convexity from every question with at least one of the other 2 tags; for those tagged convexity and none of the other 2, remove convexity and add the other two (this will block for a few remaining ones which already have 5 tags, but one can imagine these few being dealt with manually). But this can sound arbitrary, so just deprecating sounds enough, while users can then progressively, manually delete past occurrences of deprecated convexity. a) convex-geometry: its current tag guidance (which can be amended anyway, as any tag) is: "A branch of geometry dealing with convex sets and functions. Polytopes, convex bodies, discrete geometry, linear programming, antimatroids, ...". Add "Can be used in combination with related tags such as convex-polytopes, convex-analysis, etc". b) Add a tag guidance to convex-analysis. Here's a suggestion (which can be improved now or afterwards): "Convex functions, analysis on convex sets. Can be used in combination with related tags such as fa.functional-analysis, convex-geometry, convex-optimization, etc." 
c) Deprecate convexity. Namely, change its (currently empty) tag guidance to "Deprecated: do not use this tag. Instead, use convex-geometry, convex-analysis, or related tags". All this is easy to do, so if we agree let's go, and we can discuss past posts tagged convexity afterwards.
CommonCrawl
the J+K and by 90 days in the L+M bands. The IR echoes are sharp, while the H$\alpha$ echo is smeared. This together favours a bowl shaped toroidal geometry where the dust sublimation radius is defined by a bowl surface, which is virtually aligned with a single iso-delay surface, thus leading to the sharp IR echoes. The BLR clouds, however, are located inside the bowl and spread over a range of iso-delay surfaces, leading to a smeared echo.
CommonCrawl
Abstract: The Quantum Heisenberg Ferromagnet can be naturally reformulated in terms of interacting bosons (called spin waves or magnons) as an expansion in the inverse spin size. We calculate the first order interaction correction to the free energy, as an upper bound, in the limit where the spin size $S \to \infty$ and $\beta S$ is fixed ($\beta$ being the inverse temperature). Our result is valid in two and three spatial dimensions. We extrapolate our result to compare with Dyson's low-temperature expansion. While our first-order correction has the expected temperature dependence, in higher orders of perturbation theory cancellations are necessary.
CommonCrawl
In another paper (Butterfield 2011), one of us argued that emergence and reduction are compatible, and presented four examples illustrating both. The main purpose of this paper is to develop this position for the example of phase transitions. We take it that emergence involves behaviour that is novel compared with what is expected: often, what is expected from a theory of the system's microscopic constituents. We take reduction as deduction, aided by appropriate definitions. Then the main idea of our reconciliation of emergence and reduction is that one makes the deduction after taking a limit of an appropriate parameter $N$. Thus our first main claim will be that in some situations, one can deduce a novel behaviour, by taking a limit $N\to\infty$. Our main illustration of this will be Lee-Yang theory. But on the other hand, this does not show that the $N=\infty$ limit is physically real. For our second main claim will be that in such situations, there is a logically weaker, yet still vivid, novel behaviour that occurs before the limit, i.e. for finite $N$. And it is this weaker behaviour which is physically real. Our main illustration of this will be the renormalization group description of cross-over phenomena.
CommonCrawl
I have heard of the puzzle with 100 or 10 dwarves wearing a hat of a color, either red or blue, and standing in a straight line, and having to guess the color of their own hat - it's quite easy. The solution to save all dwarves, or all dwarves minus the first one, is fairly simple and relies on the fact that the number of dwarves is finite. So what would happen if you had an infinite number of dwarves standing in a straight line? Every dwarf wears a hat of color either red or blue and sees the color of the hats of all the dwarves standing in front of him. There is explicitly a first dwarf, who has to start guessing the color of his hat, and then the guessing proceeds with the next one in the line. If a dwarf guesses correctly, he is freed; if he guesses wrong, he is fried. Every dwarf can hear the voice of all other dwarves without a problem. Everybody is only allowed to speak out either the color red or blue, but no further information. Is there a possibility for all dwarves to be freed? Well, probably not. But is it at least possible that only finitely many of them are killed? EDIT: This is a mathematical puzzle. No loopholes. This puzzle has been discussed on the Mathematics Stack Exchange in this question: Prisoners Problem. Encode the colors into $0$ and $1$, and define the equivalence relation on $2^\Bbb N$, $\langle x_i\rangle\sim\langle y_i\rangle$ if and only if there is some $k$ such that for all $n\geq k$, $x_n=y_n$. Using the axiom of choice the prisoners pick a representative from each equivalence class. In his turn, the $n$-th prisoner looks for the representative of the class fitting the string of hats he sees ahead, assuming that all hats up to him are blue. Since all the prisoners follow the same representative to guess their own color, it is guaranteed that after finitely many deaths, the representative and the fashionable selection of hats by the warden will agree, and everyone else will survive. Note that since the Axiom of Choice is non-constructive, so is this solution, and hence it may not be practically useful for the dwarves in the question. There seems to be a fair bit of discussion on whether the Axiom of Choice is required. Well, it was shown by Hardin and Taylor [Hardin, Christopher S.; Taylor, Alan D. An introduction to infinite hat problems. Math. Intelligencer 30 (2008), no. 4, 20–25] that there being no winning strategy is consistent with ZF (plus other axioms which don't imply the Axiom of Choice). So in that sense the Axiom of Choice is necessary for a winning strategy. We define the notion of a class of infinite sequences of hats. All sequences in this class differ from each other by only a finite number of elements. There are an infinite number of classes, each containing an infinite number of elements. Why? As differences between classes are of infinite length, there is an infinite number of possible distances, which means there is an infinite number of classes. The choice of a representative can be easily defined as choosing the sequence where the differences appear at the beginning of the sequence and are lexicographically smaller. The actual sequence of hats worn by the dwarves is a member of exactly one such class. Each dwarf can see the infinite sequence of hats in front of it and can recognize the class. The dwarves can see the mismatches in front of them, but do not know if their own hat is not a mismatch. But, the difference is finite, and a modification of the standard rules for guessing a finite sequence can be applied.
The first dwarf says "red" if there is an odd number of hats that are different in the "sequence", compared to the "representative" (or "blue" if the mismatches are even). From there on, each dwarf has sufficient info to tell the colour of their hat. Note that the number of classes is infinite; however, since we are talking about infinitely many dwarves, I assume this is acceptable. Finally, since the first dwarf "guesses", the first dwarf has a 50% chance of survival, while all other dwarves survive. Oh, and because of the infinite number of classes, this can be directly applied to the problem with an infinite number of colours. Refer to the following link. One answer which is simple to understand is the high pitch/low pitch trick: each dwarf announces his hat color in either a high pitch or a low pitch, depending on whether the next dwarf's hat is red or blue. I was wondering if a dwarf with infinite range of sight can identify repeating pattern(s)? Due to the fact that the number of dwarfs is infinite, there must be a repeating pattern at some point (this is almost too complicated for me now!). With infinite sight, every dwarf can find his own position within the repeating pattern (which can be very, very, very long). Having said that, a dwarf can never be sure whether he really identified a pattern correctly, since the number of dwarfs is infinite and he would have to see all dwarfs at once in order to be sure - which is impossible due to "infinite number",... I think?
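To make the finite-case parity strategy referred to above concrete, here is a small simulation sketch (colors encoded as 0 = blue and 1 = red; the setup is mine, not from the thread): the first dwarf announces the parity of the red hats he sees, every later dwarf recovers his own color from that parity together with the answers already heard, and only the first dwarf's answer is a genuine 50/50 guess.

```python
import random

def run_line(hats):
    """hats[i] is the color (0 = blue, 1 = red) of dwarf i; dwarf i sees hats[i+1:]."""
    answers = [sum(hats[1:]) % 2]  # dwarf 0 announces the parity of the red hats ahead
    for i in range(1, len(hats)):
        seen_ahead = sum(hats[i + 1:]) % 2    # what dwarf i can see
        heard_behind = sum(answers[1:i]) % 2  # correct colors already announced
        answers.append((answers[0] + seen_ahead + heard_behind) % 2)
    return answers

hats = [random.randint(0, 1) for _ in range(20)]
answers = run_line(hats)
assert answers[1:] == hats[1:]  # everyone after the first dwarf is always correct
```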
CommonCrawl
The angle between two circles at the point of intersection is the angle between the tangents to the circles at that point contained in the overlapping area or lune. Prove that, for any two circles, the angles at both points of intersection are equal. Three circles intersect at the point $D$ and, in pairs, at the points $A,\ B$ and $C$ so that the arcs $AB$, $BC$ and $CA$ form a curvilinear triangle with interior angles $\alpha$, $\beta$ and $\gamma$ respectively. The diagrams show two possible cases. Prove that, for any three such circles, $\alpha + \beta + \gamma = \pi$. 'Where the angles of a triangle don't add up to 180 degrees'. On the surface of a sphere the straight lines (lines of shortest distance between points) are great circles, the angles in a triangle add up to more than $\pi$ and the area of a triangle depends on its angles not on the lengths of the sides. Angles - points, lines and parallel lines. Similarity and congruence. Creating and manipulating expressions and formulae. Circle properties and circle theorems. Investigations. Visualising. Pythagoras' theorem. Sine, cosine, tangent. 2D shapes and their properties. Regular polygons and circles.
CommonCrawl
Sze-Bi Hsu, Hal L. Smith, Xiaoqiang Zhao. Special issue dedicated to the memory of Paul Waltman. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): i-ii. doi: 10.3934/dcdsb.2016.21.2i. Ebraheem O. Alzahrani, Yang Kuang. Nutrient limitations as an explanation of Gompertzian tumor growth. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 357-372. doi: 10.3934/dcdsb.2016.21.357. Joydeb Bhattacharyya, Samares Pal. Microbial disease in coral reefs: An ecosystem in transition. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 373-398. doi: 10.3934/dcdsb.2016.21.373. Zhilan Feng, Qing Han, Zhipeng Qiu, Andrew N. Hill, John W. Glasser. Computation of $\mathcal R$ in age-structured epidemiological models with maternal and temporary immunity. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 399-415. doi: 10.3934/dcdsb.2016.21.399. Karl P. Hadeler. Stefan problem, traveling fronts, and epidemic spread. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 417-436. doi: 10.3934/dcdsb.2016.21.417. Sze-Bi Hsu, Bernold Fiedler, Hsiu-Hau Lin. Classification of potential flows under renormalization group transformation. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 437-446. doi: 10.3934/dcdsb.2016.21.437. Wen Jin, Horst R. Thieme. An extinction/persistence threshold for sexually reproducing populations: The cone spectral radius. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 447-470. doi: 10.3934/dcdsb.2016.21.447. Don A. Jones, Hal L. Smith, Horst R. Thieme. Spread of phage infection of bacteria in a petri dish. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 471-496. doi: 10.3934/dcdsb.2016.21.471. Le Li, Lihong Huang, Jianhong Wu. Cascade flocking with free-will. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 497-522. doi: 10.3934/dcdsb.2016.21.497. Chiu-Ju Lin. Competition of two phytoplankton species for light with wavelength. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 523-536. doi: 10.3934/dcdsb.2016.21.523. Zhihua Liu, Pierre Magal, Shigui Ruan. Oscillations in age-structured models of consumer-resource mutualisms. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 537-555. doi: 10.3934/dcdsb.2016.21.537. Jaume Llibre, Claudia Valls. On the analytic integrability of the Liénard analytic differential systems. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 557-573. doi: 10.3934/dcdsb.2016.21.557. Shujuan Lü, Hong Lu, Zhaosheng Feng. Stochastic dynamics of 2D fractional Ginzburg-Landau equation with multiplicative noise. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 575-590. doi: 10.3934/dcdsb.2016.21.575. Manjun Ma, Xiao-Qiang Zhao. Monostable waves and spreading speed for a reaction-diffusion model with seasonal succession. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 591-606. doi: 10.3934/dcdsb.2016.21.591. Linfeng Mei, Sze-Bi Hsu, Feng-Bin Wang. Growth of single phytoplankton species with internal storage in a water column. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 607-620. doi: 10.3934/dcdsb.2016.21.607. Hua Nie, Yuan Lou, Jianhua Wu. Competition between two similar species in the unstirred chemostat. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 621-639. doi: 10.3934/dcdsb.2016.21.621. Kunimochi Sakamoto. Destabilization threshold curves for diffusion systems with equal diffusivity under non-diagonal flux boundary conditions. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 641-654. 
doi: 10.3934/dcdsb.2016.21.641. Amy Veprauskas, J. M. Cushing. Evolutionary dynamics of a multi-trait semelparous model. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 655-676. doi: 10.3934/dcdsb.2016.21.655. Xiaoying Wang, Xingfu Zou. On a two-patch predator-prey model with adaptive habitancy of predators. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 677-697. doi: 10.3934/dcdsb.2016.21.677. Dongmei Xiao. Dynamics and bifurcations on a class of population model with seasonal constant-yield harvesting. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 699-719. doi: 10.3934/dcdsb.2016.21.699. Lifeng Chen, Jifa Jiang. Stochastic epidemic models driven by stochastic algorithms with constant step. Discrete & Continuous Dynamical Systems - B, 2016, 21(2): 721-736. doi: 10.3934/dcdsb.2016.21.721.
CommonCrawl
Abstract: Homotopy Type Theory may be seen as an internal language for the $\infty$-category of weak $\infty$-groupoids which in particular models the univalence axiom. Voevodsky proposes this language for weak $\infty$-groupoids as a new foundation for mathematics called the Univalent Foundations of Mathematics. It includes the sets as weak $\infty$-groupoids with contractible connected components, and thereby it includes (much of) the traditional set theoretical foundations as a special case. We thus wonder whether those `discrete' groupoids do in fact form a (predicative) topos. More generally, homotopy type theory is conjectured to be the internal language of `elementary' $\infty$-toposes. We prove that sets in homotopy type theory form a $\Pi W$-pretopos. This is similar to the fact that the $0$-truncation of an $\infty$-topos is a topos. We show that both a subobject classifier and a $0$-object classifier are available for the type theoretical universe of sets. However, both of these are large and moreover, the $0$-object classifier for sets is a function between $1$-types (i.e. groupoids) rather than between sets. Assuming an impredicative propositional resizing rule we may render the subobject classifier small and then we actually obtain a topos of sets.
CommonCrawl
Y. Fu and N. C. Turgay, Complete classification of biconservative hypersurfaces with diagonalizable shape operator in Minkowski 4-space (submitted).
N. C. Turgay, Some classifications of Lorentzian surfaces with finite type Gauss map in the Minkowski 4-space (submitted).
E. Ö. Canfes, N. C. Turgay, On the Gauss map of minimal Lorentzian surfaces in 4-dimensional semi-Euclidean spaces (submitted).
N. C. Turgay, A classification of biharmonic hypersurfaces in the Minkowski spaces of arbitrary dimension (submitted).
U. Dursun, N. C. Turgay, Space-like Surfaces in Minkowski Space $\mathbb E^4_1$ with Pointwise 1-Type Gauss Map (submitted).
N. C. Turgay, Some classifications of biharmonic Lorentzian hypersurfaces in Minkowski 5-space (accepted), Mediterr. J. Math., DOI: 10.1007/s00009-014-0491-1.
Y. H. Kim, N. C. Turgay, On the ruled surfaces with L1-pointwise 1-type Gauss Map (accepted), Kyungpook Math. J.
N. C. Turgay, On the quasi-minimal surfaces in the 4-dimensional de Sitter space with 1-type Gauss map (accepted), Sarajevo J. Math.
N. C. Turgay, H-hypersurfaces with 3 distinct principal curvatures in the Euclidean spaces (accepted, to print), Ann. Mat. Pura Appl., DOI: 10.1007/s10231-014-0445-z.
N. C. Turgay, On the marginally trapped surfaces in 4-dimensional space-times with finite type Gauss map, Gen. Relativ. Gravit. (2014) 46:1621, DOI: 10.1007/s10714-013-1621-y.
Y. H. Kim, N. C. Turgay, On the helicoidal surfaces in $\mathbb E^3$ with $L_1$-pointwise 1-type Gauss map, Bull. Korean Math. Soc. 50 (2013), no. 4, 1345-1356.
Y. H. Kim, N. C. Turgay, Surfaces in $\mathbb E^3$ with $L_1$-pointwise 1-type Gauss map, Bull. Korean Math. Soc. 50 (2013), no. 3, 935-949.
U. Dursun, N. C. Turgay, Minimal and Pseudo-Umbilical Rotational Surfaces in Euclidean Space $\mathbb E^4$, Mediterr. J. Math. 10 (2013), no. 1, 497-506.
U. Dursun and Emel Coşkun, Flat surfaces in the Minkowski space $\mathbb E^3_1$ with pointwise 1-type Gauss Map, Turk J. Math 36 (2012), 613-629.
U. Dursun, Hypersurfaces with pointwise 1-type Gauss map in Lorentz-Minkowski space, Proc. Est. Acad. Sci. 58 (2009), 146-161.
G. G. Arsan, E. O. Canfes, U. Dursun, On null 2-type submanifolds of the pseudo Euclidean space E^5_t, Int. Math. Forum 3 (2008), no. 13, 609-622.
U. Dursun, Hypersurfaces with pointwise 1-type Gauss map, Taiwanese J. Math. 11 (2007), no. 5, 1407-1416.
U. Dursun, Null 2-type submanifolds of the Euclidean space E5 with non-parallel mean curvature vector, J. Geom. 86 (2006), 73-80.
U. Dursun, Null 2-type space-like submanifolds of E5_t with normalized parallel mean curvature vector, Balkan J. Geom. Appl. 11 (2006), no. 2, 61-72.
E. O. Canfes, F. Ozdemir, On Generalized Recurrent Kahlerian Weyl Spaces, Int. Math. Forum, Vol. 6 (2011), no. 60, 2975-2983.
E. O. Canfes, Isotropic Weyl manifolds with semi-symmetric connection, Acta Mathematica Scientia 29B(1) (2009), 176-180.
E. O. Canfes, On Generalized Recurrent Weyl Spaces and Wong's conjecture, Differential Geometry and Dynamical Systems 8 (2006), 34-42.
CommonCrawl
Questions about the branch of combinatorics called graph theory (not to be used for questions concerning the graph of a function). This tag can be further specialized via using it in combination with more specialized tags such as extremal-graph-theory, spectral-graph-theory, algebraic-graph-theory, topological-graph-theory, random-graphs, graph-colorings and several others. Are there $2$ non-adjacent points in the icosahedron graph $G$ such that contracting them leaves the Hadwiger number unchanged? When can any graph $G$ be expressed as a union of $\alpha(G)$ complete graphs? Are all even regular undirected Cayley graphs of Class 1?
CommonCrawl
Exotic, neutron-rich proton-induced spallation products of 232Th and 238U obtained from the PS Booster ISOLDE facility have been investigated by $\gamma$-$\gamma$ and $\alpha$-$\gamma$ coincidence and spectrum-multiscaling measurements. A new method for the reduction of isobaric contamination made it possible to study the unknown region beyond 208Pb for the decay chain A = 217. A new isotope, 217Bi, with a half-life of $98.5 \pm 0.8$ s was discovered and its $\beta$-decay studied. For the first time, a half-life value of $1.53 \pm 0.03$ s for the $\alpha$-decay of 217Po was measured.
CommonCrawl
This question is about estimating cut-off scores on a multi-dimensional screening questionnaire to predict a binary endpoint, in the presence of correlated scales. I was asked about the interest of controlling for associated subscores when devising cut-off scores on each dimension of a measurement scale (personality traits) which might be used for alcoholism screening. That is, in this particular case, the person was not interested in adjusting on external covariates (predictors) -- which leads to (partial) area under covariate-adjusted ROC curve, e.g. (1-2) -- but essentially on other scores from the same questionnaire because they correlate one to each other (e.g. "impulsivity" with "sensation seeking"). It amounts to build an GLM which includes on the left-side the score of interest (for which we seek a cut-off) and another score computed from the same questionnaire, while on the right-hand side the outcome may be drinking status. To clarify (per @robin request), suppose we have $j=4$ scores, say $x_j$ (e.g., anxiety, impulsivity, neuroticism, sensation seeking), and we want to find a cut-off value $t_j$ (i.e. "positive case" if $x_j>t_j$, "negative case" otherwise) for each of them. We usually adjust for other risk factors like gender or age when devising such cut-off (using ROC curve analysis). Now, what about adjusting impulsivity (IMP) on gender, age, and sensation seeking (SS) since SS is known to correlate with IMP? In other words, we would have a cut-off value for IMP where effect of age, gender and anxiety level are removed. Do you have a more thorough understanding of this particular situation, with link to relevant papers when possible? Janes, H and Pepe, MS (2008). Adjusting for Covariates in Studies of Diagnostic, Screening, or Prognostic Markers: An Old Concept in a New Setting. American Journal of Epidemiology, 168(1): 89-97. Janes, H and Pepe, MS (2008). Accommodating Covariates in ROC Analysis. UW Biostatistics Working Paper Series, Paper 322. The way that you've envisioned the analysis is really not the way I would suggest you start out thinking about it. First of all it is easy to show that if cutoffs must be used, cutoffs are not applied on individual features but on the overall predicted probability. The optimal cutoff for a single covariate depends on all the levels of the other covariates; it cannot be constant. Secondly, ROC curves play no role in meeting the goal of making optimum decisions for an individual subject. To handle correlated scales there are many data reduction techniques that can help. One of them is a formal redundancy analysis where each predictor is nonlinearly predicted from all the other predictors, in turn. This is implemented in the redun function in the R Hmisc package. Variable clustering, principal component analysis, and factor analysis are other possibilities. But the main part of the analysis, in my view, should be building a good probability model (e.g., binary logistic model). The point of the Janes, Pepe article on covariate adjusted ROC curves is allowing a more flexible interpretation of the estimated ROC curve values. This is a method of stratifying ROC curves among specific groups in the population of interest. The estimated true positive fraction (TPF; eq. sensitivity) and true negative fraction (TNF; eq. specificity) are interpreted as "the probability of a correct screening outcome given the disease status is Y/N among individuals of the same [adjusted variable list]". 
At a glance, it sounds like what you're trying to do is improve your diagnostic test by incorporating more markers into your panel. A good background for understanding these methods a little better would be to read about the Cox proportional hazards model and to look at Pepe's book on "The Statistical Evaluation of Medical Tests for Classification and ...". You'll notice that screening reliability measures share many properties with a survival curve, thinking of the fitted score as a survival time. Just as the Cox model allows for stratification of the survival curve, they propose giving stratified reliability measures. The reason this matters might be justified in the context of a binary mixed effects model: suppose you're interested in predicting the risk of becoming a meth addict. SES has such an obvious dominating effect on this that it seems foolish to evaluate a diagnostic test, which might be based on personal behaviors, without somehow stratifying. This is because [just roll with this], even if a rich person showed manic and depressive symptoms, they'll probably never try meth. However, a poor person with the same psychological symptoms would show a much larger increase in risk (and a higher risk score). The crude analysis of risk would show very poor performance of your predictive model, because the same score differences are not equally reliable in the two groups. However, if you stratified (rich versus poor), you could have 100% sensitivity and specificity for the same diagnostic marker. The point of covariate adjustment is to make the comparison within groups that are homogeneous, accounting for differences in prevalence and for interactions in the risk model between distinct strata.
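To make the stratification idea concrete, here is a minimal sketch in Python with scikit-learn (the question asks about R, but the idea is language-agnostic; the simulated data, variable names and stratum definition are made up for illustration) of computing an ROC curve separately within covariate strata instead of on the pooled sample:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Simulated data: a risk score, a binary covariate defining two strata, and an outcome.
n = 2000
stratum = rng.integers(0, 2, size=n)               # hypothetical covariate (e.g. rich vs poor)
score = rng.normal(loc=2.0 * stratum, scale=1.0)   # score shifted by the stratum (confounding)
p = 1.0 / (1.0 + np.exp(-(score - 2.0 * stratum))) # outcome depends on score *within* stratum
y = rng.binomial(1, p)

# Pooled ("crude") ROC mixes the between-stratum shift into the apparent accuracy.
print("pooled AUC:", round(roc_auc_score(y, score), 3))

# Covariate-adjusted ROC by stratification: one curve (and one set of cut-offs) per stratum.
for s in (0, 1):
    mask = stratum == s
    auc = roc_auc_score(y[mask], score[mask])
    fpr, tpr, thresholds = roc_curve(y[mask], score[mask])
    print(f"stratum {s}: AUC = {auc:.3f}, {len(thresholds)} candidate cut-offs")
```

The per-stratum thresholds are where stratum-specific cut-offs would come from; the pooled AUC blends the between-stratum difference into the apparent accuracy, which is the point the answer above is making.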
CommonCrawl
The mesophase structures of three novel mesomorphic porphyrin derivatives were examined using polarized optical microscopy and microfocus synchrotron X-ray diffraction at various temperatures, using a beam with a 14 $\mu$m $\times$ 14 $\mu$m cross-section at the bending magnet beamline of Sector 20 at the Advanced Photon Source. The X-rays were diffracted from microscopic monodomains in thin glass cells while the optical textures were observed simultaneously. The results confirmed a hexagonal arrangement of discotic columns in the liquid crystalline phase. At a lower temperature, a highly ordered plastic crystal phase was obtained. The results of the microdiffraction experiment and the promising properties of these compounds as a carrier-transporting material will be presented. *Supported by the National Science Foundation grant DMR-0312793.
CommonCrawl
Abstract: Using the current precision electroweak data, we look for the minimal particle content which is necessary to add to the standard model in order to have a complete unification of gauge couplings and gravity at the weakly coupled heterotic string scale. We find that the addition of a vector-like fermion at an intermediate scale and a non-standard hypercharge normalization are in general sufficient to achieve this goal at two-loop level. Requiring the extra matter scale to be below the TeV scale, it is found that the addition of three vector-like fermion doublets with a mass around 700 GeV yields a perfect string-scale unification, provided that the affine levels are $(k_Y, k_2 ,k_3)=(13/3, 1, 2) $, as in the $SU(5) \times SU(5)$ string-GUT. Furthermore, if supersymmetry is broken at the unification scale, the Higgs mass is predicted in the range 125 GeV - 170 GeV, depending on the precise values of the top quark mass and $\tan \beta$ parameter.
CommonCrawl
We present a new fast dynamo model for galactic magnetic fields, which is based on the Parker-shearing instability and magnetic reconnection, in the spirit of the model proposed by Parker (1992). We introduce a new scenario of flux tube interactions and estimate the dynamo transport coefficient basing on simple geometrical arguments. The obtained expressions are equivalent to the formally derived helicity $\alpha_d$ and diffusivity $\eta_d$ in the first paper of this series. The model we propose predicts that the $\alpha$-effect in galactic discs has opposite sign with respect to that resulting directly from the sign of the Coriolis force. We estimate the rate of magnetic heating due to the reconnection of magnetic flux tubes, which plays an important role in our dynamo model. The corresponding luminosities of the diffuse X-ray emission are consistent with the ROSAT observations of nearby galaxies. The present considerations synthesize the ideas of Parker with our own results presented in the preceding papers (Hanasz & Lesch 1993, 1997; Hanasz 1997).
CommonCrawl
I understand that there are several characteristics of curly hair which differ from straight hair (such as an asymmetrical distribution of disulphide bonds in curly hair), but I am really struggling to understand the root cause of inherent curl on the macroscopic level. The most relevant part of the hair for our discussion is the cortex, which is made up of many axially aligned macrofibrils, or macroscopic keratin fibers, which sit in a filler made up essentially of lipids/proteins. Inside these macrofibrils are microfibrils. These are also aligned axially and embedded in a filler of lipids/proteins. Inside the microfibrils are the protofibrils, even smaller fibers which are twisted around each other like a yarn. These protofibrils are made up of 4 keratin chains which are essentially twisted together and connected by disulphide bonds and hydrogen bonds. While hydrogen bonds are easily broken, the disulphide bonds are not. In curled hair, the macrofibril isn't symmetrical, but I am unsure as to how this, or what other effects, actually lead to the curling. I had previously assumed that the reason for curling is that in an asymmetrical configuration of the hair (i.e. not cylindrical but more oval in cross section), some residual stresses are formed by the disulphide bonds between the keratin chains. If this is true, it would lead me to believe that it would cause increased buckling of the protofibrils. But, even with straight hair and more symmetrical disulphide bonds, you already have some curling and buckling of the protofibrils, yet the hair doesn't curl on a macroscopic level. As such, it doesn't seem like this buckling due to asymmetrical disulphide bonds would necessarily lead to a curled or organized structure on the macroscale, with the macrofibrils. This leads me to believe I am missing something. Nice question. I must say it took me many hours to get a satisfactory answer. Curling of hair can be justified on both the microscopic and macroscopic levels. Curly hair has to do with the chemical bonds in the protein that makes up hair - $\alpha$-keratin. Keratins, particularly $\alpha$-keratin, have long sequences of amino acids (often more than 300) which form a helical structure. Pairs of these helical structures then coil about each other in a left-hand coiled-coil structure. These are then attracted to another coiled coil, so two keratin helices will stick together. These four-chain structures then associate with another four-chain structure to form the hair filament. One of the amino acids which make up these chains is cysteine, which has a sulphur group that is able to make connections with other sulphur groups on other coiled coils. The more interactions a filament has with another filament, the more tightly coiled the coiled-coil becomes. Hence, curly hair has more interactions than straight hair. The process of `perming' hair makes the sulphur on the amino acid cysteine more accessible to make these connections. Finger and toe nails have very many bonds between filaments and are thus very hard. For example, see how hairs are curled artificially via perming. This change in location of bonds is what causes curling of the hair (look again at the first diagram and notice the orientation of disulfide bridges). Those vertical strands now get curled due to the change in these bonds. Here, the required mechanical tension for bending is provided artificially.
Although how the shape of the follicle determines the structure of the hair strand is probably not known, the major contributing factor is likely how the hair emerges from the scalp. When it emerges in a bent shape, it faces a lot of tension from the scalp, which causes it to bend in the shape of the follicle to form curls. Since straight-emerging strands don't face such forces, they do not lead to curled hairs. In this way, hair curls are a consequence of the large number of disulfide bridges between keratin molecules and of how the hair emerges from the scalp.
CommonCrawl
We know that $5\times 2=10$, but what about $(-5)\times(-2)$? Or $5\times(-2)$? To learn how to multiply positive and negative numbers, we will make a table and look for patterns. We will also begin with simple problems that we definitely know the answer to. Starting with answers we know, we will follow the pattern to discover the answers to unfamiliar questions. We know that 5 • 3 = 15, 5 • 2 = 10, and 5 • 1 = 5 because they are the "times tables" we learned in earlier grades. Let's put these multiplication facts into a table and look for patterns. In this table we can see that as the second number goes down by one, the product goes down by 5. Continuing the pattern past zero gives 5 • 0 = 0, 5 • (-1) = -5, 5 • (-2) = -10, and 5 • (-3) = -15. Since multiplication is commutative and (+5)(-3) equals -15, this means (-3)(+5) is also equal to -15. This means we can make a new table with (-3) as the leading factor. In this table we can see that as the second number goes down by one, the product goes up by 3. These two tables tell us all we need to know about multiplying positive and negative numbers. When multiplying two numbers, positive or negative, you just multiply the absolute values of the numbers. To determine the sign of the product: if both signs match, the answer is positive, and if the two signs are different, the answer is negative. How to multiply two numbers. Multiply the absolute values of the two numbers. If the two signs match, the product is positive. If the two signs do not match, the product is negative. Begin by multiplying the absolute values of the two numbers, 4 • 7 = 28. Since -4 and +7 have different signs, the product is negative. The product of (-4)(+7) is -28. Both numbers are positive, so the product is positive. (+3)(+9) = +27. Since both numbers are negative, we automatically know the product will be positive. Multiplying the absolute values, we get 6 • 9 = 54. The product is +54. Begin by multiplying the absolute values of the two numbers, 8 • 7 = 56. Since +8 and -7 have different signs, the product is negative. The product of (+8)(-7) is -56.
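If it helps to see the pattern continue automatically, here is a tiny sketch (not part of the original lesson) that prints the two tables described above, extended past zero:

```python
# Extend the "times table" pattern for 5 and for -3 from 3 down to -3.
for factor in (5, -3):
    for second in range(3, -4, -1):   # 3, 2, 1, 0, -1, -2, -3
        print(f"({factor:+d}) x ({second:+d}) = {factor * second:+d}")
    print()
```

Running it reproduces both tables and shows the sign rule emerging from the pattern rather than being imposed.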
CommonCrawl
Abstract: The paper is devoted to a certain class of doubly nonlinear higher-order anisotropic parabolic equations. Using Galerkin approximations it is proved that the first mixed problem with homogeneous Dirichlet boundary condition has a strong solution in the cylinder $D=(0,\infty)\times\Omega$, where $\Omega\subset\mathbb R^n$, $n\geq 3$, is an unbounded domain. When the initial function has compact support the highest possible rate of decay of this solution as $t\to \infty$ is found. An upper estimate characterizing the decay of the solution is established, which is close to the lower estimate if the domain is sufficiently 'narrow'. The same authors have previously obtained results of this type for second order anisotropic parabolic equations. Keywords: higher-order anisotropic equation, parabolic equation with double nonlinearity, existence of a solution, rate of decay of a solution.
CommonCrawl
Just like equations, inequality problems can require multiple steps to calculate the answer. The steps to solve multi-step inequality problems start out the same as the steps to solve multi-step equations. First, if parentheses are used in the problem, simplify those terms. Use the Distributive Property if needed. Next, collect the like terms by adding and subtracting. After that, solve for the variable using opposite operations. This all seems the same – but watch out. The difference is the inequality sign. Your last step in solving the problem is to attend to the inequality sign: whenever you multiply or divide both sides by a negative number, you must reverse the direction of the inequality sign. How to set up an inequality. Determine the number of workout sessions Rocky needs to reach his goal of losing $20$ pounds. Decide how many workout sessions Bruno needs to complete. Examine the effect of different diets on Rocky's weight loss. Describe the steps required to solve the inequality $-5x+(2\times5)\le -20$.
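As an illustration of that last step (this worked solution is ours, not part of the original exercise text), solving the inequality mentioned above shows exactly where the sign flip happens:

```latex
-5x + (2 \times 5) \le -20
\;\Longrightarrow\; -5x + 10 \le -20
\;\Longrightarrow\; -5x \le -30
\;\Longrightarrow\; x \ge 6 \quad \text{(dividing both sides by $-5$ reverses the inequality)}
```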
CommonCrawl
Abstract: The wandering subspace problem for an analytic norm-increasing $m$-isometry $T$ on a Hilbert space $\mathcal H$ asks whether every $T$-invariant subspace of $\mathcal H$ can be generated by a wandering subspace. An affirmative solution to this problem for $m=1$ is ascribed to Beurling-Lax-Halmos, while that for $m=2$ is due to Richter. In this paper, we capitalize on the idea of weighted shift on one-circuit directed graph to construct a family of analytic cyclic $3$-isometries, which do not admit the wandering subspace property and which are norm-increasing on the orthogonal complement of a one-dimensional space. Further, on this one dimensional space, their norms can be made arbitrarily close to $1$. We also show that if the wandering subspace property fails for an analytic norm-increasing $m$-isometry, then it fails miserably in the sense that the smallest $T$-invariant subspace generated by the wandering subspace is of infinite codimension.
CommonCrawl
We have received many questions over the years about the meaning of multiplication. When we multiply \(2\times 3\), what are we really doing? This can confuse not only students and their parents, but also teachers. The next couple posts will deal with various aspects of this question. Is the multiplier on the left or on the right? 456 is called the multiplicand and 10 is called the multiplier? (456 is the multiplier and 10 is the multiplicand)? Some parents insist the one written in the Encyclopaedia Britannica is the correct one; no matter how much I explain to them, they refuse to accept. Since there is what we know as commutative property of multiplication, why are they insisting all the textbooks and the teachers are committing a mistake? To my mind, it makes no difference at all which is which. In fact, today it is more common to call them both "factors" and not make such a distinction. I wouldn't fight over this, on either side. I recently saw a facsimile of a 19th-century text that defined the multiplier as the SMALLER of the two numbers, regardless of the order. So there's yet a third definition to use. As we'll eventually see, there is a sense in which I favor that last one, yet I disagree with all of them in another sense. I see 10 as the multiplier, because in the usual process of multiplying, I multiply each digit of 456 BY a digit of 10. I'm operating on the 456, using the 10. Even then, I'm not sure that means anything. But I suspect this is the reason for calling the smaller number the multiplier, because it is easier to use the smaller number on the bottom (or to add that many of the larger number). This is my key idea: the distinction really only applies to the application of multiplication, and possibly to the technique of multiplication, not to the meaning of the expression we write. which the summands are grouped and is written 3 x 5. Since they take the 3 as the number of 5's, it must be the multiplier. Again, the distinction lies only in the assumed meaning of the multiplication. When a multiplication problem is given abstractly, there is no such distinction, so we prefer to use the symmetrical term "factor." The Multiplicand is the number taken or multiplied. The multiplier must always be regarded as an abstract number. The multiplicand and product are like numbers, and may be either concrete or abstract. When one of the factors is concrete, the concrete number is the true multiplicand, but when it is the smaller number, it may be used abstractly as the multiplier. I love this! First, multiplicand and multiplier are defined only in terms of the application or (later) the algorithm. Second, where today people often describe \(2+2+2\) as "adding 2 to itself 3 times", this author sees as I do that there are only two additions involved there, so he says "as many times less one". (I prefer to say either that we are adding together 3 copies of the 2, or that we start with 0 and add 2, 3 times.) Third, he emphasizes that either number in the written form may be thought of as the multiplier; he distinguishes the two by the way it is read ("multiplied by" or "times"), as I suggested above and will be discussing later. Finally, he brings in the idea of concrete numbers (numbers with units), which we will look at below. In essence, this determines which number is the multiplier in the meaning of the problem; but he points out that we can carry out the multiplication abstractly, and take the most convenient number (the smaller one) as the multiplier. 
I don't know that the pedagogy of this presentation is the best, and the mathematics differs from what I am used to, but the handling of words is excellent! Is the multiplier on the top or on the bottom? This may seem like a ridiculous question, but the resources I've used to determine the answer contradict each other. I would like to know when multiplying factors vertically, which factor represents the number of groups? Is it the top factor or the bottom factor next to the multiplication symbol? Some resources show it as being read from the top down, and others from the bottom up. For example, when multiplying factors where the groups are nine and the amount in each is eight, would the nine (groups) be placed on the top or bottom next to the sign? It is simple when the direction is horizontal - 9x8. The factor for groups is the first one (on the left). I would sincerely appreciate your help as it makes a big difference if you have to draw out the multiplication facts. in order to multiply two numbers, we are not representing a physical problem on paper. Rather, we have already determined that in order to solve a problem, we must multiply these two numbers together; and we know certain techniques for doing so. One of the things we know is that the order doesn't affect the result; we can either multiply 835 by 24, or multiply 24 by 835. So when we come to doing the actual multiplication, we can choose whichever way is easier. In this case, I would always put the 24 on the bottom, because it is smaller. So the fact that 24 is on the bottom has absolutely nothing to do with whether I have 24 groups, or groups of 24. Even when written as 835 * 24 (or 24 * 835) the order does not really reflect the problem. As you'll see in one of the answers cited, I myself vacillate on how to interpret it. That's because, again, it makes no difference at all. It may be of use to choose one meaning when you first introduce multiplication to children, and stick with that yourself for the sake of consistency; but you must not insist that they learn that as the only correct order, but rather should emphasize from the start that the order is unimportant, and that this gives them the freedom to see the same multiplication in two ways, and to change views at will if it makes anything easier. As I already said above, in the vertical form, which is properly thought of a part of the algorithm for multiplying rather than as just stating a problem, it is appropriate to think of the bottom number as the multiplier as far as that algorithm is concerned; but this will typically be the smaller number, with no necessary connection to the original horizontal order, or to the application. Can the multiplier have units? Further Musings on "Multiplicand" and "Multiplier" I have seen Dr. Math's answer to the definition of multiplicand and multiplier, and would like to share my thoughts. I believe these designations become clearer when the objective is written or spoken, such as "What is your age times 4?" If your age is 4, then four is the multiplicand and the multiplier is 9. If on the other hand your age is 9, then the multiplicand is nine and the factor is 4. The distinction between multiplicand and multiplier is less clear with questions about the total of contributions if, to continue the example, four individuals each give nine dollars. In my opinion, the multiplicand is the number that has the same units as the product. 
For example, I would say that the multiplicand is the dollar amount, because it is a four dollar contribution that is magnified by the number of contributors. Carter may have read any of the discussions we looked at above (or the one we'll see next time); but the question is rightly on the application rather than the written form. And he is almost right; you will see echoes here of the 1897 book I quoted, "When one of the factors is concrete, the concrete number is the true multiplicand". A concrete number relates to objects, either as a mere count or in terms of units (a denominate number), as opposed to an abstract number (which is purely number). When the multiplier is abstract, the product has the same units as the multiplicand, as Carter says. So, if we calculate 4 times 9 years, the product is 36 years, and 4 is being used as the multiplier; the same is true if we multiply $9 by 4. But there's more to it than that. In any APPLICATION of multiplication, the multiplicand is the number to be multiplied (or scaled up, or repeated, or whatever), and the multiplier is the number by which it is to be multiplied (aka, the scale factor, repeat count, etc.). As you say, that is really unrelated to the way it happens to be written. The equation 9 x 4 = 36 need not represent "What is your age times 4?" It might just as well be what you'd write for "What is 9 times your age?" In either case, it is clear that the age (9 or 4 years, respectively) is the multiplicand, because it is the number you start with and modify. But it is written as the first number in one case, and the second in the other. As we said before, either the multiplier or the multiplicand may be written first, or said first. The distinction is entirely in the application. But what about the question of units? I agree with you that dollar amounts (unit prices) are multiplicands, while numbers or quantities are multipliers. But I would not consider it a good general principle to say that the multiplicand has the same units as the product. In the case of 9 pounds at 4 dollars per pound, the product is in dollars; no two numbers have the same unit! What you say would apply only when the multiplier is a dimensionless quantity (a mere number of times, or items). So in simple problems that require multiplication, it's fairly easy to identify the multiplier and multiplicand based on the application. The distinction, however, becomes less and less meaningful as you do more complex things. (For example, when calculating the force of gravity using F = GMm/d^2, which of the two masses is the multiplicand?) In the abstract, however, just given as A x B with no connection to an application, they are both just "factors" and play an equal role. White, the 1897 author, might not say that 4 dollars per pound is a multiplier, though it still seems somewhat reasonable to call it that. The reality is that at this point we are multiplying two denominate numbers, but verging on the abstract. The distinction hardly matters now; we are in the realm where the numbers being multiplied are mere factors. Why can I not intuitively resolve this equivalency of dollars and cents?! And that goes ahead and plots on an x-y graph just fine. But the fact that 1 dollar is equal to 100 cents in VALUE keeps messing me up. Surely, that would suggest that 1D = 100C? I know the discrepancy has to do with the value not being the number. But as cents HAVE a dollar value, is there any way to resolve this intuitively? 
It works fine in my head for things that don't have an equivalent value, but are still directly proportional -- things like ingredients. But how can 100 DOLLARS = 1 CENT?! 100 dollars equals 10,000 cents! Almost everything that I find online about ratios explains it just fine, but this little detail keeps confusing my intuition. You're confusing the idea of a unit with the idea of a variable representing a number of units. This is entirely different. Unit names are not variables. The tricky part comes when we use dimensional analysis to convert units, where we include the units in an expression, and in fact treat the units as if they were variables! This is what lies behind Dominic's equation \(100D = 1C\), where D and C are variables and units are not stated. Now, can we put these two perspectives together? The usual way we obtain this equation is to multiply D*(1 dollar) by 1, in the form of the fraction (100 cents)/(1 dollar), obtaining 100*D*(1 cent). Now we are back at the start: this is the equation relating amounts of dollars and cents. Again, note that I used units in my definitions of the variables D and C. For the sake of clarity, I recommend always doing this (and always writing out such definitions, rather than keeping them unstated). This has bothered me a few times, too, when I tried to do the math mechanically rather than thinking about what it means. So the main answer I can give is that you just have to be clear on what the parts mean. 8 x 9 is 8 times 9, and 9 multiplied by 8, and 9 over 8 (when written as a multiplication vertically). Example 1. Polynomials are the central object of study in algebra (non-abstract). We've arranged our notation to be able to write them easily (multiplication has higher precedence than addition). Further, if p(x) = 3x^2 + 2x + 1, most would agree that 3x^2 is more naturally thought of as 3 groups of x^2, or x^2 three times, rather than x^2 groups of 3, or 3, x^2 times. In this example, the multiplier is on the left, not the right. Example 2. If you're familiar with vector spaces, you'd agree that we do the same with vector notation. For a vector v in V over R, we write the scalar multiplier on the left, e.g. 3v rather than v3. Since there is generally no concept of adding 3, v times, 3v can only be interpreted as v 3 times, meaning the multiplier is on the left. I think this concept is complex because English is a subject-verb-object language, it would be natural to see the first number as the one being multiplied, and in fact with addition or division, the first number is the main object to which something is "done" (added, divided). Thanks for the comment, Vishaal. This is an interesting additional perspective. In both of your examples, there is a convention to write the numerical coefficient, or the scalar multiplier, on the left. This does support the suggestion that we most naturally think of a multiplication as "multiplier times multiplicand". On the other hand, it is not illegal to write "x^2 3" or "v3"; for evidence of the latter, see Wikipedia, which talks about both left and right scalar multiplication (which are equivalent when the scalar comes from a commutative ring). So it is only a convention, not a requirement, that the "multiplier" is written first. 
As for your grammatical point, I have pointed out that "times" in "a times b" serves not as a verb, but as an adjective or a preposition (my dictionary says the latter); and "multiplied by" is a participle, not a finite verb that would be followed by an object but a phrase followed by an agent that does the action. The latter reading definitely puts the multiplier on the right. I think the main grammatical point to make is ambiguity: we have different ways to read it in English. Considering only symbolic forms and ignoring language, consistency with addition, subtraction, and division supports thinking of the multiplier as the second number as you suggest. Ultimately, at best your thoughts suggest a tendency; but this relates only to the specific applications you are using, which was my main point anyway. It is not the symbols written, but the application for which they are used, that determines which is the multiplier.
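One point earlier in this exchange, the dollars-and-cents equation, can be compressed into a short derivation (a restatement of the argument above, with $D$ the number of dollars and $C$ the number of cents in the same amount of money):

```latex
\text{value} \;=\; D \cdot (1\ \text{dollar}) \;=\; C \cdot (1\ \text{cent}),
\qquad 1\ \text{dollar} = 100\ \text{cents}
\;\Longrightarrow\;
D \cdot 100 \cdot (1\ \text{cent}) \;=\; C \cdot (1\ \text{cent})
\;\Longrightarrow\;
C \;=\; 100\,D .
```

The units cancel like common factors, which is why treating them "as if they were variables" in dimensional analysis gives the right count even though unit names are not variables.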
CommonCrawl
We study the convergence of volume-normalized Betti numbers in Benjamini-Schramm convergent sequences of non-positively curved manifolds with finite volume. In particular, we show that if $X$ is an irreducible symmetric space of noncompact type, $X \neq \mathbb H^3$, and $(M_n)$ is any Benjamini-Schramm convergent sequence of finite volume $X$-manifolds, then the normalized Betti numbers $b_k(M_n)/vol(M_n)$ converge for all $k$. As a corollary, if $X$ has higher rank and $(M_n)$ is any sequence of distinct, finite volume $X$-manifolds, the normalized Betti numbers of $M_n$ converge to the $L^2$ Betti numbers of $X$. This extends our earlier work with Nikolov, Raimbault and Samet, where we proved the same convergence result for uniformly thick sequences of compact $X$-manifolds.
CommonCrawl
I'm doing some analysis on log returns and I notice that returns can exceed 100%. For example, if a security's close price is \$1 today and \$10 yesterday, the log return is $\ln(1) - \ln(10) \approx -230\%$! Under arithmetic computation, losses for a long position cannot exceed 100% (i.e. the initial investment). So how would I interpret a log return of -230%?
CommonCrawl
I debated whether to put this in SO or Mathematics, but I suspect the problem is more a mathematical question than a programming one. The reason it fails is that the line search does not return a step size $\alpha$ from the Wolfe condition search. But I believe that a step size should exist that satisfies the Wolfe conditions. So what is my misunderstanding, or what is minpack2.dcsrch doing during the Wolfe search that I don't understand?
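For reference, here is a minimal working sketch of a Wolfe line search succeeding on a simple quadratic, using `scipy.optimize.line_search`; this is a generic illustration, not a reconstruction of the asker's failing code:

```python
import numpy as np
from scipy.optimize import line_search

# Strictly convex quadratic: f(x) = 0.5 * x^T A x - b^T x
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 2.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

xk = np.array([5.0, 5.0])
pk = -grad(xk)  # steepest-descent direction

alpha, fc, gc, new_f, old_f, new_slope = line_search(f, grad, xk, pk)
print("step size alpha:", alpha)  # a finite alpha satisfying the (strong) Wolfe conditions
```

If `alpha` comes back as `None`, the search has hit its bracketing or iteration limits; in practice that is often a sign of a gradient that is inconsistent with the objective, or of poor scaling, rather than evidence that no Wolfe point exists.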
CommonCrawl
Algebras up to homotopy, or homotopy algebras, such as $A_\infty$, $C_\infty$, $E_\infty$ and $L_\infty$, play an increasingly important role in many areas of mathematics and physics, like topology, mathematical physics, and deformation theory. Let us mention a few examples of interest. Homotopy Lie algebras, also known as $L_\infty$-algebras, have been essential in Kontsevich's proof of deformation quantization. In algebraic topology, $C_\infty$ and $L_\infty$-algebras arise as structures governing the rational homotopy type of spaces, and $E_\infty$-(co)algebras naturally appear as an algebraic structure on infinite loop spaces and on the singular (co)chains of any space. Recent advances in this line include the use of higher homotopy algebras to model all sorts of interesting spaces, such as mapping spaces, sections of fiber bundles, configuration spaces, among many others. This mini-conference focuses on the most recent developments and applications of homotopy algebras in topology. In addition to the scheduled talks, 6 more lectures will be determined in the traditional Arbeitstagung way by open discussion during the meeting.
CommonCrawl
added _[references on geometric BRST quantization](http://ncatlab.org/nlab/show/geometric+quantization#ReferencesBRST)_.

Hey, I looked into your transparencies given at cafe. It is too high level for my understanding yet, and certainly interesting. However, I would disagree with the first line: that Isbell duality or whatever duality interchanges deformation quantization and geometric quantization. Deformation quantization makes sense both at algebra level and space/manifold/variety level; the side of Isbell duality is not essential. Also the geometric quantization could be described in terms of coordinate algebras if you like (and hence extended to noncommutative manifolds, for example, supermanifolds). But the idea of both is very different. The second is about quantization line bundle and its sections, and produces true Hilbert space. The deformation quantization rather has a formal parameter and is not producing a representation at a Hilbert space. This is why Connes was criticising deformation quantization very much. There is also recent paper by Witten which explains why geometric quantization gives much more when applicable.

> I would disagree with the first line: that Isbell duality or whatever duality interchanges deformation quantization and geometric quantization.

Oh, yes, I didn't mean that. The duality arrow is meant only to apply to the first line. Ah, but I see now that it is misleading. Maybe I should change it. The point that I felt like hinting at in that table is that it is not entirely a coincidence that there are two formalized concepts of quantization, because they correspond to the two different sides of reality: algebra, and geometry.

> The deformation quantization rather has a formal parameter and is not producing a representation at a Hilbert space.

There is "$C^\ast$-algebraic deformation quantization" which precisely studies the corrections to this deficiency.

> There is "$C^\ast$-algebraic deformation quantization" which precisely studies the corrections to this deficiency

Right, that is interesting. I forgot about this. On the other hand, I would like to remind you that it is not entirely true that there are only _two_ approaches to quantization, there are so many more major types of quantization, e.g. Weyl quantization, Fedosov quantization, path integral quantization etc.

> Heisenberg picture ↔ Schrödinger picture

This duality is in a way, and more closely (sometimes even literally, e.g. in the case of free theory) achieved via the Segal-Bargmann transform between the coherent state quantization (coherent states evolve just like the classical equations for operators) and geometric quantization for the polarization which corresponds to the choice of L^2(configuration space).

I would still think that the deformation quantization is NOT necessarily, and not even generically, about algebra -- the infinitesimal deformations apply to manifolds as well, and the main case of Kontsevich quantization PRECISELY does not work so well for algebraic Poisson varieties, while it works for smooth Poisson manifolds. So Kontsevich, in the case of varieties, a posteriori suggested a different framework which I think he called the semialgebraic deformation quantization.

Also there is a thought -- the coherent states, which come out from geometric quantization (as evaluation functionals) are used to define the Berezin symbols of operators. Berezin quantization is contentwise so much closer to geometric quantization than to the deformation quantization (to latter almost no connection in my knowledge), while it is about symbols of operators, hence closer to Heisenberg picture. I'd think that the geometry of various quantizations is not that closely parallel to time/space and function/space dualities.

P.S. in fact, many of these quantizations can be considered as ordering rules which make isomorphism between noncommutative and commutative. Sometimes, the ordering is determined by polarization, or some other line bundle tricks (like Fedosov). The isomorphism is in the definition of the star product. Weyl quantization is about specific ordering prescription which is "symmetric". This is an interesting viewpoint.

P.S. II Of course, there is a short memo on deformation quantization -- it is formal. So it is just a formal precursor to quantization, just like formal group or formal scheme is to an algebraic/Lie group or algebraic scheme. If we extend to C-star algebras, this is interesting, though still, being deformational it restricts in a way which geometric does not.

> I would still think that the deformation quantization is NOT necessarily, and not even generically, about algebra

But it is manifestly about deforming algebras. I am not sure I understand what you have in mind. In deformation quantization one takes the algebras of observables as the basic datum of a physical system and discusses how the commutative algebra of classical observables is deformed to a non-commutative algebra of quantum observables.

> But it is manifestly about deforming algebras.

Yes, you are right. I meant the following. Deforming Poisson structure on manifold, keeps the manifolds and changes the bracket. So you are right in this sense. On the other hand, there are many examples where the dual picture is concerned. Like the deformation quantization of Lie groups leading to quantum groups. Poisson structure there corresponds to the classical r-matrix. Now if we take the point of view of universal enveloping algebra then we deform the coproduct while the algebra is isomorphic, and if take the dual point of view of function algebra then we deform the product while the coproduct is undeformed. Both is deformation quantization. Regarding that it is affine it looks like both is done (co)algebraically, but if we were defining homogeneous spaces, then the situation on one side can be more geometric while both are deformation quantization. There are also sheaf and stack versions implied as well. _By no dualization whatsoever you will get from this geometric quantization._ I hope you agree.

> _By no dualization whatsoever you will get from this geometric quantization._ I hope you agree.

I did agree with this in #4. But now that I thought about it, I am not so sure anymore if I don't want to disagree after all :-) So if you look at the [[geometric quantization of symplectic groupoids]] and [[C-star algebraic deformation quantization]], at least, you see that both quantization procedures do end up precisely dual to each other: one constructs an actual centrally extended groupoid (geometric), the other precisely its algebra of functions (algebraic). But I agree that even if there is this duality at the horizon, it is far from being formalized. Nevertheless, I think it is very useful to make it explicit, for it contains a whole bunch of other aspects. To reflect this, I have now created a table: _[[Isbell duality - table]]_ Check it out and let me know what you think.

> Isbell duality - table

What might a 'higher deformation quantization' be?

You know, I was wondering about the same question when writing that entry. The [[Poisson n-algebras]] appearing for instance in the context of quantization via [[factorization algebras]] are certainly an aspect of this. But I am not aware that an attempt at a more comprehensive approach exists yet. Well, looking at [[n-plectic geometry]] there are various evident definitions to make, concerning deformation quantization of Poisson bracket Lie $n$-algebras. But I don't think to date anyone has seriously thought about this. On the other hand, if we strictly take the view of $C^\ast$-algebraic deformation quantization in the sense of looking at algebras of functions on the [[geometric quantization of symplectic groupoids]] then one could say the case is better understood: just consider the $\infty$-algebras of functions on [[geometric quantization of symplectic infinity-groupoids]]. Anyway, all this needs to be studied more.

> So if you look at the geometric quantization of symplectic groupoids and C-star algebraic deformation quantization, at least, you see that both quantization procedures do end up precisely dual to each other

It seems we are converging to the essential point: if you add specifics to Rieffel strict quantization, then you can make the picture rich enough to enable true Hilbert space and hence you are in the setup of geometric quantization essentially. In any case, it is a matter of terminology to some extent. You are right that deformation quantization is usually on the dual side, though I find it more essential that the construction and requirements are stronger to enable true Hilbert space, unlike the generic deformation quantization which is weaker and only formal. I'll dig the Rieffel reference soon.

Re #8

> both quantization procedures do end up precisely dual to each other: one constructs an actual centrally extended groupoid (geometric) the other precisely its algebra of functions (algebraic).

Is there a name for the algebraic dual of forming a central extension?

Not sure what the right word would be, but the general mechanism here is one known from many other situations (such as $C^\ast$-algebraic K-theory, for instance): one deforms an algebra of functions on some space by replacing functions by sections of a non-trivial bundle over the space. In the present case that non-trivial bundle happens to be a line 2-bundle over a groupoid (this is the central extension of the groupoid), but otherwise it's the same kind of phenomenon.

brief paragraph _[geometric quantization -- Example -- 2-sphere](http://ncatlab.org/nlab/show/geometric+quantization#ExampleThe2Sphere)_.

I have finally started to work on filling the section on the actual geometric quantization step:

* _[Geometric quantization -- The space of quantum states](http://ncatlab.org/nlab/show/geometric+quantization#GeometricQuantizationProper)_.

After a brief survey, the discussion proceeds in four steps:

* traditional formulation by polarization and metaplectic correction of prequantum bundle;
* formulation in complex geometry as Euler characteristic of abelian sheaf cohomology of metaplectically corrected prequantum bundle;
* formulation as the Dolbeault-Dirac index of the prequantum bundle with the metaplectic correction now understood as nothing but the Spin-structure;
* finally the full truth: formulation as the spin^c index of the prequantum bundle.

This is still just a start though.

Added pointer to [Souriau 74](http://ncatlab.org/nlab/show/geometric+quantization#Souriau74) and added the flow chart diagram from that article to _[geometric quantization -- History and variants](http://ncatlab.org/nlab/show/geometric+quantization#HistoryAndVariants)_. This needs more accompanying text, but I have to run now.

I have expanded the section _[geometric quantization -- Examples -- Schrödinger representation](https://ncatlab.org/nlab/show/geometric+quantization#ExamplesSchroedingerRepresentation)_, making all signs, all conventions and all identifications fully explicit.

The page [geometric quantization](https://ncatlab.org/nlab/show/geometric+quantization) has an error that prevents it from displaying. I don't know how to fix this.

Thanks for the alert. I tried re-saving the entry after adding a trivial whitespace somewhere, but the problem remains. Just saw a similar problem with the page _[[schreiber:thesis Wellen]]_, but there re-saving did solve the issue. Will contact Richard.

Richard kindly fixed it ([here](https://nforum.ncatlab.org/discussion/8898/some-bug/?Focus=74171#Comment_74171)). Thanks to both of you!
CommonCrawl
This is my first time asking a question here... forgive me if my question is not clear enough. This is analogous to comparing the coefficients of two parametric models (as in here and here). For parametric models, "test whether two models behave in a similar way" can be defined as testing whether the coefficients of the models are statistically indistinguishable (I do not need to statistically prove that the two models are indistinguishable; I only want to show that I cannot reject the null hypothesis that the two models are the same). However, I want to do something similar for a non-parametric model such as a GAM with splines, in which case I cannot compare coefficients (or can I?). I do not intend to compare goodness of fit, so doing anova(fit1, fit2) or comparing BIC is not appropriate. Is there any direct way (preferably with R) to do this? For example, calculate the probability of $y_i$ given $x_i$ under model 1?
CommonCrawl
Abstract: Two triangle meshes are conformally equivalent if for any pair of incident triangles the absolute values of the corresponding cross-ratios of the four vertices agree. Such a pair can be considered as preimage and image of a discrete conformal map. In this article we study discrete conformal maps which are defined on parts of a triangular lattice $T$ with strictly acute angles. That is, $T$ is an infinite triangulation of the plane with congruent strictly acute triangles. A smooth conformal map $f$ can be approximated on a compact subset by such discrete conformal maps $f^\varepsilon$, defined on a part of $\varepsilon T$ for $\varepsilon>0$ small enough, see [U. Bücking, Approximation of conformal mappings using conformally equivalent triangular lattices, in "Advances in Discrete Differential Geometry" (A.I. Bobenko ed.), Springer (2016), 133--149]. We improve this result and show that the convergence is in fact in $C^\infty$. Furthermore, we describe how the cross-ratios of the four vertices for pairs of incident triangles are related to the Schwarzian derivative of $f$.
CommonCrawl
My solution: if $|x-1|=x-1$, then $x-1 \ge 0$, hence $x \ge 1$. For $x \ge 1$ your equation reads as follows: $x-1=x-1$.

Your answer is right, apart from the square bracket pointed out in @ElevenEleven's comment ($\infty$ can't be the upper end of a closed interval). Note that $|t|=t$ holds exactly when $t\ge 0$, which gives you the answer without needing to consider different cases.

Nicely done! Ultimately, you don't need to consider the second case, since $\lvert t\rvert\neq t$ for $t<0,$ but it doesn't hurt to be thorough. The only thing to change about your conclusion is from $[1,\infty]$ to $[1,\infty),$ as "$\infty$" is a notational convention and not a real number.
CommonCrawl
You would like to write a list of positive integers $1,2,3,\ldots$ using your computer. However, you can press each key $0$–$9$ at most $n$ times during the process. What is the last number you can write? The only input line contains the value of $n$. Print the last number you can write. Explanation: You can write the numbers $1,2,\ldots,12$. This requires that you press key $1$ five times, so you cannot write the number $13$.
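A possible way to solve this (a hedged sketch only — the exact constraints on $n$ are not stated above, so the 64-bit assumption and the search bound are guesses): the answer is the largest $k$ such that writing $1,2,\ldots,k$ uses every digit at most $n$ times, and the number of occurrences of each digit in $1..k$ is monotone in $k$, so a binary search over $k$ with a standard digit-counting routine works.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Number of times digit d appears when writing 1, 2, ..., k.
ll countDigit(ll k, int d) {
    ll total = 0;
    for (ll p = 1; p <= k; p *= 10) {
        ll high = k / (p * 10), cur = (k / p) % 10, low = k % p;
        if (d == 0) {
            // Leading zeros are never written, hence the reduced count.
            if (cur > 0) total += high * p;
            else         total += (high - 1) * p + low + 1;
        } else {
            if (cur > d)       total += (high + 1) * p;
            else if (cur == d) total += high * p + low + 1;
            else               total += high * p;
        }
        if (p > k / 10) break;              // avoid overflow of p *= 10
    }
    return total;
}

bool feasible(ll k, ll n) {                 // can 1..k be written with <= n presses per key?
    for (int d = 0; d <= 9; ++d)
        if (countDigit(k, d) > n) return false;
    return true;
}

int main() {
    ll n;
    cin >> n;
    ll lo = 0, hi = 2000000000000000000LL;  // assumed safe upper bound for the answer
    while (lo < hi) {                       // binary search for the last feasible k
        ll mid = lo + (hi - lo + 1) / 2;
        if (feasible(mid, n)) lo = mid; else hi = mid - 1;
    }
    cout << lo << "\n";
}
```

For the sample, `n = 5` gives 12, matching the explanation above (the binding constraint is key 1, pressed five times for 1..12 and six times for 1..13).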
CommonCrawl
At some point you might come upon some operation that you wish existed in Tensorflow or PyTorch, but for which no GPU implementation is available. It might even be something that is easily parallelizable on a GPU. So why not write your own CUDA kernel and integrate it into your main program? Let us start with the CUDA kernel itself since it will be the same in both implementations.

Kernel: the name given to a function run by CUDA on the GPU.
Thread: CUDA will run many threads in parallel on the GPU. Each thread executes the kernel.
Blocks: threads are grouped into blocks, a programming abstraction. Currently a thread block can contain up to 1024 threads.

When should we write a custom CUDA kernel? Data size: you should make sure you will launch a lot of threads and blocks in order to beat the CUDA overhead. Otherwise, you might not see a great improvement between a CPU and a GPU version. Parallelizable: you should be able to pinpoint a single or double for loop whose iterations are independent of each other. The only tricky part is to figure out how to balance the load: how many threads and blocks should be launched, and what portion of the data is going to be processed by each of these.

The inputs are the original multi-channel 3D image together with: a list of crop center coordinates in the original image, of shape ($M$, 3), where $M$ is the total number of crops; and the size $D$ of a crop (we require for simplicity that all crops have the same size). The output should be a list of the crops with shape ($M$, $D$, $D$, $D$, $C$).

In our case, a first naive approach would be to assign to each thread a single voxel to copy from the input data array to the output crop array. We launch as many blocks as we have crops (i.e. $M$ blocks), and the threads inside a block go over all the voxels of a single crop (i.e. $D^3 \times C$ voxels per block). Remember that the number of threads per block is fixed, so a single thread might have to work on several voxels, not just one.

The keyword __global__ signals that the function will be compiled by nvcc (the NVIDIA compiler, a wrapper around gcc) and run on the GPU. In our case we will need a pointer to the (flattened) big image, an array of (flattened) crop center coordinates, as well as the image size, the number of channels, the crop size, and the total number of crops. The output will be stored in the crops_ptr array.

Since the crop center coordinates array was flattened, we retrieve the current crop center coordinates from the block index blockIdx.x. We specified one block per crop, hence the block index corresponds exactly to the crop index. We then have to process all the pixels of the output array: each thread loops over them with a stride equal to the block dimension (i.e. the total number of threads working on this crop). We reconstruct the coordinates of the current pixel inside the crop from the loop index. Note that this and all the following conversions between index and coordinates depend on how the array was flattened. We retrieve the equivalent coordinates in the original image, and finally we copy the pixel from the original image array to the final output array.

After profiling this CUDA kernel we found out that it wasn't that much faster than a Numpy version running on the CPU. The reason is that the number of crops (estimated around 100) was not high enough to harness the GPU's power, which relies on high parallelization. A second, more refined approach would be to set the number of blocks to $M \times D$: each block will process a 2D slice of a single crop, i.e.
$D^2 \times C$ voxels. The kernel declaration does not change. This was a short introduction to CUDA kernels. This custom one is able to crop a big image into smaller pieces. Now you probably want to compile it in order to integrate it into your main Tensorflow or PyTorch program, so continue with the next part of the tutorial.
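To make the walkthrough above concrete, here is a rough, hypothetical sketch of the naive per-crop kernel (one block per crop, threads striding over that crop's voxels). This is not the post's original code: apart from crops_ptr, the parameter names, the (x, y, z, c) flattening order and the centring convention are assumptions, and border handling is simple zero padding.

```cpp
// Hypothetical CUDA C++ sketch of the naive cropping kernel described above.
__global__ void crop_kernel(const float* img_ptr,     // flattened (W, H, Z, C) image
                            const int*   centers_ptr, // flattened (M, 3) crop centers
                            float*       crops_ptr,   // flattened (M, D, D, D, C) output
                            int W, int H, int Z, int C, int D, int M)
{
    int crop = blockIdx.x;                  // one block per crop
    if (crop >= M) return;

    // Centre of this crop in the original image.
    int cx = centers_ptr[3 * crop + 0];
    int cy = centers_ptr[3 * crop + 1];
    int cz = centers_ptr[3 * crop + 2];

    int voxels = D * D * D * C;             // number of values in one crop
    for (int i = threadIdx.x; i < voxels; i += blockDim.x) {
        // Unflatten the loop index into crop coordinates (c fastest, then z, y, x).
        int c =  i % C;
        int z = (i / C) % D;
        int y = (i / (C * D)) % D;
        int x =  i / (C * D * D);

        // Equivalent coordinates in the original image (crop centred on the point).
        int gx = cx + x - D / 2;
        int gy = cy + y - D / 2;
        int gz = cz + z - D / 2;

        long long dst = (long long)crop * voxels + i;
        if (gx < 0 || gx >= W || gy < 0 || gy >= H || gz < 0 || gz >= Z) {
            crops_ptr[dst] = 0.0f;          // simple zero padding at the borders
        } else {
            long long src = (((long long)gx * H + gy) * Z + gz) * C + c;
            crops_ptr[dst] = img_ptr[src];  // copy one value from image to crop
        }
    }
}
```

A launch for this first strategy would then look like `crop_kernel<<<M, 256>>>(...)`; the refined variant would instead use `M * D` blocks, each handling one 2D slice.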
CommonCrawl
Abstract: Subway shuffle is an addicting puzzle game created by Bob Hearn. It is played on a graph with colored edges that represent subway lines; colored tokens that represent subway cars are placed on the nodes of the graph. A token can be moved from its current node to an empty one, but only if the two nodes are connected with an edge of the same color as the token. The aim of the game is to move a special token to its final target position. We prove that deciding if the game has a solution is PSPACE-complete even when the game graph is planar. In recent years the study of the complexity of puzzles and (video)games has gained much attention (see for example the survey). Most games can be generalized to arbitrary instance size and transformed to decision problems in which the question is usually: "Given an instance of size $m \times n$ of the game X, does it have a solution?". It turns out that most static puzzles (sudoku, kakuro, binary puzzle, light up, …) are NP-complete and that most dynamic puzzles (sokoban, rush hour, atomix, …) are PSPACE-complete. One of the puzzles for which the complexity was still unknown is Subway Shuffle; we proved, as conjectured in , that its rules are rich enough to be PSPACE-complete. The proof uses the framework of the nondeterministic constraint logic model of computation (, ): given a planar constraint graph in normal form, it is PSPACE-complete to find a sequence of edge reversals (moves) that keep the constraint graph valid, ending in the reversal of a special edge $e^*$. We build an equivalent subway shuffle board with EDGE, AND and LATCH gadgets that has a solution (i.e. a sequence of moves that shifts the special token to its final target position) if and only if there is a sequence of moves that reverses $e^*$.
CommonCrawl
Abstract: We study the full stable pair theory --- with descendents --- of the Calabi-Yau 3-fold $X=K_S$, where $S$ is a surface with a smooth canonical divisor $C$. By both $\mathbb C^*$-localisation and cosection localisation we reduce to stable pairs supported on thickenings of $C$ indexed by partitions. We show that only strict partitions contribute, and give a complete calculation for length-1 partitions. The result is a surprisingly simple closed product formula for these "vertical" thickenings. This gives all contributions for the curve classes $[C]$ and $2[C]$ (and those which are not an integer multiple of the canonical class). Here the result verifies, via the descendent-MNOP correspondence, a conjecture of Maulik-Pandharipande, as well as various results about the Gromov-Witten theory of $S$ and spin Hurwitz numbers.
CommonCrawl
The suggestion that a number/sequence that does not terminate is the same as (in fact is) a number/sequence that terminates is very obviously untrue in general however you list them unless you have a proof to the contrary. Zylo's process has no irrational numbers in it. It has few of the rationals in fact because every number in his list terminates. There exist irrational numbers in the interval (1, 2). One such is the square root of 2. Take its reciprocal and put it in the list. In the interval (2, 3), there is the irrational number 1 plus the square root of 2. Take its reciprocal and put it into the list. In the interval, (3, 4) .... That is a conceptually non-terminating process for generating a countably infinite list of unduplicated irrationals. So it is certainly conceptually possible to set up a list containing a countably infinite number of irrationals in the interval (0, 1). I just gave a constructive proof. But I did not show that the list contains every irrational in the interval (0, 1). Dan may also be saying that it is indeterminate whether treating the process as non-terminating means that the resulting non-terminating list of sums with non-terminating summands will include $\pi - 3.$ But let's assume that it will (though I agree with you that Zylo has not proved it will). It still does not do what Zylo thinks it does. He has not shown that his non-terminating list contains all the reals in (0, 1). He is never going to show that because of Cantor's diagonal argument. Any process such as mine shown above that generates a denumerably infinite list of irrationals in the interval (0, 1) does not generate them all. Where Zylo gets annoying is not that he always fails to describe non-terminating processes. It is that when he does describe a non-terminating process, he just assumes the final step, the one linking each member of the list to the natural numbers in a defined way. Which natural number is $\pi - 3$ to correspond to? He can't just say that is in the list. He has to say where. I do not think he really understands the rules of the game. EDIT While I was writing this Zylo put up a new post. Here he does seem to be advocating a purely finitistic approach. In that case he should not be dealing with Cantor's argument at all, but saying that mathematics should deal with what is physically observable, which is always a finite number of rationals, and putting that on a solid logical foundation. Last edited by JeffM1; July 18th, 2016 at 01:10 PM. I doubt even Zylo believes that any finite list contains all the reals in the interval [0, 1). While I was writing this Zylo put up a new post. Here he does seem to be advocating a purely finitistic approach. In that case he should not be dealing with Cantor's argument at all, but saying that mathematics should deal with what is physically observable, which is always a finite number of rationals, and putting that on a solid logical foundation. You are vastly overestimating Zylo's abilities and understanding. He defines finite lists of terminating numbers of $n$ digits. He then attempts, wrongly, to let $n$ become infinite while remaining a natural number. This is because he doesn't understand the difference between potential and actual infinities and he doesn't know what a natural number is. He falsely claims then to have an infinite list of non-terminating numbers. He then claims to unify his finite and infinite lists of terminating and non-terminating numbers to get results that are nonsense. 
He doesn't understand any of the reasons why his guesswork is nonsense, instead believing that assumption is a rigorous form of proof. He's too stubborn to attempt to learn any of this stuff properly. What he is doing has nothing to do with finitism. He claims to use infinite structures. Real finitists don't bother with Cantor because he uses an axiom that they don't wish to accept. Zylo just doesn't know what he is talking about. I'll try once more despite the ad hominem attacks. Cantor HYPOTHESIZES a denumerably infinite list in his diagonal proof. People who accept that proof do not ask for photographs of the list. They do not ask for notarized affidavits from qualified witnesses that they confirmed either actual or potential infinities. Infinity is a mental concept dealt with in the imagination. And here is where, every time, Zylo misses the boat. He thinks because he has in his imagination a set of every real in [0, 1), he has somehow disproved Cantor. But of course he has not. He must show how to make each element in his set correspond one to one with the natural numbers. The trick is not to imagine the reals in [0, 1). Mathematicians do it every day. The hard part (actually the impossible part) is to show how to bring them into 1-to-1 correspondence with the natural numbers. I haven't attacked you at all. I'm not looking for an argument. Please stop trying to pick one with me. Cantor HYPOTHESIZES a denumerably infinite list in his diagonal proof. People who accept that proof do not ask for photographs of the list. They do not ask for notarized affidavits from qualified witnesses that they confirmed either actual or potential infinities. It's a never-ending list. It has an infinite number of elements. It is what is known as an actual infinity. It is not the limit of something as some parameter grows without bound. That would be a potential infinity. Potential infinities do not exist, even in the mathematical imagination. They are just the limit as some finite parameter grows arbitrarily large but remains finite. I am not sure whether the process described by Zylo at the start of this thread would construct all real numbers in the interval [0, 1) if continued without termination. It wouldn't. His decimals are all of finite length $n$ where $n \in \mathbb N$. Zylo misses the boat. He thinks because he has in his imagination a set of every real in [0, 1), he has somehow disproved Cantor. He doesn't. He thinks he has, but that is because he doesn't know the difference between actual infinities and potential infinities. He doesn't understand that a limit is not a part of the sequence (or function) of which it is a limit. It is a separately defined mathematical object. If that is indeed what he is trying to say, you and I are in total agreement. He hasn't even defined something with all the rationals, and it cannot be put into 1-to-1 correspondence with the set of natural numbers because it is finite. Where you and I seem to differ is this. I think (but am not sure because he is not a careful writer) he is imagining the set of infinite decimal representations, repeating or non-repeating, of all real numbers in [0,1). If that is correct, there is no problem with the set (though there may be with how he describes it). That set certainly does include $\pi - 3$ or any other real number in the designated interval by definition. He then needs to show (to disprove Cantor) how to put that set into 1-to-1 correspondence with the natural numbers. He never does this; he assumes it.
He wastes everyone's time with constructions of a set that, for those who are not finitists, needs no construction. He is "imagining" the set of reals and trying to produce a list (or in this case, a countably infinite set of lists) of them. But his constructions never contain all the real numbers and they never will while he persists in trying to create a list, because such a list is impossible (Cantor). Analysis can be dealt with very nicely with the concepts of finite and limit as n -> infinity (countable), based on the natural numbers and fractions (continuum). The rational numbers are not the continuum. I'm not even sure that they are a continuum. They certainly aren't complete, which is required for general analysis - specifically limits. But since analysis uses numbers, it naturally uses only finite things. There is no infinity in real analysis except as a linguistic shorthand for unboundedness. No interval ever includes infinity because infinity does not exist. But that doesn't really matter, because you aren't talking about analysis. You are talking about set theory. In set theory infinities do exist (but they are not called infinities). What is the difference between an imaginary mental list with infinite lines and a set with infinite elements? Cantor did not prove that the set of reals was impossible so he did not prove that an imaginary list of them was impossible. (Why does the mind boggle at an imaginary list with infinite lines if the infinity is the number of the continuum but not boggle at an imaginary list with infinite lines if the infinity is aleph null?) The list is just a visualizing metaphor for a set. But of course each line of Cantor's "list" implicitly contained two numbers, a natural number and an irrational number. What he proved was that such a list would not contain all the irrational numbers in (0, 1). Cantor's proof does not depend in any formal way on a list. It depends on hypothesizing a one-to-one mapping of an arbitrary set of irrationals onto the set of natural numbers, and then showing how to construct an irrational that is not in the first set, thereby showing that the irrationals cannot be put into one-to-one correspondence with the natural numbers. Now I do not know what is in Zylo's mind. If he is thinking of a finite set, he clearly cannot prove it to be infinite of any variety. So it seems to me more charitable to assume that he is indeed ULTIMATELY thinking of the set of all reals in [0, 1) arrived at through whatever psychological tricks of visualization, which he mistakenly thinks are relevant. Granting that, he simply does not even try to put that set into one-to-one correspondence with the natural numbers. A list is automatically in 1-1 correspondence with the natural numbers because it has a first element, a second element, ..., an $n$th element. Some mathematicians use the word "listable" in preference to "countable". If that is the common definition of "list," then clearly Cantor did prove that real numbers cannot be put into a "list" as defined. I must admit that when I think of a list without an end, I see no reason to suppose that it must have a beginning either, but it is silly to argue over definitions of words. So I concede the point. In any case, it is really irrelevant. Nothing in Cantor's proof depends on a "list" however defined. What he posits is a one-to-one correspondence between an arbitrary set of irrational numbers in the interval (0, 1) and the set of all natural numbers.
The crux of his proof is an explanation of how to construct an irrational number not in the first set, which proves that the first set does not contain all irrational numbers in (0, 1). It is an unbelievably spare and elegant proof. So where you and I seem to disagree is on a relatively small point. You do not believe that Zylo ever has in mind an infinite set, let alone a set of all the real numbers in (0, 1). If you are correct, his argument is infantile because a finite set cannot by definition be put into one-to-one correspondence with an infinite one. On the other hand, I believe that Zylo has in mind (at least some of the time) the set of all real numbers in the interval [0, 1) or (0, 1). He does not, however, even attempt (and would necessarily fail if he did attempt) to show that the set is in one-to-one correspondence with the natural numbers. So all I am saying is that his efforts to construct the set of real numbers in [0, 1) may or may not be nonsense. I do not care because I am willing to accept either set conceptually. What strikes me is his absence of any understanding that, to prove his point, he must construct a one-to-one correspondence between that set and the set of natural numbers. I'll let you have the last word on this because ultimately we are both saying that Zylo's demonstrations fail, albeit for different reasons.
CommonCrawl
Abstract: We model the Lights Out game on general simple graphs in the framework of linear algebra over the field $\mathbb F_2$. Based upon a version of the Fredholm alternative, we introduce a separating invariant of the game, i.e., an initial state can be transformed into a final state if and only if the invariant of both states agrees. We also investigate certain states with particularly interesting properties. Apart from the classical version of the game, we propose several variants, in particular a version with more than just the two classical states (light on, light off), where the analysis rests on systems of linear equations over the ring $\mathbb Z_n$. Although it is easy to find a concrete solution of the Lights Out problem, we show that it is NP-hard to find a minimal solution. We also propose electric circuit diagrams to actually realize the Lights Out game.
CommonCrawl
Abstract: We investigate the magnetic susceptibility $\chi(T)$ of quantum spin chains of $N=1280$ spins with power-law long-range antiferromagnetic coupling as a function of their spatial decay exponent $\alpha$ and cutoff length $\xi$. The calculations are based on the strong disorder renormalization method which is used to obtain the temperature dependence of $\chi(T)$ and distribution functions of couplings at each renormalization step. For the case with only algebraic decay ($\xi = \infty$) we find a crossover at $\alpha^*=1.066$ between a phase with a divergent low-temperature susceptibility $\chi(T\rightarrow 0)$ for $\alpha > \alpha^*$ and a phase with a vanishing $\chi(T\rightarrow 0)$ for $\alpha < \alpha^*$. For finite cutoff lengths $\xi$, this crossover occurs at a smaller $\alpha^*(\xi)$. Additionally we study the localization of spin excitations for $\xi = \infty$ by evaluating the distribution function of excitation energies, and we find a delocalization transition that coincides with the opening of the pseudo-gap at $\alpha_c=\alpha^*$.
CommonCrawl
Type I' String theory as M-theory compactified on a line segment? I was considering the S-dual of the Type I' String theory (the solitonic Type I string theory). That is the same as the S-dual of the T-Dual of Type I String theory. Then, that means both the length scale and the coupling constant are inverted. So, since inverting the length scale of the theory before inverting the coupling constant is the same as inverting the coupling constant before the length scale, I think the S-dual of the T-dual of the Type I String theory is the same as the T-dual of the S-dual of the Type I String theory. The S-dual of the Type I string theory is the Type HO String theory. The T-dual of the Type HO string theory is the Type HE String theory. Therefore, the S-dual of the Type I' String theory is the Type HE String theory. But the Type HE String theory is S-dual to M-theory compactified on a line segment. So does this mean that the Type I' String theory is M-theory compactified on a line segment? Type I' string theory is equivalent to M-theory compactified on a line segment times a circle, i.e. M-theory on a cylinder. M-theory on a line segment only is the Hořava-Witten M-theory, a dual description of the $E_8\times E_8$ heterotic string, because every 9+1-dimensional boundary in M-theory has to carry the $E_8$ gauge supermultiplet. The extra compactified circle is needed to break the $E_8\times E_8$ gauge group to a smaller one; and to get the right number of large spacetime dimensions, among other things. Type I' string theory has D8-branes that come from the end-of-the-world branes in M-theory on spaces with boundaries; it also possesses orientifold O8-planes. Interestingly enough, the relative position of O8-planes and D8-branes in type I' string theory may be adjusted. This freedom goes away in the M-theory limit; the D8-branes have to be stuck at the orientifold planes, those that become the end-of-the-world domain walls of M-theory, and this obligation is explained by the observation that an O8-plane with a wrong number of D8-branes on it is a source of the dilaton that runs. In the M-theory limit, the running of the dilaton becomes arbitrarily fast which sends the maximum tolerable distance between the O8-plane and D8-branes to zero. Thanks a lot for the answer but I have 1 more question: M-theory compactified on a cylinder definitely isn't equivalent to M-theory compactified on a line segment. So, there must be some fallacy with my reasoning. Do the T- and S- Dualities not commute? Or is the S-dual of Type HE String theory not M-theory compactified on a line segment (but instead a cylinder)? Thanks! There are various fallacies - you use the dualities in a bizarre way. The equivalence of the heterotic strings to M-theory with boundary isn't really a normal S-duality, it's a strong coupling limit and a general equivalence. Even more importantly, the mistake is in the first S-duality between type I' and HE. Type I' is a 9-dimensional theory (counting large dimensions only) so it can't be equivalent to a 10-dimensional one. It can't be hard to trace the number of large dimensions of spacetime and avoid simple mistakes of the sort, can it? Thanks. So does that mean that the Type I' String theory is only T-dual to Type I string theory compactified on a circle, rather than Type I string theory itself? If so, what is the T-dual of the actual 10 dimensional string theory called?
Dear @dimension10, T-duality always requires some dimensions to be compactified on a circle or, for type I', on a line segment. For 10D string theories, T-duality relates two theories with a circular dimension (of inverse radii) and 8+1 large dimensions. It is nonsense to ask what the T-dual of a 10-dimensional vacuum is. At most, you may understand it as the infinite $R$ limit of some vacua; the T-dual is formally a singular $R=0$ compactification. Let me also mention that the infinite $R$ limit of type I' = type IA looks like type IIA string theory everywhere away from the orientifold planes.
CommonCrawl
We consider partially observable discrete-time risk-sensitive control problems with path-dependent dynamics and costs. Using information state processes on past state trajectories, we reduce the problem to a completely observable stochastic control problem on time-varying infinite-dimensional state spaces. By the method of dynamic programming for time-varying infinite-dimensional spaces, we construct an optimal control of feedback type on information states. We will also discuss a small noise limit of the risk-sensitive control of the path-dependent system and derive a zero-sum game related to $H^\infty$-control. This is a joint work with S. Tomiyama.
CommonCrawl
Abstract: This article presents an efficient hierarchical clustering algorithm that solves the problem of core community detection. It is a variant of the standard community detection problem in which we are particularly interested in the connected core of communities. To provide a solution to this problem, we question standard definitions on communities and provide alternatives. We also propose a function called compactness, designed to assess the quality of a solution to this problem. Our algorithm is based on a graph traversal algorithm, the LexDFS. The time complexity of our method is in $O(n\times log(n))$. Experiments show that our algorithm creates highly compact clusters.
CommonCrawl
How many distinct ways can a cube have $1$ green face, $2$ blue faces and $3$ red faces? Note: Two ways are only distinct if one cube can't be rotated to look like the other.

Put the green face anywhere. Now consider all the ways to place the two blue faces:

1. One opposite the green and the other anywhere else.
2. Two blue faces adjacent to each other (neither of them opposite the green).
3. Two blue faces opposite each other.

Now put the red on the remaining faces.
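As a cross-check of the hint above (not part of the original answer), Burnside's lemma over the 24 rotations of the cube gives the same count: only the identity (which fixes $6!/(1!\,2!\,3!)=60$ colourings) and the three 180° face rotations (4 fixed colourings each) contribute, since the remaining rotations would force four faces of one colour, colour counts divisible by 3, or all colour counts even, none of which is compatible with the counts $1,2,3$:
\[
\#\{\text{distinct colourings}\}
  \;=\; \frac{1}{24}\sum_{g}\operatorname{fix}(g)
  \;=\; \frac{1}{24}\bigl(60 + 6\cdot 0 + 3\cdot 4 + 8\cdot 0 + 6\cdot 0\bigr)
  \;=\; \frac{72}{24} \;=\; 3,
\]
which matches the three cases listed in the hint.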
CommonCrawl
Abstract: The Korteweg–de Vries equation with a source given as a Fourier integral over eigenfunctions of the so-called generating operator is considered. It is shown that depending on the choice of a basis of eigenfunctions we have the following three possibilities: 1) evolution equations for the scattering data are nonintegrable; 2) evolution equations for the scattering data are integrable but the solution of the Cauchy problem for the Korteweg–de Vries equation with a source at some $t'>t_0$ leaves the considered class of functions decreasing rapidly enough as $x\to \pm \infty$; 3) evolution equations for the scattering data are integrable and the solution of the Cauchy problem for the Korteweg–de Vries equation with a source exists at all $t>t_0$. All these possibilities are widespread and occur in other Lax equations with a source.
CommonCrawl
Title: A Family of Godunov-type Solvers for the Pressureless Gas Dynamics and Related Models. One of the fundamental results of Linear Algebra is the Cyclic Decomposition Theorem. Let $A:X \to X$ be a linear operator on a finite dimensional vector space $X$ over a field $F$. The theorem states that $X$ is a direct sum of $A$-invariant subspaces which are generated by a single vector. The special case that $F$ is the field of complex numbers yields the Jordan Canonical Form. We present a short proof of the Cyclic Decomposition Theorem using a result on projections. The problem of pricing the passport option, whose contingent claim is dependent on the balance of a trading account, is considered. For the European passport option, a closed form solution exists for the symmetric case, when the risk-free rate is identical to the cost of carry. However, in the absence of an explicit solution for the non-symmetric case, we need to use numerical methods in order to solve the corresponding pricing partial differential equation. In addition, we derive the Greeks, namely, Delta and Gamma for the symmetric case, since the optimal holding strategy is dependent on them. The key result is the improvement in the pricing of the option, as well as estimation of these Greeks, with significantly better results being observed near zero accumulated gain (in the symmetric case), by using the three time level scheme, which is then extended to estimate the price and the Greeks in the non-symmetric case. For the American passport option, the pricing is done by presenting the free boundary problem as a sequence of linear complementarity problems, with the numerical implementation being carried out using the three time level scheme. The key result is the use of a smaller number of grid points as compared to the numerical approaches used previously for this problem, while maintaining the accuracy of the option prices obtained. Venue: Committee Room, Department of Mathematics. for the number of solutions of these equations which are independent of the size of the coefficients of $F$. In this talk, we discuss the upper bounds predicted by some of the central conjectures of this area and present some partial contributions in that direction. Title: Equality of transvection groups. that they can be visualized as the generalization of classical groups in the set up of projective modules. In this talk, we will recall the definitions of linear, symplectic, and orthogonal transvection groups and discuss some results regarding equality of the transvection group and the elementary transvection group in the relative case with respect to an ideal of the ring. parameters involved. In this generality, our result is ineffective, i.e. we cannot bound the size of the exceptional indices. We also give an effective result, under some stronger assumptions. Title: Hilbert functions of Gorenstein Algebras. Abstract: The Hilbert function is an important numerical invariant associated to an affine or a projective variety. It is a usual philosophy that the Hilbert function reflects the additional structure of the variety. Classification of the Hilbert functions of algebras with additional properties (like Gorenstein, level or complete intersection) is a challenging problem in commutative algebra. Recently, jointly with M. E. Rossi, we classified the possible Hilbert functions of Gorenstein (more generally, level) algebras in some cases (Artinian algebras of socle degree 4). In this talk we will discuss these new developments.
Abstract: In the 1980s Goldman introduced a Lie algebra structure on the free vector space generated by the free homotopy classes of oriented closed curves in any orientable surface. This Lie bracket is known as the Goldman bracket and the Lie algebra is known as the Goldman Lie algebra. In this talk I will define the Goldman Lie algebra and discuss a conjecture of Chas and Sullivan about the center of the Goldman Lie algebra. I will explain the relation between the Goldman Lie algebra and character varieties of surface groups. If time permits, I will also show how techniques from geometric group theory could be used to compute the center of the Goldman Lie algebra. I will mention some open problems related to the Goldman Lie algebra. Abstract: We define mixed multiplicities of (not necessarily Noetherian) filtrations of $\mathfrak m$-primary ideals in a Noetherian local ring $(R,\mathfrak m)$, generalizing the classical theory for $\mathfrak m$-primary ideals. We construct a real polynomial whose coefficients give the mixed multiplicities. Many of the classical theorems for mixed multiplicities of $\mathfrak m$-primary ideals hold for filtrations (not necessarily Noetherian). In this talk we mainly focus on the famous Minkowski inequalities of Teissier and Rees' Theorem on multiplicity. $\ln\sigma$ is established by deriving a Stein-type estimator. This improved estimator is not smooth. We derive a smooth estimator improving upon the BAEE. Further, the integral expression of risk difference (IERD) approach of Kubokawa is used to derive a class of improved estimators. To illustrate these results, we consider two specific loss functions: squared error and linex loss functions, and derive various estimators improving upon the BAEE. Finally, a simulation study has been carried out to numerically compare the risk performance of the improved estimators. Venue: MZ 195, Committee Room, Department of Mathematics, IIT Delhi. Abstract: Let G be a topological group and H a closed subgroup of G. An irreducible representation of G is said to be H-distinguished if it admits a linear form that is H-invariant. Given a pair (G,H) it is a natural question to classify all irreducible H-distinguished representations of G. I will begin the talk by describing why such distinction questions are interesting in representation theory. After that we will see some recent classification results for certain specific "symmetric" pairs (G,H). deal with these problems. One potential application is the Railway Junction Rescheduling Problem. Ex.: Suppose a train gets delayed due to a disturbance, which leads it to miss its scheduled timetable. This results in a conflict with another train scheduled to use that same track. To avoid the conflict a train dispatcher may have to delay other trains competing for the same track, which will propagate the delay throughout the network. Rescheduling is more difficult to deal with than scheduling (making a perfect schedule for the trains at a junction) because it involves rapid decision making within a time frame. Abstract: Many phenomena in various scientific fields are mathematically expressed by using the well-known evolution equations. The diffusion equation is one of them. We aim to develop new numerical methods to study diffusion equations. Significant literature can be found on second order approximations for the cubic B-spline collocation method. The fourth order method has never been studied for numerical solutions of partial differential equations.
We have developed a fourth order cubic B-spline collocation method and a B-spline ADI method to solve partial differential equations. Two types of ill-posed problems are studied, namely, the parabolic final value problem (FVP) and the abstract source identification problem. We define a mild solution for the parabolic FVP and prove some properties of the mild solution. The truncated spectral regularization method and the quasi-reversibility method are considered as regularization methods. We derive error estimates for exact as well as noisy data and obtain estimates under a priori parameter choice strategies. The obtained estimates include many results in the literature. Title: Signcryption in a quantum world. Post-quantum cryptography deals with cryptosystems that run on conventional computers and are secure against attacks by potential quantum computers. Quantum security is an ultimate goal in the race of modern cryptographic designs. This ensures security against a quantum adversary even if the protocols run on quantum computers. There are three generic paradigms for constructing signcryption: encrypt-then-sign (EtS), sign-then-encrypt (StE) and commit-then-encrypt-and-sign (CtE&S). In this talk, we first give a brief overview of post-quantum and quantum security of cryptographic protocols. Then, we discuss the syntax, security definition and different paradigms (EtS, StE and CtE&S) of signcryption. Finally, we briefly illustrate the security of these paradigms in a quantum world. Seminar by Prof. Jacques Giacomoni from the Laboratory of Mathematics and their Applications (LMAP), Universite de Pau et des pays de l'adour, Pau, France. Title: On the equivalence of heat kernels. positive minimal heat kernels of P-V and of P on M are equivalent. This is a joint work with Yehuda Pinchover. Abstract: In this talk, I will give an overview of problems addressed in the spectral theory of random operators in Quantum Mechanics. challenges in developing matching methods arise from the tension among (i) inclusion of as many covariates as possible in defining the matched groups, (ii) having matched groups with enough treated and control units for a valid estimate of average treatment effect in each group, (iii) computing the matched pairs efficiently for large datasets, and (iv) dealing with complicating factors such as non-independence among units. We propose the Fast Large-scale Almost Matching Exactly (FLAME) framework to tackle these problems for categorical covariates. At its core this framework proposes an optimization objective for match quality that captures covariates that are integral for making causal statements while encouraging as many matches as possible. We demonstrate that this framework is able to construct good matched groups on relevant covariates and further extend the methodology to incorporate continuous and other complex covariates. Seminar by Prof. Krishna Athreya, Distinguished Professor Emeritus, Departments of Mathematics and Statistics, Iowa State University, USA. Abstract: In this talk, an explicit construction of standard Brownian motion (SBM) using N(0,1) random variables and Haar functions will be described. We shall also discuss the Paley-Wiener-Zygmund (1933) theorem on the support of SBM by non-differentiable paths. An ergodic theorem of Kallianpur and Robbins will also be described. Abstract: The Cayley-Hamilton Theorem is a well-known result in linear algebra, usually covered in a first course.
We survey the history of this result and outline various proof techniques that have been used. Straubing gave a graph-theoretic proof of the Cayley-Hamilton Theorem. A readable exposition of the proof was given by Zeilberger. We describe this proof. We then turn to recent joint work with Souvik Roy where we have used the same technique to obtain an extension of the Cayley-Hamilton Theorem to mixed discriminants. Abstract: The discovery by M. Bhargava (1997) of generalized factorials in Dedekind domains unified past work as well as answered questions in the fields of integer valued polynomials, polynomial mappings over finite abelian groups and fixed divisors of polynomials. The first part of the talk will be a historical account of these developments. In the second part, we will focus on attempts by M. Bhargava (2000) and S. Evrard (2012) to define the notion of generalized factorials in several variables. We will then outline a joint work with D. Prasad and A. S. Reddy, where this notion is further developed. Title: IDEALS AND LIE IDEALS OF CERTAIN NORMED ALGEBRAS. Abstract: Every associative algebra inherits a canonical Lie algebra structure and there exists a deep relationship between ideals and Lie ideals of such algebras. This relationship extends to the category of normed algebras as well. After providing a quick overview of the objects of interest, we will discuss some recent developments made in this direction. Some part of this talk will be based on joint works with Ranjana Jain and Bharat Talwar. Title: Multistability for Liquid Crystal Systems. Abstract: Nematic liquid crystals are classical examples of mesophases intermediate between solids and liquids, with anisotropic or direction-dependent optical and electro-magnetic properties. We review two popular continuum theories for nematic liquid crystals: the Oseen-Frank and the Landau-de Gennes theories. We illustrate how these theories can be applied to liquid crystal devices, with the Planar Bistable Nematic device as a case study. We model the Planar Bistable Nematic Device in terms of problems in the calculus of variations and the theory of elliptic partial differential equations. We use tools from singular perturbation theory, topology, functional analysis and numerical methods to study the multiple solution branches as a function of the geometry, boundary conditions, temperature and material properties. We provide a semi-analytic description of a previously unreported Well Order Reconstruction solution in this planar device. Seminar by Prof. Rajendra Bhatia, ISI Delhi & Ashoka University, Haryana. Speaker: Prof. Rajendra Bhatia, ISI Delhi & Ashoka University, Haryana. Abstract: Everyone knows that the trace of a matrix is the sum of its diagonal entries as well as the sum of its eigenvalues. A lot more can be said, and it leads to much interesting mathematics, some of which will be discussed in the talk.
CommonCrawl
Dawson, Clint. "Error Estimates for Godunov Mixed Methods for Nonlinear Parabolic Equations." (1988) https://hdl.handle.net/1911/101642. Many computational fluids problems are described by nonlinear parabolic partial differential equations. These equations generally involve advection (transport) and a small diffusion term, and in some cases, chemical reactions. In almost all cases they must be solved numerically, which means approximating steep fronts, and handling time-scale effects caused by the advective and reactive processes. We present a time-splitting algorithm for solving such parabolic problems in one space dimension. This algorithm, referred to as the Godunov-mixed method, involves splitting the differential equation into its advective, diffusive, and reactive components, and solving each piece sequentially. Advection is approximated by a Godunov-type procedure, and diffusion by a mixed finite element method. Reactions split into an ordinary differential equation, which is handled by integration in time. The particular scheme presented here combines the higher-order Godunov MUSCL algorithm with the lowest-order mixed method. This splitting approach is capable of resolving steep fronts and handling the time-scale effects caused by rapid advection and instantaneous reactions. The scheme as applied to various boundary value problems satisfies maximum principles. The boundary conditions considered include Dirichlet, Neumann and mixed boundary conditions. These maximum principles mimic discretely the classical maximum principles satisfied by the true solution. The major results of this thesis are discrete L$\sp\infty$(L$\sp2$) and L$\sp\infty$(L$\sp1$) error estimates for the method assuming various combinations of the boundary conditions mentioned above. These estimates show that the scheme is essentially first-order in space and time in both norms; however, in the L$\sp1$ estimates, one sees a much weaker dependence on the lower bound of the diffusion coefficient than is usually derived in standard energy estimates. All of these estimates hold for uniform and non-uniform grid. Error estimates for a lower-order Godunov-mixed method for a fully nonlinear advection-diffusion-reaction problem are also considered. First-order estimates in L$\sp1$ are derived for this problem.
CommonCrawl
Abstract: We show that Khovanov homology and Hochschild homology theories share common structure. In fact they overlap: Khovanov homology of a $(2,n)$-torus link can be interpreted as a Hochschild homology of the algebra underlying the Khovanov homology. In the classical case of Khovanov homology we prove the concrete connection. In the general case of Khovanov-Rozansky $sl(n)$ homology and its deformations we conjecture the connection. The best framework to explore our ideas is to use a comultiplication-free version of Khovanov homology for graphs developed by L. Helme-Guizon and Y. Rong and extended here to the $\mathbb M$-reduced case, and to noncommutative algebras (in the case of a graph being a polygon). In this framework we prove that for any unital algebra $A$ the Hochschild homology of $A$ is isomorphic to graph homology over $A$ of a polygon. We expect that this paper will encourage a flow of ideas in both directions between Hochschild/cyclic homology and Khovanov homology theories.
CommonCrawl
After numerous unfortunate freak fatalities and the lawsuits, settlements, protests, and boycotts that naturally followed, the beleaguered executives at ACME Clock Manufacturers have decided they need to finally fix their disastrous quality control issues. It has been known for years that the digital clocks they manufacture have an unacceptably high ratio of faulty liquid-crystal display (LCD) screens, and yet these heartless souls have repeatedly failed to address the issue, or even warn their hapless consumers! You have been called in as a quality consultant to finally put a stop to the madness. Your job is to write an automated program that can test a clock and find faults in its display. These clocks use a standard 7-segment LCD display for all digits (shown on the left in Figure 1), plus two small segments for the ':', and show all times in a 24-hour format. The minute before midnight is 23:59, and midnight is 0:00. The ':' segments of a working clock are on at all times. The representation of each digit using the seven segments is shown on the right in Figure 1. Figure 1: LCD display of each digit. Your program will be given the display of a clock at several consecutive minutes, although you do not know exactly what time these displays start. Some of the LCD segments are burnt out (permanently off) and some are burnt in (permanently on). Your program must determine, where possible, which segments are definitely malfunctioning and which are definitely in working order. The first input line contains a single integer $n$ ($1 \leq n \leq 100$), which is the number of consecutive minutes of a clock's display. The next $8n-1$ lines contain $n$ ASCII images of these clock displays of size $7 \times 21$, with a single blank line separating the representations. All digit segments are represented by two characters, and each colon segment is represented by one character. The character 'X' indicates a segment that is on. The character '.' indicates anything else (segments that are off or non-segment portions of the display). See the sample input/output for details; the first output shows every possible LCD segment along with the smaller segments used to represent the ':'. No clock representation has an 'X' in a non-segment position or only half of a segment showing. Display a $7 \times 21$ ASCII image with a '0' for every segment that is burnt out, a '1' for every segment that is burnt in, a 'W' for every segment that is definitely working, and a '?' for every segment for which the status cannot be determined. Use '.' for non-segments. If the given displays cannot come from consecutive minutes, display impossible.
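One way to organize a solution (a hedged sketch, not a reference implementation): try every possible starting time; for each candidate, classify every one of the 30 segments from its observed on/off pattern versus the pattern a correct clock would show; a candidate is consistent only if no segment is contradictory, and a segment's final character is a definite label only if all consistent candidates agree on it (otherwise '?'), with impossible printed when no candidate survives. The helper below shows only the per-segment classification for one candidate start time; parsing the ASCII grid and enumerating the 1440 possible start times are omitted, and the function name is made up.

```cpp
#include <vector>

// For a single segment under one candidate start time:
//   obs[i]   = segment lit in the i-th observed frame
//   ideal[i] = segment should be lit at minute (start + i) on a correct clock
// Returns '0' (burnt out), '1' (burnt in), 'W' (working), '?' (undetermined),
// or 'X' meaning this start time is inconsistent with the observations.
char classifySegment(const std::vector<bool>& obs, const std::vector<bool>& ideal) {
    bool allOff = true, allOn = true, allMatch = true;
    bool idealOn = false, idealOff = false;
    for (std::size_t i = 0; i < obs.size(); ++i) {
        allOff   = allOff   && !obs[i];
        allOn    = allOn    &&  obs[i];
        allMatch = allMatch && (obs[i] == ideal[i]);
        idealOn  = idealOn  ||  ideal[i];
        idealOff = idealOff || !ideal[i];
    }
    if (!allMatch && !allOff && !allOn) return 'X'; // neither working nor stuck
    if (allMatch && idealOn && idealOff) return 'W'; // tracks a changing pattern
    if (allOff && idealOn)  return '0';              // dark although it should light up
    if (allOn  && idealOff) return '1';              // lit although it should go dark
    return '?';                                      // consistent with several states
}
```

The outer loop would then intersect the labels over all consistent start times, degrading a segment to '?' whenever two consistent candidates disagree about it.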
CommonCrawl
Consider a paging system with a page table stored in memory where a memory reference takes 200 nanoseconds and all pages are in memory. What is the effective memory reference time if the hit ratio for the associative registers is 70%, the hit ratio for memory is 20%, and 10% of the time we must go out to the disk to load the faulted page into memory, where context switching, disk transfer time, etc. take 100 nanoseconds? TLB access time is 10 ns. The given answer is 299 ns but I am getting 290 ns.

Suppose: TLB lookup time = 20 ns, TLB hit ratio = 80%, memory access time = 75 ns, swap page time = 500,000 ns, 50% of pages are dirty, and the OS uses a single-level page table. What is the effective access time (EAT) if we assume the page fault rate is 10%? Assume the cost to update the TLB, the page table, and the frame table (if needed) is negligible.

My approach: Given EMAT = 4, we have 4 = (1-p)(m) + p(page fault service + m), where p = page fault rate. Page fault service = (0.6)(10) + (0.4)(3) = 7.2, so 4 = (1-p)(1) + p(7.2 + 1). They have taken 4 = (1-p)(1) + p(7.2). In a page fault also we should consider the memory access time, right?

I got some doubt while solving previous year questions: while calculating EMAT when the question involves page fault service time, we use the formula p*s + (1-p)*m, taken from Galvin, where p is the page fault rate, s is the page fault service time, m ... sense. So why in the first reference/doubt are we not using 2m instead of m in the case of NOT a page fault?

What is the general and final formula for calculating the effective memory access time, taking into consideration the $\alpha$-level page table, TLB hit ratio as $h$, miss ratio as $m$, memory access time as $M$ ... for page fault servicing as $x$? There seem to be so many formulae, each different from the other depending on the question.
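A hedged way to organize these formulas (one common textbook convention, not the only one — which is exactly why answers like 290 ns vs. 299 ns can both arise): with an $\alpha$-level page table, TLB hit ratio $h$ (miss ratio $m = 1-h$), TLB lookup time $t$, memory access time $M$, page fault rate $p$ and fault service time $x$,
\[
\mathrm{EMAT} \;=\; h\,(t + M) \;+\; m\,(t + \alpha M + M),
\]
and, folding in page faults (assuming the service time $x$ already includes restarting the faulting access; some texts instead write $p\,(x + \mathrm{EMAT})$),
\[
\mathrm{EAT} \;=\; (1-p)\,\mathrm{EMAT} \;+\; p\,x .
\]
The differences between textbook answers usually come down to exactly these conventions: whether the TLB is searched in parallel with memory, whether the page-table walk is still charged on a TLB hit, and whether the fault service time already includes the re-executed memory reference.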
CommonCrawl