>>> Indeed, what does *properly* actually mean now?
>> (but control sequences in general would be easier).
>> needs to be fixed.
>> First: do we actually want \mathrm to take the font of the text? Not necessarily; indeed, probably not.
>> terms of Computer Modern, and probably similarly with euler.
>> maths font packages need to be patched in XeLaTeX to reflect this.
you can deduce what is intended. But there is the following fact. a previous message. You need to use combining pairs. Presumably there is a standard that determines these code-points. mathematics, just as TeX does already. Note that there is no math-upright (of medium weight). Thus \mathrm would use ordinary characters, with their usual spacing. (e.g., use Code2000 when math is in Code2001, say). escape out of math-mode, back into text-mode. How do accents get applied? There are no combining characters such as "0307 and "0308 in Code2001. fudging it in macros (e.g. something like Bruno's attempt). how does one search for mathematical expressions in the PDF? For example, search for instances of sin x (as from $\sin x$). being indicated in the visible parts. Hans, how do the above ideas compare with the discussions at EuroTeX?
CommonCrawl
How do you calculate/estimate hypersonic leading edge and skin temperatures? At lower speeds (below Mach 5-ish), stagnation temperature (TAT) is a very accurate proxy for skin temperature. But at mid/high hypersonic speeds (especially in the thin upper atmosphere where mass flow is low), thermal radiation bleeds off a significant amount of heat, especially as temperatures climb into the thousands of Kelvin. How far off am I? How do you actually estimate hypersonic skin temperatures (without CFD)?

(Data are actual temperatures, where available. Stag is stagnation temperature. Rad is the temperature predicted by the above formula. Drag areas are pure guesses. Surface areas were 10-12 m^2 for the X-43A, X-51A, and HTV-2. Mass flows were 20-40 kg/s/m^2, except for the X-51A, which encountered 140 kg/s/m^2.)

For predicting skin temperature, stagnation temperature seems more accurate at lower Mach numbers and the formula's temperatures at higher Mach numbers, as expected. Admittedly, I'm pleased (and surprised) that the formula even yields ballpark figures. However, it's a bit sensitive to drag area and radiating surface area, and these are the only aircraft for which I have estimated surface areas, so I can't be confident this formula works well for other aircraft. Some background: I'm taking on a fun project (nothing serious), so a first approximation (say, to within 100 K) is good enough. I've tried to follow the etiquette as best as I can, but I'm pretty new to Stack Exchange, so let me know if I should change anything :) Thanks!

where the parameters $N$, $M$, and $C$ depend on the configuration and $q_w$ is the heating in $W/cm^2$ (this is all from Hypersonic and High Temperature Gas Dynamics, and I highly recommend this book); where $R$ is the radius, $h_w$ is the wall enthalpy and $h_0$ is the total enthalpy; and where $T_w$ is the wall temperature and $x_T$ is the distance along the body measured from the onset of the turbulent boundary layer. Phew, I believe I typed all those correctly. These are approximations, but they are really the simplest approach to get answers without requiring simulation or data measurements. Great for an initial estimate. In the aforementioned book, these expressions are attributed to the paper Aerothermodynamics of Transatmospheric Vehicles. The approach there is to assume that the heating can take a form like the first expression, relate that to the wall temperature for a fully catalytic material, and then find the values of $C$, $M$ and $N$ that are roots of the system.

I found a NASA technical memo, "Real-Time Aerodynamic Heating and Surface Temperature Calculations for Hypersonic Flight Simulation," which gives very accurate results. The authors also discuss a bit how they obtained their expression. I have to say, though, Dave's dissertation and tgp2114's answer are wholly sufficient and more straightforward.
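For what it's worth, here is a rough Python sketch of the kind of estimate described above: a heating correlation of the form $q_w = C\,\rho^N V^M (1 - h_w/h_0)$ balanced against reradiation $\epsilon\sigma T_w^4$, iterated because the wall enthalpy depends on the wall temperature. The values of $C$, $N$, $M$, the emissivity and the flight condition below are placeholders chosen only to show the mechanics, not the book's configuration-specific constants, and the sketch works in SI units (W/m^2) rather than the book's W/cm^2.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
CP    = 1005.0     # rough specific heat of air, J/(kg K); ignores high-temperature effects

def radiative_equilibrium_temp(rho, V, C, N, M, emissivity=0.85, T_static=250.0, iters=60):
    """Balance q_w = C * rho**N * V**M * (1 - h_w/h0) against eps*sigma*T_w**4.
    All constants here are illustrative placeholders, not validated values."""
    h0 = CP * T_static + 0.5 * V**2     # total enthalpy, crude calorically perfect gas
    T_w = 1000.0                        # initial guess, K
    for _ in range(iters):
        h_w = CP * T_w                  # wall enthalpy at the current wall-temperature guess
        q_w = C * rho**N * V**M * max(0.0, 1.0 - h_w / h0)
        T_w = (q_w / (emissivity * SIGMA)) ** 0.25   # radiative-equilibrium update
    return T_w

# Hypothetical flight condition (roughly Mach 6 at ~25 km) with placeholder C, N, M:
print(radiative_equilibrium_temp(rho=0.04, V=1800.0, C=1.8e-4, N=0.5, M=3.0))
```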
CommonCrawl
In this article, we'll learn some algorithms based on the simple tools of computational geometry.

What is a sweep line? A sweep line is an imaginary vertical line which is swept across the plane rightwards. That's why the algorithms based on this concept are sometimes also called plane sweep algorithms. We sweep the line based on some events, in order to discretize the sweep. The events depend on the problem we are considering; we'll see them in the algorithms discussed below. Besides the events, we maintain a data structure which stores the events, generally sorted by y coordinates (the ordering criterion may vary), and which helps in the processing when we encounter an event. At any instant, the data structure stores only the active events. One other thing to note is that the efficiency of this technique depends on the data structures we use. Generally, we can use a set in C++, but sometimes we require some extra information to be stored, so we go for a balanced binary tree.

Our first problem is to find the closest pair among $$N$$ given points. The naive approach compares all pairs, so we need a better algorithm for this. Here, we'll discuss it using the line sweep technique. For this problem, we can consider the points in the array as our events, and in a set we store the already visited points sorted by y coordinate. So, first we sort the points in the x direction, as we want our line to move towards the right. Now, suppose we have processed the points from 1 to N-1, and let h be the shortest distance we have got so far. For the Nth point, we want to find points whose distance from the Nth point is less than or equal to h. We know we can only go up to a distance h from $$x_N$$ to find such a point, and in the y direction we can go a distance h upwards and h downwards. So, all points whose x coordinates lie in [$$x_N-h,x_N$$] and whose y coordinates lie in [$$y_N-h , y_N+h$$] are what we are concerned with, and these form the active events of the set. All points in the set with x coordinates less than $$x_N-h$$ are to be deleted. After this processing, we'll add the Nth point to the set. One thing to note is that at any instant the number of active events is O(1) (there can be at most 5 active points around a point, excluding the point itself). The red region in the image is the region containing points which are to be evaluated against the current point. The points to the left of this region are removed from the set.

1. First, we have sorted the array of points on x coordinates.
2. Then we inserted the first point in the pnts array into the set box. Note we have defined py as the first element of the pair, so the set will be sorted by y coordinates.
4. In the second for loop, we are iterating over all points whose x coordinates lie in [$$x_N-h,x_N$$] and whose y coordinates lie in [$$y_N-h , y_N+h$$]. Finding the lower_bound takes $$O(log N)$$ and this loop runs at most 5 times.
5. For each point, insert it into the set. This step takes $$O(log N)$$.
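Here is a minimal Python sketch of the procedure described in the steps above, using a plain list kept sorted by y in place of the C++ `set`; it is meant only to illustrate the sweep, not as a tuned implementation (the sample points at the end are arbitrary).

```python
import bisect
import math

def closest_pair(points):
    """Sweep-line closest pair: 'active' holds already-visited points sorted by y."""
    pts = sorted(points)                 # sort by x, then y
    best = float('inf')
    active = []                          # (y, x) pairs, kept sorted by y
    left = 0                             # index of the leftmost point still active
    for i, (x, y) in enumerate(pts):
        # drop points whose x-distance from the sweep line already exceeds best
        while left < i and x - pts[left][0] > best:
            active.remove((pts[left][1], pts[left][0]))
            left += 1
        # examine only candidates with |y' - y| <= best
        lo = bisect.bisect_left(active, (y - best, -math.inf))
        hi = bisect.bisect_right(active, (y + best, math.inf))
        for cy, cx in active[lo:hi]:
            best = min(best, math.hypot(x - cx, y - cy))
        bisect.insort(active, (y, x))    # the current point becomes an active event
    return best

print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 2), (2, 0)]))   # 1.4142...
```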
Now, let's move on to our next problem.

Problem: Given a set of $$N$$ axis-aligned rectangles (edges of the rectangles parallel to the x axis or y axis), find the area of the union of all of the rectangles. A rectangle is represented by two points, one lower-left point and one upper-right point.

We start our algorithm by sorting the events by x coordinates. When a lower-left point of a rectangle is hit (i.e., we encounter the left edge of a rectangle), we insert the rectangle into the set. When we hit an upper-right point of a rectangle (we encounter the right edge of a rectangle), we remove the rectangle from the set. At any instant, the set contains only the rectangles which intersect the sweep line (rectangles whose left edges have been visited but whose right edges have not). The area swept at any instant is $$\Delta$$y * $$\Delta$$x, where $$\Delta$$y is the length of the sweep line which is actually cut by the rectangle(s) (the sum of the vertical lengths of the orange region in the figure below) and $$\Delta$$x is the distance between two events of this sweep line. But so far we only know which rectangles intersect the sweep line. So, here we have a new problem: how do we find the length of the sweep line cut by the rectangles? The solution is pretty much the same as what we have been doing so far. We use the line sweep technique again, but this time rotated 90 degrees, i.e., we sweep a horizontal line from bottom to top. The events for this sweep line are the horizontal edges of the active rectangles (rectangles cut by the vertical sweep line). When we encounter the bottom horizontal edge of an active rectangle, we increment a counter (the counter maintains the number of rectangles overlapping at the current moment), and we decrement it on the top horizontal edge of an active rectangle. When the counter drops from some nonzero value to zero, we have found a cut length of the vertical sweep line, so we add the corresponding area to our final answer. The images above show how we do the horizontal sweep bottom-up; $$\Delta$$y is the sum of the lengths of the two arrows shown in the last image. We do this for every event of the vertical sweep line. That was our algorithm, so let's come to the implementation part. For every event of the vertical sweep line, we need to find the length of the cut on the sweep line, which means we need to run the horizontal sweep. Here, we may use a boolean array as our data structure, because we will have sorted the rectangles once in order of vertical edges (vertical sweep) and once in order of horizontal edges (horizontal sweep), so we have the sorting in both directions. The complexity of the algorithm is easily seen to be $$O(N^2)$$. The complexity can be reduced by using other data structures, such as a BST, instead of a boolean array.

By now you should have some idea of how to use this technique, so let's jump to one more problem that can be solved with it. Let S be a set of points. Then the convex hull is the smallest convex polygon which covers all the points of S. There exists an efficient algorithm for the convex hull (Graham scan), but here we discuss the same idea except that we sort on the basis of x coordinates instead of angle (Andrew's monotone chain, sketched in code below):

1. Sort the points of P by x-coordinate (in case of a tie, sort by y-coordinate).
2. Initialize U and L as empty lists. The lists will hold the vertices of the upper and lower hulls respectively.
3. Build L by scanning the points from left to right: while the last two points of L together with the new point do not make a counter-clockwise turn, remove the last point of L; then append the new point. Build U in the same way, scanning from right to left.
4. Remove the last point of each list (it's the same as the first point of the other list).
5. Concatenate L and U to obtain the convex hull of P. Points in the result will be listed in counter-clockwise order.

That was our convex hull using Andrew's algorithm; here we sorted using x coordinates, sweeping our line rightwards. The complexity of this algorithm is $$O(N* log N)$$ because of the sorting. It may seem to be $$O(N^2)$$ because of the while loop inside, but this loop runs for $$O(N)$$ overall, since we only delete points in it and there are only N points.
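For reference, here is a compact Python sketch of Andrew's monotone chain construction described above (illustrative only; the sample points are arbitrary).

```python
def cross(o, a, b):
    """Cross product of OA and OB; positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))            # sort by x, then y
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                        # lower hull: left-to-right sweep
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):              # upper hull: right-to-left sweep
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]       # drop the duplicated endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
# [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]  -- the interior point (1, 1) is excluded
```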
Now you have got some taste of this technique, so try solving the attached problem, and do explore this technique's application in some other problems.
CommonCrawl
Abstract: For $\alpha \in (1,2]$, the $\alpha$-stable graph arises as the universal scaling limit of critical random graphs with i.i.d. degrees having a given $\alpha$-dependent power-law tail behavior. It consists of a sequence of compact measured metric spaces (the limiting connected components), each of which is tree-like, in the sense that it consists of an $\mathbb R$-tree with finitely many vertex-identifications (which create cycles). Indeed, given their masses and numbers of vertex-identifications, these components are independent and may be constructed from a spanning $\mathbb R$-tree, which is a biased version of the $\alpha$-stable tree, with a certain number of leaves glued along their paths to the root. In this paper we investigate the geometric properties of such a component with given mass and number of vertex-identifications. We (1) obtain the distribution of its kernel and more generally of its discrete finite-dimensional marginals; we will observe that these distributions are related to the distributions of some configuration models; (2) determine the distribution of the $\alpha$-stable graph as a collection of $\alpha$-stable trees glued onto its kernel; and (3) present a line-breaking construction, in the same spirit as Aldous' line-breaking construction of the Brownian continuum random tree.
CommonCrawl
However, $(A - I)^2 = 0$, so I am confused on how to find the generalized eigenvectors. Thanks in advance!

- Take any vector $u_3$ in $\ker(A-I)^2\smallsetminus\ker(A-I)$, i.e. any vector in $\mathbf R^3$ which does not satisfy the equation $\;x+2y+3z=0$, e.g. $u_3=(1,0,0)$.
- Set $u_2=(A-I)u_3$. This vector is an eigenvector.
- Complete $u_2$ with a linearly independent vector $u_1$, so as to obtain a basis of the eigenspace.
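To make the recipe concrete, here is a small NumPy check using one matrix that is consistent with the constraints above ($(A-I)^2=0$ and $\ker(A-I)=\{x+2y+3z=0\}$); it is only an illustrative example, not necessarily the asker's matrix.

```python
import numpy as np

# A hypothetical matrix with (A - I)^2 = 0 and ker(A - I) = {x + 2y + 3z = 0}.
A = np.eye(3) + np.outer([3, 0, -1], [1, 2, 3])
N = A - np.eye(3)

u3 = np.array([1.0, 0.0, 0.0])   # any vector NOT satisfying x + 2y + 3z = 0
u2 = N @ u3                      # an eigenvector: N @ u2 == 0
u1 = np.array([2.0, -1.0, 0.0])  # another kernel vector, independent of u2

print(np.allclose(N @ N, 0), np.allclose(N @ u2, 0))    # True True
P = np.column_stack([u1, u2, u3])
print(np.round(np.linalg.inv(P) @ A @ P, 10))            # Jordan form with one 2-block
```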
CommonCrawl
How to sample data for regression so that it is the most informative? How should I sample $x_1, x_2$ so that I get the most informative data? Perhaps I should first ask what "informative" means. I think this can be answered by drawing on sequential Monte-Carlo and quasi Monte-Carlo methods. Exploration: you want to cover the space of inputs; exploitation: ideally you would rather spend time sampling places where the value of $f$ varies a lot, rather than where it is flat. For the exploration part, quasi Monte-Carlo methods tell us to choose low-discrepancy sequences in order to cover the space as efficiently as possible. In 2D the method you choose does not matter too much; you basically need a grid-like structure, see the Sobol sequence for example. If you just want to explore as much as possible, this answers your question (see the Koksma-Hlawka inequality). For the exploitation part, we need to adopt a more dynamic perspective; somehow, we need to learn from the samples that already exist, in order to focus on places that matter to us, as we sample more and more. Here, importance sampling gives us a way of focusing on places of interest. For example, if we were interested in finding the peaks of $f$, then given a set of $N$ existing samples $(x_i, f(x_i))$, we would resample in the neighbourhood of $x_k$ with a probability proportional to $f(x_k)$. In your case, you may want to resample where the gradient of $f$ is large, for example. How to balance exploration and exploitation is a question without a universal answer; it really depends on your problem and on the resources available. One option is to partition the space into cells and re-evaluate the "gradient" in each cell by comparing the value obtained with the neighbours. This would be an adaptive way of sampling $f$. Obviously this is just a sketch of a solution, and many details need to be addressed (mainly how to compare values obtained between boxes of different sizes, and what variance to expect on the estimate of the gradient).
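A small Python sketch of the two ingredients above: a Sobol design for exploration, then resampling weighted by a crude local-variation score for exploitation. The test function, neighbourhood size and noise scale are arbitrary placeholders, not part of the original answer.

```python
import numpy as np
from scipy.stats import qmc   # SciPy >= 1.7

rng = np.random.default_rng(0)

def f(x):                      # stand-in for the unknown regression target
    return np.sin(6 * x[:, 0]) * np.cos(4 * x[:, 1])

# Exploration: a low-discrepancy (Sobol) design covering [0, 1]^2 evenly.
X = qmc.Sobol(d=2, scramble=True, seed=0).random(128)
y = f(X)

# Exploitation: resample preferentially where f appears to vary the most, using the
# spread of f among each point's nearest neighbours as a crude "gradient" proxy.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nbrs = np.argsort(d2, axis=1)[:, :5]
score = np.abs(y[nbrs] - y[:, None]).mean(axis=1)
p = score / score.sum()

idx = rng.choice(len(X), size=64, p=p)                      # pick "interesting" seeds
X_new = np.clip(X[idx] + 0.03 * rng.standard_normal((64, 2)), 0, 1)
print(X_new[:3])
```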
CommonCrawl
Abstract: This paper is the first part of a systematic survey on the structure of classical groups over general rings. We intend to cover various proofs of the main structure theorems, commutator formulae, finiteness and stability conditions, stability and pre-stability theorems, nilpotency of $\mathrm K_1$, centrality of $\mathrm K_2$, automorphisms and homomorphisms, etc. This first part covers background material such as one-sided inverses, elementary transformations, definitions of obvious subgroups, Bruhat and Gauss decompositions, relative subgroups, finitary phenomena, and transvections. Key words and phrases: linear groups, general linear group, associative rings, one-sided inverses, weakly finite rings, IBN rings, elementary transvections, linear transvections, congruence subgroups, elementary subgroups, Bruhat decomposition, Gauss decomposition, parabolic subgroups, group of finitary matrices, Whitehead type lemmas.
CommonCrawl
The anomaly-anomaly correlator is studied using QCD sum rules. Using the matrix elements of the anomaly between the vacuum and the pseudoscalars $\pi$, $\eta$ and $\eta'$, the derivative of the correlator $\chi'(0)$ is evaluated and found to be $\approx 1.82 \times 10^{-3}\,\mathrm{GeV}^2$. Assuming that $\chi'(0)$ has no significant dependence on quark masses, the mass of $\eta'$ in the chiral limit is found to be $\approx 723$ MeV. The same calculation also yields for the singlet pseudoscalar decay constant in the chiral limit a value of $\approx 178$ MeV.
CommonCrawl
We explore the top-K rank aggregation problem in which one aims to recover a consistent ordering that focuses on top-K ranked items based on partially revealed preference information. We examine an M-wise comparison model that builds on the Plackett-Luce (PL) model where for each sample, M items are ranked according to their perceived utilities modeled as noisy observations of their underlying true utilities. As our result, we characterize the minimax optimality on the sample size for top-K ranking. The optimal sample size turns out to be inversely proportional to M. We devise an algorithm that effectively converts M-wise samples into pairwise ones and employs a spectral method using the refined data. In demonstrating its optimality, we develop a novel technique for deriving tight $\ell_\infty$ estimation error bounds, which is key to accurately analyzing the performance of top-K ranking algorithms, but has been challenging. Recent work relied on an additional maximum-likelihood estimation (MLE) stage merged with a spectral method to attain good estimates in $\ell_\infty$ error to achieve the limit for the pairwise model. In contrast, although it is valid in slightly restricted regimes, our result demonstrates a spectral method alone to be sufficient for the general M-wise model. We run numerical experiments using synthetic data and confirm that the optimal sample size decreases at the rate of 1/M. Moreover, running our algorithm on real-world data, we find that its applicability extends to settings that may not fit the PL model.
CommonCrawl
I asked a perhaps related question here. Here is my code below. The goal is to define a function which must be integrated numerically. The function itself is first calculated over different times, then I give it another input, and finally it is integrated. The time needed to complete the calculations is 57 seconds. I want to use this function inside NMinimize to obtain some parameters. However, I removed almost everything to make the problem clear. I think one reason for the slow calculation is the BesselJZero function. Note that this function in my application has a variable input, but I made it fixed for simplicity. Without it the calculation is done in 10 seconds, but it is still slow. As I said, I need to use this function in NMinimize, so slow evaluation makes it take longer to find the desired parameters. I have the same code in Matlab; it calculates in 0.28 seconds. What am I doing wrong in Mathematica? When there are only a few time points, and hence the length of the list which NIntegrate acts upon is small, there isn't any difference between Mathematica and Matlab. Yet as the number of elements of this list increases, Mathematica falls behind Matlab. Is this a clue to solving this problem?! Note that the upper limit of integration is 100, which is meant to represent infinity; when I used infinity I got some errors. The example works. Before, I had missed a minus sign, so it didn't work. I changed the taxis in the Mathematica code to make it the same as t in the Matlab code; not a big difference though. In the first version I ran NIntegrate over a list and then summed all the terms in the next line. In the improved one I first summed the terms and then used NIntegrate. Yet it is still much slower than Matlab. Also, you don't need to evaluate a bunch of Bessel functions, since BesselJZero[1/2,n] is $n\pi$. As noted by @belisarius, your first term would diverge if you integrate to $\infty$, since the integrand is 1. There is something wrong with the expression of your intent.
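For reference, the remark that BesselJZero[1/2, n] equals $n\pi$ follows from the closed form of the half-integer Bessel function (a standard identity):

$$J_{1/2}(x) \;=\; \sqrt{\frac{2}{\pi x}}\,\sin x \qquad\Longrightarrow\qquad J_{1/2}(x)=0 \iff \sin x = 0 \iff x = n\pi,\quad n = 1, 2, \dots$$

Replacing each call to BesselJZero[1/2, n] with $n\pi$ therefore avoids the numerical root-finding entirely.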
CommonCrawl
> Continued by Rémi and Lilian :ok_hand: ! # 1. What is Python :snake: ? | **Support?** | Community (StackOverflow, IRC, mailing lists etc) | By MathWorks ? - On Linux and Mac OS : already installed! - Takes about 10 minutes... and it's free ! + **PyCharm** if you want "the most powerful Python IDE ever" - Or with the standard installer, use `pip install [name]`. ## :mag: How to find the module you need ? - Ask your colleagues :smile: ! - Look on the Internet ! | **Maximum** | `np.max(a)` | `max(max(a))` ? > Using keras (keras.io) it's very simple and concise :sunglasses: ! # just like in the Scikit-Learn API. - :sparkles: Lots of Python code written for numerical values can work directly for symbolic values! + or symbols $\mu_1,\ldots,\mu_K$ ! > By you ? Any idea is welcome!
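As a pointer for the keras slide above, here is roughly what "simple and concise" looks like in practice: a tiny hypothetical snippet (made-up data and layer sizes, not from the original slides) following the fit/predict pattern familiar from Scikit-Learn.

```python
import numpy as np
from tensorflow import keras   # keras.io

# Toy data: classify whether the feature sum exceeds 5.
X = np.random.rand(200, 10)
y = (X.sum(axis=1) > 5).astype(int)

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)   # just like .fit() in scikit-learn
print(model.predict(X[:3], verbose=0))
```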
CommonCrawl
Abstract. We isolate a large class of self-adjoint operators $H$ in $L^2(X)$ whose essential spectrum is determined by their behavior at $x\sim\infty$, and we give a canonical representation of the essential spectrum of $H$ in terms of spectra of limits at infinity of translations of $H$. The configuration space $X$ is an abelian locally compact, non-compact group.
CommonCrawl
Seminar — Lisbon Mathematics PhD. The purpose of the LisMath seminar is to provide an initiation into research as well as to help students train their oral skills. Each LisMath student will be asked to give a talk, based on research papers chosen from a list covering a variety of research topics. The seminar should be comprehensible to everyone. The student will be asked to make an effort to explain why he/she finds the topic interesting, and how it fits into the broader research picture. The LisMath seminar will thus help broaden the students' training. The LisMath seminar takes place on a weekly basis in the Spring semester. Attendance is mandatory for LisMath students. Venue: Wednesday 17h-18h, alternating between FCUL (seminar room 6.2.33 of the Department of Mathematics) and IST (seminar room P9 of the Department of Mathematics), except for 26/6/2018, LisMath Seminar Day, when all sessions will be held at the former location. Maria Silva, Instituto Superior Técnico. Pedro Filipe, LisMath, Instituto Superior Técnico. The Leibniz hierarchy arose as an attempt to rank which types of logics are more amenable to being studied from an algebraic point of view, and it plays a central role in modern abstract algebraic logic. Within this hierarchy, the class of protoalgebraic logics resides at the very bottom, not being included in any other class. In this sense, protoalgebraicity is one of the weakest properties of logics that makes them amenable to most of the standard methods in algebra. In this seminar we will study the order properties of this lattice of logics, with some interesting results. Salvatore Baldino, Instituto Superior Técnico. The goal of this seminar is to introduce the concept of integrability, showing how it can be applied to obtain solutions for dynamical systems. We will then apply the tools that we build in the first part of the talk to attack a problem that is relevant in topological string theory, the problems of the KP and KdV hierarchies. We will introduce those problems in a self-contained way, explore the use of the technique of Lax pairs to solve them, and finally we will see how those solutions are of use in matrix models, which appear in minimal string theories of interest. Augusto Pereira, LisMath, Instituto Superior Técnico. The twistor correspondence and the ADHM construction on $S^4$. We shall give a brief overview of Yang-Mills theory, discussing some of its applications to other areas of geometry. We then focus on the construction of holomorphic bundles corresponding, via the twistor transform, to instanton solutions of the Yang-Mills equation. Pedro Cardoso, LisMath, Instituto Superior Técnico. Spectral Gap of Markov Chains. Roberto Vega, LisMath, Instituto Superior Técnico. Mirror Symmetry is a conjecture that suggests a connection between the structures of two mirror manifolds. In this talk, we will present an introduction to this symmetry, first through the Strominger-Yau-Zaslow conjecture and then in more general terms. Finally, we will mention the origin of Mirror Symmetry in the context of Topological Strings and we will make some comments on the topological A and B models. Martí Rosselló, LisMath, Instituto Superior Técnico. Localization in supersymmetric quantum field theories. Supersymmetric localization is an effective technique to obtain exact results in certain supersymmetric quantum field theories. It can be seen as an extension of the localization formula of equivariant cohomology. A brief introduction to both topics will be given.
Stefano Cremonesi, An Introduction to Localisation and Supersymmetry in Curved Space, Ninth Modave Summer School in Mathematical Physics, 2013. Localization techniques in quantum field theories. Special volume. M. Atiyah and R. Bott, The Moment map and equivariant cohomology, Topology 23 (1984) 1-28. E. Witten, Mirror manifolds and topological field theory. Paulo Rocha, LisMath, Faculdade de Ciências. An introduction to PT-Symmetric Quantum Theory. Traditionally in quantum mechanics it is assumed that the Hamiltonian must be Hermitian in order to obtain real energy levels and unitary time evolution. Here we will show that the requirement of Hermiticity may be replaced by space-time reflection (PT-symmetry) without losing any of the essential physical features of quantum mechanics. In this seminar we will give an introduction to PT-symmetric quantum theory and work with some examples. Carl M. Bender, Introduction to PT-Symmetric Quantum Theory. Carl M. Bender and Javad Komijani, Painlevé Transcendents and PT-Symmetric Hamiltonians. Dorje C. Brody, Consistency of PT-symmetric quantum mechanics. Maximilian Schwick, LisMath, Instituto Superior Técnico. Resurgence is a method used to solve differential equations with a wide range of applications. It is based on the so called alien calculus. The talk will give a brief insight on what resurgence is used for. Then, via example, a short introduction to alien calculus is given. D. Sauzin, Introduction to 1-Summability and Resurgence, in Divergent Series, Summability and Resurgence I: Monodromy and Resurgence, Lec. Notes Math. 2153 (2016). Carllos Holanda, LisMath, Instituto Superior Técnico. Applications of ergodic theory to number theory. Ergodic theory can be described as the study of measurable maps and flows preserving a certain measure. Some emphasis is given to the study of the recurrence properties and stochastic properties of the dynamics. It turns out that there are many nontrivial applications of ergodic theory to number theory. As an illustration, we shall consider fractional parts of polynomials and continued fractions. L. Barreira, Ergodic Theory, Hyperbolic Dynamics and Dimension Theory, Springer, 2012. H. Weyl, Ueber die Gleichverteilung von Zahlen mod. Eins, Math. Ann. 77 (1916), 313-352. Miguel Duarte, LisMath, Instituto Superior Técnico. Singularity theorems of Hawking and Penrose. Singularity theorems in General Relativity: proof, relevance, and open questions. S. W. Hawking, R. Penrose, The singularities of gravitational collapse and cosmology, Proc. Roy. Soc. Lond. A 314, 529-548 (1970). S. W. Hawking and G. Ellis, The large scale structure of space-time, Cambridge University Press, 1995. J. Senovilla, D. Garfinkle, The 1965 Penrose singularity theorem. J. Senovilla, Singularity theorems in General Relativity: achievements and open questions. Sílvia Reis, LisMath, Faculdade de Ciências, Universidade de Lisboa. Generically Stable Types and Banach Spaces. We discuss the notion of generically stable types in the framework of dependent theories in continuous first order logic. We will also mention some applications of this framework to structures arising in functional analysis, Banach spaces in particular. Fábio Silva, LisMath, Faculdade de Ciências, Universidade de Lisboa. Patience Sorting monoids and their combinatorics. Monoids arising from combinatorial objects have been intensively studied in recent years. 
Important examples include the plactic, the sylvester, the Chinese, the hypoplactic, the Baxter, and the stalactic monoids, which are, respectively, associated to the following combinatorial objects: Young tableaux, binary trees, Chinese staircases, quasi-ribbon tableaux, pairs of twin binary trees, and stalactic tableaux. In this talk we present two monoids which arise in a similar way, the left Patience Sorting monoid (lPS monoid), also known in the literature as the Bell monoid, and the right Patience Sorting monoid (rPS monoid), which are, respectively, associated to lPS tableaux and rPS tableaux. We also discuss the cyclic shift graph of the finitely ranked rPS monoids and the diameter of their connected components. Juan Pablo Quijano, LisMath, Instituto Superior Técnico. Sheaves and functoriality of groupoid quantales. This talk has two main aims, one being the study of functoriality of groupoid quantales, which is accomplished in the étale case (in a sense completing the previously ongoing program concerning quantales of étale groupoids), and the other being to provide steps for addressing a similar program for quantales of non-étale groupoids, in this case studying sheaves for a suitable subclass of open groupoids, namely those with "étale covers". Pedro Pinto, LisMath, Faculdade de Ciências, Universidade de Lisboa. The Bounded Functional Interpretation and Proof Mining. Proof mining is the research program that aims to analyse proofs of mathematical theorems in order to extract hidden quantitative information — such as rates of convergence, rates of metastability and rates of asymptotic regularity. Proof theoretical tools like Kohlenbach's monotone functional interpretation (), a variant of Gödel's Dialectica, are of standard use. A newer functional interpretation was introduced by Ferreira and Oliva in 2005 (), dubbed the bounded functional interpretation (BFI). The focus of my research was a better understanding of the BFI in the context of proof mining. I will show a general technique that allows the elimination of weak sequential compactness arguments in the analysis of certain types of proofs. It also gives a better understanding of previous quantitative results by Kohlenbach () where this argument was already eliminated. This technique was also employed to produce a first quantitative version of Bauschke's theorem (). Other results, in the context of the proximal point algorithm (, ), were also analysed with the BFI and their first quantitative versions were obtained. These results are new and the first practical application of the BFI in the proof mining program. Kohlenbach, Ulrich. Applied proof theory: proof interpretations and their use in mathematics. Springer Science & Business Media, 2008. Ferreira, Fernando, and Paulo Oliva. Bounded functional interpretation. Annals of Pure and Applied Logic 135.1-3 (2005): 73-112. Kohlenbach, Ulrich. On quantitative versions of theorems due to FE Browder and R. Wittmann. Advances in Mathematics 226.3 (2011): 2764-2795. Bauschke, Heinz H. The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. Journal of Mathematical Analysis and Applications 202.1 (1996): 150-159. H. K. Xu, Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1) (2002): 240-256. Boikanyo, Oganeditse A., and G. Morosanu. Inexact Halpern-type proximal point algorithm. Journal of Global Optimization 51.1 (2011): 11-26. Hillal M. Elshehabey, LisMath, Instituto Superior Técnico.
Mathematical Modelling and Numerical Simulation of an Anaerobic Digester. Anaerobic digestion is a bacterial process, carried out in the absence of oxygen, used to convert the organic fraction of large volumes of slurries and sludge into biogas and a digested product. The objective of this work is to perform a numerical modeling of the fluid dynamics process inside an anaerobic digestion tank and numerical simulations of the model, which might indicate properly sized extra piping and pumping systems, in order to minimize the deposition of inert materials. This research is being developed within a consulting project for Valorlis - Valorização e Tratamento de Resíduos Sólidos, SA. In this seminar, we begin by presenting the mathematical model which describes the behavior of the pseudo-plastic fluid in the tank, where parameters such as temperature and total solids content are compatible with several experimental cases reported in the literature and have been validated by Valorlis. The influence of such parameters on the fluid behavior will be discussed in simpler, classical geometries. Following , we propose alternative conditions for outflow. The benefits of using the directional do-nothing boundary condition compared with the classical one will be presented for the proposed non-Newtonian model and for some benchmark problems, including a comparison with the Newtonian model. M. Braack and P. B. Mucha, Directional do-nothing condition for the Navier-Stokes equations, Journal of Computational Mathematics, 32, No.5 (2014), 507-521. Filipe Gomes, LisMath, Faculdade de Ciências, Universidade de Lisboa. Supercharacter Theories and Multiplicative Ramification Graphs. Supercharacter theories are generalizations of the usual character theory of a group. In this talk, we construct graded graphs using restriction and superinduction of supercharacters and use them to determine the extreme supercharacters of direct limits of certain groups. We mention the infinite unitriangular group as a particularly important example of this construction. João Dias, LisMath, Faculdade de Ciências, Universidade de Lisboa. Supercharacters for algebra groups and their geometric relations. How does the supercharacter theory behave with respect to change of field (i.e., finite field extensions)? Does there exist an object that contains all the supercharacter theories for all changes of field? If the answer to the second question is positive, does there exist a group and a supercharacter theory that carry the information given by that object? In this talk I will give a brief introduction to supercharacter theory and give the answers to the questions above. Alexandra Symeonides, LisMath, Faculdade de Ciências, Universidade de Lisboa. Invariant and quasi-invariant measures for Euler equations. We will discuss how invariant (or quasi-invariant) probability measures can be used to show existence of statistical solutions for the two-dimensional Euler equation (or a slight modification of it), both in the periodic and non-periodic case. For initial data in the support of the measures, these solutions are globally defined in time and they are unique. This is joint work with Ana Bela Cruzeiro (IST-UL). Pedro Oliveira, LisMath, Instituto Superior Técnico. Cosmic no-hair in spherically symmetric black hole spacetimes. We analyze in detail the geometry and dynamics of the cosmological region arising in spherically symmetric black hole solutions of the Einstein-Maxwell-scalar field system with a positive cosmological constant.
More precisely, we solve, for such a system, a characteristic initial value problem with data emulating a dynamic cosmological horizon. Our assumptions are fairly weak, in that we only assume that the data approaches that of a subextremal Reissner-Nordström-de Sitter black hole, without imposing any rate of decay. We then show that the radius (of symmetry) blows up along any null ray parallel to the cosmological horizon ("near" $i^+$), in such a way that $r=+\infty$ is, in an appropriate sense, a spacelike hypersurface. We also prove a version of the Cosmic No-Hair Conjecture by showing that in the past of any causal curve reaching infinity both the metric and the Riemann curvature tensor asymptote those of a de Sitter spacetime. Finally, we discuss conditions under which all the previous results can be globalized.
CommonCrawl
Can someone show mathematically how gimbal lock happens when doing matrix rotation with Euler angles for yaw, pitch, roll? I'm having a hard time understanding what is going on even after reading several articles on Google. Is the only work-around to use quaternions?

"Euler Angles" you can think of as a function $(S^1)^3 \to SO_3$ or $\mathbb R^3 \to SO_3$. The derivative of this function does not always have rank 3, so you have degenerate submanifolds where the function is many-to-one. In this special case that's called "gimbal lock". One formalism that avoids this is quaternions. You can of course use other formalisms, and many other formalisms are naturally related to the quaternion version, so people tend to gravitate to the quaternion version. One version that's closely related to quaternions would be to use the exponential map for the unit quaternion group. But this also has "gimbal lock", though of a different kind. It does have the rather appealing interpretation as rotations about an arbitrary axis -- this is perhaps more useful if you're only interested in rotations that differ from the identity matrix (or some given matrix) by a small amount; they're very natural coordinates on "small scales" in $SO_3$. Are there any special properties you'd like for coordinates on $SO_3$? That might give a sense for where you want to go with this. $\theta_3$ would be the roll, $\theta_2$ the pitch and $\theta_1$ the yaw. To relate my coordinates to the picture, $(1,0,0)$ is the direction the plane is pointing. $(0,1,0)$ is the direction of the left wing. $(0,0,1)$ is the direction of the yellow axis sticking out of the top of the plane. So in this case, one occurrence of "gimbal lock" is $\theta_2 = 0$. In the Tait-Bryan variant it would be when the aeroplane is either pointing straight up or down, which is $\theta_2 = \pm \pi/2$.

This refers to Ryan's answer, but was too long for a comment. If you only care about the beginning and end, you can express any rotation with the Euler angles. But if you want a smooth transition between the beginning and the end, then with some starting orientations you have a problem, and this is called gimbal lock. But say we start from a position pointing directly upwards, so our object is pointing to (x,y,z) = (0,0,1), and we want to rotate it down around the x-axis by 45 degrees. Now we have to turn yaw (or heading) 90 degrees and lower the pitch 45 degrees (and perhaps roll -90 degrees, depending what orientation we want, but my dots cannot illustrate this). We get the black dots in the figure below, whereas a straight line made of 5-degree turns around the x-axis is plotted with the white dots. We can still reach the same endpoint, but we cannot start to turn directly into the "correct" or "straight" direction. You could redefine the Euler angles so that the first rotation is not around the z-axis but around the x-axis, and then this situation could be remedied. But then a similar problem could happen with an object that is initially pointing in the x-direction. However you reorder the three Euler rotations, there is always some orientation from which you cannot start turning directly towards a certain direction. Here is also a YouTube video on the topic.
What this means is that we rotated the X components by 90°, which happens to be perpendicular (orthogonal) to both the Y and Z axes, as is evident from the fact that $\cos(90°) = 0$. Then we rotate again by 90° along the Y axis, and once again the Y axis is perpendicular to both the X and Z axes; now we have two axes of rotation that are aligned, so when we try to rotate in the third dimension of space we have lost a degree of freedom, because we can no longer distinguish between X and Y: they will both rotate simultaneously and there is no way to separate them. This can be seen from the calculations that were done with the matrices. It may not be completely evident now, but if you were to do all 6 permutations of the order of axis rotations you would see the pattern emerge. These kinds of rotations are called Euler angles. It also doesn't matter what combination of axes you rotate with, because it will happen with every combination whenever two axes of rotation become parallel. It may not seem quite apparent from the numbers exactly what is causing the gimbal lock, but the results of the transformations should give you some insight into what is going on. It might be easier to visualize than just by looking at the math, so I provided a link to a good video below. Now, if you are interested in proofs then you have plenty of work ahead of you, for there are also some other facts involved, such as the cosine of the angle between two vectors being equal to the dot product of those vectors divided by their magnitudes, as well as the rules of calculus on the trigonometric functions, especially $\sin$ and $\cos$. There is another interesting fact that I think leads to the reasoning behind gimbal lock, but that is a topic for another day, as it would merit its own page. Do forgive me if the math formatting isn't perfect; I am new to this particular Stack Exchange site and I'm learning the math tags and formatting as I go.
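To complement the matrix walk-through above, here is a small NumPy sketch (using the z-y-x yaw-pitch-roll convention from the earlier answer) showing that once the middle angle is 90°, yaw and roll only enter through their difference, so two different Euler-angle triples produce exactly the same rotation matrix:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# With the middle (pitch) angle at 90 degrees, yaw and roll act about the same
# physical axis, so the composite rotation depends only on yaw - roll.
yaw1, roll1 = 0.7, 0.2
yaw2, roll2 = 0.9, 0.4        # different angles, same difference 0.5
R1 = Rz(yaw1) @ Ry(np.pi / 2) @ Rx(roll1)
R2 = Rz(yaw2) @ Ry(np.pi / 2) @ Rx(roll2)
print(np.allclose(R1, R2))    # True: one degree of freedom has been lost
```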
CommonCrawl
Let $V$ be the set of all $n\times n$ matrices over a field $F$. Let $A$ be a fixed element of $V$. Define a linear operator $T$ on $V$ by $T(B)=AB$. I am trying to show that if $\lambda$ is an eigenvalue of $A$, then $\lambda$ is also an eigenvalue of $T$. So suppose $Av=\lambda v$ for some $v\neq 0$ in $V$ and $\lambda\in F$. So I'd like to prove the existence of a matrix $B$ such that $T(B)=AB=\lambda A$, or equivalently, show that $T-\lambda I_V$ is not invertible (or injective or surjective). But I am not sure how to proceed from here. What can I do?

Take the matrix such that all of its columns are equal to $v$. An eigenvalue of $T$ is some number $\lambda$ such that $T(B)=\lambda B$ for some nonzero $B$. Thus, we need to find a matrix $B$ such that $AB=\lambda B$, not $AB=\lambda A$. We can write a matrix as a row vector of column vectors, i.e. $B=[b_1,b_2,b_3,\ldots]$. Matrix multiplication acts on those columns independently: $AB=[Ab_1,Ab_2,Ab_3,\ldots]$. If $b_i=v$ for all $i$, then $AB=[\lambda v, \lambda v, \lambda v, \ldots]=\lambda [v,v,v,\ldots]=\lambda B$. So this shows that $[v,v,v,\ldots]$ is an eigenvector with eigenvalue $\lambda$.
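A quick numerical sanity check of this argument, with an arbitrary example matrix (any matrix with a known eigenpair works the same way):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, vecs = np.linalg.eig(A)
v = vecs[:, 0]                        # eigenvector for eigenvalue lam[0]

B = np.column_stack([v, v])           # every column equal to v, so B != 0
print(np.allclose(A @ B, lam[0] * B)) # True: T(B) = AB = lambda * B
```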
CommonCrawl
Gowers norms are useful tools in additive combinatorics in measuring additivity of a subset $A$ in an abelian group $Z$ - how close $A$ is to being a subgroup of $Z$. In this talk, I demonstrate that there are uses of Gowers norms in Euclidean spaces. In particular I show that a measurable subset $E \subset \mathbb R^n, |E| = 1$ must assume convexity or near convexity if its $k$th Gowers norm is maximized or nearly maximized, respectively. Similar statements can be made about a measurable function in $\mathbb R^n$. This talk is based on recent papers of Christ, Eisner and Tao, and my current work.
CommonCrawl
When are Ehrhart functions of compact convex sets polynomials? Given a lattice $L$ and a subset $P\subset \mathbb R^d$, we define for each positive integer $t$ $$f_P(L,t)=|tP\cap L|,$$ the number of lattice points in $tP$. Let's say $P$ is nice if $f_P(L,t)$ is a polynomial. We know that if $P$ is a convex polytope with vertices in $L$ then $P$ is nice and $f_P(L,t)$ is its Ehrhart polynomial. My question is about some converse of this statement. Are there some mild assumptions (for example convexity etc.) on $P$, under which if $f_P(L,t)$ is a polynomial with respect to at least some lattice $L$ then $P$ must be a convex polytope? Or a weaker question: Is any polynomial arising this way also the Ehrhart polynomial of some polytope? P.S. I haven't thought much about this question so I apologize if it is well-known or it has an obvious negative answer. Also feel free to retag. Could the following be true? It seems more in line with the question. Let $P$ be a compact convex $n$-dimensional set in $\mathbb R^n$. Suppose that the Ehrhart function $f_P(t)$ is a polynomial for positive integers $t$. Then $P$ is a translation of a rational polytope. Edit: I would also be interested in a slightly weaker statement: Suppose a convex set has positive curvature almost everywhere; must the Ehrhart function necessarily be non-polynomial? For example, given an arbitrary lattice, what would be the easiest way to see that a circle doesn't have a polynomial Ehrhart function? Just to remark that for a rational polytope whose vertices are not integral, the function $f_P(t)$ could still be a polynomial (and not just a quasipolynomial). A large class of examples is provided by degenerations of flag varieties $G/B$. There are many degenerations, each corresponding to a representation of the longest word $w\in W$ in the Weyl group as the shortest product of standard reflections. All of these correspond to rational polytopes. They all have the same Ehrhart function. Some of them are integral but others are not. For more details, see R. Chirivì, LS algebras and application to Schubert varieties, Transform. Groups 5 (2000), no. 3, 245–264, or Alexeev-Brion, Toric degenerations of spherical varieties. I believe that the strong form of the conjecture is false. In lieu of a simple counterexample, let me point you towards a centrally symmetric 10-gon $\hat P$ in arXiv:0801.2812, Figure 6. It is a bit of a mess to explain exactly what it is, but it has something to do with the Picard lattice of a toric DM stack. It need not be rational or a translate of rational. (1) It is centrally symmetric. (2) The midpoints of all the sides are lattice points. As a result, opposite sides are lattice translates of each other. As a result, generic translates of $\hat P$ have the same number of lattice points. Indeed, as you move the polytope in a plane along a general curve, as soon as a point appears on one side of it, another point exits from the opposite side. This implies that the opposite sides of $\hat P$ glue together to give a "no-gaps" cover of the torus $\mathbb R^2/L$ (the preimage of a generic point has the same cardinality $k$). Then if one takes a $t$-multiple of it, one gets a "no-gaps" cover of $\mathbb R^2/tL$, and will thus have $kt^2$ points in $t(\hat P+ c)$ for a generic shift $c$. I assume that this construction can be simplified to give something more explicit and palatable, so long as the property that the opposite sides are lattice translates of each other is satisfied.
It clearly requires flat sides to be able to glue them together on the torus, so this idea is not going to work for the positive curvature problem. If you want to dive into some Ehrhart theory then I highly recommend you pick up Computing the Continuous Discretely: Integer-Point Enumeration in Polyhedra by Matthias Beck and Sinai Robins.
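Just to illustrate the definition $f_P(L,t)=|tP\cap L|$ numerically (not part of the original discussion), here is a brute-force count for the standard triangle $P=\mathrm{conv}\{(0,0),(1,0),(0,1)\}$ with $L=\mathbb Z^2$, whose Ehrhart polynomial is $(t+1)(t+2)/2$:

```python
def f_triangle(t):
    """Count lattice points in t*P for P = conv{(0,0), (1,0), (0,1)}."""
    return sum(1 for x in range(t + 1) for y in range(t + 1) if x + y <= t)

print([f_triangle(t) for t in range(1, 7)])
# [3, 6, 10, 15, 21, 28]  ==  [(t + 1) * (t + 2) // 2 for t in range(1, 7)]
```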
CommonCrawl
If $X \times X$ is normal, then is $X \times X \times X$ normal? I am looking at some topological dimension theory for product spaces, and in trying to construct a certain type of counterexample it's become relevant to consider the question in the title above. I am interested in finding a normal space $X$ whose product with itself is eventually non-normal, but not immediately. It's not actually important for my application that it happens in three steps as opposed to more. An alternative question would be: Is there a normal space $X$ with $X \times X = Y$ normal, but $Y \times Y$ not normal? As mentioned in a comment below, if we assume that $X$ is a compact Hausdorff space and that $X \times X \times X$ is completely normal, then $X$ is metrizable. Thus it stands to reason that a compact counterexample may be harder (if not impossible) to construct. The author in the linked paper wonders aloud if the complete normality of $X \times X$ is sufficient for the metrizability of $X$, so it may also be advisable to avoid cases where $X \times X$ is completely normal. Przymusinski, Teodor C., Normality and paracompactness in finite and countable Cartesian products, Fundam. Math. 105, 87-104 (1980). ZBL0438.54021. In the paper above it is shown that, for each $m$, there is a space $X$ such that $X^n$ is normal (collectionwise normal) if and only if $n < m$. In particular, we can construct a (normal) space such that the failure of normality of its powers happens first at any prescribed finite power.
CommonCrawl
Jaiteh M, Taly A, Hénin J. 2016. Evolution of Pentameric Ligand-Gated Ion Channels: Pro-Loop Receptors. Plos One. 11:e0151934. Moraga-Cid G., Sauguet L., Huon C., Malherbe L., Girard-Blanc C., Petres S., Murail S., Taly A, Baaden M., Delarue M. et al. 2015. Allosteric and hyperekplexic mutant phenotypes investigated on an α1 glycine receptor transmembrane structure. Proc. Natl. Acad. Sci. U.S.A. 112:2865–2870. Chakraborty D, Taly A, Sterpone F. 2015. Stay Wet, Stay Stable? How Internal Water Helps the Stability of Thermophilic Proteins. The Journal of Physical Chemistry B. 119:12760–12770. Nawrocki W.J, Tourasse N.J, Taly A, Rappaport F., Wollman F.A. 2015. The plastid terminal oxidase: its elusive function points to multiple contributions to plastid physiology. Annu. Rev. Plant Biol. 66:49–74. Doutreligne S., Gageat C., Cragnolini T., Taly A, Pasquali S., Derreumaux P., Baaden M. 2015. UnityMol: interactive and ludic visual manipulation of coarse-grained RNA and other biomolecules. Virtual and Augmented Reality for Molecular Science (VARMS@IEEEVR), 2015 IEEE 1st International Workshop on. :1–6. Taly A, Hénin J, Changeux J-P, Cecchini M. 2014. Allosteric regulation of pentameric ligand-gated ion channels: An emerging mechanistic perspective. Channels. 8:350–360. Garret M., Boue-Grabot E., Taly A. 2014. Long distance effect on ligand-gated ion channels extracellular domain may affect interactions with the intracellular machinery. Commun. Integr. Biol. 7:e27984. Lemoine D, Habermacher C, Martz A, Méry P-François, Bouquier N, Diverchy F, Taly A, Rassendren François, Specht A, Grutter T. 2014. Optogating a powerful approach to control an ion-channel gate. PURINERGIC SIGNALLING. 10:762–762. Chaumont S, André C, Perrais D, Boué-Grabot E, Taly A, Garret M. 2013. Agonist-dependent endocytosis of $\gamma$-aminobutyric acid type A (GABAA) receptors revealed by a $\gamma$2 (R43Q) epilepsy mutation. J. Biol. Chem. 288:28254–28265. Calimet N., Simoes M., Changeux J.P, Karplus M., Taly A, Cecchini M. 2013. A gating mechanism of pentameric ligand-gated ion channels. Proc. Natl. Acad. Sci. U.S.A. 110:E3987–3996. Jiang R., Taly A, Grutter T. 2013. Moving through the gate in ATP-activated P2X receptors. Trends Biochem. Sci. 38:20–29. Taly A. 2013. Novel approaches to drug design for the treatment of schizophrenia. Expert Opin. Drug Discovery. 8:1285–1296. Lemoine D., Habermacher C., Martz A., Mery P.F, Bouquier N., Diverchy F., Taly A, Rassendren F., Specht A., Grutter T. 2013. Optical control of an ion channel gate. Proc. Natl. Acad. Sci. U.S.A. 110:20813–20818. Taly A, Charon S. 2012. $\alpha$7 nicotinic acetylcholine receptors: a therapeutic target in the structure era. Curr. Drug Targets. 13:695–706. Lemoine D, Jiang R, Taly A, Chataigneau T, Specht A, Grutter T. 2012. Ligand-gated ion channels: new insights into neurological disorders and ligand recognition. Chem. Rev. 112:6285–6318. Jiang R., Taly A, Lemoine D., Martz A., Cunrath O., Grutter T. 2012. Tightening of the ATP-binding sites induces the opening of P2X receptor channels. Embo J. 31:2134–2143. Russo P., Taly A. 2012. α7-Nicotinic acetylcholine receptors: an old actor for new different roles. Curr. Drug Targets. 13:574–578. Charon S., Taly A, Rodrigo J., Perret P., Goeldner M. 2011.
Binding modes of noncompetitive GABA-channel blockers revisited using engineered affinity-labeling reactions combined with new docking studies. J. Agric. Food Chem.. 59:2803–2807.
CommonCrawl
What has to be the general and final formula for calculating the effective memory access time, taking into consideration the $\alpha$-level page table, the TLB hit ratio $h$, miss ratio $m$, memory access time $M$, TLB access time $T$, page fault probability $p$ and page fault service time $x$? There seem to be so many formulae, each different from the others depending on the question. If you learn the meaning of the question, you won't need to remember a formula; try to understand what type of memory system is there. The reason is that even for this question there are many formulae: if the access time is absolute, one formula; if the access time is relative, another formula; if memory access is parallel, yet another formula. How much will you remember, and what are the chances that you won't forget it or make a mistake? Even when there are two questions of the same type, sometimes the approach to solving one differs from the other, which feels like the solution is intentionally approached that way just to reach the desired answer.

Consider the following information about a hypothetical processor. Assume the cache is physically addressed. TLB hit rate: 95%, access time 1 cycle; cache hit rate: 90%, access time 1 cycle; when the TLB and cache both miss, page fault rate 1%; TLB access and ... 1 + .1(5 + .01(100)) = 2.675. Could someone please point out the flaw in the logic?

Consider a memory system consisting of a single external cache with an access time of 20 ns and a hit rate of 0.92, and a main memory with an access time of 60 ns. Now we add virtual memory to the system. The TLB is implemented internal to the ... ratio is 100% and the page table hit ratio is 50%. What is the effective memory access time of the system with virtual memory?
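For what it's worth, one commonly used convention — and only one of several, since texts differ on whether the TLB lookup overlaps the memory access, whether page faults can occur on the TLB-hit path, and whether the faulting access is repeated after servicing — assumes a page fault is detected only after a TLB miss and a full $\alpha$-level page-table walk, and that the access then completes from memory:

$$\text{EMAT} \;=\; h\,(T+M) \;+\; m\,\big(T + \alpha M + (1-p)\,M + p\,(x+M)\big) \;=\; T + M + m\,(\alpha M + p\,x).$$

Under different assumptions (parallel TLB and memory access, faults charged on every reference, etc.) the formula changes accordingly, which is exactly why so many variants appear in different questions.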
CommonCrawl
have exactly one solution, no solutions, or an infinite number of solutions. Give the form of the solutions for the last case. do not intersect in a single point. Show that for this value of $\alpha$ there is no point common to all these planes unless $\beta = 3$. For the case of the three planes having a common line of intersection, find its equation in Cartesian form. and construct a unitary matrix $U$ such that $U^\dagger H U = \Lambda$, where $\Lambda$ is a real diagonal matrix. is an ellipsoid with semi-axes of lengths $2, 1$ and $0.5$. Find the direction of its longest axis. so we conclude that the semi-axis lengths are $2, 1, 0.5$, and for the length corresponding to $2$, the eigenvector is along $(1, 1, 1)^T$.
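A small numerical illustration of the eigenvalue/semi-axis relationship used above: for an ellipsoid $x^T Q x = 1$, the semi-axis lengths are $1/\sqrt{\lambda_i}$ along the corresponding eigenvectors of $Q$. The matrix below is hypothetical, built so that the semi-axes are 2, 1, 0.5 with the longest axis along $(1,1,1)$; it is not the (lost) matrix from the original problem.

```python
import numpy as np

# Hypothetical symmetric Q with eigenvalues 1/4, 1, 4 and eigenvector (1,1,1)/sqrt(3)
# for the smallest eigenvalue, so the ellipsoid x^T Q x = 1 has semi-axes 2, 1, 0.5.
v1 = np.array([1, 1, 1]) / np.sqrt(3)
v2 = np.array([1, -1, 0]) / np.sqrt(2)
v3 = np.array([1, 1, -2]) / np.sqrt(6)
V = np.column_stack([v1, v2, v3])
Q = V @ np.diag([0.25, 1.0, 4.0]) @ V.T

eigvals, eigvecs = np.linalg.eigh(Q)
print(1 / np.sqrt(eigvals))        # semi-axis lengths: [2.  1.  0.5]
print(eigvecs[:, 0])               # direction of the longest axis ~ (1,1,1)/sqrt(3)
```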
CommonCrawl
J. Peradze. On the accuracy of an iteration method when solving a system of Timoshenko equations. Seminar of I. Vekua Institute of Applied Mathematics. 2008წ. Tbilisi, v.34, pp.4-10. J. Peradze, B. Dzagania, G. Papukashvili. On the accuracy of solution approximation with respect to a spatial variable for a nonlinear integro-differential equation. Reports of enlarged sessions of seminar of I.Vekua Institute of Applied Mathematics of Tbilisi State University. 2010წ. v.24, 108-112. Sh. Gagoshidze, A. Gogoladze, M. . ON THE ACTION OF LONGITUDINAL WAVES ON BANK SLOPES OF THE SOIL CHANNELS. Scientific-Technical Journal HydroEngineering . 2013წ. # 1-2 (15-16) გვ. . 57-61. I. Margalitadze, Vazinram F., Safi M., Rasti R.,, Mammadov A. On the algorithm of articial neural networks (ANN) for application in dam monitoring. GEN . 2005წ. #1.p.65-67. D. Natroshvili. On the alternative version of proving the existence theorems in exterior problems of the steady state oscillation theory . Tr. In-ta Prikl. Mat. Tbilis. Univ.. 1981წ. 10, 90-98. D. Ugulava, D. Zarnadze. On the application of Ritz's extended method for some ill-posed problems. Reports of enlarged session of the seminar of I.Vekua Inst. of Appl. Math. 2007წ. v.21, 60-63. D. Zarnadze, D.Ugulava. On the application of Ritz's extended method to approximate solution of inverse and computing tomography problems. Abstracts of International Conferense celebrating the 100th birth aniversary of I.Vekua organized by ISAAC and IUTAM. 2007წ. 20-27, Tbilisi, Georgia. . G. Berikelashvili, D.Devadze. On the application of the net method to the solution of one class of problems of the theory of optimal control (Russian). Current problems of applied mathematics and cybernetics ( Tbilisi University). 1991წ. pp. 53-56. D. Zarnadze, D.Ugulava. On the application Ritz's extended method for some ill-posed problems. (English). Reports of enlarged session of I.N.Vekua Inst. of applied Math. . 2007წ. v.21, N3, 60-63.. J. Peradze. On the approximate solution of a Kirchhoff type static beam equation. Trans. A. Razmadze Math. Inst.. 2016წ. v.170, issue 2, 266-271. J. Peradze. On the approximate solution of a nonlinear system of ordinary differential equations. Reports of enlarged sessions of seminar of I.Vekua Institute of Applied Mathematics of Tbilisi State University. 2012წ. v. 26, 57-60. G. Berikelashvili, G.D.Pavlenishvili. On the approximate solution of some systems of nonlinear Volterra integral equations of the second kind (Russian). Soobshch. Akad. Nauk Gruz. SSR. 1980წ. V.99, no.2, 313-316. J. Peradze. On the approximate solution of the Kirchhoff-Bernsein nonlinear wave equation. ISAAC Conference I.Vekua-100 Applied Mathematics, Informatics and Mechanics. 2007წ. Tbilisi, v.12, no.2, pp.87-92. D. Ugulava. On the approximation of functions in the Hardy spaces. Bulletin of the Georgian Academy of Sciences. 2000წ. v.162, No. 2, 209-212. D. Ugulava. On the approximation of periodic functions of many variables. Proceedings of Comp. Center Georg. Ac. Sci., (in Russian). 1983წ. 23:1, 101-110. ნ. ღონღაძე, N. Kutsiava, I. Legashvili. On the assessment of envizometal pollution with industrial organic pollutants. Georgian enj. News. . 2008წ. 4, 2008, 136-138. N. Kutsiava, Chankseliani A.B., Legashvili I.T., Gongadze N.P. On the assessment of inviron-mental pollution with industrial organic pollitants . Georgian Engineering News. №4 . 2008წ. c. 136-138. Z. Kiguradze, T. Jangveladze. 
On the Asymptotic Behavior as $t\to\infty$ of Solutions of One Nonlinear Integro-Differential Parabolic Equation Arising in Penetration of a Magnetic Field into a Substance. Rep. Enl. Sess. Sem. of I.Vekua Inst. Appl. Math. 2005წ. V.20, N1, p.8-11. Z. Kiguradze, T. Jangveladze. On the Asymptotic Behavior of Solution for One System of Nonlinear Integro-differential Equations. Rep. Enl. Sess. Sem. of I.Vekua Inst. Appl. Math. 1999წ. V.14, N1, p.35-38. J. Peradze. On the asymptotic property of probabilstic estimates of the iteration process of linear equation solution. Candidate of Phys.-Math.Sciences, Dissertation. 1969წ. Tbilisi, 88 p. (Russian).
CommonCrawl
Normal subgroup $N$ of a $p$-group $G$ intersects $Z(G)$ nontrivially: what is wrong with the following trivial argument? If the answer were this simple, I don't think my algebra professor would ask it, so what is wrong with the above argument? $Z(N)$ and $Z(G)$ need not be at all related. If $N$ is abelian, then $Z(N) = N$, but the center of $G$ might intersect $N$ trivially. For example, the center of $S_3\times \mathbb Z_3$ intersects a subgroup of order $2$ in the first factor trivially. In general, $Z(G)$ is contained in the centralizer of $N$, but not necessarily in the center of $N$.
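To spell out the example from the answer (a small worked check; the particular subgroup is my choice for illustration):

Let $G = S_3 \times \mathbb{Z}_3$. Then
$$Z(G) \;=\; Z(S_3)\times Z(\mathbb{Z}_3) \;=\; \{e\}\times \mathbb{Z}_3 .$$
Take $H = \langle\, ((1\,2),\,0) \,\rangle \le G$, a subgroup of order $2$ sitting inside the first factor. Since $H$ is abelian, $Z(H) = H \neq \{e\}$, and yet
$$H \cap Z(G) = \{(e,0)\}.$$
So knowing $Z(H)$ (or that $H$ is abelian) tells you nothing about $H \cap Z(G)$. Note that $G$ here is not a $p$-group and $H$ is not normal; that is fine, because the example is only meant to show that $Z(H)$ and $Z(G)$ are unrelated in general.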
CommonCrawl
This paper presents a high-speed sensing and noise cancellation technique for large touch screens, which is called FDCS (Frequency Division Concurrent Sensing). Most conventional touch screen detection methods apply excitation pulses sequentially and analyze the sensing signals sequentially, and so are often unacceptably slow for large touch screens. The proposed technique applies sinusoidal signals of orthogonal frequencies simultaneously to all drive lines, and analyzes the signals from each sense line in frequency domain. Its parallel driving allows high speed detection even for a very large touch screens. It enhances the sensing SNR (Signal to Noise Ratio) by introducing a frequency domain noise filtering scheme. We also propose a pre-distortion equalizer, which compensates the drive signals using the inverse transfer function of touch screen panel to further enhance the sensing SNR. Experimental results with a 23" large touch screen show that the proposed technique enhances the frame scan rate by 273% and an SNR by 43dB compared with a conventional scheme. H. R. Kim, Y. K. Choi, San-Ho Byun, Sang-Woo Kim, Kwang-Ho Choi, Hae-Yong Ahn, Jong-Kang Park, Dong-Yul Lee, Zhong-Yuan Wu, Hyung-Dal Kwon, Yong-Yeob Choi, Chang-Ju Lee, Hwa-Hyun Cho, Jae-Suk Yu, Myunghee Lee, "A Mobile-Display-Driver IC Embedding a Capacitive-Touch-Screen Controller System", in Proc. of ISSCC 2010, pp.114-116, San Francisco, 8 Feb. 2010. U. Y. Jang, H. W. Kim, T. W. Cho, H. G. Jang, S. W. Lee, "Architecture of Multi Purpose Touch Screen Controller with Self Calibration Scheme", in Proc. of IEEK Fall Conference 2013, pp.162-166, Seoul, Korea, Nov. 2013. H. C. Shin, S. H. Ko, H.J. Jang, I. H. Yun, and K. Y. Lee, "A 55dB SNR with 240Hz frame scan rate mutual capacitor 30$\times$24 touch-screen panel read-out IC using code-division multiple sensing technique," in Proc. of IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pp. 388-389, San Francisco, USA, Feb. 2013. J. S. Lee, D. H. Yeo, J. Y. Um, E. W. Song, J. Y. Sim, H. J. Park, S. M. Seo, M. H. Shin, D. H. Cha, H. S. Lee, "A 10-Touch Capacitive-Touch Sensor Circuit with the Time-Domain Input-Node Isolation", SID 2012 DIGEST, Vol.43, Issue 1, pp.493-496, June 2012. I. S. Yang and O. K. Kwon, "A Touch Controller Using Differential Sensing Method for On-Cell Capacitive Touch Screen Panel Systems", IEEE T-CE, Vol. 57, No.3, pp.1027-1032, Aug 2011. T. H. Hwang, W. H. Cui, I. S. Yang, and O. K. Kwon, "A Highly Area-Efficient Controller for Capacitive Touch Screen Panel Systems", IEEE T-CE, Vol. 56, No.2, pp.1115-1122, May 2010. M.G.A. Mohamed, U.Y Jang, I.C Seo, H. W. Kim, T. W. Cho, H. G. Jang, S. O. Lee, "Efficient Algorithm for Accurate Touch Detection of Large Touch Screen Panels", in Proc. of ISCE 2014, pp.243-244, Jeju, Korea, 22-25 Jun.2014. I. C. Seo, U. Y. Jang, M.G.A. Mohamed, T. W Cho, and H. W. Kim, H. K. Chang and S. O. Lee, "Voltage shifting Double Integration Circuit for High Sensing Resolution of Large Capacitive Touch Screen Panels", in Proc. of ISCE 2014, pp.241-242, Jeju, Korea, 22-25 Jun.2014. Chungbuk national university industry-academic cooperation foundation, Kortek corporation, "Touch screen detection apparatus using voltage-shifting double-sided integrator with automatic level calibration", Korea, patent application number 10-2014-0080811, Jun.30.2014. I. C. Seo, T. W. Cho, H. W. Kim, H. G. Jang, S. O. Lee, "Frequency Domain Concurrent Sensing Technique for Large Touch Screen Panels", in Proc. 
of IEEK Fall CONFERENCE 2013, pp.55-58, Seoul Korea, 23 Nov. 2013. ATMEL Corporation, "Touch Sensors Design Guide, 10620 D-AT42" Sept. 2004. S. H. Ko, H, C. Shin, J. M. Lee, H. J. Jang, K. Y. Lee, "Low Noise Capacitive Sensor for Multi-touch Mobile handset's applications", in Proc. of ASSCC 2010, pp.247-250, Beijing China, 8-10 Nov. 2010. K. S. Byun, B. W. Min, "Circuit Modeling and Analysis of Touch Screen Panel", KIEES, Vol 25, Num1, pp.47-52, Jan. 2014.
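As a rough illustration of the frequency-division idea described in the abstract above (not the authors' implementation; the line counts, frequencies, capacitance values and noise level below are all invented), the following sketch drives every row at its own orthogonal frequency, sums the contributions on a single sense line, and separates them again with an FFT:

```python
import numpy as np

fs = 100_000                 # sample rate [Hz]
n = 1000                     # samples per frame -> bin spacing fs/n = 100 Hz
t = np.arange(n) / fs

n_drive = 8
drive_bins = np.arange(5, 5 + n_drive)      # one FFT bin per drive line
drive_freqs = drive_bins * fs / n           # orthogonal over one frame

rng = np.random.default_rng(0)
coupling = 0.9 + 0.2 * rng.random(n_drive)  # mutual-capacitance weights per node
coupling[3] = 0.6                           # pretend a touch reduces the coupling at node 3

# One sense line sees the weighted sum of all drive tones at once, plus noise.
sense = sum(c * np.sin(2 * np.pi * f * t) for c, f in zip(coupling, drive_freqs))
sense += 0.05 * rng.standard_normal(n)

# Frequency-domain demodulation: each drive line's bin magnitude tracks its coupling.
spectrum = 2 / n * np.abs(np.fft.rfft(sense))
print(np.round(spectrum[drive_bins], 3))    # node 3 stands out with a reduced value
print(np.round(coupling, 3))
```

Because all drive lines are excited concurrently, one frame of the sense signal is enough to read out every node on that line, which is the source of the frame-rate gain claimed in the paper.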
CommonCrawl
I want to know if this problem can be verified or rejected. I tried to make a counterexample but I couldn't find one, and I wanted to prove it using the definition too, but I did not get anywhere. Is there a way to prove it, or is there a counterexample? The left-hand side of your inequality counts the distinct pairs from a size-$k$ set, while the right-hand side takes a partition into size-$x_i$ disjoint subsets, then counts the number of pairs that don't cut across partitions.
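The inequality under discussion is not shown above; from the answer it appears to be $\binom{k}{2}\ge \sum_i \binom{x_i}{2}$ whenever $x_1+\cdots+x_n = k$ with nonnegative integers $x_i$, and that reading is my assumption. A quick brute-force check of it over all partitions of small $k$:

```python
from math import comb

def partitions(n, max_part=None):
    """Yield all partitions of n as lists of positive parts."""
    if n == 0:
        yield []
        return
    if max_part is None:
        max_part = n
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield [p] + rest

for k in range(1, 13):
    ok = all(comb(k, 2) >= sum(comb(x, 2) for x in part)
             for part in partitions(k))
    assert ok, f"counterexample found for k={k}"
print("inequality holds for all partitions of k = 1..12")
```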
CommonCrawl
We are continuing the series on non-unique factorisation. For a handy table of contents, visit the Post Series directory. Here, $U(R)$ denotes the group of units of $R$. We have already seen in a ring with nontrivial idempotents like $\Z\times \Z$, a nontrivial idempotent $e$ will be satisfy $e\sim e$ and $e\approx e$, but $e\not\cong e$ because $e = ee$ and yet $e$ is not a unit and nonzero. Proof. Suppose $a\cong b$. Then $a\sim b$ and so $b\sim a$. If $a$ and $b$ are not both zero, write $a = sb, b = ta$. If $b = ra$ then $a = sra = s^2rb$. Since $a\cong b$, this implies that $s^2r$ is a unit and so $r$ is a unit. Hence $b\cong a$. Guess what? The relation $\cong$ is also transitive. Since the proof is similarly short I'll leave the proof to the reader. So, $\cong$ is just missing being reflexive for all rings to be an equivalence relation for all rings. If $\cong$ is an equivalence relation for a ring $R$, then we say that $R$ is presimplifiable. We introduced this type of ring last time. It's easy to see that if $\cong$ is an equivalence relation (equivalently, if it is reflexive) for a ring $R$, then $a\sim b$ in $R$ implies that $a\cong b$. Because $a\sim b\rightarrow a\approx b\rightarrow a\cong b$ in general, this means that presimplifiable rings are exactly those rings for which the concept of associates, strong associates, and very strong associates all coincide. In the next post, we look at some examples of commutative rings that are presimplifiable, and some that are not.
CommonCrawl
Аннотация: Let $\mathcal L$ be the class of locally compact abelian (LCA) groups. For certain subclasses $\mathcal S$ of $\mathcal L$, we obtain information about the groups $X\in\mathcal S$ such that the ring $E(X)$ of continuous endomorphisms of $X$ is commutative. The main results concern torsionfree groups, groups with splitting torsion subgroups and their duals. Ключевые слова и фразы: LCA groups, ring of continuous endomorphisms, commutativity.
CommonCrawl
Abstract: An $E_0$-semigroup of $B(H)$ is a one parameter strongly continuous semigroup of $*$-endomorphisms of $B(H)$ that preserve the identity. Every $E_0$-semigroup that possesses a strongly continuous intertwining semigroup of isometries is cocycle conjugate to an $E_0$-semigroup induced by the Bhat induction of a $CP$-flow over a separable Hilbert space $K$. We say an $E_0$-semigroup $\alpha$ is $q$-pure if the $CP$-subordinates $\beta$ of norm one (i.e. $\Vert\beta_t(I)\Vert = 1$ and $\alpha_t-\beta_t$ is completely positive for all $t \geq 0$) are totally ordered in the sense that if $\beta$ and $\gamma$ are two $CP$-subordinates of $\alpha$ of norm one, then $\beta \geq \gamma$ or $\gamma \geq \beta$. This paper shows how to construct and classify all $q$-pure $E_0$-semigroups induced by $CP$-flows over a finite-dimensional Hilbert space $K$ up to cocycle conjugacy.
CommonCrawl
From the discussion here, it seems that general Hochschild cohomology classes correspond to deformations where the deformation parameter can have nonzero degree. How can I interpret this geometrically? What is the "base space" of the deformation? What kind of object is it? In other words, what is the "Spec" of a graded ring or a graded algebra (e.g. $k[t]$ or $k[[t]]$ or $k[t]/(t^n)$ with the variable $t$ having some nonzero degree)? One possible answer is in the Toën-Vezzosi paper From HAG to DAG; they were themselves inspired by Ciocan-Fontanine and Kapranov (Derived Quot schemes and Derived Hilbert schemes). This approach works well in characteristic zero (otherwise one has to deal with simplicial commutative rings or $E_\infty$-ring spectra, as in Lurie's work).
CommonCrawl
Abstract: Symmetry of information states that $C(x) + C(y|x) = C(x,y) + O(\log C(x))$. We show that a similar relation for online Kolmogorov complexity does not hold. Let the even (online Kolmogorov) complexity of an $n$-bit string $x_1x_2\ldots x_n$ be the length of a shortest program that computes $x_2$ on input $x_1$, computes $x_4$ on input $x_1x_2x_3$, etc.; and similarly for odd complexity. We show that for all $n$ there exists an $n$-bit $x$ such that both odd and even complexity are almost as large as the Kolmogorov complexity of the whole string. Moreover, flipping odd and even bits to obtain a sequence $x_2x_1x_4x_3\ldots$ decreases the sum of odd and even complexity to $C(x)$.
CommonCrawl
potential applications in targeting and bioseparation. Figure 6 Room temperature magnetization curves of bare Fe 3 O 4 and Fe 3 O 4 @Y 2 O 3 :Tb 3+ composite particles. Conclusions Bifunctional [email protected]:Tb3+ composites were prepared using a facile urea-based homogeneous precipitation method. These composite particles offer two distinct functionalities: an inner Fe3O4 core, which gives the composites strong magnetic properties, making them easy to manipulate magnetically, and an outer Y2O3:Tb3+ shell with strong luminescent properties. A similar approach can be used to develop certain bifunctional composites with different core-shell structures. In addition, the simple design concept for bifunctional composites might open up new opportunities in bioanalytical and biomedical applications. Acknowledgements This work was supported by the National Research Foundation of Korea (grant no. "Introduction Biodiversity continues Quisqualic acid to be lost at an alarming rate (Pereira et al. 2010). Our knowledge of biodiversity status and trends, and the drivers of change, has increased markedly and is highlighting where action is needed to improve biodiversity conservation efforts (e.g. Brooks et al. 2006). However, conservation and sustainable use of biodiversity continues to be allocated low importance compared to other policy challenges, leading to a perception that research on biodiversity is still under-used in decision-making and implementation (Spierenburg 2012). Many initiatives already exist to tackle this perceived underuse of scientific knowledge. However, their design—and expectations of what they will achieve—often reflect an understanding of science-policy interfaces only as an overly simple process of transferring neutral facts to solve problems perceived by policy-makers (the 'linear model') (Nutley et al. 2007). There is ample evidence that transforming scientific evidence into 'usable knowledge' is neither automatic nor straightforward (Haas 2004; Knight et al. 2010; McNie 2007; Ozawa 1996; Rosenberg 2007). Indeed, as Vogel et al. Li W, Nayak V, Pennington C, Pinney DF, Pitts B, Roos DS, Srinivasamoorthy G, Stoeckert CJ, Treatman C, Wang H: AmoebaDB and MicrosporidiaDB: functional genomic resources for Amoebozoa and Microsporidia species. Nucleic Acids Res 2011, 39:D612–619.PubMedCrossRef 58. Sherry ST, Ward MH, Kholodov M, Baker J, Phan L, Smigielski EM, Sirotkin K: dbSNP: Acalabrutinib manufacturer the NCBI database of genetic variation. Nucleic Acids Res 2001, 29:308–311.PubMedCrossRef 59. Meyer M, Kircher M: Illumina sequencing library preparation for highly multiplexed target capture and sequencing. Cold Spring Harbor Protocols 2010, 2010:pdb.prot5448.PubMedCrossRef 60. Altshuler D, Pollara VJ, Cowles CR, Van Etten WJ, Baldwin J, Linton L, Lander ES: An SNP map of the human genome generated by reduced representation shotgun sequencing. Nature 2000, 407:513–516.PubMedCrossRef 61. Dewey CN: Aligning multiple whole genomes with Mercator and MAVID. Meth Mol Biol 2007, 395:221–236.CrossRef 62. Benjamini Y, Hochberg Y: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J Royal Stat Soc. Series B (Methodological) 2010, 57:289–300. 63. Ihaka R, Gentleman R: R: A Language for Data Analysis and Graphics. "Background Chemolithoautotrophic bacteria utilize inorganic compounds as electron donors for growth. each option on providing good quality habitat (i.e. 
suitable Decitabine chemical structure nesting or forage resources) for a wide range of wild pollinators (bees and hoverflies) in farmed landscapes across the UK on a scale from 0 (no benefit) to 3 (great benefit). This simple scale was selected due to the volume of options under consideration potentially increasing respondent fatigue. Experts were also asked to report their confidence in their response on a four point scale from (0) not confident to (3) very confident. From this the Pollinator Habitat Benefit (PHB) values, weighted by expert confidence, of each option were calculated as: $$PHB_i = \frac\sum_e = 1^E (H_ei \times C_e )\sum\nolimits_e = 1^E C_e $$ (1)where H ei is the habitat quality score allocated by expert e to option i and C e is expert's self-reported confidence. To avoid respondent fatigue, only one confidence measure was taken for all options. To control for the effects of between expert variation (Czmebor et al. 2011) this was then divided by the total confidence values to produce an average across all experts within the original 0–3 scale. they are attributed to hormesis, they can be explained easily in terms of a model of additive effects (different from the habitual concentration addition and independent action hypotheses), with loss of one independent variable. Indeed, consider the assay of a solution containing two effectors whose actions imply additive effects. In such a case, a rigorous description of the response would require a bivariate function (two doses; Figure 9, left) of the type: Figure 9 Simulations of responses to the simultaneous action of two effectors. These simulations were generated by means of the model (A2) and were additive (A) and subtractive (S) responses to the joint effect of two agents. Right: degenerate responses which are obtained when treating the results as Tryptophan synthase a function of a series of dilutions from a solution containing both effectors. (A2) However, if the response is simply expressed as a function of the dilution, a common practice in the preliminary examination of materials as those above mentioned, or if one only bears in mind a sole effector, the result is equivalent to what would be obtained selecting the values of the response on the line bisecting the plane defined by the two independent variables (Figure 9, right). If both responses imply the same values for m and a, the profile will be able to be described by means of a simple sigmoidal model (mW). "Background Peritoneal carcinomatosis (PC) is a common disseminated type of gastric and ovarian cancer. It is associated with a poor prognosis with a median survival of only few months [1, 2]. PC is accompanied by obsessing symptoms like malignant ascites and ileus due to abdominal obstruction, which is treated by paracentesis or palliative surgery. No efficient standard treatment to prevent or eradicate peritoneal spread is available so far. is morphologically most similar to Aeruginospora, and if found to be congeneric, Aeruginospora would have priority. Haasiella and Aeruginospora both have bidirectional trama, a thickening pachypodial hymenial palisade, and thick-walled spores with a metachromatic endosporium – a combination of characters not found elsewhere in the Hygrophoraceae (Figs. 18 and 29; Online Resource 10). Haasiella differs from Aeruginospora in having abundant clamp connections in tetrasporic forms, yellowish salmon rather than green tinted spores, and Aeruginospora was reported on soil under bamboo whereas Haasiella is mostly lignicolous. 
As with Haasiella, basing a habit on few collections may mislead. It is unknown if Aeruginospora has carotenoid pigments – a character found in both Haasiella and Chrysomphalina. Fig. 18 Subf. Hygrophoroideae, tribe Chrysomphalineae, Aeruginospora singularis lamellar cross section (v. Overeem 601 A, BO-93, Bogor Botanical Garden, Indonesia, 1921). Scale bar = 20 μm Aeruginospora Höhn., Sber. Akad. Wiss. Wien, Math.-naturw. Kl., Abt. 1 117: 1012 (1908), Type species: Aeruginospora singularis Höhn., Sber. Akad. Wiss. Wien, Math.-naturw. Kl., Abt. 1 117: 17-DMAG (Alvespimycin) HCl 1012 (1908). Aeruginospora emended here by Lodge & E. Horak as hymenial pachypodial palisade present. Basidiomes robust, cuphophylloid or cantharelloid; pileus cream colored with gray-brown or ochraceous tint in center, sometimes red-brown on margin or overall, weakly radially wrinkled or smooth. Lamellae decurrent, with 2–3 lengths of lamellulae inserted, occasionally forked, fleshy, waxy, hygrophanous, fragile, colored pale bluish-green from the basidiospores. Stipe cylindrical, flared at apex, sometimes bent; surface smooth, dry. Trama monomitic, hyphae thin-walled, some walls up to 0. Although cisplatin-based combination chemotherapies are the standard treatment for NSCLC , our study clearly showed a lower response to cisplatin-based chemotherapy buy X-396 in HER2-positive patients than in HER2-negative patients. both methodologies . The HER2-FISH results Cediranib (AZD2171) were marginally correlated with IHC results, and only the HER2-FISH data were determined to be an independent factor for poor prognosis of cisplatin-based chemotherapy and survival . In our study, we measured HER2 protein expression by IHC. Although FISH results are demonstrably better for determining HER2 status in breast cancer, until it becomes clear which method is better for evaluating HER2 status in NSCLC, IHC remains a widely available, simple, and less expensive method for determining HER2 expression. Conclusion Despite advances in chemotherapy, the prognosis for NSCLC patients remains poor. Many factors, including HER2 overexpression, may contribute to this adverse outcome Only a few studies have correlated HER2 status and cisplatin-based chemotherapy resistance. Here, we showed that advanced NSCLC that express a high level of HER2 are resistant to cisplatin-based chemotherapies, which are the standard for this disease. HER2 status thus appears to represent both a predictive and prognostic factor for advanced NSCLC. Acknowledgements We thank Timur KOCA (MD) from Erzurum Numune Hospital, Department of Radiation Oncology, for his valuable contribution to this study. References 1. Greenlee RT, Hill-Harmon MB, Murray T, Thun M: Cancer statistics. CA Cancer J Clin 2001, 51: 15–36.CrossRefPubMed 2. leaves. PAK5 Planta 186:434–441 Luwe M, Heber U (1995) Ozone detoxification in the apoplast and symplast of spinach, broad bean and beech leaves at ambient and elevated concentrations of ozone in air. Planta 107:448–455 Menke W (1990) Retrospective of a botanist. Photosynth Res 25:77–82CrossRef Mimura T, Dietz KJ, Kaiser W, Schramm MJ, Kaiser G, Heber U (1990) Phosphate transport across biomembranes and cytosolic phosphate homeostasis in barley leaves. Planta 180:139–146CrossRef Miyake H, Komura M, Itoh S, Kosugi M, Kashino Y, Satoh K, Shibata Y (2011) Multiple dissipation components of excess light energy in dry lichen revealed by ultrafast fluorescence study at 5 K. 
Photosynth Res 110:39–48PubMedCrossRef Oja V, Savchenko G, Jakob B, Heber U (1999) pH and buffer capacities of apoplatic and cytoplasmic cell compartments in leaves. fluoroquinolone-resistant clinical Escherichia coli isolates from The Netherlands. J Infect Dis 2002, 186:1852–1856.CrossRefPubMed 28. Bauer RJ, Zhang L, Foxman B, Siitonen A, Jantunen ME, Saxen H, Marrs CF: Molecular epidemiology of 3 putative virulence genes for Escherichia coli urinary tract infection– usp , iha, and iroN E. coli . J Infect Dis 2002, 185:1521–1524.CrossRefPubMed 29. Gannon VP, D'Souza S, Graham T, King RK, Rahn K, Read S: Use of the flagellar H7 gene as a target in multiplex PCR assays and improved specificity in identification of enterohemorrhagic Escherichia coli strains. J Clin Microbiol 1997, 35:656–662.PubMed 30. Clermont O, Bonacorsi S, Bingen E: Rapid and simple determination of the Escherichia coli phylogenetic group. Appl Environ Microbiol 2000, 66:4555–4558.CrossRefPubMed 31. Tenover FC, Arbeit RD, Goering RV, Mickelsen PA, Murray BE, Persing DH, Swaminathan B: Interpreting chromosomal DNA restriction patterns produced by pulsed-field gel electrophoresis: criteria for bacterial strain typing. J Clin Microbiol 1995, 33:2233–2239.PubMed Authors' contributions AM carried out the MLST studies, the analysis and interpretation of all data, and drafted the manuscript.
CommonCrawl
This is the twelfth part of "An Outsider's Tour of Reinforcement Learning." Part 13 is here. Part 11 is here. Part 1 is here. This series began by describing a view of reinforcement learning as optimal control with unknown costs and state transitions. In the case where everything is known, we know that dynamic programming generically provides an optimal solution. However, when the models and costs are unknown, or when the full dynamic program is intractable, we must rely on approximation techniques to solve RL problems. How you approximate the dynamic program is, of course, the hard part. Bertsekas recently released a revised version of his seminal book on dynamic programming and optimal control, and Chapter 6 of Volume 2 has a comprehensive survey of data-driven methods to approximate dynamic programming. Though I don't want to repeat everything Bertsekas covers here, I think describing his view of the problem builds a clean connection to receding horizon control, and bridges the complementary perspectives of classical controls and contemporary reinforcement learning. While I don't want to belabor a full introduction to dynamic programming, let me try, in as short a space as possible, to review the basics. Though we can solve this directly on finite time horizons using some sort of batch solver, there is an often a simpler strategy based on dynamic programming and the principle of optimality: If you've found an optimal control policy for a time horizon of length $N$, $\pi_1,\ldots, \pi_N$, and you want to know the optimal strategy starting at state $x$ at time $t$, then you just have to take the optimal policy starting at time $t$, $\pi_t,\ldots,\pi_N$. Dynamic programming then let's us recursively find a control policy by starting at the final time and recursively solving for policies at earlier times. This equation, known as Bellman's equation, is almost obvious given the structure of the optimal control problem. But it defines a powerful recursive formula for $V$ and forms the basis for many important algorithms in dynamic programming. Also note that if we have a convenient way to optimize the right hand side of this expression, then we can find the optimal action by finding the $u$ that minimizes the right hand side. Classic reinforcement learning algorithms like TD and Q-learning take the Bellman equation as a starting point, and try to iteratively solve for the value function using data. These ideas also form the underpinnings of now-popular methods like DQN. I'd again highly recommend Bertsekas' survey describing the many different approaches one can take to approximately solve this Bellman equation. Rather than covering this, I'd like to use this as jumping off point to compare this viewpoint to that of receding horizon control. As we discussed in the previous posts, 95% of controllers are PID control. Of the remaining 5%, 95% of those are probably based on receding horizon control (RHC). RHC, also known as model predictive control (MPC), is an incredibly powerful approach to controls that marries simulation and feedback. In RHC an agent makes a plan based on a simulation from the present until a short time into the future. The agent then executes one step of this plan, and then, based on what it observes after taking this action, returns to short-time simulation to plan the next action. 
This feedback loop allows the agent to link the actual impact of its choice of action with what was simulated, and hence can correct for model mismatch, noise realizations, and other unexpected errors. Though I have heard MPC referred to as "classical control" whereas techniques like LSTD and Q-learning are more in the camp of "postmodern reinforcement learning," I'd like to argue that these are just different variants of approximate dynamic programming. Here we have just unrolled the cost beyond one step, but still collect the cost-to-go $N$ steps in the future. Though this is trivial, it is again incredibly powerful: the longer we make the time horizon, the less we have to worry about the value function $V$ being accurate. Of course, now we have to worry about the accuracy of the state-transition map, $f$. But, especially in problems with continuous variables, it is not at all obvious which accuracy is more important in terms of finding algorithms with fast learning rates and short computation times. There is a tradeoff between learning models and learning value functions, and this is a tradeoff that needs to be better understood. Though RHC methods appear fragile to model mismatch, because they are only as good as the model, the repeated feedback inside RHC can correct for many modeling errors. As an example, it's very much worth revisiting the robotic locomotion tasks inside the MuJoCo framework. These tasks actually were designed to test the power of a nonlinear RHC algorithm developed by Tassa, Erez, and Todorov. Fast forward to 2:50 to see the humanoid model we discussed in the random search post. Note that the controller works to keep the robot upright, even when the model is poorly specified. Hence, the feedback inside the RHC loop is providing a considerable amount of robustness to modeling errors. Also note that this demo does not estimate the value function at all. Instead, they simply truncate the infinite time-horizon problem. The receding horizon approximation is already quite good for the purpose of control. All these behaviors were generated by MPC in real-time. The walking is not as what can be obtained from computationally intensive long-horizon trajectory optimization, but it looks considerably better than the sort of direct policy search gaits we discussed a previous post. Is there a middle ground between expensive offline trajectory optimization and real time model-predictive control? I think the answer is yes in the very same way that there is middle ground between learning dynamical models and learning value functions. Performance of a receding control system can be improved by better modeling of the value function which defines the terminal cost. The better a model you make of the value function, the shorter a time horizon you need for simulation, and the closer you get to real-time operation. Of course, if you had a perfect model of the value function, you could just solve the Bellman equation and you would have the optimal control policy. But by having an approximation to the value function, high performance can still be extracted in real-time. So what if we learn to iteratively improve the value function while running RHC? This idea has been explored in a project by my Berkeley colleagues Rosolia, Carvalho, and Borrelli. In their "Learning MPC" approach, the terminal cost is learned by nearest neighbors. The terminal cost of a state is the value obtained last time you tried that state. If you haven't visited that state, the cost is infinite. 
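A cartoon of the "terminal cost learned by nearest neighbors" idea just described (only the value-lookup ingredient, not the actual Learning MPC algorithm of Rosolia, Carvalho, and Borrelli; the radius-based lookup and data layout are simplifications I chose for illustration):

```python
import numpy as np

class NearestNeighborValue:
    """Terminal cost learned from past runs: the recorded cost-to-go at the closest
    previously visited state, and effectively infinite for unexplored states."""
    def __init__(self, radius=0.5):
        self.states = []      # states visited on earlier trajectories
        self.costs = []       # recorded cost-to-go at each visited state
        self.radius = radius  # how far away we still trust a recorded value

    def update(self, trajectory, stage_costs):
        # Cost-to-go at step t is the sum of stage costs from t to the end of the run.
        ctg = np.cumsum(np.asarray(stage_costs)[::-1])[::-1]
        self.states.extend(np.asarray(s, dtype=float) for s in trajectory)
        self.costs.extend(ctg.tolist())

    def __call__(self, x):
        if not self.states:
            return np.inf
        d = np.array([np.linalg.norm(np.asarray(x, dtype=float) - s) for s in self.states])
        i = int(d.argmin())
        return self.costs[i] if d[i] <= self.radius else np.inf

# After one demonstrated run, states near its end get a small terminal cost.
V = NearestNeighborValue(radius=0.5)
V.update(trajectory=[[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]], stage_costs=[1.0, 1.0, 0.0])
print(V([1.9, 0.1]))   # ~0.0: close to a visited state with zero cost-to-go
print(V([5.0, 5.0]))   # inf: unexplored, so the planner must end inside known territory
```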
This formulation constrains the terminal condition to be in a state observed before. You can explore new ways to decrease your cost on the finite time horizon as long as you reach a state that you have already demonstrated is safe. After only a few laps, the learned controller works better than a human operator. Simple nearest-neighbors suffices to learn rather complex autonomous actions. And, if you're into that sort of thing, you can even prove monotonic increase in control performance. Quantifying the actual learning rate remains open and would be a great problem for RL theorists out there to study. But I think this example cleanly shows how the gap between RHC methods and Q-learning methods is much smaller than it first appears. Another reason to like this blended RHC approach to learning to control is that one can hard code in constraints on controls, states, and easily incorporate models of disturbance directly into the optimization problem. Some of the most challenging problems in control are how to execute safely while continuing to learn more about a system's capability, and an RHC approach provides a direct route towards balancing safety and performance. In the next post, I'll describe an optimization-based approach to directly estimate and incorporate modeling errors into control design.
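As a closing illustration of the receding-horizon loop at the heart of this post, here is a minimal sketch on a linear-quadratic toy problem, where the inner $N$-step optimization can be solved exactly with a backward Riccati recursion (dynamic programming over the short horizon) and only the first action is applied before re-planning. The dynamics, costs, horizon, and crude terminal cost are all made up for illustration; this is not the nonlinear trajectory optimizer used by Tassa, Erez, and Todorov.

```python
import numpy as np

# Toy plant: a discrete-time double integrator, x = [position, velocity], u = force.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([1.0, 0.1])        # stage cost on the state
R = np.array([[0.01]])         # stage cost on the control
P_term = np.diag([1.0, 0.1])   # crude terminal cost, standing in for the true value function

def first_step_gain(A, B, Q, R, P_N, N):
    """Backward Riccati recursion over an N-step horizon; return the gain for step 0."""
    P = P_N
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K  # after N backward steps this is the gain at the start of the horizon

def mpc_action(x, N=10):
    """Plan N steps ahead, but apply only the first action (receding horizon)."""
    K0 = first_step_gain(A, B, Q, R, P_term, N)
    return -K0 @ x

x = np.array([5.0, 0.0])       # start 5 units from the origin, at rest
for t in range(60):
    u = mpc_action(x)
    # Apply the action to the "real" system. Here it coincides with the model;
    # in practice this is where feedback corrects for model mismatch and noise.
    x = A @ x + B @ u
print("final state:", x)
```

The better the terminal cost `P_term` approximates the true value function, the shorter the horizon `N` can be for the same closed-loop performance, which is the trade-off discussed above.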
CommonCrawl
From the graph we see that the points of intersection are $(1,5)$ and $(3,3)$. See step-by-step solution for analytical check. We read from the graph the points of intersection $(1,5)$ and $(3,3)$ ($x$ coordinate is cited 1st, and $y$ coordinate is cited second). To analytically check them we have to substitute $x$ and $y$ from each point (one point by one) to the equations of the graphs and see if they become valid equalities (if number calculated on the right side simply equals the number calculated on the left): For $(1,5)$ $$5=-|2\times 1 - 3|+6 = -|-1|+6 = -1+6=5$$ which is correct. $$5=6-1=5$$ which is also correct so the point $(1,5)$ is analytically verified. For $(3,3)$ $$3=-|2\times3 - 3|+6 = -|6-3|+6 = -|3|+6 = -3+6 = 3$$ which is correct. $$3=6-3=3$$ which is also correct so the point $(3,3)$ is also analytically verified.
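A tiny numeric check of the same substitutions (plain Python, just automating the arithmetic done above):

```python
def f(x):
    return -abs(2 * x - 3) + 6   # y = -|2x - 3| + 6

def g(x):
    return 6 - x                 # y = 6 - x

for px, py in [(1, 5), (3, 3)]:
    print((px, py), f(px) == py and g(px) == py)   # True for both points
```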
CommonCrawl
I asked the question Fourier series is to Fourier transform what Laurent series is to …? over at MSE, since that's where my questions usually belong. But since I couldn't find any resources on it, I was wondering if it were actually adequate for migration to MO? To summarize that question, it is about the similarity between Fourier analysis and the Laurent series on a circle, and whether the $r\to\infty$ limit of the latter a) makes sense, b) also yields an integration similar to the Fourier series turning into the Fourier transform, and c) whether it's useful. I don't think this is appropriate for Math Overflow because it is not a research-level mathematics question. What happens when a question is migrated to MSE?
CommonCrawl
When I tried some of the questions that involved matrices with complex eigenvalues, I sometimes ended up with matrices whose reduced row echelon forms have pivot positions in every column (after subtracting the eigenvalue from the main diagonal entries; no free variables). Does this sometimes happen when finding eigenvectors this way? It is not clear what you mean; maybe give an example? Not really, I think: in reduced row echelon form we should get a matrix with a non-zero first row and a zero second row. There should always be at least one free variable if the eigenvalue is chosen correctly. A free variable needs to be present so that your solution is a line or plane, etc. If the characteristic polynomial $\det (A-kI)$ has a multiple root (say a root $k$ of multiplicity $m$), then it can have any number $l=1,\ldots, m$ of linearly independent eigenvectors associated with this eigenvalue (we do not distinguish here between real and complex eigenvalues). Then one needs to find the Jordan (echelon?) form.
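As a concrete check (a small sympy example of my own, not one of the original exercises): if $\lambda$ really is an eigenvalue, then $A-\lambda I$ is singular, its reduced row echelon form cannot have a pivot in every column, and the free variable is exactly what produces the eigenvector. Getting pivots in every column means the value subtracted was not actually an eigenvalue, or an arithmetic slip occurred.

```python
from sympy import Matrix, I, eye

A = Matrix([[0, -1],
            [1,  0]])        # eigenvalues are +i and -i
lam = I

M = A - lam * eye(2)
rref, pivots = M.rref()
print(rref)                  # Matrix([[1, -I], [0, 0]])
print(pivots)                # (0,): only one pivot column, so the second column is free
print(M.nullspace())         # [Matrix([[I], [1]])], the eigenvector for lambda = i
```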
CommonCrawl
Abstract: The mass of the bottom quark (both the pole mass $M_b$ and the $\overline{\mathrm{MS}}$ mass $m_b$) and the strong coupling constant $\alpha_s$ have been determined from QCD moment sum rules for the $\Upsilon$ system. In the pole-mass scheme large perturbative corrections resulting from coulombic contributions have been resummed. The results of this analysis are: $M_b=4.60 \pm 0.02~\mathrm{GeV}$, $m_b(m_b)=4.13 \pm 0.06~\mathrm{GeV}$ and $\alpha_s(M_Z)=0.119 \pm 0.008$.
CommonCrawl
Abstract. Motivated by recent claims of a proof that the length scale exponent for the end-to-end distance scaling of self-avoiding walks is precisely $7/12=0.5833\ldots$, we present results of large-scale simulations of self-avoiding walks and self-avoiding trails with repulsive contact interactions on the hypercubic lattice. We find no evidence to support this claim; our estimate $\nu=0.5874(2)$ is in accord with the best previous results from simulations.
CommonCrawl
Y. Cheng, S. Friedman, and J. D. Hamkins, "Large cardinals need not be large in HOD," Annals of Pure and Applied Logic, vol. 166, iss. 11, pp. 1186-1198, 2015. Abstract. We prove that large cardinals need not generally exhibit their large cardinal nature in HOD. For example, a supercompact cardinal $\kappa$ need not be weakly compact in HOD, and there can be a proper class of supercompact cardinals in $V$, none of them weakly compact in HOD, with no supercompact cardinals in HOD. Similar results hold for many other types of large cardinals, such as measurable and strong cardinals. 1. To what extent must a large cardinal in $V$ exhibit its large cardinal properties in HOD? 2. To what extent does the existence of large cardinals in $V$ imply the existence of large cardinals in HOD? For large cardinal concepts beyond the weakest notions, we prove, the answers are generally negative. In Theorem 4, for example, we construct a model with a supercompact cardinal that is not weakly compact in HOD, and Theorem 9 extends this to a proper class of supercompact cardinals, none of which is weakly compact in HOD, thereby providing some strongly negative instances of (1). The same model has a proper class of supercompact cardinals, but no supercompact cardinals in HOD, providing a negative instance of (2). The natural common strengthening of these situations would be a model with a proper class of supercompact cardinals, but no weakly compact cardinals in HOD. We were not able to arrange that situation, however, and furthermore it would be ruled out by Conjecture 13, an intriguing positive instance of (2) recently proposed by W. Hugh Woodin, namely, that if there is a supercompact cardinal, then there is a measurable cardinal in HOD. Many other natural possibilities, such as a proper class of measurable cardinals with no weakly compact cardinals in HOD, remain as open questions. This entry was posted in Publications and tagged definability, forcing, HOD, homogeneous forcing, indestructibility, large cardinals, measurable, supercompact, weakly compact by Joel David Hamkins. Bookmark the permalink. V [G], if there are no measurable cardinals in V ? final extension. Now find some intermediate model, by noting for inaccessible $\alpha, Add(\alpha, 1)$ can be written as two step iteration, first adding an $\alpha$-Souslin tree, and then forcing with that tree. The intermediate model is an iteration with the Souslin parts. But this idea does not work for trivial reasons. But what if instead of iteration, we have a kind of product. Then we would be able to do the job. Now, there is a paper by Mack Stanley "Notes on a theorem of Silver" (see http://www.math.sjsu.edu/~stanley/gch.pdf) which gives a proof of Silver's theorem by a kind of forcing which is not an iteration and can be considered as product. I don't see if his forcing adds Cohen subsets to weakly compacts below the supercompact. But maybe a modification of his method can be used to answer the above question and similar ones.
CommonCrawl
Fang, Min, Pascucci, Ilaria, Edwards, Suzan, et al. "A New Look at T Tauri Star Forbidden Lines: MHD-driven Winds from the Inner Disk." The Astrophysical Journal, 868, no. 1 (2018) https://doi.org/10.3847/1538-4357/aae780. Magnetohydrodynamic (MHD) and photoevaporative winds are thought to play an important role in the evolution and dispersal of planet-forming disks. We report the first high-resolution (Δv ∼ 6 km s$^{-1}$) analysis of [S II] λ4068, [O I] λ5577, and [O I] λ6300 lines from a sample of 48 T Tauri stars. Following Simon et al. we decompose them into three kinematic components: a high-velocity component (HVC) associated with jets, and low-velocity narrow (LVC-NC) and broad (LVC-BC) components. We confirm previous findings that many LVCs are blueshifted by more than 1.5 km s$^{-1}$ and thus most likely trace a slow disk wind. We further show that the profiles of individual components are similar in the three lines. We find that most LVC-NC and LVC-BC line ratios are explained by thermally excited gas with temperatures between 5000 and 10,000 K and electron densities of ∼10$^{7}$–10$^{8}$ cm$^{-3}$. The HVC ratios are better reproduced by shock models with a pre-shock H number density of ∼10$^{6}$–10$^{7}$ cm$^{-3}$. Using these physical properties, we estimate $\dot{M}_{\mathrm{wind}}/\dot{M}_{\mathrm{acc}}$ for the LVC and $\dot{M}_{\mathrm{jet}}/\dot{M}_{\mathrm{acc}}$ for the HVC. In agreement with previous work, the mass carried out in jets is modest compared to the accretion rate. With the likely assumption that the LVC-NC wind height is larger than the LVC-BC, the LVC-BC $\dot{M}_{\mathrm{wind}}/\dot{M}_{\mathrm{acc}}$ is found to be higher than the LVC-NC. These results suggest that most of the mass loss occurs close to the central star, within a few au, through an MHD-driven wind. Depending on the wind height, MHD winds might play a major role in the evolution of the disk mass.
CommonCrawl
started to (re)structure the entry higher category theory roughly along the lines of the new structure at category theory. But for the moment many sections just contain link lists.

Removed an incorrect statement about the relation between (∞,∞)-categories of cobordisms and cobordism spectra.

I removed a somewhat contentious wording. The main driving force for the development of higher category theory is not possible applications in extended QFT, although that has fed in some interesting conjectures that have shaped the way the theory grew.

Seems to me equally contentious to remove it. It's easier to forget than to remember, way back, John Roberts introduced ω-categories for purposes of AQFT (see here: https://ncatlab.org/nlab/show/strict%20omega-category#references).

Many people developed their theory with regard to purely categorical aspects or to links with topology or algebraic geometry. There were several breakthroughs, one being from earlier with Boardman and Vogt, followed on by Cordier with weak Kan complexes and, of course, Street's orientals were important and owed inspiration to John Roberts. The page you link to, Urs, makes the point about Ronnie's work, but all the work by researchers such as Batanin, Makkai, Leinster, Cheng, Verity, Joyal, is independent of TQFT ideas, and even the work of John Baez and Jim Dolan is only distantly linked to extended QFTs. The homotopy hypothesis etc. dates from interpretations of Grothendieck's Pursuing Stacks, which was a very important input to the development. I am not suggesting there was no link with TQFTs, merely that very few of those people I mention, who were very important in the development of the theory of higher categories, would have mentioned extended TQFTs as a dominant inspiration. John Roberts' work was very important as motivation for Ross Street, but mostly because of the link with non-Abelian cohomology. Much later, of course, Lurie worked both on quasi-categories and on cobordisms, but that was not an early motivation.

The wording that Tim removed seemed inoffensive to me, but if the problem was with "to a large extent", then I think "one of the driving forces" would be a reasonable replacement (and incontestable I believe). Other driving forces could be added in as one sees fit.

Agreed with "one of the driving forces".

I'm ok with that. Put in "one of the driving forces" etc.
CommonCrawl
I have built a logistic regression where the outcome variable is being cured after receiving treatment (Cure vs. No Cure). All patients in this study received treatment. I am interested in seeing if having diabetes is associated with this outcome. My question is: Why don't the p-values and the confidence interval including 1 agree? The Wald test assumes that the likelihood is normally distributed, and on that basis, uses the degree of curvature to estimate the standard error. Then, the parameter estimate divided by the SE yields a $z$-score. This holds under large $N$, but isn't quite true with smaller $N$s. It is hard to say when your $N$ is large enough for this property to hold, so this test can be slightly risky. Likelihood ratio tests look at the ratio of the likelihoods (or difference in log likelihoods) at its maximum and at the null. This is often considered the best test. The score test is based on the slope of the likelihood at the null value. This is typically less powerful, but there are times when the full likelihood cannot be computed and so this is a nice fallback option. The tests that come with summary.glm() are Wald tests. You don't say how you got your confidence intervals, but I assume you used confint(), which in turn calls profile(). More specifically, those confidence intervals are calculated by profiling the likelihood (which is a better approach than multiplying the SE by $1.96$). That is, they are analogous to the likelihood ratio test, not the Wald test. The $\chi^2$-test, in turn, is a score test. As your $N$ becomes indefinitely large, the three different $p$'s should converge on the same value, but they can differ slightly when you don't have infinite data. It is worth noting that the (Wald) $p$-value in your initial output is just barely significant and there is little real difference between just over and just under $\alpha=.05$ (quote). That line isn't 'magic'. Given that the two more reliable tests are just over $.05$, I would say that your data are not quite 'significant' by conventional criteria. # D 1 3.7997 0 0.0000 0.05126 . Not the answer you're looking for? Browse other questions tagged r hypothesis-testing logistic generalized-linear-model odds-ratio or ask your own question. Why score test, Wald test, Likelihood Ratio Test etc? How does R calculate the p-value for this binomial regression? t Test, Chi Squared or logistic regression..? Is the chi-squared test correct to see if different algorithms differ in output? How to calculate Odds ratio and 95% confidence interval for logistic regression for the following data?
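The answer above is framed in R (summary.glm() for the Wald test, confint()/profiling for the interval, and anova(..., test = "Chisq") for the likelihood-ratio test). Here is the same Wald-versus-likelihood-ratio comparison sketched in Python with statsmodels on simulated data; the variable names and effect sizes are invented and have nothing to do with the asker's actual diabetes data.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 100
diabetes = rng.integers(0, 2, n)          # hypothetical binary predictor
logit_p = -0.5 + 0.9 * diabetes
cure = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X_full = sm.add_constant(diabetes.astype(float))
full = sm.Logit(cure, X_full).fit(disp=0)
reduced = sm.Logit(cure, np.ones((n, 1))).fit(disp=0)  # intercept-only model

wald_p = full.pvalues[1]                   # Wald test, as in summary() output
lr_stat = 2 * (full.llf - reduced.llf)     # likelihood-ratio test
lr_p = stats.chi2.sf(lr_stat, df=1)
ci = full.conf_int()[1]                    # Wald-type CI for the log odds ratio

print(f"Wald p = {wald_p:.4f}, LR p = {lr_p:.4f}")
print("95% CI for log odds ratio:", ci)
```

With large samples the two p-values nearly coincide; in borderline cases like the one discussed above they can sit on opposite sides of 0.05, which is the behavior the answer explains.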
CommonCrawl
We present results from recent experimental and theoretical investigations of DNA hairpin retraction from an $\alpha$-hemolysin nanopore in the presence of an assisting voltage. By mapping the translocation process to that of biased diffusion of a Brownian particle we compute the probability of the polymer to stay in the pore as a function of time. Using this model we back out the diffusion constant and the drift velocity of the polymer as a function of the assisting voltage. While the drift-diffusion model gives good agreement with experiments at low voltages it fails for high assisting voltages. We discuss possible reasons for this along with the implications of our work.
CommonCrawl
One of the candidates to reconcile quantum mechanics with general relativity is the generalization of the Heisenberg Uncertainty Principle to incorporate gravitational effects. As a result, the Generalized Uncertainty Principle (GUP) "deforms" the commutation relation given by the Heisenberg Uncertainty Principle via a GUP parameter $\alpha$. Furthermore, the relativistic dispersion relation becomes modified. We present a calculation of the entropy density, speed of sound, and the resulting impact on the bulk viscosity to shear viscosity ratio of an ideal quark gluon plasma when the effects of the GUP are taken into consideration. When the GUP parameter tends to zero, we obtain the value of the speed of sound for an ideal gas of massless particles, i.e. $c_s^2 =1/3$, and the expected result that the bulk viscosity vanishes. In addition, in the high temperature limit, the speed of sound tends to $c_s^2=1/4$. The consequence this has on the bulk viscosity is that in the high temperature limit, the ratio of the bulk to shear viscosity tends to $\zeta/\eta=5/48$. Our results suggest that the GUP introduces a scale into the system, breaking the a priori conformal invariance of a system of massless noninteracting particles.
CommonCrawl
Given a permutation $w$ in $S_n$, the matroid of a generic $n \times n$ matrix whose non-zero entries in row $i$ lie in columns $w(i)$ through $n+i$ is an example of a positroid. We enumerate the bases of such a positroid as a sum of certain products of Catalan numbers, each term indexed by the 3$-avoiding permutations above $w$ in Bruhat order. We also give a similar sum formula for their Tutte polynomials. These are both avatars of a structural result writing such a positroid as a disjoint union of a direct sum of Catalan matroids (up to isomorphism) and free matroids.
CommonCrawl
Meet Vincent. He's a painter and quite the eccentric. He just got a new commission: to embellish the façade of an old building on the fancy side of town. The neighbors just want the ugly wall to disappear. For inspiration, he goes to take a look at the building. Those darn kids – they painted graffiti all over it. No matter, Vincent will paint over the mess. To help him cover the graffiti, Vincent can solve quadratic equations by factoring and grouping. Back at home, as he considers the specifics of the building, he devises a strategy. What does he know? The front of the building has an area of 45 square yards, but not to be included in the painting are the fire escape on the right side of the building, which is 2 yards wide, and the windows at the top of the building, which have a height of 3 yards. The painting must be in the shape of a rectangle with the height equal to 2 times the width. Because he doesn't know the width or the height, he uses the variables x and 2x. He sets up an equation and sets the total area equal to 45 square yards. The quantity x plus 2, times the quantity 2x plus 3, is equal to 45. To calculate x, first we FOIL. Then, we combine like terms. Next, since this is a quadratic equation, to find the solution we modify the equation so it's equal to 0. Now we're ready to factor with grouping. First, find the factors of ac that sum to b. ac is equal to 2 times -39, so -78. b is 7. Now what factors of -78 sum to 7? Hmmm, let's go through the list. Aha, -6 and 13 will work. Now, we split up 7x into two terms, -6x and 13x. Pay close attention to this next step: use parentheses to group the 4 terms into 2 binomials and then factor out the GCF from each binomial. This can be tricky, so watch carefully. The end result is the binomial x - 3 times the binomial 2x + 13, and the product is equal to zero. But not so fast, we have one last step. Apply the Zero Product Property and solve for both values of x. There's one last thing to think about with this problem. You can't have a negative length or width, so only one of the solutions is a possible answer. x is equal to 3 yards, so the height of the painting is equal to 6 yards and the width is equal to 3 yards. There are easy ways to solve quadratic equations like this, for example $3x^2 - 25x + 56 = 14$. One way is by combining factoring and grouping. The standard form of a quadratic equation is $ax^2 + bx + c = 0$. Once such an equation is put into standard form, we can determine its factors by finding two numbers whose product is $a\cdot c$ and whose sum is $b$. For example, the equation $3x^2 - 25x + 56 = 14$ in standard form is $3x^2 - 25x + 42 = 0$, with $a = 3$, $b = -25$, and $c = 42$. Here $a\cdot c = 126$, and the pair $(-7,-18)$ works, since $(-7)\cdot(-18) = 126$ and $(-7) + (-18) = -25 = b$. Splitting the middle term gives $3x^2 - 7x - 18x + 42 = 0$. We can factor x from the first group and -6 from the second group to get $x(3x - 7) - 6(3x - 7) = 0$. We can then isolate $(3x - 7)$ as a common factor and finally get the equation $(x - 6)(3x - 7) = 0$. We have thus factored $3x^2 - 25x + 42 = 0$ into $(x - 6)(3x - 7) = 0$, from which we can see that either $x = 6$ or $x = 7/3$. Would you like to apply what you have learned? With the exercises for the video Factoring with Grouping you can review and practice it. Establish the equation corresponding to the area of the façade of the building. The area of a rectangle is given by its height multiplied by its width. This rectangle represents the house wall. The height is given by $2x+3$. The width is given by $x+2$. Determine the solutions and decide which solution is a reasonable length. Each factor gives a solution.
Check each solution, keeping in mind that $x$ represents a length. By the zero product property, we get that either $2x+13=0$ or $x-3=0$. This can't be a solution, because $x$ is a length and thus cannot be negative. This is the desired solution. The part of the house wall to paint is 6 yds. high and 3 yds. wide.

Solve the equation describing the area of the building to be painted. If you multiply any term by zero the product is also zero. Multiply the First to get $2x^2$. Multiply the Outer to get $3x$. Multiply the Inner to get $4x$. Multiply the Last to get $6$. Adding all those terms together, we have $2x^2+3x+4x+6=45$. Combining like terms gives us $2x^2+7x+6=45$. Then subtract $45$ from both sides of the equation to get $2x^2+7x-39=0$. Again factoring, we get $(2x+13)\times (x-3)=0$. The zero product property tells us that we get either $2x+13=0$ or $x-3=0$. This can't be a solution in our case because $x$ is a length and thus cannot be negative. This is the solution we want. So the part of the building to paint is 6 yds. high and 3 yds. wide.

Determine the solutions of the equation. You have to find the factors of $-114$ which sum to $13$. Check your solutions (one is a decimal number) by inserting them into the equation above. Multiply the First to get $2x\times x=2x^2$. Multiply the Outer to get $2x\times 4=8x$. Multiply the Inner to get $5\times x=5x$. Multiply the Last to get $5\times 4=20$. Adding all of these terms together, we get $2x^2+13x+20=77$. Subtracting $77$ on both sides gives us $2x^2+13x-57=0$. Because a product equals zero if one of the factors equals zero, we get either $2x+19=0$ or $x-3=0$. Those are the wanted solutions: Either $x=-9.5$ or $x=3$.

Find the mistakes in Vincent's calculations. Multiply the First to get $x\times x=x^2$. Multiply the Outer to get $x\times 5=5x$. Multiply the Inner to get $4\times x=4x$. Multiply the Last to get $4\times 5=20$. Keep in mind that you can only combine like terms; for example, $2x+3x=5x$, but you can't combine any more terms in $2x+3x^2$. You can check the factorization using the FOIL method. If you multiply the leading coefficients of the two binomials, you get the coefficient of the quadratic term of the resulting trinomial. One way is to find the factors of $a\times c$ which sum to $b$. The other way is to check if a given factorization is correct using the FOIL method.

Let's start with $3x^2+4x-4=0$. We need to find all factors of $3\times (-4)=-12$ and choose the pair which sums to $4$. $(4x+4)\times(2x-3)=4x\times 2x+4x\times (-3)+4\times 2x+4\times (-3)$. Combining like terms we get $...=8x^2-12x+8x-12=8x^2-4x-12$.
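The factor-pair search used throughout these exercises is easy to check mechanically. Here is a small Python sketch (my own illustration, not part of the video's materials; it assumes NumPy is available for the root check) that finds the pair of integers whose product is $a\times c$ and whose sum is $b$, and then double-checks the roots of Vincent's equation $2x^2+7x-39=0$:

```python
import numpy as np

def factor_pair(a, b, c):
    """Find integers (m, n) with m * n == a * c and m + n == b, if such a pair exists."""
    target = a * c
    for m in range(-abs(target), abs(target) + 1):
        if m != 0 and target % m == 0 and m + target // m == b:
            return m, target // m
    return None

# Vincent's equation: 2x^2 + 7x - 39 = 0, so a*c = -78 and b = 7
print(factor_pair(2, 7, -39))   # (-6, 13), the split used in the video

# The roots of (x - 3)(2x + 13) should match the roots of the original quadratic
print(np.roots([2, 7, -39]))    # 3 and -6.5, in some order
```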
CommonCrawl
What is too big for standard linear algebra/optimization methods? Different numerical linear algebra and numerical optimization methods have different size regimes where they're a 'good idea', in addition to their own properties. For example, for very large optimization problems, gradient, stochastic gradient and coordinate descent methods are used instead of Newton or Interior Point methods because you don't have to deal with the Hessian. Similarly, dense linear solver methods stop being feasible after a certain size. So given that both the algorithms and the computer hardware are changing constantly, what's a good way to know, and keep up with, how big is too big for standard linear algebra and optimization solvers?

EDIT: For more concreteness, what got me thinking about this was varying rules of thumb on the upper bounds for how large a problem interior point algorithms could solve. Earlier papers said the dimensionality should be around 1000 while later papers had revised upwards to 5000 and even more recent papers allow for even larger depending on if you can take advantage of sparsity. That's a rather large range, so I'm curious what is large for state of the art interior point methods.

If sparsity is preserved, optimal preconditioners are available, and inequality constraints can be resolved by a multiscale method (or the number of active constraints is not too large), the overall algorithm can be $\mathcal O(n)$ time and space. Distributing across a parallel machine adds a logarithmic term to time. If enough sparsity is available, or if matrix-free methods are used, on the order of one million degrees of freedom can be solved per core. That puts the problem size for today's largest machines at around one trillion degrees of freedom. Several groups have run PDE simulations at this scale. Note that it is still possible to use Newton-based optimization with large design spaces, you just need to solve iteratively with the Hessian. There are many approaches to doing so efficiently. So it all depends on how you define "standard methods". If your definition includes multilevel structure-preserving methods, then extremely large problems are tractable. If your definition is limited to unstructured dense methods, the feasible problem sizes are much smaller because the algorithms are not "scalable" in either time or space.

The limit is primarily given by the memory it takes to store the matrix representation, and the time to retrieve it from memory. This makes a few thousand the limit for dense matrix methods in simple environments. For sparse problems the limit for direct solvers is much higher, but depends on the sparsity pattern, as fill-in must be accommodated. The limit for iterative methods for linear solvers is essentially the cost of a matrix vector multiply. The limit for solving the linear subproblems directly translates into corresponding limits for local solvers for nonlinear systems of equations and optimization problems. Global solvers have much more severe limits, as they are limited by the number of subproblems that need to be solved in a branch and bound framework, or by the curse of dimensionality in stochastic search methods.

Try it out yourself (on a serial machine, or if you have the resources, a parallel machine). See how large of an instance you can run, or run several instances of different sizes, and do an empirical scaling analysis (time vs. problem size). To give an example, in global optimization, the concrete answer is extremely structure-dependent.
As Arnold Neumaier notes, deterministic global optimization algorithms tend to be limited by the number of subproblems that must be solved in a branch-and-bound (or branch-and-cut) framework. I have solved mixed-integer linear programs (MILPs) containing thousands of binary variables, but I suspect that the reason I can solve such large problems (comparatively speaking, for MILPs) is because the problem structure was such that few subproblems were required to solve some critical set of binary variables, and the rest could be set to zero. I know that my problem is "large"; I've constructed other MILPs of similar size that solve tens of thousands of times more slowly. There are global optimization test sets that give you an idea of what is "run-of-the-mill," and the literature can give you ideas of what problems are "large". Similar tactics exist for figuring out the state-of-the-art in problem sizes in other fields, which is how Jed Brown and Arnold Neumaier can quote these figures. It's great to get these numbers, but it's far more valuable to be able to figure out how to get them yourself when the time comes. Not the answer you're looking for? Browse other questions tagged linear-algebra optimization nonlinear-programming or ask your own question. State-of-the-art for active set optimization algorithms?
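Returning to the "try it out yourself" advice above, a sketch of such an empirical scaling analysis might look like the following (my own illustration, assuming NumPy and SciPy are available; the 1-D Laplacian is just a convenient stand-in problem and the sizes are arbitrary):

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def time_solves(n):
    """Time a dense solve vs. a sparse direct solve on a 1-D Laplacian of size n."""
    A_sparse = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
    A_dense = A_sparse.toarray()
    b = np.ones(n)

    t0 = time.perf_counter()
    np.linalg.solve(A_dense, b)
    dense_t = time.perf_counter() - t0

    t0 = time.perf_counter()
    spla.spsolve(A_sparse, b)
    sparse_t = time.perf_counter() - t0
    return dense_t, sparse_t

for n in (500, 1000, 2000, 4000):
    d, s = time_solves(n)
    print(f"n={n:5d}  dense {d:.3f}s  sparse {s:.4f}s")
```

Plotting time against problem size (on log-log axes) for each method gives exactly the kind of "how big is too big" answer the question asks for, on your own hardware.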
CommonCrawl
Abstract: In 1969, Vic Klee asked whether a convex body is uniquely determined (up to translation and reflection in the origin) by its inner section function, the function giving for each direction the maximal area of sections of the body by hyperplanes orthogonal to that direction. We answer this question in the negative by constructing two infinitely smooth convex bodies of revolution about the $x_n$-axis in $\R^n$, $n\ge 3$, one origin symmetric and the other not centrally symmetric, with the same inner section function. Moreover, the pair of bodies can be arbitrarily close to the unit ball.
CommonCrawl
Nguyen Van Thoai. Decomposition branch and bound algorithm for optimization problems over efficient sets. Journal of Industrial & Management Optimization, 2008, 4(4): 647-660. doi: 10.3934/jimo.2008.4.647.
Yinfei Li, Shuping Chen. Optimal traffic signal control for an $M\times N$ traffic network. Journal of Industrial & Management Optimization, 2008, 4(4): 661-672. doi: 10.3934/jimo.2008.4.661.
Junfeng Yang. Dynamic power price problem: An inverse variational inequality approach. Journal of Industrial & Management Optimization, 2008, 4(4): 673-684. doi: 10.3934/jimo.2008.4.673.
Caglar S. Aksezer. On the sensitivity of desirability functions for multiresponse optimization. Journal of Industrial & Management Optimization, 2008, 4(4): 685-696. doi: 10.3934/jimo.2008.4.685.
Vadim Azhmyakov. An approach to controlled mechanical systems based on the multiobjective optimization technique. Journal of Industrial & Management Optimization, 2008, 4(4): 697-712. doi: 10.3934/jimo.2008.4.697.
Qiying Hu, Chen Xu, Wuyi Yue. A unified model for state feedback of discrete event systems II: Control synthesis problems. Journal of Industrial & Management Optimization, 2008, 4(4): 713-726. doi: 10.3934/jimo.2008.4.713.
Cai-Ping Liu. Some characterizations and applications on strongly $\alpha$-preinvex and strongly $\alpha$-invex functions. Journal of Industrial & Management Optimization, 2008, 4(4): 727-738. doi: 10.3934/jimo.2008.4.727.
Shishun Li, Zhengda Huang. Guaranteed descent conjugate gradient methods with modified secant condition. Journal of Industrial & Management Optimization, 2008, 4(4): 739-755. doi: 10.3934/jimo.2008.4.739.
Shaoyong Lai, Qichang Xie. A selection problem for a constrained linear regression model. Journal of Industrial & Management Optimization, 2008, 4(4): 757-766. doi: 10.3934/jimo.2008.4.757.
Radu Ioan Boţ, Anca Grad, Gert Wanka. Sequential characterization of solutions in convex composite programming and applications to vector optimization. Journal of Industrial & Management Optimization, 2008, 4(4): 767-782. doi: 10.3934/jimo.2008.4.767.
Kai Zhang, Xiaoqi Yang, Kok Lay Teo. A power penalty approach to American option pricing with jump diffusion processes. Journal of Industrial & Management Optimization, 2008, 4(4): 783-799. doi: 10.3934/jimo.2008.4.783.
Lin Xu, Rongming Wang, Dingjun Yao. On maximizing the expected terminal utility by investment and reinsurance. Journal of Industrial & Management Optimization, 2008, 4(4): 801-815. doi: 10.3934/jimo.2008.4.801.
Ling Lin, Dong He, Zhiyi Tan. Bounds on delay start LPT algorithm for scheduling on two identical machines in the $l_p$ norm. Journal of Industrial & Management Optimization, 2008, 4(4): 817-826. doi: 10.3934/jimo.2008.4.817.
Jonas C. P. Yu, H. M. Wee, K. J. Wang. Supply chain partnership for Three-Echelon deteriorating inventory model. Journal of Industrial & Management Optimization, 2008, 4(4): 827-842. doi: 10.3934/jimo.2008.4.827.
Xiaolin Xu, Xiaoqiang Cai. Price and delivery-time competition of perishable products: Existence and uniqueness of Nash equilibrium. Journal of Industrial & Management Optimization, 2008, 4(4): 843-859. doi: 10.3934/jimo.2008.4.843.
CommonCrawl
Here you can download Password Depot 12.0.5 completely free of charge. Password Depot is an effective tool for managing all of your passwords. You will never forget a password again. Password Depot Professional protects your passwords from unauthorized external access while remaining very easy to use. Key features: creation of practically undecipherable passwords, data encryption, and automated logins. Password Depot stores your passwords in encrypted form in a database; to access them you only need to remember a single password. The database can also be accessed over a network. Removable USB media are supported, so your passwords are always with you on a flash drive. But the program's functionality does not end there: Password Depot not only stores confidential information, it can also suggest crack-resistant passwords, and it can delete files with no possibility of recovery.
• Best protection of your data thanks to double Rijndael 256 encryption!
• An integrated password generator creates virtually uncrackable passwords: instead of a password like "sweetheart" or "John", which can be cracked within a few minutes, you get a password like "g/:1bmV5T$x_sb}8T4@CN?A:y:Cwe-k)mUpHiJu:0md7p@"
* Best possible encryption. In Password Depot, your information is encrypted not merely once but in fact twice, thanks to the algorithm AES or Rijndael 256. In the US, this algorithm is approved for state documents of utmost secrecy!
* Double protection. You can secure your passwords files doubly. To start with, you select a master password that has to be entered in order to be able to open the file. Additionally, you can choose to protect your data by means of a key file that must be uploaded to open the file.
* Protection against brute-force attacks. After every time the master password is entered incorrectly, the program is locked for three seconds. This renders attacks that rely on the sheer testing of possible passwords – so called "brute-force attacks" – virtually impossible.
* Backup copies. Password Depot generates backup copies of your passwords files. The backups may be stored optionally on FTP servers on the Internet (also via SFTP) or on external hard drives. You can individually define the time interval between the backup copies' creation.
* Protection from keylogging. All password fields within the program are internally protected against different types of the interception of keystrokes (key logging). This prevents your sensitive data entries from being spied out.
* Traceless memory. Dealing with your passwords, Password Depot does not leave any traces in your PC's working memory. Therefore, even a hacker sitting directly at your computer and searching through its memory dumps cannot find any passwords.
* Clipboard protection: Password Depot automatically detects any active clipboard viewers and masks its changes to the keyboard; after performing auto-complete, all sensitive data is automatically cleared from the clipboard.
* Virtual keyboard. The ultimate protection against keylogging. With this tool you can enter your master password or other confidential information without even touching the keyboard. Password Depot does not simulate keystrokes, but uses an internal cache, so that they can be intercepted neither software- nor hardware-based.
* Uncrackable passwords. The integrated Password Generator creates virtually uncrackable passwords for you.
Thus in future, you will not have to use passwords such as "sweetheart" anymore, a password that may be cracked within minutes.
* Verified password quality. Let Password Depot check your passwords' quality and security! Intelligent algorithms will peruse your passwords and warn you against 'weak' passwords which you can subsequently replace with the help of the Passwords Generator.
* Password policies. You can define basic security requirements that must be met by all passwords which are added or modified. For instance, you can specify the passwords' minimum length and the characters contained therein.
* Security warnings. Password Depot contains a list of warnings which always keep an eye on your passwords' security. For instance, the program warns you in case you use the unsafe FTP protocol and in this case advises you to use SFTP instead.
* Protection against dictionary attacks. An important warning featured in Password Depot is the notification in case you are using unsafe passwords. These are passwords which are frequently used, therefore appear in hacker dictionaries and are easily crackable.
* Warning against password expiry. You can set Password Depot to warn you before your passwords expire, for instance before the expiry date of your credit card. This ensures that your password data always remains up-to-date and valid.
• Added an option to use the old (colored) icons in the user interface.
• Restored Drag&Drop function in the Favorites view.
• Added an option to hide the search field in the top bar.
• Added UTF-8 support to the CSV export function.
• Numerous bug fixes and user interface improvements.
CommonCrawl
Today I am starting to learn Markov chain and this is a question I've got. A simple message - either "yes" or "no" - is passed from one person to the next in a large group. Each person who receives the message "yes" has probability $ p $ of passing on the message "yes", and probability $ 1-p $ of passing on the message "no". Each person who receives the message "no" has probability of $ q $ of passing on the message "no", and probability $ 1-q $ of passing on the message "yes". Assume that $ 0 < p,q < 1 $. After many iterations, what is the probability that the final person in the group receives the message "yes"? How can I apply this to solve this question or there is another simple way to do it. Thank you. The transition matrix should be the transpose of what you have written. but of course you need to show this. It's just a bit of algebra. where $x$ is any initial probabilities you care to assign to the system. Last edited by romsek; August 11th, 2018 at 10:11 AM. Thank you for your help. I am so sorry for a repeated reply above but I cannot delete it by now. Since $T$ is primitive then there exist a matrix $V$ such that $T^n \to V $ as $n \to \infty $. Each row of $V$ is the same row vector $ v = (p_y, 1 - p_y) $. This is what I've got from a theorem in my class and your answer. But what is this value $p_y$? Is it the probability of the final person in the group receives the message "yes" after n iterations? Or could you please give me some references where I can learn to come up with these? I am sort of new to Markov chain. Last edited by Shanonhaliwell; August 11th, 2018 at 04:46 PM. $p_y$ is the overall probability that you receive a yes message. You start the system in some initial probabilistic state. Have you done any work on eigenvalues and eigenvectors yet? The transition matrix should be the transpose . . . Shanonhaliwell's post implied using $vT = v$, which wouldn't need $T$ to be transposed.
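To make the limiting behaviour discussed in this thread concrete, here is a small NumPy sketch (my own illustration, with arbitrary values of $p$ and $q$) that raises the row-stochastic transition matrix to a high power and compares the result with the closed-form limit $p_y = (1-q)\big/\big((1-p)+(1-q)\big)$:

```python
import numpy as np

p, q = 0.8, 0.6   # arbitrary example values

# Row-stochastic transition matrix over the states (yes, no)
T = np.array([[p, 1 - p],
              [1 - q, q]])

# After many iterations every row of T^n approaches the stationary distribution
Tn = np.linalg.matrix_power(T, 200)
print(Tn)

# Closed-form limiting probability that the final person hears "yes"
p_yes = (1 - q) / ((1 - p) + (1 - q))
print(p_yes)   # 0.666..., matching the first column of T^n
```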
CommonCrawl
1 . What should come in place of the question mark (?) in the following equations? 2 . What should come in place of the question mark (?) in the following equations? 3 . What should come in place of the question mark (?) in the following equations? 14 $\times$ 18.6 $\div$ 12 + 19.3 = ? 4 . What should come in place of the question mark (?) in the following equations? 5 . In the following number series only one number is wrong. Find out the wrong number. 6 . In the following number series only one number is wrong. Find out the wrong number. 7 . In the following number series only one number is wrong. Find out the wrong number. 8 . In the following number series only one number is wrong. Find out the wrong number. 9 . In the following number series only one number is wrong. Find out the wrong number. A 250-metre-long train crosses a platform in 10 seconds. What is the speed of the train?
CommonCrawl
The author introduces the notion of a Galois extension of commutative $S$-algebras ($E \infty$ ring spectra), often localized with respect to a fixed homology theory. There are numerous examples, including some involving Eilenberg-Mac Lane spectra of commutative rings, real and complex topological $K$-theory, Lubin-Tate spectra and cochain $S$-algebras. He establishes the main theorem of Galois theory in this generality. Its proof involves the notions of separable and etale extensions of commutative $S$-algebras, and the Goerss-Hopkins-Miller theory for $E \infty$ mapping spaces. He shows that the global sphere spectrum $S$ is separably closed, using Minkowski's discriminant theorem, and he estimates the separable closure of its localization with respect to each of the Morava $K$-theories. He also defines Hopf-Galois extensions of commutative $S$-algebras and studies the complex cobordism spectrum $MU$ as a common integral model for all of the local Lubin-Tate Galois extensions.
CommonCrawl
In this paper, we introduce the class of ideals with $(d_1,\ldots,d_m)$-linear quotients generalizing the class of ideals with linear quotients. Under suitable conditions we control the numerical invariants of a minimal free resolution of ideals with $(d_1,\ldots,d_m)$-linear quotients. In particular we show that their first module of syzygies is a componentwise linear module.
CommonCrawl
Abstract: We present results from the Wendelstein Weak Lensing (WWL) pathfinder project, in which we have observed three intermediate redshift Planck clusters of galaxies with the new 30'$\times 30$' wide field imager at the 2m Fraunhofer Telescope at Wendelstein Observatory. We investigate the presence of biases in our shear catalogues and estimate their impact on our weak lensing mass estimates. The overall calibration uncertainty depends on the cluster redshift and is below 8.1-15 per cent for $z \approx 0.27-0.77$. It will decrease with improvements on the background sample selection and the multiplicative shear bias calibration. We present the first weak lensing mass estimates for PSZ1 G109.88+27.94 and PSZ1 G139.61+24.20, two SZ-selected cluster candidates. Based on Wendelstein colors and SDSS photometry, we find that the redshift of PSZ1 G109.88+27.94 has to be corrected to $z \approx 0.77$. We investigate the influence of line-of-sight structures on the weak lensing mass estimates and find upper limits for two groups in each of the fields of PSZ1 G109.88+27.94 and PSZ1 G186.98+38.66. We compare our results to SZ and dynamical mass estimates from the literature, and in the case of PSZ1 G186.98+38.66 to previous weak lensing mass estimates. We conclude that our pathfinder project demonstrates that weak lensing cluster masses can be accurately measured with the 2m Fraunhofer Telescope.
CommonCrawl
Let $R$ be a commutative ring. The zero divisors of $R$, which we denote $Z(R)$, is the set-theoretic union of prime ideals. This is just because in any commutative ring, the set of subsets of $R$ that can be written as unions of prime ideals is in bijection with the saturated multiplicatively closed sets (the multiplicatively closed sets that contain the divisors of each of their elements).

Istvan Beck in 1986* introduced an undirected graph (in the sense of vertices and edges) associated to the zero divisors in a commutative ring. Recall that an undirected graph is just a set of vertices (points) and edges connecting the points. What is his graph? His idea was to let the vertices correspond to points of $R$, and the edges correspond to the relation that the product of the corresponding elements is zero. There is a slightly different definition due to Anderson and Livingston, which is the main one used today. Let $Z(R)^*$ denote the nonzero zero divisors. Their graph is $\Gamma(R)$, which is defined as the graph whose vertices are the elements of $Z(R)^*$, and whose edges are defined by connecting two distinct points if and only if their product is zero. Naturally, if $Z(R)^*$ is not empty then the resulting graph $\Gamma(R)$ will have some edges. The actual information contained in $\Gamma(R)$ is pretty much the same as the information contained in Beck's version and so we'll just stick with $\Gamma(R)$.

For this idea to be more than just a curiosity, the graph theoretic properties of $\Gamma(R)$ should tell us something about the ring theoretic properties of $R$. Does it? Anderson and Livingston showed in 1998 that there exists a vertex of $\Gamma(R)$ adjacent to every other vertex if and only if either $R = \Z/2\times A$ where $A$ is an integral domain or $Z(R)$ is an annihilator. They also showed that for $R$ a finite commutative ring, if $\Gamma(R)$ is complete, then $R\cong \Z/2\times \Z/2$ or $R$ is local with characteristic $p$ or $p^2$. Akbari and Mohammadian got pretty interesting results about $\Gamma(R)$ in 2004. They showed that if $R$ is a finite ring with no nontrivial nilpotent elements that is not $\Z/2\times \Z/2$ or $\Z/6$ and $S$ is such that $\Gamma(R)\cong \Gamma(S)$ as graphs then $R\cong S$. So basically for finite reduced rings, the set of zero divisors under multiplication very nearly determines the ring itself!

* Note: Years quoted in this post are the years of the published papers.
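To see a zero-divisor graph concretely, here is a short Python sketch (my own illustration, standard library only) that lists the vertices and edges of $\Gamma(\mathbb{Z}/n)$ for a small $n$:

```python
from itertools import combinations

def zero_divisor_graph(n):
    """Vertices and edges of Gamma(Z/n): distinct nonzero zero divisors x, y with x*y = 0 mod n."""
    zd = [x for x in range(1, n) if any((x * y) % n == 0 for y in range(1, n))]
    edges = [(x, y) for x, y in combinations(zd, 2) if (x * y) % n == 0]
    return zd, edges

vertices, edges = zero_divisor_graph(12)
print(vertices)  # nonzero zero divisors of Z/12: [2, 3, 4, 6, 8, 9, 10]
print(edges)     # pairs such as (2, 6), (3, 4), (3, 8), (4, 6), ...
```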
CommonCrawl
Can I apply word2vec to find document similarity? I appreciate word2vec is used more to find the semantic similarities between words in a corpus, but here is my idea. Essentially, we are weighting and summing the word vectors for each word in a document. Each document is now represented by an $N\times 1$ dimensional vector, where $N$ is the number of features chosen in word2vec (the dimensionality hyperparameter - and I have mine set quite low at 150). Now that all documents create a sort of unit ball in the $N$ dimensional space, we can now find clusters of similar documents / the most similar documents given an input query document, using k-nearest neighbors or k-means. My question is, is this method viable? I have tried doc2vec, TfIDF, LDA and used appropriate similarity metrics for each (with good results), but my documents are quite short (20-100 tokens) and word2vec has worked very well alone. So I want to know if I can apply the method above or is there anything blatantly wrong with what I am doing here? Any other tips + advice would also be much appreciated.

Some time ago I tried this idea on 20 newsgroups data. I used GloVe embeddings from the authors' site (Wikipedia ones). Aggregating word embeddings using TF-IDF doesn't give good results. It is actually worse than just using TF-IDF features. See results in this notebook (Accuracy on tfidf data vs Accuracy on weighted embedded words). I also made plots of truncated SVD/PCA of the encoded documents - it seems like aggregated embeddings just make everything close to everything. To illustrate this I tried to find the closest words for document encodings in word-embedding space - it seems like they just lie close to common words (see Closest $10$ words to mean-aggregated texts). That being said, this notebook is just a toy example and it only suggests that the simplest approach won't work for this data. For instance I didn't try to filter out common words based on some threshold. Also maybe it would make more sense to first extract summaries from the documents (for example TextRank sort of retrieves the most informative paragraphs based partly on the TF-IDF score of their words). If you want to try more elaborate techniques, I think that Gensim covers much of this stuff (for example extractive summarization via TextRank and similar algorithms).

There is nothing wrong with the method; it has been explored in the literature a lot. This is, for instance, how many papers evaluate word embeddings extrinsically on tasks like classification. One would expect, however, to lose some accuracy as the length of the documents increases. Models like doc2vec have been proposed to address such limitations, but it is always better to test them on your benchmark.

Basically the word2vec method intrinsically takes into account the tf (term frequency) of each word. There is no need to emphasize it twice. On the other hand, maybe it is a good idea to emphasize the words with high tf-idf, owing to the fact that these words are not seen often enough in the training phase. I think the way to do that is not simple multiplication; rather, you can feed the network the contexts of high tf-idf words more often than the other contexts.

Not the answer you're looking for? Browse other questions tagged text-mining similarities word2vec or ask your own question. How to calculate the similarity of two corpora (each of which contains a set of documents)? How to train sentence/paragraph/document embeddings? How does word2vec work for word similarity?
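Returning to the weighted-averaging idea in the question, here is a hedged sketch of one way it could look (my own illustration; it assumes Gensim 4 and a recent scikit-learn, and the model path and corpus are placeholders):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import KeyedVectors

# Placeholder paths / corpus -- substitute your own.
wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)
docs = ["first short document ...", "second short document ..."]

tfidf = TfidfVectorizer()
tfidf.fit(docs)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def doc_vector(doc):
    """IDF-weighted average of the word vectors in a document (tf enters via repeated tokens)."""
    tokens = [t for t in doc.lower().split() if t in wv and t in idf]
    if not tokens:
        return np.zeros(wv.vector_size)
    weights = np.array([idf[t] for t in tokens])
    vectors = np.array([wv[t] for t in tokens])
    return weights @ vectors / weights.sum()

X = np.array([doc_vector(d) for d in docs])
# Cosine similarity between the two documents
sim = X[0] @ X[1] / (np.linalg.norm(X[0]) * np.linalg.norm(X[1]) + 1e-12)
print(sim)
```

With document vectors built this way, k-nearest neighbors or k-means on cosine distance works exactly as described in the question.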
CommonCrawl
The subtraction operation in the domain of integers $\Z$ is written "$-$". As the set of integers is the Inverse Completion of Natural Numbers, it follows that elements of $\Z$ are the isomorphic images of the elements of equivalence classes of $\N \times \N$ where two tuples are equivalent if the difference between the two elements of each tuple is the same. Thus subtraction can be formally defined on $\Z$ as the operation induced on those equivalence classes as specified in the definition of integers. In the context of mathematical logic it is sometimes referred to as proper subtraction so as to distinguish it from the partial subtraction operation as defined on the natural numbers.
CommonCrawl
Abstract: The paper studies a subclass, referred to as $PBDD(n_1,n_2)$, of the class of nonsingular $H$-matrices. A new characterization of matrices in $PBDD(n_1,n_2)$ is suggested. Two-sided bounds for the determinants of matrices in the class $PBDD(n_1,n_2)$ are derived, and their applications to strictly diagonally dominant matrices and to matrices with the Ostrowski–Brauer diagonal dominance are presented. An upper bound for the infinity norms of the inverses of matrices in $PBDD(n_1,n_2)$ is considered. Extensions to the case of block $k\times k$ matrices, $k\ge 2$, are addressed. Li Hou-Biao, Huang Ting-Zhu, Li Hong, "Some New Results on Determinantal Inequalities and Applications", J. Inequal. Appl., 2010, 847357, 16 pp.
CommonCrawl
The extra bonus DVD features the four music videos for this album, the 5.1 mixes, tour programs, and an hour long VHS-like recording of a rehearsal of this tour that makes you feel like a fly on Genesis' wall. Worth the price I paid, I'm very happy to finally get rid of my old CD of it for once & That's All!... The Boot FAQ from the old official site forum is now available here. Many thanks to Eric R. The Farm tape list is also available here. Over recent months, a couple of fantastic sounding soundboard recordings have become available on the Wolfgang's Vault site. BK-5-1-8 Black (Ivory).....Carbon Black + Gray 06 2:1 BK-5-1-9 Black (Lamp)..Black (Carbon) 01 BK-4-1-9 Black (Mars).....Black (Mars) The first number in the ratio refers to the first color in the mix, the second refers to the second, and so on. T in the mixing ratio means just a touch of this color. (D) after a color name indicates it is Discontinued. The printed color is a... The Bluetooth® word mark and logos are registered trademarks owned by Bluetooth SIG, Inc. and any use of such marks by Genesis Technologies is under license. For instance, if you have 9 cups of the 80% mix, you'll need to add $0.6 \times 9 = 5.4$ cups of water to it to get to a 50% mix; when you're done, you'll have the original 9 cups of mix plus 5.4 cups of water to get 14.4 cups of mix, which is $1.6 \times 9$. Introduction to Genesis (Genesis 1:1-2:4) The book of Genesis is the first of the five books that Moses wrote (known collectively as the Pentateuch or Torah), apparently during the 40 years that Israel wandered in the wilderness before being brought into Canaan, the Promised Land, under Joshua. Comes in a Double Jewel Case with a booklet containing lyrics and credits. Track durations are not printed on the release. Disc one is a hybrid CD/SACD.
CommonCrawl
Abstract : A packing of subsets $\mathcal S_1,\dots, \mathcal S_n$ in a group $G$ is a sequence $(g_1,\dots,g_n)$ such that $g_1\mathcal S_1,\dots,g_n\mathcal S_n$ are disjoint subsets of $G$. We give a formula for the number of packings if the group $G$ is finite and if the subsets $\mathcal S_1,\dots,\mathcal S_n$ satisfy a genericity condition. This formula can be seen as a generalization of the falling factorials which encode the number of packings in the case where all the sets $\mathcal S_i$ are singletons.
CommonCrawl
When does a sheaf of categories represent a homotopy sheaf? Suppose that $F$ is a sheaf of categories (on a Grothendieck site or even a topological space). By this, I mean a sheaf in the naive 1-categorical sense, so it can equivalently be viewed as a category object in sheaves of sets. By taking the nerve, one can view $F$ as a simplicial sheaf. $F$ will not take values in Kan complexes however (unless $F$ takes values in groupoids). Question: Are there checkable conditions for $F$ to satisfy homotopy descent (besides the Cech diagram consisting of fibrations), in the sense that if $RF$ is the fibrant replacement of $F$ in the Joyal model structure on simplicial sheaves (so I'm modelling $\infty$-sheaves of $\infty$-groupoids here), then the canonical map $$F \to RF$$ is object-wise a weak equivalence of simplicial sets? Browse other questions tagged ct.category-theory higher-category-theory simplicial-stuff model-categories or ask your own question. Is there a combinatorial way to factor a map of simplicial sets as a weak equivalence followed by a fibration? Computing homotopy (co)limits in a nice simplicial model category? Why is the Straightening functor the analogue of the Grothendieck construction?
CommonCrawl
Performs a safe (local) redirect, using wp_redirect(). If the host is not allowed, then the redirect defaults to wp-admin on the siteurl instead. This prevents malicious redirects which redirect to another host, but only used in a few places. (string) (Required) The path or URL to redirect to. (int) (Optional) HTTP response status code to use. Default '302' (Moved Temporarily). (string) (Optional) The application doing the redirect. (bool) $redirect False if the redirect was cancelled, true otherwise. * Filters the redirect fallback URL for when the provided redirect is not safe (local). * @param string $fallback_url The fallback URL to use by default. * @param int $status The HTTP response status code to use. 5.1.0 The return value from wp_redirect() is now passed on, and the $x_redirect_by parameter was added. Filters the redirect fallback URL for when the provided redirect is not safe (local).
CommonCrawl
Since $g\circ f\,$ is a bijection, $f\,$ is bound to be an injection. Indeed, $f(x_1)=f(x_2)\,$ for $x_1\ne x_2\,$ would imply $g(f(x_1))=g(f(x_2)),\,$ in contradiction with $g\circ f\,$ being a bijection. Since $f\,$ is an injection but not a surjection, $|X|\lt |Y|.$ This directly implies that $g\,$ is not an injection and $f,\,$ indeed, is not a surjection. We then have $g(f(a))=b,$ $g(f(b))=c,$ $g(f(c))=a,$ so that $g\circ f\,$ is not an identity. Thus all the conditions of the problem are fulfilled. This problem has been given as a homework assignment for CIS160 at UPenn, where my young son is a freshman (2016-2017).
CommonCrawl
Consider a piece that starts at a corner of an ordinary $8 \times 8$ chessboard. At each turn, it moves one step, either up, down, left, or right, with equal probability, except that it must stay on the board, of course, and it must not return to a square previously visited.

Clarification. On any given step, if the piece has $n$ available moves (excluding those that would put the piece on a previously visited square), it chooses randomly and uniformly from those $n$ moves. Example: Starting from a corner, at first move, $n = 2$, and either move is chosen with probability $1/2$. Next move, $n = 2$ also, because it cannot return to the corner, and so either of the two other moves is chosen with probability $1/2$. On the third move, if it is on the edge, $n = 2$, while if it is off the edge, $n = 3$. And so on. It is possible for the piece to be deadlocked at some point prior to completing a tour of the chessboard. For instance, if it starts at lower left, and moves up, right, right, down, left, it is now stuck.

To see how such an analysis goes, consider the smaller $3 \times 3$ board with the piece starting at a corner. With probability $1/2$, the piece moves to the center square $(2, 2)$ on its second move. From there, only one move permits completion of the tour—the move to $(1, 2)$—and in that case, the tour is guaranteed to complete. This move is chosen with probability $1/3$. With probability $1/2$, the piece moves to $(3, 1)$ on its second move. It is then forced to move to $(3, 2)$. With probability $1/2$, it then moves to $(3, 3)$ on its fourth move and is guaranteed to complete the tour. Otherwise, also with probability $1/2$, it moves to the center square $(2, 2)$ on its fourth move. From there, it moves to $(1, 2)$ with probability $1/2$ (and is then guaranteed to complete the tour), or to $(2, 3)$ also with probability $1/2$ (and is then unable to complete the tour).

Suppose a piece stands in the vertex $v$ of a graph $G = (V, E)$. At each turn, it moves along any edge starting from the vertex it is in currently, with equal probability, except that it must not return to a vertex previously visited. If $P(v, G)$ denotes the probability that such a walk visits every vertex of $G$, then
$$P(v, G) = \frac{1}{\deg_G(v)} \sum_{w:\,\{v,w\}\in E} P(w, G\setminus v),$$
with $P(v, G) = 1$ when $v$ is the only vertex of $G$, and $P(v, G) = 0$ when $v$ has no neighbours but other vertices remain. Here $G\setminus v$ stands for a graph that is constructed by removing $v$ and all adjacent edges. Using that formula the exact probability can always be calculated.

Not the answer you're looking for? Browse other questions tagged probability combinatorics stochastic-processes random-walk chessboard or ask your own question. What is the Probability that a Knight stays on chessboard after N hops? What is the probability of traversing through an $n \times n$ board in exactly $K$ moves by moving uniformly at random?
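Returning to the recursion above, a small Python sketch (my own illustration, standard library only) computes the exact completion probability for the $3 \times 3$ board discussed earlier:

```python
from fractions import Fraction
from functools import lru_cache

N = 3  # board size; the same code works for larger boards, but the cost grows quickly
CELLS = [(r, c) for r in range(N) for c in range(N)]

def neighbors(cell):
    r, c = cell
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < N and 0 <= c + dc < N]

@lru_cache(maxsize=None)
def p_complete(current, remaining):
    """Probability that a uniform self-avoiding walk from `current`
    visits every cell of `remaining` (the cells not yet visited, including `current`)."""
    rest = frozenset(remaining) - {current}
    if not rest:
        return Fraction(1)
    moves = [w for w in neighbors(current) if w in rest]
    if not moves:
        return Fraction(0)
    return sum(p_complete(w, rest) for w in moves) / len(moves)

print(p_complete((0, 0), frozenset(CELLS)))  # 13/24, consistent with the case analysis above
```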
CommonCrawl
Weak convergence in a Hilbert space is defined as pointwise convergence of functionals associated to the elements of the space. Specifically, weakly if the associated functionals converge to pointwise. The triangle inequality shows that strong convergence implies weak convergence, as expected. And the converse is not necessarily true, as the example of a Hilbert space shows. Aside: the above is not the only way to define weak convergence in metric spaces. Another approach is to think of in terms of projection onto a line through . A metric space version of this concept is the nearest-point projection onto a geodesic curve. This is a useful approach, but it is only viable for metric spaces with additional properties (geodesic, nonpositive curvature). Also, both of these approaches take the Hilbert space case as the point of departure, and do not necessarily capture weak convergence in other normed spaces. Considering both sides as functions of one variable , for a fixed , shows that for , because the left hand side is non-differentiable at while the right hand side is non-differentiable at . Then the desired identity simplifies to which is false. Oh well, that sequence wasn't weakly convergent to begin with: by Schur's theorem, every weakly convergent sequence in also converges strongly. This example also shows that not every bounded sequence in a metric space has a weakly convergent subsequence, unlike the way it works in Hilbert spaces. I suggest an alternative: if $X$ is the metric space, then $x_n \to x$ weakly iff $f(x_n) \to f(x)$ for every short map $f : X \to \mathbb R$. This does not seem different from ordinary convergence in the metric, since $f$ could be the distance function to $x$, $f(z) = d(z,x)$.
CommonCrawl
You're standing outside your apartment building after a late night out, with perhaps one beer too many, and you realize you have completely forgotten the code to get in. Luckily, you're a mathematical genius, and somehow that part of the brain is completely unaffected by the alcohol, so you decide to find the optimal strategy to get in. The code is entered on a standard 0-9 keypad, and you know that it is a four digit code. Entering the correct sequence of digits will open the door. When entering a sequence, every four digit sub sequence will be evaluated by the security system. I.e. entering 195638 will evaluate the following three sequences: 1956, 9563, and 5638, so it is a waste of time to try each four digit sequence one by one. Question: What is the smallest number of keypresses needed to try every sequence between 0000 and 9999? The De Bruijn sequence $B(k,n)$ is a cyclic sequence over an alphabet of size $k$ that contains every possible word of length $n$ exactly once; see the wikipedia for more information. The length of such a De Bruijn sequence $B(k,n)$ is $k^n$, that is, it equals the number of words of length $n$ over an alphabet of size $k$. The De Bruijn sequence $B(10,4)$ is over an alphabet of size 10 (that is, the ten digits 0,1,2,$\ldots$,9), and it contains every possible word of length 4 (that is, every 4-digit number). The length of $B(10,4)$ is $10^4$. The puzzle now does not ask about a cyclic sequence, but about an ordinary linear sequence. If we follow the cyclic De Bruijn sequence $B(10,4)$ around the cycle, we will encounter every 4-digit number exactly once. This takes $10^4$ digits plus an additional 3 digits at the end (where the cycle closes, while the linear sequence remains open). Hence the answer is $10^4+3=10003$ keypresses. Not the answer you're looking for? Browse other questions tagged mathematics calculation-puzzle combinatorics or ask your own question.
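As a quick sanity check of the $10^4+3$ answer, here is a short Python sketch (my own, using a greedy "prefer-largest-digit" construction rather than an explicit cyclic De Bruijn sequence) that builds such a covering sequence and confirms that every 4-digit code appears:

```python
def covering_sequence(k=10, n=4):
    """Greedy construction: start with n zeros, then always append the largest digit
    that does not repeat an already-seen window of length n."""
    seq = [0] * n
    seen = {tuple(seq)}
    while True:
        for d in range(k - 1, -1, -1):
            window = tuple(seq[-(n - 1):] + [d])
            if window not in seen:
                seen.add(window)
                seq.append(d)
                break
        else:
            break          # no digit can be appended without repeating a window
    return seq

seq = covering_sequence()
codes = {tuple(seq[i:i + 4]) for i in range(len(seq) - 3)}
print(len(seq), len(codes))   # expect 10003 keypresses covering all 10000 codes
```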
CommonCrawl
That is, the future stock price follows a normal distribution. I am now confused, which process do I use to answer questions about the probabilistic nature of future stock prices? The context for my last question can be found here. 1) Why do we use geometric Brownian motion ($\ln S_t-\ln S_0$ is normally distributed)? In this case you have $$ S_t = S_0 \exp( (\mu-\sigma^2/2) t + \sigma B_t), $$ which means that you model positive prices. Furthermore the log-return $$ \ln(S_t/S_0) = (\mu-\sigma^2/2) t + \sigma B_t, $$ is normally distributed. As log returns can cover the whole real line $(-\infty,\infty)$ this is a nice model. Keep in mind that simple returns $S_t/S_0-1$ can only take values from $[-1,\infty)$. The best place to model a normal distribution is the whole real line. If you use a model (the Bachelier model) $$ S_t = S_0 + \mu t + \sigma B_t, $$ then your returns $S_t-S_0$ are normally distributed. But there is the chance that prices get negative (if $B_t$ becomes very negative). You probably don't want this in your model. Some people use this model nevertheless to price options that are close to maturity as you don't need such large $\sigma$ to match (relatively high) prices of OTM options. For 2) Why is $P(\ln S>\ln X)=P(S > X)$? because the logarithm is a monotonous transformation. We speak of the same events. If $S>X$ then always $\ln S > \ln X$. Thus the same events have the same probability. Another example $$ P ( S > X ) = P ( S+4 > X + 4). $$ Just the same events. (Independence of increments) W(t) − W(s) , for t > s , is independent of the past, that is, of W(u) , 0 ≤ u ≤ s, or of $F_s$ , the σ-field generated by W(u), u ≤ s. (Normal increments) W(t) − W(s) has Normal distribution with mean 0 and variance t − s. This implies (taking s = 0) that W(t) − W(0) has N(0, t) distribution. (Continuity of paths) W(t), t ≥ 0 are continuous functions of t. Not the answer you're looking for? Browse other questions tagged brownian-motion normal-distribution lognormal or ask your own question. How to compute the conditional expected value of a geometric brownian motion?
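To make the two descriptions concrete, here is a short simulation sketch (my own, with arbitrary parameter values, assuming NumPy and SciPy are available) that compares the lognormal probability $P(S_T > X)$ computed from $\ln S_T \sim N\!\big(\ln S_0 + (\mu - \sigma^2/2)T,\ \sigma^2 T\big)$ with a Monte Carlo estimate under GBM:

```python
import numpy as np
from scipy.stats import norm

S0, mu, sigma, T, X = 100.0, 0.05, 0.2, 1.0, 110.0   # arbitrary example values

# Closed form: ln(S_T) ~ N(ln S0 + (mu - sigma^2/2) T, sigma^2 T),
# and P(S_T > X) = P(ln S_T > ln X) because ln is monotone.
m = np.log(S0) + (mu - 0.5 * sigma**2) * T
s = sigma * np.sqrt(T)
p_closed = 1.0 - norm.cdf((np.log(X) - m) / s)

# Monte Carlo under GBM: S_T = S0 * exp((mu - sigma^2/2) T + sigma * B_T)
rng = np.random.default_rng(0)
B_T = rng.normal(0.0, np.sqrt(T), size=1_000_000)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * B_T)
p_mc = np.mean(S_T > X)

print(p_closed, p_mc)   # the two estimates should agree to about three decimal places
```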
CommonCrawl
A fence consists of $n$ vertical boards. The width of each board is 1 and their heights may vary. You want to attach a rectangular advertisement to the fence. What is the maximum area of such an advertisement? The first input line contains an integer $n$: the width of the fence. After this, there are $n$ integers $k_1,k_2,\ldots,k_n$: the height of each board. Print one integer: the maximum area of an advertisement.
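This is the classic "largest rectangle in a histogram" task, and the intended approach runs in $O(n)$ with a monotonic stack. Here is a hedged Python sketch of that idea (not an official reference solution; it reads the input format described above from standard input):

```python
import sys

def max_advertisement_area(heights):
    """Largest rectangle under the histogram, via a monotonic stack of indices."""
    stack = []                        # indices with increasing heights
    best = 0
    extended = heights + [0]          # sentinel flushes everything at the end
    for i, h in enumerate(extended):
        while stack and extended[stack[-1]] >= h:
            top = stack.pop()
            height = extended[top]
            left = stack[-1] + 1 if stack else 0
            best = max(best, height * (i - left))
        stack.append(i)
    return best

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    k = list(map(int, data[1:1 + n]))
    print(max_advertisement_area(k))

if __name__ == "__main__":
    main()
```

Each board index is pushed and popped at most once, which is what gives the linear running time.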
CommonCrawl
You have a bathtub with two taps (cold and hot). Each tap pours out water at a fixed temperature. You can control the flow rate of each tap separately. For each tap, you can set the rate from 0 to the maximum rate for that tap. You are given the cold temperature, the hot temperature, the two maximum rates, and a target temperature. Your job is to fill the bathtub as fast as possible, while keeping the water temperature as close as possible to a target value. However, you must not make the water any colder than the target. The target temperature $t_0$ is guaranteed to be greater than or equal to the cold temperature and less than or equal to the hot temperature, so that the problem is guaranteed to have a solution. You need to find $y_1$ and $y_2$, the rate of each tap that will produce the optimal value for $t$. That's the end of the problem statement, except for one important detail that I'll reveal shortly.

If you read through it a few times, the problem itself isn't too complicated. Or maybe you find it simple after just one reading, in which case you're ahead of the game. The obvious approach is a brute-force double loop: try every pair of integer rates $(y_1, y_2)$ and keep the best result. It turns out that this solution doesn't work on CodeForces. To prevent contestants from using it, the puzzle authors cleverly set the range of the $x_1$ and $x_2$ values to $1 \leq x_1,x_2 \leq 10^6$, which eliminates any possibility of this $O(n^2)$ algorithm completing in the allotted time (a few seconds) for all test cases. (That's the important detail that I left out of the problem statement).

But we don't have to give up on the solution entirely. It turns out that we can use a slightly more clever brute force solution that relies on eliminating the inner (hot tap) loop. For each cold tap rate, we can calculate a corresponding hot tap rate that allows us to check only a subset of the hot tap values and still find the optimal pair. The mix temperature is given by $t = (t_1 y_1 + t_2 y_2)/(y_1 + y_2)$ — call this equation (1). Solving it for the hot rate that hits the target exactly gives $y_2 = y_1 (t_1 - t_0)/(t_0 - t_2)$, which we can rewrite as equation (2): $y_2 = y_1 (t_0 - t_1)/(t_2 - t_0)$. The last step is not strictly necessary, but it ensures that the temperature values are positive when we use this formula in the algorithm, since $t_1 \leq t_0 \leq t_2$. That simplifies the code a bit.

The result, $y_2$, is a floating point number representing the hot rate that will produce a water temperature of exactly $t_0$ when the cold rate is $y_1$. But the problem statement says the tap rates are integers, so we won't be able to use the calculated $y_2$ value in most cases. To convert the result to an integer, we can use $\lceil y_2 \rceil$ (the ceiling function). In general, that will give us a temperature value that differs from the target. We'll need to calculate how much it differs, and then whether we found a temperature that is closer to the target than we have seen so far.

Now that we have a good idea of how to solve the problem, it's time to work out the details of the algorithm. One detail to work out is what to do in a few special cases. First, what happens when the cold or the hot temperature (or both) is equal to the target temperature. If the cold temperature is equal to the target temperature and the hot temperature is some other value, then we can match the target temperature exactly by just using the cold tap and leaving the hot tap off. And since we want to fill the bath as fast as possible, we need to use the maximum rate. There is an analogous case when the hot temperature is equal to the target. If both temperatures are equal to the target, then we can use the maximum rate for both taps. Is there any other case where we should leave the cold tap off to get the optimal temperature? It turns out that the answer is no.
In other words, the special case in which $t_2=t_0$ is the only situation in which $y_1=0$ is the optimal cold rate. To see why this is, consider a case where $t_2 \neq t_0$ (the hot temperature is not equal to the target). Then it must be true that $t_2>t_0$ (the hot temperature must be greater than the target) because we know that the target is between the hot and cold temperatures. So we can always get closer to $t_0$ by applying some cold water — that is, by applying water with temperature $t_1 \leq t_0$. Therefore, when we are looping through cold temperature values to find the optimal one, the values we need to loop through are the integers $1 \leq y_1 \leq x_1$. For each value of $y_1$:

* Using the current cold rate (loop variable $y_1$) and the givens, calculate a hot rate $y_2$ using equation (2), and round up to the nearest integer $\lceil y_2 \rceil$.
* If $y_2$ is not in the range $0..x_2$, it's not a valid rate, so skip the remaining steps and continue with the next iteration.
* Using equation (1), calculate the actual temperature $t$ produced by the current $y_1$ and $y_2$.
* If $t$ is closer to $t_0$ than any $t$ found so far, save $t-t_0$, $y_1$, and $y_2$.

When the loop is complete, the saved $y_1$ and $y_2$ will be the values that produce the optimal temperature and total rate.

One decision to make when implementing the algorithm is whether to use any floating point variables, or only integers. The floating point implementation is conceptually simpler, but you need to be careful about rounding issues when comparing values. The standard way to avoid unexpected results when comparing floats is to select a small epsilon value, and verify that numbers are within this value rather than strictly equal. For the integer implementation, the challenge is that you can't use the formulas as-is, since they produce floating point results. For my implementation, I decided to use floating point values.

The solution to Hot Bath can be implemented in under 75 lines of code, and the algorithm is mostly brute force. This is why the best competitive programmers can finish it in about 10 minutes. The main insight required to solve the puzzle is realizing that formula (1), the one given in the problem description, must be manipulated into form (2). This provides an efficient way to calculate the optimal $y_2$ for any $y_1$, rather than having to consider every $(y_1,y_2)$ combination, which is impractical given the input ranges.

Despite the apparent simplicity of the solution, there are some tricky details to consider. For someone like me who hasn't yet solved a lot of these types of puzzles, it's plenty challenging. First you have to get the insight about how the formula given in the problem statement needs to be transformed. Then you need a "half brute-force" algorithm using the modified formula. Finally, there are the implementation details about how to work with floating point numbers while preserving correctness or, alternatively, how to adjust the formulas so that integer data types can be used.

Looking at the steps required to come up with an implementation from scratch, it seems amazing that top contestants could come up with a solution in barely more time than it takes to type the code. But it's a characteristic of complex skills that what seems impossible from the perspective of a beginner turns out to be a routine performance for people who have been doing the right kind of practice for long enough.
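For reference, here is a hedged Python sketch of the loop described above (my own reconstruction, not the author's sub-75-line solution; it assumes the mix-temperature formula from equation (1) and a simple tie-break that prefers a larger total flow):

```python
import math

def hot_bath(t1, t2, x1, x2, t0):
    """Rates (y1, y2) giving a temperature as close to t0 as possible (never below it),
    preferring the larger total flow on ties."""
    if t2 == t0:                        # hot water is already at the target
        return (x1, x2) if t1 == t0 else (0, x2)

    best_key, best = None, (0, x2)      # fall back to hot-only if no mix is feasible
    for y1 in range(1, x1 + 1):
        # Equation (2): smallest integer hot rate keeping the mix at or above t0
        y2 = math.ceil(y1 * (t0 - t1) / (t2 - t0))
        if y2 > x2:
            continue
        t = (t1 * y1 + t2 * y2) / (y1 + y2)      # equation (1)
        key = (t - t0, -(y1 + y2))
        if best_key is None or key < best_key:
            best_key, best = key, (y1, y2)
    return best

print(hot_bath(10, 70, 100, 100, 25))   # made-up example: prints (99, 33)
```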
CommonCrawl
Three argon gas-puff implosions were performed on the Z-machine at SNL. These three loads had the same density profile from an 8cm dia. nozzle, a 1mg/cm mass, and a 2.5cm length. The experiments produced similar radiative powers and yields (B. Jones et al. PoP 22,020706(2015)). Simulations with the 2D MHD code Mach2-TCRE reproduced the experimental K-shell powers, yields, and emission region. It was also shown that the ratio of the Ly$\alpha $ to He$\alpha +$IC lines from the simulation had good agreement to measurements after peak K-power; however, the simulation's line ratio was higher prior to the peak power. The authors attribute the difference to 3D effects or on the implicit assumption of steady-state population kinetics (J. Thornhill et al. IEEE TPS 43,2480(2015)). This presentation will illustrate the effect of time-dependent level populations on the radiation from simulations using the NRL DZAPP code. DZAPP is a coupled 1D MHD, detailed non-LTE atomic physics with radiation transport, incorporating a transmission line circuit. The line ratios and K-powers from the steady-state and time-dependent populations will be presented and compared with experiment. This work supported by DOE/NNSA. SNL is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the US DOE/NNSA under contract DE-NA-0003525.
CommonCrawl
Keywords Molecular Dynamics, Brittle Fracture, Crack Propagation, Grain Boundary ,$\alpha$-Fe. Abstract In this paper, we present a classical molecular dynamics algorithm and its implementation on Cray C90 and Fujitsu VPP700. The characters of this algorithm consist in a grid based on the block division of the atomic system and a neighbor list based on the use of a short range potential. The computer program is used for large scale simulations on a Cray C90 and a 32-node VPP700, and measurements of computational performance are reported. Then, we examine the interaction between a crack propagating and a tilt grain boundary under uniaxial tension using this computer program. The Johnson potential for$\alpha$-Fe is used in these simulations. A structural transition from bcc to hcp induced by hydrostatic stress and brittle crack propagation are observed in a system including a crack whose direction is in the (101) plane. In a system including both the crack and a (112) grain boundary which is symmetric and stable, not only the phase transition but also crack propagation is restrained by the grain boundary. In a system including both the crack and a (111) grain boundary which is asymmetric and unstable, intergranular crack propagation occurs after the crack tip reaches the grain boundary.
CommonCrawl
Each column, each row and each box ($3\times3$ subgrid) must have the numbers $1$ through $9$. The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid. At the bottom and right side of the $9\times9$ grid are numbers, each of which is the product of a column or row of unknown digits marked by asterisks. Altogether a set of 18 equations can be formed from the columns and rows of unknown digits and constants.
CommonCrawl
Let's say I have 20 people who rated a service, and the average rating for them was 4/5. If a new person gives a rating of 3/5, what's the new average rating? Note the answer doesn't have to be exact; I am using the calculation for a rating component, so I am going to round the result as I will have only integers.

Assuming that the true average rating (i.e. without any rounding) was exactly 4, then the total rating for all 20 people is $4 \times 20 = 80$. So the total rating after adding the 21st rating in is $80 + 3 = 83$, so the average rating across 21 people is $83/21 \approx 3.95$. If you don't store either an accurate value for the average, or the total of all ratings, you're going to have trouble adjusting the rating as new ones come in, since rounding will tend to push everything back to the previous value. If you just store that value rounded to the nearest whole number, you'll say all of those are 4. Then as new ratings come in, you'll probably keep rounding the result to 4 for as long as you like - even if a million people all give a rating of 1, if you update after every rating you'll never see the value shift. Storing to 2 decimal places means that the same problem will happen once you reach a few thousand ratings.

On the estimation of the average of a deterministic, scalar, real-valued function of two variables.
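A minimal sketch of the bookkeeping suggested in the answer (store the exact count and total, and round only for display):

```python
def add_rating(total, count, new_rating):
    """Keep the exact running total and count; round only when displaying."""
    return total + new_rating, count + 1

total, count = 4 * 20, 20          # 20 ratings averaging exactly 4
total, count = add_rating(total, count, 3)
print(total / count)               # 83/21 ≈ 3.952...
print(round(total / count))        # displayed rating stays 4
```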
CommonCrawl
The king of Ebonchester has just returned from his recent conquests of Baronshire and with him he brought four cart loads of plunder. However Baronshire is known for its lacklustre fiscal regulation, so the king would like to test the authenticity of every coin. The problem is his old accountant died at his side in battle (a good king never goes anywhere without his accountant). Thus he wants to recruit the most meticulous counters in the land. Having placed posters on every street corner offering the handsomely paid job he was soon inundated with applications. Each applicant had to prove their worth by determining the forged coin from a stack of 12 (which could be either lighter or heavier) in just 3 weighings. Since the great infrastructure drive of a few years prior everyone in the kingdom has internet access, and it turns out that everyone and their mother had rushed to Stack Exchange in preparation for their interview and every single one of them knew the answer.

In the first test, once the scale has fallen to the left or right during a weighing, it must fall the same way, or balance equally, in all future weighings. As per test 2, but now a coin increases or decreases (mod 12), when light or heavy respectively, by the current number on the coin itself. For example if coin 7 was light, after the next weighing it would have its label switched with coin 2, and if it were heavy it would switch with 12.

Question: What is the minimum number of weighings required (if possible) to be certain of which coin is the counterfeit and whether it is light or heavy in each case?

The relabelings double the number of a light coin, and set the number of a heavy coin to 12. In particular, because 12 is divisible by 4, the odd coin must have an even number after one weighing and must be divisible by 4 after two. Therefore, after two weighings, there are only four possible situations due to relabeling: either 4 is light, 8 is light, 12 is light, or 12 is heavy. Distinguishing them in one weighing is not possible, but we can do it with two weighings. First weighing: 1,4,7,10 against 2,5,8,11. Second weighing: 3,9 against 6,12. If the first weighing balances, then the odd coin is now labeled either 6 or 12. If these two together are lighter than 3 and 9 (which must be two regular coins after relabeling), the odd coin is lighter, otherwise it is heavier. After the second weighing, the odd coin is labeled 12 and we are done. If the first weighing does not balance, suppose 1,4,7,10 was lighter. Then either the odd coin was light and is now labeled 2 or 8, or the odd coin was heavy and is now labeled 12. If the second weighing balances, the odd coin is light. After the second weighing, it is now labeled 4. Otherwise, the odd coin is heavy and is labeled 12. The case where 2,5,8,11 is lighter is similar: if the second weighing balances, the odd coin is light and is now labeled 8; otherwise, it is heavy and labeled 12.

You can't distinguish 24 possibilities of (forged coin, heavier or lighter) with only 15 possible weighing results. If you know which coin was originally fake, you can also determine its current label if required. Again, you can't distinguish the 24 possibilities with only 2 weighings, as those can only give 9 distinct results. Read his answer to know why this works. The result is either Right Heavy, Left Heavy, or Balanced. Right heavy: There are three possibilities now. Either 1 is light, 2 is light, or 4 is heavy.
Weigh 1 against 7, then 2 against 7 to find out which, being careful to put 7 on the right. Balance: There are three possibilities now. Either 3 is light, 5 is heavy, or 6 is heavy. Weigh 7 against 5, then 7 against 6, to find out which, being careful to put 7 on the left. Left heavy: this is symmetric to the right heavy case. Right heavy: 7,8 vs 1,2. If the scale tips right, then one of 7 or 8 is light. Weigh 7 vs 1 to check which. If the scale balances, then one of 9 or 10 is heavy. Weigh 1 vs 9 to check which. Balance: The counterfeit coin is either 11 or 12. Weigh 1 vs 11, then 1 vs 12 to find out. Here's why this many weighings are necessary. After three weighings, there are only 15 possible results. To see this: every result looks like UUU, UUB, UBU, UBB, BUU, BUB, BBU, or BBB, where B = balanced, U = unbalanced. For the first seven of these, U can be either Left or Right, leading to $2\times 7+1=15$ possible results. Since there are $24 > 15$ possibilities to distinguish between, three weighings are insufficient.
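The $2\times 7+1=15$ count is easy to check by brute force; a small sketch, assuming (as in the restricted test above) that every unbalanced result must tip the same way:

```python
from itertools import product

# Sequences of three results (L = left heavy, R = right heavy, B = balanced)
# in which all unbalanced results tip the same way.
valid = [s for s in product("LRB", repeat=3)
         if len({c for c in s if c != "B"}) <= 1]
print(len(valid))   # 15
```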
CommonCrawl
Say I have an $N \times N$ matrix and I want to know the eigenvalues to a precision of $\pm \epsilon$. How many qubits and how many gates do I need?
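No answer is given in this snippet, but for a rough sense of scale, here is a back-of-the-envelope register count for textbook quantum phase estimation, under the assumptions that the matrix is Hermitian, its eigenvalues have been rescaled into $[0,1)$, and they are encoded as phases of $U=e^{2\pi i A}$; the extra success-probability term is the standard Nielsen–Chuang bound, and nothing here addresses gate counts or gate-optimal methods:

```python
from math import ceil, log2

def qpe_register_sizes(N, eps, delta=0.1):
    # Qubits to hold an N-dimensional eigenvector, plus phase-register qubits
    # to resolve the (rescaled) eigenvalue to +/- eps with failure prob. <= delta.
    state_qubits = ceil(log2(N))
    precision_bits = ceil(log2(1 / eps))
    phase_qubits = precision_bits + ceil(log2(2 + 1 / (2 * delta)))
    return state_qubits, phase_qubits

print(qpe_register_sizes(N=16, eps=1e-3))   # (4, 13)
```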
CommonCrawl
If $(X,d)$ is a metric space, the Hilbert compression of $X$ is the supremum of all $\alpha$'s for which there exists a Lipschitz embedding $f$ from X to a Hilbert space, such that $C.d(x,y)^\alpha \leq \|f(x)-f(y)\|$ for every $x,y\in X$. When $G$ is a finitely generated group, Hilbert compression is a quasi-isometry invariant which has been related to concepts such as exactness, amenability, Haagerup property. In this survey talk, we will review the known results about the range of this invariant, then we will move on to some recent results (due to Naor-Peres, Li, Dreesen) on the behaviour of Hilbert compression under various group constructions (wreath products, free and amalgamated products, HNN-extensions, etc...).
CommonCrawl
Short selling (also known as shorting or going short) refers to selling securities or other financial instruments that you do not currently own. Traders go short when they believe the price of a security will decline in the future. Let's assume that you currently have $1000 of cash and no open positions. You believe that the price of HPQ, currently at $30, is too high and will decline in the future. You can initiate a short position in HPQ by selling, say, 20 shares of HPQ. After this transaction, you'd own \(-20\) shares of HPQ whose market value is \(-20 \times 30 = -600 \), and your cash balance will rise to \( 1000 + 20 \times 30 = 1600 \) – your cash balance rises because someone paid to buy these shares from you. Your total account value is now \(1600 - 600 = 1000\). A few days later, HPQ's share price declines to $25. You can then buy back the shares you've sold – this is called covering your short. Notice that you sold the shares for $30, but only have to pay $25 per share to buy them back. In the process, you've made a profit of $5 per share. Mathematically, your cash balance is now \( 1600 - 25\times 20 = 1100 \) and you no longer have any open positions. Your total account value is therefore $1100 – you've made $100 shorting HPQ!
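A minimal sketch of the arithmetic in the example (the function and variable names are just illustrative; it marks the short position to market and then covers it):

```python
def short_trade(cash, shares, entry_price, exit_price):
    cash_after_sale = cash + shares * entry_price          # 1000 + 20*30 = 1600
    account_value = cash_after_sale - shares * exit_price  # cash + (-shares * price)
    cash_after_cover = cash_after_sale - shares * exit_price
    # Covering at the market price doesn't change the account value;
    # it just converts the open short position into cash.
    return account_value, cash_after_cover

print(short_trade(1000, 20, 30, 25))   # (1100, 1100): a $100 profit on the short
```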
CommonCrawl
I am wondering if there are known conditions on the Neumann datum $g$ which guarantee that the physical solution $u$ of problem (NP) (the unique solution under the growth condition in space) is bounded or has a limit as $t$ goes to $+\infty$ (by intuition I would say a condition on the integral of $g$). I am interested in the exterior problem, but to start I am also interested in similar results for the interior one. To me it seems quite a classical question, so I am surprised that I could find almost nothing in the literature. For example, Friedman in his famous monograph "PDE of Parabolic Type" considers the large-time behavior only for the Dirichlet problem and for a specific Robin problem ($\partial_\nu u(t,x)+f(t,x)u(t,x) = g(t,x)$ on $]0,+\infty[\times \partial \Omega$ with $f<-c$ for some $c<0$), which do not include the Neumann problem.
CommonCrawl
What restrictions are there on diagonalizing this type of transformation of the Laplacian matrix? What I would like to know is whether the ULW matrix has restrictions on being diagonalized, and what they would be.
CommonCrawl
The majority of limit-cycle oscillators have the intrinsic property of amplitude-dependent frequency, or shear, meaning that transient trajectories corresponding to different amplitudes have different average frequencies. This property does not typically affect the stability of a solitary oscillator and is best illustrated by invariant sets called isochrons. Substantial shear is intrinsic to semiconductor lasers, which are widely applied examples of nonlinear oscillators described by a complex-valued electric field, \(E(t)\), and a real-valued population inversion, \(N(t)\). Its physical origin is the dependence of the laser-resonator frequency on population inversion, which is quantified by the single parameter \(\alpha\). In our paper we demonstrate how this seemingly innocent property can lead to the rich variety of instabilities and chaos displayed by externally perturbed semiconductor lasers. We were guided by the work of Wang and Young, who proved that any hyperbolic limit cycle, when suitably perturbed, can be turned into observable chaos (a strange attractor). This result is obtained for periodic and discrete-time kicks that deform the otherwise stable limit cycle. A key concept is the creation of Smale horseshoes via a stretch-and-fold action due to an interplay between the kicks and properties of the phase space flow. In systems without shear, the stretch-and-fold action requires very carefully chosen kicks that need to be in both the radial and angular directions. In contrast, in the presence of shear, it may be sufficient to kick in the radial direction alone and let the natural forces of shear provide the stretch-and-fold action. These effects are illustrated in the animation. The stable limit cycle of a solitary laser with two complex conjugate Floquet multipliers is shown in red and henceforth referred to as \(\Gamma\). Because the limit cycle is rotationally symmetric, motion along \(\Gamma\) can be frozen in a suitable reference frame so as not to obscure any stretch-and-fold action. Kicks modify the electric field amplitude, \(|E|\), by a factor of \(0.8\sin(4\arg(E))\) at times \(t = 0\), \(0.25\), \(0.5\), and \(0.75\), but leave its argument, \(\arg(E)\), and population inversion, \(N\), unchanged. For \(\alpha = 0\), kicks leave each point on its original isochron, which is given by a constant phase, \(\arg(E) = \mathrm{const}\). Therefore, all points on the black curve rotate with the same frequency about the origin of the \(E\)-plane and no folds appear as the black curve settles back to \(\Gamma\). However, for \(\alpha = 2\), kicks move most points to different isochrons, which are now logarithmic spirals given by \(\arg(E) + \alpha\ln|E| = \mathrm{const}\). As the black curve converges to \(\Gamma\), points with larger amplitudes \(|E(t)|\) rotate faster on average. This gives rise to an intricate stretch-and-fold action that is additionally enhanced by the spiralling transient motion about \(\Gamma\). Folds and horseshoes are formed even though the kicks are applied in the radial direction alone. Although laser systems usually have continuous-time perturbations that may not be periodic, the rigorous results in conjunction with numerical computations give new valuable insight as to why vast parameter regions of persistent chaos appear in externally perturbed lasers with \(\alpha\) sufficiently large. The results also suggest that creating observable chaos for \(\alpha=0\) may be difficult, but not impossible. Shear-induced Chaos in Lasers: Animation. 
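A small numerical sketch of the mechanism described above. The \(\alpha\) values and kick profile come from the text; the only assumptions are that the kick is modelled as \(|E| \mapsto |E|\,(1 + 0.8\sin 4\arg E)\) (one reading of the description) and that \(\arg E + \alpha\ln|E|\) labels the isochron. A purely radial kick then leaves the isochron label unchanged when \(\alpha=0\) but shifts it by \(\alpha\ln(\text{kick factor})\) when \(\alpha\neq 0\), which is the source of the stretch-and-fold action:

```python
import numpy as np

def isochron_label(E, alpha):
    # Points with equal arg(E) + alpha*ln|E| share the same asymptotic phase.
    return np.angle(E) + alpha * np.log(abs(E))

E = np.exp(1j * 0.3)                       # a point on the limit cycle, |E| = 1
kick = 1 + 0.8 * np.sin(4 * np.angle(E))   # one reading of the radial kick profile
E_kicked = abs(E) * kick * np.exp(1j * np.angle(E))   # amplitude scaled, phase kept

for alpha in (0.0, 2.0):
    shift = isochron_label(E_kicked, alpha) - isochron_label(E, alpha)
    print(alpha, shift)   # 0.0 for alpha = 0; alpha*log(kick) != 0 for alpha = 2
```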
This paper appeared in SIADS 10(2): 469-509, 2011.
CommonCrawl
Using equipment, especially equipment that matches the character's current class. Leveling up. Every two levels gained (up to and including level 100) applies a one-point level bonus to all attributes. Receiving a buff from a skill. Receiving a buff by performing a Perfect Day. Every time players level up, they gain 1 point which they can assign to the character attribute of their choice (unless they are above level 100). This feature unlocks at level 10 when all points earned from earlier levels become available. Strength (STR) affects critical hits, damage done to a boss, and the effects of two skills used by Warriors and one of the Rogues' skills. It is the primary attribute for Warriors and the secondary attribute for Rogues. This attribute increases the chance that a player will land a critical hit when scoring a task and increases the bonus gained from critical hits. It also increases damage dealt to bosses. Both normal damage and damage from a critical hit are increased. A player of any class can increase their own Strength by allocating stat points to it or by equipping armor and weapons that provide a bonus to the STR stat. Warriors and Rogues gain an additional class bonus for wearing the STR-providing weapons and helms that are specifically intended for their class. Warriors can use the skill Valorous Presence to buff their own and their party mates' Strength by an amount determined by their own unbuffed Strength, and increased Strength will also increase the effect of Warriors' Brutal Smash skill and Rogues' Backstab skill. Constitution (CON) affects health and defense. It is the primary attribute for Healers and the secondary attribute for Warriors. Having a higher Constitution decreases the amount of damage (health loss) taken from your own missed Dailies and from clicking your negative Habits (e.g., succumbing to junk food). Constitution does not decrease the amount of damage received from bosses. In other words, boss damage is calculated without anyone's Constitution taken into account, regardless of whether the source of the missed Daily is you or a member of your party. A player of any class can increase their own Constitution by allocating stat points to it or by wearing equipment that provides a CON bonus. Healers and Warriors gain additional class bonuses for wearing the CON-providing armor and shields that are specifically intended for their class. Healers and Warriors can also temporarily buff their own and their party members' Constitution using the skills Protective Aura and Intimidating Gaze, respectively. The amount of increase in Constitution depends on the caster's unbuffed Constitution. Protective Aura is more powerful than Intimidating Gaze, so a Healer is always more effective at buffing CON than a Warrior with the same CON. Intelligence (INT) affects experience and mana points. It is the primary attribute for Mages and the secondary attribute for Healers. Higher Intelligence allows the player to earn more XP from doing tasks, which means that they will level up more quickly. It also increases the player's mana point (MP) cap and rate of MP regeneration. Once the player's maximum MP is over 100 (requiring an Intelligence of 35 points or greater, as Maximum MP can be determined by the following formula: ($ Maximum MP = 2 \times INT + 30 $)), their maximum daily mana regeneration rate becomes 10% of their maximum MP, instead of a base 10 MP every Cron. 
Their mana gained from positive Habits becomes 0.25% of their maximum MP, instead of a base 0.25 MP, and mana gained per Daily, To-Do or To-Do checklist item becomes 1% of their maximum MP, instead of a base 1 MP. A player of any class can increase their own Intelligence by allocating stat points to it (also adds 1 MP per point allocated to INT) or by equipping armor and weapons that provide a bonus to the INT stat. Mages and Healers gain an additional class bonus for wearing the INT-providing weapons and armor that are specifically intended for their class. Mages can use the skill Earthquake to buff their own and their party mates' Intelligence by an amount determined by their own unbuffed Intelligence. Perception increases gold and item drop rates. Perception (PER) is a character attribute that affects the rate of earning drops and gold points (GP) for each task. It is the primary attribute for Rogues and the secondary attribute for Mages. This attribute increases the likelihood of finding drops (including drops for collection quests) when completing Tasks, a player's daily drop-cap, Streak Bonuses, and the amount of gold awarded for every Task completed. A player of any class can increase their own Perception by allocating stat points to it or by equipping armor and weapons that provide a bonus to the PER stat. Rogues and Mages gain an additional class bonus for wearing the PER-providing equipment that is specifically intended for their class. Rogues can use the skill Tools of the Trade to buff their own and their party mates' Perception by an amount determined by their own unbuffed Perception. Increase gold gained from tasks by 2% per Perception point (also applies to gold gained from streak bonuses). Increase drop chance bonus subtotal by 1% per Perception point (subtotal is then passed through an asymptotic diminishing returns function, so additional perception yields limited returns to drop chance per task). Increase drop-cap by 1 for every 25 points of PER. Note: You will NOT find level-based quest scrolls by increasing your Perception attribute, as they are not random drops. Attribute points are found under Stats in the User Icon menu on the website, or within the Stats page on the menu of the Android App and iOS App. When you have attribute points to allocate, arrows will appear beside all attributes. Click the up or down arrow to assign or remove points from an attribute. Workaround: Currently users who select automatic allocation in the website are required to allocate based on task activity. In addition, it is not currently possible to assign ability values to tasks in the main system. To do this, use the Task Adjustor to set your tasks to the correct options. Alternatively, users can use the Android App to set to the option as desired under the Stats menu. Players can either assign points manually or enable the Automatic Allocation feature to automatically assign points according to one of three distribution modes. Once assigned, attribute points cannot be redistributed unless the player changes class or uses the Orb of Rebirth. Changing class will refund all attribute points and give the player the option to choose a new class. Five sources of attributes are summed to get the final total for each attribute. Bonus provided by your equipment. Your class uses its equipment more effectively than other classes. Equipped gear from your current class gets a 50% bonus. Bonus points from your allocation, either automatic or manual. 
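For concreteness, a small sketch of the derived quantities quoted above (these are the fan-wiki formulas as stated on this page, not official game code; the function and argument names are made up):

```python
def derived_stats(intelligence, perception):
    max_mp = 2 * intelligence + 30            # Maximum MP = 2 x INT + 30
    daily_mp_regen = max(10, 0.10 * max_mp)   # 10% of max MP once max MP exceeds 100
    gold_multiplier = 1 + 0.02 * perception   # +2% gold per PER point
    drop_cap_bonus = perception // 25         # +1 to the daily drop cap per 25 PER
    return max_mp, daily_mp_regen, gold_multiplier, drop_cap_bonus

print(derived_stats(intelligence=35, perception=50))   # (100, 10.0, 2.0, 2)
```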
At level 100, a player will receive their 100th attribute point for allocation. After this, they will no longer be able to increase their attributes via leveling up. They will receive no further attribute points for allocation, and their level bonus to attributes will not increase with level up. This is to prevent the game from becoming unbalanced at very high levels. Players above level 100 may still increase their attributes via the use of equipment and/or buffs. Previously, attribute allocation was not capped at level 100. Therefore, players who were above level 100 before the cap was introduced may have more than 100 attribute points for allocation. However, these players will not gain any more attribute points on further level-ups. Additionally, if these players choose to reassign their attribute points or change class, they will only be given 100 points to reassign.
CommonCrawl
Abstract: It is well known that ill-posed problems in the space $V[a,b]$ of functions of bounded variation cannot generally be regularized and the approximate solutions do not converge to the exact one with respect to the variation. However, this convergence can be achieved on separable subspaces of $V[a,b]$. It is shown that the Sobolev spaces $W_1^m[a,b]$, $m\in\mathbb N$ can be used as such subspaces. The classes of regularizing functionals are indicated that guarantee that the approximate solutions produced by the Tikhonov variational scheme for ill-posed problems converge with respect to the norm of $W_1^m[a,b]$. In turn, this ensures the convergence of the approximate solutions with respect to the variation and the higher order total variations. Key words: ill-posed problems, regularizing algorithms, space of functions of bounded variation, Sobolev space.
CommonCrawl
Abstract: We construct D-branes in the Nappi-Witten (NW) and Guadagnini-Martellini-Mintchev (GMM) gauged WZW models. For the $SL(2,R)\times SU(2)/U(1)\times U(1)$ NW and $SU(2)\times SU(2)/U(1)$ GMM models we present the explicit equations describing the D-brane hypersurfaces in their target spaces. In the latter case we show that the D-branes are classified according to the Cardy theorem. We also present the semiclassical mass computation and find its agreement with the CFT predictions.
CommonCrawl
Just would like some help on getting my understanding of certain terms on the right footing. A balanced expression; that is, it has two expressions on each side. I would use algorithm to describe this term, because it is essentially an expression with an input. It has its own rules under geometry/calculus on how it must behave. Almost like a function, except with more than one input. Granted, a function can have more than one variable; however, the 'function' still obeys the input-output behavior designated by its definition. Surely there is only one expression on each side of the equal sign. An equation is a statement asserting that two expressions represent the same mathematical object. Some functions can be expressed by algorithms. Most can't. For example, if we take functions from $\mathbb N\rightarrow \mathbb N$, there are uncountably many such functions, but there are only countably many expressions. As a specific example of a function that has no algorithm, suppose we flip a fair coin one time for each natural number. Most of the time the resulting bitstring will be random; that is, not compressible to a smaller string. That's a function that has no algorithm. Formula is more a term from logic, as in well-formed formula. It's a syntactic entity composed of symbols from some formal alphabet. But that terminology is not used consistently in math. For example, we have the "quadratic formula" that gives the solution to a quadratic equation.
CommonCrawl
I have the following Magma code, and I want to rewrite it in Sage. Given a lattice L with basis matrix B, return the LLL basis matrix B' of L, together with the transformation matrix T such that B'=TB. The LLL basis matrix B' is simply defined to be an LLL-reduced form of B; it is stored in L when computed and subsequently used internally by many lattice functions. The LLL basis matrix will be created automatically internally as needed with δ=0.999 by default (note that this is different from the usual default of 0.75); by the use of parameters to this function one can ensure that the LLL basis matrix is created in a way which is different from the default. How does one create a lattice in Sage? And does Sage have a function similar to LLLBasisMatrix above? If not, how can I achieve the same functionality in Sage? What is $L$ exactly as a mathematical object? for the "Basis matrix" (given in Magma, edited by hand to work in Sage - no Magma here). @dan_fulea $L$ is supposed to be a lattice generated by two vectors $(N_2,0)$ and $(\tau,1)$. The Magma code is not written by me, I'm just trying to convert it to Sage; that's also why I don't know exactly why the second matrix is needed in the lattice generation. I will use the initial data, now removed. FURTHER EDIT regarding the declaration of an inner product (diagonal) matrix D in the Magma lattice, and some further output that I could not figure out using the strict Magma documentation in a world without examples. So the difference is recovered by a twist with X. Your result will produce more or less correct results (just the sign will be wrong) if one matrix is used. But, in my case, as you can see in the original question, there are two matrices used. I added some numeric examples to the original question, to see the contrast. Please note that there is another value called D. $L$ is supposed to be a lattice generated by two vectors $(N_2,0)$ and $(\tau,1)$. This made sense immediately: we really have a lattice in $V=\mathbb R^2$, given by two $2$-component vectors in $V$. Now, first, I see there is also that D; I had not seen it before. So basically the second matrix is the inner product matrix. I hope this is more clear. The answer was edited above with another (smaller) value of $\sqrt D$, to also cover such a declaration. Thanks, this works, and using sqrt(D) for the original values given above, it produces the correct results. Thanks for the answer. Though, I wonder whether it is the correct solution here. Please check the above modified question, with some actual values comparing the results of Magma and Sage. I think what I am looking for is whether Sage supports creating lattices, and whether it has a method similar to LLLBasisMatrix from Magma. Is this a bug in QuadraticForm?
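For what it's worth, here is a minimal Sage sketch of the plain case (no inner-product matrix D). It is meant to be run inside Sage, the basis entries are placeholder values, and the transformation matrix is obtained by solving $B' = TB$ rather than relying on any particular keyword of Matrix.LLL(); note also that Sage's default LLL parameter differs from Magma's $\delta = 0.999$:

```python
from sage.all import matrix, ZZ

N2, tau = 101, 37                      # placeholder values for the lattice data
B = matrix(ZZ, [[N2, 0], [tau, 1]])    # rows generate the lattice
Bp = B.LLL()                           # an LLL-reduced basis of the same lattice
T = B.solve_left(Bp)                   # T with Bp == T * B
assert Bp == T * B and abs(T.det()) == 1   # T is unimodular, so the lattice is unchanged
```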
CommonCrawl
Let $n$ be a positive integer. Find the smallest integer $k$ with the following property: given any real numbers $a_1, \ldots, a_d$ such that $a_1 + a_2 + \cdots + a_d = n$ and $0 \leq a_i \leq 1$ for $i = 1, 2, \ldots, d$, it is possible to partition these numbers into $k$ groups (some of which may be empty) such that the sum of the numbers in each group is at most $1$.
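Not a solution, but a brute-force checker can make the statement concrete for small instances: for a given list of numbers it finds the fewest unit-capacity groups (exact rational arithmetic is used only to avoid floating-point noise; the examples are illustrative, not part of the problem):

```python
from fractions import Fraction
from itertools import product

def min_groups(nums):
    """Smallest k such that nums can be split into k groups, each summing to <= 1."""
    for k in range(1, len(nums) + 1):
        for assignment in product(range(k), repeat=len(nums)):
            sums = [Fraction(0)] * k
            for x, group in zip(nums, assignment):
                sums[group] += x
            if all(s <= 1 for s in sums):
                return k
    return len(nums)

print(min_groups([Fraction(1, 2)] * 4))   # total 2: fits into 2 groups
print(min_groups([Fraction(2, 3)] * 3))   # total 2: any two exceed 1, so 3 groups
```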
CommonCrawl
Justiina and Kotivalo are playing a game whose initial setting consists of $n$ heaps of coins and a chosen parameter $a$. Justiina begins the game, and the players move alternately. The winner of the game is the player who makes the last move. At each move, the player chooses one of the heaps and removes some number of coins from it. Of the removed coins, the player keeps between $1$ and $a$ and distributes the remaining coins back into the game. Each coin can be added to any existing heap or become the first coin in a new heap. Can Justiina win the game if she plays optimally? And what is a possible winning opening move? The first input line contains two integers $n$ and $a$: the number of heaps and the parameter $a$. The next line contains $n$ integers $p_1,p_2,\ldots,p_n$: the number of coins in each heap. First print "YES" if Justiina can win the game and "NO" otherwise. If Justiina can win, also print an example of how she can make her opening move. First print two lines, the first of the form "TAKE $x$ $y$" and the second of the form "KEEP $z$". This means that Justiina takes $y$ coins from a heap that has $x$ coins, and keeps $z$ coins. After this, print some number of lines of the form "ADD $x$ $y$". This means that Justiina adds $y$ coins to a heap that already contains $x$ coins. It is allowed that $x=0$, which creates a new heap. Finally, print a line "END", which ends the description of the move. Explanation: Justiina removes all coins from the heap that contains three coins. Then she keeps two coins and adds one coin to the heap that already contains one coin. After the move, there are two heaps in the game and both of them contain two coins.
CommonCrawl
Abstract: A measurement of the number of $J/\psi$ events collected with the BESIII detector in 2009 and 2012 is performed using inclusive decays of the $J/\psi$ . The number of $J/\psi$ events taken in 2009 is recalculated to be $(223.7\pm1.4)\times 10^6$, which is in good agreement with the previous measurement, but with significantly improved precision due to improvements in the BESIII software. The number of $J/\psi$ events taken in 2012 is determined to be $(1086.9\pm 6.0)\times 10^6$. In total, the number of $J/\psi$ events collected with the BESIII detector is measured to be $(1310.6\pm 7.0)\times 10^6$, where the uncertainty is dominated by systematic effects and the statistical uncertainty is negligible.
CommonCrawl
For example, without text formatting, the original task in the form of $X = 21^2 + 125^3$ became a task in the form of $X = 212 + 1253$. Help the teacher by writing a program that will, for given $N$ integers from $P_1$ to $P_N$, determine and output the value of $X$ from the original task. The first line of input contains the integer $N$ ($1 \leq N \leq 10$), the number of the addends from the task. Each of the following $N$ lines contains the integer $P_i$ ($10 \leq P_i \leq 9999$, $i = 1, \ldots , N$) from the task. The first and only line of output must contain the value of $X$ ($X \leq 1\, 000\, 000\, 000$) from the original task.
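A small sketch of the computation, assuming (as the $21^2 \to 212$ example suggests) that the last digit of each addend is the lost exponent and the remaining digits are the base:

```python
def restore_sum(addends):
    total = 0
    for p in addends:
        base, exponent = divmod(p, 10)   # e.g. 212 -> base 21, exponent 2
        total += base ** exponent
    return total

print(restore_sum([212, 1253]))   # 21**2 + 125**3 = 1953566
```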
CommonCrawl
The MIPP collaboration Paley, J.M.; Messier, M.D.; Raja, R.; et al. Phys.Rev. D90 (2014) 032001, 2014. The production yields of $\pi^+$ and $\pi^-$ and the ratio of these yields. The first uncertainty given on each value combines statistical uncertainties and systematic uncertainties from backgrounds. The MIPP collaboration Nigmanov, T.S.; Rajaram, D.; Longo, M.J.; et al. Phys.Rev. D83 (2011) 012002, 2011. We have measured cross sections for forward neutron production from a variety of targets using proton beams from the Fermilab Main Injector. Measurements were performed for proton beam momenta of 58 GeV/c, 84 GeV/c, and 120 GeV/c. The cross section dependence on the atomic weight (A) of the targets was found to vary as $A^{\alpha}$, where $\alpha$ is $0.46\pm0.06$ for a beam momentum of 58 GeV/c and $0.54\pm0.05$ for 120 GeV/c. The cross sections show reasonable agreement with the FLUKA and DPMJET Monte Carlos. Comparisons have also been made with the LAQGSM Monte Carlo. Total inelastic $pp$ cross section. Average multiplicities and production cross section for neutral particles from $pp$ interactions at 84 GeV. Cross sections for neutron production greater than threshold and within an angular range of 20.4 mrad. Cross sections per nucleus for neutron production after correcting for the detector geometric acceptance. Atomic weight dependence of the neutron cross section. Lorentz-invariant cross section as a function of xF for pp collisions at 58 GeV. Lorentz-invariant cross section as a function of xF for pC collisions at 58 GeV. Lorentz-invariant cross section as a function of xF for pBi collisions at 58 GeV. Lorentz-invariant cross section as a function of xF for pU collisions at 58 GeV. Lorentz-invariant cross section as a function of xF for pp collisions at 84 GeV. Lorentz-invariant cross section as a function of xF for pBe collisions at 120 GeV. Lorentz-invariant cross section as a function of xF for pC collisions at 120 GeV. Lorentz-invariant cross section as a function of xF for pBi collisions at 120 GeV. dsig/dxF as a function of xF for pp collisions at 58 GeV. dsig/dxF as a function of xF for pC collisions at 58 GeV. dsig/dxF as a function of xF for pBi collisions at 58 GeV. dsig/dxF as a function of xF for pU collisions at 58 GeV. dsig/dxF as a function of xF for pp collisions at 84 GeV. dsig/dxF as a function of xF for pBe collisions at 120 GeV. dsig/dxF as a function of xF for pC collisions at 120 GeV. dsig/dxF as a function of xF for pBi collisions at 120 GeV.
CommonCrawl
Cosmo puts 100 coins on the table and tells Fredo: "Thirty of these coins are genuine and seventy of them are fake." Then Cosmo leaves the room. On the table, there is a balance with two pans (but there are no weights). Question: What is the smallest possible number of weighings that guarantees that Fredo can identify at least one genuine coin? Suppose the genuine coins have weight 0, and the other coins have weights that are distinct powers of two. Then any try on the balance will always just tip the scale to the side of the heaviest fake coin, since it weighs more than all the other coins on the scale put together. Now suppose the scale somehow magically marked the heaviest coin on the scale with a big red X every time you weighed some coins. Now, there is no use using the coin with an X again. Also, you've learned absolutely nothing about the other coins. Hence, even with this extra magic, the only information you can gain each time is to eliminate one coin as being fake. Thus, it takes in the worst case seventy weighings to eliminate all the fake coins and find at least one genuine one. I'm not sure if there's a way to improve that by weighing multiple coins against each other at once instead of just having one on each pan - it doesn't seem likely to me because one fake coin and one real one can weigh more than two fakes, but it's still possible that you could improve this. Let's consider the information we get from each type of weighing. If we weigh one coin against another, an even weighing tells us that both coins are good, whereas an uneven weighing tells us only that the heavier coin is fake. If we weigh multiple coins against each other, an even result tells us nothing unless we know that we have a group with only two fake coins; if we have three, we have the possibility of two fake coins on one side weighing the same as a fake and a genuine coin on the other. An uneven result tells us only that the heavier side has at least one fake coin. It should be obvious that weighing multiple coins gives less information in all cases unless we already have a group with only two fake coins (at which point an uneven weighing would still not identify a genuine coin). The best strategy is then to weigh single coins against each other. If the weights are even, we have found a good coin; if not, we eliminate a fake coin. In the worst case, all weighings will be uneven, in which case we will need 70 weighings to identify a genuine coin. Note that it makes no difference which coins we weigh, as long as we set the heavier coin aside each time; we could continue to weigh the lighter coin against a new coin each time, or weigh 71 coins in a "tournament", or just take random coins for each weighing. This is because an uneven weighing provides absolutely no information about the lighter coin, so the only thing that changes about our situation after the first weighing is that we effectively have 99 coins, of which 69 are fake. The only point at which a genuine coin will be identified in the worst case is after all the fake ones have been eliminated, at which point all the remaining ones will be genuine by default. If we have 70 fake coins and 30 real coins, and we want to pair them one by one, then we need to guarantee a case where the scale is balanced in order to win. If we make 50 comparisons among the 100 coins AND THEN discard the heavy coins, we'll be left with 50 of the lighter coins. Note that among these 50 light coins, 30 are guaranteed to be the genuine coins. 
If we perform 21 comparisons, we guarantee that all of the fake coins have been paired up with real coins and discarded for being too heavy, so the last pair MUST contain a real coin. This means we have the initial 50 comparisons, then an extra 21, yielding 71. Weighing multiple coins gives us no real information because there are no bounds on how much more the fake coins weigh. Weighing one pair tells us only that (1) both are the same and both are real, or (2) the heavier coin is fake and can be eliminated from the pool of candidates. Worst case, 70 weighings will eliminate 70 fakes and all the rest are real (though most likely a pair of reals would be found earlier). Note that any solution must access (weigh at least once) a minimum of 71 coins, to be sure that at least one real coin has been considered. 70 weighings can do this (for example, just compare the lighter coin of each weighing with the next coin from the pool of unweighed ones; after 70 compares you have tested 71 coins). I have not yet found any way to make use of the number of real coins. This answer is the same whether there are 70 fake + 1 real, or 70 fake + 1000 real - just do 70 compares of any type and each time eliminate the heavier one from the pool (or both are real if equal). If there is any way to beat 70 compares, it's going to have to infer some information from the number of real coins, but I'm not finding any way to do that. Edit: oops, I had a bad answer which somebody commented on while I was (immediately) revising it - ignore that first comment if it makes no sense, it was my blunder. My strategy is essentially to find the lightest coin. Split the coins into a group of 64, a group of 8, and a group of 28 that are ignored. This guarantees at least 1 legit coin is part of the group. Do a binary comparison for each group to find the lightest coin. Then compare the coins. The lightest one(s) are genuine. Split the coins into two groups. One group should have 71 coins, and the other 29 coins. We know that the group containing 71 coins has at least one genuine coin, and we know that the genuine coin is the lightest of them all. So we begin by placing two coins randomly on the scale and setting aside the heavier one after each weighing. This way we will have to make a total of 70 weighings to make sure all coins have been weighed and we have the lightest of them all. This is the genuine coin. If we ever get two coins of the same mass at any time in the weighing process, we stop there. Both of these coins are genuine. If the pans are equal, either both or none of the pans have fake coins. If the pans are unequal, either one or both of the pans have fake coins. We also notice that finding the actual weights of coins in terms of other coins is pointless, since we might end up with equations like $Fake\ coin\ 1 \geq 200 \times Fake\ coin\ 2$. The only thing that seems of practical importance is sorting the coins relatively (in ascending/descending order). Let's make the assumption that equal pans always means no fake coins in the given scenario. (There exist no two sets of fake coins with equal weights.) However, we will not know this. So even if we stumble upon two equal pans, we will repeatedly use different weighings till we conclude that there is no fake coin. This can take a long time. This also means that the fact as to which pan is heavier is of no importance, once we know we have unequal pans. 
In the worst case, we will never get equal pans unless this is logically implied by previous facts (in which case weighing them is pointless). Hence we can conclude that we will never get equal pans in our perfect strategy under the worst case. If the pans are unequal, we gain no new information. Hence, we conclude that it must be possible to achieve a state where it is logically implied that a particular coin is genuine, without us ever getting equal pans in our weighings. This only seems possible by doing $70$ weighings, as others have said. Minimum: 1 weighing (best case), when you find two coins with the same weight. Worst case: 70 comparisons, if you have to weigh all the wrong ones. Edit: If you're lucky enough to weigh two coins that weigh the same (good ones are all equal and fake ones are all different), then you have solved the problem. Each time you weigh, you just have to make sure to grab the lighter one. If you're unlucky enough to get an unbalanced result 70 times, you will be left with the 29 other genuine coins.
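To make the 70-weighing strategy concrete, here is a small simulation under the adversarial weight assignment used in the first answer (genuine coins weigh 0, fakes are distinct powers of two — an assumption of that argument, not part of the puzzle statement); it keeps the lighter coin of each comparison and stops early if two coins balance:

```python
import random

def find_genuine(weights, max_weighings=70):
    pool = list(range(len(weights)))
    random.shuffle(pool)
    candidate = pool.pop()
    for used in range(1, max_weighings + 1):
        other = pool.pop()
        if weights[candidate] == weights[other]:
            return candidate, used            # equal pans: both coins are genuine
        if weights[other] < weights[candidate]:
            candidate = other                 # keep the lighter, discard the heavier
    return candidate, max_weighings           # all 70 fakes have been eliminated

coins = [0] * 30 + [2 ** k for k in range(70)]   # 30 genuine, 70 distinct fakes
index, used = find_genuine(coins)
print(coins[index], used)   # 0 (a genuine coin), in at most 70 weighings
```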
CommonCrawl
Can you arrange each of the 12 free pentominoes in a $6\times10$ rectangle such that each piece touches the edge of the rectangle? You can find the answer online if you search a bit and nobody here will be able to tell. You will know, though, and the remorse shall haunt you the rest of your days. Update: One solution has been found. There is exactly one more possible configuration that meets the requirement. I found these solutions while playing around on https://www.scholastic.com/blueballiett/games/pentominoes_game.htm.
CommonCrawl
A very important and complicated machine consists of $n$ wheels, numbered $1, 2, \ldots , n$. They are actually cogwheels, but the cogs are so small that we can model them as circles on the plane. Every wheel can spin around its center. Two wheels cannot overlap (they do not have common interior points), but they can touch. If two wheels touch each other and one of them rotates, the other one spins as well, as their micro-cogs are locked together. A force is applied to wheel $1$ (and to no other wheel), making it rotate at the rate of exactly one turn per minute, clockwise. Compute the rates of the other wheels' movement. You may assume that the machine is not jammed (the movement is physically possible). Each test case consists of one line containing the number of wheels $n$ ($1 \leq n \leq 1\, 000$). Each of the following $n$ lines contains three integers $x$, $y$ and $r$ ($-10\, 000 \leq x, y \leq 10\, 000$; $1 \leq r \leq 10\, 000$), where $(x, y)$ denotes the Cartesian coordinates of the wheel's center and $r$ is its radius. For each test case, output $n$ lines, each describing the movement of one wheel, in the same order as in the input. For every wheel, output either $p/q$ clockwise or $p/q$ counterclockwise, where the irreducible fraction $p/q$ is the number of wheel turns per minute. If $q$ is $1$, output just $p$ as an integer. If a wheel is standing still, output not moving.
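One straightforward way to compute the rates (a sketch, not necessarily the reference solution): propagate outward from wheel 1, using the fact that two meshing wheels turn in opposite directions with angular speeds inversely proportional to their radii; with integer input, tangency can be tested exactly. The sign convention below (positive = clockwise) is just a choice, and the example wheels are made up:

```python
from fractions import Fraction

def wheel_rates(wheels):
    """wheels: list of (x, y, r). Returns turns per minute as a Fraction
    (positive = clockwise, like wheel 1), or None for wheels that never move."""
    n = len(wheels)
    rate = [None] * n
    rate[0] = Fraction(1)                 # wheel 1: one clockwise turn per minute
    stack = [0]
    while stack:
        i = stack.pop()
        xi, yi, ri = wheels[i]
        for j in range(n):
            if rate[j] is not None:
                continue
            xj, yj, rj = wheels[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 == (ri + rj) ** 2:   # exact tangency test
                rate[j] = -rate[i] * Fraction(ri, rj)   # opposite direction, speed ~ 1/r
                stack.append(j)
    return rate

print(wheel_rates([(0, 0, 2), (4, 0, 2), (9, 0, 3)]))
# [Fraction(1, 1), Fraction(-1, 1), Fraction(2, 3)]
```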
CommonCrawl
In the game of Power Countdown, you use a set of numbers to make a target number, but unlike the usual Countdown game where you can use $+, -, \times$ or $\div$, the only operations you can use are raising a number to a power, taking the reciprocal of a number, or finding the product of two numbers. Each number can only be used once. You don't have to use all the numbers. There is often more than one way of making a particular target, so see how many different ways you can find. Watch the video to see some examples. Can you find any other ways of making $8$? Are there any ways which use all the numbers? How many ways are there to make the target number of $125$? Below is a selection of numbers and five targets. How many different ways can you find to make each target? Are there any targets you can't make? How close can you get? Physics. Fibonacci sequence. Mathematical reasoning & proof. Powers & roots. Mental calculation strategies. Estimating and approximating. Making and proving conjectures. Surds. Indices. Video.
CommonCrawl
I'm a homotopy theorist at the University of Illinois, with interests ranging from higher category theory to p-divisible groups. Which manifolds are homeomorphic to simplicial complexes? Are there pairs of highly connected finite CW-complexes with the same homotopy groups? Why aren't all small categories accessible? "Functors between monads": what are these really called? When does a representation admit a spin structure? Should a published paper with a published correction be replaced on arXiv? Are non-empty finite sets a Grothendieck test category? Why are the characters of the symmetric group integer-valued? Why does one think to Steenrod squares and powers? What is the shortest Ph.D. thesis? What is the "intuition" behind "brave new algebra"? How do you keep your research notes organized? A possible generalization of the homotopy groups. Is there an accepted definition of $(\infty,\infty)$ category? How do you define the strict infinity groupoids in Homotopy Type Theory? A more natural proof of Dold-Kan? Why the "W" in CGWH (compactly generated weakly Hausdorff spaces)? Are the Stiefel-Whitney classes of a vector bundle the only obstructions to its being invertible? How to get product on cohomology using the K(G, n)? What is a TMF in topology? Can the Kan-Thurston theorem be turned into some kind of equivalence between groups and spaces? What is the intuitive meaning of the coskeleton of a simplicial set? Cov. right-exact additive functors that don't commute with direct sums? Does the cohomology ring of a simply-connected space X determine the cohomology groups of ΩX?
CommonCrawl
Abstract: Let $\mathfrak B$ be the variety of associative (special Jordan, respectively) algebras over an infinite field of characteristic 2 defined by the identity $((((x_1,x_2),x_3),((x_4,x_5),x_6)),(x_7,x_8))=0$ ($((x_1x_2\cdot x_3)(x_4x_5\cdot x_6))(x_7x_8)=0$, respectively). In this paper, we construct infinite independent systems of identities in the variety $\mathfrak B$ ($\mathfrak D$ , respectively). This implies that the set of distinct nonfinitely based subvarieties of the variety $\mathfrak B$ has the cardinality of the continuum and that there are algebras in $\mathfrak B$ with undecidable word problem.
CommonCrawl
$2n$ soldiers are standing in a double-row. They have to be rearranged so that there are no equally tall soldiers in each row - then we shall say that the soldiers are set up properly. A single operation consists of swapping two soldiers who occupy the same position (but in different rows). Your task is to determine the minimum number of swaps necessary to set the soldiers up properly. There is a double-row of $18$ soldiers in the figure. Arrows indicate the swaps that rearrange the soldiers in a proper way. Your program should read from the standard input the number and heights of soldiers as they stand initially, determine the minimum number of swaps (of soldiers standing on the same position in different rows) necessary to set up the soldiers properly, and write the result to the standard output. In the first line of the input there is one integer $n$, $1\le n\le 50\ 000$. In each of the two rows there are $n$ soldiers standing. In each of the following two lines there are $n$ positive integers separated by single spaces. In the second line there are numbers $x_1,x_2,\cdots,x_n$, $1\le x_i\le 100\ 000$; $x_i$ denotes the height of the $i$'th soldier in the first row. In the third line there are numbers $y_1,y_2,\cdots,y_n$, $1\le y_i\le 100\ 000$; $y_i$ denotes the height of the $i$'th soldier in the second row. It is guaranteed that in the instances from the test data it is possible to set up the soldiers properly. In the first and only line of the standard output one integer should be written - the minimum number of swaps necessary to set up the soldiers properly.
CommonCrawl
How exactly does the analyticity of the S-matrix come from the causality principle? What is the sense of introducing a generating functional for the summands of the expansion of the S-matrix? Is it indeed the case that there are several different S-matrices? If so, why is it "carefully hidden" in the literature? Is it possible to compute the "fully interacting" S-matrix in perturbation theory using the Bethe-Salpeter equation to extract the bound state spectrum? I don't think the S-matrix depends on the existence of a free limit of the theory. Rather, I think it depends on cluster decomposition, which identifies the (non-perturbative) asymptotic states as approximately non-interacting multi-particle states, whose relationship to a specific Lagrangian may be complicated, e.g. they may be solitons or bound states etc. There are two-dimensional models where the S-matrices can be computed exactly using algebraic methods, both for perturbative and solitonic states, which demonstrates the distinction nicely. On the other hand, if you want to calculate the S-matrix in perturbation theory using the LSZ reduction, you need a concrete identification of that Fock space and those asymptotic states. In perturbation theory, for perturbative states, this is the Hilbert space of the free theory. Note that this may not be the complete Hilbert space of the theory, and there are known examples where the perturbative S-matrix is not unitary since there is some non-zero probability of creating non-perturbative states. This is also demonstrated very concretely in two-dimensional examples. I vaguely recall Weinberg having a nice discussion of the general definition in his QFT course; I am assuming he covers this in the first volume of his QFT series. So you mean that what I call the "fully interacting" S-matrix is the only one which is defined nonperturbatively? Do I identify the space on which it acts correctly? That seems to capture the gist of it, but a precise formulation already exists in the literature (I suspect Weinberg's text is a good entry point). There is only one S-matrix, but it is frequently introduced in sloppy ways. The S-matrix is a unitary matrix between two isomorphic Fock spaces whose 1-particle sector contains precisely one particle for each bound state of the system. It can be constructed by the usual adiabatic textbook procedure if and only if there are no bound states (which is a standard requirement for ordinary perturbation theory already for nonrelativistic QM without fields). In particular, in the case of QCD, only the S-matrix where the asymptotic states are hadrons makes physical (and hence nonperturbative) sense. But this is not tractable perturbatively anyway, as it is part of the unsolved infrared problem for QCD. The problem is carefully hidden from textbooks because nobody really knows how to treat bound states in QFT, and talk about ignorance isn't very suitable for textbooks. Weinberg treats bound states in Chapter 14, but only for an electron in an external field, which begs the real question. He barely mentions the oldest (and quite unreliable) method for bound states, the Bethe-Salpeter equation, which figures on p.560, where one can find the remark ''It must be said that the theory of relativistic effects and radiative corrections in bound states is not yet in entirely satisfactory state.'' - a euphemism for the fact that it is a real mess, and nobody knows how to treat it well. or with lattice gauge theory to get bound state information. 
In the literature which is old enough, the existence of free limits in $g\to\infty$ or elsewhere – if the moduli space is more complicated – is neglected because S-duality and the multiplicity of the free limits wasn't known. However, it's not too big an omission because all the matrices you mention differ at most by phases that depend on the external particle masses; the interacting part of the information in the S-matrix is identical. The interactions really occur in a region of the spacetime where the coupling constant has a particular finite value. The adiabatic turning-on is just a way to define the S-matrix rigorously and its details don't really matter. So up to some transformations that are largely trivial, there is just one S-matrix in each physical system. Dear @Squark, it depends on whether there are lines of marginal stability in the theory (values of couplings at which some stable external states cease to exist) etc. If the "adiabatic" prescription for the S-matrix works, then the $g\to 0$ and $g\to \infty$ Hilbert spaces of free particles are isomorphic and there is a simple isomorphism. Of course, if there exist asymptotic states but this existence depends on the finiteness of $g$, then the isomorphism of the two "free" Hilbert spaces breaks down: but your original adiabatic method won't work, anyway. Take QCD for example. What S-matrix are we computing in perturbation theory? I assumed it is the "adiabatic" S-matrix since the asymptotic states are free quarks and gluons. However in the "fully interacting" S-matrix the asymptotic states are hadrons. So, is it the case that both kinds of S-matrix exist or is that the latter matrix is the only one which makes sense nonperturbatively? Is it indeed the case there are several different S-matrices? The S-matrix is approached in a quite a few different ways. Here are seven (or is it six?) examples. I personally bless only the final three, and only the last of the blessed three is useful in practice. Ryder may be echoing "the propagator approach". Whatever it is, it differs from the approaches below, and he attributes it to unpublished lectures by Veltman. where Ω₊ and Ω₋ are "Moller operators". Haag mentions this only for comparison. It is not suitable for QFT. But I was taught it in my first QFT class. E. Araki-Haag collision theory. I won't go into the details. But we can now drop the requirement that we have a field that creates single-particle states. This is important to algebraic QFT-ers, because fields that create single fermions are unobservable. Again, computationally intractable. To summarize: In practice, we (should) obtain Green functions from Feynman diagrams (by one of two methods) and then obtain S-matrix elements from the Green functions à la LSZ. I feel safe in saying that this is the approach to Feynman diagrams and the S-matrix that is found in "better" textbooks. If so, why is it "carefully hidden" in the literature? The recovery may be incomplete. Is it possible to compute the "fully interacting" S-matrix in perturbation theory using the Bethe-Salpeter equation to extract the bound state spectrum? What you're asking for, if I understand correctly, is some way to get the infinite sums (that are probed by the B-S equation) into the Green functions for composite fields that create bound states. Then you could, ideally, compute proton-proton cross sections. As Arnold Neumaier states above, the Schwinger-Dyson equations are yet a third route to computing Green functions.
CommonCrawl
Upon thinking about this question, I have a feeling that there is an interesting general problem like that, but I cannot verbalise it. Here is an approximation. The question is: given a finitely generated group $G$ and a finite set $S\subset G$, we want to find out whether the subgroup generated by $S$ is the whole group. Is there an algorithm deciding this? One has to specify how $G$ and $S$ are presented to a machine. As for $S$, its elements are given as words in some generating set for $G$. The question of how to represent a group is more delicate. Obviously not by a set of relations, since the identity problem is undecidable. There are two ways I see. 1) Let's say that $G$ is nice if there is an algorithm deciding the above problem for this group only. For example, $\mathbb Z^n$ is nice, [edit:] free groups are nice, and probably many others. Are all groups nice? What about some reasonable classes, like lattices in Lie groups? 2) This one is the best approximation of my intuitive picture of the question. Suppose we have an algorithm $A$ deciding the identity problem in $G$. (This algorithm tells us whether two elements of $G$, presented as words, are equal.) Or maybe it's an oracle rather than an algorithm; I don't feel the difference. Can we decide the above problem using $A$? Update. It turns out that even $F_6\times F_6$ is not nice in the above sense (see John Stillwell's answer). This kills the second part too. The only question that remains unanswered is the one with the least motivation behind it: Is the "generating problem" solvable in lattices in Lie groups? The answer to the title question is that the problem is unsolvable. See p. 194 of Lyndon and Schupp's Combinatorial Group Theory, where it is called the "generating problem." It is unsolvable even when $G$ is the direct product of free groups of rank at least 6. Hyperbolic groups are not nice in your sense: Baumslag-Miller-Short used the Rips construction to build a hyperbolic group $G$ for which there is no algorithm to decide whether a finitely generated subgroup is all of $G$; see here. Let me mention that in the context of computational group theory, it is more natural to consider the group membership problem: given generators, say matrices $A_i \in SL(n,\Bbb Z)$, you want to decide whether a matrix $M$ lies in the group generated by the $A_i$. Clearly, we can apply group membership to standard generators to decide whether the $A_i$ generate some given group $G$. Unfortunately, this is in fact a much harder problem - since $SL(4,\Bbb Z)$ contains a product of two copies of $F_2$, this is undecidable. Interestingly, one can decide in polynomial time whether the group the $A_i$'s generate is finite or infinite, as well as whether it is virtually solvable: see here and here.
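To illustrate why $\mathbb Z^n$ is nice, here is a small sketch (not from the thread) using the standard fact that, for integer vectors spanning a full-rank subgroup, the index of that subgroup in $\mathbb Z^n$ equals the gcd of the $n\times n$ minors; the vectors generate all of $\mathbb Z^n$ exactly when that gcd is 1.

```python
from itertools import combinations
from math import gcd

def det(m):
    # Integer determinant by cofactor expansion along the first row
    # (fine for the small matrices used here).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def generates_Zn(vectors, n):
    """True iff the given integer vectors generate all of Z^n."""
    minors = [abs(det([list(v) for v in rows]))
              for rows in combinations(vectors, n)]
    g = 0
    for m in minors:
        g = gcd(g, m)          # gcd of all n x n minors = index of the subgroup
    return g == 1

print(generates_Zn([(2, 1), (3, 1)], 2))           # minor -1 -> True
print(generates_Zn([(2, 0), (0, 2), (1, 1)], 2))   # minors 4, 2, 2 -> gcd 2 -> False
```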
CommonCrawl