There are 49 balls, and if your choice of 6 numbers matches 5 winning balls plus the bonus number (a 7th ball drawn), this generally wins around £100,000 and has a probability of 1 in 2,330,636. The new 'Health Lottery' has a poor payout to the punters of around 33p in the pound, but 5 numbers out of 50 would win 'up to £100,000' with probability 1 in 2,118,760.

There are around 41,000,000,000 Premium Bonds, and currently each month 4 win £100,000 and 1 wins £1,000,000. So with a £1 Bond there is around a 1 in 8,000,000,000 chance of winning at least £100,000. However, you keep your £1 stake, and so a fairer comparison is to assume that you find a friend to lend you £500 for a month for £1 – this is 2.4% annual interest, which is not too bad at the moment. With 500 Bonds held for a month, the odds of winning at least £100,000 are 1 in 16,000,000, around 8 times worse than the lotteries.

Suppose you examine a race meeting with 6 races, and in each race choose a horse at medium odds of around 6 to 1 against. Then an accumulator, in which the winnings from each race are passed on to the next horse, will give you $7 \times 7 \times 7 \times 7 \times 7 \times 7$ = roughly £117,000 if they all win. Given a bookmaker's margin of, say, 15% on each bet, the true odds may be around 1 in 230,000.

If you can find a casino to let you bet £1, place it on your lucky number between 1 and 36. When it wins, either leave the £36 there or move it to another number. When that comes up too, move the £1,296 you now have to another number, or leave it where it is – it doesn't make any difference to the odds, but somehow it seems that the chance increases when the money is moved. When that comes up you will have £46,656, so move it all to Red, and when that comes up you will have £93,312, almost enough for your Maserati. The chance is $1/37 \times 1/37 \times 1/37 \times 18/37$ = 1 in 104,120.

Roulette is your best bet – about twice as good as horse racing, about 20 times as good as lotteries, and about 160 times as good as Premium Bonds.

Yes, I was doing an 'all or nothing' analysis. But in fact the additional chance from re-investing one's lottery winnings is fairly feeble, given that there is only a 1 in 52 chance of winning £10. I find it fascinating that if you were to rank those methods of gambling in order of social acceptability, it would be almost completely backwards.

A personal comment on Dave S. Another one for the "coincidences" section...?
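As a quick check of the arithmetic, here is a small Julia sketch reproducing the quoted odds; the prize structure and the number of bonds are taken from the text as given, so this is only a sanity check of the figures above.

```julia
# Approximate odds ("1 in N") of winning at least £100,000 by each route.

lottery        = binomial(49, 6) ÷ 6        # match 5 of your 6 plus the bonus ball: 2,330,636
health_lottery = binomial(50, 5)            # match 5 numbers out of 50:             2,118,760

bonds      = 41_000_000_000                 # £1 Premium Bonds in issue
big_prizes = 5                              # 4 prizes of £100,000 and 1 of £1,000,000 per month
premium_1   = bonds / big_prizes            # one £1 bond, one month:   ≈ 8.2e9
premium_500 = premium_1 / 500               # £500 of bonds, one month: ≈ 1.64e7

roulette = 1 / ((1/37)^3 * (18/37))         # three single numbers, then red: ≈ 104,120

accumulator_payout = 7^6                    # six races at 6 to 1 against: £117,649 from a £1 stake
```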
CommonCrawl
Real Algebraic and Analytic Geometry. Related manuscripts and search engines for mathematical preprints. Vincent Astier and Thomas Unger: Positive cones on algebras with involution. Vincent Astier, Thomas Unger: Signatures of hermitian forms, positivity, and an answer to a question of Procesi and Schacher. Patrick Speissegger: Quasianalytic Ilyashenko algebras. Malgorzata Czapla, Wieslaw Pawlucki: Michael's Theorem for a Mapping Definable in an O-Minimal Structure on a Set of Dimension 1. Malgorzata Czapla, Wieslaw Pawlucki: Michael's Theorem for Lipschitz Cells in O-minimal Structures. Zofia Ambroży, Wiesław Pawłucki: On Implicit Function Theorem in O-Minimal Structures. Murray Marshall: Application of localization to the multivariate moment problem II. K. Kurdyka, W. Pawlucki: O-minimal version of Whitney's extension theorem. J. William Helton, Igor Klep, Christopher S. Nelson: Noncommutative polynomials nonnegative on a variety intersect a convex set. Alessandro Berarducci, Marcello Mamino: Groups definable in two orthogonal sorts. Hoang Phi Dung: Lojasiewicz-type inequalities for nonsmooth definable functions in o-minimal structures and global error bounds. Clifton F. Ealy, Jana Maříková: Model completeness of o-minimal fields with convex valuations. Alessandro Berarducci, Mário Edmundo, Marcello Mamino: Discrete Subgroups of Locally Definable Groups. Iwona Krzyżanowska and Zbigniew Szafraniec: On polynomial mappings from the plane to the plane. Krzysztof Jan Nowak: A counter-example concerning quantifier elimination in quasianalytic structures. Murray Marshall: Application of localization to the multivariate moment problem. Beata Kocel-Cynk, Wiesław Pawłucki, Anna Valette: A short geometric proof that Hausdorff limits are definable in any o-minimal structure. Krzysztof Jan Nowak: Quasianalytic structures revisited: quantifier elimination, valuation property and rectilinearization of functions. M. Dickmann, A. Petrovich: Real Semigroups, Real Spectra and Quadratic Forms over Rings. Matthias Aschenbrenner, Lou van den Dries, Joris van der Hoeven: Towards a Model Theory for Transseries. Annalisa Conversano, Anand Pillay: On Levi subgroups and the Levi decomposition for groups definable in o-minimal structures. Annalisa Conversano, Anand Pillay: Connected components of definable groups, and o-minimality II. Ehud Hrushovski, Anand Pillay: Affine Nash groups over real closed fields. Annalisa Conversano , Anand Pillay: Connected components of definable groups and o-minimality I. Vincent Astier: Elementary equivalence of lattices of open sets definable in o-minimal expansions of fields. Iwona Krzyzanowska Zbigniew Szafraniec: Polynomial mappings into a Stiefel manifold and immersions. Andreas Fischer: Approximation of o-minimal maps satisfying a Lipschitz condition. Mehdi Ghasemi, Murray Marshall, Sven Wagner: Closure of the cone of sums of 2d powers in certain weighted \ell_1-seminorm topologies. Igor Klep , Markus Schweighofer: Infeasibility certificates for linear matrix inequalities. Janusz Adamus, Serge Randriambololona: Tameness of holomorphic closure dimension in a semialgebraic set. Janusz Adamus, Serge Randriambololona, Rasul Shafikov: Tameness of complex dimension in a real analytic set. Janusz Adamus, Rasul Shafikov: On the holomorphic closure dimension of real analytic sets. Mehdi Ghasemi, Murray Marshall: Lower bounds for polynomials using geometric programming. Abdelhafed Elkhadiri: On connected components of some globally semi-analytic sets. 
Krzysztof Jan Nowak: Supplement to the paper "Quasianalytic perturbation of multi-parameter hyperbolic polynomials and symmetric matrices". J. William Helton, Igor Klep, Scott McCullough: The convex Positivstellensatz in a free algebra. Edoardo Ballico, Riccardo Ghiloni: The principle of moduli flexibility in Real Algebraic Geometry. M. Dickmann, F. Miraglia: Faithfully Quadratic Rings. Timothy Mellor, Marcus Tressl: Non-axiomatizability of real spectra in L∞λ. Nicolas Dutertre: On the topology of semi-algebraic functions on closed semi-algebraic sets. Krzysztof Jan Nowak: On the real algebra of quasianalytic function germs. Riccardo Ghiloni, Alessandro Tancredi: Algebraic models of symmetric Nash sets. Riccardo Ghiloni: On the Complexity of Collaring Theorem in the Lipschitz Category. Tim Netzer, Andreas Thom: Polynomials with and without determinantal representations. Krzysztof Jan Nowak: On the singular locus of sets definable in a quasianalytic structure. Aleksandra Nowel, Zbigniew Szafraniec: On the number of branches of real curve singularities. Elías Baro, Eric Jaligot, Margarita Otero: Commutators in groups definable in o-minimal structures. Andreas Fischer: Recovering o-minimal structures. Murray Marshall, Tim Netzer: Positivstellensätze for real function algebras. Tim Netzer, Andreas Thom: Tracial algebras and an embedding theorem. Krzysztof Jan Nowak: Quasianalytic perturbation of multiparameter hyperbolic polynomials and symmetric matrices. Krzysztof Jan Nowak: The Abhyankar-Jung theorem for excellent henselian subrings of formal power series. J. William Helton, Igor Klep, Scott McCullough: The matricial relaxation of a linear matrix inequality. Elzbieta Sowa: Picard-Vessiot extensions for real fields. Nicolas Dutertre: Euler characteristic and Lipschitz-Killing curvatures of closed semi-algebraic sets. Dang Tuan Hiep: Representations of non-negative polynomials via the critical ideals. Sabine Burgdorf, Igor Klep: The truncated tracial moment problem. Jana Maříková: O-minimal residue fields of o-minimal fields. Dang Tuan Hiep: Representation of non-negative polynomials via the KKT ideals. J. William Helton, Igor Klep, Scott McCullough: Analytic mappings between noncommutative pencil balls. Mehdi Ghasemi, Murray Marshall: Lower bounds for a polynomial in terms of its coefficients. Annalisa Conversano: Lie-like decompositions of groups definable in o-minimal structures. Vincent Astier, Hugo L. Mariano: Realizing profinite reduced special groups. Jean-Philippe Monnier: Very special divisors on real algebraic curves. Małgorzata Czapla: Definable Triangulations with Regularity Conditions. Małgorzata Czapla: Invariance of Regularity Conditions under Definable, Locally Lipschitz, Weakly Bi-Lipschitz Mappings. Tim Netzer: On semidefinite representations of sets. Igor Klep, Markus Schweighofer: Pure states, positive matrix polynomials and sums of hermitian squares. Ikumitsu Nagasaki, Tomohiro Kawakami, Yasuhiro Hara and Fumihiro Ushitaki: Smith homology and Borsuk-Ulam type theorems. Matthias Aschenbrenner, Andreas Fischer: Definable versions of theorems by Kirszbraun and Helly. Jose Capco: Real closed * reduced partially ordered Rings. Andreas Fischer, Murray Marshall: Extending piecewise polynomial functions in two variables. Sabine Burgdorf, Claus Scheiderer, Markus Schweighofer: Pure states, nonnegative polynomials and sums of squares. Doris Augustin: The Membership Problem for quadratic modules. Alessandro Berarducci, Marcello Mamino: Equivariant homotopy of definable groups. 
Andreas Fischer: A strict Positivstellensatz for definable quasianalytic rings. Andreas Fischer: Positivstellensätze for families of definable functions. Jaka Cimprič, Murray Marshall, Tim Netzer: Closures of quadratic modules. Andreas Fischer: Infinite Peano differentiable functions in polynomially bounded o-minimal structures. Nicolas Dutertre: Radial index and Poincaré-Hopf index of 1-forms on semi-analytic sets. Claus Scheiderer: Weighted sums of squares in local rings and their completions, II. Claus Scheiderer: Weighted sums of squares in local rings and their completions, I. Tim Netzer, Daniel Plaumann, Markus Schweighofer: Exposed faces of semidefinitely representable sets. Tim Netzer: Representation and Approximation of Positivity Preservers. Tomohiro kawakami: Locally definable $C^\infty G$ manifold structures of locally definable $C^r G$ manifolds. Tomohiro Kawakami: Locally definable fiber bundles. Andreas Fischer: On smooth locally o-minimal functions. Elías Baro, Margarita Otero: Locally definable homotopy. Andreas Fischer: On compositions of subanalytic functions. Igor Klep, Thomas Unger: The Procesi-Schacher conjecture and Hilbert's 17th problem for algebras with involution. Alessandro Berarducci, Marcello Mamino, Margarita Otero: Higher homotopy of groups definable in o-minimal structures. Artur Piękosz: O-minimal homotopy and generalized (co)homology. János Kollár, Frédéric Mangolte: Cremona transformations and diffeomorphisms of surfaces. J. William Helton, Igor Klep, Scott McCullough, Nick Slinglend: Noncommutative ball maps. Niels Schwartz: SV-Rings and SV-Porings. Niels Schwartz: Real closed valuation rings. Elías Baro, Margarita Otero: On o-minimal homotopy groups. Doris Augustin, Manfred Knebusch: Quadratic Modules in R[[X]]. Jaka Cimprič, Murray Marshall, Tim Netzer: On the real multidimensional rational $K$-moment problem. Iwona Karolkiewicz, Aleksandra Nowel, Zbigniew Szafraniec: An algebraic formula for the intersection number of a polynomial immersion. Tomohiro Kawakami: Relative properties of definable C^\infty manifolds with finite abelian group actions in an o-minimal expansion of R_\exp. Andreas Fischer: Algebraic models for o-minimal manifolds. Johannes Huisman, Frédéric Mangolte: Automorphisms of real rational surfaces and weighted blow-up singularities. Fabrizio Catanese, Frédéric Mangolte: Real singular Del Pezzo surfaces and 3-folds fibred by rational curves, II. Y. Peterzil, S. Starchenko: Mild Manifolds and a Non-Standard Riemann Existence Theorem. Stanisław Łojasiewicz, Maria-Angeles Zurro: Closure theorem for partially semialgebraics. F. Bihan, F. Sottile: Betti number bounds for fewnomial hypersurfaces via stratified Morse Theory. Andreas Fischer: John Functions for $o$-minimal Domains. Marcus Tressl: Bounded super real closed rings. Nicolas Dutertre: On the real Milnor fibre of some maps from R^n to R^2. Roman Wencel: A model theoretic application of Gelfond-Schneider theorem. Andreas Fischer: The Riemann mapping theorem for $o$-minimal functions. Krzysztof Jan Nowak: On two problems concerning quasianalytic Denjoy--Carleman classes. Alessandro Berarducci: Cohomology of groups in o-minimal structures: acyclicity of the infinitesimal subgroup. Krzysztof Jan Nowak: Quantifier elimination, valuation property and preparation theorem in quasianalytic geometry via transformation to normal crossings. Andreas Fischer: Peano differentiable extensions in $o$-minimal structures. 
Benoit Bertrand and Frédéric Bihan: Euler characteristic of real non degenerate tropical complete intersections. Igor Klep, Markus Schweighofer: Sums of hermitian squares and the BMV conjecture. Elías Baro: Normal triangulations in o-minimal structures. V. Grandjean: Tame Functions with strongly isolated singularities at infinity: a tame version of a Parusinski's Theorem. V. Grandjean: On the the total curvatures of a tame function. Johannes Huisman, Frédéric Mangolte: The group of automorphisms of a real rational surface is n-transitive. Margarita Otero, Ya'acov Peterzil: G-linear sets and torsion points in definably compact groups. Andreas Fischer: O-minimal analytic separation of sets in dimension two. Dan Bates, Frédéric Bihan, Frank Sottile: Bounds on the number of real solutions to polynomial equations. Frédéric Bihan, Frank Sottile: Gale Duality for Complete Intersections. Alessandro Berarducci, Antongiulio Fornasiero: O-minimal cohomology: finiteness and invariance results. Fabrizio Catanese, Frédéric Mangolte: Real singular Del Pezzo surfaces and threefolds fibred by rational curves, I. Krzysztof Jan Nowak: Decomposition into special cubes and its applications to quasi-subanalytic geometry. Iwona Karolkiewicz, Aleksandra Nowel, Zbigniew Szafraniec: Immersions of spheres and algebraically constructible functions. Andreas Fischer: Smooth Approximation in O-Minimal Structures. Georges Comte, Yosef Yomdin: Rotation of Trajectories of Lipschitz Vector Fields. R. Rubio, J.M. Serradilla, M.P. Vélez: Detecting real singularities of curves from a rational parametrization. M. Ansola, M.J. de la Puente: Metric invariants of tropical conics and factorization of degree–two homogeneous polynomials in three variables. M. Ansola, M.J. de la Puente: A note on tropical triangles in the plane. Andreas Fischer: Extending O-minimal Fréchet Derivatives. David Trotman, Leslie C. Wilson: (r) does not imply (n) or (npf) for definable sets in non polynomially bounded o-minimal structures. Andreas Fischer: Smooth Approximation of Definable Continuous Functions. Frédéric Bihan, J. Maurice Rojas, Frank Sottile: Sharpness of Fewnomial Bound and the Number of Components of a Fewnomial Hypersurface. Wiesław Pawłucki: Lipschitz Cell Decomposition in O-Minimal Structures. I. Tobias Kaiser, Jean-Philippe Rolin, Patrick Speissegger: Transition maps at non-resonant hyperbolic singularities are o-minimal. David Grimm, Tim Netzer, Markus Schweighofer: A note on the representation of positive polynomials with structured sparsity. Jean-Philippe Monnier: Fixed points of automorphisms of real algebraic curves. Frédéric Bihan, Frank Sottile: New Fewnomial Upper Bounds from Gale Dual Polynomial Systems. Manfred Knebusch: Positivity and convexity in rings of fractions. Michael Barr, John F. Kennison, Robert Raphael: On productively Lindelöf spaces. Lev Birbrair, Alexandre Fernandes: Metric Geometry of Complex Algebraic Surfaces with Isolated Singularities. Marcus Tressl: Heirs of box types in polynomially bounded structures. Igor Klep, Markus Schweighofer: Connes' embedding conjecture and sums of hermitian squares. Andreas Fischer: Definable Smoothing of Lipschitz Continuous Functions. Tim Netzer: An Elementary Proof of Schmüdgen's Theorem on the Moment Problem of Closed Semi-Algebraic Sets. Vincent Grandjean: Triviality at infinity of real 3-space polynomial functions with cone-like ends. Frédéric Bihan, Frédéric Mangolte: Topological types of real regular jacobian elliptic surfaces. 
Nicolas Dutertre: A Gauss-Bonnet formula for closed semi-algebraic sets. Guillaume Valette: Multiplicity mod 2 as a metric invariant. R. Raphael, R. Grant Woods: When the Hewitt realcompactification and the P-coreflection commute. Mouadh Akriche, Frédéric Mangolte: Nombres de Betti des surfaces elliptiques réelles. José F. Fernando, Jesús M. Ruiz, Claus Scheiderer: Sums of squares of linear forms. Nicolas Dutertre: Semi-algebraic neighborhoods of closed semi-algebraic sets. A. Dolich, Patrick Speissegger: An ordered structure of rank two related to Dulac's Problem. Krzysztof Jan Nowak: On the Euler characteristic of the links of a set determined by smooth definable functions. Carlos Andradas, M. P. Vélez: On the non reduced order spectrum of real curves: some examples and remarks. W.D. Burgess, R. Raphael: Compactifications, C(X) and ring epimorphisms. Ahmed Srhir: Algèbre $p$-adique et ses applications en géométries algébrique et analytique $p$-adiques. José F. Fernando: On the Positive Extension Property and Hilbert's 17th Problem for Real Analytic Sets. Pantelis Eleftheriou, Sergei Starchenko: Groups definable in ordered vector spaces over ordered division rings. M. Dickmann, F. Miraglia: Marshall's and Milnor's Conjectures for Preordered von Neumann Regular Rings. Luis Felipe Tabera: Tropical plane geometric constructions. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the finiteness of Pythagoras numbers of real meromorphic functions. Markus Schweighofer: Global optimization of polynomials using gradient tentacles and sums of squares. Andreas Fischer: Zero-Set Property of O-Minimal Indefinitely Peano Differentiable Functions. Roman Wencel: Weakly o-minimal non-valuational structures. Roman Wencel: Topological properties of sets definable in weakly o-minimal structures. W. Kucharz, K. Kurdyka: Stiefel-Whitney classes for coherent real analytic sheaves. W. Kucharz, K. Kurdyka: Algebraicity of global real analytic hypersurfaces. J. Bochnak, W. Kucharz: On successive minima of indefinite quadratic forms. Marcus Tressl: Super real closed rings. Andreas Fischer: Differentiability of Peano derivatives. Paweł Goldstein: Gradient flow of a harmonic function in R3. Jiawang Nie, Markus Schweighofer: On the complexity of Putinar's Positivstellensatz. J. Bochnak, W. Kucharz: Real algebraic morphisms represent few homotopy classes. Frédéric Bihan: Polynomial systems supported on circuits and dessins d'enfants. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando: On a Global Analytic Positivstellensatz. Fabrizio Broglia, Federica Pieroni: On the Real Nullstellensatz for Global Analytic Functions. Krzysztof Jan Nowak: A proof of the valuation property and preparation theorem. N. J. Fine, L. Gillman, J. Lambek: Rings of Quotients of Rings of Functions. Jana Maříková: Geometric Properties of Semilinear and Semibounded Sets. Igor Klep, Markus Schweighofer: A Nichtnegativstellensatz for polynomials in noncommuting variables. Benoit Bertrand, Frédéric Bihan, Frank Sottile: Polynomial systems with few real zeroes. Jean-Marie Lion, Patrick Speissegger: The Theorem of the Complement for nested Sub-Pfaffian Sets. Lev Birbrair, João Costa, Alexandre Fernandes, Maria Ruas: K-bi-Lipschitz Equivalence of Real Function Germs. Lev Birbrair: Lipschitz Geometry of Curves and Surfaces Definable in O-Minimal Structures. D. D'Acunto, K. Kurdyka: Effective Łojasiewicz gradient inequality for polynomials. 
Riccardo Ghiloni: Globalization and compactness of McCrory-Parusiński conditions. Proceedings of the RAAG Summer School Lisbon 2003: O-minimal Structures. Federica Pieroni: Sums of squares in quasianalytic Denjoy-Carleman classes. Federica Pieroni: Artin-Lang property for quasianalytic rings. Jean Philippe Monnier: Clifford Theorem for real algebraic curves. A. J. Wilkie: Lectures on "An o-minimal version of Gromov's Algebraic Reparameterization Lemma with a diophantine application". Vincent Astier: Elementary equivalence of some rings of definable functions. Benoit Bertrand: Asymptotically maximal families of hypersurfaces in toric varieties. E. Bujalance, F. J. Cirre, J. M. Gamboa, G. Gromadzki: On the number of ovals of a symmetry of a compact Riemann surface. Didier D'Acunto, Vincent Grandjean: A Gradient Inequality at infinity for tame functions. Mário J. Edmundo: On torsion points of locally definable groups in o-minimal structures. Johannes Huisman, Frédéric Mangolte: Every connected sum of lens spaces is a real component of a uniruled algebraic variety. José F. Fernando: On the Hilbert's 17th Problem for global analytic functions on dimension 3. Margarita Otero: On divisibility in definable groups. L. Alberti, G. Comte, B. Mourrain: Meshing implicit algebraic surfaces: the smooth case. Jonathan Kirby, Boris Zilber: The uniform Schanuel conjecture over the real numbers. Ma.Emilia Alonso, Dan Haran: Covers of Klein Surfaces. Ya'acov Peterzil, Anand Pillay: Generic sets in definably compact groups. Nicolas Dutertre: Curvature integrals on the real Milnor fibre. Guillaume Valette: Volume, Density And Whitney Conditions. Andreas Bernig: Gromov-Hausdorff limits in definable families. Ya'acov Peterzil, Sergei Starchenko: Complex analytic geometry and analytic-geometric categories. M. Coste, T. Lajous, H. Lombardi, M-F. Roy: Generalized Budan-Fourier theorem and virtual roots. Louis Mahé: On the Pierce-Birkhoff Conjecture in three variables. Olivier Macé, Louis Mahé: Sommes de trois carrés de fractions en deux variables. Markus Schweighofer: Certificates for nonnegativity of polynomials with zeros on compact semialgebraic sets. Igor Klep, Dejan Velušček: $n$-real valuations and the higher level version of the Krull-Baer theorem. A. J. Wilkie: Covering definable open sets by open cells. Mário J. Edmundo, Margarita Otero: Definably compact abelian groups. Luis Felipe Tabera: Tropical constructive Pappus' theorem. Niels Schwartz: About Schmüdgen's Theorem. Claus Scheiderer: Moment problem and complexity. Claus Scheiderer: Distinguished representations of non-negative polynomials. Claus Scheiderer: Sums of squares on real algebraic surfaces. Ángel L. Pérez del Pozo: Automorphism groups of compact bordered Klein surfaces with invariant subsets. Andreas Bernig: Support functions, projections and Minkowski addition of Legendrian cycles. Adam Dzedzej, Zbigniew Szafraniec: On families of trajectories of an analytic gradient vector field. Riccardo Ghiloni: Rigidity and Moduli Space in Real Algebraic Geometry. Serge Randriambololona: O-minimal structures: low arity versus generation. Andreas Fischer: Singularities of o-minimal Peano derivatives. Sérgio Alvarez, Lev Birbrair, João Costa, Alexandre Fernandes: Topological K-Equivalence of Analytic Function-Germs. L. Birbrair, A.G. Fernandes: Horn Exponents of Real Quasihomogeneous and Semi-Quasihomogeneous Surfaces. R. Raphael, R.G. Woods: On RG-Spaces and the Regularity Degree. 
Salma Kuhlmann, Saharon Shelah: κ-bounded Exponential-Logarithmic Power Series Fields. Daniel Richardson: Near Integral Points of Sets Definable in O-Minimal Structures. Wiesław Pawłucki: A linear extension operator for Whitney fields on closed o-minimal sets. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the Pythagoras number of real analytic surfaces. Bruce Reznick: On the absence of uniform denominators in Hilbert's 17th problem. Victoria Powers, Bruce Reznick: Polynomials positive on unbounded rectangles. D. Biljakovic, M. Kochetov, S. Kuhlmann: Primes and Irreducibles in Truncation Integer Parts of Real Closed Fields. Michael Barr, John F. Kennison, Robert Raphael: Searching For Absolute CR-Epic Spaces. José F. Fernando, José M. Gamboa: Polynomial and regular images of Rn. Victoria Powers, Bruce Reznick, Claus Scheiderer, Frank Sottile: A New Proof of Hilbert's Theorem on Ternary Quartics. Markus Schweighofer: Iterated rings of bounded elements: Erratum. Ya'acov Peterzil, Sergei Starchenko: Complex-Like Analysis in O-Minimal Structures. Alessandro Berarducci, Margarita Otero, Ya'acov Peterzil, Anand Pillay: A descending chain condition for groups definable in o-minimal structures. A. J. Wilkie: Fusing o-minimal structures. Ángel L. Pérez del Pozo: Gap sequences on Klein surfaces. Andreas Fischer: Definable Λ p -regular cell decomposition. Vincent Grandjean: On the Limit set at infinity of gradient of semialgebraic function. Riccardo Ghiloni: On the space of morphisms into generic real algebraic varieties. Mário J. Edmundo, Gareth O. Jones, Nicholas J. Peatfield: Sheaf cohomology in o-minimal structures. Didier D'Acunto, Vincent Grandjean: On gradient at infinity of real polynomials. Andreas Bernig, Alexander Lytchak: Tangent spaces and Gromov-Hausdorff limits of subanalytic spaces. W. Charles Holland, Salma Kuhlmann, Stephen H.McCleary: Lexicographic Exponentiation of Chains. Digen Zhang: A note on Prüfer extensions. Matthias Aschenbrenner, Lou van den Dries: Asymptotic Differential Algebra. Digen Zhang: Prüfer hulls of commutative rings. Digen Zhang: An elementary proof that the length of X14+X24+X34+X44 is 4. Manfred Knebusch, Digen Zhang: Convexity, valuations and Prüfer extensions in real algebra. Tobias Kaiser: Dirichlet-regularity in arbitrary $o$-minimal structures on the field IR up to dimension 4. Tobias Kaiser: Capacity in subanalytic geometry. Tobias Kaiser: Dirichlet-regularity in polynomially bounded $o$-minimal structures on IR. Andreas Bernig: Curvature tensors of singular spaces. Aleksandra Nowel: Topological invariants of analytic sets associated with Noetherian families. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the Hilbert 17th Problem for Global Analytic Functions. Francesca Acquistapace, Fabrizio Broglia, José F. Fernando, Jesús M. Ruiz: On the Pythagoras Numbers of Real Analytic Curves. Mário J. Edmundo: Covers of groups definable in o-minimal structures. Vincent Astier: On some sheaves of special groups. M. Hrušák, R.Raphael, R.G.Woods: On a class of pseudocompact spaces derived from ring epimorphisms. Riccardo Ghiloni: Explicit Equations and Bounds for the Nakai--Nishimura--Dubois--Efroymson Dimension Theorem. W. Domitrz, S. Janeczko, M. Zhitomirskii: Relative Poincare Lemma, Contractibility, Quasi-Homogeneity and Vector Fields Tangent to a Singular Variety. Jean-Yves Welschinger: Spinor states of real rational curves in real algebraic convex $3$-manifolds and enumerative invariants. 
Timothy Mellor: Imaginaries in Real Closed Valued Fields II. Timothy Mellor: Imaginaries in Real Closed Valued Fields I. Andreas Bernig: Curvature bounds on subanalytic spaces. Marcus Tressl: Computation of the z-radical in C(X). M. Dickmann, F. Miraglia: Algebraic K-theory of Special Groups. Jean-Philippe Monnier: On real generalized Jacobian varieties. Frédéric Mangolte: Real algebraic morphisms on 2-dimensional conic bundles. Ludwig Bröcker: Charakterisierung algebraischer Kurven in der komplexen projektiven Ebene. Alessandro Berarducci, Margarita Otero: An additive measure in o-minimal expansions of fields. Riccardo Ghiloni: Second Order Homological Obstructions and Global Sullivan-type Conditions on Real Algebraic Varieties. Michael Barr, R. Raphael, R.G. Woods: On CR-epic embeddings and absolute CR-epic spaces. Matthias Aschenbrenner, Lou van den Dries, Joris van der Hoeven: Differentially Algebraic Gaps. Aleksandra Nowel, Zbigniew Szafraniec: On trajectories of analytic gradient vector fields on analytic manifolds. Wiesław Pawłucki: On the algebra of functions Ck-extendable for each k finite. Igor Klep: A Kadison-Dubois representation for associative rings. Jaka Cimprič, Igor Klep: Orderings of Higher Level and Rings Of Fractions. José F. Fernando: Sums of squares in excellent henselian local rings. Salma Kuhlmann, Murray Marshall, Niels Schwartz: Positivity, sums of squares and the multi-dimensional moment problem II. Markus Schweighofer: Optimization of polynomials on compact semialgebraic sets. Alessandro Berarducci, Tamara Servi: An effective version of Wilkie's theorem of the complement and some effective o-minimality results. Mário J. Edmundo, Arthur Woerheide: Comparation theorems for o-minimal singular (co)homology. Carlos Andradas, Antonio Díaz-Cano: Some properties of global semianalytic subsets of coherent surfaces. Ludwig Bröcker: Euler integration and Euler multiplication. Artur Piękosz: K-subanalytic rectilinearization and uniformization. Hélène Pennaneac'h: Virtual and non virtual algebraic Betti numbers. José F. Fernando, Jesús M. Ruiz: On the Pythagoras numbers of real analytic set germs. Daniel Richardson, Ahmed El-Sonbaty: Counterexamples to the Uniformity Conjecture. Johannes Huisman: Real line arrangements and fundamental groups. Lou van den Dries, Patrick Speissegger: O-minimal preparation theorems. Murray Marshall: Optimization of polynomial functions. Salma Kuhlmann, Murray Marshall: Positivity, sums of squares and the multi-dimensional moment problem. Murray Marshall: Approximating positive polynomials using sums of squares. Murray Marshall: *-orderings and *-valuations on algebras of finite Gelfand-Kirillov dimension. Frédéric Chazal, Rémi Soufflet: Stability and finiteness properties of Medial Axis and Skeleton. Didier D'Acunto, Krzysztof Kurdyka: Geodesic diameter of compact real algebraic hypersurfaces. Jean-Yves Welschinger: Invariants of real symplectic 4-manifolds and lower bounds in real enumerative geometry. J. Huisman, M. Lattarulo: Imaginary automorphisms on real hyperelliptic curves. Johannes Huisman, Frédéric Mangolte: Every orientable Seifert 3-manifold is a real component of a uniruled algebraic variety. J. Bochnak, W. Kucharz: Analytic Cycles on Real Analytic Manifolds. Didier D'Acunto: Sur la topologie des fibres d'une fonction définissable dans une structure o-minimale. Henri Lombardi: Constructions cachées en algèbre abstraite (5). Principe local-global de Pfister et variantes. F. Chazal, J-M. 
Lion: Volumes transverses aux feuilletages definissables dans des structures o-minimales. Jean-Marie Lion, Patrick Speissegger: A geometric proof of the definability of Hausdorff limits. Jean-Philippe Monnier: Divisors on real curves. Artur Piękosz: Extending analytic $K$-subanalytic functions. Marcus Tressl: Pseudo Completions and Completion in Stages of o-minimal Structures. A. Díaz-Cano: Orderings and maximal ideals of rings of analytic functions. Mário J. Edmundo: O-minimal (co)homology and applications. Mário J. Edmundo: O-minimal cohomology with definably compact supports. Mário J. Edmundo: O-minimal cohomology and definably compact definable groups. Matthias Aschenbrenner, Lou van den Dries: Liouville Closed H-Fields. C. Andradas, A. Díaz-Cano: Closed stability index of excellent henselian local rings. José F. Fernando, Jesús M. Ruiz, Claus Scheiderer: Sums of squares in real rings. Michel Coste, Jesús M. Ruiz, Masahiro Shiota: Global Problems on Nash Functions. F. Acquistapace, F. Broglia, M. Shiota: The finiteness property and Łojasiewicz inequality for global semianalytic sets. F. Broglia, F. Pieroni: Separation of global semianalytic subsets of 2-dimensional analytic manifolds. F. Acquistapace, A. Díaz-Cano: Divisors in Global Analytic Sets. Johannes Huisman: Real hypersurfaces having many pseudo-hyperplanes. Vincent Astier, Marcus Tressl: Axiomatization of local-global principles for pp-formulas in spaces of orderings. Claus Scheiderer: Sums of squares on real algebraic curves. Maria Jesus de la Puente: Real Plane Algebraic Curves. C. Andradas, R. Rubio, M.P. Vélez: An Algorithm for convexity of semilinear sets over ordered fields. Zbigniew Szafraniec: Topological invariants of real Milnor fibres. Markus Schweighofer: On the complexity of Schmüdgen's Positivstellensatz. Didier D'Acunto, Krzysztof Kurdyka: Bounds for Gradient Trajectories of Definable Functions with Applications to Robotics and Semialgebraic Geometry. Z. Jelonek, K. Kurdyka: Quantitative generalized Bertini-Sard Theorem for smooth affine Varieties. J. Bochnak, W. Kucharz: On approximation of smooth submanifolds by nonsingular real algebraic subvarieties. J. Bochnak, W. Kucharz: A topological proof of the Grothendieck formula in real algebraic geometry. Georges Comte, Yosef Yomdin: Book: Tame Geometry with Applications in Smooth Analysis. José F. Fernando, J. M. Gamboa: Polynomial images of R^n. José F. Fernando: Analytic surface germs with minimal Pythagoras number. Markus Schweighofer: Iterated rings of bounded elements and generalizations of Schmüdgen's Positivstellensatz. Marcus Tressl: Valuation theoretic content of the Marker-Steinhorn Theorem. WWW Server: School of Mathematics, University of Manchester, UK.
CommonCrawl
where, for a surface embedded in $\mathbb R^3$ in the shallow-shell setting, we took $z=f(x,y)=w$ in Monge form and neglected the squares of the first-order partial derivatives $(p,q)$ in its denominator. The standard differential relation can be linked to the classical large-deformation theory of plates and shells. It links direct and shear strains for compatibility and is known to represent the Gauss curvature, as was derived by von Kármán. The full non-linear $K$ is a known isometric invariant that can be derived from the Christoffel symbols of the first fundamental form of surface theory (Theorema Egregium). This appears in treatments dealing with large deformations in both rectangular and polar coordinates. Is it composed of isometric or topological invariants derived from the first and second fundamental forms? Or from the Gauss-Bonnet theorem? How is this derived and recognized?
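For reference, in the usual notation $p=w_x$, $q=w_y$, $r=w_{xx}$, $s=w_{xy}$, $t=w_{yy}$ for the Monge patch $z=w(x,y)$ (sign conventions may differ between texts), the two formulas being discussed are the Gauss curvature
$$ K \;=\; \frac{rt-s^2}{\left(1+p^2+q^2\right)^2} \;\approx\; w_{xx}\,w_{yy}-w_{xy}^{2} \quad\text{(shallow shell: } p^2,\,q^2 \ll 1\text{)}, $$
and the von Kármán strain-compatibility relation for the membrane strains $\varepsilon_{xx}=u_x+\tfrac12 w_x^2$, $\varepsilon_{yy}=v_y+\tfrac12 w_y^2$, $\gamma_{xy}=u_y+v_x+w_x w_y$,
$$ \frac{\partial^2\varepsilon_{xx}}{\partial y^2}+\frac{\partial^2\varepsilon_{yy}}{\partial x^2}-\frac{\partial^2\gamma_{xy}}{\partial x\,\partial y} \;=\; w_{xy}^{2}-w_{xx}\,w_{yy} \;=\; -K. $$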
CommonCrawl
All currently known construction methods for smooth compact $\mathrm G_2$-manifolds have been tied to certain singular $\mathrm G_2$-spaces, which in Joyce's original construction are $\mathrm G_2$-orbifolds and in Kovalev's twisted connected sum construction are complete $\mathrm G_2$-manifolds with cylindrical ends. By a slight abuse of terminology we also refer to the latter as singular $\mathrm G_2$-spaces, and in fact both construction methods may be viewed as desingularization procedures. In turn, singular $\mathrm G_2$-spaces comprise a (conjecturally large) part of the boundary of the moduli space of smooth compact $\mathrm G_2$-manifolds, and so their deformation theory is of considerable interest. Furthermore, singular $\mathrm G_2$-spaces are also important in theoretical physics: according to Acharya and Witten, in order to have realistic low-energy physics in M-theory one needs compact singular $\mathrm G_2$-spaces with both codimension 4 and codimension 7 singularities. However, the existence of such singular $\mathrm G_2$-spaces is unknown at present. The aim of this workshop was to bring researchers from special holonomy geometry, geometric analysis and theoretical physics together to exchange ideas on these questions.
CommonCrawl
I know how to solve a Rubik's cube, $4\times4\times4$ or $5\times5\times5$ or bigger, but I have a problem with a specific algorithm: the parity error. What's the shortest parity algorithm for $n\times n\times n$, or is there a really easy way to memorize parity?

Unfortunately, there's not an easy way to spot or correct parity errors, but understanding what causes them can help you memorize how to correct them, and figure out where to go. Mechanical puzzles have something called "virtual states." These states are a part of the cube that exists, but they're either covered up or not shown on the surface of the cube. There are countless virtual states, but importantly: some virtual states, even though you cannot see them, make the difference in solvability. Parity errors most commonly (though not exclusively) result from errors in states you can't actually see. A common example of what I mean by this is the centers on the 4x4. You can't see them - not the stationary ones that exist on a 3x3 - but they still exist. Parities emerge on the 4x4 when the invisible 3x3 centers on the 4x4 are rotated into an invalid position. You can't see them, but they're there. As a result, they're quite literally impossible to spot until you run face-first into an unsolvable state. To correct them, you need to execute an algorithm that puts the internal hidden states into their correct locations, likely at the cost of re-scrambling a portion of the rest of the cube. As far as algorithms go, there's nothing to do but memorize, unfortunately. Trying to understand how the algorithm does what it does will let you remember it more clearly, but at the end of the day, it's still memorization. Practicing them over and over until you have them down is, at some point, the only way. On higher-order cubes, solve from the inside out, and use parity algorithms as you go. For example, on the 7x7, pair the inner edge wings with their edges, then the outer edge wings with their edges. If you run into a parity case on the inner edges, it'll be easier to spot and less destructive to correct if you go inside-out rather than outside-in. The same algorithms will work, though.

If you think about how a 5x5 works, and then think about how the fifth layer is hidden, you can discover why the OLL parity exists. Armed with that knowledge I tackle the OLL parity using no extra algs. Using the same tactic I got around the PLL parity, but I recommend just learning the alg for that: get the two incorrect edges opposite each other (you can use a U-Perm to do this) and do this 6-turn-long alg: MR2 U2 MR2 u2 MR2 MU2, and then fix the PLL. I challenge you to figure it out! Note: you only need 1 algorithm for the OLL parity, which you might already know: it's the one you use to pair outer edges to inner edges on a 5x5. I'm sure there are other ones, but I use r U2 r U2 F2 r F2 l' U2 l U2 r2. Good luck!
CommonCrawl
What are some good canned classifiers for high-dimensional data with probabilistic labels, besides neural nets?

I've got a classification problem where my labels are $N\times4$ matrices of probabilities of class membership, and I've got about 1800 covariates. The covariates are mostly granular, in the sense that an additively separable model (like a penalized multinomial logit) probably would work that well. I also haven't had much luck with using standard multinomial classifiers after assigning each observation to the highest-probability class -- xgboost and random forest don't find much. A neural net is the obvious solution, but tuning the hyperparameters is a huge chore that I'd like to avoid if possible. Are there other good options that require less futzing time? For background, the class probabilities are all weights from a finite mixture model. I trained it and it fits really well, but I'm only just now realizing that I won't know the class weights for new data unless I know their outcomes! D'oh!

If your problem is related to image processing then you may want to look into the rFerns package; the rFerns function is a mix of random forests and naive Bayes. It's extremely fast, efficient, and simple to use, plus it was specifically designed for image processing / object recognition.
CommonCrawl
The physiological sleep inducers are adenosine and the hormone melatonin. Melatonin is secreted by the pineal gland, part of the epithalamus. Its concentration is high in the dark, and it induces sleep by inhibiting the RAS. When the brain is active it consumes ATP at a high rate, which generates adenosine. When a significant amount of adenosine has accumulated in the brain, it binds to the A1 receptor and inhibits the cholinergic neurons of the reticular activating system, which are involved in arousal.

There is another mechanism that maintains the balance between excitation and inhibition in the brain, a balance that is essential for normal brain performance. Too much excitation of the brain leads to complications including convulsions, anxiety, high blood pressure, restlessness and insomnia. The physiological inhibitors of neuronal cells include $\gamma$-aminobutyric acid (GABA) and glycine. GABA is a neurotransmitter present in a large fraction (about 40%) of the synapses in the central nervous system; it inhibits post-synaptic neurons by opening the chloride (Cl$^-$) channel, followed by hyperpolarization of the neuron. It acts by binding to a ligand-gated ion channel (the GABA$_A$ receptor) at a site located between the $\alpha_1$ and $\beta_2$ subunits. Thus GABA inhibits the neurons present in the region of the RAS and suppresses their state of stimulation.

Insomnia is a sleep disorder in which it is difficult to fall asleep. Possible causes of insomnia include stress, excessive caffeine intake, mood disorders, shift work, pain or other medical issues, and drugs. When one is incapable of maintaining the balance between excitation and inhibition, GABA agonists are given from outside. Based on this inhibitory action at GABA$_A$ receptors, the barbiturate, benzodiazepine (BZD) and non-BZD groups of drugs are used for the treatment of insomnia and other central nervous system disorders such as convulsions, anxiety and epilepsy. Barbiturates, the first-generation hypnotics, have a high affinity for the ionotropic GABA$_A$ receptors and act by decreasing waking, increasing slow-wave sleep and enhancing the intermediate stage situated between slow-wave sleep and paradoxical sleep, at the expense of this last sleep stage. Barbiturates can induce sleep in the absence of GABA, as well as being capable of stimulating the GABA binding sites of GABA$_A$ receptors. Pentobarbitone further inhibits the glutamate-mediated depolarization of post-synaptic neurons.
CommonCrawl
I have 2 questions regarding the naming of secondary structure elements ($\alpha$-helices and $\beta$-sheets), like helix C or sheet 2, which are often used in publications.

Who assigns the characters and numbers to the helices and sheets? The authors of the paper where the structure is published (taking possible homologues or canonical folds into consideration)? Some institution? Is there a database/website where one can easily look up the right naming? I could not find any such information in the RCSB webpage's entries. Or is it always necessary to look at the publication? I read that there is a 'canonical P450 fold', but I could not find any naming conventions.
CommonCrawl
A condition on a region of Euclidean space expressing some non-flatness property. An open set $G\subset E^n$ satisfies the weak cone condition if $x+V(e(x),H)\subset G$ for all $x\in G$, where $V(e(x),H)$ is a right circular cone with vertex at the origin, of fixed opening $\epsilon$ and height $H$, $0\leq H\leq\infty$, and with axis vector $e(x)$ depending on $x$. An open set $G$ satisfies the strong cone condition if there exists a covering of the closure $\bar G$ by open sets $G_k$ such that for any $x\in\bar G\cap G_k$ the cone $x+V(e(x),H)$ is contained in $G$ (the openings of these cones may depend on $k$). In connection with integral representations of functions and imbedding theorems, anisotropic generalizations of the cone conditions have been considered, for example the weak and strong $l$-horn conditions, the cube condition, etc.
CommonCrawl
Romain Pétrides (Paris Diderot University) will speak in the geometry seminar on Tuesday 19 June, at 1.30pm in the Salle de Profs. Romain's title is Min-Max construction for free boundary minimal disks and his abstract is below. We will discuss the existence of minimal disks in a Riemannian manifold with boundary lying on a specified embedded submanifold, meeting the submanifold orthogonally along the boundary. A general existence result has been obtained by A. Fraser. Her construction was inspired by the Sacks-Uhlenbeck construction of minimal $2$-spheres: the existence is obtained by a limit procedure for a perturbed energy functional whose critical points are called $\alpha$-harmonic maps. We will explain how it is possible to adapt ideas of Colding-Minicozzi. These ideas go back to the replacement method of Birkhoff for the existence of geodesics. This approach gives general energy identities that include bubbles. This is joint work with P. Laurain.
CommonCrawl
Using julia -L startupfile.jl, rather than machinefiles, for starting workers.

These are some terms that get thrown around a lot by julia programmers. This is a brief writeup of a few of them.

This is a blog post about dispatch. Mostly single dispatch, though it trivially generalises to multiple dispatch, because julia is a multiple dispatch language. This post starts simple, and becomes complex. The last of which is kinda evil.

Today we are going to look at loading large datasets in an asynchronous and distributed fashion. In a lot of circumstances it is best to work with such datasets in an entirely distributed fashion, but for this demonstration we will be assuming that that is not possible, because you need to Channel it into some serial process. But it doesn't have to be the case. Anyway, we use this to further introduce Channels and RemoteChannels. I have blogged about Channels before; you may wish to skim that first. That article focused on single producer, single consumer. This post will focus on multiple producers, single consumer (though you'll probably be able to work out multiple consumers from there, it is pretty symmetrical).

If I were more musically talented I would be writing a song ♫ Arguments that are destructured, and operator characters combine-ed; Loop binding changes and convert redefine-ed… ♫ no, none of that, please stop. Technically speaking these are a few of my Favourite Things that are in julia 0.7-alpha. But since 1.0 is going to be 0.7 with deprecations removed, we can look at it as a 1.0 list. Many people are getting excited about big changes like Pkg3, named tuples, field access overloading, lazy broadcasting, or the parallel task runtime (which isn't in 0.7-alpha, but I am hopeful for 1.0). I am excited about them too, but I think they're going to get all the attention they need. (If not then they deserve a post of their own each; I'm not going to try and squeeze them into this one.) Here are some of the smaller changes I am excited about.

I've been wanting to do a JuMP blog post for a while. JuMP is a Julia mathematical programming library. It is, to an extent, a DSL for describing constrained optimisation problems. A while ago, a friend came to me who was looking to get "buff"; what he wanted to do was maximise his protein intake, while maintaining a generally healthy diet. He wanted to know what foods he should be eating, to devise a diet. If one thinks about this, it is actually a Linear Programming problem – constrained linear optimisation. The variables are how much of each food to eat, and the constraints are around making sure you have enough (but not too much) of all the essential vitamins and minerals. Note: this is a bit of fun; in absolutely no way do I recommend using the diets the code I am about to show off generates. I am in no way qualified to be giving dietary or medical advice, etc. But this is a great way to play around with optimisation.

A shortish post about the various string types in Julia 0.6 and its packages. This post covers Base.String, Base.SubString, WeakRefStrings.jl, InternedStrings.jl, ShortStrings.jl and Strs.jl; and also mentions StringEncodings.jl. Thanks to Scott P Jones, who helped write the section on his Strs.jl package.

This is just a quick post to show off DataDeps.jl. DataDeps.jl is the long-discussed BinDeps for data. At its heart it is a tool for reproducible data science.
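As a minimal sketch of what this looks like in use (the dataset name, URL, checksum and post-fetch step below are placeholders for illustration, not taken from the post):

```julia
using DataDeps

# Register the dependency once, e.g. at the top of a script or in a package's __init__.
register(DataDep(
    "MyCorpus",                                 # name the data is referred to by
    "A demo corpus; downloaded on first use.",  # message shown to the user before fetching
    "https://example.com/mycorpus.zip",         # remote location (placeholder)
    "0000000000000000000000000000000000000000000000000000000000000000";  # sha256 (placeholder)
    post_fetch_method = unpack,                 # e.g. unzip the archive after download
))

# Resolve to a local path; the first use triggers download and verification.
corpus_dir = datadep"MyCorpus"
readdir(corpus_dir)
```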
It means anyone trying to run your code later, in a different environment, isn't faffing around trying to work out where to download the data from and how to connect it to your scripts.

I wished to do some machine learning for binary classification. Binary classification is perhaps the most basic of all supervised learning problems. Unsurprisingly, julia has many libraries for it. Today we are looking at: LIBLINEAR (linear SVMs), LIBSVM (Kernel SVM), XGBoost (Extreme Gradient Boosting), DecisionTrees (RandomForests), Flux (neural networks), TensorFlow (also neural networks). In this post we are only concentrating on their ability to be used for binary classification. Most (all) of these do other things as well. We'll also not really be going into exploring all their options (e.g. different types of kernels). Furthermore, I'm not rigorously tuning the hyperparameters, so this can't be considered a fair test for performance. I'm also not performing preprocessing (e.g. many classifiers like it if you standardise your features to zero mean, unit variance). You can look at this post more as talking about what code for each package looks like, and roughly how long it takes and how well it does out of the box. It's more of a showcase of what packages exist. For TensorFlow and Flux, you could also treat this as a bit of a demo of how to use them to define binary classifiers, since they don't do it out of the box.

Julia has 3 kinds of parallelism. The well known, safe, slowish and easyish distributed parallelism, via pmap, @spawn and @remotecall. The wellish known, very safe, very easy, not-actually-parallelism, asynchronous parallelism via @async. And the more obscure, less documented, experimental, really unsafe, shared memory parallelism via @threads. It is the last we are going to talk about today. I'm not sure if I can actually teach someone how to write threaded code, let alone efficient threaded code. But this is me giving it a shot. The example here is going to be fairly complex. For a much simpler example of use, on a problem that is more easily parallelizable, see my recent stackoverflow post on parallelizing sorting.

I wanted to talk about using Coroutines for lazy sequences in julia, because I am rewriting CorpusLoaders.jl to do so in a nondeprecated way. This basically corresponds to C# and Python's yield return statements (many other languages also have this, but I think those are the most well known). The goal of using lazy sequences is to be able to iterate through something without having to load it all into memory, since you are only going to be processing it a single element at a time. Potentially, for some kind of moving average or for acausal language modelling, a single window of elements at a time. Point is, at no point do I ever want to load all 20Gb of wikipedia into my program, nor all 100Gb of Amazon product reviews. And I especially do not want to load $\infty$ bytes of every prime number.

If one wants to have full control over the worker processes, the method to use is addprocs, together with the -L startupfile.jl command-line argument when you start julia. See the documentation for addprocs. The simplest way to add worker processes to julia is to invoke it with julia -p 4. The -p 4 argument says start 4 worker processes on the local machine. For more control, one uses julia --machinefile ~/machines, where ~/machines is a file listing the hosts. The machinefile is often just a list of hostnames/IP-addresses, but sometimes is more detailed.
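A rough sketch of the addprocs route (hostnames, paths and worker counts below are invented for illustration; the keyword arguments are the ones documented for addprocs):

```julia
using Distributed   # addprocs lives in Base in julia 0.6, and in the Distributed stdlib from 0.7 on

# Local workers, each sourcing a custom startup file:
addprocs(2; exeflags = "-L startupfile.jl")

# Remote workers over SSH, with explicit control over binary, directory and flags:
addprocs(
    [("user@host1", 4), "host2"];       # 4 workers on host1, 1 (the default) on host2
    exename  = "/usr/local/bin/julia",  # where julia lives on the remote machines
    dir      = "/home/user/project",    # working directory for the workers
    exeflags = "-L startupfile.jl",     # extra flags passed to each worker
    tunnel   = true,                    # connect back through an SSH tunnel
)
```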
With a machinefile, Julia will connect to each host and start a number of workers on each equal to the number of cores. Even the most detailed machinefile doesn't give full control; for example, you cannot specify the topology, or the location of the julia executable.

TensorFlow's SVD is significantly less accurate than LAPACK's (i.e. julia's and numpy/SciPy's backing library for linear algebra). But it is still incredibly accurate, so probably don't panic. If your matrices have very large ($>10^6$) values, then the accuracy difference might be relevant for you (but probably isn't). However, both LAPACK and TensorFlow are not great then – LAPACK is still much better.

Anyone who has been stalking me may know that I have been making a fairly significant number of PRs against TensorFlow.jl. One thing I am particularly keen on is making the interface really Julian, taking advantage of the ability to overload julia's great syntax for matrix indexing and operations. I will make another post going into those enhancements, and how great julia's ability to overload things is, sometime in the future, probably after #209 is merged. This post is not directly about those enhancements, but rather about an emergent feature I noticed today. I wrote some code to run in base julia, but just by changing the types to Tensors it now runs inside TensorFlow, and on my GPU (potentially).

This is a demonstration of using JuliaML and TensorFlow to train an LSTM network. It is based on Aymeric Damien's LSTM tutorial in Python. All the explanations are my own, but the code is generally similar in intent. There are also some differences in terms of network shape. The task is to use an LSTM to classify MNIST digits; that is, image recognition. The normal way to solve such problems is a ConvNet. This is not a sensible use of an LSTM; after all, it is not a time series task. The task is made into a time series task by the images arriving one row at a time, and the network is asked to output the class at the end, after seeing the 28th row. So the LSTM network must remember the 27 prior rows. This is a toy problem to demonstrate that it can.

JuliaPro is JuliaComputing's prepackaged bundle of julia, with the Juno/Atom IDE and a bunch of packages. The short of it is: there is no reason not to install julia this way on a Mac/Windows desktop – it is more convenient and faster to set up, but it is nothing revolutionary.

Julia is a great language for scientific and technical programming. It is more or less all I use in my research code these days. It gets a lot of attention for being great for scientific programming because of its great matrix syntax, high speed and optimisability, foreign function interfaces, range of scientific libraries, etc. It has all that, sure. (Though it is still in alpha, so many things are a bit broken at times.) One thing that is under-mentioned is how great it is as a "glue" language.

This is a second shot at expressing Path Schema as algebraic objects. See my first attempt. The definitions should be equivalent, and any place they are not indicates a deficiency in one of the definitions. This should be a bit more elegant than before. It is also a bit more extensive. Note that some of the operations are now defined differently, and others are what one should be focussing on instead; this is to use the free monoid convention. In general a path can be described as a hierarchical index onto a directed multigraph, noting that "flat" sets, trees, and directed graphs are all particular types of directed multigraphs.
This post comes from a longish discussion with Fengyang Wang (@TotalVerb) on the JuliaLang Gitter. It's pretty cool stuff. It is defined here independent of the object (filesystem, document, etc.) being indexed. The precise implementation of the algebraic structure differs depending on the Path types in question, e.g. Filesystem vs URL vs XPATH.

Note: I have written a much improved version of this; see the new post. In general a path can be described as a hierarchical index. It is defined here independent of the object (filesystem, document, etc.) being indexed. The precise implementation of the algebraic structure differs depending on the Path types in question, e.g. Filesystem vs URL vs XPATH.
CommonCrawl
Results concerning the existence of solutions of multipoint boundary value problems are given. The results are based on a topological transversality method and rely on a priori bounds on solutions. Applications are made to conjugate-type and focal-type boundary value problems; the third-order, 3-point boundary value problem $y''' = f(x,y,y',y'')$, $y(x_1) = r_1$, $y(x_2) = r_2$, $y(x_3) = r_3$, where $x_1 < x_2 < x_3$, is discussed. Eloe, Paul W. and Henderson, Johnny, "Nonlinear boundary value problems and a priori bounds on solutions" (1984). Mathematics Faculty Publications. 97.
CommonCrawl
In many introductory courses to quantum mechanics, we see $\delta$-functions all over the place. For example, when expressing an arbitrary wave function $\psi(x)$ in the basis of eigenfunctions of the position operator $\hat x$ as $$ \psi(x) = \int\mathrm d\xi\, \delta(x-\xi)\, \psi(\xi). $$ In bra-ket notation this corresponds to $$ \left|\psi\right\rangle = \int\mathrm d\xi\,\left|\,\xi\,\right\rangle\!\left\langle\,\xi\,\middle|\,\psi\,\right\rangle, $$ where $\left|\,\xi\,\right\rangle$ is the state corresponding to the wavefunction $x\mapsto\delta(x-\xi)$. Now the $\delta$-function is really not a function but a distribution, which is defined by how it acts on test functions, i.e. $\delta[\varphi] = \varphi(0)$. Do you know of an introductory text on quantum mechanics that stresses this point and uses the language of distributions properly, avoiding any functions with seemingly infinite peaks? Perhaps you are starting at the wrong end. Your concern seems to be related in the first place to the totally misleading notation of integrals in quantum mechanics, and this is more related to the spectral theorem than to distributions themselves. Distributions only appear in quantum mechanics when certain operators have empty point spectrum in the usual Hilbert space. Then, you need to consider a bigger underlying space. Once you have the mathematical background and feel totally comfortable with the integrals of quantum mechanics (which are nothing more than the spectral theorem), you can move on to distributions in quantum mechanics, which are developed in the context of Gelfand triplets. An excellent reference is "The role of the rigged Hilbert space in Quantum Mechanics" by Rafael de la Madrid. It is freely available on the web. Brian Hall's "Quantum Theory for Mathematicians" is a recent nice book that presents the basics of QM with mathematical rigor, as suggested by the title. It covers a fair amount of topics, and seems suitable for an undergraduate level. The short book by Mackey, "Mathematical Foundations of Quantum Mechanics", is also a very nice book on the axiomatization of QM, but may be difficult for an undergraduate student.
CommonCrawl
I am a bit confused about the definition of a weak del Pezzo surface. Can someone give an example of a weak del Pezzo surface that is not a del Pezzo surface? A surface $S$ is del Pezzo if $-K_S$ is ample. It is weak del Pezzo if $-K_S$ is nef and big. To get examples of (true) weak del Pezzos, remember that a del Pezzo of degree $d$ is the blowup of $\mathbf P^2$ in $9-d$ general points. If $S$ is the blowup of $\mathbf P^2$ in points $p_1,\ldots,p_r$, then $-K_S=3H-E_1-\cdots-E_r$ (in the obvious notation). So the trick is to choose the points so that $-K_S$ is nef and big, but has degree $0$ on some curve. For example, choose 6 points in $\mathbf P^2$ such that 3 of them lie on a line. Then on the blowup, $-K_S \cdot L=0$ where $L$ is the proper transform of the line. However, one can verify that $-K_S$ is still basepoint-free, hence nef, and has a 4-dimensional space of sections, giving a birational map onto the image of $S$ in $\mathbf P^3$, hence is big. The simplest example is the second Hirzebruch surface.
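To spell out the key step in the 6-point example, using the standard intersection numbers $H^2=1$, $E_i\cdot E_j=-\delta_{ij}$, $H\cdot E_i=0$ on the blowup and labelling the three collinear points $p_1,p_2,p_3$:
$$ -K_S \cdot L = (3H - E_1 - \cdots - E_6)\cdot(H - E_1 - E_2 - E_3) = 3H^2 + E_1^2 + E_2^2 + E_3^2 = 3 - 3 = 0, $$
$$ (-K_S)^2 = 9H^2 + \sum_{i=1}^{6} E_i^2 = 9 - 6 = 3 > 0, $$
so $-K_S$ has degree $0$ on $L$ yet is still big (and it is nef by the basepoint-freeness argument above) — exactly the weak del Pezzo, non–del Pezzo situation.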
CommonCrawl
Abdallah Ben Abdallah, Farhat Shel. Exponential stability of a general network of 1-d thermoelastic rods. Mathematical Control & Related Fields, 2012, 2(1): 1-16. doi: 10.3934/mcrf.2012.2.1. Kangsheng Liu, Xu Liu, Bopeng Rao. Eventual regularity of a wave equation with boundary dissipation. Mathematical Control & Related Fields, 2012, 2(1): 17-28. doi: 10.3934/mcrf.2012.2.17. Amol Sasane. Extension of the $\nu$-metric for stabilizable plants over $H^\infty$. Mathematical Control & Related Fields, 2012, 2(1): 29-44. doi: 10.3934/mcrf.2012.2.29. Louis Tebou. Energy decay estimates for some weakly coupled Euler-Bernoulli and wave equations with indirect damping mechanisms. Mathematical Control & Related Fields, 2012, 2(1): 45-60. doi: 10.3934/mcrf.2012.2.45. Huaiqiang Yu, Bin Liu. Pontryagin's principle for local solutions of optimal control governed by the 2D Navier-Stokes equations with mixed control-state constraints. Mathematical Control & Related Fields, 2012, 2(1): 61-80. doi: 10.3934/mcrf.2012.2.61. Jie Yu, Qing Zhang. Optimal trend-following trading rules under a three-state regime switching model. Mathematical Control & Related Fields, 2012, 2(1): 81-100. doi: 10.3934/mcrf.2012.2.81.
CommonCrawl
"... which induces an exact sequence in homotopy $$\ldots\pi_2(S^1)\to\pi_2(S^3)\to\pi_2(S^2)\to\pi_1(S^1)\to\pi_1(S^3)\to\pi_1(S^1)\ldots$$ from which the higher homotopy groups of spheres could be computed ..." Now I know exact sequences, some algebraic topology and how continuous maps induce maps on homology and how Mayer-Vietoris extends this (under certain conditions) to a long exact sequence, and how the Hurewicz theorem gets you from homology to homotopy, but the machinery behind the above statement seems to be some other result/theorem/piece of theory. A very crude guess is that $\pi_n$ is a functor which preserves exact sequences, and some kind of Mayer-Vietoris then gives the long exact sequence. If someone has a nice reference for this material, that would be nice. To motivate you guys, here is a pretty picture. Note, as $p$ is a Serre fibration, the homotopy type of $F$ is independent of the choice of $b_0 \in B$. with a proof of the fact that $\pi_n(E, F, x_0) \cong \pi_n(B, b_0)$. Finally, the path-lifting property, together with the fact that $B$ is path-connected, shows that the map $\pi_0(F, x_0) \to \pi_0(E, x_0)$ is surjective. Another reference is May's A Concise Course in Algebraic Topology, page $66$, which takes a more functorial approach; in particular, it uses the loop space functor. Not the answer you're looking for? Browse other questions tagged reference-request algebraic-topology homotopy-theory exact-sequence hopf-fibration or ask your own question. Long exact sequence in homology: naturality=functoriality? Do long exact Mayer Vietoris sequences decompose into short exact sequences $0\to H_n(A\cap B)\to H_n(A)\oplus H_n(B)\to H_n(X)\to 0$? Image of exact sequence still exact? What are the maps in the long exact sequence of homotopy groups for the free loop space fibration?
CommonCrawl
Abstract: Wreath products of finite groups have permutation representations that are constructed from the permutation representations of their constituents. One can envision these in a metaphoric sense in which a rope is made from a bundle of threads. In this way, subgroups and quotients are easily visualized. The general idea is applied to the finite subgroups of the special unitary group of $(2\times 2)$-matrices. Amusing diagrams are developed that describe the unit quaternions, the binary tetrahedral, octahedral, and icosahedral group as well as the dicyclic groups. In all cases, the quotients as subgroups of the permutation group are readily apparent. These permutation representations lead to injective homomorphisms into wreath products.
CommonCrawl
Example 1: Any collection $\mathbf A$ of binary relations on a set $X$ such that $\mathbf A$ is closed under union, intersection and composition. Example 1: Any collection $\mathbf A$ of binary relations on a set $X$ such that $\mathbf A$ is closed under union, intersection and composition. - Andreka 1991 AU proves that these examples generate the variety DLOS. + H. Andreka[(Andreka1991)] proves that these examples generate the variety DLOS.
CommonCrawl
BellInequalityMaxQubits: Approximates the optimal value of a Bell inequality in qubit (i.e., 2-dimensional quantum) settings. NonlocalGameValue: Computes the maximum value of a nonlocal game in a classical, quantum, or no-signalling setting. BellInequalityMax: Bug fix when computing the classical value of a Bell inequality using measurements that have values other than $0, 1, 2, \ldots, d-1$. KrausOperators: If the zero map is provided as input, this function now returns a single zero matrix Kraus operator, rather than an empty cell containing no Kraus operators. XORGameValue: Bug fix when computing the value of some XOR games with complex entries. This page was last modified on 13 April 2015, at 18:41.
CommonCrawl
tl;dr We benchmark dask on an out-of-core dot product. We also compare and motivate the use of an optimized BLAS. Disclaimer: This post is on experimental, buggy code. This is not ready for public use. We now give performance numbers on out-of-core matrix-matrix multiplication. Dense matrix-matrix multiplication is compute-bound, not I/O-bound. We spend most of our time doing arithmetic and relatively little time shuffling data around. As a result we may be able to read large data from disk without performance loss. When multiplying two $n\times n$ matrices we read $n^2$ bytes but perform $n^3$ computations. There are $n$ computations to do per byte so, relatively speaking, I/O is cheap. We normally measure speed for single CPUs in Giga Floating Point Operations Per Second (GFLOPS). Let's look at how my laptop does on single-threaded in-memory matrix-matrix multiplication using NumPy. For matrices too large to fit in memory we compute the solution one part at a time, loading blocks from disk when necessary. We parallelize this with multiple threads. Our last post demonstrates how NumPy+Blaze+Dask automates this for us. We perform a simple numerical experiment, using HDF5 as our on-disk store. 18.9 GFLOPS, roughly 3 times faster than the in-memory solution. At first glance this is confusing - shouldn't we be slower coming from disk? Our speedup is due to our use of four cores in parallel. This is good: we don't experience much slowdown coming from disk. It's as if all of our hard drive just became memory. Reference BLAS is slow; it was written long ago. OpenBLAS is a modern implementation. I installed OpenBLAS with my system installer (apt-get) and then reconfigured and rebuilt numpy. OpenBLAS supports many cores. We'll show timings with one and with four threads. This is about four times faster than reference. If you're not already parallelizing in some other way (like with dask) then you should use a modern BLAS like OpenBLAS or MKL. Finally we run our on-disk experiment again, now with OpenBLAS. We do this both with OpenBLAS running with one thread and with many threads. We'll skip the code (it's identical to what's above) and give a comprehensive table of results below. Sadly the out-of-core solution doesn't improve much by using OpenBLAS. Actually, when both OpenBLAS and dask try to parallelize we lose performance. tl;dr When doing compute-intensive work, don't worry about using disk, just don't use two mechanisms of parallelism at the same time. Actually we can improve performance when an optimized BLAS isn't available. Also, thanks to Wesley Emeneker for finding where we were leaking memory, making results like these possible.
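The benchmark code itself is skipped in the post, so here is a minimal sketch of the kind of out-of-core multiply being described, written against the modern dask.array API rather than the Blaze front-end mentioned above; file names, sizes and chunk shapes are illustrative only:

    import h5py
    import numpy as np
    import dask.array as da

    n, chunks = 8000, (1000, 1000)

    # Write two matrices to disk block-by-block so a full matrix never sits in RAM.
    with h5py.File("matrices.hdf5", "w") as f:
        A = f.create_dataset("/A", shape=(n, n), dtype="f8", chunks=chunks)
        B = f.create_dataset("/B", shape=(n, n), dtype="f8", chunks=chunks)
        for i in range(0, n, chunks[0]):
            A[i:i + chunks[0], :] = np.random.rand(chunks[0], n)
            B[i:i + chunks[0], :] = np.random.rand(chunks[0], n)

    # Lazy blocked matrix multiply; blocks stream through memory on compute.
    with h5py.File("matrices.hdf5", "r") as f:
        a = da.from_array(f["/A"], chunks=chunks)
        b = da.from_array(f["/B"], chunks=chunks)
        da.to_hdf5("result.hdf5", "/C", a.dot(b))

Timing the final line and dividing $2n^3$ by the elapsed seconds gives the GFLOPS figures quoted in the post.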
CommonCrawl
Abstract: We describe a method that is used to reduce, significantly, the number of CP-violating complex phases in the Yukawa parameters. With this Reduction of Complex Phases (RCP) we obtain only one CP-violating complex phase in the case where the neutrinos have an (effective) $3\times 3$ Majorana mass matrix. For the See-Saw extension of the SM with three right-handed neutrinos, and in connection with CP violation in leptogenesis, we reduce the usual 6 complex phases to only 2.
CommonCrawl
In the link above it is discussed that the interesting feature of Jeffreys prior is that, when reparameterizing the model, the resulting posterior distribution gives posterior probabilities that obey the restrictions imposed by the transformation. Say, as discussed there, when moving from the success probability $\theta$ in the Beta-Bernoulli example to odds $\psi=\theta/(1-\theta)$, it should be the case that the posterior satisfies $P(1/3\leq\theta\leq 2/3\mid X=x)=P(1/2\leq\psi\leq 2\mid X=x)$. I wanted to create a numerical example of invariance of Jeffreys prior for transforming $\theta$ to odds $\psi$, and, more interestingly, the lack thereof for other priors (say, Haldane, uniform, or arbitrary ones). Now, if the posterior for the success probability is Beta (for any Beta prior, not only Jeffreys), the posterior of the odds follows a Beta distribution of the second kind (see Wikipedia) with the same parameters. Then, as highlighted in the numerical example below, it is not too surprising (to me, at least) that there is invariance for any choice of Beta prior (play around with alpha0_U and beta0_U), not only Jeffreys, cf. the output of the program. Do you know a (preferably simple) example in which we do get lack of invariance? lead to the same posterior for $\psi$. This will indeed always occur (caveat: as long as the transformation is such that a distribution over $\psi$ is determined by a distribution over $\theta$). result in the same prior distribution for $\psi$. If they result in the same prior, they will indeed result in the same posterior, too (as you have verified for a couple of cases). As mentioned in @NeilG's answer, if your Method For Deciding The Prior is 'set uniform prior for the parameter', you will not get the same prior in the probability/odds case, as the uniform prior for $\theta$ over $[0,1]$ is not uniform for $\psi$ over $[0,\infty)$. Instead, if your Method For Deciding The Prior is 'use Jeffreys prior for the parameter', it will not matter whether you use it for $\theta$ and convert into the $\psi$-parametrization, or use it for $\psi$ directly. This is the claimed invariance. It looks like you're verifying that the likelihoods induced by the data are unaffected by parametrization, which has nothing to do with the prior. If your way of choosing priors is to, e.g., "choose the uniform prior", then what is uniform under one parametrization (say Beta, i.e. Beta(1,1)) is not uniform under another, say, BetaPrime(1,1) (which is skewed) — it is BetaPrime(1,-1) that would be uniform, if such a thing existed. The Jeffreys prior is the only "way to choose priors" that is invariant under reparametrization. So it is less assumptive than any other way of choosing priors.
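The program referred to above is not reproduced here, but the check it describes can be re-created in a few lines with SciPy; the data (n, x) and the prior parameters are made up, and alpha0_U / beta0_U simply mirror the names used in the question:

    from scipy import stats

    n, x = 10, 7                     # made-up data: 7 successes in 10 trials
    alpha0_U, beta0_U = 0.5, 0.5     # Jeffreys prior; try 1, 1 (uniform) etc. too

    a_post, b_post = alpha0_U + x, beta0_U + n - x
    theta_post = stats.beta(a_post, b_post)
    psi_post = stats.betaprime(a_post, b_post)   # law of theta / (1 - theta)

    print(theta_post.cdf(2/3) - theta_post.cdf(1/3))
    print(psi_post.cdf(2) - psi_post.cdf(1/2))   # same number, for any Beta prior

As the answers explain, agreement here reflects consistency of the transformation, not a special property of the Jeffreys prior.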
CommonCrawl
Distributed Denial-of-Service (DDoS) attacks continue to be a major threat in the Internet today. DDoS attacks overwhelm target services with requests or other traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate the service in multiple physical locations or sites. If all sites announce a common IP address, BGP will associate users around the Internet with a nearby site, defining the \emph{catchment} of that site. Anycast addresses DDoS both by increasing capacity to the aggregate of many sites, and allowing each catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used for commercial CDNs and essential infrastructure such as DNS, but there is little evaluation of anycast under stress. This paper provides the \emph{first} evaluation of several anycast services under stress with public data. Our subject is the Internet's Root Domain Name Service, made up of 13 independently designed services (``letters'', 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at $100\times$ normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to these events. We see how different anycast deployments respond to stress, and identify two policies: sites may \emph{absorb} attack traffic, containing the damage but reducing service to some users, or they may \emph{withdraw} routes to shift both good and bad traffic to other sites. We study how these deployment policies result in different levels of service to different users. We also show evidence of \emph{collateral damage} on other services located near the attacks.
CommonCrawl
The rings considered in this article are commutative with identity. This article is motivated by the work on comaximal graphs of rings. In this article, with any ring $R$, we associate an undirected graph denoted by $G(R)$, whose vertex set is the set of all elements of $R$ and distinct vertices $x,y$ are joined by an edge in $G(R)$ if and only if $Rx\cap Ry = Rxy$. In Section 2 of this article, we classify rings $R$ such that $G(R)$ is complete and we also consider the problem of determining rings $R$ such that $\chi(G(R)) = \omega(G(R))< \infty$. In Section 3 of this article, we classify rings $R$ such that $G(R)$ is planar.
CommonCrawl
Dario Pighin is a PhD Student at Universidad Autónoma de Madrid (UAM). He earned both the Bachelor's Degree and the Master's Degree in Mathematics at the University of Rome Tor Vergata (Rome 2). Currently, he is studying for his PhD in Control Theory, under the joint supervision of Professor Enrique Zuazua. Master's Degree in Pure and Applied Mathematics (2014-2016), University of Rome Tor Vergata, Rome, Italy. Bachelor's Degree in Mathematics (2011-2014), University of Rome Tor Vergata, Rome, Italy. The dissertation is devoted to the study of the so-called "Turnpike Property" in Optimal Control Theory, i.e. we show the convergence of Non-Stationary Optimal Control Problems to the corresponding Stationary one as the time horizon $T\to+\infty$. Actually we prove an exponential convergence far away from the initial point and the terminal point. We analyse both the Finite Dimensional and Infinite Dimensional Linear Quadratic Case with some controllability and observability assumptions. Furthermore, we extend our analysis to a Finite Dimensional NonLinear Convex Case, with controllability and observability assumptions as well. Pighin, D., Zuazua, E. Controllability under positivity constraints of semilinear heat equations. Mathematical Control and Related Fields, Volume 8 (no. 3&4), DOI: 10.3934/mcrf.2018041. N. Sakamoto, D. Pighin, E. Zuazua. The turnpike property in nonlinear optimal control – A geometric approach.
CommonCrawl
Posted on 27/10/2018, in Machine Learning. This note was first taken when I took the machine learning course on Coursera. Lectures in this week: Lecture 13, Lecture 14. This note by Alex Holehouse is also very useful (it needs to be used alongside my note). Go back to Week 7. This is the first unsupervised learning algorithm. There is no label associated with it. There is X but not y. It is the most popular and widely used algorithm. We also talk about how to avoid local optima. Randomly choose $K$ training examples as the initial cluster centroids. Elbow method: check the cost function J with respect to the number of clusters, and look for the "elbow", the point where the graph changes direction sharply. The elbow method isn't used very often! If $k=5$ has a bigger J than $k=3$, then k-means got stuck in a bad local minimum. You should try re-running k-means with multiple random initializations. Choose the number of clusters depending on the later/downstream purpose (choosing T-shirt sizes, 3 or 5 for example). Speeds up algorithms + reduces the space used by the data. We want to reduce the dimension to 2 or 3 so that we can visualize the data. For example, we want to find a line to translate 2D to 1D: the projection distances of all data points onto this line should be small. The direction vector of this line is what we want to find. The photo below compares PCA (right) and linear regression (left). LR has a y to compare against, while PCA has no y; every x plays an equal role. How to use PCA yourself + how to reduce the dimension of your data. In probability theory and statistics, a covariance matrix is a matrix whose element in the $i, j$ position is the covariance between the i-th and j-th elements of a random vector. Choose k starting from 1 and take the first one for which the fraction is less than 0.001. The numerator is small: we lose very little information in the dimensionality reduction, so when we decompress we regenerate (almost) the same data. Always use the normal approach (without PCA) to solve a problem first; only if that does not work should you think about adding PCA. The K-means algorithm is a method to automatically cluster similar data examples together. Check the guide in ex7.pdf, page 7. In this exercise, you will use the K-means algorithm to select the 16 colors that will be used to represent the compressed image. Concretely, you will treat every pixel in the original image as a data example and use the K-means algorithm to find the 16 colors that best group (cluster) the pixels in the 3-dimensional RGB space. Once you have computed the cluster centroids on the image, you will then use the 16 colors to replace the pixels in the original image. First, you compute the covariance matrix of the data. Then, you use Octave/MATLAB's SVD function to compute the eigenvectors $U_1, U_2,\ldots,U_n$. Before using PCA, it is important to first normalize the data by subtracting the mean value of each feature from the dataset, and scaling each dimension so that they are in the same range. In this part of the exercise, you will run PCA on face images to see how it can be used in practice for dimension reduction. For example, if you were training a neural network to perform person recognition (given a face image, predict the identity of the person), you can use the dimension-reduced input of only 100 dimensions instead of the original pixels. Go to Week 9.
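The exercise itself is in Octave/MATLAB, but the normalize / covariance / SVD / project / recover pipeline described above is short enough to sketch in NumPy (the array sizes and the choice of k below are arbitrary placeholders, not the exercise data):

    import numpy as np

    def pca(X):
        # Normalize, form the covariance matrix, then take its SVD.
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        X_norm = (X - mu) / sigma
        Sigma = X_norm.T @ X_norm / X.shape[0]
        U, S, _ = np.linalg.svd(Sigma)           # columns of U are u_1, ..., u_n
        return U, S, X_norm

    X = np.random.rand(50, 10)                   # placeholder for the exercise data
    U, S, X_norm = pca(X)
    k = 3
    Z = X_norm @ U[:, :k]                        # project onto the top-k directions
    X_rec = Z @ U[:, :k].T                       # approximate reconstruction
    lost = 1 - S[:k].sum() / S.sum()             # the fraction used to choose k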
CommonCrawl
John has 77 boxes each having dimensions 3x3x1. Is it possible for John to build one big box with dimensions 7x9x11? I'm leaning towards no, but I would like others' opinions. The answer is no; John can't even fill up the topmost $7\times 11\times 1$ slice of the $7\times 11\times 9$ box. Consider just the top $7\times 11$ face of this box; look just at this face and ignore the rest of the box. A solution to the problem would fill up this $7\times 11$ rectangle with large $3\times3$ rectangles and small $3\times 1$ rectangles. But $7\times 11$ is not a multiple of $3$. The volume of the big box is $V_B = 7\cdot 9 \cdot 11 = 693$, the total volume of the small boxes is $V_b = 77 \cdot 3 \cdot 3 \cdot 1 = 693$. This means the volume of the small boxes is sufficient and we need to use all small boxes. We have a base field, e.g. $7\times 9$, and need to drop all 77 small boxes over it. We win if we have dropped all $77$ boxes without losing. This is a search space of $77\times 7 \times 9 \times 3 = 14553$ drop configurations. Not that much for a machine. We could avoid the drop simulation and instead have $c_z$ as another choice. This would enlarge the search space to $77\times 7 \times 9 \times 11 \times 3 = 160083$ configurations. In both cases we need to check that boxes do not intersect. This should be sufficient to code a solver which visits all configurations of the search space (brute force) and will answer the question by either listing feasible configurations or reporting that there is no solution. Note: I submitted this before MJD published a counter-argument.
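Both answers come down to small arithmetic facts, which can be sanity-checked in two lines (this adds nothing beyond the numbers already quoted):

    # Volumes agree, so volume alone does not rule the packing out ...
    print(7 * 9 * 11, 77 * 3 * 3 * 1)   # 693 693
    # ... but every 3x3 or 3x1 piece covers a multiple of 3 cells of the top
    # 7x11 face, whose area is not a multiple of 3:
    print((7 * 11) % 3)                 # 2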
CommonCrawl
How do we incorporate new information into a Dirichlet prior distribution? My problem is this: I have an ensemble of predictors, each of which produces a distribution over a set of classes. What I would like to do is to first have a non-informative prior about what this label distribution looks like, and then update that prior with the prediction of each member of the ensemble. So I thought of using a non-informative Dirichlet prior, which I then update with each sample distribution that comes in as a prediction. My question is: is this approach valid, and if yes, how would I update my prior so that it becomes more defined as more samples accumulate? A Dirichlet prior is an appropriate prior, and is the conjugate prior to a multinomial distribution. However, it seems a bit tricky to apply this to the output of a multinomial logistic regression, since such a regression has a softmax as the output, not a multinomial distribution. However, what we can do is sample from a multinomial whose probabilities are given by the softmax. So, we can forward propagate mini-batches of data examples, draws from the standard normal distribution, and back-propagate through the network. This is fairly standard and widely used, e.g. the Kingma VAE paper above. A slight nuance is that we are drawing discrete values from a multinomial distribution, but the VAE paper only handles the case of continuous real outputs. However, there is a recent paper, the Gumbel trick, https://casmls.github.io/general/2017/02/01/GumbelSoftmax.html , i.e. https://arxiv.org/pdf/1611.01144v1.pdf and https://arxiv.org/abs/1611.00712 , which allows draws from discrete multinomial distributions. The $\alpha_k$ here are prior probabilities for the various categories, which you can tweak to push your initial distribution towards what you think the distribution might look like initially. So: yes :). It is. Using something like multi-task learning, e.g. http://www.cs.cornell.edu/~caruana/mlj97.pdf and https://en.wikipedia.org/wiki/Multi-task_learning . Except multi-task learning has a single network and multiple heads. We will have multiple networks and a single head. The 'head' comprises an extra layer, which handles 'mixing' between the nets. Note that you'll need a non-linearity between your 'learners' and the 'mixing' layer, e.g. ReLU or tanh. In this last network, each learner learns to fix any issues caused by the network so far, rather than creating its own relatively independent prediction. Such an approach can work quite well, cf. Boosting, etc.
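For concreteness, here is one minimal way to do the conjugate update being asked about: keep a Dirichlet pseudo-count vector and fold in each ensemble member's predicted distribution as fractional counts. Treating predicted probabilities as fractional counts is a modelling choice on my part, not something fixed by the question; K and the prediction vectors below are invented:

    import numpy as np

    K = 4
    alpha = np.ones(K)                        # non-informative Dirichlet(1, ..., 1)

    predictions = [                           # invented ensemble outputs
        np.array([0.7, 0.1, 0.1, 0.1]),
        np.array([0.6, 0.2, 0.1, 0.1]),
        np.array([0.1, 0.1, 0.2, 0.6]),
    ]

    for p in predictions:
        alpha = alpha + p                     # conjugate update with fractional counts

    print(alpha, alpha / alpha.sum())         # posterior pseudo-counts and mean

As more predictions accumulate, the pseudo-counts grow and the posterior concentrates, which is the "becomes more defined" behaviour the question asks for.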
CommonCrawl
Elections and statistics: the case of "United Russia", 2009-2012 (Apr 02 2012): The election statistics analysis does not confirm the assumption of correct ballot counting (a survey).
Elections and statistics: the case of "United Russia", 2009-2018 (Apr 02 2012; revised Jun 23 2018): The election statistics analysis does not confirm the assumption of correct ballot counting (a survey).
On stochastic stability of non-uniformly expanding interval maps (Jul 13 2011; revised Dec 17 2012): We study the expanding properties of random perturbations of regular interval maps satisfying the summability condition of exponent one. Under very general conditions on the interval maps and perturbation types, we prove strong stochastic stability.
Perfectoid Shimura varieties of abelian type (Jul 07 2015; revised Sep 12 2016): We prove that Shimura varieties of abelian type with infinite level at $p$ are perfectoid. As a corollary, the moduli spaces of polarized K3 surfaces with infinite level at $p$ are also perfectoid.
An analogue of Szego's limit theorem in free probability theory (Jun 06 2007; revised Aug 07 2007): In the paper, we discuss orthogonal polynomials in free probability theory. Especially, we prove an analogue of Szego's limit theorem in free probability theory.
Weak embedding theorem and a proof of cycle double cover of bridgeless graphs (Dec 11 2017): In this article, we give a positive answer to the cycle double cover conjecture. Those who are mainly interested in the proof of the conjecture can read only Sections 2 and 4.
A manual proof of the Four-colour Theorem (Oct 30 2017; revised Jul 06 2018): In this paper, we provide an easy proof of the Four-colour Theorem in a special case indeed.
W-Infinity and String Theory (Feb 20 1992; revised Feb 23 1992): We review some recent developments in the theory of $W_\infty$. We comment on its relevance to lower-dimensional string theory.
Decomposition Complexity (Dec 03 2010): We consider a problem of decomposition of a ternary function into a composition of binary ones from the viewpoint of communication complexity and algorithmic information theory as well as some applications to cellular automata.
Nonrationality of a generic cubic fourfold (Jan 23 2016; revised Jan 29 2016): An error in Section 4 invalidates all the main results of the paper.
CommonCrawl
Given that $ a\; $and $\;b $ are integers and $ a\;+a^2\;b^3 $ is odd, which one of the following statements is correct ? From the time the front of a train enters a platform, it takes 25 seconds for the back of the train to leave the platform, while travelling at a constant speed of 54 km/h. At the same speed, it takes 14 seconds to pass a man running at 9 km/h in the same direction as the train. What is the length of the train and that of the platform in meters, respectively? Which of the following functions describe the graph shown in the below figure? Answer : (C) If (i) and (ii) are true, then (iii) is true. For what value of k will $ F\left(z\right) $ satisfy the Cauchy-Riemann equations? A bar of uniform cross section and weighing 100 N is held horizontally using two massless and inextensible strings S1 and S2 as shown in the figure. For an Oldham coupling used between two shafts, which among the following statements are correct? I. Torsional load is transferred along shaft axis. II. A velocity ratio of 1:2 between shafts is obtained without using gears. III. Bending load is transferred transverse to shaft axis. IV. Rotation is transferred along shaft axis. For a two-dimensional incompressible flow field given by $ \overset\rightharpoonup u=A\left(x\widehat i-y\widehat j\right) $, where $ A>0 $ , which one of the following statements is FALSE? (A) It satisfies continuity equation. (B) It is unidirectional when $ x\rightarrow0 $ and $ y\rightarrow\infty $. (C) Its streamlines are given by $ x=y $. Answer : (C) Its streamlines are given by $ x=y $. Which one of the following statements is correct for a superheated vapour? (A) Its pressure is less than the saturation pressure at a given temperature. (B) Its temperature is less than the saturation temperature at a given pressure. (C) Its volume is less than the volume of the saturated vapour at a given temperature. (D) Its enthalpy is less than the enthalpy of the saturated vapour at a given pressure. Answer : (A) Its pressure is less than the saturation pressure at a given temperature.
CommonCrawl
Relationship between logical axioms and tautologies? "Logical axioms are formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense." This seems to imply that when you construct a deductive system you choose a "minimal set" of tautologies from the syntax of your formal language and you use them as axioms for your formal system. These tautologies will be needed to help you prove all the other tautologies. What about all the sentences that aren't tautologies? Can you deduce them? When it says that you can prove all other tautologies, does it mean that you need a rule of inference for that or are these trivially implied by the axioms? What about theorems, do they follow from tautologies and some rule of inference? I feel like I'm missing something, or it's not very clear what the article is saying. "tautologies [deleted: Logical axioms] are formulas that are satisfied by every assignment of truth values". "Usually one takes as logical axioms [deleted: at least some minimal] a suitable set of tautologies that is sufficient for proving all tautologies in the language". See List of Hilbert systems for a collection of axiom systems for propositional calculus. As you can see, there are many: classical, intuitionistic, with few basic connectives (one or two: the others defined from the basic ones) or with many. "Suitable" means: enough to prove the Completeness Theorem, i.e. to prove that all formulas of the calculus with the "designated" property according to the relevant semantics [i.e. all tautologies, in the case of classical propositional calculus] are derivable from the axioms with the rules of inference. We write $\vDash \mathcal A$ to denote the fact that formula $\mathcal A$ is a tautology. "does it mean that you need a rule of inference for that or are these trivially implied by the axioms?" "What about theorems, do they follow from tautologies and some rule of inference?" As said, we can have many different versions of the calculus, with different sets of axioms and rules. With axioms, we need at least one rule: usually Modus Ponens. But we may have a calculus without axioms, with rules only; see Natural Deduction. A derivation is a finite sequence of formulas where each formula is an axiom or derived from previous formulas in the sequence by way of the inference rule(s). We write $\vdash \mathcal A$ to denote the fact that formula $\mathcal A$ is derivable. "These tautologies [the axioms] will be needed to prove [i.e. to derive] all other tautologies." This is sometimes called semantical completeness: if $\vDash \mathcal A$, then $\vdash \mathcal A$. In addition, we can define the relation of derivability from assumptions: $\Gamma \vdash \mathcal A$, where $\Gamma$ is a set of formulas, called assumptions. A derivation from assumptions is a finite sequence of formulas where each formula is either an axiom or a formula in the set of assumptions, or it is derived from previous formulas in the sequence by way of the inference rule(s). In this case, we speak of strong completeness of the calculus: if $\Gamma \vDash \mathcal A$, then $\Gamma \vdash \mathcal A$. "What about all the sentences that aren't tautologies? Can you deduce them?"
No; it is a basic result of propositional calculus that it is sound: it proves only tautologies, i.e. if $\vdash \mathcal A$, then $\vDash \mathcal A$. But propositional calculus is not syntactically complete in the sense that it is not true that, for every formula $\varphi$ of the language, either $\varphi$ or $¬ \varphi$ is a theorem of the calculus. Consider the simple formula consisting of a single propositional variable $P$; neither it nor its negation is derivable (they are not tautologies, and the calculus is sound, i.e. it proves only tautologies). The same holds for first-order predicate calculus. But, and this is not the case for predicate calculus, propositional logic has the additional property that it is enough to add to the axioms a formula $\mathcal B$ that is not a tautology and the resulting system will be inconsistent. "in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense." Yes; for first-order predicate calculus we have to add suitable axioms and rules in order to manage quantifiers. The basic properties required for first-order predicate calculus are the same as those listed above. A valid formula is a formula that is true under every possible interpretation of the language. First-order instances of propositional tautologies, like e.g. $(x=c) \to (x=c)$, are valid. But, in addition, there are valid first-order formulas that are not instances of propositional tautologies, like e.g. $\forall x (x=x)$.
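To make "satisfied by every assignment of truth values" concrete, here is a small brute-force checker; the formulas are just examples, with $P \to (Q \to P)$ thrown in as a typical Hilbert-style axiom:

    from itertools import product

    def is_tautology(formula, n_vars):
        return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

    implies = lambda a, b: (not a) or b

    print(is_tautology(lambda p: implies(p, p), 1))                 # True
    print(is_tautology(lambda p: p, 1))                             # False: P is not a tautology
    print(is_tautology(lambda p: not p, 1))                         # False: neither is not-P
    print(is_tautology(lambda p, q: implies(p, implies(q, p)), 2))  # True (a standard axiom)

This is exactly why, in a sound calculus, neither $P$ nor $\neg P$ is derivable.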
CommonCrawl
I'd hope the title is self explanatory. In Kaggle, most winners use stacking with sometimes hundreds of base models, to squeeze a few extra % of MSE, accuracy... In general, in your experience, how important is fancy modelling such as stacking vs simply collecting more data and more features for the data? By way of background, I have been doing forecasting store $\times$ SKU time series for retail sales for 12 years now. Tens of thousands of time series across hundreds or thousands of stores. I like saying that we have been doing Big Data since before the term became popular. I have consistently found that the single most important thing is to understand your data. If you don't understand major drivers like Easter or promotions, you are doomed. Often enough, this comes down to understanding the specific business well enough to ask the correct questions and telling known unknowns from unknown unknowns. Once you understand your data, you need to work to get clean data. I have supervised quite a number of juniors and interns, and the one thing they had never experienced in all their statistics and data science classes was how much sheer crap there can be in the data you have. Then you need to either go back to the source and try to get it to bring forth good data, or try to clean it, or even just throw some stuff away. Changing a running system to yield better data can be surprisingly hard. Once you understand your data and actually have somewhat-clean data, you can start fiddling with it. Unfortunately, by this time, I have often found myself out of time and resources. I personally am a big fan of model combination ("stacking"), at least in an abstract sense, less so of fancy feature engineering, which often crosses the line into overfitting territory - and even if your fancier model performs slightly better on average, one often finds that the really bad predictions get worse with a more complex model. This is a dealbreaker in my line of business. A single really bad forecast can pretty completely destroy the trust in the entire system, so robustness is extremely high in my list of priorities. Your mileage may vary. In my experience, yes, model combination can improve accuracy. However, the really big gains are made with the first two steps: understanding your data, and cleaning it (or getting clean data in the first place). I can't speak for the whole of industry, obviously, but I work in industry and have competed on Kaggle so I will share my POV. The "training" set was artificially limited to have fewer rows than columns specifically so that feature selection, robustness, and regularization technique would be indispensable to success. The so-called "test" set has a markedly different distribution than the training set and the two are clearly not random samples from the same population. If someone gave me a data set like this at work, I would immediately offer to work with them on feature engineering so we could get features that were more useful. I would suggest we use domain knowledge to decide on likely interaction terms, thresholds, categorical variable coding strategies, etc. Approaching the problem in that way would clearly be more productive than trying to extract meaning from an exhaust file produced by a database engineer with no training in ML. Furthermore, if you learn, say, that a particular numeric column is not numeric at all but rather a ZIP code, well, you can go and get data from 3rd-party data sources such as the US Census to augment your data. 
Or if you have a date, maybe you'll include the S&P 500 closing price for that day. Such external augmentation strategies require detailed knowledge of the specific data set and significant domain knowledge but usually have much larger payoffs than pure algorithmic improvements. So, the first big difference between industry and Kaggle is that in industry, features (in the sense of input data) are negotiable. A second class of differences is performance. Often, models will be deployed to production in one of two ways: 1) model predictions will be pre-computed for every row in a very large database table, or 2) an application or website will pass the model a single row of data and need a prediction returned in real-time. Both use cases require good performance. For these reasons, you don't often see models that can be slow to predict or use a huge amount of memory like K-Nearest-Neighbors or Extra Random Forests. A logistic regression or neural network, in contrast, can score a batch of records with a few matrix multiplications, and matrix multiplication can be highly optimized with the right libraries. Even though I could get maybe +0.001 AUC if I stacked on yet another non-parametric model, I wouldn't because prediction throughput and latency would drop too much. There's a reliability dimension to this as well - stacking four different state-of-the-art 3rd-party libraries, say LightGBM, xgboost, catboost, and Tensorflow (on GPUs, of course) might get you that .01 reduction in MSE that wins Kaggle competitions, but it's four different libraries to install, deploy, and debug if something goes wrong. It's great if you can get all that stuff working on your laptop, but getting it running inside a Docker container running on AWS is a completely different story. Most companies don't want to front a small devops team just to deal with these kinds of deployment issues. That said, stacking in itself isn't necessarily a huge deal. In fact, stacking a couple different models that all perform equally well but have very different decision boundaries is a great way to get a small bump in AUC and a big bump in robustness. Just don't go throwing so many kitchen sinks into your heterogeneous ensemble that you start to have deployment issues. From my experience, more data and more features are more important than the fanciest, most stacked, most tuned model one can come up with. Look at the online advertising competitions that took place. Winning models were so complex they ended up taking a whole week to train (on a very small dataset, compared to the industry standard). On top of that, prediction in a stacked model takes longer than in a simple linear model. On the same topic, remember that Netflix never used its $1M algorithm because of engineering costs. I would say that online data science competitions are a good way for a company to know "what is the highest accuracy (or any performance metric) that can be achieved" using the data they collect (at some point in time). Note that this actually is a hard problem which is being solved! But, in the industry, field knowledge, hardware and business constraints usually discourage the use of "fancy modelling". Stacking significantly increases complexity and reduces interpretability. The gains are usually too small to justify it. So while ensembling is probably widely used (e.g. XGBoost), I think stacking is relatively rare in industry. In my experience collecting good data and features is much more important.
The clients we worked with usually have a lot of data, and not all of it is in a format that can be readily exported or is easy to work with. The first batch of data is usually not very useful; it is our task to work with the client to figure out what data we would need to make the model more useful. This is a very iterative process. Point 3) is especially important, because models that are easy to interpret are easier to communicate to the client and it is easier to catch if we have done something wrong. The more complex your model, the more risk you will face over the lifetime of that model. Time is typically either frozen in Kaggle competitions, or there's a short future time window where test set values come in. In industry, that model might run for years. And all it might take is for one variable to go haywire for your entire model to go to hell, even if it was built flawlessly. I get it, no one wants to watch a contest where competitors carefully balance model complexity against the risk, but out there in a job, your business and quality of life will suffer if something goes wrong with a model you're in charge of. Even extremely smart people aren't immune. Take, for instance, the Google Flu Trends prediction failure. The world changed, and they didn't see it coming. To O.P.'s question, "In general, in your experience, how important is fancy modelling such as stacking vs simply collecting more data and more features for the data?" Well, I'm officially old, but my answer is that unless you have a really robust modeling infrastructure, it's better to have straightforward models, with a minimal set of variables, where the input-to-output relationship is relatively straightforward. If a variable barely improves your loss metric, leave it out. Remember that it's a job. Get your kicks outside of work on Kaggle contests where there is the "go big or go home" incentive. One exception would be if the business situation demanded a certain level of model performance, for instance if your company needed to match or beat the performance of a competitor to gain some advantage (probably in marketing). But when there's a linear relationship between the model performance and business gain, the increases in complexity don't typically justify the financial gain (see "Netflix never used its $1 Million Algorithm due to Engineering costs" - apologies to @RUser4512 for citing the same article). In a Kaggle competition however, that additional gain may move you hundreds of ranks as you pass nearby solutions. I work mainly with time-series financial data, and the process runs from gathering data, cleaning it, and processing it, through working with the problem owners to figure out what they actually want to do, to building features and models to try and tackle the problem, and finally to retrospectively examining the process to improve for next time. This whole process is greater than the sum of its parts. I tend to get 'acceptable' generalisation performance with a linear/logistic regression and by talking with domain experts to generate features - time far better spent than over-fitting my model to the data I have.
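For readers who have not seen stacking written down, here is a minimal sketch of the small, heterogeneous ensemble the answers describe - two base models with different decision boundaries blended by a simple meta-learner. The library (scikit-learn), the models and the synthetic data are illustrative choices of mine, not anything taken from the answers:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        final_estimator=LogisticRegression(),   # simple, interpretable meta-learner
        cv=5,
    )

    print(cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())

Even a small stack like this is two extra dependencies and a slower prediction path, which is the deployment trade-off the answers keep coming back to.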
CommonCrawl
$\star$ Argue that for any constant $0 < \alpha \le 1/2$, the probability is approximately $1 - 2\alpha$ that on a random input array, PARTITION produces a split more balanced than $1 - \alpha$ to $\alpha$. In order to produce a split worse than $\alpha$ to $1 - \alpha$, PARTITION must pick a pivot that lies either within the smallest $\alpha n$ elements or within the largest $\alpha n$ elements. The probability of each is (approximately) $\alpha n / n = \alpha$, so the probability that one or the other happens is $2\alpha$. Thus, the probability of getting a more balanced partition is the complement, $1 - 2\alpha$.
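A quick simulation backs up the estimate; it assumes the pivot ends up at a uniformly random rank, which is what a random input array amounts to here (array size and trial count are arbitrary):

    import random

    def prob_more_balanced(alpha, n=1000, trials=200_000):
        hits = 0
        for _ in range(trials):
            k = random.randrange(n)            # rank of the pivot among n elements
            if min(k, n - 1 - k) > alpha * n:  # both sides larger than alpha * n
                hits += 1
        return hits / trials

    for alpha in (0.1, 0.25, 0.4):
        print(alpha, prob_more_balanced(alpha), 1 - 2 * alpha)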
CommonCrawl
In this talk, we consider a time harmonic acoustic problem in a locally perturbed waveguide. We are interested in a situation where an observer generates incident propagative waves from $-\infty$ and measures the resulting scattered field at $-\infty$ and/or $+\infty$. We explain how to construct perturbations of the waveguide such that the scattered field is exponentially decaying at $-\infty$ and/or at $+\infty$, so that in practice, these defects are invisible to the observer.
CommonCrawl
Lemma 15.28.6. Let $R$ be a ring. Let $f_1, \ldots , f_r \in R$ be a sequence. Multiplication by $f_i$ on $K_\bullet (f_\bullet )$ is homotopic to zero, and in particular the cohomology modules $H_i(K_\bullet (f_\bullet ))$ are annihilated by the ideal $(f_1, \ldots , f_r)$.
CommonCrawl
There is a large hotel, and $n$ customers will arrive soon. Each customer wants to have a single room. You know each customer's arrival and departure day. Two customers can stay in the same room if the departure day of the first customer is earlier than the arrival day of the second customer. What is the minimum number of rooms that are needed to accommodate all customers? And how can the rooms be allocated? The first input line contains an integer $n$: the number of customers. Then there are $n$ lines, each of which describes one customer. Each line has two integers $a$ and $b$: the arrival and departure day. Print first an integer $k$: the minimum number of rooms required. After that, print a line that contains the room number of each customer in the same order as in the input. The rooms are numbered $1,2,\ldots,k$.
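The statement suggests the standard greedy approach: process customers by arrival day and keep the occupied rooms in a min-heap keyed by departure day, reusing a room only when its earliest departure is strictly before the new arrival. A rough sketch of that idea (mine, not an official solution):

    import heapq

    def allocate(customers):
        """customers: list of (arrival, departure); returns (k, room of each customer)."""
        order = sorted(range(len(customers)), key=lambda i: customers[i])
        rooms, heap, k = [0] * len(customers), [], 0
        for i in order:
            a, b = customers[i]
            if heap and heap[0][0] < a:        # earliest departure strictly before arrival
                _, r = heapq.heappop(heap)     # reuse that room
            else:
                k += 1
                r = k                          # open a new room
            rooms[i] = r
            heapq.heappush(heap, (b, r))
        return k, rooms

    print(allocate([(1, 2), (2, 4), (4, 4)]))  # (2, [1, 2, 1])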
CommonCrawl
I'm a postdoc of mathematics at the University of Jyväskylä, Finland. My research is focused on inverse problems, and the main tools are differential geometry, functional analysis and PDEs. One of my long time hobbies is Latin language. See my homepage for more information.
CommonCrawl
Previously I've posted about the lambda calculus and Church numbers. We'd shown how we can encode numbers as functions using the Church encoding, but we'd not really shown how we could do anything with those numbers. As we still need the parentheses to make sure that the $f$ and $x$ get bundled together. We'll need this convention as we go on as things are going to get a little more parenthesis-heavy. OK, let's get back to the arithmetic. How do we get from $three$ to $four$? Well, the difference is that we just need to apply $f$ one more time. We can encode the idea of applying $f$ one more time into a lambda function. We could call it $add-one$ or $increment$ but let's go with $succ$ for 'successor'. The $n$ is the number we're adding one to - we need to bind the values of $f$ and $x$ into the function because they'll need to have $n$ applied to them before we can apply $f$ the one extra time. So the signature of $succ$ - and consequently any unary operation on a number - is $\lambda n.\lambda f.\lambda x$, where $n$ is the number being changed. Yeah, it's a bit verbose in comparison to the lambda calculus version.2 All those parentheses, while great for being explicit about which functions get applied to what, make it a bit tough on the eyes. Let's see if we can define addition. Where $m$ and $n$ are the numbers being added together. Now all we need to do is work out what comes after the dot. And this works,4 but we could probably write something both more intuitive and simpler. What do we want as the result of $add$? We want a function that applies $f$ to $x$ $n$ many times, and then applies $f$ to the result of that $m$ many times. We can just write that out with the variables we've been given - first apply $f$ to $x$, $n$ many times. We've used the word 'times' a lot here when talking about the application of $f$ onto $x$s in the above. But now we'll have to deal with real multiplication. Before you try to reach an answer, step back a little and ask yourself what the result ought to be, and what the Church arithmetic way of describing it would be. Say we had the numbers two and three. If I was back in primary school I'd say that the reason that multiplying them together made six was because six was 'two lots of three' or 'three lots of two'. $two\ f$ is a function that applies $f$ two times to whatever its next argument is. $three\ (two\ f)$ will apply $two\ f$ to its next argument three times. So it will apply it $3\ \times\ 2$ times - 6 times. So what could exponentiation be? Well, the first thing we know is that this time, order is going to be important - $2^3$ is not the same as $3^2$. Next, what does exponentiation mean? I mean, really mean? When we did multiplication we saw ourselves doing 'two lots of (three lots of $f$)'. But now we need to do 'two lots of something' three times. The 'three' part has to apply not to the number of times we do an $f$, nor to the number of times we do '$n$ lots of $f$', but rather to the number of times we apply $n$ to itself. So if 'three' is the application of $f$ three times to $x$, we can say that $2^3$ is the application of $two$ three times to $f\ x$. Another way to look at it: a Church number is already encoding some of the behaviour of exponentiation. When we use inc and 0 as f and x we can think of the number n acting as $inc^n$ - inc composed with itself n many times. This is more explicit if we try it with something other than increment - say double, aka 'times two'.
Let's do it in Haskell - but please feel free to pick any language you like. Four lots of timesTwo is 16; all we need to do is to use the number two instead, and apply the result to an f and an x. This is because you know the function you're left with after you've applied $n$ to $m$ is a number - it will take an $f$ and an $x$ - so you don't need to explicitly bind them in the outer function just in order to pass them unchanged to the inner one. But that's just a nicety. The important thing is… we've finished! An interesting relationship between the last three: the $f$ moves along to the right as the operation becomes 'bigger'. Next post we'll be taking a short break from arithmetic to take a look at logic using the lambda calculus. And I'm speaking as a mad Lisp fan, lover of parens wherever they are. For functional programming that is. Get your pencil and paper out if you want to prove it!
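Since the post invites you to pick any language, here is the whole kit transliterated into Python lambdas; it is a direct transcription of the definitions above, with to_int as the usual "apply to increment and 0" decoder:

    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mult = lambda m: lambda n: lambda f: m(n(f))
    exp  = lambda m: lambda n: n(m)             # m ** n: do "m" to itself n times

    to_int = lambda n: n(lambda i: i + 1)(0)    # decode with increment and 0

    two, three = succ(succ(zero)), succ(succ(succ(zero)))
    print(to_int(add(two)(three)))   # 5
    print(to_int(mult(two)(three)))  # 6
    print(to_int(exp(two)(three)))   # 8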
CommonCrawl
In this theoretical contribution, we chose to use an indenter of spherical geometrical form, which we used to model the surfaces mixture in order to separate the contributions of substrate and film in the hardness of the composite covered material. We have considered the coefficients $\alpha$, $\beta$ of the model as ratios of the projections of the imprints onto the horizontal planes (disk surfaces). We prove that the film hardness of a monolayer coating depends on the composite and substrate hardness, the geometrical form of the indenter, and the film thickness. Key words: hardness, spherical indenter, model of the surfaces mixture, monolayer coating.
CommonCrawl
BOOST2016 is the eighth of a series of successful joint theory/experiment workshops that bring together the world's leading experts from theory and LHC experiments to discuss the latest progress and develop new approaches on the reconstruction of and use of boosted decay topologies in order to search for new physics. This year, the workshop is jointly hosted by the University of Zurich and ETH Zurich. In experimental studies which use boosted taggers, groomers are typically used to reduce sensitivity to wide angle soft radiation. It is therefore important to understand the behavior of these groomers to all orders in QCD. In this talk, I will discuss the factorization of groomed two prong substructure observables, focusing in particular on the $D_2$ observable. I will show that for a particular groomer, soft drop, this observable can be factorized to all orders in perturbation theory. I will discuss theoretical and experimental advantages and disadvantages of soft dropped $D_2$ as a tagger, as well as present numerical results. This analysis sheds considerable light into the behavior of groomed substructure observables and their calculability. Jet shapes are commonly used as discriminative variables to tag boosted objects. In this talk, I will present a method to compute jet shapes for boosted objects which retains the dominant contributions coming either from the large boost or, when appropriate, from the smallness of the shape itself. I will mostly focus on the case of 2-subjettiness but will also show that the method can be applied to other observables like N-subjettiness with grooming or Energy-Correlation functions. I will discuss recent advances in precision jet substructure calculations. The soft drop groomed mass has been calculated to next-to-next-to-leading logarithmic accuracy and matched to relative $\alpha_s^2$ fixed-order corrections for jets in $pp\to Z+j$ events. The normalized soft drop mass distribution is insensitive to underlying event and pileup, depends only on collinear physics, and only requires determination of the relative quark and gluon jet fractions from fixed-order calculations. This is the first jet substructure calculation to this accuracy and opens the door to precision theory and data comparisons. able to identify high-energy jets containing $b$ quarks (``$b$-jets''). density and a more ambiguous association of hits with tracks. a spurious hit in the densely populated inner layer. results indicated a falling tagging efficiency beyond approximately 150 GeV. by the restricted momentum range published. We explore the scale-dependence and correlations of jet substructure observables to improve upon existing techniques in the identification of highly Lorentz-boosted objects. Modified observables are designed to remove correlations from existing theoretically well-understood observables, providing practical advantages for experimental measurements and searches for new phenomena. We study such observables in W jet tagging and provide recommendations for observables based on considerations beyond signal and background efficiencies. The LHC is starting to study the regime where top-quark pairs are produced with energies much larger than the top mass. In this "boosted regime", large QCD corrections can arise both from soft-gluon emissions and from emissions collinear to the energetic top quarks, which become singular in the boosted limit. 
In this talk I discuss a theoretical framework which can be used to resum both types of potentially large corrections in the boosted regime, and compare some of its numerical predictions for differential cross sections with LHC data. The top quark mass is one of the most important standard model parameters. The most precise method for top mass extraction comes from kinematic extraction. However, there's an O(1) GeV theory uncertainty associated with the fact these methods rely on Monte Carlo simulations which do not have a fully specified field theoretic mass scheme definition. I will describe our proposal for using a 2-jettiness variable with a boosted top sample to extract the top mass at the LHC. This variable obeys a factorization theorem which allow the associated cross section to be calculated with a well defined top mass scheme, and has the same strong sensitivity as the currently used template method. We study the detector performance with an emphasis on jet substructure variables for extremely boosted objects at very high energy proton colliders using Geant4 simulation. We focus on the calorimeter performance and study hadronically-decaying W bosons with transverse momentum in the multi-TeV range (5-20 TeV). The calorimeter segmentation is benchmarked in order to understand the impact of granularity and resolution on boosted boson discrimination. Abstract: The linear collider experiments require excellent performance of jet clustering algorithms in high-energy electron-positron with non-negligible gamma gamma -> hadrons background. The ILC and CLIC detector concepts have studied the performance of several algorithms under realistic conditions and with a detailed model of the detector response. Results on jet energy and substructure response are presented for several key benchmark processes. The identification of boosted objects in TeV electron-positron collisions is also discussed. In this talk I will introduce our recent work about factorization and resummation for jet processes. From a detailed analysis of Sterman-Weinberg cone-jet cross sections in effective field theory, we obtain novel factorization theorems which separate the physics associated with different energy scales present in such processes. The relevant low-energy physics is encoded in Wilson lines along the directions of the energetic particles inside the jets. This multi-Wilson-line structure is present even for narrow-cone jets due to the relevance of small-angle soft radiation. We discuss the renormalization-group equations satisfied by these operators. Their solution resums all logarithmically enhanced contributions to such processes, including non-global logarithms. Such logarithms arise in many observables, in particular whenever hard phase-space constraints are imposed, and are not captured with standard resummation techniques. Our formalism provides the basis for higher-order logarithmic resummations of jet and other non-global observables. As a nontrivial consistency check, we use it to obtain explicit two-loop results for all logarithmically enhanced terms in cone-jet cross sections and verify those against numerical fixed-order computations. This talk is based on arXiv:1508.06645, arXiv:1605.02737 and some recent progress about numerical results. 
Measuring inclusive quantities, both global (missing and sum transverse energy) and local (jet mass and substructure), after the high luminosity LHC upgrade will be extremely challenging, and will require new pile-up mitigation techniques that correct more than local jet energies. To this end, one can use the fact that pile-up has no angular structure while hard processes are characterised by small-angle emissions and are therefore highly sparse in the frequency domain. Using wavelet functions, intermediates between a standard pixel basis and a Fourier basis, which are localised in position ($y - \phi$) as well as frequency (angular) space, we can naturally and efficiently perform an event-wide classification of signal and pile-up particles by filtering in the frequency domain. In this talk, we will motivate the use of wavelets in high energy physics, describe the procedure behind a wavelet analysis, and present a few concrete methods and results. In particular, using a generator-level overlay of signal and pile-up events, we demonstrate that, using wavelet techniques, a significant improvement in e.g.~missing transverse energy reconstruction may be possible even up to $\langle\mu\rangle$ of 300 or beyond. Building on the jet-image based representation of high energy jets, we develop computer vision based techniques for jet-tagging through the use of Deep Neural Networks. Jet-images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing. We show how applying such techniques using Deep Neural Networks can improve the performance to identify highly boosted W bosons with respect to state-of-the-art substructure methods. In addition, we explore new ways to extract and visualize the discriminating features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Deducing whether the substructure of an observed jet is due to a low-mass single particle or due to multiple decay objects of a massive particle is an important problem in the analysis of collider data. Traditional approaches have relied on expert features designed to detect energy deposition patterns in the calorimeter, but the complexity of the data make this task an excellent candidate for the application of machine learning tools. The data collected by the detector can be treated as a two-dimensional image, lending itself to the natural application of image classification techniques. In this work, we apply deep neural networks with a mixture of locally-connected and fully-connected nodes. Our experiments demonstrate that without the aid of expert features, such networks match or modestly outperform the current state-of-the-art approach for discriminating between jets from single hadronic particles and overlapping jets from pairs of collimated hadronic particles, and that such performance gains persist in the presence of pileup interactions. In addition, we will present initial studies on using deep networks to perform b-tagging inside boosted objects. We present a new approach for efficiently and selectively identifying high-momentum, hadronically decaying top quarks, Higgs bosons, and W and Z bosons, distinguishing them from jets from light quarks and gluons in proton-proton collisions at the LHC or future colliders. 
This technique yields variables that can be combined with those from current approaches to boosted particle tagging in multivariate classifiers such as deep neural networks or boosted decision trees to yield estimators which can be used in a broad range of analyses in which such highly boosted particles play a role. The technique is capable of identifying subjets which overlap strongly in the tracking and calorimetry systems, allowing good performance even in the multi-TeV regime. The performance of the method is studied in various scenarios and shows promise for actual use at the LHC experiments. We present a new algorithm developed for the identification of boosted heavy particles at the LHC, the Heavy Object Tagger with Variable R (HOTVR). The algorithm is based on jet clustering with a variable distance parameter $R$ combined with a mass jump condition. The variable $R$ approach adapts the jet size to the transverse momentum $p_T$, resulting in smaller jets for increasing values of $p_T$, making the jet mass less susceptible to radiation. Two and three prong decays are identified using subjets, formed by the mass jump condition. The resulting algorithm combines the jet clustering, subjet finding and rejection of soft clusters in one step, making it robust and simple. We present performance tests for the identification of boosted top quarks, which show that the HOTVR algorithm has similar or better performance over a large range in $p_T$ compared to other algorithms commonly used at the LHC. It has recently been shown that the Y-splitter method with trimming is a very effective method for tagging boosted electroweak bosons, outperforming several standard taggers at high pt. Here we analytically investigate this observation and explain the performance of Y-splitter with a range of grooming techniques from first principles of QCD. We also suggest modifications that considerably simplify the analytical results, thereby increasing robustness, and make the results largely independent of the details of grooming.
CommonCrawl
Fact: Start with $V$ a subspace of $\mathbb R^n$. Take the set of all supports of vectors in $V$. Throw out $\emptyset$. You now have the dependent sets of some matroid. Not sure you believe me? Or just want to get your hands on the independent sets? (If you tell people that you have a matroid, they always ask you for the independent sets. How can you blame them?) Okay, no problem. It follows from the above that the matroids that arise in this fashion are precisely the ones that are representable over $\mathbb R$. I like this way of thinking about representability. But I've never seen it presented this way in the literature. Why not? (Maybe it has to do with how I seem to be relying on a nice Euclidean inner product?) This leads me to a couple of questions. Is the fact I stated above presented in any reference anywhere? I would greatly appreciate it if anyone could point me in the direction of one. Is it the case that the fact I stated above remains true when $\mathbb R$ is replaced by any field? If so, does it characterize representability over any field? EDIT: I just realized that it's not difficult to prove that the above fact does hold over any field; I believe one can simply show that those support sets which are minimal with respect to $\subseteq$ satisfy the circuit axioms, and that's enough. So my main question here is whether or not this still serves to characterize representability over other fields. This is just taking the dual of the usual definition of representability: a representable matroid over a field $k$ is given by $n$ vectors in a vector space $V$, that is by a map $k^n\to V$. If you dualize, you get a map $V^* \to k^n$ (indeed using the inner product on $k^n$), and the description you gave. I think it's a little more standard to think about the hyperplanes in $V$ where the coordinates vanish: a set is independent if the intersection of the hyperplanes is transverse (i.e. its codimension is the same as the number of hyperplanes). I think this interpretation is considered quite standard by people in matroid theory and hyperplane arrangements; for example, Section 1.9 in these notes of Reiner or Section 3 of these notes by Stanley. Certainly I tend to think more about matroids in terms of hyperplane arrangements, rather than vectors.
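For what it's worth, the claim in the EDIT (that the minimal supports satisfy the circuit axioms, hence define a matroid) can be checked by brute force over a small field. Here is a sketch over GF(2); the spanning vectors below are an arbitrary illustrative choice, and the final test is the independence-augmentation axiom rather than the circuit axioms directly.

```python
# Brute-force check over GF(2): supports of nonzero vectors of a subspace V,
# minimal supports as circuits, and the exchange (augmentation) axiom.
from itertools import combinations, product

n = 5
basis = [(1, 1, 0, 1, 0),   # rows spanning V inside GF(2)^5 (hypothetical example)
         (0, 1, 1, 0, 1),
         (1, 0, 0, 1, 1)]

def span_gf2(rows):
    """All vectors in the GF(2)-span of the given rows."""
    vecs = set()
    for coeffs in product((0, 1), repeat=len(rows)):
        v = tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2 for i in range(n))
        vecs.add(v)
    return vecs

supports = {frozenset(i for i, x in enumerate(v) if x) for v in span_gf2(basis)}
supports.discard(frozenset())                       # throw out the empty support
circuits = {S for S in supports
            if not any(T < S for T in supports)}    # minimal supports = circuits

def is_indep(S):
    """A set is independent iff it contains no circuit."""
    return not any(C <= S for C in circuits)

independent = [set(I) for r in range(n + 1)
               for I in combinations(range(n), r) if is_indep(set(I))]

# Augmentation axiom: if |I| < |J| are independent, some x in J \ I keeps I + {x} independent.
ok = all(any(is_indep(I | {x}) for x in J - I)
         for I in independent for J in independent if len(I) < len(J))
print("circuits:", sorted(map(sorted, circuits)))
print("augmentation axiom holds:", ok)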
CommonCrawl
You can navigate this wiki by using the sidebar which lists the main entries (examples, problems,…) and gathers the "tags" or keywords. You can also use the "Table of Content" in the top bar. A typical orbit for $z\mapsto\beta z \bmod \mathbb Z^2$ with $|\beta|>1$. This is a place for (references to) uses of and results about dynamical systems (including group actions) defined by piecewise affine transformations. We are interested in their mathematical theory, including conjectures, as well as numerical experiments and applications of these systems. It was created by a group of French mathematicians, but anyone working on related subjects is invited to join. In order to avoid spam you must register before editing pages or posting comments. You can do it here. If you are a member you can also send invitations from that page. Thank you for adding information about these fascinating dynamical systems. We hope to gather short introductions and, foremost, references and links to the analysis and applications of piecewise affine dynamics. Note. Each tag is one word. The wiki is also organized hierarchically: each page is given a parent (using the "parent" button which is accessed by clicking on "options" at the bottom of the page). The "Table of Content" button in the top bar displays this structure.
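As a small illustration of the kind of map shown in the orbit picture above, here is a sketch that iterates $z\mapsto\beta z \bmod \mathbb Z^2$ on the unit square; the value of $\beta$ and the starting point are arbitrary choices made only for this example.

```python
# Iterate the piecewise affine map z -> beta*z (mod Z^2) on the 2-torus.
import numpy as np

beta = 1.7                        # any |beta| > 1 (assumed value)
z = np.array([0.2314, 0.5718])    # arbitrary starting point in the unit square

orbit = []
for _ in range(2000):
    orbit.append(z.copy())
    z = (beta * z) % 1.0          # the map: scale, then reduce modulo the lattice

orbit = np.array(orbit)
print(orbit[:5])                  # first few points of the orbit

# Optional visualisation:
# import matplotlib.pyplot as plt
# plt.scatter(orbit[:, 0], orbit[:, 1], s=1); plt.show()
```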
CommonCrawl
Abstract: Let $p(t)$ be an admissible Hilbert polynomial in $\mathbb P^n$ of degree $d$. The Hilbert scheme $\mathrm{Hilb}^n_{p(t)}$ can be realized as a closed subscheme of a suitable Grassmannian $\mathbb G$, hence it could be globally defined by homogeneous equations in the Plücker coordinates of $\mathbb G$ and covered by open subsets given by the non-vanishing of a Plücker coordinate, each embedded as a closed subscheme of the affine space $\mathbb A^D$, $D=\dim(\mathbb G)$. However, the number $E$ of Plücker coordinates is so large that effective computations in this setting are practically impossible. In this paper, taking advantage of the symmetries of $\mathrm{Hilb}^n_{p(t)}$, we exhibit a new open cover, consisting of marked schemes over Borel-fixed ideals, whose number is significantly smaller than $E$. Exploiting the properties of marked schemes, we prove that these open subsets are defined by equations of degree $\leq d+2$ in their natural embedding in $\mathbb A^D$. Furthermore we find new embeddings in affine spaces of far lower dimension than $D$, and characterize those that are still defined by equations of degree $\leq d+2$. The proofs are constructive and use a polynomial reduction process, similar to the one for Gröbner bases, but are term order free. In this new setting, we can achieve explicit computations in many non-trivial cases.
CommonCrawl
Cavity optomechanics provides a platform for exquisitely controlling coherent interactions between photons and mesoscopic mechanical excitations. Cavity optomechanics has recently been used to demonstrate phenomena such as laser cooling, optomechanically induced transparency, and coherent wavelength conversion. These experiments were enabled by photonic micro- and nanocavities engineered to minimize optical and mechanical dissipation rates, $\gamma_o$ and $\gamma_m$, respectively, while enhancing the per-photon optomechanical coupling rate, $g_0$. The degree of coherent photon-phonon coupling in these devices is often described by the cooperativity parameter, $C = N g_0^2 / \gamma_o\gamma_m$, which may exceed unity in several cavity optomechanics systems for a sufficiently large intracavity photon number, $N$. Here we demonstrate optical whispering gallery mode (WGM) microdisk cavities that are fabricated from wide-bandgap materials such as gallium phosphide (GaP) and single crystal diamond (SCD). By using wide-bandgap materials, high $C$ can be achieved by reaching high $N$ before thermal instabilities occur. We demonstrate GaP microdisks with intrinsic optical quality factors $> 2.8 \times 10^5$ and mode volumes $< 10(\lambda/n)^3$, and study their optomechanical properties. We observe optomechanical coupling in GaP microdisks between optical modes at 1.5 $\mu$m wavelength and several mechanical resonances, and measure an optical spring effect consistent with a predicted optomechanical coupling rate $g_0/2\pi \sim 30$ kHz for the fundamental mechanical radial breathing mode at 488 MHz. We have also demonstrated monolithic microdisk cavities fabricated from bulk SCD via a scalable process. Optical quality factors of $1.15 \times 10^5$ at 1.5 $\mu$m are demonstrated, which are among the highest measured in SCD to date, and can be improved by optimizing our fabrication process further. In addition to SCD possessing desirable optical properties, its high Young's modulus, high thermal conductivity, and low intrinsic dissipation show great promise for use in high-$C$ optomechanics. Current investigation is focused on characterizing the optical properties of these devices, and optimizing them for applications in nonlinear optics and quantum optomechanics.
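A back-of-the-envelope estimate of the cooperativity $C = N g_0^2/\gamma_o\gamma_m$ can be made from the figures quoted above; note that the mechanical quality factor and intracavity photon number are not given in the abstract, so the values used for them below are placeholders (assumptions) for illustration only.

```python
# Rough cooperativity estimate; quoted values from the abstract plus ASSUMED placeholders.
import math

c = 2.998e8                        # speed of light, m/s
lam = 1.55e-6                      # optical wavelength, m
Q_opt = 2.8e5                      # intrinsic optical quality factor (quoted)
omega_o = 2 * math.pi * c / lam    # optical angular frequency
gamma_o = omega_o / Q_opt          # optical energy decay rate, rad/s

g0 = 2 * math.pi * 30e3            # optomechanical coupling rate (quoted), rad/s

omega_m = 2 * math.pi * 488e6      # mechanical frequency (quoted), rad/s
Q_m = 1e4                          # ASSUMED mechanical quality factor
gamma_m = omega_m / Q_m            # mechanical decay rate, rad/s

N = 1e5                            # ASSUMED intracavity photon number

C = N * g0**2 / (gamma_o * gamma_m)
print(f"gamma_o/2pi = {gamma_o / (2 * math.pi):.3e} Hz")
print(f"C ~ {C:.3g}")
```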
CommonCrawl
"If the axiom of constructibility holds then there is a subset of the product of the Baire space with itself which is $\Delta^1_2$ and is the graph of a well ordering of the Baire space. If the axiom holds then there is also a $\Delta^1_2$ well ordering of Cantor space." Can someone (give here or point me to) a (sketch or thorough description) of a $\Delta^1_2$ set that does this for (Baire space or Cantor space)? I can see how V=L implies there is a definable well order, but I can't see how it would be in the analytical hierarchy. In the constructible universe $L$, there is a definable well-ordering of the entire universe. This universe is built up in transfinite stages $L_\alpha$, and the ordering has $x\lt_L y$ when $x$ is constructed at an earlier stage, or else they are constructed at the same stage, but $x$ is constructed at that stage by an earlier definition, or with the same definition, but with earlier parameters. I also explain this in this MO answer. One may extract from this definition a rather low-complexity definable well-ordering of the reals by capturing the countable pieces of the $L$ hieararchy by reals. That is, if $x$ is a real number of $L$, then it appears at some countable stage $L_\alpha$ for a countable ordinal $\alpha$, and the entire structure $L_\alpha$ is countable, and hence itself coded by a real. Here, we code a set by a real in any of the standard ways, for example, by coding a well-founded extensional relation on $\omega$ whose Mostowski collapse is the given set. Furthermore, the $L$-order is absolute to any $L_\alpha$, since $L_\alpha$ knows about the $L_\beta$-heirarchy for $\beta<\alpha$. Also, if a countable structure is well-founded and thinks $V=L$, then it is $L_\alpha$ for some $\alpha$. Note that if a real $z$ codes a first order structure $M$, then the question of whether $M$ satisfies a first order assertion is an arithmetic statement in $z$, since we need only quantify over the coded elements, which are coded by natural numbers. $x\lt_L y$ in the $L$ order. There is some countable ordinal $\alpha$ such that $L_\alpha$ satisfies $x\lt_L y$. For every countable ordinal $\alpha$, if $x$ and $y$ are reals in $L_\alpha$, then $L_\alpha$ satisfies $x\lt_L y$. There is a real $z$ coding a well-founded structure that thinks $V=L$ (and so this structure must be some $L_\alpha$) in which $x$ and $y$ are reals and the structure satisfies $x\lt_L y$. All reals $z$ coding well-founded structures $L_\alpha$ in which $x$ and $y$ are reals satisfy $x\lt_L y$. The fourth statement has complexity $\Sigma^1_2$, since being-well-founded is $\Pi^1_1$. Similarly the fifth statement has complexity $\Pi^1_2$, so overall the ordering is $\Delta^1_2$. The end result is that in the universe $L$, there is a low-complexity definable well-ordering of the reals. In this universe, therefore, all of the supposedly non-constructive applications of AC turn out to be completely definable. A subset of Baire space Wadge incomparable to a Borel set? How much choice does a linear or well-order on cardinals imply?
CommonCrawl
1 . What should come in place of question mark (?) in the following questions? 2 . What should come in place of question mark (?) in the following questions? 3 . What should come in place of question mark (?) in the following questions? 4 . What should come in place of question mark (?) in the following questions? 5 . What should come in place of question mark (?) in the following questions? 75 $\times$ 4.8 $\div$ 3.2 = ? 6 . What should come in place of question mark (?) in the following questions? 7 . What should come in place of question mark (?) in the following questions? 8 . What should come in place of question mark (?) in the following questions? 9 . What should come in place of question mark (?) in the following questions? What is the difference between a discount of 39% on Rs. 15400 and two successive discounts of 24% and 15% on the same amount?
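Of the items above, only question 5 and the final discount question carry enough information to be worked through here; the answer options are not shown, so these are simply the computed values. For question 5: $75 \times 4.8 \div 3.2 = 75 \times 1.5 = 112.5$. For the discount question: a single discount of 39% on Rs. 15400 amounts to $0.39 \times 15400 = 6006$, while two successive discounts of 24% and 15% leave $15400 \times 0.76 \times 0.85 = 9948.40$, i.e. a total discount of $15400 - 9948.40 = 5451.60$; the difference is therefore $6006 - 5451.60 = 554.40$.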
CommonCrawl
Manage passwords, documents and other confidential data from one source! Password Depot is a powerful and very user-friendly password manager for PC which helps you organize all of your passwords – and also, for instance, information from your credit cards or software licenses. The software provides security for your passwords – in three respects: it safely stores your passwords, guarantees secure data use and helps you to create secure passwords. However, Password Depot does not only guarantee security: it also stands for convenient use, high customizability, marked flexibility in terms of interaction with other devices and, last but not least, extreme functional versatility. Find all the password-protected or encrypted files on a PC or over the network! BEST POSSIBLE ENCRYPTION. In Password Depot, your information is encrypted not merely once but in fact twice, thanks to the algorithm AES or Rijndael 256. In the US, this algorithm is approved for state documents of utmost secrecy! BACKUP COPIES. Password Depot generates backup copies of your password files. The backups may be stored optionally on FTP servers on the Internet (also via SFTP) or on external hard drives. You can individually define the time interval between the backup copies' creation. PROTECTION FROM KEYLOGGING. All password fields within the program are internally protected against different types of keystroke interception (keylogging). This prevents your sensitive data entries from being spied on. VIRTUAL KEYBOARD. The ultimate protection against keylogging. With this tool you can enter your master password or other confidential information without even touching the keyboard. Password Depot does not simulate keystrokes but instead uses an internal cache, so that keystrokes can be intercepted neither by software nor by hardware. FAKE MOUSE CURSORS. When typing on the program's virtual keyboard, you can also set the program to show multiple fake mouse cursors instead of your usual single cursor. This makes it even harder to discern your keyboard activity. UNCRACKABLE PASSWORDS. The integrated Password Generator creates virtually uncrackable passwords for you. Thus in future, you will not have to use passwords such as "sweetheart" anymore, a password that may be cracked within minutes, but e.g. "g\/:1bmVuz/z7ewß5T$x_sb}@<i". Even the latest PCs can take a millennium to crack this password! FILE ATTACHMENTS. You may add file attachments containing e.g. additional information to your password entries. These attachments can be opened directly from within Password Depot and may additionally be saved on data storage media. TRANSFER PASSWORDS. You can both import password entries from other password managers into Password Depot and export entries from Password Depot. To do so, the software offers you special wizards that facilitate importing and exporting password information. USER-FRIENDLY INTERFACE. Password Depot's user interface is similar to that of Windows Explorer. This allows you to effectively navigate through your password lists and to quickly find any password you happen to be searching for. CUSTOM BROWSERS. You can choose which browsers you would like to use within the program. This way, you are not bound to common browsers such as Firefox or Internet Explorer but can also use Opera, for example. INDIVIDUAL USER MODES.
As a new user, you can work with only a few functions in the Beginner Mode, whereas as an expert you can use all functions in the Expert Mode or configure the program to your own needs in the Custom Mode. ENTERPRISE SERVER. Password Depot features a separate server model enabling several users to access the same passwords simultaneously. Access to password files may run either via a local network or via the Internet. USB STICK. You can copy both your password files and the program Password Depot itself onto a USB stick. In this way, you can carry the files and the software along wherever you go, always having them available to use. CLOUD DEVICES. Password Depot supports web services, among them Google Drive, OneDrive, Dropbox and Box. In this way, Password Depot enables you to quickly and easily enter the Cloud! Note: After the 30-day trial, the program will run in freeware DEMO mode.
CommonCrawl
27/03/2012 – The critical value is the value from the distribution of the test for which P(X > X critical value) = alpha, where X is the observed test statistic and X critical value is the critical value for the test. Practice finding the critical value z* for some given confidence level. Calculate critical value(s) based on the significance level $\alpha$. Compare the test statistic to the critical value. Basically, rather than mapping the test statistic onto the scale of the significance level with a p-value, we're mapping the significance level onto the scale of the test statistic with one or more critical values. $z_{\alpha/2}$ is known as the critical value, the positive value that is at the vertical boundary for the area of $\alpha/2$ in the right tail of the standard normal distribution; $\sigma$ is the population standard deviation; $n$ is the sample size. 13/04/2009 – Best Answer: I'm assuming that you are talking about critical values of z for a 2-sided test. In that case, you express your confidence level as a decimal, add 1 to it, and divide the result by 2.
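The "add 1 and divide by 2" recipe in the last answer is just the inverse normal CDF evaluated at $(1+\text{confidence level})/2$; here is a quick sketch using SciPy, with 95% as an example confidence level.

```python
# Two-sided and one-sided normal critical values from a confidence level.
from scipy.stats import norm

confidence = 0.95
alpha = 1 - confidence

z_two_sided = norm.ppf((1 + confidence) / 2)   # alpha/2 in each tail; same as norm.ppf(1 - alpha/2)
z_one_sided = norm.ppf(1 - alpha)              # all of alpha in one tail

print(round(z_two_sided, 3))   # ~1.960
print(round(z_one_sided, 3))   # ~1.645
```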
CommonCrawl
where $\theta, \alpha$ and $\sigma$ are time-dependent. First, I treat $\alpha_t$ as the mean of my process (~$\mu_t$). I am estimating and forecasting $\mu_t$ using the Local Linear Trend State Space model and the Kalman Filter. Then comes my problem. Since $\theta$ is changing over time I cannot estimate it. If it were constant I could simply replace all the known elements and run a regression to find out its value. But now $\theta$ is inside an integral. Is there a way for me to estimate $\theta_t$ and use it to simulate the process using Monte Carlo simulation? I am using R to code it. Is there a package that I can use? Any code would be much appreciated.
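The exact SDE in the question is not reproduced above, so the sketch below assumes a mean-reverting form $dr_t = \theta_t(\mu_t - r_t)\,dt + \sigma_t\,dW_t$, with $\mu_t$ standing in for the Kalman-filtered level and $\theta_t$, $\sigma_t$ for calibrated (here, made-up placeholder) parameter paths. It is written in Python for illustration, although the question asks about R; the Monte Carlo step is a plain Euler–Maruyama discretisation.

```python
# Euler-Maruyama Monte Carlo simulation of an assumed mean-reverting short-rate model.
import numpy as np

rng = np.random.default_rng(0)

T, n_steps, n_paths = 1.0, 250, 10_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# Placeholder time-dependent parameters (assumptions; in practice mu_t would come
# from the Kalman filter and theta_t, sigma_t from a separate calibration step).
mu = 0.02 + 0.01 * t
theta = 1.5 + 0.5 * np.sin(2 * np.pi * t)
sigma = 0.01 * np.ones_like(t)

r = np.full(n_paths, 0.02)                 # initial short rate (assumed)
paths = np.empty((n_steps + 1, n_paths))
paths[0] = r
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    r = r + theta[i] * (mu[i] - r) * dt + sigma[i] * dW   # one Euler-Maruyama step
    paths[i + 1] = r

print(paths[-1].mean(), paths[-1].std())   # Monte Carlo summary at maturity T
```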
CommonCrawl
A tautology is a statement which is always true, independently of any relevant circumstances that could theoretically influence its truth value. if $p$ is true then $p$ is true. An example of a "relevant circumstance" here is the truth value of $p$. The archetypal tautology is symbolised by $\top$, and referred to as Top. Let $\mathcal L$ be a logical language. Let $\mathscr M$ be a formal semantics for $\mathcal L$. Tautologies are also referred to as logical truths. Definition:Top (Logic), a symbol often used to represent tautologies in logical languages.
CommonCrawl
Abstract: A class of solutions to a Darboux system in $\mathbb R^3$ is introduced that satisfy the factorization condition for an auxiliary second-order linear problem. It is shown that this reduction provides the (local) solvability of the Darboux system, and an explicit solution is given to this problem for two types of dependent variables. Explicit formulas for the Lamé coefficients and solutions to the associated linear problem are constructed. It is shown that the reduction, known in the literature, to a weakly nonlinear system is a particular case of the approach proposed.
CommonCrawl
This is more a question about Singular, since that is what Sage uses to compute the Gröbner basis. In Singular, you can use option(prot) and then, upon using the groebner function of your choice, you will see verbose output. From the Singular manual, we are told that when "s" is printed in the verbose output, a new element of the standard basis has been found. It is sometimes enough, and in particular very useful for my needs, to know that a certain polynomial is in the Gröbner basis. I am working with an overdetermined system of polynomials for which the full Gröbner basis is too difficult to compute, however it would be valuable for me to be able to print out the Gröbner basis elements as they are found so that I may know if a particular polynomial is in the Gröbner basis. Is there any way to do this? My question is above, but more details specifically related to my problem are below, which may be useful in providing an alternate answer. The problem is as follows: I have a system of (not necessarily homogeneous) multivariate polynomials $f_1(x_1,\dots,x_n)=0,\dots f_m(x_1,\dots,x_n)=0$. I would like to prove that a few of the $x_i$ must be equal to zero. I was able to uncomment lines 240, 241, and 242 in this toy implementation library of Faugere's f5 algorithm for Singular and remove the "lead" function around the to-be-printed output on line 241. After just a couple of seconds while running this command, $x_7^5$ was printed out as a member of the basis. This would seem to imply that $x_7$ must equal zero. The toy library is useful in this regard, since it was able to be modified to print out the members of the basis as they were discovered, however it is still a toy implementation and is not as efficient as slimgb, for instance. My question is, how can I get this same behavior from slimgb? After you reformulated your question, it seems to me that what you want to do is test "radical membership" of some of the x_i. That's to say: you want to know whether there is some integer n such that x_i^n belongs to the ideal. Actually, the Gröbner basis of an ideal is not enough to solve that radical membership! In Singular, radical membership can be tested as explained here. I hope this is enough to solve your problem. In particular, I believe that looking at some protocol output of slimgb would certainly not solve your problem: as demonstrated above, the Gröbner basis does, in general, not directly provide the radical membership information for variables. Also, as much as I know, there is no option to make slimgb or std or another Singular function print the polynomials being considered. Thanks, that is useful. Unfortunately the radicalMemberShip function uses RAM even faster than the groebner function does, but that's my problem to solve. Do you know the degree of the polynomial you are looking for? Then it might make sense to compute a truncated Gröbner basis out to that degree -- in particular if your polynomial system happens to be homogeneous. Even if the ideal is not homogeneous it might be useful to compute the truncated Gröbner basis in this case. Especially if you already suspect that your polynomial is in the ideal.
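One way to carry out the radical-membership test suggested in the answer, without relying on Singular's protocol output, is the Rabinowitsch trick: $f$ lies in the radical of $I = \langle f_1,\dots,f_m\rangle$ iff $1$ lies in the ideal $\langle f_1,\dots,f_m,\,1-t f\rangle$ in a ring with an extra variable $t$. Here is a sketch with SymPy; the polynomial system is a made-up toy example, not the poster's system.

```python
# Radical membership via the Rabinowitsch trick, using sympy's Groebner bases.
from sympy import symbols, groebner

x1, x2, x3, t = symbols('x1 x2 x3 t')

F = [x1**2 + x2*x3, x2**2 - x1*x3, x3**3]   # toy ideal generators (assumed example)
f = x3                                       # does some power of x3 lie in the ideal?

G = groebner(F + [1 - t*f], x1, x2, x3, t, order='lex')
in_radical = (list(G.exprs) == [1])          # reduced basis is {1} iff 1 is in the ideal
print("f lies in the radical of I:", in_radical)
```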
CommonCrawl
Why are temperatures generally hotter in the Middle East than in Europe? How come the average temperature in the Middle East (Israel, Saudi Arabia, Sudan or lower) is always so significantly higher than in Europe (say Germany, England etc.)? I know that the sun rays pass a greater distance to Europe than to the Middle East, but is that the only factor influencing it? And also, the distance isn't that much greater, so how come the sun is weakened so significantly in Europe in comparison to the Middle East? The distance of the Sun from Europe or the Middle East plays virtually no role. After all, many people on the Northern Hemisphere might be surprised that the Earth is closest to the Sun in January – it was on January 4th, 2014. It was 3 million miles or 3 percent closer than it is in July. Nevertheless, the winter is cold! Moreover, these 3 million miles are much greater than the 3 thousand miles between Europe and a place in the Middle East, but even 3 million miles are too small to really matter. The winter is cold and Europe is colder than the Middle East for the same reason: the sun rays are bombarding the Earth's surface from a more "horizontal" angle than in the summer or in the Middle East throughout much of the year. When the angle of the sun rays is $\alpha$ from the vertical direction, the actual energy and heat coming per unit area is $$ \cos\alpha\cdot P $$ where $P$ is the power you only get if the rays are bombarding the surface from a perpendicular, normal direction. If you substitute $\alpha\to 90^\circ$, the expression above goes to zero. The values of $\alpha$ are generally smaller in the Middle East than in Europe and $\cos\alpha$ is therefore greater because the Middle East is closer to the equator, it has a smaller "latitude", we say, and the equator is the place where the Sun often illuminates the Earth's surface from a perpendicular direction. On the contrary, the poles are cooler because the solar radiation only "touches" the surface while it moves almost horizontally. Consequently, $\cos\alpha$ is very small. Europe is somewhere between the Middle East and the North Pole, so its temperatures are somewhere in between, too. Image obtained using Climate Reanalyzer (http://cci-reanalyzer.org), Climate Change Institute, University of Maine, USA. I've tried to replicate their crazy color scheme. The point is, the gross temperature phenomenon on the Earth is dominated by the radiant solar flux. Regions near the poles are colder because the incoming light flux is spread over a larger area. Imagine trying to hold a surfboard perpendicular against the flow of a river: it's hard, but if you tip it so that it comes at an angle to the incoming water, it becomes easier.
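As a rough worked example of the $\cos\alpha$ factor (the latitudes are illustrative assumptions, and the Sun is taken to be overhead at the equator so that $\alpha$ at noon is simply the latitude): comparing, say, 30° N in the Middle East with 50° N in northern Europe gives $$\frac{\cos 30^\circ}{\cos 50^\circ} \approx \frac{0.866}{0.643} \approx 1.35,$$ i.e. roughly 35% more energy per unit area at noon, before seasonal tilt, day length and atmospheric effects are taken into account.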
CommonCrawl
21 Why, in terms of quantum groups, does the knot determinant appear as an evaluation of both the Jones and Alexander polynomials? 13 How unique are extensions of TQFTs to lower dimension? 11 Is there a reasonable definition of TQFTs for n-cobordisms with connected inputs/outputs? 10 When were bordered Heegaard Floer homology's DA bimodules invented? 10 Is there a version of Seiberg-Witten-Floer or Heegard-Floer homology for 3-manifolds with boundary? 9 Is there a definition of an $\infty$-groupoid in HoTT whose terms are $n$-manifolds and whose higher morphisms are diffeomorphisms/isotopies/etc? 9 How do sutured TQFT fit into the larger TQFT picture?
CommonCrawl
Consider a Directed Acyclic Graph in which every node has a value and a cost and edges do not have any weight. I need to find a path containing nodes such that the sum of the values of these nodes is maximized, but the sum of the costs over all these nodes must be at most C. I am thinking of some modification of Dijkstra's algorithm. Any ideas? Is there any standard algorithm for this? Consider a SUBSET SUM instance with weights $w_1,\ldots,w_n$ and target $C$. We create an instance of your problem, with the same value of $C$. There are $n$ nodes $v_1,\ldots,v_n$, where node $v_i$ has both cost and value equal to $w_i$. There is an edge from $v_i$ to $v_j$ iff $i < j$. Any directed path in this DAG corresponds to a subset of the weights $w_1,\ldots,w_n$. You are thus looking for a subset of the weights with maximal sum under the constraint that the sum is at most $C$. Thus the solution to your problem is $C$ iff the SUBSET SUM instance is a Yes instance.
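A small, concrete version of this reduction can be spelled out in code just to make the correspondence between directed paths and subsets visible; the weights and budget below are arbitrary illustrative values, and the brute force is only for this tiny example.

```python
# Build the reduction DAG for a SUBSET SUM instance and brute-force the best path.
from itertools import combinations

w = [3, 5, 7, 11]        # SUBSET SUM weights (assumed example)
C = 16                   # budget / target

# Nodes 0..n-1, node i has cost = value = w[i]; edge i -> j whenever i < j,
# so directed paths correspond exactly to subsets of the weights.
n = len(w)
edges = [(i, j) for i in range(n) for j in range(n) if i < j]

best = 0
for r in range(1, n + 1):
    for subset in combinations(range(n), r):     # every subset is a path i1 < i2 < ... < ir
        total = sum(w[i] for i in subset)
        if total <= C:
            best = max(best, total)

print("maximum value of a path with cost <= C:", best)
print("SUBSET SUM instance is a 'yes' instance:", best == C)
```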
CommonCrawl
Let $G$ be a reductive algebraic group over $\mathbb R$ and $K$ a maximal compact subgroup. Then we refer to the conjugacy class in $G$ of some $k \in K$ as an elliptic conjugacy class. Question: Can one characterize those conjugacy classes in $G$ which contain an elliptic conjugacy class in their closure? For $g \in G$, write $g = g_s g_u$ for its Jordan decomposition into semisimple and unipotent parts. I claim that the closure of the conjugacy class of $g$ contains an elliptic element if and only if $g_s$ is elliptic. Let us first suppose that $g_s$ is not elliptic. Choose an embedding of $G$ into $\mathrm{GL}_n(\mathbb C)$. Then by our assumption, $g_s$ has an eigenvalue of norm greater than one; let $\lambda$ be the absolute value of such an eigenvalue. Suppose for want of contradiction that the conjugacy class of $g_s$ contained an elliptic element $a$ in its closure. WLOG $a$ is in the special unitary group $\mathrm{SU}_n$. Let $h$ be in the conjugacy class of $g_s$. Then $h$ has an eigenvalue of absolute value $\lambda$. Letting $v$ be an eigenvector, we see that $|(h-a)v|$ is at least $(\lambda-1)|v|$, so $\|h-a\| \geq \lambda-1$, a contradiction. Now suppose that $g_s$ is elliptic. We may replace $G$ by the centraliser of $g_s$ in $G$, which is also reductive. So WLOG, $g_s$ is central in $G$. Now the Zariski closure of the group generated by $g_u$ is a one-dimensional unipotent subgroup of $G$. Let $E$ be a non-zero element in its Lie algebra. This is a nilpotent element. Then by the Jacobson–Morozov theorem, we can extend $E$ to an $\mathfrak{sl}_2$-triple $E, F, H$ in $\mathrm{Lie}(G)$. Now consider conjugation by elements of the form $\exp(tH)$ with $t$ real. This shows that $g_s$ is in the closure of the conjugacy class of $g$, and we're done.
CommonCrawl
Eve loves puzzles. She recently bought a new one that has proven to be quite difficult. The puzzle is made of a rectangular grid with $R$ rows and $C$ columns. Some cells may be marked with a dot, while the other cells are empty. Four types of pieces come with the puzzle, and there are $R \times C$ units of each type. 1. Type $1$ pieces can only be used on cells marked with a dot, while the other types of pieces can only be used on empty cells. 2. Given any pair of cells sharing an edge, the line drawings of the two pieces on them must match. 3. The line drawings of the pieces cannot touch the border of the grid. As Eve is having a hard time to solve the puzzle, she started thinking that it was sloppily built and perhaps no solution exists. Can you tell her whether the puzzle can be solved? The first line contains two integers $R$ and $C$ $(1 \leq R, C \leq 20)$, indicating respectively the number of rows and columns on the puzzle. The following R lines contain a string of C characters each, representing the puzzle's grid; in these strings, a lowercase letter "o" indicates a cell marked with a dot, while a "-" (hyphen) denotes an empty cell. There are at most $15$ cells marked with a dot. Output a single line with the uppercase letter "Y" if it's possible to solve the puzzle as described in the statement, and the uppercase letter "N" otherwise.
CommonCrawl
Some notes on Euler productsDec 21 2014We focus on a well-known convergence phenomenon, the fact that the $\zeta$ zeros are the universal singularities of certain Euler products. Piecewise constant local martingales with bounded numbers of jumpsDec 23 2016A piecewise constant local martingale $M$ with boundedly many jumps is a uniformly integrable martingale if and only if $M_\infty^-$ is integrable. Minimal number of points with bad reduction for elliptic curves over P^1Jul 25 2010Jul 22 2011In this work we use elementary methods to discuss the question of the minimal number of points with bad reduction over the projective line for elliptic curves E/k(T) which are non-constant resp. have non-constant j-invariant. On the cohomology of the holomorph of a finite cyclic groupMar 02 2003Feb 11 2004The mod 2 cohomology algebra of the holomorph of any finite cyclic group whose order is a power of 2 is determined. G-bundles on the absolute Fargues-Fontaine curveJun 03 2016We prove that the category of vector bundles on the absolute Fargues-Fontaine curve is canonically equivalent to the category of isocrystals. We deduce a similar result for G-bundles for some arbitrary reductive group G over a p-adic local field.
CommonCrawl
What does the broken vertical bar ¦ mean? If $\mathbf a\cdot \mathbf b = \mathbf a\cdot \mathbf c$ where $\mathbf a ¦ \mathbf 0 ¦ \mathbf b$, what conclusion(s) can be made? Any ideas what the symbol means?
CommonCrawl
Given an $n \times n$ grid with unit grid cells, and one point from the interior of each cell, what are the best possible lower and upper bounds for the lengths of minimum spanning trees? The lower-bound version of the problem seems to generate trees that are related to the Euclidean Steiner Tree Problem. If $n < 4$ the problem is easy; but for the $4 \times 4$ grid, we seem to get the familiar tree from the Steiner problem. Note that each of the end vertices represents several vertices at some small distance from each other. For a $5 \times 5$ grid the problem begins to become interesting. Here is one candidate for a minimum. My question is, has this problem been studied before?
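As a quick numerical companion to the question, one can compute the Euclidean minimum spanning tree for a random configuration and for the cell-centre configuration; this only probes typical values, not the extremal bounds being asked about, and the grid size and random seed below are arbitrary choices.

```python
# Euclidean MST length for one point per cell of an n x n grid.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
n = 5

# One point in the interior of each unit cell.
cells = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
points = cells + rng.uniform(0.05, 0.95, size=cells.shape)

D = squareform(pdist(points))                 # complete graph of pairwise distances
mst = minimum_spanning_tree(D)                # SciPy returns a sparse matrix
print("MST length for one random configuration:", mst.sum())

# Cell-centre configuration for comparison: the length is exactly n*n - 1 here.
D0 = squareform(pdist(cells + 0.5))
print("MST length for cell centres:", minimum_spanning_tree(D0).sum())
```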
CommonCrawl
In a ring, how do we prove that a * 0 = 0? In a ring, I was trying to prove that for all $a$, $a0 = 0$. But I found that this depended on a lemma, that is, for all $a$ and $b$, $a(-b) = -ab = (-a)b$. I am wondering how to prove these directly from the definition of a ring. $a0 = a(0+0)= a0 + a0$, property of $0$ and distributivity. Thus $a0+ (-a0) = (a0 + a0) +(-a0)$, using existence of additive inverse. Finally $0 = a0$ by associativity and properties of additive inverse. Just note that $ab +a(-b)= a(b + (-b))= a0= 0$. So, the first theorem is necessary to prove the second one, but not conversely.
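For what it's worth, the same two-step argument can be checked mechanically. Below is a sketch in Lean 4, assuming Mathlib is available; `Ring` is used for convenience, but only distributivity and additive cancellation are used, so the argument also goes through in weaker, non-unital settings.

```lean
import Mathlib

-- a * 0 = 0 in any ring, following the cancellation argument above.
example {R : Type} [Ring R] (a : R) : a * 0 = 0 := by
  have h : a * 0 + a * 0 = a * 0 + 0 := by
    rw [← mul_add, add_zero, add_zero]   -- a*0 + a*0 = a*(0+0) = a*0 = a*0 + 0
  exact add_left_cancel h                -- cancel a*0 on the left
```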
CommonCrawl
Abstract: The HERAPDF1.0 PDF set, which was an NLO QCD analysis based on H1 and ZEUS combined inclusive cross section data from HERA-I, has been updated to HERAPDF1.5 by including preliminary inclusive cross section data from HERA-II running. Studies have also been made by adding various other HERA data sets: combined charm data, combined low energy run data and H1 and ZEUS jet data. These data give information on the treatment of charm and the charm quark mass and on the value of $\alpha_s(M_Z)$. The PDF analysis has also been extended to NNLO. The PDFs give a good description of Tevatron and early LHC data.
CommonCrawl
Which functions of one variable are derivatives? This is motivated by this recent MO question. Is there a complete characterization of those functions $f:(a,b)\rightarrow\mathbb R$ that are pointwise derivatives of some everywhere differentiable function $g:(a,b)\rightarrow\mathbb R$? Of course, continuity is a sufficient condition. Integrability is not, because the integral defines an absolutely continuous function, which needs not be differentiable everywhere. A. Denjoy designed a procedure of reconstruction of $g$, where he used transfinite induction. But I don't know whether he assumed that $f$ is a derivative, or if he had the answer to the above question. I can't claim much knowledge here, but I am given to understand that the class of differentiable functions (or the class of functions which are derivatives of such) is really quite nasty and complicated. This paper by Kechris and Woodin indicates that there is some very serious descriptive set theory involved: that there is a hierarchy of levels of complication indexed by $\omega_1$ (i.e., the set of countable ordinals). This online article by Kechris and Louveau also looks relevant. D. Preiss and M. Tartaglia, On Characterizing Derivatives, Proceedings of the American Mathematical Society, Vol. 123, No. 8 (Aug. 1995), 2417–2420. Chris Freiling, On the problem of characterizing derivatives, Real Analysis Exchange 23 (1997/98), no. 2, 805–812. Take a look at this book by Andrew M. Bruckner: Differentiation of real functions. Chapter seven is about The problem of characterizing derivatives. There is a review by Daniel Waterman. You might also want to take a look at Homeomorphisms in Analysis by Goffman, Nishiura and Waterman. Every Henstock-Kurzweil integrable function on [a,b] is almost everywhere the derivative of a differentiable function, and inversely, any derivative is Henstock-Kurzweil integrable.
CommonCrawl
I'm not familiar with interval solutions for absolute value equations. How do I solve for this interval? The first thing to do is to notice that if $x$ is large and positive, then $|3-x|$ is the same as $x-3$, so the equation becomes $$ (x-3)+4x = 5(x+2) -13 \\ 5x-3 = 5x -3 $$ and so any time $x$ is that large, the equation is automatically true. How large is "that large"? Well, as long as $x\geq 3$ the essential property that $|3-x| = x-3$ holds. So the equation is automatically satisfied for $x\geq 3$, that is, on the interval $[3,+\infty)$. (By the way, the square brace on the left means $x$ is greater than or equal to $3$; the parenthesis on the right means $x < \infty$.) Do case work; as a lucky guess, start with $3-x\leq 0$ (that is, $x\geq 3$); then $|3-x|=x-3$ and $|2+x|=2+x$, so $$x-3+4x=10+5x-13\\ 5x-3=5x-3\\ 0=0.$$ Since $0=0$ is always true, we have that for each $x\geq 3$ you have a solution. If for example you've got $1=2$, which is always false, you wouldn't have any solutions. One way to solve a problem like this is by graphing both sides of the equation. Another way to solve these sorts of problems is by analyzing different cases: I see now that kingW3 has already provided an answer in this direction, so I'll curtail my response here.
CommonCrawl
Lemma 15.44.12. Let $A$ be a ring. Let $B$ be a filtered colimit of étale $A$-algebras. Let $\mathfrak p$ be a prime of $A$. If $B$ is Noetherian, then there are finitely many primes $\mathfrak q_1, \ldots , \mathfrak q_ r$ lying over $\mathfrak p$, we have $B \otimes _ A \kappa (\mathfrak p) = \prod \kappa (\mathfrak q_ i)$, and each of the field extensions $\kappa (\mathfrak p) \subset \kappa (\mathfrak q_ i)$ is separable algebraic.
CommonCrawl
Imagine you have a plane, flat surface with a square grid drawn on it. You have a standard cubic die which is placed flat on the surface. Its side length is the same as the side length of each grid square. The only way to move the die to an adjacent square is by tipping the die into the square. So obviously the die cannot move diagonally. Since this grid is infinitely large, there are infinitely many round trips that can be made with the die. Some of these round-trips can cause a rotation of the die. The set of all round trips can be categorized into equivalence classes, where each class represents a particular rotation following a round-trip. These rotations naturally constitute a group. Can anyone list out all possible rotations in this configuration? Also, when moving the die around, I noticed that it was impossible for a round-trip to produce a 90 degree rotation about the axis perpendicular to the surface. Can anyone give me a simple proof of this? It seems to be an easy proof, but I am unable to show it. Also, if we generalize this to an n-sided die, what are the impossible rotations? If anyone could provide a related paper or reference as well, I would really appreciate it. Rolling around a square produces a $\frac 13$ rotation about a body diagonal. This generates a subgroup of size three. Rolling around a different square will fix a different diagonal, generating another order three subgroup. Both of these are subgroups of the even permutations. I believe they generate the whole even subgroup. Rolling around a $2 \times 1$ rectangle produces a $\frac 12$ rotation around an axis through opposite face centers. This generates a subgroup of size six. Both of these correspond to even permutations, while a single quarter turn is an odd permutation. By checkerboard coloring the grid, we know that to return to the starting square requires an even number of quarter turns, so a single quarter turn is not possible. Added: It is actually much easier for dice with other numbers of sides. For all of the other Platonic solids, you have specified the orientation once you specify the bottom face. For $20$-sided dice, if you roll six times around one vertex, you come back to start having moved one face, so you can get any face on the bottom at the starting point. For $4$-sided dice, when you come back to start you have the same face down as you started with. You can cover the plane with a triangular grid in the standard way and each triangle has a face that will always be down when the tetrahedron is on that cell. For $12$-sided dice I don't think you can get back to the starting location except by retracing your steps from somewhere because the pentagons don't overlay. For $8$-sided dice you can color alternate faces black and white on the octahedron and only the faces that match the original one in color can wind up down at the starting point; color the triangular grid on the plane to match. As rolling six times around the same point gets you back to start with a different face down, you can get four faces down at start. Here's a slick method: label the die with the usual numbers of pips (so 1 is opposite 6, 2 is opposite 5, and 3 is opposite 4) and label the grid like a chessboard. Then place the die on a black square with the die angled in an isometric view (think Q*bert) with the numbers 1,4,5 visible (for instance). Now, what happens if you make a $90^\circ$ rotation around an edge? Two of the faces remain visible, but the face which is hidden is replaced with its opposite face. The important point to note: opposite faces have opposite parity (odd/even), so the resulting sum of visible faces now also has opposite parity! Thus, no matter how you manipulate the die, it will always have an even sum of visible faces when positioned on the black squares, and it will always have an odd sum of visible faces when positioned on the white squares. Of course, if the die began instead with an odd sum of visible faces on a black square or an even sum of visible faces on a white square, then it will always have an odd sum of visible faces when positioned on the black squares and an even sum of visible faces when positioned on the white squares. That is sufficient to answer your question. It's not hard to show further that there exist round trips to rotate a die into any configuration of the same parity.
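Both claims are easy to check by brute force. The sketch below rolls a die over a bounded patch of the grid (the patch size is an arbitrary cut-off) and collects every orientation it can have back at the starting square; the orientation encoding and the starting orientation (1 up, 2 north, 3 east) are assumptions of the sketch.

```python
# Enumerate die orientations reachable at the starting square by round trips.
from collections import deque

# Orientation = (top, north, east); opposite faces sum to 7, so the rest are implied.
START = (1, 2, 3)

def roll(state, d):
    t, n, e = state
    if d == 'E':   return (7 - e, n, t)        # tip over the east edge
    if d == 'W':   return (e, n, 7 - t)
    if d == 'N':   return (7 - n, t, e)        # tip over the north edge
    if d == 'S':   return (n, 7 - t, e)

MOVES = {'E': (1, 0), 'W': (-1, 0), 'N': (0, 1), 'S': (0, -1)}
LIMIT = 4                                      # explore positions with |x|, |y| <= LIMIT

seen = {(0, 0, START)}
queue = deque([(0, 0, START)])
while queue:
    x, y, s = queue.popleft()
    for d, (dx, dy) in MOVES.items():
        nx, ny = x + dx, y + dy
        if abs(nx) <= LIMIT and abs(ny) <= LIMIT:
            ns = (nx, ny, roll(s, d))
            if ns not in seen:
                seen.add(ns)
                queue.append(ns)

at_origin = {s for (x, y, s) in seen if x == y == 0}
print("orientations reachable at the start square:", len(at_origin))   # expect 12
print("quarter turn about the vertical axis possible:",
      (1, 3, 7 - 2) in at_origin or (1, 7 - 3, 2) in at_origin)        # expect False
```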
CommonCrawl
The interaction of turbulent wakes with one another and with the adjacent fluid directly impacts the generation of electricity in wind turbine arrays. Computational modeling is well suited to the repeated iterations of data generation that may be required to inform understanding of the function of wind farms as well as to develop control schemes for plant function. In order to perform such computational studies, a simplified model of the turbine must be implemented. One of the most computationally efficient parametrizations of the blade utilizes a stationary disk which has a prescribed drag and produces a wake. However, since accurate estimates of wake properties and the interaction with the surrounding fluid are critical to the function of wind farms, the wakes emitted from a stationary disk model should be compared to those of a model with a rotating blade. Toward this end, an array of model rotating wind turbines is compared experimentally to an array of static porous disks. Stereo particle image velocimetry measurements are done in a wind tunnel bracketing the center turbine in the fourth row of a 4$\times$3 array of model turbines. Equivalent sets of rotors and porous disks are created by matching their respective induction factors. The similarities and differences in the wakes between these two cases are explored using time-averaged statistics. The primary difference in the mean velocity components was found in the spanwise mean velocity component, which is as much as 190\% different between the rotor and disk cases. Conditional averaging of mean kinetic energy transport in the wake from these two models reveals that a differing mechanism is responsible for the entrainment of mean kinetic energy in the near wake. In contrast, results imply that the stationary porous disk adequately represents the mean kinetic energy transport of a rotor in the far wake where rotation is less important. Proper orthogonal decomposition and analysis of the invariants of the Reynolds stress anisotropy tensor is done in order to examine large scale structure of the flow and characterize the turbulent wake produced by the porous disks and rotors. The spatial coherence uncovered via the proper orthogonal decomposition in the rotor case and its absence in the disk case suggests caution should be employed when applying stationary disk parametrization to research questions that are heavily dependent on flow structure. Motivated by questions on the impact of freestream turbulence on wakes in wind energy, a study of pairs of cylinders subject to varying levels of inflow turbulence is undertaken. Time-averaged statistics show a modification of the symmetry and development of the wakes originating from the pairs of cylinders in response to freestream turbulence. Recurrence-based phase averaging allows examination of the many configurations of the wake and the modification of these topologies due to varying inflow turbulence. Results show the changes in vortex shedding synchronization as well as large scale cross stream advection in response to elevated levels of incoming turbulence. Camp, Elizabeth H., "Wind Energy and Wind-Energy-Inspired Turbulent Wakes: Modulation of Structures, Mechanisms and Flow Regimes" (2018). Dissertations and Theses. Paper 4391.
CommonCrawl
In this, the third of these articles on Whole Number Dynamics , we shall complete the solution to the problem started in the first article . The later articles in this series will deal with other problems with the same theme. Let us remind ourselves of the problem. Starting with any whole number between 1 and 999 inclusive, we add the squares of the digits; for example starting with 537 we get the number $83 (= 5^2 + 3^2 + 7^2)$. The problem was to understand what happens if we repeat the process indefinitely, and you should have seen that whatever number you started with, the list of numbers you get either ends up with the consecutive numbers $$1, 1, 1, 1, \dots \qquad \qquad (1)$$ or it ends up with the cycle $$145, 42, 20, 4, 16, 37, 58, 89, \dots \qquad (2)$$ repeated again and again.The number $n$ is happy if, starting with $n$, we end with 1,1,1,..., and sad if it ends up with 145,42,20, ... and so on. In the first article we showed that if we started with any whole number from 1 to 999, then we always ended up with another number between 1 and 999. We then showed that because of this, if we repeat this process indefinitely we will always end up with some set of numbers being repeated over and over again. Of course, at this stage we didn't know that these sets had to be (1) or (2); in fact, we didn't even know how many sets there might be. In the second article, we showed that if we reach the situation where a single number was repeated over and over again, then that number has to be 1; it is not possible, for example, to start with some whole number and end with the number 57 repeated over and over again. Just to make sure you really do understand what is happening here, I suggest that you (and your friends) see how many numbers between, say 20 and 30, end up at the number 1. (B) explain why (1) and (2) are the only two possibilities. Suppose that we start with a very large number, say 9,876,543,210. We say that this is a ten digit number because it has ten places `filled in'; similarly 123,456,789 is a nine digit number. Now apply the rule `sums of squares' to both of these; what do you notice? How many digits does the answer have? Try three more nine digit numbers for yourself; how many digits do these answers have? Now try to find a nine digit number whose answer is 712 or larger. How many nine digit numbers are there whose answer is 712 or larger? Can you find a nine digit number whose answer is 730? If you can, then find out how many there are; if you can't, then explain why not. How many nine digit numbers are there whose answers lie between 713 and 728 inclusive? It is now easy to see what is happening in general. If, for example, we have 7 in the thousands digit, this contributes 7,000 to the number but only 49 to the answer. Similarly, if we have the number 53,426 the digits 5 and 3 contribute 53,000 to the number and only $5^2$ and $3^2$ to the answer. It should now be clear that starting with a large number $N$ that is at least 1,000 the answer is LESS than the number $N$. This means that if we start with a large number, and apply the rule repeatedly, eventually we will get an answer that is less than 1,000. At this point, we know from the first of these articles that whatever whole number we start with we will eventually end up with a some set of numbers being repeated over and over again, so we see that (A) is true. Let us now consider (B). We know now that wherever we start we will eventually end up with a number with at most three digits. 
At this point, if we apply the rule again, we end up with a number which is at most $3 \times 9^2 = 243$. If we apply the rule again, the largest answer we can get is the largest of the answers we get from the three numbers 199, 239 and 243; can you explain to a friend or your teacher why this is so? As these three answers are $$1^2 + 9^2 + 9^2 = 163,$$ $$2^2 + 3^2 + 9^2 = 94,$$ $$2^2 + 4^2 + 3^2 = 29,$$ we now know that wherever we start, we will always eventually reach a number that is at most 163. If we now check all starting points from 1 to 163 (admittedly this is a lot of work, and you may like to share the task out among your friends, but I know of no other way than this) we see that the only possibilities are (1) and (2). This completes the discussion of the Happy Numbers Problem and gives the complete solution of it.
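If you would rather let a machine do the checking of all starting points from 1 to 163 (and a few much larger ones), here is a small sketch, not part of the original article; the function names are my own.

```python
def digit_square_sum(n):
    """Apply the 'sum of squares of the digits' rule once."""
    return sum(int(d) ** 2 for d in str(n))

def eventual_cycle(n):
    """Iterate the rule until a value repeats; return the set of values in the cycle."""
    seen = []
    while n not in seen:
        seen.append(n)
        n = digit_square_sum(n)
    return set(seen[seen.index(n):])

happy_cycle = {1}
sad_cycle = {145, 42, 20, 4, 16, 37, 58, 89}
for start in list(range(1, 164)) + [9876543210, 123456789]:
    assert eventual_cycle(start) in (happy_cycle, sad_cycle)
print("every tested orbit ends in 1,1,1,... or in the 8-number cycle")
```

Running it confirms the claim in the article: every orbit tested falls into either the fixed point 1 or the cycle beginning 145, 42, 20, ...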
Is the wave-particle duality a real duality? How do I construct the $SU(2)$ representation of the Lorentz Group using $SU(2)\times SU(2)\sim SO(3,1)$ ? Do virtual particles actually physically exist? What is more fundamental, fields or particles? What is a complete book for introductory quantum field theory? Why path integral approach may suffer from operator ordering problem? Why treat complex scalar field and its complex conjugate as two different fields? Why is the anticommutator actually needed in the canonical quantization of free Dirac field? Why not use the Lagrangian, instead of the Hamiltonian, in nonrelativistic QM? Getting particles from fields: normalization issue or localization issue? How do instantons cause vacuum decay? And in what sense are they 'non-local'? Does magnetic monopole violate $U(1)$ gauge symmetry? The definitions between on- and off-shell are given in Wikipedia. Why is it so important in QFT to distinguish these two notions ?
We prove a result about the existence of certain `sums-of-squares' formulas over a field $F$. A classical theorem uses topological $K$-theory to show that if such a formula exists over $\mathbb R$, then certain powers of $2$ must divide certain binomial coefficients. In this paper we use algebraic $K$-theory to extend the result to all fields not of characteristic $2$.
A field line is a locus that is defined by a vector field and a starting location within the field. Field lines are useful for visualizing vector fields, which are otherwise hard to depict. The answer depends on whether you want to A) draw magnetic field lines conceptually or B) draw the actual magnetic field originating from a real magnet. In diagram 5 I have put in a magnetic field line which would actually be a circle if I could draw the diagram in three dimensions. An important thing to note is that the E-field and the B-field are at right angles to one another and that $\vec E \times \vec B$ gives the direction in which the electromagnetic wave is travelling. Magnets have two poles; the field lines spread out from the north pole and circle back around to the south pole. In this activity, you'll watch field lines materialize before your very eyes. Drawing magnetic field diagrams. It would be difficult to draw the results from the sort of experiment seen in the photograph, so we draw simple magnetic field lines instead.
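Since a field line is, by definition, a curve that follows the local field direction from a starting point, it can be traced numerically by marching along the normalized field. The sketch below does this for an ideal magnetic dipole; the function names, the simple Euler stepping and the stopping radius are my own choices, not taken from the text above.

```python
import numpy as np

def dipole_field(r, m=np.array([0.0, 0.0, 1.0])):
    """Ideal dipole at the origin, constants dropped: B ~ (3 (m.rhat) rhat - m) / r^3."""
    d = np.linalg.norm(r)
    return (3.0 * np.dot(m, r) * r / d**2 - m) / d**3

def trace_field_line(start, step=0.01, max_steps=4000):
    """March along the local field direction in small steps from a starting point."""
    pts = [np.asarray(start, dtype=float)]
    for _ in range(max_steps):
        b = dipole_field(pts[-1])
        pts.append(pts[-1] + step * b / np.linalg.norm(b))
        if np.linalg.norm(pts[-1]) < 0.05:   # stop near the singular dipole position
            break
    return np.array(pts)

line = trace_field_line([1.0, 0.0, 0.0])
print(len(line), "points traced, ending near", line[-1])
```

Plotting the returned points (for several starting locations) reproduces the familiar picture of loops running from pole to pole.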
The Price Rate of Change (ROC) is a momentum-based technical indicator that measures the percentage change in price between the current price and the price a certain number of periods ago. The ROC indicator is plotted against zero, with the indicator moving upwards into positive territory if price changes are to the upside, and moving into negative territory if price changes are to the downside. The indicator can be used to spot divergences, overbought and oversold conditions, and centerline crossovers. The Price Rate of Change (ROC) oscillator is unbounded above zero. This is because its value is based on price changes, which can indefinitely expand over time. A rising ROC typically confirms an uptrend. But this can be misleading, as the indicator is only comparing the current price to the price n days ago. A falling ROC indicates the current price is below the price n days ago. This usually helps confirm a downtrend, but isn't always accurate. An ROC reading above zero is typically associated with a bullish bias. An ROC reading below zero is typically associated with a bearish bias. When the price is consolidating, the ROC will hover near zero. In this case, it is important that traders watch the overall price trend since the ROC will provide little insight except for confirming the consolidation. Overbought and oversold levels are not fixed on the ROC; rather, each asset will generate its own extreme levels. Traders can see what these levels are by looking at past readings and noting the extreme levels the ROC had reached before the price reversed. The main step in calculating the ROC is picking the "n" value. Short-term traders may choose a small n value, such as nine. Longer-term investors may choose a value such as 200. The n value is how many periods ago the current price is being compared to. Smaller values will see the ROC react more quickly to price changes, but that can also mean more false signals. A larger value means the ROC will react slower, but the signals could be more meaningful when they occur. Select an n value. It can be anything such as 12, 25, or 200. Short-term traders typically use a smaller number while longer-term investors use a larger number. Find the most recent period's closing price. Find the period's close price from n periods ago. Plug the prices from steps two and three into the ROC formula: ROC = ((most recent close − close n periods ago) ÷ close n periods ago) × 100. As each period ends, calculate the new ROC value. What Does the Price Rate of Change (ROC) Indicator Tell You? The Price Rate of Change (ROC) is classed as a momentum or velocity indicator because it measures the strength of price momentum by the rate of change. For example, if a stock's price at the close of trading today is $10, and the closing price five trading days prior was $7, then the five-day ROC is 42.85, calculated as $((10 - 7) \div 7) \times 100 = 42.85$. Like most momentum oscillators, the ROC appears on a chart in a separate window below the price chart. The ROC is plotted against a zero line that differentiates positive and negative values. Positive values indicate upward buying pressure or momentum, while negative values below zero indicate selling pressure or downward momentum. Increasing values in either direction, positive or negative, indicate increasing momentum, and moves back toward zero indicate waning momentum. Zero-line crossovers can be used to signal trend changes.
Depending on the n value used, these signals may come early in a trend change (small n value) or very late in a trend change (larger n value). The ROC is prone to whipsaws, especially around the zero line. Therefore, this signal is generally not used for trading purposes, but rather to simply alert traders that a trend change may be underway. Overbought and oversold levels are also used. These levels are not fixed, but will vary by the asset being traded. Traders look to see what ROC values resulted in price reversals in the past. Often traders will find both positive and negative values where the price reversed with some regularity. When the ROC reaches these extreme readings again, traders will be on high alert and watch for the price to start reversing to confirm the ROC signal. With the ROC signal in place, and the price reversing to confirm the ROC signal, a trade may be considered. ROC is also commonly used as a divergence indicator that signals a possible upcoming trend change. Divergence occurs when the price of a stock or another asset moves in one direction while its ROC moves in the opposite direction. For example, if a stock's price is rising over a period of time while the ROC is progressively moving lower, then the ROC is indicating bearish divergence from price, which signals a possible trend change to the downside. The same concept applies if the price is moving down and ROC is moving higher. This could signal a price move to the upside. Divergence is a notoriously poor timing signal since a divergence can last a long time and won't always result in a price reversal. The ROC is closely related to the Momentum indicator: the two indicators are very similar and will yield similar results if using the same n value in each indicator. The primary difference is that the ROC divides the difference between the current price and price n periods ago by the price n periods ago. This makes it a percentage. Most calculations for the momentum indicator don't do this. Instead, the difference in price is simply multiplied by 100, or the current price is divided by the price n periods ago and then multiplied by 100. Both these indicators end up telling similar stories, although some traders may marginally prefer one over the other as they can provide slightly different readings. One potential problem with using the ROC indicator is that its calculation gives equal weight to the most recent price and the price from n periods ago, despite the fact that some technical analysts consider more recent price action to be of more importance in determining likely future price movement. The indicator is also prone to whipsaws, especially around the zero line. This is because when the price consolidates the price changes shrink, moving the indicator toward zero. Such times can result in multiple false signals for trend trades, but they do help confirm the price consolidation. While the indicator can be used for divergence signals, the signals often occur far too early. When the ROC starts to diverge, the price can still run in the trending direction for some time. Therefore, divergence should not be acted on as a trade signal, but could be used to help confirm a trade if other reversal signals are present from other indicators and analysis methods.
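As a concrete rendering of the calculation steps described above, here is a small sketch; the function name and the use of plain Python lists are my own choices, and missing values are returned as None until n periods of history exist.

```python
def rate_of_change(closes, n):
    """ROC_t = (close_t - close_{t-n}) / close_{t-n} * 100."""
    out = []
    for i, close in enumerate(closes):
        if i < n:
            out.append(None)                       # not enough history yet
        else:
            prior = closes[i - n]
            out.append((close - prior) / prior * 100.0)
    return out

# The example from the text: today's close is 10 and the close five periods ago was 7.
closes = [7, 9, 8, 8.5, 9.5, 10]
print(rate_of_change(closes, 5))   # last value is (10 - 7) / 7 * 100, about 42.86
```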
A message may be accompanied with a digital signature, a MAC or a message hash, as a proof of some kind. Which assurances does each primitive provide to the recipient? What kind of keys are needed? Is there an example of two known strings which have the same MD5 hash value (representing a so-called "MD5 collision")? Hashing or encrypting twice to increase security? "SHA-256" vs "any 256 bits of SHA-512", which is more secure? How do hashes really ensure uniqueness? Why is $H(k\mathbin\Vert x)$ not a secure MAC construction? Why is SHA-1 considered broken? Is there a known pair of distinct bit strings (A,B) such that SHA-1(A) == SHA-1(B)? If the answer is no, then how can SHA-1 be considered broken? What do the magic numbers 0x5c and 0x36 in the opad/ipad calc in HMAC do? How is the MD2 hash function S-table constructed from Pi? Are there any known collisions for the SHA (1 & 2) family of hash functions? Are there any known collisions for the hash functions SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512? By that, I mean are there known values of $a$ and $b$ where $F(a) = F(b)$ and $a ≠ b$? How is SHA1 different from MD5? Why can't I reverse a hash to a possible input? Is every output of a hash function possible? Is it possible to actually verify a "sponge function" security claim? What is the difference between a digest and a hash function? I was wondering about the difference between these two terms... What is the difference between a digest and a hash function? What is the difference between a HMAC and a hash of data? What is the recommended replacement for MD5? Since MD5 is broken for purposes of security, what hash should I be using now for secure applications? Are common cryptographic hashes bijective when hashing a single block of the same size as the output? Should I use the first or last bits from a SHA-256 hash?
Now a colleger, wanted to be a physicist, but now also a mathematician. Hee hee. Meaning and usage of complex number functor and monad? Why is the Major-Minor Scale unused? Is mfix for Maybe impossible to be nontrivially total? Is there a name for, or notable structure that uses, weird "distributive laws" such as $a\times(b+c)=b\times a+c\times a$?
Ramesh was born on 28 February of a year. If in that year Republic Day fell on Sunday, then on which day was Ramesh born ? A person puts some money on a bet and loses. He again puts double the amount on a bet and loses again. Again he bets double the amount of the previous bet and wins double the amount from this bet. What percentage of the total amount put on bets does he recover ? If a clock rings once at 1 O' clock, twice at 2 O' clock, thrice at 3 O' clock and so on, i.e., it rings as many times as its time, then how many times does it ring in 24 hours ? In 12 hours the clock rings $1 + 2 + \cdots + 12 = 78$ times, so in 24 hours it rings $78 \times 2 = 156$ times. In the following questions the sign 'X' is assigned to the open state of a die. Find the figure from the given options which can be obtained after closing the figure (X). 5. In the question, find the odd word/letters/number from the given alternatives. 6. In the question, find the odd word/letters/number from the given alternatives. 7. In the question, find the odd word/letters/number from the given alternatives. 8. In the question, find the odd word/letters/number from the given alternatives. 9. In the question, find the odd word/letters/number from the given alternatives. 10. In each of the questions given below, one or two statements are followed by two conclusions I and II. You have to assume the given statements to be true even if they seem to differ from generally known facts. Study all the conclusions and then decide which of the conclusions logically follows the given statements, whatever the generally known facts may be. The Prime Minister has emphasized that his government will make full efforts for the elevation of farmers and the rural poor. II. This government will not make full efforts for the elevation of the urban poor.
After the seminar there will be a discussion with Professor Shore about his work as one of the leading editors of the journal Astronomy and Astrophysics. A cosmic string modeled by an abelian Higgs vortex is studied in the rotating black hole background of the Kerr geometry. It is shown that such a system displays much richer phenomenology than its static Schwarzschild/Reissner-Nordstrom cousins. In particular it is shown that the rotation generates a small electric flux near the horizon. For an extremal rotating black hole two phases of the Higgs hair are possible: i) small black holes expel the Higgs field and (similar to Wald's solution) there is no flux through the horizon; ii) large black holes are pierced by the Higgs hair. Backreaction of the Higgs vortex on the Kerr geometry will also be briefly studied, and it will be shown that it cannot be described as a mere conical deficit as might be expected. In this talk we discuss how polarization of photons affects their motion in a gravitational field created by a rotating massive compact object. We briefly discuss the gravito-electromagnetism analogy and demonstrate that spinoptical effects are in many aspects similar to the Stern-Gerlach effect. We use the (3+1)-form of the Maxwell equations to derive a master equation for the propagation of monochromatic electromagnetic waves with a given helicity. We first analyze its solutions in the high frequency approximation using the 'standard' geometrical optics approach. After that we demonstrate how this 'standard' approach can be modified in order to include the effect of the helicity of photons on their motion. Such an improved approach reproduces the standard results of the geometrical optics at short distances. However, it modifies the asymptotic behavior of the circularly polarized beams in the late-time regime. We demonstrate that the corresponding equations for the circularly polarized beam can be effectively obtained by modification of the background geometry by including a small frequency dependent factor. We discuss motion of circularly polarized rays in the Kerr geometry. Applications of this formalism to the propagation of circularly polarized photons in the Kerr spacetime are discussed. I will discuss the possibility of decoupling gravitational perturbations a la Teukolsky on higher dimensional black hole backgrounds. After briefly discussing the non-impulsive (smooth) case, we will show that every impulsive wave-type spacetime of the form $M=N\times R^2_1$, with line element $ds^2 = dh^2 + 2 du dv + f(x)\delta(u) du^2$ is geodesically complete. Here $(N,h)$ is an arbitrary connected, complete Riemannian manifold, $f$ is a smooth function and $\delta$ denotes the Dirac distribution on the hypersurface $u=0$. Moreover the geodesics behave as is physically expected. ATTENTION! Moved to Wednesday (starting at 13:10)!!! In this talk I will present the idea of a geometrical interpretation of dark phenomena on cosmological and astrophysical scales. In particular, I will review the status of our knowledge on the physics of extensions of General Relativity in relation with dark matter and dark energy and I will delineate the new challenges in this research field for the next years.
This is actually an exercise in Landau-Lifshitz's book. Their solution goes as follows. After we have found a frame of reference where $\mathbf E$ and $\mathbf B$ are parallel (let's call the common direction $\mathbf n$), every other frame of reference obtained by boosting along $\mathbf n$ is such a frame of reference. So they proceed to find the frame of reference where the electric and magnetic field are parallel, searching among those obtained by boosting along $\mathbf E\wedge \mathbf B$. In order to do so, they impose that the cross product between the new electric and magnetic fields is zero, and find a condition for the relative speed between the given frame of reference and the new one. My problem is, are they sure a priori to find a solution where the boost is perpendicular to both $\mathbf E$ and $\mathbf B$? Also, if the original $\mathbf E$ and $\mathbf B$ are not parallel, is there any other frame of reference where they are parallel, other than those that they found (i.e., the one boosted perpendicularly, and all the ones boosted from the latter along the common direction)? I hope I was clear enough in explaining the problem. If anything needs to be explained better, please comment, I will edit the question. Thanks in advance!
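For reference, and quoted from memory rather than from the question above (so treat it as an assumption on my part): imposing $\mathbf E'\times\mathbf B'=0$ for a boost perpendicular to both fields leads, in Gaussian units, to the condition
$$\frac{\mathbf v/c}{1+v^{2}/c^{2}}=\frac{\mathbf E\times\mathbf B}{E^{2}+B^{2}},$$
and the magnitude of the right-hand side is strictly less than $1/2$ unless $\mathbf E\cdot\mathbf B=0$ and $E^{2}=B^{2}$ hold simultaneously, so a boost with $v<c$ exists precisely when the field is not a null field. This is only a sketch of the standard textbook result; it does not by itself settle the uniqueness question asked above.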
What are the sub $C^*$-algebras of $C(X,M_n)$? Let $X$ be a locally compact Hausdorff topological space, denote by $M_n$ the $C^*$-algebra of complex $n\times n$ matrices, by $C_0(X,M_n)$ the $C^*$-algebra of continuous functions on $X$ with values in $M_n$ vanishing at infinity, and by $C_b(X,M_n)$ the $C^*$-algebra of bounded continuous functions on $X$ with values in $M_n$. Does there exist a description of all sub $C^*$-algebras of $C_0(X,M_n)$ or $C_b(X,M_n)$? I am most interested in the (easier?) case where $X$ is compact and sub $C^*$-algebras of $C(X,M_n)$ containing the unit of $C(X,M_n)$. For the case $n=1$, i.e., for $C_0(X)$, you can find the answer here: What is the commutative analogue of a C*-subalgebra?. Every $C^*$-subalgebra $A$ of $C_0(X,M_n)$ has irreps of dimension $\leq n.$ (Just because every irrep of $A$ can be continued to an irrep of $C_0(X,M_n)$). Such $C^*$-algebras are called $n$-subhomogeneous. Vasilʹev, N. B. "$C^∗$-algebras with finite-dimensional irreducible representations." Uspehi Mat. Nauk 21 1966 no. 1 (127), 135–154. Tomiyama, Jun; Takesaki, Masamichi Applications of fibre bundles to the certain class of $C^∗$-algebras. Tôhoku Math. J. (2) 13 1961 498–522.
Since the code was written during work hours, it was obviously a huge waste of company resources. To prevent similar occurrences in the future, we must minimize the waste of worked hours. And since it is common knowledge that a shorter program is faster to write, we must golf this code to be as short as possible! A single non-negative integer. You must not handle faulty input. Your program must produce output identical to that of the script above. You should output one word per line, and the number of words should be consistent with the original script. It is permissible to include non-newline whitespace characters at the end of each line (but not at the beginning) since they are invisible. One additional newline character is permissible at the very end of the output. Assumes input in cell A1, and that Wordwrap formatting is turned on for the cell. Use Alt+Enter to add line feeds within the string and note the whitespace. Only handles input up to 3570 due to the limit of the REPT function (Good luck getting a cell to be that tall, though). Each of these can be expressed with 9 characters, so a string is made of 54 characters (9 * 6), then repeated as large as Excel will allow. Then it takes the left 9 * (number of input) characters as the output. Linefeed for the "but and no" one is placed after the blank so that the Yeah for #6, #12, (etc) is formatted to the left rather than the right, and so that there is no blank linefeed added every 6th line for that item. We use a recursive function which goes from $n$ to $1$ rather than from $0$ to $n-1$: if $n\equiv1\pmod 3$, output "Yeah"; if $n\equiv1\pmod 2$, output "But"; if $n\equiv2\pmod 3$, output "No". This allows us to store the simpler case $n\equiv0\pmod 3$ as the first entry of our lookup array, where we can define $s$: a variable holding either "But\n" or an empty string. The two other entries are defined as "Yeah\n" + s and s + "No\n" respectively. Note: By iterating from $n-1$ to $0$, we could define $s$ in the first entry just as well, but that would cost two extra parentheses. `But\n` // set s to "But" s + `No\n` // 3rd entry: s followed by "No" Thanks to @JoKing for -11 bytes (reducing the amount of labels used from 8 to 7), and -24 more bytes (changing the general flow of the program and reducing the amount of labels used from 7 to 5 in the process). Whitespace is definitely not the right language for this challenge. In Whitespace both loops and if-statements are made with labels and jumps to labels, and since these aren't if-elseif-else cases but multiple separate if-cases, I will have to slightly modify the checks to skip over some prints (thanks @JoKing). In general, it loops from the input down to 0, pushing a newline and the word reversed (so in the order "\noN", "\ntuB", "\nhaeY" instead of "Yeah\n", "But\n", "No\n"). And after the input has looped down to 0 and all the characters are on the stack, it will print those characters in reverse (so the correct output order). More in depth however: Although we need to print words in the range (input, 0], it will loop in the range [input, 0) instead. Because of this, we can use the check if(i%3 == 2) for "\noN" (or actually, if(i%3 != 2) skip the pushing of "\noN"), and we use the check if(i%2 == 1) for "\ntuB" (or actually, if(i%2 == 0) skip the pushing of "\ntuB"). Only after these two checks we decrease the iteration i by 1.
And then do the check if(i%3 == 0) to push "\nhaeY", similar to the JS example code in the challenge description. Skipping with if-not checks, instead of going to a label and returning from the label with if-checks, saved 23 bytes. Also, in Whitespace character values are stored on the stack as their unicode values (i.e. 10 for new-lines, 65 for 'A', 97 for 'a', etc.). Since I already need to loop over the stack to print the characters, I am also able to use my Whitespace tip to lower the byte-count by adding a constant to the number values before printing them as characters. So "\noN" has the values -94 for the newline, 7 for the 'o', and -26 for the 'N', because adding the constant of 104 will correctly give our unicode values 10, 111, and 78 for these characters respectively. Port of Keeta's Excel answer. Saved 1 byte thanks to Kevin Cruijssen. Since initially submitting my answer, I've looked through some historic discussions here about what constitutes a suitable answer. Since it seems commonly accepted to provide just a method in Java (including return type and parameter declarations), here is a shorter, Groovy, method which has the method return value be the answer. Use of def means that the return type is inferred. Unlike the original answer below, which loops from 0 up to n-1, this one calls itself from n down to 1, but decrements the input for the rest of the line in the recursive call. Groovy scripts don't require certain common imports, so this can be a program printing the answer to Java's STDOUT without having to declare System.out. before print. It also provides some common utility methods, such as this toLong() which allows us to parse the input argument reasonably concisely. Essentially the Java 10 answer, but leveraging Groovy's shorter loop syntax and ability to evaluate truthy statements. Convert the input to unary. For each integer 0...n-1, generate three lines of text, one for each word, each with i 1s before it, except for No, which has two extra 1s so that we calculate (i+2)%3==0 which is equivalent to i%3==1. Remove pairs of 1s before Bs. Remove 1s in groups of three everywhere else. Delete all lines that still have a 1. -1 byte thanks to @OlivierGrégoire. Pretty simple, saved two bytes by using [1..n] instead of [0..n-1] and adjusted the remainders: The operator (?) takes four arguments, returning an empty list or the provided string as a singleton if the result is correct. (Note the space after But.) Takes input as a command-line argument. Try it online! and output the first a elements of it using cyclic indexing. As before, the final result is concatenated together and autoprinted. There's already a better C answer here but this one is recursive and it took me some time to get straight so I'm posting it. Very straight-forward answer, checking for a shorter, recursive method right now. This isn't exactly the best solution but it's my take on it. The hardest part was not to construct the list but to actually parse the decimal number. 2 bytes may be saved if the newline at the end is not required: c\ → d. -2 bytes changing from i=0 to p-1 to i=1 to p and adjusting modulos. Apart from that, pretty straight-forward. Edit: -4 bytes from Keeta! Thanks! so if they are equal then there is no remainder! First note that we have a period of $2\times3=6$ due to the modulo definition. So the resulting list of lines should be these values repeated (or truncated) to length n concatenated together. "'⁴\ÆẓNƇ» - compressed string "Yeah But No"
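For readers who do not have the challenge's original script in front of them: pieced together from the explanations in the answers above (so treat the exact conditions as my reconstruction, not as the challenge's own code), the expected output appears to be produced by the following reference sketch.

```python
import sys

def yeah_but_no(n):
    """One word per line, looping i from 0 to n-1 with the three checks described above."""
    lines = []
    for i in range(n):
        if i % 3 == 0:
            lines.append("Yeah")
        if i % 2 == 0:
            lines.append("But")
        if i % 3 == 1:
            lines.append("No")
    return "\n".join(lines)

if __name__ == "__main__":
    print(yeah_but_no(int(sys.argv[1])))
```

With this reading, the output has period 6, which matches the 9-characters-per-index, 54-characters-per-block packing used in the Excel answer.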
Format: Text — I edited [[stable (infinity,1)-category]] a bit: * rephrased the intro part, trying to make it more forcefully to the point (not claiming to have found the optimum, though) * added a dedicated section <a href="http://ncatlab.org/nlab/show/stable+(infinity%2C1)-category#the_homotopy_category_of_a_stable_category_triangulated_categories_7">The homotopy cat of a stable (oo,1)-cat: triangulated categories</a> to highlight the important statement here, which was previously a bit hidden in the main text. Format: MarkdownItex — I have added a remark to [[stable derivator]] about why the homotopy category of such acquires one triangulation and not the "negative" triangulation, a potentially confusing point (at least, to me). This remark would probably fit better at [[stable (∞,1)-category]], since the "negating one morphism" operation makes more sense there than in a derivator. But currently that page doesn't actually describe the construction of the triangulation, which is necessary for the remark to make sense, whereas the page [[stable derivator]] does. Format: MarkdownItex — I have added a pointer to [introductory notes by Yonatan Harpaz](http://ncatlab.org/nlab/show/stable+%28infinity%2C1%29-category#Harpaz2013) on stable $\infty$-categories. These are notes kindly prepared for our group seminar last week. Format: MarkdownItex — I have added some missing op's to the section [Stabilization and localization of presheaf $(\infty,1)$-categories](http://ncatlab.org/nlab/show/stable+%28infinity%2C1%29-category#StabGiraud), and updated the statement of Proposition 2 (it's even a sufficient condition for stable + presentable).
Abstract: We construct an approximate expression for the total cross section for the production of a heavy quark-antiquark pair in hadronic collisions at next-to-next-to-next-to-leading order (N$^3$LO) in $\alpha_s$. We use a technique which exploits the analyticity of the Mellin space cross section, and the information on its singularity structure coming from large N (soft gluon, Sudakov) and small N (high energy, BFKL) all order resummations, previously introduced and used in the case of Higgs production. We validate our method by comparing to available exact results up to NNLO. We find that N$^3$LO corrections increase the predicted top pair cross section at the LHC by about 4% over the NNLO.
"How do I show that a two-qubit state is an entangled state?" includes an answer which references the Peres–Horodecki criterion. This works for $2\times 2$ and $2\times3$ dimensional cases; however, in higher dimensions, it is "inconclusive." It is suggested to supplement with more advanced tests, such as those based on entanglement witness. How would this be done? Are there alternative ways to go about this? Determining whether a given state is entangled or not is NP hard. So if you include all possible types on entanglement, including mixed states and multipartite entanglement, there is never going to be an elegant solution. Techniques are therefore defined for specific cases, where the structure of the problem can be used to create an efficient solution. For example, if a state is bipartite and pure, you can simply take the reduced density matrix of one party and see if it is mixed. This could be done by computing the Von Neumann entropy to see if it is non-zero (this quantity provides a measure of entanglement in this case). This approach would work for any pure state of two particles, whatever their dimension. It can also be used to calculate entanglement for any bipartition. For example, if you had $n$ particles, you could take the first $m$ to be one party, and the remaining $n-m$ to be another, and use this technique to see if any entanglement exists between these groups. For other cases, the approach you take will depend on the kind of entanglement you are looking for. As suggested in your Wiki link, the way to detect an entangled state is to find a hyperplane that separates it from the convex set of separable states. This hyperplane represents what is called an entanglement witness. The PPT criterion that you mentioned is one such witness. Now to construct entanglement witnesses for higher dimensional systems is not easy, but it can be done algorithmically by solving a hierarchy semi-definite programs (SDP) . This hierarchy is complete, as every entangled state will eventually be detected. But it is computationally inefficient if the entangled state is very close to the convex set of separable states. It is infact known that detecting entanglement is NP-hard. Gharibian, Sevag. "Strong NP-hardness of the quantum separability problem." arXiv preprint arXiv:0810.4507 (2008). Not the answer you're looking for? Browse other questions tagged entanglement quantum-state qudit or ask your own question. How do I show that a two-qubit state is an entangled state? Are X-state separability and PPT- probabilities the same for the two-qubit, qubit-qutrit, two-qutrit, etc. states?
Concentration is a not so popular 2 player card game of both skill and luck. The standard Concentration game is played with one or two 52-card decks, however, for the sake of the problem, we will look at a variation of Concentration. A card is represented by a single integer. Two cards $i$, $j$ are considered "similar" if and only if $\lfloor i/2\rfloor =\lfloor j/2\rfloor $. A deck consisting of $2N$ cards is used for each game. More specifically, a deck of $2N$ cards contains exactly one copy of card $i$ for all $0\leq i <2N$. All cards are initially facing down on a table in random positions, i.e. neither players know what any cards are. Players take turns making moves. Player 0 goes first, then player 1 goes, then player 0 goes, and so on. During each turn, a player chooses two cards and reveals them. If the two cards are "similar", then they are removed from the table and the player gets to keep them, the player is then granted another turn; this can happen infinitely as long as the player always finds two "similar" cards. If the cards are different, the player's turn ends. When there are no more cards on the table, the player with more cards wins the game. Anthony and Matthew like to play this boring game and share an identical play style: whenever they are to choose a card to reveal, if they have knowledge of two "similar" cards, they will pick one of the two "similar" cards; otherwise they will pick a random unknown card to reveal. Anthony and Matthew are both extremely intelligent and have perfect memories, i.e. they remember every card that has been revealed. Before the game starts, both Anthony and Matthew make up their minds about in which order they will choose random cards to reveal, in case when they do not have knowledge of two "similar" cards. Each player's choices of revelation can be represented by a permutation of numbers $[0,\ldots , 2N-1]$. For example, let $\sigma _0$, a permutation of $[0,\ldots , 2N-1]$ be the "random" choices of Anthony. When Anthony is to choose an unknown card, he will choose the smallest $i$ such that $\sigma _0(i)$ is not revealed, and reveal $\sigma _0(i)$. Similarly, let $\sigma _1$ be the choices of Matthew. Having knowledge of $\sigma _0$ and $\sigma _1$, we should be able to perfectly determine the winner (and win lots of money by betting on that player), and it is your job to do exactly that! The first line of input contains one integer $1\leq N\leq 10^6$. The second line contains $2N$ integers, with the $i$-th integer being $\sigma _0(i)$. This line defines $\sigma _0$. The third line contains $2N$ integers, with the $i$-th integer being $\sigma _1(i)$. This line defines $\sigma _1$. Output a single line with $0$ if Anthony wins, $1$ if Matthew wins, or $-1$ if the game ties.
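Since the statement above fully determines the game once $\sigma_0$ and $\sigma_1$ are fixed, a literal simulation can serve as a reference for small cases (it is far too slow for $N$ up to $10^6$, and the helper names below are my own).

```python
def concentration_winner(n, sigma0, sigma1):
    """Return 0 if Anthony wins, 1 if Matthew wins, -1 on a tie (brute-force simulation)."""
    on_table = set(range(2 * n))     # cards not yet removed
    seen = set()                     # cards revealed at least once
    ptr = [0, 0]                     # per-player position in the fallback order
    score = [0, 0]
    order = (sigma0, sigma1)

    def next_unrevealed(p):
        # smallest index whose card has never been revealed (such a card always exists here)
        while order[p][ptr[p]] in seen:
            ptr[p] += 1
        return order[p][ptr[p]]

    def known_pair():
        # a revealed on-table card whose partner is also revealed and on the table
        for c in on_table:
            if c in seen and (c ^ 1) in seen and (c ^ 1) in on_table:
                return c             # which pair is picked does not change the outcome
        return None

    player = 0
    while on_table:
        first = known_pair()
        if first is None:
            first = next_unrevealed(player)
        seen.add(first)
        partner = first ^ 1          # the only card "similar" to first
        if partner in seen and partner in on_table:
            second = partner
        else:
            second = next_unrevealed(player)
            seen.add(second)
        if second == partner:        # match: keep both cards, play again
            on_table -= {first, second}
            score[player] += 2
        else:                        # no match: turn passes
            player ^= 1
    return 0 if score[0] > score[1] else 1 if score[1] > score[0] else -1

# Tiny example: N = 2 and both players use the order 0,1,2,3 -> Anthony wins (prints 0).
print(concentration_winner(2, [0, 1, 2, 3], [0, 1, 2, 3]))
```

An accepted solution would of course need an O(N)-ish analysis of the reveal orders rather than this direct simulation.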
There exists a 9x9 grid with the cells in one single row numbered 1-9 in order. The cells in the other 8 rows are initially empty. Note: The cells initially containing numbers can be in any one row; not necessarily the first. The columns from left to right are numbered $1-9$ (as with the top row). Firstly, $B9$ must be $1$ since it is the last in its continuous region. Then, this forces $C8$ to be $1$ since none of the rest of row $B$ can contain $1$ nor can column $9$. Similarly, we find that going down diagonally to the left all the entries are $1$ down to $I2$. Now, look at $C9$. The entry here, $x$, must be the same as $D8$, since its continuous region has to contain $x$ but row $C$ and column $9$ already contain $x$. By a similar line of reasoning, we find, recursively, that the entries $E7$, $F6$, $G5$, $H4$ and $I3$ are all $x$ but of course cannot be $3,4,\ldots,9$ so $x=2$ and $B1$ must also be $2$. We can continue this line of reasoning, next starting at the entry in $D9$, calling this $y$ and proceeding diagonally left and down to find $y=3$. In this way, we can fill the entire grid, recursively always beginning at the topmost entry in column $9$. The Sudoku variation in question turns out to be called "Du-Sum-Oh," along with some aliases, and cells 1–8 by themselves can force a unique solution without being given cell 9. Hexomino's original answer [1] revealed how delightful this puzzle is but I had forgotten the details months later when mentioning it to a fellow Sudoku enthusiast, so some variety ensued. The layout on the left, with straightforward numbering, has a very sleek route to solution [2] whereas the numbering on the right demonstrates that an irregular set of initial numbers can also force a unique solution and be amusing to solve [3] if you're in the mood. Progress came from starting with small boards while experimenting with simple zigzags and L shapes. The 4×4 and 5×5 layouts along the way were misleadingly efficient [4] and led to an unnecessarily awkward 9×9 layout. [4] Solutions of the 4×4 layout in just two steps and of the 5×5 layout in four steps. Edit: Angel Koh came up with another solution to my layout, so it is non-unique. I believe to have a unique solution in regular sudoku you need at minimum: 1 number in each column, 1 number in each row, 1 number in each box, and every number from 1-9. But, you can cheat on 1 of these and for instance satisfy the remaining 3 clues but have a number in 8 boxes. Although I am not sure how to prove it.
In this post, I plan to discuss two very simple inequalities – Markov and Chebyshev. These are topics that are covered in any elementary probability course. In this post, I plan to give some intuitive explanation about them and also try to show them from different perspectives. Also, the following discussion is closer to discrete random variables even though most of the results can be extended to continuous ones. One interesting way of looking at the inequalities is from an adversarial perspective. The adversary has given you some limited information and you are expected to come up with some bound on the probability of an event. For eg, in the case of Markov inequality, all you know is that the random variable is non negative and its (finite) expected value. Based on this information, Markov inequality allows you to provide some bound on the tail probabilities. Similarly, in the case of Chebyshev inequality, you know that the random variable has a finite expected value and variance. Armed with this information, Chebyshev inequality allows you to provide some bound on the tail probabilities. The most fascinating thing about these inequalities is that you do not have to know the probability mass function (pmf). For any arbitrary pmf satisfying some mild conditions, Markov and Chebyshev inequalities allow you to make intelligent guesses about the tail probability. Another way of looking at these inequalities is this. Suppose we do not know anything about the pmf of a random variable and we are forced to make some prediction about the value it takes. If the expected value is known, a reasonable strategy is to use it. But then the actual value might deviate from our prediction. Markov and Chebyshev inequalities are very useful tools that allow us to estimate how likely or unlikely it is that the actual value varies from our prediction. For eg, we can use Markov inequality to bound the probability that the actual value exceeds some multiple of the expected value. Similarly, using Chebyshev we can bound the probability that the difference from the mean is more than some multiple of its standard deviation. One thing to notice is that you really do not need the pmf of the random variable to bound the probability of the deviations. Both these inequalities allow you to make deterministic statements of probabilistic bounds without knowing much about the pmf. There are some basic things to note here. First, the term P(X >= k E(X)) estimates the probability that the random variable will take a value that exceeds k times the expected value. The term P(X >= E(X)) is related to the cumulative distribution function as 1 – P(X < E(X)). Since the variable is non negative, this bounds the deviation on one side only. (1) The probability that X takes a value that is greater than twice the expected value is at most half. In other words, if you consider the pmf curve, the area under the curve for values that are beyond 2*E(X) is at most half. (2) The probability that X takes a value that is greater than thrice the expected value is at most one third. Let us see why that makes sense. Let X be a random variable corresponding to the scores of 100 students in an exam. The variable is clearly non negative as the lowest score is 0. Tentatively let's assume the highest value is 100 (even though we will not need it). Let us see how we can derive the bounds given by Markov inequality in this scenario. Let us also assume that the average score is 20 (must be a lousy class!).
By definition, we know that the combined score of all students is 2000 (20*100). Let us take the first claim – The probability that X takes a value that is greater than twice the expected value is at most half. In this example, it means the fraction of students who have a score greater than 40 (2*20) is at most 0.5. In other words, at most 50 students could have scored 40 or more. It is very clear that it must be the case. If 50 students got exactly 40 and the remaining students all got 0, then the average of the whole class is 20. Now, if even one additional student got a score greater than 40, then the total score of the 100 students becomes at least 2040 and the average becomes at least 20.4, which is a contradiction to our original information. Note that the scores of other students that we assumed to be 0 is an over simplification and we can do without that. For eg, we can argue that if 50 students got 40 then the total score is at least 2000 and hence the mean is at least 20. We can also see how the second claim is true. The probability that X takes a value that is greater than thrice the expected value is at most one third. If 33.3 students got 60 and the others got 0, then we get the total score as around 2000 and the average remains the same. Similarly, regardless of the scores of the other 66.6 students, we know that the mean is at least 20 now. This also must have made clear why the variable must be non negative. If some of the values are negative, then we cannot claim that the mean is at least some constant C. The values that do not exceed the threshold may well be negative and hence can pull the mean below the estimated value. Let us look at it from the other perspective: Let p be the fraction of students who have a score of at least a. Then it is very clear to us that the mean is at least a*p. What Markov inequality does is to turn this around. It says, if the mean is a*p then the fraction of students with a score greater than a is at most p. That is, we know the mean here and hence use the threshold to estimate the fraction. The probability that the random variable takes a value that is greater than k*E(X) is at most 1/k. The fraction 1/k acts as some kind of a limit. Taking this further, you can observe that given an arbitrary constant a, the probability that the random variable X takes a value >= a, i.e. P(X >= a), is at most 1/a times the expected value. In other words, $P(X \geq a) \leq \frac{1}{a} E[X]$. This gives the general version of Markov inequality. In the equation above, I separated the fraction 1/a because that is the only varying part. We will later see that for Chebyshev we get a similar fraction. The proof of this inequality is straightforward. There are multiple proofs even though we will use the following proof as it allows us to show Markov inequality graphically. This proof is partly taken from Mitzenmacher and Upfal's exceptional book on Randomized Algorithms. Define an indicator random variable $I$ that takes the value 1 when $X \geq a$ and 0 otherwise. Since $X$ is non negative, we always have $I \leq X/a$, and taking expectations gives $E[I] \leq E[X]/a$. But we also know that the expectation of an indicator random variable is also the probability that it takes the value 1. This means E[I] = Pr(X>=a). Putting it all together, we get the Markov inequality. A useful generalization is to apply Markov inequality not to $X$ itself but to some non negative function $f(X)$. This is a very powerful technique. Careful selection of f(X) allows you to derive more powerful bounds. (1) One of the simplest examples is f(X) = |X| which guarantees f(X) to be non negative. (2) Chebyshev inequality, which we will see below, uses $f(X) = (X - E[X])^2$. (3) Under some additional constraints, Chernoff inequality uses $f(X) = e^{tX}$. Let us consider a simple example where it provides a decent bound and one where it does not. A typical example where Markov inequality works well is when the expected value is small but the threshold to test is very large.
Consider a coin that comes up with head with probability 0.2. Let us toss it n times. Now we can use Markov inequality to bound the probability that we got at least 80% of heads. The expected number of heads is $0.2n$, and $0.8n = 4 \times 0.2n$, so Markov inequality gives $P(X \geq 0.8n) \leq \frac{0.2n}{0.8n} = \frac{1}{4}$. Of course we can estimate a finer value using the Binomial distribution, but the core idea here is that we do not need to know it! For the second example, where the threshold is smaller than the expected value, the upper bound is greater than 1! Of course, using the axioms of probability, we can cap it at 1, while the actual probability is closer to 0.66. You can play around with the coin example or the score example to find cases where Markov inequality provides really weak results. The last example might have made you think that the Markov inequality is useless. On the contrary, it provided a weak bound because the amount of information we provided to it is limited. All we provided to it was that the variable is non negative and that the expected value is known and finite. In this section, we will show that it is indeed tight – that is, Markov inequality is already doing as much as it can. From the previous example, we can see an example where Markov inequality is tight. If the mean of 100 students is 20 and if 50 students got a score of exactly 0, then Markov implies that at most 50 students can get a score of at least 40. This implies that the bound is actually tight! Of course, one of the reasons why it was tight is that the other value is 0 and the remaining values of the random variable sit exactly at the threshold. This is consistent with the score example we saw above. Chebyshev inequality is another powerful tool that we can use. In this inequality, we remove the restriction that the random variable has to be non negative. As a price, we now need to know additional information about the variable – (finite) expected value and (finite) variance. In contrast to Markov, Chebyshev allows you to estimate the deviation of the random variable from its mean. A common use of it estimates the probability of the deviation from its mean in terms of its standard deviation: $P(|X - E[X]| \geq k\sigma) \leq \frac{1}{k^2}$, where $\sigma$ is the standard deviation. (1) In contrast to Markov inequality, Chebyshev inequality allows you to bound the deviation on both sides of the mean. (2) The length of the deviation is $k\sigma$ on both sides, which is usually (but not always) tighter than the bound k E[X]. Similarly, the fraction 1/k^2 is much tighter than the 1/k that we got from Markov inequality. (3) Intuitively, if the variance of X is small, then Chebyshev inequality tells us that X is close to its expected value with high probability. (4) Using Chebyshev inequality, we can claim that at most one fourth of the values that X can take is beyond 2 standard deviations of the mean. The proof follows from Markov: $P(|X - E[X]| \geq k\sigma) = P((X - E[X])^2 \geq k^2\sigma^2) \leq \frac{E[(X-E[X])^2]}{k^2\sigma^2} = \frac{1}{k^2}$. We used the Markov inequality in the middle step and used the fact that $E[(X - E[X])^2] = Var[X] = \sigma^2$. It is important to notice that Chebyshev provides a bound on both sides of the error. One common mistake when applying Chebyshev is to divide the resulting probabilistic bound by 2 to get a one sided error. This is valid only if the distribution is symmetric. Else it will give incorrect results. You can refer to Wikipedia to see one sided Chebyshev inequalities. One of the neat applications of Chebyshev inequality is to use it for higher moments. As you would have observed, in Markov inequality, we used only the first moment. In the Chebyshev inequality, we use the second moment (and first). We can use the proof above to adapt Chebyshev inequality for higher moments. In this post, I will give a simple argument for even moments only. For the general argument (odd and even), look at this Math Overflow post.
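Before moving on, here is a quick numerical check of the two bounds on the coin example above (the choice of n = 100 tosses is mine, just to make the numbers concrete; the exact value comes from the Binomial distribution).

```python
from math import comb

def binom_tail(n, p, k):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 100, 0.2                 # 100 tosses of the 0.2-heads coin
mean, var = n * p, n * p * (1 - p)
a = 80                          # "at least 80% heads"

markov = mean / a                          # P(X >= a) <= E[X] / a
chebyshev = var / (a - mean) ** 2          # P(X >= a) <= P(|X - E[X]| >= a - E[X]) <= Var / (a - E[X])^2
print("Markov bound   :", markov)          # 0.25, as derived above
print("Chebyshev bound:", round(chebyshev, 6))
print("Exact value    : %.3e" % binom_tail(n, p, a))
```

The output illustrates the point made above: the exact probability is far smaller than the Chebyshev bound, which in turn is far smaller than the Markov bound, because each bound uses strictly more information about the distribution.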
It should be intuitive to note that the more information we get, the tighter the bound is. For Markov we got 1/a as the fraction. It was 1/a^2 for second order Chebyshev and 1/a^k for the k-th order Chebyshev inequality. Using Chebyshev inequality, we previously claimed that at most one fourth of the values that X can take is beyond 2 standard deviations of the mean. It is possible to turn this statement around to get a confidence interval. If at most 25% of the population are beyond 2 standard deviations away from the mean, then we can be confident that at least 75% of the population lie in the interval $(E[X] - 2\sigma, E[X] + 2\sigma)$. More generally, we can claim that at least $(1 - \frac{1}{k^2}) \times 100$ percent of the population lies in the interval $(E[X] - k\sigma, E[X] + k\sigma)$. We can similarly derive that 94% of the population lie within 4 standard deviations of the mean. We previously saw two applications of Chebyshev inequality – one to get tighter bounds using higher moments without using complex inequalities, the other to estimate confidence intervals. There are some other cool applications that we will state without providing the proof. For proofs, refer to the Wikipedia entry on Chebyshev inequality. (1) Using Chebyshev inequality, we can prove that the median is at most one standard deviation away from the mean. (2) Chebyshev inequality also provides the simplest proof for the weak law of large numbers. Markov and Chebyshev inequalities are two of the simplest, yet very powerful inequalities. Clever application of them provides very useful bounds without knowing anything about the distribution of the random variable. Markov inequality bounds the probability that a nonnegative random variable exceeds any multiple of its expected value (or any constant). Chebyshev's inequality, on the other hand, bounds the probability that a random variable deviates from its expected value by any multiple of its standard deviation. Chebyshev does not expect the variable to be non negative but needs additional information to provide a tighter bound. Both Markov and Chebyshev inequalities are tight – this means that with the information provided, the inequalities provide the most information they can provide. Hope this post was useful! Let me know if there is any insight I had missed! (1) Probability and Computing by Mitzenmacher and Upfal. (2) An interactive lesson plan on Markov's inequality – An extremely good discussion on how to teach Markov inequality to students. (3) This lecture note from Stanford – Treats the inequalities from a prediction perspective. (4) Found this interesting link from Berkeley recently. R is one of the coolest languages designed and I am having a lot of fun using it. It has become my preferred language of programming, next only to Python. If you are also using Ubuntu, the rate of update of R in Ubuntu's official repositories is slightly slow. If you want to get the latest packages as soon as possible, then the best option is to add some CRAN mirror to your Ubuntu repository. This by itself is straightforward. I decided to write this post on how to solve the GPG error if you get it. (1) Decide on which CRAN repository you want to use. Finding the nearest one usually gives the best speed. Let's say it is http://cran.cnr.berkeley.edu/ . Append "bin/linux/ubuntu". Typically this works. You can confirm this by going to this URL in the browser too. (2) Add this repository to your system's software sources; either of the following ways works. (a) Synaptic -> Settings -> Repositories -> Other Software -> Add. In the apt line enter "deb http://cran.cnr.berkeley.edu/bin/linux/ubuntu natty/".
(b) sudo vim /etc/apt/sources.list and add "deb http://cran.cnr.berkeley.edu/bin/linux/ubuntu natty/" at the end. If you are not comfortable with vim, use gedit, but instead of sudo, use gksudo. (3) Refresh the source repository by using Refresh in Synaptic or by running "sudo apt-get update". (4) Install R or any other package you want. If you are installing R, I suggest you install r-base-dev instead of r-base. If you are installing some R package, check if it exists with the name r-cran-*. Else, install it using the install.packages command inside R. If you get a GPG (NO_PUBKEY) error when refreshing, you will need to import the missing public key into apt's keyring; once that is done, repeat the steps above and this should fix the key error. I recently spent some time developing notes on Subset sum – specifically the NP-Completeness part of it. I thought I would share it with the blog readers. Subset sum is one of the very few arithmetic/numeric problems that we will discuss in this class. It has lots of interesting properties and is closely related to other NP-complete problems like Knapsack. Even though Knapsack was one of the 21 problems proved to be NP-Complete by Richard Karp in his seminal paper, the formal definition he used was closer to subset sum rather than Knapsack. Informally, given a set of numbers S and a target number t, the aim is to find a subset S' of S such that the elements in it add up to t. Even though the problem appears deceptively simple, solving it is exceedingly hard if we are not given any additional information. We will later show that it is an NP-Complete problem and probably an efficient algorithm may not exist at all. The decision version of the problem is: Given a set S and a target t, does there exist a subset $S' \subseteq S$ such that $\sum_{e \in S'} e = t$? One thing to note is that this problem becomes polynomial if the size of S' is given. For eg, a typical interview question might look like: given an array, find two elements that add up to t. This problem is perfectly polynomial and we can come up with a straightforward algorithm using nested for loops to solve it. (What is the running time of the best approach?) A slightly more complex problem asks for, say, 3 elements that add up to t. Again, we can come up with a naive approach of complexity $O(n^3)$. (What is the best running time?) The catch in the general case of subset sum is that we do not know $|S'|$. In the worst case $|S'|$ can be as large as $n$, and hence the running time of the brute force approach is approximately $2^n$. A slightly more efficient algorithm checks out all possible subsets. One typical way to do this is to express all numbers from 0 to $2^n - 1$ in binary notation and form a subset of elements whose indexes are equal to the bit positions that correspond to 1. For eg, if n is 4 and the current number, in decimal, is say $10$, which in binary is 1010, then we check the subset that consists of the elements of S at the positions where the binary representation has a 1. One advantage of this approach is that it uses constant space. At each iteration, you examine a single number. But this approach can be slow at finding small subsets: depending on where in the enumeration order the solution lies, we may still have to examine close to $2^n$ different subsets before reaching it. A slightly different approach finds all possible sums of subsets and checks if t has occurred among those sums. The algorithm maintains, after processing the first i elements, the list of all subset sums that can be formed from them; if the final list contains t, return true. This algorithm uses the notation S+x to mean $\{s + x : s \in S\}$. Refer to CLRS 35.5 for a discussion of a similar algorithm for a variant of the subset sum problem. In this section we will prove that a specific variant of Subset sum is NP-Complete.
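Before diving into the hardness proof, here is a small sketch of the two exhaustive approaches described above (the bitmask enumeration and the running list of subset sums); the function names and examples are my own, and both are exponential in the worst case.

```python
def subset_sum_bitmask(S, t):
    """Enumerate all 2^n bit patterns; constant extra space, exponential time."""
    n = len(S)
    for mask in range(1 << n):
        if sum(S[i] for i in range(n) if mask >> i & 1) == t:
            return True
    return False

def subset_sum_all_sums(S, t):
    """Keep the set L of every sum reachable so far; L is merged with L + x at each step."""
    L = {0}
    for x in S:
        L |= {s + x for s in L}      # may grow exponentially in the worst case
    return t in L

print(subset_sum_bitmask([3, 34, 4, 12, 5, 2], 9))    # True (4 + 5)
print(subset_sum_all_sums([3, 34, 4, 12, 5, 2], 30))  # False
```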
Subset sum decimal is defined very similar to standard Subset sum but each number in S and also t is encoded in decimal digits. We can show that Subset sum decimal is in class NP by providing the subset S' as the certificate. Clearly, we can check if elements in S' adds up to t in polynomial time. The next step is to select another NP-Complete problem which can be reduced to Subset sum decimal. So far we have not discussed any arithmetic NP complete problems. The only non graph theoretic problem that we have discussed in 3SAT and we will use it for the proof. Of course there are multitude of other reductions including Vertex cover, 3 dimensional matching, partition etc. 1. Construct a set S of unique large decimal numbers that somehow encode the constraints of . Additionally this operation must take polynomial time. 2. Construct an appropriate target t such that this instance of Subset sum decimal is solvable if and only if a solution to 3SAT instance exists. Handle complications like carries in addition. 3. Devise a way to find the satisfying assignment from subset solution and vice versa. 1. All the literals to is used in some clause of . 2. No clause can contain both a literal and its complement. As a consequence of these assumptions, we do not have any variables that are superfluous. Also we do not have any clauses that get satisfied trivially. We will not duplicate the proof in the lecture notes as a detailed sketch of the reduction is given in CLRS section 34.5.5. Instead we will focus on certain observations. This is easy to see. For each variable we create 2 variables. Similarly we create two variables for each clause . The total number of variables in S is 2(m+n). Each number in set S and t contains exactly n+m digits. Hence the total construction takes time polynomial in n+m . Observation 2 : There are no carries when elements in subset are added to form t. (b) A clause cannot contain a literal and its complement. So, each variable can add at most 1 to that clause column and there at most 3 variables in a clause. Additionally, we have 1 and 2 from the slack variables. Concisely, we get at most 3 from or and 3 from and . Hence we can conclude that carries does not occur at each column(digit) as the base we use is 10. Observation 3 : All variables in S corresponding to $x_i$s are unique. (a) First we show that if , and does not match in the leading n digits. Similar argument holds for and . (b) Next, we can show that does not equal to . This is because our assumption that a literal and its complement does not occur in the same clause. This means that the trailing m digits will not be equal. In conclusion, no pair of variables in S corresponding to are equal. Observation 4 : All variables in S corresponding to s are unique. Each clause creates two variables and . If , and does not match in the trailing m digits. Additionally, by construction, as the digit position corresponding to has 1 for and 2 for . Observation 5 : All variables in S is unique. i.e. S forms a set. This can observed from Observation 3 and 4. By construction and do not match. Similar argument hold for and . Observation 6 : New variables corresponding and are both needed for proof. A detailed sketch is given in CLRS. The variables and created from makes sure that each variable has a unique boolean assignment of 0 or 1. Else the sum for that column in target will be 2. This is due to the assumption that all variables HAS to be used in some clause and hence has a unique assignment. 
Of course, it is possible that has multiple satisfying assignment but the target digit forces only one of them to be selected when you select the elements of subset . The digits corresponding to clauses makes sure that each clause has at least one variable that evaluates to true. This is because each digit of slack variable corresponding to (ie ) contribute at most 3 towards t and hence the remaining (at least) 1 has to come from or s. So variables ensure that each has a unique assignment. Variables ensure that each clause of is satisfied. Observation 7 : Subset sum is NP complete if the numbers are expressed in base . From observation 2 , we know that the maximum possible digit due to summation of elements in S is 6. This means we can reuse the proof of Subset sum decimal to prove that Subset sum is NP-Complete for any base b that is greater that 6. Observation 8 : Given S' we can find a satisfying assignment for . We know that any satisfying subset must include either or for . If includes then set to 1. Else set it to 0. This is a bit tricky and is done in two steps. More details can be found in CLRS proof. 1. If the satisfying assignment had , then select . Else select . 2. For each clause find how many variables in it evaluated to true due to the boolean assignment. At least one variable has to be true and at most 3 variables are true. a. If has only one variable that evaluates to true, then select and . b. If has two variables that evaluate to true, then select . c. If has three variables that evaluate to true, then select . Observation 10 : If is not satisfied, then S' cannot be found. If is not satisfied, then there exist at least one clause that is not satisfied. This means that for digit, the slack variables contribute only 3 but the corresponding digit in t has 4. Hence no S' exists. The formal definition of Subset sum binary is similar to Subset sum decimal . The only difference is that all numbers are encoded in bits. We can notice that the above proof for Subset sum decimal holds only for numbers expressed in base of at least 7 (from observation 7). For bases from 1-6, the previous proof does not apply – partly due to the fact that there will be carries during addition. We need an alternate proof approach. Since we have proved Subset sum decimal as NP-Complete , we can use the result to prove Subset sum binary as NP-Complete. The certificate is the subset S' given in binary. We can see that it can be done in polynomial time and hence Subset sum binary is in NP. The next step is to reduce Subset sum decimal to Subset sum binary. First we observe that any number encoded in decimal can be encoded to binary in polynomial time and vice versa. When given S and t in decimal as input, we encode them in binary and pass it to our Subset sum binary routine. The decision version of Subset sum binary returns true or false which can be fed directly as result of Subset sum decimal. In the optimization version , we just convert the $S'$ returned by the Subset sum binary subroutine to decimal. Observation 11 : A decimal number can be converted to binary in polynomial time. Assume some number n is encoded in both binary and decimal. This means where k is the number of digits in the decimal representation and k1 is the number of bits needed to encode it. So to express a decimal number with k digits, we need between 3k – 4k bits. Observation 12 : Subset sum is NP complete for any base . is a constant irrespective of n. 
So if n needs k digits in base b1, then it needs at most to be represented in base b2. (Verify observation 11 using this equation !). From observation 12, the only base left is 1 and this section handles the special case where all numbers are expressed in base 1. Subset sum unary is similar to Subset sum decimal where all numbers are expressed in unary notation. Numbers in base 1 are called as being represented in unary. Any number k is represented as which is a string of k 1's. Let us check if Subset sum unary is NP-Complete . The certificate is the subset where all elements are expressed in unary. If we are given numbers in unary, then verification takes time that is polynomial in the length of individual unary numbers. Hence Subset sum unary is in unary. To prove Subset sum unary is in NP-Complete , we have to reduce either Subset sum decimal/binary to unary. Superficially, it looks straightforward and hence it seems as though Subset sum unary is in NP-Complete. But the catch is that expressing a number n in base b to unary needs time exponential when computed wrt the size of n's representation in base b. For eg, representing a binary number n that needs k bits needs around unary digits. We can see that is exponential when viewed from k. In summary, converting a number from any base to unary takes exponential time. So we cannot use our reduction technique as there the reduction is not polynomial. What we showed above was that Subset sum unary is in NP but not NP-Complete. Here we show that there exists a dynamic programming formulation for this problem. We represent the problem as a matrix A of size n*t. A is a boolean matrix where the interpretation of cell A[i,j]=True is that there exists a subset of that sum up to j. ie such that . Since A[5,8]=True , we conclude that there exists a subset of S that sum up to t(8). Subset sum is interesting in the sense that its binary/decimal can be proved as NP-Complete but its unary version seems to allow a polynomial looking dynamic programming solution. Looking at the dynamic programming solution carefully, the time (and space) complexity of the approach is where n=|S| and t is the target. By itself, the DP solution looks feasible and 'somehow' polynomial. But one of the reasons that Subset sum is NP-Complete is due to the fact that it allows "large" numbers. If t is large, then the table A is huge and the DP approach takes a lot of time to complete. Given S and t , there are two ways to define an polynomial algorithm. One uses the length of S ie n to measure algorithm complexity. From this angle, is not polynomial. This is because t can be huge irrespective of n. For eg, we have have a small set with 4 elements but the individual elements (and t) are of the order , say, . But from the perspective of magnitude of t, this dynamic programming approach is clearly polynomial. In other words, we have two ways to anchor our polynomial – and . An algorithm is called pseudo polynomial, if its time complexity is bounded above by a polynomial function of two variables – and . Problems that admit pseudo polynomial algorithms are called weak NP-Complete problems and those that do not admit are called Strong NP-Complete problems. For example, Subset sum is a weak NP-Complete problem but Clique is a strong NP-Complete problem. There are lot of interesting discussion about the strong/weak NP-Complete problems in both Garey and Johnson and in Kleinberg/Tardos. See references for more details. 
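A minimal Python sketch of the dynamic programming table described above may help make the n*t behaviour concrete. The table is indexed from 0..n and 0..t for convenience, so it is one row and one column larger than the n*t matrix in the text; this is a reconstruction for illustration, not the original lecture code.

def subset_sum_dp(S, t):
    """Pseudo-polynomial check: is there a subset of S that sums to t?"""
    n = len(S)
    # A[i][j] is True iff some subset of S[0:i] sums to j
    A = [[False] * (t + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        A[i][0] = True                    # the empty subset sums to 0
    for i in range(1, n + 1):
        for j in range(1, t + 1):
            A[i][j] = A[i - 1][j]         # either skip S[i-1] ...
            if j >= S[i - 1]:
                A[i][j] = A[i][j] or A[i - 1][j - S[i - 1]]  # ... or take it
    return A[n][t]

print(subset_sum_dp([3, 1, 5, 2], 8))     # True, e.g. 3 + 5

Both the running time and the space are proportional to n*t, which is polynomial in the magnitude of t but exponential in the number of bits needed to write t down — exactly the pseudo-polynomial behaviour discussed above.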
Observation 13 : Only number theoretic problems admit pseudo polynomial algorithms. Observation 14 : Strong NP-Complete problems do not admit a pseudo polynomial time algorithm unless P=NP. 1. CLRS 34.5.5 – Proof of NP-Completeness of Subset sum. 2. CLRS 35.5 – An exponential algorithm to solve a variant of subset sum problem. 3. Garey and Johnson 4.2 – Discussion of pseudo polynomial time algorithms along with strong and weak NP-Complete problems. 4. Kleinberg and Tardos 6.4 – Discusses a variant of the DP algorithm given in the lecture notes and the concept of pseudo polynomial time algorithms. Section 8.8 has an alternate NP-Completeness proof of Subset sum using vertex cover which you can skim through if interested. Hope you enjoyed the discussion on various facets of Subset sum problem ! I somehow finished my Master's using my old Calculus knowledge. I took a course on Numerical Methods which kind of exposed my weaknesses. I kept getting confused with the error approximations which used ideas from infinite series. Other advanced ideas like multivariable optimization were also problematic to me. Once that course was over, I swore myself to refresh my Calculus stuff and also learn multivariable calculus. I started listening to MIT OCW's Single Variable Calculus lecture videos and felt two things – The course was a bit slow for my pace and the course jumped right away into the mechanics without spending much time on the intuitive explanations of the Calculus. In other words, I felt 18.01 was more focused on the analytic part which emphasized proofs and derivations whereas for my purposes an intuitive explanation of the concept would have sufficed. In fact, I remembered almost all of the Calculus formulas from undergrad – My only problem was the lack of "sense" in how to apply it to the problem I faced (say in machine learning or some optimization). Then I found the Calculus Revisited course from MIT OCW. It consists of a series of lectures on Calculus but also assumes that students have had prior exposure to it. This assumption had some interesting consequences and I fit the bill perfectly. I downloaded the set of videos and started listening to them. Interestingly, all the lectures were between 20-40 minutes which allowed for maximum focus and also allowed you to listen to multiple lectures in the same day. In fact, Arlington had a heavy snow this week and my university had to be closed for the entire week. I completed around 16 lectures in 3 days and was able to finish it ahead of my target date of Feb 15. The course starts with the absolute basic ideas of sets, functions, induction and other stuff. If you are from CS and had taken discrete math, you can feel free to skip the first section. But I would suggest you to still take a look as it , in a sense, sets the stage for the entire course. Do take some time to listen to the lecture on limits. (Part 1 , lecture 4). Here, the discussion of limits effortlessly leads to the derivation of the formula for instantaneous speed and hence differentiation. Part 2 forms the crux of the course and covers differentiation. Professor Herbert Gross had a beautiful way of teaching stuff about derivatives. In particular, he extensively used the idea of geometric proofs or visualizations to expound basic ideas. The way he brought out the tight relation between analysis (as in Math) and geometry was enthralling. He had a huge emphasis on geometric intuition which helped me to "grasp" the key concepts. 
Part 3 had some nice discussion on Circular functions. He joked about how teachers don't provide good motivation for learning trigonometry which I felt very true to me. He also explained some concepts that were new to me – that you do not really need triangles to define cosine and sine. Previously, I was aware of the radian concept but never put it all together. He also explained how sine and cosine tend to come up in unexpected places – like as the solution of the differential equation for harmonic motion 🙂 He also masterfully showed the close relation between circular and hyperbolic functions with a playful title of 'What a difference a sign makes' (in Part 5). Part 4 discussed about integration and how it can be used to calculate 2 and 3 dimensional areas (and volumes). This part also had a great discussion on how differential and integral calculus are related. That being said, I was a bit dissatisfied with the discussion on the two fundamental theorems of Calculus. The discussion on Mean Value Theorem also felt a bit rushed. I got a bit lost on the discussion on 1 dimensional arc length calculations. May be I should revisit the lecture notes for the same when I get some free time. Part 6 was my favorite part for two reasons – This had a discussion of infinite series and my favorite quip of the course. When discussing about the non intuitiveness and the intellectual challenges posed by infinity , professor Herbert Gross playfully quips (which goes something like this)– ' of course, one thing to do is to not study it. I can call it as the right wing conservative educational philosophy' – Ouch 🙂 I think I mostly understood the idea of infinite series even though there was not much explanation of "why" it works that way. I also felt the topic of Uniform Convergence to be way beyond my comprehension level. Overall, it is a great course and acted as a fast paced refresher for those who had already taken Calculus. The course slowly starts from basic pre-calculus ideas and rapidly gains speed and covers a huge list of calculus topics. I felt few of important Calculus topics were not covered or rushed up – First and second fundamental theorem of Calculus, Mean Value theorem, Taylor series, L'Hospital rule, discussion of exponents and logarithms etc. But that being said, I feel the course more than makes it up for the way the basic ideas were covered. I had fun learning the ideas of limits, infinitesimals , intuitive ideas of differentiation/integration, geometric explanation of differentiation/integration, how the concept of inverse functions pervades Calculus etc. Prof. Herbert Gross had a jovial air around him and occasionally delved into philosophical discussions which made listening to the lectures more interesting. He also had an extensive set of supplementary notes and huge amount of problems with solutions. I had to skip the problems part to conserve time. But if you have some time do spend some on it. Lastly, I found that one of the lectures in the series was missing. Lecture 5 in Part 2 on Implicit Differentiation was the same as the one on Lecture 4. I had sent a mail to MIT OCW about this and got an reply saying they will fix it soon. Hopefully, it will be fixed soon. Last week, I was searching for tutorials on using Lagrange multipliers. I was most interested in the case where there are multiple constraints. I found some good youtube videos in the process. So, I spent some time looking at good Youtube channels where good math lessons are taught. 
To my delight , I found some good channels. Has some interesting stuff on Partial fractions, calculus and some geometry ish topics. Has some nice and organized stuff about trigonometry and calculus. This is probably the most popular education Youtube channel. It contains basic tutorial videos on lot of subjects like physics , biology and math. It also has some nice videos on contemporary economic issues. Most of the videos are well packages using playlists that will help you listen in a organized fashion. You can also check Khan academy's website . Steven Strogatz writes a weekly article series on Math in NYTimes. He explains lot of interesting stuff in Math in a simple manner. You can check out the Strogatz's Opinionator blog page for more details. Most of the channels may not be very useful for grad students in their studies. But they can act as a refresher. The easiest way to follow the channels is by Subscribing to it. In each of the web page , there is a subscribe button which allows you to be notified when new video are uploaded. Once you subscribe , you either visit your My Subscriptions page to get the videos uploaded per user. You can also add the subscription widget to your Youtube homepage. But, there is still a small inconvenience. You have to visit youtube to find any updates. And like Wikipedia, we know surfing Youtube is a time sucker. Luckily, Youtube provides a RSS/Atom feed of your subscription page. If you use any RSS reader like Google Reader then you can click on the Feed icon at the My Subscriptions page and subscribe to the feed. You can refer to my old Google Reader tutorial if you want a tutorial on using it . So, if any new videos are uploaded then you can check them in your RSS reader and listen to them at your own pace. Just to bring the topic to closure, I finally found a good tutorial on using Lagrange Multipliers with multiple constraints at An Introduction to Lagrange Multipliers . Calculus is one of the important fields to master if you want to do research in data mining or machine learning. There is a very good set of video lectures on Single Variable Calculus at MIT OCW. The video lectures are here . I had listened to first 5-6 lectures. Since I had some grounding in calculus already, I did not have any trouble understanding it. But I felt professor David Jerison went a bit too fast but without giving a deep intuition of calculus. I was a bit dissatisfied and quit watching the lectures. When I was looking for alternate video lectures on calculus, I came across a set of 5 lectures on calculus titled "Big Picture Of Calculus". It consists of recordings of Professor Gilbert Strang and focuses explicitly on giving an intuitive feel on calculus. From the lectures, it looks like it might grow to a full series of lectures on calculus although for quite some time the lecture count has stayed constant at 5. The lectures span the most important topics in differential and integral calculus. I have talked about Gilbert Strang and his linear algebra course lectures here . The calculus lectures are also excellent. He focuses on the main topics and gives a geometric intuition. The lectures are short (around 30 minutes) and hence are quite convenient to watch also. I hope that the series will be expanded to cover other important topics too. Once you get the basic intuition , the OCW course on calculus should be easy to follow. The website for the videos is at Big Picture of Calculus . The lectures can be watched online. 
If you want to download them , you need to follow a convoluted procedure. a. Goto the html page for the individual lecture. Eg Big Picture of Calculus at http://www-math.mit.edu/~gs/video1.html . b. View the source of this page . Have fun with Calculus ! Linear Algebra is one of the coolest and most useful math courses you can take. Basically , it deals with vectors , matrices all the cool stuff you can do with them. Unfortunately, I did not really have a dedicated course on Linear Algebra in my undergrad. From what I hear , most of the CS people I meet (from India) also don't have this course in their undergrad. Sure we have had some of the topics (like vectors, basic matrices, determinants, Eigenvalues) split across in multiple courses or in our high school ; but not a single,unified course on it. Linear algebra is useful on its own but it becomes indispensable when your area of interest is AI , Data Mining or Machine Learning. When I took a machine learning course , I spent most of the time learning things in Linear Algebra, adv Calculus or Linear Optimization. In hindsight , machine learning would have been an easy course if I had previously taken courses on Linear Algebra or Linear Optimization. Ok, enough on my rant on lack of Linear Algebra in undergrad. After I struggled mightily in my machine learning course, I decided that I had to master Linear Algebra before taking any more advanced courses. I spent the entire winter holidays learning Linear Algebra as I was taking an advanced data mining course this spring. So this blog post is a discussion of my experience. Arguably the best resource to learn Linear Algebra is MIT's OCW course taught by Professor Gilbert Strang . This course are is one the most popular OCW course and so far had more than 1 Million visits . I also searched for alternate courses, but this course wins hands down both for its excellent teaching style and its depth. The course website is here. It contains around 35 video lectures on various topics. The lecture are available for download both from ITunes and from Internet Archive. If you prefer YouTube, then the playlist for this course is here. The recommended book for this course is Introduction to Linear Algebra. 4th ed. by Gilbert Strang. I found the book to be quite costly , even used books for old versions ! I don't mind buying expensive books (I shell out a lot of money for data mining books , but a rant on it later ) but since I was interested in Linear Algebra primarily to help me master data mining, I preferred the equivalent book Linear Algebra and Its Applications , also by Gilbert Strang. This book had a very similar content to the recommended book but I felt was more fast paced which suited me fine. Also I was able to get an old copy from Amazon for 10 bucks. Sweet ! My only complaint of the book is that the examples and exercises felt a bit disconnected (or should I say, I wasn't clear of the motivation ? ) from the topics. If you don't want to purchase these expensive books , then there is an EXCELLENT free e-book by Professor Jim Hefferon .The book's website is here , from where you can download the e-book. I have to say, this book really blew me away. It was really intuitive, has excellent (mostly plausible) examples, was slightly more theoretical than Strang's book with more proofs. It also had a very helpful solution manual , and a LaTeX version of the book. 
Too good to be true 🙂 I felt this book had a much limited set of topics than Strang's course/book, (hence this a truly intro book) , but whatever topic it took, it gave it a thorough treatment. Another thing I liked in the book are the exercises – Most of them were excellent. And having a solution manual helped clarify a lot of things given that I was doing essentially a self-study. Thanks Jim ! I felt, overall, the lectures were excellent. They were short (40-50 minutes). So my usual daily schedule was to listen to a lecture, and read the relevant sections in the book , solve the exercises for which the answers are available at the end of book. All these steps took at most 2-3 hours a day. I was also taking notes in LaTeX using Lyx. I have talked about using Lyx previously in this blog post. I really liked Strang's teaching style. He often emphasizes intuition , especially geometric intuition rather than proofs. I felt that is how intro courses must be structured. Proofs are important but not before I have a solid understanding of the topics. But I also have to say that the lectures were varying in quality. Some of the lectures were exceptional while some were not so enlightening. But on the whole, I was really glad that he has made the lectures available online. It has certainly helped me learn Linear Algebra. If possible see all the lectures as almost all of them cover important topics. I did and have to say all of them were excellent and useful. But if you are mostly interested in applied Linear Algebra and planning to use it in Data Mining/ Machine learning, then my suggestion will be Lectures 1-11 , 14-22,25,27-29,33. If you are interested watch lectures 30,31 too. Again a better way to learn is to take notes during lectures and solving at least a few exercises in the book. If you have Matlab or Octave then you can verify answers to some other exercises for which solutions are not given. I have taken LaTeX notes for this course but they are a bit scattered and unorganized. Hopefully, I will organize them together and create a single PDF soon. I kind of put it on a lower priority after I noticed that Peteris Krumins's blog has a partial set of lecture notes for this course. His lecture notes can accessed here . As of now (Jan 30 , 2010) , he has put notes for first 5 lectures although the frequency seems to be a bit slow. Have fun with Vectors and Matrices !!
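If you do not have Matlab or Octave, Python with NumPy works just as well for checking exercise answers. The system below is a made-up example for illustration, not an exercise from Strang's or Hefferon's books.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)              # solve Ax = b
print(x, np.allclose(A @ x, b))        # verify the solution numerically

vals, vecs = np.linalg.eig(A)          # eigenvalues and eigenvectors
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v)) # check A v = lambda v for each pair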
CommonCrawl
Many people find it hard to come up with a story problem that represents fraction division (including many math teachers, engineers, and mathematicians). Why is this hard to do? For many people, their schema for dividing fractions consists almost entirely of the "invert and multiply" rule. But there is much more to thinking about fraction division than that. So much in fact, that we can't say it all in a single blog post. This is the first of several musings about fraction division. If you have 12 liters of tea and a container holds 2 liters, how many containers can you fill? If you have 1 ½ liters of tea and a container holds ¼ liter, how many containers can you fill? If you have 1 ¼ liters of tea and a container holds ¾ liters, how many containers can you fill? If you have ¾ liter of tea and a container holds 1 ¼ liters, how many containers can you fill? If you have ¾ liter of tea and a pitcher holds 1 ⅓ liters, how much of a container can you fill? So a division problem that asks "how many groups?" is structurally the same as a division problem that asks about "how much of a group?", but because of the way we speak about quantities greater than 1 and quantities less than 1, the language makes the structure harder to see. What other ways might we see the parallel structure? Equations: $$? \times2 = 12, \quad ? \times \frac14 = 1\frac12, \quad ? \times \frac34 = 1\frac14, \quad ? \times 1\frac14 = \frac34.$$ The diagrams don't have the language problem. In all cases the upper and lower braces show the relation between the size of a container and the amount you have. Whether a whole number of containers can be filled (diagrams 1 and 2), a container plus a part of a container can be filled (diagram 3), or only a part of a container can be filled (diagram 4), the underlying story is the same. Many people think of diagrams primarily as tools to solve problems. But sometimes diagrams can help students see structure or reveal other important aspects of the mathematics. This is an example of looking for and making use of structure (MP7). The intertwining of the abstraction of the equations and the concreteness of the diagrams is a good example of MP2 (reason abstractly and quantitatively). Coming up next week: what else are diagrams good for?
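As a purely numerical supplement to the diagrams (this computation is added for illustration and is not part of the original post), each of the divisions can be carried out by rewriting both quantities over a common denominator, so that "how many ¾-liter containers fit in 1¼ liters?" becomes "how many 3s fit in 5?": $$12 \div 2 = 6, \qquad 1\tfrac12 \div \tfrac14 = \tfrac64 \div \tfrac14 = 6,$$ $$1\tfrac14 \div \tfrac34 = \tfrac54 \div \tfrac34 = \tfrac53 = 1\tfrac23, \qquad \tfrac34 \div 1\tfrac14 = \tfrac34 \div \tfrac54 = \tfrac35.$$ The same parallel structure is visible here: the computation is identical whether the answer is a whole number of containers, more than one container, or only part of a container.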
CommonCrawl
You are planning to create a map of an RPG. This map is represented by a grid whose size is $H \times W$. Each cell in this grid is either '@', '*', '#', or '.'. The meanings of the symbols are as follows. '@': The start cell. The story should start from this cell. '*': A city cell. The story goes through or ends with this cell. '#': A road cell. '.': An empty cell. You have already located the start cell and all city cells under some constraints described in the input section, but no road cells have been located yet. Then, you should decide which cells to set as road cells so that a journey on the resulting map satisfies the following conditions. The journey must contain as many city cells as possible. The journey must consist of distinct non-empty cells in this map. The journey must begin with the start cell. The journey must end with one of the city cells. The journey must contain all road cells. That is, road cells not included in the journey must not exist. The journey must be unforked. In more detail, every road cell and every city cell, except for the cell at the end of the journey, must share edges with two other cells, both of which are also contained in the journey. Then, each of the start cell and the cell at the end of the journey must share an edge with another cell contained in the journey. You do not have to consider the order of the cities to visit during the journey. Initially, the map contains no road cells. You can change any empty cells to road cells to make a journey satisfying the conditions above. Your task is to print a map which maximizes the number of cities in the journey. The input consists of a single test case of the following form. The first line consists of two integers $H$ and $W$. $H$ and $W$ are guaranteed to satisfy $H = 4n - 1$ and $W = 4m - 1$ for some positive integers $n$ and $m$ ($1 \leq n, m \leq 10$). The following $H$ lines represent a map without road cells. The ($i+1$)-th line consists of a string $S_i$ of length $W$. The $j$-th character of $S_i$ is either '*', '@' or '.' if both $i$ and $j$ are odd, otherwise '.'. The number of occurrences of '@' in the grid is exactly one. It is guaranteed that there are one or more city cells on the grid. Print a map indicating a journey. If several maps satisfy the condition, you can print any of them.
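A minimal Python sketch of reading this input format and collecting the start cell and the city cells — just the bookkeeping that any solution needs; it does not place the road cells or construct the journey itself.

import sys

def read_map():
    lines = sys.stdin.read().splitlines()
    H, W = map(int, lines[0].split())
    grid = [list(lines[i + 1]) for i in range(H)]

    start, cities = None, []
    for i in range(H):
        for j in range(W):
            if grid[i][j] == '@':
                start = (i, j)
            elif grid[i][j] == '*':
                cities.append((i, j))
    return H, W, grid, start, cities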
CommonCrawl
1) How do we make sense of $X(f)$ with $X$ a vector field on $M$ and $f$ a smooth function on $M$? 2) How do we make sense of $Xg(Y,Z)$ where $g$ is a riemanian metric and $X,Y,Z$ vector fields. To get this answered, the short answer to both questions is yes. A vector field $X$ is a section of the tangent bundle $TM$, in particular $X:M\to TM$. So $X$ maps points to vectors, and vectors map functions to real numbers. So when people say $Xf$, they actually mean $\tilde X(f)$ where $\tilde X$ is implicitly defined as follows: $\tilde X: C^\infty(M)\to C^\infty(M)$, $\, \tilde X(f)(p):=X(p)(f)$. The second question follows easily from this. I don't think it's valid to think of $g$ evaluated at $x$, then paired with $Y_x$ and $Z_x$, then acted upon by $X_x$. Because, as you can see from the above, the components of $g$ itself get differentiated. The proper order is that the tensor $g$ gets paired with the vector fields $Y$ and $Z$, resulting in a function, which is acted upon by $X$. Not the answer you're looking for? Browse other questions tagged differential-geometry definition riemannian-geometry or ask your own question. Is a linear vector field a geodesible vector field? When is a Divergence-Free Vector Field on the Tangent Bundle of a Riemannian Manifold Hamiltonian? How can a vector field eat a smooth function?
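In local coordinates this becomes very concrete (a standard computation added for illustration): writing $X = X^i \partial_i$, $Y = Y^j\partial_j$, $Z = Z^k\partial_k$ and $g = g_{jk}\,dx^j \otimes dx^k$ with the summation convention, $$X(f) = X^i\,\frac{\partial f}{\partial x^i}, \qquad X\bigl(g(Y,Z)\bigr) = X^i\,\frac{\partial}{\partial x^i}\bigl(g_{jk}\,Y^j Z^k\bigr),$$ so the components of $g$, $Y$ and $Z$ are all differentiated — which is exactly the point made above about pairing $g$ with $Y$ and $Z$ first and only then applying $X$.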
CommonCrawl
Let $S$ be a surface. Is it true that if $S$ is covered by the hyperbolic plane (or a subset thereof) then it admits a Riemannian metric of constant negative curvature? How does the metric (or multiple metrics) arise? Is the converse true? Again, how do we construct the covering(s)? Relevant references would be much appreciated. The statement is not true. The universal covering of $T^2$ is homeomorphic to $\mathbb R^2$, and $\mathbb R^2$ is homeomorphic to $\mathbb H^2$, so there exists a universal covering map $\mathbb H^2 \to T^2$. But $T^2$ does not have a hyperbolic metric, by the Gauss-Bonnet theorem. The correct statement is that if there exists a covering map $f : \mathbb H^2 \to S$ such that the deck transformation action of $\pi_1(S)$ on $\mathbb H^2$ is an action by isometries of $\mathbb H^2$ then $S$ admits a Riemannian metric of constant negative curvature. The proof is to take any open subset $U \subset S$ which is evenly covered by $f$, choose an open subset $\tilde U \subset \mathbb H^2$ such that $f$ maps $\tilde U$ to $U$ by a homeomorphism, and define the Riemannian metric on $U$ as the pushforward via the map $f$ of the Riemannian metric on $\tilde U$. The hypothesis that the deck action is by isometries is needed here to show that the Riemannian metric on $U$ is well-defined independent of the choice of $\tilde U$. The converse is only true with an additional hypothesis as well, namely that the Riemannian metric on $S$ is geodesically complete. The proof takes some work, but you can find it carefully written up in textbooks on hyperbolic geometry. Not the answer you're looking for? Browse other questions tagged surfaces riemann-surfaces hyperbolic-geometry riemannian-metric or ask your own question. How do we define a complete metric on a Riemann surface with punctures? Which bordered surfaces have hyperbolic structures? Why is the Hyperbolic plane $\delta$-hyperbolic?
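The Gauss–Bonnet obstruction used above can be made explicit (a standard computation added for illustration): for a closed orientable surface $S$ with Gaussian curvature $K$, $$\int_S K\,dA = 2\pi\,\chi(S),$$ and $\chi(T^2)=0$, so a metric of constant curvature $K=-1$ on the torus would give $\int_{T^2}K\,dA = -\operatorname{Area}(T^2) < 0$, a contradiction. This is why the extra hypothesis that the deck transformation action is by isometries of $\mathbb H^2$ cannot be dropped.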
CommonCrawl
Lets say I have two stocks x and y and their corresponding stock price p(x) and p(y). consider HR as hedge ratio. Then we can calculate the spread using this equation. from this step what rationale should we use for buying and selling pairs? Pairs trading works for two highly correlated stocks. We then sell the costlier stock and buy cheaper stock simultaneously. another question? If price of x is 13 dollar and price of y is 63 dollar then how many shares of x and y should we buy and sell simultaneously in a pairs trading? To answer your last question: the current prices alone don't decide how many shares to sell and buy in each of the stocks. That is decided by the hedge ratio. In fact, the whole point of the hedge ratio is to assume that it is the ratio that the stocks will revert back to over time. So if we denote the spread at time $t$ by $s_t$ and the hedge ratio as $\beta$, we have $$ s_t = p(x_t) - \beta p(y_t) + \epsilon_t $$ where $\epsilon_t$ is the deviation from the equilibrium state. When you get your signal and let's say $s_t>0$, you sell \$1 worth of $x$ and buy \$ $\beta$ worth of $y$. How do you decide $\beta$? Well, usually by doing a linear regression using some past data. In the stated model it is natural to regress $x_t$ on $y_t$ and force the intercept to be $0$. Not the answer you're looking for? Browse other questions tagged arbitrage pairs-trading spread algorithm or ask your own question.
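A minimal Python sketch of this workflow — estimate the hedge ratio by regressing $p(x_t)$ on $p(y_t)$ with the intercept forced to 0, form the spread, and signal a trade when the spread deviates from its historical mean. The simulated prices, the lookback window (the whole sample here) and the 2-standard-deviation entry threshold are illustrative assumptions, not recommendations.

import numpy as np

def pairs_signal(px, py, z_entry=2.0):
    """px, py: arrays of historical prices of the two stocks."""
    beta = np.dot(px, py) / np.dot(py, py)   # OLS slope with zero intercept
    spread = px - beta * py
    z = (spread - spread.mean()) / spread.std()

    signal = np.zeros_like(z)
    signal[z > z_entry] = -1    # spread too high: sell $1 of x, buy $beta of y
    signal[z < -z_entry] = 1    # spread too low: buy $1 of x, sell $beta of y
    return beta, spread, signal

rng = np.random.default_rng(0)
py = 63.0 + np.cumsum(rng.normal(0, 0.5, 500))        # made-up price paths
px = 13.0 + 0.2 * (py - 63.0) + rng.normal(0, 0.3, 500)
beta, spread, signal = pairs_signal(px, py)
print(beta, signal[-5:])

The share counts then follow from the dollar amounts: one dollar's worth of x at 13 dollars per share is 1/13 of a share, set against beta/63 shares of y, with both legs scaled up to the desired position size.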
CommonCrawl
Is a square zero matrix positive semidefinite? Does the fact that a square zero matrix contains non-negative eigenvalues (zeros) make it proper to say it is positive semidefinite? The $n \times n$ zero matrix is positive semidefinite and negative semidefinite. "When in doubt, go back to the basic definitions"! The definition of "positive semi-definite" is "all eigen-values are non-negative". The eigenvalues of the zero matrix are all 0 so, yes, the zero matrix is positive semi-definite. And, as Gary Moon said, it is also negative semi-definite. Not the answer you're looking for? Browse other questions tagged linear-algebra matrices positive-semidefinite or ask your own question. How to make a matrix positive semidefinite?
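Equivalently, one can check the quadratic-form characterization directly (a one-line verification added for illustration): for the $n \times n$ zero matrix and every vector $x \in \mathbb{R}^n$, $$x^{\mathsf T} 0\, x = 0 \ge 0 \quad\text{and}\quad x^{\mathsf T} 0\, x = 0 \le 0,$$ so the zero matrix satisfies the positive semidefinite and the negative semidefinite condition simultaneously.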
CommonCrawl
Right-skewed distribution with mean equal to mode? Is it possible to have a right-skewed distribution with mean equal to mode? If so, could you give me an example? Easy examples come from binomial distributions -- which can hardly be dismissed as pathological or as bizarre counter-examples constructed ad hoc. Here is one for 10 trials and probability of success 0.1. Then the mean is 10 $\times$ 0.1 = 1, and 1 also is the mode (and for a bonus the median too), but the distribution is manifestly right skewed. Among continuous distributions, the Weibull distribution can show equal mean and mode yet be right-skewed: for a suitable choice of shape and scale parameters it is right (i.e. positively) skewed and has both a mean and a mode of 1. Not the answer you're looking for? Browse other questions tagged distributions mean skewness mode or ask your own question. Why not take the mode of a bootstrap distribution?
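The binomial example can be checked numerically in a few lines of Python with SciPy (an illustrative check, not part of the original answer).

import numpy as np
from scipy.stats import binom

n, p = 10, 0.1
mean, var, skew = binom.stats(n, p, moments='mvs')

k = np.arange(n + 1)
mode = k[np.argmax(binom.pmf(k, n, p))]

print(mean)                 # 1.0, i.e. n * p
print(mode)                 # 1, the most probable count
print(binom.median(n, p))   # 1.0 as well
print(skew)                 # ~0.843 > 0, i.e. right-skewed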
CommonCrawl
$\star$ Consider an open-address hash table with a load factor $\alpha$. Find the nonzero value $\alpha$ for which the expected number of probes in an unsuccessful search equals twice the expected number of probes in a successful search. Use the upper bounds given by Theorems 11.6 and 11.8 for these expected numbers of probes.
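With the CLRS bounds — at most $1/(1-\alpha)$ probes for an unsuccessful search (Theorem 11.6) and at most $\frac{1}{\alpha}\ln\frac{1}{1-\alpha}$ for a successful one (Theorem 11.8) — the condition becomes $$\frac{1}{1-\alpha} = \frac{2}{\alpha}\ln\frac{1}{1-\alpha},$$ which has no simple closed form but is easy to solve numerically. A small bisection sketch (the bracketing interval is an assumption that happens to contain the root):

import math

def f(a):
    # 1/(1-a) minus twice the successful-search bound (1/a) * ln(1/(1-a))
    return 1.0 / (1.0 - a) - (2.0 / a) * math.log(1.0 / (1.0 - a))

lo, hi = 0.5, 0.9          # f changes sign on this interval
for _ in range(60):
    mid = (lo + hi) / 2.0
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print((lo + hi) / 2.0)     # approximately 0.715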
CommonCrawl
Conference: The 35th International Colloquium on Automata, Languages and Programming (ICALP 2008). Abstract: Given query access to a set of points in a metric space, we wish to quickly check if it has a specific property. More precisely, we wish to distinguish sets of points that have the property from those that need to have at least an $\epsilon$ fraction of points modified to achieve it. We show one-sided error testers that immediately follow from known characterizations of metric spaces. Among other things, we give testers for tree metrics and ultrametrics which are optimal among one-sided error testers. Our tester for embeddability into the line is optimal even among two-sided error testers, and runs in sublinear time. We complement our algorithms with several lower bounds. For instance, we present lower bounds for testing dimensionality reduction in the $\ell_1$ and $\ell_\infty$ metrics, which improve upon lower bounds given by Krauthgamer and Sasson (SODA 2003). All our lower bounds are constructed by using a generic approach. We also look at the problem from a streaming perspective, and give a method for converting each of our property testers into a streaming tester.
CommonCrawl
In QFT, the LSZ formula can be used to calculate S-matrix elements from time ordered products of fields (say $\phi)$), which gives the probability for a system to evolve from an initial state at asymptotic time consisiting of free particles to another asymptotic state consisting of other free particles. The only thing that is needed for it to work is that the fields $\phi$ create free states at asymptotic states.The whole interacting physics is encoded in the S-matrix. In (nonequilibrium) statistical mechanics, the evolution of a system can for example be calculated in certain cases from the master equation or from the Boltzmann equation. However, the master equation does only work for Markov processes (that do not depend on the past or have no memory) and the Boltzmann equation is only valid for dilute systems as it includes only binary collisions/interactions. So compared to the S-matrix formalism in QFT, these approaches to calculate the evolution of a system in nonequilibrium statistical mechanics seem to be rather limitied and incomplete. Does there exist some kind of S-Matrix formalism or even something like an LSZ formula too, for example to calculate the transition of a system between two different long-lived (metastable) states that includes all interactions/correlations at least in principle? The S-matrix formalism can be applied to effective field theories involving unstable particles (simply by giving the masses a small imaginary part encoding their half-life - no other change to the usual setting). Apart from that I do not know anything resembling an asymptotic theory for dissipative systems. OK, here are some more details about the latter connection: Weinberg relates the S-matrix elements to transition rates (rather than transition probabilities) in Section 3.4, and uses this relation to define a corresponding master equation; the rate of change of the probability density $P_\alpha$ (where $\alpha$ indexes the possible scattering eigenstates) is obtained by the usual balance equation matching what goes in and what goes out (Weinberg, eq. (3.6.19)). The H-theorem for this equation can then be proved assuming that the transition rates satisfy detailed balance. This detailed balance condition is a consequence of time reversal invariance together with the Low equations. The latter are equivalent to the unitarity of the S-matrix. The Master equation is meaningful of course only in a context where many collisions take place everywhere, thus in a chemical or nuclear reaction context. Time reversal invariance also requires the absence of external magnetic fields. @Dilaton: I added two additional paragraphs to my answer.
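For reference, the "balance equation matching what goes in and what goes out" mentioned above has the generic master-equation form (written here in generic notation, which may differ from Weinberg's own eq. (3.6.19)) $$\frac{dP_\alpha}{dt} = \sum_\beta\left(W_{\beta\to\alpha}\,P_\beta - W_{\alpha\to\beta}\,P_\alpha\right),$$ where $W_{\alpha\to\beta}$ is the transition rate from state $\alpha$ to state $\beta$. Detailed balance, in its standard form, is the requirement that $W_{\alpha\to\beta}\,P^{\mathrm{eq}}_\alpha = W_{\beta\to\alpha}\,P^{\mathrm{eq}}_\beta$ for the equilibrium distribution $P^{\mathrm{eq}}$, and it is under such a condition that the H-theorem argument sketched above goes through.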
CommonCrawl
The following is a lightweight tutorial for loop invariant proofs. It assumes that you have heard about these proofs, but don't yet know what to do with them and how to do them. Loop invariant proofs might seem scary at first, in particular if you are not used to writing mathematical proofs. But they shouldn't be: when you plan to write a loop invariant proof, you already have an algorithm and you have an intuitive notion of why the algorithm is correct. Loop invariant proofs provide you with a very structured way of translating your intuition into something solid. Let us start with a very simple example. Consider the following computational problem: given an array A (of size n) of numbers, output the sum of the numbers in A. Note that strictly speaking, we will always use a python list instead of an array. Important: Since python lists are indexed from $0,\ldots,n-1$, I will assume that we are using indices from 0,...,n-1. Note that in textbooks you more often will have indices $1,\ldots, n$. Thus, if you compare the following examples with textbook solutions, you should keep in mind that there might be an index shift by 1. Further note that we use $A[0:j]$ (or simply $A[:j]$) to refer to the subarray of $A$ from $A$ to $A[j-1]$ (not including A[j]). In pseudo-code you will often see $A[0..j-1]$ instead, which in contrast to the notation we use includes the last index (in this example $j-1$). I provide an additional notebook with the same content but starting with index 1. Before thinking about other steps in the loop invariant proof, we need a loop invariant. The algorithm seems obviously correct. But why? Loop Invariant: At the start of the iteration with index $j$, $\ldots$ subarray $A[0:j]$ $\ldots$. 1.) Think about a specific iteration: Imagine the algorithm has been running for a while, and the next iteration would be iteration (with index) $j$. Alternatively you could imagine that the algorithm is interrupted just before interation $j$. What information did it gain so far? If $j$ is not specific enough for you, you could think about a specific $j$, e.g., 10. It might also work to think about the first few iterations, i.e., before iteration 0, before iteration 1, .... In the algorithm above we see in this case that only the value of answer changes: first it is 0, then A, then A+A, and so on. This might help enough to observe that the information that we have gained before iteration $j$ is that answer = A+A+...+A[j-1]. Loop Invariant: At the start of the iteration $j$ of the loop, the variable answer should contain the sum of all numbers in subarray $A[0:j]$. Perfect! Now will this always work? Unfortunately not. One problem might be that the information that we have gained after the loop is not exacly what we wanted to compute, but we are using it to get our final result. But this approach can work, and sometimes it is just useful, to get started thinking about the right loop invariant. Think about the algorithmic technique used. This maybe sounds complicated, but is actually quite simple. In our algorithm above, we have a variable answer, which we manipulate while the loop is running. So what we are actually doing is incrementally building a solution. And while the above is only one example, we actually use loops very often to incrementally build a solution. In these cases, the loop invariant is often a statement of the form: The solution computed so far, is the correct solution for the things that I have seen so far. 
Merging this into the incomplete loop invariant from above, this would state. Loop Invariant: At the start of the iteration $j$ of the loop, the variable answer should contain the correct solution for the subarray $A[0:j]$. Loop Invariant: At the start of the iteration $j$ of the loop, the variable answer should contain the sum of the numbers from the subarray $A[0:j]$. Lets assume we have successfully formulated a loop invariant. Fantastic, the hardest part is done! Now we need to handle the three steps of the proof: Initialization, Maintenance, Termination. Before we look at how to handle them, lets remind ourselves, why they together constitute a proof. Termination: When the for-loop terminates $i = (n-1)+1=n$. Now the loop invariant gives: The variable answer contains the sum of all numbers in subarray A[0:n]=A. This is exactly the value that the algorithm should output, and which it then outputs. Therefore the algorithm is correct. About initialization: Now we know that for 'Termination', we want that the loop invariant is true at the end? The easiest way to prove this: proving that it was true all the time. Naturally, this is done in two steps: showing that it is true at the beginning and then showing that it remains true while the loop is running. Initialization: Before the first iteration of the loop, the loop invariant states: 'At the start of iteration 0 of the loop, the variable answer should contain the sum of the numbers from the subarray $A[0:0]$, which is an empty array. The sum of the numbers in an empty array is 0, and this is what answer has been set to. Maintenance: Assume that the loop invariant holds at the start of iteration $j$. Then it must be that [...write here what the loop invariant states]. In iteration $j$, [...write here what the loop does; it should result in providing proof of the following sentence]. Thus at the start of iteration $j+1$, [...write here the loop invariant but with the variable increased by one, e.g., if the loop invariant makes a statement about subarray A[1:j], here you would have the same statement but for subarray A[1:j+1]] which is what we needed to prove. Maintenance: Assume that the loop invariant holds at the start of iteration $j$. Then it must be that answer contains the sum of numbers in subarray $A[0:j]$. In the body of the loop we add $A[j]$ to answer. Thus at the start of iteration $j+1$, answer will contain the sum of numbers in $A[0:j+1]$, which is what we needed to prove. That's it. So in summary, if the question is to prove correctness of the algorithm above, a complete solution could be the following (which is just a copy of the relevant parts above). Loop Invariant: At the start of iteration $j$ of the loop, the variable answer should contain the sum of the numbers from the subarray A[0:j]. Initialization: At the start of the first loop the loop invariant states: 'At the start of the first iteration of the loop, the variable answer should contain the sum of the numbers from the subarray A[0:0], which is an empty array. The sum of the numbers in an empty array is 0, and this is what answer has been set to. Termination: When the for-loop terminates $i = (n-1)+1 = n$. Now the loop invariant gives: The variable answer contains the sum of all numbers in subarray A[0:n]=A. This is exactly the value that the algorithm should output, and which it then outputs. Therefore the algorithm is correct. This example demonstrates what effect an if-statement in the body of the loop has on the proof. 
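Before moving on to that example, here is a minimal Python sketch of the summation loop analyzed above, as one might write it (a reconstruction for illustration — the exact original listing may differ).

def array_sum(A):
    answer = 0
    for j in range(len(A)):
        # Loop invariant: answer equals the sum of the numbers in A[0:j]
        answer = answer + A[j]
    return answer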
Lets assume we want to find the largest number in a non-empty array A. Now lets first do the proof. A lot of this is just copy-and-paste from the previous example. Loop Invariant: At the start of the iteration with index j of the loop, the variable answer should contain the maximum of the numbers from the subarray A[0:j]. Initialization: At the start of the first loop, we have j=1. Therefore the loop invariant states: 'At the start of the iteration with index j of the loop, the variable answer should contain the maximum of the numbers from the subarray A[0:1], which is A. This is what answer has been set to. Maintenance: Assume that the loop invariant holds at the start of iteration $j$. Then it must be that answer contains the maximum of numbers in subarray $A[0:j]$. There are two cases: (1) $A[j]$ > answer. From the loop invariant we get that $A[j]$ is larger than the maximum of the numbers in $A[0:j]$. Thus, $A[j]$ is the maximum of $A[0:j+1]$. In this case, the algorithm sets answer to $A[j]$, thus in this case the loop invariant holds again at the beginning of the next loop. (2) $A[j]$ $\le$ answer. That is, the maximum in $A[0:j]$ is at least as large as $A[j]$, thus the maximum of $A[0:j+1]$ is the same as the maximum of $A[0:j]$. The algorithm also doesn't change answer, thus in this case the loop invariant holds again at the beginning of the next loop. Termination: When the for-loop terminates $j = (n-1)+1 = n$. Now the loop invariant gives: The variable answer contains the maximum of all numbers in subarray $A[0:n]=A$. This is exactly the value that the algorithm should output, and which it then outputs. Therefore the algorithm is correct. What we see here is that an if-statement in the algorithm results in a case distinction in the proof. In the lecture we saw the example of better_linear_search. There we also had an if-statement, and therefore a case distinction. The better_linear_search proof is a bit more involved, because there was a return-statement in the body of the loop. I hope these examples help you get started. The best way to learn how to do loop invariant proofs is to do them. So I recommend to simply do the proofs for the examples from the practice exercises and from the homework assignment, and let me know in case you get stuck.
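For completeness, here is a similar minimal sketch of the find-maximum loop from the second example, consistent with the proof above (answer starts at A[0] and the loop runs from index 1); again a reconstruction for illustration, and the exact original listing may differ.

def array_max(A):
    answer = A[0]                  # A is assumed to be non-empty
    for j in range(1, len(A)):
        # Loop invariant: answer equals the maximum of A[0:j]
        if A[j] > answer:
            answer = A[j]
    return answer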
CommonCrawl
When an aircraft stalls, do all or most lift forces abruptly disappear or is this transition continuous? Will the aircraft drop from the sky like a brick or can this (losing lift and altitude) happen so gradually that it will be entirely unnoticed from the crew and passengers? If lift abruptly disappears when entering a stall, wouldn't people notice a considerable loss in perceived body weight at least as long the aircraft accelerates downward and builds up vertical speed downward. I expect that the airframe experiences enormous drag in the vertical direction once downward speed is being built up and it probably reaches an equilibrium condition limiting its fall rate. But if lift disappears abruptly during the stall, then you should have a sensation of free fall (no weight) at least during a short time. Is this the case or does lift diminish continiously during a stall? PS: While writing the question i have searched and found those Lift-AoA diagrams (like this: https://www.grc.nasa.gov/www/k-12/airplane/incline.html) that should be able to answer the question. Basically my question is: How quickly does Lift drop off to the right of the diagram. All diagrams i found did not in fact go much beyond the stall condition. You make some valid observations: not all lift disappears after the stall, and at the event of stall there is a downward acceleration that may or may not be perceived. Pull the aircraft back far enough to 45°, and the $C_L$ is even higher than when it stalled! Of course, the drag will be huge compared to normal operating circumstances, and any fixed wing other than a military jet fighter or a VTOL won't have enough power installed to fly at such angles of attack. Isn't there a perceived loss of body weight during the acceleration to stall. Yes there is, but it does not last long, is not necessarily very powerful, and could be referenced to other factors than stall, such as turbulence. Once the $C_L$ - $\alpha$ curve hits the trough at 15°, the acceleration stops and weight perception returns to normal. And it's sensory perception, the body only really understands what is happening if all sensory organs are aligned (vision, motion, sound), and can only react in a composed manner if it anticipated the cues and is trained in how to respond. Otherwise we just fight or flee. You mention the case of AF447. It was in a fully developed stall and descending with on average 10,000 ft/min, but the onset to stall was different. It was very gradual, and it was a high speed high altitude stall: there simply is not enough air up there to provide enough lift for the weight of an airliner. It can stall at AoA of zero, then start to fall, as a result of which the AoA changes but the pitch attitude of the aircraft does not, and your body is still aligned with gravity in a dark cockpit where the horizon cannot be seen. No physical cues on what is happening whatsoever! If you look at some of the parameters below, taken from the BEA accident report the second graph shows the normal acceleration (up and down) in G's. As you see, during the initial upset the passengers would have felt some spikes in G's up to about 1.75G upward and 0.5G downward. This is in the range of heavy to severe turbulence. They would notice. Anyone standing up might even have been tossed around. After that it settles down to around 0.75 to 1.25 G's. So there was no feeling of freefall. Several places I've read assert that the passengers probably did not know anything was wrong. I strongly disagree with that. 
Aside from the G forces during the initial part of the accident, the pitch attitude fluctuated between 20° nose up and 10° nose down. The aircraft was also rolling back and forth, sometimes up to 30°. Since it was night IMC they may not have known the seriousness of what was happening, but I can't imagine those kinds of pitch and roll motions would seem normal to anyone. Not the answer you're looking for? Browse other questions tagged lift stall or ask your own question. How does a nose down stall occur?
CommonCrawl
Usually, I am prepared,
If only correctly designed;
Otherwise I am impaired,
Though once I am made, I am blind.

My travels remain unforeseen,
By makers as well as the made.
The wind is known to intervene,
Explaining why I might have swayed.

A colour was much uninvited,
For my classic is ever so white.
After scrambling, you will be delighted,
But at first kill the guest on the right.

Find the word near the burial ground,
But count only half of the numeral.
Put a spell on the word, not the sound,
And the answer is there, not a funeral.

This will be my last riddle for a couple of days, as school is very important and I need to keep up with the work due. Consequently, I have tried to make this riddle quite hard by using extensive wordplay. The first half talks a bit about the answer; the second half talks about the directions. I have added a hint, because this riddle has proved to be a bit too difficult! $\ldots$ I then added a little bit more to the hint to make the answer less broad, but I still like all the answers anyway!

Hint: X marks the spot, and the colour's nearly hot. Is it treasure? No it's not.

One answer proposes a paper plane. The plane is ready to fly if properly constructed; if it is not properly constructed, it won't fly well. A paper plane doesn't have eyes, and, if you will, it doesn't have a viewport or cockpit, so the imaginary paper pilot can't see. "My travels remain unforeseen, By makers as well as the made": the paper plane does not know which way it will go, nor does the plane maker. "The wind is known to intervene, Explaining why I might have swayed": wind can heavily influence the direction in which the plane flies. "A colour was much uninvited, For my classic is ever so white. After scrambling, you will be delighted, But at first kill the guest on the right": take "prepared" and remove "red" to get "prepa", then scramble to get "paper". "Find the word near the burial ground, But count only half of the numeral. Put a spell on the word, not the sound, And the answer is there, not a funeral": from the word "explaining" we consider only five letters (because X is the numeral for 10, so half is 5), giving "plain", which is a homophone for "plane" (satisfying putting a spell on the word and not the sound). "X marks the spot" indicates the word "explaining" in the riddle, and "the colour's nearly hot" points to red, which is usually used to indicate heat and is located in the words "prepared" and "impaired".

Another guess is a feather. Maybe the idea is that you can buy fake feathers. "Otherwise I am impaired, Though once I am made, I am blind": some feathers look like they have an eye on them, but they can't see. Air resistance prevents a feather from falling normally. "Find the word near the burial ground, But count only half of the numeral": maybe dropped feathers; otherwise I have no idea. "Put a spell on the word, not the sound, And the answer is there, not a funeral": if you switch the letters around and change one, you get "breathe". Also not sure.

Another thought: you spell the words in the address, and weddings would be the counterpart to funerals (weddings have invites in envelopes, funerals don't!).

Another answer suggests an egg: if an egg is cracked or somehow defective, you don't cook it. Once an egg is made, it's sealed. The hen doesn't know where its egg goes, and I guess neither does the egg. "Wind eggs" are eggs that are missing their yolk, or are otherwise defective (connotations vary). Many chicken eggs are white, and they're traditionally depicted as such. "But at first kill the guest on the right": this part seems to be instructions for the wordplay that will lead to the answer, but I haven't figured it out.

Another answer suggests a coffin. Coffins are generally prepared from wood and are mostly designed in a certain shape and size, with the type of wood also chosen; without any design, it is just another piece of wood. Once a coffin is made, they just close you and you can't see anything; or it is blind to religion, caste or creed, since anyone can use it. Coffins can be used by anyone and can be transported anywhere. The person who makes the coffin doesn't know whom it will be used for; it could be himself! I am not sure how to explain this part, though. Generally wood has its own natural colour, so in olden days they wouldn't paint or colour the coffins; of late (the 19th or 20th century) they seem to have painted them fully white or decorated the inside with white cloth. Nowadays people plan their own funeral using funeral agencies who take care of this part for you, so you will be delighted after choosing from a range of coffins (size, shape and material). Not sure about this one; I think it might refer to "Hospitium" or guest right (Game of Thrones), and inferring from this, it means that the funeral agency first has to wait for the client (the guest) to die for the coffin to be used. You would mostly find coffins near the burial ground. The word C-O-F-F-I-N has 6 letters; if you count only half, you have 3 letters to choose. The first part, COF, doesn't make any sense, while choosing the second half, FIN, makes good sense. If you take FIN from COFFIN and then do a three-letter wordplay on it, you get a new word: END, which is self-explanatory.

Another answer also arrives at a paper plane. "Usually, I am prepared, If only correctly designed; Otherwise I am impaired, Though once I am made, I am blind": paper planes are usually prepared, but they fly only when they are correctly folded; otherwise they won't fly. They can fly in any direction, but they cannot see. "My travels remain unforeseen, By makers as well as the made. The wind is known to intervene, Explaining why I might have swayed": the trajectory of the plane remains unknown to everyone, including its maker; the wind alone decides the direction of the flight, or it can simply knock the plane down and it will sway back to the ground. "A colour was much uninvited, For my classic is ever so white. After scrambling, you will be delighted, But at first kill the guest on the right": students often tear pages from their notebooks to make planes, which are mostly white; most of the time the used pages, scribbled all over, are used to make the plane, but first we have to tear the page from the notebook, which can be correlated to killing. Thanks to the author for the hint: the initial part of the answer, "paper", comes from "prepared", which is in the first paragraph; when we remove (kill) the colour on the right, the remaining letters can be repositioned to form "paper". Similarly with the word "explaining", which has the X marking the spot; if we find anagrams there, we get the second part of the answer, "plane". As for the hint ("X marks the spot, And the colour's nearly hot"): usually, for sport, children mark a target on a board with an X and try to hit it with the plane. (Not sure.)

Yet another attempt: "Usually, I am prepared, If only correctly designed; Otherwise I am impaired, Though once I am made, I am blind." Not sure what that means, but I guess the "word" is the answer you're looking for. Many people, for instance, searched in war for their beloved ones and found them with an arrow in the chest, thus having the answer. As for the classic hint, "X marks the spot" is often the spot trainees try to shoot at when trying to improve their archery skills. "The colour's nearly hot" is a reference to the warm, hot colour red, which is also a symbol for archery and, for instance, darts: the closer you get to the middle, the closer you get to the red spot.

Another answer suggests a letter. A letter is usually prepared using content related to its topic, but initially a draft is designed and created, and if the content doesn't match the subject of the letter, that renders it useless; moreover, the correct format can be related to the designing part. A letter, once completed, is sealed inside an envelope, and thus we can correlate it with being blind. A letter does not know where it will be posted, as the address is usually written on the envelope and is decided by the maker or writer of the letter. Sometimes letters left out in the open, unattended, are blown away in windy weather, which clearly explains them getting swayed off in the air. In earlier times the traditional colour format was black on white, i.e. a white page and black ink, but in the late 90s kids started using coloured pages to write letters on. Not sure about the scrambling and killing part, and not sure about the last part either.

The last answer suggests a treasure map. It will have text, regions, areas, symbols, directions, scale and any specific location-based information, in this case marked by the location of the treasure; if not marked with a treasure location (implicitly or explicitly), it is just another map. Once the treasure map is made, you are hiding the location of the treasure in the map, and to an ordinary person it remains just another map, hiding in plain sight. Generally, with a treasure map, you would either hide the map, give it to someone, or hide it in plain sight, so that it is not easy to get, or because you want to retrieve the treasure at a later stage; also, you don't know when you might travel out there again to get the treasure back. To obtain the treasure, provided you have found out where it is, you would have to keep moving in all (or certain) directions using different modes of transportation. Generally maps are made out of paper, which has a sepia or natural colour tone, but they make maps in white to highlight things better. In a treasure map you have to find out where the treasure is located, and once you find it you will be delighted; and if multiple people are looking for the treasure, someone else might find it first, so you would have to kill the other person to get the treasure (this mostly refers to pirates and sailors). In a burial ground, an X is marked where the ground is to be dug up for a coffin or casket to be placed; in a treasure map, it means you find the treasure buried somewhere near the marked location. So it indirectly tells you to find the first half, i.e. "TREASURE" in "TREASURE MAP". It also indicates that in maps a cross or X is generally marked in red, denoting a treasure.
CommonCrawl
1(a) Why is a separately excited DC motor more widely used than a DC shunt motor? Explain.
1(b) Give the advantages of regenerative braking of a DC motor compared to other methods of braking.
1(c) Explain why V/f control is popular in AC induction motor control.
1(d) Give the advantages of high-frequency induction heating compared to conventional methods of heating.
1(e) Compare an SMPS with a linear regulated power supply.
2(a) Explain the effect of source inductance in a single-phase converter working in rectifier mode. Draw the relevant output voltage waveforms. Give the equations that can be used to determine the overlap angle and the output DC voltage.
2(b) In a 3-phase full converter working in rectifier mode, the input supply is 440 V (L-L), 50 Hz. If the firing angle is $\alpha = \pi/4$ and the load current is a constant 20 A with a load voltage of 370 V, determine the source inductance $L_s$ and the overlap angle $\mu$.
3(a) Explain the steps involved in the space vector modulation (SVM) technique used in a three-phase voltage source inverter.
3(b) Using a block diagram and transfer function, explain the working of a PI controller for a DC-DC converter.
4(a) Give details of the state-space averaged model of a DC-DC buck converter operating in continuous conduction mode. (10 marks)
4(b) The armature winding of a separately excited DC motor is supplied using a single-phase full bridge converter working on a 250 V, 50 Hz mains supply. If $R_a = 0.1\,\Omega$ and the armature current is 50 A, find the firing angle of the converter at 700 RPM. Assume that the field winding is supplied with rated DC voltage and the motor ratings are 110 V DC, 1000 RPM and 75 A. (10 marks)
5(a) Explain the rotor resistance method of speed control of a three-phase wound rotor induction motor. Draw the speed-torque characteristics and give the disadvantages of this technique. Draw the variations in applied voltage and motor current over the entire operation, from low speed to double the rated speed of the motor.
6(b) Selection of battery capacity in a UPS.
6(c) Constant torque and constant power regions in the control of a separately excited DC motor.
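For the numerical parts 2(b) and 4(b), the short Python script below can serve as a sanity check. It is only a sketch, assuming the standard idealized relations with a constant ripple-free load current and no device drops: for the 3-phase full converter, $V_o = \frac{3\sqrt{2}\,V_{LL}}{\pi}\cos\alpha - \frac{3\omega L_s}{\pi}I_d$ and $\cos\alpha - \cos(\alpha+\mu) = \frac{2\omega L_s I_d}{\sqrt{2}\,V_{LL}}$; for the single-phase full converter feeding the motor, $V_a = \frac{2\sqrt{2}\,V_s}{\pi}\cos\alpha = E_b + I_a R_a$, with back-EMF proportional to speed at constant field. It is not an official answer key.

import math

# Illustrative check, not an answer key: standard idealized equations,
# constant ripple-free load current, device drops neglected.

# 2(b): 3-phase full converter, 440 V (L-L), 50 Hz, alpha = pi/4,
#       Id = 20 A, average load voltage 370 V with commutation overlap.
V_ll = 440.0                        # line-to-line RMS supply voltage (V)
f = 50.0                            # supply frequency (Hz)
w = 2 * math.pi * f                 # angular frequency (rad/s)
alpha = math.pi / 4                 # firing angle (rad)
I_d = 20.0                          # constant load current (A)
V_load = 370.0                      # measured average load voltage (V)

V_ideal = (3 * math.sqrt(2) * V_ll / math.pi) * math.cos(alpha)   # output with no overlap
dV = V_ideal - V_load                                             # drop caused by commutation
L_s = dV * math.pi / (3 * w * I_d)                                # from dV = 3*w*Ls*Id/pi
cos_a_mu = math.cos(alpha) - 2 * w * L_s * I_d / (math.sqrt(2) * V_ll)
mu = math.degrees(math.acos(cos_a_mu)) - math.degrees(alpha)      # overlap angle (deg)
print(f"2(b): Ls = {L_s * 1e3:.2f} mH, overlap mu = {mu:.1f} deg")

# 4(b): single-phase full converter feeding a separately excited DC motor.
V_s = 250.0                                      # supply RMS voltage (V)
R_a = 0.1                                        # armature resistance (ohm)
I_a = 50.0                                       # armature current (A)
V_rated, N_rated, I_rated = 110.0, 1000.0, 75.0  # motor ratings (V, RPM, A)
N = 700.0                                        # required speed (RPM)

E_rated = V_rated - I_rated * R_a                # back-EMF at the rated operating point
E = E_rated * N / N_rated                        # back-EMF at 700 RPM (constant field)
V_req = E + I_a * R_a                            # armature voltage needed at 700 RPM
alpha_4b = math.degrees(math.acos(V_req * math.pi / (2 * math.sqrt(2) * V_s)))
print(f"4(b): required Va = {V_req:.2f} V, firing angle = {alpha_4b:.1f} deg")

Under these assumptions the sketch gives roughly $L_s \approx 8.4$ mH and $\mu \approx 12.4^\circ$ for 2(b), and a firing angle of about $70^\circ$ for 4(b).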
CommonCrawl
Can you believe school has already started? It seems like we were just finishing last semester. Last semester was tough because the administration had a hard time keeping records of all the students in order, which slowed everything down. This year, they are going to be on top of things. They have recognized that you have the skills to help them get into shape with your programming ability, and you have volunteered to help. You recognize that the key to getting to student records quickly is having them in a sorted order. However, they don't really have to be perfectly sorted, just so long as they are sort-of sorted. Write a program that sorts a list of student last names, but the sort only uses the first two letters of the name. Nothing else in the name is used for sorting. However, if two names have the same first two letters, they should stay in the same order as in the input (this is known as a 'stable sort'). Sorting is case sensitive based on ASCII order (with uppercase letters sorting before lowercase letters, i.e., $A < B < \ldots < Z < a < b < \ldots < z$). Input consists of a sequence of up to $500$ test cases. Each case starts with a line containing an integer $1 \leq n \leq 200$. After this follow $n$ last names made up of only letters (a–z, lowercase or uppercase), one name per line. Names have between $2$ and $20$ letters. Input ends when $n = 0$. For each case, print the last names in sort-of-sorted order, one per line. Print a blank line between cases.
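A direct way to solve this is to rely on a stable sort keyed only on the first two characters of each name: ties then keep their input order automatically, and plain ASCII string comparison of the key already places uppercase letters before lowercase ones. The sketch below is one possible Python solution, not an official reference implementation; it reads cases until the terminating 0 and prints a blank line between consecutive cases.

import sys

def main():
    data = sys.stdin.read().split()
    idx = 0
    out = []
    first_case = True
    while idx < len(data):
        n = int(data[idx]); idx += 1
        if n == 0:
            break
        names = data[idx:idx + n]; idx += n
        if not first_case:
            out.append("")                 # blank line between cases
        first_case = False
        # sorted() is stable, so names sharing a two-letter key keep their
        # input order; string comparison of the key is ASCII-based, which
        # already sorts uppercase before lowercase.
        out.extend(sorted(names, key=lambda name: name[:2]))
    if out:
        sys.stdout.write("\n".join(out) + "\n")

if __name__ == "__main__":
    main()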
CommonCrawl