"R\'enyi's quantum thermodynamics" by Natalia Bebiano, Joao da Providencia et al.
A theory of thermodynamics has been recently formulated and derived on the basis of Rényi entropy and its relative versions. In this framework, the concepts of partition function, internal energy and free energy are defined, and fundamental quantum thermodynamical inequalities are deduced. In the context of Rényi's thermodynamics, the variational Helmholtz principle is stated and the condition of equilibrium is analyzed. The obtained results reduce to the von Neumann ones when the Rényi entropic parameter $\alpha$ approaches 1. The main goal of the article is to give simple and self-contained proofs of important known results in quantum thermodynamics and information theory, using only standard matrix analysis and majorization theory.
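For reference (a standard definition, not part of the abstract): the quantum Rényi entropy of a density matrix $\rho$ is
$$S_\alpha(\rho) = \frac{1}{1-\alpha}\,\log \operatorname{Tr}\rho^{\alpha}, \qquad \alpha > 0,\ \alpha \neq 1,$$
and $S_\alpha(\rho) \to -\operatorname{Tr}(\rho\log\rho)$, the von Neumann entropy, as $\alpha \to 1$, which is the limit the abstract refers to.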
Bebiano, Natalia; da Providencia, Joao; and da Providencia, J.P. (2018), "Rényi's quantum thermodynamical inequalities", Electronic Journal of Linear Algebra, Volume 33, pp. 63-73.
The roots/solutions of the characteristic polynomial are called the eigenvalues of $A$.
Now recall that we originally began with the matrix equation $Ax = \lambda x$, which is equivalent to the matrix equation $(A - \lambda I)x = 0$. We noted that this matrix equation always has the trivial solution $x = 0$. If $\det (A - \lambda I) \neq 0$, then $(A - \lambda I)x = 0$ has only the trivial solution. However, if $\det (A - \lambda I) = 0$ (which happens precisely when $\lambda$ is an eigenvalue of $A$), then there are infinitely many solutions $x$ corresponding to this $\lambda$. These nontrivial solutions are defined below.
Definition: Let $A$ be an $n \times n$ matrix. If $\lambda$ is an eigenvalue of $A$, then a nonzero vector $v$ satisfying $(A - \lambda I)v = 0$ is called an Eigenvector of $A$ corresponding to $\lambda$.
Note that eigenvectors of $A$ corresponding to an eigenvalue $\lambda$ are not unique as there are infinitely many.
Also note that the zero vector is not an eigenvector.
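A quick worked example (mine, not from the original text): take
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.$$
The characteristic polynomial is $\det(A - \lambda I) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3)$, so the eigenvalues are $\lambda = 1$ and $\lambda = 3$. For $\lambda = 3$, solving $(A - 3I)v = 0$ gives $v = t(1, 1)^T$ for any $t \neq 0$. This illustrates both notes above: eigenvectors come in infinite families (any nonzero scalar multiple works), and $t = 0$ is excluded because the zero vector is not an eigenvector.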
Abstract: We examine the variation of the fine structure constant in the context of a two-field quintessence model. We find that, for solutions that lead to a transient late period of accelerated expansion, it is possible to fit the data arising from quasar spectra and comply with the bounds on the variation of $\alpha$ from the Oklo reactor, meteorite analysis, atomic clock measurements, Cosmic Microwave Background Radiation and Big Bang Nucleosynthesis. That is more difficult if we consider solutions corresponding to a late period of permanent accelerated expansion.
Lemma 9.6.8 (Classification of simple extensions). If a field extension $F/k$ is generated by one element, then it is $k$-isomorphic either to the rational function field $k(t)/k$ or to one of the extensions $k[t]/(P)$ for $P \in k[t]$ irreducible.
Proof. Let $\alpha \in F$ be an element generating $F$ over $k$, and consider the ring map $k[t] \to F$ sending the indeterminate $t$ to $\alpha$. The image is a domain, so the kernel is a prime ideal. Thus, it is either $(0)$ or $(P)$ for $P \in k[t]$ irreducible.
If the kernel is $(P)$ for $P \in k[t]$ irreducible, then the map factors through $k[t]/(P)$ and induces a morphism of fields $k[t]/(P) \to F$. Since the image contains $\alpha$, we see easily that the map is surjective, hence an isomorphism; in this case $k[t]/(P) \simeq F$. If the kernel is $(0)$, the map is injective and extends to the fraction field, giving an embedding $k(t) \to F$ whose image is a subfield containing $\alpha$, hence all of $F$; in this case $k(t) \simeq F$.
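A concrete illustration (not part of the original lemma): for $F = \mathbb{Q}(\sqrt{2})$ over $k = \mathbb{Q}$, the map $\mathbb{Q}[t] \to F$, $t \mapsto \sqrt{2}$, has kernel $(t^2 - 2)$, so $F \simeq \mathbb{Q}[t]/(t^2-2)$; for $F = \mathbb{Q}(\pi)$, generated by the transcendental element $\pi$, the kernel is $(0)$ and $F \simeq \mathbb{Q}(t)$.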
Abstract: This article investigates the parameter space of the exponential family $z\mapsto \exp(z)+\kappa$. We prove that the boundary (in $\mathbb{C}$) of every hyperbolic component is a Jordan arc, as conjectured by Eremenko and Lyubich as well as Baker and Rippon. In fact, we prove the stronger statement that the exponential bifurcation locus is connected in $\mathbb{C}$, which is an analog of Douady and Hubbard's celebrated theorem that the Mandelbrot set is connected. We show furthermore that $\infty$ is not accessible through any nonhyperbolic ("queer") stable component.
The main part of the argument consists of demonstrating a general "Squeezing Lemma", which controls the structure of parameter space near infinity. We also prove a second conjecture of Eremenko and Lyubich concerning bifurcation trees of hyperbolic components.
Amaro-Seoane, P., & Preto, M. (2011). The impact of realistic models of mass segregation on the event rate of extreme-mass ratio inspirals and cusp re-growth. Classical and quantum gravity, 28(9): 094017. doi:10.1088/0264-9381/28/9/094017.
Abstract: One of the most interesting sources of gravitational waves (GWs) for LISA is the inspiral of compact objects on to a massive black hole (MBH), commonly referred to as an "extreme-mass ratio inspiral" (EMRI). The small object, typically a stellar black hole (bh), emits significant amounts of GW along each orbit in the detector bandwidth. The slow, adiabatic inspiral of these sources will allow us to map space-time around MBHs in detail, as well as to test our current conception of gravitation in the strong regime. The event rate of this kind of source has been addressed many times in the literature and the numbers reported fluctuate by orders of magnitude. On the other hand, recent observations of the Galactic center revealed a dearth of giant stars inside the inner parsec relative to the numbers theoretically expected for a fully relaxed stellar cusp. The possibility of unrelaxed nuclei (or, equivalently, with no or only a very shallow cusp) adds substantial uncertainty to the estimates. Having this timely question in mind, we run a significant number of direct-summation $N$-body simulations with up to half a million particles to calibrate a much faster orbit-averaged Fokker-Planck code. We then investigate the regime of strong mass segregation (SMS) for models with two different stellar mass components. We show that, under quite generic initial conditions, the time required for the growth of a relaxed, mass segregated stellar cusp is shorter than a Hubble time for MBHs with $M_\bullet \lesssim 5 \times 10^6 M_\odot$ (i.e. nuclei in the range of LISA). SMS has a significant impact, boosting the EMRI rates by a factor of $\sim 10$ for our fiducial models of Milky Way type galactic nuclei.
I know there was a question about good algebraic geometry books on here before, but it doesn't seem to address my specific concerns.
Are there any well-motivated introductions to scheme theory?
My idea of what "well-motivated" means is specific enough that I think it warrants a detailed example.
Let $f:X \rightarrow Y$ be a morphism of schemes. The diagonal morphism is the unique morphism $\Delta: X \rightarrow X \times_Y X$ whose composition with both projection maps $\rho_1,\rho_2: X \times_Y X \rightarrow X$ is the identity map of $X$. We say that the morphism $f$ is separated if the diagonal morphism is a closed immersion.
Hartshorne refers vaguely to the fact that this corresponds to some sort of "Hausdorff" condition for schemes, and then gives one example where this seems to meet up with our intuition. There is (at least for me) little motivation for why anyone would have made this definition in the first place.
In this case, and I would suspect many other cases in algebraic geometry, I think the definition actually came about from taking a topological or geometric idea, translating the statement into one which only depends on morphisms (a more category theoretic statement), and then using this new definition for schemes.
For example, translating the definition of a separated morphism into one for topological spaces, it is easy to see why someone would have made the original definition. Use the same definition, but say topological spaces instead of schemes, and say "image is closed" instead of closed immersion, i.e.
Let $f:X \rightarrow Y$ be a morphism of topological spaces. The diagonal morphism is the unique morphism $\Delta: X \rightarrow X \times_Y X$ whose composition with both projection maps $\rho_1,\rho_2: X \times_Y X \rightarrow X$ is the identity map of $X$. We say that the morphism $f$ is separated if the image of the diagonal morphism is closed.
After unpacking this definition a little bit, we see that a morphism $f$ of topological spaces is separated iff any two distinct points which are identified by $f$ can be separated by disjoint open sets in $X$. A space $X$ is Hausdorff iff the unique morphism $X \rightarrow 1$ is separated.
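For instance (an example of my own, not from the original post): let $X$ be the line with two origins, i.e. two copies of $\mathbb{R}$ glued along $\mathbb{R}\setminus\{0\}$, and let $f: X \to \mathbb{R}$ send each copy identically to $\mathbb{R}$. The two origins are distinct points identified by $f$, yet any two open sets containing them intersect, so $f$ is not separated; equivalently, the point (origin$_1$, origin$_2$) lies in the closure of the diagonal in $X \times_{\mathbb{R}} X$ but not in the diagonal, so the image of the diagonal is not closed.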
So here, the topological definition of separated morphism seems like the most natural way to give a morphism a "Hausdorff" kind of property, and translating it with only very minor tweaking gives us the "right notion" for schemes.
Is there any book which does this kind of thing for the rest of scheme theory?
Are people just expected to make these kinds of analogies on their own, or glean them from their professors?
I am not entirely sure what kind of posts should be community wiki - is this one of them?
I would recommend Ravi Vakil's notes, which give good geometric intuition for just about everything they cover, and are forthright when the material is "just algebra" and should be regarded as such. They do, like Hartshorne, start off with a dose of abstract exercises about sheaves, but there's really no way to get around the necessity of doing so. As an example, whereas Hartshorne (in II.8) pulls the conormal and relative cotangent exact sequences for modules of differentials out of thin air (= Matsumura), Ravi's notes introduce these by emphasizing their intuitive geometric content in the smooth case, which as far as I can tell is the sort of thing you're interested in.
Dear Steven, I think Mumford's notes of the mid 60's, the first ever explaining schemes to ordinary mortals, are still the closest to what you want. They have become a book in 1988: The Red Book of Varieties and Schemes, published by Springer (LNM 1358).
After a first chapter on classical algebraic varieties, Mumford introduces schemes by quoting Felix Klein [in the 1880's!] and amazingly commenting "It is interesting to read Felix Klein describing what to all intents is nothing but the theory of schemes".
PS Although it is the exact opposite of what you are asking for (!), let me mention that conversely the notion of proper map in Algebraic Geometry seems to have influenced Bourbaki's point of view on proper maps in General (= point-set) Topology. He defines them as universally closed maps and, almost as an afterthought, mentions that in the case of locally compact spaces they are characterized by the property that compact subsets have compact inverse images.
I think the books of Shafarevich meet your criteria. He gives analytic intuition when he starts explaining schemes. I found them very helpful.
Abstract: In 1994, Jaeger, Kauffman, and Saleur introduced a determinant formulation for the Alexander-Conway polynomial based on the free fermion model in statistical mechanics. This can be used to define an invariant of knots in thickened surfaces $\Sigma \times [0,1]$, where $\Sigma$ is closed and oriented. The usual Alexander-Conway polynomial for knots in the $3$-sphere can be recovered from this construction. For knots in $\Sigma \times [0,1]$, with $\Sigma \ne S^2$, the JKS polynomial gives something new. Sawollek further showed that the JKS polynomial is an invariant of virtual knots. This invariant has been studied from many different perspectives (e.g. using biquandles, the extended knot group, and the virtual knot group) and is now commonly known as the generalized Alexander polynomial.
A knot $K \subset \Sigma \times [0,1]$ is said to be virtually slice if there is a compact connected oriented $3$-manifold $W$ and a disc $D$ smoothly embedded in $W \times [0,1]$ such that $\partial W=\Sigma$ and $\partial D=K$. This definition is due to Turaev. Here we show that the generalized Alexander polynomial vanishes on all virtually slice knots. To do this, we also prove that Bar-Natan's "Zh" correspondence and Satoh's Tube map are both functorial under concordance. The result is applied to determining the slice status of many low crossing number virtual knots.
This project is joint work with H. U. Boden: Virtual Concordance and the Generalized Alexander Polynomial [pdf].
Abstract: We derive bounds on the distribution function, therefore also on the Value-at-Risk, of $\varphi(\mathbf X)$ where $\varphi$ is an aggregation function and $\mathbf X = (X_1,\dots,X_d)$ is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as the distributions of partial minima and maxima of $\mathbf X$, is available. In order to include this information in the computation of Value-at-Risk bounds, we utilize a reduction principle that relates this problem to an optimization problem over a standard Fréchet class, which can then be solved by means of the rearrangement algorithm or using analytical results. Second, we assume that the copula of $\mathbf X$ is known on a subset of its domain, and finally we consider the case where the copula of $\mathbf X$ lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the Fréchet–Hoeffding bounds on copulas so as to include this additional information on the dependence structure. Then, we translate the improved Fréchet–Hoeffding bounds to bounds on the Value-at-Risk using the so-called improved standard bounds. In numerical examples we illustrate that the additional information typically leads to a significant improvement of the bounds compared to the marginals-only case.
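For context (a standard fact, not a result of the paper): the classical Fréchet–Hoeffding bounds assert that every $d$-dimensional copula $C$ satisfies
$$\max\Big(\sum_{i=1}^{d} u_i - d + 1,\ 0\Big) \le C(u_1,\dots,u_d) \le \min(u_1,\dots,u_d),$$
and the improvements described in the abstract sharpen these pointwise bounds using the additional dependence information before translating them into Value-at-Risk bounds.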
solving for $\lambda$ gives me approx. $2.44\times 10^{12}\,\mathrm{m}$, but my solution sheet shows $244\,\mathrm{m}$. Where is the unit conversion incorrect?
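A minimal consistency check (the original frequency is not shown, so the value below is back-calculated and only an assumption): an answer of $244\,\mathrm{m}$ corresponds to $f = c/\lambda \approx 1.23\,\mathrm{MHz}$. Working entirely in SI units,
$$\lambda = \frac{c}{f} = \frac{3.00\times 10^{8}\,\mathrm{m/s}}{1.23\times 10^{6}\,\mathrm{Hz}} \approx 244\,\mathrm{m},$$
so a result that is too large by a factor of $10^{10}$ indicates that one or more quantities were substituted in inconsistent units rather than an error in the formula $\lambda = c/f$ itself.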
Abstract: A new approach to the torsion problem in the Jacobians of hyperelliptic curves over the field of rational numbers was offered by Platonov. This new approach is based on the calculation of fundamental units in hyperelliptic fields. The existence of torsion points of new orders was proved with the help of this approach. The full details of the new method and related results are contained in .
Platonov conjectured that if we consider a set $S$ consisting of finite and infinite valuations and change the definition of the degree of an $S$-unit accordingly, then the orders of torsion $\mathbb Q$-points tend to be determined by the degrees of fundamental $S$-units.
The main result of this article is a proof of the existence of fundamental $S$-units of large degree. The proof is based on the methods of continued fractions and matrix linearization, following Platonov's approach.
Efficient algorithms for computing $S$-units using the method of continued fractions have been developed. The improved algorithms have made it possible to construct the above-mentioned fundamental $S$-units of large degree.
As a corollary, an alternative proof of the existence of torsion $\mathbb Q$-points of certain large orders in the corresponding Jacobians of hyperelliptic curves is obtained.
Keywords: fundamental unit, $S$-unit, hyperelliptic fields, Jacobian, hyperelliptic curves, torsion problem in Jacobians, fast algorithms, continued fractions, matrix linearization, torsion $\mathbb Q$-points.
Calculate expected value and variance of normally distributed…
I don't know how to do this properly.
Does it work like this? But I have no idea about the variance.
As a bonus, $X \sim N(\mu, \sigma^2/n)$.
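A short derivation of the "bonus" claim (assuming the question concerns the sample mean $\bar X = \frac{1}{n}\sum_{i=1}^{n} X_i$ of i.i.d. $X_i \sim N(\mu, \sigma^2)$, which the fragment does not state explicitly):
$$\mathbb{E}[\bar X] = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[X_i] = \mu, \qquad \operatorname{Var}(\bar X) = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i) = \frac{\sigma^2}{n},$$
and since a linear combination of independent normal variables is normal, $\bar X \sim N(\mu, \sigma^2/n)$.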
This is the first installment in a four-part blog post series about TLA+. Part 2, Part 3, Part 4. A video of a 40-minute talk that covers parts of this series.
If you find TLA+ and formal methods in general interesting, I invite you to visit and participate in the new /r/tlaplus on Reddit.
Thinking doesn't guarantee that we won't make mistakes. But not thinking guarantees that we will.
Thinking clearly is hard; we can use all the help we can get.
TLA+ is a formal specification and verification language that helps engineers design, specify, reason about and verify complex, real-life algorithms and software or hardware systems. TLA+ has been successfully used by Intel, Compaq and Microsoft in the design of hardware systems, and has started seeing recent use in large software systems, at Microsoft, Oracle, and most famously at Amazon, where engineers use TLA+ to specify and verify many AWS services.
I will get to explaining what a formal specification and verification language is and how it helps create better, more robust algorithms and digital systems shortly, but first I'd like to clarify the focus of this four-part blog post series on TLA+. I called this series, TLA+ in Practice and Theory, but it will be almost all theory, where "theory" means two things — mathematical theory and design theory. While I will cover virtually all of TLA+ — in great detail — this is not a tutorial although it may serve to complement one. I will attempt to clarify some of the concepts that can be hard to grasp when learning TLA+ — which, while very small and simple, does contain some ideas that may take a while to fully understand — but this material is by no means essential to learning TLA+ or for writing good, useful specifications, nor will reading these posts teach you how to write good specifications; so this series is neither necessary nor sufficient for putting TLA+ to good use, but I think it is both necessary and sufficient for understanding it.
I hope to make you understand what TLA+ is for, why it is designed in this way, and how its design and mathematical theory compare to other approaches to formal methods. Most importantly, I hope it will convey how powerful a tool is (almost) ordinary mathematics for reasoning about the systems we design and build. This series is addressed to those who are interested in formal methods or programming languages (even though TLA+ is not a programming language at all) and may find TLA+'s theoretical approach and pragmatic design interesting, and to those who already know TLA+ but wish to dig deeper into its theory.
There is no shortage of good beginner tutorials on TLA+. The TLA+ Hyperbook is a complete, thorough, hands-on tutorial written by Leslie Lamport, TLA+'s inventor, and is probably the best way to get started with TLA+ if you want to start applying it in practice. You should expect to work through the hyperbook and become productive enough to specify and verify real-world systems in about two weeks. Specifying Systems, also by Lamport, is an older book, less hands-on but more in-depth and with a greater emphasis on theory. Both books are freely available for download. Recently, Lamport made an online TLA+ video course. This tutorial, which focuses on PlusCal (a pseudocode-like language that compiles to TLA+), is for those who wish to start model checking specifications within hours and are not interested in theory. Additional complementary learning material is the Dr. TLA+ series of lectures, which covers various algorithms and their specification in TLA+, and a collection of TLA+ examples that you can find in this GitHub repository.
There is no shortage of papers about the theory of TLA+, either. There are nice collections of academic papers on TLA+ by both Leslie Lamport and Stephan Merz, and there are others. However, those are technical, aimed at researchers, and generally assume significant prior knowledge. (Another problem with scientific papers is that they are hardly read by anyone outside a very narrow sub-sub-discipline.) This series is intended to be a gentler introduction to the theory of TLA+ for curious practitioners (software and hardware developers) as well as academics who are more familiar with other approaches to software specification and verification.
The next three installments in the series will be a deep dive into the language itself and the theory behind it, but the choices made in the design of the language can only be understood in context. This post will provide this necessary context in the form of a general introduction to the motivation, history and design principles of TLA+, as well as a comparison of those with other approaches.
Any treatment of the theory of TLA+ must begin with an overview of its practice, as the theoretical choices made in the design of TLA+ were motivated, first and foremost, by the necessities of practice.
High complexity increases the probability of human error in design, code, and operations. Errors in the core of the system could cause loss or corruption of data, or violate other interface contracts on which our customers depend. So, before launching such a service, we need to reach extremely high confidence that the core of the system is correct. We have found that the standard verification techniques in industry are necessary but not sufficient. We use deep design reviews, code reviews, static code analysis, stress testing, fault-injection testing, and many other techniques, but we still find that subtle bugs can hide in complex concurrent fault-tolerant systems.
… [H]uman fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular 'rare' scenario. We have found that testing the code is inadequate as a method to find subtle errors in design.
… In order to find subtle bugs in a system design, it is necessary to have a precise description of that design. There are at least two major benefits to writing a precise design; the author is forced to think more clearly, which helps eliminate 'plausible hand-waving', and tools can be applied to check for errors in the design, even while it is being written. In contrast, conventional design documents consist of prose, static diagrams, and perhaps pseudo-code in an ad hoc untestable language. Such descriptions are far from precise; they are often ambiguous, or omit critical aspects… At the other end of the spectrum, the final executable code is unambiguous, but contains an overwhelming amount of detail. We needed to be able to capture the essence of a design in a few hundred lines of precise description. As our designs are unavoidably complex, we needed a highly expressive language, far above the level of code, but with precise semantics. That expressivity must cover real-world concurrency and fault-tolerance. And, as we wish to build services quickly, we wanted a language that is simple to learn and apply, avoiding esoteric concepts. We also very much wanted an existing ecosystem of tools. In summary, we were looking for an off-the-shelf method with high return on investment. We found what we were looking for in TLA+.
… In industry, formal methods have a reputation of requiring a huge amount of training and effort to verify a tiny piece of relatively straightforward code, so the return on investment is only justified in safety-critical domains such as medical systems and avionics. Our experience with TLA+ has shown that perception to be quite wrong. … Engineers from entry level to Principal have been able to learn TLA+ from scratch and get useful results in 2 to 3 weeks… without help or training. … Executive management is now proactively encouraging teams to write TLA+ specs for new features and other significant design changes. In annual planning, managers are now allocating engineering time to use TLA+.
Now that we know a little about what TLA+ is used for, let me explain what it is from a practical point of view. A formal specification language, like a programming language, is a formal system (a language with precise syntactic and semantic rules), but one that focuses on what the program should do rather than on how it should do it. For example, a specification may say that a subroutine must return its input sorted — that's the what — without saying how, meaning which algorithm is used to sort. This description should immediately raise the objection that what any sub-system does forms the how of its super-system, or in programming-speak, the what is the how of a higher abstraction layer. For example, in specifying an algorithm detailing how to find median element in a list, a first step can be sorting the list, even though which algorithm is used for sorting is still irrelevant.
A versatile specification language should therefore be able to describe both the what and the how, or, better yet, describe at any desired level of detail the operation of any abstraction layer. The realization that what and how are just a relation between two abstraction levels on some spectrum is given a precise mathematical meaning through something known as an abstraction/refinement relation, which forms the very core of TLA+'s theory.
This also suggests that a versatile specification language could also serve as a programming language, as it is able to describe both the how and the what. But we'll see that the requirements of programming languages may be at odds with what we want from a specification language, so there may be good reasons not to make a specification language serve double-duty.
There are different kinds of specification languages. Their essential differences (at least those felt by the user) are due not so much to a choice of mathematical theories as to differences in goal. The first kind is specification languages that are embedded in a mainstream programming language as contracts, either as part of the programming language itself or in annotations. Examples of such specification languages include JML for Java, ACSL or VCC for C, Spec# for C#, clojure.spec for Clojure, Eiffel, Dafny, Whiley, and SPARK. That kind of specification describes the intended behavior of individual program units like functions and classes, and can be used to verify the behavior of the units using tools like automatic randomized test generation, concolic testing, model checking, automated proofs with SMT solvers and manual proofs with proof assistants. While such specifications are very useful, they are limited in scope as they cannot easily specify global correctness properties of the program. For example, it's hard to write — let alone verify — a code-level contract that specifies that the program must eventually respond to every user request, that the program will never respond to any request from a user with information belonging to another user, or that the database the program implements is consistent or that it can never lose data.
Another kind of a specification language is one that is specifically designed to, or can with some effort, serve double duty as a programming language. Such languages include general-purpose proof assistants like Isabelle, Coq and Lean, specification and proof languages more directed at software like WhyML, and experimental programming languages based on dependent type theory such as ATS, Agda, and Idris (the F* language probably belongs somewhere between this group and the previous one). These languages have a very clear, unique and powerful advantage: they allow what's known as end-to-end verification, namely the ability to specify and verify the behavior of a program from the highest level of global properties down to the machine instructions emitted by the compiler, ensuring that the executable conforms with the specification. This ability, however, comes at a very high cost: those languages are extremely complex, requiring months to learn and years to master. For this reason they are only used by specialists, very rarely in industry, and virtually never without the support of academic experts. To date, no one has been able to verify all interesting properties of large programs in this way. All instances of successful end-to-end verification are of relatively small programs or program components, and usually require an amount of effort that is far beyond the means of all but high-assurance software.
Finally, there are standalone specification languages that don't serve as complete programming languages but may (or may not) allow generation of bits of code in some programming language. Those include Z, VDM, B-method, Event-B, PVS, ACL2, ASM, and TLA+. Z is used by Altran UK to create high-level specifications of their software (they also make use of SPARK to specify at the code level), and the B method is used in the railway sector; PVS is used at NASA (and is actually more similar in the theory it employs to tools like Isabelle/HOL and Coq).
In its focus on industrial use and user-friendliness, TLA+ shares common goals with Z, B-Method and ASM (as well as some aspects of theory), while in its focus on a universal mathematical formalization of computation it shares common goals with Isabelle and Coq.
Because there is no perfect formalism (I will use the word formalism to stand for "formal system") — one good for every purpose — each is constructed around a set of ideological or aesthetic choices that are important to point out, so we can delineate those parts of the debate over a preferred formalism — and, as programmers know, debates over formalism are common — that are a matter of aesthetics.
This has been a very practical interest. I want to verify the algorithms that I write. A method that I don't think is practical for my everyday use doesn't interest me.
We are motivated not by an abstract ideal of elegance, but by the practical problem of reasoning about real algorithms. Rigorous reasoning is the only way to avoid subtle errors… and we want to make reasoning as simple as possible by making the underlying formalism simple. (The Temporal Logic of Actions, 1994)
Practicality is an ideal whose realization depends on carefully deciding what uses the formalism serves (and what uses it does not) as well as who is its intended audience (and who is not). TLA+ was designed for engineers working in industry and for algorithm designers either in industry or academia; it is not designed for mathematicians, logicians or programming language theorists. It addresses the need for a specification and verification of real systems and algorithms. It is not intended as a programming language, nor as a tool for exploration of novel logical and mathematical ideas.
Practicality — now with a clear audience and intended use in mind — entails two requirements: simplicity, to allow engineers to quickly learn and use the language, and scalability, to allow applying the formalism to real-world software or hardware of considerable complexity and size, as well as the nice-to-have universality, to allow applying the same formalism to the different kinds of algorithms and systems an engineer may encounter.
But while scalability can be fairly easily tested, and universality is a mathematical property of the formalism, simplicity (as a cognitive measure) often depends on a personal aesthetic point of view. As my goal is to compare and contrast the underpinnings of TLA+'s theory with other approaches, I would like to describe an approach to program analysis which has been popular in programming language theory circles in the last few decades, to which TLA+ stands in stark contrast.
In practice, there is not a sharp distinction between verifying a piece of mathematics and verifying the correctness of a system: formal verification requires describing hardware and software systems in mathematical terms, at which point establishing claims as to their correctness becomes a form of theorem proving. Conversely, the proof of a mathematical theorem may require a lengthy computation, in which case verifying the truth of the theorem requires verifying that the computation does what it is supposed to do.
While the introductory words "in practice" serve as a justification, there is no actual evidence to support this thesis, which, while interesting, currently holds only in theory. "Computational mathematics" is indeed very interesting, but as developers rather than logicians, we are more concerned with the opposite problem: how computation is represented mathematically rather than how math is represented computationally.
And, there is good reason to believe that this thesis — that computational math is the right tool for mathematically reasoning about computation — is false. In constructive mathematics, the most common computational object is the constructive, or computable, function. Algorithmically speaking, a computable function corresponds to a sequential algorithm. But the systems and algorithms most software engineers build and want to reason about aren't sequential, but interactive or concurrent. The kinds of algorithms that are most interesting and common in constructive mathematics are the least interesting and common in software and vice-versa. This is not to say that one cannot reason about general algorithms in such a formalism, far from it, but reasoning generally about arbitrary algorithms in a functional formalism can be quite complicated, and in any event, their treatment is quite different from that of sequential algorithms. It seems that while the mechanization of mathematics and the mathematization of programming are similar in theory, achieving those goals in practice requires different designs.
For quite a while, I've been disturbed by the emphasis on language in computer science… I believe that the best way to get better programs is to teach programmers how to think better. Thinking is not the ability to manipulate language; it's the ability to manipulate concepts. Computer science should be about concepts, not languages. But how does one teach concepts without getting distracted by the language in which those concepts are expressed? My answer is to use the same language as every other branch of science and engineering — namely, mathematics.
… The obsession with language is a strong obstacle to any attempt at unifying different parts of computer science. When one thinks only in terms of language, linguistic differences obscure fundamental similarities.
Lamport observes that programming languages are complex, whereas ordinary, classical math is simple. Programming languages need to be mechanically translatable to efficient machine code that interacts with hardware and operating systems, and they are used to build programs millions of lines of code long that then need to be maintained by large and ever-changing teams of engineers over many years. Such requirements place constraints on the design of programming languages that make them necessarily complex, but this complexity is not necessary for reasoning about specifications; specifications are orders of magnitude shorter than code, and reasoning doesn't require the generation of an efficient executable. If it is possible to separate programming and reasoning into separate languages, each simple on its own — or, at least, as simple as possible — there may be much to be gained by such an approach.
The primary goals of a programming language are efficiency of execution and ease of writing large programs. The primary goals of an algorithm language are making algorithms easier to understand and helping to check their correctness. Efficiency matters when executing a program that implements the algorithm. Algorithms are much shorter than programs, typically dozens of lines rather than thousands. An algorithm language doesn't need complicated concepts like objects or sophisticated type systems that were developed for writing large programs.
The difference between the two approaches — reasoning about programs within the programming language itself or reasoning about programs with ordinary math — can be explained with the following analogy. Consider an electrical engineer designing an analog circuit, and a mechanical engineer designing a contraption made of gears, pulleys and springs. They can either use the standard mathematical approach, using equations describing what the components do and how they interact, or invent an algebra of electronic components or one of mechanical components and reason directly in a language of capacitors and resistors (one capacitor plus one resistor etc.) or that of gears and springs. Lamport's approach is analogous to the former, while the programming language theory approach is analogous to the latter. I think that the two different aesthetic preferences can also be linked to two philosophical views of computation. While in both views a program is a mathematical object — like a number or a relation — in Lamport's view, it is a mathematical object that ultimately serves as a description of a physical process, while the linguistic approach sees it as a pure mathematical creation, albeit one that may be incarnated, imperfectly, in earthly devices. Of course, to be convenient, the "standard mathematical" approach needs to be flexible enough that concrete domain objects — like resistors — could be modularly defined and composed.
Physicists don't have to revise the theory of differential equations every time they study a new kind of system, and computer scientists shouldn't have to change their formalisms when they encounter a new kind of system.
Similarly, composition of components in TLA+ is not function composition. We will explore composition in detail in part 4, but just to pique your curiosity, consider that a component imposes certain constraints on the behavior of the system, as it operates following some rules. It also interacts with its environment — be it the user, the network, the operating system or other components — on which it imposes no rules; the environment appears nondeterministic to the component. The composition of multiple components is then the intersection of the constraints imposed by all components, or, in logic terms, the conjunction of the formulas describing them. Composition in TLA+ is, therefore, simple logical conjunction ("and" or $\land$).
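As a rough illustration of composition-as-conjunction (a made-up toy example with invented names, not taken from the post), two trivial components and their composition might look like this in TLA+:

    ---- MODULE ComposeSketch ----
    EXTENDS Naturals
    VARIABLES x, y

    \* Component A constrains only x: x starts at 0 and may be incremented.
    \* Steps that change y look, to A, like nondeterministic environment
    \* steps, because [][...]_x only restricts steps that change x.
    CompA == (x = 0) /\ [][x' = x + 1]_x

    \* Component B constrains only y, symmetrically.
    CompB == (y = 0) /\ [][y' = y + 1]_y

    \* The composed system is simply the conjunction of the two:
    \* a behavior satisfies System iff it satisfies both constraints.
    System == CompA /\ CompB
    ====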
Most importantly, this choice of representation takes nothing from the ability to abstract computations to any desired level; quite the contrary — it offers very elegant abstraction.
The linguistic approach, however, has some advantages, like the ability to do end-to-end verification. As the unified language can be mechanically compiled to an efficient executable, assuming that the compilation process is itself verified, we can be certain that no mistake has crept into the pipeline from the top-level specification all the way down to the machine instruction. It is tempting to believe that if the semantics of a programming language were well defined, and if the composition rules were simple and consistent, then we could apply formal reasoning directly to program code and easily move up and down the abstraction hierarchy in order to verify correctness of properties at different levels of abstraction. Unfortunately, this is not the case. Anyone interested in program verification would do well to understand the enormous theoretical difficulties facing the endeavor that define its limitation and force the design tradeoffs made by the various approaches. In another blog post I discussed those theoretical challenges. Among them I mentioned results showing that the effort required to verify the behavior of some composition of components may not only grow exponentially with the number of components but also with the complexity of their internal details. This means that we cannot always hide details of implementation as we go up the abstraction hierarchy. The implication is that end-to-end verification is essentially hard regardless of how it's done. In practice, end-to-end verification is very expensive, and there is no sign that this will drastically change in the foreseeable future. For the time being end-to-end verification remains reserved for very specialized software, or very small, secure cores of larger systems, and thus forms its own niche. Luckily, very few software systems truly require end-to-end verification, and those that do are either small, or only require such strong guarantees of one or two small internal components.
Giving up on a goal that cannot at the moment be affordably realized, and is not required by the vast majority of software, is a very reasonable concession, provided we can trade it for other advantages. (Actually, end-to-end verification can also be accomplished even in the simple-math approach, by mechanically compiling a program to its formal mathematical representation. There have been two academic projects to do end-to-end verification in TLA+, for C programs and for Java bytecode. This approach to program verification (and here I mean an actual compilable program) is similar to the verification technique of characteristic formulas, which we'll explore a bit in part 4. However, this is not the primary focus of TLA+.) Those advantages would be simplicity, universality and scalability, things that can make it applicable for a wide range of software and for a wide range of users. Of course, nothing prevents us from complementing this approach with code-level specification tools, which we can use to verify local correctness properties, like those of particular code units, rather than an entire system.
Whether or not your aesthetic choices coincide with Lamport's, I think this radically different ideology from that of programming language theory deserves consideration if it indeed leads to a more pragmatic approach to verification.
Leslie Lamport, best known for his major contributions to the theory of concurrent and distributed algorithms, began his work on formal methods in the late 1970s, from a pragmatic standpoint as we've seen. His work was roughly contemporary to that of Tony Hoare and Robin Milner, but from the very beginning took a different path from their algebraic/linguistic work on process calculi. He made some important contributions to the theory of verifying concurrent/interactive/non-terminating computations, especially to specification via refinement and to the concept of safety and liveness properties (terms he coined). Those mathematical ideas — that would serve as the foundation for TLA+ — were developed years before the formalism itself. This sets TLA+ apart from the specification languages based on programming language theory, where in many cases the formalism came first and the semantics later. This difference of approach runs deep in computer science, where the "concepts first" approach is preferred by the "computational school" (research of complexity theory and algorithms), and the "formalism first" approach is preferred by the "programming school" (research of programming languages). The difference can be traced back all the way to Alonzo Church and Alan Turing. Whereas the first invented a formalism and later came to believe it is sufficient to describe all computations (but never precisely defined the concept), the second thought of computation as an abstract concept first, and then later picked an arbitrary formalism to describe it so that the concept could be treated rigorously.
So around 1993 Lamport invented TLA+, a complete formalism for specification built around TLA. This short note by Lamport is a summary of how he created TLA+ in a process of erasing programming constructs from the formal description of algorithms, and distilling algorithms down to their mathematical essence.
While TLA+ was not originally designed with any form of mechanical verification in mind, in 1999, Yuan Yu wrote a model checker for TLA+, called TLC, and in 2008 a team at INRIA built a mechanical proof checker for TLA+, called TLAPS. In 2009, Lamport introduced PlusCal, an "algorithm language" that looks like pseudocode yet is completely formal, and is compiled into readable TLA+. Recently, research has started on building a more state-of-the-art model checker for TLA+.
A TLA+ specification describes a system at some chosen level of detail. It can be no more than a list of some global properties, a high-level description of the algorithm, a code-level description of the algorithm, or even a description of the CPU's digital circuits as they're computing the algorithm — whatever level or levels of abstraction you are interested in.
I believe that the best language for writing specifications is mathematics. Mathematics is extremely powerful because it has the most powerful abstraction mechanism ever invented — the definition. With programming languages, one needs different language constructs for different classes of system — message passing primitives for communication systems, clock primitives for real-time systems, Riemann integrals for hybrid systems. With mathematics, no special-purpose constructs are necessary; we can define what we need.
… Perhaps the greatest advantage of specifying with mathematics is that it allows us to describe systems the way we want to, without being constrained by ad hoc language constructs. Mathematical manipulation of specifications can yield new insight.
TLA+ is not a programming language. It has no built-in notion of IO, no built-in notion of threads or processes, no heap, no stack — actually, no concept of memory at all — and not even subroutines (PlusCal does have processes, stacks and subroutines). And yet, any software or hardware system, and almost any kind of algorithm, can be written in TLA+ succinctly and elegantly. By reasoning about algorithms or systems rather than programs (in part 3 we'll explore the difference between the two) we gain power and simplicity; in exchange we give up the ability to mechanically translate an algorithm into an efficient executable. (In principle, some TLA+ algorithms, those specified with sufficient detail, could certainly be translated into code, although I don't know if anyone has actually attempted that, or how useful that would be.) Once you've specified your algorithm or system at the level or levels of abstraction that you find most important, you translate the algorithm to your chosen programming language manually. (There are interesting techniques, less extreme than compiling the code back to TLA+, that can be employed if it is deemed necessary to further verify that the code matches the specification. One of them is capturing system logs and then using the model checker to check that they conform with the behavior of the specification.) If you are designing an algorithm, the TLA+ specification will closely resemble the code, or, at least, be at the same abstraction level. However, most of the time, engineers use TLA+ to design and reason about complete systems, in which case the specification will be at a much higher abstraction level than the code. Lamport likens this use of TLA+ to that of a blueprint when designing a house (an expanded version of that article is here).
A system or an algorithm — at any abstraction level — is expressed in TLA+ as a single logical formula (obviously, if it is non-trivial, we compose it of more manageable pieces) that can be manipulated like any mathematical formula. As scary as that may sound to programmers, the experience is very similar to programming and can be learned by programmers faster than most programming languages. It's math that feels a lot like programming. TLA+ is one of those rare combinations of simplicity, elegance, versatility and power, that, at least in me, evoked an impression similar to the one I had years ago when I learned Scheme. With the exception of its proof language — which is guided by other design goals — it is also rather minimalistic. TLA+ is certainly not perfect, but to me, it feels as elegant and as finely crafted as Scheme or Standard ML.
Computer scientists and programmers love talking about abstractions. One of my favorite things about TLA+ is that it gives a precise mathematical definition to the concept of abstraction, and allows us to reason about it directly: In TLA+, the abstraction/implementation relation is expressed by the simple, familiar logical implication: $X \Rightarrow Y$ is the proposition that $X$ implements $Y$ or, conversely, that $Y$ abstracts $X$.
Code-level specification languages — whether based on contracts or on types — make a clear distinction between algorithms, expressed in the body of subroutines (e.g., a routine implementing Quicksort), and algorithm properties (e.g., "the subroutine returns a sorted list"), expressed as contracts or types. Even contract systems or type systems that allow full use of the programming language when expressing properties (e.g. dependent types) still make this clear distinction: semantically, properties are distinct from algorithms. TLA+ makes no such distinction. The property "the algorithm sorts" and the algorithm Quicksort are just two specifications at different levels of detail, different levels of abstraction, and $X \Rightarrow Y$ therefore also means that specification $X$ has the property $Y$.
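To make the implication-as-implementation idea concrete with something smaller than sorting (a made-up example with invented names, not from the post):

    ---- MODULE RefinementSketch ----
    EXTENDS Naturals
    VARIABLE x

    \* A detailed specification: x starts at 0 and is repeatedly incremented.
    Counter == (x = 0) /\ [][x' = x + 1]_x

    \* A more abstract specification (what we would informally call a
    \* property): x starts as a natural number and never decreases.
    Monotone == (x \in Nat) /\ [][x' >= x]_x

    \* "Counter implements (refines) Monotone", equivalently "Counter has
    \* the property Monotone", is just the implication between the two.
    THEOREM Counter => Monotone
    ====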
As I mentioned above, TLA+ makes it easy to naturally express many different kinds of algorithms: sequential, interactive, concurrent, parallel, etc. There is, however, one glaring omission: while TLA+ allows specifying probabilistic algorithms, the logic lacks the power to reason about them, like specifying that an algorithm yields the right answer with probability 0.75; this shortcoming, however, is probably easy to rectify in a future version of the language. It is easy to specify properties like worst-case time or space complexity, and even properties like "the system will eventually converge to one of three attractors".
But elegance, universality and power aside, the biggest practical impact you feel when using TLA+ (and the biggest practical difference between TLA+ and some other specification tools) is the availability of a model checker. A model checker can make the difference between a formal method that actually saves you development time, and one that may be prohibitively expensive. But, most importantly, with a proof assistant you can prove that an algorithm is correct only if it actually is, and, if you're specifying a complex algorithm or system, chances are it isn't. A model checker gives you, at the push of a button, a counterexample that shows you exactly where things have gone wrong.
At its core is TLA — the temporal logic of actions — which is in some ways analogous to ordinary differential equations. But whereas ODEs are used to describe continuous dynamical systems, TLA is used to describe arbitrary discrete dynamical systems. In addition, TLA accommodates reasoning that is useful when analyzing programs in particular, most importantly assertional reasoning and refinement; the latter gives a precise mathematical meaning to the abstraction/implementation relationship between algorithms (e.g., a parallel mergesort implements, i.e. refines, general mergesort). While TLA incorporates some linear temporal logic, and despite its name, TLA's main design objective is actually to reduce the reliance on temporal logic and temporal reasoning as much as possible, because while simpler than many alternatives, it is not quite "ordinary math". We will explore TLA in depth in part 3 and part 4.
Just like ODEs describe the evolution of a continuous system over some phase space and the variables take values over some field, so too TLA requires some state space (but does not dictate a particular one) in which the state of the system, namely the values of the variables, is defined at any point in time. The "+" in TLA+ uses a formal set theory based on ZFC to allow TLA variables to take on many kinds of values (atoms like numbers and strings, finite and infinite sequences, sets, records, and functions). We'll go over this "static" part of TLA+ in part 2.
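To give a concrete feel for what such a specification looks like, here is (roughly) the standard hour-clock example from Specifying Systems; the constructs used here are explained in the following parts:

    ---- MODULE HourClock ----
    EXTENDS Naturals
    VARIABLE hr

    \* The possible initial states: hr is any value from 1 to 12.
    Init == hr \in 1..12

    \* One step of the system: the hour advances, wrapping around at 12.
    Next == hr' = IF hr = 12 THEN 1 ELSE hr + 1

    \* The whole system as a single temporal formula: a behavior satisfies
    \* Spec iff its first state satisfies Init and every step either
    \* satisfies Next or leaves hr unchanged (a "stuttering" step).
    Spec == Init /\ [][Next]_hr
    ====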
Finally, TLA+ has a module system to allow information hiding and elaborate abstraction/implementation or equivalence relations. We'll go over the module system and composition in part 4.
The TLA+ software package contains an IDE called "the toolbox", a $\LaTeX$ pretty-printer, and the TLC model checker, which lets you verify properties of algorithms written in a useful subset of TLA+ on restricted finite-state instances of your system at the push of a button. The proof system TLAPS, available as a separate download but fully integrated with the toolbox, allows interactive work with the proof system for the verification of proofs. TLAPS is not a self-contained proof assistant, but rather a front-end for the TLA+ proof language, which uses automated solvers and the proof assistant Isabelle as backends for discharging proof obligations. As this series focuses on the theory, I will not discuss the use of the TLA+ tools, even though they are of the utmost practical importance. The learning material I linked to covers their use.
While the TLA logic is universal (with the aforementioned exception of probabilistic algorithms), TLA+ is a tool for a job, and the job is the formal specification and verification of real-world algorithms and large software systems by practitioners. Like all elegant tools, it can be used to do more, but that doesn't mean it's optimized for tasks it wasn't designed to do. The ergonomics of the language as well as the current tooling mean that it is not the best choice in every circumstance. For example, it can be used to mechanically prove general mathematical theorems, but interactive theorem provers designed for that particular task likely do a better job. It can also be used to specify programming languages, but since it lacks syntactic constructs that make it easy to embed different languages nicely, other tools may handle that task better. (There aren't any sophisticated constructs, like macros or arbitrary infix operators, that may help with shallow embedding, and there aren't convenient quoting constructs that can help with deep embedding. How nice the result is depends on the embedded language. TLA+'s own rather complex grammar is specified in TLA+ as a BNF grammar, and the specification looks nice.) The limitations of the model checker and the proof system are such that for exploring numerical and statistical algorithms, I would suggest using specialized languages like Matlab, Octave or Julia. And even though the logic itself is universal, that does not mean that it is always the best formalism to derive any kind of insight about a system. Different formalisms may offer different insight.
I would like to address a few interesting misconceptions about TLA+ that I found online.
The first is that TLA+ is only a tool for verifying distributed systems. It is true that these days, TLA+ is mostly known in the context of distributed systems and concurrent algorithms. There are a few reasons for this: 1) Lamport's algorithmic work is in concurrent and distributed algorithms, and the Balkanized nature of computer science places certain limits on the influence of ideas outside the sphere in which their originators are known. 2) Few other general software verification formalisms are able to handle concurrency as elegantly as TLA+, so that is just where it shines in comparison. 3) When engineers write software systems that are too complex or subtle to be obviously correct and could therefore benefit from formal verification, it is usually the case that a concurrent or distributed algorithm is involved. Nevertheless, there is nothing that intrinsically limits TLA+ to concurrent and distributed algorithms. In fact, TLA+ has no special concurrency constructs — like message passing or even processes — at all.
TLA+ incorrectly treats state as global, which is scientifically incorrect in addition to being disastrous for engineering.
The error here is confusing a mathematical notation with the systems it describes. If we use math to describe the position along one dimension of $n$ particles, we might define a vector $\boldsymbol x$, with components $x_1…x_n$, each being the position (or state) of a single particle. This notation (which in TLA+ is written as $x[i]$ instead of $x_i$) says absolutely nothing about the actual physical interaction of the particles. It's just notation, and it treats state as global no more and no less than the mathematical notation of the vector $\boldsymbol x$ does. That we have a notation for the vector $\boldsymbol x$ does not actually mean that each particle is instantaneously aware of the position of all others. TLA+ and math in general don't "treat state as global"; they just allow denoting some global notion of state, one that the actual system described does not have.
One may indeed ask why we'd ever want to allow the specification of non-physically realizable interaction at all, and the answer is that working in a formalism that restricts what's expressible to what's physically realizable would make it more complex and less general (consider what would be required for math to have a built-in mechanism preventing us from expressing quantities representing velocities greater than the speed of light). Another reason is that we may wish to define non-physical behavior as a high-level abstraction in order to specify and prove certain properties. For example, if we're specifying an algorithm for distributed transactions, we would very much like to show that our system would behave as if distributed transactions were instantaneous. In that case, we would specify our realizable, physically realistic algorithm, then we'd specify the non-realizable abstract behavior, and then we'd show that our low-level algorithm implements the high-level behavior. If we weren't even allowed to describe the abstract behavior, we wouldn't be able to prove that the algorithm has this desired property.
Another (rather technical) misconception is that TLA+ is "not higher order", and therefore cannot specify some higher-order algorithms. We will get into the specifics of the formalism in the next three installments and examine its precise "order", which would make the error of this assertion clearly apparent, but at this point I'd like to point out that the very phrasing of the claim is an example of what Lamport calls the "Whorfian syndrome" — after the Sapir-Whorf hypothesis, which posits that the language we use shapes our thought — namely, a confusion of language with reality, or of the signifier with the signified, or of the formalism and that which it formalizes. The term "higher-order" is a property of the formal description of a system, not the system itself.
When programmers say that a program is higher-order, they mean that it is parameterized by another program. But such an interaction between two programs or algorithms is only higher-order if we choose to model interaction in our language using function composition; there are other ways of expressing such a relationship.
Therefore, what constitutes "higher-order" vs. "first-order" depends on the domain of discourse and/or on the modes of composition. Both are features of the formalism, not of the formalized "reality". We will see that TLA+ expresses behaviors that may perhaps be higher-order in some formalisms as first-order.
Comparisons between radically different formalisms tend to cause a great deal of confusion. Proponents of formalism A often claim that formalism B is inadequate because concepts that are fundamental to specifications written with A cannot be expressed with B. Such arguments are misleading. The purpose of a formalism is not to express specifications written in another formalism, but to specify some aspects of some class of computer systems. Specifications of the same system written with two different formalisms are likely to be formally incomparable… Arguments that compare formalisms directly, without considering how those formalisms are used to specify actual systems, are useless.
Writing code in a high-level language can help one think more clearly in exactly the same way that writing natural language or drawing blueprints can… we now have programming languages, not just specification languages, which can be very useful in prototyping high-level specifications, writing executable specifications, and even evolving those into actual programs… With all due respect to Leslie Lamport, who is a great computer scientist and programmer. But he should learn Haskell.
That misconception is interesting because the commenter's expectation of what should be specified is shaped by the particular abilities of his favorite programming language (and I should note that from the level of flexible abstraction or concreteness offered by TLA+, all programming languages seem almost equally constrained in their range of expression). I challenge that commenter to specify in his favorite programming language the very important and very reasonable program properties that are easily, naturally and clearly specified in TLA+, such as: "every user request would eventually be answered by the server", or "when constructing a response to a request by a user, no information belonging to a different user will be read from the database", or "the transactions appear to have taken place instantaneously (i.e., they're linearizable)", or "the worst case complexity of this sort function is $5n \log n$". As I already mentioned, there are some research programming languages that do allow expressing such properties, but they make their own nontrivial tradeoffs.
[D]o we need to specify programs in a higher-level language before implementing them in a programming language? People who design PLs would say no, their languages make what the program does so obvious that no higher-level description is needed. I think the first PL for which this was believed to be true was FORTRAN. It's not true for FORTRAN and I don't think it's true for any existing general-purpose PL.
As engineers, we should use tools that help us build software that complies with requirements at the lowest cost. Formal methods and, in fact, all software verification methods — including the many forms of testing — lie on a spectrum of the effort they require and the confidence they provide, and we should pick those that match the requirements of the system we're building. It's too early to tell precisely how much TLA+ helps to develop software in general, and exactly what problems benefit from it most, but I believe — and experience like that of Amazon seems to support that — that a large class of software systems can significantly benefit from a tool like TLA+.
Even if you have a perfect proof that a program satisfies a specification, how do we verify that a specification is correct? … It's hard to believe that a programmer who have trouble write a correct program in the first place can magically write perfectly correct specification. People with experiences with mathematical logics know how hard and technical it is to encode precisely what we want to prove in a logical language even for relatively simple combinatorial facts. I don't say it's impossible to write a correct specification, but it is much harder than writing a correct program in the first place.
… We believe that if one really understands what a program should do, then he can specify it precisely in an understandable manner.
A high-level formalism like TLA+ is designed to allow a precise description of a software (or hardware) system, that allows both its assumptions as well as operation to be stated clearly and concisely enough that the correctness of the specification is far more likely than the correctness of a program. As to the question of the ability of programmers to write such specifications, if you are able to convert informal requirements to a program — which is just a formal specification at a fairly low level — and if you understand your program well enough, then you are also able to specify it formally at a higher level, in a way that shows how and why it works. Actually doing it is a matter of some relatively short training and some practice. The practical successful experience of a company like Amazon with TLA+ shows that it is both useful and easy enough to use.
Advocates of programming methodologies have tended to talk as if their methodologies automatically generate good programs. A programming methodology is no substitute for intelligent reasoning about algorithms and their complexity, and cannot by itself lead one to a good method of solution. "Structured programming" would not have helped Euclid discover his algorithm.
The experience of Amazon has shown that using formal methods wisely can supplement other methodologies, and reduce the cost of development.
[A] conclusion we have drawn from our interaction with developers is that real developers do appreciate contracts… Unfortunately, we have also seen an unreasonable seduction with static checking. When programmers see our demos, they often develop a romantic enthusiasm that does not correspond to verification reality. Post-installation depression can then set in as they encounter difficulties while trying to verify their own programs.
Reality is somewhere in the middle: we can feasibly verify some properties of some systems, and in general, the more complex or big the program, the more tricky the property, or the more confident we wish to be in the veracity of the verification process, the more work is required. Some reasonable compromise must always be made depending on the software requirements. I believe that TLA+ hits a sweet spot in the compromises it makes, and its versatility in choosing the desired level of detail or abstraction in the specification gives the user freedom to pick a useful point in terms of utility and effort. TLA+ offers simplicity, universality and scalability, which it achieves by making two concessions: not being a programming language, and not trying to be a general tool for studying theory. The former largely sacrifices end-to-end verification, which is neither feasible nor necessary for virtually all ordinary software; the latter sacrifices power which is of no relevance to TLA+'s intended goal: a tool for reasoning about the behavior (especially correctness) of systems and algorithms, not the study of theory.
In addition, TLA+ nicely complements affordable code-level formal tools like static analyzers. TLA+ gives up on the 100% confidence of code-level formal methods, whereas static analyzers give up on verifying complex global properties.
My feeling is that people are looking for magic bullet in math, [that] there's this wonderful thing… somehow you discover the right math and that math solves the problem for you. That's not the way it happens. … When you understand something then you can find the math to express that understanding. The math doesn't provide the understanding.
A lot of people… addressing the same kind of problems that I do — specification and verification — are looking for these new math abstractions… I gave up on that hunt 20 years ago. I discovered that, for example, for proving the correctness of a concurrent algorithm there's one basic method that works — proving an invariant. And you can package that in however many ways you want but there isn't anything that's going to make the proof any simpler. And so what I've done is just taking the method that goes as directly as possible from the problem into the math, and I have it easy because since I describe the algorithm in terms of math, it's already in math so I don't need any semantics… to translate from how I'm describing the algorithm into math. And so I don't need some new fangled kinds of math to try to smooth that process.
… My hunch is that people will find that all these new kinds of math are not really going to solve the problem.
A good mathematical formalism is a necessary condition for reasoning about programs, but it doesn't make answering all questions possible, let alone easy (see correctness and complexity). It is a famous property of math that while it describes things made of very simple parts, it can ask questions that are very hard to answer.
On top of this, every formalism introduces some "accidental complexity", difficulties that arise from the choice of the formalism itself rather than the problem. TLA+ is not immune, although it does very well on that account. Those accidental difficulties may make it harder to answer a specific kind of question. This means that we cannot have a single formalism that is best for all uses; this is true in computer science as it is true in mathematics.
You will find that no matter what formalism you choose, the actual work is similar whether it's in TLA+ or in Coq. Being able to think mathematically, i.e., to think precisely, is a prerequisite to using any formalism, but that ability improves with actual work. TLA+ is great for practicing mathematical thinking because it is so much simpler than other general formalisms designed to reason about programs. It helps you practice logic and mathematical thinking using the simplest possible math and simplest possible logic, letting you concentrate on the problem with as little sophisticated details of formalism as possible, all this without detracting anything from its expressivity or reasoning power. Even if you are drawn to theories like type theory and intuitionistic logic for aesthetic or maybe even pragmatic reasons, it can help to have a grasp of formal ordinary math and classical logic before trying to grapple with those more complicated formalisms.
Programmers have this idea that programming languages are simple but math is difficult, complicated. Which is absurd! Mathematics is so much simpler than even the simplest programming language, but people have just had fear of math instilled into them. …[P]eople start using [TLA+] — it teaches them math… It makes using math as much fun as programming.
TLA+ is the most valuable thing that I've learned in my professional career. It has changed how I work, by giving me an immensely powerful tool to find subtle flaws in system designs. It has changed how I think, by giving me a framework for constructing new kinds of mental-models, by revealing the precise relationship between correctness properties and system designs, and by allowing me to move from 'plausible prose' to precise statements much earlier in the software development process.
Next week we'll learn how TLA+ uses logic to describe the state of programs, meaning their data, how formal logic can be used for specifications in general, as well as take a look at TLA+'s declarative proof language. | CommonCrawl |
Abstract: We prove that any topological group $G$ containing a subspace $X$ of the Sorgenfrey line has spread $s(G)\ge s(X\times X)$. Under OCA, each topological group containing an uncountable subspace of the Sorgenfrey line has uncountable spread. This implies that under OCA a cometrizable topological group $G$ is cosmic if and only if it has countable spread. On the other hand, under CH there exists a cometrizable Abelian topological group that has hereditarily Lindelöf countable power and contains an uncountable subspace of the Sorgenfrey line. This cometrizable topological group has countable spread but is not cosmic. | CommonCrawl |
We synthesize experimental data from recent studies to construct a computational model for the gene regulatory network that governs the development of immune cells and use it to explain several surprising results. At the heart of the model is a cross-antagonism between the macrophage-promoting factor Egr and the neutrophil-promoting factor Gfi. This module is capable of giving rise to both graded and bistable responses. Increasing the concentrations of these factors forces the system into the bistable regime in which cells can decide stochastically between fates. This bistable switch can be used to explain cell reprogramming experiments in which a gene associated with one cell fate is induced in progenitors of another. In one such experiment, C/EBP$\alpha $, a neutrophil promoting factor, was induced in B cell progenitors which then differentiated to macrophages. Our model shows that if C/EBP$\alpha $ is induced early, it can induce differentiation to a neutrophil. In B cell progenitors, however, the bistable switch is already in a macrophage promoting state. Thus, expression of C/EBP$\alpha $ cannot activate the neutrophil pathway, but it can repress the B cell pathway and promote macrophage differentiation. | CommonCrawl |
In 2015, the Critical Assessment of Genome Interpretation (CAGI) proposed a challenge to devise a computational method for predicting the phenotypic consequences of genetic variants of a lysosomal hydrolase enzyme known as $\alpha$-N-acetylglucosaminidase (NAGLU). In 2014, the Human Gene Mutation Database reported 153 NAGLU mutations associated with MPS IIIB, 90 of which are missense mutations. The ExAC dataset catalogued 189 missense mutations in NAGLU based on exome sequence data from about 60,000 individuals, 24 of which are known to be disease associated. The biotechnology company BioMarin has quantified the relative functionality of NAGLU for the remaining subset of 165 missense mutations. For this particular challenge, we examined the subset of missense mutations within the ExAC dataset and predicted the probability of a given mutation being deleterious, relating this measure to the capacity of enzymatic activity. In doing so, we hoped to learn the degree to which changes in amino acid physicochemical properties are tolerable for NAGLU function.
Once the probability scores were generated, the dataset was run through multiple machine learning algorithms to build a model for predicting the enzymatic activity of MPS IIIB-related mutations. The prediction used the PolyPhen-2 probability score and other information about each mutation (amino acid type, location, allele frequency, etc.) as input features. This generated a predicted aggregate score for each mutation, which was then reported back to CAGI. The results of the analysis are significant enough to give confidence that the scores are decent predictors of enzymatic activity given a mutation in the NAGLU amino acid sequence. | CommonCrawl
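Purely as an illustration of the kind of model fitting described above (the file name, column names and the choice of a random forest are hypothetical, not taken from the repository):

library(randomForest)
# hypothetical input: one row per missense mutation with its PolyPhen-2 score,
# mutation descriptors and the measured relative enzymatic activity
d <- read.csv("naglu_variants.csv", stringsAsFactors = TRUE)
fit <- randomForest(activity ~ polyphen2_prob + ref_aa + alt_aa + position + allele_freq,
                    data = d)
d$predicted_activity <- predict(fit, newdata = d)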
One of our interests for the future is to use this renormalization procedure to investigate models of correlated electron systems. It is particularly well suited for these models, since it allows the renormalization of the Hamiltonian itself, and therefore properties of bound or correlated states can be studied very well.
Phys. Rev. Lett. 111, 175301 (2013).
We present a systematic construction of effective Hamiltonians of periodically driven quantum systems. Due to an equivalence between the time dependence of a Hamiltonian and an interaction in its Floquet operator, flow equations, which permit the decoupling of interacting quantum systems, allow us to identify time-independent Hamiltonians for driven systems. With this approach, we explain the experimentally observed deviation from the expected suppression of tunneling in ultra-cold atoms.
J. Phys. A 36, 2707-2736 (2003).
To contrast different generators for flow equations for Hamiltonians and to discuss the dependence of physical quantities on unitarily equivalent, but effectively different initial Hamiltonians, a numerically solvable model is considered which is structurally similar to impurity models. By this we discuss the question of optimization for the first time. A general truncation scheme is established that produces good results for the Hamiltonian flow as well as for the operator flow. Nevertheless, it is also pointed out that a systematic and feasible scheme for the operator flow on the operator level is missing. For this, an explicit analysis of the operator flow is given for the first time. We observe that truncation of the series of the observable flow after the linear or bilinear terms does not yield satisfactory results for the entire parameter regime as - especially close to resonances - even high orders of the exact series expansion carry considerable weight.
Phys. Lett. A 64, 275-280 (2002).
The spin-boson model is studied by means of flow equations for Hamiltonians. Our truncation scheme includes all coupling terms which are linear in the bosonic operators. Starting with the canonical generator $\eta_c = [H_0, H]$ with $H_0$ resembling the non-interacting bosonic bath, the flow equations exhibit a universal attractor for the Hamiltonian flow. This allows one to calculate equilibrium correlation functions for super-Ohmic, Ohmic and sub-Ohmic baths within a uniform framework including finite bias. Results for sub-Ohmic baths might be relevant for the assessment of dissipation due to 1/f-related noise, recently found in solid-state qubits.
This paper shows how flow equations can be used to diagonalize dissipative quantum systems. Applying a continuous unitary transformation to the spin-boson model, one obtains exact flow equations for the Hamiltonian and for an observable. They are solved exactly for the case of an Ohmic bath with a coupling $\alpha = 1/2$. Using the explicit expression for the transformed observable one obtains dynamical correlation functions. This yields some new insight into the exactly solvable case $\alpha = 1/2$. The main motivation of this work is to demonstrate how the method of flow equations can be used to treat dissipative quantum systems in a new way. The approach can be used to construct controllable approximation schemes for other environments.
Physica D 126, 123-135 (1999).
The Hénon--Heiles Hamiltonian was introduced in 1964 [M. Hénon, C. Heiles: Astron. J. 69, 73 (1964)] as a mathematical model to describe the chaotic motion of stars in a galaxy. By canonically transforming the classical Hamiltonian to a Birkhoff-Gustavson normalform Delos and Swimm obtained a discrete quantum mechanical energy spectrum. The aim of the present work is to first quantize the classical Hamiltonian and to then diagonalize it using different variants of flow equations, a method of continuous unitary transformations introduced by Wegner in 1994 [Ann. Physik (Leipzig) 3, 77 (1994)]. The results of the diagonalization via flow equations are comparable to those obtained by the classical transformation. In the case of commensurate frequencies the transformation turns out to be less lengthy. In addition, the dynamics of the quantum mechanical system are analyzed on the basis of the transformed observables.
J. Stat. Phys. 90, 889-898 (1998).
A new approach to dissipative quantum systems modelled by a system plus environment Hamiltonian is presented. Using a continuous sequence of infinitesimal unitary transformations the small quantum system is decoupled from its thermodynamically large environment. Dissipation enters through the observation that system observables generically 'decay' completely into a different structure when the Hamiltonian is transformed into diagonal form. The method is particularly suited for studying low--temperature properties. This is demonstrated explicitly for the super-Ohmic spin-boson model.
Euro. Phys. Jour. B 5, 605-611 (1998).
Ann. Physik (Leipzig) 6, 90-135 (1997).
We introduce a new theoretical approach to dissipative quantum systems. By means of a continuous sequence of infinitesimal unitary transformations, we decouple the small quantum system that one is interested in from its thermodynamically large environment. This yields a trivial final transformed Hamiltonian. Dissipation enters through the observation that generically observables 'decay' completely under these unitary transformations, i.e. are completely transformed into other terms. As a nontrivial example the spin-boson model is discussed in some detail. For the super-Ohmic bath we obtain a very satisfactory description of short, intermediate and long time scales at small temperatures. This can be tested from the generalized Shiba-relation that is fulfilled within numerical errors.
Ann. Physik (Leipzig) 6, 215-233 (1997).
We study the problem of the phonon-induced electron-electron interaction in a solid. Starting with a Hamiltonian that contains an electron-phonon interaction, we perform a similarity renormalization transformation to calculate an effective Hamiltonian. Using this transformation, singularities due to degeneracies are avoided explicitly. The effective interactions are calculated to second order in the electron-phonon coupling. It is shown that the effective interaction between two electrons forming a Cooper pair is attractive in the whole parameter space. The final result is compared with effective interactions obtained using other approaches.
Europhys. Lett. 40, 195-200 (1997).
It is shown that one can obtain quantitatively accurate values for the superconducting critical temperature within a Hamiltonian framework. This is possible if one uses a renormalized Hamiltonian that contains an attractive electron--electron interaction and renormalized single particle energies. It can be obtained by similarity renormalization or using flow equations for Hamiltonians. We calculate the critical temperature as a function of the coupling using the standard BCS-theory. For small coupling we rederive the McMillan formula for $T_c$. We compare our results with Eliashberg theory and with experimental data from various materials. The theoretical results agree with the experimental data within 10%. Renormalization theory of Hamiltonians provides a promising way to investigate electron--phonon interactions in strongly correlated systems.
Z. Phys. B 99, 269-280 (1996).
Using continuous unitary transformations recently introduced by Wegner we obtain flow equations for the parameters of the spin-boson Hamiltonian. Interactions not contained in the original Hamiltonian are generated by this unitary transformation. Within an approximation that neglects additional interactions quadratic in the bath operators, we can close the flow equations. Applying this formalism to the case of Ohmic dissipation at zero temperature, we calculate the renormalized tunneling frequency. We find a transition from an untrapped to a trapped state at the critical coupling constant $\alpha = 1$. We also obtain the static susceptibility via the equilibrium spin correlation function. Our results are consistent both with results known from the Kondo problem and with those obtained from mode coupling theories. Using this formalism at finite temperature, we find a transition from coherent to incoherent tunneling at $T_2 \approx T_1$, where $T_1$ is the crossover temperature of the dynamics from underdamped to overdamped motion known from the NIBA.
Ann. Physics (NY) 252, 1-32 (1996).
We apply the method of infinitesimal unitary transformations recently introduced by Wegner to the Anderson single impurity model. It is demonstrated that this method provides a good approximation scheme for all values of the on-site interaction $U$; it becomes exact for $U=0$. We are able to treat an arbitrary density of states, the only restriction being that the hybridization should not be the largest parameter in the system. Our approach constitutes a consistent framework to derive various results usually obtained by either perturbative renormalization in an expansion in the hybridization (Anderson's 'poor man's' scaling approach) or the Schrieffer-Wolff unitary transformation. In contrast to the Schrieffer-Wolff result we find the correct high-energy cutoff and avoid singularities in the induced couplings. An important characteristic of our method as compared to the 'poor man's' scaling approach is that we continuously decouple modes from the impurity that have a large energy difference from the impurity orbital energies. In the usual scaling approach this criterion is provided by the energy difference from the Fermi surface.
Phys. Lett. A 219, 313-318 (1996).
J. Phys. A: Math. Gen. 27, 4259-4279, corrigendum 5705 (1994). | CommonCrawl
Your task is to find a flight route from Syrjälä to Lehmälä and to answer the following questions:
what is the minimum price of such a route?
how many minimum-price routes are there? (this count may be large, so it is reported modulo $10^9+7$)
what is the minimum number of flights in a minimum-price route?
what is the maximum number of flights in a minimum-price route?
The first input line contains two integers $n$ and $m$: the number of cities and the number of flights. The cities are numbered $1,2,\ldots,n$. City 1 is Syrjälä, and city $n$ is Lehmälä.
After this, there are $m$ lines that describe the flights. Each line has three integers $a$, $b$, and $c$: there is a flight from city $a$ to city $b$ with price $c$. All flights are one-way flights.
You may assume that there is a route from Syrjälä to Lehmälä.
Print four integers according to the problem statement. | CommonCrawl |
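Not part of the original problem page: a minimal solution sketch in R for the flight-route questions above. It assumes the edge list has already been read into a matrix edges with columns (a, b, c), uses a simple O(n^2) Dijkstra variant (too slow for the largest inputs, but it shows the idea), and reports the route count modulo $10^9+7$:

solve_flights <- function(n, edges) {
  MOD <- 1e9 + 7
  adj <- vector("list", n)                      # adjacency list: rows of (destination, price)
  for (i in seq_len(nrow(edges))) {
    a <- edges[i, 1]
    adj[[a]] <- rbind(adj[[a]], edges[i, 2:3])
  }
  dist <- rep(Inf, n); cnt <- numeric(n)        # cheapest price, number of cheapest routes
  minf <- integer(n); maxf <- integer(n)        # min/max flights on a cheapest route
  done <- logical(n)
  dist[1] <- 0; cnt[1] <- 1
  for (step in seq_len(n)) {
    cand <- dist; cand[done] <- Inf
    u <- which.min(cand)
    if (!is.finite(cand[u])) break              # remaining cities are unreachable
    done[u] <- TRUE
    es <- adj[[u]]
    if (is.null(es)) next
    for (j in seq_len(nrow(es))) {
      v <- es[j, 1]; nd <- dist[u] + es[j, 2]
      if (nd < dist[v]) {                       # strictly cheaper route to v found
        dist[v] <- nd; cnt[v] <- cnt[u]
        minf[v] <- minf[u] + 1L; maxf[v] <- maxf[u] + 1L
      } else if (nd == dist[v]) {               # another route of the same minimum price
        cnt[v] <- (cnt[v] + cnt[u]) %% MOD
        minf[v] <- min(minf[v], minf[u] + 1L)
        maxf[v] <- max(maxf[v], maxf[u] + 1L)
      }
    }
  }
  c(dist[n], cnt[n] %% MOD, minf[n], maxf[n])   # the four requested numbers
}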
By a dynamical system $(X,T)$ we mean the action of the semigroup $(\Bbb Z^+,+)$ on a metrizable topological space $X$ induced by a continuous selfmap $T:X\rightarrow X$. Let $M(X)$ denote the set of all compatible metrics on the space $X$. Our main objective is to show that a selfmap $T$ of a compact space $X$ is a Banach contraction relative to some $d_1\in M(X)$ if and only if there exists some $d_2\in M(X)$ which, regarded as a $1$-cocycle of the system $(X,T)\times (X,T)$, is a coboundary. | CommonCrawl |
The LUNA experiment plays an important role in understanding open issues of neutrino physics. As an example, two key reactions of the solar p-p chain, $^3He(^3He,2p)^4He$ and $^3He(^4He,\gamma)^7Be$, have been studied at low energy with LUNA, providing an accurate experimental input to the Standard Solar Model and consequently to the study of the neutrino mixing parameters. The LUNA collaboration will study the reaction $^2H(p,\gamma)^3He$ at Big Bang Nucleosynthesis (BBN) energies. This reaction is presently the main source of the $2\%$ uncertainty of the calculated primordial abundance of deuterium in BBN calculations. As is well known, the abundance of deuterium depends on the number of neutrino families (or any other relativistic species existing in the early Universe, "dark radiation"). Therefore, the comparison of computed and observed deuterium abundances allows one to severely constrain the number of neutrino species and/or the lepton degeneracy in the neutrino sector. The paucity of data at BBN energy of the $^2H(p,\gamma)^3He$ reaction is presently the main limitation to exploit the deuterium abundance as a probe of neutrino physics and to improve the BBN estimation of the baryon density. As a matter of fact, the deuterium abundance derived from damped Lyman $\alpha$ (DLA) system observations has presently an error of only $1.5\%$. The aim of the new measurement is therefore to substantially improve the $9\%$ error of present $^2H(p,\gamma)^3He$ data at BBN energies. Starting from the present uncertainty of the relevant parameters (i.e. baryon density, observed abundance of deuterium and BBN nuclear cross sections), it will be shown that a renewed study of the $^2H(p,\gamma)^3He$ process is essential to constrain the number of neutrino families and to probe the existence of dark radiation in the early Universe, by using the BBN theory and the cosmic microwave background (CMB) data. References: R.J. Cooke and M. Pettini: arXiv:1308.3240v1 [astro-ph.CO] 14 Aug 2013; L. Ma et al., Phys. Rev. C 55, 588 (1997). | CommonCrawl
The solution depends only on a little algebra and some clear mathematical thinking.
Pierre, Tarbert Comprehensive, Ireland, Prateek, Riccarton High School, Christchurch, New Zealand and Vassil from Lawnswood Sixth Form, Leeds started by taking small values of $n$, usually a good way to begin. This solution comes from Arun Iyer, S.I.A High School and Junior College, India. They all found the answer which is $30$.
Since $n^5 - n = n(n^4-1) = n(n^2-1)(n^2+1) = n(n-1)(n+1)(n^2+1)$, it is quite easy to see that $n(n-1)(n+1)(n^2+1)$ is divisible by $2$, $3$ and $5$ for all values of $n$. As $n$, $(n-1)$ and $(n+1)$ are three consecutive integers their product must be divisible by $2$ and by $3$. If none of these numbers is divisible by $5$ then $n$ is either of the form $5k+2$ or $5k+3$ for some integer $k$ and in both of these cases we can check that $n^2 + 1$ is divisible by $5$. Since $2$, $3$ and $5$ are coprime, $n^5 - n$ is divisible by $2 \times 3 \times 5$, i.e. by $30$.
Since the second term of the sequence is $2^5-2 = 30$, the divisor cannot be greater than $30$. Therefore $30$ is the largest number that divides each member of the sequence.
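As a quick numerical sanity check (not part of the proof), one can compute the greatest common divisor of the first few terms in R; the gcd helper is written out because base R does not provide one:

gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
terms <- sapply(2:30, function(n) n^5 - n)
Reduce(gcd, terms)   # returns 30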
Polynomial functions and their roots. Factors and multiples. Expanding and factorising quadratics. Generalising. Making and proving conjectures. Mathematical reasoning & proof. Inequalities. Networks/Graph Theory. Common factors. Creating and manipulating expressions and formulae. | CommonCrawl |
Often, data sets include a large number of features. The technique of extracting a subset of relevant features is called feature selection. Feature selection can enhance the interpretability of the model, speed up the learning process and improve learner performance. In the literature, two different approaches to identifying relevant features exist: one is called "filtering" and the other is often referred to as "feature subset selection" or "wrapper methods".
Filter: An external algorithm computes a rank of the variables (e.g. based on the correlation to the response). Then, features are subsetted by a certain criterion, e.g. an absolute number or a percentage of the number of variables. The selected features will then be used to fit a model (with optional hyperparameters selected by tuning). This calculation is usually cheaper than "feature subset selection" in terms of computation time.
Feature subset selection: Here, no ranking of features is done. Instead, (possibly random) subsets of the features are selected, a model is fit on each subset and its performance is checked. This is done for a lot of feature combinations in a CV setting and the best combination is reported. This method is very computationally intensive, as a lot of models are fitted. Also, strictly speaking, all these models would need to be tuned before the performance is estimated, which would require an additional nested level in a CV setting. After all this, the selected subset of features is again fitted (with optional hyperparameters selected by tuning).
mlr supports both filter methods and wrapper methods.
Filter methods assign an importance value to each feature. Based on these values the features can be ranked and a feature subset can be selected. The list of implemented filter algorithms can be found in the filter methods article.
Different methods for calculating the feature importance are built into mlr's function generateFilterValuesData(). Currently, classification, regression and survival analysis tasks are supported. A table showing all available methods can be found in article filter methods.
The most basic approach is to use generateFilterValuesData() directly on a Task() with a character string specifying the filter method.
fv is a FilterValues() object and fv$data contains a data.frame that gives the importance values for all features. Optionally, a vector of filter methods can be passed.
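A minimal sketch of these calls, using mlr's built-in iris.task (the second filter method, "chi.squared", is chosen only for illustration):

library(mlr)
fv = generateFilterValuesData(iris.task, method = "information.gain")
fv$data   # data.frame with one importance value per feature
# several filter methods at once:
fv2 = generateFilterValuesData(iris.task, method = c("information.gain", "chi.squared"))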
A bar plot of importance values for the individual features can be obtained using function plotFilterValues().
By default plotFilterValues() will create facetted subplots if multiple filter methods are passed as input to generateFilterValuesData().
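Continuing the sketch above (fv2 holds values for two filter methods, so the plot is facetted):

plotFilterValues(fv2)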
According to the "information.gain" measure, Petal.Width and Petal.Length contain the most information about the target variable Species.
With mlr's function filterFeatures() you can create a new Task() by leaving out features of lower importance.
Keep a certain absolute number (abs) of features with highest importance.
Keep a certain percentage (perc) of features with highest importance.
Keep all features whose importance exceeds a certain threshold value (threshold).
Function filterFeatures() supports these three methods as shown in the following example. Moreover, you can either specify the method for calculating the feature importance or you can use previously computed importance values via argument fval.
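A sketch of the three variants, plus reuse of the precomputed values from above (the threshold value 0.5 is arbitrary):

# keep the 2 most important features
filtered.task = filterFeatures(iris.task, method = "information.gain", abs = 2)
# keep the top 25% of the features, reusing the values computed earlier
filtered.task = filterFeatures(iris.task, fval = fv, perc = 0.25)
# keep all features whose importance exceeds 0.5
filtered.task = filterFeatures(iris.task, method = "information.gain", threshold = 0.5)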
Often feature selection based on a filter method is part of the data preprocessing and in a subsequent step a learning method is applied to the filtered data. In a proper experimental setup you might want to automate the selection of the features so that it can be part of the validation method of your choice. A Learner (makeLearner()) can be fused with a filter method by function makeFilterWrapper(). The resulting Learner (makeLearner()) has the additional class attribute FilterWrapper(). This has the advantage that the filter parameters (fw.method, fw.perc, fw.abs) can now be treated as hyperparameters. They can be tuned in a nested CV setting at the same level as the algorithm hyperparameters. You can think of it as "tuning the dataset".
In the following example we calculate the 10-fold cross-validated error rate (mmce) of the k-nearest neighbor classifier (FNN::fnn()) with preceding feature selection on the iris (datasets::iris()) data set. We use information.gain as the importance measure, with the aim of subsetting the dataset to the two features with the highest importance. In each resampling iteration feature selection is carried out on the corresponding training data set before fitting the learner.
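A sketch of this setup; the learner name "classif.fnn" is an assumption (the text only names the FNN k-nearest neighbor classifier), everything else follows the description above:

lrn = makeFilterWrapper(learner = "classif.fnn", fw.method = "information.gain", fw.abs = 2)
rdesc = makeResampleDesc("CV", iters = 10)
r = resample(lrn, iris.task, resampling = rdesc, measures = mmce, models = TRUE)
r$aggr   # aggregated cross-validated misclassification error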
You may want to know which features have been used. Luckily, we have called resample() with the argument models = TRUE, which means that r$models contains a list of models (makeWrappedModel()) fitted in the individual resampling iterations. In order to access the selected feature subsets we can call getFilteredFeatures() on each model.
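For example:

sapply(r$models, getFilteredFeatures)
# each column lists the two features chosen in one of the ten folds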
The result shows that in all ten folds Petal.Length and Petal.Width have been chosen (remember we wanted to keep the best two features, i.e. \(10 \times 2\) selections in total). The selection of features seems to be very stable for this dataset. The features Sepal.Length and Sepal.Width did not make it into a single fold.
In the following regression example we consider the BostonHousing (mlbench::BostonHousing()) data set. We use a Support Vector Machine and determine the optimal percentage value for feature selection such that the 3-fold cross-validated mean squared error (mse()) of the learner is minimal. Additionally, we tune the hyperparameters of the algorithm at the same time. As the search strategy for tuning, a random search with five iterations is used.
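A sketch of such a setup. The learner name "regr.ksvm", the "linear.correlation" filter, the tuned kernel parameters C and sigma and their ranges are assumptions; only the 3-fold CV, the mse measure and the random search with five iterations come from the text:

lrn = makeFilterWrapper(learner = "regr.ksvm", fw.method = "linear.correlation")
ps = makeParamSet(
  makeNumericParam("fw.perc", lower = 0.2, upper = 0.8),
  makeNumericParam("C", lower = -5, upper = 5, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -5, upper = 5, trafo = function(x) 2^x)
)
rdesc = makeResampleDesc("CV", iters = 3)
res = tuneParams(lrn, task = bh.task, resampling = rdesc, par.set = ps,
                 measures = mse, control = makeTuneControlRandom(maxit = 5))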
After tuning we can generate a new wrapped learner with the optimal percentage value for further use (e.g. to make predictions on new data).
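For example, transferring all tuned values (including fw.perc) onto a fresh wrapped learner and training it:

lrn.opt = setHyperPars(makeFilterWrapper(learner = "regr.ksvm", fw.method = "linear.correlation"),
                       par.vals = res$x)
mod = train(lrn.opt, bh.task)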
For the BostonHousing data this results in a filtered feature set such as crim, dis, rad and lstat.
Wrapper methods use the performance of a learning algorithm to assess the usefulness of a feature set. In order to select a feature subset a learner is trained repeatedly on different feature subsets and the subset which leads to the best learner performance is chosen.
How to assess the performance: This involves choosing a performance measure that serves as feature selection criterion and a resampling strategy.
Which learning method to use.
How to search the space of possible feature subsets.
Deterministic forward or backward search (makeFeatSelControlSequential); other available strategies include exhaustive search (makeFeatSelControlExhaustive), random search (makeFeatSelControlRandom) and a genetic algorithm (makeFeatSelControlGA). See ?FeatSelControl() for details.
Feature selection can be conducted with function selectFeatures().
In the following example we perform an exhaustive search on the Wisconsin Prognostic Breast Cancer (TH.data::wpbc()) data set. As learning method we use the Cox proportional hazards model (survival::coxph()). The performance is assessed by the holdout estimate of the concordance index (cindex).
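A sketch of this example. The built-in wpbc.task, the learner name "surv.coxph" and the max.features cap (which keeps an exhaustive search tractable) are assumptions; the holdout resampling and the cindex measure come from the text:

ctrl = makeFeatSelControlExhaustive(max.features = 3)
rdesc = makeResampleDesc("Holdout")
sf = selectFeatures(learner = "surv.coxph", task = wpbc.task, resampling = rdesc,
                    measures = cindex, control = ctrl)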
ctrl is a FeatSelControl() object that contains information about the search strategy and potential parameter values.
In a second example we fit a simple linear regression model to the BostonHousing (mlbench::BostonHousing()) data set and use a sequential search to find a feature set that minimizes the mean squared error (mse). method = "sfs" indicates that we want to conduct a sequential forward search where features are added to the model until the performance cannot be improved anymore. See the documentation page makeFeatSelControlSequential (?FeatSelControl()) for other available sequential search methods. The search is stopped if the improvement is smaller than alpha = 0.02.
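A sketch of this example; the learner name "regr.lm", the built-in bh.task and the 10-fold CV are assumptions, while method = "sfs" and alpha = 0.02 come from the text:

ctrl = makeFeatSelControlSequential(method = "sfs", alpha = 0.02)
rdesc = makeResampleDesc("CV", iters = 10)
sfeats = selectFeatures(learner = "regr.lm", task = bh.task, resampling = rdesc,
                        measures = mse, control = ctrl)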
Further information about the sequential feature selection process can be obtained by function analyzeFeatSelResult().
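For instance:

analyzeFeatSelResult(sfeats)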
For the example above, the analysis ends with the message: "Stopped, because no improving feature was found."
A Learner (makeLearner()) can be fused with a feature selection strategy (i.e., a search strategy, a performance measure and a resampling strategy) by function makeFeatSelWrapper(). During training features are selected according to the specified selection scheme. Then, the learner is trained on the selected feature subset.
The result of the feature selection can be extracted by function getFeatSelResult().
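A sketch of fusing a learner with a feature selection strategy; the inner holdout resampling, the random search control and the wpbc survival task are assumptions chosen only for illustration:

inner = makeResampleDesc("Holdout")
lrn = makeFeatSelWrapper("surv.coxph", resampling = inner,
                         control = makeFeatSelControlRandom(maxit = 10), show.info = FALSE)
mod = train(lrn, task = wpbc.task)
sfeats = getFeatSelResult(mod)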
Some algorithms internally compute a feature importance during training. By using getFeatureImportance() it is possible to extract this part from the trained model. | CommonCrawl |
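As a minimal illustration (the choice of a classification tree, "classif.rpart", which computes variable importances internally, is an assumption):

mod = train(makeLearner("classif.rpart"), iris.task)
getFeatureImportance(mod)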
Abstract: Measurements of $\alpha_s$, the coupling strength of the Strong Interaction between quarks and gluons, are summarised and an updated value of the world average of $\alpha_s (M_Z)$ is derived. Building up on previous reviews, special emphasis is laid on the most recent determinations of $\alpha_s$. These are obtained from $\tau$-decays, from global fits of electroweak precision data and from measurements of the proton structure function $F_2$, which are based on perturbative QCD calculations up to $O(\alpha_s^4)$; from hadronic event shapes and jet production in $e^+e^-$ annihilation, based on $O(\alpha_s^3)$ QCD; from jet production in deep inelastic scattering and from $\Upsilon$ decays, based on $O(\alpha_s^2)$ QCD; and from heavy quarkonia based on unquenched QCD lattice calculations. Applying pragmatic methods to deal with possibly underestimated errors and/or unknown correlations, the world average value of $\alpha_s (M_Z)$ results in $\alpha_s (M_Z) = 0.1184 \pm 0.0007$. The measured values of $\alpha_s (Q)$, covering energy scales from $Q \equiv m_\tau = 1.78$ GeV to 209 GeV, exactly follow the energy dependence predicted by QCD and therefore significantly test the concept of Asymptotic Freedom. | CommonCrawl
This is a reprint of the classic Traders of Genoa. It has a larger box and completely new graphics, but the important elements of gameplay remain the same.
We first played this last weekend with 5 of us and we had a blast. It was chaotic, it was exciting, and it was fun. The tower player was constantly being pulled in 4 different directions. The haggling and wheeling and dealing was non-stop. None of us even really knew if we were getting the best deal; we all just enjoyed being part of the action. Every tower movement and action came at a steep price, and the multiple offers given were often so varied, it was nearly impossible to gauge what the better deal would be. Also, there was really no down time between turns because everyone played such an integral part of every turn. I would literally jog into the kitchen to get another beer because I didn't want to miss out on a good deal. The game took about 2 and a half hours and we were all sorry to see it end. What a perfect game night!
The very next day, three of the five of us got together again to play because we enjoyed it so much the night before. The 3-player experience was not nearly as fun. I think it's because the chaos factor was gone. Everything became more predictable and mechanical. There was a surplus of actions so they went cheap. As a result, the haggling and wheeling and dealing aspect of the game was greatly diminished. 5 ducats for an action was quite common simply because there were no other offers on the table. That never happened the night before. The game still had strategy but it was no longer exciting. I guess the best comparison I could give is Pit (the card game). Pit is a great game with 8 players and a lousy one with 3 players because it requires the constant chaos of deals flying left and right to be stimulating. That, of course, is the extent of the similarities between Traders of Genoa and Pit.
1) You need 4 or 5 (preferably 5) people to make it work.
2) Every player needs to be ready and willing to negotiate their butts off for 2 hours straight.
If you have those two ingredients then this game will not disappoint.
and cooperative (the other players are your competitors, but not your enemies).
For me, the key mechanism that makes this game so much fun is the following: on his or her turn, each player may determine a path consisting in five fields on the board. Most of the fields represent buildings in which different actions are possible. But the player-in-charge doesn't choose the five fields alone. He (or she) discusses the selection with the others and makes deals (you sell the actions that can be taken in the various places). In this way, it's always your turn. Everybody is always looking at the board and at her (or his) cards, trying to figure out and negotiate the best deal. A bit like Bohnanza.
If you don't have it already, don't wait any longer!
The beauty of the negotiations in ToG is that they're all small ones. No (well, let's say very few) big hold ups working on the 'uber trade' like you can see in Settlers, or even Monopoly if you play that way. This game succeeds with gamers as well as the Monopoly crowd, as long as the Monopoly players accept that only the negotiation is the interesting part of Monopoly. Back to the point I was making... the negotiations happen every turn (multiple times per turn, actually), and are so small, that single deals aren't what make or break you (which tends to be the case in Settlers when trying to make a sweet play and you don't want to wait a whole 'nother round before getting to play again). It's all the negotiations added up which determine the outcome.
Two nice effects: 1) there's very little individual thinking time necessary, i.e. thinking of what you're going to do when it's your turn. 2) It's not obvious who is winning just from looking at the state of the game. Money is kept secret. You *can* gang up and spurn whoever you *think* is winning, but most people don't keep careful enough track, and often it ends up being a surprise who wins after the game is over. With all the money that changed hands during the game, it downright shocked us how close the game ended up, and how far behind the person who we all thought was winning actually was.
The only people I can imagine this not appealing to are people who only play games with no communication necessary (for the actual game play, not just social talk), like Chess, Go, Scrabble, etc.
I wasn't sure whether I would really like a game so heavily based on negotiation. Now I am.
Ravensburger's Alea label has quickly become recognized as an excellent brand. They clearly take time in selecting which designs to publish, and one of their challenges seems to be in maintaining consistently high quality. If I could be critical, it would only be that they failed to make their edition of Adel Verpflichtet playable with six players, something that would have possibly encouraged more owners of the original to shell out their money. This minor nit aside, their latest full box release of Die Händler von Genoa (The Traders of Genoa) continues their string of strong original designs while fostering some similarities to their earlier titles of Chinatown and Princes of Florence.
The scene this time is the town square of Genoa, and each player takes the role of a Trader with all kinds of ways to make money. Each building in Genoa is associated with an action that can help to support these various moneymaking schemes, and the novel mechanism in the game is that each player "walks" through the town for up to five steps each turn. Where they go, where they stop, and what they do when they get there creates ways to generate cash or build a foundation for future earnings. The board shows 18 buildings laid out across an $8\times8$ grid, with 14 buildings ringing the edge and four more bordering the four-square town center. Empty street spaces separate these sections. Five wooden disks represent the steps, and these get dropped off one by one as the player walks through the plaza. This visible pattern of the walk not only ensures that the proper distance is covered, but also helps to verify that certain moneymaking criteria have been met.
There are four direct ways to earn cash, each represented by a deck of small cards. The "small sale" card (blue) pays you to sell a specific good in a specific location in the city. The "large sale" card (pink) pays you to sell three different but specific goods in a specific location, paying you more in total but less per unit, but also earning you one of five special action chits. The "messenger" card (green) pays you whenever two buildings are connected in any players' walk. The "privilege" card (tan) pays off only at the end of the game, and each represents the privilege (deed) for one of the buildings. Holding the privilege for one building is worth a little, but holding privileges for adjacent buildings generates incremental profits. Each player begins with one of each type of card, and others can be acquired throughout the game.
In addition to these direct methods of raising cash, there are multiple other ways, primarily through negotiation. You see, although you can walk up to five spaces each turn, you can only take one action per turn. "Taking an action" means having the right to use the action associated with a particular building, which can also then allow the sale of goods as described above. Since you can only take one action on your walk, you can sell the rights to other players for buildings you walk through, or even let their incentives determine which way you move through the town. You begin your turn by throwing two eight-sided dice to determine where you'll take your first step, then walk up to four more steps. Since buildings are adjacent along the outside edge of the board, you can generate up to five separate actions to be used by the players if you start and stay on the edge. If you start in the center or walk to the center, you will walk through some empty street squares, but in Genoa even these can create value.
To understand this concept, it is necessary to understand the actions associated with the buildings. The four cards mentioned each have buildings associated with them. The "action" in the Rathaus, for example, is to take two of the blue small sale cards. The action of the Poststation is to take two green cards, and in the Gildenhalle is to take a single pink large sale card. The brown privilege cards can be taken in each of the four villas, but the villas also have an additional use. Each very valuable pink card can only be sold in one of these same buildings and once there you can either take a privilege card or cash in a pink card, but not both. Four spaces near the corners represent warehouses where goods are acquired. Each of these holds two types of goods; by taking the action in that building you take one of each. So, if you have a blue card that allows you to sell silk in the Park, you need to first acquire a silk cube either by trading for it or by taking the action in the Tuchlager. Later on, then, you need to get to the Park to take the action and cash in the blue card with the silk.
Not too complicated if all the game had were the four card types and goods associated with them. But, the depth of this game should not be underestimated, as there are two other key features that, combined with the above, allow for the creation of some very detailed and clever trades. The first of these features is the special action chits. There are five of these, and each has a building or buildings associated with them. One allows you to choose where you begin your walk rather than roll dice, very valuable to ensure that you get to the place you need to cash in that pink card or pick up that needed good. The second allows you to take any good off the board and use it as part of a sale or trade. The third lets you take an extra action on a walk. So, I can take two actions on my own walk or buy a second action on someone else's. The fourth is a great concept: the 1:1 trade. With this chit, I can trade one of anything for one of anything else on the board, except for cash of course. Simply, I could trade a cube of silk for a cube of copper to better match my cards. Or, I could trade one type of special action chit for another, or for a blue card. Anything for anything, and this can be very useful.
The fifth special action chit, the "building chit", is associated with the second special feature mentioned above. Each player has "possession markers" in their color available at the side of the board. The action of the Kathedrale building is to take two of your markers and place them in front of you. These could also be acquired through a trade with another player or with your 1:1 chit. At the end of each walk, starting from the walking player, players can place their possession markers on buildings adjacent to steps taken in the empty street spaces. These markers have three distinct values. First, the bank pays you 10 ducats when anyone else takes the action in that building. Placed early on an important building, this can generate good income throughout the game. Second, every possession marker on a building at the game's end is worth another 10 ducats. Third, when used in conjunction with the fifth action chit, you can take the action of the building you possess even when you're not there, as long as you've taken the action somewhere else. Let me explain this. Say earlier that I placed one of my possession markers on the Taverna, which allows a player to take a 1:1 action chit as the action. On someone's walk, I purchase the action in the Villa Colini, which gives me a privilege card. But, before ending up my action, I turn in a special building chit (gained earlier through trade or actioning the Palazzo building), and take a 1:1 chit, since my possession marker is on the Taverna and the building action chit allows me to exercise my right of possession.
The combinations of primary cards, special action chits, possession markers, and an "everyone can take an action on each player's turn" mechanism create an extremely dynamic and interactive play. You cannot rest in this game and stay on top of things, as a winning strategy usually results from a series of small individual decisions about what to bid for, what to keep for yourself, how to creatively trade, and who to trade with. The negotiations in this game can be both simple and complex. Only two-player deals are allowed (fortunately), but even with this constraint the options are quite varied. Splitting items at the buildings where two are earned comes up often, as do simple "cash for this action" offers. More interesting, though not always more profitable, are deals involving changes in possession markers, swaps of privilege cards to possibly give someone a larger string of connected buildings, or swaps of special action chits. It is not uncommon to be in a position of being offered very different things from two people for the right to action the same building, so understanding how each offer helps them and which is better for you has to be done quickly and decisively or the game bogs down.
Bogging down can be an issue. With five players, this game can take from one and a half to three and a half hours based on the level of detail allowed in the negotiations and the number of options considered before moving on. We've found that it works best when everyone tries to push his or her walk along and not spend too much time trying to evaluate the complete matrix of decisions and steps possible. The game officially ends on the last player's turn when a round marker hits a predetermined spot (which varies based on the number of players). This marker advances each time everyone has taken one walk; however, it also advances if a walk begins in the center of the plaza, represented by four grid areas. As a result, the game can randomly move ahead faster through the right dice rolls, or be forced ahead by a player choosing to use a special action chit to begin his walk in the center.
Comparisons to Chinatown are fair in that negotiation is the heart of this game. In Chinatown, however, while you cannot predict what businesses or land spaces will be drawn in future turns, you can often accurately calculate the value of what you get versus what your dealing partner gets in a deal. In Genoa this is much more difficult, because one can't be exactly sure why someone wants that pepper cube and if in fact they'll get to the building needed to sell it in time. The reward gained is not as straight forward in many cases and where it is, a wise player will try to extract much of the value from you.
Like Princes of Florence, a game that shares the artwork of Genoa but feels nothing like it, there is no sure-fire strategy that wins this game, but there are multiple approaches to consider. In our group's games, people have made significant advances by concentrating on "messenger" cards (the green ones); by collecting connected strings of privilege cards; by using possession markers adroitly; by being highly selective about who to deal with when possible; by making multi-step maneuvers after gaining the action -- maneuvers that involve special action chits and selling cards; and by helping to control the flow of the game by forcing advancements on the turn marker track. There are likely many more subtle and obvious ways as well. Another sign of the strength of the game is the fact that often the final scores are quite close. It is only after several plays that you begin to truly appreciate that paying 20 for an action that could have been had for 10 can have real consequences.
I don't believe that Die Händler von Genua will be as popular as some of the earlier Alea titles, especially Taj Mahal and Florence. It is a very deep and robust game, and in many ways more difficult to get your arms around than the others. Chinatown's mechanisms are much simpler, Florence's objectives are easier to define, and while reading other players' signals is imperative in Taj Mahal, its effects are not as cumulative as in Genoa. It does provide a great gaming experience for those who like this style of game, and brings a level of sophistication and true strategy that sets a new standard for games using negotiation as their principal mechanism.
These streets are paved with opportunities for wandering traders. You visit up to five buildings each turn, one of which has actions that you'll perform for nothing. Negotiate with others to guide your itinerary ("If you want what's offered there, you'll pay me to go there!"), and auction buildings' actions to interested competitors (the winners pay you). Collect Orders that show both the goods you must purchase at a Warehouse and the Villa where they must be delivered. Buildings also offer trading privileges or markers in your color that you can place to earn commissions from visiting traders. Win by finishing as the wealthiest trader. We wouldn't swap this superlative game of negotiation for anything!
This booming city offers many ways for shrewd, industrious negotiators to become wealthy. Each turn, you can visit up to five buildings, which offer a variety of potentially rewarding benefits. You may visit just one building, taking whatever benefit it offers for free; but it is far better to listen to offers from other players, who may want something that another building has to offer. When you go to another building, an auction for its benefits is conducted, and you can accept whatever offer you like best. At some buildings, for example, you can get free extra benefits, or markers that you place in buildings to earn you money when others use those buildings' benefits. At other buildings, you draw Orders, which specify commodities you can acquire at warehouses, as well as places where the commodities can be delivered in exchange for money. The player ending up with the most money wins. | CommonCrawl |
Bare-hands construction of fiber products: an affine open cover of a fiber product of schemes can be assembled from compatible affine open covers of the pieces.
is an affine open covering of $X \times _ S Y$.
Suggested slogan: Bare-hands construction of fiber products: an affine open cover of a fiber product of schemes can be assembled from compatible affine open covers of the pieces.
Hosseini, E., Barid Loghmani, G., Heydari, M., Wazwaz, A. (2017). A numerical study of electrohydrodynamic flow analysis in a circular cylindrical conduit using orthonormal Bernstein polynomials. Computational Methods for Differential Equations, 5(4), 280-300.
In this work, the nonlinear boundary value problem arising in the electrohydrodynamic flow of a fluid in an ion-drag configuration in a circular cylindrical conduit is studied numerically. An effective collocation method, based on orthonormal Bernstein polynomials, is employed to simulate the solution of this model. Some properties of orthonormal Bernstein polynomials are introduced and utilized to reduce the computation of the nonlinear boundary value problem to the solution of algebraic equations. Also, by using the residual correction process, an efficient error estimation is introduced. Graphical and tabular results are presented to investigate the influence of the strength of nonlinearity $\alpha$ and the Hartmann electric number $Ha^2$ on velocity profiles. The significant merit of this method is that it can yield an appropriate level of accuracy even with large values of $\alpha$ and $Ha^2$. Compared with recent works, the numerical experiments in this study show good agreement with the results obtained by using the MATLAB solver bvp5c and demonstrate its competitive ability.
Pseudoscalar Mesons in the SU3 Linear Sigma Model with Gaussian Functional Approximation - High Energy Physics - Phenomenology.
Abstract: We study the SU3 linear sigma model for the pseudoscalar mesons in the Gaussian Functional Approximation (GFA). We use the SU3 linear sigma model Lagrangian with nonet scalar and pseudo-scalar mesons including symmetry breaking terms. In the GFA, we take the Gaussian Ansatz for the ground state wave function and apply the variational method to minimize the ground state energy. We derive the gap equations for the dressed meson masses, which are actually just variational parameters in the GFA method. We use the Bethe-Salpeter equation for meson-meson scattering which provides the masses of the physical nonet mesons. We construct the projection operators for the flavor SU3 in order to work out the scattering T-matrix in an efficient way. In this paper, we discuss the properties of the Nambu-Goldstone bosons in various limits of the chiral $U_L(3)\times U_R(3)$ symmetry.
minQual (-40) minimum quality: the minimum value for qualities in the described error models. Currently only integer quality models (such as Illumina and phred qualities) are addressed. Therefore, subsequent CDFs over quality spectra all have length (maxQual - minQual + 1).
maxQual (40) maximum quality: highest value of the quality spectrum, an integer - see above.
tholdQual (.) the threshold quality: the level below which all base calls have been considered "problematic" or "accidental", regardless of whether the corresponding base had been called correctly or not. If no such threshold has been applied, tholdQual should be set to "."
p(minQual), $\ldots$, p(maxQual) CDF over qualities of "unproblematic" base calls. A base call is considered as unproblematic iff it is (i) correct and (ii) equal or above the level specified by tholdQual.
minQual $\ldots$ maxQual (-40,…,40) quality level for which the following observed substitution rates p(X) apply.
p(A),p(C),p(G),p(N),p(T) CDF of the symbol specified by letter to be substituted by A, C, G, N, or T as observed for the quality level in this line.
length (11) extension of the "problem" captured in this error profile. Consequently, the last position affected is (start+length-1). | CommonCrawl |
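To make the layout above concrete, here is a minimal sketch of a container for one such error profile. The field names are illustrative and not taken from any published parser; the quality-indexed vectors share the length maxQual - minQual + 1 noted above.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ErrorProfile:
    """Container mirroring the fields described above (names are illustrative)."""
    min_qual: int = -40                  # minQual
    max_qual: int = 40                   # maxQual
    thold_qual: Optional[int] = None     # tholdQual; None encodes "."
    length: int = 11                     # extent of the "problem" region
    # CDF over qualities of unproblematic base calls, length maxQual - minQual + 1
    qual_cdf: List[float] = field(default_factory=list)
    # per-quality CDFs of substitution targets, e.g. subst_cdf[q]["A"]
    subst_cdf: Dict[int, Dict[str, float]] = field(default_factory=dict)

    def n_levels(self) -> int:
        # all quality-indexed vectors share this length
        return self.max_qual - self.min_qual + 1

    def check(self) -> None:
        assert len(self.qual_cdf) == self.n_levels(), "CDF length must match quality range"
        # a CDF should be non-decreasing
        assert all(a <= b + 1e-12 for a, b in zip(self.qual_cdf, self.qual_cdf[1:]))
```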
soooooo in HoTT, is $S^1$ something like the natural numbers object for $\infty$-topoi?
oh, like, internal hom objects?
so is this what you work on? I'm not really clear on who you are.
and also, a thesis that my advisor and the people at my school actually know stuff about, haha, which is nice.
Yes, that PRISM program is quite thorough.
I'm prone to making Coq-related double entendres.
maybe he was snowden the whole time?!!
anybody know what scheme takes a ring $R$ to the set $R[x]/(x^n)$? | CommonCrawl |
While delay differential equations with variable delays may have a superficial appearance of analyticity, it is far from clear in general that a global bounded solution $x(t)$ (namely, a bounded solution defined for all time $t$, such as a solution lying on an attractor) is an analytic function of $t$. Indeed, very often such solutions are not analytic, although they are often $C^\infty$. In this talk we provide sufficient conditions both for analyticity and for non-analyticity (but $C^\infty$ smoothness) of such solutions. In fact these conditions may occur simultaneously for the same solution, but in different regions of its domain, and so the solution exhibits co-existence of analyticity and non-analyticity. In fact, we show it can happen that the set of non-analytic points $t$ of a solution $x(t)$ can be a generalized Cantor set. | CommonCrawl |
Ravichandran, KS and Dwarakadasa, ES and Banerjee, D (1991) Mechanisms of cleavage during fatigue crack growth in $Ti-6Al-4V$ alloy. In: Scripta Metallurgica et Materialia, 25 (9). pp. 2115-2120.
Beta processed microstructures of titanium alloys consisting of Widmanstatten structures offer attractive combinations of strength, fracture toughness and resistance to fatigue crack growth (FCG) for damage-tolerance applications [1-3]. This is generally attributed to large prior beta grain sizes incorporating coarse lamellar colonies, across the boundaries of which cracks are deflected or deviated. FCG resistance of these microstructures is far superior to that of other microstructures owing to increased levels of crack deflection or deviation causing an increase in the intrinsic crack growth resistance, $\Delta K_{eff,th}$, as well as crack closure. However, the fatigue crack initiation resistance is low for these structures due to the formation of large intense planar slip bands. Growth of fatigue cracks along these bands normal to $\alpha/\beta$ interfaces is easy without interruption at the interfaces, and this has been largely attributed to the poor fatigue crack initiation characteristics. In the present study, some important and unusual fracture features pertaining to the effect of the $\beta$ phase in retarding cleavage cracks produced during fatigue crack growth are presented. The factors responsible for these effects are highlighted.
I will talk about the computational complexity of computing the noncommutative determinant. In contrast to the case of commutative algebras, we know of (virtually) no efficient algorithms to compute the determinant over non-commutative domains. Our results show that the determinant in noncommutative settings can be as hard as the permanent.
Given a Hamiltonian on $T^n\times R^n$, we shall explain how the sequence of suitably rescaled (i.e. homogenized) Hamiltonians, converges, for a suitably defined symplectic metric. We shall then explain some applications, in particular to symplectic topology and invariant measures of dynamical systems.
Finding the longest increasing subsequence (LIS) is a classic algorithmic problem. Simple $O(n \log n)$ algorithms, based on dynamic programming, are known for solving this problem exactly on arrays of length $n$.
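As an illustration of the $O(n \log n)$ bound (not code from the talk), the standard routine keeps, for each subsequence length, the smallest possible tail value and extends it by binary search:

```python
from bisect import bisect_left

def lis_length(a):
    """Length of the longest strictly increasing subsequence, O(n log n)."""
    tails = []  # tails[k] = smallest tail of an increasing subsequence of length k+1
    for x in a:
        i = bisect_left(tails, x)   # first position with tails[i] >= x
        if i == len(tails):
            tails.append(x)         # x extends the longest subsequence found so far
        else:
            tails[i] = x            # x gives a smaller tail for length i+1
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. 1, 4, 5, 9
```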
The early work of Condorcet in the eighteenth century, and that of Arrow and others in the twentieth century, revealed the complex and interesting mathematical problems that arise in the theory of social choice. In this lecture, Noga Alon, Visiting Professor in the School of Mathematics, explains how the simple process of voting leads to strikingly counter-intuitive paradoxes, focusing on several recent intriguing examples. | CommonCrawl |
Recall that a multiset is a collection of elements allowing the repetition of elements and that the repetition number of an element in a multiset is the number of times that element appears in the multiset.
As a simple example, suppose that we want to construct a $3$-letter password out of the letters in the alphabet. In a sense, our "multiset" is the alphabet for which there is no restriction on which of the $26$ letters we can choose from. For the first character of our password, we can choose any of the $26$ letters. The same goes for the second and third characters of our password and so the total number of passwords we can have, i.e., the total number of $3$-permutations from a multiset with $26$ distinct elements and whose repetition numbers are $\infty$ is $26^3 = 17576$. | CommonCrawl |
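A throwaway snippet confirming this count by brute force:

```python
from itertools import product
from string import ascii_lowercase

# closed form: k-permutations of a multiset with n distinct elements,
# each with unlimited repetition, is n**k
print(26 ** 3)                                             # 17576
# brute-force confirmation for the 3-letter password example
print(sum(1 for _ in product(ascii_lowercase, repeat=3)))  # 17576
```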
Out-of-sample performance (cash, blue-to-pink line) of an MDFA low-pass filter built using the approach discussed in this article on the STOXX Europe 50 index futures with expiration March 18th (STXEH3) for 200 15-minute interval log-return observations. Only one small loss out-of-sample was recorded from the period of Jan 18th through February 1st, 2013.
Continuing along the trend of my previous installment on strategies and performances of high-frequency trading using multivariate direct filtering, I take on building trading signals for high-frequency index futures, where I will focus on the STOXX Europe 50 Index, S&P 500, and the Australian Stock Exchange Index. As you will discover in this article, these filters that I build using MDFA in iMetrica have yielded some of the best performing trading signals that I have seen using any trading methodology. The strategy I've been developing throughout my previous articles on MDFA has not changed much, except for one detail that I will discuss throughout and that will be a major theme of this article, and that relates to an interesting structure found in index futures series for intraday returns. The structure is related to the close-to-open variation in the price, namely when the price at close of market hours significantly differs from the price at open, an effect I've mentioned in my previous two articles dealing with high(er)-frequency (or intraday) log-return data. I will show how MDFA can take advantage of this variation in price and profit from each one by 'predicting' with the extracted signal the jump or drop in the price at the open of the next trading day.
The frequency of observations on the index that are to be considered for building trading filters using MDFA is typically only a question of taste and priorities. The beauty of MDFA lies in not only the versatility and strength in building trading signals for virtually any financial trading priorities, but also in the independence on the underlying observation frequency of the data. In my previous articles, I've considered and built high-performing strategies for daily, hourly, and 15 minute log returns, where the focus of the strategy in building the signal began with viewing the periodogram as the main barometer in searching for optimal frequencies on which one should set the low-pass cutoff for the extracting target filter function.
Index futures, as in futures contracts on a financial index, present no new challenges, as we will see in this article. With the success I had on the 15-minute return observation frequency that I utilized in my previous article in building a signal for the Japanese Yen, I will continue to use the 15 minute intervals for the index futures, where I hope to shed some more light on the filter selection process. This includes deducing properties of the intrinsically optimal spectral peaks to trade on. To do this, I present a simple approach I take in these examples by first setting up a bandpass filter over the spectral peak in the periodogram and then studying the in-sample and out-of-sample characteristics of this signal, both in performance and consistency. So without further ado, I present my experiments with financial trading on index futures using MDFA, in iMetrica.
The STOXX Europe 50 Index, Europe's leading Blue-chip index, provides a representation of sector leaders in Europe. The index covers 50 stocks from 18 European countries and has the highest trading volume of any European index. One of the first things to notice with the 15-minute log-returns of STXE is the frequent large spikes. These spikes will occur every 27 observations at 13:30 (UTC time zone) due to the fact that there are 26 15-minute periods during the trading hours. These spikes represent the close-to-open jumps that the STOXX Europe 50 index has been subjected to and then reflected in the price of the futures contract. With this 'seasonal' pattern so obviously present in the log-return data, the frequency effects of this pattern should be clearly visible in the periodogram of the data. The beauty of MDFA (and iMetrica) is that we have the ability to explicitly engineer a trading signal to take advantage of this 'seasonal' pattern by building an appropriate extractor.
Regarding the periodogram of the data, Figure 2 depicts the periodograms for the 15 minute log-returns of STXE (red) and the explanatory series (pink) together on the same discretized frequency domain. Notice that in both log-return series, there is a principal spectral peak found between .23 and .32. The trick is to localize the spectral peak that accounts for the cyclical pattern that is brought about by the close-to-open variation between 20:00 and 13:30 UTC.
Figure 2: The periodograms for the 15 minute log-returns of STXE (red) and the explanatory series (pink).
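For readers who want to reproduce the preprocessing outside of iMetrica, the sketch below is a generic numpy illustration, not MDFA or iMetrica code; the synthetic price path merely stands in for the 15-minute STXE quotes. It shows the two steps discussed so far: forming log-returns and inspecting their raw periodogram on the discretized frequency axis.

```python
import numpy as np

def log_returns(prices):
    """Log-returns of a price series (one observation per 15-minute bar)."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

def periodogram(x):
    """Raw periodogram of x on the discrete frequencies omega_k = 2*pi*k/N, k = 0..N//2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dft = np.fft.rfft(x - x.mean())
    freqs = 2 * np.pi * np.arange(len(dft)) / n
    return freqs, (np.abs(dft) ** 2) / n

# toy usage with a synthetic price path standing in for STXE 15-minute quotes
rng = np.random.default_rng(0)
prices = 2600 * np.exp(np.cumsum(0.0005 * rng.standard_normal(500)))
freqs, pgram = periodogram(log_returns(prices))
print(freqs[np.argmax(pgram[1:]) + 1])  # frequency of the largest spectral peak
```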
In order to see the effects of the MDFA filter when localizing this spectral peak, I use my target builder interface in iMetrica to set the necessary cutoffs for the bandpass filter directly covering both spectral peaks of the log-returns, which are found between .23 and .32. This is shown in Figure 3, where the two dashed red lines show indicate the cutoffs and spectral peak is clearly inside these two cutoffs, with the spectral peak for both series occurring in the vicinity of . Once the bandpass target was fixed on this small frequency range, I set the regularization parameters for the filter coefficients to be , , and .
Figure 3: Choosing the cutoffs for the band pass filter to localize the spectral peak.
Pinpointing this frequency range that zooms in on the largest spectral peak generates a filter that acts on the intrinsic cycles found in the 15 minute log-returns of the STXE futures index. The resulting trading signal produced by this spectral peak extraction is shown in Figure 4, with the returns (blue to pink line) generated from the trading signal (green) , and the price of the STXE futures index in gray. The cyclical effects in the signal include the close-to-open variations in the data. Notice how the signal predicts the variation of the close-to-open price in the index quite well, seen from the large jumps or falls in price every 27 observations. The performance of this simple design in extracting the spectral peak of STXE yields a 4 percent ROI on 200 observations out-of-sample with only 3 losses out of 20 total trades (85 percent trade success rate), with two of them being accounted for towards the very end of the out-of-sample observations in an uncharacteristic volatile period occurring on January 31st 2013.
Figure 4: The performance in-sample and out-of-sample of the spectral peak localizing bandpass filter.
The two concurrent frequency response (transfer) functions for the two filters acting on the STXE log-return data (purple) and the explanatory series (blue), respectively, are plotted below in Figure 5. Notice the presence of the spectral peaks for both series being accounted for in the vicinity of the frequency , with mild damping at the peak. Slow damping of the noise in the higher frequencies is aided by the addition of a smoothing expweight parameter that was set at .
Figure 5: The concurrent frequency response functions of the localizing spectral peak band-pass filter.
With the ideal characteristics of a trading signal quite present in this simple bandpass filter, namely smooth decaying filter coefficients, in-sample and out-of-sample performance properties identical, and accurate, consistent trading patterns, it would be hard to imagine improving the trading signal for this European futures index even more. But we can. We simply keep the spectral peak frequencies intact, but also account for the local bias in log-return data by extending the lower cutoff to frequency zero. This will provide improved systematic trading characteristics by not only predicting the close-to-open variation and jumps, but also handling upswings and downswings, and highly volatile periods much better.
In this new design, I created a low-pass filter by keeping the upper cutoff from the band-pass design and setting the lower cutoff to 0. I also increased the smoothing parameter to $\alpha = 32$. In this newly designed filter, we see a vast improvement in the trading structure. As before, the filter was able to deduce the direction of every single close-to-open jump during the 200 out-of-sample observations, but notice that it was also able to become much more flexible in the trading during any upswing/downswing and volatile period. This is seen in more detail in Figure 7, where I added the letter 'D' to each of the 5 major buy/sell signals occurring before close.
Figure 6: Performance of filter both in-sample (left of cyan line) and on 210 observations out-of-sample (right of cyan line).
Notice that the signal predicted the jump correctly for each of these major jumps, resulting in large returns. For example, at the first "D" indicator, the signal indicated sell/short (magenta dashed line) the STXE future index 5 observations before close, namely at 18:45 UTC, before market close at 20:00 UTC. Sure enough, the price of the STXE contract went down during overnight trading hours and opened far below the previous day's close, with the filter signaling a buy (green dashed line) 45 minutes into trading. At the mark of the second "D", we see that on the final observation before market close, the signal produced a buy/long indication, and indeed, the next day the price of the future jumped significantly.
Figure 7: Zooming in on the out-of-sample performance and showing all the signal responses that predicted the major close-to-open jumps.
Only two very small losses of less than .08 percent were accounted for. One advantage of including the frequency zero along with the spectral peak frequency of STXE is that the local bias can help push up or pull down the signal, resulting in a more 'patient' or 'diligent' filter that can better handle long upswings/downswings or volatile periods. This is seen in the improvement of the performance towards the end of the 200 observations out-of-sample, where the filter is more patient in signaling a sell/short after the previous buy. Compare this with the end of the performance from the band-pass filter, Figure 4. With this trading signal out-of-sample, I computed a 5 percent ROI on the 200 observations out-of-sample with only 2 small losses. The trading statistics for the entire in-sample combined with out-of-sample are shown in Figure 8.
Figure 8: The total performance statistics of the STXEH3 trading signal in-sample plus out-of-sample. The max drop indicates -0 due to the fact that there was a truncation to two decimal places. Thus the losses were less than .01.
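The MDFA optimization itself is not reproduced here, but the trading logic layered on top of any real-time signal is easy to sketch: filter the log-returns with a causal FIR filter (a plain moving average below, standing in for the optimized MDFA coefficients) and hold a long or short position according to the sign of the signal.

```python
import numpy as np

def causal_filter(returns, coeffs):
    """Apply a causal FIR filter: signal[t] = sum_j coeffs[j] * returns[t - j]."""
    returns = np.asarray(returns, dtype=float)
    out = np.zeros_like(returns)
    for j, b in enumerate(coeffs):
        out[j:] += b * returns[: len(returns) - j]
    return out

def sign_strategy_returns(returns, signal):
    """Hold +1/-1 according to the previous signal sign; returns per-period P&L."""
    position = np.sign(signal[:-1])          # decided at t, applied to the return at t+1
    return position * returns[1:]

# stand-in coefficients: a simple moving average, NOT optimized MDFA coefficients
L = 12
coeffs = np.ones(L) / L
rng = np.random.default_rng(1)
r = 0.001 * rng.standard_normal(400)
sig = causal_filter(r, coeffs)
print(sign_strategy_returns(r, sig).sum())   # cumulative toy P&L
```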
In this experiment trading S&P 500 future contracts (E-mini) on observations of 15 minute intervals from Jan 4th to Feb 1st 2013, I apply the same regimented approach as before. In looking at the log-returns of ESH3 shown in Figure 10, the effect of close-to-open variation seems to be much less prominent here compared to that on the STXE future index. Because of this, the log-returns seem to be much closer to 'white noise' on this index. Let that not perturb our pursuit of a high performing trading signal however. The approach I take for extracting the trading signal, as always, begins with the periodogram.
Figure 10: The log-return data of ES H3 at 15 minute intervals from 1-4-2013 to 2-1-2013.
As the large variations in the close-to-open price are not nearly as prominent, it would make sense that the same spectral peak found before at near is not nearly as prominent either. We can clearly see this in the periodogram plotted below in Figure 11. In fact, the spectral peak at is slightly larger in the explanatory series (pink), thus we should still be able to take advantage of any sort of close-to-open variation that exists in the E-min future index.
Figure 11: Periodograms of ES H3 log-returns (red) and the explanatory series (pink). The red dashed vertical lines are framing the spectral peak between .23 and .32.
With this spectral peak extracted from the series, the resulting trading signal is shown in Figure 12 with the performance of the bandpass signal shown in Figure 13.
Figure 12: The signal built from the extracted spectral peak and the log-return ESH3 data.
One can clearly see that the trading signal performs very well during the consistent cyclical behavior in the ESH3 price. However, when breakdown occurs in this stochastic structure and the price follows another frequency more prominently, the trading signal dies and no longer trades systematically taking advantage of the intrinsic cycle found near . This can be seen in the middle 90 or so observations. The price can be seen to follow more closely a random walk and the trading becomes inconsistent. After this period of 90 or so observations however, just after the beginning of the out-of-sample period, the trajectory of the ESH3 follows back on its consistent course with a cyclical component it had before.
Figure 13: The performance in-sample and out-of-sample of the simple bandpass filter extracting the spectral peak.
Now to improve on these results, we include the frequency zero by moving the lower cutoff of the previous band-pass filter to $\omega_0 = 0$. As I mentioned before, this lifts or pushes down the signal from the local bias and will trade much more systematically. I then lessened the amount of smoothing in the expweight function to , down from as I had on the band-pass filter. This allows for slightly higher frequencies than to be traded on. I then proceeded to adjust the regularization parameters to obtain a healthy dosage of smoothness and decay in the coefficients. The result of this new low-pass filter design is shown in Figure 14.
Figure 14: Performance out-of-sample (right of cyan line) of the ES H3 filter on 200 15 minute observations.
The improvement in the overall systematic trading and performance is clear. Most of the major improvements came from the middle 90 points where the trading became less cyclical. With 6 losses in the band-pass design during this period alone, I was able to turn those losses into two large gains and no losses. Only one major loss was accounted for during the 200 observation out-of-sample testing of filter from January 18th to February 1st, with an ROI of nearly 4 percent during the 9 trading days. As with the STXE filter in the previous example, I was able to successfully build a filter that correctly predicts close-to-open variations, despite the added difficulty that such variations were much smaller. Both in-sample and out-of-sample, the filter performs consistently, which is exactly what one strives for thanks to regularization.
In the final experiment, I build a trading signal for the Australian Stock Exchange futures, during the same period of the previous two experiments. The log-returns show moderately large jumps/drops in price during the entire sample from Jan 4th to Feb 1st, but not quite as large as in the STXE index. We still should be able to take advantage of these close-to-open variations.
Figure 15: The log-returns of the YAPH3 in 15-minute interval observations.
In looking at the periodograms for both the YAPH3 15 minute log-returns (red) and the explanatory series (pink), it is clear that the spectral peaks don't align like they did in the previous two examples. In fact, there hardly exists a dominant spectral peak in the explanatory series, whereas the spectral peak in YAPH3 is very prominent. This ultimately might affect the performance of the filter, and consequently the trades. After building the low-pass filter and setting a high smoothing expweight parameter , I then set the regularization parameters to be , , and (same as first example).
Figure 16: The periodograms for YAPH3 and explanatory series with spectral peak in YAPH3 framed by the red dashed lines.
The performance of the filter in-sample and out-of-sample is shown in Figure 18. This was one of the more challenging index futures series to work with as I struggled finding an appropriate explanatory series (likely because I was lazy since it was late at night and I was getting tired). Nonetheless, the filter still seems to predict the close-to-open variation on the Australian stock exchange index fairly well. All the major jumps in price are accounted for if you look closely at the trades (green dashed lines are buys/long and magenta lines are sells/shorts) and the corresponding action on the price of the futures contract. Five losses out-of-sample for a trade success ratio of 72 percent and an ROI out-of-sample on 200 observations of 4.2 percent. As with all other experiments in building trading signals with MDFA, we check the consistency of the in-sample and out-of-sample performance, and these seem to match up nicely.
Figure 18 : The out-of-sample performance of the low-pass filter on YAPH3.
The filter coefficients for the YAPH3 log-difference series is shown in Figure 19. Notice the perfectly smooth undulating yet decaying structure of the coefficients as the lag increases. What a beauty.
Figure 19: Filter coefficients for the YAPH3 series.
Studying the trading performance of spectral peaks by first constructing band-pass filters to extract the signal corresponding to the peak in these index futures enabled me to understand how I can better construct the lowpass filter to yield even better performance. In these examples, I demonstrated that the close-to-open variation in the index futures price can be seen in the periodogram and thus be controlled for in the MDFA trading signal construction. This trading frequency corresponds to roughly in the 15 minute observation data that I had from Jan 4th to Feb 1st. As I witnessed in my empirical studies using iMetrica, this peak is more prominent when the close-to-open variations are larger and more often, promoting a very cyclical structure in the log-return data. As I look deeper and deeper into studying the effects of extracting spectral peaks in the periodogram of financial data log-returns and the trading performance, I seem to improve on results even more and building the trading signals becomes even easier.
Stay tuned very soon for a tutorial using R (and MDFA) for one of these examples on high-frequency trading on index futures. If you have any questions or would like to request a certain index future (out of one of the above examples or another) to be dissected in my second and upcoming R tutorial, feel free to shoot me an email.
This entry was posted in Financial Trading, High-Frequency Finance, iMetrica and tagged Direct Filtering, Euro stoxx50, filtering, financial trading, high frequency trading, iMetrica, index futures, MDFA, S&P 500, signal extraction. Bookmark the permalink. | CommonCrawl |
($\mathit V−\mathit A$) theory predicts $\delta $ = 0.75.
1 The quoted systematic error includes a contribution of 0.00006 (added in quadrature) from uncertainties on radiative corrections and on the Michel parameter $\eta $.
2 BALKE 1988 uses $\rho $ = $0.752$ $\pm0.003$.
3 VOSSLER 1969 has measured the asymmetry below 10 MeV. See comments about radiative corrections in VOSSLER 1969 . | CommonCrawl |
Variance-based sensitivity analysis provides a quantitative measure of how uncertainty in a model input contributes to uncertainty in the model output. Such sensitivity analyses arise in a wide variety of applications and are typically computed using Monte Carlo estimation, but the many samples required for Monte Carlo to be sufficiently accurate can make these analyses intractable when the model is expensive. This paper presents a multifidelity approach for estimating sensitivity indices that leverages cheaper low-fidelity models to reduce the cost of sensitivity analysis while retaining accuracy guarantees via recourse to the original, expensive model. This paper develops new multifidelity estimators for variance and for the Sobol' main and total effect sensitivity indices. We discuss strategies for dividing limited computational resources among models and specify a recommended strategy. Results are presented for the Ishigami function and a convection-diffusion-reaction model that demonstrate up to $10\times$ speedups for fixed convergence levels. Finally, for the problems tested, the multifidelity approach allows inputs to be definitively ranked in importance when Monte Carlo alone fails to do so.
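For orientation, the single-fidelity baseline that the multifidelity estimators improve upon can be sketched as a plain pick-and-freeze Monte Carlo estimate of the main-effect indices. The snippet below applies it to the Ishigami function mentioned in the abstract; it is an illustration of the baseline only, not the paper's multifidelity estimator.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

def sobol_main_effects(f, d, n, rng):
    """Pick-and-freeze Monte Carlo estimate of the main-effect indices S_1..S_d."""
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace only the i-th input column
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

rng = np.random.default_rng(0)
print(sobol_main_effects(ishigami, d=3, n=100_000, rng=rng))
# reference values for the Ishigami function: S1 ~ 0.31, S2 ~ 0.44, S3 = 0
```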
Qian, E., Peherstorfer, B., O'Malley, D., Vesselinov, V. V., and Willcox, K.. Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices. United States: N. p., Web. doi:10.1137/17M1151006.
The classic comparison theorem of quantum mechanics states that if the comparison potentials are ordered then the corresponding energy eigenvalues are ordered as well, that is to say if $V_a\le V_b$, then $E_a\le E_b$. The nonrelativistic Schrodinger Hamiltonian is bounded below and the discrete spectrum may be characterized variationally. Thus the above theorem is the direct consequence of the min--max characterization of the discrete spectrum [1, 2]. The classic comparison theorem does not allow the graphs of the comparison potentials to cross over each other. The refined comparison theorem for the Schrodinger equation overcomes this restriction by establishing conditions under which graphs of the comparison potentials can intersect and still preserve the ordering of eigenvalues.
The relativistic Hamiltonian is not bounded below and it is not easy to define the eigenvalues variationally. Therefore comparison theorems must be established by other means than variational arguments. Attempts to prove the nonrelativistic refined comparison theorem without using the min--max spectral characterization suggested the idea of establishing relativistic comparison theorems for the ground states of the Dirac and Klein--Gordon equations [4, 5]. Later relativistic comparison theorems were proved for all excited states by the use of monotonicity properties. In the present work, refined comparison theorems have now been established for the Dirac \S 4.2.1 and \S 4.2.2 and Klein--Gordon \S 4.1.1 and \S 4.1.2 equations. In the simplest one--dimensional case, the condition $V_a\le V_b$ is replaced by $U_a\le U_b$, where $U_i=\int_0^xV_i\,dt$, $x\in[0,\ \infty)$, and $i=a$ or $b$.
Special refined comparison theorems for spin--symmetric and pseudo--spin--symmetric relativistic problems , which also allow very strong potentials such as the harmonic oscillator \S 4.1.2, \S 4.2.1, and \S 4.2.2 [8, 10], are proved. | CommonCrawl |
– vortex ellipses with semiaxes $a$, $b$.
The emphasis is on the analysis of the asymptotic ($t\to\infty$) behavior of the system and on the verification of the stability criteria for vorticity continuous distributions.
Keywords: vortex dynamics, point vortex, hydrodynamics, asymptotic behavior. | CommonCrawl |
Abstract: For three-dimensional ABJ(M) theories and $\mathcal N=4$ Chern-Simons-matter quiver theories, we construct two sets of 1/2 BPS Wilson loop operators by applying the Higgsing procedure along independent directions of the moduli space, and choosing different massive modes. For theories whose dual M-theory description is known, we also determine the corresponding spectrum of 1/2 BPS M2-brane solutions. We identify the supercharges in M-theory and field theory, as well as the supercharges preserved by M2-/anti-M2-branes and 1/2 BPS Wilson loops. In particular, in $\mathcal N=4$ orbifold ABJM theory we find pairs of different 1/2 BPS Wilson loops that preserve exactly the same set of supercharges. In field theory they arise by Higgsing with the choice of either particles or antiparticles, whereas in the dual description they correspond to a pair of M2-/anti-M2-branes localized at different positions in the compact space. This result enlightens the origin of classical Wilson loop degeneracy in these theories, already discussed in arXiv:1506.07614. A discussion on possible scenarios that emerge by comparison with localization results is included. | CommonCrawl |
We study Darboux and Christoffel transforms of isothermic surfaces in Euclidean space. Using quaternionic calculus we derive a Riccati type equation which characterizes all Darboux transforms of a given isothermic surface. Surfaces of constant mean curvature turn out to be special among all isothermic surfaces: their parallel surfaces of constant mean curvature are Christoffel and Darboux transforms at the same time. We prove --- as a generalization of Bianchi's theorem on minimal Darboux transforms of minimal surfaces --- that constant mean curvature surfaces in Euclidean space allow $\infty^3$ Darboux transforms into surfaces of constant mean curvature. We indicate the relation between these Darboux transforms and Bäcklund transforms of spherical surfaces.
1991 Mathematics Subject Classification: (Primary) 53A10, (Secondary) 53A50, 53C42.
Keywords: Isothermic surface, Darboux transformation, Christoffel transformation, Riccati equation, Constant mean curvature, Baecklund transformation. | CommonCrawl |
Hi! I'm an NSF postdoc at UW - Madison (Fall 2016 - Spring 2019), working primarily with Uri Andrews, Steffen Lempp, Joe Miller, and Mariya Soskova.
I'm active on mathoverflow and math.stackexchange, and often work at Canada/USA Mathcamp in the summer.
In Spring 2018 I taught Math 222 (calculus and analytic geometry 2); in Spring 2019 I will be teaching Math 551 (point-set topology).
"Logical complexity of Banach-Mazur games." In preparation.
"The complexity of expansions of Cantor and Baire space." With Uri Andrews, Joe Miller, and Mariya Soskova. In preparation.
"Effective localization number: building $k$-surviving degrees." With Ivan Valverde. In preparation.
"Theories satisfying 'arithmetic-is-recursive.'" With Uri Andrews, Matthew Harrison-Trainor, and Joe Miller. In preparation.
"Natural examples of many-one incompleteness in the $\kappa$-Turing degrees." With Reese Johnston. In preparation.
"The axiom of choice in higher reverse mathematics." Submitted.
"Set theory and strong reducibilities." Submitted.
"Medvedev reducibility and $\alpha$-recursion theory." Submitted.
"Limit computability and ultrafilters." With Uri Andrews, Mingzhong Cai, and David Diamondstone. Submitted.
"Computing strength of structures related to the field of real numbers." With Greg Igusa and Julia Knight. Journal of Symbolic Logic, 82(1), pp. 137-150, 2017.
"Computable structures in generic extensions." With Antonio Montalban and Julia Knight. Journal of Symbolic Logic, 81(3), pp. 814-832, 2016.
"Transfinite recursion in higher reverse mathematics." Journal of Symbolic Logic, 80(3), pp.940-969, 2015.
"Computably enumerable partial orders." With Peter Cholak, Damir Dzhafarov, and Richard Shore. Computability, 1(2), pp. 99-107, 2012. | CommonCrawl |
In this post, we describe the different functionals available for discrete geometry optimization in VaryLab.
Defines an energy functional which is minimal for planar circular quads. Since we are using an angle criterion, the convergence to planarity is relatively slow. If the planar quads energy is added to the optimization, the geometry converges more quickly.
The property for a quadrilateral to possess an incircle tangent to its sides is that the two sums of opposite side lengths are equal: $a+c=b+d$ (a small code sketch of this condition follows the list of functionals below). Planarity is not included in this functional, so to get planar quadrilaterals with inscribed incircles you need to add planarity to the optimization.
In a quad-mesh with incircles, the incircles need not touch. In combination with the incircles and planarity energies, one can create a mesh with touching incircles.
This energy implements an angle criterion for conical meshes. In combination with planarity, it optimizes a mesh to have the property that the faces adjacent to a node are tangent to a common cone of revolution.
Allows the user to specify a direction field. This can be used with spring energy and boundary constraints to do simple form finding.
The lengths of the diagonals of each quad are equal in an optimized mesh.
This energy is dual to the planar faces energy. It computes the volume spanned by a node and its neighbors. Minimization yields meshes such that each node lies in a plane with its neighbors. If used together with face planarity the initial mesh is mapped to a plane.
Given a reference mesh we compute the closest point to a node and add a spring force between each node and its projection. The projection point is recomputed in each step of the optimization. If combined with other energies it keeps the optimized mesh close to a reference mesh.
This is similar to Reference mesh but with a reference nurbs surface.
The spring energy is computed by adding springs to all the edges of the mesh. These springs can have user specified target lengths and strengths that can be specified by various options.
See the article: A.I. Bobenko, A conformal energy for simplicial surfaces.
On a smooth surface, the curvature of a surface curve is decomposed into geodesic and normal curvature, where geodesic curvature is the curvature in the direction of the tangent plane. So we consider the projection of the parameter polyline into the tangent plane orthogonal to the normal at a node. For the optimized mesh, the projection is straight.
This curvature is based on the intrinsic geometry of the surface. Let $\alpha$, $\beta$, $\gamma$, and $\delta$ denote the angles in the adjacent quads at a node in cyclic order. Then the optimal mesh satisfies $\alpha+\beta = \gamma+\delta$ and $\beta+\gamma = \delta+\alpha$, i.e. so the parameter lines are straight from an intrinsic point of view.
This energy penalizes the deviation of a parameter polyline from a straight line. So using this energy only, will flatten the mesh to the plane. Used together with, e.g. a reference surface energy, this energy smoothes the parameter lines of the quad mesh.
The curvature is the inverse of the radius of the circle through three consecutive points on a parameter polyline.
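Two of the residuals described above, the incircle side-length condition and the zero-rest-length spring energy, are simple enough to sketch in a few lines. The snippet below is only an illustration written against a hypothetical vertex/edge list, not VaryLab's actual data structures or optimizer.

```python
import numpy as np

def incircle_residual(p0, p1, p2, p3):
    """a + c - b - d for the quad p0 p1 p2 p3; zero iff an incircle can exist."""
    a = np.linalg.norm(p1 - p0); b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p3 - p2); d = np.linalg.norm(p0 - p3)
    return (a + c) - (b + d)

def spring_energy(vertices, edges, rest_length=0.0):
    """Sum of (|e| - rest_length)^2 over all edges; rest length 0 shrinks edges."""
    E = 0.0
    for i, j in edges:
        E += (np.linalg.norm(vertices[i] - vertices[j]) - rest_length) ** 2
    return E

square = np.array([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], dtype=float)
print(incircle_residual(*square))                                  # 0.0: a square has an incircle
print(spring_energy(square, [(0, 1), (1, 2), (2, 3), (3, 0)]))      # 4.0 for unit edges
```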
Please leave a comment if you need more detailed information.
In this post, we describe how VaryLab can be used to create a minimal surface from scratch. We use the build-in primitives of VaryLab to create a start geometry and modify this using interactive editing as well as tabular data input. Subdivision steps are used to obtain finer resolutions and to create the final mesh. This mesh is then optimized to have the shortest edge lengths possible with certain boundary conditions. This gives a coarse approximation of the shape of a minimal surface with the given boundary.
We start by creating a simple square with a generator from the menu Generators->Quad Mesh. Set the u and v resolution to 2 and uncheck the "Use Diamonds" box. This creates our start geometry, a quadrilateral. You can move the vertices in space by selecting and Shift-Mouse-Drag.
An alternative way of coordinate input is the data table in the data visualization panel. You can activate the coordinate table by selecting the VPosition data channel and choose a table visualization for vertices. The table shows the coordinates of either all or just the selected vertices.
To obtain a finer surface resolution, we use several subdivision steps. From the menu choose Subdivision->CatmullClark. Here we can adjust subdivision parameters to create a linear subdivision and fixed boundary interpolation.
Adjust the position of vertices that should remain in a fixed location and keep them selected. The optimization can fix selected vertices in all or just some dimensions.
The optimization core of VaryLab can be used to shorten the edge lengths of the mesh and keep certain vertices at fixed positions. The corresponding energy is the spring energy. Activate the Sping Energy optimizer from the Optimizer Plug-ins panel. The spring length should be set to constantly 0. In the Optimization panel you can select constraints. We fix all selected vertices. All other boundary vertices can move along the z-direction. To achieve this effect, check all boxes of the selection constraints and the x and the y check boxes of the boundary constraints.
To start the optimization you can either choose to interactively optimize by pressing the Animate button. Or you can optimize the mesh for a predefined number of steps with the Optimize button.
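Outside the GUI, the optimization step above amounts to minimizing the zero-rest-length spring energy while holding the selected vertices fixed. The following gradient-descent sketch is only an illustration of that idea; VaryLab's own optimizer and its settings are not reproduced here.

```python
import numpy as np

def minimize_springs(V, edges, fixed, steps=2000, lr=0.05):
    """Gradient descent on sum |v_i - v_j|^2 with the vertices in `fixed` held in place."""
    V = V.astype(float).copy()
    fixed = list(set(fixed))
    for _ in range(steps):
        grad = np.zeros_like(V)
        for i, j in edges:
            d = V[i] - V[j]          # gradient of |v_i - v_j|^2 is 2 (v_i - v_j)
            grad[i] += 2 * d
            grad[j] -= 2 * d
        grad[fixed] = 0.0            # constraint: selected vertices do not move
        V -= lr * grad
    return V
```

With the corner vertices pinned at alternating heights, iterating this drives the interior vertices toward the "soap film" shape that the tutorial produces.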
A boundary of a discrete surface is usually a closed polygonal curve in space. If a surface has multiple boundary components, one speaks of a multiply-connected surface. In this article we deal with surfaces that have one boundary component. We call those simply-connected, or surfaces with disk topology.
Starting with a discrete surface with one boundary component we want to create a mesh where the pattern of the mesh aligns with the boundary curve in a nice way. You can download the example model here. We use a triangle pattern and demonstrate the usage of automatic as well as custom alignment of the boundary.
In order to create a new mesh from an existing surface, we first need to create a suitable parameterization for the input data. For general information about parameterization have a look into the Discrete Surface Parameterization article. In our case we want to create a parameterization that respects the geometry of a regular triangle pattern which means boundary angles of the domain of parameterization should be quantized to a multiple of 30º.
For the given triangulated surface, we create a map from the surface to a rectangle. In order to specify these boundary conditions, you select the four corner vertices of the mesh and type in the desired boundary angle in the "Custom Nodes" fold-out panel inside the "Discrete Conformal Parameterization" panel. If the vertices do not appear, press the Unwrap button and select the vertices again. To create the rectangle, the four vertices must have a custom angle of 90º. The general boundary setting should be set to "Quantized Angles" for the mode, and "Straight" for quantization. These settings affect all unselected vertices. If you press the unwrap button the mapping is calculated and can be previewed using the texture display features of VaryLab.
If we have a mesh with sufficiently quantized boundary angles, we can go on and create a boundary aligned mesh from this data. In our case we have four angles of 90º and at all other vertices the boundary of the domain is a straight line which corresponds to a boundary angle of 180º.
To perform the remeshing step select the "Boundary aligned Triangles" pattern from the "Surface Remeshing" panel and hit the "Remesh" button. You can download the result mesh here.
In this little post I will explain how to use VaryLab to create an A-net, i.e., a mesh with the property that each node and its neighbors lie in one plane.
Since A-nets are discrete versions of parametrization along asymptotic lines which only exist for surfaces with negative Gaussian curvature, we need to create a mesh with negative curvature first.
We start with a standard quad grid from the generators menu.
To obtain the right combinatorial structure, i.e., mesh topology, we need to un-check the "Use Diamonds" box.
After clicking "Ok" we have a nice 6x6 quad grid.
Now we use the Vertex Coordinate Editor to move the corners of the grid up and down respectively. To do so we hold down the Shift key and the left mouse button and move the corners up and down.
We fix the corners by selecting them with a left mouse click (the selected nodes are shown in red). Then we turn to the Optimizer Plugins panel and select Spring Energy. We select the const. option and set the target length to 0. This will make the edges behave like rubber bands/springs with 0 rest length. As optimization method I selected CG (conjugate gradient) and a maximal number of iterations of 100. Finally we fix the selected points in the constraints panel by checking x, y, and z. Now we press the Optimize button to proceed with the optimization and obtain a mesh with negative curvature.
To turn this mesh into an A-net we use the Planar Vertex Stars optimizer from the Optimizer Plugin panel and optimize again. The mesh does not change a lot, since the above mesh is already almost an A-net after the first optimization. | CommonCrawl |
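To verify numerically that the result is an A-net, one can check, vertex by vertex, that each vertex is coplanar with its neighbors. The helper below is a hypothetical check, not part of VaryLab; it uses the smallest singular value of the centered vertex star as a planarity score.

```python
import numpy as np

def vertex_star_planarity(v, neighbors):
    """Smallest singular value of the centered star: ~0 iff v and its neighbors are coplanar."""
    pts = np.vstack([v, neighbors]).astype(float)
    pts -= pts.mean(axis=0)
    return np.linalg.svd(pts, compute_uv=False)[-1]

v = np.array([0.0, 0.0, 0.0])
star = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], dtype=float)
print(vertex_star_planarity(v, star))   # 0.0: this vertex star is planar
```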
For an odd prime $p$ and an exponent $e\geq1$, the group $(\Z/p^e\Z)^\times$ is cyclic. It happens that any integer $a$ which generates $(\Z/p^2\Z)^\times$ will also be a generator of $(\Z/p^e\Z)^\times$ for all $e\geq 1$. We call the least such $a$ the Conrey (primitive) generator modulo p.
for $p=40487$, the Conrey generator is $10$ instead of $5$.
for $p=6692367337$, the Conrey generator is $7$ instead of $5$. | CommonCrawl |
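A direct way to compute such generators is to search for the smallest $a \ge 2$ of order $p(p-1)$ modulo $p^2$. The sketch below is a straightforward, unoptimized implementation of that search; the expected outputs in the comments follow the statements above.

```python
def prime_factors(n):
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d); n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def conrey_generator(p):
    """Smallest a >= 2 generating (Z/p^2 Z)^x, i.e. of order p*(p-1) mod p^2."""
    m, order = p * p, p * (p - 1)
    qs = prime_factors(order)
    a = 2
    while True:
        if a % p != 0 and all(pow(a, order // q, m) != 1 for q in qs):
            return a
        a += 1

print(conrey_generator(7))       # 3
print(conrey_generator(40487))   # 10, matching the note above (5 fails modulo p^2)
```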
In mathematics, the inverse hyperbolic functions are the inverse functions of the hyperbolic functions. For a given hyperbolic function, the hyperbolic angle equals the area of the corresponding hyperbolic sector of the hyperbola $xy = 1$, or twice the area of the corresponding sector of the unit hyperbola $x^2 - y^2 = 1$, in the same way that a circular angle is twice the area of the corresponding circular sector of the unit circle. For this reason the inverse hyperbolic functions are also called area functions: they return the hyperbolic angles determined by such areas.
Inverse hyperbolic cosine (its domain is the interval $[1, +\infty)$).
The concept is not new: inverse hyperbolic functions appear in various differential equations in hyperbolic geometry and in Laplace's equation. These equations are important for calculating variables in fields such as physics, chemistry, heat transfer, electromagnetic theory, relativity theory, and fluid dynamics.
There are standard abbreviations in mathematics for the inverse hyperbolic functions, and combining them consistently produces the formulas. The formulas are easy to understand if you know the nomenclature and have a basic knowledge of the trigonometric and hyperbolic functions.
There is a particular notation for the inverse functions using the superscript $-1$, but it should not be misunderstood as a reciprocal; it is just a shorthand for writing inverse functions. To derive explicit expressions, write the hyperbolic function as a ratio whose numerator and denominator are of degree two in $e^x$, solve for $e^x$ with the help of the quadratic formula, and then take natural logarithms to obtain the expressions for the inverse hyperbolic functions.
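Carrying out that derivation yields the standard logarithmic forms; these are the well-known identities, reproduced here for reference:

$$\operatorname{arsinh} x = \ln\!\left(x + \sqrt{x^2 + 1}\right), \qquad x \in \mathbb{R},$$
$$\operatorname{arcosh} x = \ln\!\left(x + \sqrt{x^2 - 1}\right), \qquad x \ge 1,$$
$$\operatorname{artanh} x = \tfrac{1}{2}\ln\frac{1 + x}{1 - x}, \qquad |x| < 1.$$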
In this article, different limit theorems are studied for a critical Bellman-Harris branching process with a single type of particle and finite second moments. Two conditionings are used in order to obtain the limits: "the condition of no extinction" and "the condition of extinction in the near future". In both settings, two different cases are taken into account: $t_i := d_i t$ and $t_i := \tau \pm d_i$, where $t$ is a point of time and the $d_i \in (0,\infty)$ are fixed for every $i = 1, \ldots, k$. For the case $t_i := d_i t$, Esty's comparison lemma 2.3 is used to investigate the asymptotic behavior of the joint probability generating function $F(s_1,\ldots,s_k)$ as $t\longrightarrow \infty$; for the case $t_i := \tau \pm d_i$ it is not used. For this last case another comparison lemma is established (Lemma 4.3), which is the basis for proving the limit theorems when $t_i := \tau \pm d_i$.
by: López-Martínez, Edgar, et al. | CommonCrawl |
Well, my guess might be wrong (see here), but yours is definitely not correct either.
What's a 'viewable space' then?
In portuguese "viewable" means "visualizavel", when a space had more than 3 dimensions, can't be viewable, understand now?
About the link that you give, I got a proof which I think solves the question.
So then applying the same argument to the term dim(U2+U3+⋯) recursively you can then eventually arrive at an equation consisting of the sum of the dimensions of each subspace minus a bunch of nasty terms involving intersections similar to dim(U1 ∩(U2+U3+⋯))..
then, 3 dimensions; the other term will be 3n dimensions, so the sum will be 3(n-1). You can find this at: linear algebra - The dimension of the sum of subspaces $(U_1,\ldots,U_n)$ - Mathematics Stack Exchange.
It seems to be the same result that I show in my post.
Can you visualize (n-1) spheres above your head?
Let V be a finite-dimensional vector space. If U1 and U2 are vector subspaces of V, then Dim(U1 + U2) = Dim(U1) + Dim(U2) - Dim(U1 ∩ U2).
This sum makes a vector space composed of n - 1 concentric spheres, which can be drawn on a graph, a 3(n - 1)-dimensional graph. This, at least to me, is a new thing.
Apologies for the delay in getting the form right, but now it seems ready.
First I understood this on faith; afterwards I could demonstrate it mathematically.
First I received the idea, and I would like to give thanks to the being who made this possible.
The uses for this are innumerable. Although the geometry of n-dimensional space is well developed, using geometric figures depicted in n-dimensional space makes understanding the geometry of phenomena much easier and faster.
The utility is 0 until someone can understand you.
Drawing the geometric figures, starting from four dimensions: with this tool it becomes possible to visualize the geometric figures and to understand, in another, more intuitive way, what is said in non-geometric language. A function that doesn't have a spherical equation can be transformed into equations of spheres, or become a vector subspace through another transformation, like the example in this post.
Please, if someone understands what I am trying to say, help me. Say something!
I don't understand why this would be a problem. I mean, there are ways to visualize things in more than three dimensions (see, for example, literally any complex analysis course), but it's not a limitation to fail to draw something.
I agree, Greathouse, but when we draw the geometry in a graphic, the understanding gains in clarity. Again, I don't say that there are no ways to interpret things without drawing figures, but it's much better when we do. At least I think so.
Archimede 66, No. 3, 139-144 (2014).
Summary: Drawing the parabolic segment with the Archimedes triangle, equivalent triangles are detected in the $n\times n$ trapezoid grid; so we can measure a figure circumscribed to the segment and the entire construction triangle. It is seen that these figures contain, respectively, $P_n$ (the square pyramidal number) and $n^3$ triangles. For $n$ tending to infinity the ratio of these quantities tends to $1/3$, which proves the Archimedean theorem.
This note translates the code from an interesting blog post (in French) from Python to R. The code includes a function to compute closeness vitality with the igraph package.
The blog post, which was written by Serge Lhomme, is based on a network made of French cities connected by highways as its data example. The network is unweighted and undirected, and has a density of around 0.05.
The second line of the data generation code creates exactly the same sequence of edges as the Python codes does, whereas the last line assigns vertex names to the graph. The "+1" part of that last line is due to the fact that R indexation starts at 1 instead of starting at 0 as it does in Python.
The rest of this note is organised around the centrality measures that can be computed to describe the graph. Computing centrality measures is fairly similar in networkx and in igraph, although one has to be careful to set the weighting and normalizing options identically to get matching results.
The blog post continues by suggesting a simple way to "stress test" a graph: for each node in the graph, remove that node, compute the average shortest path length of the resulting graph, and compare it to that same measure in the initial graph.
The only "sensitive" part of the code is where we control for the possibility that removing a node creates an unconnected graph (which will happen if the node is a cut-vertex—more on that later). In that case, the vulnerability measure is set to NA, since the distance matrix on which it is based includes some infinite values.
The text explains that, to compute the closeness vitality of node $x$ in graph $g$, you need to compute two Wiener indexes: one for the initial graph $g$, and one for that same graph after node $x$ has been removed (Equation 3.20). The Wiener index is simply the sum of the distance matrix of a graph (Equation 3.18), which is trivial to implement through igraph.
The text also explains that closeness vitality will be equal to $-\infty$ when the node for which it is being computed is crucial to keeping the graph connected. The networkx implementation of closeness vitality avoids that issue by removing infinite values from the distance matrix, which will also be our strategy so that we can match the networkx results as exactly as possible.
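A sketch of the same computation in Python/networkx (my code, not from the blog post; the cycle graph is only a stand-in for the highway network). The Wiener index here is the sum of the full distance matrix, i.e. over ordered pairs, matching the description above; note that networkx's own `wiener_index` counts each unordered pair once, so the built-in values can differ by a factor of 2.

```python
import networkx as nx

G = nx.cycle_graph(8)   # small stand-in for the highway network

def wiener_index(g):
    # Sum of shortest-path lengths over all ordered pairs of nodes.
    return sum(sum(lengths.values())
               for _, lengths in nx.all_pairs_shortest_path_length(g))

def closeness_vitality(g, node):
    h = g.copy()
    h.remove_node(node)
    return wiener_index(g) - wiener_index(h)

print({v: closeness_vitality(G, v) for v in G.nodes})
print(nx.closeness_vitality(G))   # built-in implementation, for comparison
```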
The code from this note is available as a Gist, which also contains the Python code that I translated. For additional connectivity tests, see Serge Lhomme's NetSwan package, which is discussed in this other blog post (also in French).
The code to implement the closeness vitality function has been submitted as a possible addition to the igraph package. | CommonCrawl |
We will now state some very basic properties regarding these incidence matrices.
Proposition 1: Let $(X, \mathcal A)$ be a $(v, b, r, k, \lambda)$-BIBD and let $M$ be a corresponding incidence matrix. Then every column of $M$ contains exactly $k$ many $1$s.
Proposition 2: Let $(X, \mathcal A)$ be a $(v, b, r, k, \lambda)$-BIBD and let $M$ be a corresponding incidence matrix. Then every row of $M$ contains exactly $r$ many $1$s.
Proposition 3: Let $(X, \mathcal A)$ be a $(v, b, r, k, \lambda)$-BIBD and let $M$ be a corresponding incidence matrix. Then between any two distinct rows there are exactly $\lambda$ many $1$s in the same corresponding columns. | CommonCrawl |
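As a quick concrete check of Propositions 1-3 (my sketch; the Fano plane and its block list are a standard example, not taken from this page):

```python
import numpy as np

points = range(1, 8)
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
v, b, r, k, lam = 7, 7, 3, 3, 1          # the Fano plane is a (7, 7, 3, 3, 1)-BIBD

# Incidence matrix M: rows indexed by points, columns by blocks.
M = np.array([[1 if p in B else 0 for B in blocks] for p in points])

assert all(M[:, j].sum() == k for j in range(b))      # Proposition 1: column sums equal k
assert all(M[i, :].sum() == r for i in range(v))      # Proposition 2: row sums equal r
assert all(M[i] @ M[j] == lam                         # Proposition 3: row dot products equal lambda
           for i in range(v) for j in range(v) if i != j)
print("All three propositions verified for the Fano plane.")
```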
According to this and many other places, the weight for an exponential moving average is defined as $\omega_t=(1-\alpha)\alpha^t$, where $t$ is the current index and $\alpha$ is a smoothing factor.
How does one derive this formula, what does $\alpha$ mean, and where can one plug in the size of the averaging window?
This is the problem for me, as I expected $\omega$ to be a function of the window size $N$ and the index $t$, but here and everywhere else I get only $t$ and the mysterious $\alpha$.
I understand that $0<\alpha<1$ and that it describes the steepness of the exponential slope, but I am confused that I can't find the derivation of this formula. That is why I can't understand it to the end. Could anybody provide a step-by-step derivation of this?
With an exponential moving average, your averaging window includes all previous values, although the most recent values weigh more. A finite window $w$ thus cannot be defined in this case.
On the other hand, you can select $\alpha$ so that the last $w$ samples make up a given portion of your current estimate.
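A sketch of the derivation being asked for (mine, not part of the original answer), using the recursive definition whose weights match the question: define
$$s_t = (1-\alpha)\,x_t + \alpha\,s_{t-1} \;\Longrightarrow\; s_t = (1-\alpha)\sum_{k=0}^{\infty}\alpha^{k}\,x_{t-k},$$
so the sample $k$ steps in the past receives weight $\omega_k=(1-\alpha)\alpha^{k}$, and the weights sum to $1$. There is no hard window, but the most recent $N$ samples carry total weight $(1-\alpha)(1+\alpha+\cdots+\alpha^{N-1})=1-\alpha^{N}$, so choosing $\alpha=(1-q)^{1/N}$ makes the last $N$ samples account for a fraction $q$ of the estimate.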
I have a very simple question, what is the meaning, if any, of this?
I.e. Economically what does the following calculation mean?
Note that $Q_i P_i$ is the total revenue from good $i$. So dividing total revenue from all goods by the amount produced of all of them should give us something like "average revenue per item produced"?
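A small numeric illustration (my example): with two goods priced $P=(2,3)$ and produced in quantities $Q=(10,5)$,
$$\frac{\sum_i Q_iP_i}{\sum_i Q_i}=\frac{10\cdot 2+5\cdot 3}{10+5}=\frac{35}{15}\approx 2.33,$$
i.e. total revenue per unit produced, which is the quantity-weighted average price.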
Methods, Functions and Subroutines in Perl and what is $self ?
A class with only attributes can be already useful, but having methods other than getters and setters will make the life of the objects more interesting.
When we created a class, Moo automatically added methods to get and set the values of a single attribute. In that example we already saw that a method is called using the arrow notation: $student->name;.
Let's see now how we can create our own methods. In the first example we have a class called Point representing a single 2-dimensional point. It has two attributes: x and y. It also has an extra method called coordinates that will return a string that looks like this: [x, y], where x and y will be replaced by the respective numerical values.
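The article's code listings did not survive extraction; as a hedged reconstruction (standard Moo usage, not necessarily the author's exact code), the Point class being described, including the move method discussed further below, might look like this:

```perl
package Point;
use Moo;

has x => (is => 'rw');
has y => (is => 'rw');

sub coordinates {
    my ($self) = @_;
    return sprintf '[%s, %s]', $self->x, $self->y;
}

sub move {
    my ($self, $dx, $dy) = @_;
    $self->x( $self->x + $dx );
    $self->y( $self->y + $dy );
    return;
}

package main;

my $p = Point->new( x => 2, y => 3 );
print $p->coordinates, "\n";   # [2, 3]
$p->move(4, 5);
print $p->coordinates, "\n";   # [6, 8]
```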
The attributes are not new. We saw them in the first article. However, in this example we also have a method called coordinates. If you look at it, you won't see any obvious difference between a method and a plain subroutine in Perl. Indeed, there is almost no difference. The main reason we call them methods is to have a vocabulary similar to what people arriving from programming languages use.
A method in a Moo-based class is just a subroutine with a few minor differences.
We call the method with the arrow notation $p->coordinates.
Even though we did not pass any parameter to this method call, Perl will automatically take the object, ($p in this case), and pass it as the first argument of the method. That's what is going to be the first element of the @_ array and that will arrive in the $self variable inside the function.
While $self is not a reserved variable or a special word in Perl, it is quite customary to use this name to hold the current object inside a class.
This one would return the distance of the point from the [0,0] coordinates. I think there is nothing special in this example, but I wanted to show another simple case.
In another example we would like to add a method to move the point from one coordinate to another. This involves updating both x and y at the same time.
In this method we get two parameters $dx and $dy, besides the object itself which is copied to $self. These hold the distance we move x and y respectively. For x I wrote the code step-by-step. First we use $self, the current object to get the current value of the x attribute and assign it to a temporary variable called $x_temp. Then we increment $x_temp by the "move", and finally we put the new value from $x_temp in the x attribute. $self->x( $x_temp );.
In the case of the y attribute, I already left out the use of the temporary variable which was only there to (hopefully) make it clearer what we have been doing.
Just as there is no privacy for attributes, in the Object Oriented system of Perl there are no private methods either.
Nevertheless, as people want to have the feeling of privacy, tradition says that any method that starts with an underscore _ should be considered private. There is nothing in Perl that would enforce this privacy, but it is a good approximation. After all, you usually have access to the source code of any class anyway, so you could copy the class, make some changes and use your own version.
The final call to return is not really required here, but I like to add it to make sure the caller won't receive any accidental value till I might decide what should be returned.
This was a slightly more complex case, where we wanted to change two attributes at once, hence we created a separate method.
For further explanations see what is the difference between methods, functions, subroutines and what is $self anyway? | CommonCrawl |
Abstract: We derive a sufficient condition for realizing meta-stable de Sitter vacua with small positive cosmological constant within type IIB string theory flux compactifications with spontaneously broken supersymmetry. There are a number of `lamp post' constructions of de Sitter vacua in type IIB string theory and supergravity. We show that one of them -- the method of `Kähler uplifting' by F-terms from an interplay between non-perturbative effects and the leading $\alpha'$-correction -- allows for a more general parametric understanding of the existence of de Sitter vacua. The result is a condition on the values of the flux induced superpotential and the topological data of the Calabi-Yau compactification, which guarantees the existence of a meta-stable de Sitter vacuum if met. Our analysis explicitly includes the stabilization of all moduli, i.e. the Kähler, dilaton and complex structure moduli, by the interplay of the leading perturbative and non-perturbative effects at parametrically large volume. | CommonCrawl |
Two primes the difference between which is 2. Generalized twins are pairs of successive primes with difference $2m$, where $m$ is a given natural number. Examples of twins are readily found on consulting the table of prime numbers. Such are, e.g., 3 and 5, 5 and 7, 11 and 13, 17 and 19. Generalized twins — for $m=2$, for example — include 13 and 17, 19 and 23, 43 and 47. It is not yet (1992) known if the set of twins, and even the set of generalized twins for any given $m$, is infinite. This is the twin problem.
It is known that the infinite sum $\sum 1/p$ over all $p$ belonging to a twin is finite, see Brun sieve; Brun theorem. Its value has been estimated as $1.9021605831 \ldots$.
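As a small illustration (my sketch, not part of the encyclopedia entry), one can list twins and generalized twins and watch the partial sum of $1/p$ grow slowly towards the quoted estimate:

```python
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def generalized_twins(limit, m=1):
    """Pairs of successive primes with difference 2*m, both below `limit`."""
    ps = primes_up_to(limit)
    return [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2 * m]

print(generalized_twins(50, m=1))   # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]
print(generalized_twins(50, m=2))   # successive primes differing by 4

partial = sum(1 / p + 1 / q for p, q in generalized_twins(10**6, m=1))
print(partial)   # grows very slowly towards the ~1.9021605831 estimate quoted above
```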
One of the most interesting subjects to study in physics is black holes. These objects first came up as simple theoretical solutions to Einstein's equations, back in 1916. In 1972, the first credible astronomical evidence for black holes came from the observations of Cygnus X-1, some of which were made here at the University of Toronto by Charles Thomas Bolton. Astronomical evidence continues to accrue, indicating there is a super-massive black hole at the center of our galaxy.
However, it is far more difficult to probe the strong field physics of gravity and black holes, such as gravitational waves. One experimental approach that has been successful has been the Laser Interferometer Gravitational-Wave Observatory (LIGO). This extremely intricate experiment exploits the fact that gravitational waves will stretch and compress spacetime as they propagate. The setup consists of a perpendicular pair of mirrors and a laser beam splitter which sends two beams to bounce off the two mirrors, similar in spirit to the early interferometer of Michelson-Morley that disproved the aether. The interference of the two laser beams will detect any changes in the length of a laser's path as a result of gravitational waves.
In 2015, LIGO made their first detection of gravitational waves with a merging pair of 36 and 29 solar mass black holes. The final mass of the black hole was 62 solar masses with the equivalent of 3 solar masses worth of energy radiated in gravitational waves. This radiated more power than the light output of all the stars in the observable universe. This was the first ever direct detection of a black hole merger and of gravitational waves themselves! This gives us even more motivation to theoretically probe the physics of black holes.
The first discovered solution to Einstein's equations of general relativity and the most basic black hole is known as the Schwarzschild black hole. This is a neutral, non-rotating, spherically symmetric black hole. Of course, we can make a charged black hole, such as the Reissner-Nordström black hole, or a rotating black hole, such as the Kerr black hole. We can try to imagine creating even more complicated black holes, but we hit a road block. In four dimensions we have no-hair theorems which state that the only quantum numbers for a black hole are the mass, charge(s), and angular momentum. Determining these quantum numbers and other parameters of the merger is part of the information that LIGO wishes to access.
Black holes have two general features: a singularity and an event horizon. In general relativity, the singularity is at the centre of the black hole and it is a curvature singularity. The singularity is expected to be resolved by some quantum gravitational effects in a complete theory of gravity. The event horizon is even more interesting. This boundary is a surface where the velocity required to escape the gravitational field is equal to the speed of light. Anything inside the event horizon cannot escape and must hit the singularity, so the interior is causally disconnected from the exterior. Since this includes light, the event horizon would then be entirely black. Interestingly enough, there is no curvature singularity at the horizon, although there can be coordinate singularities. This indicates that this bizarre behaviour should be visible classically.
Hawking radiation was derived by Stephen Hawking in 1975. This radiation comes from putting quantum field theory on a fixed curved spacetime background. An intuitive picture for where this radiation comes from is virtual vacuum pairs appearing near the horizon. One will become an infalling particle whereas the particle outside will become a real particle and radiate away. To transition from virtual to real, the particle must acquire energy, which is taken from the black hole.
Over time this will cause the black hole to evaporate away, leaving only a thermal bath of particles. Since the black hole radiation is outside the event horizon, it can only depend on the mass, angular momenta, and charges due to the no-hair theorem.
This leads us to what is called Hawking's paradox or the black hole information problem. Suppose we have two textbooks of equivalent mass, one on quantum mechanics and one on general relativity. If we were to throw both books into the black hole, it would cause the same increase in mass to the black hole. But when the black hole radiates, the radiation will only depend on the mass and so the radiation is the same, despite the very different information in the textbooks!
This might not seem like a big problem but it leads to a far more important technical paradox. Suppose that some matter that is initially in a pure state (a state described in quantum mechanics by a single wavefunction) collapses to form a black hole. Due to Hawking radiation, the black hole will lose mass by evaporating. Once the black hole evaporates completely, we are left with a bath of thermal particles. This is clearly in a mixed state (a state that requires a sum of multiple wavefunctions to describe it). However, unitarity of quantum mechanics demands pure states evolve to pure states. Since the theory of quantum mechanics is unitary, we should not see this breaking of unitarity. That is the paradox, which is deeply connected to the information loss I described earlier.
Obviously, this paradox needs to be resolved. Now, Hawking's calculation is a semiclassical calculation, not a full quantum gravity calculation. One might try to solve the paradox by looking at further perturbative quantum corrections to the result. These corrections do not help, as Mathur's Theorem (found in 2009) showed that only large, order-one corrections will be able to unravel the paradox. He did not need to assume a specific quantum gravity theory, only that it satisfies strong subadditivity (a generic technical property of entropy) and has traditional event horizons (i.e. Hawking pairs created independently). The theorem is a powerful result and limits where we can look for a solution to the paradox. In short, the solution won't be found by considering quantum field theory on a fixed background like Hawking first considered.
One proposal to avoid information loss was called black hole complementarity by 't Hooft and Susskind starting in the '90s. Here complementarity meant that two very different observers would observe different physics, but these observations are underpinned by the same physical system. Basically, the idea was that infalling observers would smoothly fall into a black hole but the observers outside would see a unitary process of radiation (i.e. the information is transmitted back). This would require violating monogamy of entanglement in quantum mechanics, as a single Hawking mode would need to be maximally entangled with the interior for the infalling observers and maximally entangled with old radiation in order to restore unitarity. It was argued that the observers would not have enough time to compare each other's results, preventing the violation from being detected.
This was one of the dominant proposals until a major objection to complementarity was raised by Almheiri, Marolf, Polchinski, and Sully (AMPS) in 2012, usually called the firewall argument. AMPS argue that this setup will lead to high energy excitations around the horizon which burn up infalling observers. These excitations form the firewall. However, this firewall breaks the equivalence principle (i.e. the smoothness of the black hole geometry). This led Hawking to put out a paper suggesting that if firewalls are inevitable, then perhaps black holes do not have event horizons at all!
There have been many other proposals to try to avoid firewalls. Susskind and Maldacena, in 2013, proposed a resolution to the firewall arguments called ER=EPR. This is a proposal that entanglement between particles (EPR) is actually equivalent to a quantum wormhole connecting the two particles (ER). These wormholes are not traversable but do allow for entanglement to be set up between the two sides. This setup avoids the firewall argument by modifying the interior of the black hole and allowing for smooth horizons and unitary radiation. The state-dependence proposal of Kyriakos Papadodimas and Suvrat Raju, discussed in 2013, is a lot more controversial. They argue that the things that can be observed with black holes are intrinsically dependent on the state we are measuring. A number of people such as Daniel Harlow and Joseph Polchinski have argued that this breaks key results of quantum mechanics. There are also some proposals of classical non-local information transfer, advocated by Steven Giddings, that allow information to leave the black hole in a non-local way. The full mechanism behind this is not yet fully understood.
Of course, these are not the only proposals and some proposals predate the firewall arguments entirely, such as considering black hole microstates. Before we can describe this, we must first discuss string theory.
Our current experimental understanding of the world involves particles, zero dimensional interacting objects. When looking for a theory of quantum gravity, one proposal is string theory, which involves strings, which are extended in one spatial direction, and D\(p\)-branes, extended objects in \(p\) spatial dimensions. There is debate on the possibility of experimental evidence for string theory, but there are plenty of other, more eloquent authors who have written on that topic. What concerns us is this: if string theory describes quantum gravity, it should not have any information paradox.
There are several different types of string theories, both bosonic and supersymmetric versions. There are many supersymmetric versions including M theory, Type I, Type IIA, Type IIB, Heterotic E and Heterotic O. The supersymmetric versions are all related to each other by dualities, and most of the time we refer to all of them as just (super)string theory. Critical superstring theories operate in 10 dimensions (or 11 for M theory).
The fact that the theories live in higher dimensions allows them to evade the no-hair theorems and to store more information than just the mass, angular momentum, and charge(s). This is the first glimmer of how we might yet save information. Still, we need to find out exactly how information is recovered. Additionally, to connect it to black holes in nature we would need to compactify some of the dimensions.
String theory is described by one coupling, \(g_s\), and one length scale set by the string tension, \(\alpha'=l^2_s\). These two together can be used to describe our usual gravitational coupling constant in supergravity, \(G_N=8\pi^6g_s^2l_s^8\). However, there is an additional gravitational effect, as the presence of strings/branes will itself warp spacetime with a strength proportional to $g_sN$. So the number of strings/branes \(N\) can also play a role in the gravitational physics of a system in string theory. Although it may seem counterintuitive, using a large number of branes can be a useful limit for studying string theory. This is somewhat reminiscent of collective effects in condensed matter physics where a large number of particles will have some new description that is easy to access.
With the main string theory ingredients discussed, we can now consider constructing systems with large numbers of strings/branes that act as microstates of the black hole. Black holes have an entropy proportional to their area, known as the Bekenstein-Hawking entropy. A statistical mechanics interpretation would lead us to believe that this entropy is related to the number of these microstates. In general relativity, it is not clear what the microstates are, but we will discuss how these microstates might be interpreted in string theory.
Constructions in string theory use special configurations of D\(p\)-branes which are wrapped on certain compactifications of some of the extra dimensions. They are colloquially called fuzzballs. For example, the two charge black hole (D1D5) has D5 branes wrapped up on a four dimensional torus (a two dimensional torus is a donut shape) and a circle, and D1 branes wrapped on the circle. This means that five of the directions are compactified, so the result is a 5 dimensional black hole. For a great public talk on how this is accomplished, take a look at A.W. Peet's Perimeter Institute public lecture entitled ``String Theory: Legos for Black Holes" from May 6th, 2015. The main bonus of these microstates is that they have no horizon and are smooth with no unphysical singularities.
Crucially, it is important that these microstates can reproduce the entropy of a black hole, otherwise they cannot be seen as microstates. Strominger and Vafa in 1996 found that the entropy of a class of black holes can be reproduced by counting these microstates. There has been a lot of work on these fuzzballs to show that they not only get the correct relationship for black hole entropy but also get the correct radiation spectrum. However, unlike with Hawking, the radiation is from the D-branes themselves and not pair creation at a horizon. This is an order one difference in the structure of the horizon, as required by Mathur's theorem.
More recent work has been done on trying to get further towards a more astrophysical black hole. Earlier solutions are typically far from a Schwarzschild black hole, usually higher dimensional black holes with conserved charges and/or large angular momentum. There have been microstates constructed that avoid some of these problems but none that solve all of them. So a full description of astrophysical black holes remains elusive.
The 10 dimensional geometries have no horizon so the horizon must be generated in some other way. The idea is that a classical black hole state will actually be some average over these microstates, and the horizon will arise from this averaging procedure. Of course, we still need to address how these fuzzballs will form in the first place. The rough argument is that there is an exponentially suppressed probability of infalling matter to tunnel into a microstate but that there are an exponential number of states to tunnel into. So we can then expect a roughly order-one probability to tunnel into these microstates.
A general microstate has three important regions in its geometry. It will have an asymptotic region, a finite throat, and a cap region at the end of the throat. The asymptotic infinity will be flat, but will have a black hole description in general. The cap is where non-compactified spacetime ends and the full 10d geometry takes over. It is these caps that distinguish the different microstates from one another. They will in general be non-spherically symmetric and be horizon sized. This is true, even at weak gravitational coupling, due to the large number of strings/branes in the constructions.
Away from the cap is the throat. This will be the key part, as it will house the transition from the full microstate physics of the cap to the black hole/flat asymptotics. It turns out that this throat will have an anti-de Sitter (AdS) component to the metric. This is important, as the physics of large \(N\) AdS has a well-known alternative description in what is called the AdS/CFT correspondence, and we can exploit this to understand the physics in the throat.
The AdS/CFT correspondence says that certain quantities calculated using the bulk gravity in an anti-de Sitter (AdS) spacetime (a maximally symmetric solution to Einstein's equations with negative cosmological constant) are the same as quantities calculated using conformal field theory (CFT, or quantum field theory with conformal symmetry) on a spacetime conformal to the boundary of the AdS. This is an example of the holographic principle, and many quantities on one side will have a dual description on the other (e.g. fields in the bulk will be dual to operators in the CFT). This is a useful correspondence as sometimes it is easier to calculate quantities using one description or the other.
Additionally, this duality is not true for just pure AdS geometries. Geometries can be more complex in the interior, as long as they have an asymptotic AdS region. This means that this duality can apply to AdS black hole geometries as well as geometries with some broken symmetries. This is especially important for applications of holography to condensed matter systems, where quantum critical points of phase transitions may display conformal symmetry or a partially broken conformal symmetry.
This duality was first discovered in 1997 by Maldacena in a limit of a stack of D3 branes which related the field theory of the open strings on the branes to the gravity of the closed strings in the bulk. However, in 2009, Heemskerk, Penedones, Polchinski, and Sully argued that holography can be more general than just string theory constructions. They argued that having a large gap in low-lying states and a well-defined large \(N\) expansion for a CFT should be enough to have a gravity dual. This means that CFTs on their own can be used to investigate quantum gravity without having to rely on string theory.
So we can use CFTs themselves to define our quantum gravity and investigate these dual CFTs including how they would solve the black hole information problem.
Of course, one of the main questions in AdS/CFT holography is trying to determine how the geometry of the bulk comes about from the CFT. Work starting in 2006 by Hamilton, Kabat, Lifschytz and Lowe focused on getting bulk fields as a smeared integral of the dual operator on the boundary. Alternatively, one can use quantum information quantities such as entanglement entropy in the CFT which was found to be dual to minimal surfaces in the bulk by Ryu and Takayanagi also in 2006. There has been significant progress since then covering a wide range of more interesting geometries and quantum informational tools. One can look at the accompanying essay for more details on this direction.
There are a number of useful structures in CFTs that can be examined in holography. Conformal symmetry will constrain the form of low point correlation functions and organizes the operators of the theory in convenient ways. One of these structures, conformal blocks, are used to build up four point functions. They have recently been used to examine the black hole information problem from a perturbative and non-perturbative perspective in general semiclassical gravity theories.
Our approach to the black hole information problem is slightly different. We examine a specific CFT, the D1D5 CFT, which is dual to the D1D5 gravity theory in the throat. However, we do not examine the CFT dual to the semiclassical gravity and try to find corrections. Instead we examine the CFT where it is free and is dual to a very string theoretic limit. In 2014, Gopakumar and Gaberdiel found that this CFT has a subsector that describes a tensionless limit (\(\alpha'\rightarrow\infty\)) where strings are long and floppy. We then consider perturbing the CFT with an operator that will take us towards a semiclassical gravity description. Our hope is that the CFT can be seen as providing a tendril down from a very quantum regime of gravity to the semiclassical description. We hope that in the future this will be helpful in showing how information is restored in these microstates of black holes.
All of these series are telescoping series, but determining how to telescope them can be difficult and involve a lot of nasty algebra, so I shall instead show how to derive their values from infinite products that are easy to evaluate using logarithmic differentiation. Consider first the infinite product where $\alpha$, with $|\alpha|\lt 1$, is a parameter. We are leaving $\alpha$ as a variable for now so that we may differentiate with respect to it later. This is an easy product to evaluate if one recalls the following "difference of squares" factoring trick from algebra: This trick can be extended to greater heights: We shall use this trick to telescope this product into a closed form: which gives us the result Now we shall apply logarithmic differentiation - that is, we will take the logarithm of both sides, and then differentiate. Taking the logarithm, we have and differentiating both sides with respect to $\alpha$, we have or By taking $\alpha=1/2$, we obtain the value of the first series: We may also take the previous result and differentiate again with respect to $\alpha$ for another formula: Letting $\alpha=4$ gives the solution to the third problem: If we rearrange this last formula a little bit, we have After a lot of gross algebra and some properties of the hyperbolic functions, this turns into or, letting $\ln\alpha=\beta$, This is our polished formula, which will later be accompanied by a similar (but more difficult to prove) twin formula.
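The displayed equations in this post did not survive extraction; as a sketch of the central identity being described (my reconstruction of the standard difference-of-squares telescoping, not necessarily the author's exact notation):
$$(1-\alpha)\prod_{k=0}^{n}\left(1+\alpha^{2^k}\right)=1-\alpha^{2^{n+1}}, \qquad\text{so for } |\alpha|<1, \quad \prod_{k=0}^{\infty}\left(1+\alpha^{2^k}\right)=\frac{1}{1-\alpha}.$$
Taking logarithms and differentiating in $\alpha$ then gives a series in closed form:
$$\sum_{k=0}^{\infty}\frac{2^k\,\alpha^{2^k-1}}{1+\alpha^{2^k}}=\frac{1}{1-\alpha}.$$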
Another product can be obtained using the "difference of cubes" factoring trick and its generalization: From this trick, we have that if $|\alpha|\lt 1$, Of course, these tricks can be taken even further using formulas for differences of fourth, fifth, and higher powers, but this is as far as I will take it.
Now consider the product with $\alpha\gt 0$. Using the difference of squares factoring trick, we have yielding the result By taking a logarithm of both sides, we have and differentiating gives us, after a bit of nasty algebra, Using this formula with $\alpha=1/2$, we can obtain the value of the first sum: Differentiating the formula again (and a bit of algebra) yields the formula and with a bit of algebraic manipulation, this turns into and, letting $\beta=\ln\alpha$, this becomes the "twin formula" mentioned earlier: Again, the trick used to obtain this formula can be extended to use difference of cube, fourth power, and higher degree factoring tricks, but that's all I will do with it for now. | CommonCrawl |
Special Issue "Foundations of Quantum Mechanics: Quantum Logic and Quantum Structures"
Since its origins, quantum theory posed deep questions with regard to the fundamental problems of physics. During the last few decades, the advent of quantum information theory and the possibility of developing quantum computers, gave rise to a renewed interest in foundational issues. Research in the foundations of quantum mechanics was particularly influenced by the development of novel laboratory techniques, allowing for the experimental verification of the most debated aspects of the quantum formalism.
The VIII Conference on Quantum Foundations (https://sites.google.com/view/viiijfc), to be held during 21–23 November, 2018, at the CAECE University, Buenos Aires, Argentina, aims to gather experts in the field to promote academic debate on the foundational problems of quantum theory. This Special Issue captures the main aspects of this debate by incorporating a selected list of contributions presented at the conference. Researchers not attending the conference are also welcome to present their original and recent developments, as well as review papers, on the topics listed below. All contributions will be peer-reviewed.
A quantum correlation NFG,A for (n+m)-mode continuous-variable systems is introduced in terms of local Gaussian unitary operations performed on Subsystem A based on Uhlmann fidelity F. This quantity is a remedy for the [...]
Abstract: We explore diverse features of a set of relations between entanglement, energy, and quantum evolution, that can be interpreted as entropic energy-time uncertainty relations. These relations arise naturally within the timeless point of view advocated by Page and Wootters , but they can be formulated and interpreted independently of that approach.
BZ-lattices (called BZ$^\ast$-lattices) that represent a quite faithful abstraction of the concrete model based on $\mathcal E(\mathcal H)$. Interestingly enough, in the framework of BZ$^\ast$-lattices different abstract notions of ``unsharpness'' collapse into one and the same concept, similarly to what happens in the concrete BZ$^\ast$-lattices of all effects.
We report on a search for weakly interacting massive particles (WIMPs) using 278.8 days of data collected with the XENON1T experiment at LNGS. XENON1T utilizes a liquid xenon time projection chamber with a fiducial mass of (1.30 ± 0.01) ton, resulting in a 1.0 ton yr exposure. The energy region of interest, [1.4; 10.6] keV$_{ee}$ ([4.9; 40.9] keV$_{nr}$), exhibits an ultralow electron recoil background rate of [$82^{+5}_{-3}$ (syst) $\pm 3$ (stat)] events/(ton yr keV$_{ee}$). No significant excess over background is found, and a profile likelihood analysis parametrized in spatial and energy dimensions excludes new parameter space for the WIMP-nucleon spin-independent elastic scatter cross section for WIMP masses above 6 GeV/$c^2$, with a minimum of $4.1 \times 10^{-47}$ cm$^2$ at 30 GeV/$c^2$ and a 90% confidence level.
The XENON1T experiment searches for dark matter recoils within a $2$ tonne liquid xenon target. The detector is operated as a dual-phase time projection chamber, and reconstructs the energy and position of interactions in the active volume. In the central volume of the target mass, the lowest background rate of a xenon-based direct detection experiment so far has been achieved. In this work we describe the detector response modelling, the background and signal models, and the statistical inference procedures used in a search for Weakly Interacting Massive Particles (WIMPs) using 1\,tonne$\times$year exposure of XENON1T data.
By the XENON collaboration (127 authors). In internal XENON review.
Stockholm University, Faculty of Science, Department of Physics. Nikhef, Netherlands; University of Amsterdam, Netherlands.
We present first results on the scalar WIMP-pion coupling from 1 t×yr of exposure with the XENON1T experiment. This interaction is generated when the WIMP couples to a virtual pion exchanged between the nucleons in a nucleus. In contrast to most non-relativistic operators, these pion-exchange currents can be coherently enhanced by the total number of nucleons, and therefore may dominate in scenarios where spin-independent WIMP-nucleon interactions are suppressed. Moreover, for natural values of the couplings, they dominate over the spin-dependent channel due to their coherence in the nucleus. Using the signal model of this new WIMP-pion channel, no significant excess is found, leading to an upper limit cross section of $6.4 \times 10^{-46}$ cm$^2$ (90% confidence level) at 30 GeV/$c^2$ WIMP mass.
131 total authors (XENON collaboration, M. Hoferichter, P. Klos, J. Menéndez and A. Schwenk).
When searching for new physics effects, collaborations will often wish to publish upper limits and intervals with a lower confidence level than the threshold they would set to claim an excess or a discovery. However, confidence intervals are typically constructed to provide constant coverage, or probability to contain the true value, with possible overcoverage if the random parameter is discrete. In particular, that means that the confidence interval will contain the 0-signal case with the same frequency as the confidence level. This paper details a modification to the Feldman-Cousins method to allow a different, higher excess reporting significance than the interval confidence level. | CommonCrawl |
National Engineering Physics Institute "MEPhI"
Abstract: In the space of functions of two variables with Hardy–Krause property, new notions of higher-order total variations and Banach spaces of functions of two variables with bounded higher variations are introduced. The connection of these spaces with Sobolev spaces $W^m_1$, $m\in\mathbb N$, is studied. In Sobolev spaces, a wide class of integral functionals with the weak regularization properties and the $H$-property is isolated. It is proved that the application of these functionals in the Tikhonov variational scheme generates for $m\ge3$ the convergence of approximate solutions with respect to the total variation of order $m-3$. The results are naturally extended to the case of functions of $N$ variables.
Keywords: higher-order total variations for functions of several variables, regularization of ill-posed problems. | CommonCrawl |
### Read problems statements in [Hindi](http://www.codechef.com/download/translated/CK101TST/hindi/CAMPON.pdf), [Mandarin Chinese](http://www.codechef.com/download/translated/CK101TST/mandarin/CAMPON.pdf), [Russian](http://www.codechef.com/download/translated/CK101TST/russian/CAMPON.pdf), [Vietnamese](http://www.codechef.com/download/translated/CK101TST/vietnamese/CAMPON.pdf) and [Bengali](http://www.codechef.com/download/translated/CK101TST/bengali/CAMPON.pdf) as well.

The Petrozavodsk camp takes place in about one month. Jafar wants to participate in the camp, but guess what? His coach is Yalalovichik. Yalalovichik is a legendary coach, famous in the history of competitive programming. However, he is only willing to send to the camp students who solve really hard problems on Timus. He started a marathon at the beginning of December. Initially, he said that people who solve 200 or more problems by the 31-st of December may go to the camp. Jafar made a schedule for the next month. For each day, he knows how many problems he is going to solve. The problem is that Yalalovichik is a really moody coach — he may wake up tomorrow and change his decision about the deadline and the number of problems that must be solved by this deadline to qualify for the camp. Jafar has $Q$ such scenarios. Now he wants to know: in each scenario, if he does not change his problem solving schedule, will he go to the camp or not?

### Input
- The first line of the input contains a single integer $T$ denoting the number of test cases. The description of $T$ test cases follows.
- The first line of each test case contains a single integer $D$ - the number of days in Jafar's schedule.
- $D$ lines follow. For each $i$ ($1 \le i \le D$), the $i$-th of these lines contains two space-separated integers $d_i$ and $p_i$ denoting that Jafar will solve $p_i$ problems on day $d_i$.
- The next line contains a single integer $Q$ denoting the number of scenarios Jafar considers.
- $Q$ lines follow. For each $i$ ($1 \le i \le Q$), the $i$-th of these lines contains two space-separated integers $dead_i$ and $req_i$ denoting a scenario where Yalalovichik decides that students who solve $req_i$ problems by day $dead_i$ (inclusive) will go to the camp.

### Output
For each scenario, print a single line containing the string `"Go Camp"` if Jafar is going to the camp or `"Go Sleep"` otherwise (without quotes).

### Constraints
- $1 \le T \le 100$
- $1 \le D \le 31$
- $1 \le d_i \le 31$ for each valid $i$
- $1 \le p_i \le 100$ for each valid $i$
- $d_1, d_2, \ldots, d_D$ are pairwise distinct
- $1 \le Q \le 100$
- $1 \le dead_i \le 31$ for each valid $i$
- $1 \le req_i \le 5,000$ for each valid $i$

### Example Input
```
1
3
10 5
14 4
31 1
2
9 2
15 7
```

### Example Output
```
Go Sleep
Go Camp
```

### Explanation
**Example case 1:**
- By the end of day $9$, Jafar will not have any problems solved.
- By the end of day $15$, Jafar will have $9$ problems solved, which is enough to go to the camp, since he needs at least $7$ problems.
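A sketch of one straightforward solution (my code, not an official editorial): accumulate, for each day, the total number of problems solved by the end of that day, then answer each scenario with a single lookup.

```python
import sys

def main():
    data = sys.stdin.read().split()
    idx = 0
    t = int(data[idx]); idx += 1
    out = []
    for _ in range(t):
        d = int(data[idx]); idx += 1
        solved_on = [0] * 32              # problems solved on each day 1..31
        for _ in range(d):
            day, p = int(data[idx]), int(data[idx + 1]); idx += 2
            solved_on[day] += p
        total_by = [0] * 32               # cumulative total by the end of each day
        for day in range(1, 32):
            total_by[day] = total_by[day - 1] + solved_on[day]
        q = int(data[idx]); idx += 1
        for _ in range(q):
            dead, req = int(data[idx]), int(data[idx + 1]); idx += 2
            out.append("Go Camp" if total_by[dead] >= req else "Go Sleep")
    print("\n".join(out))

if __name__ == "__main__":
    main()
```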
Tool to decrypt / encrypt using Base 36 (Alphanumeric) Cipher. The base 36 is the ideal basis for encoding any alphanumeric string by a number (and vice versa) because it uses the usual 36 characters (26 letters and 10 digits).
The base 36 cipher uses the principle of an arithmetic base change (conversion from base 36 to base 10). The words are considered to be written in base 36, using as its 36 alphanumeric symbols the 26 letters of the alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZ and the 10 digits 0123456789, and are then converted into a number in base 10.
Decryption of base 36 consists of converting numbers from base 10 back to base 36.
Example: $ 527198 = 11 \times 36^3 + 10 \times 36^2 + 28 \times 36^1 + 14 \times 36^0 $ so [11,10,28,14] in base 36 and 11=B, 10=A, 28=S, 14=E. The plain message is BASE.
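A minimal sketch of the encoding and decoding (my code, not dCode's implementation):

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # digit values 0-9, then A=10 ... Z=35

def encrypt(word: str) -> int:
    value = 0
    for ch in word.upper():
        value = value * 36 + ALPHABET.index(ch)
    return value

def decrypt(number: int) -> str:
    if number == 0:
        return "0"
    digits = []
    while number:
        number, r = divmod(number, 36)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

print(encrypt("BASE"))    # 527198, matching the example above
print(decrypt(527198))    # BASE
```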
How to recognize a Base 36 ciphertext?
The coded message consists of decimal numbers whose length is proportional to the length of the word.
The same word is coded with the same number, so the numbers corresponding to the common words appear coded several times.
This question is based on a previous one of mine. As Christopher Purcell noticed in his comment, there exists a conjecture (which has a lot of counterexamples) that if you take a pair of ants $(n,n+1)$ apart (of the same colour, facing the same direction) then you will get an oscillating pattern.
Is there any explanation of this phenomenon?
A real matrix $A$ is called copositive if $x^TAx \ge 0$ holds for all $x \in \mathbb R^n_+$. A matrix $A$ is called completely positive if it can be factorized as $A = BB^T$ , where $B$ is an entrywise nonnegative matrix. The concept of copositivity can be traced back to Theodore Motzkin in 1952, and that of complete positivity to Marshal Hall Jr. in 1958. The two classes are related, and both have received considerable attention in the linear algebra community and in the last two decades also in the mathematical optimization community. These matrix classes have important applications in various fields, in which they arise naturally, including mathematical modeling, optimization, dynamical systems and statistics. More applications constantly arise.
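As a toy illustration (my sketch): copositivity can only be refuted, not proven, by sampling the nonnegative orthant, since deciding copositivity is co-NP-hard; the Horn matrix below is a classical copositive example.

```python
import numpy as np

def looks_copositive(A, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.random(A.shape[0])          # a random point of the nonnegative orthant
        if x @ A @ x < 0:
            return False                    # certificate that A is not copositive
    return True                             # evidence only, not a proof

B = np.array([[1.0, 2.0], [0.0, 3.0]])      # entrywise nonnegative factor
A = B @ B.T                                 # completely positive, hence copositive
print(looks_copositive(A))                  # True

H = np.array([[ 1, -1,  1,  1, -1],         # the Horn matrix: copositive, but neither
              [-1,  1, -1,  1,  1],         # positive semidefinite nor entrywise
              [ 1, -1,  1, -1,  1],         # nonnegative
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
print(looks_copositive(H))                  # True (no violation found by sampling)
```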
The workshop brought together people working in various disciplines related to copositivity and complete positivity, in order to discuss these concepts from different viewpoints and to join forces to better understand these difficult but fascinating classes of matrices. | CommonCrawl |
I have a Topological Vector Space (TVS) ($I, \oplus , \otimes $) over $ \mathbb Q$ and I want to uniquely extend its scalar multiplication to $ \mathbb R $, so that it is promoted to a TVS over $ \mathbb R $.
The question is, what additional conditions does $I$ need to fulfill for the promotion to be possible in a unique way?
In the specific problem I am attempting to solve, topologically $I$ is given to be isomorphic to $\mathbb R$. This of course makes it metrizable (and possessing a bunch of other nice properties for that matter) but it is not a metric space, so is it enough? In fact, its promotion is an intermediary step for endowing $I$ with a metric other than the Euclidean (which it could easily inherit through its homeomorphism with $ \mathbb R$).
So my primary interest is in finding out whether homeomorphism to $\mathbb R$ is enough and in what way. Of course if general conditions under which the promotion is possible can be given, without utilizing the homeomorphism to $ \mathbb R$, it would be even better.
is a Uniformly Continuous mapping for every $v \in I$. I expected that this would be enough to allow the unique (uniformly) continuous extension of $ \otimes_v $ to the closure of $ \mathbb Q$ which is $ \mathbb R $ for every $v \in I$, which is adequate for my purposes.
Unfortunately, in my search, all I have managed to come across is the possibility of (uniquely and continuously) extending a Uniformly Continuous mapping from a subset to its closure, between metric spaces.
Are there more general conditions which allow for such an extension?
In particular, is metrizability (or any other topological property which $I$ inherits from $ \mathbb R $) enough for the extension?
Today is the release of version 1.0 of bfs, a fully-compatible* drop-in replacement for the UNIX find command. I thought this would be a good occasion to write more about its implementation. This post will talk about how I parse the command line.
Due to the short-circuiting behaviour of -a/&&, this only prints files and directories, skipping symbolic links, device nodes, etc.
without understanding how find will interpret it as an expression. Plenty of invalid bug reports have been made about this over the years. In fact, the find command line syntax was deeply confusing to me before I wrote bfs; I hope not everyone has to implement find from scratch just to understand it.
Nesting an option in the middle of an expression can be confusing, so GNU find warns unless you put them at the beginning.
(which is the same as ... -a -print, of course).
$ find -type f .
find: paths must precede expression: .
bfs does away with this particular restriction, because I hate it when a computer knows what I want to do but refuses to do it.
bfs uses a recursive descent parser to parse the command line, where each nonterminal symbol gets a function that parses it, e.g. parse_clause(). These functions recursively call other parsing functions like parse_term(), resulting in code that is structured very much like the grammar itself.
but naïvely that will change a left-associative rule into a right-associative one (terms like a -o b -o c will group like a -o (b -o c) instead of (a -o b) -o c). Care must be taken to parse right-recursively, but build the expression tree left-associatively.
These are all the tokens that may directly follow a TERM, and cannot be the first token of a FACTOR. Not all grammars can be parsed with a single such token of lookahead—the ones that can are known as LL(1).
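As a sketch of the approach described above (my Python toy, not bfs's actual C implementation; the literal set and argument handling here are simplified assumptions):

```python
# Grammar, roughly:
#   EXPR   -> CLAUSE ( '-o' CLAUSE )*
#   CLAUSE -> TERM ( ['-a'] TERM )*          juxtaposition means an implicit -a
#   TERM   -> '!' TERM | '(' EXPR ')' | LITERAL [ARG]
LITERALS_WITH_ARG = {'-type', '-name', '-newer'}   # toy subset of find's literals

def parse_expr(tokens, pos=0):
    node, pos = parse_clause(tokens, pos)
    while pos < len(tokens) and tokens[pos] in ('-o', '-or'):
        rhs, pos = parse_clause(tokens, pos + 1)
        node = ('OR', node, rhs)             # loop, so a -o b -o c groups as ((a OR b) OR c)
    return node, pos

def parse_clause(tokens, pos):
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] not in ('-o', '-or', ')'):
        if tokens[pos] in ('-a', '-and'):
            pos += 1                         # explicit -a; otherwise it is implicit
        rhs, pos = parse_term(tokens, pos)
        node = ('AND', node, rhs)
    return node, pos

def parse_term(tokens, pos):
    tok = tokens[pos]
    if tok in ('!', '-not'):
        node, pos = parse_term(tokens, pos + 1)
        return ('NOT', node), pos
    if tok == '(':
        node, pos = parse_expr(tokens, pos + 1)
        assert tokens[pos] == ')', "expected ')'"
        return node, pos + 1
    if tok in LITERALS_WITH_ARG:
        return ('PRED', tok, tokens[pos + 1]), pos + 2
    return ('PRED', tok), pos + 1

print(parse_expr('-type f -o -type d -o -print'.split())[0])
print(parse_expr('( -type f -o -type d ) -print'.split())[0])
```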
(A code listing for the token-classification logic appears here in the original post; its surviving comments note that `--` is simply ignored, that once the expression has started any token beginning with `-` is treated as a flag/predicate, and that paths are only accepted as such at the beginning of the command line.)
where \) and , are treated as paths because they come before the expression starts. To maintain 100% compatibility, bfs supports these weird filenames too, but only before it sees any non-path, non-flag tokens. parser_advance() updates state->expr_started based on the types of tokens it sees.
The table contains entries with a function pointer to the parsing function for each literal, and up to two integer arguments that get passed along to it. The extra arguments let one function handle multiple similar literals; for example parse_type() handles -type and -xtype. The prefix flag controls whether extra characters after the literal should be accepted; this is used by a couple of literals like -O and -newerXY.
error: Unknown argument '-dikkiq'; did you mean '-follow'?
This also plays into a nice analogy between Boolean algebra and "regular" algebra, where "or" is like addition and "and" is like multiplication. Just like we can write \(x \, y\) instead of \(x \times y\), we can omit the explicit -a.
Actually, as all of find's binary operators are associative, this doesn't really matter. But it's still a useful technique to know. | CommonCrawl |
devices to cleverly condense your text.
unlikely to be on review boards.
adverbs and prepositions, for instance.
* Elimination of vowels and consonants that seem "obvious enough".
* Wide-scale deletion of mindless drivel.
* Use of the "you can guess the rest" syntactic construct.
Turn a 200 page thesis into an 8 page conference paper ($\alpha = .04$)!
foreign air about their papers! | CommonCrawl |
In cluster physics, a single-particle potential determining the microscopic part of the total energy of a collective configuration is necessary to calculate the shell and pairing effects. In this paper we investigate the properties of the Riesz fractional integrals and compare their properties with the standard Coulomb and Yukawa potentials commonly used. It is demonstrated that Riesz potentials may serve as a promising extension of standard potentials and may be reckoned as a smooth transition from Coulomb to Yukawa-like potentials, depending on the fractional parameter $\alpha$. For the macroscopic part of the total energy the Riesz potentials treat the Coulomb, symmetry and pairing contributions from a generalized point of view, since they turn out to be similar realizations of the same fractional integral at distinct $\alpha$ values.
Infinite finitely generated subgroups of the Gupta–Sidki 3-group $G$ are abstractly commensurable with $G$ or $G \times G$. As a consequence, we show that $G$ is subgroup separable and from this it follows that its membership problem is solvable.
We also obtain a characterization of the finite subgroups of $G$ and establish an analogue for the Grigorchuk group.
Curious collective behaviors are all around us from the atomic to the astronomical scale. For example, how do defects in crystals (Plasticity project) organize themselves into sharp, wall-like structures when left to their own devices? How galaxies form the neat spiral shapes (Spiral galaxy) appears to be an open question still (says the wiki). On smaller scales, flocks of birds create very cool patterns such as those found in starlings (movie below). How do they decide which direction to fly? How is information transmitted from bird to bird?
On the human scale, how do marching bands work? What is the nature of the intricate patterns that a marching band makes as they perform a halftime show? Are they only moving relative to one another and memorized separation vectors, or have they memorized specific positions on the field and when to move between them? Is there a set of measurements that you could perform to determine which of these methods or combination of methods they use? I presume that these positions are determined prior to the performance (otherwise super kudos to them), meaning that there could be no interactions between the performers and they could still make these impressive patterns. What would halftime look like if they had no prior knowledge and were simply told to make the shape of a pterodactyl? I bet that would not go over very well.
Many of these collective behaviors are highlighted in this compilation video, which I highly encourage you to watch.
For these four aspects, we wrote down a model with four forces on each individual, in the same order that they are listed above. During a simulation, we calculate the total force on each individual $i$ using the forces below and then integrate these forces to see how the crowd as a whole behaves.
These forces are not novel, each as has been used in many situations before. However, if we split the parameters for these particles into two groups, we find that the behaviors that are accessible are quite surprising. In particular, we can make two groups called active and passive moshers which are distinguished by $\alpha$ and $\beta$. Active moshers flock and run around ($\alpha \ne 0$ and $\beta \ne 0$) whereas passive ones don't ($\alpha = \beta = 0$).
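The force equations referenced above did not survive extraction; the following is a schematic sketch (mine, with assumed illustrative parameter values rather than the paper's exact functional forms) of how such a four-force update could look:

```python
import numpy as np

N, dt = 200, 0.05
alpha, beta, noise, v0, r0 = 1.0, 1.0, 0.2, 1.0, 1.0   # assumed illustrative values
active = np.random.rand(N) < 0.3          # active moshers have alpha, beta != 0
pos = np.random.rand(N, 2) * 30
vel = np.random.randn(N, 2) * 0.1

def forces(pos, vel):
    F = np.zeros_like(pos)
    # 1. Soft-core repulsion between overlapping neighbours.
    d = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(d, axis=-1) + np.eye(N)       # eye avoids self-division
    overlap = np.clip(2 * r0 - dist, 0, None)
    F += (overlap[..., None] * d / dist[..., None]).sum(axis=1)
    # 2. Self-propulsion toward a preferred speed v0 (active moshers only).
    speed = np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9
    F += np.where(active[:, None], alpha * (v0 - speed) * vel / speed, 0.0)
    # 3. Flocking: align with the average velocity of nearby individuals (active only).
    near = dist < 4 * r0
    avg_vel = (near[..., None] * vel[None, :, :]).sum(axis=1) / near.sum(axis=1, keepdims=True)
    F += np.where(active[:, None], beta * (avg_vel - vel), 0.0)
    # 4. Random noise (active only).
    F += np.where(active[:, None], noise * np.random.randn(N, 2), 0.0)
    return F

for _ in range(100):                       # simple Euler integration of the dynamics
    vel += forces(pos, vel) * dt
    pos += vel * dt
```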
If you thought that the initial conditions of starting off in a circle were a bit contrived, then you'd be right. We did too. But it turned out that starting with the populations mixed led to a spontaneous self-segregation! After the circle formed, a mosh pit or circle pit would form anyway. This hints that these dynamical structures are actually stable, which was supported by the fact that even extremely large pits did not dissolve after a very long time. Below is the largest circle pit we simulated (~100k participants). The red particles are active moshers while the black are passive. The black particles are shaded gray according to the force that they feel, thus labeling grain boundaries in the crowd.
In the second movie, you can watch the segregation take place in a system of 100k participants. It's a rather long movie so feel free to fast forward and look at several different states. | CommonCrawl |
Abstract: We introduce a new computable invariant for strong shift equivalence of shifts of finite type. The invariant is based on an invariant introduced by Trow, Boyle, and Marcus, but has the advantage of being readily computable.
We summarize briefly a large-scale numerical experiment aimed at deciding strong shift equivalence for shifts of finite type given by irreducible $2\times 2$-matrices with entry sum less than 25, and give examples illustrating the power of the new invariant, i.e., examples where the new invariant can disprove strong shift equivalence whereas the other invariants that we use cannot.
Let us say there is a program such that if you give it a partially filled Sudoku of any size, it gives you the corresponding completed Sudoku.
Can you treat this program as a black box and use it to solve TSP? I mean, is there a way to represent a TSP instance as a partially filled Sudoku, so that if I give you the answer to that Sudoku, you can tell the solution of the TSP in polynomial time?
If yes, how? How do you represent TSP as a partially filled Sudoku and interpret the corresponding filled Sudoku to obtain the result?
For 9x9 Sudoku, no. It is finite so can be solved in $O(1)$ time.
But if you had a solver for $n^2 \times n^2$ Sudoku, that worked for all $n$ and all possible partial boards, and ran in polynomial time, then yes, that could be used to solve TSP in polynomial time, as completing a $n^2 \times n^2$ Sudoku is NP-complete.
The proof of NP-completeness works by reducing from some NP-complete problem R to Sudoku; then because R is NP-complete, you can reduce from TSP to R (that follows from the definition of NP-completeness); and chaining those reductions gives you a way to use the Sudoku solver to solve TSP.
It is indeed possible to use a general Sudoku solver to solve instances of TSP, and if this solver takes polynomial time then the whole process will as well (in complexity terminology, there is a polynomial-time reduction from TSP to Sudoku). This is because Sudoku is NP-complete and TSP is in NP. But as is usually the case in this area, looking at the details of the reduction isn't particularly illuminating. If you want, you can piece it together by using the simple reduction from Latin square completion to Sudoku here, the reduction from triangulating uniform tripartite graphs to Latin square completion here, the reduction from 3SAT to triangulation here, and a formulation of TSP as a 3SAT problem. However, if you want to understand the idea behind reducing from Sudoku to TSP I think you would be better off studying Cook's theorem (showing that SAT is NP-complete) and a couple of simple reductions from 3SAT (e.g. to 3-dimensional matching) and being satisfied in the knowledge that the TSP-Sudoku reduction is just the same kind of thing but longer and more fiddly.
Can Euclidean TSP be exactly solved in time better than (sym)metric TSP?
How can we design an efficient warehouse management program?
Is this an instance of a well-known problem?
Is any sudoku solver an SAT solver? | CommonCrawl |
"Beyond the classical distance-redshift test: cross-correlating redshift-free standard candles and sirens with redshift surveys."
Suvodip Mukherjee & Benjamin D. Wandelt.
"A year in the life of GW170817: the rise and fall of a structured jet from a binary neutron star merger."
E. Troja, H. van Eerten et al.
"Internal gas models and central black hole in 47 Tucanae using millisecond pulsars."
"Chiral fermion asymmetry in high-energy plasma simulations."
"Confirmation Of Two Galactic Supernova Remnant Candidates Discovered By THOR."
"On-the-Fly Mapping of New Pulsars."
Joseph K. Swiggum & Peter A. Gentile.
"Cosmic String Loop Collapse in Full General Relativity."
"Relativistic parameterizations of neutron matter and implications for neutron stars."
"Constraining the Neutron Star Radius with Joint Gravitational-Wave and Short Gamma-Ray Burst Observations of Neutron Star-Black Hole Coalescing Binaries."
"The Evolution of X-ray Bursts in the "Bursting Pulsar" GRO J1744-28."
J. M. C. Court et al.
"The two tails of PSR J2055+2539 as seen by Chandra: analysis of the nebular morphology and pulsar proper motion."
"Constraints on non-linear tides due to $p$-$g$ mode coupling from the neutron-star merger GW170817."
Steven Reyes & Duncan A. Brown.
"Testing Weak Equivalence Principle with Strongly Lensed Cosmic Transients."
H. Yu & F. Y. Wang.
"Post-Newtonian corrections to Toomre's criterion."
"The single-degenerate model for the progenitors of accretion-induced collapse events."
"From Megaparsecs To Milliparsecs: Galaxy Evolution & Supermassive Black Holes with NANOGrav and the ngVLA."
S. R. Taylor & J. Simon.
"Circumbinary discs around merging stellar-mass black holes."
Rebecca G. Martin et al.
"Long-term evolution of RRAT J1819-1458."
Ali Arda Gencali & Unal Ertan.
"Gravitational lensing beyond geometric optics: I. Formalism and observables."
"Observational consequences of structured jets from neutron star mergers in the local Universe."
Nihar Gupte & Imre Bartos.
"Jiamusi Pulsar Observations: II. Scintillations of 10 Pulsars."
P. F. Wang et al.
"Breaking properties of neutron star crust."
D.A. Baiko & A.I. Chugunov.
"NICER Observes the Effects of an X-Ray Burst on the Accretion Environment in Aql X-1."
"NICER Detection of Strong Photospheric Expansion during a Thermonuclear X-Ray Burst from 4U 1820-30."
"Hamiltonians and canonical coordinates for spinning particles in curved space-time."
"Exclusion of standard $\hbar\omega$ gravitons by LIGO observation."
"Towards the Design of Gravitational-Wave Detectors for Probing Neutron-Star Physics."
"Neutron stars and millisecond pulsars in star clusters: implications for the diffuse $\gamma$-radiation from the Galactic Centre."
"X-ray guided gravitational-wave search for binary neutron star merger remnants."
"Super-Knee Cosmic Rays from Galactic Neutron Star Merger Remnants."
Shigeo S. Kimura et al.
"Intermediate-Mass Ratio Inspirals in Galactic Nuclei."
Giacomo Fragione & Nathan Leigh.
"Circularly Polarized EM Radiation from GW Binary Sources."
Soroush Shakeri & Alireza Allahyari.
"Black Holes and Neutron Stars in Nearby Galaxies: Insights from NuSTAR."
"Relativistic Tidal Disruption and Nuclear Ignition of White Dwarf Stars by Intermediate Mass Black Holes."
"Binary neutron star and short gamma-ray burst simulations in light of GW170817."
"Strong constraints on clustered primordial black holes as dark matter."
"On the Amplitude and Stokes Parameters of a Stochastic Gravitational-Wave Background."
"Classifying X-ray Binaries: A Probabilistic Approach."
"Fast neutrino flavor conversion: roles of dense matter and spectrum crossing."
Sajad Abbar & Huaiyu Duan.
"Hydrodynamical Neutron-star Kicks in Electron-capture Supernovae and Implications for the CRAB Supernova."
Alexandra Gessner & Hans-Thomas Janka.
"Magnetic helicity evolution in a neutron star accounting for the Adler-Bell-Jackiw anomaly."
Maxim Dvornikov & Victor B. Semikoz.
"Gravitational Radiation From Pulsar Creation."
Leonard S. Kisslinger et al.
"Single-pulse classifier for the LOFAR Tied-Array All-sky Survey."
"Are fast radio bursts the most likely electromagnetic counterpart of neutron star mergers resulting in prompt collapse?."
Vasileios Paschalidis & Milton Ruiz.
"A lesson from GW170817: most neutron star mergers result in tightly collimated successful GRB jets."
"Supernova Nebular Spectroscopy Suggests a Hybrid Envelope-Stripping Mechanism for Massive Stars."
"Designs for next generation CMB survey strategies from Chile."
Jason R. Stevens et al.
"Hypernuclear stars from relativistic Hartree-Fock density functional theory."
Jia Jie Li et al.
"Synchrotron maser from weakly magnetised neutron stars as the emission mechanism of fast radio bursts."
Killian Long & Asaf Pe'er.
"Formation of Relativistic Axion Stars."
James Y. Widdicombe et al.
"Search for sub-solar mass ultracompact binaries in Advanced LIGO's first observing run."
"Methods for the detection of gravitational waves from sub-solar mass ultracompact binaries."
"The Next-Generation Very Large Array: Supermassive Black Hole Pairs and Binaries."
"Neutron star pulse profiles in scalar-tensor theories of gravity."
Hector O. Silva & Nicolas Yunes.
"NICER Discovers the Ultracompact Orbit of the Accreting Millisecond Pulsar IGR J17062-6143."
Tod E. Strohmayer et al.
"Binary Black Hole Mergers from Globular Clusters: the Impact of Globular Cluster Properties."
"Axion star collisions with black holes and neutron stars in full 3D numerical relativity."
"Neutron star -- axion star collisions in the light of multi-messenger astronomy."
"Goldstone Boson Emission From Nucleon Cooper Pairing in Neutron Stars and Constraints on the Higgs Portal Models."
"Effect of background magnetic field on the normal modes of conformal dissipative chiral hydro and a novel mechanism for explaining pulsar kicks."
Arun Kumar Pandey & Manu George.
"Stochastic gravitational-wave background from spin loss of black holes."
Xi-Long Fan & Yan-Bei Chen.
"Progenitors of gravitational wave mergers: Binary evolution with the stellar grid-based code ComBinE."
Matthias U. Kruckow et al.
"The impact of vector resonant relaxation on the evolution of binaries near a massive black hole: implications for gravitational wave sources."
Adrian S. Hamers et al.
"Trans-Ejecta High-Energy Neutrino Emission from Binary Neutron Star Mergers."
"Gamma-ray emission from high Galactic latitude globular clusters."
Sheridan J. Lloyd et al.
"Chiral symmetry restoration by parity doubling and the structure of neutron stars."
"Relativistic charge solitons created due to nonlinear Landau damping: A candidate for explaining coherent radio emission in pulsars."
"PALFA Single-Pulse Pipeline: New Pulsars, Rotating Radio Transients and a Candidate Fast Radio Burst."
"Investigating the Structure of Vela X."
"The Enigmatic compact radio source coincident with the energetic X-ray pulsar PSR J1813$-$1749 and HESS J1813$-$178."
Sergio A. Dzib et al.
"A Review of Compact Interferometers."
"Time evolution of the X-ray and gamma-ray fluxes of the Crab pulsar."
L. L. Yan et al.
"Electromagnetic Emission from newly-born Magnetar Spin-Down by Gravitational-Wave and Magnetic Dipole Radiations."
"NICER Discovers mHz Oscillations in the "Clocked" Burster GS 1826-238."
"Non-linear relativistic contributions to the cosmological weak-lensing convergence."
"Dark energy from $\alpha$-attractors: phenomenology and observational constraints."
"A Decline in the X-ray through Radio Emission from GW170817 Continues to Support an Off-Axis Structured Jet."
K. D. Alexander et al.
"Black Hole and Neutron Star Binary Mergers in Triple Systems: Merger Fraction and Spin-Orbit Misalignment."
Bin Liu & Dong Lai.
"Noise-marginalized optimal statistic: A robust hybrid frequentist-Bayesian statistic for the stochastic gravitational-wave background in pulsar timing arrays."
Sarah J. Vigeland et al.
"Super-Eddington winds from Type I X-ray bursts."
Hang Yu & Nevin N. Weinberg.
"Relating Braking Indices of Young Pulsars to the Dynamics of Superfluid Cores."
H. O. Oliveira et al.
"On the magnetic field inside the solar circle of the Galaxy: On the possibility of investigation some of characteristics of the interstellar medium with using of pulsars with large Faraday rotation values."
"Stochastic Chemical Evolution of Galactic Sub-halos and the Origin of r-Process Elements."
"Formation of hot subdwarf B stars with neutron star components."
Carlos O. Lousto & James Healy.
Kevin J. Kelly & Pedro A. N. Machado.
J. C. Bray & J. J. Eldridge.
Gavin P Lamb et al. | CommonCrawl |
This site is a one-stop shop for the work of the single ion channel group at UCL. The site is owned by Professor Lucia Sivilotti (AJ Clark professor of pharmacology at UCL), who has run the group since the retirement of David Colquhoun in 2004. The original pages were on the UCL server, but they have been moved to this site largely because corporatisation of the UCL server makes it hard to update the pages there.
Choose a page via tabs above the header, or via links on right sidebar.
Here's a short bit of movie that shows one single molecule doing its thing. It is a video of an oscilloscope screen.
The movie shows channels opened by a low concentration of acetylcholine. Openings are the downward deflections. Each opening has much the same amplitude, about 6 pA, but the durations of open and shut times are very variable, as expected for a single molecule. Analysis of the open and shut times can tell us the rates of transition between different states of the channel, and hence the equilibrium constants. Analysis of records like this can separate the binding steps from the opening steps, and hence solve the binding-gating problem (the affinity-efficacy problem).
The details. Human recombinant muscle type nicotinic receptor, $\alpha_2 \beta \epsilon \delta$, expressed in HEK cell, –100 mV. 100 nM ACh.
The last summer workshop, Understanding Ion Channel currents in terms of mechanisms, was on 8 – 12 July 2013. Unfortunately we shall not be able to run the course in 2014: more details on the course page.
To see what's in the workshop and to register for it, check the workshop page. There are pictures from previous courses. | CommonCrawl |
Why are monopoles resulting from electroweak symmetry breaking allowed?
I understand (I think) that for a magnetic monopole to exist as the result of a gauge group $G$ being spontaneously broken to a subgroup $H$ by the Higgs mechanism, certain criteria must be fulfilled. One of these is that the second homotopy group must be non-trivial, which I believe means that the resultant vacuum manifold must be non-trivial.
Your question is answered in the paper Monopoles in Weinberg-Salam Model by Cho and Maison, from which I think the quotes are taken. What the gauged $CP^1$ model is exactly is not really relevant to the answer, which is purely mathematical (it is a type of model to which the authors reduce the bosonic sector of the standard Weinberg-Salam model with extra hypercharge added).
"The basis for this "non-existence theorem" is, of course, that with the spontaneous symmetry breaking the quotient space $SU(2)\times U(1)/U(1)$ allows no non-trivial second homotopy. This has led many people to conclude that there is no topological structure in the Weinberg-Salam model which can accommodate a magnetic monopole... In the following we establish the existence of a new type of monopole and dyon solutions in the standard Weinberg-Salam model, and clarify the topological origin of the magnetic charge.
[...] So the above ansatz describes a most general spherically symmetric ansatz of a $SU(2)\times U(1)$ dyon. Here we emphasize the importance of the non-trivial $U(1)$ degrees of freedom to make the ansatz spherically symmetric. Without the extra $U(1)$ the Higgs doublet does not allow a spherically symmetric ansatz. This is because the spherical symmetry for the gauge field involves the embedding of the radial isotropy group $SO(2)$ into the gauge group that requires the Higgs field to be invariant under the $U(1)$ subgroup of $SU(2)$. This is possible with a Higgs triplet, but not with a Higgs doublet. In fact, in the absence of the hypercharge $U(1)$ degrees of freedom, the above ansatz describes the $SU(2)$ sphaleron which is not spherically symmetric. The situation changes with the inclusion of the extra hypercharge $U(1)$ in the standard model, which can compensate the action of the $U(1)$ subgroup of $SU(2)$ on the Higgs field."
What are the Higgs masses for $SU(2) \times U(1)$ goes to $U(1)$ symmetry breaking with a complex triplet?
Is the right-handed electron really an $SU(2)$ singlet?
Is the electroweak $SU(2)$ gauge symmetry an exact symmetry in Standard Model before spontaneous symmetry breaking?
Why did David Tong say that the global topological $U(1)$ symmetry is unbroken in Higgs phase?
If monopoles are excised points in a $U(1)$ bundle, how are they affected by other charges? | CommonCrawl |
Abstract: A study is made of the asymptotic behavior of the solution of the scattering problem for a multidimensional Schrödinger equation as $x\to\infty$. The potential is assumed to vary smoothly and decrease more rapidly than the Coulomb potential. The asymptotic behavior of the solution of the scattering problem corresponding to the plane wave $e^{ikx}$ contains special functions in the neighborhood of the direction of $k$. The singularities of the scattering amplitude are described; these also arise only in this direction.
Farmer John is building a nicely-landscaped garden, and needs to move a large amount of dirt in the process.
The garden consists of a sequence of $N$ flowerbeds ($1 \leq N \leq 100,000$), where flowerbed $i$ initially contains $A_i$ units of dirt. Farmer John would like to re-landscape the garden so that each flowerbed $i$ instead contains $B_i$ units of dirt. The $A_i$'s and $B_i$'s are all integers in the range $0 \ldots 10$.
To landscape the garden, Farmer John has several options: he can purchase one unit of dirt and place it in a flowerbed of his choice for $X$ units of money. He can remove one unit of dirt from a flowerbed of his choice and have it shipped away for $Y$ units of money. He can also transport one unit of dirt from flowerbed $i$ to flowerbed $j$ at a cost of $Z$ times $|i-j|$. Please compute the minimum total cost for Farmer John to complete his landscaping project.
The first line of input contains $N$, $X$, $Y$, and $Z$ ($0 \leq X, Y \le 10^8; 0 \le Z \leq 1000$). Line $i+1$ contains the integers $A_i$ and $B_i$.
Please print the minimum total cost FJ needs to spend on landscaping.
Note that this problem has been asked in a previous USACO contest, at the silver level; however, the limits in the present version have been raised considerably, so one should not expect many points from the solution to the previous, easier version. | CommonCrawl |
deleted material, e.g., by typing `I$}'.
! Argument of \d has an extra }.
! Argument of \v has an extra }.
! Please use \mathaccent for accents in math mode.
I'm changing \accent to \mathaccent here; wish me luck.
! Argument of \u has an extra }.
! You can't use `macro parameter character #' in math mode.
Output written on header-fr.dvi (1 page, 964 bytes). | CommonCrawl |
I am sorry for this elementary question, but I could not figure out a rigorous proof of why the Lebesgue integral of any function over a null set is zero.
Write a simple function as $s = \sum_j c_j \chi_{A_j}$ with measurable sets $A_j$. If $\mu(E) = 0$, then $\mu(E \cap A_j) = 0$ for all $j$. Thus $\int_E s \,d\mu = \sum_j c_j \,\mu(E \cap A_j) = 0$.
The Lebesgue integral of a nonnegative function $f$ is the supremum of integrals of all simple functions $s$ such that $0 \le s \le f$. Since all of these integrals are $0$, the supremum is $0$ too.
Since every real function $f$ can be written as $f = f^+ - f^-$ where $f^+$ and $f^-$ are both nonnegative, we have $\int_E f \, d\mu = 0$ too. The general result follows from the fact that every complex function $f$ can be written as $f = u + i v$ where $u$ and $v$ are real.
Is there a non-null subset where $f$ has the sign of its integral?
What is the Lebesgue Integral of $f(x) = x$ over the Smith-Volterra-Cantor Set?
When is the image of a null set a null set?
Why does this function preserve measure of null sets?
How to define $0\cdot\infty$ in Lebesgue's integral? | CommonCrawl |
Two of our 10 M31 fields mosaiced together; color based upon our BVR images.
The galaxies of the Local Group serve as our laboratories for understanding star formation and stellar evolution in differing environments: the galaxies currently active in star-formation in the Local Group cover a factor of 10 in metallicity and span a range of Hubble types from dwarf spheroidal to Irr to Sb and Sc. We are conducting a uniform survey (UBVRI, Halpha, [SII], and [OIII]) of nearby galaxies selected on the basis of current star formation. In the Local Group, this sample includes M31, M33, NGC 6822, IC 1613, IC 10, WLM, Pegasus, and Phoenix; we exclude the Milky Way and Magellanic Clouds, which are being surveyed separately by several groups. We also include Sextans A and Sextans B, located just beyond the Local Group (van den Bergh 1999a, 1999b). Using the new, wide-field Mosaic cameras, we are producing catalogs of UBVRI photometry of roughly a million stars, using Halpha, [SII], and [OIII] to distinguish bona fide stellar members from compact HII regions. This on-line catalog will answer a number of scientific questions directly, but we believe that the real strength of this survey will be in the science we will enable with 8-10-m class telescopes and the capability of follow-up spectroscopy. In addition, the calibrated images will provide a detailed, uniform atlas of both the stellar and ionized gas components of these galaxies, which will certainly prove useful for a host of other projects.
In addition.... Mosaic of our 10 M31 frames! If that takes too long for you to download, try this smaller version. Put together by K. Olsen.
Mosaic of our 3 M33 frames put together by K. Olsen.
We have now released all of our images: 20 fields (M31: 10 fields, M33: 3 fields, plus IC10, N6822, WLM, Phoenix, Pegasus, Sextans A, and Sextans B.) The stacked images can be downloaded either from ftp://ftp.lowell.edu/pub/massey/lgsurvey or from http://www.archive.noao.edu/nsa/ The individual images can be obtained from http://www.archive.noao.edu/nsa/ Tell them Phil sent you.
Our UBVRI and narrow-band photometry is complete. The first paper, ``A Survey of Local Group Galaxies Currently Forming Stars. I. UBVRI Photometry of Stars in M31 and M33'', was published in Massey et al. 2006, AJ, 131, 2478.
The second paper, ``...II. UBVRI Photometry of Stars in Seven Dwarfs and a Comparison of the Entire Sample" has now also come out: Massey et al. 2007, AJ, 133, 2393.
PLEASE send me (phil.massey at lowell.edu) a note giving publication information if you use our data or photometry so we can include it when we are bugged by NOAO to give them this information each year.
What about the emission-line data (not covered in the above)? We discuss the calibration and give the final numbers in Massey et al. (2007, AJ, 134, 2474); see Table 2. Note that there is a typo in the table, though: the units for the emission-line filters should be ergs/sec/cm^2, not ergs/sec/cm^2/A.
Here is the final status report to the NOAO survey workshop, April 14, 2003.
Our color-terms: We've measured the broad-band color terms for both Mosaic-N (Massey et al. 2006, AJ, 131, 2478) and Mosaic-S (Massey et al. 2007, AJ, 133, in press).
Click on the link for our finding charts, made from the DSS, and showing the region to be covered by our survey.
NGC6822 See also Shay Holmes' beautiful reductions of our CTIO data.
BVR photometry will allow us to distinguish bona-fide red supergiant stars (RSGs) from red foreground dwarfs, a problem even at high galactic latitudes (see Massey 1998, ApJ, 501, 153 ). This will allow accurate number ratios of blue to red stars (B/R) to be determined as a function of position within these galaxies for comparison with the prediction of stellar evolutionary models. These data will also yield the relative number of RSGs to Wolf-Rayet (WR) stars in these galaxies, an important diagnostic of massive star evolutionary models. (The WR content is known from other surveys, with a new, global study of M31 being conducted concurrently.) Because stellar winds are driven by radiation pressure via highly ionized metals, mass-loss rates depend upon metallicity; such mass-loss is believed to have a controlling effect on the evolution of the most luminous stars, but empirical checks on such stellar evolution tracks are sorely lacking (cf. Maeder \& Conti 1994, ARAA, 32, 227).
Stars with Halpha emission will be readily identified, and the [SII] and [OIII] filters will allow us to distinguish these from compact H~II regions. Such Halpha bright objects will include candidate Luminous Blue Variables (LBVs), originally known as Hubble-Sandage Variables (Hubble \& Sandage 1953, ApJ, 118, 353), and the related high-luminosity B[e] stars. LBVs have luminosities that are at or exceed the Eddington limit, and recent surveys have shown that the detection of such stars by their variability alone may miss a substantial fraction (Massey et al. 1996, ApJ, 469, 629), in accord with the belief that LBVs may be a normal stage in the evolution of the most massive stars. (See Humphreys \& Davidson 1994, PASP, 106, 704 for a review.) However, other recent work suggests that some LBVs are quite isolated from other massive stars (King et al. 2000, AJ, submitted), lending credence to the suggestions that the LBVs are primarily a binary star phenomenon (e.g., Gallagher \& Kenyon 1985, ApJ, 290, 542). Unbiased statistics are needed, and our survey will provide the candidates for follow-up spectroscopic surveys and photometric monitoring programs. These studies are also necessary groundwork in understanding whether LBVs may be used as good distance indicators. These stars are the optically brightest in star-forming spirals and irregulars. Leitherer (1997, in Luminous Blue Variables, ed. A. Nota \& H. Lamers, p. 97) showed that the luminosities of LBVs follow a reasonably well-defined relation when plotted against parameters derived from optical spectra, similar to the relationship found by Kudritzki et al. (1996) and Kudritzki (1997) for O-A supergiants. In addition to LBVs, we expect to find other interesting H$\alpha$-emission objects. The unique object SS433 (Margon 1984, ARAA 22, 507) in our own Milky Way indicates that unexpected and rare objects may be found, but only if we look!
Good UBV photometry is a critical necessity for identifying the most massive stars, but is insufficient by itself to determine the initial mass function via a luminosity function; for this, spectroscopy is needed. A survey such as ours will identify the stars for which spectroscopy with GMOS on Gemini will allow the direct determination of the initial mass function in these nearby systems, for comparison with the Milky Way and Magellanic Clouds.
Color-magnitude and Hess diagrams obtained from these data will be used to estimate the star formation histories in these galaxies. This is most efficaciously done by comparing to theoretically simulated Hess diagrams using quantitative techniques such as the ones used by Tolstoy and Saha (1996, ApJ, 462, 672) and Dolphin (1997, New Astronomy, 2, 379). While some studies can be done with HST data (e.g. Gallagher et al. 1998, AJ, 115, 1869; Tolstoy et al. 1998, AJ, 116, 1244; Hodge et al.\ 1999b, ApJ, 521, 577), the galaxy-wide effects and interconnections that show how star formation proceeded in the distinct morphological or kinematically defined components of the galaxy can only be studied using a survey such as the one we are proposing here. This topic is of utmost importance in reconciling star formation histories inferred from high redshift galaxy counts with that from the fossil record of stars in galaxies in the local universe.
How extended are the halos around galaxies? This question has not been adequately answered, even for our own galaxy, where a census of halo stars (in situ) is severely thwarted by large numbers of indistinguishable nearby disk ``contaminants''. Most techniques, such as those based upon observations of RR Lyrae stars, are severely hampered by selection effects. This question is best answered by looking to nearby external galaxies. Consider M31. By counting stars delineated by colors and magnitudes corresponding to old red giants (which effectively also rejects contamination from unresolved background galaxies -- though in seeing better than 1 arc-sec one can also reject them on the basis of image morphology) the density of the halo can be traced out to 100 kpc in projection assuming an $R^{-3.5}$ law. Is there a tidal cut off closer in?
From the V- and I-band images, it will be possible to map the ages and metallicities of the older (T > 1 Gyr) stellar subsystems in the outer regions of these galaxies by comparing color-magnitude diagrams with globular cluster fiducial sequences (Da Costa and Armandroff 1990 AJ, 100, 162). The properties of the older populations can then be compared to those of the younger stars to clarify the star formation and chemical enrichment history of each galaxy.
Although our focus is on the resolved stellar content of these galaxies, we would be remiss not to take advantage of what we can learn about the ISM and its interactions with the stellar component of these galaxies. All of these galaxies have measurable amounts of HI gas. Some gas will be ionized from hot stars within the galaxies, some by intergalactic UV, and some by shocks. The proposed deep, large-format Halpha, [SII], and [OIII] images will define the extent of the ionized emission, which may extend well beyond the galaxies in bubbles, filaments, and chimneys (Howk and Savage 1997 AJ, 114, 2463; Rand 1998 ApJ, 501, 137). Our survey will allow study of both the diffuse HII and the discrete HII regions, which in some cases, such as for M33 (see Hodge et al. 1999a PASP, 111, 685), are only partially identified in the outer parts of the galaxies. The [SII] exposures will be effective in detecting supernova remnants, while the [OIII] exposures are suitable for seeking out the higher excitation gas, such as planetary nebulae and hot HII regions. Images of these galaxies in these three emission line filters can also serve as a good comparison to those being obtained by the Magellanic Cloud Emission-line Survey currently underway at Tololo (Smith 1999, in New Views of the Magellanic Clouds, p. 28). Follow-up spectra would allow modeling of the relative importance of ionization mechanisms in galaxies of different mass, star formation activity level, and metallicity.
This material is based upon work supported by the National Science Foundation under Grant No. 0093060. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
I'm currently trying to understand the Wilcoxon rank test / Mann-Whitney-Wilcoxon test, but it has got me kind of confused. As far as I understand, it is used to test whether the null hypothesis $F_X = F_Y$ can be rejected at level $\alpha$, where $F_X$ is the cumulative distribution function of the random variable X, and likewise for Y.
Now here's the problem: mostly it is assumed that the cumulative distributions are continuous. The test statistic, the distribution of the rank vector, and so on are derived based on that assumption. In the case where the cumulative distributions are not continuous, I don't know why we apparently can get the same statistics. I hope what I've said so far is right; I'm just very confused at this point. My question here is whether anyone has a good source which can help me understand this problem, since I didn't find any proper sources. Or maybe someone could briefly explain how the test statistic is derived when we have continuous data or ordinal data.
What is the interpretation of (y bar square/variance) in 'Nominal is Best' method of Taguchi Design?
Which statistical test to use? Is the Wilcoxon Signed-Rank Test Appropriate? | CommonCrawl |
Chandra, AK and Sumathi, R (1990) MINDO/3 configuration interaction studies of $\alpha$-cleavage processes in organic photochemistry. In: Journal of Photochemistry and Photobiology A: Chemistry, 52 (2). pp. 213-234.
The $\alpha$-cleavage processes (Norrish type I) in several ketones and thiones are examined. The relative heights of the activation barriers are estimated using MINDO/3 and configuration interaction. Results reveal certain features that are of interest in organic photochemistry. The perpendicular motion of the carbon atom of the chromophore is an important component in the reaction coordinate if the cleavage process takes place in the lowest triplet state. Selectivity of the cleavage of an $\alpha$ bond in a non-symmetric system is determined from the third derivative of energy with respect to the reaction coordinate at the starting point of the reaction.
We consider the minimal non-negative Jacobi operator with $p\times p$ matrix entries. Using the technique of boundary triplets and the corresponding Weyl functions, we describe the Friedrichs and Krein extensions of the minimal Jacobi operator. Moreover, we parametrize the set of all non-negative extensions in terms of boundary conditions.
Thank you for the great work and the flowing updates!
As you're quite familiar with the API, maybe you can help me with a slightly off-topic question. Is there a way to change the SSID of the guest WLAN? As we need to change the password on a daily basis, we'd like to change the SSID as well, e.g. guest20160118 today and guest20160119 tomorrow. If only the password is changed, certain clients are unable to connect because they keep trying the old password.
where $name = ssid and $x_passphrase is the psk. The id is the unique id of the wlan which you will find in the wlan config using my tool.
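For anyone else wanting to script this, here is a minimal sketch of how a daily SSID/passphrase rotation could look in PHP. The class name (unifiapi), the login()/logout() calls and especially the set_wlansettings($wlan_id, $x_passphrase, $name) signature are assumptions on my side based on the description above, so please check them against the class file before using this.

<?php
require 'phpapi/class.unifi.php';   // path to the API class, adjust to your install
require 'config.php';               // assumed to define $controlleruser, $controllerpassword, $controllerurl, $site_id, $controllerversion

$wlan_id        = 'PUT-THE-WLAN-ID-HERE';      // unique id of the guest wlan, taken from the wlan config output
$new_ssid       = 'guest' . date('Ymd');       // e.g. guest20160119
$new_passphrase = 'daily-secret-goes-here';    // rotate/generate this however you like

$unifi = new unifiapi($controlleruser, $controllerpassword, $controllerurl, $site_id, $controllerversion);
if ($unifi->login()) {
    // assumed signature: set_wlansettings($wlan_id, $x_passphrase, $name)
    $unifi->set_wlansettings($wlan_id, $new_passphrase, $new_ssid);
    $unifi->logout();
}
?>

Run it from cron once a day and hand out the new SSID/password pair to your guests.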
Thank you very much. It works like a charm!
Improved readability of the debug output when $debug is set to true in the config file, making it readable in cli mode as well as for html output.
I personally haven't tried it yet; my controller is on AWS. Can I ask a question regarding the hotspot?
Can I make voucher stats, like unused and used vouchers per month, based on your API?
My apologies if I don't correctly understand the API browser tool's uses and its purpose.
sloofmaster's API tool will show you the operators and the vouchers. I highly suggest you download it and play with it to see what you can do. What you choose to do with the information is up to you. You could use the API class and a PHP script to show only what you are looking for. I've not viewed his index.php but I'm sure it will give you some starters.
I've removed any potential sensitive data below but you should get the gist of what is returned.
To call the API, create a custom PHP file and point it to the UniFi API class. The code below contains some snippets from a page I use to get voucher information.
require 'classes/unifi.config.php'; // I use a different PHP class name than sloofmaster does so you will need to change this name.
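To give an idea of what the rest of such a page might look like, here is a rough standalone sketch (the class name unifiapi, the stat_voucher() method, the login()/logout() calls and the 'used' field are assumptions on my part; verify them against the class and the API browser output before relying on this):

<?php
require 'classes/unifi.config.php';   // credentials and controller settings
require 'classes/class.unifi.php';    // the API client class itself

$unifi = new unifiapi($controlleruser, $controllerpassword, $controllerurl, $site_id, $controllerversion);
$unifi->login();

// assumed method returning an array of voucher objects
$vouchers = $unifi->stat_voucher();

$used   = 0;
$unused = 0;
foreach ($vouchers as $voucher) {
    // 'used' is assumed to count how often a voucher has been redeemed
    if (!empty($voucher->used)) {
        $used++;
    } else {
        $unused++;
    }
}

echo 'used vouchers: ' . $used . ', unused vouchers: ' . $unused;

$unifi->logout();
?>

Grouping per month would then just be a matter of bucketing on the voucher creation timestamp before counting.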
Thank you thesohoguy for the explanation and for the simple examples.
I'm just after some assistance with the API Browser.
Uploaded the files to our cpanel webhost, changed the config file, and tried to access the index.php.
It takes a very long time to load, and when it does, no sites are selectable from the drop-down.
I have tried 2 different controllers (different versions, 4.8.12 and 4.8.6), which are on different internet connections, different router types, and different base OSes (Ubuntu vs Windows).
This looks like a connectivity issue between the server you installed the tool on and your controller server; somehow curl cannot connect to the IP address or port. Any firewalls in between that are preventing a connection? The PHP error is just a result of the connectivity problem.
To start off with, I suggest you first check IP connectivity by doing a ping from the server where the tool is installed to the controller host name which you configured in the config file.
As a bit of a follow-up: with the help of @slooffmaster, we worked out that the shared cPanel hosting where I had put the API Browser tool blocks outgoing connections to port 8443 (or perhaps everything besides 80 and 443).
I moved this to a local server, and it works fine.
Thanks to sloofmaster for the assistance, and for creating such a useful tool.
I have successfully implemented this on my local PC with the controller installed on the same PC.
I want to implement this on my Amazon AWS cloud controller. I have some security concerns.
1. Will it cause a security issue?
Security should indeed be a major concern when providing public access to this tool because it exposes credentials/PSKs etc. Therefore, when exposing this tool on a public URL it is good practice to tightly control access and limit it to specific IP addresses, using AWS rules in your case. On top of these controls, when using apache2, you can best also use .htaccess to limit access to specific IP addresses, and maybe add a password as well.
NOTE: currently port forward stats are not visible through the controller dashboard; using the API Browser tool you can at least have a peek at them, or else you can use the function which has now been added to the PHP class.
Is it possible to have it work over the cloud (customer has a UC-CK Cloud Key) for times when you can't forward port 8443?
Let me know please and thanks.
I haven't tried this yet, primarily because I don't have a Cloud Key (don't have an immediate requirement for it myself).
I can see the API has those metrics but can't seem to understand some of them.
Welcome to these forums. The tool itself queries data that is made available through the API. There are some stats but not all might fit your requirements.
My tool was developed primarily to show the raw data that is available through the API. For specific requirements you may need to start writing code in PHP using the PHP API client that comes with the tool.
The Poisson distribution is one of the most important topics in statistics. There is a standard formula for the Poisson distribution, and by using it we can calculate the average values of interest. Here we will show that if Y and Z are independent Poisson random variables with parameters $\lambda_1$ and $\lambda_2$, respectively, then Y+Z has a Poisson distribution with parameter $\lambda_1 + \lambda_2$.
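The page itself does not carry out this computation, so here is a standard sketch (my own wording, using the convolution of the two probability mass functions):

$$P(Y+Z=n)=\sum_{k=0}^{n}P(Y=k)\,P(Z=n-k)=\sum_{k=0}^{n}\frac{e^{-\lambda_1}\lambda_1^{k}}{k!}\cdot\frac{e^{-\lambda_2}\lambda_2^{n-k}}{(n-k)!}=\frac{e^{-(\lambda_1+\lambda_2)}}{n!}\sum_{k=0}^{n}\binom{n}{k}\lambda_1^{k}\lambda_2^{n-k}=\frac{e^{-(\lambda_1+\lambda_2)}(\lambda_1+\lambda_2)^{n}}{n!},$$

which is exactly the Poisson probability mass function with parameter $\lambda_1+\lambda_2$.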
The Poisson distribution is positively skewed for smaller values of the mean. As the mean increases beyond about 5 (generally, not strictly), the distribution becomes more symmetrical.
Read this as "X is a random variable with a Poisson distribution." The parameter is $\mu$ (or $\lambda$); $\mu$ (or $\lambda$) = the mean for the interval of interest. Leah's answering machine receives about six telephone calls between 8 a.m. and 10 a.m.
Splitting (Thinning) of Poisson Processes: Here, we will talk about splitting a Poisson process into two independent Poisson processes. The idea will be better understood if we look at a concrete example.
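The concrete example is missing from this excerpt; a typical one (numbers chosen here purely for illustration) goes like this: suppose customers arrive at a store according to a Poisson process with rate $\lambda = 10$ per hour, and each arriving customer independently pays by card with probability $p = 0.4$ and by cash with probability $0.6$. Then card-paying customers arrive according to a Poisson process with rate $p\lambda = 4$ per hour, cash-paying customers arrive according to a Poisson process with rate $(1-p)\lambda = 6$ per hour, and the two processes are independent.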
In this respect, the exponential distribution is related to the Poisson distribution. Exponential distributions are always defined on the interval $[0,\infty)$. | CommonCrawl |
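To make the connection explicit (a standard one-line argument, not part of the quoted text): if events occur according to a Poisson process with rate $\lambda$, then the waiting time $T$ until the first event satisfies $P(T>t)=P(\text{no events in }[0,t])=e^{-\lambda t}$ for $t\ge 0$, so $T$ is exponentially distributed with parameter $\lambda$; this is also why the exponential distribution lives on $[0,\infty)$.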
I've worked out the T matrix of the tip (position and orientation) with respect to the base frame of a 6 DoF robot for 2 seconds at 1kHz.
So the size of the total T matrix is 4*4*1000.
suppose you calculate the positions of your joints $p_i, i=1\ldots6$ using your homogeneous matrices.
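In case the link between the transforms and the joint positions is unclear, and using my own notation rather than the poster's: if $A_j$ denotes the homogeneous transform of link $j$, then the position of joint $i$ is simply the translation part of the cumulative product, i.e. $T_{0,i}=A_1A_2\cdots A_i$ and $p_i = T_{0,i}(1\!:\!3,\,4)$ (the first three entries of the last column).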
The thing is: if you want to have a smooth animation, it is a bad idea to call the plot command each time, because it takes some time. The better solution is to update the coordinates stored in your figure.
where you have to figure out the correct child of your figure handle f; you can just hop through the structure of f to find the right children.
I often see a situation when professors use words "logic", "mathematical proof" and even prove logically while actually knowing that students are not even familiar with logic itself, i.e. no formal understanding of equivalence, implications, inference rules, etc.
How are students supposed to understand such "proofs"? They may accept as a proof any "intuitively" explained reason why a theorem is true, and never even suspect they were deceived, unless they know the exact definition of an argument and of a true (sound) argument. Is knowing logic always taken for granted, like a prerequisite? Even so, shouldn't the lecturer at least designate some time for explaining basic logic?
In practice, one does have to explain (even to university engineering students) that although A implies B, it need not be the case that B implies A. However, there is a confounding factor, which is that mathematical logic is not what is used in scientific reasoning. As is explained by Polya in one of his books (the one on plausible reasoning) and by V. I. Arnold in some of his essays (he tells a story about his paper being rejected), in scientific contexts, when it is known that A implies B, the truth of B provides evidence for the plausibility of A (this is precisely what Polya calls plausible reasoning). It is therefore incumbent on the teacher of mathematics to distinguish mathematical logic from ordinary reasoning (dare I say common sense reasoning?) and it is generally helpful to do this explicitly rather than implicitly.
In general, one does not need to teach logic formally, but the basic consequences of negations of implications should be explained if they are used, because they are not obvious to many, and they are less so still in contexts in which reasoning is not strictly mathematical (an error often made by mathematicians is to assume that there are no such contexts). This is particularly so if one proves a statement by proving its contrapositive, or by contradiction. The validity of either line of argument is not clear to a novice, and the strategy underlying such an argument needs to be explained to such a student.
A mathematical proof has (among others) the purpose to convince someone of some fact, given some already established facts.
Whether or not a proof is valid does not depend on who presents it. That is one of the key features of math - it does not matter at all if the "professor knows what he does" or if she sounds clever. If I can't help but agree that the arguments clearly show that the new fact follows from known facts, I have to agree to the new fact as well.
I agree that point 3 is somewhat hard to employ without any prerequisite in logic, but the first two points are doable. I will give examples for each.
There is a huge difference between explaining (or understanding) a proof and finding and writing it down. Since the question is asking about a lecturer explaining a proof, I'll not bother with the latter.
The lecturer can explain the bigger picture of the proof: "We take any even number and show that its square must also be even".
The lecturer can remind the students that for even $a$, there exists $k \in \mathbb N$ so that $a = 2k$. Give examples if necessary. This should be working knowledge already. If not, this has to be considered a gap in mathematical basics and not in logic.
The manipulation to express $a^2$ in terms of $k$ can be executed by either students or lecturer, leading to $a^2 = 4k^2$ if done correctly -- regardless of whoever did it.
The last step, showing that the number $4k^2$ is also even because $4k^2 = 2 \cdot k'$ with $k' := 2k^2$ can also be done by the lecturer. This decomposition always works and again does not depend on who does it.
None of these steps requires a "deep" understanding of logic except "simple" implications. But in the end, the students should be able to understand that for any even $a$, also $a^2$ must be even.
It is indeed a completely different story for the students to come up with a similar proof.
Regarding proofs by contradiction: it is important for the students to understand that apart from the cleverly chosen starting fact, each conclusion is right and again does not depend on who presents the reasoning. If one then arrives at a contradiction, the only remaining option is that the premise was wrong.
Let's look at Euclid's proof of the fact that there are infinitely many prime numbers.
Assume that there are finitely many prime numbers. We don't know yet if this is true or false, but we can assume either.
If the students have mathematical background about divisors, they must agree that the product of all primes, increased by one, is prime or has a prime divisor that is not in the list we started with. This argument is somewhat complex, but uses only known facts about divisibility.
The reasoning in point 2 is sound. The only way to resolve the problem that we did not actually start with "all primes" is to conclude that no such finite list of all primes can exist.
To summarize: Understanding proofs does not require formal knowledge of logic. Common sense is enough, together with the attitude that arguments are not valued by authority.
What goes wrong when students interchange "there exists" and "for all" randomly? How to fix this?
How to intuitively explain the role of transistors in boolean logic and switching?
What basic algebra skills and techniques are most important for calculus students to know?
What are non-math majors supposed to get out of an undergraduate calculus class?
How to balance the difficulty level and speed of lectures for students of very different levels? | CommonCrawl |
Thürigen, J. (2015). Discrete quantum geometries and their effective dimension. PhD Thesis.
Abstract: In several approaches towards a quantum theory of gravity, such as group field theory and loop quantum gravity, quantum states and histories of the geometric degrees of freedom turn out to be based on discrete spacetime. The most pressing issue is then how the smooth geometries of general relativity, expressed in terms of suitable geometric observables, arise from such discrete quantum geometries in some semiclassical and continuum limit. In this thesis I tackle the question of suitable observables focusing on the effective dimension of discrete quantum geometries. For this purpose I give a purely combinatorial description of the discrete structures which these geometries have support on. As a side topic, this allows to present an extension of group field theory to cover the combinatorially larger kinematical state space of loop quantum gravity. Moreover, I introduce a discrete calculus for fields on such fundamentally discrete geometries with a particular focus on the Laplacian. This permits to define the effective-dimension observables for quantum geometries. Analysing various classes of quantum geometries, I find as a general result that the spectral dimension is more sensitive to the underlying combinatorial structure than to the details of the additional geometric data thereon. Semiclassical states in loop quantum gravity approximate the classical geometries they are peaking on rather well and there are no indications for stronger quantum effects. On the other hand, in the context of a more general model of states which are superposition over a large number of complexes, based on analytic solutions, there is a flow of the spectral dimension from the topological dimension $d$ on low energy scales to a real number $0<\alpha<d$ on high energy scales. In the particular case of $\alpha=1$ these results allow to understand the quantum geometry as effectively fractal. | CommonCrawl |
Get-Up-and-Go Test is commonly used for assessing the physical mobility of the elderly by physicians.
This paper presents a method for automatic analysis and classification of human gait in the Get-Up-and-Go Test using a Microsoft Kinect sensor.
Two types of features are automatically extracted from the human skeleton data provided by the Kinect sensor.
The first type of feature is related to the human gait (e.g., number of steps, step duration, and turning duration); whereas the other one describes the anatomical configuration (e.g., knee angles, leg angle, and distance between elbows).
These features characterize the degree of human physical mobility.
State-of-the-art machine learning algorithms (i.e., Bag of Words and Support Vector Machines) are used to classify the severity of gaits in 12 subjects with ages ranging between 65 and 90 enrolled in a pilot study.
Our experimental results show that these features can discriminate between patients who have a high risk for falling and patients with a lower fall risk.
This paper discusses obstacle avoidance using fuzzy logic and shortest path algorithm.
This paper also introduces the sliding blades problem and illustrates how a drone can navigate itself through the swinging blade obstacles while tracing a semi-optimal path and also maintaining constant velocity.
This paper summarizes the recent progress we have made for the computer vision technologies in physical therapy with the accessible and affordable devices.
We first introduce the remote health coaching system we build with Microsoft Kinect.
Since the motion data captured by Kinect is noisy, we investigate the data accuracy of Kinect with respect to the high accuracy motion capture system.
We also propose an outlier data removal algorithm based on the data distribution.
In order to generate the kinematic parameter from the noisy data captured by Kinect, we propose a kinematic filtering algorithm based on Unscented Kalman Filter and the kinematic model of human skeleton.
The proposed algorithm can obtain smooth kinematic parameter with reduced noise compared to the kinematic parameter generated from the raw motion data from Kinect.
In this paper we consider a set of travelers, starting from likely different locations towards a common destination within a road network, and propose solutions to find the optimal connecting points for them.
A connecting point is a vertex of the network where a subset of the travelers meet and continue traveling together towards the next connecting point or the destination.
The notion of optimality is with regard to a given aggregated travel cost, e.g., travel distance or shared fuel cost.
This problem by itself is new and we make it even more interesting (and complex) by considering affinity factors among the users, i.e., how much a user likes to travel together with another one.
This plays a fundamental role in determining where the connecting points are and how subsets of travelers are formed.
We propose three methods for addressing this problem, one that relies on a fast and greedy approach that finds a sub-optimal solution, and two others that yield globally optimal solution.
We evaluate all proposed approaches through experiments, where collections of real datasets are used to assess the trade-offs, behavior and characteristics of each method.
There are a number of ways to procedurally generate interesting three-dimensional shapes, and a method where a cellular neural network is combined with a mesh growth algorithm is presented here.
The aim is to create a shape from a genetic code in such a way that a crude search can find interesting shapes.
Identical neural networks are placed at each vertex of a mesh which can communicate with neural networks on neighboring vertices.
The output of the neural networks determine how the mesh grows, allowing interesting shapes to be produced emergently, mimicking some of the complexity of biological organism development.
Since the neural networks' parameters can be freely mutated, the approach is amenable for use in a genetic algorithm.
Object detection and 6D pose estimation in the crowd (scenes with multiple object instances, severe foreground occlusions and background distractors), has become an important problem in many rapidly evolving technological areas such as robotics and augmented reality.
Single shot-based 6D pose estimators with manually designed features are still unable to tackle the above challenges, motivating the research towards unsupervised feature learning and next-best-view estimation.
In this work, we present a complete framework for both single shot-based 6D object pose estimation and next-best-view prediction based on Hough Forests, the state of the art object pose estimator that performs classification and regression jointly.
Rather than using manually designed features we a) propose an unsupervised feature learnt from depth-invariant patches using a Sparse Autoencoder and b) offer an extensive evaluation of various state of the art features.
Furthermore, taking advantage of the clustering performed in the leaf nodes of Hough Forests, we learn to estimate the reduction of uncertainty in other views, formulating the problem of selecting the next-best-view.
To further improve pose estimation, we propose an improved joint registration and hypotheses verification module as a final refinement step to reject false detections.
We provide two additional challenging datasets inspired from realistic scenarios to extensively evaluate the state of the art and our framework.
One is related to domestic environments and the other depicts a bin-picking scenario mostly found in industrial settings.
We show that our framework significantly outperforms state of the art both on public and on our datasets.
Understanding the complex behavior of pedestrians walking in crowds is a challenge for both science and technology.
In particular, obtaining reliable models for crowd dynamics, capable of exhibiting qualitatively and quantitatively the observed emergent features of pedestrian flows, may have a remarkable impact for matters as security, comfort and structural serviceability.
Aiming at a quantitative understanding of basic aspects of pedestrian dynamics, extensive and high-accuracy measurements of pedestrian trajectories have been performed.
More than 100.000 real-life, time-resolved trajectories of people walking along a trafficked corridor in a building of the Eindhoven University of Technology, The Netherlands, have been recorded.
A measurement strategy based on Microsoft Kinect™ has been used; the trajectories of pedestrians have been analyzed as ensemble data.
The main result consists of a statistical descriptions of pedestrian characteristic kinematic quantities such as positions and fundamental diagrams, possibly conditioned to local crowding status (e.g., one or more pedestrian(s) walking, presence of co-flows and counter-flows).
This paper describes a novel method for allowing an autonomous ground vehicle to predict the intent of other agents in an urban environment.
This method, termed the cognitive driving framework, models both the intent and the potentially false beliefs of an obstacle vehicle.
By modeling the relationships between these variables as a dynamic Bayesian network, filtering can be performed to calculate the intent of the obstacle vehicle as well as its belief about the environment.
This joint knowledge can be exploited to plan safer and more efficient trajectories when navigating in an urban environment.
Simulation results are presented that demonstrate the ability of the proposed method to calculate the intent of obstacle vehicles as an autonomous vehicle navigates a road intersection such that preventative maneuvers can be taken to avoid imminent collisions.
In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world).
We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures.
These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception in a correct manner so as to perform well in the tasks.
While these tasks are conceptually simple to describe, by virtue of having all of these challenges simultaneously they are difficult for current DRL architectures.
Additionally, we evaluate the generalization performance of the architectures on environments not used during training.
The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.
Automatic video keyword generation is one of the key ingredients in reducing the burden of security officers in analyzing surveillance videos.
Keywords or attributes are generally chosen manually based on expert knowledge of surveillance.
Most existing works primarily aim at either supervised learning approaches relying on extensive manual labelling or hierarchical probabilistic models that assume the features are extracted using the bag-of-words approach; thus limiting the utilization of the other features.
To address this, we turn our attention to automatic attribute discovery approaches.
However, it is not clear which automatic discovery approach can discover the most meaningful attributes.
Furthermore, little research has been done on how to compare and choose the best automatic attribute discovery methods.
In this paper, we propose a novel approach, based on the shared structure exhibited amongst meaningful attributes, that enables us to compare between different automatic attribute discovery approaches. We then validate our approach by comparing various attribute discovery methods such as PiCoDeS on two attribute datasets.
The evaluation shows that our approach is able to select the automatic discovery approach that discovers the most meaningful attributes.
We then employ the best discovery approach to generate keywords for videos recorded from a surveillance system.
This work shows it is possible to massively reduce the amount of manual work in generating video keywords without limiting ourselves to a particular video feature descriptor.
Utilities face the challenge of responding to power outages due to storms and ice damage, but most power grids are not equipped with sensors to pinpoint the precise location of the faults causing the outage.
Instead, utilities have to depend primarily on phone calls (trouble calls) from customers who have lost power to guide the dispatching of utility trucks.
In this paper, we develop a policy that routes a utility truck to restore outages in the power grid as quickly as possible, using phone calls to create beliefs about outages, but also using utility trucks as a mechanism for collecting additional information.
This means that routing decisions change not only the physical state of the truck (as it moves from one location to another) and the grid (as the truck performs repairs), but also our belief about the network, creating the first stochastic vehicle routing problem that explicitly models information collection and belief modeling.
We address the problem of managing a single utility truck, which we start by formulating as a sequential stochastic optimization model which captures our belief about the state of the grid.
We propose a stochastic lookahead policy, and use Monte Carlo tree search (MCTS) to produce a practical policy that is asymptotically optimal.
Simulation results show that the developed policy restores the power grid much faster compared to standard industry heuristics.
We consider the task of pixel-wise semantic segmentation given a small set of labeled training images.
Among two of the most popular techniques to address this task are Random Forests (RF) and Neural Networks (NN).
The main contribution of this work is to explore the relationship between two special forms of these techniques: stacked RFs and deep Convolutional Neural Networks (CNN).
We show that there exists a mapping from stacked RF to deep CNN, and an approximate mapping back.
This insight gives two major practical benefits: Firstly, deep CNNs can be intelligently constructed and initialized, which is crucial when dealing with a limited amount of training data.
Secondly, it can be utilized to create a new stacked RF with improved performance.
Furthermore, this mapping yields a new CNN architecture, that is well suited for pixel-wise semantic labeling.
We experimentally verify these practical benefits for two different application scenarios in computer vision and biology, where the layout of parts is important: Kinect-based body part labeling from depth images, and somite segmentation in microscopy images of developing zebrafish.
Head Mounted Displays (HMDs) allow users to experience virtual reality with a great level of immersion.
However, even simple physical tasks like drinking a beverage can be difficult and awkward while in a virtual reality experience.
We explore mixed reality renderings that selectively incorporate the physical world into the virtual world for interactions with physical objects.
We conducted a user study comparing four rendering techniques that balance immersion in a virtual world with ease of interaction with the physical world.
Finally, we discuss the pros and cons of each approach, suggesting guidelines for future rendering techniques that bring physical objects into virtual reality.
In the literature, several approaches try to make UAVs fly autonomously, e.g., by extracting perspective cues such as straight lines.
However, such cues are only available in well-defined, man-made environments, and many other cues require sufficient texture information.
Our main target is to detect and avoid frontal obstacles from a monocular camera on a quadrotor AR.Drone 2 by exploiting optical flow as a motion parallax cue. The drone is permitted to fly at a speed of 1 m/s and at an altitude ranging from 1 to 4 meters above ground level.
In general, detecting and avoiding frontal obstacles is quite challenging because optical flow has limitations that should be taken into account, i.e., lighting conditions and the aperture problem.
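As an illustration only (not the implementation used in this work), a minimal sketch of a motion-parallax obstacle cue computed from dense optical flow; the OpenCV routine is standard, but the region split, the threshold and the decision rule are assumptions.

import cv2
import numpy as np

EXPANSION_THRESHOLD = 2.0  # assumed tuning constant, not from the paper

def frontal_obstacle_likely(prev_gray, curr_gray):
    # Dense Farneback optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    cy0, cy1 = h // 3, 2 * h // 3   # central third of the frame,
    cx0, cx1 = w // 3, 2 * w // 3   # roughly around the focus of expansion
    centre = np.linalg.norm(flow[cy0:cy1, cx0:cx1], axis=2).mean()
    whole = np.linalg.norm(flow, axis=2).mean()
    # During forward flight, distant scenery produces little flow near the focus
    # of expansion, so unusually large central flow hints at a close frontal object.
    return centre > EXPANSION_THRESHOLD * max(whole, 1e-6)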
We address the robot grasp optimization problem of unknown objects considering uncertainty in the input space.
Grasping unknown objects can be achieved by using a trial and error exploration strategy.
Bayesian optimization is a sample-efficient optimization algorithm that is especially suitable for these setups, as it actively reduces the number of trials needed to learn about the function to optimize.
In fact, this active object exploration is the same strategy that infants use to learn optimal grasps.
One problem that arises while learning grasping policies is that some configurations of grasp parameters may be very sensitive to error in the relative pose between the object and robot end-effector.
We call these configurations unsafe because small errors during grasp execution may turn good grasps into bad grasps.
Therefore, to reduce the risk of grasp failure, grasps should be planned in safe areas.
We propose a new algorithm, Unscented Bayesian optimization, that is able to perform sample-efficient optimization while taking input noise into consideration to find safe optima.
The contribution of Unscented Bayesian optimization is twofold: it provides a new decision process that drives exploration to safe regions and a new selection procedure that chooses the optimum in terms of its safety without extra analysis or computational cost.
Both contributions are rooted in the strong theory behind the unscented transformation, a popular nonlinear approximation method.
We show its advantages with respect to the classical Bayesian optimization both in synthetic problems and in realistic robot grasp simulations.
The results highlight that our method achieves optimal and robust grasping policies after a few trials while the selected grasps remain in safe regions.
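As an illustration only (not the authors' implementation), a minimal sketch of the unscented idea applied to a generic acquisition function: the sigma points and weights follow the standard unscented transform, while the acquisition function, kappa and the input-noise covariance below are placeholders.

import numpy as np

def sigma_points(x, cov, kappa=1.0):
    # Standard unscented sigma points: 2d + 1 points around x for a d-dimensional input.
    d = len(x)
    L = np.linalg.cholesky((d + kappa) * cov)
    pts = [x]
    wts = [kappa / (d + kappa)]
    for i in range(d):
        pts += [x + L[:, i], x - L[:, i]]
        wts += [0.5 / (d + kappa)] * 2
    return np.array(pts), np.array(wts)

def unscented_acquisition(acq, x, input_cov, kappa=1.0):
    # Average a pointwise acquisition value over the sigma points, so that
    # candidates whose neighbourhood is poor (unsafe) are penalised.
    pts, w = sigma_points(np.asarray(x, dtype=float), input_cov, kappa)
    return float(np.dot(w, [acq(p) for p in pts]))

# Toy usage: a sharp optimum is ranked lower than a broad, "safe" one.
acq = lambda p: float(np.exp(-200 * np.sum((p - 0.2) ** 2))        # narrow peak
                      + 0.8 * np.exp(-2 * np.sum((p - 0.7) ** 2)))  # broad peak
cov = 0.01 * np.eye(2)  # assumed input-noise covariance
print(unscented_acquisition(acq, [0.2, 0.2], cov))
print(unscented_acquisition(acq, [0.7, 0.7], cov))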
Financial fraud detection is an important problem with a number of design aspects to consider.
Issues such as algorithm selection and performance analysis will affect the perceived ability of proposed solutions, so for auditors and researchers to be able to sufficiently detect financial fraud it is necessary that these issues be thoroughly explored.
In this paper we will revisit the key performance metrics used for financial fraud detection with a focus on credit card fraud, critiquing the prevailing ideas and offering our own understandings.
There are many different performance metrics that have been employed in prior financial fraud detection research.
We will analyse several of the popular metrics and compare their effectiveness at measuring the ability of detection mechanisms.
We further investigated the performance of a range of computational intelligence techniques when applied to this problem domain, and explored the efficacy of several binary classification methods.
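For concreteness, a short sketch of the usual threshold-based and threshold-free metrics compared in this setting (accuracy alone is misleading under heavy class imbalance); the toy labels, scores and the 0.5 threshold below are placeholders, not experimental data.

import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score, roc_auc_score)

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])             # 1 = fraudulent
y_score = np.array([.1, .2, .1, .3, .2, .1, .4, .2, .9, .35])  # classifier scores
y_pred = (y_score >= 0.5).astype(int)                          # assumed threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP/FP/FN/TN :", tp, fp, fn, tn)
print("precision   :", precision_score(y_true, y_pred))
print("recall      :", recall_score(y_true, y_pred))
print("F1          :", f1_score(y_true, y_pred))
print("ROC AUC     :", roc_auc_score(y_true, y_score))         # threshold-free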
Depth maps captured by modern depth cameras such as Kinect and Time-of-Flight (ToF) are usually contaminated by missing data, noises and suffer from being of low resolution.
In this paper, we present a robust method for high-quality restoration of a degraded depth map with the guidance of the corresponding color image.
We solve the problem in an energy optimization framework that consists of a novel robust data term and smoothness term.
To accommodate not only the noise but also the inconsistency between depth discontinuities and the color edges, we model both the data term and smoothness term with a robust exponential error norm function.
We propose to use Iteratively Re-weighted Least Squares (IRLS) methods for efficiently solving the resulting highly non-convex optimization problem.
More importantly, we further develop a data-driven adaptive parameter selection scheme to properly determine the parameter in the model.
We show that the proposed approach can preserve fine details and sharp depth discontinuities even for a large upsampling factor ($8\times$ for example).
Experimental results on both simulated and real datasets demonstrate that the proposed method outperforms recent state-of-the-art methods in coping with the heavy noise, preserving sharp depth discontinuities and suppressing the texture copy artifacts. | CommonCrawl |
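As an illustration only, a simplified IRLS sketch for colour-guided depth restoration with Welsch-type exponential weights (it ignores the upsampling part of the method); the parameter values, the 4-connected neighbourhood and the Gaussian guidance weights are assumptions.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def restore_depth(depth, valid, guide, lam=1.0, sigma_d=0.1, sigma_s=0.1,
                  sigma_g=0.05, n_iter=10):
    # depth: degraded depth map, valid: boolean mask of observed pixels,
    # guide: grayscale guidance image of the same size (all float arrays).
    h, w = depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    pairs = []  # 4-connected neighbour pairs with colour-based guidance weights
    for di, dj in [(0, 1), (1, 0)]:
        p = idx[:h - di, :w - dj].ravel()
        q = idx[di:, dj:].ravel()
        g = np.exp(-(guide.ravel()[p] - guide.ravel()[q]) ** 2 / (2 * sigma_g ** 2))
        pairs.append((p, q, g))
    d0, v = depth.ravel(), valid.ravel().astype(float)
    u = np.where(valid, depth, depth[valid].mean()).ravel()
    for _ in range(n_iter):
        # IRLS: weights derived from the exponential (Welsch) error norm.
        wd = v * np.exp(-(u - d0) ** 2 / (2 * sigma_d ** 2))
        rows, cols, vals = [np.arange(n)], [np.arange(n)], [wd]
        b = wd * d0
        for p, q, g in pairs:
            ws = lam * g * np.exp(-(u[p] - u[q]) ** 2 / (2 * sigma_s ** 2))
            rows += [p, q, p, q]            # assemble the weighted graph Laplacian
            cols += [p, q, q, p]
            vals += [ws, ws, -ws, -ws]
        A = sp.csr_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n, n))
        u = spsolve((A + 1e-8 * sp.eye(n)).tocsc(), b)
    return u.reshape(h, w)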
We study index theory for manifolds with Baas–Sullivan singularities using geometric $K$-homology with coefficients in a unital $C^*$-algebra. In particular, we define a natural analog of the Baum–Connes assembly map for a torsion-free discrete group in the context of these singular spaces. The cases of singularities modelled on $k$-points (i.e., $\mathbb Z/k\mathbb Z$-manifolds) and the circle are discussed in detail. In the case of the former, the associated index theorem is related to the Freed–Melrose index theorem; in the case of the latter, the index theorem is related to work of Rosenberg.
Challenging probability problem (AMC 12B Problem 18) - Are the AoPS solutions incomplete/wrong?
A frog makes $3$ jumps, each exactly $1$ meter long. The directions of the jumps are chosen independently at random. What is the probability that the frog's final position is no more than $1$ meter from its starting position?
This problem comes from the AMC $12$ in the year $2010$. The contest is an invitational test in the US for secondary school students to qualify for the Olympiad. It involves $25$ questions in $75$ minutes, and problems can be solved without calculus.
I didn't get very far in my attempt, so I ultimately searched and found contributed solutions on the Art of Problem Solving.
I don't understand "solution $1$" and I am pretty sure "solution $2$" is incorrect.
The AoPS solution states "it is relatively easy" to show exactly $1$ of these has magnitude $1$ or less. If so, then out of $4$ possible options, there would be $1$ with magnitude $1$ or less, so the probability would be $1/4$ (the correct answer is indeed $1/4$, but this method does not satisfy me yet).
I did not understand this step, and someone asked the same question in a previous thread. It's not obvious to me, and I have no clue how you would go about showing this.
Is there an inequality that will help? I don't see how to simplify it. Also the official solution is much more complicated (see sketch below), which makes me think this solution is either elegant and overlooked or it is coincidentally the correct number but not the correct method.
The solution goes like this. Suppose the first jump is from the origin. So to be $1$ unit from the starting point, you need to be in the unit circle.
The next two jumps can take the frog up to $2$ units from the point where the first jump lands, equally likely in any direction. So the sample space of ending points is a disk of radius $2$ centered at the point of the first jump. The boundary of this disk is also tangent to the unit circle.
Thus the sample space has an area of $4\pi$, of which the area of the unit circle is $\pi$. Hence the probability is $1/4$.
I am pretty sure this method is incorrect because I simulated $2$ jumps numerically. You do get a disk of radius $2$, but the points are not uniformly distributed: there is clustering toward the center of the disk and toward its boundary.
Plus, if this method were true, it would seem that $3$ jumps should give a disk of radius $3$, but that would imply a totally different answer of $1/9$.
The idea is to set coordinates so the first jump is $(0, 0)$ and the second jump is $(1, 0)$. Let the starting point be $(\cos \alpha, \sin \alpha)$ and then the location after the third jump is $(1 + \cos \beta, \sin \beta)$.
It is not too hard to work out the condition for the third point to be within $1$ unit of the starting point. Ignoring the measure $0$ cases of $\alpha = 0$ and $\alpha = \pi$, we need $\alpha \leq \beta \leq \pi$. We can limit to $0 \leq \alpha \leq \pi$ since the other half works out the same by symmetry. And we have $0 \leq \beta \leq 2\pi$.
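For completeness, here is one way to fill in that step (my own computation, not part of the official solution). Squaring the distance between $(\cos \alpha, \sin \alpha)$ and $(1 + \cos \beta, \sin \beta)$,

$$\begin{aligned} (1+\cos\beta-\cos\alpha)^2+(\sin\beta-\sin\alpha)^2\le 1 &\iff 1+\cos\beta-\cos\alpha-\cos(\beta-\alpha)\le 0\\ &\iff 2\sin\tfrac{\beta-\alpha}{2}\left(\sin\tfrac{\beta-\alpha}{2}-\sin\tfrac{\alpha+\beta}{2}\right)\le 0\\ &\iff \sin\tfrac{\alpha}{2}\,\cos\tfrac{\beta}{2}\,\sin\tfrac{\beta-\alpha}{2}\ge 0, \end{aligned}$$

and for $0<\alpha<\pi$ (so $\sin\frac{\alpha}{2}>0$) and $0\le\beta\le 2\pi$ the last product is nonnegative exactly when $\alpha\le\beta\le\pi$, up to sets of measure zero.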
Considering the rectangle of $(\alpha, \beta)$ pairs, where all angles are equally likely, the sample space is the rectangle $[0, \pi] \times [0, 2\pi]$ of area $2\pi^2$. The event of ending within $1$ unit of the start is the triangle $\alpha \leq \beta \leq \pi$ of area $\pi^2/2$, so the desired probability is $(\pi^2/2)/(2\pi^2) = 1/4$.
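A quick Monte Carlo sanity check of the $1/4$ (my own sketch, not from the contest materials):

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
theta = rng.uniform(0.0, 2.0 * np.pi, size=(n, 3))        # three random jump angles
steps = np.stack([np.cos(theta), np.sin(theta)], axis=-1)  # unit-length jumps, shape (n, 3, 2)
final = steps.sum(axis=1)                                  # displacement after the three jumps
print(np.mean(np.linalg.norm(final, axis=1) <= 1.0))       # prints approximately 0.25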
Are the AoPS solutions incomplete?
I would love it if their "solution $1$" were correct, as it's much easier to compute and would be more reasonable for an average time allotment of $3$ minutes per problem.
I run the YouTube channel MindYourDecisions, and am considering this problem. If I make a video I'll credit anyone who offers helpful answers.
I searched the problem on AoPS, and found this thread from 2016.
The solution by Zimbalono there (post #13) is worth looking at as well. The last figure probably calls for some more explanation, but it's pretty good overall.
WLOG, let $a=(1,0)$. Then, since $\|b\|=\|c\|=1$, the vectors $b+c$ and $b-c$ are orthogonal. This means the four points $a+(b+c)$, $a+(b-c)$, $a-(b+c)$, and $a-(b-c)$ form a rhombus centered at $(1,0)$. This rhombus has side length $2=2\|b\|=2\|c\|$.
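The orthogonality claim is just the dot-product identity: since $\|b\|=\|c\|=1$,

$$(b+c)\cdot(b-c)=\|b\|^2-\|c\|^2=0,$$

so the two diagonals of the rhombus are indeed perpendicular.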
In generic position (with probability $1$), one of each of the pairs $\pm(b+c)$ and $\pm(b-c)$ point inward from $(1,0)$ with negative $x$ coordinate, and one each point outward. We claim that (aside from a probability-0 case that puts both on the circle) exactly one of the two inward vertices of the rhombus lies inside the circle.
Now we have two right triangles with the same hypotenuse length and the same right angle. Going from the one with vertices on the circle to the one that shares vertices with the rhombus, we must lengthen one leg and shorten the other leg. The lengthened leg pushes its endpoint outside the circle, while the shortened leg pulls its endpoint inside the circle. We have one vertex of the rhombus inside the circle, and we're done.
My figures were done in Asymptote - code available on request, if you want to tinker with things.
I have just rearranged the given solution (the official solution).
Both rat derived vascular smooth muscle cells (SMC) and human myofibroblasts contain $\alpha$ smooth muscle actin (SMA), but they utilize different mechanisms to contract populated collagen lattices (PCLs). The difference is in how the cells generate the force that contracts the lattices. Human dermal fibroblasts transform into myofibroblasts, expressing $\alpha$-SMA within stress fibers, when cultured in lattices that remain attached to the surface of a tissue culture dish. When attached lattices are populated with rat derived vascular SMC, the cells retain their vascular SMC phenotype. Comparing the contraction of attached PCLs when they are released from the culture dish on day 4 shows that lattices populated with rat vascular SMC contract less than those populated with human myofibroblasts. PCL contraction was evaluated in the presence of vanadate and genistein, which modify protein tyrosine phosphorylation, and ML-7 and Y-27632, which modify myosin ATPase activity. Genistein and ML-7 had no effect upon either myofibroblast or vascular SMC-PCL contraction, demonstrating that neither protein tyrosine kinase nor myosin light chain kinase was involved. Vanadate inhibited myofibroblast-PCL contraction, consistent with a role for protein tyrosine phosphatase activity in myofibroblast-generated forces. Y-27632 inhibited both SMC and myofibroblast PCL contraction, consistent with a central role of myosin light chain phosphatase.
Dallon, J. C. and Ehrlich, H P., "Differences in the Mechanism of Collagen Lattice Contraction by Myofibroblasts and Smooth Muscle Cells" (2010). All Faculty Publications. 2712. | CommonCrawl |
You are given the following process.
There is a platform with $n$ columns. $1 \times 1$ squares appear one after another in some columns of this platform. If there are no squares in a column, the square occupies the bottom row. Otherwise it appears on top of the highest square of that column.
When all of the $n$ columns have at least one square in them, the bottom row is removed. You receive $1$ point for this, and all the remaining squares fall down one row.
Your task is to calculate the number of points you will receive.
The first line of input contains two integers $n$ and $m$ ($1 \le n, m \le 1000$) — the length of the platform and the number of squares.
The next line contains $m$ integers $c_1, c_2, \dots, c_m$ ($1 \le c_i \le n$) — the column in which the $i$-th square appears.
Print one integer — the number of points you will receive.
In the sample case the answer equals $2$ because after the $6$-th square appears, one row is removed (the counts of squares in the columns are $[2~ 3~ 1]$, and after removing one row they become $[1~ 2~ 0]$).
After the $9$-th square appears, the counts are again $[2~ 3~ 1]$, and after removing one row they become $[1~ 2~ 0]$.
So the answer will be equal to $2$.
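A minimal sketch of one way to compute the answer by directly simulating the process (identifiers are my own); with $n, m \le 1000$ this is fast enough:

def solve(n, columns):
    counts = [0] * (n + 1)          # counts[i] = squares currently in column i
    points = 0
    for c in columns:
        counts[c] += 1
        while all(counts[i] > 0 for i in range(1, n + 1)):
            for i in range(1, n + 1):
                counts[i] -= 1       # the bottom row is removed
            points += 1
    return points

n, m = map(int, input().split())
print(solve(n, list(map(int, input().split()))))

Since every removal takes exactly one square from every column, the answer also equals the minimum, over all columns, of the total number of squares that ever land in that column.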
Abstract : We propose a new model for finite discrete time-varying graphs (TVGs), suitable for the representation of dynamic networks, for instance. The model we propose is based on the concept that in this class of TVGs there is typically a finite number of nodes and also a finite number of time instants. In this context, roughly speaking, an edge is able to connect any pair of nodes in any pair of time instants. This leads to the conception that in this environment an edge should be represented by an ordered quadruple of the form $(u, t_p, v, t_q)$, where $u$ and $v$ are nodes and $t_p$ and $t_q$ are time instants. Building upon this concept, we propose a new model that defines a TVG as an object $G_d = (V, E, T, \Phi)$, where $V$ is the set of nodes, $T$ is the set of time instants on which the TVG is defined, $E \subseteq V \times T \times V \times T$ is the set of edges in the TVG and $\Phi$ is a function that assigns a weight to each edge. In this on-going work, we show how key concepts such as degrees, journeys, distance and connectivity are handled in this environment and also study the data structures used for the representation of dynamic networks built following our model. Moreover, we prove that when the TVG nodes can be considered as independent entities in each time instant, the analyzed TVG is isomorphic to a directed static graph. The basic idea behind the proof is hinted at by the fact that if the nodes are elements of $V \times T$, an edge in the TVG could be seen as a pair $((u,t_p), (v,t_q))$, which in this case would represent a directed edge from node $(u, t_p)$ to node $(v,t_q)$, reducing the TVG to a simple directed graph. This is an important theoretical result because this allows the application of known results from the theory of directed graphs in the context of dynamic networks, either in a straightforward way when the nodes can be treated as independent in each time instant, or as a means for adapting known results to the more general TVG environment. In order to ascertain the generality of our proposed model, we show it can be used to represent several previous cases of dynamic networks found in the recent literature, which in general are unable to represent each other.
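As an illustration only, one possible (assumed) in-memory representation of such a TVG, storing each edge as the ordered quadruple $(u, t_p, v, t_q)$ together with its weight $\Phi(e)$; the class and method names are hypothetical:

class TVG:
    def __init__(self, nodes, times):
        self.V = set(nodes)
        self.T = set(times)
        self.phi = {}                              # (u, t_p, v, t_q) -> weight

    def add_edge(self, u, t_p, v, t_q, weight=1.0):
        assert u in self.V and v in self.V and t_p in self.T and t_q in self.T
        self.phi[(u, t_p, v, t_q)] = weight

    def out_neighbours(self, u, t_p):
        # Temporal out-neighbours of the node-time pair (u, t_p).
        return [(v, t_q) for (a, ta, v, t_q) in self.phi if a == u and ta == t_p]

g = TVG(nodes=["a", "b"], times=[0, 1])
g.add_edge("a", 0, "b", 1, weight=2.5)
print(g.out_neighbours("a", 0))                    # [('b', 1)]

Treating the node-time pairs $(u, t)$ themselves as vertices turns this into an ordinary directed static graph, which is exactly the isomorphism argument sketched in the abstract.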
A graph is called 1-planar if it can be drawn in the plane so that each of its edges is crossed by at most one other edge. We show that every 1-planar drawing of any 1-planar graph on $n$ vertices has at most $n-2$ crossings; moreover, this bound is tight. By this novel necessary condition for 1-planarity, we characterize the 1-planarity of Cartesian product $K_m\times P_n$. Based on this condition, we also derive an upper bound on the number of edges of bipartite 1-planar graphs, and we show that each subgraph of an optimal 1-planar graph (i.e., a 1-planar graph with $n$ vertices and $4n-8$ edges) can be decomposed into a planar graph and a forest. | CommonCrawl |
We construct periodic approximations to the free energies of Ising models on fractal lattices of dimension smaller than two, in the case of zero external magnetic field, using a generalization of the combinatorial method of Feynman and Vodvickenko. Our procedure is applicable to any fractal obtained by the removal of sites of a periodic two-dimensional lattice. As a first application, we compute estimates for the critical temperatures of many different Sierpinski carpets and we compare them to known Monte Carlo estimates. The results show that our method is capable of determining the critical temperature with, possibly, arbitrary accuracy and paves the way to determine $T_c$ for any fractal of dimension below two. Critical exponents are more difficult to determine since the free energy of any periodic approximation still has a logarithmic singularity at the critical point implying $\alpha = 0$. We also compute the correlation length as a function of the temperature and extract the relative critical exponent. We find $\nu=1$ for all periodic approximations, as expected from universality.
Why is the set-theoretic principle $\diamondsuit$ called $\diamondsuit$?
A shallow answer would just point to theorem 6.2 in Jensen's 1972 paper "The fine structure of the constructible hierarchy", where Jensen introduces this property. Or was this symbol used already earlier for the combinatorial principle?
Is there some (deeper) reason explaining how $\diamondsuit$ got its name? Perhaps because $\Box$ was already established as a combinatorial principle?
I once asked Jensen this question, when we were at a conference at Oberwolfach.
I told him that I had always assumed that the diamond $\Diamond$ principle was called $\Diamond$ because it expresses that there is an object exhibiting an elaborate degree of internal reflection. After all, if $\langle A_\alpha\mid\alpha<\omega_1\rangle$ is a $\Diamond$-sequence, then it very often happens for $\alpha<\beta$ that $A_\beta\cap\alpha=A_\alpha$, which is an instance of $A_\beta$ reflecting to $A_\alpha$.
But alas, this was not his reason. The actual reason was less interesting, having mainly to do, he said, with him simply needing a new symbol that had not yet been used, and that one was available.
Meanwhile, I believe that we should all adopt my explanation anyway! Henceforth, let it be known that a $\Diamond$ sequence is one exhibiting beautifully captivating internal reflections!
This is the question posed by Emil Stoyanov in a comment to another page.
Then there exists a point $S$ within $\Delta ABC$ such that the four points $M,$ $N,$ $P,$ and $S$ form a parallelogram.
Construct $\Delta M'N'P'$ for which $\Delta MNP$ serves as the medial triangle.
This is because sliding $M$ away from $A'$ towards, say, $B$ pushes $P$ down $AB$ and $N$ up $AC.$ Relative to $\alpha$ the movement of $N'$ and $P'$ is opposite to that of $N$ and $P,$ respectively.
Note 1: Point $A'$ is the foot of the cevian through the circumcenter $O$ of $\Delta ABC.$ This is because the lines joining the feet of the altitudes are antiparallel and the circumcenter is the isogonal conjugate of the orthocenter.
Note 2: An absolutely analogous behavior is observed when $AM,$ $BN,$ $CP$ are the cevians through $D.$ This is discussed on a separate page. | CommonCrawl |
[Circuit diagram omitted; arrows show the direction of current.]
The problem I'm getting is: if $r < R$, then $R/r > 1$, which implies $V > E$.
How is it possible that the voltage across a resistor can be more than the emf of the battery? I must be mistaking somewhere but I'm unable to find it.
Your $\mathbf E$ is not the emf of the cell.
Assuming that $r$ is the internal resistance of the cell, $\mathbf E$ is the potential difference across the terminals of the cell.
Let $\mathcal E$ be the emf of the cell and $E$ the potential difference across the terminals of the cell.
$\mathcal E - Ir = E$ is the potential difference across the cell terminals.
$ IR = V$ is the potential difference across the resistor R.
So substituting these two potential differences into the first equation, one gets $E-V = 0 \Rightarrow E = V$ as expected.
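A worked example with arbitrary (assumed) numbers may make this concrete: take $\mathcal E = 12\ \mathrm V$, $r = 1\ \Omega$ and $R = 5\ \Omega$. Then

$$I=\frac{\mathcal E}{R+r}=\frac{12}{6}=2\ \mathrm A,\qquad E=\mathcal E-Ir=12-2=10\ \mathrm V,\qquad V=IR=2\times 5=10\ \mathrm V,$$

so $E=V$, and the potential difference across $R$ never exceeds the emf $\mathcal E$.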
Celsius is a temperature scale.
It is defined by two reference points:
$0 \,^\circ \mathrm C$, which is set at the melting point of water, and
$100 \,^\circ \mathrm C$, which is set at the boiling point of water, as defined at sea level and standard atmospheric pressure.
A temperature measured in Celsius is often referred to as so many degrees Celsius.
The symbol for the degree Celsius is $^\circ \mathrm C$.
The Celsius scale is also known as the centigrade scale, but this usage has been obsolete since $1948$.
This entry was named for Anders Celsius.
The Celsius scale is based on that proposed in $1742$ by the astronomer Anders Celsius.
He defined the upper and lower reference points from the melting and boiling points of water. These were initially the other way round from their current definitions: $0 \,^\circ \mathrm C$ was for the boiling point and $100 \,^\circ \mathrm C$ was for the melting point.
Celsius originally called it the centigrade scale, but for various reasons the name was a source of confusion, and so in $1948$ the name Celsius was adopted.
This is the system of measurement of temperature that is generally taught in schools. | CommonCrawl |