Effect of beryllium on the precipitation of S' phase and mechanical properties of aluminum(alpha)-sulfur(aluminum(2) copper magnesium).
An investigation on the effect of Be on S$'$ phase precipitation in pseudobinary Al($\alpha$)-S(Al$_2$CuMg) alloy is carried out using microhardness measurement, resistivity analysis, optical, SEM and TEM microscopy, X-ray diffraction, EDS and WDS analysis, tensile and Charpy V-notch impact tests, and fractography. Source: Dissertation Abstracts International, Volume: 54-05, Section: B, page: 2690. Thesis (Ph.D.)--University of Windsor (Canada), 1992.
Fang, Wei., "Effect of beryllium on the precipitation of S' phase and mechanical properties of aluminum(alpha)-sulfur(aluminum(2) copper magnesium)." (1992). Electronic Theses and Dissertations. 1817. | CommonCrawl |
Farmer John is attempting to sort his $N$ cows ($1 \leq N \leq 100$), conveniently numbered $1 \dots N$, before they head out to the pastures for breakfast.
The cows are a bit sleepy today, so at any point in time the only cow who is paying attention to Farmer John's instructions is the cow directly facing Farmer John. In one time step, he can instruct this cow to move $k$ paces down the line, for any $k$ in the range $1 \ldots N-1$. The $k$ cows whom she passes will amble forward, making room for her to insert herself in the line after them.
Farmer John is eager to complete the sorting, so he can go back to the farmhouse for his own breakfast. Help him find the minimum number of time steps required to sort the cows.
The first line of input contains $N$.
The second line contains $N$ space-separated integers, $p_1, p_2, p_3, \dots, p_N$, indicating the starting order of the cows.
A single integer: the number of time steps before the $N$ cows are in sorted order, if Farmer John acts optimally. | CommonCrawl |
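One way to compute the answer to the sorting problem above, sketched in Python; the underlying observation (an assumption of this sketch, not stated in the problem) is that exactly the cows in front of the longest already-increasing suffix need one instruction each:

```python
def min_time_steps(p):
    """Minimum instructions, assuming only the cows in front of the
    longest increasing suffix of p ever need to be moved (one step each)."""
    k = len(p) - 1                    # index where the sorted suffix starts
    while k > 0 and p[k - 1] < p[k]:
        k -= 1
    return k

print(min_time_steps([1, 2, 4, 3, 5]))   # 3: cows 1, 2 and 4 must be instructed
print(min_time_steps([1, 2, 3, 4]))      # 0: already sorted
```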
I would like to ask a question about Gödel's Incompleteness Theorems which I've had in the back of my head for some time. Since I'm a student working in a completely different area of maths (my usual pastime is cutting and pasting manifolds), my understanding of these results is nontechnical (I first learned about them before I started studying maths at university, by reading Nagel and Newman's excellent book).
I realize there are other questions on this topic, but I'd like to be a bit more specific in my question, and I wasn't able to google up anything that addressed what I'm about to ask.
Why does the sentence $G$ satisfy the inductive definition of truth? Is this because it is the negation of a false statement? If so, why is $\neg G$ false? Is this because it causes a contradiction, and sentences that lead to a contradiction in the system are false by definition?
Why doesn't the fact that $G$ is true (i.e. the list of steps reducing the truth of $G$ to the truth to atomic sentences (axioms?)) constitute a proof of $G$?
I hope I have managed to make myself clear. Thanks for any clarification!
About 1., Gödel's First Incompleteness Theorem is an ingenious exercise in "coding" formal properties and relations regarding a theory $F$ with "a certain amount" of arithmetic inside $F$ itself.
This exercise ends with the definition of the so-called provability predicate $Prov_F(x)$ which holds of $a$ iff there is a proof in $F$ of the formula $A$ with "code" $a$.
$⊢_F G_F \leftrightarrow ¬Prov_F(\ulcorner G_F \urcorner)$ [where $\ulcorner x \urcorner$ is the "code" of formula $x$].
Thus, it can be shown, even inside $F$, that $G_F$ is true if and only if it is not provable in $F$.
Thus, "reading" the above proof, we can "know of" the truth of $G_F$ (provided that $F$ is consistent) simply because $G_F$ is not provable in $F$ and $G_F$ is equivalent to the formula $¬Prov_F(\ulcorner G_F \urcorner)$.
Why doesn't the fact that $G$ is true constitute a proof of $G$ ?
Because a proof in $F$ of $G_F$ is a precise formal object, and Gödel's Incompleteness Theorem shows that such a proof in $F$ cannot exist.
the system $F$ is unable to prove all true sentences expressible in it.
By Gödel's Completeness Theorem, there are formal systems in which G is false. How can this not cause a contradiction ?
NO; by G's Completeness Th there are models of $F$ in which $G_F$ is false.
G's Completeness Th proves that a formula provable in a theory $T$ must be true in all models of $T$.
Thus assuming that $\mathbb N$ is a model of our theory $F$ containing "a certain amount" of arithmetic, we have that all theorems of $F$ (i.e. formulae provable from $F$'s axioms) must be true in all models of $F$.
But $G_F$ is not provable from $F$'s axioms; thus, it must fail to be true in some model of $F$.
The proof of G's Incompleteness Th gives us the insight that $G_F$ is true in $\mathbb N$; thus, it must be false in some model of $F$ different from $\mathbb N$, i.e. in some non-standard model of arithmetic.
| CommonCrawl |
I have a computer science and mathematics degree and am trying to wrap my head around quantum computing and it just doesn't seem to make sense from the very beginning. I think the problem is the definitions out there are generally watered down for lay-people to digest but don't really make sense.
For example, we often see a bit compared to a qubit and are told that an "old fashion digital bit" can only be in one of two states, 0 or 1. But qubits on the other hand can be in a state of 0, 1, or "superposition", presumably a third state that is both 0 AND 1. But how does being in a state of "both 0 and 1" deliver any value in computing as it is simply the same as "I don't know"? So for example, if a qubit represents the last binary digit of your checking account balance, how does a superimposed 0 and 1 deliver any useful information for that?
Worse you see these same articles say things like "and two qubits can be put together for a total of 4 states, three can make a total of 8 states" -- ok that's just $2^n$, the same capability that "old-fashioned bits" have.
Obviously there is great promise in quantum computing, so I think it's more a problem of messaging. Any suggestions for intro/primer literature that doesn't present the quantum computing basics in this oxymoronic way?
So there are two major advantages qubits have over classical bits: superposition and entanglement.
With superposition, $N$ qubits can be prepared in the equal superposition $\frac{1}{\sqrt{2^N}}\sum_{k=0}^{2^N-1}|k\rangle$, where $|k\rangle$ is the state with its qubits in the binary representation of $k$. Plugging this state into a quantum operator essentially gets every single possible input into said operator with one query, and if you can tease out the information you need with that one operator then you only need one attempt instead of $2^N$ queries. See the Deutsch-Jozsa algorithm for more details.
With entanglement, qubits can be prepared in joint states such as $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$, where the values are linked together. That means that when qubits are combined, the possible states they can occupy are far more varied. If I measure one of the qubits I affect the data stored in the other qubit because they were a linked object. Through this you can get cool interference effects by changing bases. I would also look at the quantum teleportation circuit to get an idea of how this can be useful for things that classical bits cannot do.
In this context, 0 and 1 refer to orthogonal basis states of a Hilbert space (a complex vector space) representation of the states of physical objects like electrons (say spin states of an electron - "up" and "down"). It is more appropriate to denote them as $|0\rangle$ and $|1\rangle$, according to the Dirac notation. I've written about this previously, here. Just saying "superposition of state 0 and state 1" doesn't convey any useful information, yes. However specifying the superposition state like $\alpha|0\rangle+\beta|1\rangle$, where $\alpha,\beta\in \Bbb C$ and $|\alpha|^2+|\beta|^2=1$, makes complete sense mathematically and conveys useful information. By the way, $|\alpha|^2$ is the probability of the qubit collapsing to state $|0\rangle$ upon measurement and $|\beta|^2$ is the probability of it collapsing to state $|1\rangle$, upon measurement. You might say "superposition of state 0 and state 1" doesn't make physical or intuitive sense. Sure, quantum mechanics is simply a mathematical model that happens to give correct predictions about real world phenomena. It doesn't need to make physical or intuitive sense. It just needs to work.
Also, we would never use a qubit to represent the last binary digit of your account balance, in the first place. That would be silly. And even if we do, the qubit should be restricted to the computational basis states $|0\rangle$ or $|1\rangle$, and not their superposition states.
From here you can say that an $n$-qubit system can store $2^n$ values (coefficients) in parallel, although there's always the restriction that the sum of the squared moduli of the coefficients of the computational basis states must add up to $1$.
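To make the amplitude picture concrete, here is a small numpy illustration (the particular amplitudes are arbitrary examples chosen only to satisfy the normalization constraint):

```python
import numpy as np

# Single-qubit state a|0> + b|1> with |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
probs = np.array([abs(a) ** 2, abs(b) ** 2])
print(probs, probs.sum())                 # [0.5 0.5] 1.0

# A 2-qubit register is described by 2**2 = 4 complex coefficients,
# one per basis state |00>, |01>, |10>, |11>.
state = np.array([0.5, 0.5j, -0.5, 0.5])
assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)

# Sampling one measurement outcome in the computational basis:
outcome = np.random.choice(4, p=np.abs(state) ** 2)
print(format(int(outcome), "02b"))
```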
See my previous answers: this and this, for resource recommendations. As mentioned there, I'd recommend starting with Vazirani's lectures and then moving on to Nielsen and Chuang. I recently found the 2006 lecture notes by John Watrous which are also pretty great for beginners. It helps a lot to have a thorough grounding in linear algebra, while learning quantum computing, but I suspect you already have that, being a mathematics and computer science graduate.
As for how computation using qubits can be faster, in some cases, than using classical bits, I recommend carefully thumbing through the standard quantum algorithms like Deutsch-Jozsa, Shor's, Grover's among others. Here is a simple explanation of the Deutsch-Jozsa algorithm. It would be a bit difficult to summarize that in one answer. Please keep in mind that quantum computing cannot speed up all type of computations. It's applicable to only very specific problems.
| CommonCrawl |
The Dirichlet characters $\chi\colon\Z\to\C$ of modulus $q$ form a group under pointwise multiplication.
In the Conrey labeling system we simply have $\chi_q(n_1,\cdot)\chi_q(n_2,\cdot) = \chi_q(n_1n_2,\cdot)$.
This group is (non-canonically) isomorphic to the multiplicative group $(\Z/q\Z)^\times$. | CommonCrawl |
For $n$ an odd positive integer, the unit squares of an $n\times n$ chessboard are coloured alternately black and white, with the four corners coloured black. A tromino is an $L$-shape formed by three connected unit squares. For which values of $n$ is it possible to cover all the black squares with non-overlapping trominos? When it is possible, what is the minimum number of trominos needed? | CommonCrawl |
A set is a fundamental concept in modern mathematics, which means that the term itself is not defined.
However, a set can be imagined as a collection of distinct elements. The whole of modern mathematics is based on this concept, so it is important to know and understand the theory of sets.
Sets are usually denoted by capital Latin letters $A, B, C...$. Elements of the set are written within the braces $\lbrace$ and $\rbrace$.
The set with no elements in it is called the empty set, and is denoted $\emptyset$ or $\lbrace \rbrace$.
Two sets $A$ and $B$ are equal if and only if they contain the same elements; if the elements of the first set are the same as the elements of the second, and vice versa.
Since the relation of set equality is reflexive, symmetric and transitive, it is an equivalence relation.
Set $A$ is a subset of $B$ if and only if each element of the set $A$ is also an element of the set $B$.
Set $B$ is a superset of set $A$, denoted by $B \supseteq A$.
If set $A$ is a subset of set $B$ and if set $B$ contains at least one element that does not belong to set $A$, then we say that set $A$ is a proper subset of set $B$, denoted by $A \subset B$, or that set $B$ is a proper superset of set $A$, denoted by $B \supset A$.
Relation $\subset$ is transitive: $(A \subset B) \wedge (B \subset C) \Rightarrow A \subset C$.
The union of two sets, $A$ and $B$, is a set of all elements that belong to at least one set $A$ or $B$.
Intersection of two sets, $A$ and $B$, is a set containing all the elements that belong to both set $A$ and set $B$.
Difference of two sets, $A$ and $B$, is a set containing all elements of set $A$ which are not contained by set $B$.
Symmetric difference of two sets, $A$ and $B$, is a set containing all the elements which belong only to one of the sets $A$ and $B$.
It is obvious that the operation $\bigtriangleup$ is commutative.
Let us assume that $A \subset B$. Complement of the set $A$ relative to set $B$ is a set containing all elements of set $B$ which do not belong to set $A$.
Power set of set $A$ is a set of all subsets of $A$, including the empty set and the set $A$ itself.
A Cartesian Product of sets $A$ and $B$ is a set containing all ordered pairs $(x,y)$ where $x$ is an element of set $A$, and $y$ is an element of set $B$.
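The operations defined above map directly onto Python's built-in sets; the sketch below uses two arbitrary example sets:

```python
from itertools import chain, combinations

A = {1, 2, 3}
B = {3, 4}

print(A | B)                     # union: {1, 2, 3, 4}
print(A & B)                     # intersection: {3}
print(A - B)                     # difference A \ B: {1, 2}
print(A ^ B)                     # symmetric difference: {1, 2, 4}

# Complement of {3} relative to B (here {3} is a subset of B):
print(B - {3})                   # {4}

# Power set of A (as a set of frozensets, including {} and A itself):
power_set = {frozenset(s) for s in chain.from_iterable(
    combinations(A, r) for r in range(len(A) + 1))}
print(len(power_set))            # 2**3 = 8

# Cartesian product A x B as a set of ordered pairs:
print({(x, y) for x in A for y in B})
```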
If $A$ and $B$ are two non-empty sets, then each subset $\rho$ of $A\times B$ is called a binary relation on a set $A\times B$.
If $(a,b)\in \rho$, we say that $a$ and $b$ are in relation, and are denoted by $a\rho b$.
A function $f: X\longmapsto Y$ is a relation $f \subseteq X \times Y$ that relates each element $x \in X$ to exactly one $y \in Y$. If $(x,y)\in f$, we write $f(x)=y$. An element $x$ is called an original (preimage), and $y$ an image.
The set $X$ is called a domain, and the set $Y$ codomain of function $f$.
Three special types of functions are of a particular significance in mathematics: injections, surjection and bijection.
A function $f:X\longmapsto Y$ is injective, or a "1-1" function, if and only if for each element $y\in Y$ there is at most one element $x\in X$ for which $f(x)=y$ holds; equivalently, $f(x_1)=f(x_2)$ implies $x_1=x_2$.
A function $f:X\longmapsto Y$ is surjective or "onto" function if and only if for each element $y\in Y$ there is an element $x\in X$ for which $f(x)=y$ is true.
A function $f:X\longmapsto Y$ is bijective if it is both injective and surjective. | CommonCrawl |
We rigorously show that a class of systems of partial differential equations (PDEs) modeling wave bifurcations supports stationary equivariant bifurcation dynamics through deriving its full dynamics on the center manifold(s). This class of systems is related to the theory of hyperbolic conservation laws and supplies a new class of PDE examples for stationary $O(2)$-bifurcation. A direct consequence of our result is that the oscillations of the dynamics are not due to rotation waves though the system exhibits Euclidean symmetries. The main difficulties of carrying out the program are: 1) the system under study contains multiple bifurcation parameters and we do not know a priori how they come into play in the bifurcation dynamics. 2) the representation of the linear operator on the center space is a $2 \times 2$ zero matrix, which makes the characteristic condition in the well-known normal form theorem trivial. We overcome the first difficulty by using a projection method. We managed to overcome the second subtle difficulty by using a conjugate pair coordinate for the center space and applying duality and projection arguments. Due to the specific complex pair parametrization, we could naturally obtain a form of the center manifold reduction function, which makes the study of the current dynamics on the center manifold possible. The symmetry of the system plays an essential role in excluding the possibility of bifurcating rotation waves. | CommonCrawl |
Dementyeva I. S., Kuznetsov A. P., Savin A. V., Sedova Y. V.
A model of three linearly coupled logistic maps is examined. The structure of the parameter plane (coupling value—period-doubling parameter) is discussed. We select the configuration of coupling and parameters so that regimes of three-frequency quasiperiodicity become possible. We also consider bifurcations associated with such states.
Semenova N. I., Anishchenko V. S.
In the present work we analyze the statistics of a set that is obtained by calculating a stroboscopic section of phase trajectories in a harmonically driven van der Pol oscillator. It is shown that this set is similar to a linear shift on a circle with an irrational rotation number, which is defined as the detuning between the external and natural frequencies. The dependence of minimal return times on the size ε of the return interval is studied experimentally for the golden ratio. Furthermore, it is also found that in this case, the value of the Afraimovich–Pesin dimension is $\alpha_c = 1$.
For implicit singularly perturbed autonomous systems of second-order ordinary differential equations, some sufficient conditions are found for the existence of periodic relaxation solutions (self-oscillations), determined by means of an auxiliary dynamical system that implements a sliding mode. It is shown that the periodic motions defined in this way have the typical properties of relaxation self-oscillations of autonomous systems of ordinary differential equations with a small parameter at the highest derivative.
Aristov S. N., Prosviryakov E. Y.
We have obtained a solution of the problem within the exact solutions of the Navier–Stokes equations which describes the flow of a viscous incompressible fluid caused by spatially inhomogeneous wind stresses.
Berezovoj V. P., Tur A. V., Yanovsky V. V.
A simple model of the motion of thin tubes under the influence of fluid flow is proposed. A nonlinear equation for a string with flow is obtained. The possibility of tube vibrations under the influence of fluid flow is demonstrated, and a criterion for linear instability at a constant flow rate is found. When the linear instability conditions are violated, the possibility of oscillations is detected in the presence of a small periodic oscillation of the flow.
Demin V. A., Kostarev K. G., Mizev A. I., Mosheva E. A., Popov E. A.
Direct 3D numerical modeling of the displacement of a light liquid by a heavy one in a thin isothermal horizontal layer of finite length has been carried out. The initial non-equilibrium step distribution of liquid density generates a convective process in this inhomogeneous system. The miscibility of the liquids is taken into account in the calculation. The conditions for the excitation of convection in regions with unstable stratification near the interface between counter-propagating fluxes of liquid have been analyzed. The calculated velocities of the concentration fronts as functions of the density difference are obtained in the presence and in the absence of secondary spiral rollers in the flow. The evolution of the secondary convective structures has been simulated in detail at various stages of the process. The results of the numerical modeling confirm previous experimental data.
We study particle equilibria with respect to the axes of precession and of dynamical symmetry of a rigid body, under the assumption that the body's gravitational field is composed of the gravitational fields of two conjugate complex masses separated by an imaginary distance. We establish that there are no more than two such equilibria in the plane passing through the body's mass center orthogonally to the precession axis. Using the terminology of the Generalized Restricted Circular Problem of Three Bodies, we call these equilibria the Triangular Libration Points (TLPs). We find the TLPs' coordinates analytically and trace their evolution as the system parameters change. We also prove that the TLPs are unstable.
Vershilov A. V., Grigoryev Y. A., Tsiganov A. V.
We discuss an application of the Poisson brackets deformation theory to the construction of the integrable perturbations of the given integrable systems. The main examples are the known integrable perturbations of the Kowalevski top for which we get new bi-Hamiltonian structures in the framework of the deformation theory. | CommonCrawl |
• Its objects are concrete categories, i.e., categories equipped with a faithful functor to $Set$.
• A 1-morphism between $(C_1,U_1)$ and $(C_2,U_2)$ consist of a functor $F:C_1\to C_2$ and a natural transformation $z:U_1\Rightarrow U_2\circ F$.
• Its 2-morphisms are the obvious thing.
Question: Is there a name for that notion of functor between concrete categories?
Concrete functor is established in the literature for the related notion where the natural transformation is an isomorphism (see e.g. Porst 1996 Concrete Categories Are Concretely Equivalent if…) — i.e. the sub-2-category of the slice 2-category of CAT over Set on faithful functors.
Your 2-category is similarly the sub-2-category on faithful functors of the colax slice 2-category of CAT over Set. So it seems very natural to call your notion colax concrete functors, though as far as I can find this term hasn't been used before. A lax concrete functor would be the same thing but with the transformation in the other direction.
| CommonCrawl |
I have been able to find a lot of information on the category of contexts -- for example, the page on syntactic categories at the nLab is a good starting point. However, when I try to find similar information on the category of judgements, I find a whole lot less. My guess would be that I am simply not looking for the right term.
To be more specific, I am looking for a reference which defines the category of judgements with $\Gamma \vdash t:T$, i.e. term $t$ has type $T$ in context $\Gamma$ as objects, and ???? as arrows (i.e. that is one of the things I am looking for). I am guessing that the morphisms are likely the same as in the category of contexts, namely the substitutions that respect the underlying type theory.
Edit: on top of Andrej's answer, and Paul's book there is also relevant work by Garner such as the paper Two dimensional models of type theory, and the slides Two dimensional locally cartesian closed categories which are quite relevant.
As far as I understand, Seely's work (see links in Andrej's answer) uses explicit reduction paths (based on explicit generators such as $\beta$ reduction) as 2-cells, while the more recent work uses abstract identity types for the same idea. If I understand well, these are essentially the same, just that Seely's work gave explicit generators for the 2-cells, while in homotopy type theory one allows generalizations to higher dimensions, and the simplest way to do this is to let the inhabitants be implicit.
Surprisingly, no one mentioned that the category of judgements is most easily seen as the slice category of the category of contexts over a single variable -- as explained over at the n-lab.
One way to set up a category is to use contexts as objects, and declare that a morphism from the context $\Gamma = x_1 : A_1, \ldots, x_m : A_m$ to the context $\Delta = y_1 : B_1, \ldots, y_n : B_n$ is an $n$-tuple $(t_1, \ldots, t_n)$ where $$\Gamma \vdash t_i : B_i.$$ Composition is given by substitution. Then a judgment of the form $\Gamma \vdash u : B$ is just a special morphism whose codomain is the context $y_1 : B$. This sort of thing can be read about in Paul's book.
If you really insist on having judgments as objects, rather than morphisms, you could impose a further 2-categorical structure. Say that a 2-cell from $\Gamma \vdash u : B$ to $\Gamma \vdash v : B$ is an equality $\Gamma \vdash u = v : B$. What kind of equality you use may depends on what your are doing. This way you will get a groupoid-like structure. You could also use one-way reductions as 2-cells, for example $\beta$-reductions, in which case your 2-category will look like a poset enriched category. See the paper by R. Seely, "Modelling Computations: A 2-categorical framework", LICS 1987. I think Neil Ghani's work is also relevant. See his PhD thesis, but he will be able to provide better references if you contact him.
Presumably you discovered my book Practical Foundations of Mathematics and wrote your very flattering private email to me after asking this question and then forgot to update it.
Coincidentally, Andrej Bauer commented on substitution as pullback on his blog and I posted a brief description of my construction there.
| CommonCrawl |
Ramasarma, T and Gullapalli, S and Shivaswamy, V and Kurup, CKR (1990) Polyvanadate acts at the level of plasma membranes through $\alpha$-adrenergic receptor and affects cellular calcium distribution and some oxidation activities. In: Journal of Biosciences, 15 (3). pp. 205-210.
The activities of calcium-stimulated respiration, calcium uptake, $\alpha$-glycerophosphate dehydrogenase and rates of oxidation in state 3 and of H$_2$O$_2$ generation were found to increase, and that of pyruvate dehydrogenase to decrease, in mitochondria isolated from livers of rats administered intraperitoneally or perfused with polyvanadate. Phenoxybenzamine, an antagonist of the $\alpha$-adrenergic receptor, effectively prevented these changes. It was also found that perfusion of the liver with polyvanadate reproduced one of the best characterized events of $\alpha$-adrenergic activation, the stimulation of protein kinase C in the plasma membrane accompanied by its decrease in the cytosol. These experiments indicate for the first time the $\alpha$-adrenergic mimetic action of polyvanadate.
Cell membrane/US;Receptor;Adrenergic;Alpha/BI;Mitochondria/US;Liver;Protein Kinase C/SE;Catecholamines;Cytosol;Calcium/BI;Phenoxybenzamine;Metabolism Signal Transudation/DE; Animals;Rats. | CommonCrawl |
Abstract: We propose an efficient grassmannian formalism for solution of bi-linear finite-difference Hirota equation (T-system) on T-shaped lattices related to the space of highest weight representations of $gl(K_1,K_2|M)$ superalgebra. The formalism is inspired by the quantum fusion procedure known from the integrable spin chains and is based on exterior forms of Baxter-like Q-functions. We find a few new interesting relations among the exterior forms of Q-functions and reproduce, using our new formalism, the Wronskian determinant solutions of Hirota equations known in the literature. Then we generalize this construction to the twisted Q-functions and demonstrate the subtleties of untwisting procedure on the examples of rational quantum spin chains with twisted boundary conditions. Using these observations, we generalize the recently discovered, in our paper with N. Gromov, AdS/CFT Quantum Spectral Curve for exact planar spectrum of AdS/CFT duality to the case of arbitrary Cartan twisting of AdS$_5\times$S$^5$ string sigma model. Finally, we successfully probe this formalism by reproducing the energy of gamma-twisted BMN vacuum at single-wrapping orders of weak coupling expansion. | CommonCrawl |
It is well-known that a single linear classifier cannot represent the XOR function $x \oplus y$, depicted below: there is no way to draw a single line that can separate the red and magenta (square) points from the blue and cyan (circle) points.
A two-layer neural network, however, can separate this pattern easily. Below, we give you a simple dataset in two dimensions that represents a noisy version of the XOR pattern. Your task is to hand-pick weights for a very simple two-layer network, such that it can separate the red/magenta points from the blue/cyan points.
This can be modelled by the following graph.
The question becomes: how do you choose the weights $W, U, b_1, b_2$ to create this sort of separation. Well the way I thought about it was this (not really sure if it's super correct, but whatever).
The goal of the top level of the neural network is to literally draw a line through the data and say: "anything above this line is one class, anything below is another". That's what the sigmoid function is doing. The weights just allow you to configure how this line is drawn, and the number of variables say how many lines are drawn.
For example, in the hidden layer, $h$ has 2 dimensions and therefore allows us to draw two lines through the data in this layer. (This is maybe an oversimplified and slightly off view of it, but it helps me to think about it this way.) So now, in the first layer, we say $h_1$ will represent the red group of data and $h_2$ will represent the magenta group of data. How can we draw a line through the data to make this split? Well, for the red group $x_2 = .5-x_1$ represents a good split and for the magenta group $x_2 = 1.5 - x_1$ represents a good split. But we want anything below the first line (the red group) to give $h_1=1$, so we just reverse this equation. Now, we can simply map these equations to the weight parameters (note, the sigmoid function will simply remap anything on the opposite sides of the line to 0, 1).
Now, the data is split in such a way that we can actually just draw a line through it and remap the data such that we have our XOR function.
We can draw the line at $h_1+h_2-.5 = 0$ and thus we will have our data split if we let $q=1, r=1, \beta_2=-.5$.
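Here is a minimal numpy sketch of the hand-weighted network just described. The sharpness factor of 20 and the resulting bias values are my own choices; they simply encode the two lines $x_2 = 0.5 - x_1$ and $x_2 = 1.5 - x_1$ and the output threshold $h_1 + h_2 - 0.5$ from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hidden layer: h1 fires below x1 + x2 = 0.5, h2 fires above x1 + x2 = 1.5.
W  = 20.0 * np.array([[-1.0, -1.0],
                      [ 1.0,  1.0]])
b1 = 20.0 * np.array([0.5, -1.5])

# Output layer: fires when h1 + h2 - 0.5 > 0.
U  = np.array([20.0, 20.0])
b2 = -10.0

def predict(x):
    h = sigmoid(W @ x + b1)
    return sigmoid(U @ h + b2)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(float(predict(np.array(x, dtype=float))), 3))
# (0,0) and (1,1) come out near 1 (red/magenta), (0,1) and (1,0) near 0.
```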
And there you have it, we made an XOR function out of a neural network and manually set the weights. I hope this gives some insight into how a NN works in finding the weights it uses. Really messy code for this can be found on my github, but most of it was taken from Socher's course. | CommonCrawl |
Abstract: Although $SO(10)$ Supersymmetric (SUSY) Grand Unification Theories (GUTs) are very attractive for neutrino mass and mixing, it is often quite difficult to achieve successful leptogenesis from the lightest right-handed neutrino $N_1$ due to the strong relations between neutrino and up-type quark Yukawa couplings. We show that in a realistic model these constraints are relaxed, making $N_1$ leptogenesis viable. To illustrate this, we calculate the baryon asymmetry of the Universe $ Y_B $ from flavoured $ N_1 $ leptogenesis in a recently proposed $ \Delta(27) \times SO(10) $ SUSY GUT. The flavoured Boltzmann equations are solved numerically, and comparison with the observed $ Y_B $ places constraints on the allowed values of right-handed neutrino masses and neutrino Yukawa couplings. The flavoured $SO(10)$ SUSY GUT is not only fairly complete and predictive in the lepton sector, but can also explain the BAU through leptogenesis with natural values in the lepton sector albeit with some tuning in the quark sector. | CommonCrawl |
The mentors have prepared a description of the talks, with extensive references.
The mentors are also writing a book entitled Elements of $\infty$-Category Theory. A current draft is available here.
Notes from the 2018 workshop are available here. Additionally, the mentors have written up takeaways from each day.
Here is a list of exercises. | CommonCrawl |
Authors: Gelfand I., Gindikin S., Graev M.
The miracle of integral geometry is that it is often possible to recover a function on a manifold just from the knowledge of its integrals over certain submanifolds. The founding example is the Radon transform, introduced at the beginning of the 20th century. Since then, many other transforms were found, and the general theory was developed. Moreover, many important practical applications were discovered. The best known, but by no means the only one, being to medical tomography.
This book is a general introduction to integral geometry, the first from this point of view for almost four decades. The authors, all leading experts in the field, represent one of the most influential schools in integral geometry. The book presents in detail basic examples of integral geometry problems, such as the Radon transform on the plane and in space, the John transform, the Minkowski-Funk transform, integral geometry on the hyperbolic plane and in the hyperbolic space, the horospherical transform and its relation to representations of $SL(2,\mathbb C)$, integral geometry on quadrics, etc. The study of these examples allows the authors to explain important general topics of integral geometry, such as the Cavalieri conditions, local and nonlocal inversion formulas, and overdetermined problems in integral geometry. Many of the results in the book were obtained by the authors in the course of their career-long work in integral geometry. | CommonCrawl |
Is the sum of two independent geometric random variables with the same success probability parameter a geometric random variable? What is its distribution?
I am not sure how to turn this into a distribution. It looks like binomial with n = z and x = 2, but I don't know how to get the coefficient from this.
If $X$ and $Y$ are geometric random variables, they are each a count of Bernoulli trials until the first success.
$Z$ then counts Bernoulli trials until the second success.
PS: this is called a negative binomial distribution.
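For completeness, the convolution the question attempts works out as follows (with both counts starting at 1, so $Z = X + Y \ge 2$): $$P(Z=z)=\sum_{x=1}^{z-1}P(X=x)\,P(Y=z-x)=\sum_{x=1}^{z-1}p(1-p)^{x-1}\,p(1-p)^{z-x-1}=(z-1)\,p^{2}(1-p)^{z-2},$$ which is exactly the negative binomial distribution for the number of trials needed to obtain $r=2$ successes.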
You've made a mistake in letting the bounds for $x$ go from 0 to $\infty$. If $x$ were larger than $z$, then in $P(Y=z-x)$ we would have a negative number for $z-x$, and resulting probability should be zero. Also, it appears that in the definition of geometric random variable that you're using, 1 is the smallest possible value, so $x$ should not start at 0.
| CommonCrawl |
Nonequilibrium thermodynamic model of diffusionless transformation of an austenite in iron and alloys based on it is developed, taking into account internal stresses in the system. Onsager motion equations for a model thermodynamic system describing a diffusionless transformation and kinetic equations for changing deformations and growth rates of the $\alpha$-phase are found. A scheme of diffusionless transformations of austenite is constructed, including the normal and martensitic transformations as limiting cases.
Key words: nonequilibrium thermodynamics, diffusionless transformation, equations of motion, iron-based alloys, austenite.
| CommonCrawl |
Abstract: An operator $A$ mapping a separable reflexive Banach space $X$ into the dual space $X'$ is called increasing if $\|Au\|\to \infty$ as $\|u\|\to \infty$. Necessary and sufficient conditions for the superposition operators to be increasing are obtained. The relationship between the increasing and coercive properties of monotone partial differential operators is studied. Additional conditions are imposed that imply the existence of a solution for the equation $Au=f$ with an increasing operator $A$. | CommonCrawl |
Abstract: The origin of ultra-wide massive binaries wider than >$10^3$ astronomical units (AUs) and their properties are not well characterized nor understood. Here we use the second Gaia data release to search for wide astrometric companions (orbital separations $10^3-$few$\times 10^5$ a.u.) to main-sequence Galactic O5-B5 stars which share similar parallax and proper motion with the primaries. We find an ultra-wide multiplicity fraction of $8\pm1$ per cent, to our completeness limit (up to $\approx 17$ mag; down to G-stars at distances of 0.5-2 kpc). Assuming a Kroupa mass function for the secondaries, the overall ultra-wide multiplicity fraction down to M-dwarfs is consistent with $48\pm6$ per cent. We use these data as a verification sample to test the existence of ultra-wide binaries among neutron stars and black holes. In particular, if a compact object is formed in an ultra-wide binary and receives a very-little/no natal kick, such a binary should not be disrupted but rather survive as a common proper motion pair. We, therefore, use Gaia data to search for ultra-wide astrometric companions to pulsars (normal or millisecond ones) and X-ray binaries. We find no reliable pairs with angular separation less than 25 arcsec. This is to be compared with the $5\pm1\%$ found for our OB-binaries verification sample located at similar distances. Therefore, we confirm the general picture that most compact objects receive strong natal kicks. We note that outside the most reliable angular separation interval we found two potential ultra-wide binaries, including a candidate companion to the slowest pulsar in our sample with projected separation of $2.7\times 10^5$ a.u. and a companion to a high-mass X-ray binary with projected separation of $3.78\times 10^5$ a.u. In both cases, however, the detection is marginal given the false positive probability of a few percents. | CommonCrawl |
such puzzles, where now the responder has to solve $k$ puzzles chosen independently in parallel.
A direct product is a function of the form $g(x_1,\ldots,x_k)=(g_1(x_1),\ldots,g_k(x_k))$. We show that the direct product property is locally testable with $2$ queries, that is, a canonical two-query test distinguishes between direct products and functions that are from direct products with constant probability.
and the two subsets intersect in about $(1-\delta)n$ elements.
The behavior of games repeated in parallel, when played with quantumly entangled players, has received much attention in recent years. Quantum analogues of Raz's classical parallel repetition theorem have been proved for many special classes of games. However, for general entangled games no parallel repetition theorem was known.
Non-signaling games are an important object of study in the theory of computation, for their role both in quantum information and in (classical) cryptography. In this work, we study the behavior of these games under parallel repetition.
Parallel repetition is known to reduce the soundness error of some special cases of interactive arguments: three-message protocols and public-coin protocols. However, it does not do so in the general case. | CommonCrawl |
I know I can color a 3d plot using a function and a color map. What I would like to do is to color a plot using a function that returns an RGB color directly.
All you need to do is create your own colormap. Strictly speaking a colormap is a function $cm:[0,1]\longrightarrow [0,1]^4 $, where $x\mapsto (r(x),g(x),b(x),\alpha(x))$ (here $\alpha$ indicates transparency). In the end, the point $(s,t,F(s,t))$ will be coloured $cm(cf(s,t))$. | CommonCrawl |
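A sketch of both routes described in the answer above, assuming a matplotlib setting (the thread does not name the library, so treat the API choice as an assumption): route 1 builds a custom colormap, route 2 hands the plot an RGBA value per grid point directly.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def F(s, t):                                    # surface to plot
    return np.sin(s) * np.cos(t)

s, t = np.meshgrid(np.linspace(0, np.pi, 50), np.linspace(0, np.pi, 50))
z = F(s, t)

# Route 1: build your own colormap cm:[0,1] -> RGBA by sampling any
# colour-valued function at 256 points; the surface is coloured by cm(normalised z).
cm = ListedColormap([(x, 0.0, 1.0 - x, 1.0) for x in np.linspace(0, 1, 256)])

# Route 2: skip the colormap and give an RGBA value per grid point directly.
def rgba(s, t):
    return np.stack([0.5 + 0.5 * np.sin(s),
                     0.5 + 0.5 * np.cos(t),
                     np.full_like(s, 0.3),
                     np.ones_like(s)], axis=-1)

fig = plt.figure(figsize=(8, 4))
ax1 = fig.add_subplot(1, 2, 1, projection="3d")
ax1.plot_surface(s, t, z, cmap=cm)
ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(s, t, z, facecolors=rgba(s, t))
plt.show()
```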
Abstract: As we showed in a preceding arXiv:gr-qc paper, Einstein's equations, conveniently written, provide the most orthodox and simple description of cosmological models with a time-dependent speed of light $c$. We derive here the concomitant time dependence of the electric permittivity $\epsilon$, the magnetic permeability $\mu$, the unit of charge $e$, Planck's constant $h$, and the masses of elementary particles $m$, under the assumption that the fine structure constant $\alpha$ is constant. As a consequence of these concomitant time dependences, the ratios $e/m$, as well as the Compton wavelength $\lambda_c$ and the classical radius $r_0$ of the particles, remain constant. | CommonCrawl |
Is the observable universe homeomorphic to $B^3$?
Or is it even sensible to talk about space (rather than spacetime) as a 3 manifold?
I think that "observable universe" is not defined precisely enough to make such statements about it.
The spacetime events that we can see are the events on our past light cone. That light cone intersects the last-scattering surface (about 400,000 years after the big bang) in an approximate sphere. By convention the light cone is cut off there (because we can't see through the opaque plasma before last scattering—though future neutrino and gravitational-wave astronomy might change that). The matter passing through that sphere (which is also the boundary of the light cone) will, by the continuity equation, pass through the light cone at some point, while matter outside can't without exceeding the speed of light. The matter that passes through the sphere is called the observable universe.
In a perfectly uniform zero-pressure universe described exactly by an FLRW metric, and in which the last-scattering time is precisely defined, the sphere will be exactly a sphere, and the locus of observable matter will be exactly a cylinder ($\mathbb B^3 \times \mathbb R$) in FLRW coordinates. The metric breaks spacetime symmetry, giving a natural separation into space and cosmological time, and a natural correspondence between spatial points at different cosmological times. You could think of this universe as a 3D space with a geometry that's time-invariant up to an overall conformal scale factor (the inflating-balloon analogy, sort of). The observable universe is topologically and even metrically a ball in that space.
In reality, the universe went from almost opaque to almost transparent over some nonzero time, so there is an inherent ambiguity in the cutoff of the past light cone and the boundary of the observable universe. Also, the matter making up the observable universe does not stay in place relative to the FLRW "space". In the case of identical quantum particles, you can't trace the later motion of the matter that passed through the sphere even in principle. And since the geometry of spacetime is determined by the matter distribution, the FLRW metric is not exactly correct and there is no precisely defined FLRW "space".
The observable universe is still a "fuzzy ball", but I don't think "fuzzy" can be given a precise mathematical definition.
| CommonCrawl |
I'm currently working on the excellent Machine Learning course by Andrew Ng available on coursera. I've been working through the exercises using R, not matlab or octave as is required in the course. This is the first programming exercise - implementing linear regression using the gradient descent algorithm rather than the normal equation method.
Rather than calculating the optimal solution for the linear regression with a single algorithm, in this exercise we use gradient descent to iteratively find a solution. To get the concept behind gradient descent, I start by implementing gradient descent for a function which takes just one parameter (rather than two - like linear regression).
We take the cost function $J(\theta_1)$ whose derivative, obtained using calculus, is $J'(\theta_1)=2.4(\theta_1-2)$ (see Matt's blog).
At each iteration the parameter is updated as $\theta_1 \leftarrow \theta_1 - \alpha\, J'(\theta_1)$, where $\alpha$ is the learning rate governing the size of the step taken with each iteration.
Here I define a function to plot the results of gradient descent graphically so we can get a sense of what is happening.
Below is the actual implementation of gradient descent.
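The R listing that originally appeared here did not survive extraction. As a stand-in, here is a minimal sketch of the same loop (in Python rather than the post's R; the derivative $2.4(x-2)$ comes from the post, while the starting point, tolerance and iteration cap are assumptions):

```python
def grad_descent(x0, alpha, tol=1e-6, max_iter=100):
    """Minimise the cost by stepping against its derivative 2.4*(x - 2)."""
    x = x0
    history = [x]
    for _ in range(max_iter):
        step = alpha * 2.4 * (x - 2)
        x = x - step
        history.append(x)
        if abs(step) < tol:          # converged
            break
    return x, history

x_min, path = grad_descent(x0=0.1, alpha=0.1)
print(x_min, len(path))              # approaches 2, the minimiser of the cost
```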
Pretty simple! Now I use the plotting function to produce plots, and populate these with points using the gradient descent algorithm.
Another way to look at the rate of convergence is to plot the number of iterations against the output of $f(x)$. Vertical lines show when convergence occurs. When $\alpha$ is set very low, it takes much longer than necessary (although it does converge). When $\alpha$ is too high, convergence doesn't occur at all within a hundred iterations.
Gradient descent in R was published on March 29, 2015 and last modified on April 07, 2015 . | CommonCrawl |
Abstract: We have constructed a high-$p_T$ trigger for the HERA-B experiment at DESY. The HERA-B experiment produces B mesons by inserting wire targets into the halo of the proton beam circulating in HERA. The high-\pt trigger records events that contain tracks that have high transverse momentum with respect to the beam. Such a trigger is efficient for recording $B \to \pi^+\pi^-$, $B \to K^-\pi^+$, $B_s \to K^+ K^-$, $B_s \to D_s^-\pi^+$, and other topical hadronic B decays. These decays provide sensitivity to the internal angles $\alpha$ and $\gamma$ of the CKM unitarity triangle, and they also can be used to measure or constrain the $B_s$-$\bar B_s$ mixing parameter $x_s$. | CommonCrawl |
For @GoalCharts we have added 0.75 xG for every penalty. This comparison is not a scientific study; too few data were available for a thorough statistical analysis. We aim to visualize the xG data in such a way that the interpretation can be done by each reader for themselves.
The aim is to compare the different numbers (xG) of the above-mentioned sites, as well as to compare how the teams at #WM2018 have performed in terms of goals and xGoals, as shown in the graphic on the right. It shows the scored goals and corresponding xGoals for each site as red dots and bars, respectively.
The teams are ordered by the mean of xGoals, printed as small numbers next to the team hashtags. The number of integrated goals (122) over 48 games can be compared to the integrated xGoals for the three sites. Immediately, it is observed, that @Caley_Graphics and @11tegen11 are very similar to actual goals and @StrataBet lies noticeably above.
Therefore, the question is: how well do the data describe the real outcome of the game? And might it be possible to decide whether the outcome is more luck or skill?
Let us first see, how well do the data describe the outcome of the game. This will be done by a correlation analysis, with the reasonable assumption that the xGoal models represent linear models. The outcome of the game is given by the goal difference, which we plot against the corresponding xGoal difference.
The graphs for the linear fit models $$ Gd = G_0 + \alpha\cdot xGd $$ are shown as straight lines and the corresponding values for the interception $G_0$ and gradient $\alpha$, together with the $R^2$ values are given in the legend. Without going into much detail, all fits are significant, the error for the gradient $\alpha$ are in the same range (0.16, 0.17, 0.14). The interception $G_0$ are consistent with zero for all fits.
The fits for @11tegen11 and @Caley_graphics are very similar, within statistical error more or less identical, and differ from @StrataBet. This was already expected from the different numbers of integrated xGoals. The former xGoal models probably aim to describe the correct number of goals resulting from shots. The latter one describes more chances in a game, which do not necessarily result in a shot. Therefore, the number of xGoals is higher.
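As an aside, a fit of the form above takes only a few lines; the numbers below are hypothetical stand-ins, not the actual per-game World Cup data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-game (xG difference, goal difference) pairs.
xgd = np.array([1.2, -0.4, 0.8, 2.1, -1.0, 0.3, 1.5, -0.7])
gd  = np.array([1,    0,   1,   3,   -1,   0,   2,   -1  ])

fit = stats.linregress(xgd, gd)
print(f"Gd = {fit.intercept:.2f} + {fit.slope:.2f} * xGd, "
      f"R^2 = {fit.rvalue**2:.2f}, slope stderr = {fit.stderr:.2f}")
```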
To be more specific in assessing the best model for a game outcome, we need some more data. In summary so far, it is quite reasonable to compare the goals to the xGoals. For this purpose we choose, in the following team xGoal comparison, the mean of the three sites. The conclusions are not that different if we choose the so far best xGoal model of @Goal_charts. Unfortunately, the number of games for each team is much too low to draw a serious conclusion concerning team efficiency (goals vs xGoals). Anyway, let us see how well they have done!
To see how well the teams performed in scoring goals, we subtract xGoals from goals and plot them in descending order. On top of the list is the most efficient team ranked, on bottom the worst efficient one.
Yes, indeed, due to the small numbers you can as well say, the most lucky or most unlucky team.
Without any doubt, the most remarkable ranking is the last place of Germany with a large gap to the second last team Island. At least for Germany, one might tend to say, that it is not only missing luck!
Next we look into the defensive strength of the teams and see how much xGoals they have admitted. On top of the list is Uruguay as the only team without any goal against. Eight out of the first ten teams have qualified for R16, and eight out of the last ten teams have left.
The defensive efficiency is measured as the difference of goals and xGoals against the teams. The ranking in ascending order is shown in the graphic on the right. Interestingly, some big teams (BRA, BEL, GER, ENG) are ranked in the mid and many small teams (KOR, DEN, PER, IRN, SWE) are ranked in the upper part.
In case of the big loser of the tournament Germany, one might say that the failing was mainly caused by the offensive part of the team, whatever this in particular means.
The chart on the right shows for all teams, which are qualified for quarter final, the xGoal differences vs goal differences. Due to lack of data from @GoalCharts, we use only those from @11tegen11 and @Caley_graphics, which we furthermore average. Every national flag marks a game for the corresponding team, therefore every flag appears exactly four times. Note, one flag for Belgium is more or less completely below the flag of Sweden, which means for this game the xG-differences are more or less the same.
Therefore, the upper right green area shows those games with positive xGoal difference and goal difference, which we call deserved victories. Only Belgium has deservedly won all its games, and Russia is the luckiest team, as is immediately observed in the chart.
After the tournament we will try to give a conclusion. | CommonCrawl |
Abstract: We study the Dirichlet mixed problem for a class parabolic equation with double non-power nonlinearities in cylindrical domain $D=(t>0)\times\Omega$. By the Galerkin approximations method suggested by Mukminov F. Kh. for a parabolic equation with double nonlinearities we prove the existence of strong solutions in Sobolev–Orlicz space. The maximum principle as well as upper and lower estimates characterizing powerlike decay of solution as $t\to\infty$ in bounded and unbounded domains $\Omega\subset R_n$ are established.
Keywords: parabolic equation, $N$-functions, existence of solution, estimate of decay rate of solution, Sobolev–Orlicz spaces. | CommonCrawl |
Deduction is a syntactic understanding of the notion of inference. In earlier modules we learned about the semantic notion of entailment, symbolized as $\vDash$, where we reason about arguments in terms of the propositions' truth values. Here, we think about inferences in terms of rules of inference that are grounded in the relationship between the symbols themselves.
Derivability: The symbol $\vdash$ represents the notion of syntactic derivability. For any set of SL or PL statements $\Gamma$ and some statement $\phi$, '$\Gamma \vdash \phi$' means 'from the set $\Gamma$, the sentence $\phi$ is derivable using permissible rules of inference.' We will sometimes refer to the process of deriving something as a derivation or a proof.
Assumption: The statements above the horizontal lines are assumptions - they are accessible for drawing inferences. Our job as the deducer is to find out what deductively follows from these given assumptions. In this derivation, we were given certain assumptions (premises); not all derivations begin with given assumptions, however. Eventually we will learn to prove statements without them; we'd have to make them our own.
Justification: The numbers and symbols to the right of each line are there to justify how that particular line was derived. For instance, we justify the first two lines by stating that they were given. Lines 3 to 7, however, were the product of using rules of inference on the lines indicated.
Formulas: We will also start referring to the SL or PL sentences as formulas, as opposed to propositions or sentences. The reason for this is to emphasize that in a syntactic context we are treating the symbols as nothing but a bunch of strings. Putting it differently, when we say formulas we refer to the symbols themselves without caring about their meaning at all.
Formalism: For the sake of convenience, we will express rules of inference using this notation: $(\Gamma, \theta)$, an ordered pair that says: from the finite set of formulas $\Gamma$, we can infer the formula $\theta$. This seems pretty abstract, but you will see that it is really nothing but a straightforward procedure of processing bits of strings.
Conjunction Elimination ($\wedge E$) says that whenever you have a formula of the form $\alpha \wedge \beta$, you are entitled to infer either $\alpha$ or $\beta$. This rule is supposed to mirror the truth-table definition of conjunction: $\alpha \wedge \beta$ is true when both of the conjuncts are true.
Disjunction Introduction is similar to $\wedge I$, except you only need one formula to start with. The idea is that you are allowed to introduce a disjunction as long as one of the disjuncts is accessible.
Conditional Elimination is also called by its Latin name Modus Ponens. It states that whenever you have a conditional statement along with its antecedent, you are allowed to infer the consequent as well. | CommonCrawl |
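To make the $(\Gamma, \theta)$ formalism above concrete, here is a tiny Python sketch; the tuple encoding of formulas and the rule names are illustrative choices, not part of the notes:

```python
# SL formulas encoded as tuples, e.g. ("and", "P", "Q") for P ∧ Q.

def conj_elim(formula):
    """Conjunction Elimination: from (a ∧ b) infer both conjuncts."""
    if formula[0] == "and":
        return [formula[1], formula[2]]
    return []

def cond_elim(conditional, antecedent):
    """Conditional Elimination (Modus Ponens): from (a → b) and a, infer b."""
    if conditional[0] == "implies" and conditional[1] == antecedent:
        return [conditional[2]]
    return []

# Example derivation: from {P ∧ Q, P → R} derive R.
P, Q, R = "P", "Q", "R"
assumptions = [("and", P, Q), ("implies", P, R)]
derived = list(assumptions)
derived += conj_elim(derived[0])        # P, Q by ∧E
derived += cond_elim(derived[1], P)     # R by →E
print(derived)
```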
Results for "Natalia M. Litchinitser"
A symmetric Nörlund sum with application to inequalities (Mar 09 2012). Properties of an $\alpha,\beta$-symmetric Nörlund sum are studied. Inspired by the work of Agarwal et al., $\alpha,\beta$-symmetric quantum versions of the Hölder, Cauchy-Schwarz and Minkowski inequalities are obtained.
Quantitative generalizations of Niederreiter's result concerning continuants (Sep 08 2011). We give a certain generalization of Niederreiter's result concerning the famous Zaremba conjecture on the existence of rational numbers with bounded partial quotients. | CommonCrawl |
I'll give it a try too, although this room does not seem to be frequented too much.
I do not really see why Problem proving that $0<a<b$ implies $1/b<1/a$ has 3 votes to close as a NARQ.
I agree that it is a homework question and OP did not show any effort - is this the reason for closing votes?
@MartinSleziak Do you want to close and reopen that post? That has 4 close votes, so if I vote now, that would be kill it.
@Srivatsan I am reluctant to vote for closing, but I am definitely not going to reopen it.
I don't have strong opinion either way.
I've asked this in the main chatroom too: see here.
Do you have any ideas for what other purposes a room like Jury Duty could be useful? I mean things that are not important enough to be starred (pinned to the noticeboard, so to say), but which it would be good to have separately somewhere, so that they do not get lost between smalltalk and other stuff and where they can be easily noticed at least for a few hours.
Is how to get $dx dy=rdrd\theta$ duplicate of Explain $\iint \mathrm dx\mathrm dy = \iint r \mathrm d\alpha\mathrm dr$?
@Srivatsan I don't get your point in the comment to a question to which I answered, please brief me your apprehensions about my answer!
Matt raised the problem in the main chatroom whether if we close non-migrated question and leave the migrated one, it will be possible to accept an answer.
His message: Regarding this and this: can we close the latter, non-migrated version, rather than the first (with 4 votes at the moment)?
As a result of our discussion that followed, he flagged for moderators attention and asked to merge accounts or to attach the question to the account (if possible).
Hopefully that was the right thing to do.
The non-migrated question is closed already BTW. | CommonCrawl |
Furno, I. ; Labit, B. ; Podesta, M. ; Fasoli, A. ; Mueller, S. H. ; Poli, F. M. ; Ricci, P. ; Theiler, C.
The mechanism for blob generation in a toroidal magnetized plasma is investigated using time-resolved measurements of two-dimensional structures of electron density, temperature, and plasma potential. The blobs are observed to form from a radially elongated structure that is sheared off by the $E \times B$ flow. The structure is generated by an interchange wave that increases in amplitude and extends radially in response to a decrease of the radial pressure scale length. The dependence of the blob amplitude upon the pressure radial scale length is discussed. | CommonCrawl |
Most of the following material comes from the linked article.
Analysis of a protein's evolutionary history may seem irrelevant to protein structure prediction. Indeed, a protein's folded structure depends entirely on the laws of physics and is dictated by the protein's amino acid sequence and its environment.
At the most fine-grained level, an MSA is constructed for a given protein and the information in pairs of columns is used to predict which residues are in contact.
The fundamental assumption is that for residues that are in contact, the corresponding columns will be in some way more highly related to each other than for residues that are not.
From protein sequence to contact prediction. A typical workflow taking a protein's sequence, extracting sequence/amino acid properties, encoding the information, applying a learning algorithm and finally making contact pair predictions to be used for structure prediction.
Filtering removes those predicted contacts that are in some way physically unrealizable. The simplest and perhaps most effective method of filtering is contact occupancy.
There are many definitions of residue contact used in the literature. Some use the $C\alpha$ distance, that is, the distance between the $\alpha$ carbon atoms of the residue pair, whereas others prefer the $C\beta$ distance.
The most common minimum separation used to define a contact pair is 8Å. It is also usual to exclude residue pairs that are separated along the amino acid sequence by less than some fixed number of residues, since short-range contacts are less interesting and easier to predict than long-range ones.
For a given target protein, the prediction accuracy $A_N$ on $N$ predicted contacts is defined to be $$A_N = N_c/N$$ where $N_c$ is the number of the predicted contacts that are indeed contacts for a given minimum sequence separation. Typically $N$ is taken to be one of $L$, $L/2$, $L/5$, or $L/10$, where $L$ is the length of the sequence.
For most proteins, the actual number of contacts (using the 8Å definition) is in the range $L$ to $2L$. It has become relatively standard to report results on the best $L/2$ predictions with a maximum distance of 8Å between $C\beta$ atoms ($C\alpha$ for glycine), with a minimum sequence separation of 6.
The prediction coverage is defined to be $N_c/T_c$, where $T_c$ is the total number of contacts pairs for the protein. | CommonCrawl |
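As a quick illustration of the metrics just defined (my own sketch, not from the article; the toy index pairs are hypothetical), accuracy $A_N$ and coverage can be computed directly from sets of predicted and true contact pairs:

```python
def contact_metrics(predicted_pairs, true_pairs, min_separation=6):
    """Accuracy A_N = N_c / N and coverage N_c / T_c for residue contact prediction.
    Pairs closer than `min_separation` along the sequence are excluded, as in the text."""
    pred = {tuple(sorted(p)) for p in predicted_pairs if abs(p[0] - p[1]) >= min_separation}
    true = {tuple(sorted(p)) for p in true_pairs if abs(p[0] - p[1]) >= min_separation}
    n_correct = len(pred & true)                    # N_c
    accuracy = n_correct / len(pred) if pred else 0.0
    coverage = n_correct / len(true) if true else 0.0
    return accuracy, coverage

# Toy example with made-up residue index pairs:
print(contact_metrics([(1, 10), (2, 30), (5, 8)], [(1, 10), (5, 40)]))  # (0.5, 0.5)
```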
Four Color Theorem (4CT) states that every planar graph is four colorable. There are two proofs given by [Appel,Haken 1976] and [Robertson,Sanders,Seymour,Thomas 1997]. Both these proofs are computer-assisted and quite intimidating.
if $\alpha \in L_v$ then $f(\alpha) \in L_v$ for all $v \in V$, for all $\alpha \in C$.
Then there exists an $L$-coloring of the graph $G$.
If you know such conjectures implying 4CT, please list them one in each answer. I could not find a comprehensive list of such conjectures.
Coloring a hypergraph induced by a finite family of discs (not necessarily interior disjoint, but possibly with arbitrary overlaps) with at most four colors.
The proposition that every planar triangulation with more than three vertices is the union of two connected bipartite graphs, each with no isthmus.
A combinatorial problem about the three-dimensional vector cross product algebra.
The proposition that the grammar G is totally ambiguous.
For any positive integer $n$, there exists at least a signable path joining two permutations of $S_n$.
An algebraic equivalent of the four-color theorem is presented. The equivalent is the assertion of non-membership of a family of polynomials in a family of polynomial ideals over a particular finite field.
Smorodinsky S. On the chromatic number of geometric hypergraphs.
Mabry R. Bipartite Graphs and the Four-Color Theorem.
Kauffman L.H. Map coloring and the vector cross product.
Cooper B. J. et al. Toward a language theoretic proof of the four color theorem.
Eliahou S. et al. Signed permutations and the four color theorem.
Howard L. An Algebraic Reformulation of the Four Color Theorem.
Another mechanical verification of the 4-colour theorem has been done by George Gonthier at Microsoft Research Cambridge. The difference with his proof is that the entire theorem has been stated and mechanically verified using the Coq proof assistant, whereas the other proofs contain only the kernel calculation written in Assembly Language and C, and thus have a risk of being buggy. Gonthier's proof covers both the calculational aspects and the logical ones in just 60,000 lines of Coq.
Look at T. Saaty, Thirteen colorful variations on Guthrie's 4-color conjecture, American Math. Monthly, 79 (1972) 2-43 for many examples.
Also, in David Barnette's book Map Coloring, Polyhedra, and the Four-Color Problem, MAA, Dolciani Series, Volume 8, 1983, many examples are given. One particularly interesting result in Barnette's book is: if it is always possible to truncate vertices of a convex polyhedron so as to produce a 3-valent convex polyhedron in which the number of sides of each face is a multiple of three, this implies the truth of the four color conjecture.
Every planar graph is 4-colorable (The 4CT) iff there exists an absolute planar retract.
Dror Bar-Natan's paper "Lie Algebras and the Four Color Theorem" (Combinatorica 17-1 (1997) 43-52, last updated October 1999, arXiv:q-alg/9606016) contains an appealing statement about Lie algebras that is equivalent to the Four Color Theorem. The notions appearing in the statement also appear in the theory of finite-type invariants of knots (Vassiliev invariants) and 3-manifolds.
Proposition 2.4 in this paper http://www.sciencedirect.com/science/article/pii/0012365X9500109A# gives another formulation for the 4CT.
Edit: For a given graph $G$, the graph $\Delta(G)$ has the edges of $G$ as its vertices; two edges of $G$ are adjacent in $\Delta(G)$ if they span a triangle in $G$. Then the 4CT can be stated as follows: For every planar graph $G$, the chromatic number of $\Delta(G)$ equals to the clique number of $\Delta(G)$.
The paper http://www.sciencedirect.com/science/article/pii/0095895684900352 shows the following similar fact: given a graph $G$, let $K(G)$ denote the graph whose vertices are the edges of $G$. Two vertices of $K(G)$ are adjacent if the corresponding edges in $G$ are contained in a clique.
Then the 4CT is equivalent to: For any planar graph $G$, the chromatic number and the clique number of $K(G)$ are equal.
The high-level description of the automated proof by Gonthier is worth reading, if you are looking for more insight.
Yuri Matiyasevich studied several probabilistic restatements of the Four Colour Theorem, involving positive correlations between two notions of similarity between colourings. His proofs of equivalence rely on an associated graph polynomial, which provides another likely pointer to conjectures that imply the theorem.
Georges Gonthier, Formal Proof—The Four-Color Theorem, Notices of the American Mathematical Society 55(11) 1382–1393, 2008.
Yuri Matiyasevich, One Probabilistic Equivalent of the Four Color Conjecture, translation of paper in Teoriya Veroyatnostei i ee Primeneniya 48 411–416, 2003.
Every planar graph is the intersection graph of segments in the plane using only four directions.
Since parallel segments form an independent set in such a representation, this conjecture implies the 4CT, but perhaps is even stronger.
The reference: West, Open problems. SIAM J Discrete Math Newsletter, 2(1):10-12, 1991.
Every snark has a subgraph that can be formed from the Petersen graph by subdividing some of its edges.
Again according to Wikipedia, a proof of this conjecture was announced in 2001 by Robertson, Sanders, Seymour and Thomas.
As has been pointed out, the Primality Principle due to G. Spencer-Brown as well as the Eliahou–Kryuchkov conjecture are equivalent reformulations of the FCT.
S. Eliahou, Signed diagonal flips and the four color theorem, European J. Combin. 20 (1999) 641–646.
S. I. Kryuchkov, The four color theorem and trees, I. V. Kruchatov, Institute of Atomic Energy, Moscow, 1992, IAE-5537/1.
G. Spencer-Brown, Laws of Form, Gesetze der Form, Bohmeier Verlag, 1997.
Conjecture 6.4. For every pair of finite, binary trees (D, R) with the same number of leaves, there is a sign assignment of D and a word w of rotation symbols valid for D so that Dw = R.
It is stated that Conjecture 6.4, following from previous propositions and theorems in the paper, is equivalent to the 4CT.
A k-flow on an undirected graph G is a directed graph derived by replacing each edge in G with an arc and assigning it an integer between -k and k, exclusive, such that, for each vertex in G, the sum of the integers assigned to arcs pointing into that vertex is equal to the sum of the integers assigned to arcs pointing out. A NWZ (nowhere zero) k-flow is a k-flow in which no arc has been assigned the number 0.
For any planar graph G, the dual of G is the graph that contains one vertex for each face in a planar embedding of G, and two vertices in a dual share one edge connecting them for every edge that the corresponding faces in G share between them in their boundaries. According to Tutte's Flow-Colouring Duality Theorem, a planar graph with no isthmus (i.e. edge whose deletion would increase the number of components) has a NWZ k-flow if and only if its dual is k-colourable. In other words, a planar graph is 4-colourable if and only if its dual has a NWZ 4-flow.
Note that 4CT requires the planar graph in question to have no loops (edges connecting any vertex to itself) because any graph with a loop cannot be vertex-coloured with any set of colours, since any vertex with a loop would therefore be adjacent to a vertex of the same colour, regardless of its colour.
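To make the definition above concrete, here is a small sketch (mine, not from the answer) that checks whether an assignment of integers to arcs is a NWZ k-flow; the graph representation is an assumption made for the example.

```python
from collections import defaultdict

def is_nwz_k_flow(arcs, k):
    """arcs: list of (tail, head, value). Checks that every arc carries a nonzero
    integer strictly between -k and k, and that flow in equals flow out at each vertex."""
    if any(v == 0 or abs(v) >= k for (_, _, v) in arcs):
        return False
    balance = defaultdict(int)
    for tail, head, v in arcs:
        balance[tail] -= v   # flow leaving the tail
        balance[head] += v   # flow entering the head
    return all(b == 0 for b in balance.values())

# A directed 3-cycle with every arc carrying 1 is a NWZ 2-flow:
print(is_nwz_k_flow([(0, 1, 1), (1, 2, 1), (2, 0, 1)], 2))  # True
```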
If you can prove the theorem for rectangular maps, that is, maps made from overlapping sheets of paper, you have also proved the 4CT. In addition, only maps in which every face has 5 or more edges need to be considered in the search.
The probability of an event happening is defined to be the number of ways in which the event can happen divided by the total number of equally likely possible outcomes. For example, a fair coin, which can never land on its edge, has two possible outcomes, heads or tails. The number of ways in which you can get a head is one and the number of ways in which you can get a tail is one. Therefore, the probability of getting a head, $P(head)$, is $1/2$, which is the same as the probability of getting a tail.
Probabilities are given as a fraction or decimal number between $0$ and $1$. $0 \leq P(event) \leq 1$. $0$ means the event will not happen, $1$ means the event will happen. Everything between means the event may happen. Probabilities may also be expressed as percentages.
Independent events are events that are not affected by other events. If you have a bag with 50 red balls and 50 blue balls that are all identical except for the colour, then the probability of selecting a red ball is determined by the number of red and blue balls and nothing else.
Dependent events are events that can be affected by previous events. If you had taken a ball out of the bag and not replaced it, then the probability of selecting a red ball has been changed by the first event.
Mutually exclusive events are events such that if one happens then the other cannot happen. If you toss a coin you will get heads or tails. If you get heads you cannot get tails. If you get tails you cannot get heads. The two outcomes are mutually exclusive.
The probability of heads or tails is: $P(heads\ or\ tails)=P(heads)+P(tails)$.
Fig 15.1 Addition of exclusive events.
In the previous example the events were mutually exclusive so there was no overlap of the regions in the Venn diagram. Imagine we are investigating possible links between long-haul flying and deep vein thromboses. Some people fly, some don't. Some get deep vein thromboses, some don't. Some people fly and get deep vein thromboses. We can show this on a Venn diagram.
Fig 15.2 Addition of independent events.
$P(A\ or\ B) = P(A \cup B) = P(A) + P(B) - P(A \cap B)$.
For independent events, $P(A\ and\ B) = P(A) \times P(B) = P(A \cap B)$.
Fig 15.3 Three Level Tree Diagram.
Notice the sum of the probabilities at each branch is equal to 1.
$P(3\ heads) = 1/2 \times 1/2 \times 1/2 = 1/8$.
$P(2\ heads) = 3(1/2 \times 1/2 \times 1/2) = 3/8$.
$P(1\ head) = 3(1/2 \times 1/2 \times 1/2) = 3/8$.
$P(0\ head) = (1/2 \times 1/2 \times 1/2) = 1/8$.
1. the numerators of $P(3)$, $P(2)$, $P(1)$ and $P(0)$ are the same as the 4th row of Pascal's triangle.
There is one way in which you could get no heads so the probability of getting at least one head is $P(at\ least\ 1\ head) = 1 - (1/2 \times 1/2 \times 1/2) = 7/8$.
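These three-coin probabilities can be checked by brute-force enumeration of the eight equally likely outcomes; the snippet below is my own illustration, not part of the notes.

```python
from itertools import product

outcomes = list(product("HT", repeat=3))   # 8 equally likely outcomes

def prob(event):
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

print(prob(lambda o: o.count("H") == 3))   # 1/8
print(prob(lambda o: o.count("H") == 2))   # 3/8
print(prob(lambda o: o.count("H") >= 1))   # 7/8
```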
Conditional probability is the probability of event $A$ happening given that event $B$ has already happened. We write the probability like this $P(A|B)$ where the | should be read as 'given that'. $P(A|B)$ would read as the probability of $A$ happening given that $B$ has happened.
Imagine we have 100 people in a room. 10 are vegan, 30 are vegetarian and the rest eat meat. If we choose a person at random what is the probability that they are vegan?
What is the probability that they are vegan given that they don't eat meat?
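Working the example above (the counts come from the text; the computed answers are my own): there are 10 vegans, 30 vegetarians and 60 meat eaters among the 100 people.

```python
vegan, vegetarian, meat = 10, 30, 60
total = vegan + vegetarian + meat

p_vegan = vegan / total                               # P(vegan) = 10/100 = 0.1
p_vegan_given_no_meat = vegan / (vegan + vegetarian)  # P(vegan | no meat) = 10/40 = 0.25
print(p_vegan, p_vegan_given_no_meat)
```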
Q: The table shows the numbers of men and women that voted in the 2016 American presidential election. If you select a voter at random what is the most likely way they voted?
A: Of the 125.6 million people that voted 59.8 million voted for Clinton and 59.5 million voted for Trump.
Q: If your selection was male what is the most likely way they voted?
A: Of the 61.8 million men that voted 25.4 million voted for Clinton and 32.8 million voted for Trump. | CommonCrawl |
quite as big as was projected, however.
than a human ever could can provide better support.
> and server hardware, and provided key corporate middleware.
> has been maybe a decade or more since I did.
> I suspect that they DO in fact need to buy RH, even for $34 \times 10^9.
> of privileged or protected information on ANY cloud, for good reason.
> many vendors. The market is almost insane at the moment.
> -- again, from a software point of view.
> able to ride the bleeding edge of it, open source or not.
> as much as for any other reason. Fedora is where it is at, not RHEL.
> real money from top to bottom. | CommonCrawl |
Is $\gamma$ homotopic to $g\circ\gamma$?
Let $X$ be a simply connected topological space. Let $x_0,x_1\in X$ and let $\gamma$ be a path in $X$ from $x_0$ to $x_1$. Let $g$ be a homeomorphism of $X$ with itself. Then $g\circ\gamma$ is a path in $X$ from $g(x_0)$ to $g(x_1)$.
I think it should be true, but I can't seem to give a homotopy from $\gamma$ to $g\circ\gamma$. That is, I need a continuous map $F:I\times I\rightarrow X$ such that $F(s,0)=\gamma(s)$ and $F(s,1)=g\circ\gamma(s)$ for all $s\in I$.
Since $X$ is simply connected, every two paths are homotopic (almost) by definition.
Science – Alberto Molino's website.
I am currently working on several astronomical surveys (listed below). My main contribution to all these projects is the estimation of accurate multi-band photometry and photometric redshifts for both nearby and distant galaxies. I have recently become interested in improving the aforementioned quantities in clusters of galaxies, where the diffuse intra-cluster light (ICL) makes this very challenging.
ALHAMBRA: Advanced Large Homogeneous Medium Band Astronomical survey.
The Advance Large Homogeneous Medium Band Astronomical (ALHAMBRA; Moles et al. 2008) survey has observed eight different regions of the sky, including sections of the Cosmic Evolution Survey (COSMOS), DEEP2, European Large-Area Infrared Space Observatory Survey (ELAIS), Great Observatories Origins Deep Survey North (GOODS-N), Sloan Digital Sky Survey (SDSS) and Groth fields using a new photometric system with 20 optical, contiguous ∼300-Å filters plus the JHKs bands. The filter system is designed to optimize the effective photometric redshift depth of the survey, while having enough wavelength resolution for the identification of faint emission lines. The observations, carried out with the Calar Alto 3.5-m telescope using the wide-field optical camera Large Area Imager for Calar Alto (LAICA) and the near-infrared (NIR) instrument Omega-2000, represent a total of ∼700 h of on-target science images. The catalogues presented in Molino et al. 2014 are complete down to a magnitude I ∼ 24.5 AB and cover an effective area of 2.79 deg2. The ALHAMBRA Photometric Redshift estimates reach a precision of δz/(1 + zs) = 1% for I<22.5 and δz/(1+zs)=1.4% for 22.5<I<24.5. The global n(z) distribution shows a mean redshift ⟨z⟩ = 0.56 for I < 22.5 AB and ⟨z⟩ = 0.86 for I < 24.5 AB. Given its depth and small cosmic variance, ALHAMBRA is a unique data set for galaxy evolution studies.
CLASH: Cluster Lensing and Supernovae with Hubble.
The Cluster Lensing And Supernovae survey with Hubble (CLASH; Postman et al. 2012) is a Multi-Cycle Treasury program awarded with 524 HST orbits to image the cores of 25 massive galaxy clusters at intermediate redshifts (0.1<z<0.9). The cluster selection includes 20 X-ray selected dynamically-relaxed systems plus 5 additional specifically-selected strong lensing clusters. CLASH has combined the high spatial-resolution imaging from the Hubble Space Telescope (HST) with a 16-band filter system optimized for photometric redshift estimation (4 WFC3/UVIS + 5 WFC3/IR + 7 ACS/WFC) and a typical photometric depth of 20 orbits per cluster. The combination of these three elements has made the CLASH survey an unprecedented legacy dataset. The photometric redshift catalogues derived with this unique dataset reach an accuracy of dz/(1+z) ∼ 0.8%, 1.0%, and 2.0% for galaxies with I-band F814W AB magnitudes < 18, 20, and 23, respectively (Molino et al. 2017).
J-PAS: Javalambre Physics of the Accelerated Universe Astronomical Survey.
Javalambre Physics of the Accelerating Universe Astrophysical Survey, (J-PAS; Benítez et al. 2014), is an unprecedented photometric sky survey of 8500 deg2 visible from Javalambre in 59 colors, using a set of broad, intermediate and narrow band filters. J-PAS will discover an unprecedented number of stars, galaxies, supernovas, quasars and solar system objects, which will be mapped with exquisite accuracy. The innovative designs of the J-PAS camera and filter system will allow, for the first time, to map not only the positions of hundreds of millions of galaxies in the sky, but their individual distances to us as well, providing the first complete 3D map of the Universe.
S-PLUS: Southern Photometric Local Universe Survey.
The Southern Photometric Local Universe Survey (S-PLUS; Mendes de Oliveira et al. in prep.) is a new project that will observe 8000 deg2 of the Southern Sky in a unique set of twelve optical bands. The filter system is composed of the 5 SDSS broad-band (BB) filters, supplemented by 7 narrow-band (NB) filters covering the main stellar features from 3700 to 9000 Å. S-PLUS is carried out using a fully robotic 0.8m telescope on Cerro Tololo. The camera has a 9k x 9k E2V detector with a plate scale of 0.55 arcsec per pixel and a field of view of 2 deg2. The NB filters cover prominent features in nearby galaxies (i.e., OII, Ca H+K, D4000, H$\delta$, Mg$b$, H$\alpha$ and CaT), offering strong constraints on star formation histories as well as photometric redshifts of galaxies. They are furthermore highly suitable for searching for low-metallicity and carbon-enhanced stars, the blue horizontal branch and variable stars, and for mapping the Galactic plane.
J-PLUS: Javalambre Photometric Local Universe Survey.
As demonstrated in Molino et al. 2017b (accepted in A&A) for the case of the nearby double galaxy cluster Abell 2589 (z=0.041) & Abell 2593 (z=0.044), this new filter system is capable of providing photometric redshifts as accurate as dz/(1+z) = 1.0% for cluster galaxies with iSDSS < 18 AB magnitudes.
September, 21st 2017: CLASH: Accurate Photometric Redshifts with 14 HST bands in Massive Galaxy Cluster Cores. (here also available the VIDEO). Brazil. Webinar.
September, 8th 2017. S-PLUS: the most precise photo-z machine for galaxies in the Nearby Universe. XLI Brazilian Astronomical Society (SAB) meeting. São Paulo. Brazil. | CommonCrawl |
Adam is generally regarded as being fairly robust to the choice of hyper parameters, though the learning rate sometimes needs to be changed from the suggested default.
If this is true it's a big deal, because hyperparameter search can be really important (in my experience at least) for the statistical performance of a deep learning system. Thus, my question is: why is Adam robust to such important parameters? Especially $\beta_1$ and $\beta_2$?
I've read the Adam paper and it doesn't provide any explanation to why it works with those parameters or why its robust. Do they justify that elsewhere?
Also, as I read the paper, it seems that the number of hyperparameter values they tried was very small: only 2 for $\beta_1$ and only 3 for $\beta_2$. How can this be a thorough empirical study if it only covers $2 \times 3$ hyperparameter settings?
Regarding the evidence for the claim, I believe the only evidence supporting it can be found in Figure 4 of their paper. They show the final results under a range of different values for $\beta_1$, $\beta_2$ and $\alpha$.
Personally, I don't find their argument convincing, in particular because they do not present results across a variety of problems. With that said, I will note that I have used ADAM for a variety of problems, and my personal finding is that the default values of $\beta_1$ and $\beta_2$ do seem surprisingly reliable, although a good deal of fiddling with $\alpha$ is required.
Adam learns the learning rates itself, on a per-parameter basis. The parameters $\beta_1$ and $\beta_2$ don't directly define the learning rate, just the timescales over which the learned learning rates decay. If they decay really fast, then the learning rates will jump about all over the place. If they decay slowly, it will take ages for the learning rates to be learned. But note that in all cases, the learning rates are determined automatically, based on a moving estimate of the per-parameter gradient, and the per-parameter squared gradient.
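For reference, a minimal numpy sketch of the Adam update rule discussed here (following Algorithm 1 of the Adam paper; the variable names are mine), which makes the roles of $\alpha$, $\beta_1$ and $\beta_2$ explicit:

```python
import numpy as np

def adam_step(w, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are running moment estimates, t is the step count (>= 1)."""
    m = beta1 * m + (1 - beta1) * grad           # moving average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2      # moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter effective step size
    return w, m, v
```

As the answer notes, $\beta_1$ and $\beta_2$ only set the timescales over which the moment estimates $m$ and $v$ forget old gradients, while $\alpha$ scales every step directly.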
Adam is not the only optimizer with adaptive learning rates. As the Adam paper states itself, it's highly related to Adagrad and Rmsprop, which are also extremely insensitive to hyperparameters. Especially, Rmsprop works quite nicely.
There are a few fairly pathological cases where Adam will not work, particularly for some very non-stationary distributions. In these cases, Rmsprop is an excellent standby option. But generally speaking, for most non-pathological cases, Adam works extremely well.
Abstract: Cloaking devices are prescriptions of electrostatic, optical or electromagnetic parameter fields (conductivity $\sigma(x)$, index of refraction $n(x)$, or electric permittivity $\epsilon(x)$ and magnetic permeability $\mu(x)$) which are piecewise smooth on $\mathbb R^3$ and singular on a hypersurface $\Sigma$, and such that objects in the region enclosed by $\Sigma$ are not detectable to external observation by waves. Here, we give related constructions of invisible tunnels, which allow electromagnetic waves to pass between possibly distant points, but with only the ends of the tunnels visible to electromagnetic imaging. Effectively, these change the topology of space with respect to solutions of Maxwell's equations, corresponding to attaching a handlebody to $\mathbb R^3$. The resulting devices thus function as electromagnetic wormholes. | CommonCrawl |
Seeing that the plot does not support normality, what could I infer about the underlying distribution? It seems to me that a distribution more skewed to the right would be a better fit, is that right? Also, what other conclusions can we draw from the data?
If the values lie along a line the distribution has the same shape (up to location and scale) as the theoretical distribution we have supposed.
As we see, where the sample is less concentrated than the theoretical distribution supposes, the points increase more rapidly than an overall linear relation would suggest, and where it is more concentrated they increase less rapidly; the extreme cases correspond to a gap in the density of the sample (which shows as a near-vertical jump) or a spike of constant values (values aligned horizontally). This allows us to spot a heavy tail or a light tail, and hence skewness greater or smaller than the theoretical distribution, and so on.
You may also find the suggestion here useful when trying to decide how much you should worry about a particular amount of curvature or wiggliness.
A more suitable guide for interpretation in general would also include displays at smaller and larger sample sizes.
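As a quick illustration of the patterns described above (my own example in Python, separate from the R app discussed below), a right-skewed sample curves upward at the right end of a normal QQ plot:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=300)   # right-skewed data

stats.probplot(sample, dist="norm", plot=plt)   # points bend upward in the right tail
plt.show()
```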
I made a shiny app to help interpret normal QQ plot. Try this link.
In this app, you can adjust the skewness, tailedness (kurtosis) and modality of data and you can see how the histogram and QQ plot change. Conversely, you can use it in a way that given the pattern of QQ plot, then check how the skewness etc should be.
For further details, see the documentation therein.
I realized that I don't have enough free space to provide this app online. As requested, I will provide all three code chunks (sample.R, server.R and ui.R) here. Those who are interested in running this app can just load these files into RStudio and then run it on their own PC.
# Compute the positive part of a real number x, which is $\max(x, 0)$.
# This function generates n data points from some unimodal population.
# mu: the mode of the population, default value is 0.
# the exact skewness defined in statistics textbook, the default value is 0.
# not the exact kurtosis defined in textbook, the default value is 0.
# skewness and tailedness of input.
# Keep generating data points until the length of data vector reaches n.
# Deal with the bimodal case.
# Need 'ggplot2' package to get a better aesthetic effect.
# tailedness and modality. For more information, see the source code in 'sample.R' code.
# 'scale' is a parameter that controls the skewness and tailedness.
# once to plot two plots. The generated sample was stored in the `data` object to be called later.
# For `Unimodal` choice, we fix the mode at 0.
# For `Bimodal` choice, we fix the two modes at -2 and 2.
# Details will be explained in `sample.R` file.
# Overlay the density curve.
# Plot the QQ plot.
# Define UI for application that helps students interpret the pattern of (normal) QQ plots.
# case while "1" indicates the most right-skewed case.
# case while "1" indicates the most heavy tail case.
# information to help users understand sliders.
# bution and the y-axis is the sample quantiles of data.
I have crudely copied his diagram which I keep in my notes as I find it very useful.
This can be interpreted using the probability density functions. For the same $\alpha$ value, the empirical quantile is to the left of the theoretical quantile, which means that the right tail of the empirical distribution is "lighter" than the right tail of the theoretical distribution, i.e. it falls faster to values close to zero.
Since this thread has been deemed to be a definitive "how to interpret the normal q-q plot" StackExchange post, I would like to point readers to a nice, precise mathematical relationship between the normal q-q plot and the excess kurtosis statistic.
A brief (and too simplified) summary is given as follows (see the link for more precise mathematical statements): You can actually see excess kurtosis in the normal q-q plot as the average distance between the data quantiles and the corresponding theoretical normal quantiles, weighted by distance from data to the mean. Thus, when the absolute values in the tails of the q-q plot generally deviate from the expected normal values greatly in the extreme directions, you have positive excess kurtosis.
Because kurtosis is the average of these deviations weighted by distances from the mean, the values near the center of the q-q plot have little impact on kurtosis. Hence, excess kurtosis is not related to the center of the distribution, where the "peak" is. Rather, excess kurtosis is almost entirely determined by the comparison of the tails of the data distribution to the normal distribution.
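A small numerical check of the tails/kurtosis connection described above (my own, not from the linked post): a Student-t sample with few degrees of freedom has markedly heavier tails, and hence larger excess kurtosis, than a normal sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
normal_sample = rng.normal(size=5000)
heavy_tailed = rng.standard_t(df=3, size=5000)   # heavier tails than the normal

print(stats.kurtosis(normal_sample))   # excess kurtosis near 0
print(stats.kurtosis(heavy_tailed))    # clearly positive excess kurtosis
```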
Osamu Iyama, Tilting Cohen–Macaulay representations is another ICM address (this one appeared much later than the others?) Besides being a well-written overview, it has a rather high quiver-to-page ratio.
Igor Burban and Yuriy Drozd, Non-commutative nodal curves and derived tame algebras is a nice article to read after you've read Iyama's ICM address: it concerns the derived categories of singular curves and sheaves of orders on them. Just like Geigle–Lenzing canonical algebras can be studied using weighted projective lines (or vice versa), one can study derived tame algebras using algebraic geometry.
Bhargav Bhatt, Jacob Lurie and Akhil Mathew, Revisiting the de Rham–Witt complex gives an alternative construction of the de Rham–Witt complex: in part 1 this is done for algebras using only elementary methods, whilst part 2 invokes all the $\infty$-language to do descent theory (and more). I saw Jacob give a talk about (the first part of) this paper at the Stacks project workshop where he started with the memorable statement There will be no homotopy theory in this talk. | CommonCrawl |
Are there interesting conjectures "discovered" by computers and proved by humans?
Possible example in graph theory is "Some Conjectures of Graffiti.pc (2004-07)," suggested by Joseph O'Rourke in another answer.
The question might not be well defined because "discovered" is controversial.
Added: This question may be a duplicate (or refinement) of (2) in Experimental Mathematics, as Kristal Cantwell pointed out.
I am mainly interested in examples where the program is designed to make conjectures which are not known identities to the program and later proved.
Much of the early work on the Mandelbrot set was of this type. You see something strange in the computer images, then you try to prove that it really happens.
Here is one example: Pi and the Mandelbrot set. From conjecture in 1991 to paper in 2001.
Lovasz told me the following interesting story. He had read a paper containing a long list of computer generated conjectures, did not like most them, but suddenly found one, which turned out to be an interesting and deep question. Then he realized that the same question had been asked earlier by humans. See http://oldwww.cs.elte.hu/~lovasz/berlin.pdf.
About 1960 Ed Lorenz observed the "sensitive dependence on initial conditions" in a very simple weather model he was running on a computer. He later coined the term "butterfly effect" for this phenomenon.
The starting point of the mathematical theory of solitons for the Korteweg-de Vries equation was the numerical experiment of Kruskal and Zabusky in 1965, showing that solitons of different amplitudes, hence traveling at different speeds, crossed each other and reemerged (almost) undisturbed. I think this is an appropriate example in this thread, since this is an actual new phenomenon, totally unexpected, discovered by computer simulation, then rigorously proved and widely generalized to constitute a whole new mathematical theory.
This is not precisely a conjecture, but the Fermi-Pasta-Ulam experiment seems to be the first time mathematicians and physicists realized that lack of integrals of motion does not necessarily lead to chaos or ergodicity, thus paving the way for KAM theory.
In our paper The numerical measure of a complex matrix (Comm. Pure and Appl. Math., 65 (2012), pp. 287--336), T. Gallay and I proved that the restriction to some zones of the numerical density of an $n\times n$ matrix is polynomial of degree at most $n-3$. The only reason we were led to this result is that numerical experiments showed some evidence for it. Does it qualify?
Later on, we found that this polynomiality is related to the so-called lacunas for hyperbolic differential operators.
"Ever since the early 70s I've used computers to produce examples in algebraic geometry and commutative algebra, and I've developed algorithms to extend the power of computation in this area. I recently joined Mike Stillman and Dan Grayson in the project to (further) develop the Macaulay2 system for symbolic computation. "
One problem that perhaps fits in this category is the exact solution of the hard hexagon model by Rodney Baxter. In this case computer calculations revealed surprising patterns which suggested that the model was solvable, but it required someone like Rodney Baxter to first recognise these patterns, and then go ahead and find the solution. A large grey area in the nature of the question is the extent to which the computer "discovered" the solution.
This discovery is described in Ch. 14 of Rodney Baxter's book "Exactly Solved Models in Statistical Mechanics".
There must be other examples of computer discoveries in the field of solvable models in statistical mechanics, as to some extent this is the nature of field. However, to me this example stands out, as to the best of my knowledge there was no a priori expectation that the model was solvable before the computer calculations were performed.
Here sigma(a) and tau(a) are the sum of the divisors of a and the number of divisors of a, respectively.
They then proved that this conjecture is true. To me this seems very interesting and non-obvious. But I have no idea if this theorem is significant, or if it was totally unknown before this program discovered it.
Probably not new either, but still cool that it was automatically discovered.
HR invented the concept of refactorable numbers, which are such that the number of divisors is itself a divisor, e.g., 9 is refactorable, because 9 has three divisors (1, 3 and 9) and 3 divides 9.
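The definition is easy to check by hand or by machine; here is a tiny sketch (mine, not HR's output):

```python
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_refactorable(n):
    """n is refactorable when its number of divisors divides n."""
    return n % num_divisors(n) == 0

print([n for n in range(1, 50) if is_refactorable(n)])
# [1, 2, 8, 9, 12, 18, 24, 36, 40]
```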
This left us with 26 open conjectures, which we present in Appendix C. We have not yet fully investigated these remaining conjectures, and it seems likely that the majority may be false. Of particular interest to us are the conjectures about refactorable numbers: amongst others, HR made the conjectures that: (i) for even numbers, if σ (a) is refactorable, then τ (a) and σ (a) will be even, (ii) for odd numbers, if σ (a) is even and refactorable, then τ (τ (a)) and σ (τ (a)) will both be prime, (iii) if τ (a) is refactorable and τ (τ (a)) is prime, then σ (τ (a)) will also be prime, and (iv) if both σ (a) and σ (σ (a)) are refactorable, then τ (σ (a)) will be refactorable and σ (τ (a)) will be odd.
The author has a number of other papers on this program and things it's discovered. I'm just now reading through Automatic Invention of Integer Sequences, where they claim it invented 17 novel integer sequences that were accepted into OEIS. Including the refactorable numbers mentioned above.
I agree that it may be difficult to program a computer to find conjectures which it can't then prove, but can then be proven by humans. Much easier seems to be to find and prove theorems, or to search for counterexamples to conjectures.
That's why I am thinking of another possibility: explore mathematical constructions using computer programs, and from the observed patterns and regularities, suggest conjectures. So the program will not be the one suggesting the conjecture, but the human user. Exploring mathematical constructions is done in experimental mathematics. On this Wikipedia page there are some examples of patterns observed when using numerical and graphical simulations, under the titles "Finding serendipitous numerical patterns" and "Visual investigations".
There is a related question on experimental mathematics here.
Are there connections between long-range entanglement and topological quantum computation?
Long-range entanglement is characterized by topological order (certain kinds of global entanglement properties), and the "modern" definition of topological order is that the ground state of the system cannot be prepared by a constant-depth circuit from a product state, instead of the traditional characterizations in terms of ground-state degeneracy and boundary excitations. Essentially, a quantum state that can be prepared by a constant-depth circuit is called a trivial state.
On the other hand, quantum states with long-range entanglement are "robust". One of the most famous corollaries of the quantum PCP conjecture, proposed by Matt Hastings, is the No Low-energy Trivial States (NLTS) conjecture, and a weaker case was proved by Eldar and Harrow two years ago (i.e. the NLETS theorem: https://arxiv.org/abs/1510.02082). Intuitively, the probability that a series of random errors exactly matches some log-depth quantum circuit is very small, so it makes sense that the entanglement here is "robust".
It seems that this phenomenon is somewhat similar to topological quantum computation. Topological quantum computation is robust against any local error since the quantum gates are implemented by braiding operators, which are connected to global topological properties. However, it needs to be pointed out that "robust entanglement" in the NLTS conjecture setting only involves the amount of entanglement, so the quantum state itself may be changed -- it does not automatically yield a quantum error-correcting code from non-trivial states.
Certainly, long-range entanglement is related to homological quantum error-correcting codes, such as the toric code (which seems to be related to abelian anyons). However, my question is: are there connections between long-range entanglement (or "robust entanglement" in the NLTS conjecture setting) and topological quantum computation? Perhaps there exist conditions for when the corresponding Hamiltonian gives rise to a quantum error-correcting code.
There were two simultaneous PRLs published by Kitaev & Preskill and Levin & Wen that I think answer your question.
These use the area law of entanglement seen by states that can be expressed as ground states of a Hamiltonian with only local interactions.
Specifically, the entanglement entropy of a region obeys $S = \alpha L - \gamma + \ldots$. Here $L$ is the length of the perimeter of the region. The first term accounts for the fact that correlations in these systems are typically short range, and so the entanglement is mostly composed of correlations between particles on each side of the boundary.
The $\gamma$ term is unaffected by the size or shape of the region, and so represents a contribution of global and topological effects. Whether this is non-zero, and what the value is, tells you about the topologically ordered nature of your entangled system.
The $\ldots$ term just represents contributions that decay as the region increases, and so can be ignored as $L\rightarrow \infty$.
The two papers, and ones based upon them, then find ways to isolate and calculate $\gamma$ for different entangled states. The value is shown to depend on the anyon model for which these entangled states represent the vacuum.
How have/can the moduli of higher genus curves (perhaps with level structure) be used to study the arithmetic of the curves (e.g. rational points, torsion in the Jacobian)?
This question is perhaps too broad: let me also give some more specific questions.
Give some examples of how the geometry of the moduli spaces has implications for the arithmetic of the curves?
What has/can be gleaned from explicit examples (say for low genus, or hyperelliptics) like the Igusa invariants for genus 2?
Which moduli related to higher genus curves are constructed over the integers?
Any orientation, examples or references would be appreciated; I am entirely new to this area.
There is some connection between the minimal height of curves and their moduli height. For example one can show that for a genus $g \geq 2$ curve $C$ we have that $$\mathcal H (C) < c \cdot \bar H (C), $$ where $\mathcal H (C)$ is the moduli height, $\bar H (C)$ the minimal height, and $c$ a constant.
You can check this paper for some of the details.
The Buechler dichotomy theorem says that if $ p $ is a type of Lascar rank 1 in a superstable theory, then $ p $ has Morley rank 1, or the pregeometry on the set of realizations of $ p $ is locally modular. This isn't strictly a dichotomy, as $ p $ may have both properties. The dichotomy theorem can be seen as a trade-off between model theoretic simplicity (Morley rank 1) and geometric simplicity (local modularity).
The dichotomy theorem implies that any type $ p $ with $ U(p) = 1 $ and $ RM(p) > 1 $ is locally modular. Consequently, when checking the Zilber trichotomy principle for types of Lascar rank 1, it suffices to check only the case of types of Morley rank 1.
As unidimensional theories are controlled by types of rank 1, the dichotomy theorem can be used to show that any unidimensional theory $ T $ is 1-based or totally transcendental (or both).
In a superstable theory: if $ U(p) = 1 $ and the pregeometry on $ p $ is non-trivial, then $ R^\infty(p) = 1 $, where $ R^\infty $ is Shelah's continuous infinity-rank. This amounts to showing that in some neighborhood of $ p $, all non-algebraic types have Lascar rank 1.
If $ R^\infty(p) = 1 $ and the pregeometry on $ p $ isn't locally modular, then $ RM(p) = 1 $. This amounts to showing that in some neighborhood of $ p $, there are only finitely many non-algebraic types.
For the first point, one uses non-triviality to find three realizations $ a, b, c $ of $ p $ which are pairwise independent, with each algebraic over the third. If $ \phi(x_1;x_2;x_3) $ is a formula witnessing the interalgebraicity, and $ \theta(x) $ is a neighborhood of $ p $ having the same $ R^\infty $-rank, it turns out that the formula $ \psi(x_3) = (d_p x_1) \exists x_2 (\phi(x_1,x_2,x_3) \wedge \theta(x_2)) $ gives the desired neighborhood of $ p $ containing only types of Lascar rank at most 1.
The second point is more complicated, and involves using the characterization of non-local-modularity in terms of plane curves, as well as the definability of $ R^\infty $ rank in sets of $ R^\infty $-rank 1. The rough idea is to take a 2-dimensional family of plane curves, and show that if $ C $ and $ C' $ are two generic curves in this family, then $ C \cap C' $ is finite, and that the finitely many types of the elements of $ C \cap C' $ over the code for $ C $ yield all the non-algebraic types in $ C $. This makes $ C $ have Morley rank 1, and its projection onto one of the two coordinates will be a neighborhood of the original type $ p $, also of Morley rank 1. The argument boils down to a series of rank calculations, and careful management of the issue of formulas versus types.
Produces a rational approximation to $\pi$, which tends to quality $Q$ greater than $0.90$ in the limit of $n \rightarrow \infty$.
This measurement places $0.79<Q<0.80$, which seems like good news but may also contradict Beukers. Who or what has gone wrong in this situation? Is $n=5000$ not sufficiently far in the limit? Is it a fault of my programming? Of Mathematica? Or have we found a mistake in the original article? Expert opinions welcome. | CommonCrawl |
PhD student in Mathematics at The Ohio State University.
M. Sc. in Mathematics by University of São Paulo, 2018.
9 Why in unanswered questions there are answered questions?
9 What is SE etiquette when answering your own question?
20 How come Harry didn't recognize the Half-Blood Prince's handwriting?
11 Is there a Portuguese expression for "raining cats and dogs"?
11 Is $\Bbb S^2 \times \Bbb S^4$ symplectic?
7 Qual a origem e o significado do "É pique, é hora, ra-tim-bum"? | CommonCrawl |
The expectation of a random variable $X$, denoted $E(X)$, is the average of the possible values of $X$ weighted by their probabilities. This can be calculated in two equivalent ways.
Technical Note: If $X$ has finitely many possible values, the sums above are always well defined and finite. If $X$ can have countably many values (that is, values indexed by 1, 2, 3, $\ldots$), then more care is needed to make sure that the formulas result in a well defined number. We will deal with that issue shortly; for now, assume that the sum is well defined.
Assuming the sums are well defined, it is straightforward to show that these two formulas give the same answer. One way to show it is to group terms in the first sum by the distinct values of $X(\omega)$ over all the different outcomes $\omega$.
The second formula is often given as "the" definition of expectation, but the first can be helpful for understanding properties of expectation. In particular, it shows that if two random variables have the same distribution, then they also have the same expectation.
Suppose $X$ has the distribution given below.
Then by the formula on the range of $X$, we have $E(X) = 2.85$.
Expectation is often also called expected value, hence the name of the function and also our name ev_X. But notice that the "expected value" need not be a possible value of the random variable. This random variable $X$ can't be 2.85.
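A minimal sketch of the calculation (the actual table from the text isn't reproduced here, so the probabilities below are a hypothetical distribution on 1 through 5 chosen to have the same mean of 2.85):

```python
# Hypothetical distribution on the values 1-5 with weighted average 2.85.
values = [1, 2, 3, 4, 5]
probs  = [0.15, 0.20, 0.35, 0.25, 0.05]

ev_X = sum(x * p for x, p in zip(values, probs))
print(ev_X)   # 2.85
```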
But then what does the expected value represent? To see this, first visualize $E(X)$ using the show_ev=True option to Plot.
If you have studied some physics, you will recognize that the formula we used for the expectation is the same as the formula for the center of gravity of a system in which weights equal to the corresponding probabilities hang from each possible value 1, 2, 3, 4, and 5.
So suppose the histogram is made of cardboard or some rigid material, and imagine trying to balance it on the tip of a pencil held somewhere on the horizontal axis. You'll have to hold the pencil at 2.85 for the figure to balance.
The expectation is the center of the distribution in this physical sense: it is the center of gravity or center of mass of the distribution.
You can also think of expectation as the long run average value of the random variable when you generate the variable over and over again independently and under identical conditions. The sample_from_dist method applied to prob140 distribution objects allows you to do just that. It samples at random with replacement from the distribution and returns an array of sampled values. The argument is the sample size.
You can use the emp_dist method to convert the array of simulated values to a distribution object which you can then use with Plot and other prob140 functions. The show_ave=True option of Plot displays the average of the simulated values.
The average of the 10,000 simulated values of $X$ is very close to $E(X)$ but not exactly equal.
This is because of what you can see in the empirical histogram: it looks very much like the probability histogram of $X$. About 15% of the simulated values are 1, about 20% are 2's, and so on, so the average is very close to 2.85.
The similarity of the two histograms is because of the Law of Averages that you saw in Data 8 and that we will establish formally in this course.
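The same long-run-average point can be sketched without the prob140 helpers, using plain numpy (again with the hypothetical table from the earlier sketch):

```python
import numpy as np

values = [1, 2, 3, 4, 5]
probs  = [0.15, 0.20, 0.35, 0.25, 0.05]   # hypothetical table, as above

rng = np.random.default_rng(0)
simulated = rng.choice(values, size=10_000, p=probs)
print(simulated.mean())   # close to, but not exactly, 2.85
```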
Now that we have a few ways to think about expectation, let's see why it has such fundamental importance. We will start by directly applying the definition to calculate some expectations. In subsequent sections we will develop more powerful methods to calculate and use expectation.
This little example is worth writing out because it gets used all the time. Suppose a random variable $X$ is actually a constant $c$, that is, suppose $P(X = c) = 1$. Then the distribution of $X$ puts all its mass on the single value $c$, and $E(X) = c\cdot 1 = c$. We just write $E(c) = c$.
As you saw earlier, zero/one valued random variables are building blocks for other variables and are called indicators.
by our calculation above. Thus every probability is an expectation. We will use this heavily in later sections.
An instance of this is if $X$ is the number of spots on one roll of a die. Then $E(X) = 3.5$.
We now have an important new interpretation of the parameter of the Poisson distribution. We saw earlier it was close to the mode; now we know that it is also the balance point or expectation of the distribution. The notation $\mu$ was chosen to stand for "mean".
That's a bit technical for this level of course, and you will almost never have to deal with non-existent expectations in Prob140. Just keep in mind that expectation isn't always finite or even well defined.
Here is an example in which you can see that the expectation can't be finite. First notice that the sequence $1/2^n, ~n = 1, 2, 3, \ldots $ is a probability distribution: by summing the geometric series you get 1.
Now suppose the random variable $X$ has values $2, 4, 8, 16 \ldots$ so that $P(X = 2^n) = 1/2^n$ for $n = 1, 2, 3, \ldots $. Then for every possible value $x$, the product $xP(X=x) = 1$. If you try to add infinitely many 1's, the only sensible answer is infinity.
This problem with expectation happens when the distribution has "mass drifting off to infinity" at a rate that makes it impossible to balance the probability histogram anywhere on the horizontal axis. | CommonCrawl |
Good introduction to statistics from an algebraic point of view?
Is there an introduction to probability theory from a structuralist/categorical perspective?
Is there a combinatorial/topological treatment of statistical independence?
What is the algebraic equivalent of independent elements?
and a related field called ergodic theory, which in fact study different things.
However, as a new category theorist with almost no statistics background, I don't aim to learn these advanced topics, but to understand very basic notions like random variable and expectation from an algebraic perspective.
It seems that statistics is one of the most recalcitrant subjects for an algebraic approach, but I think that is not the case: we can just treat it as any other abstract object and define axioms on this abstract random type. The notation and formulas in every introductory statistics book I have read soon become utterly ugly due to the lack of a proper foundation, which is really painful for someone steeped in abstract algebra and functional programming. However, statistics is extremely useful for machine learning, the modelling of the human brain, and many other things.
Lucien Le Cam developed an approach to statistics that largely disposes of measure-theoretic probability and replaced probability measures and random variables with certain Banach lattices. The approach can be found in Le Cam's book Asymptotic Methods in Statistical Decision Theory and the more accessible Comparison of Statistical Experiments by Torgersen.
Keeping the traditional measure theoretic approach to statistics but studying it by a category-theoretic approach is Statistical Decision Rules and Optimal Inference by Cencov.
For basic material on linear regression, there is also The Coordinate-Free Approach to Linear Models by Wichura; this is an area amenable to an approach that is likely to be more comfortable for an algebraist. This is the only book in the list that might be said to be introductory.
That being said, anyone who actually wants to work in statistics needs to be familiar with the standard literature and approach. Warts and all. Much of statistical theory is about inequalities; more analysis than algebra.
P. Diaconis, "Group representations in probability and statistics, Chapter 6. Metrics on Groups, and Their Statistical Use" (PDF available from Project Euclid) is eye-opening and a must-and-pleasure to read, giving a group-theoretic look at basic tools in statistics like the Mann-Whitney and Kolmogorov-Smirnov tests and the Kendall and Spearman correlation coefficients.
The central ideas are related to symmetric group $S_n$, and metrics on it, which give a clue to measuring "disorder" in samples, thus related to main statistics questions.
1) Kendall rank correlation coefficient (tau) is closely related to number of inversions of permutations.
2) Spearman's rank correlation coefficient is closely related to $L_2$-metric on the permutation group.
Example 13. Rank tests. Doug Critchlow (1986) has recently found a remarkable connection between metrics and nonparametric rank tests. It is easy to describe a special case: consider two groups of people — m in the first, n in the second. We measure something from each person which yields a number, say $x_i$ for the first group and $y_j$ for the second. We want to test if the two sets of numbers are "about the same."
This is the classical two-sample problem and uncountably many procedures have been proposed. The following common sense scenario leads to some of the most widely used nonparametric solutions.
Rank all n + m numbers, color the first sample red and the second sample blue, now count how many moves it takes to unscramble the two populations. If it takes very few moves, because things were pretty well sorted, we have grounds for believing the numbers were drawn from different populations. If the numbers were drawn from the same population, they should be well intermingled and require many moves to unscramble.
A not-so-fancy book is Heyer: Theory of Statistical Experiments, Springer. However, it's not category theoretic and gives a rather traditional picture.
It is not true that statistics is not well founded (that's maybe only the case in some textbooks in English speaking countries, because they have rather a tradition to teach statistics in an "applied" and "practical" way). There is literature about statistics (in the traditional way, based on probability theory), which gives thorough foundations, e.g. in German the book(s) by Witting (and Mueller-Funk in vol 2) "Mathematische Statistik" (which to my knowledge has never been translated into English) or Schmetterer. Both Witting and Schmetterer are rigorously formalized literature.
Norms and metrics: Two sides of the same coin?
Topos Without point, from the point of view of logic. | CommonCrawl |
I realize this question is risky (as the title and the tags indicate), but hopefully I can make it acceptable. If not, and the question cannot be salvaged, I'm sorry and ready to delete it or accept closure.
Oftentimes, when one listens to chess grandmasters commentating on games, they will say things like, "The engines give a +1 advantage to White, but the position is dead-drawn." And they proceed to provide high-level explanations of such statements. I'm a bad chess player so I cannot really attempt to verify those, but they often sound extremely convincing. Also, when they teach simple endgames, they seem to consider analyzing the crucial ideas to be enough for proof (at least in the sense of "a completely convincing explanation") and it's difficult not to agree with that. Even certain positions with a lot of pieces on the board can apparently be treated this way. "There is no way to make progress" is something that one can frequently hear.
Sometimes these just turn out to be wrong. These statements are generally based on intuitions ("axioms") such as "you cannot give away your queen for a pawn", which are believed to be true in almost all situations and on already analyzed positions (like the Lucena position). The grandmasters' intuitions are very fine, but it can happen that they will miss a counterintuitive material sacrifice, a counterintuitive retreat or a counterintuitive something else. However, they can be extremely convincing sometimes (and never proven wrong).
It's clear that chess can't be solved this way any time soon. A clear obstacle is the "wild" or "unclear" positions, where "anything can happen". But there are also those "tame" positions, "dead-drawn" positions and "winning advantages". (Surely, some -- or maybe a lot -- of these statements are not correct or correct with incorrect reasonings behind them.) Another indication is that the humans who make these statements get beaten by computers, which primarily use low-level tree searches.
My question is how much, if anything, of this high-level reasoning can be put to a form that would make it mathematics and mathematical proof. I think brute force is clearly not the only way of evaluating positions rigorously. In a K+Q v. K endgame, one doesn't have to analyze each possible move of the lone king to show that it's doomed. It's enough to show that a stalemate is impossible when the king is not on the side of the board, that the king can be forced to the side of the board and then that whenever the king is on the side, it can be checkmated. For a rigorous proof, one can use all kinds of symmetries and translations without having to go through every single node of the tree.
How much of that high-level thinking grandmasters employ can be made into mathematics?
is what I'm interested in, but this is vague -- it's not clear what "how much", "high-level thinking" or even "made into" mean. But I think people must have tried to pursue this and some clear mathematical statements must have been produced. If not, is it at least possible to say why it's difficult, infeasible or impossible?
Is there a chess position with a large (enough to be intractable to a human with just a pen and a piece of paper) tree that has been solved without brute-forcing the tree? Has a rigorous proof accessible to a human been produced?
Let me prove, for example, that the following 7-piece position is a draw. 7-piece positions are about the borderline of what's doable by brute force: they were tabulated around 2010.
1) if white queen captures the rook or the pawn, recapture.
4) else, move the rook to f6 or h6.
A) by capturing the g7 pawn. Recapture with the king and continue moving the rook, securing a draw.
C) by Qa6, Qb6, Qc6, Qd6 or Qe6, followed by rook takes queen and king takes rook. The very same idea, only be sure to take g7:h6 when you can.
D) By Qg6 R:g6 h5:g6. This is also a theoretical draw. Move Kh8-h7-h8-... and be sure not to take g7:h6 at a wrong moment.
Some details are left off here, but I think the level of rigour is fairly close to how math is written.
Computers wrongly give a decisive advantage to white: the idea of cutting off the king with the rook is too "high-level" for them to understand. And I believe, if needed, one can devise a rigorous proof here similar to the one above.
It's not chess, but you might like the book "Mathematical Go: Chilling gets the Last Point" by Berlekamp and Wolfe, about mathematical analysis of Go endgames. IIRC they used transfinite (or was it infinitesimal) surreal numbers to study Go endgame positions that Go experts thought were always drawn, and found wins in some of them, with the winning methods understandable and usable by human players once the mathematicians had found and published them.
Probably one can write the simplest endgames (KQ vs K and KR vs K) as induction proofs, in a manner so that they will work on bigger boards. For KBN vs K, you can proceed by "reduction" of classes of positions (say based upon opponent's king location), similar to solving a puzzle like Rubik's Cube (the KBN vs K has even been done for Kriegspiel by Ferguson).
For the second question, by now there exist a (large) number of studies whose first X moves are tricky enough so that brute force by computer is not immediately feasible, but whose last Y moves are not very understandable to humans, being just a "random" tablebase position that happens to give the desired result.
Some examples do exist but they're rare. One category that has not been mentioned so far is the application of combinatorial game theory to chess.
There is a heuristic argument that proofs in chess cannot usually be compressed much. Namely, Fraenkel and Lichtenstein have proved that chess is EXPTIME-complete. Of course, this is an asymptotic result and does not rigorously imply anything about chess on an $8\times 8$ board. Nevertheless, when we examine endgame tablebases, the impression one gets is that they are extremely hard to compress. John Nunn has written some books (e.g., Secrets of Rook Endings) trying hard to make the results of these endgame calculations comprehensible to a human, but a glance at his books shows that there is still an overwhelming amount of explicit memorization required if you really want to play these endings perfectly. If this is indeed the situation with such "simple" chess positions, the prospect of short proofs for complex positions looks pretty dim.
As a thought experiment, consider the opening position with White's queen removed (i.e., White is giving Black queen odds). "Obviously" this is a win for Black. But can we prove it mathematically? No. Now, typical grandmaster analyses will conclude "Black wins" with much less of an advantage than this. So if we can't convert "Black wins with queen odds" into a mathematical proof then we surely can't convert very many grandmaster analyses into mathematical proofs.
Although somewhat orthogonal to the thrust of the question (determining a winning strategy for a giving legal position in chess), I think the title question is in part answered by Raymond Smullyan.
Raymond Smullyan has written much to make topics in logic more accessible. In addition, he has exemplified logical reasoning in chess with his popular books on retrograde analysis, including Chess Mysteries of Sherlock Holmes. Here the goal often is to determine how a position was arrived at, along with certain properties like whether castling is legal. I would recommend his titles to anyone wanting to practice mathematics, especially as related to chess.
Gerhard "Has Arabian Nights Themes Too" Paseman, 2016.02.04.
Which popular games are the most mathematical?
Do there exist chess positions that require exponentially many moves to reach?
What proportion of chess positions that one can set up on the board, using a legal collection of pieces, can actually arise in a legal chess game?
When is a game tree the game tree of a board game? | CommonCrawl |
F. Kraffer: State-space method to solve polynomial matrix Diophantine equation. Young Scientist Contest 1993, Institute of Information Theory and Automation, Prague 1993.
H. Kwakernaak: MATLAB Macros for Polynomial $H_\infty$ Control System Optimization. Memorandum No. 881, University of Twente, The Netherlands 1990.
MATLAB™ for MS-DOS Personal Computers, User's Guide. The MathWorks Inc., South Natick, MA.
S. Pejchová: Software Package for Polynomial Operations I. Research Report No. 1765, Institute of Information Theory and Automation, Prague 1993.
M. Šebek: An algorithm for spectral factorization of polynomial matrices with any signature. Memorandum No. 912, University of Twente, The Netherlands 1990. | CommonCrawl |
Paper summary by davidstutz (also at https://davidstutz.de/category/reading/); Figure 1 referenced below: https://i.imgur.com/OP2TOOu.png
Tsipras et al. investigate the trade-off between classification accuracy and adversarial robustness. In particular, on a very simple toy dataset, they prove that such a trade-off exists; this means that very accurate models will also have low robustness. Overall, on this dataset, they find that there exists a sweet spot where the accuracy is 70% and the adversarial accuracy (i.e., accuracy on adversarial examples) is 70%. Using adversarial training to obtain robust networks, they additionally show that robustness is increased by not using "fragile" features – features that are only weakly correlated with the actual classification task. Focusing only on a few, but "robust", features also has the advantage of more interpretable gradients and sparser weights (or convolutional kernels). Due to the induced robustness, adversarial examples are perceptually significantly more different from the original examples, as illustrated in Figure 1 on MNIST.
Figure 1: Illustration of adversarial examples for a standard model, a model trained using $L_\infty$ adversarial training and $L_2$ adversarial training. Especially for the $L_2$ case it is visible that adversarial examples need to change important class characteristics to fool the network. | CommonCrawl |
[SOLVED] Kasteleyn's formula for domino tilings generalized?
[SOLVED] Why is there no Borel function mapping every countable set of reals outside itself?
[SOLVED] Have any long-suspected irrational numbers turned out to be rational?
[SOLVED] Proofs of the uncountability of the reals.
[SOLVED] Why could Mertens not prove the prime number theorem?
[SOLVED] Are some numbers more irrational than others?
[SOLVED] is f a polynomial provided that it is "partially" smooth?
[SOLVED] Why is differentiating mechanics and integration art?
[SOLVED] Why do we teach calculus students the derivative as a limit?
[SOLVED] Is the series $\sum_n|\sin n|^n/n$ convergent?
[SOLVED] Text for an introductory Real Analysis course.
[SOLVED] Is square of Delta function defined somewhere?
[SOLVED] Square root of a positive $C^\infty$ function.
[SOLVED] Is there a natural measures on the space of measurable functions? | CommonCrawl |
The Integral Input-to-State Stability (iISS) property is studied in the context of nonlinear time-invariant systems in cascade. A sufficient condition is given, in terms of the storage function of each subsystem, to ensure that the cascade composed of an iISS system driven by a Globally Asymptotically Stable (GAS) one remains GAS. Some sufficient conditions for the preservation of the iISS property under a cascade interconnection are also presented.
We address the problem of detecting faulty behaviors of robots belonging to a multi-agent system. Our objective is to develop a scalable architecture that can be adopted to realize a completely decentralized intrusion detector monitoring the agents' behavior. We want the solution to be independent from the set of ``rules'' describing the interaction among the agents and from their dynamics, and to be non-invasive, i.e., mainly based on HW/SW components that are already present on board each agent. We focus on systems with decentralized cooperation schemes, where cooperation is obtained by sharing a set of ``rules'' by which each agent plans its next ``action'', and where some of the agents may act not according to the rules due to spontaneous failure, tampering, or malicious introduction.
Consider the controlled system $dx/dt = Ax + \alpha(t)Bu$ where the pair $(A,B)$ is stabilizable and $\alpha(t)$ takes values in $[0,1]$ and is persistently exciting. In particular, when $\alpha(t)$ becomes zero the system dynamics switches to an uncontrollable system. In this paper, we address the following question: is it possible to find a linear time-invariant state-feedback, only depending on $(A,B)$ and the parameters of the persistent excitation, which globally asymptotically stabilizes the system? We give a positive answer to this question for two cases: when $A$ is neutrally stable and when the system is the double integrator.
Consider the controlled system $dx/dt = Ax + \alpha(t)Bu$ where the pair $(A,B)$ is stabilizable and $\alpha(t)$ takes values in $[0,1]$ and is persistently exciting. In particular, when $\alpha(t)$ becomes zero the system dynamics switches to an uncontrollable system. In this paper, we address the following question: is it possible to find a linear time-invariant state-feedback, only depending on $(A,B)$ and the parameters of the persistent excitation, which globally exponentially stabilizes the system? We give a positive answer to this question for two cases: when $A$ is neutrally stable and when the system is the double integrator.
A modular stabilizing control law for uncertain, nonholonomic mobile systems with actuator limitations has been investigated. Modular design allows the definition of a stabilizing control law for the kinematic model. The presence of uncertainties in the actuator parameters or in the vehicle dynamics has been treated both by adding suitable components to the Lyapunov function and by using parameter adaptation laws (e.g. adaptive control and backstepping techniques). Simulations are reported for the set-point stabilization of a unicycle-like vehicle, showing the feasibility of the proposed approach. Torque limitations for a unicycle-like vehicle have been investigated using backstepping techniques for the vehicle tracking problem. Simulations are reported.
It is widely accepted in the control community that Computer Controlled Systems (briefly, CCSs) have a lot of advantages, based primarily on the high reconfigurability of the controller platform and the ability to make complex yet fast decisions. Such positive features make the CCS a useful platform for multitasking control, i.e. using only one single-processor platform to control several plants. Within this framework, the computer has to share its computational time to solve several critical tasks, each one with its own priority. Taking into account schedulability and real-time operating system problems, it is just a step forward to realize that tasks with low priority could be interrupted at any time, with unpredictable distributions, and it is just a logical intuition that not all the tasks can be at the highest priority. From a control point of view, each control task should compute a control input to the controlled system to prevent instability or to ensure the performances. This field seems to be at the border between the control community and the computer science community and it has not been widely investigated. Some work has been presented introducing more general scheduling models and methods for control systems, where the control design methodology takes the availability of computing resources into account and allows trade-offs between control performance and computer resource utilization, and also introducing the idea of anytime CCSs without solutions. In the literature, the Control Server is introduced, allowing the separation between scheduling and control design by treating the controllers as scalable real-time components. Jitter and latency are included in the model and cannot be taken over. Furthermore, the interrupt time, the I/O operations and the overrun handling are not taken into account, simply imposing only soft real-time control tasks. Useful tools for the analysis of real-time control performance are Jitterbug and TrueTime, which analyze the control performance once the control tasks are implemented as soft real-time tasks. | CommonCrawl
You could create an empty list and then create a column of -1 and augment it to the previous columns, doing this n times, but is there a faster way?
Of course, select the appropriate ring in your own applications.
As another alternative, you can use ones_matrix, which returns a matrix with all entries equal to 1. So, to build, say, an $1000\times1000$ matrix with all entries being $-1$, you can simply write -ones_matrix(1000,1000).
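For concreteness, here is a short sketch of the approaches mentioned in this thread (the exact code and timings from the original answers are not reproduced here); `ones_matrix` is Sage-specific, while the NumPy variant produces machine integers rather than Sage `Integer`s.

```python
# Sage session sketch (run inside Sage, not plain Python)
m = 1000

A1 = -ones_matrix(ZZ, m, m)          # all entries equal to -1, over the integers
A2 = matrix(ZZ, m, m, [-1] * m * m)  # same matrix, built from an explicit entry list

# NumPy alternative: fast, but entries are numpy ints, not Sage Integers
import numpy as np
A3 = -np.ones((m, m), dtype=int)

print(A1 == A2)            # True
print(A1[0, 0], A3[0, 0])  # -1 -1
```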
On a different computer the timings will, of course, be different. Except in the case of the NumPy array, the elements of the resulting matrix belong to the Integer Ring.
is measurably faster. This is mainly because in sage, -1 is not a literal but gets converted to Integer(-1) by the preparser.
which is as fast as -ones_matrix(ZZ,m,m). | CommonCrawl |
Localization and delocalization for interacting 1D quasiperiodic particles.
We consider a system of two interacting one-dimensional quasiperiodic particles as an operator on $\ell^2(\mathbb Z^2)$. The fact that particle frequencies are identical, implies a new effect compared to generic 2D potentials: the presence of large coupling localization depends on symmetries of the single-particle potential.
We consider the Navier–Stokes equations posed on the half space, with Dirichlet boundary conditions. We give a direct energy based proof for the instantaneous space-time analyticity and Gevrey class regularity of the solutions, uniformly up to the boundary of the half space. We then discuss the adaptation of the same method for bounded domains. | CommonCrawl |
I am a PhD student at the Dept. of Computing, Imperial College, since Oct. 2015, in the Program Specification and Verification Group supervised by Prof. Philippa Gardner. Currently, I am interested in the verification of concurrent programs and in consistency models of distributed systems and databases.
I received MSc. in Advanced Computing from Imperial College London in 2015, and BEng. in Software Engineering from Southwest Jiaotong University (西南交通大学), China in 2014. HERE IS MY CV.
It is a game played by four people. In China, different areas have slightly different rules but the general rules are similar.
Every player holds 13 tiles which, by default, can only be seen by the player themselves. There are 3 suits, each of which contains four duplicates of the tiles numbered one to nine. Some areas also have extra tiles, each of which also has four duplicates (with a few exceptions, which do not really affect the rules but affect how much a player pays when losing).
Initially players get 13 tiles, and afterwards the game is organised in rounds until a player (or players) wins. A round is divided into four turns for the four players. In counter-clockwise order, each player draws a new tile and then discards a tile of their choice. Note that a player always has 13 tiles, either hidden or seen by everyone (there is a mechanism to reveal tiles). To win, a player's 13 tiles plus an extra one need to form a certain pattern. The extra one can be the new tile the player has just drawn, before discarding any tile; or it can be a tile another player discards in their turn.
It is a board game with the aim of getting more territory on a $19 \times 19$ grid than your opponent. Both players have a supply of unused stones, black for one and white for the other. The player with the black stones goes first, then the player with the white stones. In each round, a player puts a stone on an unoccupied intersection of the grid, with the purpose of securing more territory than the opponent.
I learnt to play it when I was 10-ish and carried on until 15-ish. I have not really played since, but recently started to watch some matches on YouTube and Twitch.
I usually cook my own dinner. It is Chinese-style food, but based on available ingredients. I grew up in Changsha, Hunan, and spent another 4 years as an undergraduate in Chengdu, Sichuan, so I like spicy and chilli food. Sometimes I also cook pasta or steak. During weekends, I usually make some Cantonese-style soups. I want to learn to make bread or buns (baozi in Chinese), but never have the time (it is an excuse) or space (the kitchen is a bit small). | CommonCrawl
7 What rules do moderators follow in deleting comments in an ongoing discussion?
3 Will my account be closed if i stop participating on Physics SE?
14 For which topological spaces $X$ can one write $X \approx Y \times Y$? Is $Y$ unique?
5 Is it possible to construct a 1-D linear differential operator with given spectrum $0\leq\lambda_0\leq \lambda_1\leq\dots\leq\lambda_n\le\dots$? | CommonCrawl |
I am quite confused about this. I know that a zero eigenvalue means that the null space has nonzero dimension, and that the range of the matrix is then not the whole space. But is the number of distinct eigenvalues (and thus of independent eigenvectors) the rank of the matrix?
Well, if $A$ is an $n \times n$ matrix, the rank of $A$ plus the nullity of $A$ is equal to $n$; that's the rank-nullity theorem. The nullity is the dimension of the kernel of the matrix, which is all vectors $v$ of the form: $$Av = 0 = 0v.$$ The kernel of $A$ is precisely the eigenspace corresponding to eigenvalue $0$. So, to sum up, the rank is $n$ minus the dimension of the eigenspace corresponding to $0$. If $0$ is not an eigenvalue, then the kernel is trivial, and so the matrix has full rank $n$. The rank depends on no other eigenvalues.
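A quick numerical illustration of the answer (my own sketch): the rank equals $n$ minus the dimension of the kernel, i.e. of the eigenspace for eigenvalue $0$.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],      # 2 * (row 1), so A has a one-dimensional kernel
              [0., 1., 1.]])
n = A.shape[0]

rank = np.linalg.matrix_rank(A)

# dimension of the kernel = dimension of the eigenspace for eigenvalue 0,
# counted here via the number of (numerically) zero singular values
sing = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(np.isclose(sing, 0.0)))

print(rank, n - nullity)          # 2 and 2: rank = n - dim ker(A)
print(np.linalg.eigvals(A))       # one eigenvalue is (numerically) zero
```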
If we know the rank of a matrix r, can we assume that will have precisely r non-zero eigenvalues?
Rank of a Hermitian matrix in terms of Eigen values?
Relation between the rank of the matrix and its characteristic polynomial?
Eigenvalues of $A'A$ where A is an $m\times n$ matrix of rank $m$. | CommonCrawl |
Pre-images of Seifert surfaces are incompressible?
Consider a knot $K \subset S^3$ and let $M_K$ be the associated double branched cover. The pre-image $S$ of a Seifert surface is a surface without boundary inside $M_K$.
Can $S$ be incompressible? If yes, how is this related to the topology of $K$?
I'm particularly interested in the case where $M_K$ is hyperbolic.
The incompressible Seifert surface $\Sigma$ will have incompressible preimage $S$ in the double branched cover if and only if the complement of a tubular neighborhood of $\Sigma$ in $S^3$ has incompressible boundary. This condition was termed "partially unknotted" by Jaco.
Examples of Seifert surfaces which are not partially unknotted were claimed by Jaco in the paper, but not explicitly described. The simplest examples can be obtained by taking a knotted handlebody with hyperbolic complement, such as Thurston's knotted wye.
See also Adams-Reid for descriptions of knotted genus 2 handlebodies (with incompressible hyperbolic complements).
Now, draw a knot on this handlebody which bounds a genus 1 Seifert surface $\Sigma$, using the fact that for a genus one surface $\Sigma$, $\Sigma\times [0,1]$ is a genus 2 handlebody. Any such knot will have a Seifert surface which is not partially unknotted.
How does Thurston's Orbifold Geometrization imply that knots with meridional rank 2 are 2-bridge?
What is the preimage of a braid in a covering space branched over the braid? | CommonCrawl |
There are two $n \times n$ grids, both of which contain the integers $1,2,\ldots,n^2$ in some order. The rows and columns of the grids are numbered $1,2,\ldots,n$.
Your task is to transform the first grid into the second grid by swapping rows and columns, and also to minimize the number of operations.
The first input line contains an integer $n$: the size of the grids.
After this, there are $n$ lines that describe the first grid, and finally there are $n$ lines that describe the second grid.
First print an integer $k$: the minimum number of operations. After this, print $k$ lines that describe the operations.
Each operation must be either of the form "$1$ $a$ $b$" (swap rows $a$ and $b$) or "$2$ $a$ $b$" (swap columns $a$ and $b$).
If there are no solutions, print only the value $-1$. | CommonCrawl |
Here is a list of articles selected from the BBC Sport website.
You might notice that some of the articles are about football (at the time of writing, Liverpool had just qualified for the Champions League final), and the others are about snooker (the snooker world championship was approaching its climax as I wrote this blog). It would be trivial for anyone to open the links and categorise the articles accordingly. The question is, can we write a computer program to do this automatically? Admittedly, this is a contrived toy problem, but it touches upon the sorts of things that data analysts do everyday, and is a nice way of introducing the concept of non-negative matrix factorization (more on this shortly).
I wrote a Python script to parse the web pages and count word frequencies (discarding common words such as 'a' and 'the'). The result was a $1113 \times 14$ matrix: $1113$ rows corresponding to the distinct words detected in the articles, and $14$ columns, one for each article, containing the corresponding word counts. We'll return to this matrix in a little while, but first I'd like to talk about data matrices in more general terms.
A data matrix is a set of observations of a number of variables. In our matrix of word counts above, the observations were individual web pages and the variables were the frequencies of the different words on those pages, but there are numerous other examples. For instance, the observations might be pixels in an image, with spectrometry data as the variables. Or observations might correspond to different people, with various test results as the variables. In this blog post we are going to assume that each column of our data matrix corresponds to an observation and each row corresponds to a variable.
Data matrices often have the following properties.
Their entries are non-negative. This isn't always the case but it is true for many important applications.
They can be very large and may be sparse. I recently encountered a data matrix from seismic tomography which had size $87000 \times 67000$, but with only $0.23\%$ of nonzero entries.
Large matrices are cumbersome to deal with. So it is natural to ask whether we can encapsulate the data using a smaller matrix, especially if our data matrix contains many zeros. Various techniques exist to do this, for example principal components analysis, or linear discriminant analysis. The drawback of these methods is that they do not preserve the non-negativity of the original data matrix, making the results potentially difficult to interpret. This is where non-negative matrix factorization comes in.
Non-negative matrix factorization (NMF) takes a non-negative data matrix $A$ and attempts to factor it into the product of a tall, skinny matrix $W$ (known as the features matrix) and a short, wide matrix $H$ (the coefficients matrix). Both $W$ and $H$ are non-negative. This is shown in the graphic below. Note the presence of $\approx$ rather than $=$: an exact NMF may not exist. In practice NMF algorithms iterate towards acceptable solutions, rather than obtaining the optimal solution. We have two new routines for computing non-negative matrix factorizations coming to the NAG Library soon.
The strength of NMF is that the preservation of non-negativity makes it easier to interpret the factors $W$ and $H$. In general $W$ tells us how the different variables can be grouped together into $k$ features that in some way represent the data. The matrix $H$ tells us how our original observations are built from these features, with the non-negativity ensuring this is done in a purely additive manner.
The best way of understanding this is to go back to our original example. Recall that we had a $1113 \times 14$ matrix of word frequencies for our $14$ web pages. I used one of the upcoming NAG Library routines to factorize it, choosing $k=2$. This resulted in a $1113 \times 2$ features matrix, $W$ and a $2\times 14$ coefficients matrix $H$. Let's discuss them in turn.
Each column of $W$ corresponds to a particular weighted grouping of the $1113$ distinct words from the BBC Sport articles. The larger the entries in the column, the more important the corresponding word is deemed to be. Rather than displaying $W$ in its entirety, I looked at the 10 largest entries in each column to see what the most important words were. The results are shown in this screenshot from my Python script below.
If you're at all familiar with significant terms and people in the snooker and football worlds, you'll hopefully agree that the first column corresponds to snooker and the second column to football. It seems that our non-negative matrix factorization has successfully detected the two categories of web page. Let's denote these with a snooker symbol and a football symbol. Can we now use the NMF to accurately categorise the individual pages? To do this we need to look at the coefficients matrix $H$.
This coefficients matrix is of size $2 \times 14$. The entries in each column show us how well that particular web page fits into our two categories. I assigned each page to a category by simply selecting the largest entry in the column. The results are below. The symbol next to each link shows how it was categorised by the NMF. I will let you judge for yourself whether the categorisations are correct!
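The NAG routines used for the figures above are not shown in the post; purely as an illustration of the same pipeline (word counts, then a rank-2 NMF, then picking the larger entry in each column of $H$), here is a sketch using a recent scikit-learn on a tiny made-up corpus.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import CountVectorizer

docs = [                                   # made-up stand-ins for the 14 articles
    "liverpool reach champions league final after win over roma",
    "salah scores twice as liverpool beat manchester city",
    "higgins wins deciding frame at the crucible snooker championship",
    "selby knocked out of world snooker championship quarter final",
]

vec = CountVectorizer(stop_words="english")
A = vec.fit_transform(docs).T              # words x documents, like the 1113 x 14 matrix

model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(A)                 # words x 2  (features)
H = model.components_                      # 2 x documents (coefficients)

words = np.array(vec.get_feature_names_out())
for k in range(2):
    top = np.argsort(W[:, k])[::-1][:5]
    print("feature", k, ":", ", ".join(words[top]))

print("article categories:", H.argmax(axis=0))   # the football and snooker articles should separate
```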
This was a small example, designed for illustrative purposes only. However, non-negative matrix factorization has become an important tool in the analysis of higher-dimensional data. The real-world applications of NMF include facial recognition, image processing, the detection of exoplanets in astronomy, mass spectrometry, text mining, speech denoising and bioinformatics, to name but a few.
Look out for our new non-negative matrix factorization routines f01sa (for dense data matrices) and f01sb (for large, sparse data matrices) coming to the NAG Library soon. | CommonCrawl |
Abstract: In view of its physical importance in predicting the order of chiral phase transitions in QCD and frustrated spin systems, we perform the conformal bootstrap program of $O(n)\times O(2)$-symmetric conformal field theories in $d=3$ dimensions with a special focus on $n=3$ and $4$. The existence of renormalization group fixed points with these symmetries has been controversial over years, but our conformal bootstrap program provides the non-perturbative evidence. In both $n=3$ and $4$ cases, we find singular behaviors in the bounds of scaling dimensions of operators in two different sectors, which we claim correspond to chiral and collinear fixed points, respectively. In contrast to the cases with larger values of $n$, we find no evidence for the anti-chiral fixed point. Our results indicate the possibility that the chiral phase transitions in QCD and frustrated spin systems are continuous with the critical exponents that we predict from the conformal bootstrap program. | CommonCrawl |
Class to handle job queues stored in the DB.
Definition at line 35 of file JobQueueDB.php.
server : Server configuration array for Database::factory. Overrides "cluster".
cluster : The name of an external cluster registered via LBFactory. If not specified, the primary DB cluster for the wiki will be used. This can be overridden with a custom cluster so that DB handles will be retrieved via LBFactory::getExternalLB() and getConnection().
wanCache : An instance of WANObjectCache to use for caching.
Definition at line 61 of file JobQueueDB.php.
References cache, WANObjectCache\newEmpty(), and server.
Definition at line 427 of file JobQueueDB.php.
References getMasterDB(), getScopedNoTrxFlag(), type, and wfDebug().
Definition at line 348 of file JobQueueDB.php.
References JobQueue\$type, cache, getCacheKey(), getMasterDB(), getScopedNoTrxFlag(), and type.
Definition at line 489 of file JobQueueDB.php.
References $e, getMasterDB(), RunnableJob\getMetadata(), getScopedNoTrxFlag(), JobQueue\incrStats(), null, throwDBException(), and type.
Definition at line 205 of file JobQueueDB.php.
References $fname, doBatchPushInternal(), getMasterDB(), getScopedNoTrxFlag(), and use.
This function should not be called outside of JobQueueDB.
Definition at line 238 of file JobQueueDB.php.
References $e, $job, $res, $rows, as, Wikimedia\Rdbms\IDatabase\endAtomic(), JobQueue\incrStats(), Wikimedia\Rdbms\IDatabase\insert(), insertFields(), Wikimedia\Rdbms\IDatabase\select(), Wikimedia\Rdbms\IDatabase\startAtomic(), throwDBException(), type, and wfDebug().
Definition at line 518 of file JobQueueDB.php.
References JobQueue\$dupCache, $params, WANObjectCache\get(), getMasterDB(), IJobSpecification\getParams(), JobQueue\getRootJobCacheKey(), getScopedNoTrxFlag(), JobQueue\ROOTJOB_TTL, WANObjectCache\set(), and use.
Definition at line 556 of file JobQueueDB.php.
References $e, getMasterDB(), getScopedNoTrxFlag(), throwDBException(), and type.
Definition at line 588 of file JobQueueDB.php.
References JobQueue\$type, as, cache, and getCacheKey().
Definition at line 165 of file JobQueueDB.php.
References $dbr, $e, cache, getCacheKey(), getReplicaDB(), getScopedNoTrxFlag(), and throwDBException().
Definition at line 132 of file JobQueueDB.php.
Definition at line 670 of file JobQueueDB.php.
References $dbr, $res, as, getReplicaDB(), and getScopedNoTrxFlag().
Definition at line 651 of file JobQueueDB.php.
Definition at line 104 of file JobQueueDB.php.
Definition at line 85 of file JobQueueDB.php.
References $dbr, $e, getReplicaDB(), getScopedNoTrxFlag(), throwDBException(), and type.
Definition at line 295 of file JobQueueDB.php.
References $e, $job, $params, claimOldest(), claimRandom(), JobQueue\factoryJob(), getMasterDB(), getScopedNoTrxFlag(), JobQueue\incrStats(), recycleAndDeleteStaleJobs(), throwDBException(), type, and wfRandomString().
Definition at line 573 of file JobQueueDB.php.
Definition at line 901 of file JobQueueDB.php.
Definition at line 606 of file JobQueueDB.php.
Definition at line 598 of file JobQueueDB.php.
Definition at line 873 of file JobQueueDB.php.
References $property, cache, and type.
Referenced by claimRandom(), doFlushCaches(), doGetAbandonedCount(), doGetAcquiredCount(), and doGetSize().
Definition at line 641 of file JobQueueDB.php.
Definition at line 823 of file JobQueueDB.php.
References $conn, $e, and server.
Referenced by getMasterDB(), and getReplicaDB().
Definition at line 614 of file JobQueueDB.php.
References $dbr, $e, $job, $params, JobQueue\factoryJob(), getReplicaDB(), getScopedNoTrxFlag(), throwDBException(), and unserialize().
Referenced by getAllAcquiredJobs(), and getAllQueuedJobs().
Definition at line 811 of file JobQueueDB.php.
References $e, DB_MASTER, and getDB().
Referenced by claimOldest(), claimRandom(), doAck(), doBatchPush(), doDeduplicateRootJob(), doDelete(), doPop(), and recycleAndDeleteStaleJobs().
Definition at line 799 of file JobQueueDB.php.
References $e, DB_REPLICA, and getDB().
Referenced by doGetAbandonedCount(), doGetAcquiredCount(), doGetSiblingQueueSizes(), doGetSiblingQueuesWithJobs(), doGetSize(), doIsEmpty(), and getJobIterator().
Definition at line 858 of file JobQueueDB.php.
References Wikimedia\Rdbms\IDatabase\clearFlag(), DBO_TRX, Wikimedia\Rdbms\IDatabase\getFlag(), Wikimedia\Rdbms\IDatabase\setFlag(), and use.
Referenced by claimOldest(), claimRandom(), doAck(), doBatchPush(), doDeduplicateRootJob(), doDelete(), doGetAbandonedCount(), doGetAcquiredCount(), doGetSiblingQueueSizes(), doGetSiblingQueuesWithJobs(), doGetSize(), doIsEmpty(), doPop(), getJobIterator(), and recycleAndDeleteStaleJobs().
Definition at line 778 of file JobQueueDB.php.
References IJobSpecification\getDeduplicationInfo(), IJobSpecification\getParams(), IJobSpecification\getType(), NS_SPECIAL, serialize(), and Wikimedia\Rdbms\IDatabase\timestamp().
Definition at line 889 of file JobQueueDB.php.
Definition at line 77 of file JobQueueDB.php.
Recycle or destroy any jobs that have been claimed for too long.
Definition at line 691 of file JobQueueDB.php.
References $e, $res, JobQueue\$type, getMasterDB(), getScopedNoTrxFlag(), JobQueue\incrStats(), throwDBException(), and type.
Return the list of job fields that should be selected.
Definition at line 922 of file JobQueueDB.php.
Definition at line 73 of file JobQueueDB.php.
Definition at line 913 of file JobQueueDB.php.
Referenced by doAck(), doBatchPushInternal(), doDelete(), doGetAbandonedCount(), doGetAcquiredCount(), doGetSize(), doIsEmpty(), doPop(), getJobIterator(), and recycleAndDeleteStaleJobs().
Definition at line 42 of file JobQueueDB.php.
Name of an external DB cluster or null for the local DB cluster.
Definition at line 49 of file JobQueueDB.php.
Definition at line 44 of file JobQueueDB.php.
Definition at line 47 of file JobQueueDB.php.
Definition at line 36 of file JobQueueDB.php.
Definition at line 37 of file JobQueueDB.php.
Definition at line 38 of file JobQueueDB.php.
Definition at line 39 of file JobQueueDB.php. | CommonCrawl |
Is a bound state a stationary state?
Now $\langle P \rangle = 0$ in any bound state for the following reason. Since a bound state is a stationary state, $\langle P \rangle$ is time independent. If this $\langle P\rangle \ne 0$, the particle must (in the average sense) drift either to the right or to the left and eventually escape to infinity, which cannot happen in a bound state.
The final sentence makes sense to me, but his reasoning in the second sentence does not. Aren't bound states and stationary states entirely different things? Does the one in fact imply the other?
I think most of us would agree that superposition of bound states — say, of an electron in an atom — still deserves to be called a bound state, even though most such superpositions are time-dependent. The electron is still bound to the atom.
The uncertainty principle is often used in this fashion to provide a quick order-of-magnitude estimate for the ground-state energy.
Bound states are thus characterized by $\psi(x)\to 0$ [as $|x|\to\infty$] ... The energy levels of bound states are always quantized.
Shankar doesn't say that bound states always have sharply-defined energies, so none of this contradicts the usual convention that a superposition of bound states is still called a bound state, whether or not it happens to be stationary.
Infinite Series vs Integral Representation of State Vectors in QM? | CommonCrawl |
Methods in Multiplying Madness Support.
Rather than showing the videos from the problem, you may choose to study the methods and recreate them on the board at the start of the lesson (in silence, as in the videos, so that students are expected to make sense of it without any explanation offered). The latter approach has the advantage of preserving a record of the four methods, if your board is big enough. The rest of the lesson follows in the same way however you choose to introduce it.
"Here are two multiplication calculations that I'd like you to do, using whatever method you like:"
Give students a short time to carry out the multiplications, perhaps on individual whiteboards.
"Now I'm going to show you four methods that could be used to work out those multiplications. Some of you used some of these methods, but there might be methods that you haven't seen before. Watch carefully, and see if you can work out what's going on."
Show the videos, or recreate the calculations from the videos on the board.
Then hand out this worksheet showing each finished method for $246 \times 34$.
"With your partner, try to recreate the methods and make sense of the different steps. Once you think you have made sense of them, make up a few calculations of your own and work them out using all four methods until you feel confident that you can use each method effectively."
Towards the end of the lesson(s), bring the class together and invite students to explain why each method works, and to discuss the benefits and disadvantages of each method.
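If a concrete illustration of the "why" helps for the grid-style methods, they are the distributive law written out in full; a quick sketch (my own, not from the original resource):

```python
def partial_products(a: int, b: int):
    """Split each factor into place-value parts and multiply every pair (grid method)."""
    def parts(n):
        return [int(d) * 10**i for i, d in enumerate(reversed(str(n))) if d != "0"]
    table = [(x, y, x * y) for x in parts(a) for y in parts(b)]
    return table, sum(p for _, _, p in table)

table, total = partial_products(246, 34)
for x, y, p in table:
    print(f"{x} x {y} = {p}")
print("total:", total)    # 8364 -- the answer every method must agree on
```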
Finally, this 2 minute video clip from the film Ma and Pa Kettle could be shown to provoke discussion about the misconceptions in Ma and Pa's methods.
Will each method always work?
Challenge students to adapt each method for multiplying decimals.
Show students this video clip of another multiplication method that requires no intermediate writing down, and invite them to make sense of it.
Offer students this worksheet with the methods for $23 \times 21$ to make sense of first, as there are no 'carry' digits so it is clearer to see what is going on.
Stu Cork created this GeoGebra file to use with this problem, which he has kindly given us permission to share.
Creating and manipulating expressions and formulae. Video. Mathematical reasoning & proof. Divisibility. Place value. Addition & subtraction. Factors and multiples. Algorithms. Multiplication & division. Integers. | CommonCrawl |
The aim of this paper is to prove a quantitative version of Shapiro's uncertainty principle for orthonormal sequences in the setting of Gabor-Hankel theory.
Shapiro, H. S.: Uncertainty principles for basis in $L^2(\mathbb R)$. Proc. of the Conf. on Harmonic Analysis and Number Theory, Marseille-Luminy, 2005 CIRM. | CommonCrawl |
which is useful for constructing an additive factor; for instance, \(cX+(\ldots)\) adds \(c\) to the result if the desired event occurs, and leaves it unchanged otherwise.
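For instance (an illustration with assumed symbols, not part of the original exercise): if \(I\) is the indicator of an event and \(Y\) is some base amount, then
\[ Y - 2I \qquad\text{and}\qquad Y\left(1 - \tfrac{1}{2} I\right) \]
respectively subtract \$2 from, and halve, the amount exactly when the event occurs (\(I=1\)), and leave \(Y\) unchanged otherwise (\(I=0\)).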
As part of a promotion, the same restaurant decides to give 50% off the total price if a customer purchases a meal, a drink, and a dessert, instead of the flat $2 discount. Construct a formula for the new cost of a visit, using the same indicator variables. | CommonCrawl |
This note is for Padfield, D., Rittscher, J., & Roysam, B. (2011). Coupled minimum-cost flow cell tracking for high-throughput quantitative analysis. Medical Image Analysis, 15(4), 650–668..
these applications depend on accurate cell tracking of individual cells that display various behaviors including mitosis, merging, rapid movement, and entering and leaving the field of view.
a general, consistent, and extensible tracking approach that explicitly models cell behaviors in a graph-theoretic framework.
introduce a way of extending the standard minimum-cost flow algorithm to account for mitosis and merging events through a coupling operation on particular edges.
the resulting graph can be efficiently solved using algorithms such as linear programming to choose the edges of the graph that observe the constraints while leading to the lowest overall cost.
the tracking algorithm relies on accurate denoising and segmentation steps for which the paper used a wavelet-based approach that is able to accurately segment cells even in images with very low contrast-to-noise.
Researchers use automated microscopes to run large numbers of experiments in a high-throughput and high-content fashion over extended periods of time.
96-well-plates: each well represents a different experiment and contains several hundred cells that are imaged over several hours or days resulting in the image acquisition of several hundred thousand cells.
the problem of cell tracking has been extensively studied in the past, and many effective approaches have been developed. These approaches generally adapt some well-established algorithmic framework, such as contour evolution or stochastic filtering, to the cell tracking problem and include extensions or post-processing steps to fit those frameworks to the challenges specific to cell tracking.
The paper develops a mathematical framework that formulates the cell tracking problem using a single model that is consistent, extensible, general, efficient, and requires few parameters.
Many of the cell tracking approaches require some simple cell detection step rather than accurate cell segmentations. However, the segmentation itself can yield information about the change in cell size, shape, and morphology that can lead to important biological insights such as the type of cell death induced by a particular treatment.
use the watershed algorithm on either the intensities, gradients, shapes as derived from the binary mask, or other measure or combination of measures.
Level set methods are another effective method for segmentation.
Wavelets are an effective tool for decomposing an image in both the frequency and spatial domain. Wavelet frames are a wavelet variant that avoids the sub-sampling at each decomposition level.
Formulating the association problem in a graph-theoretic framework as a flow network that can be solved efficiently using the minimum-cost flow algorithm.
such flow networks are generally limited to one-to-one matches, but the method can handle entering and leaving cells.
introduce a method called "coupled minimum-cost flow", which enforces a coupling of the flow of certain edges that enables it to handle mitosis and merging events that require one-to-many and many-to-one associations, respectively.
Bipartite graphs can be used to represent the objects that need to be matched between two adjacent sets. A bipartite graph is an undirected graph $G=(V,E)$ whose vertices $V$ can be divided into two disjoint sets $L$ and $R$ such that every edge $E$ in the graph connects a vertex in $L$ to a vertex in $R$.
Given such a graph, a matching is a subset of edges $M\subset E$ such that, for all vertices $\nu\in V$, at most one edge of $M$ is incident on $\nu$. We say that a vertex $\nu\in V$ is matched by matching $M$ if some edge in $M$ is incident on $\nu$; otherwise, $\nu$ is unmatched by matching $M$.
A maximum matching is a matching $M$ of maximum cardinality such that for any matching $M'$, $\vert M\vert\ge \vert M'\vert$.
The unweighted bipartite graph is limited to finding matchings of maximum cardinality where all edges have equal cost. For matching cells across two images, weights can be defined for each pair of cells based on some measurement. A minimum weighted bipartite matching is defined as a matching where the sum of the values of the edges in the matching has a minimal value. Finding such a matching is known as the assignment problem.
Maximum matching and assignment problems can be represented as a flow network problem. Converting from a maximum matching or assignment problem to a minimum-cost flow problem is accomplished by defining a flow network $G=(V,E)$.
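As a minimal sketch of the plain assignment step (my own example; it ignores the appear/disappear vertices and the split/merge coupling discussed below), SciPy's Hungarian-algorithm solver can match detections between two frames given a cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cell centroids detected in two consecutive frames.
frame_t  = np.array([[10.0, 12.0], [40.0, 42.0], [75.0, 30.0]])
frame_t1 = np.array([[11.5, 13.0], [74.0, 31.0], [41.0, 40.5]])

# Cost of matching cell u in frame t to cell v in frame t+1: Euclidean distance.
cost = np.linalg.norm(frame_t[:, None, :] - frame_t1[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)      # minimum-weight bipartite matching
for u, v in zip(rows, cols):
    print(f"cell {u} (t) -> cell {v} (t+1), cost {cost[u, v]:.2f}")
```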
Introduced models for cells entering and leaving by adding A (appear) and D (disappear) vertices to the graph.
Although accounting for cells entering and leaving the image could be modeled in the graph by adding vertices with connections to the source and sink vertices of the graph, splitting and merging cells cannot be represented directly. This is due to the fact that the number of edges entering a split (merge) vertex is different from the number of edges leaving, which violates the flow conservation constraint of the graph.
A constraint is needed to ensure that exactly two units flow to and from splitting and merging vertices when those vertices are part of the solution.
For each split (and merge) vertex, the columns representing the two edges entering and the two edges leaving the vertex (for a total of four columns) are added together to form a new column, and the four columns are removed. This yields a column that has more than two nonzero entries for each split (and merge) vertex, in order for this column to be included in the solution, all edges included in this combined column must be chosen together.
Using the coupled incidence matrix, finding the optimal matches corresponds to finding a subset of columns in this matrix such that the sum of the cost of these columns is minimized, under the constraint that no two columns share common nonzero entries.
$x$: a binary $\vert E\vert\times 1$ solution vector that is 1 if the corresponding edge (column of the coupled incidence matrix) is in the solution and 0 otherwise.
Regardless of the sign of the costs, the algorithm will not revert to the trivial solution where none of the edges are chosen because of the flow requirement $d$ of the coupled minimum-cost flow framework.
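The resulting "choose a minimum-cost subset of columns subject to $Tx = d$ with $x$ binary" problem has the shape of an integer linear program; a toy sketch with made-up numbers (requires SciPy >= 1.9, and is far smaller than the paper's coupled incidence matrix):

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy coupled-flow-style problem: 4 candidate edges, 3 conservation rows.
T = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
d = np.array([1, 1, 1])
c = np.array([2.0, 5.0, 1.0, 3.0])        # edge costs

res = milp(c=c,
           constraints=LinearConstraint(T, d, d),   # enforce T x = d
           integrality=np.ones_like(c),             # integer x, binary together with the bounds
           bounds=Bounds(0, 1))
print(res.x, res.fun)                     # chooses edges 0 and 2 at total cost 3
```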
Given the segmentations, calculate a set of features $\zeta$ on each cell and calculate feature difference vectors between cells on adjacent images $\theta(u,v)=\vert \zeta(u)-\zeta(v)\vert$. Any features of the cells can be used for this step such as location, area, eccentricity, and intensity as well as features calculated from the wavelet coefficients.
where $K$ is the number of features.
cell appearing and disappearing: $\theta(\oslash, v) = \vert \zeta(\oslash)-\zeta(v)\vert$ and $\theta(u,\oslash)=\vert \zeta(u)-\zeta(\oslash)\vert$.
One of the advantages of the wavelet-based method for denoising and segmentation is how it naturally represents objects at different scales.
Utilize biological insight about the cell cycle to guide the segmentation process. A cell begins its life in the G1 phase with a given cell size. As it proceeds through the cell cycle, it duplicates its DNA in the S phase and, by the end of the G2 phase, it has duplicated its internal organelles and structures and has doubled in size.
the registration is based on the normalized cross correlation metric.
the algorithm is based on the principle that a defocused image acquires a smoothed appearance, losing its texture.
tracking and segmentation algorithms on five time-lapse datasets with varying conditions and biological hypotheses and multiple wells.
The main concern for segmentation is avoiding over-segmentation and under-segmentation, meaning avoiding the following: missing cells, finding non-existent cells, breaking a cell into smaller pieces, and detecting multiple cells as one cell.
where $A$ is the automatic segmentation mask, $M$ is the manual segmentation mask, and the score ranges from 0 (no overlap) to 1 (perfect overlap).
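The exact formula did not survive extraction above; a common overlap measure with exactly these properties is the Dice coefficient, sketched here (an assumption on my part, not necessarily the measure used in the paper):

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Overlap of two binary masks: 0 = no overlap, 1 = identical."""
    A = np.asarray(auto_mask, dtype=bool)
    M = np.asarray(manual_mask, dtype=bool)
    denom = A.sum() + M.sum()
    return 2.0 * np.logical_and(A, M).sum() / denom if denom else 1.0

A = np.zeros((8, 8), dtype=bool); A[2:6, 2:6] = True   # automatic segmentation
M = np.zeros((8, 8), dtype=bool); M[3:7, 3:7] = True   # manual segmentation
print(dice(A, M))   # 0.5625: 9 shared pixels, 16 + 16 in total
```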
These algorithms have been applied to nearly 6000 images representing 400,000 cells and 32,000 tracks. A large portion of the results (25%) were validated manually by expert biologists at two different institutions, and their edits were tabulated and stored to enable reproducibility of their assessment.
From this video, it seems that almost all cells head in the same direction; what would happen with more chaotic motion? | CommonCrawl
Abstract: We consider a Lagrange–Hermite polynomial, interpolating a function at the Jacobi zeros and, with its first $(r-1)$ derivatives, at the points $\pm 1$. We give necessary and sufficient conditions on the weights for the uniform boundedness of the related operator in certain suitable weighted $L^p$-spaces, $1<p<\infty$, proving a Marcinkiewicz inequality involving the derivative of the polynomial at $\pm 1$. Moreover, we give optimal estimates for the error of this process also in the weighted uniform metric. | CommonCrawl |
We introduce a method for transforming low-order tensors into higher-order tensors and apply it to tensors defined by graphs and hypergraphs. The transformation proceeds according to a surgery-like procedure that splits vertices, creates and absorbs virtual edges and inserts new vertices and edges. We show that tensor surgery is capable of preserving the low rank structure of an initial tensor decomposition and thus allows to prove nontrivial upper bounds on tensor rank, border rank and asymptotic rank of the final tensors. We illustrate our method with a number of examples. Tensor surgery on the triangle graph, which corresponds to the matrix multiplication tensor, leads to nontrivial rank upper bounds for all odd cycle graphs, which correspond to the tensors of iterated matrix multiplication. In the asymptotic setting we obtain upper bounds in terms of the matrix multiplication exponent $\omega$ and the rectangular matrix multiplication parameter $\alpha$. These bounds are optimal if $\omega$ equals two. We also give examples that illustrate that tensor surgery on general graphs might involve the absorption of virtual hyperedges and we provide an example of tensor surgery on a hypergraph. In the context of quantum information theory, our results may be interpreted as upper bounds on the number of Greenberger-Horne-Zeilinger states needed in order to create a network of Einstein-Podolsky-Rosen states by stochastic local operations and classical communication. They also imply new upper bounds on the nondeterministic quantum communication complexity of the equality problem when played among pairs of players.
Christandl, M, & Zuiddam, J. (2016). Tensor surgery and tensor rank. | CommonCrawl |
Students develop an understanding of the meanings of multiplication and division of whole numbers through activities and problems involving equal-sized groups, arrays, and area models; multiplication is finding an unknown product, and division is finding an unknown factor in these situations. For equal-sized group situations, division can require finding the unknown number of groups or the unknown group size.
To me, it seems that the emphasis is on "equal-sized groups," "same-size units of area," "identical rows," and "identical columns." The child's teacher could have emphasized this in class.
If so, then perhaps one "valid" answer to this higher-order thinking question is "You can add objects together if they belong to the same group. You can multiply groups of objects if they are of the same size."
For example, say that students are riding in $3$ buses: one bus has $30$ students, another has $30$, and another has $32$. How many students are there in total?
The answer is not $30+30+32+3$, that is, the number of buses is not added because buses are not students.
The answer is not $30\times 3$, because not all the buses have exactly $30$ students.
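For completeness, the correct total in this example comes from adding students to students only: $30 + 30 + 32 = 92$ students.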
Abstract: We describe global embeddings of fractional D3 branes at orientifolded singularities in type IIB flux compactifications. We present an explicit Calabi-Yau example where the chiral visible sector lives on a local orientifolded quiver while non-perturbative effects, $\alpha'$ corrections and a T-brane hidden sector lead to full closed string moduli stabilisation in a de Sitter vacuum. The same model can also successfully give rise to inflation driven by a del Pezzo divisor. Our model represents the first explicit Calabi-Yau example featuring both an inflationary and a chiral visible sector. | CommonCrawl |
Bienz, Stefan; Hesse, Manfred (1987). Synthese makrocyclischer, $\alpha, \beta$-ungesättigter $\gamma$-Oxolactone durch Ringerweiterungsreaktionen; ein neuer Weg zum makrocyclischen Lacton-Antibiotikum A 26771 B. Helvetica Chimica Acta, 70(5):1333-1340.
A new synthetic route to the $\alpha,\beta$-unsaturated $\gamma$-oxolactones 2a and 2b, involving two ring-enlargement reactions, is described. Ring opening of bicyclic $\alpha$-nitroketones of the type 3 gave ring-enlarged compounds of the type 4, which were converted to monoprotected diketones of the type 10 by using a variation of the Nef reaction as a key step. Macrocyclic lactones of the type 11 were obtained by Baeyer-Villiger oxidation and converted into compounds of the type 2. The conversion of 2b to the macrocyclic lactone antibiotic A 26771 B (1) is already described in the literature.
Due to the high cost of flowers in Alberta, Yraglac decided it would be a good idea to plant his own flower garden instead of buying flowers for his girlfriend. We can model his flower garden on a two-dimensional grid, where the position of each flower is given by an $(x,y)$ coordinate. Today, Yraglac decided it would be a good idea to water his flowers one by one, starting at the origin $(0,0)$ and walking to each flower. He will take the shortest path at each step. However, it turns out that Yraglac has an important meeting today and has a limited amount of time available to water the flowers. Thus, he has ranked all of the flowers in his garden from most to least important and will water them in that order. For religious reasons, the number of flowers watered by Yraglac must be a prime number, or zero. Help Yraglac find the maximum number of flowers he can water.
The first line contains a single integer $T \leq 10$ giving the number of test cases. Each test case starts with a line containing an integer $N$ ($1 \leq N \leq 20\, 000$), the number of flowers in the garden, and an integer $D$ ($0 \leq D \leq 10^9$), the maximum total distance that can be travelled by Yraglac. The next $N$ lines contain two integers $x_ i$ and $y_ i$ ($-500 \leq x_ i,y_ i \leq 500$), the coordinates of the $i$th flower. The flowers are ordered from most to least important. Every flower has a different position, and there will not be a flower at the origin $(0,0)$.
For each test case, output a single line containing the maximum number of flowers that Yraglac can water. | CommonCrawl |
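A minimal solution sketch, under two assumptions that the excerpt does not settle: "shortest path" is taken as straight-line (Euclidean) distance between consecutive points, and the primality test below is the naive one. The helper names are illustrative, not a reference solution.

```python
import math

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for p in range(2, int(math.isqrt(n)) + 1):
        if n % p == 0:
            return False
    return True

def max_flowers(flowers, D):
    """Largest prime k such that visiting the first k flowers in order
    (starting from the origin) stays within total distance D; 0 otherwise."""
    best, dist, x, y = 0, 0.0, 0.0, 0.0
    for k, (fx, fy) in enumerate(flowers, start=1):
        dist += math.hypot(fx - x, fy - y)
        if dist > D:
            break
        x, y = fx, fy
        if is_prime(k):
            best = k
    return best

# Example: two flowers, generous distance budget -> 2 flowers (2 is prime).
print(max_flowers([(3, 4), (3, 5)], 10))  # 2
```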
Abstract : This article is dedicated to the estimation of Wasserstein distances and Wasserstein costs between two distinct continuous distributions $F$ and $G$ on $\mathbb R$. The estimator is based on the order statistics of (possibly dependent) samples of $F$ resp. $G$. We prove the consistency and the asymptotic normality of our estimators. | CommonCrawl |
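For two independent samples of equal size, a standard order-statistics estimator of the 1-Wasserstein distance is $\widehat W_1 = \frac{1}{n}\sum_{i=1}^n |X_{(i)} - Y_{(i)}|$; the estimator studied in the paper is more general (possibly dependent samples, general costs), so the sketch below only illustrates the basic construction.

```python
import numpy as np

def wasserstein1_equal_samples(x: np.ndarray, y: np.ndarray) -> float:
    """Plug-in W1 estimate from the order statistics of two equal-size samples."""
    xs, ys = np.sort(x), np.sort(y)
    return float(np.mean(np.abs(xs - ys)))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)    # sample from F
y = rng.normal(0.5, 1.0, size=10_000)    # sample from G
print(wasserstein1_equal_samples(x, y))  # ≈ 0.5 for these two Gaussians
```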
The seminar will meet Tuesdays, 4:00 p.m. in VV B139, unless otherwise indicated.
Let X be a compact hyperbolic manifold, and let Y be a totally geodesic closed submanifold in X. I will discuss the problem of bounding the integral of a Laplace eigenfunction on X over Y, as the eigenvalue tends to infinity. I will present an upper bound for these integrals that is sharp on average, and briefly describe ongoing work with Farrell Brumley in which we attempt to produce eigenfunctions with very large periods.
If E is a compact set of Hausdorff dimension greater than 5/4 on the plane, we prove that there is a point x in E such that the set of distances between x and E has positive Lebesgue measure. Our result improves upon Wolff's theorem for dim E > 4/3. This is joint work with Larry Guth, Alex Iosevich and Yumeng Ou.
Brascamp-Lieb inequalities are L^p estimates for certain multilinear forms on functions on Euclidean spaces. In this talk we consider singular Brascamp-Lieb inequalities, which arise when one of the functions is replaced by a Calderon-Zygmund kernel. We focus on a family of multilinear forms in R^n with a certain cubical structure and discuss their connection to some patterns in positive density subsets in R^n. Based on joint works with V. Kovac and C. Thiele.
In this talk, I will present my recent works with my collaborators on the lower bound and upper bounds estimates for the first positive eigenvalues of Kohn Laplacian and sub-Laplacian on a strictly pseudoconvex pseudo-Hermitian CR manifold, which include CR Lichnerowicz-Obata theorem for the lower and upper bounds for the first positive eigenvalue for the Kohn Laplacian on strictly pseudoconvex hypersurfaces.
An old theorem of Weil and Kodaira says that: For a K\"ahler manifold X, there exists a closed meromorphic one-form with residue divisor D if and only if D is homologous to zero. In this talk, I will generalize Weil and Kodaira's criterion to non-K\"ahler manifolds.
I will discuss my recent work on some problems concerning Fourier decay and Fourier restriction for fractal measures on curves.
We apply the multisummability theory from Dynamical Systems to CR-geometry. As the main result, we show that two real-analytic hypersurfaces in $\mathbb C^2$ are formally equivalent, if and only if they are $C^\infty$ CR-equivalent at the respective point. As a corollary, we prove that all formal equivalences between real-algebraic Levi-nonflat hypersurfaces in $\mathbb C^2$ are algebraic (and in particular convergent). This is a joint work with I. Kossovskiy and B. Lamel.
Let Q be a homogenous integral polynomial of degree at least two. We consider certain results and questions concerning the distribution of the integral points on the level sets of Q. | CommonCrawl |
Do all proteins start with methionine?
The start codon AUG also codes for methionine, and without a start codon translation does not happen. Even the ambiguous codon GUG codes for methionine when it comes first. So does this mean that all proteins start with methionine as the first amino acid?
You are correct in thinking that since the translation of mRNA begins with AUG, which codes for methionine, then all proteins should contain a methionine at their N-terminus (aka start site). But, it is indeed not so. First of all, I want to mention about variations in start codon. As you say, AUG is not the only, but actually the most common, start codon, and it codes for methionine in eukaryotes, or formylmethionine in prokaryotes but only at the start site. But, this start codon can also vary and become GUG or even UUG, coding for valine and leucine respectively. And the twist is, it still codes for methionine or formylmethionine1. In rare cases, such as heat shock, other codons like CUG, ACG, AUA and AUU, are also used for initiation 2. It is so because start codon itself is not sufficient to begin translation, other nearby factors, like the Shine-Dalgarno sequence, or initiation factors, also play a role. One such factor is the initiation tRNA. At the beginning of translation, tRNAMet or tRNAfMet binds to the small subunit of ribosome. So, whatever be the start codon, the first amino acid will be methionine3.
Now, coming back to the main question, N-terminal methionine, although being the first amino acid, is not present at N-terminus of all proteins. This is because of a process that is known as post-translational modification. After a polypeptide is completely translated from mRNA, it is modified at different places by different enzymes, which are regulated by different (internal or external) factors. There are more than a hundred post-translational modifications known4, one of which is the removal of methionine from the N-terminus of a polypeptide. N-terminal methionine is removed from a polypeptide by the enzyme methionine aminopeptidase5.
The question which immediately comes to mind is Why are proteins modified after translation? Well, there can be various different causes of it. First of all, post-translational modifications are regulated by many factors, and this process is called post-translational regulation6. Another point is cell targeting. Attaching different groups to polypeptides makes them more stable at their target location. For example, by attaching lipid molecules to polypeptides (in a process called lipidation) makes the polypeptide more stable and suitable for cell membranes4. A yet another factor is increasing stability. Yes, you read it right, in some cases, N-terminal methionine can destabilize a protein! For example, an extra N-terminal methionine not only destabilizes but also disrupts the native folding configuration of $\alpha$-lactalbumin7. There can be numerous other factors too, which promote removal of N-terminal methionine from polypeptides.
Thus, in short, No, not all proteins contain a methionine at their N-terminus. I hope this helps!
2. Ivanov IP, Firth AE, Michel AM, Atkins JF, Baranov PV. Identification of evolutionarily conserved non-AUG-initiated N-terminal extensions in human coding sequences. Nucleic Acids Research. 2011;39(10):4220-4234. doi:10.1093/nar/gkr007.
When ribosomes create peptides, methionine is the starting amino acid. But in many proteins, methionine aminopeptidases cut it off from the N-terminus. This happens in cases when methionine is not required as the starting amino acid (not required on the N-terminus).
I remembered the name of the enzyme.
I know that the halting problem is undecidable in general but there are some Turing machines that obviously halt and some that obviously don't. Out of all possible turing machines what is the smallest one where nobody has a proof whether it halts or not?
The halting problem is decidable for $TM(2,3)$, $TM(2,2)$ and $TM(3,2)$ (where $TM(k,l)$ is the set of Turing machines with $k$ states and $l$ symbols).
The decidability of $TM(2,4)$ and $TM(3,3)$ is on the boundary and it is difficult to settle because it depends on the Collatz conjecture which is an open problem.
See also my answer on cstheory about Collatz-like Turing machines and "Small Turing machines and generalized busy beaver competition" by P. Michel (2004) (in which it is conjectured that $TM(4,2)$ is also decidable).
Kaveh's comment and Mohammad's answer are correct, so for a formal definition of the standard/non-standard Turing machines used in this kind of results see Turlough Neary and Damien Woods works on small universal Turing machines, e.g. The complexity of small universal Turing machines: a survey (Rule 110 TMs are weakly universal).
I would like to add that there are some Turing Machines for which the Halting problem is independent of ZFC.
For instance take a Turing machine which looks for a proof of contradiction in ZFC. Then if ZFC is consistent, it won't halt, but you cannot prove it in ZFC (because of Gödel's second incompleteness theorem).
So it is not only a matter of not having found a proof yet, sometimes proofs don't even exist.
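A schematic rendering of such a machine, assuming a hypothetical but computable checker is_zfc_proof_of_contradiction that decides whether a given string encodes a valid ZFC derivation of a contradiction (the checker itself is not spelled out here):

```python
from itertools import count, product

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789()=,&|~> "  # any finite proof alphabet works

def is_zfc_proof_of_contradiction(candidate: str) -> bool:
    """Hypothetical (but computable) checker: True iff `candidate` encodes a
    syntactically valid ZFC derivation of a contradiction such as 0 = 1."""
    raise NotImplementedError("illustrative placeholder only")

def search_for_inconsistency() -> None:
    # Enumerate every finite string in length-lexicographic order and halt as
    # soon as one of them is a valid ZFC proof of a contradiction.
    for length in count(1):
        for letters in product(ALPHABET, repeat=length):
            if is_zfc_proof_of_contradiction("".join(letters)):
                return  # this machine halts if and only if ZFC is inconsistent
```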
No one has a proof of whether a universal Turing machine halts or not. In fact, such a proof is impossible as a result of the undecidability of the Halting problem. The smallest is a 2-state 3-symbol universal Turing machine, which was found by Alex Smith, for which he won a prize of $25,000.
This is an inexactly phrased but reasonable general question that can be studied in several particular technical ways. There are many "small" machines, measured by states/symbols, where halting is unknown, but no "smallest" machine is possible unless one comes up with some justifiable/quantifiable metric of the complexity of a TM that takes into account both states and symbols (apparently nobody has proposed one so far).
Actually, research into this problem related to Busy Beavers suggests that there are many such "small" machines lying on a hyperbolic curve where $x \times y$, with $x$ states and $y$ symbols, is small. In fact it appears to be a general phase transition/boundary between decidable and undecidable problems.
The new paper Problems in number theory from busy beaver competition (2013) by Michel, a leading authority, exhibits many such cases for low $x,y$ and shows the connection to general number-theoretic sequences similar to the Collatz conjecture.
I have checked the sequence, but I was not able to tell whether it is monotonically increasing or decreasing in order to check convergence.
Option A has the sequence 1, 1, 2, 2, 4, 4, 8, 8, ... Is it convergent?
I would calculate numbers for each of A, B, C and D first. If you need more information after that, look at the definition of convergence: informally a sequence is convergent if it gets closer and closer to some limiting value. Is that true of 1,1,2,2,4,4,8,8?
An alternative approach is to consider that every subsequence of a convergent sequence converges to the same limit as the sequence. So, does the subsequence $(x_0, x_2, x_4, \ldots)$ converge?
The subsequence $2^n$ is divergent by the ratio test. Does that mean the original sequence also diverges?
All subsequences of the 3 options are divergent... Does that mean none of the above is the correct option?
Are you saying YES to my last message or to my first question?
If a subsequence is divergent, then its original sequence also diverges?
Can someone point out why these directed graphs aren't equivalence relations?
As far as I can tell, these two directed graphs are reflexive, symmetric and transitive.
In the first one we have c ~ a and a ~ d but not c ~ d, so the relation is not transitive. A similar problem exists for the other graph. See if you can find the issue yourself.
Then ask yourself what the graph of an equivalence relation looks like in general.
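A small checker for the three properties over a finite relation given as a set of directed edges can make the failure concrete; the example edge set below is only a guess at the first graph in the question.

```python
from itertools import product

def is_equivalence(nodes, edges):
    """Check reflexivity, symmetry and transitivity of a relation on `nodes`,
    where `edges` is a set of ordered pairs (a, b) meaning a ~ b."""
    reflexive = all((x, x) in edges for x in nodes)
    symmetric = all((b, a) in edges for (a, b) in edges)
    transitive = all((a, c) in edges
                     for (a, b), (b2, c) in product(edges, edges) if b == b2)
    return reflexive, symmetric, transitive

nodes = {"a", "b", "c", "d"}
# c ~ a and a ~ d but not c ~ d, as described in the answer above:
edges = {(x, x) for x in nodes} | {("c", "a"), ("a", "c"), ("a", "d"), ("d", "a")}
print(is_equivalence(nodes, edges))  # (True, True, False) -> not transitive
```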
Rule 1: All balls of the same color must be adjacent to each other.
I wrote a program to find all the solutions for it and got 24 solutions. But how do I know that my program's calculation is right? In short, I need a mathematical solution for this problem. I'm guessing I need permutations to solve this, but the rule threw me off. Any help would be much appreciated. Thanks in advance, guys.
You don't say exactly what you want to do with the balls, but I assume you want to place them in a line.
First, decide if the green balls will appear first, or the blue balls will appear first; you have 2 choices.
Then decide whether the yellow ball will be first, in between, or last. You have three choices.
Since both things must be chosen and the choices are independent, you multiply the numbers to get the total number of ways of doing it: $2\times 3$ or $6$ ways.
This assumes that you cannot distinguish between the two green balls, and you cannot distinguish between the two blue balls. If you can distinguish them, then you also need to decide (i) which green ball goes first (2 possible choices); and (ii) which blue ball goes first (2 possible choices). So the total number of ways would then be given by $2\times 3\times 2\times 2 = 24$ (the first 2 determines whether green or blue goes first; the 3 is the number of ways to place the yellow ball; the second 2 is the number of ways of deciding which green ball goes first among the green balls; the last 2 is the number of ways of deciding which blue ball goes first among the blue balls).
Join the greens with a green stick, the blues with a blue stick.
Now consider placing the sticks and the yellow ball. This can be done in $3! = 6$ ways. Assuming the green balls are distinct and blue are distinct, there are 2 ways to place each of the 2 sticks. So total is $6 \times 2 \times 2 = 24$.
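A brute-force check of the count of 24 (with the two green and the two blue balls treated as distinguishable, matching the asker's program) just filters all permutations through the adjacency rule; a sketch, assuming the ball multiset is two green, two blue and one yellow:

```python
from itertools import permutations

balls = ["G1", "G2", "B1", "B2", "Y"]

def same_colors_adjacent(arrangement):
    """Rule 1: the positions of each colour must form one contiguous block."""
    for colour in ("G", "B", "Y"):
        positions = [i for i, b in enumerate(arrangement) if b.startswith(colour)]
        if positions and positions[-1] - positions[0] != len(positions) - 1:
            return False
    return True

valid = [p for p in permutations(balls) if same_colors_adjacent(p)]
print(len(valid))  # 24
```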
Authors: Doherty, D., Sivagnanam, S., Dura-Bernal, S., & Lytton, W. W.
Avalanches have been suggested to reflect a scale-free organization of cortex. It is hypothesized that such an organization may relate to a particularly effective form of activity propagation which is balanced between failure (activity fails to reach a target area) and overactivation (activity reaches a target area via many routes leading to wasted activity or epileptiform activity patterns). We electrically stimulated a computer model of mouse primary motor cortex (M1) and analyzed signal flow over space and time. Initially we stimulated a 300 $μ$m $\times$ 600 $μ$m slice of M1 using a 10 $μ$m $\times$ 10 $μ$m 0.5 nA stimulus across all 6 layers of cortex (1350 $μ$m) for 100 ms. Waves of activity swept across the cortex for a half a second after the end of the electrical stimulus. We extracted avalanches from the data by counting events, spikes, occurring within 1 ms frames. An avalanche of length N was defined as N consecutively active frames, preceded by a blank frame and followed by a blank frame. A graph of the cortical slice above, with the 0.5 nA stimulus, displayed a bimodal distribution. We observed 18 avalanches in total with 4 single neuron avalanches and all the other avalanches containing more than 1000 neurons each. The largest avalanche contained 7000 neurons. Studies have generally shown avalanche activity to show a linear log–log graph starting highest from small avalanches and decreasing as the avalanches get larger. We looked at responses of M1 to lower amplitude stimuli between 0.05 and 0.5 nA to see if they may fit a classic inverse power-law curve. We graphed M1 response to a 500 ms electric stimulus at various amplitudes and found particularly clear inverse power-law responses to stimuli between 0.16 and 0.18 nA. In the 300 $μ$m $\times$ 300 $μ$m slice of M1 for 500 ms using 0.16nA we observed 90 avalanches from as small as a single neuron action potential in isolation to 13 neurons spiking. A large proportion were SOM neurons participating in the avalanches but they also included IT neurons at this level of stimulation. Neurons from every layer of cortex participated in avalanches except for layer 4. At stimulus onset neurons within an avalanche spiked at the same time. Spike onset amongst neurons within an avalanche became more heterogeneous as time progressed, especially after about 400 ms. For example, a 5 neuron avalanche began 431 ms after stimulus onset with a SOM6 neuron spike (x:84.2 $μ$m, z:98.9 $μ$m). Eight-tenths of a millisecond later it was followed with an IT5A spike (x:92.8 $μ$m, z:85.7 $μ$m). Next, after 0.65 ms, a different SOM6 neuron spiked (x:79.1 $μ$m, z:83.2 $μ$m) and finally the avalanche ended with yet another SOM6 spike (x:81.0 $μ$m, z:64.3 $μ$m). We observed similar results using a 0.18nA stimulus that elicited 110 avalanches from single neuron avalanches to avalanches that included 12 neurons. The simulation of avalanches in cortex offers advantages for analysis that are not readily done experimentally in in vivo or in vitro. We have been able to record from every neuron in our M1 slice and follow activity from cell to cell. In the future we will analyze how avalanches take place within and between layers. | CommonCrawl |
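A minimal sketch of the avalanche definition used here, counting spikes in 1 ms frames and taking maximal runs of consecutively active frames bounded by blank frames; the array names and toy data are illustrative.

```python
import numpy as np

def extract_avalanches(spike_counts_per_ms: np.ndarray):
    """Return (length_in_frames, total_spikes) for each avalanche, where an
    avalanche is a maximal run of non-empty 1 ms frames bounded by empty frames."""
    avalanches, run_frames, run_spikes = [], 0, 0
    for count in spike_counts_per_ms:
        if count > 0:
            run_frames += 1
            run_spikes += int(count)
        elif run_frames > 0:
            avalanches.append((run_frames, run_spikes))
            run_frames, run_spikes = 0, 0
    if run_frames > 0:                      # run touching the end of the record
        avalanches.append((run_frames, run_spikes))
    return avalanches

counts = np.array([0, 1, 3, 0, 0, 2, 2, 1, 0, 4])
print(extract_avalanches(counts))  # [(2, 4), (3, 5), (1, 4)]
```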
In what sense is the Capital Asset Pricing Model (CAPM) related to Modern Portfolio Theory (MPT)?
Where in the steps shown below do I need to use the CAPM?
Out there in the world, we have thousands of risky assets such as stocks, natural resources and bonds, and one risk-free asset, which is T-bills in usual cases.
With the assumption that the returns of all assets follow the normal distribution, we can use three pieces of information (expected return, variance, and covariance with all other assets) to come up with the mean-variance frontier, a group of portfolios with the least risk at each given level of return. These portfolios are composed of risky assets only. The three kinds of information come from the historical price movements of the assets.
There is only one best risky-asset portfolio that all investors hold, and that is the tangency portfolio. This tangency portfolio is on the mean-variance frontier of risky assets and, when it is mixed with the risk-free asset, it has a higher Sharpe ratio than any other combination of a risky-asset portfolio on the efficient frontier and the risk-free asset.
The combination of the tangency portfolio and the risk-free asset can be formed with several different weights in each. Since it is a linear combination of the tangency portfolio and the risk-free asset, this combination can be shown as a line, called the Capital Allocation Line (CAL).
Based on an investor's risk aversion, investors choose how much weight of their wealth to invest in the risk-free asset, and the rest in tangency portfolio.
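As a concrete illustration of step 3, the tangency portfolio can be computed from estimated expected returns and a covariance matrix via $w \propto \Sigma^{-1}(\mu - r_f)$; the three-asset numbers below are made up purely for illustration.

```python
import numpy as np

def tangency_weights(mu, cov, rf):
    """Weights of the maximum-Sharpe (tangency) portfolio of risky assets."""
    excess = np.asarray(mu) - rf
    raw = np.linalg.solve(np.asarray(cov), excess)   # Sigma^{-1} (mu - rf)
    return raw / raw.sum()                           # normalise to sum to 1

mu  = [0.08, 0.12, 0.05]                 # expected returns of three risky assets
cov = [[0.04, 0.006, 0.002],
       [0.006, 0.09, 0.003],
       [0.002, 0.003, 0.01]]
rf  = 0.02

w = tangency_weights(mu, cov, rf)
print(w, w @ np.asarray(mu))             # weights and the portfolio's expected return
```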
We can define the SCL as $R_i = \alpha_i + \beta_i * R_m + \epsilon_i$ (with $R_i$ and $R_m$ being the realized security and market returns in excess of the risk-free rate and $\beta_i$ being the OLS regression beta with $R_i$ being the dependent variable and $R_m$ being the independent variable).
We can define Jensen's alpha as $\alpha_i = R_i - \beta_i * R_m$ (with the variables defined as above). From here it can be seen that Jensen's alpha equation is just another form of the SCL (with $\alpha_i$ and $R_i$ switching sides, and the equation multiplied by $-1$).
While the SCL and Jensen's alpha equations use realized returns, the CAPM uses expected returns and can be formulated as follows: $E(R_i) = \beta_i * E(R_m) + \epsilon_i$ (notation similar to the previous equations, but with $E(R_i)$ being the expected excess return of the security and $E(R_m)$ being the expected return of the market portfolio), where $\epsilon_i$ is an error term and $E(\epsilon_i) = 0$.
Now, when previous realized returns are used as proxies for the expected returns (i.e. $E(R_i) = R_i$ and $E(R_m) = R_m$), when plugged in to the CAPM, we find that it must be the case that $\alpha_i ≡ \epsilon_i$ for all $i$. As the (realized) $\alpha_i$ is a deterministic term and does not necessarily equal zero, we find that it can't be that $\alpha_i ≡ E(\epsilon_i) = 0$ for all $i$. Thus, using realized security returns as proxies for expected returns is not compatible with the CAPM.
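A small sketch of how $\beta_i$ and Jensen's alpha are typically estimated from realized excess-return series with an ordinary least-squares fit; the simulated series and seed are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
r_m = rng.normal(0.005, 0.04, size=250)               # market excess returns
r_i = 0.001 + 1.2 * r_m + rng.normal(0, 0.02, 250)    # security excess returns

beta, alpha = np.polyfit(r_m, r_i, deg=1)             # slope = beta, intercept = alpha
print(f"beta ≈ {beta:.2f}, Jensen's alpha ≈ {alpha:.4f}")
```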
Edit2: I figure I did not still actually answer your question very well. Sharpe's development of the CAPM was originally spurred by the problem his graduate school supervisor Markowitz had with mean-variance optimization. As computers were slow and expensive, it was not feasible to do the calculations for a large number of securities.
Sharpe then first came up with the single-index model (SIM), which is basically what I previously referred to as the security characteristic line (SCL). The reasoning here was that the returns of different securities were related only through common relationships with some basic underlying factor. This being the case, instead of calculating all the pairwise covariances and the resulting portfolio volatilities, the volatility of a (well diversified) portfolio (where all idiosyncratic risk is diversified away) could be approximated via the securities' weighted covariances with the underlying factor (i.e. the market index). This decreased the computing power cost of the operation dramatically.
The SIM was thus used to decompose ("analyst's") estimates of expected returns on different securities for a more efficient calculation of the efficient frontier. CAPM followed soon after, when Sharpe concluded that (if alpha's could not be predicted) the market portfolio itself is the tangency portfolio.
Now, as computing power is cheap today, and you can easily calculate the covariance matrix as well as the portfolio volatilities of a large number of different combinations, the SIM is no longer needed for the analysis.
Abstract: Many four-dimensional supersymmetric compactifications of F-theory contain gauge groups that cannot be spontaneously broken through geometric deformations. These "non-Higgsable clusters" include realizations of $SU(3)$, $SU(2)$, and $SU(3) \times SU(2)$, but no $SU(n)$ gauge groups or factors with $n> 3$. We study possible realizations of the standard model in F-theory that utilize non-Higgsable clusters containing $SU(3)$ factors and show that there are three distinct possibilities. In one, fields with the non-abelian gauge charges of the standard model matter fields are localized at a single locus where non-perturbative $SU(3)$ and $SU(2)$ seven-branes intersect; cancellation of gauge anomalies implies that the simplest four-dimensional chiral $SU(3)\times SU(2)\times U(1)$ model that may arise in this context exhibits standard model families. We identify specific geometries that realize non-Higgsable $SU(3)$ and $SU(3) \times SU(2)$ sectors. This kind of scenario provides a natural mechanism that could explain the existence of an unbroken QCD sector, or more generally the appearance of light particles and symmetries at low energy scales. | CommonCrawl |
The Large Hadron Collider (LHC) at CERN is well known for the massive amounts of physical data it produces. In this project, we collaborate with physicists from the LHCb collaboration and address the data processing needs that arise there.
The main objective of the C5 project is to develop new methods to realise the data processing tasks within LHCb. We also strive to adapt existing methods to understand whether/how they would apply in the context of LHCb. A very important aspect of the project is to evaluate methods in an environment like the one at CERN; specifically, we are only interested in solutions that will carry to a very large scale (such as thousands of compute nodes). The other angle of the project is to connect the application domain with methodology established within the CRC; examples are the use of compact data summaries (e.g., to avoid scanning huge amounts of data), streaming algorithms (e.g., the "trigger stage" at LHCb has to ingest a high-volume data stream and react with real-time characteristics), or the consideration of low-power platforms.
The project entered the CRC at phase 2. During that phase, we developed solutions to individual, isolated problems within LHCb. We proposed, e.g., new GPU-based algorithms; a novel filter mechanism (named DeLorean); and parallelisation mechanisms for LHCb analyses (including a tailor-made low-power hardware platform). Our umbrella goal for phase 3 is to connect those solutions towards a new design for the overall processing pipeline at LHCb. To this end, we want to, e.g., integrate our parallel data selection engine DeLorean with a Map-Reduce-based analysis framework and extend both parts by GPU acceleration.
Lindemann/2018a Lindemann, Thomas. Efficient Track Reconstruction on Modern Hardware. No. 1, DBIS Group, Chair 6, Department of Computer Science, 2018.
Lindemann/etal/2018a Thomas Lindemann and Jonas Kauke and Jens Teubner. Efficient Stream Processing of Scientific Data. In Proc. of the Joint HardBD & Active '18 Workshop, Paris, France, 2018.
Aaij/etal/2016k Aaij, R., ..., Schellenberg, M. et al.. Search for the $C\!P$-violating strong decays $\eta \to \pi^+\pi^-$ and $\eta^\prime(958) \to \pi^+\pi^-$. In Phys. Lett., Vol. B764, pages 233-240, 2017.
Aaij/etal/2016n Aaij, R.,..., Schellenberg, M. et al.. Observation of $J/\psi\phi$ structures consistent with exotic states from amplitude analysis of $B^+\to J/\psi \phi K^+$ decays. In Phys. Rev. Lett., Vol. 118, No. 2, pages 022003, 2017.
Aaij/etal/2016o Aaij, R.,..., Schellenberg, M. et al.. Amplitude analysis of $B^+\to J/\psi \phi K^+$ decays. In Phys. Rev., Vol. D95, No. 1, pages 012002, 2017.
Aaij/etal/2016r Aaij, R.,..., Schellenberg, M. et al.. First experimental study of photon polarization in radiative $B^0_s$ decays. In Phys. Rev. Lett., Vol. 118, No. 2, pages 021801, 2017.
Aaij/etal/2016t Aaij, R.,..., Schellenberg, M. et al.. Observation of $B^+\rightarrow J/\psi 3\pi^+ 2\pi^-$ and $B^+\rightarrow \psi(2S) \pi^+\pi^+\pi^-$ decays. In Eur. Phys. J., Vol. C77, No. 2, pages 72, 2017.
Aaij/etal/2016u Aaij, R.,..., Schellenberg, M. et al.. Observation of the decay $B^0_s \to \phi\pi^+\pi^-$ and evidence for $B^0 \to \phi\pi^+\pi^-$. In Phys. Rev., Vol. D95, No. 1, pages 012006, 2017.
Aaij/etal/2016v LHCb Collaboration and Aaij, R. and Eitschberger, U. and Schellenberg, M. and Spaan, B. and et al.. New algorithms for identifying the flavour of $\mathrm B ^0$ mesons using pions and protons. In European Physical Journal C, Vol. C77, No. 4, pages 238, 2017.
Aaij/etal/2016w Aaij, R.,..., Schellenberg, M. et al.. Observation of the annihilation decay mode $B^0\to K^+K^-$. In Phys. Rev. Lett., Vol. 118, No. 8, pages 081801, 2017.
Aaij/etal/2016x Aaij, R.,..., Schellenberg, M. et al.. Search for decays of neutral beauty mesons into four muons. In JHEP, Vol. 03, pages 001, 2017.
Aaij/etal/2016y Aaij, R.,..., Schellenberg, M. et al.. Measurement of the $b$-quark production cross-section in 7 and 13 TeV $pp$ collisions. In Phys. Rev. Lett., Vol. 118, No. 5, pages 052002, 2017.
Aaij/etal/2016z Aaij, R.,..., Schellenberg, M. et al.. Observation of the suppressed decay $\Lambda^0_b\rightarrow p\pi^-\mu^+\mu^-$. In JHEP, Vol. 04, pages 029, 2017.
Aaij/etal/2017a LHCb Collaboration and Aaij, R. and Schellenberg, M. and Spaan, B. and Stevens, H. and et al.. Measurement of $CP$ violation in $B^0\rightarrow J/\psi K^0_{\mathrm{S}}$ and $B^0\rightarrow\psi(2S) K^0_{\mathrm{S}}$ decays. In Journal of High Energy Physics, Vol. 11, pages 170, 2017.
Dorok/etal/2017a Sebastian Dorok and Sebastian Breß and Jens Teubner and Horstfried Läpple and Gunter Saake and Volker Markl. Efficient Storage and Analysis of Genome Data in Databases. In Datenbanksysteme für Business, Technologie und Web (BTW), pages 423--442, 2017.
Dorok/etal/2017b Sebastian Dorok and Sebastian Breß and Jens Teubner and Horstfried Läpple and Gunter Saake and Volker Markl. Efficiently Storing and Analyzing Genome Data in Database Systems. In Datenbank-Spektrum, 2017.
Goertz/etal/2017a Görtz, Michael Dominik and Kühn, Roland and Zietek, Oliver and Bernhard, Roman and Bulinski, Michael and Duman, Dennis and Freisen, Benedikt and Jentsch, Uwe and Klöppner, Tobias and Popovic, Dragana and Xu, Lili. Energy Efficiency of a Low Power Hardware Cluster for High Performance Computing. In Eibl, Maximilian and Gaedke, Martin (editors), INFORMATIK 2017, pages 2537-2548, Gesellschaft für Informatik, Bonn, 2017.
Kussmann/etal/2017a Kussmann, Michael and Berens, Maximilian and Eitschberger, Ulrich and Kilic, Ayse and Lindemann, Thomas and Meier, Frank and Niet, Ramon and Schellenberg, Margarete and Stevens, Holger and Wishahi, Julian and Spaan, Bernhard and Teubner, Jens. DeLorean: A Storage Layer to Analyze Physical Data at Scale. In B. Mitschang et al. (Hrsg.) (editors), Datenbanksysteme für Business, Technologie und Web (BTW 2017), Vol. P-265, pages 413-422, GI, 2017.
Aaij/etal/2016a Aaij, R., ..., Schellenberg, M. et al.. First study of the $CP$-violating phase and decay-width difference in $B_s^0\to \psi(2S)\phi$ decays. In Phys. Lett., Vol. B762, pages 253-262, 2016.
Aaij/etal/2016b Aaij, R., ..., Schellenberg, M. et al.. Measurement of forward $W\to e\nu$ production in $pp$ collisions at $\sqrt{s}=8\,$ TeV. In JHEP, Vol. 10, pages 030, 2016.
Aaij/etal/2016d Aaij, R., ..., Schellenberg, M. et al.. Search for structure in the $B_s^0\pi^\pm$ invariant mass spectrum. In Phys. Rev. Lett., Vol. 117, No. 15, pages 152003, 2016.
Aaij/etal/2016e Aaij, R., ..., Schellenberg, M. et al.. Measurement of the ratio of branching fractions $\mathcal{B}(B_c^+ \to J/\psi K^+)/\mathcal{B}(B_c^+ \to J/\psi\pi^+)$. In JHEP, Vol. 09, pages 153, 2016.
Aaij/etal/2016g Aaij, R., ..., Schellenberg, M. et al.. Measurement of the $B_s^0 \rightarrow J/\psi \eta$ lifetime. In Phys. Lett., Vol. B762, pages 484-492, 2016.
Aaij/etal/2016h Aaij, R., ..., Schellenberg, M. et al.. Study of $B_c^+$ decays to the $K^+K^-\pi^+$ final state and evidence for the decay $B_c^+\to\chi_{c0}\pi^+$. In Phys. Rev., Vol. D94, No. 9, pages 091102, 2016.
Aaij/etal/2016j Aaij, R., ..., Schellenberg, M. et al.. Amplitude analysis of $B^- \to D^+ \pi^- \pi^-$ decays. In Phys. Rev., Vol. D94, No. 7, pages 072001, 2016.
Aaij/etal/2016l Aaij, R., ..., Schellenberg, M. et al.. Measurement of the CKM angle $\gamma$ from a combination of LHCb results. In JHEP, Vol. 12, pages 087, 2016.
Aaij/etal/2016m Aaij, R.,..., Schellenberg, M. et al.. Measurements of the S-wave fraction in $B^0\rightarrow K^+\pi^-\mu^+\mu^-$ decays and the $B^0\rightarrow K^\ast(892)^0\mu^+\mu^-$ differential branching fraction. In JHEP, Vol. 11, pages 047, 2016.
Aaij/etal/2016p Aaij, R.,..., Schellenberg, M. et al.. Measurement of the forward Z boson production cross-section in pp collisions at $\sqrt{s} = 13$ TeV. In JHEP, Vol. 09, pages 136, 2016.
Aaij/etal/2016q LHCb Collaboration and Aaij, R. and Meier, F. and Schellenberg, M. and Spaan, B. and et al.. Measurement of $CP$ violation in $B^0 \to D^+ D^-$ decays. In Physical Review Letters, Vol. 117, No. 26, pages 261801, 2016.
Aaij/etal/2016s Aaij, R.,..., Schellenberg, M. et al.. Differential branching fraction and angular moments analysis of the decay $B^0 \to K^+ \pi^- \mu^+ \mu^-$ in the $K^*_{0,2}(1430)^0$ region. In JHEP, Vol. 12, pages 065, 2016.
Bress/etal/2016a Sebastian Breß and Henning Funke and Jens Teubner. Robust Query Processing in Co-Processor-Accelerated Databases. In Proceedings of the 2016 ACM SIGMOD Conference on Management of Data, San Francisco, CA, USA, ACM, 2016.
Aaij/etal/2015a LHCb Collaboration and Aaij, R. and Adeva, B. and Adinolfi, M. and Affolder, A. and Spaan, B. and et al.. Measurement of $CP$ Violation in $B^0 \to J/\psi K_S^0$ Decays. In Physical Review Letters, Vol. 115, No. 3, pages 031601, 2015.
Eitschberger/2018a Eitschberger, Ulrich Paul. Flavour-tagged Measurement of CP Observables in $B_s^0 \rightarrow D_s^\mp K^\pm$ Decays with the LHCb Experiment. TU Dortmund, 2018.
Meier/2016a Frank Meier. Measurement of $\sin2 \beta$ using charmonium and open charm decays at LHCb. TU Dortmund University, 2016.
Swientek/2015a Swientek, Stefan. A data processing firmware for an upgrade of the Outer Tracker detector at the LHCb experiment. TU Dortmund, 2015.
Schleich/2011a Schleich, Sebastian. First measurement of the inclusive $\phi$ meson production cross section in pp collisions at $\sqrt s = 7$ TeV and search for CP-violation in the $B_s\rightarrow\phi\phi$ decay at LHCb. TU Dortmund, 2011.
Balkesen/Teubner/2014a Cagri Balkesen and Jens Teubner and Gustavo Alonso and M. Tamer Öszu. Main-Memory Hash Joins on Modern Processor Architectures. In IEEE Transactions on Knowledge and Data Engineering, 2014.
Aaij/B/2013a Aaij, R. and others (incl. Spaan, B.) (LHCb Collaboration). Measurement of the time-dependent $CP$ asymmetry in $B^0 \to J/\psi K^0_{\mathrm{S}}$ decays. In Phys. Lett., Vol. B721, pages 24-31, 2013.
Aaij/B/2013b Aaij, R. and others (incl. Spaan, B.) (LHCb Collaboration). Measurement of the $B^0$--$\bar B^0$ oscillation frequency $\Delta m_d$ with the decays $B^0 \to D^- \pi^+$ and $B^0 \to J/\psi K^{*0}$. In Phys. Lett., Vol. B719, pages 318-325, 2013.
Balkesen/etal/2013c Cagri Balkesen and Jens Teubner and Gustavo Alonso and M. Tamer Özsu. Main-Memory Hash Joins on Multi-Core CPUs: Tuning to the Underlying Hardware. In Proc.\ of the 29th IEEE Int'l Conference on Data Engineering (ICDE), pages 362--373, Brisbane, Australia, 2013.
Teubner/etal/2013a Jens Teubner and Louis Woods and Chongling Nie. XLynx---An FPGA-Based XML Filter for Hybrid XQuery Processing. In ACM Transactions on Database Systems (TODS), Vol. 38, No. 4, pages 23, 2013.
Woods/etal/2013a Louis Woods and Gustavo Alonso and Jens Teubner. Parallel Computation of Skyline Queries. In 21st IEEE Annual Int'l Symposium on Field-Programmable Custom Computing Machines (FCCM), pages 1-8, Seattle, WA, USA, 2013.
Roy/etal/2012a Pratanu Roy and Jens Teubner and Gustavo Alonso. Efficient Frequent Item Counting in Multi-Core Hardware. In Proc. of the 18th ACM SIGKDD Int'l Conference on Knowledge Discovery and Data Mining (KDD), pages 1451--1459, Beijing, China, 2012.
Woods/etal/2010a Louis Woods and Jens Teubner and Gustavo Alonso. Complex Event Detection at Wire Speed with FPGAs. In Proc.\ of the VLDB Endowment (PVLDB), Vol. 3, No. 1, Singapore, 2010.
Mueller/etal/2009b René Müller and Jens Teubner and Gustavo Alonso. Streams on Wires---A Query Compiler for FPGAs. In Proceedings of the VLDB Endowment, Vol. 2, No. 1, pages 229--240, Lyon, France, 2009.
Nedos/2008a Nedos, Mirco. Entwicklung, Implementierung und Test eines FPGA-Designs für die Level-1-Frontend-Elektronik des Äußeren Spurkammersystems im LHCb-Detektor. TU Dortmund, 2008.
Abstract: Deep neural networks (DNNs) are state-of-the-art solutions for many machine learning applications, and have been widely used on mobile devices. Running DNNs on resource-constrained mobile devices often requires the help from edge servers via computation offloading. However, offloading through a bandwidth-limited wireless link is non-trivial due to the tight interplay between the computation resources on mobile devices and wireless resources. Existing studies have focused on cooperative inference where DNN models are partitioned at different neural network layers, and the two parts are executed at the mobile device and the edge server, respectively. Since the output data size of a DNN layer can be larger than that of the raw data, offloading intermediate data between layers can suffer from high transmission latency under limited wireless bandwidth. In this paper, we propose an efficient and flexible 2-step pruning framework for DNN partition between mobile devices and edge servers. In our framework, the DNN model only needs to be pruned once in the training phase where unimportant convolutional filters are removed iteratively. By limiting the pruning region, our framework can greatly reduce either the wireless transmission workload of the device or the total computation workload. A series of pruned models are generated in the training phase, from which the framework can automatically select to satisfy varying latency and accuracy requirements. Furthermore, coding for the intermediate data is added to provide extra transmission workload reduction. Our experiments show that the proposed framework can achieve up to 25.6$\times$ reduction on transmission workload, 6.01$\times$ acceleration on total computation and 4.81$\times$ reduction on end-to-end latency as compared to partitioning the original DNN model without pruning. | CommonCrawl |
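The abstract does not give the pruning criterion, so the sketch below only shows a generic magnitude-based step, ranking convolutional filters by their $\ell_1$ norm and dropping the weakest ones, which is one common way such iterative filter pruning is realised; it is not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def prune_filters_by_l1(conv_weights: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Keep the conv filters with the largest L1 norms.

    conv_weights has shape (out_channels, in_channels, kH, kW); the weakest
    `prune_ratio` fraction of output filters is removed.
    """
    norms = np.abs(conv_weights).sum(axis=(1, 2, 3))         # one L1 norm per filter
    n_keep = max(1, int(round(len(norms) * (1.0 - prune_ratio))))
    keep = np.argsort(norms)[-n_keep:]                        # indices of strongest filters
    return conv_weights[np.sort(keep)]

layer = np.random.randn(64, 32, 3, 3)                         # a toy conv layer
pruned = prune_filters_by_l1(layer, prune_ratio=0.5)
print(layer.shape, "->", pruned.shape)                        # (64, 32, 3, 3) -> (32, 32, 3, 3)
```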
Welcome! This database provides access to our scientific literature.
Y. Driouich, M. Parente, and E. Tronci. "A methodology for a complete simulation of Cyber-Physical Energy Systems." In EESMS 2018 – Environmental, Energy, and Structural Monitoring Systems, Proceedings, 1–5., 2018. DOI: 10.1109/EESMS.2018.8405826.
Y. Driouich, M. Parente, and E. Tronci. "Model Checking Cyber-Physical Energy Systems." In Proceedings of 2017 International Renewable and Sustainable Energy Conference, IRSEC 2017. Institute of Electrical and Electronics Engineers Inc., 2018. DOI: 10.1109/IRSEC.2017.8477334.
T. Mancini, F. Mari, A. Massini, I. Melatti, I. Salvo, S. Sinisi, E. Tronci, R. Ehrig, S. Röblitz, and B. Leeners. "Computing Personalised Treatments through In Silico Clinical Trials. A Case Study on Downregulation in Assisted Reproduction." In 25th RCRA International Workshop on "Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion" (RCRA 2018)., 2018. DOI: 10.29007/g864.
T. Mancini, F. Mari, I. Melatti, I. Salvo, and E. Tronci. "An Efficient Algorithm for Network Vulnerability Analysis Under Malicious Attacks." In Foundations of Intelligent Systems – 24th International Symposium, ISMIS 2018, Limassol, Cyprus, October 29-31, 2018, Proceedings, 302–312., 2018. Notes: Best Paper. DOI: 10.1007/978-3-030-01851-1_29.
T. Mancini, F. Mari, I. Melatti, I. Salvo, E. Tronci, J. Gruber, B. Hayes, M. Prodanovic, and L. Elmegaard. "Parallel Statistical Model Checking for Safety Verification in Smart Grids." In 2018 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), 1–6., 2018. DOI: 10.1109/SmartGridComm.2018.8587416.
T. Mancini, E. Tronci, A. Scialanca, F. Lanciotti, A. Finzi, R. Guarneri, and S. Di Pompeo. "Optimal Fault-Tolerant Placement of Relay Nodes in a Mission Critical Wireless Network." In 25th RCRA International Workshop on "Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion" (RCRA 2018)., 2018. DOI: 10.29007/grw9.
V. Alimguzhin, F. Mari, I. Melatti, I. Salvo, and E. Tronci. "Linearising Discrete Time Hybrid Systems." IEEE Transactions on Automatic Control 62, no. 10 (2017): 5357–5364. ISSN: 0018-9286. DOI: 10.1109/TAC.2017.2694559.
Abstract: Model Based Design approaches for embedded systems aim at generating correct-by-construction control software, guaranteeing that the closed loop system (controller and plant) meets given system level formal specifications. This technical note addresses control synthesis for safety and reachability properties of possibly non-linear discrete time hybrid systems. By means of syntactical transformations that require non-linear terms to be Lipschitz continuous functions, we over-approximate non-linear dynamics with a linear system whose controllers are guaranteed to be controllers of the original system. We evaluate performance of our approach on meaningful control synthesis benchmarks, also comparing it to a state-of-the-art tool.
Y. Driouich, M. Parente, and E. Tronci. "Modeling cyber-physical systems for automatic verification." In 14th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design (SMACD 2017), 1–4., 2017. DOI: 10.1109/SMACD.2017.7981621.
B. P. Hayes, I. Melatti, T. Mancini, M. Prodanovic, and E. Tronci. "Residential Demand Management using Individualised Demand Aware Price Policies." IEEE Transactions On Smart Grid 8, no. 3 (2017): 1284–1294. DOI: 10.1109/TSG.2016.2596790.
M. P. Hengartner, T. H. C. Kruger, K. Geraedts, E. Tronci, T. Mancini, F. Ille, M. Egli, S. Röblitz, R. Ehrig, L. Saleh et al. "Negative affect is unrelated to fluctuations in hormone levels across the menstrual cycle: Evidence from a multisite observational study across two successive cycles." Journal of Psychosomatic Research 99 (2017): 21–27. DOI: 10.1016/j.jpsychores.2017.05.018.
B. Leeners, T. H. C. Kruger, K. Geraedts, E. Tronci, T. Mancini, F. Ille, M. Egli, S. Röblitz, L. Saleh, K. Spanaus et al. "Lack of Associations between Female Hormone Levels and Visuospatial Working Memory, Divided Attention and Cognitive Bias across Two Consecutive Menstrual Cycles." Frontiers in Behavioral Neuroscience 11 (2017): 120. ISSN: 1662-5153. DOI: 10.3389/fnbeh.2017.00120.
Abstract: Background: Interpretation of observational studies on associations between prefrontal cognitive functioning and hormone levels across the female menstrual cycle is complicated due to small sample sizes and poor replicability. Methods: This observational multisite study comprised data of n=88 menstruating women from Hannover, Germany, and Zurich, Switzerland, assessed during a first cycle and n=68 re-assessed during a second cycle to rule out practice effects and false-positive chance findings. We assessed visuospatial working memory, attention, cognitive bias and hormone levels at four consecutive time-points across both cycles. In addition to inter-individual differences we examined intra-individual change over time (i.e., within-subject effects). Results: Oestrogen, progesterone and testosterone did not relate to inter-individual differences in cognitive functioning. There was a significant negative association between intra-individual change in progesterone and change in working memory from pre-ovulatory to mid-luteal phase during the first cycle, but that association did not replicate in the second cycle. Intra-individual change in testosterone related negatively to change in cognitive bias from menstrual to pre-ovulatory as well as from pre-ovulatory to mid-luteal phase in the first cycle, but these associations did not replicate in the second cycle. Conclusions: There is no consistent association between women's hormone levels, in particular oestrogen and progesterone, and attention, working memory and cognitive bias. That is, anecdotal findings observed during the first cycle did not replicate in the second cycle, suggesting that these are false-positives attributable to random variation and systematic biases such as practice effects. Due to methodological limitations, positive findings in the published literature must be interpreted with reservation.
T. Mancini, F. Mari, A. Massini, I. Melatti, I. Salvo, and E. Tronci. "On minimising the maximum expected verification time." Information Processing Letters (2017). DOI: 10.1016/j.ipl.2017.02.001.
T. Mancini, A. Massini, and E. Tronci. "Parallelization of Cycle-Based Logic Simulation." Parallel Processing Letters 27, no. 02 (2017). DOI: 10.1142/S0129626417500037.
T. Mancini. "Now or Never: Negotiating Efficiently with Unknown or Untrusted Counterparts." Fundamenta Informaticae 149, no. 1-2 (2016): 61–100. DOI: 10.3233/FI-2016-1443.
T. Mancini, F. Mari, A. Massini, I. Melatti, and E. Tronci. "Anytime system level verification via parallel random exhaustive hardware in the loop simulation." Microprocessors and Microsystems 41 (2016): 12–28. ISSN: 0141-9331. DOI: 10.1016/j.micpro.2015.10.010.
Abstract: Abstract System level verification of cyber-physical systems has the goal of verifying that the whole (i.e., software + hardware) system meets the given specifications. Model checkers for hybrid systems cannot handle system level verification of actual systems. Thus, Hardware In the Loop Simulation (HILS) is currently the main workhorse for system level verification. By using model checking driven exhaustive HILS, System Level Formal Verification (SLFV) can be effectively carried out for actual systems. We present a parallel random exhaustive HILS based model checker for hybrid systems that, by simulating all operational scenarios exactly once in a uniform random order, is able to provide, at any time during the verification process, an upper bound to the probability that the System Under Verification exhibits an error in a yet-to-be-simulated scenario (Omission Probability). We show effectiveness of the proposed approach by presenting experimental results on SLFV of the Inverted Pendulum on a Cart and the Fuel Control System examples in the Simulink distribution. To the best of our knowledge, no previously published model checker can exhaustively verify hybrid systems of such a size and provide at any time an upper bound to the Omission Probability.
T. Mancini, F. Mari, A. Massini, I. Melatti, and E. Tronci. "SyLVaaS: System Level Formal Verification as a Service." Fundamenta Informaticae 149, no. 1-2 (2016): 101–132. DOI: 10.3233/FI-2016-1444.
V. Alimguzhin, F. Mari, I. Melatti, E. Tronci, E. Ebeid, S. A. Mikkelsen, R. H. Jacobsen, J. K. Gruber, B. Hayes, F. Huerta et al. "A Glimpse of SmartHG Project Test-bed and Communication Infrastructure." In Digital System Design (DSD), 2015 Euromicro Conference on, 225–232., 2015. DOI: 10.1109/DSD.2015.106.
R. Ehrig, T. Dierkes, S. Schaefer, S. Roeblitz, E. Tronci, T. Mancini, I. Salvo, V. Alimguzhin, F. Mari, I. Melatti et al. "An integrative approach for model driven computation of treatments in reproductive medicine." In Proceedings of the 15th International Symposium on Mathematical and Computational Biology (BIOMAT 2015), Rorkee, India., 2015. DOI: 10.1142/9789813141919_0005.
T. Mancini. "Now or Never: negotiating efficiently with unknown counterparts." In proceedings of the 22nd RCRA International Workshop. Ferrara, Italy. CEUR, 2015 (Co-located with the 14th Conference of the Italian Association for Artificial Intelligence (AI*IA 2015)). (2015).
T. Mancini, F. Mari, I. Melatti, I. Salvo, E. Tronci, J. K. Gruber, B. Hayes, M. Prodanovic, and L. Elmegaard. "User Flexibility Aware Price Policy Synthesis for Smart Grids." In Digital System Design (DSD), 2015 Euromicro Conference on, 478–485., 2015. DOI: 10.1109/DSD.2015.35.
Toni Mancini, Federico Mari, Annalisa Massini, Igor Melatti, and Enrico Tronci. "Simulator Semantics for System Level Formal Verification." In Proceedings Sixth International Symposium on Games, Automata, Logics and Formal Verification (GandALF 2015), 2015. DOI: 10.4204/EPTCS.193.7.
Toni Mancini, Enrico Tronci, Ivano Salvo, Federico Mari, Annalisa Massini, and Igor Melatti. "Computing Biological Model Parameters by Parallel Statistical Model Checking." International Work Conference on Bioinformatics and Biomedical Engineering (IWBBIO 2015) 9044 (2015): 542–554. DOI: 10.1007/978-3-319-16480-9_52.
Toni Mancini, Federico Mari, Annalisa Massini, Igor Melatti, and Enrico Tronci. "Anytime System Level Verification via Random Exhaustive Hardware In The Loop Simulation." In In Proceedings of 17th EuroMicro Conference on Digital System Design (DSD 2014)., 2014. DOI: 10.1109/DSD.2014.91.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Model Based Synthesis of Control Software from System Level Formal Specifications." ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY 23, no. 1 (2014): Article 6. ACM. ISSN: 1049-331X. DOI: 10.1145/2559934.
E. Tronci, T. Mancini, F. Mari, I. Melatti, R. H. Jacobsen, E. Ebeid, S. A. Mikkelsen, M. Prodanovic, J. K. Gruber, and B. Hayes. "SmartHG: Energy Demand Aware Open Services for Smart Grid Intelligent Automation." In Proceedings of the Work in Progress Session of SEAA/DSD 2014., 2014. ISBN: 978-3-902457-40-0.
E. Tronci, T. Mancini, F. Mari, I. Melatti, I. Salvo, M. Prodanovic, J. K. Gruber, B. Hayes, and L. Elmegaard. "Demand-Aware Price Policy Synthesis and Verification Services for Smart Grids." In Proceedings of Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference On., 2014. DOI: 10.1109/SmartGridComm.2014.7007745.
E. Tronci, T. Mancini, I. Salvo, F. Mari, I. Melatti, A. Massini, S. Sinisi, F. Davì, T. Dierkes, R. Ehrig et al. "Patient-Specific Models from Inter-Patient Biological Models and Clinical Records." In Formal Methods in Computer-Aided Design (FMCAD)., 2014. DOI: 10.1109/FMCAD.2014.6987615.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "A Map-Reduce Parallel Approach to Automatic Synthesis of Control Software." In Proc. of International SPIN Symposium on Model Checking of Software (SPIN 2013), 43–60. Lecture Notes in Computer Science 7976. Springer - Verlag, 2013. ISSN: 0302-9743. ISBN: 978-3-642-39175-0. DOI: 10.1007/978-3-642-39176-7_4.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "On-the-Fly Control Software Synthesis." In Proceedings of International SPIN Symposium on Model Checking of Software (SPIN 2013), 61–80. Lecture Notes in Computer Science 7976. Springer - Verlag, 2013. ISSN: 0302-9743. ISBN: 978-3-642-39175-0. DOI: 10.1007/978-3-642-39176-7_5.
Giuseppe Della Penna, Benedetto Intrigila, Daniele Magazzeni, Igor Melatti, and Enrico Tronci. "CGMurphi: Automatic synthesis of numerical controllers for nonlinear hybrid systems." European Journal of Control 19, no. 1 (2013): 14–36. Elsevier North-Holland, Inc.. ISSN: 0947-3580. DOI: 10.1016/j.ejcon.2013.02.001.
Toni Mancini, Federico Mari, Annalisa Massini, Igor Melatti, Fabio Merli, and Enrico Tronci. "System Level Formal Verification via Model Checking Driven Simulation." In Proceedings of the 25th International Conference on Computer Aided Verification. July 13-19, 2013, Saint Petersburg, Russia, 296–312. Lecture Notes in Computer Science 8044. Springer - Verlag, 2013. ISSN: 0302-9743. ISBN: 978-3-642-39798-1. DOI: 10.1007/978-3-642-39799-8_21.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Linear Constraints and Guarded Predicates as a Modeling Language for Discrete Time Hybrid Systems." International Journal on Advances in Software vol. 6, nr 1&2 (2013): 155–169. IARIA. ISSN: 1942-2628.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. Model Based Synthesis of Control Software from System Level Formal Specifications. Vol. abs/1107.5638. CoRR, Technical Report, 2013. http://arxiv.org/abs/1107.5638 (accessed April 19, 2019).
Abstract: Many Embedded Systems are indeed Software Based Control Systems, that is control systems whose controller consists of control software running on a microcontroller device. This motivates investigation on Formal Model Based Design approaches for automatic synthesis of embedded systems control software.
We present an algorithm, along with a tool QKS implementing it, that from a formal model (as a Discrete Time Linear Hybrid System) of the controlled system (plant), implementation specifications (that is, number of bits in the Analog-to-Digital, AD, conversion) and System Level Formal Specifications (that is, safety and liveness requirements for the closed loop system) returns correct-by-construction control software that has a Worst Case Execution Time (WCET) linear in the number of AD bits and meets the given specifications.
We show feasibility of our approach by presenting experimental results on using it to synthesize control software for a buck DC-DC converter, a widely used mixed-mode analog circuit, and for the inverted pendulum.
Federico Mari, Igor Melatti, Enrico Tronci, and Alberto Finzi. "A multi-hop advertising discovery and delivering protocol for multi administrative domain MANET." Mobile Information Systems 3, no. 9 (2013): 261–280. IOS Press. ISSN: 1574-017x (Print) 1875-905X (Online). DOI: 10.3233/MIS-130162.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. A Map-Reduce Parallel Approach to Automatic Synthesis of Control Software. Vol. abs/1210.2276. CoRR, Technical Report, 2012. http://arxiv.org/abs/1210.2276 (accessed April 19, 2019).
Abstract: Many Control Systems are indeed Software Based Control Systems, i.e. control systems whose controller consists of control software running on a microcontroller device. This motivates investigation on Formal Model Based Design approaches for automatic synthesis of control software.
Available algorithms and tools (e.g., QKS) may require weeks or even months of computation to synthesize control software for large-size systems. This motivates search for parallel algorithms for control software synthesis.
In this paper, we present a map-reduce style parallel algorithm for control software synthesis when the controlled system (plant) is modeled as discrete time linear hybrid system. Furthermore we present an MPI-based implementation PQKS of our algorithm. To the best of our knowledge, this is the first parallel approach for control software synthesis.
We experimentally show effectiveness of PQKS on two classical control synthesis problems: the inverted pendulum and the multi-input buck DC/DC converter. Experiments show that PQKS efficiency is above 65%. As an example, PQKS requires about 16 hours to complete the synthesis of control software for the pendulum on a cluster with 60 processors, instead of the 25 days needed by the sequential algorithm in QKS.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Automatic Control Software Synthesis for Quantized Discrete Time Hybrid Systems." In Proceedings of the 51st IEEE Conference on Decision and Control, CDC 2012, December 10-13, 2012, Maui, HI, USA, 6120–6125. IEEE, 2012. ISBN: 978-1-4673-2065-8. Notes: Techreport version can be found at http://arxiv.org/abs/1207.4098. DOI: 10.1109/CDC.2012.6426260.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. Automatic Control Software Synthesis for Quantized Discrete Time Hybrid Systems. Vol. abs/1207.4098. CoRR, Technical Report, 2012. http://arxiv.org/abs/1207.4098 (accessed April 19, 2019).
Abstract: Many Embedded Systems are indeed Software Based Control Systems, that is control systems whose controller consists of control software running on a microcontroller device. This motivates investigation on Formal Model Based Design approaches for automatic synthesis of embedded systems control software. This paper addresses control software synthesis for discrete time nonlinear systems. We present a methodology to overapproximate the dynamics of a discrete time nonlinear hybrid system H by means of a discrete time linear hybrid system L(H), in such a way that controllers for L(H) are guaranteed to be controllers for H. We present experimental results on the inverted pendulum, a challenging and meaningful benchmark in nonlinear Hybrid Systems control.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "On Model Based Synthesis of Embedded Control Software." In Proceedings of the 12th International Conference on Embedded Software, EMSOFT 2012, part of the Eighth Embedded Systems Week, ESWeek 2012, Tampere, Finland, October 7-12, 2012, edited by Ahmed Jerraya and Luca P. Carloni and Florence Maraninchi and John Regehr, 227–236. ACM, 2012. ISBN: 978-1-4503-1425-1. Notes: Techreport version can be found at arxiv.org. DOI: 10.1145/2380356.2380398.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. On Model Based Synthesis of Embedded Control Software. Vol. abs/1207.4474. CoRR, Technical Report, 2012. http://arxiv.org/abs/1207.4474 (accessed April 19, 2019).
Abstract: Many Embedded Systems are indeed Software Based Control Systems (SBCSs), that is control systems whose controller consists of control software running on a microcontroller device. This motivates investigation on Formal Model Based Design approaches for control software. Given the formal model of a plant as a Discrete Time Linear Hybrid System and the implementation specifications (that is, number of bits in the Analog-to-Digital (AD) conversion) correct-by-construction control software can be automatically generated from System Level Formal Specifications of the closed loop system (that is, safety and liveness requirements), by computing a suitable finite abstraction of the plant.
With respect to given implementation specifications, the automatically generated code implements a time optimal control strategy (in terms of set-up time), has a Worst Case Execution Time linear in the number of AD bits $b$, but unfortunately, its size grows exponentially with respect to $b$. In many embedded systems, there are severe restrictions on the computational resources (such as memory or computational power) available to microcontroller devices.
This paper addresses model based synthesis of control software by trading system level non-functional requirements (such as optimal set-up time, ripple) with software non-functional requirements (its footprint). Our experimental results show the effectiveness of our approach: for the inverted pendulum benchmark, by using a quantization schema with 12 bits, the size of the small controller is less than 6% of the size of the time optimal one.
Ed Kuijpers, Luigi Carotenuto, Jean-Cristophe Malapert, Daniela Markov-Vetter, Igor Melatti, Andrea Orlandini, and Ranni Pinchuk. "Collaboration on ISS Experiment Data and Knowledge Representation." In Proc. of IAC 2012. Vol. D.5.11., 2012.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Synthesizing Control Software from Boolean Relations." International Journal on Advances in Software vol. 5, nr 3&4 (2012): 212–223. IARIA. ISSN: 1942-2628.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Control Software Visualization." In Proceedings of INFOCOMP 2012, The Second International Conference on Advanced Communications and Computation, 15–20. ThinkMind, 2012. ISSN: 978-1-61208-226-4.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Linear Constraints as a Modeling Language for Discrete Time Hybrid Systems." In Proceedings of ICSEA 2012, The Seventh International Conference on Software Engineering Advances, 664–671. ThinkMind, 2012.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Undecidability of Quantized State Feedback Control for Discrete Time Linear Hybrid Systems." In Theoretical Aspects of Computing – ICTAC 2012, edited by A. Roychoudhury and M. D'Souza, 243–258. Lecture Notes in Computer Science 7521. Springer Berlin Heidelberg, 2012. ISBN: 978-3-642-32942-5. DOI: 10.1007/978-3-642-32943-2_19.
Giovanni Verzino, Federico Cavaliere, Federico Mari, Igor Melatti, Giovanni Minei, Ivano Salvo, Yuri Yushtein, and Enrico Tronci. "Model checking driven simulation of sat procedures." In Proceedings of the 12th International Conference on Space Operations (SpaceOps 2012), 2012. DOI: 10.2514/6.2012-1275611.
Amedeo Cesta, Simone Fratini, Andrea Orlandini, Alberto Finzi, and Enrico Tronci. "Flexible Plan Verification: Feasibility Results." Fundamenta Informaticae 107, no. 2 (2011): 111–137. DOI: 10.3233/FI-2011-397.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. From Boolean Functional Equations to Control Software. Vol. abs/1106.0468. CoRR, Technical Report, 2011. http://arxiv.org/abs/1106.0468 (accessed April 19, 2019).
Abstract: Many software as well as digital hardware automatic synthesis methods define the set of implementations meeting the given system specifications with a boolean relation K. In such a context a fundamental step in the software (hardware) synthesis process is finding effective solutions to the functional equation defined by K. This entails finding a (set of) boolean function(s) F (typically represented using OBDDs, Ordered Binary Decision Diagrams) such that: 1) for all x for which K is satisfiable, K(x, F(x)) = 1 holds; 2) the implementation of F is efficient with respect to given implementation parameters such as code size or execution time. While this problem has been widely studied in digital hardware synthesis, little has been done in a software synthesis context. Unfortunately, the approaches developed for hardware synthesis cannot be directly used in a software context. This motivates investigation of effective methods to solve the above problem when F has to be implemented with software. In this paper we present an algorithm that, from an OBDD representation for K, generates a C code implementation for F that has the same size as the OBDD for F and a WCET (Worst Case Execution Time) at most O(nr), where n = |x| is the number of arguments of functions in F and r is the number of functions in F.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "From Boolean Relations to Control Software." In Proceedings of ICSEA 2011, The Sixth International Conference on Software Engineering Advances, 528–533. ThinkMind, 2011. ISSN: 978-1-61208-165-6. Notes: Best Paper Award.
Abstract: Many software as well as digital hardware automatic synthesis methods define the set of implementations meeting the given system specifications with a boolean relation K. In such a context a fundamental step in the software (hardware) synthesis process is finding effective solutions to the functional equation defined by K. This entails finding a (set of) boolean function(s) F (typically represented using OBDDs, Ordered Binary Decision Diagrams) such that: 1) for all x for which K is satisfiable, K(x, F(x)) = 1 holds; 2) the implementation of F is efficient with respect to given implementation parameters such as code size or execution time. While this problem has been widely studied in digital hardware synthesis, little has been done in a software synthesis context. Unfortunately, the approaches developed for hardware synthesis cannot be directly used in a software context. This motivates investigation of effective methods to solve the above problem when F has to be implemented with software. In this paper we present an algorithm that, from an OBDD representation for K, generates a C code implementation for F that has the same size as the OBDD for F and a WCET (Worst Case Execution Time) linear in nr, where n = |x| is the number of input arguments for functions in F and r is the number of functions in F.
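The functional equation K(x, F(x)) = 1 described in the two abstracts above can be illustrated with a brute-force Python sketch that extracts one admissible F from a relation K by enumeration. This is a conceptual toy, not the OBDD-based algorithm of the papers, and the example relation K is an assumption.

```python
from itertools import product

def solve_functional_equation(K, n_in, n_out):
    """Return F as a dict x -> u with K(x, F(x)) = 1 whenever K is satisfiable in x."""
    F = {}
    for x in product((0, 1), repeat=n_in):
        for u in product((0, 1), repeat=n_out):
            if K(x, u):
                F[x] = u          # pick the first admissible output
                break
    return F

# Toy relation (assumption): the single output bit must equal the AND of the two inputs.
K = lambda x, u: u[0] == (x[0] & x[1])
print(solve_functional_equation(K, n_in=2, n_out=1))
# {(0, 0): (0,), (0, 1): (0,), (1, 0): (0,), (1, 1): (1,)}
```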
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. Quantized Feedback Control Software Synthesis from System Level Formal Specifications for Buck DC/DC Converters. Vol. abs/1105.5640. CoRR, Technical Report, 2011. http://arxiv.org/abs/1105.5640 (accessed April 19, 2019).
Abstract: Many Embedded Systems are indeed Software Based Control Systems (SBCSs), that is control systems whose controller consists of control software running on a microcontroller device. This motivates investigation on Formal Model Based Design approaches for automatic synthesis of SBCS control software. In previous works we presented an algorithm, along with a tool QKS implementing it, that from a formal model (as a Discrete Time Linear Hybrid System, DTLHS) of the controlled system (plant), implementation specifications (that is, number of bits in the Analog-to-Digital, AD, conversion) and System Level Formal Specifications (that is, safety and liveness requirements for the closed loop system) returns correct-by-construction control software that has a Worst Case Execution Time (WCET) linear in the number of AD bits and meets the given specifications. In this technical report we present full experimental results on using it to synthesize control software for two versions of buck DC-DC converters (single-input and multi-input), a widely used mixed-mode analog circuit.
Amedeo Cesta, Alberto Finzi, Simone Fratini, Andrea Orlandini, and Enrico Tronci. "Validation and verification issues in a timeline-based planning system." The Knowledge Engineering Review 25, no. 03 (2010): 299–318. Cambridge University Press. DOI: 10.1017/S0269888910000160.
Abstract: One of the key points to take into account to foster effective introduction of AI planning and scheduling systems in the real world is to develop end user trust in the related technologies. Automated planning and scheduling systems often bring solutions to the users which are neither "obvious" nor immediately acceptable for them. This is due to the ability of these tools to take into account a considerable number of temporal and causal constraints and to employ resolution processes often designed to optimize the solution with respect to non trivial evaluation functions. To increase technology trust, the study of tools for verifying and validating plans and schedules produced by AI systems might be instrumental. In general, validation and verification techniques represent a needed complementary technology in developing domain independent architectures for automated problem solving. This paper presents a preliminary report of the issues concerned with applying two software tools for formal verification of finite state systems to the validation of the solutions produced by MrSPOCK, a recent effort for building a timeline based planning tool in an ESA project.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Synthesis of Quantized Feedback Control Software for Discrete Time Linear Hybrid Systems." In Computer Aided Verification, edited by T. Touili, B. Cook and P. Jackson, 180–195. Lecture Notes in Computer Science 6174. Springer Berlin / Heidelberg, 2010. DOI: 10.1007/978-3-642-14295-6_20.
Abstract: We present an algorithm that, given a Discrete Time Linear Hybrid System, returns a correct-by-construction software implementation K for a (near time optimal) robust quantized feedback controller for the given system, along with the set of states on which K is guaranteed to work correctly (controllable region). Furthermore, K has a Worst Case Execution Time linear in the number of bits of the quantization schema.
Andrea Bobbio, Ester Ciancamerla, Saverio Di Blasi, Alessandro Iacomini, Federico Mari, Igor Melatti, Michele Minichino, Alessandro Scarlatti, Enrico Tronci, Roberta Terruggia et al. "Risk analysis via heterogeneous models of SCADA interconnecting Power Grids and Telco networks." In Proceedings of Fourth International Conference on Risks and Security of Internet and Systems (CRiSIS), 90–97., 2009. DOI: 10.1109/CRISIS.2009.5411974.
Abstract: The automation of power grids by means of supervisory control and data acquisition (SCADA) systems has led to an improvement of power grid operations and functionalities but also to pervasive cyber interdependencies between power grids and telecommunication networks. Many power grid services are increasingly depending upon the adequate functionality of SCADA system which in turn strictly depends on the adequate functionality of its communication infrastructure. We propose to tackle the SCADA risk analysis by means of different and heterogeneous modeling techniques and software tools. We demonstrate the applicability of our approach through a case study on an actual SCADA system for an electrical power distribution grid. The modeling techniques we discuss aim at providing a probabilistic dependability analysis, followed by a worst case analysis in presence of malicious attacks and a real-time performance evaluation.
Amedeo Cesta, Alberto Finzi, Simone Fratini, Andrea Orlandini, and Enrico Tronci. "Flexible Plan Verification: Feasibility Results." In 16th RCRA International Workshop on "Experimental evaluation of algorithms for solving problems with combinatorial explosion" (RCRA). Proceedings., 2009.
Amedeo Cesta, Alberto Finzi, Simone Fratini, Andrea Orlandini, and Enrico Tronci. "Flexible Timeline-Based Plan Verification." In KI 2009: Advances in Artificial Intelligence, 32nd Annual German Conference on AI, Paderborn, Germany, September 15-18, 2009. Proceedings, edited by Bärbel Mertsching, M. Hund and M. Z. Aziz, 49–56. Lecture Notes in Computer Science 5803. Springer, 2009. ISSN: 978-3-642-04616-2. DOI: 10.1007/978-3-642-04617-9_7.
Amedeo Cesta, Alberto Finzi, Simone Fratini, Andrea Orlandini, and Enrico Tronci. "Verifying Flexible Timeline-based Plans." In E-Proc. of ICAPS Workshop on Validation and Verification of Planning and Scheduling Systems., 2009.
Abstract: The synthesis of flexible temporal plans has demonstrated wide application possibilities in heterogeneous domains. We are currently studying the connection between plan generation and execution from the particular perspective of verifying a flexible plan before actual execution. This paper explores how a model-checking verification tool, based on UPPAAL-TIGA, is suitable for verifying flexible temporal plans. We first describe the formal model, the formalism, and the verification method. Furthermore, we discuss our own approach and some preliminary empirical results using a real-world case study.
Federico Mari, Igor Melatti, Ivano Salvo, Enrico Tronci, Lorenzo Alvisi, Allen Clement, and Harry Li. "Model Checking Coalition Nash Equilibria in MAD Distributed Systems." In Stabilization, Safety, and Security of Distributed Systems, 11th International Symposium, SSS 2009, Lyon, France, November 3-6, 2009. Proceedings, edited by R. Guerraoui and F. Petit, 531–546. Lecture Notes in Computer Science 5873. Springer, 2009. DOI: 10.1007/978-3-642-05118-0_37.
Abstract: We present two OBDD based model checking algorithms for the verification of Nash equilibria in finite state mechanisms modeling Multiple Administrative Domains (MAD) distributed systems with possibly colluding agents (coalitions) and with possibly faulty or malicious nodes (Byzantine agents). Given a finite state mechanism, a proposed protocol for each agent and the maximum sizes f for Byzantine agents and q for agents collusions, our model checkers return Pass if the proposed protocol is an ε-f-q-Nash equilibrium, i.e. no coalition of size up to q may have an interest greater than ε in deviating from the proposed protocol when up to f Byzantine agents are present, Fail otherwise. We implemented our model checking algorithms within the NuSMV model checker: the first one explicitly checks equilibria for each coalition, while the second represents symbolically all coalitions. We present experimental results showing their effectiveness for moderate size mechanisms. For example, we can verify coalition Nash equilibria for mechanisms whose corresponding normal form games would have more than $5 \times 10^{21}$ entries. Moreover, we compare the two approaches, and the explicit algorithm turns out to outperform the symbolic one. To the best of our knowledge, no model checking algorithm for verification of Nash equilibria of mechanisms with coalitions has been previously published.
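The property being verified can be illustrated, for single-agent deviations in a plain normal-form game, with the Python sketch below. It is not the symbolic OBDD algorithm of the paper and it ignores coalitions and Byzantine agents; the toy game is an assumption.

```python
def is_epsilon_nash(payoffs, profile, eps):
    """payoffs[i][a]: reward of player i under joint action a (a tuple of actions)."""
    for i in range(len(profile)):
        current = payoffs[i][tuple(profile)]
        actions = {a[i] for a in payoffs[i]}          # player i's action set
        for dev in actions:
            deviated = list(profile)
            deviated[i] = dev
            if payoffs[i][tuple(deviated)] > current + eps:
                return False                          # profitable unilateral deviation
    return True

# Toy 2x2 coordination game (assumption): both players prefer to match on action 0.
payoffs = [
    {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1},    # player 0
    {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1},    # player 1
]
print(is_epsilon_nash(payoffs, (0, 0), eps=0.0))     # True
```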
Silvia Mazzini, Stefano Puri, Federico Mari, Igor Melatti, and Enrico Tronci. "Formal Verification at System Level." In DAta Systems In Aerospace (DASIA), Org. EuroSpace, Canadian Space Agency, CNES, ESA, EUMETSAT. Istanbul, Turkey, EuroSpace., 2009.
Abstract: System Level Analysis calls for a language comprehensible to experts with different backgrounds and yet precise enough to support meaningful analyses. SysML is emerging as an effective balance between such conflicting goals. In this paper we outline some of the results on SysML based system level functional formal verification obtained in an ESA/ESTEC study, carried out in collaboration between INTECS and La Sapienza University of Roma. The study focuses on SysML based system level functional requirements techniques.
Igor Melatti, Robert Palmer, Geoffrey Sawaya, Yu Yang, Robert Mike Kirby, and Ganesh Gopalakrishnan. "Parallel and distributed model checking in Eddy." Int. J. Softw. Tools Technol. Transf. 11, no. 1 (2009): 13–25. Springer-Verlag. ISSN: 1433-2779. DOI: 10.1007/s10009-008-0094-x.
Abstract: Model checking of safety properties can be scaled up by pooling the CPU and memory resources of multiple computers. As compute clusters containing 100s of nodes, with each node realized using multi-core (e.g., 2) CPUs will be widespread, a model checker based on the parallel (shared memory) and distributed (message passing) paradigms will more efficiently use the hardware resources. Such a model checker can be designed by having each node employ two shared memory threads that run on the (typically) two CPUs of a node, with one thread responsible for state generation, and the other for efficient communication, including (1) performing overlapped asynchronous message passing, and (2) aggregating the states to be sent into larger chunks in order to improve communication network utilization. We present the design details of such a novel model checking architecture called Eddy. We describe the design rationale, details of how the threads interact and yield control, exchange messages, as well as detect termination. We have realized an instance of this architecture for the Murphi modeling language. Called Eddy_Murphi, we report its performance over the number of nodes as well as communication parameters such as those controlling state aggregation. Nearly linear reduction of compute time with increasing number of nodes is observed. Our thread task partition is done in such a way that it is modular, easy to port across different modeling languages, and easy to tune across a variety of platforms.
Amedeo Cesta, Alberto Finzi, Simone Fratini, Andrea Orlandini, and Enrico Tronci. "Merging Planning, Scheduling & Verification – A Preliminary Analysis." In In Proc. of 10th ESA Workshop on Advanced Space Technologies for Robotics and Automation (ASTRA)., 2008.
Amedeo Cesta, Alberto Finzi, Simone Fratini, Andrea Orlandini, and Enrico Tronci. "Validation and Verification Issues in a Timeline-based Planning System." In In E-Proc. of ICAPS Workshop on Knowledge Engineering for Planning and Scheduling., 2008.
Flavio Chierichetti, Silvio Lattanzi, Federico Mari, and Alessandro Panconesi. "On Placing Skips Optimally in Expectation." In Web Search and Web Data Mining (WSDM 2008), edited by M. Najork, A. Z. Broder and S. Chakrabarti, 15–24. Acm, 2008. DOI: 10.1145/1341531.1341537.
Abstract: We study the problem of optimal skip placement in an inverted list. Assuming the query distribution to be known in advance, we formally prove that an optimal skip placement can be computed quite efficiently. Our best algorithm runs in time O(n log n), n being the length of the list. The placement is optimal in the sense that it minimizes the expected time to process a query. Our theoretical results are matched by experiments with a real corpus, showing that substantial savings can be obtained with respect to the traditional skip placement strategy, that of placing consecutive skips, each spanning sqrt(n) many locations.
Giuseppe Della Penna, Daniele Magazzeni, Alberto Tofani, Benedetto Intrigila, Igor Melatti, and Enrico Tronci. "Automated Generation Of Optimal Controllers Through Model Checking Techniques." In Informatics in Control Automation and Robotics. Selected Papers from ICINCO 2006, 107–119. Springer, 2008. DOI: 10.1007/978-3-540-79142-3_10.
Federico Mari, Igor Melatti, Ivano Salvo, Enrico Tronci, Lorenzo Alvisi, Allen Clement, and Harry Li. "Model Checking Nash Equilibria in MAD Distributed Systems." In FMCAD '08: Proceedings of the 2008 International Conference on Formal Methods in Computer-Aided Design, edited by A. Cimatti and R. Jones, 1–8. Piscataway, NJ, USA: IEEE Press, 2008. ISSN: 978-1-4244-2735-2. DOI: 10.1109/FMCAD.2008.ECP.16.
Abstract: We present a symbolic model checking algorithm for verification of Nash equilibria in finite state mechanisms modeling Multiple Administrative Domains (MAD) distributed systems. Given a finite state mechanism, a proposed protocol for each agent and an indifference threshold for rewards, our model checker returns PASS if the proposed protocol is a Nash equilibrium (up to the given indifference threshold) for the given mechanism, FAIL otherwise. We implemented our model checking algorithm inside the NuSMV model checker and present experimental results showing its effectiveness for moderate size mechanisms. For example, we can handle mechanisms whose corresponding normal form games would have more than $10^{20}$ entries. To the best of our knowledge, no model checking algorithm for verification of mechanism Nash equilibria has been previously published.
Francesco Brizzolari, Igor Melatti, Enrico Tronci, and Giuseppe Della Penna. "Disk Based Software Verification via Bounded Model Checking." In APSEC '07: Proceedings of the 14th Asia-Pacific Software Engineering Conference, 358–365. Washington, DC, USA: IEEE Computer Society, 2007. ISSN: 0-7695-3057-5. DOI: 10.1109/APSEC.2007.43.
Abstract: One of the most successful approaches to automatic software verification is SAT based bounded model checking (BMC). One of the main factors limiting the size of programs that can be automatically verified via BMC is the huge number of clauses that the backend SAT solver has to process. In fact, because of this, the SAT solver may easily run out of RAM. We present two disk based algorithms that can considerably decrease the number of clauses that a BMC backend SAT solver has to process in RAM. Our experimental results show that using our disk based algorithms we can automatically verify programs that are out of reach for RAM based BMC.
Giuseppe Della Penna, Daniele Magazzeni, Alberto Tofani, Benedetto Intrigila, Igor Melatti, and Enrico Tronci. "Automatic Synthesis of Robust Numerical Controllers." In Icas '07, 4. IEEE Computer Society, 2007. ISSN: 0-7695-2859-5. DOI: 10.1109/CONIELECOMP.2007.59.
Abstract: A major problem of numerical controllers is their robustness, i.e. the state read from the plant may not be in the controller table, although it may be close to some states in the table. For continuous systems, this problem is typically handled by interpolation techniques. Unfortunately, when the plant contains both continuous and discrete variables, the interpolation approach does not work well. To cope with this kind of system, we propose a general methodology that exploits explicit model checking in an innovative way to automatically synthesize a (time-) optimal numerical controller from a plant specification and to apply an optimized strengthening algorithm only on the most significant states, in order to reach an acceptable robustness degree. We implemented all the algorithms within our CGMurphi tool, an extension of the well-known CMurphi verifier, and tested the effectiveness of our approach by applying it to the well-known truck and trailer obstacle avoidance problem.
Benedetto Intrigila, Igor Melatti, Alberto Tofani, and Guido Macchiarelli. "Computational models of myocardial endomysial collagen arrangement." Computer Methods and Programs in Biomedicine 86, no. 3 (2007): 232–244. Elsevier North-Holland, Inc.. ISSN: 0169-2607. DOI: 10.1016/j.cmpb.2007.03.004.
Abstract: Collagen extracellular matrix is one of the factors related to high passive stiffness of cardiac muscle. However, the architecture and the mechanical aspects of the cardiac collagen matrix are not completely known. In particular, endomysial collagen contribution to the passive mechanics of cardiac muscle as well as its micro anatomical arrangement is still a matter of debate. In order to investigate mechanical and structural properties of endomysial collagen, we consider two alternative computational models of some specific aspects of the cardiac muscle. These two models represent two different views of endomysial collagen distribution: (1) the traditional view and (2) a new view suggested by the data obtained from scanning electron microscopy (SEM) in NaOH macerated samples (a method for isolating collagen from the other tissue). We model the myocardial tissue as a net of spring elements representing the cardiomyocytes together with the endomysial collagen distribution. Each element is a viscous elastic spring, characterized by an elastic and a viscous constant. We connect these springs to imitate the interconnections between collagen fibers. Then we apply to the net of springs some external forces of suitable magnitude and direction, obtaining an extension of the net itself. In our setting, the ratio forces magnitude /net extension is intended to model the stress /strain ratio of a microscopical portion of the myocardial tissue. To solve the problem of the correct identification of the values of the different parameters involved, we use an artificial neural network approach. In particular, we use this technique to learn, given a distribution of external forces, the elastic constants of the springs needed to obtain a desired extension as an equilibrium position. Our experimental findings show that, in the model of collagen distribution structured according to the new view, a given stress /strain ratio (of the net of springs, in the sense specified above) is obtained with much smaller (w.r.t. the other model, corresponding to the traditional view) elasticity constants of the springs. This seems to indicate that by an appropriate structure, a given stiffness of the myocardial tissue can be obtained with endomysial collagen fibers of much smaller size.
Federico Mari, and Enrico Tronci. "CEGAR Based Bounded Model Checking of Discrete Time Hybrid Systems." In Hybrid Systems: Computation and Control (HSCC 2007), edited by A. Bemporad, A. Bicchi and G. C. Buttazzo, 399–412. Lecture Notes in Computer Science 4416. Springer, 2007. DOI: 10.1007/978-3-540-71493-4_32.
Abstract: Many hybrid systems can be conveniently modeled as Piecewise Affine Discrete Time Hybrid Systems PA-DTHS. As is well known, Bounded Model Checking (BMC) for such systems comes down to solving a Mixed Integer Linear Programming (MILP) feasibility problem. We present a SAT based BMC algorithm for automatic verification of PA-DTHSs. Using Counterexample Guided Abstraction Refinement (CEGAR) our algorithm gradually transforms a PA-DTHS verification problem into larger and larger SAT problems. Our experimental results show that our approach can handle PA-DTHSs that are more than 50 times larger than those that can be handled using a MILP solver.
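The CEGAR loop mentioned above follows the standard abstract-check-refine pattern; the sketch below shows that skeleton on a toy set-based abstraction and is not the paper's SAT/MILP encoding of PA-DTHSs.

```python
def cegar(abstraction, concrete_states, bad, max_iters=100):
    """Abstract-check-refine loop on a toy abstraction (a plain set of states)."""
    for _ in range(max_iters):
        cex = next((s for s in abstraction if bad(s)), None)   # abstract check
        if cex is None:
            return "safe"
        if cex in concrete_states:                             # concrete replay
            return ("unsafe", cex)
        abstraction = abstraction - {cex}                      # refine: drop spurious cex
    return "unknown"

bad = lambda s: s * s == 25
print(cegar(abstraction=set(range(1, 11)), concrete_states={1, 2, 3}, bad=bad))
# -> 'safe', after one refinement discards the spurious counterexample 5
```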
Novella Bartolini, and Enrico Tronci. "On Optimizing Service Availability of an Internet Based Architecture for Infrastructure Protection." In Cnip., 2006.
Giuseppe Della Penna, Antinisca Di Marco, Benedetto Intrigila, Igor Melatti, and Alfonso Pierantonio. "Interoperability mapping from XML schemas to ER diagrams." Data Knowl. Eng. 59, no. 1 (2006): 166–188. Elsevier Science Publishers B. V.. ISSN: 0169-023x. DOI: 10.1016/j.datak.2005.08.002.
Abstract: The eXtensible Markup Language (XML) is a de facto standard on the Internet and is now being used to exchange a variety of data structures. This leads to the problem of efficiently storing, querying and retrieving a great amount of data contained in XML documents. Unfortunately, XML data often need to coexist with historical data. At present, the best solution for storing XML into pre-existing data structures is to extract the information from the XML documents and adapt it to the data structures' logical model (e.g., the relational model of a DBMS). In this paper, we introduce a technique called Xere (XML entity–relationship exchange) to assist the integration of XML data with other data sources. To this aim, we present an algorithm that maps XML schemas into entity–relationship diagrams, discuss its soundness and completeness and show its implementation in XSLT.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Finite horizon analysis of Markov Chains with the Mur$\varphi$ verifier." Int. J. Softw. Tools Technol. Transf. 8, no. 4 (2006): 397–409. Springer-Verlag. ISSN: 1433-2779. DOI: 10.1007/s10009-005-0216-7.
Abstract: In this paper we present an explicit disk-based verification algorithm for Probabilistic Systems defining discrete time/finite state Markov Chains. Given a Markov Chain and an integer k (horizon), our algorithm checks whether the probability of reaching an error state in at most k steps is below a given threshold. We present an implementation of our algorithm within a suitable extension of the Mur$\varphi$ verifier. We call the resulting probabilistic model checker FHP-Mur$\varphi$ (Finite Horizon Probabilistic Mur$\varphi$). We present experimental results comparing FHP-Mur$\varphi$ with (a finite horizon subset of) PRISM, a state-of-the-art symbolic model checker for Markov Chains. Our experimental results show that FHP-Mur$\varphi$ can handle systems that are out of reach for PRISM, namely those involving arithmetic operations on the state variables (e.g. hybrid systems).
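Conceptually, the finite-horizon check amounts to computing the probability of hitting an error state within k steps of a Markov chain and comparing it with a threshold. The Python sketch below does this by explicit distribution iteration on an assumed toy two-state chain; FHP-Mur$\varphi$ itself works on an implicit, disk-based state space rather than an explicit matrix.

```python
def error_reach_probability(P, init, error, k):
    """P[s][t]: transition probability; returns P(reach an error state within k steps)."""
    states = list(P)
    dist = {s: 1.0 if s == init else 0.0 for s in states}
    reached = 0.0
    for _ in range(k):
        reached += sum(dist[s] for s in error)         # mass newly absorbed in error states
        dist = {t: sum(dist[s] * P[s].get(t, 0.0)
                       for s in states if s not in error)
                for t in states}
    return reached + sum(dist[s] for s in error)

# Toy chain (assumption): from 'ok' the system fails with probability 0.1 per step.
P = {'ok': {'ok': 0.9, 'bad': 0.1}, 'bad': {'bad': 1.0}}
p = error_reach_probability(P, 'ok', {'bad'}, k=3)     # 1 - 0.9**3 = 0.271
print(p, p < 0.5)                                      # threshold check
```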
Giuseppe Della Penna, Daniele Magazzeni, Alberto Tofani, Benedetto Intrigila, Igor Melatti, and Enrico Tronci. "Automated Generation of Optimal Controllers through Model Checking Techniques." In Icinco-Icso, edited by J. Andrade-Cetto, J. - L. Ferrier, J. M. C. D. Pereira and J. Filipe, 26–33. INSTICC Press, 2006. ISSN: 972-8865-59-7. DOI: 10.1007/978-3-540-79142-3.
Abstract: We present a methodology for the synthesis of controllers, which exploits (explicit) model checking techniques. That is, we can cope with the systematic exploration of a very large state space. This methodology can be applied to systems where other approaches fail. In particular, we can consider systems with a highly non-linear dynamics and lacking a uniform mathematical description (model). We can also consider situations where the required control action cannot be specified as a local action, and rather a kind of planning is required. Our methodology first identifies a raw optimal controller, then extends it to obtain a more robust one. A case study is presented which considers the well known truck-trailer obstacle avoidance parking problem, in a parking lot with obstacles on it. The complex non-linear dynamics of the truck-trailer system, in the presence of obstacles, makes the parking problem extremely hard. We show how, by our methodology, we can obtain optimal controllers with different degrees of robustness.
Giuseppe Della Penna, Alberto Tofani, Marcello Pecorari, Orazio Raparelli, Benedetto Intrigila, Igor Melatti, and Enrico Tronci. "A Case Study on Automated Generation of Integration Tests." In Fdl, 278–284. Ecsi, 2006. ISSN: 978-3-00-019710-9.
Igor Melatti, Robert Palmer, Geoffrey Sawaya, Yu Yang, Robert Mike Kirby, and Ganesh Gopalakrishnan. "Parallel and Distributed Model Checking in Eddy." In Model Checking Software, 13th International SPIN Workshop, Vienna, Austria, March 30 – April 1, 2006, Proceedings, edited by A. Valmari, 108–125. Lecture Notes in Computer Science 3925. Springer - Verlag, 2006. ISSN: 0302-9743. ISBN: 978-3-540-33102-5. DOI: 10.1007/11691617_7.
Abstract: Model checking of safety properties can be scaled up by pooling the CPU and memory resources of multiple computers. As compute clusters containing 100s of nodes, with each node realized using multi-core (e.g., 2) CPUs will be widespread, a model checker based on the parallel (shared memory) and distributed (message passing) paradigms will more efficiently use the hardware resources. Such a model checker can be designed by having each node employ two shared memory threads that run on the (typically) two CPUs of a node, with one thread responsible for state generation, and the other for efficient communication, including (i) performing overlapped asynchronous message passing, and (ii) aggregating the states to be sent into larger chunks in order to improve communication network utilization. We present the design details of such a novel model checking architecture called Eddy. We describe the design rationale, details of how the threads interact and yield control, exchange messages, as well as detect termination. We have realized an instance of this architecture for the Murphi modeling language. Called Eddy_Murphi, we report its performance over the number of nodes as well as communication parameters such as those controlling state aggregation. Nearly linear reduction of compute time with increasing number of nodes is observed. Our thread task partition is done in such a way that it is modular, easy to port across different modeling languages, and easy to tune across a variety of platforms.
Enrico Tronci. "Introductory Paper." Sttt 8, no. 4-5 (2006): 355–358. DOI: 10.1007/s10009-005-0212-y.
Abstract: In today's competitive market designing of digital systems (hardware as well as software) faces tremendous challenges. In fact, notwithstanding an ever decreasing project budget, time to market and product lifetime, designers are faced with an ever increasing system complexity and customer expected quality. The above situation calls for better and better formal verification techniques at all steps of the design flow. This special issue is devoted to publishing revised versions of contributions first presented at the 12th Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME) held 21–24 October 2003 in L'Aquila, Italy. Authors of well regarded papers from CHARME'03 were invited to submit to this special issue. All papers included here have been suitably extended and have undergone an independent round of reviewing.
Andrea Bobbio, Ester Ciancamerla, Michele Minichino, and Enrico Tronci. "Functional analysis of a telecontrol system and stochastic measures of its GSM/GPRS connections." Archives of Transport – International Journal of Transport Problems 17, no. 3-4 (2005).
Edoardo Campagnano, Ester Ciancamerla, Michele Minichino, and Enrico Tronci. "Automatic Analysis of a Safety Critical Tele Control System." In 24th International Conference on: Computer Safety, Reliability, and Security (SAFECOMP), edited by R. Winther, B. A. Gran and G. Dahll, 94–107. Lecture Notes in Computer Science 3688. Fredrikstad, Norway: Springer, 2005. ISSN: 3-540-29200-4. DOI: 10.1007/11563228_8.
Abstract: We show how the Mur$\varphi$ model checker can be used to automatically carry out safety analysis of a quite complex hybrid system tele-controlling vehicles traffic inside a safety critical transport infrastructure such as a long bridge or a tunnel. We present the Mur$\varphi$ model we developed towards this end as well as the experimental results we obtained by running the Mur$\varphi$ verifier on our model. Our experimental results show that the approach presented here can be used to verify safety of critical dimensioning parameters (e.g. bandwidth) of the telecommunication network embedded in a safety critical system.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, and Enrico Tronci. "Exploiting Hub States in Automatic Verification." In Automated Technology for Verification and Analysis: Third International Symposium, ATVA 2005, Taipei, Taiwan, October 4-7, 2005, Proceedings, edited by D.A. Peled and Y.-K. Tsay, 54–68. Lecture Notes in Computer Science 3707. Springer, 2005. ISSN: 3-540-29209-8. DOI: 10.1007/11562948_7.
Abstract: In this paper we present a new algorithm to counteract state explosion when using Explicit State Space Exploration to verify protocol-like systems. We sketch the implementation of our algorithm within the Caching Mur$\varphi$ verifier and give experimental results showing its effectiveness. We show experimentally that, when memory is a scarce resource, our algorithm improves on the time performances of Caching Mur$\varphi$ verification algorithm, saving between 16% and 68% (45% on average) in computation time.
Benedetto Intrigila, Daniele Magazzeni, Igor Melatti, and Enrico Tronci. "A Model Checking Technique for the Verification of Fuzzy Control Systems." In CIMCA '05: Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce Vol-1 (CIMCA-IAWTIC'06), 536–542. Washington, DC, USA: IEEE Computer Society, 2005. ISSN: 0-7695-2504-0-01. DOI: 10.1109/CIMCA.2005.1631319.
Abstract: Fuzzy control is well known as a powerful technique for designing and realizing control systems. However, statistical evidence for the correct behavior of such systems may not be enough, even when it is based on a large number of samplings. In order to provide a more systematic verification process, the cell-to-cell mapping technology has been used in a number of cases as a verification tool for fuzzy control systems and, more recently, to assess their optimality and robustness. However, cell-to-cell mapping is typically limited in the number of cells it can explore. To overcome this limitation, in this paper we show how model checking techniques may be instead used to verify the correct behavior of a fuzzy control system. To this end, we use a modified version of the Mur$\varphi$ verifier, which eases the modeling phase by allowing the use of finite precision real numbers and external C functions. In this way, already designed simulators may also be used for the verification phase. With respect to the cell mapping technique, our approach appears to be complementary; indeed, it explores a much larger number of states, at the cost of being less informative on the global dynamics of the system.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Bounded Probabilistic Model Checking with the Mur$\varphi$ Verifier." In Formal Methods in Computer-Aided Design, 5th International Conference, FMCAD 2004, Austin, Texas, USA, November 15-17, 2004, Proceedings, edited by A. J. Hu and A. K. Martin, 214–229. Lecture Notes in Computer Science 3312. Springer, 2004. ISSN: 3-540-23738-0. DOI: 10.1007/978-3-540-30494-4_16.
Abstract: In this paper we present an explicit verification algorithm for Probabilistic Systems defining discrete time/finite state Markov Chains. We restrict ourselves to verification of Bounded PCTL formulas (BPCTL), that is, PCTL formulas in which all Until operators are bounded, possibly with different bounds. This means that we consider only paths (system runs) of bounded length. Given a Markov Chain $\cal M$ and a BPCTL formula Φ, our algorithm checks if Φ is satisfied in $\cal M$. This allows to verify important properties, such as reliability in Discrete Time Hybrid Systems. We present an implementation of our algorithm within a suitable extension of the Mur$\varphi$ verifier. We call FHP-Mur$\varphi$ (Finite Horizon Probabilistic Mur$\varphi$) such extension of the Mur$\varphi$ verifier. We give experimental results comparing FHP-Mur$\varphi$ with (a finite horizon subset of) PRISM, a state-of-the-art symbolic model checker for Markov Chains. Our experimental results show that FHP-Mur$\varphi$ can effectively handle verification of BPCTL formulas for systems that are out of reach for PRISM, namely those involving arithmetic operations on the state variables (e.g. hybrid systems).
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Exploiting Transition Locality in Automatic Verification of Finite State Concurrent Systems." Sttt 6, no. 4 (2004): 320–341. DOI: 10.1007/s10009-004-0149-6.
Abstract: In this paper we show that statistical properties of the transition graph of a system to be verified can be exploited to improve memory or time performances of verification algorithms. We show experimentally that protocols exhibit transition locality. That is, with respect to levels of a breadth-first state space exploration, state transitions tend to be between states belonging to close levels of the transition graph. We support our claim by measuring transition locality for the set of protocols included in the Mur$\varphi$ verifier distribution. We present a cache-based verification algorithm that exploits transition locality to decrease memory usage and a disk-based verification algorithm that exploits transition locality to decrease disk read accesses, thus reducing the time overhead due to disk usage. Both algorithms have been implemented within the Mur$\varphi$ verifier. Our experimental results show that our cache-based algorithm can typically save more than 40% of memory with an average time penalty of about 50% when using (Mur$\varphi$) bit compression and 100% when using bit compression and hash compaction, whereas our disk-based verification algorithm is typically more than ten times faster than a previously proposed disk-based verification algorithm and, even when using 10% of the memory needed to complete verification, it is only between 40 and 530% (300% on average) slower than (RAM) Mur$\varphi$ with enough memory to complete the verification task at hand. Using just 300 MB of memory our disk-based Mur$\varphi$ was able to complete verification of a protocol with about $10^9$ reachable states. This would require more than 5 GB of memory using standard Mur$\varphi$.
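Transition locality as defined above can be measured directly on a reachability graph: run a breadth-first exploration and count the fraction of transitions whose endpoints lie on BFS levels at most one apart. A minimal Python sketch, with an assumed toy successor function standing in for a protocol model:

```python
from collections import deque

def transition_locality(succ, init, bound=1):
    """Fraction of transitions connecting states whose BFS levels differ by at most `bound`."""
    level = {init: 0}
    queue, edges, local = deque([init]), 0, 0
    while queue:
        s = queue.popleft()
        for t in succ(s):
            if t not in level:
                level[t] = level[s] + 1
                queue.append(t)
            edges += 1
            local += abs(level[t] - level[s]) <= bound
    return local / edges if edges else 1.0

# Toy system (assumption): a counter modulo 8 with a reset transition back to 0.
succ = lambda s: [(s + 1) % 8, 0]
print(transition_locality(succ, 0))
```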
Roberto Gorrieri, Ruggero Lanotte, Andrea Maggiolo-Schettini, Fabio Martinelli, Simone Tini, and Enrico Tronci. "Automated analysis of timed security: a case study on web privacy." International Journal of Information Security 2, no. 3-4 (2004): 168–186. DOI: 10.1007/s10207-004-0037-9.
Abstract: This paper presents a case study on an automated analysis of real-time security models. The case study on a web system (originally proposed by Felten and Schneider) is presented that shows a timing attack on the privacy of browser users. Three different approaches are followed: LH-Timed Automata (analyzed using the model checker HyTech), finite-state automata (analyzed using the model checker NuSMV), and process algebras (analyzed using the model checker CWB-NC). A comparative analysis of these three approaches is given.
Ruggero Lanotte, Andrea Maggiolo-Schettini, Simone Tini, Angelo Troina, and Enrico Tronci. "Automatic Analysis of the NRL Pump." Electr. Notes Theor. Comput. Sci. 99 (2004): 245–266. DOI: 10.1016/j.entcs.2004.02.011.
Abstract: We define a probabilistic model for the NRL Pump and, using FHP-Mur$\varphi$, show experimentally that there exists a probabilistic covert channel whose capacity depends on various NRL Pump parameters (e.g. buffer size, number of samples in the moving average, etc.).
Ruggero Lanotte, Andrea Maggiolo-Schettini, Simone Tini, Angelo Troina, and Enrico Tronci. "Automatic Covert Channel Analysis of a Multilevel Secure Component." In Information and Communications Security, 6th International Conference, ICICS 2004, Malaga, Spain, October 27-29, 2004, Proceedings, edited by J. Lopez, S. Qing and E. Okamoto, 249–261. Lecture Notes in Computer Science 3269. Springer, 2004. DOI: 10.1007/b101042.
Abstract: The NRL Pump protocol defines a multilevel secure component whose goal is to minimize leaks of information from high level systems to lower level systems, without degrading average time performances. We define a probabilistic model for the NRL Pump and show how a probabilistic model checker (FHP-mur$\varphi$) can be used to estimate the capacity of a probabilistic covert channel in the NRL Pump. We are able to compute the probability of a security violation as a function of time for various configurations of the system parameters (e.g. buffer sizes, moving average size, etc). Because of the model complexity, our results cannot be obtained using an analytical approach and, because of the low probabilities involved, it can be hard to obtain them using a simulator.
Marco Martinelli, Enrico Tronci, Giovanni Dipoppa, and Claudio Balducelli. "Electric Power System Anomaly Detection Using Neural Networks." In 8th International Conference on: Knowledge-Based Intelligent Information and Engineering Systems (KES), edited by M. G. Negoita, R. J. Howlett and L. C. Jain, 1242–1248. Lecture Notes in Computer Science 3213. Wellington, New Zealand: Springer, 2004. ISSN: 3-540-23318-0. DOI: 10.1007/978-3-540-30132-5_168.
Abstract: The aim of this work is to propose an approach to monitor and protect the Electric Power System by learning normal system behaviour at the substation level, and raising an alarm signal when an abnormal status is detected; the problem is addressed by the use of autoassociative neural networks, reading substation measures. Experimental results show that, through the proposed approach, neural networks can be used to learn parameters underlying system behaviour, and their output processed to detect anomalies due to hijacking of measures, changes in the power network topology (i.e. transmission lines breaking) and unexpected power demand trends.
"Charme." In Lecture Notes in Computer Science, edited by D. Geist and E. Tronci. Vol. 2860. Springer, 2003. ISSN: 3-540-20363-X. DOI: 10.1007/b93958.
Antonio Bucciarelli, Adolfo Piperno, and Ivano Salvo. "Intersection types and λ-definability." Mathematical Structures in Computer Science 13, no. 1 (2003): 15–53. Cambridge University Press. ISSN: 0960-1295. DOI: 10.1017/S0960129502003833.
Abstract: This paper presents a novel method for comparing computational properties of λ-terms that are typeable with intersection types, with respect to terms that are typeable with Curry types. We introduce a translation from intersection typing derivations to Curry typeable terms that is preserved by β-reduction: this allows the simulation of a computation starting from a term typeable in the intersection discipline by means of a computation starting from a simply typeable term. Our approach proves strong normalisation for the intersection system naturally by means of purely syntactical techniques. The paper extends the results presented in Bucciarelli et al. (1999) to the whole intersection type system of Barendregt, Coppo and Dezani, thus providing a complete proof of the conjecture, proposed in Leivant (1990), that all functions uniformly definable using intersection types are already definable using Curry types.
Ester Ciancamerla, Michele Minichino, Stefano Serro, and Enrico Tronci. "Automatic Timeliness Verification of a Public Mobile Network." In 22nd International Conference on Computer Safety, Reliability, and Security (SAFECOMP), edited by S. Anderson, M. Felici and B. Littlewood, 35–48. Lecture Notes in Computer Science 2788. Edinburgh, UK: Springer, 2003. ISSN: 978-3-540-20126-7. DOI: 10.1007/978-3-540-39878-3_4.
Abstract: This paper deals with the automatic verification of the timeliness of Public Mobile Network (PMN), consisting of Mobile Nodes (MNs) and Base Stations (BSs). We use the Mur$\varphi$ Model Checker to verify that the waiting access time of each MN, under different PMN configurations and loads, and different inter arrival times of MNs in a BS cell, is always below a preassigned threshold. Our experimental results show that Model Checking can be successfully used to generate worst case scenarios and nicely complements probabilistic methods and simulation which are typically used for performance evaluation.
Mario Coppo, Mariangiola Dezani-Ciancaglini, Elio Giovannetti, and Ivano Salvo. "Mobility Types for Mobile Processes in Mobile Ambients." Electr. Notes Theor. Comput. Sci. 78 (2003). DOI: 10.1016/S1571-0661(04)81011-9.
Abstract: We present an ambient-like calculus in which the open capability is dropped, and a new form of "lightweight" process mobility is introduced. The calculus comes equipped with a type system that allows the kind of values exchanged in communications and the access and mobility properties of processes to be controlled. A type inference procedure determines the "minimal" requirements to accept a system or a component as well typed. This gives a kind of principal typing. As an expressiveness test, we show that some well known calculi of concurrency and mobility can be encoded in our calculus in a natural way.
Giuseppe Della Penna, Antinisca Di Marco, Benedetto Intrigila, Igor Melatti, and Alfonso Pierantonio. "Xere: Towards a Natural Interoperability between XML and ER Diagrams." In Fundamental Approaches to Software Engineering, 6th International Conference, FASE 2003, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2003, Warsaw, Poland, April 7-11, 2003, Proceedings, edited by M. Pezzè, 356–371. Lecture Notes in Computer Science 2621. Springer, 2003. ISSN: 3-540-00899-3. DOI: 10.1007/3-540-36578-8_25.
Abstract: XML (eXtensible Markup Language) is becoming the standard format for documents on Internet and is widely used to exchange data. Often, the relevant information contained in XML documents needs to be also stored in legacy databases (DB) in order to integrate the new data with the pre-existing ones. In this paper, we introduce a technique for the automatic XML-DB integration, which we call Xere. In particular we present, as the first step of Xere, the mapping algorithm which allows the translation of XML Schemas into Entity-Relationship diagrams.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Michele Minichino, Ester Ciancamerla, Andrea Parisse, Enrico Tronci, and Marisa Venturini Zilli. "Automatic Verification of a Turbogas Control System with the Mur$\varphi$ Verifier." In Hybrid Systems: Computation and Control, 6th International Workshop, HSCC 2003 Prague, Czech Republic, April 3-5, 2003, Proceedings, edited by O. Maler and A. Pnueli, 141–155. Lecture Notes in Computer Science 2623. Springer, 2003. ISSN: 3-540-00913-2. DOI: 10.1007/3-540-36580-X.
Abstract: Automatic analysis of Hybrid Systems poses formidable challenges both from a modeling as well as from a verification point of view. We present a case study on automatic verification of a Turbogas Control System (TCS) using an extended version of the Mur$\varphi$ verifier. TCS is the heart of ICARO, a 2MW Co-generative Electric Power Plant. For large hybrid systems, as TCS is, the modeling effort accounts for a significant part of the whole verification activity. In order to ease our modeling effort we extended the Mur$\varphi$ verifier by importing the C language long double type (finite precision real numbers) into it. We give experimental results on running our extended Mur$\varphi$ on our TCS model. For example using Mur$\varphi$ we were able to compute an admissible range of values for the variation speed of the user demand of electric power to the turbogas.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Finite Horizon Analysis of Markov Chains with the Mur$\varphi$ Verifier." In Correct Hardware Design and Verification Methods, 12th IFIP WG 10.5 Advanced Research Working Conference, CHARME 2003, L'Aquila, Italy, October 21-24, 2003, Proceedings, edited by D. Geist and E. Tronci, 394–409. Lecture Notes in Computer Science 2860. Springer, 2003. ISSN: 3-540-20363-X. DOI: 10.1007/978-3-540-39724-3_34.
Abstract: In this paper we present an explicit disk based verification algorithm for Probabilistic Systems defining discrete time/finite state Markov Chains. Given a Markov Chain and an integer k (horizon), our algorithm checks whether the probability of reaching an error state in at most k steps is below a given threshold. We present an implementation of our algorithm within a suitable extension of the Mur$\varphi$ verifier. We call the resulting probabilistic model checker FHP-Mur$\varphi$ (Finite Horizon Probabilistic Mur$\varphi$). We present experimental results comparing FHP-Mur$\varphi$ with (a finite horizon subset of) PRISM, a state-of-the-art symbolic model checker for Markov Chains. Our experimental results show that FHP-Mur$\varphi$ can handle systems that are out of reach for PRISM, namely those involving arithmetic operations on the state variables (e.g. hybrid systems).
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Finite Horizon Analysis of Stochastic Systems with the Mur$\varphi$ Verifier." In Theoretical Computer Science, 8th Italian Conference, ICTCS 2003, Bertinoro, Italy, October 13-15, 2003, Proceedings, edited by C. Blundo and C. Laneve, 58–71. Lecture Notes in Computer Science 2841. Springer, 2003. ISSN: 3-540-20216-1. DOI: 10.1007/978-3-540-45208-9_6.
Abstract: Many reactive systems are actually Stochastic Processes. Automatic analysis of such systems is usually very difficult thus typically one simplifies the analysis task by using simulation or by working on a simplified model (e.g. a Markov Chain). We present a Finite Horizon Probabilistic Model Checking approach which essentially can handle the same class of stochastic processes of a typical simulator. This yields easy modeling of the system to be analyzed together with formal verification capabilities. Our approach is based on a suitable disk based extension of the Mur$\varphi$ verifier. Moreover we present experimental results showing effectiveness of our approach.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Integrating RAM and Disk Based Verification within the Mur$\varphi$ Verifier." In Correct Hardware Design and Verification Methods, 12th IFIP WG 10.5 Advanced Research Working Conference, CHARME 2003, L'Aquila, Italy, October 21-24, 2003, Proceedings, edited by D. Geist and E. Tronci, 277–282. Lecture Notes in Computer Science 2860. Springer, 2003. ISSN: 3-540-20363-X. DOI: 10.1007/978-3-540-39724-3_25.
Abstract: We present a verification algorithm that can automatically switch from RAM based verification to disk based verification without discarding the work done during the RAM based verification phase. This avoids having to choose beforehand the proper verification algorithm. Our experimental results show that typically our integrated algorithm is as fast as (sometimes faster than) the fastest of the two base (i.e. RAM based and disk based) verification algorithms.
Giuseppe Della Penna, Benedetto Intrigila, Enrico Tronci, and Marisa Venturini Zilli. "Synchronized regular expressions." Acta Inf. 39, no. 1 (2003): 31–70.
Abstract: Text manipulation is one of the most common tasks for everyone using a computer. The increasing amount of textual information in electronic format that every computer user collects every day also increases the need for more powerful tools to interact with texts. Indeed, much work has been done to provide simple and versatile tools that can be useful for the most common text manipulation tasks. Regular Expressions (RE), introduced by Kleene, are well known in the formal language theory. RE have been extended in various ways, depending on the application of interest. In almost all the implementations of RE search algorithms (e.g. the egrep UNIX command, or the Perl language pattern matching constructs) we find backreferences, i.e. expressions that make reference to the string matched by a previous subexpression. Generally speaking, it seems that all kinds of synchronizations between subexpressions in a RE can be very useful when interacting with texts. In this paper we introduce the Synchronized Regular Expressions (SRE) as an extension of the Regular Expressions. We use SRE to present a formal study of the already known backreferences extension, and of a new extension proposed by us, which we call the synchronized exponents. Moreover, since we are dealing with formalisms that should have a practical utility and be used in real applications, we have the problem of how to present SRE to the final users. Therefore, in this paper we also propose a user-friendly syntax for SRE to be used in implementations of SRE-powered search algorithms.
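To make the backreference notion concrete, here is a minimal illustration using Python's re module (my own sketch, not code from the paper; the SRE formalism itself is richer than this). A backreference such as \1 forces a later part of the pattern to match exactly the same string that an earlier parenthesized group matched.

    import re

    # \1 must match the same string captured by the first group,
    # so this pattern finds doubled words such as "the the".
    doubled_word = re.compile(r"\b(\w+) \1\b")

    print(bool(doubled_word.search("the the quick fox")))    # True
    print(bool(doubled_word.search("the quick brown fox")))  # False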
Marco Gribaudo, Andras Horváth, Andrea Bobbio, Enrico Tronci, Ester Ciancamerla, and Michele Minichino. "Fluid Petri Nets and hybrid model checking: a comparative case study." Int. Journal on: Reliability Engineering & System Safety 81, no. 3 (2003): 239–257. Elsevier. DOI: 10.1016/S0951-8320(03)00089-9.
Abstract: The modeling and analysis of hybrid systems is a recent and challenging research area which is actually dominated by two main lines: a functional analysis based on the description of the system in terms of discrete state (hybrid) automata (whose goal is to ascertain conformity and reachability properties), and a stochastic analysis (whose aim is to provide performance and dependability measures). This paper investigates a unifying view between formal methods and stochastic methods by proposing an analysis methodology of hybrid systems based on Fluid Petri Nets (FPNs). FPNs can be analyzed directly using appropriate tools. Our paper shows that the same FPN model can be fed to different functional analyzers for model checking. In order to extensively explore the capability of the technique, we have converted the original FPN into the languages of discrete, hybrid and stochastic model checkers. In this way, a first comparison among the modeling power of well known tools can be carried out. Our approach is illustrated by means of a 'real world' hybrid system: the temperature control system of a co-generative plant.
Franco Barbanera, Mariangiola Dezani-Ciancaglini, Ivano Salvo, and Vladimiro Sassone. "A Type Inference Algorithm for Secure Ambients." Electronic Notes in Theoretical Computer Science 62 (2002): 83–101. Elsevier. Notes: TOSCA 2001, Theory of Concurrency, Higher Order Languages and Types. DOI: 10.1016/S1571-0661(04)00321-4.
Abstract: We consider a type discipline for the Ambient Calculus that associates ambients with security levels and constrains them to be traversed by or opened in ambients of higher security clearance only. We present a bottom-up algorithm that, given an untyped process P, computes a minimal set of constraints on security levels such that all actions during runs of P are performed without violating the security level priorities. Such an algorithm appears to be a prerequisite to use type systems to ensure security properties in the web scenario.
Giuseppe Della Penna, Benedetto Intrigila, Enrico Tronci, and Marisa Venturini Zilli. "Exploiting Transition Locality in the Disk Based Mur$\varphi$ Verifier." In 4th International Conference on Formal Methods in Computer-Aided Design (FMCAD), edited by M. Aagaard and J. W. O'Leary, 202–219. Lecture Notes in Computer Science 2517. Portland, OR, USA: Springer, 2002. ISSN: 3-540-00116-6. DOI: 10.1007/3-540-36126-X_13.
Abstract: The main obstruction to automatic verification of Finite State Systems is the huge amount of memory required to complete the verification task (state explosion). This motivates research on distributed as well as disk based verification algorithms. In this paper we present a disk based Breadth First Explicit State Space Exploration algorithm as well as an implementation of it within the Mur$\varphi$ verifier. Our algorithm exploits transition locality (i.e. the statistical fact that most transitions lead to unvisited states or to recently visited states) to decrease disk read accesses thus reducing the time overhead due to disk usage. A disk based verification algorithm for Mur$\varphi$ has been already proposed in the literature. To measure the time speed up due to locality exploitation we compared our algorithm with such previously proposed algorithm. Our experimental results show that our disk based verification algorithm is typically more than 10 times faster than such previously proposed disk based verification algorithm. To measure the time overhead due to disk usage we compared our algorithm with RAM based verification using the (standard) Mur$\varphi$ verifier with enough memory to complete the verification task. Our experimental results show that even when using 1/10 of the RAM needed to complete verification, our disk based algorithm is only between 1.4 and 5.3 times (3 times on average) slower than (RAM) Mur$\varphi$ with enough RAM memory to complete the verification task at hand. Using our disk based Mur$\varphi$ we were able to complete verification of a protocol with about $10^9$ reachable states. This would require more than 5 gigabytes of RAM using RAM based Mur$\varphi$.
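The abstract only sketches the idea; as a rough, hypothetical illustration (my own Python sketch, not the authors' implementation, with the disk table of older states omitted), a breadth-first exploration that exploits transition locality by keeping only recently visited states in RAM might look like this:

    from collections import deque

    def bfs_with_recent_cache(init_states, successors, cache_size):
        # RAM cache of recently seen states; under transition locality most
        # duplicate transitions are caught here, so the (omitted) disk table
        # of older states only needs to be consulted rarely.
        recent_set = set(init_states)
        recent_fifo = deque(init_states)   # eviction order for the cache
        frontier = deque(init_states)
        while frontier:
            state = frontier.popleft()
            for nxt in successors(state):
                if nxt in recent_set:
                    continue               # duplicate caught cheaply in RAM
                recent_set.add(nxt)
                recent_fifo.append(nxt)
                if len(recent_fifo) > cache_size:
                    recent_set.discard(recent_fifo.popleft())
                frontier.append(nxt)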
Giuseppe Della Penna, Benedetto Intrigila, Enrico Tronci, and Marisa Venturini Zilli. "Synchronized Regular Expressions." Electr. Notes Theor. Comput. Sci. 62 (2002): 195–210. Notes: TOSCA 2001, Theory of Concurrency, Higher Order Languages and Types.
Abstract: Text manipulation is one of the most common tasks for everyone using a computer. The increasing amount of textual information in electronic format that every computer user collects every day stresses the need for more powerful tools to interact with texts. Indeed, much work has been done to provide nonprogramming tools that can be useful for the most common text manipulation issues. Regular Expressions (RE), introduced by Kleene, are well–known in the formal language theory. RE received several extensions, depending on the application of interest. In almost all the implementations of RE search algorithms (e.g. the egrep UNIX command, or the Perl language pattern matching constructs) we find backreferences, i.e. expressions that make reference to the string matched by a previous subexpression. Generally speaking, it seems that all the kinds of synchronizations between subexpressions in a RE can be very useful when interacting with texts. Therefore, we introduce the Synchronized Regular Expressions (SRE) as a derivation of the Regular Expressions. We use SRE to present a formal study of the already known backreferences extension, and of a new extension proposed by us, which we call the synchronized exponents. Moreover, since we are talking about formalisms that should have a practical utility and can be used in the real world, we have the problem of how to present SRE to the final users. Therefore, in this paper we also propose a user–friendly syntax for SRE to be used in implementations of SRE–powered search algorithms.
Riccardo Focardi, Roberto Gorrieri, Ruggero Lanotte, Andrea Maggiolo-Schettini, Fabio Martinelli, Simone Tini, and Enrico Tronci. "Formal Models of Timing Attacks on Web Privacy." Electronic Notes in Theoretical Computer Science 62 (2002): 229–243. Notes: TOSCA 2001, Theory of Concurrency, Higher Order Languages and Types. DOI: 10.1016/S1571-0661(04)00329-9.
Abstract: We model a timing attack on web privacy proposed by Felten and Schneider by using three different approaches: HL-Timed Automata, SMV model checker, and tSPA Process Algebra. Some comparative analysis on the three approaches is derived.
Marco Gribaudo, Andras Horváth, Andrea Bobbio, Enrico Tronci, Ester Ciancamerla, and Michele Minichino. "Model-Checking Based on Fluid Petri Nets for the Temperature Control System of the ICARO Co-generative Plant." In 21st International Conference on Computer Safety, Reliability and Security (SAFECOMP), edited by S. Anderson, S. Bologna and M. Felici, 273–283. Lecture Notes in Computer Science 2434. Catania, Italy: Springer, 2002. ISSN: 3-540-44157-3. DOI: 10.1007/3-540-45732-1_27.
Abstract: The modeling and analysis of hybrid systems is a recent and challenging research area which is actually dominated by two main lines: a functional analysis based on the description of the system in terms of discrete state (hybrid) automata (whose goal is to ascertain conformity and reachability properties), and a stochastic analysis (whose aim is to provide performance and dependability measures). This paper investigates a unifying view between formal methods and stochastic methods by proposing an analysis methodology of hybrid systems based on Fluid Petri Nets (FPN). It is shown that the same FPN model can be fed to a functional analyser for model checking as well as to a stochastic analyser for performance evaluation. We illustrate our approach and show its usefulness by applying it to a "real world" hybrid system: the temperature control system of a co-generative plant.
Andrea Bobbio, Sandro Bologna, Michele Minichino, Ester Ciancamerla, Piero Incalcaterra, Corrado Kropp, and Enrico Tronci. "Advanced techniques for safety analysis applied to the gas turbine control system of Icaro co generative plant." In X Convegno Tecnologie e Sistemi Energetici Complessi, 339–350. Genova, Italy, 2001.
Abstract: The paper describes two complementary and integrable approaches, a probabilistic one and a deterministic one, based on classic and advanced modelling techniques for safety analysis of complex computer based systems. The probabilistic approach is based on classical and innovative probabilistic analysis methods. The deterministic approach is based on formal verification methods. Such approaches are applied to the gas turbine control system of the ICARO co-generative plant, in operation at ENEA CR Casaccia. The main difference between the two approaches, beyond their different underlying theories, is that the probabilistic one addresses the control system by itself, as the set of sensors, processing units and actuators, while the deterministic one also includes the behaviour of the equipment under control which interacts with the control system. The final aim of the research, documented in this paper, is to explore an innovative method which puts the probabilistic and deterministic approaches in a strong relation to overcome the drawbacks of their isolated, selective and fragmented use, which can lead to inconsistencies in the evaluation results.
V. Bono, and I. Salvo. "A CuCh Interpretation of an Object-Oriented Language." Electronic Notes in Theoretical Computer Science 50, no. 2 (2001): 159–177. Elsevier. Notes: BOTH 2001, Bohm's theorem: applications to Computer Science Theory (Satellite Workshop of ICALP 2001). DOI: 10.1016/S1571-0661(04)00171-9.
Abstract: The CuCh machine extends pure lambda–calculus with algebraic data types and provides the possibility of defining functions over the disjoint sum of algebras. We exploit such a natural form of overloading to define a functional interpretation of a simple, but significant, fragment of a typical object-oriented language.
G. Dipoppa, G. D'Alessandro, R. Semprini, and E. Tronci. "Integrating Automatic Verification of Safety Requirements in Railway Interlocking System Design." In High Assurance Systems Engineering, 2001. Sixth IEEE International Symposium on, 209–219. Albuquerque, NM, USA: IEEE Computer Society, 2001. ISSN: 0-7695-1275-5. DOI: 10.1109/HASE.2001.966821.
Abstract: A railway interlocking system (RIS) is an embedded system (namely a supervisory control system) that ensures the safe operation of the devices in a railway station. RIS is a safety critical system. We explore the possibility of integrating automatic formal verification methods in a given industry RIS design flow. The main obstructions to be overcome in our work are: selecting a formal verification tool that is efficient enough to solve the verification problems at hand; and devising a cost effective integration strategy for such a tool. We were able to devise a successful integration strategy meeting the above constraints without requiring major modifications to the pre-existing design flow or retraining of personnel. We ran verification experiments for a RIS designed for the Singapore Subway. The experiments show that the RIS design flow obtained from our integration strategy is able to automatically verify real life RIS designs.
Benedetto Intrigila, Ivano Salvo, and Stefano Sorgi. "A characterization of weakly Church-Rosser abstract reduction systems that are not Church-Rosser." Information and Computation 171, no. 2 (2001): 137–155. Academic Press, Inc.. ISSN: 0890-5401. DOI: 10.1006/inco.2001.2945.
Abstract: Basic properties of rewriting systems can be stated in the framework of abstract reduction systems (ARS). Properties like confluence (or Church-Rosser, CR) and weak confluence (or weak Church-Rosser, WCR) and their relationships can be studied in this setting: as a matter of fact, well-known counterexamples to the implication WCR $\Rightarrow$ CR have been formulated as ARS. In this paper, starting from the observation that such counterexamples are structurally similar, we set out a graph-theoretic characterization of WCR ARS that are not CR in terms of a suitable class of reduction graphs, such that in every WCR, non-CR ARS we can embed at least one element of this class. Moreover, we give a tighter characterization for a restricted class of ARS enjoying a suitable regularity condition. Finally, as a consequence of our approach, we prove some interesting results about ARS using the mathematical tools developed. In particular, we prove an extension of Newman's lemma and we find conditions that, once assumed together with the WCR property, ensure the unique normal form property. The Appendix treats two interesting examples, both generated by graph-rewriting rules, with specific combinatorial properties.
Enrico Tronci, Giuseppe Della Penna, Benedetto Intrigila, and Marisa Venturini Zilli. "A Probabilistic Approach to Automatic Verification of Concurrent Systems." In 8th Asia-Pacific Software Engineering Conference (APSEC), 317–324. Macau, China: IEEE Computer Society, 2001. ISSN: 0-7695-1408-1. DOI: 10.1109/APSEC.2001.991495.
Abstract: The main barrier to automatic verification of concurrent systems is the huge amount of memory required to complete the verification task (state explosion). In this paper we present a probabilistic algorithm for automatic verification via model checking. Our algorithm trades space with time. In particular, when memory is full because of state explosion our algorithm does not give up verification. Instead it just proceeds at a lower speed and its results will only hold with some arbitrarily small error probability. Our preliminary experimental results show that by using our probabilistic algorithm we can typically save more than 30% of RAM with an average time penalty of about 100% w.r.t. a deterministic state space exploration with enough memory to complete the verification task. This is better than giving up the verification task because of lack of memory.
Enrico Tronci, Giuseppe Della Penna, Benedetto Intrigila, and Marisa Venturini Zilli. "Exploiting Transition Locality in Automatic Verification." In 11th IFIP WG 10.5 Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME), edited by T. Margaria and T. F. Melham, 259–274. Lecture Notes in Computer Science 2144. Livingston, Scotland, UK: Springer, 2001. ISSN: 3-540-42541-1. DOI: 10.1007/3-540-44798-9_22.
Abstract: In this paper we present an algorithm to counteract state explosion when using Explicit State Space Exploration to verify protocols. We show experimentally that protocols exhibit transition locality. We present a verification algorithm that exploits transition locality as well as an implementation of it within the Mur$\varphi$ verifier. Our algorithm is compatible with all Breadth First (BF) optimization techniques present in the Mur$\varphi$ verifier and it is by no means a substitute for any of them. In fact, since our algorithm trades space with time, it is typically most useful when one runs out of memory and has already used all other state reduction techniques present in the Mur$\varphi$ verifier. Our experimental results show that using our approach we can typically save more than 40% of RAM with an average time penalty of about 50% when using (Mur$\varphi$) bit compression and 100% when using bit compression and hash compaction.
Michele Cecconi, and Enrico Tronci. "Requirements Formalization and Validation for a Telecommunication Equipment Protection Switcher." In Hase. IEEE Computer Society, 2000. ISSN: 0-7695-0927-4. DOI: 10.1109/HASE.2000.895456.
Antonio Bucciarelli, Silvia de Lorenzis, Adolfo Piperno, and Ivano Salvo. "Some Computational Properties of Intersection Types (Extended Abstract)." (1999): 109–118. IEEE Computer Society. DOI: 10.1109/LICS.1999.782598.
Abstract: This paper presents a new method for comparing computational properties of λ-terms typeable with intersection types with respect to terms typeable with Curry types. In particular, strong normalization and λ-definability are investigated. A translation is introduced from intersection typing derivations to Curry typeable terms; the main feature of the proposed technique is that the translation is preserved by β-reduction. This makes it possible to simulate a computation starting from a term typeable in the intersection discipline by means of a computation starting from a simply typeable term. Our approach naturally leads to a proof of strong normalization in the intersection system by means of purely syntactical techniques. In addition, the presented method enables us to give a proof of a conjecture proposed by Leivant in 1990, namely that all functions uniformly definable using intersection types are already definable using Curry types.
Enrico Tronci. "Automatic Synthesis of Control Software for an Industrial Automation Control System." In Proc.of: 14th IEEE International Conference on: Automated Software Engineering (ASE), 247–250. Cocoa Beach, Florida, USA, 1999. DOI: 10.1109/ASE.1999.802292.
Abstract: We present a case study on automatic synthesis of control software from formal specifications for an industrial automation control system. Our aim is to compare the effectiveness (i.e. design effort and controller quality) of automatic controller synthesis from closed loop formal specifications with that of manual controller design, followed by automatic verification. Our experimental results show that for industrial automation control systems, automatic synthesis is a viable and profitable (especially as far as design effort is concerned) alternative to manual design, followed by automatic verification.
Enrico Tronci. "Formally Modeling a Metal Processing Plant and its Closed Loop Specifications." In 4th IEEE International Symposium on High-Assurance Systems Engineering (HASE), 151. Washington, D.C, USA: IEEE Computer Society, 1999. ISSN: 0-7695-0418-3. DOI: 10.1109/HASE.1999.809490.
Abstract: We present a case study on automatic synthesis of control software from formal specifications for an industrial automation control system. Our aim is to compare the effectiveness (i.e. design effort and controller quality) of automatic controller synthesis from closed loop formal specifications with that of manual controller design followed by automatic verification. The system to be controlled (plant) models a metal processing facility near Karlsruhe. We succeeded in automatically generating C code implementing a (correct by construction) embedded controller for such a plant from closed loop formal specifications. Our experimental results show that for industrial automation control systems automatic synthesis is a viable and profitable (especially as far as design effort is concerned) alternative to manual design followed by automatic verification.
Antonio Bucciarelli, and Ivano Salvo. "Totality, Definability and Boolean Circuits." 1443 (1998): 808–819. Springer. DOI: 10.1007/BFb0055104.
Abstract: In the type frame originating from the flat domain of boolean values, we single out elements which are hereditarily total. We show that these elements can be defined, up to total equivalence, by sequential programs. The elements of an equivalence class of the totality equivalence relation (totality class) can be seen as different algorithms for computing a given set-theoretic boolean function. We show that the bottom element of a totality class, which is sequential, corresponds to the most eager algorithm, and the top to the laziest one. Finally we suggest a link between size of totality classes and a well known measure of complexity of boolean functions, namely their sensitivity.
Alessandro Fantechi, Stefania Gnesi, Franco Mazzanti, Rosario Pugliese, and Enrico Tronci. "A Symbolic Model Checker for ACTL." In International Workshop on Current Trends in Applied Formal Method (FM-Trends), edited by D. Hutter, W. Stephan, P. Traverso and M. Ullmann, 228–242. Lecture Notes in Computer Science 1641. Boppard, Germany: Springer, 1998. ISSN: 3-540-66462-9. DOI: 10.1007/3-540-48257-1_14.
Abstract: We present SAM, a symbolic model checker for ACTL, the action-based version of CTL. SAM relies on implicit representations of Labeled Transition Systems (LTSs), the semantic domain for ACTL formulae, and uses symbolic manipulation algorithms. SAM has been realized by translating (networks of) LTSs and, possibly recursive, ACTL formulae into BSP (Boolean Symbolic Programming), a programming language aiming at defining computations on boolean functions, and by using the BSP interpreter to carry out computations (i.e. verifications).
Enrico Tronci. "Automatic Synthesis of Controllers from Formal Specifications." In Proc of 2nd IEEE International Conference on Formal Engineering Methods (ICFEM), 134–143. Brisbane, Queensland, Australia, 1998. DOI: 10.1109/ICFEM.1998.730577.
Abstract: Many safety critical reactive systems are indeed embedded control systems. Usually a control system can be partitioned into two main subsystems: a controller and a plant. Roughly speaking: the controller observes the state of the plant and sends commands (stimulus) to the plant to achieve predefined goals. We show that when the plant can be modeled as a deterministic finite state system (FSS) it is possible to effectively use formal methods to automatically synthesize the program implementing the controller from the plant model and the given formal specifications for the closed loop system (plant+controller). This guarantees that the controller program is correct by construction. To the best of our knowledge there is no previously published effective algorithm to extract executable code for the controller from closed loop formal specifications. We show practical usefulness of our techniques by giving experimental results on their use to synthesize C programs implementing optimal controllers (OCs) for plants with more than $10^9$ states.
Enrico Tronci. "On Computing Optimal Controllers for Finite State Systems." In CDC '97: Proceedings of the 36th IEEE International Conference on Decision and Control. Washington, DC, USA: IEEE Computer Society, 1997.
Rosario Pugliese, and Enrico Tronci. "Automatic Verification of a Hydroelectric Power Plant." In Third International Symposium of Formal Methods Europe (FME), Co-Sponsored by IFIP WG 14.3, edited by M. - C. Gaudel and J. Woodcock, 425–444. Lecture Notes in Computer Science 1051. Oxford, UK: Springer, 1996. ISSN: 3-540-60973-3. DOI: 10.1007/3-540-60973-3_100.
Abstract: We analyze the specification of a hydroelectric power plant by ENEL (the Italian Electric Company). Our goal is to show that for the specification of the plant (its control system in particular) some given properties hold. We were provided with an informal specification of the plant. From such informal specification we wrote a formal specification using the CCS/Meije process algebra formalism. We defined properties using μ-calculus. Automatic verification was carried out using model checking. This was done by translating our process algebra definitions (the model) and μ-calculus formulas into BDDs. In this paper we present the informal specification of the plant, its formal specification, some of the properties we verified and experimental results.
Enrico Tronci. "Equational Programming in Lambda-Calculus via SL-Systems. Part 1." Theoretical Computer Science 160, no. 1&2 (1996): 145–184. DOI: 10.1016/0304-3975(95)00105-0.
Enrico Tronci. "Equational Programming in Lambda-Calculus via SL-Systems. Part 2." Theoretical Computer Science 160, no. 1&2 (1996): 185–216. DOI: 10.1016/0304-3975(95)00106-9.
Enrico Tronci. "Optimal Finite State Supervisory Control." In CDC '96: Proceedings of the 35th IEEE International Conference on Decision and Control. Washington, DC, USA: IEEE Computer Society, 1996. DOI: 10.1109/CDC.1996.572981.
Abstract: Supervisory Controllers are Discrete Event Dynamic Systems (DEDSs) forming the discrete core of a Hybrid Control System. We address the problem of automatic synthesis of Optimal Finite State Supervisory Controllers (OSCs). We show that Boolean First Order Logic (BFOL) and Binary Decision Diagrams (BDDs) are an effective methodological and practical framework for Optimal Finite State Supervisory Control. Using BFOL programs (i.e. systems of boolean functional equations) and BDDs we give a symbolic (i.e. BDD based) algorithm for automatic synthesis of OSCs. Our OSC synthesis algorithm can handle arbitrary sets of final states as well as plant transition relations containing loops and uncontrollable events (e.g. failures). We report on experimental results on the use of our OSC synthesis algorithm to synthesize a C program implementing a minimum fuel OSC for two autonomous vehicles moving on a 4 x 4 grid.
Enrico Tronci. "Defining Data Structures via Böhm-Out." J. Funct. Program. 5, no. 1 (1995): 51–64. DOI: 10.1017/S0956796800001234.
Abstract: We show that any recursively enumerable subset of a data structure can be regarded as the solution set to a Böhm-out problem.
Enrico Tronci. "Hardware Verification, Boolean Logic Programming, Boolean Functional Programming." In Tenth Annual IEEE Symposium on Logic in Computer Science (LICS), 408–418. San Diego, California: IEEE Computer Society, 1995. DOI: 10.1109/LICS.1995.523275.
Abstract: One of the main obstacles to automatic verification of finite state systems (FSSs) is state explosion. In this respect automatic verification of an FSS M using model checking and binary decision diagrams (BDDs) has an intrinsic limitation: no automatic global optimization of the verification task is possible until a BDD representation for M is generated. This is because systems and specifications are defined using different languages. To perform global optimization before generating a BDD representation for M we propose to use the same language to define systems and specifications. We show that first order logic on a Boolean domain yields an efficient functional programming language that can be used to represent, specify and automatically verify FSSs, e.g. on a SUN Sparc Station 2 we were able to automatically verify a 64 bit commercial multiplier.
Corrado Böhm, and Enrico Tronci. "About Systems of Equations, X-Separability, and Left-Invertibility in the lambda-Calculus." Inf. Comput. 90, no. 1 (1991): 1–32. DOI: 10.1016/0890-5401(91)90057-9.
Enrico Tronci. "Equational Programming in lambda-calculus." In Sixth Annual IEEE Symposium on Logic in Computer Science (LICS), 191–202. Amsterdam, The Netherlands: IEEE Computer Society, 1991. DOI: 10.1109/LICS.1991.151644.
Adolfo Piperno, and Enrico Tronci. "Regular Systems of Equations in λ-calculus." Int. J. Found. Comput. Sci. 1, no. 3 (1990): 325–340. DOI: 10.1142/S0129054190000230.
Abstract: Many problems arising in equational theories like Lambda-calculus and Combinatory Logic can be expressed by combinatory equations or systems of equations. However, the solvability problem for an arbitrarily given class of systems is in general undecidable. In this paper we shall focus our attention on a decidable class of systems, which will be called regular systems, and we shall analyse some classical problems and well-known properties of Lambda-calculus that can be described and solved by means of regular systems. The significance of such class will be emphasized showing that for slight extensions of it the solvability problem turns out to be undecidable.
Corrado Böhm, Adolfo Piperno, and Enrico Tronci. "Solving Equations in λ-calculus." In Proc. of: Logic Colloquium 88. Padova - Italy, 1989.
Adolfo Piperno, and Enrico Tronci. "Regular Systems of Equations in λ-calculus." In Ictcs. Mantova - Italy, 1989. DOI: 10.1142/S0129054190000230.
Corrado Böhm, and Enrico Tronci. "X-Separability and Left-Invertibility in lambda-calculus." In Symposium on Logic in Computer Science (LICS), 320–328. Ithaca, New York, USA: IEEE Computer Society, 1987.
Corrado Böhm, and Enrico Tronci. "X-separability and left-invertibility in the λ-calculus (extended abstract, invited paper)." In Proceedings of: Temi e prospettive della Logica e della Filosofia della Scienza contemporanea. Cesena - Italy, 1987.
There are a few things that form the common canon of education in (quantitative) finance, yet everybody knows they are not exactly true, useful, well-behaved, or empirically supported.
So here is the question: which is the single worst idea still actively propagated?
Please make it one suggestion per post.
Correlations are notoriously unstable in financial time series - yet they are one of the most used concepts in quant finance because there is no good theoretical substitute. You could say the theory doesn't work with them, yet it doesn't work without them either.
For example, the concept is used for diversification across uncorrelated assets or for the modelling of credit default swaps (correlation of defaults). Unfortunately, when you need it most (e.g. in a crash) it just vanishes. This is one of the reasons the financial crisis started: quants modeled CDSs under certain assumptions concerning default correlations, but when a regime shift happens this no longer works.
See my follow-up question: What is the most stable, non-trivial dependence structure in finance?
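To see how fragile an estimated correlation can be, here is a small simulated illustration (made-up numbers, not real market data): two return series that are mildly correlated in a calm regime and strongly correlated in a crash regime, with the rolling estimate jumping accordingly.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Calm regime: correlation about 0.3; crash regime: correlation about 0.9.
    calm  = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=750)
    crash = rng.multivariate_normal([-1, -1], [[4.0, 3.6], [3.6, 4.0]], size=250)
    returns = pd.DataFrame(np.vstack([calm, crash]), columns=["asset_a", "asset_b"])

    # The 60-day rolling correlation sits near 0.3 for most of the sample,
    # then shoots up once the crash regime begins.
    rolling_corr = returns["asset_a"].rolling(60).corr(returns["asset_b"])
    print(rolling_corr.iloc[[100, 400, 999]])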
CAPM as an allocation strategy.
All information is known instantaneously by all market participants.
There are no transaction costs.
One conclusion is that the higher the beta, the higher the return, but this has clearly been shown to be violated.
While it is useful for segmenting $\alpha$ and $\beta$ (and for portfolio/strategy evaluation), it simply isn't entirely reliable as a portfolio allocation strategy.
The CAPM, like Markowitz's (1952, 1959) portfolio model on which it is built, is nevertheless a theoretical tour de force. We continue to teach the CAPM as an introduction to the fundamental concepts of portfolio theory and asset pricing, to be built on by more complicated models like Merton's (1973) ICAPM. But we also warn students that despite its seductive simplicity, the CAPM's empirical problems probably invalidate its use in applications.
Note that CAPM adds many assumptions to Markowitz's fundamental model to built itself. Therein lies its fallacy because as said above, those are difficult assumptions. Markowitz' model itself is fairly general in that you can inject 'views' of higher returns or greater volatility etc into the basic framework (or not!) and still be quite rooted in reality for mid-long term horizons.
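For what it's worth, here is a minimal sketch of the $\alpha$/$\beta$ decomposition mentioned above, on simulated excess returns (all numbers are made up; this only shows where beta comes from, not an endorsement of CAPM as an allocation rule):

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated monthly excess returns for the market and for one asset.
    market = rng.normal(0.005, 0.04, size=120)
    asset  = 0.001 + 1.3 * market + rng.normal(0.0, 0.02, size=120)

    # Regress asset excess returns on market excess returns:
    # the slope is beta, the intercept is the realized alpha.
    beta, alpha = np.polyfit(market, asset, deg=1)
    print(f"alpha = {alpha:.4f}, beta = {beta:.2f}")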
Everybody's favourite whipping boy: Identically and independently distributed returns, i.e. draws from $N(\mu, \sigma)$ to describe returns.
But they (GS) did it not with a new formula or a single rule. They did it by being smart rather than doctrinaire. They were eclectic; they had limits on all sorts of exposures -- on VaR, on the fraction of a portfolio that hadn't been modified in a year ... There isn't a formula for avoiding future losses because there isn't one cause of future losses.
It's focused on manageable risk in normal situations, with the assumption that tomorrow will be like today and yesterday, and without taking rare events into account.
It becomes another parameter (like profit) that could be gamed (the same profit but with low risk).
For many reasons. First, many adherents and critics support it for the wrong (often ideological) reasons. This applies even to well-known economists like John Quiggin. Second, because even fewer people know the extent and scope of the anomalies. The literature can get very technical. So even smart people rejecting the EMH, or publishing anomalies, end up being over-optimistic about their ability to beat the market.
I won't go as far as declaring the Gaussian copula "the formula that killed Wall Street" (warning: lousy article), but will defer to T. Mikosch in his very good paper on misuses of copulas.
The main advantage of BS delta hedging, though, is that it presents the big principles of hedging; the rest is a matter of sophistication and the derivatives trader's vista (or chance).
Backtesting - pure and simple. It's the logical and obvious thing to do, right? Yet so many pitfalls lie in wait. Be very careful, people. Do it as little as possible and as late as possible.
This isn't particularly insightful, but worth pointing out in this thread. Many people get caught up in the elegance and beauty of the mathematics and tend to be disconnected from the real world.
Concepts must be based on logical ideas and proper premises. It is easy to forget a premise and then misuse a model, such as using CAPM as an asset-allocation method as suggested by Shane, so re-check things. Do not make things personal. Do not abuse models with too complicated schemes (you may violate some basic assumption) -- and even then don't pretend to know; rather, aim to engineer.
For many years, stock market analysts have argued that value strategies outperform the market. These value strategies call for buying stocks that have low prices relative to earnings, dividends, book assets, or other measures of fundamental value. While there is some agreement that value strategies produce higher returns, the interpretation of why they do so is more controversial. This paper provides evidence that value strategies yield higher returns because these strategies exploit the mistakes of the typical investor and not because these strategies are fundamentally riskier.
Corporate Actions do not happen.
That is to say, both the models and psychology tend to ignore the possibility of such behavior as takeovers, spinouts, significant changes in leverage (ratio of debt to equity) by issuing or redeeming bonds, and the like.
Now, there are desks (such as merger arb) that specifically play these, and fundamental analysts discuss and sometimes "model" them (if you can call their relatively simple spreadsheets models). But you'll find that the difficulty of including them in options models keeps them unincorporated, and plenty of traders fail to make the necessary mental adjustments.
It is dangerous when large proportions of the trading population "religiously" believe [as a matter of unexamined faith] that certain necessary assumptions which govern the accuracy of the models they use will always hold. It is best to really understand how the models have been derived and to have a skeptic's understanding of these assumptions and their impact. All models have flaws -- yet it is possible to use flawed models if you can get consistent indicators from several different or contrarian approaches based upon radically different assumptions. When the valuations from different approaches diverge, it is necessary to understand why -- when this happens, it is necessary to investigate the underlying assumptions ... this sort of environment often provides trading opportunities, but the environment can also quite easily be an opportunity for disaster.
Of course, implicit and explicit assumptions are absolutely necessary to sufficiently simplify any mathematical analysis and to make it possible to derive models that can give lots of traders the generally useful trading "yardsticks" that they rely upon. As an example, consider the Black-Scholes model. The Black-Scholes model is ubiquitous; a commonly used "yardstick" for option valuations. The Black-Scholes model of the market for a particular equity explicitly assumes that the stock price follows a geometric Brownian motion with constant drift and volatility.
This assumption of "geometric Brownian motion with constant drift and volatility" is never exactly true in the very strictest sense but, most of the time, it is a very useful, simplifying assumption because stock prices are often "like" this. It might not be reality, but the assumption is a close enough approximation of reality. This assumption is highly useful because of how makes it possible to apply stochastic partial differential equations methodology to the problem of determining appropriate option valuations. However, the assumption of "constant drift and volatility" is a very dangerous assumption in times when judgement, wisdom and intuition would tell an experienced investor "Something is "odd." It's as if we're in the calm before the storm." OR "Crowd psychology and momentum seem to be more palpable factor in the prices right now."
Persistent autocorrelations in volatility processes are due to long term memory only. I cannot help but sigh at the hundreds of papers which work under this assumption. Haven't people heard about regime shifts?
In my opinion you should question EVERYTHING.
Recently I read the article Ten Things We Should Know About Time Series by Michael McAleer, which is in my opinion a good summary of some common issues in time series analysis.
See the article for a further description of each point.
The fall of the US mortgage market in 2008: risk on mortgage bond portfolios was grossly underestimated because the strong dependence of the bonds on common variables, like the state of the business and credit cycles, was ignored, and covariances and portfolio variance were understated.
The rise and fall of the US junk-bond market fuelled by Milken, who used statistics covering fallen angels from decades ago to predict default rates on new bonds, for which most variables other than solvency ratios were either omitted or dissimilar to the variables of the bonds in the statistics.
Lemma 15.36.8. Let $R$, $S$ be rings. Let $\mathfrak n \subset S$ be an ideal. Let $R \to S$ be formally smooth for the $\mathfrak n$-adic topology. Let $R \to R'$ be any ring map. Then $R' \to S' = S \otimes _ R R'$ is formally smooth in the $\mathfrak n' = \mathfrak nS'$-adic topology.
Ambroglini, F, Armillis, R, Azzi, P, Bagliesi, G, Ballestrero, A, Balossini, G, Banfi, A, Bartalini, P, Benedetti, D, Bevilacqua, G, Bolognesi, S, Cafarella, A, Calame, CMC, Carminati, L, Cobal, M, Corcella, G, Coriano', C, Dainese, A, Duca, VD, Fabbri, F, Fabbrichesi, M, Fano', L, Faraggi, AE, Frixione, S, Garbini, L, Giammanco, A, Grazzini, M, Guzzi, M, Irges, N, Maina, E, Mariotti, C, Masetti, G, Mele, B, Migliore, E, Montagna, G, Monteno, M, Moretti, M, Nason, P, Nicrosini, O, Nisati, A, Perrotta, A, Piccinini, F, Polesello, G, Rebuzzi, D, Rizzi, A, Rolli, S, Roda, C, Rosati, S, Santocchia, A, Stocco, D, Tartarelli, F, Tenchini, R, Tonero, A, Treccani, M, Treleani, D, Tricoli, A, Trocino, D, Vecchi, L, Vicini, A and Vivarelli, I (2000) Proceedings of the Workshop on Monte Carlo's, Physics and Simulations at the LHC PART II.
Ambroglini, F, Armillis, R, Azzi, P, Bagliesi, G, Ballestrero, A, Balossini, G, Banfi, A, Bartalini, P, Benedetti, D, Bevilacqua, G, Bolognesi, S, Cafarella, A, Calame, CMC, Carminati, L, Cobal, M, Corcella, G, Coriano', C, Dainese, A, Duca, VD, Fabbri, F, Fabbrichesi, M, Fano', L, Faraggi, AE, Frixione, S, Garbini, L, Giammanco, A, Guzzi, M, Irges, N, Maina, E, Mariotti, C, Masetti, G, Mele, B, Migliore, E, Montagna, G, Monteno, M, Moretti, M, Nason, P, Nicrosini, O, Nisati, A, Perrotta, A, Piccinini, F, Polesello, G, Rebuzzi, D, Rizzi, A, Rolli, S, Roda, C, Rosati, S, Santocchia, A, Stocco, D, Tartarelli, F, Tenchini, R, Tonero, A, Treccani, M, Treleani, D, Tricoli, A, Trocino, D, Vecchi, L, Vicini, A and Vivarelli, I (2000) Proceedings of the Workshop on Monte Carlo's, Physics and Simulations at the LHC PART I.
Antonacopoulou, E and Chiva, R (2000) the Social Complexity of Organisational Learning: Dynamics of Learning and Organizing. Management Learning, 38 (3). 277 - 296.
Antonacopoulou, E and Guttel, W (2000) Staff Induction Practices and Organizational Socialization: A Review and Extension of the Debate. Society and Business Review, 5 (1). 22 - 47.
Argyres, PC, Faraggi, AE and Shapere, AD (2000) Curves of Marginal Stability in N=2 Super-QCD. .
Artale, A, Kontchakov, R, Wolter, F and Zakharyaschev, M (2000) Temporal Description Logic for Ontology-Based Data Access (Extended Version).
Biktashev, VN and Biktasheva, IV (2000) Dynamics of filaments of scroll waves. In: Unspecified .
Boucaud, P, Burgio, G, Renzo, FD, Leroy, JP, Micheli, J, Parrinello, C, Pene, O, Pittori, C, Rodriguez-Quintero, J, Roiesnel, C and Sharkey, K (2000) Lattice calculation of 1/p^2 corrections to α_s and of Λ_(QCD) in the MOM-tilde scheme. J. High Energy Phys.
Coriano, C and Faraggi, AE (2000) SUSY Scaling Violations and UHECR. .
Coriano, C, Faraggi, AE and Guzzi, M (2000) Z-prime Searches at the LHC: Some QCD Precision Studies in Drell-Yan. .
Dixon, C, Fisher, M, Konev, B and Lisitsa, A (2000) Efficient First-Order Temporal Logic for Infinite-State Systems.
Easterby-Smith, M, Graca, M, Antonacopoulou, Elena and Ferdinand, Jason (2000) Absorptive Capacity: A Process Perspective. Management Learning, 39 (5). 483 - 501.
Eden, B, Howe, PS, Pickering, A, Sokatchev, E and West, PC (2000) Four-point functions in N=2 superconformal field theories. Nuclear Physics B, 581 (1-2). 523 - 558.
Ellwood, P and Pandza, K (2000) Addressing the challenges of cooperation and coordination in complex innovation environments: narratives of openness. In: European Group for Organization Studies, 2015-07-01 - 2015-07-04, Athens.
Faraggi, AE (2000) Cosmological and phenomenological implications of Wilsonian matter in realistic superstring derived models. .
Faraggi, AE (2000) Deriving The Standard Model From Superstring Theory. .
Faraggi, AE (2000) Duality, Equivalence, Mass and The Quest For The Vacuum. .
Faraggi, AE (2000) On the origin of three generation free fermionic superstring models. .
Faraggi, AE (2000) Phenomenological survey of free fermionic heterotic-string models. .
Faraggi, AE (2000) Realistic Superstring Models. .
Faraggi, AE (2000) Superstring Phenomenology - A Personal Perspective. .
Faraggi, AE (2000) Superstring phenomenology -- present and future perspective. .
Faraggi, AE (2000) The $Z_2\times Z_2$ Orbifold and the SUSY Flavor Problem. .
Faraggi, AE and Matone, M (2000) Reply to Comment on "Duality of x and psi in Quantum Mechanics".
Faraggi, AE and Rizos, J (2000) Spinor-vector duality and light Z' in heterotic string vacua. .
Fitzjohn, Matthew (2000) Hearth and home: evaluating quality of life in the ancient Greek world. In: Building, dwelling, and living in the ancient Greek world. Cambridge University Press.
HAZEL, SM, BENNETT, M, CHANTREY, J, BOWN, K, CAVANAGH, R, JONES, TR, BAXBY, D and BEGON, M (2000) A longitudinal study of an endemic disease in its wildlife reservoir: cowpox and wild rodents. Epidemiology and Infection, 124 (3). 551 - 562.
Jack, I and Jones, DRT (2000) Fayet-Iliopoulos D-terms and anomaly mediated supersymmetry breaking. PHYSICS LETTERS B, 482 (1-3). 167 - 173.
Jack, I and Jones, DRT (2000) Quasi-infrared fixed points and renormalization group invariant trajectories for nonholomorphic soft supersymmetry breaking. PHYSICAL REVIEW D, 61 (9).
Jack, I and Jones, DRT (2000) R-symmetry, Yukawa textures and anomaly mediated supersymmetry breaking. PHYSICS LETTERS B, 491 (1-2). 151 - 156.
Jack, I and Jones, DRT (2000) Renormalisation of the Fayet-Iliopoulos D-term. PHYSICS LETTERS B, 473 (1-2). 102 - 108.
Jack, I, Jones, DRT and Parsons, S (2000) Fayet-Iliopoulos D term and its renormalization in softly broken supersymmetric theories. PHYSICAL REVIEW D, 62 (12).
Karpenkov, O (2000) Vladimir Igorevich Arnold. Internat. Math. Nachrichten, 2010.
Kinderman, P and Lobban, F (2000) Evolving formulations. Behavioural and Cognitive Psychotherapy, 28. 307 - 310.
Kinderman, Peter and Lobban, Fiona (2000) EVOLVING FORMULATIONS: SHARING COMPLEX INFORMATION WITH CLIENTS. Behavioural and Cognitive Psychotherapy, 28 (3). 307 - 310.
Levene, D and Ponting, M (2000) "Recycling economies, when efficient, are by their nature invisible". A first century Jewish recycling economy. In: Talmudic Archaeology., Leiden, Netherlands.
Malbon, A, Ricci, E, Unwin, S and Chantrey, J (2000) Cerebellar Abiotrophy in Two Related Lion-tailed Macaques (Macaca silenus).
Michael, Benedict, Jeacocke, P, Varatharaj, A, Backman, Ruth, McGill, Fiona, Kneen, Rachel, Medina-Lara, Antonieta and Solomon, Tom (2000) Unselected brain imaging in suspected meningitis delays lumbar puncture, can prolong hospitalisation and may increase antibiotic costs - a pilot study.
Morton, H and Rampichini, M (2000) Mutual braiding and the band presentation of braid groups. In: Knots in Hellas '98: Proceedings of the International Conference on Knot Theory and its Ramifications. World Scientific,Singapore, 335 - 346.
Pickering, Austin and West, Peter (2000) Chiral Green's functions in superconformal field theory. Nuclear Physics B, 569 (1-3). 303 - 328.
Resta-Lopez, J, Burrows, PN, Latina, A and Schulte, D (2000) Luminosity Performance Studies of Linear Colliders with Intra-train Feedback Systems.
Shaaban, IG and Adam, M (2000) Seismic Behaviour of Beam-Column Connections in High Strength Concrete Building Frames. In: 8th Arab Structural Engineering Conference (8ASEC), 2000-10-21 - 2000-10-23, Cairo, Egypt.
Stevens, S, Garner, C, Wei, C, Greenwood, R, Hamilton-Shield, J, Costello, BDL, Ratcliffe, N and Probert, C (2000) A study of volatile compounds in the breath of children with type 1 diabetes.
Tanyi, AG (2000) Piac és igazságosság? (Market and Justice?). Napvilág,Budapest, Hungary.
Thornton, N (2000) Re-Framing Independence in Mexican Cinema: Marcela Fernández Violante a Pioneering Filmmaker. In: Latin American Women Filmmakers: Production, Politics, Poetics. IB Tauris.
Touramanis, C (2000) Physics at BABAR. .
Welsch, C (2000) FEL R&D within LA3NET. In: FEL (Free-Electron Laser) International Conference, 2013-08-26 - 2013-08-30, Manhattan, USA.
Zhang, Z, Ringeval, F, Dong, B, Coutinho, E, Marchi, E and Schuller, B (2000) Enhanced Semi-Supervised Learning for Multimodal Emotion Recognition. , 2016-03-20 - 2016-03-25.
For the past few months, I've been commuting to work on my bicycle. I've always been a walker, but I've been out of shape and slowly gaining fat for some time now. The new activity has led to some obvious weight loss. This has inspired me to keep working at it and track my progress. As part of this, I wanted to measure my percent body fat using tools I have around my apartment. You can find calculators out on the internet which give you a singular estimate. Being a scientist though, I want some knowledge of the uncertainty of the estimate. I decided to build my own model from data which I can use to get an estimate, with the uncertainty, of my body fat percentage.
I found data from a study which measured the body density and various anatomical measurements (such as neck and chest circumferences) of a group of men. From my research, I found that body density can be measured accurately using water or air displacement. However, it is unclear how to convert density into body fat percentage because you must assume a distribution of lean and fatty tissues. There are more than a few methods, but for this analysis I'm going to use Brozek's method.
First we'll start by importing some packages we'll use and then import the data.
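The original code cells aren't reproduced here; a minimal sketch of this step might look like the following, where the file name bodyfat.csv and the column layout are my assumptions about how the study data was saved:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed layout: Brozek and Siri body fat percentages, body density,
    # then age, weight, height and the circumference measurements.
    data = pd.read_csv("bodyfat.csv")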
Now that the data is loaded, we can check it out. We'll look at the first few cases, then make some plots so we can see how some of the data is distributed. The first two columns are percent body fat (PBF) using the Brozek and Siri formulas, respectively. Every column from "neck" on is the circumference at that location.
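Again as a sketch, continuing from the data frame loaded above:

    import pandas as pd
    import matplotlib.pyplot as plt

    print(data.head())   # "data" comes from the previous sketch

    # A scatter matrix gives a quick look at how each measurement relates
    # to percent body fat and to the other measurements.
    pd.plotting.scatter_matrix(data, figsize=(12, 12), diagonal="hist")
    plt.show()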
From this we can see the percent body fat is correlated with weight and body measurements, while height has little if any effect. There are also correlations between body measurements and weight which we'll have to deal with later. I'm going to use a linear regression model to predict percent body fat from the data.
A linear regression model is a formal way to draw a line through a set of data points. For instance, we can see that when we plot abdomen circumference on the x-axis and percent body fat on the y-axis (first row, fifth column), the data falls along a general upward sloped line. As we would expect, a larger gut indicates more body fat. We can get a decent estimate of this relationship by printing out the plot and drawing a line through the data points by hand. However, we want to find the "best" line. What I mean by "best" is that the line we draw has the least error predicting the existing data points.
We can find the coefficients of this best-fit line using the ordinary least squares method, which chooses the coefficients that minimize the sum of squared differences between the predicted and measured body fat percentages. Here I will use the excellent scikit-learn Python package to fit a linear model to our data.
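A sketch of the fit (the column names brozek, siri and density are my guesses for the file layout; density is dropped because the body fat percentages are computed directly from it):

    from sklearn.linear_model import LinearRegression

    # Predict Brozek percent body fat from age, weight, height and the
    # circumference measurements ("data" comes from the earlier sketch).
    X = data.drop(columns=["brozek", "siri", "density"])
    y = data["brozek"]

    ols = LinearRegression().fit(X, y)
    print(dict(zip(X.columns, ols.coef_.round(3))))
    print("R^2 on the training data:", round(ols.score(X, y), 3))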
We see here that our model fits the data pretty well, but I'd like to measure the quality of the fit. A common way to measure the performance of the model is to split the data into a training set and a testing set. You fit the model to the training set, then measure how well it can predict the test set. This is known as cross-validation (CV) and is sort of the industry standard. I'll use scikits learn to perform k-folds CV. This method separates the data into $k$ parts (folds). Then, it uses $k-1$ folds to fit the model and then tests the model on the left out fold. Repeat this $k$ times using each fold to test only once.
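A sketch of the cross-validation step, reusing X and y from above:

    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # 10-fold CV: fit on nine folds, score on the held-out fold, so that
    # every observation is used for testing exactly once.
    scores = cross_val_score(LinearRegression(), X, y,
                             scoring="neg_mean_squared_error", cv=10)
    cv_rmse = (-scores.mean()) ** 0.5
    print("cross-validated RMSE (percentage points):", round(cv_rmse, 2))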
To deal with the correlated features, I'll use the lasso, which adds a penalty on the absolute size of the coefficients to the usual least squares objective. The penalty pulls the coefficients towards zero, a process known as shrinkage, and decreases the variance in the model, leading towards better predictions. The amount of shrinkage is controlled by the parameter $\lambda$: as $\lambda \rightarrow \infty$ all coefficients go to zero, and as $\lambda \rightarrow 0$ we get our normal linear regression back. However, there is no way to know beforehand what the best value for $\lambda$ is. I can find the best $\lambda$ by fitting a bunch of models with different $\lambda$ and choosing the one with the least prediction error. This is typically done with (again) cross-validation and is available from scikit-learn.
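For reference, the objective the lasso minimizes can be written as $$\hat{\beta} = \underset{\beta}{\arg\min} \sum_{i=1}^{N}\Big(y_i - \beta_0 - \sum_{j} x_{ij}\beta_j\Big)^2 + \lambda \sum_{j} |\beta_j|,$$ which is the standard textbook form rather than a formula copied from the original post.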
I'll also need to scale the features so that all the coefficients are on the same scale, and center the dependent variable as well.
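A sketch of that step, plotting the coefficient paths and letting LassoCV pick $\lambda$ (called alpha in scikit-learn) by cross-validation:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.linear_model import LassoCV, lasso_path
    from sklearn.preprocessing import StandardScaler

    # Put every feature on the same scale so the penalty treats them equally,
    # and center the target.
    X_scaled = StandardScaler().fit_transform(X)
    y_centered = y - y.mean()

    # Coefficient paths: each curve shows one coefficient as lambda shrinks.
    alphas, coefs, _ = lasso_path(X_scaled, y_centered)
    plt.plot(np.log10(alphas), coefs.T)
    plt.xlabel("log10(lambda)")
    plt.ylabel("coefficient")
    plt.show()

    # Let cross-validation pick the penalty.
    lasso = LassoCV(cv=10).fit(X_scaled, y_centered)
    print("chosen penalty:", round(lasso.alpha_, 4))
    print(dict(zip(X.columns, lasso.coef_.round(3))))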
We can see how the coefficients start at 0 for large $\lambda$, then become non-zero as $\lambda$ is decreased.
We can see here that abdomen circumference is always a strong predictor. As $\lambda$ gets smaller, the coefficient for height becomes non-zero, then wrist circumference and age. I'm going to use these four features for my final model. It is reasonable that height and wrist circumference are negatively related to percent body fat. They indicate how long and how thick your bones are, respectively. Also, it seems that as one gets older, the percent body fat increases, holding everything else constant. I'm guessing this is from a change in the distribution of tissues, younger men having more muscle than older men. Now we can build a simpler model using only these features.
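A sketch of the reduced model (the column names are again my assumptions about the file):

    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    features = ["abdomen", "height", "wrist", "age"]   # assumed column names
    X_small = data[features]

    small_model = LinearRegression().fit(X_small, y)
    scores = cross_val_score(LinearRegression(), X_small, y,
                             scoring="neg_mean_squared_error", cv=10)
    small_rmse = (-scores.mean()) ** 0.5
    print(dict(zip(features, small_model.coef_.round(3))))
    print("cross-validated RMSE of the reduced model:", round(small_rmse, 2))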
Finally, I can predict my own body fat percentage. I'll use the CV error previously calculated as an estimate of the uncertainty. | CommonCrawl |
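A sketch of the final step; the measurements below are placeholders rather than my real numbers, and must be in the same units as the training data:

    import pandas as pd

    me = pd.DataFrame([{"abdomen": 85.0, "height": 70.0,
                        "wrist": 17.5, "age": 29}])

    estimate = small_model.predict(me[features])[0]
    print(f"estimated body fat: {estimate:.1f}% +/- {small_rmse:.1f} (CV error)")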
Möbius transformations have been thoroughly studied over the field of complex numbers. In this thesis, we investigate Möbius transformations over two rings which are not fields: the ring of double numbers $\mathbb O$ and the ring of dual numbers $\mathbb D$. We will see a certain similarity between the cases of fields and rings, along with some significant distinctions. After the introduction and necessary background material, given in the first two chapters, I introduce general linear groups, projective lines and Möbius transformations over several rings, such as the ring of integers, the Cartesian product ring and the two rings $\mathbb O$ and $\mathbb D$. In the following chapters, we consider in detail metrics, classification of Möbius maps based on the number of fixed points, connected continuous one-parameter subgroups and an application of Möbius maps.
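For orientation (these are the standard definitions, not text from the thesis itself): the dual numbers are $\mathbb D = \{a + b\varepsilon : a, b \in \mathbb R,\ \varepsilon^2 = 0\}$, the double numbers are $\mathbb O = \{a + bj : a, b \in \mathbb R,\ j^2 = 1\}$, and in both cases a Möbius transformation acts on the corresponding projective line by $$z \mapsto \frac{\alpha z + \beta}{\gamma z + \delta}, \qquad \alpha\delta - \beta\gamma \ \text{a unit},$$ which is where the distinction between fields and these rings (both of which contain zero divisors) becomes visible.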
The Winter Midwest Topology Seminar will take place at UIUC, in Urbana-Champaign, IL, on Saturday, February 23, 2019.
Registration: You can register here. If you are requesting funding, please indicate as much on the registration form. Also, please register, regardless if you are requesting funding, to help us get enough refreshments for the conference.
Funding: A small amount of funding is available for graduate students, postdocs, and those without other sources of support. To be fully considered for funding, please register by January 20th. If you have other sources of funding, we encourage you to use it, so that our limited funding can be used to support others.
Location: The main conference talks will take place in 314 Altgeld Hall, 1409 W. Green Street, Urbana, IL. Refreshments will be available in 239 Altgeld Hall. Here is a campus map and here is a google map with potential points of interest. The Illinois campus and surrounding Urbana-Champaign area is pretty walkable, but is also serviced by public transportation. A bus ride costs $1, and maps and schedules can be found here.
Local information: We have reserved a block of rooms at a discounted rate at the Hampton Inn. These can be accessed via the following link. Additionally, you could also book a room at the Illini Union or at TownePlace Suites.
Schedule: Talks will occur between the times 10a-5p, with coffee and lunch breaks. Details forthcoming.
There will be four parts to this talk. (1) I will explain the Davis-Lück approach to assembly maps via groupoids and the orbit category. (2) I will explain the notion of an isomorphism conjecture, the prototypical one being the Farrell-Jones Conjecture. (3) I will talk about some recent computations (joint with Wolfgang Lück) of the structure group of BG and connections with equivariant topological K-theory. (4) I will discuss work in progress with Carmen Rovi about a local-to-global bordism approach to the assembly map in L-theory and applications to the Farrell-Jones Conjecture.
B. Kahn and T. Yamazaki recently defined a new abelian category whose objects are homotopy invariant sheaves with transfers. Distinguished objects in this category arise from abelian varieties, tori and groups of zero-cycles on smooth projective varieties. This category has a product, which has significant applications to algebraic geometry, namely to the theory of zero-cycles. The purpose of my talk will be to present such a geometric application. In recent joint work with Isabel Leal, we obtain some finiteness results about the Chow group of zero-cycles, $CH_0(E_1\times E_2)$, on a product of elliptic curves, verifying in this case a very open conjecture of Colliot-Thelene.
In influential work of the 70s and 80s, Segal and Waldhausen each construct a version of K-theory that produces spectra from certain types of categories. These constructions agree, in the sense that appropriately equivalent categories yield weakly equivalent spectra. In the 2000s, work of Elmendorf--Mandell and Blumberg--Mandell produced more structured versions of Segal and Waldhausen K-theory, respectively. These versions are "multiplicative," in the sense that appropriate notions of pairings of categories yield multiplication-type structure on their resulting spectra. In this talk, I will discuss joint work with Osorno in which we show that these constructions agree as multiplicative versions of K-theory. Consequently, we get comparisons of rings spectra built from these two constructions. Furthermore, the same result also allows for comparisons of related constructions of spectrally-enriched categories.
Let G be a finite group and X a topos with homotopy coherent G-action. From this, we construct a stable homotopy theory Sp^G(X) which recovers and extends the theory of genuine G-spectra. We explain what our construction yields when: (i) X is the topos of sheaves on a topological space with G-action (ii) X is the etale C_2-topos of a scheme S adjoined a square root of -1. We conclude with an application to realization functors out of the stable motivic homotopy category of a scheme. This is joint work with Elden Elmanto.
Travel Information The closest airport is CMI, University of Illinois Willard Airport. From there one would have to take a taxi to campus. More often than not, flying to Chicago's O'Hare airport is the most convenient option, and there is a direct bus service to Champaign-Urbana, run by Peoria Charter, with a stop directly in front of the math building (Altgeld Hall) where the conference will be held.
Parking Information One can find parking information provided by the University of Illinois here. Additionally one can park in garage C7 (on John and 5th). This is the closest parking garage to Altgeld Hall and no permit is required to park there from 5p Friday to 6a Monday.
Contact us: Dominic Culver, Jeremiah Heller, Charles Rezk, Vesna Stojanoska. | CommonCrawl |
Is it poor grammar to replace normal phrases with mathematical symbols in sentences in a mathematical paper?
The matrix A has rank ≥ n.
The rank of the matrix A is greater than or equal to n.
Are expressions such as "The matrix A has rank ≥ n" considered as acceptable in mathematical papers/theses/textbooks?
No, it's often considered poor style to incorporate fragments of equations like this into text. I wouldn't go so far as to say it's ungrammatical, but many people consider it bad writing. Some others don't care about this issue, which is why you sometimes see it done, but this is more common in informal or unedited writing.
The issue is that "rank ≥ n" is mixing together English and mathematics within the same construction. If this doesn't bother you, imagine a more dramatic case like "n + five". (By contrast, when someone writes "if x ≥ y", the inequality "x ≥ y" is a self-contained unit within the sentence.) There's no logical reason why mathematical writing conventions couldn't allow this sort of mixing, but they don't.
Saying "The matrix A has rank at least n" is shorter and cleaner than "The rank of the matrix A is greater than or equal to n", but they are both acceptable. I'd recommend avoiding "The matrix A has rank ≥ n" (I can't think of a good reason to prefer it, and avoiding looking bad is a reason not to use it).
I, for one, allow these mixed constructions when editing articles. I do know that it is not the best grammatical style, but not everything in math is easy to put down in proper English grammar. The two rules of thumb I use for these boundary cases are: Is the text clear to the reader? Can you easily make it grammatically correct?
Galois conjugates except its complex conjugate are in modulus $<1$.
Galois conjugates except its complex conjugate are in modulus less than $1$.
I prefer the first option. This went through AMS language editing, as far as I remember, without any problem.
I let it pass if the sentence is unambiguous and can be read aloud normally without any special effort, as in "If $A$ is $\ge B+C$ and $f:[0,A]\to\mathbb Z$, then..." (read: "If the quantity/parameter/number $A$ is larger than the sum $B+C$ and the function $f$ maps the interval $[0,A]$ to the set $\mathbb Z$, then..."), because in such cases the extra words just slow the reader down. However, I usually object when I see any ambiguity, as in "If A, B, C." (which comma is "and", and which is "then" here?), or a construction that, if read as a sentence, violates not only the rules of grammar but also common sense about sentence structure, and that would have to be split into separate sentences and totally restructured before it could be said at the board in a classroom and comprehended by ear.
Side note: what's the point of not enabling mathjax on Academia?
In principle, it is generally acceptable to mix together mathematical and prose statements, as in your example: either construction would technically be grammatically correct.
In practice, which to choose depends on how you want your reader to think about the statement that you have written. Prose emphasizes the relationship, in your example focusing the reader on "greater than." A mathematical statement tends to instead be thought of as a unit, in your example focusing the reader on "rank." You should thus choose accordingly.
We selected eight conditions to test.
We found that 8 of the 73 samples were positive.
The boundary of "small" is a bit hazy: certainly less than 10, usually less than 20.
Abstract: We report results from a programme aimed at investigating the temperature of neutral gas in high-redshift damped Lyman-$\alpha$ absorbers (DLAs). This involved (1) HI 21cm absorption studies of a large DLA sample, (2) VLBI studies to measure the low-frequency quasar core fractions, and (3) optical/ultraviolet spectroscopy to determine DLA metallicities and velocity widths.
Including literature data, our sample consists of 37 DLAs with estimates of the spin temperature $T_s$ and the covering factor. We find a strong ($4\sigma$) difference between the $T_s$ distributions in high-z (z>2.4) and low-z (z<2.4) DLA samples. The high-z sample contains more systems with high $T_s$ values, $\gtrsim 1000$ K. The $T_s$ distributions in DLAs and the Galaxy are also clearly (~$6\sigma$) different, with more high-$T_s$ sightlines in DLAs than in the Milky Way. The high $T_s$ values in the high-z DLAs of our sample arise due to low fractions of the cold neutral medium.
For 29 DLAs with metallicity [Z/H] estimates, we confirm the presence of an anti-correlation between $T_s$ and [Z/H], at $3.5\sigma$ significance via a non-parametric Kendall-tau test. This result was obtained with the assumption that the DLA covering factor is equal to the core fraction. Monte Carlo simulations show that the significance of the result is only marginally decreased if the covering factor and the core fraction are uncorrelated, or if there is a random error in the inferred covering factor. | CommonCrawl |
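As a rough illustration of the statistics involved (not the paper's code or data), the correlation test and the robustness check could look something like the sketch below; the data arrays are synthetic placeholders and the 20% error on the covering factor is an assumed figure.

```python
# Sketch of the non-parametric Kendall-tau test for a T_s -- [Z/H] anti-correlation,
# plus a Monte Carlo check of how random errors in the assumed covering factors
# would affect the significance. The data are synthetic placeholders.
import numpy as np
from scipy.stats import kendalltau, norm

rng = np.random.default_rng(0)

# synthetic stand-ins for the 29 systems with metallicity estimates
metallicity = rng.uniform(-2.5, -0.5, size=29)                      # [Z/H]
log_Ts = 2.2 - 0.6 * metallicity + rng.normal(0, 0.25, size=29)     # fake anti-correlation
spin_temp = 10 ** log_Ts                                            # T_s in K

tau, p_value = kendalltau(metallicity, spin_temp)
sigma = norm.isf(p_value / 2)       # two-sided p-value -> Gaussian-sigma equivalent
print(f"tau = {tau:.2f}, significance ~ {sigma:.1f} sigma")

# Monte Carlo: apply a random multiplicative error (standing in for covering-factor
# uncertainty) to the derived T_s and see how much the significance is degraded.
sigmas = []
for _ in range(5000):
    perturbed = spin_temp * rng.normal(1.0, 0.2, size=spin_temp.size)   # assumed 20% error
    _, p_i = kendalltau(metallicity, perturbed)
    sigmas.append(norm.isf(p_i / 2))
print(f"median significance over trials: {np.median(sigmas):.1f} sigma")
```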
Bacopa monnieri, or Water Hyssop, known widely as 'Brahmi,' is a small herb native to India that finds mention in various Ayurvedic texts as the best natural cognitive enhancer. It has been used traditionally for memory enhancement, asthma, epilepsy, and for improving mood and attention in people over 65. It is known to be one of the best brain supplements in the world.
After my rudimentary stacking efforts flamed out in unspectacular fashion, I tried a few ready-made stacks—brand-name nootropic cocktails that offer to eliminate the guesswork for newbies. They were just as useful. And a lot more expensive. Goop's Braindust turned water into tea-flavored chalk. But it did make my face feel hot for 45 minutes. Then there were the two pills of Brain Force Plus, a supplement hawked relentlessly by Alex Jones of InfoWars infamy. The only result of those was the lingering guilt of knowing that I had willingly put $19.95 in the jorts pocket of a dipshit conspiracy theorist.
The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140nmol/l and reading anecdotes online suggests that 5k IU daily doses tend to put people well below that (around 70-100nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia.
Medication can be ineffective if the drug payload is not delivered at its intended place and time. Since an oral medication travels through a broad pH spectrum, the pill encapsulation could dissolve at the wrong time. However, a smart pill with environmental sensors, a feedback algorithm and a drug release mechanism can give rise to smart drug delivery systems. This can ensure optimal drug delivery and prevent accidental overdose.
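As a purely illustrative sketch of the sense-and-release loop just described (not any real device's firmware), the feedback logic might be as simple as releasing the payload once an onboard pH sensor reports values typical of the small intestine; the pH window below is an assumption for illustration.

```python
# Toy sketch of a sense -> feedback -> release loop for an ingestible "smart pill".
# The pH threshold and transit readings are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SmartPillState:
    payload_released: bool = False

TARGET_PH_RANGE = (6.5, 7.5)   # assumed near-neutral window, e.g. small intestine

def feedback_step(state: SmartPillState, ph_reading: float) -> SmartPillState:
    """Trigger the drug release mechanism once the sensed pH enters the target window."""
    low, high = TARGET_PH_RANGE
    if not state.payload_released and low <= ph_reading <= high:
        state.payload_released = True
    return state

# Simulated transit: acidic stomach readings followed by near-neutral intestinal ones
state = SmartPillState()
for ph in [1.8, 2.5, 4.0, 6.8, 7.2]:
    state = feedback_step(state, ph)
print("payload released:", state.payload_released)
```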
We reviewed recent studies concerning prescription stimulant use specifically among students in the United States and Canada, using the method illustrated in Figure 1. Although less informative about the general population, these studies included questions about students' specific reasons for using the drugs, as well as frequency of use and means of obtaining them. These studies typically found rates of use greater than those reported by the nationwide NSDUH or the MTF surveys. This probably reflects a true difference in rates of usage among the different populations. In support of that conclusion, the NSDUH data for college age Americans showed that college students were considerably more likely than nonstudents of the same age to use prescription stimulants nonmedically (odds ratio: 2.76; Herman-Stahl, Krebs, Kroutil, & Heller, 2007).
Speaking of addictive substances, some people might have considered cocaine a nootropic (think: the finance industry in Wall Street in the 1980s). The incredible damage this drug can do is clear, but the plant from which it comes has been used to make people feel more energetic and less hungry, and to counteract altitude sickness in Andean South American cultures for 5,000 years, according to an opinion piece that Bolivia's president, Evo Morales Ayma, wrote for the New York Times.
Popular among computer programmers, oxiracetam, another racetam, has been shown to be effective in recovery from neurological trauma and in improving long-term memory. It is believed to be effective in improving attention span, memory, learning capacity, focus, sensory perception, and logical thinking. It also acts as a stimulant, increasing mental energy, alertness, and motivation.
Ginsenoside Rg1, a molecule found in the plant genus Panax (ginseng), is being increasingly researched as an effective nootropic. Its cognitive benefits include increasing learning ability and memory acquisition and accelerating neural development. It targets mainly the NMDA receptors and nitric oxide synthase, which both play important roles in personal and emotional intelligence. The authors of the study cited above say that their research findings thus far have boosted their confidence in a "bright future of cognitive drug development."
The term "smart pills" refers to miniature electronic devices that are shaped and designed in the mold of pharmaceutical capsules but perform highly advanced functions such as sensing, imaging and drug delivery. They may include biosensors or image, pH or chemical sensors. Once they are swallowed, they travel along the gastrointestinal tract to capture information that is otherwise difficult to obtain, and then are easily eliminated from the system. Their classification as ingestible sensors makes them distinct from implantable or wearable sensors.
Please note: Smart Pills, Smart Drugs or Brain Food Supplements are also known as: Brain Smart Vitamins, Brain Tablets, Brain Vitamins, Brain Booster Supplements, Brain Enhancing Supplements, Cognitive Enhancers, Focus Enhancers, Concentration Supplements, Mental Focus Supplements, Mind Supplements, Neuro Enhancers, Neuro Focusers, Vitamins for Brain Function, Vitamins for Brain Health, Smart Brain Supplements, Nootropics, or "Natural Nootropics"
The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free, but we'll call it an hour over the 250 days. Recording mood/productivity is also a free sunk cost, as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, and 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: $5 + (>5 × $7.25) = >$41.
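Spelling out that last line of arithmetic (same numbers as above, with each hour valued at the $7.25 figure used in the total):

```python
# The cost estimate above, written out: ~$5 of capsules plus >5 hours of time
# (1 filling capsules, ~1 for taking them over 250 days, >3 of dual n-back tests),
# with each hour valued at $7.25.
capsule_cost = 5.00
hours = 1 + 1 + 3                      # a lower bound, hence the ">" in the text
total = capsule_cost + hours * 7.25
print(f"total cost: > ${total:.2f}")   # > $41.25
```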
Abstract: We compute the Hochschild cohomology groups for algebras of semidihedral type contained in the family $SD(2\mathcal B)_2$ (from K. Erdmann's famous classification) over an algebraically closed field of characteristic $\neq 2$. In the calculation, we use the minimal projective bimodule resolution for algebras from this family, constructed in a previous paper by the author.
Key words and phrases: Hochschild cohomology groups, algebras of semidihedral type, bimodule resolution. | CommonCrawl |