Abstract: We present the results of local, vertically stratified, radiation magnetohydrodynamic shearing box simulations of magnetorotational instability (MRI) turbulence for a (hydrogen-poor) composition applicable to accretion disks in AM CVn type systems. Many of these accreting white dwarf systems are helium analogues of dwarf novae (DNe). We utilize frequency-integrated opacity and equation of state tables appropriate for this regime to accurately portray the relevant thermodynamics. We find bistability of thermal equilibria in the effective temperature, surface mass density plane typically associated with disk instabilities. Along this equilibrium curve (i.e. the S-curve) we find that the stress to thermal pressure ratio $\alpha$ varies, with peak values of $\sim 0.15$ near the tip of the upper branch. Similar to DNe, we find an enhancement of $\alpha$ near the tip of the upper branch caused by convection; this increase in $\alpha$ occurs despite our choice of zero net vertical magnetic flux. Two notable differences we find between DN and AM CVn accretion disk simulations are that AM CVn disks are capable of exhibiting persistent convection in outburst, and that ideal MHD is valid throughout quiescence for AM CVns. In contrast, DNe simulations only show intermittent convection, and non-ideal MHD effects are likely important in quiescence. By combining our previous work with these new results, we also find that convective enhancement of the MRI is anticorrelated with mean molecular weight.
CommonCrawl
Let $\A$ be a completely rational local Möbius covariant net on $S^1$, which describes a set of chiral observables. We show that local Möbius covariant nets $\cB_2$ on 2D Minkowski space which contain $\A$ as chiral left-right symmetry are in one-to-one correspondence with Morita equivalence classes of Q-systems in the unitary modular tensor category $\DHR(\A)$. The Möbius covariant boundary conditions with symmetry $\A$ of such a net $\cB_2$ are given by the Q-systems in the Morita equivalence class or, equivalently, by simple objects in the module category modulo automorphisms of the dual category. We generalize to reducible boundary conditions. To establish this result we define the notion of Morita equivalence for Q-systems (special symmetric $\ast$-Frobenius algebra objects) and non-degenerately braided subfactors. We prove a conjecture by Kong and Runkel, namely that Rehren's construction (generalized Longo-Rehren construction, $\alpha$-induction construction) coincides with the categorical full center. This gives a new view and new results for the study of braided subfactors. Keywords and Phrases: Conformal Nets, Boundary Conditions, Q-system, Full Center, Subfactors, Modular Tensor Categories.
CommonCrawl
Photon-Pair Generation in Chalcogenide Glass: Role of Waveguide Linear Absorption, Proc. INSTICC International Conf. on Photonics, Optics and Laser Technology - PHOTOPTICS, Lisboa, Portugal, Vol. 1, pp. 5-10, January, 2014. We investigate the impact of waveguide loss on the generation rate of quantum correlated photon-pairs through four-wave mixing in a chalcogenide glass fiber. The results obtained are valid even when the photon-pairs are generated in a medium with non-negligible loss, $\alpha L\gg 1$. The impact of the loss is quantified through the analysis of the true, total and accidental counting rates at the waveguide output. We use the coincidence-to-accidental ratio (CAR) as a figure of merit of the photon-pair source. Results indicate that the CAR parameter tends to decrease with increasing waveguide length while $L<1/\alpha$. However, a continued increase of the waveguide length then tends to lead to an increase in the CAR value. In that non-negligible loss regime, $\alpha L\gg 1$, we observe a significant decrease in the value of all coincidence counting rates. Nevertheless, that decrease is even more pronounced for the accidental counting rate. Moreover, for a waveguide length $L= 10/\alpha$ we obtain a CAR of the order of 70, which is higher than the CAR value for the specific case of $\alpha=0$ with $L=2$~cm, i.e. CAR=42. This indicates that waveguide loss can improve the degree of quantum correlation between the photon-pairs.
CommonCrawl
In this paper we study the Markov-modulated M/M/$\infty$ queue, with a focus on the correlation structure of the number of jobs in the system. The main results describe the system's asymptotic behavior under a particular scaling of the model parameters in terms of a functional central limit theorem. More specifically, this result is established by relying on the martingale central limit theorem, and covers the situation in which the arrival rates are sped up by a factor $N$ and the transition rates of the background process by $N^\alpha$, for some $\alpha>0$. The results reveal an interesting dichotomy, with crucially different behavior for $\alpha>1$ and $\alpha<1$, respectively. The limiting Gaussian process, which is of the Ornstein-Uhlenbeck type, is explicitly identified, and it is shown to be in accordance with explicit results on the mean, variances and covariances of the number of jobs in the system.
CommonCrawl
IndianStudyHub provides all Sequence and Series Questions and Solutions: Logical Reasoning questions and answers as a free PDF download, along with detailed explanations, in an easy and understandable way. The answers are explained in an elaborate manner to give clear subject knowledge. a) The series forms an Arithmetic Series, with and . Now, to find the number of terms, we simply need to find the $n$th term of the sequence and equate this to the last term, upon which we may solve for $n$ and find the term number. Now that we have seen some more examples of sequences we can discuss how to look for patterns and figure out, given a list, how to find the sequence in question. Example. When given a list, such as $1, 3, 9, 27, 81, \ldots$ we can try to look for a pattern in a few ways. Here you can get Class 11 Important Questions Maths based on the NCERT textbook for Class XI. Maths Class 11 Important Questions are very helpful to score high marks in board exams. Here we have covered Important Questions on Sequence and Series for Class 11 Maths. C1 Sequences and series – Arithmetic series. C1 Sequences and series: Arithmetic series – Questions 2 1. A farmer has a pay scheme to keep fruit pickers working throughout the 30 day season. Click on the links below for Class 11 Sequence And Series to download solved sample papers, past year question papers with solutions, PDF worksheets, NCERT Books and solutions for Sequence And Series Class 11 based on the syllabus and guidelines issued by CBSE and NCERT.
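As a concrete illustration of the method just described (the numbers here are invented for the example, not taken from the excerpt): for the arithmetic series $5 + 8 + 11 + \dots + 62$ we have first term $a = 5$ and common difference $d = 3$. Setting the $n$th term equal to the last term gives $a + (n-1)d = 62$, i.e. $5 + 3(n-1) = 62$, so $n - 1 = 19$ and $n = 20$; the series therefore has $20$ terms, and its sum is $\tfrac{n}{2}(a + l) = \tfrac{20}{2}(5 + 62) = 670$.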
CommonCrawl
A homogeneous cone $X$ is the cone over a homogeneous variety $G/P$ embedded by means of an ample line bundle $L$. In this article, we describe the irreducible components of the scheme of morphisms of class $\alpha\in A_1(X)$ from a rational curve to $X$. The situation depends on the line bundle $L$: if the projectivised tangent space to the vertex contains lines, then the irreducible components are described by the difference between Cartier and Weil divisors. On the contrary, if there is no line in the projectivised tangent space to the vertex, then there are new irreducible components corresponding to the multiplicity of the curve through the vertex. 2000 Mathematics Subject Classification: 14C05, 14M17. Keywords and Phrases: homogeneous cone, scheme of morphisms, rational curves.
CommonCrawl
The management of the software company JunkCode has recently found, much to their surprise and disappointment, that productivity has gone down since they implemented their enhanced set of coding guidelines. The idea was that all developers should make sure that every code change they push to the master branch of their software repository strictly follows the coding guidelines. After all, one of the developers, Perikles, has been doing this since long before these regulations became effective so how hard could it be? Rather than investing a lot of time figuring out why this degradation in productivity occurred, the line manager suggests that they loosen their requirement: developers can push code that weakly violates the guidelines as long as they run cleanup phases on the code from time to time to make sure the repository is tidy. She suggests a metric where the "dirtiness" of a developer's code is the sum of the pushes that violate the guidelines – so-called dirty pushes – made by that developer, each weighted by the number of days since it was pushed. The number of days since a dirty push is a step function that increases by one each midnight following the push. Hence, if a developer has made dirty pushes on days $1$, $2$, and $5$, the dirtiness on day $6$ is $5+4+1=10$. She suggests that a cleanup phase, completely fixing all violations of the coding guidelines, must be completed before the dirtiness reaches $20$. One of the developers, Petra, senses that this rule must be obeyed not only because it is a company policy. Breaking it will also result in awkward meetings with a lot of concerned managers who all want to know why she cannot be more like Perikles? Still, she wants to run the cleanup phase as seldomly as possible, and always postpones it until it is absolutely necessary. A cleanup phase is always run at the end of the day and fixes every dirty push done up to and including that day. Since all developers are shuffled to new projects at the start of each year, no dirtiness should be left after midnight at the end of new year's eve. The first line of input contains an integer $n$ ($1 \leq n \leq 365$), the number of dirty pushes made by Petra during a year. The second line contains $n$ integers $d_1, d_2, \ldots , d_ n$ ($1 \leq d_ i \leq 365$ for each $1 \le i \le n$) giving the days when Petra made dirty pushes. You can assume that $d_ i < d_ j$ for $i < j$. Output the total number of cleanup phases needed for Petra to keep the dirtiness strictly below $20$ at all times.
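A greedy simulation that matches my reading of the rules above (a sketch, not an official reference solution): postpone every cleanup until skipping it would let the dirtiness reach 20 on the following day, and add one final cleanup if any dirty push is still outstanding at the end of the year.

def cleanup_phases(push_days, limit=20, year_end=365):
    # count cleanup phases under the greedy "postpone as long as possible" policy
    pending = []                 # days of dirty pushes not yet cleaned
    phases = 0
    pushes = set(push_days)
    for day in range(1, year_end + 1):
        if day in pushes:
            pending.append(day)
        # dirtiness tomorrow if no cleanup is run tonight
        if sum(day + 1 - d for d in pending) >= limit:
            phases += 1          # cleanup at the end of this day
            pending = []
    if pending:                  # no dirtiness may remain after New Year's Eve
        phases += 1
    return phases

print(cleanup_phases([1, 2, 5]))   # 1: a cleanup is forced at the end of day 9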
CommonCrawl
Abstract: In this paper, we are concerned with regularized regression problems where the prior regularizer is a proper lower semicontinuous and convex function which is also partly smooth relative to a Riemannian submanifold. This encompasses as special cases several known penalties such as the Lasso ($\ell^1$-norm), the group Lasso ($\ell^1-\ell^2$-norm), the $\ell^\infty$-norm, and the nuclear norm. This also includes so-called analysis-type priors, i.e. composition of the previously mentioned penalties with linear operators, typical examples being the total variation or fused Lasso penalties. We study the sensitivity of any regularized minimizer to perturbations of the observations and provide its precise local parameterization. Our main sensitivity analysis result shows that the predictor moves locally stably along the same active submanifold as the observations undergo small perturbations. This local stability is a consequence of the smoothness of the regularizer when restricted to the active submanifold, which in turn plays a pivotal role in obtaining a closed form expression for the variations of the predictor w.r.t. the observations. We also show that, for a variety of regularizers, including polyhedral ones or the group Lasso and its analysis counterpart, this divergence formula holds Lebesgue almost everywhere. When the perturbation is random (with an appropriate continuous distribution), this allows us to derive an unbiased estimator of the degrees of freedom and of the risk of the estimator prediction. Our results hold true without requiring the design matrix to be full column rank. They generalize those already known in the literature, such as the Lasso problem, the general Lasso problem (analysis $\ell^1$-penalty), or the group Lasso, where existing results for the latter assume that the design is full column rank.
CommonCrawl
For performance, it is crucial that the resulting $n \times d \times m$ tensor is a packed array of Integers. As you can see below, I have already tried several things. But given that nothing actually has to be computed (the mantissas of machine-precision numbers are already stored in binary), the timings are really disappointing. I have also tried to use BitGet, but that was a couple of times slower than RealDigits. So Mod and IntegerPart seem to do quite a good job, but still, there is a multiplication involved for each digit to be extracted. Does anybody know of a fast, low-level way to retrieve the digits? We can squeeze out a bit more speed by using BitAnd instead of Mod and BitShiftLeft instead of multiplication in your digit3 calculation. I sought a solution that masked the bits via combinations of BitAnd and BitShiftRight, but to my surprise they don't work on Real as you might think. Though in its defense, C doesn't support bitwise operations on floats or doubles without casting to an integer type first. gives 1.11875e-154 rather than 0.5, and it behaves how we might expect it to. We can do something similar in WL through BinarySerialize and extracting bits by masking. Note that this implementation depends on $ByteOrdering and $SystemWordLength. One bottleneck is applying Normal, which is needed since bitwise operations don't work on ByteArray.
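For comparison, the same bit-level idea is easy to state outside Mathematica. The sketch below is Python, not WL, and is only meant to illustrate that the 52 fraction bits of an IEEE-754 double can be read off by reinterpreting its 8 bytes as an integer; the explicit little-endian format sidesteps the $ByteOrdering concern mentioned above.

import struct

def double_bits(x):
    # reinterpret the 8 bytes of an IEEE-754 double as a 64-bit unsigned integer
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023        # remove the exponent bias
    fraction = bits & ((1 << 52) - 1)               # the 52 stored mantissa bits
    return sign, exponent, [(fraction >> k) & 1 for k in range(51, -1, -1)]

sign, exp, frac = double_bits(0.5)
print(sign, exp, frac[:8])   # 0 -1 [0, 0, 0, 0, 0, 0, 0, 0]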
CommonCrawl
It solves a generalization of the maximum sum assignment problem by finding the k best assignments and not only the best. However, it only looks at perfect matchings. I am especially interested in bipartite matchings. In particular, for bipartite graphs, Theorem 1 on p. 161 uses the fact that the matchings are considered perfect. How can I solve the k-best assignment problem for general bipartite graphs? After some thinking, I found an answer; if someone has a better one, I'll accept it. From a cost matrix of shape $n\times m$ with $n<m$, it is easy to add nodes that do not change anything by giving all their incident edges the same weight $w$, that is, adding $(m-n)\times m$ edges.
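A small sketch of the padding trick just described (Python with numpy/scipy is used here purely for illustration; scipy's linear_sum_assignment already accepts rectangular matrices, but the padded square matrix is exactly what a Murty-style k-best routine that expects perfect matchings would consume):

import numpy as np
from scipy.optimize import linear_sum_assignment

def pad_square(cost, w=0.0):
    # add m - n dummy rows whose incident edges all carry the same weight w,
    # so every matching of the original graph extends to a perfect matching
    # of the padded graph at a fixed extra cost of (m - n) * w
    n, m = cost.shape
    assert n <= m
    return np.vstack([cost, np.full((m - n, m), w)])

cost = np.array([[4., 1., 3.],
                 [2., 0., 5.]])                 # n = 2, m = 3
rows, cols = linear_sum_assignment(pad_square(cost))
real = [(r, c) for r, c in zip(rows, cols) if r < cost.shape[0]]
print(real)   # the optimal assignment restricted to the original (non-dummy) rows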
CommonCrawl
Matrix $A$ is an $n\times n$ matrix, and $I$ is the $n\times n$ identity matrix. Suppose that $A^2=0$; prove that $I-A$ and $I+A$ are both invertible. Any clues for solving this problem, please? The annihilating polynomial is $a(t)=t^2$, for which there are two choices of minimal polynomial, viz. $m(t)=t$ or $t^2$. In both cases, the only eigenvalue of $A$ is $0$. So the only eigenvalue of $I\pm A$ is $1$, which implies that both are invertible.
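A direct verification, not spelled out in the answer above, settles the question without any eigenvalue argument: since $A^2 = 0$,
$$(I-A)(I+A) = I + A - A - A^2 = I \qquad\text{and}\qquad (I+A)(I-A) = I - A + A - A^2 = I,$$
so $I-A$ and $I+A$ are inverses of each other, and in particular both are invertible.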
CommonCrawl
Abstract. We prove an upper bound for the error in the exponential approximation of the hitting time law of a rare event in $\alpha$-mixing processes with exponential decay, in $\phi$-mixing processes with a summable function $\phi$, and for general $\psi$-mixing processes with a finite alphabet. In the first case the bound is uniform as a function of the measure of the event. In the last two cases the bound depends also on the time scale $t$. This allows us to obtain further statistical properties, such as the ratio convergence of the expected hitting time and the expected return time. A uniform bound is a consequence. We present an example that shows that this bound is sharp. We also prove that second moments are not necessary for having the exponential law. Moreover, we prove a necessary condition for having the exponential limit law.
CommonCrawl
In this paper, we show that SVRG and SARAH can be modified to be fundamentally faster than all of the other standard algorithms that minimize the sum of $n$ smooth functions, such as SAGA, SAG, SDCA, and SDCA without duality. Most finite sum algorithms follow what we call the ``span assumption'': Their updates are in the span of a sequence of component gradients chosen in a random IID fashion. In the big data regime, where the condition number $\kappa=O(n)$, the span assumption prevents algorithms from converging to an approximate solution of accuracy $\epsilon$ in less than $n\ln(1/\epsilon)$ iterations. SVRG and SARAH do not follow the span assumption since they are updated with a hybrid of full-gradient and component-gradient information. We show that because of this, they can be up to $\Omega(1+(\ln(n/\kappa))_+)$ times faster. In particular, to obtain an accuracy $\epsilon = 1/n^\alpha$ for $\kappa=n^\beta$ and $\alpha,\beta\in(0,1)$, modified SVRG requires $O(n)$ iterations, whereas algorithms that follow the span assumption require $O(n\ln(n))$ iterations. Moreover, we present lower bound results that show this speedup is optimal, and provide analysis to help explain why this speedup exists. With the understanding that the span assumption is a point of weakness of finite sum algorithms, future work may purposefully exploit this to yield faster algorithms in the big data regime.
CommonCrawl
Concentrically braced frames built prior to the codification of capacity-based and other ductile design provisions constitute a substantial proportion of steel building infrastructure on the West Coast of the US. These buildings, built prior to about 1990, utilize a wide variety of connection and system configurations with deficiencies expected to lead to significant damage.
Does chocolate really grow on trees?! Did you know vanilla comes from an orchid?! Come discover more about two of our favorite foods. Sponsor: Renaissance and Early Modern Studies, D.E.
CANCELLED Plant and Microbial Biology Plant Seminar: "RNA structure encodes specificity in intracellular phase transitions" The Gladfelter lab is interested in how cells are organized in time and space. We study how cytoplasm is spatially patterned and how cells sense their own shape. We also investigate how timing in the cell division cycle can be highly variable yet still accurate.
It's time to reimagine the way we eat from the ground up.
I will explain the following result, which was proved in a paper by Marques-Neves-speaker: on a closed manifold of dimension $3 \le d \le 7$ with a $C^\infty$-generic Riemannian metric, the union of closed, embedded minimal hypersurfaces is dense.
In this workshop, undergraduates will receive detailed guidance on how to construct a research proposal in the STEM disciplines for the SURF Fellowship.
terms of multi-scale Markov processes with fully dependent slow and fast fluctuating variables.
Harnessing the power of social norms for improving global health. A case study from West Africa.
How Did US-Russian Relations Get So Bad and How Might They Be Improved?
From estimating the time to failure of battery modules for Reliability Engineering to predicting lane lines from images for Autopilot, statistics plays a vital role in building all of Tesla's products. In this talk, we present the ways in which Tesla is changing the future of sustainable energy and discuss how statisticians will help us get there.
Bowen Lectures: Lecture 1: Mathematics and Computation (through the lens of one problem and one algorithm). The problem, the algorithm and the connections. - Singularity of symbolic matrices: a basic problem in both computational complexity. - Alternating Minimization: a basic heuristic in non-convex optimization.
This event brings together UC Berkeley faculty and students in a guided dialogue unpacking the tensions, frustrations, opportunities and possibilities of contentious discourse in the classroom.
Were you ever interested in working for the gaming industry? Come out to the Wozniak Lounge on February 7th at 6 PM to speak to EA employees and the type of projects that you can work on! Free food will be provided!
In collaboration with the Jacobs Institute for Design Innovation and Publications and Media Center, we are seeking alumni and professionals in a multitude of fields involving arts, design, and/or technology! The event will require minimal prep work from attendees; simply arrive and share your insider knowledge of the marketing and media industry through introductions and casual conversations.
History Homecoming is a gathering of alumni and friends of the Department of History for fellowship, food, refreshments, and a special history panel. The topic of this year's faculty panel is "Quakes, Storms, and Wrecks: Disaster as a Window to the Past." Attendance restrictions: Limited seating is available, so please plan to arrive early. WED, FEB 7, 6:30pm.
Recent fellowship recipients will present their research from their international travels. Followed by a reception in the Wurster Gallery, alongside the 2017 Branner & Stump Fellows Exhibition. Open to the CED community!
Dolores Huerta is among the most important, yet least known, activists in American history. With intimate and unprecedented access to this intensely private mother to eleven, the film reveals the raw, personal stakes involved in committing one's life to social change. 95 minutes. English and Spanish with English subtitles.
Credited with carrying on the legacy of artists like Ella Fitzgerald and Billie Holiday, Cécile McLorin Salvant's repertoire includes jazz standards, folk songs, and blues tunes, plus her own compositions. Cecile McLorin Salvant — "Wives and Lovers"
CommonCrawl
Citation: Quantum 2, 50 (2018). For the past twenty years, Matrix Product States (MPS) have been widely used in solid state physics to approximate the ground state of one-dimensional spin chains. In this paper, we study homogeneous MPS (hMPS), or MPS constructed via site-independent tensors and a boundary condition. Exploiting a connection with the theory of matrix algebras, we derive two structural properties shared by all hMPS, namely: a) there exist local operators which annihilate all hMPS of a given bond dimension; and b) there exist local operators which, when applied over any hMPS of a given bond dimension, decouple (cut) the particles where they act from the spin chain while at the same time join (glue) the two loose ends back again into a hMPS. Armed with these tools, we show how to systematically derive `bond dimension witnesses', or 2-local operators whose expectation value allows us to lower bound the bond dimension of the underlying hMPS. We extend some of these results to the ansatz of Projected Entangled Pairs States (PEPS). As a bonus, we use our insight on the structure of hMPS to: a) derive some theoretical limitations on the use of hMPS and hPEPS for ground state energy computations; b) show how to decrease the complexity and boost the speed of convergence of the semidefinite programming hierarchies described in [Phys. Rev. Lett. 115, 020501 (2015)] for the characterization of finite-dimensional quantum correlations. Dorit Aharonov, Daniel Gottesman, Sandy Irani, and Julia Kempe. The power of quantum systems on a line. Communications in Mathematical Physics, 287 (1): 41-65, jan 2009. 10.1007/​s00220-008-0710-3. URL https:/​/​doi.org/​10.1007. P. W. Anderson. Limits on the energy of the antiferromagnetic ground state. Physical Review, 83 (6): 1260-1260, sep 1951. 10.1103/​physrev.83.1260. URL https:/​/​doi.org/​10.1103. MOSEK ApS. The MOSEK optimization toolbox for MATLAB manual. Version 7.1 (Revision 28)., 2015. URL http:/​/​docs.mosek.com/​7.1/​toolbox/​index.html. A. C. Doherty, Pablo A. Parrilo, and Federico M. Spedalieri. Distinguishing separable and entangled states. Physical Review Letters, 88 (18), apr 2002. 10.1103/​physrevlett.88.187904. URL https:/​/​doi.org/​10.1103. Glen Evenbly and Guifre Vidal. Quantum criticality with the multi-scale entanglement renormalization ansatz. In Springer Series in Solid-State Sciences, pages 99-130. Springer Berlin Heidelberg, 2013. 10.1007/​978-3-642-35106-8_4. URL https:/​/​doi.org/​10.1007. M. Fannes, B. Nachtergaele, and R. F. Werner. Finitely correlated states on quantum spin chains. Communications in Mathematical Physics, 144 (3): 443-490, mar 1992. 10.1007/​bf02099178. URL https:/​/​doi.org/​10.1007. Edward Formanek. The Polynomial Identities and Variants of $n \times n$ Matrices. American Mathematical Society, jan 1991. 10.1090/​cbms/​078. URL https:/​/​doi.org/​10.1090. D. Gross, J. Eisert, N. Schuch, and D. Perez-Garcia. Measurement-based quantum computation beyond the one-way model. Physical Review A, 76 (5), nov 2007. 10.1103/​physreva.76.052315. URL https:/​/​doi.org/​10.1103. Leonid Gurvits. Classical complexity and quantum entanglement. Journal of Computer and System Sciences, 69 (3): 448-484, nov 2004. 10.1016/​j.jcss.2004.06.003. URL https:/​/​doi.org/​10.1016. M. Hein, J. Eisert, and H. J. Briegel. Multiparty entanglement in graph states. Physical Review A, 69 (6), jun 2004. 10.1103/​physreva.69.062311. URL https:/​/​doi.org/​10.1103. Michael Karbach, Kun Hu, and Gerhard Muüller. 
Introduction to the bethe ansatz II. Computers in Physics, 12 (6): 565, 1998. 10.1063/​1.168740. URL https:/​/​doi.org/​10.1063. Robert König and Renato Renner. A de finetti representation for finite symmetric quantum states. Journal of Mathematical Physics, 46 (12): 122108, dec 2005. 10.1063/​1.2146188. URL https:/​/​doi.org/​10.1063. Michael Levin and Cody P. Nave. Tensor renormalization group approach to two-dimensional classical lattice models. Physical Review Letters, 99 (12), sep 2007. 10.1103/​physrevlett.99.120601. URL https:/​/​doi.org/​10.1103. Chanchal K. Majumdar and Dipan K. Ghosh. On next-nearest-neighbor interaction in linear chain. i. Journal of Mathematical Physics, 10 (8): 1388-1398, aug 1969. 10.1063/​1.1664978. URL https:/​/​doi.org/​10.1063. Miguel Navascués and Tamás Vértesi. Bounding the set of finite dimensional quantum correlations. Physical Review Letters, 115 (2), jul 2015. 10.1103/​physrevlett.115.020501. URL https:/​/​doi.org/​10.1103. Miguel Navascués, Adrien Feix, Mateus Araújo, and Tamás Vértesi. Characterizing finite-dimensional quantum behavior. Physical Review A, 92 (4), oct 2015. 10.1103/​physreva.92.042117. URL https:/​/​doi.org/​10.1103. Roberto Oliveira and Barbara M. Terhal. The complexity of quantum spin systems on a two-dimensional square lattice. Quant. Inf, Comp., 8, 2008. Román Orús. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Annals of Physics, 349: 117-158, oct 2014. 10.1016/​j.aop.2014.06.013. URL https:/​/​doi.org/​10.1016. Asher Peres. Separability criterion for density matrices. Physical Review Letters, 77 (8): 1413-1415, aug 1996. 10.1103/​physrevlett.77.1413. URL https:/​/​doi.org/​10.1103. D. Perez-García, F. Verstraete, M. M. Wolf, and J.I. Cirac. Matrix product state representations. Quantum Inf. Comput., 7: 401, sep 2007. Ho N. Phien, Johann A. Bengua, Hoang D. Tuan, Philippe Corboz, and Román Orús. Infinite projected entangled pair states algorithm improved: Fast full update and gauge fixing. Physical Review B, 92 (3), jul 2015. 10.1103/​physrevb.92.035142. URL https:/​/​doi.org/​10.1103. David Poulin and Matthew B. Hastings. Markov entropy decomposition: A variational dual for quantum belief propagation. Physical Review Letters, 106 (8), feb 2011. 10.1103/​physrevlett.106.080403. URL https:/​/​doi.org/​10.1103. Norbert Schuch, Ignacio Cirac, and David Pérez-García. PEPS as ground states: Degeneracy and topology. Annals of Physics, 325 (10): 2153-2192, oct 2010. 10.1016/​j.aop.2010.05.008. URL https:/​/​doi.org/​10.1016. Neil J. A. Sloane. The on-line encyclopedia of integer sequences. In Towards Mechanized Mathematical Assistants, pages 130-130. Springer Berlin Heidelberg. 10.1007/​978-3-540-73086-6_12. URL https:/​/​doi.org/​10.1007. Stellan Östlund and Stefan Rommer. Thermodynamic limit of density matrix renormalization. Physical Review Letters, 75 (19): 3537-3540, nov 1995. 10.1103/​physrevlett.75.3537. URL https:/​/​doi.org/​10.1103. Barbara M. Terhal. Bell inequalities and the separability criterion. Physics Letters A, 271 (5-6): 319-326, jul 2000. 10.1016/​s0375-9601(00)00401-1. URL https:/​/​doi.org/​10.1016. Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38 (1): 49-95, mar 1996. 10.1137/​1038003. URL https:/​/​doi.org/​10.1137. F. Verstraete, J. J. García-Ripoll, and J. I. Cirac. Matrix product density operators: Simulation of finite-temperature and dissipative systems. Physical Review Letters, 93 (20), nov 2004. 
10.1103/physrevlett.93.207204. URL https://doi.org/10.1103. F. Verstraete, V. Murg, and J.I. Cirac. Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems. Advances in Physics, 57 (2): 143-224, mar 2008. 10.1080/14789940801912366. URL https://doi.org/10.1080. R.F. Werner. Finitely correlated states. In Encyclopedia of Mathematical Physics, pages 334-340. Elsevier, 2006. 10.1016/b0-12-512666-2/00379-5. URL https://doi.org/10.1016. Eric Ziegel, William Press, Brian Flannery, Saul Teukolsky, and William Vetterling. Numerical recipes: The art of scientific computing. Technometrics, 29 (4): 501, nov 1987. 10.2307/1269484. URL https://doi.org/10.2307.
CommonCrawl
The relativistic Fokker-Planck equation, in which the speed of light $c$ appears as a parameter, is considered. It is shown that in the limit $c\to\infty$ its solutions converge in $L^1$ to solutions of the non-relativistic Fokker-Planck equation, uniformly in compact intervals of time. Moreover in the case of spatially homogeneous solutions, and provided the temperature of the thermal bath is sufficiently small, exponential trend to equilibrium in $L^1$ is established. The dependence of the rate of convergence on the speed of light is estimated. Finally, it is proved that exponential convergence to equilibrium for all temperatures holds in a weighted $L^2$ norm.
CommonCrawl
where $f(x)$ is the probability density function and $x_m$ is the median (see Laplace and Kenney and Keeping). Here are some examples. $B$ is the beta function, $c(x)$ is the cumulative distribution function of the sample distribution, and $f(x)$ is the probability density function of the sample distribution. which is close to the approximation given in the table. If you want the standard deviation of the sample median for a particular distribution and sample size $n$, then you can use numerical integration to get the answer. If you like, I could compute it for you. Just leave a comment indicating the distribution and $n$.
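For completeness, here is one way the numerical integration offered above could be carried out. The sketch assumes the usual order-statistic density of the sample median for an odd sample size $n = 2m+1$, namely $g(x) = f(x)\,c(x)^m\,(1-c(x))^m / B(m+1,\,m+1)$, which is what the beta-function notation above refers to; the choice of a standard normal and $n=11$ below is arbitrary.

import numpy as np
from scipy import stats, integrate, special

def median_sd(dist, n):
    # standard deviation of the sample median (odd n) by numerical integration
    assert n % 2 == 1
    m = (n - 1) // 2
    norm_const = special.beta(m + 1, m + 1)
    def g(x):                                  # density of the sample median
        c = dist.cdf(x)
        return dist.pdf(x) * (c * (1 - c)) ** m / norm_const
    mean, _ = integrate.quad(lambda x: x * g(x), -np.inf, np.inf)
    second, _ = integrate.quad(lambda x: x * x * g(x), -np.inf, np.inf)
    return np.sqrt(second - mean ** 2)

print(median_sd(stats.norm(), 11))                      # exact SD for n = 11
print(1 / (2 * np.sqrt(11) * stats.norm().pdf(0)))      # large-n approximation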
CommonCrawl
Polynomials are expressions like $15x^3 - 14x^2 + 8$. Questions tagged with this concern common operations on polynomials, like adding, multiplying, polynomial long division, factoring and solving for roots. Looking only at the excerpt I considered editing it, since it suggests that polynomials only exist in $\mathbb R[x]$. But then I saw the tag info page (the extra line), which seems to suggest that polynomials and algebra-precalculus are closely related. In that case the tag excerpt wouldn't need to be altered, since for many people with questions at a precalculus level (and sometimes even at a slightly higher level) polynomials really do only exist in $\mathbb R[x]$. Looking at the kinds of questions with this tag I found that they vary quite a bit in level and subject. Furthermore, a lot of the time the tag is accompanied by other (IMHO) far more useful tags. For instance: partial-fractions, irreducible-polynomials, roots, factoring, splitting-field and, as suggested, often with algebra-precalculus. So that led to my question: What should the polynomials tag be used for? If it is indeed intended to be used under such a wide variety of questions, then I believe the tag wiki should be changed to say so. If it is not intended to be used under all these questions then I believe the tag wiki should also say so. Personally I don't care too much for the polynomials tag, because I feel that there are much more useful tags available. In any case I believe that the current suggestion that polynomials and algebra-precalculus often go together is misleading, especially since this is not the case for a great number of questions carrying this tag. The metric I like to use when considering tag quality is searchability: does it seem feasible that someone might want to use the tag as a filter when looking for a certain type of question? In this case, I see it being used most often as a secondary filter. For example, I could imagine someone using the tag while seeking precalculus questions involving polynomials, or probability questions involving polynomials, or recreational mathematics questions involving polynomials, etc. It is not as easy to imagine someone looking through questions about polynomials in general, since, as you point out, the scope would be quite large. Questions about properties of polynomials in general, like this one. Particular families of polynomials: Lagrange, Chebyshev, etc. Perhaps this list could be extended and added to the tag wiki.
CommonCrawl
30 What is the largest set for which its set of self bijections is countable?
17 What does "lightly crushed" mean for cardamon pods?
12 How many closed subsets of $\mathbb R$ are there up to homeomorphism?
11 How statistics/probability involves with advanced mathematics like manifold theory?
CommonCrawl
A 3-D (cube-shaped) Lagrangian sensor, inkjet printed on a paper substrate, is presented for the first time. The sensor comprises a transmitter chip with a microcontroller completely embedded in the cube, along with a $1.5\lambda_0$ dipole that is uniquely implemented on all the faces of the cube to achieve a near isotropic radiation pattern. The sensor has been designed to operate both in the air as well as in water (half immersed) for real-time flood monitoring. The sensor weighs 1.8 g and measures 13 mm$\,\times\,$13 mm$\,\times\,$13 mm, and each side of the cube corresponds to only $0.1\lambda_0$ (at 2.4 GHz). The printed circuit board is also inkjet-printed on a paper substrate to make the sensor lightweight and buoyant. Issues related to the bending of inkjet-printed tracks and integration of the transmitter chip in the cube are discussed. The Lagrangian sensor is designed to operate in a wireless sensor network, and field tests have confirmed that it can communicate up to a distance of 100 m while in the air and up to 50 m while half immersed in water.
CommonCrawl
In many augmented reality applications, in particular in the medical and industrial domains, knowledge about tracking errors is important. Most current approaches characterize tracking errors by $6\times 6$ covariance matrices that describe the uncertainty of a 6DOF pose, where the center of rotational error lies in the origin of a target coordinate system. This origin is assumed to coincide with the geometric centroid of a tracking target. In this paper, we show that, in case of a multi-camera fiducial tracking system, the geometric centroid of a body does not necessarily coincide with the point of minimum error. The latter is not fixed to a particular location, but moves, depending on the individual observations. We describe how to compute this point of minimum error given a covariance matrix and verify the validity of the approach using Monte Carlo simulations on a number of scenarios. Looking at the movement of the point of minimum error, we find that it can be located surprisingly far away from its expected position. This is further validated by an experiment using a real camera system.
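The central claim above (that the point of minimum error need not coincide with the geometric centroid and moves with the covariance) is easy to reproduce with a small Monte Carlo experiment of one's own. The sketch below is an illustration under assumptions I am adding here, not the authors' algorithm: pose errors are drawn from a made-up $6\times 6$ covariance whose first three components are a small rotation vector and last three a translation, and points are displaced using the small-angle model $\delta p = \omega \times p + t$.

import numpy as np

rng = np.random.default_rng(0)
# made-up pose covariance with a rotation/translation cross term
Sigma = np.diag([0.02, 0.005, 0.005, 0.5, 0.5, 0.5]) ** 2
Sigma[0, 4] = Sigma[4, 0] = 0.8 * 0.02 * 0.5        # correlate omega_x with t_y
poses = rng.multivariate_normal(np.zeros(6), Sigma, size=20000)

def rms_displacement(p):
    d = np.cross(poses[:, :3], p) + poses[:, 3:]     # small-angle displacement of p
    return np.sqrt((d ** 2).sum(axis=1).mean())

zs = np.linspace(-50.0, 50.0, 101)                   # candidate points along one axis
errs = [rms_displacement(np.array([0.0, 0.0, z])) for z in zs]
print("point of minimum error at z =", zs[int(np.argmin(errs))])   # offset from the origin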
CommonCrawl
More accurate demand forecasts are obviously good as far as inventory optimization is concerned. However, the quantitative assessment of the financial gains generated by an increase in forecasting accuracy typically remains a fuzzy area for many retailers and manufacturers. This article details how to compute the benefits generated by an improved forecast. The viewpoint adopted in this article is a best fit for high turnover inventories, with turnovers above 15. For high turnover values, the dominant effect is not so much stockouts, but rather the sheer amount of inventory, and its reduction through better forecasts. If such is not your case, you can check out our alternative formula for low turnover. $D$ the turnover (total annual sales). $m$ the gross margin. $\alpha$ the cost of stockout to gross margin ratio. $p$ the service level achieved with the current error level (and current stock level). $\sigma$ the forecast error of the system in place, expressed in MAPE (mean absolute percentage error). $\sigma_n$ the forecast error of the new system being benchmarked (hopefully lower than $\sigma$). It is possible to replace the MAPE error measurements by MAE (mean absolute error) measures within the formula. This replacement is actually strongly advised if slow movers exist in your inventory. Let's consider a large retail network that can obtain a 10% reduction of the (relative) forecast error through a new forecasting system. Based on the formula above, we obtain a gain of $B=1,800,000€$ per year. If we assume that the overall profitability of the retailer is 5%, then we see that a 10% improvement in forecasting accuracy already contributes 4% of the overall profitability. At a fundamental level, inventory optimization is a tradeoff between excess inventory costs vs. excess stockout costs. Let's assume, for now, that, for a given stock level, the stockout frequency is proportional to the forecasting error. This point will be demonstrated in the next section. The total volume of sales lost through stockouts is simple to estimate: it's $D(1-p)$, at least for any reasonably high value of $p$. In practice, this estimation is very good if $p$ is greater than 90%. Hence, the total volume of margin lost through stockouts is $D(1-p)m$. Then, in order to model the real cost of the stockout, which is not limited to the loss of margin (think loss of customer loyalty for example), we introduce the coefficient $\alpha$. So the total economic loss caused by stockouts becomes $D(1-p)m\alpha$. Based on the assumption (demonstrated below) that stockouts are proportional to the error, we need to apply the factor $(\sigma - \sigma_n) / \sigma$ as the evolution of the stockout cost caused by the new average forecast error. Let's now demonstrate the statement that, for a given inventory level, stockouts are proportional to the forecasting error. In order to do that, let's start with service levels at 50% ($p=0.5$). In this context, the safety stock formula indicates that safety stocks are at zero. Several variants exist for the safety stock formula, but they all behave similarly in this respect. With zero safety stocks, it becomes easier to evaluate the loss caused by forecast errors. When the demand is greater than the forecast (which happens here 50% of the time by definition of $p=0.5$), then the average percentage of sales lost is $\sigma$. Again, this is only a consequence of $\sigma$ being the mean absolute percentage error. However, with the new forecasting system, the loss is $\sigma_n$ instead.
Thus, we see that with $p=0.5$, stockouts are indeed proportional to the error. The reduction of the stockouts when replacing the old forecast with the new one will be $\sigma_n / \sigma$. Now, what about $p \not= 0.5$? By choosing a service level distinct from 50%, we are transforming the mean forecasting problem into a quantile forecasting problem. Thus, the appropriate error metric for quantile forecasts becomes the pinball loss function, instead of the MAPE. However, since we can assume here that the two mean forecasts (the old one, and the new one) will be extrapolated as quantiles (to compute the reorder point) through the same formula, the ratio of the respective errors will remain the same. In particular, if the safety stock is small (say less than 20%) compared to the primary stock, then this approximation is excellent in practice. The factor $\alpha$ has been introduced to reflect the real impact of a stockout on the business. At a minimum, we have $\alpha = 1$ because the loss caused by an extra stockout is at least equal to the volume of gross margin being lost. Indeed, when considering the marginal cost of a stockout, all infrastructure and manpower costs are fixed, hence the gross margin should be considered. Other costs include: a loss of client loyalty; a loss of supplier trust; more erratic stock movements, stressing supply chain capacities (storage, transport, ...); and overhead efforts for downstream teams who try to mitigate stockouts one way or another. Among several large food retail networks, we have observed that, as a rule of thumb, practitioners assume $\alpha=3$. This high cost of stockouts is also the reason why, in the first place, the same retail networks typically seek high service levels, above 95%. In this section, we debunk one recurrent misconception about the impact of extra accuracy, which can be expressed as "extra accuracy only reduces safety stocks". Looking at the safety stock formula, one might be tempted to think that the impact of a reduced forecasting error will be limited to lowering the safety stock, all other variables remaining unchanged (stockouts in particular). This is a major misunderstanding. The total stock can be decomposed into: the primary stock, equal to the lead demand, that is to say the average forecast demand multiplied by the lead time; and the safety stock, equal to the demand error multiplied by a safety coefficient that depends mostly on $p$, the service level. Let's go back to the situation where the service level equals 50%. In this situation, safety stocks are at zero (as seen before). If the forecast error was only impacting the safety stock component, then it would imply that the primary stock was immune to poor forecasts. However, since there is no inventory here beyond the primary stock, we end up with the absurd conclusion that the whole inventory has become immune to arbitrarily bad forecasts. Obviously, this does not make sense. Hence, the initial assumption that only safety stocks are impacted is wrong. Despite being incorrect, the "safety stock only" assumption is tempting because, when looking at the safety stock formula, it looks like an immediate consequence. However, one should not jump to conclusions too hastily: this is not the only consequence. The primary stock is built on top of the demand forecast as well, and it is the first one to be impacted by a poor forecast. In this section, we delve into further details that have been omitted in the discussion above for the sake of clarity and simplicity.
The formula above indicates that reducing the forecast error to 0% should bring stockouts to zero as well. On the one hand, if customer demand could be anticipated with 100% accuracy one year in advance, achieving near-perfect inventory levels would hardly be surprising. On the other hand, some factors, such as the varying lead time, complicate the task. Even if the demand is perfectly known, a varying timing of delivery might generate further uncertainties. In practice, we observe that the uncertainty related to the lead time is typically small compared to the uncertainty related to the demand. Hence, neglecting the impact of varying lead time is reasonable as long as forecasts remain somewhat inaccurate (say, for MAPEs higher than 10%). Delivering superior forecasts is the number one priority for Lokad. For companies with advanced forecasting systems in place, benchmarks performed by our clients indicate that we typically reduce the relative forecasting error by 10% or more. For companies with few practices in place, the gain can go up to 30%. However, don't take our word for granted: benchmark your inventory practices for free against our forecasting engine using our 30-day free trial.
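Putting the pieces of the article together, the yearly benefit works out to $B = D\,(1-p)\,m\,\alpha\,(\sigma - \sigma_n)/\sigma$. A minimal sketch of that formula (variable names follow the article; the sample figures below are invented for illustration and are not the retailer example quoted earlier):

def forecast_benefit(D, m, p, alpha, sigma, sigma_n):
    # yearly gain from lowering the relative forecast error from sigma to sigma_n
    return D * (1 - p) * m * alpha * (sigma - sigma_n) / sigma

# a 10% relative error reduction (sigma_n = 0.9 * sigma) with made-up inputs
print(forecast_benefit(D=1e9, m=0.2, p=0.97, alpha=3, sigma=0.20, sigma_n=0.18))  # ~1,800,000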
CommonCrawl
In particular, this chase takes place on an $a\times b$ board with $3\leq a\leq b$. At the start, there are $2a$ police officers distributed randomly across the board and the fugitive is allowed to choose their position from any available one (after seeing the arrangement of the police). Can the police ensure the capture of the fugitive? How? The police can capture her. The police form two lines of length $a$. The fugitive can't cross a line since she can't move diagonally. If one line is stationary, and the other line advances (one officer at a time), eventually the fugitive will be trapped between the two lines.
CommonCrawl
Abstract: We prove uniform Hausdorff and packing dimension results for the inverse images of a large class of real-valued symmetric Lévy processes. Our main result for the Hausdorff dimension extends that of Kaufman (1985) for Brownian motion and that of Song, Xiao, and Yang (2018) for $\alpha$-stable Lévy processes with $1<\alpha<2$. Along the way, we also prove an upper bound for the uniform modulus of continuity of the local times of these processes.
CommonCrawl
This NetLogo model is a toy tool to launch experiments for (small) Complex Networks. It provides some basic commands to generate and analyze small networks by using the most common and well-known graph models (random graphs, scale free networks, small world, etc). Also, it provides some methods to test dynamics on networks (spread processes, page rank, cellular automata, ...). All the functionalities have been designed to be used as extended NetLogo commands. In this way, it is possible to create small scripts to automate the generating and analyzing process in an easier way. Of course, they can be used in more complex and longer NetLogo procedures, but the main aim in their design is to be used by users with no previous experience of this language (although, if you know how to program in NetLogo, you can probably obtain stronger results in an easier way). You can find the last version of the tool in the associated Github project. In the next sections you will learn some details about how to use it, but remember that the best way is to take a look at the sample scripts and play around with the tool. Although the real power of the system is obtained via scripting, and that is its main goal, the interface has been designed to allow some interaction and to facilitate handling the creation and analysis of the networks. Indeed, before launching a batch of experiments it is a good rule to test some behaviours by using the interface and trying to obtain a partial view and understanding of the networks to be analyzed... in this way, you will spend less time on experiments that will not work as you expect. Network Representation: In the left side. It has a panel where the network is represented and it allows only one interaction: to inspect node information when the button Inspect Node is pressed. Under this panel are located some widgets to manage the visualization properties: select layouts and fix parameters for them. Script Panel: In the right side. It has two input widgets: the first one, OneCommand, to write one command to be executed (for example, the name of a script); and a multiline script widget, where you can test simple scripts to be run. Some functionalities are not available in this last widget, and we advise you to use the script file to write the more complex scripts you may need (also, the file editor is more comfortable than the one in the widget). At the top of this panel you can also find some buttons to load, save and clear the current network. Use scripts.nls to write your customized scripts. In order to access this file, you must go to the Code Tab and then choose scripts.nls from the Included Files chooser. You can add as many additional files as you want if you need some order in your experiments and analysis (load them with the __includes command from the main file). After defining your scripts, you can run them directly from the Command Center, from the OneCommand input in the interface, or as auxiliary procedures in other scripts. In this document you can find specific network commands that can be used to write scripts for creating and analyzing networks. In fact, these scripts can be written using any NetLogo command, but in this library you can find some shortcuts to make the process easier. let v val : Creates a new variable v and sets its value to val. set v val : Changes the value of the variable v to val. [ ] : Empty list, to store values in a repetition. range x0 xf incx : Returns an ordered list of numbers from x0 to xf with incx increments.
foreach [x1...xn] [ [x] -> P1...Pk ] : For each x in [x1 ... xn] it executes the commands P1 to Pk. repeat N [P1...Pk] : Repeats the block of commands P1 to Pk, N times. store val L : Stores value val in list L. sum / max / min / mean L : Returns the sum/max/min/mean of the values of list L. print v : Prints the value of v in the Output. plotTable [x1...xn] [y1...yn] : Plots the points (x1,y1)...(xn,yn). And the next one will perform an experiment moving a parameter from \(0\) to \(0.01\) with increments of \(0.001\); for every value of this parameter it will prepare \(10\) networks and compute their diameters (these networks are obtained by adding random edges to a preferential attachment network). In fact this is the Gilbert variant of the model introduced by Erdős and Rényi. Each edge has a fixed probability of being present (\(p\)) or absent (\(1-p\)), independently of the other edges. The Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their joint 1998 Nature paper. Construct a regular ring lattice, a graph with \(N\) nodes each connected to \(K\) neighbors, \(K/2\) on each side. Take every edge and rewire it with probability \(p\). Rewiring is done by replacing \((u,v)\) with \((u,w)\) where \(w\) is chosen with uniform probability from all possible values that avoid self-loops (not \(u\)) and link duplication (there is no edge \((u,w)\) at this point in the algorithm). The Barabási–Albert (BA) model is an algorithm for generating random scale-free networks using a preferential attachment mechanism. Scale-free networks are widely observed in natural and human-made systems, including the Internet, the world wide web, citation networks, and some social networks. The algorithm is named for its inventors Albert-László Barabási and Réka Albert. The network begins with an initial connected network of \(m_0\) nodes. New nodes are added one at a time, and each new node is connected to existing nodes with probability proportional to their degree, \(p_i = k_i / \sum_j k_j\), where \(k_i\) is the degree of node \(i\) and the sum is made over all pre-existing nodes \(j\) (i.e. the denominator results in twice the current number of edges in the network). Heavily linked nodes ("hubs") tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for new links. New nodes have a "preference" to attach themselves to already heavily linked nodes. The algorithm of Klemm and Eguíluz manages to combine all three properties of many "real world" irregular networks – high clustering coefficient, short average path length (comparable with that of the Watts and Strogatz small-world network), and scale-free degree distribution. Indeed, average path length and clustering coefficient can be tuned through a "randomization" parameter, $\mu$, in a similar manner to the parameter $p$ in the Watts and Strogatz model. It begins with the creation of a fully connected network of size \(m_0\). The remaining \(N-m_0\) nodes in the network are introduced sequentially along with edges to/from \(m_0\) existing nodes. The algorithm is very similar to the Barabási and Albert algorithm, but a list of \(m_0\) "active nodes" is maintained. This list is biased toward containing nodes with higher degrees. The parameter \(\mu\) is the probability of new edges to be connected to non-active nodes.
When new nodes are added to the network, each new edge is connected from the new node to either a node in the list of active nodes or, with probability \(\mu\), to a randomly selected "non-active" node. The new node is added to the list of active nodes, and one node is then randomly chosen, with probability proportional to its degree, for removal from the list, i.e., deactivation. This choice is biased toward nodes with a lower degree, so that nodes with the highest degree are less likely to be chosen for removal. It is a simple algorithm to be used in metric spaces. It generates \(N\) nodes that are randomly located in 2D, and after that every two nodes \(u,v\) are linked if \(d(u,v) < r\) (a prefixed radius). This algorithm is similar to the geometric one, but we can prefix the desired mean degree of the network, \(g\). It starts by creating \(N\) randomly located nodes, and then creates the number of links needed to reach the desired mean degree. This link creation chooses nodes at random, but from each chosen node the shortest available links are created. A grid of \(N\times M\) nodes is created. It can be chosen to connect the edges of the grid as a torus (to obtain a regular grid). M - Number of horizontal nodes, N - Number of vertical nodes, t? - torus? Creates a Bipartite Graph with \(N\) nodes (randomly assigned to two families) and \(M\) random links between nodes of different families. Node creation and deletion: In each iteration, nodes may be independently created and deleted under some probability distribution. All edges incident on the deleted nodes are also removed. \(pncd\) - creation, \((1 - pncd)\) - deletion. Edge creation: In each iteration, we choose some node \(v\) and some number of edges \(k\) to add to node \(v\). With probability \(\beta\), these \(k\) edges are linked to nodes chosen uniformly and independently at random. With probability \(1 - \beta\), edges are copied from another node: we choose a node \(u\) at random, choose \(k\) of its edges \((u, w)\), and create edges \((v, w)\). If the chosen node \(u\) does not have enough edges, all its edges are copied and the remaining edges are copied from another randomly chosen node. Edge deletion: Random edges can be picked and deleted according to some probability distribution. Some of them need centralities to be computed before they can be used. If you choose All, you obtain a list with all the measures of the network. Communities of the current network can be computed by using the Louvain method (maximizing the modularity measure of the network). Applies the Page Rank diffusion algorithm to the current network for a prefixed number of iterations. Rewires all the links of the current network with a probability \(p\). For every link, one of the nodes is fixed, while the other is rewired. The infected/informed nodes can spread the infection/message to their neighbors with probability \(ps\) (independently for every neighbor). Every infected/informed node can recover/forget with a probability of \(pr\). Every recovered node can become immune with a probability of \(pin\). In this case, it will never again get infected / receive the message, and it can't spread it. Nodes have \(2\) possible values: on/off. In every step, every node changes its state according to the ratio of activated states. Opens a dialog window asking for a file name to save/load the current network. The program will automatically name the file with the distribution name and the date and time of exporting.
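For readers who want to sanity-check the generators described above outside NetLogo, a rough cross-check is possible in Python with networkx (this is not part of the tool; the function names and parameter values below are networkx's own, chosen arbitrarily for illustration):

import networkx as nx

# generate the three classic models described above and report the two
# small-world measures mentioned: clustering and average path length
models = [
    ("Erdos-Renyi (Gilbert)", nx.erdos_renyi_graph(n=200, p=0.05, seed=1)),
    ("Watts-Strogatz",        nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)),
    ("Barabasi-Albert",       nx.barabasi_albert_graph(n=200, m=3, seed=1)),
]
for name, G in models:
    apl = nx.average_shortest_path_length(G) if nx.is_connected(G) else float("nan")
    print(name, G.number_of_edges(), round(nx.average_clustering(G), 3), round(apl, 3))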
CommonCrawl
I'm starting work on a new site with my new partner, Patricio, a web developer. He has his own business and web site, aplitec.net, which shows he has some design talent. He's experienced with PHP and Joomla. The site is called CuencaTravel.com. It aims to be a resource connecting visitors to Cuenca with local people and businesses. The type of stuff you'd want to find on a travel website: hotels, restaurants, travel agents, tour operators, shops and artisans, etc. Reviews and comparisons of businesses, attractions, sights, etc. A map of the city and the ability to find businesses, attractions, sights, etc. A marketplace for local products from shops, artisans, etc. I hope to use this project to refine my process ideas and test drive a QA site. In essence, CuencaTravel.com will be the first qa-site user (apart from bootstrapping). One Shore is still looking for others willing to try out QA Site and give feedback. It's free for 3 months, and maybe more, if you're a good customer. That includes a free VPS server and support. Of course I want Cuenca Travel to succeed as well. That's what made the short guy on Mad Max Beyond Thunderdome valuable. Masterblaster was also the name of my parents' Thanksgiving turkey.

While the theme seems to be task lists, they all have one fatal flaw. They consider a task to be a line of text with a check box. If only all my tasks were so easy. ActiveCollab, the only one listed above with time tracking capability, doesn't consider the time spent on tasks worth tracking. None of them consider that one task may be dependent upon (or block) another. I understand that the idea of these web 2.0 tools is simplicity. But a task list that doesn't connect to anything else is really not an improvement over a list on scratch paper (except that it can be viewed over the internet). A plain text file does as much, excepting the "milestones" feature, which can be approximated by scrawling "due on $x_date" at the top of the page, and then writing TODAY in big letters and circling it. If you have a wiki that can add attachments, a todo|task|check list is just a page. If you've got the fancy strikethrough style and ordered lists, you're a step ahead of these things. Dependencies and importance (even if just limited to an important flag and a descope flag) are also important, and easily ignorable. See, it's not just the ability to do something that's important, it's knowing how to do it. You might forget what you did to accomplish task X and need to do it again, or undo it. Or someone else needs to know how you did it, so they can duplicate it, or just satisfy Sarbanes-Oxley CYAbility.

Kelsey has been quilting and watching Monk on her computer. It's a show we both like. I relate, of course, to Adrian, because of my neuroses, and admit to being "a little bit Monk." Kelsey said "a lot." But I also realized, apart from being a germ freak, I share another neurosis with a fictional character. I'm an organization freak. Thankfully, I'm too lazy most of the time. But I find, when it comes to organizing my thoughts, that I obsess over details, and if the plan isn't organized "just so" it distracts me. Along with a blog and Basecamp, it's official. Add in Skype and a wiki, sprinkle in some Ajax, and you can stick a fork in me. All I need now is a flickr album and a social networking site. I'm even buying e-books and subscriptions. Maybe I should get some AdSense or AdWords? Nah. I'm going to write a book, though, online, probably as a wiki, taking from the tools wiki.
Tentatively titled Open Source QA Tools. And probably self-publish it on Lulu. I need some RSS feeds and a web service or two.

In the process of coming up with things to do she looked up several other lists, and I was struck by how simply it can be done. While I've always known that a spreadsheet was enough, I never liked it (probably because I don't like spreadsheets). Seeing that a single blog post (which is really a text document) is being used to track such detailed and long-term projects is both frightening and relieving. How much nicer would it be to have a page linked from the list for each task, where progress could be updated? But it doesn't need to be much more complex than that. I've been evaluating a lot of project management tools, currently favoring GoPlan, but they still don't seem right. I actually miss the work breakdown spreadsheets we used at our last job. Other than the inherent problems of using Excel spreadsheets (not multiuser, brittle and limited formatting, ugly versioning, difficult to customize data without changing things), the real problem is usually in coming up with accurate tasks, not in tracking them.

Every day I write a daily todo list on paper that's not much more complicated than a shopping list, and every day I end up writing more than I can possibly do and adding all sorts of notes (often irrelevant) that obscure the list. I try to keep 2 notepads (one for tasks and one for notes) and some scratch paper handy to combat this, but it invariably fails. My notes are either lost or jumbled, important stuff written on the scratch paper never gets transferred, and half of the tasks from the previous day are written again for a few days before being dropped, incomplete, as my work takes a different direction. So simplicity needs to be the key. And persistence. Easy access to data is where online PM tools fall down, I think. Wikis are slightly too general, though I think a wiki-based solution is a good idea. Something that ties a wiki to a blog, but not a bliki. I should have a WBS containing overarching goals for each project, tied to a project plan. And time allocation for projects. Each day I should identify which WBS tasks (and non-WBS tasks) I need to perform. Then a summary at the end of the day of what I actually accomplished (as well as updates to task lists). Tasks should have their own page with comments. Notes can exist in the ether as something to look up, but can also be linked to from tasks. Dang it, I'm not supposed to be working on a new PM tool. It seems so easy, though.

Based on the strength of a review on Slashdot, I bought the book "PHP in Action" by Dagfinn Reiersøl with Marcus Baker and Chris Shiflett, published by Manning. I also bought (in a two-fer) "Zend Framework in Action" by Rob Allen, Nick Lo, and Steven Brown, from Manning. I'll let you know what I think of them. So much for Safari saving money.
CommonCrawl
Definition 13 The second exterior power $\Lambda^2V$ of a finite-dimensional vector space is the dual space of the vector space of alternating bilinear forms on $V$. Elements of $\Lambda^2V$ are called 2-vectors. This definition is a convenience – there are other ways of defining $\Lambda^2V$, and for most purposes it is only its characteristic properties which one needs rather than what its objects are. A lot of mathematics is like that – just think of the real numbers. Given this space we can now define our generalization of the cross-product, called the exterior product or wedge product of two vectors. My question: if I use Definition 14 for the wedge product, then how can I identify it with the bilinear form appearing in the definition of the second kind? Edit: Trying to understand md2perpe's comment. To avoid confusion, I will denote the second kind of wedge product by $\barwedge$. Our inventory is the following. So, as md2perpe pointed out, the last two items show that an identification of the elements of $\mathcal A^2(V)$ (i.e. of the alternating bilinear forms on $V$) with the elements of $W^2=(V^*)^2$ (i.e. of pairs of linear functionals on $V$) would solve my problem. He says that $(x,y)\in W^2$ should be identified with the bilinear form $V^2\to \mathbb R: (u,v)\mapsto x(u)y(v)-x(v)y(u)$. As far as I see, this bilinear form is none other than $x\barwedge y$. A bit circular, but interesting.
CommonCrawl
Is there a way to reduce oscillations in the numerical integration when evaluating the Heston model? I am pricing a series of 5000 options scattered over the Heston model parameter space, and I find that for some parameters, often deep-out-of-the-money options, I get negative option prices. I am using 32-point Gauss-Laguerre integration, so the integration grid is rather fine; I have also tried extending the maturities to, say, 10 years, but this only reduces the frequency. If not, I guess Monte Carlo is the only way to make sure I get no negative prices.

In the SV model, it is well known that the integrand for the call price can sometimes show high oscillation, can decay very slowly along the integration axis, and can show discontinuities. The "Little Trap" formulation of Albrecher et al.

There has been a huge amount of work on this. Generally a Fourier transform approach is used. First, be careful to use the form of the characteristic function that does not wind about zero, in order to avoid having to count the number of windings. Second, using contour shifts can make the integral much better behaved, e.g. integrate along the line with $0.5$ imaginary part to price a covered call. Third, use a Black-Scholes call with the same strike as a control. This removes poles and makes the integrand much nicer. For details, see my book More Mathematical Finance Chapter 17 and/or my paper http://ssrn.com/abstract=1941464 Fourier Transforms, Option pricing and controls.

I am surprised that none of the answers so far mention the work of Lord and Kahl, Optimal Fourier Inversion in Semi-Analytical Option Pricing. They study this oscillation problem and propose an optimal contour for the integration. The challenge is to write a small algorithm to obtain the optimal $\alpha$. I believe it can be found in an article from Mike Staunton in a recent Wilmott magazine issue. A different trick is to use the Black-Scholes model as a control variate in the integration (its characteristic function). This is detailed in Andersen and Piterbarg's book "Interest Rate Modeling, Volume I: Foundations and Vanilla Models", as well as in @MarkJoshi and Chan's paper. Use a quadrature that takes care of oscillations naturally. This is the approach described in An adaptive Filon quadrature for stochastic volatility models.
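Since the question mentions 32-point Gauss-Laguerre quadrature, here is a minimal, generic sketch of how that rule handles a semi-infinite integral. This is illustrative code of mine, not the poster's pricer, and it uses a toy damped-oscillatory integrand rather than the actual Heston characteristic function.

import numpy as np

def gauss_laguerre_integral(f, n=32):
    # Gauss-Laguerre approximates int_0^inf exp(-u) g(u) du with n nodes/weights,
    # so for a general integrand f over [0, inf) we evaluate f(u) * exp(u).
    nodes, weights = np.polynomial.laguerre.laggauss(n)
    return np.sum(weights * np.exp(nodes) * f(nodes))

# Toy damped-oscillatory integrand; exactly, int_0^inf exp(-u) cos(2u) du = 1/5.
f = lambda u: np.exp(-u) * np.cos(2.0 * u)
print(gauss_laguerre_integral(f, n=32), 0.2)

The same idea is what makes contour shifts and control variates attractive in the Heston setting: they increase the effective damping of the integrand, so a fixed-size rule like this one has far less oscillation to resolve.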
CommonCrawl
Is there an easy way to remember what the difference is between a Type I and a Type II error, such as a mnemonic? How do professional statisticians do it? Is it just something that they know from using or discussing it often? (Tags: terminology, type-i-errors, type-ii-errors.)

Table of error types: the tabularised relations between the truth/falseness of the null hypothesis and the outcomes of the test, i.e. whether the null hypothesis (H0) is valid/true or invalid/false versus the judgment made about it. A Type II error, or false negative, is where a test result indicates that a condition failed, while it actually was successful; a Type II error is committed when we fail to reject a false null hypothesis. Reducing the chances of a Type II error would mean making the alarm hypersensitive, which in turn would increase the chances of a Type I error (a false alarm). One answer: I generally like showing the following two pictures. Another: it helps that when I was at school, every time we wrote up a hypothesis test we were nagged to write "$\alpha = ...$" at the start, so I knew what $\alpha$ stood for; I set the criterion for the probability that I will make a false rejection. A suggested mnemonic: "Twelve Tan Elvis's Ate Nine Hams With Intelligent Irish Farmers." Type One and Type Two errors are discussed at length in most introductory college texts; see also Raiffa, H., Decision Analysis: Introductory Lectures on Choices Under Uncertainty, Addison-Wesley, (Reading), 1968.

Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram.

This represents a power of 0.90, i.e., a 90% chance of finding an association of that size. The prediction that patients with attempted suicides will have a different rate of tranquilizer use (either higher or lower than control patients) is a two-tailed hypothesis. Data dredging after the data have been collected, and post hoc deciding to change over to one-tailed hypothesis testing to reduce the sample size and P value, are indicative of a lack of scientific rigor. No matter how many data a researcher collects, he can never absolutely prove (or disprove) his hypothesis.
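As a concrete numerical companion to the definitions above, here is a small illustration (my own, not from the original thread) of how the Type I error rate $\alpha$ and the Type II error rate $\beta$ trade off in a one-sided one-sample z-test; the effect size, sample size and $\alpha$ below are made-up values.

from scipy.stats import norm

mu0, mu1 = 0.0, 0.5       # null-hypothesis mean and a hypothetical true mean
sigma, n = 1.0, 25        # known standard deviation and sample size
alpha = 0.05              # chosen Type I error rate (false-positive rate)

se = sigma / n ** 0.5
critical = mu0 + norm.ppf(1 - alpha) * se       # reject H0 if the sample mean exceeds this
beta = norm.cdf(critical, loc=mu1, scale=se)    # P(fail to reject H0 | H0 is false)
print(f"critical value = {critical:.3f}, beta = {beta:.3f}, power = {1 - beta:.3f}")

Lowering alpha pushes the critical value up and inflates beta, which is exactly the smoke-alarm trade-off described above.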
CommonCrawl
How to find the distance between two non-parallel lines? I am tasked to find the distance between these two lines. Those two lines are non-parallel and they do not intersect (I checked that). Using the vector product I computed the normal (the direction orthogonal to both of these lines), and the normal is $(3, -2, 1)$. Now I have the direction vector of the line which will intersect both of my non-parallel lines. However, here's where I encounter the problem: I don't know what to do next. The next logical step in my opinion would be to find a point on $p1$ where I could draw that orthogonal line and where that orthogonal line would also intersect with $p2$... There's only one such point, since we are in 3D space and I could draw an orthogonal line from any point in $p1$ but it could miss $p2$.

Take the common normal direction. HINT: find any vector joining one point on one line to another point on the other line and calculate the projection of this vector onto the common normal which you have found already.

Distance is measured along a vector which is perpendicular to $v_1$ and $v_2$; we can take for example the cross product $v_\perp=v_1 \times v_2$, which you have already done. Writing the condition that a point of the first line, displaced by a multiple of $v_\perp$, lands on the second line, $p_1 + t v_1 + r v_\perp = p_2 + s v_2$, and solving this linear system for $t, r, s$, it's straightforward to calculate the ends of the segment perpendicular to both lines and its length $d=\Vert rv_\perp \Vert$.
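A quick numeric sketch of the projection approach (the points and direction vectors below are placeholders of mine, not the asker's actual lines, which are not given in full):

import numpy as np

# Distance between two skew lines: project any connecting vector onto the common normal.
p1, v1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 2.0])   # point and direction of line 1
p2, v2 = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 1.0])   # point and direction of line 2

n = np.cross(v1, v2)                                # common normal direction
d = abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)     # |(p2 - p1) . n| / |n|
print(d)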
CommonCrawl
56 A multiplication algorithm found in a book by Paul Erdős: how does it work? 53 Is there another way to solve this integral? 43 If $A^2$ and $B^2$ are similar matrices, do $A$ and $B$ have to be similar? 41 Difference between $⊂$ and $⊆$? 38 How to prove that a $3 \times 3$ Magic Square must have $5$ in its middle cell?
CommonCrawl
I'm not really sure where to go with this question, any help would be appreciated. Let $A$ be an $n\times n$ matrix. Prove that if $A$ is row equivalent to some invertible $n\times n$ matrix $B$ then $A$ is invertible. I'm not sure where a starting point would be. I know that an $n\times n$ matrix $B$ is invertible if there is a matrix $C$ such that $C$ is both a left and a right inverse of $B$: $BC = I_n$ and $CB = I_n$, but I'm not sure if this would be useful.

If $A$ is row-equivalent to $B$ and $B$ is invertible, then there exist elementary matrices $E_1,\dots,E_r$ such that $B=E_r\cdots E_1A$. Now, each $E_i$ is invertible, so $A = E_1^{-1}\cdots E_r^{-1}B$ is a product of invertible matrices, and is therefore invertible.
CommonCrawl
I was studying chapter 1 of the book "Equilibrium Unemployment Theory" and I got confused about the way Pissarides has defined the probability of a firm not finding a worker in a short time interval $\delta t$ as $1-q(\theta)\delta t$ (page 7 of the book). The first reason I got confused is that, to me, that cannot represent a probability, because if we take $\delta t$ large enough the result can be negative. Second, if we take $q(\theta)\delta t$ as the rate (NOT the probability) of a firm finding a match in a time interval $\delta t$, then shouldn't the rate of the firm not finding a match in the same time interval be $[1-q(\theta)]\delta t$ instead of $1-q(\theta)\delta t$?

$q(\theta)$ is defined as the job-filling rate. Note that market tightness $\theta$ is not necessarily constant over time (Pissarides makes a dynamic analysis at some point). It may help to denote it $\theta_t$. As an approximation, $q(\theta_t)\delta t$ is the probability for a firm to meet a worker between $t$ and $t+\delta t$, for $\delta t$ small enough. 1) If you choose a large $\delta t$, this approximation is not valid anymore. In other words, assuming a large $\delta t$ prevents you from interpreting $q(\theta_t)\delta t$ as a probability. To define an equilibrium, Pissarides will anyway take the limit $\delta t\to 0$. 2) $q(\theta_t)$ is a rate (that is not constrained to be lower than 1), whereas $q(\theta_t)\delta t$ is a probability. Thus, the probability of a firm not meeting a worker between time $t$ and $t+\delta t$ is $1-q(\theta_t)\delta t$. $1-q(\theta_t)$ has no clear interpretation (it can even be negative). I guess (but I would like confirmation) that the rate of a firm not finding a worker cannot be defined in this case (or it would be $+\infty$).
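A short way to reconcile the two readings, using standard Poisson-process reasoning rather than anything taken verbatim from Pissarides: treating the matching process as a Poisson process with rate $q(\theta_t)$, held constant over the short interval, the exact probability of no match is

$$\Pr\{\text{no match in } (t, t+\delta t]\} = e^{-q(\theta_t)\,\delta t} = 1 - q(\theta_t)\,\delta t + O\big((\delta t)^2\big),$$

so $1-q(\theta_t)\delta t$ is the first-order Taylor expansion of the exact probability $e^{-q(\theta_t)\delta t}$. It is a genuine probability only when $q(\theta_t)\delta t \le 1$, which is why the expression is harmless once the limit $\delta t \to 0$ is taken.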
CommonCrawl
Non-volatile memory (NVM) as persistent memory is expected to substitute or complement DRAM in memory hierarchy, due to the strengths of non-volatility, high density, and near-zero standby power. However, due to the requirement of data consistency and hardware limitations of NVM, traditional indexing techniques originally designed for DRAM become inefficient in persistent memory. To efficiently index the data in persistent memory, this paper proposes a write-optimized and high-performance hashing index scheme, called level hashing, with low-overhead consistency guarantee and cost-efficient resizing. Level hashing provides a sharing-based two-level hash table, which achieves a constant-scale search/insertion/deletion/update time complexity in the worst case and rarely incurs extra NVM writes. To guarantee the consistency with low overhead, level hashing leverages log-free consistency schemes for insertion, deletion, and resizing operations, and an opportunistic log-free scheme for update operation. To cost-efficiently resize this hash table, level hashing leverages an in-place resizing scheme that only needs to rehash $1/3$ of buckets instead of the entire table, thus significantly reducing the number of rehashed buckets and improving the resizing performance. Experimental results demonstrate that level hashing achieves $1.4\times$$-$$3.0\times$ speedup for insertions, $1.2\times$$-$$2.1\times$ speedup for updates, and over $4.3\times$ speedup for resizing, while maintaining high search and deletion performance, compared with state-of-the-art hashing schemes.
CommonCrawl
Sudoku is a number puzzle where, given an $n \times n$ grid divided into boxes of size $n$, each number from $1$ to $n$ should appear exactly once in each row, column and box. In the game of Chess, the King can move to any of (at most) 8 adjacent cells in a turn. "Adjacent" here means horizontally, vertically or diagonally adjacent. The King's tour is an analogue of the Knight's tour; it is a (possibly open) path that visits every cell exactly once on the given board with Chess King's movements. The tour forms the 36-digit number 654654564463215641325365231214123321. This incomplete board gives the starting sequence of 666655546... which is the optimal sequence of 9 starting digits. Your task is to find the largest such number for standard 9-by-9 Sudoku with 3-by-3 boxes, i.e.

Note that this challenge is not code-golf; the focus is to actually find the solutions rather than to write a small program that theoretically works. The score of a submission is the 81-digit number found by your program. The submission with the highest score wins. Your program should also output the Sudoku grid and the King's tour in human-readable form; please include them in your submission. Your program may output multiple results; your score is the maximum of them. There's no time limit for your program. If your program continues to run and finds a higher number afterwards, you can update the submission's score by editing the post. The tiebreaker is the earliest time to achieve the score, i.e. either the time of the post (if it hasn't been edited) or the time of the edit when the score was updated (otherwise).
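For reference, a small verifier of the kind one would want alongside a submission; this is my own helper sketch, not part of the challenge statement, and the function names are made up.

def is_valid_sudoku(grid):
    # Check that rows, columns and 3x3 boxes of a 9x9 grid each contain 1..9 exactly once.
    n, b = 9, 3
    digits = list(range(1, n + 1))
    rows_ok = all(sorted(row) == digits for row in grid)
    cols_ok = all(sorted(col) == digits for col in zip(*grid))
    boxes_ok = all(
        sorted(grid[r + i][c + j] for i in range(b) for j in range(b)) == digits
        for r in range(0, n, b) for c in range(0, n, b))
    return rows_ok and cols_ok and boxes_ok

def tour_number(grid, tour):
    # tour is a list of (row, col) cells; consecutive cells must be king-adjacent
    # (Chebyshev distance 1) and every cell must be visited exactly once.
    n = len(grid)
    assert len(tour) == n * n and len(set(tour)) == n * n
    for (r1, c1), (r2, c2) in zip(tour, tour[1:]):
        assert max(abs(r1 - r2), abs(c1 - c2)) == 1
    return "".join(str(grid[r][c]) for r, c in tour)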
CommonCrawl
The mean altitudes of the 10 km $\times$ 10 km hectad squares used by the United Kingdom's Ordnance Survey in mapping Great Britain are given in the NumPy array file gb-alt.npy. NaN values in this array denote the sea. Plot a map of the island using this data with ax.imshow and plot further maps assuming a mean sea-level rise of (a) 10 m, (b) 50 m, (c) 100 m. In each case, deduce the percentage of land area remaining, relative to its present value. The code below creates a plot of Great Britain under various amounts of water.
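The code referred to above is not reproduced here; the following is a possible implementation of the exercise (the filename gb-alt.npy comes from the problem statement, while the colour map and layout choices are my own):

import numpy as np
import matplotlib.pyplot as plt

alt = np.load('gb-alt.npy')                    # mean hectad altitudes; NaN marks the sea
land_now = np.count_nonzero(~np.isnan(alt))    # present-day number of land hectads

fig, axes = plt.subplots(1, 4, figsize=(14, 5))
for ax, rise in zip(axes, (0, 10, 50, 100)):
    # Squares still above the new sea level; NaN compares as False, so sea squares stay NaN.
    above = np.where(alt > rise, alt - rise, np.nan)
    remaining = np.count_nonzero(~np.isnan(above))
    ax.imshow(above, cmap='terrain')
    ax.set_title('+{} m: {:.1f}% of land left'.format(rise, 100 * remaining / land_now))
    ax.axis('off')
plt.show()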
CommonCrawl
The conference will provide an opportunity for researchers and students in topology (including both geometric topology and homotopy theory) and related fields to come together, present their research, and learn from each other. The conference will begin on Friday at 1:20 p.m. (Sign-in is just before this, 12:30-1:20 p.m.) and end at 12:15 p.m. on Sunday. The conference budget is modest: our annual tradition since 1970 is that, while there isn't funding for the travel and lodging expenses of speakers and participants, there will be nice (perhaps "classical and contemporary") Cajun-style dinners on Friday and Saturday nights (respectively) provided for by the Lloyd Roeling Conference Fund and the UL Lafayette Mathematics Department. The student talks will be 20 minutes long and 3 prizes, in the amounts of $125, $100, and $75, will be awarded. The Organizing Committee members are: Scott Bailey (Clayton State), Daniel Davis (UL Lafayette), Robin Koytcheff (UL Lafayette), and Peter Oman (Southeast Missouri State). Maxim D. Doucet Hall faces Johnston Street. Map links are posted above. There is no registration fee. However, to aid in planning please complete the registration form as soon as possible. There is an increasing interest about bimonoidal, or "rig" categories, as possible inputs of various machines outputting $K$-theory spectra. The underlying 1-types of such categories are represented by stable modules. We show that the rig structure can be described in terms of so-called biextensions of such modules. Computations involving the root invariant prompted Mahowald and Shick to develop the slogan: "the root invariant of $v_n$-periodic homotopy is $v_n$-torsion." While neither a proof, nor a precise statement, of this slogan appears in the literature, numerous authors have offered computational evidence in support of its fundamental idea. In this talk, we will discuss the modules and splittings involved in this computational evidence, and provide yet another example in support of the slogan. To capture more information about a topological space, one considers the co-chain complex of a space with extra multiplicative structure instead of just the co-chain complex itself. One of these structures is called the E-infinity structure. An E-infinity DGA is a chain complex with a multiplication that is associative and commutative up to coherent higher homotopies. Co-chain complexes of topological spaces are examples of E-infinity DGAs. Weak equivalences of E-infinity DGAs are maps of E-infinity DGAs that induce an isomorphism in homology, these are called quasi-isomorphisms. In our work, we use stable homotopy theory to construct new equivalences between E-infinity DGAs which we call E-infinity topological equivalences. E-infinity DGAs are called E-infinity topologically equivalent when the corresponding commutative ring spectra are weakly equivalent. Quasi-isomorphic E-infinity DGAs are topologically equivalent. We show that the converse to this statement is not true, i.e. we construct examples of E-infinity DGAs that are E-infinity topologically equivalent but not quasi-isomorphic. This means that there are more equivalences to consider between E-infinity DGAs than just quasi-isomorphisms. Also, we show that for co-chain complexes of spaces with integer coefficients, E-infinity topological equivalences and quasi-isomorphisms agree. 
We will explain how estimates from a higher stabilization theorem show that the stable homotopy completion studied by Carlsson, and subsequently in Arone-Kankaanrinta (S-localization), fits into a derived adjunction via the Arone-Ching theory that can be turned into a derived equivalence by restricting to simply connected spaces. If time permits, the analogous results from finite suspensions of spaces, and their analogs and duals in structured ring spectra, will be discussed. In joint work with a number of collaborators, particularly C. Barwick, E. Dotto, D. Nardin and J. Shah, we have developed a formalism that "takes the G out of 'genuine'" by substituting the orbit category O_G for a category with similar properties in the machinery of unstable and stable equivariant homotopy theory. I'll give an overview of some of the successes of this theory, which puts G-spectra on the same footing as seemingly dissimilar objects coming from functor calculus. I'll then discuss a new connection between the theory of orbital categories and stratified topology, linking perverse sheaves to the equivariant slice filtration used in Hill, Hopkins and Ravenel's solution of the Kervaire invariant one problem. If time permits I'll end with some musings about stratified $\infty$-topoi and their shapes. Heegaard Floer homology is an invariant of closed three-manifolds. We consider three-manifolds up to a weaker notion of equivalence known as homology cobordism. Using additional data from the involutive Heegaard Floer homology package of Hendricks and Manolescu, we discuss applications of Heegaard Floer homology to homology cobordism. This is joint work with Kristen Hendricks and Tye Lidman. Given a self-map of a compact, connected topological space we consider the problem of determining upper and lower bounds for the fixed point indices of the map. One can not expect to have bounds in general, so we need to restrict attention to the class of spaces considered and also the class of self-maps. Motivated by an elementary result in the case of a 1-dimensional complex this talk will focus attention to the setting of 2-complexes. Some past results and related examples will be presented, leading to some current joint work with D. L. Goncalves (U. Sao Paulo, Brasil). The theory of braids has been very useful in the study of classical knot theory. One can hope that higher dimensional braids will play a similar role in higher dimensional knot theory. In this talk we will introduce the concept of braided embeddings, and discuss existence, lifting and isotopy problems for braided embeddings. In 1979, V. I. Arnold showed that the fundamental invariant of 2--component links, namely the linking number, can be generalized to an invariant of volume preserving vector fields. In this talk, Arnold's construction will be outlined, together with various applications in mathematical physics and geometric knot theory. Further, more recent results concerning generalizations of this construction to Vassiliev invariants of knots, and Milnor higher linking numbers will be presented. The knot concordance group has been the subject of much study since its introduction by Ralph Fox and John Milnor in 1966. One might hope to generalize the notion of a concordance group to links; however, the immediate generalization to the set of links up to concordance does not form a group since connected sum of links is not well-defined. 
In 1988, Jean Yves Le Dimet defined the string link concordance group, where a link is based by a disk and represented by embedded arcs in D^2 × I. In 2012, Andrew Donald and Brendan Owens defined groups of links up to a notion of concordance based on Euler characteristic. However, both cases expand the set of links modulo concordance to larger sets and each link has many representatives in these larger groups. In this talk, I will present joint work with Matthew Hedden where we define a link concordance group based on the "knotification" construction of Peter Ozsváth and Zoltan Szabó, giving a definition of a link concordance group where each link has a unique group representative. I will also present invariants for studying this group coming from Heegaard-Floer homology as well as a new group theoretic invariant for studying concordance of knots inside certain types of 3-manifolds. Symplectic Homology is a kind of Floer homology defined for a class of non-compact symplectic manifolds including cotangent bundles and smooth affine algebraic varieties. In joint work with Luis Diogo, we have developed a method of computing SymplecticHomology for the complement of a smooth divisor in a projective variety in terms of the Gromov-Witten invariants of the divisor and of the variety. I will provide some background on symplectic homology, including a discussion of some of its applications, and will then discuss some of the ingredients of the proof of our theorem. Waldhausen's introduction of A-theory of spaces revolutionized the early study of pseudo-isotopy theory. Waldhausen proved that the A-theory of a manifold splits as its suspension spectrum and a factor Wh(M) whose first delooping is the space of stable h-cobordisms, and its second delooping is the space of stable pseudo-isotopies. I will describe a joint project with C. Malkiewich aimed at telling the equivariant story if one starts with a manifold M with group action by a finite group G. One can classify categories by using the nerve construction. But the nerve cannot determine the difference between certain types of categories. For example, the nerve cannot distinguish the difference between the trivial category and a category with two objects and one nontrivial morphism between the objects. Rezk's classifying and classification diagrams are generalizations of the nerve construction and can distinguish the difference between these categories. In this talk, we will discuss applying the appropriate diagram to the category of finite sets and the category of graphs as well as the relationship between these diagrams. Also, we will describe the classification diagram of a category where all of the morphisms are weak equivalences. Let $\pi$ be a discrete group and $G$ be a Lie group. We study the topology and in particular cohomology of the space of representations $Hom(\pi,G)$. For $\pi$ a nilpotent group or a free abelian group we describe the rational cohomology of the representation space in terms of the invariants of finite reflection groups. Moreover, we describe stability properties for the cohomology of representation spaces and character varieties. This is joint work with Dan Ramras. In the 90's Goerss, Hopkins, and Miller proved that the Morava E-theories are E_\infty-ring spectra in a unique way. Since then several people including Ando, Hopkins, Strickland, and Rezk have worked on explaining the effect of this structure on the homotopy groups of the spectrum. 
In this talk, I will present joint work with Barthel that shows how a form of character theory due to Hopkins, Kuhn, and Ravenel can be used to reduce this problem to a combination of combinatorics and the GL_n(Q_p)-action on the Drinfeld ring of full level structures which shows up in the local Langlands correspondence. We shall consider the Cartesian squares (powers) of manifolds with the fixed point property (f.p.p.). Examples of manifolds with the f.p.p. whose symmetric squares fail to have the f.p.p. will be given. Zakharevich gave a proof of the fact that the category of Waldhausen categories is a closed symmetric multicategory and algebraic K-theory is a multifunctor from the category of Waldhausen categories to the category of spectra. By assigning to any Waldhausen category the fundamental groupoid of the 1-type of its K-theory spectrum, we get a 1-functor from the category of Waldhausen categories to the category of Picard groupoids (since stable 1-types are classified by Picard groupoids). We want to show this 1-functor is a multifunctor. We use the algebraic model defined by Muro and Tonks to define the multifunctor. This is useful because it will describe the algebraic structures on the 1-type of the K-theory spectra induced by the multiexactness pairings on the level of Waldhausen categories. Heegaard Floer theory provides a powerful suite of tools for studying 3-manifolds and their subspaces. In 2006, Ozsvath, Szabo and Thurston defined an invariant of transverse knots which takes values in a combinatorial version of this theory for knots in the 3—sphere. In this talk, we discuss a refinement of their combinatorial invariant via branched covers and discuss some of its properties. This is joint work with Mike Wong. A few hotels reasonably near campus are listed below. Friday On Friday you will need to park in the new parking garage. You will need a "coupon code" to park. (You may be able to park by Maxim Doucet Hall late in the afternoon.) (Use 1289 Girard Park Circle to locate the entrance to the parking garage online.) If you indicated that you need a coupon code for Friday on your registration form, you will receive an email with the code. Map links are posted above.
CommonCrawl
The ring of quaternions is a four dimensional division algebra over the real numbers. They are usually denoted as $\Bbb H$ in honor of the discoverer, William Rowan Hamilton. The construction of the quaternions was given by Hamilton as follows: take three symbols $i,j,k$ and define $i^2=j^2=k^2=ijk=-1$. As a result, $ij=k$, and $jk=i$ and $ki=j$. Furthermore, $ji=-k$ and $kj=-i$ and $ik=-j$, so $kji=1$. A quaternion is a linear combination $q=\alpha+\beta i+\gamma j +\delta k$ where $\alpha, \beta,\gamma,\delta\in \Bbb R$. Multiplication between quaternions is carried out by using the distributive rule and the rules for $i,j$ and $k$. The quaternions turn out to be a noncommutative division ring. In fact, $\Bbb R$ and $\Bbb C$ and $\Bbb H$ are the only associative finite dimensional division rings over $\Bbb R$. Together with the octonions $\Bbb O$, they are also the only normed division algebras over $\Bbb R$ (Hurwitz's theorem).
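The multiplication rules above determine the full product of two general quaternions; here is a minimal sketch, representing $q = a + bi + cj + dk$ as a tuple, with names of my own choosing.

def qmul(p, q):
    # Hamilton product of p = a1 + b1*i + c1*j + d1*k and q = a2 + b2*i + c2*j + d2*k.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))   # (0, 0, 0, 1)  -> ij = k
print(qmul(j, i))   # (0, 0, 0, -1) -> ji = -k, showing the noncommutativity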
CommonCrawl
A computationally-efficient method based on Kalman filtering is introduced to capture "on the fly" the low-frequency (or very large-scale) patterns of a turbulent flow in a large-eddy simulation (LES). This method may be viewed as an adaptive exponential smoothing in time with a varying cut-off frequency that adjusts itself automatically to the local rate of turbulence of the simulated flow. It formulates as a recursive algorithm, which requires only a few arithmetic operations per time step and has very low memory usage. In practice, this smoothing algorithm is used in LES to evaluate the low-frequency component of the rate of strain, and implement a shear-improved variant of the Smagorinsky subgrid-scale viscosity. Such an approach is primarily devoted to the simulation of turbulent flows that develop large-scale unsteadiness associated with strong shear variations. As a severe test case, the flow past a circular cylinder at Reynolds number $Re_D=4.7\times10^4$ (in the subcritical turbulent regime) is examined in detail. Aerodynamic and aeroacoustic features including spectral analysis of the velocity and the far-field pressure are found in good agreement with various experimental data. The Kalman filter suitably captures the pulsating behavior of the flow and provides meaningful information about the large-scale dynamics. Finally, the robustness of the method is assessed by varying the parameters entering in the calibration of the Kalman filter.
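The abstract does not spell out the recursion itself, so the following is only a rough illustration of the general idea it describes, an exponential smoother whose gain is updated each step in Kalman-filter fashion; the parameter choices are made up and this is not the authors' actual scheme.

import numpy as np

def adaptive_smooth(signal, q=1e-4):
    # Recursive smoothing: a few arithmetic operations per step, O(1) state per signal.
    xhat = signal[0]          # running low-frequency estimate
    p = 1.0                   # variance of that estimate
    r = np.var(signal)        # crude stand-in for the fluctuation (measurement noise) level
    out = np.empty_like(signal)
    for n, x in enumerate(signal):
        p += q                  # predict: estimate variance grows by the process noise q
        k = p / (p + r)         # Kalman gain, i.e. the effective smoothing coefficient
        xhat += k * (x - xhat)  # update: exponential smoothing with adaptive gain k
        p *= (1.0 - k)
        out[n] = xhat
    return out

t = np.linspace(0.0, 10.0, 2000)
noisy = np.sin(t) + 0.5 * np.random.randn(t.size)   # slow pattern plus "turbulent" fluctuations
low_freq = adaptive_smooth(noisy)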
CommonCrawl
If you want to get better at something, you need a plan. Improvement doesn't happen on its own. But once you have that plan, a bigger challenge is executing on it along with your other responsibilities. One way to increase your chances of following through on changes is not to try to make big changes all at once. Instead, make small changes, but make them regularly. Let's see how that works. In July of this year, Stack Exchange Inc. released an online tool that lets you calculate how much money you would make if you worked there. The number you get out of the tool is based on four factors. There's a salary floor based on the position you select (e.g., Developer or Product Designer), an adjustment based on your years of professional experience, and a bonus for living in one of a few high cost cities (New York, San Francisco, or London). Finally, the tool takes into account your skills. Having written in the past about skills for programmers, I was interested to see what Stack Exchange decided was important for success in a programming job. Here's what I found. When you first start learning something new, it is normal to see rapid — or at least steady — improvement, and when that improvement stops, it is natural to believe you've hit some sort of implacable [immovable] limit. So you stop trying to move forward, and you settle down to life on that plateau. This is the major reason that people in every area stop improving. The concept of the learning plateau is one way to describe how people approach learning, work, and self-improvement. With a new skill, there's an initial period of excitement driven by how easy it is to make progress. Then a plateau arrives, and you have to decide whether to push through it or stick with your current skill level. And even if you push through it, you can look forward to another plateau where you'll get to make the same decision again. Any reasonably complex skill will involve a variety of components, some of which you will be better at than others. Thus, when you reach a point at which you are having difficulty getting better, it will be just one or two of the components of that skill, not all of them, that are holding you back. According to this approach, the way to resist the plateau effect is to break down your target skill into its constituent parts, and be prepared to target those parts individually. In the book, the authors use typing speed as an example. Everyone who learns to type eventually reaches a speed plateau. Physical constraints mean you can't keep increasing your typing speed forever. But you may plateau at a speed that is below your physical limits, or at least is slower than you want. One idea for increasing your typing speed is just to push yourself to type faster whenever you get the chance. But according to the authors, there's a more effective way. Rather than trying to type faster 100% of the time, try typing faster for just 15-20 minutes per day. During that time, document the mistakes you make. It's likely that some letters or letter combinations will trip you up more than others. Once you identify them, you can more efficiently target those components, rather than trying to get better at the skill all at once. Typing happens to be one component of the skill known as competitive programming. If your typing skills are slower than average, or if you're competing at a high level in timed contests, working on your typing speed might be worthwhile. 
But for most competitive programming enthusiasts, working on other skills is more likely to produce results. What are those other skills? Given an $N \times N$ array $A$ of positive and negative integers, print the sum of the nonempty subarray of $A$ that has the maximum sum. The sum of a subarray is defined as the sum of its elements. uHunt lists this problem in the section called Max 2D Range Sum, a subcategory of Dynamic Programming. But before we get into the dynamic programming solution, let's examine the Complete Search approach.
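The post goes on to discuss the Complete Search approach; for comparison, here is a sketch of the well-known reduction of the 2D problem to repeated 1D maximum-subarray (Kadane) scans, which is the usual stepping stone toward the dynamic-programming solution. The code and example grid are mine, not uHunt's.

def max_2d_range_sum(A):
    # O(N^3): fix a pair of rows (top, bottom), collapse the columns between them
    # into one array of column sums, and run Kadane's algorithm on that array.
    n = len(A)
    best = A[0][0]
    for top in range(n):
        col_sums = [0] * n
        for bottom in range(top, n):
            for c in range(n):
                col_sums[c] += A[bottom][c]      # extend the strip down to row `bottom`
            cur = col_sums[0]
            best = max(best, cur)
            for c in range(1, n):                # Kadane's maximum-subarray scan
                cur = max(col_sums[c], cur + col_sums[c])
                best = max(best, cur)
    return best

example = [[ 0, -2, -7,  0],
           [ 9,  2, -6,  2],
           [-4,  1, -4,  1],
           [-1,  8,  0, -2]]
print(max_2d_range_sum(example))   # 15, from the 3x2 block [[9, 2], [-4, 1], [-1, 8]]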
CommonCrawl
How do conventional and T-tails differ? What design considerations go into the decision between conventional tails and T-tails? Functionally the horizontal stabilizer/stabilator is the same in both cases, providing negative lift, the elevator control and a method for pitch trim. What are the differences though? As far as I am aware, the aircraft I have flown have T-tails either for avoiding propwash (PA-44) or because of aft engine placement (EMB-145). Are there other reasons for having a T-tail? What are the aerodynamic consequences a pilot needs to be aware of with a T-tail (e.g. avoiding hard de-rotation on touchdown, issues at high AOA, etc.)?

The placement on top of the vertical gives it more leverage, especially with a swept tail. Depending on wing location, it stays in undisturbed flow in a stall. Note: this really depends on the details; the HFB-320 had a forward swept wing and a T-tail, which made a deep stall possible (and in one case fatal). By designing the junction with the vertical well, the T-tail has less interference drag. It also helps to reduce wave drag, especially when using a well designed Küchemann body (the round, long, spiky thing on the tail junction of a Tu-154), by stretching the structure lengthwise. It can help to increase the effectiveness of the vertical tail by keeping the air on both sides of it separated. At the other end, the fuselage does this already, so moving the horizontal tail up does not hurt so much there. As a consequence, the tail can be built lower. The mass of the horizontal tail on a long lever arm (= the vertical tail) means that the torsional eigenfrequency of the fuselage will go down. This might be a problem in case of flutter. As a consequence of the smaller vertical tail, a T-tail can be lighter. Note that the increased leverage means that the horizontal tail can be smaller as well. This reduces friction drag and is the main reason why most modern gliders have T-tails. A T-tail produces a strong nose-down pitching moment in sideslip.

The T-tail sticks the elevators out of the disturbed air of the wings, prop, and (usually most of) the fuselage, which gives you better elevator authority and makes a tail stall less likely. It has some drawbacks though: putting the elevators directly in the (turbulent) separated flow from the wings during a stall can put you in a (more or less) unrecoverable deep stall.

The considerations in roe's answer are entirely correct, but there might be other factors to take into account. First, it is true that using a conventional tail means that the airflow over the tail might be disturbed by the main wing and/or the engines and/or the fuselage. However, the downwash induced by the main wing on the flow is taken into account (for the cruise conditions) in the design of the tail in order to reduce some negative aspects of the interaction between the main wing and the tail. Another major difference between these two configurations concerns stability. As I already explained in this answer, the tail is used to create some lift that is required to fulfil the trim relations. Regarding the "vertical" force equilibrium equation, there is no real difference between the two configurations, but there is a big one for the moment equilibrium. Assuming that you have the same amount of lift generated by both configurations (this is relevant due to the "vertical" force equilibrium), a quick sketch will convince you that both the angle and the lever arm are different.
The conclusion of this study cannot be drawn without a specific example, but I hope it is clear to you that stability is really impacted by the choice of the tail. From a structural point of view, when flying transonic (or even supersonic) it is not good to have a T-tail configuration because it usually induces flutter on the tail. Finally, at a lower level but still a difference, using a T-tail increases the wake behind your aircraft (compared to a conventional configuration, where the tail is almost in the wake of the main wings and the fuselage) and thus the drag you need to overcome is larger.

A T-tail has structural and aerodynamic design consequences. The structural considerations are of course the increased weight of the vertical tail due to now having to support the forces and moments on the horizontal tail, including strengthening for flutter. The vertical tail can be shorter due to the end plate effect of the horizontal tail, and the moment arm to the CoG is longer; however, for most higher subsonic speed aircraft these effects merely reduce the weight penalty. One unusual flight characteristic of the early Victor was its self-landing capability; once lined up with the runway, the aircraft would naturally flare as the wing entered into ground effect while the tail continued to sink, giving a cushioned landing without any command or intervention by the pilot.

The aerodynamic consequences of a T-tail have most to do with stability and control in stall and post-stall behaviour, and can be grave. The Fokker 28 and F100 had stick pushers that acted upon detecting a high angle of attack, making it pretty much impossible to keep the columns at the aft position. The reason for this is the reversal of the $C_M$ - $\alpha$ slope of T-tails. The aeroplane is aerodynamically stable when the $C_M$ - $\alpha$ slope is negative, such as in cases B and C. For configuration A, the slope becomes positive after the stall point, meaning that the nose wants to pitch further up after reaching the stall - not a good situation. The stall speed must be demonstrated during certification, and safe recovery from a stall is a requirement. A stick pusher prevents the aeroplane from entering the deep stall area.
CommonCrawl
M D Sanchez-Nino and A Ortiz. Differential effects of oral and intravenous L-carnitine on serum lipids: is the microbiota the answer?. Clinical Kidney Journal 7(5):437–441, September 2014. Marco Onofrj, Fausta Ciccocioppo, Sara Varanese, Antonio Muzio, Menotti Calvani, Santina Chiechio, Maurizio Osio and Astrid Thomas. Acetyl-L-carnitine: from a biological curiosity to a drug for the peripheral nervous system and beyond.. Expert review of neurotherapeutics 13(8):925–36, August 2013. Abstract Acetyl-L-carnitine (ALC) is a molecule derived from acetylation of carnitine in the mitochondria. Carnitine acetylation enables the function of CoA and facilitates elimination of oxidative products. Beyond this metabolic activity, ALC provides acetyl groups for acetylcholine synthesis, exerts a cholinergic effect and optimizes the balance of energy processes. Acetylcarnitine supplementation induces neuroprotective, neurotrophic and analgesic effects in the peripheral nervous system. In the recent studies, ALC, by acting as a donor of acetyl groups to NF-kb p65/RelA, enhanced the transcription of the GRM2 gene encoding the mGLU2 receptors, inducing long-term upregulation of the mGluR2, evidencing therefore that its long-term analgesic effects are dependent on epigenetic modifications. Several studies, including double-blind, placebo-controlled, parallel group studies and few open studies showed the effect of ALC in diseases characterized by neuropathies and neuropathic pain: the studies included diabetic neuropathy, HIV and antiretroviral therapy-induced neuropathies, neuropathies due to compression and chemotherapeutic agents. Double-blinded studies involved 1773 patients. Statistical evaluations evidenced reduction of pain, improvements of nerve function and trophism. In conclusion, ALC represents a consistent therapeutic option for peripheral neuropathies, and its complex effects, neurotrophic and analgesic, based on epigenetic mechanism, open new pathways in the study of peripheral nerve disease management. Ettore Beghi, Elisabetta Pupillo, Virginio Bonito, Paolo Buzzi, Claudia Caponnetto, Adriano Chiò, Massimo Corbo, Fabio Giannini, Maurizio Inghilleri, Vincenzo La Bella, Giancarlo Logroscino, Lorenzo Lorusso, Christian Lunetta, Letizia Mazzini, Paolo Messina, Gabriele Mora, Michele Perini, Maria Lidia Quadrelli, Vincenzo Silani, Isabella L Simone and Lucio Tremolizzo. Randomized double-blind placebo-controlled trial of acetyl-L-carnitine for ALS.. Amyotrophic lateral sclerosis & frontotemporal degeneration 14(5-6):397–405, 2013. Abstract Our objective was to assess the effects of acetyl-L-carnitine (ALC) with riluzole on disability and mortality of amyotrophic lateral sclerosis (ALS). Definite/probable ALS patients, 40-70 years of age, duration 6-24 months, self-sufficient (i.e. able to swallow, cut food/handle utensils, and walk), and with forced vital capacity (FVC) > 80% entered a pilot double-blind, placebo-controlled, parallel group trial and were followed for 48 weeks. ALC or placebo 3 g/day was added to riluzole 100 mg/day. Primary endpoint: number of patients no longer self-sufficient. Secondary endpoints: changes in ALSFRS-R, MRC, FVC and McGill Quality of Life (QoL) scores. Analysis was made in the intention-to-treat (ITT) and per-protocol (PP) population, completers and completers/compliers (i.e. taking > 75% of study drug). Forty-two patients received ALC and 40 placebo. 
In the ITT population, 34 (80.9%) patients receiving ALC and 39 (97.5%) receiving placebo became non-self-sufficient (p = 0.0296). In the PP analysis, percentages were 84.4 and 100.0% (p = 0.0538), respectively. Mean ALSFRS-R scores at 48 weeks were 33.6 (SD 10.4) and 27.6 (9.9) (p = 0.0388), respectively, and mean FVC scores 90.3 (32.6) and 58.6 (31.2) (p = 0.0158), respectively. Median survival was 45 months (ALC) and 22 months (placebo) (p = 0.0176). MRC, QoL and adverse events were similar. In conclusion, ALC may be effective, well-tolerated and safe in ALS. A pivotal phase III trial is needed. M Sun, F Qian, W Shen, C Tian, J Hao, L Sun and J Liu. Mitochondrial nutrients stimulate performance and mitochondrial biogenesis in exhaustively exercised rats.. Scandinavian journal of medicine & science in sports 22(6):764–75, December 2012. Abstract The aim of this study was to investigate the effects of a combination of nutrients on physical performance, oxidative stress and mitochondrial biogenesis in rats subjected to exhaustive exercise. Rats were divided into sedentary control (SC), exhaustive exercise (EC) and exhaustive exercise with nutrient supplementation (EN). The nutrients include (mg/kg/day): R-$\alpha$-lipoic acid 50, acetyl-L-carnitine 100, biotin 0.1, nicotinamide 15, riboflavin 6, pyridoxine 6, creatine 50, CoQ10 5, resveratrol 5 and taurine 100. Examination of running distances over the 4-week period revealed that EN rats ran significantly longer throughout the entire duration of the exhaustive exercise period compared with the EC rats. Nutrient supplementation significantly inhibited the increase in activities of alanine transaminase, lactate dehydrogenase and creatine kinase, reversed increases in malondialdehyde, inhibited decreases in glutathione S-transferase and total antioxidant capacity in plasma, and suppressed the elevation of reactive oxygen species and apoptosis in splenic lymphocytes. Nutrient supplementation increased the protein expression of mitochondrial complexes I, II and III, mtDNA number and transcription factors involved in mitochondrial biogenesis and fusion in skeletal muscle. These findings suggest that mitochondrial nutrient supplementation can reduce exhaustive exercise-induced oxidative damage and mitochondrial dysfunction, thus leading to enhancement of physical performance and of fatigue recovery. Hector H Palacios, Bharat B Yendluri, Kalpana Parvathaneni, Vagif B Shadlinski, Mark E Obrenovich, Jerzy Leszek, Dmitry Gokhman, Kazimierz Gąsiorowski, Valentin Bragin and Gjumrakch Aliev. Mitochondrion-specific antioxidants as drug treatments for Alzheimer disease.. CNS & neurological disorders drug targets 10(2):149–62, 2011. Abstract Age-related dementias such as Alzheimer disease (AD) have been linked to vascular disorders like hypertension, diabetes and atherosclerosis. These risk factors cause ischemia, inflammation, oxidative damage and consequently reperfusion, which is largely due to reactive oxygen species (ROS) that are believed to induce mitochondrial damage. At higher concentrations, ROS can cause cell injury and death which occurs during the aging process, where oxidative stress is incremented due to an accelerated generation of ROS and a gradual decline in cellular antioxidant defense mechanisms. 
Neuronal mitochondria are especially vulnerable to oxidative stress due to their role in energy supply and use, causing a cascade of debilitating factors such as the production of giant and/or vulnerable young mitochondrion who's DNA has been compromised. Therefore, mitochondria specific antioxidants such as acetyl-L-carnitine and R-alphalipoic acid seem to be potential treatments for AD. They target the factors that damage mitochondria and reverse its effect, thus eliminating the imbalance seen in energy production and amyloid beta oxidation and making these antioxidants very powerful alternate strategies for the treatment of AD. G Nagesh Babu, Alok Kumar and Ram Lakhan Singh. Chronic pretreatment with acetyl-L-carnitine and ±DL-$\alpha$-lipoic acid protects against acute glutamate-induced neurotoxicity in rat brain by altering mitochondrial function.. Neurotoxicity research 19(2):319–29, 2011. Abstract Cellular oxidative stress and energy failure were shown to be involved in Glutamate (L-Glu) neurotoxicity, whereas, acetyl-L-carnitine (ALCAR) and ±DL-$\alpha$-lipoic acid (LA) are known to be key players in the mitochondrial energy production. To evaluate the effects of the above antioxidants, adult rats were pretreated with ALCAR (100 mg/kg i.p for 21 days) and both ALCAR and LA (100 mg/kg i.p + 50 mg/kg i.p for 21 days), before stereotactically administering L-Glu bolus (1 $\mu$mole/1 $\mu$l) in the cerebral cortex. Results showed that acute L-Glu increased ROS (P < 0.001), LPO (P < 0.001), Ca(2+) (P < 0.001), TNF-$\alpha$ (P < 0.001), IFN-$\gamma$ (P < 0.001), NO (P < 0.001) levels and mRNA expression of Caspase-3, Casapase-9, iNOS, and nNOS genes with respect to saline-injected control group. Key antioxidant parameters such as SOD, CAT, GSH, GR along with mitochondrial transmembrane potential ($¶si$∆m) were decreased (P < 0.05), while ALCAR pretreatment prevented these effects by significantly inhibiting ROS (P < 0.001), LPO (P < 0.001), Ca(2+) (P < 0.05), TNF-$\alpha$ (P < 0.05), IFN-$\gamma$ (P < 0.001), NO (P < 0.01) levels and expression of the above genes. This chronic pretreatment of ALCAR also increased SOD, CAT, GSH, GR, and $¶si$∆m (P < 0.0.01, P < 0.0.01, P < 0.05, P < 0.05, and P < 0.001, respectively) with respect to L: -Glu group. The addition of LA to ALCAR resulted in further increases in CAT (P < 0.05), GSH (P < 0.01), GR (P < 0.05), $¶si$∆m (P < 0.05) and additional decreases in ROS (P < 0.001), LPO (P < 0.05), Ca(2+) (P < 0.05), TNF-$\alpha$ (P < 0.05) and mRNA expression of iNOS and nNOS genes with respect to ALCAR group. Hence, this "one-two punch" of ALCAR + LA may help in ameliorating the deleterious cellular events that occur after L-Glu. P M Abdul Muneer, Saleena Alikunju, Adam M Szlachetka, Aaron J Mercer and James Haorah. Ethanol impairs glucose uptake by human astrocytes and neurons: protective effects of acetyl-L-carnitine.. International journal of physiology, pathophysiology and pharmacology 3(1):48–56, 2011. Abstract Alcohol consumption causes neurocognitive deficits, neuronal injury, and neurodegeneration. At the cellular level, alcohol abuse causes oxidative damage to mitochondria and cellular proteins and interlink with the progression of neuroinflammation and neurological disorders. We previously reported that alcohol inhibits glucose transport across the blood-brain barrier (BBB), leading to BBB dysfunction and neurodegeneration. 
In this study, we hypothesized that ethanol (EtOH)-mediated disruption in glucose uptake would deprive energy for human astrocytes and neurons inducing neurotoxicity and neuronal degeneration. EtOH may also have a direct effect on glucose uptake in neurons and astrocytes, which has not been previously described. Our results indicate that ethanol exposure decreases the uptake of D-(2-(3)H)-glucose by human astrocytes and neurons. Inhibition of glucose uptake correlates with a reduction in glucose transporter protein expression (GLUT1 in astrocytes and GLUT3 in neurons). Acetyl-L-carnitine (ALC), a neuroprotective agent, suppresses the effects of alcohol on glucose uptake and GLUT levels, thus reducing neurotoxicity and neuronal degeneration. These findings suggest that deprivation of glucose in brain cells contributes to neurotoxicity in alcohol abusers, and highlights ALC as a potential therapeutic agent to prevent the deleterious health conditions caused by alcohol abuse. Hongyu Zhang, Haiqun Jia, Jianghai Liu, Ni Ao, Bing Yan, Weili Shen, Xuemin Wang, Xin Li, Cheng Luo and Jiankang Liu. Combined R-alpha-lipoic acid and acetyl-L-carnitine exerts efficient preventative effects in a cellular model of Parkinson's disease.. Journal of cellular and molecular medicine 14(1-2):215–25, January 2010. Abstract Mitochondrial dysfunction and oxidative damage are highly involved in the pathogenesis of Parkinson's disease (PD). Some mitochondrial antioxidants/nutrients that can improve mitochondrial function and/or attenuate oxidative damage have been implicated in PD therapy. However, few studies have evaluated the preventative effects of a combination of mitochondrial antioxidants/nutrients against PD, and even fewer have sought to optimize the doses of the combined agents. The present study examined the preventative effects of two mitochondrial antioxidant/nutrients, R-alpha-lipoic acid (LA) and acetyl-L-carnitine (ALC), in a chronic rotenone-induced cellular model of PD. We demonstrated that 4-week pretreatment with LA and/or ALC effectively protected SK-N-MC human neuroblastoma cells against rotenone-induced mitochondrial dysfunction, oxidative damage and accumulation of alpha-synuclein and ubiquitin. Most notably, we found that when combined, LA and ALC worked at 100-1000-fold lower concentrations than they did individually. We also found that pretreatment with combined LA and ALC increased mitochondrial biogenesis and decreased production of reactive oxygen species through the up-regulation of the peroxisome proliferator-activated receptor-gamma coactivator 1alpha as a possible underlying mechanism. This study provides important evidence that combining mitochondrial antioxidant/nutrients at optimal doses might be an effective and safe prevention strategy for PD. Giovanna Traina, Rodolfo Bernardi, Milena Rizzo, Menotti Calvani, Mauro Durante and Marcello Brunelli. Acetyl-L-carnitine up-regulates expression of voltage-dependent anion channel in the rat brain.. Neurochemistry international 48(8):673–8, June 2006. Abstract Acetyl-L-carnitine (ALC) exerts unique neuroprotective, neuromodulatory, and neurotrophic properties, which play an important role in counteracting various pathological processes, and have antioxidative properties, protecting cells against lipid peroxidation. 
In this study, suppression subtractive hybridization (SSH) method was applied for the generation of subtracted cDNA libraries and the subsequent identification of differentially expressed transcripts after treatment of rats with ALC. The technique generates an equalized representation of differentially expressed genes irrespective of their relative abundance and it is based on the construction of forward and reverse cDNA libraries that allow the identification of the genes that are regulated after ALC treatment. In the present paper, we report the identification of the gene of mitochondrial voltage-dependent anion channel (VDAC) protein which is positively modulated by the ALC treatment. VDAC is a small pore-forming protein of the mitochondrial outer membrane. It represents an interesting tool for Ca(2+) homeostasis, and it plays a central role in apoptosis. In addition, VDAC seems to have a relevant role in the synaptic plasticity. Francis B Stephens, Dumitru Constantin-Teodosiu, David Laithwaite, Elizabeth J Simpson and Paul L Greenhaff. Insulin stimulates L-carnitine accumulation in human skeletal muscle.. FASEB journal : official publication of the Federation of American Societies for Experimental Biology 20(2):377–9, March 2006. Abstract Increasing skeletal muscle carnitine content may alleviate the decline in muscle fat oxidation seen during intense exercise. Studies to date, however, have failed to increase muscle carnitine content, in healthy humans, by dietary or intravenous L-carnitine administration. We hypothesized that insulin could augment Na+-dependent skeletal muscle carnitine transport. On two randomized visits, eight healthy men underwent 5 h of intravenous L-carnitine infusion with serum insulin maintained at fasting (7.4+/-0.4 mIU*l(-1)) or physiologically high (149.2+/-6.9 mIU*l(-1)) concentrations. The combination of hypercarnitinemia (approximately 500 micromol*l(-1)) and hyperinsulinemia increased muscle total carnitine (TC) content from 22.0 +/- 0.9 to 24.7 +/- 1.4 mmol*(kg dm)(-1) (P<0.05) and was associated with a 2.3 +/- 0.3-fold increase in carnitine transporter protein (OCTN2) mRNA expression (P<0.05). Hypercarnitinemia in the presence of a fasting insulin concentration had no effect on either of these parameters. This study demonstrates that insulin can acutely increase muscle TC content in humans during hypercarnitinemia, which is associated with an increase in OCTN2 transcription. These novel findings may be of importance to the regulation of muscle fat oxidation during exercise, particularly in obesity and type 2 diabetes where it is known to be impaired. Hafiz Mohmmad Abdul, Vittorio Calabrese, Menotti Calvani and Allan D Butterfield. Acetyl-L-carnitine-induced up-regulation of heat shock proteins protects cortical neurons against amyloid-beta peptide 1-42-mediated oxidative stress and neurotoxicity: implications for Alzheimer's disease.. Journal of neuroscience research 84(2):398–408, 2006. Abstract Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by loss of memory and cognition and by senile plaques and neurofibrillary tangles in brain. Amyloid-beta peptide, particularly the 42-amino-acid peptide (Abeta(1-42)), is a principal component of senile plaques and is thought to be central to the pathogenesis of the disease. The AD brain is under significant oxidative stress, and Abeta(1-42) peptide is known to cause oxidative stress in vitro and in vivo. 
Acetyl-L-carnitine (ALCAR) is an endogenous mitochondrial membrane compound that helps to maintain mitochondrial bioenergetics and lowers the increased oxidative stress associated with aging. Glutathione (GSH) is an important endogenous antioxidant, and its levels have been shown to decrease with aging. Administration of ALCAR increases cellular levels of GSH in rat astrocytes. In the current study, we investigated whether ALCAR plays a protective role in cortical neuronal cells against Abeta(1-42)-mediated oxidative stress and neurotoxicity. Decreased cell survival in neuronal cultures treated with Abeta(1-42) correlated with an increase in protein oxidation (protein carbonyl, 3-nitrotyrosine) and lipid peroxidation (4-hydroxy-2-nonenal) formation. Pretreatment of primary cortical neuronal cultures with ALCAR significantly attenuated Abeta(1-42)-induced cytotoxicity, protein oxidation, lipid peroxidation, and apoptosis in a dose-dependent manner. Addition of ALCAR to neurons also led to an elevated cellular GSH and heat shock proteins (HSPs) levels compared with untreated control cells. Our results suggest that ALCAR exerts protective effects against Abeta(1-42) toxicity and oxidative stress in part by up-regulating the levels of GSH and HSPs. This evidence supports the pharmacological potential of acetyl carnitine in the management of Abeta(1-42)-induced oxidative stress and neurotoxicity. Therefore, ALCAR may be useful as a possible therapeutic strategy for patients with AD. P Bigini, S Larini, C Pasquali, V Muzio and T Mennini. Acetyl-L-carnitine shows neuroprotective and neurotrophic activity in primary culture of rat embryo motoneurons.. Neuroscience letters 329(3):334–8, September 2002. Abstract We evaluated the role of acetyl-L-carnitine (ALCAR) in protecting primary motoneuron cultures exposed to excitotoxic agents or serum-brain derived neurotrophic factor (BDNF) deprived. To exclude that ALCAR works as a metabolic source, we compared its effects with those of L-carnitine (L-CAR), that seems to have no neurotrophic effect. A concentration of 10 mM ALCAR, but not L-CAR, significantly reduced the toxic effect of 50 microM N-methyl-D-aspartate (NMDA, % viability: NMDA 45.4+/-2.80, NMDA+ALCAR 90.8+/-11.8; P<0.01) and of 5 microM kainate in cultured motoneurons (% viability: kainate 40.66+/-10.73; kainate+ALCAR 63.80+/-13.88; P<0.05). The effect was due to a shift to the right of the dose-response curve for kainate (EC50 for kainate 5.99+/-1.012 microM; kainate+ALCAR 8.62+/-1.13 microM; P<0.05). ALCAR, but not L-CAR, significantly protected against BDNF and serum-deprivation reducing the apoptotic cell death (% viability respect to control: without BDNF/serum 61.8+/-13.3: without BDNF/serum+ALCAR 111.8+/-13.9; P<0.01). Immunocytochemistry showed an increase in choline acethyltransferase and tyrosine kinaseB receptors in motoneurons treated with ALCAR but not with L-CAR. These results suggest that ALCAR treatment improves the motoneurons activity, acting as a neurotrophic factor. K V Rao and I A Qureshi. Reduction in the MK-801 binding sites of the NMDA sub-type of glutamate receptor in a mouse model of congenital hyperammonemia: prevention by acetyl-L-carnitine.. Neuropharmacology 38(3):383–94, March 1999. Abstract Our earlier studies on the pharmacotherapeutic effects of acetyl-L-carnitine (ALCAR), in sparse-fur (spf) mutant mice with X linked ornithine transcarbamylase deficiency, have shown a restoration of cerebral ATP, depleted by congenital hyperammonemia and hyperglutaminemia. 
The reduced cortical glutamate and increased quinolinate may cause a down-regulation of the N-methyl-D-aspartate (NMDA) receptors, observed by us in adult spf mice. We have now studied the kinetics of [3H]-MK-801 binding to NMDA receptors in spf mice of different ages to see the effect of chronic hyperammonemia on the glutamate neurotransmission. We have also studied the Ca2+-dependent and independent (4-aminopyridine (AP) and veratridine-mediated) release of glutamate and the uptake of [3H]-glutamate in synaptosomes isolated from mutant spf mice and normal CD-1 controls. All these studies were done with and without ALCAR treatment (4 mmol/kg wt i.p. daily for 2 weeks), to see if its effect on ATP repletion could correct the glutamate neurotransmitter abnormalities. Our results indicate a normal MK-801 binding in 12-day-old spf mice but a significant reduction immediately after weaning (21 day), continuing into the adult stage. The Ca2+-independent release of endogenous glutamate from synaptosomes was significantly elevated at 35 days, while the uptake of glutamate into synaptosomes was significantly reduced in spf mice. ALCAR treatment significantly enhanced the MK-801 binding, neutralized the increased glutamate release and restored the glutamate uptake into synaptosomes of spf mice. These studies point out that: (a) the developmental abnormalities of the NMDA sub-type of glutamate receptor in spf mice could be due to the effect of sustained hyperammonemia, causing a persistent release of excess glutamate and inhibition of the ATP-dependent glutamate transport, (b) the modulatory effects of ALCAR on the NMDA binding sites could be through a repletion of ATP, required by the transporters to efficiently remove extracellular glutamate. M Calvani and E Arrigoni-Martelli. Attenuation by acetyl-L-carnitine of neurological damage and biochemical derangement following brain ischemia and reperfusion.. International journal of tissue reactions 21(1):1–6, January 1999. Abstract Alterations in brain metabolism after ischemia and reperfusion are described herein. Several roles played by carnitine and acetylcarnitine can be of particular relevance in counteracting these brain metabolism alterations. The effects of acetylcarnitine in several experimental models of brain ischemia in rats are described. The data obtained show that acetylcarnitine can have significant clinical neuroprotective effects when administered shortly after the onset of focal or global cerebral ischemia. In the canine cardiac arrest model, acetylcarnitine improved the postischemic neurological outcome and tissue levels of lactate and pyruvate were normalized. A trend toward reversal of pyruvate dehydrogenase inhibition in acetylcarnitine-treated dogs was also observed. The immediate postischemic administration of acetylcarnitine prevents free radical-mediated protein oxidation in the frontal cortex of dogs submitted to cardiac arrest and resuscitation. The transfer of the acetyl group to coenzyme A (CoA) to form acetyl-CoA as the primary source of energy is a plausible mechanism of action of acetylcarnitine. J Nakamura, N Koh, F Sakakibara, Y Hamada, T Hara, H Sasaki, S Chaya, T Komori, E Nakashima, K Naruse, K Kato, N Takeuchi, Y Kasuya and N Hotta. Polyol pathway hyperactivity is closely related to carnitine deficiency in the pathogenesis of diabetic neuropathy of streptozotocin-diabetic rats.. The Journal of pharmacology and experimental therapeutics 287(3):897–902, 1998. 
Abstract To investigate the relationship between polyol pathway hyperactivity and altered carnitine metabolism in the pathogenesis of diabetic neuropathy, the effects of an aldose reductase inhibitor, [5-(3-thienyl) tetrazol-1-yl]acetic acid (TAT), and a carnitine analog, acetyl-L-carnitine (ALC), on neural functions and biochemistry and hemodynamic factors were compared in streptozotocin-diabetic rats. Significantly delayed motor nerve conduction velocity, decreased R-R interval variation, reduced sciatic nerve blood flow and decreased erythrocyte 2, 3-diphosphoglycerate concentrations in diabetic rats were all ameliorated by treatment with TAT (administered with rat chow containing 0.05% TAT, approximately 50 mg/kg/day) or ALC (by gavage, 300 mg/kg/day) for 4 weeks. Platelet hyperaggregation activity in diabetic rats was diminished by TAT but not by ALC. TAT decreased sorbitol accumulation and prevented not only myo-inositol depletion but also free-carnitine deficiency in diabetic nerves. On the other hand, ALC also increased the myo-inositol as well as the free-carnitine content without affecting the sorbitol content. These observations suggest that there is a close relationship between increased polyol pathway activity and carnitine deficiency in the development of diabetic neuropathy and that an aldose reductase inhibitor, TAT, and a carnitine analog, ALC, have therapeutic potential for the treatment of diabetic neuropathy. E Fernandez, R Pallini, L Lauretti, F La Marca, A Scogna and G F Rossi. Motonuclear changes after cranial nerve injury and regeneration.. Archives italiennes de biologie 135(4):343–51, September 1997. Abstract Little is known about the mechanisms at play in nerve regeneration after nerve injury. Personal studies are reported regarding motonuclear changes after regeneration of injured cranial nerves, in particular of the facial and oculomotor nerves, as well as the influence that the natural molecule acetyl-L-carnitine (ALC) has on post-axotomy cranial nerve motoneuron degeneration after facial and vagus nerve lesions. Adult and newborn animal models were used. Massive motoneuron response after nerve section and reconstruction was observed in the motonuclei of all nerves studied. ALC showed to have significant neuroprotective effects on the degeneration of axotomized motoneurons. Complex quantitative, morphological and somatotopic nuclear changes occurred that sustain new hypotheses regarding the capacities of motoneurons to regenerate and the possibilities of new neuron proliferation. The particularities of such observations are described and discussed. K Schönheit, L Gille and H Nohl. Effect of alpha-lipoic acid and dihydrolipoic acid on ischemia/reperfusion injury of the heart and heart mitochondria.. Biochimica et biophysica acta 1271(2-3):335–42, 1995. Abstract The aim of the present study was to evaluate a possible interference of alpha-lipoic acid (LA) or its reduced form (dithiol dihydrolipoic acid = DHLA) in the cardiac ischemia/reperfusion injury both at the level of the intact organ and at the subcellular level of mitochondria. In order to follow the effect of LA on the ischemia/reperfusion injury of the heart the isolated perfused organ was subjected to total global ischemia and reperfusion in the presence and absence of different concentrations of LA. Treatment with 0.5 microM LA improved the recovery of hemodynamic parameters; electrophysiological parameters were not influenced. 
However, application of 10 microM LA to rat hearts further impaired the recovery of hemodynamic functions and prolonged the duration of severe rhythm disturbances in comparison to reperfusion of control hearts. Treatment of isolated mitochondria with any concentration of DHLA could not prevent the impairment of respiratory-linked energy conservation caused by the exposure of mitochondria to 'reperfusion' conditions. However, DHLA was effective in decreasing the formation and the existence of mitochondrial superoxide radicals (O2.-). Apart from its direct O(2.-)-scavenging activities DHLA was also found to control mitochondrial O2.- formation indirectly by regulating redox-cycling ubiquinone. It is suggested that impairment of this mitochondrial O2.- generator mitigates postischemic oxidative stress which in turn reduces damage to hemodynamic heart function. Choline acetyltransferase activities in single spinal motor neurons from patients with amyotrophic lateral sclerosis.. Journal of neurochemistry 52(2):636–40, March 1989. Abstract Activities of choline acetyltransferase (ChAT) were microassayed in individual cell bodies of motor neurons, isolated from freeze-dried sections after autopsy of lumbar spinal cords from four patients with sporadic amyotrophic lateral sclerosis (ALS) and four control patients with nonneurological diseases. Numerous large neurons were found in the anterior horn at the early degeneration stage of ALS, but the cell bodies atrophied and decreased in number at the late advanced stage. The small, atrophied neurons were very fragile and were easily destroyed during the isolation procedure with a microknife. The average activity, expressed on a dry weight basis, of 58 ALS neurons was lower than that of 67 control neurons. The large, well-preserved neurons at the early nonadvanced stage had markedly lower ChAT activities than control neurons. The specific activity gradually increased with the progress of atrophy but did not return to the control level. P Harper, C E Elwin and G Cederblad. Pharmacokinetics of intravenous and oral bolus doses of L-carnitine in healthy subjects.. European journal of clinical pharmacology 35(5):555–62, January 1988. Abstract The pharmacokinetics of single intravenous and oral doses of L-carnitine 2 g and 6 g has been investigated in 6 healthy subjects on a low carnitine diet. Carnitine was more rapidly eliminated from plasma after the higher dose. Comparing the 2-g and 6-g doses, the t1/2 beta of the elimination phase (beta) was 6.5 h vs 3.9 h, the elimination constant was 0.40 vs 0.50 h-1 and the plasma carnitine clearance was 5.4 vs 6.1 1 x h-1 (p less than 0.025), thus showing dose-related elimination. Saturable kinetics was not found in the range of doses given. The apparent volumes of distribution after the two doses were not significantly different and they were of the same order as the total body water. Urinary recoveries after the 2-g and 6-g doses were 70% and 82% during the first 24 h, respectively. Following the two oral dosing, there was no significant difference in AUCs of plasma carnitine. Urinary recoveries were 8% and 4% for the 2-g and 6-g doses during the first 24 h. The oral bioavailability of the 2-g dose was 16% and of the 6 h dose 5%. The results suggest that the mucosal absorption of carnitine is already saturated at the 2-g dose. S Di Donato, F E Frerman, M Rimoldi, P Rinaldo, F Taroni and U N Wiesmann. Systemic carnitine deficiency due to lack of electron transfer flavoprotein:ubiquinone oxidoreductase.. 
Neurology 36(7):957–63, 1986. Abstract A child with myopathy and systemic carnitine deficiency died at age 8 years in an acute metabolic attack. He had glutaric aciduria type II, and his cultured fibroblasts contained normal activity of four different acyl CoA dehydrogenases, but there was deficiency of electron transfer flavoprotein:ubiquinone oxidoreductase (ETF-QO). This enzyme is thought to reduce coenzyme Q in the respiratory chain, funneling reducing equivalents from seven flavoproteins in the beta-oxidation of acyl CoAs. There was massive urinary excretion of the short-chain acylcarnitines that accumulated in mitochondria as a result of the ETF-QO defect. Carnitine therefore acts as a buffer for excessive accumulation of intramitochondrial acyl CoAs, and defective beta-oxidation can cause carnitine insufficiency. G Taglialatela, D Navarra, R Cruciani, M T Ramacci, G S Alemà and L Angelucci. Acetyl-L-carnitine treatment increases nerve growth factor levels and choline acetyltransferase activity in the central nervous system of aged rats.. Experimental gerontology 29(1):55–66. Abstract The hypothesis that some neurodegenerative events associated with ageing of the central nervous system (CNS) may be due to a lack of neurotrophic support to neurons is suggestive of a possible reparative pharmacological strategy intended to enhance the activity of endogenous neurotrophic agents. Here we report that treatment with acetyl-l-carnitine (ALCAR), a substance which has been shown to prevent some impairments of the aged CNS in experimental animals as well as in patients, is able to increase the levels and utilization of nerve growth factor (NGF) in the CNS of old rats. The stimulation of NGF levels in the CNS can be attained when ALCAR is given either for long or short periods to senescent animals of various ages, thus indicating a direct effect of the substance on the NGF system which is independent of the actual degenerative stage of the neurons. Furthermore, long-term treatment with ALCAR completely prevents the loss of choline acetyltransferase (ChAT) activity in the CNS of aged rats, suggesting that ALCAR may rescue cholinergic pathways from age-associated degeneration due to lack of retrogradely transported NGF.
CommonCrawl
Anyone who has been a part of a Computer Science program at a university will probably have dabbled with Fibonacci in their first semester of school. The sequence is usually introduced through the classic rabbit-breeding puzzle (assume no rabbits die), and recognizing and exploiting its pattern can be a very powerful tool in writing algorithms. Just so that we get some practice with sequence notation, a sequence can be defined either as an explicit function of the index you're looking at, or as a recursive definition in which each term is built from the terms before it. For the Fibonacci sequence the recursive view is the natural one, and it translates directly into a loop: every loop iteration we are summing the previous two sequence values, then pushing our values up, in a sense. When our loop has reached our desired fifteenth index, we can return whatever the new sum value is.
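A minimal Python sketch of that loop-based approach (the function name and the 0, 1 starting convention are our choices; some treatments start the sequence at 1, 1 instead):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 0, 1, 1, 2, ..."""
    sequence = []
    previous, current = 0, 1
    for _ in range(n):
        sequence.append(previous)
        # each iteration sums the previous two values, then shifts forward
        previous, current = current, previous + current
    return sequence

print(fibonacci(16))        # [0, 1, 1, 2, ..., 377, 610]
print(fibonacci(16)[15])    # the value at index 15 is 610
```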
To calculate the Fibonacci sequence up to the 5th term, start by setting up a table with 2 columns and writing 1st, 2nd, 3rd, 4th, and 5th in the left column. Next, enter 1 in the first row of the right-hand column, then add 1 and 0 to get 1 for the next row, and keep adding the two previous entries to fill in the rest of the column. Codility Fibonacci Solution. The Fibonacci sequence is defined by the following recursive formula: F(0) = 0, F(1) = 1, F(N) = F(N−1) + F(N−2) for N ≥ 2. Write a function: int power_fib(int N, int M);. The Fibonacci numbers form a sequence of integers defined recursively in the following way. The first two numbers in the Fibonacci sequence are 0 and 1, and each subsequent number is the sum of the two preceding numbers.
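For comparison, here is a direct translation of the recursive formula F(0) = 0, F(1) = 1, F(N) = F(N−1) + F(N−2) into code; the memoization is our addition so the recursion stays linear rather than exponential (this is not the Codility power_fib task itself, whose exact statement isn't given here):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Recursive definition of the Fibonacci numbers, memoized."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```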
CommonCrawl
TP9 submission deadline is Thursday, 13/04/2017 at 23:59. The TP on Friday 14/04 will start at 13:30 (instead of 15:15). Also, more control points mean a higher-degree polynomial, which quickly becomes impractical. Today, we'll be dealing with one possibility for overcoming this problem: Bézier splines. Informally, a spline is a collection of curves connected with some degree of smoothness. There is more than one way to define what it means for two curves to be smoothly connected. The most commonly used is $\mathcal C^k$ smoothness. A collection of $\mathcal C^k$-smooth splines, row-wise interpolating the same data, left to right $k=0,1,2$. meaning the two curves agree up to their $k$-th derivatives. Let's look at a particular case when $\mathbf x_0$ and $\mathbf x_1$ are Bézier curves. In fact, the derivative is also a Bézier curve, of degree $n-1$. This gives us an iterative way to compute all of $\mathbf b_1^i$. You will need to manually fix $\mathbf b_1^0$. Try the midpoint $0.5(\mathbf p_0 + \mathbf p_1)$; later, you can change its position to see how it affects the computed spline. Implement the computation of the control points of a quadratic interpolating Bézier spline for a given sequence of points $\mathbf p_i$ (function ComputeSplineC1). Evaluate and visualise for the available datasets. Try changing the position of $\mathbf b_1^0$. What happens? Splines, especially cubic splines, are very common in the world of digital geometry. The algorithm you just implemented (hopefully) works well, but it has one major drawback: it requires setting the first control point $\mathbf b_1^0$ by hand. That's no fun! That is why, to compute an interpolating cubic spline in this part, we will adopt a slightly different approach – by solving a linear system. We'll do the math, crunch in the data, and let the solver do the work. To do that, take a situation much like the one before: given a sequence of points $\mathbf p_i, i=0,\dots,n$, find a $\mathcal C^2$ cubic spline (i.e. $n$ cubic curves) which interpolates these datapoints. This time, there will be two unknown interior control points for each curve, not one as in the quadratic case. Well, it's starting to look like a system, but all this indexing is confusing. So let's take a step back. Imagine we want to interpolate four points, i.e. we have three curves to compute. That's 10 unique control points in total. We'll denote those as $A,B,C,D,E,F,G,H,I,J$. (Phew.) The points to interpolate are $A,D,G,J$. Let's rewrite our conditions in terms of this notation. In the second, bonus part of today's TP, your task is to implement a $\mathcal C^2$ cubic spline as a solution of the above system. Implement the computation of the control points of a cubic interpolating Bézier spline for a given sequence of points $\mathbf p_i$ (function ComputeSplineC2). Evaluate and visualise for the available datasets. Even if you don't realize it, you're using Bézier splines every day; in fact, you're using them right now! Among other things, they are used in typography to represent fonts: TrueType uses quadratic Bézier splines, while PostScript uses cubic Bézier splines. This very page is in fact a collection of some 6000 Bézier splines.
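Going back to the first part of the TP: as a rough illustration of the iterative $\mathcal C^1$ construction described above, here is a small NumPy sketch. The function and variable names are ours, not those of the TP skeleton, and the first interior control point is fixed at the suggested midpoint:

```python
import numpy as np

def compute_spline_c1(points):
    """C1 quadratic interpolating Bezier spline.

    points: (n+1, d) array of points p_0..p_n to interpolate.
    Returns a list of (3, d) arrays, one per quadratic curve,
    with control points [b0, b1, b2], b0 = p_i and b2 = p_{i+1}.
    """
    p = np.asarray(points, dtype=float)
    curves = []
    b1 = 0.5 * (p[0] + p[1])          # free choice for the first interior point
    for i in range(len(p) - 1):
        curves.append(np.array([p[i], b1, p[i + 1]]))
        if i + 1 < len(p) - 1:
            # C1 join at p_{i+1}:  b2^i - b1^i = b1^{i+1} - b0^{i+1}
            b1 = 2.0 * p[i + 1] - b1
    return curves

def eval_quadratic(b, t):
    """Evaluate one quadratic Bezier curve at parameter t in [0, 1]."""
    b0, b1, b2 = b
    return (1 - t) ** 2 * b0 + 2 * t * (1 - t) * b1 + t ** 2 * b2
```

Evaluating each curve with eval_quadratic at a few dozen parameter values is enough to visualise the spline and to see how moving the first control point propagates down the whole chain.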
CommonCrawl
where $\alpha,\beta,\gamma$ are the angles which line $P_1P_2$ makes with the positive $x,y,z$ axes respectively and $d$ is given by the figure above. These are also valid if $l,m,n$ are replaced by $L,M,N$ respectively. where $a,b,c$ are the intercepts on the $x,y,z$ axes respectively. DISTANCE FROM POINT $(x_0,y_0,z_0)$ TO A PLANE $Ax+By+Cz+D=0$. where the sign is chosen so that the distance is nonnegative. where $p=$ perpendicular distance from $O$ to plane at $P$ and $\alpha, \beta, \gamma$ are angles between $OP$ and positive $x,y,z$ axes. where $(x, y, z)$ are old coordinates [i.e. coordinates relative to $xyz$ system],$(x', y', z')$ are new coordinates [relative to the $x'y'z'$ system] and $(x_0,y_0,z_0)$ are coordinates of the new origin $O'$ relative to the old $xyz$ coordinate system. where the origins of the $xyz$ and $x'y'z'$ systems are the same and $l_1,m_1,n_1; l_2,m_2,n_2; l_3,m_3,n_3$ are the direction cosines of the $x', y', z'$ axes relative to the $x, y, z$ axes respectively. where the $O'$ of $x'y'z'$ system has coordinates $(x_0,y_0,z_0)$ relative to the $xyz$ system and $l_1,m_1,n_1; l_2,m_2,n_2; l_3,m_3,n_3$ are the direction cosines of the $x' , y', z'$ axes relative to the $x, y, z$ axes respectively. A point $P$ can be located by cylindrical coordinates $(r, \theta, z)$ as well as rectangular coordinates $(x, y, z)$. A point $P$ can be located by spherical coordinates $(r, \theta, \phi)$ as well as rectangular coordinates $(x, y, z)$. where the sphere has center $(x_0,y_0,z_0)$ and radius $R$. where the sphere has center $(r_0,\theta_0,z_0)$ in cylindrical coordinates and radius $R$. where the sphere has center $(r_0,\theta_0,\phi_0)$ in spherical coordinates and radius $R$. where $a, b$ are semi axes of elliptic cross section. If $b = a$ it becomes a circular cylinder of radius $a$. Note orientation of axes in the figure.
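A few of the standard formulas that these captions refer to, stated in the usual conventions (which may differ in lettering or sign from the original figures): the distance from the point $(x_0,y_0,z_0)$ to the plane $Ax+By+Cz+D=0$ is

$$d = \frac{|Ax_0 + By_0 + Cz_0 + D|}{\sqrt{A^2 + B^2 + C^2}},$$

cylindrical coordinates $(r,\theta,z)$ are related to rectangular coordinates by

$$x = r\cos\theta, \qquad y = r\sin\theta, \qquad z = z,$$

and the sphere of radius $R$ with center $(x_0,y_0,z_0)$ in rectangular coordinates is

$$(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2 = R^2.$$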
CommonCrawl
*Please see this poster for the tentative title of each lecture. In a series of lectures I will present a new framework of Arakelov geometry proposed by Atsushi Moriwaki and myself, namely Arakelov geometry over an adelic curve. In our setting, an adelic curve refers to a field equipped with a family of absolute values (with possible repetitions) parametrised by a measure space. I will begin with an introduction to elementary geometry of numbers and Arakelov geometry of number fields and function fields, and then explain why the setting of adelic curves is natural and makes it possible to unify several frameworks in the literature which might otherwise appear unrelated: Arakelov geometry over number fields and function fields, arithmetic geometry over a finitely generated field (Moriwaki), height theory of M-fields (Gubler), the $\mathbb R$-filtration method (Chen), Siegel fields (Gaudron and Rémond). The construction of arithmetic objects and arithmetic invariants will be discussed, with an emphasis on the geometry of adelic vector bundles and its relationship with the classical geometry of numbers and Arakelov theory. The lectures will be concluded by an overview of further research topics and open problems. The following is a tentative list of the subjects of each lecture.
CommonCrawl
I can't decipher the difference between large and small gauge transformations, especially in their applications in physics. If one could engineer a simple physical theory that has such transformations, to illustrate the subtleties, that would be great. As usual I love really simple explanations, as I am not any sort of expert. Diverse responses welcome. I don't know if I posed the question correctly, but please feel free to edit it. The simplest example I can think of is to take as manifold the circle $M=S^1$, and as bundle the trivial circle bundle $X=S^1 \times S^1$, with bundle map $(s,t) \in X \mapsto s \in M$. A section $\sigma$ is written in this notation as $\sigma(s)=(s,f(s))$ for some continuous map $f \colon S^1 \to S^1$. Any continuous $f$ determines a section. The simplest example of a section $\sigma$ is to set $f$ constant. An example of a large gauge transformation is to replace our constant $f$ with the section $\tau$ given by $\tau(s)=(s,s)$. The replacement of $\sigma$ by $\tau$ (or of $\tau$ by $\sigma$) is large because $\tau$ wraps once around the circle while $\sigma$ wraps zero times around the circle; these winding numbers are topological invariants of sections, which is what makes the replacement of one by the other "large". If a configuration space is topologically non-trivial, one can distinguish between "small" gauge transformations, which can be smoothly deformed to the identity, and "large" gauge transformations, which cannot be smoothly deformed to the identity because they "wind" around the "handles" of the configuration space. An important example of a topologically non-trivial configuration space is that of a non-Abelian gauge theory. In the Abelian case the transverse gauge condition $\partial \cdot A=0$ is sufficient to remove the degeneracy. In the non-Abelian case, however, as Gribov showed, there are distinct transverse configurations $A\neq A^\prime$ such that $\partial \cdot A=\partial \cdot A^\prime=0$, and these configurations are connected with each other by "large" gauge transformations. This topological non-triviality of the configuration space has an important impact on QCD dynamics: http://arxiv.org/abs/1202.1491 (The Gribov problem and QCD dynamics, by N. Vandersickel and D. Zwanziger). Chern-Simons theory provides another example where "large" gauge transformations have an important role: http://arxiv.org/abs/hep-th/9902115 (Aspects of Chern-Simons Theory, by G. V. Dunne).
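A compact way to state the invariant that separates the two classes in the $S^1$ example above (a standard fact, not specific to the quoted references): a gauge transformation given by a map $f \colon S^1 \to U(1)$ has winding number

$$w(f) = \frac{1}{2\pi i}\oint_{S^1} \frac{f'(s)}{f(s)}\,ds \;\in\; \mathbb{Z},$$

and transformations with $w=0$ can be continuously deformed to the identity ("small"), while those with $w \neq 0$ cannot ("large").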
CommonCrawl
Definition: Let $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ be normed linear spaces. A linear operator $T : X \to Y$ is an Isometry from $X$ to $Y$ if $\| T(x) \|_Y = \| x \|_X$ for all $x \in X$. The following theorem tells us that if $T$ is an isometry from $X$ to $Y$ then $T$ is continuous and $T$ is injective. To see continuity, let $x_0 \in X$ and let $\epsilon > 0$ be given, and take $\delta = \epsilon$. If $\| x - x_0 \|_X < \delta$ then, by linearity and the isometry property, $\| T(x) - T(x_0) \|_Y = \| T(x - x_0) \|_Y = \| x - x_0 \|_X < \epsilon$. Therefore $T$ is continuous at $x_0$. Since $x_0 \in X$ was arbitrary, $T$ is continuous on all of $X$. Injectivity follows similarly: if $T(x) = T(x')$ then $0 = \| T(x - x') \|_Y = \| x - x' \|_X$, so $x = x'$.
CommonCrawl
Let $S$ be a set on which are defined two binary operations, defined on all the elements of $S \times S$, which we will denote as $\circ$ and $*$. Let $\circ$ be distributive over $*$. Then $*$ is a distributand of $\circ$, and $\circ$ is a distributor of $*$. For example, in the real numbers $\times$ is distributive over $+$, so $+$ is a distributand of $\times$ and $\times$ is a distributor of $+$. Results about distributive operations can be found here.
CommonCrawl
Abstract: For parabolic spatially discrete equations, we consider Green's functions, also known as heat kernels on lattices. We obtain their asymptotic expansions with respect to powers of time variable $t$ up to an arbitrary order and estimate the remainders uniformly on the whole lattice. The spatially discrete (difference) operators under consideration are finite-difference approximations of continuous strongly elliptic differential operators (with constant coefficients) of arbitrary even order in $\mathbb R^d$ with arbitrary $d\in\mathbb N$. This genericity, besides numerical and deterministic lattice-dynamics applications, allows one to obtain higher-order asymptotics of transition probability functions for continuous-time random walks on $\mathbb Z^d$ and other lattices.
CommonCrawl
As I did last semester, I had my students (all elementary education majors) do mini-research projects and present at a small poster session. As before, these posters were optional, although a student cannot get an A for the semester without doing one. I have 37 students, and 24 chose to do a poster. Unlike last semester, there was no paper that accompanied the poster. Also unlike last semester, I did not hold the poster session during class time. Instead, I integrated it into the campus-wide "Scholarship and Creativity Day." There were no classes this day—it is a day completely devoted to showing off students' creative projects. Note that and do not have repeating decimals; we say that they "terminate." How can you tell which fractions in Martian arithmetic will terminate? Consider extensions of our Last Cookie game (basically, a Nim game). What if you could remove either 2 or 3 cookies per round, but not 1? What if you could remove 1, 2, or 4? What about other combinations? There is a division algorithm called "Egyptian division." Explain (as we have been doing) why this gives the correct answer to a division problem. Learn about "casting out nines," a method that helps you determine if you did an arithmetic question correctly. Explain why this method works. There is a fast and easy way to determine if a number is divisible by 3 in base ten. Explain why this method works. There are not-so-fast and not-so-easy ways to determine if a number is divisible by 7 in base ten. Explain why one of these methods works. Research one algorithm from the Trachtenberg System, and explain why it is guaranteed to give the correct answer. Teach Mayan students how to use our number system. Come up with your own topic (talk to me about it first). By far, most students chose the "divisibility by 3" or "casting out nines" problems, and a reasonable number chose the "teach Mayan students about base ten" or "the Last Cookie" problems. Three others did a Trachtenberg problem, one student chose to explain "Egyptian Division," and two explained why a finger trick works for multiplication by nine. Many of the presentations were excellent, and many still had trouble understanding what the question was. This was expected. What was not expected was the number of students who participated: I expected about half the number I had. Finally, many professors from other departments approached me to compliment the poster session. In fact, the dean of the college referenced one of my students' posters in an address later that evening. I must remember to try to do this again in most of my classes. Poster presentations are a good idea. I am going to focus on the latter for now. I am teaching a linear algebra course, which is the first upper-level course that (most of) our mathematics majors take. The class is mostly sophomores, although there was a large number of freshmen in my class this semester. I had my students do research projects this semester. They were not required for everyone, although you needed to complete one if you were to get an A for the course. Also, I would very subjectively take the project into consideration for students who will not get an A. In all, 16 students out of 24 opted to do a project. Describe how a real world application works. For instance, describe how linear algebra is used when you Google something. Given an matrix with entries , what is the largest possible determinant? Given an matrix with entries , what is the largest possible eigenvalue?
Given an matrix with all entries equal to or , what is the largest possible determinant? Given an matrix with all entries equal to or , what is the largest possible eigenvalue? Given an and an eigenvector , can you determine an matrix that has as one of its eigenvectors? Suppose that Player A always puts a in an $n \times n$ matrix, and Player always puts a in the matrix. Player goes first, and then they alternate turns. Suppose that Player $B$ wants the matrix to have determinant zero, and player wants the determinant to be anything but zero. Who can always win the game, what should the player do to win, and why will it work? Write a computer program that solves systems of equations, finds kernels of matrices, etc. Create your own project. If there is some question or application that interests you, let me know. I will help you determine if it is at the right level for Math 239. I explicitly told the students that they were not expected to solve the problem. Rather, they had to be able to make progress on it. For instance, they did not need to find the exact largest determinant, but they should be able to find a lower bound for the largest determinant by constructing a family of matrices that achieve their lower bound. My project format was very similar to Derek's (I even used the same three award categories), but there were some differences. First, I did not have my students turn in a draft. This is largely because I did not have my act together, not because I am opposed to it. Second, I had the students work individually. They had been working in teams all semester, and I wanted them to have something they could definitely create on their own. They were, however, allowed to confer with each other about projects. At most, I would have had 24 projects, so this was doable (in part because of the next paragraph). Finally, the major difference between Derek's format and mine was in grading. My grading was essentially a 0/1 system: either you did the project, or you didn't. This made grading a little easier, and I am guessing (based on the psychology literature) made the project more enjoyable for the students. There were a couple of projects where I suspect the student did not put in much work, but only a couple. Those who did the project wanted to do it. (I did not really grade the projects, but I did read all of them to provide comments and feedback). I was happy with the results. This is the first course where students see proofs in any sort of serious way, and the proofs they do see typically require them to just move one step beyond a definition. Thus, I did not expect sophisticated proofs. But the students worked hard and made interested conjectures (and a couple proved a theorem). The worst part is that I forgot to bring my camera, so there were no pictures. The students also enjoyed it. I did a brief survey. Here are the results for the rating scale questions—"1″ means "Not much" and "5" means "A lot" (I averaged the numbers together for convenience, not for correctness). One student said that he/she did not spend enough time, and this got in the way of learning and enjoyment. If this student's numbers are omitted, the averages for the first two questions become 4.00 and 4.62, respectively. I surveyed everyone in the course on the last question—even those who did not do projects. The people who did not do projects averaged 3.00 on the last question; people who did projects averaged 4.36. 
I will end with some selected comments, the first three of which really warm a linear algebra teacher's heart.
CommonCrawl
F. Borgatti, Berger, J. A., Céolin, D., Zhou, J. Sky, Kas, J. J., Guzzo, M., McConville, C. F., Offi, F., Panaccione, G., Regoutz, A., Payne, D. J., Rueff, J.-P., Bierwagen, O., White, M. E., Speck, J. S., Gatti, M., and Egdell, R. G., "Revisiting the origin of satellites in core-level photoemission of transparent conducting oxides: The case of $n$-doped $\mathrm{SnO}_2$", Phys. Rev. B, vol. 97. American Physical Society, p. 155102, 2018. J. S. Zhou, Kas, J. J., Sponza, L., Reshetnyak, I., Guzzo, M., Giorgetti, C., Gatti, M., Sottile, F., Rehr, J. J., and Reining, L., "Dynamical effects in electron spectroscopy", The Journal of Chemical Physics, vol. 143. p. 184109, 2015. M. Guzzo, Kas, J. J., Sponza, L., Giorgetti, C., Sottile, F., Pierucci, D., Silly, M. G., Sirotti, F., Rehr, J. J., and Reining, L., "Multiple satellites in materials with complex plasmon spectra: From graphite to graphene", Phys. Rev. B, vol. 89. p. 085425, 2014. M. Gatti and Guzzo, M., "Dynamical screening in correlated metals: Spectral properties of SrVO3 in the GW approximation and beyond", Phys. Rev. B, vol. 87. American Physical Society, p. 155147, 2013. P. Wachsmuth, Hambach, R., Kinyanjui, M. K., Guzzo, M., Benner, G., and Kaiser, U., "High-energy collective electronic excitations in free-standing single-layer graphene", Phys. Rev. B, vol. 88. American Physical Society, p. 075433, 2013. M. Guzzo, "Dynamical correlation in solids: a perspective in photoelectron spectroscopy", Ecole Polytechnique, Palaiseau, 2012. G. Miceli, Guzzo, M., Cucinotta, C., and Bernasconi, M., "First Principles Study of Hydrogen Desorption from the NaAlH4 Surface Doped by Ti Clusters", Journal of Physical Chemistry C, vol. 116, pp. 4311-4315, 2012. M. Guzzo, Kas, J. J., Sottile, F., Silly, M. G., Sirotti, F., Rehr, J. J., and Reining, L., "Plasmon satellites in valence-band photoemission spectroscopy", The European Physical Journal B, vol. 85. Springer-Verlag, pp. 1-7, 2012. N. Bergeard, Silly, M. G., Krizmancic, D., Chauvet, C., Guzzo, M., Ricaud, J. P., Izquierdo, M., Stebel, L., Pittana, P., Sergo, R., Cautero, G., Dufour, G., Rochet, F., and Sirotti, F., "Time-resolved photoelectron spectroscopy using synchrotron radiation time structure", Journal of Synchrotron Radiation, vol. 18. pp. 245–250, 2011. M. Guzzo, Lani, G., Sottile, F., Romaniello, P., Gatti, M., Kas, J. J., Rehr, J. J., Silly, M. G., Sirotti, F., and Reining, L., "Valence Electron Photoemission Spectrum of Semiconductors: Ab Initio Description of Multiple Satellites", Phys. Rev. Lett., vol. 107. American Physical Society, p. 166401, 2011. M. Guzzo, "Exchange and Correlation effects in the electronic properties of transition metal oxides: the example of NiO", vol. Physics. Universita di Milano-Bicocca, Milan (Italy), p. 85, 2009.
CommonCrawl
The processes $A$, $B$, and $C$ each execute a loop of $100$ iterations and are started at times $0$, $5$ and $10$ milliseconds respectively, in a pure time sharing system (round robin scheduling) that uses a time slice of $50$ milliseconds. The time in milliseconds at which process C would complete its first I/O operation is ___________. $C$ completes its CPU burst at $500$ milliseconds, since the processes are scheduled in a round robin manner with a time slice of 50 ms and start at 0, 5 and 10 milliseconds. Where are you accounting for the 100 iterations? It is given that a process initiates I/O, so it could be the case that after initiating the I/O it again starts with the next iteration; then instead of ABCABCBCBC, we would have ABCABCABCABC. So what is stopping A's next iteration from starting? After 100 ms of computation, process A has to perform I/O for 500 ms to complete the 1st iteration. Only after completing the 1st iteration can process A start the 2nd iteration. Maybe, but then it is mentioned that there are sufficient I/O devices; I think that refers to the buffering ability of the devices, so that requests can be buffered. What if the input of the 1st iteration of A is used in the computation of the 2nd iteration of A? Then buffering won't help, since you've got to take the input before processing it. After the 1st CPU burst of C, why doesn't it go for its I/O operation, i.e. after 150 ms? In each iteration of the loop, a process performs a single computation that requires tc CPU milliseconds and then initiates a single I/O operation that lasts for tio milliseconds. So the CPU can't start A (A2) again till the previous A (A1) has finished?
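To make the schedule explicit, here is a small Python sketch of the round-robin timeline. Process A's burst and I/O times come from the discussion above; B's and C's are assumptions for illustration, since the table from the original question is not reproduced here. With these assumed values the sketch reproduces C finishing its first CPU burst at 500 ms and its first I/O at 1000 ms:

```python
from collections import deque

# First-iteration CPU burst (tc) and I/O time (tio) in ms. A's values appear
# in the thread; B's and C's are assumed here for illustration.
procs = {
    "A": {"arrival": 0,  "tc": 100, "tio": 500},
    "B": {"arrival": 5,  "tc": 350, "tio": 500},  # assumed
    "C": {"arrival": 10, "tc": 200, "tio": 500},  # assumed
}
SLICE = 50

remaining = {p: v["tc"] for p, v in procs.items()}
ready, admitted = deque(), set()
t = 0

def admit(now):
    """Put newly arrived processes at the tail of the ready queue."""
    for p in sorted(procs, key=lambda q: procs[q]["arrival"]):
        if p not in admitted and procs[p]["arrival"] <= now:
            ready.append(p)
            admitted.add(p)

admit(t)
while any(remaining.values()):
    if not ready:                      # CPU idle until the next arrival
        t += 1
        admit(t)
        continue
    p = ready.popleft()
    run = min(SLICE, remaining[p])
    t += run
    remaining[p] -= run
    admit(t)                           # arrivals during this slice queue up first
    if remaining[p] == 0:
        done = t + procs[p]["tio"]
        print(f"{p}: first CPU burst ends at {t} ms, first I/O completes at {done} ms")
    else:
        ready.append(p)                # preempted: back to the tail of the queue
```

Running it prints A's first I/O completing at 700 ms and, with the assumed values, C's at 1000 ms, consistent with the 500 ms CPU-completion time quoted in the discussion.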
CommonCrawl
Last weekend Fei, Dan, and I put on our first Party Fortress party. Fei and I have been working with social simulation for awhile now, starting with our Humans of Simulated New York project from a year ago. Over the past month or two we've been working on a web system, "highrise", to simulate building social dynamics. The goal of the tool is to be able to lay out buildings and specify the behaviors of their occupants, so as to see how they interact with each other and the environment, and how each of those interactions influences the others. Beyond its practical functionality (it still has a ways to go), highrise is part of an ongoing interest in simulation and cybernetics. Simulation is an existing practice that does not receive as much visibility as AI but can be just as problematic. It seems inevitable that it will become the next contested space of technological power. highrise is partly a continuation of our work with using simulation for speculation, but whereas our last project looked at the scale of a city economy, here we're using the scale of a gathering of 10-20 people. The inspiration for the project was hearing Dan's ideas for parties, which are in many ways interesting social games. By arranging a party in a peculiar way, either spatially or socially, what new kinds of interactions or relationships emerge? Better yet, which interactions or relationships that we've forgotten start to return? What relationships that we've mythologized can (re)emerge in earnest? highrise was the engine for us to start exploring this, which manifested in Party Fortress. I'll talk a bit about how highrise was designed and implemented and then about the living prototype, Party Fortress. First we needed a way to specify a building. We started by reducing a "building" to just floors and stairs, so we needed to develop a way to lay out a building by specifying floor plans and linking them up with stairs. In these floor plans, 0 is empty space, 1 is walkable, and 2 is an obstacle. In this format, each 2D array is a floor, so the complete 3D array represents the building (a small example of the format is sketched below). Note that even though we can specify multiple floors, we don't have any way to specify how they connect. We haven't yet figured out a good way of representing staircases in this text format, so for now they are explicitly added and positioned in code. A floor plan isn't enough to properly represent a building's interior - we also needed a system for specifying and placing arbitrary objects with arbitrary properties. To this end we put together an "object designer", depicted below in the upper-left hand corner. The object designer is used to specify the footprint of an object, which can then be placed in the building. When an object is clicked on, you can specify any tags and/or key-value pairs as properties for the object (in the upper-right hand corner), which agents can later query to make decisions (e.g. find all objects tagged food or toilet). Objects can be moved around and their properties can be edited while the simulation runs, so you can experiment with different layouts and object configurations on-the-fly. It gets annoying to need to create objects by hand in the UI when it's likely you'd want to specify them along with the floor plan.
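A minimal sketch of what such a floor-plan specification might look like (the layout values here are invented for illustration, not taken from highrise):

```python
# 0 = empty space, 1 = walkable, 2 = obstacle; one 2D array per floor.
floor_1 = [
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
]
floor_2 = [
    [1, 1, 1, 1],
    [1, 1, 2, 1],
    [1, 1, 2, 1],
    [0, 0, 1, 1],
]
building = [floor_1, floor_2]  # 3D array: building[floor][row][col]
```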
We expanded the floor plan syntax so that, in addition to specifying empty/walkable/obstacle spaces (in the new syntax, these are '-', ' ', and '#' respectively), you can also specify other arbitrary values, e.g. A, B, ☭, etc, and these values can be associated with object properties. But if several A cells are adjacent, do we have three adjacent but distinct A objects or one contiguous one? This ergonomics problem, in addition to the stair problem mentioned earlier, means there's still a bit of work needed on this part. The building functionality is pretty straightforward. Where things start to teeter (and get more interesting) is with designing the agent framework, which is used to specify the inhabitants of the building and how they behave. It's hard to anticipate what behaviors one might want to model, so the design of the framework has flexibility as its first priority. There was the additional challenge of these agents being spatial; my experience with agent-based models has always hand-waved physical constraints out of the problem. Agents would decide on an action and it would be taken for granted that they'd execute it immediately. But when dealing with what is essentially an architectural simulation, we needed to consider that an agent may decide to do something and need to travel to a target before they can act on their decision. So we needed to design the base Agent class so that when a user implements it, they can easily implement whatever behavior they want to simulate (a rough skeleton of this interface is sketched below). The first key component is how agents make decisions. The first piece is entropy, which represents the constant state changes that occur every frame, regardless of what action an agent takes. For example, every frame agents get a bit more hungry, a bit more tired, a bit more thirsty, etc. The second is successor, which returns the new state resulting from taking a specific action. This is applied only when the agent reaches its target. For example, if my action is eat, I can't actually eat and decrease my hunger state until I get to the food. The third is utility, which computes the utility for a new state given an old state. For example, if I'm really hungry now and I eat, the resulting state has lower hunger, which is a good thing, so some positive utility results. Agents use this utility function to decide what action to take. They can either deterministically choose the action which maximizes their utility, or sample a distribution of actions with probabilities derived from their utilities (i.e. such that the highest-utility action is most likely, but not a sure bet). This method also takes an optional expected parameter to distinguish the use of this method for deciding on the action from its use for actually computing the action's resulting utility. In the former (deciding), the agent's expected utility from an action may not actually reflect the true result of the action. If I'm hungry, I may decide to eat a sandwich thinking it will satisfy my hunger. But upon eating it, I might find that it actually wasn't that filling. The last is execute, which executes an action, returning the modified state and other side effects, e.g. doing something to the environment. Agents can also have an associated Avatar which is their representation in the 3D world. You can hook into this to move the agent and know when it's reached its destination. Each floor is represented as a grid, and the layout of the building is represented as a network where each node is a floor and edges are staircases.
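Before moving on to movement: to make the Agent interface above concrete, here is a rough Python skeleton. highrise itself is a web system, so this is not its actual API; the names, signatures and the sampling scheme in decide are our guesses based on the description above:

```python
import random

class Agent:
    """Sketch of the base agent interface described above (hypothetical API)."""

    def __init__(self, state):
        self.state = state          # e.g. {'hunger': 0.2, 'thirst': 0.5, ...}
        self.target = None          # position the avatar is walking toward
        self.action = None          # action to execute on arrival

    def entropy(self, state):
        """Per-frame drift applied regardless of action (hunger grows, etc.)."""
        raise NotImplementedError

    def successor(self, state, action):
        """State that results from actually executing `action`."""
        raise NotImplementedError

    def utility(self, old_state, new_state, expected=False):
        """Utility of moving from old_state to new_state. With expected=True
        this is the agent's guess, used only when choosing an action."""
        raise NotImplementedError

    def execute(self, action, state, env):
        """Apply the action to the world; returns the modified state."""
        raise NotImplementedError

    def decide(self, actions, deterministic=False):
        """Choose an action by (expected) utility, greedily or by sampling."""
        scored = [(a, self.utility(self.state,
                                   self.successor(self.state, a),
                                   expected=True))
                  for a in actions]
        if deterministic:
            return max(scored, key=lambda s: s[1])[0]
        # one simple way to turn utilities into sampling weights: shift so
        # all weights are positive, then sample proportionally
        lowest = min(u for _, u in scored)
        weights = [u - lowest + 1e-6 for _, u in scored]
        return random.choices([a for a, _ in scored], weights=weights)[0]
```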
When an agent wants to move to a position that's on another floor, they first generate a route through this building network to figure out which floors they need to go through, trying to minimize overall distance. Then, for each floor, they find the path to the closest stairs and go to the next floor until they reach their target. There are some improvements that I'd really like to make to the pathfinding system. Currently each position is weighted the same, but it'd be great if we held different position weights for each individual agent. With this we'd be able to represent, for instance, subjective social costs of spaces. For example, I need to go to the bathroom. Normally I'd take the quickest path there, but now there's someone there I don't want to talk to. Thus the movement cost of the positions around that person is higher for me than it is for others (assuming everyone else doesn't mind them), so I'd take a path which is longer in terms of physical distance, but less imposing in terms of overall cost when considering this social aspect. Those are the important bits of the agent part. When we used highrise for Party Fortress (more on that below), this was enough to support all the behaviors we needed. Since the original inspiration for highrise was parties, we wanted to throw a party to prototype the tool. This culminated in a small gathering, "Party Fortress" (named after Dwarf Fortress), where we ran a simulated party in parallel to the actual party, projected onto a wall. We wanted to start by simulating a "minimum viable party" (MVP), so the set of actions in Party Fortress is limited, but essential for partying. This includes: going to the bathroom, talking, drinking alcohol, drinking water, and eating. The key to generating plausible agent behavior is the design of utility functions. Generally you want your utility functions to capture the details of whatever phenomena you're describing (this polynomial designer tool was developed to help us with this). For example, consider hunger: when hunger is 0, utility should be pretty high. As you get hungry, utility starts to decrease. If you get too hungry, you die. So, assuming that our agents don't want to die (every simulation involves assumptions), we'd want our hunger utility function to asymptote to negative infinity as hunger increases. Since agents use this utility to decide what to do, if they are really, really hungry they will prioritize eating above all else, since that will have the largest positive impact on their utility. One challenge with spatial agents is that as they are moving to their destination, they may suddenly decide to do something else. Then, on the way to that new target, they again may decide to do something else. So agents can get stuck in this fickleness and never actually accomplish anything. To work around this we incorporated a commitment variable for each agent. It feels a bit hacky, but basically when an agent decides to do something, they have some resolve to stick with it unless some other action becomes overwhelmingly more important. Technically this works out to mean that whatever action an agent decides on has its utility artificially inflated (so it's more appealing to continue doing it) until they finally execute it or the commitment wears off. This could also be called stubbornness. Since conversation is such an important part of parties, we wanted to model it in higher fidelity than the other actions.
This took the form of having varying topics of conversation and bestowing agents with preferences for particular topics. We defined a 2D "topic space" or "topic matrix", where one axis is how "technical" the topic is and the other is how "personal" the topic is. For instance, a low technical, low personal topic might be the weather. A high technical but low personal topic might be the blockchain. Agents don't know what topic to talk about with an agent they don't know, but they have a really basic conversation model which allows them to learn (kind of, this needs work). They'll try different things, try to gauge how the other person responds, and try to remember this. As specified so far, our implementation of agents doesn't capture, explicitly at least, the relationships between individual agents. In the context of a social simulation this is obviously pretty important. For Party Fortress we implemented a really simple social network so we could represent pre-existing friendships and capture forming ones as well. The social network is modified through conversation, and the valence and strength of the modification is based on what topics people like. For example, if we talk about a topic we both like, our affinity increases in the social graph. It's not very interesting to watch the simulation with no other indicators of what's happening. These are people we're supposed to be simulating, and so we have some curiosity and expectations about their internal states. We implemented a narrative system where agents report what exactly they're doing in colorful ways. Our plan for the party was to project the simulation on the wall as the party went on. But that introduces an anomaly where our viewing of the simulation may influence our behavior. We needed the simulation itself to capture this possibility - so we integrated a webcam stream into the simulation and introduced a new action for agents: "gawk". Now they are free to come and watch us, the "real" world, just as we can watch them. We have a few other ideas for "closing the loop" that we weren't able to implement in time for Party Fortress I, such as more direct communication with simulants (e.g. via SMS). We hosted Party Fortress at Prime Produce, a space that Dan has been working on for some time. We had guests fill out a questionnaire as they arrived, designed to extract some important personality features. When they submitted the questionnaire, a version of themselves would appear in the simulation and carry on partying. Surprisingly, there were several moments of synchronization between the "real" party and the simulated one. For instance, people talking or eating when the simulation "predicted" it. Some of the topics that were part of the simulation came up independently in conversation (most notably "blockchain", but that was sort of a given with the crowd at the party). And of course seeing certain topics come up in the simulation spurred those topics coming up outside of it too. Afterwards our attendees had a lot of good feedback on the experience. Maybe the most important bit of feedback was that the two parties felt too independent; we need to incorporate more ways for party-goers to feel like they can influence the simulated party and vice versa. It was a good first step - we're looking to host more of these parties in the future and expand highrise so that it can encompass weirder and more bizarre parties. Yesterday I started working on a new game, tentatively titled "Trolley Problems: A Eulogy for Social Norms".
This will be my second game with Three.js and Blender (though the first, The Founder, is still not totally finished) and I thought it'd be helpful to document this process. The basic process was: 1) create the chair model in Blender and 2) load the chair into Three.js. I won't give a Blender modeling tutorial here (this was suggested as a good resource to start), but the theater chair model was quickly made by extruding a bunch from a cube. A lot of the model's character comes out in the texturing. I like to keep textures really simple, just combinations of solid colors. The way I do it is with a single texture file that I gradually add to throughout a project. Each pixel is a color, so the texture is basically a palette. In Blender I just select the faces I want to have a particular color, use the default UV map unwrapping (while in Edit Mode, hit U and select Unwrap) and then in the UV/Image Editor (with the texture file selected in the dropdown, see image below) I just drag the unwrapped faces onto the appropriate colors. There is one thing you have to do to get this texture rendering properly. By default, Blender (like almost every other 3D program) will try to scale your textures. In the texture properties window (select the red-and-white checker icon in the Properties window, see below), scroll down and expand the Image Sampling section. Uncheck MIP Map and Interpolation, then set Filter to Box (see below for the correct settings). This will stop Blender from trying to scale the texture and give you the nice solid color you want. One of the best things about Three.js is that there is a Blender-to-Three.js exporter. Installation is pretty straightforward (described in the repo here). To export a Blender object to JSON, select your object (in Object Mode) and select from the menu File > Export > Three (.json). That's all that Blender's needed for. This next section involves a bunch of code. I won't reproduce everything here (you can check out the repo to see the full working example) but I'll highlight the important parts. Three.js provides a JSONLoader class which is what loads the exported Blender model. You could just use that and be done with it, but there are a few modifications I make to the loaded model to get it looking right - for instance, making sure I'm using the proper material. With that, the code for loading the chair should place the chair into the scene. In the Scene class the onKeyDown and onKeyUp methods determine, based on what keys you press and release, which direction you should move in. The render method includes some additional code to check which directions you're supposed to be moving in and adds the appropriate velocity. The velocity x value moves you right (positive) and left (negative), the y value moves you up (positive) and down (negative), and the z value moves you forward (negative) and backwards (positive). It's important to note that the z value is negative in the forward direction, because this confused me for a while. We also keep track of how much time has elapsed since the last frame (delta) so we can scale the velocity appropriately (e.g. if the last frame update was 0.5s ago, you should move only half as far as you would if it had been 1s ago). You'll notice that you can walk through objects, which is probably not what you want...we'll add simple collision detection later to fix this. The key to looking around is the browser's pointer lock API. The pointer lock API allows you to capture your mouse cursor and its movement.
I'd never done this before, but the Three.js repo includes an example that shows the basic method. So I gutted that and moved the important bits into the Scene class. The full code is available here), but I'll explain some of it here. In the Scene class the important method is setupPointerLock, which sets up the pointer lock event listeners. It is pretty straightforward, but basically: there's an instructions element that, when clicked on, locks the pointer and puts the game into fullscreen. The onPointerLockChange method toggles the pointer lock controls, so that the controls are only listening when the pointer lock is engaged. The meat of the actual pointer movement is handled in PointerLock.js. This is directly lifted from the Three.js example implementation. It's also pretty sparse; it adjusts the yaw and pitch of the camera according to how you move your mouse. So the last bit here is to prevent the player from walking through stuff. I have a terrible intuition about 3D graphics so this took me way too long. Below are some of my scribbles from trying to understand the problem. The basic approach is to use raycasting, which involves "casting" a "ray" out from a point in some direction. Then you can check if this ray intersects with any objects and how far away those objects are. Below are an example of some rays. For example, the one going in front of the object points to (0,0,1). That sounds like it contradicts what I said earlier about the front of the object being in the negative-z direction...it doesn't quite. This will become important and confusing later. Note that the comments in the example on GitHub are incorrect (they have right and left switched...like I said, this was very confusing for me). Every update we cast these rays and see if they intersect with any objects. We check if those objects are within some collision distance, and if they are, we zero out the velocity in that direction. So, for instance, if you're trying to move in the negative-z direction (forward) and there's an object in front of you, we have to set velocity.z = 0 to stop you moving in that direction. That sounds pretty straightforward but there's one catch - the velocity is relative to where you're facing (i.e. the player's axis), e.g. if velocity.z is -1 that means you're moving in the direction you're looking at, which might not be the "true" world negative-z direction. These rays, unlike velocity, are cast in directions relative to the world axis. This might be clearer with an example. Say you're facing in the positive-x direction (i.e. to the right). When you move forward (i.e. press W), velocity.z will be some negative value and velocity.x will be zero (we'll say your velocity is (0,0,-1)). This is because according to your orientation, "forward" is always negative-z, even though in the context of the world your "forward" is technically positive-x. Your positive-x (your right) is in the world's negative-z direction (see how this is confusing??). Now let's say an object is in front of you. Because our raycasters work based on the world context, it will say there's an obstacle in the positive-x direction. We want to zero-out the velocity in that direction (so you're blocked from moving in that direction). However, we can't just zero-out velocity.x because that does not actually correspond to the direction you're moving in. 
In this particular example we need to zero-out velocity.z because, from your perspective, the obstacle is in front of you (negative-z direction), not to your right (positive-x direction). The general approach I took (and I'm not sure if it's particularly robust, but it seems ok for now) is to take your ("local") velocity, translate it to the world context (i.e. from our example, a velocity of (0,0,-1) gets translated to (1,0,0)). Then I check the raycasters, apply the velocity zeroing-out to this transformed ("world") velocity (so if there is an obstacle in the positive-x direction, I zero out the x value to get a world velocity of (0,0,0)), then re-translate it back into the local velocity. Ok, so here's how this ends up getting implemented. Whenever you add a mesh and the player shouldn't be able to walk through it, you need to add that mesh to this.collidable. The idea of syd is that it will be geared towards social simulation - that is, modeling systems that are driven by humans (or other social beings) interacting. To support social simulations syd will include some off-the-shelf models that can be composed to define agents with human-ish behaviors. One category of such models are around the propagation and mutation of ideas and behaviors ("social contagion"). Given a population of agents with varying cultural values, how do these values spread or change over time? How do groups of individuals coalesce into coherent cultures? What follows are some notes on a small selection of these models. Two primary mechanisms are sorting, or "homophily", the tendency to associate with similar people, and peer effects, the tendency to become more like people we are around often. Schelling's model of segregation may be the most well-known example of a sorting model, where residents move if too many of their neighbors aren't like them. A very simple peer effect model is Granovetter's model. Say there is a population of $N$ individuals and $n$ of them are involved in a riot. Each individual in the population has some threshold $T_i$; if over $T_i$ people are in the riot, they'll join the riot too. Basically this is a bandwagon model in which people will do whatever others are doing when it becomes popular enough. Granovetter's model does not incorporate the innate appeal of the behavior (or product, idea, etc), just how many people are participating in it. One could imagine though that the riot looks really exciting and people join not because of how many others are already there, but because they are drawn to something else about it. A simple extension of Granovetter's model, the Standing Ovation model, captures this. The behavior in question has some appeal or quality $Q$. We add some additional noise to $Q$ to get the observed "signal" $S = Q + \epsilon$. This error allows us to capture a bit of the variance in perceived appeal (some people may find it appealing, some people may not, the politics around the riot may be very complex, etc). If $S > T_i$, then person $i$ participates. After assessing the appeal, those who are still not participating then have the Granovetter's model applied (they join if enough others are participating). There are further extensions to the Standing Ovation model. You could say that an agent's relationships affects their likelihood to join, e.g. if I see 100 strangers rioting I may not care to join, but if 10 of my friends are then I will. 
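A compact sketch of these two threshold models, to make the mechanics concrete (the thresholds, the noise level, and the update loop are illustrative choices, not taken from any particular implementation):

```python
import random

def granovetter(thresholds, initial_participants):
    """Granovetter's model: join once more than T_i others have joined."""
    participating = set(initial_participants)
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if i not in participating and len(participating) > t:
                participating.add(i)
                changed = True
    return participating

def standing_ovation(thresholds, quality, noise=5.0):
    """Standing Ovation model: appeal (plus noise) first, then bandwagoning."""
    initial = set()
    for i, t in enumerate(thresholds):
        signal = quality + random.gauss(0, noise)   # S = Q + epsilon
        if signal > t:
            initial.add(i)
    # those still not participating then follow Granovetter-style peer pressure
    return granovetter(thresholds, initial)

# Example: 100 agents; thresholds are on a 0-100 scale so they can be compared
# both against the signal and against a head count (a simplification).
random.seed(1)
thresholds = [random.uniform(0, 100) for _ in range(100)]
print(len(standing_ovation(thresholds, quality=30.0)))
```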
Axelrod's culture model is a simple model of how a "culture" (a population sharing many traits, beliefs, behaviors, etc) might develop. This model can be extended with a consistency rule: sometimes the value of one trait is copied over to another trait (in the same individual), this models the two traits becoming consistent. For example, perhaps the changing of one belief causes dependent beliefs to change as well, or requires beliefs it's dependent on to change. Traits can also randomly change as well due to "error" (imperfect imitation or communication) or "innovation". A really interesting idea is to apply the Boids flocking algorithm to the modeling of idea dissemination (if you're unfamiliar with Boids, Craig Reynolds explains here). Here agents form a directed graph (e.g. a social network), where edges have two values: frequency of communication and respect one agent holds for the other. For each possible belief, agents have an alignment score which can have an arbitrary scale, e.g. -3 for strong disbelief to 3 for strong belief. The agent feels a "force" that causes them to change their own alignment. This force is the sum of force from alignment, force from cohesion, and force from separation. force from alignment is computed by the average alignment across all agents - this is the perceived majority opinion. force from cohesion: each agent computes the average alignment felt by their neighbors they respect (i.e. respect is positive), weighted by their respect for those neighbors. force from separation: like the force from cohesion, but computed across their neighbors they disrespect (i.e. respect is negative), weighted by their respect for those neighbors. This force is normalized and is used to compute a probability that the individual changes their alignment one way or the other. We specify some proportionality constant $\alpha$ which determines how affected an agent is by the force. For force $F$ the probability of change of alignment is just $\alpha F$. It's up to the implementer how much an agent's alignment changes. the "offspring" of a meme vary in their "appearance" Memetic transmission may be horizontal (intra-generational) and/or vertical (intergenerational). The primary mechanism for memetic transmission is imitation, but other mechanisms include social learning and instruction. Note that the transmission of an idea is heavily dependent on its own characteristics. For example, there are some ideas that have "mutation-resistant" qualities, e.g. "don't question the Bible" (though that does not preclude them from mutation entirely). Some ideas also have the attribute of "proselytism"; that is part of the idea is the spreading of the idea. The Cavalli-Sforza Feldman model is a memetics model describing how a population of beliefs evolve over time. "The popular enforcement of unpopular norms" The models presented so far make no distinction between public and private beliefs, but it's very clear that people often publicly conform to ideas that they may not really feel strongly about. This phenomenon is called pluralistic ignorance: "situations where a majority of group member privately reject a norm but assume (incorrectly) that most others accept it". It's one thing to publicly conform to ideas you don't agree with, but people often go a step further and try to enforce others to comply to these ideas. Why is that? The "illusion of transparency" refers to the tendency where people believe that others can read more about their internal state than they actually can. 
Maybe something embarrassing happened and you feel as if everyone knows, even if no one possibly could. In the context of pluralistic ignorance, this tendency causes people to feel as though others can see through their insincere alignment with the norm, so they take additional steps to "prove" their conviction. In the model, each agent $i$ has a binary private belief $B_i$ which can be 1 (true believer) or -1 (disbeliever). An agent $i$'s choice to comply with the norm is $C_i$. If $C_i=1$ the agent chooses to comply, otherwise $C_i=-1$. This choice depends on the strength of the agent's convictions $0 < S \leq 1$. We assume that true believers ($B_i = 1$) have maximal conviction ($S_i = 1$) and so are resistant to neighbors enforcing deviance. Enforcement carries an additional cost $0 < K < 1$ for those who comply without truly believing (it is $K$ more difficult to get someone who does not privately/truly align with the belief to enforce it). Agents can only enforce compliance or deviance if they have complied or deviated, respectively. The model can be extended by making it so that true disbelievers can be "converted" to true believers (i.e. their private belief changes to conform to the public norm).

Page, S. E. Model Thinking.
Cole, S. (2006, September 28). Modeling opinion flow in humans using boids algorithm & social network analysis. Gamasutra - The Art & Business of Making Games.
Simpkins, B., Sieck, W., Smart, P., & Mueller, S. (2010). Idea Propagation in Social Networks: The Role of 'Cognitive Advantage'. Network-Enabled Cognition: The Contribution of Social and Technological Networks to Human Cognition.
Centola, D., Willer, R., & Macy, M. (2005). The Emperor's Dilemma: A Computational Model of Self-Enforcing Norms. American Journal of Sociology, 110(4), 1009-1040.

The agents in syd will need to be parameterized in some way that meaningfully affects their behavior. Another way to put this is that the agents need some values that guide their actions. In Humans of Simulated New York, Fei and I defined individuals along the axes of greed vs altruism, lavishness vs frugality, long-sightedness vs short-sightedness, and introversion vs extroversion. The exact configuration of these values is what made an agent an individual: a lavish agent would spend more of their money, an extroverted agent would have a larger network of friends (which consequently made finding a job easier), greedy agents would pay their employees less, and so on. The dimensions we used aren't totally comprehensive. There are many aspects of human behavior that they don't encapsulate. Fortunately there is quite a bit of past work to build on - there have been many past attempts to inventory value spectrums that define and distinguish cultures. The paper A Proposal for Clustering the Dimensions of National Culture (Maleki, A., & de Jong, M., 2014) neatly catalogues these previous efforts and proposes their own measurements as well.
power distance: "the extent to which hierarchical relations and position-related roles are accepted" uncertainty avoidance: "to what extent people feel uncomfortable with certain, unknown, or unstructured situations" mastery vs harmony: "competitiveness, achievement, and self-assertion versus consensus, equity, and harmony" traditionalism vs secularism: "religiosity, self-stability, feelings of pride and, consistency between emotion felt and their expression vs secular orientation and flexibility" indulgence vs restraint: "the extent to which gratification of desires and feelings is free or restrained" assertiveness vs tenderness: "being assertive and aggressive versus kind and tender in social relationships" collaborativeness: "the spirit of 'team-work'" It doesn't feel very exact though.
Results for "Loukas P. Petrou" Multilinear Calderón-Zygmund operators on Hardy spacesOct 08 2000It is shown that multilinear Calder\'on-Zygmund operators are bounded on products of Hardy spaces. A remark on an endpoint Kato-Ponce inequalityNov 18 2013This note introduces bilinear estimates intended as a step towards an $L^\infty$-endpoint Kato-Ponce inequality. In particular, a bilinear version of the classical Gagliardo-Nirenberg interpolation inequalities for a product of functions is proved. Multilinear Multiplier Theorems and ApplicationsNov 25 2015Dec 16 2016We obtain new multilinear multiplier theorems for symbols of restricted smoothness which lie locally in certain Sobolev spaces. We provide applications concerning the boundedness of the commutators of Calder\'on and Calder\'on-Coifman-Journ\'e.
I tried to prove that the area of a rectangle is $ab$ given side lengths $a$ and $b$. The best I can do is to assume the area of a $1\times1$ square is $1$. Then note that the number of $1\times1$ squares that fit in an $a\times b$ rectangle is $ab$. Therefore the area is $a\cdot b$. This does not seem rigorous however.

This is a good question! On the face of it, it seems that before you can start being rigorous, you have to define precisely what you mean by the area of a rectangle. But this is just what you are trying to prove! However, there is another approach: you define exactly the properties that you require of a reasonable area metric. Then you adopt these properties as your axioms, and use them to show that an $a \times b$ rectangle has area $ab$.

$A1$: The area of a $1 \times 1$ rectangle is $1$.
$A2$: Any two congruent rectangles have the same area.
$A3$: If a rectangle $R$ is the union of disjoint rectangles $S$ and $T$, then the area of $R$ is equal to the sum of the areas of $S$ and $T$.
$A4$: If a rectangle $R$ contains a rectangle $S$, then the area of $R$ is not less than the area of $S$.

Otherwise you might be able to construct pathological area metrics using the Axiom of Choice. But I am open to correction on this. If anybody can suggest corrections or improvements to this axiom set, feel free to post them as a separate answer, so they don't get lost in the comments.

In order to interpret lengths and areas as numbers, you have to fix a unit length and a unit area. The former is usually done by defining a coordinate system, whereas the latter usually defines the square of the unit length as the unit of area. But that square is a product, so the definition already assumes that the area of the $1\times 1$ square is $1$, which seems to be something you want to prove. If you accept this kind of setup, then you can consider transformations of your rectangle which turn it into another rectangle of the same area but with the unit length as the length of one edge. In that case you can prove that the other edge will be the product of the original edge lengths. You can use a von Staudt construction to express this product geometrically, based on the unit length.
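As a sketch of how the axioms in the first answer get used (for rational side lengths; irrational sides then follow from $A4$ by squeezing between rational approximations):

```latex
% Write a = p/q and b = r/s with p, q, r, s positive integers.
% Cutting the unit square into q columns of width 1/q and s rows of height 1/s
% gives qs congruent (1/q) x (1/s) rectangles; by A1-A3 each has area 1/(qs).
% An a x b rectangle is the union of p*r such rectangles, hence
\operatorname{Area}\!\left(\frac{p}{q}\times\frac{r}{s}\right)
  \;=\; p\,r\cdot\frac{1}{qs}
  \;=\; \frac{p}{q}\cdot\frac{r}{s}
  \;=\; ab .
```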
I'm still absorbing some basic ideas about quantum physics and now I think I have to reconsider the Uncertainty Principle. One of the points I've absorbed is that the probability that we are in any particular configuration is the relative squared amplitude. But now I've started to think about the Uncertainty Principle all over again. Long before I understood the above points, I understood that there was no exact joint (position, momentum) value that could ever be measured. If I go back over it all now, I get the impression that, within each configuration, the position and momentum are exactly defined, and in fact the Uncertainty Principle comes from the fact that the amplitude distribution itself is fuzzy. That is, if I make a measurement of momentum, I am projecting many configurations, encoding many velocities, onto a preferred basis. The information from the momentum measurement is "you are in one of these configurations, for which the corresponding position is one of these values".

Your bullet list is a very good summary except for point #1, which has a flaw leading to confusion. Coordinate + momentum is phase space, while the configuration space is "smaller" - it is just the momentum OR just the coordinate. The amplitude distribution is over the configuration space; it is precisely the uncertainty principle that precludes us from having an amplitude distribution over the phase space. Why so? Let's look at a specific amplitude distribution over the coordinate. The same distribution already contains as much information about momentum as possible (without violating the basic principles)! Information about the momentum is "encoded" in the relative phases between the amplitudes for different spatial coordinates. You can "translate" (it is called a "change of representation") an amplitude distribution over the coordinate into an equivalent distribution for momentum (in this particular case the change of representation turns out to be the Fourier transform). For example, if the quantum state is such that there is only one coordinate component (a localized particle with a position-representation wave function proportional to a Dirac delta-function), then the momentum is completely "uncertain" - the corresponding amplitude in momentum space will have equal probabilities for all possible momenta. Note that it is possible to develop a phase-space language for QM - search for "Wigner function".
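For concreteness, the change of representation mentioned in the answer is (in one dimension, with the standard convention):

```latex
\tilde\psi(p) \;=\; \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\psi(x)\,e^{-ipx/\hbar}\,dx ,
\qquad |\tilde\psi(p)|^2 \;=\; \text{probability density over momentum.}
% For a position eigenstate psi(x) ~ delta(x - x0), |psi~(p)|^2 is constant:
% every momentum is equally likely, as described above.
```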
We investigate distribution testing with access to non-adaptive conditional samples. In the conditional sampling model, the algorithm is given the following access to a distribution: it submits a query set $S$ to an oracle, which returns a sample from the distribution conditioned on being from $S$. In the non-adaptive setting, all query sets must be specified in advance of viewing the outcomes. Our main result is the first polylogarithmic-query algorithm for equivalence testing, deciding whether two unknown distributions are equal to or far from each other. This is an exponential improvement over the previous best upper bound, and demonstrates that the complexity of the problem in this model is intermediate between the complexity of the problem in the standard sampling model and in the adaptive conditional sampling model. We also significantly improve the sample complexity for the easier problems of uniformity and identity testing. For the former, our algorithm requires only $\tilde O(\log n)$ queries, matching the information-theoretic lower bound up to a $O(\log \log n)$-factor. Our algorithm works by reducing the problem from $\ell_1$-testing to $\ell_\infty$-testing, which enjoys a much cheaper sample complexity. Necessitated by the limited power of the non-adaptive model, our algorithm is very simple to state. However, there are significant challenges in the analysis, due to the complex structure of how two arbitrary distributions may differ.
Regular expressions allow for a flexible description of attribute retrieval from generic read descriptors. A summary of the regular expression system adopted in the Flux Capacitor can be found there. The BED (Browser Extensible Data) format has been developed by UCSC for displaying transcript structures in the genome browser; a full description of the format can be found there. The following fields are used by the Flux Capacitor and the Flux Simulator. Note: The sanity check $chromStart + blockStarts_k + blockSizes_k = chromEnd$ must always hold, for $k$ being the last (respectively, the unique) element in the block vector! Go to the format's detail page to see ongoing discussions. FASTA formats are used very commonly as they provide easy (descriptor, sequence) tuples. Generally, one can differentiate between single-FASTA files — which contain a single sequence — and multi-FASTA files, which correspondingly contain more than one sequence. The Flux Capacitor and Simulator programs usually output multi-FASTA files; an exception are the genomic sequence files, which are to be located in a common directory, with a file chr.fa for each chr annotated in the corresponding GTF annotation file. In contrast to FA files, the leading character of the description line in FASTQ (FASTA Quality) files is "@" ("at"). Then the sequence follows, and afterwards a separator line led by a "+" ("plus") character indicates the start of the qualities. See here for some examples of the FASTQ file format. The Flux Simulator encodes qualities as ASCII characters by adding an offset of 64 according to the Illumina standard. Important, read this: As of build 20090729 of the Simulator, the .ERR format has changed to allow for both quality-based models and models without qualities. The article here describes the new version (after build 20090729); go there to see the description of the .ERR format before build 20090729. Probability distributions over a discrete value space (e.g., quality values, substitution symbols, etc.) are coherently described by their cumulative distribution functions (CDFs). By their nature, the number series in a CDF have to be monotonically increasing, with (at least) the last value of a series being 1.0.

minQual (-40) minimum quality: the minimum value for qualities in the described error models. Currently exclusively integer quality models (such as Illumina and phred qualities) are addressed. Therefore, subsequent CDFs over quality spectra all have length (maxQual - minQual + 1). Only for error files that have been built with quality values.
maxQual (40) maximum quality: highest value of the quality spectrum, an integer - see above. Only for error files that have been built with quality values.
tholdQual (.) the threshold quality: level below which all base-calls have been considered "problematic" or an "accident", regardless of whether the corresponding base had been called correctly or not. If no such threshold has been applied, tholdQual should be set to "." Only for error files that have been built with quality values.
p(minQual), $\ldots$, p(maxQual) CDF over qualities of "unproblematic" base calls. A base call is considered unproblematic iff it is (i) correct and (ii) at or above the level specified by tholdQual. Only for error files that have been built with quality values.
letter (A) Symbol for which the crosstalk is specified as the observed substitution rates broken down by quality levels.
minQual $\ldots$ maxQual (-40,…,40) quality level for the following observed substitution rates p(X) apply. Only for error files that have been built with quality values. p(A),p(C),p(G),p(N),p(T) probabilities (or CDF) for the symbol specified by letter to be substituted by A, C, G, N, or T. length (11) extension of the "problem" captured in this error profile. Consequently, the 0-based index of the last position affected is (start+length-1). baseProb (6.875394925958544E-5) probability as fraction of reads that shared this problem in the observed dataset. Multiplying this probability with the value nrInstances in the #MODEL block recasts the number of instances in which this error has been observed. start+i p(minQual) p(minQual+1) $\ldots$ p(maxQual) (26 0.11 0.11 0.13 0.26 $\ldots$) probabilities (or CDF) of the distribution of qualities at the corresponding position. Only for error files that have been built with quality values. Intron model files describe the format of splice site combinations that are considered as potential intron. Discriminatory attributes of biological introns are (1st) the distance of the donor/acceptor pair, (2nd) the combination of their splice site sequences. Each model block is introduced by a header line. where #MODEL introduces a new model, and minDist respectively maxDist delimit the boundaries on the lengths of valid introns that are described by the model. Subsequently, a list of donor/acceptor sequences that may co-occur in valid introns is provided. The sequences are the strings directly adjacent to exons, and may be redundant — as combinations are evaluated — as their length may vary, even amongst donors and acceptors. GEN_DIR String path to the directory with the genomic sequences of chromosomes or scaffolds used in the reference annotation. LOAD_CODING [YES|NO] Flag to load coding transcripts from the reference annotation. EXPRESSION_K Float Power law parameter $k$ of the expression simulation, should be <0. FRAG_B4_RT [YES|NO] flag to schedule the fragmentation before (YES), or after (NO) the reverse transcription. Note for fragmentations carried out before reverse transcription, exclusively random priming strategies are reasonable. FRAG_MODE [PHYSICAL|CHEMICAL] flag to switch between fragmentation according to physical or chemical attributes. FILTERING [YES|NO] Flag to indicate whether a length filtering step is carried out on the cDNA library. FILT_MIN Integer Minimum length that is retained during filtering. FILT_MAX Integer Maximum length that is retained during filtering. READ_NUMBER Integer Number of reads that are intented to produce. Note: this number is an upper boundary and gets adapted to the actual size of the intermediary generated library. READ_LENGTH Integer Length of the generated reads, depends on filtering settings. PAIRED_END [YES|NO] Flag to indicate whether read pairs are produced. FASTQ [YES|NO] Flag that indicates whether additionally the read sequences and qualities are output. Depends on GENOME_DIR and ERR_FNAME. QTHOLD Integer Quality value below which base-calls are considered problematic. TMP_DIR String Path to folder for temporary files, if different from system standard (commonly /tmp on Unix clones). 1 LOCUS_ID chrom:start-end[W|C] identifier for the intrinsic splicing locus, given by the chromosome (chrom), start and end position and the strand (Watson or Crick). 2 TRANSCRIPT_ID String transcript identifier from the reference annotation. 
The format of LIB (Simulated Library) files is simple and condenses the information needed to describe a fragment (RNA or cDNA) of an original transcript. Each line corresponds to one such fragment and gives, in 3 tab-delimited fields, the estart and eend coordinates in the spliced (exonic) sequence of the transcript, together with the transcript_id of the original annotation. Note: because the simulated transcription start and the length of the poly-A tail may vary from the annotation in the reference, values for estart can drop below the annotated transcript start, and values for eend can be higher than the transcript length.
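As a small illustration, a reader for such a file might look like the sketch below. The exact column order is an assumption here (estart, eend, transcript_id) and should be checked against real Flux Simulator output:

```python
from collections import namedtuple

Fragment = namedtuple('Fragment', ['estart', 'eend', 'transcript_id'])

def read_lib(path):
    """Parse a .LIB file: one fragment per line, 3 tab-delimited fields.
    Column order is assumed; verify against actual output."""
    fragments = []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip('\n')
            if not line:
                continue
            estart, eend, transcript_id = line.split('\t')
            # estart may be negative and eend may exceed the annotated transcript
            # length (simulated TSS and poly-A tail variation, as noted above).
            fragments.append(Fragment(int(estart), int(eend), transcript_id))
    return fragments
```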
Abstract: In this paper, we establish robustness to noise perturbations of polyhedral regularization of linear inverse problems. We provide a sufficient condition that ensures that the polyhedral face associated to the true vector is equal to that of the recovered one. This criterion also implies that the $\ell^2$ recovery error is proportional to the noise level for a range of parameter. Our criterion is expressed in terms of the hyperplanes supporting the faces of the unit polyhedral ball of the regularization. This generalizes to an arbitrary polyhedral regularization results that are known to hold for sparse synthesis and analysis $\ell^1$ regularization which are encompassed in this framework. As a byproduct, we obtain recovery guarantees for $\ell^\infty$ and $\ell^1-\ell^\infty$ regularization.
How can I compute a Kronecker sum in Mathematica? There is a KroneckerProduct function, but there is no Kronecker sum. It seems like a very important feature to include. So in the absence of a Kronecker sum function, how can I construct my own Kronecker sum $A\oplus B$ of two arbitrary $n\times n$ matrices $A$ and $B$? This also reminded me of SquareMatrixQ, a convenient bit of syntactic sugar which I'd seen used before, but keep forgetting. See @MarcoB's answer if you wanted the Kronecker sum.
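For reference, the usual definition - so the construction only needs a Kronecker product and identity matrices (in Mathematica, KroneckerProduct and IdentityMatrix) - is, for $A$ of size $n\times n$ and $B$ of size $m\times m$:

```latex
A \oplus B \;=\; A \otimes I_m \;+\; I_n \otimes B
```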
Last week, Newsweek published an article titled The Real Minimum Wage. The authors report that "in a weeks-long experiment, we posted simple, hourlong jobs (listening to audio recordings and counting instances of a specific keyword) and continually lowered our offer until we found the absolute bottom price that multiple people would accept, and then complete the task." The results "showed" that Americans are the ones willing to accept the lowest possible salary for working on a task, compared even to people in India, Romania, the Philippines, etc. In fact, they found that there are Americans willing to work for 25 cents per hour, while they could not find anyone willing to work for less than \$1/hr in any other country. The conclusion of the article? Americans are more desperate than anyone else in the world. On an abstract, statistical level, by testing workers from multiple countries to determine their minimum wage, we sample multiple "minimum wage distributions", trying to find the smallest value within each one of them. Each probability distribution corresponds to the minimum wages that workers from different countries are willing to accept. Let's call the CDFs of the distributions $F_i(x)$, with, say, $F_1(x)$ being the distribution of minimum wages for the US, $F_2(x)$ for India, $F_3(x)$ for the UK, etc. Now, let's assume that we sample $n$ workers from one of the country-specific distributions. After running the experiment, we get back measurements $x_1, \ldots, x_n$, each one corresponding to the minimum wage for one of the workers that participated in the study, who comes from the country that we are measuring. As we get more and more workers, it becomes more and more likely that we will find a value at or below 25 cents/hour: if each worker's minimum wage is an independent draw from a distribution with CDF $F$, the probability that at least one of $n$ workers is at or below a value $z$ is $1-(1-F(z))^n$, which grows quickly with $n$. So, how does this approach explain the findings of Newsweek? We know that all countries are not equally represented on Mechanical Turk. Most workers are from the US (50% or so), followed by India (35% or so), and then by Canada (2%), UK (2%), Philippines (2%), and a variety of other countries with similarly small percentages. This means that in the study, we expect to have more Americans participating, followed by Indians, and then a variety of other countries. So, even if the distribution of minimum wages was identical across all countries, we expect to find lower wages in the country with the largest number of participants. Since the majority of the workers on Mechanical Turk are from the US, followed by India, followed by Canada, UK, etc, the illustration by Newsweek simply gives us the countries of origin of the workers, in reverse order of popularity! At this point, someone may ask: what happens if the distribution is not uniform but, say, lognormal? (A much more plausible distribution for minimum acceptable wages.) For this specific question, as you can see from the analysis above, this does not make much of a difference: the only thing that we need to know is the value of $F(z)$ for the $z$ value of interest. A more general question is: what is the expected maximum (or minimum) value that we expect to find when we sample from an arbitrary distribution? This is the topic of extreme value theory, a field in statistics that tries to predict the probability of extreme events (e.g., what is the biggest possible drop in the stock market? what is the biggest rainfall in this region?). Given the events in the financial markets in 2008, this theory has received significant attention in the last few years.
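As a quick numerical check of the sampling argument above (the lognormal parameters are made up for illustration; the sample sizes roughly follow the country shares mentioned earlier):

```python
import random

def observed_minimum(n, mu=1.0, sigma=0.5, rng=random):
    """Smallest acceptable wage seen among n workers drawn from the SAME distribution."""
    return min(rng.lognormvariate(mu, sigma) for _ in range(n))

random.seed(0)
for country, n in [('US', 5000), ('India', 3500), ('Canada', 200), ('UK', 200)]:
    print(country, round(observed_minimum(n), 2))
# Larger groups tend to yield lower observed minima, even though every group
# is drawn from the identical underlying distribution.
```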
The three types of the distributions are all special cases of the generalized extreme value distribution. This theory has significant applications not only when modeling risk (stock market, weather, earthquakes, etc), but also when modeling decision-making for humans: Often, we model humans as utility maximizers, who are making decisions that maximize their own well-being. This maximum-seeking behavior results often in the distributions described above. I will give a more detailed description in a later blog post. We have a strong program, with 16 long papers accepted, and 16 papers being presented as demos and posters. Below you can find the titles of the papers and their abstracts. The PDF versions of the papers will be posted online by AAAI, after the completion of the conference are available through the AAAI Digital Library. Until then, you can search Google, or just ask the authors for a pre-print. So, if you are interested in crowdsourcing and human computation, we hope to see you there in San Francisco in August! Active learning and crowdsourcing are promising ways to efficiently build up training sets for object recognition, but thus far techniques are tested in artificially controlled settings. Typically the vision researcher has already determined the dataset's scope, the labels ``actively" obtained are in fact already known, and/or the crowd-sourced collection process is iteratively fine-tuned. We present an approach for *live learning* of object detectors, in which the system autonomously refines its models by actively requesting crowd-sourced annotations on images crawled from the Web. To address the technical issues such a large-scale system entails, we introduce a novel part-based detector amenable to linear classifiers, and show how to identify its most uncertain instances in sub-linear time with a hashing-based solution. We demonstrate the approach with experiments of unprecedented scale and autonomy, and show it successfully improves the state-of-the-art for the most challenging objects in the PASCAL benchmark. In addition, we show our detector competes well with popular nonlinear classifiers that are much more expensive to train. Recognizing human activities from wearable sensor data is an important problem, particularly for health and eldercare applications. However, collecting sufficient labeled training data is challenging, especially since interpreting IMU traces is difficult for human annotators. Recently, crowdsourcing through services such as Amazon's Mechanical Turk has emerged as a promising alternative for annotating such data, with active learning serving as a natural method for affordably selecting an appropriate subset of instances to label. Unfortunately, since most active learning strategies are greedy methods that select the most uncertain sample, they are very sensitive to annotation errors (which corrupt a significant fraction of crowdsourced labels). This paper proposes methods for robust active learning under these conditions. Specifically, we make three contributions: 1) we obtain better initial labels by asking labelers to solve a related task; 2) we propose a new principled method for selecting instances in active learning that is more robust to annotation noise; 3) we estimate confidence scores for labels acquired from MTurk and ask workers to relabel samples that receive low scores under this metric. 
The proposed method is shown to significantly outperform existing techniques both under controlled noise conditions and in real active learning scenarios. The resulting method trains classifiers that are close in accuracy to those trained using ground-truth data. This paper presents techniques for gathering data that expose errors of automatic classification models. Prior work has demonstrated the promise of having humans seek training data, as an alternative to active learning, in cases where there is extreme class imbalance. We now explore the direction where we ask humans to identify cases what will cause the classification system to fail. Such techniques are valuable in revealing problematic cases that do not reveal themselves during the normal operation of the system, and may include cases that are rare but catastrophic. We describe our approach for building a system to satisfy this requirements, trying to encourage humans to provide us with such data points. In particular, we reward a human when the provided example is difficult for the model to handle, and the reward is proportional to the magnitude of the error. In a sense, the humans are asked to ''Beat the Machine'' and find cases where the automatic model (''the machine'') is wrong. Our experimental data show that the density of the identified problems is an order of magnitude higher compared to alternative approaches, and that the proposed technique can identify quickly the ``big flaws'' that would typically remain uncovered. Crowdsourcing platforms, such as Amazon Mechanical Turk, have enabled the construction of scalable applications for tasks ranging from product categorization and photo tagging to audio transcription and translation. These vertical applications are typically realized with complex, self-managing workflows that guarantee quality results. But constructing such workflows is challenging, with a huge number of alternative decisions for the designer to consider. Artificial intelligence methods can greatly simplify the process of creating complex crowdsourced workflows. We argue this thesis by presenting the design of TurKontrol 2.0, which uses machine learning to continually refine models of worker performance and task difficulty. Using these models, TurKontrol 2.0 uses decision-theoretic optimization to 1) choose between alternative workflows, 2) optimize parameters for a workflow, 3) create personalized interfaces for individual workers, and 4) dynamically control the workflow. Preliminary experience suggests that these optimized workflows are significantly more economical than those generated by humans. Many human computation systems use crowdsourcing markets like Amazon Mechanical Turk to recruit human workers. The payment in these markets is usually very low, and still collected demographic data shows that the participants are a very diverse group including highly skilled full time workers. Many existing studies on their motivation are rudimental and not grounded on established motivation theory. Therefore, we adapt different models from classic motivation theory, work motivation theory and Open Source Software Development to crowdsourcing markets. The model is tested with a survey of 431 workers on Mechanical Turk. We find that the extrinsic motivational categories (immediate payoffs, delayed payoffs, social motivation) have a strong effect on the time spent on the platform. 
For many workers, however, intrinsic motivation aspects are more important, especially the different facets of enjoyment based motivation like "task autonomy" and "skill variety". Our contribution is a preliminary model based on established theory intended for the comparison of different crowdsourcing platforms. The efficient functioning of markets and institutions assume a certain degree of honesty from participants. In labor markets, for instance, employers benefit from employees who will render meaningful work, and employees benefit from employers who will pay the promised amount for services rendered. We use an established method for detecting dishonest behavior in a series of experiments conducted on \amt, a popular online labor market. Our first experiment estimates a baseline amount of dishonesty for this task in the population sample. The second experiment tests the hypothesis that the level of dishonesty in the population will be sensitive to the relative amount that can be gained by dishonest reporting, and the third experiment, manipulates the degree to which dishonest reporting can be detected at the individual level. We conclude with a demographic and cross-cultural analysis of the predictors of dishonest reporting in this market. Traditional methods of collecting translation and paraphrase data are prohibitively expensive, making constructions of large, new corpora difficult. While crowdsourcing offers a cheap alternative, quality control and scalability can become problematic. We discuss a novel annotation task that uses videos as the stimulus which discourages cheating. It also only requires monolingual speakers, thus making it easier to scale since more workers are qualified to contribute. Finally, we employed a multi-tiered payment system that helps retain good workers over the long-term, resulting in a persistent, high-quality workforce. We present the results of one of the largest linguistic data collection efforts using Mechanical Turk, yielding 85K English sentences and more than 1k sentences for each of a dozen more languages. We describe a framework for rapidly prototyping applications which require intelligent visual processing, but for which there does not yet exist reliable algorithms, or for which engineering those algorithms is too costly. The framework, CrowdSight, leverages the power of crowdsourcing to offload intelligent processing to humans, and enables new applications to be built quickly and cheaply, affording system builders the opportunity to validate a concept before committing significant time or capital. Our service accepts requests from users either via email or simple mobile applications, and handles all the communication with a backend human computation platform. We build redundant requests and data aggregation into the system freeing the user from managing these requirements. We validate our framework by building several test applications and verifying that prototypes can be built more easily and quickly than would be the case without the framework. In this paper, we present Digitalkoot, a system for fixing errors in the Optical Character Recognition (OCR) process of old texts through the use of human computation. By turning the work into simple games, we are able to attract a great number of volunteers to donate their time and cognitive capacity for the cause. Our analysis shows how untrained people can reach very high accuracy through the use of crowdsourcing. 
Furthermore we analyze the effect of social media and gender on participation levels and the amount of work accomplished. Human Computation is, of course, a very old field with a forgotten literature that treats many of the key problems, especially error detection and correction. The obvious methods of error detection, duplicate calculation, have proven to be subject to Babbage's Rule: Different workers using the same methods on the same data will tend to make the same errors. To avoid the consequences of this rule, early human computers developed a disciplined regimen to identify and correct mistakes. This paper reconstructs those methods, puts them in a modern context and identifies their implications for the modern version of human computation. Crowdsourcing is an effective tool for scalable data annotation in both research and enterprise contexts. Due to crowdsourcing's open participation model, quality assurance is critical to the success of any project. Present methods rely on EM-style post-processing or manual annotation of large gold standard sets. In this paper we present an automated quality assurance process that is inexpensive and scalable. Our novel process relies on programmatic gold creation to provide targeted training feedback to workers and to prevent common scamming scenarios. We find that it decreases the amount of manual work required to manage crowdsourced labor while improving the overall quality of the results. In this paper, we develop a new human computation algorithm for speech-to-text transcription that can potentially achieve the high accuracy of professional transcription using only microtasks deployed via an online task market or a game. The algorithm partitions audio clips into short 10-second segments for independent processing and joins adjacent outputs to produce the full transcription. Each segment is sent through an iterative dual pathway structure that allows participants in either path to iteratively refine the transcriptions of others in their path while being rewarded based on transcriptions in the other path, eliminating the need to check transcripts in a separate process. Initial experiments with local subjects show that produced transcripts are on average 96.6% accurate. Micro-task markets like Amazon MTurk enable online workers to provide human intelligence as Web-based on demand services (so called people services). Businesses facing large amounts of knowledge work can benefit from increased flexibility and scalability of their workforce but need to cope with reduced control of result quality. While this problem is well recognized, it is so far only rudimentarily addressed by existing platforms and tools. In this paper, we present a flexible research toolkit which enables experiments with advanced quality management mechanisms for generic micro-task markets. The toolkit enables control of correctness and performance of task fulfillment by means of dynamic sampling, weighted majority voting and worker pooling. We demonstrate its application and performance for an OCR scenario building on Amazon MTurk. The toolkit however enables the development of advanced quality management mechanisms for a large variety of people service scenarios and platforms. Many practitioners currently use rules of thumb to price tasks on online labor markets. Incorrect pricing leads to task starvation or inefficient use of capital. Formal optimal pricing policies can address these challenges. 
In this paper we argue that an optimal pricing policy must be based on the tradeoff between price and desired completion time. We show how this duality can lead to a better pricing policy for tasks in online labor markets. This paper makes three contributions. First, we devise an algorithm for optimal job pricing using a survival analysis model. We then show that worker arrivals can be modeled as a non-homogeneous Poisson process (NHPP). Finally, using an NHPP for worker arrivals and discrete choice models, we present an abstract mathematical model that captures the dynamics of the market when full market information is presented to the task requester. This model can be used to predict completion times and optimal pricing policies for both public and private crowds. In online labor markets, determining the appropriate incentives is a difficult problem. In this paper, we present dynamic pricing mechanisms for determining the optimal prices for such tasks. In particular, the mechanisms are designed to handle the intricacies of markets like Mechanical Turk (workers are coming online, requesters have budgets, etc.). The mechanisms have desirable theoretical guarantees (incentive compatibility, budget feasibility, and competitive ratio performance) and perform well in practice. Experiments demonstrate the effectiveness and feasibility of using such mechanisms in practice. This paper reports the results of a natural field experiment where workers from a paid crowdsourcing environment self-select into tasks and are presumed to have limited attention. In our experiment, workers labeled any of six pictures from a 2 x 3 grid of thumbnail images. In the absence of any incentives, workers exhibit a strong default bias and tend to select images from the top-left (``focal'') position; the bottom-right (``non-focal'') position was the least preferred. We attempted to overcome this bias and increase the rate at which workers selected the least preferred task by using a combination of monetary and non-monetary incentives. We also varied the saliency of these incentives by placing them in either the focal or non-focal position. Although both incentive types caused workers to re-allocate their labor, monetary incentives were more effective. Most interestingly, both incentive types worked better when they were placed in the focal position and made more salient. In fact, salient non-monetary incentives worked about as well as non-salient monetary ones. Our evidence suggests that user interface and cognitive biases play an important role in online labor markets and that salience can be used by employers as a kind of ``incentive multiplier''. Developing Scripts to Teach Social Skills: Can the Crowd Assist the Author? The social world that most of us navigate effortlessly can prove to be a perplexing and disconcerting place for individuals with autism. Currently, there are no models to assist non-expert authors as they create customized social script-based instructional modules for a particular child. We describe an approach to using human computation to develop complex models of social scripts for a plethora of complex and interesting social scenarios, possible obstacles that may arise in those scenarios, and potential solutions to those obstacles. Human input is the natural way to build these models, and in so doing create valuable assistance for those trying to navigate the intricacies of a social life.
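The pricing papers above lean on modeling worker arrivals as a non-homogeneous Poisson process. As a hedged illustration of what that assumption looks like in practice — none of this code is from the papers, and the diurnal rate function below is invented for illustration — here is the standard thinning method for simulating NHPP arrival times:

```python
import random
import math

def simulate_nhpp(lam, lam_max, t_end, seed=0):
    """Simulate arrival times of a non-homogeneous Poisson process on [0, t_end]
    by thinning a homogeneous process of rate lam_max (requires lam(t) <= lam_max)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        # candidate inter-arrival time from the dominating homogeneous process
        t += rng.expovariate(lam_max)
        if t > t_end:
            return arrivals
        # accept the candidate with probability lam(t) / lam_max
        if rng.random() < lam(t) / lam_max:
            arrivals.append(t)

# hypothetical diurnal rate: more workers online in the evening (arbitrary numbers)
lam = lambda t: 5.0 + 4.0 * math.sin(2 * math.pi * t / 24.0)
arrivals = simulate_nhpp(lam, lam_max=9.0, t_end=72.0)
print(len(arrivals), "worker arrivals over 72 hours")
```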
Crowdsourcing markets such as Amazon's Mechanical Turk provide an enormous potential for accomplishing work by combining human and machine computation. Today, crowdsourcing is mostly used for massively parallel information processing for a variety of tasks such as image labeling. However, as we move to more sophisticated problem-solving, there is little knowledge about managing dependencies between steps and a lack of tools for doing so. As the contribution of this paper, we present a concept of an executable, model-based programming language and a general purpose framework for accomplishing more sophisticated problems. Our approach is inspired by coordination theory and an analysis of emergent collective intelligence. We illustrate the applicability of our proposed language by combining machine and human computation based on existing interaction patterns for several general computation problems. In this paper, we investigate to what extent a large group of human workers is able to collaboratively produce a global ranking of images, based on a single semantic attribute. To this end, we developed CollaboRank, which is a method that formulates and distributes tasks to human workers, and aggregates their personal rankings into a global ranking. Our results show that a relatively high consensus can be achieved, depending on the type of the semantic attribute. Crowdsourcing platforms such as Amazon Mechanical Turk have become popular for a wide variety of human intelligence tasks; however, quality control continues to be a significant challenge. Recently, Dai et al. (2010) propose TurKontrol, a theoretical model based on POMDPs to optimize iterative, crowd-sourced workflows. However, they neither describe how to learn the model parameters, nor show its effectiveness in a real crowd-sourced setting. Learning is challenging due to the scale of the model and noisy data: there are hundreds of thousands of workers with high-variance abilities. This paper presents an end-to-end system that first learns TurKontrol's POMDP parameters from real Mechanical Turk data, and then applies the model to dynamically optimize live tasks. We validate the model and use it to control a successive-improvement process on Mechanical Turk. By modeling worker accuracy and voting patterns, our system produces significantly superior artifacts compared to those generated through static workflows using the same amount of money. Quality assurance remains a key topic in the human computation research field. Prior work indicates that independent agreement is effective for low-difficulty tasks, but has limitations. This paper addresses this problem by proposing a tournament-selection-based quality control process. The experimental results from this paper show that humans are better at identifying the correct answers than generating them. In today's human computation systems, designing tasks and workflows is a difficult and labor-intensive process. Can workers from the crowd be used to help plan workflows? We explore this question with Turkomatic, a new interface to microwork platforms that uses crowd workers to help plan workflows for complex tasks. Turkomatic uses a general-purpose divide-and-conquer algorithm to solve arbitrary natural-language requests posed by end users. The interface includes a novel real-time visual workflow editor that enables requesters to observe and edit workflows while the tasks are being completed.
Crowd verification of work and the division of labor among members of the crowd can be handled automatically by Turkomatic, which substantially simplifies the process of using human computation systems. These features enable a novel means of interaction with crowds of online workers to support successful execution of complex work. Mutual exclusions are important information for machine learning. Games With A Purpose (or GWAP) provide an effective way to get large amounts of data from web users. This research proposes MuSweeper, a minesweeper-like game, to collect mutual exclusions. By embedding game theory into the game mechanics, the precision is guaranteed. Experiments showed that MuSweeper can efficiently collect mutual exclusions with high precision. We present MobileWorks, a mobile phone-based crowdsourcing platform. MobileWorks targets workers in developing countries who live at the bottom of the economic pyramid. This population does not have access to desktop computers, so existing microtask labor markets are inaccessible to them. MobileWorks offers human OCR tasks that can be accomplished on low-end mobile phones; workers access it through their mobile web browser. To address the limited screen resolution available on low-end phones, MobileWorks segments documents into many small pieces, and sends each piece to a different worker. A first pilot study with 10 users over a period of 2 months revealed that it is feasible to do simple OCR tasks using a simple mobile-web-based application. We found that, on average, workers complete about 120 tasks per hour. Using single entry, the accuracy of workers across the different documents is 89%. We propose a multiple-entry solution which increases the theoretical accuracy of the OCR to more than 99%. As researchers embrace micro-task markets for eliciting human input, the nature of the posted tasks moves from those requiring simple mechanical labor to requiring specific cognitive skills. On the other hand, the number of such tasks and the size of the user population in micro-task marketplaces are increasing, requiring better search interfaces for productive user participation. In this paper we posit that understanding user skill sets and presenting them with suitable tasks not only maximizes the overall quality of the output, but also attempts to maximize the benefit to the user in terms of more successfully completed tasks. We also implement a recommendation engine for suggesting tasks to users based on implicit modeling of skills and interests. We present results from a preliminary evaluation of our system using publicly available data gathered from a variety of human computation experiments recently conducted on Amazon's Mechanical Turk. The advent of crowdsourcing has created a variety of new opportunities for improving upon traditional methods of data collection and annotation. This in turn has created intriguing new opportunities for data-driven machine learning (ML). Convenient access to crowd workers for simple data collection has further generalized to leveraging more arbitrary crowd-based human computation to supplement ML. While new potential applications of crowdsourcing continue to emerge, a variety of practical and sometimes unexpected obstacles have already limited the degree to which its promised potential can be actually realized in practice. This paper considers two particular aspects of crowdsourcing and their interplay, data quality control (QC) and ML, reflecting on where we have been, where we are, and where we might go from here.
In this paper we develop a novel model of geospatial data creation, called CollabMap, that relies on human computation. CollabMap is a crowdsourcing tool to get users contracted via Amazon Mechanical Turk or a similar service to perform micro-tasks that involve augmenting existing maps (e.g. GoogleMaps or Ordnance Survey) by drawing evacuation routes, using satellite imagery from GoogleMaps and panoramic views from Google Street View. We use human computation to complete tasks that are hard for a computer vision algorithm to perform or to generate training data that could be used by a computer vision algorithm to automatically define evacuation routes. We describe a Z-score-based outlier detection method for detection and filtering of inaccurate crowd workers. After filtering, we aggregate labels from remaining workers via simple majority voting or feature-weighted voting. Both supervised and unsupervised features are used, individually and in combination, for both outlier detection and weighted voting. We evaluate on noisy judgments collected from Amazon Mechanical Turk which assess Web search relevance of query/document pairs. We find that filtering in combination with multi-feature weighted voting achieves 8.94% relative error reduction for graded accuracy (4.25% absolute) and 5.32% for binary accuracy (3.45% absolute). Systems that find music recordings based on hummed or sung melodic input are called Query-By-Humming (QBH) systems. Such systems employ search keys that are more similar to a cappella singing than the original recordings. Successfully deployed systems use human computation to create these search keys: hand-entered midi melodies or recordings of a cappella singing. Tunebot is one such system. In this paper, we compare search results using keys built from two automated melody extraction systems to those gathered using two populations of humans: local paid singers and Amazon Mechanical Turk workers. This research aims to explore how Human Computation can be used to aid economic development in communities experiencing extreme poverty throughout the world. Work is ongoing with a community in rural Kenya to connect them to employment opportunities through a Human Computation system. A feasibility study has been conducted in the community using the 3D protein folding game Foldit and Amazon's Mechanical Turk. Feasibility has been confirmed and obstacles identified. Current work includes a pilot study doing image analysis for two research projects and developing a GUI that is usable by workers with little computer literacy. Future work includes developing effective incentive systems that operate both at the individual level and the group level and integrating worker accuracy evaluation, worker compensation, and result-credibility evaluation. Crowdsourcing platforms such as Amazon's Mechanical Turk (AMT) provide inexpensive and scalable workforces for processing simple online tasks. Unfortunately, workers participating in crowdsourcing tend to supply work of inconsistent or low quality. We report on our experiences using AMT to verify hundreds of thousands of local business listings for the online directory Yelp.com. Using expert-verified changes, we evaluate the accuracy of our workforce and present the results of preliminary experiments that work towards filtering low-quality workers and correcting for worker bias. Our report seeks to inform the community of practical and financial constraints that are critical to understanding the problem of quality control in crowdsourcing systems.
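The outlier-filtering abstract above combines Z-score-based worker filtering with weighted majority voting. The paper's actual features and weights are not reproduced here, so the following is only a schematic sketch of the general pattern: score each worker by agreement with the plain majority, drop workers whose score is a low outlier by Z-score, then take a weighted vote among the rest.

```python
from collections import Counter, defaultdict
import statistics

def majority_labels(votes):
    """votes: dict item -> dict worker -> label. Plain majority label per item."""
    return {item: Counter(wl.values()).most_common(1)[0][0] for item, wl in votes.items()}

def filter_and_vote(votes, z_cutoff=2.0):
    majority = majority_labels(votes)
    # feature: each worker's agreement rate with the current majority answers
    agree = defaultdict(list)
    for item, wl in votes.items():
        for worker, label in wl.items():
            agree[worker].append(1.0 if label == majority[item] else 0.0)
    rates = {w: sum(a) / len(a) for w, a in agree.items()}
    mu = statistics.mean(rates.values())
    sd = statistics.pstdev(rates.values()) or 1.0
    # drop workers whose agreement rate is an unusually low outlier
    keep = {w for w, r in rates.items() if (r - mu) / sd >= -z_cutoff}
    # weighted vote among the remaining workers, weighting by agreement rate
    result = {}
    for item, wl in votes.items():
        tally = defaultdict(float)
        for worker, label in wl.items():
            if worker in keep:
                tally[label] += rates[worker]
        result[item] = max(tally, key=tally.get) if tally else majority[item]
    return result
```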
We consider how to most effectively use crowd-based relevance assessors to produce training data for learning to rank. This integrates two lines of prior work: studies of unreliable crowd-based binary annotation for binary classification, and studies for aggregating graded relevance judgments from reliable experts for ranking. To model varying performance of the crowd, we simulate annotation noise with varying magnitude and distributional properties. Evaluation on three LETOR test collections reveals a striking trend contrary to prior studies: single labeling outperforms consensus methods in maximizing learning rate (relative to annotator effort). We also see surprising consistency of learning rate across noise distributions, as well as greater challenge with the adversarial case for multi-class labeling. A few months back, I started advising Tagasauris, a company that provides media annotation services using crowdsourcing. Since there are some interesting aspects of the story, which go beyond the simple "tag using MTurk" story, I would like to give a few more details that I consider interesting. One of my favorite parts of the Magnum website is the Archival Calendar, where they have a set of photos showcasing various historic events. Beats Facebook browsing by a wide margin. But let's get back to the story. This lack of metadata is the case not only for the archive but also for the new, incoming photos that arrive every day from its members. (To put it mildly, photographers are not exactly eager to sit, tag, and describe the hundreds of photos they shoot every day.) This means that a large fraction of the Magnum Photos archive, which contains millions of photos, is virtually unsearchable. The photos are effectively lost in the digital world, even though they are digitized and available on the Internet. An example of such a case of "lost" photos is a set of photos from the shooting of the movie "American Graffiti". People at Magnum Photos knew that one of their photographers, Dennis Stock, who died in 2009, was on set during the production of the movie, and he had taken photos of the then young and unknown members of the team. Magnum Photos had no idea where these photos were. They knew they had digitized the archive of Dennis Stock, they knew that the photos were in the archive, but nobody could locate the photos within the millions of other, untagged photos. For those unfamiliar with the movie, American Graffiti is a 1973 film by George Lucas (pre-Star Wars), starring the then-unknown Richard Dreyfuss, Ron Howard, Paul Le Mat, Charles Martin Smith, Cindy Williams, Candy Clark, Mackenzie Phillips and Harrison Ford. The later rise to stardom of all these actors gives the movie almost cult status. The Magnum Photos archive is a trove of similar "hidden treasures". Sitting there, waiting for some accidental, serendipitous discovery. Magnum Photos had its own set of annotators. However, the annotators could not keep up even with the volume of incoming photos. Going back and annotating the archive was an even more daunting task. This meant lost revenue for Magnum Photos, as if you cannot find a photo, you cannot license it, and you cannot sell it. Tagasauris proposed to solve the problem using crowdsourcing. With hundreds of workers working in parallel, it became possible to tame the influx of untagged incoming photos, and start going backwards and tagging the archive. Of course, vanilla photo tagging is not a solution.
Workers type misspelled words (named entities are systematic offenders), try to get away with generic tags, etc. Following the lessons learned from the ESP Game and all the subsequent studies, Tagasauris built solutions for cleaning the tags, rewarding specificity, and, in general, cleaning up and ensuring high quality for the noisy tagging process. A key component was the ability to match the tags entered by the workers with named entities, which themselves were then connected to Freebase entities. The result? When workers were tagging the photos from Magnum Photos, they identified the actors in the shots, and the machine process in the background assigned "semantic tags" to the photos, such as [George Lucas], [Richard Dreyfuss], [Ron Howard], [Mackenzie Phillips], [Harrison Ford] and others. Yes, humans + machines generate things that are better than the sum of the parts. So, how did the workers discover the photos from American Graffiti? As you may imagine, the workers had no idea that the photos that they were tagging were from the shooting of the film. They could identify the actors, but that was it. Going from actor tagging to understanding the context of the photo shoot is a task that cannot be expected of lay, non-expert taggers. You need experts who can "connect the dots". Unfortunately, subject experts are expensive. And they tend not to be interested in tedious tasks, such as assigning tags to photos. However, this "connecting the dots" is a task where machines are better than humans. We have recently seen how Watson, by having access to semantically connected ontologies (often generated by humans), could identify the correct answers to a wide variety of questions. Bingo! The entity that connects the different entities together is "American Graffiti", which was not used as a tag by any worker. At this point, you can understand how the story evolved. A graph activation/spreading algorithm suggests the tag, experts can verify it, and the rest is history. This is not a story to show how cool discovery based on linked entities is. This is old news for many people who work with such data. However, this is a simple example of using crowdsourcing in a more intelligent way than it is currently being used. Machines cannot do everything (in fact, they are especially bad at tasks that are "trivial" for humans), but when humans provide enough input, the machines can take it from there and significantly improve the overall process. One can even see the obvious next step: use face recognition and allow tagging to be done collaboratively by humans and machines. Google and Facebook have very advanced algorithms for face recognition. Match them intelligently with humans, and you are way ahead of solutions that rely simply on humans to tag faces. I think the lesson is clear: Let humans do what they do best, and let machines do what they do best. (And expect the balance to change as we move forward and machines can do more.) Undoing and ignoring decades of research in computer science, just because it is easier to use cheap labor, is a disservice not only to computer science. It is a disservice to the potential of crowdsourcing as well.
CommonCrawl
Projective Geometry: Prove that the mapping in $\mathbb{P}^2(\mathbb{R})$ is not well-defined. Prove that the mapping $F: \mathbb{P}^2(\mathbb{R}) \to \mathbb{P}^2(\mathbb{R})$ given by $F(x_1, x_2, x_3) = (x_1 x_2, x_2, x_3)$ is not well-defined. I know that to determine whether a mapping is well-defined, you should pick two points that are the same in $\mathbb{P}^2(\mathbb{R})$ and show that the mapping transforms them the same way. So, would the points $(1,2,3)$ and $(2,4,6)$, which are scalar multiples of each other and so are the same point in $\mathbb{P}^2(\mathbb{R})$, be an example showing why it is not well-defined? Because $(1,2,3)$ is mapped to $(2,2,3)$ and $(2,4,6)$ is mapped to $(8,4,6)$, and $(2,2,3)$ and $(8,4,6)$ are not scalar multiples of each other. Any help is appreciated. To put your example in context, you can define a map $$ F:\Bbb P^2\longrightarrow \Bbb P^2 $$ by setting $F([x_1\colon x_2\colon x_3])=[y_1\colon y_2\colon y_3]$ where each $y_i$ is a homogeneous polynomial in $x_1, x_2, x_3$ of the same degree $d$. Indeed, if so, multiplying each of the $x$'s by $\lambda\neq0$ modifies the $y$'s by the same factor $\lambda^d$, leaving the image point unchanged. But if the degrees of the $y$'s are not equal, as in your example, the map $F$ is not well-defined.
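A quick numerical sanity check of the argument (a throwaway sketch, not part of the original thread): rescaling the representative by 2 sends the image to a non-proportional triple, so F cannot descend to the projective plane.

```python
def F(x1, x2, x3):
    return (x1 * x2, x2, x3)

def proportional(u, v):
    # u and v represent the same projective point iff all 2x2 cross products vanish
    return u[0]*v[1] == u[1]*v[0] and u[1]*v[2] == u[2]*v[1] and u[0]*v[2] == u[2]*v[0]

p, q = (1, 2, 3), (2, 4, 6)          # the same point of P^2(R)
print(proportional(p, q))            # True: the inputs agree projectively
print(proportional(F(*p), F(*q)))    # False: (2,2,3) vs (8,4,6) are not proportional
```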
CommonCrawl
We can apply our model augmentation framework to models that are compositions of component models. How does the epidemic size depend on the probability of recovering? What if the disease is fatal? What if the population is growing? We are going to use the model augmentation presented in examples/agentgraft.jl as a baseline simulation and build a workflow to compose that model with the example in examples/polynomial_regression.jl. It is strongly recommended that you understand those examples before following this notebook. This example combines an agent based model of SIR diseases with a statistical model of polynomial regression to quantify the response of the agent based model with respect to one of its parameters. The input models have to be composed carefully in order to make the software work. As taught by the scientific computing education group Software Carpentry, the best practice for composing scientific models is to have each component write files to disk and then use a workflow tool such as Make to orchestrate the execution of the modeling scripts. An alternative approach is to design modeling frameworks for representing the models. The problem with this avenue becomes apparent when models are composed: the frameworks must be interoperable in order to make combined models. ModelTools avoids this problem by representing the models as code and manipulating the code. The interoperation of two models is defined by user-supplied functions in a fully featured programming language. Let $m_1,m_2$ be models, and $t_1,t_2$ be transformations, and define $M_i = t_i(m_i)$. If we denote the creation of pipelines with the function composition symbol $g\circ f$, then we want to implement everything such that the following diagram commutes. This example shows how you can use a pipeline to represent the combination of models and then apply combinations of transformations to that pipeline. Transforming models and composing them into pipelines are two operations that commute: you can transform then compose, or compose and then transform. Here is the baseline model, which is read in from a text file. Instead of using parsefile, you could use a quote/end block to code up the baseline model in this script. Find the part of the model that implements the polynomial model for regression. `f(X, β) = β*X^p+2 + β*X^q+q`. Pipelines connect models in sequence like a bash script. Define $\times$ so that $T_1\times T_2$ acts on a pipeline by creating $P(T_1(m_1),T_2(m_2), c_1,c_2)$. We are going to add an additional state to the model to represent the infectious disease fatalities. The user must specify what that concept means in terms of the name for the new state and the behavior of that state. D is a terminal state for a finite automaton. Some utilities for manipulating functions at a higher level than expressions. Another change we can make to our model is the introduction of population growth. Our model for population is that on each timestep, one new susceptible person will be added to the list of agents. We use the tick! function as an anchor point for this transformation. We are able to generate all possible polynomial regressions using compositions of these transformations. $T_x,T_1$ are generators for our monoid of transformations $T = \langle T_x, T_1 \rangle$. Any polynomial can be generated by these two operations; cf. Horner's rule.
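The claim that $T_x$ and $T_1$ generate every polynomial via Horner's rule is easy to see concretely. The sketch below is not the ModelTools API — it is a toy Python illustration in which a "model" is just a list of polynomial coefficients (lowest degree first), $T_x$ multiplies the model's formula by x, and $T_1$ adds 1 to the constant term.

```python
def T_x(coeffs):
    """Multiply the polynomial by x: shift every coefficient up one degree."""
    return [0] + coeffs

def T_1(coeffs):
    """Add 1 to the constant term."""
    return [coeffs[0] + 1] + coeffs[1:] if coeffs else [1]

def build(target):
    """Horner's rule: reach the target coefficients (non-negative integers)
    using only T_x and T_1, starting from the empty (zero) model."""
    model = []
    for c in reversed(target):            # highest-degree coefficient first
        model = T_x(model) if model else model
        for _ in range(c):                # add 1 as many times as the coefficient says
            model = T_1(model)
    return model

print(build([1, 3, 2]))   # builds 1 + 3x + 2x^2  ->  [1, 3, 2]
```

The loop is exactly Horner's evaluation order, 2x^2 + 3x + 1 = ((2)x + 3)x + 1, which is why these two generators suffice.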
CommonCrawl
Theorem 1 (The Riesz-Fischer Theorem): Let $(X, \mathfrak T, \mu)$ be a measure space and let $1 \leq p \leq \infty$. Then the Lebesgue space $L^p(X, \mathfrak T, \mu)$ is complete. Recall that a Cauchy sequence in a metric space converges if and only if it has a convergent subsequence. We use this result in proving the Riesz-Fischer theorem.
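For reference, here is a short proof sketch of the quoted lemma (standard metric-space material, not specific to this page):

```latex
\textbf{Lemma.} In a metric space, a Cauchy sequence $(x_n)$ converges if and only if
it has a convergent subsequence.

\textbf{Proof sketch.} ($\Rightarrow$) Any sequence is a subsequence of itself.
($\Leftarrow$) Suppose $x_{n_k} \to x$. Given $\varepsilon > 0$, choose $N$ so that
$d(x_n, x_m) < \varepsilon/2$ for all $n, m \geq N$ (Cauchy), and choose $k$ with
$n_k \geq N$ and $d(x_{n_k}, x) < \varepsilon/2$. Then for every $n \geq N$,
\[
  d(x_n, x) \leq d(x_n, x_{n_k}) + d(x_{n_k}, x) < \varepsilon/2 + \varepsilon/2 = \varepsilon,
\]
so $x_n \to x$. $\blacksquare$
```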
CommonCrawl
This is a place where everyone can ask questions about things he/she did not understand or which need clarification. If you have three points on a line, A, B and C, you can express C in terms of A and B: $C=\alpha A + (1-\alpha) B$. This even holds when you apply a linear or an affine mapping to these points, due to linearity and additivity: $f(C)=f(\alpha A + (1-\alpha) B)=\alpha f(A) + (1-\alpha) f(B)$. However, it does not hold in the case of projective maps. There is a good picture visualizing this effect in the CG1 script, page 38 (Figure 3.2). The ratio $\alpha$ in world coordinates is about 0.5, while on the image plane the point is shifted towards B. Cross ratios are used to establish a relationship between world space ratios and image space ratios. This is not an issue in our rendering pipeline since lines are still mapped to lines after projective transformations, and we do not require world-space ratios projected to the image plane anywhere in the pipeline as far as I can remember. It would, however, be an issue for some operations - and in fact it does become one when talking about triangle rendering. Two examples would be normal interpolation in world space rather than image space, and the perspectively corrected texture coordinate computation. First: is there a reason (or was one given) why projective transformations leave cross ratios invariant?
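A tiny numerical illustration of the effect described above (invented coordinates, not taken from the CG1 script): take the midpoint of a segment in eye space, project it by dividing by depth, and recover the interpolation parameter on the image plane — it is no longer 0.5.

```python
def project(p):
    """Pinhole projection onto the plane z = 1: (x, y, z) -> (x/z, y/z)."""
    x, y, z = p
    return (x / z, y / z)

A, B = (-1.0, 0.0, 1.0), (1.0, 0.0, 3.0)
alpha = 0.5                                         # midpoint in world/eye space
C = tuple(alpha * a + (1 - alpha) * b for a, b in zip(A, B))

a, b, c = project(A), project(B), project(C)
# solve c = beta*a + (1 - beta)*b for beta using the x coordinate
beta = (c[0] - b[0]) / (a[0] - b[0])
print(alpha, beta)    # 0.5 vs 0.25: affine ratios are not preserved by projection
```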
CommonCrawl
I'm a beginner programmer and I've challenged myself to write a sudoku solver in C. I finished it in a few days and now I want to make it faster. As I've said, I'm a beginner programmer, so sorry that my code is long and messy. Does anyone know how I can make this faster? Would it be faster if I used recursion? And also, how can I measure real execution time? You initialize three lookup tables ("visited" sets) — one for the rows, one for the columns and one for the minisquares — to zero. Next, when you put a number in an empty cell, you mark it in the three corresponding "sets": current column, current row and current minisquare. This arrangement will bring the complexity of checking each row, column and minisquare from \$\Theta(d)\$ to \$\Theta(1)\$, where \$d\$ is the width/height of the input sudoku board. As for the actual algorithm, I suggest you use recursion/backtracking. That will clean up your code quite a bit, and will allow you to better adapt it to, say, \$4 \times 4\$ or \$16 \times 16\$ sudokus. The idea is as follows. You march through the board rows, each row from left to right. You leave the cells that have a predefined value as is, but you put 1 in the first empty cell and recur to the next cell. If 1 does not belong, you try 2, 3 and so on. Once you have found a valid value, move to the next empty cell. At some point it might happen that none of the numbers fit. In such a case you go one step backwards (backtracking) and try to increment the previous value. If you can translate from Java, see this, starting from line 163. It will be faster to go first where you have fewer possibilities. So first fill in every cell in the sudoku that has only one possibility. When no cell with a single possibility is left, recalculate; if there is still none, recurse on each possibility of one selected cell — the one with the fewest possibilities. The more accurately you can narrow down the possibilities, the less recursion you need, and the faster your program will be; if no recursion is needed at all, the running time is linear. The boolean table is initially set all to true. When you check all the sudoku cells, you fill the other table by counting the entries that are still true. To traverse the sudoku array I would prefer two variables, one for the column and one for the block, so the code will be more readable.
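Since only a sketch fits here, the following is a Python outline of the bookkeeping described above (three lookup tables plus backtracking); the same idea maps directly onto C boolean arrays such as row[9][10], col[9][10] and box[9][10]. For measuring execution time in C, wrapping the solve call with clock() from <time.h> is the usual approach.

```python
def solve(board):
    """board: 9x9 list of lists, 0 = empty. Solves in place, returns True on success."""
    n = 9
    row = [set() for _ in range(n)]
    col = [set() for _ in range(n)]
    box = [set() for _ in range(n)]
    empties = []
    for r in range(n):
        for c in range(n):
            v = board[r][c]
            if v == 0:
                empties.append((r, c))
            else:
                row[r].add(v); col[c].add(v); box[3 * (r // 3) + c // 3].add(v)

    def backtrack(i):
        if i == len(empties):
            return True
        r, c = empties[i]
        b = 3 * (r // 3) + c // 3
        for v in range(1, 10):
            if v in row[r] or v in col[c] or v in box[b]:
                continue                      # O(1) legality check
            board[r][c] = v
            row[r].add(v); col[c].add(v); box[b].add(v)
            if backtrack(i + 1):
                return True
            board[r][c] = 0                   # undo and try the next value
            row[r].remove(v); col[c].remove(v); box[b].remove(v)
        return False

    return backtrack(0)
```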
CommonCrawl
I have been thinking about the energy levels of an atom. When we study the line spectrum of the hydrogen atom, we say that when an electron jumps back from a higher shell to a lower shell it emits a photon of a certain frequency, and in that example we say that hydrogen has infinitely many energy levels and the electron can jump back from any level. But when we study bonding we say that some atoms lack d orbitals — for example, we say nitrogen has no d orbitals — yet when doing line spectra it has infinitely many energy levels. How do these two statements hold up with each other? The H atom has an infinite number of energy levels spanning a finite energy range. This range is 13.6 eV, the ionisation energy, and is equal to the Rydberg $R$ in energy. In the simplest (basic) theory the energy is $E_n=-R/n^2$, where $n$ is the principal quantum number ($n=1..\infty$); thus as the energy increases the energy levels become closer to one another. The energy also rises (becomes less negative) as $n$ increases. In addition, for each level $n$ there are other orbitals of nominally the same energy which describe the angular momentum and shape of the orbital. The orbitals are 1s, [2s, 2p], [3s, 3p, 3d], [4s, 4p, 4d, 4f] etc., where the levels in brackets are nominally of the same energy. (However, interaction between electrons changes these energies, but only slightly compared to their total energy.) The light emission (fluorescence) you refer to comes from transitions between any two of these levels (subject to energy and angular momentum conservation). Other types of atoms behave similarly, but because there are multiple electrons the equations describing the energy become far more complex. Thus H atoms do have d orbitals, just as N atoms do, but in their lowest energy state there are not enough electrons to fill any of these. The d orbitals only start to become filled as one reaches Sc in the transition metals. In H and N atoms higher orbitals can be reached, for example by absorbing photons or imparting energy from fast moving electrons in a discharge. However, in bonding, d orbitals will not be involved in the ground state bonding orbital of a molecule if there are not enough electrons and not enough energy to initially fill the d atomic orbitals. As soon as there are enough electrons, as in transition metal complexes, d orbitals become essential to understanding bonding. Note that in one case it talks about jumping from one shell to another shell, and in the other it talks about the types of orbitals. An atom has an infinite number of shells available to it, but in each shell only a finite number of orbitals is allowed. Nitrogen is filled (in the ground state) at the n=2 shell, so it only has l=0 (s) and l=1 (p) orbitals; it can't have d orbitals in the ground state.
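To make the "infinitely many levels in a finite range" point concrete, here is a small sketch computing hydrogen emission wavelengths from $E_n=-R/n^2$ (standard Rydberg formula with a rounded constant; not part of the original answer):

```python
R_H = 1.0968e7          # Rydberg constant for hydrogen, in 1/m (approximate)

def wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron drops n_upper -> n_lower."""
    inv_lambda = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_lambda

# Balmer series: transitions ending on n = 2 give the visible hydrogen lines
for n in range(3, 8):
    print(f"{n} -> 2: {wavelength_nm(n, 2):.1f} nm")
# the lines crowd together as n grows, because E_n -> 0 while n -> infinity
```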
CommonCrawl
What was mathematics like in Victorian Oxford? The Savilian Professorship of Geometry was established in 1619 by Sir Henry Savile, and has been held by many top mathematicians - including John Wallis (who introduced the $\infty$ notation), Edmond Halley (after whom Halley's comet is named), Edward Titchmarsh, and Sir Michael Atiyah. The current holder of the chair is Nigel Hitchin. Throughout the Victorian era three Savilian Professors of Geometry left their mark on mathematics at Oxford: Baden Powell, Henry Smith, and James Sylvester. These posters tell their story. All 6 posters as a pdf.
CommonCrawl
where $f:V\times R\rightarrow V$, $x \in V$ and $\lambda \in R$ is a bifurcation parameter. Equivariance under $\Gamma$ means that $f(\gamma x,\lambda)=\gamma f(x,\lambda)$ for all $\gamma\in\Gamma$. Assuming we have a trivial solution with full symmetry $\Gamma$ (every element of $\Gamma$ leaves the solution unchanged), if a bifurcation occurs as we vary $\lambda$ then the bifurcating solutions have less symmetry, but in a very precise way (either steady-state solutions or oscillations through a Hopf bifurcation). Similar theory can be applied to potential functions which are invariant under $\Gamma$ ($g(\gamma x,\lambda)=g(x,\lambda)$). The "bible" for this theory is Golubitsky et al. (1988), but a more recent, and accessible, text is Golubitsky and Stewart, which also covers advances in the area since, as well as an introduction to where the applications to networks are heading (where the symmetries of the system are no longer so obvious but can be described using groupoid formalism). My own interests in this area concern applications and the theory of systems of coupled cells where each cell has its own inherent symmetry (how these cells are coupled can have implications for the solutions you expect to see), for example Wood (2001). Applications include using the theory in the study of central pattern generators that control insect locomotion (see my insect locomotion page) and optimising the configuration of arrays of hydrophones, Wood et al. (2003). Hydrophones are essentially directionless underwater microphones used to detect anything from fish to submarines. In order to be able to detect the direction that sound is emanating from, they are placed in arrays of up to 20 hydrophones, and there is then a neat method for "steering" them towards the sound that is of interest. The question is how to place them within this array (see figure below). M. Golubitsky, D. Schaeffer, I. Stewart, Singularities and groups in bifurcation theory, Volume 2, Springer, 1988. M. Golubitsky, I. Stewart, The Symmetry Perspective, Birkhauser, 2003. D. Wood, A Cautionary Tale Of Coupling Cells With Internal Symmetries, International Journal of Bifurcation and Chaos 11, pp 123-132, 2001.
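For readers new to the area, the simplest concrete instance of this setup (the textbook example, not taken from the references above) is the pitchfork normal form with $\Gamma=\mathbb{Z}_2$ acting by $x\mapsto -x$:

```latex
% The Z_2-equivariant pitchfork: \Gamma = \mathbb{Z}_2 acts on V = \mathbb{R} by x \mapsto -x.
f(x,\lambda) = \lambda x - x^{3}, \qquad f(-x,\lambda) = -f(x,\lambda).
% The trivial solution x = 0 has the full \mathbb{Z}_2 symmetry; at \lambda = 0 it
% bifurcates into the branch x = \pm\sqrt{\lambda}, whose individual solutions are no
% longer fixed by the symmetry (the group action swaps the two branches).
```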
CommonCrawl
There is already an answered question about the problem of finding a similar deductive system in the case where n is not prime: Construct a deductive system where $1^n$ is provable iff $n$ is prime. I am also interested in the solution to that problem, because I don't understand the notation in the answer (i.e. I don't know what lt, ndiv, ndivsmaller, etc. mean) and I can't simply write a comment there to ask, because my reputation isn't high enough to do so yet. To prove a number $n$ is not prime it suffices to provide a divisor greater than 1 and less than $n$. Interpret $1^n : 1^m :: 1^k$ to mean that $n \times m = k$. Interpret $1^k$ to mean $k$ is composite. Note: This system is for proving that numbers are prime, but the question was later changed to ask the opposite. Interpret $1^n : 1^m :: 1^k$ to mean that there exists an integer solution in $a,b$ of $a\times n + b\times m = k$. And of course $1^n$ means $n$ is prime. Let $\epsilon = 1^0$ be the empty string. Here is the proof of the string $11$, which means "2 is prime". Here is a proof of $111$, which means 3 is prime.
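A throwaway sketch (mine, not the answerer's) that mechanises the first system above: the only inference is "for $n,m\ge 2$ derive $1^n : 1^m :: 1^{nm}$, and from that derive $1^{nm}$". Enumerating all such derivations confirms that $1^k$ is provable exactly when $k$ is composite.

```python
def provable_composites(limit):
    """All k <= limit for which the string 1^k is derivable in the first system:
    a derivation is just a choice of n, m >= 2 with n * m = k."""
    provable = set()
    for n in range(2, limit // 2 + 1):
        for m in range(2, limit // n + 1):
            provable.add(n * m)
    return provable

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

limit = 200
derived = provable_composites(limit)
assert all((k in derived) == (k >= 4 and not is_prime(k)) for k in range(2, limit + 1))
print("1^k derivable  <=>  k composite, for all k up to", limit)
```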
CommonCrawl
The largest number that can't be made is 7. This is because the lowest run of three consecutive numbers you can make is 8, 9 and 10. If you make these using 5s and 3s and then keep adding 3s to them, you should be able to make every number higher than them. Using $3z$ and $4z$ the highest number that can't be made is 5. This is because you can make 6, 7 and 8. We know you can make all the other numbers because we made all of the numbers from 11 to 20, and we can then get the rest by adding multiples of 10 (two 5z coins) onto any of these numbers. We have also noticed that if you could get change, the 1, 2, 4 & 7 could also be made. So if you had to pay $1z$ then you could pay $6z$ and be given $5z$ back. You can make $2z$ by paying $5z$ and getting $3z$ in change. For $4z$ you can pay $9z$ and get $5z$ back. Finally, $7z$ can be paid by paying $10z$ and getting $3z$ change. The numbers that can't be made are those $C$ for which $C = 2X + 7Y$ has no whole-number solutions $X$ and $Y$. For this, if you can make 2 consecutive numbers, then you can make everything higher by adding multiples of 2. So here the lowest consecutive numbers that can be made are $6 = 2 + 2 + 2 = 2 \times 3$ and $7 = 7 \times 1$, so everything higher than 6 can be made by adding multiples of two. So you can't make 1, 3, 5 and that's it. Generally, if your two coins are $a$ and $b$, where $a$ and $b$ are coprime (their only common factor is 1) and $a$ is smaller than $b$, then as soon as you can make $a$ consecutive numbers, you can make everything higher by adding multiples of $a$. E.g. if you have $4z$ and $5z$ then you need to make 4 consecutive numbers, so every number from 12 upwards can be made. If your numbers $c$ and $d$ are not coprime (they have a common factor greater than 1), then there will never be a highest number that they cannot make, as they will only be able to make multiples of their highest common factor, and never be able to make a set of consecutive numbers. E.g. if you have $4z$ and $6z$, their highest common factor is 2, and so you can only make numbers in the 2 times table, and never make any odd numbers. If you have $40z$ and $60z$, their highest common factor is 20, and so you can only make multiples of 20.
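The "make $a$ consecutive numbers, then add multiples of $a$" reasoning is easy to check by brute force; here is a quick sketch (not part of the students' solutions):

```python
def unmakeable(a, b):
    """Amounts that cannot be made from coins of value a and b (no change given).
    Only meaningful when a and b are coprime; the largest gap is then a*b - a - b."""
    limit = a * b                       # every gap lies below a*b when gcd(a, b) == 1
    makeable = {0}
    for n in range(1, limit + 1):
        if (n >= a and n - a in makeable) or (n >= b and n - b in makeable):
            makeable.add(n)
    return [n for n in range(1, limit + 1) if n not in makeable]

for a, b in [(3, 5), (3, 4), (2, 7)]:
    gaps = unmakeable(a, b)
    print(a, b, "cannot make:", gaps, "largest:", max(gaps), "=", a * b - a - b)
```

Running it reproduces the answers above: gaps 1, 2, 4, 7 for 3z and 5z; gaps 1, 2, 5 for 3z and 4z; gaps 1, 3, 5 for 2z and 7z.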
CommonCrawl
Vaithiyanathan, S.; Raghuvanshi, S.K.S.; Mishra, A.S.; Tripathi, M.K.; Misra, A.K.; Prasad, R.; Jakhmola, R.C. The aim of the experiment was to study the changes in the activities of various rumen fibre degrading enzymes due to the feeding of chemically treated mustard (Brassica campestris) straw in sheep. Mustard straw (MS) (<5 cm particle size) was treated either with urea (4% (w/w)), or with 2% sodium hydroxide (NaOH), or with alkaline hydrogen peroxide (2% NaOH and 1.5% hydrogen peroxide ($H_2O_2$)), and/or supplemented with 2% (w/w) urea. Seven maintenance type rations were prepared using MS (70 parts) with molasses (5 parts) and concentrate (25 parts). They were untreated MS (CMS), urea treated MS (UMS), urea supplemented MS (MSUS), alkali treated MS (AMS), alkali treated and urea supplemented MS (AMS-US), alkali $H_2O_2$ treated MS (AHMS) and alkali $H_2O_2$ treated and urea supplemented MS (AHMS-US). They were then compressed into a complete feed block with the help of a block-making machine. Forty-two male hoggets of the Malpura breed of sheep were equally distributed into the treatment groups and were offered feed and water ad libitum. At the end of the 21-day feeding trial, rumen liquor was collected through a stomach tube from three animals in each group at 0 h, 4 h, 8 h and 12 h post feeding. Results showed that the enzyme levels varied from 8.52 to 11.12, 40.85 to 50.37, 3.22 to 3.78, 2.09 to 2.77 and 31.44 to 44.24 units/100 ml SRL, respectively, for carboxymethyl cellulase (CMCase), $\alpha$-amylase, microcrystalline cellulase (MCCase), filter paper (FP) degrading enzyme and $\alpha$-glucosidase. Processing of MS affected the enzyme activities, in that NaOH and AHP treatment significantly reduced CMCase and FP degrading enzyme. Urea treatment increased the activity of MCCase and $\alpha$-glucosidase, while urea supplementation increased the activity of CMCase, FP degrading enzyme and $\alpha$-glucosidase. The CMCase, $\alpha$-amylase and $\alpha$-glucosidase activities were highest at 4 h, whereas MCCase and FP degrading enzyme had maximum activities at 12 h post feeding. The results suggested that MS might need a longer time in the rumen for its effective degradation.
CommonCrawl
A puzzle related to magic squares: grids of integers where all rows, columns, and diagonals have the same sum. I'm having trouble trying to make a $3\times3$ magic square with magic number $12$ and I can't figure it out. Can you please help me? The following is a magic square: each row, column and diagonal add to 34, all of the numbers 1 to 16 appear exactly once. Find the missing numbers. How to calculate total number of squares if n×n square box available. Need to calculate 1×1, 2×2 up to n. How big can a witchcraft square be? I have to make a magic square using number 4-12. No ordinary magic square part 2. How many solutions are there? Number of magic squares with magic constant 0? How can we determine the number of magic squares with magic constant 0? What type of magic square is this?
CommonCrawl
Abstract: For nonempty subsets $F$ and $K$ of a nonempty set $V$ and a real valued function $f$ on $V\times V$, the notion of $f$-best simultaneous approximation to $F$ from $K$ is introduced as an extension of the known notion of best simultaneous approximation in normed linear spaces. The concept of a uniformly quasi-convex function on a vector space is also introduced. Sufficient conditions for the existence and uniqueness of $f$-best simultaneous approximation are obtained.
CommonCrawl
Boente and Fraiman studied robust nonparametric estimators for regression or autoregression problems when the observations exhibit serial dependence. They established strong consistency of two families of M-type robust equivariant estimators for $\phi$-mixing processes. In this paper we extend their results to the weaker $\alpha$-mixing processes.
CommonCrawl
I recently worked on Data Mining and Analysis on Twitter as my semester project at EPFL. During the 4 months that we worked on the project, we were able to achieve many goals. In this post, I will be describing how we can cluster a small group of users on Twitter based on different similarity metrics, together with a brief comparison of several clustering methods that could be used for this. For this analysis, we used three different user lists on Twitter as the ground truth data for a group of about five hundred users. We obtained all the tweets from the users who were listed in the three lists and then tried to obtain clusters from different similarity metrics using the spectral clustering algorithm. Beyond the social connections, we also explore other kinds of connections between users in order to find out which other features affect users being listed together. We present results of applying the spectral clustering algorithm using the modularity matrix and the symmetric normalized Laplacian matrix, and compare these approaches on several different input matrices formed by different combinations of the above similarity measures. Suppose we have a set of n objects $$x_1,x_2,x_3,\dots,x_n$$ with a pairwise similarity function defined between them which is symmetric and non-negative. Spectral clustering is the set of methods and techniques that partition the set into clusters by using the eigenvectors of matrices. The motivation behind using eigenvectors for clustering is that the change of representation induced by the eigenvectors makes the cluster properties of the initial data set much more evident. In this way, spectral clustering is able to separate data points that could not be resolved by applying k-means clustering directly, for instance, as the latter tends to deliver convex sets of points. Since the introduction of spectral methods in "Lower Bounds for the Partitioning of Graphs" there has been a lot of research trying different matrices for the calculation of eigenvectors, followed by clustering on the eigenvectors. A complete discussion of the different spectral algorithms and matrices would take a new post, so I won't go into much detail here. Now, I will present the results of applying the spectral clustering algorithm with these two matrices on the different input similarity matrices. Before I begin, let's have a look at the spy plot of the user connections. The users have been ordered by the lists that they belong to and therefore, we can immediately observe three communities present in the network by looking at the plot. We present the results of applying the algorithm to users' social connections as well as to several other individual similarity measures (user mention similarity, description content similarity and tweet content similarity), followed by a simple combination of the different similarity measures. For finding the combined similarity measure, we sum all the different similarity measures. Since the different similarity measures can be on different scales, these similarity measures are normalized before we add them together and apply the clustering algorithms. Therefore, the adjacency matrix that corresponds to the combined similarity measures is the sum of all the individual normalized adjacency matrices.
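To make the combination step concrete, here is a schematic sketch — not the project's actual code — of summing normalized similarity matrices and feeding the result to an off-the-shelf spectral clustering routine (scikit-learn's implementation embeds with a normalized Laplacian, so it corresponds to the second family of results rather than the modularity-matrix one), then scoring against the ground truth lists.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def combine(similarities):
    """Scale each similarity matrix to [0, 1] and sum them."""
    total = np.zeros_like(np.asarray(similarities[0], dtype=float))
    for S in similarities:
        S = np.asarray(S, dtype=float)
        if S.max() > 0:
            S = S / S.max()
        total += S
    return total

def cluster_and_score(similarities, ground_truth, k=3, seed=0):
    A = combine(similarities)
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=seed).fit_predict(A)
    return (normalized_mutual_info_score(ground_truth, labels),
            adjusted_rand_score(ground_truth, labels))

# similarities = [connections, mentions, tweet_sim, description_sim]  # n x n arrays
# nmi, ari = cluster_and_score(similarities, list_membership, k=3)
```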
In order to measure the accuracy of our clustering algorithms, we use several different cluster evaluation objective functions to compare the obtained clusters with the ground truth clusters, which represent the distribution of the users into the different lists. I will present the values of Normalized Mutual Information and the Rand index during the analysis. A higher value of these measures means a better correspondence with the ground truth. The table at the end of this post summarizes these values for different clustering algorithms and similarity metrics. I will now show some visualizations of our obtained results. The results have been obtained using Gephi for visualization. Note that the arrangement of the nodes in space doesn't represent any communities. The arrangement of nodes in the visualizations corresponds to a layout. We keep the same layout for all the visualizations so that it is easy to see the results of the clustering process. The different clusters in the visualizations are represented by different colours. This means that all the nodes in a visualization that have the same colour have been placed into one cluster for that visualization. Figure (a) shows the communities formed using the ground truth data. Figure (b) shows the clusters obtained for the network using the user connections as the only similarity measure, using the modularity matrix for spectral clustering. Figure (c) shows the results of community detection using the combination of all similarity measures for the modularity matrix. Finally, Figure (d) shows the results of community detection applied to the combined similarity measures using the symmetric normalized Laplacian matrix. We can observe that users' social connections are the dominant factor for dividing this group of users into different communities. Comparing Figure (a) and Figure (b) — the ground truth clusters and the clusters obtained by applying community detection with the modularity matrix on social connections, respectively — we can also see that high benchmark values correspond to community detection that closely matches the ground truth. The other individual similarity measures, like the user mentions, tweet content similarity and description content similarity, don't perform very well when used for community detection with the modularity matrix. We also observe that the results for the combined similarity measures, i.e. the sum of connections, mentions, tweet content similarity and description content similarity, do not perform as well as the connections alone. This means that the addition of low-information measures like the mentions, tweet content similarity and description content similarity decreases the accuracy of the clustering algorithm even in the presence of the highly informative social connections. A reason for the poor performance of the similarity measures based on the tweets, descriptions and mentions can be that the users in this group are similar and generally post similar content on the web. This also means that these user behaviours don't seem to be consistent with the ground truth data. However, the detection of communities from the users' social connections using the symmetric normalized Laplacian matrix fails. This is because Laplacian-based methods are known to be quite sensitive to the presence of disconnected nodes in the graph.
Therefore, the results of the social connections for the symmetric normalized Laplacian are similar to the other individual results, which means that we are not able to reconstruct any valuable cluster information when using the normalized Laplacian. But the combined matrix performs consistently even when using the normalized Laplacian for community detection. This is because the addition of several different kinds of information to the social connections makes the graph connected, and therefore we now observe results consistent with the ones obtained while using the modularity matrix for community detection. Therefore, we can conclude from these results that user connections are a very informative measure and, for this model, they give the best clustering results when used with the modularity matrix for spectral clustering. We can also see that adding several other non-informative measures to this layer decreases the accuracy considerably when used with the modularity matrix. However, an interesting observation emerges from our discussion and results regarding the symmetric normalized Laplacian matrix: even a highly informative layer (user connections) can prove to be a very bad basis for clustering if not used with the correct clustering algorithm, due to the presence of disconnected nodes in the graph. We also found that even adding a non-informative (or slightly informative) layer, as in this case, can improve the connectivity and therefore improve the clustering results.
CommonCrawl
The solution says that it is $(\lambda - 2)(\lambda + 2)(\lambda - 3)$. I feel like I am so close, but I don't get what I am supposed to do to get to the solution. What is wrong with this solution? Here we line up two copies of the same matrix side by side and draw blue and red lines, then add all products of numbers on the blue lines and subtract the products of numbers on the red lines. The solution is off by 4.
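Since the matrix from the question is not reproduced above, the sketch below uses a stand-in upper-triangular matrix with the stated eigenvalues 2, −2 and 3; it only shows how to check a Rule-of-Sarrus hand computation against sympy.

```python
import sympy as sp

lam = sp.symbols("lambda")
# stand-in matrix with eigenvalues 2, -2, 3 (NOT the matrix from the question)
A = sp.Matrix([[2, 1, 0],
               [0, -2, 1],
               [0, 0, 3]])

p = A.charpoly(lam).as_expr()
print(sp.factor(p))                                    # (lambda - 3)*(lambda - 2)*(lambda + 2)
print(sp.expand((lam - 2)*(lam + 2)*(lam - 3) - p))    # 0  =>  matches the book's answer
```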
CommonCrawl
where $\alpha$ is the instantaneous expected return on the stock; $\sigma^2$ is the instantaneous variance of the return, conditional on no arrivals of important new information (i.e., the Poisson event does not occur); $dW$ is a standard Gauss-Wiener process; $q(t)$ is the independent Poisson process; $dq$ and $dW$ are assumed to be independent; $\lambda$ is the mean number of arrivals per unit time; $k=E(Y-1)$, where $(Y-1)$ is the random variable percentage change in the stock price if the Poisson event occurs; and $E$ is the expectation operator over the random variable $Y$. Now my question is: why do we use $E(Y-1)$ and not $E(Y)$? That is, what is the purpose of the $-1$? If $Y=1$ the stock price doesn't change, since $Y-1$ is a percentage change, not an absolute one, so we have to subtract one when drift compensating. See my book Concepts etc for my discussion.
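A quick way to see the role of $k=E(Y-1)$ is to simulate the jump-diffusion and check that only with the $\lambda k$ compensator does $E[S_T]$ grow at rate $\alpha$. The sketch below assumes lognormal jump sizes and arbitrary parameter values; it is an illustration, not anything taken from the question or answer.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, alpha, sigma, lam, T = 1.0, 0.08, 0.2, 0.5, 1.0
mu_J, sig_J = 0.05, 0.10                   # jump size Y is lognormal(mu_J, sig_J)
k = np.exp(mu_J + 0.5 * sig_J**2) - 1      # k = E[Y - 1]

n_paths = 200_000
W = rng.standard_normal(n_paths) * np.sqrt(T)
N = rng.poisson(lam * T, n_paths)          # number of jumps on each path
# sum of log jump sizes given N jumps is Normal(N*mu_J, sqrt(N)*sig_J)
log_jumps = N * mu_J + np.sqrt(N) * sig_J * rng.standard_normal(n_paths)

def terminal(compensate):
    drift = alpha - (lam * k if compensate else 0.0) - 0.5 * sigma**2
    return S0 * np.exp(drift * T + sigma * W + log_jumps)

print("target E[S_T] = exp(alpha*T)   :", np.exp(alpha * T))
print("with the -lam*k compensator    :", terminal(True).mean())
print("without it (drift alpha only)  :", terminal(False).mean())
```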
CommonCrawl
Is there a connected $T_2$-space $(X,\tau)$ with more than 1 point and with the following property? Whenever $D\subseteq X$ is dense, $X\setminus D$ is not dense. There is such a thing as a submaximal topology, in which every dense subset is open. These obviously satisfy your condition: if $D$ is dense then $D$ is open, so $X\setminus D$ is closed, and a closed dense set would have to be all of $X$, forcing $D=\emptyset$ and contradicting the density of $D$. Take any connected Hausdorff space $(X,\tau)$. Let $\mathscr F$ be an ultrafilter of $\tau$-dense sets. Let $\tau'$ be the topology generated by $\tau\cup \mathscr F$. Then $(X,\tau')$ is submaximal, Hausdorff and connected.
CommonCrawl
Let $X$ be a non-singular $C^\infty$ vector field on a three-manifold $M$. There are some obvious obstructions for finding a volume form that is preserved under the flow given by $X$: if $X$ is singular, or if there's a sphere to which $X$ is transverse (which implies a singularity), or if $X$ is transverse to a torus. Could it be that these are all the obstructions? Or is there a theorem characterizing all topological obstructions? I have a Riemannian metric on the manifold to begin with, and I want to change the metric so that the vector field (that avoids the above obstructions) becomes divergence free, but is still smooth or at least $C^1$.
CommonCrawl
3.10 Is There a Foreground to the X-ray Background? 4.13 Is the High Energy Radiation From Cen A and 3C273 Scattered Jet Radiation? 4.15 Is the Detection of a Gravitational Echo from a Gamma-Ray Burst Likely? 6.06 Veritas in Duo Sigma--Look at All Those Sources! 6.08 A Low Cost Approach to EUVE Spacecraft Operations: The Future in Astrophysical Satellite Operations? 7.05 Are There Optical Proxies for Solar Flare X-ray Emission? 15.03 Periodic X-ray Emission in Flare Stars: Resonant MHD Absorption? 15.04 On the Weakness of C I and O I Resonance Line Emission from the Chromosphere of $\alpha$~Ori. 17.09 Newly Identified, Faint Dwarfs Near the Sun: Brown Dwarfs Masquerading as M Stars? 19.09 What is the Origin of the High-$z$ Gas in NGC 4631? 23.04 The Spheroid of M33: Bulge or Halo? R. Edelson (University of Iowa), D. M. Crenshaw (CSC--NASA/GSFC), B. M. Peterson (Ohio State), J. Clavel (ISO Observatory, ESTEC), D. Alloin (Observatoire de Paris), K. Horne, K. T. Korista (STScI), G. A. Kriss, J. H. Krolik (Johns Hopkins University), M. A. Malkan (U.C.L.A.), H. Netzer (Wise Observatory, Tel Aviv), P. T. O'Brien (Oxford University), S. Penton, J. M. Shull (University of Colorado), G. A. Reichert (USRA--NASA/GSFC), P. M. Rodriguez-Pascual, W. Wamsteker (European Space Agency, IUE Observatory, Madrid), M.-H. Ulrich (ESO, Garching), R. Warwick (University of Leicester), et al. 32.06 A Non-LTE Model for the Origin of the CO First Overtone Band Emission in Young Stellar Objects.
CommonCrawl
I've recently had the good fortune of winning three pigs at the village fete. However, I'm not sure whether my triangular garden is big enough for them as well as my collection of metal, wooden and other deckchairs. The pigs are of substantial size and my tape measure is not long enough to measure the longest side of the garden. I've also heard that pigs are very intelligent and would like to hear suggestions for entertaining them. It seems as though you have an issue with these pigs hogging your space. If your garden is right-angled, you can use Stythagoras' theorem. Otherwise, I recommend pigonometric functions: the swine and coswine rules will be helpful. I shan't boar you with the details. If they are math-ham-atically inclined, perhaps you could introduce them to Porkdust. Maybe skip the article about the ham sandwich theorem. My Bacon number is 2. After a thrilling winter Olympics, I have been inspired to take up competitive sport. However, my previous interests lie mostly in multivariable calculus and I have no clue how to follow a sporting lifestyle. It's completely different from anything I've done before. Do you have any experience in this area? Congratulations on your change of variables. On the surface, it might just seem a bit of fun and games, but exercise is integral to a healthy life. I recommend heading down to the gym to see if you can join a combined aquatic and winter sports team. Once a member, you can expect to be $\nabla$ed on your $\nabla\times$ing and $\nabla\cdot$ing. Thanks to your helpful advice in Chalkdust issue 06, I am now the pope! The first ever pope, in fact, to also understand finite element methods. Unfortunately, I went for a stroll the other day to purchase some badger feed and, being new to the area, I got completely lost. How can I get home? Never fear! If you're lost in Italy, just speak to Anna (my pal-in-Rome). She cannoli point you in the right direction. For future sojourns, however, I have one pizza advice. The Rome-bus will take you directly to St Peter's Square. From there, it's a short hop to the numerical-analysistine chapel. Make sure you get off at the right stop though — otherwise you'll be pasta point of no return. After the excesses of the festive season, I decided to participate in the trend known as Veganuary. For 31 days I forewent all animal-based products, in search of acceptance on my Instagram page. Now that the month is over, I have decided to permanently adopt a vegan lifestyle, and am looking to diversify my cooking. Would you happen to know of any good recipes? My dear child, it seems you are limiting yourself to s-kale-r products as you are cross with yourself. I consulted on this matter with my friend William Hamiltomatoes and my work colleague Henri Poincarrots, with whom I commute. I am afraid to report that your choice of ingredients will be limited to vegetabelian groups. Furthermore, you will no longer be able to eat duck a Lagrange (as we have realised that Lagrange is an animal). If you decide to weaken your constraints, there are stiltons of vegetarian options. I myself enjoy macaroni cheese, or for something actually Italian, ris-8. If you can't find rennet-free parmigiano-reggiano, my briemann hypothesis is that any other hard cheese is a goudapproximation.
CommonCrawl
Hence, artificial gravity can be created by simply rotating a spacecraft to create the effect of gravity on long journeys into space, and a warp bubble can be used to travel to distant places at many times the speed of light without locally exceeding the speed of light inside the warp bubble. This candy ball bubble wrap party game is a crowd pleaser. I've played it many times with groups of kids of various ages and it's always a big hit. Image via Build It Solar. According to Build It Solar, bubble wrap is a common, effective insulator during the winter months. Applying bubble wrap to your windows takes just 15 seconds per window, meaning you can have your entire house insulated in mere minutes. The spacecraft will make it to $\alpha$ Centauri in 0.43 years as measured by an Earth observer and an observer in the flat space-time volume encapsulated by the warp bubble. A formula to arrive at $\gamma$ is given in the paper.
CommonCrawl
Consider a sample of $n$ independent normal random variables. I would like to identify a systematic way of calculating the probability that the sum of a subset of them is larger than the sum of the rest. An example case: a population of fish with mean 10 kg and standard deviation 3 kg. I catch five fish ($n=5$). What is the probability that two of the fish together weigh more than the other three? One approach is to calculate the probability for every combination of fish and then use the inclusion-exclusion formula for their union. Is there anything smarter? Note: if four fish were considered, the probability of having two of them heavier than the other two should be one. How could this be computed immediately? Thanks for the answers. Your example suggests that not only are the $n$ variables $X_1,X_2,\ldots,X_n$ independent, they also have the same Normal distribution. Let its parameters be $\mu$ (the mean) and $\sigma^2$ (the variance) and suppose the subset consists of $k$ of these variables. We might as well index the variables so that $X_1,\ldots, X_k$ are this subset. The difference $D=\sum_{i=1}^{k}X_i-\sum_{i=k+1}^{n}X_i$ is then Normal with mean $(2k-n)\mu$ and variance $n\sigma^2$, so the desired probability is $\Pr(D>0)=\Phi\!\left((2k-n)\mu/(\sigma\sqrt{n})\right)$. Little needs to change in this analysis even when the $X_i$ have different normal distributions or are even correlated: you only need to assume they have an $n$-variate Normal distribution to assure their linear combination still has a Normal distribution. The calculations are carried out in the same way and result in a similar formula. The agreement is close and the small absolute z-score allows us to attribute the discrepancy to random fluctuations rather than any error in the theoretical derivation.
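A quick numerical check of that closed form for the fish example (two of five fish, $\mu = 10$, $\sigma = 3$), using a Monte Carlo sample of arbitrary size:

```python
# Check of the closed form P = Phi((2k - n) * mu / (sigma * sqrt(n)))
# for the fish example above (k = 2 of n = 5, mu = 10, sigma = 3).
import numpy as np
from scipy.stats import norm

mu, sigma, n, k = 10.0, 3.0, 5, 2
closed_form = norm.cdf((2 * k - n) * mu / (sigma * np.sqrt(n)))

rng = np.random.default_rng(1)
X = rng.normal(mu, sigma, size=(1_000_000, n))
monte_carlo = (X[:, :k].sum(axis=1) > X[:, k:].sum(axis=1)).mean()

print(closed_form, monte_carlo)   # both come out around 0.068
```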
CommonCrawl
An elliptic curve $E/\Bbb Q$ is said to be modular 1) if there exists a non-constant morphism $X_0(N_E) \to E$ defined over $\Bbb Q$, or 2) if there exists a modular form $f$ with $L(E,s) = L(f,s)$. I get that from the first definition, one can deduce the second by pullback of a holomorphic differential on $E$ to a newform which has the same $L$-function, so my question is how does one get from the second to the first? Are both definitions equivalent? They seem to be used interchangeably by some authors but I have not been able to find any reference on this. The book Diamond–Shurman "A first course in modular forms", in particular p. 362, tells you how $2)$ implies $1)$. This is actually a deep result: it requires the use of Faltings' isogeny theorem. If $L(f,s) = L(E,s)$ for some normalized newform $f \in S_2(\Gamma_0(N_E))$, then $a_p(f) = a_p(E)$ for every prime $p$ and then all the Fourier coefficients of $f$ at $i\infty$ are integers. In particular, the abelian variety $A_f$ constructed by Eichler and Shimura has dimension $[K_f : \Bbb Q] = 1$, where $K_f$ is the coefficient field of $f$. Thus, we have an elliptic curve $A_f$ such that $L_p(A_f, s) = L_p(E,s)$ for almost every prime $p$ (equation 8.42 in Diamond–Shurman). Faltings' isogeny theorem asserts that you have an isogeny $\phi : A_f \to E$. On page 247 of Diamond–Shurman, using the decomposition of $J_0(N)$, one gets an isogeny $J_0(N) \to E$, and on page 216, we get a non-constant holomorphic map $X_0(N) \to E(\Bbb C)$. Finally, it is mentioned on page 292 that we get a non-constant morphism $X_0(N) \to E$ defined over $\Bbb Q$ (and work of Carayol ensures that we may take $N = N_E$).
CommonCrawl
I remember the first time I was surprised by a model. I was working on the conditions under which a mutualist can protect its host from a pathogen, and in particular whether the mutualist can persist or will be displaced by the pathogen (unless there are multiple populations connected by dispersal, the answer is no). What surprised me was how, in the end, the answer to this question depended on the relative value of three parameters. Of course, nothing in modeling should be surprising, because the model encompasses the entirety of its own rules, and so of course the answer is in there, waiting to be found. But where do the models come from? People. Models are written by people. And this is what I find fascinating. Modeling is the ultimate exercise in world building. Mark J.P. Wolf wrote that creating a world "renews our vision and gives us new perspective and insight into ontological questions that might otherwise escape our notice within the default assumptions we make about reality". But as far as ecological models are concerned, these default assumptions are what constitutes the basis of our model. Populations increase in size until they consume the amount of resources that the system receives. Being eaten makes your population smaller. It's better for your growth rate to be adapted than to not be. Things move around. And yet, despite these being self evident, I am often surprised by what happens when I mix them together. Isn't this amazing? That when we pool together the things we know, we create yet more things we didn't know yet? Which brings me to the point: modeling, deep down, is experimental work. Take, for example, the classic logistic model, $dN/dt = rN(1 - N/K)$, versus the formulation $dN/dt = rN - \alpha N^2$, where $\alpha$ is the rate at which the individuals will compete for the resources. And whereas they would reach $K$ individuals before, they will now reach $r/\alpha$. These are two worlds with very different emphases: the first has an upper limit to growth, which is hard-coded. The second also has an upper limit to growth, but this time it emerges from the choice of being explicit about the fact that individuals compete for the resource. But of course neither of these models is explicit about the fact that resources flow in and out of the system, and so we may want to add an equation for this. And we would need to add a term to explain how the resource is converted into biomass for the population. Should it be fixed, or depend on the metabolic rates? There is a very deep rabbit hole we can bury ourselves in just when deciding how to represent the simple fact that living organisms need to eat in order to grow. With this in mind, it is hardly surprising to create models whose behavior we cannot anticipate. Once the protocol/model is set up, comes the exploration phase, and the manipulation phase. In a way, speaking about "numerical experiment" is not some sleight of hand designed to make modeling look more practical than what it is; as a modeler, I am experimenting on my system, because although I may have created it, I do not understand it.
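As a small illustration of the two formulations above (with made-up parameter values), a crude Euler integration shows that when $\alpha = r/K$ the two models settle on the same equilibrium, even though the upper limit is hard-coded in one and emergent in the other.

```python
# Sketch contrasting the two growth models mentioned above:
#   dN/dt = r*N*(1 - N/K)      (carrying capacity K written in directly)
#   dN/dt = r*N - alpha*N**2   (the limit emerges as r/alpha from competition)
# Parameter values are made up for illustration.
r, K = 0.5, 100.0
alpha = r / K            # with this choice the two models coincide
N1 = N2 = 5.0
dt, steps = 0.01, 5000   # simple Euler integration over 50 time units

for _ in range(steps):
    N1 += dt * (r * N1 * (1 - N1 / K))
    N2 += dt * (r * N2 - alpha * N2**2)

print(N1, N2, r / alpha)  # both approach the equilibrium r/alpha = K = 100
```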
CommonCrawl
Two-dimensional nuclear magnetic resonance studies of thymosin-alpha(1) (a myosin light chain kinase activating peptide), and ribonuclease a in the presence of uridine vanadate. 1. The structure of the peptide thymosin $\alpha_1$ has been investigated by circular dichroism and one- and two-dimensional nuclear magnetic resonance spectroscopy. Thymosin $\alpha_1$ is a highly acidic peptide composed of 28 amino acid residues. Through the use of circular dichroism and two-dimensional nuclear magnetic resonance techniques, the structure of thymosin $\alpha_1$ in 30% (v/v) deuterated 2,2,2-trifluoroethanol has been solved. Thymosin $\alpha_1$ contains an $\alpha$-helix extending from residue 16 to 26 and a turn between residues 5 and 8. Thymosin $\alpha_1$ has been shown to be a potent activator of skeletal muscle myosin light chain kinase, a well known calmodulin-dependent enzyme. Using the solution structures of thymosin $\alpha_1$ and the calmodulin binding domain of skeletal muscle myosin light chain kinase, computer modelling suggests that electrostatic interactions comprise the major interacting force between these two peptides. 2. The binding of the proposed transition state analog, uridine vanadate, to ribonuclease A has been investigated by one- and two-dimensional nuclear magnetic resonance spectroscopy. Analysis of the homonuclear nuclear Overhauser and exchange spectroscopy spectrum of the uridine vanadate/ribonuclease A complex exhibits cross peaks between the H$\varepsilon$1 proton of histidine 12 of RNase A and both the C$_6$H and C$_5$H protons of uridine vanadate. No cross peaks were observed between the H$\varepsilon$1 proton of histidine 119 of ribonuclease A and the C$_6$H and C$_5$H protons of uridine vanadate. However, the distances calculated from the crystallographic structure show that the H$\varepsilon$1 proton of histidine 119 is closer to the C$_6$H and C$_5$H protons of uridine vanadate than is the H$\varepsilon$1 proton of histidine 12. These results suggest that there is a significant difference in the position of the histidine 119 side chain in the crystallographic and solution structures of the uridine vanadate/ribonuclease A complex. Dept. of Chemistry and Biochemistry. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1994 .V43. Source: Dissertation Abstracts International, Volume: 56-11, Section: B, page: 6099. Adviser: Lana Lee. Thesis (Ph.D.)--University of Windsor (Canada), 1995. Veenstra, Timothy Daniel., "Two-dimensional nuclear magnetic resonance studies of thymosin-alpha(1) (a myosin light chain kinase activating peptide), and ribonuclease a in the presence of uridine vanadate." (1995). Electronic Theses and Dissertations. 4441.
CommonCrawl
This paper deals with the trace regression model where $n$ entries or linear combinations of entries of an unknown $m_1\times m_2$ matrix $A_0$ corrupted by noise are observed. We propose a new nuclear norm penalized estimator of $A_0$ and establish a general sharp oracle inequality for this estimator for arbitrary values of $n,m_1,m_2$ under the condition of isometry in expectation. Then this method is applied to the matrix completion problem. In this case, the estimator admits a simple explicit form and we prove that it satisfies oracle inequalities with faster rates of convergence than in the previous works. They are valid, in particular, in the high-dimensional setting $m_1 m_2 \gg n$. We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix $A_0$, a non-minimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor. Finally, we show that our procedure provides an exact recovery of the rank of $A_0$ with probability close to 1. We also discuss the statistical learning setting where there is no underlying model determined by $A_0$ and the aim is to find the best trace regression model approximating the data.
CommonCrawl
You're interested in writing a program to classify triangles. Triangles can be classified according to their internal angles. If one of the internal angles is exactly 90 degrees, then that triangle is known as a "right" triangle. If one of the internal angles is greater than 90 degrees, that triangle is known as an "obtuse" triangle. Otherwise, all the internal angles are less than 90 degrees and the triangle is known as an "acute" triangle. Your program must determine, for each set of three points, whether or not those points form a triangle. If the three points are not distinct, or the three points are collinear, then those points do not form a valid triangle. (Another way is to calculate the area of the triangle; valid triangles must have non-zero area.) Otherwise, your program will classify the triangle as one of "acute", "obtuse", or "right", and one of "isosceles" or "scalene". The first line of input gives the number of cases, $N$. $N$ lines follow, each containing integers $x_1,y_1,x_2,y_2,x_3,y_3$. You may assume that $1 \leq N \leq 100$ and $-1000 \leq x_1, y_1, x_2, y_2, x_3, y_3 \leq 1000$.
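A sketch of a solution in Python is below. The excerpt does not show the required output format, so the print format is a guess; the classification itself uses squared side lengths so that the right/acute/obtuse test stays in exact integer arithmetic.

```python
# Sketch solution (the exact output format is not shown in the excerpt, so the
# "Case #i:" format below is a guess). Squared side lengths keep the
# right/acute/obtuse comparison exact.
import sys

def classify(x1, y1, x2, y2, x3, y3):
    # zero area <=> collinear or repeated points, i.e. not a valid triangle
    area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    if area2 == 0:
        return "not a triangle"
    a = (x2 - x3) ** 2 + (y2 - y3) ** 2
    b = (x1 - x3) ** 2 + (y1 - y3) ** 2
    c = (x1 - x2) ** 2 + (y1 - y2) ** 2
    a, b, c = sorted((a, b, c))            # c is now the largest squared side
    sides = "isosceles" if (a == b or b == c) else "scalene"
    if a + b == c:
        angle = "right"
    elif a + b < c:
        angle = "obtuse"
    else:
        angle = "acute"
    return f"{sides} {angle} triangle"

data = sys.stdin.read().split()
n = int(data[0])
for i in range(n):
    coords = map(int, data[1 + 6 * i: 7 + 6 * i])
    print(f"Case #{i + 1}: {classify(*coords)}")
```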
CommonCrawl
Think about how best to approximate these things from the physical world around us. You will need to make some estimations and find information from friends or other sources, as would any scientist! Take care to represent all of your answers using a sensible number of decimal places and be sure to note all of your assumptions clearly.
1. Light travels at $c=3\times 10^8$ metres per second. How fast is this in miles per hour? How many times faster is this than a sports car?
2. The Milky Way is a spiral galaxy with diameter about 100,000 light years and thickness about 1000 light years. There are estimated to be between 100 billion and 400 billion stars in the galaxy. Estimate the average distance between these stars.
3. The density of lead is $11.34$ g/cm$^3$. How big would a tonne of lead be?
4. Estimate the mass of ore it takes to produce a roll of aluminum kitchen foil.
5. How many AA batteries contain enough charge between them to run a laptop for an hour?
6. Estimate how many atoms there are in a staple.
7. Einstein's equation tells us that the energy $E$ stored in matter equals $mc^2$, where $m$ is the mass and $c$ is the speed of light. How much energy is contained in the staple from question 6? How long could this energy run your laptop for?
8. How much energy would it take to raise the air temperature of the room you are in by 1$^\circ$C? How much gas must be burned to produce this much energy? What is the cost of that much gas?
An obvious part of the skill in applying mathematics to physics is knowing the fundamental formulae and constants relevant to a problem. By not providing these pieces of information directly, you need to engage at a deeper level with the problems. You might not necessarily know all of the required formulae, but working out which parts you can and cannot do is all part of the problem solving process!
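As a flavour of the kind of estimation involved, here is a rough worked sketch for questions 1 and 7; the sports-car top speed, the mass of a staple and the laptop's power draw are assumptions you would want to justify yourself.

```python
# Rough worked examples for questions 1 and 7. The sports-car top speed,
# staple mass and laptop power are assumptions made for illustration.
c = 3e8                                    # m/s
c_mph = c * 3600 / 1609.34                 # about 6.7e8 mph
sports_car_mph = 200.0                     # assumed top speed
print(c_mph / sports_car_mph)              # light is roughly 3 million times faster

m_staple = 33e-6                           # kg, assumed mass of one staple
E = m_staple * c**2                        # about 3e12 J
laptop_watts = 50.0                        # assumed power draw
print(E / laptop_watts / 3600 / 24 / 365)  # roughly 1900 years of laptop use
```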
CommonCrawl
This section describes how to use MLlib's tooling for tuning ML algorithms and Pipelines. Built-in Cross-Validation and other tooling allow users to optimize hyperparameters in algorithms and Pipelines. An important task in ML is model selection, or using data to find the best model or parameters for a given task. This is also called tuning. Tuning may be done for individual Estimators such as LogisticRegression, or for entire Pipelines which include multiple algorithms, featurization, and other steps. Users can tune an entire Pipeline at once, rather than tuning each element in the Pipeline separately. They split the input data into separate training and test datasets. For each ParamMap, they fit the Estimator using those parameters, get the fitted Model, and evaluate the Model's performance using the Evaluator. They select the Model produced by the best-performing set of parameters. The Evaluator can be a RegressionEvaluator for regression problems, a BinaryClassificationEvaluator for binary data, or a MulticlassClassificationEvaluator for multiclass problems. The default metric used to choose the best ParamMap can be overridden by the setMetricName method in each of these evaluators. To help construct the parameter grid, users can use the ParamGridBuilder utility. CrossValidator begins by splitting the dataset into a set of folds which are used as separate training and test datasets. E.g., with $k=3$ folds, CrossValidator will generate 3 (training, test) dataset pairs, each of which uses 2/3 of the data for training and 1/3 for testing. To evaluate a particular ParamMap, CrossValidator computes the average evaluation metric for the 3 Models produced by fitting the Estimator on the 3 different (training, test) dataset pairs. After identifying the best ParamMap, CrossValidator finally re-fits the Estimator using the best ParamMap and the entire dataset. The following example demonstrates using CrossValidator to select from a grid of parameters. Note that cross-validation over a grid of parameters is expensive. E.g., in the example below, the parameter grid has 3 values for hashingTF.numFeatures and 2 values for lr.regParam, and CrossValidator uses 2 folds. This multiplies out to $(3 \times 2) \times 2 = 12$ different models being trained. In realistic settings, it can be common to try many more parameters and use more folds ($k=3$ and $k=10$ are common). In other words, using CrossValidator can be very expensive. However, it is also a well-established method for choosing parameters which is more statistically sound than heuristic hand-tuning. Refer to the CrossValidator Scala docs for details on the API. // Prepare training data from a list of (id, text, label) tuples. // Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr. // We use a ParamGridBuilder to construct a grid of parameters to search over. // this grid will have 3 x 2 = 6 parameter settings for CrossValidator to choose from. // We now treat the Pipeline as an Estimator, wrapping it in a CrossValidator instance. // This will allow us to jointly choose parameters for all Pipeline stages. // A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator. // Run cross-validation, and choose the best set of parameters. // Prepare test documents, which are unlabeled (id, text) tuples. // Make predictions on test documents. cvModel uses the best model found (lrModel). 
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/ModelSelectionViaCrossValidationExample.scala" in the Spark repo. Refer to the CrossValidator Java docs for details on the API. // Prepare training documents, which are labeled. // Prepare test documents, which are unlabeled. Find full example code at "examples/src/main/java/org/apache/spark/examples/ml/JavaModelSelectionViaCrossValidationExample.java" in the Spark repo. Refer to the CrossValidator Python docs for more details on the API. # Prepare training documents, which are labeled. # Configure an ML pipeline, which consists of tree stages: tokenizer, hashingTF, and lr. # We now treat the Pipeline as an Estimator, wrapping it in a CrossValidator instance. # This will allow us to jointly choose parameters for all Pipeline stages. # A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator. # We use a ParamGridBuilder to construct a grid of parameters to search over. # this grid will have 3 x 2 = 6 parameter settings for CrossValidator to choose from. # Run cross-validation, and choose the best set of parameters. # Prepare test documents, which are unlabeled. # Make predictions on test documents. cvModel uses the best model found (lrModel). Find full example code at "examples/src/main/python/ml/cross_validator.py" in the Spark repo. In addition to CrossValidator Spark also offers TrainValidationSplit for hyper-parameter tuning. TrainValidationSplit only evaluates each combination of parameters once, as opposed to k times in the case of CrossValidator. It is therefore less expensive, but will not produce as reliable results when the training dataset is not sufficiently large. Unlike CrossValidator, TrainValidationSplit creates a single (training, test) dataset pair. It splits the dataset into these two parts using the trainRatio parameter. For example with $trainRatio=0.75$, TrainValidationSplit will generate a training and test dataset pair where 75% of the data is used for training and 25% for validation. Like CrossValidator, TrainValidationSplit finally fits the Estimator using the best ParamMap and the entire dataset. Refer to the TrainValidationSplit Scala docs for details on the API. // Prepare training and test data. // In this case the estimator is simply the linear regression. // A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator. // 80% of the data will be used for training and the remaining 20% for validation. // Run train validation split, and choose the best set of parameters. Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/ModelSelectionViaTrainValidationSplitExample.scala" in the Spark repo. Refer to the TrainValidationSplit Java docs for details on the API. Find full example code at "examples/src/main/java/org/apache/spark/examples/ml/JavaModelSelectionViaTrainValidationSplitExample.java" in the Spark repo. Refer to the TrainValidationSplit Python docs for more details on the API. # Prepare training and test data. # In this case the estimator is simply the linear regression. # A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator. # 80% of the data will be used for training, 20% for validation. # Run TrainValidationSplit, and choose the best set of parameters. Find full example code at "examples/src/main/python/ml/train_validation_split.py" in the Spark repo.
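For reference, here is a condensed PySpark sketch of the CrossValidator workflow described above; the toy training rows are made up, while the 3 x 2 parameter grid and 2 folds mirror the example in the text.

```python
# Condensed sketch of the CrossValidator workflow described above (PySpark).
# The toy rows are made up; the 3 x 2 grid over numFeatures/regParam and the
# 2 folds match the 12-model example mentioned in the text.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.getOrCreate()
train = spark.createDataFrame(
    [(0, "a b c d e spark", 1.0), (1, "b d", 0.0),
     (2, "spark f g h", 1.0), (3, "hadoop mapreduce", 0.0)],
    ["id", "text", "label"])

tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

grid = (ParamGridBuilder()
        .addGrid(hashingTF.numFeatures, [10, 100, 1000])
        .addGrid(lr.regParam, [0.1, 0.01])
        .build())

cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=2)            # 3 x 2 grid, 2 folds => 12 fits
cvModel = cv.fit(train)

test = spark.createDataFrame([(4, "spark i j k"), (5, "l m n")], ["id", "text"])
cvModel.transform(test).select("id", "text", "prediction").show()
```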
CommonCrawl
Finding a minimum weighted spanning tree might not be the hardest task; however, for graphs with more vertices and edges, the problem becomes complicated. We will now solve this problem with Kruskal's Algorithm.
1. Label each vertex (e.g. $x_1, x_2, ...$ or $a, b, c, ...$, etc.).
2. List the edges in non-decreasing order of weight.
3. Start with the smallest-weight edge and begin growing the minimum weighted spanning tree from this edge.
4. Add the next available edge that does not form a cycle to the construction of the minimum weighted spanning tree. If the addition of the next least weighted edge forms a cycle, do not use it.
5. Continue with step 4 until you have a spanning tree.
We will now apply Kruskal's algorithm to find a minimum weighted spanning tree. We have already labelled the vertices in this graph as $a, b, c, d, e$ and $f$. The edges listed in non-decreasing weights are $3, 4, 4, 4, 5, 5, 6, 7$. We will start with edge $ab$ since it has the least weight. The next available edges we can add are all of weight $4$. These edges are $bc, cd$, and $ef$. The addition of these edges does not form a cycle, so we can add them all to our graph. The next available edges we have are $af$ and $ed$, both with weight $5$. Adding either one of these edges results in a spanning tree of the final graph, so we can choose either. Let's choose the edge $af$. Just like with defining spanning trees for a graph, saying "the" minimum weight spanning tree is not necessarily correct, as some graphs could have more than one minimum weight spanning tree.
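A short implementation of the algorithm, using a union-find structure to detect cycles, is given below. The first six weighted edges match the worked example; the endpoints of the weight-6 and weight-7 edges are not fully specified in the text, so they are assumed here.

```python
# Kruskal's algorithm with a union-find structure. The first six edges match
# the worked example above; the endpoints of the weight-6 and weight-7 edges
# are not given in the text, so they are assumed.
def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):                       # find the root, with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(edges):      # non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding this edge creates no cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

edges = [(3, 'a', 'b'), (4, 'b', 'c'), (4, 'c', 'd'), (4, 'e', 'f'),
         (5, 'a', 'f'), (5, 'd', 'e'), (6, 'b', 'e'), (7, 'c', 'f')]
mst = kruskal('abcdef', edges)
print(mst, sum(w for _, _, w in mst))  # total weight 3+4+4+4+5 = 20
```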
CommonCrawl
Does the two Higgs doublet model predict two Higgs bosons? You may guess the answer by counting the number of degrees of freedom. One doublet corresponds to 4 real degrees of freedom. However, the Higgs breaks the $U(1)\times SU(2)$ gauge symmetry to $U(1)$. This requires the appearance of three Goldstone modes - one neutral and two charged ones conjugated to each other. Those 3 Goldstones are not physical; instead the neutral $Z$ and charged $W^\pm$ bosons get longitudinal polarizations. This leaves us with only one neutral Higgs boson in the physical sector. Two Higgs doublets give us 8 real degrees of freedom. But then 3 of them again correspond to unphysical Goldstone modes. Thus we are left with 5 physical modes (not two!) - 3 neutral bosons and a charged boson together with its antiparticle. 2HDM models are one of the simplest extensions of the Standard Model. You also have to introduce at least two Higgs doublets in supersymmetric extensions like the MSSM, where you can't give masses to both up and down quarks with only one Higgs doublet. Another motivation comes from axion models, where you can't construct a nontrivial Peccei-Quinn symmetric potential with only one Higgs doublet.
CommonCrawl
Last week, we looked at how to fit and evaluate linear regression and logistic regression models using the statsmodels package. This week, we will explore the scikit-learn, or sklearn package. The two packages overlap a fair amount, at least with respect to regression and logistic regression models, and there are similarities and differences between the two packages' approach to these sorts of models. First, we'll generate some random data with known $\beta$ coefficients so that we can evaluate how well the modeling algorithms do recovering the model parameters. We can use np.random.seed() to reproduce (pseudo-)random numbers so that we can reproduce our results as needed. Setting the seed will cause the random number generator to produce the same sequence of pseudo-random numbers. nsmp = 150 # number of "observations" We can create a matrix of scatter plots to see what the data looks like (and how much they diverge from the theoretical pdfs above). Because we created each random variable independently of each other, the variables are not highly correlated. np.corrcoef(array) gives you a correlation matrix for the variables in the input array. Next, we use the linear models that we imported from sklearn. More specifically, we'll fit linear regression and logistic regression models first, and we'll come back to ridge regression and the lasso model in a bit. First, we'll fit a standard linear regression model, and check to see how the fitted parameter ($\beta$) estimates compare to the $\beta$ values we used to generate the data. So, in this case, at least, the model did a good job recovering the known parameter values. Standard linear regression seems to do a good job recovering the true parameter values, but logistic regression doesn't. It gets the sign right, and it's very precise, but it's inaccurate. In statistical terminology, it exhibits fairly substantial bias. Note that this doesn't imply that logistic regression is a bad tool. The way we generated our dichotomous dependent variable was unusual, plausibly not accurately reflecting any mechanisms we might encounter in the world, for example. We won't bother simulating multiple data sets to check the variability of the estimates this time. As briefly discussed last week, lasso regression is a tool for regularization - roughly, putting pressure on a model to constrain its complexity. This can help with making more reliable predictions, though it makes it difficult (if not impossible) to interpret the parameter estimates we get from a fitted model. In sklearn as in statsmodels, fitting a lasso model is just like fitting a regular linear regression model, but we also need to specify an $\alpha$ parameter. Larger $\alpha$ values put more pressure on the model to keep the size of the coefficients small.
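A compact sketch in the spirit of this walkthrough (the particular $\beta$ values, noise level and lasso $\alpha$ are made up; nsmp = 150 matches the text above):

```python
# Sketch: simulate data with known betas, then see how well LinearRegression
# and Lasso recover them. The beta values, noise scale and lasso alpha are
# made up for illustration; nsmp = 150 matches the text.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

np.random.seed(42)
nsmp = 150                                   # number of "observations"
X = np.random.normal(size=(nsmp, 3))
beta = np.array([2.0, -1.0, 0.5])
y = 1.0 + X @ beta + np.random.normal(scale=0.5, size=nsmp)

ols = LinearRegression().fit(X, y)
print(ols.intercept_, ols.coef_)             # close to 1.0 and beta

lasso = Lasso(alpha=0.5).fit(X, y)           # larger alpha shrinks coefficients more
print(lasso.intercept_, lasso.coef_)         # shrunk towards zero relative to OLS
```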
CommonCrawl
For a research internship I am running psychological experiments online. As it takes a while until the experiment is done (meaning that enough people have participated to reach a sufficient sample size), I could already look at the data and run a few analyses with that incomplete data set. This way I could already see a trend in what the final result might be. Are there any methodological reasons that would speak against that? For example, am I biasing myself in this way? Or is there any other reason this could be considered bad research practice? From an ethical standpoint, not including interim evaluations may be bad practice. I will start off with a more extreme case than in your question example, just for illustrative purposes, namely that of a clinical intervention study. If it appears that the treatment group (say, experimental medicine Y, instead of the standard of care treatment X) features substantially, if not significantly, more cases of serious adverse events, or even deaths, that may (or may not) be related to treatment Y, the ethically best thing to do is put the study on hold until things are sorted out. This is to prevent any possibility of further physical harm caused by the experimental treatment. This happens quite regularly and should be followed by a report paper stating the results and a discussion of the best way to proceed with this research, if applicable. In a more experimental setting it may also be ethically best practice to evaluate preliminary data, as possible experimental design flaws, unexpected results (weird outcomes or artifacts in, say, left-handed people) or confounding factors may become apparent, and timely adjustments to the experimental protocol can be made. Why is this ethically correct? Because you may be subjecting people to a flawed experimental paradigm and wasting many hours of otherwise more productive time. The case provided by the other answerer, where the experiment is stopped even though a prior statistical power analysis was done, is malpractice. Conversely, adding more subjects post hoc based on 'near-significance' is also questionable practice. But this is more related to what you do with the experimental interim data. In my opinion, they should be critically evaluated, but not so much on effect size as on feasibility, correctness and validity - basically to sanity-check the study proceedings. gjacob is correct that optional stopping is a common research degree of freedom, and one that has a considerable and unfortunate intuitive basis. Yet, depending on the context of your research, AliceD's concerns are also important. There is, however, a middle ground between not checking at all, and p-hacking: sequential analysis. There is a Bayesian version of sequential analysis, which I can update if that's your statistical paradigm, but I'm assuming you want to conduct interim analyses using null-hypothesis significance testing, so that's what I'll focus on here. Lakens (2014) provides a nice overview of this practice. In essence, you take the level of $\alpha$ you want to maintain over your "peeks" (e.g., $\alpha$ = .05), and distribute that total $\alpha$ over the number of peeks you want to take along your total sampling process. Then, if $p$ is lower than this distributed $\alpha$ at any of your peeks, you can reject the null at $\alpha$ = .05, and you won't have inflated your Type I error rate like you would have with generic optional stopping.
It's a little more complicated than how I'm presenting here--and there are a number of methods for distributing your total $\alpha$--but not by much. If you can wrap your head around a Bonferroni correction, it's a very similar technique. Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701-710. Instead, consider running a power analysis. (I recommend G*Power, which is freely downloadable). I advise the following: perform a power analysis before you start collecting data, determine the total N that you'll shoot for, and don't peek at your data until you've hit that. It's effectively "blinding" yourself, much in the way that medical researchers might use double-blind studies to ensure the reliability of their findings.
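To see the point numerically, here is a small simulation under the null hypothesis: peeking after every batch with the full $\alpha$ inflates the false-positive rate, while distributing $\alpha$ over the planned peeks keeps it controlled. The batch size and number of peeks are arbitrary choices made for illustration.

```python
# Simulation of the point made above: peeking after every batch with the full
# alpha inflates the Type I error, while splitting alpha across the planned
# peeks (Bonferroni-style) keeps it near .05. Batch size and number of peeks
# are made up; the data are generated under the null hypothesis.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n_sims, n_peeks, batch = 5000, 5, 20
alpha = 0.05

naive_hits = spent_hits = 0
for _ in range(n_sims):
    data = rng.normal(0, 1, n_peeks * batch)      # the null is true
    pvals = [ttest_1samp(data[: (i + 1) * batch], 0).pvalue for i in range(n_peeks)]
    naive_hits += any(p < alpha for p in pvals)            # peek with full alpha
    spent_hits += any(p < alpha / n_peeks for p in pvals)  # distributed alpha

print(naive_hits / n_sims, spent_hits / n_sims)   # roughly 0.14 versus below 0.05
```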
CommonCrawl
This is the homepage of the Oberseminar "Mathematische Logik" in the winter semester 2014/15. Prof. Ziegler is on a research sabbatical. Abstract: A non-trivial issue concerning tree-like forcings in the generalized framework is to introduce a random-like forcing, where random-like means to be $\kappa^\kappa$-bounding, ${<}\kappa$-closed and $\kappa^+$-cc simultaneously. Shelah managed to do that for $\kappa$ weakly compact. In this talk we aim at introducing a forcing satisfying these three properties for $\kappa$ inaccessible, and not necessarily weakly compact. This is joint work with Sy Friedman. This is an introduction to reflection principles, on how to force them and on their consistency strength. This is based on work by Justin Moore and Stevo Todorcevic. I will introduce a new type of properties of c.c.c. and proper forcing notions, of which the Y-c.c. and Y-proper are the two most prominent examples. The Y-c.c. is an intermediate property between σ-centered and c.c.c., and Y-proper is intermediate between strongly proper and proper. These properties have interesting consequences and behave nicely with respect to forcing iterations. Let $\kappa$ be an infinite cardinal and $T$ be a tree of height $\kappa$. We equip the set $[T]$ of all branches of length $\kappa$ through $T$ with the topology whose basic open subsets are sets of all branches containing a given node in $T$. Given a cardinal $\nu$, we consider the question whether $[T]$ is equal to a continuous image of the tree of all functions $s:\alpha\longrightarrow\nu$ with $\alpha<\kappa$. This is joint work with Philipp Schlicht. I will talk about how the combinatorial cardinal characteristics reviewed in [Blass, Combinatorial Cardinal Characteristics of the Continuum] can be generalized to uncountable cardinals $\kappa$ and what is known about consistency results for them. I will sketch the main ideas of my recent result that the meager ideal is Tukey reducible to the Mycielski ideal. The latter one is the ideal associated with Silver forcing. This implies that every reasonable amoeba forcing for Silver adds a Cohen real. This has been open for some years. Schanuel's Conjecture states that for a collection of $n$ complex numbers $z_1, ..., z_n$, linearly independent over the field of rational numbers, the transcendence degree of $z_1, ..., z_n, \exp(z_1), ..., \exp(z_n)$ is at least $n$. Zilber constructs in [Zilber, Pseudo-exponentiation on algebraically closed fields of characteristic zero] a sentence whose models are structures called strongly exponentially-algebraically closed fields with pseudo-exponentiation, which are unique in every uncountable cardinality. One of their main properties is that Schanuel's Conjecture holds in those fields. Firstly, I will outline the properties of Zilber's fields. Secondly, I will sketch the proof given in [Marker, A Remark on Zilber's Pseudoexponentiation] showing that, if one assumes Schanuel's Conjecture, the simplest case of one of the axioms of Zilber's fields holds in the complex exponential field. I will talk about what I asked Spinas at the end of his talk, i.e., whether an amoeba for Silver might add Cohen reals. Two weeks ago he proved that add(J(Silver)) is at most add(M). However this is not strictly sufficient to infer that any proper amoeba for Silver does add Cohen reals, but only that it does not have the Laver property. I will clarify this issue. If there is any time left I will also present some results about other tree ideals, which are part of a joint work, still in preparation, with Yurii Khomskii and Wolfgang Wohofsky.
No Oberseminar. Site inspection and meetings as part of the accreditation procedure. When generalising arguments about cardinal characteristics of the continuum to cardinals $\kappa$ greater than $\omega$, one frequently comes up against the problem of how to ensure that a filter built up through an iterated forcing remains $\kappa$-complete at limit stages of small cofinality. A technique of Dzamonja and Shelah is useful for overcoming this problem; in particular, there is a natural application of this technique to obtain a model in which $2^\kappa$ is large but the ultrafilter number $u(\kappa)$ is $\kappa^+$. After introducing this model, I will talk about joint work with Vera Fischer (Technical University of Vienna) and Diana Montoya (University of Vienna) calculating many other cardinal characteristics at $\kappa$ in the model and its variants. I will report on ongoing joint work with Alex Berenstein and Alf Onshuus on proving that all universal specializations of models of the theory of a fixed Zariski structure share the same complete theory and finding axioms for this theory. I will show this in the most fundamental particular cases (sets, vector spaces, algebraically closed fields) and discuss our progress in the general case. I spoke about this topic once before in the seminar and the current talk is a continuation of the previous one, but no knowledge of the content of that talk will be assumed. Given a nonstandard model $M$ of arithmetic we want to expand it by interpreting a binary relation symbol $R$ such that $R^M$ does something prohibitive, e.g. violates the pigeonhole principle in the sense that $R^M$ is a bijection from $n+1$ onto $n$ for some (nonstandard) $n$ in $M$. The goal is to do so while saving as much as possible of ordinary arithmetic. More precisely, we want the expansion to satisfy the least number principle for a class of formulas as large as possible. We describe a forcing method to produce such expansions and revisit the most important results in the area. We will see a new type of expanded Sacks forcing with many combinatorial properties. Last update on January 29, 2015, H.M.
CommonCrawl
for events the day of Friday, March 16, 2018. Abstract: It is commonly known that separable Banach spaces embed isometrically into the separable space $C(\Delta)$, where $\Delta$ is the Cantor set. Taking the Effros-Borel structure $\mathcal F(C(\Delta))$, we can then view the collection of separable Banach spaces as a Borel subset $\mathcal B \subseteq \mathcal F(C(\Delta))$ and consider the existence of an isomorphism between Banach spaces to be an equivalence relation on $\mathcal B$. For this expository talk, I will present some basic descriptive set theoretic techniques used to determine the complexity of isomorphism equivalence classes, in particular the Borel case of the class for $\ell_2$, and a non-Borel analytic case with Pelczynski's universal space $\mathcal U$.
CommonCrawl
1. Chayut Kongban Poom Kumam, Quadruple random common fixed point results of generalized Lipschitz mappings in cone b-metric spaces over Banach algebras, J. Nonlinear Sci. Appl., 11 (2018), 137–156. (2016 Impact Factor 1.34; Q1 SJR). 2. Batsari, U.Y., Kumam, P., A globally stable fixed point in an ordered partial metric space, Studies in Computational Intelligence, 760, 2018: pp. 360-368. 3. Jirakitpuwapat, W., Kumam, P., The generalized Diffie-Hellman key exchange protocol on groups, Studies in Computational Intelligence, 760, 2018: pp. 115-119. 4. Hunwisai, D., Kumam, P., Kumam, W., A method for optimal solution of intuitionistic fuzzy transportation problems via centroid, Studies in Computational Intelligence, 760, 2018: pp. 94-114. 5. Sombat, A., Saleewong, T., Kumam, P., Perspectives and experiments of hybrid particle swarm optimization and genetic algorithms to solve optimization problems, Studies in Computational Intelligence, 760, 2018: pp. 290-297. 6. Ali, M.U., Ansari, A.H., Khammahawong, K., Kumam, P., Best proximity point theorems for generalized α – ψ -proximal contractions, Studies in Computational Intelligence, 760, 2018: pp. 341-352. 7. Pakkaranang, N., Kewdee, P., Kumam, P., Borisut, P., The modified multi-step iteration process for pairwise generalized nonexpansive mappings in CAT(0) spaces, Studies in Computational Intelligence, 760, 2018: pp. 381-393. 8. Khan, A., Shah, K., Kumam, P., Onsod, W., An (α, ϑ) -admissibility and theorems for fixed points of self-maps, Studies in Computational Intelligence, 760, 2018: pp. 369-380. 9. Ali, M.U., Muangchoo-in, K., Kumam, P., Zeroes and fixed points of different functions via contraction type conditions, Studies in Computational Intelligence, 760, 2018: pp. 353-359. 10. A. Padcharoen, P. Kumam, P. Saipara and P. Chaipunya Generalized Suzuki Type Z-Contraction in Complete Metric Spaces, Kragujevac Journal of Mathematics Vol. 42 No. 3 (2018). Volume 42(3) (2018), Pages 419–430. 18. Wiyada Kumam, Nuttapol Pakkaranang, Poom Kumam, Modified viscosity type iteration for total asymptotically nonexpansive mappings in CAT(0) spaces and its application to optimization problems, J. Nonlinear Sci. Appl., 11 (2018), 288–302. (2016 Impact Factor 1.34; Q1 SJR). 19. Plern Saipara and Poom Kumam, The robustness of generalized random Bayesian abstract fuzzy economy models, Random Operators and Stochastic Equations. Volume 25, Issue 1, Pages 41–47. DOI: https://doi.org/10.1515/rose-2017-0004, March 2017. 31. Surajit Karmakar, Lakshmi Kanta Dey, Poom Kumam, Ankush Chanda, Best Proximity Results for Cyclic α-implicit Contractions in Quasi-metric Spaces and Its Consequences, Adv. Fixed Point Theory, 7 (2017), No. 3, 342-358. 32. K. Khammahawong, P. Kumam, D. M. Lee and Y. J. Cho, Best proximity points for multi-valued Suzuki α–F-proximal contractions, J. Fixed Point Theory Appl. 34. Plern Saipara, Poom Kumam, Coupled coincidence point theorems for generalized nonlinear contraction in ordered cone metric spaces related with Nash equilibrium of two persons game, Journal of Inequalities and Special Functions, Volume 8 Issue 3 (2017), Pages 44-58. ESCI. 41. Muangchoo-in, Khanitin, Thongtha, Dawud, Kumam, Poom and Cho, Yeol Je, Fixed point theorems and convergence theorems for monotone (alpha, beta)-nonexpansive mappings in ordered Banach spaces, Creative Math. and Informatics Vol. 26, No. 2, 2017, 163 –180. 43. N. 
Chuensupantharat, Poom Kumam, Sompong Dhompongsa, A Graphical Proof of the Brouwer Fixed Point Theorem, Thai Journal of Mathematics, Volume 15 (2017) Number 3: 607–610. 15. Khanittha Promluang, Pongrus Phuangphoo, and Poom Kumam, The common solutions of complementarity problems and a zero point of maximal monotone operators by submitted to the International Journal of Mathematics and Computers in Simulatation, Volume 10, 2016, 152-160. 47. K. Sitthithakerngkiet, J. Deepho, T. Tanaka and P. Kumam, A Viscosity Extragradient Method for Variational Inequality and Split Generalized Equilibrium Problems With a Sequence of Nonexpansive Mappings, Journal of Nonlinear Systems and Applications, Volume 5, Number 3 (2016), Pages 78-89. 2. C. Kongban, V. Pragadeeswarar, M. Marudai and P. Kumam, Existence and uniqueness of coupled best proximity point in ordered metric spaces, Journal Nonlinear Functional Analysis and Applications, Vol. 20, No. 1 (2015), pp. 27-42. 6. M. Abbas, A. Hussain and P. Kumam, "A Coincidence Best Proximity Point Problem in G-Metric Spaces," Abstract and Applied Analysis, vol. 2015, Article ID 243753, 12 pages, 2015. doi:10.1155/2015/243753. 10. P. Chaipunya and P. Kumam, "Nonself KKM Maps and Corresponding Theorems in Hadamard Manifolds" Applied General Topology, 16, no. 1 (2015), 37-44. 13. P. Chaipunya and P. Kumam, An observation onset-valued contraction mappings in modular metric spaces, Thai Journal of Mathematics, Volume 13 (2015) Number 1: 9–17. 16. Saurabh Manro, S. S. Bhatia, Sanjay Kumar, Poom Kumam, Sumitra Dalal, A Note On-Weakly compatible mappings along with property in fuzzy metric spaces [Journal Nonlinear Analysis and Applications, 2013], Journal of Nonlinear Analysis and Application, Volume 2015, No. 1 (2015), Pages 17-18, Article ID jnaa-00267, 2 Pages. 4. Wutiphol Sintunavarat, Sunny Chauhan and Poom Kumam, Some fixed point results in modified intuitionistic fuzzy metric spaces, Afrika Matematika, (2014) Volume 25, Issue 2, pp 461-473. 6. W. Phuengrattana, S.Suantai, K. wattanawitoon, U. Witthayarat and P. Kumam, Weak and strong convergence theorems of proximal point algorithm and maximal monotone operators, Journal of computational analysis and applications, (2012 Impact Factor 0.502) Volume 16 No. 2, 2014: 264-281. 8. Parin Chaipunya, Yeol Je Cho and Poom Kumam,"On circular metric spaces and common fixed points for an infinite family of set-valued operators", Vietnam Journal of Mathematics, vol. 42, no. 2, pp. 205–218, 2014. 9. Parin Chaipunya, Yeol Je Cho and Poom Kumam, A remark on the property $\P$ and periodic points of order $\infty$, Matematicki Vesnik, Vol. 66, No. 4, pp. 357-363 (2014). 14. Ghasem Soleimani Rad, Hamidreza Rahimi and Poom Kumam, "Coupled common fixed point theorems under weak contractions in cone metric type spaces", Thai Journal of Mathematics, Volume 12 (2014) Number 1: 1–14. 19. H. Piri, P. Kumam and K. Sitthithakerngkiet, Approximating fixed points for lipschitzian semigroup and infinite family of nonexpansive mappings with the Meir-Keeler type contraction in Banach spaces, Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis 21 (2014) 201-229. 23. Ghasem Soleimani Rad, Hassen Aydi, Poom Kumam, Hamidreza Rahimi, Common tripled fixed point results in cone metric type spaces, Rendiconti del Circolo Matematico di Palermo, August 2014, Volume 63, Issue 2, pp 287-300. 26. 
Siwaporn Saewan, Poom Kumam and Jong Kyu Kim, "Strong convergence theorems by hybrid block generalized f-projection method for fixed point problems of asymptotically quasi-$\phi$-nonexpansive mappings and system of generalized mixed equilibrium problems", Thai Journal of Mathematics, Volume 12 (2014) Number 2: 275–301.
32. Supak Phiangsungnoen, Wutiphol Sintunavarat and Poom Kumam, Fuzzy fixed point theorems for fuzzy mappings via $\beta$-admissible with Applications, Journal of Uncertainty Analysis and Applications, 2014, 2:20.
36. Jitsupa Deepho, Wiyada Kumam and Poom Kumam, A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems, Journal of Mathematical Modelling and Algorithms in Operations Research, December 2014, Volume 13, Issue 4, pp. 405-423.
"Some remarks on generalized metric spaces of Branciari", Sarajevo Journal of Mathematics, Vol. 10 (23), No. 2 (2014), 209-219.
47. Wiyada Kumam, Jitsupa Deepho and Poom Kumam, "Hybrid extragradient method for finding a common solution of the split feasibility and system of equilibrium problems", Dynamics of Continuous, Discrete and Impulsive Systems, DCDIS Series B: Applications & Algorithms, Vol. 21, No. 6, (2014), 367-388.
4. Wutiphol Sintunavarat and Poom Kumam, "Coupled Coincidence and Coupled Common Fixed Point Theorems in Partially Ordered Metric Spaces", Thai Journal of Mathematics, Volume 10 (2012) Number 3: 551–563.
11. C. Mongkolkeha and P. Kumam, "Best proximity points for asymptotic proximal pointwise weaker Meir-Keeler-type $\psi$-contraction mappings", Journal of the Egyptian Mathematical Society, Volume 21, Issue 2, July 2013, Pages 87–90.
16. Chirasak Mongkolkeha and Poom Kumam, "Some fixed point results for generalized weak contraction mappings in Modular spaces", International Journal of Analysis, vol. 2013, Article ID 247378, 6 pages.
20. Wutiphol Sintunavarat and Poom Kumam, "Coupled fixed point results for nonlinear integral equations", Journal of the Egyptian Mathematical Society (2013) 21, 266–272.
25. Sumit Chandok, Wutiphol Sintunavarat and Poom Kumam, Some coupled common fixed points for a pair of mappings in partially ordered $G$-metric spaces, Mathematical Sciences, 2013, 7:24.
50. S. Manro, S. S. Bhatia, S. Kumar, P. Kumam, S. Dalal, Weakly Compatible Mappings along with CLRS property in Fuzzy Metric Spaces, Volume 2013, Year 2013, Article ID jnaa-00206, 12 Pages.
51. Tanom Chamnarnpan, Nopparat Wairojjana and Poom Kumam, Hierarchical fixed points for strictly pseudo contractive mappings of variational inequality problems, SpringerPlus, vol. 2, no. 1, pp. 1–12, 2013.
58. Wutiphol Sintunavarat and Poom Kumam, "PPF Dependent Fixed Point Theorems for Rational Type Contraction Mappings in Banach spaces", Journal of Nonlinear Analysis and Optimization: Theory & Applications, Vol. 4, No. 2, (2013), 157-162.
17. Parin Chaipunya, Chirasak Mongkolkeha, Wutiphol Sintunavarat and Poom Kumam, "Fixed Point Theorems for Multivalued Mappings in Modular Metric Spaces", Abstract and Applied Analysis, Volume 2012, Article ID 503504, 14 pages (2012 Impact Factor 1.102) (NRU). Erratum to "Fixed-Point Theorems for Multivalued Mappings in Modular Metric Spaces", Abstract and Applied Analysis, Volume 2012, Article ID 241919, 2 pages.
25. Attapol Kaekhao, Wutiphol Sintunavarat and Poom Kumam, "Common fixed point theorems of c-distance on cone metric spaces", Journal of Nonlinear Analysis and Application, 2012, Article ID jnaa-00137, 11 Pages.
52. S. Chauhan, W. Sintunavarat and P. Kumam, "Common Fixed Point Theorems for Weakly Compatible Mappings in Fuzzy Metric Spaces Using (JCLR) Property," Applied Mathematics, Vol. 3, No. 9, 2012, pp. 976-982.
58. P. Chaipunya, Y. Cho, W. Sintunavarat and P. Kumam, "Fixed Point and Common Fixed Point Theorems for Cyclic Quasi-Contractions in Metric and Ultrametric Spaces", Advances in Pure Mathematics, Vol. 2, No. 6, 2012, pp. 401-407. doi: 10.4236/apm.2012.26060.
74. Saurabh Manro and Poom Kumam, "Common Fixed Point Theorems for Expansion Mappings in Various Abstract Spaces Using Concept of Weak Reciprocal Continuity", accepted to Fixed Point Theory and Applications, 2012, 2012:221, doi:10.1186/1687-1812-2012-221 (2012 Impact Factor 1.87). Erratum to "Common fixed point theorems for expansion mappings in various abstract spaces using concept of weak reciprocal continuity", Fixed Point Theory and Applications 2013, 2013:8.
4. K. Wattanawitoon and P. Kumam, "Hybrid proximal-point methods for zeros of maximal monotone operators, variational inequalities and mixed equilibrium problems", International Journal of Mathematics and Mathematical Sciences, Volume 2011 (2011), Article ID 174796, 31 pages (CEM 53).
29. K. Wattanawitoon, U. Humphries and P. Kumam, "Strong convergence theorems for two relatively asymptotically nonexpansive mappings in Banach spaces", International Journal of Mathematical Analysis, Vol. 5, (2011), no. 35, 1719–1732. (CEM53-2(usa)&NRU).
36. P. Kumam and S. Plubtieng, Viscosity approximation methods of random fixed point solutions and random variational inequalities in Hilbert spaces, Asian-European Journal of Mathematics, Volume 70, No. 1, 2011, 81-107.
55. Siwaporn Saewan and Poom Kumam, A New Modified Block Iterative Algorithm for a System of Equilibrium Problems and a Fixed Point Set of Uniformly Quasi-$\Phi$-Asymptotically Nonexpansive Mappings, Advances in Systems Science and Applications (2011), Vol. 11, No. 2, 124-150.
2. R. Wangkeeree, C. Jaiboon, and P. Kumam, Strong convergence theorems for equilibrium problems and fixed point problems of a countable family for nonexpansive mappings, Int. J. of Appl. Math. and Mech. 5(3): 47–63, 2009. (Supported by CHE: Center of Excellence "Fixed point theory and applications") http://ijamm.bc.cityu.edu.hk/ijamm/wp_home_download_this.asp?pn=175K.
11. K. Wattanawitoon, P. Kumam, Corrigendum to "Strong convergence theorems by a new hybrid projection algorithm for fixed point problems and equilibrium problems of two relatively quasi-nonexpansive mappings", Nonlinear Analysis: Hybrid Systems 3 (2009) 11-20.
23. C. Sudsukh, C. Jaiboon and P. Kumam, Convergence theorem by a hybrid extragradient method for finding variational inequality problems, fixed point problems and equilibrium problems, Advances and Applications in Mathematical Sciences, 1(2) 2009, 421–436.
26. C. Jaiboon and P. Kumam, A general iterative method for solving equilibrium problems, variational inequality problems and fixed point problems of an infinite family of nonexpansive mappings, Journal of Applied Mathematics and Computing, Volume 34, Numbers 1-2, 407-439.
1. Poom Kumam, "Some Fixed Point Theorems for Multivalued SL Maps Satisfying Inwardness Conditions", Journal of Nonlinear Functional Analysis and Differential Equations, Vol. 2, No. 2, (2008) 173-181.
2. Poom Kumam, "Random Fixed Point Theorems for Asymptotically Nonexpansive Random Operators", Journal of Concrete and Applicable Mathematics, Vol. 6, No. 1, (2008) 91-99.
4. Poom Kumam, Warunya Kumethong and Natchanok Jewwaiworn, "Weak Convergence Theorems of Three-Step Noor Iterative Scheme for I-quasi-nonexpansive mappings in Banach Spaces", Applied Mathematical Sciences, Vol. 2, 2008, no. 59, 2915–2920.
2. Poom Kumam, "A note on some fixed point theorems for set-valued non-self-mappings in Banach spaces", International Journal of Mathematical Analysis, Vol. 1, no. 1, 13-19, 2007.
3. Somyot Plubtieng, Poom Kumam, and Rabian Wangkeeree, "Random three-step iteration scheme and common random fixed point of three operators", Journal of Applied Mathematics and Stochastic Analysis, Volume 2007, Article ID 82517, 10 pages.
4. Poom Kumam (with A. Luadsong and O. Suttisri), "Random Implicit Iterations Process for Common Random Fixed Points of Finite Family of Strictly Pseudo contractive Random Operators", Far East Journal of Mathematical Sciences (FJMS), Volume 25, Issue 3, Pages 433–445 (June 2007).
5. Poom Kumam and Wiyada Kumam, "Random fixed points of multivalued random operators with property (D)", Random Operators and Stochastic Equations, Volume 15, Issue 2, (2007): 127-136.
6. Poom Kumam, Somyot Plubtieng and Rabian Wangkeeree, "Approximation of common random fixed point for a finite family of random operators", International Journal of Mathematics and Mathematical Sciences, Volume 2007 (2007), Article ID 69626, 12 pages.
7. P. Kumam and S. Plubtieng, "Random Coincidence and Random Common Fixed Points of Nonlinear Multivalued Random Operators", Thai Journal of Mathematics, Volume 5 Number 3 / Special Issue (2007): 155-163.
8. P. Kumam, "Coincidence and Common Fixed Points for Generalized Multivalued Nonexpansive Relative Maps", Thai Journal of Mathematics, Volume 5 (2007) Number 2: 225-230.
1. Poom Kumam, "Random iterative process for strictly pseudocontractive random operators in Hilbert spaces", JP Journal of Fixed Point Theory and Applications, Volume 1, No. 2, 121–133, (December 2006).
2. Poom Kumam, "On nonsquare and Jordan-Von Neumann constants of modular spaces", Southeast Asian Bulletin of Mathematics, 2006, Vol. 30, Issue 1, 69-77.
3. Somyot Plubtieng and Poom Kumam, "Random fixed point theorems for multivalued nonexpansive non-self random operators", Journal of Applied Mathematics and Stochastic Analysis, Volume 2006, Article ID 43796, Pages 1–9.
4. Poom Kumam and Somyot Plubtieng, "Some random fixed point theorems for non self nonexpansive random operators", Turkish Journal of Mathematics, 30 (2006), 359-372. Covered by Science Citation Index Expanded (SCIE) 2007.
5. Poom Kumam and Somyot Plubtieng, "Random fixed point theorem for multivalued nonexpansive operators in Uniformly nonsquare Banach spaces", Random Operators and Stochastic Equations, Volume 14, No. 1, 35-44 (2006).
1. Poom Kumam, "On Uniform Opial condition and Uniform Kadec-Klee property in modular spaces", Journal of Interdisciplinary Mathematics, Vol. 8 (2005), No. 3, 377-385.
2. Poom Kumam and Somyot Plubtieng, "Some Fixed Point Theorem for set-valued Nonexpansive non-self-Operator", Takahashi, Wataru (ed.) et al., Nonlinear analysis and convex analysis. Proceedings of the 4th international conference (NACA 2005), Okinawa, Japan, June 30–July 4, 2005. Yokohama: Yokohama Publishers. 287-295 (2007).
1. Poom Kumam, "Some Geometric Properties and Fixed Point Theorem in Modular Spaces", in Book: Fixed Point Theorem and its Applications. J. Garcia Falset, Llorens Fuster and B. Sims (Editors), Yokohama Publishers (2004), 173-188.
2. Poom Kumam, "Fixed Point Theorems for nonexpansive mappings in Modular Spaces", Archivum Mathematicum (Brno), Volume 40 (2004), No. 4, pp. 345-353.
3. Poom Kumam, "Fixed point theorem and random fixed point theorems for set-valued non-self-mappings", Thai Journal of Mathematics, Vol. 2, No. 2 (2004), 295-307.
CommonCrawl
The organizers are: Joseph H.G. Fu (Georgia), Vladimir Oliker (Emory), Mohammad Ghomi and John McCuan (Georgia Tech), Fernando Schwartz (UTK), Junfang Li (UAB). Thanks to an NSF grant currently held at the University of Georgia, we have funds to support participants, particularly students and recent Ph.D. recipients. We encourage women and minorities to apply. To apply, please write to us.
Program Schedule: The complete program can be found here.
Abstract: We show that radial limits of bounded solutions of the equation of prescribed mean curvature (a standard form of this equation is recalled after the abstracts) over re-entrant corner domains always exist for directions interior to the domain. Under additional conditions we show that radial limits also exist for convex corners. No assumptions beyond boundedness are made on the behavior of the traces of solutions on the sides of the domain.
Abstract: A surface with constant mean curvature (CMC surface) is an equilibrium surface of the area functional among surfaces which enclose the same volume (and satisfy given boundary conditions). A CMC surface is said to be stable if the second variation of the area is non-negative for all volume-preserving variations (the standard quadratic form is recalled after the abstracts). In this talk we give criteria for stability of CMC surfaces in three-dimensional Euclidean space. We also give a sufficient condition for the existence of smooth bifurcation branches of fixed boundary CMC surfaces, and we discuss stability/instability issues for the surfaces in bifurcating branches. By applying our theory, we determine the stability/instability of some explicit examples of CMC surfaces.
Abstract: Design of freeform refractive lenses is known to be a difficult inverse problem, but solutions, if available, can be very useful, especially in devices required to redirect and reshape the radiance of the source into an output irradiance redistributed over a given target according to a prescribed pattern. In this talk I present the results of theoretical and numerical analysis of refractive lenses designed with the Supporting Quadric Method. It is shown that such freeform lenses have a particularly simple geometry and that, qualitatively, their diffractive properties are comparable with rotationally symmetric lenses designed with classical methods.
Abstract: We classify circle-foliated minimal surfaces and surfaces of constant mean curvature in $\mathbb S^3$ (locally). In particular, we show that there is only one CMC surface foliated by geodesics for each mean curvature.
Abstract: In this talk, we will introduce the concept of manifolds with singularities and study a class of elliptic differential operators that exhibit degenerate or singular behavior near the singularities. Based on this theory, we investigate several linear and nonlinear parabolic equations arising from geometric analysis and PDE. Emphasis will be given to geometric flows with "bad" initial metrics.
Abstract: We consider a two-dimensional analogue of the problem of a ball of prescribed density floating on a liquid that partially fills a bounded container. Restricting to the case where the density of the ball is less than that of the liquid, we use a phase plane analysis to show existence of equilibrium configurations. This framework also gives us an approach to studying the uniqueness of the equilibrium configurations, and (surprisingly) there are examples of physical parameters that lead to non-uniqueness.
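For context on the first abstract, a commonly used divergence form of the prescribed mean curvature equation for a graph $u$ over a domain $\Omega \subset \mathbb{R}^2$ is sketched below; the factor of $2$ in front of $H$ is one conventional normalization and is an assumption here, not a statement taken from the talk.
% Prescribed mean curvature equation in divergence form for a graph u over Omega;
% H(x) is the prescribed mean curvature, and the factor 2 is a normalization choice.
\[
  \operatorname{div}\!\left( \frac{\nabla u}{\sqrt{1 + |\nabla u|^{2}}} \right) = 2\,H(x), \qquad x \in \Omega .
\]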
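Similarly, the stability notion in the second abstract can be made explicit. A standard expression for the second variation of area of a CMC surface $\Sigma$ immersed in Euclidean $\mathbb{R}^3$, assuming normal variations generated by a function $u$ that preserve the enclosed volume (and vanish on the boundary in the fixed-boundary case), is the quadratic form below; $|A|^2$ denotes the squared norm of the second fundamental form. This is a textbook form recalled for clarity, not a formula from the talk itself.
% Second variation of area (stability form) for a CMC surface Sigma in R^3,
% evaluated on volume-preserving normal variations u, i.e. u with mean zero over Sigma.
\[
  Q(u) = \int_{\Sigma} \left( |\nabla_{\Sigma} u|^{2} - |A|^{2} u^{2} \right) dA ,
  \qquad \int_{\Sigma} u \, dA = 0 .
\]
% Stability of the CMC surface means Q(u) >= 0 for every admissible u.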
CommonCrawl