Notice that, while the graph of a rational function will never cross a vertical asymptote, the graph may or may not cross a horizontal or slant asymptote. Also, although the graph of a rational function may have many vertical asymptotes, the graph will have at most one horizontal (or slant) asymptote. The graph of a rational function NEVER crosses a vertical asymptote; it MIGHT cross a horizontal or slant asymptote, but does not necessarily do so. See examples 8, 9, and 10 on pages 350 – 352. On the other hand, some kinds of rational functions do have oblique asymptotes. Rational Functions: A rational function has the form of a fraction, $f(x) = p(x)\ldots$ If you make the definition symmetric, by requiring that your function be injective (a.k.a. one-to-one), then you'll find that the graphs of your functions cannot cross their horizontal asymptotes either. With horizontal and slant asymptotes, the function itself can cross these lines, but as $x$ approaches $-\infty$ or $\infty$, the graph approaches the line of the asymptote. The fact that there is an intersection point simply means your particular function crosses its asymptote, which usually indicates a numerator of higher degree. To graph a rational function, find the asymptotes and intercepts, plot a few points on each side of each vertical asymptote, and then sketch the graph. Finding asymptotes: vertical asymptotes occur at the $x$-values where the function is undefined (the denominator is zero but the numerator is not), so the function cannot take a value there. A rational function has at most one horizontal or oblique (slant) asymptote, and possibly many vertical asymptotes. The degree of the numerator and the degree of the denominator determine whether or not there are any horizontal or oblique asymptotes.
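A brief worked example (constructed here for illustration; it is not one of the textbook's examples 8, 9, 10): let $f(x) = \dfrac{x^2+1}{x-2}$. The denominator vanishes at $x=2$ while the numerator does not, so $x=2$ is a vertical asymptote. Since the degree of the numerator exceeds the degree of the denominator by exactly one, polynomial division gives
$$\frac{x^2+1}{x-2} = x + 2 + \frac{5}{x-2},$$
so the slant asymptote is $y = x+2$ and there is no horizontal asymptote. By contrast, $g(x) = \dfrac{x}{x^2+1}$ has the horizontal asymptote $y=0$ (the degree of the numerator is less than the degree of the denominator) and its graph crosses that asymptote at $x=0$, illustrating that horizontal asymptotes, unlike vertical ones, may be crossed.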
CommonCrawl
Abstract: We consider the map of three-dimensional $\mathcal N=4$ superfields to the $\mathcal N=3$ harmonic superspace. The left and right representations of the $\mathcal N=4$ superconformal group are constructed on $\mathcal N=3$ analytic superfields. These representations are convenient for describing $\mathcal N=4$ superconformal couplings of Abelian gauge superfields to hypermultiplets. We investigate the $\mathcal N=4$ invariance in the non-Abelian $\mathcal N=3$ Yang–Mills theory.
CommonCrawl
Abstract: We use Projected Entangled Pair States (PEPS) to study topological quantum phase transitions. The local description of topological order in the PEPS formalism allows us to set up order parameters which measure condensation and deconfinement of anyons, and serve as a substitute for conventional order parameters. We apply these order parameters, together with anyon-anyon correlation functions and some further probes, to characterize topological phases and phase transitions within a family of models based on a $\mathbb Z_4$ symmetry, which contains $\mathbb Z_4$ quantum double, toric code, double semion, and trivial phases. We find a diverse phase diagram which exhibits a variety of different phase transitions of both first and second order which we comprehensively characterize, including direct transitions between the toric code and the double semion phase.
CommonCrawl
There is a connection between type theory and logic, where types are propositions, and type checking performs the role of checking whether a proof of a proposition is correct (Curry-Howard isomorphism). But I can imagine a different connection: There seems to be a similarity between type checking and checking whether a particular mathematical structure satisfies a set of axioms. We might say that propositions (axioms) are formalized as types (just as in the CH-isomorphism), but that now, an instance of a proposition (i.e. an instance of that type) is not a proof of the proposition, but a model of it. Type checking then takes the role of checking whether a particular mathematical structure is indeed a model of that axiom. Is there a formalization of "checking whether a structure is a model of a proposition" as type checking? Could you explain such a formalization, or point to an explanation of it?

The key observation of the Curry-Howard correspondence is that the inductive structure of terms in type theory mirrors the inductive structure of proofs in logic. For example, given two terms $t_1$ and $t_2$ of types $A_1$ and $A_2$, I can construct a term $(t_1,t_2)$ of the product type $A_1\times A_2$. Similarly, given two proofs $p_1$ and $p_2$ of the propositions $Q_1$ and $Q_2$, I can put them together to form a proof $(p_1,p_2)$ of the conjunction $Q_1\land Q_2$. On the other hand, models of a proposition (at least in ordinary semantics for first-order / predicate logic) just don't have this same kind of inductive structure. Given models $M_1$ and $M_2$ of sentences $\varphi_1$ and $\varphi_2$, there's no canonical way of constructing a model of the conjunction $\varphi_1\land \varphi_2$. So I'm not optimistic that the kind of extension of Curry-Howard that you're looking for is possible. It's conceivable to me that you could change the notion of "model" to something sufficiently syntactic to make this work - but that would likely involve making a "model" of $\varphi$ essentially an encoding of a proof of $\varphi$, and then relying on the usual Curry-Howard correspondence.

Structures (e.g. set theoretical constructions) "fit into" propositions or don't, and that's checked when you verify whether the structure is a model. I think there's nothing wrong with drawing an analogy of how things "fit into" one another here, as both processes can be considered under the guise of a computational task, although there are many stratifications involved. Secondly, as mentioned, both "checkings" here are forms of computation, and it's worth pointing out that what's doable here will depend on the underlying logic. The Hindley–Milner type system is the framework that motivated the programming languages ML and then Haskell, and grants a nice type checking routine. The frameworks where good checking is possible made cuts in expressive power. Modern efforts at building dependently typed languages gear towards constructive/intuitionistic logics that now include strong forms of quantifiers, while logicians and model theorists for the most part do a deeper and thus less varied (relatively speaking) study of the long-established strong first- or higher-order logics. The answers to this logic SE question are among the most interesting on the platform. As is always the case with such broad questions on logic, one could now go down many rabbit holes. I might give a shout-out to Tarski's restricting theorem for strong logics. Decidability is also a question for type systems, if you just choose any.
Along with the theme of restrictions, type checking of a term checks all it can, while with propositions there are always many different aspects of a structure you can validate. Again, here, if you make cuts and your type system admits subtyping and whatnot, I suppose you can take coarser views on terms too. In the latter notion of "fit in", there's the concept of categoricity, and one may try to reflect that in uniqueness types. If you have narrowed down your logics of interest and decide you want to approach your question from a raw computation and "fit into" perspective, then the subject of abstract rewriting and rewriting systems, which may strike one as even more formal than subfields of mathematical logic, may be up your alley. Side note 1: The verification process in the latter case need not even be restricted to axioms; we can do that for any proposition. Side note 2: I'm not a fan of your initial wording "checking whether a proof of a proposition is correct". I'd instead say "checking whether an expression is a proof of a proposition". I say that because if you use your language here, then we grant every term the status of being a "wrong proof" for almost all propositions. That's just a question of our language here, though.
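To make the Curry-Howard side of the discussion concrete, here is a minimal sketch (written in Python with the typing module purely for illustration; the thread's natural setting would be ML, Haskell, or a dependently typed language). A term of a product type plays the role of a proof of a conjunction, and type checking the module, for example with mypy, plays the role of proof checking.

```python
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Under Curry-Howard, the product type Tuple[A, B] corresponds to the
# conjunction A AND B: a value (a, b) packages evidence for both parts.
def conj_intro(a: A, b: B) -> Tuple[A, B]:
    return (a, b)

# A function of type Tuple[A, B] -> A corresponds to a proof of the
# implication (A AND B) -> A: it extracts the evidence for A.
def conj_elim_left(p: Tuple[A, B]) -> A:
    return p[0]

if __name__ == "__main__":
    proof_of_a_and_b = conj_intro(1, "evidence")
    print(conj_elim_left(proof_of_a_and_b))  # -> 1
```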
CommonCrawl
Abstract: Consider the following asynchronous, opportunistic communication model over a graph $G$: in each round, one edge is activated uniformly and independently at random and (only) its two endpoints can exchange messages and perform local computations. Under this model, we study the following random process: The first time a vertex is an endpoint of an active edge, it chooses a random number, say $\pm 1$ with probability $1/2$; then, in each round, the two endpoints of the currently active edge update their values to their average. We show that, if $G$ exhibits a two-community structure (for example, two expanders connected by a sparse cut), the values held by the nodes will collectively reflect the underlying community structure over a suitable phase of the above process, allowing efficient and effective recovery in important cases. In more detail, we first provide a first-moment analysis showing that, for a large class of almost-regular clustered graphs that includes the stochastic block model, the expected values held by all but a negligible fraction of the nodes eventually reflect the underlying cut signal. We prove this property emerges after a mixing period of length $\mathcal O(n\log n)$. We further provide a second-moment analysis for a more restricted class of regular clustered graphs that includes the regular stochastic block model. For this case, we are able to show that most nodes can efficiently and locally identify their community of reference over a suitable time window. This results in the first opportunistic protocols that approximately recover community structure using only polylogarithmic work per node. Even for the above class of regular graphs, our second moment analysis requires new concentration bounds on the product of certain random matrices that are technically challenging and possibly of independent interest.
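A minimal simulation sketch of the process described in the abstract (this is not code from the paper; the two-community graph, the planted-partition parameters, the number of rounds, and the median-split decoding rule are all illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a simple two-community random graph (a crude stochastic block model).
n, p_in, p_out = 200, 0.10, 0.005          # illustrative parameters
community = np.repeat([0, 1], n // 2)
edges = []
for i in range(n):
    for j in range(i + 1, n):
        p = p_in if community[i] == community[j] else p_out
        if rng.random() < p:
            edges.append((i, j))
edges = np.array(edges)

# Asynchronous averaging: each round one uniformly random edge is activated;
# a vertex picks +/-1 the first time it is active, then the endpoints average.
values = np.zeros(n)
seen = np.zeros(n, dtype=bool)
for _ in range(20 * n * int(np.log(n))):    # a few multiples of n log n rounds
    i, j = edges[rng.integers(len(edges))]
    for v in (i, j):
        if not seen[v]:
            values[v] = rng.choice([-1.0, 1.0])
            seen[v] = True
    values[i] = values[j] = 0.5 * (values[i] + values[j])

# Sign of the held value relative to the median as a crude community guess
# (up to relabelling of the two communities).
guess = (values > np.median(values)).astype(int)
agreement = max((guess == community).mean(), (1 - guess == community).mean())
print(f"fraction of nodes labelled consistently with the planted cut: {agreement:.2f}")
```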
CommonCrawl
In this paper, at the beginning of the last paragraph on p. 2, it is said that the Euler equations, which are an infinite Reynolds number limit of the Navier-Stokes equations, arise as an RNG fixed point. This fixed point is said to be non-unique for a system with $N$ mixing species, because it can be chosen from an $N+1$-dimensional parameter space spanned by $N-1$ dimensionless mass diffusivities, the dimensionless thermal diffusivity (or Prandtl number) and the dimensionless ratio of the anisotropic and isotropic viscosity. What is the expected behaviour of the RG flow around this fixed point? Is the non-uniqueness of this fixed point the same kind of non-uniqueness as the one described on p. 67 of this paper, which explains that the presence of redundant marginal operators can lead to a whole line of physically equivalent fixed points? If this way of analyzing a fixed point can be applied in my example, would the Euler equations fixed point then correspond to some kind of an $N+1$-dimensional surface of fixed points where the mass diffusivities, the Prandtl number, and the ratio of the anisotropic and isotropic viscosity play the role of such redundant marginal operators? The molecular diffusion can be neglected in this case. As it is known that in fully developed turbulence all scales are expected to contribute (there is no characteristic scale present) and molecular diffusion can be neglected at large scales, the fixed point corresponding to the Euler equations from an RG point of view is a critical IR fixed point. However, as mentioned here, when looking at LES dynamic subgrid scale parameterizations from an RG point of view, talking of a (scale invariant) fixed point is not exactly justified because the rescaling is missing in the renormalization step, and the IR limit the system approaches when repeating this modified renormalization step is more exactly called a limit point. The non-uniqueness alluded to in the first paper corresponds to the fact that the integration constants $\alpha_i$ are not determined by the renormalization procedure itself but, as explained in the second paper, have to be determined by the bare action or perfect action which lies on a renormalized trajectory. In the context of Large Eddy Simulations (LES) that make use of dynamic subgrid scale parameterizations for turbulent diffusion, for example, it is possible to dispense with the non-uniqueness by calculating the corresponding integration constant directly from the resolved scale by making use of the Germano identity, Eq. (4.2) in the second paper, and application of the Smagorinsky scheme to calculate a dynamic mixing length.
CommonCrawl
I have a number of independent variables $x_1,x_2,...,x_m$ and a dependent variable $y$. My dataset contains some millions of rows. Bear with me if my wording is not precise here, and you are welcome to correct me! I assume that there is no multicollinearity in my data, i.e. $x_i$ cannot be explained by $x_j$ for $i\neq j$. However, the dependent variable $y$ seems to react similarly to changes of, say, $x_1$ and $x_2$. What is it called if a change in $x_1$ changes $y$ similarly to a change in $x_2$? The question here is: in the broader search for a way to aggregate my $x_i$ into fewer variables, what would be the right method/procedure/research area/search engine term to find out whether (and how) $x_1$ and $x_2$ explain $y$ similarly? A common way (not necessarily correct) is to use factor analysis to combine predictors. That won't work in your case, as you assumed that the explanatory variables are uncorrelated. Alternatively, a way to combine variables that does not require those variables to be correlated is the sheaf coefficient (the original article).
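One concrete way to check whether $x_1$ and $x_2$ "explain $y$ similarly" in the slope sense is a Wald test of coefficient equality, sketched here in Python with statsmodels on fabricated data; this is a standard technique and not the sheaf-coefficient approach mentioned above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated data: x1 and x2 have (roughly) the same effect on y, x3 does not.
rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "x3": rng.normal(size=n),
})
df["y"] = 2.0 * df["x1"] + 2.0 * df["x2"] - 0.5 * df["x3"] + rng.normal(size=n)

res = smf.ols("y ~ x1 + x2 + x3", data=df).fit()

# Wald test of the linear restriction beta_1 = beta_2: failing to reject is
# consistent with x1 and x2 affecting y similarly.
print(res.t_test("x1 = x2"))

# If the restriction is acceptable, the two predictors can be aggregated.
df["x12"] = df["x1"] + df["x2"]
print(smf.ols("y ~ x12 + x3", data=df).fit().params)
```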
CommonCrawl
Why is the semiclassical approximation of the abelian Chern-Simons theory exact? **Group structure** in Chern-Simons theory? We know that in the common cases, $A=A^a T^a$ is the connection, a Lie-algebra-valued one-form, where the $T^a$ are the generators of the Lie group. The well-known cases are the well-defined SU(2) C-S theory and the SO(3) C-S theory. SU(2) is a compact, simple, simply-connected Lie group. SO(3) is a compact, simple, connected but not simply connected Lie group. Question: what is the minimum requirement on the group structure of $A$ in Chern-Simons theory? (1) to be NOT a Lie group? (2) to be NOT compact? (3) to be NOT connected? (4) to be a Lie group but NOT a simple Lie group? Please could you also explain why it is so, ideally with some examples of (1), (2), (3), (4). P.S. Of course, I know C-S theory is required to be invariant under a gauge transformation $$A \to U^\dagger(A-id)U,$$ which on a manifold with a boundary derives a Wess-Zumino-Witten term. Here I am questioning the constraint on the group. Many thanks! There is a formulation of $2+1$ gravity as a C-S theory. The groups in this case are $ISO(2,1)$, $SO(3,1)$ and $SO(2,2)$ (depending on the cosmological constant choice). So, I guess that compactness and (semi)simplicity are optional. The reference is: Witten, E. (1988). 2+1 dimensional gravity as an exactly soluble system. Nuclear Physics B, 311(1), 46-78. doi. Good to know. Thanks! More comments/references will be helpful. Another reference is CS theories with finite gauge groups: Dijkgraaf, R., & Witten, E. (1990). Topological gauge theories and group cohomology. Communications in Mathematical Physics, 129(2), 393-429. doi preprint. So we do not really need the group to be a Lie group. But can those Dijkgraaf-Witten theories always be written as a continuous Chern-Simons action, or not? Is there always a continuous gauge transformation for those discrete gauge theories? The formulation of D-W theory through a C-S action is here: Freed, D. S., & Quinn, F. (1993). Chern-Simons theory with finite gauge group. Communications in Mathematical Physics, 156(3), 435-472. arXiv:hep-th/9111004. There are indeed only trivial gauge transformations for discrete groups. @user23660, can you specify what your trivial gauge transformations are? Do you have both finite and infinitesimal forms of gauge transformations? Thanks. There are no infinitesimal gauge transformations for the finite group. In this case the gauge transformation is just a (global) deck transformation of the underlying covering space. The nLab is a great reference for all of these things and seems to answer all of your questions. They do a better job of explaining why than I would probably do. Their page on Chern-Simons Theory seems to answer questions (2)-(4). They give a method for constructing Chern-Simons theories from generic compact Lie groups in the page listed. They also have a section describing Witten's construction for 2+1 gravity, which should constitute a set of examples. Dijkgraaf-Witten theories can be thought of as Chern-Simons theories because they are constructed in the same way Chern-Simons theories are constructed. This can be seen by comparing the two definitions and constructions in the nLab pages linked here. In this sense the DW theories are CS theories constructed from discrete groups. As to examples, I believe the simplest is the D($\mathbb Z_2$) Dijkgraaf-Witten TQFT, which has a Hamiltonian realization in Kitaev's toric code model.
Since I could only include two references above, the reference for the toric code is arxiv.org/abs/quant-ph/9707021.
CommonCrawl
Abstract: We discovered the $>100$ GeV $\gamma$-ray source, HESS J1713-381, apparently associated with the shell-type supernova remnant (SNR) CTB 37B, using HESS in 2006. In 2007 we performed X-ray follow-up observations with Chandra with the aim of identifying a synchrotron counterpart to the TeV source and/or thermal emission from the SNR shell. These new Chandra data, together with additional TeV data, allow us to investigate the nature of this object in much greater detail than was previously possible. The new X-ray data reveal thermal emission from a ~4' region in close proximity to the radio shell of CTB 37B. The temperature of this emission implies an age for the remnant of ~5000 years and an ambient gas density of ~0.5 cm$^{-3}$. Both these estimates are considerably uncertain due to the asymmetry of the SNR and possible modifications of the kinematics due to efficient cosmic ray (CR) acceleration. A bright ($\approx 7 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$) and unresolved ($<1\arcsec$) source (CXOU J171405.7-381031), with a soft ($\Gamma \approx 3.3$) non-thermal spectrum, is also detected in coincidence with the radio shell. Absorption indicates a column density consistent with the thermal emission from the shell, suggesting a genuine association rather than a chance alignment. The observed TeV morphology is consistent with an origin in the complete shell of CTB 37B. The lack of diffuse non-thermal X-ray emission suggests an origin of the $\gamma$-ray emission via the decay of neutral pions produced in interactions of protons and nuclei, rather than inverse Compton (IC) emission from relativistic electrons. Rights: Copyright © 2008 ESO. Reproduced with permission from Astronomy & Astrophysics, © ESO.
CommonCrawl
Talk given at meeting of New Zealand Statistical Association and International Association for Statistical Computing (11-14 December 2017), Auckland, New Zealand. It is always a good idea to plot your data before fitting any models, making any predictions, or drawing any conclusions. But how do you actually plot data on thousands of smart meters, each comprising thousands of observations over time? We cannot simply produce time plots of the demand recorded at each meter, due to the sheer volume of data involved. I will propose an approach in which each long series of demand data is converted to a single two-dimensional point that can be plotted in a simple scatterplot. In that way, all the meters can be seen in the scatterplot; so outliers can be detected, clustering can be observed, and any other interesting structure can be examined. To illustrate, I will use data collected during a smart metering trial conducted by the Commission for Energy Regulation (CER) in Ireland. First we estimate the demand percentiles for each half hour of the week, giving us 336 probability distributions per household. Then, we compute the distances between pairs of households using the sum of Jensen–Shannon distances. From these pairwise distances, we can compute a measure of the "typicality" of a specific household, by seeing how many similar houses are nearby. If there are many households with similar probability distributions, the typicality measure will be high. But if there are few similar households, the typicality measure will be low. This gives us a way of finding anomalies in the data set — they are the smart meters corresponding to the least typical households. The pairwise distances between households can also be used to create a plot of all households together. Each of the household distributions can be thought of as a vector in $K$-dimensional space where $K=7\times48\times99 = 33,264$. To easily visualize these, we need to project them onto a two-dimensional space. I propose using Laplacian eigenmaps which attempt to preserve the smallest distances — so the most similar points in $K$-dimensional space are as close as possible in the two-dimensional space. This way of plotting the data easily allows us to see the anomalies, to identify any clusters of observations in the data, and to examine any other structure that might exist.
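A compressed sketch of the pipeline described above, with randomly generated stand-in data (the CER trial data, the 99 estimated percentiles per half-hour, and the exact typicality measure used in the talk are replaced by illustrative choices); scikit-learn's SpectralEmbedding implements the Laplacian-eigenmap step on a precomputed affinity matrix.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(2)

# Stand-in data: for each household, one demand distribution (histogram over
# demand bins) per half-hour of the week.  The real analysis would use the
# estimated demand percentiles from the smart-meter readings.
n_households, n_halfhours, n_bins = 50, 336, 20
counts = rng.gamma(shape=2.0, scale=1.0, size=(n_households, n_halfhours, n_bins))
dists = counts / counts.sum(axis=2, keepdims=True)        # normalise to probabilities

# Pairwise distance between households: sum of Jensen-Shannon distances
# over the 336 half-hourly distributions.
D = np.zeros((n_households, n_households))
for i in range(n_households):
    for j in range(i + 1, n_households):
        d = sum(jensenshannon(dists[i, t], dists[j, t]) for t in range(n_halfhours))
        D[i, j] = D[j, i] = d

# "Typicality": how many similar households are nearby (a simple kernel-style
# score here; the talk's exact definition may differ).
scale = D[D > 0].mean()
typicality = np.exp(-(D / scale) ** 2).sum(axis=1)
print("least typical households:", np.argsort(typicality)[:3])

# Two-dimensional embedding via Laplacian eigenmaps (spectral embedding of an
# affinity matrix derived from the pairwise distances).
affinity = np.exp(-(D / scale) ** 2)
coords = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(affinity)
print(coords.shape)   # (n_households, 2) points ready for a scatterplot
```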
CommonCrawl
What fraction of knots are ribbon knots? Q. Among prime knots of crossing number $n$ (OEIS A002863), what is the fraction that are ribbon knots, as $n \to \infty$?
CommonCrawl
This page is about the orbit of a group action. For other uses, see Definition:Orbit. Let $G$ be a group acting on a set $X$. The orbit of an element $x \in X$, denoted $\Orb x$, is the set of all elements of $X$ to which $x$ can be mapped by the action: $\Orb x = \{g * x: g \in G\}$. That is, $\Orb x = G * x$. Thus the orbit of an element is all its possible destinations under the group action. Let $\mathcal R$ be the relation on $X$ defined by $x \mathrel{\mathcal R} y$ if and only if $y \in \Orb x$. From Group Action Induces Equivalence Relation, $\mathcal R$ is an equivalence relation. The orbit of $x$ is then the equivalence class of $x$ under $\mathcal R$. The quotient set $X / \mathcal R_G$ is called the set of orbits of $X$ under the action of $G$.
CommonCrawl
Mudakavi, JR and Ramaswamy, YS (1986) Extraction-spectrophotometric determination of traces of mercury(II) with bromide and Rhodamine 6G. In: Journal of the Indian Institute of Science, 66 (3). pp. 155-162. A very sensitive method for the detn. of Hg(II) after extn. of its bromide-Rhodamine 6G complex into benzene was developed. The optimum pH range for the extn. is 0.5-4.5. The molar absorptivity and Sandell sensitivity are $8.2 \times 10^4$ L mol$^{-1}$ cm$^{-1}$ and 2.5 ng cm$^{-2}$, resp. The interference of various ions was studied. The method is applicable to the detn. of Hg in coal and sea water. Copyright to this article belongs to the Indian Institute of Science.
CommonCrawl
How many ways can you find to put operation signs ($+$, $-$, $\times$, $\div$) between the digits to make $100$?
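A brute-force sketch of the kind of search the problem invites, under assumptions not stated above: the digits are taken to be 1 through 9 in order, one operation sign is placed between every pair of adjacent digits (no concatenation into multi-digit numbers), and the usual precedence rules apply. If the actual problem uses different digits or allows joining digits, the sketch would need adjusting.

```python
from fractions import Fraction
from itertools import product

DIGITS = "123456789"     # assumption: the digits 1..9 in order (not listed above)
OPS = "+-*/"

solutions = []
for ops in product(OPS, repeat=len(DIGITS) - 1):
    # Interleave digits and operation signs, e.g. "1+2+3+4+5+6+7+8*9".
    expr = "".join(d + op for d, op in zip(DIGITS, ops)) + DIGITS[-1]
    # Evaluate exactly with rational arithmetic under the usual precedence rules.
    exact = "".join(f"Fraction({c})" if c.isdigit() else c for c in expr)
    if eval(exact, {"Fraction": Fraction}) == 100:
        solutions.append(expr)

print(len(solutions), "expressions evaluate to 100, for example:")
print(solutions[:5])
```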
CommonCrawl
Abstract: Douglass B. Morris announced in 1970 that it is consistent with ZF that "For every $\alpha$, there exists a set $A_\alpha$ which is the countable union of countable sets, and $\mathcal P(A_\alpha)$ can be partitioned into $\aleph_\alpha$ non-empty sets". The result was never published in a journal, and seems to have been lost, save a mention in Jech's "Axiom of Choice". We provide a proof using modern tools derived from recent work of the author. We also prove a new preservation theorem for general products of symmetric systems, which we use to obtain the consistency of Dependent Choice with the above statement (replacing "countable union of countable sets" by "union of $\kappa$ sets of size $\kappa$").
CommonCrawl
We study numerical integration of $q$-times differentiable functions on $\mathbf R^d$ for a probability measure that is self-similar with respect to $m$ affine contraction mappings $S_1,\dots,S_m$ on $\mathbf R^d$ and corresponding probability weights $\rho_1,\dots,\rho_m$. Under mild conditions on the contractions we provide lower bounds for the worst case errors of deterministic as well as randomized algorithms in terms of the worst case (average) number of function evaluations that are used. The matching upper bounds are obtained by composite quadrature rules, which are easy to implement and are based on divide and conquer strategies that are adapted to the structure of the self-similarity. The optimal order of convergence is characterized in terms of the similarity dimension of the contractions. Joint work with Steffen Dereich (University of Muenster).
CommonCrawl
Given a finite set $E$, a subset $D\subseteq E$ (viewed as a function $E\to \mathbb F_2$) is orthogonal to a given subspace $\mathcal F$ of the $\mathbb F_2$-vector space of functions $E\to \mathbb F_2$ as soon as $D$ is orthogonal to every $\subseteq$-minimal element of $\mathcal F$. This fails in general when $E$ is infinite. However, we prove the above statement for the six subspaces $\mathcal F$ of the edge space of any $3$-connected locally finite graph that are relevant to its homology: the topological, algebraic, and finite cycle and cut spaces. This solves a problem of Diestel (2010, arXiv:0912.4213).
CommonCrawl
I got this problem from Rustan Leino, who got it from Mariela Pavlova. A room has 100 light switches, numbered by the positive integers 1 through 100. There are also 100 children, numbered by the positive integers 1 through 100. Initially, the switches are all off. Each child $k$ enters the room and changes the position of every light switch $n$ such that $n$ is a multiple of $k$. For instance, child 1 changes all the switches; child 2 changes switches $2, 4, 6, 8, \ldots$; child 3 changes switches $3, 6, 9, 12, \ldots;$ and child 100 changes only light switch 100. When all the children have gone through the room, how many of the light switches are on? Ten switches are on at the end, those numbered with squares of integers.
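A direct simulation confirming the stated answer (the switches left on are exactly the perfect squares 1, 4, 9, ..., 100, which each have an odd number of divisors):

```python
# Simulate the puzzle: child k toggles every switch n that is a multiple of k.
switches = [False] * 101                  # index 0 unused; switches 1..100 start off

for child in range(1, 101):
    for n in range(child, 101, child):    # every multiple of `child` up to 100
        switches[n] = not switches[n]

on = [n for n in range(1, 101) if switches[n]]
print(len(on), on)    # 10 [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```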
CommonCrawl
1.7 Obtain the transfer function $E_0(s)/E_i(s)$. 1.8 Derive the transfer function of the following systems. 1.9 Obtain the TF of the following systems. 1.10 A solenoid valve is shown in the figure below. The coil has an electrical resistance of $4\,\Omega$ and an inductance of 0.6 H and produces the electromagnetic force $F_c(t)=K_c\times i(t)$. The valve has a mass of 0.125 kg and the linear bearings produce a resistive force of $C\times u(t)$. The values of $K_c$ and $C$ are 0.4 N/A and 0.25 Ns/m respectively. Deduce the overall differential equation relating the input voltage $v(t)$ to the output velocity $u(t)$.
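A sketch of one way to set up problem 1.10 (a worked attempt under stated assumptions, not necessarily the textbook's intended solution; the back-EMF induced by the valve's motion is neglected, since no motional-EMF constant is given). Kirchhoff's voltage law for the coil and Newton's second law for the valve give
$$v(t) = L\frac{di}{dt} + R\,i(t), \qquad m\frac{du}{dt} + C\,u(t) = K_c\,i(t).$$
Solving the mechanical equation for $i(t) = \dfrac{1}{K_c}\left(m\dfrac{du}{dt} + C\,u\right)$ and substituting into the electrical equation eliminates the current:
$$v(t) = \frac{Lm}{K_c}\frac{d^2u}{dt^2} + \frac{LC + Rm}{K_c}\frac{du}{dt} + \frac{RC}{K_c}\,u(t).$$
With the given values ($L = 0.6$ H, $R = 4\,\Omega$, $m = 0.125$ kg, $C = 0.25$ Ns/m, $K_c = 0.4$ N/A) this becomes
$$v(t) = 0.1875\,\frac{d^2u}{dt^2} + 1.625\,\frac{du}{dt} + 2.5\,u(t).$$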
CommonCrawl
How many non-isomorphic permutation selections are there on an arbitrary $N \times N$ square matrix with rotations applied? On a square $N \times N$ grid, select exactly $N$ cells subject to the condition that only one cell is selected in each row and each column. How many solutions are there? The answer is very simple: $N!$. How many non-isomorphic permutation selections are there for arbitrary $N$? I could not get a direct answer, and I created a project on github for getting answers for small $N$ in a brute-force way. Please see more information at https://github.com/gnozil/permatrix. Matrices with $90^\circ$ rotational symmetry have orbit size 1. Matrices with $180^\circ$ rotational symmetry. What must a permutation look like if its permutation matrix is unchanged by $180^\circ$ rotation? Observe that if $N$ is odd, row $(N+1)/2$ undergoes reversal when the matrix is rotated by $180^\circ$. In a matrix with $180^\circ$ rotational symmetry, row $(N+1)/2$ must therefore be symmetric under reversal, which means that its single $1$ must lie in column $(N+1)/2$. This corresponds to the center point of the matrix, which is fixed under rotation. In other words, a permutation whose matrix has $180^\circ$ rotational symmetry must fix $(N+1)/2$ if $N$ is odd. Now suppose that our permutation with $180^\circ$ rotational symmetry maps $1$ to $j$. Because, under $180^\circ$ rotation, row $1$ becomes row $N$ with the order of elements reversed, $N$ must then map to $N+1-j$. When $N$ is odd and greater than $1$, we know that $j\ne(N+1)/2$ since $(N+1)/2$ maps to itself and therefore cannot be the image of $1$. Hence there are $N-1$ choices for $j$. When $N$ is even, there are $N$ choices for $j$. Matrices with $90^\circ$ rotational symmetry. Now we enumerate the permutation matrices with $90^\circ$ rotational symmetry. Suppose that such a permutation maps $1$ to $j$. Then, since row $1$ becomes column $N$ after a $90^\circ$ clockwise rotation, we see that $j$ maps to $N$. We know from the preceding discussion of $180^\circ$ rotations that $N$ maps to $N+1-j$. Finally, under a $270^\circ$ clockwise rotation, row 1 becomes column 1 with the order of elements reversed. Therefore $N+1-j$ maps to $1$. As a consequence, the permutation contains the 4-cycle $(1,j,N,N+1-j)$. More generally, a permutation whose matrix has $90^\circ$ rotational symmetry consists of 4-cycles of the form $(a,b,N+1-a,N+1-b)$. Note that $a$ cannot equal $b$ unless $a=b=(N+1)/2$ since a $90^\circ$ clockwise rotation sends the matrix element $(a,a)$ to $(a,N+1-a)$, which would produce two $1$s in row $a$ unless $N+1-a=a$. By similar reasoning, $b$ cannot equal $N+1-a$ unless $a=b=(N+1)/2$. In the case $a=(N+1)/2$, which requires $N$ odd, the permutation contains the 1-cycle $((N+1)/2)$. This represents the center point of the matrix, which, as we have already discussed, is fixed by rotation.
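A brute-force sketch in Python (written independently of the C project linked above) that counts permutation matrices up to the four rotations by explicit orbit enumeration; it is only feasible for small $N$, but is handy for checking Burnside-style counts derived from the fixed-matrix analysis above.

```python
from itertools import permutations

def rotate(perm):
    """90-degree clockwise rotation of the permutation matrix of `perm`.

    If the matrix has a 1 at (i, j), the rotated matrix has a 1 at (j, n-1-i);
    the result is returned as a permutation again (row -> column of its 1)."""
    n = len(perm)
    rotated = [0] * n
    for i, j in enumerate(perm):
        rotated[j] = n - 1 - i
    return tuple(rotated)

def count_up_to_rotation(n):
    """Number of n x n permutation matrices, counted up to the 4 rotations."""
    seen = set()
    classes = 0
    for perm in permutations(range(n)):
        if perm in seen:
            continue
        classes += 1
        orbit, current = {perm}, perm
        for _ in range(3):                 # add the 90/180/270 degree rotations
            current = rotate(current)
            orbit.add(current)
        seen.update(orbit)
    return classes

for n in range(1, 8):
    print(n, count_up_to_rotation(n))
# e.g. n = 3 gives 2: the 6 permutation matrices fall into 2 rotation classes.
```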
CommonCrawl
In this paper we first introduce a space of homogeneous type $X$, and we consider a kind of generalized upper half-space $X \times (0, \infty)$. We are mainly concerned with some inequalities in terms of Carleson measures or in terms of certain maximal operators with respect to general approach regions in $X \times (0, \infty)$. The main tool of the proof is the Whitney decomposition.
CommonCrawl
The area of a square is calculated using the following formula: $$ Area = L \times H $$ In this equation, \(L\) is the length of the square and \(H\) is the height of the square. You may notice that this formula is the same as the formula used to calculate the area of a rectangle. The difference is that a square is a special kind of rectangle: the length and height are the same, so the formula can equivalently be written as \(Area = L \times L = L^2\).
CommonCrawl
Chin-San Liu, Jui-Chih Chang, Shou-Jen Kuo, Ko-Hung Liu, Ta-Tsung Lin, Wen-Ling Cheng and Sheng-Fei Chuang. Delivering healthy mitochondria for the therapy of mitochondrial diseases and beyond.. The international journal of biochemistry & cell biology, May 2014. Abstract Mitochondrial transfer has been demonstrated to play a physiological role in the rescuing of mitochondrial DNA deficient cells by co-culture with human mesenchymal stem cells. The successful replacement of mitochondria using microinjection into the embryo has been revealed to improve embryo maturation. Evidence of mitochondrial transfer has been shown to minimize injury of the ischemic-reperfusion rabbit heart model. In this mini review, the therapeutic strategies of mitochondrial diseases based on the concept of mitochondrial transfer are illustrated, as well as a novel approach to peptide-mediated mitochondrial delivery. The possible mechanism of peptide-mediated mitochondrial delivery in the treatment of the myoclonic epilepsy and ragged-red fiber disease is summarized. Understanding the feasibility of mitochondrial manipulation in cells facilitates novel therapeutic skills in the future clinical practice of mitochondrial disorder. Luz Diana Santa-Cruz and Ricardo Tapia. Role of energy metabolic deficits and oxidative stress in excitotoxic spinal motor neuron degeneration in vivo.. ASN neuro 6(2), January 2014. Abstract MN (motor neuron) death in amyotrophic lateral sclerosis may be mediated by glutamatergic excitotoxicity. Previously, our group showed that the microdialysis perfusion of AMPA ($\alpha$-amino-3-hydroxy-5-methyl-4-isoxazole propionate) in the rat lumbar spinal cord induced MN death and permanent paralysis within 12 h after the experiment. Here, we studied the involvement of energy metabolic deficiencies and of oxidative stress in this MN degeneration, by testing the neuroprotective effect of various energy metabolic substrates and antioxidants. Pyruvate, lactate, $\beta$-hydroxybutyrate, $\alpha$-ketobutyrate and creatine reduced MN loss by 50-65%, preserved motor function and completely prevented the paralysis. Ascorbate, glutathione and glutathione ethyl ester weakly protected against motor deficits and reduced MN death by only 30-40%. Reactive oxygen species formation and 3-nitrotyrosine immunoreactivity were studied 1.5-2 h after AMPA perfusion, during the initial MN degenerating process, and no changes were observed. We conclude that mitochondrial energy deficiency plays a crucial role in this excitotoxic spinal MN degeneration, whereas oxidative stress seems a less relevant mechanism. Interestingly, we observed a clear correlation between the alterations of motor function and the number of damaged MNs, suggesting that there is a threshold of about 50% in the number of healthy MNs necessary to preserve motor function. L D Osellame and M R Duchen. Quality control gone wrong: mitochondria, lysosomal storage disorders and neurodegeneration.. British journal of pharmacology 171(8):1958–72, 2014. Abstract The eukaryotic cell possesses specialized pathways to turn over and degrade redundant proteins and organelles. Each pathway is unique and responsible for degradation of distinctive cytosolic material. The ubiquitin-proteasome system and autophagy (chaperone-mediated, macro, micro and organelle specific) act synergistically to maintain proteostasis. Defects in this equilibrium can be deleterious at cellular and organism level, giving rise to various disease states.
Dysfunction of quality control pathways are implicated in neurodegenerative diseases and appear particularly important in Parkinson's disease and the lysosomal storage disorders. Neurodegeneration resulting from impaired degradation of ubiquitinated proteins and $\alpha$-synuclein is often accompanied by mitochondrial dysfunction. Mitochondria have evolved to control a diverse number of processes, including cellular energy production, calcium signalling and apoptosis, and like every other organelle within the cell, they must be 'recycled.' Failure to do so is potentially lethal as these once indispensible organelles become destructive, leaking reactive oxygen species and activating the intrinsic cell death pathway. This process is paramount in neurons which have an absolute dependence on mitochondrial oxidative phosphorylation as they cannot up-regulate glycolysis. As such, mitochondrial bioenergetic failure can underpin neural death and neurodegenerative disease. In this review, we discuss the links between cellular quality control and neurodegenerative diseases associated with mitochondrial dysfunction, with particular attention to the emerging links between Parkinson's and Gaucher diseases in which defective quality control is a defining factor. Mitochondrial quality control in neurodegenerative diseases.. Biochimie 100:177–83, 2014. Abstract Mutations causing genetic forms of Parkinson's disease or hereditary neuropathies have been recently shown to affect key molecular players involved in the recycling of defective mitochondria, most notably PARKIN, PINK1, Mitofusin 2 or dynein heavy chain. Interestingly, the same pathways are also indirectly targeted by multiple other mutations involved in familial forms of amyotrophic lateral sclerosis, Huntington's disease or Alzheimer's disease. These recent genetic results strongly reinforce the notion that defective mitochondrial physiology might cause neurodegeneration. Mitochondrial dysfunction has however been observed in virtually every neurodegenerative disease and appears not restricted to the most vulnerable neuronal populations affected by a given disease. Thus, the mechanisms linking defective mitochondrial quality control to death of selective neuronal populations remain to be identified. This review provides an update on the most recent literature on mitochondrial quality control and its impairment during neurodegenerative diseases. Giuseppina Amadoro, Veronica Corsetti, Fulvio Florenzano, Anna Atlante, Antonella Bobba, Vanessa Nicolin, Stefania L Nori and Pietro Calissano. Morphological and bioenergetic demands underlying the mitophagy in post-mitotic neurons: the pink-parkin pathway.. Frontiers in aging neuroscience 6:18, 2014. Abstract Evidence suggests a striking causal relationship between changes in quality control of neuronal mitochondria and numerous devastating human neurodegenerative diseases, including Parkinson's disease, Alzheimer's disease, Huntington's disease, and amyotrophic lateral sclerosis. Contrary to replicating mammalian cells with a metabolism essentially glycolytic, post-mitotic neurons are distinctive owing to (i) their exclusive energetic dependence from mitochondrial metabolism and (ii) their polarized shape, which entails compartmentalized and distinct energetic needs. 
Here, we review the recent findings on mitochondrial dynamics and mitophagy in differentiated neurons focusing on how the exceptional characteristics of neuronal populations in their morphology and bioenergetics needs make them quite different to other cells in controlling the intracellular turnover of these organelles. Wenzhang Wang, Li Li, Wen-Lang Lin, Dennis W Dickson, Leonard Petrucelli, Teng Zhang and Xinglong Wang. The ALS disease-associated mutant TDP-43 impairs mitochondrial dynamics and function in motor neurons.. Human molecular genetics 22(23):4706–19, December 2013. Abstract Mutations in TDP-43 lead to familial ALS. Expanding evidence suggests that impaired mitochondrial dynamics likely contribute to the selective degeneration of motor neurons in SOD1-associated ALS. In this study, we investigated whether and how TDP-43 mutations might impact mitochondrial dynamics and function. We demonstrated that overexpression of wild-type TDP-43 resulted in reduced mitochondrial length and density in neurites of primary motor neurons, features further exacerbated by ALS-associated TDP-43 mutants Q331K and M337V. In contrast, suppression of TDP-43 resulted in significantly increased mitochondrial length and density in neurites, suggesting a specific role of TDP-43 in regulating mitochondrial dynamics. Surprisingly, both TDP-43 overexpression and suppression impaired mitochondrial movement. We further showed that abnormal localization of TDP-43 in cytoplasm induced substantial and widespread abnormal mitochondrial dynamics. TDP-43 co-localized with mitochondria in motor neurons and their colocalization was enhanced by ALS associated mutant. Importantly, co-expression of mitochondrial fusion protein mitofusin 2 (Mfn2) could abolish TDP-43 induced mitochondrial dynamics abnormalities and mitochondrial dysfunction. Taken together, these data suggest that mutant TDP-43 impairs mitochondrial dynamics through enhanced localization on mitochondria, which causes mitochondrial dysfunction. Therefore, abnormal mitochondrial dynamics is likely a common feature of ALS which could be potential new therapeutic targets to treat ALS. Parisa Ghiasi, Saman Hosseinkhani, Alireza Noori, Shahriar Nafissi and Khosro Khajeh. Mitochondrial complex I deficiency and ATP/ADP ratio in lymphocytes of amyotrophic lateral sclerosis patients.. Neurological research 34(3):297–303, April 2012. Abstract OBJECTIVES: Several lines of evidence suggest that mitochondrial dysfunction is involved in amyotrophic lateral sclerosis (ALS), but despite the fact that mitochondria play a central role in excitotoxicity, oxidative stress, and apoptosis, the intimate underlying mechanism linking mitochondrial defects to motor neuron degeneration in ALS still remains elusive. This study was performed to assess the mitochondrial respiratory chain dysfunction and cellular energy index (ATP/ADP ratio) in lymphocytes of ALS patients. METHODS: In this study, activity of mitochondrial respiratory chain complex I (measured as NADH-ferricyanide reductase) and both intracellular ATP and ADP measurements were performed on lymphocytes of ALS patients (n = 14) and control subjects (n = 26). Then, ATP/ADP ratio was calculated. 
RESULTS: Our finding showed that in patients compared with controls, complex I activity and intracellular ATP were significantly reduced (P = 0·001) and intracellular ADP content was increased (P<0·005) and ATP/ADP ratio subsequently was decreased and also we found strong correlation between complex I activity and intracellular ATP content and strong reverse correlation between complex I activity and intracellular ADP content in the patients with ALS (r(2) = 0·90). DISCUSSION: This study suggests that complex I deficiency and both reduction in intracellular ATP and increase in intracellular ADP content may be involved in the progression and pathogenesis of ALS. Ernesto Miquel, Adriana Cassina, Laura Martínez-Palma, Carmen Bolatto, Emiliano Trías, Mandi Gandelman, Rafael Radi, Luis Barbeito and Patricia Cassina. Modulation of astrocytic mitochondrial function by dichloroacetate improves survival and motor performance in inherited amyotrophic lateral sclerosis.. PloS one 7(4):e34776, January 2012. Abstract Mitochondrial dysfunction is one of the pathogenic mechanisms that lead to neurodegeneration in Amyotrophic Lateral Sclerosis (ALS). Astrocytes expressing the ALS-linked SOD1(G93A) mutation display a decreased mitochondrial respiratory capacity associated to phenotypic changes that cause them to induce motor neuron death. Astrocyte-mediated toxicity can be prevented by mitochondria-targeted antioxidants, indicating a critical role of mitochondria in the neurotoxic phenotype. However, it is presently unknown whether drugs currently used to stimulate mitochondrial metabolism can also modulate ALS progression. Here, we tested the disease-modifying effect of dichloroacetate (DCA), an orphan drug that improves the functional status of mitochondria through the stimulation of the pyruvate dehydrogenase complex activity (PDH). Applied to astrocyte cultures isolated from rats expressing the SOD1(G93A) mutation, DCA reduced phosphorylation of PDH and improved mitochondrial coupling as expressed by the respiratory control ratio (RCR). Notably, DCA completely prevented the toxicity of SOD1(G93A) astrocytes to motor neurons in coculture conditions. Chronic administration of DCA (500 mg/L) in the drinking water of mice expressing the SOD1(G93A) mutation increased survival by 2 weeks compared to untreated mice. Systemic DCA also normalized the reduced RCR value measured in lumbar spinal cord tissue of diseased SOD1(G93A) mice. A remarkable effect of DCA was the improvement of grip strength performance at the end stage of the disease, which correlated with a recovery of the neuromuscular junction area in extensor digitorum longus muscles. Systemic DCA also decreased astrocyte reactivity and prevented motor neuron loss in SOD1(G93A) mice. Taken together, our results indicate that improvement of the mitochondrial redox status by DCA leads to a disease-modifying effect, further supporting the therapeutic potential of mitochondria-targeted drugs in ALS. Liesbeth Faes and Geert Callewaert. Mitochondrial dysfunction in familial amyotrophic lateral sclerosis.. Journal of bioenergetics and biomembranes 43(6):587–92, 2011. Abstract A growing body of evidence suggests that mitochondrial dysfunctions play a crucial role in the pathogenesis of various neurodegenerative disorders, including amyotrophic lateral sclerosis (ALS), a neurodegenerative disease affecting both upper and lower motor neurons. Although ALS is predominantly a sporadic disease, approximately 10% of cases are familial.
The most frequent familial form is caused by mutations in the gene encoding Cu/Zn superoxide dismutase 1 (SOD1). A dominant toxic gain of function of mutant SOD1 has been considered as the cause of the disease and mitochondria are thought to be key players in the pathogenesis. However, the exact nature of the link between mutant SOD1 and mitochondrial dysfunctions remains to be established. Here, we briefly review the evidence for mitochondrial dysfunctions in familial ALS and discuss a possible link between mutant SOD1 and mitochondrial dysfunction. Ralf J Braun, Cornelia Sommer, Didac Carmona-Gutierrez, Chamel M Khoury, Julia Ring, Sabrina Büttner and Frank Madeo. Neurotoxic 43-kDa TAR DNA-binding protein (TDP-43) triggers mitochondrion-dependent programmed cell death in yeast.. The Journal of biological chemistry 286(22):19958–72, 2011. Abstract Pathological neuronal inclusions of the 43-kDa TAR DNA-binding protein (TDP-43) are implicated in dementia and motor neuron disorders; however, the molecular mechanisms of the underlying cell loss remain poorly understood. Here we used a yeast model to elucidate cell death mechanisms upon expression of human TDP-43. TDP-43-expressing cells displayed markedly increased markers of oxidative stress, apoptosis, and necrosis. Cytotoxicity was dose- and age-dependent and was potentiated upon expression of disease-associated variants. TDP-43 was localized in perimitochondrial aggregate-like foci, which correlated with cytotoxicity. Although the deleterious effects of TDP-43 were significantly decreased in cells lacking functional mitochondria, cell death depended neither on the mitochondrial cell death proteins apoptosis-inducing factor, endonuclease G, and cytochrome c nor on the activity of cell death proteases like the yeast caspase 1. In contrast, impairment of the respiratory chain attenuated the lethality upon TDP-43 expression with a stringent correlation between cytotoxicity and the degree of respiratory capacity or mitochondrial DNA stability. Consistently, an increase in the respiratory capacity of yeast resulted in enhanced TDP-43-triggered cytotoxicity, oxidative stress, and cell death markers. These data demonstrate that mitochondria and oxidative stress are important to TDP-43-triggered cell death in yeast and may suggest a similar role in human TDP-43 pathologies. Adrian Israelson, Nir Arbel, Sandrine Da Cruz, Hristelina Ilieva, Koji Yamanaka, Varda Shoshan-Barmatz and Don W Cleveland. Misfolded mutant SOD1 directly inhibits VDAC1 conductance in a mouse model of inherited ALS.. Neuron 67(4):575–87, August 2010. Abstract Mutations in superoxide dismutase (SOD1) cause amyotrophic lateral sclerosis (ALS), a neurodegenerative disease characterized by loss of motor neurons. With conformation-specific antibodies, we now demonstrate that misfolded mutant SOD1 binds directly to the voltage-dependent anion channel (VDAC1), an integral membrane protein imbedded in the outer mitochondrial membrane. This interaction is found on isolated spinal cord mitochondria and can be reconstituted with purified components in vitro. ADP passage through the outer membrane is diminished in spinal mitochondria from mutant SOD1-expressing ALS rats. Direct binding of mutant SOD1 to VDAC1 inhibits conductance of individual channels when reconstituted in a lipid bilayer. Reduction of VDAC1 activity with targeted gene disruption is shown to diminish survival by accelerating onset of fatal paralysis in mice expressing the ALS-causing mutation SOD1(G37R). 
Taken together, our results establish a direct link between misfolded mutant SOD1 and mitochondrial dysfunction in this form of inherited ALS. Ya-Fei Xu, Tania F Gendron, Yong-Jie Zhang, Wen-Lang Lin, Simon D'Alton, Hong Sheng, Monica Castanedes Casey, Jimei Tong, Joshua Knight, Xin Yu, Rosa Rademakers, Kevin Boylan, Mike Hutton, Eileen McGowan, Dennis W Dickson, Jada Lewis and Leonard Petrucelli. Wild-type human TDP-43 expression causes TDP-43 phosphorylation, mitochondrial aggregation, motor deficits, and early mortality in transgenic mice.. The Journal of neuroscience : the official journal of the Society for Neuroscience 30(32):10851–9, 2010. Abstract Transactivation response DNA-binding protein 43 (TDP-43) is a principal component of ubiquitinated inclusions in frontotemporal lobar degeneration with ubiquitin-positive inclusions and in amyotrophic lateral sclerosis (ALS). Mutations in TARDBP, the gene encoding TDP-43, are associated with sporadic and familial ALS, yet multiple neurodegenerative diseases exhibit TDP-43 pathology without known TARDBP mutations. While TDP-43 has been ascribed a number of roles in normal biology, including mRNA splicing and transcription regulation, elucidating disease mechanisms associated with this protein is hindered by the lack of models to dissect such functions. We have generated transgenic (TDP-43PrP) mice expressing full-length human TDP-43 (hTDP-43) driven by the mouse prion promoter to provide a tool to analyze the role of wild-type hTDP-43 in the brain and spinal cord. Expression of hTDP-43 caused a dose-dependent downregulation of mouse TDP-43 RNA and protein. Moderate overexpression of hTDP-43 resulted in TDP-43 truncation, increased cytoplasmic and nuclear ubiquitin levels, and intranuclear and cytoplasmic aggregates that were immunopositive for phosphorylated TDP-43. Of note, abnormal juxtanuclear aggregates of mitochondria were observed, accompanied by enhanced levels of Fis1 and phosphorylated DLP1, key components of the mitochondrial fission machinery. Conversely, a marked reduction in mitofusin 1 expression, which plays an essential role in mitochondrial fusion, was observed in TDP-43PrP mice. Finally, TDP-43PrP mice showed reactive gliosis, axonal and myelin degeneration, gait abnormalities, and early lethality. This TDP-43 transgenic line provides a valuable tool for identifying potential roles of wild-type TDP-43 within the CNS and for studying TDP-43-associated neurotoxicity. Andoni Echaniz-Laguna, Joffrey Zoll, Elodie Ponsot, Benoit N'guessan, Christine Tranchant, Jean-Philippe Loeffler and Eliane Lampert. Muscular mitochondrial function in amyotrophic lateral sclerosis is progressively altered as the disease develops: a temporal study in man.. Experimental neurology 198(1):25–30, 2006. Abstract We performed repeated analysis of mitochondrial respiratory function in skeletal muscle (SM) of patients with early-stage sporadic amyotrophic lateral sclerosis (SALS) to determine whether mitochondrial function was altered as the disease advanced. SM biopsies were obtained from 7 patients with newly diagnosed SALS, the same 7 patients 3 months later, and 7 sedentary controls. Muscle fibers were permeabilized with saponin, then skinned and placed in an oxygraphic chamber to measure basal and maximal adenosine diphosphate (ADP)-stimulated respiration rates and to assess mitochondrial regulation by ADP. 
We found that the maximal oxidative phosphorylation capacity of muscular mitochondria significantly increased, and muscular mitochondrial respiratory complex IV activity significantly decreased as the disease advanced. This temporal study demonstrates for the first time that mitochondrial function in SM in human SALS is progressively altered as the disease develops.
CommonCrawl
I know such a triplet is determined by the formula of the characteristic exponent and is unique. So, such a triplet exists, because for each Lévy process there is such a triplet. I'd like to make explicit each term of the triplet but I feel confused about it. To obtain this, I was thinking the triplet is given by $(r,t,\pi),$ where $\pi$ is the measure of jumps of the compound Poisson process, but I'm not sure about this. How can we make these terms explicit? Every Lévy process $Y$ can be decomposed as the sum of three independent processes $Y=B+Z+D$, respectively a Brownian motion, a Lévy jump process $Z$ and a drift $D$. Hence one can choose three objects that characterize each of $(B,\,Z,\,D)$ to characterize the Lévy process $Y$. Your Brownian motion is determined by $a\ge0$, your compound Poisson process is determined by its Lévy measure $\pi$, and your drift by the coefficient $r\in\mathbb R$. You can make $\pi$ more explicit, where $\pi(A)=\lambda \int_A \pi^X(dy)$, where $\lambda$ is the intensity of the jumps, and $\pi^X$ is the law of each jump.
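For reference, one standard way to make the triplet explicit (written here in the answer's notation $(r, a, \pi)$; sign and truncation conventions vary between textbooks, so treat this as a sketch of the usual statement rather than the only form) is through the Lévy–Khintchine formula for the characteristic exponent:
$$\mathbb E\left[e^{i\theta Y_t}\right] = e^{t\,\psi(\theta)}, \qquad \psi(\theta) = i r\theta - \tfrac{1}{2} a \theta^2 + \int_{\mathbb R} \left(e^{i\theta y} - 1 - i\theta y\,\mathbf 1_{\{|y|<1\}}\right)\pi(dy).$$
Here $r \in \mathbb R$ is the drift coefficient of $D$, $a \ge 0$ is the variance parameter of the Brownian part $B$, and $\pi$ is the Lévy measure of the jump part $Z$. For a compound Poisson jump part with intensity $\lambda$ and jump law $\pi^X$ one has $\pi(dy) = \lambda\,\pi^X(dy)$; since then $\pi(\mathbb R) = \lambda < \infty$, the compensating term $i\theta y\,\mathbf 1_{\{|y|<1\}}$ can be dropped at the cost of shifting the drift $r$.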
CommonCrawl
And I'd like to verify that the determinant is not zero. Vandermonde matrix, conic section through five points, and so on. Could you recommend any program or website which calculates the determinant of a matrix? Thank you for your attention to this matter. I know the fact (statement) that "five points determine a conic section uniquely". For $x_i = y_i$ this is pretty wrong. The determinant is zero iff the rank is not full. But if $(x_1,\cdots,x_5)=(y_1,\cdots,y_5)$ this is already wrong. If the determinant is zero then the columns are linearly related, which means there exist $a,b,c,d,e$, not all zero, such that $$ ax_i^2 + bx_iy_i+cy_i^2 +d x_i+ey_i = 0,\ i=1,\dots,5.$$ Then the points $P_i(x_i,y_i)$ are on the same conic. Therefore the determinant can be zero, and is zero if and only if the points $P_i$ lie on the same conic in the plane.
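Since the question also asks for a program that can compute such determinants, here is a short sympy sketch using the $5\times 5$ matrix implied by the answer (columns $x_i^2$, $x_iy_i$, $y_i^2$, $x_i$, $y_i$); the sample points are placeholders to replace with your own, and if your matrix also includes a constant column you would build the corresponding larger matrix instead.

```python
import sympy as sp

# Five sample points P_i = (x_i, y_i); replace with your own coordinates.
points = [(0, 1), (1, 0), (2, 3), (3, 5), (4, 2)]

# 5 x 5 matrix with columns x^2, x*y, y^2, x, y, as in the answer above.
M = sp.Matrix([[x**2, x*y, y**2, x, y] for x, y in points])

det = M.det()
print(det)   # zero exactly when the five points lie on a common conic
             # of the form a*x^2 + b*x*y + c*y^2 + d*x + e*y = 0
```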
CommonCrawl
In eqns 3 and 5 the state equation has the mean removed. However, I have looked at several implementations of Kalman filters for state-space models and I haven't seen this "de-meaned" version anywhere. Q1.) From a theoretical point of view, are the two representations equivalent? Clearly they are if $\mu=0$, but what about more generally for nonzero values of $\mu$? Does the mean of $x_0=\xi$ offset the process so that the rest of the observations can be treated as mean zero? If so, can someone show me how this makes the processes equivalent algebraically? When I test the model I use data generated using the non-demeaned version of the state equation, with the appropriate $x_0$, and use the toolbox to estimate it (which also does not demean the state equation), and I get good estimates. This is to be expected as $\mu$ in this case is zero by design. However, when I work with empirical data I can't be sure that the empirical state process will be mean zero. Q.2) Therefore, should I always de-mean the state process when working with empirical data?
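A sketch of the algebra behind Q1, written for a generic linear state equation since the paper's equations 3 and 5 are not reproduced above (the symbols $\Phi$, $A$, $B$ below are illustrative, not the paper's notation). Suppose the de-meaned state equation is
$$x_{t+1} - \mu = \Phi\,(x_t - \mu) + \varepsilon_{t+1}.$$
Defining $\tilde x_t = x_t - \mu$ gives the zero-mean state equation $\tilde x_{t+1} = \Phi\,\tilde x_t + \varepsilon_{t+1}$. Conversely, a state equation with an intercept, $x_{t+1} = c + \Phi\,x_t + \varepsilon_{t+1}$, describes exactly the same process with $c = (I - \Phi)\,\mu$, i.e. $\mu = (I - \Phi)^{-1} c$ whenever $I - \Phi$ is invertible. The constant reappears in the measurement equation: if $y_t = A + B\,x_t + e_t$, then in the de-meaned parametrization $y_t = (A + B\mu) + B\,\tilde x_t + e_t$. So the two representations are equivalent reparametrizations, which is why estimation works when the simulated $\mu$ is zero. For Q2, with empirical data the safe options are either to estimate the constant ($c$ or $\mu$) as a parameter or to de-mean explicitly; assuming a zero-mean state without doing either is only justified if the state really has mean zero.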
CommonCrawl
The problem/solution of counting the number of (primitive) necklaces is very well known. But what about results giving sufficient conditions for a given necklace to be primitive? For example, in the binary case, a necklace of length $N$ (00..00100..01) will be primitive whenever the number $M$ of zeros between the two 1's is such that $\gcd(N,M)=1$. Any idea/references for additional results of this type? This count is easily computed with a short Maple routine. For two colors ($N=2$) it gives the sequence $$2, 1, 2, 3, 6, 9, 18, 30, 56, 99, 186, 335, 630, 1161, 2182,\ldots$$ which is OEIS A001037. For three colors ($N=3$) we get the sequence $$3, 3, 8, 18, 48, 116, 312, 810, 2184, 5880, 16104, 44220, 122640, 341484, 956576,\ldots$$ which is OEIS A027376. Finally for four colors ($N=4$) we get the sequence $$4, 6, 20, 60, 204, 670, 2340, 8160, 29120, 104754, 381300, 1397740, 5162220,\ldots$$ which is OEIS A027377. The above expression can be simplified considerably. Here is another answer that presents an additional twist to the problem of counting primitive necklaces, namely Power Group Enumeration (as presented by Harary and Palmer and Fripertinger, in a different publication), with the group acting on the slots where the $N$ colors are placed being the cyclic group $C_n$ on $n$ elements and the group acting on the colors being the symmetric group on $N$ elements $S_N$. This treats the problem of counting primitive necklaces where colors may be swapped without the resulting necklaces being considered different. Now the Burnside computation is best done with a CAS such as Maple. This gives for two colors ($N=2$) the sequence $$1, 1, 1, 2, 3, 5, 9, 16, 28, 51, 93, 170,\ldots$$ which is OEIS A000048. For three colors ($N=3$) we get the sequence $$1, 1, 2, 4, 8, 22, 52, 140, 366, 992, 2684, 7404,\ldots$$ which is OEIS A002075. Finally for four colors ($N=4$) we get the sequence $$1, 1, 2, 5, 10, 35, 102, 360, 1232, 4427, 15934, 58465,\ldots$$ which is OEIS A056300. Additional linkage: for $N=5$ we find the sequence $$1, 1, 2, 5, 11, 38, 122, 496, 2005, 8707, 38364, 173562,\ldots$$ which is OEIS A056301. For $N=6$ we find the sequence $$1, 1, 2, 5, 11, 39, 125, 532, 2301, 11010, 54681, 284023,\ldots$$ which is OEIS A056302. For $N=7$ we find the sequence $$1, 1, 2, 5, 11, 39, 126, 536, 2353, 11606, 60498, 336399,\ldots$$ which is not yet in the OEIS.
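As a stand-in for the Maple routines mentioned above, here is a short, self-contained Python sketch of the first count, the number of primitive (aperiodic) necklaces over $k$ colors via Moebius inversion; it reproduces the sequences quoted above (the $N$ in those sequences is the number of colors, written $k$ below).

```python
def mobius(n):
    """Moebius function mu(n), computed by trial-division factorisation."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:          # squared prime factor
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def primitive_necklaces(n, k):
    """Number of primitive (aperiodic) necklaces of length n over k colours:
       (1/n) * sum over d | n of mu(d) * k^(n/d)."""
    return sum(mobius(d) * k ** (n // d) for d in divisors(n)) // n

for k in (2, 3, 4):
    print(k, [primitive_necklaces(n, k) for n in range(1, 13)])
# k = 2 gives 2, 1, 2, 3, 6, 9, 18, 30, ..., matching the first sequence above.
```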
CommonCrawl
I understand that we use random effects (or mixed effects) models when we believe that some model parameter(s) vary randomly across some grouping factor. I have a desire to fit a model where the response has been normalized and centered (not perfectly, but pretty close) across a grouping factor, but an independent variable x has not been adjusted in any way. This led me to the following test (using fabricated data) to ensure that I'd find the effect I was looking for if it was indeed there. I ran one mixed effects model with a random intercept (across groups defined by f) and a second fixed effect model with the factor f as a fixed effect predictor. I used the lmer() function from the R package lme4 for the mixed effects model, and the base function lm() for the fixed effect model. Following is the data and the results. I note that the intercept variance component is estimated to be 0, and importantly to me, x is not a significant predictor of y. Now I notice that, as expected, x is a significant predictor of y. What I am looking for is intuition regarding this difference. In what way is my thinking wrong here? Why do I incorrectly expect to find a significant parameter for x in both of these models but only actually see it in the fixed effect model?

There are several things going on here. These are interesting issues, but it will take a fair amount of time/space to explain it all. So basically what this ends up meaning for us is that only the within-cluster variability in $x$ is used to estimate the effect of $x$. The between-cluster variability in $x$ (which, as we can see above, is substantial) is "controlled out" of the analysis. So the slope that we get from lm() is the average of the 4 within-cluster regression lines, all of which are relatively steep in this case.

What the mixed model does is slightly more complicated. The mixed model attempts to use both within-cluster and between-cluster variability on $x$ to estimate the effect of $x$. Incidentally this is really one of the selling points of the model, as its ability/willingness to incorporate this additional information means it can often yield more efficient estimates. But unfortunately, things can get tricky when the between-cluster effect of $x$ and the average within-cluster effect of $x$ do not really agree, as is the case here. Note: this situation is what the "Hausman test" for panel data attempts to diagnose!

Specifically, what the mixed model will attempt to do here is to estimate some sort of compromise between the average within-cluster slope of $x$ and the simple regression line that ignores the clusters (the dashed bold line). The exact point within this compromising range that the mixed model settles on depends on the ratio of the random intercept variance to the total variance (also known as the intra-class correlation). As this ratio approaches 0, the mixed model estimate approaches the estimate of the simple regression line. As the ratio approaches 1, the mixed model estimate approaches the average within-cluster slope estimate.

As you can see, the coefficients here are identical to what we obtained in the mixed model. This is exactly what we expected to find, since as you already noted, we have an estimate of 0 variance for the random intercepts, making the previously mentioned ratio/intra-class correlation 0. So the mixed model estimates in this case are just the simple linear regression estimates, and as we can see in the plot, the slope here is far less pronounced than the within-cluster slopes.
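For reference, the ratio referred to above can be written (in the usual random-intercept notation, which is mine, not from the thread) as

$$\rho_{\text{ICC}} \;=\; \frac{\sigma^2_{\text{random intercept}}}{\sigma^2_{\text{random intercept}} + \sigma^2_{\text{residual}}},$$

so an estimated random-intercept variance of 0 forces this ratio to 0.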
Why is the variance of the random intercepts estimated to be 0? The answer to this question has the potential to become a little technical and difficult, but I'll try to keep it as simple and nontechnical as I can (for both our sakes!). But it will maybe still be a little long-winded.

I mentioned earlier the notion of intra-class correlation. This is another way of thinking about the dependence in $y$ (or, more correctly, the errors of the model) induced by the clustering structure. The intra-class correlation tells us how similar on average are two errors drawn from the same cluster, relative to the average similarity of two errors drawn from anywhere in the dataset (i.e., may or may not be in the same cluster). A positive intra-class correlation tells us that errors from the same cluster tend to be relatively more similar to each other; if I draw one error from a cluster and it has a high value, then I can expect above chance that the next error I draw from the same cluster will also have a high value. Although somewhat less common, intra-class correlations can also be negative; two errors drawn from the same cluster are less similar (i.e., further apart in value) than would typically be expected across the dataset as a whole. All of this intra-class correlation business is just a useful alternative way of describing the dependence in the data.

The mixed model we are considering is not using the intra-class correlation method of representing the dependence in the data. Instead it describes the dependence in terms of variance components. This is all fine as long as the intra-class correlation is positive. In those cases, the intra-class correlation can be easily written in terms of variance components, specifically as the previously mentioned ratio of the random intercept variance to the total variance. (See the wiki page on intra-class correlation for more info on this.) But unfortunately variance-components models have a difficult time dealing with situations where we have a negative intra-class correlation. After all, writing the intra-class correlation in terms of the variance components involves writing it as a proportion of variance, and proportions cannot be negative.

Judging from the plot, it looks like the intra-class correlation in these data would be slightly negative. (What I am looking at in drawing this conclusion is the fact that there is a lot of variance in $y$ within each cluster, but relatively little variance in the cluster means on $y$, so two errors drawn from the same cluster will tend to have a difference that nearly spans the range of $y$, whereas errors drawn from different clusters will tend to have a more moderate difference.) So your mixed model is doing what, in practice, mixed models often do in this case: it gives estimates that are as consistent with a negative intra-class correlation as it can muster, but it stops at the lower bound of 0 (this constraint is usually programmed into the model fitting algorithm). So we end up with an estimated random intercept variance of 0, which is still not a very good estimate, but it's as close as we can get with this variance-components type of model.

One option is to just go with the fixed-effects model. This would be reasonable here because these data have two separate features that are tricky for mixed models (random group effects correlated with $x$, and negative intra-class correlation).
Another option is to use a mixed model, but set it up in such a way that we separately estimate the between- and within-cluster slopes of $x$ rather than awkwardly attempting to pool them together. At the bottom of this answer I reference two papers that talk about this strategy; I follow the approach advocated in the first paper by Bell & Jones (the specification is written out in the formula block below).

A few things to notice here. First, the coefficient for $x_w$ is exactly the same as what we got in the fixed-effect model. So far so good. Second, the coefficient for $x_b$ is the slope of the regression we would get from regressing $y$ on just a vector of the cluster means of $x$. As such it is not quite equivalent to the bold dashed line in our first plot, which used the total variance in $x$, but it is close. Third, although the coefficient for $x_b$ is smaller than the coefficient from the simple regression model, the standard error is also substantially smaller and hence the $t$-statistic is larger. This also is unsurprising because the residual variance is far smaller in this mixed model due to the random group effects eating up a lot of the variance that the simple regression model had to deal with. Finally, we still have an estimate of 0 for the variance of the random intercepts, for the reasons I elaborated in the previous section. I'm not really sure what all we can do about that one, at least without switching to some software other than lmer(), and I'm also not sure to what extent this is still going to be adversely affecting our estimates in this final mixed model. Maybe another user can chime in with some thoughts about this issue.

After considerable contemplation, I believe I have discovered my own answer. I believe an econometrician would call my independent variable endogenous: it is correlated with variables that affect the dependent variable but are omitted or unobserved in the model. However, I do observe the groupings between which the omitted variable ought to vary. I believe the econometrician would suggest a fixed effect model. That is, a model that includes a dummy for every grouping level (or an equivalent specification that conditions the model such that many grouping dummies are not required) in this case. With a fixed effect model, the hope is that all unobserved and time-invariant variables can be controlled by conditioning out across-group (or across-individual) variation. Indeed, the second model in my question is precisely a fixed effect model, and as such gives the estimate I expect. I welcome comments that will further illuminate this circumstance.
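For reference, the within/between ("Bell & Jones") specification described in the answer above, written out explicitly (the notation is mine, with $i$ indexing observations inside cluster $j$ and $u_j$ the random intercept):

$$x_{ij} = \underbrace{\bar x_{\cdot j}}_{x_b} + \underbrace{(x_{ij} - \bar x_{\cdot j})}_{x_w}, \qquad y_{ij} = \beta_0 + \beta_w\, x_{w,ij} + \beta_b\, x_{b,j} + u_j + \varepsilon_{ij}.$$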
CommonCrawl
We consider the behaviour of fermions in the background of instanton-anti-instanton type configurations. Several different physics problems, from high energy electroweak interactions to the study of the vacuum structure of QCD and of large orders of perturbation theory, are related to this problem. The spectrum of the Dirac operator in such a background is studied in detail. We present an approximation for the fermion correlation function when the instanton-anti-instanton separation ($R$) is large compared to their sizes ($\rho$). The situation when the instanton and anti-instanton overlap and melt is studied through the behaviour of the Chern-Simons number as a function of $R/\rho$ and $x_4$. Applying our results to widely discussed cases of fermion-number violation in the electroweak theory, we conclude that there is no theoretical basis for expecting anomalous cross sections to become observable at energies in the $10$ TeV region.
CommonCrawl
Abstract: Methods are suggested for the parametric identification of fractional differential operators of degree $\alpha \in (1, 2)$ from instantaneous values of experimental observations, using the Barrett differential equation as an example. The methods are based on the construction of a linear parametric discrete model for the fractional differential equation. The coefficients of the model are associated with the required parameters of the fractional-order differential equation. Different approaches to determining the relationships between the parameters of the differential equation and the discrete model coefficients are considered. Connection expressions between the coefficients of the linear parametric discrete model and the Cauchy-type problem parameters to be identified are obtained. The algorithm of the method, which lets us reduce the problem to the computation of mean-square estimates for the coefficients of the linear parametric discrete model, is described. Numerical investigations have been carried out; their results demonstrate the high efficiency of the methods.
CommonCrawl
Is there a way to convert an array of values $(x_i,y_i,z_i), i=1,\ldots,n$ to a density plot instead of a ListPointPlot3D? I don't know if this is a duplicate, but somehow I cannot seem to just use ListDensityPlot3D.

Hard to know exactly what you want without the explicit data, but maybe you'd like to bin the data?
CommonCrawl
I will be implementing a nodal discontinuous Galerkin method soon, and having done this before I know the basic indexing arrays I will need to compute, given a mesh and polynomial data. The problems I ran into in previous code were subtle mistakes I made in computing things like interior/exterior trace indexing. Problems which didn't arise on simpler test cases would arise on larger meshes, and usually this yields an unstable scheme since boundary conditions are not properly imposed (so there is no chance of just watching the simulation every 10 steps or so and seeing a localized problem). I'm hoping some more experienced folks here know good tests to run on the index arrays to get some confidence that they are right. Quadratures, derivatives and the like are very easy to test, but other things I can't figure out. One test I have done in the past is adding interior normals to exterior normals, which should yield $0$ or $\pm 2$. Being able to quickly see the result of some code change is helpful, but I can't think of a meaningful way to do this with indexing. I should also mention that these are going in for quads and hexes, with the potential for curvilinear elements. There isn't much existing code or working library out there to compare against. Bonus points if there are good unit tests that I can write which wouldn't rely on an existing correct answer to compare against. I'll settle for a lot of good heuristics.

The obvious answer would of course be to not implement it all yourself but to use what others have done before. For example, deal.II (http://www.dealii.org -- disclaimer: I am one of the authors of this library) already has DG elements on quads, hexes, and curved elements, tutorial programs that show how to use them, etc. All this has been tested for a decade already and is tested via 2,700 regression tests after every change to the repository. There is much work to be saved for you.

But to come back to the original question: What I often do for these sorts of things when writing test cases for deal.II is to do something for which I know the exact solution and verify that I indeed get it. Example: If you have an interpolating (nodal) element, then on one cell $K$, assign to every degree of freedom the value of a function $f(x)$ at the nodes $x_i$. Do this by hand, using your knowledge of how degrees of freedom are ordered. This yields a function $u_h(x)=\sum_i U_i \varphi_i(x)$ where $U_i=f(x_i)$. Then use a quadrature formula to compute something like $\|u_h-f\|^2 = \int_K |u_h(x)-f(x)|^2 \; dx$. If $f$ is a polynomial of the same order as your ansatz space, then the result should be zero. If you mixed up indices, either when assigning values to degrees of freedom by hand or in your program when evaluating $u_h$ at quadrature points, you will get something that is nonzero. The point of all this is that in your test, you use two independent methods to deal with indexing (your knowledge, when assigning values to DoFs by hand, and what your code thinks is correct when evaluating $u_h$) and they need to agree. If they don't, at least one of them is incorrect. Of course, once you have this running, you need to save the testcase and run it periodically to ensure that you are not accidentally breaking this kind of functionality again.
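To make the recipe above concrete, here is a minimal one-dimensional sketch of such a test (my own code, not taken from deal.II): a Lagrange nodal basis of degree $p$ on the reference element $[-1,1]$, degrees of freedom assigned "by hand" as nodal values of a degree-$p$ polynomial $f$, and an independent evaluation of $u_h$ at Gauss quadrature points. The quadrature of $|u_h - f|^2$ should vanish to machine precision; if the indexing of nodes and basis functions is inconsistent, it won't.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

p = 3                                   # polynomial degree of the ansatz space
nodes = np.linspace(-1.0, 1.0, p + 1)   # nodal points defining the Lagrange basis

def lagrange_basis(x, i):
    """Evaluate the i-th Lagrange basis polynomial (w.r.t. `nodes`) at points x."""
    out = np.ones_like(x)
    for j in range(p + 1):
        if j != i:
            out *= (x - nodes[j]) / (nodes[i] - nodes[j])
    return out

f = lambda x: 2.0 * x**3 - x + 0.5      # a test function inside the ansatz space

# "By hand": the degrees of freedom of the interpolant are just the nodal values of f.
U = f(nodes)

# Independently evaluate u_h at quadrature points through the basis functions.
xq, wq = leggauss(2 * p)                # Gauss-Legendre rule on [-1, 1]
uh = sum(U[i] * lagrange_basis(xq, i) for i in range(p + 1))

err2 = np.dot(wq, (uh - f(xq))**2)      # quadrature of |u_h - f|^2 over [-1, 1]
assert err2 < 1e-12, "nodal indexing / basis evaluation is inconsistent"
print("||u_h - f||^2 =", err2)
```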
CommonCrawl
Started work on a page on Bilinear regression.

Interesting. What's the relationship between bilinear regression and the "error in variables" problem?

Hi Jan, I can't access that link, but I found this on Wikipedia (http://en.wikipedia.org/wiki/Errors-in-variables_models). Assuming it's talking about the same thing, I think they're talking about two different problems. One of the motivations in the papers I've found for bilinear regression is providing a way to enforce sparsity in the model: if each sample is an $A \times B$ matrix, then simply flattening it to a vector and doing linear regression gives a model with $AB$ coefficients, whereas bilinear regression with $m$ pairs of $(u_i,v_i)$ has $m (A+B)$ coefficients. This reduction has been obtained by some loss of flexibility, but the reports I've read seem to indicate that overall it does well at preventing over-fitting. If there is a connection I'd be very interested to know more about it. I'm going to have a go at using bilinear regression on the El Nino dataset, so the entry is partly just a place to store my calculations of the derivatives -- which are done for the way more complicated logistic regression model in the papers I've found.

If you send your email address to me at empirical_bayesian -at- ieee -dot- org, happy to send you a copy. Mystery to me why you can't access. I could put it in Google. This page is unable to be displayed because the website owner has disabled directory indexing for this site and has not provided an index.html file. Please contact the website owner to find the correct URL. If this is your site you may wish to add an index.html page with more useful information.

I've been a bit lazy in putting in references, but one of the papers I've been looking at is Sparse Bilinear Logistic Regression (ftp://ftp.math.ucla.edu/pub/camreport/cam14-12.pdf). One thing I've found is that there seem to be several things that go under the name of bilinear regression, but it's the kind of formulation in that paper that I'm looking at. I've sent you an email to give you my email address.

We couldn't find a page for the link you visited.
Please check that you have the correct link and try again. If you are the owner of this domain, you can setup a page here by creating a page/website in your account.

Suddenly figured why trying to get Jan's document didn't work: the forum has merged two dashes into an n-dash in the output, so pasting doesn't work. If you select "Source" and paste the link from there, it works and I can see the paper.

This seems like a relevant reference in Jan's paper: N. Cahill, A. C. Parnell, A. C. Kemp, B. P. Horton, "Modeling sea-level change using errors-in-variables integrated Gaussian processes", 24th December 2013, http://arxiv.org/pdf/1312.6761.pdf.

Just to note I've been trying to figure out a way to implement the bilinear model in a not-too-wasteful way in an array language like Matlab or NumPy (in order to avoid writing low-level code myself), but it's not proving obvious to me how to do this.

Just for the record, here's some Matlab/Octave code for evaluating a set of bilinear regression coefficients. I'm not putting it in a more permanent place because it's so ugly and inefficient, and is unbelievably slow even for smaller test examples when put into Octave's fminunc optimization function. I think I do need to think about writing some lower-level code that will perform even remotely reasonably on the El Nino data.
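The Octave code mentioned in the last comment is not shown above. For illustration only, here is a NumPy sketch (my code, not the poster's) of evaluating the bilinear model as described earlier in the thread: each sample is an $A \times B$ matrix $X$, and the prediction is a sum of $m$ rank-one terms, $\hat y = \sum_i u_i^{T} X v_i$, giving $m(A+B)$ coefficients.

```python
import numpy as np

def bilinear_predict(X, U, V):
    """X: (A, B) sample; U: (m, A) row factors; V: (m, B) column factors."""
    return sum(U[i] @ X @ V[i] for i in range(U.shape[0]))

rng = np.random.default_rng(0)
A, B, m = 5, 7, 2
U, V = rng.normal(size=(m, A)), rng.normal(size=(m, B))
X = rng.normal(size=(A, B))
print(bilinear_predict(X, U, V))   # a single scalar prediction for this sample
```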
CommonCrawl
A set for which it is hard to determine whether or not it is countable. Why pi-systems and Dynkin/lambda systems? On the relative merits of approaches in measure theory. Why is the harmonic oscillator so important? (pure viewpoint sought). How to motivate its role in Getzler's work on Atiyah-Singer? Closed geodesics in free smooth loop space? Is the Lie algebra-valued curvature two-form on a principal bundle P the curvature of a vector bundle over P? Is it possible to partition $\mathbb R^3$ into unit circles? Why should one still teach Riemann integration? Which book would you like to see "texified"? What is the definition of "canonical"? Expressing any f(x,y) using only addition and unary functions?
CommonCrawl
My takeaways from the 8th lecture of the Stanford machine learning course.

The previous lecture concluded with deriving a dual problem (maximize $W(\alpha)$ w.r.t. $\alpha$ subject to 2 constraints) for the optimal margin classifier, and noting that prediction with the SVM depends only upon the inner products of the input variable with those in the training set. This lecture introduces the kernel function, an inner product of feature mappings, i.e. $K(x,z) = \phi(x)^{T}\phi(z)$, where $\phi$ is called the feature mapping. Interestingly, $K(x,z)$ can be very inexpensive to compute even when $\phi(x)$ is very expensive to compute (the lecture presents some examples, including the Gaussian kernel). This is the kernel trick: if you have any algorithm that you can write only in terms of inner products between input attribute vectors, then by replacing those inner products with $K(x,z)$ you can "magically" allow your algorithm to work efficiently in the high-dimensional feature space. And this is the trick we apply to the above optimal margin classifier, and it becomes the SVM.

Also, the lecture proves one part of Mercer's theorem, which says that $K$ is a valid kernel if and only if the corresponding kernel matrix is positive semidefinite. Using this theorem it can easily be determined whether a function is a valid kernel.

So far we assumed that our training data was linearly separable. To relax this constraint, i.e. to make the algorithm work for non-linearly separable datasets as well, we can reformulate the dual problem for the optimal margin classifier (the so-called $\ell_1$ regularization).
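A small numerical illustration of the Mercer condition mentioned above (my own example, not from the lecture): build the kernel matrix $K_{ij} = K(x_i, x_j)$ for a Gaussian (RBF) kernel and check that it is positive semidefinite, i.e. that all its eigenvalues are non-negative up to numerical noise.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))              # 20 points in R^3
K = rbf_kernel(X)
eigvals = np.linalg.eigvalsh(K)
print("min eigenvalue:", eigvals.min())   # >= 0 up to rounding, as Mercer requires
```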
CommonCrawl
Seems some L trominoes have developed a punk attitude and feel they have something to prove because people always play with dominoes instead. They are even jealous of I trominoes, who can try to pass for tall dominoes. One L of a tromino has devised a caper that might at last gain some notoriety, or at least draw some attention, as long as enough Ls gang up to pull it off. No problem recruiting, with this persuasive floor plan of the target. Just what are these Ls out to prove by this?

Any rectangular grid with side-lengths congruent to each other and not to zero modulo 3, with a single cell removed in the middle, can be covered by L-trominoes. By induction: starting with the $(3M\pm1)\times(3N\pm1)$ rectangle shown on the left, reducing it to the cases of four smaller rectangles, and continuing to apply the same reduction multiple times until they end up with a $2\times(3z-1)$ rectangle, which can be covered as shown on the right.

Unknown. The proof doesn't guarantee that they'll always end up with a collection of $2\times(3z-1)$ rectangles without involving any $1\times(3z+1)$ rectangles (which clearly aren't coverable by trominoes). In fact, it appears that the general statement being considered is an open problem. We can't always split the board into four equally sized rectangles each time.

Every rectangle of unit squares with a corner square removed can be tiled by L trominoes unless there is an obvious reason to fail. "The number of squares to cover is not a multiple of 3" and "one side length is 1" are obvious reasons to fail.
CommonCrawl
Abstract: Let $k\ge 2$ be an integer and $T_1,\ldots, T_k$ be spanning trees of a graph $G$. If for any pair of vertices $(u,v)$ of $V(G)$, the paths from $u$ to $v$ in each $T_i$, $1\le i\le k$, do not contain common edges and common vertices, except the vertices $u$ and $v$, then $T_1,\ldots, T_k$ are completely independent spanning trees in $G$. For $2k$-regular graphs which are $2k$-connected, such as the Cartesian product of a complete graph of order $2k-1$ and a cycle and some Cartesian products of three cycles (for $k=3$), the maximum number of completely independent spanning trees contained in these graphs is determined and it turns out that this maximum is not always $k$. Keywords: completely independent spanning tree, spanning tree, Cartesian product.
CommonCrawl
I will consider a variant of the Navier-Stokes equations, where the classical Laplacian is substituted by a fractional Laplacian $-(-\Delta)^\alpha$. I will present two results. In the hypodissipative case, i.e. when $\alpha$ is sufficiently small, in a joint work with Maria Colombo and Luigi De Rosa we show that Leray solutions are ill-posed. In the hyperdissipative case, i.e. when $\alpha>1$, in a joint work with Maria Colombo and Annalisa Massaccesi we prove a "strong analog" of the Caffarelli-Kohn-Nirenberg Theorem, which strengthens the conclusions of a previous work by Katz and Pavlovic.
CommonCrawl
where $I^\bullet $ is bounded below and consists of injective objects, and $\alpha $ is a quasi-isomorphism. Any two morphisms $\beta _1, \beta _2$ making the diagram commute up to homotopy are homotopic. Proof. This follows from Remark 13.18.5. We also give a direct argument here.
CommonCrawl
Current practice for analysing functional neuroimaging data is to average the brain signals recorded at multiple sensors or channels on the scalp over time across hundreds of trials or replicates to eliminate noise and enhance the underlying signal of interest. These studies, recording brain signals non-invasively using functional neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG), generate complex, high-dimensional and noisy data for many subjects at a number of replicates. Single-replicate (or single-trial) analysis of neuroimaging data has gained attention because it makes it possible to study the features of the signals at each replicate without the averaging that current methods employ, which can remove important features of the data. The research here is conducted to systematically develop flexible regression mixed models for single trial analysis of specific brain activities, using examples from EEG and MEG to illustrate the models. This thesis follows three specific themes: i) artefact correction to estimate the `brain' signal which is of interest, ii) characterisation of the signals to reduce their dimensions, and iii) model fitting for single trials after accounting for variations between subjects and within subjects (between replicates). The models are developed to establish evidence of two specific neurological phenomena: entrainment of brain signals to an $\alpha$ band of frequencies (8-12 Hz) and dipolar brain activation in the same $\alpha$ frequency band, in an EEG experiment and a MEG study, respectively. Keywords: functional brain imaging, EEG, MEG, kernel smoothing, p-splines, regression models, mixed models, repeated measures, functional data.
CommonCrawl
I'm mostly interested in high-dimensional convex geometry and its interplay with quantum information theory. Is the rectangular function a convolution of $L^1$ functions? What's a nice argument that shows the volume of the unit ball in $\mathbb R^n$ approaches 0?
CommonCrawl
What is an appropriate strategy for splitting the dataset? X_train is randomly split into a training and a test set 10 times (n_iter=10). Each point on the training-score curve is the average of 10 scores where the model was trained and evaluated on the first i training examples. Each point on the cross-validation score curve is the average of 10 scores where the model was trained on the first i training examples and evaluated on all examples of the test set. plot_learning_curve() can be found in the current dev version of scikit-learn (0.15-git). EDIT: I now agree with cbeleites that step 7a doesn't make much sense in this sequence. So I wouldn't adopt that.

I'm not sure what you want to do in step 7a. As I understand it right now, it doesn't make sense to me. Here's how I understand your description: in step 7, you want to compare the hold-out performance with the results of a cross validation embracing steps 4 - 6 (so yes, that would be a nested setup). Two kinds of problems affect hold-out and cross validation alike:

Data leaks (dependence) between training and test data which are caused by a hierarchical (aka clustered) data structure, and which are not accounted for in the splitting. In my field, we typically have multiple (sometimes thousands) of readings (= rows in the data matrix) of the same patient or biological replicate of an experiment. These are not independent, so the validation splitting needs to be done at patient level. However, if such a data leak occurs, you'll have it both in the splitting for the hold-out set and in the cross validation splitting. Hold-out would then be just as optimistically biased as cross validation.

Preprocessing of the data done on the whole data matrix, where the calculations are not independent for each row but many/all rows are used to calculate parameters for the preprocessing. Typical examples would be e.g. a PCA projection before the "actual" classification. Again, that would affect both your hold-out and the outer cross validation, so you cannot detect it.

For the data I work with, both errors can easily cause the fraction of misclassifications to be underestimated by an order of magnitude! If you are restricted to this counted-fraction-of-test-cases type of performance, model comparisons need either extremely large numbers of test cases or ridiculously large differences in true performance. Comparing 2 classifiers with unlimited training data may be a good start for further reading.

However, comparing the model quality that the inner cross validation claims for the "optimal" model with the outer cross validation or hold-out validation does make sense: if the discrepancy is high, it is questionable whether your grid search optimization did work (you may have skimmed variance due to the high variance of the performance measure). This comparison is easier in that you can spot trouble if you have the inner estimate being ridiculously good compared to the other - if it isn't, you don't need to worry that much about your optimization. But in any case, if your outer (7) measurement of the performance is honest and sound, you at least have a useful estimate of the obtained model, whether it is optimal or not.

update: What is "wrong" with the scikit-learn example? First of all, nothing is wrong with nested cross validation here. Nested validation is of utmost importance for data-driven optimization, and cross validation is a very powerful approach (particularly if iterated/repeated).
Then, whether anything is wrong at all depends on your point of view: as long as you do an honest nested validation (keeping the outer test data strictly independent), the outer validation is a proper measure of the "optimal" model's performance. Nothing wrong with that. But several things can and do go wrong with grid search of these proportion-type performance measures for hyperparameter tuning of SVM. Basically they mean that you may (probably) not be able to rely on the optimization. Nevertheless, as long as your outer split was done properly, even if the model is not the best possible, you have an honest estimate of the performance of the model you got.

You need ridiculously huge numbers of cases (at least compared to the numbers of cases I can usually have) in order to achieve the needed precision (bias/variance sense) for estimating recall, precision (machine learning performance sense). This of course applies also to ratios you calculate from such proportions. Have a look at the confidence intervals for binomial proportions. They are shockingly large! Often larger than the true improvement in performance over the hyperparameter grid. And statistically speaking, grid search is a massive multiple comparison problem: the more points of the grid you evaluate, the higher the risk of finding some combination of hyperparameters that accidentally looks very good for the train/test split you are evaluating. This is what I mean by skimming variance. The well known optimistic bias of the inner (optimization) validation is just a symptom of this variance skimming.

Intuitively, consider a hypothetical change of a hyperparameter that slowly causes the model to deteriorate: one test case moves towards the decision boundary. The 'hard' proportion performance measures do not detect this until the case crosses the border and is on the wrong side. Then, however, they immediately assign a full error for an infinitely small change in the hyperparameter. In order to do numerical optimization, you need the performance measure to be well behaved. That means: neither the jumpy (not continuously differentiable) behaviour of the proportion-type performance measure nor the fact that, other than at that jump, actually occurring changes are not detected makes it suitable for the optimization. Proper scoring rules are defined in a way that is particularly suitable for optimization. They have their global maximum when the predicted probabilities match the true probabilities for each case to belong to the class in question. For SVMs you have the additional problem that not only the performance measures but also the model reacts in this jumpy fashion: small changes of the hyperparameter will not change anything. The model changes only when the hyperparameters are changed enough to cause some case to either stop being a support vector or to become one. Again, such models are hard to optimize.

Brown, L.; Cai, T. & DasGupta, A.: Interval Estimation for a Binomial Proportion, Statistical Science, 16, 101-133 (2001).

Cawley, G. C. & Talbot, N. L. C.: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 11, 2079-2107 (2010).

Brereton, R.: Chemometrics for pattern recognition, Wiley (2009); points out the jumpy behaviour of the SVM as a function of the hyperparameters.

As an illustration of the skimming effect, consider a small simulation: scikit-learn says there are 1797 samples in the digits data; assume that 100 models are compared, e.g. a $10 \times 10$ grid for 2 parameters;
and that all models have the same true performance of, say, 97 % (typical performance for the digits data set). The red line marks the true performance of all our hypothetical models. On average, we observe only 2/3 of the true error rate for the seemingly best of the 100 compared models (for the simulation we know that they all perform equally, with 97% correct predictions). Tuning parameters affecting the model complexity will typically cover parameter sets where the models are unstable and thus have high variance.

For the UCI digits from the example, the original database has ca. 11000 digits written by 44 persons. What if the data is clustered according to the person who wrote them? (I.e., is it easier to recognize an 8 written by someone if you know how that person writes, say, a 3?) The effective sample size then may be as low as 44. Tuning model hyperparameters may lead to correlation between the models (in fact, that would be considered well behaved from a numerical optimization perspective). It is difficult to predict the influence of that (and I suspect this is impossible without taking into account the actual type of classifier). In general, however, both a low number of independent test cases and a high number of compared models increase the bias. Also, the Cawley and Talbot paper gives empirically observed behaviour.
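Since the thread keeps coming back to how to set up the outer loop honestly, here is a minimal sketch of a nested evaluation with a recent scikit-learn. The estimator, grid, and fold counts are placeholders of mine, not taken from the question's steps 1-7: the inner cross validation tunes the hyperparameters, and the outer cross validation estimates the performance of the whole tuning procedure.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
param_grid = {"C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]}

inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)

tuned_svc = GridSearchCV(SVC(), param_grid, cv=inner_cv)      # inner loop: model selection
outer_scores = cross_val_score(tuned_svc, X, y, cv=outer_cv)  # outer loop: honest estimate
print(outer_scores.mean(), outer_scores.std())
```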
CommonCrawl
On the occasion of the 1998 ICM in Berlin, B. Dubrovin conjectured an intriguing connection between the enumerative geometry of a Fano manifold $X$ and algebro-geometric properties of exceptional collections in the derived category $\mathcal D^b(X)$. Under the assumption of semisimplicity of the quantum cohomology of $X$, the conjecture prescribes an explicit form for local invariants of $QH^\bullet(X)$, the so-called "monodromy data", in terms of Gram matrices and characteristic classes of objects of exceptional collections. In this talk, a refinement of this conjecture will be presented, and particular attention will be given to the case of complex Grassmannians. At points of small quantum cohomology, these varieties manifest a coalescence phenomenon, whose occurrence and frequency is surprisingly subordinate to the distribution of prime numbers. A priori, the analytical description of these Frobenius structures cannot be obtained from an immediate application of the theory of isomonodromy deformations. The speaker will show how, under minimal conditions, the classical theory of M. Jimbo, T. Miwa, K. Ueno (1981) can be extended to describe isomonodromy deformations at a coalescing irregular singularity. Furthermore, a property of quasi-periodicity of Stokes matrices associated to the points of small Quantum Cohomology of complex Grassmannians will be discussed. Based on joint works with B. Dubrovin and D. Guzzetti.
CommonCrawl
1. First, draw a rectangle in which you imagine your circle or ellipse to be fitting. 2. Next, mark the mid-points of all four sides. 3. Keeping in mind that the sides are tangents and the mid-points are where the tangents meet the curve, start to draw the curve near the four midpoints. 4. You'll see a rough ellipse emerging.

To draw a curve, we use the function $$ f(x) = x^2 $$ But how do we draw a circle using a function and a parameter $r$, where $r$ is the radius? If I merely have an area of some sort where I want to draw the circle, say $200 \times 200$, then can I merely loop through this like for (i,j in 200 x 200): j = sqrt(r^2 - i^2), or so?

pdecirc(xc,yc,R) draws a circle with the center at (xc,yc) and the radius R. The pdecirc command opens the PDE Modeler app with the specified circle already drawn in it.

To draw a perfect circle you will need a drawing compass. To draw a circle you will need a pencil and paper. Starting at the top centre of the paper, place the point of the pencil...

How to draw a circle? To do this we'll use the drawCircle() method and we'll create a new Paint object to set a different color for the circle. For now we've centered it (the center point of our circle is half of the bar width and half of the height) and we set the radius to fit the height of the bar.

Then I draw the new Ellipse in the mouse location as a black circle. Now each time you move your mouse the black circle is erased from the canvas, created again and then drawn in the new location. Here's a snapshot of the app -- when you run it and move the mouse the black circle will be redrawn wherever you move your mouse, as if you are dragging it around.
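One common answer to the "function and a parameter $r$" question above is to parameterise the circle by an angle instead of solving $y = \sqrt{r^2 - x^2}$ column by column. Here is a small Python sketch of that idea (my example; the centre, radius, and point count are arbitrary choices):

```python
import math

def circle_points(cx, cy, r, n=360):
    """Points on a circle of radius r centred at (cx, cy), via x = cx + r*cos(t), y = cy + r*sin(t)."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

pts = circle_points(100, 100, 40)   # e.g. centred in a 200 x 200 drawing area
print(pts[:3])                      # hand these points to whatever plotting/drawing API you use
```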
CommonCrawl
I am reading here, page 5, that d' (d-prime) does not vary with criterion (in contrast to hit rate, for instance, which does vary with criterion, and which can be a biased measure of a subject's perception). However, if we think of a subject that always answers "yes" when asked if a stimulus was present, even if not, then clearly the subject has a very liberal criterion (a negative criterion in particular), and it will follow that her hit rate = 1 but also her false alarm rate = 1, leading to a sensitivity of zero. This also makes me wonder how sensitivity can be inferred independently from criterion. Let's assume that the subject has a d' = 1, i.e. their internal representations of signal and noise are distinct w.r.t. the means of the distributions. However, if the subject does choose to nevertheless use the very liberal criterion and always answer "signal present", we will only be able to infer that their sensitivity equals zero. In summary, criterion seems to clearly impact sensitivity. Hence, why are criterion and sensitivity often discussed as being independent?

Since the c's cancel out we don't have to worry about the pesky $\infty - \infty$ business. "However, if the subject does choose to nevertheless use the very liberal criterion and always answer "signal present", we will only be able to infer that their sensitivity equals zero." To see why this is wrong, consider an estimation problem where you are trying to estimate upper and lower bounds on d' and c from a set of observations. Consider the case where we have N no-signal trials resulting in N false alarms and M signal trials resulting in M hits. The upper bound on c depends on N and M as well as d', and we don't need to consider it here to make the point that the claim "you can only infer that d' equals zero" is wrong. The lower bound on c is the key thing to consider. The lower bound is negative infinity. If c is negative infinity, the probability of observing at least N false alarms and at least M hits, given N no-signal trials and M signal trials, is equal to unity for all values of d'. Statistically speaking, there is a greater than 1-alpha chance of observing at least N false alarms and M hits regardless of d' when c is negative infinity. Therefore we cannot rule out, at any level of statistical confidence, any value of d'.
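For reference, the standard equal-variance Gaussian definitions that the "c's cancel out" step relies on are

$$d' = \Phi^{-1}(\mathrm{HR}) - \Phi^{-1}(\mathrm{FAR}), \qquad c = -\tfrac{1}{2}\left[\Phi^{-1}(\mathrm{HR}) + \Phi^{-1}(\mathrm{FAR})\right],$$

where $\Phi^{-1}$ is the inverse normal CDF; at $\mathrm{HR} = \mathrm{FAR} = 1$ both $z$-scores diverge, which is where the $\infty - \infty$ issue comes from.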
CommonCrawl
Let $S$ be a finite or countably infinite set of states. Any stochastic matrix with rows and columns indexed by $S$ is the transition matrix of some Markov chain with state space $S$. The transition behaviors of Markov chains are thus as varied as the matrices. It is helpful to set up terminology to discuss some of these behaviors.

We say that state $i$ leads to state $j$, written $i \rightarrow j$, if there is a path of positive probability that starts at $i$ and ends at $j$. Equivalently, there is some $n > 0$ such that $P_n(i, j) > 0$. We say that $i$ communicates with $j$ if $i \rightarrow j$ and $j \rightarrow i$. In that case we write $i \leftrightarrow j$. If all the states of a chain communicate with each other, the chain is called irreducible. The sticky reflecting random walk of the previous section is irreducible, because it is possible for the chain to get from every state to every other state.

Working in discrete time has disadvantages. One of them is that states can be periodic. Let's start with the example of a random walk where the steps are based on tosses of a fair coin. Suppose the walk starts at state 0. Then it can return to 0 only at even times: the number of heads up to that point has to exactly equal the number of tails, and thus the number of tosses has to be even. We say that the state 0 has period 2.

A state $i$ has period $d$ if, starting at $i$, the chain can come back to $i$ only at times that are multiples of $d$. That is, $d$ is the greatest common divisor of the set of all $n$ such that $P_n(i, i) > 0$. In the random walk described above, all states have period 2. Period causes trouble with statements about long-run behavior. For example, if state $i$ has period 3, then the sequence $P_n(i, i)$ might look like "0, 0, positive, 0, 0, positive, $\ldots$", so limit statements might become complicated.

In this course we will study the long run behavior of chains in which all states are aperiodic, that is, they have period 1. In other words there is no cyclical pattern to when the chain can return to any state. How do you check if all states are aperiodic? If the chain is irreducible, it turns out that all the states must have the same period. The proof of this fact isn't terribly hard but we won't go through it. What it implies is that if a chain is irreducible, which is easy to check, all you have to do is figure out the period of one of its states. Then all the others must have the same period.

Some states are easy to identify as aperiodic. If the one-step transition probability $P(i, i)$ is positive, then the state $i$ has to be aperiodic. Since the chain can stay at $i$ for arbitrary lengths of time, its "returns" are not cyclical.

In the example transition matrix of this section, on the states $a, b, c, d, e$, the $2 \times 2$ block corresponding to $a$ and $b$ is a transition matrix in its own right, albeit of a rather boring chain that goes deterministically back and forth between $a$ and $b$. Both $a$ and $b$ have period 2. States $d$ and $e$ form their own communicating class and are aperiodic. State $c$ communicates with itself, but once it gets to either $b$ or $d$, it can't return.

In this course we will work only with irreducible, aperiodic Markov chains on finite state spaces. Much of what we say will be true for periodic chains as well, and for chains with countably infinite state spaces.
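As a quick sanity check of the period definition, here is a small Python sketch (my example, not part of the course text): it takes the gcd of the return times $n \le n_{\max}$ with $P_n(i,i) > 0$ for the deterministic back-and-forth chain on $\{a, b\}$, and returns 2 as expected.

```python
import numpy as np
from math import gcd
from functools import reduce

P = np.array([[0.0, 1.0],     # the deterministic back-and-forth chain on {a, b}
              [1.0, 0.0]])

def period(P, i, n_max=50):
    """gcd of return times n <= n_max with P^n(i, i) > 0 (0 if no return is seen)."""
    Pn = np.eye(len(P))
    return_times = []
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            return_times.append(n)
    return reduce(gcd, return_times) if return_times else 0

print(period(P, 0))   # -> 2
```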
CommonCrawl
Abstract: Preparing and certifying bound entangled states in the laboratory is an intrinsically hard task, due both to the fact that they typically form narrow regions in the state space, and to the fact that a certificate requires a tomographic reconstruction of the density matrix. Indeed, the previous experiments that have reported the preparation of a bound entangled state relied on such tomographic reconstruction techniques. However, the reliability of these results crucially depends on the extra assumption of an unbiased reconstruction. We propose an alternative method for certifying the bound entangled character of a quantum state that leads to a rigorous claim within a desired statistical significance, while bypassing a full reconstruction of the state. The method comprises a search for bound entangled states that are robust for experimental verification, and a hypothesis test tailored for the detection of bound entanglement that is naturally equipped with a measure of statistical significance. We apply our method to families of states of $3\times 3$ and $4\times 4$ systems, and find that the experimental certification of bound entangled states is well within reach.
CommonCrawl
The state of the art of non-linearity is to use rectified linear units (ReLU) instead of the sigmoid function in deep neural networks. What are the advantages? I know that training a network when ReLU is used would be faster, and that it is more biologically inspired. What are the other advantages? (That is, any disadvantages of using sigmoid?)

Two additional major benefits of ReLUs are sparsity and a reduced likelihood of vanishing gradient. But first, recall that the definition of a ReLU is $h = \max(0, a)$ where $a = Wx + b$. One major benefit is the reduced likelihood of the gradient vanishing. This arises when $a > 0$. In this regime the gradient has a constant value. In contrast, the gradient of sigmoids becomes increasingly small as the absolute value of $x$ increases. The constant gradient of ReLUs results in faster learning. The other benefit of ReLUs is sparsity. Sparsity arises when $a \le 0$. The more such units that exist in a layer, the more sparse the resulting representation. Sigmoids on the other hand are always likely to generate some non-zero value, resulting in dense representations. Sparse representations seem to be more beneficial than dense representations.

Sigmoid: tends to make the gradient vanish (because there is a mechanism that reduces the gradient as $a$ increases, where $a$ is the input of the sigmoid function). Gradient of the sigmoid: $S'(a)= S(a)(1-S(a))$. When $a$ grows infinitely large, $S'(a)= S(a)(1-S(a)) = 1\times(1-1)=0$.

The other answers are right to point out that the bigger the input (in absolute value) the smaller the gradient of the sigmoid function. But, probably an even more important effect is that the derivative of the sigmoid function is ALWAYS smaller than one. In fact it is at most 0.25! The downside of this is that if you have many layers, you will multiply these gradients, and the product of many values smaller than 1 goes to zero very quickly. Since the state of the art for Deep Learning has shown that more layers helps a lot, this disadvantage of the sigmoid function is a game killer. You just can't do Deep Learning with sigmoid. On the other hand, the gradient of the ReLU function is either $0$ for $a < 0$ or $1$ for $a > 0$. That means that you can put in as many layers as you like, because multiplying the gradients will neither vanish nor explode.

An advantage of ReLU other than avoiding the vanishing gradient problem is that it has a much lower run time. $\max(0,a)$ runs much faster than any sigmoid function (for example the logistic function $1/(1+e^{-a})$, which uses an exponential, which is computationally slow when done often). This is true for both feed-forward and back-propagation, as the gradient of ReLU ($0$ if $a<0$, else $1$) is also very easy to compute compared to the sigmoid's (for the logistic curve, $e^a/(1+e^a)^2$). Although ReLU does have the disadvantage of dying cells, which limits the capacity of the network. To overcome this, just use a variant of ReLU such as leaky ReLU, ELU, etc., if you notice the problem described above.

An extra piece of answer, to complete the sparse-vs-dense performance debate. Don't think about NNs anymore, just think about linear algebra and matrix operations, because forward and backward propagations are a series of matrix operations. Now remember that there exist a lot of optimized operators for sparse matrices, and so optimizing those operations in our network could dramatically improve the performance of the algorithm.
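A quick numerical illustration of the saturation argument above (my own example, not from the answers): the sigmoid derivative $S'(a) = S(a)(1-S(a))$ is at most 0.25 and decays for large $|a|$, while the ReLU derivative is exactly 1 for any $a > 0$.

```python
import numpy as np

a = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
S = 1.0 / (1.0 + np.exp(-a))

print("sigmoid grad:", S * (1 - S))              # ~[4.5e-05, 0.105, 0.25, 0.105, 4.5e-05]
print("relu grad:   ", (a > 0).astype(float))    # [0, 0, 0, 1, 1]
```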
CommonCrawl
The discovery of the Higgs boson at 126 GeV seems to imply masses in the multi-TeV regime for the simplest constrained SUSY models. Such a heavy supersymmetric spectrum is somewhat at odds with the naturalness criterion. Moreover, in such a framework it is impossible to obtain the correct value of $(g-2)_\mu$. By construction, SUSY models with light sleptons and light third generation squarks do not present the same problem. In this context we analyze the status of a version of the MSSM with 9 free parameters at the end of LHC Phase 1 runs. We investigate the impact on the parameter space of different direct SUSY searches using a statistical approach. We also include various other constraints from b-physics, the anomalous magnetic moment of the muon, the relic density and direct and indirect detection of dark matter. All experimental results are implemented through the likelihood functions, including the limits from XENON100 and from two 8 TeV CMS searches (inclusive search of SUSY particles with $\alpha_T$ and electroweak production), for which the likelihood is constructed through simulation of the signal yields to be compared with observed events and backgrounds given by the experimental collaborations.
CommonCrawl
We all know that space is about going really fast. We also know that what matters in a collision isn't really absolute speed, but relative velocity. (Two cars with matched speeds on a highway touching each other doesn't necessarily lead to large damage, but if one of them was standing still, it probably would.) A large fraction of the orbiting spacecraft are in prograde orbits, simply because it's easier and, if not actively helpful, at least doesn't hurt; that also reduces the relative velocity between the two. Yet people keep saying that in-orbit collisions happen at such extreme velocities. What is the typical relative impact velocity of a piece of orbital debris to an operational spacecraft in low Earth orbit? What are the vector component values of this velocity? Bonus points for answers that include citations. Also bonus points for answers that include the data from which the "typical" is derived.

Take a look at this answer by Mark Adler. As you can see, a small panel endured many impacts over 15 years. I would expect there have been multitudes of impacts overall. I doubt anyone knows what the average impact velocity has been. I'll attempt to give you some tools to examine different scenarios, though. The Law of Cosines gives the magnitude of the difference of two vectors: $c^2 = a^2 + b^2 - 2ab\cos(\alpha)$, where $\alpha$ is the angle between $a$ and $b$. The Law of Cosines may look hard. But if you remember cos(90º) is zero, you can see the Pythagorean theorem drop out when alpha is 90º. So if you just memorize the $-2ab\cos(\alpha)$ part, the rest is the Pythagorean theorem you learned in high school. And when you do vector subtraction, the third side of the triangle is the delta v between the first two velocity vectors.

Not sure how precise you need the answer, but just thinking about the first cosmic velocity and the escape velocity, it can only be a value between those: so something between ~7.8 km/s and 11.2 km/s. Of course, as you mentioned, the relative velocity matters. The orbits of the debris could be opposed to the orbit of the spacecraft, so the theoretical max relative velocity would be 11.2 km/s + ~7 km/s = ~18 km/s (since you're talking about a LEO and not a HEO or something). Since most launches take place in a prograde orbit, I'd imagine that most of the debris would be in a prograde orbit as well, so most impacts probably take place at a relative velocity set by the perigee speed of a HEO (9-11 km/s depending on the orbit) and the speed of the spacecraft in LEO (~7-7.5 km/s). Worst case is about 19 km/s as mentioned before, though. All speeds below that are possible though, as the inclinations between the spacecraft can vary, resulting in very different relative velocities.

I calculated a simple example. Two objects are in a circular low orbit but in different planes. The impact velocity depends on the angle $\theta$ between the orbit planes (for equal circular speeds $v$ the difference is $2v\sin(\theta/2)$). I use 7.8 km/s for the speed in orbit. For an angle of 5° the vectorial velocity difference is 0.68 km/s, for 10° 1.36 km/s, for 15° 2.04 km/s, for 30° 4.04 km/s, for 45° 5.96 km/s and for 90° 11.04 km/s. Two orbits with an angle difference of 45° to the equatorial plane in opposite directions have an angle difference of 90° between them. Impact velocities of 1 to 11 km/s are possible.
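Here is a small Python sketch of the law-of-cosines estimate described above (my code; the speeds are the illustrative values used in the answers). For two equal circular speeds it reproduces the worked example's numbers up to rounding.

```python
import math

def relative_speed(a, b, alpha_deg):
    """|v_rel| = sqrt(a^2 + b^2 - 2*a*b*cos(alpha)), alpha = angle between the velocity vectors."""
    alpha = math.radians(alpha_deg)
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(alpha))

v = 7.8  # km/s, circular LEO speed used in the worked example above
for angle in (5, 10, 15, 30, 45, 90):
    print(angle, round(relative_speed(v, v, angle), 2))   # 0.68, 1.36, 2.04, 4.04, 5.97, 11.03
```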
CommonCrawl
The work in our lab uses the tools of synthetic medicinal chemistry and chemical biology to develop new tools for studying therapeutically important protein-protein interactions. Our work currently focuses on two classes of protein-protein interactions: the nuclear receptor/steroid receptor coactivator interaction and the Keap1/Nrf2 interaction. Dr. Moore teaches in the medicinal chemistry PhD and first-year PharmD curricula. Speltz TE, Danes JM, Stender JD, Frasor J, Moore T. "A cell-permeable stapled peptide inhibitor of the estrogen receptor/coactivator interaction". ACS Chemical Biology. 2018;. Yao Y, Delgado-Rivera L, Afsari HS, Yin L, Thatcher, Gregory R. J. , Moore T, Miller LW. "Time-Gated Luminescence Detection of Enzymatically Produced Hydrogen Sulfide: Design, Synthesis, and Application of a Lanthanide-Based Probe". INORGANIC CHEMISTRY. 2018;57(2):681-688. doi:10.1021/acs.inorgchem.7b02533. Speltz TE, Mayne CG, Fanning SW, Siddiqui Z, Tajkhorshid E, Greene GL, Moore T. "A "cross-stitched" peptide with improved helicity and proteolytic stability.". Organic & biomolecular chemistry. 2018;16(20):3702-3706. doi:10.1039/c8ob00790j. Popovich NG, Okorie-Awé C, Crawford SY, Balcazar FE, Vellurattil RP, Moore T, Schriever AE. "Assessing Students' Impressions of the Cultural Awareness of Pharmacy Faculty and Students.". American journal of pharmaceutical education. 2018;82(1):6161. doi:10.5688/ajpe6161. Yao Y, Kong C, Yin L, Jain AD, Ratia KM, Thatcher GR, Moore T, Driver TG, Miller LW. "Time-Gated Detection of Cystathionine $\gamma$-Lyase Activity and Inhibition with a Selective, Luminogenic Hydrogen Sulfide Sensor". Chemistry-A European Journal. 2017;23(4):752--756. Speltz TE, Fanning SW, Mayne CG, Fowler C, Tajkhorshid E, Greene GL, Moore T. "Stapled Peptides with $\gamma$-Methylated Hydrocarbon Chains for the Estrogen Receptor/Coactivator Interaction". Angewandte Chemie International Edition. 2016;55(13):4252--4255. Moore T, Gunther JR, Katzenellenbogen JA. "Estrogen Receptor Alpha/Co-activator Interaction Assay: TR-FRET". Protein-Protein Interactions: Methods and Applications. 2015;:545--553. Richardson B, Jain A, Speltz T, Moore T. "Non-electrophilic modulators of the canonical Keap1/Nrf2 pathway". Bioorganic \& medicinal chemistry letters. 2015;25(11):2261--2268. Zhu S, Kisiel W, Lu YJ, Petersen LC, Ndungu JM, Moore T, Parker ET, Sun A, Sarkaria JN, Snyder JP, others . "Visualizing cancer and response to therapy in vivo using Cy5. 5-labeled factor VIIa and anti-tissue factor antibody". Journal of drug targeting. 2015;23(3):257--265. Grimmer C, Moore T, Padwa A, Prussia A, Wells G, Wu S, Sun A, Snyder JP. "Antiviral atropisomers: Conformational energy surfaces by NMR for host-directed myxovirus blockers". Journal of chemical information and modeling. 2014;54(8):2214--2223. Moore T, Zhu S, Randolph R, Shoji M, Snyder JP. "Liver S9 fraction-derived metabolites of curcumin analogue UBS109". ACS medicinal chemistry letters. 2014;5(4):288--292. Zhu S, W Moore T, Morii N, B Howard R, F Arrendale R, Reddy P, J Evers T, Zhang H, Sica G, G Chen Z, others . "Synthetic curcumin analog UBS109 inhibits the growth of head and neck squamous cell carcinoma xenografts". Current cancer drug targets. 2014;14(4):380--393. Zhu S, Kisiel W, Lu YJ, Petersen LC, Ndungu JM, Moore T, Parker ET, Sun A, Liotta DC, El-Rayes BF, others . "Tumor angiogenesis therapy using targeted delivery of Paclitaxel to the vasculature of breast cancer metastases". Journal of drug delivery. 2014;2014. 
Moore T, Sana K, Yan D, Thepchatri P, Ndungu JM, Saindane MT, Lockwood MA, Natchus MG, Liotta DC, Plemper RK, others . "Asymmetric synthesis of host-directed inhibitors of myxoviruses". Beilstein journal of organic chemistry. 2013;9:197. Brown A, Shi Q, Moore T, Yoon Y, Prussia A, Maddox C, Liotta DC, Shim H, Snyder JP. "Monocarbonyl curcumin analogues: heterocyclic pleiotropic kinase inhibitors that mediate anticancer properties". Journal of medicinal chemistry. 2013;56(9):3456--3466. Moore T, Sana K, Yan D, Krumm SA, Thepchatri P, Snyder JP, Marengo, Jose´ , Arrendale RF, Prussia AJ, Natchus MG, others . "Synthesis and Metabolic Studies of Host-Directed Inhibitors for Antiviral Therapy". ACS medicinal chemistry letters. 2013;4(8):762--767. Olivera A, Moore T, Hu F, Brown AP, Sun A, Liotta DC, Snyder JP, Yoon Y, Shim H, Marcus AI, others . "Inhibition of the NF-$\kappa$B signaling pathway by the curcumin analog, 3, 5-Bis (2-pyridinylmethylidene)-4-piperidone (EF31): anti-inflammatory and anti-cancer properties". International immunopharmacology. 2012;12(2):368--377. Yamaguchi M, Moore T, Sun A, Snyder JP, Shoji M. "Novel curcumin analogue UBS109 potently stimulates osteoblastogenesis and suppresses osteoclastogenesis: involvement in Smad activation and NF-$\kappa$B inhibition". Integrative Biology. 2012;4(8):905--913. Zhu S, Moore T, Lin X, Morii N, Mancini A, Howard RB, Culver D, Arrendale RF, Reddy P, Evers TJ, others . "Synthetic curcumin analog EF31 inhibits the growth of head and neck squamous cell carcinoma xenografts". Integrative Biology. 2012;4(6):633--640. Sun A, Moore T, Gunther JR, Kim M, Rhoden E, Du Y, Fu H, Snyder JP, Katzenellenbogen JA. "Discovering Small-Molecule Estrogen Receptor $\alpha$/Coactivator Binding Inhibitors: High-Throughput Screening, Ligand Development, and Models for Enhanced Potency". ChemMedChem. 2011;6(4):654--666. Moore T, Mayne CG, Katzenellenbogen JA. "Minireview: Not picking pockets: nuclear receptor alternate-site modulators (NRAMs)". Molecular Endocrinology. 2010;24(4):683--695. Moore T, Gunther JR, Katzenellenbogen JA. "Probing the topological tolerance of multimeric protein interactions: evaluation of an estrogen/synthetic ligand for FK506 binding protein conjugate". Bioconjugate chemistry. 2010;21(10):1880--1889. Gunther JR, Du Y, Rhoden E, Lewis I, Revennaugh B, Moore T, Kim SH, Dingledine R, Fu H, Katzenellenbogen JA. "A set of time-resolved fluorescence resonance energy transfer assays for the discovery of inhibitors of estrogen receptor-coactivator binding". Journal of biomolecular screening. 2009;14(2):181--193. Moore T, Katzenellenbogen JA. "Inhibitors of nuclear hormone receptor/coactivator interactions". Annual reports in medicinal chemistry. 2009;44:443--457. Gunther JR, Moore T, Collins ML, Katzenellenbogen JA. "Amphipathic benzenes are designed inhibitors of the estrogen receptor $\alpha$/steroid receptor coactivator interaction". ACS chemical biology. 2008;3(5):282--286. Clews PK, Douthwaite RE, Kariuki BM, Moore T, Taboada M. "Layered Compounds Incorporating 9, 9 '-Spirobifluorene: Hydrogen-Bonded and Metal- Organic Networks Derived from 9, 9 '-Spirobifluorene-2, 2 ', 7, 7 '-tetracarboxylic Acid". Crystal growth \& design. 2006;6(9):1991--1994. Moore T, Kiely C, Reeves P. "Electronic properties of the trimethylenemethaneiron tricarbonyl group". Journal of Organometallic Chemistry. 2001;620(1):308--312.
CommonCrawl
Abstract: The first mixed problem with the homogeneous Dirichlet boundary condition and a finite initial function is considered for a certain class of second-order anisotropic doubly nonlinear parabolic equations in a cylindrical domain $D=(0,\infty)\times\Omega$. Upper estimates characterizing the dependence of the decay rate of the solution to the problem on the geometry of an unbounded domain $\Omega\subset\mathbb R_n$, $n\geq3$, are established as $t\to\infty$. Existence of strong solutions is proved by the method of Galerkin approximations; the method of their construction for the model isotropic equation was proposed earlier by F. Kh. Mukminov and E. R. Andriyanova. An estimate of the admissible decay rate of the solution on an unbounded domain is also obtained on the basis of Galerkin approximations; it shows that the upper estimate is sharp. Keywords: anisotropic equation, doubly nonlinear parabolic equations, existence of strong solution, decay rate of solution.
CommonCrawl
In the AdS/CFT context, black holes are dual to ensembles of "heavy" CFT states whose conformal dimension scales as the central charge. The Strominger-Vafa black hole, which admits an AdS$_3 \times$ S$^3$ decoupling limit and a dual description in terms of a two-dimensional CFT, provides an excellent model to study. Among the dynamical quantities one can study, the four-point functions with two heavy states and two light probes provide a good observable for extracting detailed information from the black hole. In particular, the late-time behavior of the correlators is associated with information loss, and more generally it provides a powerful tool to study the unitarity properties of the system.
CommonCrawl
We report on new results concerning the global well-posedness, dissipativity and attractors for the quintic wave equations in bounded domains of $\mathbb R^3$ with damping terms of the form $(-\Delta _x)^\theta \partial _t u$, where $\theta =0$ or $\theta =1/2$. The main ingredient of the work is the hidden extra regularity of solutions that does not follow from energy estimates. Due to the extra regularity of solutions existence of a smooth attractor then follows from the smoothing property when $\theta =1/2$. For $\theta =0$ existence of smooth attractors is more complicated and follows from Strichartz type estimates.
CommonCrawl
A successor ordinal, by definition, is an ordinal $\alpha$ that is equal to $\beta + 1$ for some ordinal $\beta$. There are no ordinals between $\beta$ and $\beta + 1$. The union $\bigcup \beta$ can be thought of as an inverse successor function, because $\bigcup ( \beta + 1 ) = \beta$. All limit ordinals are equal to their union.
CommonCrawl
Why did no student correctly find a pair of $2\times 2$ matrices with the same determinant and trace that are not similar?
Is there a symbol for "taking a derivative of something"?
Is there a function that doesn't have a limit at infinity but its derivative does?
Adjoints of operators between different Hilbert spaces.
Series converges if partial sums are bounded?
When do two matrices have the same exponential?
Can I comb unoriented hair on a ball?
CommonCrawl
I was recently invited to a wedding of a friend, where I had the pleasure to meet the 4-year old Benjamin1 who was fascinated by numbers and arithmetic. Sharing and often repeating the laments of Paul Lockhart, I was eager to show him some aspects of mathematics that are fun to explore and not commonly taught in schools. I suggested naming the number Rachel1 in honor of his mother, but Benjamin replied: "This is not a good name for a number". So we went with "GoogolPlexilliardenSeptillion" and appended it to his list. While gigantically large numbers clearly fascinated him most, he also had considerable fun solving arithmetic puzzles, and so I told him the following box counting game.
We represent the number $n$ as a set of $n$ boxes. The size and color of the boxes do not matter. Box sets can be added by merging them. Box sets can be multiplied by counting boxes in rectangles. Now what can we do with this almost trivial translation of numbers into boxes? Every multiplication becomes a problem of counting boxes in a rectangle! Presented as a box counting problem, we ask how many boxes there are in a 5x7 rectangle! Divide and Conquer: Chop the rectangle into smaller rectangles that are easier to count. That was neat, wasn't it?
Let's do another example: What is 11x11? I asked Benjamin to calculate this number, and he proceeded, as expected, by the standard method of reciting the multiples of 11. It was quite impressive to see him handle these calculations at pre-school age and arrive at the correct answer: 121. However, I had just asked him 10x10, which he knew by heart: 100! So essentially, he had just counted all boxes in a 10x10 square. And posed with the problem of counting all boxes in an 11x11 square, whose side is just 1 box longer, he started from scratch and recited his series. Poor boy!
Those simple observations already open up more questions about the large and fascinating topic of square numbers. The above trick to calculate 11x11 generalizes to the following theorem. Theorem. The number of boxes in a square of size $N+1$ is equal to the number of boxes in a square of size $N$, plus $2N$ (at the sides) plus 1 (at the tip). It's not possible to provide a formal proof using the explicit box counting methods introduced above. We either need more profound set theory and induction, or formal arithmetic, $(N+1)^2 = N^2 +2N + 1$, which derives this in a breeze. However, there is value in illustrations. Benjamin followed that argument quickly, while working with parameters $N,x$ and the binomial theorem would have been too much and/or boring for him. And frankly, most schools don't bother to present proofs at all and stay at the phenomenological level.
But you might not remember what $12 \times 12$ is. Do you? So, let's calculate that first. So what is 12x12? Fortunately we have already calculated $11\times 11 = 100 + 21$ so that we can substitute, which I'll leave for you as an exercise from here. Ha, that's interesting. We decompose a square into layers of square-angles! But we don't have to stop at 10x10. We can go all the way down to 1x1! But wait, there is another pattern: The sizes of those square-angles are just the odd numbers! Theorem. The number of boxes in a square of size $N$ is equal to the sum of all odd numbers smaller than $2N$.
What about the sums of even numbers, $2 + 4 + \ldots + 2n$? There is a cheap trick to get from all odd numbers to all even numbers: you can subtract one from each! We are not at the end. There is another trick that can be applied.
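Before going further, here is a quick numerical check of the two theorems above, just a few lines of Python, not something Benjamin needs, but handy if you want to convince yourself:

```python
# a quick check of the two box-counting theorems from the text
for N in range(1, 13):
    square = N * N
    assert square == (N - 1) ** 2 + 2 * (N - 1) + 1   # boxes added going from N-1 to N
    assert square == sum(range(1, 2 * N, 2))          # sum of all odd numbers below 2N
print("both box-counting theorems check out up to N = 12")
```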
We are clearly just at the beginning here. Framing arithmetic problems as box counting problems allows creative approaches and raises many interesting questions: Can you find the identity hidden in the picture at the very top? The images have been created with a HUION Tablet (this is an Amazon-affiliate link) and Sketches by Tayasui. Thanks for your comments and corrections. I am fixing those as they appear. Check the Version History to see what changed and when.
CommonCrawl
I am running a mesh independence study. I start with Mesh 1 and proceed up to Mesh 4, each time doubling the number of cells in the mesh. In parallel, I am comparing my computational results to experimental data. M. 1 shows poor results. M. 2 shows a significant improvement and a good match to experimental results. M. 3 and M. 4 produce identical results that are only slightly different from M. 2. It would then seem sensible to pick M. 3 as my final mesh. But it seems that the results are too smooth, losing some of the fine details produced by M. 2 (and observed in the experiments). Can there actually be some sort of overcorrection? Could it be that the mesh independent solution is not necessarily the best one?
There are two possibilities: either the differential equation does not appropriately describe the real world, because its solution is not close to what you measured, or your measured data are not correct. Essentially, what you are seeing is that the exact solution of the differential equation (apparently well approximated on meshes M3 and M4) does not match your measurements. Which one is wrong is for you to find out now.
I claim that the independent mesh is the best one. Say the actual solution is $U$ and your solver delivers an $u_h$ depending on a mesh parameter $h$. Then you can estimate the distance of $u_h$ to the actual solution by $$\|U-u_h \| \leq \|U-u_m \| + \|u_m - u_h\|,$$ where $u_m$ is the solution of the model used to describe the problem mathematically. Assume the modelling error $\| U - u_m\|$, which is independent of the mesh, is of constant size $C_m$, while the numerical error satisfies an estimate of the type $\|u_m - u_h\| < C_h h^p$, with $p\in \mathbb N$. Thus, the error measured can be expressed as $$\|U-u_h\| \approx C_m + C_hh^p.$$ Reducing the mesh size, i.e. $h\to 0$, there will be a point when the numerical error goes below the modelling error, and the difference to the actual solution does not change anymore. At this point your solution is called mesh independent, since the modelling error dominates over the numerical error. Thus, your observation is probably due to a lucky cancellation of errors. But I would not count on this, since I don't see a mathematical justification for using coarser discretizations, unless one faces instabilities.
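To see the plateau described in this answer, here is a tiny illustration with made-up constants (the values of $C_m$, $C_h$ and $p$ below are hypothetical, chosen only to show the behaviour):

```python
# Once the numerical error C_h * h**p drops below the modelling error C_m,
# refining the mesh no longer changes the measured difference to the data.
C_m, C_h, p = 0.05, 2.0, 2.0   # hypothetical constants

for level in range(6):
    h = 0.5 ** level            # halving the cell size at each refinement level
    numerical = C_h * h ** p
    total = C_m + numerical
    print(f"h = {h:7.4f}   numerical = {numerical:8.5f}   total ~ {total:8.5f}")
```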
CommonCrawl
Random constraint satisfaction problems encode many interesting questions in random graphs such as the chromatic and independence numbers. Ideas from statistical physics provide a detailed description of phase transitions and properties of these models. I will discuss the question of the number of solutions to random regular NAE-SAT. This involves understanding the condensation regime where the model undergoes what is known as a one step replica symmetry breaking transition. We expect these approaches to extend to a range of other models in the same universality class. I will review some recent results regarding the general problem of characterizing algebraic surfaces of the form $F(x,y,z)=0$ that can contain $\Theta(n^2)$ points of a $n \times n \times n$ Cartesian product. Assume that the derived Fukaya category of a symplectic manifold admits a collection of triangular generators. By definition, this means that any other Lagrangian submanifold which is an object of this category can be decomposed in terms of exact triangles involving the generators. The purpose of the talk is to explain why such a decomposition requires a certain non-trivial amount of "energy". The notion of energy that appears here is an extension of Hofer's energy.
CommonCrawl
\brief An implementation of the Krivodonova slope limiter.
used in all dimensions.
limiter can be applied to any 1d or tensor product of 1d basis functions.
all symmetric pairs of coefficients are left unchanged, i.e.
would need to be stored in the DataBox).
fewer coefficients we have a few choices.
\brief The \f$\alpha_i\f$ values in the Krivodonova algorithm.
approach is to not compile the limiter into the executable.
"The hierarchical slope limiter of Krivodonova. This slope limiter works by limiting the highest modal coefficients/derivatives using an aggressive minmod approach, decreasing in modal coefficient order until no more limiting is "
\brief Package data for sending to neighbor elements.
"The alphas in the Krivodonova limiter must be in the range "
"The Krivodonova limiter does not yet support non-uniform number of collocation points, bases, and quadrature in each direction. The "
"The Krivodonova limiter does not yet support differing meshes between neighbors. Self mesh is: "
// direction because we are already at the lowest coefficient.
// times we call minmod, not because it is required for correctness.
// the loop bounds are `i >= j >= k`.
prints an error message to the standard error stream and aborts the program.
Compute the modal coefficients from the nodal coefficients.
Orient variables to the data-storage order of a neighbor element with the given orientation.
Compute the nodal coefficients from the modal coefficients.
The Axis of the Direction.
The number of grid points in each dimension of the grid.
The computational grid of the Element in the DataBox.
Construct a not_null from a pointer. Often this will be done as an implicit conversion, but it may be necessary to perform the conversion explicitly when type deduction is desired.
Defines helper functions for use with Variables class.
The sign for the normal to the Side.
Allows zero-cost unordered expansion of a parameter.
Given the NodalTag holding a Tensor<DataVector, ...>, swap the DataVector with a ModalVector.
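For orientation, a rough sketch (in Python, not the actual C++ implementation documented above) of the hierarchical minmod idea these comments describe; the single `alpha` parameter below stands in for the per-order \f$\alpha_i\f$ constants and is an assumption of the sketch:

```python
import numpy as np

def minmod(*args):
    """Return the argument smallest in magnitude if all share a sign, else 0."""
    s = np.sign(args[0])
    if s != 0 and all(np.sign(a) == s for a in args):
        return s * min(abs(a) for a in args)
    return 0.0

def limit_element_1d(coeffs, left, right, alpha=1.0):
    """Hierarchical limiting of 1d modal coefficients: starting from the highest
    coefficient, replace c_i by minmod(c_i, alpha*(right neighbor difference),
    alpha*(left neighbor difference)) using the next-lower coefficients, and
    stop at the first coefficient that is left unchanged."""
    c = coeffs.copy()
    for i in range(len(c) - 1, 0, -1):
        limited = minmod(c[i],
                         alpha * (right[i - 1] - c[i - 1]),
                         alpha * (c[i - 1] - left[i - 1]))
        if limited == c[i]:
            break          # no more limiting needed at lower orders
        c[i] = limited
    return c

# toy data: modal (e.g. Legendre) coefficients of an element and its neighbors
element = np.array([1.0, 0.8, 0.3])
left_nb = np.array([0.9, 0.7, 0.2])
right_nb = np.array([1.2, 0.9, 0.4])
print(limit_element_1d(element, left_nb, right_nb))
```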
CommonCrawl
As $n \rightarrow \infty$ the exact order of relative widths of classes $W^r_1$ of periodic functions in the space $L_1$ is found under restrictions on higher derivatives of approximating functions. English version (Springer): Ukrainian Mathematical Journal 57 (2005), no. 10, pp 1652-1662. Citation Example: Parfinovych N. V. Exact order of relative widths of classes $W^r_1$ in the space $L_1$ // Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1409–1417.
CommonCrawl
Is there a systematic way to find out types of function compositions, given the types of the composed functions? Edit: As I forgot to mention, $\alpha$, $\beta$, $\gamma$ are polymorphic types, hence this composition is actually possible. Haskell says the type definition is as follows: $f \circ f :: (\alpha \rightarrow \beta_1 \rightarrow \beta_2 \rightarrow \gamma) \rightarrow ((\alpha, \beta_1), \beta_2) \rightarrow \gamma$. Browse other questions tagged lambda-calculus type-theory or ask your own question. How can I determine the cardinality of a set of polymorphic functions? What is the desirable function identification when setting up arrows in the category of types? Is this a valid proof that function composition is associative in type theory? Natural transformation = parametric polymorphic function in "structure categories"?
CommonCrawl
You are given an array that has $n$ positive integers. Your task is to find two integers such that their greatest common divisor is as large as possible. The first input line has an integer $n$: the size of the array. The second line has $n$ integers $x_1,x_2,\ldots,x_n$: the contents of the array. Print the maximum greatest common divisor.
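One common approach (not part of the problem statement) is to count, for every candidate divisor, how many array elements it divides; the largest divisor with at least two multiples is the answer. A sketch:

```python
import sys

def max_pairwise_gcd(values):
    """Largest d dividing at least two of the values."""
    limit = max(values)
    count = [0] * (limit + 1)
    for v in values:
        count[v] += 1
    for d in range(limit, 0, -1):
        multiples = sum(count[m] for m in range(d, limit + 1, d))
        if multiples >= 2:
            return d
    return 1

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    xs = list(map(int, data[1:1 + n]))
    print(max_pairwise_gcd(xs))

if __name__ == "__main__":
    main()
```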
CommonCrawl
How to calculate categorical distribution? Suppose I have 150 records with continuous and categorical values, in which only one column has categorical values with three categories, namely setosa, versicolor and virginica. How to calculate the categorical distribution for them?
A standard estimate is the Dirichlet-smoothed (add-$\alpha$) one, $\hat p_i = (n_i + \alpha)/(N + k\alpha)$, where $n_i$ is the count of category $i$, $N$ is the total number of observations and $k$ is the number of categories. The common choice for $\alpha$ is $1$, i.e. applying a uniform prior based on Laplace's rule of succession, $1/2$ for the Krichevsky-Trofimov estimate, or $1/k$ for the Schurmann-Grassberger (1996) estimator. Notice, however, that what you do here is apply out-of-data (prior) information in your model, so it gets a subjective, Bayesian flavor. With this approach you have to remember the assumptions you made and take them into consideration. Of course, if your out-of-data knowledge suggests using some informative, non-uniform prior, you can use different values for the $\alpha_i$.
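A minimal sketch of the add-$\alpha$ estimate described above (the 150 iris-style labels below are made up for illustration):

```python
from collections import Counter

def categorical_estimate(labels, alpha=1.0):
    """Dirichlet-smoothed ("add-alpha") estimate of category probabilities.
    Here k is taken as the number of observed categories; if some possible
    categories have zero observed count, use the number of possible categories."""
    counts = Counter(labels)
    k = len(counts)
    n = len(labels)
    return {c: (counts[c] + alpha) / (n + k * alpha) for c in counts}

labels = ["setosa"] * 50 + ["versicolor"] * 50 + ["virginica"] * 50
print(categorical_estimate(labels, alpha=1.0))
```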
CommonCrawl
The phenylhydrazine-induced erythropoietic mouse spleen is used as a model system to demonstrate the relationship between tissue growth and polyamine metabolism. Phenylhydrazine produced significant changes in spleen weights, hematocrits and reticulocyte counts in Swiss-Webster mice. The average spleen weight went up from a control of 155 mg to 875 mg at 96 hours after phenylhydrazine administration, while a 49% reduction in the value of hematocrit was observed at 72 hours. Reticulocyte counts in peripheral blood went from 0.8 to 58% at 168 hours after treatment with phenylhydrazine. Phenylhydrazine at a dose of 40 mg/kg produced significant increases in the levels of putrescine, spermidine and spermine with maxima reached within 72 hours. The levels of N$\sp1$-acetylspermidine reached a maximum of 2.7-fold compared to control at 96 hours. When the dose of phenylhydrazine was increased to 120 mg/kg, peak levels of acetylated polyamines were reached within 96 hours at which time N$\sp1$-acetylspermidine levels rose to 2.9-fold and N$\sp8$-acetylspermidine levels went from not detectable to detectable levels. the levels of putrescine, spermidine and spermine reached maxima at 96 hours of 289, 1248 and 934 nmoles/g, respectively. DL-$\alpha$-difluoromethylornithine hydrochloride monohydrate (DFMO) inhibited the increases in putrescine levels and potentiated the increases in spermine levels induced by phenylhydrazine, while 7-(N-(3-aminopropyl)amino) heptan-2-one.2HCl (APAH) induced significant increases in the levels of N$\sp8$-acetylspermidine. APAH potentiated the increases in spleen weights induced by phenylhydrazine.
CommonCrawl
Bessie the cow realizes she needs to exercise more in order to stay in good shape. She needs your help selecting potential routes around the farm that she can use for her morning jogging routine. The farm is made up of $N$ fields ($1 \leq N \leq 2 \cdot 10^5$), conveniently numbered $1 \ldots N$, and conveniently connected by a set of $M$ bi-directional trails ($1 \leq M \leq 2 \cdot 10^5$). Being creatures of habit, the cows tend to use one particular subset of $N-1$ trails for all of their daily movement between fields -- they call these the "standard" trails. It is possible to travel from any field to any other field using only standard trails. To keep her morning jog interesting, Bessie decides that she should pick a route that involves some non-standard trails. However, she is so comfortable with using standard trails, she doesn't want to use too many non-standard trails on her route. After some thought, she decides a good route is one that forms a simple cycle (returning to its starting point, and not using any field more than once) that contains exactly two non-standard trails. Please help Bessie count the number of good routes she can use. Two routes are considered the same if they involve the same set of trails. The first line contains $N$ and $M$. Each of the next $M$ lines contains two integers $a_i$ and $b_i$ describing the endpoints of a trail. The first $N-1$ of these are the standard trails. Output the total number routes Bessie might want to use.
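A brute-force counter for small inputs is sketched below. It relies on the characterization, stated here as an assumption since the problem statement does not give it, that a pair of non-standard trails yields a good route exactly when the tree paths between their endpoints share at least one standard trail:

```python
from collections import deque
from itertools import combinations

def count_good_routes(n, edges):
    """Brute force: the first n-1 edges are the standard (tree) trails."""
    tree, extra = edges[: n - 1], edges[n - 1:]
    adj = [[] for _ in range(n + 1)]
    for idx, (a, b) in enumerate(tree):
        adj[a].append((b, idx))
        adj[b].append((a, idx))

    def path_edges(src, dst):
        parent = {src: (None, None)}
        dq = deque([src])
        while dq:
            v = dq.popleft()
            if v == dst:
                break
            for w, idx in adj[v]:
                if w not in parent:
                    parent[w] = (v, idx)
                    dq.append(w)
        out, v = set(), dst
        while parent[v][0] is not None:
            out.add(parent[v][1])
            v = parent[v][0]
        return out

    paths = [path_edges(a, b) for a, b in extra]
    return sum(1 for p, q in combinations(paths, 2) if p & q)

# hypothetical small instance: 4 fields, 3 standard trails, 2 non-standard trails
edges = [(1, 2), (2, 3), (3, 4), (1, 3), (2, 4)]
print(count_good_routes(4, edges))   # the two tree paths share trail (2,3) -> 1
```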
CommonCrawl
Suppose we want to prove a statement of the form: For all natural numbers $n$, $P(n)$ is true. One way is to prove $P(1)$, $P(2)$, $P(3)$, and so on, individually. Well, there is an infinite list of things to prove, so this will only work if you have an infinite amount of time. Fortunately, there is the principle of mathematical induction: if $P(1)$ is true, and $P(n)$ implies $P(n+1)$ for every natural number $n$, then $P(n)$ is true for all natural numbers $n$, which makes our task infinitely easier!
Proposition 1. For every natural number $n$, the sum of the first $n$ odd numbers is $n^2$. For example, say $n = 5$. Then the sum of the first $5$ odd numbers is $1 + 3 + 5 + 7 + 9 = 25$, which is indeed equal to $5^2$.
Proof. We will prove this by induction. Our predicate will be $P(n)$ which means "the sum of the first $n$ odd numbers is $n^2$". For the base case, we will show that $P(1)$ is true. True enough, the sum of the first $1$ odd numbers is just $1$ and it is equal to $1^2$. Now, for the inductive case. Assume $P(n)$ for a natural number $n$, and we want to show that $P(n+1)$. The sum of the first $n+1$ odd numbers is equal to the sum of the first $n$ odd numbers plus the $(n+1)$th odd number. Now, by assumption, the sum of the first $n$ odd numbers is $n^2$. Also, note that the $(n+1)$th odd number is equal to $2n+1$. Thus, the sum of the first $n+1$ odd numbers is $n^2 + (2n+1)$. But $n^2+2n+1 = (n+1)^2$. Thus, $P(n+1)$ is true, and the inductive case is proven!
Well-ordering principle. Every nonempty set of natural numbers has a smallest element. This is very useful too, and it may sound very obvious, but as you may have learned by now, true statements are not necessarily obvious, and obvious statements are not necessarily true. As exercises, you can try proving the following statements using induction (or the well-ordering principle if you like).
Proposition 2. For every integer $x$, either $x$ is even or $x$ is odd, but not both.
This one is a little trickier. Proposition 4. For any $n$, $(1+2+\ldots+n)^2 = (1^3+2^3+\ldots+n^3)$.
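If you want to gain confidence before attempting the proofs, here is a quick numerical check (which, of course, is not a proof) of Propositions 1 and 4 for small $n$:

```python
# numerically check Proposition 1 and Proposition 4 for n = 1, ..., 10
for n in range(1, 11):
    odds = [2 * k + 1 for k in range(n)]                # the first n odd numbers
    assert sum(odds) == n ** 2                          # Proposition 1
    assert sum(range(1, n + 1)) ** 2 == sum(k ** 3 for k in range(1, n + 1))  # Proposition 4
print("Propositions 1 and 4 hold for n = 1, ..., 10")
```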
CommonCrawl
Squarectiland is a 2 dimensional plane which can be assumed to infinitely extend in all directions. There is a universal Cartesian coordinate system in Squarectiland, and thus each point is uniquely described by its coordinates $(x, y)$. The king of Squarectiland wants to build a new palace. The palace has to be a square with sides aligned with the coordinate axes. The palace can be arbitrarily large, but the king also wants it to be a very secure place. There are $n$ guard stations in Squarectiland. The $i$-th station is located at the point $(x_i, y_i)$. The king can set up several guard squads to patrol around the palace. The guards are simple folk, and can only follow simple orders. Each guard squad will have exactly two distinct guard stations assigned to it. Then, the patrol route of this particular guard squad will be the axes-aligned rectangle with the smallest area which contains both of these stations on its boundary (note that this rectangle can have zero area if the stations share an $x$- or a $y$-coordinate). To avoid quarrel and confusion, each station can be assigned to at most one guard squad. The king wishes to set up $k$ guard squads as described above, and in such a way that the patrol of each of the $k$ guard squads encircles the palace (that is, every point inside or on the boundary of the palace has to be either visited or inside the rectangular patrol of each of the guard squads). Help him determine the largest possible size of the palace that can be secured this way. ###Input: - The first line of the input contains a single positive integer $T$, the number of testcases. Descriptions of $T$ test cases follow. - Each test case description starts with a line containing two integers $n$ and $k$, denoting the number of guard stations and the desired number of guard squads, respectively. - The following $n$ lines describe locations of the guard stations. The $i$-th of these lines contains two integers $x_i, y_i$, representing the coordinates of the $i$-th station. ###Output: - For each testcase, output in a new line, the largest side length of a palace that can be encircled by $k$ patrol routes. - If there is no way to build a palace with positive area, the answer should be $0$. ###Constraints - $2 \leq n \leq 5 \cdot 10^4$ - $1 \leq k \leq n / 2$ - $-10^6 \leq x_i, y_i \leq 10^6$ - The sum of all $n$ across all test cases in a single input does not exceed $5 \cdot 10^4$ - No two stations occupy the same point. ###Sample Input: ``` 3 4 1 0 1 3 0 4 3 1 4 5 2 4 0 2 4 -4 1 -1 2 -3 5 2 1 0 0 10 0 ``` ###Sample Output: ``` 2 3 0 ``` ###EXPLANATION: **Testcase 1**: The four guard stations are represented in the figure below by points A, B, C, and D. The rectangle AECF is the patrol associated with the squad which has the stations A and C assigned to it. And the palace can be built on the square GHIJ, with a side length of 2. You can check that this is the maximum possible, and hence then answer is 2. ![Alt Text](https://codechef_shared.s3.amazonaws.com/download/ACMINO18/SQRECT/1.png =350x350) **Testcase 2**: The five guard stations are represented in the figure below by points A, B, C, D, and E. The rectangle AIEH is the patrol associated with the squad which has the stations A and E assigned to it. The rectangle BGCF is the patrol associated with the squad which has the stations B and C assigned to it. And the palace can be built on the square JKLM, with a side length of 3. You can check that this is the maximum possible, and hence then answer is 3. 
![Alt Text](https://codechef_shared.s3.amazonaws.com/download/ACMINO18/SQRECT/2.png =475x350) **Testcase 3**: The only possible route is a straight segment, hence there is no way to place a square of positive area inside it.
CommonCrawl
V.I. Arnold says Russian students can't solve this problem, but American students can -- why?
How do I escape characters in GitHub code search?
Why are complex numbers in Python denoted with 'j' instead of 'i'?
Can a nonspherical planet exist and can it be habitable?
What does Chrome's "Incognito Mode" do exactly?
Does it make any sense to prove $0.999\ldots=1$?
CommonCrawl
1-To be published in IEEE Xplore, Authors must submit their abstracts online through the IEEE pdf-express system by June 25, 2018. 4-Upon process completion, the option « Approve for collection » will become available in your account. Press that option if you are satisfied with the final result. The next screen will confirm the status of the paper as « Approved for collection». The Process is complete you can logoff. 5-About the IEEE Electronic Copyright Form. After submission of the accepted paper to IEEE in July, authors will receive an email from IEEE to fill in the electronic Copyright Form for the submitted paper. This will complete the process for submission to Xplore. Download the IEEE Requirements for PDF Document V3.2. Abstracts may be no longer than 1 page, including all text, figures, and references. Please note that after the submission deadline the list and the order of the authors cannot be modified, and must remain unchanged in the final version of the manuscript. Photonics North requires that each accepted paper be presented by one of the authors' in-person at the conference site according to the schedule published. Presentation by anyone else than one of the co-authors (proxies, video or remote cast) is not allowed, unless explicitly approved before the conference by the technical co-chairs. For posters, one author must be present at the poster during the entire duration of the session. Any paper accepted into the technical program, but not presented on-site will be withdrawn from the official proceedings archived on IEEE Xplore. The text of the paper should contain discussions on how the paper's contributions are related to prior work in the field. It is important to put new work in context, to give credit to foundational work, and to provide details associated with the previous work that have appeared in the literature. This discussion may be a separate, numbered section, or it may appear elsewhere in the body of the manuscript, but it must be present. You should differentiate what is new, and how your work expands on or takes a different path from the prior studies. The review process will be performed from the electronic submission of your paper. To ensure that your document is compatible with the review system and Proceedings system, you MUST adhere to the following requirements. Papers must be submitted in Adobe's Portable Document Format (PDF) format and must strictly adhere to the IEEE Requirements for PDF Documents v3.2. Have monochrome images down-sampled at 600 dpi, grayscale & color images at 300 dpi. Authors will be permitted to submit files weighing up to 10 MB. When submitting your paper, the online submission system will ask you to rename your file with a specific name that will be given to you at that time. Please strictly comply with this instruction. Once again, please note that only PDF files will be accepted. English is the official language of the conference. As a result, all papers must be entirely submitted (and presented) in English. The paper abstract should appear at the top of the left-hand column of text, about 12 mm (0.5") below the title area and no more than 80 mm (3.125") in length. Leave 12 mm (0.5 ») of space between the end of the abstract and the beginning of the main text. To achieve the best viewing experience for the review process and conference proceedings, we strongly encourage authors to use Times-Roman or Computer Modern fonts. 
If a font face is used that is not recognized by the submission system, your proposal will not be reproduced correctly. Use a font size that is no smaller than 9 points throughout the paper, including figure captions. In 9-point type font, capital letters are 2 mm high. For 9-point type font, there should be no more than 3.2 lines/cm (8 lines/inch) vertically. This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make the proposal much more readable. Larger type sizes require correspondingly larger vertical spacing. The paper title must appear in boldface letters and should be in ALL CAPITALS. Do not use LaTeX math notation ($x_y$) in the title; the title must be representable in the Unicode character set. Lastly, try to avoid uncommon acronyms in the title. The authors' name(s) and affiliation(s) appear below the title in capital and lower case letters. ICIP does not perform blind reviews, so be sure to include the author list in your submitted paper. Proposals with multiple authors and affiliations may require two or more lines for this information. The order of the authors on the document should exactly match in number and order the authors typed into the online submission form. Questions concerning the paper-submission process should be addressed to [email protected]. Include your paper number(s) and title(s) on all correspondence.
CommonCrawl
For questions about properties and applications of triangles. For questions about Lie algebras, an algebraic structure whose main use is in studying geometric objects such as Lie groups and differentiable manifolds. For questions concerning circles. A circle is the locus of points in a plane that are at a fixed distance from a fixed point. Questions on the mathematics required to solve problems in physics. For questions from the field of mathematical physics use (mathematical-physics) tags instead. a decomposition of a periodic function as a linear combination of sines and cosines, or complex exponentials. Use for questions about finding integer or rational solutions to polynomial equations. Question about finding the primitives of a given function, whether or not elementary. For questions about or related to Sobolev spaces, which are function spaces equipped with a norm combining norms of a function and its derivatives. Questions regarding the plotting or graphing of functions. Questions about graphs with vertices and edges should use the (graph-theory) tag instead. Questions on special functions, useful functions that frequently appear in pure and applied mathematics (usually not including "elementary" functions). Questions about quadratic functions and equations, second degree polynomials usually in the forms $y=ax^2+bx+c$, $y=a(x-b)^2+c$ or $y=a(x+b)(x+c)$. Puzzles, curiosities, brain teasers and other mathematics done "just for fun". Questions on linear programming, the optimization of a linear function subject to linear constraints. A vector space $E$, generally over the field $\mathbb R$ or $\mathbb C$ with a map $\lVert \cdot\rVert\colon E\to \mathbb R_+$ satisfying some conditions. Questions about exponentiation, the operation of raising a base $b$ to an exponent $a$ to give $b^a$. Questions on conic sections and their properties; the curves formed by the intersection of a plane and a cone. Circles, ellipses, hyperbolas, and parabolas are examples of conic sections. a vector space equipped with an inner product. The inner product is a generalization of the "dot" product often used in vector calculus. For questions about smooth manifolds, a topological manifold with a maximal smooth atlas.
CommonCrawl
This puzzle is the continuation of "Professor Halfbrain and the 99x99 chessboard (Part 1)". The difference is that "four corners of a rectangle" has now become "$2\times2$ subsquare". Professor Halfbrain has spent the last weekend with filling the squares of a $99\times99$ chessboard with real numbers from the interval $[-1,+1]$. Whenever four squares form a $2\times2$ subsquare of the chessboard, then the four numbers in these squares had to add up to zero. This puzzle asks you to improve the two theorems of professor Halfbrain and to make them even deeper. Find an integer $y$, so that "the sum in all the squares is $0$" in the first theorem may be replaced by "the sum in all the squares is $y$", and so that "the sum is at most $9801$" in the second theorem may be replaced by "the sum is at most $y$" (again yielding true statements, of course). Since we know that all $2\times2$ squares will have a sum of all values equal to $0$, then we can say that the sum of the chessboard is equal to the sum of elements in the last row and column, because we can group all the other squares into $2\times2$ squares of sum $0$. The sum of the last column and last row $= 99 \times 1 + 1-1 + 1-1 +... +1-1 = 99$. I've shown a method to get the value $99$. We can get $-99$ by multiplying each square by $-1$. I can further show that we definitely can't achieve more than $99$; this will be proof for the upper bound (also for lower bound). We will fill the chessboard with $2\times2$ tiles, where each tile covers a $2\times2$ square of which the sum is equal to $0$. Thanks to Paul Sinclair for noticing that the below tiling shows that you can place $49^2$ tiles and that we don't need to prove you can't place more. I would like to now state that for every red L shape, the sum of all 3 squares is between $[-1, +1]$. This is obvious because the red L shape, along with the neighboring green square must have a sum of $0$, and the value of the green square is between $[-1, +1]$. With this property we can now say that there are 50 blue squares on our board, and 49 red L shapes, and the sum of all these squares is the sum of our board. However the blue squares are each between $[-1, +1]$, and each red L shape is between $[-1, +1]$, so our board sum must be between $[-99, +99]$. My first tiling allows us to get the sum of all numbers in all the squares is $= x$ as long as $-99 \le x \le 99$. Say $x = 99y$. Now lets change all my $1$'s to $y$'s, and all my $-1$'s to $-y$'s. Similarly, adding the squares gives the sum equal to $99y = x$, again $-1 \le y < 1$, so for this solution, $-99 \le x \le 99$. Another way of getting 99: If we give each square an $x$ and $y$ coordinate, starting with (1,1) in the top left corner, we give each square with even $x$ and even $y$ a value of $-1$, each square with odd $x$ and $y$ a value of $1$, and each other square a value of $0$, then we have $50^2 \times 1 + 49^2 \times -1 = 99$. If we give each square an $x$ and $y$ coordinate, starting with (1,1) in the top left corner, we give each square with even $x$ and even $y$ a value of $-1$, and each other square a value of $\frac13$. Each $2\times2$ square will contain exactly one square with value $-1$, and $3$ with value $\frac13$, for a total sum of $0$. We have $49^2$ -1's, and $99^2 - 49^2$ with value $\frac13$, for a total value of $65.6$.
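The construction with $-1$ on the even-even squares, $+1$ on the odd-odd squares and $0$ elsewhere can be checked mechanically; a small script (an illustration, not part of the original puzzle) verifies both the $2\times2$ constraint and the total of $99$:

```python
# board value: +1 where both coordinates are odd, -1 where both are even, 0 otherwise,
# with coordinates (x, y) running from 1 to 99 as in the text above
N = 99
board = [[1 if (x % 2 == 1 and y % 2 == 1) else -1 if (x % 2 == 0 and y % 2 == 0) else 0
          for x in range(1, N + 1)] for y in range(1, N + 1)]

# every 2x2 subsquare must sum to zero
ok = all(board[y][x] + board[y][x + 1] + board[y + 1][x] + board[y + 1][x + 1] == 0
         for y in range(N - 1) for x in range(N - 1))
total = sum(sum(row) for row in board)
print(ok, total)   # expected: True 99
```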
CommonCrawl
Abstract: In this paper some embedding theorems related to fractional integration and differentiation in harmonic mixed norm spaces $h(p,q,\alpha )$ on the half-space are established. We prove that the mixed norm is equivalent to a "fractional derivative norm" and that harmonic conjugation is bounded in $h(p,q,\alpha )$ for the range $0<p\leq \infty $, $0<q\leq \infty $. As an application of the above, we give a characterization of $h(p,q,\alpha )$ by means of an integral representation with the use of Besov spaces.
CommonCrawl
We cover the object with square non-overlapping boxes of size $r$ and repeat the procedure using a range of $r$ values. This range is determined by the size of our sample. Calculate $M_q(r_1)$ and store the result. Do this for all $r$'s in the range. If somebody wants to do this in R as a final project, we can talk later.
maxBox: the size of the last box. The intermediate boxes are calculated as powers of 2.
numBoxSizes: limits the number of boxes; not very useful.
mfSBA can use tif files, but I can't make it work in Windows, so we will use its own format, called "sed". Sed files are text files with a matrix structure.
s <- "./mfSBA K1_laSelva.sed q.sed 2 512 20 S"
a.file/f.file have data for $f(\alpha)$ and $\alpha$, an equivalent way to express $D_q$ that we will not use. t.file has $\log( M_q(r))$ and the box sizes used, useful to check the validity of the regression. s.file has $\tau_q$, $\alpha$ and $f(\alpha)$, the $R^2$'s and standard deviations, so we will use mostly this file.
The first step to write a function like this is to test that the command works. Oops, error: the first line does not have the same number of labels as columns. When there is a low $R^2$ or autocorrelation, it is better to check graphically. Can we make more functions to automate the task if we need to repeat the analysis? A function to read the data, show the $R^2$ histogram and add $D_q$ to a data frame.
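For readers who prefer to see the box-counting step spelled out, here is a rough sketch (in Python rather than R, and it is not the mfSBA program itself; the random input matrix is made up) of how $M_q(r)$ and $D_q$ can be estimated:

```python
import numpy as np

def partition_function(image, box_sizes, q):
    """M_q(r) = sum over boxes of p_i(r)**q, where p_i is the fraction of the
    total 'mass' of the matrix that falls in box i."""
    total = image.sum()
    out = []
    for r in box_sizes:
        h, w = image.shape
        trimmed = image[: h - h % r, : w - w % r]     # exact multiple of the box size
        boxes = trimmed.reshape(trimmed.shape[0] // r, r, trimmed.shape[1] // r, r)
        p = boxes.sum(axis=(1, 3)) / total
        p = p[p > 0]
        out.append((p ** q).sum())
    return np.array(out)

# tau_q is the slope of log M_q(r) vs log r, and D_q = tau_q / (q - 1) for q != 1
rng = np.random.default_rng(0)
img = rng.random((512, 512))          # stand-in for the K1_laSelva data
sizes = [2, 4, 8, 16, 32, 64]
q = 2.0
Mq = partition_function(img, sizes, q)
slope = np.polyfit(np.log(sizes), np.log(Mq), 1)[0]
print("tau_q ~", slope, "  D_q ~", slope / (q - 1))
```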
CommonCrawl
The answer should be 518 A, but my calculation comes out to around 3255 A. Where is the mistake?
I don't see any mistake. The difference is a factor of exactly $2\pi$, which should be in the formula. According to the answer in part (a), the radial current density at $\rho=3, z=2$ is $180 A/m^2$. The surface area of the band is $2\pi \times 3 \times 0.8$, so if the current density were constant the current through the band would be $2714 A$. The current density increases with $z$, so the total current must be greater than $2714 A$. The answer of $518 A$ must be wrong. You need to have more confidence in your calculations. Also learn to make estimates and checks such as the one I did above.
CommonCrawl
You have found $13$ gold coins and strangely their weights are from $1$ to $13$ grams (such as $1,2,3,...$). You are bored and out of the blue you decided to divide golds into groups such as the sums of the weights of the golds in all groups will be the same. In how many distinct ways can this be done? If this question was asked for $7$ gold coins with $1$ to $7$ gram weights: The answer would be 5, such as (1-6-7, 2-3-4-5), (1-6,2-5,3-4,7), (1-2-4-7,3-5-6), (1-2-5-6,3-4-7), (1-3-4-6,2-5-7) etc. $1 + 2 + ... + 13 = 91$; and $91 = 13 \times 7$, so we must have either 13 groups of 7 or 7 groups of 13. Since we have coins heavier than 7 grams, the first option is out. We must therefore have 7 groups of 13. 1 way, since 13 must be alone; 12 must be with 1; 11 must be with 2; 10 must be with 3; 9 must be with 4; 8 must be with 5; and 7 must be with 6.
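The counting argument can be confirmed by brute force; the script below (an illustration, not part of the original answer) counts partitions of the coins into groups of a given equal sum, and reproduces both the 5 groupings for 7 coins and the single grouping for 13 coins:

```python
from itertools import combinations

def count_equal_sum_partitions(coins, target):
    """Count the ways to split all coins into groups that each sum to `target`.
    Anchoring each group on its largest remaining coin avoids double counting."""
    def rec(remaining):
        if not remaining:
            return 1
        biggest, rest = remaining[-1], remaining[:-1]
        total = 0
        for k in range(len(rest) + 1):
            for combo in combinations(rest, k):
                if sum(combo) + biggest == target:
                    total += rec([c for c in rest if c not in combo])
        return total
    return rec(sorted(coins))

# 7 coins: groups of sum 14 plus groups of sum 7 give the 5 ways quoted above
print(count_equal_sum_partitions(range(1, 8), 14) + count_equal_sum_partitions(range(1, 8), 7))
# 13 coins: only groups of sum 13 are possible, and there is exactly 1 way
print(count_equal_sum_partitions(range(1, 14), 13))
```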
CommonCrawl
A Goldbach partition $2n = p + q$ with $p$ and $q$ primes and $p \leqslant q$ is usually called minimal if the numbers $2n - k$ ($k = 1,\ldots, p-1$) are all composite. Reading through the literature, I see that the minimal Goldbach partitions of even integers have been studied a lot (for instance in https://dms.umontreal.ca/~andrew/PDF/Goldbach1.pdf). Browse other questions tagged reference-request integer-partitions goldbachs-conjecture or ask your own question.
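For experimenting with small cases, here is a short script; it uses the usual convention (an assumption on my part) that the minimal partition takes $p$ to be the least prime for which $2n - p$ is also prime, which differs slightly from a literal reading of the definition quoted above:

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def minimal_goldbach(two_n):
    """Smallest prime p with two_n - p prime, giving the partition 2n = p + q, p <= q."""
    for p in range(2, two_n // 2 + 1):
        if is_prime(p) and is_prime(two_n - p):
            return p, two_n - p
    return None

for two_n in range(4, 31, 2):
    print(two_n, minimal_goldbach(two_n))
```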
CommonCrawl
Salem gave you $ n $ sticks with integer positive lengths $ a_1, a_2, \ldots, a_n $ . For every stick, you can change its length to any other positive integer length (that is, either shrink or stretch it). The cost of changing the stick's length from $ a $ to $ b $ is $ |a - b| $ , where $ |x| $ means the absolute value of $ x $ . A stick length $ a_i $ is called almost good for some integer $ t $ if $ |a_i - t| \le 1 $ . Salem asks you to change the lengths of some sticks (possibly all or none), such that all sticks' lengths are almost good for some positive integer $ t $ and the total cost of changing is minimum possible. The value of $ t $ is not fixed in advance and you can choose it as any positive integer. As an answer, print the value of $ t $ and the minimum cost. If there are multiple optimal choices for $ t $ , print any of them. The first line contains a single integer $ n $ ( $ 1 \le n \le 1000 $ ) — the number of sticks. The second line contains $ n $ integers $ a_i $ ( $ 1 \le a_i \le 100 $ ) — the lengths of the sticks. Print the value of $ t $ and the minimum possible cost. If there are multiple optimal choices for $ t $ , print any of them. In the first example, we can change $ 1 $ into $ 2 $ and $ 10 $ into $ 4 $ with cost $ |1 - 2| + |10 - 4| = 1 + 6 = 7 $ and the resulting lengths $ [2, 4, 4] $ are almost good for $ t = 3 $ . In the second example, the sticks lengths are already almost good for $ t = 2 $ , so we don't have to do anything.
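Since $a_i \le 100$, a brute force over all candidate values of $t$ is enough; a sketch (not an official solution) that reproduces both sample answers:

```python
def best_t(sticks):
    """Try every candidate t and pay max(0, |a - t| - 1) per stick;
    return a t with the smallest total cost."""
    best = None
    for t in range(1, max(sticks) + 2):
        cost = sum(max(0, abs(a - t) - 1) for a in sticks)
        if best is None or cost < best[1]:
            best = (t, cost)
    return best

print(best_t([1, 4, 10]))        # first example: (3, 7)
print(best_t([1, 1, 2, 2, 3]))   # second example: (2, 0)
```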
CommonCrawl
Abstract: Given a map $\mathcal M$ on a connected and closed orientable surface, the delta-matroid of $\mathcal M$ is a combinatorial object associated to $\mathcal M$ which captures some topological information of the embedding. We explore how delta-matroids associated to dessins d'enfants behave under the action of the absolute Galois group. Twists of delta-matroids are considered as well; they correspond to the recently introduced operation of partial duality of maps. Furthermore, we prove that every map has a partial dual defined over its field of moduli. A relationship between dessins, partial duals and tropical curves arising from the cartography groups of dessins is observed as well.
CommonCrawl
SFH Handbook 4000.1 | HUD.gov / U.S. Department of Housing and. – . Relations · Davis Bacon and Labor Standards · Departmental Enforcement Center. The Federal Housing Administration's (FHA) Single Family Housing Policy. everything a lending entity needs to become FHA approved; to originate and. II forward mortgages, and program information for Nonprofit Organizations and. Credit Score For Usda Loan 2019 If you want a good deal on a home, here's the credit score you need – Credit scores signify your trustworthiness to financial institutions and can determine how easy, or how expensive, it is for you to get a mortgage. To determine your ability to pay, lenders look at.Minimum Credit Score For Fha Home Loan Credit score for mortgage: Calculate what you need to get a. – The minimum fico credit score for a conventional mortgage. A conventional mortgage is the most common type of home loan. This term refers to mortgages that meet the underwriting standards of. FHA Cash-Out Refinance 2019 | Tap into your Home's Equity – With today's low rates, see if you meet FHA cash-out refinance guidelines.. America's Top Mortgage Lender – Rated A+ by the BBB; Closes.. cannot request bank statements as part of their internal underwriting guidelines. FHA Mortgage Underwriting Guidelines | Pocket Sense – FHA makes some exceptions to its guidelines. A borrower with DTI ratios that exceed FHA underwriting guidelines may still qualify for insurance if the borrower has sufficient compensating factors that convince the lender that the borrower can make the payment. Fha Loans Guidelines Should I Get a FHA Loan or Conventional Mortgage? – These loans, while the most popular, also contain tighter qualifying guidelines than FHA: No mortgage insurance with just 10% down The wait for a new mortgage post-foreclosure is seven years; there's. Mortgage Options for First-time home buyers – Before you start looking for the home of your dreams on Zillow, the best place. a mortgage and a credit score of 740 or higher will help you get the best mortgage rates. If you don't have such a. FHA Loans Available in OHIO – FHA.com Reviews. FHA.com is a one-stop resource for homebuyers who want to make the best decisions when it comes to their mortgage. With our detailed, mobile-friendly site, individuals can access information about different FHA products, the latest loan limits, and numerous other resources to make their homebuying experience easier. Banks That Work With Fha Loans Tampa florida fha home loans, Home Loan Options | GTE Financial – GTE Financial can help you through the details of an FHA Home Loan and walk you through all your options. Perfect for first time home buyers. Can You Get A Fha Loan With Bad Credit Here is something most married folks do not realize when applying for. – FHA Loans require the lender pulls credit on the debt of the spouse. If you are not married and plan to get married and you know what the. ATL Mortgage – Home Loans in Atlanta, GA – We want you to be excited about your new home. The home-buying process is stressful, but one of our main goals is to take the stress out of the home loan process. How Will the Shutdown Affect Local Low-Income Housing? – The shutdown also means some FHA loans and HUD grants may not be processed. "While they are not the highest profile victims of the impasse, low-income people served by HUD affordable housing. Fha Home Loan Assistance Westminster Home Mortgage Services Launched – Finding a mortgage to help fit your needs can make all the difference. 
programs and products including conventional conforming, FHA and VA loans, larger loan amounts, investment property. HUD limiting reverse mortgages for seniors – Home prices and interest rates, among other things, have made the reverse-mortgage program volatile, HUD officials said. "Fairness dictates that future HECM loans do not adversely. He said HUD is. The Low-Income Housing Tax Credit (LIHTC) for HUD Loans – HUD. – HUD 223f Loans and LIHTC Credits. The LIHTC (Low-Income Housing Tax Credit) was created by the Tax Reform Act of 1986. According to HUD, it is "the most important resource for creating affordable housing in the United States today." This federally authorized program gives both state and local agencies the authority to issue tax credits to. New Program Will Support Clean Energy And Efficiency For Low-Income Residents – Homeowners repay the loan as a lien. live in their home. Since lower-income households spend a disproportionate amount of income on energy bills, HUD's multifamily PACE program could result in. These Low Income Home Loans May Surprise You – Even if you have a lower income, there are a variety of loan options to make home buying affordable for you. Learn about the options and how to qualify. Can You Get A Fha Loan With Bad Credit How To Get a Home Improvement Loan With Bad Credit – How To Get a Home Improvement Loan With Bad Credit Don't Let Your bad credit score stop You From Getting a Home Improvement Loan. Getting a home mortgage loan with a questionable credit. 3.5 As A Percent What is 3/5 as a percent? or What is 3/5 as a percentage? – When you enter 3/5 into the above formula, you get (3/5)*100 which calculates to: 60% Note: When Research Maniacs calculated 3/5 as a percent, we rounded the answers to nine digits after the decimal point if necessary. Tips to help you build your credit score – Here is a look at some other reasons why you should have a good credit score. More negotiating power Borrow higher limits on loans Better chances of having your home loan or rental approved Better car. Fha Mortgage Underwriting Guidelines FHA vs. Conventional Loan: The Pros and Cons | The Truth. – Another edition of mortgage match-ups: "fha vs. conventional loan." Our latest bout pits fha loans against conventional loans, both of which are popular home loan options for home buyers these days.. In recent years, FHA loans surged in popularity, largely because subprime (and Alt-A) lending was all but extinguished as a result of the ongoing mortgage crisis. How to Get Pre-Approved for a Mortgage| Experian – Unlike a mortgage approval itself, this document just states the lender's belief that it would approve your mortgage application based on the income and credit information that you've submitted. The information typically needed for a home mortgage pre-approval includes your personal information, credit history, credit score , income, assets. 2019's Best Reviews: Home Loans for Bad Credit – You can work to improve your chances of qualifying for a home loan by improving your credit score through credit repair or a debt management program. You are also much more likely to be approved if you provide a sizeable down payment. A down payment between 15% and 20% will give you the greatest chances of being approved. What Does It Mean If Your Credit Score Is Less Than 600? – Here's a rundown of what it means for three common types of borrowing — mortgages, auto loans, and credit cards. The bare minimum fico score to be approved for a conventional mortgage loan is 620 as. 
Credit Score Needed for FHA Loan Approval in 2018. – Today, we will cover the credit score needed for FHA loan approval in 2018. Some of the information that follows is based on the official HUD handbook for this mortgage program, and some of it is derived from our on ongoing conversations with FHA-approved lenders. What Kind of Mortgage Does Your Credit Score Qualify For? – You're probably already aware that your credit score plays an important role in your ability. it's possible that the borrower may be approved for a mortgage before he or she is truly ready. For. Fha Office Near Me Tenant Troubles: Family Enrolled in state program accused of Trashing Homes – Laura Guilmartin said she was contacted by a DMHAS employee in 2016, who said his clients were interested in renting her home near Bishop's. A representative for HUD told the Troubleshooters the. If a home does not meet FHA minimum property standards, the FHA will not supply mortgage insurance for the loan. Since the loan is contingent on the fha supplying insurance, the lender will not approve the mortgage until the seller brings the home up to FHA standards, the appraiser reinspects it and it passes. FHA Reduces Single-Family Mortgage Insurance Requirements – Law360, New York (February 5, 2013, 5:16 PM EST) — The Federal Housing Administration on Tuesday said it is reducing the number of inspection and home warranty requirements for its. a newly. Apply For Fha Mortgage Online FHA Loan | fha loan qualifications | Santander Bank – fha loan, what is an fha loan, fha loan qualifications, fha requirements, fha. Contact a mortgage specialist or fill out a pre-qualification application online. The FHA Home Inspection Checklist | Sapling.com – minimal fha standards require that a home have an adequate heat source for its size and be free of safety hazards. An inspector will check to see if a home has insulation and will estimate the remaining useful life of the home's heating, electrical and plumbing systems. Requirements for the Inspection of FHA Insured Mobile Homes – Before the agency endorses a loan for insurance, the property must meet HUD's minimum standards for health, safety and structural soundness. An FHA-approved appraiser must visit the site and inspect. FHA/VA Permanent Foundation Retrofits for Manufactured. – *Engineer inspection and certificate fee is separate and not included. This fee typically ranges from $300-500 depending on the engineer used for your location and differing lender requirements. Fha Interest Rate History Taking a look at the history behind today's calls for slavery reparations – Elizabeth Warren is more forceful, saying we must "confront the dark history of slavery and government-sanctioned. black. Maximum Loan-to-Value (LTV) Ratio for the FHA Mortgage. – If you plan to use an FHA loan to buy a house, you'll be limited to a certain loan-to-value ratio, or LTV. The maximum loan-to-value for the FHA mortgage insurance program is 96.5%, according to official HUD guidelines. Fha Loans Houston Houston Mortgage Lenders | Lone Star Financing – Houston Mortgage Lenders. Houston Texas has a lot of mortgage lenders, and we know you have a decision in your home loan shopping process.. If you are looking for a mortgage lender in Houston, we specialize in new home purchases, VA Home Loans and fha home loans. As a local Houston Mortgage.Fha Loan 580 Credit Score 580 – 640 Loan Programs (Low Credit Score Mortgages. 
– While most banks stop considering loan approvals at a below 640 credit score, at Cornerstone First, we have the ability to go to 580 on both VA and FHA Loans. That is correct! Our 580 – 640 loan programs can result in loan approvals with a 580 credit score, for purchase or refinance! Fha Maximum Loan Limits – Lake Water Real Estate – The maximum FHA loan limits are set each year by Congress. Below you will find the "base" FHA loan limits for the Pennsylvania counties. FHA loans that exceed $314,827 for a single family home or condominium are called jumbo fha loans. The FHA funding fee can be added to the listed amounts. Higher FHA Loan Limits for 2019 – Tip: If you want to find the FHA limit for your area, check out the FHA's Web site with updated 2019 FHA Loan Amounts for all of the counties. The FHA loan limits are based on housing prices for each. How Often Can You Get An Fha Loan FHA Loans – 17 Important Facts About FHA Loans | Zillow – FHA rates are the same and often lower than Conventional Conforming loans. Your lender can do rate comparisons based on your profile.. What are the basic qualifying rules for FHA loans? You can have a credit score as low as 580. Your total monthly housing obligation (mortgage payment, taxes.Qualify Fha Loans New Construction Fha Loans Credit Score For Usda Loan 2019 The USDA loan program also allows for no-money-down. Another factor that can affect how much home you can afford is your credit score, because that is a major factor in determining your interest.What Are Today'S Fha Mortgage Rates Current Mortgage Rates | Mortgage Rates Today | U.S. Bank – Review today's current mortgage rates. Our mortgage loan officers work to meet your lending needs with competitive products and services, convenient access to your accounts, and proven stability backed by industry-leading financial metrics.What Are Today'S Fha Mortgage Rates FHA Fixed-Rate Loans for Homebuyers and Homeowners – The most popular FHA home loan is the fixed-rate loan known as the 203(b). It often works well for first time home buyers.. fha fixed rate fha arm fha reverse mortgage condominium Loans Jumbo Loans. FHA News blog; credit qualifications FHA Requirements fha closing costs fair housing act.FHA Collections Guidelines To Qualify For FHA Home Loans – FHA Collections Guidelines To Qualify For FHA Home Loans. This BLOG On FHA Collections Guidelines To Qualify For FHA Home Loans Was UPDATED On September 19th, 2018. FHA Collections Guidelines For Home Buyers. FHA Collections Guidelines for mortgage loan borrowers were just released. 2019 FHA loan limits rise in most areas of the U.S. – The FHA announced its new mortgage limits for 2019, and they are higher. The lowest of 2019 FHA loan limits, which apply in much of the country, increased from 2018's $294,515 to $314,827. In. What Is the Maximum Loan Amount for an FHA Streamline. – The FHA does not have a maximum combined loan-to-value limit for streamline refinances. In the previous guidance, the calculation of the maximum loan amount would depend on whether the streamline refi has an appraisal or none. The current handbook notes that appraisals are not required on streamline refinances. How Often Can You Get An Fha Loan Fha Loans Qualification FCM – FHA Loans – First County Mortgage – These maximum loan amounts vary by county and state. fha loans feature low down-payments, lenient qualification guidelines and flexible credit requirements. FHA Modification – Bank of America – Eligibility. 
If the current market value of your house is less than the amount remaining on your loan, you may be able to sell your property in a short sale. The federal housing administration (fha) has a short sale option that provides a streamlined approval process and financial assistance to help you relocate. Westminster Home Mortgage Services Launched – Finding a mortgage to help fit your needs can make all the difference. programs and products including conventional conforming, FHA and VA loans, larger loan amounts, investment property. Home Advantage Down Payment Assistance Loan Program – FHA.com – The Washington State Housing Finance Commission offer homebuyers the Home Advantage Down Payment Assistance Loan Program, a second mortgage with a zero percent interest rate and payment deferred for 30 years that combines with the Home Advantage first mortgage. HUD would also be required to address "the financial viability" of FHA's troubled reverse-mortgage program and assess "the. Fha Loans Guidelines FHA Home Loan | PrimeLending – There are many FHA home loan options that may be right for you.. Buyers," that meet the program's eligibility guidelines, the FHA has waived its 3-year waiting.Fha Loan Limits Ca Credit Score For Usda Loan 2019 If you want a good deal on a home, here's the credit score you need – Credit scores signify your trustworthiness to financial institutions and can determine how easy, or how expensive, it is for you to get a mortgage. To determine your ability to pay, lenders look at.Mortgage Interest Rates Fha 30 Year Fixed 30 Year Fixed Mortgage Rates – Zillow – A 30-year fixed mortgage is a loan whose interest rate stays the same for the duration of the loan. For example, on a 30-year mortgage of $300,000 with a 20% down payment and an interest rate of 3.75%, the monthly payments would be about $1,111 (not including taxes and insurance).Fha Low income home loans credit Score For Usda Loan 2019 Best Mortgage Lenders of 2019 for Low Credit Score Borrowers. – The credit score needed for a mortgage depends on the type of loan. Government-backed loan programs – FHA, VA and USDA – generally have lower credit-score requirements than conventional mortgages.Lower Loan Limits for FHA & Conforming Loans – Buy a home now, if the new loan limits will stop you from buying the home you want. The mortgage market is set to take another hit. Come October 1, 2011, conforming and FHA loan limits. sales,".Fha Loans Qualification FCM – FHA Loans – First County Mortgage – These maximum loan amounts vary by county and state. fha loans feature low down-payments, lenient qualification guidelines and flexible credit requirements. FHA to stop insuring mortgages with PACE loans – which are far less comprehensive than that of traditional mortgage financing products," FHA continued. "FHA's involvement with accepting properties with PACE assessments may indirectly help to. The Department of Housing and Urban Development's decision to suspend the reduction of Federal Housing Administration mortgage insurance premiums didn't come as a shocker. FHA mortgage insurance.. What Are Today'S Fha Mortgage Rates Current Mortgage Rates & Home Loans | Zillow – Today's average mortgage rates Here are the latest average rates from multiple lenders who display rates on Zillow. These rates are based on a $300,000 home loan with 20% down and a 740+ credit score. Things to know when you need a loan for bad. – Bad credit can make it more difficult and more expensive to get loans. 
But you don't have to resort to riskier options, like payday lenders, when you need a loan. Remodeling? Refinancing With a 203(k) Loan Can Help – With an FHA 203(k) loan, you can roll the cost of home improvement projects into a single monthly mortgage payment by refinancing with one of two options: the limited 203(k) insured loan or the. Fha Office Near Me Find A Branch | PrimeLending – Home Loans |. – Find a PrimeLending Branch or Loan Officer near you. Our friendly lending professionals are waiting to help you get the home of your dreams.. Connect with a Loan Expert Choose a loan officer you'd like to work with. They can answer your questions and will be. 11 Best Manufactured Home Loans for Bad Credit Financing – On the plus side, you can use an easier-to-get FHA-backed loan to refinance a manufactured home, though, of course, individual requirements will be up to the lender. As with mortgages for a new purchase, you can comparison shop refinance loans to obtain multiple quotes and find the best deal. Can I get an FHA loan with a 550 credit score? – The Mortgage. – Hey Cinda, People can get FHA loans with 550 credit scores. Whether YOU can depends on the reason for your 550 score. Note that with a score under 580, you'll have to put at least ten percent down. Here is something most married folks do not realize when applying for. – FHA Loans require the lender pulls credit on the debt of the spouse. If you are not married and plan to get married and you know what the. What's the Lowest Credit Score I Need for Home Loan? – A 600-credit score isn't a high score, either, but that's generally considered the minimum credit score for an FHA-backed loan, Scott Sheldon, a senior loan officer with Sonoma County Mortgages, in Petaluma, California, said. Credit score for mortgage: Calculate what you need to get a. – The minimum FICO credit score for a conventional mortgage. A conventional mortgage is the most common type of home loan. This term refers to mortgages that meet the underwriting standards of. Fha Interest Rate History New Construction Fha loans credit score For Usda Loan 2019 The USDA loan program also allows for no-money-down. Another factor that can affect how much home you can afford is your credit score, because that is a major factor in determining your interest.What Are Today'S Fha Mortgage Rates Current Mortgage Rates | Mortgage Rates Today | U.S. Bank – Review today's current mortgage rates. Our mortgage loan officers work to meet your lending needs with competitive products and services, convenient access to your accounts, and proven stability backed by industry-leading financial metrics.fha loan rates | Bankrate | Call to lock in rate | 844-365-0498 – FHA Loan Rates A federal housing administration (FHA) loan is a popular choice for first-time buyers and people with a limited budget. Start by comparing the latest fha interest rates here. Can a Person Get a Mortgage With a 595 Credit Score? – Your monthly income, debt load and financial history affect your mortgage eligibility just as much as your credit score. Although 580 is the minimum credit score to qualify for an FHA low down payment. What Credit Score Do I Need for a Home Loan? – Here's an overview of the minimum credit score requirements for a home loan, and significantly above the minimum requirement. The average fha borrower only put 5% down and had a relatively. 
Minimum Credit Score For A Home Loan – Minimum Credit Score For A Home Loan – See if you can lower your monthly mortgage payment and save up money with refinancing, you should consider to do it.. With an FHA home loan lender bad credit, it can be the factor to approve a loan. Current Mortgage Rates Fha 30 Year Current Fha Refi Rates – Moving 2 Brevard – Rates on a 30-year fha-backed fixed-rate loan decreased According to the report, mortgage rates fell slightly in. made up one percent of the total refinance volume . In order to qualify for HARP, the current loantovalue ratio must be 80 percent. Quick Introduction to 30 Year Fixed Mortgages. The most popular mortgage in the U.S. is a 30-year fixed-rate loan. In fact, according to Freddie Mac, 90% of homebuyers opt for this type of home-purchase loan. 30-Year Fixed Mortgage Rates Fall; Current Rate is 3.60%, According to Zillow Mortgage Rate Ticker – SEATTLE, Aug. 29, 2017 (GLOBE NEWSWIRE) – The 30-year fixed mortgage rate on Zillow® Mortgages is currently 3.60 percent, down five basis points from this time last week. The 30-year fixed mortgage. Does Rocket Mortgage Do Fha Loans Quicken loans rocket mortgage review: full mortgage Approval. – And like other tech-based solutions, Quicken Loans Rocket Mortgage will only be successful if borrowers' answers are honest and accurate, much like a standard home loan application. At the end of the day, it's recommended that you still shop around as opposed to taking the easiest route to a mortgage. Fha Guidelines For Home Inspection FHA Home Inspection Checklist – biggerpockets.com – He was kind enough to forward me a full list of "minimum FHA property requirements," which essentially translates to the FHA home inspection checklist. While this list is by-no-means a formal checklist followed by FHA inspectors, it's a good set of guidelines. Abolish the 30-year fixed-rate mortgage! – Richard Shelby, asked a series of questions that critics of the 30-year fixed-rate mortgage have long been focused on. "What unintended consequences have been created by subsidizing the 30-year. Mortgage Rates for 30-Year Fixed U.S. Loans Increase to 4% – U.S. rates for 30-year mortgages climbed, as optimism that Europe's debt crisis will be contained pushed up yields for the Treasuries that guide home loans. The average rate for a 30-year fixed loan. Percents – Cool math Pre-Algebra Help Lessons – How to. – How to Convert a Decimal to a Percent. We just do the same thing as in the last lesson. only backwards! To convert a decimal to a percent: 1: Convert the decimal to a fraction (using the place value information). 2: Convert the fraction to a percent. 3/5 in percent form – coolconversion.com – 3/5 in percent form. convert from a fraction to percent. Here is the answer to the question: 3/5 in percent form or how to convert the fraction 3/5 to percent. Please, input values in this format: a b/c or b/c. Fraction to Percent Converter. Unaccompanied Alien Children and Family Units Are Flooding. – border security includes the ability to remove illegal aliens that the Department of Homeland Security (DHS) apprehends, otherwise we are stuck with a system that sanctions catch and release. Due to legal loopholes and court backlogs, even apprehended illegal aliens are released and become part of the temporary, illegal population of people that we cannot remove. How do you write 4 3/5 as a percent? 
| Socratic – 460%. Problem: write the mixed number $4\tfrac{3}{5}$ as a percent. Concept applied: converting a mixed number to an improper fraction, $a\tfrac{b}{c}=\frac{(a\times c)+b}{c}$, and converting a decimal to a percent: given a decimal $0.cd$, just multiply by 100 and you will get the percentage form, i.e. $0.cd\times 100 = cd\%$. Calculation: the mixed number $4\tfrac{3}{5}$ in improper form is $\frac{(4\times 5)+3}{5}=\frac{23}{5}$. What is 1 3/5 as a percent – coolconversion.com – Convert from a fraction to percent. Here is the answer to the question: What is 1 3/5 as a percent or how to convert the fraction 1 3/5 to percent. Please, input values in this format: a b/c or b/c. What is 2 3/5 as a percent – coolconversion.com – Convert from a fraction to percent. Here is the answer to the question: What is 2 3/5 as a percent or how to convert the fraction 2 3/5 to percent. Please, input values in this format: a b/c or b/c. 3rd Quarter Growth of 3.5 Percent is Good News, But Doesn't Reflect a. – The Bureau of Economic Analysis (BEA) today announced its initial estimate that the economy grew at a 3.5 percent annualized rate in the third. 3.5 as a percent – geteasysolution.com – 3.5 as a percent – solution and the full explanation with calculations. Below you can find the full step by step solution for your problem. We hope it will be very helpful. What is 3/5 as a percent? or What is 3/5 as a percentage? – When you enter 3/5 into the above formula, you get (3/5)*100 which calculates to: 60% Note: When Research Maniacs calculated 3/5 as a percent, we rounded the answers to nine digits after the decimal point if necessary. Fha Loans Guidelines FHA takes steps to relieve balance sheet stress for reverse mortgage issuers – That meant that issuers were floating hundreds or thousands of loans – and tens of thousands of dollars – on their books as they waited for the insurance to come through. Now, the FHA has taken steps.
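For reference, the fraction-to-percent conversions quoted in the snippets above all follow the same multiply-by-100 rule; a worked LaTeX restatement (editorial illustration, not part of the scraped text):

$$\frac{3}{5}\times 100\% = 60\%,\qquad 1\tfrac{3}{5}=\frac{8}{5}=1.6=160\%,\qquad 4\tfrac{3}{5}=\frac{23}{5}=4.6=460\%,\qquad 3.5\times 100\% = 350\%.$$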
CommonCrawl
Many biological environments, both intracellular and extracellular, are often crowded by large molecules or inert objects which can impede the motion of cells and molecules. It is therefore essential for us to develop appropriate mathematical tools which can reliably predict and quantify collective motion through crowded environments. Transport through crowded environments is often classified as anomalous, rather than classical, Fickian diffusion. Over the last 30 years many studies have sought to describe such transport processes using either a continuous time random walk or a fractional order differential equation. For both these models the transport is characterized by a parameter $\alpha$, where $\alpha=1$ is associated with Fickian diffusion and $\alpha<1$ is associated with anomalous subdiffusion. In this presentation we will consider the motion of a single agent migrating through a crowded environment that is populated by impenetrable, immobile obstacles and we estimate $\alpha$ using mean squared displacement data. These results will be compared with computer simulations mimicking the transport of a population of such agents through a similar crowded environment, and we match averaged agent density profiles to the solution of a related fractional order differential equation to obtain an alternative estimate of $\alpha$. I will examine the relationship between our estimate of $\alpha$ and the properties of the obstacle field for both a single agent and a population of agents; in both cases $\alpha$ decreases as the obstacle density increases, and the rate of decrease is greater for smaller obstacles. These very simple computer simulations suggest that it may be inappropriate to model transport through a crowded environment using widely reported approaches, including power laws to describe the mean squared displacement and fractional order differential equations to represent the averaged agent density profiles. More details can be found in Ellery, Simpson, McCue and Baker (2014), The Journal of Chemical Physics, 140, 054108.

I will talk about the geometric properties of conic problems and their interplay with ill-posedness and the performance of numerical methods. This includes some new results on the facial structure of general convex cones, preconditioning of feasibility problems and characterisations of ill-posed systems.

Title: Lost Spelunkers, Cops And Robbers and Is Someone Trying To Destroy My Network? What the three elements of the title have in common is the utility of using graph searching as a model. In this talk I shall discuss the relatively brief history of graph searching, several models currently being employed, several significant results, unsolved conjectures, and the vast expanse of unexplored territory.
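The first abstract above estimates the anomalous exponent $\alpha$ from mean squared displacement data; a minimal Python sketch of that kind of estimate (synthetic data, not the simulations described in the talk):

```python
import numpy as np

def estimate_alpha(t, msd):
    """Fit MSD ~ K * t**alpha by linear regression in log-log space; alpha = 1 is Fickian."""
    slope, log_K = np.polyfit(np.log(t), np.log(msd), 1)
    return slope, np.exp(log_K)

# Synthetic subdiffusive data with alpha = 0.8 (stand-in for simulated agent trajectories).
t = np.linspace(1.0, 100.0, 200)
msd = 0.5 * t ** 0.8
alpha, K = estimate_alpha(t, msd)
print(f"estimated alpha = {alpha:.3f}")   # ~0.8; crowding tends to push alpha below 1
```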
CommonCrawl
Paper summary (niz): Very efficient data augmentation method. Linearly interpolate training inputs x and labels y at random for every mini-batch:

```python
import numpy
from torch.autograd import Variable  # older PyTorch API, kept as in the original snippet

# net, optimizer, loss, alpha and the two loaders are assumed to be defined elsewhere.
# loader1 and loader2 iterate over the same training set in two independent random orders;
# y1 and y2 are assumed to be one-hot (soft) label vectors so that they can be mixed directly.
for (x1, y1), (x2, y2) in zip(loader1, loader2):
    lam = numpy.random.beta(alpha, alpha)       # mixing coefficient drawn from Beta(alpha, alpha)
    x = Variable(lam * x1 + (1. - lam) * x2)    # convex combination of two inputs
    y = Variable(lam * y1 + (1. - lam) * y2)    # the same combination of their labels
    optimizer.zero_grad()
    loss(net(x), y).backward()                  # the loss must accept soft targets
    optimizer.step()
```

- ERM (Empirical Risk Minimization) is recovered in the $\alpha \to 0$ limit of mixup, i.e. not using mixup.
- Reduces the memorization of corrupt labels.
- Increases robustness to adversarial examples.
- Stabilizes the training of GANs.
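A common practical variant of the snippet above, not part of the original summary, keeps integer class labels and interpolates the two losses instead of mixing one-hot targets (a sketch; `net`, `optimizer`, `criterion`, `alpha` and `loader` are assumed to exist, with `criterion` a standard classification loss such as cross-entropy):

```python
import numpy
import torch

for x, y in loader:
    lam = numpy.random.beta(alpha, alpha)
    index = torch.randperm(x.size(0))           # pair every sample with a random partner in the batch
    mixed_x = lam * x + (1.0 - lam) * x[index]
    optimizer.zero_grad()
    pred = net(mixed_x)
    # Interpolating the losses is equivalent to mixing one-hot targets when the loss is linear in the target.
    loss = lam * criterion(pred, y) + (1.0 - lam) * criterion(pred, y[index])
    loss.backward()
    optimizer.step()
```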
CommonCrawl
The numbers 1, 2, 3, … are called natural numbers or counting numbers. We can say that the whole numbers consist of zero and the natural numbers. Therefore, every whole number except zero is a natural number.

1) The smallest natural number is 1.
2) The number 0 is the first and the smallest whole number.
3) There are infinitely many whole numbers.
4) All natural numbers are whole numbers.
5) Not all whole numbers are natural numbers. For example, 0 is a whole number but it is not a natural number.

The successor of a whole number is the number obtained by adding 1 to it. Clearly, the successor of 1 is 2; the successor of 2 is 3; the successor of 3 is 4 and so on. The predecessor of a whole number is one less than the given number. Clearly, the predecessor of 1 is 0; the predecessor of 2 is 1; the predecessor of 3 is 2 and so on. The whole number 0 does not have any predecessor.

If a and b are any two whole numbers, then a+b and a×b are also whole numbers. If a, b and c are any three whole numbers, then (a+b)+c = a+(b+c) and (a×b)×c = a×(b×c). If a is any whole number, then $a + 0 = a = 0 + a$. If a is any whole number, then $a \times 0 = 0 = 0 \times a$.

Question 1 Which of the following is not defined?
Question 2 Find the value of 6536 × 91 + 9 × 6536?
C) 1 is the identity for multiplication of whole numbers. D) 1 is the identity for addition of whole numbers.
Question 5 The product of a non-zero whole number and its successor is always?
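A worked solution for Question 2, assuming the garbled symbols in the source were multiplication signs, uses the distributive property:

$$6536 \times 91 + 9 \times 6536 = 6536 \times (91 + 9) = 6536 \times 100 = 653600.$$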
CommonCrawl
Abstract: This paper presents two new quality measures for tetrahedra which are smooth and well-suited for gradient based optimization. Both measures are formulated as a distance from the regular tetrahedron and utilize the fact that the covariance of the vertices of a regular tetrahedron is isotropic. We use these measures to generate high quality meshes from signed distance maps. This paper also describes an approach for computing (smooth) signed distance maps from binary volumes, as volumetric data in many cases originate from segmentation of objects from imaging techniques such as CT, MRI, etc. The mesh generation is split into two stages: a candidate mesh generation stage and a compression stage, where the surface of the candidate mesh is moved to the zero iso-surface of the signed distance maps, while one of the quality measures ensures that the quality remains high. We apply the mesh generation algorithm on four examples (torus, Stanford dragon, brain mask, and pig back) and report the dihedral angle, aspect ratio and radius-edge ratio. Even though the algorithm incorporates none of the mentioned quality measures in the compression stage, it receives a good score for all these measures. The minimum dihedral angle is not smaller than 15° in any of the examples. Abstract: The aim of the present work is to define a calibration framework to estimate the relative orientation between a camera and an inertial orientation sensor AHRS (Attitude Heading Reference System). Many applications in computer vision and in mixed reality frequently work in cooperation with this class of inertial sensors in order to increase the accuracy and the reliability of their results. In this context the heterogeneous measurements must be represented in a unique common reference frame (rf.) in order to carry out joint processing. The basic framework is given by the estimation of the vertical direction, defined by a 3D vector expressed in the camera rf. as well as in the AHRS rf. In this paper a new approach has been adopted to retrieve this direction by using different geometrical entities which may be inferred from the analysis of single axis motion projective geometry. Their performance has been evaluated on simulated data as well as on real data. Abstract: Computer graphics and digital imaging have finally reached the goal of photorealism. This comes, however, with a huge cost in terms of memory and CPU needs. In this paper we present a lossless method for image compression using relative distances between pixel values belonging to separate and independent blocks. In our approach we try to reach a good balance between execution time and image compression rate. In a second step, considering the parallel characteristics of this algorithm (and the trend toward multi-core processors), a parallel version of the algorithm was implemented on the Nvidia CUDA architecture. Abstract: Image inpainting or image completion consists in filling in the missing data of an image in a visually plausible way. Many works on this subject have been proposed in recent years. They can mainly be decomposed into two groups: geometric methods and texture synthesis methods. Texture synthesis methods work best with images containing only textures while geometric approaches are limited to smooth images containing strong edges. In this paper, we first present an extended state of the art. Then a new algorithm dedicated to both types of images is introduced.
The basic idea is to decompose the original image into a structure and a texture image. Each of them is then filled in with some extensions of one of the best methods from the literature. A comparison with some existing methods on different natural images shows the strength of the proposed approach. Abstract: Basically, document image binarization consists on the segmentation of scanned gray level images into text and background, and is a basic preprocessing stage in many image analysis systems. It is essential to threshold the document image reliably in order to extract useful information and make further processing such as character recognition and feature extraction. The main difficulties arise when dealing with poor quality document images, containing nonuniform illumination, shadows and smudge, for example. This paper presents an efficient morphological-based document image binarization technique that is able to cope with these problems. We evaluate the proposed approach for different classes of images, such as historical and machine-printed documents, obtaining promising results. Abstract: We introduce a technique that allows 3D information to be captured from a conventional flatbed scanner. The technique requires no hardware modification and allows untrained users to easily capture 3D datasets. Once captured, these datasets can be used for interactive relighting and enhancement of surface detail on physical objects. We have also found that the method can be used to scan and repair damaged photographs. Since the only 3D structure on these photographs will typically be surface tears and creases, our method provides an accurate procedure for automatically detecting these flaws without any user intervention. Once detected, automatic techniques, such as infilling and texture synthesis, can be leveraged to seamlessly repair such damaged areas. We first present a method that is able to repair damaged photographs with minimal user interaction and then show how we can achieve similar results using a fully automatic process. MOIRÉ PATTERNS FROM A CCD CAMERA - Are They Annoying Artifacts or Can They be Useful? Abstract: When repetitive high frequency patterns appear in the view of a charge-coupled device (CCD) camera, annoying low frequency Moiré patterns are often observed. This paper demonstrates that such Moiré pattern can useful in measuring surface deformation and displacement. What is required, in our case, is that the surface in question is textured with appropriately aligned black and white line gratings and this surface is imaged using a grey scaled CCD camera. The characteristics of the observed Moiré patterns are described along with a spatial domain model-fitting algorithm that is able to extract a dense camera-to-surface displacement measures. The experimental results discuss the reconstruction of planar incline and curved surfaces using only a coarse 33 lines per inch line grating patterns printed from a 600 dpi printer. Abstract: Image metamorphosis process produces deformation sequence which transforms one input image into another one. The method described in the paper applies morphological approach to achieve this goal. It is based on morphological interpolation which makes use of the interpolation functions produced from geodesic distance functions. The described method allows applying this approach to graytone images via its 3D umbra. It produces 3D interpolation function. 
Its thresholding at given level followed by inverse umbra transform allows obtaining frame of the interpolated sequence. Abstract: The introduction of active (pan-tilt-zoom or PTZ) cameras in Smart Rooms in addition to fixed static cameras allows to improve resolution in volumetric reconstruction, adding the capability to track smaller objects with higher precision in actual 3D world coordinates. To accomplish this goal, precise camera calibration data should be available for any pan, tilt, and zoom settings of each PTZ camera. The PTZ calibration method proposed in this paper introduces a novel solution to the problem of computing extrinsic and intrinsic parameters for active cameras. We first determine the rotation center of the camera expressed under an arbitrary world coordinate origin. Then, we obtain an equation relating any rotation of the camera with the movement of the principal point to define extrinsic parameters for any value of pan and tilt. Once this position is determined, we compute how intrinsic parameters change as a function of zoom. We validate our method by evaluating the re-projection error and its stability for points inside and outside the calibration set. Abstract: Today High Definition (HD) for video contents is one of the biggest challenges in computer vision. The 1080i standard defines the minimum image resolution required to be classified as HD mode. At the same time bandwidth constraints and latency don't allow the transmission of uncompressed, high resolution images. Often lossy compression algorithms are involved in the process of providing HD video streams, because of their high compression rate capabilities. The main issue concerned to these methods, while processing frames, is that high frequencies components in the image are neither conserved nor reconstructed. Our approach uses a simple downsampling algorithm for compression, but a new, very accurate method for decompression which is capable of high frequencies restoration. Our solution Is also highly parallelizable and can be efficiently implemented on a commodity parallel computing architecture, such as GPU, obtaining extremely fast performances. Abstract: In this paper, a noise reduction technique is introduced based on the Gabor time-frequency transform. In the proposed approach, noise is removed using low pass filters locally in the transform domain. Finding the cut-off frequency for the low pass filters in such a way that image does not loose its features, is an important issue. The optimal cut-off frequency of the low pass filters are computed in an iterative method for each sub-block of the image. The followed approach, besides showing a good performance in removing noise, it also performs well in preserving image features. Abstract: We present techniques for the amplification of small contrast of bounded signals; one is based on gamma correction and another is of an unsharp-masking type; the one of the unsharp-masking type is suitably modified for its application on circular signals as well. We enhance the saturation and luminance components of high dynamic range images on the basis of a segmentation of the image into light and dark regions. Abstract: Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image contents. Characterization of the image content in terms of geometric structure and texture is an important problem that one is often faced with. 
We propose a patch based complexity measure, based on how well the patch can be approximated using singular value decomposition. As such the image complexity is determined by the complexity of the patches. The concept is demonstrated on sequences from the newly collected DIKU Multi-Scale image database. Abstract: Media content adaptation is the action of transforming media files to adapt to device capabilities, usually related to mobile devices that require special handling because of their limited computational power, small screen size and constrained keyboard functionality. Image retargeting is one of such adaptations, transforming an image into another with different size. Tools allowing the author to imagery once and automatically retarget that imagery for a variety of different display devices are therefore of great interest. The performance of these algorithms is directly related with the preservation of the most important regions and features of the image. In this work, we introduce an algorithm for automatically retargeting images. We explore and extend a recently proposed algorithm on the literature. The central contribution is the introduction of the stable paths for image resizing, improving both the computational performance and the overall quality of the resulting image. The experimental results confirm the potential of the proposed algorithm. Abstract: In this paper, we present a new example based approach to search for a particular product based on its visual properties. A user can take a photo of a product package with a cell-phone or webcam and submit it to an online shopping portal for finding the product details. We search a product image database for the distinctive visual features on the query image to locate the desired product. We use PCA-SIFT feature for robust retrieval, to account for possible imperfections in the query image due to uncontrolled user environment. We use Oracle Java R-Tree to index image features to realize a scalable system. We establish robustness and scalability of our approach by conducting several experiments on fairly large prototype implementations. Abstract: Within this paper, we present a hierarchical online image representation method with 3D camera position to efficiently summarize and classify the images on the web. The framework of our proposed hierarchical online image representation methodology is composed of multiple layers: at the lowest layer in the hierarchical structure, relationship between multiple images is represented by their recovered 3D camera parameters by automatic feature detection and matching. At the upper layers, images are classified using constrained agglomerative hierarchical image clustering techniques, in which the feature space established at the lowest layer consists of the camera's 3D position. Constrained agglomerative hierarchical online image clustering method is efficient to balance the hierarchical layers whether images in the cluster are many or not. Our proposed hierarchical online image representation method can be used to classify online images within large image repositories by their camera view position and orientation. It provides a convenient way to image browsing, navigating and categorizing of the online images that have various view points, illumination, and partial occlusion. Abstract: This paper presents two original applications related to discrete distance maps. 
Based on the relation linking inscribed convex sets and discrete distance maps, the first application is a spatially adaptive filtering method which is set up for both grey-level and color images. This spatially adaptive filter is really efficient in performances and computation time. Furthermore a new mean of computation for the Asplund distance as well as a method for determining the similarity degree between shapes are also presented. The similarity parameter enables a quantitative shape classification with respect to a set of reference shapes. Abstract: In this paper, we propose a method for representing intensity images of objects illuminated by near point light sources. Our image representation model is a linear model, and thus, the 3D shape of objects can be recovered linearly from intensity images taken from near point light sources. Since our method does not require the integration of surface normals to recover 3D shapes, the 3D shapes can be recovered, even if they are not smooth unlike the standard shape from shading methods. The experimental results support the efficinecy of the proposed method. Abstract: An approach based on affine transformations is applied to solve the problem of dewarping of scanned text images. The technique is script independent and does not make any assumptions about the nature of the text image or the nature of warping. The attendant problems of deskewing and deshadowing are also dealt with using a vertical projection technique and filtering technique respectively. Experiments were performed on scanned text images with varying font sizes, shapes and from various scripts with varying degrees of warp, skew and shadow. The proposed method was found to give good results on all the text images, thus demonstrating the effect of the approach. Abstract: Uncompressed multimedia data such as high resolution images, audio and video require a considerable storage capacity and transmission bandwidth on telecommunications systems. Despite of the development of the storage technology and the high performance of digital communication systems, the demand for huge files is higher than the available capacity. Moreover, the growth of image data in database applications needs more efficient ways to encode images. So image compression is more important than ever. One of the most used techniques is compression by wavelet, specified in the JPEG 2000 standard and recommended also for medical image DICOM database. This work seeks to investigate the wavelet image compression-denoising technique related to the wavelet family bases used (Haar, Daubechies, Biorthogonal, Coiflets and Symlets), database content and noise level. The target of the work is to define which combination present the best and the worst compression quality, through quality evaluation by quantitative functions: Root Mean Square Error (RMSE), Sign Noise Ratio (SNR) and Peak Sign Noise Ratio (PSNR). Abstract: In recent years considerable amount of researchers have been devoted to anisotropic diffusion method and achieved a series of important development. However, human visual system which perceived and interpreted images has been paid little attention to in all these models. In this paper, we define a visual gradient, which is looked as a generalization of the image gradient. After that we substitute the visual gradient for the image gradient in the anisotropic diffusion model to keep to some extent consistent with human visual system for the first time. 
Finally numerical results show the proposed method's performance. Abstract: The complexity of advanced robot vision systems calls for an architectural framework with great flexibility with regards to sensory, hardware, processing, and communications requirements. We are currently developing a system that uses time-of-flight and a regular video stream for mobile robot vision applications. We present an architectural framework based on YARP, and evaluate its efficiency. Overall, we have found YARP to be easy to use, and our experiments show that the overhead is a reasonable tradeoff for the convenience. Abstract: Bad weather, such as fog and haze, can significantly degrade the imaging quality, which becomes a major problem for many applications of computer vision. In this paper, we propose a novel color-preserving defog method based on the Retinex theory, using a single image as an input without user interactions. In the proposed method, we apply the Retinex theory to fog/haze removal form foggy/hazy images, and conceive a new strategy of fog/haze estimation. Experiment results demonstrate that the proposed method can not only remove fog or haze present in foggy or hazy images, but also restore real color of clear-day counterparts, without color distortion. Besides, the proposed method has very fast implementation. Abstract: This work aims to define and experimentally evaluate an adaptive strategy based on neural learning to select an appropriate regularization parameter within a regularized restoration process. The appropriate setting of the regularization parameter within the restoration process is a difficult task attempting to achieve an optimal balance between removing edge ringing effects and suppressing additive noise. In this context,in an attempt to overcome the limitations of trial and error and curve fitting procedures we propose the construction of the regularization parameter function through a training concept using a Multilayer Perceptron neural network. The proposed solution is conceived independent from a specific restoration algorithm and can be included within a general local restoration procedure. The proposed algorithm was experimentally evaluated and compared using test images with different levels of degradation. Results obtained proven the generalization capability of the method that can be applied successfully on heterogeneous images never seen during training. Abstract: The image simplification, noise elimination and edge enhancement steps are all fundamental to segmentation tasks. These processing techniques usually require the tuning of their control parameters; a procedure known to be incompatible with automatic segmentation. The aim of this paper is to adopt a procedure, based on nonlinear diffusion, that is capable of auto tuning by means of analytical expressions that relate diffusion times to the gradient module. The numerical method and experimental results are shown in 1D, 2D and 3D. Abstract: Omnidirectional vision sensors provide a large field of view for numerous technical applications. But the original images of these sensors are distorted, not simply interpretable and not easy to apply for normal image processing routines. So image transformation of original into panoramic images is necessary using various projections like cylindrical, spherical and conical projection, but which projection is best for a specific application? 
In this paper, we present a novel method to evaluate different projections regarding their applicability in a specific application using a novel variable, the pixel density. The pixel density allows to determine the resolution of a panoramic image depending on the chosen projection. To achieve the pixel density, first the camera model is determined based on the gathered calibration data. Secondly, a projection matrix is calculated to map each pixel of the original image into the chosen projection area for image transformation. The pixel density is calculated based on this projection matrix in a final step. Theory is verified and discussed in experiments with simulated and real image data. We also demonstrate that the common cylindrical projection is not always the best projection to rectify images from omnidirectional vision sensors. Abstract: This paper details a novel approach to automatically selecting images which improve camera calibration results. An algorithm is presented which identifies calibration images that inherently improve camera parameter estimates based on their geometric configuration or image network geometry. Analysing images in a more intuitive geometric framework allows image networks to be formed based on the relationship between their world to image homographies. Geometrically, it is equivalent to enforcing maximum independence between calibration images, this ensures accuracy and stability when solving the planar calibration equations. A webcam application using the proposed strategy is presented. This demonstrates that careful consideration of image network geometry, which has largely been neglected within the community, can yield more accurate parameter estimates with less images. Abstract: Eigenvectors from Standard Object Colour Spectra (SOCS) set were used with several other spectra sets to find the optimal sampling intervals for optimal number of eigenvectors. The sampling intervals were calculated for each eigenvector separately. The analysis was applied not only for different sets of reflectance spectra, but also for spectra sets under different real light sources and standard illuminations. It is shown that 20 nm sampling interval for eigenvectors from SOCS set can be used for reflectance data and data under such light sources which spectrum is smooth. However, data under peaky real fluorescent light sources and standard F-illuminant require accurate 5 nm or even narrower sampling interval for the first few eigenvectors, but can be wider with some of the others. These eigenvectors from SOCS set are shown to be applicable for the other data sets. The results give guidelines for the required accuracy of eigenvectors under different light sources that can be considered e.g. in eigenvector-based filter design. Abstract: In this paper we demonstrate the use of Hive as a novel basis for creating multi-sensor vision systems. Hive is a framework in which reusable modules called drones are defined and connected together to create larger systems. Drones are simple to implement, perform a specific task and using the powerful interface of Hive can be combined to create sophisticated vision pipelines. We present a set of drones defined within Hive and a suite of applications built using these drones which utilize the input from multiple cameras and a variety of sensors. Results demonstrate the flexibility of approaches possible with Hive as well as the real-time performance of the Hive applications. 
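As a rough illustration of the eigenvector/sampling analysis in the SOCS abstract above, here is a small numpy sketch on synthetic smooth reflectance spectra (the 400–700 nm grid, the Gaussian-bump spectra and the choice of six eigenvectors are all assumptions; the actual SOCS data are not used):

```python
import numpy as np

rng = np.random.default_rng(0)
wl5 = np.arange(400, 701, 5)                             # 5 nm sampling grid, 400-700 nm

# Synthetic smooth "reflectance" spectra: random mixtures of broad Gaussian bumps.
centers = rng.uniform(420, 680, size=(500, 3))
spectra = np.stack([sum(np.exp(-0.5 * ((wl5 - c) / 40.0) ** 2) for c in row)
                    for row in centers])

# Eigenvectors of the spectra set via SVD (no mean removal, as is common for reflectances).
_, _, Vt = np.linalg.svd(spectra, full_matrices=False)
basis5 = Vt[:6]                                          # first six eigenvectors at 5 nm

# Subsample the eigenvectors to a 20 nm grid and measure the reconstruction error there.
idx20 = np.arange(0, wl5.size, 4)
basis20 = basis5[:, idx20]
coeff = spectra[:, idx20] @ np.linalg.pinv(basis20)
rms = np.sqrt(np.mean((spectra[:, idx20] - coeff @ basis20) ** 2))
print(f"RMS reconstruction error with 6 eigenvectors at 20 nm sampling: {rms:.4f}")
```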
Abstract: In this paper, we propose a method to create automatically multi-layered contents from real world scene based on Depth from Focus and Spatio-Temporal Image Analysis. Since the contents are generated by layer representation directly from real world, the change of point of view is able to freely and it reduces the labor and cost of creating three-dimensional (3-D) contents using Computer Graphics. To extraction layer in the real images, Depth from Focus is used in case of stationary objects and Spatio-Temporal Image Analysis is used in case of moving objects. We selected above two methods, because of stability of system. Depth from Focus method doesn't need to search correspondence point and Spatio-Temporal Image Analysis has also simple computing algorithm relatively. We performed an experiment to extract layer contents from stationary and moving object automatically and the feasibility of the method was confirmed. Abstract: In this paper we introduce the representation theory of the symmetric group~$ SPG$ as a tool to investigate the structure of the space of $RGB$-histograms. We show that the theory reveals that typical histogram spaces are highly structured and that these structures originate partly in group theoretically defined symmetries. The algorithms exploit this structure and constructs a PCA like decomposition without the need to construct correlation or covariance matrices and their eigenvectors. We implemented these algorithms and investigate their properties with the help of two real-world databases (one from an image provider and one from a image search engine company) containing over one million images. Abstract: In this paper we describe an automated processing of plant serial section data for high-resolution 3-D models of internal structures. The processing pipeline includes standardization and registration of large image stacks as well as multiple tissue recognition by a joint registration-segmentation approach. By integrating segmented data from multiple individuals in a common reference, a statistical three-dimensional description is used to represent the inherent biodiversity amongst specimen. Inter-individual 3-D models are a novelty in the context of plant microscopy, and along with meaningful visualisation they deliver new insights into growth and development as well as provide a framework for the integration of functional data. Abstract: The Active Appearance Model (AAM) is a widely used method for model based vision showing excellent results. But one major drawback is that the method is not robust against occlusions. Thus, if parts of the image are occluded the method converges to local minima and the obtained results are unreliable. To overcome this problem we propose a robust AAM fitting strategy. The main idea is to apply a robust PCA model to reconstruct the missing feature information and to use the thus obtained image as input for the standard AAM fitting process. Since existing methods for robust PCA reconstruction are computationally too expensive for real-time processing we developed a more efficient method: fast robust PCA (FR-PCA). In fact, by using our FR-PCA the computational effort is drastically reduced. Moreover, more accurate reconstructions are obtained. In the experiments, we evaluated both, the fast robust PCA model on the publicly available ALOI database and the whole robust AAM fitting chain on facial images. 
The results clearly show the benefits of our approach in terms of accuracy and speed when processing disturbed data (i.e., images containing occlusions). Abstract: Fitting of conics to a set of points is a well researched area and is used in many fields of science and engineering. Least squares methods are one of the most popular techniques available for conic fitting and among these, orthogonal distance fitting has been acknowledged as the 'best' least squares method. Although the accuracy of orthogonal distance fitting is unarguably superior, the problem so far has been in finding the orthogonal distance between a point and a general conic. This has lead to the development of conic specific algorithms which take the characteristics of the type of conic as additional constraints, or in the case of a general conic, the use of an unstable closed form solution or a non-linear iterative procedure. Using conic specific constraints produce inaccurate fits if the data does not correspond to the type of conic being fitted and in iterative solutions too, the accuracy is compromised. The method discussed in this paper aims at overcoming all these problems, in introducing a direct calculation of the orthogonal distance, thereby eliminating the need for conic specific information and iterative solutions. We use the orthogonal distances in a fitting algorithm that identifies which type of conic best fits the data. We then show that this algorithm requires less accurate initializations, uses simpler calculations and produces more accurate results. Abstract: We present a framework for automatic inspection of welding seams based on specular reflections. Therefore, we introduce a novel feature set -- called specularity features (SPECs) -- describing statistical properties of specular reflections. For classification we use a one-class support-vector approach. The SPECs significantly outperform statistical geometric features and raw pixel intensities, since they capture more complex characteristics and depencies of shape and geometry.We obtain an error rate of 9%, which corresponds to the level of human performance. Abstract: Physicians may treat an aneurysm by injecting coils through a catheter into the aneurysm, or by anchoring a stent as a flow diverter. Since such an intervention is risky, a patient is only treated when the probability of aneurysm rupture is relatively high. Hemodynamic properties of aneurysmal blood flow, extracted by computational fluid dynamics calculations, are hypothesized to be relevant for predicting this rupture. Since hemodynamics simulations require a closed vessel section with defined inflow and outflow points, and since the user can easily overlook small side branches, we have developed an algorithm for fully-automatic geometry closure of an open vessel section. Since X-ray based flow returns an indication for the needed length to have a developed flow inside the geometry, we have also developed an algorithm to create a geometry closure around an aneurysm based on a length criterion. After both geometry closure algorithms were tested elaborately, practicability of the hemodynamics workstation is currently being tested. Abstract: Hierarchical data structures such as irregular pyramids are used by many applications related to image processing and segmentation. The construction scheme of such pyramids is bottom-up. Such a scheme forbids the definition of a level according to more global information defined at upper levels in the hierarchy. 
Moreover, the base of the pyramid has to encode any single pixel of the initial image in order to allow the definition of regions of any shape at higher levels. This last constraint raises major issues of memory usage and processing costs when irregular pyramids are applied to large images. The objective of this paper is to define a top-down construction scheme for irregular pyramids. Each level of such a pyramid is encoded by a combinatorial map associated to an explicit encoding of the geometry and the inclusion relationships of the corresponding partition. The resulting structure is a stack of finer and finer partitions obtained by successive splitting operations and is called a top-down pyramid. Abstract: Automatic visual inspection of wire ropes is an important but challenging task. Anomalies in wire ropes usually are unobtrusive and their detection is a difficult job. Certainly, a reliable anomaly detection is essential to assure the safety of the ropes. A one-class classification approach for the automatic detection of anomalies in wire ropes is presented. Different well-established features from the field of textural defect detection are compared to context-sensitive features extracted by linear prediction. They are used to learn a Gaussian mixture model which represents the faultless rope structure. Outliers are regarded as anomaly. To evaluate the robustness of the method, a training set containing intentionally added, defective samples is used. The generalization ability of the learned model, which is important for practical life, is exploited by testing the model on different data sets from identically constructed ropes. All experiments were performed on real-life rope data. The results prove a high generalization ability, as well as a good robustness to outliers in the training set. The presented approach can exclude up to 90 percent of the rope as faultless without missing one single defect. Abstract: Image registration is becoming an increasingly important tool in medical image analysis, and the need to understand deformations within and between subjects often requires analysis of obtained deformation fields. The current paper presents a novel representation of the deformation field based on the Helmholtz decomposition of vector fields. The two decomposed potential fields form a curl free field and a divergence free field. The representation has already proven its worth in fluid modelling and electrostatics, and we show it also has desirable features in image registration and morphometry in particular. The potentials are shown to a offer decoupling of the two potential fields in both elastic and fluid image registration. For morphometry applications, we show that when decomposing the deformation field in symmetric and antisymmetric parts, the vector potential alone describes the vorticity, and the scalar gradient potential gives a first-order approximation to the determinant of the Jacobian. We provide some insight into the behavior of curl and divergence representation of the warp field by constructed examples and by a demonstration on real medical image data. Our theoretical findings are readily observable in our empirical experiment, which further illustrates the benefit of the parametrization. Abstract: The watershed transform is a well-known approach for image segmentation. 
Watershed from markers and hierarchical watershed are derived from the watershed transform and are suitable for interactive image segmentation: in the former, the user can edit markers and control the segmentation result; in the latter, the user can select an image partition from a nested set of partitions. We investigate and propose ways to transition from one approach to other. Such transitions can be used to integrate both approaches in such a way that allow us to make full use of the strengths of both. We present examples that illustrate the use of the proposed transitions in conjunction with several interaction possibilities from both approaches. Abstract: We introduce a framework for analyzing symmetry of 2D and 3D objects using elastic deformations of their boundaries. The basic idea is to define spaces of elastic shapes and to compute shortest (geodesic) paths between the objects and their reflections using a Riemannian structure. Elastic matching, based on optimal (nonlinear) re-parameterizations of curves, provides a better registration of points across shapes, as compared to the previously-used linear registrations. A crucial step of orientation alignment, akin to finding planes of symmetry, is performed as a search for shortest geodesic paths. This framework is fully automatic and provides: a measure of asymmetry, the nearest symmetric shape, the optimal deformation to make an object symmetric, and the plane of symmetry for a given object. Abstract: The extraction of printed designs and woven patterns from textiles is formulated as a pixel labelling problem. Algorithms based on Markov random field (MRF) optimisation and reestimation are described and evaluated on images from an historical fabric archive. A method for quantitative evaluation is presented and used to compare the performance of MRF models optimised using $\alpha-$expansion and iterated conditional modes, both with and without parameter reestimation. Results are promising for potential application to content-based indexing and browsing. Abstract: Automatic Text/symbols retrieval in graphical documents (map, engineering drawing) involves many challenges because they are not usually parallel to each other. They are multi-oriented and curve in nature to annotate the graphical curve lines and hence follow a curvi-linear way too. Sometimes, text and symbols frequently touch/overlap with graphical components (river, street, border line) which enhances the problem. For OCR of such documents we need to extract individual text lines and their corresponding words/characters. In this paper, we propose a methodology to extract individual text lines and an approach for recognition of the extracted text characters from such complex graphical documents. The methodology is based on the foreground and background information of the text components. To take care of background information, water reservoir concept and convex hull have been used. For recognition of multi-font, multi-scale and multi-oriented characters, Support Vector Machine (SVM) based classifier is applied. Circular ring and convex hull have been used along with angular information of the contour pixels of the characters to make the feature rotation and scale invariant. Abstract: In this paper, we propose a novel level set based active contour model to segment textured images. The proposed methods is based on the assumption that local histograms of filtering responses between foreground and background regions are statistically separable. 
In order to be able to handle texture non-uniformities, which often occur in real world images, we use rotation invariant filtering features and local spectral histograms as image feature to drive the snake segmentation. Automatic histogram bin size selection is carried out so that its underlying distribution can be best represented. Experimental results on both synthetic and real data show promising results and significant improvements compared to direct modeling of filtering responses. Abstract: We present a novel algorithm for detection of sky areas in outdoor color images. In contrast to sky detectors in literature that detect only blue, cloudless sky we intend to detect all sorts of sky, i.e. blue, clouded and partially clouded sky. Our approach is based on the analysis of color, position, and shape properties of color homogeneous spatially connected regions detected by the CSC. An evaluation on a set of images acquired under different weather conditions proves the quality of the proposed system. Abstract: A common approach for traffic sign detection and recognition algorithms is to use shape based and in addition color features. Especially to distinguish between speed-limit and end-of-speed-limit-signs the usage of color information can be helpful as the outer border of speed-signs is in a forceful red. In this paper the focus is faced on color features of speed-limit and no-overtaking signs. The apparent color in the captured image is varying very much due to illumination conditions, sign surface condition and viewing angle. Therefore the color distribution in the HSV color space of a sufficient amount of signs at different illumination conditions and aging has been collected, examined, and a matching mathematical model is developed to describe the subregion in the according color space. Once the color region of traffic signs is known, two kinds of traffic sign segmentation algorithms are developed and evaluated with the explicit focus only on color features to preselect subregions in the image where (red bordered) traffic signs are likely to be. Abstract: In this paper we present a new method to group self-similar SIFT features in images. The aim is to automatically build groups of all SIFT features with the same semantics in an image. To achieve this a new distance between SIFT feature vectors taking into account their orientation and scale is introduced. The methods are presented in the context of recognition of buildings. A first evaluation shows promising results. Abstract: A method for gauging the appropriate scale for foreground-background discrimination in Scale-Space theory is presented. Otsu's Threshold (OT) is a statistical parameter generated from the first two moments of a histogram of a signal / image. In the current work a set of OT is derived from histograms of derivatives of image having Scale-Space representation. This set of OT, when plotted against corresponding scale, generates a Threshold Graph (TG). The TG undergoes an exponential decay, in the absence of foreground and exhibits inflection(s) in the presence of foreground. It is demonstrated, using synthetic and natural images, that the maxima of inflection indicate the scale and threshold (OT) appropriate to interface edges. The edges identified by thresholding at scale and threshold given by inflection of OT correspond to foreground-background interface edges. 
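A small sketch of the threshold-graph idea described above: Otsu's threshold is computed on derivative images across a range of Gaussian scales, and the resulting curve is inspected for inflections. The choice of scales and of the gradient-magnitude operator are assumptions, not the authors' exact setup.

```python
# Sketch: Otsu threshold computed across a Gaussian scale-space ("threshold graph").
# Simplified reading of the abstract above; the scales and the use of the gradient
# magnitude as the derivative image are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from skimage import data
from skimage.filters import threshold_otsu

image = data.camera().astype(float)
scales = [1, 2, 4, 8, 16]

threshold_graph = []
for sigma in scales:
    smoothed = gaussian_gradient_magnitude(image, sigma=sigma)  # derivative at scale sigma
    threshold_graph.append(threshold_otsu(smoothed))

for sigma, t in zip(scales, threshold_graph):
    print(f"sigma={sigma:2d}  Otsu threshold={t:.2f}")
# Inflections in this curve (rather than a smooth exponential decay) would indicate
# a scale at which foreground-background interface edges become prominent.
```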
The histogram inherently imbeds the TG with underlying image signal parameters like background intensity range, pattern frequency, foreground-background intensity gradient, foreground size etc, making the method adaptable and deployable for unsupervised machine vision applications. Commutative, separable and symmetric properties of the Scale-Space representation of an image and its derivatives are preserved and computationally efficient implementations are available. Abstract: Marked watershed transform can be seen as a classification in which connected pixels are grouped into components included into the marks catchment basins.The weakened classifier assembly paradigm has shown its ability to give better results than its best member, while generalization and robustness to the noise present in the dataset is increased. We promote in this paper the use of the weakened watershed assembly for remote sensed image segmentation followed by a consensus (vote) of the segmentation results. This approach allows to, but is not restricted to, introduce previously existing borders (e.g. for the map update) in order to constraint the segmentation. We show how the method parameters influence the resulting segmentation and what are the choices the practitioner can make with respect to his problem. A validation of the obtained segmentation is done by comparing with a manual segmentation of the image. Abstract: This paper describes the methods of construction and the main characteristics of a solid texture database freely available for texture classification experiment. Here the purpose is to propose a solid texture database with many classes of different solid textures to allow an evaluation of properties and performance of analysis methods. Each images is described by a xml file made according to a DTD which is available in our web site. Using this formalism, it is even possible for a researcher to propose his own images or creation methods to complete this solid texture database. At last we discuss about different ways to exploit the database by reviewing some evaluation methods used to evaluate performance of classification and segmentation algorithms. Abstract: Vehicle Make and Model Recognition (Vehicle MMR) systems that are capable of improving the trustworthiness of automatic number plate recognitions systems have received attention of the research community in the recent past. Out of a number of algorithms that have been proposed in literature the use of Scale Invariant Feature Transforms (SIFT) in particular have been able to demonstrate the ability to perform vehicle MMR, invariant to scale, rotation, translation, which forms typical challenges of the application domain. In this paper we propose a novel approach to SIFT based vehicle MMR in which SIFT features are initially investigated for their relevance in representing the uniqueness of the make and model of a given vehicle class based on Adaptive Boosting. We provide experimental results to show that the proposed selection of SIFT features significantly reduces the computational cost associated with classification at negligible loss of the system accuracy. We further prove that the use of more appropriate feature matching algorithms enable significant gains in the accuracy of the algorithm. Experimental results prove that a 91% accuracy rate has been achieved on a publically available database of car frontal views. 
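As a baseline related to the SIFT-based vehicle make-and-model abstract above, a generic OpenCV sketch of SIFT extraction and matching with Lowe's ratio test follows. It does not include the AdaBoost-based feature selection the authors propose, the second image is just a rotated copy for illustration, and an OpenCV build with SIFT available is assumed.

```python
# Generic SIFT extraction and matching with Lowe's ratio test (OpenCV >= 4.4 assumed).
# Baseline sketch only; it omits the boosted feature selection described in the abstract.
import cv2
from skimage import data

img1 = data.camera()                              # sample 8-bit grayscale image
img2 = cv2.rotate(img1, cv2.ROTATE_90_CLOCKWISE)  # a transformed view to match against

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative matches out of {len(knn_matches)}")
```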
Abstract: In this paper, we present a method for segmenting phase contrast volume images of fibrous materials into fibre and background. The method is based on graph cut segmentation, and is tested on high resolution X-ray microtomography volume images of wood fibres in paper an composites. The new method produces better results than a standard method based on edge-preserving smoothing and hysteresis thresholding. The most important improvement is that the proposed method handles thick and collapsed fibres more accurately than previous methods. Abstract: This paper presents a method based on level sets to segment the liver using Computer Tomography (CT) images. Initially, the liver boundary is manually set in one slice as an initial solution, and then the method automatically segments the liver in all other slices, sequentially. In each step of iteration it fits a Gaussian curve to the liver histogram to model the speed image in which the level sets propagates. The parameters of our method were estimated using Genetic Algorithms (GA) and a database of reference segmentations. The method was tested using 20 different exams and five different measures of performance, and the results obtained confirm the potential of the method. The cases in which the method presented a poor performance are also discussed in order to instigate further research. Abstract: The core idea of this study is to build an algorithm that functions to compress video sequences. The mode value at every pixel along the temporal direction is calculated. If the frequency of the mode value satisfies a predetermined frequency, then the intensity values for entire entries at that particular pixel position will be changed to the mode value. The wavelet techniques will be applied to the pixels that do not satisfy the predetermined frequency and followed by a polynomial fitting method. For the purpose of compression, only the polynomial coefficients for pixels that do not satisfy the predetermined frequency, the mode values for pixels that satisfy the predetermined frequency and the corresponding pixel positions will be stored. To decompress, wavelet coefficients are estimated by the respective polynomials. The intensity values at the intended pixel position are obtained by inverse wavelet transform for pixels that do not satisfy the predetermined frequency. On the other hand, the stored mode values will be used to represent the intensity values throughout the time interval. This method portrays a prospect to achieve an acceptable decompressed video quality and compression ratio. Abstract: Region growing is one of the most popular image segmentation methods. The concept of region growing is easily understandable but sometimes criticized for its lack of theorical background. In order to overcome this weakness, we propose to describe region growing in a new framework which is the variational approach. A variational approach is commonly used in image segmentation methods such as active contours or level sets, but is quite original in the context of region growing. We call this method Variational Region Growing. First, we define a region-based criterion. A discrete derivation is applied to this criterion in order to get an evolution rule for the evolving region. The aim of this equation is to guide the evolving region towards a minimum of the criterion. Then, we formalize the iterative process of region growing in the proposed framework. Furthermore, we highlight the relevance of VRG for integrating shape prior. 
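For contrast with the variational formulation described in the region-growing abstract above, a sketch of classical seeded region growing with a simple intensity criterion is shown below. It is plain region growing, not VRG itself, and the seed position and tolerance are arbitrary choices.

```python
# Classical seeded region growing (intensity-similarity criterion).
# Shown only as background for the Variational Region Growing discussion above;
# it is NOT the variational method itself. Seed position and tolerance are arbitrary.
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    stays within `tol` of the running region mean."""
    h, w = image.shape
    visited = np.zeros((h, w), dtype=bool)
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    total, count = float(image[seed]), 1

    while queue:
        y, x = queue.popleft()
        region[y, x] = True
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                visited[ny, nx] = True
                if abs(float(image[ny, nx]) - total / count) <= tol:
                    queue.append((ny, nx))
                    total += float(image[ny, nx])
                    count += 1
    return region

# Toy example: a bright square on a dark background.
img = np.zeros((64, 64)); img[20:40, 20:40] = 100.0
mask = region_grow(img, seed=(30, 30), tol=5.0)
print("region size:", mask.sum())   # expected: 400 pixels
```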
We apply VRG to synthetic and 3D-biomedical images. Results illustrate the improvements of VRG compared to classical methods. Abstract: In this paper we propose to analyze the variability of brain structures using principal component analysis (PCA). We rely on a data base of registered and segmented 3D MRI images of normal subjects. We propose to use as input of PCA sampled points on the surface of the considered objects, selected using uniformity criteria or based on mean and Gaussian curvatures. Results are shown on the lateral ventricles. The main variation tendencies are observed in the orthogonal eigenvector space. Dimensionality reduction can be achieved and the variability of each landmark point is accurately described using the first three components. Abstract: In this work a sampling scheme for filter-based feature extraction in the field of appearance-based object detection is analyzed. Optimized sampling radically reduces the number of features during the AdaBoost training process and better classification performance is achieved. The signal energy is used to determine an appropriate sampling resolution which then is used to determine the positions at which the features are calculated. The advantage is that these positions are distributed according to the signal properties of the training images. The approach is verified using an AdaBoost algorithm with Haar-like features for vehicle detection. Tests of classifiers, trained with different resolutions and a sampling scheme, are performed and the results are presented. Abstract: Ear segmentation is considered as the first step of all ear biometrics systems while the objective in separating the ear from its surrounding backgrounds is to improve the capability of automatic systems used for ear recognition. To meet this objective in the context of ear biometrics a new automatic algorithm based on topographic labels is presented here. The proposed algorithm contains four stages. First we extract topographic labels from the ear image. Then using the map of regions for three topographic labels namely, ridge, convex hill and convex saddle hill we build a composed set of labels. The thresholding on this labelled image provides a connected component with the maximum number of pixels which represents the outer boundary of the ear. As well as addressing faster implementation and brightness insensitivity, the technique is also validated by performing completely successful ear segmentation tested on "USTB" database which contains 308 profile view images of the ear and its surrounding backgrounds. Abstract: The descriptors used for image indexing - e.g. Scale Invariant Feature Transform (SIFT) - are generally parameterized in very high dimensional spaces which guarantee the invariance on different light conditions, orientation and scale. The number of dimensions limit the performance of search techniques in terms of computational speed. That is why dimension reduction of descriptors is playing an important role in real life applications. In the paper we present a modified version of the most popular algorithm, SIFT. The motivation was to speed up searching on large feature databases in video surveillance systems. Our method is based on the standard SIFT algorithm using a structural property: the local maxima of these high dimensional descriptors. The weighted local positions will be aligned with a dynamic programming algorithm (DTW) and its error is calculated as a new kind of measure between descriptors. 
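A textbook sketch of dynamic time warping, the alignment step mentioned just above, applied to two made-up sequences of local-maximum positions; it is not the authors' exact weighted distance.

```python
# Minimal dynamic time warping (DTW) between two 1-D sequences, e.g. sequences of
# local-maximum positions of two descriptors as in the abstract above.
# Textbook DTW sketch; the input sequences are made up for illustration.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed predecessor alignments
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

seq1 = np.array([3.0, 17.0, 42.0, 80.0, 101.0])   # hypothetical local-maximum positions
seq2 = np.array([2.0, 18.0, 45.0, 78.0, 100.0, 120.0])
print("DTW distance:", dtw_distance(seq1, seq2))
```

The plain formulation costs O(nm) time and memory; band constraints are commonly added when the sequences are long.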
In our approach we do not use a training set, pre-computed statistics or any parameters when finding the matches, which is very important for an online video indexing application. Abstract: This paper addresses image annotation, i.e. labelling pixels of an image with a class among a finite set of predefined classes. We propose a new method which extracts a sample of subwindows from a set of annotated images in order to train a subwindow annotation model by using the extremely randomized trees ensemble method appropriately extended to handle high-dimensional output spaces. The annotation of a pixel of an unseen image is done by aggregating the annotations of its subwindows containing this pixel. The proposed method is compared to a more basic approach predicting the class of a pixel from a single window centered on that pixel and to other state-of-the-art image annotation methods. In terms of accuracy, the proposed method significantly outperforms the basic method and shows good performances with respect to the state-of-the-art, while being more generic, conceptually simpler, and of higher computational efficiency than these latter. Abstract: In this paper we present a novel method to obtain regions of interest in color images. The strategy consists in the evaluation of the stability of a region according to its properties of color and spatial arrangement. We propose a fusion of the classical color image segmentation with the space scale analysis. An image can be decomposed in a set of regions that describe the whole image content. Using a set of manual labelled images we have evaluated the properties of the detector according to the human perception. The proposed region detector has a potential application in the field of the content based image retrieval by sketch. Abstract: We present a novel algorithm for the segmentation of bony tissues in MR images. Our approach is based on the level set algorithm. We introduce some pre-processing phases that improve image quality and segmentation performance. The technique requires no training and operates semi-automatically, requiring only the entry of a single seed point within the tissue to be segmented. The proposed approach is more robust than the other approaches present in the literature, with respect to the position of the initial seed point. The quantitative analysis of the results on a significant number of images demonstrate the effectiveness of our approach. Abstract: This paper presents a new approach for broadcast soccer video navigation and summarization based on specific representative images of the video. It also takes into account some soccer video features to better describe these videos. This work considers a special color reduction based on an HSV subquantization and a shot classification approach for soccer videos by exploring the dominant color related to the playground area. Abstract: Image Segmentation has been used by many approaches and techniques in artificial vision but none of them has been proved to be applied completely successfully for any image or object type. We propose in this paper a segmentation approach based on level sets which incorporate low scale cooperative analysis of both image and curve. The image at a low resolution level provides information on coarse variation of grey level intensity. For the same perspective, the curve at a low resolution scale provides a coarser curvature value. The purpose of image scale cooperative approach is to avoid stopping the curve evolution at local minima of images. 
This method is tested on a sample of a 2D abdomen image, and can be applied on other image types. The results obtained are satisfying and show good precision of the method. Abstract: Image segmentation is a broad area, which covers strategies for splitting one input image into its components. This paper aims to present a re-segmentation approach applied to urban imagery, where the interest elements (houses roofs) are considered to have a rectangular shape. Our technique finds and generates rectangular objects, leaving the remaining objects as background. With an over-segmented image we connect adjacent objects in a graph structure, known as Region Adjacency Graph - RAG. We then go into the graph, searching for best cuts that may result in segments more rectangular, in a relaxation-like approach. Graph search considers information about object class, through a pre-classification stage using Self-Organizing Maps algorithm. Results show that the method was able to find rectangular elements, according user-defined parameters, such as maximum levels of graph searching and minimum degree of rectangularity for interest objects. Abstract: A new approach to shape comparison problem is presented in this work. The approach is based on skeleton isomorphism. We propose a shape metrics construction instrument which is based on finding close shapes having isomorphic continuous skeletons. We propose several metrics based on this instrument that can be used for shape comparison. The main advantage over existing approaches is mathematically correctly defined shape metrics via Hausdorff distance. The efficiency of the proposed approach is confirmed on the shapes recognition problem. Abstract: Breast tissue microarrays (TMAs) facilitate the study of very large numbers of breast tumours in a single histological section, but their scoring by pathologists is time consuming, typically highly quantised, and not without error. This paper compares the results of different classification and ordinal regression algorithms trained to predict the scores of immunostained breast TMA spots, based on spot features obtained in previous work by the authors. Despite certain theoretical advantages, Gaussian process ordinal regression failed to achieve any clear performance gain over classification using a multi-layer perceptron. The use of the entropy of the posterior probability distribution over class labels for avoiding uncertain decisions is demonstrated. Abstract: We describe a system that uses image processing and computer vision techniques to discover and recognize mathematical, logical, geometric, and other structures and symbols from bit-map images. The system uses a modular architecture to allow easy incorporation of new kinds of object recognizers. The systems uses a ``blackboard'' data-structure to retain the list of objects that have been recognized. Particular object recognizers check this list to discover new objects. Initially, objects are simple pixel clusters resulting from image-processing and segmentation operations. First-level object recognizers include symbol/character recognizers and basic geometric elements. Higher-level object recognizers collect lower-level objects and build more complex objects. This includes mathematical-logical expressions, and complex geometric elements such as polylines, graphs, and others. The recognized objects and structures can be exported to a variety of vector graphic languages and type-setting systems, such as SVG and LaTeX. 
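A schematic sketch of the blackboard-style recognition loop described in the preceding abstract: recognizers inspect a shared object list and post new, higher-level objects back onto it. The object types and recognizer logic here are hypothetical stand-ins, not the actual system.

```python
# Schematic "blackboard" recognition loop: recognizers read the shared object list
# and append higher-level objects. All object kinds and rules below are placeholders.
from dataclasses import dataclass, field

@dataclass
class Obj:
    kind: str                       # e.g. "pixel_cluster", "segment", "polyline"
    data: dict = field(default_factory=dict)

def segment_recognizer(blackboard):
    """Promote raw pixel clusters to line segments (placeholder logic)."""
    return [Obj("segment", o.data) for o in blackboard if o.kind == "pixel_cluster"]

def polyline_recognizer(blackboard):
    """Group segments into one polyline once at least two are available."""
    segs = [o for o in blackboard if o.kind == "segment"]
    return [Obj("polyline", {"parts": len(segs)})] if len(segs) >= 2 else []

blackboard = [Obj("pixel_cluster", {"id": i}) for i in range(3)]
for recognizer in (segment_recognizer, polyline_recognizer):
    blackboard.extend(recognizer(blackboard))   # recognized objects stay on the blackboard

print([o.kind for o in blackboard])
```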
Abstract: Volume representations of blood vessels acquired by 3D rotational angiography are very suitable for diagnosing a stenosis or an aneurysm. For optimal treatment, physicians need to know the shape of the diseased vessel parts. Binary segmentation by thresholding is the first step in our shape extraction procedure. Assuming a twofold Gaussian mixture model, the model parameters (and thus the threshold for binary segmentation) can be extracted from the observations (i.e. the gray values) by the Expectation-Maximization (EM) algorithm. Since the EM algorithm requires a number of iterations through the observations, and because of the large number of observations, the EM algorithm is very time-consuming. Therefore, we developed a method to apply the EM algorithm to the histogram of the observations, requiring a single pass through the observations and a number of iterations through the much smaller histogram. This variant gives almost the same results as the original EM algorithm, at least for our clinical volumes. We have used this variant for an evaluation of the accuracy of the EM algorithm: the maximum relative error in the mixing coefficients was less than 7%, the maximum relative error in the parameters of the two Gaussian components was less than 2.5%. Abstract: In this paper, a new color segmentation scheme of microscopic color images is proposed. The approach combines a region growing method and a clustering method. Each channel plane of the color images is represented by a set of regions using a watershed algorithm. Those regions are represented and modeled by a Region Adjacency Graph (RAG). A novel method is introduced to simplify the RAG by merging candidate regions until the violation of a stopping aggregation criterion determined using a statistical method which combines the generalized likelihood ratio (GLR) and the Bayesian information criterion (BIC). From the resulting segmented and simplified images, the RGB image is computed. Structural features as cells area, shape indicator and cells color are extracted using the simplified graph and then stored in a database in order to elaborate meaningful queries. A regularization step based on the use of an automatic classification will take place. Results show that our method that does not involve any a priori knowledge is suitable for several types of cytology images. Abstract: Representation of developmental gradients in biological structures requires visualization of storage compounds, metabolites or mRNA hybridization patterns in a 3D morphological framework. NMR imaging can generate such a 3D framework by non-invasive scanning of living structures. Histology provides the distribution of developmental markers as 2D cross-sections. Multimodal alignment tries to put such different image modalities into correspondence. Here we compare different methods for rigid registration of 3D NMR datasets and 2D cross-sections of developing barley grains. As metrics for similarity measurements mutual information, cross correlation and overlap index are used. In addition, different filters are applied to the images before the alignment. The algorithms are parallelized, partially vectorized and implemented on the Cell Broadband Engine processor in a Playstation® 3. Evaluation is done by a comparison of the results to a manually defined gold standard of a NMR dataset and a corresponding 2D cross-section of the same grain. 
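An illustrative sketch of the histogram-based EM variant described in the rotational angiography abstract above: a two-component Gaussian mixture is fitted to a gray-value histogram, so the raw data is traversed only once. The initialisation and the synthetic data are assumptions.

```python
# Sketch of histogram-based EM: fit a two-component Gaussian mixture to a gray-value
# histogram instead of to every voxel, so only one pass over the data is needed.
# Initialisation and the synthetic data below are illustrative assumptions.
import numpy as np

def em_on_histogram(bin_centers, counts, n_iter=100):
    w = np.array([0.5, 0.5])                                  # mixing coefficients
    mu = np.array([bin_centers.min(), bin_centers.max()], dtype=float)
    var = np.array([np.var(bin_centers)] * 2) + 1e-6

    for _ in range(n_iter):
        # E-step: responsibilities of each component for each histogram bin
        pdf = (w / np.sqrt(2 * np.pi * var)) * \
              np.exp(-0.5 * (bin_centers[:, None] - mu) ** 2 / var)
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update parameters, weighting each bin by its count
        nk = (resp * counts[:, None]).sum(axis=0)
        w = nk / counts.sum()
        mu = (resp * counts[:, None] * bin_centers[:, None]).sum(axis=0) / nk
        var = (resp * counts[:, None] * (bin_centers[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic gray values from two Gaussians, histogrammed once.
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(40, 8, 50000), rng.normal(140, 15, 20000)])
counts, edges = np.histogram(values, bins=256)
centers = 0.5 * (edges[:-1] + edges[1:])
print(em_on_histogram(centers, counts.astype(float)))
```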
The results show that the best alignment is achieved by applying mutual information to Sobel-filtered images and, compared to the implementation on a standard single-core CPU, the computation is accelerated by a factor of up to 1.95. Abstract: This paper addresses the issue of accurate lesion segmentation in retinal imagery, using level set methods and a novel stopping mechanism - an elementary features scheme. Specifically, the curve propagation is guided by a gradient map built using a combination of histogram equalization and robust statistics. The stopping mechanism uses elementary features gathered as the curve deforms over time, and then, using a lesionness measure defined herein, 'looks back in time' to find the point at which the curve best fits the real object. We implement the level set using a fast upwind scheme and compare the proposed method against five other segmentation algorithms on 50 randomly selected images of exudates, with a database of clinician marked-up boundaries as ground truth. Abstract: This paper presents a novel approach for the detection and classification of steel sheet defects. A defect database with sufficient samples and good imaging conditions is introduced. A set of new features is proposed to extract the appropriate textural characteristics from defect images. This is followed by the selection of important features using the SFFS algorithm. Modifications to the SFFS feature selection method are presented to meet the real-time requirements of the application. The proposed scheme decreases computational complexity at the cost of a slight decrease in classification accuracy. Abstract: The Phase Correlation Method (PCM) is a well-known and effective strategy for 2D image registration. Earlier we presented a derived method called the Cylindrical Phase Correlation Method (CPCM), which belongs among the many improvements and applications of PCM published by other authors. CPCM utilizes the effective and robust approach of PCM for a 3D image rigid registration task in an iterative optimization procedure. In this paper, the improvement to the rotation estimation step, based on non-uniform sampling in the cylindrical coordinate system, is described in detail. Experimental results are provided both for the original and improved versions of the rotation estimation algorithm, as well as the results of the final method and its comparison to reference methods. Abstract: This study is based on the development of software for computer-aided driving. A video is acquired through the windscreen while driving, showing the scene observed by the driver. The purpose is to extract characteristic elements from each image of the video sequence in order to interpret them and help the driver to make a decision. In this way, the road width is estimated; road signs are also extracted from the video and the information they contain is interpreted. The presented work is based on a preliminary study yielding draft software, and experimental results are shown on several examples. Abstract: A method for spatially registering pairs of digital images of the retina is presented, using intrinsic feature points (landmarks) and a dense local transformation. First, landmarks, i.e. blood vessel bifurcations, are extracted from both retinal images using filtering followed by thinning and branch point analysis. Correspondences are found by topological and structural comparisons between both retinal networks. From this set of matching points, a displacement field is computed and, finally, one of the two images is transformed.
Due to the complexity of the retinal registration problem, the presented transformation is dense, local and adaptive. Experimental results established the effectiveness and the interest of the dense registration method. Abstract: This paper presents an image processing technique for noise removal in the intermediate stage of a crack detection algorithm. Unlike noise in other domains, noise in this kind of image is unique in terms of size and dispersal. This technique is based on Newton's theory of universal gravitation. The technique highlights noise within an image by giving low values to noise objects while giving high values to cracks, thus making it simple to label an object as either noise or a crack. This method gave good results in removing noise within the crack segmentation algorithm. Abstract: Nowadays, ancient coins are becoming subject to a very large illicit trade. Thus, the interest in reliable automatic coin recognition systems within cultural heritage and law enforcement institutions is rising rapidly. A central component in the permanent identification and traceability of coins is the underlying image recognition technology. Prior to any analysis, a coin image has to be segmented into two areas: the area depicting the coin and the area belonging to the background. In this paper, we focus on the segmentation task as a preprocessing step for any automated coin recognition system. The objective is a robust segmentation procedure for a large variety of coin image styles. We present a simple and fast method for coin segmentation, based on local entropy and gray value range. Results of the developed algorithm are shown for an image database of ancient coins and demonstrate the benefits of our approach. Abstract: In this paper we present a novel scale-invariant interest point detector of blobs which incorporates the idea of blob movement along the scales. This trajectory of the blobs through the scale space is shown to be valuable information for estimating the most stable locations and scales of the interest points. Our detector evaluates interest points in terms of their own trajectory along the scales and its evolution, avoiding redundant detections. Moreover, in this paper we present a differential geometry view to understand how interest points can be detected. We propose to analyze the Gaussian curvature to classify image regions as elliptical (blobs) or hyperbolic (corners or saddles). Our interest point detector has been compared with the Harris-Laplace and Hessian-Laplace detectors on infrared (IR) images, outperforming their results in terms of the number and precision of interest points detected. Abstract: In this paper, we address the problem of the analysis of cellular phenotype from time-lapse image sequences using object tracking algorithms together with feature extraction and classification. We discuss the application of an object tracking algorithm in the analysis of high-content cell-migration time-lapse image sequences of extremely motile cells; these cells are captured at low time resolution. The small size of the objects and the significant deformation of the objects during the process render the tracking a non-trivial problem. To that end, the 'KDE Mean Shift', a real-time tracking solution, is adapted for our research. We illustrate, in a simulation experiment with artificial objects, that our algorithm can achieve an accuracy of over 90%. Based on the tracking result, we propose several morphology- and motility-based measurements for the analysis of cell behaviour.
Our analysis requires only initial manual intervention; the majority of the processing is automated. Abstract: In this paper we address the problem of detecting objects from a moving camera by jointly considering low-level image features and high-level object information. The proposed method partitions an image sequence into independently moving regions with similar 3-dimensional (3D) motion and distance to the observer. In the recognition stage, category-specific information is integrated into the partitioning process. An object category is represented by a set of descriptors expressing the local appearance of salient object parts. To account for the geometric relationships among object parts, a structural prior over part configurations is designed. This prior structure expresses the spatial dependencies of object parts observed in a training data set. To achieve global consistency in the recognition process, information about the scene is extracted from the entire image based on a set of global image features. These features are used to predict the scene context of the image, from which characteristic spatial distributions and properties of an object category are derived. The scene context helps to resolve local ambiguities and achieves locally and globally consistent image segmentation. Our expectations on the spatial continuity of objects are expressed in a Markov Random Field (MRF) model. Segmentation results are presented based on real image sequences. Abstract: In this paper we consider the limitations of Linear Discriminant Analysis (LDA) when applying it to large-scale problems. Since LDA was originally developed for two-class problems, the obtained transformation is sub-optimal if multiple classes are considered. In fact, the separability between the classes is reduced, which decreases the classification power. To overcome this problem, several approaches including weighting strategies and mixture models have been proposed, but these approaches are complex and computationally expensive. Moreover, they were only tested for a small number of classes. In contrast, our approach can handle a huge number of classes, showing excellent classification performance at low computational cost. The main idea is to split the original data into multiple sub-sets and to compute a single LDA space for each sub-set. Thus, the separability in the obtained subspaces is increased and the overall classification power is improved. Moreover, since smaller matrices have to be handled, the computational complexity is reduced for both training and classification. These benefits are demonstrated on different publicly available datasets. In particular, we consider the task of object recognition, where we can handle up to 1000 classes. Abstract: The recognition of dynamic textures is fundamental in processing image sequences, as they are very common in natural scenes. The computation of the optic flow is the most popular method to detect, segment and analyse dynamic textures. For weak dynamic textures, this method is particularly suitable. However, for strong dynamic textures, it implies a heavy computational load and therefore significant energy consumption. In this paper, we propose a novel approach intended to be implemented by very low-power integrated vision devices. It is based on a simple and flexible computation at the focal plane implemented by power-efficient hardware.
The first stages of the processing are dedicated to remove redundant spatial information in order to obtain a simplified representation of the original scene. This simplified representation can be used by subsequent digital processing stages to finally decide about the presence and evolution of a certain dynamic texture in the scene. As an application of the proposed approach, we present the preliminary results of smoke detection for the development of a forest fire detection system based on a wireless vision sensor network. Abstract: This paper describes a semi-supervised distance metric learning algorithm which uses pairwise equivalence (similarity and dissimilarity) constraints to discover the desired groups within high-dimensional data. As opposed to the traditional full rank distance metric learning algorithms, the proposed method can learn nonsquare projection matrices that yield low rank distance metrics. This brings additional benefits such as visualization of data samples and reducing the storage cost, and it is more robust to overfitting since the number of estimated parameters is greatly reduced. Our method works in both the input and kernel induced-feature space, and the distance metric is found by a gradient descent procedure that involves an eigen-decomposition in each step. Experimental results on high-dimensional visual object classification problems show that the computed distance metric improves the performance of the subsequent clustering algorithm. Abstract: Several researchers have proposed effective approaches for binary classification in the last years. We can easily extend some of those techniques to multi-class. Notwithstanding, some other powerful classifiers (e.g., SVMs) are hard to extend to multi-class. In such cases, the usual approach is to reduce the multi-class problem complexity into simpler binary classification problems (divide-and-conquer). In this paper, we address the multi-class problem by introducing the concept of affine relations among binary classifiers (dichotomies), and present a principled way to find groups of high correlated base learners. Finally, we devise a strategy to reduce the number of required dichotomies in the overall multi-class process. Abstract: For many computer vision problems, the most time consuming component consists of nearest neighbor matching in high-dimensional spaces. There are no known exact algorithms for solving these high-dimensional problems that are faster than linear search. Approximate algorithms are known to provide large speedups with only minor loss in accuracy, but many such algorithms have been published with only minimal guidance on selecting an algorithm and its parameters for any given problem. In this paper, we describe a system that answers the question, "What is the fastest approximate nearest-neighbor algorithm for my data?" Our system will take any given dataset and desired degree of precision and use these to automatically determine the best algorithm and parameter values. We also describe a new algorithm that applies priority search on hierarchical k-means trees, which we have found to provide the best known performance on many datasets. After testing a range of alternatives, we have found that multiple randomized k-d trees provide the best performance for other datasets. We are releasing public domain code that implements these approaches. 
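A small usage sketch of approximate nearest-neighbour search with randomized k-d trees, as discussed in the matching abstract above, here accessed through OpenCV's FLANN-based matcher rather than the library's native interface; the parameter values (number of trees, checks) are example choices only.

```python
# Illustrative use of approximate nearest-neighbour search with randomized k-d trees,
# via OpenCV's FLANN-based matcher. The descriptors are random and the parameter
# values are example choices; consult the library documentation for tuning.
import numpy as np
import cv2

rng = np.random.default_rng(0)
database = rng.random((10000, 128)).astype(np.float32)   # e.g. 128-D SIFT descriptors
queries = rng.random((5, 128)).astype(np.float32)

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=4)   # 4 randomized k-d trees
search_params = dict(checks=64)                              # leaves to visit (speed/accuracy)

matcher = cv2.FlannBasedMatcher(index_params, search_params)
matches = matcher.knnMatch(queries, database, k=2)

for i, (m, n) in enumerate(matches):
    print(f"query {i}: nearest index {m.trainIdx} (dist {m.distance:.3f}), "
          f"second {n.trainIdx} (dist {n.distance:.3f})")
```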
This library provides about one order of magnitude improvement in query time over the best previously available software and provides fully automated parameter selection. Abstract: In this paper, we extend the Maximum uncertainty Linear Discriminant Analysis (MLDA), proposed recently for limited sample size problems, to its kernel version. The new Kernel Maximum uncertainty Discriminant Analysis (KMDA) is a two-stage method composed of Kernel Principal Component Analysis (KPCA) followed by the standard MLDA. In order to evaluate its effectiveness, experiments on face recognition using the well-known ORL and FERET face databases were carried out and compared with other existing kernel discriminant methods, such as Generalized Discriminant Analysis (GDA) and Regularized Kernel Discriminant Analysis (RKDA). The classification results indicate that KMDA performs as well as GDA and RKDA, with the advantage of being a straightforward stabilization approach for the within-class scatter matrix that uses higher-order features for further classification improvements. Abstract: Regular textures can be modelled as consisting of periodic patterns where a fundamental unit, or texel, occurs repeatedly. This paper explores the use of a representation of texel geometry for classification and comparison of regular texture images. Texels are automatically extracted from images and the distribution of texel shape and orientation is modelled. The application of this model to image retrieval and browsing is discussed using examples from a database of art and textile images. Abstract: The contrast statistics of natural images can be adequately characterized by a two-parameter Weibull distribution. Here we show how distinct regimes of this Weibull distribution lead to various classes of visual content. These regimes can be determined using model selection techniques from information theory. We experimentally explore the occurrence of the content classes, as related to the global statistics, local statistics, and to human attended regions. As such, we explicitly link local image statistics and visual content. Abstract: Variance map can be used to detect and distinguish texts from the background in images. However previous variance maps work as one level and they revealed a limitation in dealing with diverse size, slant, orientation, translation and color of texts. In particular, they have difficulties in locating texts of large size or texts with severe color gradation due to specific value in mask sizes. We present a method of robustly segmenting text regions in complex web color images using two-level variance maps. The two-level variance maps works hierarchically. The first level finds the approximate locations of text regions using global horizontal and vertical color variances with the specific mask sizes. Then the second level segments each text region using intensity variation with a local new mask size, in which a local new mask size is determined adaptively. By the second process, backgrounds tend to disappear in each region and segmentation can be accurate. Highly promising experimental results have been obtained using the our method in 400 web images. Abstract: Human action recognition is an important research area in the field of computer vision having a great number of real-world applications. 
This paper presents a multi-view action recognition framework able to extract human silhouette clues from different synchronized static cameras and then to validate them introducing advanced reasonings about scene dynamics. Two different algorithmic procedures have been introduced: the first one performs, in each acquired image, the neural recognition of the human body configuration by using a novel mathematic tool named Contourlet transform. The second procedure performs, instead, 3D ball and player motion analysis. The outcomes of both procedures are then properly merged to accomplish the final player activity recognition task. Experimental results were carried out on several image sequences acquired during some matches of the Italian Serie A soccer championship. Abstract: Research has shown that regions with conspicuous colours are very effective in attracting attention, and that regions with different textures also play an important role. We present a biologically plausible model to obtain a saliency map for Focus-of-Attention (FoA), based on colour and texture boundaries. By applying grouping cells which are devoted to low-level geometry, boundary information can be completed such that segregated regions are obtained. Furthermore, we show that low-level geometry, in addition to rendering filled regions, provides important local cues like corners, bars and blobs for region categorisation. The integration of FoA, region segregation and categorisation is important for developing fast gist vision, i.e., which types of objects are about where in a scene. Abstract: This paper tackles the problem of recognizing characters in images of natural scenes. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English and Kannada characters. The database comprises of images of street scenes taken in Bangalore, India using a standard camera. The problem is addressed in an object cateogorization framework based on a bag-of-visual-words representation. We assess the performance of various features based on nearest neighbour and SVMclassification. It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems. Furthermore, the method can benefit from synthetically generated training data obviating the need for expensive data collection and annotation. Abstract: In this paper, experimental results from the face contour classification tests are shown. The presented approach is dedicated to a face recognition algorithm based on the Active Shape Model method. The results were obtained from experiments carried out on the set of 3300 images taken from 100 persons. Automatically fitted contours (as 194 ordered face contour points vector, where the contour consisted of eight components) were classified by Nearest Neighbourhood Classifier and Support Vector Machines classifier, after feature space decomposition, carried out by the Linear Discriminant Analysis method. Feature subspace size reduction and classification sensitivity analysis for boundary case testing set are presented. Abstract: Phyllotaxis is the study of the morphological order of plants. Remarkably, in spite of the overwhelming diversity of plant morphology, there are common patterns that link a wide variety of species. The date palm, having a phyllotactic order, possesses a simple, repetitive model. 
Only a small number of parameters are needed to represent the phyllotactic order of the date palm. This a priori knowledge we have on the date palm can help in the 3D reconstruction of the tree and can even make it possible to reconstruct a 3D model from only one image. The proposed algorithm receives as input a single image of the date palm. Upon image acquisition, the algorithm proceeds to search for, and locate, the trunk followed by a few prominent leaves. From the location of the prominent leaves the algorithm proceeds to calculate tree model parameters, which can then be used to search for additional, neighboring, leaves. Complete 3D reconstruction is achieved by utilizing the calculated tree model parameters and by the known location of the leaves on the 2D image. Abstract: The component-tree structure allows to analyse the connected components of the threshold sets of an image by means of various criteria. In this paper we propose to extend the component-tree structure by associating robust shape-descriptors to its nodes. This allows an efficient shape based classification of the image connected components. Based on this strategy, an original and generic methodology for object recognition is presented. This methodology has been applied to segment and recognize ancient graphical drop caps. Abstract: This paper investigates the detection and classification of fighting and pre and post fighting events when viewed from a video camera. Specifically we investigate normal, pre, post and actual fighting sequences and classify them. A hierarchical AdaBoost classifier is described and results using this approach are presented. We show it is possible to classify pre-fighting situations using such an approach and demonstrate how it can be used in the general case of continuous sequences. Abstract: This work focuses on fast approaches for image retrieval and classification by employing simple features to build image signatures. For this purpose a neural model for soft classification and automatic image annotation is proposed. The salient aspects of this solution are: a) the employment of a Radial Basis Function Network built on top of an image retrieval distance metric b) a soft learning strategy for annotation handling. Experiments have been conducted on a subset of the Corel image dataset for evaluation and comparative analysis. Abstract: This paper presents a new method of projection peak analysis for rapid eye localization. First, the eye region is segmented from the face image by setting appropriate candidate window. Then, a threshold is obtained by histogram analysis of the eye region image to binarize and segment the eyes out of the eye region. Thus, a series of projection peak will be derived from vertical and horizontal gray projection curves on the binary image, which is used to confirm the positions of the eyes. The proposed eye-localization method does not need any a priori knowledge and training process. Experiments on three face databases show that this method is effective, accurate and rapid in eye localization, which is fit for real-time face recognition system. Abstract: Very few research is done to deal with the problem of generic object recognition from range images. With the upcoming technique of Time-of-Flight cameras (TOF), for example the PMD-cameras, range images can be acquired in real-time and thus recorded range data can be used for generic object recognition. This paper presents a model for generic recognition of 3D objects from TOF images. 
The main challenge is the low spatial resolution and the noise level of the data, which make careful feature selection and a robust classifier necessary. Our approach describes the objects as a set of local shape-specific features. These features are computed from interest regions detected and extracted using a suitable interest point detector. Learning is performed in a weakly supervised manner using the RealAdaBoost algorithm. The main idea of our approach has previously been applied to 2D images and, to the best of our knowledge, has never been applied to range images for the task of generic object recognition. As a second contribution, a new 3D object category database is introduced which provides 2D intensity as well as 3D range data about its members. Experimental evaluation of the performance of the proposed recognition model is carried out using the new database and promising results are obtained. Abstract: In this paper, we address the problem of human identification using gait, building on the recent work of Lee et al. (Lee et al., 2007) proposed for gait recognition. First, we introduce the algorithm proposed by Lee et al. This method has two main steps: (1) extract key frames to define the gait cycle pattern, and (2) compute Shape Variation-based frieze patterns. These patterns are then used to classify and perform gait identification. We modify the features used in this approach: we omit redundant features based on the effect of each feature on the recognition rate, and in the next step we improve the performance of the approach by making some changes to the way features are extracted. Finally, we use the statistical characteristics of the employed features instead of directly applying the remaining features. We test the proposed method on the CASIA database. The experimental results are used to compare the proposed method with the method of Lee et al. Abstract: This paper proposes a set of new image descriptors based on local histograms of basic operators. These descriptors are intended to serve in a first-level stage of a hierarchical representation of image structures. For reasons of efficiency and scalability, we argue that descriptors suitable for this purpose should be able to capture and separate invariant and variant properties. Unsupervised clustering of the image descriptors from training data gives a visual vocabulary, which allows for compact representations. We demonstrate the representational power of the proposed descriptors and vocabularies on image categorization tasks using well-known datasets. We use image representations via statistics in the form of global histograms of the underlying visual words, and compare our results to previously reported work. Abstract: We present an approach to synthesising the effects of ageing on human face images using three-dimensional modelling. We extract a set of three-dimensional face models from a set of two-dimensional face images by fitting a Morphable Model. We propose a method to age these face models using Partial Least Squares to extract from the dataset those factors most related to ageing. These ageing-related factors are used to train an individually weighted linear model. We show that this is an effective means of producing an aged face image and compare this method to two other linear ageing methods for ageing face models. This is demonstrated both quantitatively and with perceptual evaluation using human raters.
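A compact sketch of the visual-vocabulary idea mentioned in the descriptor abstract above: local descriptors are clustered with k-means and each image is represented by a global histogram of visual words. The data is random and purely illustrative.

```python
# Bag-of-visual-words sketch: cluster local descriptors with k-means to form a vocabulary,
# then represent each image as a global histogram of visual-word occurrences.
# The descriptors here are random placeholders for real local features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_descriptors = rng.random((5000, 64))       # pooled local descriptors from training images
vocab_size = 50

kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(train_descriptors)

def bow_histogram(descriptors, kmeans_model):
    """Global histogram of visual words for one image, L1-normalised."""
    words = kmeans_model.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans_model.n_clusters).astype(float)
    return hist / hist.sum()

image_descriptors = rng.random((300, 64))        # descriptors of one unseen image
print(bow_histogram(image_descriptors, kmeans).round(3))
```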
Abstract: In the paper we address the applied problem of detecting and recognizing street name plates in urban images by a generic approach to structural object detection and recognition. A structured object is detected using a boosting approach and false positives are filtered using a specific method called the texture transform. In a second step the subregion containing the key information, here the text, is segmented out. Text is in this case characterized as texture and a texton based technique is applied. Finally the texts are recognized by using Dynamic Time Warping on signatures created from the identified regions. The recognition method is general and only requires text in some form, e.g. a list of printed words, but no image models of the plates for learning. Therefore, it can be shown to scale to rather large data sets. Moreover, due to its generality it applies to other cases, such as logo and sign recognition. On the other hand the critical part of the method lies in the detection step. Here it relied on knowledge about the appearance of street signs. However, the boosting approach also applies to other cases as long as the target region is structured in some way. The particular scenario considered deals with urban navigation and map indexing by mobile users, e.g. when the images are acquired by a mobile phone. Abstract: In this paper, we introduce a pre-detection algorithm dedicated to French danger-warning and prohibitory road signs. The proposed method combines color, shape, location and symmetry features to select among large image databases, a small subset of pictures that probably contain road signs. We report the results of a systematic experimental assessment that we performed on five image databases, comprised of more than 26,000 images, covering 176 km and containing 371 traffic signs, among which a non-negligible amount (about 5% in average) is damaged. The experiments show that about 10% images of the sequences are selected and more than 87% traffic signs are detected. The missed objects always correspond to dirty, worn-out or badly oriented signs that would be difficult to detect even for a human operator. Abstract: We register close-range depth images of objects using a Swissranger sensor and apply a spring-mass model for 3D object reconstruction. The Swissranger sensor delivers depth images in real time which have, compared with other types of sensors, such as laser scanners, a lower resolution and are afflicted with larger uncertainties. To reduce noise and remove outliers in the data, we treat the point cloud as a system of interacting masses connected via elastic forces. We investigate two models, one with and one without a surface-topology preserving interaction strength. The algorithm is applied to synthetic and real Swissranger sensor data, demonstrating the feasibility of the approach. This method represents a preliminary step before fitting higher-level surface descriptors to the data, which will be required to define object-action complexes (OACS) for robot applications. Abstract: This paper addresses the problem of localized content based image retrieval. Contrary to classic CBIR systems which rely upon a global view of the image, localized CBIR only focuses on the portion of the image where the user is interested in, i.e. the relevant content. Using the proposed algorithm, it is possible to recognize an object by clicking on it. The algorithm starts with an automatic gamma correction and bilateral filtering. 
These pre-processing steps simplify the image segmentation. The segmentation itself uses dynamic region growing, starting from the click position. Contrary to the majority of segmentation techniques, region growing only focuses on that part of the image that contains the object. The remainder of the image is not investigated. This simplifies the recognition process, speeds up the segmentation, and increases the quality of the outcome. Following the region growing, the algorithm starts the recognition process, i.e., feature extraction and matching. Based on our requirements and the reported robustness in many state-of-the-art papers, the Scale Invariant Feature Transform (SIFT) approach is used. Extensive experimentation of our algorithm on three different datasets achieved a retrieval efficiency of approximately 80%. Abstract: This paper presents a method for weight estimation and classification of milled rice kernels using support vector machines. Shape descriptors are used as input features for determining the grade factors based on physical shapes such as headrice, broken kernel, and brewer. Colour histogram is extracted from milled rice image to obtain 24 colour features in RGB and Cielab colour spaces. We built a support vector regression (SVR) model for estimating rice kernel weight and support vector classifier (SVC) for rice defectives. Results showed that in real data, the performance of SVR is better than linear regression (LR) with a mean square error (MSE), mean absolute error (MAE) and correlation coefficient of 78.35x10-3, 0.206 and 0.9943, respectively. In determining grade factors based on colour appearance (rice defectives), SVC outperforms the generalized regression neural network (GRNN) with an accuracy of 98.86%. Abstract: Declarative knowledge and control decisions on the sequence of interpretation acts are separated in a structural pattern recognition system. The control can be optimized leaving the knowledge fixed. A simple production system is used as declarative example knowledge. It is tailored to recognize and locate rectangles in images – where object primitives are several thousand very short contour segments. Different control strategies can be realized: (i) a simple quality driven bottom-up control; (ii) an heuristic strategy punishing object instances which have been partner in an already performed reduction and (iii) a new psychologically inspired strategy that combines local inhibition with less local excitation. These strategies are compared quantitatively on synthetic data and qualitatively on a real aerial image. Abstract: This paper first presents a brief review on visual perception in the built environment and the Standard Feature Model of visual cortex (SFM); following experiments are presented for architectural cue recognition (door, wall and doorway) using SFM feature-based model. Based on the findings of these experiments, we conclude that the visual differences between architectural cues are too subtle to realistically simulate human vision for the SFM. Abstract: Traditionally, image thresholding is applied to segmentation - allowing foreground objects to be segemented. However, selection of thresholds in such schemes can prove difficult. We propose a solution by applying multiple thresholds. The task of object recognition then becomes that of matching binary objects, for which we present a new method based on local shape features. 
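A generic sketch related to the multiple-threshold idea above: several thresholds are applied to one image and simple shape features are collected for the resulting binary objects. This is standard scikit-image code, not the authors' matching method; the threshold choices are arbitrary.

```python
# Apply several thresholds to one image, label the resulting binary objects and
# collect simple rotation-invariant shape features per object. Threshold values
# are arbitrary example choices.
import numpy as np
from skimage import data, measure

image = data.coins()
thresholds = np.percentile(image, [40, 55, 70, 85])   # example threshold choices

for t in thresholds:
    binary = image > t
    labels = measure.label(binary)
    props = measure.regionprops(labels)
    # keep a few shape descriptors per object, ignoring tiny components
    feats = [(p.area, p.eccentricity, p.solidity) for p in props if p.area > 50]
    print(f"threshold {t:.0f}: {len(feats)} candidate objects")
```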
We embed our recognition method in a system which reduces the computational increase caused by using multiple thresholding. Experimental results show our method and system work well despite only using a single example of each object class for matching. Abstract: This paper proposes a window detection system using applied statistics and image based methods from Terrestrial Laser Scanners which can be used for direct application in a deformation measurement system. It exploits the laser distance information either directly in the laser scanner spherical coordinate space images, or on segmented planar facade patches, both with the assumption that the laser beam penetrates windows. The applied statistical method uses basic local features on local distance variations and decides on an adaptive threshold on the basis of the 1-Sigma percentile upper limit with P90 90% and P10 10% produced sample quartiles of the data for the laser spherical coordinate system image and Q3 -Sigma for the ortho images of segmented 3D facade planes as a location in the order statistics. For window detection the image is binarized and morphological closing is performed using the derived adaptive threshold. Thereafter we do the contour analysis and obtain the bounding rectangles positions that directly form the window segments in the image. We compare the window detection results on the laser spherical coordinate system image with those on ortho images of segmented 3D facades. The system provides a windows detection rate of more than 85% with a processing time of less than a minute in a typical 360 degree laser scan image. Abstract: Facial expression recognition has been the subject of much research in the last years within the Computer Vision community. The detection of smiles, however, has received less attention. Its distinctive configuration may pose less problem than other, at times subtle, expressions. On the other hand, smiles can still be very useful as a measure of happiness, enjoyment or even approval. Geometrical or local-based detection approaches like the use of lip edges may not be robust enough and thus researchers have focused on applying machine learning to appearance-based descriptors. This work makes an extensive experimental study of smile detection testing the Local Binary Patterns (LBP) as main descriptors of the image, along with the powerful Support Vector Machines classifier. The results show that error rates can be acceptable, although there is still room for improvement. Abstract: This paper proposes a new estimation of facial asymmetry in 3D face models of humans and an algorithm to compute it. We consider models derived by 3D scanning method. Each model is given as a cloud of points in 3D space and can be considered as a discrete single-valued function of two variables. We present an approach for constructing a disparity measure between original face model and its reflected model. Main stages of proposed algorithm are construction Delaunay triangulations of two models and general Delaunay triangulation, function interpolation on basis of triangulations localization in each other and comparison of functions on separate triangles of general triangulation. Further using elementary manipulations of reflected model algorithm searches such position that two models constitute a maximum matching so that the corresponding disparity measure will be minimal. We carry out computing experiments on database consisting of about 200 face models. 
These experiments have indicated that the proposed estimation is stable for different models of one and the same person. Abstract: Drosophila melanogaster is a model organism in genetics thanks to the compactness of its genome and its relative simplicity. Recently, certain developmental patterns in Drosophila have been studied by mathematical models, with the aim of gaining deeper and quantitative insight into the morphogenesis of this insect. There is a need for accurate dynamical of the epithelial cell structure and organization within the fly wing, to further the understanding of a phenomenon known as planar cell polarity. The present study tackles the problem of retrieving such a salient structure using classical tools of dynamical system theory embedded with network and graph concepts. On the one hand the goal is to provide a visual detection and representation of the cell packaging that is accurate and fine. Particular care is also put in obtaining a model of this structure, whose main features are the compactness and simplicity. Abstract: In this paper, we propose three-dimensional (3-D) shape reconstruction endoscope using shape from focus/defocus (SFF/SFD). 3-D shape measurement that uses the endoscope image sequence can measure both the shape and the texture at the same time. It has some advantages such as the analysis of lesion location that integrates the analysis of shape and texture. And the shape and the texture from the endoscope can be recorded quantitatively. To obtain 3-D information, shape measurement methods using stereo cameras is often used. But in case of narrow space, 3-D reconstruction using focus information such as SFF/SFD is more appropriate in terms of apparatus size. Therefore, we apply SFF method to endoscope for shape reconstruction, and conducted two basic experiments to confirm the possibility of the system using general camera as a first step. First, to estimate the accuracy of shape measurement of the system, shape measurement of the objects that the shape is already-known was conducted. And the error of the system was calculated about 1 to 5 mm. Next, to confirm the possibility to measure biological inner wall, the measurement of inner wall of the pig stomach was conducted, and the shape was reconstructed. Abstract: This paper concerns 3D object recognition from vision. In our robotics context,an object must be recognized and localized in order to be grasped by a mobile robot equipped with a manipulator arm: several cameras are mounted on this robot, on a static mast or on the wrist of the arm. The use of such a robot for object recognition, makes possible active strategies for object recognition. This system must be able to place the sensor in different positions around the object in order to learn discriminant features on every object to be recognized in a first step, and then to recognize these objects before a grasping task. Our method exploits the Mutual Information to actively acquire visual data until the recognition, like it was proposed in works presented in (Denzler and Brown, 2000) and (Denzler et al., 2001): color histogram, shape context, shape signature, Harris or Sift points descriptors are learnt from different viewpoint around every object in order to make the system more robust and efficient. Abstract: Monocular SLAM reconstruction algorithm advancements enable their integration in various applications: trajectometry, 3D model reconstruction, etc. 
However proposed methods still have drift limitations when applied to large-scale sequences. In this paper, we propose a post-processing algorithm which exploits a CAD model to correct SLAM reconstructions. The presented method is based on a specific deformable transformations model and then on an adapted non-rigid ICP between the reconstructed 3D point cloud and the known CAD model. Experimental results on both synthetic and real sequences point out that the 3D scene geometry regains its consistency and that the camera trajectory is improved: mean distance between the reconstructed cameras and the ground truth is less than 1 meter on several hundreds of meters. Abstract: We present a discrete distance transform in style of the vector propagation algorithm by Danielsson. Like other vector propagation algorithms, the proposed method is close to exact, i.e., the error can be strictly bounded from above and is significantly smaller than one pixel. Our contribution is that the algorithm runs entirely on consumer class graphics hardware, thereby achieving a throughput of up to 96 Mpixels/s. This allows the proposed method to be used in a wide range of applications that rely both on high speed and high quality. Abstract: Restoration of spatial objects characteristics with locally symmetric elements is proposed in this paper. An approach based on the model of a spatial flexible object defined as a family of spheres with the centres on a graph with a tree-like structure is proposed. A method of real time identification of such objects using the stereo mate images of their silhouettes is introduced. Image processing comprises construction of continuous skeletons of silhouettes. Application to real time gesture recognition is considered. Abstract: This paper presents a method for improving any object tracking algorithm based on machine learning. During the training phase, important trajectory features are extracted which are then used to calculate a confidence value of trajectory. The positions at which objects are usually lost and found are clustered in order to construct the set of 'lost zones' and 'found zones' in the scene. Using these zones, we construct a triplet set of zones i.e. 3 zones: In/Out zone (zone where an object can enter or exit the scene), 'lost zone' and 'found zone'. Thanks to these triplets, during the testing phase, we can repair the erroneous trajectories according to which triplet they are most likely to belong to. The advantage of our approach over the existing state of the art approaches is that (i) this method does not depend on a predefined contextual scene, (ii) we exploit the semantic of the scene and (iii) we have proposed a method to filter out noisy trajectories based on their confidence value. Abstract: This paper addresses real-time automatic tracking and labeling of a variable number of generic objects, using one or more static cameras. The multi-object configuration is tracked through a Markov Chain Monte-Carlo Particle Filter (MCMC PF) method. As this method sequentially processes particles, it cannot be speeded up by parallel computing allowed by multi-core processing units. As a main contribution, we propose in this paper an extended MCMC PF algorithm, benefiting from parallel computing, and we show that this strategy improves tracking operation. This paper also addresses object tracking involving occlusions, deep scale and appearance changes: we propose a global observation function allowing to fairly track far objects as well as close objects. 
Experiment results are shown and discussed on pedestrian and on vehicle tracking sequences. Abstract: Tracking multiple targets with similiar appearance is a common task in computer vision applications, especially in sports games. We propose a Rao-Blackwellized Resampling Particle Filter (RBRPF) as an implementable real-time continuation of a state-of-the-art multi-target tracking method. Target configurations are tracked by sampling associations and solving single-target tracking problems by Kalman filters. As an advantage of the new method the independence assumption between data associations is relaxed to increase the robustness in the sports domain. Smart resampling and memoization is introduced to equip the tracking method with real-time capabilities in the first place. The probabilistic framework allows for consideration of appearance models and the fusion of different sensors. We demonstrate its applicability to real world applications by tracking soccer players captured by multiple cameras through occlusions in real-time. Abstract: This paper proposes a novel framework for vision based door traversal that contributes to the ultimate goal of purely vision based mobile robot navigation. The door detection, door tracking and door traversal is accomplished by processing omnidirectional images. In door detection candidate line segments detected in the image are grouped and matched with prototypical door patterns. In door localisation and tracking a Kalman filter aggregates the visual information with the robots odometry. Door traversal is accomplished by a 2D visual servoing approach. The feasibility and robustness of the scheme are confirmed and validated in several robotic experiments in an office environment. Abstract: A novel mathematical method and a sensing system that detects velocity vector distribution on an optical image with a pixel-wise spatial resolution and a frame-wise temporal resolution is proposed. It is provided by the complex sinusoidally-modulated imaging using the three-phase correlation image sensor (3PCIS) and the exact algebraic inversion method based on the optical flow identity (OFI) satisfied by an intensity image and a complex-sinusoidally modulated image captured by the 3PCIS. Since the OFI is free from time derivatives, any limitations on the object velocity and inaccuracies due to approximated time derivatives is thoroughly avoided. An experimental system was constructed with a 320×256 pixel 3PCIS device and a standard PC for inversion operations and display. Several experimental results are shown including the dense motion capture of face and gesture and the particle image velocimetry of water vortices. Abstract: Graphical models have proved to be very efficient models for labeling image data. In particular, they have been used to label data samples from human body images. In this paper, the use of graphical models is studied for human-body landmark localization. Here a new algorithm based on the Branch&Bound methodology, improving the state of the art, is presented. The initialization stage is defined as a local optimum labeling of the sample data. An iterative improvement is given on the labeling space in order to reach new graphs with a lower cost than the current best one. Two branch prune strategies are suggested under a B&B approach in order to speed up the search: a) the use of heuristics; and b) the use of a node dominance criterion. 
Experimental results on human motion databases show that our proposed algorithm behaves better than the classical Dynamic Programming based approach. Abstract: In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers and large scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also estimate the ego-motion of the car which is used to create large scale maps of the road and also to detect independently moving objects. Sample results for the presented algorithms are shown on a stereo image sequence, that has been collected from a structured road. Abstract: This paper presents a generic unsupervised learning based solution to unexpected event detection from a static uncalibrated camera. The system can be represented into a probabilistic framework in which the detection is achieved by a likelihood based decision. We propose an original method to approximate the likelihood function using a sparse vector machine based model. This model is then used to detect efficiently unexpected events online. Moreover, features used are based on optical flow orientation within image blocks. The resulting application is able to learn automatically expected optical flow orientations from training video sequences and to detect unexpected orientations (corresponding to unexpected event) in a near real-time frame rate. Experiments show that the algorithm can be used in various applications like crowd or traffic event detection. Abstract: This paper deals with video-based face recognition and tracking from a camera mounted on a mobile robot companion. All persons must be logically identified before being authorized to interact with the robot while continuous tracking is compulsory in order to estimate the position of this person. A first contribution relates to experiments of still-image-based face recognition methods in order to check which image projection and classifier associations lead to the highest performance of the face database acquired from our robot. Our approach, based on Principal Component Analysis (PCA) and Support Vector Machines (SVM) improved by genetic algorithm optimization of the free-parameters, is found to outperform conventional appearance-based holistic classifiers (eigenface and Fisherface) which are used as benchmarks. The integration of face recognition, dedicated to the previously identified person, as intermittent features in the particle filtering framework is well-suited to this context as it facilitates the fusion of different measurement sources by positioning the particles according to face classification probabilities in the importance function. Evaluations on key-sequences acquired by the mobile robot in crowded and continuously changing indoor environments demonstrate the tracker robustness against such natural settings. The paper closes with a discussion of possible extensions. 
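The PCA + SVM building block mentioned in the abstract above is easy to sketch. The snippet below is only an illustration of that combination, not the authors' system: it uses scikit-learn's bundled LFW faces instead of the robot-acquired database, and a plain grid search stands in for the genetic-algorithm tuning of the free parameters; all names and parameter values here are my own choices.

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Stand-in data: scikit-learn's LFW subset (the paper uses images acquired by the robot).
faces = fetch_lfw_people(min_faces_per_person=50, resize=0.4)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0, stratify=faces.target)

# PCA projection ("eigenfaces") feeding an SVM classifier.
pipe = Pipeline([
    ("pca", PCA(n_components=100, whiten=True, random_state=0)),
    ("svm", SVC(kernel="rbf")),
])

# The paper tunes the free parameters with a genetic algorithm; a simple grid search
# is used here instead as a stand-in.
search = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": [1e-3, 1e-2]}, cv=3)
search.fit(X_train, y_train)
print("held-out accuracy:", search.score(X_test, y_test))
```

In the paper this classifier only provides intermittent face-identity measurements; the tracking itself is done by the particle filter, with the classification probabilities folded into the importance function.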
Abstract: A new architecture for indoor positioning and tracking is proposed, based on a single low cost pan and tilt camera, where three main modules can be identified: one related to the interface with the camera, supported on parameter estimation techniques; other, responsible for isolating and identifying the target, based on advanced image processing techniques, and a third, that resorting to nonlinear dynamic system suboptimal state estimation techniques, performs the tracking of the target and estimates its position, and linear and angular velocities. To assess the performance of the proposed methods and this new architecture, a software package was developed. An accuracy of 20 cm was obtained in a series of indoor experimental tests, for a range of operation of up to ten meter, under realistic real time conditions. Abstract: In this paper, we address full-body articulated human motion tracking from multi-view video sequences acquired in a studio environment. The tracking is formulated as a multi-dimensional nonlinear optimisation and solved using particle swarm optimisation (PSO), a swarm-intelligence algorithm which has gained popularity in recent years due to its ability to solve difficult nonlinear optimisation problems. Our tracking approach is designed to address the limits of particle filtering approaches: it initialises automatically, removes the need for a sequence-specific motion model and recovers from temporary tracking divergence through the use of a powerful hierarchical search algorithm (HPSO). We quantitatively compare the performance of HPSO with that of the particle filter (PF) and annealed particle filter (APF). Our test results, obtained using the framework proposed by (Balan et al., 2005) to compare articulated body tracking algorithms, show that HPSO's pose estimation accuracy and consistency is better than PF and compares favourably with the APF, outperforming it in sequences with sudden and fast motion. Abstract: A method is presented that fuses multiple differently exposed images of the same static real-world scene into a single high dynamic range radiance map. Firstly, the response function of the imaging device is recovered, that maps irradiating light at the imaging sensor to gray values, and is usually not linear for 8-bit images. This nonlinearity affects image processing algorithms that do assume a linear model of light. With the response function known this compression can be reversed. For reliable recovery the whole set of images is segmented in a single step, and regions of roughly constant radiance in the scene are labeled. Under- and overexposed parts in one image are segmented without loss of detail throughout the scene. From these segments and a parametrization of digital film the slope of the response curve is estimated, whereby various noise sources of an imaging sensor have been modeled. From its slope the response function is recovered and images are fused. The dynamic range of outdoor environments cannot be captured by a single image. Valuable information gets lost because of under- or overexposure. A radiance map overcomes this problem and makes object recognition or visual self-localisation of robots easier. Abstract: This paper presents an intelligent control loop add-on to reduce the total amount of hardware operations – and therefore the resulting execution speed – of a real-time depth scanning algorithm. 
The analysis module of the control loop predicts redundant brute-force operations, and dynamically adjusts the input parameters of the algorithm, to avoid scanning in a space that lacks the presence of objects. Therefore, this approach reduces the algorithmic complexity in proportion with the amount of void within the scanned volume, while remaining fully compliant with stream-centric paradigms such as CUDA and Brook+. Abstract: The intelligent monitoring of complex scenes usually requires the adoption of different sensors depending on the type of application (i.e. radar, sonar, chemical, etc.). From the past few years, monitoring is mainly represented by visual-surveillance. In this field, the research has proposed great innovation improving the surveillance from the standard CCTV to modern systems now able to infer behaviors in limited contexts. Though, when environments allow the creation of complex scenes (i.e. crowds, clutter, etc.) robust solutions are still far to be available. In particular, one of the major problems is represented by the occlusions that often limit the performance of the algorithms. As matter of fact, the majority of the proposed visual surveillance solutions processes the data flow generated by a single camera. These methods fail to correctly localize an occluded object in the real environment. Stereo vision can be introduced to solve such a limit but the number of needed sensors would double. Thus, to obtain the benefits of the stereo vision discharging some of its drawbacks, a novel framework in stereo vision is proposed by adopting the sensors available in common visual-surveillance networks. In particular, we will focus on the analysis of a stereo vision system which is build from a pairs of heterogeneous sensors, i.e., static and PTZ cameras with a task to locate objects accurately. Abstract: A new method is proposed for recovering 3D human poses in video sequences taken from a single uncalibrated camera. This is achieved by exploiting two important constraints observed from human bipedal motion: coplanarity of body key points during the mid-stance position and the presence of a foot on the ground – i.e. static foot - during most activities. Assuming 2D joint locations have been extracted from a video sequence, the algorithm is able to perform camera auto-calibration on specific frames when the human body adopts particular postures. Then, a simplified pin-hole camera model is used to perform 3D pose reconstruction on the calibrated frames. Finally, the static foot constraint which is found in most human bipedal motions is applied to infer body postures for non-calibrated frames. We compared our method with (1) "orthographic reconstruction" method and (2) reconstruction using manually calibrated data. The results validate the assumptions made for the simplified pin-hole camera model and reconstruction results reveal a significant improvement over the orthographic reconstruction method. Abstract: In this paper we address the problem of geometric video projector calibration using a markerless planar surface (wall) and a partially calibrated camera. Instead of using control points to infer the camera-wall orientation, we find such relation by efficiently sampling the hemisphere of possible orientations. This process is so fast that even the focal of the camera can be estimated during the sampling process. Hence, physical grids and full knowledge of camera parameters are no longer necessary to calibrate a video projector. 
Abstract: This paper deals with the temporal synchronization of videos representing the same dynamic event from different viewpoints. We propose a novel approach to automatically synchronize such videos based on temporal self-similarities of sequences. We explore video descriptors which capture the structure of video similarity over time and remain stable under viewpoint changes. We achieve temporal synchronization of videos by aligning such descriptors by Dynamic Time Warping. Our approach is simple and does not require point correspondences between views while being able to handle strong view changes. The method is validated on two public datasets with controlled view settings as well as on other videos with challenging motions and large view variations. Abstract: Although robust object tracking has a wide variety of applications ranging from video surveillance to recognition from motion, it is not completely solved. Difficulties in tracking objects arise due to abrupt object motion, changing appearance of the object or partial and full object occlusions. To resolve these problems, assumptions are usually made concerning the motion or appearance of an object. However in most applications no models of object motion or appearance are previously available. This paper presents an approach which improves the performance of a tracking algorithm through simultaneous online model generation of the tracked object. The achieved results testify to the stability and robustness of this approach. Abstract: Detecting humans and recognizing their behaviors in video sequences is a challenging problem due to variations of the background and the uncertainty of pose, appearance and motion. In this paper, we propose a systematic method to detect the behavior of tailgating. Firstly, in order to make the tracking process robust in complex situations, we propose an improved Gaussian Mixture Model (IGMM) for the background and combine the Deterministic Nonmodel-Based approach with a Gaussian Mixture Shadow Model (GMSM) to remove shadows. Secondly, we have developed an object tracking algorithm by establishing a tracking strategy and computing the similarity of color histograms. Given the known door position in the scene, we define tailgating behavior in order to detect tailgaters. Experiments show that our system is robust in complex environments, cost-effective in computation and practical in real-time applications. Abstract: We compare computational results for three procedures for reconstruction and texturing of 3D urban terrain. One procedure is based on recently developed "L1 splines", another on conventional splines and a third on "α-shapes". Computational results generated from optical images of a model house and of the Gottesaue Palace in Karlsruhe, Germany are presented. These comparisons indicate that the L1-spline-based procedure produces textured reconstructions that are superior to those produced by the conventional-spline-based procedure and the α-shapes-based procedure. Abstract: This paper presents a method to fuse the information from motion segmentation with an online adaptive neural classifier for robust object tracking. The motion segmentation with object classification identifies new objects present in the video sequence. This information is used to initialize the online adaptive neural classifier, which is trained to differentiate the object from its local background. The neural classifier can adapt to illumination variations and changes in appearance.
Initialized objects are tracked in following frames using the fusion of their neural classifiers with the feedback from the motion segmentation. Fusion is used to avoid drifting problems due to similar appearance in the local background region. We demonstrate the approach in several experiments using benchmark video sequences with different level of complexity. Abstract: In this paper, we present a real-time dense disparity map estimation based on beliefs propagation inference algorithm. While being real-time, our implementation generates high quality disparity maps. Despite the high complexity of the calculations beliefs propagation involves, our implementation on graphics processor using CUDA API makes more than 100 times speedup compared to CPU implementation. We tested our experimental results in the Middlebury benchmark and obtained good results among the real-time algorithms. We use several programming techniques to reduce the number of iterations to convergence and memory usage in order to maintain real-time performance. Abstract: Phase-measuring profilometry is a well known technique for 3D surface reconstruction based on a sinusoidal pattern that is projected on a scene. If the surface is partly occluded by, for instance, other objects, then the depth shows abrupt transitions at the edges of these occlusions. This causes ambiguities in the phase and, consequently, also in the reconstruction. This paper introduces a reconstruction method that is based on the instantaneous frequency instead of phase. Using these instantaneous frequencies we present a method to recover from ambiguities caused by occlusion. The recovery works under the condition that some surface patches can be found that are planar. This ability is demonstrated in a simple example. Abstract: In this work we describe a novel setup for implementation and development of stereo vision attention models in a realistic embodied setting. We introduce a stereo vision robot head, called POPEYE, that provides degrees of freedom comparable to a human head. We describe the geometry of the robot as well as the characteristics that make it a good candidate for studying models of visual attention. Attentional robot control is implemented with JAMF, a graphical modeling framework which allows to easily implement current state-of-the-art saliency models. We give a brief overview over JAMF and show implementations of four exemplary attention models that can control the robot head. Abstract: This paper presents a new annealing method for particle filtering in the context of body pose estimation. The feature-based annealing is inferred from the weighting functions obtained with common image features used for the likelihood approximation. We introduce a complementary weighting function based on the foreground extraction and we balance the different measures through the annealing layers in order to improve the posterior estimate. This technique is applied to estimate the upper body pose of a subject in a realistic multi-view environment. Comparative results between the proposed method and the common annealing strategy are presented to assess the robustness of the algorithm. Abstract: The virtual reality is a powerful tool to simulate the behavior of the physical systems. The visual system of a robot and its interplay with the 3D environment can be modeled and simulated through the geometrical relationships between the virtual stereo cameras and the virtual 3D world. 
The novelty of our approach is related to the use of the virtual reality as a tool to simulate the behavior of active vision systems. In the standard way, the virtual reality is used for the perceptual rendering of the visual information exploitable by a human user. In the proposed approach, a virtual world is rendered to simulate the actual projections on the cameras of a robotic system, thus the mechanisms of the active vision are quantitatively validated by using the available ground truth data. Abstract: We propose a robust approach to annotating independently moving objects captured by head mounted stereo cameras that are worn by an ambulatory (and visually impaired) user. Initially, sparse optical flow is extracted from a single image stream, in tandem with dense depth maps. Then, using the assumption that apparent movement generated by camera egomotion is dominant, flow corresponding to independently moving objects (IMOs) is robustly segmented using MLESAC. Next, the mode depth of the feature points defining this flow (the foreground) are obtained by aligning them with the depth maps. Finally, a bounding box is scaled proportionally to this mode depth and robustly fit to the foreground points such that the number of inliers is maximised. Abstract: The quality of point correspondences is crucial for the successful application of multi camera self-calibration procedures. There are several interest point detectors, local descriptors and matching algorithms, which can be combined almost arbitrarily. In this paper, we compare the point correspondences produced by several such combinations. In contrast to previous comparisons, we evaluate the correspondences based on the accuracy of relative pose estimation and multi camera calibration. Abstract: In this paper we review the main techniques for volume reconstruction from a set of views using Shape from Silhouette techniques and we propose a new method that adapts the inconsistencies analysis shown in (Landabaso et al., 2008) to the graph cuts framework (Snow et al., 2000) which allows the introduction of spatial regularization. For this aim we use a new viewing line based inconsistency analysis within a probabilistic framework. Our method adds robustness to errors by projecting back to the views the volume occupancy obtained from 2D foreground detections intersection, and analysing this projection. The final voxel occupancy of the scene is set following a maximum a posteriori (MAP) estimate. We have evaluated a sample of techniques and the new method proposed to have an objective measure of the robustness to errors in real environments. Abstract: This paper deals with the problem of tracking multiple objects in outdoor scenarios for the prospective of intelligent vehicles. The input of the proposed algorithm is the result of a stereovision obstacle detection algorithm. The aim is to establish the correspondence between the detected objects in consecutive frames and to reconstruct the trajectory of each individual object. To this purpose, an object model based on its scene position and its intensity caracteristic is defined. A track management strategy including track initiation, track termination and track continuation is also proposed. This strategy enables to deal with issues such as object appearance, dispapearance, occlusion and detection failure. An adaptive model update technique is applied in order to take into account appearance variations of the tracked object along time. 
Experiments were carried out in the context of pedestrian detection. Results on urban scenarios illustrate the performance of the proposed method. Abstract: This paper introduces a novel estimation technique to compute camera translation and rotation (only in the axis that is perpendicular to the image plane) when a marker is partially occluded. The approach has two main advantages: 1) only one marker is necessary; and 2) it has a low computational cost. As a result of the second feature, this proposal is ideal for mobile devices. Our method is implemented in ARToolkitPlus library, but it could be implemented in another marker-tracking library with square markers. A little extra image processing is needed, taking advantage of temporal coherence. Results show that user feels enough realistic sensation to apply this technique in some applications. Abstract: Video surveillance is one of the most studied application in Computer Vision. We propose a novel method to identify and track people in a complex environment with stereo cameras. It uses two stereo cameras to deal with occlusions, two different background models that handle shadows and illumination changes and a new segmentation algorithm that is effective in crowded environments. The algorithm is able to work in real time and results demonstrating the effectiveness of the approach are shown. Abstract: This paper presents algorithms and techniques towards a real-time and accurate Voxel Coloring framework. We combine Visual Hull, Voxel Coloring and Marching Cubes techniques to derive an accurate 3D model from a set of calibrated photographs. First, we adapted the Visual Hull algorithm for the computation of the bounding box from image silhouettes. Then, we improved the accuracy of the Voxel Coloring algorithm using both colorimetric and geometric citerions. The calculation time is reduced using an Octree data structure. Then, the Marching Cubes is used to obtain a polygonal mesh from the voxel reconstruction. Finally, we propose a practical way to speed up the whole process using graphics hardware capababilities. Abstract: In this work, we address the problem of 3D circle detection in a hierarchical representation which contains 2D and 3D information in the form of multi-modal primitives and their perceptual organizations in terms of contours. Semantic reasoning on higher levels leads to hypotheses that then become verified on lower levels by feedback mechanisms. The effects of uncertainties in visually extracted 3D information can be minimized by detecting a shape in 2D and calculating its dimensions and location in 3D. Therefore, we use the fact that the perspective projection of a circle on the image plane is an ellipse and we create 3D circle hypotheses from 2D ellipses and the planes that they lie on. Afterwards, these hypotheses are verified in 2D, where the orientation and location information is more reliable than in 3D. For evaluation purposes, the algorithm is applied in a robotics application for grasping cylindrical objects. Abstract: For tracking objects, the various template matching methods are usually used. However, those cannot completely cope with apparent changes of a target object in images. On the other hand, to discriminate multiple objects in still images, the label assignment based on the MAP estimation using object's features is convenient. In this study, we propose a method which enables to track multiple objects stably without explicit tracking by extending the above MAP assignment in the temporal direction. 
We propose two techniques; information of target position and its size detected in the previous frame is propagated to the current frame as a prior probability of the target region, and distribution properties of target's feature values in a feature space are adaptively updated based on detection results at each frame. Since the proposed method is based on a label assignment and then, it is not an explicit tracking based on target appearance in images, the method is robust especially for occlusion. Abstract: In this paper, we propose a new approach which registers a range image which is acquired from a 3-D range sensor to a DSM to estimate the 3-D pose of an unmanned ground vehicle. Generally, 3-D registration is divided into two parts that called as coarse and refinement steps. Above all, a proper feature matching technique is demanded between the DSM and the range image for the coarse registration to register precisely and speedy. We generated signatures using shape parameterization about the DSM and the range images and got a 3-D rigid transformation by matching them to minimize registration error. Abstract: In this paper we present an approach to object detection in surveillance video based on detecting moving edges using the Hadamard transform. The proposed method is characterized by robustness to illumination changes and ghosting effects and provides high speed detection, making it particularly suitable for surveillance applications. In addition to presenting an approach to moving edge detection using the Hadamard transform, we introduce two measures to track edge history, Pixel Bit Mask Difference (PBMD) and History Update Value (HUV) that help reduce the false detections commonly experienced by approaches based on moving edges. Experimental results show that the proposed algorithm overcomes the traditional drawbacks of frame differencing and outperforms existing edge-based approaches in terms of both detection results and computational complexity. Abstract: In this paper, we propose a novel edge gradient based template matching method for object detection. In contrast to other methods, ours does not perform any binarization or discretization during the online matching. This is facilitated by a new continuous edge gradient similarity measure. Its main components are a novel edge gradient operator, which is applied to query and template images, and the formulation as a convolution, which can be computed very efficiently in Fourier space. We compared our method to a state-of-the-art chamfer based matching method. The results demonstrate that our method is much more robust against weak edge response and yields much better confidence maps with fewer maxima that are also more significant. In addition, our method lends itself well to efficient implementation on GPUs: at a query image resolution of 320×256 and a template resolution of 80×80 we can generate about 330 confidence maps per second. Abstract: We propose a novel algorithm for stereo matching using a dynamical systems approach. The stereo correspondence problem is first formulated as an energy minimization problem. From the energy function, we derive a system of differential equations describing the corresponding dynamical system of interacting elements, which we solve using numerical integration. Optimization is introduced by means of a damping term and a noise term, an idea similar to simulated annealing. The algorithm is tested on the Middlebury stereo benchmark. 
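The last abstract above describes stereo correspondence as an energy minimization solved by integrating a damped, noise-driven dynamical system. A toy 1-D version of that idea is sketched below under my own assumptions (synthetic scanline signals, made-up parameter values, simple explicit Euler integration); it is only meant to illustrate the damping/annealing mechanism, not to reproduce the authors' algorithm or their Middlebury results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "scanline" pair: the right signal is (roughly) the left one shifted by the true disparity.
x = np.arange(200, dtype=float)
true_disp = 2.0 + 1.5 * (x > 100)                     # piecewise-constant disparity

def signal(u):
    return np.sin(0.12 * u) + 0.5 * np.cos(0.31 * u)

left = signal(x)
right = signal(x - true_disp)

lam, gamma, dt = 2.0, 0.9, 0.05                       # smoothness weight, damping, step size

def energy_grad(d):
    """Gradient of E(d) = sum (left(x) - right(x + d))^2 + lam * sum (d[i+1] - d[i])^2."""
    warped = np.interp(x + d, x, right)               # right signal sampled at x + d
    dright = np.interp(x + d, x, np.gradient(right, x))
    data_grad = -2.0 * (left - warped) * dright
    smooth_grad = 2.0 * lam * np.convolve(d, [-1.0, 2.0, -1.0], mode="same")
    return data_grad + smooth_grad

# Damped second-order dynamics with decaying noise (the simulated-annealing-like part).
d = np.zeros_like(x)                                  # disparity estimate, one value per pixel
v = np.zeros_like(x)                                  # its "velocity"
for it in range(4000):
    temp = 0.5 * (1.0 - it / 4000.0)                  # noise level decays over time
    v = gamma * v - dt * energy_grad(d) + temp * rng.normal(0.0, dt, size=x.shape)
    d = d + dt * v

print("mean absolute disparity error:", np.mean(np.abs(d - true_disp)))
```

The damping term bleeds energy out of the system so it settles into a minimum, while the decaying noise gives it a chance to escape shallow local minima early on, which is the role annealing plays in the abstract's formulation.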
Abstract: We propose a new method for Motion Detection using stationary camera, where the information of different motion detectors which are not robust but light in terms of computation time (what we will call weak motion detector (WMD)) are merged with spatio-temporal Markov Random Field to improve the results. We put the strength, instead of on the weak motion detectors, on the fusion of their information. The main contribution is to show how the MRF can be modeled for obtaining a robust result. Experimental results show the improvement and good performance of the proposed method. Abstract: Feature density approximation (FDA) based visual object appearance representation is emerging as an effective method for object tracking, but its challenges come from object's complex motion (e.g. scaling, rotation) and the consequent object's appearance variation. The traditional adaptive FDA methods extract features in fixed scales ignoring the object's scale variation, and update FDA by sequential Maximum Likelihood estimation, which lacks robustness for sparse data. In this paper, to solve the above challenges, a robust multi-scale adaptive FDA object representation method is proposed for tracking, and its robust FDA updating method is provided. This FDA achieve robustness by extracting features in the selected scale and estimating feature density using a new likelihood function defined both by feature set and the feature's effectiveness probability. In FDA updating, robustness is achieved updating FDA in a Bayesian way by MAP-EM algorithm using density prior knowledge extracted from historical density. Object complex motion (e.g. scaling and rotation) is solved by correlating object appearance with its spatial alignment. Experimental results show that this method is efficient for complex motion, and robust in adapting the object appearance variation caused by changing scale, illumination, pose and viewing angel. Abstract: Registration of laser range data becoming from different scanner positions is still a current topic in literature. In this paper we introduce the possibility of solving it by using spin images, which create a 2D image for every 3D coordinate vertex in the scans. Matching between spin images allows the estimation of an initial rigid transformation between the scans, which later can be refined with ICP process in order to achieve a more accurate registration. Abstract: A new approach for modelling electrical discharges is proposed. To this purpose, an active contour named 3Dsnake is used that is geometrically represented by a B-spline which evolves in 3D space constrained by internal and external energies. More specifically, this external energy come from a pair of images. This new model is much less dependent on determination of homologous points than the approaches found in the literature for recovering 3D geometry of electrical discharges. In addition, the proposal discussed here is capable of tracking the evolution os the electrical discharge taking into account the time dependence between consecutive pairs of frames in two videos. Abstract: This paper presents a method for counting and classifying vehicles on motorway. The system is based on a multi-camera system fixed over the road. Different features (maximum phase congruency and edges) are detected on the two images and matched together with local matching algorithm. The resulting 3D points cloud is processed by maximum spanning tree clustering algorithm to group the points into vehicle objects. 
Bounding boxes are defined for each detected object, giving an approximation of the vehicles 3D sizes. A complementary 2D quadrilateral detector has been developed to enhance the probability of matching features on vehicle exhibiting little texture such as long vehicles. The algorithm presented here was validated manually and gives 90% of good detection accuracy. Abstract: The problem of human detection in crowded scenes where people may occlude each other has been tackled recently using the planar homography constraint in a multiple view framework. The foreground objects detected in each view are projected on a common plane in an accumulated fashion and then the maxima of this accumulation are matched to the moving targets. However the superposition of foreground objects projections on a common plane may create artifacts which can seriously disorientate a human detector by creating false positives. In this work we present a method which eliminates those artifacts by using only geometrical information thus contributing to robust human detection for multiple views. The presented experimental results validate the proposed approach. Abstract: In this work we introduce a color based image-audio system that enhances the perception of the visually impaired users. Traditional sound-vision substitution systems mainly translate gray scale images into corresponding audio frequencies. However, these algorithms deprive the user from the color information, an critical factor in object recognition and also for attracting visual attention. We propose an algorithm that translates the scene into sound based on some classical computer vision algorithms. The most salient visual regions are extracted by a hybrid approach that blends the computed salient map with the segmented image. The selected image region is simplified based on a reference color map dictionary. The centroid of the color space are translated into audio by different musical instruments. We chose to encode the audio file by polyphonic music composition reasoning that humans are capable to distinguish more than one instrument in the same time but also to reduce the playing duration. Testing the prototype demonstrate that non-proficient blindfold participants can easily interpret sequence of colored patterns and also to distinguish by example the quantity of a specific color contained by a given image. Abstract: During the two last decades, many contributions have been proposed on 3D reconstruction from image sequences. Nevertheless few practical applications exist, especially using vision. We are concerned by the analysis of image sequences acquired during crash tests. In such tests, it is required to extract 3D measurements about motions of objects, generally identified by specific markings. With numerical cameras, it is quite simple to acquire video sequences, but it is very difficult to obtain from operators in charge of these acquisitions, the camera parameters and their relative positions when using a multicamera system. In this paper, we are interested on the simplest situation: two cameras observing the motion of an object of interest: the challenge consists in reconstructing the 3D model of this object, estimating in the same time, the intrinsic and extrinsic parameters of these cameras. So this paper copes with 3D Euclidean reconstruction with uncalibrated cameras: we recall some theoretical results in order to evaluate what are the possible estimations when using only two images acquired by two distinct perspective cameras. 
Typically these will be the first two images of our sequences. Several contributions from the state of the art on these topics are presented, followed by results obtained from synthetic data, so that we can assess the advantages and drawbacks of several parameter estimation strategies, based on the Sparse Bundle Adjustment and on the Levenberg-Marquardt optimization function. Abstract: The present paper proposes a visual integration algorithm that integrates intensity edge information into a stereo algorithm. The stereo algorithm assumes two constraints of continuity and uniqueness on the disparity distribution. Since depth discontinuity around object boundaries does not satisfy the continuity constraint, it causes numerous errors in stereo disparity detection. In order to reduce the errors due to the depth discontinuity, we propose a new algorithm that integrates intensity edge information into the stereo algorithm. The stereo algorithm utilizes reaction-diffusion equations, in which diffusion coefficients control the continuity constraint. Thus, we introduce anisotropic diffusion fields into the reaction-diffusion equations; that is, we modulate the diffusion coefficients according to the results of edge detection applied to the image intensity distribution. We demonstrate how the proposed algorithm works around areas having depth discontinuity and confirm the quantitative performance of the algorithm in comparison to other stereo algorithms. Abstract: The development of new interaction paradigms requires a natural interaction. This means that people should be able to interact with technology with the same models used to interact with everyday real life, that is, through gestures, expressions and voice. Following this idea, in this paper we propose a non-intrusive vision-based tracking system able to capture hand motion and simple hand gestures. The proposed device allows the hand to be used as a "natural" 3D mouse, where the forefinger tip or the palm centre is used to identify a 3D marker and the hand gesture can be used to simulate the mouse buttons. The approach is based on a monoscopic tracking algorithm which is computationally fast and robust against noise and cluttered backgrounds. Two image streams are processed in parallel exploiting multi-core architectures, and their results are combined to obtain a constrained stereoscopic problem. The system has been implemented and thoroughly tested in an experimental environment where the 3D hand mouse has been used to interact with objects in a virtual reality application. We also provide results about the performance of the tracker, which demonstrate the precision and robustness of the proposed system. Abstract: We address the problem of camera motion from point and line correspondences across multiple views. We first investigate the mathematical relation between the slopes of lines in the different images acquired after a rotational motion of the camera. Assuming that lines in successive images are tracked, this relation is used for estimating the rotation angles of the camera. Experiments are conducted on real images and the obtained results are presented and discussed. Abstract: The recovery of three-dimensional structures from moving elements is one of the main abilities of the human perception system. It is mainly based on particularities of how we interpret moving features, especially on the enforcement of geometrical grouping and the definition of relations between features.
In this paper we evaluate how the human abilities of motion based feature clustering can be transferred to an algorithmic approach to determine the structure of a rigid or articulated body in an image sequence. It shows how to group sparse 3D motion features to structural clusters, describing the rigid elements of articulated body structures. The location and motion properties of sparse feature point clouds have been analyzed and it is shown that moving features can be clustered by their local and temporal properties without any additional image information. The assembly of these structural groups could allow the detection of a human body in an image as well as its pose estimation. So, such a clustering can establish a basis for a markerless reconstruction of articulated body structures as well as for human motion recognition by moving features. Abstract: Object detection in videos involves verifying the presence of an object in image sequences and possibly locating it precisely for recognition. Object tracking is to monitor an object's spatial and temporal changes during a video sequence, including its presence, position, size, shape, etc. These two processes are closely related because tracking usually starts with detecting objects, while detecting an object repeatedly in subsequent image sequence is often necessary to help and verify tracking. In this paper, a novel approach is being presented for detecting and tracking object. It includes combination of Kalman filter and fast mean shift algorithm. Kalman prediction is measurement follower. It may be misled by wrong measurement. In order to cater it, fast mean shift algorithm is used. It is used to locate densities extrema, which gives clue that whether Kalman prediction is right or it is misled by wrong measurement. In case of wrong prediction, it is corrected with the help of densities extrema in the scene. The proposed approach has the robust ability to track the moving object in the consecutive frames under some kinds of difficulties such as rapid appearance changes caused by image noise, illumination changes, and cluttered background. Abstract: This paper presents an extension of a previously reported method for object tracking in video sequences to handle the problems of object crossing and occlusion by other objects in the same class that the one followed. The proposed solution is embedded in a system that integrates recognition and tracking in a probabilistic framework. In a recent work, a method to approach the object occlusion problem was proposed that failed when the object crossed or was occluded by another object of the same class. Here we present an attempt to overcome this limitation and show some promising results. The method is based on the assumption that when two objects cross each other there is not a brusque change of the trajectories. Our system uses object recognition results provided by a neural net that are computed from colour features of image regions for each frame. The location of tracked objects is represented through probability images that are updated dynamically using both recognition and tracking results. From these probabilities and a prediction of the motion of the object in the image, a binary decision is made for each pixel and object. Abstract: Data Assimilation is a mathematical framework used in environmental sciences to improve forecasts performed by meteorological, oceanographic or air quality simulation models. 
Data Assimilation techniques require the resolution of a system with three components: one describing the temporal evolution of a state vector, one coupling the observations and the state vector, and one defining the initial condition. In this article, we use this framework to study a class of ill-posed Image Processing problems, usually solved by spatial and temporal regularization techniques. A generic approach is defined to convert an ill-posed Image Processing problem in terms of a Data Assimilation system. This method is illustrated on the determination of optical flow from a sequence of images. The resulting software has two advantages: a quality criterion on input data is used for weighting their contribution in the computation of the solution and a dynamic model is proposed to ensure a significant temporal regularity on the solution. A NEW LIKELIHOOD FUNCTION FOR STEREO MATCHING - How to Achieve Invariance to Unknown Texture, Gains and Offsets? Abstract: We introduce a new likelihood function for window-based stereo matching. This likelihood can cope with unknown textures, uncertain gain factors, uncertain offsets, and correlated noise. The method can be fine-tuned to the uncertainty ranges of the gains and offsets, rather than a full, blunt normalization as in NCC (normalized cross correlation). The likelihood is based on a sound probabilistic model. As such it can be directly used within a probabilistic framework. We demonstrate this by embedding the likelihood in a HMM (hidden Markov model) formulation of the 3D reconstruction problem, and applying this to a test scene. We compare the reconstruction results with the results when the similarity measure is the NCC, and we show that our likelihood fits better within the probabilistic frame for stereo matching than NCC. Abstract: An important research is done to exploit the characteristics of PTZ cameras. These cameras allow motorized cover a wide field of view. A classic application of these cameras is to image mosaicing. But they can also be used to track moving objects. In this paper, we present an original approach for performing the registration, adapted to the case of central projection and a background subtraction algorithms for these cameras. The background image is iteratively updated and only on the part "seen" by the camera. We have experimented different segmentation algorithms using our background modeling technique and this approach makes it possible object tracking in real time for PTZ cameras. Abstract: Motivated by cultural heritage, industry, medicine we are developing 3D-scanners and post-processing systems for rapid and precise documentation of surfaces with curvature. By constantly increasing resolution and accuracy of our system we can enable the documentation of small deviations of even flat surfaces – like frescos. This enables documentation of important features for restoration like small fractures or topology of paintstrokes for scientific research. The 3D-documentation can be done in-situ, radiation-free and contact-free using a structured (coded) light-source and a digital camera. Using light for documentation of colourful painted surface lead to the integration of colour-filtering techniques to "see thru" the first layer(s) of paint. This approach, typically known from photography, is used to reveal under- drawings of paintings. While photographs suffer from lens distortion lacking a precise scale, we can provide the height of paint-layers in µm in a properly calibrated scale. 
This method has already been successfully tested on synthetic data and on medieval paintings and statues, which do not cover all painting techniques known to art historians. Therefore we conducted experiments in Pompeii to determine the capabilities of our system for fresco paintings. Results shown in this report cover traditional close-range 3D-acquisition for larger fields of view (on the order of m²) and multi-spectral 3D-acquisition for paint layers with a field of view of approximately 600 cm². Regarding performance, given the tremendous number of frescos, we could show that 3D-acquisition can be done in roughly 15 minutes per m². Multi-spectral 3D-acquisition can be applied in a similarly fast manner by using expert knowledge to narrow down the areas of interest. Abstract: This paper deals with image stabilization for video-based tracking systems. It begins with an introduction to image stabilization. A short description of known image stabilization algorithms follows, including our solution based on these methods with some optimizations. Finally, we present a suitable hardware platform, which was developed and constructed by us and uses a DSP, an FPGA and SDRAM. The combination of our software and our hardware is new and very promising. Abstract: This paper presents a new mono-camera system for traffic surveillance. It uses an original algorithm to automatically obtain a calibration pattern from road lane markings. Movement detection is done with a Σ-Δ background estimation, a nonlinear method of background subtraction based on comparison and elementary increment/decrement operations. The foreground and calibration data obtained allow vehicle speeds to be determined efficiently. Finally, a new method to estimate the height of vehicles is presented. Abstract: This paper presents a time-varying Gabor filter bank predictor for use with vehicle tracking via surveillance video. A frame-based 2D Gabor-filter bank is selected as a primary detector for any changes in a given video frame sequence. Detected changes are localized in each frame by fitting a bounding box on the silhouette of the vehicle in the region of interest (ROI). The arbitrary motion of each vehicle is fed to a non-linear directional predictor along the time axis for estimating the location of the tracked vehicle in the next frame of the video sequence. Real-time traffic-video experimentation shows that the cone Gabor filter structure is able to tune itself to a selected target and trace it accordingly. This property is highly desirable for fast and accurate tracking of moving vehicles or targets in range- and intensity-driven sensing. Abstract: To be effective outdoors, automated video surveillance systems should recognize and monitor human activities under various amounts of light. In this paper, we present a human face tracking system based on the classification of skin pixels using colour and texture properties. The originality of this work concerns the use of a specific dynamical classifier: an incremental SVM algorithm, equipped with dynamic learning and unlearning rules, is designed to track the variation of the skin-pixel distribution. This adaptive skin classification system is able to detect and track a face under large lighting condition variations.
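The Σ-Δ background estimation mentioned in the traffic surveillance abstract is simple enough to sketch. The following is a minimal, commonly used formulation and not necessarily the authors' exact variant; the function name, the amplification factor N, and the 8-bit grayscale assumption are mine.

```python
import numpy as np

def sigma_delta_step(frame, background, variance, N=4):
    """One per-pixel Sigma-Delta update: the background and variance estimates
    follow the current frame by elementary +/-1 increments, then a comparison
    marks foreground pixels. Inputs are equally shaped 8-bit grayscale arrays."""
    f = frame.astype(np.int16)
    bg = background.astype(np.int16)
    var = variance.astype(np.int16)

    bg += np.sign(f - bg)            # background follows the frame by +/-1 per pixel
    diff = np.abs(f - bg)
    var += np.sign(N * diff - var)   # variance estimate follows N * |difference| the same way
    var = np.clip(var, 1, 255)

    foreground = diff > var          # comparison step: large deviation means a moving pixel
    return bg.astype(np.uint8), var.astype(np.uint8), foreground
```

In practice the background would be initialized with the first frame and the update applied once per incoming frame.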
CommonCrawl
So this is the simplex algorithm that Dantzig developed for the USAF. :sigh: here we go. As with the rest of the LP notes, pretty much taking a lot of this right from U. Wash. Math 407. You also give the objective function (what you're minimizing/maximizing) its own variable, usually written $z$. Once you've assigned all your slack variables, this is called your dictionary. Now throw your stuff back into a form where all the slack $x$'s are on one side and they equal your $b$'s (like 5 in the above). See slide 17 of the simplex1 notes in the link at the top if you don't get it. Now you have what's called an augmented matrix or simplex tableau. The slack variables that we added before? The ones on the left hand side of the simplex tableau/dictionary. Yeah, well those are now called the basic variables (i.e. $x_5$ in the second eq. above) and the ones in the center (i.e. $x_2, x_3, x_4$ of the first equation I just threw out there) are called the nonbasic variables. So to get a basic solution, we basically just set the nonbasic variables to 0 and solve the system of equations for the basic ones. If a basic solution also satisfies the nonnegativity constraints, it's called a basic feasible solution. The associated dictionary is said to be a feasible dictionary and the LP is said to have feasible origin. The optimal value is $z=16$ and now you also have another feasible dictionary. The process of moving from one feasible dictionary to another is called simplex pivoting. "A pivot corresponds to doing Gauss-Jordan elimination on the column in the simplex tableau (augmented matrix) corresponding to the incoming variable." You can actually then apply simplex pivoting right on the simplex tableau. The UW slides have a really good example of this starting on slide 96. It's very very clear. Thank you James V. Burke!!
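For a concrete sense of what a single pivot does to the tableau, here is a minimal sketch. The layout (a NumPy array, objective row included as just another row) is my own assumption and not taken from the Math 407 slides; choosing the incoming column and the leaving row via the ratio test is left out to keep it short.

```python
import numpy as np

def pivot(tableau, row, col):
    """One simplex pivot: Gauss-Jordan elimination on `col`, so the incoming
    variable's column becomes a unit vector with a 1 in `row`."""
    T = np.asarray(tableau, dtype=float).copy()
    T[row] /= T[row, col]               # scale the pivot row so the pivot entry is 1
    for r in range(T.shape[0]):
        if r != row:
            T[r] -= T[r, col] * T[row]  # zero out the pivot column in every other row
    return T
```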
CommonCrawl
Abstract The JIMWLK Hamiltonian for high energy evolution of QCD amplitudes is presented at the next-to-leading order accuracy in $\alpha_s $. The general form of the Hamiltonian is deduced from the symmetries and the structure of the hadronic light cone wavefunction. The independent functional kernels are then extracted by comparing the rapidity evolution of the quark dipole and the three-quark singlet states generated by this Hamiltonian with the corresponding results available in the literature.
CommonCrawl
Abstract: A new operation of product of groups, the $n$-periodic product of groups for odd exponent $n\ge 665$, was proposed by the author in 1976 in the paper . This operation is described on the basis of the Novikov–Adyan theory introduced in the monograph of the author. It differs from the classic operations of direct and free products of groups, but has all of the natural properties of these operations, including the so-called hereditary property for subgroups. Thus, the well-known problem of A. I. Maltsev on the existence of such new operations was solved. Unfortunately, in the paper , the case where the initial groups contain involutions was not analyzed in detail. It is shown that, in the case where the initial groups contain involutions, this small gap is easily removed by an additional restriction on the choice of defining relations for the periodic product. It suffices to simply exclude products of two involutions of previous ranks from the inductive process of defining new relations for any given rank $\alpha$. It is suggested that the adequacy of the given restriction follows easily from the proof of the key Lemma II.5.21 in the monograph . We also mention that, with this additional restriction, all the properties of the periodic product given in remain true with obvious corrections to their formulations. Moreover, under this restriction, one can consider $n$-periodic products for any period $n\ge665$, including even periods.
CommonCrawl
Today is a big day for Samuel. He wants to join the baseball team, but since he's not a real big athlete he uses his engineering skills to figure out a way to make the team. Samuel developed a special pair of glasses to analyze his opponents. He uses factoring trinomials to figure out when the ball will hit the ground. Before the player hits the ball, he gets a function of the flight path of the ball to know where it will land. Let's take a look at the view from his glasses. Samuel is standing in right field, ready for anything. The batter hits the ball with this function: h(x) = -5x² + 14x + 3, where h(x) represents the height of the ball in feet and 'x' represents the time in seconds. We are looking for h(x) = 0, or when the ball hits the ground. Hmm, this is not easy to solve, but Sammy's special spectacles can factor things in a jiffy. As you know, the standard form of a quadratic function is y = ax² + bx + c. In this problem, a = -5, b = 14, and c = 3. First, we have to find the product of a and c, which is -5 times 3, which equals -15. Now, we have to find the factor pairs of -15 (since -15 is negative, only one of the factors should be negative). Let's think of the possible factors of -15. The factors we have are -1(15), 1(-15), -3(5), and 3(-5). These are the only possible factor pairs of -15. Next, we want to find the pair that sums to b, which, in our case is positive 14. Let's look at the sum of these factors. Remember, we're looking for 14 as the result. The sum of the first combination, -1 and 15, equals 14. It looks like we've found the correct factor pair. To factor this quadratic function, where the right side is a trinomial with 'a ≠ 1', we use the box method. We fill the box with the terms of the quadratic function. Generally, we place the first term in the upper-left-hand corner and the last term in the lower-right-hand corner. Now that we know that 14x is equal to -1x + 15x, we can complete the box with these terms in the other two corners. We need to find the greatest common factor for each row. For the top row, the greatest common factor is -1x. For the bottom row, the greatest common factor is 3. Next we need to find the greatest common factor for each column. For the first column, the greatest common factor is 5x, and for the second it is 1. The factorization of the trinomial is the product of the GCF column sum (5x + 1) and the GCF row sum (-1x + 3), giving us (5x + 1)(-x + 3). We can write our trinomial as a product of two binomials. Because we want to know when the ball hits the ground, we set h(x) = 0, since 'h' represents the height of the ball. The equations 5x + 1 = 0 and -x + 3 = 0 give the two candidate times at which the ball could hit the ground. The first step is to subtract the integer value from both sides of the equation. The next step is to divide by the coefficient in front of 'x' on both sides of the equation. The solutions to the two equations are -1/5 and 3. We can't use -1/5 because we can't measure time with negative values, so the ball will hit the ground after 3 seconds. 1) Factor out the GCF, if there's any, and make sure that the trinomial is written in standard form; i.e. ax² + bx + c. 2) Multiply the leading coefficient a and the constant c. 3) Find m and n such that m*n=ac and m+n=b. 4) Rewrite the middle term bx as mx + nx. 5) Group like terms or use the box method. 2. a=6 and c=5, so ac = 6 × 5 = 30. 3. 15 × 2 = 30, so m=15 and n=2. Would you like to apply what you have learned? With the exercises for the video Factoring Trinomials with a ≠ 1, you can review and practice it.
Factor the quadratic function: $h(x) = 2x^2 + 4x - 6$. Before applying the box method, replace $b$ with a sum of two factors of $ac$. Fill in a box with terms from the quadratic function and then find the greatest common factors of the columns and rows. When presented with a quadratic function in standard form, the first step in factoring is to identify its coefficients; i.e. $a$, $b$, and $c$. We then need to find the product of $a$ and $c$, and then find the pair of factors of this product which sum to $b$. For example, with the equation $h(x) = 2x^2 + 4x - 6$, we have that $a = 2$, $b = 4$, and $c = -6$. Calculating the product of $a$ and $c$ gives us $ac = 2(-6) = -12$. So we need to find a pair of factors of $-12$ that sums to $4$. That pair is $-2$ and $6$, so we rewrite the function as $h(x) = 2x^2 - 2x + 6x - 6$. Then we apply the box method to the trinomial, using the form of our function that has four terms to fill in the quadrants of the box. We find the greatest common factor (GCF) for each row and each column of the box, and sum the GCFs of the rows and the GCFs of the columns. The product of these sums gives us the factored form of our quadratic equation. Identify the standard form of a quadratic function. Remember, the standard form has not been factored. The standard form is simplified as much as possible. The function $h(x) = ax^2 + bx + c$ is in standard form. The functions $h(x) = (x-a) (x-b)$ and $h(x) = m (x-a) (x-b)$ are both in factored form. The factored form of a quadratic function provides the values of $x$ at which the function is equal to zero. These are called the roots, and are equal to $a$ and $b$. The second function is a special case of the third function, where $m = 1$. The function $h(x) = a(x-h)^2 + k$ is in vertex form. Determine when a ball will land by using a quadratic function. What is the height of the ball off the ground, if the ball is touching the ground? The right hand side of the equation is a product of two terms. When is the product of two terms equal to zero? Do you think that the flight of the ball is represented by the same function before the batter hits the ball? If the flight of the ball changes when the batter hits the ball, what does that mean about negative values of time in the equation? Keep in mind that time is equal to zero when the batter hits the ball, so a negative value of time occurs before the batter hits the ball. Let's use the example equation given: $h(x) = (5x+1)(-x +3)$. In order to find the time when the ball hits the ground, we need to find the value of $x$ at which the function equals zero, i.e. $h(x) = 0$. Since $h(x)$ is a product of two terms, it will be equal to zero when either (or both) of these terms is equal to zero. Setting the first factor to zero, $5x + 1 = 0$, gives $x = -\frac{1}{5}$. Similarly, with the second expression, we find that $x = 3$. We know that $h(x)$ is a function representing the height of the ball as a function of time. When the batter hits the ball, its flight changes completely. It doesn't make sense to have a negative value of time in our equation, because before the batter hits the ball it has a completely different trajectory that is not described by our function. So we must reject the value of $x$, or time, that is negative. This leaves us with the ball hitting the ground after three seconds. Whenever applying math to real-life situations it's important to think about your solutions and decide if they are reasonable. Find the factored form of each function written in standard form. Pay close attention to the signs.
Each standard form equation can be factored using the box method. Let's use this function $h(x) = 3x^2 + 5x + 2$ as an example. First we need to find the pair of factors of $ac$ that sum to $b$. In the case of $h(x)$, we have $ac = 6$ and $b = 5$. We then have that $2$ and $3$ are the pair of factors we need. We can now replace the coefficient $b$ with $3 + 2$, giving us the function $h(x) = 3x^2 + 3x + 2x + 2$. Now that we have $h(x)$ written with four terms, we can factor using the box method. First, we fill in the four quadrants of the box, as in the image. Then we find the greatest common factor (GCF) of each row and column. The greatest common factor of the top row is $3x$, because this is the largest term that divides both $3x^2$ and $3x$. So we write $3x$ off to the left of the top row. Similarly, we find the GCFs of each row and column. Then we sum the GCFs of the rows and the GCFs of the columns. The product of these sums is our factored function, $h(x) = (3x+2)(x + 1)$. Use the box method to factor $h(x)=4x^2 + 7x - 2$. Which blanks make the most sense to fill in first? What pair of factors do you need to find before you can begin to apply the box method? For example, if $ac = 2$ and $b = 3$, we need the pair of factors of $2$ that sum to $3$: the candidates are $1$ and $2$, and $-1$ and $-2$. Summing these pairs gives us $1 + 2 = 3$ and $-1 + (-2) = -3$. So the pair of factors of $ac$ that sum to $b$ is $1$ and $2$. To apply the box method, we need to know all four terms that fill in the sections of the box. This means that we need to find a pair of factors of $ac$, which sum to $b$, so that we can replace the coefficient $b$ by their sum. For our particular function, $h(x)=4x^2 + 7x - 2$, we have that $ac = 4\times -2 = -8$. The term $b$ is equal to $7$, so the pair of factors of $ac$ which sum to $b$ is $8$ and $-1$. This means that our function becomes $h(x) = 4x^2 + 8x - x -2$ and the remaining two terms in our box become $8x$ and $-x$. We then need to find the greatest common factors of each row and column in our box. Let's put $8x$ in the upper right hand corner, and $-x$ in the lower left hand corner. The greatest common factor for the top row is $4x$ because this is the largest term that both $4x^2$ and $8x$ are divisible by. Likewise, the greatest common factor (GCF) for the bottom row is $-1$, the GCF for the first column is $x$, and the GCF for the second column is $2$. Now we find the product of the GCF column sum and the GCF row sum, which gives us the factored form of our function, $h(x) = (4x - 1)(x + 2)$. Factor the quadratic function $h(x) = -3x² + 4x + 4$. Does the box method apply to this exercise? What pair of factors do you need to find before you can apply the box method? We start with the equation $h(x) = -3x^2 + 4x + 4$. We need to find its factored form to help Samuel understand when the ball will hit the ground. The coefficients of $h(x)$'s standard form equation are $a = -3$, $b = 4$, and $c = 4$. The product of $a$ and $c$ is $-12$. So we know we need to find a pair of factors of $-12$ that sums to $4$. The pair of factors that satisfies these requirements is $6$ and $-2$. We then replace the coefficient $b$ from the standard form equation with the sum of $6$ and $-2$ and apply the box method using the terms of $h(x)=-3x^2 + 6x -2x + 4$. We must then find the greatest common factor for each row and column. The greatest common factor (GCF) for the first row is $3x$, because $3x$ is the largest term that both $-3x^2$ and $6x$ are divisible by. Similarly, we find the GCFs of each row and column.
$h(x) = (3x + 2)(-x + 2)$.
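The factor-pair search that the box method walks through by hand is easy to automate. Below is a minimal sketch; the brute-force strategy, the function name, and the assumption of nonzero integer coefficients are mine and not part of the lesson.

```python
def factor_trinomial(a, b, c):
    """Search for integers p, q, r, s with p*r == a, q*s == c and p*s + q*r == b,
    so that a*x**2 + b*x + c == (p*x + q) * (r*x + s).
    Returns (p, q, r, s) or None. Assumes a and c are nonzero integers."""
    def divisor_pairs(n):
        pairs = []
        for d in range(1, abs(n) + 1):
            if n % d == 0:
                pairs.append((d, n // d))
                pairs.append((-d, -(n // d)))
        return pairs

    for p, r in divisor_pairs(a):
        for q, s in divisor_pairs(c):
            if p * s + q * r == b:
                return p, q, r, s
    return None

# Example: factor_trinomial(-5, 14, 3) returns a valid split such as (1, -3, -5, -1),
# i.e. (x - 3)(-5x - 1), which matches (5x + 1)(-x + 3) after moving a factor of -1.
```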
CommonCrawl
Finite temperature study of the axial U(1) symmetry on the lattice with overlap fermion formulation. We examine the axial U(1) symmetry near and above the finite temperature phase transition in two-flavor QCD using lattice QCD simulations. Although the axial U(1) symmetry is always violated by quantization, i.e. by the chiral anomaly, the correlation functions may manifest effective restoration of the symmetry in the high temperature phase. We explicitly study this possibility by calculating the meson correlators as well as the Dirac operator spectral density near the critical point. Our numerical simulations are performed on a $16^3\times 8$ lattice with two flavors of dynamical quarks represented by the overlap fermion formalism. Chiral symmetry and its violation due to the axial anomaly are manifestly realized with this formulation, which is a prerequisite for the study of the effective restoration of the axial U(1) symmetry. In order to avoid discontinuity in the gauge configuration space, which occurs for exactly chiral lattice fermions, the simulation is confined to a fixed topological sector. This induces a finite volume effect, which is well described by a formula based on the Fourier transform from the $\theta$-vacua. We confirm this formula at finite temperature by calculating the topological susceptibility in the quenched theory. Our two-flavor simulations show degeneracy of the meson correlators and a gap in the Dirac operator spectral density, which implies that the axial U(1) symmetry is effectively restored in the chirally symmetric phase.
CommonCrawl
Wideband phased arrays require very tight element spacing to permit wide angle scanning of the main beam over the wide bandwidth. The consequence of tight spacing is very high mutual coupling among the elements in the array. Previous efforts by the Virginia Tech Antenna Group have shown that the strong coupling can be utilized in arrays to obtain a broadband frequency response while maintaining a small element spacing. However, mutual coupling between elements in a tightly coupled array can sometimes dramatically change the operating frequency, bandwidth, and radiation pattern from those of the single isolated element. Thus, there are some fundamental questions that remain regarding the effective operation of highly coupled arrays for beam forming, beam scanning, and aperture reconfiguration. Existing antenna pattern analysis techniques, including the active element pattern method, are inadequate for application to highly coupled arrays. This dissertation focuses on the development of a new antenna array analysis technique. The presented method is based on the scattering parameter network descriptions of the array elements, the associated feed network and the active element patterns. The developed model is general. It can be applied to an array of any size and configuration. The model can be utilized to determine directivity, gain and realized gain of arrays as well as their radiation efficiency and impedance mismatch. Using the network model, the relationship between the radiation pattern characteristics and the input impedance characteristics of the array antennas becomes clear. Three types of source impedance matching conditions for array antennas are investigated using the model. A numerically simulated strip dipole array is used to investigate the effects of various impedance matching methods on the radiation pattern and impedance bandwidth. An application of network analysis is presented on an experimental investigation of a $3\times 3$ Foursquare array test bed to further verify the concepts.
CommonCrawl
The key idea behind MCMC algorithms is that, under certain conditions, Markov chains have a stationary distribution. If we can build a Markov chain whose stationary distribution is the distribution that we would like to sample from, then it is relatively easy to get draws from this distribution by simulating lots of points and then randomly drawing from them. Gibbs sampling uses closed-form expressions for the conditionals, so every draw is accepted. 2. It is hard to determine rates of convergence, so the number of periods you choose for "convergence" is somewhat arbitrary and is done using guesswork. In order to deal with this, we can take advantage of a very simple idea. Let's start a process at period $-\infty$ and simulate it forward until period 0. At period 0, it should have converged to the stationary distribution. In continuous state spaces this is more difficult, so I will explain it for discrete spaces. Imagine we have $N$ states in our Markov chain; then at period $-T$, the process has to be at one of the $N$ states. Start $N$ processes (one at each state) at period $-T$ and simulate them forward until period 0 (with the same set of shocks!). If all of these processes have converged to a single state, then we know that our original process would also have converged to that state, and thus the process is now in its stationary distribution (because we have «simulated» the process for an infinite number of periods). If they haven't coalesced, then try again with a larger $T$; these processes are guaranteed to «coalesce» for some finite $T$.
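A minimal sketch of this idea (usually called coupling from the past) for a finite chain is below. It assumes the chain is given as an N x N transition matrix with rows summing to one and that the chain is ergodic; all names are mine.

```python
import numpy as np

def coupling_from_the_past(P, rng=None):
    """Return one exact draw from the stationary distribution of the
    N-state chain with transition matrix P (rows sum to 1, chain ergodic)."""
    rng = np.random.default_rng(rng)
    N = P.shape[0]
    cum = np.cumsum(P, axis=1)           # per-state CDFs for inverse-transform updates
    shocks = []                          # shared uniforms; reused as we go further back
    T = 1
    while True:
        while len(shocks) < T:
            shocks.append(rng.random())  # a shock for one more period further in the past
        states = np.arange(N)            # start one copy in every state at period -T
        for t in range(T - 1, -1, -1):   # simulate forward to period 0 with the SAME shocks
            u = shocks[t]                # shocks[0] is always the transition into period 0
            states = np.array([np.searchsorted(cum[s], u) for s in states])
        if np.all(states == states[0]):  # every copy coalesced: an exact stationary draw
            return int(states[0])
        T *= 2                           # otherwise go further back in time and retry
```

Reusing the shocks closest to period 0 while only adding new ones further back is what makes the eventual draw exact rather than approximate.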
CommonCrawl
Summary: This paper considers partitioning the vertices of an $n$-vertex tree into $p$ disjoint sets $C_1, C_2, \ldots, C_p$, called clusters, so that the number of vertices in a cluster and the number of subtrees in a cluster are minimized. For this NP-hard problem we present greedy heuristics which differ in (i) how subtrees are identified (using either a best-fit, good-fit, or first-fit selection criterion), (ii) whether clusters are filled one at a time or simultaneously, and (iii) how much cluster sizes can differ from the ideal size of $c$ vertices per cluster, $n = cp$. The last criterion is controlled by a constant $\alpha$, $0 \leq \alpha < 1$, such that cluster $C_i$ satisfies $(1 - \frac{\alpha}{2})c \leq |C_i| \leq c(1 + \alpha)$, $1 \leq i \leq p$. For algorithms resulting from combinations of these criteria we develop worst-case bounds on the number of subtrees in a cluster in terms of $c$, $\alpha$, and the maximum degree of a vertex. We present experimental results which give insight into how the parameters $c$, $\alpha$, and the maximum degree of a vertex impact the number of subtrees and the cluster sizes. Communicated by G. Liotta: submitted November 1999, revised August 2000.
CommonCrawl
Doctoral Committee Chair(s): Gewirth, Andrew A. Abstract: This thesis encompasses several studies of the behavior of aromatic molecules on Au(111) single crystal electrode surfaces. In the first study, the initial stages of the binding and oxidation of phenol at high pH were addressed. Phenoxide binds to the electrode surface through the oxygen atom and is tilted away from the surface normal prior to oxidation, as shown with surface infrared spectroscopy (IR). The phenoxide forms a $(\sqrt{3}\times\sqrt{3})R30^\circ$ overlayer on Au(111). At the onset of oxidation, the molecule reorients to lie with the ring relatively parallel to the electrode surface as it polymerizes. Oligomers have been observed with scanning tunneling microscopy (STM). The continued oxidation of phenol and 2-naphthol was monitored with atomic force microscopy (AFM) and surface IR. The morphology of the growing polymer was found to depend on the constituent monomer. The impact of ring substitution on the binding and reactivity of cyanophenols was studied using STM and surface IR. Cyanophenols substituted in the 2 or 4 position displayed an inhibition of oxidation, while 3-cyanophenol oxidized at a similar rate to phenol. Only 4-cyanophenol was found to form an ordered $(\sqrt{3}\times\sqrt{3})R30^\circ$ overlayer on Au(111). Steric repulsions between the molecules prevented 2- and 3-cyanophenol from forming ordered adlayers. Finally, the adsorption of uracil on Au(111) was studied while varying concentration and pH. At neutral pH values, uracil displays strongly concentration-dependent behavior. At higher concentrations, the voltammetry displays sharp peaks which are associated with phase transitions. IR data indicates that uracil binds flat to the electrode at neutral pH. In alkaline solutions, the concentration-dependence of the adsorption is less marked, but IR data indicates that the molecule binds on-edge when it is deprotonated.
CommonCrawl
The method of Cauchy random projections is popular for computing the $l_1$ distance in high dimension. In this paper, we propose to use only the signs of the projected data and show that the probability of collision (i.e., when the two signs differ) can be accurately approximated as a function of the chi-square ($\chi^2$) similarity, which is a popular measure for nonnegative data (e.g., when features are generated from histograms as common in text and vision applications). Our experiments confirm that this method of sign Cauchy random projections is promising for large-scale learning applications. Furthermore, we extend the idea to sign $\alpha$-stable random projections and derive a bound of the collision probability.
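A small sketch of how such a sign sketch could be computed and compared is below; the function names, the normalization convention, and the use of the empirical disagreement rate as the collision estimate are my assumptions, and the paper's exact estimator may differ.

```python
import numpy as np

def sign_cauchy_sketch(X, k, rng=None):
    """Project the rows of X (n x d, nonnegative features) with i.i.d. standard
    Cauchy entries and keep only the signs, giving an n x k matrix of +/-1."""
    rng = np.random.default_rng(rng)
    R = rng.standard_cauchy(size=(X.shape[1], k))
    return np.sign(X @ R)

def collision_fraction(s1, s2):
    """Empirical collision probability: fraction of coordinates where the signs differ."""
    return float(np.mean(s1 != s2))

def chi2_similarity(u, v):
    """Chi-square similarity sum_i 2*u_i*v_i / (u_i + v_i) for nonnegative vectors
    normalized to sum to one; the quantity the collision probability is
    approximated by."""
    u = np.asarray(u, float); v = np.asarray(v, float)
    u, v = u / u.sum(), v / v.sum()
    den = u + v
    mask = den > 0
    return float(np.sum(2 * u[mask] * v[mask] / den[mask]))
```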
CommonCrawl
Ndiaye, C. B., & Xiao, J. (submitted). Toward Gauss-Bonnet-Chern Inequalities and Isoperimetric Deficits for Conformal Metrics on $\mathbb R^n, n\ge 3$. Abstract: The aim of this paper is to establish the Gauss-Bonnet-Chern integral inequalities and isoperimetric deficit formulas for complete conformal metrics on $\mathbb R^n$, $n\ge 3$ with scalar curvature being nonnegative near infinity and Q-curvature being absolutely convergent.
CommonCrawl
It might help to have labelled counters with the five symbols $+ , -, \times , \div$ and $=$ on them, which you can move around on this sheet.
CommonCrawl
Any real number can be expressed as a decimal. The number of numbers that can be expressed by n decimal places is 10^n. 10^(n+1) = 10x10^n is countable. Most reals can't be expressed using n decimal places for any finite n. Hence it doesn't follow that the reals are countable. Why do you keep on posting this nonsense? It's not going to become true however many times you say it. Your counting scheme omits all infinite decimals. Please note "for all n." Last edited by zylo; January 25th, 2016 at 07:13 AM. Reason: removed reference to "nonsense" No, it's your understanding that doesn't progress past the first step. Your statement "for all n" includes only finite numbers. The natural numbers are defined inductively by $1 \in \mathbb N$ and $n \in \mathbb N \implies (n+1) \in \mathbb N$. Since $1$ is finite, if there were any infinite natural numbers there must be one of them that is equal to $(N+1)$ where $N$ is finite. Show me such an $N$. Last edited by v8archie; January 25th, 2016 at 08:49 AM. "Mathematical induction is a mathematical proof technique, most commonly used to establish a given statement for all natural numbers" It still hasn't become true. (That still hasn't become false). I said "for all n."
CommonCrawl
Abstract: Decomposition of unipotents gives short polynomial expressions of the conjugates of elementary generators as products of elementaries. It turns out that with some minor twist the decomposition of unipotents can be read backwards, to give very short polynomial expressions of elementary generators themselves in terms of elementary conjugates of an arbitrary matrix and its inverse. For absolute elementary subgroups of classical groups this was recently observed by Raimund Preusser. I discuss various generalisations of these results for exceptional groups, specifically those of types $\mathrm E_6$ and $\mathrm E_7$, and also mention further possible generalisations and applications. Key words and phrases: classical groups, Chevalley groups, normal structure, elementary subgroups, decomposition of unipotents, reverse decomposition of unipotents.
CommonCrawl
Each interior angle of a regular polygon is 120° greater than each exterior angle. How many sides are there in the polygon? In what ratio should a 20% methyl alcohol solution be mixed with a 50% methyl alcohol solution, so that the resultant solution has 40% methyl alcohol in it? A, B and C can do a piece of work in 15 days. All three worked for 2 days and then A left. B and C worked for 10 more days and then B left. C worked for another 40 days and completed the work. In how many days can A alone complete the work, if C can complete it in 75 days? Assume the total work to be 600 units. Then C's 1 day work = 8 units, and (A + B + C)'s 1 day work = 40 units. This work is done by B and C in 10 days. A tower standing on a horizontal plain subtends a certain angle at a point 160 m from the foot of the tower. On moving 100 m towards the tower, it is found that the subtended angle is now twice as large as before. What will be twice the height of the tower? If $\frac{2x^2 + 5x + 1}{5x} = 3$, then $\frac{2x}{2x^2 + 1} = ?$ The average age at a college is 21.8 years. The average age of the students of the college is 24.2 years and the average age of the lecturers of the college is 20.6 years. Find the ratio of the number of students to that of lecturers. A man and a woman, 81 miles apart from each other, start travelling towards each other at the same time. If the man covers 5 miles per hour to the woman's 4 miles per hour, how far will the woman have travelled when they meet? If $x + \frac{2}{x} = 1$, then $\frac{x^2 + x + 2}{x^2 (1 - x)} = ?$ A wheel of a car of radius 21 cm is rotating at 600 RPM. What is the speed of the car in km/hr? The radius of the wheel measures 21 cm, so the circumference = $2 \times \pi \times 21$ = 132 cm. In an hour, the wheel will cover a distance of $(132 \times 600) \times 60$ = 4752000 cm.
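The last computation stops at the hourly distance in centimetres; finishing the unit conversion (an added step, not part of the original solution) gives the speed:
$$4{,}752{,}000\ \text{cm/hr} = 47{,}520\ \text{m/hr} = 47.52\ \text{km/hr} \approx 47.5\ \text{km/hr}.$$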
CommonCrawl
There are 250 scraps. For every 11 removed, one extra scrap can be obtained. What is the maximum number of extra scraps possible? I have read this somewhere, and one guy gave the answer as 25, but I can't understand why. Can anyone please explain? 10 - 11 = -1 ?????? You can't continue after doing this 24 times, or you will get -1. As said by @Novarg, you can't take 11 things out of 10 things. Assuming I understand your puzzle correctly: You have 250 'things'. Each time you remove 11, you get back one. How many 11s can you remove? But here is another way to look at it without the "...". If you have 250 "scraps", then for every 11, you can make a new one. There are 22 groups of 11 in 250 with 8 left over. So, from 250, you can use $22\times11=242$, and get 22 new ones. Now, from these 30 (the 8 left over plus the 22 new ones), you can get 2 more by using 22. So, in total, you've made $22+2=24$ new scraps, you've used $242+22=264$ scraps in total, and have 10 left over.
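The exchange process described above is easy to simulate; here is a minimal sketch (function and variable names are mine) that reproduces the count of 24.

```python
def max_new_scraps(start=250, rate=11):
    """Repeatedly trade `rate` scraps for one new scrap; return how many new ones we get."""
    scraps, new = start, 0
    while scraps >= rate:
        traded = scraps // rate            # trades we can make right now
        new += traded
        scraps = scraps % rate + traded    # leftovers plus the newly obtained scraps
    return new

# max_new_scraps(250, 11) -> 24, matching the step-by-step count above
```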
CommonCrawl