This tutorial covers the basics for running a self-consistent DFT calculation for silicon. The goal is to introduce you to the different file types and the basics of running the code.
Alternatively, you can run the following one-liner to get the same result. This assumes you have already created the init file. See Additional Exercises for more details.
Executables blm, lmfa, and lmf are required and are assumed to be in your path. The source code for all Questaal executables can be found here.
To begin with, create a new working directory and move into it. Here we will call it "si".
ALAT specifies the lattice constant, which is in atomic units by default (the Questaal code uses atomic Rydberg units). The primitive lattice vectors are listed after PLAT and are in row format (i.e. the first row contains the x, y and z components of the first lattice vector, and so forth). The SITE section specifies the atom types and their positions in Cartesian coordinates (indicated by POS) in units of the lattice constant. More information can be found in the Additional Exercises.
File names consist of a prefix indicating the file's purpose (such as ctrl for the input file and "site" for the structure file) and an extension. blm writes actrl.si rather than ctrl.si to prevent overwriting of an existing ctrl file. Extension "si" labels the material; it is something you choose. Most files read or generated by Questaal programs will append this extension.
Run blm as shown below and then copy the template file actrl.si to ctrl.si, which is the name of the main input file recognised by most codes in the Questaal package. The --express switch tells blm to make a particularly simple input file; we will see more complicated examples in later tutorials.
The start of the blm output shows some structural and symmetry information. Further down, the "makrm0:" part gives information about creating the augmentation spheres; both silicon atoms were assigned spheres of radius 2.22 Bohr. Now open up the site file and you can see that it contains the lattice constant and lattice vectors in the first line. The other terms in the first line are just standard settings and a full explanation can be found in the online page for the site file. The second line is a comment line and the subsequent lines contain the atomic species labels and coordinates. Note that running blm produces a new actrl.si and site.si file each time.
Next take a look at the input file ctrl.si.
The first few lines are just header information; then you have a number of basic parameters for a calculation. We won't talk about these values now, but a full description is provided on the ctrl file page. Defaults are provided by blm for most of the variables except gmax and nkabc, which are left as "NULL". nkabc specifies the k mesh, and there is no sensible way for blm to select a default value: it depends on the bandgap or details of the Fermi surface, and also on the precision needed for the physical property you are interested in. gmax specifies how fine a real-space mesh is used for the interstitial charge density, and this depends on the basis set as described below. Take a look at the last line: it contains information about the different atoms in the system (here we only have silicon) and their associated augmentation spheres.
Again the output shows some structural information and then details about finding the free-atom density and fitting the basis set; at the end an estimate of gmax is printed out. Note that the Barth-Hedin exchange-correlation functional is used, as indicated by "XC:BH"; this was specified by "xcfun=2" in the ctrl file (the default). We won't go into more detail now, but a full description can be found on the lmfa page. One thing to note is the recommended value for gmax given towards the end: "GMAX=5.0". Now that we have a gmax value, open up the ctrl file and change the default NULL value to 5.0.
Check the contents of your working directory and you will find two new files, atm.si and basp0.si. The file atm.si contains the free atom densities calculated by lmfa. File basp0.si is the template basis set file; the standard basis set name is basp and the extra 0 is appended to avoid overwriting. Take a look at basp0.si and you will see that it contains basis set parameters that define silicon's smooth Hankel functions. Changing these values would change their functional form, but lmfa does a reasonable job (also, later on parameters can be automatically optimized, if desired) so we will leave them as they are. Copy basp0.si to basp.si, which is the name lmf recognises for the basis set file.
The second unknown parameter is the k-mesh. A $4\times4\times4$ k mesh is sufficient for Si. Set this value with your text editor by simply changing "nkabc=NULL" to "nkabc=4" (4 is automatically used for each mesh dimension; you could equivalently use "nkabc=4 4 4").
We now have everything we need to run an all-electron, full-potential DFT calculation. This is done using the lmf program. Double check that you have specified the k mesh (nkabc) and a gmax value and then run the following command. A lot of text is produced so it will be easier to redirect the output to a file, here we call it out.lmfsc (appending sc to indicate a self-consistent cycle).
The first line above tells us about what input density is used. lmf first looks for a restart file rst.si and if it's not found it then looks for the free atom density file atm.si. lmf then overlaps the free atom densities to form a trial density (Mattheis construction) and this is used as the input density. Next lmf begins the first iteration of a self-consistent cycle: calculate the potential from the input density, use this potential to solve the Kohn-Sham equations and then perform Brillouin zone integration to get the output density. Towards the end of the output, the Kohn-Sham total energy is reported along with the Harris-Foulkes total energy. These two energies will be the same (or very close) at self-consistency.
Now move to the end of the file. The 'c' in front of the Harris-Foulkes ehf and Kohn-Sham ehk energies indicates that convergence was reached (note how similar the ehf and ehk energies are). A few lines up you can see that it took 7 iterations to converge: look for "it 7 of 20". At the end of each iteration the ehf and ehk total energies are printed and a check is made for self-consistency. The two parameters conv and convc in the ctrl file specify, respectively, the self-consistency tolerances for the total energy and the root mean square (RMS) change in the density. Note that by default both tolerances have to be met. To use a single tolerance you simply set the one that you don't want to zero.
Further down the Fermi energy and band gap values, and other key bits of information are reported in the Brillouin zone integration section. You should find something similar to the output snippet below.
To see how the density and energy change between iterations, try grepping for "DQ" and "ehk=-". The RMS DQ measures the RMS change in the density between iterations; ehk is the Hohenberg-Kohn-Sham total energy (also just called the Kohn-Sham energy).
You can also check how the bandgap changes as iterations proceed to self-consistency by grepping out.lmfsc for "gap".
And that's it! You now have a self-consistent density and have calculated some basic properties such as the band gap and total energy. Other tutorials to look at are those to generate energy band structures, and density-of-states, or calculate a mechanical property such as the optical mode frequency.
This tutorial on PbTe also covers the basic self-consistency cycle, in a bit more detail. It has a companion tutorial for the ASA, allowing you to compare the FP and ASA methods. There is a more detailed tutorial with some description of important tags the lmf reads, and of the lmf basis set.
For a tutorial showing a self-consistent calculation of ferromagnetic metal, see this tutorial. This tutorial shows how to calculate optical properties for PbTe. See this tutorial for the calculation of the optical mode frequency in Si. This document gives an overview of the lmf implementation; the formalism behind the method is described in this book chapter.
Below is a list of frequently asked questions. Please get in contact if you have other questions.
1) What is the relation between Cartesian coordinates and coordinates expressed as fractions of lattice vectors (aka crystal coordinates)?
$\mathrm{POS}_i = X_1 P_{i,1} + X_2 P_{i,2} + X_3 P_{i,3}$, where $P_{i,j}$ is Cartesian component $i$ of lattice vector $j$.
The expression above equally applies in reciprocal space if the primitive reciprocal lattice vectors Q are substituted for P.
Sometimes called "crystal coordinates" (VASP uses the term "direct coordinates"), they are used when site positions or k points are expressed in fractional multiples of the lattice vectors (ALAT in real space, 2π/ALAT in k space).
This example makes the conversion of a k point in Cartesian coordinates to multiples of the reciprocal lattice vectors, using Questaal conventions.
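As a rough illustration (the lattice vectors below are the standard fcc vectors in units of ALAT, not necessarily the ones in your site file), the relation can be evaluated with a few lines of NumPy:

```python
import numpy as np

# Columns of P are the primitive lattice vectors (here: fcc, in units of ALAT).
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

X = np.array([0.25, 0.25, 0.25])   # fractional ("direct") coordinates
pos = P @ X                        # Cartesian POS, in units of ALAT
X_back = np.linalg.solve(P, pos)   # invert the relation to recover X

print(pos, X_back)
```

The same code converts k points if the primitive reciprocal lattice vectors Q are used in place of P.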
2) How does blm determine the augmentation sphere radii?
blm overlaps the free-atom densities and adjusts the radii to make the potential as similar as possible on the surface of each sphere. You can constrain the radius to a value of your choosing with the species tag CSTRMX.
3) What is the log file?
The log file log.si keeps a compact record of key outputs in the current directory. In successive runs, data is appended to the log file.
4) What is the Harris-Foulkes energy?
It is a functional of the input density, rather than the output density. At self-consistency it should be the same as the standard Kohn-Sham functional. The Harris-Foulkes functional tends to be more stable, and like the Kohn-Sham functional, it is stationary at the self-consistent density. But it is not necessarily a minimum there. See this paper by M. Foulkes and R. Haydock. It is not trivial that the two functionals yield the same result; indeed if you do not set tolerances tight enough (e.g. the mesh density typically set by parameter GMAX), they may differ somewhat.
Create a new directory and try running the above command, starting from the init file. N.B. You should be careful about running calculations in the same directory, as certain files (such as the rst, mix and moms files) will be read and used. First of all, notice that here we use blm si instead of blm init.si: blm accepts either the full init file name or just the extension. Next, try running the above command without the --express switch. Compare this new ctrl file to the original one and you will see that the file is different and provides more options. Note also that values for nkabc and gmax were supplied as switches (--) to blm. This removes the need to edit the ctrl file by hand.
You should find that the gap gets smaller by about 0.13 eV, which is close to the experimentally observed splitting between the conduction minimum and the conduction band at X. Instead of running a very fine k-point mesh, you can also do a gradients minimization as described in the Extremal points and effective mass tutorial.
The output has been redirected to the file init.si (it is sent to standard output by default). Take a look at the converted init file. UNITS=A on the third line specifies that the lattice constant is in Angstrom units, which is the default for the POSCAR format. The Questaal code accepts other units in the init file but will convert to atomic units. Check this by running blm and you will see that the lattice constant in the site file has been converted to Bohr. The last thing to note is the X= after the atom species. The init file from the tutorial denotes positions by POS=, indicating Cartesian coordinates, but here X= indicates "direct" or "fractional" coordinates, in which positions are given as (noninteger) multiples of the three lattice vectors (see the following additional exercises for more). The poscar2init utility only accepts POSCAR files with coordinates specified in direct format. Also note that the POSCAR file must specify the atom types and number of atoms per atomic species (lines 6 and 7 in the POSCAR example above). More information, including an example of cif conversion, can be found in the detailed input file tutorial.
There is a simple relation between "fractional" coordinates X= and Cartesian coordinates POS=, as described in the FAQ.
To compare, try running the command blm init.si --express --wsitex and you will see that xpos has been added to the first line of site.si; this indicates that the coordinates are now in fractional form. Note that in this example for silicon the Cartesian and fractional coordinates happen to be the same.
The lattice vectors, the lattice constant and the positions of the basis vectors given in SITE are all that is needed to fix the structural information. The atomic number of each atom is inferred from the symbol (Si). Token X= specifies that the coordinates are in the "direct" representation, that is, as fractional multiples of the lattice vectors. It doesn't matter in this case, but you can use Cartesian coordinates instead: use POS= instead of X= (see additional exercise 4). In that case positions are given in Cartesian coordinates in units of ALAT. So are the lattice vectors, if you supply them instead of the space group number.
Jepetto the toymaker was thinking about a new toy to add to his tiling product line. His new design involved a punctured chessboard: an ordinary $8 \times 8$ chessboard, except with a single square removed.
Don't you see? You've ruined it! How can anyone tile this board with $3\times 1$ rectangular pieces!?
On how many squares could the hole have been misplaced?
Any signal $x(t)$ can be written as the sum of an even signal and an odd signal, $x(t) = x_e(t) + x_o(t)$, where $x_e(t) = \frac{1}{2}\left[x(t) + x(-t)\right]$ is the even part and $x_o(t) = \frac{1}{2}\left[x(t) - x(-t)\right]$ is the odd part.
If $x(t)$ is real $\rightarrow X(j\omega)$ is conjugate symmetric.
If $x(t)$ is real and even $\rightarrow X(j\omega)$ is real.
If $x(t)$ is complex, conjugate symmetric $\rightarrow X(j\omega)$ is real.
$a(t)=a(-t) \Rightarrow a(t)$ is even.
and $b(t)=-b(-t) \Rightarrow b(t)$ is odd.
$r(t)=r(-t) \Rightarrow r(t)$ is even.
and $\theta(t)=-\theta(-t) \Rightarrow \theta(t)$ is odd.
I am a maître de conférences (associate professor) at the Université de Toulouse III (Paul Sabatier).
I am interested in Fuchsian groups, arithmetic groups, symmetric spaces, hyperbolic geometry, asymptotic geometry.
Systoles in translation surfaces, preprint (with C. Boissy).
The limit sets of subgroups of lattices in $PSL(2,R)^r$, to appear in Geometriae Dedicata, DOI: 10.1007/s10711-015-0129-x.
On arithmetic Fuchsian groups and their characterisations, Ann. Fac. Sci. Toulouse Math. (6) 23 (2014), no. 5, 1093-1102.
Examples of infinite covolume subgroups of $PSL(2,R)^r$ with big limit sets, Mathematische Zeitschrift 272 (2012), 389-404.
The limit set of subgroups of arithmetic groups in $PSL(2,C)^q \times PSL(2,R)^r$, Groups Geom. Dyn. 8 (2014), no. 4, 1047–1099.
The limit set of subgroups of a class of arithmetic groups, Dissertation, Karlsruhe, 2009.
A geometric characterization of arithmetic Fuchsian groups, Duke Math. J. 142 (2008), 111-125 (with E. Leuzinger).
Why do we have the present order of operations, and how do hyperoperations fit in?
Something that's been bugging me for a fairly decent while is the order of operations - not so much using it, however, as to understanding where it comes from.
...all typically done in a left-to-right order. But this seems arbitrary in a sense - why this particular ordering of steps, as opposed to any other particular ordering? Why should I calculate $4+5\times 6$ by finding $5\times 6$ first, as opposed to $4+5$ first?
Obviously, we adopted a standard in order to prevent ambiguity - but then, why this standard?
However, recalling a video by the Taylor series on hyperoperations, I did notice a bit of a pattern (which I later found touched on in opening of the Wikipedia article on hyperoperations).
So we ask - what is operation $4$? Of course, in this scheme, it's repeated exponentiation - that is to say, tetration! Rather fitting since "tetra-" is the prefix for $4$, but I digress.
Of course, then, this sequence of hyperoperations also uses pentation, hexation, and so on, as repeated applications of the previous.
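To make the "repeated application" pattern concrete, here is a small recursive sketch (restricted to positive integers, and starting the ladder at addition rather than the successor function, which is one common convention):

```python
def hyper(n, a, b):
    """Hyperoperation level n: 1 = addition, 2 = multiplication, 3 = exponentiation, 4 = tetration, ..."""
    if n == 1:
        return a + b
    if b == 1:
        return a                                   # H_n(a, 1) = a for every level n >= 2
    return hyper(n - 1, a, hyper(n, a, b - 1))     # apply the level-below operation repeatedly

print(hyper(2, 3, 4))   # 3 * 4 = 12
print(hyper(3, 3, 4))   # 3 ** 4 = 81
print(hyper(4, 2, 3))   # 2 ** (2 ** 2) = 16, i.e. tetration
```

The values explode very quickly, so only tiny arguments are practical at level 4 and above.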
Is this "sequence of operations" the reason we define the order of operations as we do - that is, we do exponentiation before multiplication, because the former is a repetition of the latter? Or is it just a "happy consequence" of whatever the reason is?
Where do these hyperoperations, e.g. tetration and pentation, fit into the scheme? If the previous is true, then this suggests that we would handle all bracketed expressions, then the highest $n$-ations, then the $(n-1)$-ations, and so on - that is, we would handle hexation, then pentation, then tetration, then exponentiation, and so on.
Griffin, Kent; Stuart, Jeffrey L.; Tsatsomeros, Michael J.
Let $a$, $b$ and $c$ be fixed complex numbers. Let $M_n(a,b,c)$ be the $n\times n$ Toeplitz matrix all of whose entries above the diagonal are $a$, all of whose entries below the diagonal are $b$, and all of whose entries on the diagonal are $c$. For $1\leq k\leq n$, each $k\times k$ principal minor of $M_n(a,b,c)$ has the same value. We find explicit and recursive formulae for the principal minors and the characteristic polynomial of $M_n(a,b,c)$. We also show that all complex polynomials in $M_n(a,b,c)$ are Toeplitz matrices. In particular, the inverse of $M_n(a,b,c)$ is a Toeplitz matrix when it exists.
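The equal-principal-minors property is easy to check symbolically, since every principal submatrix of $M_n(a,b,c)$ is itself a matrix of the same form. A small SymPy sketch (illustrative only, not taken from the paper):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
n = 5

# M_n(a, b, c): a above the diagonal, b below it, c on the diagonal.
M = sp.Matrix(n, n, lambda i, j: c if i == j else (a if j > i else b))

# Two different 3x3 principal submatrices (same row and column indices) have equal determinants.
m1 = M.extract([0, 2, 3], [0, 2, 3]).det()
m2 = M.extract([1, 3, 4], [1, 3, 4]).det()
print(sp.simplify(m1 - m2))   # 0
```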
Abstract: This opening session for the working group on topology, geometry and statistics will be a non-technical introduction to information geometry, a topic that has recently gained a lot of attention.
The starting point will be the Cramer-Rao bound in estimation theory and its expression with the so-called Fisher information. The main idea behind information geometry is that Fisher information may be viewed as a local inner product, which in turn allows to endow the parameter space of a parameterized model with a Riemannian structure. Through the example of the univariate normal distribution family, the importance of the geometry arising from this structure will be demonstrated. In particular, the notions of geodesic distance between densities, curvature and barycenters will be introduced.
Switching to the natural exponential family illustrates the need for new objects called $\alpha$-connections, that have a simple statistical interpretation.
Finally, the concept of statistical manifolds will be introduced as a natural framework for dealing with objects generalizing the $\alpha$-connections in the exponential case.
PS : details about the next sessions will be posted on this web page.
Gauss-Lucas theorem: the convex hull $K(p)$ of the zeros of a polynomial $p(z)$ contains the zeros of its derivative $p'(z)$.
The lecture is devoted to some open extreme problems in the geometry of polynomials.
We report our work about the main necessary and sufficient conditions for weak lower semi-continuity of integral functionals in vector Calculus of Variations.
In particular we provide tools to investigate rank-one convexity of functions defined on $2\times 2$-matrices. Furthermore, we explore some consequences and examples.
We also explore the quasiconvexity condition in the case where the integrand of an integral functional is a fourth-degree homogeneous polynomial.
Executes a user-defined function on parameter draws from the prior distribution. Dynare returns the results of the computations for all draws in an $ndraws$ by $n$ cell array named oo_.prior_function_results.
The function must have the following header output_cell = FILENAME(xparam1,M_,options_,oo_,estim_params_,bayestopt_,dataset_,dataset_info), providing read-only access to all Dynare structures. The only output argument allowed is a \(1 \times n\) cell array, which allows for storing any type of output/computations. This option is required.
Number of draws used for sampling. Default: 500.
Same as the prior_function command but for the posterior distribution. Results returned in oo_.posterior_function_results.
Generates trace plots of the MCMC draws for all estimated parameters and the posterior density in the specified Markov Chain CHAIN_NUMBER.
Depending on the value of FLAG, the internals command can be used to run unitary tests specific to a Matlab/Octave routine (if available), to display documentation about a Matlab/Octave routine, or to extract some information about the state of Dynare.
Performs the unitary test associated to ROUTINENAME (if this routine exists and if the matlab/octave .m file has unitary test sections).
Prints on screen the internal documentation of ROUTINENAME (if this routine exists and if this routine has a texinfo internal documentation header). The path to ROUTINENAME has to be provided, if the routine is not in the current directory.
At this time, this will work properly for only a small number of routines. At the top of the (available) Matlab/Octave routines a commented block for the internal documentation is written in the GNU texinfo documentation format. This block is processed by calling texinfo from MATLAB. Consequently, texinfo has to be installed on your machine.
Displays information about the previously saved MCMC draws generated by a .mod file named MODFILENAME. This file must be in the current directory.
Loads into the Matlab/Octave workspace information about the previously saved MCMC draws generated by a .mod file named MODFILENAME.
The number of MCMC chains.
A Nblck*n, where n is the number of estimated parameters, array of doubles. Initial state of the MCMC.
A Nblck*n, where n is the number of estimated parameters, array of doubles. Current state of the MCMC.
A Nblck*1 array of doubles. Initial value of the posterior kernel.
A Nblck*1 array of doubles. Current value of the posterior kernel.
A 1*Nblck structure array. Initial state of the random number generator.
A 1*Nblck structure array. Current state of the random number generator.
A 1*Nblck array of doubles. Current acceptance ratios.
Prints a table describing the marginal prior distributions (mean, mode, std., lower and upper bounds, HPD interval).
Computes and displays first and second order moments of the endogenous variables at the prior mode (considering the linearized version of the model).
Computes and displays the prior mean and prior standard deviation of the first and second moments of the endogenous variables (considering the linearized version of the model) by randomly sampling from the prior. The results will also be stored in the prior subfolder in a _endogenous_variables_prior_draws.mat file.
Optimizes the prior density (starting from a random initial guess). The parameters such that the steady state does not exist or does not satisfy the Blanchard and Kahn conditions are penalized, as they would be when maximizing the posterior density. If a significant proportion of the prior mass is defined over such regions, the optimization algorithm may fail to converge to the true solution (the prior mode).
Computes the effective prior mass using a Monte-Carlo. Ideally the effective prior mass should be equal to 1, otherwise problems may arise when maximising the posterior density and model comparison based on marginal densities may be unfair. When comparing models, say \(A\) and \(B\), the marginal densities, \(m_A\) and \(m_B\), should be corrected for the estimated effective prior mass \(p_A\neq p_B \leq 1\) so that the prior mass of the compared models are identical.
Plots the marginal prior density.
A horizontal conveyor belt is running at a constant speed $v_b$ = 3.0 m/s.
The coefficient of friction between the disc and the belt is 0.50.
This 2nd part of this question is the same as [A block is pushed onto a conveyor belt](http://physics.qandaexchange.com/?qa=205/a-block-is-pushed-onto-a-conveyor-belt&show=205#q205).
(a) Your calculation is very good but you have not reached the end of it.
The distance of $2.5m$ which the disk travels across the belt is measured diagonally. It is the hypotenuse of a $3:4:5$ triangle. The perpendicular distance across the belt is $2.0m$. So that is the minimum width.
(b) This is the same problem as in A block is pushed onto a conveyor belt.
Initially the velocity of the disk over the ground is $OA=4m/s$ along the $Oy$ axis. When the disk has come to rest relative to the conveyor belt its velocity over the ground is $OB=3m/s$ along the $Ox$ axis. At intermediate times the velocity of the disk over the ground is represented by points on the line $AB$.
At point $C$ the magnitude $OC$ of the velocity over the ground is minimum, where $OC\perp AB$. $OAB$ and $COB$ are similar triangles. Therefore $OC=\frac45 \times 3=2.4m/s$.
Abstract: The leading order dynamics of the type IIB Large Volume Scenario is characterised by the interplay between $\alpha'$ and non-perturbative effects which fix the overall volume and all local blow-up modes, leaving (in general) several flat directions. In this paper we show that, in an arbitrary Calabi-Yau with at least one blow-up mode resolving a point-like singularity, any remaining flat directions can be lifted at subleading order by the inclusion of higher derivative $\alpha'$ corrections. We then focus on simple fibred cases with one remaining flat direction which can behave as an inflaton if its potential is generated by both higher derivative $\alpha'$ and winding loop corrections. Natural values of the underlying parameters give a spectral index in agreement with observational data and a tensor-to-scalar ratio of order $r=0.01$ which could be observed by forthcoming CMB experiments. Dangerous corrections from higher dimensional operators are suppressed due to the presence of an approximate non-compact shift symmetry.
Abstract: We consider preheating in the theory $1/4 \lambda \phi^4 + 1/2 g^2\phi^2\chi^2$, where the classical oscillating inflaton field $\phi$ decays into $\chi$-particles and $\phi$-particles. The parametric resonance which leads to particle production in this conformally invariant theory is described by the Lame equation. It significantly differs from the resonance in the theory with a quadratic potential. The structure of the resonance depends in a rather nontrivial way on the parameter $g^2/\lambda$. We construct the stability/instability chart in this theory for arbitrary $g^2/\lambda$. We give simple analytic solutions describing the resonance in the limiting cases $g^2/\lambda\ll 1$ and $g^2/\lambda \gg 1$, and in the theory with $g^2=3\lambda$, and with $g^2 =\lambda$. From the point of view of parametric resonance for $\chi$, the theories with $g^2=3\lambda$ and with $g^2 =\lambda$ have the same structure, respectively, as the theory $1/4 \lambda \phi^4$, and the theory $\lambda /(4 N) (\phi^2_i)^2$ of an N-component scalar field $\phi_i$ in the limit $N \to \infty$. We show that in some of the conformally invariant theories such as the simplest model $1/4 \lambda\phi^4$, the resonance can be terminated by the backreaction of produced particles long before $\langle\chi^2\rangle$ or $\langle\phi^2\rangle$ become of the order $\phi^2$.
Programming languages often have built in mathematical functions, that do much more than add, subtract, multiply and divide. In this lesson, we'll look at the absolute value function.
If you recall, the absolute value of a number is the number "made positive" by dropping the minus sign if the number is negative and just letting the number pass if it's positive. So for example, the absolute value of -3 is 3 and the absolute value of 3 is 3. In math sense, this would be written as $|-3|=3$ or $|3|=3$, where the two vertical bars around a number mean absolute value. Absolute values can also be taken of entire expressions like $|5\times 2-15|$, which would come out to be 5 (not -5).
Notice the abs in the name, which is short for "ABSolute value." The prefix of math. means that abs is included in the "math" library that the computer knows about. The "something.something" notation is very common in computer programming.
Now you try. Use the print statement to find the absolute value of some positive and negative numbers.
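For instance, in Python (treat this as an illustrative sketch; the exact spelling differs between languages, e.g. Math.abs in JavaScript, and Python's math library calls it fabs):

```python
import math

print(abs(-3))           # built-in absolute value: prints 3
print(math.fabs(-3))     # the math-library version: prints 3.0
print(abs(5 * 2 - 15))   # |5*2 - 15| = |-5|: prints 5
```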
Abstract: We examine the stability of linear integer-order and fractional-order systems with commensurate delays of neutral type in the sense of $H_\infty$-stability. The systems may have chains of poles approaching the imaginary axis. While several classes of these systems have been previously studied on a case-by-case basis, a unified method is proposed in this paper which makes it possible to deal with all these classes at the same time. Approximation of poles of large modulus is systematically calculated based on a convex hull derived from the coefficients of the system. This convex hull also serves to establish sufficient conditions for instability and necessary and sufficient conditions for stability.
In the newly created index.js file we can now write some boilerplate code that will allow us to run our environment and visualize it. See the bottom of this article for the contents of this file.
Basically, these numbers are 4 unknown random continuous numbers, that we do not know what they do in our algorithm… (though we can make an interpretation of them with our human brain and say that they represent position $x$, velocity $v$, angle $\theta$ and angular velocity $\alpha$).
Next to these 4 random numbers, we also know by observing the action space that we can take 2 specific discrete actions (again by using our human brain we can classify this as moving the cart to the left or moving it to the right).
Now look at the reward that we got from taking an action, and update our Q-Table with this reward.
To start with point 1, we want to be able to represent our observation as a state index. This is needed to be able to construct our Q-Table, which will consist of our states and actions. For this we use a method called Bucketization, which converts our observations from a continuous state (the numbers that fit between our lower bound and upper bound, which in principle lie anywhere in $(-\infty, \infty)$) to a discrete state (we "bucket" our numbers, returning only a small set of indices rather than infinitely many), allowing us to keep our states minimal and represent them in our memory.
Note: I did limit the velocity speed and the angle of our cartpole to achieve convergence faster.
For point 2, we use what we learned in the multi-armed bandit framework article: we normally take the action with the maximum value for the current state (Math.max(Q[s])), but we add an exploration factor that still takes a random action some of the time, while keeping regret small.
* Note: this is also called "discretization"
* We get an observation (e.g.) [1, 6, 9, 2.3] which defines our "state" or what we currently see.
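The article's own code is JavaScript, but the bucketization idea is easy to sketch in a few lines of Python (the bounds and bucket counts below are made up for illustration):

```python
import numpy as np

# Hypothetical bounds and bucket counts for the 4 CartPole observations
# (position, velocity, angle, angular velocity).
LOWER = np.array([-2.4, -0.5, -0.21, -0.87])
UPPER = np.array([ 2.4,  0.5,  0.21,  0.87])
BUCKETS = np.array([3, 3, 6, 3])

def bucketize(observation):
    """Map a continuous observation to a tuple of bucket indices (a discrete state)."""
    clipped = np.clip(observation, LOWER, UPPER)
    ratios = (clipped - LOWER) / (UPPER - LOWER)                       # scale each component to [0, 1]
    indices = np.minimum((ratios * BUCKETS).astype(int), BUCKETS - 1)  # avoid index == BUCKETS
    return tuple(int(i) for i in indices)

print(bucketize([1.0, 6.0, 0.09, 2.3]))   # -> (2, 2, 4, 2)
```

The tuple of indices can then be used directly as the key of the Q-Table.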
While creating this solution, I used the following articles as a reference.
The aim of the present paper is to introduce the concept of "finite dimension" in the theory of associative rings $R$ with respect to two-sided ideals. We obtain that if $R$ has finite dimension on two-sided ideals, then there exist uniform ideals $U_1,U_2,\ldots,U_n$ of $R$ whose sum is direct and essential in $R$. The number $n$ is independent of the choice of the uniform ideals $U_i$ and $n$ is called the dimension of $R$.
Is there a way to make sage print out all monomials of a Boolean Ring?
In a finite ring, is there a way to get Sage to print out all possible monomials of the ring? I have a method, but it's very crude. It's simply doing something like taking a random variable to multiply with a random ring element from B.random_element(). But this is obviously not what I want.
Below is the ring I'm working with.
Is there a Sage function to put every monomial of the ring into a list? I can't seem to find it in Sage documentation.
Hint: start with the boolean polynomial ring generators.
Detailed answer: use powerset to build on the hint.
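A minimal Sage sketch of that idea (not necessarily the original answer's exact code): in a Boolean polynomial ring every monomial is a product of a subset of the generators, so the powerset of the generators enumerates them all.

```python
B.<x0, x1, x2> = BooleanPolynomialRing(3)

# prod(S, B.one()) multiplies the generators in the subset S; the empty subset gives 1.
monomials = [prod(S, B.one()) for S in powerset(B.gens())]

print(len(monomials))   # 2^3 = 8
print(monomials)
```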
1. $g_1(g_2a) = (g_1 \cdot g_2)a$ for all $g_1, g_2 \in G$ and for all $a \in A$.
2. $1a = a$ for all $a \in A$ (where $1 \in G$ denotes the identity element).
1. $(ag_1)g_2 = a(g_1 \cdot g_2)$ for all $g_1, g_2 \in G$ and for all $a \in A$.
2. $1a = a$ for all $a \in A$.
In either case we say that the group $G$ is (left/right) Acting on the set $A$.
We begin by stating some basic results regarding (left) group actions of a group on a set.
a) For each $g \in G$, $\sigma_g$ is a permutation of the set $A$, and so $\sigma_g \in S_A$ for all $g \in G$.
b) The map $\varphi : G \to S_A$ (called the Associated Permutation Representation) defined for all $g \in G$ by $\varphi(g) = \sigma_g$ is a group homomorphism of $G$ to $S_A$.
c) If $\psi : G \to S_A$ is any homomorphism from $G$ to $S_A$ then the map $G \times A \to A$ defined by $(g, a) \to [\psi(g)](a)$ is a left group action of $G$ on $A$.
A similar result can be stated when $(G, \cdot)$ is right acting on a set $A$.
Proof of c) Let $\psi : G \to S_A$ be a group homomorphism of $G$ to $S_A$. We aim to show that $(g, a) \to [\psi(g)](a)$ is a left group action of $G$ on $A$, i.e., we need to verify the two properties in the definition.
"every bivector can be written as the sum of two simple bivectors. It is useful to choose two orthogonal bivectors for this, and this is always possible to do."
How can I rewrite $B = B_1' + B_2'$ with $B_1'\cdot B_2' = 0 = B_1'\times B_2'$ ?
We only need to solve for $Q$ in terms of $B$. I was able to take a square root of the third equation (by guessing that $Q = xe_1e_2+ye_3e_4$) but I didn't find the specific root that satisfies the other equations.
then $B_1$ and $B_2$ are blades, and $B_1\cdot B_2=0$ regardless of $v$, and $B_1\times B_2=0$ if and only if $v\wedge\big((v\cdot B)\cdot B\big)=0$. This means that $(v\cdot B)\cdot B$ must be parallel to $v$; in other words, $v$ is an eigenvector of the operator $(B\,\cdot)^2$. It follows that $v\cdot B=w$ is also an eigenvector with the same eigenvalue, and $v\cdot w=0$.
Of course, all vectors $v$ must also be orthogonal to all $w=v\cdot B$.
This is only a partial answer; it doesn't work when $|B\cdot B| = \lVert B\wedge B\rVert$, and I still don't know what to do in higher dimensions.
It's easy to see that $(J\,\cdot\,)^2$ has no eigenvectors, so $J$ is not orthogonally decomposable.
I suspect that the result is still true for Euclidean and Lorentzian spaces.
What is the probability pairs of sums are distinct?
What is the probability that all of the $Z_i$ are distinct?
Numerically it seems that if $s = n^2$ the probability is approximately $0.75$. I don't know how true this mathematically is however.
What is the relationship between the first experiment and the second experiment that is determined by this probability function?
How do I go about solving this probability question?
Anyway, besides working with log probabilities I still get numeric instability for longer training sentences. This is primarily due to the size of the set of observed variables. As the size is 80,000, each state in the HMM has an average emission probability of 1/80,000 for each word. I was wondering how to counteract this? Would reducing the number of emissions be a valid option?
To further illustrate: This is a section of a debug representation of the forward trellis after doing the forward algorithm on a training sequence. As you can see the right most fifth of the table is full of -inf, and they all appear at the same time... which I don't quite know why yet.
When I reduce the number of emissions and thus increase the average emission probability the problem goes away. But I am unsure whether that is the right way to go about this as the more emissions the model is trained with the better it will perform later on.
You probably want to be using additive smoothing when estimating probabilities from count data.
With a dictionary of 80,000 words, most of those words will be very rare: many of them might never appear anywhere in your training data, or will never appear associated with a particular part of speech. Thus, your counts for those words will be zero. That causes you to estimate an emission probability of zero, if you naively estimate the probability as a ratio of counts. However, a zero probability is unlikely to be physically meaningful (I doubt that there is truly zero probability of outputting that word; instead, it's probably some small probability that's close to zero but not exactly zero). Additive smoothing is one way to address this.
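For example, with add-$\alpha$ (Laplace) smoothing the emission probability would be estimated as $\hat{p}(w \mid s) = \frac{c(s,w) + \alpha}{c(s) + \alpha V}$, where $c(s,w)$ is how often word $w$ was emitted from state $s$, $c(s) = \sum_w c(s,w)$, $V$ is the vocabulary size (80,000 here), and $\alpha > 0$ is a small constant ($\alpha = 1$ gives classic Laplace smoothing). Every emission probability is then strictly positive.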
Lack of additive smoothing probably explains the -inf. You are computing with log-probabilities. A probability of zero corresponds to a log-probability of $-\infty$ (i.e., -inf). Implement additive smoothing, and your -inf's might largely go away.
While computing with log probabilities does generally improve stability compared to computing with probabilities directly, there are some tricky aspects you must be careful of.
For instance, instead of adding probabilities by computing $\log(\exp(x)+\exp(y))$, it's usually better to compute $x + \log(1 + \exp(y-x))$ (assuming $y \ge x$; if $y <x$, swap $x,y$ before doing this computation). This avoids loss of accuracy when $x,y$ are both small and are close to each other. You might want to use a built-in library for computing the function $u \mapsto \log(1+u)$. Some languages have built-in library functions for computing $\log(\exp(x)+\exp(y))$ in a numerically-stable way: it might be called something like logsumexp.
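A small Python sketch of that trick (scipy.special.logsumexp does the same thing for arrays):

```python
import math

def log_add(x, y):
    """Return log(exp(x) + exp(y)) given log-probabilities x and y, without underflow."""
    if y > x:
        x, y = y, x                 # ensure x >= y
    if y == float('-inf'):          # adding a zero probability changes nothing
        return x
    return x + math.log1p(math.exp(y - x))

# Adding two tiny probabilities whose direct sum would underflow to 0 in double precision:
print(log_add(-1000.0, -1001.0))    # ~ -999.6867
```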
Other relevant references: https://stackoverflow.com/q/7480996/781723, https://math.stackexchange.com/q/2189716/14578, https://stackoverflow.com/q/42355196/781723, https://stackoverflow.com/q/23630277/781723.
While mono-spaced fonts are generally accepted as the best option for programming and coding, I feel they make it too hard to read longer texts and thus are not my preferred font for writing LaTeX. What fonts are a good compromise between the advantages of mono-space (good readability of special symbols like $ ? / \ etc. and brackets) and being nice to the eye, making it a pleasure to read/write longer texts?
There is a staircase that consists of $n$ stairs, numbered $1,2,\ldots,n$. Initially, each stair has some number of balls.
There are two players who move alternately. On each move, a player chooses a stair $k$ where $k \neq 1$ and it has at least one ball. Then, the player moves any number of balls from stair $k$ to stair $k-1$. The winner is the player who moves last.
Your task is to find out who wins the game when both players play optimally.
The first line contains an integer $n$: the number of stairs.
The next line has $n$ integers $p_1,p_2,\ldots,p_n$: the initial number of balls on each stair.
For each test, print "first" if the first player wins the game and "second" if the second player wins the game.
The family of unidirectional continuous fiber reinforced polymeric composites is currently used in automotive bumper beams and load floors. The material properties and mechanical characteristics of the compression molded parts are determined by the curing behavior, fiber orientation and formation of knit lines, which are in turn determined by the mold filling parameters. In this paper, a new model is presented which can be used to predict the 3-dimensional flow under consideration of mold-composite slip and the anisotropic viscosity of the composite during compression molding of unidirectional fiber reinforced thermoplastics in the isothermal state. The composite is treated as an incompressible Newtonian fluid. The effects of the longitudinal/transverse viscosity ratio A and the slip parameter $\alpha$ on the bulging phenomenon and mold filling patterns are also discussed.
In this short post, I describe some errors that are produced by MATLAB's strip spectral correlation analyzer function commP25ssca.m. I don't recommend that you use it; far better to create your own function.
Correlation Algorithm for Cyclic Spectrum Estimation". February, 1994.
where input is the complex-valued input data, fs is the sampling rate corresponding to input, df is the spectral resolution, and dalpha is the cycle-frequency resolution.
commP25ssca.m computes the non-conjugate spectral correlation function (SCF). So if you want the conjugate spectral correlation or either of the coherences, you'll have to write your own functions or modify commP25ssca.m.
The trouble is that this function produces strong spurious cycle frequencies. In particular, it produces large spectral correlation function magnitudes for the normalized cycle frequencies $\alpha = k/4$ for all inputs, whether or not those cycle frequencies are exhibited by the signals in the input data. Let's show the evidence.
I use two implementations of the SSCA. The first is commP25ssca.m and the second is one that I've written myself. The latter has been validated by checking its output against the theoretical spectral correlation function for several signals for which we have nice formulas for the SCF (such as PSK and QAM).
in the call to commP25ssca.m.
I haven't tried to figure out what is going wrong with commP25ssca.m. If you do, please go ahead and post your findings to the comments section of this post.
I want to thank reader Serg for bringing this MATLAB problem to my attention!
1st Thank you so much for this very helpful blog. For one thing I'd have never known about the defect in the Matlab code if you hadn't posted about it. I had based my C++ implementation off of it and the papers by Nancy Carter and da Costa.
2.) I'm pretty sure it is result of the replication process that is prevalent in those papers. The reason I believe this to be the case is Eric April references it in his paper on the Strip Spectral Correlation Algorithm (section 3.1, just before section 3.2) and says it causes this exact problem. I've spent about 2 weeks trying to figure this out and haven't had a chance to verify yet. For one thing I'm curious how you deal with complex demodulates being at the full rate, since x(t) has been decimated by L.
3.) Also I'm unsure if there is one windowing function or two? April seems to indicate there are 2 but most papers seem to have just one. Although, most of the equations have an a(n) and g(n).
Thanks for the comments, Zack, and for stopping by the CSP Blog.
I'll answer (2) in reply to another of your comments.
Regarding (3), yes there are two windowing functions in the normal SSCA estimator expression: $a(n)$ and $g(n)$. The former is a window applied to the short-time FFTs that comprise the front-end "channelizer" function of the SSCA, and the latter is a window applied to the long-time FFTs that comprise the back-end "strip" function of the SSCA. In my implementations, $a(n)$ is a Dolph-Chebyshev window and $g(n)$ is just a rectangle (no windowing). I've not been able to squeeze out enough performance gain for the cost of a non-rectangular $g(n)$.
Sorry, I was wrong. It isn't the replication, it is the decimation by L in the channelizer (which necessitates the replication). If you modify commP25ssca to set L = 1 the false cycle frequencies disappear. However, apparently Brown suggested this as a way of lowering computational costs. April suggests this might be worthwhile as the false cycles could be predicted. Any suggestions?
I suppose commP25ssca should have $L$ as an input variable.
From the properties of the geometric definition of the cross product and the scalar triple product, we can discover a link between $2 \times 2$ determinants and area, and a link between $3 \times 3$ determinants and volume.
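Concretely, the parallelogram spanned by $\vec{a} = (a_1, a_2)$ and $\vec{b} = (b_1, b_2)$ has area $\left|\det\begin{pmatrix} a_1 & a_2 \\ b_1 & b_2 \end{pmatrix}\right| = |a_1 b_2 - a_2 b_1|$, which is the magnitude of $\vec{a} \times \vec{b}$ when the two vectors are embedded in the $xy$-plane. Likewise, the parallelepiped spanned by $\vec{a}, \vec{b}, \vec{c} \in \mathbb{R}^3$ has volume $|(\vec{a} \times \vec{b}) \cdot \vec{c}|$, the absolute value of the $3 \times 3$ determinant whose rows are the components of the three vectors.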
The relationship between determinants and area or volume by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us.
Welcome to the MAT232 Introduction to Sage worksheet. This tutorial will guide you through the basic operations necessary to use Sage to handle basic matrix operations. Sage is a powerful tool; we'll only just begin to see its capabilities here. If you are interested you can learn much more about its capabilities at http://www.sagemath.org.
Click on the cell below, hold the shift key down, and press the enter key.
You may be surprised that nothing appeared to happen. In fact, the value 1 was assigned to the variable a, but this result was not echoed. Repeat on the cell below; click on the cell, then use "shift-enter" or click on "evaluate".
This time the value 2 was assigned to b and then the value of b was echoed. The semicolon is used to separate commands appearing on the same line. The same thing could be done by typing "b=2", pressing enter, typing "b" and then using "shift-enter".
Feel free to use whichever of these approaches is most comfortable to you.
Now, let's enter a matrix. The function matrix() is used to do this. For now the first argument to the function should be "QQ", this means that the matrix will contain either integers or rational numbers (fractions). The second argument is the matrix data.
The matrix M is defined with three rows and four columns. Notice how the matrix was specified row-by-row, with each row inside a pair of square brackets and all three rows enclosed in another set of square brackets; commas are used to separate matrix entries and rows.
Notice how the function row() is used; it is "attached" to the matrix varible with a dot. This means that the row function operates on the matrix M.
Remember, row numbering starts at 0. Pay careful attention to the changes made to the matrix by each of the following commands.
We want to start with a 1 in the first position of the first column, so we begin by scaling the first row by 1/2.
The first argument to rescale_row() is the index of the row to be scaled and the second argument is the scale factor. We could, of course, use 0.5 rather than 1/2 for the scale factor.
Now that there is a 1 in the first position of the first column we continue by eliminating the entry below it.
The first argument is the index of the row to be replaced, the second argument is the row to form a multiple of, and the final argument is the scale factor. Thus M.add_multiple_of_row(n,m,a) would replace row n with (row n)+a*(row m).
Since the last entry in the first column is already zero we can move on to the second column. Our first step is to get a 1 in the second position of the second column. Normally we would do this by scaling the row but in this case we can swap the second and third rows.
The arguments to swap_rows() are fairly obvious, just remember that row 1 is the second row and row 2 is the third row.
Now we want to eliminate the 5 below the 1. This is done by multiplying the second row by $-5$ and adding it to the third row.
To get a 1 as the last entry in the third column we can scale the third row by $-1/2$.
At this point the matrix is in echelon form (well, having the 1's down the diagonal of the matrix is not required, but it does make our work easier). All that remains to find the solution is to put the matrix in reduced echelon form which requires that we replace all entries in the first three columns that are not on the main diagonal (where the 1's are) with zeros. We will start with the third column and work our way up and to the left. Remember that this is an augmented matrix and we are going to ignore the right-most column; it just "goes along for the ride."
At this point our solution is complete. We see that the solution to our linear system is $x_1=2$, $x_2=1$, and $x_3=-1$.
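To recap the row-operation syntax used above, here is a small stand-alone cell (the matrix below is made up for illustration; it is not the worksheet's M):

```python
# A hypothetical 3x4 augmented matrix over the rationals.
A = matrix(QQ, [[2, 4, -2, 2],
                [1, 3,  0, 4],
                [0, 5,  1, 3]])
A.rescale_row(0, 1/2)            # scale row 0 by 1/2
A.add_multiple_of_row(1, 0, -1)  # row 1 <- row 1 + (-1)*(row 0)
A.swap_rows(1, 2)                # swap rows 1 and 2
A
```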
There is an easy way to check your work, or to carry out these steps in the future. First, let's reload the matrix M.
The function echelon_form() will display the echelon form of a matrix without changing the matrix.
Notice that the matrix M is unchanged.
To replace M with its reduced echelon form, use the echelonize() function.
This brings us to the end of this demonstration.
We will now look at some more examples of computing errors and relative errors.
A sample yields a true value of $x_T = 25044.33$. A mathematical model estimates the experiment for which this sample was taken and approximates $x_T$ as $x_A = 24993.53$. Find the error and relative error of $x_A$ from $x_T$.
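Taking error $= x_T - x_A$ and relative error $= (x_T - x_A)/x_T$ (one common sign convention; some texts take absolute values), we get $\mathrm{Err}(x_A) = 25044.33 - 24993.53 = 50.80$ and $\mathrm{Rel}(x_A) = 50.80 / 25044.33 \approx 2.03 \times 10^{-3}$, i.e. about $0.2\%$.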
A function $p$ is defined on the interval $[0, 1]$ such that $0 ≤ p(x) ≤ 2$ for all $x \in [0, 1]$. Suppose that $p$ approximates a function $f(x) = x^2$ on the interval $[0, 1]$. Find the largest possible error of $p$ from $f$.
We are given that $0 ≤ p(x) ≤ 2$ for $x \in [0, 1]$. We note that the function we are approximating, $f$, is also bounded on $[0, 1]$, and that $f$ is increasing on $[0, 1]$. We can see this from the fact that $f(x) = x^2$ is a parabola that opens up whose vertex is at $(0, 0)$, or alternatively, we can see this since $f'(x) = 2x > 0$ for $x \in (0, 1]$. The minimum value of $f$ on $[0, 1]$ is $0$ and occurs at $x = 0$ since $f(0) = 0$, and the maximum value of $f$ on $[0, 1]$ is $1$ and occurs at $x = 1$ since $f(1) = 1$. Therefore $0 ≤ f(x) ≤ 1$ for $x \in [0, 1]$. The largest possible error is thus $-2$, which occurs if $p(0) = 2$ and $f(0) = 0$.
We will see that this will be important later when the true values for which we want to compute errors are unknown.
What's the fewest weights you need to balance any weight from 1 to 40 pounds?
How many different non congruent polygons can you make on a 3x3 dot grid?
Is it possible to fill the $121$ entries in an $11\times11$ square with the values $0,+1,-1$, so that the row sums and column sums are $22$ distinct numbers?
Can political debates really work?
How many squares can you make with equal ranged points?
How can 12 teams rotate through 6 games without overlaps?
How many friends does Tiffany have?
Let $S \to X$ be an $S^3$-fiber bundle over a smooth manifold $X$. If $S$ is an oriented manifold does this fiber bundle admit the structure of an $SU(2)$-principal bundle?
There is a similar theorem for the case of circle bundles and is proved in Morita's book on differential forms. Unfortunately, I do not see a way to extend his argument to this case and he does not discuss $SU(2)$-principal bundles in detail.
I should explain why the nontrivial vector bundle has nontrivial unit sphere bundle, that is, why $\pi_1 SO(4)$ injects into $\pi_1 SDiff(S^3)$.
You can use the big theorem of Hatcher ("Smale Conjecture"), which says that $\pi_k SO(4)$ maps isomorphically to $\pi_k SDiff(S^3)$ for all $k$.
Alternatively, you can use that the bundle is detected by the Stiefel-Whitney class $w_2$, and that Stiefel-Whitney classes of vector bundles are invariant under fiber homotopy equivalence of unit sphere bundle.
Or you can use $\pi_3$ instead of $\pi_1$ as suggested by Dylan Wilson, making a bundle over $S^4$; there are elements of $\pi_3 SO(4)\cong \mathbb Z\times \mathbb Z$ not coming from $\pi_3 SU(2)\cong\mathbb Z$. Again, the resulting sphere bundle is nontrivial either by Hatcher's theorem or by using characteristic classes. I'm not sure what the most elementary version of the characteristic-class argument would be.
One example of a rank $4$ real vector bundle over $S^4$ that does not admit a complex structure is the tangent bundle!
In this paper, a multi-state diagnosis and prognosis (MDP) framework is proposed for tool condition monitoring via a deep belief network based multi-state approach (DBNMS). For fault diagnosis, a cost-sensitive deep belief network (namely ECS-DBN) is applied to deal with the imbalanced data problem for tool state estimation. An appropriate prognostic degradation model is then applied for tool wear estimation based on the different tool states. The proposed framework has the advantage of automatic feature representation learning and shows better performance in accuracy and robustness. The effectiveness of the proposed DBNMS is validated using a real-world dataset obtained from the gun drilling process. This dataset contains a large amount of measured signals involving different tool geometries under various operating conditions. The DBNMS is examined for both the tool state estimation and tool wear estimation tasks. In the experimental studies, the prediction results are evaluated and compared with popular machine learning approaches, which show the superior performance of the proposed DBNMS approach.
Label aggregation is an efficient and low cost way to make large datasets for supervised learning. It takes the noisy labels provided by non-experts and infers the unknown true labels. In this paper, we propose a novel label aggregation algorithm which includes a label aggregation neural network. The learning task in this paper is unsupervised. In order to train the neural network, we try to design a suitable guiding model to define the loss function. The optimization goal of our algorithm is to find the consensus between the predictions of the neural network and the guiding model. This algorithm is easy to optimize using mini-batch stochastic optimization methods. Since the choices of the neural network and the guiding model are very flexible, our label aggregation algorithm is easy to extend. According to the algorithm framework, we design two novel models to aggregate noisy labels. Experimental results show that our models achieve better results than state-of-the-art label aggregation methods.
Deep convolutional neural networks (CNN) have recently been shown in many computer vision and pattern recognition applications to outperform by a significant margin state-of-the-art solutions that use traditional hand-crafted features. However, this impressive performance is yet to be fully exploited in robotics. In this paper, we focus on one specific problem that can benefit from the recent development of the CNN technology, i.e., we focus on using a pre-trained CNN model as a method of generating an image representation appropriate for visual loop closure detection in SLAM (simultaneous localization and mapping). We perform a comprehensive evaluation of the outputs at the intermediate layers of a CNN as image descriptors, in comparison with state-of-the-art image descriptors, in terms of their ability to match images for detecting loop closures. The main conclusions of our study include: (a) CNN-based image representations perform comparably to state-of-the-art hand-crafted competitors in environments without significant lighting change, (b) they outperform state-of-the-art competitors when lighting changes significantly, and (c) they are also significantly faster to extract than the state-of-the-art hand-crafted features even on a conventional CPU and are two orders of magnitude faster on an entry-level GPU.
Semantic segmentation is a fundamental task in computer vision, which can be considered as a per-pixel classification problem. Recently, although fully convolutional neural network (FCN) based approaches have made remarkable progress in such task, aggregating local and contextual information in convolutional feature maps is still a challenging problem. In this paper, we argue that, when predicting the category of a given pixel, the regions close to the target are more important than those far from it. To tackle this problem, we then propose an effective yet efficient approach named Vortex Pooling to effectively utilize contextual information. Empirical studies are also provided to validate the effectiveness of the proposed method. To be specific, our approach outperforms the previous state-of-the-art model named DeepLab v3 by 1.5% on the PASCAL VOC 2012 val set and 0.6% on the test set by replacing the Atrous Spatial Pyramid Pooling (ASPP) module in DeepLab v3 with the proposed Vortex Pooling. Moreover, our model (10.13FPS) shares similar computation cost with DeepLab v3 (10.37 FPS).
As a basic task in computer vision, semantic segmentation can provide fundamental information for object detection and instance segmentation to help the artificial intelligence better understand the real world. Since the proposal of the fully convolutional neural network (FCNN), it has been widely used in semantic segmentation because of its high accuracy of pixel-wise classification as well as high precision of localization. In this paper, we apply several famous FCNN to brain tumor segmentation, making comparisons and adjusting network architectures to achieve better performance measured by metrics such as precision, recall, mean intersection over union (mIoU) and dice score coefficient (DSC). The adjustments to the classic FCNN include adding more connections between convolutional layers, enlarging decoders after upsampling layers and changing the way shallower layers' information is reused. Besides the structure modification, we also propose a new classifier with a hierarchical dice loss. Inspired by the containing relationship between classes, the loss function converts multiple classification to multiple binary classification in order to counteract the negative effect caused by an imbalanced data set. Massive experiments have been done on the training set and testing set in order to assess our refined fully convolutional neural networks and new types of loss function. Competitive figures prove they are more effective than their predecessors.
In this paper, we present a new automatic diagnosis method of facial acne vulgaris based on convolutional neural network. This method is proposed to overcome the shortcoming of classification types in previous methods. The core of our method is to extract features of images based on convolutional neural network and achieve classification by classifier. We design a binary classifier of skin-and-non-skin to detect skin area and a seven-classifier to achieve the classification of facial acne vulgaris and healthy skin. In the experiment, we compared the effectiveness of our convolutional neural network and the pre-trained VGG16 neural network on the ImageNet dataset. And we use the ROC curve and normal confusion matrix to evaluate the performance of the binary classifier and the seven-classifier. The results of our experiment show that the pre-trained VGG16 neural network is more effective in extracting image features. The classifiers based on the pre-trained VGG16 neural network achieve the skin detection and acne classification and have good robustness.
Object detection aims at high speed and accuracy simultaneously. However, fast models are usually less accurate, while accurate models cannot satisfy our need for speed. A fast model can be 10 times faster but 50\% less accurate than an accurate model. In this paper, we propose Adaptive Feeding (AF) to combine a fast (but less accurate) detector and an accurate (but slow) detector, by adaptively determining whether an image is easy or hard and choosing an appropriate detector for it. In practice, we build a cascade of detectors, including the AF classifier which makes the easy vs. hard decision and the two detectors. The AF classifier can be tuned to obtain different tradeoffs between speed and accuracy, which has negligible training time and requires no additional training data. Experimental results on the PASCAL VOC, MS COCO and Caltech Pedestrian datasets confirm that AF has the ability to achieve comparable speed to the fast detector and comparable accuracy to the accurate one at the same time. As an example, by combining the fast SSD300 with the accurate SSD500 detector, AF leads to 50\% speedup over SSD500 with the same precision on the VOC2007 test set.
The difficulty of image recognition has gradually increased from general category recognition to fine-grained recognition and to the recognition of some subtle attributes such as temperature and geolocation. In this paper, we try to focus on the classification between sunrise and sunset and hope to give a hint about how to tell the difference in subtle attributes. Sunrise vs. sunset is a difficult recognition task, which is challenging even for humans. Towards understanding this new problem, we first collect a new dataset made up of over one hundred webcams from different places. Since existing algorithmic methods have poor accuracy, we propose a new pairwise learning strategy to learn features from selective pairs of images. Experiments show that our approach surpasses baseline methods by a large margin and achieves better results even compared with humans. We also apply our approach to existing subtle attribute recognition problems, such as temperature estimation, and achieve state-of-the-art results.
Automatic pain intensity estimation possesses a significant position in the healthcare and medical field. Traditional static methods prefer to extract features from frames separately in a video, which would result in unstable changes and peaks among adjacent frames. To overcome this problem, we propose a real-time regression framework based on the recurrent convolutional neural network for automatic frame-level pain intensity estimation. Given vector sequences of AAM-warped facial images, we used a sliding-window strategy to obtain fixed-length input samples for the recurrent network. We then carefully design the architecture of the recurrent network to output continuous-valued pain intensity. The proposed end-to-end pain intensity regression framework can predict the pain intensity of each frame by considering a sufficiently large number of historical frames while limiting the scale of the parameters within the model. Our method achieves promising results regarding both accuracy and running speed on the published UNBC-McMaster Shoulder Pain Expression Archive Database.
Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pretrained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github.
Appropriate comments of code snippets provide insight for code functionality, which are helpful for program comprehension. However, due to the great cost of authoring with the comments, many code projects do not contain adequate comments. Automatic comment generation techniques have been proposed to generate comments from pieces of code in order to alleviate the human efforts in annotating the code. Most existing approaches attempt to exploit certain correlations (usually manually given) between code and generated comments, which could be easily violated if the coding patterns change and hence the performance of comment generation declines. In this paper, we first build C2CGit, a large dataset from open projects in GitHub, which is more than 20$\times$ larger than existing datasets. Then we propose a new attention module called Code Attention to translate code to comments, which is able to utilize the domain features of code snippets, such as symbols and identifiers. We make ablation studies to determine effects of different parts in Code Attention. Experimental results demonstrate that the proposed module has better performance over existing approaches in both BLEU and METEOR.
We designed a gangue sorting system, and built a convolutional neural network model based on AlexNet. Data enhancement and transfer learning are used to solve the problem that the convolutional neural network has insufficient training data in the training stage. An object detection and region clipping algorithm is proposed to adjust the training image data to the optimum size. Compared with traditional neural networks and the SVM algorithm, this algorithm has a higher recognition rate for coal and coal gangue, and provides an important reference for identification and separation of coal and gangue.
Inspired by the practical importance of social networks, economic networks, biological networks and so on, studies on large and complex networks have attracted a surge of attention in recent years. Link prediction is a fundamental issue to understand the mechanisms by which new links are added to the networks. We introduce the method of robust principal component analysis (robust PCA) into link prediction, and estimate the missing entries of the adjacency matrix. On one hand, our algorithm is based on the sparsity and low rank property of the matrix, on the other hand, it also performs very well when the network is dense. This is because a relatively dense real network is also sparse in comparison to the complete graph. According to extensive experiments on real networks from disparate fields, when the target network is connected and sufficiently dense, whether it is weighted or unweighted, our method is demonstrated to be very effective and with prediction accuracy being considerably improved compared with many state-of-the-art algorithms.
As we aim at alleviating the curse of high-dimensionality, subspace learning is becoming more popular. Existing approaches use information about either the global or the local structure of the data, and few studies simultaneously focus on global and local structures, as both of them contain important information. In this paper, we propose a global and local structure preserving sparse subspace learning (GLoSS) model for unsupervised feature selection. The model can simultaneously realize feature selection and subspace learning. In addition, we develop a greedy algorithm to establish a generic combinatorial model, and an iterative strategy based on an accelerated block coordinate descent is used to solve the GLoSS problem. We also provide a convergence analysis of the whole iterate sequence of the proposed iterative algorithm. Extensive experiments are conducted on real-world datasets to show the superiority of the proposed approach over several state-of-the-art unsupervised feature selection approaches.
Supervised learning methods are widely used in machine learning. However, the lack of labels in existing data limits the application of these technologies. Visual interactive learning (VIL) compared with computers can avoid semantic gap, and solve the labeling problem of small label quantity (SLQ) samples in a groundbreaking way. In order to fully understand the importance of VIL to the interaction process, we re-summarize the interactive learning related algorithms (e.g. clustering, classification, retrieval etc.) from the perspective of VIL. Note that, perception and cognition are two main visual processes of VIL. On this basis, we propose a perceptual visual interactive learning (PVIL) framework, which adopts gestalt principle to design interaction strategy and multi-dimensionality reduction (MDR) to optimize the process of visualization. The advantage of PVIL framework is that it combines computer's sensitivity of detailed features and human's overall understanding of global tasks. Experimental results validate that the framework is superior to traditional computer labeling methods (such as label propagation) in both accuracy and efficiency, which achieves significant classification results on dense distribution and sparse classes dataset.
We study how to learn a semantic parser of state-of-the-art accuracy with less supervised training data. We conduct our study on WikiSQL, the largest hand-annotated semantic parsing dataset to date. First, we demonstrate that question generation is an effective method that empowers us to learn a state-of-the-art neural network based semantic parser with thirty percent of the supervised training data. Second, we show that applying question generation to the full supervised training data further improves the state-of-the-art model. In addition, we observe that there is a logarithmic relationship between the accuracy of a semantic parser and the amount of training data.
While attributes have been widely used for person re-identification (Re-ID) which aims at matching the same person images across disjoint camera views, they are used either as extra features or for performing multi-task learning to assist the image-image matching task. However, how to find a set of person images according to a given attribute description, which is very practical in many surveillance applications, remains a rarely investigated cross-modality matching problem in person Re-ID. In this work, we present this challenge and formulate this task as a joint space learning problem. By imposing an attribute-guided attention mechanism for images and a semantic consistent adversary strategy for attributes, each modality, i.e., images and attributes, successfully learns semantically correlated concepts under the guidance of the other. We conducted extensive experiments on three attribute datasets and demonstrated that the proposed joint space learning method is so far the most effective method for the attribute-image cross-modality person Re-ID problem.
Multiple Camera Systems (MCS) have been widely used in many vision applications and attracted much attention recently. There are two principal types of MCS: one is the Rigid Multiple Camera System (RMCS); the other is the Articulated Camera System (ACS). In an RMCS, the relative poses (relative 3-D position and orientation) between the cameras are invariant. In an ACS, on the other hand, the cameras are articulated through movable joints, and the relative pose between them may change. Therefore, through calibration of an ACS we want to find not only the relative poses between the cameras but also the positions of the joints in the ACS. In this paper, we developed calibration algorithms for the ACS using a simple constraint: the joint is fixed relative to the cameras connected with it during the transformations of the ACS. When the transformations of the cameras in an ACS can be estimated relative to the same coordinate system, the positions of the joints in the ACS can be calculated by solving linear equations. However, in a non-overlapping view ACS, only the ego-transformations of the cameras can be estimated. We propose a two-step method to deal with this problem. In both methods, the ACS is assumed to have performed general transformations in a static environment. The efficiency and robustness of the proposed methods are tested by simulation and real experiments. In the real experiment, the intrinsic and extrinsic parameters of the ACS are obtained simultaneously by our calibration procedure using the same image sequences; no extra data capturing step is required. The corresponding trajectory is recovered and illustrated using the calibration results of the ACS. Since the estimated translations of different cameras in an ACS may be scaled by different scale factors, a scale factor estimation algorithm is also proposed. To our knowledge, we are the first to study the calibration of ACS.
Recovering the absolute metric scale from a monocular camera is a challenging but highly desirable problem for monocular camera-based systems. By using different kinds of cues, various approaches have been proposed for scale estimation, such as camera height, object size etc. In this paper, firstly, we summarize different kinds of scale estimation approaches. Then, we propose a robust divide-and-conquer absolute scale estimation method based on the ground plane and camera height by analyzing the advantages and disadvantages of different approaches. By using the estimated scale, an effective scale correction strategy has been proposed to reduce the scale drift during the Monocular Visual Odometry (VO) estimation process. Finally, the effectiveness and robustness of the proposed method have been verified on both public and self-collected image sequences.
We show that any similar structure measure on the segment $[0,1]$ is an image-measure of an appropriately constructed infinite direct product of discrete probability measures.
We construct two types of equilibrium dynamics of an infinite particle system in a locally compact metric space $X$ for which a permanental point process is a symmetrizing, and hence invariant measure. The Glauber dynamics is a birth-and-death process in $X$, while in the Kawasaki dynamics interacting particles randomly hop over $X$. In the case $X=\mathbb R^d$, we consider a diffusion approximation for the Kawasaki dynamics at the level of Dirichlet forms. This leads us to an equilibrium dynamics of interacting Brownian particles for which a permanental point process is a symmetrizing measure.
In this paper we generalize the results given in about real sequences which are not necessarily positive (i.e, they are not sequences of power moments) but can be mapped, by a difference operator, into a power moment sequence. We prove by elementary methods that the integro-polynomial representation of such sequences remains after dropping the condition on its growth imposed in the mentioned article. Some additional results on the uniqueness of the representation are included. | CommonCrawl |
I am trying to understand the concept of the outer product in quantum mechanics. I read "Quantum Computing Explained" by David McMahon.
But how to get $(\langle \phi | \phi | \chi \rangle ) | \psi \rangle$ ?
Why it is possible to get through such steps?
I'm thinking that it's a typo, and all the author wanted in the last term was to write $$ (\langle \phi|\chi\rangle)\,|\psi\rangle. $$ The proof uses that you have a kind of associativity in the first equality $(|\psi\rangle\langle\phi|)\,|\chi\rangle= |\psi\rangle\,\langle\phi|\chi\rangle$ which I think is brought out of the blue if you introduce bras and kets out of nowhere.
The equality is obvious if you notice that kets are simply column vectors in $\mathbb C^n$, and bras are their adjoints (conjugate transpose). In that setting your equality is $$ (\psi\phi^*)\,\chi=\psi\,(\phi^*\chi)=(\phi^*\chi)\,\psi, $$ where the associativity is that of the product of matrices.
Abstract: I have investigated signal recognition particle (SRP)-mediated protein targeting using a combination of genetic, biochemical, and molecular sequence analysis techniques. First, I consider the SRP cycle from the perspective of molecular evolution. This analysis provides insight into the significance of structural variation in SRP RNA and identifies novel conserved motifs in the polypeptide subunits of the particle. The conservation of SRP cycle components, combined with biochemical data from the mammalian, bacterial and yeast systems, suggests that this pathway for protein export is ancient in evolutionary origin. Next, I have used a variety of genetic and biochemical techniques to define the role of the Srp54p GTPase in the SRP cycle. Repressing synthesis of the essential Srp54 protein produces a growth defect that correlates with an accumulation of secretory precursors. I have also analyzed the effects of 17 site-specific mutations in the G domain of Srp54p. Several mutant alleles confer lethal and conditional phenotypes, indicating that GTP binding and hydrolysis are critical to the in vivo role of Srp54p. Enzymatic assays reveal that S. pombe Srp54p exhibits GTPase activity in vitro, while a mutant predicted to be catalytically defective has a reduced ability to hydrolyze GTP. Most importantly, the pattern of genetic dominance that these mutants display leads me to propose a model for the role of GTP hydrolysis by Srp54p during the SRP cycle, in which the SRP receptor $\alpha$ subunit (SR$\alpha$) serves as a GTPase activating protein (GAP) regulating signal sequence binding by the Srp54p subunit. Lastly, I have cloned and sequenced the S. pombe SR$\alpha$ gene. The gene encodes a 70 kDa protein that bears striking sequence similarity to the previously cloned mammalian and S. cerevisiae 70 kDa SR$\alpha$ proteins. The cloning of SR$\alpha$ opens the door to both exploring the biochemical effects of the SR$\alpha$ protein on the already existing Srp54p catalytically defective mutants and to genetically isolating the next downstream component in this complex GTPase cycle. | CommonCrawl |
Purpose: This tutorial shows you how to visualize the phonon modes found in an earlier phonon calculation, using a basic python script. As in other tutorials, we consider the diamond crystal as example.
Continue in the working directory where you did the phonon calculation, e.g., /home/exciting-tutorial/diamond-phonons. In particular you need to have the files input.xml and PHONON.OUT ready. The script will also read the relevant species files, C.xml in our example for diamond, in the path specified by $EXCITINGROOT/species.
to obtain a supercell of 4$\times$4$\times$1 unit cells.
When executing the script PLOT-phonon-anim.py, you will be required to enter the number of steps in the animation which complete one period of the vibration.
Specify, e.g., 20 steps, as above. The screen output of the script is completed by information about the number of atoms, number of q-points, and the frequency (in cm-1) for each mode at each q-point.
Beyond this information, the script produces a number of files, named, e.g., q1_mode1.axsf and q1_mode1.xyz. One file is written for each mode at each q-point. These files contain atomic coordinates for the supercell at every step of the animation. The axsf format can be used in connection with XCrySDen (see next section), the files with extension xyz are simple xyz-coordinate files.
You can click on the larger window and, while holding the click, change the angle of view of the cell. Now, you can use the arrow buttons in the smaller window to watch the animation of the mode. If XCrySDen is properly set up, the button Animated GIF/MPEG of the smaller window and the dialog that opens can be used to produce a movie of the animation; refer to the XCrySDen webpage for details.
The temporary frame files can be reused at a later point to change the settings of the movie (see Section 3).
Many visualization packages can plot the files with extension .xyz. Here, we make use of VMD as an example, and assume that you have an installed and properly set up version available on your system.
Use the Graphics -> Representations button to change the appearance of atoms and bonds, colors and that like. You might also want to change to a non-perspective display or omit the axes in the image, using the Display menu.
Adjust path, filename and movie duration according to your needs. Make sure you set Movie Settings -> User Defined Procedure and start the animation in the VMD Main window by pressing the bottom right arrow button, prior to pressing the Make Movie button. By default VMD produces an animated GIF using the convert tool. See the VMD webpage for details and other settings. Note that in order to be able to adjust the settings of your movie later, we recommend to keep the intermediate images by unchecking Movie Settings -> 4: Delete Image Files, see Section 3.
As another example, let us have a look at the acoustic mode 1 at q=(0.1, 0.1, 0), on the line Δ (i.e. Γ-X). You have to rerun the phonon calculation with this q-point added to the list, and invoke the script PLOT-phonon-anim.py once more, to obtain the appropriate .xyz-file.
for the first, second, etc. frame. VMD uses the .ppm image file format by default. This format can also be chosen in XCrySDen, so we use it here exemplarily. For other file formats the procedure would be analogous. In general, it is not recommended to use .gif temporary files (as a necessary compression step should be done only when creating the movie).
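The convert call itself goes roughly like this (the .ppm wildcard is a placeholder for however your frame files are actually named):

    convert -delay 4 -loop 0 *.ppm q1_mode4.gif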
where -delay 4 keeps every frame displayed for 0.04 seconds and -loop 0 means to repeat the loop infinitely; the movie is output to file q1_mode4.gif.
If you want to change the size and scale of your images at this point you can use, e.g.
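    convert frame.ppm -crop 560x420+10+10 +repage -resize 80% frame_small.ppm

(frame.ppm and frame_small.ppm are placeholder names for one of your frame files and its processed copy; repeat for every frame before assembling the movie)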
where -crop 560x420+10+10 means that a rectangular region of 560x420 pixels, offset by 10 pixels in x and y, is used. The option +repage then resets the image canvas, and -resize 80% rescales images to 80%. The result is a smaller image size in the movie, with everything outside the region of 560x420 pixels cut off. There are many more options to the convert program, which are explained at the ImageMagick website.
More details on preparing animations can be found here. | CommonCrawl |
I'm trying to solve a 3x3 magic square for 99 which starts at 29. I got the rows, columns and one diagonal but the other diagonal is (way) off.
Is there even a magic square which satisfies both diagonals for these parameters? As you can see I've been playing around in a spreadsheet, moving numbers around. But is there a quicker way to spot if a magic square for a number starting at an arbitrary number is possible?
It might interest you that one of Ramanujan's first hobbies was trying to understand magic squares, and they appear in his first notebook often.
I always construct it in the following manner and it works; I don't know whether there is an alternative way to do this or what the proof of correctness of my construction is.
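For the numbers in the question this does work out. A minimal sketch, assuming the standard Lo Shu pattern shifted so that the nine consecutive entries are 29-37 (the center is their mean 33, so the magic constant is $3 \times 33 = 99$), together with a brute-force check:

    # Lo Shu pattern shifted so the smallest entry is 29; every row,
    # column and both diagonals should sum to 99.
    square = [[36, 29, 34],
              [31, 33, 35],
              [32, 37, 30]]

    sums = [sum(row) for row in square]                 # rows
    sums += [sum(col) for col in zip(*square)]          # columns
    sums += [sum(square[i][i] for i in range(3))]       # main diagonal
    sums += [sum(square[i][2 - i] for i in range(3))]   # anti-diagonal
    print(sums)  # all eight sums are 99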
You are in a book shop which sells $n$ different books. You know the price and number of pages of each book.
You have decided that the total price of your purchases will be at most $x$. What is the maximum number of pages you can buy? You can buy each book at most once.
The first input line contains two integers $n$ and $x$: the number of books and the maximum total price.
The next line contains $n$ integers $h_1,h_2,\ldots,h_n$: the price of each book.
The last line contains $n$ integers $s_1,s_2,\ldots,s_n$: the number of pages of each book.
Print one integer: the maximum number of pages.
Explanation: You can buy books 1 and 3. Their price is $4+5=9$ and the number of pages is $5+8=13$. | CommonCrawl |
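This is the classic 0/1 knapsack problem. A minimal sketch of the standard dynamic programming solution (dp[c] is the maximum number of pages attainable with total price at most c; input read in the format described above):

    import sys

    def main():
        data = sys.stdin.read().split()
        n, x = int(data[0]), int(data[1])
        prices = [int(v) for v in data[2:2 + n]]
        pages = [int(v) for v in data[2 + n:2 + 2 * n]]

        # dp[c] = maximum number of pages with total price at most c
        dp = [0] * (x + 1)
        for h, s in zip(prices, pages):
            # iterate the budget downwards so each book is used at most once
            for c in range(x, h - 1, -1):
                dp[c] = max(dp[c], dp[c - h] + s)
        print(dp[x])

    main()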
Implementation of <a href = "https://en.wikipedia.org/wiki/Knight%27s_tour#Warnsdorf's_rule">Warnsdorff's rule</a> - heuristic for finding a knight's tour on chessboards. A knight's tour is a sequence of moves of a knight on a chessboard such that the knight visits every square only once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is closed, otherwise it is open. The knight's tour problem is the mathematical problem of finding a knight's tour. Description of the Warnsdorff's rule: set a start cell. Always proceed to the cell that have the fewest onward moves. In case of a tie(i.e. there exist more than one possible choice for the next cell) go to the cell with largest Euclidean distance from the center of the board. This implementation also allows you to find a structured knight's tour. Knight's tour on board of size $n \times m$ is called structured if it contains the following $8$ UNDIRECTED moves: 1). $(1, 0) \to (0, 2)$ - denoted as $1$ on the picture below. 2). $(2, 0) \to (0, 1)$ - denoted as $2$ on the picture below. 3). $(n - 3, 0) \to (n - 1, 1)$ - denoted as $3$ on the picture below. 4). $(n - 2, 0) \to (n - 1, 2)$ - denoted as $4$ on the picture below. 5). $(0, m - 3) \to (1, m - 1)$ - denoted as $5$ on the picture below. 6). $(0, m - 2) \to (2, m - 1)$ - denoted as $6$ on the picture below. 7). $(n - 3, m - 1) \to (n - 1, m - 2)$ - denoted as $7$ on the picture below. 8). $(n - 2, m - 1) \to (n - 1, m - 3)$ - denoted as $8$ on the picture below. ######################################### #*12*********************************34*# #2*************************************3# #1*************************************4# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #***************************************# #6*************************************8# #5*************************************7# #*65*********************************78*# ######################################### If you are confused with the formal definition of the structured knight's tour please refer to illustration on the page $3$ of the paper "An efficient algorithm for the Knight's tour problem " by Ian Parberry. One more feature of this implementation is that it provides an option to return a shifted knight's tour, where all cell's coordinates are shifted by some values. Basically it is the same as knight's tour of some piece of the board.
Generates a knight's tour that satisfies the input parameters.
n - width and height of the board.
Generates a knight's tour that satisfies the input parameters. Warnsdorff's rule heuristic is an example of a greedy method, which we use to select the next cell to move, and thus may fail to find a tour. However, another greedy heuristic is used to prevent failing: in case of a tie we will select a cell with the largest euclidean distance from the center of the board. Such combination of greedy methods significantly increases our chances to find a tour.
type - of the tour.
structured - true if we want the tour to be structured, otherwise false.
shiftX - the value will be added to each cell's x-coordinate to achieve the effect of shifting.
shiftY - the value will be added to each cell's y-coordinate to achieve the effect of shifting.
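A minimal, language-agnostic sketch of the move-selection step described above (written in Python for illustration, not taken from the Java implementation documented here; the board dimensions and the set of visited cells are assumed to be tracked by the caller):

    import math

    MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def next_cell(cur, visited, n, m):
        """Warnsdorff step: fewest onward moves, ties broken by distance from the center."""
        cx, cy = (n - 1) / 2.0, (m - 1) / 2.0

        def on_board(c):
            return 0 <= c[0] < n and 0 <= c[1] < m and c not in visited

        def degree(c):
            return sum(on_board((c[0] + dx, c[1] + dy)) for dx, dy in MOVES)

        candidates = [(cur[0] + dx, cur[1] + dy) for dx, dy in MOVES]
        candidates = [c for c in candidates if on_board(c)]
        if not candidates:
            return None
        # fewest onward moves; on a tie, the larger Euclidean distance from the center wins
        return min(candidates,
                   key=lambda c: (degree(c), -math.hypot(c[0] - cx, c[1] - cy)))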
Fyi, your posts are all missing attachments!
a. Transform the given system into a single equation of second order.
b. Find $x_1$ and $x_2$ that also satisfy the given initial conditions.
I think c1 should be -2 and c2 should be 2.
This is in fact not the question TUT0601 got, we had the question from Zhiya's post. I think it's meant to be for next week, since the TA just did it in tutorial and nobody had practiced it. | CommonCrawl |
where $k(t)$ has, in addition, a certain smoothness in a neighborhood of the points $\pm a$. Specifically, for $1<p<\infty$ this property is realized if and only if $\alpha\ne1-1/p$, while for $p=1$ or $\infty$ there is no extension of convergence. This result is applied to the question of bases of exponential functions in $L^p(-a,a)$, $1<p<\infty$. | CommonCrawl |
Equation: "integral from 0 to 6 of -(x^2)+36"
So I know the answer, I just don't understand what my teacher wants. I know there's an "n(n+1)(2n+1)" that's supposed to be thrown in there.
What the teacher wants if for you to divide the interval (0,6) into a lot of evenly spaced intervals and set up the Riemann sum over these intervals. Evaluate the sum and then let the number of intervals become infinite and see that the sum converges to the integral.
Thanks for the information, but I have no idea how to do that. He's never gone over a single problem like that and it's due for homework tomorrow. Do you know any good places to read up on this method of solving?
The Riemann sum is $\frac{6}{n}\sum_{i=1}^{n}\left(36-\frac{36i^2}{n^2}\right)=216-\frac{216}{n^3}\cdot\frac{n(n+1)(2n+1)}{6}=216-\frac{36(n+1)(2n+1)}{n^2}$, which tends to $-36(2) + 216 = 144$ as $n \to \infty$.
Thanks for your help. I have no idea what you did, but I guess I'll learn soon! | CommonCrawl |
Abstract: This paper is concerned with the study of the existence/non-existence of the discrete spectrum of the Laplace operator on a domain of $\mathbb R ^3$ which consists in a twisted tube. This operator is defined by means of mixed boundary conditions. Here we impose Neumann boundary conditions on a bounded open subset of the boundary of the domain (the Neumann window) and Dirichlet boundary conditions elsewhere.
Abstract: We present multi-epoch, $R-$band imaging obtained from the Palomar Transient Factory of a small, fan-shaped reflection nebula in NGC 1333 that experiences prominent brightness fluctuations. Photometry of HBC 340 (K7e) and HBC 341 (M5e), a visual pair of late-type, young stellar objects lying near the apex of the nebula, demonstrates that while both are variable, the former has brightened by more than two magnitudes following a deep local minimum in September 2014. Keck high dispersion (R$\sim$45,000-66,000), optical spectroscopy of HBC 340 suggests that the protostar is a spectroscopic binary (HBC 340Aa + HBC 340Ab). Both HBC 340 and HBC 341 exhibit strong H$\alpha$ and forbidden line emission, consistent with accretion and outflow. We conclude that the brightness fluctuations in the reflection nebula represent light echos produced by varying incident radiation emanating from HBC 340. The short-term variability observed in the protostar is attributed to irregular accretion activity, while correlated, dipping behavior on a several hundred day time scale may be due to eclipse-like events caused by orbiting circumstellar material. Archival Hubble Space Telescope imaging of the region reveals a second, faint (F814W$\sim$20.3 mag) companion to HBC 340 that lies 1.02" ($\sim$235 AU) east of the protostar. If associated, this probable substellar mass object (20-50 Jupiter masses), HBC 340B, is likely unrelated to the observed brightness variations. The sustained brightening of HBC 340 since late 2014 can be explained by an EXor-like outburst, the recovery from a long duration eclipse event caused by obscuring circumstellar dust, or by the gradual removal of extincting material from along the line of sight. Our analysis here favors one of the extinction scenarios. | CommonCrawl |
Theorem 1 (Riesz's Lemma): Let $(X, \| \cdot \|)$ be a normed linear space and let $Y \subseteq X$ be a proper and closed linear subspace of $X$. Then for all $\epsilon$ such that $0 < \epsilon < 1$ there exists an element $x_0 \in X$ with $\| x_0 \| = 1$ such that $\| x_0 - y \| \geq 1 - \epsilon$ for every $y \in Y$.
Proof: Let $X$ be a normed linear space and let $Y$ be a proper and closed linear subspace of $X$ and let $\epsilon$ be such that $0 < \epsilon < 1$. | CommonCrawl |
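Since $Y$ is a proper subspace there exists a point $x \in X \setminus Y$, and since $Y$ is closed we have $d := \inf_{y \in Y} \| x - y \| > 0$. Because $0 < \epsilon < 1$ we have $d < \frac{d}{1 - \epsilon}$, so by the definition of the infimum there exists $y_0 \in Y$ such that $d \leq \| x - y_0 \| \leq \frac{d}{1 - \epsilon}$. Let $x_0 = \frac{x - y_0}{\| x - y_0 \|}$, so that $\| x_0 \| = 1$. Then for every $y \in Y$:
$$\| x_0 - y \| = \frac{\| x - (y_0 + \| x - y_0 \| y) \|}{\| x - y_0 \|} \geq \frac{d}{\| x - y_0 \|} \geq \frac{d(1 - \epsilon)}{d} = 1 - \epsilon,$$
since $y_0 + \| x - y_0 \| y \in Y$. Hence $\| x_0 - y \| \geq 1 - \epsilon$ for every $y \in Y$. $\blacksquare$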
The motional Stark effect with laser-induced fluorescence (MSE-LIF) diagnostic is under development to extend the MSE magnetic pitch angle diagnostic to lower fields ( $<$ 0.5 T) and enable measurement of magnetic field magnitude as well as direction. The technique involves injecting a low energy-spread neutral hydrogen beam (30 kV, 30 mA) into plasma, and using a collinear laser to excite transitions from the n=2 to n=3 atomic states in the beam atoms. The subsequent fluorescence from the same transition (Balmer-alpha, near 650 nm for the Doppler-shifted beam) is observed, and its splitting and polarization due to the E = v $\times$ B electric field in the beam frame is used to determine the background magnetic field magnitude and direction. This poster will present recent results from MSE-LIF development. A new plasma testbed for MSE-LIF, a spiral antenna helicon source, has been built. | CommonCrawl |
Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Photon emission from nanostructures in an STM. Nanostructured Materials 3, 345–348 (1993).
Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Photon emission from small particles in an STM. Zeitschrift für Physik D Atoms, Molecules and Clusters 26, 87–88 (1993).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of local photoemission using a scanning tunneling microscope. Ultramicroscopy 42, 366–370 (1992).
Berndt, R., Schlittler, R. R. & Gimzewski, J. K. Photon emission processes in STM. AIP Conf Proceedings 241, 328–336 (1992).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of mass transport on Au (110)-(1$\times$ 2) reconstructed surfaces using scanning tunneling microscopy. Surface Science 247, 327–332 (1991).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of the temporal evolution of the (1$\times$ 2) reconstructed Au (110) surface using scanning tunneling microscopy. Journal of Vacuum Science & Technology B 9, 897–901 (1991). | CommonCrawl |
Can select many disjoint pairs with prescribed differences from Z_n?
Suppose we have a sequence $d_i<2n$ for $i=1,\ldots,n$ and we want to select $n$ disjoint pairs from $Z_p$, $x_i,y_i$ such that $x_i-y_i=d_i \mod p$. Then how big $p$ has to be compared to $n$ to do this? I am primary interested on an upper bound on $p$. Is it true that there is always a $p\le (1+\epsilon)2n+O(1)$?
My comments. It is trivial that $p\ge 2n$ because all the numbers $x_i,y_i$ must be different and $d_1=1, d_2=2$ shows that this is not always enough. I also guess that it helps if $p$ is a prime, maybe the smallest prime bigger than $2n$ works which would answer the question.
Sorry that this is such a simple question!
Paternoster, R., Brame, R., Mazerolle, P., & Piquero, A. R. (1998). Using the Correct Statistical Test for the Equality of Regression Coefficients. Criminology, 36(4), 859–866.
The test statistic is $Z = \dfrac{b_1 - b_2}{\sqrt{SE_{b_1}^2 + SE_{b_2}^2}}$, SE being the standard error of the respective 'slopes' in your case.
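For concreteness, a minimal sketch of the computation with placeholder numbers (the slopes and standard errors below are made up, not taken from your data):

    from math import sqrt
    from scipy.stats import norm

    b1, se1 = 0.52, 0.10   # slope and standard error, first regression (placeholders)
    b2, se2 = 0.30, 0.12   # slope and standard error, second regression (placeholders)

    z = (b1 - b2) / sqrt(se1**2 + se2**2)
    p = 2 * norm.sf(abs(z))   # two-sided p-value
    print(z, p)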
If the slopes come from ordinary least squares regression, it would be good to verify that the year-to-year data which generated these values are indeed independent. Most capture-recapture studies need to account for previous years' volumes using some method of handling the dependence of volume over time.
Using standard errors, you can construct confidence intervals around your slope parameters. A naive test for whether they are different at the correct $\alpha$ level is to inspect whether any of the confidence intervals overlap. (Note the confidence interval from one parameter has to overlap the other actual parameter value, not its confidence interval, in order to fail to reject the null hypothesis that they're different).
You are given a rooted tree that consists of $n$ nodes. The nodes are numbered $1,2,\ldots,n$, and node $1$ is the root. Each node has a value.
The first input line contains two integers $n$ and $q$: the number of nodes and queries. The nodes are numbered $1,2,\ldots,n$.
The next line has $n$ integers $v_1,v_2,\ldots,v_n$: the value of each node.
Then there are $n-1$ lines that describe the edges. Each line contains two integers $a$ and $b$: there is an edge between nodes $a$ and $b$.
Finally, there are $q$ lines that describe the queries. Each query is either of the form "1 $s$ $x$" or "2 $s$".
Print the answer to each query of type 2. | CommonCrawl |
Which shape does an elastic rod take as its ends are getting closer?
Note that in the case of the circle arc you cannot bring $a$ to $0$ closer than $a = L/\pi$ without leaving the realm of functions $y=f(x)$ - for parabolas and sine functions you can.
Note that this doesn't look at all like a sinusoidal wave.
The energy as defined in the question, $\int_0^1|f''(x)|\,\mathrm dx$, is the total variation of the slope $f'(x)$. If $f$ is concave*, as the figure suggests is desired, then $f''$ is negative and the energy is simply $f'(0)-f'(1)$. The optimal curve must be (two edges of) a triangle, otherwise one can reduce $f'(0)$ without changing $f'(1)$ by replacing the curve with the triangle $ABC$ where $A=(0,0)$, $C=(1,0)$, $BC$ has slope $f'(1)$, and $B$ is chosen so $|AB|+|BC|=L$. Among all triangles with $|AB|+|BC|=L$, the energy $f'(0)-f'(1)=\tan\angle A+\tan\angle C$ is minimized by an isosceles triangle with sides $L/2$, $L/2$, and $1$.
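(For that isosceles triangle the base angles $\theta$ satisfy $\cos\theta = \frac{1/2}{L/2} = \frac{1}{L}$, so the minimal energy within this class is $2\tan\theta = 2\sqrt{L^2-1}$.)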
Since this is not anything close to what an elastic rod does, there must be some problem with the formulation. I advise reconsidering the choice of the energy function.
*I expect that for any curve with non-monotonic slope, one can find another curve of the same length but with lower energy, but I don't have a proof yet.
Electric birefringence (EB) of suspensions of anisotropic particles can be considered an electrokinetic phenomenon in a wide sense, as both liquid motions and polarization of the electrical double layer (EDL) of the particles participate in the process of particle orientation under the applied field. The EB spectrum can be exploited for obtaining information on the dimensions, average value and anisotropy of the surface conductivity of the particles, and concentration and Maxwell-Wagner polarization of the EDLs. It is thus a highly informative technique, applicable to non-spherical particles. In this paper, we investigate the birefringent response of plate-like montmorillonite particles as a function of the frequency and amplitude of the applied AC electric field, for different compositions (pH, ionic strength, particle concentration) of the suspensions. The transient electric birefringence (i.e., the decay of the refractive index anisotropy with time when the field is switched off) is used for estimating the average dimensions of the particle axes, by modeling it as an oblate spheroid. The obtained values are very similar to those deduced from electron microscopy determinations. The frequency spectra show a very distinct behaviour at low (on the order of a few Hz) and high (up to several MHz) frequencies: the $\alpha$ and Maxwell-Wagner-O'Konski relaxations, characteristic of EDLs, are detected at frequencies above 10 kHz, and they can be well explained using electrokinetic models for the polarization of EDLs. At low frequencies, in contrast, the birefringence changes to negative, an anomalous response meaning that the particles tend to orient with their symmetry axis parallel to the field. This anomaly is weaker at basic pHs, high ionic strengths and low concentrations. The results can be explained by considering the polydispersity of real samples: the fastest particles redistribute around the slowest ones, inducing a hydrodynamic torque opposite to that of the field, in close similarity with results previously described for mixtures of anisometric particles with small amounts of spherical nanoparticles. | CommonCrawl |
This category contains definitions related to Formal Systems.
Related results can be found in Category:Formal Systems.
A formal system is a formal language $\mathcal L$ together with a deductive apparatus for $\mathcal L$.
Masters student in Computational Chemistry at University of Copenhagen, writing about what I feel like. Enjoy the reading.
Now that we can easily provide DHCP, DNS and TFTP and a debian image for all the nodes, we want to make it easy to maintain the cluster and setup user management. For maintaining packages and configuration etc we use Puppet on Debian. So awesome!
NOTE: remember to add "puppet" and "puppetmaster" in /etc/hosts on the server, so dnsmasq can provide DNS! Otherwise puppet agent will not know where to connect.
A nice addition to the puppet service is the stdlib.
Puppet is then either run manually or by adding puppet to /etc/rc.local so it runs on every boot.
echo -n "Waiting for network."
So by the power of elimination I got put in charge of administration/setup of the local cluster system for the theoretical/computational chemistry department. The current system was completely outdated, and made it impossible to apt-get update/upgrade, so with the addition of 60+ nodes from another cluster it was up to me to save the system! Which practically means I had to set it up from scratch. And a lot of googling. So much googling.
First thing first, I wanted it easily maintainable and scalable. There is no way I wanted to install software manually on all the nodes, which means all installation and setup needs to be done automatically from the masternode (frontend).
This was done via PXE/TFTP booting, and installing of a netboot Debian image (with a few extra packages). After the Debian installation, package management and configuration of the nodes is done via Puppet.
To speed things up, the whole installation is done via a local apt-get mirror on the master node. This also ensures that all the packages are exactly the same version.
We want all the nodes to have the same packages installed, also on the frontend, for consistency. The way this is implemented is to have a local copy of the apt-get server. You will need apache for http requests.
The first thing to set up is the DHCP server on the frontend, and because we want to run a DNS server as well, the easiest service to set up is dnsmasq, instead of isc-dhcp etc.
after installation we configure the server with /etc/dnsmasq.conf.
append initrd=debian-installer/amd64/initrd.gz auto=true priority=critical url=http://192.168.
The preseed cfg is placed in the apache http folder so it can be loaded over the net. Remember to setup the mirror settings to use the local mirror on the frontend.
Next is setup of user management and network shared folders (home and opt).
Small guide on how to setup the QM software GAMESS on a normal Ubuntu computer and work in parallel with multiple nodes (via sockets). Loosely this is based on this guide on how to compile GAMESS.
I'm going to pretend that you are working on Ubuntu 12.04 LTS, but I'm sure you can relate it to whatever distribution you are working on.
1. Download the newest GAMESS.
and answer the questions. Answer them truthfully.
and everything will happen automatically. You can add the flag "-j4" if you have 4 CPU's and want the compiling to go a little faster. It takes a few minutes.
Now we just need to change the ./lked script and add some compiler flags. If you are compiling with gfortran then you need to find and update the following two lines in ./lked under the gfortran section.
and similarly, if you are using ifort to compile, edit the section in lked and set the correct compiler flags.
and then it should link alright.
To run GAMESS there is a run script included in the root folder, which needs to be updated in the beginning, for the scratch folder and GAMESS path. So, edit (using VIM, if you are awesome, EMACS, if you are not).
Included in GAMESS is a list of input files to be tested to see if the software is working as it should. This is also useful to run if you do changes to the source code. Go to the root of the GAMESS folder and write.
This will then output all the exams. If all is passed, then GAMESS should be working alright.
If you want to have GAMESS working with a cluster system then there are a few things to change. This is to get the parallelization with sockets to work.
You'll need to edit the part of rungms that checks the hostname of the current node. At present rungms checks if you are running calculations on any of the Iowa nodes (where GAMESS is developed), and you are probably not.
By default, rungms does not contain loadleveler default settings, so you will need to set up the following test.
If you don't use the $USERSCR output, you can set that to the local scratch folder as well.
Add a line like " $SYSTEM MWORDS=10 $END" in the input file, with MWORDS being how much memory you want to allocate (the value 10 is just an example).
Remember: Remember that you need 1 space before the sections on GAMESS input files.
And that's it. So simple and easy out-of-the-box. You should be running loads of QM calculations now.
I want to calculate the RMSD (Root-mean-square deviation) between two molecule structures in XYZ format. And after googling around I concluded that the easiest way to do it was to use pymol. However being a CLI user, I do not want to download the files and open up a GUI all the time, I just want a script that can do it via a terminal. Time for a little google and python project.
However this is without taking into account that the two molecules could be identical and only translated in space. To solve this we need to position the molecules in the same center and rotate one onto the other.
The problem is solved by first finding the centroid of both molecules and translating both molecules to the center of the coordinate system. Then we need an algorithm to align the molecules by rotation. For this I found the Kabsch algorithm from 1976.
The algorithm is nicely written out on wikipedia, so it was straightforward to implement (still took me a little time though). So I won't go into details of it here.
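As an illustration (not the exact script discussed in this post), a minimal NumPy sketch of the two steps - centering on the centroids and then the Kabsch rotation - where P and Q are N x 3 coordinate arrays:

    import numpy as np

    def kabsch_rmsd(P, Q):
        # center both coordinate sets on their centroids
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        # covariance matrix and its SVD
        H = P.T @ Q
        U, S, Vt = np.linalg.svd(H)
        # correct for a possible reflection so we get a proper rotation
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0, 1.0, d])
        R = Vt.T @ D @ U.T
        # rotate P onto Q and compute the RMSD
        P_rot = P @ R.T
        return np.sqrt(((P_rot - Q) ** 2).sum() / len(P))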
However it is clear that centering the molecules using their centroid could possibly not be the best way of finding the minimal RMSD between two vector sets. So +Lars Bratholm got the idea of using a fitting function, fitting the center of the molecules and using Kabsch to calculate a minimal RMSD.
The output will then be "pure rmsd", "rmsd after rotation" and "rmsd after fit".
Note: for a test case I calculated the rmsd for a molecule set with 140 atoms to be ~0.15, and when I did the same calculation in pymol I got 0.092. However pymol did state that it was cutting 20 atoms, but didn't state which, whereas my script takes all the atoms into account.
Good news and bad news. After working with a Fortran 77 file called mndod.F and mopac 7.1 (the first mopac Fortran 95 version) for a while, it seemed like a lot of subroutines were missing in this old file. So recently we wrote James Stewart (Mr. Mopac) asking for potential missing files (with the missing subroutines). James, luckily enough, instead of finding more files for us, found an updated version of the mndod.F file with the missing subroutines.
This was great, but it also meant I had to start over with respect to mapping and compiling of this file.
The mndod.f, for historical purposes, is the d-integrals from the old mndo-d method (created by Walter Thiel), and then converted to work with MOPAC from version 7+, for AM1-D and on.
To follow the naming convention of GAMESS file structure, this holy file will now be known as mpcintd.src (because, yeah, d-integrals).
So I started over with mapping (see figure above), fixing compile errors, updating and deleting common blocks. This would not be possible without the source from Mopac 7.1 (available on openmopac.net), because the new mndod.f file lacked a lot of comments and documentation, referencing unknown common blocks and subroutines.
This was a lot of work, but still faster than rewriting mopac 7.1, because this version of mopac uses interfaces and modules, and so a lot of subroutine headers would need to be written before I could compile it, let alone test it. No thanks.
After working a week's time on implementing the new code/subroutines, the result is now that I get the correct nuclear repulsion term, but I do not get the correct electronic energy. This results in a correct 'NUCLEAR ENERGY' (gamess output), but incorrect electronic energy, and therefore incorrect Heat of Formation.
Clearly on step 4, trying to make it work.
Stay tuned for the dramatic conclusion of implementation of PM6 in GAMESS!
Okay, so I'm still working on implementing PM6 integrals in GAMESS.
I got the source code from MOPAC 7.1 which includes d-integrals for the MNDO-D method (which is what Jimmy Stewart is using for PM6 in the newest MOPAC (hopefully), which originates from a program written by Walter Thiel).
So the strategy is simply to 'export' the subroutines / modules from MOPAC 7.1 needed to replicate the d-integrals in GAMESS (written in Fortran 90), and 'import' them into GAMESS-US.
Now, the semi-empirical part of GAMESS-US is actually based on an older version of MOPAC (written in Fortran 77) so the subroutines should be very similar to the code I'll be trying to import.
The first part of this mission is to map the relevant subroutines in both GAMESS and MOPAC. And hopefully I'll be able to see a pattern and merge the 'trees'.
Now I just need an idea for merging the two trees. Since Stewart based his d-integrals on code he got from Thiel, it seems like most of the subroutines are collected in a single file called mndod.F90 (fitting name, lol).
The highlighted area is only the first part of the problem though. After the fock matrix has been put together with the new and cool d-items the matrix needs to be solved and we need the fockd1 and fockd2 for that. They are conveniently also put in the same file with the rest of the subroutines.
Step 2: Integration: Make IF(PM6) and run the mndod code instead of gamess with pm6 parameters and more.
Step 4: Find out why it does not work and solve the problem.
Okay, so I'm working on implementing the semi-empirical method PM6 (by Jimmy "Mopac" Stewart) in GAMESS-US.
Note that the $\alpha$ parameter is now a di-atomic parameter unlike the mono-atomic parameter in PM3. Another parameter $x$ is also introduced, but that is 'pretty much it'. (There are also a Lennard-Jones term and a van der Waals term, but that is for another blog post.) The parameters are all located in the PM6 article, but Jimmy Stewart was kind enough to send his files including his implementation of the PM6 core equation and the list of all parameters. This saved me a lot of pointless typing time, so thanks!
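For reference, and going from memory of the PM6 paper rather than from the files mentioned above, the general pairwise core-core term is roughly of the form
$$E_n(A,B) = Z_A Z_B \langle s_A s_A | s_B s_B \rangle \left( 1 + x_{AB}\, e^{-\alpha_{AB}\left( R_{AB} + 0.0003\, R_{AB}^{6} \right)} \right),$$
with special-cased expressions for a few element pairs (e.g. O-H, N-H, C-C); check Stewart's 2007 PM6 paper for the exact details before relying on this.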
Energy difference $\Delta E$ is calculated from mopac energy minus the corresponding gamess energy.
Arrgghh! How am I going to implement a new method, when the already implemented methods vary this much from the original program?
Okay, so the problem was that the SE part of GAMESS was based on a very old version of MOPAC, and so we figured a lot of the energy deviation must originate from physical constants that were never updated. The MOPAC integrals use two physical constants to calculate the integrals in atomic units, namely the Bohr radius and the electron volt, so by using grep I found all the places where the constants/variables were defined (which was a lot!), and then updated them according to the constants defined on MOPAC's website, using a common block instead of a lot of local instances.
Okay, but is this better? Hell yeah! The total energy is clearly more stable compared to the MOPAC energy, which is the energy that matters most. The deviation in the nuclear and electronic energy looks very much linear, which hints at more constants needing to be updated. Note I have only updated the constants located in the MOPAC part of GAMESS, so this only affects the semi-empirical part of GAMESS.
However the effect is there, and even though the energy is working now, it will prove a problem for people who want to reproduce data already calculated with GAMESS. So be warned, GAMESS users: keep a copy of your GAMESS when the PM6 update is integrated in GAMESS-US.
The integration of the gradient was actually really easy, because GAMESS only uses numerical gradients for semi-empirical calculations.
Am I done? Unfortunately no. To get PM6 fully working I need to implement the d-integrals from MOPAC. As it is now only s- and p-orbitals are used for calculating the integrals. Is that easy? No.
tldr; PM6, PM3 and AM1 did not work as expected, which was partially fixed by updating physical constants in the semi-empirical part of GAMESS. PM6 energy and gradient now works up to including Ne, but will need d-orbitals before it is fully operational. | CommonCrawl |
How do binary options broker hedge themselves against losses?
In Delta of binary option, I do not see how to prove that the limit of $\partial C_t/\partial S_t$ is equal to $+\infty$ as $t \rightarrow T$. Can someone help?
Justification for Binary Option's Infinite Delta?
If I am short GBPJPY and it starts to jump up, instead of closing the position, could I use binary options to go long immediately after the jump? That way I could hedge my current Forex position if possible.
Is probability implied by binary FX options risk neutral or real world?
Does an implied volatility always exist for a binary option? | CommonCrawl |
I asked this question to my Biology teacher and he, in collaboration with a Chemistry teacher, couldn't find the answer. My question is the following: "What do the $\alpha$ and $\beta$ represent in the secondary structure of a protein? I am referring to the $\alpha$ helix and $\beta$ sheets in proteins." I had a similar question about glucose, and we concluded that it was related to the chemistry: the $\alpha$ or $\beta$ refers to the functional groups bonded to the carbon.
The answer is very simple. As described in the accepted answer to the related question about alpha-subunits vs alpha-helices the alpha- and beta- are arbitrary names. It could easily have been 1 and 2 or A and B based on ordering of letters or numbers; indeed, there are "type I" and "type II" turns.
(Emphasis added). In other words, he named the "α-form" first, and considered the "β-form" to be an uncoiled form of the first one. These names later became "α-helix" and "β-sheet". It could easily have been the other way around.
So if we followed a similar naming scheme, the alpha helix would be a $3.6_{13}$ helix. Admittedly this is a bit of a cumbersome name.
As well there are the alpha sheet - a sheet made of helical strands - and the beta helix - a helix made of strands. For both of these, the "alpha" and "beta" part of the name has been chosen because of the naming of the original structures they are related to.
Proteins with alpha helices alone and beta sheets alone? | CommonCrawl |
Basic Rules of "Ratio Sudoku"
The special clue-numbers in this Sudoku variant are fractions or ratios in the lowest terms, hence the name for the puzzle.
The clue-numbers are always placed on the border lines between selected pairs of neighbouring cells of the grid.
Each clue-number is the fraction of the two numbers in adjacent cells (to the left and right). Each fraction is written in its lowest terms, with the smaller number denoted as the numerator. Thus $1/2$ can stand for the following combinations of numbers in the two adjacent cells: $1$ and $2$, $2$ and $1$, $2$ and $4$, $4$ and $2$, $3$ and $6$, $6$ and $3$, $4$ and $8$, $8$ and $4$.
For example, if three fractions are placed on consecutive border lines in the order $2/3, 1/4$ and $1/2$, the combinations of answers in the four neighbouring cells would be $3, 2, 8$ and $4$, or $6, 4, 1$ and $2$.
The remaining rules are as in a "standard" Sudoku, and the object of the puzzle is to fill in the whole $9 \times 9$ grid with numbers $1$ through $9$ (one number per cell) so that each horizontal line, each vertical line, and each of the nine $3 \times 3$ squares (outlined with the bold lines) must contain all the nine different numbers $1$ through $9$.
Networks/Graph Theory. Logo. Visualising. Working systematically. PrimaryGames-Strategy. Combinatorics. Games. Ratio. Mathematical reasoning & proof. Interactivities. | CommonCrawl |
How do I find a tangent plane without a specified point?
I was having a problem finding the points on $z=3x^2 - 4y^2$ where vector $n=<3,2,2>$ is normal to the tangent plane.
How do we calculate the tangent plane equation without a specific point to calculate it at?
I also had an idea to take the cross product of $2$ vectors in the plane and somehow compare it to the $n$ vector but I don't know exactly how to do this. Thank you for any help!
The problem does not ask you to find a tangent plane! It asks you to find points where the normal vector is parallel to $\langle 3, 2, 2\rangle$. The normal vector at any point of $f(x,y,z)= \text{constant}$ is $\nabla f$. Here $f(x, y, z)= 3x^2- 4y^2- z$. Find $\nabla f$ and set it equal to $\langle 3k, 2k, 2k\rangle$ for some $k$.
The other answers already covered the basics: you don't need to find any tangent planes per se, but only points at which the normal to the surface is parallel to $n$. Since you're working in $\mathbb R^3$, you have a bit of a short cut available: two nonzero vectors are parallel iff their cross product vanishes. Thus, you can avoid introducing another variable by stating the condition in the problem as $\nabla F\times n=0$, where $F:(x,y,z)\mapsto 3x^2-4y^2-z$. This generates three equations (only two of which are independent) to solve together with the original implicit equation of the surface.
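To make the computation concrete, here is a quick sketch along the lines both answers describe (worth checking against your own working): with $F(x,y,z) = 3x^2 - 4y^2 - z$ we have $\nabla F = \langle 6x, -8y, -1\rangle$. Setting $\nabla F = k\,\langle 3,2,2\rangle$ gives $k = -\tfrac12$ from the third component, hence $x = -\tfrac14$ and $y = \tfrac18$, and then $z = 3x^2 - 4y^2 = \tfrac{3}{16} - \tfrac{1}{16} = \tfrac18$. So the point is $\left(-\tfrac14, \tfrac18, \tfrac18\right)$.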
How to find the tangent plane to a given point on a surface?
Find closest point on a plane to a given point. Discrepancy with normal vector.
Can skew lines share a normal vector? And how would I derive the equation for the new plane?
Find the plane which contains the lines.
Calculating the tangent plane to a surface.
How to find the plane with given 2 vectors? | CommonCrawl |
We consider the classification of BPS and non-BPS D-branes in orientifold models. In particular we construct all stable BPS and non-BPS D-branes in the Gimon-Polchinski (GP) and Dabholkar-Park-Blum-Zaffaroni (DPBZ) orientifolds and determine their stability regions in moduli space as well as decay products. We find several kinds of integrally and torsion charged non-BPS D-branes. Certain of these are found to have projective representations of the orientifold $\times$ GSO group on the Chan-Paton factors. It is found that the GP orientifold is not described by equivariant orthogonal K-theory as may have been at first expected. Instead a twisted version of this K-theory is expected to be relevant. | CommonCrawl |
Abstract. We consider a non-relativistic electron interacting with a classical magnetic field pointing along the $x_3$-axis and with a quantized electromagnetic field. The system is translation invariant in the $x_3$-direction and we consider the reduced Hamiltonian $H(P_3)$ associated with the total momentum $P_3$ along the $x_3$-axis. For a fixed momentum $P_3$ sufficiently small, we prove that $H(P_3)$ has a ground state in the Fock representation if and only if $E'(P_3)=0$, where $P_3 \mapsto E'(P_3)$ is the derivative of the map $P_3 \mapsto E(P_3) = \inf \sigma (H(P_3))$. If $E'(P_3) \neq 0$, we obtain the existence of a ground state in a non-Fock representation. This result holds for sufficiently small values of the coupling constant. | CommonCrawl |
Abstract: Let $G$ be a locally compact group, and take $p\in(1,\infty)$. We prove that the Banach left $L^1(G)$-module $L^p(G)$ is injective (if and) only if the group $G$ is amenable. Our proof uses the notion of multi-norms. We also develop the theory of multi-normed spaces. | CommonCrawl |
A matrix can be entered as A = matrix([[1, 2, 3], [4, 5, 6]]).
The matrix $A$ is a $2 \times 3$ matrix with entries in the integers.
The matrix $A$ below has entries in the rationals, QQ. We may replace QQ with RR (the floating point real numbers) or CC (the floating point complex numbers).
The number of rows (2) and columns (3) can be entered.
You can specify how many rows the matrix will have and provide one big grand list of entries, which will get chopped up, row by row, if you prefer.
The commands A.nrows() and A.ncols() will return the number of rows and columns of the matrix $A$, respectively.
The command A.base_ring() will return the ring or field for the entries in the matrix $A$.
Rows in the matrix $A$ are numbered 0 to 1, while columns are numbered 0 to 2. The command A[i,j] returns the entry in the $i$th row and $j$th column of the matrix $A$; for example, A[1,2] returns 6.
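Putting these commands together, a short Sage session along these lines (exact output formatting may vary slightly by version) would look like:

    sage: A = matrix(QQ, 2, 3, [1, 2, 3, 4, 5, 6])
    sage: A.nrows(), A.ncols()
    (2, 3)
    sage: A.base_ring()
    Rational Field
    sage: A[1, 2]
    6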
Primary Tags: Linear algebra: Matrices.
Secondary Tags: Matrices: Matrix basics. | CommonCrawl |
Title: "What Big Data can tell you - and why it matters"
Title: "Bayes and Big Data"
Title: "What's new with the GPU? Programmable parallel processing for future computing"
Title: "Peer effects and global treatments: Design and analysis of experiments in networks"
Title: "Relative fixed-width stopping rules for Markov chain Monte Carlo simulations"
Title: "Modelfree prediction intervals for regression and autoregression"
Title: "Response Prediction for Computational Advertising Applications"
Title: "Nonconvex methods for high-dimensional regression with noisy and missing data"
Title: "Low-dimensionality in mathematical signal processing"
Title: "Statistical and computational tradeoffs in high-dimensional learning"
Title: "Estimation over multiple undirected graphs"
Title: "New Algorithms for M-Estimation of Multivariate Location and Scatter "
Title: "A Semiparametric View to Dimension Reduction: Estimation, Inference and Efficiency "
Title: "Before Statistics on networks"
Title: "The geometry of needlets excursion sets".
Title: "Limiting spectral distribution of normalized sample autocovariance matrices of linear time series when $p,n \to \infty$ such that $p/n \to 0$."
Title: "An $L^2$ Test Theory for Nonstationary Time Series"
Joint Stat/Biostat seminar Title: "Using Animal Instincts to Find Efficient Experimental Designs"
Title: "Statistical inference for fractional diffusion processes"
Title: "A statistical perspective on the validation and uncertainty quantification of computational models" | CommonCrawl |
A system of $n$ screw dislocations in an isotropic crystal undergoing antiplane shear is studied in the framework of linear elasticity. Imposing a suitable boundary condition for the strain, namely requesting the non-vanishing of its boundary integral, results in a confinement effect. More precisely, in the presence of an external strain with circulation equal to $n$ times the lattice spacing, it is energetically convenient to have $n$ distinct dislocations lying inside the crystal. The result is obtained by formulating the problem via the core radius approach and by studying the asymptotics as the core size vanishes. An iterative scheme is devised to prove the main result. This work sets the basis for studying the upscaling problem, i.e., the limit as $n\to\infty$, which is treated in [17].
A directional repeater system with simple architecture composed of wideband receive and transmit antennas, a high gain low-power amplifier, and a bandpass filter is proposed for the 2450-MHz ISM band. The repeater system does not require down-conversion and up-conversion with frequency translation and does not rely on any operational protocol to manage users. As a result, the repeater (RF booster) can support all users, channels, modulation themes, and devices simultaneously. Close to 70 dB of isolation between the Tx and Rx antennas over the entire band (~100 MHz) is achieved for a compact structure with lateral dimensions of 8 cm $\times $ 12 cm and thickness of 1 cm. The isolation between the Tx and Rx antennas is achieved using polarization mismatch between orthogonal double-stack patch antennas and a novel two-element Rx antenna whose elements are appropriately located with respect to the Tx antenna to cancel the signal leakage from the Tx antenna to the Rx antennas. Two types of repeaters, one utilizing only the ground plane and polarization mismatch, and the second using the two-element Rx antenna in addition to polarization mismatch, are fabricated and tested in different environments. It is shown that more than 30 dB improvement in coverage can be achieved. | CommonCrawl |
Hurricanes can cause a lot of destruction when they pass over land. Given the path that a hurricane takes and the locations of a number of cities, find which city is in the most danger of damage. That is, the city that is closest to the path the hurricane takes. Assume that each hurricane travels in a straight line, and that it has been traveling for a long time and will continue to do so for a long time.
The first line of input contains an integer $1 \le n \le 500$ indicating the number of test cases that follow. Each case starts with a line containing four integers $x_1~ y_1~ x_2~ y_2$ indicating the coordinates of the two different locations the hurricane was already spotted. The second line contains an integer $1 \le m \le 100$ indicating the number of cities for that case. The following $m$ lines each contain a unique city name ($1$ to $20$ characters from a–z) and two integers indicating the $x$ and $y$ coordinates of that city. All coordinates are in the range $[0, 1\, 000]$.
For each set of cities, print the name of the city that is in the greatest danger. If more than one city is considered in greatest danger, print them all separated by spaces in the order they appear in the input. | CommonCrawl |
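Since each hurricane line is infinite in both directions, the geometric core of a solution is just the perpendicular distance from each city to the line through the two sighting points. A minimal Python sketch of that computation (variable names are illustrative) could look like:

    def dist_to_line(x1, y1, x2, y2, px, py):
        # perpendicular distance from (px, py) to the infinite line
        # through (x1, y1) and (x2, y2): |cross product| / |direction|
        dx, dy = x2 - x1, y2 - y1
        return abs(dy * (px - x1) - dx * (py - y1)) / (dx * dx + dy * dy) ** 0.5

The rest is bookkeeping: compute this for every city and print all cities that attain the minimum distance, in input order, since ties must all be reported.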
Which one is correct, $x_i$s or $x_i$'s?
The $x_i$s form a sequence.
The $x_i$'s form a sequence.
I prefer adding a noun. You can decline the noun and remind the reader of the type of the object in question. For example: "The numbers/points $x_i$ form a sequence." "One of the numbers $x_i$ has to be an integer."
Neither. The $x_i$ form a sequence.
Should you use one of these, then according to Oxford Dictionaries Online you should only use an apostrophe "for the sake of clarity", therefore opting for the first option ($x_i$s).
There are cases, like in statistics, where you can use the plural for a variable such as the p value. You would then write ps to show the use of multiple p values.
Is there a reasonable way to refer to a 23 page article with 28 authors?
Where to break paragraphs in a proof?
When do you use "s" apostrophe to refer to authors ($e.g.$ of inequalities)? | CommonCrawl |
Well, here are the caterpillars!
Our caterpillars have numbers on each little part - numbers $1, 2, 3, 4, \ldots$ up to $16$.
You can see their pale blue head, and their body bending at right angles so that each part is lying in a square.
When you've discovered many new ones, with the shapes - and therefore the numbers - all showing differently, you could compare them. What is the same and what is different? Can you explain why?
You could try to put the caterpillars in shapes that are not squares!
a) Choose one of the caterpillars and, using the numbers and the way that they are arranged, explore the patterns and relationships you can find.
b) Then let that caterpillar grow nine more parts so that it becomes a $25$ caterpillar with the shape bending in just the same way. Explore those patterns and relations.
c) Finally, compare the two different groups of things you've discovered in a) and in b).
Do tell us about all the things you find out.
Patterned numbers. Investigations. Comparing and Ordering numbers. Practical Activity. Addition & subtraction. Combinations. Games. Multiplication & division. Working systematically. Trial and improvement. | CommonCrawl |
Dijkstra's Algorithm and Single-Source Shortest Paths. Suppose we want to record the shortest paths from some source to every single other vertex (so that we can rapidly find a route from s to X, from s to Y, and so forth). We already know how to do this if we're only counting the number of edges: we just use BFS.
But if edges have weights (representing, for example road lengths), we have to do something else. It turns out that even considering edge weights, we can preprocess the shortest route from the source to every vertex very efficiently. We store the answer as a "shortest paths tree". Typically, a shortest paths tree is stored as an array of edgeTo values (and optionally distTo values if we want a constant time distTo() operation).
Runtime is $O(V \times \log V + V \times \log V + E \times \log V)$, and since $E \gt V$ for any graph we'd run Dijkstra's algorithm on, this can be written as more simply O(E log V). See slides for runtime description.
A* Single-Target Shortest Paths. If we need only the path to a single target, then Dijkstra's is inefficient as it explores many many edges that we don't care about (e.g. when routing from Denver to NYC, we'd explore everything within more than a thousand miles in all directions before reaching NYC).
To fix this, we make a very minor change to Dijkstra's, where instead of visiting vertices in order of distance from the source, we visit them in order of distance from the source + h(v), where h(v) is some heuristic.
It turns out (but we did not prove), that as long as h(v) is less than the true distance from s to v, then the result of A* will always be correct.
Note: In the version in class, we did not use an explicit 'mark'. Instead, we tossed everything in the PQ, and we effectively considered a vertex marked if it had been removed from the PQ.
Suppose we have an edgeTo and a distTo array. How would we implement a public int distTo(int w) method? How would we implement a public Iterable<Integer> pathTo(int w) method, where distTo returns the length of the shortest path and pathTo returns an Iterable of vertices in which the 0th element is the source, the 1st element is the next item from the source towards w, and so forth? (One possible sketch appears after the problem list below.)
Problem 4 from Princeton's Fall 2009 final.
True or false: Adding a constant to every edge weight does not change the solution to the single-source shortest-paths problem.
True or false: Multiplying a positive constant to every edge weight does not change the solution to the single-source shortest-paths problem.
Problem 1c and 1d from my Spring 2016 final.
Problem 9 from my Spring 2015 final.
(great problem) Problem 4 from Princeton's Fall 2011 final. Note that when the exam says to "relax a vertex", that means to relax all of a vertex's edges. A similar problem is given as #6 on Princeton's Spring 2012 final.
Problem 5 from Princeton's Fall 2012 final.
Adapted from Algorithms 4.4.25: Given a digraph with positive edge weights, and two distinguished subsets of vertices S and T, find a shortest path from any vertex in S to any vertex in T. Your algorithm should run in time proportional to E log V, in the worst case.
Problem 5 from my Spring 2013 final.
Describe a family of graphs with V vertices and E edges for which the worst-case running time of Dijkstra's algorithm is achieved.
Iterative DFS: Problem 6 from this semester's discussion worksheet provides a flawed implementation of DFS.
Adapted from Algorithms 4.4.34. Give an algorithm to solve the following problem: Given a weighted digraph, find a monotonic shortest path from s to every other vertex. A path is monotonic if the weight of every edge on the path is either strictly increasing or strictly decreasing. The path should be simple (no repeated vertices).
Adapted from Algorithms 4.4.37. Develop an algorithm for finding an edge whose removal causes maximal increase in the shortest-paths length from one given vertex to another given vertex in a given edge-weighted digraph.
Problem 12 from Princeton's Fall 2013 final: http://www.cs.princeton.edu/courses/archive/fall13/cos226/exams/fin-f13.pdf.
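Returning to the edgeTo/distTo question at the top of this problem list, here is one possible sketch, written in Python rather than Java, assuming edgeTo[v] stores the previous vertex on the shortest path to v and s is the source:

    def dist_to(dist_to_arr, w):
        # shortest-path length from s to w, already computed by Dijkstra's
        return dist_to_arr[w]

    def path_to(edge_to, s, w):
        # walk backwards from w to the source using edgeTo, then reverse
        path = []
        v = w
        while v != s:
            path.append(v)
            v = edge_to[v]
        path.append(s)
        return path[::-1]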
Charlie wants to know how many factors 360 has.
How would you work it out?
Click below to see what Alison did.
"I can stop there, because the next factor would be 20 and I've already got that. So there are 24 factors."
Charlie thought about it in a different way. Click below to see what he did.
Charlie started by working out the prime factorisation of 360.
So $360 = 2^3 \times 3^2 \times 5$.
Then he used a table to find all the possible combinations of the prime factors.
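Charlie's table is really just the divisor-counting formula in disguise: if $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$, then each factor corresponds to an independent choice of exponent for each prime, so the number of factors is $(a_1+1)(a_2+1)\cdots(a_k+1)$. For $360 = 2^3 \times 3^2 \times 5$ this gives $4 \times 3 \times 2 = 24$, matching Alison's count.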
When she saw Charlie's method, Alison said "There must be lots of numbers which have exactly 24 factors!"
Charlie and Alison think all of these numbers have exactly 24 factors. Can you see why?
How can I find a number with exactly 14 factors?
How can I find the smallest such number?
How can I find a number with exactly 15 factors?
How can I find a number with exactly 18 factors?
Which numbers have an odd number of factors?
What is the smallest number with exactly 100 factors?
Which number less than 1000 has the most factors?
Creating and manipulating expressions and formulae. Mathematical reasoning & proof. Properties of numbers. Multiplication & division. Modular arithmetic. Divisibility. Common factors. Factors and multiples. Integers. Working systematically. | CommonCrawl |
$ min \langle c, x \rangle$ s.t. $Ax \geq b, x \geq 0$, where $x$ is an integer and $A$ has integer entries. Show the optimal value of this is no less than the optimal value of this maximum: $max \langle b^* , \lambda \rangle$ s.t. $A^T\lambda \leq c, \lambda \geq 0$, where $b^*$ denotes the roundup of vector $b$: the smallest integer vector greater than or equal to $b$.
I tried doing this using the dual problem, i.e. $L = \langle c, x \rangle + \langle b-Ax, \lambda \rangle = \langle c -A^T\lambda, x \rangle + \langle b , \lambda \rangle$, so the minimum of this w.r.t. $x \geq 0$ is $-\infty$ if some component of $c - A^T\lambda$ is negative, and $\langle b , \lambda \rangle$ if $A^T\lambda \leq c$. So the new problem becomes to maximize w.r.t. $\lambda$, i.e. $\max \langle b , \lambda \rangle$ s.t. $A^T\lambda \leq c, \lambda \geq 0$, which is obviously less than or equal to the max (with $b^*$) that we were trying to show is a lower bound.
What am I doing wrong? I tried using the dual problem to solve the original max, but also got nowhere.
How can I find the dual problem to the following convex problem?
Where does the linear term in the dual of quadratic program go?
Is there any way to make the following function convex?
How to obtain the dual of a Lagrangian? | CommonCrawl |
To solve complicated variable equations such as those with variables on both sides of the equal sign, remember what you know about the balance scale – you must perform the same operation on both sides of the scale in order to stay balanced.
There are many steps to solve multi-step equations, hence the name. Step 1, if parentheses are shown in the equation, you may need to use the Distributive Property to simplify. Step 2, combine like terms on each side of the equal sign.
Like terms can be constants, variables alone, or variables with coefficients. Constants are numbers alone such as 3 or 1.5. Coefficients are numbers that are attached to a variable as a result of multiplication or division such as 2x or x/4.
Step 3, use the inverse operations (opposite operations) of addition or subtraction to move constants to one side of the equal sign and variables to the other. Use common sense, and do this in the most efficient manner. Next is Step 4, use inverse operations of multiplication or division to isolate the variable and calculate the answer. It's always a good idea to check your work by substituting the solution for the variable back into the original equation and using PEMDAS to simplify.
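As a quick illustration of these steps on a made-up example: to solve $2(x+3)+4 = x+14$, distribute to get $2x+6+4 = x+14$, combine like terms to get $2x+10 = x+14$, subtract $x$ and $10$ from both sides to get $x = 4$, and check by substituting: $2(4+3)+4 = 18 = 4+14$.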
Determine how many Grey Borings Emilio needs to exchange to buy one more Rainbow Blinkey.
Decide which steps are needed to solve the equation $3\times (2x-5)+10=4x+9$.
Find the right equations for the fish on the scale.
Decide when the numbers of inhabitants match.
Evaluate how we can solve an equation. | CommonCrawl |
Abstract: A numerical study of Anderson transition on random regular graphs (RRG) with diagonal disorder is performed. The problem can be described as a tight-binding model on a lattice with N sites that is locally a tree with constant connectivity. In certain sense, the RRG ensemble can be seen as infinite-dimensional ($d\to\infty$) cousin of Anderson model in d dimensions. We focus on the delocalized side of the transition and stress the importance of finite-size effects. We show that the data can be interpreted in terms of the finite-size crossover from small ($N\ll N_c$) to large ($N\gg N_c$) system, where $N_c$ is the correlation volume diverging exponentially at the transition. A distinct feature of this crossover is a nonmonotonicity of the spectral and wavefunction statistics, which is related to properties of the critical phase in the studied model and renders the finite-size analysis highly non-trivial. Our results support an analytical prediction that states in the delocalized phase (and at $N\gg N_c$) are ergodic in the sense that their inverse participation ratio scales as $1/N$. | CommonCrawl |
Dopamine (DA) agonist-induced behavioral supersensitivity in the adult rat has served as the standard model for certain of the motor and behavioral side effects associated with long-term exposure to DA agonists in humans. The mechanisms relating receptor events with behavior mediation, however, remain unclear. The striatum of rats progresses through a prolonged and varied postnatal developmental period. In order to examine the relative contribution of D1 and D2 receptor-mediated mechanisms to behavioral changes which follow chronic dopamine agonist exposure, developing rats were treated daily from birth with a D1 agonist, SKF 38393 hydrochloride (3.0 mg/kg $\times$ 32d, i.p.), or a D2 agonist, LY 171555 hydrochloride (3.0 mg/kg $\times$ 32d, i.p.), and/or 6-OHDA (134 $\mu$g, i.c.v., at 3 d after birth). Following a drug-free interval, behavioral responses to selective DA agonists were evaluated. The results indicate that (1) prolonged LY 171555 treatments in development produced a supersensitive animal model for yawning and eating behaviors. (2) Perioral movements of high frequency could be produced by a very low dose of the DA D2 antagonist spiroperidol in rats treated neonatally with 6-OHDA, thereby providing a useful animal model to study tardive dyskinesia. (3) The "priming" phenomenon described by Breese and co-workers which was thought to be produced by D1 agonists only has been found in this study to be produced by a D2 agonist as well. This model provides a means for studying specific stereotypic behaviors in animals. (4) (3H) SCH 23390 and (3H) spiroperidol binding to striatal tissue was not altered in rats treated in development with specific agonists or antagonists for the D1 and D2 receptors. A neonatal 6-OHDA lesion did not modify binding in any of the agonist- or antagonist-treated groups. In conclusion, DA D1 and D2 agonist treatments during postnatal development are effective means of producing new animal models that are potentially useful for studying clinical disorders in man. | CommonCrawl |
Combinations. Games. Selecting and using information. Visualising. Interactivities. Working systematically. Optimisation. Addition & subtraction. Logic. Resilient.
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
A collection of resources to support work on Factors and Multiples at Secondary level.
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku. | CommonCrawl |
talks.cam : Lower semicontinuity and relaxation of nonlocal $L^\infty$ functionals.
We consider variational problems involving nonlocal supremal functionals, i.e.
$L^\infty(\Omega;\mathbb{R}^m) \ni u \mapsto \operatorname{ess\,sup}_{(x,y)\in\Omega\times\Omega} W(u(x), u(y))$, with $\Omega\subset\mathbb{R}^n$ a bounded, open set and a suitable function $W : \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$.
$W$ allows us to generalize the results to the vectorial setting $m > 1$.
$R \to S$ maps a regular system of parameters of $R$ to part of a regular system of parameters of $S$.
In particular $R \to S$ is flat (see Algebra, Lemma 10.127.2) with regular fibre $S/\mathfrak m_ R S$ (see Algebra, Lemma 10.105.3).
The higher $\alpha$ is, the harder it is to find a solution. I tried both simulated annealing and Jacobian-Free Newton-Krylov numerical methods, without much success. I believe an issue here might be the way I state the problem.
Should JFNK or SA compute a perturbation/update on each matrix element separately? It does not sound reasonable to me, as I am dealing with timeseries, each column of $X$ being a timeseries of length $N$; I guess the perturbation should be flexible yet consistent from one element to the next. Ha! A bit of insight or advice would be very much appreciated here!
What is the definition of continuity of set-valued functions?
According to the wiki of Kakutani's fixed-point theorem, A set-valued mapping $\varphi$ from a topological space $X$ into a powerset $\wp(Y)$ called upper semi-continuous if for every open set $W \subseteq Y$, $\lbrace x| \varphi(x) \subseteq W \rbrace$ is an open set in $X$.
What is the definition of continuity of a multi valued map $\varphi$?
What's the definition of open sets in $\wp(Y)$, in other words, what topology does $\wp(Y)$ have?
$\phi$ is upper semicontinuous if, for every open $W\subset Y$, the set $\lbrace x | \phi(x)\subset W\rbrace $ is open in $X$.
$\phi$ is lower semicontinuous if, for every open $W\subset Y$, the set $\lbrace x | \phi(x)\cap W\neq \emptyset\rbrace$ is open in $X$.
$\phi$ is continuous if it is both upper semincontinuous and lower semicontinuous.
The definition quoted is an "order" notion of upper semicontinuous, not a "topology" notion. For real-valued functions, the two coincide. But in other settings you can have one but not the other.
One sensible way of generalizing continuity to set-valued functions (from $X$ to subsets of $Y$) is to require the graph of the function to be closed in the product $X\times Y$. This would be equivalent to the continuity of the function if $Y$ is compact. Thus, the Heaviside function is not continuous because one of the points 0 or 1 on the $y$-axis is not in the graph, but if one redefines it to take both values at 0, the graph becomes closed subset of the plane. See http://en.wikipedia.org/wiki/Closed_graph_theorem for a related (but different) notion.
What is the pure intuition for topological continuity and topology?
Non trivial definition of bicontinuous functions and the ring of all bicontinuous functions.
The set of Upper semi-continuous functions as a ring.
Is the set of entire functions Borel in the space of analytic functions? | CommonCrawl |
Let $V$ be a finite dimensional vector space over the finite field $F_q$ with basis $e_1,\dots, e_n$. Suppose $x_1,\dots, x_n$ is the dual basis of the dual vector space $V^*$. Let $G \le GL(V)$ and consider the polynomial ring $F_q[x_1,\dots, x_n]$ in the $n$ indeterminates. Invariant theory over finite fields is a branch of abstract algebra. The theory deals with those elements of $F_q[x_1,\dots, x_n]$ which do not change under the action of the group $G$. These elements form a ring structure which is called the ring of invariants of the group $G$. In this talk I will present a brief summary of my PhD thesis, which is concerned with the polynomial invariants of the finite group $G$.
When the switch is closed, what is the total amount of charge that flows from Y to X?
Initially, when the switch is closed, the branch with the capacitors will act as zero resistance.
So why would any current (charge) flow through the wire containing the resistors?
Hence there should be no charge flowing in wire XY.
In both cases (switch open/closed), in the steady state, current flows only through the lower branch, and none through the upper branch. There is no electrical connection through the capacitors. So the 9V emf of the cell is divided between the 2 resistors in the ratio of 3:6 - ie 3V and 6V.
The difference between the two cases is that when the switch is closed the potential at X must be the same as at Y, so that the PDs across the capacitors are 3V and 6V respectively. Whereas when the switch is open the potential at X is not constrained, the only constraint is that the total PD across the two capacitors in series is 9V.
The charges on the capacitors are different in the 2 cases, and this difference in charge has to come through the connection XY. So it is simply a question of calculating the charges on the two capacitors in each case, then finding the difference.
When switch S is open the 2 capacitors are in series. The total capacitance is $C=C_1C_2/(C_1+C_2)=18/9=2\mu F$. The total charge is $Q=CV=2\mu F \times 9V=18\mu C$. The charges are the same on each capacitor because the two inner plates are connected and isolated : the total charge on these two plates was initially zero (before the cell was connected), and there is no path by which charge can get onto or leave them (because switch S is open). The charges on the plates in $\mu C$ are are shown in the upper half of the diagram below.
When switch S is closed the potential across each capacitor is the same as across the corresponding resistor, ie 3V and 6V. So the charges are $3\mu F \times 3V=9\mu C$ and $6\mu F \times 6V=36\mu C$. The charges on each plate are now as shown in the lower half of the diagram below.
Initially (S open) the total charge on the inner plates was zero. (It was not actually necessary to calculate them individually to know that the sum would be zero.) Finally (S closed) the total charge on the inner plates is $-9+36=+27\mu C$. This charge must have come through the switch, ie from Y to X.
Thanks, I got it. I made a silly mistake.
I wrote this recursive function to print a number sequence from x to n and back from n to x.
I want to know if this way of solving the problem is good and how I could improve the code.
If you print a sequence of numbers from $x$ through $n$ in increasing order, and then the same sequence in decreasing order, you'll get a row of numbers which starts and ends with $x$ and contains an analogous double sequence of the numbers $x+1\ldots n$ in between, right?
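The recursive structure that falls out of that observation is tiny; a rough sketch of the same idea, written in Python rather than C++:

    def print_sequence(x, n):
        # prints x ... n ... x by printing x, recursing on x+1..n, then printing x again
        print(x, end=" ")
        if x < n:
            print_sequence(x + 1, n)
            print(x, end=" ")

    print_sequence(3, 6)  # 3 4 5 6 5 4 3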
Importing all names of a namespace is a bad habit to get into, and can cause surprise when names like begin and size are in the global namespace. Get used to using the namespace prefix (std is intentionally very short), or importing just the names you need into the smallest reasonable scope.
The exceptions to this rule are namespaces explicitly intended to be imported wholesale, such as the std::literals namespaces.
The value we return from the recursive function is only ever used for our main()'s exit status. It ultimately returns n, but what we want from main() is 0 for success and non-zero for failure (small positive values work best). So we always end up reporting failure, except when n is zero.
In our case, we don't have any failures we can report, so printNumberSequence() should return void, and main() can always return 0 - we can do that explicitly, or we can just allow execution to run off the end of main() (note that no other function is allowed to do that, though).
The pressure is 23.5 atm. This is much too high for a garden hose. For comparison, a stove top pressure cooker has a pressure of about 15 atm.
The volume flow rate is too high.
$N_R = 5.28 \times 10^6 \gg 3000$. The flow is turbulent.
This is College Physics Answers with Shaun Dychko. Poiseuille's Law says that the volume flow rate through a tube equals the difference in pressure between the two ends times Pi times the radius of the tube to the fourth power, divided by eight times the viscosity of the fluid flowing through it times the length of the tube. Now, P2 is what we're trying to find. P1 is atmospheric pressure, the pressure at the mouth of the hose. And so we'll multiply both sides by eight eta l over Pi r to the fourth, then also add P1 to both sides, and we solve for P2: eight eta l Q over Pi r to the fourth plus P1, P1 being atmospheric pressure. So we have eight times the viscosity of 1.005 times ten to the minus three Pascal seconds, times the garden hose length of 23 meters, times the volume flow rate of 50 liters per second converted into cubic meters per second by multiplying by one cubic meter for every 1000 liters. And we divide that by Pi times the radius converted into meters to the power of four, plus atmospheric pressure of 1.01 times ten to the five Pascals. And we get 2.37 times ten to the six Pascals, which must be the pressure at the beginning of the hose, at the faucet.

Now, part b asks us to make sense of this number. Is it reasonable or not? Well, it's divided by atmospheric pressure and we see that this pressure is 23 and a half atmospheres, and that is too high for a garden hose. For comparison, consider that a stove top pressure cooker has a pressure of about 15 atmospheres, and it's made out of steel, which is really strong. Whereas a garden hose is a flexible tube and certainly could not sustain 15 atmospheres indefinitely, and would not sustain 23 atmospheres either. And this is brought about by the volume flow rate being too high.

Now, part d asks us to calculate the Reynolds number. The Reynolds number is two times the density of the fluid times its speed through the tube times the radius of the tube, divided by the viscosity of the fluid. We can figure out V by using the formula for the volume flow rate being the cross-sectional area of the tube times V. The cross-sectional area is Pi r squared, and we can divide both sides by Pi r squared to solve for V, so it's Q over Pi r squared. We'll substitute that in for V here, and so the Reynolds number is going to be two rho Q over Pi r eta. That's two times the density of water times its volume flow rate in cubic meters per second, divided by Pi times the radius of the garden hose times the viscosity. And this works out to 5.28 times ten to the six, which is much, much larger than 3000. So the flow is definitely going to be turbulent, and because it's so turbulent, this formula up here doesn't even apply actually, because it is meant for laminar flow.
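As a quick numerical check of the arithmetic in the transcript, here is a minimal Python sketch; the hose radius is not stated in this excerpt, so the 0.006 m used below is an assumed value that reproduces the quoted answers:

    from math import pi

    eta = 1.005e-3   # viscosity of water, Pa*s
    L   = 23.0       # hose length, m
    Q   = 50e-3      # 50 L/s converted to m^3/s
    r   = 0.006      # assumed hose radius, m (not given in this excerpt)
    P1  = 1.01e5     # atmospheric pressure, Pa
    rho = 1000.0     # density of water, kg/m^3

    P2  = 8 * eta * L * Q / (pi * r**4) + P1   # Poiseuille's law solved for upstream pressure
    N_R = 2 * rho * Q / (pi * r * eta)         # Reynolds number with v = Q / (pi r^2)

    print(P2)    # about 2.37e6 Pa, i.e. roughly 23.5 atm
    print(N_R)   # about 5.3e6, far above 3000, so turbulent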
Group isomorphism $h:(\mathbb R,+)\to (\mathbb R^+,\times)$ that is not an exponential function.
Question of whether two given spaces are homeomorphic.
Sum of a nilpotent element and a zero divisor in a commutative ring $A$.
Can we extend a linearly independent list in a f.g. module of a commutative ring?
+8 "Paradox" with the Axiom of Dependent Choice.
+5 Prove that Sequentially Compact Metric Spaces are Lindelöf without the Axiom of Choice.
+40 Group isomorphism $h:(\mathbb R,+)\to (\mathbb R^+,\times)$ that is not an exponential function.
+5 What is a good route for a math student to self study computer science systematically and efficiently? | CommonCrawl |
Specifically, this is about Tables III-IV. I have downloaded the average return data that one can obtain, in the HML and SMB setting. This means I have the "Small", "Medium" and "Big" firms' (size) average monthly returns for 1926-2017, as well as the "Low", "Medium" and "High" average returns for the BE/ME sort of the firms.
[...] We form 9 portfolios (S/L, S/M, S/H, M/L, M/M, M/H, B/L, B/M and B/H) as the intersections of the three size and the three BE/ME groups. The 9 portfolios are each subdivided into three portfolios (Lh, Mh or Hh) using pre-formation HML slopes. These slopes are estimated with five years (three years minimum) of monthly returns ending in December of year t-1.
I don't understand how I should construct these 9 portfolios, as well as the "pre-formation" HML slopes. I am aware that the slope of the HML is its own factor coefficient (such as beta, in the CAPM) but what does it mean in this context?
[...] isolate variation in the HML risk loading that is independent of BE/ME.
Collect book common equity (BE) and market valuation (ME) for stocks, excluding financial, transportation and public firms. ME is measured as the number of shares outstanding with the stock price at the end of December. BE is measured at the fiscal year ending.
Calculate cross-sectional breakpoints (33% and 67% percentile) for size (ME) and book-to-market ratio (BE/ME). The size breakpoint for year $t$ is the median NYSE market equity at the end of June of year $t$. BE/ME for June of year $t$ is the book equity for the last fiscal year end in $t−1$ divided by ME for December of year $t−1$.
Sort each stock independently by BE/ME and ME. This results in the mentioned nine portfolios.
Calculate the value-weighted return for each portfolio from July of year $t$ to end of June in year $t+1$.
The regression is the standard Fama-French three-factor time-series regression, $R_{i,t} - R_{f,t} = \alpha_i + b_i\,(R_{M,t} - R_{f,t}) + s_i\,SMB_t + h_i\,HML_t + \epsilon_{i,t}$, where $R_i$ is the monthly value-weighted portfolio return, $R_f$ the risk-free rate of return, $R_M$ the market return, $SMB$ the return of the size-factor mimicking portfolio and equivalently $HML$ for the value-factor. The regression is run using monthly return data over the previous five years (minimum of three years), ending in December of year $t-1$, i.e. if you form the portfolios at the end of June of year $t$, the regression uses data from January of year $t-5$ to December of year $t-1$.
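A minimal sketch of this slope estimation in Python with statsmodels, assuming a pandas DataFrame df of monthly observations with hypothetical column names, and run once for each stock or portfolio whose loadings you need:

    import statsmodels.api as sm

    # df: monthly data with columns 'ret', 'rf', 'mkt_rf', 'smb', 'hml' (names are illustrative)
    y = df['ret'] - df['rf']
    X = sm.add_constant(df[['mkt_rf', 'smb', 'hml']])
    fit = sm.OLS(y, X).fit()
    b_i, s_i, h_i = fit.params['mkt_rf'], fit.params['smb'], fit.params['hml']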
The regression results in estimates for the factor-loadings $b_i$, $s_i$ and $h_i$ (and the intercept $\alpha$). What follows is that you split up all nine portfolios into three subsamples, which results in a total of 27 portfolios. So for each of the 9 portfolios, you sort each stock within the portfolio by the variable $h_i$ from the above regression. Just look at the monotonically increasing values for $h_i$ in Table 3, which are the breakpoints for sorting. For each of the 27 portfolios you finally have to calculate the value-weighted return.
Daniel/Titman (1997) subdivide the 9 size / book-to-market portfolios into five portfolios by $h_i$, which results in a total of 45 portfolios. The problem is that some of the 45 portfolios only contain a single or very few stocks, i.e. the "portfolio" is not diversified. That's the reason for only subdividing into 3 further sorts.
I am aware that the slope of the HML is its own factor coefficient (such as beta, in the CAPM) but what does it mean in this context?
HML is measured as the difference between the return of a portfolio of high BE/ME stocks and the return of a portfolio of low BE/ME stocks, constructed to be neutral with respect to size. The HML portfolio is generally based on the entire stock market, but your 9 size and book-to-market portfolios (for which you are running the above regression) are much smaller subsamples. The slope just measures the portfolio-specific exposure to the HML portfolio. As you can see from the values of $h_i$ (and the t-values), the portfolios are far from having "its own factor coefficient".
Bali/Engle/Murray (2016), Empirical asset pricing: the cross section of stock returns, John Wiley & Sons, 1. ed.
Are the Fama-French factor portfolios calculated based on absolute or relative value`? | CommonCrawl |
This function appends a child to an existing list of children or creates a new list of children.
The node being appended. It can be created with e.g. DomDocument->create_element, DomDocument->create_text_node etc. or simply by using any other node.
You can not append a DOMAttribute using this method. Use DomElement->set_attribute instead.
Returns the appended node on success, or FALSE on failure.
4.3.0 You are not allowed anymore to insert a node from another document.
4.3.0 Prior to PHP 4.3.0, the new child is duplicated before being appended. Therefore the new child is a completely new copy which can be modified without changing the node which was passed to this function. If the node passed has children itself, they will be duplicated as well, which makes it quite easy to duplicate large parts of an XML document. The return value is the appended child. If you plan to do further modifications on the appended child you must use the returned node.
4.3.0 and 4.3.1 The new child newnode is first unlinked from its existing context, if it's already a child of DomNode. Therefore the newnode is moved and not copied anymore. This is the behaviour according to the W3C specifications. If you need the old behaviour, use DomNode->clone_node before appending.
4.3.2 The new child newnode is first unlinked from its existing context, if it's already in the tree. Same rules apply.
The following example adds a new element node to a fresh document and sets the attribute align to left.
A more complex example is the one below. It first searches for a certain element, duplicates it including its children and adds it as a sibling. Finally a new attribute is added to one of the children of the new sibling and the whole document is dumped.
The above example could also be done with DomNode->insert_before instead of DomNode->append_child.
As of version 4.3, PHP doesn't support appending a child from another source document.
I am assuming you have two documents open, $xmldoc1 and $xmldoc2 and you have selected [via XPath or explicit searching] the nodes you want in each document. Thus $xmldoc1_appending_node is the node you would like to add $xmldoc2_importnode to.
// first we create a temporary node within the document we want to add to.
// now we have a node that is in the right document context we can clone the one we want into this one.
// by using true in the above call to clone_node() we copy all of the child nodes as well. Use false or nothing if you just want the containing node with no children.
Now your document contains the new nodes imported from a different document.
$x_head, $x_title, $x_meta, $x_link and $x_body in the same way.
What you can do is, reverse the order of your appendings.. so that you append the highest level node at the end.. so that your reference is still valid after, or wait till you have filled all of the nodes you wanted to add before you append it to your parent node. | CommonCrawl |
In this note we study asymptotic properties of random lifts of graphs introduced by Amit and Linial as a new model of random graphs. Given a base graph $G$ and an integer $n$, a random lift of $G$ is obtained by replacing each vertex of $G$ by a set of $n$ vertices, and joining these sets by random matchings whenever the corresponding vertices of $G$ are adjacent. In this paper we study connectivity properties of random lifts. We show that the size of the largest topological clique in typical random lifts, with $G$ fixed and $n\rightarrow\infty$, is equal to the maximum degree of the core of $G$ plus one. A similar idea can be used to prove that for any graph $G$ with $\delta(G)\geq2k-1$ almost every random lift of $G$ is $k$-linked. | CommonCrawl |
Construct a continuous real valued function which takes zero on integers and such that image of function is not closed.
Does Bhagavad Gita say anything about whether a person should eat Vegetarian or Non Vegetarian food?
How to effectively read a mathematical textbook?
Showing that given matrix does not have negative eigenvalues without using the knowledge that it is positive definite.
What does order of element mean in the Symmetric Group?
Polynomial in $k[x_1, \ldots,x_n]$ has finitely many roots? | CommonCrawl |
Ch. 1 Math Unlike Other Subjects
Ch. 2 Belief Systems & Student Achievement
Ch. 3 Protocols & Expectations
Ch. 4 Preparation Affects Everything
Ch. 5 Instruction Really Matters
Ch. 6 Writing it Down
Ch. 7 Making Homework Worthwhile
Ch. 8 Test Prep, Light at the End of the Tunnel
Ch. 9 The Test, Where the Rubber Hits the Road
Ch. 10 Cultivating Parent & Community Support
In this book Bill Hanlon provides examples and recommends highly effective and practical instructional and assessment strategies that classroom teachers can immediately implement and that school administrators can readily observe. These high yield strategies build on accepted practices and directly address the needs of struggling students or students living in poverty. The common sense approach assists classroom teachers in organizing their instruction by connecting preparation and instruction to student notes, homework, test preparation, and assessments so students study more effectively and efficiently, resulting in increased student performance. Hanlon also emphasizes the importance of student-teacher relationships and implementing a success-on-success model.
Book — xxii, 355 pages : illustrations ; 23 cm.
13. Explanations and Implications of Diminishing Intervention Impacts Across Time Drew Bailey.
The fifth volume in the Mathematical Cognition and Learning series focuses on informal learning environments and other parental influences on numerical cognitive development and formal instructional interventions for improving mathematics learning and performance. The chapters cover the use of numerical play and games for improving foundational number knowledge as well as school math performance, the link between early math abilities and the approximate number system, and how families can help improve the early development of math skills. The book goes on to examine learning trajectories in early mathematics, the role of mathematical language in acquiring numeracy skills, evidence-based assessments of early math skills, approaches for intensifying early mathematics interventions, the use of analogies in mathematics instruction, schema-based diagrams for teaching ratios and proportions, the role of cognitive processes in treating mathematical learning difficulties, and addresses issues associated with intervention fadeout.
Matrosov, Valery V., 1960- author.
Book — x, 244 pages ; 24 cm.
Modern technological, biological, and socioeconomic systems are extremely complex. The study of such systems largely relies on the concepts of competition and cooperation (synchronization). The main approaches to the study of nonlinear dynamics of complex systems are now associated with models of collective dynamics of networks and ensembles, formed by interacting dynamical elements. Unfortunately, the applicability of analytical and qualitative methods of nonlinear dynamics to such complex systems is severely restricted due to the high dimension of phase space. Therefore, studying the simplest models of networks, which are ensembles with a small number of elements, becomes of particular interest. Such models allow one to make use of the entire spectrum of analytical, qualitative, and numerical methods of nonlinear dynamics. This book is devoted to the investigation of one kind of such systems, namely small ensembles of coupled, phase-controlled oscillators. Both traditional issues, like synchronization, that are relevant for applications in radio-communications, radio-location, energy, etc., and nontraditional issues of excitation of chaotic oscillations and their possible application in advanced communication systems are addressed.
Curry, Sean N., 1990- author.
Book — v, 81 pages ; 25 cm.
Katsoulis, Elias G., 1963- author.
Book — vii, 85 pages ; 25 cm.
Bring math to life with routines that are academically rigorous, standards-based, and engaging! Go beyond circling ABCD on your bell ringers and do nows and get your students reasoning, modeling, and communicating about math every day! In this new book from bestselling author and consultant Dr. Nicki Newton, you'll learn how to develop effective daily routines to improve students' thinking, reasoning, and questioning about math. The book provides a wide variety of rigorous, high-interest routines and explains how to rotate and implement them into your curriculum. Inside, you'll find: Questioning techniques that encourage students to think beyond the "right vs. wrong" continuum Tips for building a math-learning environment that is friendly and supportive of all students Math vocabulary exercises that are meaningful and fun An assortment of innovative daily activities, including "Fraction of the Day, " "Truth or Fib, " "Find and Fix the Error, " "Guess My Number, " "What Doesn't Belong?" and many, many more. Each chapter offers examples, charts, and tools that you can use immediately. With these resources and the practical advice throughout the book, you'll increase students' ability to understand math on a deeper level while keeping them engaged in their own learning processes.
Part I: Why Jump-Start Routines? The First Few Minutes of Mathematics Class Why the Traditional Warmup Doesn't Work The Problem with Going Over Homework Jump-start Routines: New Warmups for a New Era Routines for Reasoning in Mathematics Routines for Improving Number Sense and Fluency Building Number Sense and Fluency Over Time with Routines 10,000 Hours of Practice Routines that Satisfy the Need for Quality Practice Routines for Achieving Improved Performance Routines for Rehabilitating Number Pluckers, Pluggers, and Crunchers Routines for Growth Mindset Routines to Honor and Leverage Errors Routines to Actively Develop Confidence Implementing Jump-Start Routines Routines that are Ready for Use Flexible Use Timing of Routines: How Long? When? Which Routines to Use? Plan for the Routine Select the Routine Routines Set the Stage for Meaningful Discourse Practical Advice for Routines Modify, Modify, Modify Identifying or Creating the Content or Topics Using Routines Formatively Be Committed and Creative Part II: The Routines Picture It Where's the Point? Is This the End? That's a Fact Math Yapper Broken Numbers It's About The Best Tool Relating Three Two Columns Numbered Star Switcharoo Express It Relating Without Calculating More or Less Somewhere in Between Patterns and Generalizations If I know This What It Takes to Make Two Truths and a Lie Part III: Where to Go Next Make a Plan Identify Content for Routines Identify Routines Determine the Rotation Give It Time Set Goals Adjust to Their Adjustments Further Modifying Routines Design Your Own Routines Work Collaboratively and Share the Load Jumpstart Mathematics Engagement, Number Sense, and Reasoning.
Offering a year's worth of daily instructional material that you can use each class period to promote student reasoning and number sense, this resource will help students: Frequently revisit essential mathematical concepts Foster and shore up conceptual understanding Engage in mental mathematics, leading to efficiency and fluency Engage in mathematical discourse by constructing viable arguments and critiquing the reasoning of others Reason mathematically, which can improve performance on high-stakes assessments Move learning beyond "correctness" by valuing mistakes and discourse to encourage a growth mindset.
Part 3: Where to Go Next Make a Plan Identify Content for Routines Identify Routines Determine the Rotation Give It Time Set Goals Adjust to Their Adjustments Further Modifying Routines Design Your Own Routines Work Collaboratively and Share the Load Jumpstart Mathematics Engagement, Number Sense, and Reasoning.
The routines in this book will help students: Frequently revisit essential mathematical concepts Foster and shore up conceptual understanding Engage in mental mathematics, leading to efficiency and fluency Engage in mathematical discourse by constructing viable arguments and critiquing the reasoning of others Reason mathematically, and prepare for high stakes assessments Move learning beyond "correctness" by valuing mistakes and discourse and encouraging a growth mindset.
Healy, Kieran Joseph, 1973- author.
An accessible primer on how to create effective graphics from data This book provides students and researchers a hands-on introduction to the principles and practice of data visualization. It explains what makes some graphs succeed while others fail, how to make high-quality figures from data using powerful and reproducible methods, and how to think about data visualization in an honest and effective way. Data Visualization builds the reader's expertise in ggplot2, a versatile visualization library for the R programming language. Through a series of worked examples, this accessible primer then demonstrates how to create plots piece by piece, beginning with summaries of single variables and moving on to more complex graphics. Topics include plotting continuous and categorical variables; layering information on graphics; producing effective "small multiple" plots; grouping, summarizing, and transforming data for plotting; creating maps; working with the output of statistical models; and refining plots to make them more comprehensible. Effective graphics are essential to communicating ideas and a great way to better understand data. This book provides the practical skills students and practitioners need to visualize quantitative data and get the most out of their research findings. Provides hands-on instruction using R and ggplot2 Shows how the "tidyverse" of data analysis tools makes working with R easier and more consistent Includes a library of data sets, code, and functions.
Helton, J. William, 1944- author.
Book — v, 106 pages ; 26 cm.
Introduction Dilations and Free Spectrahedral Inclusions Lifting and Averaging A Simplified Form for $\vartheta $ $\vartheta$ is the Optimal Bound The Optimality Condition $\alpha =\beta $ in Terms of Beta Functions Rank versus Size for the Matrix Cube Free Spectrahedral Inclusion Generalities Reformulation of the Optimization Problem Simmons' Theorem for Half Integers Bounds on the Median and the Equipoint of the Beta Distribution Proof of Theorem 2.1 Estimating $\vartheta (d)$ for Odd $d$. Dilations and Inclusions of Balls Probabilistic Theorems and Interpretations continued Bibliography Index.
An operator $C$ on a Hilbert space $\mathcal H$ dilates to an operator $T$ on a Hilbert space $\mathcal K$ if there is an isometry $V:\mathcal H\to \mathcal K$ such that $C= V^* TV$. A main result of this paper is, for a positive integer $d$, the simultaneous dilation, up to a sharp factor $\vartheta (d)$, expressed as a ratio of $\Gamma $ functions for $d$ even, of all $d\times d$ symmetric matrices of operator norm at most one to a collection of commuting self-adjoint contraction operators on a Hilbert space.
Quantitative thinking is our inclination to view natural and everyday phenomena through a lens of measurable events, with forecasts, odds, predictions, and likelihood playing a dominant part. The Error of Truth recounts the astonishing and unexpected tale of how quantitative thinking came to be, and its rise to primacy in the nineteenth and early twentieth centuries. Additionally, it considers how seeing the world through a quantitative lens has shaped our perception of the world we live in, and explores the lives of the individuals behind its early establishment. This worldview was unlike anything humankind had before, and it came about because of a momentous human achievement: we had learned how to measure uncertainty. Probability as a science was conceptualised. As a result of probability theory, we now had correlations, reliable predictions, regressions, the bell-shaped curve for studying social phenomena, and the psychometrics of educational testing. Significantly, these developments happened during a relatively short period in world history: roughly the 130-year period from 1790 to 1920, from about the close of the Napoleonic era, through the Enlightenment and the Industrial Revolutions, to the end of World War I. By that time, transportation had advanced rapidly, due to the invention of the steam engine, and literacy rates had increased exponentially. This brief period in time was ready for fresh intellectual activity, and it gave a kind of impetus for the probability inventions. Quantification is now everywhere in our daily lives, such as in the ubiquitous microchip in smartphones, cars, and appliances; in the Bayesian logic of artificial intelligence, as well as applications in business, engineering, medicine, economics, and elsewhere. Probability is the foundation of quantitative thinking. The Error of Truth tells its story: when, why, and how it happened.
A more intuitive approach to the mathematical foundation of computer science Discrete mathematics is the basis of much of computer science, from algorithms and automata theory to combinatorics and graph theory. This textbook covers the discrete mathematics that every computer science student needs to learn. Guiding students quickly through thirty-one short chapters that discuss one major topic each, this flexible book can be tailored to fit the syllabi for a variety of courses. Proven in the classroom, Essential Discrete Mathematics for Computer Science aims to teach mathematical reasoning as well as concepts and skills by stressing the art of proof. It is fully illustrated in color, and each chapter includes a concise summary as well as a set of exercises. The text requires only precalculus, and where calculus is needed, a quick summary of the basic facts is provided. Essential Discrete Mathematics for Computer Science is the ideal introductory textbook for standard undergraduate courses, and is also suitable for high school courses, distance education for adult learners, and self-study. The essential introduction to discrete mathematics Features thirty-one short chapters, each suitable for a single class lesson Includes more than 300 exercises Almost every formula and theorem proved in full Breadth of content makes the book adaptable to a variety of courses Each chapter includes a concise summary Solutions manual available to instructors. | CommonCrawl |
A type of convergence of sequences of functions (mappings). Let $f_n : X \rightarrow Y$, $n=1,2,\ldots$ where $X$ is some set and $Y$ is a topological space; then pointwise convergence means that for any element $x \in X$ the sequence of values $y_n = f_n(x)$, $n=1,2,\ldots$ converges in the space $Y$. The function $f : x \mapsto \lim_n y_n$ is then the pointwise limit of the sequence $(f_n)$. The definition extends to generalized sequences of functions and their values.
An important subclass of the pointwise-convergent sequences for the case of mappings between metric spaces (or, more generally, uniform spaces) is that of the uniformly-convergent sequences (cf. Uniform convergence).
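For instance, on $X=[0,1]$ the sequence $f_n(x)=x^n$ converges pointwise to the function $f$ with $f(x)=0$ for $0\le x<1$ and $f(1)=1$, but not uniformly, since $\sup_{x\in[0,1]}|f_n(x)-f(x)|=1$ for every $n$; in particular, pointwise convergence need not preserve continuity.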
See also Pointwise convergence, topology of.
| CommonCrawl
Music teachers challenge students to listen and participate.
English and History teachers invite students to journey in other worlds.
Art and Drama teachers offer students opportunities to explore.
What are we to offer students if they are to function mathematically?
What do you understand by 'functioning mathematically'?
What characteristic behaviours do your highly achieving mathematicians exhibit?
What do these behaviours look like in practice?
Many numbers can be expressed as the sum of two or more consecutive integers.
Look at numbers other than 15 and find out all you can about writing them as sums of consecutive whole numbers.
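For instance, $15 = 7+8 = 4+5+6 = 1+2+3+4+5$, so $15$ can be written as a sum of two, three, or five consecutive whole numbers.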
Odd numbers can be written as two consecutive numbers.
Multiples of $3$ can be written as three consecutive numbers.
Even numbers can be written as four consecutive numbers.
Multiples of $3$ can be written as the sum of three consecutive numbers.
Multiples of $5$ can be written as the sum of five consecutive numbers.
If you give me any multiple of three, I can tell you the three numbers by dividing by three and that will be the middle number.
If you give me three consecutive numbers I can always turn them into a multiple of three.
Same will apply to five, seven, nine and any odd number because you can pair off numbers on either side of the middle number.
Can anyone think of a counter example?
What might you try next?
Is there a way you could organise your findings?
Drawing attention to and valuing process as well as outcome.
How can you generate questions that promote these HOTS in a mathematical context?
If the area of a rectangle is $24$ cm² and the perimeter is $22$ cm, what are its dimensions?
What if the area of a rectangle (in cm²) is equal to the perimeter (in cm), what could its dimensions be?
Find a rectangle which has unit sides and a perimeter of $100$.
How many answers are there and how do you know you've got them all?
Find the area and perimeter of a $3cm \times 8cm$ rectangle.
If the area of a rectangle is $24$cm² and the perimeter is $22$cm, what are its dimensions? How did you work this out?
"A teacher of mathematics has a great opportunity. If he fills his allotted time with drilling his students in routine operations he kills their interest, hampers their intellectual development, and misuses his opportunity. But if he challenges the curiosity of his students by setting them problems proportionate to their knowledge, and helps them to solve their problems with stimulating questions, he may give them a taste for, and some means of, independent thinking."
"I don't expect, and I don't want, all children to find mathematics an engrossing study, or one that they want to devote themselves to either in school or in their lives. Only a few will find mathematics seductive enough to sustain a long term engagement. But I would hope that all children could experience at a few moments in their careers...the power and excitement of mathematics...so that at the end of their formal education they at least know what it is like and whether it is an activity that has a place in their future."
This article was originally presented to the Council of Boards of School Education in India Conference,"Addressing Core Issues and Concerns in Science and Mathematics", in Rishikesh, India in April 2007. | CommonCrawl |
Abstract : In this paper, we propose a Ward-like hierarchical clustering algorithm including spatial/geographical constraints. Two dissimilarity matrices $D_0$ and $D_1$ are inputted, along with a mixing parameter $\alpha \in [0,1]$. The dissimilarities can be non-Euclidean and the weights of the observations can be non-uniform. The first matrix gives the dissimilarities in the "feature space" and the second matrix gives the dissimilarities in the "constraint space". The criterion minimized at each stage is a convex combination of the homogeneity criterion calculated with $D_0$ and the homogeneity criterion calculated with $D_1$. The idea is then to determine a value of $\alpha$ which increases the spatial contiguity without deteriorating too much the quality of the solution based on the variables of interest i.e. those of the feature space. This procedure is illustrated on a real dataset using the R package ClustGeo. | CommonCrawl |
Definition 1: a polynomial is an expression of the form $a_n x^n + \dots + a_1 x + a_0$, where $x$ is called an indeterminate.
Definition 2: a polynomial is an expression of the form $a_n x^n + \dots + a_1 x + a_0 x^0$, where $x^0$ means the identity element in the structure to which $x$ belongs.
For the study of the polynomial itself, I think these two definitions make no difference. But when you treat a polynomial (expression) as a polynomial function, the second definition lets us simply substitute every $x$ with some value (e.g. some square matrix). With the first definition, by contrast, we have to define the function in an "ad hoc" way: we must explicitly attach an identity element to the constant term $a_0$ to make $a_1 x$ and $a_0$ addable.
Is the second definition equivalents to the first one?
If yes, then is it true that the authors of those texts actually mean definition 2 when they define the polynomial using definition 1, and just omit the identity element?
If not, why? Would it be nicer to replace the non-addable $a_1x + a_0$ with $a_1x + a_0 x^0$ (even though we don't really need to add these two terms together when studying polynomials themselves)?
One rigorous approach is to define a polynomial as a sequence of coefficients $$(a_0,a_1,\dots,a_n,\dots)$$ where only finitely many $a_i$ are non-zero.
Then we add series point-wise, and we find their products by the Cauchy product.
Then $(1,0,0,0,\dots)$ is the multiplicative identity, and $(0,1,0,0,\dots)$ is $x$. So $1$ isn't really $x^0$, and $x^2$ just means $x\cdot x$.
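To make the construction concrete, here is a small Python sketch (the function names are illustrative choices, not from the post) of point-wise addition and the Cauchy product on such coefficient tuples:

```python
def add(p, q):
    # point-wise addition of coefficient tuples
    n = max(len(p), len(q))
    p = p + (0,) * (n - len(p))
    q = q + (0,) * (n - len(q))
    return tuple(a + b for a, b in zip(p, q))

def mul(p, q):
    # Cauchy product: c_k = sum over i + j = k of p_i * q_j
    c = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            c[i + j] += a * b
    return tuple(c)

one = (1,)        # the multiplicative identity (1, 0, 0, ...)
x = (0, 1)        # the indeterminate x, i.e. (0, 1, 0, 0, ...)

print(mul(x, x))            # (0, 0, 1), which represents x^2
print(add(mul(x, x), one))  # (1, 0, 1), which represents x^2 + 1
```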
| CommonCrawl
We show that the total area of two distinct Gaussian curvature 1 surfaces with the same conformal factor on the boundary, which are also conformal to the Euclidean unit disk, must be at least 4π. In other words, the areas of these surfaces must cover the whole unit sphere after a proper rearrangement. We refer to this lower bound of total areas as the Sphere Covering Inequality. This inequality and its generalizations are applied to a number of open problems related to Moser-Trudinger type inequalities, mean field equations and Onsager vortices, etc., and yield optimal results. In particular we confirm the best constant of a Moser-Trudinger type inequality conjectured by A. Chang and P. Yang in 1987. This is joint work with Changfeng Gui.
For a bounded domain, we consider the $L^\infty$-functional involving a nonnegative Hamiltonian. Under a continuous Dirichlet boundary condition and some assumptions on the Hamiltonian $H$, the uniqueness of absolute minimizers for the Hamiltonian $H$ is established. This extends the uniqueness theorem to a larger class of Hamiltonians $H(x,p)$ with $x$-dependence. As a corollary, we confirm an open question on the uniqueness of absolute minimizers posed by Jensen-Wang-Yu. Our proofs rely on the geometric structure of the action function induced by the Hamiltonian $H(x,p)$, and the identification of absolute subminimality with convexity of the associated Hamilton-Jacobi flow.
In the 1960s, Benjamin and Feir, and Whitham, discovered that a Stokes wave would be unstable to long wavelength perturbations, provided that (the carrier wave number) x (the undisturbed water depth) > 1.363.... In the 1990s, Bridges and Mielke studied the corresponding spectral instability in a rigorous manner. But it leaves some important issues open, such as the spectrum away from the origin. The governing equations of the water wave problem are complicated. One may resort to simpler approximate models to gain insights.
I will begin by Whitham's shallow water equation and the modulational instability index for small amplitude and periodic traveling waves, the effects of surface tension and vorticity. I will then discuss higher order corrections, extension to bidirectional propagation and two-dimensional surfaces. This is partly based on joint works with Jared Bronski (Illinois), Mat Johnson (Kansas), and Ashish Pandey (Illinois). | CommonCrawl |
There has been much said in the popular media about declining standards and participation in secondary school mathematics in Australia.
It is strange that it should be so, but it is true all the same, that many of the most debated aspects of Mathematics concern matters that are really completely trivial.
The Marriage Problem: How to Choose?
The problem of how to successfully choose the partner most likely to lead to a long and happy marriage is a task which has occupied the minds of young and older people alike, men and women, among all races and cultures, throughout the ages.
A Pythagorean triad $(x,y,u)$ consists of positive integers $x,y,u$ such that $x^2+y^2=u^2$. Geometrically, the integers represent the lengths of the sides of a right-angled-triangle with the hypotenuse $u$.
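For example, $(3,4,5)$ is a Pythagorean triad, since $3^2+4^2=9+16=25=5^2$.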
Q1302 Let $\alpha$, $\beta$ and $\gamma$ be the angles of one triangle.
ANS: (Correct solution by J.C. | CommonCrawl |
Angenent, Feldman, Kim, Tran.
mentioned problem regarding the system of cross-diffusion equations. The remaining part of the talk will be focused on some new generalized results on regularity gradient estimates for some general class of quasi-linear parabolic equations. Regularity estimates for gradients of solutions in Lorentz spaces will be presented. Ideas of the proofs for the results are given.
Quantitative homogenization and Harnack inequality for a degenerate discrete nondivergence form random operator.
where $a(x)=\mathrm{diag}(a_1(x),\dots,a_d(x))$, $x\in\mathbb{Z}^d$, are random nonnegative diagonal matrices which are independent and identically distributed and have a positive expectation.
A difficulty in studying this problem is that the coefficients are allowed to be zero. In this talk, using random walks in random media and their percolative structure, we will present a Harnack inequality and a quantitative homogenization result for this random operator. Joint work with N. Berger, M. Cohen and J.-D. Deuschel.
Inverse problems for Maxwell's equations and its application.
This talk will illustrate the application of complex geometrical optics (CGO) solutions to Maxwell's equations.
First, I will explain the increasing stability behavior of coefficients for Maxwell equations.
In particular, by using CGO solutions, the stability estimate of the conductivity is improving when frequency is growing.
Second, I will describe the construction of new families of accelerating and almost nondiffracting beams for Maxwell's equations.
They have the form of wave packets that propagate along circular trajectories while almost preserving a transverse intensity profile.
Abstract: We describe Riemannian (non-Kähler) Ricci flow solutions that develop finite-time Type-I singularities whose parabolic dilations converge to a shrinking Kähler–Ricci soliton singularity model. More specifically, the singularity model for these solutions is the "blowdown soliton" discovered by Feldman, Ilmanen, and Knopf in 2003. Our results support the conjecture that the blowdown soliton is stable under Ricci flow. This work also provides the first set of rigorous examples of non-Kähler solutions of Ricci flow that become asymptotically Kähler, in suitable space-time neighborhoods of developing singularities, at rates that break scaling invariance. These results support the conjectured stability of the subspace of Kähler metrics under Ricci flow.
of the level set formulation of the crystalline mean curvature flow.
respect to approximation by a smooth anisotropic mean curvature flow. | CommonCrawl |
I'm using daily settlement data to get yield levels for a couple of products. From this data I am doing PCA on a rolling collection of the yield levels. I have been using sci-kit learn's PCA function, but I also see the issue when doing my own PCA through Numpy. So as far as I know it's not an issue of the libraries.
Here's an example. I have data for 150 settlements and I calculate the PC's using data from day 0-100, then I recalculate 10 days later on data 10-110, etc.
And here are the corresponding weights.
As you can see the $x_1$, $x_3$ weights start to blow up at some point which doesn't really make sense given the nature of the data.
Does anybody have any insight to my problem?
Do the PCA on changes or log-changes in your series. That is often how PCA is conducted in fixed-income settings.
Think about smoothing your weights (with some priors). Remember that you are trying to explain the interaction of three fixed income instruments (which are driven by economies, politics, and market forces) by only 9 parameters. You need to match your analysis with reasonable expectations on its performance.
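As a concrete sketch of the first suggestion (illustrative only; the array name yields, the window length and the step size are assumptions), one could run the rolling PCA on daily changes instead of levels:

```python
import numpy as np
from sklearn.decomposition import PCA

# yields: array of shape (n_days, 3) with the daily settlement yield levels (assumed)
changes = np.diff(yields, axis=0)   # daily changes; use np.diff(np.log(yields), axis=0) for log-changes

window, step = 100, 10              # 100-day window, re-estimated every 10 days
rolling_weights = []
for start in range(0, changes.shape[0] - window + 1, step):
    pca = PCA(n_components=3)
    pca.fit(changes[start:start + window])
    rolling_weights.append(pca.components_)   # rows are the PC loadings for this window
```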
| CommonCrawl
I wonder about terms like "Auxin", "Cytokinin", "Gibberellin", etc.: each means NOT a single compound but a class of compounds.
For example "Auxin" does not mean a single compound, but several compounds such as: the natural auxins (IAA in all plants*, 4-Cl-IAA in pea*, IBA in mustards*) and their synthetic analogs like NAA, 2,4-D, 2,4,5-T, dicamba ( * ), etc.
Now, my question is: would all the auxins work in exactly the same way, or would there be the slightest (qualitative) difference between their actions? Similarly, would all the cytokinins work the same way, and would all the gibberellins work in the same way?
But besides that, my question is: would there be any qualitative difference?
So far I haven't found anything clear-cut about this in any book or on the web.
In one source (Plant Tissue Culture: Theory and Practice / Bhojwani and Razdan, (edition = ?), section 3.2.3 Growth hormones), it is stated that "IBA and IAA are widely used for rooting and, in interaction with a cytokinin, for shoot proliferation. 2,4-D and 2,4,5-T are very effective for the induction and growth of callus. 2,4-D is also an important factor for the induction of somatic embryogenesis."
However, I could not find out whether this difference is really due to a difference in the biological (signaling-pathway) process, or due to some other, more superficial difference (solubility, binding to the same receptors, etc.) of the applied molecules.
Preface: As @David says, the question is very broad. So, here, I'll take up the most common plant growth factors i.e. auxins to give you an idea.
Short Answer: We don't definitely know yet whether the pathways of two auxins (I take indole-3-acetic acid (IAA) and indole-3-butyric acid (IBA) here) are exactly same or not.
Auxins are a class of plant hormones (or plant growth substances) with some morphogen-like characteristics. Auxins have a cardinal role in coordination of many growth and behavioral processes in the plant's life cycle and are essential for plant body development.
genetic evidence has been found that suggests that IBA may be converted into IAA through a similar process to $\beta$-oxidation of fatty acids.
Now, when it comes to whether the exact mechanisms of how IAA and IBA work are similar or not then, as I said, we don't know yet.
Plant cells elongate irreversibly only when load-bearing bonds in the walls are cleaved. Auxin causes the elongation of stem and coleoptile cells by promoting wall loosening via cleavage of these bonds. This process may be coupled with the intercalation of new cell wall polymers. Because the primary site of auxin action appears to be the plasma membrane or some intracellular site, and wall loosening is extracellular, there must be communication between the protoplast and the wall. Some "wall-loosening factor" must be exported from auxin-impacted cells, which sets into motion the wall loosening events. About 20 years ago, it was suggested that the wall-loosening factor is hydrogen ions. This idea and subsequent supporting data gave rise to the Acid Growth Theory, which states that when exposed to auxin, susceptible cells excrete protons into the wall (apoplast) at an enhanced rate, resulting in a decrease in apoplastic pH. The lowered wall pH then activates wall-loosening processes, the precise nature of which is unknown. Because exogenous acid causes a transient (1-4 h) increase in growth rate, auxin must also mediate events in addition to wall acidification for growth to continue for an extended period of time. These events may include osmoregulation, cell wall synthesis, and maintenance of the capacity of walls to undergo acid-induced wall loosening. At present, we do not know if these phenomena are tightly coupled to wall acidification or if they are the products of multiple independent signal transduction pathways.
When auxin stimulates rapid cell elongation growth of cereal coleoptiles, it causes a degradation of 1,3:1,4-$\beta$-glucan in hemicellulosic polysaccharides. We examined gene expressions of endo-1,3:1,4-$\beta$-glucanase (EI) and exo-$\beta$-glucanase (ExoII), of which optimum pH are about 5, and molecular distribution of hemicellulosic polysaccharides in barley (Hordeum vulgare L.) coleoptile segments treated with or without IAA. IAA (10–5 M) stimulated the gene expression of EI, while it did not affect that of ExoII. IAA induced gene expression of EI after 4 h and increased wall-bound glucanase activity after 8 h. The molecular weight distribution of hemicellulosic polysaccharides from coleoptile cell walls was shifted to lower molecular weight region by 2 h of IAA treatment. Fusicoccin (10–6 M) mimicked IAA-induced elongation growth and the decrease in molecular weight of hemicellulosic 1,3:1,4-$\beta$-glucan of coleoptiles in the first 4 h, but it did not promote elongation growth thereafter. These facts suggest that acidification of barley cell walls by IAA action enhances pre-existing cell wall-bound glucanase activity in the early first phase of IAA-induced growth and the late second phase involves the gene expression of EI by IAA.
Although the exact method of how IBA works is still largely unknown, genetic evidence has been found that suggests that IBA may be converted into IAA through a similar process to $\beta$-oxidation of fatty acids. The conversion of IBA to IAA then suggests that IBA works as a storage sink for IAA in plants. There is other evidence that suggests that IBA is not converted to IAA but acts as an auxin on its own.
We have examined in vitro rooting of apple 'Jork 9' shoots exposed for three weeks to each of the three auxins commonly used for ex vitro rooting: indole-3-acetic acid (IAA), indole-3-butyric acid (IBA) and $\alpha$-naphthaleneacetic acid (NAA). During the initial five days of the rooting treatment, the cultures were incubated in darkness. In this period, the root initials are formed. Then, the cultures were moved to the light. NAA resulted in a low (ca. 8 roots), and IAA or IBA in a high (ca. 15 roots) maximal root number. The maximal root number was reached at a wide range of IAA concentrations (10-100 $\mu$M) but at only one concentration of IBA (10 $\mu$M) or NAA (3 $\mu$M). With NAA and IBA, growth of roots and shoots was much more inhibited than with IAA. For these reasons, IAA is the preferable auxin for in vitro rooting of apple 'Jork 9' shoots.
This suggests that IAA and IBA use different mechanisms or that IBA can't use the same mechanism as effectively as IAA.
| CommonCrawl
Preprint arXiv:1309.5603 [math.PR]; last accessed October 23, 2018.
Abstract: Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grow, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent, which we handle by a careful analysis of the dependencies and tools from Fourier analysis to obtain quantified convergence rates. Our method can be applied to many other systems; as an example, we show that the $n!$ entries in the determinant expansions of $n\times n$ matrices with entries independently drawn from nice random variables converges to Benford's Law. | CommonCrawl |
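For reference, Benford's Law assigns to the leading digit $d\in\{1,\dots,9\}$ the probability $P(d)=\log_{10}(1+1/d)$, so that $P(1)=\log_{10}2\approx 0.301$, the roughly 30% figure cited in the abstract above, compared with the naive $1/9\approx 11\%$.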
The power of Bayesian modelling really clicked for me when I was first introduced to hierarchical modelling. In this blog post we will highlight the advantage of using hierarchical Bayesian modelling as opposed to non-hierarchical Bayesian modelling. Hierarchical modelling is especially advantageous when multi-level data is used, making the most of all the information available through its 'shrinkage effect', which will be explained below.
Having multiple sets of measurements comes up all the time; in Psychology, for example, you test multiple subjects on the same task. You then might want to estimate a model that describes the behavior as a set of parameters relating to mental functioning. Often we are interested in individual differences in these parameters, but also assume that subjects share similarities (being human and all). Software from our lab, HDDM, allows hierarchical Bayesian estimation of a widely used decision-making model, but we will use a more classical example of hierarchical linear regression here to predict radon levels in houses.
As you can see, we have multiple radon measurements (log-converted to be on the real line) in a county and whether the house has a basement (floor == 0) or not (floor == 1). We are interested in whether having a basement increases the radon measured in the house.
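A minimal sketch of loading and indexing such a dataset might look like the following (the file name radon.csv and the column names county, floor and log_radon are assumptions, not taken from the post):

```python
import pandas as pd

data = pd.read_csv('radon.csv')                   # assumed file with columns: county, floor, log_radon
county_names = data.county.unique()
county_idx = pd.Categorical(data.county).codes    # integer index of each measurement's county
n_counties = len(county_names)
```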
In model form: $\text{radon}_{i,c} = \alpha + \beta \cdot \text{floor}_{i,c} + \epsilon$, where $i$ represents the measurement, $c$ the county, and floor contains a yes (1) or no (0) to the question of whether the household has a basement. If you need a refresher on Linear Regressions in PyMC, check out my previous blog post. Critically, we are only estimating one intercept and one slope for all measurements over all counties.
For the unpooled (separate-regressions) model we instead write $\text{radon}_{i,c} = \alpha_c + \beta_c \cdot \text{floor}_{i,c} + \epsilon_c$. Note that we added the subindex $c$, so we are estimating $n$ different $\alpha$s and $\beta$s -- one for each county.
This is the extreme opposite model: where above we assumed all counties are exactly the same, here we are saying that they share no similarities whatsoever, which ultimately is also unsatisfying.
We thus assume the intercepts $\alpha$ and slopes $\beta$ to come from a normal distribution centered around their respective group mean $\mu$ with a certain group variance $\sigma^2$, the values (or rather posteriors) of which we also estimate. That's why this is called multilevel or hierarchical modeling.
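Written out, one way to express these group-level priors is $$\alpha_c \sim \mathcal{N}(\mu_\alpha, \sigma_\alpha^2), \qquad \beta_c \sim \mathcal{N}(\mu_\beta, \sigma_\beta^2),$$ with hyperpriors placed on $\mu_\alpha$, $\mu_\beta$, $\sigma_\alpha$ and $\sigma_\beta$, while the county-level regression $\text{radon}_{i,c} = \alpha_c + \beta_c \cdot \text{floor}_{i,c} + \epsilon$ stays the same.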
How do we estimate such a complex model with all these parameters you might ask? Well, that's the beauty of Probabilistic Programming -- we just formulate the model we want and press our Inference Button(TM).
Note that the above is not a complete Bayesian model specification as we haven't defined priors or hyperpriors (i.e. priors for the group distribution, $\mu$ and $\sigma$). These will be used in the model implementation below but only distract here.
To really highlight the effect of the hierarchical linear regression we'll first estimate the non-hierarchical Bayesian model from above (separate regressions). For each county a new set of parameters is estimated. As we have no prior information on what the intercept or slope could be, we place a Normal distribution centered around 0 with a wide standard deviation on each. We'll assume the measurements are normally distributed with noise $\epsilon$, on which we place a Half-Cauchy distribution.
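A PyMC3 sketch of this unpooled model could look roughly like the following (illustrative; it reuses the assumed names data, county_idx and n_counties from the loading sketch above, and the prior widths are assumptions):

```python
import pymc3 as pm

with pm.Model() as unpooled_model:
    # separate intercept and slope for every county, with wide Normal priors centered at 0
    a = pm.Normal('a', mu=0, sd=100, shape=n_counties)
    b = pm.Normal('b', mu=0, sd=100, shape=n_counties)
    # measurement noise
    eps = pm.HalfCauchy('eps', beta=5)

    radon_est = a[county_idx] + b[county_idx] * data.floor.values

    radon_like = pm.Normal('radon_like', mu=radon_est, sd=eps,
                           observed=data.log_radon.values)

    unpooled_trace = pm.sample(2000)
```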
Instead of initiating the parameters separately, the hierarchical model initiates group parameters that consider the counties not as completely different but as having an underlying similarity. These group distributions are subsequently used to influence the distribution of each county's $\alpha$ and $\beta$.
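A corresponding PyMC3 sketch of the hierarchical model (again illustrative, with the same assumed names and prior choices):

```python
with pm.Model() as hierarchical_model:
    # hyperpriors: group means and group standard deviations for intercepts and slopes
    mu_a = pm.Normal('mu_a', mu=0., sd=100.)
    sigma_a = pm.HalfCauchy('sigma_a', beta=5.)
    mu_b = pm.Normal('mu_b', mu=0., sd=100.)
    sigma_b = pm.HalfCauchy('sigma_b', beta=5.)

    # county-level intercepts and slopes, drawn from the group distributions
    a = pm.Normal('a', mu=mu_a, sd=sigma_a, shape=n_counties)
    b = pm.Normal('b', mu=mu_b, sd=sigma_b, shape=n_counties)

    eps = pm.HalfCauchy('eps', beta=5.)

    radon_est = a[county_idx] + b[county_idx] * data.floor.values

    radon_like = pm.Normal('radon_like', mu=radon_est, sd=eps,
                           observed=data.log_radon.values)

    hierarchical_trace = pm.sample(2000, tune=1000)
```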
/opt/miniconda/lib/python2.7/site-packages/pymc3/step_methods/hmc/nuts.py:451: UserWarning: The acceptance probability in chain 1 does not match the target. It is 0.889237663849, but should be close to 0.8. Try to increase the number of tuning steps.
/opt/miniconda/lib/python2.7/site-packages/pymc3/step_methods/hmc/nuts.py:451: UserWarning: The acceptance probability in chain 2 does not match the target. It is 0.636029263947, but should be close to 0.8. Try to increase the number of tuning steps.
/opt/miniconda/lib/python2.7/site-packages/pymc3/step_methods/hmc/nuts.py:467: UserWarning: Chain 2 contains 2 diverging samples after tuning. If increasing `target_accept` does not help try to reparameterize.
/opt/miniconda/lib/python2.7/site-packages/pymc3/step_methods/hmc/nuts.py:467: UserWarning: Chain 3 contains 25 diverging samples after tuning. If increasing `target_accept` does not help try to reparameterize.
The marginal posteriors in the left column are highly informative. mu_a tells us the group mean (log) radon levels. mu_b tells us that having no basement decreases radon levels significantly (no mass above zero). We can also see by looking at the marginals for a that there are quite some differences in radon levels between counties; the different widths are related to how many measurements we have per county: the more, the higher our confidence in that parameter estimate.
After writing this blog post I found out that the chains here (which look worse after I just re-ran them) are not properly converged, you can see that best for `sigma_beta` but also the warnings about "diverging samples" (which are also new in PyMC3). If you want to learn more about the problem and its solution, see my more recent blog post "Why hierarchical models are awesome, tricky, and Bayesian".
To find out which of the models works better we can calculate the Root Mean Square Deviation (RMSD). This posterior predictive check revolves around recreating the data based on the parameters found at different moments in the chain. The recreated or predicted values are subsequently compared to the real data points; the model that predicts data points closer to the original data is considered the better one. Thus, the lower the RMSD the better.
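Concretely, for predicted values $\hat{y}_i$ and observed values $y_i$, $$\mathrm{RMSD} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2},$$ computed here from the posterior predictive samples.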
As can be seen above, the hierarchical model performs a lot better than the non-hierarchical model in predicting the radon values. Following this, we'll plot some examples of counties showing the true radon values, the hierarchical predictions and the non-hierarchical predictions.
N.Tsilevich, A.Vershik, "Quasi-invariance of the gamma process and multiplicative properties of the Poisson-Dirichlet measures"
Quasi-invariance of the gamma process and multiplicative properties of the Poisson-Dirichlet measures.
Abstract. In this paper we describe new fundamental properties of the law $P_\Gamma$ of the classical gamma process and related properties of the Poisson-Dirichlet measures $PD(\theta)$. We prove the quasi-invariance of the measure $P_\Gamma$ with respect to an infinite-dimensional multiplicative group (the fact first discovered in ) and the Markov-Krein identity as corollaries of the formula for the Laplace transform of $P_\Gamma$.
The quasi-invariance of the measure $P_\Gamma$ allows us to obtain new quasi-invariance properties of the measure $PD(\theta)$. The corresponding invariance properties hold for sigma-finite analogues of $P_\Gamma$ and $PD(\theta)$. We also show that the measure $P_\Gamma$ can be considered as a limit of measures corresponding to the $\alpha$-stable Levy processes when parameter $\alpha$ tends to zero.
Our approach is based on simultaneously considering the gamma process (especially its Laplace transform) and its simplicial part - the Poisson-Dirichlet measures. | CommonCrawl