An optical study of the HII region populations evident in three galaxies, M101, M51 and NGC 4449, has been made. Using narrow-band filters, emission line imagery has been taken using a CCD focal-reducing camera, at wavelengths covering the emission from H $\alpha,$ H $\beta,$ (O III) $\lambda$5007 and (S II) $\lambda\lambda$6716+6731. Using several identification techniques to spatially select the HII regions, emission line properties have been derived for 625 HII regions in M101, 465 in M51 and 163 in NGC 4449, making this the most complete study of its kind to date. Several trends have been discovered concerning the properties of the HII regions with radial position within their galaxy. M101 exhibits a large gradient in excitation, and oxygen abundance, as well as a gradient in the line-of-sight reddening. No positional variation in the derived ionization parameter for each HII region was found. Local variations in the effective collapse density for neutral gas have been detected for both M101 and M51. No such analysis was possible for NGC 4449 due to a lack of available data. M51 shows systematic emission variations only in the brightest cores of its largest HII regions, an effect attributed to a larger influence of the local ISM on the properties of the fainter, and more obscured, HII regions. M51 exhibits a spiral pattern that does not follow a single mathematical description, departing most dramatically at the corotation radius. A variation in the evolutionary time from peak local compression to peak star formation with radius has been detected for one of the arms in the galaxy, but not the other. NGC 4449 displays no systematic variations in the derived emission properties of its HII region population. This is attributed to a star formation mechanism that is independent of the radial ordinate, contrasting with the spiral density wave mechanism dominant in spiral galaxies. Unprecedented deep CCD imagery of this galaxy is presented, revealing the complicated structure of ionized filaments between the HII regions. The emission properties of these filaments are studied. Scowen, Paul Andrew. "A study of the H II region populations of M101, M51 and NGC 4449." (1992) Diss., Rice University. https://hdl.handle.net/1911/16665.
CommonCrawl
How to compute the Lévy path integral with zero potential? where $h$ is the Planck constant. It is known that with the Feynman functional measure (generated by the Brownian motion process) and zero potential ($V(x)=0$) the amplitude can be computed exactly, but what happens in the non-Gaussian case? In the paper of Prof. Nikolai Laskin it is said that the amplitude can be computed with the measure generated by $\alpha$-stable Lévy motion ($1<\alpha<2$), but in this case the probability density function is very different from the Brownian one, so how can the amplitude be computed? It turns out that for the Lévy path integral, the calculation of the amplitude of a quantum particle uses the Fourier transform of the probability density function, since in this representation the path integral becomes an integral of an exponential function, which is easy to compute.
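For concreteness, the free-particle ($V=0$) kernel in Laskin's fractional quantum mechanics is usually written in momentum space, which is the "Fourier representation" alluded to above. The following is sketched from the standard form of the theory, so the normalisation conventions may differ from the original post:

$$K_L(x_b t_b \mid x_a t_a) \;=\; \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty} dp\, \exp\!\left[\frac{i}{\hbar}\,p\,(x_b-x_a) \;-\; \frac{i}{\hbar}\,D_\alpha |p|^{\alpha}\,(t_b-t_a)\right],$$

where $D_\alpha$ is the generalised diffusion coefficient of the $\alpha$-stable process. For $\alpha=2$ and $D_2 = 1/2m$ this reduces to the familiar Gaussian free-particle propagator.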
CommonCrawl
This is the second in a series of posts in which I explore concepts in Andrew Ng's Introduction to Machine Learning course on Coursera. In each, I'm implementing a machine learning algorithm in Python: first using standard Python data science and numerical libraries, and then with TensorFlow. Logistic regression is similar to linear regression, but instead of predicting a continuous output, classifies training examples by a set of categories or labels. For example, linear regression on a set of social and economic data might be used to predict a person's income, but logistic regression could be used to predict whether that person was married, had children, or had ever been arrested. In a basic sense, logistic regression only answers questions that have yes / no answers, or questions that can be answered with a 1 or 0. However, it can easily be extended to problems where there are a larger set of categories. Here, I'm using the Wine dataset from UCI. It maps thirteen continuous variables representing chemical contents of a wine to three labels, each a different winery in Italy. Initially, I'm only using two features from the data set: alcohol and ash. The labels are supplied as an array of data with values from 1 to 3, but at first, I only want a simple regression problem with a yes or no answer. To do this, I first filter the data set, reducing it to only include wines with labels 1 or 2. Then, I use the scikit-learn label_binarize function, which takes an $m$-length list with $n$ possible values (two, in this case), and converts it to an $m \times n$ matrix, where each column represents one label with a value of 1, and all others with a value of 0. I choose the first column, though the second would be equally valid here, just with the labels reversed. I've provided a small example of label_binarize below, shuffling the whole input data set first (the examples are sorted by winery), and then selecting the first ten. I also split the data into training and testing sets before going further. A simple way to do this is with the train_test_split function from scikit-learn, which allows me to specify a percentage (here 25%) to sample randomly from the data set and partition away for testing. Because I'm going to be drawing a lot of data plots, I define a function that takes an $n \times 2$ array of data points xy, and an $n \times 1$ array labels to vary the symbol and color for each point. This function supports three distinct labels, sufficient for this data set. There's a fairly obvious area near the center of the plot where a line could divide the two colors of points with a small amount of error. To implement logistic regression, I need a hypothesis function $h_\theta(x)$, a cost function $J(\theta)$, and a gradient function that computes the partial derivatives of $J(\theta)$. It's worth noting the treatment of y and theta above. In each function, I explicitly convert each to an $n$ or $m \times 1$ ndarray, so the matrix operations work correctly. An alternative is to use a numpy matrix, which has stricter linear algebra semantics and treats 1-dimensional matrices more like column vectors. However, I found that it was awkward to get the matrix interface to work correctly with both the optimization function used below, and with TensorFlow. The indexing syntax can be thought of as explicitly columnizing the array of parameters or labels. Instead of manually writing a gradient descent, I use an optimization algorithm from Scipy called fmin_tnc to perform it. 
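The code blocks from the original notebook are not reproduced in this copy of the post, so here is a rough reconstruction of the hypothesis, cost, gradient and train functions described above. The function names follow the prose; the details are my own and may differ from the notebook:

```python
import numpy as np
from scipy.optimize import fmin_tnc

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, X):
    # h_theta(x) = sigmoid(X . theta)
    return sigmoid(X @ theta)

def cost(theta, X, y):
    # Negative log-likelihood, averaged over the m training examples
    m = len(y)
    h = hypothesis(theta.reshape(-1, 1), X)
    y = y.reshape(-1, 1)
    return float(-(y * np.log(h) + (1 - y) * np.log(1 - h)).sum() / m)

def gradient(theta, X, y):
    # Partial derivatives of the cost with respect to each theta_j
    m = len(y)
    h = hypothesis(theta.reshape(-1, 1), X)
    return ((X.T @ (h - y.reshape(-1, 1))) / m).ravel()

def train(X, y):
    # Prepend a column of 1s for the bias term theta_0, then minimise
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    theta0 = np.zeros(Xb.shape[1])
    result = fmin_tnc(func=cost, x0=theta0, fprime=gradient, args=(Xb, y))
    return result[0]  # fmin_tnc returns (theta, nfeval, rc)
```

The explicit reshaping inside cost and gradient mirrors the note above about columnizing y and theta so that the matrix operations behave as expected.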
This function takes as parameters the cost function, an initial set of parameters for $\theta$, the gradient function, and a tuple of args to pass to each. I define a train function that prepends a column of 1s to the training data (allowing for a bias parameter $\theta_0$), runs the minimization function and returns the first of its return values, the final parameters for $\theta$. I can evaluate the results of the optimization visually and statistically, but I also need one more function: predict, which takes an array of examples X and learned parameter values theta as inputs and returns the predicted label for each. Here too, 1s must be prepended to the inputs, and I return an integer value representing whether the result of the sigmoid hypothesis function is equal to or greater than 0.5. To test the results of those predictions, Scikit-learn provides three functions to calculate accuracy, precision and recall. The test data from earlier is used here, so the results represent the performance of the classifier on unseen data. It's much more interesting to review the results visually, at least while the number of features is limited to two. To do this, I need to plot the input points again, then overlay the decision boundary on top. I tried several approaches for this in Matplotlib, and found that an unfilled contour plot gave me the best results. This can also be done by manually calculating the function to plot, or using a filled contour plot that shades over the actual areas, but doing the math by hand is tedious, and the colormaps for filled contour plots leave a lot to be desired visually. Below, I define a function plot_boundary that takes an $n \times 2$ matrix of feature values $(x_0, x_1)$ and a prediction function, then builds a mesh grid of $(x, y)$ points corresponding to possible $(x_0, x_1)$ values within the input range. After running the prediction function on all of them, the result is an $(x, y, z)$ point in space. Because the result isn't continuous and flips directly from 0 to 1, there's only one contour that can be drawn on the plot: the decision boundary. With the basics working, the next step is something more interesting: a similar set of two features from the data set (this time alcohol and flavanoids), but with all three labels instead of two. The only differences below in loading the data are that I no longer filter out rows with the third label, and that I use the full output from label_binarize, resulting in an $m \times 3$ array for y. The plotted data points again suggest some obvious linear boundaries between the wines. It turns out that solving this as three one-vs-all problems is trivial, and re-uses all the code I just wrote. Instead of one array of theta values I train three, one per problem, and then define a new predict_multi function that computes the three sigmoids for each example using each array of theta parameters. This time, rather than return 1 or 0 based on whether the value is above or below 0.5, I return the argmax of each resulting row, the index of the largest value. Looking at the plot above, it seems like the boundaries could be much more accurate if they didn't have to be straight lines. To allow for this, I define a function transform to add some polynomial features, converting each input example of $(x_0, x_1)$ to $(x_0, x_1, x_2, x_3, x_4)$, where $x_2 = x_0^2$, $x_3 = x_1^2$ and $x_4 = x_0x_1$. Next, I want to include all the features from the data set.
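Likewise, the prediction helpers and the polynomial transform described in this section could be sketched as follows (again my reconstruction, not the notebook's exact code):

```python
import numpy as np

def add_ones(X):
    # Prepend the column of 1s that pairs with the bias parameter theta_0
    return np.hstack([np.ones((X.shape[0], 1)), X])

def predict(X, theta):
    # sigmoid(X . theta) >= 0.5 exactly when X . theta >= 0
    return (add_ones(X) @ theta >= 0).astype(int)

def predict_multi(X, thetas):
    # One-vs-all: one column of scores per label, take the argmax per row
    # (the sigmoid is monotone, so the argmax of the linear scores matches
    #  the argmax of the sigmoids)
    scores = np.column_stack([add_ones(X) @ theta for theta in thetas])
    return scores.argmax(axis=1)

def transform(X):
    # (x0, x1) -> (x0, x1, x0^2, x1^2, x0*x1) for a non-linear boundary
    x0, x1 = X[:, 0], X[:, 1]
    return np.column_stack([x0, x1, x0 ** 2, x1 ** 2, x0 * x1])
```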
To include all of the features, instead of specifying which columns I want, I use drop to include everything except the class column. Because I'm now significantly increasing the number of features, I apply regularization as part of new cost and gradient functions. Regularization prevents overfitting, a situation where a large number of features allows the classifier to fit the training set too exactly, meaning that it fails to generalize well and perform accurately on data it hasn't yet seen. Below, I define the new cost and gradient functions, as well as a new function to train the classifier, given the addition of a new parameter l, for $\lambda$. This parameter can be adjusted to change the effect of regularization; here I'm just using 1.0. In each case, I ensure that $\theta_0$ isn't regularized by creating a temporary theta_reg, starting with a zero followed by elements one and onward from theta. In this last section, I implement logistic regression using TensorFlow and test the model using the same data set. TensorFlow allows for a significantly more compact and higher-level representation of the problem as a computational graph, resulting in less code and faster development of models. One item definitely worth calling out is the use of the AdamOptimizer instead of the GradientDescentOptimizer from the previous post. Although the latter can still be used here, I found it a poor fit for two reasons: it is very sensitive to the learning rate and lambda parameters, and it converges extremely slowly. Correct convergence required a very low learning rate (around 0.001 at most), and the cost could still be seen decreasing after more than 300,000 iterations, with a curve that appeared linear after the first thousand. Poor tuning resulted in the optimizer spinning out of control and emitting nan values for all the parameters. Using a different optimizer helped tremendously, especially one that is adaptive. It converges significantly faster and requires much less hand-holding to do so. Even then, these graphs typically take 25x the time to converge properly compared to the manual implementation above, and I'm not sure why this is the case. Since TensorFlow does the calculus itself to find the gradient, it could be that this is the result of some issue or lack of optimization. On the other hand, given that the platform is designed to distribute computations and scale to significantly larger data sets, this could be some overhead that is quite reasonable in those scenarios but is felt heavily in a small demonstration with a tiny number of examples. I also adjusted all placeholders and variables to tf.float64, to avoid any issues with numerical precision. After this and the adaptive optimizer, the results improved dramatically. Because I want to build a few different graphs, I define a function that builds one given a few parameters: the number of features, the number of labels, and a lambda value for regularization. This function tf_create builds a graph, and returns two functions itself: one to train the algorithm by running the optimizer, and another to predict labels for new values. To compute the loss for regularization, I use the built-in tf.nn.l2_loss function, which is equivalent to the regularization loss I computed manually before. First, I evaluate the model against the 2-feature, 3-label example from above. Next, I use the transform function to apply additional polynomial features to the dataset, allowing for a non-linear decision boundary.
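A rough sketch of what the tf_create function described above might look like against the TensorFlow 1.x API follows. This is my approximation of the structure; the variable layout, the scaling of the L2 penalty and the learning rate are assumptions, not the post's exact choices:

```python
import tensorflow as tf  # written against the TensorFlow 1.x API

def tf_create(num_features, num_labels, lam=1.0, learning_rate=0.01):
    graph = tf.Graph()
    with graph.as_default():
        X = tf.placeholder(tf.float64, shape=(None, num_features))
        y = tf.placeholder(tf.float64, shape=(None, num_labels))
        W = tf.Variable(tf.zeros((num_features, num_labels), dtype=tf.float64))
        b = tf.Variable(tf.zeros((num_labels,), dtype=tf.float64))

        logits = tf.matmul(X, W) + b
        # Cross-entropy loss plus an L2 penalty on the weights (not the bias)
        loss = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
        loss += lam * tf.nn.l2_loss(W) / tf.cast(tf.shape(X)[0], tf.float64)
        train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)
        predictions = tf.argmax(tf.sigmoid(logits), axis=1)
        init = tf.global_variables_initializer()

    sess = tf.Session(graph=graph)
    sess.run(init)

    def train(X_data, y_data, steps=5000):
        for _ in range(steps):
            sess.run(train_op, feed_dict={X: X_data, y: y_data})

    def predict(X_data):
        return sess.run(predictions, feed_dict={X: X_data})

    return train, predict
```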
Finally, I include all the features from the data set, all the labels, and apply a small amount of regularization. You can find the IPython notebook for this post on GitHub.
CommonCrawl
Can anyone help me find articles or other information on the topic of "thermal superconductivity"? I am thinking not of the thermal effects that accompany electrical superconductivity, but of thermal superconductivity as an independent phenomenon, realized in ultra-pure materials (materials that contain a negligible number of inclusions and defects). I need information from specialists in the subject. Maybe someone works on thermal superconductivity, for example on a model of a one-dimensional crystal, or has already studied the thermomechanical processes in such systems (infinite thermal conductivity, which can be interpreted as superconductivity). In fact, the distribution of heat in these systems occurs at a rate close to the speed of sound. These are super-pure materials.

As tmschaefer explained, you won't get truly infinite thermal conductivity. But one can apparently achieve it in the limit of infinite-length nanotubes; see my answer below.

I think we should be talking quantum mechanically. Perhaps about materials whose Lieb-Robinson velocity is the speed of light. Are there such materials?

I am searching for experimental work on the breakdown of Fourier's law in 1D structures, like "Breakdown of Fourier's Law in Nanotube Thermal Conductors" by A. Zettl et al. Maybe you know of work like Zettl's?

I am not exactly sure if I understand your question correctly. I will assume that by thermal superconductivity you mean perfect conduction of heat. This is indeed an interesting question: are there systems, analogous to superfluids and superconductors (perfect conductors of particle number and charge), that are perfect conductors of heat? I think the answer is no. In superfluids and superconductors there is a spontaneously broken symmetry, and an associated Goldstone boson $\varphi$. Gradients of $\varphi$ define a superfluid velocity that enters the hydrodynamic description and describes transport without dissipation. In a superfluid, $v_s=\nabla\varphi/m$ is the superfluid velocity. The equation of motion $\partial_t v_s =-\nabla\mu+\ldots$ shows that an arbitrarily small gradient of the chemical potential will drive a flow. Gradients of $v_s$ do not contribute to the stress tensor, so there is no dissipation. In a superconductor we have the London current $j_s=e^2n/(mc)(\nabla\varphi-A)$, and the equation of motion is $\partial_t(\nabla\varphi)=-\nabla V+\ldots$. The London current does not contribute to Ohmic heating. So can we find a Goldstone boson that contributes to the energy current, and has an equation of motion of the form $\partial_t(\nabla\varphi)=-\nabla T+\ldots$? I see several difficulties with this: the symmetry would have to be time reparametrization invariance (or Lorentz boosts in the relativistic case). Also, $\nabla T$ does not appear in the effective Lagrangian (which is a $T=0$ object); it would have to arise from integrating out thermal fluctuations (but that will usually lead to dissipative terms, because of fluctuation-dissipation relations). This is not a proof, of course, but at least it shows that it is not obvious how to obtain perfect conductors of heat.

1) In a typical material small gradients of T do not lead to convection; the heat current is diffusive and proportional to $-\kappa\nabla T$. This is how we measure $\kappa$. In superfluids (like liquid He) small gradients of T drive a new type of convection, involving a flow of the normal component balanced by a backflow of the superfluid.
The energy current is not proportional to $-\kappa\nabla T$, so the thermal conductivity appears to be very large. Helium does, however, have a finite thermal conductivity, which can be measured by attenuation of first and second sound.

2) There are systems in which the mean free path (of electrons or phonons or other carriers) is larger than the system size. The usual examples are nanowires or carbon nanotubes. Transport is ballistic, not diffusive, and a naive measurement of thermal conductivity will give a very large $\kappa$. There is a large amount of literature on this; look for the Landauer formula or the Landauer-Büttiker formalism.

I have heard several times of the concept of "thermal superconductivity" (as opposed to "electrical superconductivity"). I am searching for articles about it, in particular about the breakdown of Fourier's law in 1D structures (nanotubes, etc.).

A reference would indeed be useful. Fourier's law is often violated in low-dimensional (d<2) systems (see here for a review). This has nothing to do with superconductivity, but is a consequence of non-ergodicity, fluctuations, etc. The following papers discuss indications that in the limit of infinite length of a nanotube, the thermal conductivity may diverge.

Thank you (tmschaefer and Arnold Neumaier) very much. I am especially looking for experimental work on the breakdown of Fourier's law in 1D structures (nanotubes, etc.), like "Breakdown of Fourier's Law in Nanotube Thermal Conductors" by C. W. Chang, D. Okawa, H. Garcia, A. Majumdar, and A. Zettl. Maybe you can help me find similar articles.

@sashavak: I can't do the literature search for you. Please go to scholar.google.com and enter the title of the above paper. You'll get information about the paper and a list of over 200 papers citing it. Go through these to find out what you need. Such searches are the standard work of all scientists who want to enter a field that is new to them, so you had better learn to do it now.

Thank you for the answer. But I do not mean scientific articles that reference Zettl's article; I mean experimental articles that show a violation of Fourier's law for 1D structures. These articles do not necessarily cite Zettl's article. Maybe you know of such articles.

@sashavak: You can find these papers, if they exist, by looking at promising papers that cite the paper you know and looking into their references. Then repeat the whole procedure with some of the titles (you can also enter authors) that look most promising, and iterate the process. After a while you will find in this way everything that has been done in the field, and in particular the things you were looking for. Finding the right papers to read is one of the harder parts of research work. You cannot expect anyone else to do this work for you. Moreover, the work spent searching is not in vain, as along the way you get to know a lot of other information that is relevant for your research, especially in the long run.
CommonCrawl
In the talk we will focus mainly on the resonance condition and resonance asymptotics on quantum graphs. We are interested in the number of resolvent resonances enclosed in the circle of radius $R$ in the $k$-plane in the limit $R\to \infty$. In some cases the leading term of the asymptotics is smaller than expected from Weyl asymptotics. We will recall the main results for these non-Weyl graphs. Using the method of pseudo-orbit expansion we will construct the resonance condition and find an expression for the effective size of the graph, which is proportional to the coefficient of the leading term of the asymptotics. The main results are bounds on the effective size.
CommonCrawl
New EFTs and bases can be defined by placing the definition files into the public repository (http://github.com/wcxf/wcxf-bases). This page lists the currently defined EFTs and bases. For each basis, a PDF file listing all the operators is linked. Standard Model Effective Field Theory with linearly realized electroweak symmetry breaking. Basis suggested by Grzadkowski, Iskrzyński, Misiak, and Rosiek (arXiv:1008.4884v3). At variance with their definition, the Wilson coefficients are defined to be dimensionful, such that $\mathcal L=\sum _i C_i O_i$. The set of redundant operators coincides with the choice of DSixTools (arXiv:1704.04504). The weak basis for the fermion fields is chosen such that the running dimension-6 mass matrices of charged leptons and down-type quarks are diagonal at the scale where the coefficient values are specified, while the up-type quark singlet field is rotated to diagonalise the running dimension-6 up-type quark mass matrix "from the right". Variant of the Warsaw basis where the up-type quark mass matrix (rather than the down-type quark one) is diagonal. Variant of the Warsaw basis where all fermion fields are rotated such as to make their mass matrices diagonal. This rotation breaks $SU(2)_L$ invariance and is ambiguous for some operators. We adhere to the choice of arXiv:1704.03888 by Dedes, Materkowska, Paraskevas, Rosiek, and Suxho, which coincides with the "tilded" basis in arXiv:1512.02830 by Aebischer, Crivellin, Fael, and Greub. Weak effective theory below the electroweak scale with five dynamical quark flavours. Basis suggested by Jenkins, Manohar, and Stoffer (arXiv:1709.04486). Currently only includes baryon and lepton number conserving operators. Neutrinos are in the flavour basis. Basis suggested by Aebischer, Fael, Greub, and Virto (arXiv:1704.06639). Neutrinos are in the flavour basis. Basis used by the flavio package. Neutrinos are in the flavour basis. Weak effective theory with the dynamical up and down quarks and the electron, valid below the strange quark mass scale. Variant of the basis suggested by Jenkins, Manohar, and Stoffer (arXiv:1709.04486) with only two dynamical quark flavours. Weak effective theory with three dynamical quark flavours and two charged lepton flavours, valid between the strange and charm quark mass scales. Variant of the basis suggested by Jenkins, Manohar, and Stoffer (arXiv:1709.04486) with only three dynamical quark flavours. Weak effective theory with four dynamical quark flavours, valid between the charm and bottom quark mass scales. Variant of the basis suggested by Jenkins, Manohar, and Stoffer (arXiv:1709.04486) with only four dynamical quark flavours.
CommonCrawl
Abstract: In this paper, a compact design of a balanced $1 \times 4$ optical power splitter (OPS) based on coupled mode theory (CMT) is presented. The design consists of seven vertically slotted waveguides on the silicon-on-insulator platform. The $1 \times 4$ OPS is modelled using the commercial finite element method (FEM) simulation tool COMSOL Multiphysics 5.1. The optimized OPS is capable of working across the whole C-band with a maximum of $\sim 39\%$ power decay in the wavelength range 1530-1565 nm. Keywords: slot waveguides, 1x4 power splitter, SOI, C-band, finite element method.
CommonCrawl
Abstract: The extraction of the weak phase $\alpha$ from $B\to\pi\pi$ decays has been controversial from a statistical point of view, as the frequentist vs. bayesian confrontation shows. We analyse several relevant questions which have not deserved full attention and pervade the extraction of $\alpha$. Reparametrization Invariance proves appropriate to understand those issues. We show that some Standard Model inspired parametrizations can be senseless or inadequate if they go beyond the minimal Gronau and London assumptions: the single weak phase $\alpha$ just in the $\Delta I=3/2$ amplitudes, the isospin relations and experimental data. Beside those analyses, we extract $\alpha$ through the use of several adequate parametrizations, showing that there is no relevant discrepancy between frequentist and bayesian results. The most relevant information, in terms of $\alpha$, is the exclusion of values around $\alpha\sim \pi/4$; this result is valid in the presence of arbitrary New Physics contributions to the $\Delta I=1/2$ piece.
CommonCrawl
Calculates the scattering from a cylinder with spherical section end-caps. Like `barbell`, this is a spherocylinder with end caps that have a radius larger than that of the cylinder, but with the center of the end-cap radius lying within the cylinder. This model simply becomes a convex lens when the length of the cylinder $L=0$. See the diagram for the details of the geometry and restrictions on parameter values. The $\left<\ldots\right>$ brackets denote an average of the structure over all orientations. $\left< A^2(q)\right>$ is then the form factor, $P(q)$. The scale factor is equivalent to the volume fraction of cylinders, each of volume $V$. Contrast $\Delta\rho$ is the difference of scattering length densities of the cylinder and the surrounding solvent. The requirement that $R \geq r$ is not enforced in the model! It is up to you to restrict this during analysis. The 2D scattering intensity is calculated similarly to the 2D cylinder model. Definition of the angles for oriented 2D cylinders.
CommonCrawl
Abstract: One of the most striking results from the Relativistic Heavy Ion Collider is the strong elliptic flow. This review summarizes what is observed and how these results are combined with reasonable theoretical assumptions to estimate the shear viscosity of QCD near the phase transition. A data comparison with viscous hydrodynamics and kinetic theory calculations indicates that the shear viscosity to entropy ratio is surprisingly small, $\eta/s < 0.4$. The preferred range is $\eta/s \simeq (1\leftrightarrow 3) \times 1/4\pi$.
CommonCrawl
Will 4 always be the scaling factor in the HLEN field?

I have just one doubt: do we then continue with the process? So here we assume header length of the IP datagram + header length of TCP = 10 B?

Why are we talking about sequence numbers in IPv4? The number of data bytes of the IPv4 packet ahead of any fragment, divided by 8, is the fragmentation offset. But why are we treating them as sequence numbers?

HLEN = 10 means the IP header is 40 bytes. How are you dividing it into a 20-byte TCP header and a 20-byte IP header?

$M=0$ means there are no more fragments after this one; hence it is the last fragment. IHL = internet header length = $10 \times 4 = 40$ B, because $4$ is the scaling factor for this field. Fragment offset = $300 \times 8 = 2400$ B, which represents how many bytes come before this fragment; $8$ is the scaling factor here.

You have assumed that the sequence number starts from 0 for the first byte of the first fragment, but as far as I know the sequence number is allotted randomly. So we can only be sure that the difference between the sequence numbers of the first and last bytes must be 359. We can't say for sure that it must start at 2400. Please reply if I have got this wrong.

@amar, you are saying that 2400 represents how many bytes come before this fragment; then how does it become the first byte? 2400 is the first byte of the last packet.

@Nitesh Tripathi, what you mention is the case at the transport layer. But the sequence number is given to every byte of the data, which doesn't include the TCP header bytes, does it?

The offset given is 300, which means $300 \times 8 = 2400$ bytes are ahead of that packet, and in one packet we can store 360 bytes after removing the 40-byte header, as the total length is 400 bytes. But 2400 is not a multiple of 360, so how can we divide the packets? And why is 8 the scaling factor here?

Here the sequence numbers are considered for the IP payload, which contains the TCP header. We should not count sequence numbers for the TCP header, right, since sequence numbers are given to the data part only?

For those who are getting confused between HLEN and the size of the header, here is the answer. These are two different things. HLEN is a field in the IP header. To find the size of the header from HLEN, multiply it by 4, which is the scaling factor.

@arjun sir, @pooja, could you please tell me why the payload takes the initial bytes instead of the header? The header should take the first 40 bytes here, then the payload. Are we assuming this according to the options given?

I think the range of sequence numbers must cover 360 bytes. For the last fragmented packet the offset range would be [300, 344], so the offset field contains the number 300. Only option C satisfies both the condition of being the last fragment (MF = 0) and a total of 360 bytes ($400 - 10 \times 4$ bytes), i.e. the possible sequence numbers are [2400, 2759]. That's why the answer is option C. I don't think the assumption of an initial sequence number of zero is always valid, though; correct me if I am wrong. What if option C were given as: last fragment, 5000 and 5359? Would that be correct too? A similar problem is given in Forouzan's book.

Since M = 0, it is an indication of the last fragment.

I think this is a silly doubt but it should be cleared up: can you please tell me why we are not taking the header length equal to 10, since it is given in the question, and hence getting $10 \times 4 = 40$ bytes?
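A quick way to check the arithmetic discussed in this thread, using only the numbers quoted above:

```python
# Values quoted in the thread: HLEN = 10, total length = 400,
# fragment offset field = 300, MF (more fragments) = 0.
HLEN = 10           # header length field, in 4-byte words
TOTAL_LENGTH = 400  # total datagram length in bytes
OFFSET_FIELD = 300  # fragment offset field, in 8-byte units
MF = 0              # 0 means this is the last fragment

header_bytes = HLEN * 4                   # 40 bytes of IP header
data_bytes = TOTAL_LENGTH - header_bytes  # 360 bytes of payload
first_byte = OFFSET_FIELD * 8             # 2400 payload bytes precede this fragment
last_byte = first_byte + data_bytes - 1   # 2759

print(header_bytes, data_bytes, first_byte, last_byte)  # 40 360 2400 2759
print("last fragment" if MF == 0 else "more fragments follow")
```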
CommonCrawl
$\theta$ and $\alpha$ are angles of two different triangles, and $n$ is a real number. The sine of the angle $\theta$ is equal to the product of $n$ and the sine of the sum of the angles $\theta$ and $2\alpha$, that is, $\sin\theta = n\sin(\theta+2\alpha)$. On the basis of this equation, the value of $\tan(\theta +\alpha) \cot \alpha$ is to be found trigonometrically. The equation cannot be transformed directly into the required form, but it becomes possible if the angles $\theta$ and $\theta + 2\alpha$ are both added and subtracted. Writing the equation as a ratio of sine functions, it can be expressed as a ratio of a sum to a difference by using the componendo and dividendo rule, which lets us add and subtract the angles in the next few steps. The two sine functions with different angles are added in the numerator; their sum can be expressed in product form by the sum-to-product identity for sine functions. Similarly, the same two sine functions are subtracted in the denominator; their difference can also be expressed in product form by the corresponding sum-to-product identity. The cosine function in the numerator and the sine function in the denominator then each carry a negative angle; they can be written in normal form using the identities for the trigonometric functions of allied angles. Next, split the fraction into two multiplying factors according to the angles of the functions. By the quotient identities of the trigonometric functions, the quotient of the sine of an angle by its cosine is the tangent, and the quotient of the cosine of an angle by its sine is the cotangent. This gives the required solution to the trigonometry problem, which can also be written as follows.
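Since the worked equations did not survive in this copy of the page, here is the derivation the paragraph describes, reconstructed step by step:

$$\sin\theta = n\,\sin(\theta+2\alpha) \;\Longrightarrow\; \frac{\sin\theta}{\sin(\theta+2\alpha)} = \frac{n}{1}.$$

By componendo and dividendo,

$$\frac{\sin\theta+\sin(\theta+2\alpha)}{\sin\theta-\sin(\theta+2\alpha)} = \frac{n+1}{n-1}.$$

Applying the sum-to-product identities and then the allied-angle identities $\cos(-\alpha)=\cos\alpha$, $\sin(-\alpha)=-\sin\alpha$,

$$\frac{2\sin(\theta+\alpha)\cos(-\alpha)}{2\cos(\theta+\alpha)\sin(-\alpha)} = \frac{n+1}{n-1} \;\Longrightarrow\; -\tan(\theta+\alpha)\cot\alpha = \frac{n+1}{n-1},$$

so that

$$\tan(\theta+\alpha)\cot\alpha = \frac{1+n}{1-n}.$$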
CommonCrawl
When considering the asymptotic continuity of quantum states, one works with asymptotically continuous functions, where $n$ ranges over the natural numbers, $\rho_n$ and $\sigma_n$ are states in a sequence indexed by $n$, and anything I've missed has the usual definition. My question is: what is the point of $H_n$ with $\dim H_n\rightarrow\infty$? Why not simply say that when the states in the sequences get arbitrarily close, so does the difference between the values of the asymptotically continuous functions on them? I intend to abstract this definition, and it seems to work fine; however, the existence of this dimension term in the denominator for the particular case of vector spaces worries me.
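For reference, the definition being alluded to (which did not survive in this copy) is usually stated along the following lines; this is the standard form from the entanglement-measure literature, so the normalisation may differ slightly from the original post. A sequence of functions $f_n$ on states of Hilbert spaces $H_n$ is called asymptotically continuous if

$$\|\rho_n-\sigma_n\|_1 \xrightarrow{\;n\to\infty\;} 0 \quad\Longrightarrow\quad \frac{\left|f_n(\rho_n)-f_n(\sigma_n)\right|}{1+\log \dim H_n} \xrightarrow{\;n\to\infty\;} 0,$$

which is the "dimension term on the bottom" referred to in the question.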
CommonCrawl
Abstract: The dimensional-deconstruction prescription of Arkani-Hamed, Cohen, Kaplan, Karch and Motl provides a mechanism for recovering the $A$-type (2,0) theories on $T^2$, starting from a four-dimensional $\mathcal N=2$ circular-quiver theory. We put this conjecture to the test using two exact-counting arguments: In the decompactification limit, we compare the Higgs-branch Hilbert series of the 4D $\mathcal N=2$ quiver to the "half-BPS" limit of the (2,0) superconformal index. We also compare the full partition function for the 4D quiver on $S^4$ to the (2,0) partition function on $S^4 \times T^2$. In both cases we find exact agreement. The partition function calculation sets up a dictionary between exact results in 4D and 6D.
CommonCrawl
This is a quote from the 1986 movie, "True Stories", and it's true; well, almost true. You could buy four packs of $10$ hotdogs and five packs of $8$ buns. That would give you exactly $40$ of each. However, you can make things even with fewer packs if you buy two packs of $10$ hotdogs, along with a pack of $8$ buns and another pack of $12$ buns. That would give you $20$ of each, using only $4$ total packs. For this problem, you'll determine the fewest packs you need to buy to make hotdogs and buns come out even, given a selection of different bun and hotdog packs available for purchase. The first input line starts with an integer, $H$, the number of hotdog packs available. This is followed by $H$ integers, $h_1 \ldots h_ H$, the number of hotdogs in each pack. The second input line starts with an integer, $B$, giving the number of bun packs available. This is followed by $B$ integers, $b_1 \ldots b_ B$, indicating the number of buns in each pack. The values $H$ and $B$ are between $0$ and $100$, inclusive, and the sizes of the packs are between $1$ and $1\, 000$, inclusive. Every available pack is listed individually. For example, if there were five eight-bun packs available for purchase, the list of bun packs would contain five copies of the number eight. If it's not possible to purchase an equal number of one or more hotdogs and buns, just output "impossible". Otherwise, output the smallest number of total packs you can buy (counting both hotdog and bun packs) to get exactly the same number of hotdogs and buns.
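The problem statement does not come with a solution, but one straightforward approach is a 0/1-knapsack-style dynamic program over the difference between the number of hotdogs and buns bought. The sketch below is an illustration of that idea, not an official solution:

```python
import sys

def min_packs(hotdog_packs, bun_packs):
    # dp maps (hotdogs bought - buns bought) -> fewest packs achieving it,
    # over selections of the packs processed so far (each used at most once).
    items = [+h for h in hotdog_packs] + [-b for b in bun_packs]
    dp = {0: 0}                      # the empty selection
    best = float("inf")
    for w in items:
        new_dp = dict(dp)
        for diff, packs in dp.items():
            nd, cand = diff + w, packs + 1
            if nd == 0:              # equal totals using at least one pack
                best = min(best, cand)
            if cand < new_dp.get(nd, float("inf")):
                new_dp[nd] = cand
        dp = new_dp
    return best

def main():
    data = sys.stdin.read().split()
    h = int(data[0])
    hotdogs = [int(x) for x in data[1:1 + h]]
    b = int(data[1 + h])
    buns = [int(x) for x in data[2 + h:2 + h + b]]
    best = min_packs(hotdogs, buns)
    print(best if best != float("inf") else "impossible")

if __name__ == "__main__":
    main()
```

With the given limits (at most 200 packs of size at most 1000) the difference is bounded by plus or minus 100,000, so the table stays manageable for this direct approach, though an array-based table or a compiled language would run faster.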
CommonCrawl
for a spherical cavity of radius $r_c$, $V_c = \tfrac{4}{3}\pi r_c^3$. The resonator behaves like a mass-spring oscillator, with angular resonance frequency $\omega_0=\sqrt{k/m}$, where $m$ is a mass and $k$ a stiffness. The mass is composed of the particles inside the neck, while the much larger number of particles inside the cavity constitutes the spring stiffness. Hypothesis: all dimensions are smaller than the acoustic wavelength. For more accurate results, the radiation effects of the air particles in the neck should be accounted for (usually by adding a length correction to the geometrical length of the neck).
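For completeness, the standard Helmholtz-resonator result that this mass-spring picture leads to (quoted from the usual textbook treatment, with my notation for the neck cross-section $S$ and effective neck length $L'$) is

$$f_0 \;=\; \frac{\omega_0}{2\pi} \;=\; \frac{c}{2\pi}\sqrt{\frac{S}{V_c\,L'}},$$

where $c$ is the speed of sound and $L'$ is the geometrical neck length increased by the end corrections mentioned above.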
CommonCrawl
Gelation of rodlike polymers was studied by a variety of techniques. Fourier transform video microscopy was used to quantitatively study poly(p-phenylenebenzobisthiazole) in sulfuric acid on a preformed gel and as the gelation process occurred. Differential scanning calorimetry was used to determine the extent of the solvent shell surrounding molecules of hydroxypropyl cellulose upon gel formation and melting. Various light scattering experiments were performed on gels made from poly($\gamma$-benzyl-$\alpha$,L-glutamate) in toluene. Some of the analysis is geared towards a percolative type mechanism. The results and some suggestions for further work are presented. Tipton, Deborah L., "Gelatin of Rodlike Polymers." (1995). LSU Historical Dissertations and Theses. 5932.
CommonCrawl
The client has access to the Mathematica files, either locally or from a file server on the network. The license server running MathLM is available on the TCP/IP network. A license server can also function as its own client. However, this is not recommended. If the machine has to be rebooted for any reason, the serving of licenses to all other clients on the network may be disrupted. If you wish to license Mathematica from a MathLM license server, MathLM should already be installed and running on a license server on the network (see "Installing MathLM" for details). To complete the Mathematica installation, you will need to know the name or IP address of the license server running MathLM. To install Mathematica, you must be logged in with administrative privileges, or be able to elevate to administrative privileges. You also must activate Mathematica using the Wolfram User Portal in order to run it (see "Activating Mathematica" for details). One convenient way to install Mathematica on a client is to run the installer remotely from a file server. This is an efficient way of making Mathematica available to a large number of users without having to supply a CD/DVD to each one. You can install Mathematica from a file server on a client running any supported platform. It is not necessary that the client platform be the same as the file server platform. To install Mathematica from a file server, you first must make the installer and Mathematica files available to the clients. You can do this by copying the contents of the CD/DVD to the file server and exporting the directory, or by exporting the CD/DVD mount point on the file server. Then, mount the directory with the Mathematica distribution on the client and change to this directory, and run the installer as usual. Installing Mathematica from a file server requires first copying the installer executable and all files in the Mathematica distribution from the DVD onto the file server. Open the Windows directory from the DVD. Double-click the file Setup.exe to launch the installer and follow the prompts. The main Windows installer includes a custom setup option which allows you to control whether to install secondary components, including support for the Mathematica web browser plugin and components for indexing notebooks on the file system. Mathematica may be installed by dragging the Mathematica application bundle into the Applications folder, as illustrated by the startup window when you insert the DVD. The DVD also includes an installer to install secondary components, including support for the Mathematica web browser plugin and for Spotlight and Quick Look support of Mathematica-created documents. 1. Mount the CD or DVD. For information on mounting a CD/DVD, see "Mounting a CD or DVD on Linux". Note: This step may not be required on most Linux distributions, as most operating systems automatically handle mounting. 2. Change directory to /cdrom/Unix/Installer. Note that the exact location of the CD/DVD mount point might be different for your platform. 3. Run the installer. Default installation under /usr/local requires root privileges. 4. Follow the installer prompts. If you are installing Mathematica on multiple machines, it can be time consuming to respond to all of the installer prompts on each individual machine. By supplying command-line options to the installer, you can customize various features of the installation process or automate it entirely. Mathematica Installer supported command-line options. 
The following instructions explain how to write a simple script to silently install Mathematica from a file server. These instructions require that you have a mathpass file with a valid password. See "Registrations and Passwords" for more information on sitewide mathpass configurations. 1. Follow the instructions in the first part of "Installing Mathematica from a File Server" to copy the installer and files from the DVD to a file server. 2. Copy your mathpass file to the same directory on the file server as the installer and Mathematica files. 3. Open Notepad (Start Menu ▶ Programs ▶ Accessories ▶ Notepad) and type the following lines into a new file. 4. Change all instances of \\server\math to the pathname of the network share where the Mathematica installation files and mathpass file were copied. 5. Change "C:\Directory\Name" to the directory listed here for your version of Windows. Be sure to enclose the name of the directory in quotes. Windows XP—"C:\Documents and Settings\All Users\Application Data\Mathematica\Licensing" Note: These directories are the values of $BaseDirectory for different versions of Windows. See "Configuration Files" for further information. 6. To save the file, choose File ▶ Save. Save the file in the same directory as the Mathematica installation files. Type the file name install.bat and choose All Files from the Save as type popup menu. Click Save, then quit Notepad. 2. The installation is now complete. If you see any messages other than those printed here, check the file C:\Windows\Temp\install.log on the client machine for further information. Installing Mathematica in this way eliminates the need to take the DVD to each client machine, and saves time by allowing you to run a simple script instead of responding to the installer questions. Note: Default values are used for any options that are not specified explicitly on the command line. Valid input for -createdir is y for yes or n for no. By default, this value is set to y. The default directory for -execdir is /usr/local/bin. This option only works with an automatic installation. The values for -method may vary by product. When this option is applicable, the values can be determined by running the installer. The default value for this option is Full. Valid input for -overwrite is y for yes or n for no. By default, this value is set to y. This option only works with an automatic installation. The default for -platforms is the system you are installing on, if that information is available to the installer. This option only works with an automatic installation. Valid input for -selinux is y for yes or n for no. By default, this value is set to n. The option -silent suppresses any output from being displayed on the screen. The output is instead written to a file named InstallerLog-number. If the installation is unsuccessful, the log file is saved in the /tmp directory. Otherwise, the file is moved to the target directory and renamed InstallerLog. The directory specified for -targetdir corresponds to the value of the global variable $InstallationDirectory. The default value is /usr/local/Wolfram/Mathematica/11.2. This option only works with an automatic installation. To complete the installation in one step, run a command like the following. To do the same using the sudo command, you may need to use sudo's -- flag. This allows you to complete the installation automatically in one step, while still being able to customize various details such as the directory to install to. 
You are not prompted to enter your password using this method, so you will need to enter a password the first time Mathematica is launched. If you are doing many installations, you might find it convenient to include the MathInstaller command with all the relevant options in a shell script. Running the shell script is then an easy way to do an identical customized installation on multiple machines. You can further simplify the installation process by including a line in your script that copies an existing mathpass file to the appropriate location on the newly installed machine. Note that MathInstaller must be run from the directory in which it is located, so your script may require a command to change directory. See "Registrations and Passwords" for information on sitewide mathpass configurations.
CommonCrawl
Problem: Let $x_1, x_2$ be the two roots for equation $x^2+x-3=0$, find the value of $x_1^3-4x_2^2+19$. Solution: $x_1,x_2$ are roots of the equation $x^2+x-3$, so $x_1^2+x_1-3=0$ and similarly $x_2^2+x_2-3=0$. Hence, $x_1^2=3-x_1,x_2^2=3-x_2$, and we have $x_1^3-4x_2^2+19=x_1(3-x_1)-4(3-x_2)+19$. Simplifying and substituting again we have $3x_1-x_1^2+4x_2+7=3x_1-(3-x_1)+4x_2+7=4(x_1+x_2)+4$. By Viete's Theorem we have $x_1+x_2=-1$, so $4(x_1+x_2)+4 = 4(-1)+4 = 0$. For some students, a possible approach to this problem is to naively determine the roots of $x^2 + x - 3 = 0$ and plug the values of the roots into $x_1^3-4x_2^2+19$, making the process extremely vulnerable to algebraic mistakes. Because of this, there are various answers that can be given depending on the algebraic mistake that was produced by the student. There is a possibility that the student incorrectly remembers Viete's Formula. They may have forgotten that there is a negative sign for the sum of the two roots and because of this $x_1 + x_2 = -1$. If you have any approach that you would like to share, please post below!
CommonCrawl
31 July, 2018 The slide file was uploaded. law of the iterated logarithm. Journal of Logic and Analysis, Vol 10, pp. 1–13, 2018. We consider the behaviour of Schnorr randomness, a randomness notion weaker than Martin-Löf's, for left-r.e. reals under Solovay reducibility. Contrasting with results on Martin-Löf randomness, we show that Schnorr randomness is not upward closed in the Solovay degrees. Next, some left-r.e. Schnorr random $\alpha$ is the sum of two left-r.e. reals that are far from random. We also show that the left-r.e. reals of effective dimension $>r$, for some rational $r$, form a filter in the Solovay degrees. 1 Mar, 2018 The slide file was uploaded.
CommonCrawl
What is the probability of $\cos(\theta_1) + \cos(\theta_2) + \cos(\theta_1 - \theta_2) + 1 \le 0$? What is the probability of $\cos(\theta_1) + \cos(\theta_2) + \cos(\theta_1 - \theta_2) + 1 \le 0$, given that $\theta_1$ and $\theta_2$ are chosen randomly between $0$ and $2\pi$? Answering this amounts to a regionalization of the domain $(\theta_1,\theta_2) \in [0,2\pi) \times [0,2 \pi)$, materialized by the square ABCD (see graphics below). These boundaries define 6 regions. Which regions are the good ones? Determining the sign of $f(\theta_1,\theta_2)$ for each region can be done by obtaining the sign of each factor; the regions to be selected are those where the product of these 3 signs is negative.
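The three factors referred to in the answer come from a product form of the left-hand side that did not survive in this copy; it is presumably the standard identity

$$\cos\theta_1+\cos\theta_2+\cos(\theta_1-\theta_2)+1 \;=\; 4\,\cos\frac{\theta_1}{2}\,\cos\frac{\theta_2}{2}\,\cos\frac{\theta_1-\theta_2}{2},$$

obtained by pairing $1+\cos(\theta_1-\theta_2)=2\cos^2\frac{\theta_1-\theta_2}{2}$ with the sum-to-product form of $\cos\theta_1+\cos\theta_2$. The inequality then holds exactly where an odd number of the three cosine factors are negative.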
CommonCrawl
Q1. Is every subspace $Y\subset \ell_\infty^c(X)$ that is isomorphic to $\ell_\infty^c(X)$ complemented? Q2. If not, does every subspace of $\ell_\infty^c(X)$ isomorphic to $\ell_\infty^c(X)$ contain a subspace, still isomorphic to $\ell_\infty^c(X)$, which is complemented? I was hoping to employ separable injectivity but somehow it does not work. I am almost sure that Q2 holds true.
CommonCrawl
Is it possible to come up with an NxN S-box which would have a difference distribution table with N entries of 100% probability? I am studying the properties of S-boxes and I don't quite understand how a poorly designed S-box can destroy all of a cipher's security. For example, if one replaces the AES-256 S-box table with such an S-box as I described above, would he/she be able to crack the cipher with some known plaintext/ciphertext pairs? But I cannot figure out by what rule it was constructed. Any affine function will do. Let your S-box be $$S(x)=Mx\oplus c$$ where $M$ is an $n\times n$ binary matrix and $c$ is an $n$-bit constant vector. The output difference for this S-box is, for any nonzero $a$, $$ S(x \oplus a)\oplus S(x)=(M(x\oplus a)\oplus c )\oplus (Mx\oplus c)= M a, $$ which is a constant for fixed $a$, so all the output differences for that input difference take on the same value. For nontriviality pick either $c$ nonzero or $M$ a non-identity matrix. The matrix must be invertible (over $GF(2)$) for the S-box to be invertible, i.e., a proper substitution. AES with this type of S-box can be trivially broken. Edit: See the answer to this question for details.
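To make the affine construction concrete, here is a small self-contained check (my own illustration, not taken from the thread) that builds a 4-bit affine S-box and verifies that every nonzero input difference produces a single output difference with probability 1:

```python
NBITS = 4

def mat_vec_gf2(M, x):
    """Multiply an NBITS x NBITS binary matrix (rows as bitmasks) by x over GF(2)."""
    y = 0
    for i, row in enumerate(M):
        y |= (bin(row & x).count("1") & 1) << i  # bit i = parity of row . x
    return y

# An invertible binary matrix and a constant -- both arbitrary choices here.
M = [0b0001, 0b0011, 0b0111, 0b1111]
c = 0b1010

def sbox(x):
    return mat_vec_gf2(M, x) ^ c  # S(x) = Mx XOR c

# Difference distribution table: ddt[a][b] = #{x : S(x ^ a) ^ S(x) = b}
size = 1 << NBITS
ddt = [[0] * size for _ in range(size)]
for a in range(size):
    for x in range(size):
        ddt[a][sbox(x ^ a) ^ sbox(x)] += 1

for a in range(1, size):
    nonzero = [(b, cnt) for b, cnt in enumerate(ddt[a]) if cnt]
    assert nonzero == [(mat_vec_gf2(M, a), size)]  # single entry with probability 1
print("every nonzero input difference gives one output difference with certainty")
```

Swapping such an S-box into a cipher makes differential cryptanalysis essentially free, since differences propagate through the substitution layer deterministically.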
CommonCrawl
You may wish to explore the problem Which Is Cheaper? before working on this task. Which is bigger, $n+10$, or $2n+3$? "I wonder what happens when $n=4$." "$4+10=14$ but $2 \times 4 + 3$ is only $11$." "So it looks like $n+10$ is bigger." "I wonder what happens when $n=10$." "$10+10=20$ but $2 \times 10 +3$ is $23$." "So it looks like $2n+3$ is bigger." Can you explain why we have come to different conclusions? Is there a diagram you could draw that would help? For the following pairs of expressions, can you work out when each expression is bigger? Find two expressions so that one is bigger whenever $n< 5$ and the other is bigger whenever $n> 5$. Find three expressions so that the first is biggest whenever $n< 0$, the second is biggest whenever $n$ is between 0 and 4, and the third is biggest whenever $n> 4$. Find three expressions so that the first is biggest whenever $n< 3$, the second is biggest when $n> 3$, and the third is never the biggest.
CommonCrawl
I am planning to finish school soon and I would like to shed some weight before moving on. I have collected a fair number of books that I will likely never use again and it would be nice to get some money for them. Sites like Amazon and eBay let you sell your book to other customers, but Amazon will also buy some of your books directly (Trade-Ins), saving you the hassle of waiting for a buyer. Before selling, I remembered listening to a Planet Money episode about a couple of guys that tried to make money off of buying and selling used textbooks on Amazon. Their strategy was to buy books at the end of a semester when students are itching to get rid of them, and sell them to other students at the beginning of the next semester. To back up their business, they have been scraping Amazon's website for years, keeping track of prices in order to find the optimal times to buy and sell. I collected a few books I was willing to part with and set up a scraper in R. I am primarily interested in selling my books to Amazon, so I tracked Amazon's Trade-In prices for these books. This was done fairly easily with Hadley Wickham's package rvest and the Chrome extension Selector Gadget. Selector Gadget tells me that the node I'm interested in is #tradeInButton_tradeInValue. The code to do the scraping is below. After manually collecting this data for less than a week, I am able to plot the trends for the eight books I am interested in selling. The plot and code are below. I am surprised how much the prices fluctuate. I expected them to be constant for the most part, with large changes once a week or less often. Apparently Amazon is doing quite a bit of tweaking to determine optimal price points. I would also guess that $2 is their minimum trade-in price. It looks like I missed out on my best chance to sell A First Course in Stochastic Processes, but that the price of A Primer on Linear Models will keep rising forever. Time lapses are a fun way to quickly show a long period of time. They typically involve setting up your camera on a tripod and taking photos at a regular interval, like every 5 seconds. After all the photos have been taken, they are combined into a movie at a much faster rate, for example 30 frames per second. Time stacking is a way to combine all the photos into a single photo, instead of a movie. This is a common method to make star trails, and Matt Molloy has recently been experimenting with it in many different settings. There are many possible ways to achieve a time stack, but the most common way is to combine the photos with a lighten layer blend. For every pixel in the final image, the combined photo will use the corresponding pixel from the photo that was the brightest in all of the photos. This gives the desired result of motion of the stars or clouds in a scene. Another way to combine the photos is through time slicing (see, for example, this photo). In time slicing, the final combined image will contain a "slice" from each of the original photos. Time slices can go left-to-right, right-to-left, top-to-bottom, or bottom-to-top. For example, a time slice that goes from left to right will use vertical slices of the pixels. If you took 100 photos for your time lapse, each of which is 1000 pixels wide, the left-most 10 vertical pixel slices of the final image would contain the corresponding pixels from the first photo, the 11th through 20th vertical pixel slices would contain the corresponding pixels from the second photo, and so on. Different directions will produce different effects.
There is free software available to do lighten layer blending of two photos, but I could not find any to automatically do it for a large number of photos. Similarly for the time slice, it is easy enough to manually slice a few photos, but not hundreds of photos. Therefore, I wrote a couple of scripts in R that would do this automatically. A gist of the scripts to create a time stack and a time slice is here. They both require you to give the directory containing the JPEG photos, and the time slice script lets you enter the direction you would like it to go. To try this out on my own photos, I used the time lapse I had created of a sunset (movie created with FFmpeg), which consisted of 225 photos taken over 20 minutes. The source material isn't that great (the photos were out of focus), but you can still see the effects. The following picture is a time stack. The following four pictures are the time slices with different directions. Fangraphs recently published an interesting dataset that measures defensive efficiency of fielders. For each player, the Inside Edge dataset breaks their opportunities to make plays into five categories, ranging from almost impossible to routine. It also records the proportion of times that the player successfully made the play. With this data, we can see how successful each player is for each type of play. I wanted to think of a way to combine these five proportions into one fielding metric. From here on, I will assume that there is no error in categorizing a play as easy or hard and that there is no bias in the categorizations. The model I will build is motivated by ordinal regression. If we were only concerned with the success rate in one of the categories, we could use standard logistic regression, and the probability that player $i$ successfully made a play would be assumed to be $\sigma(\theta_i)$, where $\sigma()$ is the logistic function. Using our prior knowledge that plays categorized as easy should have a higher success rate than plays categorized as difficult, I would like to generalize this. Say there are only two categories: easy and hard. We could model the probability that player $i$ successfully made a hard play as $\sigma(\theta_i)$ and the probability that he made an easy play as $\sigma(\theta_i+\gamma)$. Here, we would assume that $\gamma$ is the same for all players. This assumption implies that if player $i$ is better than player $j$ at easy plays, he will also be better at hard plays. This is a reasonable assumption, but maybe not true in all cases. Since we have five different categories of difficulty, we can generalize this by having $\gamma_k, k=1,\ldots,4$. Again, these $\gamma_k$'s would be the same for everyone. A picture of what this looks like for shortstops is below. In this model, every player will effectively be shifting the curve either left or right. A positive $\theta_i$ means the player is better than average and causes the curve to shift left, and vice versa for a negative $\theta_i$. I modeled this as a multi-level mixed effects model, with the players being random effects and the $\gamma_k$'s being fixed. Technically, I should optimize subject to the condition that the $\gamma_k$'s are increasing, but the unconstrained optimization always yields increasing $\gamma_k$'s because there is a big difference between the success rates in the categories. I used combined data from the 2012 and 2013 seasons and included all players with at least one success and one failure. I modeled each position separately.
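In symbols, the model sketched in the preceding paragraphs can be written as follows (my notation, summarizing the post's description rather than quoting its code): for player $i$ and difficulty category $k$,

$$\Pr(\text{play made}\mid i,k) \;=\; \sigma\!\left(\theta_i+\gamma_k\right), \qquad \sigma(z)=\frac{1}{1+e^{-z}},$$

with a common, increasing set of category offsets $\gamma_1\le\cdots\le\gamma_5$ (one of which can be fixed to zero for identifiability) and player effects treated as random, $\theta_i\sim N(0,\tau^2)$, which is what produces the shrinkage toward the mean discussed next.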
Modeling player effects as random, there is a fair amount of regression to the mean built in. In this sense, I am more trying to estimate the true ability of the player, rather than measuring what he did during the two years. This is an important distinction, which may differ from other defensive statistics. Below is a summary of the results of the model for shortstops. I am only plotting the players with the at least 800 innings, for readability. A bonus of modeling the data like this is that we get standard error estimates as a result. I plotted the estimated effect of each player along with +/- 2 standard errors. We can be fairly certain that the effects for the top few shortstops is greater than 0 since their confidence intervals do not include 0. The same is true for the bottom few. Images for the other positions can be found here. The results seem to make sense for the most part. Simmons and Tulowitzki have reputations as being strong defenders and Derek Jeter has a reputation as a poor defender. Further, I can validate this data by comparing it to other defensive metrics. One that is readily available on Fangraphs is UZR per 150 games. For each position, I took the correlation of my estimated effect with UZR per 150 games, weighted by the number of innings played. Pitchers and catchers do not have UZR's so I cannot compare them. The correlations, which are in the plot below, range from about 0.2 to 0.65. In order to make this fielding metric more useful, I would like to convert the parameters to something more interpretable. One option which makes a lot of sense is "plays made above/below average". Given an estimated $\theta_i$ for a player, we can calculate the probability that he would make a play in each of the five categories. We can then compare those probabilities to the probability an average player would make a play in each of the categories, which would be fit with $\theta=0$. Finally, we can weight these differences in probabilities by the relative rate that plays of various difficulties occur. For example, assuming there are only two categories again, suppose a player has a 0.10 and 0.05 higher probability than average of making hard and easy plays, respectively. Further assume that 75% of all plays are hard and 25% are easy. On a random play, the improvement in probability over an average player of making a play is $.10(.75)+.05(.25)=0.0875$. If a player has an opportunity for 300 plays in an average season, this player would be $300 \times 0.0875=26.25$ plays better than average over a typical season. I will assume that the number of opportunities to make plays is directly related to the number of innings played. To convert innings to opportunities, I took the median number of opportunities per inning for each position. For example, shortstops had the highest opportunities per inning at 0.40 and catchers had the lowest at 0.08. The plot below shows the distribution of opportunities per inning for each position. We can extend this to the impact on saving runs from being scored as well by assuming each successful play saves $x$ runs. I will not do this for this analysis. Finally, I put together a Shiny app to display the results. You can search by team, position, and innings played. A team of '- - -' means the player played for multiple teams over this period. You can also choose to display the results as a rate statistic (extra plays made per season) or a count statistic (extra plays made over the two seasons). 
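To make the conversion concrete, the two-category arithmetic from the worked example earlier in this section can be written out directly (all numbers are the made-up ones from that example):

```r
# Worked example of the "plays above average" conversion, using the
# two-category illustration from the text (numbers are made up).
p_player  <- c(hard = 0.40, easy = 0.95)   # hypothetical player probabilities
p_average <- c(hard = 0.30, easy = 0.90)   # hypothetical league-average probabilities
mix       <- c(hard = 0.75, easy = 0.25)   # relative frequency of each play type

extra_per_play <- sum((p_player - p_average) * mix)  # 0.0875
extra_per_play * 300                                 # 26.25 extra plays per 300 opportunities
```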
To get a seasonal number, I assume position players played 150 games with 8.5 innings in each game. For pitchers, I assumed that they pitched 30 games, 6 innings each. I don't know if I will do anything more with this data, but if I could do it again, I might have modeled each year separately instead of combining the two years together. With that, it would have been interesting to model the effect of age by observing how a player's ability to make a play changes from one year to the next. I also think it would be interesting to see how changing positions affects a player's efficiency. For example, we could have a $9 \times 9$ matrix of fixed effects that represent the improvement or degradation in ability as a player switches from their main position to another one. Further assumptions would be needed to make sure the $\theta$'s are on the same scale for every position. At the very least, this model and its results can be considered another data point in the analysis of a player's fielding ability. One thing we need to be concerned about is the classification of each play into the difficulty categories. The human eye can be fooled into thinking a routine play is hard just because a fielder dove to make the play, when a superior fielder could have made it look easier. I have put the R code together to do this analysis in a gist. If there is interest, I will put together a repo with all the data as well. Update: I have a Github repository with the data, R code for the analysis, the results, and code for the Shiny app. Let me know what you think. Trevor Hastie and Rob Tibshirani are currently teaching a MOOC covering an introduction to statistical learning. I am very familiar with most of the material in the course, having read Elements of Statistical Learning many times over. One great thing about the class, however, is that they are truly experts and have collaborated with many of the influential researchers in their field. Because of this, when covering certain topics, they have included interviews with statisticians who made important contributions to the field. When introducing the class to R, they interviewed John Chambers, who was able to give a personal account of the history of S and R because he was one of the developers. Further, when covering resampling methods, they spoke with Brad Efron, who talked about the history of the bootstrap and how he struggled to get it published. Today, they released a video interview with Jerome Friedman. Friedman revealed many interesting facts about the history of tree-based methods, including the fact that there weren't really any journal articles written about CART when they wrote their book. There was one quote that I particularly enjoyed. And of course, I'm very gratified that something that I was intellectually interested in for all this time has now become very popular and very important. I mean, data has risen to the top. My only regret is two of my mentors who also pushed it, probably harder and more effectively than I did – namely, John Tukey and Leo Breiman – are not around to actually see how data has triumphed over, say, theorem proving. In a previous post, I showed you how to scrape playlist data from Columbus, OH alternative rock station CD102.5. Since it's the end of the year and best-of lists are all the rage, I thought I would share the most popular songs and artists of the year, according to this data.
In addition to this, I am going to make an interactive graph using Shiny, where the user can select an artist and it will graph the most popular songs from that artist. First off, I am assuming that you have scraped the appropriate data using the code from the previous post. Next, I will select just the data from 2013 and find the songs that were played most often. I will make a plot similar to the plots made in the last post to show when the top 5 songs were played throughout the year. Alt-J was more popular at the beginning of the year and the Foals have been more popular recently. I can similarly summarize by artist as well. The patterns for the artists are not as clear as they are for the songs. Finally, I wrote an interactive Shiny app. Shiny apps are surprisingly easy to create, and if you are thinking about experimenting with the framework, I suggest you try it. I will leave the code for the app in a gist. In the app, you can enter any artist you want, and it will show you the most popular songs on CD102.5 for that artist. You can also select the number of songs that it plots with the slider. For example, even though Muse did not have one of the most popular songs of the year, they were still the band that was played the most. By typing in "MUSE" in the Artist text input, you will get the following output. They had two songs that were very popular this year and a few others that were decently popular as well.
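For reference, here is a rough sketch of the kind of summary used above, assuming a data frame with one row per play and columns artist, song, and time coming from the scraper in the previous post (the column names are assumptions, not necessarily the exact ones used there):

```r
# Rough sketch of the 2013 summaries described above. 'playlist' is assumed to
# have one row per play with columns: artist, song, time (POSIXct).
library(dplyr)

plays_2013 <- playlist %>% filter(format(time, "%Y") == "2013")

# Most played songs of the year.
top_songs <- plays_2013 %>%
  count(artist, song, sort = TRUE) %>%
  head(5)

# Most played artists of the year.
top_artists <- plays_2013 %>%
  count(artist, sort = TRUE) %>%
  head(5)
```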
CommonCrawl
In evolutionary game theory and related fields, one often needs to visualize the dynamics of three-dimensional systems, e.g. competition between three strategies $x_1$, $x_2$ and $x_3$ for which $x_1 + x_2 + x_3 = 1$. This is most conveniently done on a 2-simplex (ternary plot, de Finetti diagram), and the following code snippet defines a minimal way of visualizing data on a 2-simplex using R base graphics. The function takes a minimum of four arguments: x and y are vectors holding the $x_1$ and $x_2$ values (it is not necessary to input the remaining, third value, as $x_3 = 1 - x_1 - x_2$); label is a vector of length 3 giving labels for the vertices of the simplex. Additional standard graphical parameters can be specified, e.g. type = "l", lwd = 2, col = "red" will request a line plot with line width 2 and a red colour.
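In case the snippet itself did not carry over here, the following is a minimal reimplementation along the lines described; the argument names follow the description above, but the details of the original function may differ.

```r
# Minimal 2-simplex (ternary) plot in base R graphics, along the lines described.
# x, y: vectors of x1 and x2 values (x3 = 1 - x1 - x2 is implied);
# label: character vector of length 3 with vertex labels.
simplex.plot <- function(x, y, label, ...) {
  # Triangle vertices for x1 = 1, x2 = 1 and x3 = 1, respectively.
  corners <- rbind(c(1, 0), c(0, 0), c(0.5, sqrt(3) / 2))
  plot(corners, type = "n", axes = FALSE, xlab = "", ylab = "", asp = 1,
       xlim = c(-0.1, 1.1), ylim = c(-0.1, 1))
  polygon(corners)
  text(corners, labels = label, pos = c(1, 1, 3))
  # Barycentric to Cartesian coordinates: P = x1*V1 + x2*V2 + x3*V3.
  px <- x + (1 - x - y) / 2
  py <- (1 - x - y) * sqrt(3) / 2
  points(px, py, ...)
}

# Example usage with the graphical parameters mentioned above:
# simplex.plot(xs, ys, label = c("x1", "x2", "x3"), type = "l", lwd = 2, col = "red")
```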
CommonCrawl
In this talk, we prove Massera's type result for nonlinear dynamic equation on time scales defined on $\mathbb R$ and on infinite dimensional Banach space. Also, we prove that any almost periodic solution of dynamic equation on time scales is a $p$-periodic solution of this equation. These first results hold for any invariant under translations time scales. We also prove a version of Massera's theorem for linear and nonlinear $q$-difference equations. Finally, we provide some examples to illustrate our results. The main new results are parts of two joint works, one of them with Hernán Henríquez and Eduard Toon, and the other with Martin Bohner. Motivated by the need for dynamical analysis and model reduction in stiff stochastic chemical systems, we focus on the development of methodologies for analysis of the dynamical structure of singularly-perturbed stochastic dynamical systems. We outline a formulation based on random dynamical systems theory. We demonstrate the analysis for a model two-dimensional stochastic dynamical system built on an underlying deterministic system with a tailored fast-slow structure, and an analytically known slow manifold, employing multiplicative brownian motion noise forcing. Consider generalized KDV equations with a power non-linearity $(u^p)_x$. These gKDV equations have solitary traveling waves, which are linearly unstable when p>5 (supercritical case). Jointly with Zhiwu Lin and Chongchun Zeng, we constructed invariant manifolds (stable, unstable and center) near the orbits of the unstable traveling waves in the energy space. In particular, the local uniqueness and orbital stability of the center manifold is obtained. These invariant manifolds give a complete classification of the dynamics near unstable traveling waves. We present an update of the latest results as well as applications to functional differential equations, measure differential equations and the Black-Scholes equation. We account for the longtime behavior of solutions for a class of reaction-diffusion equations. In particular, we address those with global well-posedness but exhibiting blow-up in infinite time. The existence of unbounded trajectories requires the introduction of some objects interpreted as equilibria at infinity, yielding a more complex orbit structure than that appearing on dissipative systems. Under this setting, we still manage to extend known results and obtain a complete decomposition for the related unbounded global attractor. We study systems of lattice differential equations (i.e., equations with discrete space and continous time) of reaction-diffusion type. Such systems frequently appear in population dynamics (e.g., predator-prey models with diffusion). After establishing some basic properties such as the local existence and global uniqueness of bounded solutions, we proceed to our main goal, which is the study of invariant regions. Our main result can be interpreted as an analogue of the weak maximum principle for systems of lattice differential equations. It is inspired by existing results for parabolic differential equations, but its proof is different and relies on the Euler approximations of solutions to lattice differential equations. As a corollary, we obtain a global existence theorem for nonlinear systems of lattice reaction-diffusion equations. In this talk we introduce new characterizations of spectral fractional Laplacian to incorporate nonhomogeneous Dirichlet and Neumann boundary conditions. 
The classical cases with homogeneous boundary conditions arise as a special case. We apply our definition to fractional elliptic equations of order $s \in (0,1)$ with nonzero Dirichlet and Neumann boundary conditions. Here the domain $\Omega$ is assumed to be a bounded, quasi-convex Lipschitz domain. To impose the nonzero boundary conditions, we construct fractional harmonic extensions of the boundary data. It is shown that solving for the fractional harmonic extension is equivalent to solving for the standard harmonic extension in the very-weak form. The latter result is of independent interest as well. The remaining fractional elliptic problem (with homogeneous boundary data) can be realized using the existing techniques. We introduce finite element discretizations and derive discretization error estimates in natural norms, which are confirmed by numerical experiments. We also apply our characterizations to Dirichlet and Neumann boundary optimal control problems with fractional elliptic equation as constraints. In this talk we exhibit different methods to analyze the asymptotic behavior of solutions to a 2D-Navier-Stokes model when the external force contains hereditary characteristics (constant, distributed or variable delay, memory, etc). First we provide some results on the existence and uniqueness of solutions. Next, the existence of stationary solution is established by Lax-Milgram theorem and Schauder fixed point theorem. Then the local stability analysis of stationary solution is studied by using the theory of Lyapunov functions, the Razumikhin-Lyapunov technique. In the end, Lyapunov functionals is also exploited some stability results. We highlight the differences in the asymptotic behavior in the particular case of bounded or unbounded variable delay. We will discuss a non-isothermal viscous relaxation of some nonlocal Cahn$-$Hilliard equations. This perturbation problem generates a family of solution operators exhibiting dissipation and conservation. The solution operators admit a family of compact global attractors that are bounded in a more regular phase-space. An upper-semicontinuity result for this family of global attractors is also sought. Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at micro or meso-scale, where stochasticity plays a non- negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and associated algorithm, that can be used to not only construct efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction. This talk considers asymptotic properties of systems of weakly interacting particles with a common Markovian switching. 
It is shown that when the number of particles tends to infinity, certain measure-valued stochastic processes describing the time evolution of the systems converge to a stochastic nonlinear process which can be described as a solution to a stochastic McKean-Vlasov equation.
CommonCrawl
Abstract: For a reflecting Bessel process, the inverse local time at 0 is an $\alpha$-stable subordinator, and so the corresponding subordinate Brownian motion is a symmetric $2\alpha$-stable process. Based on a discussion of Esscher and Girsanov transforms of general diffusions, we obtain a comparison theorem between the inverse local times of Bessel processes and perturbed Bessel processes. An immediate application would be Green function estimates of trace processes. This is joint work with Prof. Zhen-Qing Chen.
CommonCrawl
38 Does $\mathbb C\mathbb P^\infty$ have a group structure? 26 Does there exist a closed manifold that can be given both a Euclidean and a Hyperbolic structure? 22 Why are Tamagawa numbers equal to Pic/Sha? 21 Does smooth and proper over $\mathbb Z$ imply rational?
CommonCrawl
Being a high-school student, I only have a qualitative understanding of the wave mechanical model of the atom. The orbital in question has $3$ radial nodes (I'm counting the places where the value of the y-coordinate is zero, excluding the ones at the extreme left and right). Hence, the orbital should satisfy $n-l-1=3$. The orbitals satisfying the condition are $4s, 5p, 6d, 7f,\ldots$. I think your inference is correct, though the plot looks very awkward. There should be an exponential decay at large values of $r$, but in the plot the curve plunges sharply onto the $x$-axis. Also, the shape of each peak should not be so symmetric, but instead skewed towards the left.
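If it helps to see the bookkeeping, here is a tiny R check of the $n-l-1$ rule over the common orbital labels (purely illustrative):

```r
# Enumerate orbitals up to n = 7, l = 3 and pick those with 3 radial nodes.
orbitals <- expand.grid(n = 1:7, l = 0:3)
orbitals <- subset(orbitals, l < n)                        # l must be smaller than n
orbitals$label <- paste0(orbitals$n, c("s", "p", "d", "f")[orbitals$l + 1])
orbitals$radial_nodes <- orbitals$n - orbitals$l - 1
subset(orbitals, radial_nodes == 3)$label                  # "4s" "5p" "6d" "7f"
```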
CommonCrawl
Abstract: Here we review some results of J. -H. Lee of the $N\times N$ Zakharov–Shabat system with a polynomial spectral parameter. We define a scattering transform following the set-up of Beals–Coifman . In the $2 \times 2$ cases, we modify the Kaup–Newell and Kuznetsov–Mikhailov system to assure the normalization with respect to the spectral parameter. Then we are able to apply the technique of Zakharov–Shabat for the solitons of NLS to our cases. We obtain the long-time behavior of the equations which can be transformed into DNLS and MTM in laboratory coordinates respectively.
CommonCrawl
This is a simple Jekyll theme based on minima. Social accounts can be set in your _config.yml file. These will be shown in the footer of the page. Any file except 404 in the root folder will be added to the header, like about.md and archive.html. Set your google_analytics UA in _config.yml. Use $\alpha$ for inline formulas and $$\Sigma$$ for display formulas. TODO: Write usage instructions here. Describe your available layouts, includes, sass and/or assets. Bug reports and pull requests are welcome on GitHub at https://github.com/kemingy/jekyll-theme-ink. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct. To set up your environment to develop this theme, run bundle install. Your theme is set up just like a normal Jekyll site! To test your theme, run bundle exec jekyll serve and open your browser at http://localhost:4000. This starts a Jekyll server using your theme. Add pages, documents, data, etc. like normal to test your theme's contents. As you make modifications to your theme and to your content, your site will regenerate and you should see the changes in the browser after a refresh, just like normal. When your theme is released, only the files in _layouts, _includes, _sass and assets tracked with Git will be bundled. To add a custom directory to your theme-gem, please edit the regexp in jekyll-theme-ink.gemspec accordingly.
CommonCrawl
I have a large number of data sets. Each data set has something like 200K data points lying in a square times a circle. The square is solid $I\times I$. The circle $S^1$ is hollow (dim 1). By reasoning from the experimental setup that produces the data, I am convinced that this collection of points is a sample from mixed noise plus a Gaussian, and the Gaussian is confined to a small region. (We can assume this at first, but more realistic reasoning would give a mixture of a small number of Gaussians, with centres pretty close together. I'll treat it as a single Gaussian for the moment.) The problem is that the noise is far from uniform. Moreover, the noise distribution definitely differs from one dataset to the next. Given one of my datasets D with k points (k varies with D) and underlying noise distribution $Noise(D)$, I have a way of rapidly producing a sample of k points drawn from $Noise(D)$. In other words, the information that comes with D is sufficient to identify $Noise(D)$ in the sense that it is possible to generate a sample of k points from $Noise(D)$. My guess is that there are something like 100 points explained by the Gaussian, though this could be optimistic. I'm not a statistician. How could I now estimate my unknown Gaussian? It seems that this should be possible, since I "know" $Noise(D)$. Could someone explain to me how I could do something like the EM algorithm in my current situation? Also, could someone please recommend an online explanation of how EM is usually carried out?
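Not a full answer, but one common way to set this up is a two-component EM in which the noise component is held fixed (because it is known, e.g. via a kernel density estimate built from the noise sample you can generate) and only the Gaussian and the mixing weight are updated. The sketch below is restricted to two coordinates for simplicity, and all names (X, dnoise) are placeholders.

```r
# Sketch of a two-component EM where the noise component is fixed and known,
# and only the Gaussian component and the mixing weight are estimated.
# X: n x 2 matrix of data points; dnoise(X): known noise density at each row.
library(mvtnorm)  # for dmvnorm

em_known_noise <- function(X, dnoise, iters = 100) {
  pi1 <- 0.01                           # initial guess: ~1% of points are "signal"
  mu  <- colMeans(X)
  Sig <- diag(apply(X, 2, var))
  for (it in seq_len(iters)) {
    # E-step: responsibility of the Gaussian component for each point.
    f1 <- dmvnorm(X, mean = mu, sigma = Sig)
    f0 <- dnoise(X)
    r  <- pi1 * f1 / (pi1 * f1 + (1 - pi1) * f0)
    # M-step: update only the Gaussian parameters and the mixing weight;
    # the noise component stays fixed because it is assumed known.
    pi1 <- mean(r)
    mu  <- colSums(r * X) / sum(r)
    Xc  <- sweep(X, 2, mu)
    Sig <- t(Xc) %*% (r * Xc) / sum(r)
  }
  list(weight = pi1, mean = mu, cov = Sig, responsibility = r)
}
```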
CommonCrawl
In what situations would it be more favorable to use random projection to reduce the dimensionality of a dataset as opposed to PCA? By more favorable, I mean preserving the distances between points of the dataset. PCA gives the best possible linear projection in terms of preserved variance. With very high dimensions, if speed is an issue, then consider that on a matrix of size $n \times k$, PCA takes $O(k^2 \times n+k^3)$ time, whereas a random projection takes $O(nkd)$, where you're projecting on a subspace of size $d$. With a sparse matrix it's even faster. The data may well be low-dimensional, but not in a linear subspace; PCA assumes it is. Random projections are also quite fast for reducing the dimension of a mixture of Gaussians. If the data is very large, you don't need to hold it in memory for a random projection, whereas for PCA you do. In general, PCA works well on relatively low dimensional data.
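A quick way to see the trade-off empirically is to compare how well pairwise distances are preserved by a PCA projection versus a Gaussian random projection. The sketch below uses synthetic data and arbitrary dimensions purely for illustration.

```r
# Compare pairwise-distance preservation: PCA (via SVD) vs. a Gaussian random
# projection, on synthetic data. All dimensions are arbitrary illustrative choices.
set.seed(1)
n <- 500; k <- 1000; d <- 50
X <- matrix(rnorm(n * k), n, k)

# PCA: project onto the top d right singular vectors.
Xc <- scale(X, center = TRUE, scale = FALSE)
V  <- svd(Xc, nu = 0, nv = d)$v
Xp <- Xc %*% V

# Random projection: multiply by a k x d Gaussian matrix, scaled by 1/sqrt(d).
R  <- matrix(rnorm(k * d), k, d) / sqrt(d)
Xr <- X %*% R

# Ratio of projected to original pairwise distances (1 = perfectly preserved).
orig <- dist(X)
summary(as.vector(dist(Xp) / orig))   # PCA
summary(as.vector(dist(Xr) / orig))   # random projection
```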
CommonCrawl
Abstract: Godsil-McKay switching is an operation on graphs that doesn't change the spectrum of the adjacency matrix. Usually (but not always) the obtained graph is non-isomorphic with the original graph. We present a straightforward sufficient condition for being isomorphic after switching, and give examples which show that this condition is not necessary. For some graph products we obtain sufficient conditions for being non-isomorphic after switching. As an example we find that the tensor product of the $\ell\times m$ grid ($\ell>m\geq 2$) and a graph with at least one vertex of degree two is not determined by its adjacency spectrum.
CommonCrawl
The Mantel-Haenszel method is an approach for fitting meta-analytic fixed-effects models when dealing with studies providing data in the form of 2x2 tables or in the form of event counts (i.e., person-time data) for two groups (Mantel & Haenszel, 1959). The method is particularly advantageous when aggregating a large number of studies with small sample sizes (the so-called sparse data or increasing strata case). The method is available in the metafor package via the rma.mh() function. By default, the results obtained may differ slightly from those obtained via the metan function in Stata (for more details, see Harris et al., 2008; Sterne, 2009), the Review Manager (RevMan) from the Cochrane Collaboration, or Comprehensive Meta-Analysis (CMA). The reason for such discrepancies is explained further below using an illustrative dataset from a meta-analysis comparing the risk of catheter-related bloodstream infection (CRBSI) when using anti-infective-treated versus standard catheters in the acute care setting (Niel-Weise et al., 2007). Variables ai and ci indicate the number of CRBSIs in patients receiving an anti-infective or a standard catheter, respectively, while n1i and n2i indicate the total number of patients in the respective groups. Note that the number of infections was quite low in many studies, with zero cases observed in several of the treatment groups. Also, no cases (infections) were observed in either group in the Yucel (2004) study. Therefore, the odds ratio is estimated to be .299 (with 95% CI: 0.193 to 0.462). In other words, the odds of an infection are estimated to be approximately 70% lower (i.e., $(1 - .299) \times 100%$) in patients receiving an anti-infective-treated catheter instead of a standard catheter. The overall effect is clearly statistically significant (with both the Wald-type z-test and the Cochran-Mantel-Haenszel chi-square test in close agreement). The Q-test for heterogeneity is not significant ($Q(16) = 16.86, p = .39$), although Tarone's test is suggestive of potential heterogeneity. Note that the estimated overall odds ratio (and corresponding CI) is slightly different than the one obtained earlier. Also, the z-test of the overall effect and the chi-square test for heterogeneity are slightly different. These results match what is reported by Stata and are again slightly different compared to the results obtained with metafor. Finally, the figure below shows the results from Comprehensive Meta-Analysis (CMA). These results match those obtained with Stata and RevMan and differ slightly from those obtained with metafor. The results differ because studies with zero cases in either group are handled by default in a different way in metafor compared to Stata, RevMan, and CMA. To understand this better, note that the Mantel-Haenszel method itself does not require the calculation of the observed outcomes of the individual studies (in the present example, the observed (log) odds ratios of the $k$ studies) and instead directly makes use of the 2×2 table counts. Zero cells are not a problem (except in some extreme cases, such as when there are zero cases in one or both groups across all of the 2×2 tables). Therefore, it is unnecessary to add some constant to the cell counts of a study with zero cases in either group. However, both Stata, RevMan, and CMA apply an adjustment (often called a continuity correction) to the cell counts in such studies (but studies with zero cases in both groups are dropped/excluded from the method). 
In particular, 1/2 is added to each of the cells of the 2×2 table in such studies before applying the Mantel-Haenszel method. By default, metafor uses the same adjustment when calculating the observed outcomes (the observed log odds ratios) of the $k$ studies (here, zero cells can be problematic, so adding a constant value to the cell counts ensures that all $k$ values can be calculated). Also, similarly, studies with zero cases in both groups are automatically dropped/excluded. However, when applying the Mantel-Haenszel method, no adjustment to the cell counts is made, since this is not necessary (and in fact can increase the bias in the Mantel-Haenszel method – see Bradburn et al., 2007). These are the exact same results as obtained with Stata, RevMan, and CMA. However, the results of Bradburn et al. (2007) suggest that the 1/2 adjustment should only be used with caution when applying the Mantel-Haenszel method. Also, alternative correction factors could be considered, which may actually lead to more accurate results (see Sweeting et al., 2004). Finally, the findings by Bradburn et al. (2007) suggest that Peto's method (as implemented in the rma.peto() function) can actually give the least biased results and may be preferrable when events are rare (as long as treatment and control groups are of approximately equal size within trials and the true odds ratio underlying the studies is not very large). Bradburn, M. J., Deeks, J. J., Berlin, J. A., & Localio, A. R. (2007). Much ado about nothing: A comparison of the performance of meta-analytical methods with rare events. Statistics in Medicine, 26(1), 53–77. Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22(4), 719–748. Niel-Weise, B. S., Stijnen, T., & van den Broek, P. J. (2007). Anti-infective-treated central venous catheters: A systematic review of randomized controlled trials. Intensive Care Medicine, 33(12), 2058–2068. Sterne, J. A. C. (Ed.) (2009). Meta-analysis in Stata: An updated collection from the Stata Journal. Stata Press, College Station, TX. Sweeting, M. J., Sutton, A. J., & Lambert, P. C. (2004). What to add to nothing? Use and avoidance of continuity corrections in meta-analysis of sparse data. Statistics in Medicine, 23(9), 1351–1375.
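As a rough illustration of the two settings compared above, a sketch of the corresponding rma.mh() calls might look as follows. The dataset name and the exact argument values needed to reproduce Stata/RevMan/CMA should be checked against the package documentation; treat this as indicative rather than definitive.

```r
# Illustrative sketch only; see help(rma.mh) for the authoritative defaults.
library(metafor)
dat <- dat.nielweise2007   # CRBSI dataset, assumed to ship with metafor/metadat

# metafor's default behaviour described above: no continuity correction is
# applied within the Mantel-Haenszel computations themselves.
res_default <- rma.mh(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i,
                      data = dat)
summary(res_default)

# Something along these lines should mimic the Stata/RevMan/CMA behaviour:
# add 1/2 to all cells of studies with a zero cell, both when computing the
# observed outcomes and within the MH method itself (first/second element of
# each argument, per the package documentation).
res_cc <- rma.mh(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i,
                 data = dat, add = c(1/2, 1/2), to = c("only0", "only0"))
summary(res_cc)
```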
CommonCrawl
J. Araujo, J-C. Bermond, F. Giroire, F. Havet, D. Mazauric, and R. Modrzejewski. Weighted improper colouring. Journal of Discrete Algorithms, 16:53-66, 2012. J. Araujo and C. Linhares Sales. On the Grundy number of graphs with few P4's. Discrete Applied Mathematics, 160(18):2514-2522, 2012. V. Bilo, M. Flammini, G. Monaco, and L. Moscardelli. On the performances of Nash equilibria in isolation games. Journal of Combinatorial Optimization, 22:378-391, 2011. O. Dalle. Should Simulation Products Use Software Engineering Techniques or Should They Reuse Products of Software Engineering? -- Part 1. Modeling & Simulation Magazine, 11(3), 07 2011. O. Dalle. Should Simulation Products Use Software Engineering Techniques or Should They Reuse Products of Software Engineering? -- Part 2. Modeling & Simulation Magazine, 11(4), 10 2011. J-C. Bermond, L. Gargano, and A. A. Rescigno. Gathering with minimum completion time in sensor tree networks. JOIN, 11(1-2):1-33, 2010. O. Dalle, Q. Liu, G. Wainer, and B. P. Zeigler. Applying Cellular Automata and DEVS Methodologies to Digital Games: A Survey. Simulation & Gaming, 41(6):796-823, December 2010. J-C. Bermond, R. Correa, and M.-L. Yu. Optimal Gathering Protocols on Paths under Interference Constraints. Discrete Mathematics, 309(18):5574-5587, September 2009. M. Flammini, R. Klasing, A. Navarra, and S. Pérennes. Tightening the Upper Bound for the Minimum Energy Broadcasting problem. Wireless Networks, 14(5):659--669, October 2008. R. J. Kang, T. Müller, and J.-S. Sereni. Improper colouring of (random) unit disk graphs. Discrete Mathematics, 308:1438--1454, April 2008. F. Honsell, M. Lenisa, and L. Liquori. A Framework for Defining Logical Frameworks. Electronic Notes in Theoretical Computer Science, 172:399 - 436, 2007. M. Flammini, A. Navarra, and S. Pérennes. The Real Approximation Factor of the MST heuristic for the Minimum Energy Broadcasting. ACM Journal of Experimental Algorithmics, 11:1--13, 2006. B. Reed, S. W. Song, and J. L. Szwarcfiter. Preface [Brazilian Symposium on Graphs, Algorithms and Combinatorics]. Discrete Appl. Math., 141(1-3):1, 2004. M. Loebl, J. Nesetril, and B. Reed. A note on random homomorphism from arbitrary graphs to $\mathbb Z$. Discrete Math., 273(1-3):173--181, 2003. C. McDiarmid and B. Reed. Channel assignment on graphs of bounded treewidth. Discrete Math., 273(1-3):183--192, 2003. D. Rautenbach and B. Reed. The Erdos-Pósa property for odd cycles in highly connected graphs. Combinatorica, 21(2):267--278, 2001. L. Perkovic and B. Reed. An improved algorithm for finding tree decompositions of small width. Internat. J. Found. Comput. Sci., 11(3):365--371, 2000. C. Berge and B. Reed. Edge-disjoint odd cycles in graphs with small chromatic number. Ann. Inst. Fourier (Grenoble), 49(3):783--786, 1999. A. Ferreira, C. Kenyon, A. Rau-Chaplin, and S. Ubéda. d-Dimensional Range Search on Multicomputers. Algorithmica, 24(3/4):195-208, 1999. P. Berthomé and A. Ferreira. Communication Issues in Parallel Systems with Optical Interconnections. International Journal of Foundations of Computer Science, 8(2):143--162, 1997. L. Perkovic and B. Reed. Edge coloring regular graphs of high degree. Discrete Math., 165/166:567--578, 1997. A. Ferreira, A. Goldman, and S. W. Song. Gossiping in bus interconnection networks. Parallel Algorithms and Applications, 8:309--331, 1996. T. Duboux, A. Ferreira, and M. Gastaldo. A Scalable Design for VLSI Dictionary Machines. Microprocessors & Microprogramming Journal, 41:359--372, 1995. 
Note: Special Issue on Parallel Programmable Architectures and Compilation for Multi-dimensional Processing. B. Reed. Rooted routing in the plane. Discrete Appl. Math., 57(2-3):213--227, 1995. A. Ferreira. On space-efficient algorithms for certain NP-Complete problems. Theoretical Computer Science, 120:311--315, 1993. A. Ferreira and J. Zerovnik. Bounding the probability of success of stochastic methods for global optimization. International Journal on Computers & Mathematics with Applications, 25(10/11):1--8, 1993. J.-C. Bermond, K. Berrada, and J. Bond. Extensions of networks with given diameter. Discrete Math., 75(1-3):31--40, 1989. C. T. Hoàng and B. A. Reed. $P\sb 4$-comparability graphs. Discrete Math., 74(1-2):173--200, 1989. J.-C. Bermond and C. Peyrat. Broadcasting in de Bruijn networks. Congr. Numer., 66:283--292, 1988. J.-C. Bermond, C. Delorme, and G. Farhi. Large graphs with given degree and diameter. III. In Graph theory, volume 62 of North-Holland Math. Stud., pages 23--31. North-Holland, Amsterdam, 1982. J.-C. Bermond, D. Sotteau, A. Germa, and M.-C. Heydemann. Chemins et circuits dans les graphes orientés. Ann. Discrete Math., 8:293--309, 1980. J.-C. Bermond. Hamiltonian decompositions of graphs, directed graphs and hypergraphs. Ann. Discrete Math., 3:21--28, 1978. J.-C. Bermond, A. Germa, and M.-C. Heydemann. Graphes représentatifs d'hypergraphes. Cahiers Centre Études Rech. Opér., 20(3-4):325--329, 1978. Unsolved problems. pp 678--696. Congressus Numerantium, No. XV, 1976. Problems. In Recent advances in graph theory (Proc. Second Czechoslovak Sympos., Prague, 1974), pages 541--544. Academia, Prague, 1975. J.-C. Bermond. $1$-graphes réguliers minimaux de girth donné. Cahiers Centre Études Recherche Opér., 17(2-4):125--135, 1975. J.-C. Bermond and J. C. Meyer. Hypergraphes et configurations. Cahiers Centre Études Recherche Opér., 17(2-4):137--154, 1975. J.-C. Bermond and P. Rosenstiehl. Pancyclisme du carré du graphe aux arêtes d'un graphe. Cahiers Centre Études Recherche Opér., 15:285--286, 1973. G. D'Angelo, G. Di Stefano, and A. Navarra. Gathering asynchronous and oblivious robots on basic graph topologies under the Look -Compute-Move model. In Steve Alpern, Robbert Fokkink, Leszek Gasieniec, Roy Lindelauf, and VS Subrahmanian, editors,Search Games and Rendezvous. Springer, . S. Bessy and F. Havet. Enumerating the edge-colourings and total colourings of a regular graph. Journal of Combinatorial Optimization, . S. Caron, F. Giroire, D. Mazauric, J. Monteiro, and S. Pérennes. P2P Storage Systems: Study of Different Placement Policies. ELSEVIER Journal of Peer-to-Peer Networking and Applications, Springer, . S. Cicerone, G. D'Angelo, G. Di Stefano, D. Frigioni, and V. Maurizio. Engineering a new algorithm for distributed shortest paths on dynamic networks. Algorithmica, . G. D'Angelo, G. Di Stefano, and A. Navarra. Flow problems in multi-interface networks. IEEE Transactions on Computers, . W. Fang, X. Liang, S. Li, L. Chiaraviglio, and N. Xiong. VMPlanner: Optimizing Virtual Machine Placement and Traffic Flow Routing to Reduce Network Power Costs in Cloud Data Centers. Computer Networks, September. F. Havet and L. Sampaio. On the Grundy and $b$-chromatic Numbers of a Graph. Algorithmica, pp 1-15, . F. Havet and X. Zhu. The game Grundy number of graphs. Journal of Combinatorial Optimization, .
CommonCrawl
In this survey, a recent computational methodology paying a special attention to the separation of mathematical objects from numeral systems involved in their representation is described. It has been introduced with the intention to allow one to work with infinities and infinitesimals numerically in a unique computational framework in all the situations requiring these notions. The methodology does not contradict Cantor's and non-standard analysis views and is based on the Euclid's Common Notion no. 5 "The whole is greater than the part" applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses a computational device called the Infinity Computer (patented in USA and EU) working numerically (recall that traditional theories work with infinities and infinitesimals only symbolically) with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix. It is argued that numeral systems involved in computations limit our capabilities to compute and lead to ambiguities in theoretical assertions, as well. The introduced methodology gives the possibility to use the same numeral system for measuring infinite sets, working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. (recall that traditionally different numerals $\infty, \aleph_0, \omega$, etc. are used in different situations related to infinity). Numerous numerical examples and theoretical illustrations are given. The accuracy of the achieved results is continuously compared with those obtained by traditional tools used to work with infinities and infinitesimals. In particular, it is shown that the new approach allows one to observe mathematical objects involved in the Hypotheses of Continuum and the Riemann zeta function with a higher accuracy than it is done by traditional tools. It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems. Keywords: Numerical infinities and infinitesimals, numbers and numerals, grossone, infinity computer, numerical analysis, infinite sets, divergent series, continuum hypothesis, Riemann zeta function, numerical differentiation, ODEs, optimization, probability, fractals.
CommonCrawl
A well known characteristic of $k$-SAT instances is the ratio of the number of clauses $m$ over the number of variables $n$, i.e., the quotient $\rho = m/n$. For every $k$, there is a threshold value $\alpha$ s.t.\ for $\rho \ll \alpha$, most instances are satisfiable, and for $\rho \gg \alpha$ most instances are unsatisfiable. There has been a lot of research done for problems where $\rho \ll \alpha$, and for problems with sufficiently small $\rho$, $k$-SAT becomes solvable in polynomial time. See, for instance, Dimitris Achlioptas's survey article from the Handbook of Satisfiability (PDF). I am wondering if any work has been done in the other direction (where $\rho \gg \alpha$), e.g., if we can somehow transform the problem from CNF to DNF in this case to solve it quickly. So, essentially, What is known regarding SAT where $\rho = m/n \gg \alpha$? Moshe Vardi, Phase transitions and computational complexity, 2014. Let $\rho$ denote the clause ratio. As the value of $\rho$ increases beyond the threshold the problem becomes easier for existing SAT solvers, but not as easy as it was before reaching the threshold. There is a very steep increase in difficulty as we approach the threshold from below. After the threshold the problem becomes easier compared to the threshold but the decrease in difficulty is much less steep. $\rho \gg$ the threshold: $T_\rho(n)$ remains exponential in $n$ but the exponent decreases as $\rho$ increases. For such formulas lower bounds on the length of refutations in resolution and stronger propositional proof systems have been shown, starting with the paper "Many hard examples for resolution" by Chvátal and Szemerédi. These resolution lower bounds imply lower bounds on the runtime of DPLL- and CDCL-based SAT-solvers. The strongest lower bounds are for Polynomial Calculus, due to Ben-Sasson and Impagliazzo. For such formulas there are efficient deterministic algorithms for certifying unsatisfiability, i.e., algorithms that either output "UNSAT" or "Don't Know", where the answer "UNSAT" is required to be correct, and it has to output "UNSAT" on unsatisfiable formulas with high probability. The strongest results in that direction are due to Feige and Ofek. here is an older but relevant study/angle by a leading expert. In figure 4, we plot the estimated constrainedness down the heuristic branch for random 3-SAT problems. For L/N < 4.3, problems are under-constrained and soluble. As search progresses, $\kappa$ decreases as problems become more under-constrained and obviously soluble. For L/N > 4.3, problems are over-constrained and insoluble. As search progresses, $\kappa$ increases as problems become more over-constrained and obviously insoluble. the question asks about $m/n \gg\alpha$. but this is known from empirical analysis to be highly overconstrained and therefore basically approaching P-time instances (a solver "quickly" discovers they are unsolvable) and therefore not as theoretically interesting (because they do not "elicit/exercise" the exponential-time-behavior of solvers on average). however, have not personally seen papers/ transformations/ theory that prove this more theoretically/rigorously (other than this paper as a start on that). Not the answer you're looking for? Browse other questions tagged cc.complexity-theory sat random-k-sat phase-transition or ask your own question. What are the current best known upper and lower bounds on the (un)satisfiability threshold for random k-sat and/or 3-sat? 
What do we know about the phase transition of #P-Complete problems? What is the counting complexity of random 2-SAT?
CommonCrawl
The $L_\infty$-structure on symplectic cohomologyMar 28 2019We construct the chain level $L_\infty$-structure that extends the Lie bracket on symplectic cohomology. Cyclic actions on rational ruled symplectic four-manifoldsMar 27 2019Let $(M,\omega)$ be a ruled symplectic four-manifold. If $(M, \omega)$ is rational, then every homologically trivial symplectic cyclic action on $(M,\omega)$ is the restriction of a Hamiltonian circle action. Partly-local domain-dependent almost complex structuresMar 13 2019We fill a gap pointed out by N. Sheridan in the proof of independence of genus zero Gromov-Witten invariants from the choice of divisor in the Cieliebak-Mohnke perturbation scheme. Smoothly non-isotopic Lagrangian disk fillings of Legendrian knotsMar 12 2019In this paper, we construct the first families of distinct Lagrangian ribbon disks in the standard symplectic 4-ball which have the same boundary Legendrian knots, and are not smoothly isotopic or have non-homeomorphic exteriors. Intersection pairings in the N-fold reduced product of adjoint orbitsMar 05 2019In previous work we computed the symplectic volume of the symplectic reduced space of the product of N adjoint orbits of a compact Lie group. In this paper we compute the intersection pairings of the same object. The Hall Algebras of AnnuliFeb 27 2019We refine and prove the central conjecture of our first paper for annuli with at least two marked intervals on each boundary component by computing the derived Hall algebras of their Fukaya categories. Mahler's conjecture for some hyperplane sectionsFeb 24 2019We establish Mahler's conjecture for hyperplane sections of $\ell_p$-balls and of the Hanner polytopes. A note on disk counting in toric orbifoldsFeb 14 2019Feb 20 2019We compute orbi-disk invariants of compact Gorenstein semi-Fano toric orbifolds by extending the method used for toric Calabi-Yau orbifolds. As a consequence the orbi-disc potential is analytic over complex numbers.
CommonCrawl
NEWS: PhD Course on "Multi-Agent Distributed Optimization and Learning over Wireless Networks" , Paris-Saclay, 3-7/06/2019. - HYCON2 Workshop on Distributed Optimization in Large Networks and its Applications, Zurich, Swizerland, July 2013. Symposium on New Directions of Automatic Control, Seoul, Korea, October 2011. WISE-WAI project, Final Review Meeting, Padova, May 2011. Workshop on Multi-Agent Estimation and Control, Lund, Sweden, January 2010. In this work we focus on the problem of minimizing the sum of convex cost functions in a distributed fashion over a peer-to-peer network. In particular we are interested in the case in which communications between nodes are lossy and the agents are not synchronized among themselves. We address this problem by proposing a modified version of the relaxed ADMM (R-ADMM), which corresponds to the generalized Douglas-Rachford operator applied to the dual of our problem. By exploiting results from operator theory we are then able to prove the almost sure convergence of the proposed algorithm under i.i.d. random packet losses and asynchronous operation of the agents. By further assuming the cost functions to be strongly convex, we are able to prove that the algorithm converges exponentially fast in mean square in a neighborhood of the optimal solution. Moreover, we provide an upper bound to the convergence rate. Finally, we present numerical simulations of the proposed algorithm over random geometric graphs in the aforementioned lossy and asynchronous scenario. neighbors, a framework that we refer to as `partition-based' optimization. geometric graphs subject to i.i.d. random packet losses. In this work we address the problem of distributed optimization of the sum of convex cost functions in the context of multi-agent systems over lossy communication networks. Building upon operator theory, first, we derive an ADMM-like algorithm, referred to as relaxed ADMM (R-ADMM) via a generalized Peaceman-Rachford Splitting operator on the Lagrange dual formulation of the original optimization problem. This algorithm depends on two parameters, namely the averaging coefficient $\alpha$ and the augmented Lagrangian coefficient $\rho$ and we show that by setting $\alpha=1/2$ we recover the standard ADMM algorithm as a special case. Moreover, first, we reformulate our R-ADMM algorithm into an implementation that presents reduced complexity in terms of memory, communication and computational requirements. Second, we propose a further reformulation which let us provide the first ADMM-like algorithm with guaranteed convergence properties even in the presence of lossy communication. Finally, this work is complemented with a set of compelling numerical simulations of the proposed algorithms over random geometric graphs subject to i.i.d. random packet losses. We consider the problem of controlling a smart lighting system of multiple luminaires with collocated occupancy and light sensors. The objective is to attain illumination levels higher than specified values (possibly changing over time) at the workplace by adapting dimming levels using sensor information, while minimizing energy consumption. We propose to estimate the daylight illuminance levels at the workplace based on the daylight illuminance measurements at the ceiling. More specifically, this daylight estimator is based on a model built from data collected by light sensors placed at workplace reference points and at the luminaires in a training phase. 
Three estimation methods are considered: Regularized least squares, locally weighted regularized least squares, and cluster-based regularized least squares. This model is then used in the operational phase by the lighting controller to compute dimming levels by solving a linear programming problem, in which power consumption is minimized under the constraint that the estimated illuminance is higher than a specified target value. The performance of the proposed approach with the three estimation methods is evaluated using an open-office lighting model with different daylight conditions. We show that the proposed approach offers reduced under-illumination and energy consumption in comparison to existing alternative approaches. In this paper, we study the problem of real-time optimal distributed partitioning for perimeter patrolling in the context of multicamera networks for surveillance, where each camera has limited mobility range and speed, and the communication is unreliable. The objective is to coordinate the cameras in order to minimize the time elapsed between two different visits of each point of the perimeter. We address this problem by casting it into a convex problem in which the perimeter is partitioned into nonoverlapping segments, each patrolled by a camera that sweeps back and forth at the maximum speed. We then propose an asynchronous distributed algorithm that guarantees that these segments cover the whole patrolling perimeter at any time and asymptotically converge to the optimal centralized solution under reliable communication. We finally modify the proposed algorithm in order to attain the same convergence and covering properties even in the more challenging scenario, where communication is lossy and there is no channel feedback, i.e., the transmitting camera is not aware whether a packet has been received or not by its neighbors. In this work we introduce an algorithm for distributed average consensus which is able to deal with asynchronous and unreliable communication systems. It is inspired by two algorithms for average consensus already present in the literature, one which deals with asynchronous but reliable communication and the other which deals with unreliable but synchronous communication. We show that the proposed algorithm is exponentially convergent under mild assumptions regarding the nodes update frequency and the link failures. The theoretical results are complemented with numerical simulations.
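To give a flavour of the setting studied in these works, here is a toy R simulation of synchronous average consensus over a random geometric graph with i.i.d. link failures. This is a plain averaging scheme for illustration only, not the robust algorithm proposed in the papers.

```r
# Toy average-consensus simulation over a random geometric graph with
# i.i.d. symmetric link failures.
set.seed(42)
n   <- 20
pos <- matrix(runif(2 * n), n, 2)
A   <- (as.matrix(dist(pos)) < 0.4) & !diag(n)   # random geometric graph, radius 0.4

x      <- runif(n, 0, 10)   # initial values; their mean is the consensus target
target <- mean(x)
p_loss <- 0.3               # probability that a link fails in a given round
eps    <- 0.05              # step size

for (t in 1:500) {
  # Links fail independently; keep the failure pattern symmetric.
  up     <- matrix(runif(n * n) > p_loss, n, n)
  active <- A & up & t(up)
  # Standard consensus update: move toward the average of active neighbours.
  x <- x + eps * (as.vector(active %*% x) - rowSums(active) * x)
}
max(abs(x - target))        # small if the underlying graph is connected
```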
CommonCrawl
Problem we are trying to solve: Given some data, the goal of machine learning is to find patterns in the data. There are various settings, like supervised learning, unsupervised learning, reinforcement learning, etc. But the most common one is supervised learning, so we're going to focus only on that in the big picture. Here, you are given labelled data [called the "training data"], and you want to infer labels on new data [called the "test data"]. For instance, consider self-driving cars. Labelled data would include the image of the road ahead at a particular instant as seen from the car, and the corresponding label would be the steering angle [let's assume the speed is controlled manually, for simplicity]. The goal of a self-driving car is, given a new image of the road ahead, to figure out the optimal steering angle. How to solve: Most of supervised machine learning can be looked at using the following framework: You are given training data points $(x_1,y_1),…,(x_n,y_n)$, where $x_i$ is the data [e.g. road image in the example above], and $y_i$ is the corresponding label. You want to find a function $f$ that fits the data well, that is, given $x_i$, it outputs something close enough to $y_i$. Now where do you get this function $f$ from? One way, which is the most common in ML, is to define a class of functions $F$, and search in this class for the function that best fits the data. For example, if you want to predict the price of an apartment based on features like number of bedrooms, number of bathrooms, covered area, etc., you can reasonably assume that the price is a linear combination of all these features, in which case the function class $F$ is defined to be the class of all linear functions. For self-driving cars, the function class $F$ you need will be much more complex. How to evaluate: Note that just fitting the training data is not enough. Data are noisy; for instance, apartments with the same number of bedrooms, the same number of bathrooms and the same covered area are not all priced equally. Similarly, if you label data for self-driving cars, you can expect some randomness due to the human driver. What you need is that your framework should be able to extract out the pattern, and ignore the random noise. In other words, it should do well on unseen data. Therefore, the way to evaluate models is to hold out a part of the training data [called the "validation set"], and predict on this held out data to measure how good your model is. Now whatever you study in machine learning, you should try to relate the topics to the above big picture. For instance, in linear regression, the function class is linear and the evaluation method is square loss; in linear SVM, the function class is linear and the evaluation method is hinge loss; and so on. First understand these algorithms at a high level. Then, go into the technical details. You will see that finding the best function $f$ in the function class $F$ often results in an optimization problem, for which you use stochastic gradient descent. You can find some reasonable material on most of these by searching for "<topic> lecture notes" on Google. Usually, you'll find good lecture notes compiled by some professor teaching that course. The first few results should give you a good set to choose from. See Prasoon Goyal's answer to How should I start learning the maths for machine learning and from where? Skim through these. You don't need to go through them in a lot of detail.
You can come back to studying the math as and when required while learning ML. Then, for a quick overview of ML, you can follow the roadmap below. Most common settings: Supervised setting, Unsupervised setting, Semi-supervised setting, Reinforcement learning. Most common problems: Classification (binary & multiclass), Regression, Clustering. Preprocessing of data: Data normalization. Concepts of hypothesis sets, empirical error, true error, complexity of hypotheses sets, regularization, bias-variance trade-off, loss functions, cross-validation. Terminology & Basic concepts: Convex optimization, Lagrangian, Primal-dual problems, Gradients & subgradients, $\ell_1$- and $\ell_2$-regularized objective functions. Algorithms: Batch gradient descent & stochastic gradient descent, Coordinate gradient descent. Implementation: Write code for stochastic gradient descent for a simple objective function, tune the step size, and get an intuition of the algorithm. Support vector machines: Geometric intuition, primal-dual formulations, notion of support vectors, kernel trick, understanding of hyperparameters, grid search. Online tool for SVM: Play with this online SVM tool (scroll down to "Graphic Interface") to get some intuition of the algorithm. Top-down and bottom-up hierarchical clustering. Basic terminology: Priors, posteriors, likelihood, maximum likelihood estimation and maximum-a-posteriori inference. Latent Dirichlet Allocation: The generative model and basic idea of parameter estimation. Basic terminology: Bayesian networks, Markov networks / Markov random fields. Inference algorithms: Variable elimination, Belief propagation. Simple examples: Hidden Markov Models. Ising model. Basic terminology: Neuron, Activation function, Hidden layer. Convolutional neural networks: Convolutional layer, pooling layer, Backpropagation. Memory-based neural networks: Recurrent Neural Networks, Long Short-Term Memory. Tutorials: I'm familiar with this Torch tutorial (you'll want to look at the 1_supervised directory). There might be other tutorials in other deep learning frameworks. You can use the last day to catch up on anything left from previous days, or learn more about whatever topic you found most interesting / useful for your future work. While Murphy's book is more current and is more elaborate, I find Bishop's to be more accessible for beginners. You can choose one of them according to your level. At this point, you should have a working knowledge of machine learning. Beyond this, if you're interested in a particular topic, look for specific online resources on the topic, read seminal papers in the subfield, try finding some simpler problems and implement them. For deep learning, here's a tutorial from Yoshua Bengio's lab that was written in the initial days of deep learning: Deep Learning Tutorials. This explains the central ideas in deep learning, without going into a lot of detail. Because deep learning is a field that is more empirical than theoretical, it is important to code and experiment with models. Here is a tutorial in TensorFlow that gives implementations of many different deep learning tasks: aymericdamien/TensorFlow-Examples. Try running the algorithms, and play with the code to understand the underlying concepts better. Finally, you can refer to the Deep Learning book, which explains deep learning in a much more systematic and detailed manner. For the latest algorithms that are not in the book, you'll have to refer to the original papers.
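For the "implement stochastic gradient descent for a simple objective" item in the roadmap above, a minimal sketch on synthetic least-squares data could look like the following; the step size and number of epochs are exactly the knobs worth experimenting with.

```r
# Minimal stochastic gradient descent for least-squares linear regression.
# Synthetic data; step size and number of epochs are illustrative choices.
set.seed(0)
n <- 1000; p <- 5
X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))
beta_true <- rnorm(p)
y <- X %*% beta_true + rnorm(n, sd = 0.5)

sgd_lm <- function(X, y, step = 0.01, epochs = 20) {
  beta <- rep(0, ncol(X))
  for (e in seq_len(epochs)) {
    for (i in sample(nrow(X))) {          # one pass over a random permutation
      xi <- X[i, ]
      grad <- 2 * as.numeric(xi %*% beta - y[i]) * xi   # gradient of (x'b - y)^2
      beta <- beta - step * grad
    }
  }
  beta
}

round(cbind(truth = beta_true, sgd = sgd_lm(X, y)), 2)
```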
There are different levels at which you can understand an algorithm. At the highest level, you know what an algorithm is trying to do and how. So for instance, gradient descent finds a local minimum by taking small steps along the negative gradient. Going slightly deeper, you will delve into the math. Again, taking gradient descent for example, you will learn about how to take gradient for vector quantities, norms, etc. At about the same level of depth, you'll also have other variants of the algorithm, like handling constraints in gradient descent. This is also the level at which you learn how to use libraries to run your specific algorithm. They both have the same functionality, but the second one is 20 times faster. Similarly, you will learn some other important implementation techniques, such as parallelizing code, profiling, etc. You will also learn some algorithm-specific details, like how to initialize your model for faster convergence, how to set the termination condition to trade-off accuracy and training time, how to handle corner cases [like saddle points in gradient descent], etc. Finally, you will learn techniques to debug machine learning code, which is often tricky for beginners. Finally, comes the depth at which libraries are written. This requires way more systems knowledge than the previous steps — knowing how to handle very large data, computational efficiency, effective memory management, writing GPU code, effective multi-threading, etc. Now, in how much detail do you need to know the algorithms? For the most part, you don't need to know the algorithms at the depth of library-implementation, unless you are into systems programming. For most important algorithms in ML — like gradient descent, SVM, logistic regression, neural networks, etc. — you need to understand the math, and how to use libraries to run them. This would be sufficient if you are not an ML engineer, and only use ML as a black-box in your daily work. However, if you are going to be working as an ML engineer / data scientist / research scientist, you need to also implement some algorithms from scratch. Usually the ones covered in online courses are enough. This helps you learn many more nuances of different tools and algorithms. Also, this will help you with new algorithms that you might need to implement.
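Since the advice above is to implement a few algorithms from scratch, here is a small, self-contained sketch of stochastic gradient descent for least-squares linear regression in Python/NumPy. The synthetic data, step size and epoch count are illustrative assumptions of mine, not prescriptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                 # 200 samples, 3 features (synthetic data)
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=200)   # noisy labels

    w = np.zeros(3)
    step = 0.01                                   # the step size you are meant to tune
    for epoch in range(50):
        for i in rng.permutation(len(y)):         # visit the samples in random order
            grad = 2 * (X[i] @ w - y[i]) * X[i]   # gradient of the single-sample loss (x_i.w - y_i)^2
            w -= step * grad

    print(w)   # should end up close to true_w

Playing with the step size and the termination condition in a toy script like this is a quick way to build the algorithm-specific intuition mentioned above.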
CommonCrawl
Your task is to find out the number in row y and column x. Any programming language is allowed. This is a code-golf challenge so shortest code wins. The position \$(x, y)\$ is located on arm \$\max(x, y)\$ (assigned to variable z). Then, the largest number on arm \$n\$ is \$n^2\$, which alternates between being in the bottom left and top right position on the arm. Subtracting \$x\$ from \$y\$ gives the sequence \$-n+1, -n+2, \ldots, -1, 0, 1, \ldots, n-1, n-2\$ moving along arm \$n\$, so we choose the appropriate sign based on the parity of \$n\$, adjust by \$n-1\$ to get a sequence starting at 0, and subtract this value from \$n^2\$. Thanks to Mr. Xcoder for saving a byte. First time golfing! I'm more than aware this is not optimal, but whatever. Essentially runs on the same principle as @Doorknob C code. Edit: Same technique as @Doorknob's answer, just arrived at differently. The difference between the diagonal elements of the spiral is the arithmetic sequence \$ 0, 2, 4, 6, 8, \ldots \$. Sum of \$ n \$ terms of this is \$ n(n - 1) \$ (by the usual AP formula). This sum, incremented by 1, gives the diagonal element at position \$ (n, n) \$. Given \$ (x, y) \$, we find the maximum of these two, which is the "layer" of the spiral that this point belongs to. Then, we find the diagonal value of that layer as \$ v = n(n-1) + 1 \$. For even layers, the value at \$ (x, y) \$ is then \$ v + x - y \$, for odd layers \$ v - x + y \$. where \$ m = max(x, y) \$. where \$ k = abs(x-y) + x + y \$. This is the function the solution implements. Adapted from Doorknob's solution over a few beers. An almost literal translation of Rushabh Mehta's answer. h.MZQ | Q's maximal value. Uses Doorknob's method. Way too long. Computes the diagonal term with ²_'Ṁ and adds/subtracts to the correct index value with ṀḂḤ'×I. -1 byte thanks to @Emigna changing Èi to G. Port of @sundar's MATL answer, so make sure to upvote him! uses math;var x,y,z:word;begin read(x,y);z:=max(x,y);write(z*z-z+1+(1and z*2-1)*(y-x))end. Port of Doorknob's answer, but sundar's answer gave me idea for z mod 2*2-1 which I transformed into 1and z*2-1 to remove space.
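For readers who prefer an ungolfed version, the following Python sketch is my own transcription of the shared formula (the row/column orientation follows the Pascal answer above), not one of the submitted answers.

    def spiral(x, y):
        """Value at column x, row y of the diagonal spiral (1-indexed)."""
        z = max(x, y)               # the arm the cell sits on
        v = z * z - z + 1           # the diagonal value z*(z-1) + 1
        sign = 1 if z % 2 else -1   # the direction alternates with the arm's parity
        return v + sign * (y - x)

    # quick check of the first few cells
    assert [spiral(1, 1), spiral(1, 2), spiral(2, 2), spiral(2, 1), spiral(3, 1)] == [1, 2, 3, 4, 5]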
CommonCrawl
We will now state a remarkable theorem which tells us that if $H_1$ and $H_2$ are both Hadamard matrices then so is their Kronecker product. Theorem 1: If $H_1$ is an $m \times m$ Hadamard matrix and $H_2$ is an $n \times n$ Hadamard matrix then $H_1 \otimes H_2$ is an $mn \times mn$ Hadamard matrix. We know that if $H$ is a Hadamard matrix of order $n$ then $n = 1$, $n = 2$, or $n \equiv 0 \pmod 4$, but in general, it is not known if these are sufficient conditions for the existence of a Hadamard matrix of order $n$; that is, there may exist a positive integer $n > 2$ with $n \equiv 0 \pmod 4$ for which no Hadamard matrix exists. The theorem above gives us a criterion for the existence of Hadamard matrices of certain orders. In particular, if $n\equiv 0 \pmod 4$ and $n = st$ where [$s, t = 1, 2$] or [$s \equiv 0 \pmod 4$ and $t \equiv 0 \pmod 4$], and it is known that Hadamard matrices of order $s$ and $t$ exist, then it is guaranteed that a Hadamard matrix of order $n$ exists.
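As a quick numerical illustration of Theorem 1 (a sketch added here, not part of the original notes), one can take the order-2 Hadamard matrix and check that its Kronecker square is again Hadamard, using the defining property $H H^T = nI_n$:

    import numpy as np

    H2 = np.array([[1,  1],
                   [1, -1]])        # Hadamard matrix of order 2
    H4 = np.kron(H2, H2)            # Kronecker product, order 4

    # A +/-1 matrix H of order n is Hadamard iff H H^T = n I
    assert np.array_equal(H2 @ H2.T, 2 * np.eye(2))
    assert np.array_equal(H4 @ H4.T, 4 * np.eye(4))

Iterating np.kron in this way reproduces the classical Sylvester construction of Hadamard matrices of order $2^k$.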
CommonCrawl
Let $\mathcal R$ be a binary relation on a non-empty set $S$. Suppose that for each $a \in S$ there exists $b \in S$ such that $a \mathrel{\mathcal R} b$; that is, that $\mathcal R$ is a left-total relation (specifically a serial relation). The axiom of dependent choice then asserts that there exists a sequence $\langle x_n \rangle_{n \in \mathbb N}$ in $S$ such that $x_n \mathrel{\mathcal R} x_{n+1}$ for all $n \in \mathbb N$. An equivalent formulation instead supposes that for each $b \in S$ there exists $a \in S$ such that $a \mathrel{\mathcal R} b$; that is, that $\mathcal R$ is a right-total relation; the conclusion is then a sequence satisfying $x_{n+1} \mathrel{\mathcal R} x_n$ for all $n \in \mathbb N$. Some sources call this the Axiom of Dependent Choices, reflecting the infinitely many choices made. This axiom can be abbreviated ADC or simply DC. This axiom is a weaker form of the axiom of choice, as shown in Axiom of Choice Implies Axiom of Dependent Choice. This axiom is also a stronger form of the axiom of countable choice, as shown in Axiom of Dependent Choice Implies Axiom of Countable Choice. Dependent Choice (Fixed First Element) shows that it is possible to choose any element of the set to be the first element of the sequence.
CommonCrawl
I am learning the basic concept of Newton's method, and my textbook introduces a function for which the Newton iterates oscillate, as an example where Newton's method is inapplicable. However, I think that for a function that is differentiable on its whole domain, oscillation actually tells us where the root is: if $x_n$ oscillates between $1+h$ and $1-h$ for some $h>0$, the root would be $1$. So I would say oscillation is a good thing, since the value that the iterates oscillate around should be the root. Is my reasoning correct?
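To experiment with the phenomenon, here is a small sketch of my own; the function $f(x)=x^3-2x+2$ is the classic textbook example and is an assumption, not necessarily the one from the book referred to above. Starting from $x_0=0$, the Newton iterates oscillate between $0$ and $1$, even though the only real root is near $-1.77$, so the midpoint of the oscillation need not be a root.

    def newton(f, fprime, x0, steps=8):
        x = x0
        seq = [x]
        for _ in range(steps):
            x = x - f(x) / fprime(x)   # Newton update
            seq.append(x)
        return seq

    f = lambda x: x**3 - 2*x + 2
    fp = lambda x: 3*x**2 - 2

    print(newton(f, fp, 0.0))   # [0.0, 1.0, 0.0, 1.0, ...] -- a 2-cycle, not convergence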
CommonCrawl
Abstract: We consider Projected Entangled Pair State (PEPS) models with a global $\mathbb Z_N$ symmetry, which are constructed from $\mathbb Z_N$-symmetric tensors and are thus $\mathbb Z_N$-invariant wavefunctions, and study the occurrence of long-range order and symmetry breaking in these systems. First, we show that long-range order in those models is accompanied by a degeneracy in the so-called transfer operator of the system. We subsequently use this degeneracy to determine the nature of the symmetry broken states, i.e., those stable under arbitrary perturbations, and provide a succinct characterization in terms of the fixed points of the transfer operator (i.e.\ the different boundary conditions) in the individual symmetry sectors. We verify our findings numerically through the study of a $\mathbb Z_3$-symmetric model, and show that the entanglement Hamiltonian derived from the symmetry broken states is quasi-local (unlike the one derived from the symmetric state), reinforcing the locality of the entanglement Hamiltonian for gapped phases.
CommonCrawl
In order to better understand the motion of a serial manipulator arm, an example is given below. A 6 DOF arm is shown, and contains three prismatic and three revolute joints. For this arm, the three revolute joints compose a spherical wrist. The forward and inverse kinematics of the arm will be determined and a demonstration of the serial arm's motion will be presented. 1. Label all joints i = 1 to n. 2. Assign z-axes for joints 0 to n-1 ($z_0$ along joint 1, etc.). 3. Assign $x_0$ normal to $z_0$. 4. Assign $x_1$ through $x_{n-1}$, which lie at the common normals between $z_0$ and $z_{n-1}$. 5. Establish $y_1$ to $y_{n-1}$ to complete each frame. 6. Assign $z_n$ freely (but carefully) and define $x_n$. 7. Create the table of DH link parameters, defining $\alpha_i$, $a_i$, $d_i$, and $\theta_i$ for each joint. 8. Create $T_i^{i-1}$ for i = 1 to n. 9. Solve $T_n^0 = T_1^0 \, T_2^1 \cdots T_n^{n-1}$. 10. Show the position as the last column of $T_6^0$ and the orientation as the first three columns of $T_6^0$, excluding the fourth row. 1) Start with the given tool pose $T_6^0$. The top left $3\times 3$ is the orientation matrix and the 4th column is the position matrix. 2) Solve the forward kinematics for $T_3^0$ and $R_6^3$. Normally $R_6^3$ is a spherical wrist, so it controls the orientation of the end effector. $T_3^0$ will work on satisfying the position requirement. 3) Find the location of the wrist center, $p_c$. The vector $(x,y,z)$ would be the end effector location. We need to back it up to the wrist center, $(x_p, y_p, z_p)$. 4) Set $p_c$ equal to the last column of $T_3^0$ from the forward kinematics. Solve these equations for the first 3 joint parameters. 5) From the forward kinematics, use $R_3^0$ and $R_6^0$. Determine $R_6^3$ (given) $= (R_3^0)^T \, R_6^0$. Remember that $R_6^0 = R_3^0 \, R_6^3$. For this case, it is a spherical wrist. 6) Solve for the final joint parameters by setting the orientation matrix equal to $R_6^3$. Remember to keep track of the number of solutions. This video shows how the robot moves through some simple lines in the x, y, and z direction. Next, it shows the changing orientation of the spherical wrist. Last, it moves through a more complex shape by solving the inverse kinematics as shown above.
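Step 8 above ('create $T_i^{i-1}$') is often easiest to see in code. Below is a short sketch using the classic DH convention; the parameter table is made up for illustration, and DH conventions differ slightly between textbooks, so treat it as an assumption rather than the exact matrices used in the example arm.

    import numpy as np

    def dh_transform(alpha, a, d, theta):
        """Homogeneous transform T_i^(i-1) from classic DH parameters (alpha_i, a_i, d_i, theta_i)."""
        ca, sa = np.cos(alpha), np.sin(alpha)
        ct, st = np.cos(theta), np.sin(theta)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0,        sa,       ca,      d],
            [0,         0,        0,      1],
        ])

    # Chain the joint transforms to get T_n^0 (step 9); this parameter table is hypothetical.
    dh_table = [(0.0, 0.3, 0.1, 0.5), (np.pi / 2, 0.0, 0.2, 1.0), (0.0, 0.25, 0.0, -0.7)]
    T = np.eye(4)
    for alpha, a, d, theta in dh_table:
        T = T @ dh_transform(alpha, a, d, theta)

    position = T[:3, 3]        # the last column (step 10)
    orientation = T[:3, :3]    # the top-left 3x3 rotation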
CommonCrawl
One of my favorite topics when I discuss models with multiple variables is the notion of transgressive overyielding. When two species are in competition, there is a range of situations for which the total biomass when both species are present is superior to what the most productive species would achieve alone. I like this example not only because it has important implications for the biodiversity-ecosystem functioning relationship, but because it is a good opportunity to develop intuitions about the notoriously difficult notion of isoclines, as it lends itself to an easy visual interpretation. And then, as I was writing this blog post, @PillGouh19 published a paper that contains some provocative ideas about transgressive overyielding and its relationship with the mechanisms of coexistence. Because I have not yet processed the paper, much less the maths in it, I will not go into much detail – but it is interesting to see that the discussion over transgressive overyielding, coexistence, stability, and competition is apparently not over. The competition strength is represented by the values of α, where $\alpha_1$ is the effect of species 1 on species 2, and conversely. So far, so good. The relevant information is that the isocline of a population is a linear function of the density of the other population; its slope is the ratio between limitation by the other species and self-limitation, and its intercept is the carrying capacity of the species alone. If the two isoclines intersect, there is an equilibrium where both species coexist. Well, that's not quite true – we want the two lines to intersect at a point where $N_1^\star$ and $N_2^\star$ are both larger than 0. Notice something cool? The isoclines depend only on the carrying capacities and the inter-specific competition strengths. Neat, right? In other formulations of the model, specifically those using both intra- and inter-specific competition coefficients, the growth rate remains here. The results are the same, but we have to deal with a lot more parameters at every step, so this simplification makes sense (and there is, in fact, one more simplification coming). In any case, the values of $K$ control the intercepts, and the values of $\alpha$ control the slopes. So by picking the right combination of these parameters, we can make the lines cross (or not). By the way, we are only interested in the lines crossing for $N_1$ and $N_2$ greater than 0 – there are many situations in which the lines cross but where the equilibrium is not biologically meaningful. There is a nice trick to find when the equilibrium is stable, i.e. $\alpha_1 < K_2/K_1 < 1/\alpha_2$ [@Case00]. Something familiar? The limit terms, $\alpha_1$ and $1/\alpha_2$, are the slopes of the isoclines. It's nice how everything fits together, isn't it? You can play with the following figure to figure out the conditions under which the equilibrium exists. The isocline for the first population is in green, and for the second population in brown. The solid circles on each axis are the values of $K_1$ and $K_2$. The dot at the intersection between the two isoclines will turn solid when the equilibrium is stable, and white when it is unstable. What happens across this line, the Relative Yield Total (or RYT), is that we trade units of $N_2$ and $N_1$ at a rate of $-K_2/K_1$, so that at any point, the value of this line is $N_1/K_1+N_2/K_2$. When the equilibrium is below this line, it is visually obvious that $N_1^\star + N_2^\star$ is lower than the best-performing monoculture.
But when the equilibrium is above the line, then the stably coexisting two-species mix is outperforming the best monoculture, and transgressive overyielding occurs. Of course, it is also possible to work this out directly from the model, but I think that the visual intuition of isoclines (although they are limited to two-species models) is important. The whole analysis, including models with non-linear isoclines, is done in @Lore04, where there are situations in which coexistence and transgressive overyielding are distinct.
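To make the picture concrete, here is a small numerical sketch of my own. It uses the common parameterization $dN_i/dt = r_i N_i (K_i - N_i - a_{ij} N_j)/K_i$, where $a_{ij}$ is the effect of species $j$ on species $i$; note that this labelling of the competition coefficients is an assumption and may not match the $\alpha$'s of the post. With symmetric, weak competition the coexistence equilibrium overshoots the best monoculture:

    import numpy as np
    from scipy.integrate import odeint

    K1, K2 = 100.0, 100.0    # carrying capacities (illustrative values)
    a12, a21 = 0.4, 0.4      # inter-specific competition coefficients
    r1, r2 = 1.0, 1.0

    def lv(N, t):
        N1, N2 = N
        return [r1 * N1 * (K1 - N1 - a12 * N2) / K1,
                r2 * N2 * (K2 - N2 - a21 * N1) / K2]

    N = odeint(lv, [5.0, 5.0], np.linspace(0, 200, 2000))[-1]
    print(N, N.sum())             # ~[71.4, 71.4]; total ~142.9 > max(K1, K2): transgressive overyielding
    print(N[0] / K1 + N[1] / K2)  # relative yield total ~1.43

Here the stability condition quoted above ($\alpha_1 < K_2/K_1 < 1/\alpha_2$) holds, and the equilibrium sits above the RYT line.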
CommonCrawl
Ramanarayan, H and Abinandanan, TA (2003) Phase field study of grain boundary effects on spinodal decomposition. In: Acta Materialia, 51 (16). pp. 4761-4772. We have developed a phase field model of a polycrystalline alloy by combining the Cahn–Hilliard model [J Chem Phys 28 (1958) 258] with a model of polycrystals due to Fan and Chen [Acta Mater 45 (1997) 3297]. We have used this model to study grain boundary (GB) effects on spinodal decomposition (SD) in two-dimensional (2D) systems. In binary A–B systems with constant atomic mobility, when the GB-energy $(\gamma_\alpha)$ of the A-rich $\alpha$ phase is lower than that $(\gamma_\beta)$ of the B-rich $\beta$ phase, decomposition starts by enriching the GB with species A, setting off a composition wave that produces alternating $\alpha$ and $\beta$ bands near the GB. Simultaneously, the grain interiors undergo normal SD. Thus, when decomposition ends, GB-bands coexist with grain interiors with spinodal microstructure. The number of GB bands is rationalized in terms of $(\gamma_\beta - \gamma_\alpha)$ and the rate of SD in the grain interior. Further, during decomposition, grain growth is effectively suppressed.
CommonCrawl
The area of the rectangle is $18$cm$^2$ and its perimeter is $18$cm. The area of the second shape is $12$cm$^2$ and its perimeter is $22$cm. Thomas from Colet Court examined the eight shapes which were drawn on the cards. He labelled the shapes A, B, C, D, E, F, G and H, going from left to right in the top row, then left to right in the bottom row. The perimeter is always bigger except for one (Shape G). I found if I did $4\times4$ I would get an area of $16$. If I counted the sides there would be four on each side: $4+4+4+4 = 16$. The area and perimeter are the same. The same happened if you have a rectangle that has a length of $6$ and a width of $3$. Yes, you can draw a shape in which the perimeter is numerically twice the area: it is a $2$ by $2$ square, because the area is $4$cm$^2$ and the perimeter is $8$cm. ...you go from a dented square to a square shape. By inserting a dent in your shape, the area gets reduced by the dent. You can say that the more indents in the figure, the more perimeter. First we picked a shape which was a square, so we looked at the area and perimeter. The area was $25$cm$^2$ and the perimeter was $20$cm. We took $1$ chunk out of the top of the square and it did make the perimeter bigger and the area became smaller. The perimeter became bigger because it adds on $2$ more lines, so the perimeter became $22$cm. Very, very well done all of you. You have obviously put a lot of thought into this problem.
CommonCrawl
1. For how many $n=2,3,4,\ldots,99,100$ is the base-$n$ number $235236_n$ a multiple of $7$? 2. What is the smallest base-10 integer that can be represented as $AA_5$ and $BB_7$, where $A$ and $B$ are valid digits in their respective bases? Write $P(n) = 2n^5+3n^4+5n^3+2n^2+3n+6$ for the value of $235236_n$; since the digit $6$ must be less than the base, this holds true for any $n\geq 7$. Consider all possible values of $P(n) \bmod 7$ when $0 \leq n \leq 6$, $n\in \mathbb Z$. Generally, $P(7k + 1) \equiv 0 \pmod 7 \quad\forall \;k\in \mathbb Z^+$. All possible values of $n$ are: $8,15,22,29,36,43,50,57,64,71,78,85,92,99$. So there are 14 possible values of $n$ such that $235236_n$ is divisible by 7. For the second problem, $AA_5 = 5A + A = 6A$ and $BB_7 = 7B + B = 8B$. From this, we know that the required base-10 integer must be divisible by both 6 and 8. Least common multiple of 6 and 8 = 24. So the answer is 24.
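Both answers are easy to double-check by brute force; the following short Python sketch is mine and not part of the original solution.

    # Problem 1: count the bases n (the digit 6 forces n >= 7) for which 235236_n is divisible by 7
    digits = [2, 3, 5, 2, 3, 6]

    def value(n):
        v = 0
        for d in digits:
            v = v * n + d   # Horner evaluation of the base-n numeral
        return v

    print(sum(1 for n in range(7, 101) if value(n) % 7 == 0))   # 14

    # Problem 2: smallest integer equal to AA_5 (= 6A) and BB_7 (= 8B) for valid digits A, B
    common = {6 * A for A in range(1, 5)} & {8 * B for B in range(1, 7)}
    print(min(common))   # 24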
CommonCrawl
Viechtbauer (2007) is a general article about meta-analysis focusing in particular on random- and mixed-effects (meta-regression) models. An example dataset, based on a meta-analysis by Linde et al. (2005) examining the effectiveness of Hypericum perforatum extracts (St. John's wort) for treating depression, is used in the paper to illustrate the various methods. The data are given in the paper in Table 1 (p. 105). Variables ai and ci indicate the number of participants with significant improvements between baseline and the follow-up assessment in the treatment and the placebo group, respectively, variables n1i and n2i are the corresponding group sizes, variable yi is the log of the relative improvement rate (i.e., the improvement rate in the treatment group divided by the improvement rate in the placebo group), vi is the corresponding sampling variance, dosage is the weekly dosage (in grams) of the Hypericum extract used in each study, major indicates whether a study was restricted to participants with major depression or not (1 or 0, respectively), baseline denotes the average score on the Hamilton Rating Scale for Depression (HRSD) at baseline (i.e., before treatment begin), and duration indicates the number of treatment weeks before response assessment. Variables yi and vi are not actually included in the original dataset and were added by means of the escalc() function. Note that, for illustration purposes, only a subset of the data from the Linde et al. (2005) meta-analysis are actually included in this example. Therefore, no substantive interpretations should be attached to the results of the analyses given below. With transf=exp, the values of the outcome measure (i.e., the log relative improvement rates) and corresponding confidence interval bounds are exponentiated and hence transformed back from the log scale. Therefore, variable yi now indicates the relative improvement rate, and ci.lb and ci.ub are the bounds of an approximate 95% confidence interval for the true relative improvement rate in the individual studies (note that this is not a permanent change – object dat still contains the log transformed values, which we need for the analyses below). Therefore, the estimated relative rate is 1.38 with an approximate 95% CI of 1.26 to 1.52. However, the Q-test suggests that the true (log) relative rates are not homogeneous. We can interpret the model estimate obtained above as an estimate of the (weighted) average of the true log relative rates for these 17 studies. This is the so-called fixed-effects model, which allows us to make a conditional inference (about the average effect) that only pertains to this set of studies. We can model the heterogeneity in the true log relative rates and apply a random-effects model. This allows us to make an unconditional inference about a larger population of studies from which the included set of studies are assumed to be a random selection. The baseline HRSD score will be used to reflect the severity of the depression in the patients. Since these two variables may interact, their product will also be included in the model. Finally, for easier interpretation, we will also center the variables at (roughly) their means when including them in the model. I(baseline - 20) -0.0672 0.0352 -1.9086 0.0563 -0.1363 0.0018 . These are the same results as given in Table 2 on page 113. Therefore, it appears that St. 
John's wort is more effective for lower baseline HRSD scores (the coefficient is negative, but just misses being significant at $\alpha = .05$ with $p = .06$). On the other hand, the total dosage of St. John's wort administered during the course of a study does not appear to be related to the treatment effectiveness ($p = .56$) and there does not appear to be an interaction between the two moderators ($p = .65$). So, for a low baseline HRSD score (i.e., mildly depressed patients), the estimated average relative improvement rate is quite high (2.67 with 95% CI: 1.46 to 4.88), but at a high baseline HRSD score (i.e., more severely depressed patients), the estimated average relative improvement rate is low (1.26 with 95% CI: 0.99 to 1.61) and in fact not significantly different from 1. Linde, K., Berner, M., Egger, M., & Mulrow, C. (2005). St John's wort for depression: Meta-analysis of randomised controlled trials. British Journal of Psychiatry, 186, 99–107. Viechtbauer, W. (2007). Accounting for heterogeneity via random-effects models and moderator analyses in meta-analysis. Zeitschrift für Psychologie / Journal of Psychology, 215(2), 104–121. Note that the equation used to compute these bounds is slightly different from the equation given in footnote 4 in the article. The bounds given above do take the uncertainty in the estimate of $\mu$ into consideration and are therefore a bit wider than the ones reported in the article.
CommonCrawl
Auteur(s): Neumann Walter D., Pichon A. We investigate the relationships between the Lipschitz outer geometry and the embedded topological type of a hypersurface germ in $(\mathbb C^n,0)$. It is well known that the Lipschitz outer geometry of a complex plane curve germ determines and is determined by its embedded topological type. We prove that this does not remain true in higher dimensions. Namely, we give two normal hypersurface germs $(X_1,0)$ and $(X_2,0)$ in $(\mathbb C^3,0)$ having the same outer Lipschitz geometry and different embedded topological types. Our pair consist of two superisolated singularities whose tangent cones form an Alexander-Zariski pair having only cusp-singularities. Our result is based on a description of the Lipschitz outer geometry of a superisolated singularity. We also prove that the Lipschitz inner geometry of a superisolated singularity is completely determined by its (non embedded) topological type, or equivalently by the combinatorial type of its tangent cone. Auteur(s): Neumann Walter D, Pedersen Helge Møller, Pichon A. Any germ of a complex analytic space is equipped with two natural metrics: the outer metric induced by the hermitian metric of the ambient space and the inner metric, which is the associated riemannian metric on the germ. These two metrics are in general nonequivalent up to bilipschitz homeo-morphism. We show that minimal surface singularities are Lipschitz normally embedded, i.e., their outer and inner metrics are bilipschitz equivalent, and that they are the only rational surface singularities with this property. The proof is based on a preliminary result which gives a general characterization of Lipschitz normally embedded normal surface singularities. Auteur(s): Neumann Walter D, Pichon A. We describe the Lipschitz geometry of complex curves. To a large part this is well known material, but we give a stronger version even of known results. In particular, we give a quick proof, without any analytic restrictions, that the outer Lipschitz geometry of a germ of a complex plane curve determines and is determined by its embedded topology. This was first proved by Pham and Teissier, but in an analytic category. We also show the embedded topology of a plane curve determines its ambient Lipschitz geometry. Auteur(s): Birbrair Lev, Neumann Walter D, Pichon A. We describe a natural decomposition of a normal complex surface singularity $(X,0)$ into its ``thick'' and ``thin'' parts. The former is essentially metrically conical, while the latter shrinks rapidly in thickness as it approaches the origin. The thin part is empty if and only if the singularity is metrically conical; the link of the singularity is then Seifert fibered. In general the thin part will not be empty, in which case it always carries essential topology. Our decomposition has some analogy with the Margulis thick-thin decomposition for a negatively curved manifold. However, the geometric behavior is very different; for example, often most of the topology of a normal surface singularity is concentrated in the thin parts.By refining the thick-thin decomposition, we then give a complete description of the intrinsic bilipschitz geometry of $(X,0)$ in terms of its topology and a finite list of numerical bilipschitz invariants. We prove that the outer Lipschitz geometry of the germ of a normal complex surface singularity determines a large amount of its analytic structure. 
In particular, it follows that any analytic family of normal surface singularities with constant Lipschitz geometry is Zariski equisingular. We also prove a strong converse for families of normal complex hypersurface singularities in $\C^3$: Zariski equisingularity implies Lipschitz triviality. So for such a family Lipschitz triviality, constant Lipschitz geometry and Zariski equisingularity are equivalent to each other.
CommonCrawl
Suppose $G$ is a finite group with no abelian centralizers. Is it true that $G$ must be a 2-group? No, this is not necessarily the case. If $G$ is a group such that there are no abelian centralizers in $G$, then $G \times H$ also has this property for any group $H$.
CommonCrawl
Convolution algebras for double groupoids? There is a lot of work of course on convolution algebras of measured groupoids, and this gives "Noncommutative geometry". However there is a lot of interest in algebraically structured groupoids, for example groupoids internal to the categories of say groups, or groupoids, or Lie algebras. Thus double groupoids, i.e. groupoids internal to groupoids, can be seen as "more noncommutative" than groupoids. They are also quite difficult to understand in general, though special cases have been studied extensively, e.g. 2-groupoids, and what are called 2-groups (groupoids internal to groups). They all have relations with crossed modules. How does one find then a jacking up of Noncommutative Geometry to take into account these algebraically structured groupoids? can be extended to a kind (or rather many kinds!) of matrix convolution looking at all decompositions of $z$ as a matrix composition (I find it difficult to write this down in this system!) but it all depends on the size $m \times n$ of the matrix, and because one needs the interchange law it all gets complicated and not reducible to the individual compositions in the double groupoid. If this could be done, it might open up new worlds!
CommonCrawl
This semester (Spring 2006) the seminar meets Wednesdays 1:50-2:40, in Milner 317. A Very Elementary Introduction to the d-bar_b Problem on the Heisenberg group in C^n, IV: Convolution and the group structure on the Heisenberg group with an application to the \bar\partial_b-problem. Pointwise estimates on kernels of a family of heat equations in $\mathbb R\times \mathbb C$ with applications to several complex variables.
CommonCrawl
Problems of Eigenvectors and Eigenspaces. From introductory exercise problems to linear algebra exam problems from various universities. Basic to advanced level. The dimension of the eigenspace corresponding to an eigenvalue is less than or equal to the multiplicity of that eigenvalue. The techniques used here are practical for $2 \times 2$ and $3 \times 3$ matrices. For the given matrix A, find a basis for the corresponding eigenspace for the given eigenvalue.
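For the small matrices mentioned above, a computer algebra system is handy for confirming a hand computation. The matrix below is an arbitrary example of mine, shown with SymPy:

    import sympy as sp

    A = sp.Matrix([[4, 1, 0],
                   [0, 4, 0],
                   [0, 0, 3]])   # example matrix; the eigenvalue 4 is defective

    for eigenvalue, alg_mult, basis in A.eigenvects():
        # 'basis' spans the eigenspace, so its length is the geometric multiplicity
        print(eigenvalue, alg_mult, [list(v) for v in basis])

For this matrix the eigenvalue $4$ has algebraic multiplicity $2$ but a one-dimensional eigenspace, illustrating the inequality quoted above.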
CommonCrawl
Abstract: It is shown that $2\beta_1(\Gamma)\leq h(\Gamma)$ for any countable group $\Gamma$, where $\beta_1(\Gamma)$ is the first $\ell^2$-Betti number and $h(\Gamma)$ the uniform isoperimetric constant. In particular, a countable group with non-vanishing first $\ell^2$-Betti number is uniformly non-amenable. We then define isoperimetric constants in the framework of measured equivalence relations. For an ergodic measured equivalence relation $R$ of type $\mathrm{II}_1$, the uniform isoperimetric constant $h(R)$ of $R$ is invariant under orbit equivalence and satisfies $$ 2\beta_1(R)\leq 2C(R)-2\leq h(R), $$ where $\beta_1(R)$ is the first $\ell^2$-Betti number and $C(R)$ the cost of $R$ in the sense of Levitt (in particular $h(R)$ is a non-trivial invariant). In contrast with the group case, uniformly non-amenable measured equivalence relations of type $\mathrm{II}_1$ always contain non-amenable subtreeings. An ergodic version $h_e(\Gamma)$ of the uniform isoperimetric constant $h(\Gamma)$ is defined as the infimum over all essentially free ergodic and measure preserving actions $\alpha$ of $\Gamma$ of the uniform isoperimetric constant $h(R_\alpha)$ of the equivalence relation $R_\alpha$ associated to $\alpha$. By establishing a connection with the cost of measure-preserving equivalence relations, we prove that $h_e(\Gamma)=0$ for any lattice $\Gamma$ in a semi-simple Lie group of real rank at least 2 (while $h_e(\Gamma)$ does not vanish in general). Journal reference: Geometry, Groups, and Dynamics 2 (2008), 595-617.
CommonCrawl
Let $R$ and $S$ be rings with $1\neq 0$. Prove that every ideal of the direct product $R\times S$ is of the form $I\times J$, where $I$ is an ideal of $R$, and $J$ is an ideal of $S$. Let $K$ be an ideal of the direct product $R\times S$. Define \[I=\{a\in R \mid (a,b)\in K \text{ for some } b\in S\} \quad \text{ and } \quad J=\{b\in S \mid (a,b)\in K \text{ for some } a\in R\}.\] We claim that $I$ and $J$ are ideals of $R$ and $S$, respectively. Let $a, a'\in I$. Then there exist $b, b'\in S$ such that $(a, b), (a', b')\in K$. Since $K$ is an ideal, we have $(a,b)+(a',b')=(a+a', b+b')\in K$. It follows that $a+a'\in I$. Also, for any $r\in R$ we have \[(r,0)(a,b)=(ra,0)\in K\] because $K$ is an ideal. Thus, $ra\in I$, and hence $I$ is an ideal of $R$. Similarly, $J$ is an ideal of $S$. Next, we prove that $K=I \times J$. Let $(a,b)\in K$. Then by the definitions of $I$ and $J$ we have $a\in I$ and $b\in J$. Thus $(a,b)\in I\times J$. So we have $K\subset I\times J$. On the other hand, consider $(a,b)\in I \times J$. Since $a\in I$, there exists $b'\in S$ such that $(a, b')\in K$. Also since $b\in J$, there exists $a'\in R$ such that $(a', b)\in K$. As $K$ is an ideal, it follows that $(a,b')(1,0)=(a,0)\in K$ and $(a',b)(0,1)=(0,b)\in K$, so that \[(a,b)=(a,0)+(0,b)\in K.\] Hence $I\times J \subset K$. Putting these inclusions together gives $K=I\times J$ as required. The ideals $I$ and $J$ defined in the proof can be alternatively defined as follows. Since the natural projections are surjective ring homomorphisms, the images $I$ and $J$ are ideals in $R$ and $S$, respectively.
CommonCrawl
Abstract: We construct global solutions to Type IIB supergravity with 16 residual supersymmetries whose space-time is $AdS_6 \times S^2$ warped over a Riemann surface. Families of solutions are labeled by an arbitrary number $L\geq 3$ of asymptotic regions, in each of which the supergravity fields match those of a $(p,q)$ five-brane, and may therefore be viewed as near-horizon limits of fully localized intersections of five-branes in Type IIB string theory. These solutions provide compelling candidates for holographic duals to a large class of five-dimensional superconformal quantum field theories which arise as non-trivial UV fixed points of perturbatively non-renormalizable Yang-Mills theories, thereby making them more directly accessible to quantitative analysis.
CommonCrawl
Abstract: The colour fields, created by a static gluon-quark-antiquark system, are computed in quenched SU(3) lattice QCD, in a $24^3\times 48$ lattice at $\beta=6.2$ and $a=0.07261(85)\,fm$. We study two geometries, one with a U shape and another with an L shape. The particular cases of the two gluon glueball and quark-antiquark are also studied, and the Casimir scaling is investigated in a microscopic perspective. This also contributes to understand confinement with flux tubes and to discriminate between the models of fundamental versus adjoint confining strings, analogous to type-II and type-I superconductivity.
CommonCrawl
Mixed volumes, which are the polarization of volume with respect to the Minkowski addition, are fundamental objects in convexity. In this note we announce the construction of mixed integrals, which are functional analogs of mixed volumes. We build a natural addition operation $\oplus$ on the class of quasi-concave functions, such that every class of $\alpha$-concave functions is closed under $\oplus$. We then define the mixed integrals, which are the polarization of the integral with respect to $\oplus$. We proceed to discuss the extension of various classic inequalities to the functional setting. For general quasi-concave functions, this is done by restating those results in the language of rearrangement inequalities. Restricting ourselves to $\alpha$-concave functions, we state a generalization of the Alexandrov inequalities in their more familiar form. Keywords: alpha-concavity, log-concavity, mixed integrals, Brunn-Minkowski, mixed volumes, quasi-concavity. Mathematics Subject Classification: Primary: 52A39; Secondary: 26B2.
CommonCrawl
I am covering the classic literature on predictions of the Cabibbo angle or other relationships in the mass matrix. As you may remember, this research was all the rage in the late seventies, after noticing that $\tan^2 \theta_c \approx m_d/m_s$. A typical paper of that age was Wilczek and Zee, Phys Lett 70B, p 418-420. The technique was to use an $SU(2)_L \times SU(2)_R \times \dots$ model and set some discrete symmetry in the right-handed multiplets. Most papers managed to predict $\theta_c$, and some models with three generations or more (remember the third generation was a new insight in the mid-late seventies) were able to produce additional phases related to the masses. Now, what I am interested in are papers and models that also include some prediction of mass relationships on their own, or cases where $\theta_c$ is fixed by the model and then some mass relationship follows. Of course in such a case $\theta_c$ is fixed to 15 degrees. But also $m_u=0$, which is an extra prediction even if the fixing of the Cabibbo angle were ad hoc. OK, so my question is: are there other models in this theme containing predictions for quark masses? Or was Harari et al. an exception until the arrival of Koide models? Also, published criticisms of these papers are welcome. I am aware of some for Harari et al.
CommonCrawl
Can you assemble a formula using the fewest one-digit numbers (from $0$ to $9$) so that the result equals $2018$, following the rules below? Using a direct square root is not allowed, since it is actually a power of $0.5$. You may use brackets to clarify the order of operations. You are allowed to use each one-digit number as many times as you want; for example, you may try to assemble a formula using four $2$s, two $1$s, etc. You are not allowed to concatenate digits. Double, triple, etc. factorials (n-druple factorials), such as $4!! = 4 \times 2$, are not allowed either.
CommonCrawl
where the $z$ direction is that of the colliding beams. Prove that $\tanh \eta=\cos \theta$. Find the distribution in $\eta$. Prove that rapidity equals pseudo-rapidity, $\eta=y$, for a relativistic particle $E\gg m$. Consider a generic particle $X$ of mass $M$ (such as a Z boson or a Higgs) produced on shell at the LHC, with zero transverse momentum, $pp \to X$. Find the relevant values of $x_1,x_2$ of the initial partons that can be accessed by producing such a particle. Compare your results with that of Fig.1, considering the scale $Q=M$.
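A quick numerical check of the first identity is sketched below; it assumes the standard definition $\eta = -\ln\tan(\theta/2)$, which is presumably the definition elided before the first line above.

    import numpy as np

    theta = np.linspace(0.1, np.pi - 0.1, 7)          # a few polar angles away from the beam axis
    eta = -np.log(np.tan(theta / 2))                  # pseudo-rapidity
    print(np.allclose(np.tanh(eta), np.cos(theta)))   # True: tanh(eta) = cos(theta)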
CommonCrawl
Tom starts at B then goes right and gets $5$ more which takes him to $15$. He then goes up four adding $8$ to his score taking him up to $23$. He then goes right and gets $5$ more taking him to $28$. So Tom's score ends up at $28$. Ben starts at B and goes to the left two squares taking his score to $0$. He then goes up two squares taking his score to $4$. He then goes four to the right adding $20$ to his score so he now has $24$. He then goes up two squares taking his score to $28$. So Ben's score is $28$, which, oddly enough, is the exact same score as Tom. We found that in all four of our journeys we got $28$. The whole of the class got $28$ at least two times. Aidan and I used red and blue crayon to mark which way we went. Aidan and I think that we got $28$ because when you go one way you always have to go back the same way. grid 1) We have found out that they both ended up on $28$. It doesn't matter how long or short their trial is, it will end up on the same answer. grid 2) On this grid we have found out that the answer is the same as on the first grid. However, the grid can be as big or as small as you like but you will still have an answer of $28$. grid 3) On the final grid the answer is always $800,000$. Harry also played around with the grids a bit, changing addition to multiplication and subtraction to division or vice versa. This changed the answer but the answer was still always the same, no matter what route you took. No matter what way you go, you will always end up with the same anwser. We tried a few ways using our computer screen following the journey with our fingers, this was on the multiplication and divide square. Every time we did it we ended up with the anwser of $800000$. So we tried even more journeys and always got the same answer of $800000$. Whichever route or sequence you take, you will always get the same answer, as long as the grid is the same. This is because every time there is a times and a divide, or an add and a takeaway, and every time they cancel each other. For example if you go right for a times by $2$ and left for a divide by $2$, they cancel each other out. For example: $3 \times 2 = 6$ but if you divide it back ... $6 \div2 = 3 $ (which is just halving by the way! and $\times2$ is doubling!) It just cancels out your calculation. Or $10 + 9 = 19$ OPPOSITE $19 - 9 =10$ tada!!! (It's like dividing and times, it's just undoing again, just like the undo button on your computer!) . Well done too to Barbara, Cong and Nazra from Arnhem Wharf Primary School who also realised that they would always get the same score for a particular grid. Inverses. Generalising. Combinations. Interactivities. Trial and improvement. Visualising. Investigations. Multiplication & division. Working systematically. Addition & subtraction.
CommonCrawl
I was perusing the "Close Questions" queue just now and noticed a common theme. It seems like a large number of questions involve gaining intuition about some result; most of them use the word "intuitively" and/or have the intuition tag. Is this a change in site policy? Some of the questions thus tagged seemed like pretty good ones to me, but I'm admittedly not one of the most active or longest-term users of this stack. Intuition is extremely important in mathematics and not always explicitly taught in math courses. As a working mathematician, I need "hard" knowledge to prove and define things in detail, but "soft" knowledge is an indispensable guide that steers my work. We would lose much if we were to ban intuitive questions here. It's not easy to apply the same standards to hard and soft questions, but I very much want intuitive questions asking for professional opinions of experts to be on-topic. A hard math question can be very concise but still clear. An intuition question needs more explanation as to what is actually sought for. They are different in nature, but all questions on the site should be sufficiently clear and have enough context. This just happens to mean different things in practice for different kinds of questions. Intuitively, one can think of the several variable chain rule as follows. Let $x$ be close to $x_0$. Then Newton's approximation asserts that $$ f(x) - f(x_0) \approx f'(x_0)(x - x_0) $$ and in particular $f(x)$ is close to $f(x_0)$. Since $g$ is differentiable at $f(x_0)$, we see from Newton's approximation again that $$ g(f(x)) - g(f(x_0)) \approx g'(f(x_0))(f(x) - f(x_0)). $$ Combining the two, we obtain $$ g \circ f(x) - g \circ f(x_0) \approx g'(f(x_0)) f'(x_0)(x - x_0) $$ which then should give $(g \circ f)'(x_0) = g'(f(x_0))f'(x_0).$ This argument however is rather imprecise; to make it more precise one needs to manipulate limits rigorously; see Exercise 17.4.3. I have often found intuitive explanations similar to this on math.stackexchange that I thought were very enlightening. The "pre-rigorous" stage, in which mathematics is taught in an informal, intuitive manner, based on examples, fuzzy notions, and hand-waving. (For instance, calculus is usually first introduced in terms of slopes, areas, rates of change, and so forth.) The emphasis is more on computation than on theory. This stage generally lasts until the early undergraduate years. The "rigorous" stage, in which one is now taught that in order to do maths "properly", one needs to work and think in a much more precise and formal manner (e.g. re-doing calculus by using epsilons and deltas all over the place). The emphasis is now primarily on theory; and one is expected to be able to comfortably manipulate abstract mathematical objects without focusing too much on what such objects actually "mean". This stage usually occupies the later undergraduate and early graduate years. The "post-rigorous" stage, in which one has grown comfortable with all the rigorous foundations of one's chosen field, and is now ready to revisit and refine one's pre-rigorous intuition on the subject, but this time with the intuition solidly buttressed by rigorous theory. (For instance, in this stage one would be able to quickly and accurately perform computations in vector calculus by using analogies with scalar calculus, or informal and semi-rigorous use of infinitesimals, big-O notation, and so forth, and be able to convert all such calculations into a rigorous argument whenever required.) 
The emphasis is now on applications, intuition, and the "big picture". This stage usually occupies the late graduate years and beyond. The transition from the first stage to the second is well known to be rather traumatic, with the dreaded "proof-type questions" being the bane of many a maths undergraduate. (See also "There's more to maths than grades and exams and methods".) But the transition from the second to the third is equally important, and should not be forgotten. Note Tao's emphasis on intuition here. I find math.stackexchange to be very helpful for developing the kind of intuition that Tao is referring to. There was no very recent discussion about this, though the subject came up in the not too distant past (see the comment by Zachary Selk). In that sense, no, there was no change in policy. However, the general guidelines are very broad, and for the most part finer points of policy just write down what is already done in practice. That is to say, it is normal that there is first a change in practice, and then a policy is introduced. Likely, somebody just went ahead and tested whether their point of view has some traction. When reviewing, evaluate each question on an individual basis according to your standards.
CommonCrawl
Ovulation has been hypothesized as an inflammatory process. Interleukin(IL)-1$\alpha$, IL-1$\beta$ and tumor necrosis factor(TNF)-$\alpha$ are potent cytokines produced from macrophages and various other cell types, and are pivotal components of inflammation. Although previous studies have investigated cytokine activities in the reproductive system, there is little information on their precise localization and activities during the periovulatory period. To investigate the role of cytokines in ovulation, experiments were designed to determine the immunohistochemical localization and time specific production of cytokines IL-1 and TNF-$\alpha$ using a mouse model at 36h, 12h, 6h, 2h before ovulation, and at 6h and 18h after ovulation in vivo. Isolated individual follicles in vitro were used to determine more precise roles of cytokines on follicular development, ovulation and steroidogenesis. From these studies it was found that (1) granulosa cells were the primary sites of IL-1$\alpha$ and TNF-$\alpha$ production from large antral follicles and preovulatory follicles in vivo, (2) production of IL-1$\alpha$ and TNF-$\alpha$ increased as ovulation neared, first appearing in the cumulus cells and expanding to antral and mural granulosa cells, (3) less intense staining of these cytokines in the theca layer of smaller follicles suggests that theca cells may contribute to the production of these cytokines to some extent, (4) but there was no IL-1$\beta$ production, (5) localized and temporal production of cytokines during the periovulatory period suggests precise regulation, (6) decrease of IL-1$\alpha$ in the ovary after gonadotropin injection determined by enzyme linked immunoadsorbent assay suggests that IL-1$\alpha$ production may be under the control of gonadotropins, (7) in follicle culture without bone marrow derived cells, granulosa cells were confirmed as the main source of cytokine production, (8) addition of IL-1$\alpha$ and TNF-$\alpha$ to follicles in culture tend to decrease estradiol production. In conclusion, immunoreactive cytokine production correlated positively with the periovulatory follicular development suggesting their role as ovulatory mediators. It requires further studies on what are the signals for the initiation and termination of cytokine production, how transcription and translation of these cytokines are regulated during the periovulatory period, and how they contribute to the ovulation.
CommonCrawl
Abstract: E-functions are entire functions with algebraic Taylor coefficients satisfying certain arithmetic conditions, and which are also solutions of linear differential equations with rational function coefficients. They were introduced by Siegel in 1929 to generalize Diophantine properties of the exponential function, and studied further by Shidlovskii in 1956. The celebrated Siegel-Shidlovskii Theorem deals with the algebraic (in)dependence of values at algebraic points of E-functions solutions of a differential system. However, somewhat paradoxically, this deep result may fail to decide whether a given E-function assumes an algebraic or a transcendental value at some given algebraic point. Building upon André's theory of E-operators, Beukers refined in 2006 the Siegel-Shidlovskii Theorem in an optimal way. In this paper, we use Beukers' work to prove the following result: there exists an algorithm which, given a transcendental E-function $f(z)$ as input, outputs the finite list of all exceptional algebraic points $\alpha$ such that $f(\alpha)$ is also algebraic, together with the corresponding list of values $f(\alpha)$. This result solves the problem of deciding whether values of E-functions at algebraic points are transcendental.
CommonCrawl
Abstract: Exclusive differential spectra in color-singlet processes at hadron colliders are benchmark observables that have been studied to high precision in theory and experiment. We present an effective-theory framework utilizing soft-collinear effective theory to incorporate massive (bottom) quark effects into resummed differential distributions, accounting for both heavy-quark initiated primary contributions to the hard scattering process as well as secondary effects from gluons splitting into heavy-quark pairs. To be specific, we focus on the Drell-Yan process and consider the vector-boson transverse momentum, $q_T$, and beam thrust, $\mathcal T$, as examples of exclusive observables. The theoretical description depends on the hierarchy between the hard, mass, and the $q_T$ (or $\mathcal T$) scales, ranging from the decoupling limit $q_T \ll m$ to the massless limit $m \ll q_T$. The phenomenologically relevant intermediate regime $m \sim q_T$ requires in particular quark-mass dependent beam and soft functions. We calculate all ingredients for the description of primary and secondary mass effects required at NNLL$'$ resummation order (combining NNLL evolution with NNLO boundary conditions) for $q_T$ and $\mathcal T$ in all relevant hierarchies. For the $q_T$ distribution the rapidity divergences are different from the massless case and we discuss features of the resulting rapidity evolution. Our results will allow for a detailed investigation of quark-mass effects in the ratio of $W$ and $Z$ boson spectra at small $q_T$, which is important for the precision measurement of the $W$-boson mass at the LHC.
CommonCrawl
How can I get a random positive definite (or positive semi-definite) matrix of a given order? I know that there is an inbuilt function random_matrix with an optional algorithm keyword that can be set. But there we can get some special matrices like 'echelon_form', 'orthogonal', 'echelonizable', 'diagonalizable'... but there is no built-in option for positive definite matrices. Some other important matrix classes are not there either. So how can I obtain random matrices of those classes? Thank you. One more thing: what should I do if I don't want my matrix to be 'diagonalizable'? I tried to remove the algorithm keyword, but there were some errors. If $A$ is a symmetric positive definite matrix, then it induces a Hilbert space structure on $\mathbb R^N$, seen as the space of column vectors, i.e. matrices $N\times 1$, by setting: $$ \langle x,y\rangle _A = x' Ay\ . $$ A Hilbert basis exists, and can be used to diagonalize. In fact, any symmetric matrix can be diagonalized.
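While waiting for a dedicated option, a common workaround is to build a random matrix and multiply it by its transpose, optionally shifting the spectrum to force strict positive definiteness. The sketch below is in plain NumPy; the same recipe carries over to matrices in Sage.

    import numpy as np

    def random_spd(n, seed=None):
        """Random symmetric positive definite matrix of order n (one common recipe)."""
        rng = np.random.default_rng(seed)
        B = rng.normal(size=(n, n))
        # B @ B.T is positive semi-definite; adding a small multiple of I guarantees definiteness
        return B @ B.T + 1e-6 * np.eye(n)

    A = random_spd(4, seed=0)
    print(np.all(np.linalg.eigvalsh(A) > 0))   # True

Dropping the identity shift gives a positive semi-definite matrix instead.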
CommonCrawl
There are $n$ children around a round table. For each child, you know the amount of food they want and the amount of food they currently have. The total amount of food in the table is correct. At each step, a child can give one unit of food to their neighbour. What is the minimum number of steps needed? The first input line contains an integer $n$: the number of children. The next line has $n$ integers $a_1,a_2,\ldots,a_n$: the required amount of food for each child. The last line has $n$ integers $b_1,b_2,\ldots,b_n$: the current amount of food for each child. Print one integer: the minimum number of steps. Explanation: Child 1 gives one unit of food to child 3, and child 2 gives one unit of food to child 3.
CommonCrawl
1 . What will come in place of the question mark (?) in the following questions? 2 . What will come in place of the question mark (?) in the following questions? 3 . What will come in place of the question mark (?) in the following questions? 4 . What will come in place of the question mark (?) in the following questions? 35 + 15 $\times$ 1.5 = ? 5 . What will come in place of the question mark (?) in the following questions? 6 . What will come in place of the question mark (?) in the following questions? 3 + 33 + 333 + 3.33 = ? 7 . What will come in place of the question mark (?) in the following number series? 8 . What will come in place of the question mark (?) in the following number series? 9 . What will come in place of the question mark (?) in the following number series? 10 . What will come in place of the question mark (?) in the following number series?
CommonCrawl
Given \$n\$ digits and \$m\$ indices \$x\$ from \$1. \ldots,n\$, calculate the difference \$b_y = a_x - a_y\$ for all indices \$y \; (y < x)\$. Then, calculate \$B1\$, the sum of all \$b_y\$ which are greater than \$0\$ and \$B2\$, the sum of all \$b_y\$ which are less than \$0\$. The answer for this step is \$B1 - B2\$. I've tried quite hard to optimize the following solution, but online judge always shows Time Limit Exceeded. Is it possible to optimize this code further to bring time down less than 1 second, or should I change my approach or perhaps shift to C? You didn't put any effort into choosing good names. Given a code challenge, either stick to the variable names used in the problem statement (n, m, ...) or, preferably, choose explicit and descriptive names such as steps, digits and indices rather than z, y, etc. Why should you bother with this? Because good names reveal intent, and knowing what the code is supposed to do makes it possible for you and others to understand and improve it easily. Your algorithm uses a naive, brute-force approach. This is fine as a starting point! But when you get a time limit exceeded message, you need to think about smarter solutions. Rene's answer gives you some good advice: He suggests that you employ caching, separate I/O from the calculation, and add up all the absolute values of by, thus skipping the calculation of B1 and B2. This is a good start but you can go much further. Janne's answer tells you the essence of what's wrong with your algorithm: you need to avoid looping over the previous digits for every step. Why? Because there will be up to 105 (one hundred thousand) digits and up to 105 steps. However, it is unnecessary to make a sorted copy of the indices — see below. We are saying: print each line of the result of solving the problem with the input from sys.stdin. We discard the first line _ because we can deduce the length of the digits and the number of steps from the rest of the input, so there is no need to waste cycles parsing them. We then store the digits which are given in the second line, as well as the indices from the following lines. We can pass the digits (stripping the newline character at the end) and the indices to this function to obtain usable values which we can use to calculate_answer(digits, indices). Since we have already parsed the input, this function can be focused on our algorithm to solve the problem. Store the indices in a dictionary, which allows them to be looked up in constant time. Set up another dictionary to count the number of occurrences of each unique digit, which will help us avoid iterating over all previous digits on every step. Assign to the current index the sum of all the absolute values of the differences between the current digit and each unique previous digit d multiplied by its occurrences n. In the order of the given indices, retrieve each calculated value from result. The end result is fast enough to be accepted by CodeChef. As you didn't specify a language version, this is in Python 3, which you should use as well. If you insist on sticking with Python 2.x, you will need to make the necessary adjustments. A better algorithm usually wins over micro-optimizations. Do you really need to do all these calculations every time? Maybe you could cache them and re-use the results. You could perform all the string to int conversions up front, afterwards only looking up numbers from array. This whole b1 & b2 thing could be replaced with simple abs() function call. 
Not sure if it would speed things up, but it sure would simplify the code. Sort a copy of the list of indices x given in the input. Sweep through the string, counting the occurrences of each digit as you go along. Each time you hit one of the x, compute the answer from the current counts. Put the answer in a dict indexed by x. Output the answers in the correct order by looping through the original list and looking up each value in the dict.
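Here is a minimal sketch of the counting approach described in the answers above (not the original accepted code); the input format (first line n and m, second line the digit string, then one index per line) is taken from the discussion, and the helper names are illustrative:

```python
import sys
from collections import Counter

def calculate_answer(digits, indices):
    """For each queried 1-based index x, sum |a_x - a_y| over all y < x."""
    queried = set(indices)
    counts = Counter()   # occurrences of each digit seen so far (positions before x)
    result = {}
    for position, digit in enumerate(digits, start=1):
        if position in queried:
            result[position] = sum(n * abs(digit - d) for d, n in counts.items())
        counts[digit] += 1
    return result

def main():
    tokens = sys.stdin.read().split()
    # tokens[0] and tokens[1] hold n and m; they can be deduced from the rest.
    digits = [int(c) for c in tokens[2]]
    indices = [int(x) for x in tokens[3:]]
    answers = calculate_answer(digits, indices)
    print('\n'.join(str(answers[x]) for x in indices))

if __name__ == '__main__':
    main()
```

Because there are only ten possible digit values, each query costs at most ten multiplications, so the whole pass is linear in the size of the input.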
CommonCrawl
Abstract: We examine the polynomial form of the scattering equations by means of computational algebraic geometry. The scattering equations are the backbone of the Cachazo-He-Yuan (CHY) representation of the S-matrix. We explain how the Bezoutian matrix facilitates the calculation of amplitudes in the CHY formalism, without explicitly solving the scattering equations or summing over the individual residues. Since for $n$-particle scattering the size of the Bezoutian matrix grows only as $(n-3)\times(n-3)$, our algorithm is very efficient for analytic and numeric amplitude computations.
CommonCrawl
Electricity prices: web searches / news (regional electricity prices). The BTC market data is updated daily. Mining distribution, electricity and hardware tables are updated as new information becomes available. A summary of some of the data used is given in the Appendices. The ultimate goal of the model is to be able to generate a cash cost, or supply, curve for the bitcoin mining network at any instant in time. The cash cost curve as used here is the total cumulative hashing capacity of the network, ranked in order of increasing cost to operate. Both operating cost (cash cost) and capital costs (fully absorbed cash cost) are estimated to this end. The utility of presenting the data in this way is that it becomes easy to see at what bitcoin price a given operator becomes unprofitable and, rationally, should shut down. The process of generating these curves is somewhat involved and is described next. Firstly, and independently of the actual BTC market dynamics, we calculate the running and capital costs of all hardware in the Mining Hardware database (Appendix C) over the date range of interest. In these calculations, $C_e$ is the daily electricity cost, $C_c$ is the daily operating cost (electricity + overheads), and $C_d$ and $C_r$ are depreciation and return on capital respectively. $L$ is the lifetime of the hardware, typically 1.5 years; $r$ is the ROI expected, typically 15% for investment purposes, but a 0% return will be used here to illustrate the break-even point. $\alpha$ is the overheads fraction. (A small numerical sketch of these cost components is given at the end of this post.) Electricity costs are estimated from a range of online sources, including regional electricity price publications and prices quoted by mining farm operators in news reports and press releases. While significant effort has been made to use reliable figures, there is significant uncertainty in the values used here. The error in cash cost estimates is proportional to the error in the electricity price. Of course, electricity is not the only input cost in bitcoin mining. Maintenance, staff, insurance and network fees also contribute to the cost of producing bitcoin. These costs are not modelled explicitly here, but are rolled into a single, constant overheads factor (set at 45% for the calculation presented here). This is clearly an oversimplification; overheads will be lower in China, where labour is far cheaper, than in the U.S.A. or Western Europe, for instance. Some attempt to take this into account can be made without modifying the model by regarding electricity price as an effective total variable cost and adjusting the electricity price database accordingly; but this was not done in this set of calculations and should be borne in mind when reading the discussion to follow. An estimate of the number of mining hardware rigs available on each day of the time series of interest is calculated using the following algorithm. Determine the most energy-efficient available hardware. The frontier equation for determining the 'most efficient' hardware is variable, but it is a strong function of J/GHs and \$/GHs. There is typically a clear winner though, regardless of the actual formula used. Calculate the marginal hashrate, $\Delta H$: the difference between the historical average hashrate for the current day (or the actual hashrate for day 0) and the previous day's calculated capacity. 
The previous day's supply is $\sum s_i n_i f_i$, where $s_i$ is the hashrate of a single machine of type $i$, $n_i$ is the number of machines that have been produced for each hardware type, and $f$ is the fraction of those machines that should be running based on their cash cost and the actual BTC price (i.e. miners shouldn't run at a loss, but they may temporarily -- several empirical relations were used to estimate the form of $f$). If $\Delta H > 0$, then calculate how many machines are required to produce the marginal hash rate. Update $n_i$ for the current day. Note that for Day 0, the marginal hashrate is the network hashrate. This will give the wrong hardware distribution for Day 0 and indeed until that hardware becomes obsolete. For this reason, the start date for the calculations was always chosen to be early enough that GPUs were still the dominant mining hardware. By the time that GPUs contributed an insignificant percentage of the total hashrate (Q2 2013), one could assume that the hardware distribution provides a reasonable picture of the overall mining supply landscape. The participation model $f$ is a function of the ratio between the BTC price, $P_b$, and the cash cost of production. This curve for different values of $\beta$ is plotted in Figure 1. Our empirical investigations into matching hashrate with hardware investment suggest that $2.5 < \beta < 3.5$. A value of 2.95 was used for most of our calculations. Note: for ratios above 1, $f$ is always 1. The first thing the model allows us to visualise is the evolution of the mining hardware landscape. The mining hardware supply was estimated over the period from January 1, 2013 to April 1, 2016. This period encompasses several BTC price bubbles, including the major one in H2 2013. Hashing capacity has been added to the network at an exponential rate over almost the entire period under consideration. During the bubble of 2013, hashrate addition was super-exponential, representing an unprecedented investment in bitcoin mining hardware capacity (see Figure 3). Figure 2 displays the hardware distribution over time as the hashing difficulty increases and the hardware improves. The charts in the left and right columns show the same data, with the only difference being that the left-hand column has the hashrate on a log scale. In addition, the top row plots all installed mining capacity (whether it is running or not) and the bottom row plots only those machines that are (or should be) running, according to the participation model, $f$, employed. Two things stand out: how quickly mining equipment moves from dominating the landscape to becoming insignificant (12 - 18 months), and how the state-of-the-art equipment becomes the dominant hashrate contributor within 3 - 6 months of its release. This means that the most efficient hardware has only about a year (18 months if very lucky) to produce a return on investment (and R&D costs) before the arms race and exponentially rising difficulty level overtake it and render it scrap metal. Figure 3 presents a minimum estimate of the capital investment into bitcoin mining hardware. It is estimated by summing up the published retail prices of mining hardware rigs for each piece of hardware added to the supply landscape, as given in Figure 2. For each day in the calculation, only the most efficient available hardware is used to account for the entire marginal increase in hashing capacity. If less efficient hardware is actually installed, its cost per GH/s is necessarily higher than what was estimated here. There is no such thing as "free mining". 
Any other hardware contributing compute cycles to BTC mining, whether knowingly or not (including things like illegal zombie networks), is contributing greater-than-marginal-cost hardware to the network. This necessarily represents a higher capital investment than what we've allocated. The fact that the beneficiaries of the mining effort haven't paid that capital is irrelevant. This graph is useful for estimating what an independent malicious attacker (let's say a rogue government or terror organisation) wishing to execute a 51% attack on the network would need to spend, assuming they have no hashing capacity now. As of 1 April 2016, this value stands at approximately \$2.35 bn, assuming the rest of the world stood still. It is an interesting thought experiment to consider how the world would respond if it relied on bitcoin for its financial infrastructure today, and North Korea decided to try and build a mining farm to bring the network down; perhaps something for a future post. One observation here is that \$2.35 bn represents 36% of bitcoin's current market capitalisation of \$6.5 bn. Our estimates suggest that capital accounts for anywhere between 35% and 70% of the fully realised cost of mining bitcoin, suggesting that society is getting fair value, at least at the current bitcoin price of \$420/BTC. Another noticeable observation about Figure 3 is the remarkable addition of hardware over Q1 2014, where at least \$1.0 bn (with a B) was added to bitcoin mining operations in 3 months. Sources suggest that large sections of China's IC fabrication units were turned onto ASIC manufacture during this period, looking to cash in on four-digit BTC prices. This is an awesome demonstration of the flexibility of the sector given the right economic incentives. The most useful aspect of the hashrate supply model is the ability to plot cash cost curves for bitcoin mining at any instant in time. Figures 5 and 6 show the curves at various points in the interval January 2013 to March 2016. Figure 5 shows some cost curves in and around the major price bubble that occurred in late 2013. The blue lines indicate the estimated cash cost (electricity and overheads) for the entire network on the given day. The green lines include the cost of paying back capital, given an 18-month lifetime for mining hardware. Three snapshots are given: October 1, 2013 (when the BTC price was \$132), two months later on November 30, 2013, when the price had exploded to \$1135, and May 2014, when it had come down to \$454. Firstly and probably most significantly, the bubble ruined things for miners, probably forever. Prior to the bubble, at \$130/BTC, miners were making absolutely huge margins. Fully absorbed cash costs were under \$50 for almost anyone running any kind of ASIC, resulting in capital returns in the region of 300% - 500% p.a. These margins obviously wouldn't have lasted forever, but the price bubble brought a lot of attention to the sector and we saw an astronomical investment in mining hardware over the following 6 months, as was shown earlier. This resulted in the super-exponential addition of hashing power to the network, which drove the difficulty and subsequently the cost of mining up massively. At the peak of the bubble (only 60 days later), the fully absorbed cost of mining a bitcoin had risen two to three times. This was fine as long as prices were north of \$1,000, but this situation was about to change. By May 2014, the price had fallen to \$450 (after briefly dipping below \$300). 
By this time operating costs for the majority of miners were in excess of \$50 per bitcoin, but the difficulty had risen so fast that Moore's Law could not keep pace and the relative capital cost of ASICs was suddenly very high. At this time, depreciation costs of all but the most efficient ASICs were around 80% - 90% of the total mining cost. Miners were barely able to cover the costs of their equipment, let alone produce a return. We now found ourselves in a position where it made sense to continue mining if you had already bought hardware (the bitcoin price was comfortably above the operating cost), but mining, like its traditional counterpart, was now a capital-intensive and risky endeavour. This meant that very little further investment could be made while the bitcoin price remained range-bound, but the difficulty would not decrease because cash costs were still relatively low. The gold rush of Bitcoin mining was largely over. The difficulty continued to rise steadily, though at an almost linear rate, over the course of 2014, even as the price fell further to settle at around \$250. New hardware started to come onto the market, including the AntMiner S2 and S3 series, reducing the capital intensity of mining and allowing some continued investment to take place. This, of course, continued to drive up the base cost of BTC production and, for the first time, we see a significant portion of the network operating at, or just below, break-even by February 2015. Figure 6 shows some more cost curve snapshots from the period 2015 - 2016. The price remained below \$300 for the first three quarters of 2015, resulting in a sort of stasis for the mining landscape. The cash cost curve seemed to remain fairly stable, with the most efficient miners producing at under \$100/BTC and a steady increase in cost to around \$300/BTC at the 85% capacity level. However, this apparent calm belied considerable turbulence beneath the surface. The apparent equilibrium was only achievable due to a constant turnover of hardware, with the increase in ASIC speeds and efficiencies just barely keeping up with the difficulty increases in the hashing algorithm. With approximately 25% of the network barely covering costs, there were bound to be casualties. Over the course of the year several hardware manufacturers went bankrupt 1, 2, 3. In late 2015, too late for some, the bitcoin price rallied to \$500 and provided some breathing room to mining operators. However, the price rally again brought in a rush of new mining capacity. Whereas in 2015, some older equipment could still eke out enough money to cover costs, it seems that this time only the strongest (read AntMiner S7 and equivalents) are able to survive. As of 1 April 2016, our estimate is that over 70% of the network is driven by S7s (or equivalents), which are comfortably covering their capital costs at a price of \$420/BTC. However, second-tier ASICs, such as S5-class machines, are at the break-even point. This has some ramifications for the state of the mining supply landscape for the second half of 2016. The current mining hashrate is dominated by a single class of hardware, leading to a very flat cash cost curve. This makes the network very vulnerable to sharp changes in the BTC price or cost of production. There is also very little news of new hardware appearing on the horizon, with the exception of the Bitfury 16nm ASIC, slated for H2 2016. 
It is possible that other manufacturers are operating in secret; it would be extremely surprising if Bitmain, for example, did not have a successor to their very successful AntMiner S7 series up their sleeve. Nevertheless, as it stands, the cash cost curve of 30-03-2016 shows that a halving in price to \$200, or, roughly equivalently, a doubling in cash cost, would put 30% to 70% of the current hashing capacity out of business. This is a very wide margin, and is due to the very flat cash cost curve brought on by a single dominant participant. In July of this year, the bitcoin reward will halve, bringing this very scenario into existence. Miners will only receive 12.5 BTC per block mined, instead of the 25 they currently receive. Overnight, their effective cost per bitcoin will double. We suspect that what happens next is largely a function of the price at the time. \$400 is a fairly critical level now. Above that, the roughly 600 PH of capacity that will be producing at or around that price will be able to continue operating, meaning that the network will perform as usual and the world may never even notice that a halving ever happened. If, however, the price in mid-July is below the \$380 mark, almost anything could happen. In a worst-case scenario we could see a dramatic fall-off of hashing capacity. This would lead to disruption in confirmation times and general disarray in the bitcoin network, affecting confidence and causing the price to fall further. The difficulty would be adjusted downward two weeks later, bringing all the mothballed capacity back online again for two weeks. This might not force the price back up, but the backlog of transactions would start to clear until the difficulty is again ratcheted up in the following adjustment. In this scenario, most of the hashing capacity ends up replaced by next-generation chips (and the current set would be obsolete anyway), and the bitcoin price and network settle down to a new equilibrium value in the \$275 to \$325 price range. We have developed a forecasting algorithm to try and predict how this scenario will pan out, and may release those results in a future post. Bitcoin mining went through a golden period from 2012 to late 2013 when costs were low and the bitcoin price (even at \$120) was far higher than the fully absorbed cash cost of mining. Those early adopters are smiling now. Since 2014, mining has been in a ruthless arms race, where only the most efficient ASICs stand a chance of providing a decent return on capital before being usurped by the next generation of chip. Currently, about 70% of the network hashing capacity is produced by a single generation of chip. This could make the network very vulnerable to widespread disruptions following the halving in July if the bitcoin price cannot maintain its present position above \$400, with substantial further downside pressure should the price fall below that level come the mid-July halving. Prices can also decrease in places, based on news articles or other information indicating that bitcoin farms receive favourable electricity rates.
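To make the cost mechanics described in this post concrete, here is a minimal numerical sketch. The post does not spell out its exact formulas for $C_c$, $C_d$, $C_r$ or the participation function $f$, so the functional forms, the `Rig` fields and the example figures below are illustrative assumptions rather than the article's actual model.

```python
from dataclasses import dataclass

BLOCKS_PER_DAY = 144     # ~ one block every 10 minutes
BLOCK_REWARD = 25.0      # BTC per block (pre-halving)

@dataclass
class Rig:
    name: str
    hashrate_ghs: float  # GH/s per machine
    power_w: float       # wall power per machine, watts
    price_usd: float     # published retail price (capital cost)
    count: int           # machines of this type estimated on the network

def daily_costs(rig, elec_usd_per_kwh, alpha=0.45, life_days=1.5 * 365, roi=0.0):
    """Daily cash cost and fully absorbed cost for one machine (assumed forms)."""
    c_e = rig.power_w / 1000.0 * 24.0 * elec_usd_per_kwh  # electricity, $/day
    c_c = c_e * (1.0 + alpha)          # assumed: overheads = alpha * electricity
    c_d = rig.price_usd / life_days    # straight-line depreciation over L
    c_r = rig.price_usd * roi / 365.0  # required return on capital
    return c_c, c_c + c_d + c_r

def cost_curve(rigs, network_ghs, elec_usd_per_kwh):
    """Cumulative hashing capacity ranked by increasing cash cost per BTC."""
    points = []
    for rig in rigs:
        cash, full = daily_costs(rig, elec_usd_per_kwh)
        btc_per_day = rig.hashrate_ghs / network_ghs * BLOCKS_PER_DAY * BLOCK_REWARD
        points.append((cash / btc_per_day, full / btc_per_day, rig.hashrate_ghs * rig.count))
    points.sort()                      # rank by cash cost per BTC
    cumulative, curve = 0.0, []
    for cash_per_btc, full_per_btc, ghs in points:
        cumulative += ghs
        curve.append((cumulative, cash_per_btc, full_per_btc))
    return curve

def participation(price_usd, cash_cost_per_btc, beta=2.95):
    """Stand-in for f: fraction of capacity that keeps running at a given price."""
    ratio = price_usd / cash_cost_per_btc
    return 1.0 if ratio >= 1.0 else ratio ** beta

# Example with made-up figures (not verified hardware specs):
s7 = Rig("AntMiner S7 class", hashrate_ghs=4700.0, power_w=1300.0,
         price_usd=1900.0, count=100_000)
print(cost_curve([s7], network_ghs=1.3e9, elec_usd_per_kwh=0.05))
```

Ranking every rig type by its cash cost per coin and accumulating hashrate in that order is what produces supply curves of the kind shown in Figures 5 and 6.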
CommonCrawl
How did the Romans do multiplication? The Romans didn't have Indian numerals and, what's worse, didn't have a positional decimal system, yet they produced amazing works of engineering and architecture. How was that possible? Even simple sums are troublesome in their notation, so how could they do products and more complex calculations? Has any mathematics textbook survived to tell us how they did such calculations? They used the abacus. The techniques used for operations with the abacus are well understood and were basically the same as those used until quite recent times in China and Japan, as far as I know. This does not require Indo-Arabic numerals. In fact you shouldn't think only of "Latins". Latin numerals were in use in Europe until at least the XIIIth century. Liber Abaci by Leonardo Fibonacci is credited as being one of the main sources of the introduction of Indo-Arabic numerals in Western countries and dates back to 1228. One of the reasons for its success is exactly that it explained how using such numerals could improve computations. It is, in fact, a Book of the Abacus, as the name says, i.e. a book centered around techniques for algebraic computation. You should be more specific when you say "Romans". If you mean ancient Romans, almost no mathematical text survives in Latin from the time before the 2nd century AD. From the Roman empire we mostly have Greek texts. (See also Roman engineers.) Almost all technical literature which we have from the Roman empire is written in Greek. Greek was also the spoken language of a large portion of the empire. The Greek system is described in detail in van der Waerden's book, Science Awakening. The digits of the decimal system were denoted by Greek letters. One had to memorize (as we do) the multiplication table for digits. That's all one needs to multiply numbers. For example, $$265\times265=200\times200+200\times60+200\times5+60\times200+60\times60+60\times5+5\times200+5\times60+5\times5=70225.$$ To simplify the task, a counting board with counting stones was used. Such a counting board is mentioned in Polybius, for example. For computations with simple fractions a more complicated algorithm was used. For astronomical computations, the Babylonian positional system with base 60, including fractions based on 1/60th, was used (but the numbers from 1 to 60 were denoted by pairs of Greek letters). Multiplication tables were used (as the Babylonians did before).
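The partial-products scheme in that worked example is easy to spell out in modern notation; the following sketch is purely illustrative (it shows the arithmetic, not the historical board practice):

```python
def place_value_parts(n):
    """Split 265 into [200, 60, 5], mirroring how a number is laid out by place value."""
    parts, place = [], 1
    while n:
        n, digit = divmod(n, 10)
        if digit:
            parts.append(digit * place)
        place *= 10
    return parts[::-1]

def board_multiply(a, b):
    """Multiply by summing all pairwise products of the parts, as in the 265 x 265 example."""
    return sum(x * y for x in place_value_parts(a) for y in place_value_parts(b))

print(board_multiply(265, 265))   # 70225
```

On a counting board the same partial products would be accumulated with counting stones rather than written down.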
CommonCrawl
We establish the optimal quantization problem for probabilities under constrained Rényi-$\alpha$-entropy of the quantizers. We determine the optimal quantizers and the optimal quantization error of one-dimensional uniform distributions, including the known special cases $\alpha = 0$ (restricted codebook size) and $\alpha = 1$ (restricted Shannon entropy).
CommonCrawl
When I published the trilateration results there were several people showing interest in how I came up with the results, on both the Frontier forums and later Reddit. I posted the full math behind the trilateration, but the complex equations made it intimidating for most people, which was especially evident in the Reddit comments. Despite the scary-looking equations, I feel that the basic math involved is rather simple and the necessary concepts are taught in high school in most countries. This article is my attempt at showing how simple concepts can be combined to calculate results that seem complex. While I know this probably won't be the most popular post on a gaming-related subreddit, as long as this helps even one person to understand math better or inspires them to approach it in a different way, I feel like spending the time to write this was worth it. Also, despite the common belief, I don't have a degree in mathematics. Nor did I receive my high school education in English. I'm confident most of the terms and concepts below are correct, but I won't claim it's error-proof. Corrections can be left on the Reddit post. When the Unknown Probe is deployed near a planet, it transmits the distance between the planet and an unknown system (among other things) using an unknown unit of length. For convenience let's name this unit a Bob. While we know that the Probe uses Merope (5C) as a reference planet and 1 Bob is the distance between the unknown system and Merope, we have no way of telling how many light years one Bob is without finding the unknown system. We can still use it to compare planets though: a planet with a reported distance of 0.8 Bobs is twice as far from the target system compared to a planet with a reported distance of 0.4 Bobs. The measurement from a single planet doesn't really tell us anything - but once we gather measurements from different planets we start getting some kind of a picture of the possible location of the target system. For the calculations here we have measured the distance from three different systems: Sirius, Merope and Betelgeuse. These systems are shown in figure 1 below. Note: The distance values presented in the figure below are not the real values from the probe. The values the probe transmits are specific to Elite's 3D galaxy and cannot be used for a 2D example. The values below have been calculated based on the locations of the known systems and the (now known) unknown system location in 2D. Those who are interested in the original 3D values can check the Python solution at the bottom of the article. Figure 1: Unknown Probe measurements in various systems. Let's use $d$ for distance (in light years). $d_\text{UnknownSirius}$ is the distance from the unknown system to Sirius. Let's use $x$ and $y$ for the coordinates (in light years). $x_\text{UnknownSirius}$ is the distance on the x-axis between the unknown system and Sirius. $y_\text{UnknownSirius}$ is the distance on the y-axis between the unknown system and Sirius. These symbols are shown in figure 2. Now, given the Pythagorean theorem, we can write our first equations using the above notation. The equations are listed below. Equation 1: Distances with Pythagorean theorem. That's a start! These groups of equations are called systems of equations in mathematics. The basic systems taught in high school are systems of linear equations - what we have here is a system of polynomial equations (thanks to the exponents). Fortunately this doesn't change things too much. 
A general rule of thumb with systems of equations is that they are solvable when the number of equations matches the number of unknown variables. This gives us our first obstacle. Looking at the equations above, each of them contains three unknown values: $d_\text{...}$, $x_\text{...}$ and $y_\text{...}$. Doing more measurements and adding more equations for different planets just introduces more variables. So before we can solve the system we need to find a way to eliminate some of the variables. $x_\text{Sirius}$ is the x-coordinate of Sirius. $y_\text{Sirius}$ is the y-coordinate of Sirius. Now we can redraw figure 2 using these new variables. This is shown in figure 3. Figure 3: Distances with absolute coordinates. Equation 2: Distances using absolute coordinates. While this made the equations more complicated, the important difference is that it eliminated a large number of unknown variables. Previously each equation had a different unknown $x$ and $y$ variable. Now all the equations share the same unknown $x_\text{Unknown}$ and $y_\text{Unknown}$ variables. All the other $x$ and $y$ variables are known and we don't need to solve for them. We could substitute them with the numerical values (from figure 1), but we'll leave them like this for now just for clarity (I guess a personal preference). $r_\text{bob}$ is the number of light years corresponding to one Bob. $b$ Bobs = $br_\text{bob}$ light years. $b_\text{UnknownSirius}$ is the distance between Sirius and the unknown system in Bobs. $b_\text{UnknownSirius}r_\text{bob}$ is the distance between Sirius and the unknown system in light years. This allows us to write more equations. $d_\text{UnknownSirius}$: distance between Sirius and the unknown system in light years. $b_\text{UnknownSirius}$: distance between Sirius and the unknown system in Bobs. Equation 3: Relationship between Bobs and light years. Equation 4: Distance equations based on Bob measurements. Finally we have 3 equations with 3 unknown variables: $r_\text{bob}$, $x_\text{Unknown}$ and $y_\text{Unknown}$. This should make the equations solvable! The only missing steps are inserting the known values into the equations and finding something to solve them with. Distance measurements decoded from the probe. Equation 5: Final equations with numerical values. And finally, typing that last part into Wolfram Alpha will give us the result. ... well, actually two candidates. Polynomial equations (having exponents like $^2$ or $^3$) often yield multiple values. To figure out which one of these candidates is the real system, we need one more reference value. By replacing the equation values of Betelgeuse with those of Sol ($b=1.156$, $(x,y) = (0,0)$) we can write the equations for the Sirius-Merope-Sol combination into Wolfram Alpha. This gives us another result. Only the candidate around $(x,y) = (688, -698)$ was common to both calculations. In the real 3D galaxy the same approach just gains one coordinate: 4 unknown values - 4 equations - 4 reference systems. To solve that numerically, we can make SciPy solve the system using an initial guess; the initial guess affects which of the "candidates" SciPy finds.
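A minimal sketch of that SciPy step is below. The coordinates and probe readings here are made-up placeholders (the real decoded values are in the original article's solution), so only the structure of the system is the point.

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder reference systems: (x, y, z) positions in light years.
# These are made-up coordinates, NOT the real in-game ones.
positions = [
    np.array([0.0, 0.0, 0.0]),        # "Sol"
    np.array([50.0, -20.0, 10.0]),    # "Sirius"
    np.array([-80.0, 35.0, 440.0]),   # "Merope"
    np.array([310.0, -90.0, 600.0]),  # "Betelgeuse"
]

# Fabricate consistent "probe readings" from a pretend target so the demo
# has an exact solution: b_i = |target - pos_i| / r_bob.
true_target, true_r_bob = np.array([688.0, -698.0, 123.0]), 520.0
bobs = [np.linalg.norm(true_target - pos) / true_r_bob for pos in positions]

def equations(unknowns):
    x, y, z, r_bob = unknowns
    target = np.array([x, y, z])
    # One equation per reference system i: |target - pos_i| = b_i * r_bob
    return [np.linalg.norm(target - pos) - b * r_bob
            for pos, b in zip(positions, bobs)]

# The initial guess decides which of the candidate intersections fsolve finds.
guess = [500.0, -500.0, 0.0, 400.0]
print(fsolve(equations, guess))   # recovers the pretend target and r_bob
```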
The analysis above described how the simple concepts of the Pythagorean theorem and systems of equations can be applied together to calculate locations of unknown star systems. The article started with the theory in 2D and then generalized this to 3D by introducing the 3D version of the Pythagorean theorem. My hope is that at least a few people reading this will realize that learning hundreds of different formulas and equations is not what is important in mathematics. The important thing is to figure out how to apply the few you know to the problems at hand. Any questions and comments can be left on the Reddit post or directly to /u/Wace. This article was written and posted on Reddit August 21st, 2016.
CommonCrawl
I am a first-year undergraduate student at the University of São Paulo. My research interests include (spectral) algebraic geometry and (chromatic) homotopy theory. You can contact me at theo.de.oliveira.santos[at]usp.br. 13 Is the $\infty$-category of spectra "convenient"? 12 How do topological automorphic forms fit into homotopy theory and what makes them interesting? 11 How should one approach reading Higher Algebra by Lurie?
CommonCrawl
Let $A$ be a commutative ring with unity. The ring $A$ is local if and only if it has a unique maximal ideal. The ring $A$ is local if and only if it is nontrivial and the sum of any two non-units is a non-unit. One also writes $(A, \mathfrak m)$ for a commutative local ring $A$ with maximal ideal $\mathfrak m$.
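A short sketch of why the two characterizations above agree (a standard argument, added here for convenience, not part of the original entry): if $A$ has a unique maximal ideal $\mathfrak m$, then every non-unit generates a proper ideal, which is contained in some maximal ideal and hence in $\mathfrak m$; since $\mathfrak m$ is proper, its elements are non-units, so the non-units are exactly the elements of $\mathfrak m$ and the sum of two non-units lies in $\mathfrak m$, i.e. is again a non-unit. Conversely, if $A$ is nontrivial and the non-units are closed under addition, let $\mathfrak m$ be the set of non-units. For $a \in A$ and $x$ a non-unit, $ax$ is a non-unit (otherwise $x$ would have the inverse $a(ax)^{-1}$), so $\mathfrak m$ is an ideal; it is proper because $1 \notin \mathfrak m$, and every proper ideal consists of non-units, hence is contained in $\mathfrak m$. Thus $\mathfrak m$ is the unique maximal ideal.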
CommonCrawl
You're hanging out with a bunch of other mathematicians - you go out to dinner, you're on the train, you're at a department tea, et cetera. Someone says something like "A group of 100 people at a party each receive hats with different prime numbers and ..." For the next few minutes everyone has fun solving the problem together. I love puzzles like that. But there's a problem -- I keep running into the same puzzles over and over. Yet there must be lots of great problems I've never run into. So I'd like to hear problems that other people have enjoyed, and hopefully everyone will learn some new ones. So: what are your favorite dinner conversation math puzzles? I don't want to provide hard guidelines. But I'm generally interested in problems that are mathematical and not just logic puzzles. They shouldn't require written calculations or a convoluted answer. And they should be fun - with some sort of cute step, aha moment, or other satisfying twist. I'd prefer to keep things pretty elementary, but a cool problem requiring a little background is a-okay. If you post the answer, please obfuscate it with something like rot13. Don't spoil the fun for everyone else. "There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours. Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces). If a tribesperson does discover his or her own eye color, then their religion compels them to commit ritual suicide at noon the following day in the village square for all to witness. All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth). One day a visiting foreigner, unaware of the local custom, commits the faux pas of remarking in front of the whole tribe that at least one of them has blue eyes. What effect, if anything, does this faux pas have on the tribe?" You and infinitely many other people are wearing hats. Each hat is either red or blue. Every person can see every other person's hat color, but cannot see his/her own hat color; aside from that, you cannot share any information (but you are allowed to agree on a strategy before any of the hats appear on your heads). Everybody simultaneously guesses the color of his/her hat. You win if all but finitely many of you are right. Find a strategy so that you always win. 1000 prisoners are in jail. There's a room with 1000 lockers, one for each prisoner. A jailer writes the name of each prisoner on a piece of paper and puts one in each locker (randomly, and not necessarily in the locker corresponding to the name written on the paper!). The game is the following. The prisoners are called one by one into the room with the lockers. Each of them can open 500 lockers. If a prisoner finds the locker which contains his name, the game continues, meaning that he leaves the room (leaving it in the exact same state as when he entered it, so he cannot leave any hint), and the following prisoner is called. If any one of the prisoners fails to recover his name, they all lose and get killed. Of course they can agree before the beginning of the game on a common strategy, but after that they cannot communicate anymore, and they cannot leave any hint for the following prisoners. A trivial strategy where each prisoner opens 500 random lockers would lead to a winning probability of 1/2^1000. 
But there exists a strategy that offers a winning probability of roughly 30%. There are $n$ balls rolling along a line in one direction and $k$ balls rolling along the same line in the opposite direction. The speeds of the balls in the first group and in the second group are equal. Initially the two groups of balls are separated from one another and at some point the balls start colliding. The collisions are assumed to be elastic. How many collisions will there be? (I learned this puzzle from Ravi Vakil.) Suppose you have an infinite grid of squares, and in each square there is an arrow, pointing in one of the 8 compass directions, with the condition that any two orthogonally adjacent arrows can differ by at most 45 degrees. Can there be a closed cycle? (I.e. start at some arrow, move to the square that arrow points to, follow where the arrow there points, and so on, and come back to the square you started at.) Adam Hesterberg told me this one ages ago. It apparently used to circulate around MOP. Can the spiders guarantee that they will catch the fly in finite time, regardless of the initial positions of the spiders and the fly? Does the answer depend on the value of $\epsilon$? Most of us know that, being deterministic, computers cannot generate true random numbers. Suppose, though, that you are given a box that outputs independent random bits with some fixed but unknown bias. Can you use this box to create an unbiased random generator of binary numbers? Here is another of my favorites: Player 1 thinks of a polynomial P with coefficients that are natural numbers. Player 2 has to guess this polynomial by asking only for evaluations at natural numbers (so one cannot ask for $P(\pi)$). How many questions does the second player need to ask to determine P? It is very important that you tell these two puzzles in the correct order, i.e., first the first puzzle and then the second one. The first puzzle is very easy but messes with people's minds in just the right way. In my experience some mathematicians are driven crazy by the second puzzle. Puzzle 1: Grandma made a cake whose base was a square of size 30 by 30 cm and whose height was 10 cm. She wanted to divide the cake fairly among her 9 grandchildren. How should she cut the cake? Puzzle 2: Grandma made a cake whose base was a square of size 30 by 30 cm and whose height was 10 cm. She put chocolate icing on top of the cake and on the sides, but not on the bottom. She wanted to divide the cake fairly among her 9 grandchildren so that each child would get an equal amount of the cake and the icing. How should she cut the cake? What is the resistance between 2 adjacent vertices of an infinite checkerboard if every edge is a 1 ohm resistor? You have a glass of red wine and a glass of white wine (of equal volume). You take a teaspoon of the red wine and put it in the glass of white wine and stir. You then take a teaspoon of the white wine (which now has a teaspoon of the red wine in it) and put it in the glass of red wine and stir. Which glass has a higher ratio of (original wine)/(introduced wine)? A certain rectangle can be covered by 25 coins of diameter 2. Can it always be covered with 100 coins of diameter 1? When you watch yourself in a mirror, left and right are exchanged. But why aren't top and bottom? Alice shuffles an ordinary deck of cards and turns the cards face up one at a time while Bob watches. At any point in this process before the last card is turned up, Bob can guess that the next card is red. Does Bob have a strategy that gives him a probability of success greater than .5? 
Let $x_1, x_2, \dots, x_n$ be $n$ points (in that order) on the circumference of a circle. Dana starts at the point $x_1$ and walks to one of the two neighboring points with probability $1/2$ for each. Dana continues to walk in this way, always moving from the present point to one of the two neighboring points with probability $1/2$ for each. Find the probability $p_i$ that the point $x_i$ is the last of the $n$ points to be visited for the first time. In other words, find the probability that when $x_i$ is visited for the first time, all the other points will have already been visited. For instance, $p_1=0$ (when $n>1$), since $x_1$ is the first of the $n$ points to be visited. Let $\pi$ be a random permutation of $1,2,\dots,n$ (from the uniform distribution). What is the probability that 1 and 2 are in the same cycle of $\pi$? Choose $n$ points at random (uniformly and independently) on the circumference of a circle. Find the probability $p_n$ that all the points lie on a semicircle. (For instance, $p_1 = p_2 = 1$.) More generally, fix $\theta<2\pi$ and find the probability that the $n$ points lie on an arc subtending an angle $\theta$. A small boat carrying a heavy stone is floating in a swimming pool. What happens to the level of water (up, down or remains equal) in the swimming pool if one removes the stone from the boat and throws it in the swimming pool? The very easy solution suggests the following joke (illustrating the well-known ignorance of mathematicians of reality): instead of sending scores of ships to save passengers from the Titanic, one should have sunk all possible rescue ships in order to lower the sea level. Instead of recommending some puzzles, I'll recommend some books containing many puzzles: Peter Winkler, Mathematical Puzzles; Peter Winkler, Mathematical Mind-Benders; Miodrag Petkovic, Famous Puzzles of Great Mathematicians. Via the great Martin Gardner: A cylindrical hole is drilled straight through the center of a solid sphere. The length of the hole in the sphere (i.e. of the remaining empty cylinder) is 6 units. What is the volume of the remaining solid object (i.e. sphere less hole)? Yes, there is enough information to solve this problem! (I learned this problem from Persi Diaconis.) A deck of $n$ different cards is shuffled and laid on the table by your left hand, face down. An identical deck of cards, independently shuffled, is laid at your right hand, also face down. You start turning up cards at the same rate with both hands, first the top card from both decks, then the next-to-top cards from both decks, and so on. What is the probability that you will simultaneously turn up identical cards from the two decks? What happens as $n \to \infty$? And does the answer for small $n$ (say, $n=7$) differ greatly from $n=52$? Countably many little dwarfs are going to their everyday work in the mine. They are marching and singing in a well-ordered line (indexed by the natural numbers), so that number 1 watches the backs of all the other ones and, in general, number n watches the backs of all the others from n+1 on. Suddenly, an evil wizard appears on the top of a small hill, and magically puts a name on the back of each dwarf. Any name may be used, even more than once: existing ones, old-fashioned ones, or just weird sounds sprung from his sick imagination, including grunts, sneezes and any snort-like name (you may enjoy providing your listeners with examples if they ask for some). 
Then he claims that, at his signal, everybody has to guess his own name and say it loudly, all together. Whoever fails will disappear immediately. The poor dwarfs are not new to these bully spells, and they do have a strategy that allows all but finitely many of them to survive. How do they do it? To formalize, we may think of the evil wizard as having attached a real number to each dwarf's back. You have 1000 bottles of wine. Exactly one of the bottles contains a deadly poison, but you don't know which one. The killing time of the poison varies from person to person, but death is imminent in at most $t$ hours after ingestion. You are allowed to use 10 notorious criminals as poison fodder (they are on death row). How much time do you need to correctly determine the poisoned bottle? You are given several ropes, each of which burns completely in 64 minutes. Each rope has non-uniform density, meaning it is thicker at some points than others. Consequently, burning half a rope cannot be guaranteed to take 32 minutes. The goal is to identify when exactly 63 minutes have passed. There is a square with seven monkeys on the floor and seven bananas on the top. Seven ladders go up the square, from one monkey to the banana over it, and the monkeys can climb them. Moreover there are some ropes which connect the ladders. A monkey will go up towards the bananas, but whenever it meets a rope it cannot resist the temptation to stray and hang on it. Prove that every monkey will reach a banana, no matter the configuration of ropes. There are at least two different solutions to this. When a group of people came to dinner, some shook hands. Ask them to prove that two of them shook hands the same number of times. You are on a game show with three doors, behind one of which is a car and behind the other two are goats. You pick door #1. Monty, who knows what's behind all three doors, reveals that behind door #2 is a goat. Before showing you what you won, Monty asks if you want to switch doors. Should you switch? There is a plane with 100 seats and we have 100 passengers entering the plane one after the other. The first one cannot find his ticket, so he chooses a seat uniformly at random. All the other passengers do the following when entering the plane (they have their tickets): if the seat written on the ticket is free, they sit in it; if not, they choose another (free) seat uniformly at random. What is the probability that the last passenger entering the plane gets the correct seat?
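For the two-deck card puzzle above (the one attributed to Persi Diaconis), a few lines of Python settle the "$n=7$ versus $n=52$" comparison empirically; mild spoiler: the simulation reveals the approximate numerical answer, though not the reason behind it.

```python
import random

def match_probability(n, trials=100_000):
    """Estimate P(the two decks show the same card simultaneously at least once)."""
    hits = 0
    deck = list(range(n))
    for _ in range(trials):
        left = random.sample(deck, n)    # a uniformly random ordering
        right = random.sample(deck, n)
        if any(a == b for a, b in zip(left, right)):
            hits += 1
    return hits / trials

for n in (7, 52):
    print(n, match_probability(n))   # the two estimates come out essentially identical
```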
CommonCrawl
Abstract: In the study of deterministic distributed algorithms it is commonly assumed that each node has a unique $O(\log n)$-bit identifier. We prove that for a general class of graph problems, local algorithms (constant-time distributed algorithms) do not need such identifiers: a port numbering and orientation is sufficient. Our result holds for so-called simple PO-checkable graph optimisation problems; this includes many classical packing and covering problems such as vertex covers, edge covers, matchings, independent sets, dominating sets, and edge dominating sets. We focus on the case of bounded-degree graphs and show that if a local algorithm finds a constant-factor approximation of a simple PO-checkable graph problem with the help of unique identifiers, then the same approximation ratio can be achieved on anonymous networks. As a corollary of our result and by prior work, we derive a tight lower bound on the local approximability of the minimum edge dominating set problem. Our main technical tool is an algebraic construction of homogeneously ordered graphs: We say that a graph is $(\alpha,r)$-homogeneous if its nodes are linearly ordered so that an $\alpha$ fraction of nodes have pairwise isomorphic radius-$r$ neighbourhoods. We show that there exists a finite $(\alpha,r)$-homogeneous $2k$-regular graph of girth at least $g$ for any $\alpha < 1$ and any $r$, $k$, and $g$.
CommonCrawl
We present a method of finding weighted Koppelman formulas for $(p,q)$-forms on $n$-dimensional complex manifolds $X$ which admit a vector bundle of rank $n$ over $X \times X$, such that the diagonal of $X \times X$ has a defining section. We apply the method to $\mathbb{P}^n$ and find weighted Koppelman formulas for $(p,q)$-forms with values in a line bundle over $\mathbb{P}^n$. As an application, we look at the cohomology groups of $(p,q)$-forms over $\mathbb{P}^n$ with values in various line bundles, and find explicit solutions to the $\bar\partial$-equation in some of the trivial groups. We also look at cohomology groups of $(0,q)$-forms over $\mathbb{P}^n \times \mathbb{P}^m$ with values in various line bundles. Finally, we apply our method to developing weighted Koppelman formulas on Stein manifolds.
CommonCrawl
Abiti Adili, Bixiang Wang. Random attractors for non-autonomous stochastic FitzHugh-Nagumo systems with multiplicative noise. Conference Publications, 2013, 2013(special): 1-10. doi: 10.3934/proc.2013.2013.1. Inkyung Ahn, Wonlyul Ko, Kimun Ryu. Asymptotic behavior of a ratio-dependent predator-prey system with disease in the prey. Conference Publications, 2013, 2013(special): 11-19. doi: 10.3934/proc.2013.2013.11. Inmaculada Antón, Julián López-Gómez. Global bifurcation diagrams of steady-states for a parabolic model related to a nuclear engineering problem. Conference Publications, 2013, 2013(special): 21-30. doi: 10.3934/proc.2013.2013.21. Soohyun Bae. Classification of positive solutions of semilinear elliptic equations with Hardy term. Conference Publications, 2013, 2013(special): 31-39. doi: 10.3934/proc.2013.2013.31. Sara Barile, Addolorata Salvatore. Radial solutions of semilinear elliptic equations with broken symmetry on unbounded domains. Conference Publications, 2013, 2013(special): 41-49. doi: 10.3934/proc.2013.2013.41. Rossella Bartolo, Anna Maria Candela, Addolorata Salvatore. Infinitely many radial solutions of a non-homogeneous $p$-Laplacian problem. Conference Publications, 2013, 2013(special): 51-59. doi: 10.3934/proc.2013.2013.51. J. Becker, M. Ferreira, B.M.P.M. Oliveira, A.A. Pinto. R&D dynamics. Conference Publications, 2013, 2013(special): 61-68. doi: 10.3934/proc.2013.2013.61. Jean-Philippe Bernard, Emmanuel Frénod, Antoine Rousseau. Modeling confinement in Étang de Thau: Numerical simulations and multi-scale aspects. Conference Publications, 2013, 2013(special): 69-76. doi: 10.3934/proc.2013.2013.69. Morten Brøns. An iterative method for the canard explosion in general planar systems. Conference Publications, 2013, 2013(special): 77-83. doi: 10.3934/proc.2013.2013.77. Carmen Calvo-Jurado, Juan Casado-Díaz, Manuel Luna-Laynez. The homogenization of the heat equation with mixed conditions on randomly subsets of the boundary. Conference Publications, 2013, 2013(special): 85-94. doi: 10.3934/proc.2013.2013.85. Santiago Cano-Casanova. Bifurcation to positive solutions in BVPs of logistic type with nonlinear indefinite mixed boundary conditions. Conference Publications, 2013, 2013(special): 95-104. doi: 10.3934/proc.2013.2013.95. Dmitriy Chebanov. New class of exact solutions for the equations of motion of a chain of $n$ rigid bodies. Conference Publications, 2013, 2013(special): 105-113. doi: 10.3934/proc.2013.2013.105. Xin Chen, Ana Bela Cruzeiro. Stochastic geodesics and forward-backward stochastic differential equations on Lie groups. Conference Publications, 2013, 2013(special): 115-121. doi: 10.3934/proc.2013.2013.115. Jann-Long Chern, Yong-Li Tang, Chuan-Jen Chyan, Yi-Jung Chen. On the uniqueness of singular solutions for a Hardy-Sobolev equation. Conference Publications, 2013, 2013(special): 123-128. doi: 10.3934/proc.2013.2013.123. Shihchung Chiang. Numerical optimal unbounded control with a singular integro-differential equation as a constraint. Conference Publications, 2013, 2013(special): 129-137. doi: 10.3934/proc.2013.2013.129. C. I. Christov, M. D. Todorov. Investigation of the long-time evolution of localized solutions of a dispersive wave system. Conference Publications, 2013, 2013(special): 139-148. doi: 10.3934/proc.2013.2013.139. Leonardo Colombo, David Martín de Diego. Optimal control of underactuated mechanical systems with symmetries. Conference Publications, 2013, 2013(special): 149-158. 
doi: 10.3934/proc.2013.2013.149. Chiara Corsato, Franco Obersnel, Pierpaolo Omari, Sabrina Rivetti. On the lower and upper solution method for the prescribed mean curvature equation in Minkowski space. Conference Publications, 2013, 2013(special): 159-169. doi: 10.3934/proc.2013.2013.159. Ronan Costaouec, Haoyun Feng, Jesús Izaguirre, Eric Darve. Analysis of the accelerated weighted ensemble methodology. Conference Publications, 2013, 2013(special): 171-181. doi: 10.3934/proc.2013.2013.171. Marcello D'Abbicco. Small data solutions for semilinear wave equations with effective damping. Conference Publications, 2013, 2013(special): 183-191. doi: 10.3934/proc.2013.2013.183. Diane Denny. A unique positive solution to a system of semilinear elliptic equations. Conference Publications, 2013, 2013(special): 193-195. doi: 10.3934/proc.2013.2013.193. Priyanjana M. N. Dharmawardane. Decay property of regularity-loss type for quasi-linear hyperbolic systems of viscoelasticity. Conference Publications, 2013, 2013(special): 197-206. doi: 10.3934/proc.2013.2013.197. Jishan Fan, Tohru Ozawa. An approximation model for the density-dependent magnetohydrodynamic equations. Conference Publications, 2013, 2013(special): 207-216. doi: 10.3934/proc.2013.2013.207. João Fialho, Feliz Minhós. The role of lower and upper solutions in the generalization of Lidstone problems. Conference Publications, 2013, 2013(special): 217-226. doi: 10.3934/proc.2013.2013.217. Marcus Fontaine, William D. Kalies, Vincent Naudot. A reinjected cuspidal horseshoe. Conference Publications, 2013, 2013(special): 227-236. doi: 10.3934/proc.2013.2013.227. Takeshi Fukao, Nobuyuki Kenmochi. Abstract theory of variational inequalities and Lagrange multipliers. Conference Publications, 2013, 2013(special): 237-246. doi: 10.3934/proc.2013.2013.237. Charles Fulton, David Pearson, Steven Pruess. Characterization of the spectral density function for a one-sided tridiagonal Jacobi matrix operator. Conference Publications, 2013, 2013(special): 247-257. doi: 10.3934/proc.2013.2013.247. Matthew A. Fury. Regularization for ill-posed inhomogeneous evolution problems in a Hilbert space. Conference Publications, 2013, 2013(special): 259-272. doi: 10.3934/proc.2013.2013.259. John R. Graef, Shapour Heidarkhani, Lingju Kong. Existence of nontrivial solutions to systems of multi-point boundary value problems. Conference Publications, 2013, 2013(special): 273-281. doi: 10.3934/proc.2013.2013.273. John R. Graef, Lingju Kong, Qingkai Kong, Min Wang. Positive solutions of nonlocal fractional boundary value problems. Conference Publications, 2013, 2013(special): 283-290. doi: 10.3934/proc.2013.2013.283. John R. Graef, Lingju Kong, Min Wang. Existence of multiple solutions to a discrete fourth order periodic boundary value problem. Conference Publications, 2013, 2013(special): 291-299. doi: 10.3934/proc.2013.2013.291. Antonio Greco, Giovanni Porru. Optimization problems for the energy integral of p-Laplace equations. Conference Publications, 2013, 2013(special): 301-310. doi: 10.3934/proc.2013.2013.301. Ellina Grigorieva, Evgenii Khailov, Andrei Korobeinikov. An optimal control problem in HIV treatment. Conference Publications, 2013, 2013(special): 311-322. doi: 10.3934/proc.2013.2013.311. Gemma Huguet, Rafael de la Llave, Yannick Sire. Fast iteration of cocycles over rotations and computation of hyperbolic bundles. Conference Publications, 2013, 2013(special): 323-333. doi: 10.3934/proc.2013.2013.323. Sachiko Ishida. 
$L^\infty$-decay property for quasilinear degenerate parabolic-elliptic Keller-Segel systems. Conference Publications, 2013, 2013(special): 335-344. doi: 10.3934/proc.2013.2013.335. Sachiko Ishida, Tomomi Yokota. Remarks on the global existence of weak solutions to quasilinear degenerate Keller-Segel systems. Conference Publications, 2013, 2013(special): 345-354. doi: 10.3934/proc.2013.2013.345. Navnit Jha. Nonpolynomial spline finite difference scheme for nonlinear singular boundary value problems with singular perturbation and its mechanization. Conference Publications, 2013, 2013(special): 355-363. doi: 10.3934/proc.2013.2013.355. Huiqiang Jiang. Regularity of a vector valued two phase free boundary problems. Conference Publications, 2013, 2013(special): 365-374. doi: 10.3934/proc.2013.2013.365. A. Jiménez-Casas, Mario Castro, Justine Yassapan. Finite-dimensional behavior in a thermosyphon with a viscoelastic fluid. Conference Publications, 2013, 2013(special): 375-384. doi: 10.3934/proc.2013.2013.375. Yoshitsugu Kabeya. A unified approach to Matukuma type equations on the hyperbolic space or on a sphere. Conference Publications, 2013, 2013(special): 385-391. doi: 10.3934/proc.2013.2013.385. Byungik Kahng, Miguel Mendes. The characterization of maximal invariant sets of non-linear discrete-time control dynamical systems. Conference Publications, 2013, 2013(special): 393-406. doi: 10.3934/proc.2013.2013.393. Dina Kalinichenko, Volker Reitmann, Sergey Skopinov. Asymptotic behavior of solutions to a coupled system of Maxwell's equations and a controlled differential inclusion. Conference Publications, 2013, 2013(special): 407-414. doi: 10.3934/proc.2013.2013.407. Shuya Kanagawa, Ben T. Nohara. The nonlinear Schrödinger equation created by the vibrations of an elastic plate and its dimensional expansion. Conference Publications, 2013, 2013(special): 415-426. doi: 10.3934/proc.2013.2013.415. Yukio Kan-On. Structure on the set of radially symmetric positive stationary solutions for a competition-diffusion system. Conference Publications, 2013, 2013(special): 427-436. doi: 10.3934/proc.2013.2013.427. Diana Keller. Optimal control of a linear stochastic Schrödinger equation. Conference Publications, 2013, 2013(special): 437-446. doi: 10.3934/proc.2013.2013.437. Masahiro Kubo. Quasi-subdifferential operators and evolution equations. Conference Publications, 2013, 2013(special): 447-456. doi: 10.3934/proc.2013.2013.447. Atul Kumar, R. R. Yadav. Analytical approach of one-dimensional solute transport through inhomogeneous semi-infinite porous domain for unsteady flow: Dispersion being proportional to square of velocity. Conference Publications, 2013, 2013(special): 457-466. doi: 10.3934/proc.2013.2013.457. Kousuke Kuto, Tohru Tsujikawa. Bifurcation structure of steady-states for bistable equations with nonlocal constraint. Conference Publications, 2013, 2013(special): 467-476. doi: 10.3934/proc.2013.2013.467. Laura Levaggi. Existence of sliding motions for nonlinear evolution equations in Banach spaces. Conference Publications, 2013, 2013(special): 477-487. doi: 10.3934/proc.2013.2013.477. Runchang Lin, Huiqing Zhu. A discontinuous Galerkin least-squares finite element method for solving Fisher's equation. Conference Publications, 2013, 2013(special): 489-497. doi: 10.3934/proc.2013.2013.489. Shaobo Lin, Xingping Sun, Zongben Xu. Discretizing spherical integrals and its applications. Conference Publications, 2013, 2013(special): 499-514. 
doi: 10.3934/proc.2013.2013.499. Julián López-Gómez, Marcela Molina-Meyer, Andrea Tellini. Intricate bifurcation diagrams for a class of one-dimensional superlinear indefinite problems of interest in population dynamics. Conference Publications, 2013, 2013(special): 515-524. doi: 10.3934/proc.2013.2013.515. T. F. Ma, M. L. Pelicer. Attractors for weakly damped beam equations with $p$-Laplacian. Conference Publications, 2013, 2013(special): 525-534. doi: 10.3934/proc.2013.2013.525. Monica Marras, Stella Vernier Piro. On global existence and bounds for blow-up time in nonlinear parabolic problems with time dependent coefficients. Conference Publications, 2013, 2013(special): 535-544. doi: 10.3934/proc.2013.2013.535. Stanisław Migórski. A note on optimal control problem for a hemivariational inequality modeling fluid flow. Conference Publications, 2013, 2013(special): 545-554. doi: 10.3934/proc.2013.2013.545. Feliz Minhós, João Fialho. Existence and multiplicity of solutions in fourth order BVPs with unbounded nonlinearities. Conference Publications, 2013, 2013(special): 555-564. doi: 10.3934/proc.2013.2013.555. Minoru Murai, Waichiro Matsumoto, Shoji Yotsutani. Representation formula for the plane closed elastic curves. Conference Publications, 2013, 2013(special): 565-585. doi: 10.3934/proc.2013.2013.565. Richard D. Neidinger. Efficient recurrence relations for univariate and multivariate Taylor series coefficients. Conference Publications, 2013, 2013(special): 587-596. doi: 10.3934/proc.2013.2013.587. Kazuhiro Oeda. Positive steady states for a prey-predator cross-diffusion system with a protection zone and Holling type II functional response. Conference Publications, 2013, 2013(special): 597-603. doi: 10.3934/proc.2013.2013.597. Darren C. Ong. Orthogonal polynomials on the unit circle with quasiperiodic Verblunsky coefficients have generic purely singular continuous spectrum. Conference Publications, 2013, 2013(special): 605-609. doi: 10.3934/proc.2013.2013.605. Iordanka N. Panayotova, Pai Song, John P. McHugh. Spatial stability of horizontally sheared flow. Conference Publications, 2013, 2013(special): 611-618. doi: 10.3934/proc.2013.2013.611. Purnima Pandit. Fuzzy system of linear equations. Conference Publications, 2013, 2013(special): 619-627. doi: 10.3934/proc.2013.2013.619. Saroj Panigrahi. Liapunov-type integral inequalities for higher order dynamic equations on time scales. Conference Publications, 2013, 2013(special): 629-641. doi: 10.3934/proc.2013.2013.629. Saroj P. Pradhan, Janos Turi. Parameter dependent stability/instability in a human respiratory control system model. Conference Publications, 2013, 2013(special): 643-652. doi: 10.3934/proc.2013.2013.643. Lih-Ing W. Roeger. Dynamically consistent discrete-time SI and SIS epidemic models. Conference Publications, 2013, 2013(special): 653-662. doi: 10.3934/proc.2013.2013.653. Florian Rupp, Jürgen Scheurle. Analysis of a mathematical model for jellyfish blooms and the cambric fish invasion. Conference Publications, 2013, 2013(special): 663-672. doi: 10.3934/proc.2013.2013.663. Henri Schurz. Stochastic heat equations with cubic nonlinearity and additive space-time noise in 2D. Conference Publications, 2013, 2013(special): 673-684. doi: 10.3934/proc.2013.2013.673. Masatoshi Shiino, Keiji Okumura. Control of attractors in nonlinear dynamical systems using external noise: Effects of noise on synchronization phenomena. Conference Publications, 2013, 2013(special): 685-694. 
doi: 10.3934\/proc.2013.2013.685. Inbo Sim, Yun-Ho Kim. Existence of solutions and positivity of the infimum eigenvalue for degenerate elliptic equationswith variable exponents. Conference Publications, 2013, 2013(special): 695-707. doi: 10.3934\/proc.2013.2013.695. Changming Song, Hong Li, Jina Li. Initial boundary value problem for the singularly perturbedBoussinesq-type equation. Conference Publications, 2013, 2013(special): 709-717. doi: 10.3934\/proc.2013.2013.709. Dmitry Strunin, Mayada Mohammed. Validity and dynamics in the nonlinearly excited 6th-order phase equation. Conference Publications, 2013, 2013(special): 719-728. doi: 10.3934\/proc.2013.2013.719. Futoshi Takahashi. Morse indices and the number of blow up points of blowing-up solutions for a Liouville equationwith singular data. Conference Publications, 2013, 2013(special): 729-736. doi: 10.3934\/proc.2013.2013.729. Stefanie Thiem, J\u00F6rg L\u00E4ssig. Modeling the thermal conductance of phononic crystal plates. Conference Publications, 2013, 2013(special): 737-746. doi: 10.3934\/proc.2013.2013.737. Jianjun Paul Tian, Shu Liao, Jin Wang. Analyzing the infection dynamics and control strategies of cholera. Conference Publications, 2013, 2013(special): 747-757. doi: 10.3934\/proc.2013.2013.747. Yu Tian, John R. Graef, Lingju Kong, Min Wang. Existence of solutions to a multi-point boundary value problem for a second order differential system via the dual least action principle. Conference Publications, 2013, 2013(special): 759-769. doi: 10.3934\/proc.2013.2013.759. Antonio Vitolo, Maria E. Amendola, Giulio Galise. On the uniqueness of blow-up solutions of fully nonlinear elliptic equations. Conference Publications, 2013, 2013(special): 771-780. doi: 10.3934\/proc.2013.2013.771. Hiroshi Watanabe. Existence and uniqueness of entropy solutions to strongly degenerate parabolic equations with discontinuous coefficients. Conference Publications, 2013, 2013(special): 781-790. doi: 10.3934\/proc.2013.2013.781. Frank Wusterhausen. Schr\u00F6dinger equation with noise on the boundary. Conference Publications, 2013, 2013(special): 791-796. doi: 10.3934\/proc.2013.2013.791. Zhijian Yang, Ke Li. Longtime dynamics for an elastic waveguide model. Conference Publications, 2013, 2013(special): 797-806. doi: 10.3934\/proc.2013.2013.797. Jean-Claude Zambrini. Stochastic deformation of classical mechanics. Conference Publications, 2013, 2013(special): 807-813. doi: 10.3934\/proc.2013.2013.807. Aijun Zhang. Traveling wave solutions with mixed dispersal for spatially periodic Fisher-KPP equations. Conference Publications, 2013, 2013(special): 815-824. doi: 10.3934\/proc.2013.2013.815. Shuai Zhang, L.R. Ritter, A.I. Ibragimov. Foam cell formation in atherosclerosis: HDL and macrophage reverse cholesterol transport. Conference Publications, 2013, 2013(special): 825-835. doi: 10.3934\/proc.2013.2013.825. Jo\u00E3o P. Almeida, Albert M. Fisher, Alberto Adrego Pinto, David A. Rand. Anosov diffeomorphisms. Conference Publications, 2013, 2013(special): 837-845. doi: 10.3934\/proc.2013.2013.837.
Can the existence of antimatter be inferred from Matrix Mechanics? It is well known that antimatter was first predicted by interpreting the matrices that appear in the Dirac equation. Dirac factorizes $E^2=p^2+m^2$ ($c=1,\hbar=1$) into $E=\alpha_x \hat p_x+\alpha_y \hat p_y+\alpha_z \hat p_z+\beta m$, such that $$i\partial_t \psi=-i\alpha_x \partial_x\psi-i\alpha_y \partial_y\psi-i\alpha_z \partial_z\psi+\beta m\psi.$$ This is only possible if the $\alpha$'s and $\beta$ are matrices, so the wavefunction is a vector, which means spin is built into the wavefunction. Moreover, the matrices have to be $4\times4$, so the wavefunction is a $4$-component vector, with two solutions of negative energy. This all happens because we try to find the wavefunction of a relativistic particle. Matrix Mechanics, Heisenberg's formulation, represents observables by matrices and the state by a state vector instead of using wavefunctions and differential operators. Can spin and antimatter be predicted from this formulation of quantum mechanics, without accounting for the states of different spin and antimatter in the Hamiltonian a priori?
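For reference, the standard argument behind the claim that the $\alpha$'s and $\beta$ cannot be ordinary numbers: squaring the Hamiltonian $H=\alpha_x \hat p_x+\alpha_y \hat p_y+\alpha_z \hat p_z+\beta m$ and demanding $H^2=\hat p^2+m^2$ forces
$$\{\alpha_i,\alpha_j\}=2\delta_{ij}\,\mathbb{1},\qquad \{\alpha_i,\beta\}=0,\qquad \beta^2=\mathbb{1},$$
i.e. four mutually anticommuting quantities squaring to the identity. No complex numbers satisfy this, and the smallest matrices that do (in 3+1 dimensions) are $4\times4$, which is where the four components — two spin states each for the positive- and negative-energy solutions — come from.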
Friday afternoon, 31st May, and Saturday morning, 1st June 2019, at Zentrum Mathematik, Technische Universität München. Gioia Carinci: Inclusion process, sticky Brownian motion and condensation. Abstract: Many real-world phenomena can be modelled by dynamic random networks. We will focus on preferential attachment models where the networks grow node by node and edges with the new vertex are added randomly depending on a sublinear function of the degree of the older vertex. Using Stein's method provides rates of convergence for the total variation distance between the evolving degree distribution and an asymptotic power-law distribution as the number of vertices tends to infinity. This is a joint work with Carina Betken and Marcel Ortgiese. Abstract: In this joint work with Gerónimo Uribe-Bravo, we prove and extend results from the physics literature about a random walk with random reinforced relocations. The "walker" evolves in $\mathbb Z^d$ or $\mathbb R^d$ according to a Markov process, except at some random jump-times, where it chooses a time uniformly at random in its past, and instantly jumps to the position it was at that random time. This walk is by definition non-Markovian, since the walker needs to remember all its past. Under moment conditions on the inter-jump-times, and provided that the underlying Markov process satisfies a distributional limit theorem, we show a distributional limit theorem for the position of the walker at large times. The proof relies on exploiting the branching structure of this random walk with random relocations; we are able to extend the model further by allowing the memory of the walker to decay with time. Abstract: I discuss crossing probabilities of multiple interfaces in the critical Ising model with alternating boundary conditions. In the scaling limit, they are conformally invariant expressions given by so-called pure partition functions of multiple SLE($\kappa$) with $\kappa=3$. I also describe analogous results for critical percolation and the Gaussian free field. This is joint work with Hao Wu (Yau Center / Tsinghua University). Abstract: In this talk, I will focus on the behavior of the following cluster growth models: internal DLA, the rotor model, and the divisible sandpile model. These models can be run on any infinite graph, and they are based on particles moving around according to some rule (that can be either random or deterministic) and aggregating. Describing the limit shape of the cluster these particles produce is one of the main questions one would like to answer. For some of the models, the fractal nature of the cluster is, from the mathematical point of view, far from being understood. I will give an overview of the known limit shapes for the above-mentioned growth models; in particular I will present a limit shape universality result on the Sierpinski gasket graph, and conclude with some open questions. The results are based on collaborations with J. Chen, W. Huss, and A. Teplyaev.
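As a concrete illustration of the relocation mechanism described in the second abstract above, here is a minimal simulation sketch (my own illustration, not code from the talk): a simple ±1 random walk on the integers that, at each relocation event, jumps back to its position at a uniformly chosen past time. The relocation probability and the ±1 step distribution are arbitrary choices; the talk considers general Markov processes and general inter-jump-time distributions.

import random

def relocating_walk(n_steps, p_relocate=0.05):
    """Simple random walk on Z with random reinforced relocations.

    At each step, with probability p_relocate the walker jumps to the
    position it occupied at a time chosen uniformly from its past;
    otherwise it takes an ordinary +/-1 step.
    """
    path = [0]
    for _ in range(n_steps):
        if random.random() < p_relocate:
            t = random.randrange(len(path))          # uniformly chosen past time
            path.append(path[t])                     # instant relocation
        else:
            path.append(path[-1] + random.choice((-1, 1)))
    return path

if __name__ == "__main__":
    walk = relocating_walk(10_000)
    print("final position:", walk[-1])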
Abstract: We present a chemical abundance distribution study in 14 $\alpha$, odd-Z, even-Z, light, and Fe-peak elements of approximately 3200 intermediate-metallicity giant stars from the APOGEE survey. The main aim of our analysis is to explore the Galactic disk-halo transition region, -1.20 $<$ [Fe/H] $<$ -0.55, as a means to study the chemical differences (and similarities) between these components. In this paper, we show that there are an $\alpha$-poor and an $\alpha$-rich sequence within both the metal-poor and intermediate-metallicity regions. Using the Galactic rest-frame radial velocity and spatial positions, we further separate our sample into the canonical Galactic components. We then studied the abundance ratios of Mg, Ti, Si, Ca, O, S, Al, C+N, Na, Ni, Mn, V, and K for each of the components and found the following: (1) the $\alpha$-poor halo subgroup is chemically distinct from the $\alpha$-rich halo in the $\alpha$-elements (particularly O, Mg, and S), Al, C+N, and Ni, consistent with the literature and confirming the existence of an $\alpha$-poor accreted halo population; (2) the canonical thick disk and halo are not chemically distinct in all elements, indicating a smooth transition between the thick disk and halo; (3) a subsample of the $\alpha$-poor stars at metallicities as low as [Fe/H] $\sim$ -0.85 dex is chemically and dynamically consistent with the thin disk, indicating that the thin disk may extend to lower metallicities than previously thought; and (4) the location of the most metal-poor thin disk stars is consistent with a negative radial metallicity gradient. Finally, we used our analysis to suggest a new set of chemical abundance planes ([$\alpha$/Fe], [C+N/Fe], [Al/Fe], and [Mg/Mn]) that may be able to chemically label the Galactic components in a clean and efficient way independent of kinematics.
We introduce the concepts of IVF irresolute mappings and IVF irresolute open mappings, and investigate characterizations of such mappings on interval-valued fuzzy topological spaces. Y. B. Jun, G. C. Kang and M. A. Ozturk, Interval-valued fuzzy semiopen, preopen and $\alpha$-open mappings, Honam Math. J., 28(2) (2006), 241-259. T. K. Mondal and S. K. Samanta, Topology of interval-valued fuzzy sets, Indian J. Pure Appl. Math., 30(1) (1999), 23-38.
We give an equivalent description of taut submanifolds of complete Riemannian manifolds as exactly those submanifolds whose normal exponential map has integrable fibers. It turns out that every taut submanifold is also $\mathbb Z_2$-taut, so that tautness is essentially the same as $\mathbb Z_2$-tautness. In the case where the normal exponential map of a submanifold has integrable fibers, we explicitly construct generalized Bott-Samelson cycles for the critical points of the energy functionals on the path spaces which, generically, represent a basis for the $\mathbb Z_2$-cohomology. We also consider singular Riemannian foliations all of whose leaves are taut and discuss some of their main features. Using our characterization of taut submanifolds, we are able to show that tautness of a singular Riemannian foliation is actually a property of the quotient.
Plane detection is a widely used technique with many applications: for example, augmented reality, where we first have to detect a plane on which to place AR models, and 3D scene reconstruction, especially for man-made scenes, which consist of many planar objects. Nowadays, with the proliferation of acquisition devices, obtaining a massive point cloud is not difficult, which makes plane detection directly in 3D point clouds very promising. Several plane detection and point cloud machine learning approaches have been proposed in recent research. The 3D Hough transform is one possible approach to plane detection: just as lines are detected in 2D space, planes can be parameterized into a 3D Hough space. RAPter is another method that can be used for plane detection. It finds the planes in a scene according to predefined inter-plane relations, so RAPter is efficient for man-made scenes with significant inter-plane relations, but not adequate for more general cases. Many machine learning approaches have been designed for different representations of 3D data. For instance, the Volumetric CNN consumes volumetric data as input and applies 3D convolutional neural networks to voxelized shapes. The Multiview CNN projects the 3D shape into 2D images and then applies 2D convolutional neural networks to classify them. The Feature-based DNN generates a shape vector of the object from its traditional shape features and then uses a fully connected net to classify the shape. PointNet proposed a new neural network design based on symmetric functions that can take unordered point clouds as input; these symmetric functions effectively capture the global features of a point cloud. Inspired by this, this experiment uses a symmetric network that concatenates global features and local features. For comparison, I also used a traditional network that simply generates a high-dimensional local feature space by multilayer perceptrons. According to the universal approximation of symmetric functions proposed by PointNet, a symmetric function $f$ can be arbitrarily well approximated by a composition of a set of single-variable functions and a max pooling function, as described in Theorem 1: $$f(S) \approx \gamma\left(\max\{h(x_1), h(x_2), \ldots, h(x_n)\}\right),$$ where $x_1,x_2,\ldots,x_n$ is the full list of elements in $S$ ordered arbitrarily, $h$ is a single-variable function applied to each element, $\gamma$ is a continuous function, and $\max$ is a vector max operator that takes $n$ vectors as input and returns a new vector of the element-wise maximum. Thus, according to Theorem 1, the symmetric network can be designed as a multi-layer perceptron network connected to a max pooling function, as shown in Figure 1. Figure 1: Architecture of the symmetric network. For invariance to geometric transformations, an input alignment ($T_1$ in Figure 1) and a feature alignment ($T_2$ in Figure 1) are applied to the input space and feature space, respectively. The points are first mapped to a 64-dimensional feature space and then to a 1024-dimensional feature space. A max pooling function is applied over the 1024-dimensional features to generate a global feature vector of length 1024. The global vector is then concatenated to the 64-dimensional per-point features, which yields a 1088-dimensional space. Lastly, a 2-dimensional vector for each point, representing the scores for the planar and non-planar parts, is computed from the 1088-dimensional space.
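To make the architecture concrete, here is a minimal PyTorch-style sketch of the symmetric branch. This is my own simplification, not the original implementation: the alignment networks $T_1$ and $T_2$ are omitted, batch normalization is left out, and the intermediate layer sizes other than 64/1024/1088 are arbitrary.

import torch
import torch.nn as nn

class SymmetricPlaneNet(nn.Module):
    """Per-point planar/non-planar scores from a shared MLP plus max pooling."""
    def __init__(self):
        super().__init__()
        # shared per-point MLPs, implemented as 1x1 convolutions over the point axis
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU())           # xyz -> 64 local features
        self.global_mlp = nn.Sequential(
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU())       # 64 -> 1024
        self.head = nn.Sequential(
            nn.Conv1d(1088, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 2, 1))                     # 1088 -> 2 scores per point

    def forward(self, xyz):                           # xyz: (B, 3, N)
        local = self.local(xyz)                       # (B, 64, N)
        feat = self.global_mlp(local)                 # (B, 1024, N)
        global_vec = feat.max(dim=2, keepdim=True)[0] # symmetric max pooling: (B, 1024, 1)
        global_rep = global_vec.expand(-1, -1, xyz.shape[2])
        combined = torch.cat([local, global_rep], dim=1)  # (B, 1088, N)
        return self.head(combined)                    # (B, 2, N)

# usage: scores = SymmetricPlaneNet()(torch.randn(4, 3, 2048))

The non-symmetric comparison network described next would simply concatenate feat with local instead of the max-pooled global_rep.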
For comparison, a traditional non-symmetric network is also introduced in this experiment. It is obtained from the symmetric network by detaching the max pooling function from the multi-layer perceptron network. The architecture of the non-symmetric network is shown in Figure 2. Figure 2: Architecture of the non-symmetric network. Instead of concatenating the global feature vector to the 64-dimensional feature space, the non-symmetric network simply concatenates the 64-dimensional and the 1024-dimensional feature spaces, and generates the 2-dimensional scores from that. The experiment is conducted as follows: first, prepare the data for training and testing; then feed the training data to the symmetric network and the non-symmetric network, and for each find the optimal trained model according to the minimum total loss; lastly, compare the plane detection results of the symmetric and non-symmetric networks. This experiment uses data from the ShapeNetPart dataset. I chose 64 tables, which have a significant planar surface, from the table repository for training, and 8 for testing and evaluation. Each point cloud was previously subsampled to 2048 points using a random sampler. The point clouds for training were written into an HDF5 file containing two arrays, points and pid, recording respectively the point coordinates and the planar label associated with each point. The planar part was marked out manually on the original data. Figure 3 shows a few examples of table objects for training. Figure 3: A part of the training data. A point cloud typically contains more points in the non-planar part than in the planar part. To handle this unbalanced data, I used a weighted cross-entropy function to calculate the mean loss for each epoch: the planar part is assigned a weight of 0.7 and the non-planar part a weight of 0.3 (a minimal sketch of this weighted loss is shown below). Both networks are trained for 150 epochs. The plot of the mean loss for training the symmetric network is shown in Figure 4. Figure 4: Total mean loss per epoch for the training of the symmetric network. According to Figure 4, the total mean loss settles at a very low value after the 100th epoch, so I chose the trained model from the 130th epoch for testing. Figure 5 is the corresponding plot of the mean loss for training the non-symmetric network. Figure 5: Total mean loss per epoch for the training of the non-symmetric network. The loss value stays low after the 100th epoch, and there is a fluctuation around the 130th epoch. Such a fluctuation may be caused by overfitting, so I chose the trained model from the 110th epoch for testing. The testing set has a size of 8 objects. Testing the model of the symmetric network gives an accuracy of 83.4534% and an Intersection-over-Union (IoU) of 71.1421%. Figure 6 illustrates the plane detection results on the testing set using the model generated by the symmetric network. Figure 6: Testing results of the symmetric network. The result shows fairly good performance for neural network classifiers doing plane detection on objects of a single category. The accuracy could rise further if a larger training set were prepared. The classifier seems to favor table objects with a more regular shape, i.e., a table with a square tabletop and four straight legs. For tables without a regular shape, the classification accuracy is relatively lower, and the classifier tends to misclassify the points in the middle of the tabletop.
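The weighted cross-entropy mentioned above can be written in a few lines. This is a minimal PyTorch-style sketch based only on the stated weights (0.7 for planar, 0.3 for non-planar), not the author's exact training code; the batch layout is assumed to match the network sketch above.

import torch
import torch.nn as nn

# class 0 = non-planar (weight 0.3), class 1 = planar (weight 0.7)
class_weights = torch.tensor([0.3, 0.7])
criterion = nn.CrossEntropyLoss(weight=class_weights)

def epoch_loss(model, batches):
    """Mean weighted cross-entropy over one epoch."""
    losses = []
    for points, labels in batches:      # points: (B, 3, N) float, labels: (B, N) long
        scores = model(points)          # (B, 2, N) per-point class scores
        losses.append(criterion(scores, labels))
    return torch.stack(losses).mean()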
The non-symmetric network shows a similar testing result, with an accuracy of 85.7117% and an IoU of 75.0279%. The plane detection results of the non-symmetric network are shown in Figure 7 below. Figure 7: Testing results of the non-symmetric network. Similar to the symmetric network, the classifier based on the non-symmetric network performs well on objects with a more regular shape. It may also fail to classify the points in the middle of the tabletop. Furthermore, the non-symmetric network can also misclassify a few points on the legs of a table, as shown in the 1st, 2nd and 8th objects; such a pattern is not observed in the results of the symmetric network. In this experiment, although the non-symmetric network shows a slightly higher classification accuracy, we cannot conclude that the non-symmetric network performs better at plane detection. Since only one category of object is included, the global vector generated by the symmetric function does not contribute much. In further experiments, we can introduce more categories of objects and see how the networks work on more complicated shapes. This experiment shows the potential of neural network classifiers for plane detection. According to the experimental results, misclassification mostly produces holes in the detected planes. This is not a very severe problem for plane detection tasks, as we can afterwards apply a 3D Hough transform, which is robust to missing and contaminated data, to the points of the detected planar part to recover the plane parameters. All the code and data of this experiment can be found in my GitHub repository: IsaacGuan/PointNet-Plane-Detection.
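As an illustration of the post-processing idea mentioned above, here is a minimal sketch of a 3D Hough-style plane fit on the points labelled planar. This is my own illustration rather than part of the experiment: the bin counts are arbitrary, and a real scene with several planes would need peak detection over multiple accumulator cells rather than a single argmax.

import numpy as np

def hough_plane(points, n_theta=60, n_phi=30, n_rho=100):
    """Return the (theta, phi, rho) cell with the most votes, where the plane is
    n . x = rho and n = (cos(theta)sin(phi), sin(theta)sin(phi), cos(phi))."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    rho_max = max(np.linalg.norm(points, axis=1).max(), 1e-9)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    for i, t in enumerate(thetas):
        for j, p in enumerate(phis):
            n = np.array([np.cos(t) * np.sin(p), np.sin(t) * np.sin(p), np.cos(p)])
            rho = points @ n                                    # signed distances to origin plane
            k = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            k = np.clip(k, 0, n_rho - 1)
            np.add.at(acc[i, j], k, 1)                          # accumulate votes
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], phis[j], k / (n_rho - 1) * 2 * rho_max - rho_max

# usage: theta, phi, rho = hough_plane(planar_points)   # planar_points: (N, 3) array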
Consider a non-compact complete Riemannian manifold $(M, g)$ with smooth compact boundary $\partial M$. Suppose also that $M \setminus \partial M$ has positive/non-negative Ricci curvature. My question is, is it possible to remove $\partial M$ and replace it by another compact manifold $N$ with boundary $\partial N = \partial M$, such that $M \cup N$ is a complete manifold (without boundary) of positive/non-negative Ricci curvature? Let us assume maximal volume growth on $M$, and let us also assume for convenience that $M \setminus K$, where $K$ is compact, has only one connected unbounded component. The naive mental picture I have is "fitting a spherical cap to an end of a cylinder", but I am not sure how far such a heuristic goes in general. Thanks in advance for any suggestions! Two relevant references: On the moduli space of positive Ricci curvature metrics on homotopy spheres by D. Wraith; Construction of manifolds of positive Ricci curvature with big volume and large Betti numbers by G. Perelman. For example, Perelman shows how to glue two positive Ricci curvature manifolds with isometric boundaries to get a metric of positive Ricci curvature, which requires (roughly speaking) that the normal curvatures at one boundary are greater than the negatives of the normal curvatures at the other boundary when the normals are chosen correctly, as in filling a cylinder with a cap. One finds the following: If $M$ is a complete Riemannian manifold of nonnegative Ricci curvature which is flat outside a compact set and is simply connected at infinity, then $M$ is flat. For example, $\mathbb R^n$ is simply connected at infinity when $n>2$, so if you replace a round disk in $\mathbb R^n$ with a compact manifold with sphere boundary, and hope to have nonnegative Ricci curvature on the result, then the result must be flat, and in particular, be finitely covered by the product of a Euclidean space and a torus. Note that a capped $2$-dimensional cylinder is not simply connected at infinity, while a capped $n$-dimensional cylinder ($n>2$) is not flat outside the cap. Take an $n$-sphere with its standard metric of positive curvature, cut out an $n$-ball and reglue it by a diffeomorphism of the bounding $(n-1)$-sphere. If the diffeomorphism is not isotopic to the identity, you will get an exotic sphere (this is called the twisted sphere construction), and you are asking whether the positively curved metric on the $n$-ball extends to a positively or non-negatively Ricci-curved metric on the exotic sphere. However, Hitchin, in his work on harmonic spinors, has shown that certain exotic spheres do not admit a metric of positive scalar curvature, in particular not a metric of positive Ricci curvature. (You can find more information in this survey.) Thus a construction of the kind you want cannot work in general.
I got this problem from Rustan Leino, who got it from Bertrand Meyer, who had heard it was once given on the Putnam exam. At some point during a baseball season, a player has a batting average of less than 80%. Later during the season, his average exceeds 80%. Prove that at some point, his batting average was exactly 80%. Also, for which numbers other than 80% does this property hold? We will prove that this property holds for numbers $p$ such that $1/(1-p)$ is an integer. We will furthermore prove that the property doesn't hold for any other $p$ such that $0 < p < 1$. Since $p = 0.8$ makes $1/(1-p)$ an integer (specifically, five), the property holds for an 80% batting average. First, we prove that the property holds if $1/(1-p)$ is an integer. Suppose it doesn't hold for some such $p$. Then, there must be a single at-bat before which the player's average is $< p$ and after which the player's average is $> p$. This must be a successful at-bat, since it increases the player's average. Let $h$ be the number of safe hits before this at-bat, and let $b$ be the number of at-bats preceding this at-bat. Also, let $n = 1/(1-p)$, which we know to be an integer. We must have $h/b < p < (h+1)/(b+1).$ Some algebra shows that $p = (n-1)/n$, so we have $h/b < (n-1)/n < (h+1)/(b+1).$ This means that $hn < b(n-1)$ and $(b+1)(n-1) < (h+1)n.$ The first of these can be transformed into $hn + n - 1 < (b+1)(n-1)$ and the latter can be transformed into $(b+1)(n-1) < hn + n.$ Thus, we have \[ hn + n - 1 < (b+1)(n-1) < hn + n. \] In other words, the integer $(b+1)(n-1)$ lies strictly between the integer $hn + n - 1$ and its immediate successor. This is impossible, so the supposition must be false and this case is proved. Second, we prove that the property doesn't hold if $1/(1-p)$ isn't an integer. $1/(1-p)$ must be positive, so it must be between some non-negative integer and its immediate successor. In other words, there exists an integer $n$ such that $0 \leq n < 1/(1-p) < n + 1$. Some algebra shows that because of this, $(n-1)/n < p < n/(n+1).$ Now, suppose a player has a total of $n+1$ at-bats, out of which all but the first are safe hits. His batting average will take on the values $0/1, 1/2, 2/3, \ldots, (n-1)/n, n/(n+1).$ All but the last of these are $< p$, the last is $> p$, and the average is never exactly $p$. Thus, the property doesn't hold for this $p$.
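As a sanity check on both parts of the argument, here is a small Python experiment (my own illustration, not part of the original puzzle). Exact rational arithmetic confirms that for $p = 4/5$ a random search finds no sequence whose running average jumps from below $p$ to above $p$ without ever equalling $p$, while for $p = 7/10$ (where $1/(1-p)$ is not an integer) the explicit counterexample from the proof works.

from fractions import Fraction
import random

def crosses_without_hitting(p, hits_sequence):
    """True if the running average is > p after having been < p,
    without ever being exactly p at any earlier at-bat."""
    p = Fraction(p)
    hits, at_bats, seen_below = 0, 0, False
    for hit in hits_sequence:
        at_bats += 1
        hits += hit
        avg = Fraction(hits, at_bats)
        if avg == p:
            return False
        if avg < p:
            seen_below = True
        elif seen_below:            # above p after being below, never exactly p
            return True
    return False

# Direction 1: p = 4/5, so 1/(1-p) = 5 is an integer -- a random search finds no skip.
p = Fraction(4, 5)
assert not any(
    crosses_without_hitting(p, [random.random() < 0.85 for _ in range(50)])
    for _ in range(2000))

# Direction 2: p = 7/10, so 1/(1-p) = 10/3 is not an integer -- explicit counterexample.
print(crosses_without_hitting(Fraction(7, 10), [0, 1, 1, 1]))   # True: 0, 1/2, 2/3, 3/4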
The authors of this paper introduce a novel approach to GP classification, called GPD. The authors use a GP to produce the parameters of a Dirichlet distribution, and use a categorical likelihood for multi-class classification problems. After applying a log-normal approximation to the Dirichlet distribution, inference for GPD is the same as exact-GP inference (i.e. does not require EP, Laplace approximation, etc.) The authors show that GPD has competitive accuracy, is well calibrated, and offers a speedup over existing GP-classification methods. Quality: The method introduced by this paper is a clever probabilistic formulation of Bayesian classification. The method is well-grounded in Bayesian statistics and makes good comparisons to similar existing approaches. The one issue I have with this paper is the claims about the ``speedup'' afforded by this method (see detailed comments). The authors are not specific about the speedup (is it training time or inference time?) and I'm concerned they are missing an important baseline. Clarity: The paper is clearly written. Originality: To the best of my knowledge this is a novel approach to Bayesian classification. Significance: The idea is an interesting new approach to approximate inference for classification, though it does not perform better than other classification methods (in terms of accuracy). Nevertheless, it will be of interest to the Bayesian ML community. Overall: The proposed method is a novel approach to classification - and based on these merits I would be willing to accept the paper. However, the authors need to substantiate the speedup results with more detail and analysis if they wish to report this as a primary advantage. Detailed comments: My main concern with this paper is with regards to the ``speedup'' that the authors claim. There are many missing details. The authors claim that their GPD method will be faster than GP classification because "GPC requires carrying out several matrix factorizations." (What matrix factorizations are the authors referring to?) The inducing-point GP-classification method in the results (SVGP) is asymptotically the same as the inducing-point method used for GPD (SGPR). The NMLL computations (used for training) and inference computations for SVGP and SGPR both require the Cholesky decomposition of an m x m matrix (where m is the number of inducing points). This is the primary computational bottleneck of both SVGP (for GP classification) and SGPR (for GPD). I'm assuming the authors claim that GPD is faster because GPD does not require learning variational parameters. The ``training'' of GPD - as far as I understand - only requires optimizing the hyperparameters, which presumably requires fewer optimization iterations. However, one key advantage of SVGP (for GP classification) is that it can be optimized with SGD. So even though SVGP requires more optimization iterations, each iteration will be fast since it only requires a minibatch of data. On the other hand, every optimization iteration of GPD will require the entire dataset. Therefore, there is not an obvious reason why GPD (and GP regression) should be that much faster than GP classification. The authors should elaborate with more detail in Sections 2 and 3 as to why GP-classification is slow. While the authors do report some empirical speedups, the experiments are missing several important details. Firstly, what is the ``speedup'' measuring - training or inference? (I'm assuming that it is measuring training, but this must be clarified.)
Secondly, was the author's GP-classification baseline trained with or without SGD? If it was trained without SGD, I would argue that the speed comparisons are unfair since stochastic optimization is key to making GP-classification with SVGP a fast method. Small comments: - 149: "For a classifier to be well-calibrated, it should accurately approximate fp." - This is not necessarily true. Assuming a balanced dataset without loss of generality, a latent function that outputs zero (in the case of CE) or 0.5 (in the case of least-squares classification) for all samples is well calibrated. - Figure 1: it is difficult to differentiate between the lines in the n=60 and n=80 plots - 241: do you mean the initial inducing points were chosen by k-means? SVGP and SGPR should optimize the inducing point locations. ----------------------- Post author response: While I would still vote to accept the paper, I will note that the author's response did not clarify my questions about the speedup. (Was it training or inference?) Additionally, the author's claim about speed still does not acknowledge many recent advances in scalable GP classification: "Our claim in the paper about multiple matrix factorizations pertains to training times and it is specific to the case of standard implementations of EP and the Laplace approximation." Though the authors make a statement about GPC speed based on the EP/Laplace approximations, they do not compare to these methods in their experiments! Regarding SGD-based GPC methods: there have been a number of recent works in scalable GPC methods that rely on SGD [1-3]. These papers report large empirical speedups when using SGD-based optimization over batch GD optimization. By not comparing against SGD-based methods, the authors are disregarding GPC advances from the past 4 years. That being said, I still do believe that this method is novel, and I would vote for acceptance. I would encourage the authors to be much more rigorous about their speedup claims. There are some scenarios where a non-GPC-based approach would be clearly advantageous (e.g. in an active learning setting, when the kernel hyperparameters can be reused between data acquisitions), and I would suggest that the authors focus on these specific scenarios rather than making broad claims about speed. Hensman, James, Alexander Matthews, and Zoubin Ghahramani. "Scalable Variational Gaussian Process Classification." Artificial Intelligence and Statistics. 2015. Hernández-Lobato, Daniel, and José Miguel Hernández-Lobato. "Scalable Gaussian process classification via expectation propagation." Artificial Intelligence and Statistics. 2016. Wilson, Andrew G., et al. "Stochastic variational deep kernel learning." Advances in Neural Information Processing Systems. 2016. The authors propose a novel approximation for multi-class Gaussian process classification. The proposed approximation allows GP classification problems to be solved as GP regression problems, thus yielding a significant speedup. The core idea is to represent the likelihood as a categorical distribution with a Dirichlet prior on the parameters and then use the fact that a Dirichlet distribution can be represented using a set of independent gamma distributions. These independent gamma distributions are then approximated using log-normal distributions via moment matching. The result is a model with heteroscedastic Gaussian likelihood. The downside is that the approximation introduces a new parameter that cannot be tuned using marginal likelihood.
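To make the gamma-to-lognormal step described above concrete, here is a small sketch of the moment matching as I understand it from this review. This is an illustration reconstructed from the description, not code from the paper under review; in particular, the label encoding (each wrong class receives alpha_eps and the observed class receives 1 + alpha_eps) is an assumption.

import numpy as np

def dirichlet_to_regression_targets(y, n_classes, alpha_eps=0.01):
    """Turn class labels into per-class heteroscedastic regression targets.

    Each class gets a Gamma(alpha, 1) pseudo-observation, approximated by a
    lognormal with the same mean and variance, i.e. a Gaussian in log space:
        sigma2 = log(1/alpha + 1),   mu = log(alpha) - sigma2 / 2.
    """
    alpha = np.full((len(y), n_classes), alpha_eps)
    alpha[np.arange(len(y)), y] += 1.0       # observed class gets alpha_eps + 1
    sigma2 = np.log(1.0 / alpha + 1.0)       # matched log-space variance
    mu = np.log(alpha) - sigma2 / 2.0        # matched log-space mean
    return mu, sigma2                        # regression targets and per-point noise

# usage: mu, s2 = dirichlet_to_regression_targets(np.array([0, 2, 1]), n_classes=3)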
Besides the model approximation introduced in the paper, the authors also rely on standard approximate inference methods to make their method scale, i.e. variational inference and inducing points. The proposed method is evaluated and compared to benchmark methods using 9 different datasets. The experiments show that the performance of the proposed method is comparable to the reference methods in terms of classification error, mean negative log likelihood and expected calibration error. The paper appears technically sound and correct. The authors do not provide any theoretical analysis for the proposed method, but the claims for the method are supported by experimental results for 9 datasets showing that the method is comparable to the reference methods. My only concern is the parameter alpha_eps introduced in the model approximation. In figure 3, the authors show that the performance is quite sensitive to the value of alpha_eps. However, they also show that the optimal value for alpha_eps (in terms of MNLL) is strongly correlated between the training and test set. However, they only show this for 4 out of the 9 data sets included in the paper. It would be more convincing if the authors would include the corresponding figures for the remaining 5 datasets in the supplementary material along with the reliability diagrams. The paper is in general clear, well-organized, and well-written. However, there are a couple of details that could be made clearer. If I understand the method correctly, then solving a classification problem with C classes essentially boils down to solving C independent GP regression problems each with different likelihoods, where each regression problem is basically solving a one-vs-rest classification problem. After solving the C regression problems, the posterior class probabilities can be computed by sampling from C GP posteriors using eq. (7). 1) If this is correct, the authors should mention explicitly that each regression problem is solved independently and discuss the implications. 2) How are the hyperparameters handled? Is each regression problem allowed to have different hyperparameters or do they share the same kernel? 3) In my opinion, the authors should also state the full probabilistic model before and after the approximation. That would also help clarify point 1). The idea of solving classification problems as regression problems is not new. However, to the best of my knowledge, the idea of representing the multi-class classification likelihood in terms of independent gamma distributions and then approximating each gamma distribution using a lognormal distribution is indeed original and novel. The authors propose a method for making GP classification faster. I believe that these results are important and that both researchers and practitioners will have an interest in using and improving this method. Further comments Line 89: "The observable (non-Gaussian) prior is obtained by transforming". What is meant by an observable prior? Line 258: "Figure 5 summarizes the speed-up achieved by the used of GPD as opposed to the variational GP classification approach". 1) Typo, 2) For completeness, the authors should add a comment on where the actual speed up comes from. I assume it's due to the availability of closed-form results for the optimal variational distribution in (Titsias, 2009) which are not available for non-Gaussian likelihoods?
Line 266: "In Figure 5, we explore the effect of increasing the number of inducing points for three datasets" Can you please elaborate on the details? --------------------------------------------------------------------- After author feedback --------------------------------------------------------------------- I've read the rebuttal and the authors addressed all my concerns properly. Therefore, I have updated my overall score from 6 to 7. This paper is concerned with multi-class classification using GPs, placing particular emphasis on producing well calibrated probability estimates. Inference in multi-class classification for GPs is in general a hard task, involving multiple intractabilities (matrix factorisations). One of the attractions of using GPs for classification is that they aught to be well calibrated, which is not always the case for non-probabilistic classifiers. The paper here proceeds in two directions: how to make an efficient multi-class GP (with comparisons to existing methods); and analysis of the calibration properties of such methods. GP regression is a much simpler task than GP classification (due to the Gaussian likelihood). This immediately suggest that perhaps by treating the (binary) class labels as regression targets, and ignoring the fact that they are discrete variables (this can be extended to multiple classes through a 1vs rest approach). Predictions from such a model may lie outside of the [0, 1] range, so one can either clip the outputs or apply post-calibration. This forms a baseline in the paper. Another idea is to use separate regression models for each class, and then combine these through the use of a categorical/Dirichlet model. I note that this idea is not new - it has been used extensively in the context of Bayesian linear models, and I found several references to this kind of model for GPs [1-4]. The idea of constructing these as separate Gamma likelihoods is novel as far as I am aware, and the neat thing here is that the Gamma likelihood (which would be intractable) can be quite well approximated by the lognormal distribution, allowing inference on log scaled versions of the targets. Note here that the model is heteroskedastic, although there are only two possible values for the noise per instance ($\alpha_i$), which seems somewhat limiting. The analysis of the behaviour of $\alpha$ (section 4.3), shows that it it's well behaved in testing scenarios. I would conjecture here that the between class variance must be similar in train and test for this to be the case. The experiments are appropriate, and Figure 4 shows that the method is competitive (although not obviously superior) to the competing methods. When combined with the speedup, the comparison with GPC is certainly favourable. It is perhaps harder to argue that it dominates GPR (Platt) and NKRR, although on balance the uplift in performance is probably enough to accept the additional complexity in certain situations. In sum, on the positive side this paper is well written, and contains a (fairly) simple idea that is well executed. The negative side is that there is limited depth to the analysis. Specific comments: 51 "It is established that binary classifiers are calibrated when they employ the logistic loss". This is under the (strong) assumption that the true conditional distributions of the scores given the class label are Gaussian with equal variance. This of course is often violated in practice. 
126 "The Bayes' rule minimising the expected least squares is the regression function … which in binary classification is proportional to the conditional probability of the two classes". I guess you mean "The Bayes optimal decision function"? And do you mean the ratio of the conditional probabilities? As in the point above, this being calibrated relies on a conditionally Gaussian assumption 132 breifly -> briefly Figure 1 - are these in fact two-class problems? Table 1 - how are the numbers of inducing points chosen? There's no obvious pattern Appendix Figure 1 - HTRU2 - GRP (Platt) - I believe this is a case where Platt scaling cannot help Cho, Wanhyun, et al. "Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data." World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering9.12 (2015): 2453-2457. Chan, Antoni B. "Multivariate generalized Gaussian process models." arXiv preprint arXiv:1311.0360 (2013). Cho, Wanhyun, et al. "Variational Bayesian Inference for Multinomial Dirichlet Gaussian Process Classification Model." Advances in Computer Science and Ubiquitous Computing. Springer, Singapore, 2015. 835- Salimbeni & Deisenroth. Gaussian process multiclass classification with Dirichlet priors for imbalanced data. Workshop on Practical Bayesian Nonparametrics, NIPS 2016. Note: Following author feedback my concerns about related work have been addressed. I feel that this is a solid if not remarkable paper.
Prove that the given code calculates the factorial of a positive integer variable x. We can annotate each line of code to describe the properties of the variables (have a look at Hoare logic for more information). The loop runs $j$ times, since the loop body only executes while $x_i > 1$, and from (1), (3), (4) and (5) we can deduce that $y_j = x_0 \times x_1 \times \cdots \times x_j = x \times (x - 1) \times \cdots \times 1$, which is the factorial of $x$.
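The annotated code itself is not reproduced above, so as an illustration here is one plausible version of the loop together with invariant-style comments in the spirit of the proof. The numbering (1)–(5) and the variable names are my assumptions, not the original annotations.

def factorial(x):
    """Iterative factorial; comments sketch the Hoare-style annotations."""
    assert x >= 1                      # (1) precondition: x is a positive integer
    y = 1                              # (2) y = 1 and x = x_0
    # loop invariant: y * x! == x_0!  and  1 <= x <= x_0
    while x > 1:                       # (3) body runs only while x > 1
        y = y * x                      # (4) y accumulates the current value of x
        x = x - 1                      # (5) x strictly decreases, so the loop terminates
    # on exit x == 1, so the invariant gives y == x_0!
    return y

print(factorial(5))  # 120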
Abstract: Decomposition spaces are simplicial $\infty$-groupoids subject to a certain exactness condition, needed to induce a coalgebra structure on the space of arrows. Conservative ULF functors (CULF) between decomposition spaces induce coalgebra homomorphisms. Suitable added finiteness conditions define the notion of Möbius decomposition space, a far-reaching generalisation of the notion of Möbius category of Leroux. In this paper, we show that the Lawvere-Menni Hopf algebra of Möbius intervals, which contains the universal Möbius function (but is not induced by a Möbius category), can be realised as the homotopy cardinality of a Möbius decomposition space $U$ of all Möbius intervals, and that in a certain sense $U$ is universal for Möbius decomposition spaces and CULF functors.
Is multiplication by zero clear for and understood by K-3 students? For K-3 students, perhaps it is not acceptable to introduce multiplication by zero as a property or definition. Instead, the child may think about multiplication as, e.g., repeated addition. Examples of the "repeated addition" conception: $3 \times 2 = 3 + 3$ and $4 \times 3 = 4 + 4 + 4$. How will students conceive of $3 \times 0$? How should early elementary/primary curricula deal with multiplication by zero? I think the graphical approach gets the idea across. You could represent $3\times 1$ by a row of three dots; likewise, $3\times 2$ is two rows of three dots; etc. If you asked them to guess what $3\times 0$ is, some would probably tell you you'd have no rows so no dots, so the answer is zero. Humans are very good at inductive logic, even if we aren't aware of it. I'm sticking with food math. You have 4 friends and need 2 candy bars per friend, so 8 bars. Similarly, if it's, say, 1 bar per friend, you need 4 bars total. If it's 0 bars per friend, it's 0. But if it's 0 per friend, no matter how many friends you have, you still need 0 candy bars.