OK, so we have, let's say, 2 big wooden stakes, each with a length of $2$ meters. Our goal is to create as many photo frames as possible.
Each photo frame consists of $2\times50$ cm pieces and $2\times30$ cm pieces.
We start off in state $(200,200)$ and we can create a frame in multiple ways, such as $(100,60)$ or $(150,90)$ or $(40,200)$ and so on. But how can I convert this into a recursion formula?
This doesn't directly answer your "recursion" question. I am not exactly sure what you even mean by that. But it does address one way of approaching problems like this, so maybe it will be helpful.
If you want to generalize your approach to problem-solving, then you need to think about more general problems. You don't have $2$ stakes - you have $n$ stakes. You could also generalize the length of the stakes and of the frame pieces, but for various reasons, I think in this case it will be more helpful to only generalize one element first.
The next thing you want to do - and I cannot emphasize this enough, as it is a major stumbling block for students and even rocket scientists - is to always immediately convert all measurements to consistent units. You have stakes of length $200$ cm.
If we divide $j$ stakes into four $50$ cm strips and $2k$ stakes into one $50$ cm strip and five $30$ cm strips ($2k$ because we need an even number of each strip), then we get $(4j+2k) 50$ cm strips and $(10j)30$ cm strips. To have no waste, we need these counts to be the same: $$4j + 2k = 10j\\k = 3j$$ whose smallest solution is $j = 1, k = 3$. So given $2k +j = 7$ stakes, we can divide them up completely into ten $50$ cm strips and ten $30$ cm strips, making 5 frames.
$r = 0: 0$ additional frames, no waste.
$r = 1: 1$ additional frame, $40$ cm wasted.
$r = 2: 2$ additional frames, $80$ cm wasted.
$r = 3: 3$ additional frames, $120$ cm wasted.
$r = 4: 4$ additional frames, $160$ cm wasted.
$r = 5: 6$ additional frames, $40$ cm wasted.
$r = 6: 7$ additional frames, $80$ cm wasted.
So the maximum number of frames that can be made from $n = 7q + r$ stakes is $5q + r$ when $r \le 4$ and $5q + r + 1$ when $r =5$ or $r = 6$.
| CommonCrawl |
This paper derives the memory of the product series $x_ty_t$, where $x_t$ and $y_t$ are stationary long memory time series of orders $d_x$ and $d_y$, respectively. Special attention is paid to the case of squared series and products of series driven by a common stochastic factor. It is found that the memory of products of series with non-zero means is determined by the maximal memory of the factor series, whereas the memory is reduced if the series are mean zero. | CommonCrawl |
wp_safe_redirect() – Performs a safe redirect, using wp_redirect(). Before redirecting, it checks whether the host is in the whitelist (the list of allowed hosts). WordPress function.
Before redirecting to the specified host (URL), the function validates it. Redirection will occur only if the host is present in the list of allowed hosts.
If the host is not allowed, then the redirect defaults to http://site.com/wp-admin instead. This prevents malicious redirects to another host. With this approach, you can protect users from being sent to an unsafe site.
The list of allowed hosts can be extended by plugins, see hook allowed_redirect_hosts().
Since 5.1.0: the return value from wp_redirect() is now passed on, and the $x_redirect_by parameter was added.
Return: true/false. $redirect: false if the redirect was cancelled, true otherwise.
$status: HTTP response status code to use. Default: 302 (Moved Temporarily).
$x_redirect_by: The application doing the redirect. | CommonCrawl |
Let $I$ be a nonzero ideal of the ring of Gaussian integers $\Z[i]$.
Prove that the quotient ring $\Z[i]/I$ is finite.
The ring $\Z[i]$ is a Euclidean domain with respect to the norm $N(a+bi)=a^2+b^2$. In particular, $\Z[i]$ is a Principal Ideal Domain (PID).
Since $I$ is a nonzero ideal of the PID $\Z[i]$, there exists a nonzero element $\alpha\in \Z[i]$ such that $I=(\alpha)$.
Let $a+bi+I$ be an arbitrary element in the quotient $\Z[i]/I$.
By the division algorithm in $\Z[i]$, we can write \[a+bi=q\alpha+r,\] for some $q, r\in \Z[i]$ with $N(r) < N(\alpha)$.
Since $q\alpha\in I$, we have \[a+bi+I=r+I.\] It follows that every element of $\Z[i]/I$ is represented by an element $r$ whose norm is less than $N(\alpha)$.
There are only finitely many elements in $\Z[i]$ whose norm is less than $N(\alpha)$.
Hence the quotient ring $\Z[i]/I$ is finite.
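As an illustrative example, take $I=(2+i)$, so that $N(\alpha)=5$. Since $i\equiv -2 \pmod{I}$, every class in $\Z[i]/I$ is represented by an ordinary integer, and since $5=(2+i)(2-i)\in I$, the five classes $0+I, 1+I, \dots, 4+I$ exhaust the quotient (they are distinct because any nonzero ordinary integer in $I$ has absolute value at least $5$). In fact $\Z[i]/(2+i)\cong \Z/5\Z$, which has exactly $N(\alpha)=5$ elements.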
| CommonCrawl |
Why do we use ancilla qubits for error syndrome measurements?
Here $M$ is a measurement in the computational basis. This circuit measures $Z_1Z_2$ and $Z_2Z_3$ of the encoded block (i.e. the top three qubits). My question is: why measure these using ancilla qubits? Why not just measure the 3 encoded qubits directly? Such a setup would mean you would not have to use CNOT gates, which, from what I have heard, are hard to implement.
(Note: I have only given this 3-qubit code as an example; I am interested in general syndrome measurements on general codes.)
The key point of quantum error correction is precisely to correct the errors without collapsing the qubits, right? If we measure the encoded qubits directly, we project them to $\left|0\right>$ or $\left|1\right>$ and lose all the information in the coefficients of $\alpha \left|0\right> + \beta \left|1\right>$. By measuring ancilla qubits we can learn what has happened to the qubits without actually learning the values of the qubits: this enables us to correct errors in a non-destructive way and carry on with our quantum operation.
When you say "why not just measure the 3 encoded qubits directly", are you thinking that you could measure $Z_1$, $Z_2$ and $Z_3$, and that, from there, you can calculate the values $Z_1Z_2$ and $Z_2Z_3$?
This is sort of true: if your only goal is to obtain the observables $Z_1Z_2$ and $Z_2Z_3$, you could do this.
But that is not your end goal, which is, instead, to preserve the information encoded in the logical state. The only way you can do this is to learn nothing about the state that is encoded. Effectively, measuring in this way gives you too much information: it gives you 3 bits of information (1 bit from each measurement that you perform) when you only need 2 bits. Where does this extra bit come from? It is one bit of information about the state that you have encoded. In other words, you have measured the encoded state, destroying any superposition that you are specifically trying to use the error correcting code to protect.
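To make this concrete, here is a small numerical sketch (plain numpy rather than any quantum SDK, with illustrative amplitudes $\alpha=0.6$, $\beta=0.8$ and a deliberately injected bit flip on qubit 1): the stabilizer measurements $Z_1Z_2$ and $Z_2Z_3$ come out deterministically and locate the error, while a direct $Z_1$ measurement is probabilistic and its statistics depend on the encoded amplitudes, which is exactly the information we are not allowed to extract.

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encoded logical qubit: alpha|000> + beta|111>  (illustrative amplitudes)
alpha, beta = 0.6, 0.8
psi = np.zeros(8, dtype=complex)
psi[0b000] = alpha
psi[0b111] = beta

# Introduce a bit-flip error on qubit 1 (the leftmost qubit)
psi = kron(X, I2, I2) @ psi

# Stabilizer (syndrome) observables
Z1Z2 = kron(Z, Z, I2)
Z2Z3 = kron(I2, Z, Z)

# The corrupted state is still an eigenstate of both stabilizers, so these
# measurements are deterministic and leave the superposition untouched.
for name, S in [("Z1Z2", Z1Z2), ("Z2Z3", Z2Z3)]:
    syndrome = int(round(np.real(psi.conj() @ S @ psi)))
    print(name, "=", syndrome)   # (-1, +1): the error is on qubit 1

# Contrast: measuring Z1 directly is probabilistic and its statistics depend
# on alpha and beta, i.e. it extracts (and destroys) the encoded information.
Z1 = kron(Z, I2, I2)
p_plus = np.real(psi.conj() @ ((np.eye(8) + Z1) / 2) @ psi)
print("P(Z1 = +1) =", p_plus)    # 0.64 = |beta|^2 for this state
```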
| CommonCrawl |
Definition: A function $f$ is said to be Increasing on an interval $I$ if for all $m, n \in I$, whenever $m < n$, we have that $f(m) < f(n)$. A function $f$ is said to be Decreasing on an interval $I$ if for all $m, n \in I$, whenever $m < n$, we have that $f(m) > f(n)$.
Many texts will give the following similar definition for an increasing function: "A function $f$ is said to be increasing on an interval $I$ if for all $m, n \in I$, whenever $m \leq n$, we have that $f(m) \leq f(n)$", and then define a function $f$ to be strictly increasing on $I$ if it satisfies the original definition above. A similar sentiment holds for decreasing functions. In most applications of the term "increasing function" and "decreasing function", the distinction in these definitions will not be important. When it is, it will be clearly noted.
The teal indicates the intervals where the function is decreasing, while the blue indicates the intervals where the function is increasing.
Determining whether a function is increasing or decreasing on a certain interval can be difficult if we don't have an idea of what the graph of $f$ looks like. For example, consider the function $f(x) = x^2 - \ln x$. We will show that this function is increasing on $[1, \infty)$.
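To see this, suppose $1 \le m < n$. Then
$$f(n) - f(m) = (n^2 - m^2) - (\ln n - \ln m) = (n-m)(n+m) - \ln \frac{n}{m}.$$
Since $\ln t \le t - 1$ for all $t > 0$, we get $\ln \frac{n}{m} \le \frac{n}{m} - 1 = \frac{n-m}{m} \le n-m$ (using $m \ge 1$), while $(n-m)(n+m) > n-m$ because $n-m > 0$ and $n+m > 1$.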
From above we see that $f(n) - f(m) > 0$. So $f(m) < f(n)$ and $f$ is increasing on $[1, \infty)$. | CommonCrawl |
KAIST undergraduate student Haesong Seo wrote fairly complete lecture notes of all the lectures given at this year's topology summer school.
One can download the note from the following link [download].
KAIST Advanced Institute for Science-X (KAIX) hosts its first thematic program this summer. As a part of the program, there will be a summer school on mathematics in June. This year's theme is "Introduction to the recent developments in PDE and Topology, and their intersection." Topology session is organized by me, and PDE session is organized by Prof. Soonsik Kwon.
All talks are at the building E6-1, Room 2413.
The schedule is as in the table below.
Abstract: The following topics will be covered.
Title: Group actions on quasi-trees and application.
Abstract: A quasi-tree is a geodesic metric space that is quasi-isometric to a tree.
Abstract: Historically, deformation spaces of Kleinian groups appeared as generalisations of Teichmuller spaces. Thurston's work in the 1980s gave a quite novel viewpoint coming from his study of hyperbolic 3-manifolds. In this talk, I shall describe the theory of deformations of Kleinian groups starting from classical work of Bers, Maskit and Marden, and then spend most of time explaining Thurston's framework. If time permits, I should also like to touch upon the continuity/discontinuity of several invariants defined on deformation spaces.
Abstract: There is a rich interplay between the degree of regularity of a group action on the circle and the allowable algebraic structure of the group. In this series of talks, I will outline some highlights of this theory, culminating in a construction due to Kim and myself of groups of every possible critical regularity $\alpha \in [1,\infty)$.
If you have any question, please contact me via email hrbaik(at)kaist.ac.kr. | CommonCrawl |
Abstract: (Abridged) The detection of forming planets in disks around young stars remains elusive, and state-of-the-art observational techniques provide somewhat ambiguous results. It has been reported that the pre-transitional T Tauri star LkCa 15 could host three planets; candidate planet b is in the process of formation, as inferred from its H$\alpha$ emission. However, a more recent work casts doubt on the planetary nature of the previous detections. We have observed LkCa 15 with ISIS/WHT. The spectrograph's slit was oriented towards the last reported position of LkCa 15 b (parallel direction) and 90 degrees from that (perpendicular). The photocenter and full width half maximum (FWHM) of the Gaussians fitting the spatial distribution at H$\alpha$ and the adjacent continuum were measured. A well-known binary (GU CMa) was used as a calibrator to test the spectro-astrometric performance of ISIS/WHT, recovering consistent photocenter and FWHM signals. However, the photocenter shift predicted for LkCa 15 b is not detected, but the FWHM in H$\alpha$ is broader than in the continuum for both slit positions. Our simulations show that the photocenter and FWHM observations cannot be explained simultaneously by an accreting planet. In turn, both spectro-astrometric observations are naturally reproduced from a roughly symmetric H$\alpha$-emitting region centered on the star and with an extent comparable to the orbit originally attributed to the planet at several au. The extended H$\alpha$ emission around LkCa 15 could be related to a variable disk wind, but additional multi-epoch data and detailed modeling are necessary to understand its physical nature. Spectro-astrometry in H$\alpha$ is able to test the presence of accreting planets and can be used as a complementary technique to survey planet formation in circumstellar disks. | CommonCrawl |
The concept and establishment of essential amino acids in animals and human beings has rendered immeasurable contributions to animal production and human health. In ruminant animals, however, the essential amino acids have never been completely established. The present review proposes the hypothesis that histidine may not be an essential amino acid for normal growing cattle (Japanese Black), at least at the growing stage after about 450 kg of body weight, on the basis of experimental results on histidinol dehydrogenase activities in some tissues of the cattle, together with the hints from which the hypothesis was derived. At the same time, histidinol dehydrogenase activities in the liver, kidney and muscle of swine, mouse, fowl and wild duck will be shown and the essentiality of histidine in these animals will be discussed. Finally, the essentiality of histidine for adult humans will briefly be discussed.
| CommonCrawl |
Abstract: The European X-ray Free Electron Laser (European XFEL) is currently being commissioned in Schenefeld, Germany. From 2017 onwards it will provide spatially coherent X-rays of energies between 0.25\,keV and 25\,keV with a unique timing structure. One of the detectors foreseen at the European XFEL for the soft X-ray regime (energies below 6\,keV) is a quasi column-parallel readout FastCCD developed by Lawrence Berkeley National Lab (LBNL) specifically for the European XFEL requirements. Its sensor has 1920$\times$960 pixels of 30\,$\mu$m $\times$ 30\,$\mu$m size with a beam hole in the middle of the sensor. The camera can be operated in full frame and frame store mode. With the FastCCD a frame rate of up to 120~fps can be achieved, but at the European XFEL the camera settings are optimized for the 10\,Hz XFEL bunch-mode. The detector has been delivered to the European XFEL. Results of the performance tests and calibration done using the European XFEL detector calibration infrastructure are presented, quantifying noise level, gain and energy resolution. | CommonCrawl |
Has the problem been translated from some other language to, say, Spanish, and then from Spanish to English? If what you gave was intended, the original wording was very poor. Can you post the version you translated prior to your translation? Do you know where the problem came from originally?
The robot would need to be given the numerical value of x to be able to cut any pieces of length x+1 inches. If the robot is informed that x = 9, and initially cuts pieces of length 10 inches, why wouldn't "quality control inspection" perceive the "excess" or remainder of 10 inches as another piece of the required length, making the true remainder zero inches? Why should someone reading the problem assume that the quality control inspection uses the remainder theorem instead of inspecting the results of the robot's work? If the robot were reprogrammed to cut pieces of length x+12 inches, the remainder would be calculated by the remainder theorem method as being 560 inches. For a piece length greater than x+12 inches, the remainder calculated by the remainder theorem would be greater than the length of the bar. Which is dumber, the robot or the quality control inspection?
That makes no sense to me...I QUIT!
Okay! No problem with that.
This problem was translated from another language. The original version reads as follows. Excuse me if my attempt was not very precise, but I did my best with the translation.
The problem belongs to a collection of problems but I don't know the origin as the source doesn't list it and I can't find it properly. I hope this isn't critical.
In an automotive factory in Hsinchu, a specialist is in charge of a robot whose function is to cut steel bars for the transmission of a coupé. The length of the bar, in inches, is given by the formula $B(x)=5x^2+mx+n$. When the robot cuts the bar into pieces of $(x+1)$ inches, quality control indicates that $10$ inches are left over. The robot is reprogrammed and now cuts pieces of $x$ inches, but now $20$ inches are left over. If the initial length of the bar is $560$ inches, how many pieces of steel of length $(x+2)$ inches can be obtained at most?
I'm not sure whether reading the original source clears up any doubts, but judging from your words, it doesn't seem it will. Whoever posed this problem probably intended the person solving it to assume what you mention. It looks like this person used "quality control" just as a way of saying that it only reported the result of the cut but didn't perform a check of the work done by the robot, which isn't realistic, as this is contrary to what quality control does.
I get your idea about why someone would assume that a remainder of $10$ can't be perceived as another piece of the required length. Maybe this problem was meant to be a challenge of interpretation; I just don't know. Honestly, I didn't think that this problem would need any clarification, but the more I read it, the more it seems that way. Anyway, the method that you used initially, given the unedited wording, did produce an answer which matched mine, so I believe that must be the answer.
I did not try to look at what would happen if the cut is increased to, let's say, $x+12$ as you mentioned. But it seems this would produce some contradiction. Again, I don't know if whoever made this problem took that into consideration.
I'm trying to understand how you obtained $m = -95$ and $n = 1010$; this part isn't very clear to me. Can you explain this part, please?
The same applies to $B(x) = 5x^2 - 45x + 660$, with $B(5) = B(4) = 560$; I still don't know how you got to that second possibility. In other words, where do the new values for $m$ and $n$ come from? Sorry to ask for these details, but I'd like to learn this part.
One can find values for m and n for any two consecutive values of x by substituting those values into 5x² + mx + n = 560, and then solving the resulting simultaneous equations for m and n.
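For what it is worth, here is a small SymPy check of that procedure, using the two pairs of consecutive values that have come up in this thread ($x=4,5$ and $x=9,10$); the helper name is mine, not from the book:

```python
from sympy import symbols, solve

m, n = symbols("m n")

def fit(x1, x2, length=560):
    """Impose 5*x**2 + m*x + n = length at two consecutive integer values of x."""
    equations = [5 * x**2 + m * x + n - length for x in (x1, x2)]
    return solve(equations, (m, n))

print(fit(4, 5))    # {m: -45, n: 660}   ->  B(x) = 5x^2 - 45x + 660
print(fit(9, 10))   # {m: -95, n: 1010}  ->  B(x) = 5x^2 - 95x + 1010
```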
I asked for the Spanish so that I could make more Google searches for this problem or a similar one, but I found nothing relevant. Was your textbook written by a committee rather than by a mathematician?
silly "stories" that only complicate the problem!!
The remainder of the division is 20 inches, and the initial length of the steel bar is 560 inches.
How many pieces of (x+2) inches can be obtained at most?
The length of a steel bar is given by the function B(x)=5x^2+mx+n inches.
The length of the steel bar is 560 inches.
If the bar is cut into pieces of x inches, then a piece of length 20 inches is left; so x > 20.
If the bar is cut into pieces of (x+1) inches, then a piece of length 10 inches is left.
If the bar is cut into pieces of (x+2) inches, then what is the maximum number of pieces possible?
Only if $u,v$ and $w$ were the same in value could I conclude that $y=0$, but this doesn't seem to be the case. Without any other clue on that, I can't guess what the values for the variables you mention would be, other than finding the factors of $560$, which are $7 \times 8 \times 10$. But to produce something with a remainder of $20$, the only choice would be $9\times 6 \times 10$; now from there, which would I choose as $x$ and which as $u$? For that I would jump to the next equation and see what it gives; this would be $9$ for $x$, then $v=55$, and finally $w=50$ as well. I don't know if you intended these clues to be used to get to the answer. By using this I got $y=10$ inches, the remainder. I'm still not sure if this is what you intended to be the method for obtaining the answer from your equations.
Now the tricky part was to get $9$. Honestly, I didn't like having to do several trials to get that specific number. I was looking for a more methodical way to find it, and since this question was asked in a section of the book devoted to algebra, I believe the intended methodology was to use the remainder theorem. Of course this doesn't mean that there aren't other ways to find a solution, but probably, due to my limitations, this method isn't the one for me.
Sorry: my diagram was NOT an attempt to solve.
It's simply a representation (or picture) of the problem.
560 / (x+2) = an integer.
It could be reworded as you suggested, but would then be a significantly different problem.
Solution would be x = 54, right? | CommonCrawl |
Assistant Professor at University of Haifa. Usually works on topics related to classical algebraic groups.
11 Is the quotient of two linear group schemes linear?
9 Can elements in the orthogonal group of a non-split Azumaya algebra with an orthogonal involution have reduced norm -1?
8 Is $K[[x_1,x_2,\dots]]$ an $\mathfrak m$-adically complete ring?
6 Each $w\in W$ can be expressed as product of distinct reflections? | CommonCrawl |
A Method for Weak Lensing Flexion Analysis by the HOLICs Moment Approach - Astrophysics.
Abstract: We have developed a method for measuring higher-order weak lensing distortions of faint background galaxies, namely the weak gravitational flexion, by fully extending the Kaiser, Squires and Broadhurst method to include higher-order lensing image characteristics (HOLICs) introduced by Okura, Umetsu, and Futamase. We take into account explicitly the weight function in calculations of noisy shape moments and the effect of higher-order PSF anisotropy, as well as isotropic PSF smearing. Our HOLICs formalism allows accurate measurements of flexion from practical observational data in the presence of non-circular, anisotropic PSF. We test our method using mock observations of simulated galaxy images and actual, ground-based Subaru observations of the massive galaxy cluster A1689 ($z=0.183$). From the high-precision measurements of spin-1 first flexion, we obtain a high-resolution mass map in the central region of A1689. The reconstructed mass map shows a bimodal feature in the central $4'\times 4'$ region of the cluster. The major, pronounced peak is associated with the brightest cluster galaxy and central cluster members, while the secondary mass peak is associated with a local concentration of bright galaxies. The refined, high-resolution mass map of A1689 demonstrates the power of the generalized weak lensing analysis techniques for quantitative and accurate measurements of the weak gravitational lensing signal. | CommonCrawl |
Let $Det_n$ denote the closure of the $\mathrm G\mathrm L(n^2,\mathbb C)$-orbit of the determinant polynomial $\det_n$ with respect to linear substitution. The highest weights (partitions) of irreducible $\mathrm G\mathrm L(n^2,\mathbb C)$-representations occurring in the coordinate ring of $Det_n$ form a finitely generated monoid $S(Det_n)$. We prove that the saturation of $S(Det_n)$ contains all partitions $\lambda$ with length at most $n$ and size divisible by $n$. This implies that representation theoretic obstructions for the permanent versus determinant problem must be holes of the monoid $S(Det_n)$. | CommonCrawl |
You have begun a series of rock-paper-scissors games with your spouse, the loser to wash the dishes. Bad news: your spouse being the better player, your probability $p$ of winning any given game is $< 0.5$. Worse, you've already lost the first game. But there's good news too: your spouse has generously agreed to let you pick, right now, the number $n$ of games that must be won to win the series. Thus, for example, if you pick $n=3$, you must win $3$ games before your spouse does (but remember, your spouse has already won a game). What is your best choice of $n$, as a function of $p$?
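If you would like to experiment numerically before committing to an answer, the following short Python sketch computes the series win probability; it assumes the natural reading that you need $n$ wins before your spouse collects $n-1$ more, with independent games each won with probability $p$.

```python
from math import comb

def p_series_win(p, n):
    """Probability of winning the series: you need n wins before your spouse
    collects n - 1 more (the spouse already has 1 of the n required wins)."""
    q = 1 - p
    # You clinch the series on the game that gives you your n-th win, at which
    # point the spouse has j <= n - 2 wins from the games played so far.
    return sum(comb(n - 1 + j, j) * p**n * q**j for j in range(n - 1))

def best_n(p, n_max=200):
    return max(range(1, n_max + 1), key=lambda n: p_series_win(p, n))

for p in (0.3, 0.4, 0.45, 0.49):
    n = best_n(p)
    print(f"p = {p}: best n = {n}, win probability = {p_series_win(p, n):.4f}")
```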
Scan your solutions to [email protected] before 21st September and we'll announce the winner 🍾 shortly afterwards. If you have any questions, post them below.
If you would like to be sent a list of our statistics puzzles each week, sign up to Black Swans here.
Will you tell us if we get it right or not?
Yes. But receiving quite a few submissions, so bear with us 🙏.
Tricky question. Will roll back the years and give it a shot! Those maths books look cool. Next Christmas sorted for Jack Dry!
Hmm ... after some playing around and pattern-spotting, I was able to conjecture the result which looks sufficiently elegant that I'm certain it's correct. But ... PROVING it ... that has me completely stumped! I may have to wait for you to publish a solution.
Will you be publishing a solution soon?
Hi. Sorry for the delay. We had lots of submissions to get through. We've just posted the solution here.
Perhaps consider a limiting case? Suppose the probability of winning each time is only just less than $0.5$.
Then with $n=2$, our chance of winning the series is approx. $0.5 \times 0.5 = 0.25$, but over a long series the current deficit of one game would hardly matter and the chances of winning overall would be about $0.5$ for each player. | CommonCrawl |
Let $F$ be a field of zero characteristic. All groups are taken modulo torsion.
Consider a residue map from the exterior algebra of the multiplicative group of the function field of the projective line to the sum over all points except $\infty$ of the exterior algebras of the multiplicative groups of the closed points. This gives a complex, which is exact in the first and the last term.
I am interested in the middle homology group of the complex written above. Since the residue map satisfies the graded Leibniz rule, the middle homology group is an algebra. What other natural structures does it carry? I have found a natural presentation of it by generators and relations, but only viewed as a vector space. I am also interested in understanding the ring structure better.
| CommonCrawl |
in the Hardy space $H^2_0(\mathbb D)$, and some related questions. It is shown that for $|\lambda|>R(N)$ the family is complete in $H^2_0(\mathbb D)$ (and often is a Riesz basis of $H^2_0$), whereas for $|\lambda|<r(N)$ it is not, where both radii $r(N)\leq R(N)$ tend to infinity and behave more or less as $N$ (as $N\to\infty$). Several results are also obtained for more general binomials $\{z^n(1-\frac{1}{\lambda} z^n)^\nu\colon n=1,2,\dots\}$ where $|\lambda|\geq1$ and $\nu\in\mathbb C$.
Keywords: Hardy spaces, completeness of dilations, Riesz basis, Hilbert multidisc, Bohr transform, binomial functions.
This research is supported by the project "Spaces of analytic functions and singular integrals", RSF grant 14-41-00010. | CommonCrawl |
I was reading about the notions of quotient map, quotient topology and quotient space, but ran into the following example.
In this example I have understood almost everything except one point: how to prove rigorously that $p$ is a closed map.
Would be thankful if anyone will show the rigorous proof.
If you want to do it from scratch, note that $f:x\mapsto x$ and $g:x\mapsto x-1$ are closed maps on $\mathbb R$ (this is easy to prove).
$[0,1]\cap C'\sqcup [1,2]\cap g(C')=[0,2]\cap (C'\cup g(C'))$.
Since $C'$ is closed in $\mathbb R$ by assumption and $g(C')$ is closed in $\mathbb R$ also, by the first remark, we conclude that $p(C)=[0,2]\cap (C'\cup g(C'))$ is closed in $Y$.
Any closed subset of $[0,1] \cup [2,3]$ is compact. Since $p$ is continuous its image is compact, hence closed.
The map $p$ restricted to each interval is a homeomorphism onto its (closed) image, hence a closed map. Since the image of a closed set under $p$ is the union of its images under these two restrictions, which is a finite union of closed sets, we conclude that the combined map is also closed.
| CommonCrawl |
Hello. In the (infinity,1)-functor page, in the [Properties](https://ncatlab.org/nlab/show/%28infinity%2C1%29-functor#properties) section, there is a Theorem. Some notation is introduced in it, [C^op,KanCplx]°, but instead [C^op,sSet]° is used in the statement. Is the statement incorrect? Also, I guess it would be nice to add a reference for this Theorem. Does anyone have one?
Yes, this theorem statement is a bit confused. The fibrant and cofibrant objects *are* all valued in Kan complexes, but a consistent notation should be used. Also the text says we have an equivalence of $\infty$-groupoids, but the displayed equation is an equivalence between $(\infty,1)$-categories. For a reference, if you follow enough links you can find a citation to Lurie at [[(infinity,1)-category of (infinity,1)-functors]] (models).
Thank you. I understand the statement in the reference to Lurie. I'm not making the edit to clarify the entry under discussion, because I'm not sure how to do it. I don't think it's a good idea, in a statement, to mix concrete models for $(\infty,1)$-categories (such as quasicategories, at the beginning of the statement) and then general $(\infty,1)$-notions that should in principle exist for any model (such as $(\infty,1)$-functor later on). Here the final equivalence in the statement seems to live in the world of simplicial categories. So what should be done?
This theorem is all about particular models; I don't think there is any part of it that makes sense model-independently.
I made the least invasive edit I could think of that I think makes the statement true and all notations defined.
Thanks. It looks better indeed. I understand that this theorem is about particular models; actually, it seems to be about the particular model of quasi-categories. That is why I believe the statement should end with "Then we have an equivalence of **quasi-categories**" etc. But it is a minor point, I guess. | CommonCrawl |
Irreducible holomorphic symplectic (IHS) manifolds are a higher dimensional analogue of K3 surfaces; if $X$ is such a manifold, we can define a quadratic form on $H^2(X,\mathbb Z)$ that bears a formal resemblance to the intersection product on a surface.
A birational transformation $f$ of a manifold $X$ is said to be "imprimitive" if it preserves the fibres of a non-trivial fibration $\pi\colon X\dashrightarrow B$. Analogously to the surface case (Gizatullin), I will show that if $X$ is IHS and $f$ induces a linear automorphism of $H^2(X,\mathbb Z)$ with at least one eigenvalue of modulus different from $1$, then it is primitive. | CommonCrawl |
We study properties of functions with bounded variation in Carnot-Carath\'eodory spaces. We prove their almost everywhere approximate differentiability and we examine their approximate discontinuity set and the decomposition of their distributional derivatives. Under an additional assumption on the space, called property $\mathcal R$, we show that almost all approximate discontinuities are of jump type and we study a representation formula for the jump part of the derivative. | CommonCrawl |
You are given a playlist of a radio station since its establishment. The playlist has a total of $n$ songs.
What is the longest sequence of successive songs where each song is unique?
The first input line contains an integer $n$: the number of songs.
The next line has $n$ integers $k_1,k_2,\ldots,k_n$: the id number of each song.
Print the length of the longest sequence of unique songs.
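One standard approach is a two-pointer (sliding window) scan in $O(n)$ time; the sketch below is one possible implementation, not an official reference solution.

```python
import sys

def longest_unique_run(songs):
    """Two-pointer scan: extend the window one song at a time and move the
    left end just past the previous occurrence whenever a duplicate enters."""
    last_seen = {}   # song id -> most recent index
    best = 0
    left = 0
    for right, song in enumerate(songs):
        if song in last_seen and last_seen[song] >= left:
            left = last_seen[song] + 1
        last_seen[song] = right
        best = max(best, right - left + 1)
    return best

def main():
    data = sys.stdin.buffer.read().split()
    n = int(data[0])
    songs = list(map(int, data[1:1 + n]))
    print(longest_unique_run(songs))

if __name__ == "__main__":
    main()
```
| CommonCrawl |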
Abstract : We consider the problem of determining the symmetric tensor rank for symmetric tensors with an algebraic geometry approach. We give algorithms for computing the symmetric rank for $2\times \cdots \times 2$ tensors and for tensors of small border rank. From a geometric point of view, we describe the symmetric rank strata for some secant varieties of Veronese varieties. | CommonCrawl |
Is a certain restriction of an open map open?
Why is the inverse of a bounded bijective operator continuous?
How to think of an open ball in $\mathbb R^2$.
How can I verify that this standard map is symplectic?
Is a faithfully flat map of affine schemes open? | CommonCrawl |
1) Find $\theta$: how do I understand an angle theta in relation to a reflection? This is intuitive to me for a rotation, but not for a reflection. Is it twice the angle between a given vector and the line of reflection, so that the reflection across the line is viewed as a rotation? How might I find this?
2) Find the unit vector for the line of reflection. Since this is a reflection, I set $2u_1^2 - 1 = 4$ to get $u_1 = \pm\frac 52$, but then $2u_2^2 - 1 = -4$ nets an imaginary number. What am I doing wrong? I have heard that reflections involve imaginary numbers and we haven't covered that yet, but I'm lost on how else I might find the unit vector.
I'm looking for pointers/concept explanations, rather than outright answers. Appreciate any help!
The line of reflection bisects the angle between a vector and its image. Pick any convenient unit vector $\mathbf u$ and compute its reflection $\mathbf u'$. The vector $\mathbf u+\mathbf u'$ bisects the angle between them (remember the parallelogram rule for vector addition) and so lies along the line of reflection. I hope that you can find the angle this vector makes with the $x$-axis and derive a unit vector from it on your own.
Points on the line of reflection are mapped to themselves by this transformation, so in the language of eigenvectors, it is the eigenspace of $1$.
You can also reason in terms of eigenvectors.
The eigenvalues are $-5,5$, and this tells you that the matrix consists of a scaling by $5$, and of a reflection, since (putting apart the scaling) there is a vector that remains the same and another that is inverted.
The eigenvector corresponding to $\lambda=5$ is $(3,1)$, and so that is the axis of reflection.
Of course you are right in saying that there is no angle associable to a reflection in 2D.
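If you want to check the eigenvector route numerically, here is a short numpy sketch. The matrix in it is only an illustration consistent with the numbers mentioned in this thread (diagonal entries $4$ and $-4$, eigenvalues $\pm5$, eigenvector $(3,1)$); it is not necessarily the exact matrix from your exercise.

```python
import numpy as np

# Illustrative matrix only: chosen to match the values quoted in the thread,
# not taken from the original exercise.
A = np.array([[4.0, 3.0],
              [3.0, -4.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                            # approximately [ 5, -5]

v = vecs[:, np.argmax(vals)]           # eigenvector for +5 spans the mirror line
u = v / np.linalg.norm(v)              # unit vector along the line of reflection
if u[0] < 0:                           # fix the sign so the direction is reproducible
    u = -u
theta = np.degrees(np.arctan2(u[1], u[0]))
print(u, theta)                        # direction (3,1)/sqrt(10), about 18.4 degrees

R = A / 5.0                            # remove the scaling by 5
print(np.allclose(R @ R, np.eye(2)))   # True: a reflection squares to the identity
```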
| CommonCrawl |
In a previous article on QuantStart we investigated how to download free futures data from Quandl. In this article we are going to discuss the characteristics of futures contracts that present a data challenge from a backtesting point of view. In particular, the notion of the "continuous contract" and "roll returns". We will outline the main difficulties of futures and provide an implementation in Python with pandas that can partially alleviate the problems.
Futures are a form of contract drawn up between two parties for the purchase or sale of a quantity of an underlying asset at a specified date in the future. This date is known as the delivery or expiration. When this date is reached the buyer must deliver the physical underlying (or cash equivalent) to the seller for the price agreed at the contract formation date.
In practice futures are traded on exchanges (as opposed to Over The Counter - OTC trading) for standardised quantities and qualities of the underlying. The prices are marked to market every day. Futures are incredibly liquid and are used heavily for speculative purposes. While futures were often utilised to hedge the prices of agricultural or industrial goods, a futures contract can be formed on any tangible or intangible underlying such as stock indices, interest rates of foreign exchange values.
A detailed list of all the symbol codes used for futures contracts across various exchanges can be found on the CSI Data site: Futures Factsheet.
The main difference between a futures contract and equity ownership is the fact that a futures contract has a limited window of availability by virtue of the expiration date. At any one instant there will be a variety of futures contracts on the same underlying all with varying dates of expiry. The contract with the nearest date of expiry is known as the near contract. The problem we face as quantitative traders is that at any point in time we have a choice of multiple contracts with which to trade. Thus we are dealing with an overlapping set of time series rather than a continuous stream as in the case of equities or foreign exchange.
The goal of this article is to outline various approaches to constructing a continuous stream of contracts from this set of multiple series and to highlight the tradeoffs associated with each technique.
The main difficulty with trying to generate a continuous contract from the underlying contracts with varying deliveries is that the contracts do not often trade at the same prices. Thus situations arise where they do not provide a smooth splice from one to the next. This is due to contango and backwardation effects. There are various approaches to tackling this problem, which we now discuss.
The Panama adjustment method alleviates the "gap" across multiple contracts by shifting each contract such that the individual deliveries join the adjacent contracts smoothly. Thus the open/close across the prior contracts at expiry matches up.
The key problem with the Panama method includes the introduction of a trend bias, which will introduce a large drift to the prices. This can lead to negative data for sufficiently historical contracts. In addition there is a loss of the relative price differences due to an absolute shift in values. This means that returns are complicated to calculate (or just plain incorrect).
The Proportionality Adjustment approach is similar to the adjustment methodology of handling stock splits in equities. Rather than taking an absolute shift in the successive contracts, the ratio of the older settle (close) price to the newer open price is used to proportionally adjust the prices of historical contracts. This allows a continous stream without an interruption of the calculation of percentage returns.
The main issue with proportional adjustment is that any trading strategies reliant on an absolute price level will also have to be similarly adjusted in order to execute the correct signal. This is a problematic and error-prone process. Thus this type of continuous stream is often only useful for summary statistical analysis, as opposed to direct backtesting research.
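As a rough sketch of the ratio adjustment at a single roll (assuming `old` and `new` are pandas settle-price Series for the expiring and the next contract, and that `roll_date` appears in both indices; these names are placeholders, not code from the article):

```python
import pandas as pd

def ratio_back_adjust(old, new, roll_date):
    """Proportionally back-adjust the expiring contract so that its settle price
    matches the next contract on the chosen roll date, then splice the two."""
    ratio = new.loc[roll_date] / old.loc[roll_date]
    adjusted_old = old.loc[:roll_date] * ratio   # rescale the older history
    return pd.concat([adjusted_old.iloc[:-1], new.loc[roll_date:]])
```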
The essence of the rollover ("perpetual series") approach is to create a continuous contract from successive contracts by taking a linearly weighted proportion of each contract over a number of days to ensure a smoother transition between them.
For example consider five smoothing days. The price on day 1, $P_1$, is equal to 80% of the far contract price ($F_1$) and 20% of the near contract price ($N_1$). Similarly, on day 2 the price is $P_2 = 0.6 \times F_2 + 0.4 \times N_2$. By day 5 we have $P_5 = 0.0 \times F_5 + 1.0 \times N_5 = N_5$ and the contract then just becomes a continuation of the near price. Thus after five days the contract is smoothly transitioned from the far to the near.
The problem with the rollover method is that it requires trading on all five days, which can increase transaction costs.
There are other less common approaches to the problem but we will avoid them here.
The remainder of the article will concentrate on implementing the perpetual series method as this is most appropriate for backtesting. It is a useful way to carry out strategy pipeline research.
We are going to stitch together the WTI Crude Oil "near" and "far" futures contracts (symbol CL) in order to generate a continuous price series. At the time of writing (January 2014), the near contract is CLF2014 (January) and the far contract is CLG2014 (February).
(The article's original code listing is truncated in this copy; it defined a function that builds rollover weights for each contract in order to produce a continuous time series futures contract, using near and far contract expiry dates that depend upon the point at which you read this.)
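A minimal sketch of that rollover-weight blending, written for this excerpt rather than taken from the article, might look as follows; the contract names and data source in the usage comment are assumptions:

```python
import numpy as np
import pandas as pd

def perpetual_series(near, far, rollover_days=5):
    """Blend the near and far settle-price series into one continuous series,
    shifting weight linearly from the far to the near contract over the first
    `rollover_days` trading days, as in the 80/20, 60/40, ... scheme above."""
    prices = pd.concat({"near": near, "far": far}, axis=1).dropna()
    w_near = pd.Series(1.0, index=prices.index)
    w_near.iloc[:rollover_days] = np.linspace(1.0 / rollover_days, 1.0, rollover_days)
    return w_near * prices["near"] + (1.0 - w_near) * prices["far"]

# Hypothetical usage: clf and clg would be settle-price Series for CLF2014 and
# CLG2014, indexed by date (e.g. downloaded via Quandl as in the earlier article).
# continuous_cl = perpetual_series(near=clf, far=clg)
```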
It can be seen that the series is now continuous across the two contracts. The next step is to carry this out for multiple deliveries across a variety of years, depending upon your backtesting needs. | CommonCrawl |
A chain of quadratic first integrals of general linear Hamiltonian systems that have not been represented in canonical form is found. Their involutiveness is established and the problem of their functional independence is studied. The key role in the study of a Hamiltonian system is played by an integral cone which is obtained by setting known quadratic first integrals equal to zero. A singular invariant isotropic subspace is shown to pass through each point of the integral cone, and its dimension is found. The maximal dimension of such subspaces estimates from above the degree of instability of the Hamiltonian system. The stability of typical Hamiltonian systems is shown to be equivalent to the degeneracy of the cone to an equilibrium point. General results are applied to the investigation of linear mechanical systems with gyroscopic forces and finite-dimensional quantum systems.
This paper is concerned with the problem of the integrable behavior of geodesics on homogeneous factors of the Lobachevsky plane with respect to Fuchsian groups (orbifolds). Locally the geodesic equations admit three independent Noether integrals linear in velocities (energy is a quadratic form of these integrals). However, when passing along closed cycles the Noether integrals undergo a linear substitution. Thus, the problem of integrability reduces to the search for functions that are invariant under these substitutions. If a Fuchsian group is Abelian, then there is a first integral linear in the velocity (and independent of the energy integral). Conversely, if a Fuchsian group contains noncommuting hyperbolic or parabolic elements, then the geodesic flow does not admit additional integrals in the form of a rational function of Noether integrals. We stress that this result holds also for noncompact orbifolds, when there is no ergodicity of the geodesic flow (since nonrecurrent geodesics can form a set of positive measure).
This paper addresses the dynamics of systems with servoconstraints where the constraints are realized by controlling the inertial properties of the system. Vakonomic systems are a particular case. Special attention is given to the motion on Lie groups with left-invariant kinetic energy and a left-invariant constraint. The presence of symmetries allows the dynamical equations to be reduced to a closed system of differential equations with quadratic right-hand sides. As the main example, we consider the rotation of a rigid body with a left-invariant servoconstraint, which implies that the projection of the body's angular velocity on some body-fixed direction is zero.
This paper is concerned with the problem of first integrals of the equations of geodesics on two-dimensional surfaces that are rational in the velocities (or momenta). The existence of nontrivial rational integrals with given values of the degrees of the numerator and the denominator is proved using the Cauchy–Kovalevskaya theorem.
The problem of integrability conditions for systems of differential equations is discussed. Darboux's classical results on the integrability of linear non-autonomous systems with an incomplete set of particular solutions are generalized. Special attention is paid to linear Hamiltonian systems. The paper discusses the general problem of integrability of the systems of autonomous differential equations in an n-dimensional space, which admit the algebra of symmetry fields of dimension $\geqslant n$. Using a method due to Liouville, this problem is reduced to investigating the integrability conditions for Hamiltonian systems with Hamiltonians linear in the momenta in phase space of dimension that is twice as large. In conclusion, the integrability of an autonomous system in three-dimensional space with two independent non-trivial symmetry fields is proved. It should be emphasized that no additional conditions are imposed on these fields.
This paper addresses a class of problems associated with the conditions for exact integrability of systems of ordinary differential equations expressed in terms of the properties of tensor invariants. The general theorem of integrability of the system of $n$ differential equations is proved, which admits $n−2$ independent symmetry fields and an invariant volume $n$-form (integral invariant). General results are applied to the study of steady motions of a continuum with infinite conductivity.
We develop a new method for solving Hamilton's canonical differential equations. The method is based on the search for invariant vortex manifolds of special type. In the case of Lagrangian (potential) manifolds, we arrive at the classical Hamilton–Jacobi method.
The Kac circular model is a discrete dynamical system which has the property of recurrence and reversibility. Within the framework of this model M.Kac formulated necessary conditions for irreversibility over "short" time intervals to take place and demonstrated Boltzmann's most important exploration methods and ideas, outlining their advantages and limitations. We study the circular model within the realm of the theory of Gibbs ensembles and offer a new approach to a rigorous proof of the "zeroth" law of thermodynamics based on the analysis of weak convergence of probability distributions.
The paper develops an approach to the proof of the "zeroth" law of thermodynamics. The approach is based on the analysis of weak limits of solutions to the Liouville equation as time grows infinitely. A class of linear oscillating systems is indicated for which the average energy becomes eventually uniformly distributed among the degrees of freedom for any initial probability density functions. An example of such systems are sympathetic pendulums. Conditions are found for nonlinear Hamiltonian systems with finite number of degrees of freedom to converge in a weak sense to the state where the mean energies of the interacting subsystems are the same. Some issues related to statistical models of the thermostat are discussed.
The famous Lagrange identity expresses the second derivative of the moment of inertia of a system of material points through the kinetic energy and homogeneous potential energy. The paper presents various extensions of this brilliant result to the case 1) of constrained mechanical systems, 2) when the potential energy is quasi-homogeneous in coordinates and 3) of continuum of interacting particles governed by the well-known Vlasov kinetic equation.
The kinetics of collisionless continuous medium is studied in a bounded region on a curved manifold. We have assumed that in statistical equilibrium, the probability distribution density depends only on the total energy. It is shown that in this case, all the fundamental relations for a multi-dimensional ideal gas in thermal equilibrium hold true.
A collisionless continuous medium in Euclidean space is discussed, i.e. a continuum of free particles moving inertially, without interacting with each other. It is shown that the distribution density of such medium is weakly converging to zero as time increases indefinitely. In the case of Maxwell's velocity distribution of particles, this density satisfies the well-known diffusion equation, the diffusion coefficient increasing linearly with time.
The paper deals with the problem of integration of equations of motion in nonholonomic systems. By means of well-known theory of the differential equations with an invariant measure the new integrable systems are discovered. Among them there are the generalization of Chaplygin's problem of rolling nonsymmetric ball in the plane and the Suslov problem of rotation of rigid body with a fixed point. The structure of dynamics of systems on the invariant manifold in the integrable problems is shown. Some new ideas in the theory of integration of the equations in nonholonomic mechanics are suggested. The first of them consists in using known integrals as the constraints. The second is the use of resolvable groups of symmetries in nonholonomic systems. The existence conditions of invariant measure with analytical density for the differential equations of nonholonomic mechanics is given.
The paper develops a new approach to the justification of the Gibbs canonical distribution for Hamiltonian systems with a finite number of degrees of freedom. It uses the condition of nonintegrability of the ensemble of weakly interacting Hamiltonian systems.
In this article we develop Poincaré ideas about a heat balance of ideal gas considered as a collisionless continuous medium. We obtain the theorems on diffusion in nondegenerate completely integrable systems. As a corollary we show that for any initial distribution the gas will be eventually irreversibly and uniformly distributed over all volume, although every particle during this process approaches arbitrarily close to the initial position indefinitely many times. However, such individual returnability is not uniform, which results in diffusion in a reversible and conservative system. Balancing of pressure and internal energy of ideal gas is proved, the formulas for limit values of these quantities are given and the classical law for ideal gas in a heat balance is deduced. It is shown that the increase of entropy of gas under the adiabatic extension follows from the law of motion of a collisionless continuous medium.
The questions of justification of the Gibbs canonical distribution for systems with elastic impacts are discussed. A special attention is paid to the description of probability measures with densities depending on the system energy.
Traditional derivation of the Gibbs canonical distribution and the justification of thermodynamics are based on the assumption of isoenergetic ergodicity of a system of n weakly interacting identical subsystems and passage to the limit $n \to\infty$. In the presented work we develop another approach to these problems, assuming that n is fixed and $n \geqslant 2$. The ergodic hypothesis (which frequently is not valid due to known results of the KAM theory) is substituted by a weaker assumption that the perturbed system does not have additional first integrals independent of the energy integral. The proof of nonintegrability of perturbed Hamiltonian systems is based on the Poincare method. Moreover, we use the natural Gibbs assumption concerning a thermodynamic equilibrium of subsystems at vanishing interaction. The general results are applied to the system of weakly connected pendula. The averaging with respect to the Gibbs measure allows one to pass from the usual dynamics of mechanical systems to the classical thermodynamic model.
We analyse the operation of averaging of smooth functions along exact trajectories of dynamic systems in a neighborhood of stable nonresonance invariant tori. It is shown that there exists a first integral after the averaging; however, in the typical situation the mean value is discontinuous or even not everywhere defined. If the temporal mean were a smooth function, it would take its stationary values at the points of nondegenerate invariant tori. We demonstrate that this result can be properly derived if we interchange the operations of averaging and of differentiating with respect to the initial data. However, in the general case for unstable tori this property is no longer preserved. We also discuss the role of the reducibility condition of the invariant tori and the possibility of generalization to the case of arbitrary compact invariant manifolds on which the initial dynamic system is ergodic.
We study motion of a charged particle on the two dimensional torus in a constant direction magnetic field. This analysis can be applied to the description of electron dynamics in metals, which admit a $2$-dimensional translation group (Bravais crystal lattice). We found the threshold magnetic value, starting from which there exist three closed Larmor orbits of a given energy. We demonstrate that if there are n lattice atoms in a primitive Bravais cell then there are $4+n$ different Larmor orbits in the nondegenerate case. If the magnetic field is absent the electron dynamics turns out to be chaotic, dynamical systems on the corresponding energy shells possess positive entropy in the case that the total energy is positive.
The paper discusses the relationship between the regular behavior of Hamiltonian systems and the existence of a sufficient number of symmetry fields. Some properties of quite regular schemes and their relationship with various characteristics of stochastic behavior are studied. | CommonCrawl |
Abstract: T-distributed stochastic neighbour embedding (t-SNE) is a widely used data visualisation technique. It differs from its predecessor SNE by the low-dimensional similarity kernel: the Gaussian kernel was replaced by the heavy-tailed Cauchy kernel, solving the "crowding problem" of SNE. Here, we develop an efficient implementation of t-SNE for a $t$-distribution kernel with an arbitrary degree of freedom $\nu$, with $\nu\to\infty$ corresponding to SNE and $\nu=1$ corresponding to the standard t-SNE. Using theoretical analysis and toy examples, we show that $\nu<1$ can further reduce the crowding problem and reveal finer cluster structure that is invisible in standard t-SNE. We further demonstrate the striking effect of heavier-tailed kernels on large real-life data sets such as MNIST, single-cell RNA-sequencing data, and the HathiTrust library. We use domain knowledge to confirm that the revealed clusters are meaningful. Overall, we argue that modifying the tail heaviness of the t-SNE kernel can yield additional insight into the cluster structure of the data. | CommonCrawl |
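As a concrete illustration of the kernel family described in that abstract, the sketch below uses the standard Student-t form, which reproduces the stated limits ($\nu=1$ giving the Cauchy kernel of standard t-SNE and $\nu\to\infty$ approaching a Gaussian); the paper's exact parametrisation and normalisation may differ, so treat the formula as an assumption.

import numpy as np

def heavy_tailed_similarity(d2, nu):
    # Unnormalised low-dimensional similarity for squared distance d2.
    # nu = 1 gives the usual t-SNE kernel 1/(1 + d2); large nu approaches
    # a Gaussian (SNE-like) kernel; nu < 1 gives even heavier tails.
    return (1.0 + d2 / nu) ** (-(nu + 1.0) / 2.0)

d2 = np.array([0.0, 1.0, 4.0, 25.0])
for nu in [0.5, 1.0, 100.0]:
    print(nu, heavy_tailed_similarity(d2, nu))

Heavier tails (smaller $\nu$) decay more slowly with distance, which is what allows well-separated clusters to spread further apart in the embedding.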
Abstract: Denote by $\mathfrak M$ the set whose elements are the simple 3-dimensional unitary groups $U_3(q)$ and the linear groups $L_3(q)$ over finite fields. We prove that every periodic group, saturated by the groups of a finite subset of $\mathfrak M$, is finite.
Keywords: saturation of a group by a set of groups, periodic group. | CommonCrawl |
This work focuses on the design and development of quadrotors that are capable of autonomous landing on a stationary platform. This problem is motivated by the fact that quadrotors have to land frequently to charge their batteries, as maximum flight time is rather short with the current state of the art. In this work, we particularly consider three different quadrotor systems, including the quadrotor constructed in our Intelligent Systems Laboratory, the ISL quadrotor. The ISL quadrotor is endowed with a camera attached to a gimbal stabilizing the view for visual sensing, a laser range sensor for measuring the altitude, a PIXHAWK flight controller and a Raspberry Pi 3 (RPi3) as an onboard computer for autonomous operation. The software is designed and developed in the Robot Operating System (ROS) and includes control, visual processing and serial communication with the flight controller. The code is designed to be multi-threaded so that the quadrotor is capable of doing visual processing concurrently with flight control. The dynamic modeling of the quadrotor is based on one of the commonly used models. For the landing task, position, altitude, and velocity controllers based on proportional-derivative (PD) control are developed. Position information is provided by visual feedback through detecting the marker on the landing platform. The developed approaches are first tested in the Gazebo simulation environment. We also conduct similar landing experiments on the ISL quadrotor for different wind, light, and initial altitude conditions and observe that the quadrotor is able to land autonomously within an approximately 80 cm error range on a stationary platform that is marked by a 50 cm $\times$ 50 cm ArUco marker. The designed quadrotor is also capable of tracking a platform that is moving with a slow linear velocity.
For more information, please contact Mustafa Mete. | CommonCrawl |
The paper is devoted to the study of the long-time behavior of solutions of a damped semilinear wave equation in a bounded smooth domain of $\mathbb R^3$ with nonautonomous external forces and with the critical cubic growth rate of the nonlinearity. In contrast to previous papers, we prove the dissipativity of this equation in the higher energy spaces $E^\alpha$, $0<\alpha\le 1$, without the usage of the dissipation integral (which is infinite in our case).
Keywords: nonautonomous attractors, damped wave equations, critical growth rate, regularity of attractors.
Mathematics Subject Classification: Primary 37B40, 37B45. | CommonCrawl |
Are the different interpretations of Quantum mechanics just different viewpoints of the same physical reality? Or can experiments distinguish them? Are they empirically distinguishable or not?
All this said, we would be the last to claim that the foundations of quantum theory are not worth further scrutiny. For instance, it is interesting to search for minimal sets of physical assumptions that give rise to the theory. Also, it is not yet understood how to combine quantum mechanics with gravitation, and there may well be important insight to be gleaned there. However, to make quantum mechanics a useful guide to the phenomena around us, we need nothing more than the fully consistent theory we already have. Quantum theory needs no interpretation.
And finally, has anyone found that minimum number of postulates?
No, interpretations of quantum mechanics are not distinguishable in a physical experiment, otherwise they would be called theories rather than interpretations.
It should be noted that there are some theories whose authors call them "interpretations" but which in fact are not. For instance, the "objective collapse theories" are often (wrongly) called "interpretations". These theories can be physically proven or disproven and predict different observations than standard quantum mechanics (with all its interpretations).
That said, it is not that interpretations cannot be experimentally distinguished at all. Maybe they can be, but an experiment that would be able to distinguish between them would not be a physical (or scientific) experiment, in the sense that it would not satisfy the requirements of the scientific method.
All scientific experiments regarding quantum mechanics should produce the same results as far as the different interpretations are concerned.
It's not true that all the different interpretations are not (in principle) experimentally distinguishable. Let's consider the difference between the Copenhagen Interpretation (CI), de Broglie–Bohm theory (BT) and the Many Worlds Interpretation (MWI). BT assumes that under normal circumstances we have so-called quantum equilibrium, and only then do you get the usual predictions of standard quantum mechanics that you get when you assume CI. This means that you can try to detect small deviations from exact quantum equilibrium; see here for details.
If the MWI is correct then time evolution is always exactly unitary. The CI doesn't explain how we get to a non-unitary collapse, but it does assume that there exists such a thing. This implies that at least in principle there should be detectable effects. Systems that are well isolated from the environment should undergo a non-unitary time evolution at a rate that is faster than can be explained by decoherence caused by the residual interactions they still have with the environment.
David Deutsch has proposed a thought experiment to illustrate that MWI is not experimentally equivalent to CI. Suppose an artificially intelligent experimenter is simulated by a quantum computer. It will measure the operator A = |0><0| - |1><1|. The qubit is initialized in the state 1/sqrt(2)[|0> + |1>]. Then the CI predicts that after the measurement the state of the qubit undergoes a non-unitary collapse to one of the two possible eigenstates of A, i.e. |0> or |1>. The MWI asserts that the state of the entire quantum computer splits into two branches corresponding to either of the possible outcomes.
To decide who is right, the experimenter decides to let the computer perform the unitary time evolution that inverts the final state of the quantum computer (according to the MWI) back to the initial state, while keeping the record that a measurement has been performed. This transform to the modified initial state is still unitary and can therefore be implemented (all unitary transforms can be implemented using only CNOT and single-qubit rotations).
Then it is easy to check that if the CI is correct you don't get the desired modified initial state back, and the difference between the two possible states of the qubit you end up with can easily be detected by doing measurements on it.
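A toy numerical illustration of the difference (a sketch only, not Deutsch's full construction, which requires simulating the intelligent observer): model the "measurement" as a CNOT that records the qubit into a memory qubit, apply the reversal, and compare the qubit with what a genuine collapse would leave behind. All gates and states below are standard; the use of a single memory qubit is a simplifying assumption.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# Two-qubit system: the qubit to be "measured" (first factor) and a memory
# qubit that records the outcome (second factor).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)   # control = qubit, target = memory

psi0 = np.kron(plus, ket0)                     # |+>|0>

# (a) Purely unitary picture (MWI): record the outcome, then undo the record.
psi = CNOT @ psi0
psi = CNOT.conj().T @ psi                      # reversal of the recording step
rho_full = np.outer(psi, psi).reshape(2, 2, 2, 2)
rho_unitary = rho_full.trace(axis1=1, axis2=3)  # partial trace over the memory

# (b) Collapse picture (CI): the measurement leaves the qubit in |0> or |1>
# with probability 1/2 each; averaging over the outcomes gives this mixture,
# and the reversal step cannot undo it.
rho_collapse = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

prob_plus = lambda rho: float(plus @ rho @ plus)
print(prob_plus(rho_unitary))    # 1.0: the qubit is back in |+>
print(prob_plus(rho_collapse))   # 0.5: the collapse leaves a detectable difference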
I do not know whether or not all interpretations can be distinguished from each other. But after going through the link once, I thought Asher Peres probably implied that all the interpretations (de Broglie–Bohm, MWI, etc.) will not have any effect on the probability of measurement outcomes (the predictions of QM), i.e. the number of times the detectors click. In that sense, all experiments with measurements (observables in QM) will give probabilistic outcomes, and the different interpretations are there to explain these results, which are probabilistic in nature; all the interpretations are just various ways to account for that probabilistic nature, i.e. different ways to interpret the reasons for the indeterminism which presents itself as one of the basic postulates of QM (as pointed out by Tom).
However, experiments have been performed to check whether or not QM is consistent with local hidden variable theory. One can measure joint probabilities and show that QM violates Bell's inequality. In that sense, one can show that QM violates local hidden variable theory (http://en.wikipedia.org/wiki/Bell_test_experiments). For other interpretations there are some thought experiments (as pointed out by @Count Iblis), but to my knowledge no such experiments have been performed so far.
In the name "interpretations" it is implied that these are believed to not be empirically distinguishable. What everyone agrees on is how to calculate empirically testable predictions. That is essentially mathematics, and if you are very mathematically minded, you can propose to axiomize it, that is, find a minimum number of axioms (postulates) from which it follows. Obviously, as with any system of axioms, there is a bit of personal preference involved: Many call the time-dependent Schroedinger equation such a postulate. I personally prefer to only use de Broglie's matter-wave duality as a postulate and derive e.g. the (non-relativistic) Schroedinger equation for free massive particles from the non-relativistic impulse-energy relationship expressed for plane waves (which are one possible and hence sufficient choice for a basis system to express any wave in). I am afraid all one can summarily say is that there is no widespread consensus on what postulates/axioms to use; we physicists only agree on the (well-tested) outcome and perhaps that how exactly one gets there is at least as much a question of mathematics as of physics.
Whilst "interpretations" was certainly the correct word historically, one can make good arguments that some theories that carry the historical name "interpretation" can actually be tested, at least if you take a sufficiently broad approach to tests as to permit thought experiments. Most interpretations fail to resolve paradoxes, such as Schroedinger's cat, the EPR paradox, or how to determine what constitutes a measurement (or how a wave-function collapse is physically possible considering quantum mechanical evolution only permits unitary, that is non-collapsing, time-evolution). That means most "interpretations" do not pass the test of logical consistency since otherwise there would not be any such paradoxes.
Even with regard to experimental tests, we have at least two: The violation of the Bell inequality has been experimentally demonstrated, which is a test against hidden local variables, which would otherwise remain fair game for outlandish "interpretations." And a simplistic form of Schroedinger's cat can be realized in a 2-qubit quantum computer, where modelling decoherence as interaction with an environment (that could be a real uncontrolled environment or simulated via more qubits) provides an experimentally verifiable detailed theory of just how the cat would decohere out of its curious superposition. If you like, that constitutes an empirical test that the Schroedinger cat paradox is not really a paradox and any interpretation that is unable to resolve it must be an incorrect or at least incomplete theory.
Finally there is the issue of the recurrence time. Since quantum mechanics only allows unitary transforms, essentially rotations (and reflections) in a high dimensional Hilbert space, it predicts that everything repeats eventually, although most likely, due to the high dimensionality, only after a time that is mind-bogglingly huge even when compared to cosmological timescales. That is at odds with thermodynamics and relativity (at least if that is valid for ever expanding universes). There obviously must be tests for it, but finding them is surprisingly difficult. For example, even in theory waiting out the recurrence time is frustrated by the fact that if it exists as such, all notes and memories would have reverted, and we would again wonder if we should start the experiment for the first time ever! Yet on systems small enough that we can isolate them sufficiently from the environment to have short and observable recurrences (that experimentalists have come to call "revivals"), they demonstrably occur. At least to the extent that you consider this good enough to be at least a partial test of recurrence for the universe as a whole, this is of course at odds with any postulate of wavefunction collapse.
Regarding the interpretation of QM, there are several schools of thought. Apparently Asher belongs to the "just shut up and calculate" school. He knows that classical intuitions do not suffice in trying to understand QM. But your term "objective meaning" needs to be clarified since in QM, measurements of properties of an object (say an electron) cannot be independent of the system or observer doing the measurement when the measurement is made. Can there be such a thing as an objective meaning when the meaning must necessarily, for humans, be a narrative saturated with the subject's own embodied metaphors? I submit that an understanding of any object, especially one that cannot be directly observed, cannot be entirely purged of the subject.
Here is that minimum number of postulates, which I obtained from my online MIT course with Professor Adams (which is really great): QM, lecture 3.
The configuration or state of a quantum object is completely specified by a wavefunction denoted as $\psi(x)$.
$p(x) = |\psi(x)|^2$ determines the probability (density) that an object in the state $\psi(x)$ will be found at position $x$.
Given two possible states of a quantum system corresponding to two wavefunctions $\psi_a$ and $\psi_b$, the system could also be in a superposition $\psi = \alpha\psi_a + \beta\psi_b$ with $\alpha$ and $\beta$ as arbitrary complex coefficients satisfying normalization.
Why are transition amplitudes more fundamental than probabilities in quantum mechanics? | CommonCrawl |
In this paper we study $f$-harmonic maps from non-compact manifolds into non-positively curved ones. Notably, we prove existence and vanishing results which generalize part of Schoen and Yau's theory of harmonic maps to the weighted setting. As an application, we deduce information on the topology of manifolds with lower bounded $\infty$-Bakry-Emery Ricci tensor, and in particular of steady and expanding gradient Ricci solitons. | CommonCrawl |
Abstract: This paper studies classification with an abstention option in the online setting. In this setting, examples arrive sequentially, the learner is given a hypothesis class $\mathcal H$, and the goal of the learner is to either predict a label on each example or abstain, while ensuring that it does not make more than a pre-specified number of mistakes when it does predict a label.
Previous work on this problem has left open two main challenges. First, not much is known about the optimality of algorithms, and in particular, about what an optimal algorithmic strategy is for any individual hypothesis class. Second, while the realizable case has been studied, the more realistic non-realizable scenario is not well-understood. In this paper, we address both challenges. First, we provide a novel measure, called the Extended Littlestone's Dimension, which captures the number of abstentions needed to ensure a certain number of mistakes. Second, we explore the non-realizable case, and provide upper and lower bounds on the number of abstentions required by an algorithm to guarantee a specified number of mistakes. | CommonCrawl |
Use the theory of group trisections to find invariants of $4$-manifold trisections.
Define a notion of group trisection for trisections with boundary.
Adapt group trisections to the setting of bridge trisections and use it to get invariants of knotted surfaces in $S^4$.
According to a classical theorem by Wall, any two homotopy equivalent, simply connected smooth closed $4$-manifolds $X$ and $Y$ become diffeomorphic after stabilizing by taking connected sums with some number of $S^1\times S^2$'s. What does this say about trisections of the trivial group?
Cite this as: AimPL: Trisections and low-dimensional topology, available at http://aimpl.org/trisections. | CommonCrawl |
Is the $\alpha'$ expansion in string theory an asymptotic expansion?
and: What is your simplest explanation of the string theory?
Maimon is referring to the paper "Under the spell of the gauge principle" by 't Hooft.
We explain the principles of the laws of physics that we believe to be applicable for the quantum theory of black holes. In particular, black hole formation and evolution should be described in terms of a scattering matrix. This way black holes at the Planck scale become indistinguishable from other particles. This S-matrix can be derived from known laws of physics. Arguments are put forward in favor of a discrete algebra generating the Hilbert space of a black hole with its surrounding space-time including surrounding particles."
Can someone explain this in non-technical (or semi-technical) language? It seems like a big deal because I never heard of it before. This is also completely absent in popular explanations of string theory. Maybe this would also be a nice entry for the Q&A page.
PhysicsOverflow is for graduate+ level; so any answers should concentrate on the physical content, not on popular explanations.
The question is not bad or uninteresting as such, but if you are exclusively looking for non-technical/popular level answers PhysicsForums for example might be a more appropriate place. On PO, on-topic answers are expected to be rather technical in nature. | CommonCrawl |
What is the homotopy type of the space of simple closed curves isotopic to a given one?
For surfaces there are many statements along the lines of: if two simple closed curves are homotopic, they are isotopic. I'm interested in such questions for families of curves.
More precisely, let $\Sigma$ be a hyperbolic surface, possibly with boundary. We fix an essential simple closed curve $\gamma$ on $\Sigma$. Is it true that the subspace of $Emb(S^1,\Sigma)$ consisting of those curves that are isotopic to $\gamma$ is homotopy equivalent to a circle? Here the circle would come from reparametrisation of the curves.
This statement is true if we instead look at the space of all continuous (or smooth) maps of $S^1$ into $\Sigma$ that are homotopic to $\gamma$. Also note that this seems to be false for the torus, as for any essential simple closed curve we get at least $S^1 \times S^1$.
Earlier than Grayson, the determination of the homotopy-types of these spaces was done by Gramain.
There are a few special cases, like the torus and sphere and the non-orientable analogue, the case of null curves. But if they're not null homotopic the components of the embedding space have the homotopy type of $S^1$ -- the reparametrizations of the given curve.
If you forget about the parametrization, the "curve shortening flow" isotopes an essential simple closed curve to THE geodesic isotopic to it (this is a celebrated result of Matt Grayson), which I believe gives a deformation retraction of the unparametrized space to a point. When you throw the parametrization back in, you get your conjectured result.
Not the answer you're looking for? Browse other questions tagged gt.geometric-topology surfaces or ask your own question.
How to detect a simple closed curve from the element in the fundamental group?
Who proved that two homotopic embeddings of one surface in another are isotopic?
How many simple closed geodesics in a given primitive homology class? | CommonCrawl |
This is a linklog to Christopher Rackauckas's article about Julia, Cython & Numba. Unfortunately, I don't have time to read this now. Hope I can return to this later.
Which schemes should be tested?
I intend to post this for a Borel-Cantelli lemma exercise on Math.SE.
To apply Borel-Cantelli, one has to determine whether $\sum_i P(X_i = 0)<+\infty$. | CommonCrawl |
Guruvenket, S and Ghatak, Jay and Satyam, PV and Rao, Mohan G (2005) Characterization of bias magnetron-sputtered silicon nitride films. In: Thin Solid Films, 478 (2). pp. 256-260.
The influence of the deposition parameters and the substrate bias voltage on the optical, compositional and surface properties of DC magnetron-sputtered silicon nitride thin films is studied. Silicon nitride thin films are deposited on silicon (100) and quartz substrates at different partial pressures of nitrogen and discharge currents. The variation in the refractive index and the optical band gap of these films is studied. Compositional variation has been studied using Rutherford backscattering spectroscopy (RBS). Silicon nitride thin films deposited at $3 \times 10^{-2}$ Pa partial pressure of nitrogen with $2.5~mA/cm^2$ cathode current density showed an optical band gap of 4.3 eV and a refractive index of 2.04 (at 650 nm). The nitrogen to silicon ratio in the film is 1.31, and the roughness of the films is 2.3 nm. Substrate bias during deposition helped in changing the optical properties of the films. A substrate bias of -60 V resulted in films having near stoichiometry with N/Si ratio 1.32, and the optical band gap, refractive index, and roughness are 4.8 eV, 1.92 and 0.78 nm, respectively.
But when I solved this question in the frequency domain I obtained $0.4$, which is the correct answer.
What is the mistake in this method?
The primary problem in your time domain analysis is the assumption that if both $x(t)$ and $x(t-d)$ have the same Nyquist sampling rate, then so will $x(t)+x(t-d)$.
You can see that the summation may alter the signal in such a way that the resulting signal's bandwidth may be different from the individual ones.
As an example consider the composite signal $x(t) = x_a(t) + x_b(t)$ where $x_a(t)$ is a low-pass signal with individual Nyquist sampling rate of $w_a$ and $x_b(t) = \sin(w_0 t)$, is a high frequency sine wave with a Nyquist sampling rate of $2w_0$. Therefore both $x(t)$ and $x(t-d)$ will have the same Nyquist sampling rate of $2 w_0$.
Now if $d$ is chosen such that $w_0 d = \pi$ then $\sin(w_0 t) + \sin(w_0 t - \pi) = 0$, for all $t$, therefore $x(t) + x(t-d) = x_a(t) + x_a(t-d)$. There will be cancellation of the high frequency terms so that the Nyquist sampling rate of the sum will be $w_a$ (assuming the sum of low-pass signals will not further alter it). Hence the sum will have a different Nyquist sampling rate even though $x(t)$ and $x(t-d)$ individually have the same Nyquist sampling rate.
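A quick numerical check of this cancellation; the particular frequencies and delay below are arbitrary choices for illustration, not values taken from the question.

import numpy as np

w0 = 100.0              # rad/s, high-frequency tone
d = np.pi / w0          # delay chosen so that w0 * d = pi
wa = 5.0                # rad/s, the "low-pass" content

t = np.linspace(0.0, 1.0, 10_000)
xa = lambda s: np.cos(wa * s)                # low-pass part
x = lambda s: xa(s) + np.sin(w0 * s)         # composite signal

total = x(t) + x(t - d)                      # signal plus its delayed copy
low_only = xa(t) + xa(t - d)                 # what survives if the tones cancel

print(np.max(np.abs(total - low_only)))      # numerically zero: tones cancelled

Because the sum no longer contains the tone at $w_0$, its Nyquist rate is set by the low-pass part alone.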
Not the answer you're looking for? Browse other questions tagged fourier-transform sampling convolution homework nyquist or ask your own question.
Minimum number of Poles and zero of transfer function H(z)?
Minimum number of data points needed for a DFT to avoid spectrum leakage? | CommonCrawl |
Key and signature sizes of NTRU and NTRU Prime?
I would have expected this info to be easier to Google Scholar for, but alas I'm asking here.
the public key consists of one ring element, and so it takes up $11 \times 439 = 4829$ bits, which compresses to 604 bytes.
what are the NTRU keysize and application in industry?
For instance, with $N = 1171$ and $q = 2048$, which are recommended parameters, public and private keys are of size 1.57 KB.
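A back-of-the-envelope check of the two figures quoted above, assuming roughly $\lceil\log_2 q\rceil$ bits per coefficient (real encodings may pack coefficients more tightly, so these are only rough estimates).

import math

# NTRU Prime figure quoted above: 439 coefficients at 11 bits each.
bits = 11 * 439
print(bits, math.ceil(bits / 8))          # 4829 bits, about 604 bytes

# NTRU with N = 1171, q = 2048, i.e. 11 bits per coefficient.
bits = 1171 * math.ceil(math.log2(2048))
print(bits / 8 / 1024)                    # about 1.57 KB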
What about the size of NTRUsign signatures?
Our public keys are field elements, easily squeezed into 1232 bytes.
Any idea what signature sizes of an NTRU Prime-based signature scheme would be?
NTRU Prime does not have a signature scheme as far as I know.
Not the answer you're looking for? Browse other questions tagged signature key-size ntru or ask your own question.
Multiple NTRU public keys for the same private key? | CommonCrawl |
Social networks are large graphs that require multiple graph database servers to store and manage them. Each database server hosts a graph partition with the objectives of balancing server loads, reducing remote traversals (edge-cuts), and adapting the partitioning to changes in the structure of the graph in the face of changing workloads. To achieve these objectives, a dynamic repartitioning algorithm is required to modify an existing partitioning to maintain good quality partitions while not imposing a significant overhead to the system. In this paper, we introduce a lightweight repartitioner, which dynamically modifies partitioning using a small amount of resources. In contrast to the existing repartitioning algorithms, our lightweight repartitioner is efficient, making it suitable for use in a real system. We integrated our lightweight repartitioner into Hermes, which we designed as an extension of the open source Neo4j graph database system, to support workloads over partitioned graph data distributed over multiple servers.
Using real-world social network data, we show that Hermes leverages the lightweight repartitioner to maintain high quality partitions and provides a two to three times performance improvement over the de-facto standard random hash-based partitioning.
pute engine that can optionally even be located in the cloud.
heating using a radiant heater. Finally, SPOT* is less intrusive in that it does not use a camera. We find that the per-user cost for SPOT* is about $185 compared to $1000 for SPOT. Moreover, in a preliminary deployment, SPOT* is able to improve user comfort by 78%.
An increasing number of programming languages compile to the Java Virtual Machine (JVM), and program analysis frameworks such as WALA and SOOT support a broad range of program analysis algorithms by analyzing bytecode. While this approach works well when applied to bytecode produced from Java code, its efficacy when applied to other bytecode has not been studied until now.
We present qualitative and quantitative analysis of the soundness and precision of call graphs constructed from JVM bytecodes produced for Python, Ruby, Clojure, Groovy, Scala, and OCaml applications. We show that, for Python, Ruby, Clojure, and Groovy, the call graphs are unsound due to use of reflection, invokedynamic instructions, and run-time code generation, and imprecise due to how function calls are translated.
For Scala and OCaml, all unsoundness comes from rare, complex uses of reflection and proxies, and the translation of first-class features in Scala incurs a significant loss of precision.
The bulk synchronous parallel (BSP) model used by synchronous graph processing systems allows algorithms to be easily implemented and reasoned about. However, BSP can suffer from poor performance due to stale messages and frequent global synchronization barriers. Asynchronous computation models have been proposed to alleviate these overheads but existing asynchronous systems that implement such models have limited scalability or retain frequent global barriers, and do not always support graph mutations or algorithms with multiple computation phases. We propose barrierless asynchronous parallel (BAP), a new computation model that reduces both message staleness and global synchronization. This enables BAP to overcome the limitations of existing asynchronous models while retaining support for graph mutations and algorithms with multiple computation phases. We present GiraphUC, which implements our BAP model in the open source distributed graph processing system Giraph, and evaluate our system at scale with large real-world graphs on 64 EC2 machines. We show that GiraphUC provides across-the-board performance improvements of up to $5\times$ faster over synchronous systems and up to an order of magnitude faster than asynchronous systems. Our results demonstrate that the BAP model provides efficient and transparent asynchronous execution of algorithms that are programmed synchronously.
Open multi-agent systems (MASs) act as societies in which autonomous and heterogeneous agents can work towards similar or different goals. In order to cope with the heterogeneity, autonomy and diversity of interests among the different agents in the society, open MASs establish a set of behavioural norms that is used as a mechanism to ensure a state of co-operation among agents. Such norms regulate the behaviour of the agents by defining obligations, permissions and prohibitions. Fulfillment of a norm may be encouraged through a reward while violation of a norm may be discouraged through punishment. Although norms are promising mechanisms to regulate an agent's behaviour, we should note that each agent is an autonomous entity that is free to fulfill or violate each associated norm. Thus, agents can use different strategies when deciding to achieve their goals including whether to comply with their associated norms. Agents might choose to achieve their goals while ignoring their norms, thus overlooking the rewards or punishments they may receive. In contrast agents may choose to comply with all the norms although some of their goals may not be achieved. In this context, this paper proposes a framework for simulation of normative agents providing a basis for understanding the impacts of norms on agents.
The frequency of extreme weather events has accelerated, an apparent outcome of progressive climate change. Excess water is a significant consequence of these events and is now the leading cause of insurance claims for infrastructure and property damage. Governments recognize that plans for growth must reflect communities' needs, strengths and opportunities while balancing the cumulative effects of economic growth with environmental concerns. Legislation must incorporate the cumulative effects of economic growth with adaptation to weather events to protect the environment and citizens, while ensuring that products of growth such as buildings and infrastructure are resilient. For such a process to be effective it will be necessary for the private sector to develop and operate cumulative effect decision support software (CEDSS) tools and to work closely with all levels of government including watershed management authorities (WMAs) that supply environmental data. Such co-operation and sharing will require a new Open Data information-sharing platform managed by the private sector. This paper outlines that platform, its operation and possible governance model.
When two methods are invoked on the same object, the dispatch behaviours of these method calls will be correlated. If two correlated method calls are polymorphic (i.e., they dispatch to different method definitions depending on the type of the receiver object), a program's interprocedural control flow graph will contain infeasible paths. Existing algorithms for data-flow analysis are unable to ignore such infeasible paths, giving rise to loss of precision.
We show how infeasible paths due to correlated calls can be eliminated for Interprocedural Finite Distributive Subset (IFDS) problems, a large class of data-flow analysis problems with broad applications.
Our approach is to transform an IFDS problem into an Interprocedural Distributive Environment (IDE) problem, in which edge functions filter out data flow along infeasible paths. A solution to this IDE problem can be mapped back to the solution space of the original IFDS problem. We formalize the approach, prove it correct, and report on an implementation in the WALA analysis framework.
Current transaction systems for geo-distributed datastores either have high transaction processing latencies or are unable to support general transactions with dependent operations. In this paper, we introduce CrossStitch, an efficient transaction processing framework that reduces latency by restructuring each transaction into a chain of state transitions, where each state consists of a key operation and computation.
Transaction states are processed sequentially, and the transaction code and data is sent directly to the next hop in the chain. CrossStitch transactions can be organized such that all states in a location are processed before transitioning to a state in a different location. This allows CrossStitch to significantly reduce the number of inter-location crossings compared to transaction systems that retrieve remote data to a single location for processing.
To provide transactional properties while preserving the chain communication pattern, CrossStitch introduces a pipelined commit protocol that executes in parallel with the transaction and does not require any centralized co-ordination. Our evaluation results show that CrossStitch can reduce the latency of geo-distributed transactions when compared to a traditional 2PC-based distributed transaction system. We demonstrate that CrossStitch can reduce the number of round trips by more than half for TPC-C-like transactions.
local memory for incoming RDMA messages.
The increasing prevalence of oversubscribed networks and fast solid-state storage devices in the datacenter has made the network the new performance bottleneck for many distributed filesystems. As a result, distributed filesystems need to be network-aware in order to make more effective use of available network resources.
In this paper, we introduce Mayflower, a new distributed filesystem that is not only network-aware, it is co-designed from the ground up to work together with a network control plane. In addition to the standard distributed filesystem components, Mayflower has a flow monitor and manager running inside a software-defined networking controller. This tight coupling with the network controller enables Mayflower to make intelligent replica selection and flow scheduling decisions based on both filesystem and network information. It also enables Mayflower to perform global optimizations that are unavailable to conventional distributed filesystems and network control planes. Our evaluation results from both simulations and a prototype implementation show that Mayflower reduces average read completion time by more than 25% compared to current state-of-the-art distributed filesystems with an independent network flow scheduler, and more than 80% compared to HDFS with ECMP.
Current random datacenter network designs, such as Jellyfish, directly wire top-of-rack switches together randomly, which is difficult even when using best-practice cable management techniques. Moreover, these are static designs that cannot adapt to changing workloads. In this paper, we introduce Lanternfish, a new approach to building random datacenter networks using an optical ring that significantly reduces wiring complexity and provides the opportunity for reconfigurability. Unlike previous optical ring designs, Lanternfish does not require wavelength planning because it is specifically designed to provide random connectivity between switches. This design choice further reduces the difficulty of deploying a Lanternfish network, making random datacenter networks more practical. Our experimental results using both simulations and network emulation show that Lanternfish can effectively provide the same network properties as Jellyfish. Additionally, we demonstrate that by replacing passive optical components with active optical components at each switch, we can dynamically reconfigure the network topology to better suit the workload while remaining cost competitive. Lanternfish is able to construct workload-specific topologies that provide as little as half the average path length of a Jellyfish deployment with twice as many switch-to-switch connections.
The design of next-generation network systems with predictable behaviour in all situations poses a significant challenge. Monitoring of events happening at different points in a distributed environment can detect the occurrence of events that indicate significant error conditions. We use Modular Timing Diagrams (MTD) as a specification language to describe these error conditions. MTDs are a component-oriented and compositional notation. We take advantage of these features of MTDs and point out that, in many cases, a global MTD specification describing behaviours of several system components can be efficiently decomposed into a set of sub-specifications. Each of the sub-specifications describes a local monitor that is specific to the component on which the monitor is intended to run. We illustrate the compositional nature of MTDs in describing several network monitoring conditions related to network security.
We motivate and present a proposal for how to represent extended finite state machine behavioural models with rich hierarchical states and compositional control structures (e.g., the Statecharts family) in SMT-LIB. Our goal with such a representation is to facilitate automated deductive reasoning on such models, which can exploit the structure found in the control structures. We present a novel method that combines deep and shallow encoding techniques to describe models that have both rich control structures and rich datatypes. Our representation permits varying semantics to be chosen for the control structures, recognizing the rich variety of semantics that exist for the family of extended finite state machine languages. We hope that discussion of these representation issues will facilitate model sharing for investigation of analysis techniques.
In a large, long-lived project, an effective code review process is key to ensuring the long-term quality of the code base. In this work, we study code review practices of a large, open source project, and we investigate how the developers themselves perceive code review quality. We present a qualitative study that summarizes the results from a survey of 88 Mozilla core developers. The results provide developer insights into how they define review quality, what factors contribute to how they evaluate submitted code, and what challenges they face when performing review tasks. We found that the review quality is primarily associated with the thoroughness of the feedback, the reviewer's familiarity with the code, and the perceived quality of the code itself. Also, we found that while different factors are perceived to contribute to the review quality, reviewers often find it difficult to keep their technical skills up-to-date, manage personal priorities, and mitigate context switching.
We present a feedback-directed instrumentation technique for computing crash paths that allows the instrumentation overhead to be distributed over a crowd of users and to reduce it for users who do not encounter the crash. We implemented our technique in a tool, Crowdie, and evaluated it on 10 real-world issues for which error messages and stack traces are insufficient to isolate the problem. Our results show that feedback-directed instrumentation requires 5% to 25% of the program to be instrumented, that the same crash must be observed three to ten times to discover the crash path, and that feedback-directed instrumentation typically slows down execution by a factor of 2x–9x compared to 8x–90x for an approach where applications are fully instrumented.
Database design is critical for high performance in relational databases and many tools exist to aid application designers in selecting an appropriate schema. While the problem of schema optimization is also highly relevant for NoSQL databases, existing tools for relational databases are inadequate for this setting. Application designers wishing to use a NoSQL database instead rely on rules of thumb to select an appropriate schema. We present a system for recommending database schemas for NoSQL applications. Our cost-based approach uses a novel binary integer programming formulation to guide the mapping from the application's conceptual data model to a database schema.
We implemented a prototype of this approach for the Cassandra extensible record store. Our prototype, the NoSQL Schema Evaluator (NoSE) is able to capture rules of thumb used by expert designers without explicitly encoding the rules. Automating the design process allows NoSE to produce efficient schemas and to examine more alternatives than would be possible with a manual rule-based approach.
Symbolic interactionist principles of sociology are based on the idea that human action is guided by culturally shared symbolic representations of identities, behaviours, situations and emotions. Shared linguistic, paralinguistic, or kinesic elements allow humans to co-ordinate action by enacting identities in social situations. Structures of identity-based interactions can lead to the enactment of social orders that solve social dilemmas (e.g., by promoting co-operation). Our goal is to build an artificial agent that mimics the identity-based interactions of humans, and to compare networks of such agents to human networks. In this paper, we take a first step in this direction, and describe a study in which humans played a repeated prisoner's dilemma game against other humans, or against one of three artificial agents (bots). One of the bots has an explicit representation of identity (for self and other), and attempts to optimise with respect to this representation. We compare the human play against bots to human play against humans, and show how the identity-based bot exhibits the most human-like behaviour.
The frequency of extreme weather events has accelerated, an apparent outcome of progressive climate change. Excess water is a significant consequence of these events and is now the leading cause of insurance claims for infrastructure and property damage. Governments recognize that plans for growth must reflect communities' needs, strengths and opportunities while balancing the cumulative effects of economic growth with environmental concerns. For such a process to be effective it will be necessary to develop and operate cumulative effect decision support software (CEDSS) tools, and to work closely with all levels of government including watershed management authorities (WMAs) that supply environmental data. Such co-operation and sharing will require a new open and big data information-sharing platform, which is described in this paper as an open and big data platform for environmental analysis and management. | CommonCrawl |
I found myself working in Excel as part of the work I did fitting exponential (and gamma) distributions to left censored data. This was partly due to the fact that my research colleagues had done the initial distribution fitting using Microsoft Excel's Solver to do maximum likelihood, something it does quite well. A shortcoming of this approach is that you cannot get the Hessian matrix for models with two or more parameters, which you need if you want to place any sort of confidence interval around your estimates. There is nothing to stop you, of course, from doing the actual mathematics and calculating the values you need directly, but this all sounds like rather too much work and is distribution specific. One can equally make the criticism that the approximations used by the BFGS and other quasi-Newton methods are not guaranteed to be close to the true Hessian matrix.
The next step along the chain (I am sure this is a terribly mixed metaphor but hey, who cares), for me at least, was to use MCMC — in particular, to implement a simple random walk Metropolis-Hastings sampling scheme.
Note: The method I describe here is almost impossible for a multi-parameter model, or a model where the log-likelihood does not reduce to a simple sum of the data (or a sum of a function of the data). The reason for this is that Excel's distribution functions are not vector functions, which means in many circumstances the values of the likelihood for different observations must be stored in separate cells, and then we have to sum over the cells. In a problem with n observations and m proposals, we then would have to store \(n\times m\) values unless we resort to Visual Basic for Applications (VBA). However, I wanted to do this problem without VBA.
Note 2: I know that it is very easy to estimate the variance for the exponential distribution, but please refer to the title of this post.
In order to do MCMC we need to be able to generate random numbers. This functionality is provided in Excel by the Data Analysis Add-In. However, the Data Analysis Add-In has not been available since Excel 2008 for the Mac. There is a claim that this functionality can be restored by a third-party piece of software called StatPlus LE, but in my limited time with it, it seems a very limited solution. There are a number of other pieces of functionality missing in the Mac version of Excel, which reduces its usefulness greatly.
which depends only on the sum of the observations above the detection limit, the sum of the logarithms of the observations above the detection limit, and the numbers of observations above and below the detection limit.
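For reference, one plausible form of that log-likelihood, for a gamma model with shape \(\alpha\) and rate \(\beta\) that is left-censored at a detection limit \(L\) (the rate parametrisation and the symbol \(L\) are assumptions, since the formula itself is not restated here), is

\[ \ell(\alpha,\beta) = n_{\mathrm{obs}}\bigl(\alpha\ln\beta - \ln\Gamma(\alpha)\bigr) + (\alpha-1)\sum_i \ln x_i - \beta\sum_i x_i + n_{\mathrm{cens}}\,\ln F(L;\alpha,\beta), \]

where the sums run over the uncensored observations and \(F\) is the gamma cumulative distribution function.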
In my JAGS model I used \(\Gamma(0.001, 0.001)\) priors for \(\alpha\) and \(\beta\). This would be easy enough to implement in Excel if the inverse gamma function were sufficiently robust. However, it is not, and so I have opted for a prior which is \(U(-2,3)\) on the log scale.
This prior is a little less extreme than the \(\Gamma(0.001, 0.001)\) prior but has reasonable properties for this example.
We can use the Data Analysis Add-In to generate a set of proposal values. The screen capture below shows the dialog box from the Random Number Generation part of the Data Analysis Add-In. We need proposals for both \(\alpha\) and \(\beta\). Therefore we ask Excel to give us 2 random variables. In a standard MCMC implementation we usually choose a "burn-in" period to make sure our samples are not too correlated with the starting values, and to give the sampler time to get somewhere near the target distribution. In this example we will use a burn-in period of 1,000 iterations and then sample for a further 10,000 iterations, for a total of 11,000 iterations. We get Excel to put the proposals out into columns B and C starting at row 2 (and extending to row 11,001). Note: I have set the random number seed (to 202) here so that my results can be replicated.
We also need a column of U[0,1] random variates for our Metropolis-Hastings update step. The screen capture below shows how we set this up. We store these values in column F, and as before I have set the random number seed (to 456) so that my results can be replicated.
into cell D2, and then selecting cells D2 to D11001 and using the Fill Down command to propagate the formula. We select the range D2:E11001 and use the Fill Right command to propagate the formula across for \(\beta\). Columns C and D contain my proposal values for \(\alpha\) and \(\beta\).
As noted before, all we need is the sum of the observed values and the sum of the logs of the observed values, plus the number of observed and censored values. The sum of the observed values in my data set is 1478.48929487124 (stupid accuracy for replication), and the sum of the logs of the observed values is 519.633872429806. As noted before, the number of observed values is 395, and there are 9,605 censored values. I will insert these values in cells I2 to I5 respectively, and in cells H2 to H5 I will enter the labels sum_x, sum_log_x, nObs, and nCens.
It is useful to label cells with names when working with Excel formulae. This allows us to refer to cells containing values by a name that means something rather than a cell address. We can define names by using the tools on the Formula tab. I will use this tool to assign the names I put into cells H2 to H5 to the values I put into cells I2 to I5. To do this I select the range H2:I5, and then I click on the Formula tab, then the "Create names from Selection" button as shown in the screenshot below. Note: I do not believe you can do this on the Mac, but I do not know for sure.
Excel (Windows) allows you to create multiple names in a spreadsheet at once.
I can now use, for example, the name sum_x to refer to cell address $I$2 in my formulae. It also removes the need to make sure that the address is absolute every time I type it.
After you have got this formula correct (and it will probably take more than one go), select cells J2:J11001 and use the "Fill Down" command (Ctrl-D on Windows) to propagate the formula down for every proposed value.
in cells L3 and M3 respectively. We need to propagate these formulae down to row 11,001 by selecting and using the "Fill Down" command as before.
Finally, we need to gather some summary statistics about our sample from the posterior distribution. Recall we are using a burn-in period of 1,000 samples and a sampling period of 10,000 iterations. Therefore all our summary functions are only applied from row 1,002 to row 11,001.
We are interested in the mean, standard deviation, and 0.025 and 0.975 quantiles for \(\alpha\) and \(\beta\). We can get these values using the AVERAGE, STDEV.S, and PERCENTILE.EXC functions. In cells P2 to P7 we insert the following formulae.
and then we select cells P2:Q7 and use the "Fill Right" command.
The screen capture above shows my results. Rather embarrassingly the 95% credible interval does not contain the true values \(\alpha=10.776\) and \(\beta=5.138\). The main reason for this is that there is a total of 10 acceptances in our sample of size 10,000! That is, the sampler is incredibly inefficient. This is not completely surprising. The interval calculated under the assumption that the posterior distribution is symmetric is a little wider and does contain the true values. However, I would not put much stock in this. Of course, we can easily have the sampler run for a longer time, by generating more random variates and using more cells. I will leave that task to the reader.
In this article I have shown how to fit a gamma distribution to left-censored data using MCMC in Excel. It is definitely not the best way to do this — I would use R and JAGS, which would be infinitely faster and give me more useable results — but it can be done. It does offer functionality to non-R users, of which there are many more than actual R users, and it also allows the chance to observe the Markov chain so that we can see how the sampler is working at every stage of the process.
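For comparison, here is a minimal Python sketch of the same kind of random-walk sampler, using the sufficient statistics quoted above. The detection limit, starting values and step sizes are placeholders (they are not restated in this post), so it will not reproduce the Excel numbers exactly.

import numpy as np
from scipy.special import gammaln
from scipy.stats import gamma as gamma_dist

sum_x, sum_log_x = 1478.48929487124, 519.633872429806
n_obs, n_cens = 395, 9605
detection_limit = 2.0          # placeholder: replace with the actual limit

def log_posterior(log_a, log_b):
    # U(-2, 3) priors on log(alpha) and log(beta), as described above.
    if not (-2 < log_a < 3 and -2 < log_b < 3):
        return -np.inf
    a, b = np.exp(log_a), np.exp(log_b)
    return (n_obs * (a * np.log(b) - gammaln(a))
            + (a - 1) * sum_log_x - b * sum_x
            + n_cens * gamma_dist.logcdf(detection_limit, a, scale=1 / b))

rng = np.random.default_rng(202)
log_a, log_b = 1.0, 1.0
lp = log_posterior(log_a, log_b)
samples = []
for i in range(11_000):                       # 1,000 burn-in + 10,000 kept
    prop_a = log_a + rng.normal(0, 0.05)      # random-walk proposals
    prop_b = log_b + rng.normal(0, 0.05)
    lp_prop = log_posterior(prop_a, prop_b)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis-Hastings acceptance
        log_a, log_b, lp = prop_a, prop_b, lp_prop
    if i >= 1_000:
        samples.append((np.exp(log_a), np.exp(log_b)))

alpha, beta = np.array(samples).T
print(alpha.mean(), np.percentile(alpha, [2.5, 97.5]))
print(beta.mean(), np.percentile(beta, [2.5, 97.5]))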
For completeness the sheet I created whilst writing this post is available from the link below. | CommonCrawl |
We show the advantages of using the quaternionic expression of some PDEs, in this case Navier–Stokes, to obtain a faster and simpler numerical solution via vortex methods.
Here $u$ is the velocity, $P$ the pressure, $R$ the Reynolds number.
In 3D $\psi$ is the velocity potential; in 2D it is the stream function.
where $T$ is the $T$-operator (Teodorescu transform), $D$ the Dirac-operator and $F$ the Cauchy integral. The Cauchy integral depends only on the boundary values of $u$.
That means that if $u=0$ on the boundary then this part can be dropped from the formula.
$\alpha$ is the outer normal to $\gamma$ at the point $y$ and $e(x)$ the fundamental solution (generalized Cauchy kernel) of the Dirac-operator.
In this way, substituting in the above equations, we obtain a nonlinear equation in $\xi$ instead of a system in $u$ and $\xi$. To find representation formulas and numerical methods for $\xi$ is one of the goals of the project. Because we have to evaluate only the vorticity (and not the velocity in addition), a better efficiency of this approach is expected. | CommonCrawl |
Why do colligative properties depend only on number of solute particles?
Colligative properties depend solely on the number of solute particles, even though the interactive forces are different for different solute-solvent pairs. So why is the dependence only on the number of solute particles?
After your solute has dissolved, there are no more enthalpic effects to take into account. The solvation enthalpy is converted to a temperature change, and that's it. For colligative properties, the solute ideally has no significant vapour pressure of its own and does not precipitate; it just stays in solution.
The solute molecule moves around (Brownian motion), the solvent molecules around it exchange places, but are all the same, and that's that.
Now everything in solution is about entropy. In a solution, there is only orientational and translational entropy. (Molecular) vibrational excitation is only relevant at high temperatures; rotation is practically impossible due to the many interactions.
So every particle, irrespective of its size, has the same contribution to the entropy, except that each solute particle has a constant additional contribution, because they are different.
If the solute concentration increases, its particles start interacting. Their coming close to each other and detaching again creates local energy fluctuations, which add entropy, but depend strongly on the kind of interaction. That's when the identity of the solute starts to matter.
It's a lie. Colligative properties do depend on the chemical nature of the solute and solvent - their interaction. The trick is: an ideal solution, in which the solute-solute intermolecular forces and solute-solvent interactions are more or less of the same character, has the property that colligative properties are dictated by composition given only in terms of amounts of moles. However, any sufficiently dilute solution will behave more or less ideally, which is where the statement comes from: for dilute solutions, colligative properties depend only on the number of moles of solute.
Colligative properties are defined as solvent properties that display a linear dependence on the mole fraction $x_2$ of solute in solution, but are otherwise blind to the specific nature of intermolecular interactions. The key condition for such properties to be observed is that Raoult's law (one definition of an ideal solution) holds. Colligative behavior is purely statistical. For colligative properties - including freezing point depression, boiling point elevation, and osmotic pressure - the change in the property of the solvent can be expressed as $\Delta J=K_J x_2$, where $K_J$ is regarded as a constant for the solvent at the given temperature and pressure, independently of the identity of the solute. The relations assume explicitly that the solute concentration is low ($x_2 \ll 1$). In practice, the requirement of dilute solutions is often necessary to ensure that Raoult's law is observed.
For an ideal solution the chemical potential of the solvent is $\mu_1 = \mu_1^* + RT\ln x_1$, where $\mu_1^*$ is the chemical potential of the pure solvent at the same temperature and pressure. This equation is the basis for deriving the colligative property equations.
It should be emphasized that it is necessary to ensure ideal solution behavior - that Raoult's law is observed - in order to implement the mathematical formalism associated with colligative properties. In that sense this is a functional definition: one searches for systems that match the definition and allow application of the formalism.
When are these conditions met?
In chemistry, colligative properties are properties of solutions that depend on the ratio of the number of solute particles to the number of solvent molecules in a solution, and not on the nature of the chemical species present.1 The number ratio can be related to the various units for concentration of solutions. The assumption that solution properties are independent of nature of solute particles is only exact for ideal solutions, and is approximate for dilute real solutions. In other words, colligative properties are a set of solution properties that can be reasonably approximated by assuming that the solution is ideal.
Not the answer you're looking for? Browse other questions tagged solutions colligative-properties or ask your own question.
Why do exothermic dissolution reactions occur?
How do repulsive solute interactions affect the van't Hoff factor?
Why is osmotic pressure the best colligative property? | CommonCrawl |
Your task is to calculate the number of ways to get a sum $n$ by throwing dice. Each throw yields an integer between $1 \ldots 6$.
For example, if $n=10$, some possible ways are $3+3+4$, $1+4+1+4$ and $1+1+6+1+1$.
The only input line contains an integer $n$.
Print the number of ways modulo $10^9+7$. | CommonCrawl |
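One standard way to count these sums is dynamic programming over the target sum. The short Python sketch below is an illustrative solution; the function name is my own, and reading input in the judge's exact format is omitted.

```python
MOD = 10**9 + 7

def dice_combinations(n: int) -> int:
    # dp[i] = number of ordered sequences of throws (each 1..6) summing to i
    dp = [0] * (n + 1)
    dp[0] = 1  # the empty sequence
    for i in range(1, n + 1):
        dp[i] = sum(dp[i - k] for k in range(1, 7) if i - k >= 0) % MOD
    return dp[n]

print(dice_combinations(10))  # counts ways such as 3+3+4, 1+4+1+4, 1+1+6+1+1, ...
```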
This post is dedicated to all those who still believes that R in a nutshell is a book for learning statistics… Let's get started with the easiest question.
In statistics, having a hypothesis means that we believe that the value of a parameter, for instance the mean or the variance of a distribution, is close to a certain number. As with all statements, it can be correct or completely wrong. Data will tell us what seems to be correctly explained. In order to overcome the trickiness of the concept, let's define some terminology that might be helpful in the next paragraphs. Before the actual analysis, scientists usually formulate some questions they would like to answer. Those questions are technically referred to as hypotheses. Given the null hypothesis $H_0$, the alternative hypothesis $H_1$ and the sample $(x_1,x_2,\ldots,x_n)$, the rejection region (also called critical region) is defined as the region $C$ such that if $H_0$ is accepted, $(x_1,x_2,\ldots,x_n)\notin C$. Similarly, if $H_0$ is rejected, the data do belong to $C$.
Probably the most classical way to explain hypothesis testing is by referring to the Gaussian distribution with both mean and variance known. Let me oversimplify the problem of cancer attacked with statistical analysis tools.
Imagine that there are some reasons to believe that gene RSPC is responsible for a type of cancer. Moreover, doctors have samples of patients who are affected by cancer (the control sample). The control sample has a mean value for gene RSPC, $\mu_{\mathrm{RSPC}}$. The idea behind hypothesis testing is that if another group of patients has a mean value that is close enough to $\mu_{\mathrm{RSPC}}$, the hypothesis $H_0:\mu=\mu_{\mathrm{RSPC}}$ will be accepted and the group will be labeled as affected. Contrarily, if the mean value is not close enough to $\mu_{\mathrm{RSPC}}$, then $H_1:\mu\neq\mu_{\mathrm{RSPC}}$ should be accepted instead and the group will be labeled as not at risk. In terms of the critical region, the aforementioned hypotheses are saying that if the sample under investigation belongs to the critical region, we had better reject $H_0$.
No need to be a genius to conclude that if the sample does not belong to the critical region, we have a good reason to accept $H_0$ instead. Fine!
and relax it too. In fact, a much stricter hypothesis test would be conducted with a very small $\alpha$. This is translated into being less tolerant about the probability of the type I error.
The beauty of mathematical statistics consists in the capability of explaining the same concepts in so many different ways. Very often, academic papers and research studies in general exploit the concept of p-value. Once the statistic is computed from the data (in the example above it is $\left|\frac{\hat{X}-\mu_{\mathrm{RSPC}}}{\sigma/\sqrt{n}}\right|$) we might be interested in evaluating the probability that a random sample from the standard normal distribution is greater than our statistic. That probability is what they call the p-value. If that probability is greater than a certain threshold, then $H_0$ should be accepted.
How large a threshold? $\alpha$, of course!
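As a concrete illustration of the test sketched above, here is a small Python example of a two-sided z-test with known variance; the control mean, standard deviation and sample are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu_rspc, sigma, n = 5.0, 1.0, 30            # hypothetical control mean, known sd, sample size
x = rng.normal(5.3, sigma, size=n)           # hypothetical measurements for the new group

z = (x.mean() - mu_rspc) / (sigma / np.sqrt(n))   # the statistic discussed above
p_value = 2 * stats.norm.sf(abs(z))               # two-sided p-value
alpha = 0.05
print(f"z = {z:.2f}, p-value = {p_value:.4f}, accept H0: {p_value > alpha}")
```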
The simplification about cancer above is just ridiculous, I know. I've also read quite a number of papers in which they claim to govern the complexity of some aspects of (some types of) cancer with hypothesis testing, which I also find ridiculous. | CommonCrawl |
Back in September, Bob Carpenter posted a Stan puzzle to Andrew Gelman's blog. I didn't notice until Bob posted a second puzzle in November. I'm still learning to use Stan and thought these puzzles would be great exercises. So I decided to take a look and try to solve the puzzle myself (although the solution has already been posted).
Suppose a player is shooting free throws, but rather than recording for each attempt whether it was successful, she or he instead reports the length of her or his streaks of consecutive successes. For the sake of this puzzle, assume that the player makes a sequence of free throw attempts, $z = (z_1, z_2, \ldots)$, assumed to be i.i.d. Bernoulli trials with chance of success $\theta$, until $N$ streaks are recorded. The data recorded is only the length of the streaks, $y = (y_1, \ldots, y_N)$.
Puzzle: Write a Stan program to estimate $p(\theta | y)$.
$z = (0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0)$.
$y = (3, 1, 2, 5)$.
Any number of initial misses (0 values) in $z$ would lead to the same $y$. Also, the sequence $z$ always ends with the first failure after the $N$-th streak.
Hint: Feel free to assume a uniform prior $p(\theta)$. The trick is working out the likelihood $p(y | \theta, N)$, after which it is trivial to use Stan to compute $p(\theta | y)$ via sampling.
Another Hint: Marginalize the unobserved failures (0 values) out of $z$. This is non-trivial because we don't know the length of $z$.
Extra Credit: Is anything lost in observing $y$ rather than $z$?
The data we have is the number of streaks, $N$, and the length of the streaks, $y$. Note that we aren't actually allowed to use the complete data, $z$, and can only make use of $y$. The goal of the puzzle is to estimate $\theta$, the probability of success.
The last part of the problem contains an important piece of information—there is no distinction between a single miss (i.e. $0$) and a "streak" of misses (e.g. $0, 0, …$). This is really helpful because it reduces the problem to a single transition case (i.e. $1$ to $0$). In fact, since we are observing $N$ streaks, and we've collapsed any streaks of zero cases (misses) to a single event, the probability of the transition from 0 to 1 can be considered 1. Now, since the misses are marginalized, we want to be sure to only model the subsequent successes (if any). So given that the player has already made one shot, which is required to begin the streak, we are interested in the probability of making another shot.
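To make this generative process concrete, here is a small Python simulation of the setup (my own sketch, separate from the Stan models discussed below): i.i.d. Bernoulli attempts are drawn until $N$ streaks have been recorded, and only the streak lengths are kept.

```python
import random

def simulate_streaks(theta: float, n_streaks: int, seed: int = 0) -> list[int]:
    """Draw Bernoulli(theta) free throws until n_streaks streaks are recorded."""
    rng = random.Random(seed)
    y, current = [], 0
    while len(y) < n_streaks:
        if rng.random() < theta:   # a success extends the current streak
            current += 1
        elif current > 0:          # a miss ends a non-empty streak and is recorded
            y.append(current)
            current = 0
        # a miss while no streak is in progress never shows up in y
    return y

print(simulate_streaks(0.6, 4))    # the recorded streak lengths y
```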
Since we are taking a Bayesian approach we phrase the problem as "what value of $\theta$ is most likely, given the data?", or stated another way, "given what we've observed, what is the probability the player will make another shot?".
Although our data is recorded as counts we are interested in the chance of making a shot, so this is a binomial problem (probability of making the shot). It is easy to think that the problem should be approached as a Poisson problem or negative binomial problem—that is to say since we are only given count data, the problem may be easily mistaken as one using either the Poisson or negative binomial distributions. Given that these distributions are related and the presentation of the problem, this would be an understandable error. However we are squarely tasked with estimating $p(\theta | y)$.
Printing the object displays the simulation results. The simulation consisted of 4 chains, each with 5000 iterations. The first 2500 iterations were discarded from each chain to allow the sampler to warmup, yielding 2500 post-warmup iterations per chain. No thinning of the chains was performed.
Inference for Stan model: 441b9b22174fb849c0ecdf0100d53155.
post-warmup draws per chain=2500, total post-warmup draws=10000.
Samples were drawn using NUTS(diag_e) at Thu Nov 26 17:27:15 2015.
Based on 3508 effective samples, the posterior mean for $\theta$ is 0.67 with a 95% posterior interval from 0.29 to 0.95. We can interpret this as the following: given the data we've observed the probability the basketball player will make another shot, $\theta$, is about 0.67. However this is qualified by a large 95% posterior interval (0.29, 0.95) which demonstrates a large amount of uncertainty for this estimate.
This problem was trickier than I first thought, but not for the reasons I would have expected. As it turns out Stan has pretty helpful error messages—if you take the time to read them and think about what they mean. In early attempts I forgot to remove the leading success from the streak lengths and I admit it took much longer to fix this problem than it should have. I should have spent less time Googling the error and more time thinking about what caused it.
As mentioned above it is tempting to think about this as a Poisson or negative binomial problem. In fact the negative binomial distribution might be a good way to think about the problem because you make draws with some probability of success until you've observed some number of failures. We can apply the negative binomial distribution where we stop observing after a single failure. I decided to try this out and compare the results.
The code is pretty similar with small changes to the model statement and the inclusion of a generated quantities statement. Somewhat surprisingly it ran fine on the first try, but I guess Stan isn't actually that difficult. Since we are using the negative binomial, $\theta$ is the probability of failure. Using the generated quantities statement we can easily get the probability of success instead.
Inference for Stan model: c3b39edc8cf7553e3b565397c19f76c7.
Samples were drawn using NUTS(diag_e) at Fri Nov 27 09:41:40 2015.
The posterior mean is consistent with our estimate using the binomial distribution, but the 95% posterior interval is wider (0.13, 0.96) and we have fewer effective samples so our sampling with the negative binomial distribution is slightly less efficient.
This was a fun exercise and gave me an excuse to use Stan. I'll have to be sure to make an attempt on the second Stan puzzle. Hopefully Bob and Andrew keep putting these together because they are a great way to get familiar with using Stan to solve Bayesian problems. | CommonCrawl |
"Disentangling Coalescing Neutron Star-White Dwarf Binaries for LISA."
"The host galaxies of double compact objects merging in the local Universe."
"Neutron Star Mergers Might not be the Only Source of r-Process Elements in the Milky Way."
"GW$\times$LSS: Chasing the Progenitors of Merging Binary Black Holes."
"Numerical simulations of mass loading in the tails of Bow Shock Pulsar Wind Nebulae."
"Digging the population of compact binary mergers out of the noise."
Sebastian M. Gaebel et al.
"How frequent are close supermassive binary black holes in powerful jet sources?."
Martin G. H. Krause et al.
"A NuSTAR observation of the low mass X-ray binary GX 349+2 throughout the Z-track."
Benjamin M. Coughenour et al.
"An Analytical Portrait of Binary Mergers in Hierarchical Triple Systems."
Lisa Randall & Zhong-Zhi Xianyu.
"Probing non-Gaussian Stochastic Gravitational Wave Backgrounds with LISA."
"Stochastic Chemical Evolution of Galactic Subhalos and the Origin of r-Process Elements."
"Physical conditions for the r-process I. radioactive energy sources of kilonovae." | CommonCrawl |
You are playing a game which consists of $n$ rooms. Each room has a teleporter to some other room (or the room itself).
You have to process $q$ queries of the form: You are now in room $a$ and want to reach room $b$. What is the minimum number of teleportations?
The first input line contains two integers $n$ and $q$: the number of rooms and queries. The rooms are numbered $1,2,\ldots,n$.
The second line contains $n$ integers $t_1,t_2,\ldots,t_n$: for each room, the destination of the teleporter in that room.
Finally, there are $q$ lines that describe the queries. Each line has two integers $a$ and $b$: you are now in room $a$ and want to reach room $b$.
For each query, print the minimum number of teleportations. If it is not possible to reach the destination, print $-1$. | CommonCrawl |
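For reference, here is a plain brute-force Python sketch of the queries (my own illustration). It simply follows teleporters from $a$; since each room has exactly one outgoing teleporter, revisiting a room before reaching $b$ means $b$ is unreachable. This is O(n) per query, so for large limits the intended solutions use something faster (for example binary lifting on the functional graph), but the sketch shows the required behaviour.

```python
import sys

def main() -> None:
    data = sys.stdin.read().split()
    n, q = int(data[0]), int(data[1])
    t = [0] + [int(v) for v in data[2:2 + n]]      # 1-indexed teleporter destinations
    idx = 2 + n
    out = []
    for _ in range(q):
        a, b = int(data[idx]), int(data[idx + 1])
        idx += 2
        steps, cur, seen = 0, a, set()
        while cur != b and cur not in seen:        # stop once we start cycling
            seen.add(cur)
            cur = t[cur]
            steps += 1
        out.append(str(steps) if cur == b else "-1")
    print("\n".join(out))

if __name__ == "__main__":
    main()
```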
I was studying interest rate term structures and I came across term structure models with (and without) drift.
From the equation above, $\lambda$ is the drift factor and $\lambda\,dt$ is the drift. The only explanation of drift I have is a confusing one, along the lines of: interest rates are moved in the future by some factor.
Can someone give me an explanation of drift? An example associated with it would be ideal. Thanks!
Consider short-rate dynamics of the general form $dr_t = (\alpha + \beta r_t)\,dt + \sigma r_t^{\gamma}\,dW_t$. These dynamics imply that the conditional mean and variance of changes in the short-term rate depend on the level of $r$.
In your case we have $\alpha = \lambda$ and $\beta=\gamma=0$, and the model simplifies to the one in Merton (1973). So $\lambda$ is just capturing the growth over time of the interest rate. If there were no uncertainty, interest rates would grow forever at that rate. Usually we don't see this in the data, which is why most models have a $\beta < 0$, which implies that interest rates are mean reverting.
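A quick way to see what the drift does is to simulate the simplified model $dr_t = \lambda\,dt + \sigma\,dW_t$ with an Euler scheme; the parameter values in this Python sketch are invented for illustration only.

```python
import numpy as np

lam, sigma, r0 = 0.002, 0.01, 0.03      # illustrative drift, volatility, initial short rate
dt, n_steps, n_paths = 1 / 252, 5 * 252, 3
rng = np.random.default_rng(42)

r = np.full(n_paths, r0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    r = r + lam * dt + sigma * dW        # Euler step: deterministic drift plus random shock

# With sigma = 0 every path would be exactly r0 + lam * t: the drift is the
# deterministic growth of the rate, which the noise then scatters around.
print(r)
```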
An ionic compound has a solubility of $1\ \mathrm M$ in water at $25\ ^\circ \mathrm C$ and its solubility increases as the temperature is raised. What are the signs of $\Delta H^\circ$ and $\Delta S^\circ$ for the dissolving process?
Since the solubility increases as the temperature is raised ($\Delta G^\circ$ becomes more negative), I know that $\Delta S^\circ$ is positive. However, I am unsure of how to determine the sign of $\Delta H^\circ$. The answer is $\Delta H^\circ > 0$.
Hypotheses: Usually the theorem we are trying to prove is of the form $$P_1\land\ldots\land P_n \Rightarrow Q.$$ The $P$'s are the hypotheses of the theorem. We can assume that the hypotheses are true, because if one of the $P_i$ is false, then the implication is true.
Known results: In addition to any stated hypotheses, it is always valid in a proof to write down a theorem that has already been established, or an unstated hypothesis (which is usually understood from context).
Definitions: If a term is defined by some formula it is always legitimate in a proof to replace the term by the formula or the formula by the term.
Tautology: If $P$ is a statement in a proof and $Q$ is logically equivalent to $P$, we can then write down $Q$.
Modus Ponens: If the formula $P$ has occurred in a proof and $P\Rightarrow Q$ is a theorem or an earlier statement in the proof, we can write down the formula $Q$. Modus ponens is used frequently, though sometimes in a disguised form; for example, most algebraic manipulations are examples of modus ponens.
Specialization: If we know "$\forall x\,P(x)$,'' then we can write down "$P(x_0)$'' whenever $x_0$ is a particular value. Similarly, if "$P(x_0)$'' has appeared in a proof, it is valid to continue with "$\exists x\,P(x)$''. Frequently, choosing a useful special case of a general proposition is the key step in an argument.
When you read or write a proof you should always be very clear exactly why each statement is valid. You should always be able to identify how it follows from earlier statements.
A direct proof is a sequence of statements which are either givens or deductions from previous statements, and whose last statement is the conclusion to be proved.
Variables: The proper use of variables in an argument is critical. Their improper use results in unclear and even incorrect arguments. Every variable in a proof has a quantifier associated with it, so there are two types of variables: those that are universally quantified and those that are existentially quantified. We may fail to mention explicitly how a variable is quantified when this information is clear from the context, but every variable has an associated quantifier.
When we introduce an existentially quantified variable, it is usually defined in terms of other things that have been introduced previously in the argument. In other words, it depends on the previously mentioned quantities in the proof.
Definition 2.1.1 We say the integer $n$ is even if there is an integer $k$ such that $n=2k$. We say $n$ is odd if there is a $k$ such that $n=2k+1$.
Example 2.1.2 If $n$ is even, so is $n^2$.
Assume $n$ is an even number ($n$ is a universally quantified variable which appears in the statement we are trying to prove). Because $n$ is even, $n=2k$ for some $k$ ($k$ is existentially quantified, defined in terms of $n$, which appears previously). Now $n^2=4k^2=2(2k^2)$ (these algebraic manipulations are examples of modus ponens). Let $j=2k^2$ ($j$ is existentially quantified, defined in terms of $k$); then $n^2=2j$, so $n^2$ is even (by definition).
Example 2.1.3 The sum of two odd numbers is even.
Assume $m$ and $n$ are odd numbers (introducing two universally quantified variables to stand for the quantities mentioned in the statement). Because $m$ and $n$ are odd there are integers $j$ and $k$ such that $m=2j+1$ and $n=2k+1$ (introducing existentially quantified variables, defined in terms of quantities already mentioned). Now $m+n=(2j+1)+(2k+1)=2(j+k+1)$ (modus ponens). Let $i=j+k+1$ (existentially quantified); then $m+n=2i$ is even (by definition).
In 1-4, write proofs for the given statements, inserting parenthetic remarks to explain the rationale behind each step (as in the examples).
Ex 2.1.1 The sum of two even numbers is even.
Ex 2.1.2 The sum of an even number and an odd number is odd.
Ex 2.1.3 The product of two odd numbers is odd.
Ex 2.1.4 The product of an even number and any other number is even.
Ex 2.1.6 Prove that $x$ is odd if and only if $|x|$ is odd.
Ex 2.1.7 If $x$ and $y$ are integers and $x^2+y^2$ is even, prove that $x+y$ is even. | CommonCrawl |
As seen in the last lesson, you can read voltages in from the Arduino. This lesson will show you how to read the real temperature in the room you're in. It uses a device called the LM35DZ. This device outputs a voltage of 0.01 Volts for every $1^\circ$C in temperature of its environment.
Let's use the LM35DZ to read in some temperatures (100 of them), and use the linear mathematical relationship between $^\circ F$ and $^\circ C$ to display the temperature in both Celsius and Fahrenheit.
Now you try. Can you figure out how to 1) convert the voltage into the temperature in $^\circ C$, then 2) convert that into $^\circ F$?
The LM35DZ is a remarkable device. It is essentially an electronic thermometer. If you apply power to it (5V on its leftmost pin, 0V on its rightmost pin), the voltage on the middle pin, multiplied by 100, gives the temperature in $^\circ$C.
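The conversion chain itself is just arithmetic. Here it is spelled out in a plain Python sketch (not Arduino code), with made-up voltage readings, in case you want to check your answer afterwards.

```python
# LM35DZ: 0.01 V per degree Celsius, so temperature in C = voltage * 100.
readings_v = [0.220, 0.225, 0.231]          # hypothetical voltages read from the middle pin
for v in readings_v:
    temp_c = v * 100.0                       # volts -> degrees Celsius
    temp_f = temp_c * 9.0 / 5.0 + 32.0       # Celsius -> Fahrenheit
    print(f"{v:.3f} V -> {temp_c:.1f} C -> {temp_f:.1f} F")
```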
This node exports several files from a terrain. The images are exported as gray levels, each pixel having a value between 0 and 1. The generated images have the resolution of the terrain as their size, and the gray levels and the scale of the heights (vertices with a 0 value will have their height set to the min value, and vertices with a 1 value will have their height set to the max value) determine the height of each point.
The advantage of exporting multiple files is to optimize the rendering times for very large terrains in an external engine where it is more practical to manage parts of the terrain in different files.
To add a node, right click in the Graph Editor and select Create Node > Export > Multi file export terrain.
File pattern: This is the formula used to name the files to export. The naming convention is important because the node aligns the terrain on a grid, where the first number is the X axis and the second one is the Y axis. Depending on the XY coordinates of the part of the terrain and the number of files to export, each exported file will be named according to its XY coordinates; for example, the top left part of a 2x2 terrain will have the name Group_0_0.png, the top right part Group_0_1.png, etc.
In File pattern in the parameters dialog, paste the path and add a file name, for example here we add "Group" and then _$x_$y, where $x represents the position of the part of the terrain on the X axis and $y represents the Y axis.
Range: Click User defined to set the Minimum height and Maximum height, or leave the default setting, Automatic.
Each file is loaded, and forms the corresponding part of the final terrain. | CommonCrawl |
I have a wav file which was obtained by playing 2 different wav files through 2 speakers and recording with a mic. If I want to recover one of the original signals using the resulting wav file and the other original wav file, is it possible to do this? I checked online and only found blind signal separation algorithms. Thank you in advance!
Assume the recording is $y(n) = (x_1 * h_1)(n) + (x_2 * h_2)(n)$, where $n$ is the sample (time) index, and $h_1$ and $h_2$ are the room impulse responses from the respective speakers to the mic. Assume you want to retrieve $x_2$ from $y$ with the knowledge of $x_1$. You can apply echo cancellation methods to remove the part in $y$ stemming from $x_1$. What you get is $(x_2*h_2)(n)$ plus some residual from $x_1$ due to imperfect echo cancellation. If you also want to get rid of the effect of $h_2$, you must resort to blind deconvolution methods, which is a lot harder (and you need to know or assume some things about the desired signal $x_2$).
$x_1(n)$ (one of the original signals).
And your problem is that you want to recover $x_2$? Then this is the same setup as this question, and certainly not a problem of source separation - since one of the signals is already known (up to a convolution). You can get an estimate of $h_1$, and recover $h_2 \star x_2$. And if you can assume that $h_1$ and $h_2$ are similar, you can try inverting $h_2$ and get a result closer to $x_2$.
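One common way to carry out the echo-cancellation step with the known reference $x_1$ is an adaptive FIR filter such as NLMS. The Python sketch below is only an illustration of that idea; the filter length, step size and function name are my own choices, and it recovers $(x_2*h_2)(n)$ only approximately.

```python
import numpy as np

def nlms_cancel(y, x1, n_taps=128, mu=0.5, eps=1e-8):
    """Adaptively estimate h1 from the reference x1 and subtract (x1*h1) from y."""
    w = np.zeros(n_taps)                 # running FIR estimate of h1
    e = np.zeros(len(y))                 # residual: ideally (x2*h2) plus leakage
    for n in range(len(y)):
        u = x1[max(0, n - n_taps + 1):n + 1][::-1]      # newest reference samples first
        u = np.pad(u, (0, n_taps - len(u)))
        e[n] = y[n] - w @ u              # cancel the current estimate of the echo
        w += mu * e[n] * u / (u @ u + eps)
    return e
```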
It would be a true BSS problem if you had no prior information about both $x_1$ and $x_2$, in which case it would have been intractable (unless some assumptions can be made about the way these sounds are produced - e.g. one of them is speech and the other is background music).
Abstract: We describe a wide class of boundary-value problems for which the application of elliptic theory can be reduced to elementary algebraic operations and which is characterized by the following polynomial property: the sesquilinear form corresponding to the problem degenerates only on some finite-dimensional linear space $\mathscr P$ of vector polynomials. Under this condition the boundary-value problem is elliptic, and its kernel and cokernel can be expressed in terms of $\mathscr P$. For domains with piecewise-smooth boundary or infinite ends (conic, cylindrical, or periodic), we also present fragments of asymptotic formulae for the solutions, give specific versions of general conditional theorems on the Fredholm property (in particular, by modifying the ordinary weighted norms), and compute the index of the operator corresponding to the boundary-value problem. The polynomial property is also helpful for asymptotic analysis of boundary-value problems in thin domains and junctions of such domains. Namely, simple manipulations with $\mathscr P$ permit one to find the size of the system obtained by dimension reduction as well as the orders of the differential operators occurring in that system and provide complete information on the boundary layer structure. The results are illustrated by examples from elasticity and hydromechanics. | CommonCrawl |
Abstract: We use the SDSS-Gaia catalogue to search for substructure in the stellar halo. The sample comprises 62\,133 halo stars with full phase space coordinates and extends out to heliocentric distances of $\sim 10$ kpc. As actions are conserved under slow changes of the potential, they permit identification of groups of stars with a common accretion history. We devise a method to identify halo substructures based on their clustering in action space, using metallicity as a secondary check. This is validated against smooth models and numerical constructed stellar halos from the Aquarius simulations. We identify 21 substructures in the SDSS-Gaia catalogue, including 7 high significance, high energy and retrograde ones.
We investigate whether the retrograde substructures may be material stripped off the atypical globular cluster $\omega$~Centauri. Using a simple model of the accretion of the progenitor of the $\omega$~Centauri, we tentatively argue for the possible association of up to 5 of our new substructures (labelled Rg1, Rg3, Rg4, Rg6 and Rg7) with this event. This sets a minimum mass of $5 \times 10^8 M_\odot$ for the progenitor, so as to bring $\omega$~Centauri to its current location in action -- energy space. Our proposal can be tested by high resolution spectroscopy of the candidates to look for the unusual abundance patterns possessed by $\omega$~Centauri stars. | CommonCrawl |
How is this possible to do in TeX? I had a look whether this could be done with TikZ but could not find any comparable examples. I am convinced that in theory it should be possible to prepare a TikZ extension that does things like this rather simply, but at the moment I am looking for an already existing solution to accomplish such tasks. When I get more experienced with making such plots I might consider writing a basic package.
Is it possible to make such plots in LaTeX with current packages?
If yes, with TikZ? Could you give me an example how to start?
If not, what software would you recommend to prepare and include such kind of 3D graphics?
Please note that this is not the kind of question you sometimes get here that asks "Please do this for me!" I honestly tried solving this problem but did not get far at all. It would be really great to get an answer, a lot of people in my field would benefit from this.
(1) Wertz, James R. (2009). Orbit & Constellation Design & Management. New York: Springer.
This is a long answer since there are good tools for spherical geometry scattered all around, so I created a few sections addressing those tools.
I suggest to use \tdplotdrawarc. This is explained in the TikZ and PGF Manual. You need to define three angles $\alpha$, $\beta$ and $\gamma$ for the arc, then the radius, origin, initial and final angle. I include here an example with the angles used. With this example you can build new examples explaining other angle combinations.
% rotate circle to make it look better.
Here is a post which addresses drawing an equator when the north pole is given: A simple macro to speed up coding, draw an equator when north pole is known.
Sometimes it is better to stay away from thinking and trying to do 3D. So I am contradicting myself here with the advice of using tikz-3dplot. Think how to draw in 3D while thinking in 2D (that is, ellipses and arcs).
The next example is an improvement over an example shown here: Spherical triangles and great circles. The code is based on @Tarass' great insight. The example is shown here more to show the capabilities of TikZ and its use for other purposes. As I said, it is better in general to use \tdplotdrawarc.
In spherical geometry understanding where coordinates (a point) are and how to draw arcs is a fundamental issue.
There could be confusion because spherical coordinates for mathematicians and physicists use different symbols; the following link provides macros for conversion between spherical (azimuth, polar) and Cartesian coordinates and addresses conversions in terms of geographic (latitude, altitude) coordinates as well: spherical coordinates in 3d.
Finally, since TikZ does not seem to have tools to draw arcs given a center and a radius, I wrote a macro and posted it here.
The R package GeoMap will create spherical projections of the earth with the continent maps. I have not used it except to verify that it loads and builds a map. If you combine it with the package tikzDevice you will get TikZ code which could be modified. Be aware that it will be a large file due to the extensive use of points for plotting.
Once this is working you should be able to implement with Sweave so that all the code is contained within the LaTeX file.
I would consider this just a workaround until a package is built with pure TikZ.
Keywords: Discrete singularity convolution(DSC), sampling theory, regularized Shannon kernel(RSK), delta distribution, sequence of delta type, convergence rate.
Citation: LI YONGFENG (2003-10-29). Numerical methods for differential equations with distributional derivatives. ScholarBank@NUS Repository.
Abstract: A recently developed numerical method, the discrete singular convolution (DSC) method, for solving high order differential equations with distributional derivatives is presented in the different frameworks of distribution theory and sampling theory, respectively. In order to use this method to solve a class of differential equations with the delta distribution and its distributional derivatives, the classical approximations to the delta distribution and their convergence rates in Sobolev space $H^\alpha$ of negative order $\alpha$ are studied in detail. As an example, the model of an Euler-Bernoulli beam with jump discontinuities is used to test the efficiency of some delta sequences and the DSC method.
85 Is this an easter egg to Sherlock in Thor Ragnarok?
75 Why did no student correctly find a pair of $2\times 2$ matrices with the same determinant and trace that are not similar?
74 In The Matrix, why is Neo confusing left and right?
47 Is this an easter egg to Sherlock in Thor Ragnarok?
42 Is it important for the plot that Leia is Luke's sister?
41 How to tell my professor that I don't understand the class and that I want office hours? | CommonCrawl |
Let $K$ be a number field. If $X$ is a curve over $K$ with good reduction at a place $v$ of $K$, then the Jacobian of $X$ also has good reduction at $v$. This follows from the functoriality of the Jacobian.
The converse is not true, but I don't know of any examples.
Can one provide an example for all number fields $K$?
Let $E,E'$ be elliptic curves over the residue field. Then $E \times E'$ has good reduction but is not a Jacobian. However, any principally polarized abelian surface that reduces to $E \times E'$ but is not a product of elliptic curves is a Jacobian of some genus-$2$ curve $C$ that cannot have good reduction.
Explicitly (when the residue characteristic is odd), $C$ can have the form $y^2 = P(x)$ where $P$ is a sextic with roots $x_1,x_2,x_3$, $1/x'_1,1/x'_2,1/x'_3$ such that each $x_i,x'_i$ reduces to zero and each $x_i-x_j$ and $x'_i-x'_j$ ($i\neq j$) has valuation $1$. If I remember right, the Jacobian reduces to $E \times E'$ where $E: Y^2 = (X-\bar x_1) (X-\bar x_2) (X-\bar x_3)$ and $E': Y^2 = (X-\bar x'_1) (X-\bar x'_2) (X-\bar x'_3)$.
As I recall, I learned this construction from Joe Harris.
We examine a model developed by Prof. Charles Peskin with $N$ neurons ($N$ very large), each having a small number $K$ (e.g., $K=10$) states $0,1,\ldots,K-1$. Neurons are randomly excited and move up one state. But, critically, when a neuron reaches state $K$ it fires. Other neurons are then excited with independent probability $p$. This may (or may not!) set off a chain reaction with a large number of neurons firing. After the chain reaction, those neurons that have fired are reset to state $0$. The nature of the process depends on the relationship of $p$ to $N$ and $K$. There are strong connections to the Erdős-Rényi percolation discussed in the Monday Lecture. This is a ``work in progress" with many open questions. | CommonCrawl |
A graph $X$ consists of a set of vertices and a set of edges where each edge is associated with two vertices. The set of vertices is often denoted $V(X)$ and the set of edges is often denoted $E(X)$. We will only consider finite graphs that are connected - that is, the graph is one piece.
Computing the fundamental group of such graphs is relatively easy. We must first make some definitions first though.
Definition: Let $X$ be a connected graph. A Tree in $X$ is a subgraph $T$ of $X$ that contains no loops, i.e., $T$ is simply connected. A Maximal Tree in $X$ is a subgraph $T$ of $X$ such that if $T'$ is another tree containing $T$ then $T' = T$.
Theorem 1: Let $X$ be a path connected graph and let $T$ be a maximal tree in $X$. Then $\pi_1(X, x)$ is isomorphic to the free group generated by $|E(X)| - |E(T)|$ elements.
In the example above, the $3 \times 3$ grid $X$ has $|E(X)| = 24$ edges, and $|E(T)| = 15$ edges. So the fundamental group of $X$, $\pi_1(X)$, is $F_9$, the free group generated by $9$ elements.
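A quick computational check of this example, using the networkx library (here the $3 \times 3$ grid is taken to mean a 3-by-3 grid of squares, i.e. a graph on $4 \times 4$ vertices, which is what gives $|E(X)| = 24$):

```python
import networkx as nx

X = nx.grid_2d_graph(4, 4)                        # 16 vertices, 24 edges
T = nx.minimum_spanning_tree(X)                   # any spanning tree is a maximal tree
rank = X.number_of_edges() - T.number_of_edges()  # |E(X)| - |E(T)| = 24 - 15 = 9
print(X.number_of_edges(), T.number_of_edges(), rank)
```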
Prove that $x^3 + x^2 = 1$ has no rational solutions?
I would show that $x = \sqrt1$, which is irrational but then do I have to show more? $x+1=1$ which gives me $x=0$ and since $x$ cannot equal to $0$ as this would make the statement false (everything is $0$). Is it enough to simply state this falsity or is there another way to express it?
By the rational root theorem, a rational root would have to be $x=1$ or $x=-1$, but neither works.
The solution satisfying the following equation $$ A \times B =0 $$ is $A=0$ (for any $B$) or $B=0$ (for any $A$).
You cannot apply the same pattern for the case in which the right hand side is not zero. Why? For example, $$ A\times B = 2 $$ If you choose $A=2$ then $B$ must be $1$ (rather than for any $B$). If you choose $B=2$ then $A$ must be $1$ (rather than for any $A$).
If you want to find the solution of $$x^2(x+1) =1$$ you have to make sure the right hand side equals to 0.
To prove the equation has no rational solution see this comment.
Represents a "user group membership" – a specific instance of a user belonging to a group.
For example, the fact that user Mary belongs to the sysop group is a user group membership.
The class encapsulates rows in the user_groups table. The logic is low-level and doesn't run any hooks. Often, you will want to call User::addGroup() or User::removeGroup() instead.
Definition at line 37 of file UserGroupMembership.php.
Definition at line 52 of file UserGroupMembership.php.
Delete the row from the user_groups table.
Definition at line 119 of file UserGroupMembership.php.
References Wikimedia\Rdbms\IDatabase\affectedRows(), DB_MASTER, Wikimedia\Rdbms\IDatabase\delete(), Wikimedia\Rdbms\IDatabase\insert(), null, wfGetDB(), and wfReadOnly().
Definition at line 218 of file UserGroupMembership.php.
References $group, $userId, null, and Wikimedia\Rdbms\IDatabase\timestamp().
Definition at line 75 of file UserGroupMembership.php.
Definition at line 68 of file UserGroupMembership.php.
Gets the localized name for a member of a group, if it exists.
For example, "administrator" or "bureaucrat"
Definition at line 445 of file UserGroupMembership.php.
References $group, $username, and wfMessage().
Referenced by RightsLogFormatter\getMessageParameters(), and UserrightsPage\groupCheckboxes().
Gets the localized friendly name for a group, if it exists.
For example, "Administrators" or "Bureaucrats"
Definition at line 432 of file UserGroupMembership.php.
Referenced by SpecialActiveUsers\buildForm(), SpecialPasswordPolicies\execute(), SpecialListGroupRights\execute(), UsersPager\getAllGroups(), User\makeGroupLinkHTML(), and User\makeGroupLinkWiki().
Gets the title of a page describing a particular user group.
When the name of the group appears in the UI, it can link to this page.
Definition at line 457 of file UserGroupMembership.php.
References $title, Title\newFromText(), and wfMessage().
Referenced by UsersPager\doBatchLookups(), SpecialPasswordPolicies\execute(), SpecialListGroupRights\execute(), User\getGroupPage(), User\makeGroupLinkHTML(), and User\makeGroupLinkWiki().
Gets a link for a user group, possibly including the expiry date if relevant.
string | null $userName If you want to use the group member message ("administrator"), pass the name of the user who belongs to the group; it is used for GENDER of the group member message. If you instead want the group name message ("Administrators"), omit this parameter.
Definition at line 374 of file UserGroupMembership.php.
References IContextSource\getLanguage(), IContextSource\getUser(), Linker\link(), MessageLocalizer\msg(), null, and Message\rawParam().
Referenced by PermissionsError\__construct(), UsersPager\buildGroupLink(), SpecialListGroupRights\formatPermissions(), User\newFatalPermissionDeniedStatus(), MediaWiki\Preferences\DefaultPreferencesFactory\profilePreferences(), and UserrightsPage\showEditUserGroupsForm().
Returns a UserGroupMembership object that pertains to the given user and group, or false if the user does not belong to that group (or the assignment has expired).
Definition at line 341 of file UserGroupMembership.php.
Referenced by UserRightsProxy\removeGroup(), User\removeGroup(), and UserGroupMembershipTest\testGetMembership().
Returns UserGroupMembership objects for all the groups a user currently belongs to.
Definition at line 309 of file UserGroupMembership.php.
References $res, as, DB_REPLICA, and wfGetDB().
Referenced by UserRightsProxy\getGroupMemberships(), User\loadGroups(), and UserGroupMembershipTest\testGetMembershipsForUser().
Definition at line 61 of file UserGroupMembership.php.
Definition at line 79 of file UserGroupMembership.php.
Insert a user right membership into the database.
When $allowUpdate is false, the function fails if there is a conflicting membership entry (same user and group) already in the table.
Definition at line 156 of file UserGroupMembership.php.
References Wikimedia\Rdbms\IDatabase\addQuotes(), Wikimedia\Rdbms\IDatabase\affectedRows(), DB_MASTER, getDatabaseArray(), Wikimedia\Rdbms\IDatabase\insert(), null, Wikimedia\Rdbms\IDatabase\selectRow(), JobQueueGroup\singleton(), Wikimedia\Rdbms\IDatabase\timestamp(), Wikimedia\Rdbms\IDatabase\update(), and wfGetDB().
Definition at line 230 of file UserGroupMembership.php.
Creates a new UserGroupMembership object from a database row.
Definition at line 93 of file UserGroupMembership.php.
Referenced by UsersPager\doBatchLookups(), and User\loadFromRow().
Purge expired memberships from the user_groups table.
Definition at line 243 of file UserGroupMembership.php.
References $res, $services, Wikimedia\Rdbms\IDatabase\addQuotes(), as, DB_MASTER, Wikimedia\Rdbms\IDatabase\delete(), Wikimedia\Rdbms\IDatabase\endAtomic(), Wikimedia\Rdbms\IDatabase\getDomainID(), Wikimedia\Rdbms\IDatabase\getScopedLockAndFlush(), Wikimedia\Rdbms\IDatabase\insert(), LIST_AND, LIST_OR, Wikimedia\Rdbms\IDatabase\makeList(), Wikimedia\Rdbms\IDatabase\select(), Wikimedia\Rdbms\IDatabase\startAtomic(), and Wikimedia\Rdbms\IDatabase\timestamp().
Referenced by PurgeExpiredUserrights\execute(), and UserGroupExpiryJob\run().
Returns the list of user_groups fields that should be selected to create a new user group membership.
Definition at line 104 of file UserGroupMembership.php.
Referenced by UsersPager\doBatchLookups(), and ApiQueryUsers\execute().
Timestamp of expiry in TS_MW format, or null if no expiry.
Definition at line 45 of file UserGroupMembership.php.
Referenced by getExpiry(), and isExpired().
Definition at line 42 of file UserGroupMembership.php.
Referenced by __construct(), getDatabaseArray(), getGroup(), getGroupMemberName(), and getGroupName().
The ID of the user who belongs to the group.
Definition at line 39 of file UserGroupMembership.php.
Referenced by getDatabaseArray(), and getUserId(). | CommonCrawl |
Let $S$ be an integral 1-dimensional scheme with function field $K$.
Let $E$ be an elliptic curve over $K$. The torsion of $E$ over $K$ is not necessarily finite. As an example consider an elliptic curve over $\mathbf C(t)$ (for instance a constant curve).
Now, assume the residue field of each closed point of $S$ to be finite.
Is the torsion of $E(K)$ finite?
Note that I'm not assuming $S$ to be noetherian.
Note that we may assume $S$ to be affine. To answer the above question positively, it would suffice to show the torsion embeds into the rational points of a special (not geometric) fibre of $E$.
I have a vague idea that this might be wrong.
First, it seems to me that you can even assume that $S$ is the spectrum of a local ring. Then, if $p$ is the residual characteristic, the reduction map will in general not be injective on $p$-torsion. So, if I had to produce counterexamples to this, I would start with an $E$ over $\mathbb Z_p$ with supersingular reduction, and then base change $E$ to some big, totally ramified extension of $\mathbb Z_p$ where lots of $p$-power torsion points of $E$ are defined.
Let $P(x)$ be a polynomial of degree $n > 1$ with integer coefficients and let $k$ be a positive integer. Consider the polynomial $Q(x) = P(P(\ldots P(P(x)) \ldots ))$, where $P$ occurs $k$ times. Prove that there are at most $n$ integers $t$ such that $Q(t) = t$.
The source code and installation instructions are available at https://github.com/sebp/scikit-survival.
The hyper-parameter $\alpha > 0$ determines the amount of regularization to apply: a smaller value increases the amount of regularization and a higher value reduces the amount of regularization. The hyper-parameter $r \in [0; 1]$ determines the trade-off between the ranking objective and the regression objective. If $r = 1$ it reduces to the ranking objective, and if $r = 0$ to the regression objective. If the regression objective is used, it is advised to log-transform the survival/censoring time first.
In this example, I'm going to use the ranking objective ($r = 1$) and grid search to determine the best setting for the hyper-parameter $\alpha$.
The class sksurv.svm.FastSurvivalSVM adheres to the interfaces used in scikit-learn and thus it is possible to combine it with auxiliary classes and functions from scikit-learn. Here, I'm going to use GridSearchCV to determine which set of hyper-parameters performs best for the Veteran's Lung Cancer data. Since we require an event indicator $\delta_i$, which is boolean, and the survival/censoring time $y_i$ for training, we have to create a structured array that contains both pieces of information.
But first, we have to import the classes we are going to use.
Next, load data of the Veteran's Administration Lung Cancer Trial from disk and convert it to numeric values. The data consists of 137 patients and 6 features. The primary outcome measure was death (Status, Survival_in_days). The original data can be retrieved from http://lib.stat.cmu.edu/datasets/veteran.
Note that it does not matter how you name the fields corresponding to the event indicator and time, as long as the event indicator comes first.
Now, we are essentially ready to start training, but before let's determine what the amount of censoring for this data is and plot the survival/censoring times.
First, we need to create an initial model with default parameters that is subsequently used in the grid search. We are going to use a Red-Black tree to speed up optimization.
Next, we define a function for evaluating the performance of models during grid search. We use Harrell's concordance index.
The last part of the setup specifies the set of parameters we want to try and how many repetitions of training and testing we want to perform for each parameter setting. In the end, the parameters that on average performed best across all test sets (200 in this case) are selected. GridSearchCV can leverage multiple cores by evaluating multiple parameter settings concurrently (I use 4 jobs in this example).
Finally, start the hyper-parameter search. This can take a while since a total of 13 * 200 = 2600 fits have to be evaluated.
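The original post shows the code in separate cells, which are not reproduced here. The sketch below is a condensed illustration of how the pieces described above fit together, written against the public scikit-survival API; details such as the encoding helper, split size, tolerance and iteration limit are my own assumptions and may differ from the original.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, ShuffleSplit
from sksurv.column import encode_categorical
from sksurv.datasets import load_veterans_lung_cancer
from sksurv.metrics import concordance_index_censored
from sksurv.svm import FastSurvivalSVM

# y is a structured array with fields ("Status", "Survival_in_days")
data_x, y = load_veterans_lung_cancer()
x = encode_categorical(data_x)                 # one-hot encode the categorical features

estimator = FastSurvivalSVM(optimizer="rbtree", max_iter=1000, tol=1e-6, random_state=0)

def score_survival_model(model, X, y):
    prediction = model.predict(X)
    result = concordance_index_censored(y["Status"], y["Survival_in_days"], prediction)
    return result[0]                           # Harrell's concordance index

param_grid = {"alpha": 2.0 ** np.arange(-12, 13, 2)}            # 13 settings
cv = ShuffleSplit(n_splits=200, train_size=0.5, random_state=0)  # 200 random splits
gcv = GridSearchCV(estimator, param_grid, scoring=score_survival_model,
                   n_jobs=4, refit=False, cv=cv)
gcv = gcv.fit(x, y)
print(gcv.best_score_, gcv.best_params_)
```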
Let's check what is the best average performance across 200 random train/test splits we got and the corresponding hyper-parameters.
Finally, we retrieve all 200 test scores for each parameter setting and visualize their distribution by box plots.
As kernel we are going to use the clinical kernel, because it distinguishes between continuous, ordinal, and nominal attributes.
To use GridSearchCV with a custom kernel, we need to pre-compute the squared kernel matrix and pass it to GridSearchCV.fit later. It would also be possible to construct FastKernelSurvivalSVM with kernel="rbf" (or any other built-in kernel), which does not require pre-computing the kernel matrix.
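Continuing the sketch above (it reuses data_x, y, param_grid, cv and score_survival_model), a precomputed clinical-kernel version might look roughly like this; again, the exact settings are assumptions.

```python
from sklearn.model_selection import GridSearchCV
from sksurv.kernels import clinical_kernel
from sksurv.svm import FastKernelSurvivalSVM

kernel_matrix = clinical_kernel(data_x)        # uses the original, un-encoded features
kssvm = FastKernelSurvivalSVM(optimizer="rbtree", kernel="precomputed", random_state=0)
kgcv = GridSearchCV(kssvm, param_grid, scoring=score_survival_model,
                    n_jobs=4, refit=False, cv=cv)
kgcv = kgcv.fit(kernel_matrix, y)
print(kgcv.best_score_, kgcv.best_params_)
```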
Now, print the best average concordance index and the corresponding parameters.
Finally, we visualize the distribution of test scores obtained via cross-validation. | CommonCrawl |
Abstract : This paper is concerned with the study of the existence/non-existence of the discrete spectrum of the Laplace operator on a domain of $\mathbb R ^3$ which consists of a twisted tube. This operator is defined by means of mixed boundary conditions. Here we impose Neumann boundary conditions on a bounded open subset of the boundary of the domain (the Neumann window) and Dirichlet boundary conditions elsewhere.
In this post, I plan to introduce a few new special functions that appear in number theory and some infinite series identities that they satisfy. In particular, I will show some links between these number-theoretical functions and the Riemann Zeta function, and introduce a more general type of series, the Dirichlet Series, which is closely tied to the Zeta function. Finally, at the end, I will calculate a couple of series that require cumulative knowledge of Dirichlet Series and the Zeta function.
I would now like to define a couple of number-theoretical functions that I will use later in the post. First is the divisor function $\sigma_\alpha (n)$, defined as the sum of the $\alpha$th powers of the divisors of $n$ (including $1$ and $n$). Defined using mathematical language, this is $\sigma_\alpha(n)=\sum_{d\mid n} d^\alpha$. This function is multiplicative, meaning that for any coprime positive integers $m$ and $n$, the function satisfies $\sigma_\alpha(mn)=\sigma_\alpha(m)\sigma_\alpha(n)$. The special case $\sigma_0 (n)$ is sometimes written simply as $d(n)$, and is just the number of divisors of $n$.
Next is Euler's totient function $\phi(n)$, defined as the number of positive integers less than $n$ that are coprime to $n$ (including $1$). It satisfies a few interesting identities that we will use, including the identity $\sum_{d\mid n}\phi(d)=n$ for any positive integer $n$. The totient function is also multiplicative.
The function $\Omega(n)$ counts the number of prime factors of $n$ with multiplicity, and the function $\omega(n)$ counts the number of prime factors of $n$ without multiplicity. The first function satisfies $\Omega(mn)=\Omega(m)+\Omega(n)$ for any positive integers $m$ and $n$, and the second satisfies $\omega(mn)=\omega(m)+\omega(n)$ for coprime positive integers $m$ and $n$. The Liouville function $\lambda(n)$ is defined as $\lambda(n)=(-1)^{\Omega(n)}$. It is also multiplicative.
Many of these properties are pretty easy to prove (except for the proof that $\phi(n)$ is multiplicative, which gave me a bit of trouble), so I won't prove them, and I'll dive right in to the infinite series.
This follows directly from the previously described property of the Totient function: Now it's time for some more interesting Dirichlet series.
First recall the following elementary property of infinite series: for absolutely convergent series, $\left(\sum_n a_n\right)\left(\sum_m b_m\right)=\sum_{n,m} a_n b_m$, that is, the product can be expanded term by term. This can be used to prove a couple very interesting and useful properties regarding number theoretical series. I will begin with a simple strategy for evaluating series of this type. This strategy is limited, so I will only do a few series, but I will later introduce a much more effective strategy that can be used for all of them.
Suppose $f(n)$ is some function, and $g(n)$ is the function defined as $g(n)=\sum_{d\mid n} f(d)$. Then consider the following: $\zeta(s)D(f(n),s)=\left(\sum_{n=1}^\infty \frac{1}{n^s}\right)\left(\sum_{m=1}^\infty \frac{f(m)}{m^s}\right)=\sum_{k=1}^\infty \frac{1}{k^s}\sum_{d\mid k}f(d)=D(g(n),s)$. We can use this to calculate the Dirichlet series of the Euler Totient function mentioned earlier, using this property that I also stated earlier: $\sum_{d\mid n}\phi(d)=n$. Letting $f(n)=\phi(n)$ and $g(n)=n$, we have $\zeta(s)D(\phi(n),s)=D(n,s)=\zeta(s-1)$, or $D(\phi(n),s)=\frac{\zeta(s-1)}{\zeta(s)}$. This can also be used to calculate the Dirichlet series of the Mobius function, since $\sum_{d\mid n}\mu(d)$ is equal to $1$ if $n=1$ and $0$ otherwise. This allows us to say that $\zeta(s)D(\mu(n),s)=1$, that is, $D(\mu(n),s)=\frac{1}{\zeta(s)}$. I'll do one more example. Since $\sigma_a$ is defined as $\sigma_a(n)=\sum_{d\mid n}d^a$, we have that $\zeta(s)D(n^a,s)=D(\sigma_a(n),s)$; of course, the LHS is $\zeta(s)\zeta(s-a)$, so we have the result $D(\sigma_a(n),s)=\zeta(s)\zeta(s-a)$. This strategy can be used to a greater extent - for example, it can be used to calculate generally the value of $D(\sigma_a(n),s)$ for any $a$. However, for more difficult Dirichlet series like $D(d(n^2),s)$ or $D(d^2(n),s)$, it is advisable to use the following technique.
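As a quick numerical sanity check of the totient result (my own illustration; a truncated sum only):

```python
import mpmath
from sympy import totient

s = 3
lhs = sum(int(totient(n)) / mpmath.mpf(n) ** s for n in range(1, 20001))  # truncated D(phi, s)
rhs = mpmath.zeta(s - 1) / mpmath.zeta(s)                                 # zeta(2) / zeta(3)
print(lhs, rhs)   # the two values should agree closely
```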
Recall the property of infinite series mentioned earlier. I will now use it to develop a much more useful strategy, especially for series with multiplicative summands. Suppose that the function $f(n)$ is multiplicative and that we are considering the infinite series $D(f(n),s)=\sum_{n=1}^\infty \frac{f(n)}{n^s}$. Since every positive integer has a unique prime factorization, this can be written as follows, where $p_i$ is the ith prime number: $\sum_{k_1,k_2,k_3,\ldots\ge 0} \frac{f(p_1^{k_1}p_2^{k_2}p_3^{k_3}\cdots)}{(p_1^{k_1}p_2^{k_2}p_3^{k_3}\cdots)^s}$. Or, since $f$ is multiplicative, $\sum_{k_1,k_2,k_3,\ldots\ge 0} \frac{f(p_1^{k_1})f(p_2^{k_2})f(p_3^{k_3})\cdots}{p_1^{k_1 s}p_2^{k_2 s}p_3^{k_3 s}\cdots}$. Using the property of series mentioned earlier, this is equal to $\prod_{i=1}^\infty \sum_{k=0}^\infty \frac{f(p_i^k)}{p_i^{ks}}$, or, rearranging a bit, $\prod_{i=1}^\infty \left(f(1)+\frac{f(p_i)}{p_i^{s}}+\frac{f(p_i^2)}{p_i^{2s}}+\cdots\right)$. Since $f(1)=1$ for every multiplicative function $f$, this is equal to $\prod_{i=1}^\infty \left(1+\frac{f(p_i)}{p_i^{s}}+\frac{f(p_i^2)}{p_i^{2s}}+\cdots\right)$. Thus, for any multiplicative function $f$, as long as the series converges, we have $D(f(n),s)=\prod_{p\text{ prime}} \left(1+\sum_{k=1}^\infty \frac{f(p^k)}{p^{ks}}\right)$. As a corollary of this, we have the famous product representation of the Riemann Zeta function: $\zeta(s)=\prod_{p\text{ prime}}\frac{1}{1-p^{-s}}$. However, we can do some other cool stuff with it. For example, consider the infinite series $D(d(n),s)=\sum_{n=1}^\infty \frac{d(n)}{n^s}$. Since $d(n)$ is multiplicative, and since $d(p^k)=k+1$ for any prime $p$ and nonnegative integer $k$, we have the following: $D(d(n),s)=\prod_{p\text{ prime}}\sum_{k=0}^\infty \frac{k+1}{p^{ks}}=\prod_{p\text{ prime}}\frac{1}{(1-p^{-s})^2}=\zeta(s)^2$. I won't put this in a box, because we've already derived a more general Dirichlet series for $\sigma_a$, but it is a good example of how to use this technique. Though this technique can be applied to more exotic series (as I will show in a moment), the earlier technique provides a much more elegant derivation of $D(\sigma_a(n),s)$, whereas this one gets very ugly with algebra.
Interestingly, these last two Dirichlet series were reciprocals of each other. | CommonCrawl |
The annual ring pattern of a log end face is related to the quality of the wood. We propose a method for computing the number of annual rings on a log end face depicted in sawmill production. The method is based on the grey-weighted polar distance transform and registration of detected rings from two different directions. The method is developed and evaluated on noisy images captured in on-line sawmill production at a Swedish sawmill during 2008, using an industrial colour camera. We have also evaluated the method using synthetic data with different ring widths, ring eccentricity, and noise levels.
Norell, K. (2009). Creating synthetic log end face images. In: P. Zinterhof; S. Loncaric; A. Uhl; A.Carini (Ed.), Proceedings of 6th International Symposium on Image and Signal Processing and Analysis, 2009.: ISPA 2009.. Paper presented at 6th International Symposium on Image and Signal Processing and Analysis (pp. 353-358).
In this paper we present the design and creation of synthetic images of log end faces. The images are constructed to resemble images of Scots pine taken in on-line sawmill production. Wood features such as knots, heartwood, and annual rings, as well as the sawing procedure, storage, and imaging, including camera position, are simulated. A dataset of 100 images is provided, together with code for generating new synthetic data.
Bengtsson, E., Norell, K., Nyström, I., Strand, R. & Wadelius, L. (2008). Annual Report 2007.
Two related methods for automatic estimation of the pith position, i.e., the centre of the annual rings, in wood log end face images are presented. We use images that depict untreated log end faces that are deliberately chosen to include difficulties such as rot, non-circular shape, uncentered pith and dirt. The images are taken with a regular digital camera in sawmill environments. Both presented methods use local orientation and Hough transform to detect the pith position, but two different ways to compute the local orientation are used. The results are promising for both methods. At least one of the methods is fast enough to use on-line at a sawmill.
Binary mathematical morphology can be computed by thresholding a distance transform, provided that the distance transform is a metric. Here we show that the polar distance transform is a metric and use it for morphological operations. The polar distance transform varies with the spatial coordinates of the image, resulting in spatially-variant morphology. In this distance transform each pixel is related to an image origin. We prefer angular propagation over radial, thus we construct structuring elements that are elongated in the angular direction, which is useful when circular segments are handled. We show an example where segments of annual rings on a log end face are connected using mathematical morphology based on the polar distance transform.
Norell, K., Lindblad, J. & Svensson, S. (2008). The polar distance transform. In: Proceedings SSBA 2008, Symposium on image analysis, Lund.
Norell, K., Lindblad, J. & Svensson, S. (2007). Grey Weighted Polar Distance Transform for Outlining Circular and Approximately Circular Objects. In: 14th International Conference on Image Analysis and Processing: ICIAP 2007 (pp. 647-652).
We introduce the polar distance transform and the grey weighted polar distance transform for computation of minimum cost paths preferring circular shape, as well as give algorithms for implementations in a digital setting. An alternative to the polar distance transform is to transform the image to polar coordinates, and then apply a Cartesian distance transform. By using the polar distance transform, re-sampling of the image and interpolation of new pixel values are avoided. We also handle the case of grey weighted distance transform in a $5\times 5$ neighbourhood, which, to our knowledge, is new. Initial results of using the grey weighted polar distance transform to outline annual rings in images of log end faces are presented. | CommonCrawl |
1) Bombieri-Vinogradov theorem. This theorem, which is believed (according to Jean-Marie De Koninck and Florian Luca's book) to be the reason for Bombieri's Fields Medal in 1974, asserts basically that the Generalized Riemann Hypothesis is true 'on average' over an impressive range of primes. The exact statement, however, is likely quite obtuse to non-number theorists. That said, Bombieri-Vinogradov has an impressive list of consequences, including the most recent results on bounded gaps between primes (due to Maynard).
2) Heath-Brown's 'Theorem 14'. Proved by Heath-Brown in his 2002 paper "The density of rational points on curves and surfaces", this theorem generalized the Bombieri-Pila determinant method to the $p$-adic setting. Its statement is long and difficult to understand at first glance, but it has enormous consequences including (ultimately shown by Salberger) the so-called dimension growth conjecture. It also provided uniform estimates for curves (the best result on this is a preprint due to Miguel Walsh Edit: I just found out that Walsh's paper has now appeared in print and can be found here: http://imrn.oxfordjournals.org/content/early/2014/06/29/imrn.rnu103.refs) and surfaces. It has consequences for concrete diophantine problems, including power-free values of polynomials and power-free values of $f(p)$ where $p$ ranges over the primes only (previous error bounds only provided $\log$ power savings, which are insufficient for this case).
So roughly speaking a workhorse theorem is one where the statement of the theorem is neither intuitive nor easy to understand, its proof is difficult and perhaps not very enlightening, but nonetheless it has extraordinary consequences and can be used to prove results which are much easier to understand or seemingly unrelated.
As others have noted, this sort of thing is commonplace in analysis. The best results often flow directly from the strongest available estimates, and the strongest estimates are often complicated and inaccessible to the non-expert.
In more algebraic areas, you may want to look for famous results that people call "lemmas" rather than "theorems." People tend to call a result a "lemma" if it doesn't look like something you'd be interested in for its own sake, but is nevertheless useful for proving other things of interest. Now, if you are only interested in results that are deep or difficult, then not all lemmas will qualify, since some are very simple (Schur's lemma, Zorn's lemma, Yoneda's lemma) and others are non-trivial but not too difficult (Nakayama's lemma, Hensel's lemma, Sperner's lemma). However, there do exist "high-powered" examples such as the fundamental lemma or the Szemerédi regularity lemma.
Two workhorses of geometric measure theory are the covering theorems by Vitali and Besicovitch. When I first encountered them they both seemed quite involved to state and their use was not obvious. Also they have quite lengthy proofs. But once you see them in action (e.g. proving Lebesgue's differentiation theorem) you learn to appreciate them.
At first sight, most of these seem quite pointless and unnatural. It's not easy for a beginner to see why controlling a function by its derivatives or a small gain in integrability can be important. But much of PDE theory relies on these little inequalities.
I think that two good examples are the Cartan-Kähler Existence Theorem and the Cartan-Kuranishi Prolongation Theorem.
The first theorem gives sufficient (but quite subtle) conditions for a system of (real-analytic) PDE (possibly overdetermined or with degenerate symbol) to be locally solvable and describes the 'generality' of the generic real-analytic solution. Its simplest case (which is still not trivial to prove) is the Cauchy-Kowalewskaya Theorem.
The second theorem says that, under certain technical conditions that are not easy to state without a lot of preparation and definitions (but that are almost always satisfied in practice), a specific algorithm for replacing a given system of real-analytic PDE by an equivalent system of PDE (i.e., one that has the same real-analytic solutions) will, after a finite number of applications, yield either a system that satisfies the sufficient conditions of the Cartan-Kähler Theorem or a system that is formally incompatible (and hence has no real-analytic solutions).
All the many theorem variants establishing lower bounds for linear forms in logarithms of algebraic numbers, à la Baker.
The statements look simple only when phrased as "Then there is an effectively computable constant such that…": fully explicit forms are needed when you really want to sit down, tools in hand, and solve individual diophantine equations.
The proofs involve an awful lot of technical bookkeeping, although they ultimately boil down to the miraculous and elementary fact that there is no rational integer strictly between $0$ and $1$.
I thought of the Jacobson density theorem in ring theory. At least for someone who does not do ring theory day in, day out, the way it is usually stated provokes a feeling of "M-mh. So what?" to which the ring theorist might reply that it implies the Artin-Wedderburn theorem and a (maybe unexpected) description of primitive rings. Whether its proofs are "technically challenging" is a matter of perspective; although the proofs I know are kind of easy to follow, I would not think that I could have come up with one (or with the theorem's statement, for that matter).
I don't know if technically challenging can be interpreted as finding the right trick (which can be quite difficult) to prove it. If so, then the Hahn-Banach theorem should be included. While the real version has a natural proof, the complex version involves a trick that is not obvious. According to what I remember (probably apocryphal), it took Banach two years to obtain the complex version after he had proved the real version.
And of course, H-B is used all the time in functional analysis.
I think it is reasonable to view the h-cobordism theorem (and its relative, the s-cobordism theorem) as a workhorse of differential topology. The statement isn't actually that hard - it simply gives a natural condition under which a cobordism between two high dimensional manifolds is homotopically trivial - but the proof is quite difficult (it implies the high dimensional Poincare conjecture and contributed greatly to Smale's Fields medal). And the theorem is at the very heart of the surgery exact sequence, one of the most important and powerful tools in high dimensional topology.
The theorem of Hörmander on $L^2$ estimates for the solution of the $\bar \partial$ operator and its variants.
At first glance the role of the plurisubharmonic weights involved in the estimates is not clear and the proof given in Hörmander's book using Hilbert space techniques is not very enlightening. However the Theorem (and its variants) has a wide range of applications, from complex analysis and geometry (for instance the embedding theorem for Stein manifolds and Skoda's results on the local algebra of holomorphic functions), to number theory (Bombieri's Theorem on algebraic values of meromorphic maps).
A very nice short historical overview of the Theorem can be found here.
I myself would place Kodaira's vanishing theorem high on such a list, at least to algebraic geometers.
The Blakers-Massey excision theorem in algebraic topology. In its classical formulation it says that a certain map of pairs induces an isomorphism in relative homotopy groups in a certain range of dimensions. But it underlies a great many of the most important results in the subject, because it allows you to apply target-type techniques to domains and vice versa.
The Mayer-Vietoris long exact sequence in cohomology for a pair of spaces, and the Leray-Serre spectral sequence for cohomology of a fiber bundle (there are some other spectral sequences one could mention here). They are the key tools to compute cohomology of various spaces, have had huge amount of applications in concrete situations. Constructions of spectral sequences are particularly technical.
Resolution of Singularities. Nowadays, 'simple' proofs are available. The theorem is a huge work horse.
This was already mentioned, but the Yoneda lemma (and its dual, as well as extensions to enriched and higher categories) is perhaps one of the most important theorems of category theory. There are proofs of serious theorems which can be boiled down to repeated different uses of Yoneda. It could probably be seen as the categorical analogue of Cauchy-Schwarz.
Lawvere refers to the lemma as the Cayley-Dedekind-Grothendieck-Yoneda Lemma, giving an idea of the scope of its uses and users. The question What is Yoneda's Lemma a generalization of? and its answers give an idea of results that follow from Yoneda.
Urs Schreiber could no doubt recall some, I remember him emphasising this fact once.
In homotopy theory, a typical workhorse theorem is of the form 'there is a model structure on XY with such and such properties'. While there are some standard techniques producing them, most examples need some extra effort. Moreover, even the very definition of a model structure will appear obscure to non-experts and the existence of one does not appear to be very interesting in itself.
To be a bit more concrete: One of the most often used examples of a model structure is the Kan-Quillen model structure on simplicial sets, for which there is still no entirely easy proof. It is the basis of any modern treatment of the homotopy theory of simplicial sets. This model structure is also the basis of countless other model structures (like Cauchy-Schwarz produces countless other estimates). Or the various model structures on categories of spectra (S-modules, symmetric spectra, orthogonal spectra), which form the basis of modern stable homotopy theory (unless you want to use $\infty$-categories, where one often uses other work-horse theorems like straightening-unstraightening!).
physically accurate lead to more precise flow simulations especially over long time intervals.
more physically accurate methods for approximating the NSE, MHD, and related systems.
and present numerical experiments that demonstrate the advantages of the scheme.
including proofs for conservation laws, unconditional stability and optimal convergence.
associated penalty method, and the method arising from using grad-div stabilized Taylor-Hood elements.
Finally, we give numerical examples which verify the theory and demonstrate the effectiveness of the scheme.
results are given that verify the theory.
Chapter 7 extends Leray-$\alpha$-deconvolution modeling to the incompressible MHD.
both in its filtering radius and order of deconvolution.
preserves energy and cross-helicity, and optimally converges to the MHD solution.
benchmark problems of channel flow over a step and the Orszag-Tang vortex problem.
Wilson, Nicholas, "Physicic-based algorithms and divergence free finite elements for coupled flow problems" (2012). All Dissertations. 967. | CommonCrawl |
Abstract: The reductions of the free geodesic motion on a non-compact simple Lie group G based on the $G_+ \times G_+$ symmetry given by left- and right multiplications for a maximal compact subgroup $G_+ \subset G$ are investigated. At generic values of the momentum map this leads to (new) spin Calogero type models. At some special values the `spin' degrees of freedom are absent and we obtain the standard $BC_n$ Sutherland model with three independent coupling constants from SU(n+1,n) and from SU(n,n). This generalization of the Olshanetsky-Perelomov derivation of the $BC_n$ model with two independent coupling constants from the geodesics on $G/G_+$ with G=SU(n+1,n) relies on fixing the right-handed momentum to a non-zero character of $G_+$. The reductions considered permit further generalizations and work at the quantized level, too, for non-compact as well as for compact G. | CommonCrawl |
In this work we propose a variational model for multi-modal image registration. It minimizes a new functional based on using reformulated normalized gradients of the images as the fidelity term and higher-order derivatives as the regularizer. We first present a theoretical analysis of the proposed model. Then, to solve the model numerically, we use an augmented Lagrangian method (ALM) to reformulate it into a few more amenable subproblems (each giving rise to an Euler-Lagrange equation that is discretized by finite difference methods) and iteratively solve the main linear systems by the fast Fourier transform; a multilevel technique is employed to speed up the initialisation and avoid likely local minima of the underlying functional. Finally, we show the convergence of the ALM solver and give numerical results of the new approach. Comparisons with some existing methods are presented to illustrate its effectiveness and advantages.
Keywords: Variational model, optimization, multi-modality images, similarity measures, mapping, high order regularisation, inverse problem, augmented Lagrangian, multilevel.
Mathematics Subject Classification: Primary: 65M32, 35Q68, 94A08; Secondary: 65M22, 35G15.
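As a point of reference for the fidelity term mentioned in the abstract, here is a minimal numpy sketch of a standard normalized gradient field (NGF) distance between a template $T$ and a reference $R$; the central-difference gradients, the edge parameter $\eta$ and the exact normalisation are illustrative assumptions rather than the discretisation used in the paper.

```python
import numpy as np

def ngf_distance(T, R, eta=1e-2):
    """Normalized gradient field distance between images T and R.

    Measures 1 - cos^2 of the angle between the (regularised) image
    gradients at every pixel; it is small where the gradients are
    parallel or anti-parallel, which is what matters when the two
    images come from different modalities.
    """
    def grads(I):
        gy, gx = np.gradient(I.astype(float))  # gradients along rows, cols
        return gx, gy

    Tx, Ty = grads(T)
    Rx, Ry = grads(R)
    # Regularised gradient magnitudes ("edge" parameter eta).
    nT = np.sqrt(Tx**2 + Ty**2 + eta**2)
    nR = np.sqrt(Rx**2 + Ry**2 + eta**2)
    cos2 = ((Tx * Rx + Ty * Ry) / (nT * nR))**2
    return np.sum(1.0 - cos2)
```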
Figure 7. Comparison of $3$ different models used to register the MRI images in Fig. 6. Example 3 zoomed in on the red squares (see Fig. 6): from left to right, zooms of the reference $R$ and the registered $T(\mathbf u)$ using the new model, NGF and MI, respectively.
Table 2. Registration results of the different models for processing Examples 1-6. The errors are computed using formulas (38), (40) and (39). Here, #N is the ratio of the number of pixels where $\nabla_n T\cdot \nabla_n R \neq 0$ to the total number of pixels, whereas #G is the ratio of the number of pixels where $GF(T, R)+TM(T, R) \neq 0$ to the total number of pixels.
Ex 1: #N = 0.2%, #G = 0.02%; errors 0.370, 0.97, 0.381. Ex 3: #N = 49%, #G = 23%; errors 0.463, 0.579, 1.265. For GFer and NGFer, the smaller the better; but for MIer, the larger the better.
Question 1: Start with the classic case of bagging, say in random forests. Fit $B$ trees to bootstrap samples of the data. Average the predictions of the $B$ trees to form a final prediction. Bagging.
Why not use stacking? Form a matrix of predictions of dimension $N\times B$, and regress it on $Y$. This yields a set of weights $w$, which can be used in a weighted average of the individual predictions. Stacking.
Why aren't random forests routinely "stacked?"
It seems like heteroskedasticity might mess up stacking in this multilevel context. Say each classroom has a different amount of unexplained (or unexplainable) variance. The trees that do best in the stacking regression will be those that didn't include the classes with big epsilons. So my final meta-model will pretend as if everyone has a small epsilon, thereby understating my uncertainty. Right? What is known on this subject?
One more question, because I am new to stacking: What do you do with negative regression coefficients in the stacking regression? Negative weights? Or do you just exponentiate them and call it done? Or should one use some sort of non-negative least squares optimizer?
I get a lot of zero weights when I do the latter, and it seems like this'd screw up the variance reduction that you're supposed to get through ensembling.
Question 1: Why not use Stacking in Random Forests instead of averaging?
Decision trees have high variance and averaging them together reduces the variance, improving the performance. Since decision trees are weak individual models, stacking does not work that well on them. Stacking is best suited for a diverse set of strong models, which themselves can be ensembles (e.g. Random Forests, GBMs, etc).
Question 2: Can you stack clustered (aka "pooled repeated measures") data?
Sure, you can stack clustered data. However, when you use cross-validation to create the "level-one" data (the data to train the metalearner), you should ensure that the rows belonging to a single cluster all stay within a single fold. In your example above, this means that the rows corresponding to a whole classroom must be contained in a single fold and not be spread out across different folds.
Question 3: What do you do with negative regression coefficients in the stacking regression?
There's nothing inherently wrong with allowing negative weights, however, I've consistently seen better results if you restrict the weights to be non-negative. That's why we choose a GLM with non-negative weights as the default metalearner in the H2O Stacked Ensemble implementation. It's also the default in the SuperLearner R package.
Having a lot of zero weights is not a problem, it probably just means that many of your base learners are not adding value to the ensemble.
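To make the non-negative metalearner concrete, here is a minimal sketch that builds the level-one data from out-of-fold predictions and fits non-negative stacking weights with scipy's NNLS; the base learners, the synthetic data and the renormalisation step are illustrative choices of mine, not the H2O or SuperLearner defaults.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500)

# Base learners (deliberately of different strengths).
bases = [DecisionTreeRegressor(max_depth=d, random_state=d) for d in (2, 4, 8)]

# Level-one data: N x B matrix of out-of-fold predictions.
Z = np.zeros((len(y), len(bases)))
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    for j, model in enumerate(bases):
        model.fit(X[tr], y[tr])
        Z[te, j] = model.predict(X[te])

# Metalearner: non-negative least squares weights (many may come out zero).
w, _ = nnls(Z, y)
w = w / w.sum()  # normalise to a weighted average (assumes w.sum() > 0)
print("stacking weights:", w)

# Refit the base learners on all data and combine with the learned weights.
final_pred = sum(wj * m.fit(X, y).predict(X) for wj, m in zip(w, bases))
```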
You are given $n$ cubes in a certain order, and your task is to build towers using them. Whenever two cubes are one on top of the other, the upper cube must be smaller than the lower cube.
You must process the cubes in the given order. You can always either place the cube on top of an existing tower, or begin a new tower. What is the minimum possible number of towers?
The first input line contains an integer $n$: the number of cubes.
The next line contains $n$ integers $k_1,k_2,\ldots,k_n$: the sizes of the cubes.
Print one integer: the minimum number of towers. | CommonCrawl |
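One standard way to solve this task (my own illustration, not part of the problem statement) is a greedy sweep that keeps the current tower tops in sorted order and places each cube on the tower whose top is the smallest value still strictly larger than the cube, opening a new tower otherwise. A Python sketch:

```python
import bisect
import sys

def min_towers(cubes):
    """Greedy: place each cube on the tower whose top is the smallest
    value still strictly larger than the cube, else start a new tower."""
    tops = []                                 # tower tops, kept sorted
    for k in cubes:
        i = bisect.bisect_right(tops, k)      # first top strictly greater than k
        if i < len(tops):
            tops[i] = k                       # put the cube on that tower
        else:
            tops.append(k)                    # no suitable tower: open a new one
    return len(tops)

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    cubes = list(map(int, data[1:n + 1]))
    print(min_towers(cubes))

if __name__ == "__main__":
    main()
```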
opportunities for networking, informal contact, exchange of ideas and discussions with fellow researchers, in a friendly and relaxed environment. High quality papers describing new original research are sought on topics strongly related to the evolution of computer programs, ranging from theoretical work to innovative applications. The conference will feature a mixture of oral presentations and poster sessions. In 2013, the EuroGP acceptance rate was 49% (38% for oral presentations).
Accepted papers will be presented orally or as posters at the Conference and will be printed in the proceedings published by Springer Verlag in the Lecture Notes in Computer Science (LNCS) series.
The papers which receive the best reviews will be nominated for the Best Paper Award.
Submissions must be original and not published elsewhere. They will be peer reviewed by at least three members of the program committee. The reviewing process will be double-blind, so please omit information about the authors in the submitted paper. Submit your manuscript in Springer LNCS format.
A comprehensive bibliography of genetic programming literature and links to related material is accessible at the Genetic Programming Bibliography web page, part of the Collection of Computer Science Bibliographies maintained and managed by William Langdon, Steven Gustafson, and John Koza.
Order acceptance and scheduling (OAS) is an important planning activity in make-to-order manufacturing systems. Making good acceptance and scheduling decisions allows the systems to utilise their manufacturing resources better and achieve higher total profit. Therefore, finding optimal solutions for OAS is desirable. Unfortunately, the exact optimisation approaches previously proposed for OAS are still very time consuming and usually fail to solve the problem even for small instances in a reasonable computational time. In this paper, we develop a new branch-and-bound (B&B) approach to finding optimal solutions for OAS. In order to design effective branching strategies for B&B, a new GP method has been proposed to discover good ordering rules. The results show that the B&B algorithms enhanced by GP can solve the OAS problem more effectively than the basic B&B algorithm and the CPLEX solver on the Mixed Integer Linear Programming model.
We describe a fully automated workflow for performing stage 1 breast cancer detection with GP as its cornerstone. Mammograms are by far the most widely used method for detecting breast cancer in women, and their use in national screening can have a dramatic impact on early detection and survival rates. With the increased availability of digital mammography, it is becoming increasingly feasible to use automated methods to help with detection. A stage 1 detector examines mammograms and highlights suspicious areas that require further investigation. A too conservative approach degenerates to marking every mammogram (or segment of one) as suspicious, while missing a cancerous area can be disastrous. Our workflow positions us right at the data collection phase such that we generate textural features ourselves. These are fed through our system, which performs PCA on them before passing the most salient ones to GP to generate classifiers. The classifiers give results of 100% accuracy on true positives and a false positive per image rating of just 1.5, which is better than prior work. Not only this, but our system can use GP as part of a feedback loop, to both select and help generate further features.
There is great interest for the development of semantic genetic operators to improve the performance of genetic programming. Semantic genetic operators have traditionally been developed employing experimentally or theoretically-based approaches. Our current work proposes a novel semantic crossover developed amid the two traditional approaches. Our proposed semantic crossover operator is based on the use of the derivative of the error propagated through the tree. This process decides the crossing point of the second parent. The results show that our procedure improves the performance of genetic programming on rational symbolic regression problems.
We propose a simple method of directly measuring a mutation operator's short-term exploration-exploitation behaviour, based on its transition matrix. Higher values for this measure indicate a more exploitative operator. Since operators also differ in their degree of long-term bias towards particular areas of the search space, we propose a simple method of directly measuring this bias, based on the Markov chain stationary state. We use these measures to compare numerically the behaviours of two well-known mutation operators, the genetic algorithm per-gene bitflip mutation and the genetic programming subtree mutation.
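A toy illustration of the two quantities described above, for a made-up row-stochastic transition matrix over a four-point search space; the "exploitation" proxy used here (average self-transition probability) is only a stand-in for the paper's actual measure, while the long-term bias is computed in the standard way as the left eigenvector of the transition matrix for eigenvalue 1.

```python
import numpy as np

# Toy row-stochastic transition matrix of a mutation operator over a
# 4-point search space (made-up numbers, purely illustrative).
P = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.05, 0.80, 0.10, 0.05],
    [0.10, 0.10, 0.70, 0.10],
    [0.20, 0.20, 0.20, 0.40],
])

# Short-term exploitation proxy: average probability of staying put
# (the paper defines its own measure; this is just one simple proxy).
exploitation = np.trace(P) / P.shape[0]

# Long-term bias: stationary distribution pi with pi P = pi,
# i.e. the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

print("exploitation proxy:", exploitation)
print("stationary distribution:", pi)
```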
The Operator Equalization (OE) family of bloat control methods have achieved promising results in many domains. In particular, the Flat-OE method, that promotes a flat distribution of program sizes, is one of the simplest OE methods and achieves some of the best results. However, Flat-OE, like all OE variants, can be computationally expensive. This work proposes a simplified strategy for bloat control based on Flat-OE. In particular, bloat is studied in the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. NEAT includes a very simple diversity preservation technique based on speciation and fitness sharing, and it is hypothesized that with some minor tuning, speciation in NEAT can promote a flat distribution of program size. Results indicate that this is the case in two benchmark problems, in accordance with results for Flat-OE. In conclusion, NEAT provides a worthwhile strategy that could be extrapolated to other GP systems, for effective and simple bloat control.
This paper proposes a novel asynchronous reference-based evaluation (named ARE) for an asynchronous EA that evolves individuals independently, unlike general EAs that evolve all individuals at the same time. ARE is designed for asynchronous evolution by tertiary parent selection and its archive. In particular, ARE asynchronously evolves individuals through a comparison with only three individuals (i.e., two parents and one reference individual as the tertiary parent). In addition, ARE builds an archive of good reference individuals. This differs from synchronous evolution in EAs, in which selection involves comparison with all population members. In this paper, we investigate the effectiveness of ARE by applying it to some standard problems used in Linear GP, the aim being to minimize the execution steps of machine-code programs. We compare GP using ARE (ARE-GP) with steady state (synchronous) GP (SSGP) and our previous asynchronous GP (Tierra-based Asynchronous GP: TAGP). The experimental results have revealed that ARE-GP not only asynchronously evolves the machine-code programs, but also outperforms SSGP and TAGP in all test problems.
Synthesizing a program with the desired input-output behavior by means of genetic programming is an iterative process that needs appropriate guidance. That guidance is conventionally provided by a fitness function that measures the conformance of program output with the desired output. Contrary to widely adopted stance, there is no evidence that this quality measure is the best choice; alternative search drivers may exist that make search more effective. This study proposes and investigates a new family of behavioral search drivers, which inspect not only final program output, but also program behavior meant as the partial results it arrives at while executed.
For many years now it has been known that Cartesian Genetic Programming (CGP) does not exhibit program bloat. Two possible explanations have been proposed in the literature: neutral genetic drift and length bias. This paper empirically disproves both of these and thus, reopens the question as to why CGP does not suffer from bloat. It has also been shown for CGP that using a very large number of nodes considerably increases the effectiveness of the search. This paper also proposes a new explanation as to why this may be the case.
This paper addresses the problem of evolutionary design of classifiers for the recognition of handwritten digit symbols by means of Cartesian Genetic Programming. Two different design scenarios are investigated - the design of multiple-output classifier, and design of multiple binary classifiers. The goal is to evolve classification algorithms that employ substantially smaller amount of operations in contrast with conventional approaches such as Support Vector Machines. Even if the evolved classifiers do not reach the accuracy of the tuned SVM classifier, it will be shown that the accuracy is higher than 93% and the number of required operations is a magnitude lower.
Programming (GP). Constant creation in GE is an important issue due to the disruptive nature of ripple crossover, which can radically remap multiple terminals in an individual, and we investigate if more compact methods, which are more similar to the GP style of constant creation (Ephemeral Random Constants (ERCs)), perform better. The results are surprising. The GE methods all perform significantly better than GP on unseen test data, and we demonstrate that the standard GE approach of digit concatenation does not produce individuals that are any larger than those from methods which are designed to use less genetic material.
Genetic Programming (GP) may dramatically increase the performance of software written by domain experts. GP and autotuning are used to optimise and refactor legacy GPGPU C code for modern parallel graphics hardware and software. Speed ups of more than six times on recent nVidia GPU cards are reported compared to the original kernel on the same hardware.
Genetic Improvement (GI) is a form of Genetic Programming that improves an existing program. We use GI to evolve a faster version of a C++ program, a Boolean satisfiability (SAT) solver called MiniSAT, specialising it for a particular problem class, namely Combinatorial Interaction Testing (CIT), using automated code transplantation. Our GI-evolved solver achieves overall 17% improvement, making it comparable with average expert human performance. Additionally, this automatically evolved solver is faster than any of the human-improved solvers for the CIT problem.
The Flash system runs ensemble-based Genetic Programming (GP) symbolic regression on a shared memory desktop. To significantly reduce the high time cost of the extensive model predictions required by symbolic regression, its fitness evaluations are tasked to the desktop's GPU. Successive GP "instances" are run on different data subsets and randomly chosen objective functions. Best models are collected after a fixed number of generations and then fused with an adaptive, output-space method. New instance launches are halted once learning is complete. We demonstrate that Flash's ensemble strategy not only makes GP more robust, but it also provides an informed online means of halting the learning process. Flash enables GP to learn from a dataset composed of 370K exemplars and 90 features, evolving a population of 1000 individuals over 100 generations in as few as 50 seconds.
Kernel regression is a well-established nonparametric method, in which the target value of a query point is estimated using a weighted average of the surrounding training examples. The weights are typically obtained by applying a distance-based kernel function, which presupposes the existence of a distance measure. This paper investigates the use of Genetic Programming for the evolution of task-specific distance measures as an alternative to Euclidean distance. Results on seven real-world datasets show that the generalisation performance of the proposed system is superior to that of Euclidean-based kernel regression and standard GP.
Classification problems are of profound interest for the machine learning community as well as to an array of application fields. However, multi-class classification problems can be very complex, in particular when the number of classes is high. Although very successful in so many applications, GP was never regarded as a good method to perform multi-class classification. In this work, we present a novel algorithm for tree based GP, that incorporates some ideas on the representation of the solution space in higher dimensions. This idea lays some foundations for addressing multi-class classification problems using GP, which may lead to further research in this direction. We test the new approach on a large set of benchmark problems from several different sources, and observe its competitiveness against the most successful state-of-the-art classifiers.
The 3-versus-2 Keepaway soccer task represents a widely used benchmark appropriate for evaluating approaches to reinforcement learning, multi-agent systems, and evolutionary robotics. To date most research on this task has been described in terms of developments to reinforcement learning with function approximation or frameworks for neuro-evolution. This work performs an initial study using a recently proposed algorithm for evolving teams of programs hierarchically using two phases of evolution: one to build a library of candidate meta policies and a second to learn how to deploy the library consistently. Particular attention is paid to diversity maintenance, where this has been demonstrated as a critical component in neuro-evolutionary approaches. A new formulation is proposed for fitness sharing appropriate to the Keepaway task. The resulting policies are observed to benefit from the use of diversity and perform significantly better than previously reported. Moreover, champion individuals evolved and selected under one field size generalize to multiple field sizes without any additional training.
This paper introduces the concepts of error vector and error space, directly bound to semantics, one of the hottest topics in genetic programming. Based on these concepts, we introduce the notions of optimally aligned individuals and optimally coplanar individuals. We show that, given optimally aligned, or optimally coplanar, individuals, it is possible to construct a globally optimal solution analytically. Thus, we introduce a genetic programming framework for symbolic regression called Error Space Alignment GP (ESAGP) and two of its instances: ESAGP-1, whose objective is to find optimally aligned individuals, and ESAGP-2, whose objective is to find optimally coplanar individuals. We also discuss how to generalize the approach to any number of dimensions. Using two complex real-life applications, we provide experimental evidence that ESAGP-2 outperforms ESAGP-1, which in turn outperforms both standard GP and geometric semantic GP. This suggests that ``adding dimensions'' is beneficial and encourages us to pursue the study in many different directions, that we summarize in the final part of the manuscript.
This paper proposes a new approach to improve generalisation of standard regression techniques when there are hundreds or thousands of input variables. The input space $X$ is composed of observational data of the form $(x_i, y(x_i)), i = 1, \ldots, n$ where each $x_i$ denotes a k-dimensional input vector of design variables and $y$ is the response. Genetic Programming (GP) is used to transform the original input space $X$ into a new input space $Z = (z_i, y(z_i))$ that has a smaller input vector and is easier to map into its corresponding responses. GP is designed to evolve a function that receives the original input vector from each $x_i$ in the original input space as input and returns a new vector $z_i$ as an output. Each element in the newly evolved $z_i$ vector is generated from an evolved mathematical formula that extracts statistical features from the original input space. To achieve this, we designed GP trees to produce multiple outputs. Empirical evaluation of $20$ different problems revealed that the new approach is able to significantly reduce the dimensionality of the original input space and improve the performance of standard approximation models such as Kriging, Radial Basis Function Networks, and Linear Regression, and GP (as a regression technique). In addition, results demonstrate that the new approach is better than standard dimensionality reduction techniques such as Principal Component Analysis (PCA). Moreover, the results show that the proposed approach is able to improve the performance of standard Linear Regression and make it competitive to other stochastic regression techniques.
Symbolic regression has many successful applications in learning free-form regular equations from data. Trying to apply the same approach to differential equations is the logical next step: so far, however, results have not matched the quality obtained with regular equations, mainly due to additional constraints and dependencies between variables that make the problem extremely hard to tackle. In this paper we propose a new approach to dynamic systems learning. Symbolic regression is used to obtain a set of first-order Eulerian approximations of differential equations, and mathematical properties of the approximation are then exploited to reconstruct the original differential equations. Advantages of this technique include the de-coupling of systems of differential equations, that can now be learned independently; the possibility of exploiting established techniques for standard symbolic regression, after trivial operations on the original dataset; and the substantial reduction of computational effort, when compared to existing ad-hoc solutions for the same purpose. Experimental results show the efficacy of the proposed approach on an instance of the Lotka-Volterra model.
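The dataset transformation at the heart of this approach is easy to show directly: simulate (or observe) a trajectory, then turn consecutive states into input/target pairs via the first-order Euler quotient. The Lotka-Volterra parameters, step size and trajectory length below are illustrative assumptions of mine, not the paper's experimental setup.

```python
import numpy as np

# Simulate a toy Lotka-Volterra system to get observational data.
a, b, c, d = 1.0, 0.4, 0.4, 0.1          # illustrative parameters
dt, steps = 0.01, 2000
xy = np.zeros((steps, 2))
xy[0] = (10.0, 5.0)
for t in range(steps - 1):
    x, y = xy[t]
    xy[t + 1, 0] = x + dt * (a * x - b * x * y)
    xy[t + 1, 1] = y + dt * (-c * y + d * x * y)

# First-order Eulerian targets: (x_{t+1} - x_t) / dt ~ f(x_t, y_t).
# Each state variable now has its own independent regression problem,
# which is the "de-coupling" advantage mentioned in the abstract.
inputs  = xy[:-1]                          # (x_t, y_t)
targets = (xy[1:] - xy[:-1]) / dt          # approximations of dx/dt, dy/dt

# `inputs` with `targets[:, 0]` (resp. `targets[:, 1]`) can now be fed to
# any symbolic-regression engine to recover a*x - b*x*y (resp. -c*y + d*x*y).
```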
EuroGP invites all to an open session to discuss issues and current trends in the field of Genetic Programming. | CommonCrawl |
Put in the missing symbols to make these number sentences correct.
Use $+ , -, \times , \div$ and $ =$ .
All these number sentences below, except two of them, have two solutions.
Can you find the symbols to use?
Which two number sentences have only one answer?
Can you see why this is so?
This problem is designed to help young learners to use the symbols plus, minus, multiplied by, divided by and equals to, meaningfully, in number statements. Children frequently meet boxes or similar devices to represent numbers but seldom the actual operational symbols. This problem also helps learners understand inverse operations.
You could start with an example like the ones given at the beginning of the problem. It might be useful to use the interactivity with the whole group at this point until everyone has the idea that they are finding symbols, not numbers.
After this the children could work in pairs on the examples either from the screen or this printed sheet so that they are able to talk through their ideas with a partner.
At the end of the session the group could gather together again and put up their ideas on the board or use the interactivity. You should also discuss why four of the double number sentences have two answers and two only have one answer. Can they see why this is so?
This problem could also be used as a people maths activity during an assembly with children standing in line holding cards, thus forming a human equation. The audience can tell the 'symbols' where to stand.
Which symbol tells you to take away?
What do you have to do to "undo" an addition? What about a subtraction?
What do you have to do to "undo" a multiplication? What about a division?
Learners could make some more number statements which can be done in more than one way. Can they make one that can be done three or even four ways?
Suggest using the interactivity or, if this is not possible, counters with the five symbols on them which can be moved around on this sheet.
The Sahel is a region south of the Sahara. Thanks @bandrade.
Kendo is literally "the way of the sword" in Japanese. Thanks to @Teo, and to the OP for the explanation.
Thanks to @Teo for ANWIL and to @Rand al'Thor for LIAM.
Thanks to @DanielBaliki for ETERNAL.
2 is the first ("prime") prime. 8 is $\infty$ on end. Thanks to @Rand al'Thor.
StackExchange has an office at 110 William Street on the 28th floor!
FETA, KENDO (thanks @Teo). There are many types of both Greek cheese and Asian sword, but this makes sense for the reason pointed out by @ManyPinkHats.
PHONE, something (e.g. could be SHED or HUT).
A katana is a sword, not a sword way. I was thinking of BUSHIDO. Or KENDO, or IAIDO... there are lots of possibilities. | CommonCrawl |
Q1: Hurwitz formula + canonical divisor of $\mathbb P^2$.
Q2: Move the curve in $\mathbb P^2$.
Proceedings of Machine Learning Research, PMLR 89:1544-1553, 2019.
We characterize the asymptotic performance of nonparametric goodness of fit testing. The exponential decay rate of the type-II error probability is used as the asymptotic performance metric, and a test is optimal if it achieves the maximum rate subject to a constant level constraint on the type-I error probability. We show that two classes of Maximum Mean Discrepancy (MMD) based tests attain this optimality on $\mathbb R^d$, while the quadratic-time Kernel Stein Discrepancy (KSD) based tests achieve the maximum exponential decay rate under a relaxed level constraint. Under the same performance metric, we proceed to show that the quadratic-time MMD based two-sample tests are also optimal for general two-sample problems, provided that kernels are bounded continuous and characteristic. Key to our approach are Sanov's theorem from large deviation theory and the weak metrizable properties of the MMD and KSD.
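For reference, the quadratic-time statistic underlying the MMD-based tests discussed above can be written in a few lines; the sketch below computes the unbiased estimate of MMD$^2$ with a Gaussian kernel, where the bandwidth choice is an illustrative assumption.

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Quadratic-time unbiased estimate of MMD^2 with a Gaussian kernel.

    X : (n, d) sample from P,  Y : (m, d) sample from Q.
    """
    def gram(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))

    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    # Remove diagonal terms for the unbiased estimator.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()
```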
Let's say I have formulated an integer linear programming (ILP) problem with the objective function $$F(X)=V(T,X)-C(t,X),$$ where $V(T,X)$ is the payoff of the portfolio, and $C(t,X)$ is the initial cost of the portfolio, $0<t<T$ is the calendar time, then I have set up a system of constraints and found the optimum solution $X=(x_1, x_2, \ldots, x_n)$, where $x_i$ is the number of units of the $i$-th asset in the portfolio, with $x_i>0$ for buying, $x_i<0$ for short selling.
Now I'd like to extend the system of constraints and add a new constraint on the initial cost $C(t,X)$. Let's say $C(t,X)\le c$, where $c$ can be a positive number, zero, or even a negative number. I think that theoretically I can find the optimum solution $X$ with the constraint $C(t,X)\le c$.
And are they really zero-cost? As far as the initial fee goes, yes. But it is necessary to take into consideration that option trading requires a general agreement with the bank and must be covered by collateral. There are also costs of contract processing, expert opinions for asset valuation, opportunity costs affecting the pledge, and costs of the call option sale... No zero-cost options are really zero.
My question: Can I assume that an investor can use the money received from the sale of some contracts to buy other contracts in the portfolio? Can I realize the optimal portfolio with zero or negative initial cost on a market?
Peter Carr and Dilip Madan. Towards a theory of volatility trading. In R. Jarrow, editor, Volatility , pages 417-427. Risk Publications, 1998.
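For concreteness, a minimal sketch of the extended ILP with the initial-cost constraint, written with PuLP; the payoffs, costs, position bounds and the budget $c$ are all made-up illustrative numbers, not data from the question, and a real formulation would add the remaining portfolio constraints.

```python
import pulp

# Illustrative data: per-unit payoff V_i at T and initial cost C_i at t.
payoff = [4.0, 2.5, 3.0, 1.0]     # V_i(T)
cost   = [3.5, 2.0, 2.8, -0.5]    # C_i(t); negative = premium received (short)
c_max  = 0.0                      # budget on total initial cost, e.g. zero-cost

prob = pulp.LpProblem("zero_cost_portfolio", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=-10, upBound=10, cat="Integer")
     for i in range(len(payoff))]

# Objective: F(X) = V(T, X) - C(t, X).
prob += pulp.lpSum(payoff[i] * x[i] for i in range(len(x))) \
        - pulp.lpSum(cost[i] * x[i] for i in range(len(x)))

# New constraint: the initial cost of the portfolio must not exceed c.
prob += pulp.lpSum(cost[i] * x[i] for i in range(len(x))) <= c_max

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x], "objective:", pulp.value(prob.objective))
```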
To answer it directly, try searching for the term 'Sequential Quadratic Programming'. This should lead you to relevant references.
More details: if I am reading your question correctly, you are hoping to minimize the loss of a strategy involving sequential transactions (buying or selling) of options. I think your question would be more clear if you also indexed the asset value by time (the equation does look like you are assuming the asset price is fixed at some future date throughout the duration of the strategy period) and if you do not impose symmetry with respect to the time you do a transaction involving an option. I also think it would be easier to express this problem as a minimization problem and to make it clear whether you are buying or selling the option at a given time period. This should help clarify your objective function.
where $f$ and $g$ are functions of the molar volume, $V_\mathrm m$. Determine the state equation.
You have a $4\times8$ chocolate bar, and you can only make straight cuts with the knife.
What is the least number of cuts required to end up with $32$ pieces of $1\times1$ chocolate?
What if putting chocolates onto each other was not allowed?
Cut in half vertically, creating 2, $2\times8$ pieces, then place these end to end to get a $2\times16$ piece. Cut again vertically through both pieces, to get 4, $1\times8$ pieces. Place these side by side to form a $4\times8$ piece again, this time, cut in half horizontally, and move the pieces to form an $8\times4$ shape, cut and move into $16\times2$, then cut once more and you have 32 individual pieces. This totals 5 cuts.
This puzzle works because with each cut, we half the size of every piece. to work this out quickly, we could do $\log_2(32) = 5$.
Simply imagine repeatedly folding it in half like a piece of paper until it is $1\times1$. Instead of folding, you cut and stack the pieces on top of each other. If you are not allowed to stack on top of each other for the cut, then you can put them next to each other instead.
$4\times 8 = 2^2 \times 2^3$ so it takes 5 cuts to reduce to $2^0 \times 2^0$.
5 cuts if you are allowed to stack the pieces on top of each other after each cut.
you can either cut rows of 8 (3 cuts, you now have 4 rows of 8), then separate each row with 7 cuts each. $3 + (4*7) = 31$.
or cut vertically first seven times to create 8 columns of 4, then cut each 3 times. $7 + (8*3) = 31$.
Changing between cutting horizontally and vertically each time will not help to get a lower number of cuts; it always results in 31.
In this puzzle, we don't have to separate the pieces and cut each piece on its own. As others now have said, it is possible with 5 cuts.
What if putting chocolates onto each other was not allowed? - You don't even have to stack the halves on top of each other, just arrange them next to each other so you can cut each piece the same way with one cut.
1. Cut into 2 2x8 pieces.
2. Rearrange into 2x16 and cut into 2 1x16 (actually 4 1x8) pieces.
3. Rearrange into 4x8 and cut into 2 4x4 (actually 8 1x4) pieces.
4. Rearrange into 8x4 and cut into 2 8x2 (actually 16 1x2) pieces.
5. Rearrange into 16x2 and cut into 2 16x1 (actually 32 1x1) pieces.
If rearranging is not allowed, we can cut only 1 line at once, and you need to cut 10 lines (3 in the "north-south" and 7 in the "east-west" direction).
Chocolate is usually already divided into 1 x 1 blocks. You can easily crush it with your hand and you don't need any help of a knife.
I think that it is already divided into 32 blocks. Why would you provide dimensions 4 x 8 otherwise? Not 2 x 4 nor 1 x 2?
There is another way that hasn't been mentioned.
The puzzle says that you have to cut straight with the knife, not that the cut has to be straight. So if you move the chocolate while cutting you can cut it into 32 pieces in one cut.
Gaik Ambartsoumian, Leonid Kunyansky. Exterior/interior problem for the circular means transform with applications to intravascular imaging. Inverse Problems & Imaging, 2014, 8(2): 339-359. doi: 10.3934/ipi.2014.8.339.
Samuel Amstutz, Antonio André Novotny, Nicolas Van Goethem. Minimal partitions and image classification using a gradient-free perimeter approximation. Inverse Problems & Imaging, 2014, 8(2): 361-387. doi: 10.3934/ipi.2014.8.361.
Elena Beretta, Markus Grasmair, Monika Muszkieta, Otmar Scherzer. A variational algorithm for the detection of line segments. Inverse Problems & Imaging, 2014, 8(2): 389-408. doi: 10.3934/ipi.2014.8.389.
Paola Favati, Grazia Lotti, Ornella Menchi, Francesco Romani. An inner-outer regularizing method for ill-posed problems. Inverse Problems & Imaging, 2014, 8(2): 409-420. doi: 10.3934/ipi.2014.8.409.
Adriana González, Laurent Jacques, Christophe De Vleeschouwer, Philippe Antoine. Compressive optical deflectometric tomography: A constrained total-variation minimization approach. Inverse Problems & Imaging, 2014, 8(2): 421-457. doi: 10.3934/ipi.2014.8.421.
Zhenlin Guo, Ping Lin, Guangrong Ji, Yangfan Wang. Retinal vessel segmentation using a finite element based binary level set method. Inverse Problems & Imaging, 2014, 8(2): 459-473. doi: 10.3934/ipi.2014.8.459.
Hiroshi Isozaki, Hisashi Morioka. A Rellich type theorem for discrete Schrödinger operators. Inverse Problems & Imaging, 2014, 8(2): 475-489. doi: 10.3934/ipi.2014.8.475.
Hyeuknam Kwon, Yoon Mo Jung, Jaeseok Park, Jin Keun Seo. A new computer-aided method for detecting brain metastases on contrast-enhanced MR images. Inverse Problems & Imaging, 2014, 8(2): 491-505. doi: 10.3934/ipi.2014.8.491.
Liyan Ma, Lionel Moisan, Jian Yu, Tieyong Zeng. A stable method solving the total variation dictionary model with $L^\infty$ constraints. Inverse Problems & Imaging, 2014, 8(2): 507-535. doi: 10.3934/ipi.2014.8.507.
David Maxwell. Kozlov-Maz'ya iteration as a form of Landweber iteration. Inverse Problems & Imaging, 2014, 8(2): 537-560. doi: 10.3934/ipi.2014.8.537.
Lassi Roininen, Janne M. J. Huttunen, Sari Lasanen. Whittle-Matérn priors for Bayesian statistical inversion with applications in electrical impedance tomography. Inverse Problems & Imaging, 2014, 8(2): 561-586. doi: 10.3934/ipi.2014.8.561.
Yuanchang Sun, Lisa M. Wingen, Barbara J. Finlayson-Pitts, Jack Xin. A semi-blind source separation method for differential optical absorption spectroscopy of atmospheric gas mixtures. Inverse Problems & Imaging, 2014, 8(2): 587-610. doi: 10.3934/ipi.2014.8.587.
Abstract: This chapter describes the assumed specifications and sensitivities for HI galaxy surveys with SKA1 and SKA2. It addresses the expected galaxy number densities based on available simulations as well as the clustering bias over the underlying dark matter. It is shown that a SKA1 HI galaxy survey should be able to find around $5\times 10^6$ galaxies over 5,000 deg$^2$ (up to $z\sim 0.8$), while SKA2 should find $\sim 10^9$ galaxies over 30,000 deg$^2$ (up to $z\sim 2.5$). The numbers presented here have been used throughout the cosmology chapters for forecasting. | CommonCrawl |
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:518-527, 2017.
Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are $L$ observed contexts and $K$ arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality $m$ ($m \ll L, K$). The arm choice and the latent confounder causally determine the reward, while the observed context is correlated with the confounder. Under this model, the $L \times K$ mean reward matrix $\mathbf{U}$ (for each context in $[L]$ and each arm in $[K]$) factorizes into non-negative factors $\mathbf{A}$ ($L \times m$) and $\mathbf{W}$ ($m \times K$). This insight enables us to propose an $\epsilon$-greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms) that achieves a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of $\mathcal{O}\left(L\,\mathrm{poly}(m, \log K) \log T \right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of $\mathcal{O}\left(Km\log T\right)$. These are the first regret guarantees for online matrix completion with bandit feedback, when the rank is greater than one. We further compare the performance of our algorithm with the state of the art, on synthetic and real-world data-sets.
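The abstract describes the algorithm only at a high level; the toy sketch below shows the general shape of an $\epsilon$-greedy contextual bandit that periodically factorizes its empirical context-arm reward matrix with NMF to share information across contexts. It is loosely inspired by, and is not a faithful implementation of, the NMF-Bandit algorithm; all dimensions, schedules and reward models are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
L, K, m, T, eps = 20, 15, 3, 20000, 0.05

# Ground-truth low-rank mean reward matrix U = A @ W (for simulation only).
A = rng.random((L, m))
W = rng.random((m, K))
U = A @ W / m                      # entries in [0, 1], used as Bernoulli means

counts = np.ones((L, K))           # pseudo-counts to avoid division by zero
sums = np.zeros((L, K))
U_hat = np.zeros((L, K))           # current low-rank estimate of the means

for t in range(T):
    ctx = rng.integers(L)
    if rng.random() < eps:
        arm = rng.integers(K)                  # explore
    else:
        arm = int(np.argmax(U_hat[ctx]))       # exploit current estimate
    reward = rng.binomial(1, U[ctx, arm])      # Bernoulli reward
    counts[ctx, arm] += 1
    sums[ctx, arm] += reward
    if (t + 1) % 1000 == 0:                    # periodically re-factorize
        emp = sums / counts
        nmf = NMF(n_components=m, init="random", random_state=0, max_iter=500)
        A_hat = nmf.fit_transform(emp)
        U_hat = A_hat @ nmf.components_        # denoised low-rank estimate

print("mean best reward:", U.max(axis=1).mean(), " mean earned:", sums.sum() / T)
```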
Given a permutation matrix that is not full rank, is there a linear-algebraic (and corresponding algebraic) criterion to tell whether the matrix contains more than one disjoint non-trivial cycle or exactly one non-trivial cycle?
Is there a criterion to tell that $X, Y$ have unique cycles of lengths $d_x, d_y$ respectively by looking at $X\otimes Y$?
This matrix embeds the cycle 1st row -> 2nd row -> 6th row -> 8th row -> 5th row -> 1st row (essentially a 5-cycle in an $8\times 8$ matrix). Hence we have a non-trivial $5$-cycle.
In the above matrix, if the (3,3), (4,4), and (7,7) entries are 1, we still get a 5-cycle with 3 fixed points.
On the other hand, if the (4,3) and (3,4) entries are 1 and the (7,7) entry is 0, we obtain disjoint 2- and 5-cycles with no fixed points (turning (7,7) to 1 gets you a fixed point).
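To make the cycle-reading concrete, here is a small Python sketch (mine, not from the original post) that extracts the cycle decomposition of a permutation matrix, reading $P_{ij} = 1$ as "row $i$ maps to row $j$"; it assumes a genuine permutation matrix, i.e. exactly one 1 in every row and column:

import numpy as np

def cycles_of_permutation_matrix(P):
    """Return the cycle decomposition of a permutation matrix P."""
    P = np.asarray(P)
    successor = {i: int(np.argmax(P[i])) for i in range(P.shape[0])}
    seen, cycles = set(), []
    for start in successor:
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = successor[i]
        cycles.append(cycle)
    return cycles

# The 8x8 example above would return one list of length 5 plus
# singleton lists for the fixed points (or a 2-cycle in the last variant).

Counting the returned lists of length greater than one answers the "exactly one non-trivial cycle" question for a concrete matrix, though of course it is not the purely algebraic criterion asked for.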
Browse other questions tagged linear-algebra permutations or ask your own question.
Is this statement about the real edge space of a graph known or trivial?
Is there an efficient algorithm to check whether two matrices are the same up to row and column permutations? | CommonCrawl |
The trivial relation $\mathcal R = S \times S$ is always an equivalence in $S$.
Let us verify the conditions for an equivalence in turn.
Checking reflexivity: we need $(x, x) \in \mathcal R$ for all $x \in S$, which is trivial by definition of the Cartesian product $S \times S$.
Checking symmetry: we need $(x, y) \in \mathcal R \implies (y, x) \in \mathcal R$; since $(y, x) \in S \times S$ always holds, this follows by True Statement is implied by Every Statement.
Checking transitivity: we need $(x, y) \in \mathcal R$ and $(y, z) \in \mathcal R$ to imply $(x, z) \in \mathcal R$; hence by True Statement is implied by Every Statement, it follows that $\mathcal R$ is transitive.
Having verified all three conditions, we conclude $\mathcal R$ is an equivalence. | CommonCrawl |
How to solve this problem with relaxation LP by graphical method?
$x_1 = 3$, $x_2 = 4$ doesn't satisfy one of your constraints ($7x + 3y \le 22$): indeed $7 \cdot 3 + 3 \cdot 4 = 33 > 22$.
Brit Grosskopf joined the University of Exeter Business School as a Professor of Economics in September 2013; she is the current head of the Economics department. Originally from Germany, Brit did her undergraduate degree in economics at the Humboldt University in Berlin. She completed her PhD dissertation Social Preferences and Learning in Experimental Games under the supervision of Rosemarie Nagel at the Universitat Pompeu Fabra, Barcelona, in 2000. She then spent three years at Harvard Business School, where she was a post-doc with Al Roth (Nobel Laureate, Economics 2012). Brit then took a tenure-track job at Texas A&M University, where she was granted tenure in 2009. She moved to the University of Birmingham as a Professor in Experimental Economics in 2011, where she founded the Birmingham Experimental Economics Laboratory (BEEL), of which she was the Director until she joined Exeter.
Brit's research interests lie at the intersection of economics and psychology. She uses experimental methods to study individual and group behaviour with a particular interest in social preferences, reasoning, learning, reputation, identity and happiness. Brit has obtained research support from the National Science Foundation, the British Academy and the Russell Sage Foundation.
Grosskopf B, Sarin R, Rentschler L (2018). An experiment on first-price common-value auctions with asymmetric information structures: the blessed winner. Games and Economic Behavior, 109, 40-64. Full text. DOI.
Drouvelis M, Grosskopf B (2016). The effects of induced emotions on pro-social behaviour. Journal of Public Economics, 134, 1-8.
© 2016 Elsevier B.V. Emotions are commonly experienced and expressed in human societies; however, their consequences on economic behaviour have received only limited attention. This paper investigates the effects of induced positive and negative emotions on cooperation and sanctioning behaviour in a one-shot voluntary contributions mechanism game, where personal and social interests are at odds. We concentrate on two specific emotions: anger and happiness. Our findings provide clear evidence that measures of social preferences are sensitive to subjects' current emotional states. Specifically, angry subjects contribute, on average, less than happy subjects and overall welfare as measured by average net earnings is lower when subjects are in an angry mood. We also find that how punishment is used is affected by moods: angry subjects punish harsher than happy subjects, ceteris paribus. These findings suggest that anger, when induced, can have a negative impact on economic behaviour.
Grosskopf B, Sarin R, Watson E (2015). An experiment on case-based decision making. Theory and Decision, 79(4), 639-666.
© 2015, Springer Science+Business Media New York. We experimentally investigate the disposition of decision makers to use case-based reasoning as suggested by Hume (An enquiry concerning human understanding, 1748) and formalized by case-based decision theory (Gilboa and Schmeidler in Q J Econ 110:605–639, 1995). Our subjects face a monopoly decision problem about which they have very limited information. Information is presented in a manner which makes similarity judgements according to the feature matching model of Tversky (Psychol Rev 84:327–352, 1977) plausible. We provide subjects a "history" of cases. In the $2\times 2$ between-subject design, we vary whether information about the current market is given and whether immediate feedback about obtained profits is provided. The results provide support for the predictions of case-based decision theory, particularly when no immediate feedback is provided.
Grosskopf B (2014). New Perspectives on Emotions in Finance: the Sociology of Confidence, Fear and Betrayal. Journal of Economic Literature, 52(3), 862-864.
Bereby-Meyer Y, Moran S, Grosskopf B, Chugh D (2013). Choosing Between Lotteries: Remarkable Coordination Without Communication. Journal of Behavioral Decision Making, 26(4), 338-347. DOI.
The current research examines tacit coordination behavior in a lottery selection task. Two hundred participants in each of three experiments and 100 in a fourth choose to participate in one of two lotteries, where one lottery has a larger prize than the other. Independent of variations in the complexity of the mechanism of prize allocation, the prize amounts, and whether the lottery is the participant's first or second choice, we typically find that the percentage of participants who choose the high versus low-prize lotteries does not significantly differ from the equilibrium predictions. This coordination is achieved without communication or experience. We additionally find that participants with an analytical thinking style and a risk-averse tendency are more likely to choose the low-prize lottery over the high-prize lottery. This tendency seems to be stable across choices. The pattern of our results suggests that to achieve tacit coordination, having a subset of individuals who attend to the choices of others is sufficient. Copyright © 2012.
Grosskopf B, Sarin R (2010). Is reputation good or bad? an experiment. American Economic Review, 100(5), 2187-2204.
We investigate the impact of reputation in a laboratory experiment. We do so by varying whether the past choices of a long-run player are observable by the short-run players. Our framework allows for reputation to have either a beneficial or a harmful effect on the long-run player. We find that reputation is seldom harmful and its beneficial effects are not as strong as theory suggests. When reputational concerns are at odds with other-regarding preferences, we find the latter overwhelm the former.
Grosskopf B, Roth AE (2009). If you are offered the Right of First Refusal, should you accept? an investigation of contract design. Games and Economic Behavior, 65(1), 176-204.
Rights of first refusal are contract clauses intended to provide the holder of a license or lease with some protection when the contract ends. The simplest version gives the right holder the ability to act after potential competitors. However, another common implementation requires the right holder to accept or reject some offers before potential competitors are given the same offer, and, if the right holder rejects the initial offer, allows the right to be exercised affirmatively only if competitors are subsequently offered a better deal (e.g. a lower price). We explore, theoretically and experimentally, the impact this latter form of right of first refusal can have on the outcome of negotiation. Counterintuitively, this "right" of first refusal can be disadvantageous to its holder. This suggests that applied contract design may benefit from the same kind of attention to detail that has begun to be given to practical market design. © 2008 Elsevier Inc. All rights reserved.
Coats JC, Gronberg TJ, Grosskopf B (2009). Simultaneous versus sequential public good provision and the role of refunds - an experimental study. Journal of Public Economics, 93(1-2), 326-335.
We experimentally study contributing behavior to a threshold public good under simultaneous and sequential voluntary contribution mechanisms and investigate how refund policies interact with the mechanism. We find that, for a given refund rule, efficiency is greater under a sequential contribution mechanism than under a simultaneous contribution mechanism. Furthermore, for a given order of contributions, we find that full refund unambiguously achieves higher efficiency in the simultaneous mechanism while this is not the case in the sequential mechanism. © 2008 Elsevier B.V. All rights reserved.
Bereby-Meyer Y, Grosskopf B (2008). Overcoming the winner's curse: an adaptive learning perspective. Journal of Behavioral Decision Making, 21(1), 15-27.
The winner's curse phenomenon refers to the fact that the winner in a common value auction, in order to actually win the auction, is likely to have overestimated the item's value and consequently is likely to gain less than expected and may even lose (i.e. it is said to be "cursed"). Past research, using the "Acquiring a company" task has shown that people do not overcome this bias even after they receive extensive feedback. We suggest that the persistence of the winner's curse is due to a combination of two factors: variability in the environment that leads to ambiguous feedback (i.e. choices and outcomes are only partially correlated) and the tendency of decision makers to learn adaptively. We show in an experiment that by reducing the variance in the feedback, performance can be significantly improved. Copyright © 2007 John Wiley & Sons, Ltd.
Grosskopf B, Nagel R (2008). The two-person beauty contest. Games and Economic Behavior, 62(1), 93-99.
We introduce a two-person beauty contest game with a unique Nash equilibrium that is identical to the game with many players. However, iterative reasoning is unnecessary in the two-person game as choosing zero is a weakly dominant strategy. Despite this "easier" solution concept, we find that a large majority of players do not choose zero. This is the case even with a sophisticated subject pool. © 2007 Elsevier Inc. All rights reserved.
Grosskopf B, Bereby-Meyer Y, Bazerman M (2007). On the robustness of the winner's curse phenomenon. Theory and Decision, 63(4), 389-418.
We set out to find ways to help decision makers overcome the "winner's curse," a phenomenon commonly observed in asymmetric information bargaining situations, and instead found strong support for its robustness. In a series of manipulations of the "Acquiring a Company Task," we tried to enhance decision makers' cognitive understanding of the task. We did so by presenting them with different parameters of the task, having them compare and contrast these different parameters, giving them full feedback on their history of choices and resulting outcomes, and allowing them to interact with a human opponent instead of a computer program. Much to our surprise, none of these manipulations led to a better understanding of the task. Our results demonstrate and emphasize the robustness of the winner's curse phenomenon. © 2007 Springer Science+Business Media, LLC.
Grosskopf B, Erev I, Yechiam E (2006). Foregone with the wind: Indirect payoff information and its implications for choice. International Journal of Game Theory, 34(2), 285-302.
Examination of the effect of information concerning foregone payoffs on choice behavior reveals a complex pattern. Depending on the environment, this information can facilitate or impair maximization. Our study of nine experimental tasks suggests that the complex pattern can be summarized with the assumption that initially people tend to be highly sensitive, and sometimes too sensitive, to recent foregone payoffs. However, over time, people can learn to adjust their sensitivity depending on the environment they are facing. The implications of this observation to models of human adaptation and to problems of mechanism design are discussed.
Bereby-Meyer Y, Grosskopf B (2004). How manipulable are fairness perceptions? the effect of additional alternatives. Research on Economic Inequality, 11, 43-53.
In customer or labor markets raising prices or cutting wages is perceived as unfair if it results from the exploitation of shifts in demands. In a series of manipulations we show that adding an alternative to the original choice set alters the perception of fairness of the final outcome. Adding a worse alternative lowers the perception of unfairness, whereas adding a better alternative raises the perception of unfairness. These findings supplemented with existing experimental evidence cast doubt on purely outcome-based theories of fairness and suggest that fairness perceptions are highly manipulable. © 2004 Elsevier Ltd. All rights reserved.
Idson LC, Chugh D, Bereby-Meyer Y, Moran S, Grosskopf B, Bazerman M (2004). Overcoming focusing failures in competitive environments. Journal of Behavioral Decision Making, 17(3), 159-172.
This paper attacks one of the chief limitations of the field of behavioral decision research - the past inability to use this literature to improve decision making. Building on the work of Thompson, Gentner, Loewenstein and colleagues (Loewenstein, Thompson, & Gentner, 1999; Thompson, Gentner, & Loewenstein, 2000; Gentner & Markman, 1997), the current paper finds that it is possible to reduce bias in one of the most robust problems in the decision literature, the Acquiring a Company Problem (Samuelson & Bazerman, 1985). Past research has shown that individuals make sub-optimal offers as a result of the failure to think about the decisions of others and to incorporate a clear understanding of the rules of the game. In the current study, we find that by allowing study participants to see and understand differences in seemingly unrelated decision problems - versions of the Monty Hall Game (Nalebuff, 1987; Friedman, 1998) and Multiparty Ultimatum Game (Messick, Moore, & Bazerman, 1997; Tor & Bazerman, 2003) - study participants can learn to focus more accurately on the decisions of other parties and the rules of the game, the keys to solving the Acquiring a Company Problem. This research offers a new piece of evidence that comparative and analogical processes may be a successful direction for improving decision making. Copyright © 2004 John Wiley & Sons, Ltd.
Asker J, Grosskopf B, McKinney CN, Niederle M, Roth AE, Weizsäcker G (2004). Teaching auction strategy using experiments administered via the internet. Journal of Economic Education, 35(4), 330-342.
The authors present an experimental design used to teach concepts in the economics of auctions and implications for e-Business procurement. The experiment is easily administered and can be adapted to many different treatments. The chief innovation is that it does not require the use of a lab or class time. Instead, the design can be implemented on any of the many Web-based auction sites (we use Yahoo!). This design has been used to demonstrate how information is transmitted by bids in an auction and how this can make it difficult for well-informed bidders to profit from their information, leading to disincentives for relatively informed bidders to enter an auction. Consequently, an auction may sometimes be an ineffective mechanism for procurement, compared with other options. The auction experiment shows how information can dramatically affect market outcomes and bidder incentives.
Charness G, Grosskopf B (2004). What makes cheap talk effective? Experimental evidence. Economics Letters, 83(3), 383-389.
In some environments, a player only learns the choice of another player if he or she undertakes a risky choice. While costless preplay communication (cheap talk) has been found to be effective in experimental coordination games, participants have typically learned both own payoffs and the other player's action. Are both of these components necessary for cheap talk to be effective? In our 2×2 stag hunt game, the safe choice always yields the same payoff, so that information about payoffs does not always identify the other player's action. We vary whether information is provided about the other person's play, and whether costless one-way messages can be sent before action choices are made. We find that information provision about the other person's play increases coordination when there are messages, but otherwise has no effect. © 2004 Published by Elsevier B.V.
Grosskopf B (2003). Reinforcement and directional learning in the ultimatum game with responder competition. Experimental Economics, 6(2), 141-158.
Demands in the Ultimatum Game in its traditional form with one proposer and one responder are compared with demands in an Ultimatum Game with responder competition. In this modified form one proposer faces three responders who can accept or reject the split of the pie. Initial demands in both ultimatum games are quite similar, however in the course of the experiment, demands in the ultimatum game with responder competition are significantly higher than in the traditional case with repeated random matching. Individual round-to-round changes of choices that are consistent with directional learning are the driving forces behind the differences between the two learning curves and cannot be tracked by an adjustment process in response to accumulated reinforcements. The importance of combining reinforcement and directional learning is addressed. Moreover, learning transfer between the two ultimatum games is analyzed.
Charness G, Grosskopf B (2001). Relative payoffs and happiness: an experimental study. Journal of Economic Behavior and Organization, 45(3), 301-328.
Some current utility models presume that people are concerned with their relative standing in a reference group. Yet how widespread is this influence? Are some types of people more susceptible to it than others are? Using simple binary decisions and self-reported happiness, we investigate both the prevalence of "difference aversion" and whether happiness levels influence the taste for social comparisons. Our decision tasks distinguish between a person's desire to achieve the social optimum, equality, or advantageous relative standing. Most people appear to disregard relative payoffs, instead typically making choices resulting in higher social payoffs. While we do not find a strong general correlation between happiness and concern for relative payoffs, we do observe that a willingness to lower another person's payoff below one's own (competitive preferences) may be correlated with unhappiness. © 2001 Elsevier Science B.V. | CommonCrawl |
I'm developing a face recognizing application using the face_recognition Python library.
The faces are encoded as 128-dimension floating-point vectors. In addition to this, each named known person has a variance value, which is refined iteratively for each new shot of the face along with the mean vector. I took the refining formula from Wikipedia.
I'm getting some false positives with the recognized faces, which I presume is because the library was developed primarily for Western faces, whereas my intended audience is primarily Southeast Asian. So my primary concern with my code is whether or not I have gotten the mathematics correct.
# previous face encoding and auxiliary info.
n = min(n, 28) # heuristically limited to 28.
sys.exit() # possibly selected wrong face.
Irrelevant note: I used struct.(un)pack to serialize in binary to save space, because the repr of the data is too big.
I can't say I understand exactly what your algorithm does. Even when looking at Wikipedia it is hard to see. So, why don't you just use the reference implementation from there? It has functions with nice names and everything (well, not quite PEP8 compatible, but that could be changed).
Alternatively, once you have vectors of numbers and want to perform fast calculations on them, you probably want to use numpy, which allows you to easily perform some operation on the whole vector, like taking the difference between two vectors, scaling a vector by a scalar, taking the sum, etc.
In addition to using numpy, I expanded some of the names so it is clearer what they are, I made the cutoff value a global constant (you should give it a meaningful name), added a custom exception instead of just dying (the pass after it was unneeded and unreachable BTW), which makes it more clear what happened, wrapped the calling code under a if __name__ == "__main__": guard to allow importing from this script without running it and followed Python's official style-guide, PEP8, by having operators surrounded by one space.
Instead of this norm function, you can also use scipy.linalg.norm, which might be even faster.
However, note that Wikipedia also mentions that this formula suffers from numerical instabilities, since you "repeatedly subtract a small number from a big number which scales with $n$".
This is a bit tricky to implement, since your $x_i$ are actually vectors of numbers, so you would need to find out how to properly reduce it in dimension (in your previous formula you used the norm for that).
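For what it's worth, here is a minimal numpy sketch (my own, with illustrative names, not taken from the post or from the article) of a numerically stabler Welford-style update applied element-wise to vector-valued samples such as the 128-d encodings:

import numpy as np

def update_stats(mean, m2, count, encoding):
    """One online update; mean and m2 have the same shape as encoding."""
    count += 1
    delta = encoding - mean
    mean = mean + delta / count
    m2 = m2 + delta * (encoding - mean)   # running sum of squared deviations
    variance = m2 / count if count > 1 else np.zeros_like(mean)
    return mean, m2, count, variance

# start from the first face encoding:
# mean, m2, count = first_encoding.astype(float), np.zeros(128), 1

A new encoding can still be compared against the running mean with np.linalg.norm(new - mean), and the per-dimension variance gives you something to scale that distance by if you want a per-person threshold.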
A crane is a wonderful tool for putting up a building. It makes the job go very quickly. When the building must go up even faster, more than one crane can be used. However, when there are too many cranes working on the same building, it can get dangerous. As a crane spins around, it can bump into another crane if the operator is not careful. Such an accident could cause the cranes to fall over, possibly causing major damage. Therefore, safety regulations require cranes to be spaced far enough apart so that it is impossible for any part of a crane to touch any part of any other crane. Unfortunately, these regulations limit the number of cranes that can be used on the construction site, slowing down the pace of construction. Your task is to place the cranes on the construction site while respecting the safety regulations.
The construction site is laid out as a square grid. Several places on the grid have been marked as possible crane locations. The arm of each crane has a certain length $r$, and can rotate around the location of the crane. The crane covers the entire area that is no more than $r$ units away from the location of the crane. You are to place the cranes to maximize the total area covered by all the cranes.
The first line of input contains one integer specifying the number of test cases and there will be at most $20$ test cases. Each test case begins with a line containing an integer $C$, the number of possible locations where a crane could be placed. There will be no more than 15 such locations. Each of the following $C$ lines contains three integers $x$, $y$, and $r$, all between $-10\, 000$ and $10\, 000$ inclusive and $r$ will be positive. The first two integers are the grid coordinates of the location, and the third integer is the length of the arm of the crane that can be placed at that location.
For each test case, find the maximum area $A$ that can be covered by cranes, and output a line containing a single integer $B$ such that $A = B \times \pi $. | CommonCrawl |
hyperbolic or spherical or Seifert-fibered with infinite fundamental group.
The three bullets are just different wording of the same theorem.
My question is: what is the method for producing all irreducible closed 3-manifolds? We start with the ones having no incompressible torus in the JSJ theorem. Let's call them family zero manifolds, which are closed hyperbolic, spherical or Seifert-fibered with infinite $\pi_1$. In Friedl, Introduction to 3-manifolds, I read that Seifert-fibered manifolds are finitely covered by an $S^1$-bundle over a surface. The closed hyperbolic manifolds are still something of a mystery to me. In the same place I read that a closed hyperbolic manifold is finitely covered by a surface bundle over the circle (the famous virtually fibered conjecture). Still, this does not help me to understand hyperbolic manifolds, but that would be the topic for another question.
Next we go to family one, i.e. irreducible closed manifolds which have exactly one incompressible torus. When we cut along this torus we obtain either one or two pieces. How can we describe these pieces? How can we produce a hyperbolic or Seifert-fibered manifold having one or two tori as boundary? A good candidate seems to be a knot complement in the sphere $S^3$. How about a circle complement in a family zero member? Say we have such pieces in hand; what are the possible ways of gluing the boundary tori? The automorphism group of the torus up to homotopy is $SL_2(\mathbb Z)$; it is called the mapping class group (MCG) in Friedl's introduction. Thus we have a lot of freedom in gluing these pieces. How can we control this? Our goal is to list all manifolds in family one.
Can someone explain to me whether this procedure works and whether we can name all irreducible closed 3-manifolds by such a method?
Nobody knows how to classify the family zero manifolds. The Seifert ones are pretty well-known and were classified by Seifert himself: each such manifold may be represented as a string like $$(S, (p_1,q_1), \ldots, (p_k,q_k))$$ where $S$ is a surface and the $(p_i,q_i)$ are coprime numbers, and there is a theorem that says very clearly when two different strings represent the same manifold. However, on the hyperbolic side there is yet no classification at all. In principle there are algorithms that "produce all the hyperbolic 3-manifolds" without repetition, but they are not useful in practice. There are of course various important theorems around; the first one to cite should probably be Thurston's hyperbolic Dehn filling. There is also a very concrete fantastic tool, SnapPy, to investigate these manifolds. But we do not have a global satisfying classification.
On the other hand, the higher level manifolds are not a problem. Once you know all the level zero ones, every irreducible manifold is almost uniquely described as a graph with a level zero manifold at each vertex and a $2\times 2$ integer matrix at each edge (to be precise, every level zero manifold should be equipped with a homology basis at each torus boundary). The "almost" here stands for the fact that one should take into account the self-diffeomorphisms of the level zero manifolds: in the Seifert cases these are pretty understood and may have infinite order, in the hyperbolic case they are finite and form precisely the isometry group of the manifold.
Edit : By "level zero" I mean every compact manifold, possibly with boundary made of tori, that is either Seifert or hyperbolic or a torus (semi-)fibration. The third class contains only closed manifolds.
Automatic Chemical Manufacturing is experimenting with a process called self-assembly. In this process, molecules with natural affinity for each other are mixed together in a solution and allowed to spontaneously assemble themselves into larger structures. But there is one problem: sometimes molecules assemble themselves into a structure of unbounded size, which gums up the machinery.
You must write a program to decide whether a given collection of molecules can be assembled into a structure of unbounded size. You should make two simplifying assumptions: 1) the problem is restricted to two dimensions, and 2) each molecule in the collection is represented as a square. The four edges of the square represent the surfaces on which the molecule can connect to other compatible molecules.
An uppercase letter ($A, \ldots , Z$) followed by $+$ or $-$. Two edges are compatible if their labels have the same letter but different signs. For example, $A+$ is compatible with $A-$ but is not compatible with $A+$ or $B-$.
Two zero digits $00$. An edge with this label is not compatible with any edge (not even with another edge labeled $00$).
Assume there is an unlimited supply of molecules of each type, which may be rotated and reflected. As the molecules assemble themselves into larger structures, the edges of two molecules may be adjacent to each other only if they are compatible. It is permitted for an edge, regardless of its connector label, to be connected to nothing (no adjacent molecule on that edge).
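As a quick illustration of the compatibility rule (this helper is mine, not part of the problem statement), two edge labels can be glued exactly when they carry the same letter with opposite signs:

def compatible(a: str, b: str) -> bool:
    """Edge labels are 2-character strings such as 'A+', 'B-' or '00'."""
    if a == "00" or b == "00":      # '00' never bonds, not even to itself
        return False
    return a[0] == b[0] and a[1] != b[1]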
Figure 1 shows an example of three molecule types and a structure of bounded size that can be assembled from them (other bounded structures are also possible with this set of molecules).
Figure 1: Illustration of Sample Input 1.
The input consists of a single test case. A test case consists of two lines. The first contains an integer $n$ ($1 \leq n \leq 40\, 000$) indicating the number of molecule types. The second line contains $n$ eight-character strings, each describing a single type of molecule, separated by single spaces. Each string consists of four two-character connector labels representing the four edges of the molecule in clockwise order.
Display the word unbounded if the set of molecule types can generate a structure of unbounded size. Otherwise, display the word bounded. | CommonCrawl |
Following an illustration in Singer/Thorpe: Lecture Notes in Elementary Topology and Geometry, the example has been drawn by Stefan Kottwitz using jpgfdraw, and programmed by Alain Matthes on TeX.SX.
#1 Traian Surtea, April 13, 2013 at 7:31 a.m.
To improve the "learning power" of this example it would be better to use different colours for the paths $\alpha (t) = F(t,0)$ (let's say red), $\beta (t) = F(t,1)$ (let's say blue) and to draw an intermediate path let's say $\gamma (t) = F(t,1/2)$ with magenta. Congrats to both authors. | CommonCrawl |
Consider a discrete 'blurred' output $h[t]$ given by the convolution of filter $f[t]$ and signal $g[t]$. This question considers recovering $g[t]$ from a window (subset) of $h[t]$. This causes the problem to be under-determined.
My understanding is that most approaches are based on regularisation however most sources seem to focus on ill-conditioning resulting from additive noise rather than ill-conditioning resulting from a windowed output. What are the approaches typically used for this problems? Can you apply it to this toy example?
Using more conventional notation, let $x_k$ and $y_k$ denote the $k$-th input and $k$-th output, respectively.
We have an underdetermined system of $5$ linear equations in $6$ unknowns. Let $x_0$ be a parameter.
Note that if we choose $x_0 = 1$ then we recover the original input vector. You chose $x_0 = 3$ instead.
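To see the parametrisation concretely, here is a small numpy sketch (mine; the two filter taps and the five window values are made up, since the post does not give them) that builds the $5\times 6$ convolution system and exposes the one-parameter family of solutions:

import numpy as np
from scipy.linalg import null_space

f = np.array([0.5, 0.5])                    # assumed 2-tap moving-average filter
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # assumed observed window of h[t]

n_out, n_in = len(y), len(y) + len(f) - 1   # 5 equations, 6 unknowns
A = np.zeros((n_out, n_in))
for k in range(n_out):
    A[k, k:k + len(f)] = f[::-1]            # y[k] = f[1]*x[k] + f[0]*x[k+1]

x_particular, *_ = np.linalg.lstsq(A, y, rcond=None)  # minimum-norm solution
basis = null_space(A)                        # 6x1 matrix: the free direction
# every solution is x_particular + t * basis[:, 0]; fixing x[0] fixes t

Any regulariser (smoothness, non-negativity, sparsity) then simply selects a preferred member of this one-parameter family, which is how the usual noise-oriented machinery carries over to the windowed case.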
Goro Akagi, Jun Kobayashi, Mitsuharu Ôtani. Principle of symmetric criticality and evolution equations. Conference Publications, 2003, 2003(Special): 1-10. doi: 10.3934/proc.2003.2003.1.
Goro Akagi, Mitsuharu Ôtani. Evolution equations and subdifferentials in Banach spaces. Conference Publications, 2003, 2003(Special): 11-20. doi: 10.3934/proc.2003.2003.11.
Carlo Alabiso, Mario Casartelli. Quasi Normal modes in stochastic domains. Conference Publications, 2003, 2003(Special): 21-29. doi: 10.3934\/proc.2003.2003.21.
Ernesto Aranda, Pablo Pedregal. Constrained envelope for a general class of design problems. Conference Publications, 2003, 2003(Special): 30-41. doi: 10.3934\/proc.2003.2003.30.
M. R. Arias, R. Benítez. Properties of solutions for nonlinear Volterra integral equations. Conference Publications, 2003, 2003(Special): 42-47. doi: 10.3934/proc.2003.2003.42.
Yury Arlinskiĭ, Eduard Tsekanovskiĭ. Constant J-unitary factor and operator-valued transfer functions. Conference Publications, 2003, 2003(Special): 48-56. doi: 10.3934/proc.2003.2003.48.
Sergei A. Avdonin, Boris P. Belinskiy. Controllability of a string under tension. Conference Publications, 2003, 2003(Special): 57-67. doi: 10.3934\/proc.2003.2003.57.
L. Bakker. A reducible representation of the generalized symmetry group of a quasiperiodic flow. Conference Publications, 2003, 2003(Special): 68-77. doi: 10.3934\/proc.2003.2003.68.
Cristina M. Ballantine. Ramanujan type graphs and bigraphs. Conference Publications, 2003, 2003(Special): 78-82. doi: 10.3934\/proc.2003.2003.78.
John V. Baxley, Philip T. Carroll. Nonlinear boundary value problems with multiple positive solutions. Conference Publications, 2003, 2003(Special): 83-90. doi: 10.3934\/proc.2003.2003.83.
Boris P. Belinskiy, Peter Caithamer. Stochastic stability of some mechanical systems with a multiplicative white noise. Conference Publications, 2003, 2003(Special): 91-99. doi: 10.3934\/proc.2003.2003.91.
Hüseyin Bereketoğlu, Mihály Pituk. Asymptotic constancy for nonhomogeneous linear differential equations with unbounded delays. Conference Publications, 2003, 2003(Special): 100-107. doi: 10.3934/proc.2003.2003.100.
Bi Ping, Maoan Han. Oscillation of second order difference equations with advanced argument. Conference Publications, 2003, 2003(Special): 108-112. doi: 10.3934\/proc.2003.2003.108.
Joseph A. Biello, Peter R. Kramer, Yuri Lvov. Stages of energy transfer in the FPU model. Conference Publications, 2003, 2003(Special): 113-122. doi: 10.3934\/proc.2003.2003.113.
Lora Billings, Erik M. Bollt, David Morgan, Ira B. Schwartz. Stochastic global bifurcation in perturbed Hamiltonian systems. Conference Publications, 2003, 2003(Special): 123-132. doi: 10.3934\/proc.2003.2003.123.
Henk Broer, Aaron Hagen, Gert Vegter. Numerical approximation of normally hyperbolic invariant manifolds. Conference Publications, 2003, 2003(Special): 133-140. doi: 10.3934\/proc.2003.2003.133.
Daria Bugajewska, Mirosława Zima. On positive solutions of nonlinear fractional differential equations. Conference Publications, 2003, 2003(Special): 141-146. doi: 10.3934/proc.2003.2003.141.
Daria Bugajewska, Mirosława Zima. On the spectral radius of linearly bounded operators and existence results for functional-differential equations. Conference Publications, 2003, 2003(Special): 147-155. doi: 10.3934/proc.2003.2003.147.
G. Calafiore, M.C. Campi. A learning theory approach to the construction of predictor models. Conference Publications, 2003, 2003(Special): 156-166. doi: 10.3934\/proc.2003.2003.156.
T. Candan, R.S. Dahiya. Oscillation of mixed neutral differential equations with forcing term. Conference Publications, 2003, 2003(Special): 167-172. doi: 10.3934\/proc.2003.2003.167.
Anna Maria Candela, J.L. Flores, M. Sánchez. A quadratic Bolza-type problem in a non-complete Riemannian manifold. Conference Publications, 2003, 2003(Special): 173-181. doi: 10.3934/proc.2003.2003.173.
Hongwei Chen. Blow-up estimates of positive solutions of a reaction-diffusion system. Conference Publications, 2003, 2003(Special): 182-188. doi: 10.3934\/proc.2003.2003.182.
Jorge Cortés. Energy conserving nonholonomic integrators. Conference Publications, 2003, 2003(Special): 189-199. doi: 10.3934/proc.2003.2003.189.
C. T. Cremins. Existence theorems for weakly inward semilinear operators. Conference Publications, 2003, 2003(Special): 200-205. doi: 10.3934\/proc.2003.2003.200.
Yu. Dabaghian, R. V. Jensen, R. Blümel. Integrability in 1D quantum chaos. Conference Publications, 2003, 2003(Special): 206-212. doi: 10.3934/proc.2003.2003.206.
M. DeDeo, M. Martínez, A. Medrano, M. Minei, H. Stark, A. Terras. Spectra of Heisenberg graphs over finite rings. Conference Publications, 2003, 2003(Special): 213-222. doi: 10.3934/proc.2003.2003.213.
M. Delgado-Téllez, Alberto Ibort. On the geometry and topology of singular optimal control problems and their solutions. Conference Publications, 2003, 2003(Special): 223-233. doi: 10.3934/proc.2003.2003.223.
Joshua Du. Kelvin-Helmholtz instability waves of supersonic multiple jets. Conference Publications, 2003, 2003(Special): 234-245. doi: 10.3934\/proc.2003.2003.234.
Kurt Ehlers. Geometric equivalence on nonholonomic three-manifolds. Conference Publications, 2003, 2003(Special): 246-255. doi: 10.3934\/proc.2003.2003.246.
R.H. Fabiano, J. Turi. Making the numerical abscissa negative for a class of neutral equations. Conference Publications, 2003, 2003(Special): 256-262. doi: 10.3934\/proc.2003.2003.256.
Wenying Feng. Solutions and positive solutions for some three-point boundary value problems. Conference Publications, 2003, 2003(Special): 263-272. doi: 10.3934\/proc.2003.2003.263.
Daniel Franco, Donal O'Regan. Existence of solutions to second order problems with nonlinear boundary conditions. Conference Publications, 2003, 2003(Special): 273-280. doi: 10.3934/proc.2003.2003.273.
Michael L. Frankel, Victor Roytburd. Fractal dimension of attractors for a Stefan problem. Conference Publications, 2003, 2003(Special): 281-287. doi: 10.3934\/proc.2003.2003.281.
Harald Friedrich. Semiclassical and large quantum number limits of the Schrödinger equation. Conference Publications, 2003, 2003(Special): 288-294. doi: 10.3934/proc.2003.2003.288.
Marcelo F. Furtado, Liliane A. Maia, Elves A. B. Silva. Systems with coupling in $\mathbb{R}^N$ class of noncoercive potentials. Conference Publications, 2003, 2003(Special): 295-304. doi: 10.3934/proc.2003.2003.295.
Cedric Galusinski, Serguei Zelik. Uniform Gevrey regularity for the attractor of a damped wave equation. Conference Publications, 2003, 2003(Special): 305-312. doi: 10.3934\/proc.2003.2003.305.
Marta García-Huidobro, Raul Manásevich. A three point boundary value problem containing the operator. Conference Publications, 2003, 2003(Special): 313-319. doi: 10.3934/proc.2003.2003.313.
Leszek Gasiński. Optimal control problem of Bolza-type for evolution hemivariational inequality. Conference Publications, 2003, 2003(Special): 320-326. doi: 10.3934/proc.2003.2003.320.
Filippo Gazzola. Critical exponents which relate embedding inequalities with quasilinear elliptic problems. Conference Publications, 2003, 2003(Special): 327-335. doi: 10.3934\/proc.2003.2003.327.
Filippo Gazzola, Lorenzo Pisani. Remarks on quasilinear elliptic equations as models for elementary particles. Conference Publications, 2003, 2003(Special): 336-341. doi: 10.3934\/proc.2003.2003.336.
John R. Graef, R. Savithri, E. Thandapani. Oscillatory properties of third order neutral delay differential equations. Conference Publications, 2003, 2003(Special): 342-350. doi: 10.3934\/proc.2003.2003.342.
Maurizio Grasselli, Vittorino Pata. On the damped semilinear wave equation with critical exponent. Conference Publications, 2003, 2003(Special): 351-358. doi: 10.3934\/proc.2003.2003.351.
Ellina Grigorieva, Evgenii Khailov. A nonlinear controlled system of differential equations describing the process of production and sales of a consumer good. Conference Publications, 2003, 2003(Special): 359-364. doi: 10.3934\/proc.2003.2003.359.
Guo Ben-Yu, Wang Zhong-Qing. Modified Chebyshev rational spectral method for the whole line. Conference Publications, 2003, 2003(Special): 365-374. doi: 10.3934\/proc.2003.2003.365.
Daniel Guo, John Drake. A global semi-Lagrangian spectral model for the reformulated shallow water equations. Conference Publications, 2003, 2003(Special): 375-385. doi: 10.3934\/proc.2003.2003.375.
Manuel Gutiérrez. Lorentz geometry technique in nonimaging optics. Conference Publications, 2003, 2003(Special): 386-392. doi: 10.3934/proc.2003.2003.386.
Takahiro Hashimoto. Existence and nonexistence of nontrivial solutions of some nonlinear fourth order elliptic equations. Conference Publications, 2003, 2003(Special): 393-402. doi: 10.3934\/proc.2003.2003.393.
Min He. On continuity in parameters of integrated semigroups. Conference Publications, 2003, 2003(Special): 403-412. doi: 10.3934\/proc.2003.2003.403.
J. William Hoffman. Remarks on the zeta function of a graph. Conference Publications, 2003, 2003(Special): 413-422. doi: 10.3934\/proc.2003.2003.413.
S. Huff, G. Olumolode, N. Pennington, A. Peterson. Oscillation of an Euler-Cauchy dynamic equation. Conference Publications, 2003, 2003(Special): 423-431. doi: 10.3934\/proc.2003.2003.423.
Gennaro Infante. Positive solutions of differential equations with nonlinear boundary conditions. Conference Publications, 2003, 2003(Special): 432-438. doi: 10.3934\/proc.2003.2003.432.
Hiroshi Inoue, Kei Matsuura, Mitsuharu Ôtani. Strong solutions of magneto-micropolar fluid equation. Conference Publications, 2003, 2003(Special): 439-448. doi: 10.3934/proc.2003.2003.439.
Harry L. Johnson, David Russell. Transfer function approach to output specification in certain linear distributed parameter systems. Conference Publications, 2003, 2003(Special): 449-458. doi: 10.3934\/proc.2003.2003.449.
P. M. Jordan, P. Puri. Some recent findings concerning unsteady dipolar fluid flows. Conference Publications, 2003, 2003(Special): 459-468. doi: 10.3934\/proc.2003.2003.459.
Shuichi Kawashima, Shinya Nishibata, Masataka Nishikawa. Asymptotic stability of stationary waves for two-dimensional viscous conservation laws in half plane. Conference Publications, 2003, 2003(Special): 469-476. doi: 10.3934\/proc.2003.2003.469.
C. M. Khalique, G. S. Pai. Conservation laws and invariant solutions for soil water equations. Conference Publications, 2003, 2003(Special): 477-481. doi: 10.3934\/proc.2003.2003.477.
Peter R. Kramer, Joseph A. Biello, Yuri Lvov. Application of weak turbulence theory to FPU model. Conference Publications, 2003, 2003(Special): 482-491. doi: 10.3934\/proc.2003.2003.482.
Helmut Kröger. From quantum action to quantum chaos. Conference Publications, 2003, 2003(Special): 492-500. doi: 10.3934/proc.2003.2003.492.
K. Q. Lan. Multiple positive eigenvalues of conjugate boundary value problems with singularities. Conference Publications, 2003, 2003(Special): 501-506. doi: 10.3934\/proc.2003.2003.501.
Bavo Langerock. Optimal control problems with variable endpoints. Conference Publications, 2003, 2003(Special): 507-516. doi: 10.3934\/proc.2003.2003.507.
D. Lannes. Consistency of the KP approximation. Conference Publications, 2003, 2003(Special): 517-525. doi: 10.3934\/proc.2003.2003.517.
Monica Lazzo. Existence and multiplicity results for a class of nonlinear elliptic problems in $\mathbb{R}^N$. Conference Publications, 2003, 2003(Special): 526-535. doi: 10.3934/proc.2003.2003.526.
Dung Le. Exponential attractors for a chemotaxis growth system on domains of arbitrary dimension. Conference Publications, 2003, 2003(Special): 536-543. doi: 10.3934\/proc.2003.2003.536.
Wei Feng, Shuhua Hu, Xin Lu. Optimal controls for a 3-compartment model for cancer chemotherapy with quadratic objective. Conference Publications, 2003, 2003(Special): 544-553. doi: 10.3934\/proc.2003.2003.544.
Timothy J. Lewis. Phase-locking in electrically coupled non-leaky integrate-and-fire neurons. Conference Publications, 2003, 2003(Special): 554-562. doi: 10.3934\/proc.2003.2003.554.
Gary Lieberman. Nonlocal problems for quasilinear parabolic equations in divergence form. Conference Publications, 2003, 2003(Special): 563-570. doi: 10.3934\/proc.2003.2003.563.
Torsten Lindström. Discrete models and Fisher's maximum principle in ecology. Conference Publications, 2003, 2003(Special): 571-579. doi: 10.3934/proc.2003.2003.571.
Eduardo Liz, Victor Tkachenko, Sergei Trofimchuk. Yorke and Wright 3/2-stability theorems from a unified point of view. Conference Publications, 2003, 2003(Special): 580-589. doi: 10.3934/proc.2003.2003.580.
Chunqing Lu. Asymptotic solutions of a nonlinear equation. Conference Publications, 2003, 2003(Special): 590-595. doi: 10.3934\/proc.2003.2003.590.
Harald Markum, Rainer Pullirsch. Classical and quantum chaos in fundamental field theories. Conference Publications, 2003, 2003(Special): 596-603. doi: 10.3934\/proc.2003.2003.596.
S. L. Ma'u, P. Ramankutty. An averaging method for the Helmholtz equation. Conference Publications, 2003, 2003(Special): 604-609. doi: 10.3934/proc.2003.2003.604.
Roderick V.N. Melnik, Ningning Song, Per Sandholdt. Dynamics of torque-speed profiles for electric vehicles and nonlinear models based on differential-algebraic equations. Conference Publications, 2003, 2003(Special): 610-617. doi: 10.3934\/proc.2003.2003.610.
Mariusz Michta. On solutions to stochastic differential inclusions. Conference Publications, 2003, 2003(Special): 618-622. doi: 10.3934\/proc.2003.2003.618.
Ronald E. Mickens. Positivity preserving discrete model for the coupled ODE's modeling glycolysis. Conference Publications, 2003, 2003(Special): 623-629. doi: 10.3934/proc.2003.2003.623.
Alain Miranville. Existence of solutions for Cahn-Hilliard type equations. Conference Publications, 2003, 2003(Special): 630-637. doi: 10.3934\/proc.2003.2003.630.
Hirobumi Mizuno, Iwao Sato. L-functions and the Selberg trace formulas for semiregular bipartite graphs. Conference Publications, 2003, 2003(Special): 638-646. doi: 10.3934\/proc.2003.2003.638.
Octavian G. Mustafa, Yuri V. Rogovchenko. Existence of square integrable solutions of perturbed nonlinear differential equations. Conference Publications, 2003, 2003(Special): 647-655. doi: 10.3934\/proc.2003.2003.647.
K Najarian. On stochastic stability of dynamic neural models in presence of noise. Conference Publications, 2003, 2003(Special): 656-663. doi: 10.3934\/proc.2003.2003.656.
Matías Navarro, Federico Sánchez-Bringas. Dynamics of principal configurations near umbilics for surfaces in $\mathbb{R}^4$. Conference Publications, 2003, 2003(Special): 664-671. doi: 10.3934/proc.2003.2003.664.
Gaston N'Guerekata. On weak-almost periodic mild solutions of some linear abstract differential equations. Conference Publications, 2003, 2003(Special): 672-677. doi: 10.3934/proc.2003.2003.672.
Sam Northshield. Quasi-regular graphs, cogrowth, and amenability. Conference Publications, 2003, 2003(Special): 678-687. doi: 10.3934\/proc.2003.2003.678.
Rafael Ortega, James R. Ward Jr. A semilinear elliptic system with vanishing nonlinearities. Conference Publications, 2003, 2003(Special): 688-693. doi: 10.3934\/proc.2003.2003.688.
Chun-Gil Park. Stability of a linear functional equation in Banach modules. Conference Publications, 2003, 2003(Special): 694-700. doi: 10.3934\/proc.2003.2003.694.
Paweł Pilarczyk. Topological-numerical approach to the existence of periodic trajectories in ODE's. Conference Publications, 2003, 2003(Special): 701-708. doi: 10.3934/proc.2003.2003.701.
David W. Pravica, Michael J. Spurr. Analytic continuation into the future. Conference Publications, 2003, 2003(Special): 709-716. doi: 10.3934\/proc.2003.2003.709.
Dalibor Pražák. Exponential attractor for the delayed logistic equation with a nonlinear diffusion. Conference Publications, 2003, 2003(Special): 717-726. doi: 10.3934/proc.2003.2003.717.
Maria Alessandra Ragusa. Parabolic systems with non continuous coefficients. Conference Publications, 2003, 2003(Special): 727-733. doi: 10.3934\/proc.2003.2003.727.
Vladimir Răsvan. On the central stability zone for linear discrete-time Hamiltonian systems. Conference Publications, 2003, 2003(Special): 734-741. doi: 10.3934/proc.2003.2003.734.
Paolo Rinaldi, Heinz Schättler. Minimization of the base transit time in semiconductor devices using optimal control. Conference Publications, 2003, 2003(Special): 742-751. doi: 10.3934/proc.2003.2003.742.
Kimun Ryu, Inkyung Ahn. On certain elliptic systems with nonlinear self-cross diffusions. Conference Publications, 2003, 2003(Special): 752-759. doi: 10.3934\/proc.2003.2003.752.
Felix Sadyrbaev. Nonlinear boundary value problems of the calculus of variations. Conference Publications, 2003, 2003(Special): 760-770. doi: 10.3934\/proc.2003.2003.760.
Yasuhisa Saito. A global stability result for an N-species Lotka-Volterra food chain system with distributed time delays. Conference Publications, 2003, 2003(Special): 771-777. doi: 10.3934\/proc.2003.2003.771.
Addolorata Salvatore. Multiple homoclinic orbits for a class of second order perturbed Hamiltonian systems. Conference Publications, 2003, 2003(Special): 778-787. doi: 10.3934\/proc.2003.2003.778.
Barbara A. Shipman. Compactified isospectral sets of complex tridiagonal Hessenberg matrices. Conference Publications, 2003, 2003(Special): 788-797. doi: 10.3934\/proc.2003.2003.788.
Ken Shirakawa. Asymptotic stability for dynamical systems associated with the one-dimensional Frémond model of shape memory alloys. Conference Publications, 2003, 2003(Special): 798-808. doi: 10.3934/proc.2003.2003.798.
Eugenii Shustin. Exponential decay of oscillations in a multidimensional delay differential system. Conference Publications, 2003, 2003(Special): 809-816. doi: 10.3934\/proc.2003.2003.809.
Michael W. Smiley, Howard A. Levine, Marit Nilsen Hamilton. Numerical simulation of capillary formation during the onset of tumor angiogenesis. Conference Publications, 2003, 2003(Special): 817-826. doi: 10.3934\/proc.2003.2003.817.
Jędrzej Śniatycki. Integral curves of derivations on locally semi-algebraic differential spaces. Conference Publications, 2003, 2003(Special): 827-833. doi: 10.3934/proc.2003.2003.827.
Meiyu Su. True laminations for complex Hénon maps. Conference Publications, 2003, 2003(Special): 834-841. doi: 10.3934/proc.2003.2003.834.
Emmanuel Trélat. Optimal control of a space shuttle, and numerical simulations. Conference Publications, 2003, 2003(Special): 842-851. doi: 10.3934/proc.2003.2003.842.
Konstantina Trivisa. Global existence and asymptotic analysis of solutions to a model for the dynamic combustion of compressible fluids. Conference Publications, 2003, 2003(Special): 852-863. doi: 10.3934\/proc.2003.2003.852.
Larry Turyn. Cellular neural networks: asymmetric templates and spatial chaos. Conference Publications, 2003, 2003(Special): 864-871. doi: 10.3934\/proc.2003.2003.864.
C. van der Mee, Stella Vernier Piro. Travelling waves for solid-gas reaction-diffusion systems. Conference Publications, 2003, 2003(Special): 872-879. doi: 10.3934\/proc.2003.2003.872.
Cheng Wang. The primitive equations formulated in mean vorticity. Conference Publications, 2003, 2003(Special): 880-887. doi: 10.3934\/proc.2003.2003.880.
Dehua Wang. Global existence and dynamical properties of large solutions for combustion flows. Conference Publications, 2003, 2003(Special): 888-897. doi: 10.3934\/proc.2003.2003.888.
D. Warren, K Najarian. Learning theory applied to Sigmoid network classification of protein biological function using primary protein structure. Conference Publications, 2003, 2003(Special): 898-904. doi: 10.3934\/proc.2003.2003.898.
J. R. L. Webb. Remarks on positive solutions of some three point boundary value problems. Conference Publications, 2003, 2003(Special): 905-915. doi: 10.3934\/proc.2003.2003.905.
Wen-Qing Xu. Boundary conditions for multi-dimensional hyperbolic relaxation problems. Conference Publications, 2003, 2003(Special): 916-925. doi: 10.3934\/proc.2003.2003.916.
Xiangjin Xu. Multiple solutions of super-quadratic second order dynamical systems. Conference Publications, 2003, 2003(Special): 926-934. doi: 10.3934\/proc.2003.2003.926.
Noriaki Yamazaki. Global attractors for non-autonomous multivalued dynamical systems associated with double obstacle problems. Conference Publications, 2003, 2003(Special): 935-944. doi: 10.3934\/proc.2003.2003.935.
Ming Yang, Chulin Li. Valuing investment project in competitive environment. Conference Publications, 2003, 2003(Special): 945-950. doi: 10.3934\/proc.2003.2003.945.
Tianliang Yang, J. M. McDonough. Solution filtering technique for solving Burgers' equation. Conference Publications, 2003, 2003(Special): 951-959. doi: 10.3934/proc.2003.2003.951.
S. Zelik. Formally gradient reaction-diffusion systems in $\mathbb{R}^n$ have zero spatio-temporal topological entropy. Conference Publications, 2003, 2003(Special): 960-966. doi: 10.3934/proc.2003.2003.960.
Dmitry V. Zenkov. Linear conservation laws of nonholonomic systems with symmetry. Conference Publications, 2003, 2003(Special): 967-976. doi: 10.3934\/proc.2003.2003.967. | CommonCrawl |