The molecular mass calculations made using the atomic masses on the periodic table apply for general calculations, but aren't accurate when known isotopes of atoms are present in a compound. And strictly speaking, a mixture doesn't have a molar mass, as molar mass is a property of a compound - something with a well defined molecular formula. A mixture doesn't have a formula; it contains separate compounds, each one having a separate formula.

For example, if you want to find the molar mass of carbon, you would find the atomic mass of carbon on the periodic table, and this is equal to the molar mass in grams per mole. So, in our example, carbon has a molar mass of 12.01 grams per mole. There are a few exceptions to this rule.

Molar mass of a substance is the mass of one mole of the substance in grams. It has a unit of grams per mole. One mole of any substance contains $6.02 \times 10^{23}$ particles. First, determine the mass in grams of each element in the compound by multiplying the subscript by the atomic mass from the periodic table. Next, add them all together to get the molar mass in grams of the entire compound.

The density of a substance is its mass per unit volume - so you're halfway there. The *volume* occupied by a certain molecule depends on how well it "packs" in with other molecules of the same compound.

Mole conversions: reading compounds that have water attached (hydrates). The • in CuSO$_4$ • 5H$_2$O represents CuSO$_4$ that has 5 waters attached.

"Molar Mass Finder" provides a list of possible molecular ion molar masses from given fragment masses. You can then use the Fragment Finder or Formula Finder applications to find possible formulas for your compound.

To find the molar mass of a compound, you have to write the chemical formula, list the number of atoms of each element, and multiply this number by the molar mass of the element. Example 3: Calculate the molar mass of water.
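As a tiny illustration of that recipe (the atomic masses below are rounded periodic-table values I am assuming, not figures from the text above):

```python
# Minimal sketch: molar mass from subscripts and (assumed, rounded) atomic masses.
ATOMIC_MASS = {"H": 1.008, "O": 16.00, "C": 12.01}  # g/mol, approximate values

def molar_mass(formula_counts):
    """formula_counts maps element symbol -> subscript, e.g. {"H": 2, "O": 1} for water."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())

print(molar_mass({"H": 2, "O": 1}))  # water: roughly 18.02 g/mol
```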
CommonCrawl
So far on QuantStart we have discussed algorithmic trading strategy identification, successful backtesting, securities master databases and how to construct a software research environment. It is now time to turn our attention towards forming actual trading strategies and how to implement them. One of the key trading concepts in the quantitative toolbox is that of mean reversion. This process refers to a time series that displays a tendency to revert to its historical mean value. Mathematically, such a (continuous) time series is referred to as an Ornstein-Uhlenbeck process. This is in contrast to a random walk (Brownian motion), which has no "memory" of where it has been at each particular instance of time. The mean-reverting property of a time series can be exploited in order to produce profitable trading strategies. In this article we are going to outline the statistical tests necessary to identify mean reversion. In particular, we will study the concept of stationarity and how to test for it.

A continuous mean-reverting time series can be represented by an Ornstein-Uhlenbeck stochastic differential equation, $dx_t = \theta (\mu - x_t)\,dt + \sigma\,dW_t$, where $\theta$ is the rate of reversion to the mean, $\mu$ is the mean value of the process, $\sigma$ is the variance of the process and $W_t$ is a Wiener Process or Brownian Motion. In a discrete setting the equation states that the change of the price series in the next time period is proportional to the difference between the mean price and the current price, with the addition of Gaussian noise. This property motivates the Augmented Dickey-Fuller Test, which we will describe below.

The ADF test is based on a lag-$p$ linear model of the price changes, $\Delta y_t = \alpha + \beta t + \gamma y_{t-1} + \delta_1 \Delta y_{t-1} + \cdots + \delta_{p-1} \Delta y_{t-p+1} + \epsilon_t$, where $\alpha$ is a constant, $\beta$ represents the coefficient of a temporal trend and $\Delta y_t = y(t)-y(t-1)$. The role of the ADF hypothesis test is to consider the null hypothesis that $\gamma=0$, which would indicate (with $\alpha = \beta = 0$) that the process is a random walk and thus non mean reverting. If the hypothesis that $\gamma=0$ can be rejected then the following movement of the price series is proportional to the current price and thus it is unlikely to be a random walk. Dickey and Fuller have previously calculated the distribution of this test statistic, which allows us to determine the rejection of the hypothesis for any chosen percentage critical value. The test statistic is a negative number and thus in order to be significant beyond the critical values, the number must be more negative than these values, i.e. less than the critical values. A key practical issue for traders is that any constant long-term drift in a price is of a much smaller magnitude than any short-term fluctuations and so the drift is often assumed to be zero ($\beta=0$) for the model. Since we are considering a lag model of order $p$, we need to actually set $p$ to a particular value. It is usually sufficient, for trading research, to set $p=1$ to allow us to reject the null hypothesis.

To calculate the Augmented Dickey-Fuller test we can make use of the pandas and statsmodels libraries. The former provides us with a straightforward method of obtaining Open-High-Low-Close-Volume (OHLCV) data from Yahoo Finance, while the latter wraps the ADF test in an easy-to-call function. We will carry out the ADF test on a sample price series of Google stock, from 1st January 2000 to 1st January 2013. Here is the output of the Augmented Dickey-Fuller test for Google over the period. The first value is the calculated test-statistic, while the second value is the p-value. The fourth is the number of data points in the sample.
The fifth value, the dictionary, contains the critical values of the test-statistic at the 1, 5 and 10 percent values respectively. Since the calculated value of the test statistic is larger than any of the critical values at the 1, 5 or 10 percent levels, we cannot reject the null hypothesis of $\gamma=0$ and thus we are unlikely to have found a mean reverting time series. An alternative means of identifying a mean reverting time series is provided by the concept of stationarity, which we will now discuss.

A time series (or stochastic process) is defined to be strongly stationary if its joint probability distribution is invariant under translations in time or space. In particular, and of key importance for traders, the mean and variance of the process do not change over time or space and they each do not follow a trend. A critical feature of stationary price series is that the prices within the series diffuse from their initial value at a rate slower than that of a Geometric Brownian Motion. By measuring the rate of this diffusive behaviour we can identify the nature of the time series. We will now outline a calculation, namely the Hurst Exponent, which helps us to characterise the stationarity of a time series.

The goal of the Hurst Exponent is to provide us with a scalar value that will help us to identify (within the limits of statistical estimation) whether a series is mean reverting, random walking or trending. In addition to characterisation of the time series the Hurst Exponent also describes the extent to which a series behaves in the manner categorised. For instance, a value of $H$ near 0 is a highly mean reverting series, while for $H$ near 1 the series is strongly trending.

From this output we can see that the Geometric Brownian Motion possesses a Hurst Exponent, $H$, that is almost exactly 0.5. The mean reverting series has $H$ almost equal to zero, while the trending series has $H$ close to 1. Interestingly, Google has $H$ also nearly equal to 0.5, indicating that it is extremely close to a geometric random walk (at least for the sample period we're making use of!).

While we now have a means of characterising the nature of a price time series, we have yet to discuss how statistically significant this value of $H$ is. We need to be able to determine if we can reject the null hypothesis that $H=0.5$ to ascertain mean reverting or trending behaviour. In subsequent articles we will describe how to calculate whether $H$ is statistically significant. In addition, we will consider the concept of cointegration, which will allow us to create our own mean reverting time series from multiple differing price series. Finally, we will tie these statistical techniques together in order to form a basic mean reverting trading strategy.
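The article's own listings (the data download, the ADF output and the Hurst calculation) are not reproduced in this copy, so here is a minimal sketch of the two calculations it describes, using statsmodels' `adfuller` and a simple variance-of-lagged-differences estimate of the Hurst exponent. Since the original Google download from Yahoo Finance is not shown, the sketch runs on synthetic random-walk, mean-reverting and trending series instead; the function and variable names are mine, not the article's.

```python
# Sketch: ADF test and Hurst exponent on synthetic stand-ins for the price series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def hurst(ts, max_lag=100):
    """Estimate H from how the std. dev. of lagged differences scales with the lag."""
    ts = np.asarray(ts, dtype=float)
    lags = range(2, max_lag)
    tau = [np.std(ts[lag:] - ts[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(tau), 1)
    return slope  # ~0 mean reverting, ~0.5 random walk, ~1 trending

rng = np.random.default_rng(42)
gbm = np.log(np.cumsum(rng.standard_normal(10000)) + 1000)       # random walk
mr = np.log(rng.standard_normal(10000) + 1000)                   # noise around a level
tr = np.log(np.cumsum(rng.standard_normal(10000) + 1.0) + 1000)  # trending

for name, series in [("GBM", gbm), ("Mean-reverting", mr), ("Trending", tr)]:
    print(name, round(hurst(series), 2))

# adfuller returns: test statistic, p-value, lags used, nobs, critical values, icbest
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(mr, maxlag=1)
print("ADF statistic:", round(stat, 2), "critical values:", crit)
```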
CommonCrawl
is Noetherian and geometrically regular over $\kappa (\mathfrak p)$.
CommonCrawl
A well-known example in quantum mechanics is that of a finite rectangular potential well with a rectangular bump in the middle. I guess this closely approximates the "umbrella" effect of the $NH_3$ molecule. But this potential is not solvable analytically. I want to know if there is a solvable Hamiltonian known which mimics the effects of this potential - one from which one can exactly see the effect of the width of the bump, the height of the bump, or the well width on either side of the bump on the energy levels and the wave-functions. @Squark By solvable - I mean exactly solvable :) From that transcendental equation is it possible to see any of the dependencies that I mentioned? I have seen in places that the kind of potential I talked of is modelled by a $2 \times 2$ matrix whose $11$ and $22$ components are, say, $H_0$, and whose off-diagonal elements are both, say, $-\Delta$. This is solvable exactly and does help reproduce some of the effects, like the fine double splitting of the energy (typical of the ammonia spectrum) as $\Delta$ is moved from $0$ to non-zero. @Squark But this model Hamiltonian is still not sophisticated enough to capture the effect of the width, height and position of the bump on the spectrum and wave-functions. Hence I am looking for something better. For instance, one would want to make statements about how fast the probability density maximum oscillates from the right to the left as a function of the width, height and position of the bump - or how the time evolution is affected by whatever linear superposition of states (say the ground and the first excited) one starts off with. @Squark Maybe you can elaborate more on how you think these fine details can be obtained without actually solving - as you seem to suggest.
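For concreteness, the two-level model mentioned in the comments can be written out; this is the standard textbook treatment of the ammonia doublet, added here for illustration rather than taken from the discussion:

$$H = \begin{pmatrix} H_0 & -\Delta \\ -\Delta & H_0 \end{pmatrix}, \qquad E_{s,a} = H_0 \mp \Delta, \qquad |s\rangle,\,|a\rangle = \tfrac{1}{\sqrt{2}}\bigl(|L\rangle \pm |R\rangle\bigr),$$

so the doublet splitting is $2\Delta$, and a state prepared in the left well oscillates to the right with probability $P_R(t) = \sin^2(\Delta t/\hbar)$, i.e. with inversion frequency $2\Delta/\hbar$. What this model cannot supply - and what the question is really after - is how $\Delta$ itself depends on the barrier: in a WKB estimate (an approximation, not an exact result) $\Delta \sim \hbar\omega_0\, e^{-\frac{1}{\hbar}\int \sqrt{2m(V(x)-E)}\,dx}$, which at least shows the splitting shrinking exponentially with the width and height of the central bump.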
CommonCrawl
Abstract: We show that the late-decaying particle scenario may be realized in the supersymmetric singlet majoron model with the majoron scale $10-200$ TeV. The smajoron decaying into two neutrinos is the late-decaying particle with the mass $0.1-1$ TeV and the life-time $2\times10^3-8\times10^4$ seconds. The lower limit of the majorino mass is $4-40$ TeV in order to avoid the overclosure of the universe due to the decay-produced LSP. The muon neutrino and the tau neutrino can be used to explain the atmospheric and the solar neutrino deficit.
CommonCrawl
Abstract: We consider the first mixed problem in a cylindrical domain $D=(0,\infty)\times\Omega$ for a pseudo-differential parabolic equation with homogeneous Dirichlet boundary conditions and a finitely supported initial function. We find upper bounds for the $L_2$-norm of a solution as $t\to\infty$ in terms of a geometric characteristic introduced earlier by the author for an unbounded domain $\Omega\subset\mathbb R^n$, $n\ge 2$, in the case of a higher-order parabolic equation. Keywords: stabilization of solutions, pseudo-differential parabolic equations, unbounded domain, mixed problem.
CommonCrawl
Abstract: In this paper we consider the minimum time population transfer problem for the $z$-component of the spin of a (spin 1/2) particle driven by a magnetic field, controlled along the $x$ axis, with bounded amplitude. On the Bloch sphere (i.e. after a suitable Hopf projection), this problem can be attacked with techniques of optimal syntheses on 2-D manifolds. Let $(-E,E)$ be the two energy levels, and $|\Omega(t)|\leq M$ the bound on the field amplitude. For each pair of values $E$ and $M$, we determine the time optimal synthesis starting from the level $-E$ and we provide the explicit expression of the time optimal trajectories steering state one to state two, in terms of a parameter that can be computed by numerically solving a suitable equation. For $M/E\ll 1$, every time optimal trajectory is bang-bang and in particular the corresponding control is periodic with frequency of the order of the resonance frequency $\omega_R=2E$. On the other hand, for $M/E>1$, the time optimal trajectory steering state one to state two is bang-bang with exactly one switching. For fixed $E$ we also prove that for $M\to\infty$ the time needed to reach state two tends to zero. In the case $M/E>1$ there are time optimal trajectories containing a singular arc. Finally we compare these results with some known results of Khaneja, Brockett and Glaser and with those obtained by controlling the magnetic field both on the $x$ and $y$ directions (or with one external field, but in the rotating wave approximation). As a byproduct we prove that the qualitative shape of the time optimal synthesis presents different patterns that cyclically alternate as $M/E\to0$, giving a partial proof of a conjecture formulated in a previous paper.
CommonCrawl
In how many ways can you tile a $3\times n$ rectangle with $2\times 1$ dominoes? Here is a sample tiling of a $3\times 12$ rectangle. Input consists of several test cases followed by a line containing -1. Each test case is a line containing an integer $0 \leq n \leq 30$. For each test case, output one integer number giving the number of possible tilings.
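One standard way to compute these counts (my sketch, not part of the problem statement) uses the known linear recurrence for $3\times n$ domino tilings, $f(n) = 4f(n-2) - f(n-4)$ with $f(0)=1$, $f(2)=3$, and $f(n)=0$ for odd $n$:

```python
# Sketch solution: tilings of a 3 x n rectangle with 2 x 1 dominoes.
import sys

def tilings_3xn(n):
    """Number of tilings via f(n) = 4*f(n-2) - f(n-4); odd widths have none."""
    if n % 2 == 1:
        return 0
    f = [0] * (max(n, 2) + 1)
    f[0] = 1
    if n >= 2:
        f[2] = 3
    for k in range(4, n + 1, 2):
        f[k] = 4 * f[k - 2] - f[k - 4]
    return f[n]

for line in sys.stdin:
    token = line.strip()
    if not token:
        continue
    n = int(token)
    if n == -1:          # terminator, as described in the input section
        break
    print(tilings_3xn(n))
```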
CommonCrawl
M. Vanzini, "Auxiliary systems for observables: dynamical local connector approximation for electron addition and removal spectra", Université Paris-Saclay, 2018. J. Sky Zhou, Gatti, M., Kas, J. J., Rehr, J. J., and Reining, L., "Cumulant Green's function calculations of plasmon satellites in bulk sodium: Influence of screening and the crystal environment", Phys. Rev. B, vol. 97. 2018. A. Aouina, "Development of functionals for observables", vol. Master. 2018. M. J. Mohn, Hambach, R., Wachsmuth, P., Giorgetti, C., and Kaiser, U., "Dielectric properties of graphene/MoS2 heterostructures from ab initio calculations and electron energy-loss experiments", Phys. Rev. B , vol. 97. 2018. A. Seidu, Marini, A., and Gatti, M., "Dynamical correlation effects in a weakly correlated material: Inelastic x-ray scattering and photoemission spectra of beryllium", Phys. Rev. B, vol. 97. American Physical Society, p. 125144, 2018. V. Balédent, Cerqueira, T. T. F., Sarmiento-Pérez, R., Shukla, A., Bellin, C., Marsi, M., Itié, J. - P., Gatti, M., Marques, M. A. L., Botti, S., and Rueff, J. - P., "High-pressure phases of $\mathrmVO_2$ from the combination of Raman scattering and ab initio structural search", Phys. Rev. B, vol. 97. 2018. F. Pressacco, Uhlíř, V., Gatti, M., Nicolaou, A., Bendounan, A., Arregi, J. Ander, Patel, S. K. K., Fullerton, E. E., Krizmancic, D., and Sirotti, F., "Laser induced phase transition in epitaxial FeRh layers studied by pump-probe valence band photoemission", Structural Dynamics, vol. 5. p. 034501, 2018. L. Prussel and Veniard, V., "Linear electro-optic effect in semiconductors: Ab initio description of the electronic contribution", Phys. Rev. B, vol. 97. 2018. M. Panholzer, Gatti, M., and Reining, L., "Nonlocal and Nonadiabatic Effects in the Charge-Density Response of Solids: A Time-Dependent Density-Functional Approach", Phys. Rev. Lett., vol. 120. American Physical Society, p. 166402, 2018. A. Tararan, Di Sabatino, S., Gatti, M., Taniguchi, T., Watanabe, K., Reining, L., Tizei, L. H. G., Kociak, M., and Zobelli, A., "Optical gap and optically active intragap defects in cubic BN", Phys. Rev. B, vol. 98. American Physical Society, p. 094106, 2018. M. Labeye, Zapata, F., Coccia, E., Veniard, V., ‡ Toulouse, J., Caillat, J., Taïeb, R., and Luppi, E., "Optimal Basis Set for Electron Dynamics in Strong Laser Fields: The case of Molecular Ion H_2^+", J. Chem. Theory Comput., vol. 14. 2018. M. Vanzini, Reining, L., and Gatti, M., "Spectroscopy of the Hubbard dimer: the spectral potential", The European Physical Journal B, vol. 91. p. 192, 2018. Z. Chen, Giorgetti, C., Sjakste, J., Cabouat, R., Veniard, V., Zhang, Z., Taleb-Ibrahimi, A., Papalazarou, E., Marsi, M., Shukla, A., Peretti, J., and Perfetti, L., "Ultrafast electron dynamics reveal the high potential of InSe for hot-carrier optoelectronics", Phys. rev. B, vol. 97. 2018. L. Prussel, "Ab-initio description of optical nonlinear properties of semiconductors in the presence of an electrostatic field", Ecole Polytechnique, Palaiseau, 2017. F. Sottile, "Bethe-Salpeter equation approach in solids". 2017. P. Cudazzo and Gatti, M., "Collective charge excitations of the two-dimensional electride $\mathrmCa_2\mathrmN$", Phys. Rev. B, vol. 96. 2017. J. Koskelo, Fugallo, G., Hakala, M., Gatti, M., Sottile, F., and Cudazzo, P., "Excitons in van der Waals materials: From monolayer to bulk hexagonal boron nitride", Phys. Rev. B, vol. 95. American Physical Society, p. 035125, 2017. M. L. Trolle, Pedersen, T. 
G., and Veniard, V., "Model dielectric function for 2D semiconductors including substrate screening", Scientific Reports, vol. 7. p. 39844 - , 2017. M. Bertocchi, Degoli, E., Veniard, V., Luppi, E., and Ossicini, S., "Second Harmonic Generation in Silicon Based Heterostructures: The Role of Strain and Symmetry", Nanoscience and Nanotechnology Letters, vol. 9. 2017. W. Tarantino, Romaniello, P., Berger, J. A., and Reining, L., "Self-consistent Dyson equation and self-energy functionals: An analysis and illustration on the example of the Hubbard atom", Phys. Rev. B, vol. 96. American Physical Society, p. 045124, 2017. M. Tzavala, "Towards an improved description of spectroscopies for materials with localized electrons: Effective potentials and interactions", Université Paris-Saclay, 2017. F. Sottile, "Ab initio approaches to spectroscopies". SOLEIL, Theory Days, Gif-sur-Yvette (France), 2016.
CommonCrawl
will be true if $K\to\infty$. The right-hand side, we know, is NOT the correct trinomial expansion, so this is the absurdum we are looking for. The reason why it works for $n=2$ and not for $n>2$ can be summarized as: it's due to the mixed terms of higher degree (above 2). And a shortcut would be: the first derivative = the curve. It was very hard work for me (8 years long)... I hope it's close to the full stop (for many reasons). So it's clear that the right-hand side is NOT the correct trinomial expansion, so this is the absurdum we are looking for. I already did another derivation, just twice as long, where it becomes clear that, assuming $A^n=C^n-B^n$ holds in the integers, there is no continuity if we compute in this way from the right ($+1/K$) or from the left side ($-1/K$), since the two limits, once reduced in this way, are different. I will check the computation to avoid errors, and I also have to check which of the possible coolings, or warmings, is the shortest one to show the absurdum. So if all the computation is right, am I allowed to say that this proves that the initial equation must be wrong?
CommonCrawl
How to correct aim in aerial airship combat? Report! Correct aim 2-20 fore, figure in enemy speed at negative 15 knots relative decelerating. Ready? Fire in Sequence on my mark... MARK! LCDR Arnold Dering, Commander of the "X"-Turret, GKS-B-RW41 "Adamanta" NOTICE: I am looking for a solution within the confines given in the question. If your answer does not work without introducing anything to the world (e.g. super precise lenses, magical hotstones, etc.) please take the time to weigh up the pros & cons of whatever you introduce. In my mind-bogglingly breathtaking conworld heavily armoured airships majestically float through the skies with the help of a science-defying, yet well thought-through lighter-than-anything material1. Above the skies they duke it out with each other armed with machine guns, flak, artillery. Machine gun & flak fire saturating patterns & methods against flying enemies are well enough explored and used in our world. What I would like, though, is to have the larger airships not only use their artillery to reduce bunkers and buildings to rubble, but fire shells at each other - in the air! The only issue I see is how to properly report misses in order to adjust aim for the next salvo. If they miss, the shell will simply hit something on the ground - potentially miles away from the target. To solve this I was thinking about introducing a marker shell being fired until a promising bearing is achieved. This marker shell could either be something along the lines of traditional tracer bullets, or a self-detonating shell being primed to explode after a certain time (distance / speed). In my understanding this would provide the spotter/targeter/aimer with a reference point - quite similar to the splash used by World-War navy gunners - being close enough to the target to be actually useful. 1Here's a question about how to control altitude with this physics-defying figment of my imagination. Q: Are there any issues with the described technique(s)? What is the advantage/disadvantage of tracer shells over the detonating shells? What is the advantage/disadvantage of detonating shells over the tracer shells? If none of the techniques can work, why - what fact about aerial combat did I miss out on? In addition, it would be amazing if answers that explain why this does not work included a section proposing alternative solutions that come close to a similar result as what I am describing in the above prose. Technology: The world is set technologically somewhere between the industrial revolution and the invention of internal-combustion-engines. There's been heavy development and optimizations regarding steam-driven-engines as there is, at least in parts of the world, little to no oil available. Electricity is something fairly new and so far doesn't go further than being used for creating light aboard airships and being used for telegraphing (little to no electric infrastructure). While oil is found and burned in other parts of the world, in this part of it the most commonly used fuels burned for heat are coal and peat. Additionally natural-gas, coal-water-slurry and fishoil are most commonly burned in lanterns to provide light. Speeds: These airships, most of the time, consist of two or more lifting bodies arranged in parallel, with most of the mass of the ship hung between them, the center of mass being below the lifting bodies. This arrangement is intended to give them a sort of stability in the air, similar to a catamaran.
Under normal weather conditions military airships rate average speeds between 60-90kph. Military ships feature multiple, often pivotable, engine-pods along their hulls, allowing the application of thrust in almost any direction; the bigger the ship, the more pods naturally. In general, speeds etc. are based on WW1 & Interwar-Period craft, such as the Schütte-Lanz Airships. Tactics: There are different classes of ships that refer to different sizes, armaments and tasks of the ship, the largest of these reaching widths of up to 160m and lengths of up to 400m. Aspect ratios vary from smaller ships @7:2 to larger ships @5:3. Armament and engagement-tactics are loosely based on WW1/WW2 naval combat. That is, cruisers/battleships would primarily fire heavy shells over vast distances (mostly for land bombardment), while smaller/medium ships would feature lighter weaponry to engage in ship-to-ship fighting (e.g. destroyers). In addition almost any ship down to corvette-size carries a complement of spotter/fighter/bomber biplanes. While a corvette would carry at most 1-2 planes, a destroyer would already carry a wing of 3-6 planes, a battleship half a squadron (~a dozen planes), up to carriers hauling multiple scores of planes (being basically huge flightdecks with underslung multi-story hangars). This was a major research area in both World Wars. The main issue was that battleships from WW1 onwards, because of the large ranges involved, had enormous difficulty judging distance. It was generally regarded that the British had the best rangefinding techniques, but even so only 3% of fired shells actually hit their intended target. As targets are far away from each other, there is no advantage to 'smoker' shells. Keep in mind that at a distance of 30km or more, it takes a long time for a shell to ballistically reach its target, even several minutes. In this time the ship could move, wind factors could alter, and the tactical situation may change. Your 'marker' shells would be more of a way to detect whether there was a hit, although close observation from spotter towers may give you this data anyway. In early (prior to WW1) battles Russian battleships were not successful against newer Japanese battleships. This was in part due to the technique of the Japanese firing a large salvo with each shell set to a different range. This 'shotgun' technique allowed them to find the right range quicker, whereas the Russian ships were still firing one at a time. Other nations quickly adopted this, but with the advent of rangefinding computers, devices mounted on spotting towers (hence the many platforms in a typical battleship), accuracy became greater from the first salvo anyway. Later, radar was the preferred method of rangefinding, combined with computers to increase accuracy. Artillery nowadays can be very accurate on the first firing. Spotter towers on your airship, with platforms extended as far 'wide' as possible, to try to increase the accuracy of your rangefinding system. Rangefinding devices mounted to the platforms, and data fed to a central command deck with tables/calculators to determine range. Shells to be fired more often and in multiple salvos. Unlike battleships though, your target could be at a different altitude in addition to direction, so your shotgun approach would be to fire multiple shells in a grid / hexagonal fashion to maximise chances of a hit. All shells should be effective to give you the most advantage in the shortest time frame. There is a great advantage to height.
The higher you are, the greater your effective range of shells. Therefore I would expect that your airships, as soon as they are detected, would both try to rise as high as possible. Unlike battleships, as an airship captain, lifting your airship is as good as a higher-range gun, so airships over the top of you have a distinct advantage. I would imagine 'emergency blow orders' would be the first command that would be given. However, at the end of the day, it may well be that your battleship airships give way to being carriers. This is because instead of range being measured in the tens of kilometres, with smaller, lighter, faster craft (such as mini-airships, gliders or planes) force can be projected over hundreds of kilometres - each with its own armament. This means a smaller, faster craft can come in to close range, or at 'high altitude', and be equally if not more effective than a large heavy vessel. This happened in real life, leading to the demise of the grand battleship, which simply became a bombing target for more agile, lighter craft. Tracers are nice for a visual image of your projectile but they rely on light, and are obviously more visible at night; fighting above the clouds in bright sunlight might also hinder this. In order to keep effective artillery shells, I would recommend using smoke trails as a form of tracer. Similar to the colored smoke you see stunt planes release at air shows. I believe this would solve the visual element of what a tracer would provide; and as an added bonus the lingering smoke would provide an idea about air currents relative to the target as well. Which you use depends on what you're firing.
And similar things let you multiply, divide, differentiate, calculate trig functions, and more! Which is where pigeons come in. Because, sure, you've fired the thing in basically the right direction, but if your shell has vanes on the tail, it can direct its flight to actually hit the thing you fired it at. But fitting even a mechanical computer into a shell, particularly one which could recognize a ship from any possible angle and target towards it... not possible. But you know what can do all that, and is light and portable? Pigeons. One to three pigeons, trained by operant conditioning to recognize the target, were stationed in front of the screen; when they saw the target, they would peck at the screen with their beaks. As long as the target remained in the center of the screen, the screen would not move, but if the bomb began to go off track, the image would move towards the edge of the screen. The pigeons would follow the image, pecking at it, which would move the screen on its pivots. The pigeons learned to rap out as many as four pecks a second for more than two minutes without a break, and would work feverishly to prevent the target image from moving off dead center. [...] Target pistols were fired only a few inches from a bird's head. The pigeons didn't miss a peck, didn't even look up. Other extremely loud noises were introduced. Again, the pigeons stayed at their task. Skinner also put the pigeons in a pressure chamber, setting the altitude at 10,000 feet. They were also whirled around in a centrifuge, put on pure oxygen and exposed to bright flashes, simulating shell bursts. High vibrations were also introduced, and the birds were subjected to massive G forces without harmful effects. [While] pigeons were never used to bomb enemy targets, the test results show that they could have. If electronics had not come along, organic control would likely have become the main way of making guided missiles. TL;DR: As well as tracers (for guns) and fuzed detonation (for cannon), you may also want to look into cams, gears, and pigeons. Enter the elevation and azimuth of both eyepieces into a table, look up the distance, and read off the required azimuth/elevation for your guns. The table would be valid for a fixed point in time from the readings, so communicate these to your crew and fire your guns. Building such a device would not be hard, so long as trigonometry and parabolas are known mathematics (as they have been for hundreds of years now). One cool thing is that it uses the relative velocity between the craft - so it doesn't matter if both vehicles are moving, rotating or whatever, it will all get factored in with the same basic math. Similarly, so long as the relative timings between aim1-aim2 and aim2-shoot are consistent, the exact timing wouldn't be too important (though faster would be better). Tables may not be efficient with six entry points, so perhaps you could wire the eyepieces into a mechanical calculator of sorts. I can foresee some fun-shaped cams containing the ballistics information. I can imagine you could build such a system directly into a gun-turret. You have one dial for the distance estimate, and then you line up the one marker, wait five seconds, line up the next marker, and some mechanism places a third marker where to shoot. If you have two-man gunner teams, and some way to get a round into a moving breech, one man could reload while the other aims. Aim, aim, shoot. Aim, aim, shoot.
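A toy version of that two-sighting "aim, aim, shoot" procedure, written out just to make the arithmetic concrete (the numbers, function names and the flat-trajectory lead rule are my assumptions, not the answer's):

```python
# Sketch: two bearing/range fixes a known interval apart give the target's
# relative velocity, which is extrapolated by the shell's flight time to place
# the third ("shoot") marker. Illustrative only; a cam-and-gear computer would
# bake this geometry into its shape.
import math

def sight_to_xy(distance, azimuth_deg):
    """Convert one range + bearing sighting into flat Cartesian coordinates."""
    a = math.radians(azimuth_deg)
    return distance * math.cos(a), distance * math.sin(a)

def aim_point(sight1, sight2, dt_sightings, shell_speed):
    """sight1/sight2 are (distance, azimuth); returns the (distance, azimuth) to aim at."""
    x1, y1 = sight_to_xy(*sight1)
    x2, y2 = sight_to_xy(*sight2)
    vx, vy = (x2 - x1) / dt_sightings, (y2 - y1) / dt_sightings   # relative velocity
    t_flight = math.hypot(x2, y2) / shell_speed                   # crude flight-time estimate
    xa, ya = x2 + vx * t_flight, y2 + vy * t_flight               # lead the target
    return math.hypot(xa, ya), math.degrees(math.atan2(ya, xa))

# Example (assumed values): fixes five seconds apart, 600 m/s shell.
print(aim_point((3000.0, 40.0), (3050.0, 41.0), 5.0, 600.0))
```

Iterating the flight-time estimate once or twice would converge on a proper intercept point; the sketch keeps the single-pass version for clarity.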
Depending on the size of your vessels and the distances you are working at, and the accuracy of your measuring devices (stereo offset from the bow/stern of your craft) or your intel about the size of the enemy's craft (measuring by relative angle from front to rear of their craft via lenses), you could potentially have a person whose duty it is to measure the distance to the enemy and inform all the crew members. Whether the information could reach the gunners in time for it to be useful is another question. The [Kerrison] Predictor solved the problem by doing all of the calculations mechanically through a complex system of gears. Inputs to its calculations included wind speed, gravity, ballistics of the gun and the rounds it fired, angle to the target in azimuth and altitude, and a user-input estimated target speed. Some of these inputs were fed in by dials, which turned gearing inside the Predictor to calculate the range (from the change in angle and estimated speed) and direction of motion. Here's a fun idea: lighter-than-air torpedoes. I suppose they're actually called rockets, but if you can make them neutrally buoyant due to your handwavium, you could fit them with gyroscopic stabilisers [complete service manual], and point them on their way. In order to find range, an airship opens fire with tracer shells until a suitable range is detected. Tracer shells will burn brightly when fired. Each gunnery team has a crewmember whose job is the spotter. Their important job is to track the time it takes for the shell to cross the plane between the firing and target airship. Whether the shell passes in front of, or behind the target, that time is all that is needed. A tracer round must be used; smoke or detonation is not an acceptable alternative. In WWI-style naval gun battles, all ships in an inferior firing position will be actively emitting as much smoke as possible to obscure their own positions. I assume your airship battles will be the same. The only reason not to deploy smoke is if you are entering battle with overwhelming force. Smoke will make targeting very hard in the absence of any radar. For this reason, very bright tracers are crucial to being able to tell where your shots are landing. The time that the shell spends in the air, along with the angle of fire of the gun, along with the known muzzle velocity of the gun (assuming the cartridges are produced with such quality that the muzzle velocity is well known), are all you need to know to calculate the range of the tracer. The angle of fire of the gun must be known with respect to an absolute level. The airship will presumably have some roll (and possibly pitch and yaw, too) that will affect the firing of the guns. The absolute angle of the gun relative to the ground is what you need to know. If the tracer passed in front of the target, then this distance is short; if it passed behind the target, then this distance is long. Adjust accordingly. What did all this math get you? You can now create a lookup table with the solution to these equations. Given $t$ and $\beta$ you can find out how far your shell traveled, and thus, how far away your target is. Each table of lookups will be for a certain cartridge or powder size (however your guns work), such that $v$ is constant for one table. Equation (1) gives you a simple estimate of the distance at which your shot passed either in front of or behind your target. Equation (2) gives you a further way to estimate what angle you should make your first few shots before you range in.
This second equation also lets you cover expected changes in relative elevation, such as if you are climbing or descending relative to your target. You can measure $\alpha$ with a sextant or a similar piece of equipment. The lookup table will be used the same way, except it will be a three-way lookup table with $\alpha$ included. How do you range in on the target? Alright, let us imagine that the airship has its guns mounted in a limited set of turrets, the way that an Iowa-class battleship has its nine guns arranged in three turrets. Each turret is going to fire tracer rounds at first, to get a range in on the target. As it is firing, the gunnery officer for that turret will note the times to target, elevation of the gun that fired the shot, type of cartridge involved, and approximate angle to the target. Using all this information and the lookup tables, for each shot you can develop two separate estimates of distance to target. You write these distances on a board in front of you in chalk or grease pencil or something appropriate, in two colors, noting near and far. With the use of the targeting board, you will quickly be able to use the multiple guns of the turret to range in on the target. Once you are close, you switch to firing real, explosive rounds, with perhaps every fifth shot being a tracer to ensure you are on target. With all the smoke and explosions, and other ships firing on your target, this won't be easy. But with gun crew training, a competent crew should be able to keep on target during a pitched battle in the skies.
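The equations (1) and (2) referred to above did not survive in this copy of the answer, so the following sketch is only a plausible reconstruction under a no-drag, flat-Earth assumption, with $t$ the observed flight time, $\beta$ the gun elevation, $v$ the muzzle velocity and $\alpha$ the sighted elevation of the target (the symbol roles follow the text; the formulas themselves are my assumption):

```python
# Hedged stand-ins for the lookup-table equations described above.
import math

G = 9.81  # m/s^2

def slant_range(t, beta_deg, v):
    """Eq. (1) stand-in: straight-line distance to where the shell was at time t."""
    b = math.radians(beta_deg)
    x = v * math.cos(b) * t                    # horizontal distance travelled
    y = v * math.sin(b) * t - 0.5 * G * t * t  # height relative to the gun
    return math.hypot(x, y)

def elevation_for(target_range, alpha_deg, v):
    """Eq. (2) stand-in: gun elevation needed to land a shell on a target sighted
    at the given range and elevation angle alpha (flatter of the two solutions)."""
    a = math.radians(alpha_deg)
    x, y = target_range * math.cos(a), target_range * math.sin(a)
    disc = v**4 - G * (G * x * x + 2.0 * y * v * v)
    if disc < 0:
        return None  # target out of range for this muzzle velocity
    return math.degrees(math.atan2(v * v - math.sqrt(disc), G * x))

# Example lookup-table entries for an assumed 600 m/s cartridge:
print(slant_range(t=4.0, beta_deg=10.0, v=600.0))                 # ranging a tracer, metres
print(elevation_for(target_range=3000.0, alpha_deg=5.0, v=600.0))  # required elevation, degrees
```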
It also keeps battles more compact, which is better if you're doing them visually. In WWII, for large ships, two miles was ridiculously close range. I know of one photograph from WWII with ships from both sides on it, and you can hardly see the Japanese. I agree almost entirely with the answer by flox. For that reason I won't repeat the arguments. I'll offer a different solution and that is human attack missions. I'm not talking about the Japanese Kamikaze pilots. That is a possibility but it's expensive in destroyed equipment and lost human life and also involves turning the large craft into an aircraft carrier. Instead I propose gaining height on your opponent and launching one of your crack team of wing-suit flyers. The flyer carries a magnetic mine and when within suitable proximity, pulls the pin and lobs it at the enemy. Then they peel off and head towards a land base where they can prepare for their next mission. The following video shows just how precisely these wingsuits can be piloted. I'll see what I can come up with for velocity and glide slope. I'll just say that they are such small, fast targets that they will be very hard to shoot down. Since you are already bending the rules a bit, perhaps use a smoke tracer round? A smoke trail naturally blows away in the wind, but that's not what you'll be using it for. Like fireworks, the smoke would have a color that changes over time. This allows the artillery battery to gauge the time it took before the shell flew by the enemy. An additional advantage is that the smoke left behind will blow past the enemy and give a reference point to judge how close it went; combined with the color of the smoke trail, you have a good idea of your shot - for a relative amount of "good" when talking about two fast moving ships in 3 dimensions in different light & weather conditions that judge distance based on something that literally moves with the wind. "Marker shells" only work for consistent fire from a stable platform, because only in a fixed position is your next shot sufficiently similar to make it worthwhile. So-called 'walking your shots' was firing from a comparatively more stable platform than airships, but even then they threw out a lot of ordnance for each actual hit. Artillery between airships renders the guess-and-test shooting functionally useless due to too much movement through the air, including the added difficulty of changing relative heights. The force of firing a shell (remember your Newtonian mechanics) is significant and will move the airship in addition to the continuing maneuvers of both combatants through 3 dimensions. You would probably not 'fire in sequence' like some 18th century ship of the line as the recoil from each shot will make the next one even more wildly inaccurate. A far more plausible approach would be to give each gun independent operation - the officer waits until their shot is lined up as the aircraft keep moving relative to each other, then fires. Some coordination should be made to ensure two crews do not fire at roughly the same time and waste the second shot in a wild miss, but otherwise each gun should be independent. It would take electronic controls to get your firing of different cannons across different parts of the ship to be so precisely coordinated that they all fire at exactly the same moment (the slightest delay would throw the shot way off).
Again, firing that shot will make the aircraft rotate around its center of mass in a somewhat unreliable way - air is simply not viscous enough to push against like that. This could be somewhat mitigated with a recoilless gun - that would fit within a First World War level of technology (an approach historically used to try to get heavier ordnance on early aircraft, both for the destructive effects of recoil on an airframe as well as avoiding the significant weight of a traditional artillery piece). While this does allow aircraft to use heavier guns, much of the power is spent out the back to balance the forces, which cuts the effective range. If engagements are less than a kilometer you could even go with a technical setup of a modified machine gun geared to a recoilless gun or two. If the ammunition for the machine gun is calibrated to have the same ballistic profile as the big gun, you just need to walk the bullets until a spotter sees them hit, then fire the big gun. Even here you should be careful of using tracer rounds - they are difficult for the eye to perceive properly and even led the US to stop using tracers on their bombers during WW2, as they apparently found it tricked gunners into consistently missing targets due to how the eye incorrectly perceived the shots (trust the mechanical sights, not intuitive eyes on distances and speeds your eyes did not evolve to judge well). Flak is most useful for aircraft. These basically have timed fuses - the artillery crew sets a time on them, based on a best guess, so the shells explode in rough proximity to the aircraft. Actually expecting to score a direct hit is far too difficult. In surface ships, a near miss can still be somewhat useful as the shell explodes when it hits the water, but you don't get that in aircraft - missing by just a hair is still a complete miss with no effect whatsoever unless you are firing explosive shells on timed fuses. A miss isn't even any good at telling you how close you got to hitting - it is too difficult to perceive how close a tiny fast moving object got near the target from so far away. With heavy ordnance every shot needs to count - if nothing else, the sheer weight of shells you must carry is far more of a concern for an airship than it would be for a surface vessel. Visual observation should be able to use parallax to get an effective range, levels should get them an inclination, and a little calculus (easily and quickly done on a mechanical calculator) should give them a good firing solution (what angle up/down and how long before the shot should reach the target). This would still only work with very slow airships at close range - at long distances and 3 degrees of freedom, it may be practically impossible to hit. It would probably be more efficient to ditch the heavy cannon and carry that weight in bombs to drop on ground targets, while relying on small arms to defend against any light attack aircraft getting close. The problems and explanations have been thoroughly explored as I type this. I won't repeat them. I'll offer a different solution. A direct hit is not necessary. Aim high and the missile will fly over the enemy craft. Course adjustment happens in flight. It doesn't have to be electrical. The missile then falls and can be reeled in until a magnetic switch detects the enemy's hull, at which time it explodes on the far side of the craft. As above but doesn't allow for course correction.
CommonCrawl
We deal with a Hamiltonian system with two degrees of freedom, whose Hamiltonian is a $2\pi$-periodic function of time and analytic in a neighborhood of an equilibrium point. It is assumed that the characteristic equation of the system linearized in a neighborhood of the equilibrium point has two different double roots such that their absolute values are equal to unity, i.e., a combinational resonance takes place in this system. We consider the case of general position when the monodromy matrix of the linearized system is not diagonalizable. In this case the equilibrium point is linearly unstable. However, this does not imply its instability in the original nonlinear system. Rigorous conclusions on the stability can be formulated in terms of coefficients of the Hamiltonian normal form. We describe a constructive algorithm for constructing and normalizing the symplectic map generated by the phase flow of the Hamiltonian system considered. We obtain explicit relations between the coefficients of the generating function of the symplectic map and the coefficients of the Hamiltonian normal form. It allows us to formulate conditions of stability and instability in terms of coefficients of the above generating function. The developed algorithm is applied to solve the stability problem for oscillations of a satellite with plate mass geometry, that is, $J_z = J_x + J_y$, where $J_x$, $J_y$, $J_z$ are the principal moments of inertia of the satellite, when the parameter values belong to a boundary of linear stability.

Putkaradze V., Rogers S. M.

We discuss a non-Hamiltonian vector field appearing in considering the partial motion of a Chaplygin ball rolling on a horizontal plane which rotates with constant angular velocity. In two partial cases this vector field is expressed via Hamiltonian vector fields using a nonalgebraic deformation of the canonical Poisson bivector on $e^*(3)$. For the symmetric ball we also calculate variables of separation, compatible Poisson brackets, the algebra of Haantjes operators and $2\times2$ Lax matrices.

a hypersurface where they fulfill a transversality assumption ($b$-Poisson structures). We do this using the desingularization technique introduced in and extend it to $b^m$-Nambu structures.

I describe a finite-dimensional manifold which contains all meromorphic solutions to the many-particle elliptic Calogero–Moser problem at some fixed values of the coupling constant. These solutions can be selected by purely algebraic calculations as it was shown in the simplest case of three interacting particles.
CommonCrawl
Find $p_i : V \to V$ with $p_i^2 = p_i$ and $p_i p_j = 0$ for $i \neq j$, such that $V_i = \operatorname{im}(p_i)$, where each $p_i$ is a projection and $\operatorname{im}$ denotes the image. What do you mean find the projections? Do you mean you want a formula? If so this is just the Euclidean inner product. If not, explain what you mean. If I'm understanding you correctly this is very trivial. It's just the standard projections embedded back into the product. For example take $V_i = \mathbb Z$, and $V = \mathbb Z^3$ say. So $V$ is the set of all $3$-tuples of integers $(n, m, p)$. The standard projections $\pi_i : V \to \mathbb Z$ select the first, second, and third coordinates respectively. So for example $\pi_2(4, 9, 47) = 9$. Now we can inject $\mathbb Z$ back into $\mathbb Z^3$ by padding with $0$ coordinates. So we have injection maps $\iota_i : \mathbb Z \to V$ where for example $\iota_2(9) = (0, 9, 0)$. Then your $p_i = \iota_i \circ \pi_i$. In other words (composing right to left) given an n-tuple $v$, first we project onto the $i$-th coordinate, then we pad with $0$'s to get back another element of $V$ that has the $i$-th coordinate of $v$ in the $i$-th place and $0$'s elsewhere. It's clear that $p_i \circ p_i = p_i$ and that if $*$ is multiplication, $p_i * p_j = 0$ for $i \neq j$. However $p_i * p_i = p_i^2$. However your notation is a little off since it's im($\pi_i$) that's equal to $V_i$. Since you say that $p_i : V \to V$, it must be the case that im($p_i$) $\subset V$, not $V_i$. Another little problem is that your $p_i$'s are not projections, since they go from $V$ to $V$ rather than from $V$ to $V_i$. They're like "padded projections" to coin a phrase. In one place you called the $p_i$'s projections and notated them correctly, and in another place they are padded projections but not technically projections. If I'm misunderstanding your question please clarify.
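A quick numerical illustration of those "padded projections" (my own example over $V = \mathbb{R}^3$ rather than $\mathbb{Z}^3$, since numpy works with floats):

```python
# Padded coordinate projections p_i = iota_i ∘ pi_i on V = R^3, showing
# p_i^2 = p_i and p_i p_j = 0 for i != j.
import numpy as np

def padded_projection(i, n=3):
    """Matrix of p_i: keep the i-th coordinate, zero out all the others."""
    p = np.zeros((n, n))
    p[i, i] = 1.0
    return p

p0, p1 = padded_projection(0), padded_projection(1)
v = np.array([4.0, 9.0, 47.0])

print(p1 @ v)                    # [0. 9. 0.]  -- V_2 padded back into V
print(np.allclose(p0 @ p0, p0))  # True: idempotent
print(np.allclose(p0 @ p1, 0))   # True: distinct projections annihilate each other
```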
CommonCrawl
Y.I. Manin mentions in a recent interview the need for a "codification of efficient new intuitive tools, such as … the "brave new algebra" of homotopy theorists". This makes me puzzle, because I thought that is codified in e.g. Lurie's articles. But I read only his survey on elliptic cohomology and some standard articles on symmetric spectra. Taking the quoted remark as indicator for me having missed to notice something, I'd like to read what others think about that, esp. what the intuition on "brave new algebra" is. Edit: In view of Rognes' transfer of Galois theory into the context of "brave new rings" and his conference last year, I wonder if themes discussed in Kato's article (e.g. reciprocity laws) have "brave new variants". Edit: I found Greenlees' introductions (1, 2) and Vogt's "Introduction to Algebra over Brave New Rings" for getting an idea of the topological background very helpfull. * Adding homotopy to algebra allows for generalizations of familiar algebraic notions. For instance, a topological commutative ring is a commutative ring object in the category of spaces; it has addition and multiplication maps which satisfy the usual axioms such as associativity and commutativity. But instead, one might instead merely require that associativity and commutativity hold "up to all possible homotopies" (and we'll think of the homotopies as part of the structure). (It is hard to give the flavor of this if you haven't seen a definition of this sort.) This gives one possible definition of a "brave new commutative ring". * What is really being generalized is not algebraic objects, but derived categories of algebraic objects. So if you have a brave new ring R, you don't really want to study the category of R-modules; rather, the proper object of study is the derived category of R-modules. If your ring R is an ordinary (cowardly old?) ring, then the derived category of R-modules is equivalent to the classical derived category of R. * As for Manin's remarks: the codification of these things has being an ongoing process for at least 40 years. It seems we've only now reached the point where these ideas are escaping homotopy theory and into the broad stream of mathematics. It will probably take a little while longer before things are so well codified that brave new rings get introduced in the grade school algebra curriculum, so the process certainly isn't over yet! The idea is that we're again changing the fundamental nature of space — first it was something easily drawn, then topology, then schemes, then stacks. Now we're doing some infinity versions of spaces, e.g. space --> $\infty$-category, ring --> $E_\infty$ category and that's brave new (the person who wrote this was quoting somebody from the 80s — below I explain that this person may very well be not Manin). In one sentence, we're not just taking functions now, but also forms etc. Later he actually explains that "the homotopy picture becomes more important, and if you want discrete, you need to factorize". И поэтому я не предвижу ничего такого экстраординарного в ближайшие двадцать лет. Происходит перестройка того, что я называю основаниями математики, не в нормативном смысле слова, а как свод подчас даже не эксплицитных правил, критериев ценности, способов представления результатов, который присутствует в мозгу у работающего математика здесь и сейчас, в каждое конкретное время. Вот это я называю основаниями математики. 
Их можно делать эксплицитными, при этом в нескольких вариантах, и представители разных вариантов могут начать спорить, но, поскольку это существует в мозгах работающего поколения математиков, там всегда есть нечто общее. Так вот, после Кантора и Бурбаков в мозгах, что бы там ни говорили, сидит теоретико-множественная математика. And so I don't foresee anything extraordinary in the next twenty years. Probably, a rebuilding of what I call the "pragmatic foundations of mathematics" will continue. By this I mean simply a codification of efficient new intuitive tools, such as Feynman path integrals, higher categories, the "brave new algebra" of homotopy theorists, as well as emerging new value systems and accepted forms of presenting results that exist in the minds and research papers of working mathematicians here and now, at each particular time. When "pragmatic foundations" of mathematics are made explicit, usually in several variants, the advocates of different versions may start quarreling, but to the extent that it all exists in the brains of the working generation of mathematicians, there is always something they have in common. So, after Cantor and Bourbaki, no matter what we say, set theoretic mathematics resides in our brains. The things that I call the foundation of math are being rebuilt; not in the normative meaning of that word, but rather as the codex of — not even explicit rules, but rather values, ways to represent the results that exist in the brain of a working mathematician, here and now, at every given moment of time. Interesting, but as you see this is referring to the general idea of change in the "homotopy" direction rather than to the specific papers. In particular, "codification" should refer to the process when this "homotopy-think" becomes firmly established in the textbooks, rather than in the recent research articles. It's a mystery to me as to why highly intelligent people didn't notice the discrepancy when preparing the interview for publication. In some other places the words are changed, e.g. "then you factorize..." --> "then you pass to the set of connected components of a space defined only up to homotopy", and it appears this was made to make the interview more readable and unambiguous in English — it's very informal, though understandable, in the source. A possibility, of course, would be that Manin himself edited the English version after it was translated. re: Manin's comments, the article says that "Manin edited this translation for publication in the Notices", so it is not surprising the English and Russian versions are different.
CommonCrawl
Every category – indeed, every simplicial set – admits a homotopy final functor into it out of a Reedy category, namely its category of simplices (HTT 4.2.3.14). This makes me wonder: can every $(\infty,1)$-topos be presented as a localization of an $(\infty,1)$-topos of presheaves on a Reedy category?

And I guess with that point made, it makes sense to ask the question more generally about locally presentable $(\infty,1)$-categories. I'm thinking of something like this: suppose $C$ is a small $(\infty,1)$-category and $(\Delta\downarrow C)$ its category of simplices; then we have a functor $t\colon (\Delta\downarrow C) \to C$ sending each simplex to the last object occurring in it. This induces a functor $t^* \colon sPre(C) \to sPre(\Delta\downarrow C)$, and every object in the image of this functor has the property that it sees as isomorphisms all the maps in $(\Delta\downarrow C)$ which fix the last object. Consider the localization of $sPre(\Delta\downarrow C)$ which forces all these maps to be invertible; it seems as though that has a decent chance to be equivalent to $sPre(C)$?

At least in the 1-categorical case, this is true. The functor $t^*$ has a left adjoint (left Kan extension), and by C3.3.8(i) in the Elephant, it is fully faithful; thus it exhibits $Pre(C)$ as a reflective subcategory of $Pre(\Delta\downarrow C)$. Does C3.3.8(i) have an $(\infty,1)$-categorical analogue?
CommonCrawl
There are three types of queries:

1. If vertex $v$ is white, mark it as black; otherwise, perform this operation on all direct sons of $v$ instead.
2. Mark all vertices in the subtree of $v$ (including $v$) as white.
3. Find the color of the $i$-th vertex.

An example of operation "1 1" (corresponding to the first example test): the vertices $1$ and $2$ are already black, so the operation goes to their sons instead.

Can you help Juliana to process all these queries?

The first line contains two integers $n$ and $q$ ($2\leq n\leq 10^5$, $1\leq q\leq 10^5$) — the number of vertices and the number of queries. The second line contains $n-1$ integers $p_2, p_3, \ldots, p_n$ ($1\leq p_i<i$), where $p_i$ means that there is an edge between vertices $i$ and $p_i$. Each of the next $q$ lines contains two integers $t_i$ and $v_i$ ($1\leq t_i\leq 3$, $1\leq v_i\leq n$) — the type of the $i$-th query and the vertex of the $i$-th query. It is guaranteed that the given graph is a tree.

For each query of type $3$, print "black" if the vertex is black; otherwise, print "white".
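To pin down the semantics of the three query types, here is a direct, deliberately unoptimized simulation in Python (my own sketch, not an intended solution — with $n, q \le 10^5$ a real solution needs something much faster):

```python
import sys

def solve():
    data = sys.stdin.read().split()
    pos = 0
    n, q = int(data[pos]), int(data[pos + 1]); pos += 2
    children = [[] for _ in range(n + 1)]
    for i in range(2, n + 1):
        children[int(data[pos])].append(i); pos += 1

    black = [False] * (n + 1)           # colour of every vertex, initially white

    def op1(v):                          # type 1: paint v, or recurse into sons
        stack = [v]
        while stack:
            u = stack.pop()
            if not black[u]:
                black[u] = True
            else:
                stack.extend(children[u])

    def op2(v):                          # type 2: whiten the whole subtree of v
        stack = [v]
        while stack:
            u = stack.pop()
            black[u] = False
            stack.extend(children[u])

    out = []
    for _ in range(q):
        t, v = int(data[pos]), int(data[pos + 1]); pos += 2
        if t == 1:
            op1(v)
        elif t == 2:
            op2(v)
        else:                            # type 3: report the colour
            out.append("black" if black[v] else "white")
    print("\n".join(out))

solve()
```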
CommonCrawl
Differential Integral Equations, Volume 9, Number 2 (1996), 335-342. If $a(t)$ and $b(t)$ are positive almost periodic functions, and $K(t)$ is nonnegative and piecewise continuous on $[0,\infty)$, conditions are given under which the equation $$ N'(t) = N(t)\Big( a(t) - b(t) \int_0^\infty K(s) N(t-s)\,ds \Big) $$ has a positive almost periodic solution $N^*(t)$ on $(-\infty,\infty)$ which attracts all other positive solutions as $t\to\infty$. These conditions are quite explicit and apparently new.
CommonCrawl
Abstract: A multiple power series $B(x)$ with non-negative coefficients converging in $x\in(0,1)^n$ and diverging at the point $\mathbf1=(1,…,1)$ is considered. A random variable (r.v.) $\xi_x$ having power series distribution $B(x)$ is studied. The integral limit theorem for the r.v. $\xi_x$ as $x\uparrow\mathbf1$ is proved under the assumption that $B(x)$ regularly varies at this point. A local version of this theorem is also obtained in the situation where the coefficients of the series $B(x)$ are one-sided weakly oscillatory at infinity. Keywords: Multiple power series distribution, weak convergence of $\sigma$-finite measures and random vectors, gamma-distribution with parameter $\lambda\geq0$, regularly varying and one-sided weakly oscillatory functions in a positive hyper-octant.
CommonCrawl
Written by Dominik Joe Pantůček on December 6, 2018. Basic computer vision algorithms may aid you in performing visual control of whatever you may be manufacturing. But if you want something more, you need to turn to the magic world of … linear algebra. Read on to see how all those little vectors and matrices help us with QA.

Even though our testing devices are equipped with a camera of extraordinary resolution, it is not wise to rely on extremely precise positioning of the PCB holder and the camera arm. Therefore we have special aiming markers – consisting of concentric circles – on our PCB, and using computer vision algorithms we find their centers. Of course, the circle-finding algorithm – as mentioned last week – may find more circles depending on the lighting conditions. Yet as the algorithm knows roughly where to find the markers, it can easily discard false positives and we have their positions in camera coordinates. We know that we need to rotate the coordinate system so that $\Delta'$ makes the given angle $\beta$ with the X-axis. We actually need to align the directions of the vectors $\Delta$ and $\Delta'$, and these angles just help us think about that. So $\alpha$ is the angle between $\Delta'$ and the X-axis and $\beta$ is the angle between $\Delta$ and – again – the X-axis. If we rotate everything by $-\beta$ and then by $\alpha$, we get the required alignment. Yes, there are a few tricks involved that I did not show you – but you can easily derive them yourself. For example, there is no need to calculate the lengths of $\Delta$ and $\Delta'$ – therefore you can save computing the square roots there. The resulting transformation can be pretty fast and can be performed in real time even on a slower computer. You can see the accuracy in Picture 1 below.

The program is – of course – not just a simple coordinate transformation. There is also a color-space transformation in place which helps us check the intensity and correct color of every LED soldered. Thank you for staying with us and I hope we have proven we take the quality control seriously. Come back next week for more!
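A minimal sketch of the kind of alignment described above (my own illustration, not the authors' code; names and numbers are made up). Treating 2-D points as complex numbers gives the rotation in a single division — no square roots or trigonometric calls — and this variant also absorbs any scale difference between camera pixels and board units, which the article's rotation-only description keeps separate:

```python
# Align camera coordinates with board coordinates using the two marker centres.
def similarity_from_markers(cam1, cam2, ref1, ref2):
    """Return a function mapping camera (x, y) points onto board coordinates.

    cam1, cam2 -- marker centres found by the circle detector (pixels)
    ref1, ref2 -- the known marker positions on the PCB (e.g. millimetres)
    """
    c1, c2 = complex(*cam1), complex(*cam2)
    r1, r2 = complex(*ref1), complex(*ref2)
    a = (r2 - r1) / (c2 - c1)      # rotation (and scale) as one complex factor
    b = r1 - a * c1                # translation
    def to_board(p):
        w = a * complex(*p) + b
        return (w.real, w.imag)
    return to_board

# Hypothetical numbers, just to show the call:
to_board = similarity_from_markers(cam1=(812.4, 233.1), cam2=(240.7, 1051.9),
                                   ref1=(90.0, 10.0),   ref2=(10.0, 90.0))
print(to_board((526.5, 642.5)))    # roughly the board centre (50, 50)
```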
CommonCrawl
Firstly, I'd like to specify my goal: to know why the QFT runs exponentially faster than the classical FFT. I have read many tutorials about the QFT (Quantum Fourier Transform), and this tutorial somehow explains something important about the relation between the classical and quantum Fourier transforms. However, I couldn't follow the statement that "x and y are state vectors"; I get stuck at formulas 5.2 and 5.3. I want to know how to convert the classical input vector x to the right-hand side of formula 5.2. If this confusing point is cleared up, it will be easier for me to understand the time-complexity issue of the QFT.

Formula 5.2 refers to an encoding we call amplitude encoding. Imagine you have a vector $x$ with components $x_i$; the components are then encoded as the amplitudes of a quantum state. This encoding is very important, as a vector of dimension $N$ will be encoded in quantum form using about $\log N$ qubits. This is the main reason why many quantum algorithms using this encoding can achieve an exponential speedup in the size of the problem. However, generally in quantum computing, you have to assume that this encoding is done using a device called a quantum random access memory for loading a vector in this form. Or you are given a circuit that does the job for you.

You don't convert a classical input to the r.h.s. of Eq. (5.2). The r.h.s. of Eq. (5.2) is something you get as the output of a preceding quantum computation as a quantum state, such as in Shor's algorithm. This is the only way to get an exponential speedup -- if you had to start from an exponentially big classical vector, there would be no way to solve this in polynomial time.
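To make the notion of amplitude encoding concrete, here is a small sketch (my own illustration, independent of the linked tutorial's notation): a classical vector of length $N$ is rescaled to unit norm and its entries become the amplitudes of a state on $\lceil\log_2 N\rceil$ qubits.

```python
import numpy as np

def amplitude_encode(x):
    """Return (amplitudes, n_qubits) for the state sum_i (x_i/||x||) |i>."""
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(x)))) or 1
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    amplitudes = padded / np.linalg.norm(padded)   # squared amplitudes sum to 1
    return amplitudes, n_qubits

# An 8-dimensional classical vector needs only 3 qubits:
amps, n = amplitude_encode([1, 2, 3, 4, 5, 6, 7, 8])
print(n, amps, np.sum(amps ** 2))
```

The catch discussed in both answers is hidden inside this function: preparing such a state from classical data in general costs about as much as reading the data itself, which is why exponential-speedup claims assume the state is already given (by a QRAM or by a preceding quantum computation).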
CommonCrawl
The motion of a nonautonomous time-periodic two-degree-of-freedom Hamiltonian system in a neighborhood of an equilibrium point is considered. The Hamiltonian function of the system is supposed to depend on two parameters $\varepsilon$ and $\alpha$, with $\varepsilon$ being small and the system being autonomous at $\varepsilon=0$. It is also supposed that for $\varepsilon=0$ and some values of $\alpha$ one of the frequencies of small linear oscillations of the system in the neighborhood of the equilibrium point is an integer or half-integer and the other is equal to zero, that is, the system exhibits a multiple parametric resonance. The case is considered where the rank of the matrix of the equations of perturbed motion that are linearized at $\varepsilon=0$ in the neighborhood of the equilibrium point is equal to three. For sufficiently small but nonzero values of $\varepsilon$ and for values of $\alpha$ close to the resonant ones, the question of existence, bifurcations, and stability (in the linear approximation) of the periodic motions of the system is solved. As an application, periodic motions of a symmetrical satellite in the neighborhood of its cylindrical precession in an orbit with small eccentricity are constructed for the cases of multiple resonance considered.

The motion of a time-periodic two-degree-of-freedom Hamiltonian system in the neighborhood of an equilibrium that is stable in the linear approximation is considered. The weak Raman third-order resonance and the strong fourth-order resonance are assumed to occur simultaneously in the system. The behavior of the approximate (model) system is studied in the stability domain of the fourth-order resonance. Regions of the parameters (coefficients of the normalized Hamiltonian) are found for which all motions of the system are bounded if they begin in a sufficiently small neighborhood of the equilibrium. An estimate of the boundedness domain is obtained. A disturbing effect of the double resonance on the motion of the system within the boundedness domain is described.

We consider the motion of a heavy rigid body with one point performing specified high-frequency harmonic oscillations along the vertical. In the framework of an approximate autonomous system of differential equations of motion, two new types of permanent rotations of the body about the vertical are found. These motions are affected by the presence of fast vibrations and do not exist in the case of a body with a fixed point. The problem of stability of these motions is investigated.

Motions of a time-periodic, two-degree-of-freedom Hamiltonian system in a neighborhood of a linearly stable equilibrium are considered. It is assumed that there are several resonant third-order relations between the frequencies of linear oscillations of the system. It is shown that in the presence of two third-order resonances the equilibrium is unstable for any ratio between the resonant coefficients. Approximate (model) Hamiltonians are obtained which are characteristic of the resonant cases under consideration. A detailed analysis is made of the nonlinear oscillations of the systems corresponding to them.

Stability of permanent rotations around the vertical of a heavy rigid body with a fixed point (Staude's rotations) is investigated under the assumption of a general mass distribution in the body and an arbitrary position of the point of support. In admissible domains of the five-dimensional space of parameters of the problem a detailed linear analysis of stability is carried out.
For each set of admissible values of the parameters the necessary conditions of stability are obtained. In a number of cases the sufficient conditions of stability are found.

Motions of a non-autonomous time-periodic Hamiltonian system with one degree of freedom are considered. The Hamiltonian of the system contains a small parameter. The origin of the phase space is a linearly stable equilibrium of the unperturbed or complete system. It is supposed that degeneracy takes place in the unperturbed system when terms of order less than five are taken into account (the frequency of small nonlinear oscillations does not depend on the amplitude), and that a resonance (up to the sixth order inclusive) occurs. For each resonance case a model Hamiltonian is constructed, and a qualitative investigation of the motion of the model system is carried out. Using Poincare's theory of periodic motions and KAM theory we rigorously solve the problem of existence, bifurcations and stability of periodic motions of the initial system. The motions we study are analytic with respect to fractional (for resonances up to the fourth order inclusive) or integer (resonances of the fifth and sixth orders) powers of the small parameter. As an illustration, we analyze resonant periodic motions of a spherical pendulum and a Lagrange top with a vibrating point of suspension in the presence of the degeneracy considered.
CommonCrawl
#### Question Description

We have $1,2,\ldots,n$ symbols. We want to find a shortest possible string on the symbols that contains every permutation of those symbols as a contiguous substring. We call a string that contains every permutation in this way a *superpermutation*, and one of minimal length is called a *minimal superpermutation*. For example, when $n=3$, all permutations are: $123$, $132$, $312$, $213$, $231$, $321$. In this case, the minimal superpermutation is $123121321$. The minimal length is 9. Suppose we already have a small superpermutation on $n$ symbols and we want to construct a small superpermutation on $n+1$ symbols. To do so, simply replace each permutation in the $n$-symbol superpermutation by (1) the permutation, (2) the symbol $n+1$, (3) that permutation again.
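A short sketch of this construction (my own implementation; the text above does not spell out how the replaced blocks are joined, so here consecutive blocks are merged on their maximal overlap, which is how the construction is usually realised and which reproduces $123121321$ for $n=3$):

```python
def merge(s, t):
    """Append t to s, overlapping the longest suffix of s that is a prefix of t."""
    for k in range(min(len(s), len(t)), -1, -1):
        if s.endswith(t[:k]):
            return s + t[k:]

def next_superperm(sp, n):
    """Superpermutation on n symbols from one on n-1 symbols (sp):
    each permutation becomes  perm + str(n) + perm,  blocks merged on overlaps."""
    k = n - 1
    out, seen = "", set()
    for i in range(len(sp) - k + 1):
        w = sp[i:i + k]
        if len(set(w)) == k and w not in seen:   # a permutation seen for the first time
            seen.add(w)
            out = merge(out, w + str(n) + w)
    return out

sp = "1"
for n in range(2, 5):
    sp = next_superperm(sp, n)
    print(n, len(sp), sp if n <= 3 else "...")
# n=3 gives 123121321 (length 9); n=4 gives a string of length 33.
```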
CommonCrawl
Is there a definition of an $\infty$-groupoid in HoTT whose terms are $n$-manifolds and whose higher morphisms are diffeomorphisms/isotopies/etc? Suppose you want to work with TQFTs in homotopy type theory (HoTT). Working with $(\infty,n)$-categories, or even $(\infty,1)$-categories, is something that I gather is too difficult for HoTT at the moment because of coherence issues. However, you could just forget all noninvertible cobordisms, and you should be able to obtain an $\infty$-groupoid from your $(\infty,1)$-category of cobordisms. Then you could try to define functions from this $\infty$-groupoid to other $\infty$-groupoids, ideally inspired by TQFT invariants. Is there an inductive definition in HoTT of such a "groupoid of manifolds"? I'd also be happy with something like an $\infty$-groupoid of knots in $S^3$.
CommonCrawl
You are given an array that contains $n$ integers. Your task is to determine the longest increasing subsequence in the array. An increasing subsequence is a set of values in the array in their original order such that each number is larger than the previous number. The first line contains an integer $n$: the size of the array. After this there are $n$ integers $x_1,x_2,\ldots,x_n$: the contents of the array. Print the length of the longest increasing subsequence.
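A sketch of one standard way to solve this task (my own, not part of the problem statement): keep, for each possible length, the smallest value that can end an increasing subsequence of that length; binary search then gives an $O(n \log n)$ algorithm.

```python
import sys
from bisect import bisect_left

def lis_length(xs):
    """Length of the longest strictly increasing subsequence."""
    tails = []                      # tails[k] = smallest tail of a subsequence of length k+1
    for x in xs:
        i = bisect_left(tails, x)   # first position with tails[i] >= x
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    xs = list(map(int, data[1:1 + n]))
    print(lis_length(xs))

if __name__ == "__main__":
    main()
```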
CommonCrawl
25 : 37 : : 49 : ? Priyanka started from her house towards west. After walking a distance of 25 m, she turned to the right and walked 10 m. She then again turned to the right and walked 15 m. After this she turned right at 135° and covered 30 m. In which direction is she going now? In a certain code AVANTIKA is written as VNIAAATK. How would MADURAI be written in that code? How many pairs of letters are there in the word SEQUENTIAL which have as many letters between them in the word as they have between them in the English alphabet? 7. In the following question, which one set of letters, when sequentially placed at the gaps in the given letter series, shall complete it? 8. In the following question, which one set of letters, when sequentially placed at the gaps in the given letter series, shall complete it? If '%' means '$\times$', '$\times$' means '$\div$', '$\div$' means '+' and '+' means '-', then 150 $\times$ 5 + 10 % 3 $\div$ 10 = ? Showing a picture, Krishna said, "That person is the sister of the grandson of the father of my maternal uncle." How is that person in the picture related to Krishna?
CommonCrawl
Assuming I have a data set with $d$ dimensions (e.g. $d=20$) such that each dimension is $X_i \sim U[0;1]$ (alternatively, $X_i \sim \mathcal N[0;1]$), independent of the others. For normally distributed data, the results appear to be very similar, at least when rescaling them to a total sum of $1$ (the $\mathcal N[0;1]^d$ distribution clearly has a higher variance in the first place). I wonder if there is any result that predicts this behavior? I'm looking for a test of whether the series of eigenvalues is somewhat regular, and how many of the eigenvalues are as expected and which ones significantly differ from the expected values. For a given (small) sample size $k$, is there a result on when a correlation coefficient for two variables is significant? Even i.i.d. variables will occasionally have a non-zero sample correlation for low $k$.

There is a large literature on the distribution of eigenvalues for random matrices (you can try googling random matrix theory). In particular, the Marcenko-Pastur distribution predicts the distribution of eigenvalues for the covariance matrix of i.i.d. data with mean of zero and equal variance as the number of variables and observations goes to infinity. Closely related is Wigner's semicircle distribution.
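A small simulation sketch of what the Marcenko-Pastur result predicts here (my own, not from the answer): for $n$ observations of $d$ i.i.d. $U[0,1]$ coordinates with variance $\sigma^2 = 1/12$, the sample covariance eigenvalues should fall inside $[\sigma^2(1-\sqrt{d/n})^2,\ \sigma^2(1+\sqrt{d/n})^2]$, so eigenvalues far outside that band are the ones that "significantly differ from the expected values".

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20                       # observations, dimensions
sigma2 = 1.0 / 12.0                   # variance of U[0, 1]

X = rng.uniform(size=(n, d))
X = X - X.mean(axis=0)                # centre the data
eigvals = np.linalg.eigvalsh(X.T @ X / n)

lam = d / n
lower = sigma2 * (1 - np.sqrt(lam)) ** 2
upper = sigma2 * (1 + np.sqrt(lam)) ** 2
print("eigenvalues:", np.round(eigvals, 4))
print("MP support : [%.4f, %.4f]" % (lower, upper))
# For i.i.d. data all eigenvalues should land (approximately) inside this band;
# eigenvalues escaping the band indicate structure/correlation in the data.
```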
CommonCrawl
I am very much in favor of immigration everywhere in the U.S. except Edison, N.J. But really, what bothers me about this piece, why it didn't strike me as satire, is that it seems to assume that there really is a dominant narrative out there, i.e. that "white" culture is where it's at. Assimilation is not an option, it's a requirement for these rude new aliens – but of course, that assimilation is on the dominant narrative's terms. Didn't meant to insult Indians with my column this week. Also stupidly assumed their emails would follow that Gandhi non-violence thing. Perhaps he thought the emails would also be curry flavo(u)red? If the word 'curry' doesn't have a stable referent or a fixed origin, how can its changing use by postcolonials be seen as a sign of resistance? The University of California (UC) postdocs are trying to form a union to (among other things) get a uniform contract, workplace protections, etc. The UC administration has (true to form) stalled on giving information for negotiations. Congressman George Miller sent a rather strongly worded letter to Yudof after a congressional hearing was held in Berkeley. More recently the union filed an unfair labor practices charge with the California Public Employment Relations Board. Beryl Benderly has been covering this story for Science Magazine – links to some of her posts are above. As I went for a walk this morning I passed the Bank of America on University – usually a pretty deserted stretch but today brightly colored by the contents of an upended Monopoly set, bright yellow Community Chest cards and 10 dollar bills scattered on the sidewalk. In front of the doors, face up, a salmon "Get out of jail free." A homeless man reaches down for the top hat. Earlier this month, a letter was circulated to the UC Faculty regarding the Nature Publishing Group (NPG)'s proposal to increase the licensing fees for online access by 400%, which is pretty dramatic given a) the high cost of the subscription in the first place and b) the fact that library budgets are going down. There was a suggestion of a boycott. NPG felt like they had been misrepresented, and issued a press statement saying "you guys are a bunch of whiners, our stuff is the best, and 7% price hikes per year is totally reasonable." Furthermore, they said "you guys have been getting a crazy good deal for way too long anyhow and it's time you paid your fair share." I suppose behaving like complete jerks is an ok way to react when you are trying to sell somebody something, especially something that is made up of stuff written by your potential buyers. I wonder what their profit margins are like. Looks like PLoS better start rolling out some new titles! More info can be found at the OSC website, which oddly doesn't say what OSC stands for.
They show that by using a certain structured code called a polytope code, together with some checking and filtering by internal nodes, the adversaries can be detected/avoided. For some networks (planar) they can show that their code achieves the cut-set bound. The key to the codes working is making any adversarial action result in certain tuples of random variables becoming atypical and hence detectable. This uses adversarial modeling (in the form of game theory) to model how users in a multiaccess setting contend for resources in the presence of jammers. In the AVC context, La and Anantharam proved some early results on these models. Here the extension was that the jammer(s) do not know whether or not the transmitter will be sending anything in a given slot (hence dynamic traffic). In the case with a single transmitter and a single jammer, the model is that the transmitter has to minimize its energy subject to an average rate constraint (packets are coming in at a certain rate to the transmitter and have to be passed along). The jammer has a power constraint and wants to maximize the transmitter's energy. It turns out that if the jammer doesn't know the queue state of the transmitter, then it has to be on all the time. They have more complicated extensions to multi-transmitter scenarios. Sirin talked in the session I was in. For point-to-point AVCs under average error, there is a finite list size called the symmetrizability of the channel such that for list-decoding with list sizes up to the symmetrizability the capacity is 0, and for larger list sizes the capacity is the randomized coding capacity of the AVC. This work extended this to the multiaccess setting, which is a particularly tricky beast since the capacity region itself is nonconvex. He showed that there is an analogous threshold such that the capacity region has empty interior if the list size is below it, and that for sufficiently large list sizes you get back the randomized coding capacity region. What is open is whether the list sizes for the two users can be different, so that this gap could be closed, but that problem seems pretty hard to tackle. I guess I'll be shameless and blog about my own paper — we looked at an adversarial setting where the adversary gets to see the transmitted symbols after a delay. We assumed that the delay grew linearly with the blocklength, so that the adversary gets to act at time $i$ based only on the symbols transmitted up to time $i - dn$, where $n$ is the blocklength and $d$ is the delay parameter. Suppose everything is binary and the adversary gets to flip a total of $pn$ bits of the codeword but has to see it causally with delay $dn$. If $d = 1$ the adversary sees nothing and the average-error capacity is $1 - h(p)$ bits. If $d = 0$ the adversary can do a lot worse and the capacity is strictly smaller. What we show is that for any $d > 0$ we go back to $1 - h(p)$ bits (and there was much rejoicing). The key is allowing the encoder to randomize, which effectively prevents the adversary from learning anything. Luckily, the decoder can still figure out what is going on (as long as there is some additional slack in the rate), even though it does not share common randomness with the encoder.
The question seems trivial: As is well-known, a random regular graph will be a very high quality expander (for very small sets) and therefore a (very small) constant fraction of bit-flipping errors can be corrected by Sipser & Spielman's bit flipping algorithm. Also the work by Feldman et al. LP decoding corrects a constant fraction of errors establishes correction of an adversarial set of bit-flipping errors for expander graphs under LP decoding. But here is the catch: sure, a random graph is a good expander with high probability, but how do we check if a given graph is a good expander? for any size of expanding sets $\alpha$. Unfortunately this bound cannot certify expansion for any , which is exactly the point where it starts being useful for coding theory. Perhaps there are stronger spectral bounds that could establish more expansion, the book Spectral Graph Theory by Fan Chung Graham contains a lot of material on that point but I have not seen any such applications to coding. So ok, lets say we do not know how to construct (or certify) that LDPCs have high expansion, how about other graph properties that will guarantee a correctable fraction of errors in polynomial time? This started when I was working with Costas on LP decoding for LDPC codes and we were always (incorrectly) assuming that random regular bipartite graphs will have girth with high probability. When we actually tried to find a proof for this, for example looking at the Modern Coding theory book we find that usual proofs establish a significantly weaker statement: in a random regular graph, if you start from a variable and start expanding the tree, you will not loop around after a constant number of steps with probability. This is what is refered to as 'locally-tree like'. I do not know of any stronger statements but I think it can be easily shown that for any fixed cycle length, the expected number of cycles of that length is constant for regular random graphs. The breakthrough paper by Arora, Daskalakis and Steurer, 'Message-Passing Algorithms and Improved LP Decoding' establishes that regular LDPCs with girth will correct a constant fraction of random bit-flipping errors whp under LP decoding. But how do we find regular LDPCs with girth ? After searching a little I found the recent work of Bayati et al. Generating Random Graphs with Large Girth and Generating random Tanner-graphs with large girth that talk about the related problem of generating a graph with high girth uniformly from all such graphs (and with a given degree distribution) but as far as I understood these constructions cannot guarantee a diameter scaling like (but only any constant diameter). This is of course the relevant practical question but the scaling issue remains. The only construction that I know is the one found in the appendix of Gallager's thesis that contains a deterministic algorithm that constructs regular Tanner graphs with girth. The same question is relevant for compressed sensing when we ask for sparsity in the measurement matrix: All the constructions that I know for sparse measurement matrices (that require the optimal number of measurements under Basis pursuit, e.g. see the survey of Gilbert and Indyk: Sparse Recovery using Sparse Matrices) are constructed from high bipartite expander graphs. But given a fixed measurement matrix how do we certify that it is good and should be implemented in a system? 
Of course one can use message passing algorithms for compressed sensing (by Donoho, Maleki and Montanari) and obtain sparse measurement matrices with very good thresholds, but under a different decoding algorithm.

1. The Arora, Daskalakis & Steurer paper guarantees that (3,6)-regular graphs with logarithmic girth correct a (quite high – much higher than what expansion arguments give) constant fraction of errors under LP decoding.
2. Gallager's thesis appendix contains deterministic constructions of such (3,6)-regular sparse matrices.
3. Our connection result establishes that if a matrix corrects a constant fraction of bit-flipping errors, the same matrix used as a compressed sensing measurement matrix will recover all sparse signals (of the same support as the bit-flipping errors).

Conclusion: Appendix C in Gallager's thesis contains the best sparse compressed sensing measurement matrices known (under basis pursuit).

This was the talk I did attend. The goal was to information-theorize the notion of compressing databases. You get i.i.d. individuals with characteristics drawn from some prior distribution, and features generated conditionally on each individual's characteristics. The identification problem is this: given a noisy feature vector generated from a particular individual, retrieve the index of the user who generated it. This paper looked at the case where you also want to recover a distorted version of the individual's characteristics, so they are trading off identification and distortion. The solution is to code Wyner-Zivly, and the results recover the existing results on the identification problem alone as well as the Wyner-Ziv problem.
CommonCrawl
It is well-known that an analytic generic CR submanifold $M$ of codimension $m$ in $\mathbb{C}^{n+m}$ is locally transformed by a biholomorphic mapping to a plane $\mathbb{C}^n\times\mathbb{R}^m \subset \mathbb{C}^n\times\mathbb{C}^m$ whenever the Levi form $L$ on $M$ vanishes identically. We obtain such a normalizing biholomorphic mapping of $M$ in terms of the defining function of $M$. Then it is verified, without the Frobenius theorem, that $M$ is locally foliated into complex manifolds of dimension $n$. We find the values of the numerical invariants $\mathrm{col}(R)$ and $\mathrm{row}(R)$ for $R=k[t^e, t^{e+1}, t^{(e-1)e-1}]$, where $k$ is a field and $e\geq4$. We also show that $\mathrm{col}(R) = \mathrm{crs}(R)$ and $\mathrm{row}(R) = \mathrm{drs}(R)$, but they are strictly less than the reduction number of $R$ plus 1. An integral transform with the Bessel function $J_\nu(z)$ in the kernel is considered. The transform is related to a singular Sturm-Liouville problem on a half line. This relation yields a Plancherel theorem for the transform. A Paley-Wiener-type theorem for the transform is also derived. We consider regularity questions arising in the degenerate elliptic vector-valued variational inequalities $-\mathrm{div}(|\nabla u|^{p-2}\nabla u)\geq b(x, u, \nabla u)$ with $p\in(1, \infty)$. It is a generalization of the scalar-valued inequalities, i.e., the obstacle problem. We obtain the $C^{1,\alpha}_{\mathrm{loc}}$ regularity for the solution $u$ under a controllable growth condition on $b(x, u, \nabla u)$. We study the variational principle for quantum unbounded spin systems interacting via superstable and regular interactions. We show that the (weak) KMS state constructed via the thermodynamic limit of finite-volume Green's functions satisfies the Gibbs variational equality. For linear time-varying control systems with constrained control described by both differential and discrete-time equations in Banach spaces, we give necessary and sufficient conditions for exact global null-controllability. We then show that for such systems, complete stabilizability implies exact null-controllability. We characterize a condition for $M$ to be of weak type $(\Phi_1, \Phi_2)$ in terms of Orlicz norms. In this paper we formulate an optimal control problem governed by time-delay Volterra integral equations; the problem includes control constraints as well as terminal equality and inequality constraints on the terminal state variables. First, using a special type of state and control variations, we present a relatively simple and self-contained method for deriving new necessary conditions in the form of a Pontryagin minimum principle. We show that these results immediately yield classical Pontryagin necessary conditions for control processes governed by ordinary differential equations (with or without delay). Next, imposing suitable convexity conditions on the functions involved, we derive Mangasarian-type and Arrow-type sufficient optimality conditions. We calculate the $R(G)$-algebra structure on the reduced equivariant K-groups of two-dimensional spheres on which a compact Lie group $G$ acts as a reflection. In particular, the reduced equivariant K-groups are trivial if $G$ is abelian, which shows that the previous calculation of Y. Yang is incorrect.
CommonCrawl
[1902.05040] How do infinite width bounded norm networks look in function space? Title: How do infinite width bounded norm networks look in function space? Abstract: We consider the question of what functions can be captured by ReLU networks with an unbounded number of units (infinite width), but where the overall network Euclidean norm (sum of squares of all weights in the system, except for an unregularized bias term for each unit) is bounded; or equivalently what is the minimal norm required to approximate a given function. For functions $f : \mathbb R \rightarrow \mathbb R$ and a single hidden layer, we show that the minimal network norm for representing $f$ is $\max(\int |f''(x)| dx, |f'(-\infty) + f'(+\infty)|)$, and hence the minimal norm fit for a sample is given by a linear spline interpolation.
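As a quick numerical illustration of the formula in the abstract (my own sketch, not from the paper): for a piecewise-linear $f$, $\int |f''|$ is the total variation of the slope, i.e. the sum of the absolute slope changes at the kinks, so the claimed minimal network norm is easy to evaluate.

```python
import numpy as np

def minimal_norm_piecewise_linear(knots_x, knots_y, slope_left, slope_right):
    """max( integral |f''| , |f'(-inf) + f'(+inf)| ) for a piecewise-linear f.

    f has slope slope_left before the first knot, slope_right after the last
    knot, and interpolates the knots linearly in between; f'' is then a sum of
    point masses, so the integral is the sum of the absolute slope changes.
    """
    xs, ys = np.asarray(knots_x, float), np.asarray(knots_y, float)
    interior = np.diff(ys) / np.diff(xs)                 # slopes between knots
    slopes = np.concatenate(([slope_left], interior, [slope_right]))
    total_curvature = np.sum(np.abs(np.diff(slopes)))    # = integral of |f''|
    return max(total_curvature, abs(slope_left + slope_right))

# f(x) = |x| on [-1, 1], flat outside: slope changes of size 1, 2, 1 -> norm 4
print(minimal_norm_piecewise_linear([-1, 0, 1], [1, 0, 1], 0.0, 0.0))  # 4.0
```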
CommonCrawl
Counting integers with a least prime factor greater than $x$ in a sequence of $x$ consecutive integers. Problem about real projective plane. Proving an inequality involving absolute value; how do I justify using a conjunction (and) instead of a disjunction (or)? Question on the injectivity of a function. How do I prove that the sequence $x_n=1+(-1)^n$ does not converge?
CommonCrawl
LATTICE-parameters.sh: (Bash) shell script for extracting the values of the lattice parameters out of any exciting input file.
exciting2sgroup.py: Python script for converting an exciting input file to an input file for the program sgroup.

Minimization with respect to the $b/a$ ratio (all other parameters are fixed).
Minimization with respect to the $c/a$ ratio (all other parameters are fixed).
Minimization with respect to the angle $\alpha$ between the $b$ and $c$ axes (all other parameters are fixed).
Minimization with respect to the angle $\beta$ between the $a$ and $c$ axes (all other parameters are fixed).
Minimization with respect to the angle $\gamma$ between the $a$ and $b$ axes (all other parameters are fixed).

Inside this directory, we create or copy an input file for hexagonal Be with the name Be_opt.xml. In this tutorial it is not necessary to rename the input file as input.xml, because this file is the input of a specific script used in this tutorial and not of exciting itself. Please notice that the input file for a direct exciting calculation must always be called input.xml. This file should look like the following.

0.03, the absolute value of the maximum strain, $\epsilon_{\rm max}$, for which we want to perform the calculation. Notice that in this case the physical strain $\epsilon$ is defined by the relationship $V=V_{\rm initial}\,(1+\epsilon)^3$, where $V_{\rm initial}$ is the volume of the unit cell defined in Be_opt.xml. Due to the fact that in this step only the volume has been optimized, the relative variation with respect to the initial values of the three lattice parameters $a$, $b$, and $c$ is the same. If you are interested in checking how accurate the calculated equilibrium parameters at this step are, you can find more information here.

Notice that in this case the physical strain $\epsilon$ is defined by the relationship $(c/a)=(c/a)_{\rm initial}\,(1+\epsilon)$, where $(c/a)_{\rm initial}$ corresponds to the unit cell defined in 1-VOL.xml. Now, move to the COA directory and run OPTIMIZE-submit.sh. Here, you can notice that by changing the $c/a$ ratio at fixed $V$, the values of $a$, $b$ and $c$ differ from the ones at the previous step. Now, you should move out to the parent directory and rename COA to 2-COA (second optimization step, varying only $c/a$). Then, copy the coa-optimized.xml file to the current directory with the name 2-COA.xml.

Notice: Due to the fact that the volume and the $c/a$ ratio have already been optimized once, we can increase the number of calculations at each step in order to achieve a better fitting accuracy. Repeat now the procedure already explained in STEP1, running the script OPTIMIZE-lattice.py and using as entries the values 2-COA.xml, 1, 0.03, and 11 in the given order. After having performed the calculation (running the script OPTIMIZE-submit.sh inside the directory VOL), you run OPTIMIZE-lattice.py and get the following plot.

Proceed in a similar way to STEP2. Run the script OPTIMIZE-lattice.py using as entries the values 3-VOL.xml, 2, 0.03, and 11 in the given order. Using the same procedure as in the previous steps, you will end up with the following plot.

In the following table you find a summary of the results for the lattice parameters of the first 4 optimization steps. As you can see from the previous table, at the 4-th iteration you have reached an overall convergence of the lattice parameters $a$, $b$, and $c$ of about $1\times10^{-3}$ Bohr. If this result corresponds to the desired accuracy, you can stop the optimization procedure.
Otherwise, you proceed with the next step and, using the new results, you check again the convergence behaviour of the equilibrium parameters. In some cases, it may happen that after some new optimization steps the values of the lattice parameters oscillate without reaching convergence. Then, you should improve the accuracy in the energy of the single calculations by correspondingly changing computational parameters such as ngridk, rgkmax, and epsengy. If the accuracy of the calculated total energies is not sufficient, a result is obtained which is similar to the one illustrated in the next plot, where no global minimum can be clearly identified. In order to obtain reasonable results, a general rule is that the accuracy of the energy for each calculation should be much smaller than the energy difference between two different calculations (two different points in the above plot).
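Each optimization step is essentially a one-dimensional energy-versus-strain fit. The following minimal sketch (my own illustration with made-up numbers, not part of the exciting scripts) shows the idea: fit a low-order polynomial to $E(\epsilon)$ and locate its minimum inside the scanned strain window.

```python
import numpy as np

# Hypothetical (strain, energy) pairs such as those produced in one optimization
# step; in the real workflow the energies come from the individual exciting runs.
strain = np.linspace(-0.03, 0.03, 11)
energy = -29.135 + 0.8 * (strain - 0.004) ** 2 + 3.0 * (strain - 0.004) ** 3

fit = np.poly1d(np.polyfit(strain, energy, 4))      # low-order polynomial fit
roots = fit.deriv().roots                           # stationary points of the fit
real = roots[np.isreal(roots)].real
inside = real[(real >= strain.min()) & (real <= strain.max())]
eps_min = inside[np.argmin(fit(inside))]            # minimum inside the window

print("equilibrium strain: %.5f" % eps_min)
print("energy at minimum : %.6f" % fit(eps_min))
# The equilibrium lattice parameter then follows from the strain definition,
# e.g. V = V_initial * (1 + eps)^3 in the volume-optimization step.
```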
CommonCrawl
If the returns of my strategy are distributed like $\mathcal N[\mu,\sigma]$, what is the optimal fraction of capital to invest in each single trade, as a function of $\mu$ and $\sigma$? Help!

This problem can be expressed as the original Merton's portfolio problem. The wealth $X_t$ evolves as $$dX_t = \alpha_t X_t \frac{dS_t}{S_t} + (1-\alpha_t) X_t \frac{dS_t^0}{S_t^0},$$ where $\alpha_t$ is the proportion of the investment in the risky asset $S_t$, and $S_t^0$ is the risk-free asset. Maximizing expected CRRA utility of wealth gives the classical constant solution $$\alpha^* = \frac{\mu - r}{\gamma\,\sigma^2},$$ where $\gamma$ is constant and corresponds to the risk-aversion of the investor, and $r$ is the risk-free rate. Notice that the solution can be interpreted as the mean-variance trade-off.
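A tiny numerical illustration of the formula above (my own sketch; the parameter values are made up): with log utility ($\gamma = 1$) the Merton fraction coincides with the continuous-time Kelly fraction $(\mu - r)/\sigma^2$.

```python
def merton_fraction(mu, sigma, r=0.0, gamma=1.0):
    """Optimal constant fraction of wealth to hold in the risky asset."""
    return (mu - r) / (gamma * sigma ** 2)

# Hypothetical strategy: 8% annual excess return, 20% volatility.
for gamma in (1.0, 2.0, 4.0):          # gamma = 1 is the log-utility / Kelly case
    f = merton_fraction(mu=0.08, sigma=0.20, gamma=gamma)
    print(f"gamma = {gamma}: invest {f:.2f}x of capital")
# gamma = 1 gives 2.0 (a levered position); a larger gamma ("fractional Kelly")
# is the usual, less aggressive choice in practice.
```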
CommonCrawl
The goal is to tile rectangles as small as possible with the given hexomino, in this case number 3 of the 25 hexominoes which cannot tile a rectangle alone. We allow the addition of copies of a rectangle. For each rectangle $a\times b$, find the smallest-area larger rectangle that copies of $a\times b$ plus at least one copy of the given hexomino will tile. Now we don't need to consider $1\times 1$, $1\times 2$, $1\times 3$ or $2\times 3$ further, as we have found the smallest rectangle tilable with copies of the hexomino plus copies of those rectangles. This is split into two sections: those tilable by hand, and those probably requiring a computer. Feel free to solve any of them by hand, but please don't post computer-found tilings for those in the no-computer section. No-computer section: All of these should be tiled by hand only. This also means please don't look up answers on the web... This does not preclude you from, for example, using an image program to manipulate shapes on the screen, just from using a computer to search for or automate the arrangement. Computer section: Master solvers may well solve these by hand.
CommonCrawl
I've read that the Friis transmission equation can be applied to calculate path loss in ideal space. However, I'd like to better understand the factors that define it and what it represents, and why it is the equation it is.

Let's imagine that there are two antennas, entirely in free space. There is no Earth or any other object near enough to affect propagation. Let's also assume that these antennas are far enough apart that the antennas interact only by their far fields. An ideal isotropic antenna radiating a power $P_t$ spreads that power evenly over a sphere, so at a distance $d$ it is spread over an area $4\pi d^2$, giving a flux of $$S = \frac{P_t}{4\pi d^2}.$$ This number has units $\mathrm W/\mathrm m^2$ and is a power flux density or irradiance. A watt is one joule per second, so a power flux density of $1\mathrm W/\mathrm m^2$ means that every second, one joule of energy passes through an area of one square meter. Working through the receiving side (an isotropic antenna has an effective aperture of $\lambda^2/4\pi$) leads to the free-space path loss $$\mathrm{FSPL} = \left(\frac{4\pi d f}{c}\right)^2,$$ where $d$ is the distance between the antennas, $c$ is the speed of light, and $f$ is the frequency in Hz. This formula is essential to the calculation of link budgets, and gives the best-case path loss. It's then easier to consider other sources of loss as separate factors.

1: this means you can build a Dyson sphere around a star, capturing all of its energy.
2: at least, not ones that emit coherent radiation. The sun can be (approximately) isotropic because the light (a kind of electromagnetic radiation) it emits is not coherent.
3: 1.62, or 2.15 dBi, for a half-wave dipole.
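A small sketch of using the free-space result in a link budget (my own example numbers): converting the path loss to dB so it can simply be added to the other gains and losses.

```python
import math

C = 299_792_458.0                       # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

loss = fspl_db(10_000, 2.4e9)           # a 10 km link at 2.4 GHz
print(f"FSPL = {loss:.1f} dB")          # roughly 120 dB
# Link budget: received power (dBm) = Tx power (dBm) + antenna gains (dBi)
#              - FSPL (dB) - any other losses (dB).
```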
CommonCrawl
Abstract: The study of free boundary minimal hypersurfaces, namely of those critical points for the area functional that meet the boundary of their ambient manifold orthogonally, dates back at least to Courant and yet has recently seen remarkable developments both in terms of existence results (via min-max and/or gluing/desingularization methods) and in terms of a much deeper understanding of the connection with extremal metrics for the first Steklov eigenvalue. In this lecture, I will survey a number of results, of global character, centered around the following general questions: What conditions ensure geometric compactness (i.e. smooth, graphical convergence with multiplicity one) of a sequence of free boundary minimal hypersurfaces? What is the fine scale description of the geometric picture when multiply-sheeted convergence happens instead? As a byproduct, our analysis allows us to obtain various consequences in terms of finiteness and topological control, which we can combine with a suitable bumpy metric theorem in this category to derive novel genericity theorems. Based on joint work with Lucas Ambrozio and Benjamin Sharp.

Abstract: I consider surfaces in 3-space which are critical with respect to certain geometric variational problems, such as CMC and minimal surfaces and (constrained) Willmore surfaces. In this talk I want to report on recent results on the construction of new examples of higher genus CMC surfaces and on the identification of constrained Willmore minimizers in the class of conformal tori. Moreover, by viewing minimal surfaces in different space forms within the constrained Willmore integrable system, counterexamples to a question of Simpson are constructed. This suggests a deeper connection between Willmore surfaces, i.e., rank 4 harmonic maps theory, and the rank 2 theory of Hitchin's self-duality equations. This talk is based on joint work with Cheikh Birahim Ndiaye, Sebastian Heller and Nicholas Schmitt.

Abstract: We study a variational problem for surfaces in Euclidean space with an anisotropic surface energy. An anisotropic surface energy is the integral of an energy density that depends on the surface normal over the considered surface, which was introduced to model the surface tension of a small crystal. The minimizer of such an energy among all closed surfaces enclosing the same volume is unique and is (up to rescaling) the so-called Wulff shape. The Wulff shape and equilibrium surfaces of this energy for volume-preserving variations are generalizations of the round sphere and constant mean curvature surfaces, respectively. However, they are not smooth in general. In this talk, we show that, if the energy density function is three times continuously differentiable and convex, then any closed stable equilibrium surface is a rescaling of the Wulff shape. Moreover, we show that there exists a non-convex energy density function such that there exist closed embedded equilibrium surfaces with genus zero which are not (any homothety of) the Wulff shape. This also gives closed embedded self-similar shrinking solutions with genus zero of the anisotropic mean curvature flow other than the Wulff shape. These concepts and results are naturally generalized to higher dimensions.

Abstract: We discuss the existence of surfaces of type $S^2$ minimizing the Willmore functional with prescribed isoperimetric ratio, and some asymptotics as the ratio goes to zero.
Abstract: An $f$-extremal domain in a manifold $M$ is a domain $\Omega$ which admits a positive solution $u$ to the equation $\Delta u+f(u)=0$ with $0$ Dirichlet boundary data and constant Neumann boundary data. Thanks to a result of Serrin, it is known that in $\mathbb R^n$ such an $f$-extremal domain has to be a round ball. In this talk, we will prove that an $f$-extremal domain in $\mathbb S^2$ which is a topological disk is a geodesic disk under some assumption on $f$. This is a joint work with J.M. Espinar.

Abstract: I will report on recent progress on the problem of classifying metrics on surfaces with constant curvature and prescribed conic singularities, with particular emphasis on the case of positive curvature and large cone angles. Joint work with Xuwen Zhu.

Title: Progress in the theory of CMC surfaces in locally homogeneous 3-manifolds.

Abstract: I will go over some recent work that I have been involved in on surface geometry in complete locally homogeneous 3-manifolds X. In joint work with Mira, Perez and Ros, we have been able to finish a long term project related to the Hopf uniqueness/existence problem for CMC spheres in any such X. In joint work with Tinaglia on curvature and area estimates for CMC H>0 surfaces in such an X, we have been working on getting the best curvature and area estimates for constant mean curvature surfaces in terms of their injectivity radii and their genus. It follows from this work that if W is a closed Riemannian homology 3-sphere, then the moduli space of closed strongly Alexandrov-embedded surfaces of constant mean curvature H in an interval [a,b] with a>0 and of genus bounded above by a positive constant is compact. In another direction, in joint work with Coskunuzer and Tinaglia we now know that in complete hyperbolic 3-manifolds N, any complete embedded surface M of finite topology is proper in N if H is at least 1 (this is work with Tinaglia), and for any value of H less than 1 there exist complete embedded nonproper planes in hyperbolic 3-space (joint work with both researchers). In joint work with Adams and Ramos, we have been able to characterize the topological types of finite topology surfaces that properly embed in some complete hyperbolic 3-manifold of finite volume (including the closed case) with constant mean curvature H; in fact, the surfaces that we construct are totally umbilic.

Abstract: We define a hierarchy of special classes of constrained Willmore surfaces by means of the existence of a polynomial conserved quantity of some type, filtered by an integer. Type 1 with parallel top term characterises parallel mean curvature surfaces and, in codimension 1, type 1 characterises constant mean curvature surfaces in space-forms. We show that this hierarchy is preserved under both spectral deformation and Baecklund transformation, for special choices of parameters, defining, in particular, transformations of CMC surfaces into new ones, with preservation of both the space-form and the mean curvature, in the latter case. This is joint work with Susana Duarte Santos.

Abstract: We introduce a general scheme that permits generating successive min-max problems for producing critical points of higher and higher indices for Palais-Smale functionals in Banach manifolds. We call the resulting tree of minmax problems a minmax hierarchy.
Using the viscosity approach to the minmax theory of minimal surfaces that we introduced in a series of recent works, we shall explain how this scheme can be deformed for producing smooth minimal surfaces of strictly increasing area in arbitrary codimension. We shall implement this scheme in the case of the 3-dimensional sphere. In particular we are giving a min-max characterization of the Clifford torus and conjecture what are the next minimal surfaces to come in the $S^3$ hierarchy.

Title: Minimal annuli in $\mathbb H^2\times\mathbb R$.

Abstract: We give some existence and non-existence results for properly Alexandrov-embedded minimal annuli in $\mathbb H^2\times\mathbb R$.

Abstract: Minimal and constant mean curvature surfaces have played a key role in classical (smooth) Riemannian geometry, but finding analogous discrete objects turns out to be difficult. Various definitions of these compete, e.g. critical points of the area functional, though that one remains unsatisfactory and breaks the maximum principle (as is obvious from the cotan Laplace operator). We will present here one based on circular quad-based nets and explain how it relates to the minimal/CMC PDE. We will also show that it does have an interpretation in terms of a Lax pair, much like the smooth PDE does. As a consequence it offers a (partial) constructive approach, as well as a Lawson correspondence. This is a joint work with Alexander Bobenko (TU Berlin).

If Ω is bounded, then by a classical result of Serrin, Ω is a disc and u is radial. If the domain is unbounded the shape of Ω and u are not so rigid. We present some theorems about that situation. These results show some relationship with the geometry of properly embedded minimal and constant mean curvature surfaces in $\mathbb R^3$. This is a joint work with David Ruiz and Pieralberto Sicbaldi.

Abstract: We discuss the precise behaviour of solutions of Teichmüller harmonic map flow at finite time singularities and explain how it allows us to conclude that this geometric flow decomposes every given closed initial surface into (branched) minimal immersions. This is joint work with Peter Topping.

Abstract: In this talk I will discuss several aspects of the geometric structure of complete minimal submanifolds of rank two in space forms. Under some natural curvature assumptions we provide a classification of these submanifolds. The results are joint work with M. Dajczer, Th. Kasioumis and Th. Vlachos.

Abstract: We'll consider a theory of bubbling along a sequence of minimal hypersurfaces with bounded Morse index and area. In particular we'll see that one can capture regions of coalescing index (along the sequence) to produce a family of complete minimal hypersurfaces in Euclidean space of finite total curvature (the 'bubbles'). Furthermore the total curvature along the sequence is entirely accounted for by a limit minimal hypersurface and the bubbles. We'll discuss applications of such a result. Joint work with R. Buzano and ongoing work with L. Ambrozio, R. Buzano and A. Carlotto.

Title: Collapsing ancient solutions of mean curvature flow.

Abstract: Understanding the geometry of ancient solutions of mean curvature flow is key to studying singularities of mean curvature flow. In this talk, I will describe the construction of the unique compact convex rotationally symmetric ancient solution of mean curvature flow contained in a slab. This is joint work with Bourni and Langford.

Abstract: Ricci flow theory has been developing rapidly over the last couple of years, with the ability to handle Ricci flows with unbounded curvature finally becoming a reality.
This is vastly expanding the range of potential applications. I will describe some recent work in this direction with Miles Simon that shows the right way to pose the 3D Ricci flow in this setting in order to obtain applications. Amongst these applications is a proof that 3D Ricci limit spaces are locally bi-Hölder homeomorphic to smooth manifolds, which solves more than an old conjecture of Anderson-Cheeger-Colding-Tian in this dimension.

Abstract: I will discuss a surprising dichotomy for classical minimal surfaces that gives new insights into the Colding-Minicozzi theory.
CommonCrawl
Short answer. No, the answer proposed here is wrong. Actually, $[(zx)/x]\, \lambda z.xyz = \lambda t.(zx)yt$, which is completely different from $\lambda t.(t(zx)y)$ because application is not commutative, unlike the usual product: in general, the term $xy$ is the application of $x$ to $y$ (where $x$ and $y$ are intended as two arbitrary functions), which is different from $yx$, i.e. the application of $y$ to $x$. Anyway, you are right when you say that $[(zx)/x]\, \lambda z.xyz \neq \lambda z.zxyz$. Indeed, substitution in the $\lambda$-calculus is not a simple replacement of something with something else, because of the problem of the capture of variables, which I explain below. Substitution is a delicate operation in the $\lambda$-calculus. Indeed, the $\lambda$-calculus is intended as a formal system for expressing computation based on abstraction and application using variable binding and substitution, where functions are taken as first class values: every term in the $\lambda$-calculus represents a (computable) function. A naïve approach in the definition of substitution in the $\lambda$-calculus may change the meaning of the represented functions in an inconsistent way. In the syntax of the $\lambda$-calculus, the $\lambda$ is an operator binding a variable in a function. For instance, the term $\lambda x.x$ represents the identity function ($x \mapsto x$), the term $\lambda x. y$ represents the constant function ($x \mapsto y$, i.e. everything is mapped to $y$). Note that the particular choice of a bound variable, in a $\lambda$, does not (usually) matter: for instance, the term $\lambda x . x$ is the same as the term $\lambda y. y$ because they both represent the identity function. Formally, terms in the $\lambda$-calculus are identified up to $\alpha$-equivalence, i.e. up to renaming of the bound variables. Now, consider the term $[x/y] \, \lambda x. y$. Morally, it represents the constant function $x \mapsto y$ (everything is mapped to $y$) where $y$ is replaced by $x$, that is, it represents the constant function $z \mapsto x$ (everything is mapped to $x$). However, if we intended the substitution as a simple replacement, $[x/y] \, \lambda x. y$ would be $\lambda x. x$, i.e. the identity function, a completely different function from the intended one. The problem arises because the variable $x$ in the substitution $[x/y]$ has been captured by the binder $\lambda x$ in the term. So, in order to define substitution in a consistent way, the problem of the capture of variables has to be avoided. The solution is defining substitution in a capture-avoiding way as follows: given the terms $t$ and $u$, the term $[u/x]\, t$ is obtained from $t$ by replacing the free (i.e. not bound by a $\lambda$) occurrences of $x$ in $t$ with $u$, provided that the bound variables of $t$ are not free in $u$; if this proviso is not fulfilled by $t$, then we work on a term $t'$ (instead of $t$) where this proviso holds: this is always possible thanks to $\alpha$-equivalence, i.e. by renaming the bound variables in $t$ (which does not change the meaning of $t$, as I explained before). For example, in $[x/y] \, \lambda x. y$ the variable $x$ in the substitution is also a bound variable in the term $\lambda x. y$; then, instead of performing the replacement on $\lambda x. y$, we do it in the $\alpha$-equivalent term $\lambda z. y$ (or $\lambda w.y$, it is the same) and then we get $[x/y] \, \lambda x. y = \lambda z. x$ (or equivalently, $[x/y] \, \lambda x. y = \lambda w. x$).
Coming back to your question, in the term $[(zx)/x]\, \lambda z.xyz$, the term $zx$ in the substitution contains a free variable $z$ that is bound in $\lambda z.xyz$, so before performing the substitution we have to rename the term $\lambda z.xyz$ in an $\alpha$-equivalent way, say $\lambda w.xyw$ (or equivalently, $\lambda t.xyt$, if $t$ stands for a variable). Therefore, $[(zx)/x]\, \lambda z.xyz = \lambda w.(zx)yw$ (or equivalently, $[(zx)/x]\, \lambda z.xyz = \lambda t.(zx)yt$).
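To make the capture-avoiding discipline concrete, here is a small executable sketch (my own illustration, not part of the original answer) of substitution on λ-terms represented as nested tuples; it renames a bound variable whenever that variable occurs free in the term being substituted in:

```python
# Lambda-terms: ('var', x), ('app', t, u), ('lam', x, body)
import itertools

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'app':
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}       # 'lam'

_fresh = itertools.count()

def fresh(avoid):
    while True:
        v = f"v{next(_fresh)}"
        if v not in avoid:
            return v

def subst(t, x, u):
    """Capture-avoiding [u/x]t."""
    tag = t[0]
    if tag == 'var':
        return u if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, u), subst(t[2], x, u))
    y, body = t[1], t[2]                  # 'lam'
    if y == x:                            # x is bound here: nothing to substitute
        return t
    if y in free_vars(u):                 # y would capture a free variable of u:
        z = fresh(free_vars(u) | free_vars(body) | {x})
        body = subst(body, y, ('var', z)) # alpha-rename y to a fresh variable first
        y = z
    return ('lam', y, subst(body, x, u))

# [(z x)/x] (lambda z. x y z)  ->  lambda w. (z x) y w   (up to the fresh name chosen)
term = ('lam', 'z', ('app', ('app', ('var', 'x'), ('var', 'y')), ('var', 'z')))
print(subst(term, 'x', ('app', ('var', 'z'), ('var', 'x'))))
```

Running it prints the renamed abstraction, matching the hand computation above.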
CommonCrawl
Abstract: We study the problem of whether an arbitrary codimension three graded artinian Gorenstein algebra has the Weak Lefschetz Property. We reduce this problem to checking whether it holds for all compressed Gorenstein algebras of odd socle degree. In the first open case, namely Hilbert function (1,3,6,6,3,1), we give a complete answer in every characteristic by translating the problem to one of studying geometric aspects of certain morphisms from $\mathbb P^2$ to $\mathbb P^3$, and Hesse configurations in $\mathbb P^2$.
CommonCrawl
With the start of Observing Period 31 (April 1st, 2005), NOT has started a Fast-Track Service Observing Programme offering the opportunity to propose short programmes on a running basis and with a short lead time from proposal to execution. A number of nights have been reserved for this service throughout each semester, and approved programmes will in principle only be executed on those nights. Below, a brief description of the programme is given, and links are provided to the fast-track proposal form and the Observing Block (OB) generator needed to define the required observations when a proposal has been accepted. With the implementation of the new general rules for the allocation of observing time and user contributions there are some practical changes to the Fast-Track service programme. The Fast-Track programme remains a service that is provided free of cost to Nordic users (i.e., researchers affiliated with a Nordic institution), but it also incorporates a system to compensate regular Nordic observing programmes for time lost due to interruptions by observations for ToO or Monitoring programmes. In the case of a Fast-Track proposal from a researcher affiliated with a non-Nordic institute, a contribution to the operational cost of queue-mode service observing will have to be provided at an hourly rate based on the cost of the corresponding services (for proposals submitted as of the start of period 56) of 1670 Euro per night, assuming an average of 10 hr per full night. In these cases, the PI of the proposal assumes responsibility to provide the required contributions when accepting any allocation of time, but contributions are only to be made if the observations are executed successfully. In the case you are requesting observations to compensate for time lost due to interruptions by observations for ToO or Monitoring programmes, there is no need to submit a fast-track programme, but you should use the proposal number of the programme affected when generating OBs (see below under "Project execution"). You can define OBs up to (approximately) the total time lost, which we then execute in service mode during Nordic service nights. If you have any questions or comments, please contact service. Only short programmes are accepted for the fast-track programme, with a maximum total length of four hours. Proposals may concern short, self-contained projects, pilot projects, or observations needed to finish projects which were, e.g., cut short by bad weather or interrupted by ToO observations. Within the rules, the fast-track programme is open to anybody, and the evaluation criteria are only scientific quality and technical feasibility. A limited number of service nights will be reserved for the fast-track programme in each semester; thus, fast-track programmes do not have access to time already allocated to other programmes. The observing schedules list the assigned service nights in the current semester(s). To offer a broad set of observing capabilities, but at the same time provide a system that allows both flexibility and the highest possible rate of completion of the proposed observing programmes, the instruments that will be available are ALFOSC, NOTCam, FIES and StanCam. For NOTCam, FIES and StanCam the instrument set-ups are in principle fixed. For ALFOSC the instrument set-up can vary widely, but we cannot provide changes of the set-up during an observing night, and there is a limited amount of time for calibration observations. 
Therefore, we limit each proposal to a selection of 2 grisms and 2 slits, and 4 filters beyond the standard UBVRi set. The maximum total time of the proposed observations is 4 hours, which must include the time needed for target acquisition and any night-time calibration observations (e.g., standard star observations). Proposals that do not comply with these constraints will be rejected.

To submit a proposal, a web-based proposal form is provided. We have tried to keep the form as short and as simple as possible, so we have provided a limited number of boxes that should be filled out. For each item in the form, a short Instruction window is provided. In the resulting form each box has a maximum size, and applicants should check that no overflow occurs before submitting the form. The format is what-you-see-is-what-you-get. No HTML or LaTeX code will be rendered, neither in PDFs nor for on-line viewing. However, applicants can of course use these codes as long as their meaning is clear (e.g., $\alpha$ in LaTeX). Note that when you fill in the form you are working locally, and you can save the current contents of your proposal in a file on your own computer. Thus, you can work on the proposal whether or not you have a working network connection. Also, intermediate versions of a proposal (which in principle could be prepared by anyone else and sent to you) can be loaded into the form. With a working network connection, you can create a PDF file of the proposal for a final check, and submit the finished proposal.

The NOT Deputy Director is the contact person for all applications and provides evaluation of their technical feasibility. After submission and technical evaluation, the proposal is sent to the OPC Chair, who will arrange for its prompt evaluation by a member of the OPC. The resulting recommendation will assign a priority on a scale from 1 (highest) to 3 (lowest) and specify a maximum time to be allocated. No appeal of this decision will be possible. The goal of the procedure is to give an answer to the proposer within 2 weeks. Accepted proposals will remain in the queue for a maximum of one and a half years, after which a new proposal may be submitted.

Approved programmes will normally be executed in queue mode by the NOT staff on the pre-assigned service nights. The general aim of service observing in flexible queue mode is to use observing time in an efficient way. For this to be successful we need to be sure that the observations provide useful data. An important part of this is to provide a way (for the PI) to define the requirements of the observations, and (for us) to make sure that observations meet the goals of the proposed programmes. For this we make use of our Observing Blocks (OBs) system, which defines the requirements for a set of observations. The general philosophy is similar to that for OBs as defined by ESO. To prepare OBs, please use our Observing Block Generator. Please read and follow the instructions carefully. When observations have been executed, the PI will be notified automatically, and the data obtained will be made available by ftp. The same proprietary period applies for data obtained through fast-track programmes as for normal observing programmes.
CommonCrawl
In flow control, is the timer started when the transmission of the packet starts, or when the last bit of the packet is transmitted? Any questions in flow control related to the timer? The timeout timer is started just as transmission is done, i.e. just after the last bit of the data packet has been put on the transmission medium. Basically, what we mean by "last bit" is this: data is divided into packets and a packet consists of bits, so even though individual bits are what actually travel on the link, they are bundled into a packet. Although we speak of the "last bit" conceptually, in practice the timeout timer starts immediately once the whole packet has been transmitted.
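A minimal timeline sketch of this convention for stop-and-wait (my own illustration with assumed numbers, not from the thread): the retransmission timer starts when the last bit of the frame has been pushed onto the link, so the timeout must cover at least the ACK transmission time plus one round trip of propagation.

```python
# Stop-and-wait timing sketch: the retransmission timer starts when the
# last bit of the frame has been placed on the link (end of transmission).
frame_bits = 1000 * 8          # assumed frame size
ack_bits = 40 * 8              # assumed ACK size
bandwidth = 1e6                # assumed 1 Mbps link
prop_delay = 20e-3             # assumed one-way propagation delay in seconds

t_frame = frame_bits / bandwidth   # time to push the frame onto the link
t_ack = ack_bits / bandwidth       # time to push the ACK onto the link

timer_starts = t_frame                                  # last data bit leaves the sender
ack_arrives = t_frame + prop_delay + t_ack + prop_delay # ACK fully received back

# The timeout must be at least the time from timer start to ACK arrival.
min_timeout = ack_arrives - timer_starts                # = t_ack + 2 * prop_delay
print(f"timer starts at {timer_starts*1e3:.2f} ms, ACK back at {ack_arrives*1e3:.2f} ms")
print(f"so the timeout must exceed {min_timeout*1e3:.2f} ms")
```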
CommonCrawl
Abstract: We revisit the relations between open and closed string scattering amplitudes discovered by Kawai, Lewellen, and Tye (KLT). We show that they emerge from the underlying algebro-topological identities known as the twisted period relations. In order to do so, we formulate tree-level string theory amplitudes in the language of twisted de Rham theory. There, open string amplitudes are understood as pairings between twisted cycles and cocycles. Similarly, closed string amplitudes are given as a pairing between two twisted cocycles. Finally, objects relating the two types of string amplitudes are the $\alpha'$-corrected bi-adjoint scalar amplitudes recently defined by the author [arXiv:1610.04230]. We show that they naturally arise as intersection numbers of twisted cycles. In this work we focus on the combinatorial and topological description of twisted cycles relevant for string theory amplitudes. In this setting, each twisted cycle is a polytope, known in combinatorics as the associahedron, together with an additional structure encoding monodromy properties of string integrals. In fact, this additional structure is given by higher-dimensional generalizations of the Pochhammer contour. An open string amplitude is then computed as an integral of a logarithmic form over an associahedron. We show that the inverse of the KLT kernel can be calculated from the knowledge of how pairs of associahedra intersect one another in the moduli space. In the field theory limit, contributions from these intersections localize to vertices of the associahedra, giving rise to the bi-adjoint scalar partial amplitudes.
CommonCrawl
Question: Is the operation "$I$ is linked to $J$" an equivalence relation? In particular, I do not understand how it is reflexive. If $I$ is an ideal with $R/I$ CM, pick a regular sequence $x_1,\ldots, x_r\in I$, where $r$ is the height of $I$. If we call $K$ the ideal generated by the $x_i$'s, we can define $J=(K:I)$. One checks that $I=K:J$, and then we are done once we also check that $R/J$ is CM.
CommonCrawl
There is a list of $n$ numbers and two players who move alternately. On each move, a player removes either the first or last number from the list, and their score increases by that number. Both players try to maximize their scores. What is the maximum possible score for the first player, if both players play optimally? The first input line contains an integer $n$: the size of the list. The next line has $n$ integers $x_1,x_2,\ldots,x_n$: the contents of the list. Print the maximum possible score for the first player.
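The page above only states the task; as an illustration, here is one standard way to solve it (my own sketch, not part of the original page): an interval dynamic program over the remaining sublist, where diff[i][j] is the best score difference the player to move can force on x[i..j].

```python
import sys

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    x = list(map(int, data[1:1 + n]))

    # diff[i][j] = (mover's score) - (opponent's score) on the sublist x[i..j],
    # assuming both players play optimally from that point on.
    diff = [[0] * n for _ in range(n)]
    for i in range(n):
        diff[i][i] = x[i]
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            diff[i][j] = max(x[i] - diff[i + 1][j], x[j] - diff[i][j - 1])

    total = sum(x)
    # First player's score S1 satisfies S1 - S2 = diff[0][n-1] and S1 + S2 = total.
    print((total + diff[0][n - 1]) // 2)

main()
```

This runs in O(n^2) time and memory; for the largest inputs a compiled language may be needed, but the recurrence is the same.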
CommonCrawl
We review results about the influence tumor heterogeneity has on optimal chemotherapy protocols (relative to timing, dosing and sequencing of the agents) that can be inferred from mathematical models. If a tumor consists of a homogeneous population of chemotherapeutically sensitive cells, then optimal protocols consist of upfront dosing of cytotoxic agents at maximum tolerated doses (MTD) followed by rest periods. This structure agrees with the MTD paradigm in medical practice where drug holidays limit the overall toxicity. As tumor heterogeneity becomes prevalent and sub-populations with resistant traits emerge, this structure no longer needs to be optimal. Depending on conditions relating to the growth rates of the sub-populations and whether drug resistance is intrinsic or acquired, various mathematical models point to administrations at lower than maximum dose rates as being superior. Such results are mirrored in the medical literature in the emergence of adaptive chemotherapy strategies. If conditions are unfavorable, however, it becomes difficult, if not impossible, to limit a resistant population from eventually becoming dominant. On the other hand, increased heterogeneity of tumor cell populations increases a tumor's immunogenicity and immunotherapies may provide a viable and novel alternative for such cases. Keywords: Optimal control, drug resistance, cancer chemotherapy, adaptive therapy. Mathematics Subject Classification: Primary: 92C50, 49K15; Secondary: 93C15.
CommonCrawl
In a class of three students, the teacher says an astrochemechanicalculus exam will be given next week. He warns that after the tests are graded, he will swap the highest and lowest scores. For example, a set of 0, 50, & 100 will result in the perfect scorer receiving a zero while the bum gets a 100. Each student has a job that pays $1 per hour (they work at underground child labour sweatshops). Meanwhile, their parents will give them $1 for every point they score on the test. Every hour the student studies, they would earn 1 point on the test. If money is the goal, what is the optimum strategy for how long one student should study? +1 follower to the first best solution. The answer is to study zero hours and work all the hundred hours. Let the hours studied by student \( i \) be \( x_i \) and the hours spent earning money be \( y_i \). Then \( x_i + y_i = 100 \). Now, consider the person who has the highest marks. (If two people have the highest marks, choose the one whose marks get exchanged.) WLOG, let it be Student 1. Now, he is at an obvious disadvantage, since his marks get exchanged for lower ones (WLOG Student 2's). His earnings = \( y_1 + x_2 < 100 \). Whereas Student 2 is at an advantage, as his earnings = \( y_2 + x_1 > 100 \). Therefore, those students who study more are at a disadvantage and those who study less are at an advantage. Thus, the strategy of each student should be to study the least, i.e. 0 hours. I still think the Nash equilibrium is for all of them to study 100 hours since making $100 from doing so is very much easier than 100 hours in a sweatshop. @Venture Hi – That wouldn't be the Nash Equilibrium. In a Nash Equilibrium, no one can gain by changing their strategy. In your example, if a student changes his strategy from "studying 100 hours" to "studying 0 hours" he clearly gains. It is an equilibrium point but it is not a NASH equilibrium. I disagree with this outcome although it is a possible outcome if all of the students are irrational thinkers and only act selfishly for their own good. The best outcome is for all of them to study 100 hours and earn a score of 100 each. That way no one gets cheated out of their studying time. If money is the prime motivator, say making $100 either from working 100 hours or studying 100 hours (getting 100 points and getting a $100 reward for doing so), one is better off studying since that entails less sweat. Since all 3 students know that whoever studies the least will ultimately get the highest score, every one of them will study just as hard, to max out their score to 100. That way, the equilibrium point is that all 3 students will earn 100 marks and $100. For this to work, ALL of them have to study 100 hours, i.e. to earn $100 from their parents (assuming studying 100 hours earns you 100 marks). Any slight inclination of one of them (or two, or all 3) to NOT follow this plan will result in the action: ALL of them will prefer not to study, since the only person who chose to do so will only end up getting nothing. So it is a clear choice for all of them to stick to this plan of maximizing their study hours. It is quite clear that making $100 from studying is much easier than doing hard labor. +1 again for the following situations: What is the optimal solution if there are 6 students, and this time the parents will pay double (i.e. $200 for a perfect score)? I'm planning on turning it into a problem, but I need to make sure I've got it right first. Same strategy.
It doesn't change with the number of students or the money earned by studying. Formally, the only Nash equilibrium is when everyone doesn't study. In fact, the number of students and the factor of increase don't matter. Proof:- Suppose there does exist a Nash equilibrium where someone studies. Let the hours studied by Student \(i\) be \(x_i\) and WLOG, \(x_1 \geq x_2 \geq \ldots \geq x_n\) and \( x_1 > 0 \). Case 1 :- \( x_1 > x_n \). Student 1 has the top score, so it gets swapped with Student n's, and his original earnings = \( (100 - x_1) + kx_n \). Suppose he deviates and studies only \( x_2 \) hours, so that he is no longer the unique top scorer; his new earnings are then \( (100 - x_2) + kx_2 \). This is greater, as \( 100 - x_2 \geq 100 - x_1 \) and \( kx_2 \geq kx_n \). Equality occurs when \( x_1 = x_2 = x_n \), which is not possible since \( x_1 > x_n \). Therefore, his new earnings are strictly greater than his original earnings, which is not possible, since we assumed this is a Nash equilibrium. Case 2 :- \( x_1 = x_n \). Therefore \(x_1 = x_2 = \ldots = x_n > 0 \). Student 1's original earnings = \( (100 - x_1) + kx_n = 100 + (k-1)x_1 \). By studying 0 hours, his earnings = \( (100 - 0) + kx_2 = 100 + kx_1 \). Since \( x_1 > 0 \), we have \( 100 + (k-1)x_1 < 100 + kx_1 \). Therefore, his new earnings are strictly greater than his original earnings, which is not possible, since we assumed this is a Nash equilibrium. Thus there doesn't exist a Nash equilibrium with \( x_1 > 0 \). Note:- I proved this for "n" students and an increase factor of "k". I still think making $100 from studying is much easier than from hard labor. So, it is only rational for them to cooperate and study 100 hours so they ALL can earn a max of $100. Any slight inclination of not following this strategy (i.e. studying less than 100 hrs or not studying at all) means the other 2 students will follow suit as well, meaning all of them would have to earn their keep the hard way!! So the best strategy is to cooperate and max out their study hours. They know that they must get the lowest score to get the most money, so they'll just spend 100 hrs doing the job. This will be the best solution for individuals. But I didn't say that doing the job for 100 hrs is the only way. If they can get the same points, they also get the same amount of money. So they have to talk together about this at some time, I don't care when. Another weird way is to do whatever they want. The total money they get is always $300. Then they can share some money together at some time, I also don't care when.
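To complement the argument above, here is a small brute-force check (my own sketch, not from the discussion) that searches a coarse grid of strategies for Nash equilibria; the tie-breaking rule for who gets swapped when scores are equal is an assumption, since the puzzle does not specify one.

```python
from itertools import product

def payoffs(profile, k=1):
    """Each student scores 1 point per hour studied; the teacher swaps the
    highest and lowest scores; payoff = work dollars + k dollars per point.
    Ties are broken by taking the first maximum and the first minimum
    (an assumption: the puzzle statement does not fix a tie rule)."""
    scores = list(profile)
    hi = scores.index(max(scores))
    lo = scores.index(min(scores))
    scores[hi], scores[lo] = scores[lo], scores[hi]
    return [(100 - x) + k * s for x, s in zip(profile, scores)]

def is_nash(profile, choices, k=1):
    base = payoffs(profile, k)
    for i, _ in enumerate(profile):
        for alt in choices:
            dev = list(profile)
            dev[i] = alt
            if payoffs(dev, k)[i] > base[i]:
                return False
    return True

choices = range(0, 101, 25)          # coarse grid of study hours
equilibria = [p for p in product(choices, repeat=3) if is_nash(p, choices)]
print(equilibria)                    # only (0, 0, 0) shows up on this grid
```

With the default k=1, the only surviving profile on this grid is everyone studying 0 hours; changing k to 2 (the doubled reward) or the number of students gives the same picture, matching the proof sketched above.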
CommonCrawl
If all balls at $x$ and $y$ are isometric, is there an isometry sending $x$ to $y$? Let $(X,d)$ be a metric space and $x,y \in X$. Assume that for all $r > 0$ the balls $B_r(x)$ and $B_r(y)$ are isometric. Is it true that there exists an isometry of $X$ sending $x$ to $y$? No. Let $x$ and $y$ be connected by an edge and let's use the graph distance as our metric. At $x$, connect paths of length $n$ for each $n\in\mathbb N$. At $y$, do the same, but also connect an infinite path. Obviously, $B_r(x)$ and $B_r(y)$ are isometric for every $r>0$, while there is no isometry $f:X\rightarrow X$ for which $f(x)=y$. REMARK 1 This example is ironic because while the respective balls are isometric, the isometry of the balls doesn't respect the centers. In this sense every bounded (and especially finite) required example would be ironic. REMARK 2 Number $5$ is minimal.
CommonCrawl
Abstract: We present the analysis underpinning the measurement of cosmological parameters from 207 spectroscopically classified type Ia supernovae (SNe Ia) from the first three years of the Dark Energy Survey Supernova Program (DES-SN), spanning a redshift range of 0.017<$z$<0.849. We combine the DES-SN sample with an external sample of 122 low-redshift ($z$<0.1) SNe Ia, resulting in a "DES-SN3YR" sample of 329 SNe Ia. Our cosmological analyses are blinded: after combining our DES-SN3YR distances with constraints from the Cosmic Microwave Background (CMB; Planck Collaboration 2016), our uncertainties in the measurement of the dark energy equation-of-state parameter, $w$, are .042 (stat) and .059 (stat+syst) at 68% confidence. We provide a detailed systematic uncertainty budget, which has nearly equal contributions from photometric calibration, astrophysical bias corrections, and instrumental bias corrections. We also include several new sources of systematic uncertainty. While our sample is <1/3 the size of the Pantheon sample, our constraints on $w$ are only larger by 1.4$\times$, showing the impact of the DES SN Ia light curve quality. We find that the traditional stretch and color standardization parameters of the DES SNe Ia are in agreement with earlier SN Ia samples such as Pan-STARRS1 and the Supernova Legacy Survey. However, we find smaller intrinsic scatter about the Hubble diagram (0.077 mag). Interestingly, we find no evidence for a Hubble residual step ( 0.007 $\pm$ 0.018 mag) as a function of host galaxy mass for the DES subset, in 2.4$\sigma$ tension with previous measurements. We also present novel validation methods of our sample using simulated SNe Ia inserted in DECam images and using large catalog-level simulations to test for biases in our analysis pipelines.
CommonCrawl
In this lecture and a companion lecture, Classical Filtering with Linear Algebra, we study the classical theory of linear-quadratic (LQ) optimal control problems, using matrix decompositions applied to linear systems of first-order conditions for optimum problems. The system of equations (4) forms a $2 \times m$ order linear difference equation that must hold for the values of $t$ indicated. Notice how we have chosen to arrange the $y_t$'s in reverse time order. $U$ is an $(N+1) \times (N+1)$ upper triangular matrix.
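As a minimal illustration (my own, not from the lecture) of a matrix decomposition applied to a linear system of first-order conditions: for a quadratic objective $\frac{1}{2} y'Py - b'y$ with $P$ symmetric positive definite, the first-order condition $Py = b$ can be solved via a Cholesky factorization, whose transposed factor is an upper triangular matrix of the kind denoted $U$ above.

```python
import numpy as np

# A small symmetric positive definite P and a right-hand side b (made-up numbers).
P = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# First-order condition of  (1/2) y'Py - b'y  is  P y = b.
L = np.linalg.cholesky(P)       # P = L L', so U = L' is upper triangular
z = np.linalg.solve(L, b)       # forward substitution against L
y = np.linalg.solve(L.T, z)     # back substitution against the upper-triangular factor
print(y, np.allclose(P @ y, b))
```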
CommonCrawl
Jansen and Heß -- correcting an earlier paper of Douglas and Kroll -- have derived a (pseudo-)relativistic energy expression which is very successful in describing heavy atoms. It is an approximate no-pair Hamiltonian in the Furry picture. We show that their energy in the one-particle Coulomb case, and thus the resulting self-adjoint Hamiltonian and its spectrum, is bounded from below for $\alpha Z\leq 1.006$.
CommonCrawl
Gentz, B (1996). A central limit theorem for the overlap in the Hopfield model. The Annals of Probability, 24(4):1809-1841. We consider the Hopfield model with n neurons and an increasing number $p = p(n)$ of randomly chosen patterns. Under the condition $(p^3 \log p)/n \to 0$, we prove for every fixed choice of overlap parameters a central limit theorem as $n \to \infty$, which holds for almost all realizations of the random patterns. In the special case where the temperature is above the critical one and there is no external magnetic field, the condition $(p^2 \log p)/n \to 0$ suffices. As in the case of a finite number of patterns, the central limit theorem requires a centering which depends on the random patterns.
CommonCrawl
Google photos knows about the people in pictures, and can give me a list of pictures every person appears in. When a picture is chosen as the default profile picture for a person, it knows how to center it properly on the face. Is there a way when viewing a picture to know who is who on that picture (in a similar way to how Facebook highlights people)? $J$ directed means that for all pairs $a,b$ in $J$, $\exists c$ s.t. $a,b<c$, and $K$ is cofinal in $J$, so for each $j \in J$, $\exists d \in K$ such that $j<d$. I am struggling with how to prove this. I have tried by contradiction (there exists $x,y \in K$ such that there does not exist $z \in K$ such that $x,y<z$), but don't know where to go from here. Both views' goal is to show taxonomy terms, but the first one shows ONLY parent terms; then we click on parent Fruit, and we get the second view, which shows ONLY terms from Fruit, so Apple and Banana. If we clicked on the Meat term, we would get the terms Chicken and Pork. My question is how to do it? Find the Taylor series of this polynomial. How do I formally show the radius of convergence? Find the Taylor series for $f(x)$ centered at the given value of a. [Assume that $f$ has a power series expansion. Do not show that $R_n(x) \to 0$.] Also find the associated radius of convergence. I'm having trouble finding the general formula for each term. Without it, how am I supposed to find the radius of convergence? I can see why the radius of convergence is $\infty$; it's because $x$ can be anything and it'll converge. But how do I show this formally? Can I use the ratio test?
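The function itself is not reproduced above, so as a generic illustration (my own, assuming for concreteness the common textbook case $f(x)=e^x$ centered at $a$), the ratio test does establish the radius of convergence formally. Since $f^{(n)}(a)=e^a$, the Taylor series is $\sum_{n=0}^{\infty} \frac{e^a}{n!}(x-a)^n$, and for any fixed $x$,
$$\left|\frac{a_{n+1}}{a_n}\right| = \left|\frac{e^a (x-a)^{n+1}/(n+1)!}{e^a (x-a)^n/n!}\right| = \frac{|x-a|}{n+1} \longrightarrow 0 < 1,$$
so the series converges for every $x$ and the radius of convergence is $R=\infty$. The same ratio-test argument works whenever the coefficients carry a $1/n!$ factor (e.g. the series for $\sin$ and $\cos$).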
CommonCrawl
Continuum $=$ compact connected metric space. Let $X$ be a continuum. $X$ is indecomposable means that every proper subcontinuum of $X$ is nowhere dense in $X$. It is easy to see that if $X$ is indecomposable then every connected open subset of $X$ is dense in $X$. Question. Are these two conditions equivalent? Given the wealth of examples in continuum theory, the answer is likely no. So what is an example of a decomposable continuum all of whose connected open subsets are dense? EDIT: I have constructed an example; the two conditions are NOT equivalent. Before revealing it, I will leave the bounty open in hopes of attracting more examples. Edit 1.a: This construction tried to provide a counterexample, but is wrong. I give more details below. Glue two solenoids as follows. Let $K$ be a Cantor set (e.g. iterate $[0,1]\to [0,\frac 1 3] \cup [\frac 2 3,1]$). Let $D\subset K$ be a closed nowhere dense Cantor subset (e.g. iterate $[0,1]\to [0,\frac 1 9] \cup [\frac 8 9, 1]$). Let $S_1,S_2$ be two solenoids over circles $C_1,C_2$ respectively. Essentially, a solenoid is a space $S$ that fibres over a circle $S\to C$, with all the fibres homeomorphic to Cantor sets, and with a certain nontrivial twist. A solenoid is an indecomposable continuum, is homogeneous, and the only proper subcontinua are arcs. On each circle $C_i$ consider two points $P_i^0, P_i^\pi$, and let $F_i^\alpha$ be the corresponding fibers on $S_i$, for $i=1,2$ and $\alpha=0,\pi$. Let homeomorphisms $a: F_1^0\to D$, $b:K\to F_2^0$, $c: F_2^\pi\to D$, $d:K\to F_1^\pi$ be given. Glue $F_1^0\cup (d\circ c)(F_2^\pi)\subset S_1$ to $(b\circ a)(F_1^0)\cup F_2^\pi\subset S_2$ via the above identification, to obtain the required space $X$. $X$ is clearly a continuum, and is decomposable, because it is the union of two proper subcontinua (copies of $S_1,S_2$). Edit 1.b: The observation motivating the example was that $S_1$ and $S_2$ are separately indecomposable, and an open set $U$ meeting $S_1$ necessarily meets the fiber $F_1^0$, and since $F_1^0$ is nowhere dense in $F_2^0$ it follows that $U$ meets $F_2^\pi$ as well. This property is achieved in a less trivial way than just identifying (an open subset of) a fiber of $S_1$ with a similar subset of $S_2$. Indeed there is no small neighborhood of $F_2^0$ retracting on it. However this does not really help in solving the question.
CommonCrawl
"directory junction" vs "directory symbolic link"? What is the result of $\infty - \infty$? What happens when there's insufficient memory to throw an OutOfMemoryError? "git rm --cached x" vs "git reset head --​ x"? Why can't I email myself at: [email protected]? File.separator vs FileSystem.getSeparator() vs System.getProperty("file.separator")? Is it possible to set the equivalent of a src attribute of an img tag in CSS? Do Java arrays have a maximum size? How can I make XSLT work in chrome? What languages are Windows, Mac OS X and Linux written in? How does Java convert int into byte? What's the complete range for Chinese characters in Unicode? How to prevent a PDF file from being indexed by search engines? How can the font size be changed in Notepad++? How to exit MySQL command prompt? How to display text in the browser status bar? What is socket bind and how to bind an address? How to get first day of every corresponding month in mysql? How do I escape spaces in command line in Windows without using quotation marks? Incorrect usage of UNION and ORDER BY? Is there a way to get a <button> element to link to a location without wrapping it in an <a href … tag?
CommonCrawl
Combinatorics plays a prominent role in contemporary mathematics, due to the vibrant development it has experienced in the last two decades and its many interactions with other subjects. This book arises from the INdAM conference "CoMeTA 2013 - Combinatorial Methods in Topology and Algebra,'' which was held in Cortona in September 2013. The event brought together emerging and leading researchers at the crossroads of Combinatorics, Topology and Algebra, with a particular focus on new trends in subjects such as: hyperplane arrangements; discrete geometry and combinatorial topology; polytope theory and triangulations of manifolds; combinatorial algebraic geometry and commutative algebra; algebraic combinatorics; and combinatorial representation theory. The book is divided into two parts. The first expands on the topics discussed at the conference by providing additional background and explanations, while the second presents original contributions on new trends in the topics addressed by the conference. This collection marks the recent resurgence of interest in combinatorial methods, resulting from their deep and diverse applications both in topology and algebraic geometry. Nearly thirty mathematicians met at the University of Rochester in 1982 to survey several of the areas where combinatorial methods are proving especially fruitful: topology and combinatorial group theory, knot theory, 3-manifolds, homotopy theory and infinite dimensional topology, and four manifolds and algebraic surfaces. This material is accessible to advanced graduate students with a general course in algebraic topology along with some work in combinatorial group theory and geometric topology, as well as to established mathematicians with interests in these areas.For both student and professional mathematicians, the book provides practical suggestions for research directions still to be explored, as well as the aesthetic pleasures of seeing the interplay between algebra and topology which is characteristic of this field. In several areas the book contains the first general exposition published on the subject. In topology, for example, the editors have included M. Cohen, W. Metzler and K. Sauerman's article on 'Collapses of $K\times I$ and group presentations' and Metzler's 'On the Andrews-Curtis-Conjecture and related problems'. In addition, J. M. Montesino has provided summary articles on both 3 and 4-manifolds.
CommonCrawl
Let $m$ be an integer bigger than one, $A$ a ring of algebraic integers, $F$ its fraction field, and $K_m (A)$ the $m$-th Quillen $K$-group of $A$. We give a (huge) explicit bound for the order of the torsion subgroup of $K_m (A)$ (up to small primes), in terms of $m$, the degree of $F$ over $\mathbf Q$, and its absolute discriminant.
CommonCrawl
In the real world, math is about solving unsolved problems, analyzing unfamiliar situations, devising new mathematical systems and tools, exploring unknown landscapes and more — all the while reasoning with clear, rigorous logic. Students of math must ask their own questions and search for their own directions. These skills are essential outside math, but I believe that math research is a good way to train them. With proper guidance, students can potentially discover something truly original. This page proposes math projects of various difficulties to be investigated by high school students. I try to propose unsolved questions; investigators might as well put their efforts into something actually useful. Completely solving unsolved questions may be too much to expect, but I think partial progress (e.g. only works for special cases), even by intuitive means (i.e. not perfectly rigorous) or computer programs, would already be impressive. How can I generalize this? Each proposal ends with a few possible methods of attack or possible directions of inquiry, but I would prefer the investigators find their own way forward independently. Investigators who are lost or stuck can send a request for further suggestions or more advanced directions to [email protected]. Unfortunately I cannot guarantee that my suggestions will lead anywhere at all, but in any case investigators are recommended to think beyond them. Good luck, and I'll be glad to learn of any progress in these projects! Keep me posted, or give any feedback, through the contact form at the bottom. *Some diagrams of geometric dissections by Gavin Theobald; click to see many more. Do you like some dissections more than others? What is a "good" dissection? Does it depend on the number of pieces used? Symmetry? Flipping over pieces? Does it depend on the starting and ending shapes? What criteria, or combination of criteria, for "good'' dissections can be formalized and stated using a mathematical language? Using those criteria, write a mathematical procedure (possibly a computer program?) that can tell if dissections are "good" or not. Find large, general families of "good" dissections. Challenging: how do we tell if a dissection uses the fewest number of pieces? Note that the Wallace-Bolyai-Gerwien Theorem guarantees that any two polygons of equal area are equidecomposable, that is, one can be dissected and rearranged into the other. (Frederickson, 1997) and (Lindgren, 1964) survey several types of dissections and the techniques used to generate them, as well some of the history behind these puzzles. (Frederickson, 2002) does the same for special dissections whose pieces can be hinged together. There is probably more literature on this topic that can be found. (Frederickson, 1997) Greg N. Frederickson. Dissections: Plane and Fancy. Cambridge University Press (1997). (Frederickson, 2002) Greg N. Frederickson. Hinged Dissections: Swinging and Twisting. Cambridge University Press (2002). (Lindgren, 1964) Harry Lindgren. Geometric Dissections. Princeton, N.J.: D. Van Nostrand (1964). The Wallace-Bolyai-Gerwien Theorem guarantees that any two polygons of equal area are equidecomposable, that is, one can be dissected into a finite number of pieces and rearranged into the other. However, the 3D version of this doesn't work; for example, a cube is not equidecomposable with a tetrahedron of the same volume. 
But some dissections of polyhedra do work, such as the one illustrated below and many more described in (Frederickson, 1997) and (Frederickson, 2002). But can you find large, general families of polyhedron pairs that are equidecomposable? Dissecting a parallelepiped (with rectangular base) into a cuboid. Are right prisms with equal base area and height equidecomposable? Find large, general families of polyhedron pairs that are equidecomposable. Several techniques for creating dissections listed in (Frederickson, 1997), (Frederickson, 2002), (Lindgren, 1964) and further literature will be useful. Suppose a light ray travelling in the plane is trapped in a rectangular box whose walls are mirrors that the light bounces off. If its speed and initial position and direction are known, how do we predict where the light would have reached after a certain amount of time has passed? Working it out seems hard at first because each bounce must take into account the wall involved, which affects the direction of the deflected light ray, which affects the next wall-bounce, which affects the new direction... the potential scenarios to be accounted for seem to explode. A blue light ray travels within the yellow box, bouncing off the mirrored walls. It starts from the black square and ends at the black circle. Dashed black lines mark out reflected copies of the box, while the red dotted line marks a virtual light ray which goes from the black square to the white triangle. The red virtual light ray marks the path that the light ray would have taken if the box wasn't there. The real light ray and virtual light ray reach the single arrowhead at the same time, then the double arrowhead at the same time, and so on. But while it's hard to find the position of the blue arrowheads, the red arrowheads are rather straightforward to locate! Take the part of the path of the virtual light ray on the right of D and reflect it about D. Take the part of the new path above C and reflect it about C. Take the part of the new path to the right of B and reflect it about B. Take the part of the new path to the right of A and reflect it about A. How can this process be used to predict where the light ray has reached after a certain amount of time? In the first place, what variables (e.g. initial position and direction) need to be fixed and known before the prediction can happen? What if instead of a light ray, we had a billiard ball bouncing around inside a rectangular table? (Ignore friction and whatnot.) Can the same method be used? If yes, why? If no, how can it be adapted? What if the light ray was bouncing in 3D, inside a cuboid box with mirrored walls? Stretch your imagination and consider the strangest types of boxes in any dimension or shape, and try to adapt the reflection method to them. Note that I have seen the reflection trick inside some book before, so it is probably well-known — all the more important that investigators do a thorough literature review to see what has been tried. Chapter XI of (Berger, 2010) discusses other dimensions of this problem, but does not appear to use the trick. (Berger, 2010) Marcel Berger. Geometry Revealed: A Jacob's Ladder to Modern Higher Geometry. Springer-Verlag Berlin Heidelberg (2010). Formulate a general version of this kind of formula for a determinant of an $n \times n$ matrix. Laplace expansion would probably be useful. Prove the generalized formula. Mathematical induction would probably be useful.
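Returning to the light-ray proposal above, here is a small sketch (my own, not part of the page) of the reflection/unfolding trick for an axis-aligned rectangular box: follow the straight virtual ray, then "fold" its coordinates back into the box with a triangle-wave reduction, which is exactly what the repeated reflections accomplish.

```python
def fold(p, size):
    """Map an unconstrained coordinate p of the virtual ray back into [0, size],
    mimicking repeated reflections about the two walls of that axis."""
    p = p % (2 * size)
    return 2 * size - p if p > size else p

def position(x0, y0, vx, vy, t, width, height):
    """Position at time t of a point bouncing inside a width x height box."""
    return fold(x0 + vx * t, width), fold(y0 + vy * t, height)

# Example: a point starting at (1, 2) with velocity (3, 1) inside a 4 x 3 box.
for t in (0.0, 0.5, 1.0, 2.5, 10.0):
    print(t, position(1.0, 2.0, 3.0, 1.0, t, 4.0, 3.0))
```

The same folding applies coordinate-by-coordinate in 3D, which is one way to start on the cuboid variant of the question.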
*A single Gosper's Glider Gun creating "gliders" What other interesting patterns can you come up with? Can you change the rules or setup of the game, yet still define some new, interesting patterns? How about a higher-dimensional grid? How about a triangular or hexagonal grid? How about letting each cell have more than two states? This proposal is probably the most open-ended one, but this game is very famous so it's extremely important to find out what interesting structures have been defined by other researchers already. The content of Mathematics Research Proposals by Cheng Herng Yi, except the figures whose captions are prefixed by (*), is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
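For readers who want to experiment with the Game of Life proposal above, here is a minimal step function (my own sketch, not part of the page) acting on a set of live cells in an unbounded grid; the default B3/S23 rule can be swapped out to explore the rule variations suggested there (more states or other grids would need a different representation).

```python
from collections import Counter

def step(live, birth={3}, survive={2, 3}):
    """One generation of a Life-like cellular automaton.
    `live` is a set of (x, y) cells; birth/survive encode the rule (B3/S23 by default)."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if (n in birth and cell not in live) or (n in survive and cell in live)}

# A glider drifting across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```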
CommonCrawl
Roughly this can be understood as follows: since the perturbation is in the coupling constant about vanishing coupling, a non-zero radius of convergence would imply that the theory is finite also for negative coupling (where "things fly apart"), which will not happen in realistic theories. My question is: why does negative coupling, where "things fly apart", lead to an asymptotic series? Suppose we have an electron and a positron; if they fly apart, it corresponds to an electron and an electron, and vice versa. Maybe because the instanton part diverges for $g \to 0^-$, because of the exponential. An asymptotic series happens when you expand a function around a singular point (that's why it has zero radius of convergence). The point of zero coupling is usually a branch point, which is singular. The reason for the appearance of the branch point is that for negative coupling the vacuum is unstable because the vacuum will create pairs of particles and antiparticles and "they will fly apart" forever. This leads to an imaginary part (i.e. a branch cut) in correlation functions. That's why (in your language) "things flying apart will lead to asymptotic series." Suppose the series in $\alpha$ converges in some radius. Then it converges also for negative values of the coupling constant inside that radius. Consider now what kind of theory QED with a negative $\alpha$ is. In that theory like charges attract and opposite sign charges repel each other. Now take the vacuum of the non-interacting theory. This state is unstable against the formation of electron-positron pairs, because said pairs would repel indefinitely, leading to a lower energy state. You can make an even lower energy state by adding pairs that would separate into two clusters of electrons on one side and positrons on the other. Therefore this theory does not possess a ground state, since the spectrum is unbounded from below. Hence there is no consistent QED for negative coupling constant. And so the perturbative series cannot converge. As far as I know this argument is strictly heuristic, but shortly after it appeared (in the 1950s) Walter Thirring proved the divergence for a particular scattering process (I'm not in my office so I don't have the correct reference, but I'm positive the paper is in Thirring's selected works as well as explained in his autobiography). Note that this question of convergence was prominent in a period where people tried to define QFT in terms of the perturbative expansion. The advent of non-perturbative effects (instantons, confinement, pick your favorite...) coupled with the renormalization group showed that this was the wrong approach for QFT. But note also that the argument of vacuum instability depends on the interaction. It does not preclude the possibility of designing a QFT with a convergent perturbative expansion, it just shows that it is not to be expected in a general theory. If the absence of a stable ground state is a problem for the perturbative expansion, and the $\phi^3$ theory was extensively studied in Srednicki's book, e.g. sections 16-19, does it mean the perturbative computations on $\phi^3$ make no sense? The absence of a ground state is not a problem of perturbation theory in this case, but rather the converse: the independent argument for no ground state implies a problem in the perturbative expansion. Since we know that the theory does not exist we get that the series must not be convergent. The $\phi^3$ theory in Srednicki's book is different.
On page 71, section 9, he explicitly mentions that although there is no stable vacuum, this is invisible to perturbation theory. In any case he also says that he is only interested in giving an example, and is not concerned with the overall consistency of the theory. Excuse me, I still have a question. Does "formation of electron-positron pairs" mean virtual, off-shell particles? Since on-shell electrons have rest mass, the energy gained from the repulsion may not be larger than the rest mass.
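The divergence of such expansions can be seen very concretely in a zero-dimensional toy analogue (a standard textbook illustration, not the actual QED series): for $Z(g)=\int_{-\infty}^{\infty} e^{-x^2/2 - gx^4/4}\,dx$ the perturbative coefficients grow factorially, so for any fixed $g>0$ the partial sums first approach the exact value and then blow up. A minimal numerical sketch:

```python
import math
import numpy as np
from scipy.integrate import quad

g = 0.1
exact, _ = quad(lambda x: np.exp(-x**2 / 2 - g * x**4 / 4), -np.inf, np.inf)

def double_factorial(m):
    """(m)!! for odd m >= 1, with (-1)!! = 1 by convention."""
    return float(np.prod(np.arange(m, 0, -2))) if m > 0 else 1.0

# Expanding exp(-g x^4/4) term by term gives
#   Z(g) "=" sqrt(2*pi) * sum_n (-g/4)^n (4n-1)!! / n!
partial = 0.0
for n in range(12):
    term = math.sqrt(2 * math.pi) * (-g / 4) ** n * double_factorial(4 * n - 1) / math.factorial(n)
    partial += term
    print(f"n={n:2d}  partial sum = {partial: .6f}   exact = {exact:.6f}")
```

Flipping the sign of $g$ makes the integral itself divergent, which is the zero-dimensional shadow of the vacuum-instability argument given above.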
CommonCrawl
Abstract: Let $\Sigma$ be a codimension one submanifold of an $n$-dimensional Riemannian manifold $M$, $n\geqslant 2$. We give a necessary condition for an isometric immersion of $\Sigma$ into $\mathbb R^q$ equipped with the standard Euclidean metric, $q\geqslant n+1$, to be locally isometrically $C^1$-extendable to $M$. Even if this condition is not met, "one-sided" isometric $C^1$-extensions may exist and turn out to satisfy a $C^0$-dense parametric $h$-principle in the sense of Gromov.
CommonCrawl
$XZ=ZY$ is called the conjugacy equation. Given languages $X$ and $Y$ we are interested in knowing if there exists a non-empty language $Z$ which makes this equation true. This problem is undecidable in the general setting. Suppose we have a polynomial $P$ in variables $X_1, \ldots, X_n$ with coefficients from a field $F$, with total degree at most $d$. The polynomial $P$ is given in terms of some algebraic expression involving $X_1, \ldots, X_n$. Using Kolmogorov complexity, we will prove that there exists an oracle with respect to which $P \neq NP$. My talk focuses on two aspects of my thesis, in the theory of algorithmic randomness. Many stochastic systems are governed by events that, though they have a small probability of occurrence, are crucial to performance.
CommonCrawl
Microbial flavohemoglobins (flavoHbs) and hemoglobins (Hbs) show large •NO dioxygenation rate constants ranging from 745 to 2900 μM$^{-1}$ s$^{-1}$, suggesting a primal •NO dioxygenase (NOD) (EC 1.14.12.17) function for the ancient Hb superfamily. Indeed, modern O2-transporting and storing mammalian red blood cell Hb and related muscle myoglobin (Mb) show vestigial •NO dioxygenation activity with rate constants of 34–89 μM$^{-1}$ s$^{-1}$. In support of a NOD function, microbial flavoHbs and Hbs catalyze O2-dependent cellular •NO metabolism, protect cells from •NO poisoning, and are induced by •NO exposures. Red blood cell Hb, myocyte Mb, and flavoHb-like activities metabolize •NO in the vascular lumen, muscle, and other mammalian cells, respectively, decreasing •NO signalling and toxicity. HbFe(III)–OO•, HbFe(III)–OONO and protein-caged [HbFe(III)–O• •NO2] are proposed intermediates in a reaction mechanism that combines both O-atoms of O2 with •NO to form nitrate and HbFe(III). A conserved Hb heme pocket structure facilitates the dioxygenation reaction and efficient turnover is achieved through the univalent reduction of HbFe(III) by associated reductases. High affinity flavoHb and Hb heme ligands, and other inhibitors, may find application as antibiotics and antitumor agents that enhance the toxicity of immune cell-derived •NO or as vasorelaxants that increase •NO signaling. We compare the amino acid sequences of 70 globins, representing the following families: (a) α hemoglobin chains; (b) β hemoglobin chains; (c) myoglobins; (d) two lamprey, a mollusc, and two plant globins. The comparisons show a convergence of maximal and minimal estimates of genetic differences as calculated respectively by the stochastic and maximum parsimony procedures, thus demonstrating for the first time the logical consistency and complementarity of the two procedures. Evolutionary rates are non-constant, varying over a range of 1 to 75 nucleotide replacements per 100 codons per $10^8$ years. These rate differentials are resolved into two components: (a) due to change in the number of codon sites free to fix mutations during the period of divergence of the species involved; (b) due to change in fixation intensity at each site. These two components also show non-uniformity along different lineages. Positive Darwinian natural selection can bring about an increase in either component, and negative or stabilizing selection in protein evolution can lead to decreases. Accelerated rates of globin evolution were found in lineages of cold-blooded vertebrates, some marsupials, and early placental mammals, while slower rates were found in warm-blooded vertebrates, especially higher primates. One manifestation of negative selection in the globins is that minimal 3-base type amino acid replacements occur less frequently than would be expected if base replacements had occurred and were accepted at random. The selection against these replacements is not due to atypical behavior with respect to the change in electrical charge involved in the replacements. Interestingly, the globins from the lamprey, sea hare and the legumes are as distant from one another as are α-hemoglobin and β-hemoglobin from myoglobin. Orthologs are sequences of genes that evolved from a common ancestor and can be traced evolutionarily through different species.
By comparing the ortholog sequences of a specific gene between many species, the amino acid sequences which are conserved can be determined. These highly conserved sequences are important, because they provide information on which amino acids are essential to the protein structure and function. Hemoglobin is derived from the myoglobin protein, and ancestral species just had myoglobin for oxygen transport. 500 million years ago the myoglobin gene duplicated and part of the gene became hemoglobin. Lampreys are the most ancestral animal to have hemoglobin, and the ancestral version was composed of dimers instead of tetramers and was only weakly cooperative. 100 million years later, the hemoglobin gene duplicated again, forming alpha and beta subunits. This form of derived hemoglobin is found in bony fish, reptiles, and mammals, which all have both alpha and beta subunits to form a tetramer (Mathews et al., 2000). When the amino acid sequences of myoglobin, the hemoglobin alpha subunit, and the hemoglobin beta subunit are compared, there are several amino acids that remain conserved between all three globins (Mathews et al., 2000). These amino acid sequences are considered truly essential, because they have remained unchanged throughout evolution, and therefore are fundamental to the function of the protein. These essential amino acids can be seen in Figure 1, which compares myoglobin, and the alpha and beta subunits of hemoglobin. The histidines in helix F8 and helix E7 are highly conserved. These histidines are located proximally and distally to the heme molecule and keep the heme molecule in place within the hemoglobin protein as seen in Figure 2 (Mathews et al., 2000). This shows that the position of the heme molecule within the globin protein is essential to its function. Likewise, the amino acids in the FG region are also highly conserved. This region of the protein is essential to the conformational change between the T and R states (Mathews et al., 2000). Additionally, the amino acids at the alpha-beta subunit interfaces are highly conserved, because they also affect the conformational change between the subunits, which regulates oxygen affinity and cooperativity. In general, the most highly conserved sequences are located within the interior of the hemoglobin protein where the subunits contact each other (Gribaldo et al., 2003). Figure 2: A cartoon drawing of the structure of hemoglobin around the heme molecule. The histidines in helix F8 and E7 interact directly with the heme molecule. http://www.aw-bc.com/mathews/ch07/fi7p5.htm (permission pending). Figure 1: The amino acid sequences of myoglobin, the alpha subunit of hemoglobin, and the beta subunit of hemoglobin. The amino acid sequences highlighted in tan are conserved between all three globins and the amino acid sequences highlighted in gray are conserved between alpha and beta hemoglobin. http://www.aw-bc.com/mathews/ch07/fi7p11.htm (permission pending). The alpha subunit of hemoglobin has several amino acid sequences that are conserved across many species and are essential to its function. The alpha subunit of hemoglobin is encoded by the two genes HBA1 and HBA2, both located on chromosome 16 (GeneCard, 2005). Click here to see the gene card for HBA1.
To determine which amino acid sequences are conserved, I compared the orthologs of HBA1 in Homo sapiens (humans) to 5 additional species, including Xenopus tropicalis (African clawed frog), Danio rerio (Zebra fish), Gallus gallus (Red jungle fowl), Mus musculus (mouse), and Rattus norvegicus (rat), using the Ensembl program. Figure 3 shows the 6 orthologs aligned and the important conserved regions highlighted. The stars indicate amino acids that are conserved between all of the species. As a general observation, the mouse ortholog of HBA is the most similar to human HBA, because it is the most evolutionarily related. The amino acid sequences that are conserved in all globin proteins (highlighted in blue) can be seen in Figure 3. There are also several conserved amino acids that are specifically important to HBA structure (highlighted in red) including: the phenylalanine (F) at position 44, which is in direct contact with the heme group; tyrosine (Y) at position 142, which stabilizes the hemoglobin molecule by forming hydrogen bonds between two of the helices; and glycine (G) at position 26, which is small and therefore allows two of the helices to approach each other, which is important to the structure of hemoglobin (Natzke, 1998). Additionally, there are several amino acids in the alpha subunit that are involved in the movement of the alpha and beta subunits (also highlighted in red) including: the tyrosine (Y) at position 43, which interacts with the beta subunit during the R state, and the arginine (N) at position 143, which interacts with the beta subunit during the T state (Gribaldo et al., 2003). Looking at the effects of mutated portions of a gene is also a good way to determine the function of highly conserved sequences. In hemoglobin, deleterious mutations are most common in the heme pockets of the protein and in the alpha and beta subunit interfaces (Mathews et al., 2000). There are several key mutations in highly conserved portions of HBA (highlighted in yellow) including: the substitution of histidine (H) at position 88 to tyrosine (Y), which disrupts the heme molecule leading to decreased oxygen affinity; the substitution of arginine (N) at position 143 to histidine (H), which eliminates a bond in the T state and therefore favors the R state, resulting in increased oxygen affinity; the substitution of proline (P) at position 97 to arginine (N), which alters the alpha-beta contact region and results in the disassociation of the hemoglobin complex; and the substitution of leucine (L) at position 138 for proline (P), which interrupts the helix formation and also results in the disassociation of the hemoglobin complex (Mathews et al., 2000). As mentioned on the previous page, the bar-headed goose has hemoglobin that is specifically adapted to high altitudes. The bar-headed goose hemoglobin has an increased oxygen affinity which allows it to live in low oxygen pressure environments (Liang et al., 2001). This increased oxygen affinity is the result of a mutation at position 121 in the alpha subunit, which is highly conserved in other species, from proline to alanine, as seen in Figure 4 (Liang et al., 2001). This substitution leaves a two-carbon gap between the alpha-beta dimer, which relaxes the T structure and allows it to bind oxygen more readily under lower pressures (Jessen et al. 1991). Thus, comparing orthologs can also be used to explain differences in the oxygen binding capabilities of hemoglobin in different species. Ensembl. Ensembl Genome Browser.
http://www.ensembl.org/. Accessed March 2005. GeneCard. 2005. GeneCard for HBA1. http://genome-www.stanford.edu/cgi-bin/genecards/carddisp?HBA1&search=HBA&suff=txt. Accessed March 2005. Gribaldo, Simonetta, Didier Casane, Philippe Lopez and Herve Philippe. 2003. Functional Divergence Prediction from Evolutionary Analysis: A Case Study of Vertebrate Hemoglobin. Molecular Biology and Evolution 20 (11): 1754-1759. Jessen, Timm H et al. 1991. Adaptation of bird hemoglobins to high altitudes: Demonstration of molecular mechanism by protein engineering. Evolution 88: 6519-6522. Liang, Yuhe et al. 2001. The Crystal Structure of Bar-headed Goose Hemoglobin in Deoxy Form: The Alloseteric Mechanism of a Hemoglobin Species with High Oxygen Affinity. Journal of Molecular Biology 313: 123-137. Mathews, Christopher, Kensal Van Holde and Kevin Ahern. 2000. Biochemistry 3 rd edition. http://www.aw-bc.com/mathews/ch07/c07emhp.htm . Accessed March 2005. Natzke, Lisa. 1998. Hemoglobin. http://biology.kenyon.edu/BMB/Chime/Lisa/FRAMES/hemetext.htm. Accessed March 2005. Divergence pattern and selective mode in protein evolution: the example of vertebrate myoglobins and hemoglobin chains. Otsuka J1, Miyazaki K, Horimoto K. J Mol Evol. 1993 Feb; 36(2):153-81. The evolutionary relation of vertebrate myoglobin and the hemoglobin chains including the agnathan hemoglobin chain is investigated on the basis of a new view of amino acid changes that is developed by canonical discriminant analysis of amino acid residues at individual sites. In contrast to the clear discrimination of amino acid residues between myoglobin, hemoglobin alpha chain, and hemoglobin beta chain in warm-blood vertebrates, the three types of globins in the lower class of vertebrates show so much variation that they are not well discriminated. This is seen particularly at the sites that are ascertained in mammals to carry the amino acid residues participating in stabilizing the monomeric structure in myoglobin and the residues forming the subunit contacts in hemoglobin. At these sites, agnathan hemoglobin chains are evaluated to be intermediate between the myoglobin and hemoglobin chains of gnathostomes. The variation in the phylogenetically lower class of globins is also seen in the internal region; there the amino acid residues of myoglobin and hemoglobin chains in the phylogenetically higher class exhibit an example of parallel evolution at the molecular level. New quantities, the distance of sequence property between discriminated groups and the variation within each group, are derived from the values of discriminant functions along the peptide chain, and this set of quantities simply describes an overall feature of globins such that the distinction between the three types of globins has been clearer as the vertebrates have evolved to become jawed, landed, and warm-blooded. This result strongly suggests that the functional constraint on the amino acid sequence of a protein is changed by living conditions and that severe conditions constitute a driving force that creates a distinctive protein from a less-constrained protein. Agnathans (jawless vertebrates) occupy a key phylogenetic position for illuminating the evolution of vertebrate anatomy and physiology. Evaluation of the agnathan globin gene repertoire can thus aid efforts to reconstruct the origin and evolution of the globin genes of vertebrates, a superfamily that includes the well-known model proteins hemoglobin and myoglobin. 
Here we report a comprehensive analysis of the genome of the sea lamprey (Petromyzon marinus) which revealed 23 intact globin genes and two hemoglobin pseudogenes. Analyses of the genome of the Arctic lamprey (Lethenteron camtschaticum) identified 18 full length and five partial globin gene sequences. The majority of the globin genes in both lamprey species correspond to the known agnathan hemoglobins. Both genomes harbor two copies of globin X, an ancient globin gene that has a broad phylogenetic distribution in the animal kingdom. Surprisingly, we found no evidence for an ortholog of neuroglobin in the lamprey genomes. Expression and phylogenetic analyses identified an ortholog of cytoglobin in the lampreys; in fact, our results indicate that cytoglobin is the only orthologous vertebrate-specific globin that has been retained in both gnathostomes and agnathans. Notably, we also found two globins that are highly expressed in the heart of P. marinus, thus representing functional myoglobins. Both genes have orthologs in L. camtschaticum. Phylogenetic analyses indicate that these heart-expressed globins are not orthologous to the myoglobins of jawed vertebrates (Gnathostomata), but originated independently within the agnathans. The agnathan myoglobin and hemoglobin proteins form a monophyletic group to the exclusion of functionally analogous myoglobins and hemoglobins of gnathostomes, indicating that specialized respiratory proteins for O2 transport in the blood and O2 storage in the striated muscles evolved independently in both lineages. This dual convergence of O2-transport and O2-storage proteins in agnathans and gnathostomes involved the convergent co-option of different precursor proteins in the ancestral globin repertoire of vertebrates. I've just pointed out the distinction between myoglobin and hemoglobin. You may also remember that hemoglobin is a multimeric protein consisting of four subunits, 2 $\alpha$ subunits and 2 $\beta$ subunits. What you may not know is that in humans there are actually two types of $\alpha$ hemoglobin and four types of $\beta$ hemoglobin, each coded by a different genetic locus (see Table 1). The five $\alpha$-globin loci ($\alpha_1$, $\alpha_2$, $\zeta$, and two non-functional pseudogenes) are found in a cluster on chromosome 16. The six $\beta$-globin loci ($\epsilon$, $\gamma_G$, $\gamma_A$, $\delta$, $\beta$, and a pseudogene) are found in a cluster on chromosome 11. The myoglobin locus is on chromosome 22. Table 1: Human hemoglobins arranged in developmental sequence. Adult hemoglobins composed of 2 $\alpha$ and 2 $\delta$ subunits typically account for less than 3% of hemoglobins in adults (http://sickle.bwh.harvard.edu/hbsynthesis.html). Human and mouse neuroglobins (Ngb) are more closely related to one another than they are to other globins, even those from the same species. The same holds true for cytoglobins (Cyg) and myoglobins (Mb). Within the hemoglobins, only mouse $\beta$-globin (Mouse HbB) is misplaced. All other $\alpha$- and $\beta$-globins group with the corresponding mouse and human loci. This pattern is exactly what we expect as a result of duplication and divergence. Up to the time that a gene becomes duplicated, its evolutionary history matches the evolutionary history of the organisms containing it. Once there are duplicate copies, each follows an independent evolutionary history. Each traces the history of speciation and divergence.
And over long periods duplicate copies of the same gene share more recent common ancestry with copies of the same gene in a different species than they do with duplicate genes in the same genome. Figure 1: Evolution of globin genes (from ). A history of duplication and divergence in multigene families makes it important to distinguish between two classes of related loci: those that represent the same locus in different species and between which divergence is a result of species divergence are orthologs. Those that represent different loci and between which divergence occurred after duplication of an ancestral gene are paralogs. The $\beta$-globin loci of humans and chickens are orthologous. The $\alpha$- and $\beta$-globin loci of any pair of taxa are paralogous. As multigene families go, the globin family is relatively simple and easy to understand. There are only about a dozen loci involved, one isolated locus (myoglobin) and two clusters of loci ($\alpha$- and $\beta$-globins). You'll find a diagram of the $\beta$-globin cluster in Figure 2. As you can see, the $\beta$-globins are not only evolutionarily related to one another, they also occur relatively close to one another on chromosome 11 in humans. Figure 2: Structure of the human $\beta$-globin gene cluster. % identity refers to similarity to the mouse $\beta$-globin sequence. From http://globin.cse.psu.edu/html/pip/betaglobin/iplot.ps (retrieved 28 Nov 2006). Other families are far more complex. Class I and class II MHC loci, for example, are part of the same multigene family. Moreover, immunoglobulins, T-cell receptors, and MHC loci are part of a larger superfamily of genes, i.e., all are ultimately derived from a common ancestral gene by duplication and divergence. Table 2 lists a few examples of multigene families and superfamilies in the human genome and the number of proteins produced. Table 2: A few gene families from the human genome (adapted from [5,6]). Primitive Globin – Very primitive animals had only a myoglobin-like, single-chain ancestral globin for oxygen storage and were so small that they did not require a transport protein. Roughly 500 million years ago the ancestral myoglobin gene was duplicated. One copy became the ancestor of the myoglobin genes of all higher organisms. The other copy evolved into the gene for an oxygen transport protein and gave rise to the hemoglobins. Most Primitive Hemoglobin – The most primitive animals to possess hemoglobin are the lampreys. Lamprey hemoglobin can form dimers but not tetramers and is only weakly cooperative. It represents a first step toward allosteric binding. Subsequently a second gene duplication must have occurred, giving rise to the ancestors of the present-day $\alpha$ and $\beta$ hemoglobin chain families. This must have happened about 400 million years ago, at about the time of divergence of the sharks and bony fish. The evolutionary line of the bony fish led to the reptiles and eventually to the mammals, all carrying genes for both $\alpha$ and $\beta$ globins and capable of forming tetrameric $\alpha_2\beta_2$ hemoglobins. Further gene duplications have occurred in the hemoglobin line, leading to the embryonic forms, the fetal form, and the infant form (Figure 7.22).
Conserved Amino Acid Sequences – During the long evolution of the myoglobin/hemoglobin family of proteins, only a few amino acid residues have remained invariant (Figure 7.11). They include the histidines proximal and distal to the heme iron (F8 and E7 – see Figure 7.5b) and Val FG5, which has been implicated in the hemoglobin deoxy/oxy conformation change. These may mark the truly essential positions in the molecule. Other regions highly conserved in hemoglobins are those near the $\alpha_1$–$\beta_2$ and $\alpha_2$–$\beta_1$ contacts. These parts of the molecule are most directly involved in the allosteric conformational change.
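As a concrete illustration of the kind of ortholog comparison described above, the following short sketch flags fully conserved alignment columns — the positions an alignment viewer marks with '*'. The sequences shown are short hypothetical placeholders, not the actual HBA1 orthologs or the figures from Mathews et al.

# Flag fully conserved columns in a set of pre-aligned sequences, the
# way an alignment viewer marks them with '*'.  The sequences below are
# short placeholders, not the real HBA1 orthologs.
aligned = [
    "VLSPADKTNVKAAW",   # hypothetical human fragment
    "VLSGEDKSNIKAAW",   # hypothetical mouse fragment
    "VLSAADKTNVKGVF",   # hypothetical frog fragment
]

length = len(aligned[0])
assert all(len(seq) == length for seq in aligned), "sequences must be aligned"

conserved = [
    i for i in range(length)
    if len({seq[i] for seq in aligned}) == 1 and aligned[0][i] != "-"
]

marks = "".join("*" if i in conserved else " " for i in range(length))
for seq in aligned:
    print(seq)
print(marks)   # '*' under every column identical in all species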
CommonCrawl
The cohomology groups of the total space of the pencil. A description of the Gauss-Manin local system on $\mathbb P^1-\Delta$ (e.g. in terms of monodromy action of the generators for $\pi_1$). The Hodge numbers of a hypersurface $X\subset \mathbb P^n$ can be computed using Lefschetz hyperplane away from the middle row, and then Hirzebruch's generating function for the primitive middle Hodge numbers (see http://www.math.purdue.edu/~dvb/preprints/book-chap17.pdf for details). In your example, the Hodge diamond has middle row $(35,232,35)$. The total space of a Lefschetz pencil is the blow up of $X$ at the base locus of the pencil (8 points in your example), so it has $b_2 = 310$. The local system description is more difficult. Let $C_p$ be a smooth octic curve over a general point $p\in \mathbb P^1$. The local system has rank $h^1(C_p)=2g(C_p)= 42$. There are $392$ nodal curves in the pencil. Since $\pi_1$ of $\mathbb P^1$ minus those $392$ points is free on $391$ generators, you need to find the monodromy for each generator. By the Picard-Lefschetz formula, the monodromy around a given nodal fiber is given by $$\tau(x) = x- \langle x,e \rangle e$$ where $e\in H^1(C_p,\mathbb Z)$ is the vanishing cycle, represented by an embedded $S^1$ which gets contracted to the node.
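For readers who want to double-check the quoted numbers, the following sketch recomputes them for a smooth degree-8 surface in $\mathbb P^3$ from standard formulas (hypersurface Hodge numbers and an Euler-characteristic count of the nodal fibres). It is an independent sanity check, not part of the original answer.

# Sanity check of the numbers quoted above for a smooth octic surface
# X in P^3 and a Lefschetz pencil of hyperplane sections on it.
from math import comb

d = 8

h20 = comb(d - 1, 3)                    # h^{2,0}(X) for a degree-d surface in P^3
chi_top = d**3 - 4 * d**2 + 6 * d       # topological Euler characteristic of X
b2 = chi_top - 2                        # b_0 = b_4 = 1, b_1 = b_3 = 0
h11 = b2 - 2 * h20                      # middle row of the Hodge diamond
print(h20, h11, h20)                    # -> 35 232 35

g = (d - 1) * (d - 2) // 2              # genus of a smooth plane curve of degree d
print(2 * g)                            # rank of the local system -> 42

base_points = d                         # two hyperplane sections of X meet in d points
chi_blowup = chi_top + base_points      # each blow-up adds 1 to chi
print(chi_blowup - 2)                   # b_2 of the total space -> 310

# chi(total space) = chi(P^1) * chi(smooth fibre) + (number of nodal fibres),
# since each nodal fibre raises chi by 1 relative to a smooth one.
nodal_fibres = chi_blowup - 2 * (2 - 2 * g)
print(nodal_fibres)                     # -> 392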
CommonCrawl
Abstract: We study finite (physical) volume and scaling violation effects of the Landau gauge ghost propagator as well as of the running coupling $\alpha_s(p)$ in the $SU(2)$ lattice gauge theory. We consider lattices with physical linear sizes between $aL \simeq 3$ fm and $7$ fm and values of lattice spacing between $a=0.2$ fm and $0.07$ fm. To fix the gauge we apply an efficient gauge fixing method aimed at finding extrema as close as possible to the global maximum of the gauge functional. We find finite volume effects to be small for the lattice size $aL \simeq 3$ fm at momenta $|p|\, \ge \, 0.6$ GeV. For the same lattice size we study extrapolations to the continuum limit of the ghost dressing function as well as for the running coupling with momenta chosen between $|p| = 0.41$ GeV and $3.2$ GeV. We present fit formulae for the continuum limit of both observables in this momentum range. Our results testify in favor of the decoupling behavior in the infrared limit.
CommonCrawl
27 There exists a regular language A such that for all languages B, A ∩ B is regular. 11 Finding coefficient of polynomial? 11 Group of $r$ people at least three people have the same birthday? 10 Squaring both sides when units are different? 7 How do you compute eigenvalues/vectors of big $n\times n$ matrix?
CommonCrawl
A. Kapahtia, P. Chingangbam, and S. Appleby (2019). Morphology of 21cm brightness temperature during the Epoch of Reionization using the Contour Minkowski Tensor. arXiv:1904.06840. Comment: 31 pages, 15 figures. We use morphological descriptors, Betti numbers and the Contour Minkowski Tensor (CMT), on 21cm brightness temperature excursion sets to study the ionization and heating history of the intergalactic medium (IGM) during and before the Epoch of Reionization (EoR). The ratio of eigenvalues of the CMT, denoted by $\beta$, gives shape information while its trace gives the contour length of holes and connected regions. We simulate the matter density, neutral hydrogen fraction, spin temperature and brightness temperature field using the publicly available code 21cmFAST in a redshift range of $z=20.22$ to $z=6$. We study the redshift evolution of three quantities - the Betti number counts $N_{\mathrm{con,hole}}$, the characteristic size $r^{\mathrm{ch}}_{\mathrm{con,hole}}$ and shape anisotropy parameter $\beta^{\mathrm{ch}}_{\mathrm{con,hole}}$ of connected regions and holes for these fields and investigate the different physical origins of their evolution. We make a qualitative comparison of different models of heating and ionization during the EoR. We obtain different regimes of morphological evolution of brightness temperature, depending upon how the shapes and sizes of connected regions and holes change with redshift for different astrophysical settings affecting the ionization and heating history of the IGM during and before the EoR. We find that the morphology of the brightness temperature field traces the morphology of ionized regions below a certain redshift value depending upon the model, where $\Delta r^{\mathrm{ch}}_{\mathrm{hole}}<10\%$ and $\Delta \beta^{\mathrm{ch}}_{\mathrm{hole}}<1\%$ relative to the $x_{\mathrm{HI}}$ field. This difference decreases with redshift. Therefore, the ionization history of the IGM can be reconstructed using the morphological description of $T_b$ in real space.
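As a toy illustration of the descriptors used above (not the 21cmFAST pipeline), the following sketch counts the two Betti numbers of an excursion set of a smoothed 2D Gaussian random field: $\beta_0$ as the number of connected regions above the threshold and $\beta_1$ as the number of enclosed holes.

# Toy 2D illustration of the Betti numbers of an excursion set:
# beta_0 = connected regions above the threshold, beta_1 = holes
# (components of the complement that do not touch the boundary).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
field = ndimage.gaussian_filter(rng.standard_normal((128, 128)), sigma=4)

def betti_numbers(field, threshold):
    excursion = field > threshold
    _labels, beta0 = ndimage.label(excursion)           # connected regions
    complement, n_comp = ndimage.label(~excursion)
    # complement components touching the image boundary are not holes
    edge_labels = set(np.unique(np.concatenate([
        complement[0, :], complement[-1, :], complement[:, 0], complement[:, -1]])))
    beta1 = len(set(range(1, n_comp + 1)) - edge_labels)
    return beta0, beta1

for nu in (-1.0, 0.0, 1.0):
    print(nu, betti_numbers(field, nu))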
CommonCrawl
The following listing contains all publications of Dr. Pascal Lenzner. Further publications of the research group can be found on the current list of publications and the complete list of publications. Individual listings are available externally on DBLP and Google Scholar or locally as PDF. Selfish Network Creation focuses on modeling real world networks from a game-theoretic point of view. One of the classic models by Fabrikant et al. [PODC'03] is the network creation game, where agents correspond to nodes in a network which buy incident edges for the price of alpha per edge to minimize their total distance to all other nodes. The model is well-studied but still has intriguing open problems. The most famous conjectures state that the price of anarchy is constant for all \(\alpha\) and that for \(\alpha \geq n\) all equilibrium networks are trees. We introduce a novel technique for analyzing stable networks for high edge-price alpha and employ it to improve on the best known bounds for both conjectures. In particular we show that for \(\alpha > 4n-13\) all equilibrium networks must be trees, which implies a constant price of anarchy for this range of alpha. Moreover, we also improve the constant upper bound on the price of anarchy for equilibrium trees. We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilo et al., SPAA'14), where a worst case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the Price of Anarchy. We study a basic network design game where \(n\) self-interested agents, each having individual connectivity requirements, wish to build a network by purchasing links from a given set of edges. A fundamental cost sharing mechanism is Shapley cost sharing that splits the cost of an edge in a fair manner among the agents using the edge. In this paper we investigate if an optimal minimum-cost network represents an attractive, relatively stable state that agents might want to purchase. We resort to the concept of \(\alpha\)-approximate Nash equilibria. We prove that for single source games in undirected graphs, any optimal network represents an \(H(n)\)-approximate Nash equilibrium, where \(H(n)\) is the \(n\)-th Harmonic number. We show that this bound is tight. We extend the results to cooperative games, where agents may form coalitions, and to weighted games. In both cases we give tight or nearly tight lower and upper bounds on the stability of optimal solutions.
Finally we show that in general source-sink games and in directed graphs, minimum-cost networks do not represent good states. We consider the discrepancy problem of coloring n intervals with \(k\) colors such that at each point on the line, the maximal difference between the number of intervals of any two colors is minimal. Somewhat surprisingly, a coloring with maximal difference at most one always exists. Furthermore, we give an algorithm with running time \(O(n \log n + k n \log k)\) for its construction. This is in particular interesting because many known results for discrepancy problems are non-constructive. This problem naturally models a load balancing scenario, where \(n\) tasks with given start- and endtimes have to be distributed among \(k\) servers. Our results imply that this can be done ideally balanced. When generalizing to \(d\)-dimensional boxes (instead of intervals), a solution with difference at most one is not always possible. We show that for any \(d \ge 2\) and any \(k \ge 2\) it is NP-complete to decide if such a solution exists, which implies also NP-hardness of the respective minimization problem. In an online scenario, where intervals arrive over time and the color has to be decided upon arrival, the maximal difference in the size of color classes can become arbitrarily high for any online algorithm.
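A small sketch related to the balanced interval coloring result above: the following checker verifies that a proposed coloring is balanced, i.e. at every point the sizes of any two color classes differ by at most one. It is only a verifier, not the \(O(n \log n + k n \log k)\) construction from the paper.

# Checker for the balanced interval coloring problem: given half-open
# intervals [l_i, r_i) and a color c_i in {0, ..., k-1} for each,
# verify that at every point the counts of any two colors differ by
# at most one.  This only checks a coloring; it is not the algorithm.
def is_balanced(intervals, colors, k):
    events = []
    for (l, r), c in zip(intervals, colors):
        events.append((l, +1, c))
        events.append((r, -1, c))
    events.sort()                       # ends sort before starts at equal coordinates
    count = [0] * k
    prev_x = None
    for x, delta, c in events:
        if prev_x is not None and x > prev_x:
            if max(count) - min(count) > 1:      # check the stretch before x
                return False
        count[c] += delta
        prev_x = x
    return True

intervals = [(0, 4), (1, 5), (2, 6), (3, 7)]
print(is_balanced(intervals, [0, 1, 0, 1], k=2))   # True: colors alternate
print(is_balanced(intervals, [0, 0, 1, 1], k=2))   # False: two 0s alone on (1, 2)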
CommonCrawl
Relationships between muscle $\alpha$-tocopherol concentrations and metmyoglobin percentages during display of six muscles, m. serratus ventralis (SV), m. psoas major (PM), m. gluteus medius (GM), m. semimembranosus (SM), m. semitendinosus (ST) and m. longissimus lumborum (LL), of Japanese Black steers slaughtered at 28 months of age were studied. Steers were supplemented with 0, 2,000 and 4,000 mg $\alpha$-tocopheryl acetate/head/day for 28 days prior to slaughter in the VE 0, the VE 2,000 and the VE 4,000 groups, respectively. $\alpha$-Tocopherol concentrations in PM, GM, SM, ST and LL of the VE 2,000 and the VE 4,000 groups were significantly (p<0.05) higher than those of the VE 0 group. There were no significant (p>0.05) differences in $\alpha$-tocopherol concentrations in any muscle between the VE 2,000 group and the VE 4,000 group. The muscle $\alpha$-tocopherol concentrations ($\mu$g/g meat) which can retard metmyoglobin formation in muscles were estimated to be 5.3 for SV, 4.5 for PM, 4.2 for GM, 4.0 for SM, 3.6 for ST and 3.5 for LL. An equation to predict the color shelf-life of each muscle from the $\alpha$-tocopherol concentration in that muscle could be obtained.
CommonCrawl
This may well be something of a silly question, but if so, then all the more reason I get it straightened out. I have in the past been working with representations of both groups and algebras, and in the literature I've been using, the traces of the representation matrices of both have always been referred to as characters of the representation. Now, I'm looking properly into Lie algebras for the first time since far too long ago, and I come across a note that says that the weight of a representation of an algebra is the analogue of the character of a representation of a group. Are weights of representations of algebras and characters of representations of algebras then the very same thing, and is it just that people who work with associative algebras and people who work with Lie algebras use differing terminology, or is there something of much importance that I am missing here? I very much look forward to getting this sorted out! Caveat: not a representation theorist here, and would love to be corrected if this is wrong. It's a pretty weak (non-mathematical) analogy, and you should not think of them as being the same. What they have in common is that they are combinatorial objects which are relatively easy to compute with, and they tell you everything about the representation, so they are super-important. So they are analogous in the human sense of the word analogous. But as far as I know, there isn't a nice uber-theory that unifies group representations and Lie algebra representations such that characters and weights become one and the same.
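One concrete point of contact between the two notions, in the familiar case of $\mathfrak{sl}(2)$/SU(2) (an illustration, not a general unification): the character of a torus element $\exp(\mathrm{i}\theta H)$ equals the sum of exponentials of the weights of the corresponding algebra representation, and agrees with the closed Weyl character formula.

# For the (m+1)-dimensional irreducible representation of sl(2) (equivalently
# SU(2)), the weights are m, m-2, ..., -m.  The group character evaluated on a
# torus element exp(i*theta*H) is the sum of exponentials of the weights, and
# coincides with the Weyl character sin((m+1)*theta)/sin(theta).
import numpy as np

def character_from_weights(m, theta):
    weights = np.arange(m, -m - 1, -2)                       # m, m-2, ..., -m
    torus_element = np.diag(np.exp(1j * theta * weights))    # rep matrix in a weight basis
    return np.trace(torus_element).real

def weyl_character(m, theta):
    return np.sin((m + 1) * theta) / np.sin(theta)

theta = 0.37
print([np.isclose(character_from_weights(m, theta), weyl_character(m, theta))
       for m in range(0, 6)])    # all True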
CommonCrawl
Let $K^n$ denote the set of convex bodies (compact and convex sets) in $\mathbb R^n$. This set is endowed with two important structures. There is an algebraic structure, Minkowski addition +, and a topological one, the Hausdorff distance. The cone $(K^n , +)$ is complicated - most convex bodies are extremal elements. Investigating endomorphisms of this cone, with possibly additional properties, is one approach to gain a better understanding of its structure. In particular, such Minkowski endomorphisms are additive: $\Phi(K + L) =\Phi K + \Phi L$ for all $K, L \in K^n$. While Schneider was able to fully characterize Minkowski endomorphisms in the plane, in higher dimensions much less is known. In this talk, we will discuss a representation result for Minkowski endomorphisms that goes back to Kiderlen. Then we are going to establish (prove if time permits) that Minkowski endomorphisms are necessarily uniformly continuous in the Hausdorff metric. As a consequence we obtain that any Minkowski endomorphism can be described by a (signed) Borel measure on the sphere. Finally, we will talk about necessary and sufficient conditions for such a measure to generate a Minkowski endomorphism.
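A tiny numerical illustration of the Minkowski structure on $K^n$ referred to above (not of the endomorphism results themselves): support functions are additive under Minkowski addition, $h_{K+L} = h_K + h_L$, which the following sketch checks for two random planar polytopes.

# Support functions are additive under Minkowski addition:
# h_{K+L}(u) = h_K(u) + h_L(u).  Checked numerically for two polygons.
import numpy as np

rng = np.random.default_rng(1)

def support(vertices, directions):
    # h_K(u) = max_{v in K} <u, v>; for a polytope the max is over vertices
    return (directions @ vertices.T).max(axis=1)

def minkowski_sum_vertices(A, B):
    # all pairwise vertex sums; taking the max over them suffices for h_{A+B}
    return (A[:, None, :] + B[None, :, :]).reshape(-1, 2)

K = rng.standard_normal((5, 2))
L = rng.standard_normal((7, 2))
angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
U = np.stack([np.cos(angles), np.sin(angles)], axis=1)

lhs = support(minkowski_sum_vertices(K, L), U)
rhs = support(K, U) + support(L, U)
print(np.allclose(lhs, rhs))   # True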
CommonCrawl
Shimada, H. (2007). Holography at string field theory level: Conformal three point functions of BMN operators. Physics Letters B, 647(2-3), 211-218. Abstract: A general framework for applying the pp-wave approximation to holographic calculations in the AdS/CFT correspondence is proposed. By assuming the existence and some properties of string field theory (SFT) on $AdS_5 \times S^5$ background, we extend the holographic ansatz proposed by Gubser, Klebanov, Polyakov and Witten to SFT level. We extract relevant information of assumed SFT on $AdS_5 \times S^5$ from its approximation, pp-wave SFT. As an explicit example, we perform string theoretic calculations of the conformal three point functions of the BMN operators. The results agree with the previous calculations in gauge theory. We identify a broad class of field redefinitions, including known ambiguities of the interaction Hamiltonian, which does not affect the results.
CommonCrawl
Let $q$ be a scalar generalized Nevanlinna function, $q\in\mathcal N_\kappa$. Its generalized zeros and poles (including their orders) are defined in terms of the function's operator representation. In this paper analytic properties associated with the underlying root subspaces and their geometric structures are investigated in terms of the local behaviour of the function. The main results and various characterizations are expressed by means of (local) moments, asymptotic expansions, and via the basic factorization of $q$. Also an inverse problem for recovering the geometric structure of the root subspace from an appropriate asymptotic expansion is solved.
CommonCrawl
Quite a few years ago I seem to have read a paper or heard a good talk with the title "Some matrices I have known". Does anyone recall that, or can give a reference? Perhaps you're looking for Paul Halmos's talk Matrices I have Met?
CommonCrawl
You have to handle a very complex water distribution system. The system consists of $$$n$$$ junctions and $$$m$$$ pipes, $$$i$$$-th pipe connects junctions $$$x_i$$$ and $$$y_i$$$. The only thing you can do is adjusting the pipes. You have to choose $$$m$$$ integer numbers $$$f_1$$$, $$$f_2$$$, ..., $$$f_m$$$ and use them as pipe settings. $$$i$$$-th pipe will distribute $$$f_i$$$ units of water per second from junction $$$x_i$$$ to junction $$$y_i$$$ (if $$$f_i$$$ is negative, then the pipe will distribute $$$|f_i|$$$ units of water per second from junction $$$y_i$$$ to junction $$$x_i$$$). It is allowed to set $$$f_i$$$ to any integer from $$$-2 \cdot 10^9$$$ to $$$2 \cdot 10^9$$$. In order for the system to work properly, there are some constraints: for every $$$i \in [1, n]$$$, $$$i$$$-th junction has a number $$$s_i$$$ associated with it meaning that the difference between incoming and outcoming flow for $$$i$$$-th junction must be exactly $$$s_i$$$ (if $$$s_i$$$ is not negative, then $$$i$$$-th junction must receive $$$s_i$$$ units of water per second; if it is negative, then $$$i$$$-th junction must transfer $$$|s_i|$$$ units of water per second to other junctions). Can you choose the integers $$$f_1$$$, $$$f_2$$$, ..., $$$f_m$$$ in such a way that all requirements on incoming and outcoming flows are satisfied? The first line contains an integer $$$n$$$ ($$$1 \le n \le 2 \cdot 10^5$$$) — the number of junctions. The second line contains $$$n$$$ integers $$$s_1, s_2, \dots, s_n$$$ ($$$-10^4 \le s_i \le 10^4$$$) — constraints for the junctions. The third line contains an integer $$$m$$$ ($$$0 \le m \le 2 \cdot 10^5$$$) — the number of pipes. $$$i$$$-th of the next $$$m$$$ lines contains two integers $$$x_i$$$ and $$$y_i$$$ ($$$1 \le x_i, y_i \le n$$$, $$$x_i \ne y_i$$$) — the description of $$$i$$$-th pipe. It is guaranteed that each unordered pair $$$(x, y)$$$ will appear no more than once in the input (it means that there won't be any pairs $$$(x, y)$$$ or $$$(y, x)$$$ after the first occurrence of $$$(x, y)$$$). It is guaranteed that for each pair of junctions there exists a path along the pipes connecting them. If you can choose such integer numbers $$$f_1, f_2, \dots, f_m$$$ in such a way that all requirements on incoming and outcoming flows are satisfied, then output "Possible" in the first line. Then output $$$m$$$ lines, $$$i$$$-th line should contain $$$f_i$$$ — the chosen setting numbers for the pipes. Pipes are numbered in order they appear in the input. Otherwise output "Impossible" in the only line.
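A hedged sketch of one standard way to attack this kind of system (not an official editorial): the constraints are satisfiable iff the $$$s_i$$$ sum to zero; rooting any spanning tree, pushing each subtree's total demand through the edge to its parent, and setting every non-tree pipe to $$$0$$$ then satisfies every junction. Since the sum of $$$|s_i|$$$ is at most $$$2 \cdot 10^9$$$, the resulting settings stay within the allowed range.

# Sketch of one standard approach (not an official editorial).
# Solvable iff sum(s) == 0.  Root any spanning tree, push each subtree's
# total demand through the edge to its parent; all non-tree pipes get 0.
from collections import defaultdict

def solve(n, s, edges):
    # s is 1-indexed via s[v] (s[0] unused); edges is a list of (x, y) pairs
    if sum(s[1:]) != 0:
        return None                                  # "Impossible"
    adj = defaultdict(list)
    for idx, (x, y) in enumerate(edges):
        adj[x].append((y, idx, +1))                  # +1: stored orientation is u -> v
        adj[y].append((x, idx, -1))
    parent = {1: None}
    order, stack = [], [1]
    while stack:                                     # iterative DFS (n can be 2*10^5)
        u = stack.pop()
        order.append(u)
        for v, idx, sign in adj[u]:
            if v not in parent:
                parent[v] = (u, idx, sign)           # sign +1 means edge stored as (u, v)
                stack.append(v)
    f = [0] * len(edges)
    need = list(s)                                   # need[v]: demand of v's subtree so far
    for v in reversed(order):                        # children before parents
        if parent[v] is None:
            continue
        p, idx, sign = parent[v]
        f[idx] = sign * need[v]                      # push need[v] units from p into v
        need[p] += need[v]
    return f                                         # "Possible" + these settings

# Tiny example: junction 1 sends one unit, junction 2 receives it.
print(solve(2, [0, -1, 1], [(1, 2)]))                # -> [1]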
CommonCrawl
A while ago I was thinking about how to illustrate the input data in Shape from sensors — in 2D, the input is the unit tangent field parametrized by arc-length. I ended up with this cool-looking isometric and isotopic unwrapping. Move the slider to unwrap the curve. You can also play around in fullscreen. The effect is achieved by interpolating between the input curve and a straight line with the same length in the tangent space (using spherical linear interpolation). I did my original implementation in Matlab; for this post, I wanted to try out doing the same thing in-browser using geometry-processing-js (I actually only use the linear algebra package). The idea for computing the intermediate curve $\gamma$ is to interpolate between the source tangent field $\mathrm T_0$ (of the original curve) and the target tangent field $\mathrm T_1$ (of the straight line). The in-between tangent field $\mathrm T$ is then integrated by solving a Poisson problem. The nice thing about this approach is that the Laplacian matrix can be built and pre-factored once, prior to the choice of $t$ and the computation of the right-hand side $B$ of the system. Since the curve is open, we need boundary constraints: I fix the starting point to $[0,0]$ and the endpoint to $(1-t) P + t [L,0]$ with $P$ being the endpoint of the original curve. Details of the discretization are in "Robust Fairing via Conformal Curvature Flow" [Crane et al. 2013].
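Below is a small numpy re-implementation of the same idea (my reading of the post, not the actual geometry-processing-js code): blend the edge tangent angles toward zero, then recover positions by a linear least-squares "integration" with the two endpoint constraints. For brevity it uses a dense solve instead of a pre-factored Laplacian.

# Minimal numpy sketch of the unwrapping described above (a re-implementation,
# not the original code).  Edge lengths are preserved, so the deformation is
# isometric; t = 0 gives the original shape, t = 1 a straight segment.
import numpy as np

def unwrap_curve(P, t):
    # P: (N, 2) points of an open curve, t in [0, 1]
    d = np.diff(P, axis=0)                       # edge vectors
    h = np.linalg.norm(d, axis=1)                # edge lengths
    phi = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    phi_t = (1.0 - t) * phi                      # blend tangent angles toward 0
    T = np.stack([h * np.cos(phi_t), h * np.sin(phi_t)], axis=1)

    L = h.sum()
    Q0 = np.array([0.0, 0.0])
    Qend = (1.0 - t) * (P[-1] - P[0]) + t * np.array([L, 0.0])

    N = len(P)
    D = np.zeros((N - 1, N))                     # discrete difference operator
    D[np.arange(N - 1), np.arange(N - 1)] = -1.0
    D[np.arange(N - 1), np.arange(1, N)] = 1.0
    b = T - np.outer(D[:, 0], Q0) - np.outer(D[:, -1], Qend)   # move pinned endpoints to RHS
    Qin, *_ = np.linalg.lstsq(D[:, 1:-1], b, rcond=None)
    return np.vstack([Q0, Qin, Qend])

theta = np.linspace(0.0, 3.0 * np.pi, 200)
spiral = np.stack([theta * np.cos(theta), theta * np.sin(theta)], axis=1)
half_unrolled = unwrap_curve(spiral, 0.5)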
CommonCrawl
Each column, each row and each box (3$\times$3 subgrid) must have the numbers 1 to 9. The puzzle can be solved with the help of clue-numbers which are written after slash marks on the intersections of border lines. Each clue-number is the sum of two digits in the two squares that are diagonally adjacent to each other. The position of each pair of diagonally adjacent squares is indicated by either two forward slash marks // or two backward slash marks \\. For example, a clue of 12 can come from 3 and 9, 9 and 3; 4 and 8, 8 and 4; 5 and 7, or 7 and 5 respectively, and a clue of 6 from 1 and 5, 5 and 1; 2 and 4, or 4 and 2 respectively.
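A small helper for experimenting with the clues. The allow_repeat flag is my own addition: diagonally adjacent squares are never in the same row or column, so whether a repeated digit such as 6+6 is legal depends only on whether the two squares happen to share a box.

# Enumerate the ordered digit pairs that can occupy two diagonally
# adjacent squares whose clue (sum) is given.
def clue_pairs(clue, allow_repeat=False):
    return [(a, b) for a in range(1, 10) for b in range(1, 10)
            if a + b == clue and (allow_repeat or a != b)]

print(clue_pairs(12))   # [(3, 9), (4, 8), (5, 7), (7, 5), (8, 4), (9, 3)]
print(clue_pairs(6))    # [(1, 5), (2, 4), (4, 2), (5, 1)]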
CommonCrawl
In 1901, Werner Boy (under direction from David Hilbert) discovered a non-orientable surface which is a self-intersecting immersion of the real projective plane in 3-dimensional space. The surface is obtained by sewing a Möbius strip to the edge of a disk, given by the parametric equations below. "Draws the shape at the given state of the world (the x,y,z rotation angles)" "Increment/decrement the rotation angles around the x,y and z axes" When $\alpha = 1$, this describes the Boy surface; when $\alpha = 0$, this describes the Roman surface. Built using the wireframes library.
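The parametric equations referred to above did not survive in the text. The sketch below uses the standard Morin–Apéry one-parameter family, which matches the stated behaviour ($\alpha = 1$ gives Boy's surface, $\alpha = 0$ the Roman surface); it is my best guess at the intended formulas and may differ from the ones used with the wireframes library.

# Morin-Apery family interpolating between the Roman surface (alpha = 0)
# and Boy's surface (alpha = 1).  Parameter ranges are chosen to cover
# the surface, possibly more than once.
import numpy as np

def boy_roman(u, v, alpha=1.0):
    denom = 2.0 - alpha * np.sqrt(2.0) * np.sin(3.0 * u) * np.sin(2.0 * v)
    x = (np.sqrt(2.0) * np.cos(2.0 * u) * np.cos(v) ** 2 + np.cos(u) * np.sin(2.0 * v)) / denom
    y = (np.sqrt(2.0) * np.sin(2.0 * u) * np.cos(v) ** 2 - np.sin(u) * np.sin(2.0 * v)) / denom
    z = 3.0 * np.cos(v) ** 2 / denom
    return x, y, z

U, V = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, 120),
                   np.linspace(0.0, np.pi, 120))
X, Y, Z = boy_roman(U, V, alpha=1.0)   # alpha = 1: Boy surface; alpha = 0: Roman surface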
CommonCrawl
In a previous paper we introduced and developed a recursive construction of joint eigenfunctions $J_N(a_+,a_-,b;x,y)$ for the Hamiltonians of the hyperbolic relativistic Calogero-Moser system with arbitrary particle number $N$. In this paper we focus on the cases $N=2$ and $N=3$, and establish a number of conjectured features of the corresponding joint eigenfunctions. More specifically, choosing $a_+,a_-$ positive, we prove that $J_2(b;x,y)$ and $J_3(b;x,y)$ extend to globally meromorphic functions that satisfy various invariance properties as well as a duality relation. We also obtain detailed information on the asymptotic behavior of similarity transformed functions $\mathrm{E}_2(b;x,y)$ and $\mathrm{E}_3(b;x,y)$. In particular, we determine the dominant asymptotics for $y_1-y_2\to\infty$ and $y_1-y_2,y_2-y_3\to\infty$, resp., from which the conjectured factorized scattering can be read off.
CommonCrawl
I need to solve a trigonometric function similar to the following one for $\alpha$. $$ x_1 \sin(2\alpha)+x_2 \cos(2\alpha) - x_3 \sin(\alpha) - x_4 \cos(\alpha) = 0 $$ $\sin(2\alpha) = -24/25$. Because $180^\circ < 2\alpha < 360^\circ$ and $\sin(2\alpha) = -24/25$, this means we're still in quadrants 3 and 4 (where sine is negative). Unfortunately we don't have enough info to determine which quadrant $2\alpha$ is in. Sin, Cos and Tan of Sum and Difference of Two Angles, by M. Bourne. The sine of the sum and difference of two angles is as follows: $\sin(\alpha + \beta) = \sin\alpha \cos\beta + \cos\alpha \sin\beta$ and $\sin(\alpha - \beta) = \sin\alpha \cos\beta - \cos\alpha \sin\beta$. $1 + \cot^2\theta = \csc^2\theta$, but I state this using argument theta because the alpha symbol in this font does not look good to me.
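If a closed form is not required, a numerical fallback works for any coefficients (a sketch with arbitrary example values, not taken from the question): sample the left-hand side over $[0, 2\pi)$, bracket sign changes, and polish each root with Brent's method. Alternatively, the substitution $t = \tan(\alpha/2)$ turns the equation into a polynomial of degree at most four in $t$.

# Numerical roots of  x1*sin(2a) + x2*cos(2a) - x3*sin(a) - x4*cos(a) = 0
# on [0, 2*pi): grid-sample, bracket sign changes, refine with Brent's method.
import numpy as np
from scipy.optimize import brentq

def roots(x1, x2, x3, x4, samples=2000):
    f = lambda a: x1 * np.sin(2 * a) + x2 * np.cos(2 * a) - x3 * np.sin(a) - x4 * np.cos(a)
    grid = np.linspace(0.0, 2.0 * np.pi, samples)
    vals = f(grid)
    found = []
    for lo, hi, flo, fhi in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if flo == 0.0:
            found.append(lo)
        elif flo * fhi < 0.0:
            found.append(brentq(f, lo, hi))
    return found

print(roots(1.0, 0.5, -0.3, 2.0))   # example coefficients only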
CommonCrawl
For skilled programmers, it is very easy to implement a sorting function. Moreover, they often avoid full sorting to reduce computation time if it is not necessary. Here, we consider "rough sorting" which sorts an array except for some pairs of elements. More formally, we define an array as "$K$-roughly sorted" if the array is sorted except that at most $K$ pairs are in reversed order. For example, '1 3 2 4' is 1-roughly sorted because (3, 2) is the only reversed pair. In the same way, '1 4 2 3' is 2-roughly sorted because (4, 2) and (4, 3) are reversed. Considering rough sorting by exchanging adjacent elements repeatedly, you need fewer swaps than for full sorting. For example, '4 1 2 3' needs three exchanges for full sorting, but you only need to exchange once for 2-rough sorting. Given an array and an integer $K$, your task is to find the result of the $K$-rough sorting with a minimum number of exchanges. If there are several possible results, you should output the lexicographically minimum result. Here, the lexicographical order is defined by the order of the first different elements. The first line contains two integers $N$ and $K$. The integer $N$ is the number of the elements of the array ($1 \leq N \leq 10^5$). The integer $K$ gives how many reversed pairs are allowed ($1 \leq K \leq 10^9$). Each of the following $N$ lines gives the element of the array. The array consists of the permutation of $1$ to $N$, therefore $1 \leq x_i \leq N$ and $x_i \ne x_j$ ($i \ne j$) are satisfied. The output should contain $N$ lines. The $i$-th line should be the $i$-th element of the result of the $K$-rough sorting. If there are several possible results, you should output the lexicographically minimum one. In the last example, the input array is already sorted, which means the input is already a 3-roughly sorted array and no swapping is needed.
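A small helper related to the definition above (a checker only, not the algorithm that produces the lexicographically smallest answer): counting inversions with a merge sort decides whether an array is already $K$-roughly sorted, and the minimum number of adjacent swaps needed is $\max(0, \text{inversions} - K)$, since each adjacent swap changes the inversion count by exactly one.

# Count inversions (reversed pairs) in O(N log N) with a merge sort,
# then test K-rough-sortedness.  Checker only, not a full solution.
def count_inversions(a):
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, inv = [], inv_l + inv_r
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i          # all remaining left elements exceed right[j]
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, inv

def is_k_roughly_sorted(a, k):
    return count_inversions(list(a))[1] <= k

print(is_k_roughly_sorted([1, 3, 2, 4], 1))   # True: only (3, 2) is reversed
print(is_k_roughly_sorted([4, 1, 2, 3], 1))   # False: three reversed pairs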
CommonCrawl
1) Begin by stating that a Taylor Series is an expansion of a function into an infinite sum of terms. My only concern is: Is this too much for a 12-15 minute demo? Caveat: I'm just a random Internet poster, not an instructor, so take this with a grain of salt. And get some advice from blooded veterans. A. It's too much. 1 and 2 are plenty for 12 minutes. B. Write out the entire 50 minute lecture, practice it at once, and write up a synthesis of the last 35 minutes. I suggest something like topic, time, key points, perhaps purpose (you can draw a 4 column table and fill it out in the 3 minute). Make it simple enough so that you can actually explain it in 2 minutes. Test yourself to see that you can (it is simple enough summary). D. Remember that your target audience at a juco is not math superstars. They are a lower skill set than you or even than an AP calculus class. Good people who want to get through this to support their chem/physics class or nursing degree or what have you. They are looking to progress and get jobs or to transfer to lower level state schools (and then get jobs). Have some sympathy for this and for them (without being obvious or patronizing about it either). F. Do a little reconnaissance and figure out what text they use. See how it addresses this topic and work the homework problems in that text. G. Really this whole topic is a little bit of a pain for the student and not the most important material. Maybe even why they picked it. Show you can get the kids through the damned thing. Your intuition is correct that this is far too much for a 12-15 minute lesson. The committee knows that there is no way you could deliver a whole lesson, even an introductory one, on this topic. Since you've only got 15 minutes, plan to use it to spark interest in the topic, knowing that you're only showing the committee the very beginning. For example, why not show students the cosine function on a small interval around $x=0$, and ask them what polynomial it reminds them of? "Does it look like some kind of $x^2$ or $x^3$?" Let them decide which one, and then graph their guess on top of $y=cos(x)$. You will have the opportunity to discuss the vertical intercept and the width of the graph -- that it can be adjusted (by multiplying a term by a constant). In this way, your lesson could start as a game about building the cosine function from simpler functions, and students will be involved in the guessing process. I see this occupying up to 5 minutes. Then do your steps 1 and 2, and you're probably out of time. In my opinion, using your lesson time to draw on your students' previous experience with trig and power functions helps them see that they are already capable of doing much of the intuitive work leading to Taylor polynomials, and it validates that this new thing is something that doesn't have to seem like just applying another magic math formula. You could considerably simplify this assessment (for an introduction) by asking them what just the first term of the series expansion should be. If you took the time to remind them of the "long-run behavior" of a polynomial when you discussed the cosine function, you could let them use their calculators to determine what the 2nd term of the series expansion of $sin(x)$ is. I'd take an entirely different approach, though the faculty that is assessing you may not view it favorably. Mathematicians tend to view the logic of math as intrinsically beautiful. The vast majority of people do not. 
The vast majority considers math to be hoop jumping and weirdly arbitrary rules that must be followed to get check marks. They have no idea why anyone would care about formulae. Simply telling students "Taylor Series are important!" does not motivate them. Glossing over it or lecturing them for a few minutes on it will also fail to motivate them. To begin addressing this, I would use intellectual need to motivate the students. For example, in lower levels of math, one might teach fractions as part of a whole or something like that, or sections of circles, or naming amounts of food. Or, you could make them feel intellectual need by having them work out how they'd share 5 pieces of licorice among 3 kids and name the amount each kid got. One could teach exponents as repeated multiplication then do a bunch of notational drills. Or, you could make students feel the need for exponents by teaching them about the spread of infectious diseases. "If 3 people are infected initially and each sick person infects one more person each week, what will happen in 6 months?" Then, you make a table, and let them struggle for a while to figure out that in week, say, 10, they calculate the number of infections by going $3\times2\times2\times2\times2\times2\times2\times2\times2\times2\times2$, which is just a pain to read and write. In the eyes of most mathematicians, the average person, including college students, has shockingly little interest in the logic of math because they just want to get answers and check marks. So, what will you do to make them feel the need for a Taylor Series? Check out the work on "Intellectual Need" in math education. The risk: The "Curse of Knowledge," unfortunately, implies a very high probability that the people assessing you assume that students will intuitively understand that Taylor Series are interesting and important, and thus just need to be told so. Experts, very frequently, are shocked to find out that basic concepts, such as why anyone cares about Taylor Series, completely fly over the head of students who ace quizzes and tests on Taylor Series.
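For what it's worth, the "guess the polynomial" demo described in the earlier answer is only a few lines of code (a sketch, assuming a Python/matplotlib setup is acceptable in the room):

# Plot cos(x) next to the students' polynomial guesses and let them see
# where each guess breaks down.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 400)
plt.plot(x, np.cos(x), label="cos(x)")
plt.plot(x, 1 - x**2 / 2, "--", label="1 - x^2/2")
plt.plot(x, 1 - x**2 / 2 + x**4 / 24, ":", label="1 - x^2/2 + x^4/24")
plt.ylim(-1.5, 1.5)
plt.legend()
plt.show()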
CommonCrawl
Your company is moving to a new, larger office building. The new office is a rectangular space that will eventually be populated with cubicles. Your employees want to request particular positions for their cubicles, so you are setting up a system that lets them make these requests. To help automate this process, you have expressed the new office building using a coordinate system where each unit is one foot. The south-west corner of the new building's floor space is assigned coordinate $(0, 0)$, the positive $X$ axis is aligned with the inner edge of the building's south wall and the positive $Y$ axis is aligned with the west wall. Employees request a position for their cubicle by giving the coordinates of the cubicle's south-west corner and its north-east corner. You don't expect this technique for dividing up the space to work very well the first time around. Some space may not get allocated and space may be requested by more than one employee. Your job is to compile a report of how well these requests would allocate the office space and how much contention there is between requests. Input consists of up to $10$ test cases. Each case starts with a line containing a pair of integers $w$ and $h$ giving the size of the new office space ($w$ is the number of feet west-to-east; $h$ is the number of feet south-to-north). Both of these numbers are in the range $1 \ldots 100$. After this is a line containing an integer $0 \le n \le 20$ giving the number of employees you have. Following this are $n$ cubicle placement requests, one per line. Each request starts with the name of the employee. The name is a string of $1$ to $20$ lower- and/or upper-case letters (a–z). The name is followed by four integers $x_1, y_1, x_2, y_2$ where $(x_1, y_1)$ indicate the coordinates of the south-west corner of their desired cubicle placement and $(x_2, y_2)$ indicate the coordinates for the north-east corner. Each set of request coordinates satisfies $0 \le x_1 \le x_2 \le w$ and $0 \le y_1 \le y_2 \le h$. The sequence of test cases ends at the end of file. For each test case, print out a report that starts with the total number of square feet in the building and the number of square feet that no employee has requested (the unallocated space). Next, give the total number of square feet that are contested because more than one employee has requested the same region of the floor. Finally, for each employee give the number square feet that that employee can be guaranteed to have. This is the total area that they requested minus any regions that were also requested by another employee. List the employees in the same order they were given in the input. Leave a blank line after the output for each test case.
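Since the floor is at most $100 \times 100$ feet and there are at most 20 requests per case, one workable approach is simply to count, for every $1 \times 1$ cell, how many requests cover it. The sketch below is in Python; the report labels ("Total", "Unallocated", "Contested") are placeholders of my own, since the exact output wording is not reproduced here.

import sys

def solve(tokens):
    it = iter(tokens)
    out = []
    for w in it:                      # read test cases until end of file
        w = int(w); h = int(next(it))
        n = int(next(it))
        reqs = []
        for _ in range(n):
            name = next(it)
            x1, y1, x2, y2 = (int(next(it)) for _ in range(4))
            reqs.append((name, x1, y1, x2, y2))
        # cover[x][y] = number of requests covering the 1x1 cell with SW corner (x, y)
        cover = [[0] * h for _ in range(w)]
        for _, x1, y1, x2, y2 in reqs:
            for x in range(x1, x2):
                for y in range(y1, y2):
                    cover[x][y] += 1
        total = w * h
        unallocated = sum(row.count(0) for row in cover)
        contested = sum(1 for row in cover for c in row if c > 1)
        out.append(f"Total {total}")
        out.append(f"Unallocated {unallocated}")
        out.append(f"Contested {contested}")
        for name, x1, y1, x2, y2 in reqs:
            # guaranteed = requested cells covered by this request only
            guaranteed = sum(1 for x in range(x1, x2) for y in range(y1, y2) if cover[x][y] == 1)
            out.append(f"{name} {guaranteed}")
        out.append("")                # blank line after each test case
    return "\n".join(out)

if __name__ == "__main__":
    print(solve(sys.stdin.read().split()))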
CommonCrawl
Abstract: A quantum critical (QC) fluid exhibits universal subleading corrections to the area law of its entanglement entropies. In two dimensions when the partition involves a corner of angle $\theta$, the subleading term is logarithmic with coefficient $a_\alpha(\theta)$ for the $\alpha$-Rényi entropy. In the smooth limit $\theta\!\to\!\pi$, $a_1(\theta)$ yields the central charge of the stress tensor when the QC point is described by a conformal field theory (CFT). For general Rényi indices and angles, $a_\alpha(\theta)$ is richer and few general results exist. We study $a_\alpha(\theta)$ focusing on two benchmark CFTs, the free Dirac fermion and boson. We perform numerical lattice calculations to obtain high precision results in $\theta,\alpha$ regimes hitherto unexplored. We derive field theory estimates for $a_\alpha(\theta)$, including new exact results, and demonstrate an excellent quantitative match with our numerical calculations. We also develop and test strong lower bounds, which apply to both free and interacting QC systems. Finally, we comment on the near collapse of $a_\alpha(\theta)$ for various theories, including interacting $O(N)$ models.
CommonCrawl
The core of InfoScout's business intelligence platform is built on top of receipts. Hundreds of thousands of receipt images are uploaded by shoppers from all over the country, every single day. The quality and resolution of these images vary wildly: they could be rotated, shot in low light conditions, or taken by a poor camera. Consequently, optical character recognition (OCR) systems have a tough time extracting line items from these receipts. Accuracy here is paramount to our business, since the number of purchases is one of the most important metrics that our clients use to evaluate their position in the market. As the saying goes, "garbage in, garbage out". When we detect garbage OCR output, we fall back to our army of human transcribers: Amazon MTurk. While humans are certainly more accurate than OCR when it comes to receipt transcription, mistakes still happen, accidentally or otherwise. My task over the last few months as a summer intern was to architect another line of defense against dirty data riddled with typos. This problem is a classic application of one type of unsupervised machine learning: clustering. Centered around each correctly spelled line item string, there resides a cluster of incorrectly spelled variations for that string, each maybe one or two letters off. An observation we made is that "the mode is always right". In other words, within each cluster of similarly spelled strings, the one we've seen the most times, often by an order of magnitude or more, is the correctly spelled one. Therefore, once we have broken down our line items into clusters, we can clean the entire cluster to the mode. Sounds simple, right? So, how did we cluster our receipt line item strings? We turned to the DBSCAN algorithm. While I won't go into the algorithm itself, one major component of the algorithm is the neighborhood query function. It is this function that determines the final runtime of the algorithm, whether that is minutes and hours, or days and weeks. Given a line item string, how do we characterize its neighbors? Through experimentation, we have found that a Levenshtein distance of 2 works well. Levenshtein distance is a measure of how many edit operations are necessary to transform one string to another, where an edit operation can be inserting one character, deleting one character, or substituting one character for another. Therefore, given one line item string, we define its neighbors to be the set of all other line item strings that can be transformed to the original in at most 2 edit operations. We initially toyed with the scikit-learn Python library, as it had an implementation of DBSCAN readily available. We quickly learned that it was not the answer to our problems. While its implementation of DBSCAN was well optimized for numerical data and had fast indices like ball trees and KD trees, there were no indices available for string data using Levenshtein distance. Thus, this degenerated to creating a $n \times n$ matrix of the pairwise distances between all $n$ line item strings, which has $O(n^2)$ space and time complexity. As an optimization to our algorithm, we decided to partition the receipt line items by the stores they came from, since stores differ wildly in their receipt line item descriptions for the same items. Even with this optimization, one of our largest stores, Kroger, still had 1.1 million unique line item strings that we've seen throughout the years. 
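To make the cost concrete, here is a minimal sketch (not InfoScout's actual code) of the neighborhood query that DBSCAN needs when the metric is plain Levenshtein distance with a threshold of 2. Each query scans the entire list, so a single query is O(n) and a full clustering pass is O(n^2), which is exactly the blow-up described above.

# Brute-force epsilon-neighborhood for strings: return every line item within
# Levenshtein distance `eps` of the query.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = curr
    return prev[-1]

def neighborhood(query, line_items, eps=2):
    return [s for s in line_items if levenshtein(query, s) <= eps]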
The distance matrix for Kroger would need dozens of terabytes of memory, which would not fit in even the largest AWS EC2 instance. What we needed instead was a metric index: a BK tree only requires that the distance function be a true metric, i.e. non-negative, symmetric, and satisfying the triangle inequality. Fortunately, Levenshtein distance satisfies all these conditions. That means that we can take advantage of the numerous convenient properties of the BK tree: $O(n \log n)$ creation time, $O(\log n)$ query time, and $O(n)$ space. At this point, our dataset easily fits into any modern computer, and will complete within a few hours for a few hundred thousand unique line items, which is typical for a medium-sized chain like Trader Joe's. The cleansing output of this algorithm is quite promising, with solid precision and recall. Nevertheless, it can still be improved. Now that we got the dataset to fit in memory, the next step would be to improve precision and recall. The intuition is that, for an OCR system, some types of mistakes are more common than others. For example, the number "0" and the capital letter "O" can be easily confused. On the other hand, it is unlikely for OCR to recognize an "X" when there is actually an "O". However, using the Levenshtein distance algorithm, all these errors would have the exact same cost of 1. To quantify these differences, we collected statistics from our proprietary data to determine the frequencies of each type of error, from both humans and OCR. These frequencies were then transformed into costs. For a typical error, the cost would be around 1. For common errors, such as substituting an "O" for a "0", the cost would be less than 1, while uncommon errors would have costs greater than 1. These costs would then be used to calculate the Levenshtein distance. Thus, for the same threshold distance of 2, this allows a neighbor to be more than 2 edit operations away, if those edits are very common mistakes. Conversely, seldom-seen edits are penalized, and perhaps only 1 edit operation is allowed before the distance threshold of 2 is reached. Furthermore, human-transcribed texts produce very different types of errors compared to OCR-transcribed texts. For instance, it is not likely for an OCR system to transpose adjacent characters in a string. On the other hand, human typists make those mistakes constantly. How many times have you seen "teh" or "adn" in a comment section or forum online? To address this difference, we instead used the Damerau-Levenshtein distance for human-sourced line item strings. The Damerau-Levenshtein distance allows the same 3 edit operations from the Levenshtein distance, plus the transpose operation. In addition, we also used different weights for human-sourced strings. For instance, the letters "I" and "O" are very close on a QWERTY keyboard, so it is likely for a human typist to type one instead of the other, while it is unlikely to be the case for OCR-sourced strings, since the two characters look nothing alike. We could not find any existing library out there that can calculate weighted Levenshtein and weighted Damerau-Levenshtein distance. Thus, we built a library to do this ourselves. With this modification to our algorithm, both precision and recall were improved. Unlikely errors were pruned out of the clusterings, while more common errors found their way into their correct clusters. Unfortunately, BK trees can no longer be used to index the strings, since the weighted Levenshtein and Damerau-Levenshtein distances satisfy neither symmetry nor the triangle inequality. Once again, we are faced with a brute force $O(n)$ neighborhood query. Fortunately, I stumbled upon this blog post.
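Before moving on, here is a sketch of what a weighted Levenshtein distance looks like. The cost table below is made up for the sake of the example (the real costs were learned from proprietary error statistics), but it shows how a cheap 0/O substitution lets a string stay within the same threshold of 2 even after two such edits.

# Illustrative weighted Levenshtein distance; the substitution costs here are
# invented example values, not the learned production costs.

SUB_COST = {("0", "O"): 0.2, ("O", "0"): 0.2, ("1", "l"): 0.3, ("l", "1"): 0.3}

def weighted_levenshtein(a, b, ins_cost=1.0, del_cost=1.0):
    prev = [j * ins_cost for j in range(len(b) + 1)]
    for i, ca in enumerate(a, start=1):
        curr = [i * del_cost]
        for j, cb in enumerate(b, start=1):
            sub = 0.0 if ca == cb else SUB_COST.get((ca, cb), 1.0)
            curr.append(min(prev[j] + del_cost,      # delete ca
                            curr[j - 1] + ins_cost,  # insert cb
                            prev[j - 1] + sub))      # substitute ca -> cb
        prev = curr
    return prev[-1]

# "ST0RE BRAND 0J" is two plain edits away from "STORE BRAND OJ", but each
# 0 -> O swap is cheap, so the weighted distance is only 0.4.
print(weighted_levenshtein("ST0RE BRAND 0J", "STORE BRAND OJ"))  # 0.4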
The algorithm described there is highly efficient: it allows constant neighborhood query time with respect to the size of the index. After using this algorithm to find an approximate neighborhood, we then use our weighted Levenshtein distance to find the exact neighborhood on the reduced search space. While this algorithm does use an obscene amount of memory, it cut down the algorithm's runtime from hours to less than 10 minutes. The final step is the same as before: picking the most frequently occurring string within each cluster as the correct spelling. For Trader Joe's, a store with around 480,000 unique line items that we've seen so far, this algorithm was able to clean that down to around 200,000, a reduction of almost 60%. As we like to say up north, impressive, eh?
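That last mapping step is only a few lines. The sketch below assumes the clusters and the raw frequency counts are already available from the steps above; the names are mine, not from the production pipeline.

# Within each cluster, map every variant to the most frequently observed
# spelling ("the mode is always right").

from collections import Counter

def build_cleanup_map(clusters, freq):
    cleanup = {}
    for cluster in clusters:
        mode = max(cluster, key=lambda s: freq[s])   # most frequently seen spelling
        for variant in cluster:
            cleanup[variant] = mode
    return cleanup

freq = Counter({"ORANGE JUICE": 950, "0RANGE JUICE": 12, "ORANGE JU1CE": 3})
clusters = [["ORANGE JUICE", "0RANGE JUICE", "ORANGE JU1CE"]]
print(build_cleanup_map(clusters, freq)["0RANGE JUICE"])   # -> "ORANGE JUICE"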
CommonCrawl
We study the growth of typical groups from the family of $p$-groups of intermediate growth constructed by the second author. We find that, in the sense of category, a generic group exhibits oscillating growth with no universal upper bound. At the same time, from a measure-theoretic point of view (i.e., almost surely relative to an appropriately chosen probability measure), the growth function is bounded by $e^{n^\alpha}$ for some $\alpha < 1$.
CommonCrawl
Abstract: We propose a variety of models of random walk, discrete in space and time, suitable for simulating stable random variables of arbitrary index $\alpha$ ($0< \alpha \le 2$), in the symmetric case. We show that by properly scaled transition to vanishing space and time steps our random walk models converge to the corresponding continuous Markovian stochastic processes, that we refer to as Levy-Feller diffusion processes. Journal reference: Physica A, Vol. 269, No 1, pp. 79-89, 1999.
CommonCrawl
This is to be completed by November 2nd, 2017. Complete the lesson "Data Visualization in R". (Source: Minka, Murphy.) My neighbor has two children. Assuming that the gender of a child is like a coin flip, it is most likely, a priori, that my neighbor has one boy and one girl, with probability 1/2. The other possibilities—two boys or two girls—have probabilities 1/4 and 1/4. a. Suppose I ask him whether he has any boys, and he says yes. What is the probability that one child is a girl? b. Suppose instead that I happen to see one of his children run by, and it is a boy. What is the probability that the other child is a girl? (Source: Peter Lee, Murphy) Suppose a crime has been committed. Blood is found at the scene for which there is no innocent explanation. It is of a type which is present in 1% of the population. a. The prosecutor claims: "There is a 1% chance that the defendant would have the crime blood type if he were innocent. Thus there is a 99% chance that he is guilty." This is known as the prosecutor's fallacy. What is wrong with this argument? b. The defender claims: "The crime occurred in a city of 800,000 people. The blood type would be found in approximately 8000 people. The evidence has provided a probability of just 1 in 8000 that the defendant is guilty, and thus has no relevance." This is known as the defender's fallacy. What is wrong with this argument? (Source: Koller, Murphy.) After your yearly checkup, the doctor has bad news and good news. The bad news is that you tested positive for a serious disease, and that the test is 99% accurate (i.e., the probability of testing positive given that you have the disease is 0.99, as is the probability of testing negative given that you don't have the disease). The good news is that this is a rare disease, striking only one in 10,000 people. a. Which of the following sets of numbers are sufficient for the calculation? i. $P(e_1, e_2), P(H), P(e_1|H), P(e_2|H)$. ii. $P(e_1, e_2), P(H), P(e_1, e_2 | H)$. b. Now suppose we assume $E_1\perp E_2 | H$ (i.e., $E_1$ and $E_2$ are independent given $H$). Which of the above sets are sufficient now? Estimate the value of $\pi$ by taking uniform random samples from the square $[-1,1]\times [-1,1]$ and seeing which lie in the disc $x^2+y^2\leq 1$. A company is trying to determine why their employees leave and why they stay. They have a list of roughly 15000 employee records here. a. Download this dataset and load it in R (this may require setting up a Kaggle account if you don't already have one). b. Examine the dataset and see if you need to transform any of the features (columns); e.g., are there factors that were not recognized as such, is there missing data? c. Randomly shuffle the rows and cut the dataset into two pieces with 10000 entries in a data frame called train and the remaining entries in a data frame called valid. d. Study the train data frame and see if you can find any features that predict whether or not an employee will leave. e. Make a hypothesis about how you can predict whether an employee will leave by studying the train data. f. Once you have fixed this hypothesis evaluate how well your criteria work on the valid data frame. g. Justify your proposal with data and charts. Save at least one of these charts to a pdf file to share with management.
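For the Monte Carlo exercise above, the idea is that the fraction of uniform samples from the square that land in the unit disc estimates $\pi/4$. Here is a quick sketch (written in Python for brevity; the course itself uses R, but the translation is direct):

# Sample points uniformly from [-1, 1] x [-1, 1] and count the fraction that
# lands inside the unit disc; four times that fraction estimates pi.

import random

def estimate_pi(n_samples=1_000_000, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi())   # typically prints something close to 3.1416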
CommonCrawl
Why are correlations between -1 and 1? One definition of the correlation is the covariance divided by the product of the standard deviations, which is more convenient for calculation, but in my opinion less convenient for understanding. I prefer to think of the correlation as the average product of the standardized values of the two variables. The correlation coefficient will be positive when the standardized values usually have the same sign — meaning that larger than average values of one variable go with larger than average values of the other — and negative when the signs tend to be mismatched. For standardized random variables $X$ and $Y$ (mean $0$, variance $1$) the correlation is just $E[XY]$, so we can expand $E[(X+Y)^2] = E[X^2] + 2E[XY] + E[Y^2] = 2 + 2E[XY] \geq 0$ and rearrange to get that $E[XY] \geq -1$. Similarly looking at $E[(X-Y)^2] \geq 0$ gives $E[XY] \leq 1$. Finally, the correlation of a data set is just the correlation of the corresponding random variables. This is all well and good if you're introducing random variables. But one of the texts I'm teaching from this semester (Freedman, Pisani, and Purves, Statistics) doesn't, and the other (Moore, McCabe, and Craig, Introduction to the Practice of Statistics) introduces the correlation for sets of bivariate data before it introduces random variables. These texts just baldly state that $r$ is between $-1$ and $1$ always — but of course some students ask why. For data one can argue directly: write $a_i = x_i - \bar{x}$ and $b_i = y_i - \bar{y}$, and consider the quadratic $q(t) = \sum_i (ta_i + b_i)^2 = t^2 \sum_i a_i^2 + 2t \sum_i a_i b_i + \sum_i b_i^2$. Since $q(t) \geq 0$ for every $t$, its discriminant $4\left(\sum_i a_i b_i\right)^2 - 4\left(\sum_i a_i^2\right)\left(\sum_i b_i^2\right)$ must be nonpositive, and this being nonpositive is exactly the form of Cauchy-Schwarz we needed; dividing through by $\left(\sum_i a_i^2\right)\left(\sum_i b_i^2\right)$ gives that the square of the correlation is bounded above by $1$, which is what we wanted. So now I have something to tell my students other than "you need to know about random variables", which is always nice. Not that it would kill them to know about random variables. But I'm finding that intro stat courses are full of these black boxes that some students will accept and some want to open. This could use a definition of "standardization" for those of us following along up to there but not statistics experts. Fair enough. To standardize a set of numbers $(x_1, x_2, \ldots, x_n)$ you just subtract their mean and then divide by their standard deviation — so you get numbers that indicate how far above or below the mean they are, in units of their standard deviation. Similarly for random variables. The way it's written, it looks as though the entire expectation is squared (which gives a trivial and useless result), whereas we really want the expression within the expectation to be squared. Thanks a lot from France for the part with random variables. I don't know if the Cauchy-Schwarz inequality works in this case but your demonstration is nice without it. Of course, as you wrote in this article, the case with n values of statistics without random variables works well with it.
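As a numerical footnote to the argument above, it is easy to check on a computer that the correlation of a data set equals the average product of the standardized values and always lands in $[-1, 1]$. A short sketch using numpy (standardizing with the population standard deviation, i.e. dividing by $n$):

# The correlation computed two ways: as the mean product of z-scores, and via
# numpy's corrcoef; the two agree, and the value lies in [-1, 1].

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.6 * x + rng.normal(size=1000)          # two correlated samples

zx = (x - x.mean()) / x.std()                # std() uses ddof=0 by default
zy = (y - y.mean()) / y.std()

r_via_standardization = (zx * zy).mean()
r_via_numpy = np.corrcoef(x, y)[0, 1]

print(r_via_standardization, r_via_numpy)    # the two agree
assert -1.0 <= r_via_numpy <= 1.0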
CommonCrawl
We study the thermal properties of QCD in the presence of a small quark chemical potential $\mu$. Derivatives of the phase transition point with respect to $\mu$ are computed at $\mu=0$ for 2 and 3 flavors of p-4 improved staggered fermions on a $16^3\times4$ lattice. Moreover we contrast the case of isoscalar and isovector chemical potentials, quantify the effect of $\mu\not=0$ on the equation of state, and comment on the screening effect by dynamical quarks and the complex phase of the fermion determinant in QCD with $\mu\not=0$. Ejiri S, Allton CR, Hands SJ, et al. The QCD phase transition at high temperature and low density. Nucl.Phys.Proc.Suppl. 2003;119:538-540.
CommonCrawl
In the introduction of this paper, it is explained how the application of a dynamic subgrid scale model for turbulence in a large eddy simulation (LES) model corresponds to doing one renormalization step in a renormalization group (RNG) analysis. However, when implementing the renormalization group into an LES model to obtain subgrid scale parameterizations, the rescaling step is left out. If I have understood this correctly, the rescaling means that finally, after $k$ renormalization steps, one considers a spatially infinite domain, which is needed to define scale invariance and therefore fixed points of the RNG flow. Neglecting the rescaling step means that the limit obtained in the model for $k \to \infty$ is not (or does not have to be?) a true fixed point of the RNG transformation and is called a limit point to distinguish it from a conventional scale-invariant fixed point. My question now is: Can the difference between such a "true" scale-invariant fixed point of the RNG flow and the limit point obtained after a large enough number of renormalization steps lead to a "misbehavior" of the dynamic subgrid scale parameterization, such that, for example, an expected Kolmogorov fixed point is missed and the turbulent kinetic energy spectrum does not show the right scaling?
CommonCrawl
Because multiplication is commutative, it may not matter symbolically; however, as the authors mentioned in the article, taking 4 pills a day for 21 days is a lot different from taking 21 pills a day for 4 days. If I want my students to know what multiplication "is," what is the correct explanation? Your question is really not a math question, but a linguistic question, even though it involves symbols. Since, as you recognize, 3*4 and 4*3 have the same value, there is no mathematical value in the distinction. You are only asking about the underlying image when we say "three times four" in English. (It may have been different in Euclid's Greek.) And I submit that it can be reasonably interpreted either way. My first thought when analyzing the phrase is that originally '3 times' meant to repeat what follows three times; that is, it would mean the same as '4, taken 3 times'. So 3 times 4 means 4+4+4. But that seems awkward, considering the way we tend to say the phrase now. We might read through a calculation, saying "3, times 4 is 12, plus 2 is 14, ..." where each operation acts on the first number. We're really saying "3 multiplied by 4." Taken that way, we start with 3 and multiply it by 4, meaning that we repeat it 4 times. So 3*4 = 3+3+3+3. The difference here is entirely in the grammar: is '3 times' an adjective phrase modifying '4', or is 'times' a preposition equivalent to 'multiplied by', and 'times 4' a modifier of '3'? Of course, the Greeks had no symbolic notation for multiplication, so that says nothing about the meaning of \(3 × 4\); they wrote their math out in words, like "A multiplied by B makes C", leaving no ambiguity about which is the multiplier. The article assumes that "\(×\)" is read as "multiplied by", so that the second number is the multiplier. But that begs the question! If they had used our notation, might they not have seen \(3 × 4\) as 3 multiplying 4? Anyway, I continued by considering the issue of pedagogy: Is there, and should there be, only one model of a given multiplication? A rectangular array of 12 dots can be seen equally well as 3 rows of 4, or as 4 columns of 3. Once you've drawn the figure, or once you've written 3*4, no one can really tell which interpretation you meant. The distinction has been abstracted out of the problem. And that is a good thing, not bad: in mathematics the ability to work abstractly and forget what the concrete problem was is a major advantage, because we can ignore details that don't affect the result, and rearrange the work to make it easier. If I took 4 pills a day for 21 days, and now I want ONLY to know how many pills I took, I can call it 4*21 (I just put the numbers in the order I saw them, without having to think about which is the multiplicand) and re-model the problem as 4 sets of 21, allowing me to add 21+21+21+21, which is easier than adding 21 4's. I can switch freely among different models, because I know that the outcome is equivalent. So I never bother to define whether 4*21 means 4 groups of 21 or 21 groups of 4, because that distinction would only be a hindrance. And if I were doing a calculation for which it made a difference whether I took 4 or 21 pills a day, such as finding the concentration of medication in my blood, I wouldn't be multiplying, because that would be the wrong abstraction! So "what is multiplication?" It is a commutative operation that can be modeled in two symmetrical ways as repeated addition (when applied to whole numbers).
Doctor Tom also wrote an answer to the question of what multiplication "is", by stating a mathematical definition of multiplication as an abstract operation, akin to what I have recently mentioned about axioms of arithmetic. It is probably not useful for Cindy's purposes, but is interesting. Now, three years later one of the authors of the article cited, John Marshall, wrote to us, chiding me for commenting without having read the article (which, of course, I couldn't do, so that my comments weren't about the article at all), and defending his claims. Another three years later, the other author, Deirdre Dempsey, also discovered what I had written and wrote me; by that time I had been able to read the article, and had a better idea what it said. does it make your point? It seems pretty clear to me. In 3×4=12 the 3 is the multiplicand and the 4 the multiplier. Just because the OPERATION of multiplication is commutative it doesn't follow that the MEANING of multiplication is. AND IT IS MEANINGS WE ARE TALKING ABOUT. You don't seem to understand either that, in the pills issue that Cindy quotes, the problem is NOT asking for the total number of pills, but rather how would the pharmacist, or patient, interpret a prescription that used 4×21. It is about the meaning of 4×21 not the product. When Deirdre Dempsey wrote after I had read the article, I said the following about that point. It turns out that the whole article was centered around students being asked to think of a situation represented by \(6 × 3\); the authors claimed this should be seen only as 3 groups of 6, and not as 6 groups of 3, though most people think it is the latter. The pill example claims that \(4 × 21\) specifically means "21 groups of 4", and so could be used in a prescription to tell a patient what to take, and not just to calculate the total number of pills. It seems to me that you have confused the idea of being able to represent a concrete problem as an arithmetic operation (which is an important indication of understanding) with the inverse process, being able to turn an operation into a single kind of concrete problem. The latter is not a valid problem to assign, unless you accept any situation that will yield the same operation. The fact is that an operation like 4×21 does not mean either 4 pills a day for 21 days OR 4 days taking 21 pills a day. The specific point I dealt with in my original response was the implication that a prescription could be written saying to take 4×21 pills. I certainly hope no one would ever do that, because that phrase simply does not MEAN to take 4 pills a day for 21 days. The phrase "4 times 21" is not a description of a regimen, but an arithmetic operation whose meaning is "84". All the dictionaries agree on that! Now, there is one point where I agree with you. Replacing the word "times" with "multiplied by" when multiplication is first introduced would be a good idea; it allows you to talk concretely about multiplication with a clear physical model, and it also avoids the unfortunate tendency of so many students to turn "times" into a verb ("I timesed 2 by 3 and got 6"). As long as you then move on quickly to talk about other models of multiplication and to emphasize the commutative property, I'm happy with that change. 
The best reason for preferring "multiplied by", I think, is that it fits into the model of other operations, where we start with a number and operate on it: \(a + b\) means "a increased by b", \(a – b\) means "a decreased by b", \(a \div b\) means "a divided by b", so \(a \times b\) should mean "a multiplied by b". That consistency is perfectly reasonable, especially in initial teaching. It just happens that it isn't followed consistently in real life, and is not important in practice. To teach children that only one model is valid is incompatible with everyday usage, and leads to confused questions from students and their parents. My 2nd grader failed her math quiz today because the teacher insists that there is an order as to how a multiplication sentence should be written. The teacher said that 2 x 3 = 6 would be wrong. The math teacher said that this is the only way that can be written, and that writing it as 3 x 5 = 15 is wrong. I have already spoken to the teacher and explained that these can be interpreted in two ways, but she insists that this is how 2nd graders should be taught! Unless I can prove her wrong with a written source of some kind, she will not consider my daughter's answers. One of the things in education that trouble me most is when something is taught in a way that confuses the smart kids who really understand (and maybe have knowledge beyond what is being taught), and even costs them points. Intelligence should never be penalized. Nor should teachers take a position that parents can see makes no sense. But we get questions like this repeatedly. I agree fully with you. But there is some reason behind the teacher's position; and a compromise is possible. Multiplication is commutative, so your point is correct; 2*3 and 3*2 are equal, and either could be used to represent the same calculation or situation. Mature thinkers don't need to pay any attention to the order, but use whichever order is most useful. On the other hand, it is reasonable for students to INITIALLY be introduced to multiplication using a single model, which in this teacher's curriculum is "this many of those" rather than "this repeated that many times." This simply allows everyone in the class to be on the same page when they give examples. But they should soon learn the commutative property so that, for example, if they want to find the cost of 50 items that each cost $2, they could just add two 50's rather than add fifty 2's! Some time later, each of the authors of the article cited (which I didn't have access to at the time) wrote to me complaining that my comments were wrong. They didn't change my mind, but I did then get to read their full article, and it did clarify the point they were trying to make. My conclusion is that it does make sense to teach kids initially to read "2*3" as "2 multiplied by 3," and perhaps to ask them to use that interpretation in class to make examples of where a particular multiplication would be used. They should not, however, be taught that this is the only interpretation. If nothing else, you can refer your daughter's teacher to that 2001 article from the Phi Delta Kappan, if she has access to it (I had to get it online through my school library site), because it makes the opposite statement about the meaning of multiplication, showing that what your teacher does is not the ONLY way to teach the subject; and moreover that other educators have the opposite opinion. 
It is not the way students SHOULD be taught multiplication, just the way they ARE taught it in this curriculum. So here's the compromise I suggest: the teacher can teach one interpretation of the application of multiplication, but make it clear that it is not the ONLY answer, just the one they are using IN THIS CLASS, for now. Your daughter should be commended for seeing that it really doesn't make a difference; and rather than being marked wrong, can be encouraged to learn the way the teacher says to do it merely as a class convention, to help students who need more time to catch on. CCSS.Math.Content.3.OA.A.1 Interpret products of whole numbers, e.g., interpret \(5 × 7\) as the total number of objects in 5 groups of 7 objects each. For example, describe a context in which a total number of objects can be expressed as \(5 × 7\). Now, all this actually says is that students should be able to interpret a product in terms of some physical model; the specific instance shown is just an example of one such model, and does not say that this is the only model. So with my understanding of the issue, I would say that they leave it open whether to introduce only one model, or which one to use. But I can see why teachers and curriculum writers might interpret it as requiring this one model. I wish that teachers, teacher educators, and curriculum writers could all have a sufficiently mature understanding of math beyond the level being taught to know what matters, and what doesn't. But none of us can be expected to do everything right (I myself can't claim to know all about elementary education), so we have to be gracious. Unfortunately, the example the standards give is not the "multiplied by" model that the authors of the article convinced me is better as an initial model, but the "times" model that is probably most commonly taught already. Oh, well.
CommonCrawl
We consider crossed product von Neumann algebras arising from free Bogoljubov actions of $\mathbb Z$. We describe several presentations of them as amalgamated free products and cocycle crossed products and give a criterion for factoriality. A number of isomorphism results for free Bogoljubov crossed products are proved, focusing on those arising from almost periodic representations. We complement our isomorphism results by rigidity results yielding non-isomorphic free Bogoljubov crossed products and by a partial characterisation of strong solidity of a free Bogoljubov crossed product in terms of properties of the orthogonal representation from which it is constructed.
CommonCrawl
In this paper we extend recent results on the existence and uniqueness of solutions of ODEs with non-smooth vector fields to the case of martingale solutions, in the Stroock-Varadhan sense, of SDEs with non-smooth coefficients. In the first part we develop a general theory, which roughly speaking allows to deduce existence, uniqueness and stability of martingale solutions for $\mathcal L^d$-almost every initial condition $x$ whenever existence and uniqueness is known at the PDE level in the $L^\infty$-setting (and, conversely, if existence and uniqueness of martingale solutions is known for $\mathcal L^d$-a.e. initial condition, then existence and uniqueness for the PDE holds). In the second part of the paper we consider situations where, on the one hand, no pointwise uniqueness result for the martingale problem is known and, on the other hand, well-posedness for the Fokker-Planck equation can be proved. Thus, the theory developed in the first part of the paper is applicable. In particular, we will study the Fokker-Planck equation in two somehow extreme situations: in the first one, assuming uniform ellipticity of the diffusion coefficients and Lipschitz regularity in time, we are able to prove existence and uniqueness in the $L^2$-setting; in the second one we consider an additive noise and, assuming the drift $b$ to have $BV$ regularity and allowing the diffusion matrix $a$ to be degenerate (also identically $0$), we prove existence and uniqueness in the $L^\infty$-setting. Therefore, in these two situations, our theory yields existence, uniqueness and stability results for martingale solutions.
CommonCrawl
In the Maryland Centrifugal Experiment, the plasma is created in a shaped open-field magnetic configuration. Plasma rotation perpendicular to the magnetic field at supersonic speeds (sonic Mach number larger than unity) is controlled by an externally-applied $E \times B$ drift. This work documents the centrifugal confinement effect produced by the plasma rotation from interferometric measurements of plasma density at the magnetic minimum (midplane) and 85 cm off-midplane. Complete time histories of density at these two locations are obtained and compared to deduce the efficacy of axial confinement. Other key parameters are also directly measured at midplane (rotation velocity profiles, ion temperature, and diamagnetic flux) and off-midplane (diamagnetic flux). The observed scaling of the average density ratio at midplane and off-midplane is obtained as a function of the shape of the magnetic field (mirror ratio) and the data are compared with the MHD (Grad-Shafranov equation) solution of the centrifugally confined density. The theory depends on the sonic Mach number and mirror ratio and the data are shown to be in agreement with the predictions of the ideal MHD equilibrium theory.
CommonCrawl
A) $\alpha$ is an upper bound of $E$. B) If $\gamma < \alpha$ then $\gamma$ is not an upper bound of $E$. I do not understand the difference between upper bound and least upper bound. If someone could explain the difference between the two and possibly provide an example, it would be much appreciated. Thanks. Every least upper bound is an upper bound; however, the least upper bound is the smallest number that is still an upper bound. Example: Take the set $(0,1)$. It has $2$ as an upper bound but clearly the smallest upper bound that the set can have is the number $1$ and hence it's the least upper bound. Maybe you like this definition better? We say $\alpha$ is the least upper bound (l.u.b.) of a set $A$ if: $(1)$ It is an upper bound of $A$; that is, if $x\in A$, then $x\leq \alpha$. $(2)$ If $\beta$ is any other upper bound, then $\alpha\leq \beta$. That is, $\alpha$ is the least of all upper bounds of $A$. As you can see the l.u.b. has the unique property $(2)$. Why unique? Because if $\gamma$ is another l.u.b., by definition, we must have both $\alpha\leq \gamma$ and $\gamma\leq \alpha$, but this means we must have $\alpha=\gamma$. So l.u.b.s, when they exist, are unique.
CommonCrawl
In a graph, a vertex is called an articulation point if removing it and all the edges associated with it results in the increase of the number of connected components in the graph. For example consider the graph given in following figure. If in the above graph, vertex 1 and all the edges associated with it, i.e. the edges 1-0, 1-2 and 1-3 are removed, there will be no path to reach any of the vertices 2, 3 or 4 from the vertices 0 and 5, that means the graph will split into two separate components. One consisting of the vertices 0 and 5 and another one consisting of the vertices 2, 3 and 4 as shown in the following figure. Likewise removing the vertex 0 will disconnect the vertex 5 from all other vertices. Hence the given graph has two articulation points: 0 and 1. Articulation Points represents vulnerabilities in a network. In order to find all the articulation points in a given graph, the brute force approach is to check for every vertex if it is an articulation point or not, by removing it and then counting the number of connected components in the graph. If the number of components increases then the vertex under consideration is an articulation point otherwise not. Here's the pseudo code of the brute force approach, it returns the total number of articulation points in the given graph. The above algorithm iterates over all the vertices and in one iteration applies a Depth First Search to find connected components, so time complexity of above algorithm is $$O(V \times (V+E))$$, where V is the number of vertices and E is the number of edges in the graph. Clearly the brute force approach will fail for bigger graphs. Given a DFS tree of a graph, a Back Edge is an edge that connects a vertex to a vertex that is discovered before it's parent. For example consider the graph given in Fig. 1. The figure given below depicts a DFS tree of the graph. In the above case, the edge 4 - 2 connects 4 to an ancestor of its parent i.e. 3, so it is a Back Edge. And similarly 3 - 1 is also a Back edge. But why bother about Back Edge? Presence of a back edge means presence of an alternative path in case the parent of the vertex is removed. Suppose a vertex $$u$$ is having a child $$v$$ such that none of the vertices in the subtree rooted at $$v$$ have a back edge to any vertex discovered before $$u$$, that means if vertex $$u$$ is removed then there will be no path left for vertex $$v$$ or any of the vertices present in the subtree rooted at vertex v to reach any vertex discovered before $$u$$, that implies, the subtree rooted at vertex $$v$$ will get disconnected from the entire graph, and thus the number of components will increase and $$u$$ will be counted as an articulation point. On the other hand, if the subtree rooted at vertex $$v$$ has a vertex $$x$$ that has back edge that connects it to a vertex discovered before $$u$$, say $$y$$, then there will be a path for any vertex in subtree rooted at $$v$$ to reach $$y$$ even after removal of $$u$$, and if that is the case with all the children of $$u$$, then $$u$$ will not count as an articulation point. So ultimately it all converges down to finding a back edge for every vertex. So, for that apply a DFS and record the discovery time of every vertex and maintain for every vertex $$v$$ the earliest discovered vertex that can be reached from any of the vertices in the subtree rooted at $$v$$. 
If a vertex $$u$$ is having a child $$v$$ such that the earliest discovered vertex that can be reached from the vertices in the subtree rooted at $$v$$ has a discovery time greater than or equal to $$u$$, then $$v$$ does not have a back edge, and thus $$u$$ will be an articulation point. So, till now the algorithm says that if all children of a vertex $$u$$ are having a back edge, then $$u$$ is not an articulation point. But what will happen when $$u$$ is root of the tree, as root does not have any ancestors. Well, it is very easy to check if the root is an articulation point or not. If root has more than one child than it is an articulation point otherwise it is not. Now how does that help?? Suppose root has two children, $$v_1$$ and $$v_2$$. If there had been an edge between vertices in the subtree rooted at $$v_1$$ and those of the subtree rooted at $$v_2$$, then they would have been a part of the same subtree. $$adj$$ : It is an $$N \times N$$ matrix denoting the adjacency matrix of the given graph. $$disc$$ : It is an array of $$N$$ elements which stores the discovery time of every vertex. It is initialized by 0. $$low$$ : It is an array of $$N$$ elements which stores, for every vertex $$v$$, the discovery time of the earliest discovered vertex to which $$v$$ or any of the vertices in the subtree rooted at $$v$$ is having a back edge. It is initialized by INFINITY. $$visited$$ : It is an array of size $$N$$ which denotes whether a vertex is visited or not during the DFS. It is initialized by false. $$parent$$ : It is an array of size $$N$$ which stores the parent of each vertex. It is initialized by NIL. $$AP$$ : It is an array of size $$N$$. $$AP[i]$$ = true, if ith vertex is an articulation point. $$vertex$$: The vertex under consideration. $$V$$ : Number of vertices. $$time$$ : Current value of discovery time. The above algorithm starts with an initial vertex say $$u$$, marks it visited, record its discovery time, $$disc[u]$$, and since it is just discovered, the earliest vertex it is connected to is itself, so $$low[u]$$ is also set equal to vertex's discovery time. It keeps a counter called $$child$$ to count the number of children of a vertex. Then the algorithm iterates over every vertex in the graph and see if it is connected to $$u$$, if it finds a vertex $$v$$. that is connected to $$u$$, but has already been visited, then it updates the value $$low[u]$$ to minimum of $$low[u]$$ and discovery time of $$v$$ i.e., $$disc[v]$$.But if the vertex $$v$$ is not yet visited, then it sets the $$parent[v]$$ to $$u$$ and calls the DFS again with $$vertex = v$$. So the same things that just happened with $$u$$ will happen for $$v$$ also. When that DFS call will return, $$low[v]$$ will have the discovery time of the earliest discovered vertex that can be reached from any vertex in the subtree rooted at $$v$$. So set $$low[u]$$ to minimum of $$low[v]$$ and itself. And finally if $$u$$ is not the root, it checks whether $$low[v]$$ is greater than or equal to $$disc[u]$$, and if so, it marks $$AP[u]$$ as true. And if $$u$$ is root it checks whether it has more than one child or not, and if so, it marks $$AP[u]$$ as true. The following image shows the value of array $$disc$$ and $$low$$ for DFS tree given in Fig. 3. Clearly only for vertices 0 and 1, $$low[ 5 ] \ge disc[ 0 ]$$ and $$low[ 2 ] \ge disc[ 1 ]$$, so these are the only two articulation points in the given graph. 
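As a concrete reference, here is a sketch in Python of the DFS described above, using the same variables ($$disc$$, $$low$$, $$parent$$, $$AP$$) and the same root/non-root conditions. For large graphs you would switch to adjacency lists and an iterative DFS to avoid recursion-depth limits.

def find_articulation_points(adj, V):
    disc = [0] * V
    low = [float("inf")] * V
    visited = [False] * V
    parent = [None] * V
    AP = [False] * V
    time = [0]

    def dfs(u):
        visited[u] = True
        time[0] += 1
        disc[u] = low[u] = time[0]
        children = 0
        for v in range(V):
            if not adj[u][v]:
                continue
            if not visited[v]:
                children += 1
                parent[v] = u
                dfs(v)
                low[u] = min(low[u], low[v])
                if parent[u] is not None and low[v] >= disc[u]:
                    AP[u] = True           # some child of u has no back edge past u
            elif v != parent[u]:
                low[u] = min(low[u], disc[v])   # back edge u -> v
        if parent[u] is None and children > 1:
            AP[u] = True                   # root with more than one DFS child

    for u in range(V):
        if not visited[u]:
            dfs(u)
    return [u for u in range(V) if AP[u]]

# The graph from Fig. 1, as described in the text:
# edges 0-1, 0-5, 1-2, 1-3, 2-3, 2-4, 3-4
edges = [(0, 1), (0, 5), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
V = 6
adj = [[0] * V for _ in range(V)]
for a, b in edges:
    adj[a][b] = adj[b][a] = 1
print(find_articulation_points(adj, V))    # expected: [0, 1]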
An edge in a graph between vertices, say $$u$$ and $$v$$, is called a Bridge if, after removing it, there is no path left between $$u$$ and $$v$$. Its definition is very similar to that of Articulation Points. Just like them, it also represents vulnerabilities in the given network. For the graph given in Fig.1, if the edge 0-1 is removed, there will be no path left to reach from 0 to 1; similarly, if edge 0-5 is removed, there will be no path left that connects 0 and 5. So in this case the edges 0-1 and 0-5 are the Bridges in the given graph. The above code uses BFS to check if the vertices that were connected by the removed edge are still connected or not. It does so for every edge and thus its complexity is $$O(E \times (V+E))$$. Clearly it will fail for big values of $$V$$ and $$E$$. For the graph given in Fig.1, the $$low$$ and $$disc$$ values obtained for its DFS tree shown in Fig.3, by the above pseudo code, will be the same as those obtained in the case of articulation points. The values of the arrays $$low$$ and $$disc$$ are shown in Fig.4. Clearly, only for the two edges 0-1 and 0-5 is the bridge condition satisfied: $$low[ 1 ] > disc[ 0 ]$$ and $$low[ 5 ] > disc[ 0 ]$$. Hence those are the only two bridges in the given graph.
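The bridge-finding variant is nearly identical; the only change is the strict inequality $$low[v] > disc[u]$$ on tree edges. A sketch along the same lines (adjacency lists are used here for brevity):

def find_bridges(n, edges):
    graph = [[] for _ in range(n)]
    for a, b in edges:
        graph[a].append(b)
        graph[b].append(a)

    disc = [0] * n
    low = [0] * n
    visited = [False] * n
    bridges = []
    timer = [0]

    def dfs(u, parent):
        visited[u] = True
        timer[0] += 1
        disc[u] = low[u] = timer[0]
        for v in graph[u]:
            if v == parent:
                continue
            if visited[v]:
                low[u] = min(low[u], disc[v])        # back edge
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))           # nothing in v's subtree reaches back past u

    for u in range(n):
        if not visited[u]:
            dfs(u, -1)
    return bridges

edges = [(0, 1), (0, 5), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
print(find_bridges(6, edges))    # expected: [(0, 1), (0, 5)]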
CommonCrawl
Linear transform of jointly distributed exponential random variables, how to identify domain? Question: let $X,Y$ be independent exponential random variables (with parameter $\lambda$) and let $U = X+Y$, $V = X-Y$. Find the joint distribution of $U,V$. However, the domain of the joint distribution is not $0< u < \infty$. I thought this was the case because $0< x < \infty, 0 < y < \infty$ by the definition of an exponential distribution. Instead, the correct domain is $|v| < u < \infty$. You just have to think carefully about where $U$ and $V$ 'live' in relation to the bounds of the non-negative random variables $X, Y$, and each other. Hence $\min(X+Y)=0,\max(X+Y)\to \infty$ and $\min(X-Y)\to-\infty,\max(X-Y)\to\infty$. Also of note, clearly $\lvert X-Y\rvert \leq X+Y$.
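A quick simulation makes the support easy to believe. The sketch below (numpy, with $\lambda = 2$ chosen arbitrarily) draws many $(X, Y)$ pairs and confirms that every $(U, V) = (X+Y, X-Y)$ satisfies $|V| < U$:

# Sanity check of the support {(u, v) : |v| < u} by simulation.

import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
x = rng.exponential(scale=1 / lam, size=100_000)   # numpy uses scale = 1/lambda
y = rng.exponential(scale=1 / lam, size=100_000)
u, v = x + y, x - y

print(np.all(np.abs(v) < u))   # True: every draw satisfies |V| < U
print(u.min() > 0)             # True: U is strictly positive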
CommonCrawl
Syrjälä's network consists of $n$ computers and $n$ connections between them. At the moment it is possible to send a message between any two computers. Your task is to calculate the number of essential connections: if such a connection is removed from the network, it is not possible to send a message between some two computers. The first input line contains an integer $n$: the number of computers. The computers are numbered $1,2,\ldots,n$. After this, there are $n$ lines that describe the connections. Each line has two integers $a$ and $b$: there is a connection between computers $a$ and $b$. There is at most one connection between two computers, and there are no connections from a computer to itself. Print one integer: the number of essential connections. Explanation: The essential connections are $(1,2)$, $(2,3)$, $(5,6)$ and $(6,7)$.
CommonCrawl
An excellent crop of solutions here! Syed Farhan Iskander of Foxford School and Community College listed the cubes of every whole number until he got to a cube that was larger than 7163. Then he systematically examined the differences between 7163 and these cubes to find cases where the difference was a cube number. In this way he showed that the only two numbers $x$ and $y$ such that $x^3 + y^3 = 7163$ are 11 and 18. Ling Xiang Ning of Tao Nan School, Singapore and Elizabeth and Ella of Madras College also used this method. We give two further solutions using different methods, one by Koopa Koo from Boston College which uses the prime factorisation of 7163 and another by Adam of King James's School, Knaresborough, based on the parity of the numbers. For the parity method, note that 7163 is odd, so one of the numbers must be even and the other odd; write the even one as $y = 2m$. Working modulo 4, $7163 \equiv 3$ and $y^3 = 8m^3 \equiv 0$, so $x^3 \equiv 3 \pmod 4$, which forces $x \equiv 3 \pmod 4$; write $x = 4z + 3$. Substituting into $x^3 + y^3 = 7163$ and simplifying gives $z(16z^2 + 36z + 27) + 2m^3 = 1784$. Now $2m^3$ is even so that $z(16z^2 + 36z +27)$ must be even. This means that $z$ must be even. I have now greatly simplified the problem because there are only three even values of $z$ for which $z(16z^2 + 36z + 27)$ is less than 1784. I will try each of these in turn to see which gives an integer solution for $m$. If $z=0$, then $z(16z^2 + 36z + 27) = 0$ and hence $2m^3 = 1784$. This does not have an integer solution for $m$. If $z=2$, then $z(16z^2 + 36z + 27) = 326$ and hence $2m^3 = 1458$, giving $m^3 = 729$ and $m = 9$. If $z=4$, then $z(16z^2 + 36z + 27) = 1708$ and hence $2m^3 = 76$, which again has no integer solution. Because $z = 2$ gives an integer value for $m$, we have the solution $x = 4z + 3 = 11$ and $y = 2m = 18$. So the numbers are 11 and 18. For the factorisation method, note that $7163 = 13 \times 19 \times 29$ and $x^3 + y^3 = (x + y)(x^2 - xy + y^2)$, so $x + y$ must be a divisor of 7163. Consider (2), $x + y = 13$ and $x^2 - xy + y^2 = (x + y)^2 - 3xy = 19 \times 29$, implying $169 - 19 \times 29 = 3xy = -382$, which is impossible as $x$ and $y$ are positive. Consider (3), $x + y = 19$: $361 - 13\times 29 = 3xy = - 16$, which is another contradiction. The case $x + y = 29$ with $x^2 - xy + y^2 = 13 \times 19 = 247$ gives $841 - 247 = 3xy = 594$, so $xy = 198$ and $x$, $y$ are the roots of $t^2 - 29t + 198 = 0$, namely 11 and 18. For each of the larger divisors the discriminant of the corresponding quadratic is negative, implying that no real solutions exist, and hence those cases are rejected. Hence the two numbers are 11 and 18.
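Syed Farhan's search is short enough to write out as a few lines of Python (a sketch of the same method, not his original working):

# List the cubes below 7163 and look for a pair whose sum is exactly 7163.
target = 7163
cubes = {n: n ** 3 for n in range(1, 20) if n ** 3 < target}   # 19^3 = 6859 < 7163 < 20^3
solutions = sorted(
    {tuple(sorted((a, b))) for a in cubes for b in cubes if cubes[a] + cubes[b] == target}
)
print(solutions)   # [(11, 18)]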
CommonCrawl
Maze A has no way out. Maze B has an exit (solution) to the right of cell 2. Maze D has an exit up from cell 6. For example, using Maze D above, if the starting point is cell 9, one possible set of directions to get to the exit would be: right, right, right, right, right, up, up, up. ok - indicates that there is door there and you may proceed in that direction to the neighboring cell. solved - indicates that you have successfully found an exit to the maze. wrong - indicates that your program made an error, as discussed below. If your program determines there is no way out of the maze, you should send the precise string "no way out" (without the quotes) instead of a direction. If there is in fact no way out of the maze, you will receive a solved reply. Your program sends "no way out", even though there is a way out. Your program makes the same move (direction) from the same cell twice. After receiving a wrong or a solved reply, your program should exit. This is an interactive program. The input you receive is a function of the output you generate. All input and output strings must end in a new-line character. You should never send extra blank lines. You must make sure that your program's standard output stream is flushed after you output a the new-line character that completes a command. This is accomplished with System.out.flush() in Java, stdout.flush() in Python, fflush(stdout) in C, and cout << flush in C++. The first thing your program must do when it starts up is to send its first move (up, down, right or left), followed by a new-line character. It will then wait for a new-line terminated response on the standard input. The response will be one of wall, ok, solved, or wrong indication. Your program will then make another move based on the response it received as discussed above. This process will repeat until your program receives a wrong or solved indication. Example (User output in Teletype, Computer judge output in Bold). (This sample run has no relationship to the samples shown above). It is guaranteed that the maze will not be larger than $100 \times 100$ in any dimension.
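The part that usually trips people up is the flushing requirement, so here is a minimal sketch of the I/O helper in Python; the exploration strategy itself (e.g. a DFS that backtracks and never repeats a move from the same cell) is left out.

import sys

def send(direction):
    print(direction)
    sys.stdout.flush()              # flush, or the judge never sees the move
    reply = sys.stdin.readline().strip()
    if reply in ("solved", "wrong"):
        sys.exit(0)                 # the protocol says to exit at this point
    return reply                    # otherwise "ok" or "wall"

# example: probe upward once and react to the reply
reply = send("up")
if reply == "ok":
    pass    # we moved to the neighbouring cell; keep exploring from there
else:
    pass    # "wall": still in the same cell; try another direction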
CommonCrawl
The initial value problem for a matrix Riccati differential equation associated with an $M$-matrix is known to have a global solution $X(t)$ on $[0, \infty)$ when $X(0)$ takes values from a suitable set of nonnegative matrices. It is also known, except for the critical case, that as $t$ goes to infinity $X(t)$ converges to the minimal nonnegative solution of the corresponding algebraic Riccati equation. In this paper we present a new approach for proving the convergence, which is based on the doubling procedure and is also valid for the critical case. The approach also provides a way for solving the initial value problem and a new doubling algorithm for computing the minimal nonnegative solution of the algebraic Riccati equation.
CommonCrawl
where $a_t$ is the control action at time $t$. To each state $x$, a known reward $r(x)$ is assigned. Actions have no cost. Our objective is to find the sequence of actions so as to maximize the expected reward at a given final time, $\mathbb E[r(x_T)]$. We do not care about rewards before the final time. I am confused about how to actually implement this (I'm trying to write the pseudocode). I don't know how to initialize the problem. In a lot of dynamic programming approaches they set the value function at the final time to be zero, but that doesn't seem to make any sense here. Question: How do I implement the dynamic programming equations above? My ideas so far: Since we know the reward function, perhaps we can initialize $V_T(x)$ to $r(x)$ for every possible $x$? Then using the value functions $V_t(x)$, $t<T$, we can step back all the way to $V_0(x)$. Doing this for every possible $x$ and keeping track of the optimizers will give us a table of optimal actions $a$ for any state $x$ that we may encounter. Is this correct?
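Here is a sketch of the backward recursion described in the ideas above, for a finite state/action space with known transition probabilities: initialize $V_T(x) = r(x)$ and step backwards, which is exactly the proposed scheme. The array names ($P$, $r$) and the toy numbers are mine, not from the original setup.

import numpy as np

def backward_induction(P, r, T):
    """P: transition probabilities of shape (A, S, S); r: rewards of shape (S,); T: horizon."""
    A, S, _ = P.shape
    V = r.copy()                        # V_T(x) = r(x)
    policy = np.zeros((T, S), dtype=int)
    for t in range(T - 1, -1, -1):      # step back from T-1 to 0
        Q = P @ V                       # Q[a, x] = sum_x' P[a, x, x'] * V(x')
        policy[t] = Q.argmax(axis=0)    # best action for each state at time t
        V = Q.max(axis=0)               # V_t(x) = max_a E[ V_{t+1}(x_{t+1}) ]
    return V, policy                    # V is V_0; policy[t][x] is the action table

# toy example: 2 actions, 3 states, reward only in the last state
P = np.array([[[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]])
r = np.array([0.0, 0.0, 1.0])
V0, policy = backward_induction(P, r, T=5)
print(V0, policy[0])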
CommonCrawl