The general thrust equation is

$$F = \dot m_\mathrm e v_\mathrm e - \dot m_0 v_0 + (p_\mathrm e - p_0) A_\mathrm e$$

where $\mathrm e$ and $0$ indicate nozzle exit and free stream, $\dot m$ is mass flow rate, $v$ is velocity, $p$ is pressure, $A_\mathrm e$ is the nozzle exit area, and $F$ is thrust, the force on the vehicle. There are three terms on the right side, and as far as I understand it, the derivation of the Tsiolkovsky rocket equation in a vacuum uses only the first term. If you had to explain the dropping of the 2nd and 3rd terms in a way that could be understood and believed by beginners to rocket science (like me) but without handwaving, "take my word for it"-ing, "go look it up"-ing, or "go google it"-ing, what would you say while holding the chalk and crossing out each of the last two terms?

Check out the diagram at the top of the page that you got the equation from. The incoming momentum term is important for jet engines because the engine swallows the incoming stream and then accelerates it. It is not important for rocket engines because they don't do that. If you dropped the incoming momentum term for a jet engine, you could attach an empty pipe to your airplane and calculate a nice thrust coming out of it! But we know that calculation would be incorrect. To get thrust from your jet engine, it must increase the velocity of the incoming stream. The difference between the inlet and exit velocities gives the thrust.

The pressure thrust term should not be dropped for rocket (or jet) engines. It simply goes to zero when the pressure difference is zero (exit-plane pressure matches ambient). Ideally, if you could design the nozzle to match the exhaust pressure in a vacuum (i.e. nearly zero), the third term drops automatically. If $p_0$ is zero, then $p_\mathrm e$ would have to go to zero as well, because an ideally designed nozzle results in no pressure drag (ambient freestream pressure and exhaust pressure are the same). In reality, such a nozzle could never be built because it would be infinitely long (it takes an infinite length to drop the exhaust pressure to an infinitely small pressure such as a vacuum). But real nozzles are designed to bring that exhaust pressure as close to ambient as possible, given the constraints on length, while still accelerating the exhaust gas as fast as possible. If I recall correctly, nozzles tend to be optimized for ambient pressure close to the launch site on the surface of the Earth (because it's so hard to lift off), so it makes sense that these terms tend to be included when discussing rocket design. Also, the freestream velocity in a vacuum would be zero, which would drop the second term. Although, technically speaking, freestream velocity is not really well defined there: in a vacuum, there is no freestream of anything to begin with, so you can neglect it. The general thrust equation applies more to the case where a fluid (i.e. air) is present. In a vacuum, those terms just don't make any sense.

Edit: I dug a little deeper into the meaning of these equations and found that the second term is called the ram drag, which only applies to air-breathing engines like jets. It has to be dropped for rocket engines because they carry their own fuel/oxidizers; they don't take air into the engine as part of the combustion process. So the second term can be interpreted in terms of the mass flow rate of intake air, and that flow rate would of course be zero in a vacuum.
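To see the sizes involved, here is a minimal R sketch of the thrust equation above; the engine numbers are made up purely for illustration, not taken from any real engine:

# General thrust equation: F = mdot_e*v_e - mdot_0*v_0 + (p_e - p_0)*A_e
thrust <- function(mdot_e, v_e, mdot_0, v_0, p_e, p_0, A_e) {
  momentum <- mdot_e * v_e        # exhaust momentum flux
  ram_drag <- mdot_0 * v_0        # incoming-stream momentum (zero for rockets)
  pressure <- (p_e - p_0) * A_e   # pressure thrust
  momentum - ram_drag + pressure
}

# Hypothetical rocket: 250 kg/s exhaust at 3000 m/s, 0.8 m^2 exit area,
# exit-plane pressure 40 kPa. The ram drag term is zero in both cases,
# since a rocket swallows no incoming stream.
thrust(250, 3000, 0, 0, 40e3, 101325, 0.8)  # sea level: pressure term negative
thrust(250, 3000, 0, 0, 40e3, 0, 0.8)       # vacuum: pressure term adds p_e * A_e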
CommonCrawl
When do you use a chi-squared test? The chi-squared test is a statistical test that you can use to test whether there are differences across groups of categorical data. There are two general settings where the chi-squared test is appropriate. In the first setting, you are interested in knowing whether two categorical variables are related. In the second setting, you only have one categorical variable and a specific hypothesis about the distribution of this variable across the categories.

Example 1. Do males and females differ in their propensity to like videogames? Here, gender is the first categorical variable, and liking videogames (yes or no) is the second.

Example 2. Do the three products I currently manufacture malfunction at different rates? In this example, product is the first categorical variable and malfunctioning (yes or no) is the second.

Example 3. Your boss thinks that 10% of your website traffic occurs each day between Monday and Friday and that Saturday and Sunday receive 25% of your traffic per day. You are skeptical about this claim and want to test it using data on website traffic from last week.

Example 4. Your clothing supplier only stocks green, red, and blue shirts and claims that 20% of the shirts are green, 30% are blue, and 50% are red. He sends you a random selection of 100 shirts and you want to test whether this claim is true using these shirts.

For the first two examples, we begin by hypothesizing no relationship between the two variables (this is called our Null Hypothesis). In the first example the null hypothesis is that males and females are equally likely to like videogames, and in the second example the null hypothesis is that all three products have the same failure rate. Using the chi-squared test, you will be able to say whether there is evidence in favour of the null hypothesis or against it.

For the third and fourth examples, the null hypothesis is that the stated distributions are correct. That is, in the third example the null hypothesis is that your website traffic occurs according to the way your boss told you, and in the fourth example the null hypothesis is that you received shirts in the proportions specified by your supplier. Again, we can use the chi-squared test to say whether there is evidence supporting the null hypothesis or against it.

As with all hypothesis testing, the first step is to create a null hypothesis that corresponds to no difference/relationship between the groups. If the null hypothesis is indeed true, then what is our best guess of what the data would look like? For the product example, the null hypothesis implies that the failure rates across products are the same, and our best guess at the common failure rate is the failure rate obtained by combining all the data across the groups (which is a weighted average). The next step is to create the table of counts we'd expect under this common failure rate and see how far off the actual data is from this table. If the observed cell counts are very different from the expected ones, this provides more evidence against the null hypothesis and our test statistic becomes large. On the other hand, if the numbers in the cells are very similar, this translates into little evidence against our null hypothesis.

Let's see the test in action. Now here is where the magic takes place! We need to compare, for each cell in the table, the number of products that were expected to fail (the expected frequencies) to the number of products that actually did fail (the observed frequencies). The chi-squared statistic is

$$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}$$
where $E_i$ denotes the expected frequency for cell $i$, and $O_i$ denotes the observed frequency for cell $i$. The larger the $\chi^2$ value, the more evidence there is against the null hypothesis.

So what does it all mean? Remember, we take the difference between the expected and observed frequencies to begin to quantify how different each observed frequency is from what we would expect under the null hypothesis. The reason we take the square of these differences is that we will eventually be summing up all the differences, and we need to give equal weight to a negative difference (e.g., $20-25=-5$) as to a positive difference ($30-25=5$), since both of these differences have the same absolute magnitude. We then divide this squared difference by the expected frequency so that a difference of 5 units counts as a big difference when the expected frequency is 10, but the same absolute difference counts as a small relative difference when the expected frequency is 1000.

This table shows each cell's contribution to the chi-squared statistic. The cells with the highest contribution to the statistic had observed values that were the most different from what we'd expect under the null hypothesis.

The last step is to determine whether the calculated chi-squared value of 2.64 is large enough to provide evidence against the null hypothesis. To do that, we compare this value to a statistical distribution known as the chi-squared distribution. We make this comparison because, if the null hypothesis is true, the observed test statistic, 2.64, should behave like a random draw from a chi-squared distribution, so it is reasonable to ask how often a random draw from the distribution would be at least as large as 2.64. The chi-squared distribution also requires that you provide the corresponding "degrees of freedom". For a two-way table this is equal to $(r-1)\times(c-1)$, where $r$ is the number of rows in the table and $c$ is the number of columns. For this example (three products, two outcomes) $r=3$ and $c=2$, so that $df=(3-1)\times(2-1)=2$.

We can ask Chart Studio what the chance is of observing a chi-squared statistic larger than or equal to 2.64 in the situation where there is no difference between the products. This probability (known as the p-value) is 26.7%, meaning that it is likely that you would observe this data under the null hypothesis. Thus, we would say that there is no evidence against the null hypothesis that the failure rates between the products are the same. If the probability had been much smaller, say 5% or 10%, then there would have been more evidence against the null hypothesis. By convention, people use cut-offs of 1%, 5%, or 10% to denote enough evidence to reject the null hypothesis in favour of the alternative hypothesis. For example, if we had calculated a probability of 4.3% (rather than 26.7%), we could have concluded that there is enough evidence against the null hypothesis, and evidence supporting the alternative hypothesis that the failure rates between the three products differ.

Turning to the website-traffic example (Example 3), the same kind of contribution table shows that the numbers of visitors on Tuesday and Wednesday were the most discordant from what your boss expected. We're almost there - we just have to determine whether the calculated chi-squared value of 2408.14 is large enough to provide evidence against the null hypothesis. Remember that we also need the degrees of freedom in order to determine the corresponding p-value.
When there is only one categorical variable, the degrees of freedom is just equal to the number of categories minus 1. Here, $df=7-1=6$. When we ask Chart Studio the probability of observing a chi-squared statistic of 2408.14 or larger using 6 degrees of freedom, we find that there is less than a 0.01% chance! Thus, there is very strong evidence against your boss's hypothesis about website traffic. The chi-squared statistic won't perform well if you don't have enough data! One rule of thumb is that every cell should have an expected frequency of at least 5. The test might perform poorly if your data has many cells with small frequencies and in this case you should collect more data (if you can!) before performing the test.
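For readers who want to try this themselves, here is a minimal sketch in R; the counts below are made up for illustration, so the statistics will not match the 2.64 and 2408.14 values discussed above:

# Hypothetical 3x2 table: failures and non-failures for three products.
counts <- matrix(c(10, 190,   # product A: failed, ok
                   14, 186,   # product B
                   8,  192),  # product C
                 nrow = 3, byrow = TRUE,
                 dimnames = list(c("A", "B", "C"), c("failed", "ok")))
test <- chisq.test(counts)
test$expected   # expected frequencies under the null (check all >= 5)
test$statistic  # the chi-squared statistic, df = (3-1)*(2-1) = 2
test$p.value    # compare to your chosen cut-off (e.g., 0.05)

# One-variable case (Example 3): observed daily visits vs. the boss's claimed
# proportions (5 weekdays at 10% each, 2 weekend days at 25% each).
visits <- c(120, 95, 100, 110, 130, 260, 240)   # hypothetical counts, Mon-Sun
chisq.test(visits, p = c(rep(0.10, 5), 0.25, 0.25))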
CommonCrawl
We'll use this page to keep track of what has happened each day in class. It won't contain any of the nitty-gritty details, but will instead serve to summarize what has transpired each day. Monday, August 27: First day! The first few minutes of class were devoted to me attempting to learn names. I think I got them all! Next, I summarized what to expect from the course, toured the course webpage, and summarized a few items on the syllabus. With the time we had left, we discussed Problem 1 from the Problem Collection. ZK, NZ, and WS volunteered to discuss their approaches to Problem 1. Wednesday, August 29: We had a great second day! After fielding a few questions about the syllabus and reminding students about the day-to-day structure, we divided the class up into 8 small groups, each tasked with discussing one of the homework problems. We had ST/JT, ER/JS, JJ/AP, and RH present their proposed solutions to Problems 2, 3, 5(b), and 5(c), respectively. We will wrap up Problems 4 and 5(a) on Friday. Friday, August 31: We spent the first 10-15 minutes discussing growth vs fixed mindset, grit, and productive failure. Next, we had GM, HO, WC, and JM present Problems 5(a), 6, 7, and 4, respectively. I think we will revisit Problem 4 (Sunny Day Juice Stand) to make sure everyone is up to speed and to see alternate approaches. Monday, September 3: Labor Day! No classes. Wednesday, September 5: We kicked off with some discussion of the upcoming quiz and then I revisited Problem 4 (Sunny Day Juice Stand). Next we had CB, CC, and BC present Problems 8, 9, and 11. We ran out of time for a thorough discussion of Problem 10, so we will come back to that one next week. Friday, September 7: The students took Quiz 1. Monday, September 10: After a little chit chat, we split the class up into 8 small groups, where each group was tasked with coming to consensus on two of the assigned problems. After some time, we had AP, SW, RH, and JJ present Problems 10, 12, 13, and 14, respectively. We didn't quite have enough time to do Problem 14 justice, so we will revisit this one next time. Wednesday, September 12: After handing back Quiz 1, I revisited Problems 4 and 14. Next, we jumped into presentations. We had KW, KN, YF, JH, JJ, NZ, DH, SK, WC, and JM present Problems 15(a), 15(b), 15(c), 16, 17(a), 17(b), 17(c), 17(d), 17(e), and 18, respectively. Dang, we covered a lot. Friday, September 14: We had JH, KW, DH, CC, AP, and RH volunteer to present Problems 19(a), 19(b), 19(c), 20, 21, and 22, respectively. Monday, September 17: I'm not sure everyone was as entertained as I was today. I really enjoyed the conversation. We had CC/WC/YF, KN, ER/ZK, and HO/ZK present Problems 23, 24, 25, and 26, respectively. As expected, Problems 23 and 26 generated some passionate discussion. Wednesday, September 19: We devoted the first few minutes to making sure everyone was up to speed on the problems we were about to discuss. Then we briefly revisited Problem 26. Next, we split the class up into several small groups, each tasked with discussing two of the day's problems. After a few minutes, we had AS, YF, SK, and ST present Problems 29, 30(visual), 30(algebraic), and 27, respectively. With the few minutes we had left, I very quickly summarized Problem 28. Friday, September 21: The students took Quiz 2. Monday, September 24: We spent a few minutes revisiting Problem 28 and then we divided into small groups. We had SK/JJ, WC/YF, and CB present their proposed solutions to Problems 31, 32, and 33, respectively.
Wednesday, September 26: Lots of cool stuff happened today. We had CB, HO, BC/ST, and NZ present Problems 34(algebraic), 34(visual), 35, and 36, respectively. Typically, no one comes up with a solution for the visual proof for Problem 34, but HO pretty much had it. Also, rarely does anyone have a complete solution for Problem 36, but NZ nailed it. Friday, September 28: After handing back Quiz 2, we spent some time discussing solutions to Problems B1 and B2 from the quiz. Next, we had JS/WC and GC/ER present Problems 37 and 38, respectively. We ran out of time for Problem 39, so we will kick off with that one next time. Monday, October 1: We had GC/ZK/RH, JH, and HO/WC present Problems 39, 40, and 41, respectively. Wednesday, October 3: After revisiting Problem 39, NZ presented an alternative solution to Problem 41 and then we agreed that his solution could also be made to work for Problem 40. Next, we had JM and AS present Problems 42 and 43, respectively. With the few minutes we had left, we briefly discussed the next two problems. Friday, October 5: The students took Quiz 3. Monday, October 8: After some stories about mountain lions, bears, and lightning, we had AP/SK and JM present Problems 44 and 45, respectively. Wednesday, October 10: We had BC and KW/YF/CC present Problems 47 and 48. We didn't get to Problem 46, but we will come back to it on Friday. Friday, October 12: Another productive day. We had WS, JJ, BC, and JT present Problems 46, 49, 50, and 51, respectively. Monday, October 15: We divided the class up into six small groups and each group was tasked with writing up at least two of the problems that were due today. After about 15 minutes, we had YF/HO, CC, and AP present Problems 52, 53, and 54, respectively. Along the way, I presented an alternate solution to Problem 52 and discussed the two competing definitions of trapezoid. Wednesday, October 17: We had KN, ER, and CB present Problems 55, 56, and 57, respectively. The first two went fairly quickly and then we discussed Problem 57 for quite a while. I accidentally let the class go 10 minutes early. Oops. Friday, October 19: The students took Quiz 4. Monday, October 22: We had JS, JH, and DH present Problems 58, 59, and 60, respectively. All three problems were a team effort, but we got them done. Friday, October 26: The students divided themselves into small groups and then spent several minutes discussing their proposed solutions to Problems 64-66. Next, we had CC and RH present Problems 64 and 65, respectively. After this, GC presented alternate solutions to both Problems 64 and 65. The rest of the class period was devoted to losing our minds about Problem 66. We heard from KW, AP, and SK concerning Problem 66. We will spend a few minutes at the beginning of Monday's class revisiting Problem 66. Monday, October 29: After revisiting Problem 66, we had BC, CB, KW, and KN/YF present Problems 67(a), 67(b), 68, and 69, respectively. Wednesday, October 31: We had ST, JT, JM, BC, NZ, JJ, CB, WC, and DH present Problem 70(a), Problem 70(b), Problem 71($1\times 3$), Problem 71($1\times 4$), Problem 71($1\times 5$), Problem 71($2\times 2$), Problem 71($2\times 3$), Problem 71($3\times 3$), and Problem 72, respectively. Friday, November 2: The students took Quiz 5. Monday, November 5: After dividing the class up into several small groups, we had ER, AS, and YF present Problems 73, 74, and 75, respectively. I was impressed with the quality of all three arguments.
Wednesday, November 7: We split the class up into several groups and each group tried to come to consensus on as many of the homework problems as possible. We had KW/AP/JJ, KN/RH/JS, and BC/ST present Problems 76, 77, and 78, respectively. Friday, November 9: We started with JJ showing us an alternative approach to Problem 78. After that, we had JS, CC, CB, GC, and JH present Problems 79(a), 79(b), 79(c), 80, and 81, respectively. Monday, November 12: No class due to Veterans Day. Wednesday, November 14: As we've been doing a lot lately, we split the class up into several small groups, where each group was tasked with coming to consensus about solutions for Problems 82-84. We had CB and RH present Problems 82 and 84, respectively. It appeared no one made much progress on Problem 83, so I sketched the argument at the end of class. Friday, November 16: The students took Quiz 6. Monday, November 19: After splitting up into groups, we had DH and JM present Problems 86 and 87, respectively. This was followed by a short discussion of the connection of these problems to the Monty Hall Problem. We wrapped up with a presentation by SK of Problem 85. Wednesday, November 21: Attendance wasn't too bad considering it was the day before Thanksgiving. We had JJ and YF/JJ/AP present Problems 88 and 89, respectively. We also got a start on Problem 90, but didn't quite wrap it up. Friday, November 23: No class due to Thanksgiving break. Monday, November 26: After revisiting Problem 90, we had BC present his proposed solution to Problem 91. We also had JS and RH each present their proposed solutions to a modified version of Problem 91. Wednesday, November 28: We had lots of good discussion today, but didn't cover much ground. We had ER and HO/WS/WC present their proposed solutions to Problems 92 and 93. We made good progress on a system for finding a solution for Problem 92, but we sort of fizzled out towards the end. We ended up finding a function that satisfied the constraints, but we came up with it by guessing. There was lots of good discussion about Problem 93 (12 coins), but things got a little chaotic towards the end. We will revisit both of these next week if we have some spare time. Friday, November 30: The students took Quiz 7. Monday, December 3: After discussing the basics of induction, we revisited Problem 93 (which has nothing to do with induction). Next, we had ER present Problem 97, which was followed by some short presentations of the $n=2, 3, 4$ cases for Problem 96 by JH, AP, and SK, respectively. Wednesday, December 5: We started off with Problem 95. HO showed how we could circumnavigate my outline and get to the desired conclusion via a much easier approach! I had been hoping to find an easier way to do that problem for a few years now. Yay! Next, AS and ZK did a really nice job showcasing induction by presenting Problems 98 and 99, respectively. Friday, December 7: Last day! I'll miss this group of students. After some discussion of all the things we've accomplished this semester, we divided up into groups of size 2-4 and spread around the room to discuss Problems 100, 101, and 102. We had KN and JT/WS present Problems 100 and 102, respectively. I spent the final few minutes wrapping up Problem 102. Unfortunately, we ran out of time to discuss Problem 101.
CommonCrawl
For questions related to permutations, which can be viewed as re-orderings of a collection of objects. Questions carrying this tag include:

Five-digit numbers such that the sum of digits divided by 4 leaves remainder 2.
Odds of two runners ending up with the same average rank across multiple races?
A knight is placed in a corner of an $8\times8$ chessboard. In how many different ways can this knight reach the diagonally opposite corner if it cannot visit the same cell more than once?
Normal subgroup with index that divides $n!$
How many bytes contain exactly two 1s?
Determining the power of a permutation matrix of order $N\times N$ needed to get the identity matrix.
Why is the permutohedron simple?
Possible ways to sort the elements of a set so that all elements of type x are next to each other.
A concrete example to show that every permutation representation is reducible.
There are 6 letters, of which 3 are consonants and 3 are vowels. How many different words can be formed in which neither two consonants nor two vowels come next to each other? The options are $70$, $210$, $560$ and $580$.
CommonCrawl
Given sunrise, noon, sunset, longitude and latitude, can I calculate the "height" or "ascension" of the Sun? I'm trying to make a simple app for my own use to be able to check which of my favorite restaurants have sun on their outdoor seating, and for how long, etc. Given the data in the title (sunrise, noon, sunset, longitude and latitude) I can draw a path where the Sun should be at any given time, but I want to display the height as well. I live in Sweden, where the Sun's height angle (sorry, I don't know the exact term for it) changes a lot, which makes the angle relevant when deciding on seating among high buildings. My end goal is to be able to use Google Maps 3D data to actually cast shadows on the map in a realistic way!

The altitude of the Sun can be computed from

$$\sin h = \sin\phi \sin\delta + \cos\phi \cos\delta \cos H$$

where $\phi$ is your latitude and:

H is the hour angle, which is the number of degrees before (negative angle) or after (positive) solar noon. Calculate this as 15*1.00274*(time of day - time of noon).

$\delta$ is the declination of the Sun. This can be calculated from the equation above by assuming the altitude of the Sun h=0 when it is rising or setting, and H=15*1.00274*(time from sunrise to solar noon). The assumption here is that the time from solar noon to sunset is the same.

h is the altitude of the Sun above the horizon.

All of the above quantities are in degrees. You may need to convert them to radians to perform the trigonometry calculations. For a more accurate calculation, you would calculate the position of the Sun (its right ascension $\alpha$ and declination $\delta$) and Greenwich Mean Sidereal Time (GMST) based on the date and time throughout the day. Then, using the latitude and longitude, you would calculate H, A (the azimuth), and h for various times of the day.
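A minimal R sketch of the approximate altitude calculation described above (it assumes you have already derived the declination $\delta$ and the time of solar noon; the function names are my own):

deg2rad <- function(d) d * pi / 180
rad2deg <- function(r) r * 180 / pi

# Altitude of the Sun (degrees) from latitude, declination, and time.
# hours_from_noon: time of day minus time of solar noon, in hours.
sun_altitude <- function(lat, decl, hours_from_noon) {
  H <- 15 * 1.00274 * hours_from_noon      # hour angle, degrees
  sin_h <- sin(deg2rad(lat)) * sin(deg2rad(decl)) +
           cos(deg2rad(lat)) * cos(deg2rad(decl)) * cos(deg2rad(H))
  rad2deg(asin(sin_h))
}

# Example: Stockholm (lat ~59.3), summer-solstice declination ~23.4,
# two hours after solar noon. Gives roughly 48 degrees.
sun_altitude(59.3, 23.4, 2)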
CommonCrawl
If you walk through a big city and try to find your way around, you might try asking people for directions. However, asking $n$ people for directions might result in $n$ different sets of directions. But you believe in the law of averages: if you consider everyone's advice, then you will have a good idea of where to go by computing the average destination that they all lead to. You would also like to know how far off the worst directions were. You compute this as the maximum straight-line distance between each set of directions' destination and the averaged destination.

Each instruction is one of:

'start $\alpha$', where $\alpha$ is the initial direction you are facing in degrees (east is 0 degrees, north is 90 degrees).
'turn $\alpha$', where $\alpha$ is an angle in degrees you should turn. A positive $\alpha$ indicates a turn to the left.
'walk $x$', where $x$ is a number of units to walk.

The 'start' instruction is always the first instruction, and only occurs at the beginning. Each person's directions contain at most $25$ instructions. All numeric inputs are real numbers in the range $[-1000, 1000]$ with at most four digits past the decimal. Input ends when $n$ is zero. For each test case, print a line with the $x$ and $y$ coordinates of the average destination, followed by the distance between the worst directions' destination and the averaged destination. Answers should be accurate to within $0.01$ units.
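The problem statement above does not include a reference solution; a minimal R sketch of the simulation it describes (with input parsing reduced to in-memory instruction lists) might look like:

# Follow one person's directions; returns the (x, y) destination.
follow <- function(instructions) {
  x <- 0; y <- 0; heading <- 0
  for (ins in instructions) {
    parts <- strsplit(ins, " ")[[1]]
    val <- as.numeric(parts[2])
    if (parts[1] == "start") heading <- val
    else if (parts[1] == "turn") heading <- heading + val   # positive = left
    else if (parts[1] == "walk") {
      x <- x + val * cos(heading * pi / 180)
      y <- y + val * sin(heading * pi / 180)
    }
  }
  c(x, y)
}

# Example with two hypothetical sets of directions (both end at (10, 5)).
people <- list(c("start 0", "walk 10", "turn 90", "walk 5"),
               c("start 90", "walk 5", "turn -90", "walk 10"))
dests <- t(sapply(people, follow))
avg <- colMeans(dests)
worst <- max(sqrt(rowSums(sweep(dests, 2, avg)^2)))
cat(sprintf("%.2f %.2f %.2f\n", avg[1], avg[2], worst))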
CommonCrawl
$X$ is a matrix with $m$ rows and $n$ columns, that is, an $m \times n$ matrix, representing the training set. $\theta$ is a $1 \times n$ vector of hypothesis parameters. $y$ is an $m \times 1$ vector of the actual target values of the training set. $\alpha$ is the learning rate, which defines the learning (descent) speed. $S(X_j)$ denotes the standard deviation of feature $j$ in the training set.

The steps:
Write down the hypothesis for a pattern.
Calculate the cost for a single training point.
Write down the cost function over the whole training set.
Learn from the training set to get optimized parameters for the proposed algorithm.

The normal-equation approach is convenient, but its performance becomes bad once $m$ grows larger than 100,000, and it is unable to handle a non-invertible matrix. Use feature scaling to normalize the training set and make gradient descent converge much faster (see the sketch below).
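The notes above don't include the code itself, so here is a minimal R sketch of feature scaling plus batch gradient descent, consistent with the notation above; the synthetic data and function names are my own illustration:

# Feature scaling: subtract the mean and divide by S(X_j), the standard
# deviation of feature j, so that gradient descent converges much faster.
scale_features <- function(X) {
  mu <- colMeans(X)
  s  <- apply(X, 2, sd)
  sweep(sweep(X, 2, mu), 2, s, "/")
}

# Batch gradient descent for the hypothesis h(x) = X %*% theta with
# squared-error cost; alpha is the learning rate.
gradient_descent <- function(X, y, alpha = 0.1, iters = 1000) {
  m <- nrow(X)
  theta <- rep(0, ncol(X))
  for (i in 1:iters) {
    grad <- t(X) %*% (X %*% theta - y) / m
    theta <- theta - alpha * grad
  }
  theta
}

# Tiny synthetic example: y = 3 + 2*x plus noise. Note the learned
# parameters live in the scaled feature space, not the original one.
set.seed(1)
x <- runif(100, 0, 10)
X <- cbind(1, scale_features(matrix(x, ncol = 1)))  # intercept + scaled x
y <- 3 + 2 * x + rnorm(100, sd = 0.1)
gradient_descent(X, y)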
CommonCrawl
Abstract: After developing the basic theory of locally cartesian localizations of presentable locally cartesian closed infinity-categories, we establish the representability of equivalences and show that univalent families, in the sense of Voevodsky, form a poset isomorphic to the poset of bounded local classes, in the sense of Lurie. It follows that every infinity-topos has a hierarchy of "universal" univalent families, indexed by regular cardinals, and that n-topoi have univalent families classifying (n-2)-truncated maps. We show that univalent families are preserved (and detected) by right adjoints to locally cartesian localizations, and use this to exhibit certain canonical univalent families in infinity-quasitopoi (certain infinity-categories of "separated presheaves", introduced here). We also exhibit some more exotic examples of univalent families, illustrating that a univalent family in an n-topos need not be (n-2)-truncated, as well as some univalent families in the Morel--Voevodsky infinity-category of motivic spaces, an instance of a locally cartesian closed infinity-category which is not an n-topos for any $0\leq n\leq\infty$. Lastly, we show that any presentable locally cartesian closed infinity-category is modeled by a combinatorial type-theoretic model category, and conversely that the infinity-category underlying a combinatorial type-theoretic model category is presentable and locally cartesian closed. Under this correspondence, univalent families in presentable locally cartesian closed infinity-categories correspond to univalent fibrations in combinatorial type-theoretic model categories.
CommonCrawl
About Us: Pacific Solutions is an IT department for small businesses and personal computer users. We build systems, provide repair services for both desktops and notebooks, and provide onsite support services for wired and wireless networks. By providing direct support from server to desktop, Pacific Solutions finds the answers to your problems and keeps your computer network online and running. Our hardware technicians live and breathe hardware. As Microsoft Certified Resellers, our techs receive regular training on the tools people use every day. In addition, our technicians receive training from a number of hardware manufacturers including Intel, AMD, Linksys, Netgear, and Sonicwall. Pacific Solutions has been helping customers in Portland, Oregon since 1997.
CommonCrawl
The key observation is that mountain $i$ is occluded by mountain $j$ (i.e. its peak lies on or below the outline of mountain $j$) if and only if $x_i - y_i \geq x_j - y_j$ and $x_i + y_i \leq x_j + y_j$: that is, the base of mountain $i$ (the interval $[x_i-y_i, x_i + y_i]$) is contained in the base of mountain $j$. First suppose for simplicity that every $x_i - y_i$ is distinct. Then if we sort the mountains in increasing order by $x_i - y_i$, a mountain is visible if and only if for every previous mountain $j$, the inequality $x_j + y_j < x_i + y_i$ holds (and occluded otherwise). This is because the previous mountains are exactly the mountains $j$ for which $x_j - y_j < x_i - y_i$. So as we sweep through the sorted list of mountains, we can keep track of the largest value of $x_j + y_j$ seen so far, and use this to determine whether each new mountain in the list is occluded or visible. The same idea works even if not all $x_i - y_i$ are distinct, but we need to be careful about how we break ties when sorting. For two mountains $i$ and $j$ with $x_i - y_i = x_j - y_j$ and, say, $x_i + y_i < x_j + y_j$, we want mountain $j$ to appear before mountain $i$ in the sorted list, since $i$ is occluded by $j$ but not vice versa.
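A short R sketch of this sweep (the function name and example are my own illustration; x and y are the vectors of peak coordinates):

# Returns a logical vector: TRUE if mountain i is visible (not occluded).
visible_mountains <- function(x, y) {
  left  <- x - y                 # left end of the base
  right <- x + y                 # right end of the base
  # Sort by left end ascending; break ties by right end descending so that,
  # among mountains with equal left ends, the wider one comes first.
  ord <- order(left, -right)
  vis <- logical(length(x))
  best_right <- -Inf
  for (i in ord) {
    if (right[i] > best_right) {   # strictly extends the sweep: visible
      vis[i] <- TRUE
      best_right <- right[i]
    }
  }
  vis
}

# Example: the second mountain's base [1, 3] sits inside the first's [0, 4].
visible_mountains(x = c(2, 2), y = c(2, 1))   # TRUE FALSE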
CommonCrawl
Interactive graphics is an emerging area within R. There are many libraries available for making interactive visualizations, however most of these libraries are still quite new. In this sub-module we will give a brief overview of shiny, a web application framework within R for building interactive web pages. Using shiny we will build a simple application to display our data using reactive data sets and ggplot. The shiny package is available on cran and is fairly easy to install using install.packages(). Go ahead and install and load the package. The package comes with 11 example apps that can be viewed using the runExample() function; we will be building our own app from scratch, but feel free to try out a few of these examples to get a feel for what shiny can do. Shiny also provides a nice gallery of example applications and even a genomics example plotting cancer genomics data in a circos-style application. What shiny is actually doing here is converting the R code to html pages and serving those on a random port using the IP address 127.0.0.1, which is localhost on most computers. In simplified terms, these html pages are being hosted by your own computer. If you are in Rstudio your web application should have been opened automatically, however you can also view these with any modern web browser by going to the web address listed after calling runExample(). It should look something like this: http://127.0.0.1:4379. After checking it out, use the escape key to stop the shiny app. The basic code to run any shiny app is split into two parts: the server (e.g., server.R) and user interface (e.g., ui.R). The server script is the back end of our shiny web app and contains the instructions to build the app. The user interface script is the front end and is essentially what a user views and interacts with. Both of these files should be in the same directory for the app to work properly. Go ahead and make a folder for our shiny app called "testApp". Next create the following two scripts there: ui.R and server.R. This is the bare minimum for a shiny app and will generate an empty web application. To view/test your app simply type the runApp(port=7777) command in your R/Rstudio terminal. For convenience in this tutorial, we have selected a specific port instead of letting shiny choose one randomly. Make sure that your current working directory in R is set to the top level of "testApp" where you put server.R and ui.R. You can use getwd() and setwd() to print and set this respectively. If successful, Rstudio will display a new window with your application running. Alternatively you can view your app in a web browser at http://127.0.0.1:7777. So far, all you should see is an empty page. Now that we've got a basic framework up, let's go ahead and load some data and answer a few questions. The data we will use is supplemental table 6 from the paper "Comprehensive genomic analysis reveals FLT3 activation and a therapeutic strategy for a patient with relapsed adult B-lymphoblastic leukemia.". The data contains variant allele frequency (VAF) values from a targeted capture sequencing study of an adult AML patient with 11 samples of various cell populations and timepoints. You can download the table here. For simplicity, make a "data" directory in your app and place the data file there. We can load this data into shiny as you would any other data in R. Just be sure to do this in the server.R script and place the code within the unnamed function.
Add the following to your server.R script to make the data available within the shiny server. Now that we have data, let's make a quick plot showing the distribution of VAF for the normal skin sample (Skin_d42_I_vaf) in comparison to the initial tumor marrow core sample (MC_d0_clot_A_vaf) and send it to the app's user interface. We'll need to first create the plot on the back end (i.e. server.R). We can use any graphics library for this, but here we use ggplot2. In order to be compatible with the shiny UI we call a Render function, in this case renderPlot(), which takes an expression (i.e. a set of instructions) and produces a plot. The curly braces in renderPlot() just contain the expression used to create the plot and are useful if the expression takes up more than one line. renderPlot() will do some minimal pre-processing of the object returned in the expression and store it in the list-like "output" object. Notice that in the ui.R file we have added a mainPanel() which, as it sounds, is instructing the app to create a main panel on the user interface. Now that we have somewhere to display our plot, we can link what was created on the back end to the front end. This is done with the Output family of functions; in this case our output is a plot generated by renderPlot() and is stored in the list-like output object as "scatterplot", created in the server.R file. We use plotOutput() to provide this link to the front end and give the output ID, which is just the name of the object stored in the output list. Note that when providing this link, the type of object created with a Render function must correspond to the Output function; in this example we use renderPlot() and plotOutput(), but other functions exist for other data types, such as renderText() and textOutput(). Once again, to view/test your app simply type the runApp(port=7777) command in your R/Rstudio terminal and go to http://127.0.0.1:7777. This should happen automatically from Rstudio. If your previous app is still running you may need to stop and restart it and/or refresh your browser. You should now see a ggplot graphic in your browser (see below). But, so far, nothing is interactive about this plot. We will allow some basic user input and interactivity in the next section. Now, using what we've learned so far, try to add some text to our web app by passing it from the back end to the front end. When you've completed the above exercise, try to answer a few of the questions below. By passing the text from the back end we have the ability to make the text reactive, i.e. it could change based on what the web app is displaying. If you did not care whether the text was reactive, what could you do? Try adding some text by only modifying the ui.R file. Now that we know how to link output from the back end to the front end, let's do the opposite and link user input from the front end to the back end. Essentially this is giving the user control to manipulate user interface objects. Specifically, let's allow the user to choose which sample Variant Allele Fraction (VAF) columns in the data set to plot on the x and y axes of our scatter plot. Let's start with the ui.R file. Below, we have added the sidebarLayout() schema, which will create a layout with a side bar and a main panel. Within this layout we define a sidebarPanel() and a mainPanel(). Within the sidebarPanel() we define two drop-down selectors with selectInput(). Importantly, within these functions we assign an inputId, which is what will be passed to the back end.
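The tutorial's original listings are not reproduced in this excerpt; a minimal sketch of what the two files could look like at this stage, consistent with the ids and column names mentioned above, is shown here (the data file name is a placeholder for whatever you named the downloaded supplemental table; the server side is explained in the next paragraph):

## server.R
library(shiny)
library(ggplot2)

# Hypothetical file name; use the name you saved the supplemental table under.
vafData <- read.delim("data/supplemental_table_6.tsv", header = TRUE)

shinyServer(function(input, output) {
  output$scatterplot <- renderPlot({
    # aes_string() because the input values arrive as character strings
    ggplot(vafData, aes_string(x = input$x_axis, y = input$y_axis)) +
      geom_point()
  })
})

## ui.R
library(shiny)

shinyUI(fluidPage(
  sidebarLayout(
    sidebarPanel(
      selectInput(inputId = "x_axis", label = "x-axis sample:",
                  choices = c("Skin_d42_I_vaf", "MC_d0_clot_A_vaf")),
      selectInput(inputId = "y_axis", label = "y-axis sample:",
                  choices = c("MC_d0_clot_A_vaf", "Skin_d42_I_vaf"))
    ),
    mainPanel(plotOutput(outputId = "scatterplot"))
  )
))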
On the back end side (server.R), we've already talked about output within the unnamed function; a second argument exists called "input". This is the argument used to communicate from the front end to the back end, and in our case it holds the information passed from each selectInput() call with the ids "x_axis" and "y_axis". To make our plot reactively change based on this input we simply call up this information within the ggplot call. You might have noticed that we are using aes_string() instead of aes(). This is only necessary because "input$x_axis" and "input$y_axis" are passed as strings, and as such we need to let ggplot know this so the non-standard evaluation typically used with aes() is not performed. Once again, to view/test your app simply type the runApp(port=7777) command in your R/Rstudio terminal and go to http://127.0.0.1:7777. This should happen automatically from Rstudio. If your previous app is still running you may need to stop and restart it and/or simply refresh your browser. You should now see a ggplot scatterplot graphic in your browser (see below) as before. But now you should also see user-activated drop-down menus that allow you to select which data to plot and visualize. You have created your first interactive shiny application! We have given a very quick overview of shiny, and have really only scratched the surface of what shiny can be used for. Using the knowledge we have already learned, however, let's try modifying our existing shiny app. You will want to use textInput() within the ui.R file for this and then link the input to the ggplot call. To make your new shiny app accessible on the web you have several options. The simplest is to just sign up for an account at www.shinyapps.io. Once you sign up, shinyapps.io will walk you through the process of installing (STEP 1) and authorizing (STEP 2) the rsconnect library (see below). Alternatively, simply select the 'Publish' button in the top-right of a running Shiny App from Rstudio (see below). Either process should create an app at https://[your_account].shinyapps.io/[yourApp]/ using the name for the account you created at shinyapps.io and the name you set for your App during the publication process. However, the free shinyapps.io account is limited to 5 applications and 25 active hours of runtime (any time your application is not idle). Upgrading to a pay account will increase the allowed number of applications and active hours, and add options for authentication. For a longer-term, do-it-yourself, possibly cheaper solution, you will need a web server with the separate Shiny Server Open Source software running on it, along with your Shiny App. There are many ways you could set this up. One option would be to do something like the following: (1) Start an Ubuntu Linux Amazon AWS instance; (2) Login to your AWS Linux box; (3) Install R, the shiny R library, and any other R libraries that your shiny app needs (e.g., ggplot2, rmarkdown, etc); (4) Install and start the shiny-server; (5) Copy your shiny application files (R and Rda files) to the shiny-server folder on your Linux server. (6) In a browser, navigate to the public IP address of the Linux server. Detailed instructions are available on this blog post. Unfortunately, for authentication (password protection support) you will need to upgrade to the pay version - Shiny Server Pro.
CommonCrawl
In the following definition, I am denoting the condition that $a$ is not an integral multiple of $b$. Why is there such a big space between $b$ and the vertical bar with a slash through it? Here $q$ and $r$ denote the quotient and remainder, respectively, in the division of $a$ by $b$. If $r \neq 0$, $a$ is not divisible by $b$. The indivisibility of $a$ by $b$ is denoted by \boldmath$b \not\vert a$\unboldmath.

Since "divides" is a relation, the correct spacing is given by \mathrel, which is the default for \mid. The negated relation, as suggested in the comments by @egreg, is given by the command \nmid. Notice that the spacing is identical to \mid. \nmid requires the amssymb package.
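A minimal compilable example of the accepted fix (assuming nothing beyond standard LaTeX plus amssymb):

\documentclass{article}
\usepackage{amssymb} % provides \nmid

\begin{document}
% \mid and \nmid are spaced as relations, unlike b \not\vert a:
If $r \neq 0$, then $b \nmid a$; otherwise $b \mid a$.
\end{document}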
CommonCrawl
As part of my somewhat regular programming practice, I've recently looked at the maximum subarray problem. Basically, the problem goes like this: given an array of $N$ integers, find the greatest sum of contiguous elements. According to Wikipedia, the problem was first posed in 1977, with a solution of $O(N \log N)$. A linear-time algorithm was proposed soon thereafter, in 1984. The goal for the practice exercise was to come up with a linear-time algorithm, from scratch. Why is it $O(N^3)$? Well, the outer loop gets executed for exactly $N$ elements. The inner loop gets executed for $N-1$ elements the first time, $N-2$ elements the second time, and so on. The inner loop also contains a sum that takes linear time to calculate. The overall complexity of the solution is therefore $O(N \times N \times N)$. Obviously, we can do better than this. In particular, instead of calculating the sum for each iteration of the inner loop, we can build it up incrementally. Since we've replaced the linear-time sum calculation with a pair of constant-time operations, the complexity of this solution is now $O(N^2)$. It's still not linear, but it's better than what we started with. The main difference with my solution is what happens once the current sum becomes negative. My solution shrinks the array one by one, in an attempt to make the sum positive again. This attempt is in vain, since removing an element from the subarray will only increase the sum if the removed element was negative. If the element is positive, then removing it will actually decrease the sum, which hardly helps. Kadane's algorithm realizes this and resets the sum to zero, effectively discarding the subarray completely. For a more detailed write-up, see Programming Pearls: Algorithm Design Techniques by Jon Bentley et al. The article is more than 30 years old, but worth the read if you've got access to it.
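The post's code isn't reproduced in this excerpt, so here are minimal R sketches of the two main versions discussed above: the incremental $O(N^2)$ loop and Kadane's linear-time algorithm.

# O(N^2): build each subarray sum incrementally instead of recomputing it.
max_subarray_quadratic <- function(a) {
  best <- -Inf
  for (i in seq_along(a)) {
    s <- 0
    for (j in i:length(a)) {
      s <- s + a[j]
      best <- max(best, s)
    }
  }
  best
}

# O(N), Kadane: once the running sum goes negative, discard the prefix
# entirely (reset to zero) rather than shrinking it element by element.
max_subarray_kadane <- function(a) {
  best <- -Inf
  cur <- 0
  for (v in a) {
    cur <- cur + v
    best <- max(best, cur)
    if (cur < 0) cur <- 0
  }
  best
}

a <- c(-2, 1, -3, 4, -1, 2, 1, -5, 4)
max_subarray_quadratic(a)  # 6 (the subarray 4, -1, 2, 1)
max_subarray_kadane(a)     # 6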
CommonCrawl
We present STM data on the Charge Density Wave (CDW) in the Rare Earth Tri-Telluride TbTe$_3$. Topography scans as large as 250$\times$250 $\text{\AA}^2$ were taken with voltage bias as high as 0.8 V. Fourier analysis shows an incommensurate unidirectional modulation with wave-vector $q \approx 0.71\,a^*$. The topographic scans at different bias voltages are used to highlight the difference in structure of the CDW and lattice period-doubling effects, either from the Te-Te dimerization, or from the Te-Tb layer directly below the surface.
CommonCrawl
Note: We'll only consider norms and metrics on vector spaces for this post. Metrics and norms sound very related. A norm gives a notion of length for a vector. A metric gives a notion of distance between vectors.

From norm to metric: The nature of vector spaces implies that there exists one vector between any two vectors (if $u,v \in V$, then there exists a unique $t \in V$ such that $u + t = v$). So, the distance between the two vectors is defined to be the length of this vector.

From metric to norm: Again, the nature of vector spaces implies that there exists one unique vector between any two vectors. If we take one of these vectors to be the zero vector, why isn't the distance from the zero vector to any other vector, $v$, a valid length for $v$? Recall the properties a norm $p : V \to \mathbb{R}$ must satisfy:

1. $p(av) = \vert a \vert \cdot p(v)$ (absolute homogeneity).
2. If $p(v) = 0$ then $v$ is the zero vector.
3. $p(u + v) \le p(u) + p(v)$ (the triangle inequality).

Let us start with a vector space, $V$, over a field $F$. Also, let $u,v,z \in V$ and $a \in F$. Take $d : V \times V \to \mathbb{R}$ to be a metric on $V$, and define $p' : V \to \mathbb{R}$ as $p'(u) = d(0,u)$.

Property 2 is satisfied by definition: $p'(v) = d(0,v) = 0$ if and only if $v = 0$.

Property 3 is not automatically satisfied. However, by the triangle inequality for $d$ we have that

$$d(0, u+v) \le d(0, u) + d(u, u+v).$$

So, if $d$ is translation invariant, then $d(u, u+v) = d(0,v)$. It follows that our previous equation becomes

$$d(0, u+v) \le d(0, u) + d(0, v).$$

We then have $p'(u + v) = d(0, u+v) \le d(0,u) + d(0,v) = p'(u) + p'(v)$, satisfying property 3.

Property 1 is not automatically satisfied either. It poses a problem, as none of our metric space properties mention scalability. So, we take it as is and require our metric to be absolutely homogeneous (of degree 1). That is, $d(au, av) = \vert a \vert \cdot d(u, v)$. Then, $d(0, av) = \vert a \vert \cdot d(0, v)$, satisfying property 1.

Clearly not every metric will induce a norm. Only those that are translation invariant and absolutely homogeneous will. For example, the discrete metric ($d(u,v) = 1$ whenever $u \neq v$, and $0$ otherwise) is translation invariant but not absolutely homogeneous, so the $p'$ it induces is not a norm. Glad my reasoning led to the same conclusion!
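A quick numeric sanity check in R of the two extra conditions (the metrics and vectors here are arbitrary examples of my own):

# Check the two conditions on a metric d that make p(v) = d(0, v) a norm:
# translation invariance and absolute homogeneity.
euclid   <- function(u, v) sqrt(sum((u - v)^2))
discrete <- function(u, v) as.numeric(any(u != v))

u <- c(1, 2); v <- c(3, -1); z <- c(0.5, 4); a <- -3

# Translation invariance: d(u + z, v + z) == d(u, v)
all.equal(euclid(u + z, v + z), euclid(u, v))        # TRUE
discrete(u + z, v + z) == discrete(u, v)             # TRUE

# Absolute homogeneity: d(a*u, a*v) == |a| * d(u, v)
all.equal(euclid(a * u, a * v), abs(a) * euclid(u, v))  # TRUE
discrete(a * u, a * v) == abs(a) * discrete(u, v)       # FALSE: no norm here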
CommonCrawl
Abstract: The $A(\infty)$-algebra structure in the homology of a DG-algebra is constructed. This structure is unique up to isomorphism of $A(\infty)$-algebras. The connection of this structure with Massey products is indicated. The notion of an $A(\infty)$-module over an $A(\infty)$-algebra is introduced, and such a structure is constructed in the homology of a DG-module over a DG-algebra. The theory of twisted tensor products is generalized from the case of DG-algebras to the case of $A(\infty)$-algebras. These algebraic results are used to describe the homology of classifying spaces, the cohomology of loop spaces, and the homology of fibre bundles.
CommonCrawl
Does the misfolding spread directly through a physical template (as is the case in prion diseases), or indirectly by altering the conditions in the endoplasmic reticulum, e.g. through mitochondrial dysfunction? In the former case, there is no other way to intervene than to attack the misfolded proteins directly, as all other anomalies are just a downstream consequence of the propagating protein misfolding, not part of the core disease process that causes the progression. If the latter hypothesis is true, there are more options. If the misfolding is caused by lack of energy in the ER, the lack of energy by mitochondrial dysfunction, and the mitochondrial dysfunction by the misfolded proteins, strategies aimed at improving mitochondrial function may help halt the vicious circle. Some recent papers suggest that loss of TDP-43 from the nucleus would cause the cell to die via loss of function (TDP-43 normally prevents cryptic exons from being decoded). This would mean either excessive traffic away from the nucleus or diminished traffic into it. Prion-like misfolding into aggregates would provide the required mechanism to prevent TDP-43 from entering the nucleus.

Shynrye Lee and Hyung-Jun Kim. Prion-like Mechanism in Amyotrophic Lateral Sclerosis: are Protein Aggregates the Key? Experimental neurobiology 24(1):1–7, March 2015. Abstract ALS is a fatal adult-onset motor neuron disease. Motor neurons in the cortex, brain stem and spinal cord gradually degenerate in ALS patients, and most ALS patients die within 3–5 years of disease onset due to respiratory failure. The major pathological hallmark of ALS is abnormal accumulation of protein inclusions containing TDP-43, FUS or SOD1 protein. Moreover, the focality of clinical onset and regional spreading of neurodegeneration are typical features of ALS. These clinical data indicate that neurodegeneration in ALS is an orderly propagating process, which seems to share the signature of a seeded self-propagation with pathogenic prion proteins. In vitro and cell line experimental evidence suggests that SOD1, TDP-43 and FUS form insoluble fibrillar aggregates. Notably, these protein fibrillar aggregates can act as seeds to trigger the aggregation of native counterparts. Collectively, a self-propagation mechanism similar to prion replication and spreading may underlie the pathology of ALS. In this review, we will briefly summarize recent evidence to support the prion-like properties of major ALS-associated proteins and discuss the possible therapeutic strategies for ALS based on a prion-like mechanism.

Phillip Smethurst, Katie Claire Louise Sidle and John Hardy. Review: Prion-like mechanisms of transactive response DNA binding protein of 43 kDa (TDP-43) in amyotrophic lateral sclerosis (ALS). Neuropathology and applied neurobiology 41(5):578–97, 2015. Abstract Amyotrophic lateral sclerosis (ALS) is a fatal devastating neurodegenerative disorder which predominantly affects the motor neurons in the brain and spinal cord. The death of the motor neurons in ALS causes subsequent muscle atrophy, paralysis and eventual death. Clinical and biological evidence now demonstrates that ALS has many similarities to prion disease in terms of disease onset, phenotype variability and progressive spread.
The pathognomonic ubiquitinated inclusions deposited in the neurons and glial cells in brains and spinal cords of patients with ALS and fronto-temporal lobar degeneration with ubiquitinated inclusions contain aggregated transactive response DNA binding protein of 43 kDa (TDP-43), and evidence now suggests that TDP-43 has cellular prion-like properties. The cellular mechanisms of prion protein misfolding and aggregation are thought to be responsible for the characteristics of prion disease. Therefore, there is a strong mechanistic basis for a prion-like behaviour of the TDP-43 protein being responsible for some characteristics of ALS. In this review, we compare the prion-like mechanisms of TDP-43 to the clinical and biological nature of ALS in order to investigate how this protein could be responsible for some of the characteristic properties of the disease.

Leslie I Grad, Sarah M Fernando and Neil R Cashman. From molecule to molecule and cell to cell: prion-like mechanisms in amyotrophic lateral sclerosis. Neurobiology of disease 77:257–65, 2015. Abstract Prions, self-proliferating infectious agents consisting of misfolded protein, are most often associated with aggressive neurodegenerative diseases in animals and humans. Akin to the contiguous spread of a living pathogen, the prion paradigm provides a mechanism by which a mutant or wild-type misfolded protein can dominate pathogenesis through self-propagating protein misfolding, and subsequently spread from region to region through the central nervous system. The prion diseases, along with more common neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease and the tauopathies belong to a larger group of protein misfolding disorders termed proteinopathies that feature aberrant misfolding and aggregation of specific proteins. Amyotrophic lateral sclerosis (ALS), a lethal disease characterized by progressive degeneration of motor neurons is currently understood as a classical proteinopathy; the disease is typified by the formation of inclusions consisting of aggregated protein within motor neurons that contribute to neurotoxicity. It is well established that misfolded/aggregated proteins such as SOD1 and TDP-43 contribute to the toxicity of motor neurons and play a prominent role in the pathology of ALS. Recent work has identified propagated protein misfolding properties in both mutant and wild-type SOD1, and to a lesser extent TDP-43, which may provide the molecular basis for the clinically observed contiguous spread of the disease through the neuroaxis. In this review we examine the current state of knowledge regarding the prion-like properties of proteins associated with ALS pathology as well as their possible mechanisms of transmission.

Physiological protein aggregation run amuck: stress granules and the genesis of neurodegenerative disease. Discovery medicine 17(91):47–52, January 2014.

Keizo Sugaya and Imaharu Nakano. Prognostic role of "prion-like propagation" in SOD1-linked familial ALS: an alternative view. Frontiers in Cellular Neuroscience 8:359, 2014. Abstract "Prion-like propagation" has recently been proposed for disease spread in Cu/Zn superoxide dismutase 1 (SOD1)-linked familial amyotrophic lateral sclerosis (ALS). Pathological SOD1 conformers are presumed to propagate via cell-to-cell transmission. In this model, the risk-based kinetics of neuronal cell loss over time appears to be represented by a sigmoidal function that reflects the kinetics of intercellular transmission.
Here, we describe an alternative view of prion-like propagation in SOD1-linked ALS - its relation to disease prognosis under the protective-aggregation hypothesis. Nucleation-dependent polymerization has been widely accepted as the molecular mechanism of prion propagation. If toxic species of misfolded SOD1, as soluble oligomers, are formed as on-pathway intermediates of nucleation-dependent polymerization, further fibril extension via sequential addition of monomeric mutant SOD1 would be protective against neurodegeneration. This is because the concentration of unfolded mutant SOD1 monomers, which serve as precursor of nucleation and toxic species of mutant SOD1, would decline in proportion to the extent of aggregation. The nucleation process requires that native conformers exist in an unfolded state that may result from escaping the cellular protein quality control machinery. However, prion-like propagation (the SOD1 aggregated form self-propagates by imposing its altered conformation on normal SOD1) appears to antagonize the protective role of aggregate growth. The cross-seeding reaction with normal SOD1 would lead to a failure to reduce the concentration of unfolded mutant SOD1 monomers, resulting in continuous nucleation and subsequent generation of toxic species, and influence disease prognosis. In this alternative view, the kinetics of neuronal loss appears to be represented by an exponential function, with decreasing risk reflecting the protective role of aggregate and the potential for cross-seeding reactions between mutant SOD1 and normal SOD1.

Leslie I Grad, Edward Pokrishevsky, Judith M Silverman and Neil R Cashman. Exosome-dependent and independent mechanisms are involved in prion-like transmission of propagated Cu/Zn superoxide dismutase misfolding. Prion 8(5):331–5, January 2014. Abstract Amyotrophic lateral sclerosis (ALS), a fatal adult-onset degenerative neuromuscular disorder with a poorly defined etiology, progresses in an orderly spatiotemporal manner from one or more foci within the nervous system, reminiscent of prion disease pathology. We have previously shown that misfolded mutant Cu/Zn superoxide dismutase (SOD1), mutation of which is associated with a subset of ALS cases, can induce endogenous wild-type SOD1 misfolding in the intracellular environment in a templating fashion similar to that of misfolded prion protein. Our recent observations further extend the prion paradigm of pathological SOD1 to help explain the intercellular transmission of disease along the neuroaxis. It has been shown that both mutant and misfolded wild-type SOD1 can traverse cell-to-cell either as protein aggregates that are released from dying cells and taken up by neighboring cells via macropinocytosis, or released to the extracellular environment on the surface of exosomes secreted from living cells. Furthermore, once propagation of misfolded wild-type SOD1 has been initiated in human cell culture, it continues over multiple passages of transfer and cell growth. Propagation and transmission of misfolded wild-type SOD1 is therefore a potential mechanism in the systematic progression of ALS pathology.

Leslie I Grad and Neil R Cashman. Prion-like activity of Cu/Zn superoxide dismutase: Implications for amyotrophic lateral sclerosis. Prion 8(1), 2014. Abstract Neurodegenerative diseases belong to a larger group of protein misfolding disorders, known as proteinopathies.
There is increasing experimental evidence implicating prion-like mechanisms in many common neurodegenerative disorders, including Alzheimer disease, Parkinson disease, the tauopathies, and amyotrophic lateral sclerosis (ALS), all of which feature the aberrant misfolding and aggregation of specific proteins. The prion paradigm provides a mechanism by which a mutant or wild-type protein can dominate pathogenesis through the initiation of self-propagating protein misfolding. ALS, a lethal disease characterized by progressive degeneration of motor neurons, is understood as a classical proteinopathy; the disease is typified by the formation of inclusions consisting of aggregated protein within and around motor neurons that can contribute to neurotoxicity. It is well established that misfolded/oxidized SOD1 protein is highly toxic to motor neurons and plays a prominent role in the pathology of ALS. Recent work has identified propagated protein misfolding properties in both mutant and wild-type SOD1, which may provide the molecular basis for the clinically observed contiguous spread of the disease through the neuroaxis. In this review we examine the current state of knowledge regarding the prion-like properties of SOD1 and comment on its proposed mechanisms of intercellular transmission. Jacob I Ayers, Susan Fromholt, Morgan Koch, Adam DeBosier, Ben McMahon, Guilian Xu and David R Borchelt. Experimental transmissibility of mutant SOD1 motor neuron disease. Acta Neuropathologica, 2014. Abstract By unknown mechanisms, the symptoms of amyotrophic lateral sclerosis (ALS) seem to spread along neuroanatomical pathways to engulf the motor nervous system. The rate at which symptoms spread is one of the primary drivers of disease progression. One mechanism by which ALS symptoms could spread is by a prion-like propagation of a toxic misfolded protein from cell to cell along neuroanatomic pathways. Proteins that can transmit toxic conformations between cells often can also experimentally transmit disease between individual organisms. To survey the ease with which motor neuron disease (MND) can be transmitted, we injected spinal cord homogenates prepared from paralyzed mice expressing mutant superoxide dismutase 1 (SOD1-G93A and G37R) into the spinal cords of genetically vulnerable SOD1 transgenic mice. From the various models we tested, one emerged as showing high vulnerability. Tissue homogenates from paralyzed G93A mice induced MND in 6 of 10 mice expressing low levels of G85R-SOD1 fused to yellow fluorescent protein (G85R-YFP mice) by 3-11 months, and produced widespread spinal inclusion pathology. Importantly, second passage of homogenates from G93A → G85R-YFP mice back into newborn G85R-YFP mice induced disease in 4 of 4 mice by 3 months of age. Homogenates from paralyzed mice expressing the G37R variant were among those that transmitted poorly regardless of the strain of recipient transgenic animal injected, a finding suggestive of strain-like properties that manifest as differing abilities to transmit MND. Together, our data provide a working model for MND transmission to study the pathogenesis of ALS. Vinod Sundaramoorthy, Adam K Walker, Justin Yerbury, Kai Ying Soo, Manal A Farg, Vy Hoang, Rafaa Zeineddine, Damian Spencer and Julie D Atkin. Extracellular wildtype and mutant SOD1 induces ER-Golgi pathology characteristic of amyotrophic lateral sclerosis in neuronal cells. Cellular and Molecular Life Sciences: CMLS 70(21):4181–95, November 2013.
Abstract Amyotrophic lateral sclerosis (ALS) is a fatal and rapidly progressing neurodegenerative disorder, and the majority of ALS is sporadic, where misfolding and aggregation of Cu/Zn-superoxide dismutase (SOD1) is a feature shared with familial mutant-SOD1 cases. ALS is characterized by progressive neurospatial spread of pathology among motor neurons, and recently the transfer of extracellular, aggregated mutant SOD1 between cells was demonstrated in culture. However, there is currently no evidence that uptake of SOD1 into cells initiates neurodegenerative pathways reminiscent of ALS pathology. Similarly, whilst dysfunction of the ER-Golgi compartments is increasingly implicated in the pathogenesis of both sporadic and familial ALS, it remains unclear whether misfolded, wildtype SOD1 triggers ER-Golgi dysfunction. In this study we show that both extracellular, native wildtype and mutant SOD1 are taken up by macropinocytosis into neuronal cells. Hence uptake does not depend on SOD1 mutation or misfolding. We also demonstrate that purified mutant SOD1 added exogenously to neuronal cells inhibits protein transport between the ER and Golgi apparatus, leading to Golgi fragmentation, induction of ER stress and apoptotic cell death. Furthermore, we show that extracellular, aggregated, wildtype SOD1 also induces ER-Golgi pathology similar to mutant SOD1, leading to apoptotic cell death. Hence extracellular misfolded wildtype or mutant SOD1 induce dysfunction of the ER-Golgi compartments characteristic of ALS in neuronal cells, implicating extracellular SOD1 in the spread of pathology among motor neurons in both sporadic and familial ALS. Biology and genetics of prions causing neurodegeneration. Annual Review of Genetics 47:601–23, January 2013. Abstract Prions are proteins that acquire alternative conformations that become self-propagating. Transformation of proteins into prions is generally accompanied by an increase in $\beta$-sheet structure and a propensity to aggregate into oligomers. Some prions are beneficial and perform cellular functions, whereas others cause neurodegeneration. In mammals, more than a dozen proteins that become prions have been identified, and a similar number has been found in fungi. In both mammals and fungi, variations in the prion conformation encipher the biological properties of distinct prion strains. Increasing evidence argues that prions cause many neurodegenerative diseases (NDs), including Alzheimer's, Parkinson's, Creutzfeldt-Jakob, and Lou Gehrig's diseases, as well as the tauopathies. The majority of NDs are sporadic, and 10% to 20% are inherited. The late onset of heritable NDs, like their sporadic counterparts, may reflect the stochastic nature of prion formation; the pathogenesis of such illnesses seems to require prion accumulation to exceed some critical threshold before neurological dysfunction manifests. Kristen Marciniuk, Ryan Taschuk and Scott Napper. Evidence for prion-like mechanisms in several neurodegenerative diseases: potential implications for immunotherapy. Clinical & Developmental Immunology 2013:473706, January 2013. Abstract Transmissible spongiform encephalopathies (TSEs) are fatal, untreatable neurodegenerative diseases. While the impact of TSEs on human health is relatively minor, these diseases are having a major influence on how we view, and potentially treat, other more common neurodegenerative disorders.
Until recently, TSEs encapsulated a distinct category of neurodegenerative disorder, exclusive in their defining characteristic of infectivity. It now appears that similar mechanisms of self-propagation may underlie other proteinopathies such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and Huntington's disease. This link is of scientific interest and potential therapeutic importance, as this route of self-propagation offers conceptual support and guidance for vaccine development efforts. Specifically, the existence of a pathological, self-promoting isoform offers a rational vaccine target. Here, we review the evidence of prion-like mechanisms within a number of common neurodegenerative disorders and speculate on potential implications and opportunities for vaccine development. Luigi Francesco Agnati, Diego Guidolin, Amina S Woods, Francisco Ciruela, Chiara Carone, Annamaria Vallelunga, Dasiel Oscar Borroto Escuela, Susanna Genedani and Kjell Fuxe. A new interpretative paradigm for Conformational Protein Diseases. Current Protein & Peptide Science 14(2):141–60, 2013. Abstract Conformational Protein Diseases (CPDs) comprise over forty clinically and pathologically diverse disorders in which specific altered proteins accumulate in cells or tissues of the body. The most studied are Alzheimer's disease, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, prion diseases, inclusion body myopathy, and the systemic amyloidoses. They are characterised by three-dimensional conformational alterations, which are often rich in $\beta$-structure. Proteins in this non-native conformation are highly stable, resistant to degradation, and have an enhanced tendency to aggregate with like protein molecules. The misfolded proteins can impart their anomalous properties to soluble, monomeric proteins with the same amino acid sequence by a process that has been likened to seeded crystallization. However, these potentially pathogenic proteins also have important physiological actions, which have not been completely characterized. This opens up the question of what process transforms physiological actions into pathological actions and, most intriguingly, why potentially dangerous proteins have been maintained during evolution and are present from yeasts to humans. In the present paper, we introduce the concepts of mis-exaptation and of mis-tinkering, since they may help in clarifying some of the double-edged-sword aspects of these proteins. Against this background an original interpretative paradigm for CPDs will be given in the frame of the previously proposed Red Queen Theory of Aging. Kunihiro Yoshida, Keiichi Higuchi and Shu-ichi Ikeda. [Can prion-like propagation occur in neurodegenerative diseases?: in view of transmissible systemic amyloidosis]. Brain and Nerve = Shinkei kenkyū no shinpo 64(6):665–74, 2012. Abstract Common neurodegenerative diseases, including Alzheimer's disease (AD) and Parkinson's disease (PD), are now considered as "protein misfolding diseases," because the misfolding of a small number of proteins is a key event in the pathogenesis and progression of these diseases. Proteins that are prone to misfolding and thereby associated with neurodegenerative diseases include amyloid $\beta$ (AD), tau (AD and tauopathy), $\alpha$-synuclein (PD, dementia with Lewy bodies, etc.), polyglutamine proteins (Huntington's disease, spinocerebellar ataxia, etc.), and superoxide dismutase 1 (amyotrophic lateral sclerosis).
These proteins share certain essential properties with prions. Similar to abnormal prions, misfolded proteins function as a template to catalyze the misfolding of the native proteins and assemble into insoluble, $\beta$-sheet-rich, fibrillar aggregates termed "amyloids." Furthermore, there is enough evidence supporting the intercellular transfer of misfolded protein aggregates. The transmission of these aggregates from one cell to another may be in accordance with the concept that neuropathological changes propagate along neuronal circuits in neurodegenerative diseases. Prion-like propagation mechanisms have been extensively analyzed in connection with systemic amyloidoses such as amyloid A (AA) amyloidosis and amyloid apolipoprotein AII (AApoAII) amyloidosis. Studies have shown that AA and AApoAII amyloidoses are transmitted from one organism to another through amyloid fibrils. However, studies have not yet proved that protein misfolding diseases, except for prion diseases, are infectious. Given the intercellular transfer of misfolded protein aggregates, we cannot ignore the possibility that disease-specific, misfolded proteins can be transmitted between individuals through surgical procedures or tissue transplantation. Importantly, cell non-autonomous mechanisms underlying the pathogenesis of neurodegenerative diseases may represent a more readily accessible target for novel disease-modifying therapies. In the present review, we discuss some aspects of the prion-like propagation of neurodegenerative diseases, taking into consideration the accumulated evidence supporting the transmissibility of systemic amyloidoses. How do the RNA-binding proteins TDP-43 and FUS relate to amyotrophic lateral sclerosis and frontotemporal degeneration, and to each other? Current Opinion in Neurology 25(6):701–7, 2012. Abstract PURPOSE OF REVIEW: This review examines the recent research developments aimed at defining the role of RNA-binding proteins (TDP-43 and FUS) in amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration (FTLD). RECENT FINDINGS: TAR DNA-binding protein 43 kDa (TDP-43) and fused in sarcoma (FUS) are RNA-binding proteins that form aggregates in ALS and FTLD, and when mutated can drive the pathogenesis of these disorders. However, fundamental questions remain as to the relationship between TDP-43 and FUS aggregation and disease, their normal and pathologic function, and where they converge on the same cellular pathways. Autopsy series point to distinct molecular actions, as TDP-43 and FUS neuronal inclusions do not overlap, with FUS inclusions being present in only a small subgroup of patients. By contrast, modeling experiments in lower organisms support a genetic interaction between TDP-43 and FUS, although it is likely indirect. Regardless, the recent finding that additional RNA-binding proteins may also cause ALS, and the observation that TDP-43 aggregation remains a core feature in all of the recently identified genetic forms of ALS (C9ORF72, VCP, UBQLN2, and PFN1), underscores the central role of TDP-43 and RNA metabolism in ALS and FTLD. SUMMARY: Recent discoveries point to an unprecedented convergence of molecular pathways in ALS and FTLD involving RNA metabolism. Defining the exact points of convergence will likely be key to advancing therapeutics development in the coming years.
Marka Blitterswijk, Sunita Gulati, Elizabeth Smoot, Matthew Jaffa, Nancy Maher, Bradley T Hyman, Adrian J Ivinson, Clemens R Scherzer, David A Schoenfeld, Merit E Cudkowicz, Robert H Brown and Daryl A Bosco. Anti-superoxide dismutase antibodies are associated with survival in patients with sporadic amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis: official publication of the World Federation of Neurology Research Group on Motor Neuron Diseases 12(6):430–8, 2011. Abstract Our objective was to test the hypothesis that aberrantly modified forms of superoxide dismutase (SOD1) influence the disease course for sporadic amyotrophic lateral sclerosis (SALS). We probed for anti-SOD1 antibodies (IgM and IgG) against both the normal and aberrantly oxidized SOD1 (SODox) antigens in sera from patients with SALS, subjects diagnosed with other neurological disorders and healthy individuals, and correlated the levels of these antibodies to disease duration and/or severity. Anti-SOD1 antibodies were detected in all cohorts; however, a subset of ∼5-10% of SALS cases exhibited elevated levels of anti-SOD1 antibodies. Those SALS cases with relatively high levels of IgM antibodies against SODox exhibit a longer survival of 6.4 years, compared to subjects lacking these antibodies. By contrast, SALS subjects expressing higher levels of IgG antibodies reactive for the normal WT-SOD1 antigen exhibit a shorter survival of 4.1 years. Anti-SOD1 antibody levels did not correlate with disease severity in either the Alzheimer's or Parkinson's disease cohorts. In conclusion, the association of longer survival with elevated levels of anti-SODox antibodies suggests that these antibodies may be protective. By extension, these data implicate aberrantly modified forms of WT-SOD1 (e.g. oxidized SOD1) in SALS pathogenesis. In contrast, an immune response against the normal WT-SOD1 appears to be disadvantageous in SALS, possibly because the anti-oxidizing activity of normal WT-SOD1 is beneficial to SALS individuals. Protein aggregates and regional disease spread in ALS is reminiscent of prion-like pathogenesis. Neurology India 61(2):107–10. Abstract Amyotrophic lateral sclerosis (ALS) typically commences in a discrete location in limb or bulbar muscles and then spreads to the adjacent anatomical regions. This pattern is consistent with a contiguous spread of the disease process in the motor neuron network, resulting in progressive motor weakness. The etiology of ALS onset and the mechanism of the regional ALS spread remain elusive. Over the past 5 years, identification of mutations in two RNA-binding proteins, transactive response (TAR) DNA-binding protein (TDP-43) and fused in sarcoma (FUS), in patients with familial ALS has led to a major shift in our understanding of the ALS disease mechanism. In addition to their role in RNA metabolism, TDP-43 and FUS form protein aggregates in the affected neurons. More recent findings demonstrating that both TDP-43 and FUS contain glutamine/asparagine (Q/N) residue-rich prion-like domains have spurred intense research interest. This brief review discusses the prion-related domains in TDP-43 and FUS and their implication in protein aggregate formation and disease spread in ALS. Amyotrophic lateral sclerosis (ALS) is predominantly sporadic, but associated with heritable genetic mutations in 5–10% of cases, including those in Cu/Zn superoxide dismutase (SOD1).
We previously showed that misfolding of SOD1 can be transmitted to endogenous human wild-type SOD1 (HuWtSOD1) in an intracellular compartment. Using NSC-34 motor neuron-like cells, we now demonstrate that misfolded mutant and HuWtSOD1 can traverse between cells via two nonexclusive mechanisms: protein aggregates released from dying cells and taken up by macropinocytosis, and exosomes secreted from living cells. Furthermore, once HuWtSOD1 propagation has been established, misfolding of HuWtSOD1 can be efficiently and repeatedly propagated between HEK293 cell cultures via conditioned media over multiple passages, and to cultured mouse primary spinal cord cells transgenically expressing HuWtSOD1, but not to cells derived from nontransgenic littermates. Conditioned media transmission of HuWtSOD1 misfolding in HEK293 cells is blocked by HuWtSOD1 siRNA knockdown, consistent with human SOD1 being a substrate for conversion, and attenuated by ultracentrifugation or incubation with SOD1 misfolding-specific antibodies, indicating a relatively massive transmission particle which possesses antibody-accessible SOD1. Finally, misfolded and protease-sensitive HuWtSOD1 comprises up to 4% of total SOD1 in spinal cords of patients with sporadic ALS (SALS). Propagation of HuWtSOD1 misfolding, and its subsequent cell-to-cell transmission, is thus a candidate process for the molecular pathogenesis of SALS, which may provide novel treatment and biomarker targets for this devastating disease. A common feature of many neurodegenerative diseases is the deposition of β-sheet-rich amyloid aggregates formed by proteins specific to these diseases. These protein aggregates are thought to cause neuronal dysfunction, directly or indirectly. Recent studies have strongly implicated cell-to-cell transmission of misfolded proteins as a common mechanism for the onset and progression of various neurodegenerative disorders. Emerging evidence also suggests the presence of conformationally diverse 'strains' of each type of disease protein, which may be another shared feature of amyloid aggregates, accounting for the tremendous heterogeneity within each type of neurodegenerative disease. Although there are many more questions to be answered, these studies have opened up new avenues for therapeutic interventions in neurodegenerative disorders.
Now we don't need to consider $1\times 1$, $1\times 2$, or $2\times 2$ any longer, as we have found the smallest rectangle tileable with copies of V plus copies of each of those three. There are at least 20 more solutions. I tagged it 'computer-puzzle', but you can certainly work some of these out by hand. The larger ones might be a bit challenging. I assume the number of solutions here is infinite (probably in both directions); I'll post more when I have them. 20's a lot, but here are a few to get it started.
Abstract: We show central limit theorems (CLTs) for the Stieltjes transforms or more general analytic functions of symmetric matrices with independent heavy-tailed entries, including entries in the domain of attraction of $\alpha$-stable laws and entries with moments exploding with the dimension, as in the adjacency matrices of Erdős-Rényi graphs. For the second model, we also prove a central limit theorem for the moments of its empirical eigenvalue distribution. The limit laws are Gaussian, but unlike the case of standard Wigner matrices, the normalization is that of the classical CLT for independent random variables.
[$q$] [$100-r$] [$u-s$] [$23$] [$p$] [$12$] [$r$] [$t$]. The product $p\times q\times r\times s\times t\times u\times k$ does not contain any of $p,q,r,s,t,u$ or $k$ within its decimal digits. Possible values are: $p=23$, $q = 31$, $r = 16$, $s = 25$, $t = 37$, $u = 54$, and $k = 13$. Each of the values appears within brackets, so each must lie between 1 and 100 inclusive. The expressions within the brackets ($100-7k$, $100-r$, $r-s+k$, etc.) must also all lie between 1 and 100 inclusive. The remaining constraints check out: $p+q+r+s+t+u+k = 23+31+16+25+37+54+13=199$; the values are all distinct; $p,q,s,t,k$ are odd and $r,u$ are even; $q,t$ are primes; $u$ is the largest and $k$ is the smallest. Finally, $p\times q\times r\times s\times t\times u\times k = 23\times 31\times 16\times 25\times 37\times 54\times 13=7407784800$, which doesn't contain any of the chosen numbers within it. Can you solve for $x$?
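To make the arithmetic easy to re-check, here is a small Scala sketch (the object name and structure are mine, not part of the original puzzle) that re-verifies the stated constraints and the product $7407784800$:

```scala
object BracketPuzzleCheck extends App {
  val (p, q, r, s, t, u, k) = (23, 31, 16, 25, 37, 54, 13)
  val values = List(p, q, r, s, t, u, k)

  def isPrime(m: Int) = m > 1 && (2 to math.sqrt(m).toInt).forall(m % _ != 0)

  assert(values.sum == 199)                       // p+q+r+s+t+u+k = 199
  assert(values.distinct.size == values.size)     // all distinct
  assert(List(p, q, s, t, k).forall(_ % 2 == 1))  // p,q,s,t,k are odd
  assert(List(r, u).forall(_ % 2 == 0))           // r,u are even
  assert(isPrime(q) && isPrime(t))                // q,t are primes
  assert(u == values.max && k == values.min)      // u largest, k smallest

  // The product's digit string contains none of the values.
  val digits = values.map(BigInt(_)).product.toString
  assert(values.forall(v => !digits.contains(v.toString)))
  println(digits)                                 // prints 7407784800
}
```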
A 4 × 4 grid is filled in, with each of the 16 squares colored either black or white. Two colorings are regarded as identical if one can be converted into the other by performing any combination of flipping, rotating, or swapping the two colors (flipping all the black squares to white and vice versa). How many non-identical colorings are there? I've figured out the number of invariances for each individual transformation, but the combinations are a little confusing. Is there an easier way of solving this than just looking at each combination? This is a case of Power Group Enumeration with the group permuting the slots being the eight symmetries $G_N$ of the $N\times N$ square and the group acting on the $Q$ colors being the symmetric group $S_Q$. The cycle indices for $G_N$ were carefully documented and computed at the following MSE link I. The cycle index of the symmetric group can be computed from the classical recurrence by Lovász. It then remains to apply the Power Group Enumeration formula / algorithm as documented at the following MSE link II. We get for the case of coloring a square with at most two interchangeable colors $$1, 4, 51, 4324, 2105872, 4295327872, 35184441295872, \\ 1152921514807410688,\ldots$$ which is OEIS A182044, where a closed formula can be found. An implementation of this algorithm is included below. The reader is invited to compute the closed formula from the algorithm specification, which is not difficult to do but demands careful book-keeping. What is the probability of having distinct grid squares? Why is my counting wrong? How many ways to color a toral chess board to yield $k$ black-white boundaries? There is an $n\times n$ ($n$ is an odd number) chess board, which we have to color in white and black.
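The Power Group Enumeration implementation referenced above is not reproduced here; as an independent sanity check, the following brute-force Scala sketch (entirely my own, with hypothetical names) canonicalizes each of the $2^{16}$ colorings of the 4 × 4 board under the sixteen rotate/flip/color-swap operations and counts the orbits, which should reproduce the value 4324:

```scala
object GridColorings extends App {
  val n = 4
  type Grid = Vector[Vector[Int]]

  def rot(g: Grid): Grid  = Vector.tabulate(n, n)((i, j) => g(n - 1 - j)(i)) // 90-degree turn
  def flip(g: Grid): Grid = g.map(_.reverse)                                 // mirror
  def swap(g: Grid): Grid = g.map(_.map(1 - _))                              // color swap

  // The 8 square symmetries, each with and without the color swap: 16 operations.
  def orbit(g: Grid): Seq[Grid] = {
    val rots = Iterator.iterate(g)(rot).take(4).toSeq
    val syms = rots ++ rots.map(flip)
    syms ++ syms.map(swap)
  }

  // Canonical representative: lexicographically smallest image of the grid.
  def canonical(g: Grid): Grid = orbit(g).minBy(_.toString)

  val classes = (0 until (1 << (n * n))).map { bits =>
    canonical(Vector.tabulate(n, n)((i, j) => (bits >> (i * n + j)) & 1))
  }.toSet

  println(classes.size) // expected: 4324
}
```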
I was proud of this idea, as it was probably the best early research idea I've come up with. I became very interested in matrices and linear algebra after reading a paper on modelling origami using rotation and translation matrices (Belcastro and Hull, 2012). I began to play with expressing all kinds of things as matrices and seeing what "meanings" matrix operations had in those contexts. Somehow complex numbers cropped up, and I decided that they were a good candidate for this "interpretation" because multiplying by a complex number meant a rotation and dilation of the complex plane - or an "amplitwist" (Needham, 1996). So I represented a complex number as a rotation matrix together with a scaling factor. Prove this and extend it to general $n \times n$ matrices. Can you go further than that?
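As a quick illustration of the representation described above, here is a minimal Scala sketch (all names are mine) that encodes $a+bi$ as the rotation-dilation matrix with rows $(a, -b)$ and $(b, a)$ and checks that matrix multiplication reproduces complex multiplication:

```scala
object ComplexAsMatrix extends App {
  // 2x2 real matrix ((a, b), (c, d)) with multiplication.
  case class Mat(a: Double, b: Double, c: Double, d: Double) {
    def *(o: Mat) = Mat(a * o.a + b * o.c, a * o.b + b * o.d,
                        c * o.a + d * o.c, c * o.b + d * o.d)
  }
  // a + bi  <->  ((a, -b), (b, a)): a rotation combined with a dilation.
  def toMat(re: Double, im: Double) = Mat(re, -im, im, re)

  // (1 + 2i)(3 - i) = 5 + 5i, and indeed the matrices agree:
  println(toMat(1, 2) * toMat(3, -1)) // Mat(5.0, -5.0, 5.0, 5.0) <-> 5 + 5i
}
```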
In this talk we will discuss a modification of the bosonic string for piecewise flat metrics. The standard approach to the bosonic string developed by Polyakov (Belavin and Knizhnik) is based on integration over the infinite-dimensional space of Riemannian metrics on the surface. The semidirect product of the diffeomorphism group and the conformal group acts on this space, and the integral reduces to a finite-dimensional integral over the moduli space of algebraic curves. In this talk I will rewrite the Polyakov integral for piecewise flat metrics. Surfaces are glued from flat triangles, and an embedding of a piecewise flat surface is defined by the embeddings of its vertices. The integrals are finite-dimensional for each triangulation. I will write the Polyakov action and the measures on parameter space. The analog of the diffeomorphism group is the discrete group of Whitehead moves. This group is similar to the modular one. For a torus glued from two triangles this is $PSL(2,\mathbb Z)$. I will show that the integral reduces to an integral over the moduli space and discuss examples with a small number of triangles.
Do gauge bosons really exist or are they only a mathematical model? Have we ever detected them? Photons, which are gauge bosons, are absorbed by the retina and cause impulses in the optic nerve. In a very dark room, the eye can detect small numbers of photons. Researchers argue about how few, but you don't have to have a classical electromagnetic wave to excite retinal cells. Direct perception of photons through one of the five senses tells me that they exist, although questions of "existence" are more about philosophy than physics. (Does Fock space "exist"? If so, where?) I choose to believe that many things "exist" that I cannot perceive with my senses. People once did not even believe in atoms, but I do, even though I haven't seen one with my eyes. We can't perceive W and Z bosons, or gluons, through our senses the way we can perceive photons. But the Standard Model makes accurate predictions, so it makes sense to assume that they exist as much as anything else exists. What is the gauge field in Bose-Einstein condensation? Is the standard model a quantized gauge theory? Why are the four gauge bosons that correspond to the $SU(2)\times U(1)$ electroweak force before symmetry breaking not listed in the Standard Model? How to gauge away Goldstone bosons in Higgs triplet model?
Abstract: In applications it is common that the exact form of a conditional expectation is unknown, and having flexible functional forms can lead to improvements. The series method offers this flexibility by approximating the unknown function with $k$ basis functions, where $k$ is allowed to grow with the sample size $n$. We consider series estimators for the conditional mean in light of: (i) sharp LLNs for matrices derived from the noncommutative Khinchin inequalities, (ii) bounds on the Lebesgue factor that controls the ratio between the $L^\infty$ and $L_2$-norms of approximation errors, (iii) maximal inequalities for processes whose entropy integrals diverge, and (iv) strong approximations to series-type processes. These technical tools allow us to contribute to the series literature, specifically the seminal work of Newey (1997), as follows. First, we weaken the condition on the number $k$ of approximating functions used in series estimation from the typical $k^2/n \to 0$ to $k/n \to 0$, up to log factors, which was previously available only for spline series. Second, we derive $L_2$ rates and pointwise central limit theorems when the approximation error vanishes. Under an incorrectly specified model, i.e., when the approximation error does not vanish, analogous results are also shown. Third, under stronger conditions we derive uniform rates and functional central limit theorems that hold whether or not the approximation error vanishes. That is, we derive the strong approximation for the entire estimate of the nonparametric function. We derive uniform rates, Gaussian approximations, and uniform confidence bands for a wide collection of linear functionals of the conditional expectation function.
It is known that the addition of a non-volatile solute to a volatile solvent (liquid) to give a solution reduces the vapour pressure of the solution (well, of the solvent actually, as only the solvent is volatile). This leads to elevation of the boiling point and depression of the freezing point. I am clear on the elevation of the BP. But it is said that a liquid freezes when the vapour pressure of the liquid phase attains the vapour pressure of its solid state. So how can the freezing point fall? For instance, let the vapour pressure of the solid state be at an arbitrary point "x" (at a temperature of A kelvin) and that of the liquid phase be at "y". So now the vapour pressure of the liquid (solution) reaches "x" rapidly, that is, at a higher temperature, or a temperature greater than A, which would be an elevation of the freezing point. I am always confused by this part. Focus more on free energies rather than on vapor pressures (which derive, ultimately, from free energies after all). For a mixture of B (solute) in A (solvent), the entropic contribution to the free energy of mixing is $RT(x_A \ln x_A + x_B \ln x_B)$, and the enthalpy of mixing will go approximately as $x_A x_B \Omega$, with $\Omega$ a measure of the interaction of A and B. The entropy term will always result in a reduction of free energy at small $x_B$ regardless of the sign of $\Omega$, but in the case of salt in water $\Omega$ is negative, driving further solubility. Now, about those temperatures of phase transitions (which, as you should recall, occur when the free energies of the phases are equal; this is more fundamental than vapor pressures). On the boiling end, water with salt in it has a lower free energy than water without salt, so the boiling point of the salt water has to be higher than for pure water. The presence of the solute makes it happy to stay liquid. So, why isn't the freezing point raised in an analogous way? For pure A, of course, the freezing point remains that of pure water. The entropy of mixing is equivalent for solid and liquid in this case, and it is not clear how different the excess enthalpy is going to be; so why does liquid continue to be stable? Well, it only does so while it has a higher concentration of salt in it than is in the solid. It forms a classic eutectic point on the binary phase diagram (see, for example, http://antoine.frostburg.edu/chem/senese/101/solutions/images/saltwater-phase-diagram.gif). The only requirement of melting point depression is that water at a high concentration of salt has a lower free energy than the solid at a lower concentration of salt. Given the outline of free energy above, this is likely to hold for some range of temperatures. Again, remember that what is being lowered is the freezing point of water with salt in it; what starts freezing out is ice with a lower salt content in it.
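A minimal numeric sketch of the mixing term quoted above (my own helper, assuming ideal mixing at a fixed temperature): it shows that the free-energy contribution is negative for small $x_B$, which is exactly what puts the solution's free energy below the pure solvent's.

```scala
object MixingFreeEnergy extends App {
  val R = 8.314   // gas constant, J/(mol K)
  val T = 273.15  // K, near the freezing point of pure water

  // dG_mix = R*T*(xA*ln(xA) + xB*ln(xB)), negative for 0 < xB < 1.
  def dGmix(xB: Double): Double = {
    val xA = 1.0 - xB
    R * T * (xA * math.log(xA) + xB * math.log(xB))
  }

  for (xB <- Seq(0.01, 0.05, 0.10))
    println(f"xB = $xB%.2f  dG_mix = ${dGmix(xB)}%.1f J/mol") // all negative
}
```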
If $\alpha = (1352)$ and $\delta = (256)$ then $\alpha \centerdot \delta = (1652)(34)$. Can someone explain this to me? I don't see a 4 in $\alpha$ or $\delta$, so how can it be in the product of the two? Hey Sarah, we found the example that they were trying to write. The answer they had was a product of alpha and a different cycle than what was given on the page. We changed the answer on the chapt sums and it looks good now!
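A quick way to settle such questions is to compose the cycles mechanically. The sketch below (my own helper names, and assuming the right-to-left convention of applying $\delta$ first) shows that the product is $(1356)$; no 4 appears anywhere, confirming that the printed answer was erroneous:

```scala
object CycleProduct extends App {
  // Turn a cycle like (1 3 5 2) into the map 1->3, 3->5, 5->2, 2->1.
  def cycleToMap(c: Seq[Int]): Map[Int, Int] =
    c.zip(c.tail :+ c.head).toMap

  val alpha = cycleToMap(Seq(1, 3, 5, 2))
  val delta = cycleToMap(Seq(2, 5, 6))

  // alpha . delta, applying delta first (right-to-left convention).
  def compose(x: Int): Int = alpha.getOrElse(delta.getOrElse(x, x), x)

  for (x <- 1 to 6) println(s"$x -> ${compose(x)}")
  // 1->3, 3->5, 5->6, 6->1, and 2, 4 are fixed: the product is (1 3 5 6).
}
```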
Azmy S. Ackleh, Linda J. S. Allen. Competitive exclusion in SIS and SIR epidemic models with total cross immunity and density-dependent host mortality. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 175-188. doi: 10.3934/dcdsb.2005.5.175. E. Audusse. A multilayer Saint-Venant model: Derivation and numerical validation. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 189-214. doi: 10.3934/dcdsb.2005.5.189. Bernd Aulbach, Martin Rasmussen, Stefan Siegmund. Approximation of attractors of nonautonomous dynamical systems. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 215-238. doi: 10.3934/dcdsb.2005.5.215. Fabio Bagagiolo. Optimal control of finite horizon type for a multidimensional delayed switching system. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 239-264. doi: 10.3934/dcdsb.2005.5.239. Said Boulite, S. Hadd, L. Maniar. Critical spectrum and stability for population equations with diffusion in unbounded domains. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 265-276. doi: 10.3934/dcdsb.2005.5.265. Y. Chen, L. Wang. Global attractivity of a circadian pacemaker model in a periodic environment. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 277-288. doi: 10.3934/dcdsb.2005.5.277. Mats Gyllenberg, Yi Wang. Periodic tridiagonal systems modeling competitive-cooperative ecological interactions. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 289-298. doi: 10.3934/dcdsb.2005.5.289. T. Hillen. On the $L^2$-moment closure of transport equations: The general case. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 299-318. doi: 10.3934/dcdsb.2005.5.299. H.J. Hwang, K. Kang, A. Stevens. Drift-diffusion limits of kinetic models for chemotaxis: A generalization. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 319-334. doi: 10.3934/dcdsb.2005.5.319. S. R.-J. Jang, J. Baglama, P. Seshaiyer. Intratrophic predation in a simple food chain with fluctuating nutrient. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 335-352. doi: 10.3934/dcdsb.2005.5.335. Guy Katriel. Stability of synchronized oscillations in networks of phase-oscillators. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 353-364. doi: 10.3934/dcdsb.2005.5.353. M. S. Mahmoud, P. Shi, Y. Shi. $H_\infty$ and robust control of interconnected systems with Markovian jump parameters. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 365-384. doi: 10.3934/dcdsb.2005.5.365. Aníbal Rodríguez-Bernal, Robert Willie. Singular large diffusivity and spatial homogenization in a non homogeneous linear parabolic problem. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 385-410. doi: 10.3934/dcdsb.2005.5.385. Diana M. Thomas, Lynn Vandemuelebroeke, Kenneth Yamaguchi. A mathematical evolution model for phytoremediation of metals. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 411-422. doi: 10.3934/dcdsb.2005.5.411. V. Torri. Numerical and dynamical analysis of undulation instability under shear stress. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 423-460. doi: 10.3934/dcdsb.2005.5.423. E. Trofimchuk, Sergei Trofimchuk. Global stability in a regulated logistic growth model. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 461-468. doi: 10.3934/dcdsb.2005.5.461. Xinmin Xiang. The long-time behaviour for nonlinear Schrödinger equation and its rational pseudospectral approximation. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 469-488. doi: 10.3934/dcdsb.2005.5.469.
Juan-Ming Yuan, Jiahong Wu. The complex KdV equation with or without dissipation. Discrete & Continuous Dynamical Systems - B, 2005, 5(2): 489-512. doi: 10.3934/dcdsb.2005.5.489.
Strong evidence is presented for the localization of low energy quasiparticle states in disordered $d$-wave superconductors. Within the framework of the Bogoliubov-de Gennes (BdG) theory applied to the extended Hubbard model with a finite concentration of non-magnetic impurities, we carry out a fully self-consistent numerical diagonalization of the BdG equations on finite clusters containing up to $50\times 50$ sites. Localized states are identified by probing their sensitivity to the boundary conditions and by analyzing the finite size dependence of inverse participation ratios.
Abstract: The top quark mass and the flavor mixing are studied in the context of a seesaw model of quark masses based on the gauge group $SU(2)_L \times SU(2)_R \times U(1)$. Six isosinglet quarks are introduced to give rise to the mass hierarchy of ordinary quarks. In this scheme, we reexamine a mechanism for the generation of the top quark mass. It is shown that, in order to prevent the seesaw mechanism from acting on the top quark, the mass parameter of its isosinglet partner must be much smaller than the breaking scale of $SU(2)_R$. As a result the fourth lightest up quark must have a mass of the order of the breaking scale of $SU(2)_R$, and a large mixing between the right-handed top quark and its singlet partner occurs. We also show that this mechanism is compatible with the mass spectrum of light quarks and their flavor mixing.
We study properties of operators which are left inverses of the operator of multiplication by the independent variable in the space $\mathcal H (G)$ of functions that are analytic in an arbitrary domain $G$. This space is endowed with the topology of compact convergence. A description of the cyclic elements for such operators is obtained. The obtained statements generalize known results in this direction.
This is a puzzle that was a fad when I was back in school. (It's not sooo long ago, but way before smartphones with AngryBirds or DoodleJump came up...). For quite a while, everybody was scribbling this instead of paying attention in class. It is somewhat addictive ("This can't be so hard..."), and can be played everywhere and every time, just using a pen and a piece of squared paper. In the end, when I solved it (with the help of a computer, to be honest), I wrote the solution on a piece of paper, and still carry it around in my wallet (although I don't know why). The starting point is arbitrary (and I don't know whether it matters, but at least I can say that it is possible to solve it when starting with the sequence depicted in the image). I must admit that I have heard of this before, and I remember the right way of approaching this puzzle. Edit: After reading Anachor's answer I realised his "style" was way nicer to read, so I also changed my image. This is similar to the other solution, but uses a slightly different trick. Call a path which traverses all squares a complete path. Instead of generating four $5 \times 5$ complete paths, we generate a single complete "circular" path, i.e. one that traverses all squares and returns to the starting cell. By doing this we can make complete paths from each starting cell by properly "rotating" the circular path. In fact, we can generate 2 complete paths, one moving forwards, one backwards. Note that you just have to jump from 25 to 1 to make a circular path. The lower right grid uses the backward path while the other three move forwards. I used Rohcana's circular board to build a solution for any (5N)×(5M) board that does not require any rotations or flipping. It only uses two interesting properties of Rohcana's board which enable one to connect adjacent boards. If two boards are placed next to each other horizontally, it is possible to go between 23 on the left board and 16 on the right board, and between 24 on the left board and 15 on the right board. If two boards are placed next to each other vertically, it is possible to go between 12 on the upper board and 8 on the lower board, and between 13 on the upper board and 7 on the lower board.
Both essentially construct a valid directed graph, with the only difference being which connections are considered valid. There's a pretty good heuristic for constructing knight's tours called Warnsdorf's rule, and I was curious to see whether it applied to this problem as well, so I implemented it in Python. This seems in line with the results in a paper on the Knight's Tour by Squirrel & Cull, which is cited in the section on Warnsdorf's rule on the above Wikipedia page. The paper also details a more successful tie-breaking rule, but I haven't gotten around to implementing that one yet. It might not even work for this particular game. Here is a slightly different solution. It doesn't necessarily scale well, but it does form a complete cycle of the board. Notice that if you move up 3 from the 25th spot, it takes you out of the current 5x5 square. Thus, we can position 4 of these squares to make a 10x10 square, each starting its cycle in the square next to the shared vertex. Let's call the square above $A$. By doing this, we've created a 10x10 board with a complete cycle. Consider the following 5x5 squares. Notice that moving diagonally up and to the right from the 25th spot will take you to the upper left position of the next 5x5 square, and moving diagonally down and to the right will take you to the bottom left position of the next square. Thus, when entering this square from the top left position, you can enter the square to the right in the bottom left or top left position. A diagonal flip of this square will let you enter the square below in either the top left or top right position when starting in the top left corner. Similar to the first, you can enter the square above in either bottom corner. A diagonal flip means you can enter the square to the left in either right corner. Through the use of these two squares and their diagonal flips, you are able to enter the corner of any adjacent square when starting in the upper left corner. Through rotations, this also means you can start in any corner! To tile any rectangle whose sides are multiples of 5, simply divide it up into 5x5 squares and then trace a path through these squares. A spiral will work; going back and forth through each row will also work. Use the appropriate square above to enter/exit the right corner, and it is possible to tile any rectangle whose sides are multiples of 5. Here is a 30x20 rectangle. This is made up of 6x4 5x5 squares. The starting corner of each 5x5 square is lettered in order. The squares a-e and m-n all start in the top left and enter the square to the right in the top left. The squares g-k and s-w all start in the top right and enter the square to the left in the top right. The squares f and r start in the top left and enter the square below in the top right. The squares l and x start in the top right and enter the square below in the top left. Another way to tile this is using a spiral pattern. For example, here is a 7x5 set of 5x5 squares. The pattern is to simply enter the next square in the corner farthest from the center.
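For completeness, here is a compact Scala sketch of the Warnsdorf-style greedy search mentioned at the top of this answer (the original implementation was in Python and is not reproduced here; the move set of jumping three squares orthogonally or two diagonally is my reading of the puzzle, and the greedy rule can fail from some starting squares, as noted above):

```scala
object WarnsdorfGrid extends App {
  val n = 10
  // Assumed moves: jump 3 orthogonally or 2 diagonally.
  val moves = Seq((3, 0), (-3, 0), (0, 3), (0, -3),
                  (2, 2), (2, -2), (-2, 2), (-2, -2))

  def neighbors(c: (Int, Int), visited: Set[(Int, Int)]): Seq[(Int, Int)] =
    moves.map { case (dx, dy) => (c._1 + dx, c._2 + dy) }
         .filter { case (x, y) =>
           x >= 0 && x < n && y >= 0 && y < n && !visited((x, y)) }

  // Greedy Warnsdorf rule: always step to the square with fewest onward moves.
  def tour(c: (Int, Int), visited: Set[(Int, Int)],
           path: List[(Int, Int)]): Option[List[(Int, Int)]] =
    if (visited.size == n * n) Some(path.reverse)
    else neighbors(c, visited)
           .sortBy(m => neighbors(m, visited + m).size)
           .headOption
           .flatMap(m => tour(m, visited + m, m :: path))

  // May print None for unlucky corners: the rule is only a heuristic.
  println(tour((0, 0), Set((0, 0)), List((0, 0))).map(_.size))
}
```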
Complex, contact, Riemannian, pseudo-Riemannian and Finsler geometry, relativity, gauge theory, global analysis. Are normal coordinates the same as Cartesian coordinates in flat space? Is the Frenet frame independent of the choice of parameters? What does the torsion-free condition for a connection mean in terms of its horizontal bundle? Can this integral be made nonpositive? Can the number of solutions to a system of PDEs be bounded using the characteristic variety? Let $(M,g)$ be a closed Riemannian manifold. Q: Is there any research about the existence of a nonvanishing Killing field, especially nontrivial examples? How to show that if $X$ is a Killing field then it is tangent to the geodesic spheres centred at a point $p$? Is Colding-Minicozzi entropy continuous w.r.t. $C^\infty$ convergence?
This is an introductory talk on 3-manifold topology, with a focus on constructive methods. In this talk, we introduce the BV formalism, and then construct cohomology classes in negative degrees in a toy model with supergravity. I'll start by giving a gentle introduction to $A$-infinity algebras and their Hochschild cochains. How do we test for membership in a permutation group? The PhD seminar is a weekly seminar held on Wednesday afternoons, from 2:30-3:30pm. I'll describe work with Terry Gannon and Corey Jones on "the modular data machine". In this talk, we consider the planar vortex patch problem for an incompressible steady flow in a bounded domain in $\mathbb R^2$.
This section explains how to import your terrain into Unreal Engine. When you have completed your terrain, export it using the file format Raw16, which is the optimal format according to EPIC. In this example, we have created a simple 2048x2048 terrain. In Unreal Engine, click on the Landscape icon and import your heightmap. The import settings adjust automatically to your file. Click on Import. The terrain imports, but the padding is untidy and the terrain looks flat. Now, we will fix this quickly. The table below lists the optimized landscape sizes recommended by EPIC, which guarantee a borderless terrain. The table is available at https://docs.unrealengine.com/latest/INT/Engine/Landscape/TechnicalGuide/. Crop the 2048x2048 terrain to a recommended resolution of 2017x2017. A Z scale of 100 corresponds to a height range of -256 m min. to 256 m max. Accordingly, a Z scale of 200 corresponds to a range of -512 m min. to 512 m max., 400 to a range of -1024 m min. to 1024 m max., etc. Check the min. and max. heights of the terrain and export it with a defined height rounded to the nearest power of 2. In this example, the heights are -467.04 min. to 224.70 max.; therefore, the terrain is exported with a defined height of -512 min. to 512 max. Reimport the terrain into Unreal and set the Z scale to 200. When the terrain is reimported, the landscape settings adjust automatically to your new heightmap resolution, according to EPIC's recommended sizes. The padding disappears and the heights appear correct. This section explains how to export your terrain as multiple files and import them into Unreal Engine. Right-click in the Graph Editor, select Create Node > Export > Multi file export terrain, and double-click to open its parameters. See the Multi file (tiled) export terrain node for more details. The following pattern works for UE4: "filename_X$x_Y$y.png" (UE4 requires an "X" and a "Y" before the coordinates of the tile). Note that UE4 forbids tiles of more than 1024 vertices. Be sure to check the size of your tiles. After the export is completed, you will have the following files in your Windows Explorer. Open your UE4 project, and when using the world composition import, specify whether or not to flip the Y coordinates. Do not check this box for Instant Terra. This option is checked by default. The terrain tiles appear in the correct order/position inside UE4.
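The Z-scale rule above is linear, so a tiny helper makes the conversion explicit (a sketch of my own; neither Instant Terra nor UE4 ships such a function):

```scala
object LandscapeZScale extends App {
  // UE4 Z scale for a symmetric height range of +/- halfRangeMeters,
  // derived from the rule that a Z scale of 100 spans -256 m..+256 m.
  def zScale(halfRangeMeters: Double): Double =
    halfRangeMeters / 256.0 * 100.0

  println(zScale(256))  // 100.0
  println(zScale(512))  // 200.0, matching the -512..512 example above
  println(zScale(1024)) // 400.0
}
```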
Structural Characterization of Beta Carbonic Anhydrases From Higher Plants. It is the goal of this dissertation research to reveal some aspects of the physical nature of spinach carbonic anhydrase as a representative $\beta$CA using the techniques of sequence comparison, molecular biology, and biophysics. Though both $\alpha$ and $\beta$ carbonic anhydrases are zinc dependent metalloenzymes, it is clear that the two isoforms do not adopt the same mechanism for coordinating the active site metal. While $\alpha$CA binds zinc through three histidine ligands, $\beta$CA cannot due to a lack of evolutionarily conserved histidines. Instead, the $\beta$ family has adopted a ligand scheme incorporating a single histidine and two cysteines. This has been determined by systematically mutating possible zinc ligands in the spinach enzyme and then assaying the resulting variants for stoichiometric metal binding. Additionally, this conclusion is corroborated by inspection of the wild type enzyme's extended X-ray spectrum. This analysis indicates the metal is surrounded by two sulfur atoms and two nitrogen or oxygen species. Secondly, it has been long established that not only do the $\beta$ isoforms differ from their $\alpha$ cousins in their multimeric assembly, but subtypes exist within the $\beta$ family in which monocot forms assemble into lower molecular weight oligomers while dicot forms assemble into higher order structures. In an attempt to gain insight into the differences between monocot and dicot CAs, the CA cDNA from barley, a monocot, was sequenced. Analysis of the open reading frame revealed that the barley enzyme lacked ten amino acids at the carboxyl terminus which are conserved in the dicot isozymes. It is here demonstrated that this extension contributes to the difference in multimeric organization between monocots and dicots. When this extension is deleted from the spinach enzyme, the resulting mutant displays an apparent deficit in its ability to form higher order multimers. Furthermore, this carboxyl extension will interact with the CA holoenzyme in the yeast two-hybrid system showing that the observed characteristics of the deletion mutant do not arise from secondary disruptions, but rather the carboxyl terminus does participate in intermolecular interactions. Bracey, Michael H., "Structural Characterization of Beta Carbonic Anhydrases From Higher Plants." (1998). LSU Historical Dissertations and Theses. 6655.
Practice is one of the best methods of learning a new language and its idioms. Since HackerRank recently released a new category of problems, Functional Programming, I decided to learn Scala by first studying its functional aspects. Although the problems are easy, introductory problems still have the potential to demonstrate good practices, as I've learned. The language is pleasantly concise and expressive, much as Python is, in terms of its anonymous and higher-order functions. However, the distinguishing features (and the highlights of this post) are Scala's method invocation conventions and the placeholders. This post will explore effective use of Scala-specific features as well as good practice learned from introductory-level problems. I assume that the reader is familiar with a closely related programming language such as Python or Java and is vaguely familiar with notions such as anonymous functions and higher-order functions. For each problem, the solution has the same structure. The solution structure can be broken into two sections: the solution implementation and the parsing/printing code. The programmer writes the former and HackerRank handles the latter; accordingly, they offer method headers to be implemented. When an object uses the App trait, its body becomes an executable entry point similar to the standard main() method in Java. Only the problem-specific solution implementations will be presented in this post. We can now begin solving problems. The necessary Hello World problem shows us the structure for a function in Scala. One important convention is that curly braces around the body of a function are optional (and frequently frowned upon) for one-line implementations. We simply print "Hello World". Here, we use the handy printing function, println(). The only noteworthy aspect is that it is more convenient to type than Java's System.out.println(). We simply print "Hello World" $n$ times. Intuitively, we would use a for-loop for this problem. Scala calls the for structure a for-comprehension because of its added capabilities. Within the for-comprehension, there is a convenience method, to. This convenience method is an easy way to generate a range of values, inclusive, $[1, n]$. This is similar to Python's range(); however, the final value is included in Scala's convenience method. This second set of problems highlights common list operations. The problem is to generate an arbitrary list of a given size. The List object can generate a list with $n$ repetitions of an element, $e$, using the List.fill(n)(e) method. The problem is to transform the negative elements of a list to positive elements. The greater issue is to realize that the lists in Scala are immutable. Immutable objects in a functional programming language mean that the data structure cannot modify its contents. For example, a Scala list is non-modifiable once it has been defined. Since we cannot transform the elements of the original list, we must construct a new list. Methods for constructing new lists from existing lists are called transformers. map is a transformer that applies a function to every element of the original list. Hence, we simply apply the math.abs() function to every element to yield a new list with only positive values. The x => expression syntax defines an anonymous function in Scala. Anonymous functions are functions without a name and are frequently used in higher-order functions for brevity. This is similar to lambda x: expression in Python.
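The post's original snippets are not reproduced above, so here are plausible Scala reconstructions of the introductory solutions just described (my own code, following the text, with made-up sample inputs):

```scala
object IntroSolutions extends App {
  val n = 3

  // Hello World n times: a for-comprehension over the inclusive to-range.
  for (_ <- 1 to n) println("Hello World")

  // Generate an arbitrary list of a given size with List.fill.
  val repeated = List.fill(n)(0)                        // List(0, 0, 0)

  // Turn negative elements positive with the map transformer.
  val positives = List(-1, 2, -3).map(x => math.abs(x)) // List(1, 2, 3)

  println((repeated, positives))
}
```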
Although interesting, this subject is beyond the scope of this post and I will pursue it no further. An education on functional programming is incomplete without higher-order functions. Higher-order functions are those functions which take functions as parameters or return functions. That is, they treat functions as first-class citizens. The map transformer in the previous problem is also a higher-order function by this definition. The objective of this problem is to filter out undesired elements in a given list. Intuitively, we use the filter transformer method to construct a new list without the undesirable elements. The filter transformer, by convention, should be used in infix notation as shown in the code example. This is a method invocation convention that is equivalent to arr.filter(...). The _ in the code is known as a placeholder. Effectively, the placeholder expands to an anonymous function in the tightest scope. In this case, it expands to x => x < delim. The placeholder is a nifty and concise feature of Scala. The problem is self-descriptive (like the others): sum every element of the list that is odd. Simply, we should filter the list to only include odd elements and then use the sum() method on lists. Effectively, the placeholder expands to an anonymous function in the tightest scope; in a compound expression it can therefore expand to something other than the function we intend, which is not what we want. Consequently, it is necessary in this case to explicitly write the anonymous function and its parameters. The problem is to replicate each element of the list $n$ times where $n$ is given. Here, we introduce another higher-order function that is not necessarily a transformer, foldLeft. foldLeft has its counterparts in other functional programming languages such as Lisp and Scheme, but it always has the same structure. The first argument is the starting value, $s$; the second argument is the binary function literal, $f$, that operates on a running accumulator and an element of the list. Consequently, it is parameterized as follows: foldLeft(s)(f). The fundamental idea is that, for every element in the list, we apply a function to it and append/concatenate to the accumulator value, which is initially set to the value of $s$. In our case, we generate a list of repeated elements for each element in the original list. Then, given the lists of repeated elements, we concatenate them together. The problem is to compute the length of a given list without using the builtin size() method. Trivially, we should use the foldLeft method again to simply add 1 to our accumulator for each element in the list. The accumulator should start at 0. Where is the foldLeft? Well, it turns out that the /: operator is a shorthand for foldLeft where the lefthand operand is the starting value and the righthand operand is the list that it operates on. Furthermore, the function literal argument is adjacent to this operator in its own argument list as before. These problems utilize higher-order functions with a few fancy tricks on the side. Namely, the fancy tricks are views and prepending. zip is a function that takes two lists, $A$ and $B$, and produces a new list where each element is a tuple $(a_i, b_i) \in A\times B$ such that $i$ denotes the index of the element in the corresponding lists. Zipping the list naively will result in two iterations through the list, which inherently doubles the running time, so we use views. A view is simply a lazy proxy for any collection that enables lazy evaluation with transformers.
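Again, these are plausible reconstructions of the snippets this section describes (my own code, following the text's account of filter, placeholders, foldLeft, and the /: shorthand):

```scala
object HigherOrderSolutions extends App {
  val arr = List(1, 8, 2, 9, 3)
  val delim = 5

  // Infix filter with a placeholder: expands to x => x < delim.
  val filtered = arr filter (_ < delim)              // List(1, 2, 3)

  // Sum of odd elements; the predicate is written out explicitly.
  val oddSum = arr.filter(x => x % 2 == 1).sum       // 1 + 9 + 3 = 13

  // Replicate each element n times, concatenating with foldLeft.
  val n = 2
  val replicated = arr.foldLeft(List[Int]()) { (acc, x) =>
    acc ++ List.fill(n)(x)
  }

  // List length via the /: shorthand for foldLeft, starting at 0.
  val len = (0 /: arr) { (count, _) => count + 1 }   // 5

  println((filtered, oddSum, replicated, len))
}
```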
Thanks to lazy evaluation, the zipping is deferred until the filter and map transformations are applied, so that only one iteration through the lists is necessary. Lazy evaluation is another popular topic in functional programming, like anonymous functions; briefly, it is a method for delaying evaluation of an expression until it is needed. Note that _._1 and _._2 refer to the first and second components of the tuples constructed by the zipping operation.

Next, we must reverse a list without using the builtin reverse() method. Because the operation would again be a mutation, we must utilize a transformer; in particular, we choose foldLeft. Here, we mimic a pseudo-stack data structure: we pop the head off the current list and push it onto the stack. The result is an implicit reversal of the elements.

The remaining problems are no longer intended as simple introductions. Now, Scala may be applied functionally using all of the learned features. I leave it to the reader to judge the best approach to each problem; I post my solutions for comparison. One problem is to approximate the value of a transcendental function, $e^x$, using a series. Another is an approximation application as well, where the problem is calculating discrete integrals over two and three dimensions.

Although these HackerRank problems are intended as an introduction to functional programming, there is still room to apply relatively advanced concepts such as higher-order functions and anonymous functions, and to exploit Scala-specific idioms. Scala has also shown concise and expressive syntax for functional programming. If you would like to continue learning Scala, keep practicing. We will continue exploring Scala by completing the HackerRank Recursion subcategory for Functional Programming.
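For illustration, here is how the view-based zip and the fold-based reversal might look — the surrounding problem statements didn't survive extraction, so the pairwise-product function is an invented stand-in for whichever filter/map pipeline the post applied:

```scala
object ListSolutions extends App {
  // Zip lazily through a view: zip, filter and map are all deferred
  // until sum forces them, so the data is traversed only once.
  def oddFirstProducts(a: List[Int], b: List[Int]): Int =
    a.view.zip(b).filter(_._1 % 2 != 0).map(p => p._1 * p._2).sum

  // Reverse without reverse(): the accumulator acts as a stack;
  // prepending each head yields the elements in reverse order.
  def rev[A](arr: List[A]): List[A] =
    arr.foldLeft(List.empty[A])((stack, x) => x :: stack)

  println(oddFirstProducts(List(1, 2, 3), List(4, 5, 6)))  // 1*4 + 3*6 = 22
  println(rev(List(1, 2, 3, 4)))                           // List(4, 3, 2, 1)
}
```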
What is the difference between the three terms below? Percentiles go from $0$ to $100$. Quartiles go from $1$ to $4$ (or $0$ to $4$). Quantiles can go from anything to anything. Percentiles and quartiles are examples of quantiles. percentile: a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. quantile: values taken from regular intervals of the quantile function of a random variable. For instance, for some integer $k \geq 2$, the $k$-quantiles are defined as the values $Q_X(j/k)$ for $j = 1, 2, \ldots, k - 1$. It may be helpful for you to work out an example of what these definitions mean when, say, $X \sim U[0,100]$, i.e. $X$ is uniformly distributed from 0 to 100. The difference between quantile, quartile and percentile then becomes obvious.
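Working out that suggested example explicitly (my own elaboration, not part of the original answer): for $X \sim U[0,100]$ the CDF is $F(x) = x/100$, so the quantile function is linear and the three terms line up as follows.

```latex
Q_X(p) = F^{-1}(p) = 100\,p, \qquad 0 < p < 1.
% Quartiles are the k-quantiles with k = 4:
Q_X(1/4) = 25, \quad Q_X(2/4) = 50, \quad Q_X(3/4) = 75.
% Percentiles are the k-quantiles with k = 100:
Q_X(j/100) = j \quad \text{for } j = 1, 2, \ldots, 99.
```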
I would use some graph paper and plot a few points. For example if $x = 0$ then $y = 2 \times 0 - 1 = -1$ and hence $(0, -1)$ is on the graph. Now try $x = 1.$ When $x = 1, y = 2 \times 1 - 1 = 1$ and hence $(1, 1)$ is on the graph. Plot a few more points. For example $x = 2, x = -1$ and a few more. What do you think is the shape of the graph? Can you draw it?
The zero-term rank of a matrix is the minimum number of lines (rows or columns) needed to cover all the zero entries of the given matrix. We characterize the linear operators that preserve the zero-term rank of the $m \times n$ real matrices. We also obtain an equivalent combinatorial condition for the zero-term rank of a real matrix.
Abstract: In order to build a large-scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes to correct Pauli errors, erasure, and combinations of both. Our algorithm has a worst-case complexity of $O(n \alpha(n))$, where $n$ is the number of physical qubits and $\alpha$ is the inverse of Ackermann's function, which is very slowly growing. For all practical purposes, $\alpha(n) \leq 3$. We prove that our algorithm performs optimally for errors of weight up to $(d-1)/2$ and for loss of up to $d-1$ qubits, where $d$ is the minimum distance of the code. Numerically, we obtain a threshold of $9.9\%$ for the 2d-toric code with perfect syndrome measurements and $2.6\%$ with faulty measurements.
Here, we study the vacua of an $SU(3)\times SU(3)$-symmetric model with a bifundamental scalar. Structures of this type appear in various gauge theories such as the Renormalizable Coloron Model, which is an extension of QCD, or the Trinification extension of the electroweak group. In other contexts, such as chiral symmetry, $SU(3)\times SU(3)$ is a global symmetry. As opposed to more general $SU(N)\times SU(N)$ symmetric models, the $N=3$ case is special due to the presence of a trilinear scalar term in the potential. We find that the most general tree-level potential has only three types of minima: one that preserves the diagonal $SU(3)$ subgroup, one that is $SU(2)\times SU(2)\times U(1)$ symmetric, and a trivial one where the full symmetry remains unbroken. The phase diagram is complicated, with some regions where there is a unique minimum, and other regions where two minima coexist. Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States), Theoretical Physics Dept. Bai, Yang, and Dobrescu, Bogdan A. "Minimal SU(3)×SU(3) symmetry breaking patterns." Phys. Rev. D 97, 055024 (2018). doi:10.1103/PhysRevD.97.055024.
I was watching a lecture where the professor was describing the mathematics of the ancient Greeks and said they had division because "that's just fancy subtraction." That line got me thinking, because it doesn't seem to be quite true. Multiplication is certainly "fancy addition" because you can turn the equation 3 x 5 = 15 into an addition problem with "=15" at the other side. But division can't be turned into a subtraction problem: you can't take 15/3=5 and make a subtraction problem with "=5" at one side. I have read in many places that division is subtraction in the sense of 15/3=5 turned into 15-3-3-3-3-3=0, where you then count the number of times you subtracted 3. But I take a lot of issue with that explanation of division because a) it does not parallel multiplication, b) it does not have "=5" on one side of the new equation, and c) it's simply a "count" of something. It's like viewing the equation from the outside and seeing how many times you did something, like a counter in programming; it's not fundamental to the equation. Essentially, I'm hoping you can give me some guidance on the essential theory of division in terms of how it relates to subtraction and how it parallels multiplication. I have been turning this question over in my head for many weeks now and I've come up with a theory (I'm not close to being even an amateur mathematician but I love solving problems), but it has to do with redefining multiplication and I think it would obfuscate the point.

That is the nature of inverse operators: just as subtraction is the inverse of addition, division is the inverse of multiplication. An inverse operation does something different — it answers the question of which number would make the result of the direct operation what we want. That is, a/b asks for the number c such that b*c=a; 15/3 looks for c such that 3c=15, so you can repeatedly add 3's until you get 15 and count how many you added. Equivalently, you can subtract 3's from 15 until you get 0. Therefore, division is similar to multiplication, but different.

I think you may be taking the "fancy subtraction" remark a little too seriously. Nevertheless, division is related to subtraction, as shown in the answers to How to divide using addition or subtraction. Why should division "parallel" multiplication? Does subtraction "parallel" addition? Subtraction and addition do not work the same: for one thing, addition is commutative but subtraction is not. In fact subtraction undoes addition; $8 - 5$ is the unique solution for $x$ in the equation $5 + x = 8.$ Similarly, $15/3$ is the unique solution for $y$ in the equation $3 \times y = 15$. Division is to multiplication as subtraction is to addition. Subtraction is related to but not exactly like addition; division can be calculated by repeated subtraction in a way that is related to (but not exactly like) the way multiplication can be calculated by repeated addition. As for the objection about "=5": this is a meaningless formality. Addition, subtraction, multiplication, and division are not defined by the symbols with which we express them. The ancient Greeks would not even understand what you meant by that objection, since they had never seen an equality sign. The $5$ in $3 \times 5 = 3 + 3 + 3 + 3 + 3 = 15$ is a "count" of something, too: it counts the number of times $3$ appears in the sum. For that matter, all integers can be regarded as a "count".
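To make the "count of subtractions" view concrete — a minimal sketch, assuming non-negative integers:

```scala
// Division as counted subtraction: keep removing b from a until
// what remains is smaller than b. The count is the quotient --
// i.e. the c that answers "for which c does b * c reach a?".
def divide(a: Int, b: Int): (Int, Int) = {
  require(a >= 0 && b > 0, "assumes a >= 0 and b > 0")
  var rest = a
  var count = 0
  while (rest >= b) {
    rest -= b
    count += 1
  }
  (count, rest) // (quotient, remainder)
}

println(divide(15, 3)) // (5, 0): 15 - 3 - 3 - 3 - 3 - 3 = 0
println(divide(17, 5)) // (3, 2)
```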
Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments, which surprisingly show that already the standard backpropagation algorithm generates deep neural networks obeying those optimal approximation rates.
This project examines the effect of subway delays upon taxi ridership. We combined publicly available data on subway delays and taxi pickups, which were spatially joined using a subway station shapefile. We had assumed that subway delays would yield higher taxi ridership as some subway riders shifted their preference to taxis to arrive at their intended destinations. However, our study had mixed results, with some stations exhibiting increases in taxi hails and others experiencing decreases. This was tested on a station-by-station basis. We further speculated that this effect might be strongest for people whose travels were time sensitive. We defined these time-sensitive travels in aggregate as people traveling during the morning and evening rush hours. We therefore also broke the data into time bands and tested the null hypothesis for each band.

Data for taxi ridership and subway delays was collected for June 2016 between the hours of 7am and 8pm. We used publicly available data on taxi rides published by the Taxi and Limousine Commission for yellow and green taxi cabs. This dataset contains an entry for each ride during the month and has a total size of 1.75 GB for yellow and 0.24 GB for green. We ultimately needed to match taxi ridership with subway delays by station, date and hour. For June 2016, there were thus 198,660 keys to match on, given 30 days x 14 hours x 473 stations. Taxi data was pulled from here. We used subway time data collected by Nathan Johnson. The data is in GTFS format, 6.3 GB in size, and contains 125,092 files for June 2016. Additionally, we used the subway station shapefile in order to filter taxi rides based on adjacency to a station. Unfortunately, the available subway entrances shapefile does not identify which station an entrance belongs to, and it was not clear how to map entrances to stations. We therefore had to use the station file, which represents each station as a single point.

Ridership data was read in as a PySpark RDD and the subway stations shapefile was read in as a GeoPandas geodataframe. We defined our taxi rides of interest as those occurring within 300 feet of a subway station point, noting that the point is not necessarily reflective of a subway entrance. In order to identify the locations, the subway station shapefile was converted from latitude/longitude to northing/easting. We then created a new geometry column representing a 300-foot buffer around each subway station point. The original RDD was mapped to a new RDD limited to a tuple containing date and hour plus the station, for only those rides with pickups adjacent to a station. Rides also needed to be converted from latitude/longitude to northing/easting using a PyProj projection. We then filtered the dataset to include only the hours of interest (7am - 8pm). However, this did not yet create an aggregate ride count for each date and hour, as each pickup was still identified separately. In order to count rides by station and time, we mapped the station into a list. Station was identified by an integer value, so it was converted into a string. This mapping enabled us to Reduce By Key, where each date-hour tuple was considered a key and the station values were turned into a list through addition. Lastly, we mapped the value of station lists to a Counter of each station with the number of rides for each date-hour tuple key. The RDD was then collected and converted to csv for further analysis and easier visualization using GeoPandas, Pylab and sklearn.
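The project itself used PySpark; to keep this page's examples in one language, here is how the core of the described pipeline might look in Spark's Scala API. All names are my assumptions, and the nearest-station test is simplified to a point-distance check against the buffered station points:

```scala
import org.apache.spark.rdd.RDD

// A taxi pickup and a station, both already projected to
// northing/easting coordinates in feet (the PyProj step).
case class Ride(date: String, hour: Int, x: Double, y: Double)
case class Station(id: String, x: Double, y: Double)

// Keep rides within 300 ft of some station and within 7am-8pm,
// then aggregate to per-(date, hour) counts per station --
// mirroring the map -> filter -> reduceByKey -> Counter steps.
def stationCounts(rides: RDD[Ride],
                  stations: Seq[Station]): RDD[((String, Int), Map[String, Int])] =
  rides
    .filter(r => r.hour >= 7 && r.hour <= 20)
    .flatMap { r =>
      stations.collectFirst {
        case s if math.hypot(r.x - s.x, r.y - s.y) <= 300.0 =>
          ((r.date, r.hour), Map(s.id -> 1))
      }
    }
    .reduceByKey { (a, b) =>
      (a.keySet ++ b.keySet)
        .map(k => k -> (a.getOrElse(k, 0) + b.getOrElse(k, 0)))
        .toMap
    }
```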
This code runs in approximately 3 minutes and reduces nearly 12 million rides to 1.3 million rides contained within 69,710 lines. This is significantly fewer than the expected 198,660 lines because many stations, particularly in the outer boroughs, have no taxi pickups for a given time. We had initially intended to analyze New York City as a whole, but the low density of taxi rides in many areas of the city practically constrained us to Manhattan. Ride density can be seen in Figure 1 below. Many stations had between 5 and 20 rides in a given time period, with a mean of 20 and a median of 11. Figure 1: Map of the 1-2-3 and 4-5-6 subway lines with stations colored by mean density of taxi pickups in the area. Stations with fewer than 5 rides are excluded from the map.

We read the data as a PySpark RDD and performed the necessary steps to obtain the delay status: we first filtered out all hours that we were not interested in by the filename; we aggregated the subway arrival times by date, station, and line into a list using ReduceByKey (in this step, we also merged the 2 and 3 lines as well as the 4 and 5 lines due to the similarity of their routes in Manhattan); for each key (date, station, line), we calculated the time delta between the previous arrival time and the current arrival time, and got the arrival hour (mapValues); calculated the delay threshold for each line (groupByKey, mapValues); assigned "1" to every record considered a delay (mapValues); and aggregated by hour, station, and date.

We created a unique key column consisting of station-date-hour and joined the two datasets in Pandas. Stations with fewer than 5 rides on average were dropped from the dataset. Once we had limited the stations to include only those with realtime GTFS data available and greater than 5 rides in an average hour, we were left with 48 stations of interest. T-Tests were then run on a station-by-station basis in order to compare the distributions of taxi riders during delay and non-delay times. We also broke the dataset into three time bands, morning (7-10), evening (4-7) and off-hours (times in between), in case the population was sensitive depending on time and delays were unevenly distributed. T-Tests were then run on these subsets as well on a station-by-station basis.

As we can see in Figure 3, time between arrivals varies depending on the line. The 95th percentiles for lines 1, 23, 45, 6, and L are 17.77, 21.55, 21.07, 18.72, and 11.72 respectively. We performed a one-sample T-test on the mean number of taxi rides with delay versus without delay. The mean number of taxi rides during good service is 22.61, while during delays it is slightly higher at 23.37. With a confidence interval of 95% (α=0.05), the test returned t=2.62752 and p=0.00860418. Because p < 0.05, we reject the null hypothesis; thus, we can conclude that the mean number of taxi pickups near subway stations with delay is significantly higher than without delay.

We also dug further into the temporal and spatial patterns. From Figure 4, we found that some hours are sensitive to delay (black dots). Interestingly, during morning rush hours (7-9 am) and at 5pm, the number of taxi pickups with delay is significantly lower than without delay. This is probably because people tend to avoid traffic jams on the street during rush hours and prefer to take the subway even when there is a delay. At 11 am, 12 pm, 3 pm, and 7 pm, the number of taxi pickups with subway delay is significantly higher than without delay.
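For reference, the statistic behind these delay vs. non-delay comparisons has the usual two-sample form below; whether the analysis used the pooled or the Welch variant isn't stated in the write-up, so take this as the generic template.

```latex
t \;=\; \frac{\bar{x}_{\mathrm{delay}} - \bar{x}_{\mathrm{no\,delay}}}
             {\sqrt{\dfrac{s^2_{\mathrm{delay}}}{n_{\mathrm{delay}}}
                  + \dfrac{s^2_{\mathrm{no\,delay}}}{n_{\mathrm{no\,delay}}}}}
```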
Those are hours when people usually go out for leisure (lunch, socializing), so perhaps the urge to get back to the office quickly, or to meet other people/clients, drives New Yorkers to use taxis more in these hours when there is a subway delay. As for the spatial pattern, there are 17 stations that are sensitive to subway delay in terms of taxi pickups. Stations that have more taxi pickups with delay are marked with blue dots and the opposite ones are marked with red dots. The blue stations are mainly located below 23rd St, such as 14th St, Spring St, Fulton St and Wall St. Interestingly, all of the red ones except Penn Station are stations on the 6 line, including Grand Central.

Lastly, taxi rides were grouped into time bands and T-Tests were run for each time band and station in order to further investigate sensitivity to delays over time. Rides were grouped into morning (7-11am), evening (5-8pm) and off-peak (11-5) and T-Tests were run for each station. Since the earlier results had suggested that taxi pickups might decline during delay hours in some instances, we also tested an additional null hypothesis at a significance level of $\alpha = 0.05$. Overall, it seemed that rush hour times were associated with lower taxi ridership during delay periods as opposed to off-peak periods, as summarized in Table 1. Table 1: Number of stations that were found to have significantly different taxi ridership during delay vs. non-delay periods. Only the Brooklyn Bridge-City Hall station was found to have higher ridership during delay periods across all time slots. 6 stations were found to have significantly lower ridership during delay periods across all time slices. These stations are: 23rd St, 28th St, 33rd St, 68th St-Hunter College, 77th St and 86th St, all on the 4-5-6.

Lastly, we took Penn Station as a case study, since this station had the most surrounding taxi pickups by far, averaging over 200 pickups per hour as compared to the next highest of 75. Penn Station exhibited a statistically significant increase in pickups during delays for evenings and off-hours, but a statistically significant decrease in pickups during delays for mornings.

We note several limitations to our study. Namely, we needed to assume that riders choosing to hail a cab in response to subway delays were within a certain proximity to the subway station when they made this decision. This was done in an attempt to capture riders who may have arrived at a station and learned of a delay there before changing mode. However, delay information is easily available online, so many customers may make that decision before leaving their house, particularly in time-sensitive situations. Secondly, we did not control for the availability of alternative subway routes or other means of transportation near the station, as it was outside the scope of this project.

Overall, it seems that certain stations are sensitive to subway delays while others are not. Despite our expectation that sensitivity would lead to more pickups, we often found that the opposite was true. Our hypothesis was founded on the idea that commuters may have a greater interest in getting places on time than people traveling for leisure. However, this study seems to suggest that they may be more willing to spend extra money on a taxi cab when their personal plans are at stake. It is also possible that many people use taxis as the second phase of their commute and, in times of delays, there are fewer trains arriving and thus fewer people available to take cabs.
We consider the problem of variable group selection for least squares regression, namely, that of selecting groups of variables for best regression performance, leveraging and adhering to a natural grouping structure within the explanatory variables. We show that this problem can be efficiently addressed by using a certain greedy style algorithm. More precisely, we propose the Group Orthogonal Matching Pursuit algorithm (Group-OMP), which extends the standard OMP procedure (also referred to as the "forward greedy" feature selection algorithm for least squares regression) to perform stage-wise group variable selection. We prove that under certain conditions Group-OMP can identify the correct (groups of) variables. We also provide an upper bound on the $l_\infty$ norm of the difference between the estimated regression coefficients and the true coefficients. Experimental results on simulated and real world datasets indicate that Group-OMP compares favorably to Group Lasso, OMP and Lasso, both in terms of variable selection and prediction accuracy.
The main results announced in this note are an asymptotic expansion for ergodic integrals of translation flows on flat surfaces of higher genus (Theorem 1) and a limit theorem for such flows (Theorem 2). Given an abelian differential on a compact oriented surface, consider the space $\mathfrak B^+$ of Hölder cocycles over the corresponding vertical flow that are invariant under holonomy by the horizontal flow. Cocycles in $\mathfrak B^+$ are closely related to G. Forni's invariant distributions for translation flows. Theorem 1 states that ergodic integrals of Lipschitz functions are approximated by cocycles in $\mathfrak B^+$ up to an error that grows more slowly than any power of time. Theorem 2 is obtained using the renormalizing action of the Teichmüller flow on the space $\mathfrak B^+$. A symbolic representation of translation flows as suspension flows over Vershik's automorphisms allows one to construct cocycles in $\mathfrak B^+$ explicitly. Proofs of Theorems 1 and 2 are given in the full article. Keywords: limit theorems, Hölder cocycles, translation flows, Forni's invariant distributions, Teichmüller flow, abelian differentials, Vershik's automorphisms. Mathematics Subject Classification: Primary: 37A50; Secondary: 60F9.
A rough analytical expression for the Milky Way's radial mass distribution? I found the image below in Space.com's article This 3D Color Map of 1.7 Billion Stars in the Milky Way Is the Best Ever Made, although it is not the map mentioned in the title. If you imagine a band along the galactic equator, the dominant velocity shows two positive and two negative "peaks", with a zero crossing in the direction of the galactic center. Purely for fun I wanted to see if I could reproduce this behavior with a simple calculation based on a 2D model assuming circular motion and a radial density distribution $\rho(r)$, which I could then use to figure out a rotational velocity distribution $v(r)$, but I swiftly realized that I have no idea what the density profile would look like. For the purposes of this simple exercise, what would be an analytical expression that roughly matches the Milky Way's radial density profile, projected onto its equatorial plane? For spherically symmetric distributions, Newton's Shell theorem allows one to treat all mass inside a sphere defined by an orbit's radius as if it were at the center, and to ignore all mass in the shell outside of that radius. Is there anything like an analog to this for a radial distribution within a plane?

The simplest such profile is the singular isothermal sphere, $\rho(r) = \frac{v^2}{4\pi G r^2}$, where $v$ is the rotational velocity. Note that you may see other formulations which use $\sigma_v$ rather than $v$; in this case they're using the velocity dispersion, which is slightly different from the rotational velocity. Other, more realistic density profiles have been found by running simulations of the Universe and matching functional equations to the density profiles of the resulting galaxies. Popular such results are the NFW profile, $\rho(r) = \rho_0 \left[\frac{r}{R_S}\left(1+\frac{r}{R_S}\right)^2\right]^{-1}$, where $\rho_0$ and $R_S$ are two halo-dependent parameters, and the Einasto profile, $\rho(r) \propto \exp(-A r^\alpha)$, where $A$ and $\alpha$ are configurable parameters. The Shell theorem for gravity does not extend to a 2D ring. However, I will say that when talking about the orbits of stars in galaxies, the mass of stars outside a star's orbit is generally considered negligible. The primary reason for this is that it is Dark Matter which comprises most of the mass of a galaxy and contributes the most to defining a star's orbit. The Dark Matter halo is often assumed to be spherically symmetric, in which case Newton's Shell theorem does apply, and the mass you're concerned with in determining a star's orbit is the mass of the Dark Matter halo interior to the star's orbit.

@Rob Jeffries mentioned that "You get the density distribution by looking at the velocity data." I also believe this is what you are looking for, so I will give some calculation details. Since observationally we can construct the rotation curve $v = f(R)$, the density profile is then a function depending only on $R$: $\rho = g(R)$, i.e., the radial mass distribution. Some notes: i) the mass $M$ includes dark matter; ii) $v$ is the tangential velocity, not the radial velocity as presented in the figure you mentioned.
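Filling in the calculation that the last answer alludes to (a standard derivation; the spherical-symmetry assumption is what lets the shell theorem apply): for a circular orbit, equating centripetal and gravitational acceleration gives the enclosed mass, and differentiating gives the density.

```latex
\frac{v(R)^2}{R} = \frac{G\,M(R)}{R^2}
\quad\Longrightarrow\quad
M(R) = \frac{v(R)^2\,R}{G},
\qquad
\rho(R) = \frac{1}{4\pi R^2}\frac{dM}{dR}
        = \frac{v^2}{4\pi G R^2}\left(1 + \frac{2R}{v}\frac{dv}{dR}\right).
% A flat rotation curve (dv/dR = 0) recovers the isothermal 1/R^2 profile.
```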
Let $J$ be the $n \times n$ Jordan block corresponding to the eigenvalue $1$. For any natural number $r$, is it true that the minimal polynomial for $J^r$ is $(X-1)^n$? Another way to think about it is to produce a cyclic vector for $J^r$. I can't prove it. I need some help. Thanks.

Hint: write $J=I+N$ where $N$ is the shift matrix. $N$ is nilpotent with index $n$. Now expand $J^r=(I+N)^r=\ldots$ and find out the smallest $m$ we need in order to have $(J^r-I)^m=0$.

Since $\operatorname{rank}(J-I)=\operatorname{rank}(J^r-I)$, the geometric multiplicity of the eigenvalue $1$ is the same ($=1$) in both cases, and hence the minimal polynomial is the same.
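Spelling out the hint (with $N$ the nilpotent shift, so $N^n = 0$ but $N^{n-1} \neq 0$):

```latex
J^r - I = (I+N)^r - I
        = \binom{r}{1}N + \binom{r}{2}N^2 + \cdots + \binom{r}{n-1}N^{n-1}
        = N\Bigl(\,rI + \binom{r}{2}N + \cdots\Bigr),
```

and the parenthesized factor is invertible (it is upper triangular with the nonzero value $r$ on the diagonal). Hence $(J^r - I)^m = N^m \cdot (\text{invertible})$, which vanishes exactly when $m \geq n$, so the minimal polynomial of $J^r$ is $(X-1)^n$.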
have compact closure in $Y$. Now I do not see why the Hausdorff condition on $X$ should be necessary. Why include it then? Am I maybe even missing something here (and there are counterexamples)? By the way, if you are looking up the proof: Hausdorffness is needed for the evaluation map $e: X \times \mathcal C(X,Y) \to Y, \, e(x,f) = f(x)$ to be continuous. But the only thing really used in the proof is the continuity of $e_a: \mathcal C(X,Y) \to Y, \, e_a(f) = f(a)$ for fixed $a \in X$.

I think this question has already been answered through the helpful comments. So thanks to Henno Brandsma and t.b.! This is just to finally tick it off. My conclusion: it seems that $X$ being Hausdorff is rather a matter of convenience (maybe to avoid issues with the definition of local compactness for non-Hausdorff spaces, as pointed out in the comments) than a necessary condition. Also, this version of the theorem seems quite general enough for most uses.
"Buckling Eigenvalues for a Clamped Plate Embedded in an Elastic Medium" by Bernhard Kawohl, Howard A. Levine et al. This paper considers the dependence of the sum of the first m eigenvalues of three classical problems from linear elasticity on a physical parameter in the equation. The paper also considers eigenvalues $\gamma _i (a)$ of a clamped plate under compression, depending on a lateral loading parameter $a;\Lambda i(a)$, the Dirichlet eigenvalues of the elliptic system describing linear elasticity depending on a combination a of the Lame constants, and eigenvalues $\Gamma _i (a)$ of a clamped vibrating plate under tension, depending on the ratio a of tension and flexural rigidity. In all three cases $a \in [0,\infty )$. The analysis of these eigenvalues and their dependence on a gives rise to some general considerations on singularly perturbed variational problems. This is an article from SIAM Journal on Mathematical Analysis 24 (1993): 327, doi:10.1137/0524022. Posted with permission. Kawohl, Bernhard; Levine, Howard A.; and Velte, Waldemar, "Buckling Eigenvalues for a Clamped Plate Embedded in an Elastic Medium and Related Questions" (1993). Mathematics Publications. 49.
Make a list of the square numbers up to $8\times 8$. Using the interactivity might help you get started. Don't forget you can always change your mind and alter your solution as you go along!
Quantum annealing is an optimization protocol that, thanks to quantum tunneling, allows one in given circumstances to maximize/minimize a given function more efficiently than classical optimization algorithms. A crucial point of quantum annealing is the adiabaticity of the algorithm, which is required for the state to stay in the ground state of the time-dependent Hamiltonian. This is however also a problem, as it means that finding a solution can require very long times. How long do these times have to be for a given Hamiltonian? More precisely, given a problem Hamiltonian $\mathcal H$ of which we want to find the ground state, are there results saying how long it would take a quantum annealer to reach the solution?

The time to solution (tts) is highly dependent on the Hamiltonian of the problem one would like to solve. The D-Wave uses a spin-glass-like Hamiltonian, whose ground-state problem can be NP-complete. Due to having to run the annealing process multiple times, tts measures are typically quantified by how long it takes to find the ground state some percent of the time. Here's a paper by some colleagues that explains tts (see especially equation 3).
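For reference, the conventional time-to-solution measure the answer points to is shown below, where $t_a$ is the duration of one anneal, $p(t_a)$ is the per-run probability of reaching the ground state, and $p_d$ is the desired aggregate success probability (commonly 0.99); separately, the adiabatic theorem suggests run times scaling like the inverse square of the minimum spectral gap $\Delta_{\min}$. Whether the cited paper's equation 3 matches this form exactly is an assumption on my part.

```latex
\mathrm{TTS}(t_a) \;=\; t_a \,\frac{\ln(1 - p_d)}{\ln\!\bigl(1 - p(t_a)\bigr)},
\qquad
T_{\mathrm{adiabatic}} \;=\; O\!\bigl(\Delta_{\min}^{-2}\bigr).
```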
I sketched four vertical asymptotes, and a sketch showed that a function which decayed to zero from above as $x \rightarrow \pm \infty$ could have the right sorts of properties. This worked: it has a turning point at $x$ between $-2$ and $-1$, another turning point at $x$ between $1$ and $2$, and a turning point at $x=0$. It seems likely that many such curves, with differing constants, would also give the correct behaviour. To see why, note that upon differentiation I get a cubic polynomial divided by another polynomial. For zeros, the numerator would need to be zero, and a cubic can have three real roots. I could choose the constants to have the correct number of real roots. I then considered the second request. Initially, I thought that this seemed impossible, but then started to work through the possibilities for asymptotes. By turning the middle turning point into a point of inflection I would have a graph with the correct behaviour.
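The explicit equations in this write-up were images that did not survive, so here is a hypothetical reconstruction: a curve with exactly the described features (vertical asymptotes at $x = \pm 1, \pm 2$, decay to zero from above, and turning points at $x = 0$ and $x = \pm\sqrt{5/2} \approx \pm 1.58$) is

```latex
y = \frac{1}{(x^2-1)(x^2-4)},
\qquad
\frac{dy}{dx} = \frac{-2x\,(2x^2-5)}{\bigl[(x^2-1)(x^2-4)\bigr]^2},
```

whose derivative is indeed a cubic over a polynomial, vanishing at the three stated turning points.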
Abstract. A Borg-type uniqueness theorem for matrix-valued Schrödinger operators is proved. More precisely, assuming a reflectionless potential matrix and spectrum a half-line $[0,\infty)$, we derive triviality of the potential matrix. Our approach is based on trace formulas and matrix-valued Herglotz representation theorems. As a by-product of our techniques, we obtain an extension of Borg's classical result from the class of periodic scalar potentials to the class of reflectionless matrix-valued potentials.
The statistical comparison of competing algorithms is a fundamental task in machine learning. It is usually carried out by means of a null hypothesis significance test (nhst). Yet, nhst has many well-known drawbacks. For instance, nhst can either reject the null hypothesis or fail to reject it; it cannot verify the null hypothesis: when failing to reject it, the test is not stating that the hypothesis is true. Nhst is thus unable to conclude that two classifiers are equivalent. Moreover, the claimed statistical significances do not necessarily imply practical significance. Nhst rejects the null hypothesis when the p-value is smaller than the test size $\alpha$. Yet the p-value depends both on the effect size (the actual difference between the two classifiers) and the sample size (the number of collected observations). Null hypotheses can virtually always be rejected by collecting a sufficiently large number of data points (for instance by comparing two classifiers on a large collection of data sets). There are many other drawbacks, such as dependence on the sampling intention and the lack of a sound way of deciding the size $\alpha$ of the test. We will discuss how such issues can be overcome by adopting Bayesian analysis. The Bayesian approach is generally regarded as the most principled approach for learning from data and for reasoning under uncertainty; yet it is not yet adopted in machine learning for model comparison, despite its numerous advantages. The tutorial covers the concepts and hands-on use of modern algorithms ("Dirichlet process", "Markov chain Monte Carlo") that achieve Bayesian analysis for realistic applications, and how to use the free software R, Python, Julia and STAN for Bayesian analysis. We will present Bayesian algorithms for the comparison of classifiers on single and multiple data sets, as replacements for the traditional signed-rank test, sign test, t-test, etc. To this end, we will discuss parametric and non-parametric approaches for Bayesian hypothesis testing and how to present the results of Bayesian analysis. We will conclude by showing how to use the existing software for Bayesian comparison of classifiers. PDF: G. Corani, A. Benavoli, J. Demsar, F. Mangili, and M. Zaffalon. Statistical comparison of classifiers through Bayesian hierarchical modelling. Senior Researcher and Lecturer at the Dalle Molle Institute for Artificial Intelligence (IDSIA), Switzerland. Research interests: Bayesian machine learning, probabilistic graphical models, applied statistics. Co-author of about 60 papers in conferences and journals, including IJCAI, ECAI, ICML, JMLR, ECML, NIPS. Program co-chair of the International Conference on Probabilistic Graphical Models (PGM 2016). Speaker in previous tutorials on robust Bayesian networks at AAAI 2010 and IJCAI 2013. He is associate professor at the Faculty of Computer and Information Science, Ljubljana (Slovenia). PhD from the Faculty of Computer and Information Science in Ljubljana (2002). Recipient of several prizes: teacher of the year (2008-2015); award for current research work (Slovenian Information Society, 2014). His research interests include machine learning, statistics and computer science education. His paper on the statistical comparison of classifiers (JMLR, 2006) has more than 4000 citations. He received all his degrees in Computer and Control Engineering from the University of Firenze, Italy: the Ph.D. in 2008 and the M.S. degree in 2004.
From April 2007 to May 2008, he worked for the international company SELEX-Sistemi Integrati as a system analyst. Currently, he is working as a senior researcher at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Lugano, Switzerland. His research interests are in the areas of Bayesian nonparametrics, data analytics, imprecise probabilities, decision-making under uncertainty, filtering and control. He has co-authored about 70 peer-reviewed publications in top conferences and journals, including IJCAI, ICML, JMLR, ECML, UAI.
Concept of atoms and molecules; Dalton's atomic theory; Mole concept; Chemical formulae; Balanced chemical equations; Calculations (based on mole concept) involving common oxidation-reduction, neutralisation, and displacement reactions; Concentration in terms of mole fraction, molarity, molality and normality. Law of partial pressures; Vapour pressure; Diffusion of gases. Atomic structure and chemical bonding: Bohr model, spectrum of hydrogen atom, quantum numbers; Wave-particle duality, de Broglie hypothesis; Uncertainty principle; Qualitative quantum mechanical picture of hydrogen atom, shapes of s, p and d orbitals; Electronic configurations of elements (up to atomic number 36); Aufbau principle; Pauli's exclusion principle and Hund's rule; Orbital overlap and covalent bond; Hybridisation involving s, p and d orbitals only; Orbital energy diagrams for homonuclear diatomic species; Hydrogen bond; Polarity in molecules, dipole moment (qualitative aspects only); VSEPR model and shapes of molecules (linear, angular, triangular, square planar, pyramidal, square pyramidal, trigonal bipyramidal, tetrahedral and octahedral). Law of mass action; Equilibrium constant, Le Chatelier's principle (effect of concentration, temperature and pressure); Significance of $\Delta G$ and $\Delta G^0$ in chemical equilibrium; Solubility product, common ion effect, pH and buffer solutions; Acids and bases (Bronsted and Lewis concepts); Hydrolysis of salts. Electrochemistry: Electrochemical cells and cell reactions; Standard electrode potentials; Nernst equation and its relation to $\Delta G$; Electrochemical series, emf of galvanic cells; Faraday's laws of electrolysis; Electrolytic conductance, specific, equivalent and molar conductivity, Kohlrausch's law; Concentration cells. Rates of chemical reactions; Order of reactions; Rate constant; First order reactions; Temperature dependence of rate constant (Arrhenius equation). Classification of solids, crystalline state, seven crystal systems (cell parameters a, b, c, $\alpha, \beta, \gamma$), close packed structure of solids (cubic), packing in fcc, bcc and hcp lattices; Nearest neighbors, ionic radii, simple ionic compounds, point defects. Raoult's law; Molecular weight determination from lowering of vapour pressure, elevation of boiling point and depression of freezing point. Elementary concepts of adsorption (excluding adsorption isotherms); Colloids: types, methods of preparation and general properties; Elementary ideas of emulsions, surfactants and micelles (only definitions and examples). Radioactivity: isotopes and isobars; Properties of $\alpha, \beta, \gamma$ rays; Kinetics of radioactive decay (decay series excluded), carbon dating; Stability of nuclei with respect to proton-neutron ratio; Brief discussion on fission and fusion reactions. Boron, silicon, nitrogen, phosphorus, oxygen, sulphur and halogens; Properties of allotropes of carbon (only diamond and graphite), phosphorus and sulphur.
Oxides, peroxides, hydroxides, carbonates, bicarbonates, chlorides and sulphates of sodium, potassium, magnesium and calcium; Boron: diborane, boric acid and borax; Aluminium: alumina, aluminium chloride and alums; Carbon: oxides and oxyacid (carbonic acid); Silicon: silicones, silicates and silicon carbide; Nitrogen: oxides, oxyacids and ammonia; Phosphorus: oxides, oxyacids (phosphorus acid, phosphoric acid) and phosphine; Oxygen: ozone and hydrogen peroxide; Sulphur: hydrogen sulphide, oxides, sulphurous acid, sulphuric acid and sodium thiosulphate; Halogens: hydrohalic acids, oxides and oxyacids of chlorine, bleaching powder; Xenon fluorides. Definition, general characteristics, oxidation states and their stabilities, colour (excluding the details of electronic transitions) and calculation of spin-only magnetic moment; Coordination compounds: nomenclature of mononuclear coordination compounds, cis-trans and ionisation isomerisms, hybridization and geometries of mononuclear coordination compounds (linear, tetrahedral, square planar and octahedral). Oxides and chlorides of tin and lead; Oxides, chlorides and sulphates of Fe2+, Cu2+ and Zn2+; Potassium permanganate, potassium dichromate, silver oxide, silver nitrate, silver thiosulphate. Commonly occurring ores and minerals of iron, copper, tin, lead, magnesium, aluminium, zinc and silver. Extractive metallurgy: Chemical principles and reactions only (industrial details excluded); Carbon reduction method (iron and tin); Self reduction method (copper and lead); Electrolytic reduction method (magnesium and aluminium); Cyanide process (silver and gold). Groups I to V (only Ag+, Hg2+, Cu2+, Pb2+, Bi3+, Fe3+, Cr3+, Al3+, Ca2+, Ba2+, Zn2+, Mn2+ and Mg2+); Nitrate, halides (excluding fluoride), sulphate and sulphide.
reaction; haloform reaction and nucleophilic addition reactions (Grignard addition); Carboxylic acids: formation of esters, acid chlorides and amides, ester hydrolysis; Amines: basicity of substituted anilines and aliphatic amines, preparation from nitro compounds, reaction with nitrous acid, azo coupling reaction of diazonium salts of aromatic amines, Sandmeyer and related reactions of diazonium salts; carbylamine reaction; Haloarenes: nucleophilic aromatic substitution in haloarenes and substituted haloarenes (excluding Benzyne mechanism and Cine substitution). Carbohydrates: Classification; mono- and di-saccharides (glucose and sucrose); Oxidation, reduction, glycoside formation and hydrolysis of sucrose. Amino acids and peptides: General structure (only primary structure for peptides) and physical properties. Natural rubber, cellulose,nylon, teflon and PVC. phenolic), carbonyl (aldehyde and ketone), carboxyl, amino and nitro; Chemical methods of separation of mono-functional organic compounds from binary mixtures. inequality, cube roots of unity, geometric interpretations. geometric series, sums of squares and cubes of the first n natural numbers. Logarithms and their properties. symmetric and skew-symmetric matrices and their properties, solutions of simultaneous linear equations in two or three variables. Relations between sides and angles of a triangle, sine rule, cosine rule, halfangle formula and the area of a triangle, inverse trigonometric functions (principal value only). Two dimensions: Cartesian coordinates, distance between two points, section formulae, shift of origin. Equation of a straight line in various forms, angle between two lines, distance of a point from a line; Lines through the point of intersection of two given lines, equation of the bisector of the angle between two lines, concurrency of lines; Centroid, orthocentre, incentre and circumcentre of a triangle. circle, equation of a circle through the points of intersection of two circles and those of a circle and a straight line. absolute value, polynomial, rational, trigonometric, exponential and logarithmic functions. Even and odd functions, inverse of a function, continuity of composite functions, intermediate value property of continuous functions.Derivative of a function, derivative of the sum, difference, product andquotient of two functions, chain rule, derivatives of polynomial, rational,trigonometric, inverse trigonometric, exponential and logarithmic functions. decreasing functions, maximum and minimum values of a function, Rolle's theorem and Lagrange's mean value theorem. Systems of particles; Centre of mass and its motion; Impulse; Elastic and inelastic collisions. Law of gravitation; Gravitational potential and field; Acceleration due to gravity; Motion of planets and satellites in circular orbits; Escape velocity. and spheres; Equilibrium of rigid bodies; Collision of point masses with rigid bodies. Linear and angular simple harmonic motions. Hooke's law, Young's modulus. velocity, Streamline flow, equation of continuity, Bernoulli's theorem and its applications. and air columns; Resonance; Beats; Speed of sound in gases; Doppler effect (in sound). 
Thermal expansion of solids, liquids and gases; Calorimetry, latent heat; Heat conduction in one dimension; Elementary concepts of convection and radiation; Newton's law of cooling; Ideal gas laws; Specific heats ($C_v$ and $C_p$ for monoatomic and diatomic gases); Isothermal and adiabatic processes, bulk modulus of gases; Equivalence of heat and work; First law of thermodynamics and its applications (only for ideal gases); Blackbody radiation: absorptive and emissive powers; Kirchhoff's law; Wien's displacement law, Stefan's law. Force on a moving charge and on a current-carrying wire in a uniform magnetic field. Faraday's law, Lenz's law; Self and mutual inductance; RC, LR and LC circuits with d.c. and a.c. sources. fusion processes; Energy calculation in these processes. component parts in appropriate scale. Common domestic or day-to-day life usable objects like furniture, equipment, etc., from memory. or side views) of simple solid objects like prisms, cones, cylinders, cubes, splayed surface holders, etc. through innovative uncommon test with familiar objects. Sense of colour grouping or application.
Self-reference is (too?) common in real life, as many of our sentences begin with I. Talking to people who abuse self-reference can be tiring. And in mathematics, looking at self-referencing objects has turned into a nightmare, starting with British mathematician Bertrand Russell. In 1901, based on a reasoning about such objects, he proved that mathematics was non-sense. What the hell? Is this a joke? No! Let's see how he did it!

Bertrand Russell actually worked with the most fundamental mathematical objects: sets. Sets are simply collections of other mathematical objects. In very fundamental mathematics, sets are actually the only objects. So, really, what sets are is a collection of sets. Let's do the reasoning with Science4All articles to make it less abstract! I'll replace the phrase "a set contains another set" by "a S4A article contains a link towards another S4A article". Even more simply, we'll say that a S4A article points to another S4A article. We can now apply Russell's exact reasoning. Now, there's a simple self-referencing concept about S4A articles: a S4A article either points to itself or not. Well, let's consider the article you're currently reading. This is a S4A article. And it has the following link towards itself (hover or click to verify that it really is the case!). So it points to itself. But most other articles don't point to themselves. OK… So far so good. Good. Because we are getting to Russell's great idea. He imagined himself writing a S4A article which points at all S4A articles which don't point to themselves. Let's call this article Russell's S4A article. The S4A article called Self-Reference you're currently reading points to itself, so Russell's S4A article wouldn't point to Self-Reference. But most other S4A articles don't point to themselves; Russell's S4A article would point to them. So what about Russell's S4A article itself? If it pointed to itself, it would be an article that points to itself — so it shouldn't be listed in Russell's S4A article, which doesn't point at articles that point to themselves! But if it didn't point to itself, then it should be listed in Russell's S4A article, which lists all S4A articles that don't point to themselves! So Russell's S4A article can't point at itself, nor can it not point to itself! It can't exist! Exactly! Yet, mathematics at Russell's time allowed the construction of such a theoretical S4A article… which means that it allowed the existence of something that doesn't exist! No wonder 1901 mathematics was non-sense!

Imagine a barber who shaves everyone who doesn't shave themselves. Does he shave himself? Imagine a painter who paints everyone who doesn't paint themselves. Does he paint himself? Imagine a blogger who blogs about everyone who doesn't blog about themselves. Does he blog about himself? Another example of similar paradoxes is the following extract from The Office.

There are lots of puzzles based on self-referencing. Here's a nice one. Assume you are faced with two doors. One leads to heaven, the other to hell. But the only way for you to know is by asking one single question to the angel in front of you… who may in fact be the devil. The thing is, if he's an angel, he can't lie, but if he's the devil, he has to lie. What question should you ask? Think about it. I'll give the answer at the end of this article.

It is bad! It is extremely bad! Because one contradiction in mathematics implies that all of mathematics is non-sense! Think about it. A lot of reasonings in mathematics are based on reductio ad absurdum: you start by assuming something and show that this assumption implies a contradiction. This proves that what you assumed is wrong.
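In the language of naive set theory, the construction described above is the set of all sets that are not members of themselves, and the paradox is one line:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \iff R \notin R.
```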
But now, you can assume anything, like for instance that 0=0, and say that it leads to Russell's paradox. This mathematically proves that 0≠0, which in turn can imply something like 1=0. Russell's paradox proves that mathematics in 1901 was non-sense! This must have been a huge shock! At this point, Russell knew that everything in mathematics and logics was destroyed! Waw!!! Are you saying that I've been learning something wrong all along??? Just like you, mathematicians of the beginning of the 20th century, including Gottlob Frege and David Hilbert, were devastated! So they rejected classical assumptions of mathematics, now known as the naive set theory. And they searched for a better-defined formalism. And the 20th century has produced incredibly surprising and counter-intuitive results regarding such formalisms. For instance, they enable one to prove that $1+2+4+8+16+\ldots=-1$, as you can read in my article on infinite series! But there's even more troubling… Since it's a little complicated, I'll get to this in the third section of this article.

One extremely useful self-referencing concept in computer science is known as recursion. It is based on the idea that you can break down your problem into simpler versions of itself. Let me give you an example. Consider a chocolate bar, the kind which is made of well-ordered squares, like the one on the right. Now, imagine a bar with $2 \times n$ squares. But your kids or students only like to have $1 \times 2$ pieces of chocolate. How many ways are there to divide your bar into $1 \times 2$ pieces of chocolate? The following figure displays 6 different ways to divide a $2 \times 6$ bar of chocolate into $1 \times 2$ pieces. How many ways are there in total? There seem to be a lot… Do you really want me to count them all? No. I want you to find the answer without counting! And it's possible, thanks to the magic of recursion!

OK… So you want me to break the problem into smaller versions of itself… Are you talking about problems for smaller rectangles? Hum… A little help, please? Sure. Look at the top right corner square of your bar. It needs to belong to a $1 \times 2$ piece. But there are only two possibilities for this piece: it's either horizontal or vertical. Now, if it is vertical, then you end up with a $2 \times (n-1)$ chocolate bar! That's a smaller version of this problem! But what about if it is horizontal? Then the square beneath it must be covered by a horizontal piece too, which leaves a $2 \times (n-2)$ chocolate bar! That's a smaller version of the problem too! So, in one case, the number of ways to divide the chocolate bar is the solution of the problem for $n-1$; in the other case, it's the solution for $n-2$. Overall, the solution for $n$ is the sum of the solutions for $n-1$ and $n-2$. So if the solution for 4 is 5 and the solution for 5 is 8, then the solution for 6 is 5+8=13. And such a reasoning can very easily be written as an algorithm for computers!

Our reasoning is actually incomplete. If we simply tell the computer to search for solutions to $n-1$ and $n-2$ to solve the problem for $n$, we will have a big issue: our computer will be searching for solutions of -1 and -2 to find the solution of 0, and then search for solutions of -3 and -4 to find the solution of -2… and so on! To avoid this, we need a base case, that is, a case which does not depend on any other case. Well, in fact, here, we need two base cases, since we search for solutions at both $n-1$ and $n-2$. We're actually here defining a very famous sequence, called the Fibonacci sequence. It's very famous and has nice properties.
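A minimal sketch of this recursion in code — the two base cases below (one way to tile an empty bar, one way to tile a $2\times 1$ bar) are my own choice of anchors for the argument above:

```scala
// Number of ways to tile a 2 x n chocolate bar with 1 x 2 pieces.
// tilings(n) = tilings(n - 1) + tilings(n - 2): the top-right square
// is covered either by a vertical piece (leaving 2 x (n-1)) or by
// two stacked horizontal pieces (leaving 2 x (n-2)).
def tilings(n: Int): Long = n match {
  case 0 | 1 => 1                               // base cases: stop recursing
  case _     => tilings(n - 1) + tilings(n - 2)
}

// Prints 1, 1, 2, 3, 5, 8, 13 -- the Fibonacci sequence, as promised.
println((0 to 6).map(tilings).mkString(", "))
```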
In particular, it is very tricky to compute! If you can, please write about it. So all recursive algorithms must have a base case? Yes! A recursive algorithm is defined by base cases and, for other cases, by relations to other simpler cases. By simpler, I mean cases which are strictly closer to base cases. Can you give other examples of recursion? Sure! The most famous ones are probably Euclid's algorithm to find the greatest common divisor, the solution to the tower of Hanoi and the state-of-the-art optimization technique called dynamic programming. All these algorithms are dramatically beautiful! If you can, please write about them! Find out more by reading Thomas' great article on fractals to know what they really are! You should also check this great video to find out more about applications.

Let's now get back to the problem of the inconsistency of mathematics. Oh yeah! You said that the 20th century provided surprising results! Yes. But remember how bad things were at its beginning: mathematics was proven to be non-sense! From 1910 to 1925, to set mathematics straight, Bertrand Russell and Alfred North Whitehead wrote and rewrote the Principia Mathematica. It redefined new rules of mathematics, called axioms. The list of these axioms forms what is known as a theory. Russell and Whitehead's theory aimed at forbidding self-referencing. A few years later, in 1931, one of the greatest mathematicians of all time, Kurt Gödel, reacted. He claimed that Russell and Whitehead hadn't managed to forbid self-referencing. In fact, any theory which is interesting enough to include basic mathematics like natural numbers cannot forbid self-referencing! Really? How did he prove it? He actually constructed such a phrase. Its construction is a little bit complicated. It's based on a mapping of the symbols of maths to numbers; each symbol gets a digit. A theorem is then a succession of such symbols, so it corresponds to a number made of digits. In fact, each theorem corresponds to a unique number. And, similarly, so does each proof. These numbers are called Gödel-numbers. Now, I'm not going to go into the details, but Gödel proved that Gödel-numbers could be manipulated so that one could eventually be created which talks about itself. Actually, just because the phrase doesn't have a proof within the theory doesn't make it not true. Indeed, a larger theory which contains it may be able to prove the phrase. Thus, phrases can be considered true without having a proof within the theory. But I'm now talking about things I barely understand. If you can, please write about these things! What does this Gödel-number correspond to? Well, loosely stated, it says that there is no proof of itself. That's Gödel's amazing construction! Out of any theory interesting enough, the following phrase exists: The theorem you're reading has no proof. Waw! Indeed, this shows that self-referencing cannot be avoided! Worse than this! The self-referencing phrase I have given cannot be proven false nor true! If it were provable, then you would have a proof; yet it says that there is no proof. On the opposite, if it is false, this means that it has a proof. But a theorem that has a proof is true! So unless the theory is inconsistent, in which case a theorem can be true and false, Gödel's phrase is neither provably true nor provably false! Such a phrase is called undecidable. So mathematics will always have theorems which are neither provable nor disprovable! That's pretty much Gödel's first incompleteness theorem!
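To give a flavour of the construction the author skips: in the standard presentation (take the exact coding as illustrative), each symbol gets a code, and a string of symbols $s_1 s_2 \ldots s_k$ is packed into a single number using prime factorization, so that statements about numbers can encode statements about statements:

```latex
\ulcorner s_1 s_2 \cdots s_k \urcorner \;=\; 2^{c(s_1)} \cdot 3^{c(s_2)} \cdots p_k^{\,c(s_k)},
```

where $c(s_i)$ is the code of the $i$-th symbol and $p_k$ is the $k$-th prime; unique factorization guarantees that the original string can be recovered from its Gödel number.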
Roughly, it says that consistent theories are necessarily incomplete. For a more accurate proof, you should check this video. It's nicely done (although I'm not a big fan of the music!), and it goes further into the abstract details. The greatest illustration of this surprising theorem is the astonishing continuum hypothesis I explained in my talk More Hiking in Modern Math World. Waw! That's disturbing! But that's not that big a deal, is it? I mean… At least mathematics is consistent! I have never said that! What I said is that if it is consistent, then it is incomplete. But we're not even sure it is consistent. We haven't proven the consistency of mathematics! Really? So a new Bertrand Russell could destroy all of mathematics tomorrow? Oh my god! Then I guess that proving the consistency of mathematics is the most crucial open problem of mathematics! Surprisingly, no! Once again, that's because of Kurt Gödel. Indeed, he also proved that theories can never prove themselves consistent! After all, a theory which talks about its own consistency: that's very much a self-reference! It is possible for a theory to prove its own inconsistency. But if a theory is consistent, then it has no proof of its own consistency. This is known as Gödel's second incompleteness theorem. Check this extract from my talk A Trek through 20th Century Mathematics where I present both Russell's paradox and Gödel's second incompleteness theorem! I haven't gone deeper into the details, because it's too hard for me to explain here, as well as too hard for me to even understand! But if you can, please write more about axiomatization, consistency and incompleteness! Hey, what's the solution of the doors to heaven and hell conundrum? Oh yeah, that's right! It's in the food for thought below! For a more serious, but still relatively elementary, view of the paradoxes, see this foundational perspective on the semantic and logical paradoxes.
Xiaoling Sun, Song Wang. Preface. Journal of Industrial & Management Optimization, 2009, 5(1): i-ii. doi: 10.3934/jimo.2009.5.1i.
Y. Gong, X. Xiang. A class of optimal control problems of systems governed by the first order linear dynamic equations on time scales. Journal of Industrial & Management Optimization, 2009, 5(1): 1-10. doi: 10.3934/jimo.2009.5.1.
Ming-Jong Yao, Tien-Cheng Hsu. An efficient search algorithm for obtaining the optimal replenishment strategies in multi-stage just-in-time supply chain systems. Journal of Industrial & Management Optimization, 2009, 5(1): 11-32. doi: 10.3934/jimo.2009.5.11.
Xueting Cui, Xiaoling Sun, Dan Sha. An empirical study on discrete optimization models for portfolio selection. Journal of Industrial & Management Optimization, 2009, 5(1): 33-46. doi: 10.3934/jimo.2009.5.33.
Xiaoling Sun, Xiaojin Zheng, Juan Sun. A Lagrangian dual and surrogate method for multi-dimensional quadratic knapsack problems. Journal of Industrial & Management Optimization, 2009, 5(1): 47-60. doi: 10.3934/jimo.2009.5.47.
Emma Smith, Volker Rehbock, Norm Adams. Deterministic modeling of whole-body sheep metabolism. Journal of Industrial & Management Optimization, 2009, 5(1): 61-80. doi: 10.3934/jimo.2009.5.61.
K. F. C. Yiu, L. L. Xie, K. L. Mak. Analysis of bullwhip effect in supply chains with heterogeneous decision models. Journal of Industrial & Management Optimization, 2009, 5(1): 81-94. doi: 10.3934/jimo.2009.5.81.
P. Liu, Xiwen Lu. Online scheduling of two uniform machines to minimize total completion times. Journal of Industrial & Management Optimization, 2009, 5(1): 95-102. doi: 10.3934/jimo.2009.5.95.
Wai-Ki Ching, Tang Li, Sin-Man Choi, Issic K. C. Leung. A tandem queueing system with applications to pricing strategy. Journal of Industrial & Management Optimization, 2009, 5(1): 103-114. doi: 10.3934/jimo.2009.5.103.
Yanfei Wang, Qinghua Ma. A gradient method for regularizing retrieval of aerosol particle size distribution function. Journal of Industrial & Management Optimization, 2009, 5(1): 115-126. doi: 10.3934/jimo.2009.5.115.
Song Wang, Xia Lou. An optimization approach to the estimation of effective drug diffusivity: From a planar disc into a finite external volume. Journal of Industrial & Management Optimization, 2009, 5(1): 127-140. doi: 10.3934/jimo.2009.5.127.
Zhi Guo Feng, Kok Lay Teo, Volker Rehbock. A smoothing approach for semi-infinite programming with projected Newton-type algorithm. Journal of Industrial & Management Optimization, 2009, 5(1): 141-151. doi: 10.3934/jimo.2009.5.141.
Honglei Xu, Kok Lay Teo. $H_\infty$ optimal stabilization of a class of uncertain impulsive systems: An LMI approach. Journal of Industrial & Management Optimization, 2009, 5(1): 153-159. doi: 10.3934/jimo.2009.5.153.
Kenji Kimura, Yeong-Cheng Liou, David S. Shyu, Jen-Chih Yao. Simultaneous system of vector equilibrium problems. Journal of Industrial & Management Optimization, 2009, 5(1): 161-174. doi: 10.3934/jimo.2009.5.161.
There is a similar "recursive" idea in Kontsevich's definition of $(\infty,n)$-categories which he used at the end of the 1990s. Unfortunately, it has never been published. I heard one exposition of it, but did not keep many notes.

Added the new paper: Higher Segal spaces I (http://arxiv.org/abs/1212.3563).

Eventually we need to add some warning. Apparently the Dyckerhoff-Kapranov-style higher Segal spaces are equivalent to $(\infty,1)$-operads, certainly not to $(\infty,2)$-categories or similar.

This seems to provide a bridge between them and Lurie: "In analogy to the situation for $(\infty,1)$-categories, there are various models for the notion of an $(\infty,2)$-category. To describe the bicategorical structures appearing in this work, we will use Segal fibrations. In fact, we will also use the dual notion of a coSegal fibration. These and other models for $(\infty,2)$-categories, as well as their relations, are studied in detail in the comprehensive treatment [Lur09b]." (p. 163) In 9.3 they associate an $(\infty,2)$-category to a 2-Segal space.

Now I see that the statement that DK 2-Segal spaces are $\infty$-operads is essentially in the article, in 3.6.

The nLab article higher Segal space is very unclear: there are two completely different, unrelated types of objects that are sometimes referred to as "higher Segal spaces". One is an n-fold Segal space, a model of $(\infty,n)$-categories. The other is the Dyckerhoff-Kapranov notion of a $d$-Segal space, which models something like an $(\infty,1)$-category, but without uniqueness of composites (for $d \geq 2$), and with higher associativity only in dimension $d$ and above (the higher associativity conditions are governed by $d$-dimensional polyhedra, related to $d$-dimensional field theories). The article as it stands refers to both notions. The "Idea" section refers to $n$-fold Segal spaces. But we already have a separate page for n-fold Segal spaces. So I'm pretty sure the intention was that this page refer to the Dyckerhoff-Kapranov notion, and not the $n$-fold Segal space notion. So I think the article needs a major cleanup. Does anybody object to making higher Segal spaces discuss only the Dyckerhoff-Kapranov notion (except for adding some discussion of the difference)?

Removed misleading references to $n$-fold Segal spaces and added a bit of material.

Please feel invited to work on the entry and improve it, as you see the need.
We look at I-V characteristic curves for 3 different diodes in butterfly packages using the Koheron CTL200 digital laser controller (type 1, 600 mA laser current). The laser controller is connected to the computer via a USB cable. A Python script sets the laser temperature, scans the laser current and measures the laser voltage. The first graph shows the I-V characteristic of a Thorlabs SLD830S-A20 830 nm Super Luminescent Diode (SLED). As expected, the curve is very smooth since the diode only exhibits amplified spontaneous emission. We can see that the laser voltage decreases when the temperature increases. The second diode is a Thorlabs SFL1550P external cavity single frequency laser operating at 1550 nm. The external cavity gives this laser a very good spectral purity (linewidth smaller than 100 kHz). This laser has a smaller voltage than the SLED. This can be explained by the longer wavelength ($eV = hc/\lambda$). On the zoomed curve below, we can observe that the laser voltage suddenly jumps when increasing the laser current. When the current increases, the wavelength drifts until the lasing mode is no longer the mode with the highest gain. At this moment, the laser quickly transfers its power to the higher gain mode. This causes a shift in optical power that can be observed on the laser voltage. This phenomenon is called mode hopping. The third diode is a Thorlabs BL976-P300 Fiber Bragg Grating (FBG) Stabilized Laser Diode. It operates at 976 nm and can output up to 300 mW, which makes it a good pump diode for Erbium Doped Fiber Amplifiers (EDFAs). The I-V characteristic curve has two well-defined zones separated by the lasing threshold around 40 mA. Below the lasing threshold, it seems to behave similarly to the two previous diodes. At the threshold, the product $\lambda V = 0.976 \times 1.31 = 1.28$ µm V is close to $hc/e = 1.24$ µm V. Above the threshold, the behavior is linear with a slope of 0.625 V/A. This dynamic resistance of 0.625 Ω is related to thermal losses. It is half that of the external cavity laser, which can be explained by the fact that the external cavity laser was optimized for spectral purity rather than thermal efficiency. On the zoomed curve below, we can see that once the laser threshold is crossed, the laser voltage no longer depends on the temperature. Compared to the external cavity laser, mode hops seem to be more frequent, but they disappear in the operating region above 400 mA.
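The post does not include the measurement script itself. As a rough sketch of what such a current sweep could look like, here is a minimal Python example assuming a pyserial connection; the device command names (`settemp`, `setcurrent`, `readvoltage`) are placeholders, not the CTL200's actual command set, which should be taken from its manual:

```python
import serial  # pyserial

# Hypothetical serial port and command names; adjust to the real device.
ctl = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def query(cmd):
    """Send a command line and return the device's reply."""
    ctl.write((cmd + "\r\n").encode())
    return ctl.readline().decode().strip()

query("settemp 25.0")  # fix the laser temperature (assumed syntax)
iv_curve = []
for step in range(0, 1201):           # 0 to 600 mA in 0.5 mA steps
    i_ma = 0.5 * step
    query("setcurrent %.2f" % i_ma)   # assumed syntax
    v = float(query("readvoltage"))   # assumed syntax
    iv_curve.append((i_ma, v))
```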
We now show that if $f$ is a bounded, Lebesgue measurable function defined on a Lebesgue measurable set $E$ with $m(E) < \infty$, then $f$ is Lebesgue integrable on $E$. Theorem 1: Let $f$ be a bounded, Lebesgue measurable function defined on a Lebesgue measurable set $E$ with $m(E) < \infty$. Then $f$ is Lebesgue integrable on $E$. Proof: Let $f$ be a bounded, Lebesgue measurable function defined on a Lebesgue measurable set $E$ with $m(E) < \infty$.
Is there a homology theory that gives a *necessary and sufficient* condition for homotopy equivalence? Let $X,Y$ be manifolds with complexes $C(X),C(Y)$. Then $X$ and $Y$ are homotopy equivalent if and only if $C(X)$ and $C(Y)$ are isomorphic. Let $X,Y$ be manifolds with complexes $C(X),C(Y)$. Let $f:X\to Y$ be a continuous map which induces the "chain map" $f_*:C(X)\to C(Y)$. Then $f$ is a homotopy equivalence (it admits a $g:Y\to X$ such that $fg$ and $gf$ are homotopic to the identities) if and only if $f_*$ is an isomorphism. If yes: which one/ones? An introduction to such a theory? If no: not yet, or is it impossible? Why? There is no restriction on the objects in the complexes; they can be groups, modules, groupoids, anything else. Same questions, but with cohomology instead of homology?

The answer to the title question, for the usual meaning of "homology theory," is no. Homology is always an invariant of the stable homotopy type of a space, and so no homology theory can distinguish two spaces which are stably homotopy equivalent but not homotopy equivalent. For example, no homology theory can distinguish $S^1 \times S^1$ and $S^1 \vee S^1 \vee S^2$, but the former has nontrivial cup products while the latter does not. On the other hand, we have the following homology version of Whitehead's theorem, which bypasses the above constraint because it does not come from a homology theory in the usual sense. M. Mandell, Cochains and homotopy type, Publ. Math. IHES 103 (2006), 213-246.

What you could be looking for is Theorem 6 of J.H.C. Whitehead's "Combinatorial Homotopy II", Bull. Amer. Math. Soc. 55 (1949), 453-496. For a CW-complex $K$ he defines what he calls a "homotopy system" $\rho(K)$, and the theorem says that for a map $f:K \to L$ of CW-complexes, $f$ is an equivalence if and only if $\rho(f)$ is an equivalence. However, to get "only if $\rho(f)$ is an isomorphism" is unlikely for this functor. We have come to call $\rho(K)$ the fundamental crossed complex $\Pi(K_*)$ of the filtered space $K_*$, the filtration being by skeleta. Many aspects of Whitehead's work are dealt with in the book Nonabelian Algebraic Topology: Filtered Spaces, Crossed Complexes, Cubical Homotopy Groupoids, EMS Tracts Vol. 15, 2011. G. Ellis and Le Van Luyen, "Homotopy 2-types of low order" (preprint), lists isomorphism classes and quasi-isomorphism classes of some crossed modules of low order. This answer is related to my answer to this mathoverflow question.
"If a coin is tossed, what is the chance that it lands heads?" Ask this question and the most common answer that you will get is $1/2$. If you press for a reason, don't be surprised to hear, "Because the coin has two faces." A coin does indeed have two faces, but notice an assumption hidden inside the "reasoning" you have been given: that each of the two faces has the same chance as the other. The assumption of equally likely outcomes is a simple and ancient model of randomness. It defines probabilities as proportions. The assumption that $\Omega$ is finite makes proportions easy to identify as fractions of the total number of outcomes. For some $n > 1$, let $\Omega$ consist of $n$ outcomes. Let $A \subseteq \Omega$ be an event. Define $\#(A)$ to be the number of outcomes in the subset $A$. Thus $\#(\Omega ) = n$, $\#(\phi ) = 0$, and $0 < \#(A) < n$ for any other event $A$. For an event $A$, let $P(A)$ denote the probability that $A$ occurs, or the chance that $A$ occurs. We will use the words "probability" and "chance" synonymously, and we will often use "happens" instead of the more formal "occurs". This idea that probabilities are proportions lies at the heart of many calculations. As you will see later, rules for combining proportions become rules for combining probabilities, whether or not all outcomes are equally likely. But for now we will work in settings where it is natural to assume that outcomes are equally likely. If we assume that all six permutations are equally likely, we are working with what are called random permutations of the three letters. Under this assumption, we can augment our table of events with a column of chances. Thus the assumption that all the permutations are equally likely makes all three positions of $a$ equally likely as well. The same is true of the positions of $b$ and $c$, as you should check. Suppose a random number generator returns a pair of digits from among the 100 pairs 00, 01, 02, $\ldots$, 98, 99, in a way that makes all the pairs equally likely to be returned. There are 10 choices for the first digit: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Corresponding to each choice of the first digit there are 10 choices for the second digit. So in total there are $10 \times 10 = 100$ pairs of digits. Here a "pair" is an sequence of two digits, one following the other. The pair 27 is different from the pair 72. These are sometimes called "ordered pairs". In this text, all sequences are ordered. Now let's compute the probabilities of a few events. By assumption, all pairs are equally likely. So each answer will consist of counting the number of pairs in the event, and then dividing by the total number of pairs, which is 100. (i) What is the probability that the pair consists of two different digits? (ii) What is the chance that the two digits are the same? To check this by counting, you have to count pairs of the form $aa$. There are 10 ways to choose $a$, and there are no further choices to make. So the answer is $10/100 = 0.1$, confiriming our calculation above.
I have $(x_1, y_1), (x_2, y_2)$. How do I find the point that's $d$ distance away from $(x_1, y_1)$ on a straight line to $(x_2, y_2)$? I know I can get the length of the line with Pythagoras. I know if I drew a circle I could use the radius as distance and the point would be where the line and the circle intersect. Could someone briefly explain each step to me please? I don't understand "Finding a point along a line a certain distance away from another point"!

If I can move one unit along a line I can move any distance along that line. We'll calculate how much we would have to add to each of $x_1$ and $y_1$ to move one unit along the line and then we'll multiply that by $d$ to get the answer. The length of the segment is $L = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$, so moving one unit along the line adds $(x_2-x_1)/L$ to the $x$-coordinate and $(y_2-y_1)/L$ to the $y$-coordinate. The point at distance $d$ from $(x_1,y_1)$ is therefore
$$\left(x_1 + d\,\frac{x_2-x_1}{L},\; y_1 + d\,\frac{y_2-y_1}{L}\right).$$
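A minimal sketch of the same computation in Python:

```python
import math

def point_at_distance(x1, y1, x2, y2, d):
    """Point at distance d from (x1, y1) toward (x2, y2)."""
    length = math.hypot(x2 - x1, y2 - y1)
    ux, uy = (x2 - x1) / length, (y2 - y1) / length  # unit direction vector
    return x1 + d * ux, y1 + d * uy

print(point_at_distance(0, 0, 3, 4, 1))  # (0.6, 0.8): one unit along a 3-4-5 line
```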
This project uses interactive widgets and visualizations to explore some of the mathematical tools used to analyze electoral redistricting, especially the question of detecting gerrymanders. We use very small examples to develop and demonstrate some techniques. The nation of Gridlandia is trying to draw voting districts for its upcoming elections. The rules are that there must be four districts and each district must be made up of four contiguous units. (Basically, each district looks like a tetris piece.) Since Gridlandia is laid out neatly in a four-by-four grid, it's not too hard to write down all of the valid plans. There turn out to be only 117 of them. We can impose a graph or network structure to organize the collection of all of those 117 districting options. We'll let each valid plan be represented by a node in this network, and we'll connect two nodes with an edge if the plans they represent are related by a simple swap move, exchanging the district assignments of two cells from adjacent districts (a code sketch of this adjacency test appears at the end of this section). We can transform the plan on the left (olive-colored node) into the plan on the right (seafoam green node) by swapping the district assignments of the two cells that are grayed out in the center image. A 90 degree rotation makes these two plans equivalent. If we do collapse the metagraph in this way (in mathematical terms, if we quotient by the action of the dihedral group D4) we are left with only 22 nodes, and we will call that smaller object a reduced metagraph. Below, you can interact with these mathematical objects; if your screen is big enough, they are side-by-side. On the left you can manipulate the full metagraph of Gridlandia. On the right you see the reduced metagraph where plans which are related by symmetry are merged into the same node, and two nodes are linked by an edge if any two of their representatives are adjacent in the full metagraph. Mouse over a node in the left-hand graph to see the corresponding plan and highlight all of the other nodes corresponding to symmetrically equivalent plans. Click a node to highlight all of its neighbors. Clicking a node in the right-hand graph will highlight all nodes in that symmetry class from the larger graph, plus their neighbors. Nodes in the full metagraph are sized according to their degree, or number of neighbors. Nodes in the reduced metagraph are sized according to the number of representatives in their symmetry class. To analyze the partisan performance of a districting plan $\mathcal D$, you must also specify a vote distribution $\Delta$. So let's think about voting in Gridlandia. Gridlandia has plurality elections and two political parties, the Hearts Party and the Clubs Party. For simplicity, we'll start by assuming that everyone in the same grid unit votes the same way, either for the Hearts candidate or the Clubs candidate. Within each district, the party with the most votes wins the election, and ties are left ambiguous: nobody wins. Below, you can click each grid unit to toggle its vote between the parties. On the left, the nodes in the metagraph will change color to indicate which party wins more seats under the corresponding plan. A node will remain gray if it has the same number of Hearts- and Clubs-favoring districts. Let's make things a little more complicated. Instead of each unit voting entirely for one party, we will give it a balance of Hearts and Clubs supporters, in 10% increments. The same electoral rules apply. Left- or right-click on each unit to adjust the balance of Clubs and Hearts supporters.
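Returning to the metagraph's edge relation: here is a minimal Python sketch (a hypothetical helper, not MGGG's actual code) that tests whether two plans, each given as a dict mapping grid cells to district labels, differ by exactly one swap of two cells' assignments; note that contiguity of the resulting districts would still need to be checked separately for the swapped plan to be valid:

```python
def differ_by_swap(plan_a, plan_b):
    """True if plan_b is plan_a with the district labels of exactly
    two cells exchanged (contiguity is not checked here)."""
    changed = [c for c in plan_a if plan_a[c] != plan_b[c]]
    if len(changed) != 2:
        return False
    c1, c2 = changed
    return plan_a[c1] == plan_b[c2] and plan_a[c2] == plan_b[c1]

# Example on a 2x2 toy grid with two districts:
a = {(0, 0): 1, (0, 1): 1, (1, 0): 2, (1, 1): 2}
b = {(0, 0): 1, (0, 1): 2, (1, 0): 1, (1, 1): 2}
print(differ_by_swap(a, b))  # True
```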
What happens when the Clubs party has a slight majority in most of the cells but a few are heavily Hearts? Can you find a distribution of voters where one party has less than half of the votes but still wins three out of four districts? Can you find a distribution of voters that creates a Hearts-colored metagraph node surrounded entirely by Clubs nodes? If we're interested in studying redistricting in the real world, we'll need techniques that work on much, much more complicated graphs. We will develop some of these techniques using the 5x5 grid, which already has a substantially larger universe of valid districting plans. This page was created by Zachary Schutzman based on work at the Voting Rights Data Institute, and is being edited and maintained by MGGG. The project includes joint work with Seth Drew, Eugene Henninger-Voss, Amara Jaeger, and Heather Newman. Special thanks to Mira Bernstein, whose Liliputia project served as inspiration.
I noticed that the contest-math tag was added to many questions recently, even to questions which do not mention any connection to a mathematics competition. "Problems from or inspired by mathematics competitions." "Questions regarding mathematics competitions." My questions are: Should contest-math be added to questions retroactively, even if the author of the question does not mention a connection to a competition? Is it sufficient that the question "looks like" a contest question or "could be" from a contest? Or should such an edit be supported by adding a link to a contest where the question comes from? Based on the current revision of the tag-wiki and tag-excerpt, the tag contest-math seems to be a meta tag. In short, the usage guidance says that it should be added if a question is from a mathematical contest, which means that it is actually not determined by the content of the question, but by other factors. Admittedly, the tag-info has not been updated for some time. (It mentions the homework tag, which no longer exists.) I will copy the current revision here in full. "Inquiries about alternative proofs for a particular problem that is from a math contest. Questions that have been explicitly inspired by a contest problem. See here for a list of mathematics competitions from which you can ask questions. This tag cannot go along with the homework tag." Moreover, questions having this tag are often treated differently from other questions, especially as far as adding context is concerned. Here are two quotes from past meta discussions. Both of them are taken from answers to "Why a question without showing any work is getting upvoted?" I am somewhat in favor of various subcommunities, say, those forming around selected tags, within Math.SE developing their own norms. Enforcing such norms will mostly be up to the subcommunities themselves. It is good to have some common standards (enforced for example via our common review queues), but IMO the keen followers of a tag are best placed to judge many cases. It has never been the tradition in the contest problem community to provide more than the question, and a source for the problem/solution where known. Considering the fact that the tag might change attitudes towards the question, I would suggest being careful when adding the tag to questions where the OP did not mention a contest or preparation for a contest and did not use the tag themselves. Despite the fact that meta tags should be avoided if possible, due to the popularity of this particular tag, complete removal of the contest-math tag does not seem to be a reasonable option. For example, we have a few questions about almost disjoint families of cardinality $\mathfrak c$, such as "Countable set having uncountably many infinite subsets" (and many posts linked there) or "Partitioning an infinite set" (and many posts linked there).
Where does one go to learn about DG-algebras? The theory of differential graded algebras (in char 0) and their modules has numerous applications in rational homotopy theory as well as algebraic geometry. I'm looking for a reasonably complete reference book/collection of articles that treat the basic algebraic and homotopical aspects of dg-algebras akin to Atiyah-MacDonald's book on commutative algebra. Model structures on dg algebras and simplicial algebras (presenting the "correct" infinity category) and relation between them (concrete description of fibrations and cofibrations). Analogs of the classical types of morphisms (flat, smooth, unramified, etale, open immersion, closed embedding, finite, finite type, etc.). Derived categories of modules (triangulated / stable $\infty$) and functors on them (push-pull-shriek functors, perfect complexes vs. bounded vs. unbounded derived categories, dualizing complex, cotangent complex, localization and completion, Fourier-Mukai transforms). Derived descent theorems: descent for dg-modules in the derived categories (hopefully making it clear what kind of information goes into gluing a dg-module). I realize that one can always go to Lurie's books for a comprehensive and much more general treatment, but I hope that a more elementary reference exists. I do believe that some aspects of this theory are a lot less technical than they appear. I am not aware of a single book that has all of these topics, but I can list a couple of very good books that can do the job together. First, Benoit Fresse works in this setting quite a bit. His book Modules over Operads and Functors contains material on (1), (2), (5), (7), and (8). The model categorical work is very explicit, including a description of (co)fibrations as you want. I also think I learned the most about Koszul duality from this source. It does not focus on the triangulated structure for (5). A good reference for that is the paper Stable Model Categories are Categories of Modules. Fresse also does not focus a ton on (2), but Loday and Vallette do in their book Algebraic Operads. This is also a great source for (1), (7), and (8). The model categorical work goes back to Hinich. For (3), I recommend More Concise Algebraic Topology, by May and Ponto. They also have a discussion of (1). For (4) and (6), and the more algebraic parts of (5) (e.g. Fourier-Mukai), I recommend Axiomatic, Enriched and Motivic Homotopy Theory, edited by John Greenlees, especially Strickland's article about axiomatic homotopy theory. Related is the monograph Axiomatic Stable Homotopy Theory, by Hovey, Palmieri, Strickland. I confess, this is the part of your question I know the least about. Strickland, however, has written in these sources about the connection between the items you list and homotopy theory. Others may have written more on the subject (e.g. Kaledin), and I hope someone can come along and add a comment if they know of a source. There is a study of DG algebra (DG rings, DG modules, DG categories, DG functors) in the book below, along with the related derived categories, etc. The Stacks Project at Columbia University hosts a collaborative web-based survey on algebraic stacks, including a survey of Differential Graded Algebras. The whole project is quite an impressive resource. There is an MSE posting from last year that lists a variety of sources that target more specific aspects, including: Differential Graded Algebras and Applications, for DG Lie algebras and Koszul duality.
Turnablock is a very simple game for two players invented by John Conway. It is played on a 3 by 3 square board with 9 counters that are black on one side and white on the other. The counters are placed on the board at random, one per square, except that the majority must be black uppermost. Players take turns to reverse all the pieces in a block (e.g. $1 \times 1$, $2 \times 3$ or even $3 \times 3$). The object is to have all pieces white uppermost on completion of your move.
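A minimal sketch of the board state and a move in Python, assuming a block is specified by its (inclusive) row and column ranges:

```python
# Board: 3x3 grid of booleans, True = white uppermost.
board = [[False] * 3 for _ in range(3)]

def flip_block(board, r0, r1, c0, c1):
    """Reverse every counter in the rectangular block
    covering rows r0..r1 and columns c0..c1 (inclusive)."""
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            board[r][c] = not board[r][c]

flip_block(board, 0, 1, 0, 2)                # flip a 2x3 block
print(all(all(row) for row in board))        # False: bottom row is still black
```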
A set is transitive if and only if all of its elements are subsets. If $A$ is transitive and $x$ is connected to $A$ by a chain of memberships (that is, $x \in y \in z \in \ldots \in A$), then $x \in A$. The intersection of two transitive sets is transitive. In set theory, transitive sets play an important role in models of ZFC. See transitive ZFC model.
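As a small illustration (a sketch using Python frozensets to model hereditarily finite sets), one can check transitivity directly from the definition that every element is also a subset:

```python
def is_transitive(a):
    """A set is transitive iff every element is also a subset of it."""
    return all(x <= a for x in a)  # <= is the subset test for frozensets

empty = frozenset()
one = frozenset({empty})          # {{}}: the ordinal 1
two = frozenset({empty, one})     # {{}, {{}}}: the ordinal 2

print(is_transitive(two))               # True: ordinals are transitive
print(is_transitive(frozenset({one})))  # False: its element {∅} is not a subset
```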
The complexity of a learning task is increased by transformations in the input space that preserve class identity. Visual object recognition for example is affected by changes in viewpoint, scale, illumination or planar transformations. While drastically altering the visual appearance, these changes are orthogonal to recognition and should not be reflected in the representation or feature encoding used for learning. We introduce a framework for weakly supervised learning of image embeddings that are robust to transformations and selective to the class distribution, using sets of transforming examples (orbit sets), deep parametrizations and a novel orbit-based loss. The proposed loss combines a discriminative, contrastive part for orbits with a reconstruction error that learns to rectify orbit transformations. The learned embeddings are evaluated in distance metric-based tasks, such as one-shot classification under geometric transformations, as well as face verification and retrieval under more realistic visual variability. Our results suggest that orbit sets, suitably computed or observed, can be used for efficient, weakly-supervised learning of semantically relevant image embeddings. We present GURLS, a least squares, modular, easy-to-extend software library for efficient supervised learning. GURLS is targeted to machine learning practitioners, as well as non-specialists. It offers a number of state-of-the-art training strategies for medium and large-scale learning, and routines for efficient model selection. The library is particularly well suited for multi-output problems (multi-category/multi-label). GURLS is currently available in two independent implementations: Matlab and C++. It takes advantage of the favorable properties of the regularized least squares algorithm to exploit advanced tools in linear algebra. Routines to handle computations with very large matrices by means of memory-mapped storage and distributed task execution are available. The package is distributed under the BSD licence and is available for download at https://github.com/CBCL/GURLS. From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions.
This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments. The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples ($n \to \infty$). The next phase is likely to focus on algorithms capable of learning from very few labeled examples ($n \to 1$), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a "good" representation for supervised learning, characterized by small sample complexity ($n$). We consider the case of visual object recognition, though the theory applies to other domains. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translations, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and unique (discriminative) signature can be computed for each image patch, $I$, in terms of empirical distributions of the dot-products between $I$ and a set of templates stored during unsupervised learning. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such estimates. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts. The theory extends existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition, and that this representation may be continuously learned in an unsupervised way during development and visual experience.
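As a rough illustration of the signature described above (a generic sketch, not the paper's code): for an image patch and a set of stored template orbits (each template together with its transformed copies), one can estimate the empirical distribution of dot products per template and pool it into histogram bins, concatenating the histograms into a signature vector:

```python
import numpy as np

def signature(patch, template_orbits, bins=10):
    """Invariant signature: for each template, a histogram of the
    dot products between the patch and the template's transformed
    copies (its orbit). Pooling over the orbit gives invariance."""
    feats = []
    for orbit in template_orbits:          # orbit: array (n_transforms, d)
        dots = orbit @ patch               # dot products <I, g t_k>
        hist, _ = np.histogram(dots, bins=bins, range=(-1.0, 1.0))
        feats.append(hist / len(dots))     # empirical distribution
    return np.concatenate(feats)

rng = np.random.default_rng(0)
patch = rng.standard_normal(64)
patch /= np.linalg.norm(patch)
orbits = [rng.standard_normal((32, 64)) for _ in range(3)]
orbits = [o / np.linalg.norm(o, axis=1, keepdims=True) for o in orbits]
print(signature(patch, orbits).shape)  # (30,): 3 templates x 10 bins
```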
We substituted the term $t(x)$ for $y$ in the expression $P(y)$. This is completely obvious and it gets used all the time. For example, whenever we say something like "$2 x + 1$ is even", we have substituted the expression $2 x + 1$ for $y$ in the predicate $\exists z . y = 2 z$, where all variables range over integers. But this $Q$ is the pullback of the inclusion $P \hookrightarrow B$ along $t$. And if you draw the relevant pullback diagram for yourself, and figure out why we have a pullback, you will have understood why substitution is pullback. But substitution of a term into a term is composition. I feel like in the slogans you ended with, the word "pullback" should be substituted for a couple of the occurrences of the word "substitution". This result is folklore, which is a technical term for a method of publication in category theory. It means that someone sketched it on the back of an envelope, mimeographed it (whatever that means) and showed it to three people in a seminar in Chicago in 1973, except that the only evidence that we have of these events is a comment that was overheard in another seminar at Columbia in 1976. Nevertheless, if some younger person is so presumptuous as to write out a proper proof and attempt to publish it, they will get shot down in flames. Undoubtedly people have known this as a fact for a very long time. However, I couldn't find any published papers from the 1970s that formulated this result. It seems unlikely that it was done then, because the categorists of that day had come from traditional pure mathematics and were not familiar with formal syntax. I would like to know who did write this down in any formal way first. Without asserting my own priority, let me cite my book Practical Foundations, in particular Section 4.3 and Chapter VIII, as a place where it is done. I describe a construction there of the classifying category or category of contexts and substitutions that works for any type theory with all of the structural rules (i.e. not linear logic) and possibly dependent types. This derives a sketch (essentially, generators and relations for the category) more or less directly from the syntax. Then the category is obtained easily from the sketch. This separates the "chalk" of recursive definitions of syntax from the "cheese" of associative composition in a category. Note to the old-timers: substitution is not given by forming the pullback. We construct the denotations of the original and substituted term or predicate first and then observe that they obey the universal property of a pullback. Not all pullbacks need exist in the syntactic category. Now I have a confession to make. I cannot see the obvious reason why the interpretation of the language in another category with the appropriate structure defines a functor from the classifying category. Certainly there is a function on objects and morphisms, and the universal properties of products and exponentials correspond to the type-theoretic rules. What I cannot see is why the function preserves composition. Let me try to write some mathematics to explain why substitution is sometimes composition and sometimes pullback. The Substitution Lemma in type theory says how substitutions commute: $$[a/x]^* ([b/y]^* t) = [[a/x]^* b/y]^* ([a/x]^* t),$$ where $[a/x]^* t$ is the result of substituting $a$ for $x$ in $t$. The superscript star comes from inverse images and pullbacks in category theory. All of this is an action on $t$, which we can omit.
This leaves the composition of abstract morphisms, in fact in the classifying category. So, as Andrej says, substitution of terms is composition. However, this commutative square of morphisms is a pullback. The morphism $[b/y]$ that arises from the term $\Gamma, x:X \vdash b:Y$ splits the product projection or display map $\hat y : \Gamma \times X \times Y \to \Gamma \times X$. Just as the action of the term is by substitution, that of this display map is weakening by the variable $y$. It is an example of the easy "pullback lemma" of which Andrej for some reason recently gave a proof. Let me try to give a bit more explanation of the construction of a category from syntax that is described at length in my book, and also answer my own question about why the interpretation in another category defines a functor. The antecedents of this construction include clones in universal algebra and classifying toposes in geometric model theory, but I called it the category of contexts and substitutions in line with the tradition of referring to the category of widgets and homomorphisms. People originally took individual types as the objects, but the relationship with syntax is made much more fluent if we use contexts instead, i.e. lists of variables together with their (possibly dependent) types. The lazy way of describing the morphisms is as strings of terms. The difficulty with formalising this, particularly for dependent types, is that we need to mix up recursion for the type theory with associativity for categories. Anyone who has tried to write programs for associative operations will know that this is a mess. To solve this problem I used an elementary sketch. Traditionally, sketches (esquisses in French) were used to describe categories of models of theories that involve limits and colimits. The journal Diagrammes includes a lot of work about them. Pierre Ageron extended these ideas to exponentials in a way that could probably deal with most of the ideas in type theory in a purely categorical way, but unfortunately he never developed his work. However, my idea only uses sketches in their simplest form, without limits, colimits, exponentials or anything else that invokes new objects. Every object (context) of the intended category is given as a node of the sketch. Then there are two classes of arrows, displays and terms, and five families of equations (commutative diagrams), giving generators and relations for the morphisms; the most complicated of these equations is the substitution lemma above. The substitution lemma is the square on the left in the diagram above, and the other square says that terms commute with displays; these are both pullbacks. Displays commute with each other and form the pullback or fibre product of the types over the context. However, the proof that these squares are pullbacks in the dependent case is delicate, making repeated use of the pullback lemma in a particular sequence of ways. All of this structure is exploited in the final chapter of the book to provide a fluent translation between diagrams and syntax, so that the universal properties of the former correspond exactly to the proof rules of the latter. However, since I was still rather biased towards diagrams and against syntax when I wrote the book, I did not actually spell out what it is to be an interpretation of a type-theoretic language in a category. I simply took this to be a structure-preserving functor from the category of contexts and substitutions.
We can rectify this, but as with substitution as pullback we have to do the construction in a specific order. We have to define a functor from the category generated by a sketch, for which we need to give its effect on nodes and arrows in such a way that the equations hold. For dependent types at the algebraic level, i.e. without $\Pi$, $\Sigma$ etc., the semantic structure that is required is a category with display maps, i.e. a class of maps that is closed under pullback against arbitrary maps. This provides the object part of the interpretation, for types and contexts, and the display maps that link them. Now we need to fill in the interpretations of the terms, as sections of the display maps that interpret their types-in-context. The operation-symbols (which we understand as having variables as their arguments) have given meanings as morphisms of the semantic category. Other terms are obtained by substitution of sub-terms for variables in the outermost operation-symbol. The result of such a substitution is the top left map in the diagram. This map is the mediator to the right-hand pullback such that the composite along the top is the identity. This makes the left-hand square commute; indeed, it is a pullback. The pullback square that captures the substitution lemma is therefore the definition of substitution, and its image in the semantic category is the definition of the interpretation of an expression that consists of an operation-symbol applied to sub-terms. In particular, this square commutes, as required. We have also made the other squares, capturing the simpler equations, commute too. Thanks, Paul, for very illuminating comments! Here is a programming problem that arises from the above description of the category: what is the appropriate data structure for morphisms? The lazy definition of a morphism is that it is an assignment of terms to variables, i.e. what is known as an environment in compiler design. Traditionally, hash tables were used to do this, but this is a monolithic solution based on assuming that there is only one environment to be considered. It works in practice because it can be updated in place and previous versions of the environment will not be needed again. In a category there are morphisms every which way. We would also like to think of this problem in a functional way. There need to be multiple versions of the environment. One of the operations that needs to be done on this data structure is composition, whilst clearly the generating maps $\hat x$ and $[a/x]$ need to be represented, so in the first instance we represent a morphism as the string of its generators. To find the term assigned to a variable $x$, we search the string from the right: if we encounter a generator $[a/x]$ then $a$ is the required term, except that it contains its own free variables, which may each need to be substituted using assignments that are further back in the string; if instead we encounter $\hat x$ then there is an error, because this means that $x$ is excluded from the target context. This is a linear search, which is expensive if the morphism is the composite of a very long string. This situation arises in the compilation of an ordinary program during reading of the hundreds of definitions that might be present in the library headers, and leads to the use of hashes. There are other data structures that can be used for dictionary-like data in a functional way. There is a book by Chris Okasaki about them. One idea that might be adapted to this problem is that of red-black trees.
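Before turning to tree-based structures, here is a minimal Python sketch of the right-to-left linear search just described, with the two kinds of generators represented as hypothetical tagged tuples:

```python
def lookup(morphism, var):
    """Search a composite (a list of generators, left to right) for
    the term assigned to var. ('sub', x, a) represents [a/x] and
    ('weak', x) represents the display map x-hat (weakening by x)."""
    for gen in reversed(morphism):            # search from the right
        if gen[0] == 'sub' and gen[1] == var:
            return gen[2]    # free variables of this term may in turn
                             # need lookups further back in the string
        if gen[0] == 'weak' and gen[1] == var:
            raise KeyError(f"{var} is excluded from the target context")
    raise KeyError(var)

m = [('sub', 'y', 'f(x)'), ('weak', 'z'), ('sub', 'x', '3')]
print(lookup(m, 'x'))  # '3'
```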
This problem is more complex than dictionary search because we need to know that neither $[a/x]$ nor $\hat x$ occurs in the right-hand sub-tree that arises from composition (in diagrammatic order) before we select an occurrence from the other sub-tree. So the items in the dictionary have both an "alphabetical" order, which we use to search for them quickly, and the order in which they occur in the composite, from which we require the right-most. Intuitively, composition is also a pullback: $f;g$ is the "pullback" of a morphism $g : B \to C$ along $f : A \to B$ to obtain a morphism of type $A \to C$. One way of formalizing the intuition is to work in subsumptive reflexive graph categories (Dunphy & Reddy, Parametric Limits, LICS 2004), where we have "logical relations" in addition to morphisms. Perhaps there are other ways as well. @Uday: pullbacks can be generalized, for example in fibered categories. What sort of generalization are you talking about? Substitution is also composition if we represent the subset $P \subseteq B$ not as a monomorphism $P \hookrightarrow B$ but as a morphism $P\colon B \rightarrow \Omega$. @Toby: that's a good point. On the pullback diagram, are the objects sets? predicates? sets formed by predicates? When you say "Q is the pullback of the inclusion $P \hookrightarrow B$ along $t$", do you say it in the same way as "$p_2$ is a pullback or fiber product of the pair. We also say that $p_2$ is the pullback of $f$ along $g$" from http://www.math.mcgill.ca/triples/Barr-Wells-ctcs.pdf, page 273? If yes, then I must conclude that $Q$ is an arrow, $t(x)$ is an arrow, the inclusion from $P$ to $B$ is an arrow and the objects are the sets $A$, $P$ and $B$; but then I'm missing the arrow from $A$ to $P$ and from $A$ to $B$. Use the fact that a predicate $P$ on a set $A$ can be viewed as a subset inclusion $P \subseteq A$. That is, you should switch between "$P$ is a predicate on $A$", "$P$ is a subset of $A$" and "subset inclusion $i_P : P \to A$", as the occasion requires. For instance, if someone says "substitution of a term into a predicate is pullback", then we clearly need a pullback square, so four arrows. The two arrows we start with are $t : B \to A$ (the substituted term) and the subset inclusion $i_P : P \to A$ (and you can figure this out because, of the three forms of understanding "$P$ is a predicate on $A$", only this one is an arrow). The other two arrows are a subset inclusion $i_Q : Q \to B$ where $Q = \lbrace y \in B \mid t(y) \in P\rbrace$ and the arrow $Q \to P$ which is the restriction of $t$ to $Q$, i.e., it takes $y \in Q$ to $t(y) \in P$. Does that help?
Given an arbitrary convex cone of $\mathbb R^n$, we find a geometric class of homogeneous weights for which balls centered at the origin and intersected with the cone are minimizers of the weighted isoperimetric problem in the convex cone. This leads to isoperimetric inequalities with the optimal constant that were unknown even for a sector of the plane. Our result applies to all nonnegative homogeneous weights in $\mathbb R^n$ satisfying a concavity condition in the cone. The condition is equivalent to a natural curvature-dimension bound and also to the nonnegativity of a Bakry-Emery Ricci tensor. Even though our weights are nonradial, balls are still minimizers of the weighted isoperimetric problem. A particularly important case is that of monomial weights. Our proof uses the ABP method applied to an appropriate linear Neumann problem. In the anisotropic case, the Wulff shape (intersected with the cone) minimizes the anisotropic weighted perimeter under the weighted volume constraint. As a particular case of our results, we give new proofs of two classical results: the Wulff inequality and the isoperimetric inequality in convex cones of Lions and Pacella.
Title: About equilibrium states at temperature zero. The study of ergodic optimization is relatively new (it began about 10 years ago). It is of course more developed in the uniformly hyperbolic setting than in the non-uniformly hyperbolic one, because the notion of equilibrium state is better understood in the first case. I will recall that this study is related to the fact that the pressure for a potential $\beta\varphi$ has an asymptote as $\beta$ goes to $+\infty$. 1/ What happens to the (unique?) equilibrium state as $\beta$ goes to $+\infty$? 2/ Can the pressure touch the asymptote before $+\infty$ (hence before temperature zero)? I shall present some results answering these questions in both settings (uniformly and non-uniformly hyperbolic).
$x_1 = 0$ and $x_2 = 1$. So if I understand the problem, which I may well not, the proposition is false and cannot be proved. I see. I badly misread the original problem. I apologize. Do I have the problem correctly? I still like to look at a few specific cases because that frequently gives a clue on how to solve the general problem. Let's start with n = 2. The first condition but not the second applies to the second solution, so that number is 1. The second condition but not the first applies to the first solution, so that number is also 1. The numbers are equal. Does it really end the proof? No, not at all. What I was saying is that by exploring cases with small n, you will learn about the problem. It is an excellent, though not foolproof, way to FIND a proof. Notice that choosing n = 2, there were 9 possible cases. ONLY TWO were solutions. One condition applied to one solution and the other condition applied to the other solution. And it was fairly easy to show that analogous solutions exist for n > 2. Try n = 3 and n = 4 with 16 and 25 cases. If again there are still only two solutions, that is encouraging. That would suggest trying to prove that those are the only two solutions for any n.
Find a path between the top left corner and the bottom right corner, visiting each spot (.) exactly once. You can only move horizontally or vertically. There are multiple solutions for a 5x5 grid, 7x7, and so on. But for a 2x2, 4x4, 6x6, and higher n x n (where n is even), this does not seem to yield any solution. Imagine the grid as a chessboard. Then for a $2k\times2k$ board, the two corners are the same colour, say white. Any path must travel $WBWB\dots WBW$, which always visits an odd number of squares; however, we need to travel through an even number of squares ($4k^2$ of them), so the task is impossible.
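A brute-force check of this parity argument (a small backtracking sketch in Python, feasible only for tiny grids):

```python
def count_paths(n):
    """Count corner-to-corner Hamiltonian paths on an n x n grid."""
    seen = {(0, 0)}

    def walk(r, c):
        if len(seen) == n * n:
            return 1 if (r, c) == (n - 1, n - 1) else 0
        total = 0
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen:
                seen.add((nr, nc))
                total += walk(nr, nc)
                seen.remove((nr, nc))
        return total

    return walk(0, 0)

print([count_paths(n) for n in range(2, 6)])  # [0, 2, 0, 104]: even sizes give 0
```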
Abstract: Using similarities between topological $K$-theory and periodic cyclic homology, we show that, after tensoring with $\mathbb C$, for certain Fréchet algebras the Chern character provides an isomorphism between these functors. This is applied to prove that the Hecke algebra and the Schwartz algebra of a reductive $p$-adic group have isomorphic periodic cyclic homology. The main theorem in the first version was incorrect for algebras related to noncompact manifolds. This has no effect on the results concerning $p$-adic groups. In the appendix we show that an analogous cohomological result does hold in the noncompact case.
Meet Kayla and Sam. They are at the fair and are deciding on how to spend their money on the different attractions. They want to ride all the rides. The price is 8 dollars per ticket per roller coaster. They definitely also want to ride the Ferris wheel together at least once. The price for the Ferris wheel is 5 dollars per ticket. Sam has a sweet tooth and he can't have a visit to the amusement park without cotton candy. So he will sacrifice a ride on the roller coaster so he has some money for cotton candy. Cotton candy costs $2. Together, Kayla and Sam have one hundred dollars. They both want to ride the roller coasters as often as possible. Our question is: How many roller coasters can they ride before they run out of money? To determine the number of times each of them can ride the roller coaster, we can use a linear equation. The roller coaster is 8 dollars per ride. To ride the Ferris wheel, Kayla and Sam each have to pay 5 dollars. Cotton candy costs 2 dollars. Together they have 100 dollars to spend. Sam buys 1 stick of cotton candy. They both take a ride on the Ferris wheel, which is 2 tickets. We don't know the number of rides they can take on the roller coaster, so let x be Kayla's number of rides. Since Sam takes one ride less in order to get some cotton candy, his number of rides can be represented by x − 1. 8 dollars for the roller coaster times x for the number of rides Kayla takes. Add another 8 dollars for the roller coaster times (x − 1) for Sam's rides. Now, you add 2 dollars for the cost of Sam's cotton candy. They have one hundred dollars to spend, so you put this on the right side of the equation. So now you have a linear equation with the variable x on only one side. You can solve this equation for x to find out how often Kayla and Sam can ride the roller coaster. To start solving this linear equation, you need to follow the order of operations and, using the distributive property, multiply both terms inside the parentheses by eight. Now you have: 8x + 8x − 8 + 5 + 5 + 2 = 100. On the left side of the equation, combine like terms: combine the x terms and the integer terms together. The result of this step is 16x + 4 = 100. Now solve the equation by using opposite operations. Remember that you use PEMDAS reversed. 4 is subtracted from both sides of the equation to obtain all integer values on the right side of the equation. The result of this step is 16x = 96. To solve for x, use opposite operations again: both sides are divided by 16 in order to isolate the x value on one side of the equation. The solution to the equation is x = 6. Now, let's check this solution. Substitute the value of 6 in for x. We'll simplify by using the order of operations. First look at the parentheses. Then, complete the multiplication. Last, you can add all values on the left hand side of the equation. The value on the left side and the right side of the equation are equal. So the solution to the equation, x equals 6, is the correct value for x. So, what does this mean? Remember: we said x was the number of rides Kayla can take on the roller coaster. So she can take 6 rides. Sam goes 1 less time, or x − 1. This is 6 − 1, which is 5. Great! Now they're finally on the roller coaster. But maybe Sam shouldn't have spent those 2 dollars on the cotton candy right before the rides. Linear equations needing more than one step to solve are called multi-step equations. When the variable is on just one side of the equal sign, you can follow these steps to solve for the value of the unknown variable.
Linear equations needing more than one step to solve are called multi-step equations. When the variable is on just one side of the equal sign, you can follow these steps to solve for the value of the unknown variable. First, if there are parentheses, use the Distributive Property to multiply the number outside the parentheses across the terms inside the parentheses. Second, combine all the like terms: numbers alone, also called constants, are like terms, and same variables, including same variables attached to numbers (known as coefficients), are also like terms. Third, use opposite (inverse) operations in the order of reverse PEMDAS: add or subtract on both sides of the equal sign, then multiply or divide on both sides, to completely isolate the variable, leaving, finally, the solution to the linear equation. After following the multiple steps of multi-step equations, there is one final step: check your work. You can do this by substituting the value of the variable back into the original equation and using the order of operations (PEMDAS, BEDMAS, BODMAS, BIDMAS, operator precedence) to simplify. If you have solved the equation correctly, the left side of the equal sign will be the same as the right side.

Would you like to apply what you have learned? You can review and practice it with the exercises for the video Solving Multi-Step Equations with Variables on One Side.

Determine the best answer for each blank. Remember that Sam rides the roller coaster one time fewer than Kayla. You can write all the money Kayla and Sam want to spend on one side of the equation and the money they have to spend on the other side. One roller coaster ride costs $\$8$, so four rides cost $\$8\times 4=\$32$. We want to know how many times Kayla and Sam can each ride the roller coaster. To figure this out, we assign the variable $x$ to the number of times Kayla can ride the roller coaster. Because Sam rides the roller coaster one time fewer than Kayla, he rides it $x-1$ times. Pay attention to the parentheses; they are very important. Both Kayla and Sam ride the Ferris wheel just once, which gives us the expression $5+5$. Last but not least, Sam buys some cotton candy for $\$2$. We must add all of these expenses: $8\times x+8\times (x-1)+5+5+2$.

Describe how to solve an equation. An equation is like a scale in balance: we have terms on both sides of the equal sign. You can move things on one or both sides of the scale, but if you remove something from one side of the scale, you have to remove the same thing from the other side as well. We can modify the equation by using the Distributive Property or by combining like terms; then, to solve, we isolate the variable by using opposite operations on both sides of the equation. In short, you solve these equations by simplifying with the Distributive Property, then combining like terms, and finally isolating the variable. Our equation is $8\times x+8\times (x-1)+5+5+2=100$. What does this solution mean? Sam can only ride $5$ times because he bought cotton candy instead.

Find and solve the equation for the given situation. Example: if $x$ is the unknown value, two more would be $x+2$. The number of cats is the same in both examples; let the unknown number of cats be $x$. In the first situation there are three more cats than dogs, so $x-3$ is the number of dogs, and the total cans of food is $3\times x+2\times (x-3)$. Here $x=5$ is the number of cats; subtracting $3$ gives the number of dogs, $2$. In the second situation there is one more dog than there are cats, so $x+1$ is the number of dogs, and the cost for the cats and the dogs is $200\times x+400\times (x+1)$. Again $x=5$ is the number of cats; adding $1$ gives the number of dogs, $6$.
Evaluate how many times Kayla and Sam can ride the roller coaster. Organize the given information first. You can solve each equation by using the Distributive Property, combining like terms, and using opposite operations. First, we have to set up the equation for each situation. For $x=5$, this means that Kayla rides the roller coaster five times while Sam rides it only three times. For $x=7$, this means that Sam rides the roller coaster seven times while Kayla rides it six times. For $x=5$, this means that Kayla and Sam ride the roller coaster five times each. For $x=4$, this means that Kayla and Sam both ride the roller coaster four times.

Determine how many bags of candy Kayla and Sam can buy. The opposite operation of $+$ is $-$, and vice versa. The opposite operation of $\times$ is $\div$, and vice versa. $x=6$ is the number of bags of candy Sam buys. Kayla buys one bag fewer, so she buys $5$ bags. Together they buy $6+5=11$ bags of candy.
We present three-dimensional threefold-coordinated structures for iridates which may generate Kitaev-type magnetic exchanges. The resulting solvable 3D quantum spin liquid exhibits the uniquely 3D property of stability at finite temperature ($T_c \sim J_k/100$). Adding Heisenberg couplings spoils exact solubility; however, the large loop length $\ell$ of the lattice suggests an approximation in which $\ell \rightarrow \infty$. The Kitaev-Heisenberg model can be solved on the resulting Bethe lattice using tensor product states; we present the phase diagrams, finding multiple magnetic order parameters and identifying gapped spin liquid phases by an entanglement fingerprint.
Which is faster, and by how much: a linear search of only 1000 elements on a 5-GHz computer, or a binary search of 1 million elements on a 1-GHz computer? Assume that the execution of each instruction on the 5-GHz computer is five times ... 1-GHz computer and that each iteration of the linear search algorithm is twice as fast as each iteration of the binary search algorithm.

This question is not related to GATE, but it would certainly help you to grill your mind to come up with a better approach. Feel free to comment if you think this is not the right forum for this question. I have 10 ... format cannot be changed as there is a dependency with other downstream systems. Is there a better approach to achieve the same?

Let $A$ be an array of $31$ numbers consisting of a sequence of $0$'s followed by a sequence of $1$'s. The problem is to find the smallest index $i$ such that $A\left[i\right]$ is $1$ by probing the minimum number of locations in $A$. The worst case number of probes performed by an optimal algorithm is ____________. Can someone provide a detailed explanation for this answer?

Suppose you are given an array $A$ with $2n$ numbers. The numbers in odd positions are sorted in ascending order, that is, $A[1] \leq A[3] \leq \ldots \leq A[2n-1]$. The numbers in even positions are sorted in descending order, ... search on the entire array. Perform separate binary searches on the odd positions and the even positions. Search sequentially from the end of the array.
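Returning to the first question above, here is one common back-of-the-envelope reading (a sketch under stated assumptions, not an official answer: worst-case iteration counts, and one binary-search iteration on the 1-GHz machine taken as the unit cost $c$, so a linear-search iteration on the 5-GHz machine costs $c/(2\times 5)$):

$$\text{linear: } 1000 \times \frac{c}{2 \times 5} = 100\,c, \qquad \text{binary: } \lceil \log_2 10^6 \rceil \times c = 20\,c.$$

Under these assumptions, the binary search on the slower machine is about five times faster.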
GABA$_A$ receptor has different binding sites for different molecules. It contains a total of 16 subunits ($\alpha(1-6)$, $\beta(1-3)$, $\gamma(1-3)$, $\delta$, $\varepsilon$, $\pi$, $\theta$). The benzodiazepine I site is located between the $\alpha$ and $\gamma$ subunits, and the benzodiazepine II site involves either $\alpha 2$, $\alpha 3$, or $\alpha 5$ in combination with $\beta$ and $\gamma 2$ subunits. Benzodiazepines (BZDs) are the second generation of hypnotics, which bind at the benzodiazepine I site. Like barbiturates, BZDs acting through GABA$_A$ receptors also induce GABA-mediated inhibition of postsynaptic neurons. Different BZDs have different affinities for GABA$_A$ receptors. Among all BZDs, diazepam has the most versatile use in neuronal disorders. Diazepam is capable of binding to both the benzodiazepine I and benzodiazepine II sites of the receptor, and thus can be used as a hypnotic, sedative and anxiolytic (mediated by the benzodiazepine I site), and as an anti-epileptic and muscle relaxant (decreasing muscle tone significantly). Based on the muscle relaxation action of diazepam, it is used to potentiate pentobarbitone-induced sleeping time. Materials: pan balance, marker, needle (26 G), syringe, timer.
I need to define parameters such that in the matrix the $i$-th component is 'Lambda' and the $j$-th component is the corresponding 'Neff' value. How is that possible in Mathematica? Browse other questions tagged matrix or ask your own question. How to solve for this matrix? How to write an $n\times 2$ matrix where the first column is made of only 1's?
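Returning to the question above, a minimal Mathematica sketch of one common reading: pair the $i$-th Lambda with the $i$-th Neff into an $n \times 2$ matrix. The list names lambdas and neffs and the sample values are hypothetical stand-ins for the asker's data:

(* hypothetical data; replace with your own Lambda and Neff lists *)
lambdas = {400, 450, 500, 550};
neffs = {1.44, 1.45, 1.46, 1.47};

(* pair element i of lambdas with element i of neffs: an n x 2 matrix *)
mat = Transpose[{lambdas, neffs}];

(* the same matrix built by explicit indexing *)
mat2 = Table[{lambdas[[i]], neffs[[i]]}, {i, Length[lambdas]}];

mat // MatrixForm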
Abstract: On a two-level quantum system driven by an external field, we consider the population transfer problem from the first to the second level, minimizing the time of transfer, with bounded field amplitude. On the Bloch sphere (i.e. after a suitable Hopf projection), this problem can be attacked with techniques of optimal syntheses on 2-D manifolds. Let $(-E,E)$ be the two energy levels, and $|\Omega(t)|\leq M$ the bound on the field amplitude. For each value of $E$ and $M$, we provide the explicit expression of the time optimal trajectory steering the state one to the state two, in terms of a parameter that should be computed numerically. For $M\ll E$, every time optimal trajectory is periodic (and in particular bang-bang) with frequency of the order of the resonance frequency $\omega_R=2E$. On the other hand, for $M>E$ the time optimal trajectory steering the state one to the state two is bang-bang with exactly one switching. For fixed $E$, we also prove that for $M\to\infty$ the time needed to reach the state two tends to zero. Finally, we compare these results with some known results of Khaneja, Brockett and Glaser and with those obtained in the Rotating Wave Approximation.
ArcGIS Help 10.1: What is a z-score? What is a p-value?

State the values of the raw scores and z-scores. A positive z-score indicates a score that is higher than the mean; a negative z-score indicates a score that is less than the mean. The larger the z-score, the greater the difference between the score and the mean. The value in the intersection of the row and column is the area under the curve between zero and the z-score looked up. Because of the symmetry of the normal distribution, look up the absolute value of any z-score. You have to look up the z-score of the sample mean in the table of z-scores to find out what percentage of the standard normal distribution falls between your sample z-score and negative infinity.

Negative z-scores and proportions: the table may also be used to find the areas to the left of a negative z-score. To do this, drop the negative sign and look for the appropriate entry in the table. The textbook also has an example where $\mathbb P(Z\geq 3.9) = 0.000048$; can someone explain this? Both z-scores and p-values are associated with the standard normal distribution as shown below. Very high or very low (negative) z-scores, associated with very small p-values, are found in the tails of the normal distribution.
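An illustrative lookup using the conventions described above (the table values are standard): to find the area to the left of $z = -1.25$, drop the sign, read the area between $0$ and $1.25$ from the table ($0.3944$), and subtract from $0.5$:

$$\mathbb P(Z \leq -1.25) = 0.5 - 0.3944 = 0.1056.$$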
Let's say a normed division algebra is a real vector space $A$ equipped with a bilinear product, an element $1$ such that $1a = a = a1$, and a norm obeying $|ab| = |a| |b|$. Are there any infinite-dimensional normed division algebras? If so, how many are there? A MathSciNet search reveals a paper by Urbanik and Wright (Absolute-valued algebras. Proc. Amer. Math. Soc. 11 (1960), 861–866) where it is proved that an arbitrary real normed algebra (with unit) is in fact a finite-dimensional division algebra, hence is one of the four mentioned in the OP. A key piece of the argument (Theorem 1) is to show that such an algebra $A$ is algebraic, in the sense that if $x \in A$, then the subalgebra of $A$ generated by $x$ is finite-dimensional. The authors then invoke a theorem of A. A. Albert stating that a unital algebraic algebra is a finite-dimensional division algebra. it goes on then to describe generalisations of the usual normed division algebras. The wikipedia page composition algebra tells us that one only has a 1-dimensional composition algebra when the characteristic of the base field is not 2, but otherwise you can start from a 2-dimensional composition algebra over a characteristic 2 field and perform the usual Cayley-Dickson construction. Edit: The following theorem was proved by Kaplansky (Infinite-dimensional quadratic forms admitting composition, Proc AMS 1953), which finishes off the classification. A quadratic form $g$ on an algebra $A$ over a field $F$ in this context is a function $g:A \to F$ such that $g(kx) = k^2g(x)$ for $k\in F$ and $x\in A$. (d) $g(x) = x^\ast x$ where $x\mapsto x\ast$ is an involution of $A$. So unless your base field has characteristic 2, and your division algebra is a purely inseparable extension of the base field, your division algebra has to be finite dimensional. The associative case follows from Mazur's Theorem (see here). He proved that there are up to isomorphism precisely three Banach division algebras, namely $\mathbb R,\mathbb C$ and $\mathbb H$. This applies to the completion of any normed division algebra, which still verifies the identity $|ab|=|a||b|$, and hence is a division algebra. Not the answer you're looking for? Browse other questions tagged ra.rings-and-algebras division-algebras or ask your own question. How to distinguish division algebras from matrix algebras? Is there any construction of infinite dimensional algebraic division ring?
It would be great if you can point out the mathematical formulation or R1CS form of the NP statement of Zcash.

I found your question intriguing enough to take a dive into the source code. Note that I have absolutely no experience with ZCash; I know a few ins and outs of Monero, though, and I have done some tinkering with some very simple R1CS systems. Since the ZCash "Sapling" update, it seems like the knowledge proof protocols are based on the bellman library. Their proofs are conveniently stored in the zcash-proofs directory of the ZCash Rust library. Their individual circuit "gadgets" are in the sapling-crypto/circuit directory, in which you'll find the Spend and Output circuits. These all build upon the other circuits in that directory, among which a SHA256 hash and a blake2s implementation, and ECC cryptography running in a circuit.

All the above should already give you some insight into what happens: the precise NP statement and the complete R1CS system will be way too long to present in a simple Stack Exchange answer. I think even presenting all the constraints at a high level would take us too far, but an illustration of a few constraints may give you a lot of insight!

For a high-level overview, I think the ZCash blog gives a good starting point. If you want to dive in depth, there's also the protocol specification; especially "4.15: Zk-SNARK statements" will interest you.

Let's explore one of these, for example. Each transaction has a sequence of Spend descriptions and a sequence of Output descriptions. The latter is the simplest of both, so let's go with that. Its statement comprises, among others:

Note commitment integrity: the commitment of the note is honestly and correctly generated.
Value commitment integrity: the value commitment of a transaction is the commitment of a value.
Small order check: the point $g_d$ should not be of small order, an implementation detail of the specific curve they use.
Ephemeral public key integrity: the ephemeral public key of the transaction equals $[esk]\times g_d$.

I encourage you to look at the other statements yourself from here; the protocol spec in particular is quite detailed on this.

Not the answer you're looking for? Browse other questions tagged zero-knowledge-proofs or ask your own question. Why must Victor not know which tunnel Peggy chooses? What is the difference between proofs and arguments of knowledge? What are the actual anonymity features of ZCash? What is the difference between 'completeness' and 'soundness' in ZKP?
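To make "R1CS form" concrete, here is a generic toy example (not one of Zcash's actual constraints): an R1CS instance is a list of constraints of the shape $\langle a_i, z\rangle \cdot \langle b_i, z\rangle = \langle c_i, z\rangle$ over a witness vector $z$. Proving knowledge of $x$ with $x^3 + x + 5 = 35$ flattens into three such constraints over $z = (1, x, t_1, t_2)$:

$$x \cdot x = t_1, \qquad t_1 \cdot x = t_2, \qquad (t_2 + x + 5) \cdot 1 = 35.$$

Each of Zcash's statement components above (commitment integrity, key integrity, and so on) is compiled by the circuit gadgets into many constraints of exactly this shape.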
Inline mathematical text is written in the normal way, enclosed by dollar signs: $ ... $. For currency in regular text, use \$. $a_1, \ldots, a_n$ ⇒ \(a_1, \ldots, a_n\). A line is the shortest path between two points. The text after the hash mark is the label, or identifier. "I refer you to <<def-point>>" renders as "The professor harrumphed haughtily: I refer you to Definition 1." Note that the text "Definition 1" is a link. There are a number of special environments whose content is processed differently from env.theorem, env.definition, etc. Note that putting an identifier (#fermat) "turns on" the automatic numbering. Click on the "add associated document" tool. In the form that comes up, enter "Tex Macros" for the title and "texmacros" for the type. Paste your TeX macros in the window that appears below these two items. You can edit texmacros at any time. The macros you put in it are available to all sections of the document.
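A sketch of what the Tex Macros document might contain; the macro names here are hypothetical examples, and the exact definition syntax accepted depends on the renderer:

\def\NN{\mathbb N}
\def\half{\frac{1}{2}}

After saving, $\NN$ and $\half$ would be available in every section of the document.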
I have a number of images in a Mathematica array, and I'd like to export the images to a PDF such that each image is precisely scaled to be $n \times m$ millimeters in dimension. The idea is that printing the PDF out on $8.5$ by $11$ inch ($215.9$ mm $\times$ $279.4$ mm) paper will give me images that can be measured with a ruler to be $n \times m$ millimeters. If the images' proportions do not already match $n \times m$, is there a simple way to do this? I can also easily just apply ImageCrop to each image in the array as a preprocessing step. ExpressionCell[..., "Print"] is necessary for printing at the same size as the output from Print[page] would print. Print output can then be directed to CutePDF or pdfFactory, etc. Not the answer you're looking for? Browse other questions tagged export image-processing image pdf-format or ask your own question. Exporting extremely large images; arrays generated via TensorProduct use 5 times as much RAM as expected? Exporting nb to pdf without images being cropped?
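Returning to the question above, a minimal sketch of one way to pin down physical sizes, assuming that ImageSize measured in printer's points carries through to the exported PDF (1 point = 1/72 inch = 25.4/72 mm); the helper name mmToPoints, the target size, and the file name are hypothetical:

(* convert millimeters to printer's points: 25.4 mm = 1 inch = 72 pt *)
mmToPoints[mm_] := mm/25.4*72;

(* target physical size, e.g. 40 mm x 30 mm *)
{n, m} = {40, 30};

(* force the rendered size of each image, then export the lot as one PDF *)
sized = Image[#, ImageSize -> {mmToPoints[n], mmToPoints[m]}] & /@ imageArray;
Export["images.pdf", Column[sized]]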
GREENWICH JOURNAL / SALEM PRESS, Page 2, Thursday, June 15, 2000

Sixth graders score high on Math League test

Greenwich sixth grade "mathletes" (left to right): Maya Seegers, Aaron Thomas-Bolduc, Emilie Simoneau, Vincent Monks, and Keith Pendergrass. Each year, high achieving sixth grade math students at Greenwich Central take the New York Math League test. The test consists of sixty challenging questions which must be completed in one hour. The test is not affiliated with New York State, and participation in it is at the discretion of the school. The sixth grade staff at Greenwich views the test as an opportunity to challenge students and expose them to a variety of math questions. Teachers encourage students to apply all their problem-solving skills and to "have fun" with the test.

This year, Greenwich sixth grade "mathletes" scored higher than ever before. The test scores of the top five students are added together to give a team score. The Greenwich team scored fourth out of all participating schools in the eight county Adirondack area. Top scorer Emilie Simoneau had the fourth highest individual score of all students who took the test in the same eight county area. The other four sixth grade mathletes are Aaron Thomas-Bolduc, Vincent Monks, Keith Pendergrass, and Maya Seegers. These five students will be honored at the sixth grade graduation ceremony on June 19.

Time to register for swim program

The Town of Greenwich Swim Program will be held at the Greenwich Beach again this year. Two three-week instruction sessions will be given (June 26 - July 13 and July 17 - August 3) with lessons Monday through Thursday. Registration is open to students who have completed grades one to eight. Bus transportation to the beach leaves the town office building at noon and returns afterward. Registration forms are available at the Greenwich Town office.

Summer program

The Greenwich Youth program will begin on Monday, June 26, and will end on Friday, August 4. The program is for students in grades K-6. Sponsored by the Youth Commission, the program will be held Mondays through Thursdays from 9 to 11 a.m., at St. Joseph's Hall on Hill Street. Activities include arts and crafts and outdoor sporting events. Each Friday there will be a field trip to various locations, including Skateland, Great Escape and Moreau State Park. There is a charge for these trips which varies with the admission charge of the site visited.

Steve's: Again OPEN SUNDAYS starting June 11th, 7 a.m. - 11 a.m. 57-59 Main St., Greenwich, 692-1090. Mon.-Sat. 6 a.m. - 2 p.m.; Sun. 7 a.m. - 11 a.m.

BINGO, Greenwich Elks. Every Tuesday evening at 6:30 p.m.; doors open at 5 p.m. 4 Early Birds, 50/50, separate smoking and non-smoking rooms. $500 Coverall Jackpot. Proceeds support the Elks and five community area programs. Homemade food, desserts and refreshments. Multiple Bell Jar games. Rt. 40 So., Greenwich.
(Bingo Lic. 771-5; Bell Jar 033559)

GREENWICH GAZETTE: The first issue of the Town of Greenwich Newsletter will be available Friday afternoon, June 16. Pick up your free copy at the Town Office or at area merchants.

SALES - SERVICE - LEASING. THE DEAL MAKERS. Whalen Chevy-Olds, Inc., Greenwich, NY (whalendeals.com). Greenwich 692-2241, 1-800-439-2241 toll free. See Bob Bassett, Bob Sullivan, Todd Distasio, George Whalen, Tim Whalen, Jerry Herbst, Joe Maille or Scott Kohler.

GREAT PAINT. GREAT STORE! Town & Country Hardware, 118 Main St., Greenwich, 692-7035. Mon.-Fri. 7:30-5:30; Sat. 8-5; Sun. 10-4.

The Greenwich Journal, established October 13, 1842. Telephone (518) 692-2266 or FAX 692-2589.

Town offered "free" septic system; library needs room for waste disposal. By Tony Ensile

At the regular meeting of the Greenwich Town Board Tuesday night, the town considered a proposal by the Greenwich Free Library to provide a free new septic system for the town in exchange for an easement to install the library's septic system on town property. The library's expansion program, which has been on the drawing board for several years now, is faced with a multitude of space limitations. Since the library adjoins the property on which the Town Hall and the Commons are located, the library occasionally looks to the town for solutions to its space problem. Use of the town's parking area by library patrons was approved some time ago, but the latest request will require some thought and engineering. One proposal was to build a "joint septic system" for use by both the library and town hall, but this proposal did not find much favor among the councilmen. The second proposal was to install the library's septic system on town property in exchange for the library's installing a new system for the town.

Since the town has had no problems with its system in the memory of anyone present at the meeting, no one even knew where the septic system was located, let alone its capacity and state of serviceability. Principal among the concerns of the councilmen, however, was the amount of space available for both systems between the town hall and the Commons. The prospect of maybe having to dig up the gardens (even though they would be replaced) did not sit well with the board. Colleen Mason, the official caretaker of town gardens, was not present at the meeting, and it was feared that any proposal that would interfere with her handiwork would not meet with her approval. Soil tests would have to be performed and other engineering problems worked out before the board would consider taking action on the proposal. Both buildings, however, have only two rest rooms each, and the volume of waste from both is not likely to equal that of a normal household with showers, dishwashers, washing machines, etc. The proposed project is not scheduled until the spring or summer of 2001.

Real-estate transactions

Efforts by a triumvirate of landowners to purchase property from the town for access to their own properties off County Route 77 stalled again Tuesday night, when the property owners disagreed with the valuation placed on the land by an appraiser. They asked for a second appraisal, and the board agreed to have it done. Cost of the appraisals is borne by the prospective purchasers. Resolutions passed last month adopting East Meadow View Lane were rescinded by the board this month.
The town had agreed to accept the road based upon the assumption that it had been approved by the County DPW, when in fact that approval has been withheld pending the resolution of some drainage design problems. Deeds to the road will not be accepted by the town until county approval of the road is obtained.

When iris eyes are smiling

Flowers in a bed at Memorial Park in Greenwich are happiest when the sun shines. This iris was smiling when the sun made a brief appearance on Tuesday morning. We expect that it and the others at the park will be beaming during most of the Whipple City Festival this Friday, Saturday, and Sunday.

Club Cando wants you

Club Cando is the theme for this year's ecumenical Vacation Bible School, which is open to youth in grades K-6. We will meet beginning Sunday evening, July 9, and ending Thursday evening, July 13, from 5:30-7:30. A picnic supper will be served each evening, followed by music, crafts, stories from the Bible and "club" activities. Club Cando will be held at St. Joseph's in the hall and on the grounds. Sign up at any church in Greenwich starting this Sunday or see our table at Whipple City Days. Everyone is welcome. Adult help is always welcome. For more information see your pastor or call Michele Schreiner or Barbara Thomas.

Food for Kids begins Monday, June 26

The summer Food for Kids program begins this year on June 26. Free lunch will be offered to Greenwich area youth from 11 a.m. - noon, Monday through Friday, at the Town Commons area. On rainy days lunch will be served in St. Joseph's Hall. The program is open to any family who would like their children to participate. We are aware that during the summer it is often difficult for families to provide the nutrition and supervision that kids need. The Food for Kids Program is here to help. The menu will vary each day and will include deli sandwiches, hot dogs and hamburgers, pizza, PBJ, veggies and drinks. Those kids participating in the town arts and crafts and swim programs are welcome to come for lunch or to bring a lunch and eat with us in the Commons. Families who live outside the village are invited to apply for bus transportation. We have a small bus available and will provide transportation to lunch for kids outside the village. To apply for transportation, ask your child's teacher at school or call Harry Karpial at 70 Hill Street or Rev. Barbara Thomas. Donations can be made to the Food for Kids program through the United Church of Greenwich, 37 Salem Street; pastor, Rev. Barbara Thomas.

Graduation exercises at Pooh's Corner Preschool

The 22nd Annual Pooh's Corner Preschool graduation was held on Friday, June 9. The kids took their family and friends on a trip to "Pooh's Circus". Circus animal and clown headbands were made for each child to wear, and songs were sung. There were funny clowns, dancing bears, prancing horses, elephants, lions, tigers, seals, monkeys and popcorn clowns. Even the teachers got in on the fun by dressing up as the ringmaster and funny clowns. To make the school look more festive, each child made and decorated a paper clown and a pennant. Following the "trip", the forty graduates donned their caps and received their diplomas. Pictured are Pooh's Corner graduates: Whelden Graziano, Michelle Pellington, Maureen Benoit, Easton Murray, Savannah Roney, Rees Davis, Hannah Poulette, Kevin Kortright and Skyler Besanceney.
Additional graduates were: Jessica Batchelder, Emilee Boddery, Maeve Boylan, Cody Carpenter, Paige Carruthers, Matthew Clauss, Molly Dixson, Moniqa Dore, James Doriski, Abigail Dusha, Victoria Eastman, Kort Furman, Levi Gage, Anissa Gamsey, Thomas Greeno, Anne Grimmke, Jade Harrington, Jenna Jackson, Kiah Kirk, Baxter Koziol, William McFee, Alyssa McMorris, Thomas Miller, Shaeden Mosso, Kieron Mount, Jorjana Otero, Whitney Owens, Christopher Schneible, Nicole Strainer, Courtney Towne and Joanna Wilbur. Pooh's Corner Preschool is a member of the Greenwich Early Childhood Association. The association is sponsoring a tent on the Evergreen Bank lawn during the Whipple City Festival. This tent is designed for preschoolers' fun. There are hands-on circus activities for kids to enjoy and a chance to view some of the scenery used at the graduation.

Tots entertain families at year end picnic

Twenty children, age three, recently celebrated the end of the school year with a family picnic in the play yard at Pooh's Corner Preschool. After enjoying sandwiches and cake, the children entertained their families with songs and finger plays. Memory portfolios, containing photos and sample work, were given to each child.

Class C-D Champion

Paul Yakubec, having won many first, second or third place finishes in his categories all season, qualified for the discus and shot put at the State meet this weekend. Paul is a senior at Greenwich Central School, so ends his high school career in track and field. This past Saturday, Paul competed in the discus and shot put and returned home with the State Championship title in the discus. He also placed sixth in the State in the shot put.

Republican barbeque held on Saturday

The sun shone for the Town of Greenwich Republican chicken barbeque on Saturday, June 10. The affair was held at the pavilion at the VFW #7291 on Abeel Avenue. Attending were members of the Republican Town committee, members of the town board and their families, and many guests. Lou Leone, chairman of the Town Republican committee and emcee, introduced the dignitaries in attendance, which included many of the supervisors from Washington County, Phyllis Cooper, Washington County Treasurer, and Bobby D'Andrea, state assemblyman. Bobby D'Andrea gave a brief speech, expressing his confidence in Rick Lazio, the Republican candidate for United States Senator from New York.

Bottskill Grange

Bottskill Grange held its regular meeting June 8 in Clifton Park at the home of Edwin and Catherine Fruhauf. Following the business meeting, the Lecturer presented the program featuring the Pledge of Allegiance and the history of the Stars and Stripes. The importance of the dairy industry in Washington County was discussed, and an article read. Milford Spence entertained the members with several guitar selections, and the program closed with the singing of "Blest Be the Tie That Binds." A bountiful dinner was enjoyed at the Halfmoon Diner near the Fruhaufs' home. Bottskill's next meeting is scheduled for July 13. Evelyn Barbur

School board changes June meeting date

The Greenwich Central School Board of Education has set the following meeting dates: the meeting scheduled for Monday, June 26, has been changed to Monday, June 19, at 7 p.m. The reorganization meeting will be held on Wednesday, July 12, at 7:30 p.m.

V.F.W. 7291 meeting

Members of the V.F.W. Post 7291 will meet on Monday, June 19, at the Post on Abeel Avenue at 7 p.m.
Planning Board

The regular meeting of the Town of Greenwich Planning Board will be held tonight, Thursday, June 15, at 7 p.m. in the Town Office Building, 2 Academy St., Greenwich. Board regulations require submission of new applications ten (10) days prior to the regular meeting to be included on the agenda. Applications may be submitted to the Town Clerk during regular hours or the Planning Board Clerk any Thursday evening from 6 to 8 p.m.

Graveside Service: GUSTAVE BAIN

Graveside services for Gustave "Gus" Bain, who died February 19, 2000, will be conducted at 4:30 p.m. Friday, June 23, 2000 at the Greenwich Cemetery.

Do you ever think about how you do a routine chore or act? How do you put your socks and shoes on? How do you button up a coat, sweater, shirt? Think about it! Each time you put socks and shoes on (actually, one thinks about shoes and socks, but you do put socks on first), do you put both socks on, and then your shoes and tie them, or do you put on a sock, then a shoe and tie it? Maybe you put both socks on and one shoe and tie it, then the other shoe and tie it. How about a sweater? We've read that most people button up a sweater starting at the top. We think we have the answer to the why: if you begin at the bottom it requires both hands. Starting at the top, you can button down using one hand, at least if you are a female and right-handed. We'll have to check out doing it left handed. However, starting at the bottom, you might be assured of not having missed any.

Obituary: MARY E. HATCH

Mary E. Hatch, 65, a longtime resident of Washington County, died Tuesday, June 6, 2000, at Wildwood Nursing Home in Williamstown, Mass. Funeral arrangements are under the direction of Carleton Funeral Home, Inc. in Hudson Falls. She was born in Saratoga Springs, October 23, 1934, the daughter of Roger and Mary Harrington Hill. She graduated from Greenwich High School and attended college for two years. She worked for many years as a clerk for Lavonian Bros. in Troy and Cuomo's in Cohoes. After working for ten years for the Rensselaer County Support Collection Unit, she retired. She is survived by a son, William Hatch of Fort Ann; two daughters, Elizabeth Anderson Hatch of N.C. and Mattie Ann Hatch of Troy; a brother, Roger Hill, Sr. of Easton; an aunt, Helen Monroe of Easton; and several grandchildren.
$\alpha$-Variational Inference with Statistical Guarantees. We propose a family of variational approximations to Bayesian posterior distributions, called $\alpha$-VB, with provable statistical guarantees. The standard variational approximation is a special case of $\alpha$-VB with $\alpha=1$. When $\alpha \in(0,1]$, a novel class of variational inequalities is developed for linking the Bayes risk under the variational approximation to the objective function in the variational optimization problem, implying that maximizing the evidence lower bound in variational inference has the effect of minimizing the Bayes risk within the variational density family. Operating in a frequentist setup, the variational inequalities imply that point estimates constructed from the $\alpha$-VB procedure converge at an optimal rate to the true parameter in a wide range of problems. We illustrate our general theory with a number of examples, including the mean-field variational approximation to (low)-high-dimensional Bayesian linear regression with spike and slab priors, mixture of Gaussian models, latent Dirichlet allocation, and (mixture of) Gaussian variational approximation in regular parametric models.
Say you have a cake that is cut into 100 pieces. One of the pieces has a hidden prize. 100 people take turns, each taking a piece in sequence, hoping to find the prize. Note: I slightly modified (see italics) the question below to provide greater clarity; I hope that is allowed. If not, I will revert it. Please express your solution in a formula and provide an explanation behind your reasoning. The 1st person has C1 = 1/100 chance of finding the prize. The 1st person has the highest chance that no one else has found the prize first, but the largest pool to choose from; the 100th person has the smallest pool to choose from, but takes the largest risk that someone finds the prize before them. Note: I do not know the answer to this question. I am interested in your logical approach. What would happen if you took into consideration the knowledge that the previous people did not find the prize? Can we predict a turn that will yield a high reward versus efficient risk, given the knowledge that the people before you had not found the prize? The more people that fail to find the winning piece, the higher the chance that the next person will find it; however, let too many people take a piece before you, and chances are you will lose because someone else will find it.

Person 1 has a 100/100 chance of playing, and a 1/100 chance of winning. Person 2 has a 99/100 chance of playing, and a 1/99 chance of winning. Person 3 has a 98/100 chance of playing and a 1/98 chance of winning. Person 100 has a 1/100 chance of playing and a 100/100 chance of winning. When you multiply these out, you get a 1/100 chance of playing AND winning. In other words, playing sequentially doesn't change the odds from playing simultaneously. Second person: $99/100 \times 1/99 = 1/100$. Third person: $99/100 \times 98/99 \times 1/98 = 1/100$.

First of all, let the people pick their pieces of cake but not check them for prizes. One of those hundred pieces of cake contains the prize, and it's equally likely to be any one of them (assuming that there's no visible clue to which one contains the prize, which of course would complicate things). Therefore, the winner is equally likely to be any of the hundred people. It makes no difference in what order they check their pieces of cake: all that matters is which person's piece has the prize.

No one knows anything about the piece of cake they pick; it's a random choice. The problem then becomes this: randomly distribute 100 pieces of cake to 100 individuals, and ask who is most likely to have the special piece. Everybody has the same chance of having the special piece. It doesn't matter if everyone chooses their piece and checks it sequentially, or if everyone gets their piece and checks simultaneously.

I will answer your edited question. If you have prior knowledge that the ones before you didn't find the prize, then the best turn is... the last. It's a little underwhelming, but basically you have a 100/n % chance of getting the prize when n pieces of cake remain, so with one piece left the chance is 100/1 = 100%.

Put a number from 1 through 100 under each piece of cake (suppose the number 1 is the one with the prize). The order of choosing pieces generates a sequence of numbers. There are 100! possible outcomes (permutations of the 100-term sequence), and each number appears exactly the same number of times at any chosen position. Hence the probability of hitting the number 1 is the same, whichever position you take in a row of consumers.
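The telescoping pattern behind these computations, written out for the $k$-th person:

$$P(\text{person } k \text{ wins}) = \underbrace{\frac{99}{100} \cdot \frac{98}{99} \cdots \frac{101-k}{102-k}}_{\text{first } k-1 \text{ people miss}} \cdot \frac{1}{101-k} = \frac{1}{100}.$$

Every numerator cancels against the next denominator, leaving $1/100$ regardless of $k$.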
Not the answer you're looking for? Browse other questions tagged mathematics logical-deduction enigmatic-puzzle probability monty-hall or ask your own question.
After that I don't know how to proceed. Please help me with this. Next use substitution: set $\;u=\cos 2x$, $\;\mathrm d u=-2\sin 2x\,\mathrm d x$. Not the answer you're looking for? Browse other questions tagged calculus integration definite-integrals or ask your own question.
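The original integrand is truncated here, so as a hedged illustration of why this substitution helps: whenever the integrand has the form $\sin 2x \, g(\cos 2x)$ for some function $g$, the substitution clears the trigonometry:

$$\int \sin 2x \; g(\cos 2x)\,\mathrm dx = -\frac{1}{2}\int g(u)\,\mathrm du, \qquad u = \cos 2x.$$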
Conventional models do not fully explain the composition of the solar system; for example, the presence of such elements as certain post-post-$Fe$ nuclei remains not yet understood. We propose a mechanism which can explain the appearance of non-native elements in the solar system. The hypothesis involves an explosive nuclear-fission-type event within the inner part of the solar system that resulted from the system's path-crossing with a traveling-from-afar compact stellar object, a "giant nuclear drop" capable of phase-transitioning into an unstable nuclear-fog state, which was triggered by the encounter. After the multitude of spontaneous reaction cascades and the variety of nuclei transformations (such as nuclei fragmentation, fission, fusion, $n$-, $p$-, $\alpha$-, $\gamma$-capture, and various decays), the "debris" enriched the solar system and led to the eventual formation of the terrestrial planets, which pre-event had not existed. Such a scenario offers a possible explanation for the planets' inner position and compositional differences within the predominantly hydrogen-helium rest of the solar system.
Colon cancer arises from the gradual accumulation of several genetic and biochemical changes in cells. Ultimately, these changes give cancer cells the ability to spread throughout the body, or metastasize. Cancer cells display a variety of alterations to their cell surface carbohydrates. Cell surface glycoconjugates have been implicated in the adhesion, migration and invasion of cells, suggesting that changes to these structures may confer properties necessary for tumor cell metastasis. One such alteration is increased expression of $\beta1$-6 branched Asn-linked oligosaccharides on glycoproteins, which has been linked to the metastatic potential of cells.

Hybridoma technology was used to generate monoclonal antibodies which detect glycoproteins bearing $\beta1$-6 branched Asn-linked oligosaccharides which may be important in colon cancer. MAb 3A7 was selected for further study because it detected an epitope expressed at high levels in rat and human colon tumors. In addition, expression of the epitope defined by mAb 3A7 was shown to be developmentally regulated in rat intestine. Thus, mAb 3A7 detected an oncodevelopmentally regulated determinant in colon. As well, mAb 3A7 detects a major glycoprotein species of 140 kDa (gp140) which is differentially expressed in human colon cancer cell lines. MAb 3A7 recognizes an epitope containing blood group A (GalNAc$\alpha1$-3Gal$\beta$-) or B (Gal$\alpha1$-3Gal$\beta$-R) structures exclusively on type 2 chains (Gal$\beta1$-4GlcNAc).

3A7-immunoreactive gp140 was isolated from the human colon cancer cell line HT29 by lectin affinity and gel filtration chromatography. Partially purified gp140 was used to generate monoclonal antibodies which detect the polypeptide portion of gp140, namely mAbs 7A8, 7B11, 8C7 and 8H7. Immunological, molecular and biochemical analyses were used to demonstrate that the 3A7-immunoreactive gp140 corresponds to $\alpha3\beta1$ integrin, a cell surface adhesion molecule which mediates cell-cell and cell-extracellular matrix interactions. Analysis of $\alpha3\beta1$ integrin expression in human colon carcinoma cell lines revealed that this glycoprotein is a major target for the addition of several cancer-associated carbohydrate structures, including $\beta1$-6 branched Asn-linked oligosaccharides, poly-N-acetyllactosamine (type 2 chain repeats) and the 3A7 epitope. Significantly, the 3A7 epitope appears to be located primarily on the $\beta1$-6 branch of Asn-linked oligosaccharides on $\alpha3\beta1$ integrin. Analysis of a panel of blood group A, AB and B positive human colon carcinoma cell lines revealed that expression of the $\alpha3$ integrin subunit, rather than glycosyltransferase levels, appears to regulate cell surface expression of the 3A7 epitope in colon cancer cell lines. Finally, $\alpha3\beta1$ integrin expressed by human colon cancer cells contributes to the adhesion and migration of cells toward extracellular matrix proteins. These data suggest that $\alpha3\beta1$ integrin, and perhaps its glycan moiety including the 3A7 epitope, contribute to colon cancer progression.
The fact that it asked for $2\times 2$ matrices, which implies there are other such matrices, confused me. I know that $Q$ may be the identity matrix. I also tried giving $Q$ unknown entries as letters and substituting it into $PQ = QP$, but I did not manage to work it out. Not the answer you're looking for? Browse other questions tagged linear-algebra matrices or ask your own question. Are there other identity matrices? How to determine the value of a variable in a matrix to make it linearly independent of two other given matrices. Matrices $A$ and $B$ such that $AB = -BA$?
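The "unknown letters" approach does work; since the original $P$ is not shown here, this is the computation for one illustrative choice. Take $P = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $Q = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Then

$$PQ = \begin{pmatrix} c & d \\ 0 & 0 \end{pmatrix}, \qquad QP = \begin{pmatrix} 0 & a \\ 0 & c \end{pmatrix},$$

so $PQ = QP$ forces $c = 0$ and $d = a$, i.e. $Q = aI + bP$: a two-parameter family of matrices that contains, but is not limited to, the identity.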
A compact operator is an operator from a normed space $X$ to a normed space $Y$ such that the image of every bounded subset of $X$ is relatively compact in $Y$. It's used with the (functional-analysis) and (operator-theory) tags. How to prove that an operator is compact? Consider $T\colon\ell^2\to\ell^2$ an operator such that $Te_k=\lambda_k e_k$ with $\lambda_k\to 0$ as $k \to \infty$; how to prove that it is compact? How to prove that a bounded linear operator is compact? Compact operators: why is the image of the unit ball only assumed to be relatively compact? Is $T:C[0,1]\rightarrow C[0,1]$: $x(t)\mapsto x(t^2)$ compact? Is $T$ a compact operator? $T:C[0,1]\rightarrow C[0,1]$: $x(t)\mapsto x(t^2)$ where $t\in[0,1]$ with supremum norm. What makes compact operators special? Is every bounded derivation from compact to finite rank operators inner? $T$ compact if and only if $T^*T$ is compact. Is the inclusion $C^1[0,1]\subset C[0,1]$ compact? Is the set of all compact operators $K(H)$ the unique ideal in $B(H)$? I want to show that the set of all compact operators $K(H)$ is the unique ideal in $B(H)$. Is there any relation between invertibility and compactness of an operator? Is the Neumann series a compact operator? I know convolution is not a Hilbert–Schmidt integral operator, but it needs more to tell if convolution is compact or not. Why is $A$ a compact operator? Does a compact operator always have a kernel? How to prove this integral operator is compact?
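For the first question in this list, the standard argument is approximation by finite-rank operators; a sketch: define $T_n e_k = \lambda_k e_k$ for $k \leq n$ and $T_n e_k = 0$ for $k > n$. Each $T_n$ has finite rank, and

$$\|T - T_n\| = \sup_{k > n} |\lambda_k| \longrightarrow 0 \quad (n \to \infty),$$

so $T$ is an operator-norm limit of finite-rank operators and is therefore compact.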
The purpose of this talk is to present some results on minimal modular extensions of braided fusion categories, with emphasis on minimal modular extensions of super-Tannakian fusion categories. The cobordism hypothesis gives a correspondence between framed local topological field theories with values in $\mathcal C$ and fully dualizable objects in $\mathcal C$. I'll describe the analogue on the quiver side of the natural gluing, or "recollement", structure of this category of perverse sheaves. Summer ASC students Louis Carlin and Mitchell Rowett will describe their experience developing mathematical libraries for Lean, a new interactive theorem prover. Khovanov homology can be defined for links in thickened surfaces. In this talk, I will introduce the simplest kinds of non-reduced curves, and explain what they tell us about the geometry of smooth curves. On Friday at 3pm Mike Freedman is speaking in the Topological Matter summer school; this is a "pre-talk", trying to fill in some background. Kevin will describe a family of 3-categories coming from surfaces modulo local relations.
We consider a non-attractive three state contact process on $\mathbb Z$ and prove that there exists a regime of survival as well as a regime of extinction. In more detail, the process can be regarded as an infection process in a dynamic environment, where non-infected sites are either healthy or passive. Infected sites can recover only if they have a healthy site nearby, whereas non-infected sites may become infected only if there is no healthy and at least one infected site nearby. The transition probabilities are governed by a global parameter $q$: for large $q$, the infection dies out, and for small enough $q$, we observe its survival. The result is obtained by a coupling to a discrete time Markov chain, using its drift properties in the respective regimes.
Adding the <wikitex> ... </wikitex> tags for an easier entry of mathematical formulas. Reasons for this extension: the standard math rendering has a smart syntax checker which knows better than I do, and its output is on a non-white background and aligned badly. While one can change the formula background to transparent by hacking the helper function, it requires another round by the sloooow convert program.

This extension adds two tags. The main additional tag is <wikitex>, which indicates that from here on math text is entered between $..$. Internally math text is wrapped inside <tex> tags, but these can also be used directly. The extension supports certain attributes, such as dpi=100 to make smaller images, or include="Math:Mycommands" to automatically include your TeX macros. Full documentation is in Extension:WikiTex/Documentation. Beware: formulas are typeset with plain TeX with some predefined commands (such as \mathcal or \mathbb above); LaTeX constructs do not work. By default, formula numbers appear on the right of the formula (but can be on the left when changing the appropriate style), and are not part of the formula.

You need an external worker program which transforms plain TeX files into pictures and returns certain status information. If your wiki was configured with math, then TeX was probably installed; otherwise you need to install TeX as well. Install dvipng, a fast dvi bitmap converter program. Copy the supplied shell program, or something similar, into /usr/local/bin/texconvert, or change the name and/or location in WikiTex.php. Install the supplied WikiTex.php program into the extensions directory of your wiki. Edit MediaWiki:Common.css, which sets how displayed formulas are shown. Edit MediaWiki:TexInclude, which contains some standard TeX macros. Details and source are in Extension:WikiTex/Installation.

See also Extension:MathJax, which enables common LaTeX and other formula delimiters.
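For reference, a hypothetical snippet of page source using the tag as described above (the attribute values are only examples):

<wikitex dpi=100 include="Math:Mycommands">
Euler's identity: $e^{i\pi} + 1 = 0$.
</wikitex>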
We prove that each $3$-polytope with minimum degree $5$ without vertices of degree from $7$ to $10$ contains a $5$-vertex whose set of degrees of its neighbors is majorized by one of the following sequences: $(5,6,6,5,\infty)$, $(5,6,6,6,15)$, and $(6,6,6,6,6)$, where all parameters are tight. Keywords: plane graph, structure properties, $3$-polytope, neighborhood. The authors were funded by the Russian Science Foundation (Grant 16-11-10054).