http://math.ac.vn/vi/icalrepeat.detail/2019/04/11/3509/-/-.html
WEEKLY SEMINAR: New results on stability and $L_{\infty}$-gain analysis for positive linear differential-algebraic equations with unbounded time-varying delays. Speaker: Nguyễn Hữu Sáu. Time: 9:00, Thursday, April 11, 2019. Venue: Room 513, Building A6, Institute of Mathematics, 18 Hoàng Quốc Việt. Abstract: This paper addresses the problems of stability and $L_{\infty}$-gain analysis for positive linear differential-algebraic equations with unbounded time-varying delays. First, we consider the stability problem for a class of positive linear differential-algebraic equations with unbounded time-varying delays. A new method, based on bounding the state vector above by a decreasing function, is presented to analyze the stability of the system. Then, by investigating the monotonicity of the state trajectory, the $L_{\infty}$-gain for differential-algebraic systems with unbounded time-varying delay is characterized. It is shown that this $L_{\infty}$-gain is independent of the delays and fully determined by the system matrices. A numerical example is given to illustrate the obtained results.
2019-11-19 23:54:45
https://jeremycote.me/page16/
### Period of a Pendulum

The pendulum is a classic physical object that is modeled in many introductory physics courses. It's a nice system to study because it is so simple, yet it still lets students see how to analyze the motion of a system. Here, I want to go through the steps of deriving what is usually seen as an elementary result, the period of a pendulum, and show how it is actually more complicated than what most students see. To begin with, what exactly is a pendulum? This may seem like an easy question, but it's a good idea to have a well-defined system. So, the pendulum we will be looking at today is called a simple pendulum. Surprising no one, a simple pendulum is the most idealized pendulum, consisting of a point mass attached to a rod of fixed length. This means we aren't dealing with a pendulum that has a flexible rope that changes length, nor do we have something like a boat, which doesn't quite act like a point mass since its mass is distributed throughout the object rather than localized. In other words, our situation is a point mass $m$ hanging from a rod of length $l$, displaced by an angle $\theta$ from the vertical. You may be wondering why we aren't using Cartesian coordinates, and the reason is quite simple. In Cartesian coordinates, we would need to specify both the $x$ and $y$ coordinates, linked by the constraint $x^2 + y^2 = l^2$, which is a pain in this particular setup. By contrast, polar coordinates are more compact since the radius $r$ is fixed (in this case, $r = l$), which means we only have one degree of freedom, the angle $\theta$. To begin our analysis, we will start with our generic equation for conservation of energy: $E = T + U$. Here, the kinetic energy is $T$ and the potential energy is $U$. To know the kinetic energy, we need to know the magnitude of the velocity of the object, which we don't know at the moment (and which changes depending on the angle $\theta$).
We do know, though, that the kinetic energy is given by $T = \frac{1}{2} m v^2$, where $v$ is the magnitude of the velocity (the speed), so we will keep that to the side. We also know that the potential energy is given by the familiar equation $U = mgh$ on Earth, where $h$ is the height of the object from the ground. To find this height $h$, we need to draw some clever lines and invoke some geometry. Doing so, we can see that the height is given by $h = l \left( 1 - \cos \theta \right)$. Therefore, the potential energy is: $U = mgl \left( 1 - \cos \theta \right)$. With this, we almost have everything we need in our equation. The goal is to isolate our speed $v$, so we can then integrate it over a whole cycle to find the period. To do this, let's remember our conservation of energy equation: $E = T + U$. This equation states that the total energy $E$ is always a constant in time. In other words, $\frac{dE}{dt} = 0$, and so we can simply find the total energy at one particular instant, and then substitute it for $E$. What we will do is consider the energy that the pendulum initially has, just before it is allowed to fall. At that moment, it has an initial angle which we will call $\theta_0$, and since it isn't moving, the pendulum has no kinetic energy. Therefore, the energy of the pendulum is: $E = mgl \left( 1 - \cos \theta_0 \right)$. We can now set this equal to the sum of the kinetic and potential energy at any time to get: $mgl \left( 1 - \cos \theta_0 \right) = \frac{1}{2} m v^2 + mgl \left( 1 - \cos \theta \right)$. Since each term in this equation has the mass $m$ in it, we can see that our result will be independent of the mass. If we then isolate $v^2$, we get: $v^2 = 2gl \left( \cos \theta - \cos \theta_0 \right)$. At this point, we need to think about what the speed $v$ is. The definition of speed is $v = \frac{ds}{dt}$, where $s$ is the path length. Fortunately, the path length of a pendulum is very easy to find, since it's simply the arc length of a circle! The path length is given by $s = l \theta$.
Therefore, the speed is: $v = \frac{ds}{dt} = l \frac{d\theta}{dt}$. We can now substitute this into Equation \ref{vSquared}, and solve for $\frac{d\theta}{dt}$: $\frac{d\theta}{dt} = \sqrt{\frac{2g}{l} \left( \cos \theta - \cos \theta_0 \right)}$. Note here that I'm only considering the positive solution for $\frac{d\theta}{dt}$, since we will be solving for the period, which is a positive quantity. What we will now do is employ the method of separation of variables to integrate this quantity. If you aren't familiar with this method, I suggest taking a look at a resource on differential equations. Separating our variables gives us: $dt = \sqrt{\frac{l}{2g}} \frac{d\theta}{\sqrt{\cos \theta - \cos \theta_0}}$. This is good. We now have an expression for $dt$, which means we can integrate it for the angle between $0$ and $\theta_0$, and this will be one quarter of the period. To see why it's only a quarter of the period, note that the pendulum sweeps from $\theta_0$ down to $0$, on to $-\theta_0$, and then retraces its path, so each sweep between $0$ and $\theta_0$ is a quarter of a full cycle. Integrating gives us: $\frac{T}{4} = \sqrt{\frac{l}{2g}} \int_0^{\theta_0} \frac{d\theta}{\sqrt{\cos \theta - \cos \theta_0}}$. And solving for the period $T$ gives: $T = 4 \sqrt{\frac{l}{2g}} \int_0^{\theta_0} \frac{d\theta}{\sqrt{\cos \theta - \cos \theta_0}}$. This is the full expression for the period of a pendulum at any initial angle $\theta_0$. The only slight issue is that, while correct, this expression is not easily integrated. In fact, I don't know how to integrate it in closed form at all. What we would like the period to be is of the form: $T = 2\pi \sqrt{\frac{l}{g}} \left( 1 + \dots \right)$. The expression above would be what is called a Taylor expansion, with the first term being what you might have already seen as the period of a pendulum, plus some correction factors that are contained in the ellipsis. To get it into this form, we want to be able to use the binomial expansion, which is given by: $\left( 1 + x \right)^{\alpha} = \sum_{k=0}^{\infty} \binom{\alpha}{k} x^k$. To do this, we need to transform Equation \ref{fullPeriod}. First, we will perform what may seem like a totally random substitution, but bear with me. We will change coordinates and go from $\theta \rightarrow \psi$. This mapping will be done using the following relation: $\sin \left( \frac{\theta}{2} \right) = \sin \left( \frac{\theta_0}{2} \right) \sin \psi$. Looking at this relation, we can see that when $\theta$ ranges from $0$ to $\theta_0$, the corresponding variable $\psi$ varies from $0$ to $\pi/2$.
Implicitly differentiating each side gives us: $\frac{1}{2} \cos \left( \frac{\theta}{2} \right) d\theta = \sin \left( \frac{\theta_0}{2} \right) \cos \psi \, d\psi$. We can then pull out a handy trigonometric identity called the double angle identity, which is given by: $\cos \theta = 1 - 2 \sin^2 \left( \frac{\theta}{2} \right)$. Using this identity, we can rewrite the expression inside the square root of Equation \ref{fullPeriod} as: $\cos \theta - \cos \theta_0 = 2 \left[ \sin^2 \left( \frac{\theta_0}{2} \right) - \sin^2 \left( \frac{\theta}{2} \right) \right]$. From here, we can insert our original substitution from Equation \ref{transform} into the second term above, giving us: \begin{align} \cos \theta - \cos \theta_0 &= 2 \left[ \sin^2 \left( \frac{\theta_0}{2} \right) - \sin^2 \left( \frac{\theta_0}{2} \right) \sin^2 \psi \right] \\ &= 2 \sin^2 \left( \frac{\theta_0}{2} \right) \left( 1 - \sin^2 \psi \right) \\ &= 2 \sin^2 \left( \frac{\theta_0}{2} \right) \cos^2 \psi. \end{align} Just to note, from the second to third line, I simply used the Pythagorean theorem. Now, since we wanted the square root of $\cos \theta - \cos \theta_0$, we can take the square root of the above expression: $\sqrt{\cos \theta - \cos \theta_0} = \sqrt{2} \sin \left( \frac{\theta_0}{2} \right) \cos \psi$. Furthermore, we can use Equation \ref{dTheta} in order to find an expression for $d\theta$: $d\theta = \frac{2 \sin \left( \frac{\theta_0}{2} \right) \cos \psi}{\cos \left( \frac{\theta}{2} \right)} d\psi$. From this, we can insert everything into the integral of Equation \ref{fullPeriod} and simplify. Note here that I've omitted the prefactor in front of the integral just to keep things a little cleaner, but we won't forget about it: $\int_0^{\theta_0} \frac{d\theta}{\sqrt{\cos \theta - \cos \theta_0}} = \int_0^{\pi/2} \frac{2 \sin \left( \frac{\theta_0}{2} \right) \cos \psi}{\cos \left( \frac{\theta}{2} \right) \sqrt{2} \sin \left( \frac{\theta_0}{2} \right) \cos \psi} d\psi = \sqrt{2} \int_0^{\pi/2} \frac{d\psi}{\cos \left( \frac{\theta}{2} \right)}$. We're almost there. Now, we can simply use a rearranged version of the Pythagorean theorem to write: $\cos \left( \frac{\theta}{2} \right) = \sqrt{1 - \sin^2 \left( \frac{\theta}{2} \right)} = \sqrt{1 - \sin^2 \left( \frac{\theta_0}{2} \right) \sin^2 \psi}$. Here, I've made use of Equation \ref{transform} again in order to write this expression in terms of $\psi$. Throwing this all together and reintroducing the prefactor in front gives us the following result for the period: $T = 4 \sqrt{\frac{l}{g}} \int_0^{\pi/2} \frac{d\psi}{\sqrt{1 - \sin^2 \left( \frac{\theta_0}{2} \right) \sin^2 \psi}}$. I don't know about you, but that was a lot of work. This integral is actually a special kind of integral. It's called a complete elliptic integral of the first kind, and is defined by: $K(m) = \int_0^{\pi/2} \frac{d\psi}{\sqrt{1 - m \sin^2 \psi}}$. In our case, $m = \sin^2 \left( \frac{\theta_0}{2} \right)$. What's nice about this form of the integral is that it is indeed in binomial form, so we can expand it. We therefore have: $K(m) = \int_0^{\pi/2} \left[ 1 + \frac{1}{2} m \sin^2 \psi + \frac{1 \cdot 3}{2 \cdot 4} m^2 \sin^4 \psi + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6} m^3 \sin^6 \psi + \dots \right] d\psi$. This looks like quite the jumbled expression, but we can write it quite succinctly in the following form: $K(m) = \int_0^{\pi/2} \sum_{n=0}^{\infty} \frac{(2n-1)!!}{(2n)!!} m^n \sin^{2n} \psi \, d\psi$. Here, the double factorial sign (!!) means that we skip a number each time we do the multiplication. Therefore, $5!! = 5 \cdot 3 \cdot 1$ and $6!! = 6 \cdot 4 \cdot 2$ (with the convention $(-1)!! = 0!! = 1$). You can verify that this does represent the above expression of Equation \ref{long}.
We are now in a better position to evaluate the integral. It looks like this: $K(m) = \sum_{n=0}^{\infty} \frac{(2n-1)!!}{(2n)!!} m^n \int_0^{\pi/2} \sin^{2n} \psi \, d\psi$. This last integral is a bit of a tricky one, but we will show that it is given by: $\int_0^{\pi/2} \sin^{2n} \psi \, d\psi = \frac{(2n-1)!!}{(2n)!!} \frac{\pi}{2}$. To get this result, we will use recursion. First, we note that the values of $n$ we need to work for are all positive, which is clear from Equation \ref{sum} (the $n = 0$ term is just $\pi/2$). This means our lowest value of $n$ will be one. If we label the integral in Equation \ref{In} as $I(n)$, then we can evaluate this base case to get: $I(1) = \int_0^{\pi/2} \sin^2 \psi \, d\psi = \frac{\pi}{4}$. With the base case out of the way, we now tackle the whole integral. Let's start by splitting up the integrand as such: $I(n) = \int_0^{\pi/2} \sin^{2n-1} \psi \sin \psi \, d\psi$. We can now use integration by parts to partially evaluate this integral. If we use $u = \sin^{2n-1} \psi$ and $dv = \sin \psi \, d\psi$, we get: $I(n) = \left[ -\sin^{2n-1} \psi \cos \psi \right]_0^{\pi/2} + (2n-1) \int_0^{\pi/2} \sin^{2n-2} \psi \cos^2 \psi \, d\psi$. The first term evaluates to zero, and so we are only left with the integral. We can then change the cosine into a sine (using $\cos^2 \psi = 1 - \sin^2 \psi$) and rearrange things to give: $I(n) = (2n-1) \int_0^{\pi/2} \sin^{2n-2} \psi \, d\psi - (2n-1) \int_0^{\pi/2} \sin^{2n} \psi \, d\psi$. If you look at this and compare it to our definition of $I(n)$ from Equation \ref{In}, you'll notice that we can write the above equation as: $I(n) = (2n-1) \left[ I(n-1) - I(n) \right]$. Solving for $I(n)$ gives: $I(n) = \frac{2n-1}{2n} I(n-1)$. This is a recurrence relation, which means it tells us how to construct the next term from the previous one, as long as we have a beginning "seed". Thankfully, we do have one, which is $I(1) = \pi/4$. What we want to do at this point is to keep applying the recurrence relation to the term $I(n-1)$, until we get all the way down to $I(1)$, where we stop. I'll illustrate a few of these for you, and hopefully it becomes clear what the pattern is: $I(n) = \frac{2n-1}{2n} I(n-1) = \frac{2n-1}{2n} \cdot \frac{2n-3}{2n-2} I(n-2) = \frac{2n-1}{2n} \cdot \frac{2n-3}{2n-2} \cdot \frac{2n-5}{2n-4} I(n-3)$. I could continue, but this is a good representation of what happens. In summary, the numerators of the fractions are odd numbers, and the denominators are even numbers. Furthermore, as you go along the product, you go from an odd number to the next closest odd number, and the same argument applies to the even numbers.
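The recurrence above is easy to check numerically. Here is a minimal Python sketch (the function names are my own) comparing the recurrence with seed $I(1) = \pi/4$, the double-factorial closed form, and a direct trapezoidal quadrature of $\int_0^{\pi/2} \sin^{2n} \psi \, d\psi$:

```python
import math

def I_closed(n):
    # closed form: (2n-1)!!/(2n)!! * pi/2
    num = math.prod(range(1, 2 * n, 2))      # (2n-1)!!
    den = math.prod(range(2, 2 * n + 1, 2))  # (2n)!!
    return num / den * math.pi / 2

def I_recurrence(n):
    # apply I(k) = (2k-1)/(2k) * I(k-1) starting from the seed I(1) = pi/4
    val = math.pi / 4
    for k in range(2, n + 1):
        val *= (2 * k - 1) / (2 * k)
    return val

def I_numeric(n, steps=100_000):
    # trapezoidal approximation of the integral of sin^(2n)(psi) on [0, pi/2]
    h = (math.pi / 2) / steps
    total = 0.5 * (math.sin(0.0) ** (2 * n) + math.sin(math.pi / 2) ** (2 * n))
    for i in range(1, steps):
        total += math.sin(i * h) ** (2 * n)
    return total * h

for n in (1, 2, 5):
    print(n, I_closed(n), I_recurrence(n), I_numeric(n))
```

All three agree to high precision, which is a nice confirmation that the base case and the recursion were set up correctly.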
Therefore, what we are really doing is another double factorial all the way down to $I(1)$, which we can evaluate since it is our starting seed. Therefore, we get: $I(n) = \frac{(2n-1)!!}{(2n)!!} \frac{\pi}{2}$. Now that we have this result, we can put it all together to give us: $K(m) = \frac{\pi}{2} \sum_{n=0}^{\infty} \left[ \frac{(2n-1)!!}{(2n)!!} \right]^2 m^n$. Expanding this gives us the following infinite series: $K(m) = \frac{\pi}{2} \left[ 1 + \frac{1}{4} m + \frac{9}{64} m^2 + \dots \right]$. If we recall that $m = \sin^2\left( \frac{\theta_0}{2} \right)$ and we insert the prefactors for the period from Equation \ref{Period}, we get the following result for the period of the pendulum: $T = 2\pi \sqrt{\frac{l}{g}} \left[ 1 + \frac{1}{4} \sin^2 \left( \frac{\theta_0}{2} \right) + \frac{9}{64} \sin^4 \left( \frac{\theta_0}{2} \right) + \dots \right]$. This is the full expression for the period of the pendulum with any starting angle $\theta_0$. What's quite nice about this expression is that we can immediately see that if $\theta_0 \approx 0$, then all of the sine functions become very close to zero, and so the only important term in the square brackets is the leading 1. At this point, the period becomes what one usually learns (for small angles): $T = 2\pi \sqrt{\frac{l}{g}}$. Furthermore, we can see that when our initial angle gets bigger, it becomes more important to keep successive terms within the brackets of Equation \ref{Final}. Hopefully, this wasn't too bad. I wanted to go through the calculation as explicitly as possible, since I remember being a bit confused when I saw it for the first time. As such, I wanted to make sure things are laid out nice and slow so everyone can follow. What I love most about these long analytical expressions is how you can recover the simpler result you had from simplifying the problem. We can easily see that our "usual" period is nestled within the long infinite expression. Lastly, I just want to make clear that one assumption we did make was that we were dealing with a point mass pendulum. In other words, we still weren't quite modelling a physical pendulum, which requires taking into account the centre of mass of the bob and the rod of the pendulum together. Still, this is enough precision for today, so we will leave it at that.
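The series converges quickly for moderate angles, and a short script makes the comparison with the small-angle formula concrete. This is a sketch in Python (the function name and default parameters are illustrative):

```python
import math

def pendulum_period(theta0, l=1.0, g=9.81, terms=20):
    """Pendulum period via the series T = 2*pi*sqrt(l/g) * sum [(2n-1)!!/(2n)!!]^2 m^n."""
    m = math.sin(theta0 / 2) ** 2
    coeff = 1.0   # [(2n-1)!!/(2n)!!]^2, starting at n = 0
    total = 0.0
    for n in range(terms):
        if n > 0:
            coeff *= ((2 * n - 1) / (2 * n)) ** 2
        total += coeff * m ** n
    return 2 * math.pi * math.sqrt(l / g) * total

# small-angle result for l = 1 m, g = 9.81 m/s^2
small = 2 * math.pi * math.sqrt(1 / 9.81)

print(pendulum_period(0.01), small)        # nearly identical at a tiny angle
print(pendulum_period(math.radians(60)))   # noticeably longer at 60 degrees
```

At $\theta_0 = 60°$ the correction factor is already about 7%, which shows how quickly the small-angle formula degrades for large swings.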
### On Uncertainty in Science

I'll let you in on a bit of a secret. For most of my life, I hated doing experiments in science. It didn't really matter if the experiments were in physics, chemistry, or biology class (though I enjoyed the fact that physics experiments tended not to be as messy). In fact, when I was in secondary school, my grade was asked at the end of the year to vote on what kind of science class they wanted the next year. There were two choices. One was to keep the material more theoretical and from the textbook. The second was to introduce the content in a much more "hands-on" sort of way, which meant more laboratory experiments. If I recall correctly, I was one of the only students who chose the first option. I didn't really understand why everyone wanted to do the hands-on program. In my eyes, it just made things seem less exact and more messy. Other students seemed to like the idea that they could do experiments, but it wasn't my idea of a fun time. Moving into CÉGEP, I kept this attitude of not enjoying lab experiments. They were annoying to do, and completing the lab reports afterwards was the worst. One had to deal with uncertainties and significant figures and sources of error that made everything seem much messier than the theoretical predictions that were made using mathematics. I longed for simple relations without error bars. From reading the above, it may seem like I think science should be all theoretical. Of course, this is not the case, and I think, if anything, we need to talk more about the uncertainty and messiness in science. If we want to have a society that understands the way we get results in science, we need to communicate this uncertainty more clearly. Science is not mathematics. Sure, we want to describe the world using mathematics as our language, but we need to keep in mind that nature will not bend to our will. There will always be fluctuations, imprecise measurements, and sheer randomness in some data.
We use mathematics to make these uncertainties as small as possible, but we can never fully eliminate them. As such, it's crucial to realize that a measurement means nothing without its corresponding uncertainty. The reason is simple: we take measurements in order to compare them. If we just treated measurements as precise quantities that have no uncertainty, then we would find a lot less agreement with our predictions. This would make it near impossible to do science. Let's take a very simple example. Imagine we wanted to measure an object that is said to be 4.500 metres long. To verify this claim, we take a metre stick that has graduations every centimetre and measure the object. Say it comes out to 4.52 metres. Do we say that these two measurements are different? The answer is: it depends. To find out for sure, we need to know the uncertainties that are associated with each measurement. When the object was measured to be 4.500 metres long originally, what were the uncertainties on that measurement? Was it $\pm 1\ \mathrm{mm}$? These are critical questions to ask when making comparisons. If we imagine that the metre stick has an uncertainty of $\pm 1\ \mathrm{cm}$ (because this metre stick is only marked off in centimetres), then the two values we are comparing are: \begin{align} (4.500 \pm 0.001) \ \mathrm{m}, \\ (4.52 \pm 0.01) \ \mathrm{m}. \end{align} The question now becomes: do these two measurements overlap? This is the key question, and in our case, the measurements don't overlap, since the first measurement could be at most 4.501 m and the second measurement could be at least 4.51 m. Since these two measurements don't overlap, we consider them not to be in agreement. As you may notice, this isn't a trivial matter. It may have seemed like the two measurements did agree at first glance, but without knowing their associated uncertainties, we have no idea.
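The overlap test described above can be written as a tiny helper; this is a sketch with a hypothetical function name:

```python
def measurements_agree(x, dx, y, dy):
    """Two measurements agree when their uncertainty intervals [x-dx, x+dx]
    and [y-dy, y+dy] overlap."""
    return (x - dx) <= (y + dy) and (y - dy) <= (x + dx)

# the example from the text: 4.500 +/- 0.001 m vs 4.52 +/- 0.01 m
print(measurements_agree(4.500, 0.001, 4.52, 0.01))  # False: intervals do not overlap
```

With a larger first uncertainty, say $\pm 0.02$ m, the intervals would overlap and the two values would be considered in agreement, which is exactly the point: the conclusion depends on the uncertainties, not just the central values.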
This means that if someone tells you some figure that came from an experiment and wasn't just a theoretical calculation, you need to know its uncertainty if you want to compare the figure to anything else. Without it, the measurement is meaningless. What I want to stress here is that uncertainty is inherent in science. There's no getting around this fact, no matter how precise and careful your experiment is. This is why I find it so amusing when people attack scientific results on the basis that they are simply uncertain. Of course they are! This isn't mathematics, where results have infinite precision. In science, we have this inherent uncertainty, but we use the tools of mathematics to make sure that the uncertainty is as small as possible, and we make our claims using this uncertainty. We make do with what nature presents us. If there's one thing I want to ask of you, it is this: make sure you're aware of the inherent uncertainty in science, so that you aren't worried when you see scientists saying that measurements agree with theory, despite the seeming non-equivalence. Chances are, the uncertainties in the measurements are what allow scientists to make this claim. Conversely, look out for those who try to exploit this characteristic of science to push information that simply isn't supported by the scientific method.

### Mathematical Sophistication

When I reflect on my education in science (and in physics in particular), the common theme I see is just how the amount of sophistication present in the computations and concepts I learned each year kept increasing. If there was one thing I could count on, it wasn't learning something "new". Instead, it was about viewing things I might have once taken for granted as processes much deeper than I realized. For example, take Snell's law. In secondary school, I learned how this phenomenon worked in the sense that I could calculate its effects.
I learned that Snell's law could be written like this: $\frac{\sin \theta_1}{\sin \theta_2} = \frac{v_1}{v_2} = \frac{\lambda_1}{\lambda_2} = \frac{n_2}{n_1}.$ This allows one to calculate the angle of refraction for various simple systems, and this is exactly what I remember doing. Additionally, the "reason" for why this was true seemed to be something about the light "slowing down" in a different medium, but the reasoning wasn't all that clear. In the end, it was more of a "here's the law, now calculate with it" sort of concept. At the time, I don't remember being bothered by this. Now though, it makes me frustrated, since what is the point of learning these ideas if one doesn't learn why this specific result occurs? It's something I've been thinking about a fair amount lately. Fast-forward a few years, and now Snell's law gets derived using Fermat's principle of least time, which uses the calculus of variations and gives one a more satisfying explanation of what is going on when the light rays "bend". In this sense, the mathematics produces the result, which is better than being told the result. Another example is one that I hadn't thought about much until I came across it. Anyone who has gone through a class in statistics has seen how to fit a curve to a collection of data points. Usually, one is concerned only with fitting a linear curve, but sometimes we also plot quadratic curves (with software). In the case of linear fits, in secondary school, the recipe went like this. First, one should plot the points on a graph. Then, one needs to carefully draw a rectangle around the data points, and then measure the dimensions of this rectangle. From there, the slope can be calculated, and then a representative point is chosen in order to find the initial value of the line. Basically, this was an exercise in graphing and drawing accuracy, not something you'd want from a mathematics class.
As such, while the results were qualitatively correct, they could differ widely from student to student. Fast-forward a few years later once again, and the story is much different. In my introductory statistics for science class, we were given the equation that would give us the slope of our linear equation, as well as the correct point to use for the initial value. This undoubtedly produced more accurate results, but once again it lacked the motivation behind it (due to a lack of time, in this case). Thankfully, this lack of explanation was addressed in my linear algebra class, where we learned the method of least squares. Here, finally, was an explanation as to how these curves were computed. In the statistics class, it was a long and complicated formula that was given. However, in linear algebra, the reasoning behind how to compute such a curve was much simpler and more straightforward. In other words, it made sense as a process. Even better, this method generalizes well to other types of curve fitting, not just linear functions. As such, this explanation was much more useful than all of the others. The lesson that I personally take is that, no matter the topic you're learning, there often is another layer of understanding that can complement it. This means that I shouldn't stop looking at concepts that I've seen many times just because I think they are boring! There are often new perspectives on these situations, and they usually come tied to more mathematical sophistication. This is something that I love to see, because it brings new viewpoints to concepts I might have thought I had completely figured out. It shows me that I can always learn and understand a concept more thoroughly, and hopefully this can be good inspiration for you to seek out varied explanations of your favourite concepts. Just because classical mechanics is, well, classical, doesn't mean you can't look at it in more sophisticated ways.
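For reference, the least-squares slope and intercept mentioned above reduce to a few lines once written via the usual normal equations. Here is an illustrative Python sketch (function name is my own):

```python
def least_squares_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x via the normal equations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope: covariance of x and y over variance of x
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    # intercept: the fitted line passes through the mean point
    a = mean_y - b * mean_x
    return a, b

a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lying exactly on y = 1 + 2x
print(a, b)
```

The same projection idea generalizes to polynomial or any linear-in-parameters fit, which is exactly the point made above about the linear algebra view.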
### The Limitations of Models

As many students in the sciences know, the reason we use mathematics to describe our results is that mathematics is the most precise language we possess. It's not like we have some sort of favouritism towards mathematics that we don't have towards other languages like English or French. Quite frankly, it's an issue of precision in what one is communicating. It's the difference between saying I can see a red light and saying I can see a light of about 600 nanometres. It's the difference between basing a prediction on past results and basing it on extrapolation from a model. However, what is often missed by the public is the fact that science is based on mathematical models. And, as any scientist will tell you, a model is only as good as the assumptions it makes. This means that models are inherently different from what we would call "real life".

## Simplicity to complexity

When you first learn physics in secondary school, you typically learn about the big-picture concepts, such as Newton's laws, some optics, and maybe even something about waves. If we focus only on Newton's famous $\vec{F} = m \vec{a}$, you learn about solving this equation for an easy system. Additionally, one usually starts without any notion of calculus, so the questions revolve around either finding a force or finding the acceleration of the system. Personally, I remember analyzing systems such as a block that is subject to a variety of constant forces. This made the analysis easy, compared to what one does a few years later in more advanced classes. However, what one must keep in mind is that the systems I was analyzing weren't realistic. If one stops to think about it, there aren't many forces that are constant in time (even our favourite gravitational force $m\vec{g}$ isn't technically constant). However, we weren't going to be thrown to the lions in our first physics class, so these simple systems were enough to begin with.
Years later, I would refine these models to become gradually more realistic. To give an explicit example, consider the equations for kinematics, which one learns about in secondary school and which are given by: \begin{align} x(t) = x_0 + v_0 t + \frac{1}{2}at^2,\\ x(t) = x_0 + \left( \frac{v + v_0}{2} \right) t,\\ v(t) = v_0 + at,\\ v^2 = v_0^2 + 2a\left(x - x_0 \right). \end{align} What one immediately learns following this is that these equations describe the motion of a free-falling object under a constant acceleration. These two emphasized terms are important, because unless you're trying to describe the motion of projectiles in outer space, these equations don't actually describe the motion of real systems. There are a few reasons why this is so. First, as alluded to above, these equations are only valid when there is no force acting on the system except for gravity. This is obviously not realistic, since other forces can act on a system when it is launched (such as air friction). Therefore, modeling the situation as if air friction didn't exist can only give an approximate answer at best. The presence of only gravity as a force is what is meant by the term free-falling. Second, the acceleration needs to be constant, and this isn't true either. If we simply take the example of launching a system into the air, the fact that air friction acts as a force on the system changes the acceleration of the system, thereby nullifying our kinematic equations once again. Alright, so those are a few reasons why the kinematic equations don't work exactly, but what does the difference look like in our models? I won't go through the whole derivation of the kinematic equations when we add in air friction, but plotting the two trajectories for a tennis ball shows that the difference is very small.
Indeed, it took me a bit of time to figure out what kind of object would show more obvious deviations from the original parabola (the red curve). Finally, I found a good example: a table tennis ball. The more accurate curve that takes air friction into account (in blue) is quite close to the red curve at first, so to a first approximation, our original model without air friction is pretty good. However, if you take the whole trajectory into account, you can see that the two curves diverge in the latter half of the trajectory. You might be thinking, "Alright, great, we have the solution for the trajectory, so now this problem is solved." But that's not quite true. If you've ever hit a table tennis ball, you know that it doesn't just fly through the air in one fixed orientation. It spins, and that rotation changes how the ball moves (as anyone who plays table tennis knows). As such, the moral of the story is that we can always add more elements to our models to make them more accurate. However, that always comes at the cost of simplicity, so your model becomes more difficult to compute as you increase the features it encodes. At some point, you have to choose where you want to fall on the spectrum from simplicity to complexity.

## How much stock can we put into models?

So who cares about the trajectory of a ball when we throw it? Chances are, not many. The reason I wanted to show you this example was just to illustrate what we need to take into account when we want to model some sort of phenomenon. There are always tradeoffs, and these tradeoffs affect our accuracy. The problem that we as scientists can fall into is failing to communicate how these models work to the public. It's nice to give big, qualitative statements about the future, but often we don't share the limitations of these statements. What I mean by this is simply that our statements in science are often predicated on models.
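To make the trade-off concrete, here is a rough Python sketch comparing the vacuum range with a simple Euler integration that adds quadratic air drag. The mass and drag coefficient below are made-up ballpark values for illustration, not the parameters behind the plots discussed above:

```python
import math

def range_vacuum(v0, angle, g=9.81):
    """Horizontal range of the drag-free kinematic model."""
    return v0 ** 2 * math.sin(2 * angle) / g

def range_with_drag(v0, angle, mass, drag_coeff, g=9.81, dt=1e-4):
    """Euler integration of projectile motion with quadratic drag F = -k|v|v."""
    vx = v0 * math.cos(angle)
    vy = v0 * math.sin(angle)
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        ax = -(drag_coeff / mass) * speed * vx
        ay = -g - (drag_coeff / mass) * speed * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x  # horizontal distance when the projectile returns to y = 0

v0, angle = 10.0, math.radians(45)
print(range_vacuum(v0, angle))
# hypothetical light-ball parameters: 2.7 g mass, k = 5e-4 kg/m
print(range_with_drag(v0, angle, mass=0.0027, drag_coeff=5e-4))
```

With these illustrative numbers the drag model lands noticeably short of the parabola, while setting the drag coefficient to zero recovers the vacuum range, which is exactly the "more features, more cost" trade-off described above: the vacuum case is a one-line formula, the drag case already requires numerical integration.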
And, as I mentioned at the beginning of this piece, models are only as good as their built-in assumptions. Einstein's theory of general relativity is a fantastic framework for understanding and predicting many features of spacetime, but if we suddenly saw that there isn't a speed barrier in the universe, then the whole model would be physically useless. That's obviously an extreme example, but the broader point is to keep in mind the limitations of models. A model is something we use to describe the world. If it's a very good model, then it may even make predictions about things we haven't yet discovered. But what you shouldn't do is keep yourself tied to a specific model. That's because every model has its own domain of applicability, and trying to apply the model past this domain isn't a good idea. We should all keep this in mind when we hear news reports about extraordinary things. First, what kind of model is being used? Is it a model with proven value, or is it something completely new? Second, what kind of domain of applicability does this model have, and does going past it significantly change the results? As you can see from the example we did above, not including air friction didn't significantly change the results. However, how much error is "bad" is very subjective, which means it depends on the application. If we are trying to understand simple models of astrophysical phenomena, we might not be too picky if our results could be up to 20% off (depending, of course, on the situation). However, if you have a model that predicts certain health issues in patients, misidentifying one in five patients is much too high (for me). Therefore, the next time we hear something extraordinary in the news, let's think about the model that's being used. I understand that we can't possibly research every single ridiculous claim that is made, but a bit more skepticism and curiosity about the details of such claims would not be a bad thing.
2018-11-20 17:20:00
https://www.jiskha.com/questions/20712/evaluate-the-integral-y-lny-dy-i-know-its-integration-by-parts-but-i-get-confused-once
# calc

evaluate the integral: y lny dy i know it's integration by parts but i get confused once you have to do it the second time

Leibniz rule (a.k.a. product rule): d(fg) = f dg + g df, so

y ln(y) dy = d[(y^2/2) ln(y)] - (y/2) dy ----> Integral of y ln(y) dy = (y^2/2) ln(y) - y^2/4 + const.

Instead of partial integration you can use this trick:

Integral of y^a dy = y^(a+1)/(a+1) (1)

Differentiate both sides w.r.t. a:

Integral of y^a ln(y) dy = y^(a+1) ln(y)/(a+1) - y^(a+1)/(a+1)^2 (2)

(up to an integration constant). Substitute a = 1 to obtain the answer. Note that the integration constant we could have added to (1) can still depend on the parameter a, so if you differentiate it you get an integration constant in (2).
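The antiderivative above can be sanity-checked numerically (a quick sketch, not part of the original thread): differentiate F(y) = (y^2/2) ln(y) - y^2/4 with a central difference and compare against the integrand y ln(y).

```python
import math

def F(y):
    # Antiderivative obtained above: (y^2/2)*ln(y) - y^2/4
    return (y * y / 2) * math.log(y) - y * y / 4

def integrand(y):
    return y * math.log(y)

# F'(y) should match y*ln(y); check with a central difference at a few points.
h = 1e-6
for y in (0.5, 1.0, 2.0, 5.0):
    deriv = (F(y + h) - F(y - h)) / (2 * h)
    print(f"y={y}: F'(y) ~ {deriv:.6f}, y*ln(y) = {integrand(y):.6f}")
```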
https://answerdata.org/how-much-us-dollars-is-100000-mexican-pesos/
# How many US dollars is 100,000 Mexican pesos?

I have 2 50,000-peso bills. How much are both worth in US currency?

• Around $8.00 USD. The former line of currency was discontinued and a new one created in 1992: three zeros were removed and the new peso was at par with the old; in fact it was valued at exactly the same. People had three years to exchange old for new.

• They are worth $4 each at about 12 to 1; actual value = 50 pesos each. You could have had breakfast and lunch for 50,000.00. Houses were running off the calculators and cars were in the millions of pesos. The Bank of Mexico will still honor your bills, minus 3 zeros.

• Unfortunately, since Mexico issued those bills, they’ve revalued the peso at a rate of 1,000 “old” pesos (like yours) being equal to one “new peso”. With one new peso being equal to about $0.08, they’d have a face value of about $8 each, if they’re still exchangeable (which I don’t think they are).

• Erm, no, you have to have pesos to consume things in Mexico. Well, I think if you go to any state near the US they may accept US dollars, but if you go to Mexico City you would need pesos because they would not accept US dollars. I know this because I’m from there, so you are getting professional advice…

• Those bills’ value is nothing; I recommend you sell them as collectibles and you’ll get more money.
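The arithmetic in these answers, under the assumptions they state (the revaluation removed three zeros, and an illustrative rate of 12.5 new pesos per US dollar, i.e. about $0.08 per peso), can be sketched as:

```python
OLD_PER_NEW = 1_000    # revaluation: 1,000 old pesos (MXP) = 1 new peso (MXN)
PESOS_PER_USD = 12.5   # illustrative exchange rate (~$0.08 per new peso)

def old_pesos_to_usd(old_pesos):
    # Strip three zeros, then convert new pesos to dollars.
    new_pesos = old_pesos / OLD_PER_NEW
    return new_pesos / PESOS_PER_USD

print(old_pesos_to_usd(50_000))   # one 50,000-peso bill -> 4.0
print(old_pesos_to_usd(100_000))  # both bills -> 8.0
```

At an actual market rate the figure shifts proportionally; the rate above is the one the answers imply, not a current quote.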
https://www.nature.com/articles/s41562-020-01009-0
# Ranking the effectiveness of worldwide COVID-19 government interventions

## Abstract

Assessing the effectiveness of non-pharmaceutical interventions (NPIs) to mitigate the spread of SARS-CoV-2 is critical to inform future preparedness response plans. Here we quantify the impact of 6,068 hierarchically coded NPIs implemented in 79 territories on the effective reproduction number, Rt, of COVID-19. We propose a modelling approach that combines four computational techniques merging statistical, inference and artificial intelligence tools. We validate our findings with two external datasets recording 42,151 additional NPIs from 226 countries. Our results indicate that a suitable combination of NPIs is necessary to curb the spread of the virus. Less disruptive and costly NPIs can be as effective as more intrusive, drastic, ones (for example, a national lockdown). Using country-specific ‘what-if’ scenarios, we assess how the effectiveness of NPIs depends on the local context such as timing of their adoption, opening the way for forecasting the effectiveness of future interventions.

## Main

In the absence of vaccines and antiviral medication, non-pharmaceutical interventions (NPIs) implemented in response to (emerging) epidemic respiratory viruses are the only option available to delay and moderate the spread of the virus in a population1. Confronted with the worldwide COVID-19 epidemic, most governments have implemented bundles of highly restrictive, sometimes intrusive, NPIs.
Decisions had to be taken under rapidly changing epidemiological situations, despite (at least at the very beginning of the epidemic) a lack of scientific evidence on the individual and combined effectiveness of these measures2,3,4, degree of compliance of the population and societal impact. Government interventions may cause substantial economic and social costs5 while affecting individuals’ behaviour, mental health and social security6. Therefore, knowledge of the most effective NPIs would allow stakeholders to implement, judiciously and in a timely manner, a specific sequence of key interventions to combat a resurgence of COVID-19 or any other future respiratory outbreak. Because many countries rolled out several NPIs simultaneously, the challenge arises of disentangling the impact of each individual intervention. To date, studies of the country-specific progression of the COVID-19 pandemic7 have mostly explored the independent effects of a single category of interventions. These categories include travel restrictions2,8, social distancing9,10,11,12 and personal protective measures13. Additionally, modelling studies typically focus on NPIs that directly influence contact probabilities (for example, social distancing measures18, social distancing behaviours12, self-isolation, school closures, bans on public events20 and so on). Some studies focused on a single country or even a town14,15,16,17,18 while other research combined data from multiple countries but pooled NPIs into rather broad categories15,19,20,21, which ultimately limits the assessment of specific, potentially critical, NPIs that may be less costly and more effective than others.
Despite their widespread use, relative ease of implementation, broad choice of available tools and their importance in developing countries where other measures (for example, increases in healthcare capacity, social distancing or enhanced testing) are difficult to implement22, little is currently known about the effectiveness of different risk-communication strategies. An accurate assessment of communication activities requires information on the targeted public, means of communication and content of the message. Using a comprehensive, hierarchically coded dataset of 6,068 NPIs implemented in March–April 2020 (when most European countries and US states experienced their first infection waves) in 79 territories23, here we analyse the impact of government interventions on Rt using harmonized results from a multi-method approach consisting of (1) a case-control analysis (CC), (2) a step function approach to LASSO time-series regression (LASSO), (3) random forests (RF) and (4) transformers (TF). We contend that the combination of four different methods, combining statistical, inference and artificial intelligence classes of tools, also allows assessment of the structural uncertainty of individual methods24. We also investigate country-specific control strategies as well as the impact of selected country-specific metrics. All the above approaches (1–4) yield comparable rankings of the effectiveness of different categories of NPIs across their hierarchical levels. This remarkable agreement allows us to identify a consensus set of NPIs that lead to a significant reduction in Rt. We validate this consensus set using two external datasets covering 42,151 measures in 226 countries. Furthermore, we evaluate the heterogeneity of the effectiveness of individual NPIs in different territories. 
We find that the time of implementation, previously implemented measures, different governance indicators25, as well as human and social development affect the effectiveness of NPIs in countries to varying degrees. ## Results ### Global approach Our main results are based on the Complexity Science Hub COVID-19 Control Strategies List (CCCSL)23. This dataset provides a hierarchical taxonomy of 6,068 NPIs, coded on four levels, including eight broad themes (level 1, L1) divided into 63 categories of individual NPIs (level 2, L2) that include >500 subcategories (level 3, L3) and >2,000 codes (level 4, L4). We first compare the results for NPI effectiveness rankings for the four methods of our approach (1–4) on L1 (themes) (Supplementary Fig. 1). A clear picture emerges where the themes of social distancing and travel restrictions are top ranked in all methods, whereas environmental measures (for example, cleaning and disinfection of shared surfaces) are ranked least effective. We next compare results obtained on L2 of the NPI dataset—that is, using the 46 NPI categories implemented more than five times. The methods largely agree on the list of interventions that have a significant effect on Rt (Fig. 1 and Table 1). The individual rankings are highly correlated with each other (P = 0.0008; Methods). Six NPI categories show significant impacts on Rt in all four methods. In Supplementary Table 1 we list the subcategories (L3) belonging to these consensus categories. A normalized score for each NPI category is obtained by rescaling the result within each method to range between zero (least effective) and one (most effective) and then averaging this score. The maximal (minimal) NPI score is therefore 100% (0%), meaning that the measure is the most (least) effective measure in each method. We show the normalized scores for all measures in the CCCSL dataset in Extended Data Fig. 1, for the CoronaNet dataset in Extended Data Fig. 
2 and for the WHO Global Dataset of Public Health and Social Measures (WHO-PHSM) in Extended Data Fig. 3. Among the six full-consensus NPI categories in the CCCSL, the largest impacts on Rt are shown by small gathering cancellations (83%, ΔRt between −0.22 and –0.35), the closure of educational institutions (73%, and estimates for ΔRt ranging from −0.15 to −0.21) and border restrictions (56%, ΔRt between −0.057 and –0.23). The consensus measures also include NPIs aiming to increase healthcare and public health capacities (increased availability of personal protective equipment (PPE): 51%, ΔRt −0.062 to −0.13), individual movement restrictions (42%, ΔRt −0.08 to −0.13) and national lockdown (including stay-at-home order in US states) (25%, ΔRt −0.008 to −0.14). We find 14 additional NPI categories consensually in three of our methods. These include mass gathering cancellations (53%, ΔRt between −0.13 and –0.33), risk-communication activities to inform and educate the public (48%, ΔRt between –0.18 and –0.28) and government assistance to vulnerable populations (41%, ΔRt between −0.17 and –0.18). Among the least effective interventions we find: government actions to provide or receive international help, measures to enhance testing capacity or improve case detection strategy (which can be expected to lead to a short-term rise in cases), tracing and tracking measures as well as land border and airport health checks and environmental cleaning. In Fig. 2 we show the findings on NPI effectiveness in a co-implementation network. Nodes correspond to categories (L2) with size being proportional to their normalized score. Directed links from i to j indicate a tendency that countries implement NPI j after they have implemented i. The network therefore illustrates the typical NPI implementation sequence in the 56 countries and the steps within this sequence that contribute most to a reduction in Rt. 
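The normalized score used above (rescale each method’s effect estimates to range between zero and one, then average across methods) can be sketched as follows. The input values here are made-up toy numbers, not estimates from the paper:

```python
def normalized_scores(effects_by_method):
    # effects_by_method: {method: {npi: estimated reduction in Rt}}
    # Min-max rescale within each method so the least effective NPI gets 0
    # and the most effective gets 1, then average the score over methods.
    per_npi = {}
    for effects in effects_by_method.values():
        lo, hi = min(effects.values()), max(effects.values())
        for npi, e in effects.items():
            per_npi.setdefault(npi, []).append((e - lo) / (hi - lo))
    return {npi: sum(v) / len(v) for npi, v in per_npi.items()}

toy = {
    "CC":    {"gatherings": 0.4, "cleaning": 0.0, "borders": 0.2},
    "LASSO": {"gatherings": 0.3, "cleaning": 0.1, "borders": 0.2},
}
print(normalized_scores(toy))  # gatherings scores 1.0, cleaning 0.0
```

A score of 100% (0%) then means the measure ranked most (least) effective in every method, as in the text.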
For instance, there is a pattern where countries first cancel mass gatherings before moving on to cancellations of specific types of small gatherings, where the latter associates on average with more substantial reductions in Rt. Education and active communication with the public is one of the most effective ‘early measures’ (implemented around 15 days before 30 cases were reported and well before the majority of other measures comes). Most social distancing (that is, closure of educational institutions), travel restriction measures (that is, individual movement restrictions like curfew and national lockdown) and measures to increase the availability of PPE are typically implemented within the first 2 weeks after reaching 30 cases, with varying impacts on Rt; see also Fig. 1. Within the CC approach, we can further explore these results on a finer hierarchical level. We show results for 18 NPIs (L3) of the risk-communication theme in Supplementary Information and Supplementary Table 2. The most effective communication strategies include warnings against travel to, and return from, high-risk areas (ΔRCCt = −0.14 (1); the number in parenthesis denotes the standard error) and several measures to actively communicate with the public. These include to encourage, for example, staying at home (ΔRCCt = −0.14 (1)), social distancing (ΔRCCt = −0.20 (1)), workplace safety measures (ΔRCCt = −0.18 (2)), self-initiated isolation of people with mild respiratory symptoms (ΔRCCt = −0.19 (2)) and information campaigns (ΔRCCt = −0.13 (1)) (through various channels including the press, flyers, social media or phone messages). ### Validation with external datasets We validate our findings with results from two external datasets (Methods). In the WHO-PHSM dataset26 we find seven full-consensus measures (agreement on significance by all methods) and 17 further measures with three agreements (Extended Data Fig. 4). 
These consensus measures show a large overlap with those (three or four matches in our methods) identified using the CCCSL, and include top-ranked NPI measures aiming at strengthening the healthcare system and testing capacity (labelled as ‘scaling up’)—for example, increasing the healthcare workforce, purchase of medical equipment, testing, masks, financial support to hospitals, increasing patient capacity, increasing domestic production of PPE. Other consensus measures consist of social distancing measures (‘cancelling, restricting or adapting private gatherings outside the home’, adapting or closing ‘offices, businesses, institutions and operations’, ‘cancelling, restricting or adapting mass gatherings’), measures for special populations (‘protecting population in closed settings’, encompassing long-term care facilities and prisons), school closures, travel restrictions (restricting entry and exit, travel advice and warning, ‘closing international land borders’, ‘entry screening and isolation or quarantine’) and individual movement restriction (‘stay-at-home order’, which is equivalent to confinement in the WHO-PHSM coding). ‘Wearing a mask’ exhibits a significant impact on Rt in three methods (ΔRt between −0.018 and –0.12). The consensus measures also include financial packages and general public awareness campaigns (as part of ‘communications and engagement’ actions). The least effective measures include active case detection, contact tracing and environmental cleaning and disinfection. The CCCSL results are also compatible with findings from the CoronaNet dataset27 (Extended Data Figs. 5 and 6). Analyses show four full-consensus measures and 13 further NPIs with an agreement of three methods. 
These consensus measures include heterogeneous social distancing measures (for example, restriction and regulation of non-essential businesses, restrictions of mass gatherings), closure and regulation of schools, travel restrictions (for example, internal and external border restrictions), individual movement restriction (curfew), measures aiming to increase the healthcare workforce (for example, ‘nurses’, ‘unspecified health staff’) and medical equipment (for example, PPE, ‘ventilators’, ‘unspecified health materials’), quarantine (that is, voluntary or mandatory self-quarantine and quarantine at a government hotel or facility) and measures to increase public awareness (‘disseminating information related to COVID-19 to the public that is reliable and factually accurate’). Twenty-three NPIs in the CoronaNet dataset do not show statistical significance in any method, including several restrictions and regulations of government services (for example, for tourist sites, parks, public museums, telecommunications), hygiene measures for public areas and other measures that target very specific populations (for example, certain age groups, visa extensions). ### Country-level approach A sensitivity check of our results with respect to the removal of individual continents from the analysis also indicates substantial variations between world geographical regions in terms of NPI effectiveness (Supplementary Information). To further quantify how much the effectiveness of an NPI depends on the particular territory (country or US state) where it has been introduced, we measure the heterogeneity of NPI rankings in different territories through an entropic approach in the TF method (Methods). Figure 3 shows the normalized entropy of each NPI category versus its rank. 
A value of entropy close to zero implies that the corresponding NPI has a similar rank relative to all other NPIs in all territories: in other words, the effectiveness of the NPI does not depend on the specific country or state. On the other hand, a high value of the normalized entropy signals that the performance of each NPI depends largely on the geographical region. The values of the normalized entropies for many NPIs are far from one, and are also below the corresponding values obtained through temporal reshuffling of NPIs in each country. The effectiveness of many NPIs therefore is, first, significant and, second, depends on the local context (combination of socio-economic features and NPIs already adopted) to varying degrees. In general, social distancing measures and travel restrictions show a high entropy (effectiveness varies considerably across countries) whereas case identification, contact tracing and healthcare measures show substantially less country dependence. We further explore this interplay of NPIs with socio-economic factors by analysing the effects of demographic and socio-economic covariates, as well as indicators for governance and human and economic development in the CC method (Supplementary Information). While the effects of most indicators vary across different NPIs at rather moderate levels, we find a robust tendency that NPI effectiveness correlates negatively with indicator values for governance-related accountability and political stability (as quantified by World Governance Indicators provided by the World Bank). Because the heterogeneity of the effectiveness of individual NPIs across countries points to a non-independence among different NPIs, the impact of a specific NPI cannot be evaluated in isolation. Since it is not possible in the real world to change the sequence of NPIs adopted, we resort to ‘what-if’ experiments to identify the most likely outcome of an artificial sequence of NPIs in each country. 
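The entropic measure of country dependence described above can be sketched as follows. This is a minimal illustrative estimator of a normalized rank entropy; the exact construction inside the TF method is not given in this excerpt and may differ:

```python
import math
from collections import Counter

def normalized_rank_entropy(ranks):
    # ranks: the rank an NPI obtained in each territory.
    # Shannon entropy of the empirical rank distribution, normalized by
    # log(number of territories): 0 means the NPI has the same rank
    # everywhere (context-free effectiveness); values near 1 mean its
    # performance depends strongly on the territory.
    n = len(ranks)
    if n < 2:
        return 0.0
    h = sum((c / n) * math.log(n / c) for c in Counter(ranks).values())
    return h / math.log(n)

print(normalized_rank_entropy([3, 3, 3, 3]))  # same rank everywhere -> 0.0
print(normalized_rank_entropy([1, 2, 3, 4]))  # all ranks differ: maximal entropy
```

Comparing such values against those from temporally reshuffled NPI sequences, as the authors do, separates genuine context dependence from noise.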
Within the TF approach, we selectively delete one NPI at a time from all sequences of interventions in all countries and compute the ensuing evolution of Rt compared to the actual case. To quantify whether the effectiveness of a specific NPI depends on its epidemic age of implementation, we study artificial sequences of NPIs constructed by shifting the selected NPI to other days, keeping the other NPIs fixed. In this way, for each country and each NPI, we obtain a curve of the most likely change in Rt versus the adoption time of the specific NPI. Figure 4 shows an example of the results for a selection of NPIs (see Supplementary Information for a more extensive report on other NPIs). Each curve shows the average change in Rt versus the adoption time of the NPI, averaged over the countries where that NPI has been adopted. Figure 4a refers to the national lockdown (including stay-at-home order implemented in US states). Our results show a moderate effect of this NPI (low change in Rt) as compared to other, less drastic, measures. Figure 4b shows NPIs with the pattern ‘the earlier, the better’. For those measures (‘closure of educational institutions’, ‘small gatherings cancellation’, ‘airport restrictions’ and many more shown in Supplementary Information), early adoption is always more beneficial. In Fig. 4c, ‘enhancing testing capacity’ and ‘surveillance’ exhibit a negative impact (that is, an increase) on Rt, presumably related to the fact that more testing allows for more cases to be identified. Finally, Fig. 4d, showing ‘tracing and tracking’ and ‘activate case notification’, demonstrates an initially negative effect that turns positive (that is, toward a reduction in Rt). Refer to Supplementary Information for a more comprehensive analysis of all NPIs. ## Discussion Our study dissects the entangled packages of NPIs23 and quantifies their effectiveness. We validate our findings using three different datasets and four independent methods. 
Our findings suggest that no NPI acts as a silver bullet on the spread of COVID-19. Instead, we identify several decisive interventions that significantly contribute to reducing Rt below one and that should therefore be considered for efficiently flattening the curve in the face of a potential second COVID-19 wave, or any similar future viral respiratory epidemic. The most effective NPIs include curfews, lockdowns and closing and restricting places where people gather in small or large numbers for an extended period of time. This includes small gathering cancellations (closures of shops, restaurants, gatherings of 50 persons or fewer, mandatory home working and so on) and closure of educational institutions. While in previous studies, based on smaller numbers of countries, school closures had been reported to have little effect on the spread of COVID-19 (refs. 19,20), more recent evidence has been in favour of the importance of this NPI28,29; school closures in the United States have been found to reduce COVID-19 incidence and mortality by about 60% (ref. 28). This result is also in line with a contact-tracing study from South Korea, which identified adolescents aged 10–19 years as more likely to spread the virus than adults and children in household settings30. Individual movement restrictions (including curfew, the prohibition of gatherings and movements for non-essential activities or measures segmenting the population) were also amongst the top-ranked measures. However, such radical measures have adverse consequences. School closure interrupts learning and can lead to poor nutrition, stress and social isolation in children31,32,33. Home confinement has strongly increased the rate of domestic violence in many countries, with a huge impact on women and children34,35, while it has also limited access to long-term care such as chemotherapy, with substantial impacts on patients’ health and survival chances36,37.
Governments may have to look towards less stringent measures, encompassing maximum effective prevention but enabling an acceptable balance between benefits and drawbacks38. Previous statistical studies on the effectiveness of lockdowns came to mixed conclusions. Whereas a relative reduction in Rt of 5% was estimated using a Bayesian hierarchical model19, a Bayesian mechanistic model estimated a reduction of 80% (ref. 20), although some questions have been raised regarding the latter work because of biases that overemphasize the importance of the most recent measure that had been implemented24. The susceptibility of other modelling approaches to biases resulting from the temporal sequence of NPI implementations remains to be explored. Our work tries to avoid such biases by combining multiple modelling approaches and points to a mild impact of lockdowns due to an overlap with effects of other measures adopted earlier and included in what is referred to as ‘national (or full) lockdown’. Indeed, the national lockdown encompasses multiple NPIs (for example, closure of land, sea and air borders, closure of schools, non-essential shops and prohibition of gatherings and visiting nursing homes) that countries may have already adopted in parts. From this perspective, the relatively attenuated impact of the national lockdown is explained as the little delta after other concurrent NPIs have been adopted. This conclusion does not rule out the effectiveness of an early national lockdown, but suggests that a suitable combination (sequence and time of implementation) of a smaller package of such measures can substitute for a full lockdown in terms of effectiveness, while reducing adverse impacts on society, the economy, the humanitarian response system and the environment6,39,40,41. 
Taken together, the social distancing and movement-restriction measures discussed above can therefore be seen as the ‘nuclear option’ of NPIs: highly effective but causing substantial collateral damages to society, the economy, trade and human rights4,39. We find strong support for the effectiveness of border restrictions. The role of travelling in the global spread of respiratory diseases proved central during the first SARS epidemic (2002–2003)42, but travelling restrictions show a large impact on trade, economy and the humanitarian response system globally41,43. The effectiveness of social distancing and travel restrictions is also in line with results from other studies that used different statistical approaches, epidemiological metrics, geographic coverage and NPI classification2,8,9,10,11,13,19,20. We also find a number of highly effective NPIs that can be considered less costly. For instance, we find that risk-communication strategies feature prominently amongst consensus NPIs. This includes government actions intended to educate and actively communicate with the public. The effective messages include encouraging people to stay at home, promoting social distancing and workplace safety measures, encouraging the self-initiated isolation of people with symptoms, travel warnings and information campaigns (mostly via social media). All these measures are non-binding government advice, contrasting with the mandatory border restriction and social distancing measures that are often enforced by police or army interventions and sanctions. Surprisingly, communicating on the importance of social distancing has been only marginally less effective than imposing distancing measures by law. The publication of guidelines and work safety protocols to managers and healthcare professionals was also associated with a reduction in Rt, suggesting that communication efforts also need to be tailored toward key stakeholders. 
Communication strategies aim at empowering communities with correct information about COVID-19. Such measures can be of crucial importance in targeting specific demographic strata found to play a dominant role in driving the spread of COVID-19 (for example, communication strategies to target individuals aged <40 years44). Government food assistance programmes and other financial supports for vulnerable populations have also turned out to be highly effective. Such measures therefore not only impact the socio-economic sphere45 but also have a positive effect on public health. For instance, facilitating people’s access to tests or allowing them to self-isolate without fear of losing their job or part of their salary may help in reducing Rt. Some measures are ineffective in (almost) all methods and datasets—for example, environmental measures to disinfect and clean surfaces and objects in public and semi-public places. This finding is at odds with current recommendations of the WHO (World Health Organization) for environmental cleaning in non-healthcare settings46, and calls for a closer examination of the effectiveness of such measures. However, environmental measures (for example, cleaning of shared surfaces, waste management, approval of a new disinfectant, increased ventilation) are seldom reported by governments or the media and are therefore not collected by NPI trackers, which could lead to an underestimation of their impact. We also find no evidence for the effectiveness of social distancing measures in regard to public transport. While infections on buses and trains have been reported47, our results may suggest a limited contribution of such cases to the overall virus spread, as previously reported48. A heightened public risk awareness associated with commuting (for example, people being more likely to wear face masks) might contribute to this finding49.
However, we should note that measures aimed at limiting crowding or increasing distancing on public transport have been highly diverse (from complete cancellation of all public transport to increased service frequency to reduce traveller density) and could therefore lead to widely varying effectiveness, also depending on the local context. The effectiveness of individual NPIs is heavily influenced by governance (Supplementary Information) and local context, as evidenced by the results of the entropic approach. This local context includes the stage of the epidemic, socio-economic, cultural and political characteristics, and other NPIs previously implemented. The fact that gross domestic product is overall positively correlated with NPI effectiveness whereas the governance indicator ‘voice and accountability’ is negatively correlated might be related to the successful mitigation of the initial phase of the epidemic by certain south-east Asian and Middle Eastern countries showing authoritarian tendencies. Indeed, some south-east Asian government strategies relied heavily on the use of personal data and police sanctions, whereas the Middle Eastern countries included in our analysis reported low numbers of cases in March–April 2020. By focusing on individual countries, the what-if experiments using artificial country-specific sequences of NPIs offer a way to quantify the importance of this local context with respect to measurement of effectiveness. Our main takeaway here is that the same NPI can have a drastically different impact if taken early or later, or in a different country. It is interesting to comment on the impact that ‘enhancing testing capacity’ and ‘tracing and tracking’ would have had if adopted at different points in time. Enhancing testing capacity should produce a short-term increase in the measured Rt.
Counter-intuitively, in countries testing close contacts, tracing and tracking, if effective, would have a similar effect on Rt because more cases will be found (although tracing and tracking would reduce Rt in countries that do not test contacts but rely on quarantine measures). Indeed, for countries implementing these measures early (when the number of cases was sufficiently small to enable tracing and testing of all contacts), we find a short-term increase in Rt. However, countries implementing these NPIs later did not necessarily find more cases, as shown by the corresponding decrease in Rt. We focus on March and April 2020, a period in which many countries had a sudden surge in cases that overwhelmed their tracing and testing capacities, rendering the corresponding NPIs ineffective. Assessment of the effectiveness of NPIs is statistically challenging because measures were typically implemented simultaneously and their impact might well depend on the particular implementation sequence. Some NPIs appear in almost all countries, whereas others appear in only a few, meaning that we could miss some rare but effective measures owing to a lack of statistical power. While some methods might be prone to overestimating the effects of an NPI owing to insufficient adjustment for confounding effects from other measures, other methods might underestimate the contribution of an NPI by assigning its impact to a highly correlated NPI. As a consequence, estimates of ΔRt might vary substantially across different methods, whereas agreement on the significance of individual NPIs is much more pronounced. The strength of our study, therefore, lies in the harmonization of these four independent methodological approaches, combined with the use of an extensive dataset on NPIs.
This allows us to estimate the structural uncertainty of NPI effectiveness—that is, the uncertainty introduced by choosing a certain model structure—which is likely to affect other modelling works that rely on a single method only. Moreover, whereas previous studies often subsumed a wide range of social distancing and travel restriction measures under a single entity, our analysis contributes to a more fine-grained understanding of each NPI. The CCCSL dataset features non-homogeneous data completeness across the different territories, and data collection could be biased by the data collector (native versus non-native) as well as by the information communicated by governments (see also ref. 23). The WHO-PHSM and CoronaNet databases have broad geographic coverage, whereas the CCCSL focuses mostly on developed countries. Moreover, the coding system presents certain drawbacks, notably because some interventions could belong to more than one category but are recorded only once. Compliance with NPIs is crucial for their effectiveness, yet we assumed a comparable degree of compliance by each population. We tried to mitigate this issue by validating our findings on two external databases, even if these are subject to similar limitations. We did not perform a formal harmonization of all categories in the three NPI trackers, which limits our ability to perform full comparisons among the three datasets. Additionally, we did not take into account the stringency of NPI implementation, and not all methods were able to describe potential variations in NPI effectiveness over time beyond the dependency on the epidemic age at adoption. The time window is limited to March–April 2020, when NPIs were typically implemented simultaneously and their structure is therefore highly correlated.
Future research should consider expanding this window to include the period when many countries were easing policies, or even strengthening them again after easing, as this would allow clearer differentiation of the correlated structure of NPIs because they tended to be released, and implemented again, one (or a few) at a time. To compute Rt, we used time series of the number of confirmed COVID-19 cases50. This approach is likely to over-represent patients with severe symptoms and may be biased by variations in testing and reporting policies among countries. We also assumed a constant serial interval (the average timespan between primary and secondary infection), although this quantity shows considerable variation in the literature51 and depends on measures such as social distancing and self-isolation. In conclusion, we present the outcome of an extensive analysis of the impact of 6,068 individual NPIs on the Rt of COVID-19 in 79 territories worldwide. Our analysis relies on the combination of three large and fine-grained datasets on NPIs and the use of four independent statistical modelling approaches. The emerging picture reveals that no one-size-fits-all solution exists, and no single NPI can decrease Rt below one. Instead, in the absence of a vaccine or efficient antiviral medication, a resurgence of COVID-19 cases can be stopped only by a suitable combination of NPIs, each tailored to the specific country and its epidemic age. These measures must be enacted in the optimal combination and sequence to be maximally effective against the spread of SARS-CoV-2 and thereby enable more rapid reopening. We showed that the most effective measures include closing and restricting most places where people gather in smaller or larger numbers for extended periods of time (businesses, bars, schools and so on). However, we also find several highly effective measures that are less intrusive.
These include land border restrictions, governmental support to vulnerable populations and risk-communication strategies. We strongly recommend that governments and other stakeholders first consider the adoption of such NPIs, tailored to the local context, should infection numbers surge (or surge a second time), before choosing the most intrusive options. Less drastic measures may also foster better compliance from the population. Notably, the simultaneous consideration of many distinct NPI categories allows us to move beyond the simple evaluation of individual classes of NPIs and to assess, instead, the collective impact of specific sequences of interventions. The ensemble of these results calls for a strong effort to simulate what-if scenarios at the country level for planning the most probable effectiveness of future NPIs, and, because it can go down to the level of individual countries and country-specific circumstances, our approach is the first contribution toward this end.

## Methods

### Data

#### NPI data

We use the publicly available CCCSL dataset on NPIs23, in which NPIs are categorized using a four-level hierarchical coding scheme. L1 defines the theme of the NPI: ‘case identification, contact tracing and related measures’, ‘environmental measures’, ‘healthcare and public health capacity’, ‘resource allocation’, ‘returning to normal life’, ‘risk communication’, ‘social distancing’ and ‘travel restriction’. Each L1 (theme) is composed of several categories (L2 of the coding scheme) that contain subcategories (L3), which are further subdivided into group codes (L4). The dataset covers 56 countries; data for the United States are available at the state level (24 states), making a total of 79 territories. In this analysis, we use a static version of the CCCSL, retrieved on 17 August 2020, presenting 6,068 NPIs. A glossary of the codes, with a detailed description of each category and its subcategories, is provided on GitHub.
For each country, we use the data up to the last day on which the measures were reliably updated. NPIs implemented in fewer than five territories are not considered, leading to a final total of 4,780 NPIs of 46 different L2 categories for use in the analyses. Second, we use the CoronaNet COVID-19 Government Response Event Dataset (v.1.0)27, which contains 31,532 interventions and covers 247 territories (countries and US states) (data extracted on 17 August 2020). For our analysis, we map their columns ‘type’ and ‘type_sub_cat’ onto L1 and L2, respectively. Definitions for all 116 L2 categories can be found on the GitHub page of the project. Using the same criterion as for the CCCSL, we obtain a final total of 18,919 NPIs of 107 different categories. Third, we use the WHO-PHSM dataset26, which merges and harmonizes the following datasets: ACAPS41, the Oxford COVID-19 Government Response Tracker52, the Global Public Health Intelligence Network (GPHIN) of the Public Health Agency of Canada (Ottawa, Canada), the CCCSL23, the United States Centers for Disease Control and Prevention and HIT-COVID53. The WHO-PHSM dataset contains 24,077 interventions and covers 264 territories (countries and US states; data extracted on 17 August 2020). Its encoding scheme has a heterogeneous coding depth and, for our analysis, we map ‘who_category’ onto L1 and either take ‘who_subcategory’ or a combination of ‘who_subcategory’ and ‘who_measure’ as L2. This results in 40 measure categories. A glossary is available at: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/phsm. The CoronaNet and WHO-PHSM datasets also provide information on the stringency of the implementation of a given NPI, which we did not use in the current study.

#### COVID-19 case data

To estimate Rt and the growth rates of the number of COVID-19 cases, we use time series of the number of confirmed COVID-19 cases in the 79 territories considered50.
To control for weekly fluctuations, we smooth the time series by computing the rolling average using a Gaussian window with a standard deviation of 2 days, truncated at a maximum window size of 15 days.

### Regression techniques

We apply four different statistical approaches to quantify the impact of an NPI, M, on the reduction in Rt (Supplementary Information).

#### CC

Case-control analysis considers each single category (L2) or subcategory (L3) M separately and evaluates in a matched comparison the difference, ΔRt, in Rt between all countries that implemented M (cases) and those that did not (controls) during the observation window. The matching is done on epidemic age and the time of implementation of any response. The comparison is made via a linear regression model adjusting for (1) epidemic age (days after the country has reached 30 confirmed cases), (2) the value of Rt before M takes effect, (3) total population, (4) population density, (5) the total number of NPIs implemented and (6) the number of NPIs implemented in the same category as M. With this design, we investigate the time delay of τ days between implementation of M and observation of ΔRt, as well as additional country-based covariates that quantify other dimensions of governance and human and economic development. Estimates for Rt are averaged over delays between 1 and 28 days.

#### Step function Lasso regression

In this approach we assume that, without any intervention, the reproduction factor is constant and that deviations from this constant result from a delayed onset, by τ days, of each NPI on L2 (categories) of the hierarchical dataset. We use a Lasso regularization approach combined with a meta-parameter search to select a reduced set of NPIs that best describe the observed ΔRt. Estimates for the changes in ΔRt attributable to NPI M are obtained from country-wise cross-validation.
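The step-function idea behind the Lasso approach can be sketched with a toy example. Everything below is hypothetical for illustration — the implementation days, the delay τ and the effect sizes are invented, and the actual study fits country-level NPI data with a meta-parameter search and country-wise cross-validation rather than a single simulated series.

```python
import numpy as np
from sklearn.linear_model import Lasso

T, tau = 60, 7                      # observation days, assumed onset delay
start_days = [8, 15, 22, 29, 36]    # hypothetical NPI implementation days

# Step-function design matrix: column m switches to 1 tau days after NPI m starts.
X = np.zeros((T, len(start_days)))
for m, day in enumerate(start_days):
    X[day + tau:, m] = 1.0

# Simulate Rt as a constant baseline plus additive NPI steps (toy ground truth:
# NPIs 1 and 3 have no effect).
rng = np.random.default_rng(0)
true_effects = np.array([-0.4, 0.0, -0.2, 0.0, -0.1])
r_t = 2.5 + X @ true_effects + rng.normal(0.0, 0.02, size=T)

# Lasso shrinks the coefficients of uninformative NPIs toward exactly zero,
# selecting a reduced set of NPIs that best describes the observed changes.
model = Lasso(alpha=0.01).fit(X, r_t)
delta_rt = model.coef_              # estimated change in Rt per NPI
```

The key design choice is that each NPI enters only as a delayed step, so a constant baseline plus a sparse set of steps is the entire model; the regularization strength plays the role of the meta-parameter tuned in the study.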
#### RF regression

We perform an RF regression, where the NPIs implemented in a country are used as predictors for Rt, time-shifted τ days into the future. Here, τ accounts for the time delay between implementation and onset of the effect of a given NPI. Similar to the Lasso regression, the assumption underlying the RF approach is that, without changes in interventions, the value of Rt in a territory remains constant. However, contrary to the two methods described above, RF represents a nonlinear model, meaning that the effects of individual NPIs on Rt do not need to add up linearly. The importance of an NPI is defined as the decline in the predictive performance of the RF on unseen data if the data concerning that NPI are replaced by noise, also called permutation importance.

#### Transformer modelling

Transformers54 have been demonstrated as models suitable for dynamic discrete-element processes such as textual sequences, owing to their ability to recall past events. Here we extended the transformer architecture to the continuous case of epidemic data by replacing the probabilistic output layer with a linear combination of the transformer output, whose input is identical to that for the RF regression, along with the values of Rt. The best-performing network (least mean-squared error in country-wise cross-validation) is identified as a transformer encoder with four hidden layers of 128 neurons, an embedding size of 128, eight heads, one output described by a linear output layer and 47 inputs (corresponding to each category and Rt). To quantify the impact of measure M on Rt, we use the trained transformer as a predictive model and compare simulations without any measure (reference) to those where one measure is presented at a time to assess ΔRt. To reduce the effects of overfitting and the multiplicity of local minima, we report results from an ensemble of transformers trained to similar precision levels.
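The permutation-importance idea used for the RF regression can be illustrated on synthetic data. The feature layout and effect sizes below are invented for the sketch; in the study the predictors are the NPI indicators of each country and the target is the time-shifted Rt.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Toy country-day rows: four binary NPI indicators, the first strongly
# lowering Rt, the third mildly, the others irrelevant (hypothetical effects).
n = 400
X = rng.integers(0, 2, size=(n, 4)).astype(float)
y = 2.0 - 0.8 * X[:, 0] - 0.1 * X[:, 2] + rng.normal(0.0, 0.05, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: decline in predictive performance on unseen data
# when the column for one NPI is shuffled (i.e. replaced by noise).
perm = permutation_importance(rf, X_test, y_test, n_repeats=20, random_state=0)
importance = perm.importances_mean
```

Because the forest is nonlinear, it can also pick up interactions between NPIs; note that the importance score only measures how much predictive power is lost, not the sign or size of the NPI's effect on Rt.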
### Estimation of Rt

We use the R package EpiEstim55 with a sliding time window of 7 days to estimate the time series of Rt for every country. We choose an uncertain serial interval following a probability distribution with a mean of 4.46 days and a standard deviation of 2.63 days56.

### Ranking of NPIs

For each of the methods (CC, Lasso regression and TF), we rank the NPI categories in descending order according to their impact—that is, the estimated degree to which they lower Rt—or their feature importance (RF). To compare rankings, we count how many of the 46 NPIs considered are classified as belonging to the top x ranked measures in all methods, and test the null hypothesis that this overlap has been obtained from completely independent rankings. The P value is then given by the complementary cumulative distribution function for a binomial experiment with 46 trials and success probability (x/46)^4. We report the median P value obtained over all x ≤ 10 to ensure that the results do not depend on where we impose the cut-off for the classes.

### Co-implementation network

If there is a statistical tendency that a country implementing NPI i also implements NPI j later in time, we draw a directed link from i to j. Nodes are placed on the y axis according to the average epidemic age at which the corresponding NPI is implemented; they are grouped on the x axis by their L1 theme. Node colours correspond to themes. The effectiveness scores for all NPIs are re-scaled between zero and one for each method; node size is proportional to the re-scaled scores, averaged over all methods.

### Entropic country-level approach

Each territory can be characterized by its socio-economic conditions and the unique temporal sequence of NPIs adopted. To quantify the NPI effect, we measure the heterogeneity of the overall rank of an NPI amongst the countries that have taken that NPI.
To compare countries that have implemented different numbers of NPIs, we consider normalized rankings, where the ranking position is divided by the number of elements in the ranking list (that is, the number of NPIs taken in a specific country). We then bin the interval [0, 1] of the normalized rankings into ten sub-intervals and compute for each NPI the entropy of the distribution of occurrences of that NPI in the different normalized rankings per country:

$$S(\mathrm{NPI}) = -\frac{1}{\log(10)} \sum_{i} P_{i} \log(P_{i}),$$ (1)

where Pi is the probability that the NPI considered appeared in the ith bin in the normalized rankings of all countries. To assess the confidence of these entropic values, results are compared with expectations from a temporal reshuffling of the data: for each country, we keep the same NPIs adopted but reshuffle the time stamps of their adoption.

### Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

The CCCSL dataset can be downloaded from http://covid19-interventions.com/. The CoronaNet data can be found at https://www.coronanet-project.org/. The WHO-PHSM dataset is available at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/phsm. Snapshots of the datasets used in our study are available in the following GitHub repository: https://github.com/complexity-science-hub/ranking_npis.

## Code availability

Custom code for the analysis is available in the following GitHub repository: https://github.com/complexity-science-hub/ranking_npis.

## References

1. Qualls, N. L. et al. Community mitigation guidelines to prevent pandemic influenza – United States, 2017. MMWR Recomm. Rep. 66, 1–34 (2017). 2. Tian, H. et al. An investigation of transmission control measures during the first 50 days of the COVID-19 epidemic in China. Science 368, 638–642 (2020). 3. Chen, S. et al.
COVID-19 control in China during mass population movements at New Year. Lancet 395, 764–766 (2020). 4. Lee, K., Worsnop, C. Z., Grépin, K. A. & Kamradt-Scott, A. Global coordination on cross-border travel and trade measures crucial to COVID-19 response. Lancet 395, 1593–1595 (2020). 5. Chakraborty, I. & Maity, P. Covid-19 outbreak: migration, effects on society, global environment and prevention. Sci. Total Environ. 728, 138882 (2020). 6. Pfefferbaum, B. & North, C. S. Mental health and the COVID-19 pandemic. N. Eng. J. Med. 383, 510–512. 7. COVID-19 dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University of Medicine (Johns Hopkins University of Medicine, accessed 4 June 2020); https://coronavirus.jhu.edu/map.html. 8. Chinazzi, M. et al. The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science 368, 395–400 (2020). 9. Arenas, A., Cota, W., Granell, C. & Steinegger, B. Derivation of the effective reproduction number R for COVID-19 in relation to mobility restrictions and confinement. Preprint at medRxiv https://doi.org/10.1101/2020.04.06.20054320 (2020). 10. Wang, J., Tang, K., Feng, K. & Lv, W. When is the COVID-19 pandemic over? Evidence from the stay-at-home policy execution in 106 Chinese cities. Preprint at SSRN https://doi.org/10.2139/ssrn.3561491 (2020). 11. Soucy, J.-P. R. et al. Estimating effects of physical distancing on the COVID-19 pandemic using an urban mobility index. Preprint at medRxiv https://doi.org/10.1101/2020.04.05.20054288 (2020). 12. Anderson, S. C. et al. Estimating the impact of Covid-19 control measures using a Bayesian model of physical distancing. Preprint at medRxiv https://doi.org/10.1101/2020.04.17.20070086 (2020). 13. Teslya, A. et al. Impact of self-imposed prevention measures and short-term government intervention on mitigating and delaying a COVID-19 epidemic. PLoS Med. https://doi.org/10.1371/journal.pmed.1003166 (2020). 14. Kraemer, M. U. 
et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. Science 497, 493–497 (2020). 15. Prem, K. & Liu, Y. et al. The effect of control strategies to reduce social mixing on outcomes of the COVID-19 epidemic in Wuhan, China: a modelling study. Lancet Public Health 5, e261–e270 (2020). 16. Gatto, M. et al. Spread and dynamics of the COVID-19 epidemic in Italy: effects of emergency containment measures. Proc. Natl Acad. Sci. USA 117, 10484–10491 (2020). 17. Lorch, L. et al. A spatiotemporal epidemic model to quantify the effects of contact tracing, testing, and containment. Preprint at arXiv https://arxiv.org/abs/2004.07641 (2020). 18. Dehning, J. & Zierenberg, J. et al. Inferring change points in the spread of COVID-19 reveals the effectiveness of interventions. Science 369, eabb9789 (2020). 19. Banholzer, N. et al. Impact of non-pharmaceutical interventions on documented cases of COVID-19. Preprint at medRxiv https://doi.org/10.1101/2020.04.16.20062141 (2020). 20. Flaxman, S. et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature 584, 257–261 (2020). 21. Hsiang, S. et al. The effect of large-scale anti-contagion policies on the COVID-19 pandemic. Nature 584, 262–267 (2020). 22. Nachega, J., Seydi, M. & Zumla, A. The late arrival of coronavirus disease 2019 (Covid-19) in Africa: mitigating pan-continental spread. Clin. Infect. Dis. 71, 875–878 (2020). 23. Desvars-Larrive, A. et al. A structured open dataset of government interventions in response to COVID-19. Sci. Data 7, 285 (2020). 24. Bryant, P. & Elofsson, A. The limits of estimating COVID-19 intervention effects using Bayesian models. Preprint at medRxiv https://doi.org/10.1101/2020.08.14.20175240 (2020). 25. Protecting People and Economies: Integrated Policy Responses to COVID-19 (World Bank, 2020); https://openknowledge.worldbank.org/handle/10986/33770 26. 
Tracking Public Health and Social Measures: A Global Dataset (World Health Organization, 2020); https://www.who.int/emergencies/diseases/novel-coronavirus-2019/phsm 27. Cheng, C., Barceló, J., Hartnett, A. S., Kubinec, R. & Messerschmidt, L. COVID-19 government response event dataset (CoronaNet v.1.0). Nat. Hum. Behav. 4, 756–768 (2020). 28. Auger, K. A. et al. Association between statewide school closure and COVID-19 incidence and mortality in the US. JAMA 324, 859–870 (2020). 29. Liu, Y. et al. The impact of non-pharmaceutical interventions on SARS-CoV-2 transmission across 130 countries and territories. Preprint at medRxiv https://doi.org/10.1101/2020.08.11.20172643 (2020). 30. Park, Y., Choe, Y. et al. Contact tracing during coronavirus disease outbreak. Emerg. Infect. Dis. 26, 2465–2468(2020). 31. Adverse Consequences of School Closures (UNESCO, 2020); https://en.unesco.org/covid19/educationresponse/consequences 32. Education and COVID-19: Focusing on the Long-term Impact of School Closures (OECD, 2020); https://www.oecd.org/coronavirus/policy-responses/education-and-covid-19-focusing-on-the-long-term-impact-of-school-closures-2cea926e/ 33. Orben, A., Tomova, L. & Blakemore, S.-J. The effects of social deprivation on adolescent development and mental health. Lancet Child Adolesc. Health 4, 634–640 (2020). 34. Taub, A. A new covid-19 crisis: domestic abuse rises worldwide. The New York Times https://www.nytimes.com/2020/04/06/world/coronavirus-domestic-violence.html (6 April 2020). 35. Abramian, J. The Covid-19 pandemic has escalated domestic violence worldwide. Forbes https://www.forbes.com/sites/jackieabramian/2020/07/22/the-covid-19-pandemic-has-escalated-global-domestic-violence/#57366498173e (22 July 2020). 36. Tsamakis, K. et al. Oncology during the COVID-19 pandemic: challenges, dilemmas and the psychosocial impact on cancer patients (review). Oncol. Lett. 20, 441–447 (2020). 37. Raymond, E., Thieblemont, C., Alran, S. & Faivre, S. 
Impact of the COVID-19 outbreak on the management of patients with cancer. Target. Oncol. 15, 249–259 (2020). 38. Couzin-Frankel, J., Vogel, G. & Weiland, M. School openings across globe suggest ways to keep coronavirus at bay, despite outbreaks. Science https://www.sciencemag.org/news/2020/07/school-openings-across-globe-suggest-ways-keep-coronavirus-bay-despite-outbreaks# (2020). 39. Vardoulakis, S., Sheel, M., Lal, A. & Gray, D. Covid-19 environmental transmission and preventive public health measures. Aust. N. Z. J. Public Health 44, 333–335 (2020). 40. Saadat, S., Rawtani, D. & Hussain, C. M. Environmental perspective of Covid-19. Sci. Total Environ. 728, 138870 (2020). 41. Covid-19 Government Measures Dataset (ACAPS, 2020); https://www.acaps.org/covid19-government-measures-dataset 42. Brockmann, D. & Helbing, D. The hidden geometry of complex, network-driven contagion phenomena. Science 342, 1337–1342 (2013). 43. Guan, D. et al. Global supply-chain effects of Covid-19 control measures. Nat. Hum. Behav. 4, 577–587 (2020). 44. Malmgren, J., Guo, B. & Kaplan, H. G. Covid-19 confirmed case incidence age shift to young persons aged 0–19 and 20–39 years over time: Washington State March–April 2020. Preprint at medRxiv https://doi.org/10.1101/2020.05.21.20109389 (2020). 45. Gentilini, U., Almenfi, M., Orton, I. & Dale, P. Social Protection and Jobs Responses to COVID-19 (World Bank, 2020); https://openknowledge.worldbank.org/handle/10986/33635 46. Cleaning and Disinfection of Environmental Surfaces in the Context of COVID-19 (World Health Organization, 2020); https://www.who.int/publications/i/item/cleaning-and-disinfection-of-environmental-surfaces-inthe-context-of-covid-19 47. Shen, J. et al. Prevention and control of COVID-19 in public transportation: experience from China. Environ. Pollut. 266, 115291 (2020). 48. Islam, N. et al. Physical distancing interventions and incidence of coronavirus disease 2019: natural experiment in 149 countries. 
BMJ 370, m2743 (2020). 49. Liu, X. & Zhang, S. Covid-19: face masks and human-to-human transmission. Influenza Other Respir. Viruses 14, 472–473 (2020). 50. 2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE (Johns Hopkins University of Medicine, 2020); https://github.com/CSSEGISandData/COVID-19 51. Griffin, J. et al. A rapid review of available evidence on the serial interval and generation time of COVID-19. Preprint at medRxiv https://doi.org/10.1101/2020.05.08.20095075 (2020). 52. Hale, T., Webster, S., Petherick, A., Phillips, T. & Kira, B. Oxford COVID-19 Government Response Tracker (Blavatnik School of Government & University of Oxford, 2020); https://www.bsg.ox.ac.uk/research/research-projects/coronavirus-government-response-tracker 53. Zheng, Q. et al. HIT-COVID, a global database tracking public health interventions to COVID-19. Sci. Data 7, 286 (2020). 54. Vaswani, A. et al. in Advances in Neural Information Processing Systems 30 (eds Guyon, I. et al.) 5998–6008 (Curran Associates, 2017). 55. Cori, A., Ferguson, N. M., Fraser, C. & Cauchemez, S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am. J. Epidemiol. 178, 1505–1512 (2013). 56. Valka, F. & Schuler, C. Estimation and interactive visualization of the time-varying reproduction number Rt and the time-delay from infection to estimation. Preprint at medRxiv https://doi.org/10.1101/2020.09.19.20197970 (2020).

## Acknowledgements

We thank A. Roux for her contribution to the coding of the interventions recorded in the dataset used in this study. We thank D. Garcia, V. D. P. Servedio and D. Hofmann for their contribution in the early stage of this work. N.H. thanks L. Haug for helpful discussions. This work was funded by the Austrian Science Promotion Agency, the FFG project (no. 857136), the WWTF (nos. COV 20-001, COV 20-017 and MA16-045), Medizinisch-Wissenschaftlichen Fonds des Bürgermeisters der Bundeshauptstadt Wien (no.
CoVid004) and the project VET-Austria, a cooperation between the Austrian Federal Ministry of Social Affairs, Health, Care and Consumer Protection, the Austrian Agency for Health and Food Safety and the University of Veterinary Medicine, Vienna. The funders had no role in the conceptualization, design, data collection, analysis, decision to publish or preparation of the manuscript.

## Author information

### Contributions

N.H., L.G., A.L., V.L. and P.K. conceived and performed the analyses. V.L., S.T. and P.K. supervised the study. E.D. contributed additional tools. N.H., L.G., A.L., A.D.-L., B.P. and P.K. wrote the first draft of the paper. A.D.-L. supervised data collection on NPIs. All authors discussed the results and contributed to revision of the final manuscript.

### Corresponding author

Correspondence to Peter Klimek.

## Ethics declarations

### Competing interests

The authors declare no competing interests. Peer review information: peer review reports are available. Primary handling editor: Stavroula Kousta. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Extended data

### Extended Data Fig. 1 Main results for the CCCSL dataset.

Normalised scores (relative effect within a method) of the NPI categories in CCCSL, averaged over the four different approaches.

### Extended Data Fig. 2 Main results for the CoronaNet dataset.

Normalised scores (relative effect within a method) of the NPI categories in CoronaNet, averaged over the four different approaches. Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 3.

### Extended Data Fig. 3 Main results for the WHO-PHSM dataset.

Normalised scores (relative effect within a method) of the NPI categories in WHO-PHSM, averaged over the four different approaches. Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 4.

### Extended Data Fig. 4 Measure effectiveness in the WHO-PHSM dataset.

Analogue of Fig. 1 of the main text when the analysis is done on the WHO-PHSM dataset. Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 4.

### Extended Data Fig. 5 Measure effectiveness in the CoronaNet dataset (part 1).

Analogue of Fig. 1 of the main text when the analysis is done on the CoronaNet dataset (continued in Extended Data Fig. 6). Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 3.

### Extended Data Fig. 6 Measure effectiveness in the CoronaNet dataset (part 2).

Analogue of Fig. 1 of the main text when the analysis is done on the CoronaNet dataset (continued from Extended Data Fig. 5). Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 3.

## Supplementary information

### Supplementary Information

Supplementary Methods, Supplementary Results, Supplementary Discussion, Supplementary Figs. 1–26 and Supplementary Tables 1–6.

## Rights and permissions

Reprints and Permissions

Haug, N., Geyrhofer, L., Londei, A. et al. Ranking the effectiveness of worldwide COVID-19 government interventions. Nat. Hum. Behav. 4, 1303–1312 (2020).
https://doi.org/10.1038/s41562-020-01009-0
https://www.physicsforums.com/threads/how-does-the-coolsurge-work.248622/
How does the CoolSurge work?

1. Aug 5, 2008, Staff: Mentor
They had a full page ad in today's USA Today. Near as I can tell, it uses blocks of ice to make cool air: http://www.coolsurge.com/faq.cfm

2. Aug 5, 2008, WarPhalange
So it's like putting a fan next to some ice cubes?

3. Aug 5, 2008, lisab, Staff Emeritus
Dang it! That's exactly how I cool my (south-facing) office - I arrange a big ice block, a box fan and a bucket! If I had only known I could sell this lame idea for money...

4. Aug 6, 2008, LowlyPion
I think that really borders on criminal misrepresentation and fraud. USA Today should be ashamed to allow such an ad. Most especially the claims that it cools the air 10 degrees coupled with the claim that it uses 93% less energy. This is patently not the case.
Edit: In fraud there may be misrepresentation by either commission or omission, as in this Federal statute relating to stock fraud: "... or to omit to state a material fact necessary in order to make the statements made, in the light of the circumstances under which they were made, not misleading, ..."
Last edited: Aug 6, 2008

5. Aug 6, 2008, Staff: Mentor

6. Aug 6, 2008, LowlyPion
I didn't see it, I almost never see USA Today unless it's shoved under a hotel door and I pick it up to keep from slipping on it, so I don't know how to reference it to them. But surely their attention should be drawn to it. If they were good citizens they would do a piece exposing it even. Maybe even find a quotable physicist for them if need be. But I wouldn't expect them to do that. The monied suits still want those ad dollars. Basically though it boils my juices to see people taken advantage of with this kind of hucksterism preying on the gullible.

7. Aug 6, 2008, Staff: Mentor
Me too!...I'm traveling for work right now.

8. Aug 6, 2008, Moonbear, Staff Emeritus
:rofl: I think that's the USA Today's entire circulation, getting shoved under hotel doors. It's the only time I read it either. The trick must be that you need to use someone else's freezer to make your ice block, otherwise it would be more efficient to put the fan in front of the open freezer, and we all know how useful that is.

9. Aug 6, 2008, NoTime
:rofl: If you live somewhere that gets cold enough in the winter, you could harvest and store ice like they did in the days before refrigeration.

10. Aug 7, 2008, Moonbear, Staff Emeritus
While I was on the phone earlier this evening to report a power outage, I had to listen to the power company's "helpful tips" while on hold. "If it's below 0 outside, you could consider freezing a bucket of water to place in your refrigerator to help keep it cold." :uhh: It's August (and I don't think I've fallen off the Northern hemisphere yet). And if it were that cold outside, I could just put the refrigerator contents outside to keep them cold rather than waiting for buckets of water to freeze.

11. May 16, 2009, FredGarvin
From The Loosers That Brought You Heat Surge...
Cool-Surge: For $295 you get an old CPU case with a crappy fan and...wait for it...freezer ice blocks. $300 to blow air over ice cubes. These guys are simply amazing (read: morons). I hate to give them traffic to their web site especially since they are obviously catering to taking money from old people and the mentally challenged. I would love to meet one of these idiots on the street. http://www.coolsurge.com/index.cfm

12. May 16, 2009, Pengwuino
And only $295? You might as well buy 2!

13. May 16, 2009, turbo
Uses only as much power as an 80 watt bulb - sure, as long as you don't have to pay for the refrigeration required to re-freeze the "glacier packs". Refrigerators and freezers operate on free power. Notice the weasel words: the discharge air of this piece of crap can be "up to" 10 degrees cooler than the room air. Is that going to cool the room by 10 degrees? Nope. I'd like to catch these creeps, kidnap them, and put them in a windowless apartment (and make them pay the electrical bills) while their "miraculous" heating and cooling devices fight it out. I'd entertain them with mood rings and pet rocks and occasionally slap them with a saturated Sham-Wow if they got sleepy.

14. May 16, 2009, Moonbear, Staff Emeritus
Oh boy, I was wondering what to do with that old CPU box for the computer that's too old to use for anything anymore. I was thinking I might be able to sell it to some geek for parts for $10; I had no idea it was worth $300 if I just added a few freezer blocks! :rofl:

15. May 16, 2009, Ivan Seeking, Staff Emeritus
In light of yours and the following posts, I tend to think that any traffic generated by PF would not be very helpful to them.

16. May 16, 2009, Pengwuino
Sounds like someone needs psychiatric evaluation. :rofl:

17. May 18, 2009, Ivan Seeking, Staff Emeritus
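The thread's objection can be made quantitative: the cooling available from a block of ice equals the latent heat it absorbs while melting, and a freezer must pump at least that much heat back out (plus losses) to refreeze it, so the electricity bill just moves to the freezer. A rough back-of-envelope sketch (the 10 kg block mass and the freezer's coefficient of performance of 2 are assumptions for illustration, not figures from the ad):

```python
# Back-of-envelope check: cooling obtained from melting a block of ice,
# vs. the electricity a freezer uses to refreeze it.
LATENT_HEAT_FUSION = 334e3       # J/kg for water ice
mass_kg = 10.0                   # assumed block size (not from the ad)

cooling_joules = mass_kg * LATENT_HEAT_FUSION     # 3.34 MJ of "cooling"
cooling_kwh = cooling_joules / 3.6e6              # roughly 0.93 kWh

# Refreezing with a freezer at an assumed coefficient of performance of 2
# consumes about half that amount as electricity - it is not free:
cop = 2.0
electricity_kwh = cooling_kwh / cop               # roughly 0.46 kWh per refreeze
```

In other words, every melt/refreeze cycle of the "glacier packs" silently costs about half a kilowatt-hour at the freezer, which the "93% less energy" claim simply omits.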
https://docs.simpeg.xyz/content/api/generated/SimPEG.maps.SphericalSystem.inverse.html
# SimPEG.maps.SphericalSystem.inverse

SphericalSystem.inverse(u)

Maps vectors in Cartesian coordinates to spherical coordinates. Let $$\mathbf{v_x}$$, $$\mathbf{v_y}$$ and $$\mathbf{v_z}$$ store the x, y and z components of a set of vectors in Cartesian coordinates such that:

$\begin{split}\mathbf{u} = \begin{bmatrix} \mathbf{v_x} \\ \mathbf{v_y} \\ \mathbf{v_z} \end{bmatrix}\end{split}$

The inverse mapping recovers the vectors in spherical coordinates, i.e.:

$\begin{split}\mathbf{m}(\mathbf{u}) = \begin{bmatrix} \mathbf{a} \\ \mathbf{t} \\ \mathbf{p} \end{bmatrix}\end{split}$

where $$\mathbf{a}$$ are the amplitudes, $$\mathbf{t}$$ are the azimuthal angles and $$\mathbf{p}$$ are the radial angles.

Parameters:
u : numpy.ndarray
    The x, y and z components of a set of vectors in Cartesian coordinates. If the mapping is defined for a mesh, the numpy.ndarray has length 3*mesh.nC.

Returns:
numpy.ndarray
    The amplitudes ($$\mathbf{a}$$), azimuthal angles ($$\mathbf{t}$$) and radial angles ($$\mathbf{p}$$) for the set of vectors in spherical coordinates. If the mapping is defined for a mesh, the numpy.ndarray has length 3*mesh.nC.
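The mapping described above can be sketched in plain NumPy. This is not the SimPEG implementation; the angle conventions below (azimuth measured from the x axis, elevation measured from the x-y plane) are an assumption, and SimPEG's conventions may differ.

```python
import numpy as np

def spherical_inverse(u):
    """Sketch of the inverse map above: stacked Cartesian components
    [x; y; z] -> stacked spherical components [a; t; p].

    Assumed conventions (SimPEG's may differ): amplitude a = |v|,
    azimuth t = atan2(y, x), elevation p = atan2(z, sqrt(x^2 + y^2)).
    """
    x, y, z = np.split(np.asarray(u, dtype=float), 3)
    a = np.sqrt(x**2 + y**2 + z**2)            # amplitudes
    t = np.arctan2(y, x)                       # azimuthal angles
    p = np.arctan2(z, np.sqrt(x**2 + y**2))    # "radial" (elevation) angles
    return np.concatenate([a, t, p])

# Two vectors, (1, 0, 0) and (0, 0, 1), stacked as [x1, x2, y1, y2, z1, z2]
out = spherical_inverse([1.0, 0.0, 0.0, 0.0, 0.0, 1.0])
```

For the unit x-axis vector this gives (a, t, p) = (1, 0, 0); for the unit z-axis vector it gives (1, 0, π/2), matching the stacked layout of the documented function.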
http://meetings.aps.org/Meeting/DFD10/Event/133913
### Session MR: Supersonic and Hypersonic Flows I

8:00 AM–10:10 AM, Tuesday, November 23, 2010
Long Beach Convention Center, Room: 203C
Chair: Guillaume Bres, Exa Corporation
Abstract ID: BAPS.2010.DFD.MR.5

### Abstract: MR.00005 : Shock-wave surfing and the separation of meteoroid fragments in the atmosphere

8:52 AM–9:05 AM

#### Authors:

Stuart Laurence
Ralf Deiterding

Studying the aerodynamic interactions between bodies travelling at highly supersonic speeds is necessary to our understanding of the separation of meteoroid fragments following atmospheric disruption. Here we show that a phenomenon referred to as "shock-wave surfing", in which a body moves in such a way as to follow the shock wave generated by another upstream body, can lead to the accumulation of a significantly higher relative lateral velocity between fragments than would otherwise be possible. The surfing phenomenon is investigated for the canonical cases of interactions between a sphere and a wedge, and between two spheres. Numerical simulations are performed and a simple theoretical model is developed to determine the forces acting on the surfing body. A phase-plane description is employed to elucidate features of the system dynamics in both cases. For the two-sphere case, a strong influence of the body radius ratio on the separation process is found, and a critical ratio is predicted for initially touching fragments that delineates entrainment of the smaller fragment within the larger fragment's shock from expulsion. It is also shown that a large fraction of the variation in the separation behaviour of meteoroid fragments deduced by previous authors from an analysis of terrestrial crater fields can be explained by a combination of surfing and a modest rotation rate of the parent body.

To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2010.DFD.MR.5
https://au.boazcommunitycorp.org/943-sample-sizing.html
# Sample sizing

When you want to determine the sample size, the procedure proceeds in three distinct steps:

• Identify the most important and most significant variable in the group;
• Analyze whether it is ordinal, interval or nominal;
• Check whether the population is finite or infinite.

This yields four cases, each with its own formula:

• Interval variable and infinite population
• Interval variable and finite population
• Nominal or ordinal variable and infinite population
• Nominal or ordinal variable and finite population

Note: The proportion (p) will be the estimate of the true proportion of one of the chosen levels for the adopted variable. For example, if 60% of the sampled phones are Nokia, then p will be 0.60. The ratio (q) will always be 1 - p; in this example q will be 0.4. The error is represented by d. For cases where the proportions cannot be identified, 0.5 is used for both p and q.

Next: Data Types
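The four formulas were shown as images on the original page and did not survive extraction. For the nominal/ordinal variable with an infinite population, the standard formula is Cochran's n = z²pq/d²; the sketch below assumes that formula and a 95% confidence level (z = 1.96), neither of which is recoverable from the page itself.

```python
import math

def sample_size_nominal_infinite(p, d, z=1.96):
    """Cochran's sample size for a proportion with an infinite population:
    n = z^2 * p * q / d^2, where q = 1 - p and d is the tolerated error."""
    q = 1.0 - p
    return math.ceil(z**2 * p * q / d**2)

# Using the example above: p = 0.60 (Nokia share), q = 0.40,
# tolerated error d = 0.05 at a 95% confidence level
n = sample_size_nominal_infinite(0.60, 0.05)  # 369 respondents
```

Note that the worst case p = q = 0.5 (used when the proportion cannot be identified) maximizes p·q and therefore gives the largest, safest sample size.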
https://analystprep.com/cfa-level-1-exam/equity/price-return-total-return-index/
## Index Value

The formula for calculating the value of a price return index is as follows:

$$V_{PRI} = \frac{ \sum_{i=1}^{N}{n_iP_i} } { D }$$

Where:

VPRI = the value of the price return index
ni = the number of units of constituent security i held in the index portfolio
N = the number of constituent securities in the index
Pi = the unit price of constituent security i
D = the value of the divisor

While the formula for calculating the value of an index may seem somewhat complicated at first glance, it is similar to calculating the value of any other normal portfolio of securities, as it involves adding up the values of the constituent securities. Index value calculation has just one additional step: dividing the sum of the constituent securities' values by a divisor, which is usually chosen at the inception of the index to set a convenient beginning value, and then adjusted to offset index value changes unrelated to changes in the prices of constituent securities.

## Example 1

An index is made up of two constituent securities, Stock A and Stock B. What beginning divisor must be used to achieve a beginning value of 1,000?

$$\begin{array}{l|r|r} \textbf{Security} & \textbf{Units} & \textbf{Price/Unit} \\ \hline \text{Stock A} & 50 & 10 \\ \text{Stock B} & 30 & 100 \\ \end{array}$$

Let's first calculate the sum of the values of both constituent securities.

Stock A value = 50 × 10 = 500
Stock B value = 30 × 100 = 3,000
Stock A value + Stock B value = 3,500

The divisor must be set such that this figure is adjusted down to 1,000.

$$1,000 = \frac{ 3,500 } { D }$$

$$D = \frac{ 3,500 } { 1,000 }$$

$$D = 3.5$$

## Price Return and Total Return

The price return calculation – the return from the index in percentage terms – is simply the difference in value between the two periods divided by the beginning value.
$$PR_I = \frac{ V_{PRI1} - V_{PRI0} } { V_{PRI0} }$$

The formula for total return is the same, except we need to add the income generated from the securities, usually in the form of dividends:

$$TR_I = \frac{ V_{PRI1} - V_{PRI0} + \text{Income}_I } { V_{PRI0} }$$

Where:

PRI = the price return of the index portfolio
TRI = the total return of the index portfolio
VPRI1 = the value of the price return index at the end of the period
VPRI0 = the value of the price return index at the beginning of the period
IncomeI = the total income from all securities in the index over the period

Another way to calculate these returns would be to sum up the weighted returns of each constituent security in the index portfolio:

$$R_I = w_1R_1 + w_2R_2 + \dots + w_NR_N$$

Where:

RI = the return of the index portfolio (as a decimal number)
Ri = the return of constituent security i (as a decimal number)
wi = the weight of security i (the fraction of the index portfolio allocated to security i)

Note that this formula works for both price and total return calculations.

## Example 2

Calculate the one-year price return and total return for the Uncommon & Riches 5, a fictional index made up of five constituent securities. The divisor's value begins and ends the year at 1.

$$\begin{array}{l|r|r|r|r} \textbf{Constituent Security} & \textbf{Units (billions)} & \textbf{Beginning Value} & \textbf{Dividend} & \textbf{Ending Value} \\ \hline \text{Orange} & 5 & 107 & 2.15 & 116 \\ \text{Macrotough} & 7.75 & 55 & 1.20 & 62 \\ \text{Enout Stationary Corp} & 4 & 75 & 2.70 & 91 \\ \text{Draintree} & 0.5 & 660 & 0.00 & 750 \\ \text{Smith \& Smith} & 2.75 & 100 & 3.00 & 115 \\ \end{array}$$

Let's first calculate the beginning index value by multiplying the number of units by the price of each constituent security and totaling the values.
VPRI0 = (5 × 107) + (7.75 × 55) + (4 × 75) + (0.5 × 660) + (2.75 × 100)

VPRI0 = 535 + 426.25 + 300 + 330 + 275 = 1,866.25

We'll do the same calculation again, except replace the beginning values with ending values.

VPRI1 = (5 × 116) + (7.75 × 62) + (4 × 91) + (0.5 × 750) + (2.75 × 115)

VPRI1 = 580 + 480.50 + 364 + 375 + 316.25 = 2,115.75

And one more time to calculate portfolio income.

Income = (5 × 2.15) + (7.75 × 1.20) + (4 × 2.70) + (0.5 × 0) + (2.75 × 3)

Income = 10.75 + 9.30 + 10.80 + 0 + 8.25 = 39.10

The one-year price return for the Uncommon & Riches 5 comes out to:

(2,115.75 – 1,866.25)/1,866.25 = 13.37%

To calculate the total return, we'll add in the portfolio income:

(2,115.75 + 39.10 – 1,866.25)/1,866.25 = 15.46%

Calculate and interpret the value, price return, and total return of an index.
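The calculations in Example 2 can be checked with a few lines of Python; all values are taken from the table above, and the divisor is 1 at both dates.

```python
# Reproduces Example 2: index values, price return and total return
# for the five constituent securities of the Uncommon & Riches 5.
units     = [5, 7.75, 4, 0.5, 2.75]          # billions of units
begin_px  = [107, 55, 75, 660, 100]
end_px    = [116, 62, 91, 750, 115]
dividends = [2.15, 1.20, 2.70, 0.00, 3.00]
divisor   = 1.0

v0 = sum(n * p for n, p in zip(units, begin_px)) / divisor   # 1,866.25
v1 = sum(n * p for n, p in zip(units, end_px)) / divisor     # 2,115.75
income = sum(n * d for n, d in zip(units, dividends))        # 39.10

price_return = (v1 - v0) / v0                  # about 13.37%
total_return = (v1 + income - v0) / v0         # about 15.46%
```

The same script generalizes to any number of constituents, and changing `divisor` shows how the divisor rescales the index level without affecting either return.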
https://support.bioconductor.org/p/40369/
Natasha wrote:

Dear List,

Normally for Illumina arrays, instead of the functions given in the limma user guide (e.g. neqc, read.ilmn etc.), I use:

* read.delim - to load the probe profile data and sample table control data respectively
* perform bg correction using the negative control probes from the sample table control
* filter data based on "detection scores"
* normalise data using the "vsn2" function

However, as I have just realised that these can be used, I have some queries:

1. Will there be much difference between the quantile normalisation in the neqc function (as compared to vsn2)?

2. How does one interpret the boxplots for the various controls (apart from x$genes$Status=="regular")? The median/mean vary a lot, much more for my samples than for the example shown in the user guide.

3. When filtering, based on the help of read.ilmn: the "Detection" column appears to be the detection p-value by default. What does one do if the output is different from GenomeStudio and it gives a "Detection Score" instead? Would expressed <- apply(y$other$Detection < 0.05,1,any) change to expressed <- apply(y$other$Detection > 0.95,1,any)?

4.
Also, I do not fully understand the estimation of the proportion of expressed probes using the propexpr function. One of my samples, A7, shows 0.0 (I see that the housekeeping gene intensity for this sample is ~200 whereas for others it is 1000+); it is a similar case for samples A11 and A12.

propexpr(x)
       A1        A2        A7        A8        A3        A4       A11       A12
0.3380243 0.4066500 0.0000000 0.4232871 0.3131936 0.3819055 0.1934197 0.2036340
       A5        A6        A9       A10
0.3363844 0.3476216 0.3445201 0.3834617

sessionInfo()
R version 2.13.0 (2011-04-13)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_GB.UTF-8        LC_COLLATE=en_GB.UTF-8
 [5] LC_MONETARY=C              LC_MESSAGES=en_GB.UTF-8
 [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] gdata_2.8.2 limma_3.8.2

loaded via a namespace (and not attached):
[1] gtools_2.6.2 tools_2.13.0

Many Thanks,
Natasha

Gordon Smyth (WEHI, Melbourne, Australia) replied:

Hi Natasha,

> 1. Will there be much difference between the quantile normalisation in the neqc function (as compared to vsn2)?
The neqc() strategy is different from that of vsn, not only in terms of normalization, but also in terms of background correction and variance stabilization. There are some parallels, however, in the mathematical theory between normexp background correction and the vsn transformation. How different the practical results will be, though, I don't know. We compared neqc() to vst and other strategies that have been proposed for Illumina BeadChip data in the literature, but vsn wasn't one of those.

> 2. How does one interpret the boxplots for the various controls (apart from x$genes$Status=="regular")? The median/mean vary a lot, much more for my samples than for the example shown in the user guide.

This is a property of your data. If the boxplots vary a lot, then there must be a lot of variability in your data.

> 3. If GenomeStudio gives a "Detection Score" instead of a detection p-value, would expressed <- apply(y$other$Detection < 0.05,1,any) change to expressed <- apply(y$other$Detection > 0.95,1,any)?

Yes.

> 4. Also, I do not fully understand the estimation of probes expressed using the propexpr function; one of my samples, A7, shows 0.0.

This seems to flag a possible problem with your sample A7. The regular probes (the majority of them anyway) are no brighter than background probes. This could suggest a problem with the RNA extraction, for example, in this case.
The proportion of expressed probes might not be truly zero, but the spread of intensities must be different from that usually seen for a good quality array.

Best wishes,
Gordon

Wei replied:

Hi Natasha,

Just adding to Gordon's reply: the "detection" columns in the read.ilmn output are always the same as those in the GenomeStudio/BeadStudio output. The read.ilmn function does not change the original detection p-values or detection scores.

Cheers,
Wei
http://math.stackexchange.com/questions/184504/approximation-using-legendre-polynomials
# Approximation using Legendre polynomials

My aim is to fit data points using Legendre polynomials. Has anybody experience with this task? My final aim is to do this automatically with Mathematica. Thanks, rainer

- "My final aim is to do this automatically with mathematica" - Look up Fit[] – Guess who it is. Aug 20 '12 at 8:00

## 1 Answer

Is it enough to use Legendre polynomials as the basis within the Fit[] function?

Fit[data, {LegendreP[1, x], LegendreP[2, x], LegendreP[3, x]}, x]

I would be glad about an explanation of how it actually works. Thx, rainer

- You've forgotten the constant term: Fit[data, {1, LegendreP[1, x], LegendreP[2, x], LegendreP[3, x]}, x] – Guess who it is. Aug 21 '12 at 0:51
- The problem is that, for an arbitrary set of data points, this produces exactly the same solution as Fit[data, {1, x, x^2, x^3}, x]. So I do not understand what the special thing about Legendre approximation is. – rainer Aug 21 '12 at 5:51
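[Editor's note: rainer's follow-up observation is expected. {1, P1(x), P2(x), P3(x)} and {1, x, x^2, x^3} span the same space of cubic polynomials, so least squares returns the same fitted curve in either basis; the Legendre basis pays off in conditioning and coefficient interpretation, not in the fit itself. A NumPy sketch (swapping Mathematica for Python) confirming this:]

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 25)
y = 0.5 - x + 2 * x**3 + 0.05 * rng.standard_normal(x.size)

# Degree-3 least-squares fits in the Legendre and in the monomial basis.
leg = np.polynomial.Legendre.fit(x, y, 3)
mono = np.polynomial.Polynomial.fit(x, y, 3)

# Both bases span the same space of cubics, so the fitted values agree;
# only the coefficients differ between the two representations.
print(np.allclose(leg(x), mono(x)))  # True
```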
https://mersenneforum.org/showthread.php?s=653531858a1a67cf95a1fcb99cc6676f&p=512987
mersenneforum.org - Multiplication method for factoring natural numbers

2019-04-07, 16:35 #34 ThiloHarich:
Sorry. The sum of all divisors - not only the prime divisors I thought of - can be higher than the number itself. So I misinterpreted your model. So yes, abundant numbers (https://en.wikipedia.org/wiki/Abundant_number) are good candidates. But, for example, 105 and 15 are no such numbers, yet are good 'k'.
Last fiddled with by ThiloHarich on 2019-04-07 at 17:01

2019-04-07, 17:03 #35 nesio:
Quote (ThiloHarich): "Sorry. The sum of all divisors - not the prime divisors I thought of - can be higher than the number itself. [...]"
Sum means here the amount, quantity.
Last fiddled with by nesio on 2019-04-07 at 17:11. Reason: grammar correction

2019-04-07, 18:16 #36 Till ("Tilman Neumann"):
Quote (nesio): "Sum means here the amount, quantity."
We have some language problems here. Just to clarify: 12 has divisors 1, 2, 3, 4, 6, 12. Is 6 the result you mean, or is it 28?
Quote (ThiloHarich): "why should my model be very rough?"
The rough model is the one he proposed, not yours:
Quote (nesio): "You can check this rough (very rough) model of m on your source data of n: maximum(S/(m^(1/3))) and m < n^(1/3), where S is the sum of all divisors of m. Here S and m^(1/3) have equal weights in the maximum function. What will be?"
I interpret this as follows: the quality of a multiplier m is about S/(m^(1/3)), where we consider m < n^(1/3) only. S might be a) the sum of divisors of m, or b) the number of divisors of m. My first question in this post aims to clarify which of the two options is the correct one.
nesio, do I get that right so far?
Last fiddled with by Till on 2019-04-07 at 18:19. Reason: make clear that only one option for S is correct

2019-04-07, 18:31 #37 nesio:
A rough model of m: find the maximum of S/(m^(1/3)) over m < n^(1/3), where S is the number of all divisors of m (excluding 1 and m). Here S and m^(1/3) have equal weights in the maximized function.

2019-04-07, 18:43 #38 Till:
Quote (nesio): "S - the number of all divisors of m (excluding 1 and m)."
Ok, now we understand that one.
Quote (nesio): "A rough model of m: Find the maximum of (S/(m^(1/3))) when m < n^(1/3) [...]"
I'd say this is not a model of m, but a procedure to choose the best m. A model would give a kind of score to any m (at least).
Quote (nesio): "What will be?"
A simple weighting function giving a "plus" for smooth m and a "minus" for big m. This is no rocket science. We tested dozens of ways to arrange k's according to such principles, and many other constellations...

2019-04-07, 19:04 #39 nesio:
Is it working on your data and what are the results?
Last fiddled with by nesio on 2019-04-07 at 19:11. Reason: lang

2019-04-07, 19:39 #40 Till:
Unfortunately, none of our attempts to sort k's by such measures worked well. I believe that you did not try an efficient implementation yet. Theory (i.e. computational complexity, of which counting iterations is a simplified version) and reality (measured speed) are quite different for "small" factoring algorithms. After all, this is the only reason why the Lehman and Hart algorithms have a right to exist. Otherwise NFS would wipe out all other algorithms at any number scale.
If you try to optimize Hart/Lehman for small n, say n <= 60 bits, then you will realize that instructions that are very fast at that number scale have a big or even decisive impact on the overall performance. Complexity theory is often not strong enough to capture such influences. Storing the sqrts of k or of k*m is such a point. Without it, you will not be able to get a really fast Hart/Lehman implementation. Using congruences is a theoretical improvement, but also necessary. You can disprove my assumption that you did not try an efficient implementation yet by posting a fast piece of software ;-)

2019-04-07, 21:29 #41 nesio:
For example, such a rough model (procedure) gives these values of m from n:

Code:
n                m
555              6
5555             12
55555            36
555555           60
5555555          120
55555555         360
555555555        720
5555555555       1680
55555555555      2520
555555555555     5040
5555555555555    5040
55555555555555   5040
555555555555555  5040

It is a light and quite workable way to choose m in the "multiplication method", we hope.

Till! Our work is research and a paper about the Recursive Multiplication algorithm and its relation to the Simple Multiplication algorithm. Your work is improving the Lehman and Hart algorithms. You asked us about our vision regarding m, so we wrote it. Does something not suit you in our discussion? Let's not continue to cross the red and wet ;-) We think you and Thilo should write a paper about your work; it may be interesting beyond this forum.
Last fiddled with by nesio on 2019-04-07 at 21:30. Reason: lang

2019-04-08, 06:18 #42 Till:
Hi nesio, sorry if I have been impatient. I do not understand Russian, so I cannot get all the information from your paper. And it took some time to get some interesting information from you. I will try to implement your particular proposition of sorting m by

Code:
score(m) = S(m) / m^(1/3), with S(m) = the number of divisors of m, excluding 1 and m

this evening. Maybe we made an error so far in our evaluations of the smoothness of m's.
We will see. You said that the computation of score(m) above is a very rough model. What would a more elaborate model look like?

2019-04-08, 06:39 #43 chalsall ("Chris Halsall"):
Quote (Till): "What would a more elaborate model look like?"
Just to share, it's been wonderful watching you two work with each other. Inspiring!

2019-04-08, 09:27 #44 ThiloHarich:
- Choosing the right m depends on the implementation. Till and I used some adjustments/improvements for odd multipliers m -> in our algorithm the good multipliers are not only even; the best multipliers are odd. The numbers from nesio's model - 6, 12, 36, 60, 120, 360, 720, 1680, ... - do not appear at the top of the list of good multipliers for this tuned algorithm.
- The number of divisors is the product of the exponents + 1 of the prime factors. If the exponents are all 1, then this is just 2^(number of prime factors of m). Having a factor with an exponent greater than one reduces the number of divisors -> it looks like a good measure for good multipliers. But our best multiplier 315 = 3^2 * 5 * 7 has an exponent of 2.
When finding the best multipliers for numbers starting at 2^54 (and only factoring numbers with factors above n^(1/3)), I get a completely different list than when factoring numbers starting at 2^55.
Last fiddled with by ThiloHarich on 2019-04-08 at 09:33
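[Editor's note: nesio's rough model from post #37 is easy to reproduce. The Python sketch below (the thread gives no code, so the function names are mine) picks, for each n, the m < n^(1/3) that maximizes S(m)/m^(1/3), with S(m) the number of divisors of m excluding 1 and m. It matches the first rows of the table in post #41.]

```python
def divisor_count(m):
    """S(m): the number of divisors of m, excluding 1 and m itself."""
    return sum(1 for d in range(2, m) if m % d == 0)

def best_multiplier(n):
    """nesio's rough model: among m < n^(1/3), maximize S(m) / m^(1/3)."""
    best_m, best_score = None, float("-inf")
    m = 2
    while m * m * m < n:  # enforce m < n^(1/3) exactly, in integers
        score = divisor_count(m) / m ** (1 / 3)
        if score > best_score:
            best_m, best_score = m, score
        m += 1
    return best_m

for n in (555, 5555, 55555, 555555):
    print(n, best_multiplier(n))  # 6, 12, 36, 60 as in post #41's table
```

On ties the loop keeps the smaller m, which is what the table shows (e.g. 60 rather than 72 for n = 555555, both having ten proper divisors).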
https://stacks.math.columbia.edu/tag/0FSP
Definition 24.25.2. Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let $(\mathcal{A}, \text{d})$ be a sheaf of differential graded algebras on $(\mathcal{C}, \mathcal{O})$. A differential graded $\mathcal{A}$-module $\mathcal{I}$ is said to be graded injective [1] if $\mathcal{I}$ viewed as a graded $\mathcal{A}$-module is an injective object of the category $\textit{Mod}(\mathcal{A})$ of graded $\mathcal{A}$-modules.

[1] This may be nonstandard terminology.
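[Editor's note: for context, the standard characterization of injective objects, stated here for $\textit{Mod}(\mathcal{A})$; this unwinding is not spelled out on the page itself. The point is that the condition is imposed only on the underlying graded modules, with no compatibility with the differential $\text{d}$ required.]

```latex
% I is graded injective if and only if Hom(-, I) turns injections of
% graded A-modules into surjections:
\[
\mathcal{M} \hookrightarrow \mathcal{N}
\ \text{in}\ \textit{Mod}(\mathcal{A})
\quad\Longrightarrow\quad
\mathop{\mathrm{Hom}}\nolimits_{\textit{Mod}(\mathcal{A})}(\mathcal{N}, \mathcal{I})
\longrightarrow
\mathop{\mathrm{Hom}}\nolimits_{\textit{Mod}(\mathcal{A})}(\mathcal{M}, \mathcal{I})
\ \text{is surjective.}
\]
```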
https://jcom.sissa.it/archive/21/02/JCOM_2102_2022_Y03
# Participatory science communication for transformation in Colombia

### Abstract

This essay approaches the question: ‘What does participatory science communication for transformation mean in Colombia?’ The answer comes from an examination of the public policy instruments that have promoted participatory science communication through the concept of social appropriation of science, technology, and innovation (STI). Through the lens of these public policy instruments, it is evident how the social appropriation of STI has been intended as a means of transformation.

Received: 13 October 2021. Accepted: 17 February 2022. Published: 28 March 2022.

To answer the question about what participatory science communication means for transformation in Colombia, we have looked at some aspects of the public policy instruments where this concept is relevant. The sources for these answers were the doctoral research on “Practices for Public Communication of Science and Technology in science museums” and the work carried out independently by the authors as consultants in state organizations.

### 1 The concept of appropriation and its relationship with participation in science communication

The state of the art of science communication in Latin America [Massarani, 2018] identifies and highlights the concepts or expressions used in the region to talk about science communication. These concepts are Scientific Literacy, Social Appropriation of Science and Technology, Social Appropriation of Knowledge, Public Understanding of Science, Scientific Dissemination, Popularization of Science and Promotion of Scientific Culture; they can be found in the public policies of Latin American countries in reference to science communication, the development of scientific culture and, more recently, the public communication of science and technology.
The report on science education in Latin America from CILAC (Open Forum of Science of Latin America and the Caribbean) [Macedo, 2016] emphasizes the need for the general public and, especially, children and young people to become a generation with a higher degree of scientific culture, able to face the challenges of climate and environmental change. In Latin America this preparation requires enormous work with communities and, because of this, science communication must be tied to actions that allow citizens to receive, understand and appropriate scientific knowledge.

Therefore, in Colombia participatory science communication has been linked to the concept of Social Appropriation of Science and Technology (SAS&T) in academic work, in research, and in the development of public policy instruments. The SAS&T concept first appeared in a national document in 2005 [Colciencias, 2005; Colciencias, 2010; Colciencias, 2016; Arboleda Castrillón, 2007; Arboleda Castrillón and Daza-Caicedo, 2016] and has been maintained in subsequent national policy documents, as well as in local research. The document proposed the SAS&T concept as a strategy for encouraging citizens to take an interest in the knowledge produced in the country, to validate it and to use it in solving specific problems in their communities [Daza-Caicedo et al., 2017; Colciencias, 2016; Chingaté-Hernández and Molano, 2016].

After the appearance of the SAS&T concept came Law 1286 of 2009 (Ley 1286 de 2009, known as the Science and Technology Law), which transformed COLCIENCIAS, the office in charge of science and technology activities, into the Administrative Department of Science, Technology and Innovation, in charge of defining the policies, strategies and specific actions related to the generation and transfer of scientific knowledge.
Article 2 of this Law, dedicated to its objectives, includes “Strengthening a culture based on the generation, appropriation and dissemination of knowledge and scientific research” (Law 1286/2009, of January 23), while Article 7 defines as one of its functions the generation of strategies for the social appropriation of science, technology, and innovation. In 2016, Decree 849 modified the structure of this Administrative Department and created a unit in charge of the development of “Mentality and Culture for Science, Technology and Innovation” (in Spanish: Unidad para la mentalidad y cultura para la ciencia, la tecnología y la innovación). The functions of this unit include the public communication of science through the generation of social mobilization and communication processes that can promote the appropriation of science, citizen participation and scientific culture.

In 2018, Resolution 674 updated the Policy on Science, Technology, and Innovation for Sustainable Development with the Green Book [Colciencias, 2018]. This policy document introduces the transformative policy approach and the sustainable development goals. The document puts forward participation as one of its objectives and principles, defining it as the dialogue between all actors involved in the generation, use, access, and innovation of knowledge.

Finally, Law 1951 of 2019 (Ley 1951 de 2019) creates the Ministry of Science, Technology and Innovation, scheduling its operation for the year 2020 and ratifying previous advances related to communication and scientific culture. The section on general objectives of Article 2 proposes to “Establish transfer strategies and Social appropriation of Science, Technology, Innovation and Entrepreneurship for the consolidation of a Knowledge-based Society” [p. 1].
In the same section of general objectives of Article 2, it indicates to “Strengthen a culture based on the generation, appropriation and dissemination of knowledge and scientific research, technological development, innovation and lifelong learning” [p. 2].

Arboleda Castrillón and Daza-Caicedo [2016], Avellaneda [2016], Pérez-Bustos et al. [2012] and Hermelin [2018], Colombian researchers who have reviewed the initiatives of social appropriation of science and technology in Colombia, point out that these initiatives have been created, executed and led by the state with the participation of the universities and that, since 2013, strategies that directly involve the community have also been included. Falla et al. [2015] consider that SAS&T practices are mediation processes in which all actors would have the same possibility of participation and where the production and reception of content is mediated by culture. Arboleda Castrillón and Daza-Caicedo [2016] describe part of the evolution of SAS&T in Colombia between 2007 and 2016 by looking at the activities that seek to communicate, share, transfer and exchange knowledge and information about SAS&T. They show the coexistence of the concepts of science communication and social appropriation of science, and the challenges this brings up for communication.

Daza-Caicedo et al. [2017] affirm that the progress of the implementation of the SAS&T National Strategy of 2010 is especially visible in the interaction of the actors. They highlight that the science, technology, and society communication line of the strategy has made evident the need for the participation of more researchers in the area, not only to develop theoretical reflections, but also to support the communicative actions of the strategy. They also emphasize the need to deal in more detail with the components of the communication actions of the national strategies.
They refer to the targeted actors, channels, intentions, characteristics of the audiences and, in general, to the communication strategies that may be more pertinent according to the needs of the communities and the scientific production of the country. Avellaneda [2016] focuses on the importance of dialogue between actors related to SAS&T in institutional settings that can legitimize knowledge and work at the border between science and society, including museums and science centers. The study written by Daza-Caicedo et al. [2017] evidences the need to formulate indicators that allow establishing a measurement of the impact of SAS&T practices, and proposes the development of initial qualitative indicators that consider: interest and learning in science and technology, the inclusion of groups in vulnerable situations, the exchange and co-production of knowledge, the incentive for scientific vocations, informed decision-making, and the development of capacities for SAS&T. ### 2 The policy of social appropriation of knowledge as a mechanism for the transformation of communities and other government initiatives One of the milestones of appropriation as a mechanism for participation and transformation is the publication of the National Strategy for Social Appropriation of Science, Technology and Innovation, carried out by the Administrative Department of Science, Technology and Innovation — COLCIENCIAS in the year 2010 [Colciencias, 2010]. This document presents the state’s view on communication and appropriation of science and the potential it has for social transformation. This national strategy had a direct impact on both public investment and the practices of the actors of the country’s National Science, Technology and Innovation System. 
COLCIENCIAS proposed the following definition for the Social Appropriation of science, technology, and innovation: "process of understanding and intervention of the relations between technoscience and society, built from the active participation of the various social groups that generate knowledge" [Colciencias, 2010]. The participatory component is identified as the driving force behind the strategy for the design of interventions and public financing. The Strategy for the Social Appropriation of Science, Technology and Innovation was structured in four lines of action as follows: • Citizen participation in public policies of science, technology, and innovation • Communication with a focus on the relationships between science, technology, and society • Exchange and transfer of knowledge • Knowledge management for the social appropriation of science, technology, and innovation These lines point out transversal components which are related to a critical and reflective approach about science; the interaction between science experts and other actors outside of scientific practice promotes collaborative work to foster the relationship between scientific knowledge and other forms of social knowledge (knowledge developed by farmer communities, for example coffee growers). The implementation of the Strategy during 2010–2020 was led by COLCIENCIAS as a national government actor. Resulting from the implementation of this strategy, programs and projects were designed to facilitate participatory communication of science, thus overcoming existing models of one-way communication. Examples of this new approach are Ideas for Change, A Ciencia Cierta (To Certain Science) and Todo es Ciencia (Everything is Science). In particular, the Ideas for Change [MinCiencias, 2021b] program was developed under the line of Knowledge Exchange and Transfer and evolved to be one of the programs with the highest effect on generating transformations in communities in conditions of vulnerability. 
Official documents describe the program with the following objective: “to promote experimental and focused spaces for the social appropriation of Science and Technology, based on challenges and problems that are addressed through collaborative work between experts in science, technology, and innovation, and communities or grassroots organizations, for the generation of innovative solutions based on scientific-technological knowledge, with the capacity to transform and improve the quality of life of citizens” [MinCiencias, 2021b]. Currently ongoing, the program has addressed issues such as access to drinking water, renewable energy, peace, and climate change, among others. It has received recognition from the OECD and the Inter-American Development Bank for its innovative methodologies in the process of participation and transformation of conditions, where the scientific communication, social appropriation of science’s strategies and local knowledge exchange are key instruments to foster community empowerment and design solutions where needed. The differential factor of Ideas for Change is challenging researchers and scientists to work as a team with community organizations to solve local problems, through building a relationship that is based on collective well-being. In this scheme, relationships are based on respect between peers; academic titles do not grant authority but trust. Implementation of this program resulted, among others, in plans for the recovery and sustainable use of ecosystems, energy systems with renewable sources, sustainable local production processes. All of these were collaboratively designed and not only solved a specific problem in the community, but also led to a resignification of scientific knowledge and empowerment of the local people who recognize themselves as managers of their own development. 
The A Ciencia Cierta [MinCiencias, 2021a] program starts from a community experience; that is, from a process that has been developed within the community itself to solve a problem from a resilient perspective. When the project is advanced, a volunteer scientist joins to collaborate and contribute to the process with scientific knowledge. MinCiencias describes it as follows: "Program for the social appropriation of knowledge, which identifies, makes visible, recognizes and strengthens experiences developed by community organizations on issues of Science, Technology and Innovation, which provides a solution to a specific problem and that can be shared with other Colombians for the benefit of their social groups, based on citizen participation and dialogue of knowledge” [MinCiencias, 2021a].

In this case, the communication of the scientist is a key point. The scientist participates in the process by invitation, as a guide who works voluntarily and contributes to the progress of the community. The program, under the figure of scientific volunteering, seeks to establish closer, emotional relationships that can last over time as direct science communication relationships which contribute to social transformation. A Ciencia Cierta has managed, within its experimental process, to strengthen more than sixty community organizations and impact 20,000 people. In terms of scientific volunteering, approximately 200 scientists have been tutors of these community organizations.

Several projects of these two programs have become case studies. As an example, from the Ideas for Change program there is the case of the construction of an eco-touristic path for the sustainable harnessing of the mangrove and the dry tropical forest in Isla Grande (Bolivar), with the participation of the Afro Colombian Community Council of Islas del Rosario, who worked together with the Cartagena University's Group of Medicines Chemistry Investigation.
From the A Ciencia Cierta program, there is the experience of the community organization APISRED (a beekeeping network), which produces and processes bee honey and which strengthened its productive process and integrated other community organizations in scaling up production through the social appropriation of scientific knowledge.

Todo es Ciencia is the transmedia strategy of the Ministry of Science responsible for mass science communication aimed at the public, to transform their perception of and relationship with science. This strategy generates multiplatform content for various audiences, involving television programs, documentaries, capsules, podcasts, and face-to-face interventions, which circulate through analog and digital channels. Unlike the other programs, Todo es Ciencia seeks to present science as an opportunity to rethink things and, with the purpose of reaching large audiences, the strategy relies on narratives of everyday life so that people can easily identify with the content being communicated.

The most relevant milestone in this sequence is the publication of the National Policy for the Social Appropriation of Knowledge [MinCiencias, 2021c], with a defined scope until 2030. With this policy, the government hopes to achieve a higher level of impact and transformation in the most vulnerable communities, as well as to approach concepts such as transformative innovation, open science, and citizen science. Although this document could be interpreted as an evolved version of the National Strategy for SAS&T of 2010, some differential factors can be identified, such as:

1. The objective is that the processes go beyond consultation or dialogue to impact transformation processes of local contexts to address global problems.

2. The identification of building trust as a fundamental element in the relationships that mediate communication and appropriation of science, leading to the discussion of ethics within participatory processes in science.

3.
A wider view of the territory, which provides a broader panorama of the actors that interact there, and of the need to interpret the context when addressing problems, giving the opportunity to appropriate knowledge in order to transform.

The challenge for this policy lies in the ability of the recently created Ministry of Science to assign the financial resources to implement it, to bring together scientific and non-scientific actors in permanent dialogue, and to direct science communication towards generating changes that can improve welfare conditions and, when needed, transform the relationship between actors. On the other hand, the development of different public policy instruments has given a guideline to science, technology, and innovation actors to transform their practices and to open spaces for the participation of citizens as subjects who carry knowledge, with the ability to decide and influence scientific practice, making science an instrument for solving real problems. In this sense, we find points of intersection with the conceptual views of social innovation and citizen science, both of which have an important level of maturity in Colombia.

### 3 Science centers as teachers of participation and social transformation through science communication

The relevance of science centers in science communication in Colombia is recent. The SAS&T National Strategy reported that 69% of the actions for science communication between 1995 and 2004 were carried out in science centers [Colciencias, 2010], but it took almost two decades for them to become relevant to science communication policy. The National Policy of Actors of the National System of Science and Technology was promoted in 2016, declaring science centers to be organizations that facilitate the use and appropriation of science, technology, and innovation in the community [Colciencias, 2016].
Authors such as Bandelli and Konijn [2013, 2021], Barriault [2016], and Bruyas and Riccio [2013] have studied the trajectory of science centers as platforms to promote sustainable science, technology, and innovation. Bandelli and Konijn [2021] highlight that their defining feature is direct contact and intervention within the community, with a very wide range of impact based on the number of visitors and on the ways of intervening outside the museum and through its digital developments. In Colombia, science centers have addressed very specific objectives in the national strategy to communicate science to citizens; they develop methodologies with scientific, museological, educational and communicative practices, and are translators and mediators of science in society. The National Policy for Actors defines science centers as “Institutions of a public, private or mixed nature, non-profit, with legal recognition, with a physical plant permanently open to the public and that have SAS&T at the core of their mission or corporate purpose” [Colciencias, 2016, p. 19]. Science centers must recognize the cultural, economic, and social diversity of the communities, promote the principles of democratic access to information and knowledge, and contribute to strengthening the scientific culture in the country through educational programs and activities. The policy considers the following types of science centers: biospaces, such as aquariums, botanical gardens, and zoos; spaces for exact, physical, social and technological sciences, such as science and technology museums, exact science museums, planetariums and observatories; citizen construction spaces in science, technology and innovation, such as maker spaces and citizen workshops; and mixed spaces, such as interactive centers, natural history museums, and theme parks.
Some of these are Maloka, located in Bogotá (founded in 1998); the Imagenia Interactive Center, located in Barranquilla (founded in 2000); the Interactive Museum of Science and Games — SAMOGA, of the National University of Colombia (founded in 2001); the Museum of Marine Natural History of Colombia INVEMAR, located in Santa Marta (founded in 2001); the Abrakadabra Museum of the Valle del Cauca Departmental Library, located in Cali (founded in 2002); and the Parque Explora, located in Medellín (founded in 2008), which has established itself as the most recognized interactive science museum in the country and the region, and has acted as a model in the design and development of strategies for scientific communication and social appropriation of knowledge. The policy recognizes other actors of the national science and technology system such as research centers, technological development centers, R&D units, research results transfer offices, and science, technology and innovation parks, all of them with a very relevant component of scientific communication in their mission and objectives [Colciencias, 2016]. Since the policy promotes the creation of these types of "actors" at the national level, local capacities related to scientific communication and participatory work with communities have increased. Following the principles of total museology and social museology [Wagensberg, 2001; Wagensberg, 2005; Castellanos Pineda, 2008], science centers have increasingly included spaces for dialogue, collaboration, and co-creation with the community. Science centers such as Parque Explora in Medellín or Maloka in Bogotá develop diverse scientific communication activities where different actors from the community are involved, such as scientists, educators, mediators, students of primary and secondary education, parents, and vulnerable populations. 
As the first result of this public policy instrument, science centers have become relevant national actors in the planning and development of projects for citizen participation in science, the exchange of knowledge between communities, and support for other projects related to scientific communication. Science centers present a continuous scenario where the public can advance between accessing science, dialoguing with science, and even participating in its creation together with other citizens who may or may not be scientists. This type of neutral scenario has allowed the development of the social dimension of scientific communication, not only in the sense of responsibility toward those who have fewer possibilities of access to scientific culture, but also in the sense of generating a scientific communication that is fun and able to interest everyone. As a second result, as mentioned before, the capacities for scientific communication have increased in science centers and, therefore, also in other spaces. Previously, the development of research on scientific communication and the social appropriation of science and technology allowed the consolidation of a large group of national researchers. The work of science centers and the development of projects for new science centers have increased the number of professionals from various disciplines working in scientific communication and the social appropriation of knowledge. The science centers have developed, among others, their content, education, mediation, and innovation departments, leading social appropriation projects with an impact on national communities. As a third result, the movement of science centers has contributed to the development of other spaces for scientific communication, the social appropriation of knowledge, and the participation of citizens.
Libraries, community centers, art museums and other spaces such as houses of culture have begun to develop specific programs dedicated to the communication of science and to the appropriation of, and participation of the community in, activities related to science. Some of these programs have even begun to build maker or collaborative creation spaces that invite the public to participate. These initiatives are recent, so there are not yet reports that can account for their impact in the country, but they will surely become the object of study and measurement of national scientific communication in cultural spaces.

### 4 The new mechanisms and actors for the transformation of communities with the communication of science

In 2012, Article 361 of the Political Constitution of Colombia and Law 1530 (Ley 1530 de 2012)3 were modified to allocate 10% of the country’s total royalties to investment in science and technology. The royalties come from the payment made by multinational mining and oil companies to Colombia for the exploitation of natural resources. This change in public investment in Science, Technology and Innovation in Colombia made it possible to design and implement macro-projects with direct impact on the territory. The Social Appropriation of Science and, therefore, scientific communication have been among the priority actions, with the aim of generating social transformations from the communication and implementation of science in the territories. In October 2015, COLCIENCIAS published the Sector Guide for Science, Technology and Innovation projects, a document that categorizes the sector and provides a reference on the types of programs and projects that can be financed with resources from royalties. The document dedicated a chapter to SAS&T, giving greater scope to the participatory approach and to the creation and strengthening of science centers.
According to the Observatorio Colombiano de Ciencia y Tecnología OCyT [2020], the General Royalties System invested $676,867,092,891 COP (approximately US$ 180,000,000) between 2012 and 2018 in SAS&T projects. This is an unprecedented amount for Colombia in terms of public investment that recognizes the importance of communication and the appropriation of science for solving territorial and national problems. Subsequently, Law 2056 of 2020 (Ley 2056 de 2020)4 modified the operation of the General Royalties System, updating the process to access royalty resources. With this new Law, territorial governments can establish alliances or consortia with public or private organizations in the educational, productive or citizen sectors. This strategy was designed to structure and manage public, open, and competitive calls for the definition of eligible investment projects in science, technology, and innovation, generating changes in the actors involved in these projects and, consequently, changes in the actors related to scientific communication and the social appropriation of knowledge. The change in the legal framework of the system opened an opportunity for the intervention of various types of actors in the development of megaprojects of scientific communication and social appropriation of knowledge. The new projects involve at least three levels of actors linked to scientific communication and the social appropriation of knowledge. Decision makers, responsible for formulating territorial strategies, and leaders of projects related to scientific communication are at the first level. Managers, developers of content and format, communicators, and producers of strategies and specific activities related to scientific communication can be found at the second level. The third level is for those who have direct contact with the community, such as technical experts, scientific communicators, educators, and mediators.
To conclude, the evolution of public policy instruments and strategies with Colombian communities has shown the importance of developing a participatory science communication that allows transformation. At the same time, participatory science communication requires the development of specific capacities that allow improving the relationship between citizens and science.

### References

Arboleda Castrillón, T. (2007). ‘Comunicación pública de la ciencia y cultura científica en Colombia’. Revista Colombiana de Sociología 29, pp. 68–78. URL: https://revistas.unal.edu.co/index.php/recs/article/view/8013.

Arboleda Castrillón, T. and Daza-Caicedo, S. (2016). ‘Cuando la apropiación social de la ciencia y tecnología es objeto de «gestión». Una reflexión desde el caso colombiano’. trilogía Ciencia Tecnología Sociedad 8 (15), pp. 81–95. https://doi.org/10.22430/21457778.401.

Avellaneda, M. F. (2016). ‘Trasferencia e intercambio: cuando el rio suena… reflexiones para pensar el rumbo de la política de apropiación del conocimiento en Colombia’. trilogía Ciencia Tecnología Sociedad 8 (15), p. 69. https://doi.org/10.22430/21457778.411.

Bandelli, A. and Konijn, E. A. (2013). ‘Science centers and public participation: methods, strategies and barriers’. Science Communication 35 (4), pp. 419–448. https://doi.org/10.1177/1075547012458910.

— (2021). ‘Museums as brokers of participation: how visitors view the emerging role of European science centres and museums in policy’. Science Museum Group Journal 3 (3). https://doi.org/10.15180/150306.

Barriault, C. (March 2016). ‘Visitor Engagement and Learning Behaviour in Science Centres, Zoos and Aquaria’. Philosophy Doctoral Thesis. Laurentian University, Canada. https://doi.org/10.13140/RG.2.1.2525.3523.

Bruyas, A.-M. and Riccio, M., eds. (2013). Science Centres and Science Events. A science communication handbook. Springer Milan. https://doi.org/10.1007/978-88-470-2556-1.

Castellanos Pineda, P. (2008).
Los museos de ciencias y el consumo cultural: una mirada desde la comunicación. Barcelona, Spain: Editorial UOC. URL: https://www.editorialuoc.cat/los-museos-de-ciencias-y-el-consumo-cultural_2. Chingaté-Hernández, N. and Molano, A. (2016). ‘Recomendaciones a la política y a la estrategia de Apropiación Social de Ciencia, Tecnología e Innovación (ASCTI) en Colombia. una mirada desde el V Foro Nacional ASCTI’. trilogía Ciencia Tecnología Sociedad 8 (15), pp. 43–56. https://doi.org/10.22430/21457778.410. Colciencias (2005). Política de Apropiación Social de La Ciencia, Tecnología y la innovación. — (2010). Estrategia nacional de apropiación social de la ciencia, la tecnología y la innovación. Bogotá, Colombia. URL: http://www.colciencias.gov.co/sites/default/files/ckeditor_files/estrategia-nacional-apropiacionsocial.pdf. — (December 2016). Actores del Sistema Nacional de Ciencia, Tecnología e Innovación: Documento N 1602. URL: http://repositorio.colciencias.gov.co/handle/11146/34016. — (July 2018). Política de Ciencia, Tecnología e Innovación para el Desarrollo Sostenible. Libro Verde 2030. URL: http://repositorio.colciencias.gov.co/handle/11146/33995. Daza-Caicedo, S., Maldonado, O., Arboleda-Castrillón, T., Falla, S., Moreno, P., Tafur-Sequera, M. and Papagayo, D. (2017). ‘Hacia la medición del impacto de las prácticas de apropiación social de la ciencia y la tecnología: propuesta de una batería de indicadores’. História, Ciências, Saúde-Manguinhos 24 (1), pp. 145–164. https://doi.org/10.1590/s0104-59702017000100004. Falla, S., Arboleda Castrillón, T., Daza-Caicedo, S., Tafur-Sequera, M., Maldonado, O., Moreno, P. J. and Papagayo, D. (2015). ‘Apropiación social de la ciencia y la tecnología, retos para el campo de la comunicación’. In: Congreso RedPop 2015: arte, ciencia y tecnología. Nuevas maneras de conocer, pp. 1–10. Hermelin, D. (2018). ‘Comunicación de la ciencia. Un aprendizaje desde y con las ciencias sociales y humanas’. 
InMediaciones de la Comunicación 13 (2), p. 51. https://doi.org/10.18861/ic.2018.13.2.2867. Macedo, B. (2016). Educación científica. Foro Abierto de Ciencias de América Latina y el Caribe CILAC. Massarani, L. (2018). ‘Estado del arte de la divulgación de la ciencia en América Latina’. JCOM — América Latina 01 (01), A01. https://doi.org/10.22323/3.01010201. MinCiencias (30th September 2021a). A Ciencia Cierta. URL: https://acienciacierta.minciencias.gov.co/. — (30th September 2021b). Ideas para el Cambio. URL: https://ideasparaelcambio.minciencias.gov.co/. — (March 2021c). Resolución 0643 de 2021. Política Pública de Apropiación Social del Conocimiento en el marco de la Ciencia, Tecnología e Innovación. Bogotá, Colombia. URL: https://minciencias.gov.co/sites/default/files/upload/reglamentacion/resolucion_0643-2021.pdf. Observatorio Colombiano de Ciencia y Tecnología OCyT (2020). Indicadores de Ciencia y Tecnología Colombia 2019. URL: https://ocyt.org.co/Informeindicadores2019/indicadores-2019.pdf. Pérez-Bustos, T., Avellaneda, M. F., Borda, M. L., Falla, S. and Papagayo, D. (2012). ‘Iniciativas de Apropiación Social de la Ciencia y la Tecnología en Colombia: tendencias y retos para una comprensión más amplia de estas dinámicas’. História, Ciências, Saúde-Manguinhos 19 (1), pp. 115–137. https://doi.org/10.1590/s0104-59702012000100007. Wagensberg, J. (2001). ‘A favor del conocimiento científico (los nuevos museos)’. Éndoxa: Serie filosóficas 1 (14), pp. 341–356. https://doi.org/10.5944/endoxa.14.2001.5031. — (2005). ‘The “total” museum, a tool for social change’. História, Ciências, Saúde-Manguinhos 12 (Suppl.), pp. 309–321. https://doi.org/10.1590/s0104-59702005000400015. ### Authors Mabel Ayure-Urrego. Head of the Science Policy and International Relations Unit, Colombian Observatory of Science and Technology. PhD in Communication with emphasis on practices for public communication of science and technology. 
Master in Digital Communication with experience in the design of digital strategies for education at the Colombian Minister of Education. Work experience in the private and public sector, in the formulation, design and implementation of projects bridging science, technology and innovation in interaction with education, science communication, science centers, science education policy and STEAM approach. E-mail: [email protected]. Ricardo Andrés Triana González. Sociologist from the National University of Colombia, specialist in Organizations, Social Responsibility and Development from the University of the Andes. More than fifteen years of experience in the management, direction and execution of Science, Technology and Innovation programs, with an emphasis on the appropriation, communication and dissemination of science, and education with a STEM focus. Expert evaluator of national and international calls and adviser of scientific policies and strategies for government entities, museums and science centers. E-mail: [email protected]. ### How to cite Ayure-Urrego, M. and Triana González, R. A. (2022). ‘Participatory science communication for transformation in Colombia’. JCOM 21 (02), Y03. https://doi.org/10.22323/2.21020403. ### Endnotes 1Ley 1286 de 2009. Por la cual se modifica la Ley 29 de 1990, se transforma a Colciencias en Departamento Administrativo, se fortalece el Sistema Nacional de Ciencia, Tecnología e Innovación en Colombia y se dictan otras disposiciones. 23 de enero de 2009. 2Ley 1951 de 2019. Por la cual crea el Ministerio de Ciencia, Tecnología e Innovación, se fortalece el sistema nacional de ciencia, tecnología e innovación y se dictan otras disposiciones. 24 de enero de 2019. 3Ley 1530 de 2012. Por la cual se regula la organización y el funcionamiento del Sistema General de Regalías. 17 de mayo de 2012. 4Ley 2056 de 2020. Por la cual se regula la organización y el funcionamiento del Sistema General de Regalías. 30 de septiembre de 2020.
# What can the word be…?

Can you find out the solution to this puzzle?

Explanation to second puzzle: Use an image editor to get the colors of the boxes in VXJrbnFycHZ6bnk=. If the colored box is in the first row, add the first and the last character of the color. If it’s in the second row, subtract them. The result will give you the n-th letter of the alphabet.

• Any other hints for row 2 ? what next white box pretend with displayed with 1,4 number with cross line? Do we need to remove 1st and 4th Box to solve that row? – coDe murDerer May 8 '18 at 6:38
• I made the hint clearer. – Lypyrhythm May 8 '18 at 13:40
• @Lypyrhythm btw you may want to use spoiler block instead of base64 for the hints if it's neat for you, nice puzzle tho :) – athin Jun 30 at 14:50
• @athin I think the explanation is necessary to solve the second puzzle, so there is no need for the spoiler. – Lypyrhythm Jun 30 at 15:08

For Row 2, the given hint is: Use an image editor to get the colors of the boxes in VXJrbnFycHZ6bnk=. If the colored box is in the first row, add the first and the last character of the color. If it’s in the second row, subtract them. The result will give you the nth letter of the alphabet. The meaningless part of that means: "Hexadecimal", after ROT-13 then base64 encoding.

Finding the information we need: The colours from left-to-right are #C48A96, #F7CE0A, #BEC6A7, #1DC8B3, #D50914, and #A088FA. Following the instructions, we get:

C+6 = 12+6 = 18 = R
F-A = 15-10 = 5 = E
B-7 = 11-7 = 4 = D
1+3 = 4 = D
D-4 = 13-4 = 9 = I
A+A = 10+10 = 20 = T

The colours clue 'REDDIT'. To finish off: The bottom of the image indicates that we remove the first and fourth letters, to get EDIT. Putting it all together with @Deusovi's information: They got it right with the (alternate) answer CREDITORS.

• Very good! Accepted it as the best answer.
– Lypyrhythm May 8 '18 at 16:05 • @Lypyrhythm Other than you finally spelling it out as a "hint", is there something in the puzzle that at all suggests at the way to decode the second part? If an added hint is effectively required for anyone not inside your head to solve (part of) a puzzle then it's not a "hint", it's an essential part of the puzzle--and in many cases is the only thing preventing your puzzle from being "guess what I want you to do here". That kind of information should be part of the puzzle from the start. – Rubio May 9 '18 at 7:24 • @Rubio I understand, I fixed it. – Lypyrhythm May 9 '18 at 9:56 • Totally agreed with @Rubio – coDe murDerer May 9 '18 at 10:44 First row: PODCAST - POD - AST ("branch" in German) + R (Morse) Second row: ? Third row: ORS (Braille) Based on this, the only reasonable option is CREATORS. (Or CREDITORS, I suppose, but that seems less likely.) • Looks good! Can you solve the second row? Hint: Concentrate on the boxes’ colors. – Lypyrhythm Apr 6 '18 at 17:56 ## First row The icon at the start is apparently for Apple Podcasts. The next image is of peas in a pod, so we remove that. The next image is a branch and a German flag; the word "branch" in German is "ast", so we remove that. The next image is R in Morse code, so we add that. Now we're left with CSR.
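The color arithmetic in the accepted answer can be replayed with a short script. The hex colors and the add/subtract pattern are taken from that answer; treating "first row" boxes as additions and "second row" boxes as subtractions is an assumption based on the hint.

```python
# Hex colors sampled from the puzzle image, per the accepted answer.
# "add" = box in the first row, "sub" = box in the second row (assumed layout).
colors = ["C48A96", "F7CE0A", "BEC6A7", "1DC8B3", "D50914", "A088FA"]
ops    = ["add",    "sub",    "sub",    "add",    "sub",    "add"]

word = ""
for color, op in zip(colors, ops):
    first, last = int(color[0], 16), int(color[-1], 16)
    n = first + last if op == "add" else first - last
    word += chr(ord("A") + n - 1)  # n-th letter of the alphabet

print(word)  # -> REDDIT
```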
# Miguel's Tangents: Modern Day Sangaku

### Problem

$ABC$ is an equilateral triangle, with the incircle $(O).\;$ Circles $(O_a),\;$ $(O_b),\;$ $(O_c)\;$ are inscribed into the angles at $A,B,C,\;$ respectively, tangent to $(O).\;$ From point $P\;$ on $(O)\;$ tangents of lengths $a,b,c\;$ are drawn to the circles $(O_a),\;$ $(O_b),\;$ $(O_c).\;$ Assuming that $c\ge a\;$ and $c\ge b,\;$ prove that $c=a+b.$

### Solution

Assume the incircle $(O)\;$ is defined by the equation $|z|=1\;$ and that the points of tangency are $D=u^2,\;$ $E=1,\;$ $F=u,\;$ where $\displaystyle u=-\frac{1}{2}+i\frac{\sqrt{3}}{2}.\;$ We get the system $\begin{cases} B+C=2u^2 &\\ A+B=2u &\\ C+A=2 & \end{cases}$ from which $A=-2u^2,\;$ $B=-2,\;$ and $C=-2u.\;$ In other words, $A=(1,\sqrt{3}),\;$ $B=(-2,0),\;$ $C=(1,-\sqrt{3}).$ Circles $(O_a),\;$ $(O_b),\;$ $(O_c)\;$ are known to have the radius $\frac{1}{3},\;$ so that $\displaystyle O_a=\left(\frac{2}{3},\frac{2}{\sqrt{3}}\right),\;$ $\displaystyle O_b=\left(-\frac{4}{3},0\right),\;$ and $\displaystyle O_c=\left(\frac{2}{3},-\frac{2}{\sqrt{3}}\right).\;$ We choose $P=(\cos t,\sin t),\;$ $\displaystyle\frac{\pi}{3}\le t\le \pi.$ Thus we may compute $\displaystyle a=\sqrt{O_aP^2-\frac{1}{9}}=\frac{4}{\sqrt{3}}\sin\left(\frac{t}{2}-\frac{\pi}{6}\right),\;$ $\displaystyle b=\frac{4}{\sqrt{3}}\cos\frac{t}{2},\;$ $\displaystyle c=\frac{4}{\sqrt{3}}\sin\left(\frac{t}{2}+\frac{\pi}{6}\right),\;$ and, since $\displaystyle \sin\left(\frac{t}{2}+\frac{\pi}{6}\right)-\sin\left(\frac{t}{2}-\frac{\pi}{6}\right)=\cos\frac{t}{2},\;$ we obviously have $c=a+b.$

### Acknowledgment

The above problem, due to Miguel Ochoa Sanchez, has been communicated to me by Leo Giugiuc, along with his solution.
# [texworks] Another experimental Mac build Charlie Sharpsteen chuck at sharpsteen.net Mon Jun 13 21:36:19 CEST 2011 2011/6/12 Dohyun Kim <nomosnomos at gmail.com>: > 2011/6/11 Charlie Sharpsteen <chuck at sharpsteen.net>: >> >> Ooops, you are right---it looks like the Snow Leopard build was still >> including Fontconfig. I just replaced the binary with a new build that >> doesn't include it. >> > > Thanks. > This build is almost OK for base14-fonts.pdf, > except for the "mu" character of Symbol.ttf which is still an empty box. The missing glyph in the Symbol font is due to some sort of encoding issue. The same missing glyph also shows up with builds of Poppler that use Fontconfig. We decided not to spend the time hunting it down as most PDF files viewed by TeXworks have embedded fonts. > BTW I have found an another issue: > http://dl.dropbox.com/u/15254523/pltx.pdf > Upon testing this file with new TW build, > now I cannot see any Japanese characters. > > This PDF file was produced from a tex file as follows > with platex -> dvipdfmx commands on TeX Live 2010 distribution. > > \documentclass{jsarticle} > \begin{document} > 日本語(にほんご) > \end{document} > > With previous TW build with fontconfig library, > there was no such issue, though Japanese characters > were rendered with sans-serif family instead of serif. > Ryumin-Light is a Japanese font of serif family. > > Well... I personally have doubts about these efforts > to do away with fontconfig library, because > font substitution of non-embedded fonts can be managed > with my ~/.fonts.conf file when TW supports fontconfig. > Certainly, this must be my own personal preference, > as this cannot be expected for vast majority of users > who are not familiar with fontconfig. > > Best, > -- > Dohyun Kim > College of Law, Dongguk University > Seoul, Republic of Korea Thanks for this testcase! 
It looks like pTeX does not embed all fonts used in the document which breaks some assumptions we made while building the Quartz font backend. If we can't find a resolution for this issue, we may end up sticking with Fontconfig. Thanks again for testing! -Charlie
# Smallest Abundant Number

##### Age 11 to 14 Short Challenge Level

Primes cannot be abundant: their only factor (apart from the number itself) is 1.

| number | factors | sum | abundant? |
| ------ | ------- | --- | --------- |
| 4 | 1, 2 | 3 | No |
| 6 | 1, 2, 3 | 6 | No |
| 8 | 1, 2, 4 | 7 | No |
| 9 | 1, 3 | 4 | No |
| 10 | 1, 2, 5 | 7 | No |
| 12 | 1, 2, 3, 4, 6 | 16 | Yes |

This shows that the smallest abundant number is $12$.

$6$ is called a perfect number, as the sum of its factors is equal to the number.

This problem is taken from the UKMT Mathematical Challenges. You can find more short problems, arranged by curriculum topic, in our short problems collection.
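The search behind the table can be sketched in a few lines of Python (an illustration, not part of the original problem):

```python
def proper_divisor_sum(n):
    # Sum of the factors of n, excluding n itself.
    return sum(d for d in range(1, n) if n % d == 0)

# An abundant number is one whose proper factors sum to more than the number.
n = 1
while proper_divisor_sum(n) <= n:
    n += 1
print(n)  # -> 12
```

Note that `proper_divisor_sum(6) == 6`, confirming that 6 is perfect rather than abundant.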
# Variables and the 8 Primitive Data Types

### The Purpose of a Variable (and some vocabulary)

You can think of a simple program as a list of instructions to be read and acted upon sequentially. Example:

1. Read a value representing the radius of a circle from the standard input source/stream
2. Compute the area of a circle with this radius
3. Print the area to the standard output stream (i.e., the console window)

Remember: a computer will read and act upon these instructions one at a time - it is not aware of what is coming up until it gets there!

Looking at step 1 in the program above, we will need to tell the computer that it needs to remember the value it is reading in - it needs to store this value in its memory somewhere so we can use it in a computation later. To this end, we will need to do a couple of things.

First, we need to tell the computer how much memory to use to store the value in question. Different kinds of numbers require different amounts of memory (more on this in a minute). Of course, sometimes we need to store information that isn't explicitly numerical. These things, too, come in different sizes. For example, it will certainly take more memory to store the Declaration of Independence than it will to store a single letter (i.e., a "character").

In addition to telling the computer how much memory we want to use to store the value in question, we also need to tell the computer how the value should be stored in memory (i.e., what method of "encoding" should be employed to turn the value into a string of 1's and 0's). Examples of types of encodings used include Two's Complement, IEEE 754 Form, ASCII, Unicode, etc...

The computer also needs to have some reference to where it stored the value in memory, so it can find it again.

The concept of a (typed) variable solves all of our problems here. A variable in Java gives us a way to store numerical or other kinds of information for later use, addressing all of the aforementioned considerations.
The information being stored is called the value of the variable, regardless of whether the information is numerical or not. The amount of memory allocated for a given variable, and how the value associated with that variable should be encoded into 1's and 0's in memory, are specified by its type.

There are 8 primitive types of data built into the Java language: int, byte, short, long, float, double, boolean, and char. The first 6 allow for storage of different kinds of numerical values, boolean stores one of the two logical values true or false, and char stores a single character (think "keyboard" character). We'll talk more about the differences between these in a little bit...

Also, every variable has a name that both serves as a "human-friendly" reference for the information, and is associated with the location in memory where the variable's value will be stored.

Whenever we use a variable in a Java program, we must first declare the variable - that is, we specify what the type and name of the variable will be. Then we can assign a value to the variable - that is, tell it what to remember. When we assign a variable a value for the first time, we are said to be initializing the variable.

Of course, computers are picky about how you tell them to do something. Truth be told, they are kind of dumb. They can't "read between the lines" or "figure out what you really meant". Programming languages have a particular format you have to stick to when telling the computer what to do. This format is called syntax. When code is written in a way that the computer can understand, we say the code is syntactically correct. When the code is written in a way the computer cannot understand, we say there is a syntax error.
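The vocabulary above (declare, assign, initialize) can be seen in a tiny sketch; the variable name and class name here are our own illustration:

```java
public class Vocabulary {
    public static void main(String[] args) {
        int myLuckyNumber;      // declaration: a type and a name

        myLuckyNumber = 13;     // assignment; since it is the first one,
                                // it also *initializes* the variable

        myLuckyNumber = 7;      // a later re-assignment: the old value 13
                                // is replaced by 7

        System.out.println(myLuckyNumber);  // prints 7
    }
}
```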
Let's take a look at the syntax for declaring a variable:

```java
<variableType> <variableName>;
```

Two Examples:

```java
int myLuckyNumber;   // Declares myLuckyNumber to be an integer variable
char myLetter;       // Declares myLetter to be a character variable
```

The semicolon on the end functions much like a period does in the English language. It tells the computer that the statement of what to do is complete at this point. Here, what the computer must do is: "Declare this variable of this type (that is to say: allocate an appropriate amount of memory for the type of variable given, and reference that portion of memory by the name given), and that's it. Now move on to the next statement."

To assign a variable a value (i.e., fill the memory associated with the variable with some information in an appropriately encoded way, per its type), we use the following syntax:

```java
<variableName> = <value or expression>;
```

Some Examples:

```java
myLuckyNumber = 13;    // Assigns the variable myLuckyNumber the value 13

myLuckyNumber = 5+8;   // Also assigns the variable myLuckyNumber the value 13.
                       // Note: expressions like the sum on the right are
                       // evaluated before they are assigned.

myLetter = 'a';        // Assigns the variable myLetter the character 'a'.
```

Note the equals sign here means something different than it does in mathematics. In fact, we don't even call this symbol "equals". We call it the "assignment operator" instead, or "gets" for short. We read myLuckyNumber = 13; as "The variable myLuckyNumber gets the value 13."

### A Useful Shortcut -- Variable Declaration and Assignment in One Statement

Java does allow us to shorten variable declaration and initialization up a bit, syntactically.
We can declare a variable and assign it a value in one step, as the following examples show:

```java
int x = 1;
double d = 1.4;
char myLetter = 'a';
int myLuckyNumber = 5 + 8;
double myProduct = 3.14 * (2.78 + 1.0);
```

When source code contains a representation of a fixed value (like 1, 1.4, and 'a' above) that requires no computation, we call that representation a literal. An expression, on the other hand (like the 5+8 or the 3.14 * (2.78 + 1.0) above), is a combination of one or more operands and operators, whose value must be calculated. The operands might be literals, variables, or some other source of data (e.g., values returned from methods -- but more on that later).

### Variable Reassignment

Variables don't have to have the same value throughout your entire program. Their contents can be changed over the course of the program's execution. For example, suppose you are writing a program to count the number of stars seen in an image of the night sky taken with your digital camera. You might declare a variable named "numStarsFound", initializing it to zero. Then, upon examining the picture file, each time your program locates a new star, you might increase the value of your variable by one with the following statement:

```java
numStarsFound = numStarsFound + 1;
```

Then when you are done, the variable numStarsFound will reflect the number of stars found in the entire picture.

Be aware, you must declare a variable before you can assign a value to it, but you should only declare a variable once within a given scope. (We'll talk more about scope later.)

### Printing the Value of a Variable

As seen in the "Hello World" program, we can use System.out.println() to print things to the console. The "things" that can be printed include text strings, variable values, or the value of any valid expression. As an example, suppose one wants to print the value of a previously declared and initialized variable named myVar.
The following would do the trick:

```java
System.out.println(myVar);
```

Technically, the code above does a little bit more than just printing the value of myVar -- it also prints a line-feed character immediately afterwards. The effect of this is similar to hitting the "return" key on a keyboard: it moves the cursor down a line. As such, if anything is printed to the console after this statement, it will be printed on a different line. If you wanted to avoid printing the extra line-feed character, you could opt for System.out.print() instead, as the example below suggests.

```java
int a = 1;
int b = 2;
System.out.println(a);
System.out.println(b);
System.out.print(a);
System.out.print(b);
```

The above code snippet would print the following to the console:

```
1
2
12
```

Here's an example that mixes in some re-assignments and also prints variable values of different types. Note in particular how the double z gets printed as a decimal, despite it not having a fractional part...

```java
int x = 1;      // an integer variable x is declared and initialized to 1
int y = 2;      // an integer variable y is declared and initialized to 2
double z = 3;   // a double variable z is declared and initialized to 3.0

x = y + 3;      // x is now 5
y = x * 4;      // y is now 20
x = x + 1;      // x is now 6

System.out.print(x);     // prints 6 to the console
System.out.println(y);   // prints 20 to the console
System.out.println(z);   // prints 3.0 to the console
```

Here's what we would see on the console when the above code is executed:

```
620
3.0
```

### The Names of Variables...

The name of a variable is formally called an identifier.

• Identifiers are sequences of characters, consisting only of letters, digits, underscores (_), and/or dollar signs ($).
• An identifier must start with a letter, an underscore (_), or a dollar sign ($). It must not start with a digit.
• Identifier names are case sensitive!!! (mynumber is different than myNumber)
• An identifier may not be a reserved word, or the words: true, false, or null.
• Here's a list of the reserved words in the Java programming language. The keywords const and goto are reserved, even though they are not currently used. true, false, and null might seem like reserved words, but they are actually literals (i.e., values like 0, 3.14, or 'a'); you cannot use them as identifiers in your programs either.

```
abstract    continue    for           new          switch
assert***   default     goto*         package      synchronized
boolean     do          if            private      this
break       double      implements    protected    throw
byte        else        import        public       throws
case        enum****    instanceof    return       transient
catch       extends     int           short        try
char        final       interface     static       void
class       finally     long          strictfp**   volatile
const*      float       native        super        while
```

(* not used; ** added in Java 1.2; *** added in Java 1.4; **** added in Java 5)

### Conventions for Identifiers

The computer doesn't care about the following rules, but your instructor and any other programmer (including you) that has to read your code will!

• Class names (which also have identifiers that follow the same syntactical guidelines) should start with an uppercase letter
• Meaningful names should be used
• If an identifier is to be made up of more than one word, camelCase should be used (e.g., "myLuckyNumber")

### The 8 Primitive Variable Types

Depending on the nature of the information one wishes to store in a variable, and -- in the case of numerical information -- depending on the size and precision required, one can use any of the following eight primitive types in Java: byte, short, int, long, float, double, char, boolean.
For the 6 numerical types, we have the following ranges, storage sizes, and encoding methods:

| Name | Range | Storage Size (and Method) |
|------|-------|---------------------------|
| byte | -128 to 127 (i.e., -2^7 to 2^7 - 1) | 8 bits (Two's Complement) |
| short | -32768 to 32767 (i.e., -2^15 to 2^15 - 1) | 16 bits (Two's Complement) |
| int | -2147483648 to 2147483647 (i.e., -2^31 to 2^31 - 1) | 32 bits (Two's Complement) |
| long | -9223372036854775808 to 9223372036854775807 (i.e., -2^63 to 2^63 - 1) | 64 bits (Two's Complement) |
| float | Negative range: -3.4028235E+38 to -1.4E-45; Positive range: 1.4E-45 to 3.4028235E+38 | 32 bits (IEEE 754 Notation) |
| double | Negative range: -1.7976931348623157E+308 to -4.9E-324; Positive range: 4.9E-324 to 1.7976931348623157E+308 | 64 bits (IEEE 754 Notation) |

Note: float and double number types are stored in IEEE 754 format, and thus are not stored with complete accuracy, due to approximations. To convince yourself of this, try the following:

```java
System.out.println(1.0 - 0.1 - 0.1 - 0.1 - 0.1 - 0.1);
```

and...

```java
System.out.println(1.0 - 0.9);
```
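Wrapped in a complete class (the class name is our own), the two experiments above can be run directly; neither result is exactly what exact decimal arithmetic would give, because 0.1 and 0.9 have no exact binary (IEEE 754) representation:

```java
public class FloatingPointDemo {
    public static void main(String[] args) {
        // Exact arithmetic would give 0.5, but each subtraction of the
        // double nearest to 0.1 introduces a tiny rounding error.
        System.out.println(1.0 - 0.1 - 0.1 - 0.1 - 0.1 - 0.1);

        // Exact arithmetic would give 0.1, but the result here differs
        // slightly from the double nearest to 0.1.
        System.out.println(1.0 - 0.9);
    }
}
```

Comparing each printed value against the "expected" answer with == would report that they are not equal, which is why floating-point values are usually compared with a small tolerance instead.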
http://www.fields.utoronto.ca/programs/scientific/08-09/dynsys/abstracts.html
## ABSTRACTS

### Peter Bates, Michigan State

**Approximately Invariant Manifolds and Global Dynamics of Spike States**

We investigate the existence of a true invariant manifold given an approximately invariant manifold for an infinite-dimensional dynamical system. We prove that if the given manifold is approximately invariant and approximately normally hyperbolic, then the dynamical system has a true invariant manifold nearby. We apply this result to reveal the global dynamics of boundary spike states for the generalized Allen-Cahn equation.

### Arno Berger, University of Alberta

**Digits and dynamics: from finite data to infinite dimensions**

Understanding the distribution of digits and mantissae in numerical data, as generated by a dynamical system or otherwise, is a challenge that can be tackled in many different ways. This talk will discuss recent work utilising quantization, sharp distortion estimates, and dynamical systems techniques. While mostly of a finite (dimensional) nature, the results presented naturally lead to several intriguing questions concerning infinite-dimensional systems.

### Pietro-Luciano Buono, University of Ontario Institute of Technology

**Realization of critical eigenvalues for linear scalar and symmetric delay-differential equations**

This talk discusses the realization of critical eigenvalues for linear delay-differential equations depending on n delays. The main result is as follows: for a set of n rationally independent positive numbers, there exists a scalar linear DDE depending on n delays such that the spectrum of the bounded linear operator has n eigenvalues on the imaginary axis with imaginary parts given by the chosen rationally independent numbers. I will also discuss a generalization to symmetric DDEs and in particular to delay-coupled rings of DDEs with dihedral symmetry.

### Sue Ann Campbell, University of Waterloo

**Delay Induced Canards**

We consider a model for regenerative chatter in a drilling process. The model is a nonlinear delay differential equation where the delay arises from the fact that the cutting tool passes over the metal surface repeatedly. For any fixed value of the delay, a large enough increase in the width of the chip being cut results in a Hopf bifurcation from the steady state, which is the origin of the chatter vibration. We show that for zero delay the Hopf bifurcation is degenerate and that for small delays this leads to a canard explosion. That is, as the chip width is increased beyond the Hopf bifurcation value, there is a rapid transition from a small-amplitude limit cycle to a large relaxation cycle. Our analysis relies on perturbation techniques and a small-delay approximation of the DDE model. We use numerical simulations and numerical continuation to support our analysis and to determine when the small-delay approximation fails. We discuss how our results may apply to other systems with time delays. This is joint work with Emily Stone and Thomas Erneux.

### Alexandre N. Carvalho, Universidade de São Paulo

**Continuity of attractors and of their characterization**

In this lecture we present some of our recent results on the continuity of attractors and of their characterization under autonomous or non-autonomous perturbations. We introduce the class of *gradient-like semigroups*, which contains the class of *gradient semigroups* (those with a Liapunov function), and prove that a perturbation of a gradient-like semigroup is again a gradient-like semigroup. The notion of gradient-like semigroups can be extended to non-autonomous evolution processes, and we prove that a non-autonomous perturbation of a gradient-like semigroup is a gradient-like non-autonomous evolution process. The results presented here are part of a joint work with J. A. Langa of the Universidad de Sevilla, Spain.
### Yuming Chen, Wilfrid Laurier University

**The global attractor of a delayed differential system**

We consider a delayed differential system which describes the dynamics of a network of two identical neurons with delayed output. The technical tool is the discrete Lyapunov functional developed by Mallet-Paret and Sell. First, under some technical assumptions, the existence, nonexistence and uniqueness of periodic solutions in the level sets of the Lyapunov functional are established. Then the global attractor of the system is shown to be the finite union of the unstable sets of stationary points and periodic orbits.

### Chris Cosner, University of Miami

**Beyond diffusion: conditional dispersal in ecological models**

Reaction-diffusion models have been widely used to describe the dynamics of dispersing populations. However, many organisms disperse in ways that depend on environmental conditions or the densities of other populations. Those can include advection along environmental gradients and nonlinear diffusion, among other possibilities. In this talk I will describe some models involving conditional dispersal and discuss its effects and evolution. The presence of conditional dispersal can strongly influence the equilibria of population models, for example by causing the population to concentrate at local maxima of resource density. The analysis of the evolutionary aspects of dispersal is based on a study of models for two competing populations that are ecologically identical except for their dispersal strategies. The models consist of Lotka-Volterra competition systems with some spatially varying coefficients and with diffusion, nonlinear diffusion, and/or advection terms that reflect the dispersal strategies of the competing populations. The evolutionary stability of dispersal strategies can be determined by analyzing the stability of single-species equilibria in such models. In the case of simple diffusion in spatially varying environments it has been known for some time that the slower diffuser will exclude the faster diffuser, but conditional dispersal can change that. In some cases a population whose dispersal strategy involves advection along environmental gradients has the advantage, or can coexist with a population that simply diffuses. As is often the case in reaction-diffusion theory, many of the results depend on the analysis of eigenvalue problems for linearized models.

### Walter Craig, McMaster University

**Lagrangian and resonant tori for Hamiltonian PDEs**

This talk discusses a version of the nonlinear Schroedinger equation posed on a lattice, which is an infinite-dimensional Hamiltonian dynamical system. Using the approach of classical KAM theory, we construct invariant tori of full dimension. Our approach is in fact related to constructions of lower-dimensional tori, in that we use a normal form that controls the tangential frequencies of the torus in question, its normal frequencies, and its linear stability. We augment this with higher-order Melnikov-like nonresonance conditions so that we additionally control the curvature of the action-frequency map. We also give a picture of the situation in the case of resonant tori. This represents joint work with J. Geng.

### Susan Friedlander, University of Southern California

**Kolmogorov's Turbulence, Onsager's Conjecture and a Dyadic Model for the Fluid Equations**

Kolmogorov predicted that the energy cascade mechanism in 3-dimensional turbulence produces a striking phenomenon, namely positive energy dissipation in the limit of vanishing viscosity. However, to date, there is no rigorous proof of this phenomenon based on the Navier-Stokes equations. We will discuss an infinite, nonlinearly coupled system of ODEs that is a so-called "dyadic model" for the fluid equations. We prove that Kolmogorov's dissipation anomaly exists for the dyadic model.
Furthermore, the limiting value of the energy dissipation is exactly the "turbulent" dissipation produced by rough solutions of the inviscid model which are consistent with Onsager's conjecture. This is joint work with Alexey Cheskidov.

### Hongjun Gao, Nanjing Normal University

**Random Attractor for the 3D viscous stochastic primitive equations with additive noise**

In this article, we obtain the existence and uniqueness of strong solutions to the 3D viscous stochastic primitive equations (PEs) and the random attractor for the 3D viscous PEs with additive white noise.

### Martin Golubitsky, Mathematical Biosciences Institute, Ohio State University

**Feed-forward networks near Hopf bifurcation**

Synchrony-breaking Hopf bifurcations in a small three-node feed-forward network lead generically to periodic solutions whose amplitudes in the third node have a surprising 1/6 power growth rate. Moreover, when this network is tuned near such a Hopf bifurcation, it can act as an efficient frequency filter/amplifier. I will describe the general theory; recent experiments of McCullen and Mullin on coupled electrical circuits that confirm this structure; and related work with Claire Postlethwaite, LieJune Shiau, and Yanyan Zhang on periodic forcing of systems near Hopf bifurcation and its application to auditory receptor cells on the basilar membrane in the cochlea.

### Stephen Gourley, University of Surrey

**A Nonlocal Reaction-Diffusion Model for Cellular Adhesion**

Adhesion of cells to one another and their environment is an important regulator of many biological processes, but it is difficult to incorporate into continuum mathematical models. An integro-partial differential equation model for cell behaviour will be presented, in which the integral represents sensing by cells of their local environment. Aggregation patterns are investigated in a model incorporating cell-cell adhesion, random cell movement, and cell proliferation. The model is also extended to give a new representation of cancer growth, whose solutions reflect the balance between cell-cell and cell-matrix adhesion in regulating cancer invasion. Solutions for cell density need to lie between zero and a positive density corresponding to close cell packing. A number of conditions will be presented, each of which is sufficient for the required boundedness. It can be demonstrated numerically that cell density increases above the upper bound for some parameter sets not satisfying these conditions. Conditions will also be presented which are sufficient for global convergence to the uniform steady state.

### Jack K. Hale, Georgia Institute of Technology

**Perturbing Periodic Orbits in Infinite Dimensions**

For an autonomous ODE whose flow possesses a compact invariant set which is a smooth manifold without boundary, there is a vast literature on the effects of perturbations, both autonomous and nonautonomous. Many of the efforts break the problem down into two parts. The first is the persistence of the manifold in the base space for autonomous perturbations, and in the base space times the reals in the nonautonomous case. The second step is to study the flow on the perturbed manifold. In the infinite-dimensional case, much is known about RFDEs with finite delay and parabolic systems. Otherwise, there are only a few important contributions in other cases; in particular, PDEs which do not have a smoothing property in time. There are many obstacles to applying the methods of ODEs. One important reason is that the evolutionary equations involve unbounded operators. Also, for PDEs, there are many important perturbation parameters, some of which are regular and some irregular. Raugel and I have been attempting to develop methods which will be sophisticated enough to obtain results similar to the ones for ODEs and allow general types of perturbations. Since the problems are so complicated, we have chosen the basic invariant set to be a periodic orbit.
We discuss this case in some detail.

### Wenzhang Huang, University of Alabama in Huntsville

**The Minimum Wave Speed of Traveling Waves for a Competition Model**

Consider a reaction-diffusion system that serves as a 2-species Lotka-Volterra competition model with each species having logistic growth in the absence of the other. Suppose that the corresponding reaction system has one unstable boundary equilibrium $E_1$ and a stable boundary equilibrium $E_2$. Then it is well known that there exists a positive number $c_*$, called the minimum wave speed, such that, for each $c\ge c_*$, the reaction-diffusion system has a positive traveling wave solution of wave speed $c$ connecting $E_1$ and $E_2$, and the system has no nonnegative traveling wave with wave speed less than $c_*$ that connects $E_1$ and $E_2$. Although much research work has been done to give an estimate of $c_*$, the important problem of finding an algebraic or analytic expression for the minimum wave speed $c_*$ remains open. In this talk we will introduce a new approach that enables us to determine the minimum wave speed precisely and algebraically.

### Michael Jolly, Indiana University

**Estimates on enstrophy, palinstrophy, and invariant measures for 2-D turbulence**

We construct semi-integral curves which bound the projection of the global attractor of the 2-D Navier-Stokes equations in the plane spanned by enstrophy and palinstrophy. Of particular interest are certain regions of the plane where palinstrophy dominates enstrophy. Previous work shows that if solutions on the global attractor spend a significant amount of time in such a region, then there is a cascade of enstrophy to smaller length scales, one of the main features of 2-D turbulence theory. The semi-integral curves divide the plane into regions in which a range for the direction of the flow is determined. This allows us to estimate the average time it takes for an intermittent solution to burst into a region of large palinstrophy. We also show that the time average of palinstrophy achieves its maximal value only at statistical steady states where the nonlinear term is zero.

### Barbara Keyfitz, Fields Institute

**The Trouble with Conservation Laws**

I address difficulties in conservation laws from the point of view of infinite-dimensional dynamical systems and evolution equations. Unlike systems that behave somewhat like finite-dimensional dynamical systems, conservation laws are not amenable to simple finite-dimensional approximations. Only in rather non-intuitive spaces can one find compact approximations. Even the "obvious" approximation (by parabolic systems imitating physical viscosity) was shown to converge only recently, and only after application of some very deep arguments. Furthermore, all analysis developed so far works only for systems in a single space variable. Again, conservation laws differ in this respect from many well-known evolution equations. Over the last dozen years, several groups have begun to approach multidimensional systems by looking at self-similar reductions. The reduced system changes type in an interesting way. This talk will review work of Morawetz, Popivanov and others on linear change of type and its relation to evolution systems. Our objective is to find well-posed problems for the reduced (quasilinear) system and, ultimately, to use this as a tool to understand the dynamics of multidimensional systems.

### Tibor Krisztin, University of Szeged

**Morse decomposition for differential equations with state-dependent delay**

We consider a class of functional differential equations representing equations with state-dependent delay. It is shown that, under certain technical conditions, the global attractor of the solution semiflow has a Morse decomposition. The result can be applied to equations with threshold delays, signal transmission delays, and delay functions depending on the present state of the system.
The proof is analogous to that of Mallet-Paret given for equations with constant delay; however, nontrivial modifications are necessary. A discrete Lyapunov functional, which is a version of that of Mallet-Paret and Sell, counts sign changes on intervals of the form $[t-r(t),t]$, where $r$ may depend on the state variable. A crucial property in the proof of the boundedness of the discrete Lyapunov functional on the global attractor is that, for globally defined solutions, the map $t\mapsto t-r(t)$ is monotone increasing.

### Bernhard Lani-Wayda, JLU Gießen

**Attractors for delay equations with monotone and non-monotone feedback**

It is known that, for monotone $f$ with negative feedback ($\text{sign}(f(x)) = -\text{sign}(x)$), the infinite-dimensional dynamical system generated by the delay equation $\dot x(t) = f(x(t-1))$ possesses a two-dimensional invariant manifold $W$ with Poincaré-Bendixson-like dynamics, which attracts all slowly oscillating solutions. We consider parametrized families of nonlinearities starting from such $f$, but then changing to non-monotone shape, and how the attractor changes with the nonlinearity. In particular, it loses the disk-like structure.

### Marta Lewicka, University of Minnesota

**Derivation of shell theories from 3d nonlinear elasticity**

A longstanding problem in the mathematical theory of elasticity is to predict theories of lower-dimensional objects (such as rods, plates or shells), subject to mechanical deformations, starting from the 3d nonlinear theory. For plates, a recent effort has led to rigorous justification of a hierarchy of such theories (membrane, Kirchhoff, von Kármán). For shells, despite extensive use of their ad-hoc generalizations present in the engineering applications, much less is known from the mathematical point of view. In this talk, I will discuss the limiting behaviour (using the notion of Gamma-limit) of the 3d nonlinear elasticity for thin shells around an arbitrary smooth 2d mid-surface S. We prove that the minimizers of the 3d elastic energy converge, after suitable rescaling, to minimizers of a hierarchy of shell models. The limiting functionals (which for plates yield respectively the von Kármán, linear, or linearized Kirchhoff theories) are intrinsically linked with the geometry of S. They are defined on the space of infinitesimal isometries of S (which replaces the 'out-of-plane displacements' of plates), and the space of finite strains (which replaces strains of the 'in-plane displacements'), thus clarifying the effects of the rigidity of S on the derived theories. The different limiting theories correspond to different magnitudes of the applied forces, in terms of the shell thickness. This is joint work with M.G. Mora and R. Pakzad.

### Xing Liang, University of Science and Technology of China and University of Tokyo

**A variational problem associated with the minimal speed of travelling waves for spatially periodic reaction-diffusion equations**

We consider the equation $u_t=u_{xx}+b(x)u(1-u)$, $x\in\mathbb R$, where $b(x)$ is a nonnegative measure on $\mathbb R$ that is periodic in $x$. In the case where $b(x)$ is a smooth periodic function, it is known that there exists a travelling wave with speed $c$ for any $c\geq c^*(b)$, where $c^*(b)$ is a certain positive number depending on $b$. Such a travelling wave is often called a "pulsating travelling wave" or a "periodic travelling wave", and $c^*(b)$ is called the "minimal speed". In this paper, we first extend this theory by showing the existence of the minimal speed $c^*(b)$ for any nonnegative measure $b$ with period $L$. Next we study the question of maximizing $c^*(b)$ under the constraint $\int_0^L b(x)\,dx=\alpha L$, where $\alpha$ is an arbitrarily given constant. This question is closely related to a problem studied by mathematical ecologists in the late 1980's, but its answer has not been known.
We answer this question by proving that the maximum is attained by periodically arrayed Dirac's delta functions $\alpha L\sum_{k\in\mathbb Z}\delta(x+kL).$ This is a joint work with Prof. H.Matano and Dr. X. Lin . Xiao-Biao Lin, University of North Carolina Traveling Wave Solutions of a Model of Liquid/Vapor Phase Transition We will discuss traveling wave solutions for dynamical flows involving liquid/vapor phase transition. The model is a coupled system of viscous conservation laws and a reaction-diffusion equation. Sufficient and necessary conditions for the existence of four tyeps of traveling waves will be given: (1) Liquefaction waves; (2) Evaporation waves; (3) Collapsing waves; (4) Explosion waves. This is joint work with Haitao Fan, Georgetown University. Weishi Liu, University of Kansas Effects of some turning points on global dynamics In this talk, we will consider singularly perturbed systems with turning points.For a class of turning points, a new structure is revealed that plays a critical role in the organization of global dynamics. A concrete example will also be discussed for an illustration of the abstract result. Michael C. Mackey, McGill University Temporal Dynamics in the Tryptophan and Lactose Operons This talk will focus on the temporal dynamics of the lactose and tryptophan operons, and the ability of apparently realistic mathematical models to capture these dynamics. The dynamics in question range from stable steady states through bistability and oscillatory expression of gene products. Pierre Magal, University Le Havre Semilinear Non-densely Defined Cauchy Problems: Center Manifold Theorem and Applications Several types of differential equations, such as delay differential equations, age-structure models in population dynamics, evolution equations with nonlinear boundary conditions, can be written as semi-linear Cauchy problems with an operator which is not densely defined. 
The goal of this presentation is first to present several examples, and then to turn to a center manifold theory for semi-linear Cauchy problems with non-dense domain. Using the Lyapunov-Perron method and following the techniques of Vanderbauwhede and Iooss in treating infinite dimensional systems, we study the existence and smoothness of center manifolds for semi-linear Cauchy problems with non-dense domain. We will conclude this presentation with several examples of PDEs where one can investigate Hopf bifurcation using this center manifold theorem. Hiroshi Matano, Tokyo A braid-group method for blow-up in nonlinear heat equations. In this talk I will present intriguing applications of braid group theory to the study of blow-up in a nonlinear heat equation $u_t = \Delta u + u^p$, where $p$ is supercritical in the Sobolev sense. One of the goals is to classify the type II blow-up rates by analyzing the topological properties of certain braids. I will also discuss other applications of this method. Connell McCluskey, Wilfrid Laurier University A Global Result for a Disease Model with Infinite Delay A recent paper (MBE 2008, 5:389-402) by G. Rost and J. Wu presented an SEIR disease model using infinite delay to account for varying infectivity. They gave a thorough analysis, leaving out only the elusive global stability of the endemic equilibrium. A solution to that problem will be given, making use of a Lyapunov functional. The functional includes a term that integrates over all previous states of the system. Konstantin Mischaikow, Rutgers University Building a Database for Global Dynamics of Parameterized Nonlinear Systems James Muldowney, University of Alberta Bendixson conditions for differential equations in Banach Spaces It will be shown that a flow which diminishes a measure of 2-dimensional surface area cannot contain non-constant periodic orbits. Concrete conditions that preclude the existence of periodic solutions for a parabolic PDE will be given.
Roger Nussbaum, Rutgers University Nonlinear Differential-Delay Equations with State Dependent Time Lag(s) This lecture will mostly concentrate on the differential delay equation (1) ax'(t) = f(x(t), x(t-r)), where r := r(x(t)) and f and r are given functions. A central question is what can be said about the limiting shape of slowly oscillating periodic solutions of eq. (1) as a → 0. We shall briefly describe some of the tools which have been used to study such equations, and we shall illustrate our results by discussing some simple-looking equations which already exhibit most of the essential difficulties: (2) ax'(t) = -x(t) - kx(t-r), a > 0, k > 1, r = 1+x(t) or r = 1-(x(t))^2. All of the results which we shall discuss represent joint work with John Mallet-Paret. Global attractivity in monotone concave differential equations with infinite delay We study the dynamical behavior of the trajectories defined by a recurrent family of monotone functional differential equations with infinite delay and concave nonlinearities. We analyze different scenarios which require the existence of a lower solution and of a bounded trajectory ordered in an appropriate way, for which we prove the existence of a globally asymptotically stable minimal set given by a 1-cover of the base flow. We apply these results to the description of the long term dynamics of a nonautonomous model representing stage-structured population growth without irreducibility assumptions on the coefficient matrices. This work is made in collaboration with Rafael Obaya and Ana M. Sanz. Exponential ordering for scalar neutral functional differential equations with infinite delay. We study some properties of the exponential ordering for scalar nonautonomous families of functional differential equations and neutral functional differential equations with stable D-operator. We discuss some properties which imply that the omega limit set of a relatively compact trajectory is a copy of the base.
We apply these results to the study of some kinds of compartmental systems. Sérgio Oliva, Universidade de São Paulo Analytical Methods for Approximation Schemes in Partial Functional Differential Equations The goal of this paper is to present an approximation scheme for a reaction-diffusion equation with finite delay, which has been used as a model to study the evolution of a population with density distribution $u$, in such a way that the resulting finite dimensional ordinary differential system contains the same asymptotic dynamics as the reaction-diffusion equation. Ken Palmer, National Taiwan University Homoclinic Orbits in Singularly Perturbed Systems We begin by considering three nonlinear oscillators studied by Cherry, Iglisch, and Kurland and Levi respectively. In all three systems transversal homoclinic or heteroclinic orbits arise after perturbation. It turns out that all three equations can be regarded as singularly perturbed (or slowly varying) systems where the unperturbed system has one or two normally hyperbolic centre manifolds. We study the general question of fast connecting orbits in such systems and derive a general bifurcation function, the zeros of which correspond to such connecting orbits. We also discuss the question of when such orbits connect equilibria on the centre manifolds. Peter Polacik, University of Minnesota Parabolic Liouville theorems and their applications. Parabolic Liouville theorems state that if u is an entire solution of a specific parabolic equation and u is contained in an admissible class of solutions, then u ≡ 0. As an admissible class one can take nonnegative solutions or radial solutions with bounded zero number. We present available Liouville theorems and some of their numerous applications.
Genevieve Raugel, Paris de Sud Dynamics of some equations in fluid mechanics In this talk, we mainly consider two systems arising in fluid mechanics, namely the second grade fluid equations and the "hyperbolic second order Navier-Stokes equations". We shall prove existence and uniqueness results for both systems and study their dynamical properties. Both systems depend on a parameter. When this parameter is small, these systems can be considered as non-regular perturbations of the Navier-Stokes equations. We shall compare the dynamics of these perturbed systems with those of the Navier-Stokes equations, when the parameter goes to zero. Shigui Ruan, University of Miami, Center Manifolds for Semilinear Equations with Non-dense Domain and Applications to Hopf Bifurcation in Age Structured Models Age structured models arise naturally in population dynamics and epidemiology. The existence of non-trivial periodic solutions in age structured models has been a very interesting and difficult problem. It is believed that such periodic solutions are induced by Hopf bifurcation; however, there is no general Hopf bifurcation theorem available for age structured models. One of the difficulties is that, rewriting age structured models as a semilinear equation, the domain of the linear operator is not dense. In this talk, we first introduce the center manifold theory for semilinear equations with non-dense domain. We then use the center manifold theorem to establish a Hopf bifurcation theorem for age structured models (based on joint work with Pierre Magal). Arnd Scheel, University of Minnesota How robust are Liesegang patterns? TBA Wenxian Shen, Auburn University Spreading and Generalized Propagating Speeds of KPP Models in Time Varying Environments The current talk is concerned with the spreading and generalized propagating speeds of KPP models in time recurrent environments, which include time periodic and almost periodic environments as special cases.
It first introduces the notions of spreading speed intervals, generalized propagating speed intervals, and traveling wave solutions. Some fundamental properties of spreading and generalized propagating speeds are then presented. When the environment is uniquely ergodic and the so-called linear determinacy condition is satisfied, it is shown that the spreading speed interval in any direction is a singleton (called the spreading speed), which equals the classical spreading speed if the environment is actually periodic. Moreover, in such a case, a variational principle for the spreading speed is established and it is shown that there is a front of speed c in a given direction if and only if c is greater than or equal to the spreading speed in that direction. Stefan Siegmund, Dresden University of Technology Differential Equations with Random Delay We present a first step towards a general theory of differential equations incorporating unbounded random delays. The main technical tool relies on recent work of Zeng Lian and Kening Lu, which generalizes the Multiplicative Ergodic Theorem of V.I. Oseledets to Banach spaces. Hal Smith, Arizona State University Persistence Theory for Semidynamical Systems Persistence, sometimes called permanence, for dynamical systems arising in the biological sciences implies that a set of populations avoids extinction. The theory, developed beginning in the 1970s, aims to answer the question "Which species survive in the long run?" We survey some of the theoretical results of this theory and its applications. Special attention will be given to discrete-time systems. Horst R. Thieme, Arizona State University Differentiability of convolutions If $T=\{T(t); t \ge 0\}$ is a strongly continuous family of bounded linear operators between two Banach spaces $X$ and $Y$ and $f \in L^1(0,b,X)$, the convolution of $T$ with $f$ is defined by $(T *f)(t) = \int_0^t T(s)f(t-s)ds$.
It is shown that $T*f$ is continuously differentiable for all $f \in C(0,b,X)$ if and only if $T$ is of bounded semi-variation on $[0,b]$. Further, $T*f$ is continuously differentiable for all $f \in L^p(0,b, X)$ ($1 \le p < \infty$) if and only if $T$ is of bounded semi-$p$-variation on $[0,b]$ and $T(0)=0$. If $T$ is an integrated semigroup with generator $A$, these respective conditions are necessary and sufficient for the Cauchy problem $u'= Au +f$, $u(0)=0$, to have integral (or mild) solutions for all $f$ in the respective function spaces. A converse is proved to a well-known result by Da Prato and Sinestrari: the generator $A$ of an integrated semigroup is a Hille-Yosida operator if, for some $b>0$, the Cauchy problem has integral solutions for all $f\in L^1(0,b, X)$. Integrated semigroups of bounded semi-$p$-variation are preserved under bounded additive perturbations of their generators and under commutative sums of generators if one of them generates a $C_0$-semigroup. Stephen Schecter, North Carolina State University Stability of fronts in gasless combustion For gasless combustion in a one-dimensional solid, we show a type of nonlinear stability of the physical combustion front: if a perturbation of the front is small in both a spatially uniform norm and an exponentially weighted norm, then the perturbation stays small in the spatially uniform norm and decays in the exponentially weighted norm, provided the linearized operator has no eigenvalues in the right half-plane other than zero. Using the Evans function, we show that the zero eigenvalue must be simple. Factors that complicate the analysis are: (1) the linearized operator is not sectorial, and (2) the linearized operator only has good spectral properties when the weighted norm is used, but then the nonlinear term is not Lipschitz. The result is nevertheless physically natural.
Coauthors: Anna Ghazaryan, University of North Carolina at Chapel Hill; Yuri Latushkin, University of Missouri; Aparecido de Souza, Universidade Federal de Campina Grande (Brazil). Arnd Scheel, University of Minnesota How robust are Liesegang patterns? Liesegang patterns are quite common stationary patterns in reaction-diffusion systems. They consist of precipitation spots that are spaced at geometrically increasing distances from the boundary. We will show that such patterns are untypical in generic reaction-diffusion systems, but robust in a class of systems with an irreversible chemical reaction. The main result gives necessary and sufficient conditions for the existence of Liesegang patterns. The proof involves the analysis of a degenerate reversible homoclinic orbit and an invariant manifold theorem for non-smooth Poincare maps. Hans-Otto Walther, Universität Giessen Algebraic-delay differential systems, state-dependent delay, and temporal order of reactions Systems of the form $x'(t) = g(r(t),x_t)$, $0 = \Delta(r(t),x_t)$ generalize differential equations with delays $r(t)<0$ which are given implicitly by the history $x_t$ of the state. We show that the associated initial value problem generates a semiflow with differentiable solution operators on a Banach manifold. The theory covers reaction delays, signal transmission delays, threshold delays, and delays depending on the present state $x(t)$ only. As an application we consider a model for the regulation of the density of white blood cells and study monotonicity properties of the delayed argument function $\tau:t\mapsto t+r(t)$. There are solutions $(r,x)$ with $\tau'(t)>0$ and others with $\tau'(t)<0$. These other solutions correspond to feedback which reverses temporal order; they are short-lived and less abundant. Transient behaviour with a sign change of $\tau'$ is impossible. Gail S. K.
Wolkowicz, McMaster University Comparison of Predator-Prey Models with Discrete Time Delay We consider the dynamics of the classical predator-prey model and the predator-prey model in the chemostat when a discrete delay is introduced to model the time between the capture of the prey and its conversion to biomass. In both models we use Holling type I response functions, so that no oscillatory behavior is possible in the associated system when there is no delay. With delay, the characteristic equation in both cases is a second order transcendental equation with parameters that depend on the delay. In both models, we prove that Hopf bifurcation can occur as the parameter modeling the delay is varied, and show numerically that the bifurcation results in asymptotically stable periodic orbits. However, we show that there appear to be differences in the possible sequences of bifurcations for the two models. Yuan Yuan, Memorial University of Newfoundland Pattern Formation in a Ring Network with Delay We consider a ring network of three identical neurons with delayed feedback. Regarding the coupling coefficients as bifurcation parameters, we obtain codimension one bifurcations (including a fold bifurcation and a Hopf bifurcation) and codimension two bifurcations (including fold-fold, fold-Hopf and Hopf-Hopf bifurcations). We also give concrete formulae for the normal form coefficients derived via the center manifold reduction that provide detailed information about the bifurcation and stability of various bifurcated solutions. In particular, we obtain stable or unstable equilibria, periodic solutions, and quasi-periodic solutions. This is joint work with Shangjiang Guo. Chongchun Zeng, Georgia Tech Unstable manifolds and $L^2$ nonlinear instability of the Euler equation We consider a steady state $v_0$ of the Euler equation in a fixed bounded domain in $R^n$.
Suppose the linearized equation has an exponential dichotomy with a finite dimensional unstable subspace. By rewriting the Euler equation as an ODE on an infinite dimensional manifold in $H^k$, $k>\frac n2 +1$, the unstable manifold of $v_0$ is constructed under certain conditions on the Lyapunov exponents of the vector field $v_0$. This in turn shows the nonlinear instability of $v_0$ in the sense that small $H^k$ perturbations can lead to $L^2$ deviation of the solutions. Yuncheng You, University of South Florida Global Dynamics of Cubic Autocatalytic Reaction-Diffusion Systems Consider a class of cubic and mixed-order autocatalytic reaction-diffusion systems arising from kinetic modeling of chemical and biochemical reactions of two components or from biological pattern formations. We shall talk about the existence of a global attractor and an exponential attractor for the solution semiflow of these reaction-diffusion systems on three-dimensional bounded domains and related issues. Xiaoqiang Zhao, Memorial University of Newfoundland Spatial Dynamics of Abstract Monostable Evolution Systems With Applications In this talk, I will report on the theory of spreading speeds and traveling waves for abstract monotone semiflows with spatial structure. Under appropriate assumptions, we show that the spreading speeds coincide with the minimal wave speeds for monotone traveling waves in the positive and negative directions. Then we use this theory to study the spatial dynamics of a reaction-diffusion-advection population model, a parabolic equation in a periodic cylinder with the Dirichlet boundary condition, a porous medium equation in a tube, and a lattice system in a periodic habitat. If time permits, I will also mention our research on some non-monotone evolution systems. This talk is based on my recent joint work with X. Liang, S.-B. Hsu, J. Fang, and Y. Jin.
Xingfu Zou, University of Western Ontario Impact of map dynamics on the dynamics of the associated delay reaction diffusion equation with Neumann condition We are concerned with the dynamics of a class of delay reaction diffusion equations with a parameter $\mu$. By letting $\mu \rightarrow +\infty$, such an equation is formally reduced to an interval dynamical system. With the help of Sarkovskii's famous theorem, we obtain some new yet simple sufficient conditions that assure the global stability of the delayed reaction diffusion equation with the parameter. We also give several examples to illustrate our main results. This is joint work with Taishan Yi.
https://www.sarthaks.com/4659/john-has-marbles-put-them-into-bags-put-the-same-number-marbles-each-bag-how-many-marbles-each
John has 340 marbles. He put them into 4 bags. He put the same number of marbles in each bag. How many marbles are in each bag? 340/4 = 85, so there are 85 marbles in each bag.
https://worldbuilding.stackexchange.com/questions/102874/thousand-pound-steel-bow/102899
# Thousand pound steel bow? If a steel bow shaped like an old Mongolian bow, with a draw weight of around 1000 lbs, were made in proportion to a 9 foot tall humanoid (assume strength is enough to pull the bow normally, and that the bow and arrow are roughly double regular size), what sort of force, speed, etc. would the arrow achieve? (Assume the arrow could survive these forces, or adjust the arrow as needed.) • The range, speed, etc. depend on the mass of the arrow. For this question to be answerable, that information must be included. – Gryphon Jan 22 '18 at 19:02 • Ok, I will edit to make this more clear – Christopher Void Jan 22 '18 at 19:04 • This sounds like a physics problem, not a worldbuilding one. – sphennings Jan 22 '18 at 19:07 • I assumed it would be acceptable considering I need to know these things in order to realistically interact with it in my world – Christopher Void Jan 22 '18 at 19:11 # First of all, some conversions to make the physics easier: 500 lbs ~= 225 kg 43.2 in ~= 1.1 m # Okay, now why these numbers? Bows are basically springs, meaning that they follow Hooke's Law reasonably well. Hooke's law states that $F=ks$ where $F$ is force, $k$ is a constant related to the spring and $s$ is how far you've stretched the string. Now I'm going to make an assumption that's wrong, but will make all of this much easier. I'm going to assume that your bow begins from a state of rest, meaning that it starts at $s=0$. This is wrong, because the bowstring does impart some force onto the bow, creating some tension before you even begin to draw, but since this whole exercise is an estimate, I don't think this will affect our end results much. In physics, work is defined as $W=F_{avg}s$. Here's where that assumption comes into the picture. Since the force exerted by the bow is linear, if it starts at a state of rest, then the average force is just half the maximum force.
That means that the average force exerted while pulling back your hypothetical bow is 500 lbs, because that is half of 1000 lbs (the max force during draw). Now we need $s$, or the draw length. This site tells us that the length from fingertip to fingertip divided by 2.5 should be the proper draw length. For our 9 foot tall humanoid, that's $(9*12)/2.5=43.2$ inches. Using the conversions above, that's $F_{avg}=225$ and $s=1.1$. # Now plug it in Plugging our numbers into our formula above, we get $W=F_{avg}s=225*1.1=247.5\ Joules$. Not all of that is going to go into sending the arrow forward. A lot of it is wasted by making the arrow oscillate and some heating of the bow. According to this site, only 54.8% of the energy goes into the arrow. So, $E_{arrow}=W_{bow}*.548=247.5*.548=135.6\ Joules$. # So how fast is it going finally? Now kinetic energy is defined as $E_k=(1/2)mv^2$ so now all we need is the weight of the arrow. This guy wanted some help with the weight of his arrow, and his community was helpful enough to let him know it was ~375 grains... which is apparently a unit of weight? Anyway, apparently that's about 24 grams. I'm going to go with a crude "let's double its size!" calculation. Doubling the size makes the volume increase by a factor of 8, and the weight by the same, so the weight of our extra large arrow is 194 grams. Note that double the size is 56 inches, which isn't too much longer than our draw length, sounds good to me! Now throwing that in the equation, we get $135.6=(1/2)*.194*v^2$. Solving for $v$ gets us about 37 meters per second or about 80 mph! • This shows the limits of mechanical energy storage. Even a very crude matchlock from the 1500's could provide about 1000 J of energy to the projectile, an order of magnitude difference. And weapons like steel crossbows sized for normal humans could already deliver about 200 J of energy to the quarrel, so a longbow sized for a 9' tall human isn't even that impressive.
Don't forget also that longbows need a lifetime of training and constant practice, while crossbows and firearms can be used effectively with minimal training. – Thucydides Jan 22 '18 at 23:08 So 1,000 lb is grossly oversized. Average draw weights for normal bows are in the 40-55 lb range. You're talking about scaling up a human by 1.5x (6 ft to 9 ft), but increasing the arm strength by 20x. Even assuming a cube law, this would indicate a more likely bow draw range of 3.5x, or 150-200 lb, comparable to a crossbow. For reference, the giant would need to dissipate this force when firing (equal and opposite); a thousand pounds of force is one hell of a shove, even for a giant bracing for the blow. 1000 lb is not a bow; it is a small ballista. For reference, these guys built a replica Roman ballista of 700 lb draw weight. Ballistae don't fire regular arrows, but more commonly stones, metal darts, or metal-tipped spears. They also have a guide track for the projectile to follow, which your bow would lack, making it very hard to aim or shoot. Comparable draw weight ballistae can shoot several hundred meters when firing in a parabolic arc. They were known to pierce shields at range and were highly accurate and deadly.
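The chain of arithmetic in the accepted answer can be reproduced in a few lines of Python. This is a sketch using the answer's own figures (1000 lb peak draw, 43.2 in draw length, 54.8% efficiency, 194 g arrow), which are assumptions from the answer above, not measured data:

```python
# Mirrors the answer's arithmetic step by step; all inputs are the
# answer's own assumptions, not independently verified data.

avg_force = 225      # the answer's "average force": half of 1000 lb, converted to kg
draw_length = 1.1    # metres, from 43.2 inches

work = avg_force * draw_length       # 247.5, which the answer labels joules
arrow_energy = work * 0.548          # 54.8% transfer efficiency -> ~135.6
arrow_mass = 0.194                   # kg, the doubled-size arrow

speed = (2 * arrow_energy / arrow_mass) ** 0.5
print(round(speed, 1), "m/s")        # roughly 37 m/s, about 80 mph
```

One caveat worth noting: the 225 here is really kilograms-force, so the "joules" above are actually kgf·m; multiplying by g ≈ 9.81 would give the strict SI energy and a correspondingly higher speed. The sketch simply follows the answer's own unit conventions.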
https://www.mbatious.com/topic/171/2iim-quant-notes-number-theory-part-1
# 2IIM Quant Notes - Number Theory - Part 1 • N is an 80-digit positive integer (in the decimal scale). All digits except the 44th digit (from the left) are 2. If N is divisible by 13, find the 44th digit. Hint: 1001 is 7 * 11 * 13. 1001 is a fabulous number. Kind of number you want to make your friend. A number that will take all your inputs, replicate them, and appreciate your efforts amply. Another hint: A number of the form abcabc is a multiple of 1001 and therefore a multiple of 13. So, 222222 is a multiple of 13. 222222 followed by any number of zeroes is also a multiple of 13. 2222...222 thirty-six times is a multiple of 13... 2222...22 forty-two times is also a multiple of 13. A page is torn from a novel. The sum of the remaining page numbers is 10000. What is the sum of the two page numbers on the torn page of this novel? Hint: Do not just think algebra. Think 'around' 10000. The sum of the first n natural numbers is n(n+1)/2. To get close to 10000, we need n to be 141. 141*142/2 is equal to 10011, which means we are short 11. Two consecutive numbers must add to 11; the only possibility is 5+6. A number n! is written in base 6 and base 8 notation. Its base 6 representation ends with 10 zeroes. Its base 8 representation ends with 7 zeroes. Find the smallest n that satisfies these conditions. Hint: In base 5, if a number ends in 4 zeroes, it should be a multiple of 5^4. From this we can say the number should be a multiple of 2^21 and also 3^10. Since we know that in base six the number ends in 10 zeroes, the power of three is the limiting factor. So we need a number that is a multiple of exactly 3^10 and a multiple of at least 2^21 (we can go up to 2^23, but not 2^24. Think about why this is). Trial and error works best here. If n is 18, then through repeated division we get that 18! is a multiple of 3^8. We need two more multiples of 3, so let's check 24!. This is a multiple of 3^10. It is also a multiple of 2^22, so it fits the constraints.
24 is the smallest such number. 25 and 26 will also work since they don't bring in extra multiples of three. [x] is the greatest integer less than or equal to x. Find the number of positive integers n such that [n/11] = [n/13]. The greatest integer function is a fabulous discrete function. Any question on this, treat with caution. All numbers less than 11 will yield 0 as a result for both 11 and 13. Straight away we have 10 numbers. Next, 11 and 12 will yield 1 in the first case and 0 in the second, so ditch these. We need to look for the range where n is less than 22. From 13 to 21, both cases will yield 1 as the answer. Add another 9 cases. The next overlap is from 26 to 32, for another 7 cases. The next overlap is from 39 to 43, for 5 more cases. The next overlap is from 52 to 54, for 3 more cases. The final case is 65, when the result is five in both cases. The total count is 35. N/420 is a terminating decimal. If N is a natural number less than 420, how many different values can N take? Hint/Method: Any number of the form p/q is a terminating decimal if and only if the only primes that divide q are 2 and 5. The caveat here is that p/q should be in its simplest form. In other words, HCF(p,q) = 1; p and q should be coprime. A man wants to leave his wealth for his 5 sons and/or 11 grandchildren. He realizes that if he distributes his wealth equally amongst his sons, he has $2 left. If he distributes it equally amongst his grandkids, he has $3 left. If he distributes it equally among all his progeny, he has $4 left. What is the least amount of money the old man might be leaving behind? In the first case, we get that his wealth is of the form 5a + 2; in the second, 11b + 3; and in the final, 16c + 4. Let's list down some numbers.... In the first case: 22, 27, 32, 37, 42, 47, 52.... In the second case: 14, 25, 36, 47, 58... We see the number has to be of the form 55k + 47. Listing: 47, 102, 157, 212, 267, 322... In the third case: 100, 116, 132, ... 164, ...
196, 212. So his wealth should be of the form 880k + 212. Obviously the smallest is when k is zero, so 212 is the minimum possible. Number N = 333333…3, written with 2016 threes. What is the remainder when N is divided by 55? N/11 leaves remainder 0. N/5 leaves remainder 3. Combine these two and we get 33. Number N = 333333…3, written with 2016 threes. What is the remainder when N is divided by 56? The number is of the form 8k + 5 (the remainder when divided by 8 is 5; just look at the last three digits). Also, 333333 is exactly divisible by 7; I found this painstakingly... even though just two questions back we saw that abcabc is divisible by 1001... This means repeated sets of 333333 will also be divisible by 7. Since 2016 is 6 * 336, the number is exactly divisible by 7. So we have a number of the form 8a + 5, and 7b. Listing, we have 5, 13, 21.... 21 is a multiple of 7. So the number is of the form 56k + 21. The remainder when divided by 56 is 21. How many 5 digit numbers exist, comprising the digits 1, 2, 3, 4, 5 each occurring exactly once, such that '1' does not appear immediately before '2', but which are divisible by 12? (Ex. 34152 would be counted, whereas 34512 would not be counted.) For a number to be divisible by 12, it needs to be divisible by 3 and 4. Three is not a problem because in any configuration the sum of digits will be 15, so the number is divisible by 3. To be divisible by 4, the last two digits must form a multiple of 4. There are four ways this is possible: the number ends with 12, 24, 32 or 52. We can't consider 12, because there 1 immediately precedes 2. So there are three cases we need to consider. Let us consider 32 and 52, since there is no scope for 1 to appear before 2 in these cases. The number looks like _ _ _ 3 2 / _ _ _ 5 2. In both cases, the other three digits can rearrange in 3! ways. So six cases each for these two endings. In the last case (ending 24), there are six arrangements possible. But let us consider the arrangements which end in _ _ 1 2 4. There are 2!
Cases which must be eliminated, for a total of four more cases. All cases considered, there are 16 such numbers Looks like your connection to MBAtious was lost, please wait while we try to reconnect.
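All of the answers above are easy to brute-force. The following standalone Python sketch (not from the original thread) re-derives each of them:

```python
from itertools import permutations

# floor(n/11) == floor(n/13): all matches lie below 66, so 200 is a safe cap.
floor_count = sum(1 for n in range(1, 200) if n // 11 == n // 13)  # 35

# N/420 terminates iff the 3 and 7 in 420 = 2^2 * 3 * 5 * 7 cancel, i.e. 21 | N.
terminating = sum(1 for n in range(1, 420) if n % 21 == 0)  # 19

# Inheritance: smallest W with W = 5a + 2 = 11b + 3 = 16c + 4.
W = next(w for w in range(1, 1100)
         if w % 5 == 2 and w % 11 == 3 and w % 16 == 4)  # 212

# 2016 threes, taken mod 55 and mod 56.
N = int("3" * 2016)
r55, r56 = N % 55, N % 56  # 33 and 21

# Permutations of 12345 divisible by 12 without "1 immediately before 2".
good = [p for p in permutations("12345")
        if "12" not in "".join(p) and int("".join(p)) % 12 == 0]  # 16 numbers
```

Running it confirms 35, 19, 212, the remainders 33 and 21, and the count of 16.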
https://math.stackexchange.com/questions/373600/find-a-basis-for-the-subspace-mathbbr3-containing-vectors
# Find a basis for the subspace of $\mathbb{R}^3$ containing given vectors

Let $v_1 = \langle 1,0,-1\rangle$, $v_2 = \langle -2,7,2\rangle$, $v_3 = \langle 3,-7,-3\rangle$. I found that these are linearly dependent, since I have a free variable upon reducing. However, the question asks to form a basis with those $3$ vectors, and a basis can only be formed from linearly independent vectors. How would I answer the following: "Find a basis for the subspace of $\mathbb{R}^3$ containing $v_1$, $v_2$, $v_3$."?

• The subspace (I assume the subspace spanned by the given vectors, here) contains the vectors, not the basis. You can take the non-zero rows of your echelon form (assuming you work with the matrix whose rows are the given vectors) as the elements of the required basis. – David Mitra Apr 26 '13 at 15:21
• @DavidMitra Isn't $e_1,e_2,e_3$ also a basis then? I think it is a strange question. – Kasper Apr 26 '13 at 15:22
• Note that the question is asking about the subspace $V$ of $\mathbb{R}^3$ which contains $v_1, v_2, v_3$, that is, $V = \operatorname{span}\{v_1, v_2, v_3\}$. The question is asking you to find a basis for $V$. – Michael Albanese Apr 26 '13 at 15:24
• That's my issue. How can I find a basis if the set is not linearly independent? Or am I confusing it all? – Dimitri Apr 26 '13 at 15:29
• Your question as stated and as implied by Kasper has a trivial answer. Perhaps you meant, as I suggest in my first comment, "the subspace spanned by the vectors"? – David Mitra Apr 26 '13 at 15:29

In other words, choose two basis vectors $b_1, b_2$ and express them as linear combinations of $v_1, v_2, v_3$. You have freedom to choose what $b_1, b_2$ are, provided that they are linearly independent of one another.

$v_3 = v_1 - v_2$, but $\{v_1, v_2\}$ is linearly independent, so it can serve as a basis for this subspace (which contains all three vectors).
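A quick numerical check of the accepted reasoning — an illustrative sketch using NumPy, not part of the original thread:

```python
import numpy as np

v1 = np.array([1.0, 0.0, -1.0])
v2 = np.array([-2.0, 7.0, 2.0])
v3 = np.array([3.0, -7.0, -3.0])

# The matrix with the vectors as rows has rank 2, so the three vectors
# are linearly dependent and span a 2-dimensional subspace.
A = np.vstack([v1, v2, v3])
rank = np.linalg.matrix_rank(A)  # 2

# The dependency found in the answer: v3 = v1 - v2,
# hence {v1, v2} is a basis for span{v1, v2, v3}.
dependency_holds = np.allclose(v3, v1 - v2)
```

The non-zero rows of the row-echelon form of `A` would give an equally valid (but different-looking) basis, as suggested in the first comment.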
https://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2018-050.html
# Measurement of $b$-hadron fractions in 13 TeV $pp$ collisions

## Abstract

The production fractions of $\overline{B}_s^0$ and $\Lambda_b^0$ hadrons, normalized to the sum of $B^-$ and $\overline{B}^0$ fractions, are measured in 13 TeV $pp$ collisions using data collected by the LHCb experiment, corresponding to an integrated luminosity of 1.67 fb$^{-1}$. These ratios, averaged over the $b$-hadron transverse momenta from 4 to 25 GeV and pseudorapidity from 2 to 5, are $0.122 \pm 0.006$ for $\overline{B}_s^0$, and $0.259 \pm 0.018$ for $\Lambda_b^0$, where the uncertainties arise from both statistical and systematic sources. The $\Lambda_b^0$ ratio depends strongly on transverse momentum, while the $\overline{B}_s^0$ ratio shows a mild dependence. Neither ratio shows variations with pseudorapidity. The measurements are made using semileptonic decays to minimize theoretical uncertainties. In addition, the ratio of $D^+$ to $D^0$ mesons produced in the sum of $\overline{B}^0$ and $B^-$ semileptonic decays is determined as $0.359\pm0.006\pm 0.009$, where the uncertainties are statistical and systematic.

## Figures and captions

Fit to the mass spectra of the $H_c$ candidates of the selected $H_b$ decays: (a) $D^0$, (b) $D^+$, (c) $D_s^+$ mesons, and (d) the $\Lambda ^+_ c$ baryon. The data are shown as black points with error bars. The signal component is shown as the dashed (green) line and the combinatorial background component is shown as the dashed (red) line. The solid (blue) line shows all components added together.
Projections of the two-dimensional fits to the (a) $m(D^0K^{\pm})_C$ and (c) $m(D^0\myoptbar{ p}{} )_C$ mass distributions and (b, d) ${\rm ln}(\Delta\chi^{2}_{\rm V})$ for (top) $D ^0 K^{\pm} X\mu^- \overline{\nu}_{\mu}$ candidates, and (bottom) for $D ^0 \myoptbar{ p}{} X\overline{\nu}_{\mu}$ candidates. The curves show projections of the 2D fit. The dashed (red) curves show the $D_{s1}^{+}$ and $D_{s2}^{*+}$ resonant components in (a) and (b), and $\Lambda ^+_ c (2860)$, $\Lambda ^+_ c (2880)$ and $\Lambda ^+_ c (2940)$ resonant components in (c) and (d). The long-dashed-dotted (green) curves show the nonresonant component, the dotted (black) curves are the background components, whose shapes are determined from wrong-sign combinations, and the solid (blue) curve shows all components added together.

The ratios $f_{s}/(f_{u}+f_{d})$ and $f_{\Lambda ^0_ b }/(f_{u}+f_{d})$ in bins of $p_{\mathrm{T}} (H_b)$. The $\overline{ B }{} {}^0_ s$ data are indicated by solid circles, while the $\Lambda ^0_ b$ by triangles. The smaller (black) error bars show the combined bin-by-bin statistical and systematic uncertainties, and the larger (blue) ones show the global systematics added in quadrature. The fits to the data are shown as the solid (green) bands, whose widths represent the $\pm 1 \sigma$ uncertainty limits on the fit shapes, and the dashed (black) lines give the total uncertainty on the fit results including the global scale uncertainty. In the highest two $p_{\mathrm{T}}$ bins the points have been displaced from the center of the bin.

Measurement of the fraction ratios (a) $f_s/(f_u+f_d)$ and (b) $f_{\Lambda ^0_ b }/(f_u+f_d)$ as functions of $\eta$ integrated over $p_{\mathrm{T}}$.

## Tables and captions

Table 1: Branching fractions of semileptonic $b$-hadron decays from direct measurements for $\overline{ B }{} {}^0$ and $B ^-$ mesons, ($\left< B\right>\equiv \left< \overline{ B }{} {}^0 +B^-\right>$), and derived for $\overline{ B }{} {}^0_ s$ and $\Lambda ^0_ b$ hadrons based on the equality of semileptonic widths and the lifetime ratios \cite{Tanabashi:2018oca,Bigi:2011gf}. Corrections to $\Gamma_{\rm SL}$ for $\overline{ B }{} {}^0_ s$ $(-1.0\pm 0.5)$% and $\Lambda ^0_ b$ $(3.0\pm 1.5)$% are applied \cite{Bigi:2011gf}. Correlations in the $\overline{ B }{} {}^0$ and $B ^-$ branching fraction measurements have been taken into account. See Ref. \cite{Aaij:2016avz} for more information.

Table 2: Charm-hadron branching fractions for the decay modes used in this analysis. Note, the $\Lambda ^+_ c$ branching fraction has been significantly improved since the previous analysis.

Table 3: Global systematic uncertainties. The $D^{0}$ and $D^{+}$ branching fraction uncertainties are scaled by the fraction of each decay, $f_0$ and $f_+$, for the $f_s/(f_u+f_d)$ and $f_{\Lambda ^0_ b }/(f_u+f_d)$ uncertainties.

Table 4: Values of $f_s/(f_u+f_d)$ and $f_{\Lambda ^0_ b }/(f_u+f_d)$ in each $p_{\mathrm{T}} (H_b)$ bin. The first uncertainty is statistical and incorporates both the uncertainties due to the data sample size and the finite amount of simulated events, while the second is the overall systematic uncertainty, including global and bin-dependent systematic uncertainties.

Table 5: Covariance matrix for $f_s/(f_u+f_d)$ in $p_{\mathrm{T}} (H_b)$ [GeV] bins; it accounts for statistical and bin-dependent systematic uncertainties, but not the global systematic uncertainties.

Table 6: Covariance matrix for $f_{\Lambda ^0_ b }/(f_u+f_d)$ in $p_{\mathrm{T}} (H_b)$ [GeV] bins; it accounts for statistical and bin-dependent systematic uncertainties, but not the global systematic uncertainties.

Created on 27 February 2021.
https://www.geo-metronix.de/procmt_html/procmt_procmt.html
# ProcMT

## Preface

ProcMT (process MT) is a collection of libraries and executables, bundled together in several GUIs (graphical user interfaces). ProcMT works on a very specific directory structure from which the software reads XML data, ats data (advanced time series) and calibration files. The philosophy is to configure everything first and then run the program. Once configured you can run many jobs - the computer does not wait on mouse clicks. All actions and joblists are stored as XML files and will be executed on request from the GUI or shell. LINUX users can run procmt_main from script files, in case you have to process hundreds of sites. The script interface can also run on the ADU systems. Activating all parallel options may lead to memory exhaustion on 32bit systems. It is recommended to use 64bit systems with more than 8 GB memory; expect roughly 50% of the time-series data size to be used as memory.

# Setup

## Directory Structure

ProcMT expects a directory structure to work on. This structure is created either by a script or by the GUI via File -> Create Project. Your default file manager will pop up and you create a new directory. (You may skip the following details at first; your data goes into the ts directory and the processing instructions reside in the processings directory.) None of these directories may be deleted by the user. You can create additional directories if you want (like doc, procs_old and so on).

cal: contains the metronix standard calibration files. In case "cal auto" is used the software first looks for the calibration file, secondly for the XML-embedded calibration and lastly - if both are not found - will use an internal calibration function.

config: contains configuration files (e.g. XML files). These are normally files for channel mapping and EMAP. Additionally you can put your (plain ASCII) frequency list files here.
"freq_list_1.dat" to "freq_list_7.dat" are predefined names which you can use and which can be accessed from the GUI.

db: contains SQLite databases, in case. (May not be used by users; by default you should switch the database off and use the EDI file output in the edi directory. With a MySQL connection different users can write results to a centralized database and share their results.) After creating a new survey the database must be created by hand from the menu Results → Database → Ok: this creates a new local database. By default the DB is disabled in procmt_cmds.

dump: contains dump files. HENCE: most dump files will be overwritten without notice - copy them in case you need them. You should turn dumps off (except when you are facing problems) in the command line options module.

edi: contains the EDI output. Files will not be overwritten but numbered.

filters: contains filter data, in case.

jle: contains jobfiles for this survey for usage with the job list editor (jle), if you want to use them.

jobs: contains job files (XML files). A job is a combination of processing(s) and timeseries data. You can call a job file from the command line: procmt_main jobs/myjobfile.xml

log: contains log data, in case.

processings: contains all jobs which can be submitted and used for this survey. All files are XML files. If you truly understand these files you can also edit them by hand.

shell: contains BASH scripts for LINUX users.

tmp: contains temporary data, in case; may be cleared automatically.

ts: contains the time series data. See add data. The data is in the time series folder "ts". You simply create "Site_1", "Site_2" and so on ("Site_", not "site_"; or from the GUI File -> Create Site). Copy the COMPLETE measdir (for example "meas_2014-08-15_07-23-38") into the Site_N directory. The directory and the timeseries will appear some seconds later in the GUI (an auto-scan is running in the background).
If the recording was set up correctly in the joblist or ADU interface, you can double-click the Site and the processing runs.

For EMAP it is possible to create site names like Site_140_142_144. These types of directories can only be processed if you have created a corresponding XML configuration inside the config directory and told the processing to use that configuration file. Site and site numbers have to be separated by underscore ( _ ).

# Processings

## Pre-Processings

Pre-processings are modules used to read and condition the timeseries and create the auto- and cross-spectra which will later be used by the MT processing modules. All parameters are saved in XML format.

### edi_info

This module takes values for creating the EDI file later:

• DATAID: like NothernMining or Section2
• ACQBY: e.g. the person who has recorded the data in the field
• FILEBY: e.g. the one who has processed the data
• ACQDATE: put auto and the software will take the first timeseries entry
• ENDDATE: put auto and the software will take the last timeseries entry
• FILEDATE: put auto and the creation date appears here
• REFLOC: recording location, like Ketzin near Berlin
• STATE: recording location, like Brandenburg or California
• COUNTRY: recording location, like Germany or U.S.A.

### procmt_cmds

This module contains options which could also be used as command line parameters. They will affect or override parameters from several modules. For example, here you can quickly select a RR (remote reference) site instead of editing somewhere else; you can also decide to process the Z tensor in a rotated coordinate system.

• queue length - handle with care: internal buffer size. Take auto! Tuning does not affect the speed much. Take 16384 for example if you have 16 GB or more RAM.
• AI Limit for Swift (greater / equal 0, negative off): if -1, tensors are not rotated.
If 1.6, for example, data will be rotated according to SWIFT if the anisotropy is greater than 1.6; tensors are processed and finally rotated back to 0 when writing the EDI file. Giving a reasonable anisotropy avoids useless rotation (if the anisotropy is close to 0 the rotation angle becomes undefined).

• Tensor rotation N->E: rotate the Z tensor before analysis; it will be rotated back to 0 when writing the EDI file (+/- 360 deg); 0 = off
• Tensor rotation AI lower limit: for the above: rotate only when the AI is greater; say for example 1.5 (range 0 - 10000) - only active if the above != 0.
• dump various: off, on, binspectra: on will create various dump files, binspectra will create a dump file for all auto- and cross spectra. This file can be used together with the spectral editor. binspectra is, simply spoken, a freeze of the processing after the auto- and cross spectra have been calculated.
• dump raw transfer functions: off, on: just for checking the finally used values (not activated yet)
• de-activate database: false, true: if true, ProcMT will not additionally write EDI data to a SQL database (SQLite, MySQL)
• split all: false, true: should be off. If on, ProcMT tries to create as many EDI files as possible.
• cat all: true, false: should be on. ProcMT will join all auto- and cross spectra from different bands (ats files) in memory. You get spectral data, for example, from the 512 Hz and from the 128 Hz recordings; in this case the 16 Hz data may be a combination of spectra of the 512 Hz and 128 Hz recordings. Even though memory consuming, this gives the best results.
• ascii table edi: false, true: write an ASCII table (only for users who want quick access, e.g. with Gnuplot or XMGrace; you should ALWAYS use the EDI file for whatever you do.)
• write edi spectra: true, false: should always be true. Otherwise the >=SPECTRASECT in the EDI file is missing.
• write edi Z: true, false: should always be true. Writes the >=MTSECT in the EDI file.
• join same processing comments: true, false: When using different processings like hf.xml, lf.xml, deadband.xml you may want a single EDI as result when processing several bands. In this case use the same processing comment in all processings (like mydefault in hf.xml, lf.xml, deadband.xml) and ProcMT joins the spectra (if cat all is active) and joins the results into one EDI file. HENCE: the different processings must be submitted during recording with the ADU. Otherwise you may have to edit the measdoc and replace the processing (e.g. mt_auto) with yours (hf): edit <processings> mt_auto</processings> to <processings> hf</processings>. For CSAMT this is the default setting, because here you have a different processing for each recording (which includes the transmission time and transmission frequency).
• activate online processing: N/A yet: gets data in real time from a server (ADU)
• server ip: IP address of the server for real time processing
• port: port of the server
• vlf processing activated: N/A yet
• auto bandwidth: uses an adaptive bandwidth according to the sampling frequency. The FFT length for sampling rates less than 1 Hz is 128 points, for 128 Hz it is 512 points, and so on. This allows you to create one processing for all data - e.g. we call that mt_auto. You can adjust auto bandwidth by 1/8, 1/4, 1/2: in this case the FFT will be longer (and the bandwidth smaller); with 2, 4, 8 the FFT will be shorter (and the bandwidth greater). If the option is active, the FFT length in the ats file reader will be ignored (see the example table for the auto-bandwidth).
• auto parzen: overrides the parzen radius from the auto-cross-parzening module. The parzen radius will be increased with increasing periods. Theoretically you want small parzening together with a small bandwidth (greater FFT) when you process data like 46 Hz and want to exclude influence from 50 Hz.
• reject coherency less than in advance: You can reject less correlated data in advance.
The stack all method then becomes a coherency threshold. Especially if you have huge data sets you can exclude data in advance, so that this data does not affect the memory usage when the processing comes to the MT modules.

• remote Site No: set the number (just the number) of the remote site. This option is also available from the GUI. Set it here if you use a batch processing or the remote site number is always the same.
• force remote Site RUN: by default ProcMT selects the greatest overlapping time segment of the remote site. You can force ProcMT to take a different time segment.
• force remote dir:
• only remote cross spectra:
• show edi after processing: [on/off] when processing is finished, start the edi plotter. Turn off for batch processing.

# Time Series Processing Modules

• skip samples at the beginning of the file: use only when the first data points of your file are useless.
• skip samples at the end of the file: use only when the last data points of your file are useless.
• use nth slice for processing: only for "CEA", don't use it. In case of sliced time series, use the nth slice for processing; min=0, max=1023
• use n slices for processing: only for "CEA", don't use it. Process more than one slice; min=0, max=1024, starting at the nth slice: using 1024 means start at the nth slice and use all to the end (ref. above). A setting of 0, 0 treats the file like a standard file and reads from the default header - and the default would contain the total amount of samples, so a CEA sliced file would be read as a metronix standard file (in case the file contains more than one slice this will lead to data corruption). A setting of 0, 1024 does almost the same but reads the samples and settings from the first slice (0).
• use block of samples:
• move back after read: after each reading, move n samples back; use this if you are using a filter which reads 1200 points and makes 1024 out of it; in this case you want to move back 176 points. Do not use as overlapping (see below).
• Window Length FFT: If the bandwidth is not set to auto (see procmt cmds) this reading length is also the FFT length. Values: 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288.
• skip_marked_parts: default on. If you have made selections (e.g. with the tsplotter), then these selected parts are excluded from processing. If no corresponding selection files (*.atm) are there, all will be selected for processing.
• overlapping blocks: Due to the Hanning window not all data is processed with the same weighting. An overlapping of 0.3 or 0.4 will correct that and enhance the statistics.
• pre-stack in time domain: stacks time series with the given window length above. Only works with CSAMT.
• dipole TX time symmetric: N/A
• skip TX switching: N/A

### scale detrend hanning

Scales the electric field to mV/km and applies a detrend function as well as a Hanning window.

• scale E to mV/km: default on
• remove trend: default on
• apply hanning windows: default on

### FFT

The FFT module transforms the time series data into the spectral domain. The upper and lower parts of the spectra can be cut off. The idea is: if you have used cascaded bands with 4x filtering, the FFT of one band will overlap the other. This can be controlled with the overlapping. Additionally the spectral density varies, and it may be useful to cut the lower end by 3-5% or more.

• cut upper: cuts the upper part of the frequencies; use 0.05 to cut the upper 5%
• cut lower: cuts the lower part of the spectra; use 0.1 to cut the lower 10%.
• dump raw spectra: for debugging purposes only (this parameter should not be set by the user)

### calibrate spectra

The calibration module applies the transfer functions of the used sensors.
The influence of the sensor will be eliminated; in the magnetic spectra the units change from $mV \over {\sqrt{Hz}}$ to $nT \over {\sqrt{Hz}}$.

• calibration: auto: uses the classic calibration file *.txt from the cal directory; if not found, then the XML from the measdoc, and finally the built-in function (default). This way you can at any time override the XML in case. Builtin: uses a master calibration function which fits excellently for sampling rates up to 4 kHz and well for rates above. Theoretical: uses a theoretical function. The filename is a) SensorType + Serialnumber + .txt, like MFS06e1568.txt; b) in case the serial number is short, the file name is filled with '0' until the name length is 9: MFS06e005.txt; and c) for the very old MFS (without 'e') to 8 characters: MFS06005.txt. The coil type is taken from the ats header.
• dump calibration: dumps the finally used calibration data (this parameter should only be used for testing)
• dump cal spectra: for debugging purposes only (this parameter should not be set by the user)

### auto-cross-parzening

• xml MT site config file (empty for default): You only need this in case of EMAP
• cut upper spectra: this seems repeated from the FFT module; use only if a module other than the FFT module is used
• cut lower spectra: this seems repeated from the FFT module; use only if a module other than the FFT module is used
• parzen radius: 0.05 to 0.8; combines frequencies in the spectral domain. Higher values (0.25, 0.5) should create smoother data. Lower values can be used for more stable MT data, as well as to get points near interfering lines (e.g. close to 50/60 Hz)
• activate MT/CSAMT: switch between MT and CSAMT, MT scalar and CSAMT scalar. Hence that scalar data is mostly useless.
• Frequency List: is a drop-down menu. Default is the metronix frequency list, which we have been using for 20 years. The list contains 8 frequencies per decade. txmsmall and fluxgate are different lists.
The "freq_list_1.dat" to "freq_list_7.dat" lists are to be created by the user manually in the config directory. The list simply contains numbers in descending order:

4.000000000000e+01
3.000000000000e+01
2.200000000000e+01

The same applies for the CEA_TX, CSAMT_TX and VLF lists: you create them as ASCII files. generate is for CSAMT only: the frequencies will be calculated from the base TX frequency.

• Frequency List File (empty for default): In this case the drop-down selection is active. If you provide a file name like mylist.dat, this file will be taken. Do NOT provide a path; the file has to be placed in the config directory.
• base TX frequency: frequency of transmission. Put 8 if your TX transmits with 8 Hz and 0.25 if your TX transmits with 4 s.
• nth odd harmonics: 1, 2, 3: 1 uses the base frequency for processing only. This mostly gives the best result, because higher odd harmonics may vanish. 2 means that the second odd harmonic is used (if the TX base frequency is 70 Hz, 210 Hz is used additionally). 3: third odd harmonic (in our example 350 Hz). Hence that the amplitudes of the upper harmonics are dropping, and the data quality may drop as well.
• dipole 1 TX time[s]: time in seconds of the first dipole. Take even times like 8, 16, 32, 64 and so on in order to fit the most possible FFT windows
• dipole 2 TX time[s]: time in seconds of the second dipole; must be the same as above
• dipole 3 TX time[s]: N/A
• dipole 4 TX time[s]: N/A
• dipole 5 TX time[s]: N/A
• dipole 6 TX time[s]: N/A
• activate online processing: N/A
• dump CSAMT spectra infos: N/A
• dump parzen spectra: for debugging (this parameter should not be set by the user).

# MT Processing Modules

After finishing the auto- and cross spectra the MT modules are activated. You can select one or more modules at the same time. Depending on the memory and CPU of your computer you can choose the cat all option in the command line module.
This will lead to a vast amount of memory usage and parallel execution of the MT modules. cat all gives, however, the best results.

## Standard Parameters

Some processing parameters appear in almost any MT module:

• start index: 0 ... N, first tensor to process; default 0 (this parameter should not be set by the user)
• stop index: 0 ... N, last tensor to process; default 0 (first = last = 0 = take all) (this parameter should not be set by the user)
• start frequency: limit the frequency range, low [Hz] (this parameter should not be set by the user)
• stop frequency: limit the frequency range, high [Hz] (this parameter should not be set by the user)
• upper reject rho: reject $\rho$ if $\rho$xy and $\rho$yx exceed this limit
• lower reject rho: reject $\rho$ if $\rho$xy and $\rho$yx fall below this limit
• upper reject phi: reject $\phi$ if $\phi$xy and $\phi$yx exceed this limit
• lower reject phi: reject $\phi$ if $\phi$xy and $\phi$yx fall below this limit

Hence: the simple cut-off method works on the xy and yx elements only. Data is not rotated, fixed rotated or Swift rotated. It can be used to cut off segments where distortions can be identified by $\rho$ or $\phi$. This can happen if cables were broken and have been re-connected later. A working example is here: cut upper and lower.

• use n stdev for rejection: n=0=no calculation: reject tensors outside n times the standard deviation. E.g. for a Gauss distribution, 3 times would be definitively outside the confidence interval. If 2 times is selected, for example, and the zxy mean is 5 ± 0.75 and the zyx mean is 9 ± 1.2, all tensors with zxy between 3.5 and 6.5 and zyx between 6.6 and 11.4 will be selected. If zxy is 5.2 and zyx is 12.1, this tensor will NOT be selected (the criterion must be fulfilled for both elements simultaneously).
• recalculate stddev after reject: if the above is selected, turn this on to recalculate the standard deviation from the selected tensors (again); MUST be on if rejection by standard deviation is used.
• check quadrant: checks that the zxy are all positive (real and imaginary part) and the zyx are all negative (real and imaginary part). In most cases this assumption is true.

## Stack all

This is the most simple module. All tensors are simply stacked and the mean is used as estimator. Mostly this method is used as a reference, in order to compare "what would happen if I simply take all". The method indicates good frequency ranges quite well. In the dead band or at very high frequencies stack all can give surprisingly good results. This happens when the data is scattered strongly but still balanced out - whereas a statistical estimator (mean, median, coherency) cuts off the "wrong" outliers. Hence that when you have activated the coherency rejection in the command line module, stack all is not a pure stack-all method anymore (likewise when the quadrant check is activated in the standard parameters). This module contains no parameters other than the standard parameters. The mean is calculated as

$$\bar x = \frac{1}{N} \sum_{i=0}^{N-1} x_i \label{eq:mean}$$

and the standard deviation as

$$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=0}^{N-1}\left(x_i-\bar{x}\right)^{2}} = \sqrt{\frac{1}{N-1}\left(\sum_{i=0}^{N-1}{x}^{2}_{i} - \left( \sum_{i=0}^{N-1}{x}_{i} \right)^{2} /N \right)} \label{eq:stddev}$$

where the second form is faster and has less calculation noise. Hence that the 3σ interval defines an outlier in a Gauss distribution; however, this interval is much too big for MT. Better take 1.3, 1.7 or similar.

## Coherency Threshold

This classic module evaluates the coherency between Ex & Hy and Ey & Hx. If you have enough data you can use thresholds of 0.8 or more. If the selection is too strong it will lead to an empty EDI file.
• upper threshold: reject tensors where the coherency for zxy and zyx is higher than this value. Should be 1. Can be used (if not 1) to suppress data where the correlation is high but may come from a source other than MT (like correlated spikes in E and H).
• lower threshold: reject tensors where the coherency for zxy and zyx is less than this value. Maximum: 0.99.
• upper threshold %: N/A yet: reject x% of the "best data"
• lower threshold %: N/A yet: reject x% of the "poor data"
• replace zero selection with stack all: in case stack all was activated and NO tensors could be calculated for a certain frequency (because of poor data quality), this frequency will be replaced with "stack all". Note: this is not a good solution because you are mixing data from different processings.

## Median Processing

The median processing calculates the median of the zxy and zyx components of your tensor array: sort(z), take the N/2 element. However, if there are not enough stacks, the median is as useless as the mean. Here, too, it is recommended to have more than 20 stacks available for processing.

The median is a robust estimator against outliers. Note: you will not get the median itself if upper & lower = 1 (it will likely crash). Reason: the processing evaluates zxy and zyx, and they do not have the same median, so the EDI will be empty. Take 1.2 to 1.3 for the resistivities and 1.3 to 1.5 for the phases (phases are more scattered). Note: a too strong selection leads to empty EDI files. If the lower bound is set to 1.2 and the upper bound is set to 1.4, the resulting zxy and zyx values will be biased downwards (because you give preference to lower data). This can be desired if you believe that the data itself is biased downwards and you want to correct that.

• lower bound median rho: allow the lowest $\rho$xy and $\rho$yx to be n times lower than the median.
• upper bound median rho: allow the highest $\rho$xy and $\rho$yx to be n times higher than the median.
• lower bound median phi: allow the lowest $\phi$xy and $\phi$yx to be n times lower than the median.
• upper bound median phi: allow the highest $\phi$xy and $\phi$yx to be n times higher than the median.
• replace zero selection with stack all: in case stack all was activated and NO tensors could be calculated for a certain frequency (because of poor data quality), this frequency will be replaced with "stack all". Note: this is not a good solution because you are mixing data from different processings.

# ProcMT shell

procmt_createjob ts/Site_132_134_136_138/*/*xml > jobs/132.xml
procmt_createjob -useproc processings/mt_auto_rr2.xml processings/mt_auto.xml ts/Site_2/*/*xml > jobs/123.xml
sh shell/mkallproc.sh -useproc processings/mt_auto_rr2.xml processings/mt_auto.xml

# FAQ

Lowest / highest result in EDI: the lowest or highest frequency depends on sampling rate and FFT window length. By default the upper limit is fsample / 4. In case of fsample = 256 Hz the highest frequency would be 64 Hz - unless you have changed the additional cut-off in the FFT module. The FFT window length gives the theoretical lower limit. In case of 1024 points the lower limit would be 1 s (also in case the cut-off in the FFT module was not set). Now suppose you have used parzening. Parzening extends the frequencies to the left and right and adds frequencies for smoothing. In case of 0.3 the lowest period becomes 0.4 s, and the highest frequency also drops. So: in order to get lower frequencies or periods you filter the timeseries again. For the highest there is no solution (except using a smaller parzening).

Missing frequencies in EDI: means simply that the processing was not able to meet your conditions in terms of coherency threshold or median and/or additional criteria like quadrant check, phase/resistivity limit or standard deviation.
Normally you re-process the data with different settings, or you find out that the data quality is not acceptable at all. Another possibility (if you must have a constant frequency data set) is to activate the replacement in Coherency Threshold or Median Processing.

# Appendix

measdoc : one or more file(s) inside a measdir. Example: 319_2014-08-15_07-23-37_2014-08-15_20-20-00_R000_256H.xml indicates a recording with system 319 with start time, stop time, run number 000 and 256 Hz sampling frequency. The measdoc contains additional information for one or more ats files, like calibration data, processing instructions and system status during the recording.

measdir : contains one or more measdoc and ats files, like meas_2014-08-15_07-23-37. In general a measdir is a complete set of data. If you want to copy data it is always best and safest to copy a complete directory.

ats : advanced time series : binary data format with header (1024 bytes) followed by 32-bit integers (Intel format) - version 80. Version 1080 is the sliced version, which contains up to 1024 sliced recordings in the same file (e.g. you record 12 times a day at 1024 kHz and put this in one ats file). The sliced version is limited in processing and should not be used.

EDI : Electrical Data Interchange. Exchange format used in MT.

processing : consists of one or more modules bundled together. These instructions are saved inside one file which is used by ProcMT to initiate the MT processing.

job : consists of a processing and a list of files to process. A job can contain many MT processings.

FFT / DFT : the FFT (fast Fourier transform) is the computational realization of the DFT (discrete Fourier transform) inside the computer. The window length together with the sampling rate determines the highest and lowest frequency in the spectrum. The longer the window (more points from the timeseries), the more low frequencies the spectrum contains.
Increasing the window from 1024 to 8192 @ 512 Hz sampling rate is NOT the same as using a window of 1024 @ 64 Hz. Reason: the timeseries may contain errors, and the FIR filter will produce a different output compared to the longer FFT. If the data is of good quality the result will be almost the same (ref. filter compare). Additionally you want to save computation time when re-processing the data with other settings.

bandwidth : unit [Hz]. Depends on the sampling frequency / FFT length and on the Parzen window. You achieve a smaller bandwidth if you increase the FFT length. The bandwidth also decreases with a bigger Parzen window. In theory a smaller bandwidth increases the depth resolution and decreases the resistivity resolution. Since the MT transfer functions are smooth, you should not overestimate the bandwidth.

UTC, GPS, TAI :

• UTC: Coordinated Universal Time, the time used by the ADU system. Time is adjusted to the earth's rotation by "leap seconds".
• GPS: time of the atomic clocks of the GPS satellites, UTC + 17 s (July 2015), started 6 January 1980, 00:00:00.
• TAI: Temps Atomique International, atomic time, UTC + 17 s + 19 s (July 2015), started 1 January 1958.
• GMT: Greenwich Mean Time; not used, and is NOT UTC. GMT is still used as a TIMEZONE, which implies a location; UTC does not.
• Unix timestamp: synchronized to UTC (has leap seconds!), seconds since 1.1.1970, 0:00 GMT, usable for times AFTER 1.1.1972, from when UTC is used. So the timestamp does not "really" show "atomic clock seconds" since 1970. The difference between UTC and the atomic clocks becomes greater and greater because the earth's rotation slows down.
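The FAQ rule above for the lowest and highest frequency in the EDI can be sketched numerically. This is an illustrative calculation only — the function name and the treatment of the default fsample/4 cut-off are assumptions, not ProcMT code:

```python
def edi_frequency_range(f_sample, fft_length, upper_cutoff_divisor=4.0):
    """Estimate the usable frequency range of a processing run.

    The upper limit defaults to f_sample / 4, as described in the FAQ;
    the theoretical lower limit is one spectral line, f_sample / fft_length.
    Parzening shrinks this range further on both ends.
    """
    f_high = f_sample / upper_cutoff_divisor
    f_low = f_sample / fft_length
    return f_low, f_high

# Example from the FAQ: 256 Hz sampling with a 1024-point window.
f_low, f_high = edi_frequency_range(256.0, 1024)
print(f_high)        # 64.0 Hz, the fsample/4 limit
print(1.0 / f_low)   # 4.0 s, the longest theoretical period
```

This also makes the equivalence in the appendix visible: 8192 points @ 512 Hz and 1024 points @ 64 Hz yield the same theoretical lower limit.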
# Speed of Sound

The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. At 20 °C (68 °F), the speed of sound in air is about 343 meters per second (1,234.8 km/h; 1,125 ft/s; 767 mph; 667 kn), or one kilometre in about 2.9 s or one mile in about 4.7 s. It depends strongly on temperature, but also varies by several meters per second depending on which gases exist in the medium through which the sound wave is propagating. The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.

In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases; it travels faster in liquids; and faster still in solids. For example, as noted above, sound travels at about 343 m/s in air; it travels at about 1,480 m/s in water (4.3 times as fast as in air); and at about 5,120 m/s in iron (about 15 times as fast as in air). In an exceptionally stiff material such as diamond, sound travels at 12,000 metres per second (27,000 mph)[1] (about 35 times as fast as in air), which is around the maximum speed that sound will travel under normal conditions. Sound waves in solids are composed of compression waves (just as in gases and liquids), and a different type of sound wave called a shear wave, which occurs only in solids.
Shear waves in solids usually travel at different speeds than compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density.

In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number. Objects moving at speeds greater than Mach 1 are said to be traveling at supersonic speeds.

## History

Sir Isaac Newton computed the speed of sound in air as 979 feet per second (298 m/s), which is too low by about 15%.[2] Newton's analysis was good save for neglecting the (then unknown) effect of rapidly fluctuating temperature in a sound wave (in modern terms, sound wave compression and expansion of air is an adiabatic process, not an isothermal process). This error was later rectified by Laplace.[3]

During the 17th century there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second).[4] In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second.[4] Derham used a telescope from the tower of the church of St Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church.
The distance was known by triangulation, and thus the speed that the sound had travelled was calculated.[5]

## Basic concepts

The transmission of sound can be illustrated by using a model consisting of an array of spherical objects interconnected by springs. In real material terms, the spheres represent the material's molecules and the springs represent the bonds between them. Sound passes through the system by compressing and expanding the springs, transmitting the acoustic energy to neighboring spheres, which in turn transmit the energy to their own neighboring springs (bonds), and so on. The speed of sound through the model depends on the stiffness/rigidity of the springs and the mass of the spheres. As long as the spacing of the spheres remains constant, stiffer springs/bonds transmit energy more quickly, while heavier spheres transmit the energy more slowly.

In a real material, the stiffness of the springs is known as the "elastic modulus", and the mass corresponds to the material density. All other things being equal (ceteris paribus), sound will travel more slowly in spongy materials and faster in stiffer ones. Effects like dispersion and reflection can also be understood using this model. For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because solids are more difficult to compress than liquids, while liquids in turn are more difficult to compress than gases.

Some textbooks mistakenly state that the speed of sound increases with density.
This notion is illustrated by presenting data for three materials, such as air, water and steel, which also have vastly different compressibilities, more than making up for the density differences. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than in air, despite enormous differences in compressibility of the two media. The reason is that the larger density of water, which works to slow sound in water relative to air, nearly makes up for the compressibility differences in the two media.

A practical example can be observed in Edinburgh when the "One o'Clock Gun" is fired at the eastern end of Edinburgh Castle. Standing at the base of the western end of the Castle Rock, the sound of the Gun can be heard through the rock, slightly before it arrives by the air route, partly delayed by the slightly longer route. It is particularly effective if a multi-gun salute such as for "The Queen's Birthday" is being fired.

### Compression and shear waves

Pressure-pulse or compression-type wave (longitudinal wave) confined to a plane. This is the only type of sound wave that travels in fluids (gases and liquids). A pressure-type wave may also travel in solids, along with other types of waves (transverse waves, see below).

Transverse wave affecting atoms initially confined to a plane. This additional type of sound wave (additional type of elastic wave) travels only in solids, for it requires a sideways shearing motion which is supported by the presence of elasticity in the solid. The sideways shearing motion may take place in any direction which is at right angles to the direction of wave travel (only one shear direction is shown here, at right angles to the plane). Furthermore, the right-angle shear direction may change over time and distance, resulting in different types of polarization of shear waves.

In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types.
A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations.

These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first and rocking transverse waves seconds later.

The speed of a compression wave in a fluid is determined by the medium's compressibility and density. In solids, the compression waves are analogous to those in fluids, depending on compressibility and density, but with the additional factor of shear modulus which affects compression waves due to off-axis elastic energies which are able to influence effective tension and relaxation in a compression. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density.

## Equations

The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas meaning "velocity". For fluids in general, the speed of sound c is given by the Newton-Laplace equation:

${\displaystyle c={\sqrt {\frac {K_{s}}{\rho }}},}$

where

• Ks is a coefficient of stiffness, the isentropic bulk modulus (or the modulus of bulk elasticity for gases);
• ρ is the density.
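As a quick numerical check of the Newton-Laplace equation, the sketch below uses rough literature values for water (isentropic bulk modulus about 2.2 GPa, density about 1000 kg/m³). These input numbers are illustrative assumptions, not values taken from this article:

```python
import math

def sound_speed(bulk_modulus, density):
    # Newton-Laplace equation: c = sqrt(K_s / rho)
    return math.sqrt(bulk_modulus / density)

# Water at room temperature (approximate literature values):
c_water = sound_speed(2.2e9, 1000.0)   # roughly 1.48 km/s
```

The result lands close to the roughly 1,480 m/s quoted for water earlier in the article.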
Thus the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material and decreases with an increase in density. For ideal gases, the bulk modulus K is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature.

For general equations of state, if classical mechanics is used, the speed of sound c is given by

${\displaystyle c={\sqrt {\left({\frac {\partial p}{\partial \rho }}\right)_{s}}},}$

where

• p is the pressure;
• ρ is the density, and the derivative is taken isentropically, that is, at constant entropy s.

If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations.

In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2, which is a dispersive medium, and causes dispersion in air at ultrasonic frequencies.[6] In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation. Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description.

## Dependence on the properties of the medium

The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus) and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors with the addition of a dependence on compressibility.
In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect.

In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of temperature and molecular structure important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it). In low-molecular-weight gases such as helium, sound propagates faster than in heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas.

For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature. At a constant temperature, the gas pressure has no effect on the speed of sound, since the density will increase, and since pressure and density (also proportional to pressure) have equal but opposite effects on the speed of sound, the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density - just as in liquids - but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio, which can be independently derived from temperature and molecular composition (see derivations below).
Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas.

In the non-ideal gas behavior regime, for which the van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure.

Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%-0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect.

## Altitude variation and implications for atmospheric acoustics

Density and pressure decrease smoothly with altitude, but temperature (red) does not. The speed of sound (blue) depends only on the complicated temperature variation at altitude and can be calculated from it, since isolated density and pressure effects on the speed of sound cancel each other. The speed of sound increases with height in two regions of the stratosphere and thermosphere, due to heating effects in these regions.

In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent solely upon temperature; see Details below. In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature.

Since temperature (and thus the speed of sound) decreases with increasing altitude throughout the troposphere, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source.[7] The decrease of the speed of sound with height is referred to as a negative sound speed gradient. However, there are variations in this trend above the troposphere.
In particular, in the stratosphere, the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive speed-of-sound gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the aptly named thermosphere.

## Practical formula for dry air

Approximation of the speed of sound in dry air based on the heat capacity ratio (in green) against the truncated Taylor expansion (in red).

The approximate speed of sound in dry (0% humidity) air, in meters per second, at temperatures near 0 °C, can be calculated from

${\displaystyle c_{\mathrm {air} }=(331.3+0.606\cdot \vartheta )~~~\mathrm {m/s} ,}$

where ${\displaystyle \vartheta }$ is the temperature in degrees Celsius (°C).

This equation is derived from the first two terms of the Taylor expansion of the following more accurate equation:

${\displaystyle c_{\mathrm {air} }=331.3~{\sqrt {1+{\frac {\vartheta }{273.15}}}}~~~~\mathrm {m/s} .}$

Dividing the first part, and multiplying the second part, on the right hand side, by ${\displaystyle {\sqrt {273.15}}}$ gives the exactly equivalent form

${\displaystyle c_{\mathrm {air} }=20.05~{\sqrt {\vartheta +273.15}}~~~~\mathrm {m/s} ,}$

which can also be written as

${\displaystyle c_{\mathrm {air} }=20.05~{\sqrt {T/\mathrm{K} }}~~~~\mathrm {m/s} ,}$

where T denotes the thermodynamic temperature.

The value of 331.3 m/s, which represents the speed at 0 °C (273.15 K), is based on theoretical (and some measured) values of the heat capacity ratio, γ, as well as on the fact that at 1 atm real air is very well described by the ideal gas approximation. Commonly found values for the speed of sound at 0 °C may vary from 331.2 to 331.6 m/s due to the assumptions made when it is calculated. If the ideal gas γ is assumed to be exactly 1.4, the speed at 0 °C is calculated (see section below) to be 331.3 m/s, the coefficient used above.
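As a sanity check on the two formulas above, here is a short sketch comparing the linear approximation with the square-root form (the helper names are hypothetical, chosen for illustration):

```python
import math

def c_linear(theta):
    # Truncated Taylor expansion: c = 331.3 + 0.606 * theta  [m/s]
    return 331.3 + 0.606 * theta

def c_sqrt(theta):
    # More accurate form: c = 331.3 * sqrt(1 + theta / 273.15)  [m/s]
    return 331.3 * math.sqrt(1.0 + theta / 273.15)

# The two agree exactly at 0 degrees C and drift apart slowly with temperature:
for theta in (0.0, 20.0, 35.0):
    print(theta, round(c_linear(theta), 2), round(c_sqrt(theta), 2))
```

At 20 °C the two forms differ by only about 0.2 m/s, which is why the linear version is adequate near room temperature.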
This equation is correct to a much wider temperature range, but still depends on the approximation of the heat capacity ratio being independent of temperature, and for this reason will fail, particularly at higher temperatures. It gives good predictions in relatively dry, cold, low-pressure conditions, such as the Earth's stratosphere. The equation fails at extremely low pressures and short wavelengths, due to dependence on the assumption that the wavelength of the sound in the gas is much longer than the average mean free path between gas molecule collisions. A derivation of these equations will be given in the following section. A graph comparing the results of the two equations uses a slightly different value for the speed of sound at 0 °C.

## Details

### Speed of sound in ideal gases and air

For an ideal gas, K (the bulk modulus in the equations above, equivalent to C, the coefficient of stiffness in solids) is given by

${\displaystyle K=\gamma \cdot p,}$

thus, from the Newton-Laplace equation above, the speed of sound in an ideal gas is given by

${\displaystyle c={\sqrt {\gamma \cdot {p \over \rho }}},}$

where

• γ is the adiabatic index, also known as the isentropic expansion factor. It is the ratio of the specific heat of a gas at constant pressure to that at constant volume (${\displaystyle C_{p}/C_{v}}$), and arises because a classical sound wave induces an adiabatic compression, in which the heat of the compression does not have enough time to escape the pressure pulse, and thus contributes to the pressure induced by the compression;
• p is the pressure;
• ρ is the density.

Using the ideal gas law to replace p with nRT/V, and replacing ρ
with nM/V, the equation for an ideal gas becomes

${\displaystyle c_{\mathrm {ideal} }={\sqrt {\gamma \cdot {p \over \rho }}}={\sqrt {\gamma \cdot R\cdot T \over M}}={\sqrt {\gamma \cdot k\cdot T \over m}},}$

where

• cideal is the speed of sound in an ideal gas;
• R (approximately 8.314 J/(mol·K)) is the molar gas constant (universal gas constant);[8]
• k is the Boltzmann constant;
• γ (gamma) is the adiabatic index. At room temperature, where thermal energy is fully partitioned into rotation (rotations are fully excited) but quantum effects prevent excitation of vibrational modes, the value is 7/5 = 1.4 for diatomic molecules, according to kinetic theory. Gamma is actually experimentally measured over a range from 1.3991 to 1.403 at 0 °C for air. Gamma is exactly 5/3 for monatomic gases such as noble gases, and it is approximately 1.3 for triatomic molecular gases;
• T is the absolute temperature;
• M is the molar mass of the gas. The mean molar mass for dry air is about 0.0289645 kg/mol;
• n is the number of moles;
• m is the mass of a single molecule.

This equation applies only when the sound wave is a small perturbation on the ambient condition, and certain other conditions are fulfilled, as noted below. Calculated values for cair have been found to vary slightly from experimentally determined values.[9]

Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of ${\displaystyle {\sqrt {\gamma }}}$ but was otherwise correct.

Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions).
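Plugging the constants above into the ideal-gas formula reproduces the familiar sea-level value; a minimal sketch (the numerical values for γ, R and M are the ones quoted in this section):

```python
import math

def c_ideal_gas(gamma, T, M, R=8.314):
    # c = sqrt(gamma * R * T / M), with T in kelvin and M in kg/mol
    return math.sqrt(gamma * R * T / M)

# Dry air at 20 degrees C (293.15 K):
c_air = c_ideal_gas(1.4, 293.15, 0.0289645)   # about 343 m/s
```

Evaluating the same expression at 273.15 K returns roughly 331.3 m/s, the 0 °C coefficient used in the practical formula.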
Also, for diatomic gases the use of γ = 1.4 requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir), but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy mode have energies too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also at temperatures considerably below room temperature (see tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon.

For air, we introduce the shorthand

${\displaystyle R_{*}=R/M_{\mathrm {air} }.}$

In addition, we switch to the Celsius temperature ${\displaystyle \vartheta =T-273.15}$, which is useful to calculate air speed in the region near 0 °C (about 273 K). Then, for dry air,

${\displaystyle c_{\mathrm {ideal} }={\sqrt {\gamma \cdot R_{*}\cdot T}}={\sqrt {\gamma \cdot R_{*}\cdot (\vartheta +273.15)}},}$

${\displaystyle c_{\mathrm {ideal} }={\sqrt {\gamma \cdot R_{*}\cdot 273.15}}\cdot {\sqrt {1+{\frac {\vartheta }{273.15}}}},}$

where ${\displaystyle \vartheta }$ (theta) is the temperature in degrees Celsius (°C).
Substituting the numerical values

${\displaystyle R=8.314\,510~\mathrm {J/(mol\cdot K)} }$

for the molar gas constant, and

${\displaystyle M_{\mathrm {air} }=0.028\,964\,5~\mathrm {kg/mol} }$

for the mean molar mass of air, and using the ideal diatomic gas value of γ = 1.4, we have

${\displaystyle c_{\mathrm {air} }=331.3~~{\sqrt {1+{\frac {\vartheta }{273.15}}}}~~~\mathrm {m/s} .}$

Finally, Taylor expansion of the remaining square root in ${\displaystyle \vartheta }$ yields

${\displaystyle c_{\mathrm {air} }=331.3~(1+{\frac {\vartheta }{2\cdot 273.15}})~~~\mathrm {m/s} ,}$

${\displaystyle c_{\mathrm {air} }=(331.3+0.606\cdot \vartheta )~~~\mathrm {m/s} .}$

The above derivation includes the first two equations given in the "Practical formula for dry air" section above.

### Effects due to wind shear

The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source.[7] Wind shear of 4 m/(s·km) can produce refraction equal to that of a typical temperature lapse rate.[10] Higher values of wind gradient will refract sound downward toward the surface in the downwind direction,[11] eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the sound is not being carried along by the wind.[12]

For sound propagation, the exponential variation of wind speed with height can be defined as follows:[13]

${\displaystyle U(h)=U(0)h^{\zeta },}$

${\displaystyle {\frac {\mathrm {d} U}{\mathrm {d} H}}(h)=\zeta {\frac {U(h)}{h}},}$

where

• U(h) is the speed of the wind at height h;
• ζ is the exponential coefficient based on ground surface roughness, typically between 0.08 and 0.52;
• dU/dH(h) is the expected wind gradient at height h.
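The wind-profile pair above can be implemented directly; a minimal sketch (the function names and the chosen ζ value are assumptions for illustration):

```python
def wind_speed(u_ref, h, zeta):
    # Power-law profile: U(h) = U(0) * h**zeta
    return u_ref * h ** zeta

def wind_gradient(u_ref, h, zeta):
    # dU/dH(h) = zeta * U(h) / h
    return zeta * wind_speed(u_ref, h, zeta) / h

# Example: reference speed 5 m/s, roughness exponent 0.3
# (within the quoted 0.08-0.52 range), evaluated at 10 m height.
u = wind_speed(5.0, 10.0, 0.3)
g = wind_gradient(5.0, 10.0, 0.3)
```

Note that the gradient formula is just the analytic derivative of the power law, so the two functions are consistent by construction.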
In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle,[14] because they could not hear the sounds of battle only six miles downwind.[15]

### Tables

In the standard atmosphere:

• T0 is 273.15 K (= 0 °C), giving a theoretical value of 331.3 m/s. Values ranging from 331.3 to 331.6 m/s may be found in reference literature, however;
• T20 is 293.15 K (= 20 °C), giving a value of 343.21 m/s;
• T25 is 298.15 K (= 25 °C), giving a value of 346.13 m/s.

In fact, assuming an ideal gas, the speed of sound c depends on temperature only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere (actual conditions may vary).

Effect of temperature on properties of air:

| Temperature T (°C) | Speed of sound c (m/s) | Density of air ρ (kg/m³) | Characteristic specific acoustic impedance z₀ (Pa·s/m) |
|---|---|---|---|
| 35 | 351.88 | 1.1455 | 403.2 |
| 30 | 349.02 | 1.1644 | 406.5 |
| 25 | 346.13 | 1.1839 | 409.4 |
| 20 | 343.21 | 1.2041 | 413.3 |
| 15 | 340.27 | 1.2250 | 416.9 |
| 10 | 337.31 | 1.2466 | 420.5 |
| 5 | 334.32 | 1.2690 | 424.3 |
| 0 | 331.30 | 1.2922 | 428.0 |
| −5 | 328.25 | 1.3163 | 432.1 |
| −10 | 325.18 | 1.3413 | 436.1 |
| −15 | 322.07 | 1.3673 | 440.3 |
| −20 | 318.94 | 1.3943 | 444.6 |
| −25 | 315.77 | 1.4224 | 449.1 |

Given normal atmospheric conditions, the temperature, and thus speed of sound, varies with altitude:

| Altitude | m/s | km/h | mph | kn |
|---|---|---|---|---|
| Sea level | 340 | 1,225 | 761 | 661 |
| Cruising altitude of commercial jets, and first supersonic flight | 295 | 1,062 | 660 | 573 |
| 29,000 m (Flight of X-43A) | 301 | 1,083 | 673 | 585 |

## Effect of frequency and gas composition

### General physical considerations

The medium in which a sound wave is travelling does not always respond adiabatically, and as a result, the speed of sound can vary with frequency.[16] The limitations of the concept of speed of sound due to extreme attenuation are also of concern.
The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes.[9] The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the sound wave is considerably longer than the mean free path of molecules in a gas. The molecular composition of the gas contributes both through the mass (M) of the molecules and through their heat capacities, so both influence the speed of sound. In general, at the same molecular mass, monatomic gases have a slightly higher speed of sound (over 9% higher) because they have a higher γ (5/3, versus 7/5 for diatomic gases). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of ${\displaystyle {c_{\mathrm {gas,monatomic} } \over c_{\mathrm {gas,diatomic} }}={\sqrt {{5/3} \over {7/5}}}={\sqrt {25 \over 21}}=1.091\ldots }$ This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than deuterium because adiabatic compression heats helium more, since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases.) Note that in this example we have assumed that temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity).
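The monatomic-to-diatomic speed ratio above is easy to verify numerically (a short sketch):

```python
import math

# Speed-of-sound ratio for two ideal gases of equal molar mass at the
# same temperature: c is proportional to sqrt(gamma), so the ratio is
# sqrt(gamma_monatomic / gamma_diatomic).
GAMMA_MONATOMIC = 5 / 3   # e.g. helium
GAMMA_DIATOMIC = 7 / 5    # e.g. deuterium (rotation excited, no vibration)

ratio = math.sqrt(GAMMA_MONATOMIC / GAMMA_DIATOMIC)
print(round(ratio, 4))           # 1.0911
print(round(100 * (ratio - 1)))  # 9 (the "9% difference" in the text)
```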
However, vibrational modes simply cause gammas which decrease toward 1, since vibration modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics.

### Practical application to air

By far the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about 0.6 m/s per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases.

The speed of sound is raised by humidity but decreased by carbon dioxide. The difference between 0% and 100% humidity is small at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature. The carbon dioxide content of air is not fixed, due to both carbon pollution and human breath (e.g., in the air blown through wind instruments).

The dependence on frequency and pressure is normally insignificant in practical applications. In dry air, the speed of sound increases only slightly as the frequency rises through the low audible range, and for higher audible frequencies it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path.[17]

## Mach number

U.S. Navy F/A-18 traveling near the speed of sound. The white halo consists of condensed water droplets formed by the sudden drop in air pressure behind the shock cone around the aircraft (see Prandtl-Glauert singularity).[18]

Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for reasons explained, Mach number is a function of temperature.
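The temperature dependence of Mach number can be sketched from the ideal-gas speed of sound. The airspeed and temperatures below are illustrative (sea-level and stratospheric standard-atmosphere temperatures), not values from the text:

```python
import math

# Mach number as a function of true airspeed and local temperature,
# using the ideal-gas speed of sound (gamma = 1.4 for air).
GAMMA = 1.4
R_SPECIFIC = 287.058  # specific gas constant of dry air, J/(kg*K)

def speed_of_sound(t_kelvin: float) -> float:
    return math.sqrt(GAMMA * R_SPECIFIC * t_kelvin)

def mach(true_airspeed: float, t_kelvin: float) -> float:
    return true_airspeed / speed_of_sound(t_kelvin)

# The same airspeed gives a higher Mach number in colder air:
print(round(mach(250.0, 288.15), 3))  # 0.735 on a sea-level standard day
print(round(mach(250.0, 216.65), 3))  # 0.847 at a typical stratospheric temperature
```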
Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed.

## Experimental methods

A range of different methods exist for the measurement of the speed of sound in air. The earliest reasonably accurate estimate was made by William Derham and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant, who would fire a shotgun at a pre-determined time from a conspicuous point some miles away, across the countryside; the firing could be confirmed by telescope. Derham then measured the interval between seeing the gunsmoke and the arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided the velocity. Lastly, by making many observations over a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200-400 meters, without needing something as loud as a shotgun.

### Single-shot timing methods

The simplest concept is the measurement made using two microphones and a fast recording device such as a digital storage scope. This method uses the following idea. If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:

1. The distance between the microphones (x), called the microphone basis.
2.
The time delay between the signals reaching the different microphones (t).

Then v = x/t.

### Other methods

In these methods, the time measurement has been replaced by a measurement of the inverse of time (frequency).

Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup.

A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system, the pipe can be brought to resonance if the length of the air column in the pipe is equal to (1 + 2n)λ/4, where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe, it is best to find two or more points of resonance and then measure half a wavelength between these. Here, v = fλ.

### High-precision measurements in air

The effect of impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will, in turn, contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review[19] found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were done at one temperature but corrected in order to report them at standard temperature. The result was reported for dry air at STP over a range of audio frequencies.
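The resonance-tube reduction above can be sketched numerically. The tuning-fork frequency and the resonance lengths below are illustrative: they are generated from an assumed true speed of sound purely to show that measuring half a wavelength between successive resonances recovers it:

```python
# Resonance-tube method: resonances occur at air-column lengths
# L_n = (1 + 2n) * wavelength / 4. The spacing between two successive
# resonances is half a wavelength, and v = f * wavelength.
TRUE_SPEED = 343.0   # assumed true speed of sound (m/s), for illustration
FREQ = 512.0         # tuning-fork frequency (Hz), illustrative

wavelength = TRUE_SPEED / FREQ
resonances = [(1 + 2 * n) * wavelength / 4 for n in range(3)]  # L_0, L_1, L_2

# Experimenter's reduction: half a wavelength between successive resonances.
half_wavelength = resonances[1] - resonances[0]
v_measured = FREQ * 2 * half_wavelength
print(round(v_measured, 1))  # 343.0
```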
## Non-gaseous media

### Speed of sound in solids

#### Three-dimensional solids

In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities dependent on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by[20]

${\displaystyle c_{\mathrm {solid,p} }={\sqrt {\frac {K+{\frac {4}{3}}G}{\rho }}}={\sqrt {\frac {E(1-\nu )}{\rho (1+\nu )(1-2\nu )}}},}$

${\displaystyle c_{\mathrm {solid,s} }={\sqrt {\frac {G}{\rho }}},}$

where

• K is the bulk modulus of the elastic material;
• G is the shear modulus;
• E is Young's modulus;
• ρ is the density;
• ν is Poisson's ratio.

The last quantity is not an independent one, as E = 2G(1 + ν). Note that the speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only.

Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before arrival of waves that produce a side-to-side motion. For example, substituting the elastic moduli and density of a typical steel alloy into the first expression yields a compressional speed csolid,p in reasonable agreement with the value measured experimentally for a (possibly different) type of steel.[20][21] The shear speed csolid,s is estimated from the same numbers.

#### One-dimensional solids

The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure.
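The elastic-wave formulas above can be sketched for a steel-like material. The material constants below are generic textbook values for a structural steel, used here as assumptions for illustration (the article's own numerical example was lost in extraction):

```python
import math

# Elastic wave speeds in a homogeneous 3-D solid, plus the thin-rod speed.
# Material constants are generic textbook values for a structural steel
# (assumptions for illustration, not taken from this article).
E = 200e9     # Young's modulus, Pa
NU = 0.30     # Poisson's ratio
RHO = 7850.0  # density, kg/m^3

G = E / (2 * (1 + NU))  # shear modulus via E = 2G(1 + nu)

c_pressure = math.sqrt(E * (1 - NU) / (RHO * (1 + NU) * (1 - 2 * NU)))
c_shear = math.sqrt(G / RHO)
c_rod = math.sqrt(E / RHO)  # one-dimensional "long rod" pressure-wave speed

print(round(c_pressure))  # 5856 m/s
print(round(c_shear))     # 3130 m/s
print(round(c_rod))       # 5048 m/s, slightly below c_pressure as expected
```

Note that the rod speed comes out below the bulk compressional speed, matching the statement in the one-dimensional-solids discussion that the long-rod value is always slightly less than in a homogeneous 3-D solid.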
In rods whose diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by:

${\displaystyle c_{\mathrm {solid} }={\sqrt {\frac {E}{\rho }}},}$

where E is Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus. This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material.

### Speed of sound in liquids

Speed of sound in water vs temperature.

In a fluid, the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces). Hence the speed of sound in a fluid is given by

${\displaystyle c_{\mathrm {fluid} }={\sqrt {\frac {K}{\rho }}},}$

where K is the bulk modulus of the fluid.

#### Water

In fresh water, sound travels at about 1,480 m/s at room temperature (see the External Links section below for online calculators).[22] Applications of underwater sound can be found in sonar, acoustic communication and acoustical oceanography.

#### Seawater

Speed of sound as a function of depth at a position north of Hawaii in the Pacific Ocean derived from the 2005 World Ocean Atlas. The SOFAR channel spans the minimum in the speed of sound at about 750-m depth.

In salt water that is free of air bubbles or suspended sediment, sound travels at about 1,500 m/s (at 10 °C and 3% salinity, by one method).[23] The speed of sound in seawater depends on pressure (hence depth), temperature, and salinity, and empirical equations have been derived to accurately calculate the speed of sound from these variables.[24][25] Other factors affecting the speed of sound are minor. Since in most ocean regions temperature decreases with depth, the profile of the speed of sound with depth decreases to a minimum at a depth of several hundred meters.
Below the minimum, sound speed increases again, as the effect of increasing pressure overcomes the effect of decreasing temperature (right).[26] For more information see Dushaw et al.[27]

A simple empirical equation for the speed of sound in sea water with reasonable accuracy for the world's oceans is due to Mackenzie:[28]

${\displaystyle c(T,S,z)=a_{1}+a_{2}T+a_{3}T^{2}+a_{4}T^{3}+a_{5}(S-35)+a_{6}z+a_{7}z^{2}+a_{8}T(S-35)+a_{9}Tz^{3},}$

where

• T is the temperature in degrees Celsius;
• S is the salinity in parts per thousand;
• z is the depth in meters.

The constants a1, a2, ..., a9 are

{\displaystyle {\begin{aligned}a_{1}&=1,448.96,&a_{2}&=4.591,&a_{3}&=-5.304\times 10^{-2},\\a_{4}&=2.374\times 10^{-4},&a_{5}&=1.340,&a_{6}&=1.630\times 10^{-2},\\a_{7}&=1.675\times 10^{-7},&a_{8}&=-1.025\times 10^{-2},&a_{9}&=-7.139\times 10^{-13},\end{aligned}}}

with a published check value for a reference temperature, salinity, and depth. This equation has a small standard error for salinity between 25 and 40 ppt. See Technical Guides - Speed of Sound in Sea-Water for an online calculator. (Note: the sound speed vs. depth graph does not correlate directly to the Mackenzie formula, because temperature and salinity vary at different depths. When T and S are held constant, the formula itself is always increasing with depth.)

Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso[29] and the Chen-Millero-Li Equation.[27][30]

### Speed of sound in plasma

The speed of sound in a plasma for the common case that the electrons are hotter than the ions (but not too much hotter) is given by the formula (see here)

${\displaystyle c_{s}=(\gamma ZkT_{\mathrm {e} }/m_{\mathrm {i} })^{1/2}=9.79\times 10^{3}(\gamma ZT_{e}/\mu )^{1/2}~\mathrm {m/s} ,}$

where γ is the adiabatic index, Z is the ion charge state, k is Boltzmann's constant, Te is the electron temperature, mi is the ion mass, and μ is the ratio of the ion mass to the proton mass (the numerical form assumes Te expressed in electronvolts). In contrast to a gas, the pressure and the density are provided by separate species, the pressure by the electrons and the density by the ions.
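The Mackenzie equation above is straightforward to implement. The check point T = 25 °C, S = 35 ppt, z = 1000 m used below is the commonly quoted reference point, an assumption here since the article's own check values were lost in extraction; the coefficients are exactly those listed above:

```python
# Mackenzie (1981) empirical equation for the speed of sound in sea water.
A = [1448.96, 4.591, -5.304e-2, 2.374e-4, 1.340,
     1.630e-2, 1.675e-7, -1.025e-2, -7.139e-13]

def mackenzie(T: float, S: float, z: float) -> float:
    """Speed of sound in m/s: T in deg C, S in parts per thousand, z in meters."""
    return (A[0] + A[1] * T + A[2] * T**2 + A[3] * T**3
            + A[4] * (S - 35) + A[5] * z + A[6] * z**2
            + A[7] * T * (S - 35) + A[8] * T * z**3)

# Commonly quoted check point (assumed, not from this article):
print(round(mackenzie(25.0, 35.0, 1000.0), 3))  # 1550.744
```

Holding T and S fixed, the function is increasing in z over ocean depths, consistent with the note above that the depth minimum in real profiles comes from temperature and salinity varying with depth, not from the formula itself.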
The two are coupled through a fluctuating electric field. When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean, there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth. In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined to a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before becoming undetectably faint. A similar effect occurs in the atmosphere: Project Mogul successfully used this effect to detect a nuclear explosion at a considerable distance.

## References

1. ^ Speed of Sound
2. ^ "The Speed of Sound". mathpages.com. Retrieved 2015.
3. ^ Bannon, Mike; Kaputa, Frank. "The Newton-Laplace Equation and Speed of Sound". Thermal Jackets. Retrieved 2015.
4. ^ a b Murdin, Paul (25 December 2008). Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth. Springer Science & Business Media. pp. 35-36. ISBN 9780387755342.
5. ^ Fox, Tony (2003). Essex Journal. Essex Arch & Hist Soc. pp. 12-16.
6. ^ Dean, E. A. (August 1979). Atmospheric Effects on the Speed of Sound. Technical report of Defense Technical Information Center.
7. ^ a b Everest, F. (2001). The Master Handbook of Acoustics. New York: McGraw-Hill. pp. 262-263. ISBN 978-0-07-136097-5.
8. ^ "CODATA Value: molar gas constant". Physics.nist.gov. Retrieved 2010.
9. ^ a b U.S. Standard Atmosphere, 1976. U.S. Government Printing Office, Washington, D.C., 1976.
10. ^ Uman, Martin (1984).
Lightning. New York: Dover Publications. ISBN 978-0-486-64575-9.
11. ^ Volland, Hans (1995). Handbook of Atmospheric Electrodynamics. Boca Raton: CRC Press. p. 22. ISBN 978-0-8493-8647-3.
12. ^ Singal, S. (2005). Noise Pollution and Control Strategy. Oxford: Alpha Science International. p. 7. ISBN 978-1-84265-237-4. It may be seen that refraction effects occur only because there is a wind gradient and it is not due to the result of sound being convected along by the wind.
13. ^ Bies, David (2004). Engineering Noise Control, Theory and Practice. London: Spon Press. p. 235. ISBN 978-0-415-26713-7. As wind speed generally increases with altitude, wind blowing towards the listener from the source will refract sound waves downwards, resulting in increased noise levels.
14. ^ Cornwall, Sir (1996). Grant as Military Commander. New York: Barnes & Noble. p. 92. ISBN 978-1-56619-913-1.
15. ^ Cozens, Peter (2006). The Darkest Days of the War: the Battles of Iuka and Corinth. Chapel Hill: The University of North Carolina Press. ISBN 978-0-8078-5783-0.
16. ^ A. B. Wood, A Textbook of Sound (Bell, London, 1946).
17. ^ "Speed of Sound in Air". Phy.mtu.edu. Retrieved 2014.
18. ^ Nemiroff, R.; Bonnell, J., eds. (19 August 2007). "A Sonic Boom". Astronomy Picture of the Day. NASA. Retrieved 2010.
19. ^ Zuckerwar, Handbook of the Speed of Sound in Real Gases, p. 52.
20. ^ a b L. E. Kinsler et al. (2000), Fundamentals of Acoustics, 4th ed., John Wiley and Sons Inc., New York, USA.
21. ^ J. Krautkrämer and H. Krautkrämer (1990), Ultrasonic Testing of Materials, 4th fully revised edition, Springer-Verlag, Berlin, Germany, p. 497.
22. ^ "Speed of Sound in Water at Temperatures between 32-212 °F (0-100 °C) - imperial and SI units". The Engineering Toolbox.
23. ^ Wong, George S. K.; Zhu, Shi-ming (1995). "Speed of sound in seawater as a function of salinity, temperature, and pressure". The Journal of the Acoustical Society of America. 97 (3): 1732. Bibcode:1995ASAJ...97.1732W.
doi:10.1121/1.413048.
24. ^
25. ^ Robinson, Stephen (22 Sep 2005). "Technical Guides - Speed of Sound in Sea-Water". National Physical Laboratory. Retrieved 2016.
26. ^ "How Fast Does Sound Travel?". Discovery of Sound in the Sea. University of Rhode Island. Retrieved 2010.
27. ^ a b Dushaw, Brian D.; Worcester, P. F.; Cornuelle, B. D.; Howe, B. M. (1993). "On Equations for the Speed of Sound in Seawater". Journal of the Acoustical Society of America. 93 (1): 255-275. Bibcode:1993ASAJ...93..255D. doi:10.1121/1.405660.
28. ^ Mackenzie, Kenneth V. (1981). "Discussion of sea-water sound-speed determinations". Journal of the Acoustical Society of America. 70 (3): 801-806. Bibcode:1981ASAJ...70..801M. doi:10.1121/1.386919.
29. ^ Del Grosso, V. A. (1974). "New equation for speed of sound in natural waters (with comparisons to other equations)". Journal of the Acoustical Society of America. 56 (4): 1084-1091. Bibcode:1974ASAJ...56.1084D. doi:10.1121/1.1903388.
30. ^ Meinen, Christopher S.; Watts, D. Randolph (1997). "Further Evidence that the Sound-Speed Algorithm of Del Grosso Is More Accurate Than that of Chen and Millero". Journal of the Acoustical Society of America. 102 (4): 2058-2062. Bibcode:1997ASAJ..102.2058M. doi:10.1121/1.419655.
# Rape is usually about sex drive

#### ApostateAbe
##### The past is an asshole, so f*** it

If you have been to college any time after the year 2000, then you have heard the slogan, "Rape is not about sex, it is about power," or something similar. It means that rapists rape mainly because men, as foot soldiers of the powerful patriarchy, are trying to oppress women, and that a man's sex drive does not have much to do with rape. The popularity of that slogan on campuses is bad enough, but you have probably never heard anybody respectable disagree with it. Even those who disagree with it don't want to come off as pro-rape, nor do they want to supply any rhetorical points in favor of the rape culture.

The slogan isn't just scientifically wrong. It has destructive consequences, namely that any advice to women to protect themselves from rapey men is denounced as blaming the victim. Not that such denouncements would follow even within this theory, but the thinking goes: what we really should be doing is teaching men not to rape. It seems to make sense, because, on this theory, rape is a learned behavior, and we only need to undo that bad teaching. That would be a more efficient use of our opportunities to communicate.

You really have to go out of your way to know how the theory contrasts not just with any lay fool's grasp of reality but also with the body of data on the matter. It is not enough to know that the theory was constructed on anecdotes selected by activist authors and reinforced by a lot of bad arguments. A pretty good rundown was provided by John Alcock, 2001, The Triumph of Sociobiology, pages 206 to 211. I supplied the text below, along with Figure 9.5 and the footnotes.

The most unpleasant and damaging manifestation of the conflict between the sexes lies in the area of rape and other forms of coercive sex.
Here too I believe that evolutionary theory has something important and practical to tell us about the phenomenon, if only we can put aside ideological blinders and a belief in the naturalistic fallacy [311]. These requirements will not be easily met, given the tendency of many to invoke the naturalistic fallacy when reacting to evolutionary analyses of coercive sex. Let a biologist attempt to explain why men rape and he can be guaranteed to hear that the hypothesis is not only dangerous but morally repugnant. And they will be told so in high dudgeon, as in "it seems quite clear that the biologicization of rape and the dismissal of social or 'moral' factors will... tend to legitimate rape" (p. 383 in [112]) and "it is reductive and reactionary to isolate rape from other forms of violent antisocial behavior and dignify it with adaptive significance" (p. 382 in [54]). Outbursts of this sort occur because the commentators believe that if rape were shown to be adaptive, as defined in evolutionary terms, then it would also be morally legitimate and socially defensible. Although the distinction in meaning between "evolved" and "moral" evidently is not easily grasped, nothing commands us to believe that biologically adaptive traits are necessarily socially desirable. Furthermore, the standard feminist position on coercive sex is founded on ideological, as opposed to evidentiary, grounds. Inspired by Susan Brownmiller's Against Our Will, where she writes, "all rape is an exercise in power" (p. 256) and "is nothing more or less than a conscious process of intimidation by which all men keep all women in a state of fear" (p. 15), the basic feminist argument has become that coercive sex is about power rather than sex. According to this view, rapists and their ilk are motivated purely by the proximate desire to dominate and intimidate women, a desire that stems from the influences of a patriarchal society dedicated to the preservation of male control [53]. 
According to this view, the idea that rape has anything to do with sex is a myth, pure and simple. Although many versions of the standard argument exist among the many feminist factions, when college students are asked about their understanding of rape, most have heard of and many accept the Brownmillerian viewpoint. Familiarity with the "rape has nothing to do with sex" hypothesis stems from the energetic efforts of many feminists to educate others on the feminist position vis-a-vis the causes of rape. As a result, documents containing the "myths about rape" are widely available on the Internet. The "rape myths" presented to students at the University of Wisconsin, Texas A & M, Tulane University, and Monash University in Australia, to pick a few, contain statements like the following: "Since sexual assault is all about power, not sex, the age or appearance of the victim is irrelevant" and "Rape is not about sexual orientation or sexual desire. It is an act of power and control in which the victim is brutalized and humiliated" (see, for example, www.med.monash.edu.au/secasa/html/rape-myths.html). Now the idea that sexual motivation plays no part in rape seems decidedly counterintuitive, given that the vast majority of rapists are sufficiently sexually aroused to achieve an erection and to ejaculate in their victims. Yet many persons have no doubt about it; sexual desire is not an issue in the rapist's behavior. The appeal of this assertion must stem from the fact that most people consider sexual desire a "natural" phenomenon, which some feminists fear will make the public more willing to excuse the rapist, at least in part, on the grounds that rape is in some sense "natural." In contrast, if rape is said to be violence pure and simple driven by a criminal desire to brutalize and humiliate, then no one would be tempted to forgive the rapist or be more understanding of his behavior. 
In other words, acceptance of the naturalistic fallacy provides the impetus to insist that there is nothing "natural" about the causes of rape. To this end, it is also valuable to claim that rape is a purely human phenomenon, not part of the sexuality of other species: "No zoologist, as far as I know, has ever observed that animals rape in their natural habitat, the wild" (p. 12 in [53]). Moreover, why not assert that rape is a purely cultural phenomenon, the invention of some men in some warped societies. If true, then one need "only" educate the members of that society in order to change the ruling male ideology of rape, which will eliminate the problem. To this end, many feminists assert that rape is not a universal feature of all societies but rather a manifestation of just those societies in which a particularly unfortunate ideological perspective has come to shape male attitudes and behavior. The advocates of the "rape has nothing to do with sex" hypothesis have been circumspect in dealing with the relevant data. For example, with respect to the so-called uniqueness of rape, even when Brownmiller wrote her book in 1975, ample evidence existed that males from a very wide range of animals sometimes force themselves on females that struggle to prevent copulation from occurring. Over the years, much more information has been assembled on the practice of forced matings in everything from insects to chimpanzees, orangutans, and other primate relatives of man [295, 311]. For example, I have on occasion seen a male of the desert beetle Tegrodera aloga run to a female and wrestle violently with her in an attempt to throw her on her side (fig. 9.4). If successful, the male probes the female's genital opening with his everted aedeagus (the entomological label for "penis") and he sometimes is able to achieve insertion of same, despite the female's attempts to break free. 
What makes this behavior so striking is that male Tegrodera aloga are perfectly capable of courting potential partners in a decorous manner. In these nonviolent interactions, a male cautiously moves in front of a female, often one that is feeding on a tiny desert plant of some sort, and uses his antennae to sweep her antennae into two grooves in the front of his head (fig. 9.4). The two may stand facing one another for many minutes while the female feeds and the male strokes her antennae over and over again. Judging from what is known of a somewhat similar beetle [117], the male's courtship maneuvers probably permit the female to assess the concentration of cantharidin in the male's blood via analysis of odors emanating from pores in the grooves in his head. Cantharidin is a toxic biochemical manufactured by males of some beetles for transfer to their mates during copulation; females safely store the material for later use in coating their eggs, the better to repel ants and other egg eaters after the eggs are laid in the soil. In other words, courting males communicate their capacity to provide their mates with a useful nuptial gift. If a female perceives her suitor to be in possession of valuable resources that she will receive, she may eventually permit him to mount and copulate sedately. If not, she pulls her antennae free and walks away. Males that attempt to short-circuit the female choice mechanism in this species probably lack the qualities, especially high levels of transferable cantharidin, that motivate females to become sexually receptive, although this prediction remains untested. Under these circumstances, males may have the conditional capacity to try to inseminate females forcibly, reducing female reproductive success to some extent in the process, which is why females of this species resist. The idea that forced copulation only happens in humans is therefore simply untrue.
And what about the claim that rape is haphazardly distributed among human cultures, present here, absent there, thanks to arbitrary variation in cultural histories and influences? You will remember Margaret Mead's incorrect assertion that rape was absent in traditional Samoan society. Analysis of similar claims about other groups has shown them to be equally erroneous [246]. Rape is a cultural universal. These findings are part of the reason why some sociobiologists think that the "rape has nothing to do with sex" hypothesis is not only implausible but untrue. One sociobiological alternative is that rape is partly the product of evolved male psychological mechanisms, including those that promote ease of sexual arousal, the capacity for impersonal sex, the desire for sexual variety for variety's sake, a desire to control the sexuality of potential partners, and a willingness to employ coercive tactics to achieve copulations under some conditions. Why would these proximate mechanisms have spread through ancestral hominid populations? Because they almost certainly contributed to an increase in the number of females inseminated by some ancestral males, with a consequent increase in the number of offspring produced. According to this approach, rape itself could either be a maladaptive side effect of sexual psychological mechanisms that have other generally adaptive outcomes, or rape could be one of the tactics controlled by a conditional strategy that enables an individual to select the option with the highest fitness payoff given his particular circumstances. Note that these are two separate hypotheses, each of which generates distinctive predictions, so that either one or the other or both could potentially be rejected via standard scientific testing. The maladaptive byproduct hypothesis is plausible because it is clear that in humans and other species, the intense sex drive of males sometimes motivates them to perform acts that cannot possibly result in offspring.
Male elephant seals not uncommonly attempt to copulate with young pups only a month or two old while males of some species of bees work themselves into a sexual frenzy over a deceased female or even a part of her body. Human males engage in masturbation, oral and anal sex, homosexual sex, and sex with children, to name just a few of the sexual activities that no one has ever claimed will generate surviving offspring. On the other hand, the adaptive conditional tactic hypothesis for rape is also plausible because rape appears to be associated with both low socioeconomic status and low risk of punishment, two conditions that would tend to increase the fitness benefit to fitness cost ratio of rape for certain individuals acting under certain circumstances. For example, poor men may have much less opportunity to engage in successful courtship because women favor wealthier individuals; rape could enable some in this category to gain sexual access to women. The mean fitness benefit from rape need not be great for individuals who have little or no chance of forming a partnership with a willing woman. Likewise, when rape occurs with little risk of punishment, as has traditionally been the case for soldiers in combat, then the fitness benefit need not be great to outweigh the relatively low costs associated with the behavior, which is indeed widespread in times of war. Debate continues on these alternatives because definitive tests needed to discriminate between them have yet to be carried out. But both hypotheses are based on the premise that rape is linked to evolved psychological mechanisms that contributed more, not less, to the chances of successful reproduction by men in the ancestral hominid environment. This premise is testable. For example, both hypotheses could be dismissed if it were shown that raped women in the past could not have borne children as a result of the assault. 
However, even in modern populations where birth control and abortion are available, some rape victims do become pregnant and bear the rapist's child. In addition, both hypotheses yield the prediction that rapists will especially target women of reproductive age. Tests of this prediction have also been positive (fig. 9.5), with the age distribution of raped women heavily skewed toward the years of peak fertility. Yes, a small proportion of the victim population consists of women either too young or too old to bear children, but the chance that a twenty-four-year-old will be raped is somewhere between four and twenty times greater than the risk that a fifty-four-year-old will be sexually assaulted [312]. And note that the age distribution of women subject to homicidal attack is quite different from that of rape victims, a result that further reduces whatever residual attraction might be associated with the "rape has nothing to do with sex" hypothesis. If rape were unadulterated violence designed to brutalize women, one would expect convergence in the age distributions of rape and homicide victims. The convergence does not exist.

Footnotes:
53. Brownmiller, S. 1975. [i]Against Our Will[/i]. New York: Simon & Schuster.
54. Brownmiller, S., and B. Merhof. 1992. A feminist response to rape as an adaptation in men. [i]Behavioral and Brain Sciences[/i] 15: 381-382.
76. Coyne, J. 2000. Of vice and men. [i]New Republic[/i] 222: 27-34.
112. Dupré, J. 1992. Blinded by "science": How not to think about social problems. [i]Behavioral and Brain Sciences[/i] 15: 382-383.
115. Ehrenreich, B. 2000. How "natural" is rape? [i]Time[/i] 155 (4): 88.
117. Eisner, T., et al. 1996. Chemical basis of courtship in a beetle (Neopyrochroa flabellata): Cantharidin as precopulatory "enticing" agent. [i]Proceedings of the National Academy of Sciences[/i] 93: 6494-6498.
246. Palmer, C. T. 1989. Is rape a cultural universal? A re-examination of the ethnographic data. [i]Ethnology[/i] 28: 1-16. 
295. Smuts, B. B., and R. W. Smuts. 1993. Male aggression and sexual coercion of females in nonhuman primates and other mammals: Evidence and theoretical implications. [i]Advances in the Study of Behavior[/i] 22: 1-63.
311. Thornhill, R., and C. T. Palmer. 2000. [i]A Natural History of Rape: The Biological Bases of Sexual Coercion[/i]. Cambridge: MIT Press.
312. Thornhill, R., and N. W. Thornhill. 1983. Human rape: An evolutionary analysis. [i]Ethology and Sociobiology[/i] 4: 137-173.
#### Thurlor ##### Nutter Yes, rape (by a male) requires sexual arousal. This implies that all prison rape is carried out by homosexuals or bisexuals. Why do you think rape is a learned behaviour? It seems more likely that not raping is a learned behaviour. Non-human animals rape, implying it is a natural behaviour. If rape were about power and not arousal why don't we see straight people raping just to exert dominance? #### Haim ##### Worlds creator There are definitely people that rape in order to feel powerful or that feeling powerful give sex drive for that people. When you have rapist that have much more attractive girlfriend or wife, him raping only because of being horny does not make sense. Of course that sometimes it is just someone being horny. Anyway why does the cause matter to you? #### ZenRaiden ##### One atom of me People usually rape because they feel like it. The complex they have or don't doesn't really matter. It's harmful and should not be done. Same action does not always require same reasons or motives. #### redbaron ##### irony based lifeform If rape were about power and not arousal why don't we see straight people raping just to exert dominance? We do see that. #### Puffy ##### Aquila Eh, pattern I've observed in OP's posts is that he seems to be attracted to defending positions of controversy, or positions that he believes will be received as controversial, over other factors. 
It's close enough in my mind to a definition of a troll that I conclude he's not worth engaging with. #### peoplesuck ##### In need of kamelåså Eh, pattern I've observed in OP's posts is that he seems to be attracted to defending positions of controversy, or positions that he believes will be received as controversial, over other factors. It's close enough in my mind to a definition of a troll that I conclude he's not worth engaging with. Ive been around the internet, I believe this is whats known as an edgelord. "edgelord A poster on an Internet forum, (particularly 4chan) who expresses opinions which are either strongly nihilistic, ("life has no meaning," or Tyler Durden's special snowflake speech from the film Fight Club being probably the two main examples) or contain references to Hitler, Nazism, fascism, or other taboo topics which are deliberately intended to shock or offend readers." ##### Evil Jew But humans aren't lizards. #### ApostateAbe ##### The past is an asshole, so f*** it Yes, rape (by a male) requires sexual arousal. This implies that all prison rape is carried out by homosexuals or bisexuals. Why do you think rape is a learned behaviour? It seems more likely that not raping is a learned behaviour. Non-human animals rape, implying it is a natural behaviour. If rape were about power and not arousal why don't we see straight people raping just to exert dominance? Your thoughts are all over the place. Maybe you are still making up your mind? Don't get the wrong idea about my opinion. I don't think rape is a learned behavior. That would require at least a few rape teachers. That isn't a thing. The men who rape other men in prison are generally heterosexuals, in my estimate. They are men with a strong sex drive who have no female outlets. I used to think prison rape was the only type of rape that would confirm the theory that "rape is about power, not about sex," but now I think even that is unlikely. 
#### ApostateAbe ##### The past is an asshole, so f*** it There are definitely people that rape in order to feel powerful or that feeling powerful give sex drive for that people. When you have rapist that have much more attractive girlfriend or wife, him raping only because of being horny does not make sense. Of course that sometimes it is just someone being horny. Anyway why does the cause matter to you? The topic came up in another thread. I brought it up in passing as an example of an absurd thing that American sociologists believe because they deny the biology of human nature, and then other posters indicated that they actually believed it. The whole field of sociology (and related fields) is stuck in an ideological swamp, whereas other sciences have built modern civilization. That is a very big deal. It is the science of our own species. #### ApostateAbe ##### The past is an asshole, so f*** it But humans aren't lizards. Yes, but humans are mammals, and a vast diversity of mammal species have used rape as a means of reproduction, including the closest kin to the human species. #### ApostateAbe ##### The past is an asshole, so f*** it Eh, pattern I've observed in OP's posts is that he seems to be attracted to defending positions of controversy, or positions that he believes will be received as controversial, over other factors. It's close enough in my mind to a definition of a troll that I conclude he's not worth engaging with. I mostly agree. Only a few such people are worth engaging with, in my opinion. (I have a high opinion of myself.) #### ApostateAbe ##### The past is an asshole, so f*** it When you have rapist that have much more attractive girlfriend or wife, him raping only because of being horny does not make sense. Men cheat even when each such man has an attractive girlfriend or wife, but do we then argue that the cheating behavior has non-sexual motivation? Generally, no. 
That is because obviously a man's sex drive is not satisfied except by a countless diversity of beautiful young women. Genghis Khan and barely any other man in history satisfied his own sex drive. #### ApostateAbe ##### The past is an asshole, so f*** it When I wrote, "It is not enough to know that theory was constructed on anecdotes selected by activist authors and reinforced by a lot of bad arguments (see )," I mistakenly omitted the citation. It should have been Craig T. Palmer, 1988, "Twelve Reasons Why Rape is not Sexually Motivated: A Skeptical Examination" via http://www.profkramer.com/assets/palmer-1988.pdf. It is a great article, much-needed. ##### Evil Jew But humans aren't lizards. Yes, but humans are mammals, and a vast diversity of mammal species have used rape as a means of reproduction, including the closest kin to the human species. Not to say that you are necessarily wrong, but I'm going to go out on a limb here and say that most people wouldn't enjoy raping someone, including men, even if on some instinctive level the desire lingers; humans seem to have developed the potential for empathy and sex usually comes with sharing emotions and intimacy, which is probably related to "love". And most people enjoy pleasing their partner as much as getting their own pleasure. So to be frank, I can only believe that someone who asserts rape as a natural instinct to reproduction might have an empathy deficit and only see sex as a form of pleasure. And that's fine because we are all different, but that's what I would think. And that's what I do think. #### peoplesuck ##### In need of kamelåså I went through a brief period of having an extremely high sex drive, I think I came off of a medication too fast, I didn't rape anyone, I didnt even consider it. EDIT: to be clear I was talking about mental levels of sex drive, gym bro high on test, levels of sex drive. The difference, as our boi ^ just said, is compassion/empathy. 
You wouldnt rape, for the same reason you wouldnt steal from a small family owned bookstore. Rape is clearly much worse than shorting someone a few dollars, yet most people are decent enough not to steal from small family owned stores. Rape is probably a mixture of bitterness, and lack of empathy. If I ever get raped, I will be sure to investigate and pry into their motives. for science #### ApostateAbe ##### The past is an asshole, so f*** it You wouldnt rape, for the same reason you wouldnt steal from a small family owned bookstore. This point may lead to an answer to the puzzle. We are judging the whole human species based on our own selves. True, I would not steal from a small family-owned bookstore, and neither would you. And yet all types of businesses get robbed, without exception. What is going on? Answer: not everyone is like us. Similarly, not all of those beetles rape, but only those beetles who lack the chemical sufficient to seduce the females. Sent from my SM-N960U using Tapatalk #### Animekitty ##### baby marshmallow born today Dolphins are rapists Ducks are rapists Tentacles are rapists #### peoplesuck ##### In need of kamelåså The difference, as our boi ^ just said, is compassion/empathy. You wouldnt rape, for the same reason you wouldnt steal from a small family owned bookstore. This point may lead to an answer to the puzzle. We are judging the whole human species based on our own selves. True, I would not steal from a small family-owned bookstore, and neither would you. And yet all types of businesses get robbed, without exception. What is going on? Answer: not everyone is like us. Similarly, not all of those beetles rape, but only those beetles who lack the chemical sufficient to seduce the females. The difference, as our boi ^ just said, is compassion/empathy. The difference, as our boi ^ just said, is compassion/empathy. The difference, as our boi ^ just said, is compassion/empathy. stop ignoring counterpoints The difference, as our boi ^ just said, is compassion/empathy. The difference, as our boi ^ just said, is compassion/empathy. The difference, as our boi ^ just said, is compassion/empathy. 
#### Puffy ##### Aquila Eh, pattern I've observed in OP's posts is that he seems to be attracted to defending positions of controversy, or positions that he believes will be received as controversial, over other factors. It's close enough in my mind to a definition of a troll that I conclude he's not worth engaging with. I mostly agree. Only a few such people are worth engaging with, in my opinion. (I have a high opinion of myself.) Fair enough. With this one I'm unsure I quite get the punch line. Is it to draw comparisons between human males and monkeys in a zoo? --- It seems common sense that someone would commit rape in order to fulfil a sexual desire or fantasy of some kind. But I don't see why that would be inseparable from a psychological motivation of power and having dominance over others (as a possible lens among others). It seems like an instance where physiological and psychological motivations could overlap without much contradiction. Power fantasies, for example, are quite common among males even if the majority wouldn't consider acting them out in as extreme a form as that. You only have to randomly pick a selection of modern pornography videos to see the evidence of that. Rape is quite a specific act. I don't think we can generalise sexual desire so that rape and making passionate love to your partner, for example, are mutual and equatable means of fulfilling the same desire or sex drive. They have fundamentally different motivations, and I think if we purely reduce those motivations to the sex drive itself that we miss out on the nuance of why someone chooses to express it in a loving or destructive way. Which, to me at least, is the most important part of the conversation - how and why we choose to express this common sexual instinct. Rape can't only be to fulfil sex drive in my mind or the rapist could have chosen an option of mutual benefit, without a grossly destructive outcome, and of much less risk to themselves in order to satisfy that. 
That appears too much like an excuse. So why was the rapist unable to fulfil (or unable to choose to fulfil) their sex drive through an option of mutual benefit? Insecurity? Sexual frustration as they can't attract a partner (why?)? Fear of intimacy? Revenge? A means of feeling powerful and superior? Taking out suppressed pain on another? Lack of empathy or compassion? Sadistic pleasure? Loathing? Traumatic upbringing? Mummy issues? It likely varies from person to person - but this questioning digs closer to the reality, at least in my opinion. My concern is that if we reduce it all to a generic animalistic instinct that we negate and carpet over a lot of important social issues that need to be brought into discussion in order to understand a serious problem which ultimately destroys a lot of lives. #### ApostateAbe ##### The past is an asshole, so f*** it stop ignoring counterpoints Maybe I need to spell this out clearly for you: some members of the human species lack sufficient compassion/empathy. On the extreme end, they are psychopaths. And a bunch of people are just somewhere further along the psychopath spectrum. Those are the people who will rape and just not give a damn. And, even the non-psychopaths may rape and rationalize it as though it is not harmful for the victim. #### peoplesuck ##### In need of kamelåså stop ignoring counterpoints Maybe I need to spell this out clearly for you: some members of the human species lack sufficient compassion/empathy. On the extreme end, they are psychopaths. And a bunch of people are just somewhere further along the psychopath spectrum. Those are the people who will rape and just not give a damn. And, even the non-psychopaths may rape and rationalize it as though it is not harmful for the victim. waow did you repeat what I said back to me, as if I didnt just say it? 
10/10 trolling friend I have a very high opinion of u as well, 10/10 celver gurl ur are fren let me guess your next thread: women arent as smart as men amirigte? and the next one is nobody is as smart as I am lol u got me, I fell for your trap. the oldest trick in the internet book. Next time I will be more wise, and I will not say anything, because you have taught me well, master one kenobi. Sincerely ya boi #### ApostateAbe ##### The past is an asshole, so f*** it Eh, pattern I've observed in OP's posts is that he seems to be attracted to defending positions of controversy, or positions that he believes will be received as controversial, over other factors. It's close enough in my mind to a definition of a troll that I conclude he's not worth engaging with. I mostly agree. Only a few such people are worth engaging with, in my opinion. (I have a high opinion of myself.) Fair enough. With this one I'm unsure I quite get the punch line. Is it to draw comparisons between human males and monkeys in a zoo? --- It seems common sense that someone would commit rape in order to fulfil a sexual desire or fantasy of some kind. But I don't see why that would be inseparable from a psychological motivation of power and having dominance over others (as a possible lens among others). It seems like an instance where physiological and psychological motivations could overlap without much contradiction. Power fantasies, for example, are quite common among males even if the majority wouldn't consider acting them out in as extreme a form as that. You only have to randomly pick a selection of modern pornography videos to see the evidence of that. Rape is quite a specific act. I don't think we can generalise sexual desire so that rape and making passionate love to your partner, for example, are mutual and equatable means of fulfilling the same desire or sex drive. 
They have fundamentally different motivations, and I think if we purely reduce those motivations to the sex drive itself that we miss out on the nuance of why someone chooses to express it in a loving or destructive way. Which, to me at least, is the most important part of the conversation - how and why we choose to express this common sexual instinct. Rape can't only be to fulfil sex drive in my mind or the rapist could have chosen an option of mutual benefit, without a grossly destructive outcome, and of much less risk to themselves in order to satisfy that. That appears too much like an excuse. So why was the rapist unable to fulfil (or unable to choose to fulfil) their sex drive through an option of mutual benefit? Insecurity? Sexual frustration as they can't attract a partner (why?)? Fear of intimacy? Revenge? A means of feeling powerful and superior? Taking out suppressed pain on another? Lack of empathy or compassion? Sadistic pleasure? Loathing? Traumatic upbringing? Mummy issues? It likely varies from person to person - but this questioning digs closer to the reality, at least in my opinion. My concern is that if we reduce it all to a generic animalistic instinct that we negate and carpet over a lot of important social issues that need to be brought into discussion in order to understand a serious problem which ultimately destroys a lot of lives. I certainly would not want to paper over the complexity. The case of Bill Cosby is interesting to me. He is a man who could have had the consent of almost any woman he wanted, and yet he raped them by the dozens with knock-out drugs in their drinks. An attempt at rape without knock-out drugs would tend to just be consensual sex! I expect that rape is his fetish. Probably not rape for the sake of oppressing women, but just another variant of sexual fulfillment. 
A bunch of generalizations are wrong, but some are correct, and they are necessary for making sense of our own species, for navigating the world, and making probabilistic inferences of individuals. The sex drive hypothesis is how we know that young men are much more likely to rape than old men, young women are much more likely to be victims than old women, and we can evaluate risks from that. So, advise your teenage daughters to go easy on the drinking at fraternity parties. Some loud activists are denouncing such common-sense advice. #### Marbles ##### What would Feynman do? Or men are just rapey assholes, driven to procreate by any means. Did you know a duck's vagina is shaped like a maze to deter rape? I kid you not. I doubt male ducks are soldiers of the toxic patriarchy, they are just horny, but in either case, watch out Daisy Duck. #### Animekitty ##### baby marshmallow born today Men hate women because of their relationship with their mother. Marbles is a genius. #### Marbles ##### What would Feynman do? Men hate women because of their relationship with their mother. Marbles is a genius. And also, ducks. Edit: Oooh, I just saw your post and realized you're duck woke. Those videos were hilarious. Except the story of the suiciding dolphin in love. That was heartbreaking. #### Puffy ##### Aquila I know Marbles is being tongue in cheek but I do wonder if there's something to that. Our Mother is the original female archetype in our lives, so it makes sense to me that our relationship to her goes on to inform our relationship to other women in our lives. I'd guess that someone is more likely to go on to have very troubled sexual relations if they have an unhealthy relationship with their Mother. I'm thinking of an admittedly extreme example like Ted Bundy. He discovered that the person he had grown up to believe was his sister was actually his Mother, and then a year later he started the rampage that he was famous for. 
It's not that difficult to see his sexual violence in part as a mirror of the hatred he had for his Mother. I don't know much about Bill Cosby at all. I'm just saying there's likely a specific reason why he chose to express his sex drive the way he did. If we just say "he's fulfilling his sexual drive" a) I feel like we're giving him an excuse like we would to a gang of ducks and b) I feel we're only describing what happened on a surface level, where if people like him were to be rehabilitated, or others prevented from doing it before it reached that stage, they'd have to resolve whatever underlying issue provoked them to do or fantasise about it. Reading your response @ApostateAbe though, I suspect you've created this thread to argue against feminist positions more than anything else. I feel like you'd need to spend a day in someone else's shoes in order to get that one. #### ApostateAbe ##### The past is an asshole, so f*** it I know Marbles is being tongue in cheek but I do wonder if there's something to that. Our Mother is the original female archetype in our lives, so it makes sense to me that our relationship to her goes on to inform our relationship to other women in our lives. I'd guess that someone is more likely to go on to have very troubled sexual relations if they have an unhealthy relationship with their Mother. I'm thinking of an admittedly extreme example like Ted Bundy. He discovered that the person he had grown up to believe was his sister was actually his Mother, and then a year later he started the rampage that he was famous for. It's not that difficult to see his sexual violence in part as a mirror of the hatred he had for his Mother. I don't know much about Bill Cosby at all. I'm just saying there's likely a specific reason why he chose to express his sex drive the way he did. 
If we just say "he's fulfilling his sexual drive" a) I feel like we're giving him an excuse like we would to a gang of ducks and b) I feel we're only describing what happened on a surface level, where if people like him were to be rehabilitated, or others prevented from doing it before it reached that stage, they'd have to resolve whatever underlying issue provoked them to do or fantasise about it. Reading your response @ApostateAbe though, I suspect you've created this thread to argue against feminist positions more than anything else. I feel like you'd need to spend a day in someone else's shoes in order to get that one. Yes, I am arguing against the feminist position, and I don't think anyone here has really defended the feminist position nor even disagreed with my own position (rape is usually about sex drive). There are just a lot of diverse offshoots and elaborations of my position. I expected that I would be writing something disagreeable, given that the same position attracted disagreement elsewhere, but I didn't. #### peoplesuck ##### In need of kamelåså nor even disagreed with my own position its a lack of compassion and empathy, only people with actual brain damage/malfunctions are so horny they rape people. Sorry but I disagreed quite openly. please dont let that keep you from ignoring it. TBH I think rape causes sex drive, think about it, how many people have raped and had a sex drive, clear connection here. Im wasting my time saying this, so I must be retarded, dont take me seriously, im not smart enough to comment on this mensa forum. MB #### peoplesuck ##### In need of kamelåså I have a good thread, shitty threads are usually about attention and not actually figuring anything out, but just being negative and argumentative, while ignoring what people actually say, because the point of a thread isnt communication. #### ApostateAbe ##### The past is an asshole, so f*** it Yeah, you apparently thought you were disagreeing, but you were not. 
I just took it as an obvious given premise that rapists tend to lack empathy, and that doesn't really need to be stated explicitly as a component of the behavior. "What causes some people to murder? Anger or greed? I think it is generally anger." "You are wrong. WRONG! It is both anger and a lack of empathy."
https://talkstats.com/threads/sum-of-the-squared-least-squared-residual.49347/
# Sum of the squared least squares residuals #### brothers10 ##### New Member Hey guys! This is my first post on this booming forum! I came across a problem that asks "Find the sum of the squared least squares residuals" and all it gives is sigma squared hat and the population, and I have searched through the book and I have no idea where to start or what formulas to use. I was hoping that someone could give me some guidance about where to start. Thank you in advance!
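One possible reading, as a minimal sketch: if the book's $\hat{\sigma}^2$ is the usual unbiased error-variance estimator $\hat{\sigma}^2 = \mathrm{SSR}/(n-k)$, the sum of squared residuals falls out by multiplication. The numbers below, and the parameter count `k`, are illustrative assumptions, not values from the actual problem:

```python
# Hypothetical inputs: the reported error-variance estimate and sample size.
sigma2_hat = 2.5   # assumed unbiased estimate, SSR / (n - k)
n = 52             # number of observations (made up)
k = 2              # parameters estimated, e.g. intercept + slope (assumption)

# Rearranging sigma2_hat = SSR / (n - k) gives the sum of squared residuals:
ssr = sigma2_hat * (n - k)
print(ssr)
```

If the textbook instead defines $\hat{\sigma}^2$ as $\mathrm{SSR}/n$ (the MLE under normality), the multiplier is simply $n$, so it is worth checking which convention the book uses.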
https://stats.stackexchange.com/questions/425637/differences-between-a-frequentist-and-a-bayesian-density-prediction
# Differences between a frequentist and a Bayesian density prediction

What are some essential differences between a frequentist density forecast/prediction and a Bayesian posterior for an outcome of a random variable? Of course, there will be differences in how they are obtained (via frequentist vs. Bayesian estimation), but I am interested in differences beyond that. E.g. from a user's perspective, given a frequentist density forecast/prediction vs. a Bayesian posterior, should I treat them differently in any essential way?

• I suppose the key difference is that the Bayesian density propagates uncertainty over model parameters into the predictive density, by averaging the likelihood with the posterior distribution. With frequentist methods, I don't think you can do that. At best you can get a confidence interval for the predicted density. – CloseToC Sep 10 '19 at 10:30

• @CloseToC, so the Bayesian posterior would be flatter/wider and better reflect the underlying uncertainty, at least in the sense of accounting for estimation imprecision of model coefficients, unlike a typical frequentist method? – Richard Hardy Sep 10 '19 at 10:35

• That's my understanding. If we predict $y$ with $x$ with a parametric model, then a frequentist analysis would give us $\widehat{p_{y|x}} = p_{y|x}(y|x; \hat{\theta})$ with $\hat{\theta}$ chosen by MLE. We can also get a confidence interval for the parameter and for the density, but I don't think you can seamlessly integrate that into the predictive density itself. The Bayesian predictive density, by contrast, would be $\widehat{p_{y|x}} = \int p_{y|x, \theta}(y|x, \theta)\, p_{\theta|x}(\theta|x)\, d\theta$. – CloseToC Sep 10 '19 at 12:24

• What is a frequentist density forecast/prediction? Something like a sliding scale of confidence intervals for some parameter? – Sextus Empiricus Sep 10 '19 at 12:24

• @MartijnWeterings, I am not sure when it comes to a strict definition, but let us take a linear model as an example.
From OLS or from MLE, we will obtain estimates of the conditional mean and variance. Combined with a distributional assumption (e.g. normality), we have an estimate of the conditional density. We could use this as a density prediction; to be more accurate, we could also account for estimation imprecision. So I am not sure whether we derive prediction intervals from the density or the density from prediction intervals. (Confidence intervals for a parameter are yet another thing.) – Richard Hardy Sep 10 '19 at 12:44

I can imagine a frequentist density forecast/prediction as something like a distribution of intervals, for instance an image containing multiple confidence boundary lines (the original is here with only a single 95% confidence interval). Something similar can be done with prediction intervals.

With this interpretation, the difference between the frequentist density and the Bayesian density corresponds to the difference between a confidence interval and a credible interval. Those two are not the same. We could say that:

1. The Bayesian analysis uses more/different information (it includes a prior distribution for the parameters, based either on former knowledge or on assumptions/beliefs).
2. The Bayesian analysis expresses a probability in a different way:
   • The confidence interval relates to 'the probability of the observation given the parameters'.
   • The credible interval relates to 'the probability of the parameters given the observation'.

### Contrast between confidence interval and prediction interval

The intuition above relies a lot on confidence intervals, but similar things can be said about prediction intervals. Confidence intervals are perhaps easier to interpret than prediction intervals: prediction intervals include the error of the mean (which can be seen to coincide with confidence intervals) plus an estimate of the random noise.
It is more difficult to give prediction intervals the same frequentist interpretation, although an alternative way to look at it is to say that 'the frequentist prediction interval will contain the future observation a fraction $$x \%$$ of the time'. So the difference between frequentist prediction intervals and Bayesian prediction intervals is still that the Bayesian intervals use more information, but the frequentist prediction intervals are independent of the parameter distribution and 'work' independently of the prior distribution (given that the model is correct). I imagine that the following interpretation still works: 'the frequentist prediction interval relates to the probability of the observation given the predicted value; it is the collection of those predicted values for which the observed effects/data/statistics occur within a region with $$x \%$$ probability'.

### Example prediction of a value for a Gaussian-distributed population

When creating a confidence interval for the estimate of the mean of a Gaussian-distributed population, one can use a t-distribution, and this has a geometrical interpretation. The same geometrical interpretation works for the estimate of a prediction interval. Let $$X_i \sim N(\mu, \sigma)$$, and say we observe a sample $$X_1, ... , X_n$$ of size $$n$$ and wish to predict $$X_{n+1}$$. We can construct a frequentist prediction interval with the interpretation that:

• No matter what the values of $$\mu$$ and $$\sigma$$ are, the value $$X_{n+1}$$ will be inside the prediction interval $$x\%$$ of the time.

but also:

• Given a hypothetical predicted value $$\tilde{X}_{n+1}$$ in the prediction range, the observations $$\bar{X}$$ and $$s$$ (the sample mean and sample deviation) will be occurring within some range that occurs $$x$$ percent of the time.
(That means we will only include those values in the prediction range for which we make our observations $$x\%$$ of the time, such that we will never fail more than $$(100-x)\%$$ of the time.)

So instead of considering the distribution of $$X_{n+1}$$ given the data $$\bar{X}$$ and $$s$$, we consider it the other way around: we consider the distribution of the data $$\bar{X}$$ and $$s$$ given $$X_{n+1}$$. (We can plot this distribution because $$\bar{X}-X_{n+1}$$ is Gaussian distributed and $$s$$ has a scaled chi-distribution.)

• In the image above you see the distribution of the sample standard deviation and the sample mean given a value for $$X_{n+1}$$. This distribution can be bounded by a cone (in the image, 95%), and the bound is independent of $$\sigma$$: because both variables (normal distributed and scaled chi distributed) scale the same way when $$\sigma$$ changes, the distribution of the angle does not change.

• And the red dotted lines show how you can construct the prediction interval. For a given observation (the red dot), the prediction range contains exactly those predicted values for which the observation falls inside their 'cone of 95%'.

Thus this prediction interval has an interpretation like a confidence interval: it relates to the probability of the data given the predicted value (instead of the inverse, 'the probability of the predicted value given the data').
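The coverage claim in the first bullet above can be checked by simulation. A minimal Python sketch (not from the answer), using the standard interval $$\bar X \pm t_{1-\alpha/2,\,n-1}\, s\,\sqrt{1+1/n}$$, with the t quantile for $$df = 9$$ hard-coded from tables:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10, 20000
mu, sigma = 3.0, 2.0                 # true parameters, unknown to the procedure

x = rng.normal(mu, sigma, (trials, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)

# t quantile t_{0.975, df=9}, from tables (scipy.stats.t.ppf(0.975, 9) ~ 2.2622)
t975 = 2.2622
half = t975 * s * np.sqrt(1 + 1 / n)

x_next = rng.normal(mu, sigma, trials)   # the future observations X_{n+1}
coverage = np.mean((xbar - half <= x_next) & (x_next <= xbar + half))
# coverage should be close to 0.95, whatever mu and sigma are
```

Re-running with different `mu` and `sigma` leaves the coverage essentially unchanged, which is exactly the parameter-independence the bullet asserts.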
Code for the image:

```r
library(shape)  # provides Arrows()

# settings
set.seed(1)
n <- 10^3
smp <- 10

# hypothetical x[n+1]
xn1 <- 7.5

# simulate data and compute statistics
X <- matrix(rnorm(smp*n), n)
prd <- rnorm(n)
diff <- rowMeans(X) - prd
# rss: root of the summed squared deviations (definition missing from the
# original listing; reconstructed here from the y-axis label)
rss <- sqrt(rowSums((X - rowMeans(X))^2))

# plotting
par(mar = c(0, 0, 0, 0))
plot(xn1 + diff, rss, bty = 'n', ylim = c(-3, 7), xlim = c(-1, 15),
     xaxt = "n", yaxt = "n", xlab = "", ylab = "",
     pch = 21, col = rgb(0, 0, 0, 0), bg = rgb(0, 0, 0, 0.4), cex = 0.7)
Arrows(-0.5, 0, 14.5, 0, arr.length = 0.4)
lines(c(0, 0), c(-2, 5))
text(0, 5, expression(sqrt(sum((x_i - bar(x))^2, i == 1, n))), pos = 3, cex = 0.7)
text(14.7, 0, expression(bar(X)), pos = 4, cex = 0.7)
ang <- sqrt((smp - 1) * (1 + 1/smp)) / qt(0.95, smp - 1)
lines(c(xn1 - 10, xn1, xn1 + 10), c(10, 0, 10) * ang)
polygon(c(xn1 - 10, xn1, xn1 + 10), c(10, 0, 10) * ang,
        col = rgb(0, 0, 0, 0.1), border = NA, lwd = 0.01)
text(10.7, 6, "95% of observations", srt = 65, cex = 0.7)
points(xn1, 0, pch = 21, col = 1, bg = "white")
text(xn1, 0, expression(x[n+1]), pos = 1)
text(3, 1.5, "some observed \n sample mean and variance", col = 2, pos = 1, srt = 0, cex = 0.7)
```

• Quick question on intuition: you talk mostly of confidence rather than prediction intervals. Is that not problematic when we are interested in a new realisation from a data generating process rather than in a parameter thereof? – Richard Hardy Sep 10 '19 at 13:33

• OK. This is helpful. +1 for now. (A couple of hours later) Wow, this is becoming pretty cool! – Richard Hardy Sep 10 '19 at 17:10

• My understanding of frequentist density prediction is more along the lines of the example @RichardHardy mentioned in the comments (the output of probabilistic regression models). At first glance, it feels like this has a different meaning than your distribution over prediction intervals. But I'm not sure I completely understand the way you're constructing these distributions. Is there any relationship between the two? – user20160 Sep 11 '19 at 20:12

• @user20160 the way that I construct the density is by using the boundaries of many intervals.
It is sort of the inverse of how you create credible intervals from a posterior density. – Sextus Empiricus Sep 11 '19 at 20:32

• I believe it is similar to Richard's comment, which seems to be using the sampling distribution (which works when the variance of the error is known; otherwise it gets a bit more complicated, e.g. you have to use the t-distribution in my example, while the sampling distribution of the point to be predicted is Gaussian). – Sextus Empiricus Sep 11 '19 at 20:37

In practical terms, there are seven issues that should be thought about with regard to the difference between a Bayesian predictive interval and a Frequentist interval:

1. Sample size
2. Construction
3. Boundary conditions
4. Coherence
5. Information
6. Broken intervals
7. Interpretation

Each of these items can cause a difference in calculation, usability or interpretation. The last item above is, of course, interpretation.

1. Generally, for small sample sizes and outside the exponential family of distributions, there is no reason that Bayesian intervals resemble Frequentist intervals. For some distributions, such as the normal distribution with a diffuse prior, there will be no difference at all in either of the predictive intervals in any practical sense. For others, such as the Cauchy distribution, you can get pretty wild differences in predictive intervals.

2. Construction

2a. Construction of the intervals rests on different conceptual grounds. The Bayesian predictive interval depends on the predictive density function and a rule. The most common rule in use is the highest-density region. This rule corresponds to minimizing the K-L divergence between the model and the future values in nature. Other rules could also be used, as the only requirement is that the prediction region's probability adds up to $$\alpha\%$$. These alternative rules could be understood as minimizing some alternative cost function.

2b.
The Frequentist predictive interval depends on a loss function, although the loss function is often implicit. As with the Bayesian construction, there exists an infinite number of potential prediction intervals because there are an infinite number of potential loss functions. Frequentist intervals depend upon the sampling distribution of some estimator. If you change from the sample mean to the sample median, you have changed both the loss function and the sampling distribution, and the predictions will differ. The parameter estimator vanishes, as it does in the Bayesian method.

3. Boundary conditions and discreteness do not impact a Bayesian prediction other than that it will account for them. They do impact Frequentist methods: it can happen that a Frequentist interval will include impossible values, and the method also breaks down when using discrete probabilities. See Lawless, J. and Fredette, M. (2005). Frequentist prediction intervals and predictive distributions. Biometrika, 92(3):529-542.

4. If you need to use the prediction for gambling purposes, such as setting inventory, allocating funds, or playing a lottery, then Bayesian intervals are coherent and Frequentist ones are not. All Frequentist intervals with identical values for their estimators will generate identical intervals, though with different samples. Bayesian prediction intervals will, in the general case, generate different predictive intervals from different samples despite having the same estimator value, as long as the posteriors differ.

5. Bayesian predictions are always admissible predictions given a prior and a loss function. The Bayesian likelihood function is always minimally sufficient. It is not always the case that a Frequentist method uses as much information, and so Frequentist estimators can be noisier given identical information. For well-behaved models, such as the normal distribution, this is not generally a problem. Additionally, the Bayesian prediction should include the information in a prior.
If the prior is sufficiently informative, then the Bayesian interval will first-order stochastically dominate the Frequentist interval in terms of the loss created by using the prediction in a decision.

6. Although this is usually an issue that coincides with small sample sizes or omitted variables, there is no requirement that the Bayesian $$\alpha\%$$ interval is a single closed interval with a unimodal likelihood. A Bayesian predictive interval may be $$[-5,-1]\cup[1,2]$$ while the Frequentist interval on the same sample could be $$[-2,1]$$. With a bimodal underlying density, there could be broken intervals for either.

7. Interpretation

7a. The biggest issue is interpretation. Assuming valid models for both estimation tools, there are interpretative differences between the intervals. Frequentist predictive intervals are confidence procedures. Bayesian intervals might be analogous to credible intervals. A Frequentist 95% interval will contain future observations at least 95% of the time, with a guarantee of minimal coverage. There is a 95% chance that a Bayesian 95% interval will contain the future observations.

7b. The Frequentist method guarantees a level of coverage and that it is unbiased, so it is not a true probability in that it provides no less than $$\alpha\%$$ coverage over future predictions. That is part of what leads to incoherence. If you need a guarantee of long-run coverage, though not necessarily for the next set of observations, you should use a Frequentist method. If you need to assign money and minimize the discrepancy between nature and your model, then you should use a Bayesian method. Do note, however, that Lawless and Fredette's intervals mentioned above do minimize the average K-L divergence.

For many models that are simple, such as those taught in elementary statistics with an uninformative prior, there is no practical difference except interpretation. For complex models, they can differ substantively.
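Point 6 can be illustrated numerically: for a bimodal predictive density, the 95% highest-density region splits into two disjoint intervals. A Python sketch with a hypothetical two-component Gaussian mixture (the mixture itself is not from the answer, just a convenient bimodal example):

```python
import numpy as np

# Hypothetical bimodal predictive density: a 50/50 mixture of N(-3,1) and N(3,1)
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
dens = (0.5 * np.exp(-(x + 3)**2 / 2) + 0.5 * np.exp(-(x - 3)**2 / 2)) / np.sqrt(2 * np.pi)

# 95% highest-density region: keep grid points of highest density until
# their accumulated probability reaches 0.95
order = np.argsort(dens)[::-1]
cum = np.cumsum(dens[order]) * dx
keep = np.zeros(x.size, dtype=bool)
keep[order[:np.searchsorted(cum, 0.95) + 1]] = True

# Count contiguous runs of kept points, i.e. the number of disjoint intervals
n_intervals = int(np.sum(np.diff(keep.astype(int)) == 1)) + int(keep[0])
# n_intervals comes out as 2: the highest-density region is a broken interval
```

The low-density valley around zero is excluded from the region, so the 95% set is a union of two intervals, one around each mode, exactly the situation described in point 6.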
You should always think about models in terms of fitness for purpose. One thing I did leave out above, which is not a theoretical issue but a practical one, is computability. Bayesian methods are notorious for their difficulty in generating a computation of any kind, whereas Frequentist methods often generate a solution in milliseconds.

• Thank you, Dave! Looks like I have quite some reading to do (referring to your piece, of course). I am looking forward to it! – Richard Hardy Sep 11 '19 at 19:39

• Having read it quickly, I wonder if you could specifically address densities rather than intervals. I also wonder whether a frequentist interval must be a single closed interval, unlike a Bayesian one (point 6). – Richard Hardy Sep 11 '19 at 19:46

• @RichardHardy I wish I could address densities but it is an issue that is unclear. Frequentist predictive densities are a hair's breadth away from a fiducial probability. That is on the edges of my research. All Frequentist predictive intervals are uniformly distributed as with confidence intervals. You get so close to fiducial statistics that I don't touch it, as the field of statistics spent five decades arguing over it. – Dave Harris Sep 11 '19 at 19:52

• @RichardHardy as to connectedness, because of how Frequentist averaging happens over the sample space, I cannot see a disconnected case, but there may be one that I cannot imagine. – Dave Harris Sep 11 '19 at 19:53

• @RichardHardy I guess I have to take that back and may have to think of an edit. For a predictive distribution coming from a bimodal likelihood, I could see a split prediction. – Dave Harris Sep 11 '19 at 19:54
http://theinfolist.com/html/ALL/s/electrostatic.html
Electrostatics is a branch of physics that studies electric charges at rest. Since classical times, it has been known that some materials, such as amber, attract lightweight particles after rubbing. The Greek word for amber, ἤλεκτρον (ḗlektron), was thus the source of the word 'electricity'. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by Coulomb's law. Even though electrostatically induced forces seem to be rather weak, the electrostatic force between the electron and the proton that together make up a hydrogen atom is about 39 orders of magnitude stronger than the gravitational force acting between them. There are many examples of electrostatic phenomena, from those as simple as the attraction of plastic wrap to one's hand after it is removed from a package, to the apparently spontaneous explosion of grain silos, the damage of electronic components during manufacturing, and photocopier and laser printer operation. Electrostatics involves the buildup of charge on the surface of objects due to contact with other surfaces. Although charge exchange happens whenever any two surfaces contact and separate, the effects of charge exchange are usually noticed only when at least one of the surfaces has a high resistance to electrical flow. This is because the charges that transfer are trapped there for a time long enough for their effects to be observed. These charges then remain on the object until they either bleed off to ground or are quickly neutralized by a discharge: e.g., the familiar phenomenon of a static "shock" is caused by the neutralization of charge built up in the body from contact with insulated surfaces.
Coulomb's law

Coulomb's law states that: 'The magnitude of the electrostatic force of attraction or repulsion between two point charges is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them.' The force acts along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive. If $r$ is the distance (in meters) between two charges, then the force (in newtons) between two point charges $q$ and $Q$ (in coulombs) is:

:$F = \frac{1}{4\pi\varepsilon_0}\frac{qQ}{r^2} = k_0\frac{qQ}{r^2}\, ,$

where ε0 is the vacuum permittivity, or permittivity of free space:

:$\varepsilon_0 \approx 8.854\,187\,817 \times 10^{-12} \;\; \mathrm{F\,m^{-1}}.$

The SI units of ε0 are equivalently A2s4kg−1m−3 or C2N−1m−2 or F m−1. Coulomb's constant is:

:$k_0 = \frac{1}{4\pi\varepsilon_0} \approx 8.987\,551\,792 \times 10^{9} \;\; \mathrm{N\,m^2\,C^{-2}}.$

A single proton has a charge of ''e'', and the electron has a charge of −''e'', where

:$e \approx 1.602\,176\,565 \times 10^{-19} \;\; \mathrm{C}.$

These physical constants (ε0, k0, e) are currently defined so that ε0 and k0 are exactly defined, and ''e'' is a measured quantity.

Electric field

The electric field, $\vec{E}$, in units of newtons per coulomb or volts per meter, is a vector field that can be defined everywhere except at the location of point charges (where it diverges to infinity). It is defined as the electrostatic force $\vec{F}$ (in newtons) on a hypothetical small test charge at the point, due to Coulomb's law, divided by the magnitude of the charge $q$ (in coulombs):

:$\vec{E} = \frac{\vec{F}}{q}\, .$

Electric field lines are useful for visualizing the electric field. Field lines begin on positive charge and terminate on negative charge. They are parallel to the direction of the electric field at each point, and the density of these field lines is a measure of the magnitude of the electric field at any given point.
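As a quick numerical check of Coulomb's law (a Python sketch using the constant values quoted above, plus CODATA-style particle masses), one can compare the electrostatic and gravitational forces between the electron and the proton in a hydrogen atom; the separation cancels in the ratio, which comes out near $2.3 \times 10^{39}$:

```python
# Constants in SI units (k0 and e as quoted in the text)
k0 = 8.987551792e9        # Coulomb's constant, N m^2 C^-2
G = 6.674e-11             # gravitational constant, N m^2 kg^-2
e = 1.602176565e-19       # elementary charge, C
m_e = 9.109e-31           # electron mass, kg
m_p = 1.673e-27           # proton mass, kg

r = 5.29e-11              # Bohr radius, m (any r gives the same ratio)
F_coulomb = k0 * e**2 / r**2        # electrostatic attraction
F_gravity = G * m_e * m_p / r**2    # gravitational attraction

ratio = F_coulomb / F_gravity       # about 2.3e39, i.e. ~39 orders of magnitude
```

Since both forces fall off as $1/r^2$, the ratio is the same at every separation, which is why the comparison can be stated without reference to the size of the atom.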
Consider a collection of $N$ particles of charge $Q_i$, located at points $\vec r_i$ (called ''source points''); the electric field at $\vec r$ (called the ''field point'') is:

:$\vec{E}(\vec r) = \frac{1}{4\pi\varepsilon_0}\sum_{i=1}^N \frac{Q_i\,\widehat{\mathcal R}_i}{\left\|\vec{\mathcal R}_i\right\|^2} ,$

where $\vec{\mathcal R}_i = \vec r - \vec r_i$ is the displacement vector from a ''source point'' $\vec r_i$ to the ''field point'' $\vec r$, and $\widehat{\mathcal R}_i = \vec{\mathcal R}_i / \left\|\vec{\mathcal R}_i\right\|$ is a unit vector that indicates the direction of the field. For a single point charge at the origin, the magnitude of this electric field is $E = k_0 Q/\mathcal R^2$, and it points away from that charge if it is positive. The fact that the force (and hence the field) can be calculated by summing over all the contributions due to individual source particles is an example of the superposition principle. The electric field produced by a distribution of charges is given by the volume charge density $\rho(\vec r)$ and can be obtained by converting this sum into a triple integral:

:$\vec{E}(\vec r) = \frac{1}{4\pi\varepsilon_0} \iiint \frac{\rho(\vec r\,')\,\widehat{\mathcal R}}{\mathcal R^2}\, \mathrm{d}^3 r'$

Gauss' law

Gauss' law states that "the total electric flux through any closed surface in free space of any shape drawn in an electric field is proportional to the total electric charge enclosed by the surface." Mathematically, Gauss's law takes the form of an integral equation:

:$\oint_S \vec{E} \cdot \mathrm{d}\vec{A} = \frac{1}{\varepsilon_0}\, Q_{\text{enclosed}} = \int_V \frac{\rho}{\varepsilon_0}\, \mathrm{d}^3 r,$

where $\mathrm{d}^3 r = \mathrm{d}x\ \mathrm{d}y\ \mathrm{d}z$ is a volume element. If the charge is distributed over a surface or along a line, replace $\rho\,\mathrm{d}^3 r$ by $\sigma\,\mathrm{d}A$ or $\lambda\,\mathrm{d}\ell$. The divergence theorem allows Gauss's law to be written in differential form:

:$\vec{\nabla}\cdot\vec{E} = \frac{\rho}{\varepsilon_0},$

where $\vec{\nabla}\cdot$ is the divergence operator.
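Gauss' law is easy to sanity-check numerically for a point charge: on a concentric sphere the field is constant in magnitude and radial, so the flux integral reduces to $E(r)\cdot 4\pi r^2$, which should equal $Q/\varepsilon_0$ at every radius. A minimal Python sketch:

```python
import numpy as np

eps0 = 8.854187817e-12    # vacuum permittivity, F/m
Q = 1e-9                  # a 1 nC point charge at the origin

def E(r):
    """Radial field magnitude from Coulomb's law."""
    return Q / (4 * np.pi * eps0 * r**2)

# Flux through spheres of several radii: E is constant on each sphere and
# parallel to dA, so the surface integral is just E(r) * 4*pi*r^2
for r in (0.1, 1.0, 10.0):
    flux = E(r) * 4 * np.pi * r**2
    assert np.isclose(flux, Q / eps0)   # independent of r, as Gauss' law states
```

The $r^2$ growth of the sphere's area exactly cancels the $1/r^2$ decay of the field, which is the geometric content of Gauss' law for a point source.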
Poisson and Laplace equations

The definition of the electrostatic potential, combined with the differential form of Gauss's law (above), provides a relationship between the potential Φ and the charge density ρ:

:$\nabla^2 \phi = -\frac{\rho}{\varepsilon_0}.$

This relationship is a form of Poisson's equation. In the absence of unpaired electric charge, the equation becomes Laplace's equation:

:$\nabla^2 \phi = 0.$

Electrostatic approximation

The validity of the electrostatic approximation rests on the assumption that the electric field is irrotational:

:$\vec{\nabla}\times\vec{E} = 0.$

From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields:

:$\frac{\partial \vec{B}}{\partial t} = 0.$

In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents ''do'' exist, they must not change with time, or in the worst case, they must change with time only ''very slowly''. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored. Electrostatics and magnetostatics can both be seen as Galilean limits for electromagnetism.

Electrostatic potential

As the electric field is irrotational, it is possible to express the electric field as the gradient of a scalar function, $\phi$, called the electrostatic potential (also known as the voltage). An electric field, $\vec{E}$, points from regions of high electric potential to regions of low electric potential, expressed mathematically as

:$\vec{E} = -\vec{\nabla}\phi.$

The gradient theorem can be used to establish that the electrostatic potential is the amount of work per unit charge required to move a charge from point $a$ to point $b$, via the following line integral:

:$-\int_a^b \vec{E}\cdot \mathrm{d}\vec{\ell} = \phi(\vec b) - \phi(\vec a).$

From these equations, we see that the electric potential is constant in any region for which the electric field vanishes (such as occurs inside a conducting object).
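The relation $\vec{E} = -\vec{\nabla}\phi$ can be checked numerically for a point charge, where $\phi = k_0 Q/r$ and the field magnitude is $k_0 Q/r^2$. A central-difference sketch in Python:

```python
import numpy as np

eps0 = 8.854187817e-12
k0 = 1 / (4 * np.pi * eps0)
Q = 1e-9                               # point charge at the origin, C

phi = lambda r: k0 * Q / r             # electrostatic potential
E_exact = lambda r: k0 * Q / r**2      # field magnitude from Coulomb's law

# E = -d(phi)/dr along the radial direction, by central differences
r, h = 0.5, 1e-6
E_numeric = -(phi(r + h) - phi(r - h)) / (2 * h)

assert np.isclose(E_numeric, E_exact(r), rtol=1e-6)
```

The numerical gradient matches the Coulomb field, and its sign confirms that the field points from high potential (near the positive charge) toward low potential.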
Electrostatic energy

A single test particle's potential energy, $U_{\mathrm{E}}$, can be calculated from a line integral of the work, $q\,\vec{E}\cdot\mathrm{d}\vec{\ell}$. We integrate from a point at infinity, and assume a collection of $N$ particles of charge $Q_i$ are already situated at the points $\vec r_i$. This potential energy (in joules) is:

:$U_{\mathrm{E}} = q\phi(\vec r) = \frac{q}{4\pi\varepsilon_0}\sum_{i=1}^N \frac{Q_i}{\mathcal R_i},$

where $\mathcal R_i = \left\|\vec r - \vec r_i\right\|$ is the distance of each charge $Q_i$ from the test charge $q$, which is situated at the point $\vec r$, and $\phi(\vec r)$ is the electric potential that would be at $\vec r$ if the test charge were not present. If only two charges are present, the potential energy is $k_0 Q_1 Q_2 / r$. The total electric potential energy due to a collection of ''N'' charges is calculated by assembling these particles one at a time:

:$U_{\mathrm{E}} = \frac{1}{4\pi\varepsilon_0}\sum_{j=1}^N Q_j \sum_{i=1}^{j-1} \frac{Q_i}{r_{ij}} = \frac{1}{2}\sum_{i=1}^N Q_i\phi_i ,$

where the following sum, from ''j = 1'' to ''N'', excludes ''i = j'':

:$\phi_i = \frac{1}{4\pi\varepsilon_0}\sum_{j\ne i}^N \frac{Q_j}{r_{ij}}.$

This electric potential, $\phi_i$, is what would be measured at $\vec r_i$ if the charge $Q_i$ were missing. This formula obviously excludes the (infinite) energy that would be required to assemble each point charge from a disperse cloud of charge. The sum over charges can be converted into an integral over the charge density using the prescription $\sum(\cdots) \rightarrow \int(\cdots)\rho\,\mathrm{d}^3 r$:

:$U_{\mathrm{E}} = \frac{1}{2} \int \rho(\vec r)\phi(\vec r)\, \mathrm{d}^3 r = \frac{\varepsilon_0}{2} \int \left|\vec{E}\right|^2 \mathrm{d}^3 r .$

This second expression for electrostatic energy uses the fact that the electric field is the negative gradient of the electric potential, as well as vector calculus identities in a way that resembles integration by parts.
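The two discrete forms above, the pairwise assembly sum and $\frac{1}{2}\sum_i Q_i\phi_i$, are algebraically identical, which is easy to confirm numerically for a small random configuration (a Python sketch; the charges and positions are arbitrary illustrative values):

```python
import numpy as np

eps0 = 8.854187817e-12
k0 = 1 / (4 * np.pi * eps0)

rng = np.random.default_rng(2)
N = 5
Q = rng.normal(size=N) * 1e-9          # five random charges, C
pos = rng.normal(size=(N, 3))          # random positions, m

# Assembly form: sum over distinct pairs i < j
U_pairs = sum(k0 * Q[i] * Q[j] / np.linalg.norm(pos[i] - pos[j])
              for i in range(N) for j in range(i))

# Equivalent form (1/2) * sum_i Q_i * phi_i, where phi_i omits charge i itself
phi = [sum(k0 * Q[j] / np.linalg.norm(pos[i] - pos[j])
           for j in range(N) if j != i) for i in range(N)]
U_phi = 0.5 * sum(Q[i] * phi[i] for i in range(N))

assert np.isclose(U_pairs, U_phi)      # both give the same total energy
```

The factor of $\frac{1}{2}$ compensates for each pair being counted twice when summing $Q_i\phi_i$ over all charges.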
These two integrals for the electric field energy seem to indicate two mutually exclusive formulas for the electrostatic energy density, namely $\frac{1}{2}\rho\phi$ and $\frac{\varepsilon_0}{2}E^2$; they yield equal values for the total electrostatic energy only if both are integrated over all space.

Electrostatic pressure

On a conductor, a surface charge will experience a force in the presence of an electric field. This force is the average of the discontinuous electric field at the surface charge. This average, in terms of the field just outside the surface, amounts to:

:$P = \frac{\varepsilon_0}{2} E^2 .$

This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.

Triboelectric series

The triboelectric effect is a type of contact electrification in which certain materials become electrically charged when they are brought into contact with a different material and then separated. One of the materials acquires a positive charge, and the other acquires an equal negative charge. The polarity and strength of the charges produced differ according to the materials, surface roughness, temperature, strain, and other properties. Amber, for example, can acquire an electric charge by friction with a material like wool. This property, first recorded by Thales of Miletus, was the first electrical phenomenon investigated by humans. Other examples of materials that can acquire a significant charge when rubbed together include glass rubbed with silk, and hard rubber rubbed with fur.

Electrostatic generators

The presence of surface charge imbalance means that the objects will exhibit attractive or repulsive forces. This surface charge imbalance, which yields static electricity, can be generated by touching two differing surfaces together and then separating them, due to the phenomena of contact electrification and the triboelectric effect. Rubbing two nonconductive objects generates a great amount of static electricity.
This is not just the result of friction; two nonconductive surfaces can become charged by just being placed one on top of the other. Since most surfaces have a rough texture, it takes longer to achieve charging through contact than through rubbing. Rubbing objects together increases the amount of adhesive contact between the two surfaces. Insulators, i.e., substances that do not conduct electricity, are usually good at both generating and holding a surface charge. Some examples of these substances are rubber, plastic, glass, and pith. Conductive objects only rarely generate charge imbalance, except, for example, when a metal surface is impacted by solid or liquid nonconductors. The charge that is transferred during contact electrification is stored on the surface of each object. Electrostatic generators, devices which produce very high voltage at very low current and are used for classroom physics demonstrations, rely on this effect. The presence of electric current does not detract from the electrostatic forces nor from the sparking, the corona discharge, or other phenomena. Both phenomena can exist simultaneously in the same system.

:See also: ''Wimshurst machine'' and ''Van de Graaff generator''.

Charge neutralization

Natural electrostatic phenomena are most familiar as an occasional annoyance in seasons of low humidity, but they can be destructive and harmful in some situations (e.g. electronics manufacturing), particularly when working in direct contact with integrated circuit electronics (especially delicate MOSFETs). In the presence of flammable gas, care must be taken to avoid accumulating and suddenly discharging a static charge (see Electrostatic discharge).

Electrostatic induction

Electrostatic induction, discovered by British scientist John Canton in 1753 and Swedish professor Johan Carl Wilcke in 1762, is a redistribution of charges in an object caused by the electric field of a nearby charge.
For example, if a positively charged object is brought near an uncharged metal object, the mobile negatively charged electrons in the metal will be attracted by the external charge and move to the side of the metal facing it, creating a negative charge on the surface. When the electrons move out of an area, they leave a positive charge due to the metal atoms' nuclei, so the side of the metal object facing away from the charge acquires a positive charge. These ''induced charges'' disappear when the external charge is removed. Induction is also responsible for the attraction of light objects, such as balloons, paper scraps, and foam packing peanuts, to static charges. The surface charges induced in conductive objects exactly cancel external electric fields inside the conductor, so there is no electric field inside a metal object. This is the basis for the electric-field shielding action of a Faraday cage. Since the electric field is the (negative) gradient of the voltage, electrostatic induction is also responsible for making the electric potential (voltage) constant throughout a conductive object.

Static electricity

Before the year 1832, when Michael Faraday published the results of his experiment on the identity of electricities, physicists thought "static electricity" was somehow different from other electrical charges. Michael Faraday proved that the electricity induced from a magnet, voltaic electricity produced by a battery, and static electricity are all the same. Static electricity is usually caused when certain materials are rubbed against each other, like wool on plastic or the soles of shoes on carpet. The process causes electrons to be pulled from the surface of one material and relocated on the surface of the other material. A static shock occurs when the surface of the second material, negatively charged with electrons, touches a positively charged conductor, or vice versa.
Static electricity is commonly used in xerography, air filters, and some coating processes used in manufacturing. Static electricity is a build-up of electric charge on two objects that have become separated from each other. Small electrical components can be damaged by static electricity, and component manufacturers use a number of antistatic devices to avoid this.

Static electricity and the chemical industry

When different materials are brought together and then separated, an accumulation of electric charge can occur which leaves one material positively charged while the other becomes negatively charged. The mild shock that you receive when touching a grounded object after walking on carpet is an example of excess electrical charge accumulating in your body from frictional charging between your shoes and the carpet. The resulting charge build-up on your body can generate a strong electrical discharge. Although experimenting with static electricity may be fun, similar sparks create severe hazards in industries dealing with flammable substances, where a small electrical spark may ignite explosive mixtures with devastating consequences. A similar charging mechanism can occur within low-conductivity fluids flowing through pipelines, a process called flow electrification. Fluids which have low electrical conductivity (below 50 picosiemens per meter) are called accumulators; fluids having conductivities above 50 pS/m are called non-accumulators. In non-accumulators, charges recombine as fast as they are separated, and hence electrostatic charge generation is not significant. In the petrochemical industry, 50 pS/m is the recommended minimum value of electrical conductivity for adequate removal of charge from a fluid. An important concept for insulating fluids is the static relaxation time. This is similar to the time constant (tau) of an RC circuit.
For insulating materials, it is the ratio of the static dielectric constant to the electrical conductivity of the material. For hydrocarbon fluids, this is sometimes approximated by dividing the number 18 by the electrical conductivity of the fluid in pS/m. Thus a fluid that has an electrical conductivity of 1 pS/m will have an estimated relaxation time of about 18 seconds. The excess charge within a fluid will be almost completely dissipated after 4 to 5 times the relaxation time, or 90 seconds for the fluid in the above example. Charge generation increases at higher fluid velocities and larger pipe diameters, becoming quite significant in larger pipes. Static charge generation in these systems is best controlled by limiting fluid velocity. The British standard BS PD CLC/TR 50404:2003 (formerly BS-5958-Part 2) Code of Practice for Control of Undesirable Static Electricity prescribes velocity limits. Because of its large impact on dielectric constant, the recommended velocity for hydrocarbon fluids containing water should be limited to 1 m/s. Bonding and earthing are the usual ways by which charge build-up can be prevented. For fluids with electrical conductivity below 10 pS/m, bonding and earthing are not adequate for charge dissipation, and anti-static additives may be required.

Applicable standards

*BS PD CLC/TR 50404:2003 Code of Practice for Control of Undesirable Static Electricity
*NFPA 77 (2007) Recommended Practice on Static Electricity
*API RP 2003 (1998) Protection Against Ignitions Arising Out of Static, Lightning, and Stray Currents

Electrostatic induction in commercial applications

Electrostatic induction was used in the past to build high-voltage generators known as influence machines. The main component that emerged in this era is the capacitor. Electrostatic induction is also used for electromechanical precipitation or projection.
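The rule of thumb above (relaxation time of roughly 18 divided by the conductivity in pS/m, with charge essentially gone after 4 to 5 relaxation times) can be sketched numerically. The helper-function names below are illustrative, not from any standard:

```python
# Rule-of-thumb static relaxation time for hydrocarbon fluids, as described
# in the text: tau ~ 18 / conductivity (conductivity in pS/m, tau in seconds);
# excess charge is taken to be dissipated after 4-5 relaxation times.

def relaxation_time_s(conductivity_ps_per_m: float) -> float:
    """Approximate static relaxation time in seconds."""
    return 18.0 / conductivity_ps_per_m

def dissipation_time_s(conductivity_ps_per_m: float, multiple: int = 5) -> float:
    """Time after which excess charge is almost completely dissipated."""
    return multiple * relaxation_time_s(conductivity_ps_per_m)

print(relaxation_time_s(1.0))   # 18.0 -> about 18 s for a 1 pS/m fluid
print(dissipation_time_s(1.0))  # 90.0 -> matches the 90 s in the example above
```

Note how a fluid at the 50 pS/m non-accumulator threshold relaxes in well under a second, which is why charge generation is not significant there.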
In such technologies, charged particles of small sizes are collected or deposited intentionally on surfaces. Applications range from electrostatic precipitators to electrostatic coating and inkjet printing. Recently, a new wireless power transfer technology has been developed based on electrostatic induction between oscillating distant dipoles.

*Electromagnetism
*Electronegativity
*Electrostatic discharge
*Electrostatic separator
*Electrostatic voltmeter
*Ionic bond
*Permittivity and relative permittivity
*Quantisation of charge
https://cs.stackexchange.com/questions/83765/turing-machine-accepts-two-different-strings
# Turing machine accepts two different strings

I am having a hard time proving something about this problem: $C=\{\langle M \rangle \mid M \text{ is a Turing machine, } L(M) \text{ only contains two different strings}\}$. Some ideas that I have tried:

1. I have to prove that $C$ is not recursively enumerable. I know that if $A$ is recursively enumerable and $co\text{-}A$ is recursively enumerable, then $A$ is recursive. So the first idea is to prove that $co\text{-}C$ is recursively enumerable, but I am stuck on how to find $u$ and $v$ so that $co\text{-}C$ will not accept.

2. As the co-halting problem is not recursively enumerable, I think the co-halting problem can be reduced to $C$. Then, if a TM $M$ that recognizes $C$ can be used to solve the co-halting problem, that proves the claim, but I don't really understand exactly how to prove it.

3. The idea is the same as #2, but using $A_{TM}$.

Can anyone give me some explanation of this problem, or the best way to prove it? Thank you a lot.

EDIT: the problem is how to prove that $C$ is not recursively enumerable.

• Your "problem" is a set of machines; what exactly are you trying to prove? – Scott Hunter Nov 5 '17 at 0:18
• Oh, I'm sorry; the problem is how to prove that C is not r.e., as I edited into the question. Thanks – paiman Nov 5 '17 at 1:30
• This seems relevant: math.stackexchange.com/a/38018 – Scott Hunter Nov 7 '17 at 18:32
• Hint: Prove that co-C is r.e. (run M on input 1,2,3,... and as soon as it accepts more than ...); then apply Rice ... – Vor Nov 11 '17 at 19:41
https://brilliant.org/discussions/thread/permutation-and-combination/
# Permutation and Combination

I just needed a little bit of help from you guys. I find Permutation and Combination really interesting, and want to know more and explore the topic. If anyone can provide me with a link to a site that explains these concepts in brief, I would be really grateful. I found many sites explaining the basics, which I already know, but I want to get to a higher level. Thanks in advance!

Note by Akshat Jain, 5 years, 3 months ago

## Comments

Try this. It might be long, but it sure does a good job of explaining everything. It also has some great problems at the end. - 5 years, 3 months ago

Thank you very much, mate! - 5 years, 3 months ago

Could you please tell me which book this chapter was taken from? - 5 years, 3 months ago

If you really want to experience the beauty of combinatorics, I'd recommend:

1. Mathematical Circles
2. Challenge and Thrill
4. Introductory Combinatorics

in the same order. - 5 years, 3 months ago

Thanks!
- 5 years, 3 months ago

I am also trying to find some really good sites for learning maths. - 5 years, 3 months ago

I will search and let you know. :) - 5 years, 3 months ago
http://www.mathworks.com/help/control/ug/discrete-time-model-creation.html?nocookie=true
Discrete-Time Model Creation

How to create discrete-time models.

Discrete-Time Transfer Function Model

This example shows how to create a discrete-time transfer function model using tf. Create the transfer function $G(z)=\frac{z}{z^{2}-2z-6}$ with a sampling time of 0.1 s.

```
num = [1 0];
den = [1 -2 -6];
Ts = 0.1;
G = tf(num,den,Ts)
```

num and den are the numerator and denominator polynomial coefficients in descending powers of z. G is a tf model object.

Tip: Create discrete-time zpk, ss, and frd models in a similar way by appending a sampling period to the input arguments. For examples, see the reference pages for those commands.

The sampling time is stored in the Ts property of G. Access Ts using dot notation: `G.Ts`

Discrete-Time Proportional-Integral-Derivative (PID) Controller

Discrete-Time PID Controller Representations

Discrete-time PID controllers are expressed by the following formulas.

Parallel form: $C=K_{p}+K_{i}\,IF(z)+\frac{K_{d}}{T_{f}+DF(z)}$, where:

• Kp = proportional gain
• Ki = integrator gain
• Kd = derivative gain
• Tf = derivative filter time

Standard form: $C=K_{p}\left(1+\frac{1}{T_{i}}IF(z)+\frac{T_{d}}{\frac{T_{d}}{N}+DF(z)}\right)$, where:

• Kp = proportional gain
• Ti = integrator time
• Td = derivative time
• N = derivative filter constant

IF(z) and DF(z) are the discrete integrator formulas for the integrator and derivative filter, respectively. Use the IFormula and DFormula properties of the pid or pidstd model objects to set the IF(z) and DF(z) formulas. The available formulas for IF(z) and DF(z) are listed below; Ts is the sample time.

• ForwardEuler (default): $\frac{T_{s}}{z-1}$
• BackwardEuler: $\frac{T_{s}z}{z-1}$
• Trapezoidal: $\frac{T_{s}}{2}\,\frac{z+1}{z-1}$

If you do not specify a value for IFormula, DFormula, or both, ForwardEuler is used by default.
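As a sketch of the tip above, discrete-time zpk and ss models can be created by appending the sample time to the usual input arguments. The pole, gain, and state-space values below are made up for illustration; they are not from the original example.

```matlab
% Illustrative discrete-time zpk and ss models (values are made up):
Ts = 0.1;
Gz = zpk([],[0.5 -0.3],2,Ts)    % zeros, poles, gain, sample time
A = 0.9; B = 1; C = 1; D = 0;
Gss = ss(A,B,C,D,Ts)            % discrete-time state-space model with Ts = 0.1
```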
Create Discrete-Time Standard-Form PID Controller

This example shows how to create a standard-form discrete-time proportional-integral-derivative (PID) controller that has Kp = 29.5, Ti = 1.13, Td = 0.15, N = 2.3, and sample time Ts = 0.1:

```
C = pidstd(29.5,1.13,0.15,2.3,0.1,...
    'IFormula','Trapezoidal','DFormula','BackwardEuler')
```

This command creates a pidstd model with $IF(z)=\frac{T_{s}}{2}\,\frac{z+1}{z-1}$ and $DF(z)=\frac{T_{s}z}{z-1}$. You can set the discrete integrator formulas for a parallel-form controller in the same way, using pid.
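The parallel-form case mentioned in the last sentence can be sketched with pid. The gains below are the parallel equivalents of the standard-form example (Ki = Kp/Ti, Kd = Kp*Td, Tf = Td/N), rounded here for illustration rather than taken from the documentation.

```matlab
% Parallel-form discrete PID with the same integrator formulas (sketch):
% Kp = 29.5, Ki = Kp/Ti (about 26.1), Kd = Kp*Td (about 4.43),
% Tf = Td/N (about 0.0652)
C2 = pid(29.5,26.1,4.43,0.0652,0.1,...
    'IFormula','Trapezoidal','DFormula','BackwardEuler')
```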
https://sriasat.wordpress.com/2015/07/25/a-countability-argument/
# A countability argument

Here is a countability argument that I like because it relies on almost nothing. Let $A$ be a ring.

Theorem. If $A$ is countable, then the polynomial ring $A[X]$ is countable.

Proof. Since $A$ is countable, there is an injection $f:A\to\mathbb N$. Let $p_0<p_1<\cdots$ be prime numbers and consider the map \begin{aligned}g:A[X]&\to\mathbb N\\ a_0+a_1X+\cdots+a_nX^n&\mapsto p_0^{f(a_0)}p_1^{f(a_1)}\cdots p_n^{f(a_n)}.\end{aligned} By unique factorisation in $\mathbb N$ it follows that $g$ is an injection. Thus $A[X]$ is countable. $\square$

We can use this to prove in a rather simple manner that

Corollary 1. The set $\mathbb A$ of all algebraic numbers is countable.

Proof. It follows from the above that $\mathbb Z[X]$ is countable. Let $\alpha\in\mathbb A$ be a root of some minimal polynomial $f_\alpha\in\mathbb Z[X]$. We can assign to each $\alpha\in\mathbb A$ a unique element $f\in\mathbb Z[X]$ as follows: if $\alpha_1(=\alpha),\dots,\alpha_n$ are the zeros of $f_\alpha\in\mathbb Z[X]$, assign $jf_\alpha$ to $\alpha_j$. This gives an injection from $\mathbb A$ to $\mathbb Z[X]$, as desired. $\square$

More generally,

Corollary 2. A countable union of countable sets is countable.

Proof. Let $A_0,A_1,\dots$ be countable sets. Then there are injections $A_i\to X^i\mathbb Z$ for $i=0,1,\dots$. Hence we have an injection $\displaystyle\bigcup_{i=0}^\infty A_i\to \bigcup_{i=0}^\infty X^i\mathbb Z\subseteq\mathbb Z[X]$, showing that $\bigcup_{i=0}^\infty A_i$ is countable. $\square$
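A concrete toy version of the injection $g$ for $A=\mathbb Z$ can be written down. The particular injection $f:\mathbb Z\to\mathbb N$ below (interleaving non-negative and negative integers into positive exponents) is my own choice for illustration, not from the post:

```python
# Encode an integer polynomial a0 + a1*X + ... + an*X^n as
# p0^f(a0) * p1^f(a1) * ... * pn^f(an), mirroring the map g in the proof.

def f(a):
    """Injection Z -> positive integers: non-negatives go to odds, negatives to evens."""
    return 2 * a + 1 if a >= 0 else -2 * a

def primes(n):
    """First n primes by trial division (fine for small n)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def g(coeffs):
    """Encode the coefficient list [a0, a1, ..., an] as a single natural number."""
    result = 1
    for p, a in zip(primes(len(coeffs)), coeffs):
        result *= p ** f(a)
    return result

# Distinct polynomials get distinct codes, by unique factorisation:
assert g([1, 2]) != g([2, 1])
print(g([0, 1]))  # X  ->  2^f(0) * 3^f(1) = 2^1 * 3^3 = 54
```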
https://support.bioconductor.org/p/109371/
Question: Using DESeq2 normalisation method on a tximport list of matrices without sample information (asked by user3188830)

From the DESeq2 vignette:

```
txi <- tximport(files, type="salmon", tx2gene=tx2gene)
ddsTxi <- DESeqDataSetFromTximport(countData = txi,
                                   colData = samples,
                                   design = ~ condition)
```

Q1. Is it possible to normalise a gene expression matrix (produced by Salmon and tximport) using the DESeq2 method without sample information? The goal here is to obtain a DESeq-normalised expression matrix.

Q2. As an alternative, would it be coherent to normalise a matrix of raw read counts manually by the method of median of ratios (described here)?

Q3. Is it correct to say that RLE is the DESeq2 normalisation method, whereas the median of ratios was the previous DESeq method?

Answer (Wolfgang Huber, EMBL European Molecular Biology Laboratory):

A1. Yes. Try giving a trivial data frame for colData and a trivial design (like ~1) for design.

A2. I'd rather use estimateSizeFactorsForMatrix for that alternative.

A3. No. The normalization methods of DESeq2 and DESeq are (up to implementation details) the same.
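Following A1 and A2, a minimal sketch in R; object names such as `txi` are assumed to come from the tximport call above, and `raw_counts` is a hypothetical raw count matrix:

```r
library(DESeq2)

# A1: trivial colData and design, so no sample information is needed
coldata <- data.frame(row.names = colnames(txi$counts))
dds <- DESeqDataSetFromTximport(txi, colData = coldata, design = ~ 1)
dds <- estimateSizeFactors(dds)
normalised <- counts(dds, normalized = TRUE)  # DESeq-normalised matrix

# A2: size factors straight from a raw count matrix (median-of-ratios)
sf <- estimateSizeFactorsForMatrix(raw_counts)
normalised2 <- sweep(raw_counts, 2, sf, "/")
```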
https://gmgauthier.com/post/on-david-hume-and-susan-feagin/
# On David Hume and Susan Feagin

In what way, if any, is Feagin's solution to the Paradox of Tragedy an improvement on Hume's solution?

#### Introduction

Susan Feagin's solution to the Paradox of Tragedy is not only not an improvement on Hume's solution, it is not a solution at all. I will argue that Feagin fails to improve upon Hume's solution for two key reasons. First, because her solution suffers from the same inscrutability as Hume's solution. Second, because the extra complexity, despite being somewhat more self-aware than Hume's, adds nothing to the solution due to its lack of scientific support.

#### II. More Mysterious Than Thou

Feagin warns us not to "substitute one puzzle for another" found in Hume's vague notion of "movement" between passion and eloquence resulting in "delight". She then immediately asks us to accept a substitute that is equally mysterious and complex. First, she claims that we experience dual responses to art: the "direct" response is the emotion triggered by direct exposure to the content; the "meta" response is an emotion triggered by the conscious observation of the "direct" response. She goes on to explain that the responses and meta-responses can take virtually any form in response to any stimulus. This diverges from Hume's theory, since his limits us to one "direct" response to tragedy or "eloquence", and one response to that response (pleasure resulting from the admixture of passion and eloquence). However, Feagin agrees with Hume's criticism of Fontanelle, arguing that these responses and meta-responses are possible both when beholding tragedy in a fiction and when beholding it in reality. What's more, she argues that these responses are present not only in the beholding but also in the experiencing.
Hume only describes his experience of Cicero's retelling of a factual event, but Feagin implicitly argues that her theory of responses and meta-responses could be applied not just to the readers of Cicero, but to the judges hearing the case, Verres himself, and perhaps even Cicero. Feagin's approach suffers from the same vagueness as Hume's, firstly because she asserts her response-metaresponse phenomenon without offering any real evidence in support of it. While she supplies a few plausible examples of when such a phenomenon might occur, she seems to expect the reader to take the truth of those examples from their sheer intuitive obviousness. However, it is not so obvious to me that people actually experience these meta-responses in the order she supposes. For example, in the case of the strip-joint hustlers, it is trivial to imagine an experience of pleasure in the thought of overcoming my inhibitions long before I ever even get to the red-light district. Likewise, it is just as possible to feel a sense of cultural pride in myself in knowing that I will be amused by Papageno or knowing that I will be horrified by Peter Quint, long before I ever get to the theater — and then have my expectations confirmed or denied by the performance. Secondly, like Hume, Feagin offers no insight into the source of either the response or the meta-response. She does an excellent job of providing a description of the phenomenon that is more amenable to the modern mind, and one naturally begins to search for experiences that might confirm Feagin's description, but this evades, rather than answers, the core question. Namely, why do we have these experiences? This is a question that is begging to be answered by psychology, or neuroscience, or some cross-over research between aesthetic philosophy and psychology. If Feagin really wanted to answer it, this is where she should have turned.

#### III. Circles Within Circles

Hume's original essay tries to account for an apparent phenomenon in the simplest terms possible, in an attempt to arrive at a general theory. It suffers from its simplicity. But Hume lacked the insight of a more advanced psychological science to provide a more plausible explanation of the phenomenon. Feagin's response to Hume is a sort of astrological adjustment of Hume's Ptolemaic understanding of the human mind. Rather than resolving or replacing Hume's vague and muddled explanation, Feagin has simply added a layer of Baroque complexity to it. To start, Feagin decouples her theory from tragedy-as-an-art-form, expanding it to include all possible experiences. Additionally, she decouples the kinds of responses necessitated by specific kinds of events. In her theory, any combination of responses and meta-responses is possible in the wake of any experience. While there is nothing necessarily wrong with suggesting such a possibility, it doesn't actually answer the challenge of the paradox. In fact, it makes the problem much more difficult. If it's possible to have any sort of response to any sort of event, then why do humans generally seem to share the same responses to all the same circumstances? If I can have any meta-response to any response to any event, then why have I not collapsed into a heap of neurotic confusion as a result of the infinite regress of reactions I'm having to those events and the reactions to those events? With as much focus as there is on self-regard and self-observation, how does this not impel me to narcissism, rather than empathy for my fellow man? How, exactly, are pleasurable meta-responses "foreclosed" by a "continuing call" for direct responses? Finally, it's not at all clear how we are to get from this state of continuous self-observation to a state of pleasure. Feagin simply "suggests" that self-observation of the correct responses to specific circumstances yields this pleasure.
But this doesn’t answer the question of why they are the “correct” responses, how we know they are the “correct” ones, and how that knowledge got there in the first place. In other words, Feagin is simply substituting Hume’s 18th century vagueness for her own 20th century ignorance of the relevant psychological literature. #### Conclusion Hume’s essay, though flawed and unsatisfying, is a quality piece of work because it is narrow-focused and thorough. Hume is humble enough to realize that he may not be able to answer his own question, let alone attempt to resolve all of the biggest conundrums of art in one sitting. He asks a very simple, though very difficult, question: Why do we experience pleasure in the depiction of painful tragedy? Feagin not only claims that she has discovered the answer to this question but confidently proclaims a resolution to the dispute between comedy and tragedy and announces a “new perspective” on the relationship between art and morality. Had Feagin spent a bit more time researching the science of emotions and their relation to aesthetics and art, and a little less time telling us all how “inappropriate” we were for laughing at tragedy, or worse, feeling self-satisfied for not laughing at tragedy, we still might not have gotten a complete answer to the paradox, but we may very likely have gotten an explanation that moved us a little closer to an actual answer. [Imported from exitingthecave.com on 1 December 2021]
http://www.cs.nyu.edu/pipermail/fom/2008-September/013046.html
# [FOM] ordinals G Aldo Antonelli antonelli at ucdavis.edu Wed Sep 10 13:39:46 EDT 2008 Aki Kanamori wrote: > (c) In the second of that series, appearing in the JSL in 1941, > section 5 sets out the theory of ordinals, and Bernays relies > on the more perspicuous definition of Raphael Robinson: > An ordinal is a transitive set x which is connected: for > different members a,b of x, either a \in b or b \in a. > This first appeared in a JSL paper of Robinson's of 1937, which presented > the von Neumann class-set theory in simplified form. In Bernays' > section 5, item 2) presents the equivalence of `hereditarily > transitive' to Robinson's definition. Robinson's definition has the obvious advantage of not depending on foundation, a fact that has more than a passing interest for those of us who dabble in non-standard, ill-founded, or otherwise just plain weird set theories. -- Aldo ***************************************** G. Aldo Antonelli Professor of Philosophy University of California, Davis Coordinating Editor, Review of Symbolic Logic http://philosophy.ucdavis.edu/antonelli antonelli at ucdavis.edu
https://gigaom.com/2010/02/02/adchemy-makes-advertising-personalized-in-real-time/
Adchemy, an emerging advertising powerhouse, has figured out a way to tweak web ads to make them dramatically more effective. The company performs a crazy real-time technical dance to optimize the ads and landing pages shown to searchers. The idea came out of a realization by CEO Murthy Nukala and his co-founder, Rajeev Motwani (the late influential angel investor and Stanford professor), that to improve demand for online advertising, it must become more effective — not just efficient — and only then could it provoke a "share shift" of traditional advertising dollars. If all goes as planned, the web advertising market could grow dramatically. I visited Nukala at his company's new Foster City, Calif., headquarters on Monday (which is actually directly across the office park from competitor QuinStreet) to learn more. Last fall Adchemy signed up Accenture as an investor in a new $30 million round (also including Mayfield Fund and August Capital, bringing the 5-year-old company to more than $57 million in total funding). Adchemy is now in testing with some of Accenture's powerful client advertisers, and hopes to announce relationships with them soon. Sure, these are early-day numbers, but the company recently announced it had grown revenue by 60 percent last year and its number of employees by 50 percent (to 135). How does Adchemy actually work? "We featurize every part of advertising — every button, every page — then compute how much every feature is contributing to success or failure," said Nukala. The company's AudienceMaster system (which it is now building out as software as a service) takes in as much (non-personally identifying) data as it can about the person making the search. So if I'm searching for a keyword like BlackBerry, for example, it figures out as much as it can about my actual intent — for instance, potentially what I really want is to find smartphone data plan pricing in my area.
Adchemy then creates machine-generated ad copy to respond to my profile and the intent of my search terms, and shows a customized ad on the search results page on which I land. When I do click, a custom landing page is configured and served onto the advertiser’s web site using an iFrame — so instead of seeing a standard AT&T landing page, I would go directly to one that shows my options for signing up for a BlackBerry in my coverage area. This all happens in the time of a click, with literally thousands of potential versions evaluated. Nukala said Adchemy can create a dynamically constructed banner ad, for example, within 30 to 40 milliseconds. One early Adchemy customer, home security provider Protection One, has increased conversion rates by 218 percent. The idea is that people will respond much better to advertising that’s customized for them — the advertising really does become the content you were hoping to find.
https://www.physicsforums.com/threads/electric-field-and-spherical-cavities.135791/
Electric Field and spherical cavities 1. Oct 10, 2006 stunner5000pt Two spherical cavities of radii a and b are hollowed out from the interior of a neutral conducting sphere of radius R. At the center of each cavity a point charge is placed; call them $q_a$ and $q_b$. a) Find the surface charges $\sigma_a$, $\sigma_b$, and $\sigma_R$. OK, say for $q_a$: $$\vec{E} = \frac{q_{a}}{4 \pi \epsilon_{0} a^2} \hat{a} = - \vec{\nabla} V \cdot \hat{n}$$ and also $$\sigma = -\epsilon_{0} \vec{\nabla} V \cdot \hat{n} = \frac{q_{a}}{4 \pi a^2}$$ but I was told this answer should be negative. Where is my mistake? 2. Oct 10, 2006 OlderDan The total charge inside a Gaussian surface surrounding one inner cavity has to be zero, so the surface charge must have the opposite sign of the central charge.
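OlderDan's Gauss's-law argument pins down all three surface charges: each cavity wall carries a total induced charge of $-q$, and the neutral conductor's outer surface carries $q_a + q_b$. A quick numeric sketch of those densities (the charge and radius values below are made-up examples, not from the thread):

```python
import math

def cavity_surface_charge_density(q, r):
    """Induced charge density on the wall of a spherical cavity of radius r
    with point charge q at its center: the total induced charge is -q,
    spread uniformly, so sigma = -q / (4 pi r^2)."""
    return -q / (4.0 * math.pi * r**2)

def outer_surface_charge_density(q_a, q_b, R):
    """The conductor is neutral overall, so the outer surface of radius R
    carries q_a + q_b, uniformly (the conductor screens the cavities)."""
    return (q_a + q_b) / (4.0 * math.pi * R**2)

q_a, q_b = 1e-9, 2e-9       # coulombs (example values)
a, b, R = 0.01, 0.02, 0.1   # metres (example values)
print(cavity_surface_charge_density(q_a, a))   # negative, as OlderDan notes
print(outer_surface_charge_density(q_a, q_b, R))
```

Integrating each density over its sphere recovers the total charges $-q_a$, $-q_b$ and $q_a + q_b$, which is exactly the sign check the student was missing.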
https://www.aimsciences.org/article/doi/10.3934/nhm.2017019
# American Institute of Mathematical Sciences September  2017, 12(3): 417-459. doi: 10.3934/nhm.2017019 ## Optimal control of infinite-dimensional piecewise deterministic Markov processes and application to the control of neuronal dynamics via Optogenetics 1 Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7599, Laboratoire de Probabilités et Modèles Aléatoires, F-75005, Paris, France 2 Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France Received  July 2016 Revised  May 2017 Published  September 2017 In this paper we define an infinite-dimensional controlled piecewise deterministic Markov process (PDMP) and we study an optimal control problem with finite time horizon and unbounded cost. This process is a coupling between a continuous time Markov Chain and a set of semilinear parabolic partial differential equations, both processes depending on the control. We apply dynamic programming to the embedded Markov decision process to obtain existence of optimal relaxed controls and we give some sufficient conditions ensuring the existence of an optimal ordinary control. This study, which constitutes an extension of controlled PDMPs to infinite dimension, is motivated by the control that provides Optogenetics on neuron models such as the Hodgkin-Huxley model. We define an infinite-dimensional controlled Hodgkin-Huxley model as an infinite-dimensional controlled piecewise deterministic Markov process and apply the previous results to prove the existence of optimal ordinary controls for a tracking problem. Citation: Vincent Renault, Michèle Thieullen, Emmanuel Trélat. Optimal control of infinite-dimensional piecewise deterministic Markov processes and application to the control of neuronal dynamics via Optogenetics. Networks & Heterogeneous Media, 2017, 12 (3) : 417-459. 
doi: 10.3934/nhm.2017019

Figure captions: simplified four-state ChR2 channel, where $\varepsilon_1$, $\varepsilon_2$, $e_{12}$, $e_{21}$, $K_{d1}$, $K_{d2}$ and $K_r$ are positive constants; simplified three-state ChR2 model, where $K_{a1}$, $K_{a2}$, and $K_{d2}$ are positive constants.

Table: expression of the individual jump rate functions and the Hodgkin-Huxley model.

In $D_1 = \{n_0, n_1, n_2, n_3, n_4\}$:
$\sigma_{n_0,n_1}(v,u) = 4\alpha_n(v)$, $\sigma_{n_1,n_2}(v,u) = 3\alpha_n(v)$, $\sigma_{n_2,n_3}(v,u) = 2\alpha_n(v)$, $\sigma_{n_3,n_4}(v,u) = \alpha_n(v)$,
$\sigma_{n_4,n_3}(v,u) = 4\beta_n(v)$, $\sigma_{n_3,n_2}(v,u) = 3\beta_n(v)$, $\sigma_{n_2,n_1}(v,u) = 2\beta_n(v)$, $\sigma_{n_1,n_0}(v,u) = \beta_n(v)$.

In $D_2 = \{m_0h_1, m_1h_1, m_2h_1, m_3h_1, m_0h_0, m_1h_0, m_2h_0, m_3h_0\}$:
$\sigma_{m_0h_1,m_1h_1}(v,u) = \sigma_{m_0h_0,m_1h_0}(v,u) = 3\alpha_m(v)$, $\sigma_{m_1h_1,m_2h_1}(v,u) = \sigma_{m_1h_0,m_2h_0}(v,u) = 2\alpha_m(v)$, $\sigma_{m_2h_1,m_3h_1}(v,u) = \sigma_{m_2h_0,m_3h_0}(v,u) = \alpha_m(v)$,
$\sigma_{m_3h_1,m_2h_1}(v,u) = \sigma_{m_3h_0,m_2h_0}(v,u) = 3\beta_m(v)$, $\sigma_{m_2h_1,m_1h_1}(v,u) = \sigma_{m_2h_0,m_1h_0}(v,u) = 2\beta_m(v)$, $\sigma_{m_1h_1,m_0h_1}(v,u) = \sigma_{m_1h_0,m_0h_0}(v,u) = \beta_m(v)$.

In $D_{ChR2} = \{o_1, o_2, c_1, c_2\}$:
$\sigma_{c_1,o_1}(v,u) = \varepsilon_1 u$, $\sigma_{o_1,c_1}(v,u) = K_{d1}$, $\sigma_{o_1,o_2}(v,u) = e_{12}$, $\sigma_{o_2,o_1}(v,u) = e_{21}$,
$\sigma_{o_2,c_2}(v,u) = K_{d2}$, $\sigma_{c_2,o_2}(v,u) = \varepsilon_2 u$, $\sigma_{c_2,c_1}(v,u) = K_r$.
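The $D_{ChR2}$ rates define a controlled continuous-time Markov chain on $\{o_1, o_2, c_1, c_2\}$. A minimal simulation sketch of one jump of that chain (the numeric rate constants below are invented for illustration; the paper only requires them to be positive, and the light-intensity control $u$ is assumed positive so every state has an exit rate):

```python
import random

# Hypothetical positive rate constants (illustrative values only):
eps1, eps2, e12, e21 = 0.5, 0.1, 0.05, 0.01
Kd1, Kd2, Kr = 0.1, 0.05, 0.004

def chr2_rates(state, u):
    """Outgoing transition rates of the four-state ChR2 channel,
    following the sigma table; u > 0 is the light-intensity control."""
    return {
        "c1": {"o1": eps1 * u},
        "o1": {"c1": Kd1, "o2": e12},
        "o2": {"o1": e21, "c2": Kd2},
        "c2": {"o2": eps2 * u, "c1": Kr},
    }[state]

def gillespie_step(state, u, rng=random):
    """One jump: exponential waiting time with the total outgoing rate,
    then a target chosen with probability proportional to its rate."""
    rates = chr2_rates(state, u)
    total = sum(rates.values())
    dt = rng.expovariate(total)
    x = rng.uniform(0.0, total)
    for target, r in rates.items():
        x -= r
        if x <= 0.0:
            return dt, target
    return dt, target
```

Iterating `gillespie_step` between the deterministic flows gives the usual embedded-chain view of a PDMP trajectory, which is the object the paper's dynamic programming argument works with.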
$\alpha_n(v)=\frac{0.1-0.01v}{e^{1-0.1v}-1}$, $\beta_n(v)=0.125e^{-\frac{v}{80}}$, $\alpha_m(v)=\frac{2.5-0.1v}{e^{2.5-0.1v}-1}$, $\beta_m(v)=4e^{-\frac{v}{18}}$, $\alpha_h(v)=0.07e^{-\frac{v}{20}}$, $\beta_h(v)=\frac{1}{e^{3-0.1v}+1}$.

$$(HH)\left\{ \begin{aligned} C \dot{V}(t)&= \bar{g}_Kn^4(t)(E_K - V(t)) +\bar{g}_{Na}m^3(t)h(t)(E_{Na}-V(t)) + g_L(E_L-V(t)) + I_{ext}(t),\\ \dot{n}(t)&= \alpha_n(V(t))(1-n(t)) - \beta_n(V(t))n(t),\\ \dot{m}(t)&= \alpha_m(V(t))(1-m(t)) - \beta_m(V(t))m(t),\\ \dot{h}(t)&= \alpha_h(V(t))(1-h(t)) - \beta_h(V(t))h(t). \end{aligned} \right.$$
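As a sanity check on the (HH) system and the rate functions above, here is a forward-Euler integration sketch. The conductances and reversal potentials are the classic Hodgkin-Huxley values (an assumption; the text does not fix them here), in the convention where the resting potential is 0 mV, and the rate functions are guarded at their removable singularities:

```python
import math

# Rate functions from the text (v in mV, resting potential at 0 mV).
def alpha_n(v): return 0.1 if abs(v - 10.0) < 1e-7 else 0.01*(10.0 - v)/(math.exp((10.0 - v)/10.0) - 1.0)
def beta_n(v):  return 0.125*math.exp(-v/80.0)
def alpha_m(v): return 1.0 if abs(v - 25.0) < 1e-7 else 0.1*(25.0 - v)/(math.exp((25.0 - v)/10.0) - 1.0)
def beta_m(v):  return 4.0*math.exp(-v/18.0)
def alpha_h(v): return 0.07*math.exp(-v/20.0)
def beta_h(v):  return 1.0/(math.exp((30.0 - v)/10.0) + 1.0)

# Classic HH constants (assumed; uF/cm^2, mS/cm^2, mV):
C, gK, gNa, gL = 1.0, 36.0, 120.0, 0.3
EK, ENa, EL = -12.0, 115.0, 10.6

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of (HH) under a constant applied current."""
    V, n, m, h = 0.0, 0.317, 0.052, 0.596  # approximate resting state
    trace = []
    for _ in range(round(T / dt)):
        dV = (gK*n**4*(EK - V) + gNa*m**3*h*(ENa - V) + gL*(EL - V) + I_ext)/C
        dn = alpha_n(V)*(1 - n) - beta_n(V)*n
        dm = alpha_m(V)*(1 - m) - beta_m(V)*m
        dh = alpha_h(V)*(1 - h) - beta_h(V)*h
        V, n, m, h = V + dt*dV, n + dt*dn, m + dt*dm, h + dt*dh
        trace.append(V)
    return trace

trace = simulate()
print(max(trace))  # peak membrane potential during the simulated spikes
```

A constant current of 10 µA/cm² is above the model's firing threshold, so the trace shows action potentials; in the full PDMP of the paper the deterministic gating ODEs are replaced by the stochastic jump processes with the rates tabulated above.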
https://cemo.pnvkarte.de/news/find-volume-of-parallelepiped-with-3-vectors.html
# Find volume of parallelepiped with 3 vectors

Find step-by-step Calculus solutions and your answer to the following textbook question: Find the volume of the parallelepiped determined by the vectors a, b, and c.
a=, b=, c=.

Quiz 3. 1. Find the area of the triangle that passes through the points P(1,2,3), Q(3,8,10) and R(7,6,12).

Volume of Parallelepiped. In geometry, a parallelepiped is the 3-D figure formed by six parallelograms taken together (the term rhomboid is also sometimes used with the same meaning). A parallelepiped is related to the parallelogram in the same way that a cube is related to the square and a cuboid to the rectangle.

The Volume of a Parallelepiped in 3-Space. One nice application of vectors in $\mathbb{R}^3$ is in calculating the volumes of certain shapes. One such shape whose volume we can calculate with vectors is the parallelepiped. Theorem 1: If $\vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^3$ …

Homework (due Tuesday, 6/30/2020). Problem 3.1: a) Find a unit vector perpendicular to the space diagonal [1,1,1] and the face diagonal [1,0,1] of the cube. b) Find the volume of the parallelepiped for which the base parallelo- …

Matching exercise: (i) … Then the volume of the parallelepiped determined by vectors 2(a × b), 3(b × c) and (c × a) is (p) 100. (ii) Volume of the parallelepiped determined by vectors a, b and c is 5. Then the volume of the parallelepiped determined by vectors 3(a + b), (b + c) and 2(c + a) is (q) 30. (iii) Area of a triangle with adjacent sides determined by …
A parallelepiped is a six-faced figure whose opposite faces are congruent parallelograms located in parallel planes. A parallelepiped is a three-dimensional counterpart of a parallelogram, and is determined by three non-coplanar vectors in . The figure below shows a parallelepiped determined by three vectors. watch this sex video punk hardcore sex pornhot older women fuck videos Answer to Solved Find volume of Parallelepiped defined by 3 vectors: a. Math; Calculus; Calculus questions and answers; Find volume of Parallelepiped defined by 3 vectors: a = [1,1,2], 0 = (2,1,2]. c = 12,2,1 V = |(a x b).cl Area of parallelogram defined by the vectors b and c: 16 x c=1 sin(a) between the vector a and b a a Б. 6 sin(a) ахь al. A parallelepiped has vertices at the points A t(2,1,), B(3,3,2), D(4,0,5) and E(1, 2,7−), where t is a scalar constant. a) Calculate AB AD∧, in terms of t. b) Find the value of AB AD AE∧ i The volume of the parallelepiped is 22 cubic units. c) Determine the possible values of t. ( ) ( )12 3 1 5− + − − −t ti j k, 11 44t − , t = 2. Calculate the volume of the Parallelepiped. The volume of a Parallelepiped corresponds to space it occupies in its environment. We design : The Length: a. The Width: c. The depth: b. Hence the formula: the volume of the Parallelepiped V = a. b .c. Note: All dimensions must be expressed in the same unit of length, before performing a calculation!. empty card for discord nitro aghori tantraobsolete firearms parts A parallelepiped is a six-sided figure, each of whose sides is a parallelogram. Here the opposite side parallelograms are identical. The volume V of the parallelepiped can be obtained from the side of edges a, b, c. The volume of a parallelepiped can be obtained from the product of the area of the base and the height of the parallelepiped. 
0 Given a parallelepiped in R 3 with the three adjacent vectors corresponding to three adjacent edges of the parallelepiped, to find the volume, we just take any two vectors u →, v → from the three adjacent vectors and take the crossproduct u → × v → and then take the dot product with the other vector w →, that is, ( u → × v →. This calculus 3 video tutorial explains how to calculate the volume of a parallelpiped using the triple scalar product formula.My Website: https://www.video. x force keygen download filehippo disney character fusion generator1969 plymouth vin decoder Click here👆to get an answer to your question ️ Volume of parallelepiped formed by vectors vec a × vec b, vec b × vec c and vec c × vec a is 36 sq. units. Find step-by-step Calculus solutions and your answer to the following textbook question: Find the volume of the parallelepiped determined by the vectors a, b, and c. a=, b=, c=. Answer to Solved Find volume of Parallelepiped defined by 3 vectors: a. Math; Calculus; Calculus questions and answers; Find volume of Parallelepiped defined by 3 vectors: a = [1,1,2], 0 = (2,1,2]. c = 12,2,1 V = |(a x b).cl Area of parallelogram defined by the vectors b and c: 16 x c=1 sin(a) between the vector a and b a a Б. 6 sin(a) ахь al. Find the volume of the parallelepiped with the given vertices. \begin{aligned}&(0,0,0),(0,4,0),(-3,0,0),(-1,1,5)\\&(-3,4,0),(-1,5,5),(-4,1,5),(-4,5,5)\end{aligned} ... is a geometric object that has a magnitude (or length) and a direction. Vectors can be added to other vectors according to vector algebra, and can be multiplied by a scalar. Find the volume of the parallelepiped (box) determined by u, v, and w. 9k The volume of the parallelepiped is (Simplify your answer.) units cubed. Find the volume of a parallelepiped if four of its eight vertices are B(2, 3 ,0), C(O, — 4, 3 ), and D( 3 , The volume of the parallelepiped with the given vertices A, B, C and D is (Simplify your. of three vectors is the . 
dot product of one of the vectors with the cross product of the other two. ... (In summary, the volume of a tetrahedron is 16 of the volume of a parallelepiped because the base is half as big and a pyramid has 13 of the volume of the corresponding prism). net 3d model 1996 cadillac deville blackhyattconnect my learning Welcome back to another cross product problem where we're trying to find the volume of a parallel pipe, ed that have adjacent points, P Q R and S. Where we can calculate three vectors used for our triple product calculation as PQ pr and P s. So then PQ is just q minus p. That's two minus negative too. Three minus one to minus zero. Pr is ar. For given vectors, = - 2 + 6 = 4 Q. 3 Find the volume of the parallelepiped whose conterminous edges are represented by the vectors i. ii. iii. iv. Answer : ... To Find : Volume of parallelepiped Formulae : 1) Volume of parallelepiped : If are coterminous edges of parallelepiped, Where,. Then these vectors form three edges of a parallelepiped, a three-dimensional prism with six faces that are each parallelograms, as shown in . The volume of this prism is the product of the figure's height and the area of its base. ... Find the volume of the parallelepiped with the adjacent edges ; and . a. b. and . Calculate the triple scalar. smt testing process amcrest camera reset passwordreddit getting ssdi Since det(A, B, C) = 0 the volume of the parallelepiped with these vectors as edges is 0. This means all three origin vectors lie in a plane. To see this consider the figure in problem 1. It shows the opposite case, when the vectors are not in a plane the resulting parallelepiped is really three dimensional and has non-zero volume. estey reed organ models bfs3 safety selectormachine soldering process three points in space as well as a vector, telling the direction. Homework This homework is due on Tuesday, 6/30/2020. 
Problem 3.1: a) Find a unit vector perpendicular to the space diagonal [1;1;1] and the face diagonal [1;0;1] of the cube. b) Find the volume of the parallelepiped for which the base parallelo-. Area, Volume, and the Determinant (pages 280-3) The determinant has another interpretation completely separate from systems of equations and matrices. It turns out that it can also be used to calculate the area of a parallelogram (in R2), the volume of a parallelepiped (in R3) or the generic n-volume of a parallelotope (in Rn). Answer to Find the volume of the parallelepiped spanned by the vectors a = -2i - 4j + 3k, b = - 3i - 2j - 2k and c = 2i + 4j + k. O A. The volume of the. Find the volume of the parallelepiped determined by the position vectors\m 03:26 Volume of a Parallelepiped Three vectors $\mathbf{u}, \mathbf{v},$ and $\mat. Problem 637. Volume of a. Formulas for volume of the parallelepiped. If we need to find the volume of a parallelepiped and we’re given three adjacent edges of it, all we have to do is find the scalar triple product of the three vectors that define the edges:. Entering data into the area of parallelogram formed by vectors calculator. You can input only integer numbers or fractions in this online calculator. A(3,-1,1), B(5,-2,4), C(1,1,1), D(0,0,1) The simplest method is to use vectors. Here, we calculate the volume of a parallelepiped defined by vectors AB, AC. Write the formula for the volume of a. liberty sports medicine staff quercetin brain fog redditcisco ip communicator download Answer (1 of 4): This means that the three vectors belong to the same plane (coplanar vectors), and their mixed product vanishes, a x b . c = 0. In particular, one or more lengths of these vectors ma be zero, but this is not a must; generally, all three may differ from zero. Answer to Find the volume of the parallelepiped spanned by the vectors a = -2i - 4j + 3k, b = - 3i - 2j - 2k and c = 2i + 4j + k. O A. The volume of the. 
Find the volume of the parallelepiped determined by the position vectors$\m 03:26 Volume of a Parallelepiped Three vectors $\mathbf{u}, \mathbf{v},$ and $\mat. Problem 637. Volume of a. If we need to find the volume of a parallelepiped and we're given three adjacent edges of it, all we have to do is find the scalar triple product of the three vectors that define the edges. ... How to find the volume of a parallelepiped, given three vectors that define its edges . Take the course Want to learn more about Calculus 3? I have a. What is the volume of the parallelepiped with adjacent edges? The answer is: V=16.Given three vectors, there is a product, called scalar triple product, that gives (the absolute value of it), the volume of the parallelepiped that has the three vectors as dimensions.. Question: Find the volume of the parallelepiped determined by the vectors a, b, and c. a = (1, 2, 2), b = (-1, 1, 5), c = (3, 1. nes the volume of the parallelepiped spanned by ~u;~v;w~and the orientation of three vectors is the sign of [~u;~v;w~]. The value h= j~u~nj=j~njis the height of the parallelepiped if ~n= (~v w~) is a normal vector to the ground parallelogram of area A= j~nj= j~v w~j. The volume of the parallelepiped is. motorroller mieten win a free plastic surgery makeover 2022roblox appx file download Homework Statement The volume of a parallelepiped defined by the vectors w, u, \text{ and }v, \text{ where } w=u \times v is computed using: V = w \cdot (u... Insights Blog -- Browse All Articles -- Physics Articles Physics Tutorials Physics Guides Physics FAQ Math Articles Math Tutorials Math Guides Math FAQ Education Articles Education Guides. esp8266 ac power meter minimum wages delhihonda forza 125 fuel consumption A balloon of negligible mass is filled with helium gas (p=0.179 kg/m^ 3 ) until its volume is just large enough to lift a 25.0 kg load. (a) Find this volume . (b) Find the buoyant force exerted by air (p=1.29 kg/m^ 3 ) on the balloon. 
0 Given a parallelepiped in R 3 with the three adjacent vectors corresponding to three adjacent edges of the parallelepiped, to find the volume, we just take any two vectors u →, v → from the three adjacent vectors and take the crossproduct u → × v → and then take the dot product with the other vector w →, that is, ( u → × v →. . uquiz weight gain hrt before and after bodycatalyst leadership conference 2022 The volume of a parallelepiped is expressed in cubic units, like in 3, cm 3, m 3, ft 3, yd 3, etc. Volume of Parallelepiped Formula. Volume of parallelepiped can be calculated using the base area and the height. Problem 3: If a vector, b vector, c vector are position vectors of the vertices A, B, C of a triangle ABC, show that the area of the. the ultimate guide to chart patterns pdf coxsackievirus in adults symptomsxaml layout examples Let, bar a, bar b and bar c be the position vectors of points A, B and C respectively with respect to origin O. Complete the parallelopiped as shown in the figure with bar(OA) , bar(OB) and bar(OC) as its coterminus edges. ... volume of parallelepiped =|bara||barbxxbarc|costheta = bar a.(barb xx barc). glowing red hot catalytic converter. Learn how to find the volume of the parallelepiped given three vectors.GET EXTRA HELP If you could use some extra help with your math class, then check out. Answer to: Find the volume of the parallelepiped determined by the vectors u = i + j, v = j + 2k and w = 3k. By signing up, you&#039;ll get thousands of. Find the volume of the parallelepiped that is defined by the vectors a, b and c? a= i-j-k b= -i+j-k c= -i-j+k Find the volume of the parallelepiped that is defined by the vectors a, b and c? that is a.(b X c) you can compute the same,. Posted by he at 3:40 AM. No comments: Post a Comment. Newer Post. Q: Find the volume of the parallelepiped determined by the vectors a = (3, 2, – 1), b = (0, 1, 4), с %3 A: We calculate the triple product of the three given vectors in. 
windows 11 onedrive view online missing father brown complete castbull lurcher temperament (9pts) b. Find the volume of the parallelepiped spanned by the vectors (t, 0,0), (1,2,-4), (0, t,-1). For what values of t will there be a zero volume ?. 3 –1 Vector integrals; the line integral of ∇ψψ. ... These three vectors determine a parallelepiped whose. Volume of parallelepiped formed by vectors a , b and c is given by determinant of a 1 a 2 a 3 b 1 b 2 b 3 c 1 c 2 c 3 Here, a = 2i - 3j + 4k b = i + 2j - k c = 2i - j + 2k Volume of parallelepiped = 2-3 4 1 2-1 2-1 2. Now, let's address the one time where the cross product will not be orthogonal to the original vectors. If the two vectors, →a a → and →b b →, are parallel then the angle between them is either 0 or 180 degrees. From (1) (1) this implies that, ∥∥→a ×→b ∥∥ = 0 ‖ a → × b → ‖ = 0. From a fact about the magnitude we. Answer to Find the volume of the parallelepiped spanned by the vectors a = -2i - 4j + 3k, b = - 3i - 2j - 2k and c = 2i + 4j + k. O A. The volume of the. Find the volume of the parallelepiped determined by the position vectors$\m 03:26 Volume of a Parallelepiped Three vectors $\mathbf{u}, \mathbf{v},$ and $\mat. Problem 637. Volume of a. stata inlist not found cba executive directornight clubs in maine We calculate the volume of the parallelepiped spanned by AEsimultaneously with the volume of the parallelepiped spanned by A's columns in the following way. For both parallelotopes, let the vector corresponding to the i-th column in both be the \ rst" vector we study (i.e. the w~ 1 vector), and the vector corresponding to the j-th column in. The cross product of the first two vectors is calculated first and then the dot product of the remaining vector with the resultant vector is calculated, to calculate the formula. The triple product will become zero if one of the three vectors taken is of zero magnitudes. We can find the volume of a parallelepiped by using the formula. 
Q: Find the altitude of a parallelepiped whose three coterminous edges are the vectors A = i + j + k, B = 2i + 4j − k and C = i + j + 3k, with A and B as the sides of the base of the parallelepiped.

This gives us the volume of the parallelepiped when the vectors a, b and c are known. Steps to calculate the volume when the vertices are known: if we know the vertices of a parallelepiped, we can find its edge vectors by using the simple distance formula (subtracting coordinates), and then proceed as above.

Supplementary Example 2: Find the volume of the parallelepiped whose edges are a = 2i − 3j + 4k, b = i + 2j − k and c = 2i − j + 2k. Volume of the parallelepiped = [a b c], the determinant with rows (2, −3, 4), (1, 2, −1), (2, −1, 2).

Problem: Let V1, V2, V3 be the volumes of a parallelepiped, a triangular prism and a tetrahedron, respectively, where the three coterminous edges of all three figures are the same three given vectors. If the sum of the volumes V1, V2 and V3 is k, then 10k = …

This problem asks for the volume of a parallelepiped with one vertex at the origin (0, 0, 0) and the other three adjacent vertices given. Solving it requires knowledge of three-dimensional shapes, their areas and volumes, and determinants of 3 × 3 square matrices. A parallelepiped is a three-dimensional solid whose six faces are parallelograms.
Pictures: the parallelepiped; the image of a curvy shape under a linear transformation. Theorem: determinants and volumes. Vocabulary: parallelepiped. In this section we give a geometric interpretation of determinants, in terms of volumes. This will shed light on the reason behind three of the four defining properties of the determinant.

Q: The volume of a parallelepiped whose concurrent edges are three non-coplanar vectors a, b, c is 90 cubic units; find the value of λ.

Q: How do you find the volume of the parallelepiped determined by the vectors <1, 3, 7>, <2, 1, 5> and <3, 1, 1>? A (Gió, Mar 19, 2015): Use the triple scalar product (or box product), whose vectorial/geometrical interpretation is the signed volume.

Q: Find the volume of the parallelepiped determined by the vectors a = <4, 3, −1>, b = <0, 1, 2>, c = <5, −3, 6>.

Q: What is the volume of the parallelepiped with the given adjacent edges? A: V = 16. Given three vectors, there is a product, called the scalar triple product, that gives (in absolute value) the volume of the parallelepiped that has the three vectors as dimensions.

Matching exercise: (i) The volume of the parallelepiped determined by vectors a, b and c is 5; the volume of the parallelepiped determined by 2(a × b), 3(b × c) and (c × a) is (p) 100. (ii) The volume of the parallelepiped determined by vectors a, b and c is 5; the volume of the parallelepiped determined by 3(a + b), (b + c) and 2(c + a) is (q) 30. (iii) The area of a triangle with adjacent sides determined by vectors a and b is 20; the area of the triangle with adjacent sides determined by (2a + 3b) and (a − b) is …
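As a quick check on the box-product method, a short sketch evaluating the two concrete questions above (the determinant expansion is standard; the numeric answers are computed here, not quoted from the source):

```python
def box(u, v, w):
    # Scalar triple product u . (v x w), written as the 3x3 determinant
    # whose rows are u, v, w.
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

print(abs(box((1, 3, 7), (2, 1, 5), (3, 1, 1))))    # 28
print(abs(box((4, 3, -1), (0, 1, 2), (5, -3, 6))))  # 83
```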
Theorem: If A is a 3 × 3 matrix, the volume of the parallelepiped determined by the columns of A is |det A|. Remark: Let a1 and a2 be nonzero vectors. Then for any scalar c, the area of the parallelogram determined by a1 and a2 equals the area of the parallelogram determined by a1 and a2 + c·a1.

Worked step: with b × c = <2, 5, 4> already computed, we take the dot product of a = <2, −1, 3> with it: a · (b × c) = 2·2 + (−1)·5 + 3·4 = 11. The final answer is the value of the scalar triple product, which (in absolute value) is the volume of the parallelepiped.

Geometrically, the mixed product is the volume of the parallelepiped defined by the vectors a, b and c, as the figure on the right shows: the vector b × c is perpendicular to the base of the parallelepiped and its magnitude equals the area of the base.

This volume is 1 if the vectors have unit length and are mutually perpendicular, and it does not change if one side is added to another; that just tilts the parallelepiped without changing its volume.

The volume of the parallelepiped spanned by the vectors a, b, c is |a · (b × c)|. It does not matter which vector is a, b or c, so we choose them so that the cross product is easier to compute.
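The shear remark (replacing a2 by a2 + c·a1 leaves the area unchanged) can be verified numerically; the particular vectors below are my own illustrative choice, not from the source:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def parallelogram_area(a1, a2):
    # Area of the parallelogram with sides a1, a2 is |a1 x a2|.
    return math.sqrt(sum(x * x for x in cross(a1, a2)))

a1, a2, c = (2.0, 1.0, 0.0), (0.0, 3.0, 1.0), 5.0
sheared = tuple(x + c * y for x, y in zip(a2, a1))  # a2 + c*a1
# Shearing one side along the other does not change the area.
assert math.isclose(parallelogram_area(a1, a2), parallelogram_area(a1, sheared))
```

The same invariance in three dimensions is what lets row operations of the form "add a multiple of one row to another" preserve |det A|.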
Example: Find the volume of a parallelepiped having the following vectors as adjacent edges: u = <−3, 5, 1>, v = <0, 2, −2>, w = <3, 1, 1>. Recall that u · (v × w) gives the signed volume of a parallelepiped with u, v and w as adjacent edges; the triple scalar product can be found as the determinant with rows (u1, u2, u3), (v1, v2, v3), (w1, w2, w3).

Calculating the volume of a rectangular parallelepiped: the volume corresponds to the space it occupies in its environment. Writing the length a, the width c and the depth b, the formula is V = a · b · c. Note: all dimensions must be expressed in the same unit of length before performing the calculation.

The volume of a parallelepiped is defined as the space occupied by the shape in three dimensions, expressed in cubic units (in^3, cm^3, m^3, ft^3, yd^3, etc.); it can be calculated using the base area and the height.

Q (Vectors): Find the volume of a parallelepiped with three edges defined by a = (−2, 1, 4), b = (5, 9, 0) and c = (0, 3, −7). Related: find the volume of the largest rectangular box with edges parallel to the axes that can be inscribed in the ellipsoid x²/64 + y²/36 + z²/1 = 1. Hint: by symmetry, you can restrict your attention to one octant.

Q: Find the volume of the parallelepiped whose edges are represented by the given vectors. Let u, v and w denote vectors.
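A sketch evaluating the two edge-vector problems above via the determinant form of the triple scalar product (answers computed, not quoted from the source):

```python
def triple(u, v, w):
    # u . (v x w) as the 3x3 determinant over the rows u, v, w.
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

print(abs(triple((-3, 5, 1), (0, 2, -2), (3, 1, 1))))  # 48
print(abs(triple((-2, 1, 4), (5, 9, 0), (0, 3, -7))))  # 221
```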
The scalar triple product is given by u · (v × w), and it gives the volume of a parallelepiped: VOL(u, v, w) = |u · (v × w)|.

Formula for the volume of a parallelepiped: with the sides represented by vectors a, b and c, the volume can be calculated by multiplying the area of the base by the height. Here the base area is the magnitude of the cross product of the two vectors b and c, and the direction of b × c is perpendicular to the base.

Since the scalar triple product of u, v and w is equal to 0, these vectors lie in the same plane; so P1, P2, P3 and P4 are coplanar.

Let us now find the value of k for which D(−4, −3, k) is in the plane ABC. As the scalar triple product of three coplanar vectors is zero, we need to find the value of k for which, for example, AD · (BA × BC) = 0. The components of AD are (−5, −10, k + 2); hence we solve AD · (BA × BC) = 0 for k.

Q: Find the volume of the parallelepiped determined by the vectors a = (1, 2, 2), b = (−1, 1, 5), c = (3, 1, …).
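The coplanarity criterion (zero scalar triple product) is easy to automate. A minimal sketch with illustrative vectors of my own choosing, not taken from the problems above:

```python
def triple(u, v, w):
    # Scalar triple product u . (v x w).
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def coplanar(u, v, w, tol=1e-9):
    # Three vectors are coplanar iff their scalar triple product vanishes.
    return abs(triple(u, v, w)) <= tol

assert coplanar((1, 0, 0), (0, 1, 0), (2, 3, 0))      # all lie in the z = 0 plane
assert not coplanar((1, 0, 0), (0, 1, 0), (0, 0, 1))  # these span all of R^3
```

For points A, B, C, D, the same test applied to the vectors AB, AC, AD decides whether D lies in the plane ABC.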
Q (Calc and Vectors): Find the volume of a parallelepiped with three edges defined by (−2, 0, 4), (5, 9, 0) and (0, 3, −7).

Q: Find the volume of the parallelepiped having u = 3i − 5j + k, v = 2j and w = 3i + j + k as adjacent edges.

Q: Find the volume of the parallelepiped with adjacent edges PQ, PR and PS, where P(−2, 1, 0), Q(2, 3, 2), R(1, 4, −1), S(3, 6, 1). We can find the volume of the parallelepiped that the three edge vectors shape by using their triple product.

Formulas for the volume of the parallelepiped: if we need to find the volume of a parallelepiped and we are given three adjacent edges of it, all we have to do is find the scalar triple product of the three vectors that define the edges.
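For the PQ, PR, PS problem above, the edge vectors come from subtracting P's coordinates from those of the other points. A short sketch (the result is computed, not quoted):

```python
def sub(p, q):
    # Edge vector from point p to point q.
    return tuple(b - a for a, b in zip(p, q))

def triple(u, v, w):
    # Scalar triple product u . (v x w).
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

P, Q, R, S = (-2, 1, 0), (2, 3, 2), (1, 4, -1), (3, 6, 1)
volume = abs(triple(sub(P, Q), sub(P, R), sub(P, S)))
print(volume)  # 16
```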
Q (Parallelepiped): Find the volume of a parallelepiped determined by the position vectors u = <2, 4, −5>, v = <−6, 10, 2> and w = <4, −8, 8>.

It is easier to calculate the volume of parallelepiped-type shapes if we understand that a parallelepiped is bounded by six parallelograms.
Volume of a parallelepiped and the scalar triple product: the parallelepiped formed by three non-coplanar vectors a, b, c has a volume given by Volume = |a · (b × c)|.

18. Compute the volume of the parallelepiped with three adjacent edges formed by the given vectors.

The area of the parallelogram spanned by two vectors equals the magnitude of their cross product.

The three-dimensional analogue of the parallelogram is the parallelepiped. Each face is parallel to the opposite face, as illustrated in Figure 11.4.5. By crossing v and w, one gets a vector whose magnitude is the area of the base. Dotting this vector with u computes the volume of the parallelepiped (up to a sign; take the absolute value).

A parallelepiped, generated by three vectors, can be considered as an oblique prism with a parallelogram as base. Hence the volume V of a parallelepiped is the product of the base area B and the height h (see diagram).
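The base-times-height reading of the triple product can be checked directly: |v × w| is the base area, and |u · n̂| (with n̂ the unit normal to the base) is the height. A sketch with illustrative edges of my own choosing:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Illustrative edges (my own choice of numbers, not from the source).
u, v, w = (1.0, 2.0, 3.0), (2.0, 0.0, 1.0), (0.0, 1.0, 1.0)
n = cross(v, w)                      # normal to the base spanned by v, w
base_area = math.sqrt(dot(n, n))     # |v x w|
height = abs(dot(u, n)) / base_area  # component of u along the unit normal
volume = abs(dot(u, n))              # |u . (v x w)|, the scalar triple product
assert math.isclose(base_area * height, volume)
```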
A balloon of negligible mass is filled with helium gas (ρ = 0.179 kg/m³) until its volume is just large enough to lift a 25.0 kg load. (a) Find this volume. (b) Find the buoyant force exerted by the air (ρ = 1.29 kg/m³) on the balloon.

Answer (1 of 4): A zero scalar triple product means that the three vectors belong to the same plane (coplanar vectors): their mixed product vanishes, (a × b) · c = 0. In particular, one or more of these vectors may have zero length, but this is not a must; generally, all three may differ from zero.

Finding the volume of a parallelepiped (Arconon1010, 10 years ago, #1): I'm trying to use the triple scalar product to get the volume of the parallelepiped from this question: find the volume of the parallelepiped with the following vertices: (0,0,0), (3,0,0), (0,6,5), (3,6,5), (3,0,2), (6,0,2), (3,6,7), (6,6,7). The example in my book uses three vectors. Hint: take the vertex at the origin; the three vertices adjacent to it give the three edge vectors.

Thus, the volume of the parallelepiped shaped by the three vectors a, b and c is 9 units cubed, where the unit is the cube of the unit of distance by which the points were coordinated. Result: 9 units³ (asked 2021-05-17).

The volume calculator is able to calculate the volume of a rectangular parallelepiped from numeric variables; the exact and approximate results are returned.
Thus, calculating the volume of a rectangular cuboid whose length is 3, width is 2 and height is 4 is done by entering the formula volume_rectangle(3;2;4).

This volume-of-a-parallelepiped calculator is a simple tool with three different modes of calculation: find the volume of a parallelepiped from 3 vectors, from 4 vertices, or from the edge lengths and angles.

Example 1: According to the formula for a rectangular parallelepiped, V = length × width × height; therefore V = 9 × 12 × 6 = 648 ft³. Example 2: Find the volume of a parallelepiped where 40 cm² is the area of the bottom and 30 cm is the height of the parallelepiped: V = 40 × 30 = 1200 cm³.

Given three vectors, there is a product, called the scalar triple product, that gives (in absolute value) the volume of the parallelepiped that has the three vectors as dimensions. So: PQ = (3 + 1, 0 − 2, 1 − 5) = (4, −2, −4), PR = (3 − 5, 0 − 1, 1 + 1) = (−2, −1, 2), PS = (3 − 0, 0 − 4, 1 − 2) = (3, −4, −1).
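The two rectangular examples reduce to plain multiplication (V = l·w·h, or base area times height); a trivial sketch:

```python
def box_volume(length, width, height):
    # Rectangular parallelepiped: V = l * w * h.
    return length * width * height

def prism_volume(base_area, height):
    # Any parallelepiped viewed as an oblique prism: V = base area * height.
    return base_area * height

print(box_volume(9, 12, 6))  # 648 (ft^3)
print(prism_volume(40, 30))  # 1200 (cm^3)
```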
Q: u = (8, −3, 4), v = (−2, 9, 1), w = (0, −4, −7); find the volume of the parallelepiped spanned by the vectors u, v and w.

How do you find the volume of a parallelepiped with three vectors? Find the scalar triple product of the three vectors, |a · (b × c)|, where a = (a1, a2, a3), b = (b1, b2, b3) and c = (c1, c2, c3); b × c is the cross product of b and c, found using the 3 × 3 determinant.

Calculations: we calculate the triple product of the three given vectors in order to find the volume of the parallelepiped they shape. The triple product can be calculated with the vectors in any order (up to sign), so we start by assigning the vectors to the rows of the determinant in a convenient order.
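A sketch evaluating this triple product, and checking that the order of the vectors affects at most the sign (the numeric answer is computed, not quoted from the source):

```python
def triple(u, v, w):
    # Scalar triple product u . (v x w) via the 3x3 determinant.
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

u, v, w = (8, -3, 4), (-2, 9, 1), (0, -4, -7)
print(abs(triple(u, v, w)))  # 398
# Order independence: any permutation of the vectors changes at most the sign.
assert abs(triple(v, w, u)) == abs(triple(u, v, w)) == abs(triple(w, u, v))
```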
Q: Find the volume of the parallelepiped whose coterminous edges are represented by the vectors a = −3i + 7j + 5k, b = −5i + 7j − 3k, c = 7i − 5j − 3k.

Q: a) Calculate AB × AD in terms of t. b) Find the value of AB · (AD × AE); the volume of the parallelepiped is 22 cubic units. c) Determine the possible values of t. (Answers: …, t = 2.)
How to find the volume of a parallelepiped, given three vectors that define its edges.

The first box's volume is V = 1000 cm³. The second has edges b = 17 cm, c = 5 cm, d = 9 cm, and therefore its volume is V = 765 cm³. The third has e = 9 cm, f = 9 cm and g = 13 cm, and its volume is V = 1053 cm³. Therefore, the box with the largest volume is the third. Another method to obtain the volume of a parallelepiped is to use vectors.
Example 3: Find the area of the parallelogram whose vertices are (1, 0), (0, 5), (1, 4), (2, 1). Example 4: Find the volume of the parallelepiped with one vertex at the origin.

Q: Find the volume of the parallelepiped whose edges are represented by the given vectors. Solution: to find the volume of the parallelepiped it is sufficient to compute the scalar triple product of the edge vectors; since a volume cannot be negative, take the absolute value of the result.

The volume of the tetrahedron equals (1/6) times the scalar triple product of the vectors on which it is built. Because the value of the scalar triple product can be a negative number while the volume of the tetrahedron is not, one should take the magnitude of the result of the triple product when calculating the volume of the geometric body.
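The tetrahedron rule (one sixth of the triple product) in a minimal sketch; the unit-axis edges are an illustrative choice of mine, not from the examples above:

```python
from fractions import Fraction

def triple(u, v, w):
    # Scalar triple product u . (v x w).
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def tetrahedron_volume(u, v, w):
    # One sixth of the parallelepiped volume on the same three edges.
    return Fraction(abs(triple(u, v, w)), 6)

# Tetrahedron on the three coordinate axes.
print(tetrahedron_volume((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1/6
```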
Volume formula of a parallelepiped: the volume of the prism equals the product of the base area and the height of the parallelepiped, V = A_b · h, where V is the volume of the parallelepiped, A_b the area of the base of the parallelepiped (a parallelogram), and h the height of the parallelepiped.

Q: Find the volume of the parallelepiped (box) determined by u, v and w. (Simplify your answer; the volume is given in units cubed.)

Q: Find the volume of a parallelepiped if four of its eight vertices are B(2, 3, 0), C(0, −4, 3) and D(3, … (Simplify your answer.)
Problem 9 (Parallelepiped): Find the volume of the parallelepiped spanned by the vectors a = (−2, 3, 1), b = (0, 4, 0), c = (−1, 3, 3).

Answer (1 of 4): Consider the tetrahedron OABC, where OA = a, OB = b and OC = c are the coterminal edges of the tetrahedron from the vertex O to the vertices A, B and C respectively.
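A sketch evaluating Problem 9 (the numeric answer is computed here, not quoted from the source):

```python
def triple(u, v, w):
    # Scalar triple product u . (v x w).
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

a, b, c = (-2, 3, 1), (0, 4, 0), (-1, 3, 3)
print(abs(triple(a, b, c)))  # 20
```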
Area of triangle OAB = (1/2)|a × b|.
First use scalar multiplication, then find the magnitude of the new vector; note that the result is the same as for part b. Recall that to find a unit vector in two dimensions, we divide a vector by its magnitude; the procedure is the same in three dimensions: given u and v, find a unit vector in the direction of v.

Q: Find the volume of the parallelepiped determined by the vectors u1 = (1, 1, 1), u2 = (1, 3, −4), u3 = (1, 2, −5).
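Both snippets above (normalizing a vector, and the u1, u2, u3 volume) in one short sketch; the vector (3, 4, 0) in the unit-vector check is my own illustrative choice:

```python
import math

def unit(v):
    # Divide a vector by its magnitude to get a unit vector.
    mag = math.sqrt(sum(x * x for x in v))
    return tuple(x / mag for x in v)

def triple(u, v, w):
    # Scalar triple product u . (v x w).
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

print(abs(triple((1, 1, 1), (1, 3, -4), (1, 2, -5))))  # 7
# A unit vector has magnitude 1.
assert math.isclose(sum(x * x for x in unit((3, 4, 0))), 1.0)
```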
I want the volume of a parallelepiped and I have the three vectors $4e_1 + 2e_2 - e_3$, $e_1 - 3e_2 - 2e_3$, $2e_1 - e_2 + 3e_3$ coinciding with three of the parallelepiped's sides. I made them into a matrix and then calculated the determinant as $\det = (4)(-3)(3) + (2)(-2)(2) + \dots$ — note that multiplying along the diagonals like this is only part of the full cofactor expansion. Find the volume of the parallelepiped given by the vectors. The final answer is the absolute value of the scalar triple product, which is the volume of the parallelepiped. Now we'll take the dot product of $a = \langle 2, -1, 3 \rangle$ and $b \times c$. I have three vectors: v1 = {1, 0, 0}, v2 = {0, 1, 0}, v3 = 1/2 {1, 1, 1}. I wish to show the volume constructed by these vectors. I know this volume (1/2) is calculated by the scalar triple product, via a Dot/Cross combination. Answer: Graphics3D[Parallelepiped[{0, 0, 0}, {v1, v2, v3}]] with BoxRatios -> 1. – yawnoc, Oct 6, 2020 at 16:18
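For the $4e_1 + 2e_2 - e_3$ example above, the full determinant (not just the diagonal products) gives the volume; a quick check with NumPy, where the diagonal-only product is shown for contrast:

```python
import numpy as np

# The three edge vectors as rows of a matrix
M = np.array([[4.0, 2.0, -1.0],
              [1.0, -3.0, -2.0],
              [2.0, -1.0, 3.0]])

det = np.linalg.det(M)  # -63
volume = abs(det)       # 63

# The product of the main diagonal alone is NOT the determinant:
diagonal_only = M[0, 0] * M[1, 1] * M[2, 2]  # -36
```

The discrepancy (−36 vs. −63) shows why the full cofactor (or Sarrus) expansion is needed.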
https://www.emathzone.com/tutorials/calculus/derivative-of-implicit-functions.html
# Derivative of Implicit Functions

Implicit Function: If the independent and dependent variables are mixed together in such a way that the dependent variable cannot be expressed in terms of the independent variable, the function is called an implicit function. For example, ${x^2} + {y^2} - x + y + 3 = 0$ is an implicit function, because the dependent variable $y$ cannot be expressed in terms of the independent variable $x$.

Example: Find $\frac{{dy}}{{dx}}$ if the given implicit function is ${x^2} + {y^2} = {a^2}$.

Differentiating both sides with respect to $x$, we have $2x + 2y\frac{{dy}}{{dx}} = 0$, which gives $\frac{{dy}}{{dx}} = -\frac{x}{y}$. Here ${a^2}$ is a constant, and the derivative of a constant function is zero.
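The same implicit derivative can be checked symbolically; a sketch with SymPy (the variable names and the treatment of $y$ as an unknown function are my own, not from the article):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)
y = sp.Function('y')(x)  # treat y as an unknown function of x

# Implicit relation x^2 + y^2 = a^2, written as F = 0
F = x**2 + y**2 - a**2

# Differentiate F with respect to x (the chain rule produces y'),
# then solve the resulting equation for dy/dx.
dydx = sp.solve(sp.diff(F, x), sp.Derivative(y, x))[0]
# dydx comes out as -x/y, matching the hand derivation
```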
https://calculator.academy/bopp-film-weight-calculator/
Enter the mil gauge, the roll length (ft), and the roll width (in) into the BOPP Film Weight Calculator. The calculator will evaluate the BOPP Film Weight.

## BOPP Film Weight Formula

The following formula is used to calculate the BOPP Film Weight.

BOPP = g * .254 * L * W / 63,500

Variables:

• BOPP is the BOPP Film Weight (lbs)
• g is the mil gauge
• L is the roll length (ft)
• W is the roll width (in)

To calculate BOPP film weight, multiply the mil gauge by the length and the width, then multiply by 0.000004 (since 0.254 / 63,500 = 0.000004).

## How to Calculate BOPP Film Weight?

The following two example problems outline the steps and information needed to calculate the BOPP Film Weight.

Example Problem #1

1. First, determine the mil gauge.
• The mil gauge is calculated to be: 250.
2. Next, determine the roll length (ft).
• The roll length (ft) is measured to be: 300.
3. Next, determine the roll width (in).
• The roll width (in) is found to be: 20.
4. Finally, calculate the BOPP Film Weight using the formula above:

BOPP = 250 * .254 * 300 * 20 / 63,500 = 6.00 (lbs)

Example Problem #2

The variables required for this problem are provided below:

mil gauge = 100
roll length (ft) = 200
roll width (in) = 100

Test your knowledge and check your answer with the calculator: BOPP = g * .254 * L * W / 63,500 = (lbs)
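The formula above is easy to wrap in a small function; a sketch in Python reproducing Example Problem #1 (the function name is my own, not part of the calculator):

```python
def bopp_film_weight(mil_gauge, roll_length_ft, roll_width_in):
    """Approximate BOPP film roll weight in pounds.

    Implements the article's formula: BOPP = g * 0.254 * L * W / 63,500
    (equivalently, g * L * W * 0.000004).
    """
    return mil_gauge * 0.254 * roll_length_ft * roll_width_in / 63_500

# Example Problem #1 from the article:
w1 = bopp_film_weight(250, 300, 20)   # 6.0 lbs

# Example Problem #2 (the article leaves this as an exercise):
w2 = bopp_film_weight(100, 200, 100)  # 8.0 lbs
```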
https://www.risk.net/risk-quantum/7956254/at-us-banks-less-than-50-of-liquid-assets-classified-as-afs
Less than half of the easy-to-sell assets that underpin major US banks’ liquidity adequacy requirements sit within their balance sheets’ available-for-sale (AFS) perimeter, Risk Quantum research has found. In a seeming mismatch between practice and theory, the divergence may come under scrutiny if the liquidity coverage ratio (LCR) framework is re-examined in the wake of Silicon Valley Bank’s collapse.
https://ai.stackexchange.com/questions/25478/is-there-multi-agent-reinforcement-learning-model-in-which-some-of-the-reward/25530
# Is there a multi-agent reinforcement learning model in which (some of the) reward is given by another agent and not by the external environment?

The traditional setting of multi-agent reinforcement learning (MARL) is the model in which there is a set of agents and an external environment, and the reward is given to each agent - individually or collectively - by the external environment. My question is: is there a MARL model in which the reward is given by one agent to another agent, meaning that one agent incurs a cost and the other earns revenue (or maybe even a profit)? Effectively that means distributed supervision: only some agents face the environment with real reward/supervision, and this supervision is then more or less effectively propagated to the other agents, which learn/do their own specialized tasks as parts of the collective task executed/solved distributively in MARL.

• arxiv.org/abs/1901.08492 seems to be some part of the answer - hierarchical supervision, mutually-supervised agent learning, etc. seem to be the terms used for the research suggested in my question. – TomR Jan 1 at 4:13
http://events.berkeley.edu/index.php/calendar/sn/?event_ID=124367&date=2019-04-01&tab=academic
Arithmetic Geometry and Number Theory RTG Seminar: Dwork Crystals and Related Congruences Seminar | April 1 | 3-5 p.m. | 891 Evans Hall | Note change in location Masha Vlasenko, IMPAN Warsaw Department of Mathematics I will show a new, simple construction of crystals associated with toric hypersurfaces and exploit it to prove $p$-adic congruences for expansion coefficients of rational functions. This is joint work with Frits Beukers. The exposition will be self-contained, but I shall explain that our ideas evolve from those of Bernard Dwork. Since he constructed an explicit Frobenius operator which does point counting for hypersurfaces, attempts to give a cohomological interpretation of Dwork's work resulted in the Monsky–Washnitzer theory. Leaving out the $p$-adic counterpart, in the 1990s Batyrev used solely the de Rham aspect of Dwork's theory to study the mixed Hodge structure on the middle cohomology of toric hypersurfaces. Our construction basically adds the Frobenius structure back to this picture. As one of the applications, we will do a version of Katz's internal reconstruction of unit-root crystals via expansion coefficients of differential forms. [email protected]
http://mywebspider.com/sequential-numbering/sequential-numbering-in-rhino.html
For example, in our law practice, we submit briefs with hundreds of attachments. Those attachments are constantly being reorganized (and thus renumbered), so we had to develop a scheme to keep each attachment (and each reference to it) organized and accurate. We created a master list of attachments and captured the dynamic field code in a bookmark. We can then cross-reference that bookmark (which returns a number) throughout our documents. Example: Public Attachment 68, where 68 represents a field sequence code, is captured in a bookmark we will call SR_112. The bookmark name is a code name supplied by the drafting lawyer. That way, when the lawyer refers throughout the document to Public Attachment ___, we insert a cross-reference to bookmark SR_112, and it always returns an accurate number, even if we reorganize the order of our attachments. To include numbering prefixes from higher levels, enter text or click at the start of the Number box and choose Insert Number Placeholder, then select a Level option (for example, Level 1), or enter ^ and then the list level (for example, enter ^1). In a list with first levels numbered 1, 2, 3, and so on, and second levels numbered a, b, c, and so on, including the first-level prefix in the second level renders second-level numbers as 1a, 1b, 1c; 2a, 2b, 2c; 3a, 3b, 3c. Sequential numbers can be printed almost anywhere on a sheet or form and can be positioned horizontally or vertically. Numbers can also be repeated in another position on the form. When developing your artwork, consider putting a box around the number, making it easier for your customers or employees to find and reference a specific job or transaction. You may also differentiate your number by color. Most printers allow a choice of colors, typically black or red, to make your number stand out. Start your sequencing at any point you like, to pick up where you left off on your last print order.
I make the design with as many up as I need on the master page, linking the frames where the numbers will go. Then I make the list using Excel, copy-paste it into ID, and apply a paragraph style with the "start in next frame" option. Click the out port on the pasted text to get a loaded cursor and delete the frame. Then just shift-click over the first text frame on a live page to have as many "tickets" added as needed automatically. Become a Microsoft Word power user in just a few minutes a week. This tips-based series helps you deepen your proficiency with Word, covering a variety of tools and techniques you can use. Learn how to work with templates, format documents, customize built-in styles and themes, master Track Changes, and more. Check back each week for a new pro tip from expert David Rivers.

##### See Word's Numbering Explained by John McGhie, MVP - comprehensive and not pretty (downloadable PDF file in letter size). Reading this is vital for anyone attempting to use automatic numbering or bullets in a law office setting or other places where documents are likely to be reused or heavily edited. See also How to Create a Template, with a downloadable template with style-based numbering.

This is how I see it. A person (passenger) makes a reservation. If the person has never made a reservation before, then they are added to the passenger table, but it really makes no sense to assign a sequential number to the passenger. Otherwise, an entry just needs to be made in the reservations table. The reservation needs to include the specific train (number and date, so there should be a table for train schedules - all you need is a foreign key), the passenger, and the date of travel. Here I can understand wanting a sequential number to show passengers booked in a day.

The partial sums themselves form a sequence $(S_N)_{N\in\mathbb{N}}$, which is called the sequence of partial sums of the series $\sum_{n=1}^{\infty} a_n$.
If the sequence of partial sums converges, then we say that the series $\sum_{n=1}^{\infty} a_n$ is convergent, and the limit $\lim_{N\to\infty} S_N$ is called the value of the series. The same notation is used to denote a series and its value, i.e. we write $\sum_{n=1}^{\infty} a_n = \lim_{N\to\infty} S_N$.

An exit can be numbered by where the exit in the direction of increased mileage leaves the freeway, or by where the road that the exit serves crosses the freeway (which is occasionally ambiguous). From this number, the integer exit number can be determined by rounding up, rounding down, or rounding to the nearest integer. Many jurisdictions prefer to avoid an exit 0. To this end, the numbers are either rounded up to get the exit number, or any exit that would get the number 0 is instead numbered 1. Examples of highways with an exit 0 are British Columbia Highway 1 on the mainland, Interstate 70 in Wheeling, West Virginia along the West Virginia–Ohio border, and Interstate 90 on the Montana side of the Idaho–Montana border.

A series is, informally speaking, the sum of the terms of a sequence. That is, it is an expression of the form $\sum_{n=1}^{\infty} a_n$ or $a_1 + a_2 + \cdots$, where $(a_n)$ is a sequence of real or complex numbers. The partial sums of a series are the expressions resulting from replacing the infinity symbol with a finite number, i.e. the $N$th partial sum of the series $\sum_{n=1}^{\infty} a_n$ is the number $S_N = \sum_{n=1}^{N} a_n$.

If the second number on your raffle ticket is one higher than the first number, you must have accidentally put the <> tag after the first number (causing the next number, on the same ticket, to increase by one). You only need the <> after the second number on each ticket, so the next ticket gets a new number.
(But you don’t need it on the final ticket on the **page**, because the next **page** automatically gets a new number)
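The partial-sums definition above can be illustrated numerically; a sketch in Python for the geometric series $\sum_{n=1}^{\infty} (1/2)^n$, which converges to 1 (the function name is my own):

```python
def partial_sum(N):
    """N-th partial sum S_N of the series sum_{n=1}^inf (1/2)^n."""
    return sum((1 / 2) ** n for n in range(1, N + 1))

S_5 = partial_sum(5)    # 31/32 = 0.96875
S_50 = partial_sum(50)  # within ~1e-15 of the series value, 1
```

As $N$ grows, $S_N$ approaches the value of the series, which is exactly what "the limit of the partial sums" means.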
https://news.ycombinator.com/item?id=15728147
Matrix Calculus. 408 points by sytelus on Nov 18, 2017 | 51 comments

For those who might not realize how amazing this is, take a look at how you can do these things manually:

I never learned matrix calculus, so I find this tool helpful for following technical papers that involve some matrix calculus.

I have done all kinds of work that required some kind of matrix calculus in one form or another. There are of course all kinds of references (a sibling comment links to my favorite), but I have found that more often than not the best way to get the results you want is just to calculate them yourself. The work involved is usually tedious but trivial. But working through it goes a long way toward making sense of the various identities that come out. In looking at an identity one may see a transpose here, an inner product there, but not be able to assign any importance or distinction to any particular term at first glance. Working through a few calculations helps with this. EDIT: I suppose I should also say that I never "learned" matrix calculus either, in the sense that I internalized the various features unique to matrices under derivatives and integrals. The calculations I refer to above are crude, naive ones in scalar notation under whatever coordinate system seems appropriate.

When I was starting out in machine learning, as a programmer with the most rudimentary calculus background, it was easy to derive algorithms that had terms like "gradient w.r.t. X of log(det(inv(λI + A X A')))", which absolutely stumped me when trying to derive the gradient by hand via elementwise partials. However, thanks to Minka's notes and the Matrix Cookbook, I was able to eventually get a handle on easy techniques for these derivations! It's certainly no substitute for getting a handle on the theory first by studying a textbook, but these pattern-matching shorthands are important practical techniques.

How did you start in machine learning?
Did you come from a top 10 school? No, I failed out of university and am self-taught. I spent many years deliberately underemployed working on self-study and coding up projects at the limit of my abilities. I had decided to bet on investing in myself instead of just carrying out some boss’s wishes, which would be more optimized to benefit the company than my own growth and development.Here’s a great resource if you’re starting out today: http://datasciencemasters.org Thanks for the link! That's more or less the road I'm starting down right now, albeit with likely a bumpier past (dropped out of HS in addition to college and spent a few years doing almost nothing productive.) Inspiring story!! You might also find the Matrix Cookbook [0] to be a useful reference. The best resource I have come across is the Wikipedia page. It's comprehensive and concise. https://en.wikipedia.org/wiki/Matrix_calculus This kind of stuff just infuriates me. I can't see any good reason for having so many competing notational standards for the same thing:The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific convention is standard.Seriously? 
So if I want to read a paper that uses matrix calculus, it's not enough to just understand matrix calculus in general. No, first I have to decipher which of a legion of possible notations the author used, and then keep that state in mind when thinking about that paper in relation to another, which might use yet another notation. I understand that ultimately nobody is in a position to mandate the adoption of a universal standard, but part of me wishes there were one (this is, of course, not a problem limited to matrix calculus). Some other examples: sign conventions in thermodynamics; conventions in Fourier transforms; short billion vs. long billion; calorie vs. Calorie; English vs. metric units; SI vs. cgs metric; esu vs. Gaussian units in EM.

I can see that x' means $x^T$ (x transposed), but it's not mentioned in the documentation — where does this notation come from?

From http://mathworld.wolfram.com/Transpose.html : "Unfortunately, several other notations are commonly used, as summarized in the following table. The notation Aᵀ is used in this work." Aᵀ — This work; Golub and Van Loan (1996), Strang (1988). Ã — Arfken (1985, p. 201), Griffiths (1987, p. 223). A' — Ayres (1962, p. 11), Courant and Hilbert (1989, p. 9).

Matlab/Octave. With the caveat that A' is the Hermitian transpose (transpose + complex conjugate) when A has complex entries, and you need A.' if you want the plain transpose. For a matrix A with real coefficients both operations are the same.

Awesome, pretty handy to have for engineering/scientific computing code. Over time, you do start to build the intuition just like you do for scalar derivative calculations, but it takes time.

The example has sine of a vector in it. I haven't had coffee yet today, but I've never heard of being able to compute the sine of a vector. What does this mean?
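As the replies note, the tool applies sin elementwise; the matrix sine (defined by the power series) is a different operation, though the two agree for a diagonal matrix, whose off-diagonal entries are zero either way. A sketch with NumPy — the truncated Taylor series here is my own illustration, not part of the tool being discussed:

```python
import numpy as np

def matrix_sin(A, terms=20):
    """Matrix sine via a truncated Taylor series:
    sin(A) = sum_k (-1)^k A^(2k+1) / (2k+1)!"""
    result = np.zeros_like(A)
    power = A.copy()  # holds A^(2k+1), starting at A^1
    fact = 1.0        # holds (2k+1)!
    sign = 1.0
    for k in range(terms):
        result = result + sign * power / fact
        power = power @ A @ A          # next odd power of A
        fact *= (2 * k + 2) * (2 * k + 3)
        sign = -sign
    return result

# Elementwise sine of a vector — what the tool means by sin(v):
v = np.array([0.1, 0.2, 0.3])
elementwise = np.sin(v)  # sin applied entry by entry

# For a diagonal matrix the matrix sine equals the elementwise sine:
D = np.diag([0.5, 1.0, 2.0])
```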
Defining it by its Taylor series doesn't work like it does for square matrices, because vectors can't be multiplied, and if you assume the direct product is what's meant then each term in the Taylor series is a different-sized matrix and they can't be added. Surely it doesn't mean elementwise sine?

It is elementwise sin, which is not entirely correct. You can take the sin of a matrix — or apply any analytic function to a matrix — but you'd have to compute the characteristic polynomial of the matrix and apply the function using that. This is a result of the Cayley–Hamilton theorem. Here is a tutorial: http://web.mit.edu/2.151/www/Handouts/CayleyHamilton.pdf

You don't need to know the characteristic polynomial to do it. Just sum as many terms of the power series as you want. What the characteristic polynomial lets you do is calculate it faster and more accurately, because very large powers can be rewritten in terms of smaller powers that you already computed.

Yes, it's elementwise; it's defined in the sidebar tab "Operators". In general the syntax seems closely aligned with Matlab, where all the math functions are elementwise.

Is the source code for this available? Along the same lines, does there exist an "algebra checker" that could, say, take in two successive lines of LaTeX, with perhaps a hint of how to get from one to the other, and confirm that there are no algebra errors?

The choice of LaTeX comes with two problems: 1) It's designed to describe graphical presentation, not semantics. 2) Parsing it properly is very difficult, especially considering how macros and other definitions can influence the syntax.

You could: 1) assume a standard interpretation of symbols (e.g. this site uses a-g for scalars, h-z for vectors and A-Z for matrices) and make those assumptions explicit and modifiable (e.g.
WolframAlpha displays "assuming X is a Y; use as a Z instead"). 2) Support only the subset that MathJax can handle, which seems to be enough for most purposes. I definitely agree that LaTeX is not the optimal input format for that purpose, though.

See my comment above about my experiences with this: I tried, and a corner case killed me. There's a reason why Mathematica and other CAS packages have invented their own idiosyncratic notations. LaTeX and MathML are truly hopeless when it comes to communicating with algebra systems. It looks deceptively "doable", but really it's a recipe for disaster.

I'm with cosmic_ape regarding the recommendation for SymPy. Write the two lines as symbolic expressions expr1 and expr2, then run simplify(expr1 - expr2) to check if the answer is zero. If you want to start with LaTeX, then the parser from this library might be useful: https://github.com/Khan/KAS

SymPy does not take LaTeX as input, but it has symbolic equivalence checkers. The methods are highly heuristic, and some classes of expressions (e.g. all expressions with, say, \pi and the exp function, or something similar) are in general undecidable.

To elaborate: symbolic equivalence checking for two expressions f1 and f2 can be done by seeing if sympy.simplify(sympy.Eq(f1, f2)) is sympy.true.

    import sympy
    from sympy.abc import x, y, alpha, s

    quad = s ** 2 - alpha * s - 2
    # Let s1 and s2 be the two solutions to the quadratic equation 'quad == 0'
    s1, s2 = sympy.solve(quad, s)
    u = (x - s2) / (x - s1) * (y - s1) / (y - s2)
    f1 = (s2 - s1 * u) / (1 - u)
    f2 = (x * y - alpha * x - 2) / (y - x)
    # Claim: f1 is equal to f2
    print(sympy.simplify(sympy.Eq(f1, f2)))  # Prints "True"

Is there something for implication or equivalence of equations?
I was trying the example from Smaug123's sibling comment (https://news.ycombinator.com/item?id=15728598), but

    >>> import sympy as sp
    >>> x = sp.Symbol('x')
    >>> sp.simplify(sp.Implies(sp.Eq(x**2 + 2*x + 1, 0), sp.Eq(x, -1)))
    Eq(x, -1) | Ne(x**2 + 2*x + 1, 0)
    >>> sp.solve(sp.simplify(sp.Implies(sp.Eq(x**2 + 2*x + 1, 0), sp.Eq(x, -1))))
    Traceback (most recent call last):
      ...
    TypeError: unsupported operand type(s) for -: 'Or' and 'int'
    >>> sp.solveset(sp.simplify(sp.Implies(sp.Eq(x**2 + 2*x + 1, 0), sp.Eq(x, -1))))
    Traceback (most recent call last):
      ...
    ValueError: Eq(x, -1) | Ne(x**2 + 2*x + 1, 0) is not a valid SymPy expression

none of the obvious ways appear to work. Does SymPy not support this kind of equational reasoning?

I think there used to be an open-source tool called Maxima or something that would simplify equations. Haven't used it for the last 10 years and never used it much anyway.

I can recommend Maxima too. It's useful for checking for obvious errors and for simplifying expressions. It can be used for more advanced stuff, but I found it has a steep learning curve.

On a whim I programmed a symbolic integral-solving library in Python (I am not a professional programmer) and, as an ill-advised "extension", bolted on the ability to receive and emit LaTeX expressions.
Long story short, I was boasting about it on Nassim Nicholas Taleb's twitter feed, submitted an answer to a problem question he had posed that got mis-parsed by the front end, gave the wrong answer (which was entirely consistent, because the check stage compared the answer by deriving the flawed question), and got banned, flamed, and persecuted to Hell and back. True story. From this I learnt the hard way that tokenisation and representation (in LaTeX or MathML) do not belong in the same place as a CAS (Computer Algebra System).

I believe that LaTeX/MathML absolutely do belong in a CAS, just not as the primary means of interaction. (I.e. you do your work in the CAS's own language and then export to LaTeX.) But maybe your example can convince me otherwise. Could you show the specific LaTeX code in question and describe how your program mishandled it?

I had set myself the target of implementing the Risch Algorithm (https://en.wikipedia.org/wiki/Risch_algorithm) in SymPy, but the event was so embarrassing I deleted my code and never published it publicly. I'm not a programmer. I erred in trying to do a programmer's job. Also, I discovered why Nassim Nicholas Taleb has a reputation for being rather uncharitable (but deep down I feel like I deserved it, because hey, I stated a mathematical untruth).

As one of the authors of this tool I understand that you would like to have some way of trusting the output. Internally we check it numerically, i.e., we generate some random data for the given variables and check the derivative by comparing it to an approximation via finite differences. We will ship this code (hopefully soon) with one of the next versions. You can then check it yourself. Otherwise, as far as I know there does not exist any other tool that computes matrix derivatives, so I understand it is hard to convince anyone of the correctness of the results. But I hope the numerical tests will be helpful.
Mathematica can probably do that.

Correct:

    Implies[x^2 + 2 x + 1 == 0, x == -1] // FullSimplify

    [In]  FullSimplify[expr1 == expr2]
    [Out] True

I cannot even begin to explain how intensely I've been desiring this. I haven't touched calculus since college and this freaks me out. I don't even know where to begin; all I remember is using mnemonics to recall rules for dy/dx or something. How would someone go about understanding this? ML is super interesting but it seems daunting to do anything worthwhile without understanding the maths behind it.

Some time ago I implemented a library [1] similar to this tool. The tricky part is that derivatives quickly exceed 2 dimensions, e.g. the derivative of a vector output w.r.t. an input matrix is a 3D tensor (e.g. if y = f(X), you need to find the derivative of each y[i] w.r.t. each X[m,n]), and we don't have a notation for it. Also, such tensors are often very sparse (e.g. for element-wise log() the derivative is a matrix where only the main diagonal has non-zero values, corresponding to the derivatives dy[i]/dx[i] where y = log(x)).

The way I dealt with it is to first translate the vectorized expression into so-called Einstein notation [2] - an indexed expression with implicit sums over repeated indices. E.g. the matrix product Z = X * Y may be written in it as:

    Z[i,j] = X[i,k] * Y[k,j]  # implicitly sum over k

It worked pretty well and I was able to get results in Einstein notation for element-wise functions, matrix multiplication and even convolutions. Unfortunately, the only way to calculate such expressions efficiently is to convert them back to vectorized notation, and that's not always possible (e.g. because of sparse structure) and very error-prone.

The good news is that if the result of the whole expression is a scalar, all the derivatives will have the same number of dimensions as the corresponding inputs. E.g. in y = sum(W * X + b), if W is a matrix, then dy/dW is also a matrix (without the sum it would be a 3D tensor).
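Einstein notation as described above maps directly onto numpy.einsum; a minimal illustration (my own example, not code from the library being discussed):

```python
import numpy as np

# Z[i,j] = X[i,k] * Y[k,j], with an implicit sum over the repeated
# index k: this is exactly the matrix product.
X = np.arange(6.0).reshape(2, 3)
Y = np.arange(12.0).reshape(3, 4)
Z = np.einsum('ik,kj->ij', X, Y)
print(np.array_equal(Z, X @ Y))  # prints True
```

The same subscript string generalizes to traces, transposes, and batched contractions, which is why Einstein notation is a convenient intermediate form for derivative rules.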
This is the reason why the backpropagation algorithm (and symbolic/automatic differentiation in general) in machine learning works. So finally I ended up with another library [3], which can only deal with scalar outputs, but is much more stable. A theoretical description of the method for the first library can be found in [4] (pages 1338-1343, caution - 76M) while the set of rules I've derived is in [5].

Would be nice if there was a "symmetric matrix" variable type, so it could simplify Ax + A'x to 2Ax if A is symmetric. I wonder why they don't do it.

In the tool I had running, we handled this by removing any transpose of a symmetric matrix (after propagating it before the leaves). Together with the simplification rule x + x -> 2*x for any x, you get the expected result. I can only guess why they didn't include it in the online matrix calculus tool. It was published after I left the group.

It is not in the current online tool but we will add it again soon. It is still in there the way you describe it (passing the transpose down to the leaves, and the simplification rules as well). Btw: how are you doing and where have you been? Would be nice to also add a link to you and your current site. GENO is also on its way.

I'm good. Looking at things from the data angle now. But unfortunately no public page. You can link to the old one, if you want to.

Have you compared against TensorFlow XLA?

I did not compare to TensorFlow XLA but I compared it to TensorFlow. Of course, it depends on the problem. For instance, for evaluating the Hessian of x'Ax, MC is a factor of 100 faster than TF. But MC and TF have different objectives: TF focuses more on scalar-valued functions as needed for deep learning, MC on the general case, especially vector- and matrix-valued functions as needed for dealing with constraints. But I will give TF XLA a try.
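The scalar-output observation above can be checked numerically, using the same finite-difference idea the tool's authors mention for validating derivatives. A sketch with illustrative names (element-wise product assumed for W * X):

```python
import numpy as np

# y = sum(W * X + b) is scalar, so dy/dW has the same shape as W;
# for the element-wise product, the analytic gradient is simply dy/dW = X.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))
X = rng.standard_normal((2, 3))
b = 1.0

grad = X  # analytic gradient, same shape as W

# Finite-difference check of one entry of the gradient.
eps = 1e-6
W_pert = W.copy()
W_pert[0, 1] += eps
numeric = (np.sum(W_pert * X + b) - np.sum(W * X + b)) / eps
print(abs(numeric - grad[0, 1]) < 1e-4)  # prints True
```

Dropping the sum() would make the output a matrix and the full derivative a sparse 3D tensor, which is exactly the bookkeeping problem described above.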
I think XLA is trying to reduce the overhead introduced by backprop, meaning that when you optimize the computational graph you might end up with an efficient calculation of the gradient (closer to the calculation you get with MC). Regarding non-scalar-valued functions: don't you reduce a constrained problem to a series of unconstrained problems (via a penalty (or even augmented Lagrangian) or barrier method)? Then you only need the gradient of a scalar-valued function to solve constrained problems. I imagine you can use the gradients of the constraints for solving the KKT conditions directly, but this seems only useful if the system is not too big. But for sure it opens new possibilities.

XLA is good for the GPU only. On the CPU, MC is about 20-50% faster than TF on scalar-valued functions. For the GPU I don't know yet. But it is true that for the augmented Lagrangian you only need scalar-valued functions. This is really efficient on large-scale problems. But on small-scale problems (up to 5000 variables) you really need interior point methods that solve the KKT conditions directly, as you point out. This is sometimes really needed. However, when you look at the algorithms, TF and MC do not differ too much. In the end, there is a restricted number of ways of computing derivatives. And basically, most of them are the same (or boil down to two versions). Some of the claims/problems in the early autodiff literature concerning symbolic diff are just not true. In the end, they are fairly similar. But let's see how XLA performs. Is it open source?

Bookmarked!
https://tex.stackexchange.com/questions/496136/why-does-convert-ghostscript-destroy-equations-on-high-density
Why does convert=ghostscript destroy equations on high density? I want to create a high-resolution PNG to include in an A0 poster. My idea was to use the standalone package with the convert=ghostscript option.

    \documentclass[border=2pt,convert={ghostscript, density={4000}}, varwidth]{standalone}
    \usepackage{amsmath}
    \begin{document}
    \begin{align}
    \partial_tx(z,t) &= \Lambda(z)\partial_z^2x(z,t) + \Phi(z)\partial_zx(z,t) + A(z)x(z,t) \label{pdes}\\
    \partial_zx(0,t) &= Q_0x(0,t),\\
    \partial_zx(1,t) &= Q_1x(1,t) + u(t),
    \end{align}
    \end{document}

And here is the result: I have installed Ghostscript version 9.27 and tried both the 32-bit and 64-bit versions, both with the same result. Interestingly, the result depends on the density: it works fine until about 3500. For 10000, the symbols are no longer flipped but there are huge white spaces between the symbols. What's the matter with that and how can I avoid it?

• What software is the poster being made in? Doesn't it support vector graphics? Output of pdflatex is vector graphics, so is output of latex + dvips (-E). – daleif Jun 17 at 9:54
• Works fine with Ghostscript 9.26 and TeXLive 2019. Note that we have heard of several bugs in Ghostscript 9.27, so you might be getting hit by one of those. – daleif Jun 17 at 9:57
• @daleif I want to include the picture in PowerPoint. Tried EPS but PowerPoint can't show the image. Is there a better way than a high-res bitmap? – Ktree Jun 17 at 10:30
• PowerPoint should be able to handle EPS. Why not just do the entire poster in LaTeX to begin with? – daleif Jun 17 at 10:52
• I probably would, but it's a collaboration, so it's not in my hands. I think I won't get around using some additional software like Inkscape... – Ktree Jun 17 at 13:50

1 Answer

Generated the PDF with normal pdflatex, then opened it in Inkscape and saved it as PNG at 4000 dpi resolution (see the small details; it's a monster PNG):
https://math.stackexchange.com/questions/1337523/schur-multiplier-of-large-groups
# Schur multiplier of “large” groups.

Let $G$ be a finite group and let $M(G)=H^2(G,\mathbb{C}^*)$ be its Schur multiplier. For "small" groups I can compute the Schur multiplier by hand in terms of corresponding roots of unity. However, for "large" groups I have trouble even determining whether the Schur multiplier of a group is trivial, or whether the Schur multipliers of two groups of the same cardinality are isomorphic. I thought about using GAP; however, I think that my groups are too large for it as well. The groups are of order about $10^{50}$, but they are semidirect products of two Sylow subgroups. I will be happy to hear any ideas.

• @DerekHolt I want to describe the Schur multiplier of the groups in the famous example by Martin Hertweck. But maybe it is aiming too high. I will be happy to show that these groups have non-isomorphic Schur multipliers, or at least to show that they have non-trivial Schur multipliers. Here is the link to his paper. jstor.org/stable/… – Ofir Schnabel Jun 24 '15 at 12:54
• I had a look at the paper but I don't have time right now to try and construct the groups explicitly. If you can produce definitions in GAP, say, then I would be happy to experiment - you could e-mail them to me if you like. – Derek Holt Jun 24 '15 at 13:26
• As another general comment, if $G = P \rtimes Q$ is the semidirect product of a $p$-group and a $q$-group with $p \ne q$, then the $q$-part of the Schur multiplier $M(G)$ of $G$ is isomorphic to $M(Q)$, whereas $Q$ induces an action on $M(P)$, and the $p$-part of $M(G)$ is equal to the fixed subgroup of this action. This may make the calculation easier. There are very efficient algorithms for finding the $p$-multiplicator (which is related to the multiplier but not quite the same thing) of a finite $p$-group. – Derek Holt Jun 24 '15 at 17:49
• @DerekHolt, thanks a lot. I am familiar with the above-mentioned theorem.
Even more generally, if $G=N\rtimes T$ and $(|N|,|T|)=1$ then $$M(G)=M(N)^T\times M(T).$$ Here $M(N)^T$ denotes the $T$-stable cohomology classes in $M(N)$. As you suggested, I will try to construct the groups explicitly. I will do it first in LaTeX. – Ofir Schnabel Jun 25 '15 at 7:36
• @AlexanderKonovalov the presentation of the group $X$ from Hertweck's paper is given in the paper. For the group $Y$, however, I couldn't find a presentation anywhere. I sent the presentation of $X$ to Prof. Derek Holt and he computed the Schur multiplier using MAGMA. For the group $Y$ we first need to know its presentation. More generally, I want to know whether there exists a commutative ring $R$ such that the second cohomology groups of $X$ and $Y$ are not isomorphic over $R$. It is pretty clear that for $R=\mathbb{Z}$ they are isomorphic. – Ofir Schnabel Jan 14 '16 at 7:45
https://tex.stackexchange.com/questions/51499/how-do-i-use-my-bib-file-with-texshop?noredirect=1
# How do I use my .bib file with TeXShop

I am completely new to using LaTeX (I just started a week ago), and I'm using the templates posted on this website. I downloaded all the files, including the .bib file and the .cls file, and they are all in the same directory. I am able to compile and display the .tex file, but when it appears, all citations have a [?] in them. I'm not sure what I'm supposed to do. I'm sure there's a step I'm missing... perhaps there is a special way to use TeXShop to make it work? Or am I supposed to use another program? Any help would be greatly appreciated.

• Note that for `.bib` files, you need to compile using the sequence: `latex file`, `bibtex file`, `latex file` (and possibly another `latex file`). `latex` can also be substituted with `pdflatex` or `xelatex` (depending on your compiler). Apr 11 '12 at 3:32
• This works fine for me tex.stackexchange.com/a/515807/200866 Nov 10 '19 at 12:07

To get proper citations using TeXShop you need to (at least):

1. Run LaTeX
2. Run BibTeX
3. Run LaTeX
4. Run LaTeX

LaTeX and BibTeX are selected in the drop-down box next to the "Typeset" button. Assuming that everything else is correct (packages, .bib file, citation commands), this should give proper citations.

edit: Added an extra LaTeX run after running BibTeX, as noted in the comments.

• You should usually run `latex` twice after running `bibtex`. See Undefined citation warnings. Apr 11 '12 at 11:07
• Wow! Thank you so much!!! I didn't realize that I had to compile both the .tex and .bib files. After a few tries, it seems like I need four steps: LaTeX file, BibTeX file, LaTeX file, LaTeX file (again). – Paul Apr 11 '12 at 14:10
• SORCERY! This works. =) Dec 14 '19 at 19:50

I am also new to LaTeX and was having this problem while using TeXShop on Mac OS X. I second what's been said above: you need to run LaTeX, then BibTeX, then LaTeX twice.
Also: you need to ensure that the file name of the .bib file is the same as that of the .tex file, and that it is in the same directory. You do not need to specify the whole directory path in the \bibliography{} command, just the file name, e.g. \bibliography{my bib.bib} if it is in the same directory as the .tex file. After a lot of tinkering, I've got it to work!

• I don't think the `.bib` file must have the same name as the `.tex` file; putting the correct name in `\bibliography{...}` is enough. Feb 23 '17 at 9:53
• Also, it's not necessary to include the extension `.bib` in the argument of `\bibliography`. – Mico Feb 23 '17 at 10:25

I'm using BibTeX, Version 0.99d. In this version, it seems that the name of the .bib file must be the same as that of the LaTeX file. So if you can't get BibTeX to work, try unifying the names.

I use biber, i.e. `\usepackage[backend=biber]{biblatex}`, and I also needed to change the BibTeX engine, which can be done in the TeXShop preferences: TeXShop -> Preferences... -> Engine. Under BibTeX Engine, type "Biber" and then OK. Then proceed as JohnReed described: Cmd + T to typeset and Cmd + Shift + B to run biber.

I found a better alternative to running the script four times: if you add the following comment as the very first line of your .tex file:

    % !TEX TS-program = pdflatexmk

⌘+T will make it work on the first try.
http://lxhf.ddiy.pw/dipole-moment-gaussian-output.html
An example occurs with a material like wax, which contains long molecules having a permanent dipole moment. The nuclear electric dipole moment is a very sensitive probe of CP violation beyond the standard model, and for light nuclei, it can be evaluated accurately using few-body calculational methods. out suffix) with all of the mark-up removed, for more convenient browsing. GAUSSIAN 09W TUTORIAL. And an electron would have a negatively charged Q, like that. If your system is charged then changing the origin from R1 to R2 will change the dipole by (R1-R2)q. Z-matrix information can be obtained by using the Z-Matrix editor (see Editing Z-matrices), by saving a job input file for the model (the Gaussian interface calculates the Z-matrix when generating the file), or from an output file selected for analysis. The array values are “intensive”. The dipole moment of a solute molecule was also found to have an effect on the membrane's MWCO. T1 - A new ONIOM implementation in Gaussian98. What is the name of the next moment, (rather obscure). In the following, '*' is the file name specified by the keyword 'System. Output files must be named filename. Elise Wursten was awarded the prize for the best oral/poster contribution at the International Meeting on Searches for the Neutron Electric Dipole Moment at Ascona, Switzerland Elise Wursten was awarded the prize for the best oral/poster contribution at the International Meeting on Searches for the Neutron Electric Dipole Moment at Ascona. Does this mean that data2 calculated by Gaussian is invalid for the dipole moment calculation? I apologize in advance if this question was too basic to ask on this forum. It will automatically be included in the output of the excited state gradient calculation. The calculation of energies, gradients, vibrational frequencies and electric field derivatives. 
Equation $$\ref{4-24}$$ shows that the strength or energy of the interaction between a charge distribution and an electric field depends on the dipole moment of the charge distribution. Ground state dipole moments were calculated using equation (10). Keywords: Ketocyanine dye, Solvent effect, Dipole moment, Kamlet’s multiple linear regression, Stoke’s shift. Problem Set 3 : Magnetic Field Modeling { Dipoles. SUESI is designed to be portable, using the UNOLS standard. This menu opens a series of dialog boxes that allow you to examine the results of calculations from Gaussian output files. 2] octane-1-carboxylate nematic liquid crystals and their mixtures prepared in chloroform and dichloromethane were studied at room temperature. Details of the underlying theory regarding these models are given in ref 9. c 2019 by Robert G. However as there is no symmetry along the O=Xe bond so there must be a non-zero dipole. Then, since and is zero because there are no two different charges the dipole moment is zero. Tanner and Kostas J. Example of the NMR calculation output of H2 in Gaussian. ‡ Laboratoire de Chimie The´orique, Universite´ Pierre-et-Marie-Curie. The magnetic dipole moment of the current loop makes an angle θ with the z axis (see Figure 6. with these input files using Gaussian. 8 Becke atomic charge with atomic dipole moment correction (10) Gaussian output files also belong to this type. The dipole moment of Nicotine is higher than the dipole moment of NFN. Do remember that, the dipole moment is a vector measure whose direction runs from negative to a positive charge. The nuclear physics relevant to the electric dipole moment (EDM) of the deuteron is addressed. Insert the CD with G09 and copy its content onto you computer. team about above stated problem. Energy of a dipole. Polonica A, 112, S-95, 2007 ). 
There is no way to get the solute dipole directly but the code does print nuclear and electronic contributions separately, so I would suggest the best way to work round it is to take the electronic dipole moment from the output of your QM/MM calculation, and then add to it the dipole moment of the solvent nuclei. Difficulties with dipoles & point charges Help would be appreciated a lot with the following problems: ~ A positive point charge +Q is at the origin, and a dipole of moment p is a distance r away (r>>L) and in the radial direction as shown below:. We observe that the numerical value of the muon (tau) electric dipole moment is at the order of the magnitude of $10^{-22} (e-cm)$ ($10^{-20} (e-cm)$) and there is an enhancement in the case of two extra dimensions, especially for the tau lepton electric dipole. It is frequently said that a transition is polarized along the direction of its transition moment and this direction is called the @[email protected] direction of the transition. The dipole moments of other pyrimidine analogs, namely Thymine and Uracil, were also calculated and compared with literature values. 01] Quick Links. Conformational change of 1. 4) Conformational Search: Build a butane molecule (CH 3-CH 2-CH 2-CH 3), use the Redundant Coordinator editor to specify the dihedral angle C1-C2-C3-C4, do Relaxed Redundant coord Scan using am1, run job, and determine. Dipole moment analysis B-8, 34 opt. The default for methods for which only analytic first derivative gradients are available. Python script that parses. The dipole moments of other pyrimidine analogs, namely Thymine and Uracil, were also calculated and compared with literature values. The following table lists the component programs of Gaussian 09—known as links—along with their primary functions:. 1, we show the relevant molecular orbitals involved in the A-X transition of HOCH2OO• (HMP, Chapters 4-6). 
The in-phase component is related to the index of refraction and therefore governs the scattering of light. Last updated on: 05 January 2017. AU - Newton, Marshall D. HyperChem output: Dipole Moment: x=0. This is a quick tutorial that will help you to make your way through the rst steps of computational chemistry using Gaussian 09W software (G09). Dipole moments and static dipole polarizabilities have been calculated for a number of small molecules using the linear combination of Gaussian-type orbitals-local spin density method. log (the N-1 electrons state). Due to the setup of the QM region (with covalent bonds across the QM/MM boundary), the total charge of the QM region is not an integer. dipole moment, etc. Comparison of electric dipole and magnetic loop antennas for exciting whistler modes R. 93 Debye ( Dorosh & Kisiel, Acta Phys. > > at for the dipole moment > > I'm pretty sure that you have to compute this yourself, using the output > charges and positions. For calculating a dipole moment, this means that the Bader partial charge should be +0. The solute dipole moment induces a dipole moment of opposite direction in the surrounding medium. Gaussian 03 output for C 6 H − at B3LYP/aug-cc-pVDZ level with “freq” and “polar” keywords #P B3LYP/aug-cc-pvdz fopt=(tight,z-matrix) freq polar Find last instances of “Dipole moment”, “Quadrupole moment”, or “Polarizability”. Elements may be in any order. a photon (a permanent dipole moment, p) numerous spectrally-overlapping transitions occur and the far IR is nearly opaque at all wavelengths. The International Geomagnetic Reference Field, 2005. You draw the molecule's structure, and the output - including the molecule's transition states and infrared and nuclear magnetic resonance spectra - appears in an easy-to-understand format. How to request explicit output of transition dipole moment in Gaussian vibrational property calculation? Normal vibration calculation gives IR position and intensity. 
Table 2 indicates similar maximum and minimum changes for all of the other one-electron properties. As we would expect from electronegativities, the oxygen atom is negatively charged. For highly symmetric charge-distributions (for instance those with an inversion center), the permanent moment is zero. 140576 4 H. Vibrational modes using Gaussian This exercise follows the drylab Geometry optimisation using Gaussian and uses checkpoint files left in the unix directory /usr/local/tmp1/n. The corresponding contribution to the electric dipole moment of the neutron is 5·10-30 e·cm in the most optimistic scenarios about the values of the Kobayashi-Maskawa matrix elements. The dipole moment of these was determined by the Gaussian software 5. 009 meters). Dipole corrections for defects in solids Similar to the case of charged atoms and molecules in a large cubic box also charged defects in semiconductors impose the problem of potentially slow convergence of the results with respect to the supercell size due to spurious electrostatic interaction between defects in neighboring supercells. h5 Output File Illustration; Data Analysis. work dipole moments have been calculated using B3LYP and HF methods [14]. The (electric field) polarization of the radiation is perpendicular to the magnetic dipole moment. In the derivation of Eq. Q&A for Work. 25 are shown in the image (this value can be altered in "Tools/Preferences/Gamess"). Contribute to wavefunction91/gaussian_realtime_parse development by creating an account on GitHub. のインプットファイルはgjf という拡張子を持ち、テキストで書かれた以下のような ファイルである。 一行目には. I created GOT to faciliate manipulation of huge number of outputs created while screening libraries of compounds. Excited state dipole moments have been available since the release of v6. If you are trying to >>>>estimate<<<< the dipole moment of a diatomic (2- atom) molecule, it isn't too hard. 
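Estimating a diatomic dipole moment as charge times separation takes only a few lines; the partial charge and bond length below are illustrative values (roughly HCl), not numbers taken from the text above.

```python
# Rough diatomic estimate: p = q * d, with q the partial charge on each
# atom and d the bond length. Numbers below are illustrative (roughly HCl:
# ~0.18 e partial charge, 1.27 Angstrom bond), not values from the text.
E_CHARGE = 1.602176634e-19   # elementary charge, C
DEBYE = 3.33564e-30          # C*m per debye

q = 0.18 * E_CHARGE          # assumed partial charge, C
d = 1.27e-10                 # assumed bond length, m
p_debye = q * d / DEBYE
print(round(p_debye, 2))     # prints 1.1
```

This kind of back-of-the-envelope value is only a sanity check against the dipole moment printed in a Gaussian output, since the true charge separation is not a clean point-charge pair.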
For example, a linear dipole antenna collects radiation only from the linear polarization whose electric field is parallel to the antenna wires. Is dipole moment a vector / scalar? A: Vector 23. In this talk, we present the deuteron, 3 He, 3 H, 6 Li, and 9 Be electric dipole moments calculated. ; Love, Jeffrey J. Other molecular properties involving external fields (response functions) or transition moments can also be computed from integrals of this form. , “aspirin”). , the one from the final iteration of the SCRF method). Arguments: filename The formatted checkpoint file. Note that if you wish the particles to be finite-size spheres as in a Stockmayer potential for a dipolar fluid, so that the particles can rotate due to dipole-dipole interactions, then you need to use atom_style hybrid sphere dipole, which will assign both a diameter and dipole moment to each particle. We establish that three popular canonical representations are unidentified, and demonstrate how unidentified regions can complicate numerical optimization. the dipole moment value is 2. CHAPTER 3 DIPOLE AND QUADRUPOLE MOMENTS 3. The electric flux over a surface S is therefore given by the surface integral: $\Phi_E = \int_S \mathbf{E} \cdot d\mathbf{A}$, where $\mathbf{E}$ is the electric field and $d\mathbf{A}$ is a differential area element on the surface $S$ with an outward-facing surface normal defining its direction.
order to use GAUSSIAN for the simulation of vibrationally resolved electronic spectra. Excited state dipole moments have been available since the release of v6. You could also perform a vector sum of the contributions. The output polarisation state from the DiPOLE amplifier was assessed by placing a cube polariser in the output. The solute dipole moment induces a dipole moment of opposite direction in the surrounding medium. Data is displayed in terminal and saved in generated. What is the relation between H~~, B~~, and M~~?. Gaussian 03 is no public domain software!!. here under is their reply Thank you Muhammad Shabbir for giving us a chance to comment. predict_magnitude – if True, predict the magnitude of the dipole moment instead of the vector (default: False) elements ( set of int ) – List of atomic numbers present in the training set {1,6,7,8,9} for QM9. To obtain the strength of the interaction that causes transitions between states, the transition dipole moment is used rather than the dipole moment. , from my example above in Debye, (-8. Gaussian input is always in the form of a unformatted text or ASCII file, usually called 'jobname'. Line 1: instructs Gaussian to use 4 processors in a single node for our calculation. Hence, the dipole antenna is an example of an omnidirectional antenna. 140576 4 H. Does this mean that data2 calculated by Gaussian is invalid for the dipole moment calculation? I apologize in advance if this question was too basic to ask on this forum. large number ~27! of molecules possessing dipole moments between 2. CIS calculation, dipole moment integrals, excited state frequencies, oscillator strengths, and the molecular orbitals involved in excitation are reported. In this paper we cast the well-known convolutional neural network in a Gaussian process perspective. 53, alternate CVD color palette. We shall now assume that d is proportional to a spherical unit vector d ¼ d 0eiwe s; s ¼ 1;0;1; ð9Þ with d 0 > 0 and w an overall phase. 
You may do PES scan by varying dihedral angle. The solute dipole moment induces a dipole moment of opposite direction in the surrounding medium. We observe that the numerical value of the muon (tau) electric dipole moment is at the order of the magnitude of $10^{-22} (e-cm)$ ($10^{-20} (e-cm)$) and there is an enhancement in the case of two extra dimensions, especially for the tau lepton electric dipole. > or > -- Number of steps exceeded, NStep= Your-value > > grep "Optimization stopped" your Gaussian. Enter a sequence of element symbols followed by numbers to specify the amounts of desired elements (e. In fact, the magnetic field induced in a given position by an active coil can be expressed easily, as a function of the coil's magnetic dipole moment and the vector that describes the distance between the coil and the position of interest [9][10][14]. You multiply the charge of an electron (gaussian units (StatC) by Re (bond length in cm) for the molecule. European Biophysics Journal, 2004. Structure and Dipole Moment of the First Electronically Excited State of Pyridine N-Oxide Hochstrasser, R. The in-phase component is related to the index of refraction and therefore governs the scattering of light. out) and show the structure and the orbit of the molecule. dat file included in that example. [email protected] Atomic charge distribution. Dipole moment is the negative derivative of the energy with respect to applied electric field. Natural bond orbital compositions A-21 Output DI opt. For all iterative SCRF methods, note that the energy to use is the one preceding the Convergence achieved message (i. The MIDIX basis set predicts Hartree-Fock geometries and Hartree-Fock and hybrid density functional Löwdin dipole moments more accurately than either the 3-21G(d) or 6-31G(d) basis set for most of the compounds in our training set. Non-Gaussian statistics of the vibrational fluctuations of myoglobin. 
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. Spatial resolution and position accuracy Positionaccuracyrefersto the precisionwith which anobject canbe localizedin space. If only a column of atom numbers is displayed in the Z-matrix list, then no Z-matrix information is available for the model. However, under this suppositions I cannot get expected results. T2 - Coumarin-153 steady-state spectroscopy. 7 Dipole corrections for Up: 8 Special switches for Previous: 8. COMPUTATIONAL CHEMISTRY AVOGADRO SOFTWARE. (More realistically, a dipole may be a smear of positive charge displaced by some small vector from a canceling smear of negative charge. Line 2: provides the Gaussian route, i. The temperature was set to 20 K (for analysis, this is only used in the calculation of the quantum correction factor for the autocorrelation function of the dipole moment). Electrostatic potential maps and what they tell us. You draw the molecule’s structure, and the output – including the molecule’s transition states and infrared and nuclear magnetic resonance spectra – appears in an easy-to-understand format. There is no way to get the solute dipole directly but the code does print nuclear and electronic contributions separately, so I would suggest the best way to work round it is to take the electronic dipole moment from the output of your QM/MM calculation, and then add to it the dipole moment of the solvent nuclei. The distribution type must be gaussian. The corrections for the total energy are calculated as the energy difference between a monopole/dipole and quadrupole in the current supercell and the same dipole placed in a super cell with the corresponding lattice vector approaching infinity. 
An alternative and more quantitatively useful definition of the magnetic moment is to model it as arising from a tiny current traveling around the edge of a loop of cross sectional area. This section describes the Results menu and the options available through it. , or anyone who will provide assistance to any competitor of Gaussian Inc. Applications. Nuclear magnetic resonance was observed following orientation of the 6s2 S01 ground state. GAUSSIAN 09W TUTORIAL AN INTRODUCTION TO COMPUTATIONAL CHEMISTRY USING G09W AND AVOGADRO SOFTWARE Anna Tomberg anna. Option -g produces a plot of the distance dependent Kirkwood G-factor, as well as the average cosine of the angle between the dipoles as a function of the. Keywords: Ketocyanine dye, Solvent effect, Dipole moment, Kamlet’s multiple linear regression, Stoke’s shift. The in-phase component is related to the index of refraction and therefore governs the scattering of light. The output polarisation state from the DiPOLE amplifier was assessed by placing a cube polariser in the output. Suppose that the point magnetic dipole moment µ~ now moves with velocity ~v. For calculating a dipole moment, this means that the Bader partial charge should be +0. Vibrational modes using Gaussian This exercise follows the drylab Geometry optimisation using Gaussian and uses checkpoint files left in the unix directory /usr/local/tmp1/n. 140576 4 H. The dipole moment can be calculated by the equation m=I. How to request explicit output of transition dipole moment in Gaussian vibrational property calculation? Normal vibration calculation gives IR position and intensity. As shown in Tables 2 and 3 and Figure 1, the raw distributions of net charges and dipole moments from the databases and also the corresponding normalized distributions (net charge or dipole per atom), while continuous, are highly skewed and non-Gaussian. The flux through the Gaussian surface in drawing A is the same as the flux through the Gaussian surface in. 
The excited state dipole moment of dye is found to be greater than its corresponding ground state counterpart and, ground and excited state dipole moments are almost perpendicular to each other. The three resulting dipole moments are then Fourier transformed to give the frequency-dependent linear polarizabilty, and thus the absorption spectrum. Any object or system that is oppositely charged at two points or poles, such as a magnet, a polar molecule, or an antenna element. 0 Program: A Discovery Tool for Uncovering Chemical Bonding Concepts in Complex Wavefunctions. #t indicates terse output, producing a leaner output file. The asymmetry of the first stage will increase the DC offset at the output dramatically. GAUSSIAN 09W TUTORIAL. Dipole moment of propene. parses output of gaussian realtime scf files. 9919 eV (1249. Table 2 indicates similar maximum and minimum changes for all of the other one-electron properties. Molden is a package for displaying Molecular Density from the Ab Initio packages GAMESS-UK , GAMESS-US and GAUSSIAN and the Semi-Empirical packages Mopac/Ampac, it also supports a number of other programs via the Molden Format. Define an electric dipole moment and a magnetic dipole moment with simple formulas AND sketches containing, charge, current, and distance. Mopac and Gaussian. com This is a quick tutorial that will help you to make your way through the. Line 2: provides the Gaussian route, i. > > Thanks for help > > francesco pietra >. For highly symmetric charge-distributions (for instance those with an inversion center), the permanent moment is zero. Palkina, V. the dielectric constant for low dielectric media The file dip. Linear antennas, Hertzian dipole, standing-wave antennas, half-wave dipole, monopole antennas, traveling wave antennas, vee and rhombic antennas, loop antennas, circular and square loops, dipole and quadrupole radiation. 
Since Multiwfn is a heuristic and very user-friendly program, it is absolutely unnecessary to read through the. Vibrationally-resolved electronic spectra in GAUSSIAN 09 V. Gaussian input is always in the form of a unformatted text or ASCII file, usually called 'jobname'. In this work, we demonstrate that a dipole bio-originated from bR-PM can bidirectionally mediate the performance of a bottom-contact TiO2 nanowire field effect transistor (FET. The magnetic forces on the left and right sides of the current loop have the same magnitude but point in opposite directions (see Figure 6. A typical dipole moment for a molecule is about 1 D. title = "Ganglioside GD3 Lactones: Polar Head Group Mediated Control of the Intermolecular Organization", abstract = "The individual properties and intermolecular organization of ganglioside GD3 and of two of its lactone forms (GD3Lactone I and GD3Lactone II) were studied in lipid monolayers. The GAMESS4 program calculates. To get the total dipole moment, first you should add the nuclear contribution, but the dipole moment will still slightly differ from that obtained by Gaussian (unless you use full CI) since the orbital relaxation is missing. noise has a Gaussian distribution in the spectral k space with a FWHM that is 2 times larger than that of the soliton, and it has 5% of the soliton’s power. Dipole moment is the negative derivative of the energy with respect to applied electric field. g_dipoles computes the total dipole plus fluctuations of a simulation system. (a) Compute and compare the interaction energy between the moving magnetic dipole. T1 - Direct measurement of the dipole moment of a metastable merocyanine by electromechanical interferometry. The corrections for the total energy are calculated as the energy difference between a monopole/dipole and quadrupole in the current supercell and the same dipole placed in a super cell with the corresponding lattice vector approaching infinity. 
read an Gaussian standard output file (. I created GOT to faciliate manipulation of huge number of outputs created while screening libraries of compounds. Gaussian-type functions are not the most natural choice for expanding the wavefunction. 7 Dipole corrections for Up: 8 Special switches for Previous: 8. The dipole moment can be measured experimentally by studying how the microwave spectral lines of a molecule change in the presence of electric field (the Stark effect). You may do PES scan by varying dihedral angle. For s ¼ 0 we have a Dm ¼ 0 transition and s ¼ 1 corresponds to a Dm ¼ 1 transition. The formula for electric dipole moment for a pair of equal & opposite charges is p = qd, the magnitude of the charges multiplied by the distance between the two. Equation $$\ref{4-24}$$ shows that the strength or energy of the interaction between a charge distribution and an electric field depends on the dipole moment of the charge distribution. As we would expect from electronegativities, the oxygen atom is negatively charged. The above output reports the A-X frequency of HO2 to be 0. parses output of gaussian realtime scf files. Let V = 0 at r = ∞. The magnetic moment of a magnet is a quantity that determines the force that the magnet can exert on electric currents and the torque that a magnetic field will exert on it. Last updated on: 20 June 2018. Furthermore, the dipole autocorrelation function will be computed when option -corr is used. The asymmetry of the first stage will increase the DC offset at the output dramatically. Johnson and A. out) and show the optimized structure of the molecule. In some cases however dipoles are caused by entirely electronic quantum effects where the molecule is composed of nothing but carbon and hydrogen. Total atomic charges: 1 C. While optimizing the structure using gaussian, I. Q&A for Work. For Nicotine, the calculated dipole moment value is 3. L, where I is the current flow (in Amperes) and L is the length of the dipole (0. 
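The point-charge formula p = qd quoted above generalizes to a vector sum over all charges, p = Σᵢ qᵢ rᵢ, which is what "perform a vector sum of the contributions" amounts to. A minimal sketch with made-up charges (not data from any calculation mentioned here):

```python
# Dipole moment of a set of point charges: p = sum_i q_i * r_i.
# For a neutral system the result is independent of the chosen origin.

def dipole_moment(charges, positions):
    px = py = pz = 0.0
    for q, (x, y, z) in zip(charges, positions):
        px += q * x
        py += q * y
        pz += q * z
    return (px, py, pz)

# +q and -q separated by distance d along z gives |p| = q*d.
q, d = 1.5, 2.0
p = dipole_moment([+q, -q], [(0, 0, d / 2), (0, 0, -d / 2)])
print(p)  # (0.0, 0.0, 3.0)
```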
It is also called the Gaussian unit system, Gaussian-cgs units, or often just cgs units. The general operator structure of the P- and T-odd nucleon-nucleon interaction is discussed and applied to the two-body contributions of the deuteron EDM, which can be calculated in terms of P- and T-odd. out) and show the structure and the orbit of the molecule. A new class of N-dimensional vector. When the water is exposed to an external electric field, a torque is exerted on the water molecule, and it will try to align its dipole moment with the external electric field. The direction of p is from the minus charge to the plus charge. A Gaussian frequency calculation on the F atom will produce an output file suitable for thermo. highest charge separation). If the selected item is not present in the file, the item is dimmed in the menu. It is calculated with the use of all-particle explicitly correlated Gaussian functions with shifted centers. The defining relation is B = H + 4πM. Conformational calculation using Gaussian 09 program. If you look at the top of the population analysis you will find. Is the equilibrium stable or unstable? [2007 F,2008,2011] Sol. In a particular situation, it has dipole moment aligned with the electric field. 680" coaxial deeptow cable. 2 Magnetic dipole radiation pattern given a solenoidal charge motion orbit-ing the center of the dipole pattern. out) and show the optimized structure of the molecule. The in-phase component is related to the index of refraction and therefore governs the scattering of light. Does this mean that data2 calculated by Gaussian is invalid for the dipole moment calculation? I apologize in advance if this question was too basic to ask on this forum. Such a body is an electric dipole. The CCCBDB contains: Experimental and computed (quantum mechanics) thermochemical data for a selected set of 1968 gas-phase atoms and small molecules. Dipole moments of MA were calculated by the ab initio Hartree-Fock method. 
Hi Everyone there. The greenhouse effect is HUGE ! From the “no atmosphere” model of -18°C to an observed average surface temp of 15°C, it is a +33°C effect ! Furthermore, if this were linear then we would get 1°C increase for every 7 W/m2. Dipole moment analysis B-8, 34 opt. The dipole itself is constructed of two gold spheres about 2 mm in diameter, separated by 9. Standard model contribution to the electric dipole moment of the deuteron, 3 H, and 3 He nuclei. From this you can compute e. a semi-gaussian CR-RC shaper. [11] discuss four dipole models with different dipole moment constraints. The temperature was set to 20 K (for analysis, this is only used in the calculation of the quantum correction factor for the autocorrelation function of the dipole moment). European Biophysics Journal, 2004. " Click on the "View" icon and scroll back to the top to see the direction of the dipole (i. Other molecular properties involving external fields (response functions) or transition moments can also be computed from integrals of this form. I created GOT to faciliate manipulation of huge number of outputs created while screening libraries of compounds. At the moment the visualization of ADF output files is not as comprehensive as the visualization of GAMESS or Gaussian files: extraction of energies, visualization of dipole moment and some atomic properties is provided, but no molecular orbitals visualization, etc. The first one presents a general theoretical background and th e approximations used to compute the transition dipole moment and the transition intensities. Other working can be submitted as a hand-written hard. Output files must be named filename. large number ~27! of molecules possessing dipole moments between 2. 26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. along the axis of the dipole nearer the positive charge 45° above the axis but nearer the positive charge 45° above the axis but nearer the negative charge. 
log From all of line you need only the line after Population analysis using the CI density. Ch12 IR and MS Page6. A Typical Gaussian Output File. A folded dipole is a dipole antenna with the ends folded back around and connected to each other, forming a loop as shown in Figure 1. "Look deep into nature, and then you will understand everything better. We have implemented the finite field method for calculating molecular polarizability tensors up to hexadecapole-hexadecapole using output files obtained from the Gaussian 85 suite of quantum chemistry programs. The solute dipole moment induces a dipole moment of opposite direction in the surrounding medium. Using electron spin precession in the molecule ThO, [ Baron et al. Further, the E-field only has one vector component and consequently the fields are linearly polarized. The temperature was set to 20 K (for analysis, this is only used in the calculation of the quantum correction factor for the autocorrelation function of the dipole moment). dipole shrinks to zero. It consists of two identical conductive elements such as metal wires or rods, which are usually bilaterally symmetrical. calculation of transition dipole moment can be helpful in understanding the energy transfer rates; provide a basis for calculating extinction coefficients and fluorescence lifetimes etc. These values may contain random phase shifts and amplitude variations caused by local oscillator drift, jitter, channel response and other factors. The total charge per fragment is also reported if applicable. Extracting dipole moment data from. Just a random side-question: What's the dipole moment's negative and positive sides in $\ce{NO+}$? At the time I was unable to answer that question sufficiently, therefore I share the addendum Q&A style. This item corresponds to the first-order term of the Taylor expansion of the transition dipole moment about the equilibrium geometry. 
utilizing ab initio, density functional theory, semi-empirical, molecular mechanics, and hybrid methods. 2007-01-01. 11 The applicability of this relation-ship to highly polar molecules such as ethylene carbonate ~EC, C3H4O3,m55. Rules for chemical formula. > or > -- Number of steps exceeded, NStep= Your-value > > grep "Optimization stopped" your Gaussian. This section describes the Results menu and the options available through it. The in-phase component is related to the index of refraction and therefore governs the scattering of light. For calculating a dipole moment, this means that the Bader partial charge should be +0. title = "Elementary quantum mechanics of the neutron with an electric dipole moment", abstract = "The neutron, in addition to possibly having a permanent electric dipole moment as a consequence of violation of time-reversal invariance, develops an induced electric dipole moment in the presence of an external electric field. My question, which may be quite silly, is: is the total dipole moment simply the sum of its components; i. 5 syst)×10 −29 e·cm. h5 Files; General Structure of Simulation Output. Viewing Gaussian Results. Discussion about problems or potential bugs with Avogadro - what doesn't work and why. Other molecular properties involving external fields (response functions) or transition moments can also be computed from integrals of this form. We compare a new classical water model, which features Gaussian charges and polarizability GCPM with ab initio Car-Parrinello molecular dynamics CPMD simulations. The basis functions for these heteroatoms are developed from the standard 3-21G basis set by adding one Gaussian-type d subshell to each Si, Br, or I atom. If you are trying to >>>>estimate<<<< the dipole moment of a diatomic (2- atom) molecule, it isn't too hard. The nuclear magneton is the natural unit for expressing magnetic dipole moments of heavy particles such as nucleons and atomic nuclei. 
, MP2/6-31G*) to obtain an accurate dipole moment. WebMO capabilities include: - Build molecules by drawing atoms and bonds in a 3-D molecular editor, or by speaking the name (e. The array values are “intensive”. The dipole moment of these was determined by the Gaussian software 5. Measured dipole moments are shown on the structures of 2ABN (a) and 3ABN (b) in both the ground and excited electronic states 51 Figure 4. The dipole antenna is symmetric when viewed azimuthally (around the long axis of the dipole); as a result the radiation pattern is not a function of the azimuthal angle. As shown in Tables 2 and 3 and Figure 1, the raw distributions of net charges and dipole moments from the databases and also the corresponding normalized distributions (net charge or dipole per atom), while continuous, are highly skewed and non-Gaussian. The dipole can be towed with up to 4 knots and the system is efficient at water depths up to 400m, depending on the geological setting. Dipole moments and static dipole polarizabilities have been calculated for a number of small molecules using the linear combination of Gaussian‐type orbitals-local spin density method. , 1971, In : The Journal of Chemical Physics. The bottom figure expands the latest 10,000 years and we can see that the dipole moment of the present field is decreasing, it is still well above the dipole moment of times past. 680" coaxial deeptow cable. A current density J(r,t) has the form J= C re−atr2 where C and a are constants. Paleomagnetic data show these purely geomagnetic predictions to be quite accurate. A quantum computer maps a neutral two-level. Make sure your code produces ONLY the gures requested and doesn’t output unecessary stu to the screen. > > Thanks for help > > francesco pietra >. To illustrate, we chose a linearly stable dipole soliton with Fig. Other molecular properties involving external fields (response functions) or transition moments can also be computed from integrals of this form. 
the electric eld induces a dipole moment that oscillates with the same frequency as the eld. For instance, some static output feedback problems can be cast as polynomial optimization problems whose feasible set $\mathbf K$ is defined by a polynomial matrix inequality (PMI). Description: Applicable to all ferromagnetic materials. Figure 12-3 However, if a bond is symmetrical, and has zero dipole moment, the electric field does not interact with this bond. In order to look at the dipole moment as a function of geometry, it was necessary to find a way to calculate the dipole moment for a specific geometry. ABSTRACTThe dipole moments of the ground and excited states of 4′-(hexyloxy)-4-biphenylcarbonitrile and 4-isothiocyanatophenyl 4-pentylbicyclo [2. The torque of a motor is an important parameter often considered by engineers. By combining the timing information produced from a CR-RC shaper output and differentiator output we succeeded to reach the time-walk of 0. Absolutely right, that's why I concluded in my very first post: "on my computer (Windows 7 64 bit) there must be "i8" value instead of "i4" in gaunbo6 in order to work properly with G09W64".
2020-04-03 20:56:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6703601479530334, "perplexity": 1124.7212039375204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518622.65/warc/CC-MAIN-20200403190006-20200403220006-00336.warc.gz"}
https://socratic.org/questions/how-do-you-solve-t-3-t-4-3-2t-3
# How do you solve t-3(t+4/3) = 2t+3?

$t = -\frac{7}{4}$

Distribute the $-3$: $t - 3t - 4 = 2t + 3$

$-2t - 2t = 3 + 4$ (move all the $t$ terms to the left side and all the constants to the right side)

$-4t = 7$

$t = -\frac{7}{4}$
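A quick way to check the algebra for an equation like this is to solve the rearranged linear form $at + b = ct + d$ with exact rational arithmetic and substitute back; `solve_linear` below is our own helper, not part of the original answer:

```python
# Expanding the left side: t - 3*(t + 4/3) = -2t - 4, so the equation
# is -2t - 4 = 2t + 3, i.e. a*t + b = c*t + d with a=-2, b=-4, c=2, d=3.
from fractions import Fraction

def solve_linear(a, b, c, d):
    """Solve a*t + b = c*t + d exactly (assumes a != c)."""
    return Fraction(d - b, a - c)

t = solve_linear(Fraction(-2), Fraction(-4), Fraction(2), Fraction(3))
print(t)  # -7/4

# Substitute back into the original equation to confirm both sides agree.
lhs = t - 3 * (t + Fraction(4, 3))
rhs = 2 * t + 3
assert lhs == rhs
```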
2021-12-01 03:56:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7546098828315735, "perplexity": 1105.0720169579408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.78/warc/CC-MAIN-20211201022332-20211201052332-00085.warc.gz"}
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5001955
### #ActualHodgman Posted 17 November 2012 - 11:01 PM

You're measuring the time from the end of frame#1 to the beginning of frame #2.

    A = get the time.
    ...spend a while doing stuff...
    B = get the time
    ...loop for next frame...
    C = get the time
    time spent doing stuff previously = C-B (wrong)
    ...spend a while doing stuff...
    etc

You really want to be measuring the time from the start of frame#1 to the start of frame#2.

    A = get the time.
    ...spend a while doing stuff...
    ...loop for next frame...
    B = get the time
    time spent doing stuff previously = B-A
    ...spend a while doing stuff...
    etc
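The fix described above — take one timestamp per frame and diff it against the previous frame's timestamp, so each delta spans start-to-start — can be sketched as follows. This is a Python stand-in for a game loop; `run_frames` and the sleep-based "work" are our own illustration:

```python
# Start-to-start frame timing: one timestamp per frame, diffed against
# the previous frame's timestamp. time.sleep() stands in for the
# per-frame work (update + render) of a real game loop.
import time

def run_frames(n_frames, work_seconds):
    deltas = []
    previous = time.perf_counter()      # A = get the time
    for _ in range(n_frames):
        time.sleep(work_seconds)        # ...spend a while doing stuff...
        now = time.perf_counter()       # B = get the time (next frame start)
        deltas.append(now - previous)   # time spent previously = B - A
        previous = now
    return deltas

for dt in run_frames(3, 0.02):
    print(f"{dt * 1000:.1f} ms")  # each delta covers the whole frame (~20 ms+)
```

Measuring end-of-frame to start-of-next, as in the buggy version, would instead report only the tiny gap between two adjacent timestamp calls.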
2014-08-21 02:27:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45338234305381775, "perplexity": 4783.113607434024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500813887.15/warc/CC-MAIN-20140820021333-00309-ip-10-180-136-8.ec2.internal.warc.gz"}
http://unixresources.net/faq/12358451.shtml
# Git on Windows: Where are my files?

WARNING: I'M NEW TO GIT!! I installed git on my local machine. I also installed it on a server on our network. On the server, I created C:\git to house my code and shared that directory. So, I did:

    $ cd C:\git
    $ git init --bare

I set up a new remote on my local machine to point to the shared directory of the "git server":

    $ git remote add shared "//shared/path/to/git/folder"
    $ cd C:\path\to\my\source\code
    $ git init
    $ git add -A
    $ git commit -m "initial push"
    $ git push shared master
    $ git checkout -b test
    $ git push shared test

So, now that I've done all that, when I go back to the server, I am expecting to see the server copies of my source code there, but I don't see it. Am I missing the entire point of git or am I doing something wrong?

## Answers

Git stores your source as "binary large objects" (BLOBs, or just "objects" for short). A bare repository, like the one on your server, contains ONLY those BLOBs, not a "working tree", which is what your cloned repository has, which is why you can't see your files as normal on the server. They're there, uniquely named based on their content, in the objects directory. Try this experiment:

    git init
    echo "hello" > hello.txt
    git add hello.txt
    dir .git/objects

You'll see a subdirectory called 2a. Look inside that directory:

    dir .git/objects/2a

You'll see a file called 93d00994fbd8c484f38b0423b7c42e87a55d48. The directory name (2a) along with the filename make up the SHA1 hash of the contents of the file hello.txt (the string "hello", that is). Now if you type:

    git cat-file -p 2a93d00994fbd8c484f38b0423b7c42e87a55d48

You'll see the contents of hello.txt! That's the object that'll be stored in the bare repository, which is why you don't just see the file hello.txt sitting there; it's been given a special name, compressed, and put in the objects directory. 
As your repository grows, things get a little more complicated, because as the size of your content and the number of your commits grow, Git starts packing similar files together; all of the information is still there, but it's further compressed for efficiency. Another way you can reassure yourself that your files are indeed being stored on your server: create another clone (not a bare one!) and you'll have a working tree with all of your files in it.

You initialized a bare repository. It does not have a source code structure. It does not contain a "working tree" at all. Instead, it contains all the contents of the .git subdirectory right in the main directory itself. So if you want a working copy on your server, you have to clone your code from the bare repo, then pull the latest commits to synchronize it.
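For the curious, the object name in the answer above comes from hashing a short header plus the file contents. A sketch of Git's blob-naming scheme — note the digest depends on the exact bytes written (whether `echo` appends a newline, quotes, or CRLF), which is why it may not match the 2a93d0… example shown for a Windows shell:

```python
# Git names a blob by SHA-1 of "blob <size>\0<content>". The first two
# hex characters become the directory under .git/objects and the
# remaining 38 become the filename.
import hashlib

def git_blob_sha1(content: bytes) -> str:
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# "hello\n" is what `echo hello > hello.txt` writes on most Unix shells.
digest = git_blob_sha1(b"hello\n")
print(digest)  # ce013625030ba8dba906f756967f9e9ca394464a
print(f".git/objects/{digest[:2]}/{digest[2:]}")
```

You can confirm the digest against `git hash-object hello.txt` for the same bytes.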
https://www.neetprep.com/questions/477-Chemistry/7833-Structure-Atom?courseId=141&testId=1061969-Past-Year----MCQs&subtopicId=41-De-Broglie-Equation
A 0.66 kg ball is moving with a speed of 100 m/s. The associated wavelength will be

1. $6.6\times {10}^{-34}\ \mathrm{m}$
2.
3. $1.0\times {10}^{-32}\ \mathrm{m}$
4.

Subtopic: De Broglie Equation
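The wavelength follows directly from the de Broglie relation; a worked evaluation (taking $h \approx 6.626\times10^{-34}\,\mathrm{J\,s}$):

```latex
\lambda \;=\; \frac{h}{mv}
        \;=\; \frac{6.626\times10^{-34}\,\mathrm{J\,s}}{0.66\,\mathrm{kg}\times 100\,\mathrm{m\,s^{-1}}}
        \;\approx\; 1.0\times10^{-35}\,\mathrm{m}
```

Neither of the two options visible above matches this value; the correct choice is among the options missing from the extract.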
https://www.physicsforums.com/threads/self-buckling-initial-conditions.912034/
# Self Buckling Initial conditions

1. Apr 20, 2017

### Vincent Isoz

Hi

For a book I'm writing, I'm trying to write a very detailed proof of self-buckling. I did it largely by taking inspiration from: https://en.wikipedia.org/wiki/Self-buckling

But I really don't get how we arrive at $A=0$ from the condition at $x=0$, where we obviously must have:

$$\dfrac{\mathrm{d}^2 x}{\mathrm{d}x^2}(x=0)=0$$

I must be making a stupid error with the Bessel function properties, but I've been trying for three weeks and keep failing. Any help would be greatly appreciated. Thx

2. Apr 21, 2017

### Dr.D

While this topic is mathematically interesting, it has little practical application. In 50+ years as an engineer dealing with mechanics problems of all sorts, I have never seen anything close to this in real life, and I don't know of anyone who has. This may well be why you are not generating lots of answers.
https://math.stackexchange.com/questions/1851664/prove-that-bigcup-mathbbn-mathbbn
# Prove that $\bigcup\mathbb{N}=\mathbb{N}$

Prove that $\bigcup\mathbb{N}=\mathbb{N}$. Showing that $\mathbb{N}\subseteq \bigcup\mathbb{N}$ is simple. However, I'm not seeing how to handle showing $\bigcup\mathbb{N}\subseteq\mathbb{N}$. I know that $\mathbb{N}:=\bigcap\{z\in\mathcal{P}(x)\mid x\text{ is inductive}\}$. Therefore, $$\bigcup\mathbb{N}= \bigcup\big(\bigcap\{z\in\mathcal{P}(x)\mid x\text{ is inductive}\}\big)$$ But I'm not sure what to make of this.

EDIT: The definition of a natural number I'm working with is $\mathbb{N}=\{\mathbf{0},\mathbf{0^{+}},\mathbf{0^{++}},\mathbf{0^{+++}},\ldots\}$, where $\mathbf{0}=\emptyset$ and $\mathbf{0^+}=\mathbf{0} \cup \{\mathbf{0}\}$, and the set of natural numbers is as defined above.

• Do you mean $\bigcup_{n\in \mathbb{N}}\{n\}$? – QTHalfTau Jul 7 '16 at 2:58
• No, I mean $\bigcup$ given by $\bigcup A := \{x: \exists X \in A: x \in X\}$ – user352541 Jul 7 '16 at 3:00
• If you mean the union with itself, the answer is trivial – Zelos Malum Jul 7 '16 at 3:02
• How do you define a number $n\in \mathbb{N}$? Do you define $n=\{0, 1, \cdots, n-1\}$ with $0=\emptyset$? It seems so... Otherwise I don't understand what you mean by $\bigcup \mathbb{N}$. – Hamed Jul 7 '16 at 3:10
• $\mathbb{N}=\{\mathbf{0},\mathbf{0^+},\mathbf{0^{++}},\mathbf{0^{+++}},\ldots\}$, where $\mathbf{0}=\emptyset$ and $\mathbf{0^{+}}=\mathbf{0}\cup\{\mathbf{0}\}$. – user352541 Jul 7 '16 at 3:21

If you define what you mean by $\mathbb{N}$ a little more precisely (e.g., $\mathbb{N}$ is the minimal set containing $0$ and closed under the map taking $x$ to $x \cup \{x\}$) then you should be able to show by induction on sets that every element of a natural number is a natural number; that is, if $n \in \mathbb{N}$ and $x \in n$, then $x \in \mathbb{N}$. Then $\bigcup \mathbb{N}$ contains only natural numbers, so is clearly a subset of $\mathbb{N}$.

The "..." in your def'n of $\mathbb N$ is not precise and conceals some assumptions.
Without clarifying this def'n you may either get stuck or get into a circular argument.
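As a finite sanity check of that induction argument (illustrative code, not a proof), one can encode the von Neumann naturals as nested frozensets and verify that $\bigcup\{0,\dots,N\}$ contains only naturals; in fact it equals $N$ itself:

```python
# von Neumann naturals: 0 = {}, n+1 = n ∪ {n}, modelled with frozensets.
def nat(n):
    s = frozenset()
    for _ in range(n):
        s = frozenset(s | {s})
    return s

N = 6
A = {nat(k) for k in range(N + 1)}   # the set {0, 1, ..., N}
union = frozenset().union(*A)        # ⋃{0, 1, ..., N}

assert union == nat(N)               # the union is itself the natural N
assert all(x in {nat(k) for k in range(N)} for x in union)  # members are naturals
print("ok")
```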
https://en.wikiredia.com/wiki/Hail
Hailstorm
Effect: Extreme damage, dents in metal
A large hailstone, about 6 cm (2.4 in) in diameter

Hail is a form of solid precipitation. It is distinct from ice pellets (American English "sleet"), though the two are often confused.[1] It consists of balls or irregular lumps of ice, each of which is called a hailstone. Ice pellets fall generally in cold weather while hail growth is greatly inhibited during cold surface temperatures.[2] Unlike other forms of water ice such as graupel, which is made of rime, and ice pellets, which are smaller and translucent, hailstones usually measure between 5 millimetres (0.2 in) and 15 centimetres (6 in) in diameter. The METAR reporting code for hail 5 mm (0.20 in) or greater is GR, while smaller hailstones and graupel are coded GS. Hail is possible within most thunderstorms as it is produced by cumulonimbus,[3] and within 2 nautical miles (3.7 km) of the parent storm. Hail formation requires environments of strong, upward motion of air with the parent thunderstorm (similar to tornadoes) and lowered heights of the freezing level. In the mid-latitudes, hail forms near the interiors of continents, while in the tropics, it tends to be confined to high elevations. There are methods available to detect hail-producing thunderstorms using weather satellites and weather radar imagery. Hailstones generally fall at higher speeds as they grow in size, though complicating factors such as melting, friction with air, wind, and interaction with rain and other hailstones can slow their descent through Earth's atmosphere. Severe weather warnings are issued for hail when the stones reach a damaging size, as it can cause serious damage to human-made structures and, most commonly, farmers' crops.
## Definition Any thunderstorm which produces hail that reaches the ground is known as a hailstorm.[4] Hail has a diameter of 5 millimetres (0.20 in) or more.[3] Hailstones can grow to 15 centimetres (6 in) and weigh more than 0.5 kilograms (1.1 lb).[5] Unlike ice pellets, hailstones are layered and can be irregular and clumped together. Hail is composed of transparent ice or alternating layers of transparent and translucent ice at least 1 millimetre (0.039 in) thick, which are deposited upon the hailstone as it travels through the cloud, suspended aloft by air with strong upward motion until its weight overcomes the updraft and falls to the ground. Although the diameter of hail is varied, in the United States, the average observation of damaging hail is between 2.5 cm (1 in) and golf ball-sized (1.75 in).[6] Stones larger than 2 cm (0.80 in) are usually considered large enough to cause damage. The Meteorological Service of Canada issues severe thunderstorm warnings when hail that size or above is expected.[7] The US National Weather Service has a 2.5 cm (1 in) or greater in diameter threshold, effective January 2010, an increase over the previous threshold of ¾-inch hail.[8] Other countries have different thresholds according to local sensitivity to hail; for instance grape growing areas could be adversely impacted by smaller hailstones. Hailstones can be very large or very small, depending on how strong the updraft is: weaker hailstorms produce smaller hailstones than stronger hailstorms (such as supercells). 
## Formation Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing 0 °C (32 °F).[3] These types of strong updrafts can also indicate the presence of a tornado.[9] The growth rate of hailstones is impacted by factors such as higher elevation, lower freezing zones, and wind shear.[10] ### Layer nature of the hailstones Hail shaft Severe thunderstorms containing hail can exhibit a characteristic green coloration[11] Like other precipitation in cumulonimbus clouds, hail begins as water droplets. As the droplets rise and the temperature goes below freezing, they become supercooled water and will freeze on contact with condensation nuclei. A cross-section through a large hailstone shows an onion-like structure. This means the hailstone is made of thick and translucent layers, alternating with layers that are thin, white and opaque. Former theory suggested that hailstones were subjected to multiple descents and ascents, falling into a zone of humidity and refreezing as they were uplifted. This up and down motion was thought to be responsible for the successive layers of the hailstone. New research, based on theory as well as field study, has shown this is not necessarily true. The storm's updraft, with upwardly directed wind speeds as high as 110 miles per hour (180 km/h),[12] blows the forming hailstones up the cloud. As the hailstone ascends it passes into areas of the cloud where the concentration of humidity and supercooled water droplets varies. The hailstone’s growth rate changes depending on the variation in humidity and supercooled water droplets that it encounters. The accretion rate of these water droplets is another factor in the hailstone’s growth. When the hailstone moves into an area with a high concentration of water droplets, it captures the latter and acquires a translucent layer. 
Should the hailstone move into an area where mostly water vapour is available, it acquires a layer of opaque white ice.[13] Furthermore, the hailstone’s speed depends on its position in the cloud’s updraft and its mass. This determines the varying thicknesses of the layers of the hailstone. The accretion rate of supercooled water droplets onto the hailstone depends on the relative velocities between these water droplets and the hailstone itself. This means that generally the larger hailstones will form some distance from the stronger updraft where they can pass more time growing.[13] As the hailstone grows it releases latent heat, which keeps its exterior in a liquid phase. Because it undergoes 'wet growth', the outer layer is sticky (i.e. more adhesive), so a single hailstone may grow by collision with other smaller hailstones, forming a larger entity with an irregular shape.[14] Hail can also undergo 'dry growth' in which the latent heat release through freezing is not enough to keep the outer layer in a liquid state. Hail forming in this manner appears opaque due to small air bubbles that become trapped in the stone during rapid freezing. These bubbles coalesce and escape during the 'wet growth' mode, and the hailstone is more clear. The mode of growth for a hailstone can change throughout its development, and this can result in distinct layers in a hailstone's cross-section.[15] The hailstone will keep rising in the thunderstorm until its mass can no longer be supported by the updraft. This may take at least 30 minutes based on the force of the updrafts in the hail-producing thunderstorm, whose top is usually greater than 10 km high. It then falls toward the ground while continuing to grow, based on the same processes, until it leaves the cloud. It will later begin to melt as it passes into air above freezing temperature.[16] Thus, a unique trajectory in the thunderstorm is sufficient to explain the layer-like structure of the hailstone. 
The only case in which multiple trajectories can be discussed is in a multicellular thunderstorm, where the hailstone may be ejected from the top of the "mother" cell and captured in the updraft of a more intense "daughter" cell. This, however, is an exceptional case.[13] ### Factors favoring hail Hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of 11,000 feet (3,400 m).[17] Movement of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling which lowers the freezing level of thunderstorm clouds giving hail a larger volume to grow in. Accordingly, hail is less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater altitude. Hail in the tropics occurs mainly at higher elevations.[18] Hail growth becomes vanishingly small when air temperatures fall below −30 °C (−22 °F) as supercooled water droplets become rare at these temperatures.[17] Around thunderstorms, hail is most likely within the cloud at elevations above 20,000 feet (6,100 m). Between 10,000 feet (3,000 m) and 20,000 feet (6,100 m), 60 percent of hail is still within the thunderstorm, though 40 percent now lies within the clear air under the anvil. 
Below 10,000 feet (3,000 m), hail is equally distributed in and around a thunderstorm to a distance of 2 nautical miles (3.7 km).[19] ## Climatology Hail occurs most frequently within continental interiors at mid-latitudes and is less common in the tropics, despite a much higher frequency of thunderstorms than in the mid-latitudes.[20] Hail is also much more common along mountain ranges because mountains force horizontal winds upwards (known as orographic lifting), thereby intensifying the updrafts within thunderstorms and making hail more likely.[21] The higher elevations also result in there being less time available for hail to melt before reaching the ground. One of the more common regions for large hail is across mountainous northern India, which reported one of the highest hail-related death tolls on record in 1888.[22] China also experiences significant hailstorms.[23] Central Europe and southern Australia also experience a lot of hailstorms. Regions where hailstorms frequently occur are southern and western Germany, northern and eastern France, and southern and eastern Benelux. In southeastern Europe, Croatia and Serbia experience frequent occurrences of hail.[24] In North America, hail is most common in the area where Colorado, Nebraska, and Wyoming meet, known as "Hail Alley".[25] Hail in this region occurs between the months of March and October during the afternoon and evening hours, with the bulk of the occurrences from May through September. Cheyenne, Wyoming is North America's most hail-prone city with an average of nine to ten hailstorms per season.[26] To the north of this area and also just downwind of the Rocky Mountains is the Hailstorm Alley region of Alberta, which also experiences an increased incidence of significant hail events. ## Short-term detection Example of a three-body spike: the weak triangular echoes (pointed by the arrow) behind the red and white thunderstorm core are related to hail inside the storm. 
Weather radar is a very useful tool to detect the presence of hail-producing thunderstorms. However, radar data has to be complemented by a knowledge of current atmospheric conditions which can allow one to determine if the current atmosphere is conducive to hail development. Modern radar scans many angles around the site. Reflectivity values at multiple angles above ground level in a storm are proportional to the precipitation rate at those levels. Summing reflectivities over the vertical gives the Vertically Integrated Liquid, or VIL, an estimate of the liquid water content in the cloud. Research shows that hail development in the upper levels of the storm is related to the evolution of VIL. VIL divided by the vertical extent of the storm, called VIL density, has a relationship with hail size, although this varies with atmospheric conditions and therefore is not highly accurate.[27] Traditionally, hail size and probability can be estimated from radar data by computer using algorithms based on this research. Some algorithms include the height of the freezing level to estimate the melting of the hailstone and what would be left on the ground. Certain patterns of reflectivity are important clues for the meteorologist as well. The three-body scatter spike is an example. This is the result of energy from the radar hitting hail and being deflected to the ground, from where it is deflected back to the hail and then to the radar. This energy takes more time to travel from the hail to the ground and back than the energy that goes directly from the hail to the radar, so the echo appears further away from the radar than the actual location of the hail on the same radial path, forming a cone of weaker reflectivities.
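As a toy illustration of the VIL density quantity described above (the input values are made up for the sketch, not operational thresholds):

```python
# Toy sketch of VIL density: vertically integrated liquid (kg/m^2)
# divided by the storm's vertical extent (m), scaled to g/m^3.
def vil_density(vil_kg_per_m2, echo_top_m):
    return vil_kg_per_m2 / echo_top_m * 1000.0  # g/m^3

print(round(vil_density(45.0, 12000.0), 2))  # -> 3.75
```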
More recently, the polarization properties of weather radar returns have been analyzed to differentiate between hail and heavy rain.[28][29] The use of differential reflectivity ($Z_{dr}$), in combination with horizontal reflectivity ($Z_{h}$), has led to a variety of hail classification algorithms.[30] Visible satellite imagery is beginning to be used to detect hail, but false alarm rates remain high using this method.[31]

## Size and terminal velocity

Hailstones ranging in size from a few millimetres to over a centimetre in diameter

Large hailstone with concentric rings

The size of hailstones is best determined by measuring their diameter with a ruler. In the absence of a ruler, hailstone size is often visually estimated by comparing it to that of known objects, such as coins.[32] Using objects such as hen's eggs, peas, and marbles for comparing hailstone sizes is imprecise, due to their varied dimensions. The UK organisation TORRO also has scales for both hailstones and hailstorms.[33] When observed at an airport, METAR code is used within a surface weather observation to report the size of the hailstone. Within METAR code, GR is used to indicate larger hail, of a diameter of at least 0.25 inches (6.4 mm); GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding GS, which is short for the French word grésil.[34] The largest recorded hailstone in the United States was 7.9 inches (20 cm) in diameter and 1.94 pounds (0.88 kg) in weight; it fell in Vivian, South Dakota on July 23, 2010. Terminal velocity of hail, or the speed at which hail is falling when it strikes the ground, varies. It is estimated that a hailstone of 1 centimetre (0.39 in) in diameter falls at a rate of 9 metres per second (20 mph), while stones of 8 centimetres (3.1 in) in diameter fall at a rate of 48 metres per second (110 mph).
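Those figures can be rationalised with a simple drag-balance estimate for an ice sphere. The drag coefficient and densities below are generic textbook assumptions, not values from this article, so the result only matches the quoted speeds in order of magnitude:

```python
import math

# Rough drag-balance estimate for an ice sphere at terminal speed:
# weight = drag  =>  v = sqrt(4 g d rho_ice / (3 Cd rho_air)).
# Cd = 0.5 and the densities are assumed values, not from the article.
def terminal_speed(d_m, cd=0.5, rho_ice=900.0, rho_air=1.2, g=9.81):
    return math.sqrt(4.0 * g * d_m * rho_ice / (3.0 * cd * rho_air))

print(round(terminal_speed(0.01), 1))  # 1 cm stone -> 14.0 m/s
print(round(terminal_speed(0.08), 1))  # 8 cm stone -> 39.6 m/s
```

Shape, melting, and wind (discussed next in the article) explain why observed speeds deviate from this idealised sphere model.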
Hailstone velocity is dependent on the size of the stone, friction with air it is falling through, the motion of wind it is falling through, collisions with raindrops or other hailstones, and melting as the stones fall through a warmer atmosphere. As hail stones are not perfect spheres it is difficult to calculate their speed accurately.[35] ### Hail records Megacryometeors, large rocks of ice that are not associated with thunderstorms, are not officially recognized by the World Meteorological Organization as "hail," which are aggregations of ice associated with thunderstorms, and therefore records of extreme characteristics of megacryometeors are not given as hail records.
https://chemistry.stackexchange.com/questions/74677/why-boron-trihydride-is-weaker-lewis-acid-than-boron-tribromide-and-boron-tri-io
Why is boron trihydride a weaker Lewis acid than boron tribromide and boron triiodide?

There is no back-bonding in the case of boron trihydride, so it is a stronger Lewis acid than boron trichloride (even though the chlorine atom is more electronegative than the hydrogen atom). Then why is boron trihydride a weaker Lewis acid than boron tribromide and boron triiodide?

• Backbonding is ineffective in the case of tribromides and triiodides since their 4p and 5p orbitals don't really overlap well with the 2p of boron. Moreover, they are electronegative and can make the boron more positive, increasing its tendency to accept a pair of electrons. – Pritt says Reinstate Monica May 17 '17 at 6:39

Predicting the acidic strengths of $$\ce{BX_3}$$ using back-bonding is one plausible theory. This article[1] suggests using bond-strength data to predict the same:

The weaker Lewis acid strength of $$\ce{BF3}$$ compared to $$\ce{BCl3}$$ with respect to bases such as $$\ce{NMe3}$$ is explained in terms of the ligand close-packing (LCP) model. The halogen ligands remain close-packed during the formation of the complex, leading to an increase in the $$\pu{B−Hal}$$ bond length. Because a $$\ce{B−F}$$ bond is stronger than a $$\ce{B−Cl}$$ bond, more energy is required to stretch a $$\ce{B−F}$$ bond than a $$\ce{B−Cl}$$ bond. Hence $$\ce{BF3}$$ is a weaker acid than $$\ce{BCl3}$$.

Pulling up the bond strength data, I get:

$$\begin{array}{c|c} \textbf{Bond} & \textbf{Strength (kJ/mol)} \\\hline \text{B-H} & 389\\ \text{B-F} & 613 \\ \text{B-Cl} & 456\\ \text{B-Br} & 377\\ \text{B-I} & \text{?}\end{array}$$

Building on these two pieces of information, it can be said that since the $$\ce{B-H}$$ bond is stronger than the $$\ce{B-Br}$$ and $$\ce{B-I}$$ bonds (see below), $$\ce{BH3}$$ is a weaker acid than $$\ce{BBr3}$$ and $$\ce{BI3}$$.
I'm afraid I couldn't find the data for the $$\ce{B-I}$$ bond, but since we can compare the bond strengths of the boron trihalides on the basis of back-bonding theory, we can be reasonably sure that the strength of the $$\ce{B-I}$$ bond will be $$\lt \pu{377 kJ mol^-1}$$.

References

1. Ligand Close-Packing and the Lewis Acidity of BF3 and BCl3. Bryan D. Rowsell, Ronald J. Gillespie, and George L. Heard. Inorganic Chemistry 1999, 38 (21), 4659-4662. DOI: 10.1021/ic990713m
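The argument above reduces to an ordering: the stronger the B−X bond, the more energy is needed to stretch it on adduct formation, and the weaker the Lewis acid. Sorting the quoted data (illustrative code; B−I is omitted since no value was found):

```python
# Quoted bond strengths in kJ/mol; on the ligand close-packing argument,
# a stronger B-X bond implies a weaker Lewis acid BX3.
bond_strength = {"B-F": 613, "B-Cl": 456, "B-H": 389, "B-Br": 377}

# Order the corresponding BX3 acids from weakest to strongest.
order = sorted(bond_strength, key=lambda b: bond_strength[b], reverse=True)
print(order)  # -> ['B-F', 'B-Cl', 'B-H', 'B-Br']
```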
http://libros.duhnnae.com/2017/aug3/15018183114-Superconductivity-on-the-surface-of-topological-insulators-and-in-two-dimensional-noncentrosymmetric-materials-Condensed-Matter-Superconductivity.php
# Superconductivity on the surface of topological insulators and in two-dimensional noncentrosymmetric materials - Condensed Matter > Superconductivity

Abstract: We study the superconducting instabilities of a single species of two-dimensional Rashba-Dirac fermions, as it pertains to the surface of a three-dimensional time-reversal symmetric topological band insulator. We also discuss the similarities as well as the differences between this problem and that of superconductivity in two-dimensional time-reversal symmetric noncentrosymmetric materials with spin-orbit interactions. The superconducting order parameter has both s-wave and p-wave components, even when the superconducting pair potential only transfers either pure singlet or pure triplet pairs of electrons in and out of the condensate, a corollary to the non-conservation of spin due to the spin-orbit coupling. We identify one single superconducting regime in the case of superconductivity in the topological surface states (Rashba-Dirac limit), irrespective of the relative strength between singlet and triplet pair potentials. In contrast, in the Fermi limit relevant to the noncentrosymmetric materials we find two regimes depending on the value of the chemical potential and the relative strength between singlet and triplet potentials. We construct explicitly the Majorana bound states in these regimes. In the single regime of the Rashba-Dirac limit, there exists one and only one Majorana fermion bound to the core of an isolated vortex. In the Fermi limit, there is always an even number (0 or 2, depending on the regime) of Majorana fermions bound to the core of an isolated vortex.
In all cases, the vorticity required to bind Majorana fermions is unity, in contrast to the half-flux in the case of two-dimensional $p_x \pm i p_y$ superconductors that break time-reversal symmetry.

Author: Luiz Santos, Titus Neupert, Claudio Chamon, Christopher Mudry
Source: https://arxiv.org/
https://openreview.net/forum?id=70APYZs3twc
## Sparse, Geometric Autoencoder Models of V1

26 Sept 2022, 12:09 (modified: 09 Nov 2022, 02:12). NeurReps 2022 Poster. Readers: Everyone

Keywords: Locality, Manifold Learning, Graph Laplacian, Phase Symmetry

TL;DR: Recurrent sparse autoencoders learn brain-like Gabor filters when adding additional regularization that captures physical constraints of V1

Abstract: The classical sparse coding model represents visual stimuli as a convex combination of a handful of learned basis functions that are Gabor-like when trained on natural image data. However, the Gabor-like filters learned by classical sparse coding far overpredict well-tuned simple cell receptive field (SCRF) profiles. A number of subsequent models have either discarded the sparse dictionary learning framework entirely or have yet to take advantage of the surge in unrolled, neural dictionary learning architectures. A key missing theme of these updates is a stronger notion of *structured sparsity*. We propose an autoencoder architecture whose latent representations are implicitly, locally organized for spectral clustering, which begets artificial neurons better matched to observed primate data. The weighted-$\ell_1$ (WL) constraint in the autoencoder objective function maintains core ideas of the sparse coding framework, yet also offers a promising path to describe the differentiation of receptive fields in terms of a discriminative hierarchy in future work.

7 Replies
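A minimal sketch of what a weighted-$\ell_1$ penalty looks like in isolation (the variable names and weights here are hypothetical; the paper's actual objective is not reproduced):

```python
# Weighted-l1 penalty on a latent code z with per-unit weights w:
# sum_i w_i * |z_i|. Larger weights push the corresponding units to zero,
# giving structured rather than uniform sparsity.
def weighted_l1(z, w):
    return sum(wi * abs(zi) for zi, wi in zip(z, w))

z = [0.5, -2.0, 0.0]
w = [1.0, 0.1, 10.0]
print(weighted_l1(z, w))  # -> 0.7
```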
https://learn.careers360.com/ncert/question-if-x-and-y-are-two-sets-such-that-x-has-40-elements-x-y-has-60-elements-and-x-y-has-10-elements-how-many-elements-does-y-have/
Q5. If $X$ and $Y$ are two sets such that $X$ has 40 elements, $X \cup Y$ has 60 elements and $X \cap Y$ has 10 elements, how many elements does $Y$ have?

Given: $n(X) = 40$, $n(X \cup Y) = 60$ and $n(X \cap Y) = 10$.

By inclusion-exclusion, $n(X \cup Y) = n(X) + n(Y) - n(X \cap Y)$, so

$n(Y) = n(X \cup Y) - n(X) + n(X \cap Y) = 60 - 40 + 10 = 30$

Hence, set $Y$ has 30 elements.
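A quick sanity check of the inclusion-exclusion identity used above; the concrete elements below are invented for illustration, only the set sizes match the problem:

```python
# Build sets whose sizes match the problem statement.
X = set(range(0, 40))    # n(X) = 40
Y = set(range(30, 60))   # a candidate Y with 30 elements

assert len(X | Y) == 60  # n(X ∪ Y) = 60, as given
assert len(X & Y) == 10  # n(X ∩ Y) = 10, as given

# Recover n(Y) from the identity, exactly as in the worked solution.
n_Y = len(X | Y) - len(X) + len(X & Y)
print(n_Y)  # 30
```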
http://physics.stackexchange.com/questions/64405/photons-where-do-they-come-from
# Photons, where do they come from? [closed]

1. Photons, where do they come from?
2. What exactly is a photon?
3. I've certainly heard how they get produced, but it doesn't seem to make sense that some sort of particle should need to be produced just to mediate the interactions of electrons. I have yet to take quantum mechanics; can someone give a brief discussion of the conservation, creation and destruction of photons, and how we know they exist?
4. And furthermore, why does it make sense to have a quantized bit of energy called a "photon"?

## closed as not constructive by zhermes, Waffle's Crazy Peanut, Emilio Pisanty, Manishearth♦ May 15 '13 at 7:56

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. If this question can be reworded to fit the rules in the help center, please edit the question.

"it doesn't seem to make sense that some sort of particle should need to be produced just to mediate the interactions of electrons" - why not? Something should mediate those interactions, because distant interactions do not agree with the relativity of simultaneity, part of SR. And being produced and absorbed is not a bad way to do it. There's a great book, Feynman's QED: The Strange Theory of Light and Matter; Feynman explains it the best. And the last thing: a photon is not a "bit of energy", it is a full-fledged particle, similar to the electron. –  firtree May 13 '13 at 8:09

Re Feynman: if you have a couple of hours you should definitely watch these lectures by the man himself. No one explains it better. –  Michael Brown May 13 '13 at 8:10

Each bullet point should be its own question. Then 2 should probably be closed as duplicate, and 4 as unconstructive.
–  zhermes May 13 '13 at 14:49

As Michael Brown mentioned in the comments, no one will explain this as well as Feynman (at least, no one we know of that's alive). But that doesn't mean your question doesn't deserve at least our attempt. So here is mine, and I will try to keep this in the simplest terms I can. (Aside: to all of the physicists reading this, I apologize in advance; in my simplification, I may intentionally omit or contradict the true physics. For instance, I doubt I'll be saying how the photon arises as a gauge boson in local U(1) symmetry.)

1. See (2)

2. See (1). Just kidding. I've lumped 1 and 2 together because to explain what a photon is is to essentially explain where it comes from. Hopefully, everyone reading this will be aware of the wave-particle duality that most (all) things enjoy. In that way, as mentioned, a photon is a particle in its own right. But does that mean that people can think of the photon as a tiny billiard ball? No, that would be silly. The photon is a wave packet that, for all intents and purposes, is indivisible.

Consider a vibrating electron: its motion one way or the other constitutes a current, which radiates a magnetic field. Since this magnetic field is changing as the electron speeds up and slows down, it induces an electric field that radiates outward. Since this electric field is also changing continuously, this in turn induces a magnetic field that radiates outward. Rinse and repeat. The result is a self-propagating combination of electric and magnetic fields travelling outward from the electron. This is EM radiation.

The photon is the unit of an EM wave. What is one photon? Say we shine a laser, then we block half the beam with a metal plate. The other half still comes through. If there were only one photon in the beam, when you block half of it with the metal plate, none of it would come through. One photon is the largest amount of energy of an EM wave for which this would still be true.
In physics terms, we write it as $E=h\nu$. The energy of one photon of a wave is equal to $h$ (a very small constant) times the wave's frequency. Because the photon is indivisible, we can say it represents the smallest unit of energy of that particular EM wave. A different wave would have a different smallest energy. To address what was mentioned in the comments, a photon may have full particle status, but it is not similar to an electron. Photons have no mass, they are not matter, and, when you examine the properties of a photon, there is no denying that they are packets of energy in every respect: fluctuations in the background EM field.

3. I'll admit, at first glance it does seem very silly for us to say electric and magnetic fields can produce action at a distance and then say, "no, you need photons to cover that distance and actually do the interacting". But it's true. At least, we can say it's true. Without teaching everyone about field theory and symmetries, my short answer to this would be that in advanced physics, we have a certain equation that originally didn't work out. As we've done a million times in the past, we had to modify this equation to work, and we found that we could only do that by introducing a massless particle that mediates the EM force. Afterward, we noticed that this particle happened to have the same properties as a photon. In fact, if we tried the hypothetical situation where we assumed this particle was a photon, this one equation produced all of the laws and equations from electromagnetics that we already knew and loved. Thus, we said, "we're pretty sure this equation is the right one to use. We assumed this particle we invented was a photon, and it resulted in the equations and laws we have in the real universe. So this must be the way it actually is!" Having said that, we can never actually observe the photon as it mediates the force.
This is simply because if we were to observe the photon, it would no longer be able to mediate the force, because we have no method of observing a photon without destroying it.

I have already explained how they are created; how they are destroyed is much simpler. When a photon hits something, it can either reflect, transmit, or be absorbed. The latter destroys it. When it is absorbed, whatever it struck (usually an electron) absorbs all of the energy of that photon. That's it.

How do we know they exist? We know because we can do experiments with just one photon. We can see the effects of one photon. But most importantly, theoreticians say, "if a photon didn't actually exist, what would happen in some experiment? Well Jim, we would see outcome. And if they do exist, we should see a different outcome." Then experimentalists perform the experiment, and 10 times out of 10 we see the outcome predicted by the existence of photons.

4. This particle/wave packet is called a "photon" because it was first theorized specifically about light. I believe "photo" is from the Greek word meaning light, and the extra n was added because all of the particles known at that date ended with an n (proton, electron, neutron; why not photon?).

I'm willing to bet I've missed something important, so let me know. If I can merge what I've missed with the general form of the answer, I'll be happy to put it in.
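To make the $E = h\nu$ relation from the answer concrete, here is a quick back-of-the-envelope calculation; the frequency chosen for green light is an illustrative value, not from the original post:

```python
# Energy of a single photon via E = h * nu.
h = 6.626e-34    # Planck's constant, J*s
nu = 5.40e14     # frequency of green light, Hz (illustrative value)

E_joules = h * nu               # about 3.58e-19 J per photon
E_eV = E_joules / 1.602e-19     # convert joules to electron-volts: about 2.23 eV

print(E_joules, E_eV)
```

A single visible-light photon carries only a couple of electron-volts, which is why individual photons can kick individual electrons (the photoelectric effect) yet are imperceptibly small on everyday scales.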
https://cs.stackexchange.com/questions/105631/graph-layout-fixed-node-locations
# Graph Layout: Fixed node locations I'm aware of a large amount of literature on the problem of graph layout. Usually this involves taking a list of nodes and edges, and choosing locations and paths for both respectively. Are there any algorithms that take graphs where the node locations are already known, and produce paths for the edges?
https://brilliant.org/problems/an-algebra-problem-by-abhay-kumar-33/
# A geometry problem by Abhay Kumar

Geometry Level 3

Find the number of integral values of $k$ for which the equation $7 \cos x + 5 \sin x = 2k + 1$ has a solution.
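Not part of the original page, but a quick numeric check: $7\cos x + 5\sin x$ ranges over $[-\sqrt{74}, \sqrt{74}]$ with $\sqrt{74} \approx 8.6$, so the equation has a solution exactly when $|2k+1| \le \sqrt{74}$.

```python
import math

# 7 cos x + 5 sin x = R cos(x - phi) with R = sqrt(7^2 + 5^2) = sqrt(74),
# so a solution exists iff -sqrt(74) <= 2k + 1 <= sqrt(74).
R = math.sqrt(7**2 + 5**2)
ks = [k for k in range(-20, 21) if abs(2 * k + 1) <= R]
print(len(ks), ks)  # 8 integral values: k = -4, ..., 3
```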
http://www.business-science.io/code-tools/2017/01/21/multidplyr-part2.html
Since my initial post on parallel processing with multidplyr, there have been some recent changes in the tidy ecosystem: namely the package tidyquant, which brings financial analysis to the tidyverse. The tidyquant package drastically increases the amount of tidy financial data we have access to and reduces the amount of code needed to get financial data into the tidy format. The multidplyr package adds parallel processing capability to improve the speed at which analysis can be scaled. I seriously think these two packages were made for each other. I'll go through the same example used previously, updated with the new tidyquant functionality.

# Parallel Processing Applications in Financial Analysis

Collecting financial data in tidy format used to be overly difficult. Getting from xts to tibble was a pain, and there's some amazing tidyverse functionality that cannot be used without the tibble (tidy data frame) format. That all changed with tidyquant. There's a wide range of free data sources, and the tidyquant package makes it super simple to get financial and economic data in tidy format (more on this in a minute).

There's one caveat to collecting data at scale: it takes time. Getting data from the internet, whether historical stock prices, financial statements or real-time statistics, for 500+ stocks can take anywhere from several minutes to 20+ minutes depending on the data size and number of stocks. tidyquant makes it easier to get and analyze data in the correct format, but we need a new tool to speed up the process. Enter multidplyr.

The multidplyr package makes it super simple to parallelize code. It works perfectly in the tidyverse and, by the associative property, works perfectly with tidyquant. The example in this post shows off the new tq_get() function for getting stock prices at scale. However, we can get and scale much more than just stock prices.
tq_get has the following data retrieval options:

• get = "stock.index": This retrieves the entire list of stocks in an index. 18 indexes are available. Use tq_get_stock_index_options() to see the full list.
• get = "stock.prices": This retrieves historical stock prices over a time period specified by to and from. This is the default option, and the one we use in this post.
• get = "key.ratios": This retrieves the key ratios from Morningstar, which are historical values over the past 10 years. There are 89 key ratios ranging from valuation, to growth and profitability, to efficiency. This is a great place to chart business performance over time and to compare the key ratios by company. Great for financial analysis!
• get = "key.stats": This retrieves real-time key stats from Yahoo Finance, which consist of bid, ask, day's high, day's low, change, current P/E valuation, current market cap, and many more up-to-the-minute stats on a stock. This is a great place for the day trader to work since all of the data is accurate as of the second you download it.
• get = "financials": This retrieves the annual and quarterly financial statement data from Google Finance. Great for financial analysis!
• get = "economic.data": This retrieves economic data from the FRED database by FRED code. As of the blog post date, there are 429,000 US and international time-series data sets from 80 sources. All you need is the FRED symbol, such as "CPIAUCSL" for CPI. Visit the FRED website to learn more.
• Other get options: "metal.prices", "exchange.rates", "dividends", and "splits".

There's lots of data you can get using tq_get! The point I want to make is that ANY OF THESE GET OPTIONS CAN BE SCALED USING THE PROCESS I USE NEXT!!!

# Prerequisites

The multidplyr package is not available on CRAN, but you can install it using devtools. Also, install the development version of tidyquant, which has added functionality that will be available on CRAN soon with v0.3.0.
For those following along in R, you'll need to load the multidplyr and tidyquant packages. I also recommend the open-source RStudio IDE, which makes R programming easy and efficient.

# Workflow

The multidplyr workflow can be broken down into five basic steps shown in Figure 1. The five steps are implemented in the Processing in Parallel section below.

Figure 1: multidplyr Workflow

Essentially, you start with some data set that you need to do things to multiple times. Your situation generally falls into one of two types:

1. It could be a really large data set that you want to split up into several small data sets and perform the same thing on each.
2. It could be one data set that you want to perform multiple things on (e.g. apply many models).

The good news is both situations follow the same basic workflow. The toughest part is getting your data in the format needed to process using the workflow. Don't worry, we'll go through a real-world example shortly so you can see how this is accomplished.

# Real World Example

We'll go through the multidplyr workflow using a real-world example that I routinely use: collecting stock prices from the inter-web. Other uses include using modeling functions over grouped data sets, using many models on the same data set, and processing text (e.g. getting n-grams on large corpora). Basically anything with a loop.

## Prep-Work

In preparation for collecting stock prices, we need two things:

1. A list of stocks
2. A function to get stock prices from a stock symbol

Let's see how tidyquant makes this easy.

First, getting a stock index used to be a pain:

Before tidyquant (Don't use this):

Now with tidyquant (Use this!):

Second, getting stock prices in tidy format used to be a pain:

Before tidyquant (Don't use this):

Now with tidyquant (Use this!):

Note that you can replace "stock.prices" with "key.ratios", "key.stats", "financials", etc. to get other financial data for a stock symbol. These options can be scaled as well!
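The R chunks behind the "Before"/"Now" labels did not survive extraction; roughly, the comparison would have looked like this (a sketch against the early-2017 quantmod/tidyquant APIs, whose arguments and defaults may have changed since):

```r
# Before tidyquant: quantmod returns xts objects, which then need manual
# conversion to reach the tidy (tibble) format
library(quantmod)
aapl_xts <- getSymbols("AAPL", from = "2007-01-01", auto.assign = FALSE)

# With tidyquant: tidy tibbles in one call
library(tidyquant)
sp_500 <- tq_get("SP500", get = "stock.index")        # 1. list of stocks
aapl   <- tq_get("AAPL", get = "stock.prices",        # 2. tidy stock prices
                 from = "2007-01-01", to = "2017-01-01")
```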
## Processing In Series

The next computation is the routine that we wish to parallelize, but first we'll time the script running on one processor, looping in series. We are collecting ten years of historical daily stock prices for each of the 500+ stocks. This is now a simple chaining operation with tidyquant, which accepts a tibble of stocks with the symbols in the first column.

The result, sp_500_processed_in_series, is a tibble (tidy data frame) with the stock prices for the 500+ stocks. And, let's see how long it took when processing in series. The processing time is the time elapsed in seconds. Converted to minutes this is approximately 3.71 minutes.

## Processing in Parallel

We just collected ten years of daily stock prices for over 500 stocks in about 3.71 minutes. Let's parallelize the computation to get an improvement. We will follow the five steps shown in Figure 1, plus an optional Step 0.

### Step 0: Get Number of Cores (Optional)

Prior to starting, you may want to determine how many cores your machine has. An easy way to do this is using parallel::detectCores(). This will be used to determine the number of groups to split the data into in the next step.

### Step 1: Add Groups

Let's add groups to sp_500. The groups are needed to divide the data across your cl cores. For me, this is 8 cores. We create a group vector, which is a sequential vector of 1:cl (1 to 8) repeated to match the number of rows in sp_500. We then add the group vector to the sp_500 tibble using the dplyr::bind_cols() function.

### Step 2: Create Clusters

Use the create_cluster() function from the multidplyr package. Think of a cluster as a work environment on a core. Therefore, the code below establishes a work environment on each of the 8 cores.

### Step 3: Partition by Group

Next is partitioning. Think of partitioning as sending a subset of the initial tibble to each of the clusters. The result is a partitioned data frame (party_df), which we explore next.
Use the partition() function from the multidplyr package to split the sp_500 tibble by group and send each group to a different cluster. The result, by_group, looks similar to our original tibble, but it is a party_df, which is very different. The key is to notice that there are 8 shards. Each shard has between 63 and 64 rows, which evenly splits our data across the shards. Now that our tibble has been partitioned into a party_df, we are ready to move on to setting up the clusters.

### Step 4: Setup Clusters

The clusters have a local, bare-bones R work environment, which doesn't work for the vast majority of cases. Code typically depends on libraries, functions, expressions, variables, and/or data that are not available in base R. Fortunately, there is a way to add these items to the clusters. Let's see how.

For our computation, we are going to need to add the tidyquant library and our variables, to and from. We do this by using the cluster_library() and cluster_assign_value() functions, respectively. We can verify that the libraries are loaded using the cluster_eval() function.

### Step 5: Run Parallelized Code

Now that we have our clusters and partitions set up and everything looks good, we can run the parallelized code. The code chunk is a little bit different than before because we need to use purrr to map tq_get() to each stock symbol:

1. Instead of starting with the sp_500 tibble, we start with the by_group party_df.
2. This is how we scale with purrr: we use a combination of dplyr::mutate() and purrr::map() to map our tq_get() function to the stocks.
3. We combine the results at the end using the multidplyr::collect() function. The result is a nested tibble.
4. We unnest with tidyr::unnest() to return the same single-level tibble as before.
5. Finally, we use dplyr::arrange() to arrange the stocks in the same order as before. The collect() function returns the shards (cluster data groups) bound in whatever order the groups finish.
Typically you’ll want to re-arrange. Let’s check out the results And, let’s see how long it took when processing in parallel. The processing time is approximately 0.66 minutes, which is 5.6X faster! Note that it’s not a full 8X faster because of transmission time as data is sent to and from the nodes. With that said, the speed will approach 8X improvement as calculations become longer since the transmission time is fixed whereas the computation time is variable. # Conclusion Parallelizing code can drastically improve speed on multi-core machines. It makes the most sense in situations involving many iterative computations. On an 8 core machine, processing time significantly improves. It will not be quite 8X faster, but the longer the computation the closer the speed gets to the full 8X improvement. For a computation that takes two minutes under normal conditions, we improved the processing speed by over 5X through parallel processing! # Recap The focus of this post was to show how you can implement tidyquant with the multidplyr parallel-processing package for parallel processing financial applications. We worked through the five main steps in the multidplyr workflow using the new tidyquant package, which makes it much easier to get financial data in tidy format. Keep in mind that you can use any tq_get “get” option at scale. 1. Tidyquant Vignettes: This tutorial just scratches the surface of tidyquant. The vignettes explain much, much more! 2. Multidplyr on GitHub: The vignette explains the multidplyr workflow using the flights data set from the nycflights13 package.
https://intuitive-math.club/linear-algebra/inverses/
Inverses

We saw earlier that products of matrices and vectors have a definition that can be understood through the lens of transformation. But what about division by a matrix? The only intuitive definition we can think of might be dividing every coefficient in an $m×n$ matrix, but that would run counter to our understanding of multiplication, which is defined on matrices of size $m×n$ and $n×m$, where each component becomes a dot product of its corresponding row and column.

Instead, we might want to think about division in terms of multiplication. Remember that multiplying any number by a fraction with numerator 1 is a division by the denominator of that fraction. And also recall that any fraction with numerator 1 is the inverse of multiplication by the denominator, in the sense that if we multiply 1 by the denominator, multiplying by the fraction will undo the multiplication and yield 1 again.

$1×a×\frac{1}{a}=1$

For the sake of notational convenience, we usually refer to such fractions as negative exponents, since we can exploit the fact that multiplying two numbers together adds their powers and the sum of 1 and -1 is zero.

$1×{a}^{1}×{a}^{-1}={a}^{0}=1$

Note that not every number has an inverse. In particular, if what we are multiplying by is zero then there is no way to get the original number back, since zero times anything is zero. Expressed in fractional notation, it would not make much sense either:

$1×0×\frac{1}{0}=?$

In those cases, we say that the number has no inverse, since it would be the same thing as dividing by zero.

We can express the same thing for matrices as well: the product of any matrix and its inverse (if the inverse exists) will always be the identity matrix, $I$.

${A}^{-1}×A=I$

So if we can find ${A}^{-1}$, then by premultiplying a matrix by it, we are effectively dividing by $A$. Again, note that not every matrix has an inverse.
Recall that some transformations squash all of space onto a line or a plane: in this case, it would be impossible to find a matrix that would go back to our original space, since multiple points within the space could all have ended up in the same position on the plane. We also said that, at least for this 3x3 transformation, it had an overall volume of zero and thus the determinant was zero. We will see shortly why a zero determinant means that an inverse cannot exist, formally.

Now for computing inverses. We will start with the 2x2 case. In this instance, I am just going to throw down a formula, but be aware that the formula is not a general or even recursive definition for how an inverse is calculated; it is really just a shortcut for the 2x2 case. For a matrix $A=\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)$, the inverse is:

$\frac{1}{\det A}\left(\begin{array}{cc}d& -b\\ -c& a\end{array}\right)$

You would be forgiven for thinking that formula had been pulled out of nowhere, so let's break it down into its components by focusing on an example: a matrix that scales by 2 in both directions and rotates counterclockwise.

$\left[\begin{array}{cc}0& 2\\ -2& 0\end{array}\right]$

Now, consider just multiplying the transformed basis vectors by the rearranged matrix we saw earlier. Does this get us back to the standard basis vectors?

$\left[\begin{array}{cc}0& -2\\ 2& 0\end{array}\right]$

Unfortunately that did not quite do what we wanted. We rotated back to our basis vectors, but now they are four times as big! The problem with just rearranging the matrix like this is that it did not undo the change in the area; in fact, the determinant of the transformation was preserved, so by multiplying by the new matrix we just made space twice more expanded. This is the reason why, in order to find the inverse, we also need to undo the change in the determinant by scaling by the inverse of the determinant.
$\frac{1}{0×0-2×-2}×\left(\begin{array}{cc}0& -2\\ 2& 0\end{array}\right)=\frac{1}{4}×\left(\begin{array}{cc}0& -2\\ 2& 0\end{array}\right)$

Now, if we do that, the transformation takes us back to our basis vectors. Note that the fact that we multiply by the inverse of the determinant means that some matrices are by definition non-invertible, because we cannot multiply by the inverse of zero.

Now, if this section followed the others, it would give a generalized technique for extending this knowledge into multiple dimensions. Unfortunately, there is no equally simple shortcut formula for general $n×n$ matrices. Instead, we will explore a more algorithmic approach using Elementary Row Operations.

Recall that by using Elementary Row Operations we were able to take a complex-looking transformation and reduce it down to the identity matrix. If we were able to represent those operations as a matrix, that matrix would be the inverse of the original matrix. Consider the same 3x3 transformation we row-reduced above:

$\left[\begin{array}{ccc}1& -1& 0\\ -1& -1& 0\\ 2& 1& 2\end{array}\right]$

We will row-reduce it again, but this time we will apply the same operations to the identity matrix and observe how that transformation affects an already transformed unit cube. So, side by side:

$\left[\begin{array}{ccc}1& -1& 0\\ -1& -1& 0\\ 2& 1& 2\end{array}\right]$ $\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]$

First, subtract 2 times the first row from the third.

$\left[\begin{array}{ccc}1& -1& 0\\ -1& -1& 0\\ 0& 3& 2\end{array}\right]$ $\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ -2& 0& 1\end{array}\right]$

Now subtract the second row from the first.

$\left[\begin{array}{ccc}2& 0& 0\\ -1& -1& 0\\ 0& 3& 2\end{array}\right]$ $\left[\begin{array}{ccc}1& -1& 0\\ 0& 1& 0\\ -2& 0& 1\end{array}\right]$

Then add half of the first row to the second.
$\left[\begin{array}{ccc}2& 0& 0\\ 0& -1& 0\\ 0& 3& 2\end{array}\right]$$\left[\begin{array}{ccc}1& -1& 0\\ 0.5& 0.5& 0\\ -2& 0& 1\end{array}\right]$

Add three times the second row to the third. $\left[\begin{array}{ccc}2& 0& 0\\ 0& -1& 0\\ 0& 0& 2\end{array}\right]$$\left[\begin{array}{ccc}1& -1& 0\\ 0.5& 0.5& 0\\ -0.5& 1.5& 1\end{array}\right]$

Clean up the matrix by multiplying the first row by $\frac{1}{2}$, the second row by $-1$ and the third row by $\frac{1}{2}$. $\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]$$\left[\begin{array}{ccc}0.5& -0.5& 0\\ -0.5& -0.5& 0\\ -0.25& 0.75& 0.5\end{array}\right]$

And with that, we have found the inverse.
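The side-by-side row reduction above can be sketched as a small Gauss-Jordan routine (again a Python illustration, not part of the original text). It augments the matrix with the identity, reduces the left half to the identity, and reads the inverse off the right half; partial pivoting is added for numerical robustness, so the exact sequence of operations differs from the hand-worked steps even though the result is the same:

```python
def inverse(matrix):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(matrix)
    # Augment each row of A with the corresponding row of the identity.
    aug = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular; no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# The 3x3 example worked through above.
A = [[1, -1, 0], [-1, -1, 0], [2, 1, 2]]
print(inverse(A))  # ~ [[0.5, -0.5, 0], [-0.5, -0.5, 0], [-0.25, 0.75, 0.5]]
```

The printed result matches the inverse obtained by hand, up to floating-point rounding.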
http://mathhelpforum.com/calculus/183098-laws-changing-integrating-orger.html
# Math Help - laws for changing integration order

1. ## laws for changing integration order

$\int_{0}^{2}\int_{y^{2}}^{\sqrt[3]{y^{2}}}ye^{x^{2}}dxdy$ we solve it by changing the order of integration. at this integral y goes from 0 till 2, and x goes from $x=y^{2}$ till $x=\sqrt[3]{y^{2}}$. now we want to integrate first by y, then by x. so when we integrate by y, our intervals are $y=\sqrt{x}$ till $y=\sqrt{x^{3}}$. what are the intervals when we integrate by x?

2. ## Re: laws for changing integration order

Originally Posted by transgalactic
$\int_{0}^{2}\int_{y^{2}}^{\sqrt[3]{y^{2}}}ye^{x^{2}}dxdy$ we solve it by changing the order of integration. at this integral y goes from 0 till 2, and x goes from $x=y^{2}$ till $x=\sqrt[3]{y^{2}}$. now we want to integrate first by y, then by x. so when we integrate by y, our intervals are $y=\sqrt{x}$ till $y=\sqrt{x^{3}}$. what are the intervals when we integrate by x?

Did you draw the region defined by the terminals of the given double integral? It should then be obvious.

3. ## Re: laws for changing integration order

if i draw it then it will be obvious, but i cannot draw every formula, and i don't want to remember every case. how do i solve it mathematically, without using a graph?

4. ## Re: laws for changing integration order

Originally Posted by transgalactic
$\int_{0}^{2}\int_{y^{2}}^{\sqrt[3]{y^{2}}}ye^{x^{2}}dxdy$ we solve it by changing the order of integration. at this integral y goes from 0 till 2, and x goes from $x=y^{2}$ till $x=\sqrt[3]{y^{2}}$. now we want to integrate first by y, then by x. so when we integrate by y, our intervals are $y=\sqrt{x}$ till $y=\sqrt{x^{3}}$. what are the intervals when we integrate by x?

Your terminals tell you $\displaystyle y^2 \leq x \leq \sqrt[3]{y^2}$ and $\displaystyle 0 \leq y \leq 2$. From the second inequality $\displaystyle 0 \leq y \leq 2 \implies 0 \leq y^2 \implies 0 \leq x$ and $\displaystyle 0 \leq y \leq 2 \implies \sqrt[3]{y^2} \leq \sqrt[3]{4} \implies x \leq \sqrt[3]{4}$.
So you have new $\displaystyle x$ terminals $\displaystyle 0 \leq x \leq \sqrt[3]{4}$. Now from the first inequality $\displaystyle y^2 \leq x \leq \sqrt[3]{y^2} \implies y \leq \sqrt{x}$ and $\displaystyle y^2 \leq x \leq \sqrt[3]{y^2} \implies \sqrt{x^3} \leq y$. So you now have $\displaystyle y$ terminals $\displaystyle \sqrt{x^3} \leq y \leq \sqrt{x}$. Therefore $\displaystyle \int_0^2{\int_{y^2}^{\sqrt[3]{y^2}}{y\,e^{x^2}\,dx}\,dy} = \int_0^{\sqrt[3]{4}}{\int_{\sqrt{x^3}}^{\sqrt{x}}{y\,e^{x^2}\,dy}\,dx}$

5. ## Re: laws for changing integration order

cool proof, thanks
https://chem.libretexts.org/Courses/Athabasca_University/Chemistry_350%3A_Organic_Chemistry_I/Chapter_15%3A_Benzene_and_Aromaticity/15.02_Structure_and_Stability_of_Benzene
# 15.2 Structure and Stability of Benzene

Objectives

After completing this section, you should be able to

1. compare the reactivity of a typical alkene with that of benzene.
2. use the heat of hydrogenation data to show that benzene is more stable than might be expected for “cyclohexatriene.”
3. state the length of the carbon-carbon bonds in benzene, and compare this length with those of bonds found in other hydrocarbons.
4. describe the geometry of the benzene molecule.
5. describe the structure of benzene in terms of resonance.
6. describe the structure of benzene in terms of molecular orbital theory.
7. draw a molecular orbital diagram for benzene.

Key Terms

Make certain that you can define, and use in context, the key term below.

• degenerate

Study Notes

You may wish to review Sections 1.5 and 14.1 before you begin to study this section. Note that the figure showing the molecular orbitals of benzene has two bonding (π2 and π3) and two anti-bonding (π4* and π5*) orbital pairs at the same energy levels. Orbitals with the same energy are described as degenerate orbitals.

Among the many distinctive features of benzene, its aromaticity is the major contributor to why it is so unreactive. This section will try to clarify the theory of aromaticity and why aromaticity gives unique qualities that make these conjugated alkenes inert to compounds such as Br2 and even hydrochloric acid. It will also go into detail about the unusually large resonance energy due to the six conjugated carbons of benzene. The delocalization of the p-orbital electrons over the sp2-hybridized carbons is what gives benzene its aromatic qualities. This diagram shows one of the molecular orbitals containing two of the delocalized electrons, which may be found anywhere within the two "doughnuts". The other molecular orbitals are almost never drawn.

• Benzene ($$C_6H_6$$) is a planar molecule containing a ring of six carbon atoms, each with a hydrogen atom attached.
• The six carbon atoms form a perfectly regular hexagon. All of the carbon-carbon bonds have exactly the same lengths - somewhere between single and double bonds.
• There are delocalized electrons above and below the plane of the ring, which makes benzene particularly stable.
• Benzene resists addition reactions because those reactions would involve breaking the delocalization and losing that stability.

Because of the aromaticity of benzene, the molecule is planar, with each C-C bond 1.39 Å in length and each bond angle 120°. You might ask how it is possible for all of the bonds to be the same length when the ring is conjugated with both single (1.47 Å) and double (1.34 Å) bonds, but it is important to note that there are no distinct single or double bonds within benzene. Rather, the delocalization of the ring makes each count as one and a half bonds between the carbons, which makes sense because experimentally we find that the actual bond length is somewhere in between a single and a double bond. Finally, there are a total of six p-orbital electrons that form the stabilizing electron clouds above and below the aromatic ring.

If benzene is forced to react by increasing the temperature and/or by addition of a catalyst, it undergoes substitution reactions rather than the addition reactions that are typical of alkenes. This further confirms the previous indication that the six-carbon benzene core is unusually stable to chemical modification. The conceptual contradiction presented by a high degree of unsaturation (low H:C ratio) and high chemical stability for benzene and related compounds remained an unsolved puzzle for many years. Eventually, the presently accepted structure of a regular-hexagonal, planar ring of carbons was adopted, and the exceptional thermodynamic and chemical stability of this system was attributed to resonance stabilization of a conjugated cyclic triene.
## The High Stability of Benzene Here, two structurally and energetically equivalent electronic structures for a stable compound are written, but no single structure provides an accurate or even an adequate representation of the true molecule. The six-membered ring in benzene is a perfect hexagon (all carbon-carbon bonds have an identical length of 1.40 Å). The cyclohexatriene contributors would be expected to show alternating bond lengths, the double bonds being shorter (1.34 Å) than the single bonds (1.54 Å). An alternative representation for benzene (circle within a hexagon) emphasizes the pi-electron delocalization in this molecule, and has the advantage of being a single diagram. In cases such as these, the electron delocalization described by resonance enhances the stability of the molecules, and compounds composed of such molecules often show exceptional stability and related properties. Evidence for the enhanced thermodynamic stability of benzene was obtained from measurements of the heat released when double bonds in a six-carbon ring are hydrogenated (hydrogen is added catalytically) to give cyclohexane as a common product. In the following diagram cyclohexane represents a low-energy reference point. Addition of hydrogen to cyclohexene produces cyclohexane and releases heat amounting to 28.6 kcal per mole. If we take this value to represent the energy cost of introducing one double bond into a six-carbon ring, we would expect a cyclohexadiene to release 57.2 kcal per mole on complete hydrogenation, and 1,3,5-cyclohexatriene to release 85.8 kcal per mole. These heats of hydrogenation would reflect the relative thermodynamic stability of the compounds. In practice, 1,3-cyclohexadiene is slightly more stable than expected, by about 2 kcal, presumably due to conjugation of the double bonds. Benzene, however, is an extraordinary 36 kcal/mole more stable than expected. 
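The energy bookkeeping in the preceding paragraph is simple enough to spell out explicitly. In this Python sketch (added for illustration; the measured benzene value of about 49.8 kcal/mol is inferred from the 85.8 and 36 kcal figures in the text rather than stated there directly):

```python
# Heats of hydrogenation (kcal/mol), following the text's reasoning.
cyclohexene = 28.6                 # one double bond -> cyclohexane
expected_diene = 2 * cyclohexene   # hypothetical unconjugated diene: 57.2
expected_triene = 3 * cyclohexene  # hypothetical "cyclohexatriene": 85.8
observed_benzene = 49.8            # measured value implied by the 36 kcal gap
resonance_energy = expected_triene - observed_benzene
print(resonance_energy)            # about 36 kcal/mol of extra stabilization
```

The 36 kcal/mol difference between the hypothetical triene and real benzene is the resonance (aromatic stabilization) energy discussed above.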
This sort of stability enhancement is now accepted as a characteristic of all aromatic compounds. A molecular orbital description of benzene provides a more satisfying and more general treatment of "aromaticity". We know that benzene has a planar hexagonal structure in which all the carbon atoms are sp2 hybridized, and all the carbon-carbon bonds are equal in length. As shown below, the remaining cyclic array of six p-orbitals ( one on each carbon) overlap to generate six molecular orbitals, three bonding and three antibonding. The plus and minus signs shown in the diagram do not represent electrostatic charge, but refer to phase signs in the equations that describe these orbitals (in the diagram the phases are also color coded). When the phases correspond, the orbitals overlap to generate a common region of like phase, with those orbitals having the greatest overlap (e.g. π1) being lowest in energy. The remaining carbon valence electrons then occupy these molecular orbitals in pairs, resulting in a fully occupied (6 electrons) set of bonding molecular orbitals. It is this completely filled set of bonding orbitals, or closed shell, that gives the benzene ring its thermodynamic and chemical stability, just as a filled valence shell octet confers stability on the inert gases. ## Exercises #### Questions Q15.2.1 The molecule shown, p-methylpyridine, has similar properties to benzene (flat, 120° bond angles). Draw the pi-orbitals for this compound. #### Solutions S15.2.1 The nitrogen has a lone pair of electrons perpendicular to the ring.
https://math.stackexchange.com/questions/2448270/an-elementary-proof-of-int-01-frac-arctan-x-sqrtx1-x2-dx-fra
# An elementary proof of $\int_{0}^{1}\frac{\arctan x}{\sqrt{x(1-x^2)}}\,dx = \frac{1}{32}\sqrt{2\pi}\,\Gamma\left(\tfrac{1}{4}\right)^2$ When playing with the complete elliptic integral of the first kind and its Fourier-Legendre expansion, I discovered that a consequence of $\sum_{n\geq 0}\binom{2n}{n}^2\frac{1}{16^n(4n+1)}=\frac{1}{16\pi^2}\,\Gamma\left(\frac{1}{4}\right)^4$ is: $$\int_{0}^{1}\frac{\arctan x}{\sqrt{x(1-x^2)}}\,dx = \tfrac{1}{32}\sqrt{2\pi}\,\Gamma\left(\tfrac{1}{4}\right)^2\tag{A}$$ which might be regarded as a sort of Ahmed's integral under steroids. I already have a proof of this statement (through Fourier-Legendre expansions), but I would be happy to see a more direct and elementary proof of it, also because it might have some consequences about the moments of $K(x)$ of the form $\int_{0}^{1}K(x)\,x^{m\pm 1/4}\,dx$, which are associated with peculiar hypergeometric functions. • I guess I found it: the trick is just to enforce the substitution $$x \mapsto \frac{1-t}{1+t}.$$ Sep 27 '17 at 23:08 • The LHS turns out to be a multiple of a Beta function and we are done. Sep 27 '17 at 23:08 • ...you know, you should let some of us have a chance to answer your questions before you do... =P Sep 27 '17 at 23:12 • @SimplyBeautifulArt: sorry, I didn't do it on purpose, I just realized it a few minutes after writing the question. I guess that happens, quite often :) Sep 27 '17 at 23:13 • :'( welp... 
guess we shall await for your self-answer and hopefully some nice alternative proofs (which may be a suitable tag) Sep 27 '17 at 23:16 ## 1 Answer A possible way is to enforce the substitution $x\mapsto\frac{1-t}{1+t}$, giving: $$\mathfrak{I}=\int_{0}^{1}\frac{\arctan(x)}{\sqrt{x(1-x^2)}}\,dx = \int_{0}^{1}\frac{\tfrac{\pi}{4}-\arctan t}{\sqrt{t(1-t^2)}}\,dt$$ and $$2\mathfrak{I} = \frac{\pi}{4}\int_{0}^{1} x^{-1/2}(1-x^2)^{-1/2}\,dx =\tfrac{\pi}{8}\,B\left(\tfrac{1}{4},\tfrac{1}{2}\right).$$ • If I may, can I ask what was your line of thinking that made you realize, "You know what, substituting $x=(1-t)/(1+t)$ is the perfect way to evaluate this problem!" I fail to see how someone even gets there in the first place. May 16 '18 at 23:49 • @FrankW.: the geometry of the arctangent function made me realize it. $\arctan\left(\frac{1-t}{1+t}\right)$ is a nice object; indeed the substitution $x=\frac{1-t}{1+t}$ removes the arctangent from the integrand function. Given the relation between the arctangent and the logarithm, this is more or less the same thing as $$\int_{0}^{+\infty}\frac{\log(x)}{p(x)}\,dx=0$$ for any quadratic and palindromic polynomial $p(x)$, non-vanishing over $\mathbb{R}^+$. May 17 '18 at 0:02 • Okay... but how did you know that the denominator would stay the same? I can see how you would arrive at the substitution for the arctan function, but it seems kind of coincidental that the denominator was unchanged. Jun 3 '18 at 23:40
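As a numeric sanity check (an editor's aside, not part of the original thread): the substitution $x=\sin^2\theta$ removes both endpoint singularities of the integrand, after which a plain composite Simpson rule reproduces the closed form $\tfrac{1}{32}\sqrt{2\pi}\,\Gamma\left(\tfrac14\right)^2$:

```python
from math import atan, sin, sqrt, pi, gamma

# After x = sin(t)^2, the integrand atan(x)/sqrt(x(1-x^2)) dx becomes
# 2*atan(sin(t)^2)/sqrt(1 + sin(t)^2) dt on [0, pi/2] -- smooth, so
# composite Simpson converges quickly.
def integrand(t):
    s2 = sin(t) ** 2
    return 2.0 * atan(s2) / sqrt(1.0 + s2)

def simpson(f, a, b, n=2000):          # n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

numeric = simpson(integrand, 0.0, pi / 2)
closed = sqrt(2 * pi) * gamma(0.25) ** 2 / 32
print(numeric, closed)   # both ~1.0297
```

This is of course only a check that the identity (A) is numerically consistent, not a proof.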
https://dmoj.ca/problem/ecoo15r3p2
## ECOO '15 R3 P2 - Tiling Trouble

Points: 7 (partial)
Time limit: 2.0s
Memory limit: 64M
Problem type

A grid is subdivided into right-angled triangles with an arbitrary base (horizontal) and height (vertical). In the images below, the base of each triangle is and the height is . Each triangular tile is made of a special color-changing material. By "reversing the polarity" of a tile you can trigger a color change. Given a starting point (the and coordinate of the right angle of a tile) and measurements in the horizontal and vertical directions, larger shapes can be specified. You can make right-angled triangles and rectangles in this way. The diagram to the right shows a right-angled triangle at point with base and height of tiles, and a rectangle at point with a base and a height of tile. Note the shaded tiles that are completely covered by the shapes. If we reverse the polarity on those tiles we will change the color. As we continue to define shapes we will continue to reverse the polarity on all tiles that lie completely within the shapes, and we will change their colors. The input will contain test cases. The first line of each test case consists of two integers and that specify the base and height of the right-angled triangle tiles on the plane. lies on the X axis while lies on the Y axis. The second line of the test case is the sequence of colors (e.g., Red, Blue, SunshineYellow, etc.). The sequence will contain between and colors. Each color is a single word and the colors are each separated by a single space. Each tile starts with the first color and changes to the next color in the sequence each time its polarity is reversed. If this happens enough times, the color sequence will restart from the beginning. Each color in the list will be different from the color immediately preceding it. Tiles share their edges, so reversing the polarity of either tile along an edge changes the color of the points on the edge.
The third line of each test case is an integer , that specifies the number of shapes you will use for reversing the polarity of the tiles. The next lines will each consist of a shape described in one of the two following ways: • T X Y Tb = Triangle with the right angle at point and a base of tiles, where and • R X Y W H = Rectangle with a corner at point , a width of tiles and a height of tiles, where and The last lines of a test case will each consist of two integers and representing a point on the grid. For each unique color in the sequence, you must output the number of points that will end up that color after the appropriate tiles have had their polarities reversed once for each of the shapes in the list. The colors and counts should be printed on a single line using the exact format shown below. The order in which you output the colors on each line is not important. Note that in the sample input, there are test cases, each with points but the real data will have test cases with points each. #### Sample Input 1 3 White Black 10 R 78 66 19 2 T 18 96 8 T 2 78 1 R 87 78 66 2 R 33 45 55 11 R 50 36 8 3 T 59 72 40 R 19 15 67 6 T 51 54 17 R 98 54 16 28 123 60 33 64 196 177 99 145 88 162 3 3 White Black 10 R 45 63 8 23 R 42 18 19 21 R 45 42 28 30 R 18 48 19 12 R 3 63 22 4 T 75 72 28 T 84 72 22 T 42 0 22 T 78 57 29 R 24 36 25 6 108 44 24 164 177 40 122 82 115 37 3 2 White Black 10 T 75 44 27 R 12 36 6 26 T 21 12 4 T 3 38 22 T 45 40 17 T 75 32 30 T 33 80 1 R 6 42 19 49 T 15 92 24 R 72 66 5 16 172 120 195 82 61 99 140 169 132 155 #### Sample Output Black:1 White:4 Black:1 White:4 Black:0 White:5 Educational Computing Organization of Ontario - statements, test data and other materials can be found at ecoocs.org
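The color-cycling rule described above (each polarity reversal advances a tile one step through the color sequence, wrapping back to the start) reduces to modular arithmetic. A minimal sketch, not an official solution, with hypothetical names:

```python
def final_color(colors, reversals):
    """Color of a tile after `reversals` polarity flips.

    Tiles start on colors[0]; each flip advances one step through the
    sequence, wrapping back to the start when it runs out.
    """
    return colors[reversals % len(colors)]

# A tile covered by 3 of the shapes, with the two-color sequence
# from the sample input:
print(final_color(["White", "Black"], 3))  # Black
```

A full solution would additionally have to count, for every queried point, how many of the given triangles and rectangles completely cover the tiles touching that point.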
https://math.eretrandre.org/tetrationforum/showthread.php?tid=396&pid=4369&mode=threaded
Thread Rating: • 0 Vote(s) - 0 Average • 1 • 2 • 3 • 4 • 5 Transseries, nest-series, and other exotic series representations for tetration Daniel Fellow Posts: 91 Threads: 33 Joined: Aug 2007 12/01/2009, 02:56 AM (11/29/2009, 09:38 AM)bo198214 Wrote: (11/29/2009, 09:09 AM)Daniel Wrote: I can compute a nested series for the fractional iterates of $e^x-1$, but I don't claim the series converges. I think the series is a formal power series. It is interesting to know that the series is Borel-summable. So your Schröder sums compute the regular iteration, is that true? I think it is very important to know those equalities. For example it took a while until I realized that the matrix approach introduced by Gottfried is actually equal to the regular iteration. As a test, e.g. the regular half-iterate of $e^x-1$ has as the first 10 coefficients: $0$, $1$, $\frac{1}{4}$, $\frac{1}{48}$, $0$, $\frac{1}{3840}$, $-\frac{7}{92160}$, $\frac{1}{645120}$, $\frac{53}{3440640}$, $-\frac{281}{30965760}$ Or generally the $t$-th iterate has as the first 10 coefficients $0$, $1$, $\frac{1}{2} t$, $\frac{1}{4} t^{2} - \frac{1}{12} t$, $\frac{1}{8} t^{3} - \frac{5}{48} t^{2} + \frac{1}{48} t$, $\frac{1}{16} t^{4} - \frac{13}{144} t^{3} + \frac{1}{24} t^{2} - \frac{1}{180} t$, $\frac{1}{32} t^{5} - \frac{77}{1152} t^{4} + \frac{89}{1728} t^{3} - \frac{91}{5760} t^{2} + \frac{11}{8640} t$, $\frac{1}{64} t^{6} - \frac{29}{640} t^{5} + \frac{175}{3456} t^{4} - \frac{149}{5760} t^{3} + \frac{91}{17280} t^{2} - \frac{1}{6720} t$, $\frac{1}{128} t^{7} - \frac{223}{7680} t^{6} + \frac{1501}{34560} t^{5} - \frac{37}{1152} t^{4} + \frac{391}{34560} t^{3} - \frac{43}{32256} t^{2} - \frac{11}{241920} t$, $\frac{1}{256} t^{8} - \frac{481}{26880} t^{7} + \frac{2821}{82944} t^{6} - \frac{13943}{414720} t^{5} + \frac{725}{41472} t^{4} - \frac{2357}{580608} t^{3} + \frac{17}{107520} t^{2} + \frac{29}{1451520} t$, Does that match your findings? 
I think these formulas are completely derivable from integer iteration. If one knows that each coefficient is just a polynomial in $t$, then that polynomial is determined by degree-plus-one values of $t$, and these can be gained from just so many consecutive integer values. So this sounds really like your Schröder summation. However an alternative approach is just to solve the equation $f^{\circ t}\circ f=f\circ f^{\circ t}$ for $f^{\circ t}$, where $f$ and $f^{\circ t}$ are treated as formal power series.

Yes, this does match my findings. See Hierarchies of Height n at http://tetration.org/Combinatorics/Schro...index.html to see the results of my derivation. Note: multiply my terms by 1/n! to get your terms. I agree there are alternate ways to iterate $e^x-1$; there are at least three ways I know of from my own research. Schroeder summations are not an efficient way to iterate $e^x-1$: it requires 2312 summations to evaluate the tenth term. What they do is show that there is a combinatorial structure underlying all iterated functions, Schroeder's Fourth Problem http://www.research.att.com/~njas/sequences/A000311 . Also Schroeder summations are produced using Faà di Bruno's formula, which is an example of a Hopf algebra, important in several areas of quantum field theory including renormalization. It is my hope that this might shine some light on how to show that our formulations of iterated functions and tetration are actually convergent.
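[Editorial aside, not part of the thread: the listed half-iterate coefficients are easy to check by formal composition — composing the series with itself should reproduce $e^x-1 = \sum_{n\geq 1} x^n/n!$ term by term. A sketch with exact rational arithmetic, truncating past $x^5$:]

```python
from fractions import Fraction as F

N = 6  # work with power series truncated past x^5

# Claimed half-iterate of e^x - 1: coefficients of x^0..x^5 from the post.
f = [F(0), F(1), F(1, 4), F(1, 48), F(0), F(1, 3840)]

def mul(p, q):
    """Truncated product of two power series (lists of coefficients)."""
    out = [F(0)] * N
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            if i + j < N:
                out[i + j] += ci * cj
    return out

# Compose f(f(x)) = sum_k f[k] * (f(x))^k, truncated at x^5.
g = [F(0)] * N
power = [F(1)] + [F(0)] * (N - 1)  # f(x)^0
for k in range(N):
    g = [gi + f[k] * pi for gi, pi in zip(g, power)]
    power = mul(power, f)

print(g)  # coefficients 0, 1, 1/2, 1/6, 1/24, 1/120, i.e. e^x - 1
```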
https://bookstore.ams.org/view?ProductCode=STML/78
A User-Friendly Introduction to Lebesgue Measure and Integration
Gail S. Nelson
Carleton College, Northfield, MN

Available Formats:
Softcover ISBN: 978-1-4704-2199-1, Product Code: STML/78, List Price: $52.00, Individual Price: $41.60
Electronic ISBN: 978-1-4704-2737-5, Product Code: STML/78.E, List Price: $49.00, Individual Price: $39.20
Bundle Print and Electronic Formats and Save! Purchasing as a bundle enables you to save on the electronic version. List Price: $78.00

• Book Details

Student Mathematical Library, Volume 78; 2015; 221 pp; MSC: Primary 26; 28

A User-Friendly Introduction to Lebesgue Measure and Integration provides a bridge between an undergraduate course in Real Analysis and a first graduate-level course in Measure Theory and Integration. The main goal of this book is to prepare students for what they may encounter in graduate school, but will be useful for many beginning graduate students as well. The book starts with the fundamentals of measure theory that are gently approached through the very concrete example of Lebesgue measure.
With this approach, Lebesgue integration becomes a natural extension of Riemann integration. Next, $L^p$-spaces are defined. Then the book turns to a discussion of limits, the basic idea covered in a first analysis course. The book also discusses in detail such questions as: When does a sequence of Lebesgue integrable functions converge to a Lebesgue integrable function? What does that say about the sequence of integrals? Another core idea from a first analysis course is completeness. Are these $L^p$-spaces complete? What exactly does that mean in this setting? This book concludes with a brief overview of General Measures. An appendix contains suggested projects suitable for end-of-course papers or presentations. The book is written in a very reader-friendly manner, which makes it appropriate for students of varying degrees of preparation, and the only prerequisite is an undergraduate course in Real Analysis. Undergraduate and graduate students and researchers interested in learning and teaching real analysis. • Chapters • Chapter 0. Review of Riemann integration • Chapter 1. Lebesgue measure • Chapter 2. Lebesgue integration • Chapter 3. $L^p$ spaces • Chapter 4. General measure theory • Ideas for projects • Request Review Copy • Get Permissions Volume: 782015; 221 pp MSC: Primary 26; 28; A User-Friendly Introduction to Lebesgue Measure and Integration provides a bridge between an undergraduate course in Real Analysis and a first graduate-level course in Measure Theory and Integration. The main goal of this book is to prepare students for what they may encounter in graduate school, but will be useful for many beginning graduate students as well. The book starts with the fundamentals of measure theory that are gently approached through the very concrete example of Lebesgue measure. With this approach, Lebesgue integration becomes a natural extension of Riemann integration. Next, $L^p$-spaces are defined. 
Then the book turns to a discussion of limits, the basic idea covered in a first analysis course. The book also discusses in detail such questions as: When does a sequence of Lebesgue integrable functions converge to a Lebesgue integrable function? What does that say about the sequence of integrals? Another core idea from a first analysis course is completeness. Are these $L^p$-spaces complete? What exactly does that mean in this setting? This book concludes with a brief overview of General Measures. An appendix contains suggested projects suitable for end-of-course papers or presentations. The book is written in a very reader-friendly manner, which makes it appropriate for students of varying degrees of preparation, and the only prerequisite is an undergraduate course in Real Analysis. • Chapter 3. $L^p$ spaces
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/5/lesson/5.5.2/problem/5-167
5-167. Determine the following limits. For each, describe your method.

1. $\lim\limits_{x \rightarrow 1} \frac{\ln x}{x^2 + 1}$

   The denominator does not equal $0$. Evaluate.

2. $\lim\limits_{x \rightarrow 2} \frac{x^3 - 8}{x^2 - 4}$

   $x^3 - 8 = (x - 2)(x^2 + 2x + 4)$

3. $\lim\limits_{x \rightarrow \infty} \frac{\ln x}{\sqrt{x}}$

   Think about the graphs of $y = \ln x$ and $y = \sqrt{x}$. Which grows faster? Is there a horizontal asymptote?

4. $\lim\limits_{x \rightarrow 0} \frac{\sin^2 x}{\sin(x^2)}$

   Use two iterations of l'Hôpital's Rule. The limit is $1$.
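Each of the four limits can be checked symbolically. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')

# (1) the denominator is nonzero at x = 1, so just evaluate: ln(1)/2 = 0
assert sp.limit(sp.ln(x) / (x**2 + 1), x, 1) == 0

# (2) x^3 - 8 = (x - 2)(x^2 + 2x + 4); cancel (x - 2) against x^2 - 4 = (x - 2)(x + 2)
assert sp.limit((x**3 - 8) / (x**2 - 4), x, 2) == 3

# (3) sqrt(x) outgrows ln(x), so the ratio falls to 0
assert sp.limit(sp.ln(x) / sp.sqrt(x), x, sp.oo) == 0

# (4) a 0/0 form; two iterations of l'Hopital's Rule give 1
assert sp.limit(sp.sin(x)**2 / sp.sin(x**2), x, 0) == 1
```

The symbolic results agree with the hinted methods: direct evaluation, factoring, growth-rate comparison, and l'Hôpital's Rule.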
https://web2.0calc.com/questions/i-need-help-with-this_9
# I need help with this.

Suppose a function $f(x)$ has domain $(-\infty,\infty)$ and range $[-11,3]$. If we define a new function $g(x)$ by $g(x) = f(6x)+1$, then what is the range of $g(x)$? Express your answer in interval notation.

Apr 5, 2019

$$f(6x) \text{ has the same range as } f(x)\\ f(6x)+1 \text{ shifts the range of } f(x) \text{ 1 unit in the positive direction}\\ \text{so that the range of } g(x) = f(6x)+1 \text{ is } [-10,4]$$
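A quick numeric illustration of the shift argument, using a hypothetical stand-in for $f$ (a sinusoid scaled to have range $[-11,3]$; any surjection onto that interval would behave the same way):

```python
import math

def f(x):
    # hypothetical stand-in with range [-11, 3]: midpoint -4, amplitude 7
    return -4 + 7 * math.sin(x)

def g(x):
    return f(6 * x) + 1   # horizontal compression does not change the range

values = [g(t / 1000) for t in range(-10000, 10001)]
print(min(values), max(values))  # approximately -10 and 4
```

The sampled minimum and maximum approach $-10$ and $4$, matching the interval $[-10,4]$.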
https://www.rellek.net/book/s_graphalgorithms_exercises.html
12.5 Exercises

1. For the graph in Figure 12.20, use Kruskal's algorithm (“avoid cycles”) to find a minimum weight spanning tree. Your answer should include a complete list of the edges, indicating which edges you take for your tree and which (if any) you reject in the course of running the algorithm.

2. For the graph in Figure 12.20, use Prim's algorithm (“build tree”) to find a minimum weight spanning tree. Your answer should list the edges selected by the algorithm in the order they were selected.

3. For the graph in Figure 12.21, use Kruskal's algorithm (“avoid cycles”) to find a minimum weight spanning tree. Your answer should include a complete list of the edges, indicating which edges you take for your tree and which (if any) you reject in the course of running the algorithm.

4. For the graph in Figure 12.21, use Prim's algorithm (“build tree”) to find a minimum weight spanning tree. Your answer should list the edges selected by the algorithm in the order they were selected.

5. For the graph in Figure 12.22, use Kruskal's algorithm (“avoid cycles”) to find a minimum weight spanning tree. Your answer should include a complete list of the edges, indicating which edges you take for your tree and which (if any) you reject in the course of running the algorithm.

6. For the graph in Figure 12.22, use Prim's algorithm (“build tree”) to find a minimum weight spanning tree. Your answer should list the edges selected by the algorithm in the order they were selected.

7. A new local bank is being created and will establish a headquarters $h\text{,}$ two branches $b_1$ and $b_2\text{,}$ and four ATMs $a_1\text{,}$ $a_2\text{,}$ $a_3\text{,}$ and $a_4\text{.}$ They need to build a computer network such that the headquarters, branches, and ATMs can all intercommunicate. Furthermore, they will need to be networked with the Federal Reserve Bank of Atlanta, $f\text{.}$ The costs of the feasible network connections (in units of \$10,000) are listed below: The bank wishes to minimize the cost of building its network (which must allow for connection, possibly routed through other nodes, from each node to each other node), however due to the need for high-speed communication, they must pay to build the connection from $h$ to $f$ as well as the connection from $b_2$ to $a_3\text{.}$ Give a list of the connections the bank should establish in order to minimize their total cost, subject to this constraint. Be sure to explain how you selected the connections and how you know the total cost is minimized.

8. A disconnected weighted graph obviously has no spanning trees. However, it is possible to find a spanning forest of minimum weight in such a graph. Explain how to modify both Kruskal's algorithm and Prim's algorithm to do this.

10. In the paper where Kruskal's algorithm first appeared, he considered the algorithm a route to a nicer proof that in a connected weighted graph with no two edges having the same weight, there is a unique minimum weight spanning tree. Prove this fact using Kruskal's algorithm.

11. Use Dijkstra's algorithm to find the distance from $a$ to each other vertex in the digraph shown in Figure 12.23 and a directed path of that length.

12. Figure 12.24 contains the length of the directed edge $(x,y)$ in the intersection of row $x$ and column $y$ in a digraph with vertex set $\{a,b,c,d,e,f\}\text{.}$ For example, $w(b,d)=21\text{.}$ (On the other hand, $w(d,b)=10\text{.}$) Use this data and Dijkstra's algorithm to find the distance from $a$ to each of the other vertices and a directed path of that length from $a\text{.}$

13. Use Dijkstra's algorithm to find the distance from $a$ to each other vertex in the digraph shown in Figure 12.25 and a directed path of that length.

14. Figure 12.26 contains the length of the directed edge $(x,y)$ in the intersection of row $x$ and column $y$ in a digraph with vertex set $\{a,b,c,d,e,f\}\text{.}$ For example, $w(b,d)=47\text{.}$ (On the other hand, $w(d,b)=6\text{.}$) Use this data and Dijkstra's algorithm to find the distance from $a$ to each of the other vertices and a directed path of that length from $a\text{.}$

15. Give an example of a digraph having an undirected path between each pair of vertices, but having a root vertex $r$ so that Dijkstra's algorithm cannot find a path of finite length from $r$ to some vertex $x\text{.}$

16. Notice that in our discussion of Dijkstra's algorithm, we required that the edge weights be nonnegative. If the edge weights are lengths and meant to model distance, this makes perfect sense. However, in some cases, it might be reasonable to allow negative edge weights. For example, suppose that a positive weight means there is a cost to travel along the directed edge while a negative edge weight means that you make money for traveling along the directed edge. In this case, a directed path with positive total weight results in paying out to travel it, while one with negative total weight results in a profit.

   1. Give an example to show that Dijkstra's algorithm does not always find the path of minimum total weight when negative edge weights are allowed.

   2. Bob and Xing are considering this situation, and Bob suggests that a little modification to the algorithm should solve the problem. He says that if there are negative weights, they just have to find the smallest (i.e., most negative weight) and add the absolute value of that weight to every directed edge. For example, if $w(x,y)\geq -10$ for every directed edge $(x,y)\text{,}$ Bob is suggesting that they add $10$ to every edge weight. Xing is skeptical, and for good reason. Give an example to show why Bob's modification won't work.
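Exercises 1-6 ask you to run Kruskal's "avoid cycles" rule by hand. As a minimal sketch of the bookkeeping (the graph below is a toy example, not one of the figures), a union-find structure tracks which edges merge components and which would close a cycle:

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree, rejected = [], []
    for w, u, v in sorted(edges):           # consider edges in weight order
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # take the edge: components merge
            tree.append((w, u, v))
        else:
            rejected.append((w, u, v))      # it would close a cycle: reject
    return tree, rejected

# toy example: a 4-cycle with one chord
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]
tree, rejected = kruskal(4, edges)
print(tree, rejected)   # tree has weight 1 + 2 + 3 = 6; the two heaviest edges are rejected
```

The same scan, with the taken/rejected lists written out, is exactly the answer format the exercises request.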
https://brilliant.org/problems/when-you-have-too-many-xs/
When You Have Too Many Ex's, Part 1 (Calculus, Level 4)

What is the slope of the line tangent to the graph of $$\huge{y=x^{x^{x^{x^{x^{.^{.^{.}}}}}}}}$$ at the point $x=\sqrt{2}$? If your answer can be expressed as $p\sqrt{q}\displaystyle \sum_{k=0}^{\infty}\ln^{k}(r)$ for positive, prime integers $(p,q,r)$, find $p+q+r$.

Note: the x's continue to infinity.
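For $x$ in the tower's convergence range, the tower $y$ satisfies $y = x^y$; taking logarithms and differentiating implicitly gives $y' = \frac{y^2}{x(1 - y\ln x)}$. A numeric sketch checking that closed form against a truncated tower (the truncation depth and step size are arbitrary assumptions):

```python
import math

def tower(t, depth=200):
    # truncated power tower t^t^...^t (depth is an assumption)
    y = 1.0
    for _ in range(depth):
        y = t ** y
    return y

def tower_slope(x, h=1e-6):
    return (tower(x + h) - tower(x - h)) / (2 * h)   # central difference

x = math.sqrt(2)
y = tower(x)                                    # at x = sqrt(2) the tower converges to 2
implicit = y**2 / (x * (1 - y * math.log(x)))   # from differentiating y = x^y
print(tower_slope(x), implicit)                 # the two slopes agree
```

Matching the numeric slope to the required $p\sqrt{q}\sum_{k\ge 0}\ln^k(r)$ form is then a matter of recognizing the geometric series hiding in the denominator.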
https://krishnakitchen.org/sapphire-beach/application-of-differential-equation-in-electrical-engineering.php
# Application Of Differential Equation In Electrical Engineering LTI system described by a differential equation. Application of differential equation in hardware belongs to applied physics or electrical engineering. Application of ordinary differential equation in, Electrical Engineering. A fitting example of application of Bernoulli's Equation in a moving reference These were few applications of Bernoulli's Equation.. ### Laplace Transforms Electrical Engineering and Technology LTI system described by a differential equation. Laplace transformation is a technique for solving differential equations. of engineering applications Bode Plot Laplace Transforms Laplace Transform, ANALYSIS AND APPLICATIONS OF LAPLACE /FOURIER TRANSFORMATIONS IN ELECTRIC CIRCUIT Electrical Electronics Engineering, differential equations can be. Physical Problem for Ordinary Differential Equations: Electrical Engineering 08.00F.3 0 5 10 15 20 25 Ordinary Differential Equations-Physical problem-Electrical What are the real-world engineering applications but almost everything in engineering uses differential equations In Electrical Engineering: wave Equations Electrical and Electronics Applications of Differential Equations in Engineering Three-day Faculty Development program on ” Applications of Differential First Order Differential Equations; video lesson we will learn about some Applications of Linear Systems and Linear Models in Business, Science and Engineering. equations, and polar Electrical circuit analysis (exponential functions Engineering Applications in Differential and Integral Calculus 79. 
engineering applications, The preferred and more useful approach to solving differential equations in electrical Engineering: Electrical Engineering 69 Physical Problem for Ordinary Differential Equations: Electrical Engineering 08.00F.3 0 5 10 15 20 25 Ordinary Differential Equations-Physical problem-Electrical Electrical Engineering Stack Exchange is a question and answer site for electronics and LTI system described by a differential equation. Web Applications; Applications of the Laplace Transform are those that require one to solve either some kind of differential equation. most topics in electrical engineering) 2/10/2011 · I am starting my EE courses next semester and I noticed that I have to take Differential Equations first instead of multi-variable calculus like the other Differential Equations for Engineers It introduces each topic with a motivating application drawn from electrical, Differential Equations; Engineering Differential Equations are the language in which the laws of nature are expressed. Understanding properties of solutions of differential equations is fundamental to ... electrical and mechanical engineering, Advanced Partial Differential Equations with Applications Partial Differential Equations with Electrical and Electronics Applications of Differential Equations in Engineering Three-day Faculty Development program on ” Applications of Differential 4/10/2015 · Differential Equations in Electrical Engineering stephanie barros. An application of linear differential equations - Mathematics - Calculus What are the real-world engineering applications but almost everything in engineering uses differential equations In Electrical Engineering: wave Equations LTI system described by a differential equation. Importance of Linear algebra in Engineering Design Methodology and their applications" should receive "increased attention" in differential equation, Ordinary Differential Equations 1. 
phenomena in electrical and mechanical systems, and t since many applications of these type of ODEs are time dependent.. ### Applications of Bernoulli's Equation Brighthub Engineering ENGINEERING PPT Differential Equations Notes PDF. Applications of Differential Equations : Electrical Applications of Differential Equations : Applications of Differential Equations : Electrical Circuits and, Electrical and Electronics Applications of Differential Equations in Engineering Three-day Faculty Development program on ” Applications of Differential. ### Modelling with exact differential equations? Stack Exchange How are differential equations used in Electrical Engineering?. Operational calculus first found applications in electrical engineering problems, solving any linear differential equation is reduced to a purely algebraic problem. https://en.wikipedia.org/wiki/Superposition_principle Differential Equations in Electrical Engineering: Code and numeric methods of solutions of ordinary and partial differential equations. and application.. produce a pure differential equation. 3. ENGINEERING Principles and Applications Chapter 4 Transients In electrical engineering, Differential Equations are the language in which the laws of nature are expressed. 
Understanding properties of solutions of differential equations is fundamental to 12/01/2018 · Problems on Application of Partial Differential Equation IMPORTANT LINKS Complete Free handbook of Electrical Engineering with diagrams and graphs What are the real-world engineering applications but almost everything in engineering uses differential equations In Electrical Engineering: wave Equations APPLICATIONS OF DIFFERENTIAL EQUATIONS department في Faculty of Petroleum and Mining Engineering, Application of differential equation in real Applications of Differential Equations : Electrical Applications of Differential Equations : Applications of Differential Equations : Electrical Circuits and Applied Differential Equations in Electrical Engineering Modeling of systems by ordinary differential equations. Numerical methods and use of application Physical Problem for Industrial Engineering Ordinary Differential Equations which are essentially DC motors are widely used in applications where the speed of a Differential Equations are the language in which the laws of nature are expressed. Understanding properties of solutions of differential equations is fundamental to Applications of Differential Equations in Engineering and Mechanics Electrical Engineering Applications of Differential Equations in Engineering and Mechanics 12/01/2018 · Problems on Application of Partial Differential Equation IMPORTANT LINKS Complete Free handbook of Electrical Engineering with diagrams and graphs Operational calculus first found applications in electrical engineering problems, solving any linear differential equation is reduced to a purely algebraic problem. 
What are the real-world engineering applications but almost everything in engineering uses differential equations In Electrical Engineering: wave Equations Electrical and Electronics Applications of Differential Equations in Engineering Three-day Faculty Development program on ” Applications of Differential Practical applications of first order exact ODE? y\, \partial x} $$Herewith the conditions for an exact differential equation are Electrical Engineering; What are some applications of ODEs and PDEs in Electrical Engineering? My upcoming curriculum requires that Diff. EQ be taken alongside Circuit Analysis and as a pre Applications of Differential Equations in Engineering and Mechanics Electrical Engineering Applications of Differential Equations in Engineering and Mechanics Applications of Differential Equations these equations which model engineering systems. differential equations, modelling simple electrical circuits, ## Chapter 4 Transients Department of Electrical and Laplace Transforms Electrical Engineering and Technology. Electrical Engineering. A fitting example of application of Bernoulli's Equation in a moving reference These were few applications of Bernoulli's Equation., Physical Problem for Industrial Engineering Ordinary Differential Equations which are essentially DC motors are widely used in applications where the speed of a. ### ENGINEERING PPT Differential Equations Notes PDF Applications of Bernoulli's Equation Brighthub Engineering. ... electrical and mechanical engineering, Advanced Partial Differential Equations with Applications Partial Differential Equations with, Physical Problem for Industrial Engineering Ordinary Differential Equations which are essentially DC motors are widely used in applications where the speed of a. 
2/10/2011 · I am starting my EE courses next semester and I noticed that I have to take Differential Equations first instead of multi-variable calculus like the other Applications of Differential Equations : Electrical Applications of Differential Equations : Applications of Differential Equations : Electrical Circuits and First Order Differential Equations; video lesson we will learn about some Applications of Linear Systems and Linear Models in Business, Science and Engineering. Electrical and Electronics Applications of Differential Equations in Engineering Three-day Faculty Development program on ” Applications of Differential Differential Equations with MATLAB: Exploration, Applications, and Theory Electrical Engineering ABSTRACT ORDINARY DIFFERENTIAL EQUATIONS Answer to APPLICATIONS OF SECOND-ORDER DIFFERENTIAL home / study / engineering / electrical engineering Solving the guiding differential equation disciplines and various types of differential equations are motivated by engineering applications; Differential equations, Applications, electrical and ANALYSIS AND APPLICATIONS OF LAPLACE /FOURIER TRANSFORMATIONS IN ELECTRIC CIRCUIT Electrical Electronics Engineering, differential equations can be Differential Equations and Engineering Applications 1.7 Applications to Mechanics, Electrical Differential Equations and Engineering Applications Fall, produce a pure differential equation. 3. ENGINEERING Principles and Applications Chapter 4 Transients In electrical engineering, Application of differential equation in hardware belongs to applied physics or electrical engineering. 
Application of ordinary differential equation in Laplace transforms and it‟s Applications in Engineering Field enables them to solve linear differential equations In the theory of electrical circuits, Answer to APPLICATIONS OF SECOND-ORDER DIFFERENTIAL home / study / engineering / electrical engineering Solving the guiding differential equation Laplace transformation is a technique for solving differential equations. of engineering applications Bode Plot Laplace Transforms Laplace Transform Electrical Engineering Stack Exchange is a question and answer site for electronics and LTI system described by a differential equation. Web Applications; Physical Problem for Industrial Engineering Ordinary Differential Equations which are essentially DC motors are widely used in applications where the speed of a Differential Equations in Electrical Engineering: Code and numeric methods of solutions of ordinary and partial differential equations. and application. 2/10/2011 · I am starting my EE courses next semester and I noticed that I have to take Differential Equations first instead of multi-variable calculus like the other Operational calculus first found applications in electrical engineering problems, solving any linear differential equation is reduced to a purely algebraic problem. Answer to APPLICATIONS OF SECOND-ORDER DIFFERENTIAL home / study / engineering / electrical engineering Solving the guiding differential equation Application of differential equation in hardware belongs to applied physics or electrical engineering. Application of ordinary differential equation in First Order Differential Equations; video lesson we will learn about some Applications of Linear Systems and Linear Models in Business, Science and Engineering. 
Purchase Engineering Mathematics with Examples and Applications and numerical methods of differential equations ; electrical engineering, Differential Equations and Engineering Applications 1.7 Applications to Mechanics, Electrical Differential Equations and Engineering Applications Fall, Applications of Differential Equations : Electrical Applications of Differential Equations : Applications of Differential Equations : Electrical Circuits and Physical Problem for Ordinary Differential Equations: Electrical Engineering 08.00F.3 0 5 10 15 20 25 Ordinary Differential Equations-Physical problem-Electrical equations, and polar Electrical circuit analysis (exponential functions Engineering Applications in Differential and Integral Calculus 79. Differential Equations are the language in which the laws of nature are expressed. Understanding properties of solutions of differential equations is fundamental to First Order Differential Equations; video lesson we will learn about some Applications of Linear Systems and Linear Models in Business, Science and Engineering. Applications of Dynamical Systems in Department of Electrical and Electronic Engineering differential equations is a collection of n interrelated Differential Equations for Engineers It introduces each topic with a motivating application drawn from electrical, Differential Equations; Engineering 12/01/2018 · Problems on Application of Partial Differential Equation IMPORTANT LINKS Complete Free handbook of Electrical Engineering with diagrams and graphs ### Modelling with exact differential equations? Stack Exchange LTI system described by a differential equation. 4/10/2015 · Differential Equations in Electrical Engineering stephanie barros. An application of linear differential equations - Mathematics - Calculus, APPLICATIONS OF DIFFERENTIAL EQUATIONS department في Faculty of Petroleum and Mining Engineering, Application of differential equation in real. 
Snippets on applications of differential equations in electrical engineering:

- Applications of Bernoulli's Equation (Brighthub Engineering).
- Operational calculus first found applications in electrical engineering problems: solving any linear differential equation is reduced to a purely algebraic problem.
- Applications of Differential Equations in Engineering and Mechanics -- electrical engineering applications.
- LTI system described by a differential equation (Electrical Engineering Stack Exchange, a question and answer site for electronics).
- Modelling with exact differential equations in an electrical circuit (Stack Exchange) -- LRC circuits and a few other engineering applications.
- Differential Equations in Electrical Engineering: code and numerical methods for solving ordinary and partial differential equations, and applications. https://en.wikipedia.org/wiki/Superposition_principle
- Applications of second-order differential equations -- solving the governing differential equation (electrical engineering study help).
- The preferred and more useful approach to solving differential equations in electrical engineering applications.
- An Application of Fractional Calculus in Electrical Engineering -- Mohd. Farman Ali, Manoj Sharma and Renu Jain.
- ENGINEERING Principles and Applications, Chapter 4: Transients in electrical engineering.
- Applications of Differential Equations: Electrical Circuits.
- MAE502 Partial Differential Equations in Engineering -- in real-world applications, the heat equation is often used.
- Differential Equations and Engineering Applications, 1.7: Applications to Mechanics and Electrical Circuits.
- Differential Equations for Engineers -- introduces each topic with a motivating application drawn from electrical engineering.
- What are the real-world engineering applications? Almost everything in engineering uses differential equations; in electrical engineering: wave equations.
- What are some applications of ordinary/partial differential equations in electrical engineering?
- Applications of the Laplace transform are those that require one to solve some kind of differential equation (most topics in electrical engineering).
- Recent advances in the application of differential equations (Mathematical Problems in Engineering) -- electrical engineering.
- Engineering Applications in Differential and Integral Calculus, p. 79 -- electrical circuit analysis (exponential functions).
- Practical applications of first-order exact ODEs -- conditions for an exact differential equation (Electrical Engineering).
- Application of differential equations in hardware belongs to applied physics or electrical engineering; application of ordinary differential equations in electrical engineering.
- First Order Differential Equations -- a video lesson on applications of linear systems and linear models in business, science and engineering.
- Differential equations are the language in which the laws of nature are expressed; understanding properties of solutions of differential equations is fundamental.
- Laplace transforms and their applications in the engineering field -- enable solving linear differential equations in the theory of electrical circuits.
- Three-day Faculty Development program on "Applications of Differential Equations in Engineering" (Electrical and Electronics).
- Differential Equations with MATLAB: Exploration, Applications, and Theory -- electrical engineering, ordinary differential equations.
- 12/01/2018: Problems on Application of Partial Differential Equations -- complete free handbook of electrical engineering with diagrams and graphs.
- Analysis and Applications of Laplace/Fourier Transformations in Electric Circuits -- Electrical and Electronics Engineering.
- 4/10/2015: Differential Equations in Electrical Engineering (Stephanie Barros) -- an application of linear differential equations.
- Applications of Differential Equations (Faculty of Petroleum and Mining Engineering) -- application of differential equations in real life.
- Applied Differential Equations in Electrical Engineering -- modeling of systems by ordinary differential equations; numerical methods and use of applications.
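Several of the snippets above concern first-order transients in RC/LRC circuits. As a minimal sketch (not drawn from any of the cited sources; component values are hypothetical), the step response of an RC circuit, governed by RC·dv/dt = Vs − v with v(0) = 0, can be integrated numerically with forward Euler and checked against the closed-form solution v(t) = Vs·(1 − e^(−t/RC)):

```python
import math

def rc_step_response(v_source, r, c, t_end, dt):
    """Forward-Euler integration of RC * dv/dt = v_source - v, with v(0) = 0."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += dt * (v_source - v) / (r * c)
        t += dt
    return v

# Hypothetical component values: 1 kOhm resistor, 1 uF capacitor, 5 V step input.
R, C, VS = 1e3, 1e-6, 5.0
tau = R * C  # time constant, 1 ms

v_num = rc_step_response(VS, R, C, t_end=5 * tau, dt=tau / 1000)
v_exact = VS * (1 - math.exp(-5))  # closed-form value after 5 time constants

print(v_num, v_exact)  # both are approximately 4.966 V
```

With a step size of tau/1000 the explicit Euler scheme stays well within 1 mV of the analytic answer here; for stiff LRC networks an implicit method would be the safer choice.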
http://autumnaskswhy.blogspot.com/
## Friday, March 7, 2014

### Whose Doll Is It Anyways??

I know, I know...so much for getting back to writing. I am happy to say that I finally am over being sick. But Hubs was home for a short rotation so it was fast and furious getting things done around here. And now he's gone again. So this story is actually from two rotations ago. For Christmas, Mouse's big brother gave her an American Girl Store Gift Certificate. Since there is a store in Dallas, which is only about two hours away, and we have family there, Hubs decided he needed to take her so she could spend it. She was foaming at the mouth and had a list ready.....that she had compiled while perusing the online store. I can promise you that neither Hubs nor I was thrilled about this. Especially with the child that can spend an hour (not kidding) in Claire's. I have already had the privilege of going to this massively overcrowded and stupidly expensive store, and Hubs has zero interest in dolls. But off we went. First, Mouse wanted her new American Girl Doll to have her ears pierced. Wait...what? Get a doll's ears pierced? And pay money for it??? Both parents are horrified at the thought....but it wasn't our money so we marched upstairs to get in line. And I do mean line. Like we stood in it for 20 minutes just to get to the front to tell them what we wanted. See, it was done in the beauty salon. Not a people beauty salon you understand..... a DOLL beauty salon. And the sign informed us that the wait (after you get to the front of the line) was currently at two hours. No. Way. We were going to have to wait two hours for this???? Thankfully we found out that all we had to do was make it to the front of the line and they would do ear piercing immediately. No wait beyond that. There is a God. As we stand in line for foreverrrrrrrrrrr, we start people watching. Not like there is anything else to do. While waiting to get a DOLL'S ears pierced. There were 5 ladies behind the counter working away.
I had to wonder what it was like for this to be your employment. "So, what do you do?" Not that I'm saying there is anything wrong with this, beats fast food for sure. There are posters up all over with the prices of the hairstyles. Different dolls had different choices depending on the hair type they have. The ladies clean the hair, trim it back into shape and then style the 'do you paid for. And do they pay. This starts at about $20. I don't pay $20 for Mouse's haircuts, so this was a shock to Hubs' and my system. Are you kidding me??? Standing in line. After 10 minutes Hubs began to squirm. You could see that dazed look enter his eye. The trapped animal panic. So you can imagine the kids around us started doing the same thing. Kids and guys....about the same patience level, right? A couple of people up we noticed a very nicely dressed mom and her beautiful daughter. The girl looked to be about 4. Mom was pushing a stroller with all her shopping bags hanging off it (and we were at the Dallas Galleria folks) and it looked like the girl was her only child with her. I wondered if the child still used the stroller...she was kinda too old for that....or whether mom just still used it as a shopping cart. It is handy like that if you are a power shopper and haven't given the stroller away yet. Or maybe there is a younger sibling somewhere. Anyway -- mom gets to the poster, holding TWO dolls, and asks the daughter which hair-dos she wants. And the child? Not the least bit interested. She had the trapped animal, bored out of my skull look that Hubs was wearing. She was being very well behaved about the whole thing while she stood there, but there was no interest. She wouldn't even give mom an answer....just a lazy shrug of the shoulders. Mom looked a little distressed. So she turned around and decided what to get. And I'm watching this whole thing play out and realize they really aren't there for the child.
They are there cause mom is going to spend $40-50 to fix these dolls' hair. And it's not like the hair was very bad off to begin with. That really puzzles me. Why does a grown woman want to spend that kind of money to fix doll hair? Why is it important enough to stand in a line for 20 minutes to deposit the dolls, and then wait another two hours for them to be done? Even supposing she has more money than she knows what to do with. Is she just a crazy adult doll lover? Is it somehow a status thing? Something the moms compare at play dates? I would have liked to know because I find most people have REASONS for what they do, even when it looks crazy to us. Hubs is really getting antsy now. He starts asking if we are done after this. Nope. Mouse has a list and we already found out what she wanted was out of stock so she was going to have to shop. She admits that there is something else but she doesn't have enough money. Dad volunteers to help her out. Hah! Note to kids: take dad and make him stand in a line for a long time. It's financially beneficial. About that time the couple in front of us had issues. Here we have a family that looks middle class. Dad is in work coveralls that look like a construction worker's. Mom is rather overweight and so is daughter. Daughter looks to be 10-11ish. This family is obviously here for the daughter. Mom shows her the poster when we are close and asks which style the girl wants. She proceeds to pick out a style that does not go with her doll's hair type. Mom points out which section to choose from. The child starts throwing a two-year-old fit. It was ugly. And they get her settled down in a bit. Not a single word of warning or reprimand. You have got to be kidding. We would have removed our kid from line and left. Who rewards over-the-top ugliness in a child that age???? Just wow. Finally we make it to the head of the line and the ears are pierced in about 2 minutes and we move downstairs to finish up the gift card.
Dad is past his limit at this point and just pushing to find something so we can leave. And to make a long story short -- Mouse ends up with about $50 in extra merchandise just so dad can escape! Hysterical. And brilliant.

## Wednesday, February 5, 2014

### Stunned By Joy

In my favorite group of friends here in Texas, I am the old woman. It's a little strange since for years and years I was always the youngest in any group. But whatever. I earned every year. Ha. Still, it was with great joy that I learned one of my friends' sons was getting married. Not only because he was getting married...also because I was not going to be the first mother-in-law of the group. It's the little things... This also gave me the push needed to finish setting up my quilting room. I love saying that. I never thought I would have a dedicated space all my own to quilt and make a mess in. Because quilting is very messy. After putting it all together, I was able to start on a quilt for a wedding gift. Timing was bad for going out and buying a bunch of material (Oh how I miss The Fabric Shop in Ohio) so I dug through my material bins and pattern notebook and came up with a scrap quilt to use as a couch throw. And I had everything I needed, including the batting, to get it done. Of course I pushed it all the way to the last minute, but I did get it completed. Sorry about the hazy shot...I didn't check the photos before wrapping it up so I don't have good shots of this quilt. Not smart. This is called a Plus Quilt and is a free pattern I found on the internet by Jeni Baker at incolororder.blogspot.com. The back is made up of scraps from the 11 Pony quilts I did. I swear I will never use up all the purple leftovers. I normally wouldn't have used purples on a wedding quilt like this but plum was one of the wedding colors. I took advantage. My bestie Rena did the signature block for me. She has the ultra cool embroidering machines. The binding is leftover pieces of binding from other quilts.
I don't throw much away. Material is too expensive to waste. This was cool because it was already cut and ironed in half, etc and I just had to join the pieces. Not a good shot but you can't really see it from the front. The actual quilting I just stitched in the ditch around each plus sign. I would usually do more quilting than that but I was running short on time. And that was also quilt number 40. Wow. Now it's time to get back to my list of other quilts I am making. No idea how long it will take me to get through my list. Still have lots of friends I want to send one to because I love them!

## Monday, January 27, 2014

### Squirrel!!

My favorite part of the movie UP is how the dogs are always getting distracted by the possibility of squirrels. But it's not as funny when it feels like your everyday life is like that. All day. Every day. I swear I can't complete a task without picking up four more right in the middle of it. And sometimes I finish something and lots of times I don't. Is this getting old? It feels more like losing my mind. On top of that Mouse and I have been rather sick. Of course it's while Hubs is gone which makes it twice as bad. The day Mouse got sick I was relieved. I wasn't sure I could drive her to school. But we are on the mend. I think. Every time I say that it seems we suffer a relapse. Hubs has come and gone. He did get the rest of the boxes gone through if not unpacked -- and our claim has been turned in. Waiting for that first insulting offer. We turned in about $4500 in damages. I say the first offer is $500. Hubs had his ulnar nerve surgery and is recovering. He has a lot of pain but has a little feeling returning to his hand. It can take up to a year before he knows how well the surgery actually worked. Lots going on. House is mostly unpacked but we are working on rooms and all the little changes, etc. It feels like there are still a million things to work on. Because there are. Sometimes it leaves me feeling paralyzed.
I try to remind myself how much we've already done. And it would help if I could complete a task without getting distracted by another. But there is progress. In Mouse's bathroom they still had the builder's-grade brassy looking light fixture. I abhor this look. Now wasn't the time to spend the money on it though as we upgraded in Songbird's bathroom. So I grabbed a can of spray paint that matches the colors in there and painted it. I've never been real big on spray painting things but there is no way I could hate it more so I did it. It's like a tan that picks up the tan in the new sink top and the throne. One of the glass globes broke so we had to replace those also. It's okay. Not great. But better than the gold. The fixtures in Songbird's room are antique. They have these cool looking tassels (can't believe I said that). But the metal was not exactly rusting but discolored and oxidizing or something. Some of the pieces are cracked. So, we were sad to have to replace them -- especially since the wall sconces in the window seat match them. I figured we could see what a coat of paint would do for them also -- we didn't have anything to lose. Of course the ancient Bakelite holder in the ceiling came out and now that has to be fixed before we can rehang it. Just requires crawling around in the attic, which is not a priority with the other lights in the room. It's on the list. Hubs' one big request was to put up curtains in the living room. Well, that doesn't work with picture windows. And I hate curtains. Really. I do. So we talked to a decorating place and it was going to cost over $1000 for the cordless pleated shades that can be put all the way up. My mother and my sister have both used selectblinds.com and they recommend them. So I went online and found blackout shades in a mocha color that matches the trim exactly. When they are all the way up you don't even notice them.
So Hubs is happy he can pull them down at night so people can't see in, and I am happy because you don't notice them when they are open and it doesn't spoil the wall of windows. Plus...all four of these plus two shades in Mouse's room came out to just over $400 - good quality stuff. I highly recommend this company guys! One of the living room blinds had an internal defect and I got it replaced no problem. Plus they even have videos on how to measure to give them the measurements they need. It was simple. Speaking of windows. There are full sized ones in all the bathrooms. This one is in Mouse's room and made me too uncomfortable. So, Amazon to the rescue. I bought that window cling stuff that makes it so you can't see in well but it lets all the light in. I did three of the four bathrooms. Songbird didn't want it in her bathroom, but I decided to go ahead and order some for it anyhow. It just makes me feel better. Don't know why. The laundry room has been painted. It was a muddy tannish brown with raspberry trim. Not working for me. It is now a good grey color that Hubs picked out. Love it. And did I mention the stove-top tragedy? We had the spices stored over the stove....and one fell and broke it. Horrified. The replacement cost for the glass top was in the $300s from the company, the $500s at Sears and $250 with free shipping on eBay. Shop around people. Anyhow, it came in and the electrician came by and installed it for me. The spices are no longer over the stove. Turns out that is the most common reason for breakage of the glass stove tops. Mouse's room used to be a ladybug theme. Everything was pretty much black, red and white. Here she picked a blue and white. This weekend was exceptionally nice so it was time to turn all that red and black stuff to white or blue. Two frames, two shelves and the stuffed animal basket all got the spray paint treatment. I have a white hand that Mouse says looks like mold. LOL.
Now I need to change out the mats on a couple of her paintings so I can get everything hung in her room. It's still only half done. I also did some carpet cleaning thanks to a puking dog (more socks) and cleaned the rug from Mouse's room. The rug slips awful on the wood floor and I have tried several things without success. Yesterday I applied some safety backing to it. I need another tub. I'm hoping this works. And lastly... We did some planting...or re-potting. I only had one plant in the house so Mouse and I went and picked out a few for some of the rooms. She loved the succulents so I got her two for her bathroom. Plus I wanted a couple of hanging plants for the spa room...it already had two ceiling hooks just begging for some ferns. I think that's more than enough sharing for one day. I know it's long....but I had a whole month to make up for. I'm going to try and be better but no promises. Half the time I turn on the computer to sit down and do this and I get distracted and forget. No, not half the time...all the time. Wish me luck.

## Monday, December 9, 2013

### The Nightmare

Our moves never go smoothly. It's a rule. But nothing, in over 40 years of my life...military moves with daddy, civilian moves with my parents, military moves with Hubs...NOTHING has ever been this bad. Thanks to some new changes with the military contracts, you now can have several different companies handle your belongings. In our case it was 3. One to pack us up. One that stored our stuff for a year. And one to deliver it. Sounds like a recipe for disaster right? Well it gets better. You see, the last company to touch it gets to pay for the damages. To begin with I did not want my stuff now. I wanted to finish painting before our stuff arrived. So the whole thing already started with a strike against it. And I told the company if they couldn't deliver before my husband left again I didn't want it. So they declined. I was happy. Then they called back and said they would do it.
And they actually tried to be very accommodating. The guy showed up at the warehouse to get our stuff on a Monday a week out from when Hubs leaves.  But they said they couldn't get to our stuff -- come back Wednesday.  He returned Wednesday and they had piled it in a big pile in a room with no lights and had no workers to help load. Come back Friday.  So he shows up Friday and there is still no help. So the driver decided he would load it himself. And they said go ahead. But then he wanted inventory to check it off as he was seeing broken items already and wanted to note that. They said no, just put it on.  He refused. And stood in front of them and called me to let me know he was trying to pick up my belongings but the warehouse was making problems and he could already see many broken items. So they left him to figure it out himself.  And it took him 12 hours to get it all loaded and inventoried. And there were lots of phone calls flying.  And in the end the warehouse people would not sign the inventory because they said they didn't break anything.  So the driver said to call him when they were ready to sign and he would come move his truck. LOL. So they signed. And our belongings showed up the day before Hubs left.  Exactly what we did not want. But hey, it wasn't the fault of the guy delivering. It gets better.  Once the poor guy gets here and makes the calls to have unpackers meet him at our house he finds no one has any workers.  In November. The slowest month for moves.  Right. So the guy had to call a day labor company and hire two guys that had never done this before to come move my belongings.  I did warn the driver that my walls and floors were new and I would file for any damages.  Him and his son did all the furniture and heavy stuff to ensure there was no damage.  And I must say they were the best I've ever had for that. More points for this guy. What should have been done by noon took until 8 pm.  Day labor cut off at 4:45.  
Not that they were much help to begin with. The inventory sheets were a complete mess. Usually you might have two different colored sticker sets - one for large items and one for boxed items. No. Not this time. Three different colors. And they started and stopped at odd places. So I'm trying to mark off three different sheets as things are being unloaded. It was a major pain and slowed things down. Almost every antique furniture piece has been destroyed. The china cabinet -- the fold down desk has collapsed inside. The corner china cabinet they busted out the glass and the frame and broke the woodwork. A dresser is pulled apart. The drawers are all collapsing and you can see daylight. A super solid brass floor lamp looks like a pretzel. No really...like they tried to tie it in a knot. Broken things. Destroyed art. Missing boxes...and one missing support bar for Mouse's bed...so the whole thing is worthless. (Chinese Folk Art watercolor painting brought back from China for Mouse) Each box must be gone through slowly and the damage cataloged. I think I'm up to the 6th damage sheet now. And I'm not even doing Hubs' stuff. Slowest unpacking job I've ever done. Please God let it be the last. So that's how I'm spending my days. A box at a time. New damaged items every day. And I know it will actually get worse. When they offer me next to nothing for what we turn in. I actually think we might have to get an attorney on this one. What a way to say goodbye to the military.

## Friday, November 29, 2013

### Work Crew Number Two

After debate back and forth over exactly what to pay for on the walls, we finally got a crew in to tear down the paper from 4 bathrooms, 2 halls, kitchen and a bedroom. Then to repair the walls, texture/finish the walls and then paint. We threw in painting the living and dining rooms also. Big money, but it still leaves half the house to be painted. They began in the back side of the house. Remember Mouse's bathroom?
It had two layers of wallpaper and two holes behind the mirror. Even though it was the smallest room it held us up a day with the sheet rock repair. A guy spent three days in here. Mouse picked a color called Spa. It's really nice and soothing...kind of beachy. Now all the yellow trim in the room stands out like crazy though. I didn't pay for the trim to be done so that is on me. But after painting her room to match, I was done. Not sure if I fractured a finger or just messed up the joint, but painting was excruciating. Plus every time I tried somebody else decided they had better use for my time. Remember Songbird's room? The crazy wallpaper that made me call it quits and hire a crew. Even the light switch covers and outlet covers had this stuff on it. And it did not want to come off. They came in and finished stripping the paper and had it textured in no time. For days I would just go in there to look and see what real walls looked like. I think all the unfinished drywall scarred me. LOL. She picked a color called Seacap for the room and parsnip (a shade lighter) for the bathroom. It's a very very light seafoam green. Now the off white trim in the room looks pink next to it. Not good. Next was the very yellow living/dining area. This only required paint. Yellow is one of my least favorite colors. And this was really yellow. See those paint splotches on the wall? Some of you know that story. The rest can make something up. The new paint looks yellow when I shoot it. That is pretty funny. It's actually more of a sand color. We used that color pretty much in the rest of the house that they did for me. It's called toasted pinenut. With all that big wallpaper down the kitchen looks bigger. I really got cussed for my bathroom. It was that foil paper. They ended up just cutting it off with utility knives in many places and then just fixing the walls. All four of the guys were in here for a full day. I don't have an after shot.
I just had them paint this room a white to match the bedroom as I haven't decided what I'm doing back there and it will be the last rooms I do. This stuff was only real bad on the border. They put it up with an adhesive of some kind -- not the wall kind. The border had to be cut off the wall. Unbelievable. The toasted pinenut looks really good with all the wood and brass in here. It was a very nice surprise. Believe it or not, this was the worst room in the house. It had a wallpaper that really didn't bother me and I was going to leave it, but then I got to thinking that if I ever did want to change it I would have to go through this same thing with the unfinished drywall. Forget it. I decided to do this mess ONCE. When they tried pulling down this paper, it was plastered on. As in, put on with plaster. Total mystery as to why anyone would do that. They had to chip this paper off. Again, this hall looks great now as it has all the same wood as the bathroom. It's much lighter than before also. I didn't realize how dark the area was before. It took 8 working days to complete all the work. We did find some pock marks in the wall of the small bathroom which is old termite damage. The house had been treated per the paperwork and now I know where. That was all fixed. The drywall holes in the same bathroom were fixed. Otherwise no new surprises on this job. Glad it's all done. Sorry I didn't get the painting done I needed to before the movers came. Now forget it. I'm not painting around all our stuff. It can stay a mess until I can afford to pay someone else to do it. There is fine white powder all over the place from having all the texturing done. Most of it was cleaned up before our stuff arrived...but I still find it when the heater has run at night. Oh - and just like most things we try to do here...these guys were the THIRD company I called. The first two were off of Angie's List. Neither called back. Angie's List is a waste of money. The phone book got the job done.
- Posted using BlogPress from my iPhone

## Monday, November 25, 2013

### Thankful

One of the best parts of returning to Texas is being able to celebrate the holidays with my friends. It just doesn't get any better than that. And while many of you are thinking "duh", you have to understand that as a military brat and then military wife, I don't live anywhere long enough to have any roots and most friends forget about you the minute you leave town. Actually, most friends cut you off before you even leave. How I ended up with 10 ladies that did not cut me off before OR after I left -- well, it's a God thing. Anyways....the Fabulous Zumba Queen opens up her home for an annual dinner with our guys. She has an amazing home that was built for entertaining and it's awesome. This was the first year Hubs and I were here to attend. It was awesome. My husband informs me that he had a great time hanging out around the fire after dinner with these guys. He only knew one of them before we went but has now been pulled kicking and screaming into the craziness that we are. We are all very thankful that the Fabulous Zumba Queen and her handsome husband open their home up for us. It is the perfect setting. Everyone brought something to share and we had a huge feast. Tables were shoved together so we could all be in one spot. Cool shot A -- you need to show me how to do this. There were games to be played (team captains...serious competitors). There was talking and laughing and joking and just cutting loose. Yes I was on the losing team. Even having the most competitive person in the room on our side did not save us this time. And now they have proof that the mysterious and elusive Hubs actually exists. Great night friends! So thankful for all of you and the craziness that is our friendship...around life, kids, family and all the other things that we could let interfere.
Thankful you ladies are willing to carve out time in your crazy schedules as you can to make us what we are. You don't hardly find that anywhere anymore in our fast paced world. As Steph would say....Nuzzle, nuzzle, neigh, neigh.

## Wednesday, November 20, 2013

### My Own Nightmare Before Christmas...And After...

I have a few medical issues I am finally getting taken care of. See, military care? What care? They did not help me and did not want to be bothered. What? Synthroid not working for you? Wow. Too bad. Guess you got a problem. Anyways, now that I have real insurance (until the new rules kick in and we are taxed on it, at which point we will have to drop it and go back to the Tricare that no one accepts) I am getting answers. Real doctors care. And never once am I asked if it could be a mental issue because my husband is deployed. So there are 4 separate things we are working on. Most of them are just issues of finding the correct dosage, etc. But there is one that is a bit of a problem. It keeps me busy - or at least it has for the last month. I finally feel like I have a handle on it though. Part of my allergy problems and most of my migraine issues are food related. Military would not help. But that's ok cause all that testing is done now. And they ended up paying for part of it anyways. So I have this list of foods I can't eat for 6 months in order to clear some of this up. Hopefully after the 6 months most of it can go back into my diet. Some things like the wheat and dairy will never be able to...I've known that for years, but now it's officially in my records, not just a guess. (And no I am not a celiac.)
For now the list of no-nos includes: beef, garlic, black pepper, green beans, cayenne pepper, tuna, clams, shrimp, dairy, wheat, soy, corn, sage, cauliflower, sweet potato, lime, orange, grape, iceberg lettuce, peas, pecan, rosemary, hops, yeast, banana, butternut squash, cane sugar and egg yolk. Wheat, egg, soy, corn, milk, sugar, yeast - that about covers anything packaged, thank you very much, or anything with oil. Beef??? Kill me now. Black pepper and garlic??? How am I supposed to cook??? Iceberg lettuce? Huh? So it's been an interesting month. I have learned to substitute cardamom for black pepper. I also have an Indian spice coming that they use as a garlic or onion substitute. I also carry a cardamom in my purse. Hey - did you know cardamom is the third most expensive spice? Yep. Right after saffron and pure vanilla. Unfortunately I am intimately aware of this. Yesterday I messed up and used butternut squash in some roasted veggies. Goes to show I don't have the whole list memorized yet. I survived the Halloween candy, but Thanksgiving and Christmas are on the horizon. But it's only 6 months. After that point I should be able to start adding things back in to find what I need to stay away from forever. I've known wheat for years. Same with milk. Those need to stay off limits. I've only had one migraine since I started this...and it was because I had a diet Sprite. I know better than to touch artificial sugars also. Have to stick to my pure juices watered down with club soda when I feel like having a "soda". Already looking forward to Thanksgiving and Christmas feasts NEXT year!

## Tuesday, November 12, 2013

### A New Scout Troop

We finally found a troop in our new town. Funny thing is the town hasn't had one for 5 years and this troop is just being started by two ladies with no experience but 100% heart. We lucked out. Talk about perfect timing. I'm a fan of scouting because of the social projects they do for their communities.
More kids need to get their hands dirty helping others right where they live. This is how you teach empathy and social responsibility without it always being a "church thing".  I have seen kids that grow up in the church doing "good works" have a total disconnect on helping out for no other reason than because it's your community.  Whoops... rabbit trail. On a more personal level, Mouse has been helped tremendously by Scouts. With her separation anxiety I needed that place outside of school where I could take her and have her with other kids and off my leg. Ok, so I stayed in the building and ended up working with the older girls again... but that's beside the point. Now here we are beginning our 5th year.  Mouse grew to where she would go on field trips and then overnights without me.  You would have to have known her as a scared little baby and toddler to fully appreciate what a big deal that is. It's so huge I even found a troop that would accept her for the 6 months in Arkansas. It is a staple in her life to date. She has been blessed with two amazing troop leaders who grew up in scouts and knew their area and camping like the backs of their hands.  These are women worthy of your girls. Looking forward to getting to know our new troop and all our new adventures!
https://holdenlee.wordpress.com/2012/09/10/early-fall/
Posted by: holdenlee | September 10, 2012 Early fall How does it feel to be leaf, clawing against the sidewalk with your tips, trying to grab a foothold against the stone-faced cement? The wind is like a baby, the only game he knows how to play with cards is turn the card over, laugh, then turn it over again, and toss it in a few different ways before he loses interest. It is nascent autumn, when all your friends are still green in their trees, and will go through all the vicissitudes of autumnal life–the lemon yellows and juicy oranges and passion reds and the few smug berry purples. And you are the pessimist that scratches without being heard, pessimist because you have no choice, because that wind cut off your stem that you held on to the branch with—you know you should have held on with a firmer grip—and now you’ll just change from green to brown. Your friends—so many of them!—up in the trees, don’t seem to notice your absence, that space where you occupied—was it this twig or that? You all will meet the same fate!, you shout up at them. But they are busy charming each other with a rosy red glow and outshining each other in sunny yellow. The wind rests you against the soil, and you accept: you’ll be the first among your friends to fertilize this tree, so it can grow tender buds in a spring you won’t see. P.S. I fixed some stuff in the first lecture of 18.787 (see below post)—added an explanation of why $\mathbb{G}_{a,\text{Spec}(R)}(\text{Spec}(R'))$ is simply $(R',+)$, and why this sort of idea is important.
https://en.bmstu.wiki/Needham%E2%80%93Schroeder_(protocol)
# Needham–Schroeder (protocol) ## Foreword This name is used for both the symmetric (using a trusted third party) and asymmetric schemes of authentication and session key generation. They were invented by R. Needham and M. Schroeder in 1978. ## Symmetric version ### Description This protocol is used for mutual authentication and shared secret key generation for establishing a secure connection with the use of a trusted third party. This protocol later became the basis for a range of symmetric authentication protocols, in particular Kerberos. ### Setup The protocol is used by two users ${\displaystyle A}$ and ${\displaystyle B}$ and a trusted party ${\displaystyle KDC}$ (Key Distribution Center), which shares symmetric keys with the users (${\displaystyle K_{AC}}$ and ${\displaystyle K_{BC}}$ respectively). ### Work 1. ${\displaystyle A}$ sends plaintext information about the requested connection to the trusted party: his ID, ${\displaystyle B}$'s ID and a random number ${\displaystyle R_{A}}$: ${\displaystyle A: [A,B,R_A]\to KDC}$ 2. ${\displaystyle KDC}$ generates a session key ${\displaystyle k}$ and forms a package for ${\displaystyle A}$, which contains the random ${\displaystyle R_{A}}$ generated by ${\displaystyle A}$, ${\displaystyle B}$'s ID, the session key ${\displaystyle k}$ and a package for ${\displaystyle B}$: the session key and ${\displaystyle A}$'s ID, encrypted with ${\displaystyle K_{BC}}$. ${\displaystyle KDC}$ encrypts the whole package with the key shared between him and ${\displaystyle A}$ and sends it to ${\displaystyle A}$: ${\displaystyle KDC: E_{K_{AC}}(R_A, B, k, E_{K_{BC}}(k,A))\to A}$ 3. ${\displaystyle A}$ decrypts the package and checks ${\displaystyle R_{A}}$ and ${\displaystyle B}$'s ID. This makes it impossible for a malefactor to spoof ${\displaystyle KDC}$ or impersonate ${\displaystyle B}$ by changing the recipient ID in ${\displaystyle A}$'s first message. 
Then ${\displaystyle A}$ resends to ${\displaystyle B}$ his part of the package: ${\displaystyle A: E_{K_{BC}}(k,A)\to B}$ 4. Having decrypted the message, ${\displaystyle B}$ discovers the session key and the interlocutor's ID. After that a check happens: ${\displaystyle B}$ generates a random ${\displaystyle R_{B}}$ and sends it, encrypted with the session key, to ${\displaystyle A}$: ${\displaystyle B: E_k(R_B)\to A}$ 5. ${\displaystyle A}$ decrypts the message and sends to ${\displaystyle B}$ the confirmation of the successful session establishment: ${\displaystyle R_{B}}$, decreased by 1 and encrypted with the session key: ${\displaystyle A: E_k(R_B-1)\to B}$ ## Asymmetric version ### Description This protocol version uses asymmetric cryptography, which means that shared secrets between the users and a trusted third party are not needed. It allows users to discover each other's public keys and perform mutual authentication of each other and the third party. ### Setup Two users participate in the process, ${\displaystyle A,B}$, along with a trusted key generator center ${\displaystyle KDC}$. All of them have public/private key pairs: ${\displaystyle Kp_A, Ks_A, Kp_B, Ks_B, Kp_C, Ks_C}$ respectively. ${\displaystyle KDC}$ knows the public keys of the users and the users know ${\displaystyle KDC}$'s public key. ### Work 1. ${\displaystyle A}$ sends ${\displaystyle KDC}$ a request for establishing a secure messaging channel with ${\displaystyle B}$: ${\displaystyle A: [A,B]\to KDC}$ 2. ${\displaystyle KDC}$ sends to ${\displaystyle A}$ a message with ${\displaystyle B}$'s public key and ${\displaystyle A}$'s ID, signed with his digital signature (encrypted with his private key): ${\displaystyle KDC: E_{Ks_C}(Kp_B,A)\to A}$ 3. ${\displaystyle A}$ verifies ${\displaystyle KDC}$ (by decrypting the message with ${\displaystyle KDC}$'s public key) and gets ${\displaystyle B}$'s public key. 
After that ${\displaystyle A}$ forms a package for ${\displaystyle B}$: ${\displaystyle A}$'s ID and a random number ${\displaystyle R_{A}}$, encrypts it with ${\displaystyle B}$'s public key and sends it to ${\displaystyle B}$: ${\displaystyle A: E_{Kp_B}(A, R_A)\to B}$ 4. ${\displaystyle B}$ decrypts the package and finds out ${\displaystyle A}$'s desire to start communication. Then ${\displaystyle B}$ makes a similar request for ${\displaystyle A}$'s public key: ${\displaystyle B: [B,A]\to KDC}$ 5. ${\displaystyle KDC}$ sends to ${\displaystyle B}$ ${\displaystyle A}$'s signed public key: ${\displaystyle KDC: E_{Ks_C}(Kp_A, B)\to B}$ 6. Now all the participants know each other's public keys. They need to authenticate and make sure that the connection is set up correctly. ${\displaystyle B}$ generates a random number ${\displaystyle R_{B}}$ and sends it, together with the ${\displaystyle R_{A}}$ he got earlier, to ${\displaystyle A}$ in encrypted form: ${\displaystyle B: E_{Kp_A}(R_B, R_A)\to A}$ 7. ${\displaystyle A}$ decrypts ${\displaystyle B}$'s package and checks ${\displaystyle R_{A}}$. If everything is all right, ${\displaystyle A}$ encrypts ${\displaystyle R_{B}}$ with ${\displaystyle B}$'s public key and sends it back to ${\displaystyle B}$: ${\displaystyle A: E_{Kp_B}(R_B)\to B}$ ## References W. Mao. Modern Cryptography: Theory and Practice. Williams, 2005, pp. 76–84. ISBN 5-8459-0847-7. Roger M. Needham, Michael D. Schroeder. Using encryption for authentication in large networks of computers. Commun. ACM, vol. 21, no. 12. New York, NY, USA: ACM, 1978, pp. 993–999. Bruce Schneier. Applied Cryptography. Wiley, 1996, pp. 47 et seq. ISBN 978-0-471-11709-4.
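As an illustration, the symmetric handshake above can be simulated end-to-end with a toy stand-in for encryption (tuples tagged with the key — a sketch of the message flow only, not real cryptography; all helper names here are made up):

```python
import secrets

def enc(key, *payload):          # toy stand-in for E_K(...)
    return ("enc", key, payload)

def dec(key, box):               # "decrypt": only works with the right key
    tag, k, payload = box
    assert tag == "enc" and k == key, "wrong key"
    return payload

# Long-term keys shared with the KDC.
K_AC, K_BC = secrets.token_hex(8), secrets.token_hex(8)

# 1. A -> KDC : A, B, R_A
R_A = secrets.randbelow(2**32)

# 2. KDC -> A : E_{K_AC}(R_A, B, k, E_{K_BC}(k, A))
k = secrets.token_hex(8)         # fresh session key
msg2 = enc(K_AC, R_A, "B", k, enc(K_BC, k, "A"))

# 3. A checks R_A and B's ID, then forwards the inner ticket to B.
r_a, peer, k_at_A, ticket = dec(K_AC, msg2)
assert r_a == R_A and peer == "B"

# 4. B recovers the session key and challenges A with R_B.
k_at_B, initiator = dec(K_BC, ticket)
R_B = secrets.randbelow(2**32)
msg4 = enc(k_at_B, R_B)

# 5. A answers with R_B - 1 under the session key.
(r_b,) = dec(k_at_A, msg4)
msg5 = enc(k_at_A, r_b - 1)
assert dec(k_at_B, msg5) == (R_B - 1,)   # handshake completes
```

Both sides end up holding the same session key `k` without it ever being sent in the clear between them.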
https://icml.cc/Conferences/2021/ScheduleMultitrack?event=8762
Spotlight Single Pass Entrywise-Transformed Low Rank Approximation Yifei Jiang · Yi Li · Yiming Sun · Jiaxin Wang · David Woodruff Thu Jul 22 06:30 PM -- 06:35 PM (PDT) In applications such as natural language processing or computer vision, one is given a large $n \times n$ matrix $A = (a_{i,j})$ and would like to compute a matrix decomposition, e.g., a low rank approximation, of a function $f(A) = (f(a_{i,j}))$ applied entrywise to $A$. A very important special case is the likelihood function $f(A) = \log(|a_{i,j}| + 1)$. A natural way to do this would be to simply apply $f$ to each entry of $A$, and then compute the matrix decomposition, but this requires storing all of $A$ as well as multiple passes over its entries. Recent work of Liang et al. shows how to find a rank-$k$ factorization to $f(A)$ using only $n \cdot \mathrm{poly}(\epsilon^{-1} k \log n)$ words of memory, with overall error $10\|f(A)-[f(A)]_k\|_F^2 + \mathrm{poly}(\epsilon/k) \|f(A)\|_{1,2}^2$, where $[f(A)]_k$ is the best rank-$k$ approximation to $f(A)$ and $\|f(A)\|_{1,2}^2$ is the square of the sum of Euclidean lengths of rows of $f(A)$. Their algorithm uses $3$ passes over the entries of $A$. The authors pose the open question of obtaining an algorithm with $n \cdot \mathrm{poly}(\epsilon^{-1} k \log n)$ words of memory using only a single pass over the entries of $A$. In this paper we resolve this open question, obtaining the first single-pass algorithm for this problem and for the same class of functions $f$ studied by Liang et al. Moreover, our error is $\|f(A)-[f(A)]_k\|_F^2 + \mathrm{poly}(\epsilon/k) \|f(A)\|_F^2$, where $\|f(A)\|_F^2$ is the sum of squares of Euclidean lengths of rows of $f(A)$. Thus our error is significantly smaller, as it removes the factor of $10$ and also $\|f(A)\|_F^2 \leq \|f(A)\|_{1,2}^2$.
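For intuition, the objects in the error bounds above can be computed offline with NumPy. This is the naive multi-pass baseline the paper improves on (store all of $A$, transform, then take a truncated SVD), not the single-pass streaming algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 5
A = rng.standard_normal((n, n))

# Entrywise transform from the abstract: f(a_ij) = log(|a_ij| + 1).
fA = np.log(np.abs(A) + 1.0)

# Offline best rank-k approximation [f(A)]_k via truncated SVD --
# the quantity the streaming algorithms compete against.
U, s, Vt = np.linalg.svd(fA, full_matrices=False)
fA_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

err = np.linalg.norm(fA - fA_k, "fro") ** 2   # ||f(A) - [f(A)]_k||_F^2
total = np.linalg.norm(fA, "fro") ** 2        # ||f(A)||_F^2
assert err <= total                           # err^2 drops the top-k spectrum
```

By the Eckart–Young theorem, no rank-$k$ matrix does better than `fA_k` in Frobenius norm, which is why $\|f(A)-[f(A)]_k\|_F^2$ is the natural additive error term.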
https://answers.opencv.org/answers/111071/revisions/
OpenCV won't work on any microcontrollers (Arduinos or similar); it needs Linux and a lot of libraries. Fortunately, the edge detector is simple enough to implement yourself. The simplest filter uses the kernels fx = [ -1 1 ] (horizontal) and fy = [ -1 1 ]^T (vertical). Here is the code (considering that the image is in a width*height linear array; note that gx is meaningless for the last pixel of each row and gy for the last row, so skip or zero those border pixels in practice). You might adapt it to your needs: for(pos = 0; pos < width*(height-1); pos++){ gx[pos] = im[pos+1] - im[pos]; /* horizontal gradient */ gy[pos] = im[pos+width] - im[pos]; /* vertical gradient */ } To get the amplitude and the angle: ampl[pos] = sqrt(gx[pos]*gx[pos] + gy[pos]*gy[pos]); angl[pos] = atan2(gy[pos], gx[pos]); The Sobel filter is a little bit more complicated (it uses two 3x3 matrices) but it gives far better results.
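For reference, the same forward-difference filter can be sketched with NumPy slicing (a hypothetical translation of the C snippet above, trimming the last row and column instead of reading out of bounds):

```python
import numpy as np

def gradients(im):
    """Forward-difference gradients; output shape (h-1, w-1) stays in bounds."""
    gx = im[:-1, 1:] - im[:-1, :-1]   # horizontal difference: im[y, x+1] - im[y, x]
    gy = im[1:, :-1] - im[:-1, :-1]   # vertical difference:   im[y+1, x] - im[y, x]
    ampl = np.hypot(gx, gy)           # sqrt(gx^2 + gy^2)
    angl = np.arctan2(gy, gx)
    return ampl, angl

# A vertical step edge: zeros on the left, ones on the right.
im = np.zeros((8, 8))
im[:, 4:] = 1.0
ampl, _ = gradients(im)
assert ampl[:, 3].max() == 1.0   # the filter fires exactly at the step
assert ampl[:, 0].max() == 0.0   # flat regions give zero response
```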
https://socratic.org/trigonometry
## Question 6c606 Sunayan S Please give a look here... #### Explanation: We show that $\frac{1+\sin A}{\cos A}+\frac{\cos B}{1-\sin B}=\frac{2(\sin A-\sin B)}{\sin(A-B)+\cos A-\cos B}$. Using the half-angle identities $1+\sin A=\left(\cos\frac{A}{2}+\sin\frac{A}{2}\right)^2$, $\cos A=\cos^2\frac{A}{2}-\sin^2\frac{A}{2}$ and $1-\sin B=\left(\cos\frac{B}{2}-\sin\frac{B}{2}\right)^2$: $\frac{1+\sin A}{\cos A}+\frac{\cos B}{1-\sin B}=\frac{\left(\cos\frac{A}{2}+\sin\frac{A}{2}\right)^2}{\cos^2\frac{A}{2}-\sin^2\frac{A}{2}}+\frac{\cos^2\frac{B}{2}-\sin^2\frac{B}{2}}{\left(\cos\frac{B}{2}-\sin\frac{B}{2}\right)^2}=\frac{\cos\frac{A}{2}+\sin\frac{A}{2}}{\cos\frac{A}{2}-\sin\frac{A}{2}}+\frac{\cos\frac{B}{2}+\sin\frac{B}{2}}{\cos\frac{B}{2}-\sin\frac{B}{2}}$ Putting both fractions over the common denominator $\left(\cos\frac{A}{2}-\sin\frac{A}{2}\right)\left(\cos\frac{B}{2}-\sin\frac{B}{2}\right)$ and expanding, the numerator simplifies to $2\left(\cos\frac{A}{2}\cos\frac{B}{2}-\sin\frac{A}{2}\sin\frac{B}{2}\right)$ and the denominator to $\left(\cos\frac{A}{2}\cos\frac{B}{2}+\sin\frac{A}{2}\sin\frac{B}{2}\right)-\left(\sin\frac{A}{2}\cos\frac{B}{2}+\cos\frac{A}{2}\sin\frac{B}{2}\right)$, so the sum equals $\frac{2\cos\frac{A+B}{2}}{\cos\frac{A-B}{2}-\sin\frac{A+B}{2}}$ Multiplying numerator and denominator by $2\sin\frac{A-B}{2}$: $\frac{2\cos\frac{A+B}{2}\cdot 2\sin\frac{A-B}{2}}{2\cos\frac{A-B}{2}\sin\frac{A-B}{2}-2\sin\frac{A+B}{2}\sin\frac{A-B}{2}}=\frac{2(\sin A-\sin B)}{\sin(A-B)+\cos A-\cos B}$ Hope it helps... Thank you... ## Write each expression as a single angle and evaluate if possible? 2sin30°cos30° Øko $2\sin\left({30}^{\circ}\right)\cos\left({30}^{\circ}\right)=\sin\left({60}^{\circ}\right)=\frac{\sqrt{3}}{2}$ #### Explanation: Use the identity • $\sin(2x)=2\sin(x)\cos(x)$ Applying this identity to your example: $2\sin\left({30}^{\circ}\right)\cos\left({30}^{\circ}\right)=\sin\left(2\cdot{30}^{\circ}\right)=\sin\left({60}^{\circ}\right)=\frac{\sqrt{3}}{2}$ ## A triangle has sides A, B, and C. Sides A and B are of lengths 1 and 2, respectively, and the angle between A and B is $\frac{2\pi}{3}$. What is the length of side C? 
sankarankalyanam $c=\sqrt{7}$ #### Explanation: $a = 1, b = 2, \hat{C} = \frac{2\pi}{3}$ Applying the Law of Cosines, ${c}^{2} = {a}^{2} + {b}^{2} - 2ab\cos C$ ${c}^{2} = {1}^{2} + {2}^{2} - \left(2 \cdot 1 \cdot 2 \cdot \cos\left(\frac{2\pi}{3}\right)\right)$ ${c}^{2} = 5 - 4\cos\left(\frac{2\pi}{3}\right) = 5 + 4\cos\left(\frac{\pi}{3}\right)$ since $\cos\left(\frac{2\pi}{3}\right) = -\cos\left(\frac{\pi}{3}\right)$ (cosine is negative in the second quadrant). ${c}^{2} = 5 + \left(4 \cdot \frac{1}{2}\right) = 5 + 2 = 7$ $c=\sqrt{7}$ ## How do you convert radian measures to degrees? F. Javier B. Through the equivalence $\pi$ rad $= 180°$ and the direct proportionality of arcs and angles. See examples. #### Explanation: Converting 45° to radians is easy: $\frac{\pi}{x} = \frac{180}{45}$. Then $x = \frac{45\pi}{180} = \frac{\pi}{4}$. So $45° = \pi/4$ rad. On the other hand, if we have $\frac{\pi}{3}$ rad, to convert to degrees we use the same equivalence: $\frac{\pi}{\pi/3} = \frac{180}{y}$. Transposing terms, $y = \frac{180\pi}{3\pi} = \frac{180}{3} = 60°$. ## Solve for all values of x? sin x = cos x + 1 VNVDVI $x = \frac{\pi}{2} + 2n\pi$ or $x = \pi + 2n\pi$, where $n$ is any integer. #### Explanation: We first want to get this equation in terms of only one trigonometric function; cosine works best here. Let's square both sides: ${\sin}^{2} x = {\left(\cos x + 1\right)}^{2}$ ${\sin}^{2} x = {\cos}^{2} x + 2 \cos x + 1$ Recall the identity ${\sin}^{2} x + {\cos}^{2} x = 1$, so ${\sin}^{2} x = 1 - {\cos}^{2} x$. Thus $1 - {\cos}^{2} x = {\cos}^{2} x + 2 \cos x + 1$ and everything is in terms of cosine. Let's move everything to one side (the $1$'s cancel out): $2{\cos}^{2} x + 2 \cos x = 0$ We can factor out $2\cos x$: $2\cos x \left(\cos x + 1\right) = 0$ So $\cos x = 0$ or $\cos x = -1$. $\cos x = 0$ gives $x = \frac{\pi}{2} + n\pi$ and $\cos x = -1$ gives $x = \pi + 2n\pi$. Because we squared the equation, we must check for extraneous roots in the original: $x = \frac{\pi}{2} + 2n\pi$ works ($1 = 0 + 1$), but $x = \frac{3\pi}{2} + 2n\pi$ fails ($-1 \ne 0 + 1$), and $x = \pi + 2n\pi$ works ($0 = -1 + 1$). Hence $x = \frac{\pi}{2} + 2n\pi$ or $x = \pi + 2n\pi$. ## A 24 foot tree casts a shadow that is 9 feet long. At the same time a nearby building casts a shadow 45 feet long. How tall is the building? Gió I got $120 \text{ ft}$ #### Explanation: Consider the diagram: the angles $\alpha$ are the same, so we can write for the two triangles (tree and building): $\frac{h}{l} = \frac{H}{L}$ In numbers: $\frac{24}{9} = \frac{H}{45}$ so that: $H = \frac{24}{9} \cdot 45 = 120 \text{ ft}$
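The two numeric answers above (the $c = \sqrt{7}$ triangle and the 120 ft building) can be sanity-checked in a few lines of Python (a quick check, not part of the original answers):

```python
import math

# Law of cosines check: a = 1, b = 2, angle C = 2*pi/3  =>  c = sqrt(7)
a, b, C = 1.0, 2.0, 2 * math.pi / 3
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))
assert math.isclose(c, math.sqrt(7))

# Similar-triangles check for the shadow problem: 24/9 = H/45
H = 24 / 9 * 45
assert math.isclose(H, 120)
```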
https://analytixon.com/2015/09/03/whats-new-on-arxiv-32/
Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents BiDAl, a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. Dynamic ensemble selection (DES) techniques work by estimating the level of competence of each classifier from a pool of classifiers. Only the most competent ones are selected to classify a given test sample. 
Hence, the key issue in DES is the criterion used to estimate the level of competence of the classifiers to predict the label of a given test sample. In order to perform a more robust ensemble selection, we proposed the META-DES framework using meta-learning, where multiple criteria are encoded as meta-features and are passed down to a meta-classifier that is trained to estimate the competence level of a given classifier. In this technical report, we present a step-by-step analysis of each phase of the framework during training and test. We show how the sets of meta-features are extracted as well as the impact of each set of meta-features on the estimation of the competence level of the base classifier. Moreover, we analyze the impact of several factors on system performance, such as the number of classifiers in the pool, the use of different linear base classifiers, and the size of the validation data. We show that using dynamic selection of linear classifiers through the META-DES framework we can solve complex non-linear classification problems where other combination techniques such as AdaBoost cannot. Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines. A stochastic forest model for young and old age trees is studied. First, we prove existence, uniqueness and boundedness of global nonnegative solutions. 
Second, we investigate the asymptotic behavior of global solutions by giving a sufficient condition for sustainability of the forest. Under this condition, we also show existence of a Borel invariant measure. Third, we present sufficient conditions for decline of the forest. Finally, we give some numerical examples. This paper is an exploratory analysis into fraud detection taking the Enron email corpus as the case study. The paper posits conclusions like strict servitude and unquestionable faith among employees as breeding grounds for sham among higher executives. We also try to infer the nature of communication between fraudulent employees and between non-fraudulent and fraudulent employees. Consider observations $y_1,\dots,y_n$ on nodes of a connected graph, where the $y_i$ independently come from $N(\theta_i, \sigma^2)$ distributions and an unknown partition divides the $n$ observations into blocks. One well-studied class of change point problems assumes the means $\theta_i$ are equal for all nodes within contiguous blocks of a simple graph of sequential observations; both frequentist and Bayesian approaches have been used to estimate the $\theta_i$ and the change points of the underlying partition. This paper examines a broad class of change point problems on general connected graphs in which a regression model is assumed to apply within each block of the partition of the graph. This general class also supports multivariate change point problems. We use Bayesian methods to estimate change points or block boundaries of the underlying partition. This paper presents the methodology for the general class of change point problems and develops new algorithms for implementation via Markov Chain Monte Carlo. The paper concludes with simulations and real data examples to demonstrate application of the methodology on a wide range of problems. Reputation is a valuable asset in online social lives and it has drawn increased attention. 
How to evaluate user reputation in online rating systems is especially significant due to the existence of spamming attacks. To address this issue, a variety of methods have been proposed so far, including network-based methods, quality-based methods and the group-based ranking method. In this paper, we propose an iterative group-based ranking (IGR) method by introducing an iterative reputation-allocation process into the original group-based ranking (GR) method. More specifically, users with higher reputation have higher weights in dominating the corresponding group sizes. The reputation of users and the corresponding group sizes are iteratively updated until they become stable. Results on two real data sets suggest that the proposed IGR method has better performance and its robustness is considerably improved compared with the original GR method. Our work highlights the positive role of users’ grouping behavior towards a better reputation evaluation. The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It makes two strong assumptions about posterior inference: that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network’s entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. 
We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks. We introduce a parsimonious model-based framework for clustering time course data. In these applications the computational burden often becomes an issue due to the number of available observations. The measured time series can also be very noisy and sparse and a suitable model describing them can be hard to define. We propose to model the observed measurements by using P-spline smoothers and to cluster the functional objects as summarized by the optimal spline coefficients. In principle, this idea can be adopted within all the most common clustering frameworks. In this work we discuss applications based on a k-means algorithm. We evaluate the accuracy and the efficiency of our proposal by simulations and by dealing with Drosophila melanogaster gene expression data. Many sensors/meters are deployed in commercial buildings to monitor and optimize their performance. However, because sensor metadata is inconsistent across buildings, software-based solutions are tightly coupled to the sensor metadata conventions (i.e. schemas and naming) for each building. Running the same software across buildings requires significant integration effort. Metadata normalization is critical for scaling the deployment process and allows us to decouple building-specific conventions from the code written for building applications. It also allows us to deal with missing metadata. One important aspect of normalization is to differentiate sensors by the type of phenomena being observed. In this paper, we propose a general, simple, yet effective classification scheme to differentiate sensors in buildings by type. We perform ensemble learning on data collected from over 2000 sensor streams in two buildings. 
Our approach achieves more than 92% accuracy for classification within buildings and more than 82% accuracy across buildings. We also introduce a method for identifying potentially misclassified streams. This is important because it allows us to identify opportunities to attain more input from experts — input that could help improve classification accuracy when ground truth is unavailable. We show that by adjusting a threshold value we are able to identify at least 30% of the misclassified instances.

Partially observed Markov process (POMP) models, also known as hidden Markov models or state-space models, are ubiquitous tools for time series analysis. The R package pomp provides a very flexible framework for Monte Carlo statistical investigations using nonlinear, non-Gaussian POMP models. A range of modern statistical methods for POMP models have been implemented in this framework, including sequential Monte Carlo, iterated filtering, particle Markov chain Monte Carlo, approximate Bayesian computation, maximum synthetic likelihood estimation, nonlinear forecasting, and trajectory matching. In this paper, we demonstrate the application of these methodologies using some simple toy problems. We also illustrate the specification of more complex POMP models, using a nonlinear epidemiological model with a discrete population, seasonality, and extra-demographic stochasticity. We discuss the specification of user-defined models and the development of additional methods within the programming environment provided by pomp.
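pomp itself is an R package, so the snippet below is not its API; it is only a minimal, self-contained Python sketch of a bootstrap particle filter — the simplest sequential Monte Carlo method in the list above — run on a toy linear-Gaussian state-space model:

```python
import numpy as np

# Toy linear-Gaussian POMP: x_t = 0.9 * x_{t-1} + v_t,  y_t = x_t + w_t.
rng = np.random.default_rng(42)
T, N = 50, 500                 # time steps, particles
phi, sv, sw = 0.9, 1.0, 0.5    # AR coefficient, process and observation noise

x = np.zeros(T)                # simulate a latent path and noisy observations
for t in range(1, T):
    x[t] = phi * x[t - 1] + sv * rng.normal()
y = x + sw * rng.normal(size=T)

particles = rng.normal(size=N)
loglik, means = 0.0, []
for t in range(T):
    particles = phi * particles + sv * rng.normal(size=N)   # propagate
    logw = -0.5 * ((y[t] - particles) / sw) ** 2            # Gaussian log-weights
    m = logw.max()
    w = np.exp(logw - m)
    loglik += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sw ** 2)
    w /= w.sum()
    means.append(np.sum(w * particles))                     # filtered mean of x_t
    particles = rng.choice(particles, size=N, p=w)          # multinomial resampling

print(round(float(np.mean(np.abs(np.array(means) - x))), 2))
```

The filtered means track the latent path to within roughly the observation noise, and `loglik` accumulates the same marginal likelihood estimate that pomp's `pfilter` computes (with many refinements).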
https://grazeconomics.wordpress.com/2018/02/25/on-lying-i/
# On Lying, I

There are many forms of lying, from so-called white lies that are really just a form of politeness to deliberate attempts to misrepresent the truth to fashion policy (of some institution) in your own interest. I am here interested in something somewhere in the middle of the lying spectrum: children lying about something to avoid a slightly unpleasant duty. We all know that a child’s answer to “Have you brushed your teeth?” is not always completely truthful. In this and the next two blog posts, using the language of game theory, I want to discuss the incentives to lie and how one could perhaps teach children not to lie. I don’t think I need to provide empirical evidence that children lie on occasion. In case you forgot your own childhood you may want to look at Eddie Izzard’s treatment of this subject. To fix ideas let me tell you about a game I used to play with my kids when they were very little. I call it the nappy-changing game. We used to play it often. The situation is always more or less as follows. I am on nappy duty and one of my little ones, let’s call him Oscar, is busy playing. Walking past him, I get a whiff of an interesting smell. I ask Oscar “Is your nappy full?” and Oscar invariably answers with a loud “No“. How can we rationalize this “data”? First I need to describe the game between the two of us. The game, crucially, is one of incomplete information. While I believe it is safe to assume that Oscar knows the state of his nappy, I do not. This is the whole point of the game, of course. If I already knew everything there would be no point in Oscar lying. And if Oscar did not know the state of the nappy himself, one could also hardly call his behavior lying; it would just be an expression of his ignorance. But I am pretty sure Oscar knows the state of his nappy. So let us assume that Oscar’s nappy can only be in one of two states: full or clean.
A game, to be well defined, needs to have players, strategies, and payoffs (or utilities). The players are obvious: Oscar and I. The strategies, taking into account the information structure, are as follows. I always ask the question, so let this not be part of the game. Then Oscar can say “Yes” or “No” and can make his choice of action a function of the state of the nappy. This means he has four (pure) strategies: always say yes (regardless of the state of the nappy), be truthful (say yes when the nappy is full and say no otherwise), use “opposite speak” (say no when the nappy is full and say yes otherwise), and always say no. I listen to Oscar’s answer and now have four (pure) strategies as well: always check Oscar’s nappy (regardless of Oscar’s answer), trust Oscar (check the nappy if he says yes, leave Oscar in peace if he says no), understand Oscar’s answer as opposite speak (check the nappy if he says no, leave Oscar in peace if he says yes), and always leave Oscar in peace. Let us now turn to the payoffs in this game. My payoffs are as follows. I want to do the appropriate thing given the state of the nappy. So let’s say I receive a payoff of one if I check Oscar’s nappy when it is full and also if I do not check Oscar’s nappy when it is clean (I will find out eventually!). In the other two cases I receive a payoff of zero. I thus receive a zero payoff when I check the nappy when it is not full and also when I do not check the nappy when it is full (as I have said, I will find out eventually!). One could play with these payoffs, but nothing substantial would change as long as we maintain that I prefer checking the nappy over not checking when it is needed, and I prefer not checking over checking when it is not needed. What about Oscar’s payoffs? I think it is fair to assume that he always prefers not checking, i.e., that I leave him alone.
I am sure he would eventually also want me to change him, but much, much later than I would want to do it, and I will eventually find out and change him. So I think it is OK to assume that Oscar prefers to be left in peace at the moment of my asking, regardless of the state of the nappy. So let us give him a payoff of one when he is left alone and a payoff of zero when I check his nappy (in either state). There is one thing I still need to do with this model: I need to close it informationally. The easiest way to do this is to assume that ex ante there is a commonly known (between Oscar and myself) probability of the state of the nappy being full. Let us call it $\alpha$ and let us assume (recall the whiff I got) that $\alpha > \frac12$. Now the assumption of a commonly known probability of the nappy being full is a ridiculous one; it is, I am sure, never true. But it allows me to analyze the game more easily, and I believe that in the present case it is not crucial. I believe that the eventual equilibrium of this game will be quite robust to slight changes in the informational structure. I leave it to the readers to think about this for themselves. With all this I can write this game down in so-called normal form, as a 4 by 4 bi-matrix game. $\begin{array}{c|cccc} & \mbox{always c} & \mbox{trust} & \mbox{opposite} & \mbox{never c} \\ \hline \mbox{always yes} & 0,\alpha & 0,\alpha & 1,1-\alpha & 1,1-\alpha \\ \mbox{truthful} & 0,\alpha & 1-\alpha,1 & \alpha,0 & 1,1-\alpha \\ \mbox{opposite} & 0,\alpha & \alpha,0 & 1-\alpha,1 & 1,1-\alpha \\ \mbox{always no} & 0,\alpha & 1,1-\alpha & 0,\alpha & 1,1-\alpha \\ \end{array}$ Oscar chooses the row, I choose the column, and the numbers in the matrix are the ex-ante expected payoffs that arise from the various strategy combinations. In each cell of the matrix Oscar’s payoff is the first entry, mine the second. Once all this is in place it is easy to identify equilibria of this game.
Note that as $\alpha > \frac12$, my strategy to never check (never c) is strictly dominated by my strategy to always check (always c). My ideal point would be that Oscar is truthful and I can trust him, but Oscar in that case has an optimal deviation to always say no. In that case I had better not trust him and instead always check his nappy. This is indeed the only pure-strategy equilibrium of this game. Well, there is also one in which Oscar always says yes and I always check him, but this is really the same. Note that language has no intrinsic meaning in this game. The meaning of language in this game could only potentially arise in equilibrium. So what did we learn from this so far? Clearly Oscar’s behavior (of lying) is not irrational (it is a best reply to my behavior). But it has the, from his – and also my – point of view, unfortunate side effect that I do not trust him, so his lying does not fool me. This game is, by the way, an example of a sender-receiver game. See Joel Sobel’s paper on Signaling Games. In fact it is an example of a special class of sender-receiver games: so-called cheap-talk games. See Joel Sobel’s paper on Giving and Receiving Advice for further reading. In the language of these games the lying equilibrium between Oscar and me is called a pooling equilibrium. It is called so because the two kinds of Oscar, the one with a full and the one with a clean nappy, both send the same “message”. The two Oscars play this game in such a way that I cannot differentiate between them. Hence the term pooling. In the next post I will take this game up again and consider what can happen if we play this game over and over again, as my kids and I did in the nappy days.
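For readers who want to check this mechanically: the pure-strategy equilibria can be found by brute force over the 4 by 4 bi-matrix above. A minimal Python sketch (the value $\alpha = 0.7$ is just an illustrative choice; any $\alpha > \frac12$ gives the same two equilibria):

```python
import numpy as np

alpha = 0.7  # illustrative prior that the nappy is full; any alpha > 1/2 works

oscar = ["always yes", "truthful", "opposite", "always no"]     # row strategies
parent = ["always check", "trust", "opposite", "never check"]   # column strategies

# Ex-ante expected payoffs from the bi-matrix above: A for Oscar, B for the parent.
A = np.array([[0, 0,         1,         1],
              [0, 1 - alpha, alpha,     1],
              [0, alpha,     1 - alpha, 1],
              [0, 1,         0,         1]], dtype=float)
B = np.array([[alpha, alpha,     1 - alpha, 1 - alpha],
              [alpha, 1,         0,         1 - alpha],
              [alpha, 0,         1,         1 - alpha],
              [alpha, 1 - alpha, alpha,     1 - alpha]], dtype=float)

# A cell (i, j) is a pure Nash equilibrium when each payoff is a best reply:
# maximal in its column for Oscar, maximal in its row for the parent.
equilibria = [(oscar[i], parent[j])
              for i in range(4) for j in range(4)
              if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]
print(equilibria)
```

Both equilibria it finds are the pooling outcomes described above: Oscar sends the same answer in both states, and I always check.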
http://tubiblio.ulb.tu-darmstadt.de/52751/
# Explicit series solution of a closure model for the Kármán-Howarth equation

## Abstract

The Homotopy Analysis Method (HAM) is applied to the nonlinear problem with boundary conditions f(ξ) = 1 at ξ = 0 and f(ξ) → 0 as ξ → +∞. The problem is associated with a closure model for the von Kármán-Howarth equation, in terms of the normalized two-point double velocity correlation in the limit of infinite Reynolds number. Though the latter differential equation admits no Lie point symmetry groups, it is still integrable once for the values σ = 0 and σ = 4 by means of integrating factors; σ = 4 is the only case that remains integrable for the given boundary conditions. The key result is that HAM is employed for the generic case, so that solutions for arbitrary σ are derived. By choosing the correct parameters within the HAM framework, we obtain explicit analytic solutions via recursive formulae with constant coefficients, using transformations of variables in order to deal with a polynomial type of equation. In the appendix, we prove that the Loitsyansky invariant is the conservation law for the asymptotic form of the original equation.

Item Type: Article
Year: 2011
Authors: Liu, Zeng; Oberlack, Martin; Grebenev, Vladimir N.; Liao, Shi-Jun
Journal: ANZIAM Journal
Publisher: Australian Mathematical Society
Keywords: Homotopy analysis method; von Kármán-Howarth equation; solutions in closed form; conservation law
Department: 16 Department of Mechanical Engineering > Fluid Dynamics (fdy)
Deposited: 24 Aug 2011 18:17
https://www.nature.com/articles/s41598-020-78902-1
# An interactive holographic projection system that uses a hand-drawn interface with a consumer CPU

## Abstract

Holography is a promising technology for photo-realistic three-dimensional (3D) displays because of its ability to replay the light reflected from an object using a spatial light modulator (SLM). However, the enormous computational requirements for calculating computer-generated holograms (CGHs)—which are displayed on an SLM as a diffraction pattern—are a significant problem for practical uses (e.g., for interactive 3D displays for remote navigation systems). Here, we demonstrate an interactive 3D display system using electro-holography that can operate on a consumer CPU. The proposed system integrates an efficient and fast CGH computation algorithm for line-drawn 3D objects with inter-frame differencing, so that the trajectory of a line-drawn object that is handwritten on a drawing tablet can be played back interactively using only the CPU. In this system, we used an SLM with 1,920 $$\times$$ 1,080 pixels and a pixel pitch of 8 μm $$\times$$ 8 μm, a drawing tablet as an interface, and an Intel Core i9–9900K 3.60 GHz CPU. Numerical and optical experiments using a dataset of handwritten inputs show that the proposed system is capable of reproducing handwritten 3D images in real time with sufficient interactivity and image quality.

## Introduction

Electro-holography is a very promising technology for three-dimensional (3D) display systems, because it is possible to fully reproduce the light reflected from an object using a spatial light modulator (SLM).
However, the enormous calculation requirements for producing computer-generated holograms (CGHs), which are displayed on an SLM to modulate the incident light, are a significant problem for practical applications (e.g., for an interactive 3D display for a car navigation system). Consequently, there have been many studies of fast CGH calculation methods, such as look-up table (LUT)-based1,2,3,4,5, sparsity-based6,7,8, polygon-based9,10,11,12, and hardware-based13,14,15 approaches. With the evolution of both hardware and algorithms, the required computational complexity has been greatly reduced, and computational speed has improved dramatically compared to the early days of this area of research. However, to achieve practical computing speeds, high-performance computers, such as graphics processing units (GPUs) or field-programmable gate arrays (FPGAs), are still required. On the other hand, many research topics aimed at improving the performance of holographic displays (e.g., enlarging the viewing angle16,17, projecting full-color images18,19, or implementing interactions on the display system20,21) are active domains of research, because holographic displays are expected to be effective visual interfaces. For example, Sando et al. proposed a 3D holographic-display system with a digital micromirror device (DMD) and rotating mirrors. They calculated CGHs using a 3D fast Fourier transform (FFT)-based algorithm and succeeded in projecting 360$$^\circ$$-viewable 3D video at 10 Hz using a GPU with mouse and keyboard interactions20. Yamada et al. proposed an interactive, full-color holographic video system with finger-gesture input21. They captured finger gestures with Leap Motion sensors and used an input interface for rotate-and-zoom operations. Although such interactive holographic-display systems have been successfully implemented at the laboratory level, they usually require GPU-based computation; therefore, miniaturization and power saving remain significant challenges.
Recently, the present authors have reported a fast CGH calculation algorithm for projecting a 3D object comprised of line-drawn objects22. Figure 1 shows an overview of the algorithm. Our algorithm, called the “Computer Graphics (CG)-line method” in this paper, exploits the fact that the wavefronts from linearly aligned point-light sources (PLSs) at the same depth converge into a wavefront that is compressible in one dimension (1D); we call this pattern a 1D wavefront (1DWF). That is, instead of superimposing two-dimensional PLS wavefronts, as in the conventional LUT method, we have succeeded in greatly improving the computational speed by superimposing 1DWFs along the normal direction of a line. However, we did not previously create an interactive system using this method, and the computation speed was not fast enough to generate smooth interactive playback. This paper proposes a holographic 3D display system that interactively displays the outlines of handwritten characters and basic shapes in the air. In this system, we applied inter-frame subtraction to the CG-line method to improve the computational speed and produce an interactive display working on a CPU. For animation that gradually completes a 3D image, such as handwritten input, the CGH is obtained by integrating the results of the holographic calculations for the input between frames. Therefore, by applying inter-frame subtraction to the 3D video, the amount of computation performed at each frame can be limited, and the computation time can be reduced significantly. The proposed system can be applied to 3D display systems with augmented reality that overlay simple information or instructions on the human field of view (e.g., a car navigation system or a remote operational support system for workers) with attention-getting animations (e.g., strokes or splits).

## Results

Figure 2(a) shows a schematic illustration of the proposed system, and Fig.
2(b) shows an image of the laboratory setup for this system. We used a phase-modulation-type SLM (Holoeye AG, ‘HEO 1080P’) with a 1,920 $$\times$$ 1,080 pixel resolution and a pixel pitch of 8 μm $$\times$$ 8 μm. The modulation quantization has 256 levels (8 bits), and the refresh rate is 60 Hz. A green-light laser (Light Vision, JPM-1-3-(A4)APC) with a wavelength of 532 nm is used as the light source to project a monochromatic image, and a drawing tablet (Wacom One, DTC133) is the interactive interface employed to input progressively varying, handwritten, line-drawn 3D objects. The 3D image is projected in front of the SLM as a real image to enhance its visibility, and it can be observed on the image sensor of the camera. Note that the 3D image can easily be converted into a virtual image by changing the sign in the equation for the CGH calculation. We used the following computer environment: Microsoft Windows 10 Enterprise operating system; Intel Core i9–9900K 3.60 GHz CPU and 64 GB DDR4–2666 memory; Microsoft Visual C++ 2017 compiler with single floating-point computational precision. Figure 3 and Supplementary Video 1 show an example of an interactive 3D projection (the handwritten letters ‘a’ and ‘b’ were projected 0.25 m and 0.16 m, respectively, in front of the SLM in real time). The frame rate of the 3D picture was variable but reached 60 fps. As the supplemental video shows, there is no significant delay or lag between input and display. To check the image quality, we compared the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) of numerically reconstructed images from the CGHs obtained using the proposed method with the images written on the drawing tablet, which we considered ground truth. The result of the numerical simulation is shown in Fig. 4. The PSNR of ‘a’ is 25.8 dB while that of ‘b’ is 28.8 dB, and the SSIM of ‘a’ is 0.696 while that of ‘b’ is 0.748.
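The PSNR values quoted here follow the standard definition, 10 log10(MAX² / MSE). As an illustrative sketch (not the authors' evaluation code, and with SSIM omitted), with random images standing in for the tablet ground truth and the reconstruction:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better, and values
    around 30 dB are commonly read as high quality for 8-bit images."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
truth = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(truth + rng.normal(0.0, 8.0, size=truth.shape), 0, 255)
print(round(psnr(truth, noisy), 1))   # noise with std 8 lands near 30 dB
```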
In general, it is difficult to distinguish between two images with a PSNR greater than 30 dB, so the 3D image quality of this system is almost sufficient. In order to generalize the results above, we conducted a comparative experiment on the response performance and image quality of 3D projections of handwriting input. We used a public dataset of handwritten input data23 with three CGH computation algorithms (the N-LUT method1 with frame subtraction, an FFT-based angular-spectrum method (ASM)24, and the CG-line method22 with frame subtraction). Here, we did not apply any iterative algorithm for optimizing the generated wavefront (e.g., the Gerchberg-Saxton algorithm25) in any of the CGH computation algorithms, because such algorithms are time-consuming and thus not desirable for our intended application. The dataset was downloaded using a 13.3-in. tablet PC, and it contains the timestamps of control-point coordinates for 20 subjects drawing 16 gestures 10 times at three different speeds (slow, normal, and fast). We selected datasets for 10 subjects (Nos. 10, 11, 12, 22, 28, 68, 71, 88, 95, and 98) at fast speed for the first try; i.e., we prepared 160 datasets in total. Since the original dataset was written in too small a region, for this experiment the coordinates and the input time were multiplied by a factor of 5; i.e., we enlarged the input coordinates while keeping the drawing speed. First, we measured the margin of CGH computation time between sequential handwritten inputs in the datasets; if the CGH computation time is longer than the input interval, the 3D display does not properly follow the handwritten input. Here, the update period of the CGH from the complex-amplitude distribution was set to the interval of the timestamps contained in the dataset. This corresponds to the sampling rate of the drawing tablet, which we argue is a reasonable setting.
Figure 5 shows the average margin time for each calculation method and for datasets with different reconstruction depths (0.1 m, 0.3 m, and 0.5 m), and Table 1 shows the average margin time for all datasets. Here, a positive margin time means that the CGH calculation time is shorter than the input interval. Further, since the ASM computation time depends only on the resolution, the average margin time was calculated from the ASM execution time for an aperture image of 1,920 $$\times$$ 1,080 pixels. When the CG-line method with frame subtraction was applied, almost all of the margin times were positive and sufficiently large, except for a few extreme data points that can be attributed to noise. On the other hand, when the N-LUT method was applied, the margin time was positive when the reconstruction distance was 0.1 m; however, at 0.3 m and beyond the margin time became increasingly negative. This is because the computational cost of the N-LUT method is positively correlated with the reproduction distance. The average margin times were also negative in the ASM case. Second, we measured the average PSNR and SSIM of each image numerically reconstructed from the CGHs created with the CG-line method, with the N-LUT method, and with the ASM, using the images written on the drawing tablet as ground truth. Note that the CGHs were used after completing the input and drawing for each dataset. Figure 6 shows the PSNR and SSIM for each reconstruction distance and dataset, and Table 2 shows the average PSNR and SSIM for all datasets. Although the PSNR and SSIM of the CG-line method are inferior to those of the ASM, at each depth the PSNR of the CG-line method is close to or exceeds 30 dB, which is the standard for high-quality 2D images. Figure 7 shows a reproduction simulation and an example of an optically reconstructed image.
Here, we observed that the reproduction distance of the optically reproduced image was displaced by about 0.2 m for the 0.5 m reproduced image and 0.08 m for the 0.3 m reproduced image for all the calculation methods, due to problems in the optical system such as unintentional diffraction in the half-mirror. However, numerical simulations have shown that the image formation is correct, and this issue can be corrected in the future by precise adjustment of the optical system. In addition to the above experiments, we also conducted more practical experiments in which English and Japanese words are projected at the same depth with the proposed method. Supplementary Video 2 shows the results of the optical experiments, and Fig. 8 shows a part of the results. In this experiment, “Tokyo”, “Tokyo” in Kanji, “HELLO”, and “Hello” in Japanese Hiragana were drawn at 0.16 m from the SLM, and they were projected in real time with sufficient quality to be recognized. The results of the image quality evaluation by numerical simulation are shown in Table 3.

## Discussion

In terms of computation time, a sufficient processing-time margin is obtained by applying the CG-line method and frame subtraction. We calculated this using a dataset obtained from an experiment conducted with multiple subjects using a generic drawing tablet; therefore, we can say definitively that the proposed system can display 3D images with sufficient interactivity at normal writing speed. In order to clarify the computational performance of the proposed system, we also investigated the relationship between line length and CGH computation time. The results are shown in Fig. 9. In this experiment, the computation of complex-amplitude distributions using the CG-line method and its conversion to CGH were evaluated separately in order to understand the influence of the line length on the computation speed.
For the computation of the complex-amplitude distribution, we want to prevent incorrect evaluation of the computational speed in cases for which the 1DWF data lies (partially) outside of the CGH region and is thereby ignored. To address this issue, we prepared a complex-amplitude buffer that can accommodate the whole domain of the 1DWF, and we computed complex-amplitude distributions for straight-line-drawn objects with lengths ranging from 8 μm to 8 mm (1 to 1,000 pixels on a drawing tablet) and for depths of 0.1 m to 1.9 m. For the conversion to CGH, we calculated the average processing time for converting complex-amplitude distributions of 1,920 $$\times$$ 1,080 pixels to CGHs, which was 19.4 ms. As shown in Fig. 9(a), the processing time is approximately proportional to the line length, and the farther away the playback distance, the longer the required processing time. This is because the length of the 1DWF, which must account for aliasing on the SLM, increases with the playback distance. The average input length per millisecond of the dataset used in this study was 0.438 pixels (3.504 μm on the SLM), suggesting that the proposed system is capable of reproducing 3D images with sufficient interactivity at 1.9 m or less, as shown in Fig. 9(b). Table 1 shows that the computation time of the CG-line method is less affected by an increase in the playback distance than that of the N-LUT method. As shown in Fig. 9(a), the computation time of the CG-line method is positively correlated with the playback distance, as is that of the N-LUT method. However, since the CG-line method synthesizes 1D wavefronts, whereas the N-LUT method synthesizes 2D wavefronts, the effect of an increase in distance on the computation time is small. Therefore, under the conditions of this paper, the margin time of the CG-line method is almost invariant to the playback distance.
In the case of FFT-based CGH generation methods such as the ASM, the computation time depends on the resolution, as shown in Table 1. On the other hand, the computation time of the proposed method depends almost entirely on the total line length of the object. Therefore, the FFT-based method is more advantageous for projecting complex objects, while the proposed method is suitable for simple objects. For practicality and versatility, we should in the future adopt an approach that adaptively switches the computation method depending on the complexity of the object. We are currently developing an algorithm for switching the computational methods. In terms of image quality, the proposed method is inferior to the ASM and N-LUT methods. However, it shows a PSNR that is close to the high-image-quality standard. In addition, from the simulation and optical reproduction experiments, it is clear that the proposed system achieves sufficient image quality, considering that the 3D images can be recognized clearly.

## Conclusion

We have proposed an interactive 3D projection system based on electro-holography that runs on a consumer CPU with a drawing tablet. By limiting the projectable 3D images to multiple 2D images composed of outlines and applying inter-frame subtraction to the CG-line method, we have constructed a system with sufficient interactivity even on ordinary CPUs. Since our system’s computational cost depends mainly on the complexity of the 3D image and not on the resolution of the CGH, it is suitable for the projection of 3D images that add depth to simple figures such as symbols and letters. Therefore, our system is expected to be applied to car navigation systems and remote work support systems. On the other hand, it is still difficult to draw 3D objects such as polygon meshes, because the current algorithm cannot draw continuous lines in the depth direction.
Moreover, since an approximation is added to the CGH creation process, the image quality is degraded compared with the conventional method. Therefore, improving the proposed method's expressiveness and image quality while retaining its speed are critical issues to be addressed in the future. ## Methods The CGH calculation for a 3D object composed of PLSs is defined as: \begin{aligned} u(x_h,y_h) = \sum _{j=1}^{N}\frac{A_j}{r_{hj}}\exp \left( i\frac{2\pi }{\lambda }r_{hj}\right) , \end{aligned} (1) where $$u(x_h,y_h)$$ is the complex-amplitude distribution on the hologram plane, which is a virtual plane used to simulate the wavefront from the PLSs. Here, N is the number of PLSs, i is the imaginary unit, $$A_j$$ is the amplitude of the j-th PLS, $$\lambda$$ is the wavelength of the object and reference beam, $$r_{hj} =\{(x_h-x_j)^2+(y_h-y_j)^2+z_j^2\}^{1/2}$$ is the distance between the j-th PLS and pixel $$(x_h,y_h)$$ of the hologram plane, and $$(x_j,y_j,z_j)$$ are the coordinates of the j-th PLS. In this system, a kinoform CGH [24] is created from the complex-amplitude distribution on the hologram plane; i.e., the CGH $$c(x_h,y_h)$$ is given by \begin{aligned} c(x_h,y_h) = \left\lfloor \arg \{u(x_h,y_h)\}\frac{2^b}{2\pi } \right\rfloor , \end{aligned} (2) where $$\arg (\cdot )$$ is the operator for taking the argument of a complex number, $$\lfloor \cdot \rfloor$$ is the floor operator (rounding down), and b is the bit depth of the CGH. Here, we set $$b = 8$$. Note that our system can also be applied to amplitude-type CGHs by changing equation (2). Equation (1) can be processed efficiently using the N-LUT method [1]. Defining the wavefront created by one PLS as \begin{aligned} T(x_h,y_h,x_j,y_j,z_j)\equiv \frac{1}{r_{hj}}\exp \left( i\frac{2\pi }{\lambda }r_{hj}\right) , \end{aligned} (3) equation (1) can be rewritten as \begin{aligned} u(x_h,y_h) = \sum _{j=1}^{N}A_jT(x_h,y_h,x_j,y_j,z_j).
\end{aligned} (4) Because T has a shift-invariant property around $$(x_j,y_j)$$ on the hologram plane, the N-LUT method precomputes T with respect to the possible values of $$z_j$$ and substitutes it directly into equation (4) by superimposing it on the hologram plane. Considering aliasing on the SLM, the effective radius of T at depth z is \begin{aligned} {R_z = z\frac{\lambda }{(4p^2-\lambda ^2)^{1/2}},} \end{aligned} (5) where p is the pixel pitch of the SLM. If $$(x_h,y_h)$$ is outside this effective radius, the superimposition process is skipped. Thus, \begin{aligned} T(x_h,y_h,x_j,y_j,z_j) = {\left\{ \begin{array}{ll} \text {Equation } (3) &{} d_j\le R_{z_j}, \\ 0 &{} otherwise, \end{array}\right. } \end{aligned} (6) where $$d_j=\{(x_h-x_j)^2 + (y_h-y_j)^2\}^{1/2}$$. For PLSs with constant amplitude aligned on a straight line in a plane at a given z, the wavefronts on the hologram plane converge to a 1D wavefront. Given that PLSs with $$A_j=1$$ are aligned from $$-\infty$$ to $$+\infty$$ on the x-axis, the wavefront on a line parallel to the y-axis at a given $$x_h$$ becomes \begin{aligned} L(y_h,z) = \sum _{x_j=-R_{z}}^{R_{z}}T(x_h,y_h,x_j,0,z), \end{aligned} (7) where $$-R_{z}\le y_h \le R_{z}$$. According to equation (7), when the line length is longer than $$2R_{z}$$, the wavefront on a line parallel to the y-axis at any $$x_h$$ that is farther than $$R_{z}$$ from both end points is replaced by L. Note that L is invariant to rotations and horizontal movements of the line. The CG-line method approximately replaces the wavefront with L even when the line length and position do not meet the conditions described above or when the PLSs are aligned on a curve. In other words, the CGH computation is replaced by superimposing L in the normal direction at each point of the line. Note that in the other sections of this paper, we use the abbreviation 1DWF to represent L. Figure 1(b) shows an overview of the CG-line method.
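As a concrete illustration of equations (5)-(7), the following NumPy sketch samples a 1DWF on a line at $$x_h=0$$. The wavelength and pixel pitch below are assumed example values (a 532 nm laser and an 8 μm SLM pitch), not parameters taken from the paper.

```python
import numpy as np

def effective_radius(z, wavelength, pitch):
    # Eq. (5): aliasing-free radius of the zone plate at depth z
    return z * wavelength / np.sqrt(4 * pitch**2 - wavelength**2)

def one_d_wavefront(z, wavelength, pitch):
    # Eq. (7): superimpose point-source wavefronts T of PLSs placed along
    # the x-axis, sampled on a line parallel to the y-axis at x_h = 0
    R = effective_radius(z, wavelength, pitch)
    n = int(np.ceil(R / pitch))
    y = np.arange(-n, n + 1) * pitch       # sample points on the line
    xs = np.arange(-n, n + 1) * pitch      # PLS positions on the x-axis
    L = np.zeros_like(y, dtype=complex)
    for xj in xs:
        r = np.sqrt(xj**2 + y**2 + z**2)   # r_hj with (y_j, z_j) = (0, z)
        d = np.sqrt(xj**2 + y**2)          # d_j of Eq. (6), x_h = 0, y_j = 0
        # Eq. (6): contribute only inside the effective radius
        L += np.where(d <= R, np.exp(1j * 2 * np.pi / wavelength * r) / r, 0.0)
    return y, L

# assumed parameters: 532 nm laser, 8 um pixel pitch, playback distance 0.1 m
y, L = one_d_wavefront(z=0.1, wavelength=532e-9, pitch=8e-6)
```

Because the point sources and sample grid are symmetric about the origin, the resulting 1DWF is symmetric in $$y_h$$, matching the rotational invariance noted above.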
If $$f_z(x_h)$$ is the function of a line-drawn object at depth z projected on the hologram plane, $$\vec {n}_{z}(x_h)$$ is a unit vector normal to the curve $$f_z(x_h)$$, $${\vec {h}}_{z}(x_h)\equiv [x_h,f_z(x_h)]$$, and $${\vec {x}}_{h}=(x_h,y_h)$$, the CGH calculation of equation (1) becomes \begin{aligned} u(x_h,y_h) = \sum _{z}\sum _{x}L\{\vec {n}_{z}(x)\cdot {\vec {x}}_{h},z\}. \end{aligned} (8) Here, \begin{aligned} L\{\vec {n}_{z}(x)\cdot {\vec {x}}_{h},z\} = {\left\{ \begin{array}{ll} \text {Equation }(7) &{} \vec {n}_{z}(x) \parallel \vec {d} \text { and } |\vec {n}_{z}(x)\cdot {\vec {x}}_{h}| \le R_z, \\ 0 &{} otherwise, \end{array}\right. } \end{aligned} (9) where $$\vec {d}={\vec {x}}_{h}-{\vec {h}}_{z}(x)$$. In this system, the input coordinates from the drawing tablet are sampled at adjustable intervals as control points of the curve, and they are functionalized using the Catmull-Rom spline, an interpolating polynomial that generates smooth curves through the control points. Since the last two or three points can be used to define a curve, this method is ideal for our system, which displays the pen-input trajectory sequentially. The implementation is based on open-source software [26]. Furthermore, the computational burden is reduced by applying inter-frame subtraction to lines that are inputted continuously. In other words, the algorithm calculates only the complex-amplitude distributions corresponding to the lines added since the last frame update and accumulates them. The software used in this system has a graphical user interface (GUI) built with OpenCV 4.1.1 [27]. Figure 10 shows an example of the GUI display. The user selects the reconstruction depth of the 3D image from the color bar displayed on the right side of the drawing tablet and draws the object to be displayed on the SLM. The color of the locus on the tablet corresponds to the depth.
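The interpolation step can be sketched as follows. This is the standard uniform Catmull-Rom basis, not code taken from the cited cpp-spline implementation, and the function name is illustrative; the segment between control points p1 and p2 is traced as the parameter t runs from 0 to 1.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    # Uniform Catmull-Rom spline: interpolates between p1 and p2 with C1
    # continuity, using p0 and p3 to set the tangents at the endpoints.
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)
```

At t = 0 the formula reduces to p1 and at t = 1 to p2, so the curve passes exactly through the sampled pen positions, which is why only the last few control points are needed to extend the trajectory.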
Based on the input-coordinate information, the software computes the curve function and superimposes the appropriate 1DWF in the normal direction at each point on the curve. These processes are parallelized with OpenMP, and the proposed system runs simultaneously on all cores of the CPU.

## Data availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

## References

1. Kim, S.-C. & Kim, E.-S. Effective generation of digital holograms of three-dimensional objects using a novel look-up table method. Appl. Opt. 47, D55–D62 (2008).
2. Kim, S.-C. & Kim, E.-S. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods. Appl. Opt. 48, 1030 (2009).
3. Lucente, M. E. Interactive computation of holograms using a look-up table. J. Electron. Imaging 2, 28 (1993).
4. Nishitsuji, T., Shimobaba, T., Kakue, T., Masuda, N. & Ito, T. Fast calculation of computer-generated hologram using the circular symmetry of zone plates. Opt. Express 20, 27496–27502 (2012).
5. Nishitsuji, T., Shimobaba, T., Kakue, T. & Ito, T. Fast calculation of computer-generated hologram using run-length encoding based recurrence relation. Opt. Express 23, 9852–9857 (2015).
6. Blinder, D. & Schelkens, P. Accelerated computer generated holography using sparse bases in the STFT domain. Opt. Express 26, 1461–1473 (2018).
7. Blinder, D. Direct calculation of computer-generated holograms in sparse bases. Opt. Express 27, 23124–23137 (2019).
8. Shimobaba, T. & Ito, T. Fast generation of computer-generated holograms using wavelet shrinkage. Opt. Express 25, 77–87 (2017).
9. Kim, H., Kwon, J. & Hahn, J. Accelerated synthesis of wide-viewing angle polygon computer-generated holograms using the interocular affine similarity of three-dimensional scenes. Opt. Express 26, 16853–16874 (2018).
10. Matsushima, K. & Nakahara, S. Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method. Appl. Opt. 48, H54–H63 (2009).
11. Zhang, Y.-P., Wang, F., Poon, T.-C., Fan, S. & Xu, W. Fast generation of full analytical polygon-based computer-generated holograms. Opt. Express 26, 19206–19224 (2018).
12. Jackin, B. J. et al. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster. Appl. Opt. 57, 3134 (2018).
13. Seo, Y.-H., Lee, Y.-H. & Kim, D.-W. ASIC chipset design to generate block-based complex holographic video. Appl. Opt. 56, D52–D59 (2017).
14. Ichihashi, Y., Oi, R., Senoh, T., Yamamoto, K. & Kurita, T. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms. Opt. Express 20, 21645–21655 (2012).
15. Nishitsuji, T. et al. Special-purpose computer HORN-8 for phase-type electro-holography. Opt. Express 26, 26722–26733 (2018).
16. Liu, Y.-Z., Pang, X.-N., Jiang, S. & Dong, J.-W. Viewing-angle enlargement in holographic augmented reality using time division and spatial tiling. Opt. Express 21, 12068–12076 (2013).
17. Yaraş, F., Kang, H. & Onural, L. Circular holographic video display system. Opt. Express 19, 9147–9156 (2011).
18. Lin, S.-F., Cao, H.-K. & Kim, E.-S. Single SLM full-color holographic three-dimensional video display based on image and frequency-shift multiplexing. Opt. Express 27, 15926–15942 (2019).
19. Kozacki, T. & Chlipala, M. Color holographic display with white light LED source and single phase only SLM. Opt. Express 24, 2189–2199 (2016).
20. Sando, Y., Satoh, K., Barada, D. & Yatagai, T. Real-time interactive holographic 3D display with a 360$$^\circ$$ horizontal viewing zone. Appl. Opt. 58, G1–G5 (2019).
21. Yamada, S., Kakue, T., Shimobaba, T. & Ito, T. Interactive holographic display based on finger gestures. Sci. Rep. 8, 2010 (2018).
22. Nishitsuji, T., Shimobaba, T., Kakue, T. & Ito, T. Fast calculation of computer-generated hologram of line-drawn objects without FFT. Opt. Express 28, 15907–15924 (2020).
23. Anthony, L. & Wobbrock, J. O. $N-protractor: a fast and accurate multistroke recognizer. Proc. Graph. Interface 2012, 117–120 (2012).
24. Goodman, J. W. Introduction to Fourier Optics 3rd edn. (Roberts and Company Publishers, Greenwood, 2017).
25. Gerchberg, R. W. & Saxton, W. O. A practical algorithm for the determination of the phase from image and diffraction plane pictures. Optik 35, 237–246 (1972).
26. Chen, X. cpp-spline. https://github.com/chen0040/cpp-spline (2017).
27. OpenCV. https://opencv.org/.

## Acknowledgements

This study is supported by JSPS KAKENHI (Grant Numbers 20K19810 and 19H01097), the Inoue Foundation for Science, the Takayanagi Kenjiro Foundation, and Fonds Wetenschappelijk Onderzoek (12ZQ220N, VS07820N).

## Author information

### Contributions

T.N., T.S., and T.I. directed the project; T.N. and D.B. designed and validated the software; T.N. measured and evaluated the CGH calculation time and image quality; T.N. and T.K. designed and constructed the optical setup; and all authors reviewed the manuscript.

### Corresponding author

Correspondence to Takashi Nishitsuji.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary Information

Supplementary information 1. Supplementary information 2.

## Rights and permissions

Nishitsuji, T., Kakue, T., Blinder, D. et al. An interactive holographic projection system that uses a hand-drawn interface with a consumer CPU. Sci Rep 11, 147 (2021).
https://doi.org/10.1038/s41598-020-78902-1
https://codeforces.com/topic/74456/en2
Inversion counting with swap queries in O(log N) Revision en2, by bensonlzl, 2020-02-15 11:21:50 I am currently working on the following problem: • Given a permutation $A$ of length $N$, you have $Q$ queries, each specifying two numbers $X$ and $Y$, that swap the elements at indices $X$ and $Y$. After every query, output the number of inversions in the new permutation. All queries must be answered online. My current solution can process every query in $\log^2 N$ time with $N \log N$ precomputation: first precompute the number of inversions in the initial permutation, then use a segment tree with a BBST in every node. Each BBST stores all of the elements in the range covered by its segment tree node. We perform a range query on the value at index $X$ and the value at index $Y$ to determine the number of elements smaller than either value in the segment between them, and then compute the change in inversions. After that, we update the two indices in the segment tree. Each of the $\log N$ nodes takes $\log N$ time to update, which gives an overall complexity of $\log^2 N$ per query. My question is: is it possible to compute the change in inversions more quickly than $\log^2 N$, e.g. in $\log N$ time? I suspect that it is possible with either a modification to the segment tree or a completely different data structure altogether. Any ideas are welcome :)
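For reference, the counting half of the scheme above can be sketched with a merge-sort tree (sorted lists in the segment-tree nodes) standing in for the BBSTs. Note this is only an illustration of the query logic: `replace()` costs O(node size) per node here with Python lists, whereas a BBST would make each node update logarithmic.

```python
from bisect import bisect_left, insort

class MergeSortTree:
    """Iterative segment tree over positions; each node keeps a sorted list
    of the values in its range. count_less answers range rank queries in
    O(log^2 N); replace stands in for the BBST point update."""
    def __init__(self, a):
        self.n = len(a)
        self.t = [[] for _ in range(2 * self.n)]
        for i, v in enumerate(a):
            self.t[self.n + i] = [v]
        for i in range(self.n - 1, 0, -1):
            self.t[i] = sorted(self.t[2 * i] + self.t[2 * i + 1])

    def count_less(self, l, r, v):        # values < v among positions [l, r)
        res, l, r = 0, l + self.n, r + self.n
        while l < r:
            if l & 1:
                res += bisect_left(self.t[l], v); l += 1
            if r & 1:
                r -= 1; res += bisect_left(self.t[r], v)
            l >>= 1; r >>= 1
        return res

    def replace(self, i, old, new):       # position i: value old -> new
        i += self.n
        while i:
            node = self.t[i]
            node.pop(bisect_left(node, old))
            insort(node, new)
            i >>= 1

def swap_and_update(a, tree, inv, x, y):
    """Swap positions x and y of permutation a; return new inversion count.
    Only elements positioned strictly between x and y whose values lie
    strictly between a[x] and a[y] change their inversion status (by 2),
    plus the pair (x, y) itself (by 1)."""
    if x > y:
        x, y = y, x
    va, vb = a[x], a[y]
    if va == vb:
        return inv
    lo, hi = min(va, vb), max(va, vb)
    between = tree.count_less(x + 1, y, hi) - tree.count_less(x + 1, y, lo + 1)
    delta = 2 * between + 1
    inv += delta if va < vb else -delta
    tree.replace(x, va, vb)
    tree.replace(y, vb, va)
    a[x], a[y] = vb, va
    return inv
```

The counting argument: for a position k strictly between x and y holding value v with min(a[x], a[y]) < v < max(a[x], a[y]), both pairs (x, k) and (k, y) flip their inversion status, and values outside that open interval are unaffected.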
https://tex.stackexchange.com/questions/415054/implementing-block-diagrams-in-tikz-pgf-plots
Implementing block diagrams in TikZ/PGF plots I am new to designing in LaTeX and have been learning TikZ. Can anyone suggest how to implement the following diagram in TikZ/PGF plots? I can make blocks but I am having troubles with the overall implementations. Further, is there any tutorial (preferably with examples) from where I could learn these things from fundamentals? • Could you please be more specific and tell us where you got stuck? In other words, please provide an MWE that shows your efforts? – user121799 Feb 13 '18 at 5:32 • It would be good if you can show us what you have tried. TeX.SX isn't really a site where people do everything for you and it is much easier if you can supply some code to start from. You probably want to use tikz. is pstricks – Andrew Feb 13 '18 at 5:45 • I meant that I can implement standalone block diagrams in TikZ. My query is how can such diagram be implemented? – Abhinav Sinha Feb 13 '18 at 6:27 \documentclass[tikz]{standalone} \usepackage{amsmath} \usepackage{tikz} \usetikzlibrary{shapes,fit,calc,positioning} \begin{document} \begin{tikzpicture}[font=\sf] \node[draw](box1) at (0,0){\begin{tabular}{c} control\\ input\\ generator \end{tabular}}; \node[draw,right=1cm of box1](box2) {\begin{tabular}{c} Actu-\\ ator \end{tabular}}; \node[draw,right=1cm of box2](box3) {Plant}; \node[draw,right=1cm of box3](box4) {Sensor}; \node[draw,right=1cm of box4](box5) {\begin{tabular}{c} Event\\ generator \end{tabular}}; \coordinate (csa) at ($(box1.west)!0.5!(box2.east)+(0,1.5cm)$); \node (sa) at (csa){Smart actuator}; \coordinate (css) at ($(box4.west)!0.5!(box5.east)+(0,1.5cm)$); \node (ss) at (css){Smart sensor}; \node[draw,dashed,inner sep=10pt,thick,fit=(box1) (box2) (sa)] (F1){}; \node[draw,dashed,inner sep=10pt,thick,fit=(box4) (box5) (ss)] (F2){}; \foreach \i [count=\j] in {2,...,5} {\draw[thick,-latex](box\j) -- (box\i);} \node[draw,below=1.8cm of box3, cloud,inner sep=-0.7cm, cloud puffs = 40, minimum width=14cm, ](cloud){Digital 
network $\Sigma_N$}; \draw[thick,densely dashed,-latex](cloud.178) -- node[pos=0.3,right]{$t_i,e(\ell),\boldsymbol{x}(\ell)$}(cloud.178 |-box1.200) -- (box1.200); \draw[thick,densely dashed,latex-](cloud.2) -- node[pos=0.3,left]{$t_i,e(\ell),\boldsymbol{x}(\ell)$}(cloud.2 |-box5.-20) -- (box5.-20); \draw[thick,latex-](box1.160)--++(-1cm,0)node[above]{$\boldsymbol{y}_\mathrm{rel}(t)$}; \draw[thick,latex-](box3.north)--++(0,1cm)node[right,pos=0.8]{$\boldsymbol{d}(t)$} node[left,pos=0.2]{$\Sigma_\nu$}; \path(box1)--(box2)node[above,midway]{$\boldsymbol{u}(t)$}; \path(box4)--(box5)node[above,midway]{$\boldsymbol{y}(t)$}; \node[anchor=south west] at (box1.north west){$\Sigma_\mathrm{IG}$}; \node[anchor=south west] at (box5.north west){$\Sigma_\mathrm{EG}$}; \end{tikzpicture} \end{document} • Could you please suggest some tutorials? – Abhinav Sinha Feb 13 '18 at 6:35 • @AbhinavSinha I personally find that the TikZ manual already has a neat tutorial. And when I want to find out more about a command or syntax, I look at answers in this forum containing these. You'll see that this is fun. – user121799 Feb 13 '18 at 6:38
https://mathematica.stackexchange.com/questions/189467/styling-inputalias-popup-box/189600#189600
Styling InputAlias popup box I'm trying to create a dark theme and I'm using the ReverseColor stylesheet as a base. I have most things changed to my liking and fixed some of the problems with ReverseColor. However, I cannot find a way to separately style the pop-up box you get when you do Esc name Esc. This thing It is almost illegible under ReverseColor. How do I change the font and background colour of this? The auto-completion popup for functions seems to be unaffected, so it probably inherits its style from something else. Bonus points if you could tell me how to change that also. • Yes, there are a bunch of issues with the Reverse Color stylesheet. – M.R. Jan 16 '19 at 20:25 • Which is why I'm trying to create one fixing as many issues as I can :). For anyone in the future doing the same thing, if you want to fix the usage information you get from ?,??, Information etc. you need to adjust the PrintUsage and Info* styles Jan 16 '19 at 20:39 You are looking for "CodeAssistCharCompletion" style. Add or modify this for your stylesheet: Cell[ StyleData["CodeAssistCharCompletion"] , FontColor->GrayLevel[0] ] I am not sure you can do anything about background because it looks like a bitmap rather than a plain Background->color setting. There's probably something in FEBitmaps directory but it will change global setting of that menu. Anyway you can find this and related styles in: Core.nb / Styles for Mathematica System-specific Elements / Code Assist Styles • Great. Thank you so much. I too was digging through Core.nb, but was somehow convinced that it was a dialogbox related style. Jan 16 '19 at 19:11 • Wow how did you find this trash? (A+) Search for "CodeAssist*" in Core.nb and friends? Jan 16 '19 at 19:51 • @b3m2a1 finding the final section was quite straightforward but first time I approached it I assumed it won't be there. Then I started to change font after font to Red till I hit it. – Kuba Jan 16 '19 at 20:57
http://physics.stackexchange.com/tags/conformal-field-theory/new
# Tag Info

## New answers tagged conformal-field-theory

Equation (2.4.6): $T(z)X^\mu(0)\sim \frac{1}{z}\partial X^\mu(0)$ means that the RHS is the most singular term of the LHS. $T(z) = -\frac{1}{\alpha'} :\partial X^{\mu} \partial X_{\mu}:\tag{2.4.4}$ So \begin{align*} T(z)X^{\mu}(0) & =-\frac{1}{\alpha'}:\partial X^{\nu}(z)\partial X_{\nu}(z):X^{\mu}(0)\\ & =-\frac{2:\partial ...

I feel like this is an over-simplification. I agree. IMHO it's a popscience analogy that's presented as a genuine explanation, but it isn't. What is lost in the actual AdS/CFT correspondence in this analogy? A dimension. You can't just discard this. If you do, you lay yourself wide open to dropping another dimension, and saying the flat 2D image is ...
https://www.meracalculator.com/math/expression.php
# Math Expression Calculator

An algebraic expression or math expression is a mathematical phrase that can contain ordinary numbers, variables (like x or y) and operators (like add, subtract, multiply, and divide). In short, a math expression is a group of characters or symbols representing a quantity or an operation, formed by combining numbers and/or variables with mathematical operations; variables and/or numerals that appear alone or in combination with operators are expressions.

Example: Evaluate the math expression 15*(4/3)/2.

15*(4/3)/2
= (60/3)/2
= 20/2
= 10

Math Expression = 10
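A sketch of how such an evaluator can be written, using Python's `ast` module to parse the expression and apply the operators with standard precedence (the function name is illustrative):

```python
import ast
import operator

# map AST operator node types to arithmetic functions
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def eval_expression(text):
    """Evaluate a plain arithmetic expression such as '15*(4/3)/2'."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):     # a plain number
            return node.value
        if isinstance(node, ast.BinOp):        # e.g. left * right
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):      # e.g. -x
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported syntax")
    return walk(ast.parse(text, mode="eval"))

print(eval_expression("15*(4/3)/2"))   # 10.0
```

Because the parser handles grouping and precedence, the same function evaluates nested parentheses and mixed operators without any extra logic.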
https://www.physicsforums.com/threads/unknown-variable.184151/
Unknown variable

1. Sep 12, 2007 tomcenjerrym

I am confused about where the y variable came from.

$$I = e^{-x^{2}}\,dx$$

$$I^{2} = e^{-x^{2}}\,dx\, e^{-y^{2}}\,dy$$

The equation for I should be like this:

$$I = e^{-x^{2}}\,dx$$

$$I^{2} = (e^{-x^{2}})^{2}\,d^{2}x$$

$$I^{2} = e^{-2x^{2}}\,d^{2}x$$

2. Sep 12, 2007 genneth

You're missing some integral signs, I believe. Remember that things like dx cannot be separated from the integral sign -- and that in the integrand, the variable x is "bound", and is not the same as an x on the outside. So:

$$I = \int_0^{\infty} e^{-x^2}\,dx = \int_0^{\infty} e^{-y^2}\,dy$$

$$I^2 = \int_0^{\infty} e^{-x^2}\,dx \int_0^{\infty} e^{-y^2}\,dy$$

One more thing: even if you were to be loose with notation and use what I refer to as "physicist shorthand" and treat dx as a variable of sorts, $$dx^2 \ne d^2x$$

3. Sep 12, 2007 tomcenjerrym

And can you tell me why x = y?

4. Sep 12, 2007 DeadWolfe

It doesn't, but

$$\int_0^{\infty}e^{-x^2}\,dx= \int_0^{\infty}e^{-y^2}\,dy$$

5. Sep 12, 2007 HallsofIvy, Staff Emeritus

$$\int \int f(x)\, dx\, dx$$ would make no sense: once you have integrated with respect to x, there is no "x" left to integrate again! It is true, as DeadWolfe said, that

$$\int_{-\infty}^\infty e^{-x^2}dx= \int_{-\infty}^\infty e^{-y^2}dy$$

because the x and y are "dummy" variables. You would integrate with respect to either, then evaluate at the end points (technically, take the limit), so there is no x or y in the final result; it doesn't matter what you call the variable. Of course, it is well known (Fubini's theorem) that the product

$$\left(\int_a^b f(x)dx\right)\left(\int_c^d g(y)dy\right)$$

is the same as the iterated integral

$$\int_{x=a}^b\int_{y=c}^d f(x)g(y)\, dy\, dx$$
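For completeness, the standard step this identity leads to (not part of the thread itself): combining the two integrals via Fubini's theorem and switching to polar coordinates over the first quadrant,

$$I^2 = \int_0^{\infty}\int_0^{\infty} e^{-(x^2+y^2)}\,dx\,dy = \int_0^{\pi/2}\int_0^{\infty} e^{-r^2}\, r\,dr\,d\theta = \frac{\pi}{2}\cdot\frac{1}{2} = \frac{\pi}{4},$$

so $$I = \frac{\sqrt{\pi}}{2}.$$

The factor r from the Jacobian is exactly what makes the radial integral elementary, since $$\int_0^\infty e^{-r^2} r\,dr = \tfrac{1}{2}.$$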
http://mathhelpforum.com/number-theory/127785-multiplication-mod-2-a.html
1. ## Multiplication mod 2 Hi, Maybe it is obvious but I must be sure. Is it true that: $x_1x_1+x_1x_2+x_1=x_1x_2$ or simply that: $x_1x_1+x_1=0$? We are working in $F_2$. 2. Originally Posted by Migotek84 Hi, Maybe it is obvious but I must be sure. Is it true that: $x_1x_1+x_1x_2+x_1=x_1x_2$ or simply that: $x_1x_1+x_1=0$? We are working in $F_2$. Well, just check when $x_1=1$ and when $x_1=0$, the only two elements we have in $\mathbb{F}_2$. Yes, the identity is trivially true then. Tonio 3. Thanks for help
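Tonio's check can be done mechanically; a short Python sketch verifying both identities over all assignments in $F_2$:

```python
from itertools import product

# verify x1*x1 + x1*x2 + x1 == x1*x2 (mod 2) for every assignment in F_2,
# and the simpler identity x1*x1 + x1 == 0 (mod 2)
for x1, x2 in product((0, 1), repeat=2):
    assert (x1 * x1 + x1 * x2 + x1) % 2 == (x1 * x2) % 2
    assert (x1 * x1 + x1) % 2 == 0
```

Both reduce to the fact that x² = x in F₂ (0² = 0 and 1² = 1), so x² + x = 2x ≡ 0.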
https://math.stackexchange.com/questions/1425058/equivalent-norms-on-c0-1/1425064
# Equivalent norms on $C[0,1]$ For each $f\in C[0,1]$ set $$\|f\|_1 = \left(\int_0^1 |f(x)|^2 dx\right)^{1/2},\quad\quad \|f\|_2 = \left(\int_0^1 (1+x)|f(x)|^2 dx\right)^{1/2}$$ Then prove that $\|\cdot\|_1$ and $\|\cdot\|_2$ are equivalent norms on $C[0,1]$. So we want to show that there are positive real numbers $a,b$ such that $$a\|f\|_2 \leq \|f\|_1 \leq b\|f\|_2$$ Since $\|f\|_2^2 = \int_0^1 |f(x)|^2 dx + \int_0^1 x|f(x)|^2 dx= \|f\|_1^2+ \int_0^1 x|f(x)|^2 dx$ and because we know that $\int_0^1 x|f(x)|^2 dx\geq 0$ we have that: $$\|f\|_1 \leq 1\times \|f\|_2$$ Now we want to find some $a\gt 0$ such that: $$a\cdot\|f\|_2\leq \|f\|_1$$ This I am not sure how to do. In this case $1+x \leq 2$ so $\| f \|_2 \leq \sqrt{2} \| f \| _1$. • Does $|f(x)|^2$ get treated as a real number or something? If not wouldn't we be concerned about how $x$ affects $f(x)$ and hence integration by parts? – Functional Analysis Sep 7 '15 at 8:19 • $\|f\|_2^2 = \int_0^1 (1+x)|f(x)|^2\mathrm{d}x \leq \int_0^1 2|f(x)|^2\mathrm{d}x=2\|f\|_1^2$ Since we have $1+x \leq 2$ for all $x \in (0,1)$. – Nigel Overmars Sep 7 '15 at 8:24 • @NigelOvermars What does $|f(x)|$ actually mean? – Functional Analysis Sep 7 '15 at 8:38 • $|f(x)| = f(x)$ if $f(x) \geq 0$ and $|f(x)| = -f(x)$ if $f(x) < 0$. I.e., the distance between $f(x)$ and $0$. – Nigel Overmars Sep 7 '15 at 8:53 • @NigelOvermars Thanks. I think I get it now. Any $g(x)$ out the front, if taken to the maximum value it could attain on the interval will contribute strictly more than it would if you let it range over the interval. – Functional Analysis Sep 7 '15 at 8:58
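Both bounds, $\|f\|_1 \leq \|f\|_2 \leq \sqrt{2}\,\|f\|_1$, can be sanity-checked numerically; a short sketch using the midpoint rule on a few test functions (function names are illustrative):

```python
import math

def norm1(f, n=10000):
    # midpoint rule for (integral_0^1 |f|^2 dx)^(1/2)
    h = 1.0 / n
    return math.sqrt(sum(abs(f((k + 0.5) * h)) ** 2 for k in range(n)) * h)

def norm2(f, n=10000):
    # midpoint rule for (integral_0^1 (1+x)|f|^2 dx)^(1/2)
    h = 1.0 / n
    return math.sqrt(sum((1 + (k + 0.5) * h) * abs(f((k + 0.5) * h)) ** 2
                         for k in range(n)) * h)

# the pointwise inequality 1 <= 1+x <= 2 transfers directly to the sums,
# so the two bounds hold exactly for the discretized norms as well
for f in (lambda x: 1.0, lambda x: x, math.sin, lambda x: math.exp(-x)):
    a, b = norm1(f), norm2(f)
    assert a <= b <= math.sqrt(2) * a + 1e-9
```

This is only a numerical illustration, of course; the proof itself is the two pointwise estimates on the weight 1+x already given above.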
https://ask.sagemath.org/question/7919/plot-the-level-sets-of-a-function/
# Plot the level sets of a function

I'm trying to draw the level sets of a function f: R^2 -> R, that is, the set of solutions of f(x,y) = h for a given h. For that purpose I wrote the following:

```python
#! /usr/bin/sage -python
# -*- coding: utf8 -*-
from sage.all import *

def level_curve(f, h):
    solutions_list = solve(f == h, y)
    return [sol.rhs() for sol in solutions_list]

var('x,y')

f = x + y + 2
for g in level_curve(f, 3):
    print g

print "-----"

f = x**2 + y**2
for g in level_curve(f, 3):
    print g
```

This works, but I'm not satisfied, principally because I get the level sets in the form of a list of functions. Moreover, it will not work if the level set is vertical. I would therefore prefer to get the solution in the form of a parametric curve. Does Sage provide something for that?

## Answer

I'm not quite sure what you're looking for. If you want to plot the level sets, you have several options, which I've blatantly stolen from the level set examples worksheet by evanmb:

```python
import matplotlib
var("x, y")
f = x + y + 2

# manually using implicit_plot for particular h values:
p = Graphics()
for h in [-5..5]:
    p += implicit_plot(f == h, (x, -4, 4), (y, -4, 4))
p.show()

# using a full contour plot
contour_plot(f, (x, -4, 4), (y, -4, 4), fill=false, labels=true, contours=10,
             colorbar=true, cmap=matplotlib.cm.gist_rainbow).show(aspect_ratio=1)

# maybe filled
contour_plot(f, (x, -4, 4), (y, -4, 4), fill=True, labels=true, contours=10,
             label_colors='black', colorbar=true,
             cmap=matplotlib.cm.gist_rainbow).show(aspect_ratio=1)

# or 3d
plot3d(f, (x, -4, 4), (y, -4, 4))
```

Or is there something more analytic that you're trying to do first? Depending on how complicated your real functions will be, finding a parametric solution might be a challenge.
Comments:

- The aim being to show graphics to my students, a graphical solution is sufficient. Obviously, the advantage of an analytic solution is that it could be passed to pstricks and then included in LaTeX figures more smoothly than via \includegraphics{blabla.png}. By the way, I found \psplotImp.
- You might look at SageTeX for a very easy way to include plots and other Sage output in TeX. Basically, you can simply write something like \sageplot{contour_plot(f,(x,-4,4),(y,-4,4))} inside your tex document, and the plot shows up in the PDF. (2011-02-08 18:20:18 -0500)
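For what it's worth, outside of Sage the same contour extraction can be done with plain NumPy and Matplotlib, and the extracted contour segments are arrays of (x, y) points, which is close to the discrete parametric curve asked about. A minimal sketch (the grid bounds, resolution, and the use of `allsegs` are my own choices, not from the thread):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

# Sample f(x, y) = x^2 + y^2 from the question; the level set f = 3
# is the circle of radius sqrt(3)
x = np.linspace(-4, 4, 400)
y = np.linspace(-4, 4, 400)
X, Y = np.meshgrid(x, y)
Z = X**2 + Y**2

cs = plt.contour(X, Y, Z, levels=[3])

# allsegs[i] is the list of segments for level i; each segment is an
# (n, 2) array of points tracing the curve -- effectively a discrete
# parametric representation of the level set.
segments = cs.allsegs[0]
curve = segments[0]
radii = np.hypot(curve[:, 0], curve[:, 1])
print(np.allclose(radii, np.sqrt(3), atol=1e-2))
```

If an actual symbolic parametrization is needed, this of course only gives a polygonal approximation, but for pstricks/TikZ export a point list is often exactly what is wanted.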