Course Objectives: Walking through Kemeny Hall, you will hear professors and students discussing a myriad of topics. A comment made surprisingly often in an explanation — independent of the subject — is 'It's just linear algebra.' Whatever this subject is, one can be certain that linear algebra is as fundamental to higher mathematics as counting is to arithmetic. But what is it? In a concrete context, it is the study of matrices and of finding solutions to systems of linear equations. In a more abstract setting, it is the study of vector spaces (spaces which have an algebraic structure like $\mathbb R^n$) and of maps between them which preserve linear structure. There are two linear algebra courses at Dartmouth, Math 22 and 24. The first is concerned more with computation and applications and less so with abstraction. This course is concerned with a broader context in which to view linear algebra, and with learning to justify assertions by means of rigorous proofs. But both courses strive to reveal the power and beauty of this subject as well as some of its amazing applications. In the end, we shall split our time. The theorems you learn are your tools, but tools are only useful if you know how to use them; a hammer is a great tool, but not terribly useful until you have learned not to bend too many nails. It is great to be able to determine whether a system of equations has a solution or not, but if it doesn't, you might still be interested in "how close" to a solution you can actually get.
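For a first taste of that last idea (a sketch only, stated here just as an illustration): if $Ax = b$ has no solution, one can still ask for the $\hat x$ that minimizes the error, $$\hat x = \operatorname*{arg\,min}_{x} \|Ax - b\|^2, \qquad\text{characterized (for } A \text{ of full column rank) by } A^{\mathsf T}A\hat x = A^{\mathsf T}b,$$ which is one standard way to make "how close" precise.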
CommonCrawl
A few days ago I had a look at my Hungarian method project again. You can find the blog post here. After writing that blog post, the owner of the Julia library for that algorithm commented on my post and made a huge performance improvement to his library, making it much faster than my code - and my whole goal of implementing it myself was to be faster, to compete in this Kaggle challenge. He also gave me some performance tips which I wanted to test.

Before I start, I have to change some code of my Hungarian method implementation to be compatible with Julia v1.0 instead of Julia v0.6, which I used back then. In general this means changes in quite a few places: - to .- or + to .+ etc. whenever the right-hand side must be broadcast. Also, for 2D arrays you now get a CartesianIndex with (y,x) instead of a single linear coordinate, which is actually much easier to read.

First of all, the library is always faster and uses less memory, and in both cases I mean much faster and way less memory. The library is about 20 times faster and uses $\approx 170$ times less memory. By memory I always refer to memory allocation during the process; at no point does my algorithm actually use 20GiB.

First I want to know why my code is using so much memory. You can do this by starting Julia with julia --track-allocation=user. You should then run a reasonably fast version of your algorithm, as this takes quite some time; therefore I only run it with n=800. Running the library version this way takes about 6 seconds instead of 0.15... anyway, I wait a while until mine finishes. When you close your Julia session, a file is created for each Julia source file, showing how much memory is used by each line in your program: each line is annotated with the number of bytes allocated. Okay, now my run is done (115s). Let's have a look at the file.

The first big allocation is the copy of the input matrix, which is 1.28MB, and we probably don't really need this if we are okay with the fact that our method changes the original input. Anyway, the whole process for an 800x800 matrix allocates over 1.5GiB, so this isn't that important. The allocation for the sorting step is also more than expected: the array is sorted completely and then we take only the first elements, which seems a bit wasteful, so we might want to have a look into it later. Those are quite easy fixes. For the sorting I use sorted_matching = partialsort(matching.match, 1:matching.cardinality), which only partially sorts the array.

The next issue is the mask. We have a mask which we use to mark all fields over which we want to compute the minimum, and then we subtract or add that minimum to some rows and columns. First I want to create a representation of marked columns/rows and unmarked columns/rows - basically just a list of indices for each - and a different representation for building it, where I set a row to marked or not, so a boolean array. Then I can later iterate over all unmarked columns and then marked rows to get the minimum value, instead of doing this mask thing: the mask uses a lot of memory and is slowing things down, because the mask is not "random" but actually very well structured. I mentioned in my previous post that I wanted to use a mask because I thought loops are slow, but actually they aren't :D (at least in Julia). Then I use this code to actually determine the minimum and subtract and add it to parts of the arrays, also measuring some extra timings.

That is definitely quite an improvement. It is more than twice as fast and uses less than half of the memory. Nevertheless nobody should use it, as it is still way slower than the library. Shall I leave it like that?
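For reference, the loop-based minimum search described above might look roughly like this; this is only a sketch with illustrative names (marked_rows_idx, unmarked_cols_idx), not the actual code from the repository.

```julia
# Sketch only: find the smallest entry over all unmarked columns and marked
# rows with plain loops instead of a boolean mask. Index-vector names are
# illustrative, not the repository's actual field names.
function min_over_marked(mat::AbstractMatrix, marked_rows_idx, unmarked_cols_idx)
    best = typemax(eltype(mat))
    for c in unmarked_cols_idx   # outer loop over columns: Julia stores matrices column-major
        for r in marked_rows_idx
            if mat[r, c] < best
                best = mat[r, c]
            end
        end
    end
    return best
end
```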
This shows that we spend about half of the time in the inner while loop and then about 1/6 of the whole time in finding the minimum and subtracting/adding it. Where do we actually spend the other third?

It is again about twice as fast, uses about half of the memory it used before, and is still about 5 times slower than the library :D Anyway, we went from spending half of the time in the inner loop to almost nothing: Total inner while time: 0.27219057083129883. The library takes less than 4 seconds in total, so we definitely have to cut down the time for finding the zeros, as that takes 9s.

Let's think about that again. We get some input from the user in the beginning, which means we have to find the zeros at least once, as I still think that we need them... but later on we might be able to figure out where the zeros actually are. We only add the minimum value somewhere and subtract it at some other positions of the array, which means that only at the positions where we subtract a value can there be a new zero. So, at the beginning of the outer while loop, we find all zeros, save them, and also save how many zeros we found - both the maximum number of zeros ever found and the current index of the end, similar to the _sub arrays we had before.

Okay, where is it possible to have zeros in the array? We have marked rows, unmarked rows, marked cols and unmarked cols; the minimum is found and subtracted where !cols_marked but rows_marked, and added where !rows_marked but cols_marked. That leaves two more possibilities - both marked and both unmarked - and in these two cases we only have to check where there was a 0 before, basically the unchanged zeros. In all cases we have to check whether there are more zeros than before, and if that is the case we have to extend the array.

The next part is to have a look at the setup time. Subtracting the column minimum and row minimum takes over 1.3s, which seems quite a lot, especially if you look at only the time for subtracting the column minimum, which takes just 0.3s. I know that a matrix is stored column-wise, so this is a little bit expected. Now, for 8000x8000, it takes $\approx 0.15s$, which is super nice, and the memory allocation is much better as well: from >700MiB to <300MiB.

As it is now faster, I want to test on bigger matrices as well, so I introduce 16000x16000 and 20000x20000. I also removed the copying step in the beginning; to indicate that I'm changing the input matrix, I renamed the function to function run_hungarian!(mat) - the ! indicates that the parameters might change. I'm a bit confused why this takes longer for smaller matrices than before, but anyway it's near the time of the library for bigger matrices and it uses less memory.

The new code for finding the minimum gets a sub-matrix by the columns (which is fast), then minimizes over the marked rows and takes the minimum of that. In my test this reduces the time for finding the minimum from 5.7s to about 2.8s. I also declare addition_vector = zeros(Int64, nrows) once at the top. The problem here is that we not only subtract the minimum but also need to add the new zeros. I tried quite a bit with checking mat[r,c] == 0 only if the row actually attains the minimum, but it doesn't make anything faster.

This shows that the memory usage of both algorithms seems to be linear and is not rising very quickly, with a relatively high starting point on my side, which I will check in a second. The duration is also not growing as fast as the one from the library.
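A sketch of the incremental zero tracking described above (again with illustrative names, not the actual repository code): after the minimum has been subtracted, only the positions in marked rows and unmarked columns are scanned for new zeros, and the index buffer is extended only when needed.

```julia
# Sketch only: collect new zeros after subtracting the minimum. `zero_idx` is a
# reusable buffer of positions and `nzeros` the index of its current end.
function collect_new_zeros!(zero_idx::Vector{CartesianIndex{2}}, nzeros::Int,
                            mat, marked_rows_idx, unmarked_cols_idx)
    for c in unmarked_cols_idx
        for r in marked_rows_idx
            if mat[r, c] == 0
                nzeros += 1
                if nzeros > length(zero_idx)
                    push!(zero_idx, CartesianIndex(r, c))   # extend the buffer
                else
                    zero_idx[nzeros] = CartesianIndex(r, c) # reuse an existing slot
                end
            end
        end
    end
    return nzeros
end
```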
You can see that for large arrays $> 20,000 \times 20,000$ my algorithm is faster. Okay, I don't really get why the starting point of the memory usage is so high - maybe some of you know additional improvements. Check out my code in the GitHub repository. If you enjoy the blog in general, please consider a donation via Patreon. You can read my posts earlier than everyone else and keep this blog running.
CommonCrawl
The slope of a line is most commonly quoted as the "rise over the run." More precisely, this means $\Delta y$ over $\Delta x$, or $m=\Delta y/\Delta x$, where $m$ is commonly assigned to be the "slope." Now, do you remember how the slope works? Does a vertical line have an infinite slope or a zero slope? What about a horizontal line? What about a $45^\circ$ line? Which lines have negative slopes? In this lesson, you will be able to investigate these properties of the slope by drawing any line of your choice with two mouse clicks, as was done in this lesson. When you get the code below to work, click once, then click again to draw a line. The program will compute and tell you $\Delta y$, $\Delta x$, and the slope of your line. See if you can answer the questions posed above. Now you try. Run this code several times, then click on two points. Try to learn properties of the slope of a line! This code will not run. You have to fix three things. First, the dy= line, which is supposed to be $\Delta y$, which here is $y_2-y_1$. Second is the dx= line, which is supposed to be $\Delta x$, which here is $x_2-x_1$. Finally, you need to fix the m= line to compute the slope of the mouse-clicked line, which is $\Delta y/\Delta x$. Can you do it?
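The lesson's original interactive code isn't reproduced here, but the three fixes amount to something like the following sketch (variable names are assumptions, not the lesson's actual code):

```python
# Sketch of the three fixes, assuming the two clicked points are (x1, y1) and (x2, y2).
def slope(x1, y1, x2, y2):
    dy = y2 - y1      # rise
    dx = x2 - x1      # run
    if dx == 0:
        return None   # vertical line: the slope is undefined, not zero
    m = dy / dx       # slope = rise over run
    return m

print(slope(0, 0, 2, 2))  # 45-degree line -> 1.0
print(slope(0, 1, 5, 1))  # horizontal line -> 0.0
```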
CommonCrawl
We study the blow-up of $H$-perimeter minimizing sets in the Heisenberg group $\mathbb H^n$, $n\geq 2$. We show that the Lipschitz approximations rescaled by the square root of excess converge to a limit function. Assuming a stronger notion of local minimality, we prove that this limit function is harmonic for the Kohn Laplacian in a lower dimensional Heisenberg group.
CommonCrawl
Yesterday I asked a question on parameterizations of knotted surfaces in $\mathbb R^4$. After I stated in the comments that I wanted the question to be kept to the case of a general surface, the question was promptly put on hold as "unclear what you're asking". I then refined my question to make it clearer. A day has since passed, but the question has not been reopened. All there has been since then is one comment (after I altered the question) mentioning that the question does not seem unclear at all. I would very much appreciate it if the question could be reopened. Although the question at MO has been reopened, the meta post is not completely moot: let me remind users that there is a meta thread dedicated to the requests for reopening: Requests for reopen and undelete votes for on-hold, closed, and deleted questions.
CommonCrawl
$$p \times \log(p) + q \times \log(q) + p \times \log\left(1 + (1-q)/p\right) + q \times \log \left(1 + (1-p)/q\right).$$ I expect the last two terms somehow equal $\log(2)$ but I have been scratching my head for hours trying to get there. Can anyone show me how please?
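For what it's worth, here is how the step works out under the usual Kelly-betting assumption $p+q=1$ (an assumption, since it is not stated above): then $1-q=p$ and $1-p=q$, so $$p \log\left(1+\frac{1-q}{p}\right) + q \log\left(1+\frac{1-p}{q}\right) = p \log\left(\frac{p+p}{p}\right) + q \log\left(\frac{q+q}{q}\right) = (p+q)\log 2 = \log 2.$$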
CommonCrawl
Opposite faces of a die add up to seven, so when you roll three dice the tops and bottoms will add up to $21$. In the three dice problem, you roll three dice, add the top scores together and the bottom scores together, and then add those totals; we found out that the answer was always $21$. That's because three times seven makes $21$, and whatever you get on one side added to the other side always makes $7$, like $6$ and $1$, and $2$ and $5$: they both make $7$. The opposite sides always add up to $7$. When you have $3$ dice and you take the sum of the top and add the sum of the bottom, the total equals $21$. This is because $3\times7$ is $21$. Thank you all for these interesting findings.
CommonCrawl
Homotopy theory is an important sub-field of algebraic topology. It is mainly concerned with the properties and structures of spaces which are invariant under homotopy. Chief among these are the homotopy groups of spaces, specifically those of spheres. Homotopy theory includes a broad set of ideas and techniques, such as cohomology theories, spectra and stable homotopy theory, model categories, spectral sequences, and classifying spaces.
CommonCrawl
Let $X$ be a compact (complex) manifold which has a non-zero cohomology class $\alpha \in H^1(X,\mathbb Z)$. Let $\pi: \bar X\to X$ be the corresponding infinite cyclic covering. What does this mean? It seems that an infinite cyclic covering is a cover with fiber $\mathbb Z$. But why does such a covering exist, and how is it related to the cohomology class?
CommonCrawl
Abstract : Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations, and can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation, that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing-up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to min-max as the number of simulations grows. This approach provides a fine-grained control of the tree growth, at the level of individual simulations, and allows efficient selectivity methods. This algorithm was implemented in a Go-playing program, Crazy Stone, that won the gold medal of the $9 \times 9$ Go tournament at the 11th Computer Olympiad.
CommonCrawl
We present a general-purpose method to train Markov Chain Monte Carlo kernels (parameterized by deep neural networks) that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jump distance, a proxy for mixing speed. We demonstrate significant empirical gains (up to $124\times$ greater effective sample size) on a collection of simple but challenging distributions. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code is included as supplemental material, and will be open-sourced with the camera-ready paper.
CommonCrawl
I recently learned about an oil drop experiment that showed how a classical object can produce quantum-like behavior because it's assisted by a pilot wave. How has this not gained more attention? What flaws does de Broglie–Bohm pilot wave theory have in explaining particle behavior?

It might help to cite your source: I found this one here - is this what you speak of? Anyhow, this kind of idea has actually had considerable, if not mainstream, attention over the years. Many people who have worked with quantum mechanics will have at least heard of the following: it's just that it doesn't make it into many QM courses (being an equivalent way to think about QM). The de Broglie / Bohm pilot wave theory has a fairly well known hydrodynamic interpretation, as indeed does Schrödinger's equation. The latter was studied extensively by the German physicist Erwin Madelung (see the Wikipedia page on the Madelung equations for more information), and he was doing this almost as soon as Schrödinger put pen to paper, beginning in 1926. So fluid dynamical systems do have analogies in quantum mechanics and contrariwise. That doesn't make them the same physical phenomena. Moreover, the big unsolved mystery in quantum mechanics is the measurement problem, and this is not described by the Schrödinger equation. It is not emphasised enough that ALL of quantum mechanics aside from measurement is utterly deterministic. So, without an expert opinion, this is highly interesting work, but it is not relevant to the mysteries of quantum mechanics. Bohmian mechanics, which is the most mature form of the de Broglie pilot wave theory, does explain measurement through the mechanism of hidden variables (i.e. by saying that there is state in a quantum system which is hidden from us). However, it is also known that Bohmian mechanics needs to be nonlocal to make the hidden variable explanation work. Roughly, this means that it implies faster-than-light signalling, which in turn makes it very hard to make sense of causality: in a universe where faster-than-light signalling can be done, effects can come before their causes. So my belief is that most physicists would say that Bohmian mechanics is not a good explanation.

I've recently answered a very similar question here, not realising it was a duplicate of this. Anyway, I'd recommend reading that first as part of this answer and then continuing below. If the particle positions are not distributed according to $|\psi|^2$ - the quantum equilibrium hypothesis (QEH) - then this implies superluminal signalling is possible and causality is violated (think grandfather paradoxes, cats and dogs becoming friends, etc.). Fortunately it can be shown statistically that typical configurations of the universe obey the QEH, and if at some time the distribution of a system obeys the QEH, then at all future times this relation continues to hold (see Valentini, 1991). Further, the majority of configurations of the universe that don't obey the QEH are shown to quickly tend to it. Since Born's rule is backed up by experiment, we can be pretty confident that this is indeed the configuration our universe is in. Of course, some proponents of this theory just take it as a postulate and ignore all this. For further reading (as well as, or if you don't have access to, the above paper), see Goldstein et al., 1992 (which is open access). To close with essentially the same point I gave in my previous answer: the reason it hasn't gained more attention is (in my opinion) largely down to social factors.
It has some ideas that are perhaps better covered by other interpretations (QEH is commonly considered an example of this), but its issues are no greater, as far as I can see, than those of any other interpretation. It has some extra maths Copenhagen doesn't require, so that's one reason many people won't bother with it, but conceptually it's a much better interpretation for gaining an intuitive understanding of quantum mechanics, so I think it'd be great if some introductory textbooks introduced it (viz. spent a chapter on it, not the vague paragraph several currently do). Every year since ~2000 the number of published papers covering it has increased modestly (along with a few other interpretations), to now around 80/year covered by JCR (a few hundred on Google Scholar), so perhaps as the primacy of Copenhagen slowly begins to weaken this will be the case in the future. If you're interested in why this theory was shunned in its formative years, the book 'Quantum theory at the crossroads: reconsidering the 1927 Solvay conference' (open access) and the article 'Physical Isolation and Marginalization in Physics: David Bohm's Cold War Exile' might be of interest.

I am not sure if the OP shared the link or not. This is an amazing explanation in terms of a real visual. Between 2:35 and 3:15 the video shows how the pattern is built over a period of time, while the jumps appear to be random at any one time. Therefore things may not be random, as claimed by some parts/interpretations of QM.

If you want to understand the nature of the superluminal de Broglie pilot wave, one has to correct the errors in the classical non-relativistic Maxwell-Lorentz electrodynamics. The generalized Lorentz force (in differential form) $$\vec f ~=~ \rho \vec E + \vec J \times \vec B$$ is incorrect, since it violates Newton's third principle of motion in the case of 'open' circuits of stationary current (btw, a closed circuit of stationary current is a most theoretical construct, almost impossible to put to the Newtonian principles tests). Secondly, Maxwell's electric field expression $$\vec E = -\nabla\Phi-\partial_t\vec A$$ implies that dynamic electric currents can induce a non-divergence-free electric field: $$\nabla \cdot (\partial_t\vec A) \neq 0$$ however, Faraday's induction experiments do not prove at all that a divergent/convergent electric field can be induced by means of dynamic electric currents. So, either one applies Ockham's razor by defining: $$\vec E = -\nabla\Phi-\partial_t\vec A ~~~~\text{and}~~~~ \nabla \cdot (\partial_t\vec A) = 0$$ or one proves by experiment the existence of a scalar form of magnetism: $$B=-\nabla \cdot \vec A,~~~~\partial_t B -\nabla \cdot \nabla \Phi ~=~ \nabla \cdot \vec E$$ in which case the generalized force law becomes $$\vec f ~=~ \rho \vec E + \vec J \times \vec B + \vec J B$$ and this force law agrees with Newton's third principle of motion, btw. This theoretical development will eventually lead to the conclusion that the electric potential, $\Phi$, must be superluminal, and that superluminal longitudinal far-field waves should exist that are expressed only in terms of $\Phi$, the 'electric' potential. This is the nature of the de Broglie pilot wave: it is a superluminal and longitudinal 'electric potential' wave, with phase/group/information velocity exceeding the velocity 'c' many times. After all, the spread velocity of the Coulomb field has been measured, and found to be much higher than 'c'. Einstein's SR theory is based on Voigt's erroneous "there is no TEM wave medium that can have velocity" assumption.
Lorentz added the gamma factor for a more symmetrical result. However, multiple experiments have disproved Voigt's assumption, such as one-way light speed measurements by means of atomic clocks. If SR is wrong, then many well known equations, such as $E = mc^2$, should be reevaluated (the origin of this equation is in non-relativistic electrodynamics). To follow in De Broglie's footsteps, one must be prepared to review more than a century of faulty physics. De Broglie was the best physicist of the 1927 Solvay conference, if you ask me. There was NO flaw in De Broglie's approach to wave mechanics based on the Schrödinger equation (the eigenvalue problem technique that Schrödinger picked up from fluid dynamics theory), to answer the question briefly. The only mysterious aspect was the nature of the pilot wave.

The de Broglie–Bohm theory is a modification of quantum theory that adds particle trajectories on top of the wave function. Many physicists are apparently only interested in being able to make predictions and aren't interested in what is happening in reality. People who take this position are in general hostile to any attempt to improve quantum theory or explain its content. But let's suppose you are interested in reality. These trajectories make it difficult to construct a relativistic version of the theory. Any theory that reproduces the predictions of quantum theory but features only a single trajectory for each particle is both non-local (Bell inequalities) and non-Lorentz-invariant (Lucien Hardy, 'Quantum mechanics, local realistic theories, and Lorentz-invariant realistic theories'). So we would have to discard all of quantum field theory and the special and general theory of relativity and the principles underlying those theories.

The Copenhagen interpretation claims there are no trajectories, and slams pilot wave theory because the trajectories are surrealistic. A self-serving rebuttal. Everything is nowhere until the observation. Poppycock.
CommonCrawl
Why does this grammar derive into $\beta \alpha ^*$ instead of $\alpha ^* \beta$? In this video clip the teacher presents a grammar $A \rightarrow A \alpha | \beta$ and after providing the parse tree explains that the regular expression for the language generated is represented as $\beta \alpha ^*$. Why isn't it $\alpha ^* \beta$ instead? Isn't the grammar left-recursive into terminal $\alpha$ and once terminal $\beta$ is reached the string ends? The pattern is clear: starting from $A$, we'll generate strings of the form $\beta\alpha\dotsc\alpha$, in other words, $\beta\alpha^*$. Since at any stage we have $A$ on the left of the sentential form, we'll eventually generate strings with $\beta$ on the left. If the grammar had been $A\rightarrow \alpha A\mid \beta$, then we'd derive strings of the form $\alpha^*\beta$.
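For instance, writing out one derivation explicitly, three expansion steps give $$A \;\Rightarrow\; A\alpha \;\Rightarrow\; A\alpha\alpha \;\Rightarrow\; \beta\alpha\alpha,$$ and since the nonterminal $A$ stays at the left end until the final step, $\beta$ can only ever appear leftmost, which is why the language is $\beta\alpha^*$ rather than $\alpha^*\beta$.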
CommonCrawl
In this talk, I will present our recent work on the local in time existence of two-dimensional gravity water waves with angled crests. Specifically, we construct an energy functional $E(t)$ that allows for angled crests in the interface. We show that for any initial data satisfying $E(0)<\infty$, there is $T>0$, depending only on $E(0)$, such that the water wave system is solvable for time $t\in [0, T]$. Furthermore we show that for any smooth initial data, the unique solution of the 2d water wave system remains smooth so long as $E(t)$ remains finite.
CommonCrawl
Tetris is a popular video game that has been widely used as a benchmark for various optimization techniques including approximate dynamic programming (ADP) algorithms. A close look at the literature of this game shows that while ADP algorithms, that have been (almost) entirely based on approximating the value function (value function based), have performed poorly in Tetris, the methods that search directly in the space of policies by learning the policy parameters using an optimization black box, such as the cross entropy (CE) method, have achieved the best reported results. This makes us conjecture that Tetris is a game in which good policies are easier to represent, and thus, learn than their corresponding value functions. So, in order to obtain a good performance with ADP, we should use ADP algorithms that search in a policy space, instead of the more traditional ones that search in a value function space. In this paper, we put our conjecture to test by applying such an ADP algorithm, called classification-based modified policy iteration (CBMPI), to the game of Tetris. Our extensive experimental results show that for the first time an ADP algorithm, namely CBMPI, obtains the best results reported in the literature for Tetris in both small $10\times 10$ and large $10\times 20$ boards. Although the CBMPI's results are similar to those achieved by the CE method in the large board, CBMPI uses considerably fewer (almost 1/10) samples (call to the generative model of the game) than CE.
CommonCrawl
222 The Integral that Stumped Feynman?
66 Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?
64 Am I too young to learn more advanced math and get a teacher?
59 Do "Parabolic Trigonometric Functions" exist?
54 What is the fastest/most efficient algorithm for estimating Euler's Constant $\gamma$?
40 How do you calculate the decimal expansion of an irrational number?
CommonCrawl
We study a class of countable-state zero-sum stochastic games, called (1-exit) Recursive Simple Stochastic Games (1-RSSGs), with strictly positive rewards. This model is a game version of several classic stochastic models, including stochastic context-free grammars and multi-type branching processes. The goal of the two players in the game is to maximize/minimize the total expected reward generated by a play of the game. These games are motivated by the goal of analyzing the optimal/pessimal expected running time of probabilistic procedural programs with potential recursion. We first show that in such games both players have optimal deterministic "stackless and memoryless" strategies. We then provide polynomial-time algorithms for computing the exact optimal expected reward (which may be infinite, but otherwise is rational), and optimal strategies, for both the maximizing and minimizing single-player versions of the game, i.e., for (1-exit) Recursive Markov Decision Processes (1-RMDPs). It follows that the quantitative decision problem for positive reward 1-RSSGs is in NP $\cap$ coNP. We show that Condon's well-known quantitative termination problem for finite-state simple stochastic games (SSGs) reduces to a special case of the reward problem for 1-RSSGs, namely, deciding whether the value is $\infty$. By contrast, for finite-state SSGs with strictly positive rewards, deciding whether this expected reward value is $\infty$ is solvable in P-time. We also show that there is a simultaneous strategy improvement algorithm that converges in a finite number of steps to the optimal values and strategies of a 1-RSSG with positive rewards. The worst-case complexity of this strategy-improvement algorithm is open, just like that of its classic version for finite-state SSGs. Optimized versions of our algorithms have been implemented in a tool called PReMo, with promising results. 2007 by The University of Edinburgh.
CommonCrawl
Although LaTeX produces output that is far superior to MS Word, it suffers from a fatal flaw: LaTeX document files are non-portable. Thus, it's often necessary to convert LaTeX files to MS Word. I tested a number of ways of doing this, using three types of files that are hard to convert: a table, a framebox, and an equation. I also tested their ability to handle Greek letters. In math mode, I used this instead: $\alpha$. Examples of the Word output are shown below. This list makes no attempt to be complete. If you know of a better way to convert Latex to Word, feel free to let me know. Latex2rtf is a free, open source product that does an outstanding job on figures, thanks to its integration with GhostScript in Linux. It produces a usable RTF (Rich Text Format) file that needs considerable editing before it looks like the original, but there were no nasty surprises. Tables were usable, and equations were rendered in text mode, not converted to images. Font sizes were a little off, and it was necessary to use LaTeX's math mode to create Greek characters that would convert properly (see screen images below). The biggest problem with latex2rtf is its incompatibility with the xspace package. Xspace should not be used if you plan to convert to the document to Word, because it will cause latex2rtf to truncate large sections of your text. This produces a high-quality PDF file from your Latex document. Nuance adds an extra menu item called "Open PDF File" to the File menu in Word. It "converts" the file using virtual optical character recognition. That means you can convert anything, but you also have to check the output for errors. Nuance sometimes gets the `nuances' wrong. For example, it often converted "USA" to "US A". But otherwise, it does a fantastic job. The margins and fonts are precisely the same as in LaTeX. Converting Bibtex references is easy, because to Nuance, they're just more pixels. Even frameboxes are converted perfectly. The only problem comes with equations and Greek letters. Nuance hopelessly mangles equations. As for Greek letters, sometimes they are just dropped, and other times they're converted to images which are fixed in place at incorrect locations in your Word document. These have to be laboriously removed and replaced with symbols. Other times, Greek letters are converted to gibberish (`δ' is changed to '8', for instance). Tth produces pretty good results within the limitations of HTML. However, when Greek letters were produced using text mode, tth incorrectly used English letters instead. Latex2html, on the other hand, converted all the Greek letters to images. Latex2html also had more difficulty with tables. Although latex2html converted equations into images, it was done correctly. Tth converted the equations to HTML characters. Windows Firefox and IE rendered the equations correctly, but Linux Firefox and Opera did not. Chikrii TeX2Word is a Windows program that integrates with Word and converts the LaTeX file to Word as it's being read. It's a shareware program that is heavily copy-protected. On one computer, I had trouble getting Chikrii to start up. Apparently, I had tried an earlier version two years ago and deleted it for some reason. The new version insisted that my 30 day trial period had expired, and refused to run. To give Chikrii TeX2Word a fair test, I had to find another Windows computer and install it. 
Chikrii TeX2Word was not as effective as some of the other programs and failed to render an equation, find and convert Bibtex entries, or create a table properly. This is consistent with the truism that the more time programmers spend on copy-protection, the less time they have to make a product that is worth protecting. At one point, Word also displayed this installer error: "Error 1706. No valid source could be found for product Microsoft Office 2000 SR-1 Professional. The Windows installer cannot continue." However, the conversion results were much the same whether the Equation Editor was present or not.

Another Windows converter is GrindEQ LaTeX-to-Word, an add-in that runs inside Word: you update cross-references (the 'Update GrindEQ labels and references' button) and save the converted document in doc, docx, or rtf format. Doing so produced the message "One or more fields in the selection could not be updated." Afterward, instead of being replaced by error messages, the Bibtex references were replaced by question marks.

Another commercial program is AbleToExtract. This Windows program is similar to Nuance, but it handled Greek letters almost perfectly. Like Nuance, it mangles any images in the file by converting them to text boxes, which makes them impossible to move. The document is also full of hard returns, which makes editing tedious. But otherwise it does a fairly good job.

There was no correlation between quality and price. Nuance PDF converter produced the best result. None of the programs handled text-mode Greek letters correctly. If all you need to do is send a Doc file to somebody else, the commercial converters work best, provided you remove the figures first. However, if you need to actually edit the file afterward, the best option is to use detex and convert the file manually to HTML. Caution is needed because detex frequently drops words from your document.
CommonCrawl
Is it okay to say that the time for each job is 0.4 hours? I presume $x$ is the number of jobs and $y$ is the number of hours to complete them. It's not correct to say that the time for each job is 0.4 hours, because you have a (pretty large) bias term. This means you have a fixed cost: performing one job takes $90.4$ hours, two jobs $90.8$ hours, and so on. So you can say that each additional job takes 0.4 hours, and the time per job only approaches $0.4$ hours asymptotically, i.e. as $x\rightarrow \infty$. Not exactly. Assuming a simple linear model $y = \beta_0 + \beta_1 x + \epsilon$, the parameter $\beta_1$ (in this case $0.4$) represents the effect of a one-unit change in the corresponding covariate $x$ (here, jobs) on the mean value of the dependent variable $y$ (here, hours), assuming that any other covariates remain constant at some value. As the intercept $\beta_0$ is $90$, there are probably some other factors that require on average $90$ hours and need to be taken into account. It is more accurate to say that each additional job requires an additional $0.4$ hours.
CommonCrawl
A group $H$ is said to be capable if there is a group $G$ such that $G/Z(G)\cong H$. It is well known that $G/Z(G)$ can never be cyclic of order $>1$, so nontrivial cyclic groups are not capable. More interesting: the quaternion group $Q_8$ is not capable: there is no $G$ with $G/Z(G)\cong Q_8$. Now $Z_p$ (cyclic of order $p$) is not capable, but $Z_p\times Z_p$ is capable. Q. Given any group $H$, is $H\times H$ capable? Edit: Is there a known example of a finite group $H$ such that $H\times H$ is not capable?

Shahriar Shahriari proved in On normal subgroups of capable groups, Arch. Math. (Basel) 48 (1987), no. 3, 193-198, MR 0880078 (88e:20026), among other things, that if a finite group $G$ contains a normal subgroup $H$ which is either generalized quaternion of order $2^n$, $n\gt 2$, or semidihedral of order $2^n$, $n\gt 3$, then $G$ is not capable. This immediately shows that $Q_8\times Q_8$ cannot be capable (and more generally, gives you lots of examples of groups $H$ such that $H\times H$ is not capable; note that if $H$ is capable, then so is $H\times H$, since any witness $K$ to the capability of $H$ yields $K\times K$ as a witness for the capability of $H\times H$). Shahriari also proved that if $G$ is finite nilpotent and contains a normal subgroup $H$ which is an extraspecial $p$-group of order $p^3$ and exponent $p$, $p$ odd, then $G$ is not capable.

Theorem (Prop. 3.2 in the above-cited paper). Let $G$ be a finite group. If $G = QC_G(Q)$ for some $Q\leq G$, and $1\neq M\subseteq Z^*(Q)\cap[Q,Q]$, then $M\subseteq Z^*(G)$; in particular, $G$ is not capable. Here, $Z^*(Q)$ is the epicenter of $Q$ (similarly for $G$). The epicenter is the obstruction to capability: it is the smallest central subgroup of $G$ such that $G/Z^*(G)$ is capable.

In the article "On groups occurring as center factor groups" (J. Algebra 61 (1979)), F. R. Beyl, U. Felgner and P. Schmid present a condition under which the capability of a direct product of finitely many groups implies the capability of each of the factors. For details see section $6$. So for many classes of groups, the condition that $H\times H$ is capable already implies that $H$ is capable. For example, if we let $G$ be a $p$-group of class at most two and odd prime exponent, then we obtain that if $G$ is a nontrivial direct product, then $G$ is capable if and only if each direct factor is either capable or nontrivial cyclic. There are furthermore articles by Arturo Magidin on this subject. I suppose one can decide the question on direct products of quaternion groups with these results too, but I did not search "long enough" to find it in the literature. There is also a classification of extraspecial capable $p$-groups; see Corollary 8.2 in the paper above.
CommonCrawl
This problem challenged you to check your understanding of the building blocks of maths: "plus", "minus", "divide", and "multiply". You noticed that the order of numbers and symbols is important; in different positions, the "number sentence" can say different things. This is the same as when you speak or write; the order of the words matters. However, with some of the number sentences, there are two different orders which still mean the same thing. Again, this is like speaking or writing; even if the order is different, it can sometimes mean the same thing. For the first four questions, you were asked to put the correct symbol into the box. Several students submitted correct solutions. These include: Jonathan, Jordan, and Callum from Aycliffe Drive Primary School, Lauren from Princess Elizabeth, Anna, Isabella, George, Sophie, and Rhiannon from St. Swithun's, Nathan from Wilson's, Rebecca from Bourne Westfield Primary School, Ayush from Garden Gate Elementary School, Brandon, Narissa, Jordan, Justin, and Cameron from Village Elementary, and Charlotte from Manor Preparatory School.

Rhiannon, from St Swithun's Primary School, approached this problem by trial and improvement: she placed a different symbol in the box and looked to see if the calculation made sense. In this way, she worked out the correct symbols. In $27+36=63$, the largest number is at the end, so if you tried to make a second number sentence like $27=36-63$ it wouldn't give the right answer, because taking $63$ away from $36$ would go into negative numbers. So, to get two "working" number sentences, you have to have the largest number at the front: $50\div 5=10$ and $50=5\times 10$. This is an example of one that you can only do in one way: $7\times 5=35$. Charlotte, from Manor Preparatory School, also submitted the correct solution.

A few people noticed a reason why there can be two different solutions for the number sentence. As Jonathan, Jordan and Callum, from Aycliffe Drive Primary School, point out, "plus" and "minus" are inverse operations (processes), as are "multiply" and "divide". This means that you can do a sum, for example a multiplication, and then undo it by doing the reverse, or "inverse": a division. Thank you to everyone who submitted answers! Well done to you all!
CommonCrawl
This challenge is about a variant of RSA encryption where the integers are replaced by a polynomial ring over $\mathbb F_2$. Recall that RSA works in a ring $\mathbb Z/n$ where $n=pq$ is a product of two large random primes. To break RSA, it is sufficient to compute the size $\varphi(n)$ of the unit group $(\mathbb Z/n)^\ast$ since this allows one to compute $e$th roots in that group. The owner of the private key can compute $\varphi(n)$ as $(p-1)(q-1)$; without knowing $p$ and $q$ this problem is assumed to be hard. In this challenge, $\mathbb Z$ is replaced by $\mathbb F_2[x]$ and $n$ is replaced by a big polynomial $f\in\mathbb F_2[x]$ of degree $2049$.
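By analogy with the integer case, if $f = pq$ for two distinct irreducible polynomials $p,q\in\mathbb F_2[x]$ (an assumption about how the challenge instance is built, mirroring the two random primes above), the size of the unit group would be $$\varphi(f) = \left|(\mathbb F_2[x]/f)^\ast\right| = (2^{\deg p}-1)(2^{\deg q}-1),$$ and computing this without knowing the factorization of $f$ plays the role of the hard problem.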
CommonCrawl
In this chapter, we show how to extend grammars with functions – pieces of code that get executed during grammar expansion, and that can generate, check, or change elements produced. Adding functions to a grammar allows for very versatile test generation, bringing together the best of grammar generation and programming. As this chapter deeply interacts with the techniques discussed in the chapter on efficient grammar fuzzing, a good understanding of those techniques is recommended. Suppose you work with a shopping system that – among several other features – allows customers to pay with a credit card. Your task is to test the payment functionality. We'd like to test specific amounts being charged – for instance, amounts that would exceed the credit card limit. We find that 9 out of 10 credit card numbers are rejected because of having an incorrect checksum. This is fine if we want to test rejection of credit card numbers – but if we want to test the actual functionality of processing a charge, we need valid numbers. We could go and ignore these issues; after all, eventually, it is only a matter of time until large amounts and valid numbers are generated. As for the first concern, we could also address it by changing the grammar appropriately – say, to only produce charges that have at least six leading digits. However, generalizing this to arbitrary ranges of values will be cumbersome. The second concern, the checksums of credit card numbers, runs deeper – at least as far as grammars are concerned: a complex arithmetic operation like a checksum cannot be expressed in a grammar alone – at least not in the context-free grammars we use here. (In principle, one could do this in a context-sensitive grammar, but specifying this would be no fun at all.) What we want is a mechanism that allows us to attach programmatic computations to our grammars, bringing together the best of both worlds. A function can be invoked before expansion, producing the value to be expanded; or after expansion, checking generated elements, and possibly also replacing them. In both cases, functions are specified using the opts() expansion mechanism introduced in the chapter on grammars. They are thus tied to a specific expansion $e$ of a symbol $s$. In the credit card example, for instance, one attaches a function to the <float> expansion with the intention that whenever <float> is expanded, the function high_charge would be invoked to generate a value for <float>. (The actual expansion in the grammar would still be present for fuzzers that ignore functions, such as GrammarFuzzer.) Here, we don't have to give the function to be applied twice a name (say, square()); instead, we apply it inline within the invocation. A post-expansion function can serve as a constraint or filter on the expanded values, returning True if the expansion is valid, and False if not; if it returns False, another expansion is attempted. It can also serve as a repair, returning a string value; like pre-expansion functions, the returned value replaces the expansion. With such a filter, only valid credit cards will be produced. On average, it will still take 10 attempts for each time check_credit_card() is satisfied, but then, we do not have to have recourse to the system under test. Here, each number is generated only once and then repaired. This is very efficient. The checksum function used for credit cards is the Luhn algorithm, a simple yet effective formula.
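The chapter's own implementation is not reproduced here (only its docstrings survive below), but Luhn's algorithm itself is standard; a self-contained sketch matching those docstrings might look like this (function names are illustrative):

```python
def luhn_checksum(digits: str) -> int:
    """Compute Luhn's check digit over a string of digits"""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:   # double every second digit, counting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def valid_luhn(number: str) -> bool:
    """Check whether the last digit is Luhn's checksum over the earlier digits"""
    return luhn_checksum(number[:-1]) == int(number[-1])

def fix_luhn(number: str) -> str:
    """Return the given string of digits, with a fixed check digit"""
    return number[:-1] + str(luhn_checksum(number[:-1]))

print(valid_luhn(fix_luhn("1234567890123456")))  # True by construction
```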
"""Compute Luhn's check digit over a string of digits""" """Check whether the last digit is Luhn's checksum over the earlier digits""" """Return the given string of digits, with a fixed check digit""" """Return the specified pre-expansion function, or None if unspecified""" """Return the specified post-expansion function, or None if unspecified""" The order attribute will be used later in this chapter. Our first task will be implementing the pre-expansion functions – that is, the function that would be invoked before expansion to replace the value to be expanded. To this end, we hook into the process_chosen_children() method, which gets the selected children before expansion. We set it up such that it invokes the given pre function and applies its result on the children, possibly replacing them. A string $s$ replaces the entire expansion with $s$. A list $[x_1, x_2, \dots, x_n]$ replaces the $i$-th symbol with $x_i$ for every $x_i$ that is not None. Specifying None as a list element $x_i$ is useful to leave that element unchanged. If $x_i$ is not a string, it is converted to a string. A value of None is ignored. This is useful if one wants to simply call a function upon expansion, with no effect on the expanded strings. Boolean values are ignored. This is useful for post-expansion functions, discussed below. All other types are converted to strings, replacing the entire expansion. The Python language has its own concept of generator functions, which we of course want to support as well. A generator function in Python is a function that returns a so-called iterator object which we can iterate over, one value at a time. To create a generator function in Python, one defines a normal function, using the yield statement instead of a return statement. While a return statement terminates the function, a yield statement pauses its execution, saving all of its state, to be resumed later for the next successive calls. To support generators, our process_chosen_children() method, above, checks whether a function is a generator; if so, it invokes the run_generator() method. When run_generator() sees the function for the first time during a fuzz_tree() (or fuzz()) call, it invokes the function to create a generator object; this is saved in the generators attribute, and then called. Subsequent calls directly go to the generator, preserving state. We see that the expression contains all integers starting with 1. Note that both above grammars will actually cause the fuzzer to raise an exception when more than 1,000 integers are created, but you will find it very easy to fix this. Finally, yield is actually an expression, not a statement, so it is also possible to have a lambda expression yield a value. If you find some reasonable use for this, let us know. Let us now turn to our second set of functions to be supported – namely, post-expansion functions. The simplest way of using them is to run them once the entire tree is generated, taking care of replacements as with pre functions. If one of them returns False, however, we start anew. The method run_post_functions() is applied recursively on all nodes of the derivation tree. For each node, it determines the expansion applied, and then runs the function associated with that expansion. The helper method find_expansion() takes a subtree tree and determines the expansion from the grammar that was applied to create the children in tree. The method eval_function() is the one that takes care of actually invoking the post-expansion function. 
It creates an argument list containing the expansions of all nonterminal children – that is, one argument for each symbol in the grammar expansion. It then calls the given function. Note that unlike pre-expansion functions, post-expansion functions typically process the values already produced, so we do not support Python generators here. Let us try out these post-expression functions on an example. Suppose we want to produce only arithmetic expressions that evaluate to a negative number – for instance, to feed such generated expressions into a compiler or some other external system. Doing so constructively with pre functions would be very difficult. Instead, we can define a constraint that checks for precisely this property, using the Python eval() function. The Python eval() function takes a string and evaluates it according to Python rules. Since the syntax of our generated expressions is slightly different from Python, and since Python can raise arithmetic exceptions during evaluation, we need a means to handle such errors gracefully. The function eval_with_exception() wraps around eval(); if an exception occurs during evaluation, it returns False – which causes the production algorithm to produce another value. Post-expansion functions can not only be used to check expansions, but also to repair them. To this end, we can have them return a string or a list of strings; just like pre-expansion functions, these strings would then replace the entire expansion or individual symbols. This fragment consists of two HTML (XML) tags that surround the text; the tag name (strong) is present both in the opening (<strong>) as well as in the closing (</strong>) tag. For a finite set of tags (for instance, the HTML tags <strong>, <head>, <body>, <form>, and so on), we could define a context-free grammar that parses it; each pair of tags would make up an individual rule in the grammar. If the set of tags is infinite, though, as with general XML, we cannot define an appropriate grammar; that is because the constraint that the closing tag must match the opening tag is context-sensitive and thus does not fit context-free grammars. So far, we have always first generated an entire expression tree, only to check it later for validity. This can become expensive: If several elements are first generated only to find later that one of them is invalid, we spend a lot of time trying (randomly) to regenerate a matching input. This works, but is very slow; it can take several seconds before a matching expression is found. We can address the problem by checking constraints not only for the final subtree, but also for partial subtrees as soon as they are complete. To this end, we extend the method expand_tree_once() such that it invokes the post-expansion function as soon as all symbols in a subtree are expanded. The main work takes place in this helper method run_post_functions_locally(). It runs the post-expansion function $f$ with run_post_functions() only on the current node by setting depth to zero, as any completed subtrees would have their post-expansion functions ran already. If $f$ returns False, run_post_functions_locally() returns an unexpanded symbol, such that the main driver can try another expansion. It does so for up to 10 times (configurable via a replacement_attempts parameter during construction); after that, it raises a RestartExpansionException to restart creating the tree from scratch. With the above generators and constraints, we can also address complex examples. 
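Before moving on to those examples, here is a sketch of the eval()-based wrapper and constraint discussed above (names are illustrative; the chapter's own code may catch exceptions more selectively):

```python
def eval_with_exception(expr):
    """Evaluate a generated arithmetic expression; return False if evaluation
    raises (e.g. on division by zero), so the fuzzer tries another expansion.
    (A sketch; the chapter's version may handle exceptions differently.)"""
    try:
        return eval(expr)
    except Exception:
        return False

def negative_expr(expr):
    """Post-expansion constraint: accept only expressions that evaluate to a
    negative number."""
    result = eval_with_exception(expr)
    return result is not False and result < 0

print(negative_expr("3 - 5"))   # True
print(negative_expr("1 / 0"))   # False (evaluation fails)
```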
The VAR_GRAMMAR grammar from the chapter on parsers defines a number of variables as arithmetic expressions (which in turn can contain variables, too). Applying a simple GrammarFuzzer on the grammar produces plenty of identifiers, but each identifier has a unique name. What we'd like is that within expressions, only identifiers previously defined should be used. To this end, we introduce a set of functions around a symbol table, which keeps track of all variables already defined. Finally, we clear the symbol table each time we (re)start an expansion. This is helpful as we may occasionally have to restart expansions. To address this issue, we allow to explicitly specify an ordering of expansions. For our previous fuzzers, such an ordering was inconsequential, as eventually, all symbols would be expanded; if we have expansion functions with side effects, though, having control over the ordering in which expansions are made (and thus over the ordering in which the associated functions are called) can be important. """Return the specified expansion ordering, or None if unspecified""" To control the ordering in which symbols are expanded, we hook into the method choose_tree_expansion(), which is specifically set for being extended in subclasses. It proceeds through the list expandable_children of expandable children to choose from and matches them with the nonterminal children from the expansion to determine their order number. The index min_given_order of the expandable child with the lowest order number is then returned, choosing this child for expansion. "Order must have one element for each nonterminal" assert j < len(nonterminal_children), "Expandable child not found" Real programming languages not only have one global scope, but multiple local scopes, frequently nested. By carefully organizing global and local symbol tables, we can set up a grammar to handle all of these. However, when fuzzing compilers and interpreters, we typically focus on single functions, for which one single scope is enough to make most inputs valid. Let us close this chapter by integrating our generator features with the other grammar features introduced earlier, in particular coverage-driven fuzzing and probabilistic grammar fuzzing. The general idea to integrate the individual features is through multiple inheritance, which we already used for ProbabilisticGrammarCoverageFuzzer, introduced in the exercises on probabilistic fuzzing. Probabilistic fuzzing integrates very easily with generators, as both extend GrammarFuzzer in different ways. We have to implement supported_opts() as the merger of both superclasses. At the same time, we also set up the constructor such that it invokes both. Fuzzing based on grammar coverage is a bigger challenge. Not so much for the methods overloaded in both; we can resolve these just as above. The problem is that during expansion, we may generate (and cover) expansions that we later drop (for instance, because a post function returns False). Hence, we have to remove this coverage which is no longer present in the final production. We resolve the problem by rebuilding the coverage from the final tree after it is produced. To this end, we hook into the fuzz_tree() method. We have it save the original coverage before creating the tree, restoring it afterwards. Then we traverse the resulting tree, adding its coverage back again (add_tree_coverage()). 
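Returning briefly to the VAR_GRAMMAR example above, a minimal sketch of the symbol-table helpers described there might look like this (names are illustrative, not necessarily the chapter's actual code):

```python
import random

# A simple global symbol table of identifiers defined so far.
SYMBOL_TABLE = set()

def define_id(identifier):
    """Post-expansion: remember that `identifier` has been defined."""
    SYMBOL_TABLE.add(identifier)

def use_id():
    """Pre-expansion: reuse an already defined identifier if one exists;
    returning None leaves the grammar's own expansion untouched."""
    if not SYMBOL_TABLE:
        return None
    return random.choice(sorted(SYMBOL_TABLE))

def clear_symbol_table():
    """Called whenever an expansion is (re)started."""
    SYMBOL_TABLE.clear()
```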
With ProbabilisticGeneratorGrammarCoverageFuzzer, we now have a grammar fuzzer that combines efficient grammar fuzzing with coverage, probabilities, and generator functions. The only thing that is missing is a shorter name. PGGCFuzzer, maybe? Post-expansion functions can also serve as repairs, applying changes to produced strings such as checksums and identifiers. In the chapter on fuzzing APIs, we show how to produce complex data structures for testing, making use of GeneratorGrammarFuzzer features to combine grammars and generator functions. In the chapter on fuzzing User Interfaces, we make use of GeneratorGrammarFuzzer to produce complex user interface inputs. For fuzzing APIs, generator functions are very common. In the chapter on API fuzzing, we show how to combine them with grammars for even richer test generation. The combination of generator functions and grammars is mostly possible because we define and make use of grammars in an all-Python environment. We are not aware of another grammar-based fuzzing system that exhibits similar features. So far, our pre- and post-expansion functions all accept and produce strings. In some circumstances, however, it can be useful to access the derivation trees directly – for instance, to access and check some child element. Extend GeneratorGrammarFuzzer such that a function can return a derivation tree (a tuple) or a list of derivation trees, which would then replace subtrees in the same way as strings. Extend GeneratorGrammarFuzzer with a post_tree attribute which takes a function just like post, except that its arguments would be derivation trees.
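For the exercises, it helps to recall that a derivation tree is a pair of a symbol and a list of children, which is the convention used throughout the book. A hypothetical repair function returning a tree rather than a string could then be sketched as follows; the function itself is made up here purely for illustration.

def negate(expr):
    # Wrap a produced expression in '-(...)', returned as a small derivation tree
    # rather than a plain string. Terminals are nodes with an empty child list.
    return ("<expr>", [("-(", []), (expr, []), (")", [])])

print(negate("1 + 2"))   # ('<expr>', [('-(', []), ('1 + 2', []), (')', [])])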
CommonCrawl
Abstract: We consider a two-way half-duplex relaying system where multiple pairs of single antenna users exchange information assisted by a multi-antenna relay. Taking into account the practical constraint of imperfect channel estimation, we study the achievable sum spectral efficiency of the amplify-and-forward (AF) and decode-and-forward (DF) protocols, assuming that the relay employs simple maximum ratio processing. We derive an exact closed-form expression for the sum spectral efficiency of the AF protocol and a large-scale approximation for the sum spectral efficiency of the DF protocol when the number of relay antennas, $M$, becomes sufficiently large. In addition, we study how the transmit power scales with $M$ to maintain a desired quality-of-service. In particular, our results show that by using a large number of relay antennas, the transmit powers of the user, relay, and pilot symbol can be scaled down proportionally to $1/M^\alpha$, $1/M^\beta$, and $1/M^\gamma$ for certain $\alpha$, $\beta$, and $\gamma$, respectively. This elegant power scaling law reveals a fundamental tradeoff between the transmit powers of the user/relay and pilot symbol. Finally, capitalizing on the new expressions for the sum spectral efficiency, novel power allocation schemes are designed to further improve the sum spectral efficiency.
CommonCrawl
It follows that $h(x\wedge y)=h(x)\wedge h(y)$, $h(0)=0$, $h(1)=1$. Remark: The term-equivalence with Boolean algebras is given by $x\wedge y=x\cdot y$, $-x=x+1$, $x\vee y=-(-x\wedge -y)$ and $x+y=(x\vee y)\wedge -(x\wedge y)$. Example 1: $\langle \mathcal P(S), \cup ,\emptyset, \cap, S, -\rangle$, the collection of subsets of a set $S$, with union, intersection, and set complementation.
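The stated term-equivalence is easy to check on the two-element case. The following small sketch (added here for illustration, with 0 and 1 standing for the ring elements) verifies that the derived meet, complement, join and sum behave as expected:

from itertools import product

AND = lambda x, y: (x * y) % 2                      # x ∧ y = x · y
NOT = lambda x: (x + 1) % 2                         # -x = x + 1
OR  = lambda x, y: NOT(AND(NOT(x), NOT(y)))         # x ∨ y = -(-x ∧ -y)
ADD = lambda x, y: AND(OR(x, y), NOT(AND(x, y)))    # x + y = (x ∨ y) ∧ -(x ∧ y)

for x, y in product((0, 1), repeat=2):
    assert AND(x, y) == min(x, y)
    assert OR(x, y) == max(x, y)
    assert ADD(x, y) == (x + y) % 2                 # recovers the ring addition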
CommonCrawl
Volume 4, Number 1 (2006), 93-116. In this paper, we study contact structures on any open $3$-manifold $V$ that is the interior of a compact $3$-manifold. To do this, we introduce new proper contact isotopy invariants called the slope at infinity and the division number at infinity. We first prove several classification theorems for $T^2 \times [0, \infty)$, $T^2 \times \R$, and $S^1 \times \R^2$ using these concepts. The only other classification result on an open $3$-manifold is Eliashberg's classification on $\R^3.$ Our investigation uncovers a new phenomenon in contact geometry: There are infinitely many tight contact structures on $T^2 \times [0,1)$ that cannot be extended to a tight contact structure on $T^2 \times [0, \infty)$. Similar results hold for $T^2 \times \R$ and $S^1 \times \R^2$. Finally, we show that if every $S^2 \subset V$ bounds a ball or an $S^2$ end, then there are uncountably many tight contact structures on $V$ that are not contactomorphic, yet are isotopic. Similarly, there are uncountably many overtwisted contact structures on $V$ that are not contactomorphic, yet are isotopic. These uncountability results generalize work by Eliashberg when $V = S^1 \times \R^2$. J. Symplectic Geom., Volume 4, Number 1 (2006), 93-116.
CommonCrawl
Mostly on Quora (cf) now. Give me a heads-up if you need anything regarding cyber warfare–I'm likely the top ten on the island (actually, could be the top three and you won't know who the other two are, but that's hardly the point). Gmail. Meet in person. To-individual Fast, or cash only. 769 Is it possible to set the equivalent of a src attribute of an img tag in CSS? 368 "directory junction" vs "directory symbolic link"? 215 What is the result of $\infty - \infty$? 203 What happens when there's insufficient memory to throw an OutOfMemoryError?
CommonCrawl
While browsing the internet, of course using Internet Explorer without any adblocker, you have noticed a number of interesting competitions advertised in the panels on various webpages. In most of these competitions you need to answer a simple question, like how many triangles/squares/rectangles there are in a picture, or even choose the right answer out of three possibilities. Despite the simplicity of the task, it seems that there are many valuable prizes to be won. So there is definitely something to compete for! In order to increase your chances, you decided to write a simple program that will solve the problem for you. You decided to focus first on the question "How many squares are there in the picture?", and to simplify the problem even more, you assume that the input picture consists only of a number of lines that are infinite in both directions. To be precise, we say that four lines $\ell _1,\ell _2,\ell _3,\ell _4$ in the picture form a square if lines $\ell _1$ and $\ell _3$ are parallel to each other and perpendicular to $\ell _2$ and $\ell _4$, and moreover the distance between $\ell _1$ and $\ell _3$ is the same as the distance between $\ell _2$ and $\ell _4$. The first line of the input contains a single integer $n$ ($1\leq n\leq 2\ 000$), denoting the number of lines in the input picture. Then follow $n$ lines, each containing a description of one line in the input picture. The line is given as a pair of distinct points lying on it. That is, the description consists of four integers $x_1,y_1,x_2,y_2$, each of them of absolute value at most $10\ 000$, such that the line passes through points $(x_1,y_1)$ and $(x_2,y_2)$. You may assume that points $(x_1,y_1)$ and $(x_2,y_2)$ are different, and also that all the lines in the picture are pairwise different. Output exactly one line with one integer, denoting the total number of squares formed by the lines in the picture.
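One way to attack the problem, sketched here as a possible approach rather than a reference solution: group the lines by direction, record for each line its offset along the common normal, and then, for every pair of mutually perpendicular directions, count pairs of parallel lines in one family whose separation matches the separation of a pair in the other family. A rough Python version could look like this; all names are my own.

from collections import defaultdict
from fractions import Fraction
from math import gcd

def count_squares(lines):
    # lines: list of (x1, y1, x2, y2); each line given by two distinct points.
    by_dir = defaultdict(list)
    for x1, y1, x2, y2 in lines:
        a, b = y2 - y1, x1 - x2                 # normal vector of a*x + b*y = c
        c = a * x1 + b * y1
        g = gcd(abs(a), abs(b))
        a, b = a // g, b // g
        offset = Fraction(c, g)
        if a < 0 or (a == 0 and b < 0):         # canonical sign for the normal
            a, b, offset = -a, -b, -offset
        by_dir[(a, b)].append(offset)

    def separations(offsets):
        # Multiset of pairwise distances between parallel lines (up to a common scale).
        counts = defaultdict(int)
        offs = sorted(offsets)
        for i in range(len(offs)):
            for j in range(i + 1, len(offs)):
                counts[offs[j] - offs[i]] += 1
        return counts

    total = 0
    for (a, b), offsets in by_dir.items():
        p, q = -b, a                            # perpendicular direction, same norm
        if p < 0 or (p == 0 and q < 0):
            p, q = -p, -q
        if (a, b) < (p, q) and (p, q) in by_dir:
            c1, c2 = separations(offsets), separations(by_dir[(p, q)])
            total += sum(n * c2[s] for s, n in c1.items() if s in c2)
    return total

print(count_squares([(0, 0, 0, 1), (1, 0, 1, 1), (0, 0, 1, 0), (0, 1, 1, 1)]))  # 1

Exact rational arithmetic is used for the offsets to avoid floating-point comparisons; whether this straightforward counting is fast enough for the stated limits is a separate question.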
CommonCrawl
Abstract: We study Type IIB supergravity solutions with spacetime of the form $AdS_6\times S^2$ warped over a Riemann surface $\Sigma$, where $\Sigma$ includes punctures around which the supergravity fields have non-trivial $SL(2,R)$ monodromy. Solutions without monodromy have a compelling interpretation as near-horizon limits of $(p,q)$ 5-brane webs, and the punctures have been interpreted as additional 7-branes in the web. In this work we provide further support for this interpretation and clarify several aspects of the identification of the supergravity solutions with brane webs. To further support the identification of the punctures with 7-branes, we show that punctures with infinitesimal monodromy match a probe 7-brane analysis using $\kappa$-symmetry. We then construct families of solutions with fixed 5-brane charges and punctures with finite monodromy, corresponding to fully backreacted 7-branes. We compute the sphere partition functions of the dual 5d SCFTs and use the results to discuss concrete brane web interpretations of the supergravity solutions.
CommonCrawl
We extend the recent investigation of G. Wasilkowski on the equivalence of weighted anchored and ANOVA $L_1$ and $L_\infty$ norms of function spaces with mixed partial derivatives of order one. Among other norms, we consider $L_p$ norms for $1\le p\le \infty$ and confirm the conjecture that for product weights, summability of the weight sequence is necessary and sufficient. Joint work with Jan Schneider (University of Rostock, Germany).
CommonCrawl
Abstract: We study the necking dynamics of a filament of complex fluid or soft solid in uniaxial tensile stretching at constant imposed Hencky strain rate $\dot\varepsilon$, by means of linear stability analysis and nonlinear (slender filament) simulations. We demonstrate necking to be an intrinsic flow instability that arises as an inevitable consequence of the constitutive behaviour of essentially any material (with a possible rare exception, which we outline). We derive criteria for the onset of necking that are reportable simply in terms of characteristic signatures in the shapes of the experimentally measured rheological response functions, and should therefore apply universally to all materials. As evidence of their generality, we numerically show them to hold in six popular constitutive models of polymers and soft glasses. Two distinct modes of necking instability are predicted. The first is relatively gentle, and sets in when the tensile stress signal first curves down as a function of the time $t$ (or accumulated strain $\epsilon=\dot\varepsilon t$) since the inception of the flow. The second is more violent, and sets in when a carefully defined `elastic derivative' of the tensile force first slopes down as a function of $t$ (or $\dot\varepsilon$). In the limit of fast flow $\dot\varepsilon\tau\to\infty$, where $\tau$ is the material's characteristic stress relaxation time, this second mode reduces to the Considére criterion for necking in solids. However we show that the Considére criterion fails to correctly predict the onset of necking in any viscoelastic regime of finite imposed $\dot\varepsilon\tau$. Finally, we elucidate the way these modes of instability manifest themselves in entangled linear polymers, wormlike micelles and branched polymers. We demonstrate four distinct regimes as a function of imposed strain rate, consistent with experimental master curves.
CommonCrawl
A martingale is a sequence of random variables that maintain their future expected value conditioned on the past. A $[0,1]$-bounded martingale is said to polarize if it converges in the limit to either $0$ or $1$ with probability $1$. A martingale is said to polarize strongly, if in $t$ steps it is sub-exponentially close to its limit with all but exponentially small probability. In 2008, Arikan built a powerful class of error-correcting codes called Polar codes. The essence of his theory associates a martingale with every invertible square matrix over a field (and a channel) and showed that polarization of the martingale leads to a construction of codes that converge to Shannon capacity. In 2013, Guruswami and Xia, and independently Hassani et al. showed that strong polarization of the Arikan martingale leads to codes that converge to Shannon capacity at finite block lengths, specifically at lengths that are inverse polynomial in the gap to capacity, thereby resolving a major mathematical challenge associated with the attainment of Shannon capacity. We show that a simple necessary condition for an invertible matrix to polarize over any non-trivial channel is also sufficient for strong polarization over all symmetric channels over all prime fields. Previously the only matrix which was known to polarize strongly was the $2\times 2$ Hadamard matrix. In addition to the generality of our result, it also leads to arguably simpler proofs. The essence of our proof is a ``local definition'' of polarization which only restricts the evolution of the martingale in a single step, and a general theorem showing the local polarization suffices for strong polarization. In this talk I will introduce polarization and polar codes and, time permitting, present a full proof of our main theorem. No prior background on polar codes will be assumed.
CommonCrawl
I'm a FORTRAN/NAG/SPSS/SAS/Cephes/MathCad/R user and I don't see where the functions like dnorm(mean, sd) are in Boost.Math? I'm a user of New SAS Functions for Computing Probabilities. I'm allergic to reading manuals and prefer to learn from examples. Fear not - you are not alone! Many examples are available for functions and distributions. Some are referenced directly from the text. Others can be found at \boost_latest_release\libs\math\example. If you are a Visual Studio user, you should be able to create projects from each of these, making sure that the Boost library is in the include directories list. How do I make sure that the Boost library is in the Visual Studio include directories list? You can add an include path, for example, your Boost place /boost-latest_release, for example X:/boost_1_45_0/ if you have a separate partition X for Boost releases. Or you can use an environment variable BOOST_ROOT set to your Boost place, and include that. Visual Studio before 2010 provided Tools, Options, VC++ Directories to control directories: Visual Studio 2010 instead provides property sheets to assist. You may find it convenient to create a new one adding \boost-latest_release; to the existing include items in $(IncludePath). I'm a FORTRAN/NAG/SPSS/SAS/Cephes/MathCad/R user and I don't see where the properties like mean, median, mode, variance, skewness of distributions are in Boost.Math? They are all available (if defined for the parameters with which you constructed the distribution) via Cumulative Distribution Function, Probability Density Function, Quantile, Hazard Function, Cumulative Hazard Function, mean, median, mode, variance, standard deviation, skewness, kurtosis, kurtosis_excess, range and support. I am a C programmer. Can I use Boost.Math with C? Yes you can, including all the special functions, and TR1 functions like isnan. They appear as C functions, by being declared as "extern C". I am a C# (Basic? F# FORTRAN? Other CLI?) programmer. Can I use Boost.Math with C#? What are these "policies" things for? Policies are a powerful (if necessarily complex) fine-grain mechanism that allows you to customise the behaviour of the Boost.Math library according to your precise needs. See Policies. But if, very probably, the default behaviour suits you, you don't need to know more. I am a C user and expect to see a global C-style ::errno set for overflow/errors etc? You can achieve what you want - see error handling policies and user error handling and many examples. I am a C user and expect to silently return a max value for overflow? You (and C++ users too) can return whatever you want on overflow - see overflow_error and error handling policies and several examples. I don't want any error message for overflow etc? You can control exactly what happens for all the abnormal conditions, including the values returned. See domain_error, overflow_error, error handling policies, user error handling, etc., and examples. My environment doesn't allow and/or I don't want exceptions. Can I still use Boost.Math? Yes, but you must customise the error handling: see user error handling and changing policies defaults. The docs are several hundreds of pages long! Can I read the docs off-line or on paper? Yes - you can download the Boost current release of most documentation as a zip of pdfs (including Boost.Math) from Sourceforge, for example https://sourceforge.net/projects/boost/files/boost-docs/1.45.0/boost_pdf_1_45_0.tar.gz/download.
And you can print any pages you need (or even print all pages - but be warned that there are several hundred!). Both html and pdf versions are highly hyperlinked. The entire Boost.Math pdf can be searched with Adobe Reader, Edit, Find ... This can often find what you seek, a partial substitute for a full index. I want a compact version for an embedded application. Can I use float precision? Yes - by selecting the RealType template parameter as float: for example normal_distribution<float> your_normal(mean, sd); (But double may still be used internally, so the space saving may be less than you hope for). You can also change the promotion policy, but accuracy might be much reduced. I seem to get somewhat different results compared to other programs. Why? We hope Boost.Math is more accurate: our priority is accuracy (over speed). See the section on accuracy. But for evaluations that require iterations there are parameters which can change the required accuracy. You might be able to squeeze a little more accuracy at the cost of runtime. Will my program run more slowly compared to other math functions and statistical libraries? Probably, though not always, and not by too much: our priority is accuracy. For most functions, making sure you have the latest compiler version with all optimisations switched on is the key to speed. For evaluations that require iteration, you may be able to gain a little more speed at the expense of accuracy. See detailed suggestions and results on performance. How do I handle infinity and NaNs portably? See nonfinite fp_facets for Facets for Floating-Point Infinities and NaNs. Where are the pre-built libraries? Good news - you probably don't need any! - just #include <boost/math/distribution_you_want>. But in the unlikely event that you do, see building libraries. I don't see the function or distribution that I want. You could try an email to ask the authors - but no promises!
CommonCrawl
where $R$ is the DISCRETE random variable for the loss and $\alpha$ is the confidence level. The cited result says that CVaR is coherent for general loss distributions, including discrete distributions. I think that I was confused by other authors who were themselves confused about the definitions of CVaR. In particular, in the following paper, the author mistakenly stated that Tail Conditional Expectation (TCE) is the same as CVaR, and that they are not coherent. However, TCE is not the same as CVaR in general. If the underlying distribution is continuous, they are the same. $VaR^\alpha$ is not a coherent risk measure because it fails sub-additivity (a coherent risk measure is monotonic, sub-additive, positively homogeneous, and translation invariant). The expectation operator $E[\cdot]$ is linear, so it meets sub-additivity, as well as the other three properties, so $CVaR$ is a coherent risk measure. Conditional VaR (CVaR), which is also called Expected Shortfall, is a coherent risk measure (even though it is derived from a non-coherent one, namely VaR). EDIT: I just saw that you emphasized discrete, but that shouldn't change the general situation.
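For reference, a definition of CVaR that works for general (in particular discrete) loss distributions is the Rockafellar-Uryasev formulation; it is added here for clarity and is not quoted from the thread itself: $\mathrm{CVaR}_\alpha(R) \;=\; \min_{c\in\mathbb{R}}\Big\{\, c + \tfrac{1}{1-\alpha}\, E\big[(R-c)^+\big] \Big\}$, whereas the tail conditional expectation is $\mathrm{TCE}_\alpha(R) = E\big[R \mid R \ge \mathrm{VaR}_\alpha(R)\big]$. The two coincide for continuous loss distributions, as the answers note, but can differ when the distribution has atoms, which is exactly the discrete case emphasized in the question.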
CommonCrawl
Introduction to the work of the "7 samurais" Abstract: This semester we plan to discuss the recent work of Abert-Bergeron-Biringer-Gelander-Nikolov-Raimbault-Samet (hereafter 7s) that, among other things, discusses the asymptotic growth of the Betti numbers of compact locally symmetric manifolds $b_i(M)/vol(M)$ as $vol(M)\to \infty$, where $M=\Gamma\backslash X$ are quotients of a fixed (higher rank) irreducible symmetric space. To understand these results (and to put them in perspective) we need to discuss a variety of important and cool math topics: $L^2$-Betti numbers, Lück's Approximation Theorem, Benjamini-Schramm convergence, Invariant Random Subgroups, the Stuck-Zimmer theorem, and more... There is some topology, geometry, dynamics, and number theory in all of this. In this first talk we will give a general overview, and discuss some organizational topics.
CommonCrawl
This page is intended to contain notes on what we discussed in individual classes. You can gain some cheap wiki points by adding a summary of a class here, calling attention to whatever highlights you feel are important, or adding onto a summary that someone else has already posted. Today we talked about the process of how knowledge is obtained, specifically in science versus in mathematics. There is also a secondary issue of how/when/why/if we are brought up to understand that process in each of those two cases. We then talked about the issue of indisputable truth, and to what extent such a concept is reasonable. Are mathematical facts like 1+1=2 indisputable? If they are not, then could anything be? If they are, then are indisputable truths possible in any spheres outside of mathematics? We discussed the 16 statements on page 8 and the degree to which one can confidently justify whether each of them is true or false. Why is $\mathbb Z$ used for the set of integers? It turns out that the German word for "numbers" is Zahlen. Yeah.
CommonCrawl
The first player picks a positive integer $X$. The second player gives a list of $k$ positive integers $Y_1, \ldots , Y_k$ such that $(Y_1+1)(Y_2+1) \cdots (Y_k+1) = X$, and gets $k$ points. Write a program that plays the second player. The input consists of a single integer $X$ satisfying $10^3 \le X \le 10^9$, giving the number picked by the first player. Write a single integer $k$, giving the number of points obtained by the second player, assuming she plays as well as possible.
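Since every factor $Y_i + 1$ is at least $2$, the best the second player can do is split $X$ into as many factors of at least two as possible, i.e. $k$ equals the number of prime factors of $X$ counted with multiplicity. A short sketch of this idea in Python follows; the solution outline is my own reading of the problem, not an official one.

def max_points(x):
    # Number of prime factors of x counted with multiplicity: each prime factor
    # p contributes one factor (Y_i + 1) = p with Y_i = p - 1 >= 1.
    k, d = 0, 2
    while d * d <= x:
        while x % d == 0:
            x //= d
            k += 1
        d += 1
    if x > 1:
        k += 1
    return k

print(max_points(1000))   # 1000 = 2^3 * 5^3, so the answer is 6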
CommonCrawl
Jacob, Thomas K and Pandit, Shashidhara S (1987) Spinel deoxidation equilibrium in the cobalt-manganese-aluminum-oxygen system: Experiment and modeling. In: International Journal for Materials Research/Zeitschrift für Metallkunde, 78 (9). pp. 652-656. The concentration and chemical potential of O in molten Co-Mn alloys equilibrated with (Co, Mn)$Al_2O_4$ spinel solid solutions and $\alpha-Al_2O_3$ were determined at 1873 K as a function of Mn concentration. The composition of the equilibrium spinel phase was determined using electron probe microanalysis. The effect of Mn on the activity coefficient of O in liquid Co was derived. The deoxidation equilibrium was computed by using data on the activity coefficients of O and Mn in liquid Co, Gibbs energies of formation of pure spinels, and published interaction parameters. The activity-composition relation for the spinel solid solution was derived from a cation distribution model. The computations were in agreement with experimental data only when the activity coefficient of Mn was taken as 0.22, rather than the 0.154 suggested in the literature.
CommonCrawl
Today, we will discuss the Daily Magic Spell proposed on January 6, 2017. Problem: Suppose you have a club of $20$ members. The club chooses four officers: President, Vice-President, Treasurer, and Secretary. They also choose someone to be in charge of fundraising. The four officers must all be different, but the member in charge of fundraising can be one of the officers. How many ways can we choose the four officers in the club with no restrictions? Solution: We can first choose the President of the club. There are $20$ options. After the President is chosen, there are $19$ options for the Vice-President. We repeat this process until all of the positions are filled. Therefore, there are $20 \times 19 \times 18 \times 17 = 116280$ ways to fill the four positions. Additionally, we wish to appoint someone in the club to take charge of fundraising. This can be any one of the $20$ members in the club. Therefore, we have $116280 \times 20 = 2325600$ ways to appoint four officers and a person in charge of fundraising. The most common mistake among students who answered this question incorrectly was most likely misreading the problem: some may not have accounted for the possibility that one of the four officers is also the member in charge of fundraising. Because of this, a common answer might be $20 \times 19 \times 18 \times 17 \times 16 = 1860480$. Otherwise, this problem is a straightforward Product Rule problem (Counting Principle). Most students who attempted this problem recognized this! If you have any other approaches that you believe are worth discussing, please let us know by posting here! We would greatly appreciate hearing your ideas!
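As a final quick check of the counts above, here is a small illustrative script (added for this write-up) using Python's math.perm, available in Python 3.8+:

from math import perm

officers = perm(20, 4)        # ordered choice of President, VP, Treasurer, Secretary
print(officers)               # 116280
print(officers * 20)          # 2325600: the fundraiser may be any of the 20 members
print(perm(20, 5))            # 1860480: the common incorrect answer (5 distinct people)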
CommonCrawl
There are a few topics that I wish were taught in an introduction to statistics undergraduate course. One of those topics is Bayesian Statistics, the other is Statistical Power. In this post, I go through the analysis of flipping coins, and how to calculate statistical power for determining if a coin is biased or fair. For 100 coin flips, if we get a number of heads between 40 and 60, we "fail to reject the null hypothesis", otherwise we "reject the null hypothesis." With $\alpha=0.05$, we would incorrectly "reject the null hypothesis" 5% of the time. If we flip 100 coins, we get at least 0.9 power for a biased coin where $p<0.34$ or $p>0.66$. If we flip 250 coins, we get at least 0.9 power for a biased coin where $p<0.40$ or $p>0.60$. If we flip 1000 coins, we get at least 0.9 power for a biased coin where $p<0.45$ or $p>0.55$. If we flip 3000 coins, we get at least 0.9 power for a biased coin where $p<0.47$ or $p>0.53$. With 100 flips, we can only distinguish a very biased coin where $p<0.34$ or $p>0.66$ from a fair coin 90% of the time. With 10 times more flips (1000), we can distinguish a less biased coin where $p<0.45$ or $p>0.55$ from a fair coin 90% of the time. It is important that experiments have a large enough sample size so that there is enough statistical power to detect differences. If the sample size is too small, we can only detect a difference when there is a massive difference from the norm.
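Here is a sketch of how such power numbers can be computed; it is my own illustration using scipy, and it assumes the acceptance region is the central interval holding at least $1-\alpha$ probability under the fair-coin null, which for $n=100$ is the 40 to 60 band mentioned above.

from scipy.stats import binom

def power(n, p, alpha=0.05):
    # Acceptance region for H0: p = 0.5, then the probability that a coin with
    # true bias p lands outside it (i.e. the test correctly rejects H0).
    lo, hi = binom.interval(1 - alpha, n, 0.5)
    return 1 - (binom.cdf(hi, n, p) - binom.cdf(lo - 1, n, p))

print(power(100, 0.66))    # power at the boundary of the n=100 range quoted above
print(power(1000, 0.55))   # power at the boundary of the n=1000 range quoted above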
CommonCrawl
I will discuss the interplay between (abstract) square functions that I shall explain and bounded functional calculi. Part of these results are explicitly thought for any abstract functional calculus, parts more specific to the $H^\infty$-calculus developed by Alan McIntosh. I will try to break down the abstract results to concrete situations of (semi)groups on usual Banach spaces, like $L_p(\Omega)$. Using the main result, I will give as applications some (really) three line proofs, for example on the optimal angle of the $H^\infty$-calculus , or the Hörmander-calculus of Kriegler-Weis.
CommonCrawl
Abstract: We prove that any Boolean algebra with the subsequential completeness property contains an independent family of size continuum. This improves a result of Argyros from the 1980s which asserted the existence of an uncountable independent family. In fact we prove it for a bigger class of Boolean algebras satisfying much weaker properties. It follows that the Stone spaces of all such Boolean algebras contain a copy of the Čech-Stone compactification of the integers, and the Banach space of continuous functions on them has $l_\infty$ as a quotient. Connections with the Grothendieck property in Banach spaces are discussed.
CommonCrawl
How to prove that a $3 \times 3$ Magic Square must have $5$ in its middle cell?, Mathematics, May 18, 2016. How to reduce the low-rank matrix completion problem to integer programming?, Computer Science, April 18, 2017. Toeplitz matrix nearest to a given (symmetric) matrix, MathOverflow, March 15, 2018. Writing $(x^2 + y^2 + z^2)^2 - 3 ( x^3 y + y^3 z + z^3 x)$ as a sum of two squares of quadratic forms, Mathematics, September 3-8, 2017. Profile picture courtesy of Erik Rauch. 73 Is "The empty set is a subset of any set" a convention? 72 Why are some coins Reuleaux triangles? 28 What is a real-world metaphor for irrational numbers? 27 How to prove that a $3 \times 3$ Magic Square must have $5$ in its middle cell? 19 Systems of linear equations: Why does no one plug back in? 19 Are two sequences equal if the sums and sums of squares are equal?
CommonCrawl
The numerical range of an $n\times n$ matrix is determined by an $n$ degree hyperbolic ternary form. Helton-Vinnikov confirmed conversely that an $n$ degree hyperbolic ternary form admits a symmetric determinantal representation. We determine the types of Riemann theta functions appearing in the Helton-Vinnikov formula for the real symmetric determinantal representation of hyperbolic forms for the genus $g=1$. We reformulate the Fiedler-Helton-Vinnikov formulae for the genus $g=0,1$, and present an elementary computation of the reformulation. Several examples are provided for computing the real symmetric matrices using the reformulation.
CommonCrawl
The 'lmvar' package fits a Gaussian linear model. It differs from a classical linear model in that the variance is not constant. Instead, the variance has its own model, comparable to the model for the expected value. The classical linear model is provided by the function 'lm' in the package 'stats'. Working with the package is a lot like working with 'lm'. A fit with 'lmvar' results in an 'lmvar' object, which is a list. Accessor functions are provided to extract the list members, such as the fitted coefficients \(\beta\) and the log-likelihood. Various utility functions such as residuals to calculate residuals, AIC to calculate the AIC, fitted to obtain expected values, standard deviations and confidence intervals, etc., are also provided by the package. The package is intended for people who run a classical linear model and want to see what happens if the restriction of a constant variance is dropped. Questions in this context are: does the allowance of heteroscedasticity result in a better fit, lower values for the AIC or BIC, smaller prediction errors, etc.? where \(X_\mu\) is the 'model matrix' or 'design matrix' for \(\mu\) and \(\beta_\mu\) the parameter vector for \(\mu\). \(X_\mu\) is a \(n \times k_\mu\) matrix and \(\beta_\mu\) a vector of length \(k_\mu\). where \(\log \sigma\) stands for the vector \((\log\sigma_1, \dots, \log\sigma_n)\), \(X_\sigma\) is the 'model matrix' or 'design matrix' for \(\sigma\) and \(\beta_\sigma\) the parameter vector for \(\sigma\). The logarithm is taken to be the 'natural logarithm' with base \(e\). The dimensions of \(X_\sigma\) are \(n \times k_\sigma\) and \(\beta_\sigma\) is a vector of length \(k_\sigma\). The vector of observations \(Y\) and the matrices \(X_\mu\) and \(X_\sigma\) are specified by the user. They must contain real values. The fit returns the maximum-likelihood estimators for \(\beta_\mu\) and \(\beta_\sigma\). The model for both \(\mu\) and \(\sigma\) contains an intercept term by default. That means that the first column of both matrices is a column in which each matrix-element equals 1. The package will add this column to the user-suppplied matrices to ensure that the intercept term is present. There is no need for a user to include such a column in a user-supplied model-matrix. Intercept terms can be suppressed with the arguments intercept_mu = FALSE and intercept_sigma = FALSE of the function lmvar. After adding the intercept columns (if not suppressed), the package will check whether the resulting matrices are full rank. If not, columns will be removed from each matrix until it is full rank. The addition of an intercept column and, possibly, the removal of columns to obtain a full-rank matrix, imply that the actual matrices used in the fit can be different from the user-specified matrices. The matrices that are actually used in the fit are returned as members of the lmvar object. Carrying out the fit boils down to solving a set of non-linear equations. By default this is carried out in the background by the function maxNR from the package 'maxLik' but it is possible to use another function from the same package. More mathematical details about the model can be found in the vignette 'Math' which comes with this package. It can be viewed with vignette("Math") or vignette("Math", package="lmvar"). The main function in the package is lmvar. It carries out a fit and returns an lmvar object. All observations and matrix elements must be real-valued. Missing values, values that are NaN etc., are not allowed. 
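Before turning to the example output, it may help to spell out the quantity that the fit maximizes. With independent observations $Y_i \sim N(\mu_i, \sigma_i^2)$ as in the model above, the log-likelihood is $\log L(\beta_\mu, \beta_\sigma) = -\tfrac{n}{2}\log(2\pi) - \sum_{i=1}^{n}\log\sigma_i - \tfrac12 \sum_{i=1}^{n}\tfrac{(y_i-\mu_i)^2}{\sigma_i^2}$, with $\mu = X_\mu\beta_\mu$ and $\log\sigma = X_\sigma\beta_\sigma$ as defined earlier. (This is our own spelled-out form of the standard Gaussian log-likelihood; the 'Math' vignette mentioned above presumably gives the full details.)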
A matrix either has column names for all columns, or no column names at all. The same column name can appear for both \(X_\mu\) and \(X_\sigma\) but column names should be unique within each matrix. The intercept column that is added by lmvar (although one can suppress it), is called (Intercept) for \(X_\mu\) and (Intercept_s) for \(X_\sigma\). With each column in \(X_\mu\) corresponds an element in \(\beta_\mu\). The name of that element is the corresponding column name. The same is true for \(X_\sigma\) and \(\beta_\sigma\). It can happen that lmvar fails to solve the maximum-likelihood equations and exits with warnings. See here for the options you have in this case. slvr_log: log-output from the function maxNR. Only added to the lmvar object when requested. Once lmvar has run and an lmvar-object has been created, one can obtain \(\beta_\mu\) and \(\beta_\sigma\) with the function coef. The function fitted allows one to obtain \(\mu\) and \(\sigma\). We refer to the package documentation (in particular the package index which can be viewed with help(package = "lmvar")) for a list of all available functions and function details. We demonstrate the package with the help of the dataframe cats which can be found in the MASS package. # As example we use the dataset 'cats' from the library 'MASS'. We want to regress the cats heart weight 'Hwt' onto the body weight 'Bwt'. # Create the model matrix. It only contains the body weight. An intercept term will be added by 'lmvar'. The first line shows the call that created fit. Then, we are told something about the distribution of the standard scores (also called the z-scores). Next, the summary shows the matrix with the coefficients \(\beta_\mu\) and \(\beta_\sigma\). The coefficients \(\beta_\mu\) are (Intercept), and Bwt. The coefficients \(\beta_\sigma\) are (Intercept_s), and Bwt_s. They are called this way to distinguish them from the coefficients for \(\beta_\mu\). In cases where there is no risk of confusion, the true names of the coefficients \(\beta_\sigma\) will be used, which are (Intercept_s) and Bwt. The matrix with coefficients shows that Bwt and Bwt_s are statistically significant at the 5% level, but the intercept terms are not. The next piece of information gives an impression of the distribution of the standard deviations \(\sigma\). Finally the model is compared to a classical linear model with the same model matrix \(X_\mu\) but a standard deviation that is the same for all observations. The summary shows the difference in log-likelihood between the two models and the difference in degrees of freedom. Twice the difference in log-likelihood is the difference in deviance, for which a p-value is calculated. The p-value of 0.00433, indicates that the lmvar fit is a better fit than the classical linear model at the 5% confidence level. I.e., it makes sense to let the standard deviation vary instead of keeping it fixed. Another useful, high-level check of the fit is to look at a number of diagnostic plots with the command plot (fit). The value 2 is correct: the user-supplied matrix had 1 column and lmvar added an intercept column. The two columns are linearly independent so no column had to be removed. The summary overview has left out the comparison with the classical linear model as well. The classical linear model and the model in fit are no longer nested models because of the absence on an intercept term in \(X_\sigma\). Let's also plot the average heart weights versus the body weight, with a 95% confidence interval as error bar. 
Both AIC and BIC favor the fit with lmvar over the fit with lm. The red line is the standard deviation that is returned from the classical linear model. Hopefully this demonstration has given an idea of how to work with the package. The documentation of the individual functions contains further examples. We refer to the package index for a list of all available functions. The index can be viewed with help(package="lmvar"). Various generic functions from the 'stats' package work for an 'lmvar' object as well. Examples are BIC and confint. The function plot.lmvar returns a number of diagnostic plots for an 'lmvar' object, similar to the function plot.lm for an 'lm' object. The function plot_qq produces a QQ-plot for one or two fits. They can be of class 'lm' or class 'lmvar'. It is also possible that one is of class 'lm' and the other of class 'lmvar'. The function plot_qdis produces a plot of the distribution of quantiles. Like plot_qq, one can specify two plots, each of class 'lm' or class 'lmvar'. The package provides the function fwbw for a selection of the independent model-variables (also called the 'predictors' or 'covariates'). It searches for an optimal subset of variables by means of a forward / backward-stepping algorithm. The function works on an 'lmvar' object but also on an 'lm' object. See the function documentation of fwbw for details. The package provides the functions cv.lm and cv.lmvar for cross-validation of prediction errors or any other function of choice. The former function works on an 'lm' object, the latter on an 'lmvar' object. See the function documentation for details. The package 'lmvar' is concerned mostly with objects of class 'lmvar'. However, it also contains the function lmvar_no_fit which creates an object of class 'lmvar_no_fit'. This class is like the class 'lmvar' but misses any information that is the result of a model fit. The class 'lmvar' is an extension of the class 'lmvar_no_fit'. This means that whenever an object of class 'lmvar_no_fit' is required, an object of class 'lmvar' can be used as well. An example is the function nobs.lmvar_no_fit supplied by this package. It takes as input an object of class 'lmvar_no_fit'. Therefore it will also accept an object of class 'lmvar'. What if the fit does not converge? It can happen that the function lmvar exits with a warning that it had trouble carrying out the model fit. The warning will be like 'Last step could not find a value above the current' or 'Log-likelihood appears not to be at a maximum!'. This means that the iterative fitting procedure did not converge in a satisfactory manner. In our experience, this is most likely to happen when the model for $\sigma$ has many degrees of freedom, i.e., the model matrix $X_\sigma$ has many columns. What are your options if this happens? You can try the following strategies. Check whether the model for $\sigma$ contains factor levels which occur only a few times, or which occur almost always. Such levels are columns in $X_\sigma$ where nearly all elements are equal to 0 and a few equal to 1, or nearly all elements are equal to 1 and a few equal to zero. Remove these columns from $X_\sigma$ and run lmvar again. Run lmvar with the argument control = list(remove_df_sigma_post = TRUE). With this option, lmvar tries to remove degrees of freedom from the model for $\sigma$ that prevent the fit from converging. See the vignette 'Math' for a mathematical background of this option.
Create the model by starting with a model with only 2 degrees of freedom and gradually adding degrees of freedom, avoiding ones that prevent the fit from converging. This can be done by first running lmvar_no_fit (with the same arguments as lmvar). The resulting object is then input for the function fwbw. This function must be run with argument fw = TRUE. The function fwbw requires one to select a function that measures the goodness of fit. Reasonable choices for this function are e.g. AIC or BIC. The output of fwbw contains a model-fit which is hopefully one you can work with. There are several other packages that can fit the same model as 'lmvar'. Below, we mention the ones we are aware of. The 'lmvar' package has been developed as a relatively simple next step for users who typically run a linear regression, but want to gain experience with a heteroscedastic model. Depending on one's taste and needs, one might need or prefer another package though. If we can, we demonstrate the alternatives with the same example we have used to demonstrate 'lmvar'. The function remlscore from the package 'statmod' (Giner and Smyth 2016) fits our example as follows. The coefficients $\beta$ for the expected value can be obtained as fit$beta. The coefficients for the logarithm of the standard deviation as fit$gamma. By definition, the latter coefficients are twice the value calculated by lmvar. The function crch from the package 'crch' (Messner, Mayr, and Zeileis 2016) fits our example as follows. All coefficients $\beta$ can be obtained with coef(fit). The function gam from the package 'mgcv' (Wood 2011) fits our example as follows. The function gamlss from the package 'gamlss' (Rigby and Stasinopoulos 2005) fits our example as follows. The coefficients $\beta$ for the expected value can be obtained as coef(fit). The coefficients for the log of the standard deviation are obtained by coef(fit, what = "sigma"). Other functions that allow for a model of the dispersion are, e.g., hglm in the package hglm (Ronnegard, Shen, and Alam 2010) and geese in the package geepack (Hojsgaard, Halekoh, and Yan 2006). These models are more complicated though, and require a level of expertise not required by lmvar. We thank Prof. Dr. Eric Cator for his valuable comments and suggestions. Prof. Dr. Achim Zeileis brought the packages 'crch', 'mgcv' and 'gamlss' to our attention. This package uses the package maxLik (Henningsen and Toomet 2011) to find the maximum likelihood. The package matrixcalc is used to check properties of the Hessian. The package Matrix is used to support matrices of class 'Matrix'. Giner, Goknur, and Gordon K. Smyth. 2016. "Statmod: Probability Calculations for the Inverse Gaussian Distribution." R Journal 8 (1): 339–51. Henningsen, Arne, and Ott Toomet. 2011. "MaxLik: A Package for Maximum Likelihood Estimation in R." Computational Statistics 26 (3): 443–58. doi:10.1007/s00180-010-0217-1. Hojsgaard, Soren, Ulrich Halekoh, and Jun Yan. 2006. "The R Package Geepack for Generalized Estimating Equations." Journal of Statistical Software 15/2: 1–11. Messner, Jakob W., Georg J. Mayr, and Achim Zeileis. 2016. "Heteroscedastic Censored and Truncated Regression with crch." The R Journal 8 (1): 173–81. https://journal.r-project.org/archive/2016-1/messner-mayr-zeileis.pdf. Rigby, R. A., and D. M. Stasinopoulos. 2005. "Generalized Additive Models for Location, Scale and Shape (with Discussion)." Applied Statistics 54.3: 507–54. Ronnegard, Lars, Xia Shen, and Moudud Alam. 2010.
"Hglm: A Package for Fitting Hierarchical Generalized Linear Models." The R Journal 2 (2): 20–28. https://journal.r-project.org/archive/2010-2/RJournal_2010-2_Roennegaard~et~al.pdf. Wood, S. N. 2011. "Fast Stable Restricted Maximum Likelihood and Marginal Likelihood Estimation of Semiparametric Generalized Linear Models." Journal of the Royal Statistical Society (B) 73 (1): 3–36.
CommonCrawl
Abstract: Let $G$ be a linear algebraic group defined over the field of rational numbers and subject to certain conditions, let $G(\mathbf R)$ be its group of real points, and let $G(\mathbf Z,m)$ be a congruence-subgroup of its group of integer points. In this paper it is proved that, using a recursive procedure, one can construct a fundamental set of $G(\mathbf Z,m)$ in $G(\mathbf R)$. This result will be applied in the second part of the article.
CommonCrawl
What do the local systems in Lusztig's perverse sheaves on quiver varieties look like? In "Quivers, perverse sheaves and quantized enveloping algebras," Lusztig defines a category of perverse sheaves on the moduli stack of representations of a quiver. These perverse sheaves are defined as summands of the pushforwards of the constant sheaves on stacks of quiver representations along with a choice of invariant flag (and thus, by definition are supported on the nilpotent locus in the moduli stack). They're mostly of interest since they categorify the canonical basis. My question is: Is there a stratum in this stack where the pull-back of one of these sheaves is not the trivial local system? Now, in finite type, this is not a concern, since each stratum is the classifying space of a connected algebraic group, and thus simply connected. But I believe in affine or wild type this is no longer true; this was at least my takeaway from the latter sections of "Affine quivers and canonical bases." However, I got a little confused about the relationship between the results of the two papers mentioned above, since they use quite different formalisms, so I hold out some hope that the local systems associated to symmetric group representations aren't relevant to the perverse sheaves for the canonical basis. Am I just hoping in vain? For affine quivers, except cyclic ones, there are always perverse sheaves attached to nontrivial local systems. If you just need an example, I recommend reading McGerty's paper math/0403279, before Lusztig's paper, where the Kronecker quiver case is studied in detail. Crystal, canonical and PBW bases of quantum affine algebras, in Algebraic Groups and Homogeneous Spaces, Ed. V.B.Mehta, Narosa Publ House. 2007, 389–421. Another thing perhaps worth mentioning is that, outside the finite type case, you have to do some real work to find a stratification with respect to which the perverse sheaves constituting the canonical basis are constructible: for affine types, such a stratification is pretty much implicit in Lusztig's Publ IHES paper which you cite, and it uses the classification of representations of tame quivers which he recovers via the McKay correspondence. In general it is known that the characteristic cycles of the sheaves in the canonical basis lie in a certain Lagrangian variety (this is already established in the paper on quivers and canonical bases). This doesn't give you a stratification however, because the components are conormals to locally closed subvarieties of the moduli space whose union is not the whole space. The same phenomenon happens for character sheaves, though there Lusztig did produce a stratification of the group and show character sheaves have locally constant cohomology on the strata. You see papers studying this sort of problem in terms of quiver representations at the level of functions on $\mathbb F_q$-points when people try and generalize the "existence of Hall polynomials" outside of finite type quivers and on the quantum group side when people look for "PBW" bases.
CommonCrawl
Darren Lee Hitt, Kevin R. Gagne, Michael Ryan McDevitt . Aerospace, , 5, 2018. This study focused on the development of a chemical micropropulsion system suitable for primary propulsion and/or attitude control for a nanosatellite. Due to the limitations and expense of current micropropulsion technologies, few nanosatellites with propulsion have been launched to date; however, the availability of such a propulsion system would allow for new nanosatellite mission concepts, such as deep space exploration, maneuvering in low gravity environments and formation flying. This work describes the design of "dual mode" monopropellant/bipropellant microthruster prototype that employs a novel homogeneous catalysis scheme. Results from prototype testing are reported that validate the concept. The micropropulsion system is designed to be fabricated using a combination of additively-manufactured and commercial off the shelf (COTS) parts along with non-toxic fuels, thus making it a low-cost and environmentally-friendly option for future nanosatellite missions. Darren Lee Hitt, William F. Louisos. 2018 Space Flight Mechanics Meeting, , AIAA SciTech Forum, 2018. Darren Lee Hitt. 2018 AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, , AIAA SciTech Forum, 2018. Darren Lee Hitt. Aerospace, , 4, 2017. In this paper, we use differential evolution (DE), with best-evolved results refined using a Nelder–Mead optimization, to solve boundary-value complex problems in orbital mechanics relevant to low Earth orbits (LEO). A class of Lambert-type problems is examined to evaluate the performance of this evolutionary method in its application to solving nonlinear boundary value problems (BVP) arising in mission planning. In this method, we evolve impulsive initial velocity vectors giving rise to intercept trajectories that take a spacecraft from given initial position in space to specified target position. The positional error of the final position is minimized subject to time-of-flight and/or energy (fuel) constraints. The method is first validated by demonstrating its ability to recover known analytical solutions obtainable with the assumption of Keplerian motion; the method is then applied to more complex non-Keplerian problems incorporating trajectory perturbations arising in low Earth orbit (LEO) due to the Earth's oblateness and rarefied atmospheric drag. The viable trajectories obtained for these challenging problems demonstrate the ability of this computational approach to handle Lambert-type problems with arbitrary perturbations, such as those occurring in realistic mission trajectory design. Darren Lee Hitt. 27th AAS/AIAA Space Flight Mechanics Meeting, , , 2017. Visual coverage of surface elements of a small-body object requires multiple images to be taken that meet many requirements on their viewing angles, illumination angles, times of day, and combinations thereof. Designing trajectories capable of maximizing total possible coverage may not be useful since the image target sequence and the feasibility of said sequence given the rotation-rate limitations of the spacecraft are not taken into account. This work presents a means of optimizing, in a multi-objective manner, surface target sequences that account for such limitations. Stephanie Wood, Jason Pearl, Darren Lee Hitt. 27th AAS/AIAA Space Flight Mechanics Meeting, , , 2017. The ability to observe small celestial bodies has grown drastically over the last decade. 
The increase in interest for these bodies has increased demand for higher fidelity trajectory simulations in order to assure mission success. Most methods that are available for simulating trajectories about asymmetric bodies assume they are of uniform density. Here we propose a modification to two well-known methods: the mascon model and the spherical harmonic series approximation, for use in simulating trajectories about variable density bodies. In particular, we will look at contact binaries which are bodies consisting of two different densities. Jason Pearl, Darren Lee Hitt. 27th AAS/AIAA Space Flight Mechanics Meeting, , , 2017. A number of recent missions by space agencies to irregularly shaped asteroids have initiated an interest in accurately modeling the irregular gravitational field of these bodies. Two common methods for approximating these irregular gravity fields are the polyhedral model and the mascon model. The polyhedral model employs Gauss's Divergence Theorem to calculate gravitational potential from a closed surface mesh. The mascon model uses a finite number of point-masses, distributed throughout the interior of the body, to approximate the gravitational field. In the present study, the accuracy and computational efficiency of the mascon model and polyhedral model are directly compared. The unit sphere is used as a test-case allowing the error of both methods to be calculated analytically. Both models are then applied to the real-world case of asteroid 25143 Itokawa. Results indicate that, for the same computational expenditure, the mascon model can provide the same level of accuracy as the polyhedral model at the surface of the body. Moreover, in general, away from the body the mascon model is more accurate and requires a shorter run-time. A number of recent and future missions to irregularly shaped asteroids have initiated an interest in accurately modeling the irregular gravitational potential field of these bodies. Close to highly irregular asteroids this is often accomplished using the polyhedral model. This method uses a small number of computational elements because only the surface is discretized; however, the number of computations required per element is large. As such, a simplification of the polyhedral potential model is proposed that approximates each face of the surface mesh as a surface-concentration. The simplified surface-concentration model and the full polyhedral model are compared using a sphere as test case so that the accuracy of both methods can be compared to an analytical solution. Both methods are then applied to surface meshes of asteroid 25143 Itokawa to assess their abilities to model irregular bodies. For a given level of surface resolution, the surface-concentration model is found to be 30$\times$ faster than the polyhedral model with only a marginal reduction in accuracy. Moreover, for meshes requiring equivalent CPU-times the surface-concentration model is found to be over an order of magnitude more accurate. Michael Ryan McDevitt, Darren Lee Hitt. Advances in aircraft and spacecraft science, 21-35, 4, 2017. Hydrogen peroxide is being considered as a monopropellant in micropropulsion systems for the next generation of miniaturized satellites (`nanosats`) due to its high energy density, modest specific impulse and green characteristics.
Efforts at the University of Vermont have focused on the development of a MEMS-based microthruster that uses a novel slug flow monopropellant injection scheme to generate thrust and impulse-bits commensurate with the intended micropropulsion application. The present study is a computational effort to investigate the initial decomposition of the monopropellant as it enters the catalytic chamber, and to compare the impact of the monopropellant injection scheme on decomposition performance. Two-dimensional numerical studies of the monopropellant in microchannel geometries have been developed and used to characterize the performance of the monopropellant before vaporization occurs. The results of these studies show that monopropellant in the lamellar flow regime, which lacks a non-diffusive mixing mechanism, does not decompose at a rate that is suitable for the microthruster dimensions. In contrast, monopropellant in the slug flow regime decomposes 57% faster than lamellar flow for a given length, indicating that the monopropellant injection scheme has potential benefits for the performance of the microthruster. Jason Pearl, William F. Louisos, Darren Lee Hitt. Journal of Spacecraft and Rockets, 287-298, 54, 2017. Micronozzles represent a unique flow regime defined by low Reynolds numbers (Re<1000) and supersonic Mach numbers. Currently, the classic method of calculating thrust is used by the micropropulsion community to determine nozzle performance from simulation data. This approach accounts for momentum flux and pressure imbalance at the nozzle exit, and it assumes that the viscous stress tensor's contribution to thrust is negligible. This assumption, however, can break down at low Reynolds numbers, where viscous forces play a significant role in the flow dynamics. In this paper, an extended method of calculating thrust, which accounts for the force due to the viscous stress tensor, is derived from the Navier–Stokes equation. Computational fluid dynamic simulations are then used to assess and quantify the error produced by the classic method at low Reynolds numbers (80 < Re < 800). Two micronozzle geometries are used as test cases: 1) an 80% truncated planar plug nozzle, and 2) a 30 deg linear-walled planar de Laval nozzle. Results indicate that the accuracy of the classic method begins to break down at Re≈1000, below which there is a significant risk that the classic method will produce erroneous results. Moreover, for Re<100, the classic method has the potential to misrepresent the thrust of a simulated micronozzle by 50%. Jason Pearl, Darren Lee Hitt. AIAA/AAS Astrodynamics Specialist Conference, At Long Beach CA, , , 2016. A number of recent missions by national space agencies (NASA, JAXA, and ESA) to irregularly shaped asteroids have initiated an interest in accurately modeling the irregular gravitational potential field of these bodies. In this study, we examine using non-uniform mascon distributions derived from unstructured volume meshes to model the gravitational potential fields of irregular bodies. The type and topology of the unstructured mesh and their effect on the accuracy of the mascon model are examined. Meshes consisting of either tetrahedral cells or higher-order polyhedral cells with varying degrees of cell-size grading are considered. A unit sphere is used as a test case to compare numerically calculated mascon-based potentials with analytical results. Mascon models are then applied to asteroid 25143 Itokawa.
The grid-dependence of the potential field and a spacecraft trajectory are examined as well as the effects of a variable density distribution. Results suggest that, with the right mesh type and topology, a greater than 90% reduction in the required number of mascons can be achieved in comparison to uniform distributions without sacrificing accuracy. Michael Cross, Walter Varhue, Michael Ryan McDevitt , Darren Lee Hitt. Advances in Chemical Engineering and Science, 541-552, 6, 2016. The ability of some nanostructured materials to perform as effective heterogeneous catalysts is potentially hindered by the failure of the liquid reactant to effectively wet the solid catalyst surface. In this work, two different chemical reactions, each involving a change of phase from liquid to gas on a solid catalyst surface, are investigated. The first reaction is the catalyzed decomposition of a H2O2 monopropellant within a micro-chemical reactor chamber, decorated with RuO2 nanorods (NRs). The second reaction involves the electrolysis of dilute aqueous solutions of H2SO4 performed with the cathode electrode coated with different densities and sizes of RuO2 NRs. In the catalyzed H2O2 decomposition, the reaction rate is observed to decrease with increasing catalyst surface density because of a failure of the liquid to wet on the catalyst surface. In the electrolysis experiment, however, the reaction rate increased in proportion to the surface density of RuO2 NRs. In this case, the electrical bias applied to drive the electrolysis reaction also causes an electrostatic force of attraction between the fluid and the NR coated surface, and thus assures effective wetting. Darren Lee Hitt, Margaret (Maggie) Eppstein. AIAA Guidance, Navigation, and Control Conference, 1554, , 2015. Michael Ryan McDevitt , Darren Lee Hitt. Journal of Propulsion and Power, 1-10, 31, 2015. Gas and liquid converging at a microchannel cross junction results in the formation of periodic, dispersed microslugs. This microslug-formation phenomenon has been proposed as the basis for a fuel-injection system in a novel, discrete monopropellant microthruster design for use in next-generation miniaturized satellites. Experimental work by McCabe et al. ("A Micro-Scale Monopropellant Fuel Injection Scheme Using Two-Phase Slug Formation," Journal of Propulsion and Power, Vol. 27, No. 6, 2012, pp. 1295–1302) demonstrated the ability to generate fuel slugs with characteristics commensurate with the intended application. In this work, numerical modeling and simulation is used to further study this problem and identify the sensitivity of the slug characteristics to key material properties including surface tension, contact angle, and fuel viscosity. These properties are of practical concern for this application due to the potential for thermal variations and/or fluid contamination during typical operation. For each of these properties, highly stable regions exist where the slug characteristics are essentially insensitive to property variations. Next, a series of three-dimensional simulations were performed to study the effects of channel depth on the slug-formation process. These simulations show that the relative slug volume and the detachment location increase with channel depth. Over the range of depths studied, the relative slug volume increased by up to 20% and the detachment location increased by 10 channel widths. 
The results demonstrate the impact of three-dimensional effects on the ability of the system to throttle the fuel flow rate to a level necessary for low thrust applications, which will have ramifications on the design and manufacture of the microthruster system. Darren Lee Hitt, David Hinckley, Margaret (Maggie) Eppstein. AIAA SPACE 2014, , , 2014. In this paper we use Differential Evolution (DE), with best evolved results refined using a Nelder-Mead optimization, to solve complex problems in orbital mechanics relevant to low Earth orbits (LEO) and within the Earth-Moon system. A class of Lambert problems is examined to evaluate the performance and robustness of this evolutionary approach to orbit optimization. We evolve impulsive initial velocity vectors giving rise to intercept trajectories that take a spacecraft from given initial positions to specified target positions. We seek to minimize final positional error subject to time-of-flight and/or energy (fuel) constraints. We first validate that the method can recover known analytical solutions obtainable with the assumption of Keplerian motion. We then apply the method to more complex and realistic non-Keplerian problems incorporating trajectory perturbations arising in LEO due to the Earth's oblateness and rarefied atmospheric drag. Finally, a rendezvous trajectory from LEO to the L4 Lagrange point is computed. The viable trajectories obtained for these challenging problems suggest the robustness of our computational approach for real-world orbital trajectory design in LEO situations where no analytical solution exists. Karol Zieba, Darren Lee Hitt, Margaret (Maggie) Eppstein. Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, 1127-1134, , 2014. In this paper we use Differential Evolution (DE), with best evolved results refined using a Nelder-Mead optimization, to solve complex problems in orbital mechanics relevant to low Earth orbits (LEO). A class of so-called 'Lambert Problems' is examined. We evolve impulsive initial velocity vectors giving rise to intercept trajectories that take a spacecraft from given initial positions to specified target positions. We seek to minimize final positional error subject to time-of-flight and/or energy (fuel) constraints. We first validate that the method can recover known analytical solutions obtainable with the assumption of Keplerian motion. We then apply the method to more complex and realistic non-Keplerian problems incorporating trajectory perturbations arising in LEO due to the Earth's oblateness and rarefied atmospheric drag. The viable trajectories obtained for these difficult problems suggest the robustness of our computational approach for real-world orbital trajectory design in LEO situations where no analytical solution exists. William F. Louisos, Darren Lee Hitt. Journal of Spacecraft and Rockets, , 51, 2014. In this study, numerical computations are performed that examine the thrust production and efficiency of supersonic micronozzles with bell-shaped expanders. The bell geometry is favored on the macroscale for its flow alignment. To date, concerns over microfabrication challenges of the contoured geometry have limited its consideration for microscale applications. 
Three different bell expander configurations are examined (100% full bell, 80%, and 60%) for two-dimensional and three-dimensional duct configurations of varying depths (25–200 $\mu$m). A decomposed H2O2 monopropellant is used as the working fluid, and the associated throat Reynolds numbers range from 15 to 800. Owing to the inherently low Reynolds numbers on the microscale, substantial viscous subsonic layers develop on the walls of the nozzle expander, retard the bulk flow, and reduce the nozzle performance. The thrust production and specific impulse efficiency are computed for the various flow scenarios and nozzle geometries to delineate the impact of viscous forces on the nozzle performance. Results are also compared to the inviscid theory and to two-dimensional and three-dimensional results for 30 deg linear nozzle configurations. It is found that the flow alignment of the bell nozzle comes at the expense of increased viscous losses, and, on the microscale, a 30 deg linear nozzle offers a higher efficiency for Re<320 in two-dimensional micronozzles and over the majority of Reynolds numbers in three-dimensional simulations. The simulation results indicate that a short micronozzle outperforms a longer nozzle at a given Reynolds number, and this result is supported by existing micronozzle studies in the literature. Stephen Widdis, Kofi Asante, Darren Lee Hitt, Michael Cross, Walter Varhue, Michael Ryan McDevitt . IEEE/ASME Transactions on Mechatronics, 1250-1258, 18, 2013. The next generation of miniaturized satellites ('nanosats') feature dramatically reduced thrust and impulse requirements for purposes of spacecraft attitude control and maneuvering. The present study is a joint computational and experimental design effort at developing a new MEMS-based microreactor configuration for incorporation into a monopropellant micropropulsion system. Numerical models of the gas phase catalytic decomposition in microchannel configurations are used to obtain critical sizing requirements for the reactor design. The computational results show that the length scales necessary for complete decomposition are compatible with MEMS-based designs; however, it is also found that the catalytic process is dominated by mass diffusion characteristics within the flow at this scale. Experimentally, a microscale catalytic reactor prototype has been designed and microfabricated using MEMS techniques. The reactor uses self-assembled ruthenium oxide nanorods grown on the wall surfaces as a catalyst. Experimental testing indicates that only partial decomposition of the hydrogen peroxide is achieved. Among the potential sources of the incomplete decomposition, a likely cause appears to be the inability of the H2O2 reactant stream to adequately wet the surface of the catalyst film composed of a high surface density of RuO2 nanorods. Michael Ryan McDevitt , Darren Lee Hitt. 43rd AIAA Fluid Dynamics Conference, , , 2013. Laminar flow mixing remains an active area of research within the microfluidics community. Traditional mixing methods often rely upon turbulent flow, which is generally not present on the micro-scale and so alternative approaches must be sought. This work studies enhanced laminar mixing for use in a proposed monopropellant microthruster based upon homogeneous catalysis in a flow with Re < 10. The enhancement is realized through the introduction of an inert gas at a channel junction, which can lead to the formation of discrete liquid slugs. 
These slugs contain the monopropellant and the catalyst and have an internal recirculation that is found to enhance mixing. The focus of this study is on the numerical investigation of this process with the goal of minimizing the mixing length and characterizing the dependence of mixing on inlet conditions. The slug formation process is found to decrease the minimum mixing length by a factor of up to 7.2, with much of the benefit of the multiphase flow occurring shortly after slug formation. As minimizing the dimensions of the microthruster is a key design consideration, this reduction in mixing length demonstrates the value of the enhanced laminar mixing for the proposed micropropulsion application. Jason Pearl, William F. Louisos, Darren Lee Hitt. ASME 2014 International Mechanical Engineering Congress and Exposition, , , 2014. A parametric, two-dimensional, computational study examining steady-state plug micronozzle performance has been conducted. As part of the study, a new method for plug contour construction is proposed. The performance of several different nozzle geometries is compared to that of a traditional plug nozzle geometry designed using the Method of Characteristics (MOC). New nozzle designs are derived from the MOC based design and geometric transformations are used to produce plug nozzles of reduced length. Spike lengths corresponding to 60, 50, 40, and 27% of the MOC nozzle's length are examined. The throat Reynolds number is varied from 80–820. Thrust is used as a metric to assess nozzle performance. The geometry which maximizes performance is found to vary with Reynolds number. It is observed that reducing the plug length improves thrust production for the range of Reynolds numbers examined. Michael Ryan McDevitt , Darren Lee Hitt. 42nd AIAA Fluid Dynamics Conference and Exhibit, , , 2012. Abstract Converging flows of a gas and a liquid at a microchannel cross junction, under proper conditions, can result in the formation of periodic, dispersed microslugs. This microslug formation phenomenon has been proposed as the basis for a fuel injection system in a novel, 'discrete' monopropellant microthruster designed for use in next-generation miniaturized satellites. An experimental study by McCabe et al. [1] demonstrated the ability to generate fuel slugs with characteristics commensurate with the intended application during steady-state operation. In this work, numerical and experimental techniques are used to study the effect of valve actuation on slug characteristics, and the results are used to compare with equivalent steady-state slugs. Computational simulations of a valve with a 1 ms valve-actuation cycle show that as the ratio of the response time of the valve to the fully open time is increased, transient effects can increase slug length by up to 17%. The simulations also demonstrate that the effect of the valve is largely independent of surface tension coefficient, which is the thermophysical parameter most responsible for slug formation characteristics. Flow visualization experiments performed using a miniature valve with a 20 ms response time showed less than a 1% change in the length of slugs formed during the actuation cycle. The results of this study indicate that impulse bit and thrust calculations can discount transient effects for slower valves, but as valve technology improves transient effects may become more significant. © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved. 
Gas and liquid converging at a microchannel cross junction results in the formation of periodic, dispersed microslugs. This microslug formation phenomenon has been proposed as the basis for a fuel injection system in a novel, discrete monopropellant microthruster design for use in next-generation miniaturized satellites. Experimental work by McCabe et al. [1] demonstrated the ability to generate fuel slugs with characteristics commensurate with the intended application. In this study, a computational model of the microchannel junction is used to investigate the effects of channel depth on the formation process and characteristics of fuel slugs. The simulations show the relative slug volume and detachment location increase with channel depth. Over the range of depths studied, the relative slug volume increased by up to 20% and the detachment location increased by 10 channel widths. These results demonstrate the impact of 3D effects on the ability of the system to throttle the fuel flow rate to a level necessary for low thrust applications, which will have ramifications on the design and manufacture of the microthruster system. © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved. Stephen Widdis, Darren Lee Hitt, Kofi Asante, Michael Ryan McDevitt , Walter Varhue, Michael Cross. 43rd AIAA Thermophysics Conference, , , 2012. Abstract The next generation of miniaturized satellites ('nanosats') feature dramatically reduced thrust and impulse requirements for purposes of spacecraft attitude control and maneuvering. Efforts at the University of Vermont have concentrated on developing a chemical micropropulsion system based on a rocket grade hydrogen peroxide (HTP) monopropellant fuel. The present study is a joint computational and experimental design effort at developing a new MEMS-based micro-reactor configuration for incorporation into a monopropellant micropropulsion system. Two-dimensional numerical models of the gas phase catalytic decomposition in microchannel configurations have been developed and used to obtain critical sizing requirements for the reactor design. The computational results show that the length scales necessary for complete decomposition are compatible with MEMS-based designs; however, it is also found that the results are highly sensitive to the mass diffusion characteristics within the flow at this scale. Experimentally, a micro-scale catalytic reactor has been designed and microfabricated using MEMS techniques. The reactor uses self-assembled ruthenium oxide nanorods grown on the wall surfaces as a catalyst. It is found during experimental testing that only partial decomposition of the hydrogen peroxide occurs. The combination of multiphase and thermal quenching effects in a micro-scale geometry is identified as a likely obstacle to the process. © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved. William F. Louisos, Darren Lee Hitt. Journal of Spacecraft and Rockets, 450-460, 49, 2012. A numerical model to characterize the influence of wall heat transfer on performance of a microelectromechanical systems (MEMS)-based supersonic nozzle is reported. Owing to the large surface-area-to-volume ratio and inherently low Reynolds numbers of a MEMS device, wall phenomena, such as viscous forces and heat transfer, play critical roles in shaping performance characteristics of the micronozzle. 
Viscous subsonic layers inhibit flow and can grow sufficiently large on the nozzle expander walls, potentially merging to cause the flow to be subsonic at the nozzle exit, and result in reduced efficiency and performance. Heat flux from the flow into the surrounding substrate can mitigate subsonic layer growth and improve overall thrust production. In this study, subsonic layer growth is quantified to characterize the impact on performance of micronozzles with a flowfield that is subject to wall heat transfer. Both two- and three-dimensional (3-D) simulations are performed for varying expander half-angles (15 deg, 30 deg, and 45 deg) and varying throat Reynolds numbers (30-800), whereas the depth of the 3-D nozzle is varied (25–300 $\mu$m). Simulation results and nozzle efficiencies are compared with inviscid theory, previous adiabatic results, and existing numerical and experimental data. It is found that heat loss to the substrate will further accelerate the supersonic core flow via Rayleigh flow theory and can reduce subsonic layer growth. These effects can combine to alter the micronozzle expansion angle, which maximizes thrust production and specific impulse efficiency. Matthew McGarry, James Kohl, Darren Lee Hitt. Engineering Applications of Computational Fluid Mechanics, 595-604, 5, 2011. The liquid droplet growth from a punctured, pressurized vessel immersed in a quiescent medium is studied under steady flow conditions. Local strain rates at the puncture site are also investigated. The droplet growth and local strain rates at the puncture are characterized as functions of various hydrodynamic and geometric conditions. Dimensional analysis shows that the fractional droplet growth rate, Q*, is a function of the Reynolds number, Weber number, hole-to-main tube diameter ratio, D*, and the puncture geometry. A 3-D finite volume computational model is constructed for laminar flow of a Newtonian fluid under steady conditions and validated with supporting experiments. The results show that the fractional growth rate Q* increases with the Weber number and is largest for the lowest Reynolds number of one. In addition, the droplet shape is spherical at low Weber numbers (2.6) and ellipsoidal at high Weber numbers (7.8). Additional simulations detail how the growth ratio is lower for small diameter ratios and rectangular punctures. Physiological implications for the hemostatic response (clotting) of a punctured blood vessel can be found by examining the local strain rates in the vicinity of the puncture. The strain rate displays the largest values for the highest Reynolds number (100). In addition, when D* = 0.04 the strain rate is greater at the low (2.6) and high (7.8) Weber numbers, while the strain rate is larger for D* = 0.075 when the Weber number is 5.2. The strain rate is also affected by the puncture shape and displays higher values for the rectangular puncture when Weber < 5.2. Finally, the impact of microgravity on droplet formation was studied. Numerical simulations and quantification of the forces on fluid particles show that there is no effect from gravity on the droplet growth rate or strain rate in medium and large sized veins. J. W. McCabe, Darren Lee Hitt, Michael Ryan McDevitt . Journal of Propulsion and Power, 1295-1302, 27, 2011. The periodic formation of dispersed microscopic liquid slugs (microslugs) at a microchannel junction is investigated as a means for enhancing precision and control of fuel delivery in monopropellant-based micro-propulsion systems. 
Slug length and frequency of production are determined by digital image analysis of high-speed videomicroscopy recordings of the formation process in a microchannel (hydraulic diameter of approximately 28 $\mu$m) under varying inlet pressure conditions. Experimental findings show that a range of slug characteristics are possible, with sizes spanning 0.05-1.7 mm at corresponding formation frequencies of 39-397 Hz. For a hydrogen-peroxide monopropellant fuel, it is estimated that the associated impulse bit of a single microslug can range from 0.2–3 $\mu$N·s, which demonstrates the potential utility for micropropulsion applications. An Experimental Investigation of a Bound Vortex Surface Impingement Method for the Removal of Adhered Dust Particles. Nicholas Vachon, Darren Lee Hitt. 41st AIAA Fluid Dynamics Conference and Exhibit, , , 2011. Dust mitigation is an integral issue faced by many manufacturing and aerospace activities. This problem is particularly challenging in Lunar and Martian environments which produce electrically charged particles that easily adhere to exposed surfaces. This work experimentally investigates the operation of the computational design reported by Vachon and Hitt in a recent numerical study [6]. The effectiveness of vortex-induced flow conditions is evaluated using a combination of high speed flow visualization and particle image velocimetry techniques. In addition, a convective cooling method is used to provide a visualization of surface shear stress. Current experimental results are compared to previous computational results in order to confirm the existence of a bound vortex flow condition. Experimental evaluation of particle removal behavior and bound vortex formation are found to be in good agreement with numerical predictions. © 2011 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved. Analysis of Transient Flow in Supersonic Micronozzles. William F. Louisos, Darren Lee Hitt. Journal of Spacecraft and Rockets, 303-311, 48, 2011. A numerical investigation of transient supersonic flow through a two-dimensional linear micronozzle has been performed. The baseline model for the study is derived from the NASA Goddard Space Flight Center microelectromechanical-systems-based hydrogen peroxide prototype microthruster. A hyperbolic-tangent actuation profile is used to simulate the opening and closing of a microvalve with a maximum inlet stagnation pressure of 250 kPa, which generates a maximum throat Reynolds number of Re ≈ 800. The complete duty cycle occurs over 1.7 ms. Numerical simulations have been conducted for expander half-angles of 10-50°, and both slip and no-slip wall boundary conditions have been examined. The propulsion scheme employs 85%-pure hydrogen peroxide as the monopropellant fuel. Simulation results have been analyzed, and thrust production as a function of time has been quantified, along with the total impulse delivered. Micronozzle impulse efficiency has also been determined based on a theoretical maximum impulse achieved by a quasi-1-D inviscid flow responding instantaneously to the actuation profile. It is found that both the flow and thrust exhibit a response lag to the time-varying inlet pressure profile. Simulations indicate that a maximum efficiency and impulse occur for an expander half-angle of 30° for the no-slip wall boundaries, and the slip simulations demonstrate a maximum plateau in the range of 20-30°; these angles are significantly larger than with traditional conical nozzle designs. 
Copyright © 2010 by the American Institute of Aeronautics and Astronautics, Inc. Numerical Simulations of Heat and Fluid Flow in Grooved Channels With Curved Vanes. Matthew McGarry, Antonio Campo, Darren Lee Hitt. Numerical Heat Transfer Applications, 41-54, 1, 2010. The use of vanes in grooved channels for heat transfer enhancement has received unprecedented attention in recent years due to applications in high-performance heat exchangers and electronics cooling. The current work focuses on characterizing the vortex formation around heated elements in grooved channels with curved vanes. A computational model is developed to examine the effect that the vortices have on heat transfer and system performance for a range of Reynolds numbers of 100 to 800. These vortices explain the previously observed characteristics in system performance for geometries with the use of curved vanes. At a Reynolds number of 400 these vortices inhibit heat transfer and increase pressure drop in the channel resulting in significant decreases in system performance. In addition, a sensitivity analysis was performed for the size and location of the baffle at a high Reynolds number of 800.
CommonCrawl
Decision trees are one of the oldest and most widely-used machine learning models, due to the fact that they work well with noisy or missing data, can easily be ensembled to form more robust predictors, and are incredibly fast at runtime. Moreover, you can directly visualize your model's learned logic, which means that it's an incredibly popular model for domains where model interpretability is important. Decision trees are pretty easy to grasp intuitively; let's look at an example. Note: decision trees are used by starting at the top and going down, level by level, according to the defined logic. This is known as recursive binary splitting. For those wondering – yes, I'm sipping tea as I write this post late in the evening. Now let's dive in! Let's look at a two-dimensional feature set and see how to construct a decision tree from data. The goal is to construct a decision boundary such that we can distinguish between the individual classes present. Any ideas on how we could make a decision tree to classify a new data point as "x" or "o"? Here's what I did. Run through a few scenarios and see if you agree. Look good? Cool. So now we have a decision tree for this data set; the only problem is that I created the splitting logic. It'd be much better if we could get a machine to do this for us. But how? If you analyze what we're doing from an abstract perspective, we're taking a subset of the data, and deciding the best manner to split the subset further. Our initial subset was the entire data set, and we split it according to the rule $x_1 < 3.5$. Then, for each subset, we performed additional splitting until we were able to correctly classify every data point. How do we judge the best manner to split the data? Simply, we want to split the data in a manner which provides us the most information - in other words, maximizing information gain. Going back to the previous example, we could have performed our first split at $x_1 < 10$. However, this would essentially be a useless split and provide zero information gain. In order to mathematically quantify information gain, we introduce the concept of entropy. Entropy may be calculated as a summation over all classes, $H = -\sum_i p_i \log_2 p_i$, where $p_i$ is the fraction of data points within class $i$. This essentially represents the impurity, or noisiness, of a given subset of data. A homogeneous dataset will have zero entropy, while a perfectly random dataset will yield a maximum entropy of 1. With this knowledge, we may simply equate the information gain with a reduction in noise: $IG = H(\text{parent}) - \sum_k \frac{N_k}{N} H(\text{child}_k)$, where $N_k$ is the number of samples landing in child $k$. Here, we're comparing the noisiness of the data before splitting the data (parent) and after the split (children). If the entropy decreases due to a split in the dataset, it will yield an information gain. A decision tree classifier will make a split according to the feature which yields the highest information gain. This is a recursive process; stopping criteria for this process include continuing to split the data until (1) the tree is capable of correctly classifying every data point, (2) the information gain from further splitting drops below a given threshold, (3) a node has fewer samples than some specified threshold, (4) the tree has reached a maximum depth, or (5) another parameter similarly calls for the end of splitting. To learn more about this process, read about the ID3 algorithm. Often you may find that you've overfitted your model to the data, which is often detrimental to the model's performance when you introduce new data. 
To prevent this overfitting, one thing you could do is define some parameter which ends the recursive splitting process. As I mentioned earlier, this may be a parameter such as maximum tree depth or minimum number of samples required in a split. Controlling these model hyperparameters is the easiest way to counteract overfitting. You could also simply perform a significance test when considering a new split in the data, and if the split does not supply statistically significant information (obtained via a significance test), then you will not perform any further splits on a given node. Another technique is known as pruning. Here, we grow a decision tree fully on the training dataset and then go back and evaluate its performance on a new validation dataset. For each node, we evaluate whether or not its split was useful or detrimental to the performance on the validation dataset. We then remove those nodes which caused the greatest detriment to the decision tree performance. Evaluating a split using information gain can pose a problem at times; specifically, it has a tendency to favor features which have a high number of possible values. Say I have a data set that determines whether or not I choose to go sailing for the month of June based on features such as temperature, wind speed, cloudiness, and day of the month. If I made a decision tree with 30 child nodes (Day 1, Day 2, ..., Day 30) I could easily build a tree which accurately partitions my data. However, this is a useless feature to split based on because the second I enter the month of July (outside of my training data set), my decision tree has no idea whether or not I'm likely to go sailing. One way to circumvent this is to assign a cost function (in this case, the gain ratio) to prevent our algorithm from choosing attributes which provide a large number of subsets. Here's an example implementation of a Decision Tree Classifier for classifying the flower species dataset we've studied previously (see the sketch further below). Without any parameter tuning we see an accuracy of 94.9%, not too bad! The decision tree classifier in sklearn has an exhaustive set of parameters which allow for maximum control over your classifier. These parameters include: criterion for evaluating a split (this blog post talked about using entropy to calculate information gain, however, you can also use something known as Gini impurity), maximum tree depth, minimum number of samples required at a leaf node, and many more.
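As a concrete illustration of the classification workflow just described, here is a minimal scikit-learn sketch for the flower species (iris) dataset. The exact listing, split, and tuning from the original post are not shown here, so the hyperparameter values and resulting accuracy below are illustrative rather than a reproduction of the 94.9% figure.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load the iris (flower species) dataset and hold out a test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# criterion="entropy" applies the information-gain rule discussed above;
# max_depth and min_samples_leaf are the kind of hyperparameters that rein in overfitting.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, min_samples_leaf=5, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```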
For regression, we're not trying to predict a class, but rather we're expected to generate an output given the inputs. Thus, we'll need a new method for determining optimal fit. One way to do this is to measure whether or not a split will result in a reduction of variance within the data. If a split is useful, the combined weighted variance of the child nodes will be less than the original variance of the parent node. We can continue to make recursive splits on our dataset until we've effectively reduced the overall variance below a certain threshold, or upon reaching another stopping parameter (such as reaching a defined maximum depth). Notice how the mean squared error decreases as you step through the decision tree. The techniques for preventing overfitting remain largely the same as for decision tree classifiers. However, it seems that not many people actually take the time to prune a decision tree for regression, but rather they elect to use a random forest regressor (a collection of decision trees) which is less prone to overfitting and performs better than a single optimized tree. The common argument for using a decision tree over a random forest is that decision trees are easier to interpret: you simply look at the decision tree logic. However, in a random forest, you're not going to want to study the decision tree logic of 500 different trees. Luckily for us, there are still ways to maintain interpretability within a random forest without studying each tree manually. Here's the code I used to generate the graphic above. 
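A plausible sketch of that kind of interpretability plot — per-feature importances aggregated across a random forest regressor — is shown below. The dataset and styling are illustrative assumptions, not the author's original choices.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit an ensemble of regression trees on an example regression dataset.
data = load_diabetes()
forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(data.data, data.target)

# feature_importances_ averages each feature's (variance-reduction) importance
# over all trees, giving interpretability without reading 500 trees by hand.
importances = forest.feature_importances_
order = importances.argsort()[::-1]

plt.bar(range(len(importances)), importances[order])
plt.xticks(range(len(importances)), [data.feature_names[i] for i in order], rotation=45)
plt.ylabel("importance")
plt.tight_layout()
plt.show()
```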
CommonCrawl
Given a field $(F, +, \cdot)$ we can restrict the coefficients $a_0, a_1, ..., a_n$ to be contained in $F$ and then study various properties of such polynomials that we are already familiar with (such as roots, factorability, etc…). We first formally define a polynomial over a field. Definition: If $(F, +, \cdot)$ is a field and $f \in F[x]$ is given by $f(x) = a_0 + a_1x + ... + a_nx^n$ with $a_n \neq 0$ then the Degree of $f$ is $n$ and is denoted $\deg (f) = n$. By convention we denote $\deg (0) = -\infty$. In the above example, $\deg (f) = 3$. Definition: If $(F, +, \cdot)$ is a field and $f \in F[x]$, $f(x) \neq 0$ with $f(x) = a_0 + a_1x + ... + a_nx^n$ then $f$ is said to be a Monic Polynomial if $a_n = 1$. Given any field $(F, +, \cdot)$, we can define a new ring on $F[x]$, the set of polynomials over $F$, with the binary operations of function addition and function multiplication. This is outlined in the following theorem and is easy (but tedious) to verify. Theorem 1: If $F$ is a field then $(F[x], +, \cdot)$ is a commutative ring with identity (in fact, an integral domain) where for all $f, g \in F[x]$ with $f(x) = a_0 + a_1x + ... + a_nx^n$ and $g(x) = b_0 + b_1x + ... + b_mx^m$ we define $f + g = [a_0 + a_1x + ... + a_nx^n] + [b_0 + b_1x + ... + b_mx^m]$ and $f \cdot g = [a_0 + a_1x + ... + a_nx^n][b_0 + b_1x + ... + b_mx^m]$. The additive identity in $F[x]$ is the polynomial $0(x) = 0$, and the multiplicative identity in $F[x]$ is the polynomial $1(x) = 1$. Note that $F[x]$ is not itself a field: a nonconstant polynomial such as $x$ has no multiplicative inverse in $F[x]$.
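A small worked example (illustrative, not from the original page): take $f(x) = 2x^2 + x + 1$ and $g(x) = x^2 + 2$ in $\mathbb{Z}_3[x]$. Then $f + g = 3x^2 + x + 3 = x$ after reducing coefficients mod 3, so the degree of a sum can drop below $\max(\deg f, \deg g)$, while $f \cdot g = 2x^4 + x^3 + 5x^2 + 2x + 2 = 2x^4 + x^3 + 2x^2 + 2x + 2$ has degree $\deg f + \deg g = 4$, as expected in an integral domain.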
CommonCrawl
There is 80% increase in an amount in 8 years at simple interest. What will be the compound interest on Rs.14,000 after 3 years at the same rate? In which of the given years, the number of candidates appeared from city C has maximum percentage of qualified candidates? What is the percentage of candidates qualified from city B for all the years together, over the candidates appeared from city B during all the years together? What is the circumradius of a triangle whose sides are 7, 24 and 25 units respectively? $1080 = 2^3 \times 3^3 \times 5$. For any perfect square, all the powers of the primes have to be even numbers. So, if the factor is of the form $2^a \times 3^b \times 5^c$, the values 'a' can take are 0 and 2, the values 'b' can take are 0 and 2, and 'c' can only take the value 0. Totally there are 4 possibilities: 1, 4, 9, and 36. One year payment to the servant is Rs. 200 plus one shirt. The servant leaves the job after 9 months and receives Rs. 120 and a shirt. What is the price of the shirt? What will be the equation of a circle of radius 6 units centered at (3, 2)? In an examination it is required to get 40% of minimum marks to pass. A student got 142 marks, thus he got 22 marks more than minimum marks required to pass. What is the total marks?
CommonCrawl
a. No, -2 is not a solution of the given equation. b. Yes, 2 is a solution of the given equation. First, simplify the equation: 1. $4x+7=9x-3$. 2. Add 3 to both sides: $4x+10=9x$. 3. Subtract $4x$ from both sides: $10=5x$. a. Plug in -2 to see if it makes the equation true: 1. $10=5\times(-2)$ 2. $10=-10$. The equation is not true, so -2 is not a solution of the equation. b. Plug in 2 to see if it makes the equation true: 1. $10=5\times2$ 2. $10=10$. The equation is true, so 2 is a solution of the equation.
CommonCrawl
It is shown by elementary continuity considerations that there exist values of the two accessory parameters of a Fuchsian differential equation with the singularities $a$, $b$, $c$, $d$, $\infty$ such that the ratio $w$ of two independent solutions is single valued in the complex plane furnished with the slits $a \le x \le b$ and $c \le x \le \infty$. There exist also values of the parameters such that $w$ is univalent in the slit plane. Vidav, Ivan, Plemelj, Josip. Kleinovi teoremi v teoriji linearnih diferencialnih enačb [Klein's theorems in the theory of linear differential equations]. Akademija znanosti in umetnosti, 1941.
CommonCrawl
One of the most important decisions when designing an A/B test is choosing the goal of the test. After all, if you don't have the right goal the results of the test won't be of any use. It is particularly important when using Myna as Myna dynamically changes the proportion in which variants are displayed to maximise the goal. So how should we choose the goal? Let's look at the theory, which tells us how to choose the goal in a perfect world, and then see how that theory can inform practice in a decidedly imperfect world. for visitors $25 × 0.1 = $2.50. The great thing with CLV is you don't have to worry about any other measures such as click-through, time on site, or what have you – that's all accounted for in lifetime value. Accurately predicting CLV is the number one problem with using it in practice. A lot of people just don't have the infrastructure to do these calculations. For those that do there are other issues that make predicting CLV difficult. You might have a complicated business that necessitates customer segmentation to produce meaningful lifetime values. You might have very long-term customers making prediction hard. I don't need to go on; I'm sure you can think of your own reasons. This doesn't mean that CLV is useless, as it gives us a framework for evaluating other goals such as click-through and sign-up. For most people using a simple to measure goal such as click-through is a reasonable decision. These goals are usually highly correlated with CLV, and it is better to do testing with a slightly imperfect goal than to not do it at all due to concern about accurately measuring lifetime value. I do recommend from time-to-time checking that these simpler goals are correlated with CLV, but it shouldn't be needed for every test. CLV is very useful when the user can choose between many actions. Returning to our landing page example, imagine the visitor could also sign up for a newsletter as well as signing up to use our product. Presumably visitors who just sign up for the newsletter have a lower CLV than those who sign up for the product, but a higher CLV than those who fail to take any action. Even if we can't predict CLV precisely, using the framework at least forces us to directly face the problem of quantifying the value of different actions. This approach pays off particularly well for companies with very low conversion rates, or a long sales cycle. Here A/B testing can be a challenge, but we can use the model of CLV to create useful intermediate goals that can guide us. If it takes six months to convert a visitor into a paying customer, look for other intermediate goals and then try to estimate the CLV of them. This could be downloading a white paper, signing up for a newsletter, or even something like a repeat visit. Again it isn't essential to accurately predict CLV, just to assign some value that is in the right ballpark. So far everything I've said applies to general A/B testing. Now I want to talk about some details specific to Myna. When using Myna you need to specify a reward. For simple cases like a click-through or sign-up, the reward is simply 1 if the goal is achieved and 0 otherwise. For more complicated cases Myna allows very flexible rewards that can handle most situations. Let's quickly review how Myna's rewards work, and then how to use them in more complicated scenarios. you can send multiple rewards for a single view of a variant, but the total of all rewards for a view must be no more than 1. 
Now that we know about CLV, the correct way to set rewards is obvious: rewards should be proportional to CLV. How do we convert CLV to a number between 0 and 1? We recommend using the logistic function to guarantee the output is always in the correct range. However, if you don't know your CLV just choose some numbers that have the correct ranking and roughly correct magnitude. So for the newsletter / sign-up example we might go with 0.3 and 0.7 respectively. This way if someone performs both actions they get a reward of 1.0. That's really all there is to CLV. It's a simple concept but has wide ramifications in testing.
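To make this concrete, here is a small sketch of the kind of CLV-to-reward mapping described above. The midpoint, scale, and per-action values are illustrative assumptions, and the snippet does not use Myna's actual client API.

```python
import math

def clv_to_reward(clv, midpoint=25.0, scale=10.0):
    """Squash a predicted customer lifetime value (in dollars) into [0, 1]
    with a logistic curve; midpoint and scale are illustrative choices."""
    return 1.0 / (1.0 + math.exp(-(clv - midpoint) / scale))

# Rough per-action rewards when CLV itself is unknown (the ranking matters most).
ACTION_REWARDS = {"newsletter_signup": 0.3, "product_signup": 0.7}

def total_reward(completed_actions):
    # The total reward for a single view of a variant must be no more than 1.
    return min(1.0, sum(ACTION_REWARDS[a] for a in completed_actions))

print(round(clv_to_reward(2.5), 3))                           # low-value visitor -> small reward
print(total_reward(["newsletter_signup", "product_signup"]))  # both actions -> 1.0
```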
CommonCrawl
Mathias, PC and Patnaik, LM (1990) Systolic Evaluation of Polynomial Expressions. In: IEEE Transactions on Computers, 39 (5). pp. 653-665. High-speed evaluation of a large number of polynomial expressions has potential applications in the modeling and real-time display of objects in computer graphics. Using VLSI techniques, chips called pixel planes have been built by Fuchs and his group to evaluate linear expressions on frame buffers. Extending the linear evaluation to quadratic evaluation, however, has resulted in the loss of regularity of interconnection among the cells. In this paper, we present two types of organizations for frame buffers of m \times m pixels: one, a single wavefront complex cell array requiring $O(m^2n)$ space and the other a simple cell multiple wavefront array with $O(m^2)$ area and $O(n^2)$ wavefronts. Both these organizations have two main advantages over the earlier proposed method. The cells and the interconnection among them are regular and hence are suitable for efficient VLSI implementation. The organization also permits evaluation of higher order polynomials.
CommonCrawl
If the row-reduced form of an $n \times n$ matrix is the $n \times n$ identity matrix, is the matrix always invertible? Furthermore, if the row-reduced form is not the $n \times n$ identity matrix, is the matrix not invertible? What you said is true. Being invertible is equivalent to a huge list of other properties, one of which is that the matrix is full rank (in the case of an $n\times n$ matrix, this means that the rank of the matrix is $n$). A full rank square matrix has the identity matrix as its reduced row echelon form.
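A quick numerical illustration of this equivalence (not part of the original answer), using SymPy's row reduction:

```python
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])   # det = -2, so invertible
B = sp.Matrix([[1, 2], [2, 4]])   # rank 1, so singular

# rref() returns (reduced matrix, pivot columns)
print(A.rref()[0])          # the 2x2 identity  -> A is invertible
print(B.rref()[0])          # not the identity  -> B is not invertible
print(A.rank(), B.rank())   # 2, 1
```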
CommonCrawl
We will study shock waves (i.e. a disturbance skirting regions with tremendously different properties, e.g. of density, pressure, temperature) arising in systems of 1st order PDEs modeling e.g. transport phenomena, conservation laws, gas dynamics, wave equations, traffic flows. Along the way, we will learn the main notions and tools for the analysis of 1st order PDEs, in particular, characteristics, weak solutions, entropy, viscosity and asymptotics of solutions. The language of the seminar is supposed to be English (with the help of German if needed). Interested students are supposed to be acquainted with ordinary differential equations and analysis and should furthermore possess basic knowledge about partial differential equations. What are non-linear first order PDEs? Examples. Derivation of the characteristic ODE system. Non-characteristic surface condition. Existence and uniqueness of local solution via characteristic ODE system. Example: Quasilinear equations. What are conservation laws? How do they appear? Examples: Gas dynamics, Burgers equation. Canonical pictures of singularities. Rankine-Hugoniot conditions. Entropy condition. Weak solutions. Convergence in $L^\infty$ and in $L^1$. N-Waves. Semilinear equations. Traveling waves. Hyperbolicity. Example: Euler equations. Weak solution of the Riemann problem. Rarefaction and shocks. Simple waves and rarefaction waves in the Riemann problem. Solution of Riemann problem by singularities. Evans 11.2.4. Viscosity solution. Traveling waves. Examples. Lawrence C. Evans. Partial Differential Equations. Joel Smoller. Shock Waves and Reaction-Diffusion Equations. Constantine M. Dafermos. Hyperbolic Conservation Laws in Continuum Physics.
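As a quick illustration of the Rankine–Hugoniot condition listed above: for Burgers' equation $u_t + (u^2/2)_x = 0$, a jump from a left state $u_l$ to a right state $u_r$ must travel with speed $s = \frac{u_l^2/2 - u_r^2/2}{u_l - u_r} = \frac{u_l + u_r}{2}$, and the entropy condition $u_l > s > u_r$ singles out the admissible (compressive) shocks.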
CommonCrawl
We provide an introduction to POD-MOR with focus on (nonlinear) parametric PDEs and (nonlinear) time-dependent PDEs, and PDE constrained optimization with POD surrogate models as application. We cover the relation of POD and SVD, POD from the $\infty$-dimensional perspective, Reduction of nonlinearities, Certification with a priori and a posteriori error estimates, Spatial and temporal adaptivity, Input dependency of the POD surrogate model, POD basis update strategies in optimal control with surrogate models, and sketch related algorithmic frameworks. The perspective of the method is demonstrated with several numerical examples.
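The POD–SVD relation mentioned above can be sketched in a few lines of NumPy; the snapshot matrix below is random placeholder data, purely to show the mechanics.

```python
import numpy as np

# Snapshot matrix: each column is a discretized PDE state at one time instant.
snapshots = np.random.rand(200, 40)

# POD modes are the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Truncate to the smallest basis capturing 99.9% of the snapshot "energy".
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
pod_basis = U[:, :r]
print(r, pod_basis.shape)
```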
CommonCrawl
Abstract : Algorithm UCB1 for multi-armed bandit problem has already been extended to Algorithm UCT (Upper bound Confidence for Tree) which works for minimax tree search. We have developed a Monte-Carlo Go program, MoGo, which is the first computer Go program using UCT. We explain our modification of UCT for Go application and also the intelligent random simulation with patterns which has improved significantly the performance of MoGo. UCT combined with pruning techniques for large Go board is discussed, as well as parallelization of UCT. MoGo is now a top level Go program on $9\times9$ and $13\times13$ Go boards.
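For readers unfamiliar with UCB1, a generic sketch of the selection rule that UCT applies at each tree node is shown below; this is not MoGo's code, and it assumes rewards normalised to [0, 1].

```python
import math

def ucb1_select(counts, totals):
    """Pick the arm/child maximizing mean reward plus an exploration bonus (UCB1)."""
    n = sum(counts)
    for i, c in enumerate(counts):
        if c == 0:
            return i  # try every arm/child at least once
    return max(range(len(counts)),
               key=lambda i: totals[i] / counts[i]
                             + math.sqrt(2.0 * math.log(n) / counts[i]))

# Example: arm 2 has few plays, so its exploration bonus makes it the pick here.
print(ucb1_select([10, 12, 2], [4.0, 7.0, 1.0]))
```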
CommonCrawl
Problems in which optimal series of products and their component parts have to be determined. An optimal series of products is a set of various types of products selected from an initial series that enables one to meet all forms of demands on the required quantity with minimum total expenditure in the course of the development, production and use of all the products. An optimal series of products exists when, as the number of distinct types of products increases, the expenditure incurred during their development increases monotonically, while the serial and operating costs diminish. The terminological difference between problems of standardization and problems of unification is to a certain extent questionable. They reflect different ways of looking at the question of the delineation between standardization and unification. For example, problems of choosing optimal series for simple individual and mass-produced products come under the heading of standardization, while problems of choosing optimal series for complex, expensive products and their components come under the heading of problems of unification. Another approach to the delineation of problems of standardization and unification is based on the degree of detail with which the structure of the products in the initial series is studied. If the products of different types in the initial series are completely different from each other and do not have identical, i.e. unified, components, then one speaks of a single-level standardization problem, or simply of a standardization problem. Taking into account the structure of the products and the fact that different products may have the same components, one speaks of a two-level standardization problem. By studying the structure of the components of the products in greater detail, it is possible to obtain an $n$-level standardization problem. Unification problems are $n$-level standardization problems, where $n>1$. If it is supposed that in defining an optimal series of complex products one must also, as a rule, define an optimal series of their most important components, then the two approaches to the delineation between standardization and unification described above coincide. If for the class of products in question the optimality of one of these series is proved, and the minimum value of the main parameter $a_0$ is chosen, then, subsequently, the values of the main parameter of all the other products in the series can be obtained by rounding off, where necessary, the values $a_0q^n$, $n=1,2,\ldots,$ where $q$ is the ratio of the chosen series. An approach based on the system of preference numbers gives a very approximate solution to standardization problems. Moreover, the domain of feasibility of this approach is confined to the narrow class of comparatively simple one-dimensional standardization problems in which the products in the series are characterized by one main parameter. In most cases, particularly where complex and expensive products (which cannot be characterized by one main parameter) are concerned, the optimal solution of problems of standardization and unification has to be defined using very strong mathematical methods. Mathematical models designed for use in solving problems of standardization and unification generally reduce to fairly complex multi-extremal problems of non-linear programming, the solution of which requires modern computing methods and computers with high operating speed and a large memory. 
For certain special classes of problems in which characteristics can be of essential use, simpler effective solution methods are possible.
CommonCrawl
The workshop is organized as extra workshop (besides the three major workshops) of the trimester on "Types, Sets and Constructions" of the Hausdorff Research Institute for Mathematics (HIM), see the official announcement on the HIM site. All talks are held in HIM lecture hall, Poppelsdorfer Allee 45, Bonn. Most participants will have arrangements with the HIM. We will therefore refrain from giving further indications. Krivine, followed by Berardi and Valentini, explored in the 90's how to compute with Gödel's completeness using "exploding models" and A-translation. We later gave with Ilik a direct computational formulation of Henkin's proof. In the talk, we shall study yet another approach to compute with Gödel's completeness theorem by observing that the Kripke forcing translation of the statement of completeness is a statement of completeness with respect to Kripke semantics. Going then the other way round, looking at direct-style provability for the Kripke forcing translation the same way as classical logic can be seen as direct-style provability for the double-negation translation, we obtain a proof of Gödel's completeness with "side effects" whose computational content is to "reify" a proof of validity into a proof of derivability. This talk presents an introduction to constructive reverse mathematics (CRM) with some recent results. The aim of CRM is to classify various theorems in intuitionistic, constructive recursive and classical mathematics by logical principles, function existence axioms and their combinations. We review some results in classical reverse mathematics (Friedman-Simpson program), and axioms of intuitionistic and constructive recursive mathematics with their consequences to motivate CRM. Then we present some results of CRM with and without countable choice, and recent results in CRM without countable choice. In SAT solving we have the New and the Old, modern SAT solving versus old-fashioned backtracking (and look-ahead). As the success of the solution of the Pythagorean Triples Problem https://en.wikipedia.org/wiki/Boolean_Pythagorean_triples_problem shows, indeed the combination of Old and New can be best in many cases, and I will discuss the landscape of these ideas. Progressions of theories along paths through Kleene's $\mathcal O$, adding the consistency of the previous theory at every successor step, can deduce every true $\Pi^0_1$-statement. This was shown by Turing in his 1938 thesis who called these progressions ordinal logics. In 1962 Feferman proved the amazing theorem that progressions based on the uniform reflection principle can deduce every true arithmetic statement. In contrast to Turing's, Feferman's proof is very complicated, involving several cunning applications of self-reference via the recursion theorem. Using Schütte's method of search trees (or decomposition trees) for ω-logic, however, one can give a rather transparent proof. If time permits, I shall present a general proof-theoretic machinery, based on search trees, for investigating statements about well-orderings from the point of view of reverse mathematics. The notion of elementary quotient completion of a Lawvere elementary doctrine introduced by Maietti and Rosolini proved to be a generalization of the notion of the exact completion of a category with finite products and weak equalizers which is a crucial step in the construction of computational models such as Hyland's Effective Topos. 
We describe that completion and focus on the special class of elementary doctrines called triposes, introduced by Hyland, Johnstone, and Pitts. We characterize those triposes whose elementary quotient completion is an arithmetic quasitopos, i.e. a quasitopos equipped with a natural number object. This suggests that the notion of elementary topos may be too strong to model computational aspects of mathematics. There will also be the opportunity to join the organizers for a dinner on Thursday evening at 7 p.m., in the restaurant Zum Treppchen (the reservation has been confirmed). The participants will be asked if they intend to join (everybody has to pay for herself/himself) before lunch on Thursday. There is no specific registration procedure, and no registration fee will be charged - the coffee breaks are taken care of by the HIM. Lunches and dinners have to be organized by the participants individually, however, there is a suggestion for dinner on Thursday, see above. PCC 2017 was held in Göttingen, Germany, including an unveiling ceremony of a commemorative plaque for Paul Bernays on his former house. PCC 2016 was held in Munich, Germany, including a day-long special session to celebrate the 60th birthday of Ulrich Berger.
CommonCrawl
"What is the Largest Possible Prime Number?" First it appears this question is outside the bounds of good mathematical reasoning for two main reasons. 1) The concept of 'largest possible prime' is ill-defined. 2) The concept also appears to be ill-logical based on reason (1). For the same reason there is 'no largest prime' there is also no number with an infinite amount of digits, yes? A "number" with an infinite number of digits is a natural number? I realize, as a concept, the 'largest possible prime' is not a natural number and therefore cannot be a prime number, however the same rules we use to generate primes can be abstracted to strings of numbers, infinite in length, despite there being no definition for such. What is such a string called? I cannot find an answer excepting the idea of infinite, which seems to bulldoze any distinction of pattern or lack thereof in an infinite string. Reasonably an infinite strings of 2's is not different from an infinite string of 3's, 5's, or 17's, in terms of numbers ... or any other symbol, it is simply infinite. Despite this making sense it seems to leave something fundamental behind. I am sure I missed something here and there are strings of numbers infinite in length that assume some distinction based on their makeup and pattern. It seems that such a string could not be defined at irrational either. 1) Will a finite set of numbers (10^x) always have a greatest possible prime of (10^x)-3? 2) Can we extrapolated this logic to a set (10^∞) and say the largest possible irrational illogical prime is ∞97? Or: Of all sets (10^x), is [(10^x)-3] the largest possible prime set. [I am not a mathematician so please forgive my laymen question/topic. This question, "What is the largest possible prime number?" is distinct and separate from the question "What is the largest prime number currently known?." I understand the set of primes is infinite and that any 'largest' prime could be smaller then the next larger prime, but continuing in an illogical fashion (+Godel) I have come across a interesting idea. Let me explain. If we could think about the largest possible prime as not real, ie.. not a real number.. but a theoretical number (not sure what the name is) then we could say maybe there is a 'not real prime' number with an infinite amount digits. Further if we considered all 'not real primes' with infinite digits what would be the greatest of those? I think the answer to this question is a number with an infinite amount of 9's ending in a 7. The logic being ∞999 is not prime but ∞9997 could be and that would be the 'largest possible prime number.' I do have documentation to support the idea but it is lengthy to transcribe here. I know the 'not real' quality of the answer will immediately annoy many but I am wondering what everyone thinks of the question and this obviation of normal thinking. Thank you All! Thanks StackExhange! Infinity isn't a number, it's a concept. Having "an infinite" amount of $9$'s and then a $7$ wouldn't really help you at all. $\infty9997$ would be the same as just $\infty $. Since at least the Greeks, and probably before that, we've known that there's an infinite amount of primes; with the normal definition of infinity this then implies that the "largest" prime is impossible to ascertain. Because we can always keep on adding $1$, until we get the next "largest" prime ad infinitum. 
If you want your question to make any sense at all then you need to wholly redefine the concept of infinity, and that would make basically everything relying on infinity behave in unpredictable ways. It's probably easier to just not think of a specific "largest" prime, but instead of a concept of "the largest prime" which is just an extension of infinity.
CommonCrawl
In connection with National Mathematics Day, Kerala School of Mathematics conducted a 3-day workshop for High School and Higher Secondary School Mathematics Teachers. Topics Covered: Teaching of mathematics, and on topics relevant to school teachers. KSoM conducted a workshop for M.Sc. students (those who have completed the First Year and are going into the Second Year from colleges in Kerala) from April 23 to May 5, 2018. There were two components for the workshop. During the first week (April 23-28), the topics covered were Algebra and Number Theory, and during the second week, Linear Operators. Topics of discussion: Topics in Functional Analysis - Spectral theorem and Operator Algebras. KSOM conducted a Workshop at St. Joseph's College, Moolamattam, Idukki district, Kerala during 16-23 November 2017. The program was meant for BSc Mathematics students studying in colleges in Idukki, Kerala. Gamelin, Theodore W. Complex analysis. Springer, 2009. Differential Calculus, integral calculus, theory of probability. Prof. S. Somasundaram, Manonmaniam Sundaranar University, Tirunelveli. The total Cauchy theorem, the power series representation, the open mapping theorem, the global Cauchy theorem. Prof. Arindama Singh: (i) Mathematical reasoning; (ii) Permutations and Combinations. Topics: Number fields. Finite fields. Ring of integers in a number field and their properties. Galois theory. Solvability by radicals. Rtd. Head, Department of Mathematics, RKM Vivekananda College, Chennai. Trainee in Olympiad Camp. 2. Using Computer simulations for Math learning at Higher Secondary level. A one week long Mini Mathematics Training and Talent Search programme was organized during April 24-30, 2017 at Kerala School of Mathematics, Kozhikode, Kerala with the funding of the National Board of Higher Mathematics, Govt. of India. The program was targeted at students in the second year/fourth semester of B.Sc/Integrated M.Sc programmes having Mathematics as one of the compulsory papers. The topics discussed in the program were Number Theory, Foundations of Set Theory, Real Analysis, Group Theory, and Linear Algebra. The program was conducted on the solid academic support of the prestigious Mathematics Training and Talent Search (MTTS) programme and it took up and implemented the meticulous teaching methodology of MTTS. A summer workshop on basic subjects of the MSc syllabus meant for MSc students was conducted at KSOM during 01-21 April 2017. A small group of young college teachers also attended the program along with students as tutors and facilitators in this residential program. Subject experts from various national-level research institutes handled classes, led discussions and conducted tutorial sessions. KSOM conducted a Workshop at Pavanatma College, Idukki district, Kerala during 28 November - 02 December 2016. The program was meant for BSc Mathematics students studying in colleges in Idukki, Kerala. Semisimple representations. Closed and "algebraic" subgroups of GL(n,C). matrices (Jordan decomposition). Nilpotent and unipotent groups, Torus, Classification of connected abelian groups, Solvable groups, Lie-Kolchin Theorem. Reductive groups: If a connected algebraic group has a faithful irreducible representation then it is a reductive group. Classification of finite subgroups of GL(n,C): Theorem of Jordan. theorem, Gamma and Zeta functions (basic properties). Weierstrass function.
Prof. L Venkataraman: Cauchy theory (basically complex integration and its applications). Cauchy-Goursat Theorem, Cauchy Integration Formulas, Singularities, Residue Theorem, Evaluation of Integrals using Residues, Argument Principle, Maximum Modulus Principle. KSOM conducted a Workshop at Wayanad during 09-13 December, 2015. The program was meant for BSc Mathematics students studying in colleges in Wayanad. Following Bourbaki (Chapter I), we will study filters and their limits. Filters, like nets, generalize sequences. They enable us to restore intuition to the notion of continuity: a map between general topological spaces is continuous if and only if, for every filter in the domain that converges to a point, the direct image filter converges to the image of that point. Moreover, Hausdorffness may be defined as the property that every filter has at most one limit point, (quasi-)compactness as the property that every filter admits at least one cluster point. Following Gillman & Jerison, we will illustrate the use of filters in the study of rings of continuous functions (such as C[0,1]). Following Bourbaki again (Chapter II), we will study uniformities. Cauchy sequences and uniform continuity are not topological notions (they are not preserved under homeomorphisms; their usual definitions presume a metric) but they are closely related to topological notions. Uniform spaces generalize metric spaces. Like a metric, a uniformity too induces a topology. Uniform continuity, Cauchy filters, and completions may be defined in the context of uniform spaces. Some notes following Bourbaki may be found at http://www.imsc.res.in/~knr/past/top15 (these notes are currently being added to). W. Klingenberg, A Course in Differential Geometry, Springer-Verlag, 1978. Manfredo P. do Carmo, Differential Geometry of Curves and Surfaces, Prentice-Hall, 1976. S. Kumaresan, A Course in Differential Geometry and Lie Groups, Hindustan Book Agency, 2002. Differentiable maps, Tangent space, Implicit function theorem, Critical Points, Structure Theorem for nondegenerate critical points, Topological and Differentiable manifolds, Differentiability of functions on a manifold, Sard's theorem, Partitions of unity, paracompactness, Whitney's Embedding Theorem, Morse functions and their Existence, CW complexes, Change of Topology as one passes a non degenerate critical point. A one week long Mini Mathematics Training and Talent Search programme was organized during May 4-9, 2015 at Kerala School of Mathematics, Kozhikode, Kerala with the funding of the National Board of Higher Mathematics, Govt. of India. KSOM conducted a refresher course during April 2015 for final year M.Sc. Mathematics students in Kerala. The course covered some basic topics of Algebra, Analysis, and Number Theory. 4. L_p spaces; specializing to p=2, a simple introduction to Fourier series. Generalities on linear representations, characters, Schur's lemma, orthogonality relations, induced representations, Mackey's criterion. Modules over Commutative rings, ascending and descending chain conditions, Noetherian rings and modules, Hilbert basis theorem. 4. Introduction to Commutative Algebra by Atiyah and Macdonald. 5. Linear Representations of Finite Groups by Serre. R^n calculus, continuity, various notions of differentiability, inverse function and implicit function theorems, extension of these notions to Banach spaces, finally some applications. From Cauchy Riemann equations to some advanced topics of Analytic continuation.
Homological Algebra: Complexes and homology, Exact sequences, Long exact sequence associated to a short exact sequence of complexes. Topology: Simplicial complexes, CW Complexes. Homology Theories: Simplicial Homology, Cellular Homology, Singular Homology. Main Theorems (Only statements with examples): Excision Theorem, Homology sequence of a pair/triple, Functoriality. Invariance of domain. Further if time permits: Brouwer fixed point theorem, Lefschetz fixed point theorem and Borsuk-Ulam theorem. The two series of lectures will complement each other; the first one, demystifying the finite dimensional spectral theorem (diagonalization of self adjoint matrices) following a paper of Adam Koranyi by doing the singular decomposition first (see A. Koranyi: Around the finite dimensional spectral theorem, Amer. Math. Monthly 108 (2001) no. 2, 120-125), and the second, introducing the participants to the infinite dimensional version. KSOM conducted a refresher course during April 2014 for B.Sc. Mathematics IV Semester students studying in Kerala state. The course covered some basic topics of Algebra, Analysis, Linear Algebra and Number Theory. The course with relaxed schedule and interactive sessions discussed abstract topics in Number Theory, Analysis, Algebra and Linear Algebra. The sessions were handled by Joseph Mathew, A. K. Vijayarajan, V Srivatsa, P Jisna, P Shankar (KSOM) and Kirankumar V B. 1: $\pi_1$: definition, Homotopy, Examples. 2: Van Kampen Theorem; Computation of the Fundamental groups of various spaces. 5: Universal covering (Existence - statement only); Relation between $\pi_1$ and fibres of covering maps. Day 1: Basic notions--product spaces, connectedness, compactness. Topic: The basic theorems of functional analysis - Hahn Banach, Completeness and Banach Spaces, Operators, Baire Category Theorem, Uniform Boundedness, Closed Graph, Open Mapping, together with some well-known applications. Topic: Quick review of topological preliminaries, simply connectedness, homotopy etc. Definition of log and n^th roots, lifting of maps, covering maps. Covering spaces of C*, C** and D*. Applications: Characterization of nowhere vanishing functions, Picard Theorems, Fundamental Theorem of Algebra. Topic: Quick review of holomorphic (analytic) functions, Cauchy-Riemann equations, Cauchy's theorem, power series. Properties of holomorphic functions, zeros, maximum modulus principle, Liouville's theorem, isolated singularities, argument principle, open mapping theorem. If time permits, some major theorems like Montel's theorem, Riemann mapping theorem and Picard's theorem (Ahlfors's proof). Topic: Borel $\sigma$-algebra, outer measure, measurable sets, construction of Lebesgue measure on $\mathbb{R}$, measurable functions, introduction to Lebesgue integration and limit theorems. KSOM conducted a refresher course during September 2013 for B.Sc. Mathematics Final year students in Kerala state. The course covered some basic topics of Algebra and Analysis. The course with relaxed schedule and interactive sessions discussed abstract topics in Number Theory, Analysis, Algebra, Linear Algebra and Graph Theory. The sessions were handled by V. Krishna Kumar (NISER, Bhubaneswar), P. K. Sanjay (NIT, Calicut), Ananthavardhanan (IIT, Bombay), Jijo Sukumaran (Govt. College, Kasargod), M. Manickam, Joseph Mathew, A. K. Vijayarajan and Guram Donadze (KSOM). 21-Sep-13 GD AKV JM Res. Sch. 1. Bipin Balaram, Assistant Professor, Dept. of Mechanical Engineering, Amrita School of Engineering, Amrita Viswa Vidyapeetham, Coimbatore. 2.
Vinod V., Research Scholar, Dept. of Mechanical Engineering, NIT Calicut. The basic objective of this work is to develop a robust numerical method for obtaining the periodic solutions of nonlinear ordinary differential equations and to devise a strategy to quantitatively estimate the change in this solution when the system parameters are slowly varied. Historically, homotopy based methods like parametric continuation coupled with shooting algorithms have been used to perform this task. With the help of an arc-length reparameterisation technique at singular points in the parameter space, this can be used for practical bifurcation analysis too. But this scheme has two serious shortcomings. First, systems of practical importance in areas like physics, engineering and biology, which show such interesting behaviour as synchronisation and clustering, cannot be studied using this simple continuation strategy. So, an effort will be made to develop a continuation strategy based on a generalised shooting method which treats the frequency of the periodic solution also as an unknown. This means that the new scheme should converge to a point in the enhanced state space of dimension 2n+1; 2n being the dimension of the initial condition vector. Preliminary efforts show that the use of homotopy methods will reduce this problem to an algebraic one with 2n equations in 2n+1 variables. This poses a challenge which we hope to address. This scheme shall be used to study the synchronisation in (dissipatively and reactively) coupled Van der Pol oscillators in a ring by treating the coupling constant as the continuing parameter, with an emphasis on tracking down possible bifurcation phenomena. Second, existing continuation strategies are equipped to carry out the solution continuation in the euclidean state space alone. But in many cases involving n coupled oscillators, the dynamics is restricted to a manifold embedded in the 2n dimensional state space. For example, the dynamics of a double pendulum can be studied on a 4 dimensional euclidean state space, but the modulo 2π nature of the dynamics means that it is restricted to a torus. There can be significant computational gains if this continuation can be restricted to the manifold. Keeping this in mind, the possibility of adapting the developed continuation scheme to work on the appropriate manifold will be explored. This will involve the use of techniques from computational differential geometry. This outreach programme was meant for B.Sc. Mathematics third year students, who were studying in Kerala state. The course covered some basic topics of Algebra, Analysis and Number Theory. Some first year students of M.Sc. Mathematics also participated. The second phase of the outreach programme for Higher Secondary School Students in the nearby region was held at KSOM in continuation with the Outreach Programme for Higher Secondary School Students conducted in May 2012. This programme connected students with mathematical role models who shared their knowledge of and passion for mathematics and its various subspecialities and related fields. A two-day discussion meeting with Mathematics Teachers of Higher Secondary Schools in the nearby region was held at KSOM in continuation with the Outreach Programme for Higher Secondary School Teachers conducted in May 2012. In continuation with the discussion meeting with Higher Secondary School Mathematics Teachers held at KSOM on 7th April, 2007, an outreach programme for HSS students was conducted at KSOM. KSCSTE Member Secretary Dr. K. K.
Ramachandran inaugurated the programme. Prof. Ram Murty from Queen's University, Canada delivered a series of "Panorama Lectures" on Ramanujan & L-functions from 01 to 04 May 2012. Mathematics underpins so many aspects of our lives and provides the foundation for science, engineering, economics, finance and many other professional fields. For all these reasons, it is critical to help students early on to appreciate the beauty and power of mathematics and to begin to understand how it plays out in so many areas. Of course, teachers are already experts at encouraging and guiding students, but having the added support of a mathematical research institute provides obvious benefits not only for students but for the teachers and mathematicians as well. As a prelude to the outreach program for Higher Secondary School students, a one-day discussion meeting was held at KSOM with mathematics teachers of Higher Secondary Schools who are teaching +1/+2 students in the nearby region, to discuss and find out the modalities of conducting the outreach program. The visit was for a discussion meeting on `Operator Spaces and Operator Theory' with KSOM Faculty and researchers working in the area.
CommonCrawl
PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow. In a spinning electron, does charge rotate faster than mass or vice versa? Schwinger showed that for a charged elementary fermion, the g-factor obeys $g/2 = 1 + \alpha/(2\pi) + O(\alpha^2)$. Given that $g/2$ is the ratio between the "charge rotation" and the "mass rotation", the expression should imply that charge rotates faster than mass (assuming that both have the same spatial distribution). Is this correct? Or is it the other way round? Schwinger calculated the anomalous magnetic moment of the electron, not of any fermion. Proton and neutron have quite significant anomalous magnetic moments, which are calculable. A macroscopic solid body, bearing some charge, may have any ratio of the angular momentum to magnetic moment, but it does not mean that the charged parts move differently than uncharged ones. Vote to close. I added "elementary" to the question, which I had implied but forgotten to state explicitly. This is not graduate-level, voting to close.
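For a rough sense of scale in the formula above, one can evaluate the Schwinger term numerically. The short Python sketch below is only an illustration; the approximate value of the fine-structure constant is an assumption I supply, not something from the original post.

```python
import math

# Fine-structure constant (approximate CODATA value, assumed for this sketch).
alpha = 1 / 137.035999

# Schwinger's leading-order result for an elementary charged fermion:
# g/2 = 1 + alpha/(2*pi) + O(alpha^2)
correction = alpha / (2 * math.pi)
g_over_2 = 1 + correction

print(f"alpha/(2*pi) = {correction:.6e}")   # about 1.16e-3
print(f"g/2          = {g_over_2:.8f}")     # about 1.00116141
```

So whichever physical interpretation one prefers, the ratio in question differs from 1 by only about 0.1%.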
CommonCrawl
The image below shows a solved pentomino puzzle in a $6\times10$ grid. Your challenge is to divide the rectangle along the black lines only to make two pieces that can be rearranged and fit together again to make the $7\times9$ rectangle below with three holes in it. The shapes to the bottom right of the original image have been cut and moved down and left, each by one square.
CommonCrawl
why is it that people don't die from bruises? What mechanism does the human body have to keep this from happening? This statement is primarily true only for blood clots within blood vessels, especially in the veins. When you are talking about bruising, you are talking about clots outside of the vasculature. When a blood clot occurs in an artery, it can block that artery or break off, flow downstream, and block some smaller distal vessel. These events are most severe when they affect crucial organs like the brain and heart ("stroke" or "heart attack"), though of course any organ can be damaged in this way. However, blood clots in arteries can't ever directly affect a tissue that is not distal from where the clot starts, because they can never pass through capillaries or travel backward. When a blood clot occurs in a vein and is dislodged, it can follow the increasingly larger venous system back to the heart, where it can cause a pulmonary embolism (blockage in the lungs) or, via a patent foramen ovale, travel to the left-side circulation and end up anywhere, including the coronary arteries or blood vessels of the brain. Similarly, clots that form in the venous return from the lungs to the heart or in the left side of the heart itself can travel anywhere (except the lungs) and create a blockage. For a clot outside the vasculature, for it to have an effect somewhere else systemically it must re-enter the vasculature. This is simply not possible in most situations, because the vessels involved are very small, and during bleeding the blood is flowing out of vessels: there is no pressure gradient to push the clot back into the vessels. In more severe cases of injury where major vessels are involved, clotting in those major vessels can indeed be a problem, but not for common occurrences of bruising. @Bryan Krause's answer is quite correct. I just want to clarify what a bruise is, because I think you have a misconception of what happens. A capillary is small. As an analogy, imagine someone injects a small amount of blood, say, .5cc, under your skin with a very fine needle, but leaves the needle there. The blood will spread out as much as it can over time. The blood does not need to clot; only at the level of the injured capillary does the blood need to clot. So if the fine needle represents the capillary, the .5cc of blood won't clot then go back up into the needle; it can't happen. It's stuck in the tissue (the extracellular fluid of the skin.) A tiny clot near the tip of the needle will stop more blood from pouring out. No problem. No bleeding to death. Since the blood has nowhere to go once it spreads out, the blood cells die in the tissue, and heme degradation begins. Initially the blood is reddish, then reddish blue, then dark purple, followed by purplish-brown, to brown-greenish, to yellow, to normal. A hematoma is a collection of blood that can't spread out because it is unable to penetrate the tissue very well (maybe there's a fibrous capsule in the way.) The blood will stay there and will eventually clot, but it can't get into the end of the fine needles (capillaries) or the bigger needle (arteriole). The body doesn't work that way. Hematomas are eventually broken down by the body as well, but it takes more time, and often there is a residual "scar" in the tissue where it was. That's why sometimes you feel a little bump forever afterwards where you had a really bad bruise. 
Bruises - really bad bruises - can cause major damage if there is insufficient or no clotting factor, but not because clots form and kill us, but because the bleeding doesn't stop and if it's bad enough, one can bleed so much into a tissue that the swelling can cut off the blood flow into the tissue (e.g. see compartment syndrome). I would like to add a point which I feel others have missed: our body already has a safety system to prevent such a thing from happening, called fibrinolysis. Obviously, this system is not solely for degrading clots that might have come into the bloodstream, but this is an important part of all of its functions. The main character here is plasmin: a serine protease (just like trypsin) whose function is to degrade blood proteins, including fibrin clots. It is synthesized by the liver and secreted into the bloodstream in an inactive form, namely plasminogen. It comes with two levels of safety systems. In the zymogen form, it is in the closed form i.e. the active site is hidden and inactive. When it encounters a blood clot, it gets a conformational change and exposes its activation site. Now, enzymes like tissue plasminogen activator (tPA), urokinase plasminogen activator (uPA), kallikrein and even factor XII convert it into its active form (by breaking a peptide bond between Arg561 and Val562). This two level safety system helps in preventing unnecessary activation of plasminogen (since both the zymogen and its activators are flowing in the bloodstream). So what prevents it from chewing off all the blood proteins? The secret: its inhibitors are also flowing with it in the bloodstream. Plasmin is inactivated by proteins like $\alpha_2$-macroglobulin and $\alpha_2$-antiplasmin. Plasmin cleaves $\alpha_2$-macroglobulin at the bait region, causing a conformational change. In the resulting $\alpha_2$-macroglobulin-plasmin complex, the active site of plasmin is hidden so that its efficiency is greatly reduced. This conformational change also allows the clearance proteins to bind this complex and let it get out of circulation. You can see the Wikipedia article to know more. Bruises usually damage capillaries (think of them as being side roads well away from the expressway and major routes). Just as closing down a few side roads in a lightly populated suburban or rural area a mile away from a major interstate is unlikely to cause a backup/traffic jam, blood clotting in capillaries at a bruise shouldn't generate dangerous blockages.
CommonCrawl
I'm watching MIT chemistry by Donald Sadoway. In one of his lectures devoted to solutions and phase separation, he performs experiments with absinthe. First he mixes absinthe with 5 $\times$ water which turns into milky louche. Then he adds some cognac and the mixture becomes transparent again. I don't grasp his explanation of this. Here is a timestamped link to the video. What does "stabbing the fat" mean? Does it mean that the ethanol somehow makes the surface between fat and water disintegrate thus turning the mixture into a single phase solution? Or maybe ethanol + water + fat somehow combine to form a single molecule so that, again, there is a single phase solution now? Absinthe is a strongly alcoholic beverage containing anethole, which is insoluble in water and very soluble in ethyl alcohol. When water is added to the absinthe, the alcohol becomes too dilute to keep the anethole dissolved, and it appears milky. When more ethyl alcohol is added in the form of cognac, the solution again becomes strong enough in alcohol to dissolve the anethole and the solution becomes clear again. The phrase "stab the fat" is a bit odd, but I'll do my best to explain. Anethole is a fat-like or fat-loving oil, meaning it dissolves well in other fatty sorts of compounds, but not in water (which is lipophobic, or fat-avoiding/fat-fearing). Ethyl alcohol has two parts to it. One end of it (labeled (fat) in the chemical formula in the question) is lipophilic (attracted to fat) and the other end is hydrophilic (attracted to water). If there is enough alcohol in the mixture, the lipophilic end of the alcohol molecules can surround the anethole ("stabbing the fat") which leaves the other end of the alcohol free to be dissolved in the water (and also any excess hydrophilic portions of the alcohol). I hope the "stab the fat" explanation was clear. Don't hesitate to ask for specific clarifications in the comments below.
CommonCrawl
Fifth year graduate student, interested in algebraic geometry, representation theory, and number theory. 4 Is it appropriate to ask questions about previously unexplored concepts? 15 What is the archimedean Hecke algebra? 14 What's the point of a Whittaker model? 11 Why are anisotropic tori compact? 11 How does the Bernstein-Zelevinsky construction of irreducibles from supercuspidals parallel the representations of the Weil-Deligne group? 11 If $R$ is an etale extension of $\mathbb Z$, then $R = \mathbb Z^n$?
CommonCrawl
Abstract: We prove the global asymptotic equivalence between the experiments generated by the discrete (high frequency) or continuous observation of a path of a time inhomogeneous jump-diffusion process and a Gaussian white noise experiment. Here, the considered parameter is the drift function, and we suppose that the observation time $T$ tends to $\infty$. The approximation is given in the sense of the Le Cam $\Delta$-distance, under smoothness conditions on the unknown drift function. These asymptotic equivalences are established by constructing explicit Markov kernels that can be used to reproduce one experiment from the other.
CommonCrawl
Professor Zac is trying to finish a collection of tasks during the first week at the start of the term. He knows precisely how long each task will take, down to the millisecond. Unfortunately, it is also Frosh Week. Zac's office window has a clear view of the stage where loud music is played. He cannot focus on any task when music is blaring. The event organizers are also very precise. They supply Zac with intervals of time when music will not be playing. These intervals are specified by their start and end times down to the millisecond. Each task that Zac completes must be completed in one quiet interval. He cannot pause working on a task when music plays (he loses his train of thought). Interestingly, the lengths of the tasks and quiet intervals are such that it is impossible to finish more than one task per quiet interval! Given a list of times $t_i$ (in milliseconds) that each task will take and a list of times $\ell_j$ (in milliseconds) specifying the lengths of the intervals when no music is being played, what is the maximum number of tasks that Zac can complete? The first line of input contains a pair of integers $n$ and $m$, where $n$ is the number of tasks and $m$ is the number of time intervals when no music is played. The second line consists of a list of integers $t_1, t_2, \ldots, t_n$ indicating the length of time of each task. The final line consists of a list of times $\ell_1, \ell_2, \ldots, \ell_m$ indicating the length of time of each quiet interval when Zac is at work this week. You may assume that $1 \leq n,m \leq 200\,000$ and $100\,000 \leq t_i, \ell_j \leq 199\,999$ for each task $i$ and each quiet interval $j$. Output consists of a single line containing a single integer indicating the number of tasks that Zac can accomplish from his list during this first week.
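The statement does not prescribe an algorithm, but one natural approach is a greedy matching: sort the task lengths and the quiet-interval lengths in increasing order and repeatedly assign the shortest unmatched task to the shortest interval that can still hold it. The Python sketch below is my own illustration of that idea; the function name and the sample numbers are assumptions, not part of the problem.

```python
def max_completed_tasks(task_times, quiet_lengths):
    """Greedy count of tasks that fit into quiet intervals (at most one task per interval)."""
    tasks = sorted(task_times)
    intervals = sorted(quiet_lengths)
    count = 0
    i = 0  # index of the shortest still-unmatched task
    for length in intervals:
        if i < len(tasks) and tasks[i] <= length:
            # The shortest remaining task fits into this interval: match them.
            count += 1
            i += 1
        # Otherwise this interval is too short for every remaining task; skip it.
    return count

# Hypothetical example: 3 tasks and 4 quiet intervals, all values in milliseconds.
print(max_completed_tasks([150000, 120000, 180000],
                          [100000, 125000, 155000, 190000]))   # prints 3
```

A standard exchange argument shows that matching the shortest tasks to the shortest feasible intervals never loses a possible assignment, and after sorting the scan itself is linear.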
CommonCrawl
1(a) Differentiate between plugging and generating mode in an AC motor. 1(b) With the help of a diagram, explain the principle of working of induction heating. 1(c) For a single phase full converter with inductive load, if the source inductance $L_s$ is considered, find the average output voltage and the reduction in the average output voltage due to overlap if $\alpha = 30^\circ$ and $\mu = 2^\circ$, with a supply voltage of 230 V (a numerical sketch for this part follows after the question list). 1(d) The speed of a 10 HP separately excited DC motor is controlled by a single phase full converter. 1(e) The rated armature current is 30 A, $R_a = 0.5\ \Omega$. The AC supply voltage is 230 V. The motor voltage constant is 0.182 V/rpm. While in motoring action with a back emf of 192 V, its polarity is reversed for regenerative action. Calculate the firing angle to keep the motor current at its rated value. 1(f) Explain the battery charging circuit in detail. 2(a) Explain the stator voltage control technique for a three phase induction motor. 2(b) Explain the three phase fully controlled bridge converter with source inductance. Draw waveforms. 3(a) Draw and explain the average model and state space model for a buck DC-DC converter in detail. 3(b) What is the need of SVM? Explain SV sequence and SV switching in detail in space vector modulation. 4(a) Explain the fly-back converter in continuous mode. Derive the relation for load voltage. 4(b) A 3 phase 4 pole induction motor is operated from a 415 V / 50 Hz supply. The stator voltage control technique is to be applied to vary the speed. The motor is driving a load torque of 100 N-m. Find out the following if the motor speed is 100 rad/sec. 5(a) Draw and explain a semi-converter drive for a separately excited DC motor. Draw torque-speed characteristics. 5(b) State and explain different characteristics of a battery. 5(c) A UPS is driving a 600 W load which has a power factor of 0.8. The efficiency of the inverter is 80 percent. The battery voltage is 24 volts DC. Assume that there is a separate charger for the battery. Determine the following. ii) Wattage of the rectifier. 6(a) On-line and off-line UPS. 6(b) Controllers in DC-DC converters. 6(c) Torque-slip/speed characteristics of an induction motor with operating regions for different values of slip.
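As a rough illustration of the kind of calculation question 1(c) asks for, the sketch below evaluates the textbook expressions for a single-phase full converter, $V_o = \frac{2V_m}{\pi}\cos\alpha$ for the average output without overlap and $\Delta V = \frac{V_m}{\pi}\left[\cos\alpha - \cos(\alpha+\mu)\right]$ for the reduction due to overlap. Treat the formulas and the numbers as an assumed interpretation of the question, not an official worked solution.

```python
import math

V_rms = 230.0                 # supply voltage (rms), from the question
V_m = math.sqrt(2) * V_rms    # peak supply voltage
alpha = math.radians(30)      # firing angle
mu = math.radians(2)          # overlap angle

V_o_ideal = 2 * V_m / math.pi * math.cos(alpha)                     # average output, no overlap
delta_V = V_m / math.pi * (math.cos(alpha) - math.cos(alpha + mu))  # reduction due to overlap

print(f"V_o (no overlap)       = {V_o_ideal:.1f} V")        # about 179.3 V
print(f"reduction from overlap = {delta_V:.2f} V")           # about 1.86 V
print(f"V_o (with overlap)     = {V_o_ideal - delta_V:.1f} V")
```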
CommonCrawl
Abstract : The paper presents designs that allow detection of mixed effects when performing preliminary screening of the inputs of a scalar function of $d$ input factors, in the spirit of Morris' Elementary Effects approach. We introduce the class of $(d,c)$-cycle equitable designs as those that enable computation of exactly $c$ second order effects on all possible pairs of input factors. Using these designs, we propose a fast Mixed Effects screening method, that enables efficient identification of the interaction graph of the input variables. Design definition is formally supported on the establishment of an isometry between sub-graphs of the unit cube $Q_d$ equipped of the Manhattan metric, and a set of polynomials in $(X_1,\ldots, X_d)$ on which a convenient inner product is defined. In the paper we present systems of equations that recursively define these $(d,c)$-cycle equitable designs for generic values of $c\geq 1$, from which direct algorithmic implementations are derived. Application cases are presented, illustrating the application of the proposed designs to the estimation of the interaction graph of specific functions.
CommonCrawl
I am trying to solve a set of coupled, nonlinear ODEs. The only independent variable is a 1-dimensional spatial coordinate, let's call it $x$. For now, I've managed to approximate away some of the coupling between the equations, such that the first one doesn't depend on the second one at all. I solved it with FEM without major difficulty: boundary conditions met, residuals look good, physically meaningful result. where $A$, $B$, $C$, and $D$ are known, at least at the resolution of the FEM mesh. The problem is that, using the same basic code as before, the solution appears to violate the boundary conditions entirely, blowing up as $x \rightarrow 0$ instead of approaching zero (or even any finite value, it doesn't seem to matter what I choose). Thinking this was a problem with my code, I experimented with a finer mesh and with finite differences instead of finite elements, with no success. It finds the same nonsensical result every time. Finally, I put my numerical data for the coefficients into Mathematica and attempted to solve the above BVP. I let it run for quite a while (maybe 15-30 minutes) before giving up. It never found a solution. So finally, my question: Is it possible that this equation simply can't be solved with these boundary conditions? For example, I tried changing $f(1) = 0 \rightarrow f'(0) = 0$ and the solution was found almost instantaneously by Mathematica, but it still violated the new boundary condition; somehow, that didn't trigger an error, but it made me suspicious that something "bigger" might be going on here. I'm looking for some mathematical intuition behind why the numerical methods might fail in these cases and how to mitigate these problems. This equation is linear (correcting here so that comments still make sense). I think that's not the root of my problem, though; I tried using a linear solver with the proper stiffness matrix and load vector to solve $-f''(x) = g(x)$, which should be EASY; here, $g(x)$ is the FEM solution to the first equation. The solution meets boundary conditions but is "jagged" (think sawtooth patterns overlaid on a smooth function). Now I think perhaps there's something wrong with substituting the FEM solution for the first equation into the second equation, but I'm not sure why. I think I can safely rule out a problem with this code; I am generating the source files with SymPy, so there shouldn't be any human errors associated with changing functional forms, etc. of the equations I want to solve. As an example of what I'm seeing now, see figure below. The only intuition I have for this is that the code "tries" to meet boundary conditions (which are homogeneous Dirichlet), but the solution, for whatever reason, fundamentally wants to blow up to $-\infty$. So we end up with this "compromise" solution that straddles the two possibilities. Is this even remotely correct? It doesn't seem to matter what I change at this point, and refining the mesh just increases the frequency of the sawtooth form. It is impossible for finite element or finite difference solutions to violate the boundary conditions that you imposed. It is not related to the physics of your problem; it's just an issue with your implementation, no matter whether your ODE is consistent with your boundary conditions or anything else.
The only thing that matters is that you enforce your boundary conditions when you are trying to assemble your stiffness matrix and response vector in your finite element or finite difference method, and if you assemble them correctly it should recover the imposed boundary conditions. Otherwise maybe you have a bug in your matrix assembly which makes the stiffness matrix singular, which shows you some nonsense values at the boundaries. As a result, my suggestion is to check your matrix assembly function and also check if your linear solver or whatever could generate informative warnings/errors when something is wrong with your stiffness matrix or response vector. You need to get at least whatever you wanted to impose on your equation correctly, and then it's another story when you want to discuss the physical meaning of your model and whether it makes sense or not. Also, your ODE is not nonlinear, because $A(x)$, $B(x)$, $C(x)$, and $D(x)$ do not depend on $f$ itself, and as a result it should be solved easily by simple linear finite element or finite difference solvers (e.g. the scipy BVP solver).
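Since the answer mentions the SciPy BVP solver, here is a minimal sketch of how such a linear two-point problem could be set up with `scipy.integrate.solve_bvp`. The coefficient functions below are placeholders I made up for illustration, not the poster's actual $A$, $B$, $C$, $D$ data.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Placeholder coefficients (assumptions for illustration only).
def A(x): return 1.0 + 0.0 * x
def B(x): return 0.5 * x
def C(x): return -1.0 + 0.0 * x
def D(x): return np.sin(np.pi * x)

def rhs(x, y):
    # y[0] = f, y[1] = f'.  From A f'' + B f' + C f = D:  f'' = (D - B f' - C f) / A
    return np.vstack([y[1], (D(x) - B(x) * y[1] - C(x) * y[0]) / A(x)])

def bc(ya, yb):
    # Homogeneous Dirichlet conditions f(0) = 0 and f(1) = 0.
    return np.array([ya[0], yb[0]])

x = np.linspace(0.0, 1.0, 50)
y_guess = np.zeros((2, x.size))       # initial guess for f and f'
sol = solve_bvp(rhs, bc, x, y_guess)
print(sol.status, sol.message)        # status 0 means the solver converged
```

If a well-posed problem of this form still blows up at a boundary, that points back to how the boundary rows of the stiffness matrix were assembled rather than to the equation itself.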
CommonCrawl
Last week our paper titled "Fully-Coupled Simulation of Cosmic Reionization. I: Numerical Methods and Tests" was accepted to the Astrophysical Journal Supplement. This paper was written in collaboration with a few colleagues in UC-San Diego, and SMU. I was added to this work during the review process when it was determined that we needed a comparison test between the two radiation transport solvers in Enzo: flux-limited diffusion and adaptive ray tracing. The tests in this paper sets the stage for future work on galaxy formation simulations with a self-consistent reionization history. (abs, pdf) Choudhury et al., Lyman-$\alpha$ emitters gone missing: evidence for late reionization?
CommonCrawl
Which topic does this question belong to? In an $N \times M$ array multiplier we have $N \times M$ AND gates and $(M-1)$ $N$-bit adders are used. Refer to page 16 of this article. According to this, will it take $O(n)$ at one level of computation? This may help: in the last 10 minutes of the video, the time complexity is explained! Can you please elaborate on the explanation? Which of the following sets of component(s) is/are sufficient to implement any arbitrary Boolean function? (a) XOR gates, NOT gates; (b) $2$ to $1$ multiplexers; (c) AND gates, XOR gates; (d) three-input gates that output $(A.B) + C$ for the inputs $A, B$ and $C$.
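One way to reason about options like the 2-to-1 multiplexer is Shannon expansion: any function $f(x_1,\ldots,x_n)$ splits as $f = \bar{x}_1\, f|_{x_1=0} + x_1\, f|_{x_1=1}$, which is exactly what a mux computes with $x_1$ on its select line. The Python sketch below is my own illustration (it assumes constant 0/1 inputs are available at the leaves) of building an arbitrary truth table out of nothing but 2-to-1 muxes.

```python
def mux(sel, a, b):
    """2-to-1 multiplexer: returns a when sel is 0, b when sel is 1."""
    return b if sel else a

def eval_with_muxes(truth_table, inputs):
    """Evaluate an n-input Boolean function given as a truth table,
    using only 2-to-1 muxes via recursive Shannon expansion."""
    if not inputs:                     # no variables left: a constant leaf (0 or 1)
        return truth_table[0]
    half = len(truth_table) // 2
    lo = eval_with_muxes(truth_table[:half], inputs[1:])   # cofactor with first input = 0
    hi = eval_with_muxes(truth_table[half:], inputs[1:])   # cofactor with first input = 1
    return mux(inputs[0], lo, hi)

# Example: 3-input majority function, truth table ordered by (x1, x2, x3).
maj = [0, 0, 0, 1, 0, 1, 1, 1]
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            assert eval_with_muxes(maj, [x1, x2, x3]) == maj[4 * x1 + 2 * x2 + x3]
print("majority function reproduced with 2-to-1 muxes only")
```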
CommonCrawl
In this paper, we introduce the concept of IVF weakly continuity and investigate some characterizations for IVF weakly continuous mappings on the interval-valued fuzzy topological spaces. We introduce and study the concepts of almost IVF compactness and nearly IVF compactness.
CommonCrawl
The thermodynamic properties of single crystal Niobium are presented. Anomalies in thermal expansion, specific heat, elastic constants, and electrical resistivity are observed. The linear coefficient of thermal expansion, $\alpha$, exhibits a large, broad peak in the range 200 K $<$ T $<$ 280 K, with a nearly two-fold increase in $\alpha$. The elastic constants show anomalies over a similar temperature range, while anomalies in heat capacity and resistivity are much narrower. This is surprising since crystalline Nb is a simple system, with only one naturally occurring isotope and a body centered cubic structure. Measurements on a second single crystal and on high purity polycrystalline Nb will also be presented. *Work at MSU was supported by the US DOE Office of Basic Energy Sciences (DE-FG-06ER46269). Work at LANL was conducted under the auspices of the US DOE Office of Basic Energy Sciences.
CommonCrawl
Abstract: We report the generation of a macroscopic singlet state in a cold atomic sample via quantum non-demolition (QND) measurement induced spin squeezing. We observe 3 dB of spin squeezing and detect entanglement of up to $5.5\times10^5 $ atoms with $5\sigma$ statistical significance using a generalized spin squeezing inequality. The degree of squeezing implies at least 50% of the atoms have formed singlets, while the response to a magnetic field gradient indicates entanglement bonds at all length scales, a characteristic of quantum spin liquids.
CommonCrawl
What I am expecting is that the first four values of $y$ would be 1, 2, 3, 4, since I've set the first four entries of innov to zero. Can someone help me with this problem? Thanks in advance. This gives the solution: $y_1=-7.086890,\, y_2=-2.607843,\, y_3=-3.254902\;\,$ and $\,y_4=-3.901961$. The vector $(1, 1.33, 1.66, 1.99)$ is obtained as follows: the first element is $y_5$, for which we want the value $1$ ($\epsilon_5$ is set to zero); the second element is $-0.51y_2 = y_6 - 0.67y_5 = 2 - 0.67\times1 = 1.33$ (the desired value for $y_6$ is $2$ and $\epsilon_6$ is set to zero); the third element is $-0.51y_3 = y_7 - 0.67y_6 = 3-0.67\times 2 = 1.66$; and from the last equation $-0.51y_4 = y_8 - 0.67y_7 = 4 - 0.67\times 3 = 1.99$. After the auxiliary observations $y_1$ to $y_4$ that were found above, the series continues with the desired values $1, 2, 3, 4$.
CommonCrawl
Abstract: We consider a set of transmitters broadcasting simultaneously on the same frequency under the SINR model. Transmission power may vary from one transmitter to another, and a signal's strength decreases (path loss or path attenuation) by some constant power $\alpha$ of the distance traveled. Roughly, a receiver at a given location can hear a specific transmitter only if the transmitter's signal is stronger than the signal of all other transmitters, combined. An SINR query is to determine whether a receiver at a given location can hear any transmitter, and if yes, which one. An approximate answer to an SINR query is such that one gets a definite yes or definite no, when the ratio between the strongest signal and all other signals combined is well above or well below the reception threshold, while the answer in the intermediate range is allowed to be either yes or no. We describe several compact data structures that support approximate SINR queries in the plane in a dynamic context, i.e., where both queries and updates (insertion or deletion of a transmitter) can be performed efficiently.
CommonCrawl
We introduce smooth entropy as a measure for the number of almost uniform random bits that can be extracted from a source by probabilistic algorithms. The extraction process should be universal in the sense that it does not require the distribution of the source to be known. Rather, it should work for all sources with a certain structural property, such as a bound on the maximal probability of any value. The concept of smooth entropy unifies previous work on privacy amplification and entropy smoothing in pseudorandom generation. It enables us to systematically investigate the spoiling knowledge proof technique to obtain lower bounds on smooth entropy and to show new connections to Rényi entropy of order $\alpha > 1$.
CommonCrawl
GPUs are designed to work well on structured data. Parallelizing graph algorithms on GPUs is challenging due to the irregular memory accesses involved in graph traversals. In particular, three important GPU-specific aspects affect performance: coalescing, memory latency, and thread-divergence. In this work, we tame these challenges by injecting approximations. In particular, we improve coalescing by renumbering and replicating nodes, memory latency by adding edges among specific nodes brought into shared memory, and thread-divergence by normalizing degrees across nodes assigned to a warp. Using a suite of graphs with varied characteristics and five popular algorithms, we demonstrate the effectiveness of our proposed techniques. Our approximations for coalescing, memory latency and thread-divergence lead to mean speedups of 1.3$\times$, 1.41$\times$ and 1.06$\times$ achieving accuracies of 83%, 78% and 84%, respectively.
CommonCrawl
For a finite group $H$, let $cs(H)$ denote the set of non-trivial conjugacy class sizes of $H$ and $OC(H)$ be the set of the order components of $H$. In this paper, we show that if $S$ is a finite simple group with the disconnected prime graph and $G$ is a finite group such that $cs(S)=cs(G)$, then $|S|=|G/Z(G)|$ and $OC(S)=OC(G/Z(G))$. In particular, we show that for some finite simple group $S$, $G \cong S \times Z(G)$.
CommonCrawl
Definition and how to solve linear equations. The expression: is an equation. That is, an equality that is satisfied for some value of the unknown. The left side of the equality is named the first member of the equation and the right one, the second member. In this equality there are known numbers and one unknown number. These are the terms of the equation. Basic concept of the Transposing Method (Shortcut) in solving linear equations: "When a term moves (transposes) to the other side of the equation, its operation changes to the inverse operation". Solving formulas, or literal equations, is just another way of saying take an equation with lots of letters and solve it for one letter in particular, as Purple Math so nicely states. Together we are going to work through a ton of examples, to ensure mastery and understanding of solving literal equations. The method for solving linear differential equations is similar to the method above: the "intelligent guess" for linear differential equations with constant coefficients is $e^{\lambda x}$, where $\lambda$ is a complex number that is determined by substituting the guess into the differential equation. For example, "4x+8=29" is a linear equation, but "5x^4=100" is not a linear equation, because the variable "x" is raised to the power of "4". Just remember these three rules of solving linear equations, and you'll be successful every time.
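To make the transposing idea concrete, here is a small Python/SymPy check using the example equation $4x+8=29$ quoted above: moving the $+8$ across as $-8$ and then dividing by $4$ gives the same answer the solver reports. The snippet is only a sketch; SymPy is an assumed tool choice, not something the text prescribes.

```python
from sympy import symbols, Eq, solve, Rational

x = symbols('x')

# The example linear equation from the text: 4x + 8 = 29
equation = Eq(4 * x + 8, 29)

# Transposing by hand: 4x = 29 - 8, so x = 21/4
by_hand = Rational(29 - 8, 4)

print(solve(equation, x))   # [21/4]
print(by_hand)              # 21/4
```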
CommonCrawl
I have a set of data points $f(x_i)$ for some instances $x_i$, $i\leq N$. Also, I was able to fit the curve asymptotically due to prior knowledge, i.e. I was able to fit the data to functional forms $f(x\ll x_0), f(x\gg x_0)$. How can I proceed in order to connect the two regions smoothly to fit the entire data points to a curve? Are there techniques to do this given $x_i, f(x_i), f(x\ll x_0), f(x\gg x_0)$?
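One common trick, offered here only as a sketch of an assumed approach rather than a canonical answer, is to blend the two asymptotic forms with a smooth switching function, $f(x) \approx w(x)\,f_{\mathrm{low}}(x) + \bigl(1-w(x)\bigr)\,f_{\mathrm{high}}(x)$ with a logistic weight $w(x) = 1/\bigl(1+e^{(x-x_0)/s}\bigr)$, and then fit the crossover location $x_0$ and width $s$ to the data. The asymptotic forms in the Python sketch below are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder asymptotic forms (assumptions; replace with your own fitted f_low, f_high).
def f_low(x):   return 2.0 * x            # behaviour for x << x0
def f_high(x):  return 3.0 * np.sqrt(x)   # behaviour for x >> x0

def blended(x, x0, s):
    w = 1.0 / (1.0 + np.exp((x - x0) / s))       # smooth switch from 1 to 0 around x0
    return w * f_low(x) + (1.0 - w) * f_high(x)

# Synthetic data standing in for the measured f(x_i).
x_data = np.linspace(0.1, 10.0, 200)
y_data = blended(x_data, 2.5, 0.8) + 0.05 * np.random.randn(x_data.size)

popt, _ = curve_fit(blended, x_data, y_data, p0=[1.0, 1.0])
print("fitted crossover x0 and width s:", popt)
```

The design choice here is that the fitted curve automatically reproduces both asymptotes far from $x_0$, so the only free parameters describe the crossover region itself.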
CommonCrawl
We will now look at the algorithm for the fixed point method in approximating a root of a function. Obtain a function $f$ in the appropriate form ($f(x) = 0 \Leftrightarrow x = g(x)$) and assume that a root $\alpha$ exists. Obtain an initial approximation $x_0$, a maximum number of iterations, and an error tolerance $\epsilon$. For $n = 1, 2, \ldots$, compute the successive approximation $x_n = g(x_{n-1})$ and check whether $|x_n - x_{n-1}| \leq \epsilon$. If the above inequality is true, then stop. $x_n$ is a good approximation of the root $\alpha$. If the inequality above is false, then continue to compute successive approximations until the maximum number of iterations is reached. If the maximum number of iterations is reached and the error tolerance $\epsilon$ is not obtained, then print out a failure message.
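The steps above translate almost line for line into code. The following is a minimal Python sketch of the procedure, using an example $g$ of my own choosing ($g(x) = \cos x$, whose fixed point is the root of $f(x) = \cos x - x$).

```python
import math

def fixed_point(g, x0, eps=1e-8, max_iter=100):
    """Fixed point iteration x_n = g(x_{n-1}), stopping when |x_n - x_{n-1}| <= eps."""
    x_prev = x0
    for n in range(1, max_iter + 1):
        x_n = g(x_prev)
        if abs(x_n - x_prev) <= eps:
            return x_n, n              # x_n is a good approximation of the root alpha
        x_prev = x_n
    raise RuntimeError("failure: tolerance not reached within the maximum number of iterations")

# Example: f(x) = cos(x) - x = 0 rewritten as x = g(x) = cos(x).
root, iterations = fixed_point(math.cos, x0=1.0)
print(root, iterations)   # about 0.7390851, after a few dozen iterations
```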
CommonCrawl
Turns out, that if you're standing at that point on the hill, you look all around and you find that the best direction to take a little step downhill is roughly that direction. Okay, and now you're at this new point on your hill. You're gonna, again, look all around and say what direction should I step in order to take a little baby step downhill? And if you do that and take another step, you take a step in that direction. We use := to denote assignment, so it's the assignment operator. $\alpha$ is called the learning rate; it controls how big a step we take downhill with gradient descent. where $m$ is the size of the training set, $\theta_0$ is a constant that will be changing simultaneously with $\theta_1$, and $x_i, y_i$ are values of the given training set (data).
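As a concrete companion to the update rule described above, here is a small Python sketch of simultaneous gradient descent updates for $\theta_0$ and $\theta_1$ in univariate linear regression. The sample data, learning rate, and iteration count are all assumptions made for the illustration.

```python
def gradient_descent(xs, ys, alpha=0.1, iterations=1000):
    """Batch gradient descent for the hypothesis h(x) = theta0 + theta1 * x."""
    m = len(xs)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        # Simultaneous update: both gradients were computed with the old theta values.
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Toy data generated by y = 1 + 2x (an assumption for the example).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
print(gradient_descent(xs, ys))   # approaches (1.0, 2.0)
```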
CommonCrawl
Your task is to count the number of one bits in integers between $1 \ldots n$. The only input line has an integer $n$. Print the number of one bits in integers between $1 \ldots n$. Explanation: The bit representations of $1 \ldots 7$ are 1, 10, 11, 100, 101, 110, and 111, so there are a total of 12 one bits.
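A direct way to solve this, sketched below in Python as my own illustration since the problem gives no reference solution, is to count for each bit position how many of the numbers $1 \ldots n$ have that bit set; this avoids looping over every number when $n$ is large.

```python
def count_one_bits(n: int) -> int:
    """Total number of one bits in the binary representations of 1..n."""
    total = 0
    bit = 0
    while (1 << bit) <= n:
        period = 1 << (bit + 1)                   # the pattern of this bit repeats with this period
        full_cycles = (n + 1) // period
        total += full_cycles * (1 << bit)         # each full cycle contributes 2^bit ones
        remainder = (n + 1) % period
        total += max(0, remainder - (1 << bit))   # partial cycle at the end
        bit += 1
    return total

print(count_one_bits(7))   # 12, matching the explanation above
```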
CommonCrawl
Multichannel blind deconvolution is the problem of recovering an unknown signal $f$ and multiple unknown channels $x_i$ from convolutional measurements $y_i=x_i \circledast f$ ($i=1,2,\dots,N$). We consider the case where the $x_i$'s are sparse, and convolution with $f$ is invertible. Our nonconvex optimization formulation solves for a filter $h$ on the unit sphere that produces sparse output $y_i\circledast h$. Under some technical assumptions, we show that all local minima of the objective function correspond to the inverse filter of $f$ up to an inherent sign and shift ambiguity, and all saddle points have strictly negative curvatures. This geometric structure allows successful recovery of $f$ and $x_i$ using a simple manifold gradient descent algorithm with random initialization. Our theoretical findings are complemented by numerical experiments, which demonstrate superior performance of the proposed approach over the previous methods.
CommonCrawl
Is the following Boolean expression satisfiable? Justify your answer. To be satisfiable, $x_1$ must be true, otherwise (2) is false and its AND would falsify the entire expression. If $x_1$ is true, then $x_3$ must be true to make (6) true, because $x_1$ being true makes (5) false, and (6) must be true for the ((5) OR (6)) sub-expression to be true. Given that $x_3$ must be true, $x_2$ must be false, or (4) would be false, rendering the entire expression false. Having established that $x_3$ must be true for the entire expression to be true, (3) will be true when $x_3$ is true. Therefore the above expression is satisfiable. Here is my impression on satisfiability. If you can find a single combination of true/false values that satisfies the Boolean expression, then the expression is satisfiable. You only need one combination for the expression to be satisfiable. In order to prove that a Boolean expression is unsatisfiable, you have to show that the expression is false for all possible true/false combinations. Actually, that's the first way I read it and then I convinced myself I was wrong. I think you're right Jason. You do only need to find one that works for satisfiability. Please disregard my previous post.
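Since satisfiability only requires exhibiting one satisfying assignment, a small brute-force check is often the quickest sanity test for expressions over a handful of variables. The sketch below uses a made-up three-variable expression purely as an illustration; the clause numbering in the discussion above refers to an expression not reproduced here, so this is not the exercise's formula.

```python
from itertools import product

def is_satisfiable(expr, num_vars):
    """Try every True/False combination; return a satisfying assignment or None."""
    for assignment in product([False, True], repeat=num_vars):
        if expr(*assignment):
            return assignment
    return None

# Illustrative expression (an assumption, not the exercise's):
# (x1 OR x2) AND (NOT x2 OR x3) AND (x1 OR NOT x3)
expr = lambda x1, x2, x3: (x1 or x2) and ((not x2) or x3) and (x1 or (not x3))

print(is_satisfiable(expr, 3))   # e.g. (True, False, False) satisfies it
```

Proving unsatisfiability this way means the loop finishes without returning an assignment, which mirrors the point made above: one witness suffices for "satisfiable", while "unsatisfiable" requires checking every combination.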
CommonCrawl
A matrix whose elements are all arranged in a single column is called a column matrix. A column matrix is a type of matrix, and it is also called a column vector. All elements in this type of matrix are arranged in different rows but in only one column. $M$ is a column matrix in general form and it is known as a column matrix of order $m \times 1$. The column matrix can be expressed in simple form. Each element in this matrix is arranged in its own row but in only one column. Therefore, the column index $j = 1$, and the number of columns is one. The general form column matrix can be expressed as follows. Observe the following examples to understand how elements are arranged in column matrices. $A$ is a column matrix of order $1 \times 1$. In this column matrix, the only element is displayed in one row and one column. $B$ is a column matrix of order $2 \times 1$, and in this matrix the two elements are arranged in two rows and one column. $C$ is a column matrix of order $3 \times 1$. The three elements are arranged in the matrix in three rows and one column. $D$ is a column matrix of order $4 \times 1$. The four elements are arranged in the matrix in four rows and one column. A column vector can have any number of elements, but all the elements are arranged in separate rows and only one column.
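In numerical code the same distinction shows up in array shapes. The short NumPy sketch below, with example values of my own choosing, builds a $3 \times 1$ column matrix and contrasts its shape with a flat one-dimensional array.

```python
import numpy as np

# A column matrix of order 3 x 1: three rows, one column.
C = np.array([[4],
              [7],
              [2]])
print(C.shape)        # (3, 1)

# A flat 1-D array has no row/column orientation by itself.
v = np.array([4, 7, 2])
print(v.shape)        # (3,)

# reshape(-1, 1) turns a flat array into a column matrix.
print(v.reshape(-1, 1).shape)   # (3, 1)
```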
CommonCrawl
You have a standard chessboard ($8\times8$), and an unlimited supply of numbers from the set $\{-1, 0, 1\}$. You have to place one number in each square so that the sums of each row, each column, and the two diagonals are all different. To clarify, all of these sums must be unique, regardless of direction (not just different among rows or among columns). If you can do that, what's the strategy?
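Whatever filling you attempt, it can be verified mechanically: compute the 8 row sums, 8 column sums, and 2 diagonal sums, and check that all 18 are distinct. A small Python checker along those lines (it only verifies a candidate board, it does not construct one):

```python
import numpy as np

def all_sums_distinct(board):
    """board: 8x8 array with entries in {-1, 0, 1}.
    Returns True if the 8 row sums, 8 column sums and 2 diagonal sums are all different."""
    board = np.asarray(board)
    sums = list(board.sum(axis=1)) + list(board.sum(axis=0))
    sums.append(np.trace(board))                  # main diagonal
    sums.append(np.trace(np.fliplr(board)))       # anti-diagonal
    return len(set(sums)) == len(sums)

# Example: the all-zero board obviously fails (every one of the 18 sums is 0).
print(all_sums_distinct(np.zeros((8, 8), dtype=int)))   # False
```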
CommonCrawl
The evolution of a large-scale poloidal magnetic field in an accretion disc is an important problem because it determines the launching of winds and the feasibility of the magnetorotational instability to generate turbulence or channel flows. Recent studies, both semi-analytic models and numerical simulations, have highlighted the crucial role that non-ideal MHD effects (Ohmic resistivity, Hall drift and ambipolar diffusion), relevant in the protoplanetary disc context, might play in magnetic flux evolution in the disc. In some cases these magnetic effects led to the formation of large-scale structures (rings and gaps), which may be relevant for planet formation theory. We investigated flux transport in discs through one-dimensional semi-analytic models in the vertical direction, exploring regimes where different physical effects dominate. Flux transport rates and vertical structure profiles are calculated for a range of diffusivities and disc magnetisations. We find, in agreement with previous studies, that Ohmic and ambipolar diffusivities drive radially outward flux transport with an inclined field, while a wind would drive inward transport. The Hall effect offers a corrective contribution to the flux transport given a background Ohmic and/or ambipolar diffusivity, and drives no flux transport when it is the only non-ideal effect present. We report the surprising finding of a non-zero laminar $\alpha$ in the vertical structures of our models in the absence of wind and viscosity, suggesting that diffusivities play a role in disc accretion as well. Future plans for further semi-analytic work and shearing box simulations will be presented.
CommonCrawl
Yves Achdou, Mathieu Laurière. On the system of partial differential equations arising in mean field type control. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 3879-3900. doi: 10.3934/dcds.2015.35.3879. Marianne Akian, Stéphane Gaubert, Antoine Hochart. Ergodicity conditions for zero-sum games. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 3901-3931. doi: 10.3934/dcds.2015.35.3901. Mohamed Assellaou, Olivier Bokanowski, Hasnaa Zidani. Error estimates for second order Hamilton-Jacobi-Bellman equations. Approximation of probabilistic reachable sets. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 3933-3964. doi: 10.3934/dcds.2015.35.3933. Martino Bardi, Annalisa Cesaroni, Daria Ghilli. Large deviations for some fast stochastic volatility models by viscosity methods. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 3965-3988. doi: 10.3934/dcds.2015.35.3965. Piernicola Bettiol. State constrained $L^\infty$ optimal control problems interpreted as differential games. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 3989-4017. doi: 10.3934/dcds.2015.35.3989. Jóhann Björnsson, Peter Giesl, Sigurdur F. Hafstein, Christopher M. Kellett. Computation of Lyapunov functions for systems with multiple local attractors. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4019-4039. doi: 10.3934/dcds.2015.35.4019. Olivier Bokanowski, Maurizio Falcone, Roberto Ferretti, Lars Grüne, Dante Kalise, Hasnaa Zidani. Value iteration convergence of $\epsilon$-monotone schemes for stationary Hamilton-Jacobi equations. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4041-4070. doi: 10.3934/dcds.2015.35.4041. Mattia Bongini, Massimo Fornasier, Dante Kalise. (Un)conditional consensus emergence under perturbed and decentralized feedback controls. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4071-4094. doi: 10.3934/dcds.2015.35.4071. Bernard Bonnard, Thierry Combot, Lionel Jassionnesse. Integrability methods in the time minimal coherence transfer for Ising chains of three spins. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4095-4114. doi: 10.3934/dcds.2015.35.4095. Ugo Boscain, Gregoire Charlot, Moussa Gaye, Paolo Mason. Local properties of almost-Riemannian structures in dimension 3. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4115-4147. doi: 10.3934/dcds.2015.35.4115. Alberto Bressan, Fang Yu. Continuous Riemann solvers for traffic flow at a junction. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4149-4171. doi: 10.3934/dcds.2015.35.4149. Fabio Camilli, Elisabetta Carlini, Claudio Marchi. A model problem for Mean Field Games on networks. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4173-4192. doi: 10.3934/dcds.2015.35.4173. Cédric M. Campos, Sina Ober-Blöbaum, Emmanuel Trélat. High order variational integrators in the optimal control of mechanical systems. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4193-4223. doi: 10.3934/dcds.2015.35.4193. Piermarco Cannarsa, Marco Mazzola, Carlo Sinestrari. Global propagation of singularities for time dependent Hamilton-Jacobi equations. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4225-4239. doi: 10.3934/dcds.2015.35.4225. Marco Caponigro, Anna Chiara Lai, Benedetto Piccoli. A nonlinear model of opinion formation on the sphere. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4241-4268. doi: 10.3934/dcds.2015.35.4241. Elisabetta Carlini, Francisco J. Silva.
A semi-Lagrangian scheme for a degenerate second order mean field game system. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4269-4292. doi: 10.3934/dcds.2015.35.4269. Giovanni Colombo, Thuy T. T. Le. Higher order discrete controllability and the approximation of the minimum time function. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4293-4322. doi: 10.3934/dcds.2015.35.4293. Andrei V. Dmitruk, Nikolai P. Osmolovskii. Necessary conditions for a weak minimum in optimal control problems with integral equations on a variable time interval. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4323-4343. doi: 10.3934/dcds.2015.35.4323. Ermal Feleqi, Franco Rampazzo. Integral representations for bracket-generating multi-flows. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4345-4366. doi: 10.3934/dcds.2015.35.4345. Elena Goncharova, Maxim Staritsyn. Optimal control of dynamical systems with polynomial impulses. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4367-4384. doi: 10.3934/dcds.2015.35.4367. Lars Grüne, Vryan Gil Palma. Robustness of performance and stability for multistep and updated multistep MPC schemes. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4385-4414. doi: 10.3934/dcds.2015.35.4385. Cristopher Hermosilla. Stratified discontinuous differential equations and sufficient conditions for robustness. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4415-4437. doi: 10.3934/dcds.2015.35.4415. Oliver Junge, Alex Schreiber. Dynamic programming using radial basis functions. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4439-4453. doi: 10.3934/dcds.2015.35.4439. Robert J. Kipka, Yuri S. Ledyaev. Optimal control of differential inclusions on manifolds. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4455-4475. doi: 10.3934/dcds.2015.35.4455. Karl Kunisch, Markus Müller. Uniform convergence of the POD method and applications to optimal control. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4477-4501. doi: 10.3934/dcds.2015.35.4477. Helmut Maurer, Willi Semmler. Expediting the transition from non-renewable to renewable energy via optimal control. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4503-4525. doi: 10.3934/dcds.2015.35.4503. Monica Motta, Caterina Sartori. Asymptotic problems in optimal control with a vanishing Lagrangian and unbounded data. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4527-4552. doi: 10.3934/dcds.2015.35.4527. Luís Tiago Paiva, Fernando A. C. C. Fontes. Adaptive time-mesh refinement in optimal control problems with state constraints. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4553-4572. doi: 10.3934/dcds.2015.35.4553. Michele Palladino, Richard B. Vinter. When are minimizing controls also minimizing relaxed controls?. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4573-4592. doi: 10.3934/dcds.2015.35.4573. Simão P. S. Santos, Natália Martins, Delfim F. M. Torres. Variational problems of Herglotz type with time delay: DuBois-Reymond condition and Noether's first theorem. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4593-4610. doi: 10.3934/dcds.2015.35.4593. Heinz Schättler, Urszula Ledzewicz. Fields of extremals and sensitivity analysis for multi-input bilinear optimal control problems. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4611-4638. doi: 10.3934/dcds.2015.35.4611.
Cristiana J. Silva, Delfim F. M. Torres. A TB-HIV/AIDS coinfection model and optimal control treatment. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4639-4663. doi: 10.3934/dcds.2015.35.4639. Aleksandar Zatezalo, Dušan M. Stipanović. Control of dynamical systems with discrete and uncertain observations. Discrete & Continuous Dynamical Systems - A, 2015, 35(9): 4665-4681. doi: 10.3934/dcds.2015.35.4665.
CommonCrawl
The superalgebra tag has no usage guidance. Can you quantize Grassmann-even superfields in the same fashion as Boson fields? How are supersymmetry transformations even defined? Why do we want supersymmetry transformations to form a group? How to prove the translation generator commutes with the spinors in SUSY algebra? How to change a commutator of SUSY super-charges into an anti-commutator? Is there a type of supersymmetry where supercharges have spin 3/2? Could you get real space from Grassmann numbers? Cauchy-Schwarz inequality for Grassmann Integrals? How to construct a supersymmetry algebra? Can Grassmann-number variations of operators be represented by operators? What has "quantisation" to do with associated graded algebras? Are Fock spaces just a special type of tensor algebra? What are anticommuting spinor parameters $\zeta^\alpha$? Why $\delta F = B\epsilon$ and not $F=B \epsilon$ in supersymmetry? Where does the "Supersymmetry" in Witten's proof of the Morse inequalities come from? Do the Grassmann coordinates in the superfield formalism have any physical meaning? Why must the supersymmetry generators be spinors? What mathematical structure describes superspace and superfields? Is it possible to write the fermionic quantum harmonic oscillator using $P$ and $X$? Under what cases is the Batalin-Vilkovisky (BV) operator nilpotent? Why the bosonic part of the superconformal group $SU(2,2|1)$ is $SO(4,2) \times U(1)_R$? Does the commutator of anything with itself not vanish?
CommonCrawl
How do you measure the central charge of a system numerically? I was asked by my numerical calculus teacher (undergraduate course) to solve for roots of a transcendental or a non-linear equation using some numerical method. The problem is that this equation has to be related to physics, and I don't know any equation of this kind in physics (unfortunately, I don't have deep knowledge of physics yet); actually, the ones I know involve differential and integral calculus, so they would require more advanced methods that I don't know yet. So, could you provide me with an equation of this kind? You don't have to explain the physics behind it if you don't want to; I just need the equation and its name (or something that specifies it) - the theory behind it I can search for myself. Just some points concerning this problem: the equation should not be a differential or an integral equation; and it should be (please) an easy one (especially, a one-variable one), because I will have to apply the numerical method both with and without a computer (to compare the results), which means that a hard equation could lead to a problem that I could not answer without a computer. I can add the simplest example: $$\sin(kx_1)=\sin(k(L-x_1))\qquad (1).$$ The first sine is a solution that vanishes at $x=0$; the second one vanishes at $x=L$. This is a solution for an elastic guitar string fixed at $x=0$ and $x=L$. At $x=x_1$ we require continuity of the total solution; hence equation (1). You must find the possible values of $k_n$ that satisfy this equation. Note that the possible discrete values of $k$ will not depend on $x_1$, since I proposed the case of a uniform string. Still, it is interesting to verify that this is so and to compare with the analytical solutions $\sin(k_n x)$, where $k_n=\pi\cdot n/L$, $n=1, 2, 3,\ldots$. If you plot this for different temperatures, you'll see exactly how the phase transition happens. It has one solution for high temperatures but three (with one solution unstable) for low temperatures. OK, in electricity I can come up with a diode (Shockley diode equation) connected to a generator: $(v - V) = r\, i_0 \left(e^{V/V_0} - 1\right)$.
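As an illustration of how little machinery such an equation needs, here is a bisection solver applied to the diode equation above; the parameter values are invented for the example and are not from the original post:

```python
import math

def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b] by bisection, assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# (v - V) = r * i0 * (exp(V/V0) - 1), solved for the diode voltage V.
# Parameter values are illustrative only.
v, r, i0, V0 = 5.0, 1000.0, 1e-9, 0.025
f = lambda V: (v - V) - r * i0 * (math.exp(V / V0) - 1.0)
V_root = bisect(f, 0.0, v)
print(V_root)   # roughly 0.38 V for these made-up numbers
```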
CommonCrawl
Farmer John has installed a new system of $N-1$ pipes to transport milk between the $N$ stalls in his barn ($2 \leq N \leq 50,000$), conveniently numbered $1 \ldots N$. Each pipe connects a pair of stalls, and all stalls are connected to each other via paths of pipes. Milk is being pumped between $K$ pairs of stalls; for the $i$-th such pair it flows from stall $s_i$ to stall $t_i$, passing through $s_i$ and $t_i$, as well as through every stall along the path between them. The first line of the input contains $N$ and $K$. Each of the next $N-1$ lines describes a pipe between stalls $x$ and $y$. Each of the final $K$ lines gives the endpoint stalls of a path through which milk is being pumped.
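The full task statement is truncated above, but the structure — a tree of stalls with milk pumped along paths — already suggests a straightforward, if slow, way to count how many pumped paths pass through each stall: walk each path explicitly. Faster approaches typically combine lowest-common-ancestor computations with difference counting on the tree, but the naive Python sketch below (function and variable names are mine) is enough to illustrate the setup for small inputs:

```python
from collections import deque

def stall_traffic(n, pipes, paths):
    """pipes: list of (x, y) edges of the stall tree; paths: list of (s, t) pairs.
    Returns traffic[v] = number of paths passing through stall v (1-indexed).
    Naive O(K * N) walk along each path — only meant to illustrate the setup."""
    adj = [[] for _ in range(n + 1)]
    for x, y in pipes:
        adj[x].append(y)
        adj[y].append(x)

    parent = [0] * (n + 1)
    depth = [0] * (n + 1)
    seen = [False] * (n + 1)
    seen[1] = True
    order = deque([1])
    while order:                      # BFS from stall 1 to get parents and depths
        u = order.popleft()
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w], depth[w] = u, depth[u] + 1
                order.append(w)

    traffic = [0] * (n + 1)
    for s, t in paths:
        while s != t:                 # climb the deeper endpoint until the walks meet
            if depth[s] < depth[t]:
                s, t = t, s
            traffic[s] += 1
            s = parent[s]
        traffic[s] += 1               # the meeting stall is also on the path
    return traffic

# Example: a path 1-2-3 with milk pumped from 1 to 3 and from 2 to 3.
print(stall_traffic(3, [(1, 2), (2, 3)], [(1, 3), (2, 3)]))  # [0, 1, 2, 2]
```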
CommonCrawl
for events the day of Thursday, September 6, 2018. Abstract: Nakada's $\alpha$-expansions interpolate between three classical continued fractions: regular (obtained at $\alpha=1$), Hurwitz singular (obtained at $\alpha$=little golden mean), and nearest integer (obtained at $\alpha$=1/2). This talk will consider $\alpha$-expansions in the situation where all partial quotients are asked to be odd positive integers. We will describe the natural extension of the underlying Gauss map and the ergodic properties of these transformations. This is joint work with Claire Merriman. Abstract: We will read through this paper. Abstract: I will give a survey talk on primes in arithmetic progressions. The talk should be accessible to any graduate student, number theorist or not. Abstract: The past several years have seen a flurry of activity in the physics of scattering amplitudes, in part motivated by a new geometric approach to the problem, which finds the solution encoded in a geometrical object, most famously in the amplituhedron of Arkani-Hamed and Trnka for N=4 super Yang-Mills. I will discuss a version of this approach for a simpler quantum field theory (biadjoint scalar $\varphi^3$ theory), where the geometrical object encoding the answer at tree level has recently been shown by Arkani-Hamed et al. to be an associahedron, a polytope originally defined by Jim Stasheff in the context of homotopy theory, and now well-known thanks to its connection to type $A_n$ cluster algebras. In recent work with my students Bazier-Matte, Chapelier, Douville, Mousavand, and former student Yıldırım, we showed that the construction of the associahedron developed by Arkani-Hamed et al. for their purposes is also applicable to other finite type cluster algebras, yielding simple constructions both of generalized associahedra, and, unexpectedly, of the Newton polytopes of the cluster variables. Time permitting, I will discuss the possibility (which we are investigating with Arkani-Hamed) that this construction in other types also has a physical interpretation.
CommonCrawl
Abstract: A simplified Gyunaydin–Gyursey model, in which a Majorana field constructed using quaternions combines a lepton and a color quark, is considered. Formulation of the gauge principle directly in the quaternions leads to the appearance of two vector quaternion gauge fields, these corresponding to the decomposition $SO(4)\simeq SO(3)\times SO(3)$ of the invariance group. The diagonal subgroup $SO(3)$ of automorphisms of the quaternions appears as a pseudocolor symmetry of the quarks, and the gauge field corresponding to it as the field of three color gluons. The other gauge field corresponds to lepton-quark transitions and in the presence of spontaneous breaking of the $SO(4)$ gauge symmetry by the scalar quaternion field acquires a (large) finite mass.
CommonCrawl
Click the License Report tab. From the Select Deployments list, select a deployment (for example, BMC Remedy AR System Server (ARS). To select multiple deployments, use your CTRL and Shift keys while selecting deployments. To add more deployments to the list, see Adding deployments. From the Duration list, select a time (for example, Last Three Months). To select a custom time, select Custom, then double-click to select the start date from the "From" Calendar icon and the end date from the "To" Calendar icon. The Duration option is only applicable to ARS and BSA 8.7 to 8.9 products, as noted in the tooltip on the Information icon. The License Report Run Notes box displays the real-time status messages while the report is running. "Process Finished" will appear once completed. To view the report, in the Previous License Report Details section, in the License Report column, click the Click Here link. The selected report opens in a CSV format for each product. Each time reports are run, they are stored in the user\Reports folder where the License Utility is installed ($LicenseUsageCollectorHome\licenseusagecollector\user\Reports\timestamp folder). For BSA 8.2 to 8.6, the reports are stored in user\Reports\date and time stamp\BSA.
CommonCrawl