Farmer John and his cows are planning to leave town for a long vacation, and so FJ wants to temporarily close down his farm to save money in the meantime. The farm consists of $N$ barns connected with $M$ bidirectional paths between some pairs of barns ($1 \leq N, M \leq 3000$). To shut the farm down, FJ plans to close one barn at a time. When a barn closes, all paths adjacent to that barn also close, and can no longer be used. FJ is interested in knowing at each point in time (initially, and after each closing) whether his farm is "fully connected" -- meaning that it is possible to travel from any open barn to any other open barn along an appropriate series of paths. Since FJ's farm is initially somewhat in a state of disrepair, it may not even start out fully connected. The first line of input contains $N$ and $M$. The next $M$ lines each describe a path in terms of the pair of barns it connects (barns are conveniently numbered $1 \ldots N$). The final $N$ lines give a permutation of $1 \ldots N$ describing the order in which the barns will be closed. The output consists of $N$ lines, each containing "YES" or "NO". The first line indicates whether the initial farm is fully connected, and line $i+1$ indicates whether the farm is fully connected after the $i$th closing.
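A common way to solve this (a sketch, not part of the original problem statement; the helper names are mine): process the closings in reverse, re-opening barns one at a time and maintaining connected components with a disjoint-set union, so each query reduces to asking whether the open barns form exactly one component.

```python
# Sketch: answer connectivity queries offline by re-opening barns in
# reverse closing order and merging components with a disjoint-set union.
def closing_the_farm(n, edges, order):
    parent = list(range(n + 1))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    is_open = [False] * (n + 1)
    components = 0
    answers = []
    for barn in reversed(order):            # re-open barns in reverse
        is_open[barn] = True
        components += 1
        for nb in adj[barn]:
            if is_open[nb]:
                ra, rb = find(barn), find(nb)
                if ra != rb:
                    parent[ra] = rb
                    components -= 1
        answers.append("YES" if components == 1 else "NO")
    # answers[-1] describes the fully open farm, so reverse the list to get
    # the chronological order: initial farm first, then after each closing.
    return answers[::-1]
```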
CommonCrawl
Like many other mathematicians I use mathematical software like SAGE, GAP, Polymake, and of course $\LaTeX$ extensively. When I chat with colleagues about such software tools, very often someone has an idea of how to extend an existing tool, what (non-existent) tool would be useful, or which piece of documentation should be (re)written. Due to lack of time & energy and often also programming expertise, these ideas rarely materialize. On the other hand, every now and then I meet programmers with a strong interest in mathematics (who are often actually trained mathematicians), and who are looking for a software project to work on. However, normally they don't really know what's needed and end up doing a non-mathematical project. This gave me the idea to ask the mathematical community to compile a wish list for mathematical software. Wishes can be very small or something bigger. Just try to make sure that it's realistic, and maybe also give an explanation of why you consider your project interesting. And if you happen to be a programmer fulfilling one of the wishes, please leave a comment. It would be great if you could also include an estimate of how complex your project is and what the math/coding ratio is -- but this is optional. What software tool would you like to see created? What existent software tool would you like to see extended by what feature? What piece of documentation is missing or should be updated/extended? One suggestion per answer, please. I think some aspects of math would be revolutionized by having a good math search engine. Recently, a question was asked on Meta.MathStackExchange about what they perceived as the greatest problems facing the site. The biggest response was that there was no search engine that indexed mathematics. This is partly reasonable, since math is stored and documented in $\TeX$ and this can be taken as a standard. But this is also problematic, as there are multiple noncanonical ways to do things in $\TeX$. I would be remiss if I didn't say there are very many other challenging aspects of this. As an example use case, I often have to look things up in the Gradshteyn and Ryzhik Table of Integrals and Series. It would be remarkable if there were a reasonable way to search for my expressions within the book. Even if I had to attempt multiple searches, it would almost certainly be faster. Taking it up a step, it would be great to search through TeX on the arXiv for certain expressions as well. A more modern typesetting language to replace $\TeX$. TeX is basically impossible to parse and its internals are really odd and difficult to work with when one tries to do something advanced. Knuth is a genius, and it was a really neat hack for the time, but, with all due respect, after 30 years of experience with computer typesetting I am sure it is possible to put together something better. If not, at least a TeX compiler with better error messages. I always thought it would be nice to have a real-time virtual blackboard (supporting a digitizer pen), say, as an extension of Skype or a similar service, where you can not only talk with a colleague but also do math together over great distances. Vastly improved support for handwritten math (e.g. via digitizer pen), including its conversion to typeset math, would be awesome! In the long run, ideally it should be able to replace LaTeX. Just think of how much of researchers' time is spent on inputting math. Good diff software is essential for collaboratively writing articles.
Latexdiff takes two tex files and outputs a new tex file with the differences highlighted (additions are underlined in blue and deletions are crossed out in red). This is very useful since it facilitates viewing the changes that coauthors have made during a round of editing, especially if some of your coauthors are not super computer-savvy (e.g., they don't use diff themselves), since you can just pass them the output PDF with the marked changes. However, my experience with using latexdiff is that the output file usually requires some manual editing before it can be compiled into a PDF, since the diff markup algorithm often messes up the latex syntax. It would be useful to have a more user-friendly latexdiff. I would like software that makes the specific job of managing mathematical references easier. When I've looked, there are BibTeX and its relatives for making sure that the whole process stays under control; many tools for managing references; tools for pulling BibTeX from MathSciNet; tools for creating BibTeX from arXiv identifiers; tools for searching these places for papers; tools for merging BibTeX files; and so on. And then there are some misc. extra jobs that always show up, like fixing tildes and putting capitals in braces and adding hyperlinks to bib entries that lack them because they date all the way back to my thesis. The process is exhausting and the tools don't click together enough to make it much easier - e.g. automating pulling references from MathSciNet seems almost not worth the trouble because it involves firing up special-purpose software that's only useful for half of the new entries that I'm referencing. I find it particularly cumbersome to produce good-looking mathematical illustrations. I know of several ways to make decent cartoon images in bitmap format with little hassle, but I prefer the image quality provided by vector graphics. TikZ seems to be the go-to for math-based vector graphics, but it is incredibly time-consuming, even after climbing the learning curve. I would very much like a bmp-to-TikZ "converter." Depending on the quality of the bitmap, the converter might need to iteratively suggest a vector-graphics interpretation for the user to evaluate. The user could then fine-tune the TikZ code after conversion if they're extremely picky. LaTeX support (or mode) in voice-recognition software for people with upper limb disabilities. Maybe via Dragon NaturallySpeaking or something new entirely. Something similar for coding as well! It would be nice to have a PDF viewer which gave the user the option to collapse and also restore individual proof sections. I can imagine this better as an online service than as locally running software, but nevertheless I imagine it would be very useful. Have you ever wondered (because of your research or out of curiosity) whether there is an example of a structure A (e.g. a topological space, graph, group, ...) which has the properties B, C and D, but not E? If Yes, what is an example of such a structure? If No, how can this be proven? Properties [B] and [C] imply [D] and [not E]. Hence you are actually looking for structures A with only [D] and [not E]. The database does not contain information on this combination of properties. Do you want to extend the knowledge? "Wiki-like" means that the database's knowledge can be extended/corrected by anyone $-$ like in Wikipedia.
Even though this might seem like a complicated semantic search engine, I think that the strong formalization we have in mathematics enables us to choose a strict syntax for the input. Then follows a list of properties, e.g. finite, compact, 3-dimensional, connected, Hausdorff, has inner point, metrizable, bijective, ... . Every such property can be suffixed with a [not]-operator. The listed properties are joined by conjunction. The structure and property names are not free-form input, but chosen from a pop-up menu or by auto-completion, so that the users know what to input. The database should implement very basic reasoning, e.g.: If a property A implies B, and B implies C, then A implies C. If A and B contradict each other and C implies A, then C and B contradict each other too. The structures can be linked, e.g. every metric space is a topological space (by its induced topology). Hence, every property which is available for topological spaces is also available for metric spaces. General: I want a combined database for all/most structures. All this under a common interface. Extendable: I think everyone should be allowed to add their knowledge to the database. Searchable: Most of the time I know only the names (or some names or vague descriptions) of some properties of the desired structure. I do not know the structure's name. Hence I want to filter by these properties. Sometimes I might not even be interested in examples, but in the relation between two properties: e.g. do they contradict each other, are they the same, does one imply the other, ...? Structured: Not a loose collection of examples/counterexamples/articles, but highly interconnected and analysable data. User-friendly/Beautiful: I think mathcounterexamples and of course StackExchange are good demonstrations of these goals. I once had an idea of how this could be realized. I even asked a question on Computer Science StackExchange to see whether useful data structures for this kind of task already exist. I would love to realize such a project, but I am definitely lacking the web-developer skills, and currently also the time. There should be $\LaTeX$-browsers. The (relatively) new HTML 5 is great. It'd still be wonderful to have both: HTML browsers (as today) and $\LaTeX$-browsers. What I think would be most useful to address this issue is improving the system for making requests and contributions to such software packages. I'm sure many of these systems would be happy to have more help with development. For instance, I found that the Sage implementation of the computation of zeta functions of graphs is horribly slow, and I wrote a much faster implementation (using Sage), and I wanted to give Sage my code so they could use it in a subsequent release, but after looking at the amount of effort required to contribute code, I decided I didn't want to spend that much time on it. I was just hoping to submit my code with some comments, which an interested developer could revise to conform to standards, test, and implement. One example of something I would like implemented (say in Sage) is computation of $p$-adic integrals. E.g., given a compact open subgroup $K$ of $GL_2(\mathbb Q_p)$ and a character $\psi$ on the upper unipotent $N$, compute $\int_N 1_K(n) \psi(n) \, dn$. (Some simple cases like basic character sums might be implemented, but possibly they are only implemented mod $p$.) I once thought about trying to automate calculations for a highly computational project I had, but then decided it wasn't worth the development time for just that one project.
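As an illustration of the "very basic reasoning" described above, here is a toy sketch (the property names and representation are mine, not from any existing database) that closes a set of implications under transitivity:

```python
# Toy sketch: saturate property implications under transitivity, so that
# "metrizable implies Hausdorff" and "Hausdorff implies T1" combine
# automatically into "metrizable implies T1".
def transitive_closure(implies):
    """implies: dict mapping each property to the set of properties it implies."""
    closed = {p: set(qs) for p, qs in implies.items()}
    changed = True
    while changed:
        changed = False
        for p in closed:
            derived = set()
            for q in closed[p]:
                derived |= closed.get(q, set())
            if not derived <= closed[p]:
                closed[p] |= derived
                changed = True
    return closed

rules = {"metrizable": {"Hausdorff"}, "Hausdorff": {"T1"}}
assert "T1" in transitive_closure(rules)["metrizable"]
```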
For the sake of searching mathematical texts one could create a purely auxiliary language sTeX, a simplified $\LaTeX$ -- just a software tool (not directly for people). Then one would add a "translator" (or rather a simplifier) from $\LaTeX$ into sTeX. Then search engines (like Google) could search texts in $\LaTeX$ by first obtaining the intermediate sTeX. Mathematicians may learn just a little bit about sTeX to make mathematical searches still more efficient (but even without knowing anything about sTeX, mathematical searches would be much easier to handle than without sTeX). A blog comment hosting service that supports MathJax in comments. One of the comments mentioned moving LaTeX from PDF to an XHTML sort of environment (the Stacks Project is a good example). It may be worth expanding on that as a separate answer. One obvious advantage is that words in text mode can be searched. And it may be a step towards making LaTeX code searchable (the current top answer). For most of the non-mathematical TeX constructs (bullet points, italics, included pictures) we can use markdown, like here on MathOverflow. But the real game-changer is to make it more like GitHub, where one can fork a proof in order to add missing details, or make a more substantial rewriting, and "publish" it for all to see. The rest of us can vote on them. Eventually the platform should have sufficiently many of the basic theorems (with many different proofs) in all branches of mathematics that we can cite directly, instead of citing the original paper or a textbook. The citations can also be used to generate a "dependency graph", barring circular reasoning (it may not be as exhaustive in the details as the Stacks Project, or we'd lose the big picture). If we want to say a certain result (say, in a certain abstract theory) is important, we can just point to it and see how many important results—or results that you care about—are connected to it. It may be more fun to learn new mathematics this way, combining the best of all textbooks, old and new. Aside from the theorems, there would be special pages that are more expository, giving historical context (say, of a problem) and connecting different theorems into a coherent narrative without getting bogged down in the details of proofs. Another thing I would like to see (not strictly math only) is a proper editor for .djvu documents (for all platforms) the way there are such editors for .pdf documents. Especially for bigger scanned documents (like much of the historical material in math), .djvu offers much better compression and much smoother viewing performance. To my knowledge, there is only one program that comes close to a .djvu editor, but unfortunately it is lacking in many regards. On a related note, it would be nice to have a proper .pdf editor, native for Linux (there is one that comes close to the Windows ones, but it is not actively developed and has many problems, including usability ones), so that one does not need to use (the somewhat unstable) Wine. It would be great if mathematical plotting programs like gnuplot supported both mouse-based zoom-in and mouse-based zoom-out. I once tried to describe what I mean in a German post titled "Zoom-out could be so easy", but the description is too incomplete. One issue with that post is that you normally want to keep the aspect ratio. And the concrete formulas are also missing. Both issues could be solved easily, but the principal issue that people don't understand why this would be important is much harder to address.
Idea generator using generative machine learning. If it can generate new art, let's give it a try with math. Using new techniques from machine learning's aspect of generating new data (e.g. GANs), it would be very interesting to devise algorithms that input massive amounts of theorems and problems and combine them in various ways to output new statements. As in supervised machine learning, the human will label whether the output was helpful in giving them novel perspectives. A first step in the algorithm would be rephrasing the theorems' and problems' statements in as many different ways as possible. Often big bridges in math are built because we managed to find equivalent problems from distinct areas that allowed an exchange of techniques. I'd like to have, on a USB key, user-friendly software that could parse a math article to check the proofs in it and highlight the possible gaps, without my having to learn how to use tools like Coq. But this may sound unrealistic, at least for now. swMATH helps in finding existing mathematical software and documentation. swMATH is a freely accessible, innovative information service for mathematical software. swMATH not only provides access to an extensive database of information on mathematical software, but also includes a systematic linking of software packages with relevant mathematical publications. The intention is to offer a list of all publications that refer to a software package recorded in swMATH. In particular, all articles are given which are included in Zentralblatt MATH (zbMATH). These can be both articles that describe the background and technical details of a program, and publications in which a piece of software is applied or used for research. In this way, swMATH provides information on the actual use of the software that is otherwise impossible or very difficult to obtain. At the same time, the documentation of literature referring to a piece of software is a valuable source of information for the authors of the software about where their software is used. Moreover, if software is cited in scientific publications, this is also an important quality criterion, which is used by swMATH for software selection. swMATH sees itself as a service to the mathematical community. Additions, corrections and other notes from authors and users of mathematical software can be communicated under 'Feedback' and are very welcome. For more detailed information, we refer to the following article. swMATH is a project of the Mathematical Research Institute Oberwolfach (MFO) and FIZ Karlsruhe (FIZ), funded by the Leibniz Association 2011-2013. I would like to have a feature in a (La)TeX IDE that would allow one to "collapse" one's document tree into a single file. I modularize my typesetting: I have separate files for definitions, lemmas, \newcommands, etc., and it is not convenient to share all of the separate files with others. Uploading one's document tree to the arXiv--one file at a time--is a tedious chore. Also, some editors request that submissions be put into a single file for publication. The ability to work modularly and then readily convert the corpus to a single file would be useful.
CommonCrawl
Using the numbers $1, 2, 3, 4$ and $5$ once and only once, and the operations $\times$ and $\div$ once and only once, what is the smallest whole number you can make?
CommonCrawl
We obtain dimension free estimates for noncommutative Riesz transforms associated to conditionally negative length functions on group von Neumann algebras. This includes Poisson semigroups, beyond Bakry's results in the commutative setting. Our proof is inspired by Pisier's method and a new Khintchine inequality for crossed products. New estimates include Riesz transforms associated to fractional laplacians in $\mathbb R^n$ (where Meyer's conjecture fails) or to the word length of free groups. Lust-Piquard's work for discrete laplacians on LCA groups is also generalized in several ways. In the context of Fourier multipliers, we will prove that Hörmander–Mikhlin multipliers are Littlewood-Paley averages of our Riesz transforms. This is highly surprising in the Euclidean and (most notably) noncommutative settings. As application we provide new Sobolev/Besov type smoothness conditions. The Sobolev-type condition we give refines the classical one and yields dimension free constants. Our results hold for arbitrary unimodular groups.
CommonCrawl
TL;DR What in the world is roughness remapping for? In general, artists like working with a linear roughness value between 0 and 1 (similarly for all other material parameters), since this is easier to work with and to understand compared to directly using the parameters of certain BRDF components as presented in the literature. Disney for instance always uses linear material parameters for their Disney BRDF in the range [0,1] from the perspective of the artists (see their course notes on page 18). Working with linear values in the range [0,1] also simplifies storing and loading these values in RGB or sRGB textures. The actual roughness used in the BRDF equations is non-linear. So one needs to map linear to non-linear roughness in some computationally cheap way that pleases the artists. The most important thing is to be consistent across your renderer and to explicitly specify when a roughness parameter is linear or non-linear. It is worth reading Moving Frostbite to Physically Based Rendering 3.0. These course notes explicitly use the terminology linear roughness and (non-linear) roughness, both in the text and code samples. Furthermore, it is also worth reading The Specular BRDF Reference which defines various BRDF components for the Cook-Torrance BRDF using the same non-linear roughness parameter $\alpha$ (defined as the square of the linear roughness $roughness$).
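For concreteness, here is a minimal sketch of the squaring convention mentioned above (the common Disney/Frostbite mapping), fed into a GGX normal distribution function; treat it as an illustration rather than the one true remapping:

```python
import math

def linear_to_alpha(linear_roughness: float) -> float:
    # Artists author a perceptually linear roughness in [0, 1];
    # the BRDF then uses alpha = roughness^2.
    return linear_roughness * linear_roughness

def ggx_ndf(n_dot_h: float, alpha: float) -> float:
    # Trowbridge-Reitz / GGX normal distribution function,
    # D = a^2 / (pi * ((n.h)^2 (a^2 - 1) + 1)^2).
    a2 = alpha * alpha
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)
```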
CommonCrawl
In the building of the Jewelry Art Gallery (JAG), there is a long corridor in the east-west direction. There is a window on the north side of the corridor, and $N$ windowpanes are attached to this window. The width of each windowpane is $W$, and the height is $H$. The $i$-th windowpane from the west covers the horizontal range between $W\times(i-1)$ and $W\times i$ from the west edge of the window. You received instructions from the manager of JAG about how to slide the windowpanes. These instructions consist of $N$ integers $x_1, x_2, ..., x_N$, and $x_i \leq W$ is satisfied for all $i$. For the $i$-th windowpane, if $i$ is odd, you have to slide the $i$-th windowpane to the east by $x_i$; otherwise, you have to slide the $i$-th windowpane to the west by $x_i$. You can assume that the windowpanes will not collide with each other even if you slide them according to the instructions. In more detail, the $N$ windowpanes are alternately mounted on two rails. That is, the $i$-th windowpane is attached to the inner rail of the building if $i$ is odd; otherwise, it is attached to the outer rail of the building. Before you execute the instructions, you decide to obtain the area where the window is open after the instructions. The first line consists of three integers $N, H,$ and $W$ ($1 \leq N \leq 100, 1 \leq H, W \leq 100$). It is guaranteed that $N$ is even. The following line consists of $N$ integers $x_1, ..., x_N$ which represent the instructions from the manager of JAG. $x_i$ represents the distance to slide the $i$-th windowpane ($0 \leq x_i \leq W$). Print the area where the window is open after the instructions in one line.
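One straightforward approach (a sketch under my own naming, not given in the problem): since $N, W \leq 100$, mark each unit of width covered by a pane after sliding and count the uncovered units.

```python
# Sketch: simulate coverage of the window [0, N*W] in unit-width cells.
def open_area(n, h, w, x):
    covered = [False] * (n * w)
    for i in range(1, n + 1):
        # Odd panes slide east (+x), even panes slide west (-x).
        shift = x[i - 1] if i % 2 == 1 else -x[i - 1]
        left = w * (i - 1) + shift
        for pos in range(left, left + w):
            if 0 <= pos < n * w:           # ignore parts slid past the edges
                covered[pos] = True
    return covered.count(False) * h         # open width times window height
```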
CommonCrawl
And then thine sharpened figure shall be rounded. As other answerers have pointed out, the first six lines are from Shakespeare's sonnets, numbered 3, 60, 5, 5, 5, and 12. However, I believe that they have been misinterpreting the mathematics. A decimal point must be placed somewhere near the start of those numbers. With this information alone we can't do anything yet, but supposedly it implies that those numbers must be strung together to create 36055512, and then a decimal point inserted somewhere. Then the number obtained by placing that decimal point must be squared, and then the difference from 60 taken of the result. If we square 3.6055512, the result is 12.99999945582144. The difference between this number and 60 is 47.00000054417856. This simply means "round your result to the nearest integer". This gives us a result of 47. Sonnet 05 occurs thrice, $3\times5=15$. All those numbers are divisors of 60. The missing two-digit divisors of 60 are 10, 15, 20, 30. It suggests that we're talking about hours, so let's exclude 30. It may be a reference to the decimal system (based on the number 10), where "point" means dot or comma. And thine sharpened figure shall be rounded. In the number 10 the digit 1 looks "sharp" and shall be rounded. The poem indicates that the actual point comes right after the first line, i.e., on the second line. square(60) = 3600. There are 3600 seconds in one hour as well. Square 3 to get 9. "Owe" sounds like "over" (as in division), and "hour" is a reference to line 3 (Sonnet 5). So we divide 9 by 5 to get 1.8. Round the number 1.8 to get 2.
CommonCrawl
This book fills a real gap in the analytical literature. After many years and many results of analytic regularity for partial differential equations, the only access to the technique known as $(T^p)_\phi$ has remained embedded in the research papers themselves, making it difficult for a graduate student or a mature mathematician in another discipline to master the technique and use it to advantage. This monograph takes a particularly non-specialist approach, one might even say gentle, to smoothly bring the reader into the heart of the technique and its power, and ultimately to show many of the results it has been instrumental in proving. Another technique, developed simultaneously by F. Treves, is presented and compared and contrasted with ours. The techniques developed here are tailored to proving real analytic regularity of solutions of sums of squares of vector fields with symplectic characteristic variety and others, real and complex. The motivation came from the field of several complex variables and the seminal work of J. J. Kohn. It has found application in non-degenerate (strictly pseudo-convex) and degenerate situations alike, linear and non-linear, partial and pseudo-differential equations, real and complex analysis. The technique is utterly elementary, involving powers of vector fields and carefully chosen localizing functions. No knowledge of advanced techniques, such as the FBI transform or the theory of hyperfunctions, is required. In fact, analyticity is proved using only $C^\infty$ techniques. The book is intended for mathematicians from graduate students up, whether in analysis or not, who are curious which non-elliptic partial differential operators have the property that all solutions must be real analytic. Enough background is provided to prepare the reader for a clear understanding of the text, although this is not, and does not need to be, very extensive. In fact, it is very nearly true that if the reader is willing to accept the fact that pointwise bounds on the derivatives of a function are equivalent to bounds on the $L^2$ norms of its derivatives locally, the book should read easily.
CommonCrawl
One of the most prominent problems of ancient mathematics was the squaring of the circle: to construct the square with the same area as a given circle. A related problem is linearizing the circle: to find a natural transition between a given line segment of length $L$ (having constant curvature $0 = 1/\infty$) and the circle with circumference $U = 2\pi R = L$ (with constant curvature $1/R$), going through circle segments of length $L$ (with intermediate constant curvatures $1/R'$, $\infty > R' > R$). The question is: Along which paths do the points of the line segment move to finally yield the circle? To be honest: Even though these paths look very much like circle segments, I'm not quite sure, and I didn't define them by an explicit formula (which I didn't have at hand) but heuristically, using some support points and splining. Are these paths really circle segments? If so: How can they be parametrized? If not: What kind of curves are they otherwise? If we want this transition to have all the points on the boundary of a circle at all times, then it makes most sense to parameterize by the radius of this circle (and apply a transformation to get it in terms of finite time later). For simplicity, I will also have the transition be to a vertical line. We shall have the radius of the circle $C_r$ be $r$ and its centre be $(-r,0)$, such that $(0,0)$ is on $C_r$ for all $r$. The coordinates of the point at arclength $s$ from $(0,0)$ are then given by $(r (\cos(s/r) - 1), r \sin(s/r))$. Since the goals here seem to be rather subjective, I would attempt this and see how it looks to you (beyond making substitutions as needed to result in a horizontal line).
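A small sketch of the answer's parameterization (purely illustrative): the point at arclength $s$ on the circle $C_r$ tends to $(0, s)$, i.e. the straight segment, as $r \to \infty$.

```python
import math

def transition_point(s: float, r: float) -> tuple:
    # Point at arclength s along the circle of radius r through the origin,
    # centred at (-r, 0): (r*(cos(s/r) - 1), r*sin(s/r)).
    return (r * (math.cos(s / r) - 1.0), r * math.sin(s / r))

# As r grows, the first coordinate behaves like -s^2/(2r) -> 0 and the
# second tends to s, recovering the straight segment in the limit.
```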
CommonCrawl
1) Find the value of the constant c. "For x greater or equal to 1" means $1 \leq x < + \infty$. Exactly what I thought; from this, would you find the cumulative function, then use x=1? Remember the properties of a probability density function... Yes, and since it's an improper integral, care must be taken in evaluating it (using limits etc.). You ought to have examples in your textbook or class notes on this type of situation. Unfortunately, to my surprise, I haven't; we only have them for the kind I mentioned: find the cumulative function and then use the upper limit. So honestly I haven't got a clue; any help would be most appreciated.
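For illustration only — the thread never shows the actual density, so here is a stand-in example with the same structure. Suppose (hypothetically) $f(x) = c/x^3$ for $x \geq 1$ and $0$ otherwise; then requiring the total probability to be $1$ gives

$$\int_1^\infty \frac{c}{x^3}\,dx = \lim_{t\to\infty}\left[-\frac{c}{2x^2}\right]_1^t = \frac{c}{2} = 1 \quad\Longrightarrow\quad c = 2.$$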
CommonCrawl
Loco Mateo is crazy for tacos, so he decides to start a catering service which will feature his signature creation: Tacos el Fuego. Because he knows his tacos are so insanely delicious, he's absolutely certain that his new business is going to be a success. So, he needs to write up a business plan to make sure he's ready for all the HUGE orders he's definitely going to get once he launches. To help him prepare for his inevitable financial success, Mateo is going to have to enlist the help of some basic algebraic expressions. Mateo's plan is to cater to weddings, funerals, family reunions... anywhere there is a big group of people fiending for tacos. Since each group is going to be different, he needs to come up with an expression to represent how much he'll charge for each catering order, based on the number of tacos sold. Mateo decides to charge $2.50 for each Taco el Fuego; plus, he's going to charge a $200 service fee for his impeccable taco servicing skills. By focusing in on the keywords, we can turn Mateo's business plan into an algebraic expression. Since each taco costs $2.50, we should multiply 2.50 by 't', or the number of tacos sold, to get the total amount of money Mateo can earn by selling tacos. The variable 't' represents the number of tacos Mateo sells. When multiplying by a variable, you can either use a multiplication sign or just put the number right next to the variable. Finally, we know that the keyword "plus" indicates addition, so we add a plus 200 to complete the expression. Next, Mateo is going to need to figure out how many people he can feed with each batch of tacos he makes. Because he wants to keep his business exclusive, he decides to limit the number of tacos to three per person. But Mateo is also not the most efficient chef. Every time he prepares a batch of tacos, he always burns or breaks some of them, so he ends up with 10 fewer than the total number of tacos in the batch. Let's use this information to write an expression. We know that Mateo will end up with 10 fewer tacos than the total number of tacos he makes. We don't know how many the total will be, so we can use the variable 't' to stand for the total number of tacos. The words "fewer than" tell us that we are going to need to subtract 10 from the total, 't'. We also know that he will serve three tacos per person. The keyword "per" tells us that we need to set up a division problem. So we take the total number of tacos remaining after the 10 misfires... Then divide that all by three. There! Now when we know the number of tacos, 't', we can use this expression to figure out how many people Mateo can feed! Now that he's got a basic business model in place, Mateo just KNOWS that stocks in his company are going to soar! Right now, he figures that it's worth about $10,000, but if everything goes according to plan, his company will double in value every week. How can we use this information to come up with an expression? Let's start with the initial value of the company, $10,000. Next, we have the keywords "double each week", which means we're going to do repeated multiplication by 2 every week. Let's draw up a quick table. At the end of week one, we know the company will be worth $10,000 x 2, or $20,000. At the end of week two, it will be worth $10,000 x 2 x 2, or $40,000. At the end of week 3, it will be worth $10,000 x 2 x 2 x 2, or $80,000. Do you notice a pattern here? Each of these expressions can be rewritten using exponents. $10,000 x 2 can be written as $10,000 x 2 to the first power.
$10,000 x 2 x 2 can be written as $10,000 x 2 to the second power. Similarly, $10,000 x 2 x 2 x 2 can be written as $10,000 x 2 to the third power. Using this pattern, we can calculate how crazy rich Mateo's going to be on week w, because we know that we'll have to multiply $10,000 by 2, "w" times! We can write this as the expression, $10,000 x 2 to the 'w' power! It looks like Loco Mateo has just finished up a batch of Tacos el Fuego in time to fill his first order! But where are all of his customers? I guess that's why they call him Loco! In order to write mathematical expressions it is important to be able to recognize keywords to help identify which operations, constants, variables, and so on need to be used. When perfected, this method can make it possible to easily translate word problems into corresponding math equations. Learn how to write expressions by finding out how Loco Mateo jumpstarts his catering business that features his signature Tacos El Fuego. Would you like to apply what you've learned? You can review and practice it with the exercises for the video How to Write Expressions. Explain how to identify the operator by keywords. Plus or more than or combine or in total ... for addition. Minus or less than or fewer than or take away ... for subtraction. If each student has $2$ books, and letting $s$ stand for the number of students there are in total, then the total number of books is given by $2s$, or $2\times s$. Imagine you've got $12$ candies. You want to split those candies into packages of $3$ each. Then you have to divide $12\div 3=4$ to see that you get $4$ packages with $3$ candies each. We don't know the number of tacos, thus we assign the variable $t$ to it. The keyword for each indicates multiplication: $2.50\times t$ or $2.50t$. Each person gets 3 tacos. Mateo can only calculate with 10 fewer than the total number of tacos. Again we don't know the total number of tacos and thus assign $t$ to it. Fewer than indicates subtraction. So we have to subtract $10$ from $t$. Keep the order in mind. This gives us $t-10$. So the remaining number of tacos is given by $t-10$. Summarize your knowledge about variables. If you don't know the number of students in your grade, you can represent the unknown number by a variable, such as $s$. If you've decided on one letter for an unknown value, you have to use it consistently throughout the exercise. Let's have a look at the algebraic expression, $x+4$. If you'd like to write an algebraic expression with an unknown value, you first assign a variable to this unknown value. Often the variable $x$ or $y$ is used, but you can also use any other letter. For example, $t$ for the number of tacos. If you've decided on using one variable you don't have to change it during an exercise. Take care: The number of persons can't be negative. To write a word problem as an algebraic expression you have to recognize keywords. In the following the keywords are highlighted. So, in other words, Lou has three siblings. So each of them has to solve five word problems. Determine the corresponding algebraic expressions. The total number of people is four, including Sasha. The total number of tacos is $12$ and the total number of drinks is $10$. The total bill is $61$ dollars. Each of them has eaten three tacos. Together we get $4\times 3=12$ tacos eaten. $2\times 2+2\times 3=4+6=10$, the total number of drinks. $12\times 3+10\times 2.50=36+25=61$ dollars. So they have to pay $61$ dollars in total. So each of them has to pay $61\div 4=15.25$ dollars.
Match the keyword with the operation symbol. Take any of the given keywords and think about which operation symbol you have to take, and test out what you think by making up a word problem to go along with it. The following keywords in a word problem indicate addition: combine, in total, sum, plus, more than ... Sure, there are a lot more keywords for addition as well. For each, times, or product indicate multiplication. For example, "the double of" can be written as $2\times ...$. Highlight important keywords from various real world statements. You only have to highlight the keyword and do not have to decide the operation. Be careful: there are also non-keywords that you can highlight. If any word is a keyword you should be able to write down a corresponding algebraic expression. First we look at the keywords and then we indicate the corresponding operation and write down the algebraic expression. Here only the keywords are highlighted. If any word isn't highlighted, then it isn't a keyword at all. The total number of shells is wanted. So we have to add the found shells to get $23+16=39$. Because $20$ students can fit in each classroom, we have to divide $320\div 20=16$. This is the number of classrooms the school needs. $t$ raised to the fourth power leads to $t^4$. Next it's increased: this indicates addition of the product of $2$ and $s$. So we have in total $t^4+2\times s$. Sasha has nine books. Chayenne has two times as many books as Sasha: $2\times 9=18$. This is the number of books Chayenne owns.
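If it helps to see the video's three expressions in one place, here they are as small functions (a sketch; the function names are mine, not from the video):

```python
def catering_charge(t):       # $2.50 per taco plus a $200 service fee
    return 2.50 * t + 200

def people_fed(t):            # 10 tacos are lost, 3 tacos per person
    return (t - 10) / 3

def company_value(w):         # $10,000 doubling every week
    return 10_000 * 2 ** w

print(catering_charge(100))   # 450.0
print(people_fed(40))         # 10.0
print(company_value(3))       # 80000
```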
CommonCrawl
I want to show that controllability, reachability, and null-controllability are equivalent for discrete time systems. I defined controllability as being able to get from an initial state $x_1$ to any other state $x_2$, reachability as being able to get from the origin $0$ to any other state $x_2$, and null-controllability as being able to get from an initial state $x_1$ to the origin $0$. I see how being able to get from any state $x_1$ to any state $x_2$ (thus controllability) implies reachability, as you can take the origin as $x_1$ and reach any state $x_2$. However, I can't figure out how to prove it the other way around (that reachability implies controllability). Also, I don't really understand how the last statement is different from the first statement. Reaching from $x_1$ to $x_2$ in $k$ steps is equivalent to reaching from $0$ to $x_2 - A^k x_1$ in $k$ steps, which can be immediately seen from the solution of the difference equation. Since all the states are reachable from $0$, this implies controllability. Null-controllability is equivalent to reachability if the system does not have finite modes, i.e. $0$ eigenvalues. If the system has an uncontrollable finite mode, this particular mode will be $0$ in finite time (more precisely, at most $n$ steps, where $n$ is the system order), regardless of the input. Therefore null-controllability is equivalent to "all uncontrollable modes are finite". Note: Not to be confused with stabilizability, which is "all uncontrollable modes are stable". This doesn't imply null-controllability since uncontrollable stable modes may be infinite, i.e. they cannot reach $0$ in a finite time. However, null-controllability implies stabilizability.
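As a quick numerical illustration (not part of the original question), reachability of a discrete-time pair $(A, B)$ can be checked via the rank of the controllability matrix $[B\ AB\ \cdots\ A^{n-1}B]$:

```python
import numpy as np

def is_reachable(A, B):
    # Reachable iff rank [B, AB, ..., A^(n-1)B] equals the state dimension n.
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Example: a double integrator chain is reachable from a single input.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_reachable(A, B))   # True
```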
CommonCrawl
Tak has $N$ cards. On the $i$-th $(1≤i≤N)$ card is written an integer $x_i$. He is selecting one or more cards from these $N$ cards, so that the average of the integers written on the selected cards is exactly $A$. In how many ways can he make his selection? $N, A, x_i$ are integers. 200 points will be awarded for passing the test set satisfying $1≤N≤16$. Print the number of ways to select cards such that the average of the written integers is exactly $A$.
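One possible solution sketch (not given in the problem): shift every card value by $-A$, so that selections averaging $A$ are exactly the nonempty subsets with shifted sum $0$, and count them with a subset-sum DP.

```python
from collections import defaultdict

def count_selections(a, xs):
    counts = defaultdict(int)
    counts[0] = 1                          # the empty selection
    for x in xs:
        # Snapshot the current table so each card is used at most once.
        for s, c in list(counts.items()):
            counts[s + (x - a)] += c
    return counts[0] - 1                   # drop the empty selection

print(count_selections(8, [7, 9, 8, 9]))   # 5
```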
CommonCrawl
Now, I claim that both formulas are indeed valid. The proofs are very similar, so I'll focus on the first one. What I did was assume, for contradiction, that the formula isn't valid, so it takes the value $t\to f$. Then it must be that there's an $a$ which satisfies $A$ but there isn't an $x$ which satisfies $B$. We assumed that $\forall x(A\to B)$ is $t$, but we get a contradiction because for that $a$ we get $A\to B = f$. A similar proof can be conducted for the second formula. Am I right here? Or am I missing something? Because at first I guessed that one of them isn't valid. There is no need to use proof by contradiction, albeit the proof you gave is not incorrect. The latter is completely provable in a constructive logic. The former is provable with both a constructive interpretation of $\exists$ and a more faithful interpretation of the classical $\exists$, albeit only for "proposition"-valued $B$. Your use of proof by contradiction, though, is not only unnecessary, it's basically superfluous! You assume that the statement is false, then prove that it is true, then use double negation elimination on the resulting contradiction to complete your proof. You could have just used the proof that you made along the way! You can read off a more standard "proof" from the above terms (though personally I find the terms more compelling because they patently have computational content). For the former, we're given a proof, f, that $\forall x. A(x) \to B(x)$ and a proof, (t , a), that $\exists x.A(x)$, and in particular t is a witness to this, i.e. $A(t)$ holds. We need to provide a $t'$ and a proof that $B(t')$ holds. We choose $t' = t$ and prove $B(t)$ by instantiating $f$ with $t$ and using modus ponens with a to get a proof of $B(t)$, namely f t a. The only difference for a more classical interpretation of $\exists$ is that we need to make sure the witness is not observable, or to put it another way, that all proofs of $\exists x.B(x)$ are the same. This doesn't hold for arbitrary (constructive) predicates $B$, but it does for ones that are "proposition"-valued, which I won't explain except to say that in normal, proof-irrelevant logics, everything is proposition-valued. For the latter, we still start with the proof f, but now we're given a proof g that $\forall x.A(x)$ and need to prove that for an arbitrary $x$ we have $B(x)$. We do this by instantiating f with $x$ and g with $x$ and then using modus ponens to get our proof of $B(x)$, namely f x (g x).
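Since the answer is written in terms of proof terms like f t a and f x (g x), both statements also go through essentially verbatim in a proof assistant. A small Lean 4 sketch (illustrative):

```lean
variable {α : Type} {A B : α → Prop}

-- ∀x (A x → B x) and ∃x, A x give ∃x, B x: reuse the witness t.
example (f : ∀ x, A x → B x) (h : ∃ x, A x) : ∃ x, B x :=
  h.elim fun t a => ⟨t, f t a⟩

-- ∀x (A x → B x) and ∀x, A x give ∀x, B x: instantiate both at x.
example (f : ∀ x, A x → B x) (g : ∀ x, A x) : ∀ x, B x :=
  fun x => f x (g x)
```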
CommonCrawl
Let $G$ be a complete DAG: It has vertices $v_1,\ldots,v_n$, and $v_iv_j$ is an edge if and only if $i<j$. Let $w(i,j)$ be the weight of the edge $v_iv_j$. The weight has the property that $w(i,j)<w(i,j+1)$ and $w(i+1,j)<w(i,j)$. We are given an integer $k$. We are interested in finding the minimum $\lambda$ such that there exists a path of length $k$ from $v_1$ to $v_n$ in which each edge has weight at most $\lambda$. Let the optimal value be $\lambda^*$. Assume one has an oracle that takes $i,j$ and outputs $w(i,j)$. One does not have to inspect the entire graph in order to find $\lambda^*$. First, given a $\lambda$, we can decide if $\lambda < \lambda^*$ in $O(n)$ time using a greedy algorithm. Is there a faster algorithm?
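For reference, here is a sketch of the $O(n)$ greedy decision procedure mentioned above (my own rendering; whether "length $k$" means exactly or at most $k$ edges affects the final comparison):

```python
def min_edges_within(w, n, lam):
    # Greedy: from i, jump to the largest j with w(i, j) <= lam. Because
    # w(i, j) is increasing in j, the scan below advances j monotonically
    # across the whole run, so the test makes O(n) oracle calls in total.
    i, steps = 1, 0
    while i < n:
        j = i + 1
        while j < n and w(i, j + 1) <= lam:
            j += 1
        if w(i, j) > lam:
            return None     # even the cheapest jump from i exceeds lambda
        i, steps = j, steps + 1
    return steps            # lambda feasible iff steps <= k ("at most k" reading)
```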
CommonCrawl
Lemma 23.8.7. Let $A$ be a Noetherian ring. Then $A$ is a local complete intersection if and only if $A_\mathfrak m$ is a complete intersection for every maximal ideal $\mathfrak m$ of $A$.
CommonCrawl
Lemma 9.8.4. Let $K/E/F$ be a tower of field extensions. (1) If $\alpha \in K$ is algebraic over $F$, then $\alpha$ is algebraic over $E$. (2) If $K$ is algebraic over $F$, then $K$ is algebraic over $E$.
CommonCrawl
The task is to express $N$ as a sum of consecutive integers. If there are multiple solutions, output the one with the smallest possible number of summands. Each test case consists of one line containing an integer $N$ ($1 \leq N \leq 10^9$). For each test case, output a single line containing the equation in the format $N = a + (a+1) + \ldots + b$ as in the example. If there is no solution, output a single word IMPOSSIBLE instead.
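A possible approach (a sketch under two assumptions the statement only implies: at least two summands, and all summands positive): with $L = b - a + 1$ terms, $N = La + L(L-1)/2$, so it suffices to try $L = 2, 3, \ldots$ in increasing order and take the first $L$ that yields an integer $a \geq 1$.

```python
def consecutive_sum(n):
    # Try term counts L in increasing order, so the first hit uses the
    # smallest possible number of summands; L(L-1)/2 < n bounds the search.
    length = 2
    while length * (length - 1) // 2 < n:
        numerator = n - length * (length - 1) // 2
        if numerator % length == 0:
            a = numerator // length    # divisibility inside the loop forces a >= 1
            return a, a + length - 1
        length += 1
    return None                        # IMPOSSIBLE (exactly the powers of two)

print(consecutive_sum(15))   # (7, 8): 15 = 7 + 8 uses the fewest summands
```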
CommonCrawl
This article will review converting a simple algorithm, such as a least common multiple (LCM) algorithm, into a VHDL description. The data path of the algorithm will be discussed in this article. In the next article, we'll discuss the control path of the algorithm. Traditional digital design splits a given problem into two sections: a data path and a control path (or a controller). As a familiar example, consider a microprocessor that consists of an arithmetic logic unit (ALU) and a control path. The ALU may have several arithmetic units, such as adders and multipliers. The data processing is performed by the ALU, and that's why the ALU is considered the data path of this system. The control path determines which operation will be performed by the ALU. By specifying a particular sequence of operations, the control unit can implement a given algorithm. The following block diagram shows the idea of separating a system into a data path and a control path. Figure 1. A controller and a data path work together to implement an algorithm. Image courtesy of Digital Systems Design Using VHDL. As you can see in the diagram, some of the system inputs go to the controller and some go into the data path. For example, there may be a "start" input that will trigger a multiplication algorithm implemented by the system. In this case, "start" corresponds to one of the "control inputs" shown in the diagram and the multiplication operands correspond to the "data in." The controller also receives some inputs as "status signals" from the data path. A multiplication overflow flag is an example of a "status signal." Based on the control inputs and the status signals, the controller determines the upcoming operations for the data path. Separating the data path from the controller allows us to more easily find the errors in the design process. Moreover, this design methodology makes future modifications of the system easier. To further explore this design method, we'll use the example of a least common multiple (LCM) algorithm implemented as a VHDL description. Listing 1 shows the pseudocode to find the LCM of m and n. Let's assume that $1 \leq m \leq 7$ and $1 \leq n \leq 7$. This algorithm uses repeated addition operations to find the multiples of m and n. These multiples are stored in a and b, respectively. After each iteration of the algorithm, we check a and b; if they are equal, we have found a common multiple and the algorithm ends. Since the smaller multiples are checked first, the algorithm will give the LCM of m and n. Let's see what building blocks are required to implement the data path of the above algorithm in hardware. A computer programming language uses a "variable" to store information that will be referenced or used later. We can use some flip-flops as memory elements to store the value of the variables a and b. These two registers must be wide enough to store the LCM of m and n. Since $1 \leq m \leq 7$ and $1 \leq n \leq 7$, the LCM will be at most $6 \times 7 = 42$. (Note that the LCM of 7 and 7 is 7.) Hence, we need two six-bit registers to implement a and b. Moreover, considering lines 7 and 11 of the code, we need two adders to calculate a+m and b+n. Since the maximum value of this addition result is 42, six-bit adders are wide enough to avoid addition overflow. As you see, sometimes we are simply retaining the current value of a and b (lines 12 and 8, respectively).
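The listings referenced in this article were not preserved in this copy, so here is a rough sketch of the repeated-addition idea (the line numbers cited in the text refer to the original listings, not to this sketch):

```python
def lcm(m, n):
    a, b = m, n             # load the registers with the inputs m and n
    while a != b:
        if a < b:
            a = a + m       # advance a to its next multiple; b is retained
        else:
            b = b + n       # advance b to its next multiple; a is retained
    return a                # a == b is the least common multiple

print(lcm(6, 7))            # 42, the largest case for 1 <= m, n <= 7
```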
Since there are three different values that can be assigned to each of the a and b registers, we'll need two three-to-one multiplexers. The required blocks are shown in Figure 2. The following figure shows a possible connection of these components for implementing the above algorithm. In this figure, the set of D-type flip-flops (DFFs) used to store the values of a and b is represented by a single DFF. As you can see, the control input sel goes to the select input of the two multiplexers. By choosing different values for sel, we can perform the three assignments of the algorithm. For example, when the two multiplexers choose the input denoted by "0", the inputs m and n will be passed to a_next and b_next, respectively. With the upcoming clock edge, the a and b registers will be updated with the value of the inputs m and n, respectively. This corresponds to lines 1 and 2 of Listing 2. Note that we are assigning m and n, which are three-bit numbers, to a and b, which are six-bit registers. That's why, in Figure 3, the concatenation operator is used to append three zeros to the left of m and n (and this in turn is why the adders in Figure 3 have two six-bit inputs, in contrast to the adders in Figure 2, which have a three-bit input and a six-bit input). When the two multiplexers of Figure 3 select the input denoted by "1", the next value of a will be equal to m plus the current value of the a register. In this case, the b register will retain its current value. This corresponds to lines 7 and 8 of Listing 2. Similarly, the multiplexers can choose the red paths, which will correspond to lines 12 and 13 of Listing 2. The schematic of Figure 3 can perform the required operations on the inputs m and n; however, an appropriate signal must be generated for the select input of the multiplexers. At the beginning of the algorithm, sel will choose the path in dark blue to update the registers with the value of the inputs. For the rest of the algorithm, either the light blue paths or the red paths will be chosen. This choice will be made based on the result of the comparison of a and b (see lines 5 and 10 of Listing 2). Hence, two other circuits need to be added to the schematic of Figure 3: a circuit to compare a with b, and one to generate an appropriate signal for the sel input. Comparing two binary numbers is a trivial task, but what about generating the sel signal? By determining which operations will be performed, the sel signal is actually specifying the state of the system, and thus it is not surprising that we can use a finite state machine (FSM) to generate sel. An FSM is in exactly one of a finite number of states at any given time, and it can be designed to go from one state to another in response to certain conditions. In the case of the LCM example, a state transition can occur in response to the result of the comparison between a and b. We will discuss the control path of the LCM algorithm in the next article, and then we will use our findings to write the VHDL code for the algorithm. Traditional digital design splits a given problem into two sections: a data path and a control path (or a controller). The data path performs the actual processing on the input data, and the control path determines which operation should be performed by the data path. Separating the data path from the controller makes it easier to find design errors and implement modifications. An FSM can be used to generate the signal for the select input of the routing multiplexers in the data path.
CommonCrawl
Herbert Heyer, Satoshi Kawakami, Tatsuya Tsurii, Satoe Yamanaka, "Hypergroups Related to a Pair of Compact Hypergroups", SIGMA, 12 (2016), 111, 17 pp. Abstract: The purpose of the present paper is to investigate a hypergroup associated with irreducible characters of a compact hypergroup $H$ and a closed subhypergroup $H_0$ of $H$ with $|H/H_0| < +\infty$. The convolution of this hypergroup is introduced by inducing irreducible characters of $H_0$ to $H$ and by restricting irreducible characters of $H$ to $H_0$. The method of proof relies on the notion of an induced character and an admissible hypergroup pair. Keywords: hypergroup; induced character; semi-direct product hypergroup; admissible hypergroup pair.
CommonCrawl
A structure $\left<X,B\right>$ is an $n$-dimensional Euclidean incidence space iff $B$ is the ternary betweenness relation associated with $n$-dimensional Euclidean geometry. That is, a Euclidean incidence space is Euclidean geometry minus the notion of congruence. $(*)$ There is a matroid $\left<X,\mathcal S\right>$ of rank $n+1$ such that if $A \subseteq X$ is a flat of rank $k: 1 \leq k \leq n$, then $\left<A,B|A\right>$ is a $(k-1)$-dimensional Euclidean space. I believe this to be the case, looking at the way in which Hilbert's axioms handle dimensions, with planes and lines axiomatized extremely carefully, but the axiomatization of an entire $3$-dimensional space basically boiling down to stating something very similar to $(*)$. If that were, indeed, the case, it would be a rather nice tool for checking whether or not something is a high-dimensional Euclidean incidence space.
CommonCrawl
I found the following method to find one's age. It works exactly in my case. I would like to understand and solve this puzzle. If this is the wrong forum, my sincere apologies. Please guide me in solving this. Now you will get a 3-digit number. The first digit is the last digit of your mobile number; the last two digits are your age. Let $n$ be the last digit of your mobile number. Let $y$ be your year of birth. Then we have $100n+(2014-y)$. Since $2014-y$ is your age (assumed to be $<100$), the hundreds digit is $n$ and the last two digits are your age. The first term is the digit shifted two positions to the left (times $100$); the other two terms compute your age. You'll have to increment the constant $1964$ every year, and the trick won't work for centenarians. If you were born before August 1st in year y, then (2014 - y) is your age. If you are under 100, then it's a 2-digit number. So the right-hand side of the equation is just a three-digit number whose first digit is $x$ (it doesn't matter what one-digit number $x$ we started with) and whose last two digits are your age.
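A quick brute-force check of the identity in the answer (purely illustrative):

```python
# Verify: for any nonzero digit n and any two-digit age, 100*n + age is a
# three-digit number whose first digit is n and whose last two digits are
# the age.
for n in range(1, 10):
    for age in range(10, 100):
        s = str(100 * n + age)
        assert len(s) == 3 and s[0] == str(n) and int(s[1:]) == age
print("identity holds for all digits and two-digit ages")
```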
Why use $m_k$ to approximate $f$ in the trust region method for optimization? In the second algorithmic strategy, known as trust region, the information gathered about $f$ is used to construct a model function $m_k$ whose behavior near the current point $x_k$ is similar to that of the actual objective function $f$. We find the candidate step $p$ by approximately solving the subproblem $$\min_p ~~m_k(x_k + p)$$ where $x_k + p$ lies inside the trust region. I have a naive question: I understand how to approximate $f$ (we can use a Taylor expansion), but I do not understand why we need to approximate in the first place. Is it because we then have a simpler problem (a quadratic form) that we can solve? Note that the approximation (quadratic form) may not be convex, as shown in the example on page 20 of the book.

Yes, a quadratic approximation of the objective function is used, and updated/improved, as the optimization progresses. It is usually a much simpler problem to solve than the original, and its solution doesn't require any (possibly long-running) evaluation of the actual objective function. But it is not entirely accurate; that's why the trust region is used: it restricts the solution of the trust-region subproblem to a region in which the quadratic approximation is accurate enough that the actual objective function improves when the subproblem's solution is applied. And if it doesn't improve enough, the subproblem is re-solved with a smaller trust region. Note that in some cases, such as when the objective function can only be evaluated via the solution to some differential equations, there isn't even a closed form for the actual objective function, but a quadratic approximation can still be used.

Note that steepest descent and gradient descent also use a quadratic approximation to the objective function, but with the Hessian (quadratic term) always being the identity matrix. Indeed, the trust-region approach can be combined with steepest descent/gradient descent (which would then be called trust-region steepest descent), quasi-Newton methods, and Newton's method, with exact or finite-difference gradients. If the BFGS quasi-Newton method is used (damped if constrained), each quadratic objective approximation can be kept convex. With the SR1 quasi-Newton update, or the actual Hessian, the quadratic approximation may or may not be convex. But each trust-region subproblem does not need to be solved to global, or even local, optimality, and it is bounded thanks to the trust region. Finally, note that trust regions can be used for optimization subproblems without the subproblem objective necessarily being quadratic: you could use a rational approximation, or even a cubic (if you're ready to have some fun with third-order tensors).
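To make the accept/shrink/expand mechanics concrete, here is a minimal sketch of a trust-region loop that solves each quadratic subproblem only approximately, via the classical Cauchy point. The constants (acceptance threshold, shrink/expand factors) are my own illustrative choices, not the specific algorithm from the book:

```python
import numpy as np

def cauchy_point(g, B, radius):
    """Approximately minimize m(p) = g.p + 0.5 p.B.p over ||p|| <= radius
    along the steepest-descent direction (the classical Cauchy point)."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(gnorm**3 / (radius * gBg), 1.0)
    return -tau * (radius / gnorm) * g

def trust_region(f, grad, hess, x, radius=1.0, max_radius=10.0,
                 eta=0.15, tol=1e-6, max_iter=20000):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)                              # model Hessian (may be indefinite)
        p = cauchy_point(g, B, radius)
        predicted = -(g @ p + 0.5 * p @ B @ p)   # decrease promised by the model
        actual = f(x) - f(x + p)                 # decrease actually achieved
        rho = actual / predicted if predicted > 0 else 0.0
        if rho < 0.25:                           # model untrustworthy here: shrink
            radius *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
            radius = min(2.0 * radius, max_radius)   # model good: expand
        if rho > eta:                            # accept the step only if f improved
            x = x + p
    return x

# Demo on the Rosenbrock function (Cauchy steps are steepest-descent-like,
# so convergence is slow; this only illustrates the mechanics).
f = lambda x: (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
x_star = trust_region(f, grad, hess, np.array([-1.2, 1.0]))
print(x_star, np.linalg.norm(grad(x_star)))
```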
I know that animals can't make poly-unsaturated fatty acids (PUFAs) and so require them from dietary sources, e.g., omega-3 and omega-6 fatty acids. My questions: Can animals synthesize other unsaturated fatty acids from scratch? If not, do the unsaturated fatty acids in the phospholipids of the plasma membrane of animals come only from dietary sources?

Yes, animals can make their own unsaturated fatty acids. Mammalian fatty acyl desaturases can introduce double bonds at the Δ5, Δ6 and Δ9 positions (i.e., numbering from the functional group). As shown in the diagram below, this means that we cannot introduce double bonds at the ω3 or ω6 positions (i.e., numbering from the methyl end of the molecule) in fatty acids of a reasonable length (see any biochemistry textbook for confirmation). I've illustrated the impossibility of our making α-linolenic acid as an example.

Only linoleic acid ($\omega 6$) and $\alpha$-linolenic acid ($\omega 3$) are essential. Other PUFAs are synthesized from these by desaturases (as Alan Boyd pointed out). I haven't come across any study that reports de novo synthesis of $\alpha$-linolenic acid in animals. However, this study reports that mites can synthesize linoleic acid.
This paper studies Newtonian Sobolev-Lorentz spaces. We prove that these spaces are Banach. We also study the global $p,q$-capacity and the $p,q$-modulus of families of rectifiable curves. Under some additional assumptions (namely, that $X$ carries a doubling measure and supports a weak Poincaré inequality), we show that when $1 \leq q < p$ the Lipschitz functions are dense in those spaces; moreover, in the same setting we also show that the $p,q$-capacity is Choquet provided that $q > 1$. We provide a counterexample to the density result in the Euclidean setting when $1 < p \leq n$ and $q = \infty$.
The short-rate tag has no usage guidance. Questions tagged short-rate:
- What's the difference between the short-rate model projection and the 3M forward curve? A term structure has a forward curve, so what is it that the short-rate model is projecting, exactly? Why is it needed? How are they different?
- Why is $f(t,u) \neq E_t^Q [r(u)]$ when $r$ is random?
- There are a number of short-rate models that give $r(t)$. How can those be used to construct the whole yield curve $y(t,T)$ (where $y(t, 0) = r(t)$)?
- Why is the logarithmic mean equal to the arithmetic expectation less one-half its variance?
- Almost spent the whole day: could anyone give a link to the Gsr model specification that is implemented in QuantLib, or give an explanation? Any help is highly appreciated.
- I have a conceptual question that needs help: does anyone know whether a short-rate model generates the discount rate or the forward rate?
- CallableFloatingRateBond in QuantLib: just a matter of multiple inheritance?
- CIR model: is the short rate really non-central $\chi^2$ distributed?
- For the Dothan model, is $E^Q[B(t)]=\infty$? How can I show that for the Dothan short-rate model we have $E^Q[B(t)]=\infty$, where the Dothan short-rate model is $dr_t = a r_t\,dt + \sigma r_t\,dW_t$? I appreciate any help. Thanks.
- How to price zero-coupon bonds with short-term rate models?
We present some elementary ideas to prove the following Sylvester-Gallai type theorems involving incidences between points and lines in the planes over the complex numbers and quaternions.
1. Let $A$ and $B$ be finite sets of at least two complex numbers each. Then there exists a line $\ell$ in the complex affine plane such that $\lvert(A\times B)\cap\ell\rvert=2$.
2. Let $S$ be a finite noncollinear set of points in the complex affine plane. Then there exists a line $\ell$ such that $2\leq \lvert S\cap\ell\rvert \leq 5$.
3. Let $A$ and $B$ be finite sets of at least two quaternions each. Then there exists a line $\ell$ in the quaternionic affine plane such that $2\leq \lvert(A\times B)\cap\ell\rvert \leq 5$.
4. Let $S$ be a finite noncollinear set of points in the quaternionic affine plane. Then there exists a line $\ell$ such that $2\leq \lvert S\cap\ell\rvert \leq 24$.
"New Contributions to Semipositive and Minimally Semipositive Matrices" by Projesh Nath Choudhury, Rajesh M. Kannan et al. Semipositive matrices (matrices that map at least one nonnegative vector to a positive vector) and minimally semipositive matrices (semipositive matrices whose no column-deleted submatrix is semipositive) are well studied in matrix theory. In this article, this notion is revisited and new results are presented. It is shown that the set of all $m \times n$ minimally semipositive matrices contains a basis for the linear space of all $m \times n$ matrices. Apart from considerations involving principal pivot transforms and the Schur complement, results on semipositivity and/or minimal semipositivity for the following classes of matrices are presented: intervals of rectangular matrices, skew-symmetric and almost skew-symmetric matrices, copositive matrices, $N$-matrices, almost $N$-matrices and almost $P$-matrices. Choudhury, Projesh Nath; Kannan, Rajesh M.; and Sivakumar, K. C.. (2018), "New Contributions to Semipositive and Minimally Semipositive Matrices", Electronic Journal of Linear Algebra, Volume 34, pp. 35-53.
Let $G$ be a group and $K$ a normal subgroup of $G$ of finite index. Then the set $X$ of all automorphisms $\varphi$ of $G$ that fix $K$ (meaning $\varphi(K)=K$, so not necessarily pointwise) has finite index in $\operatorname{Aut} G$. Is this even true? I am trying to work out a larger problem and this would be the final step, but I am not sure it is even correct. Thanks very much for your help. I really appreciate it!

If $\alpha$ and $\beta$ are automorphisms of $G$, then the cosets $\alpha X$ and $\beta X$ are the same iff $\alpha(K)=\beta(K)$. Thus $X$ will fail to have finite index if there are infinitely many different subgroups of $G$ that are conjugate to $K$ under automorphisms of $G$. For instance, if $G$ is an infinite-dimensional vector space over a finite field and $K$ is a subspace such that $G/K$ is finite-dimensional, then automorphisms of $G$ can send $K$ to any subspace $K'$ such that $G/K'$ has the same dimension, and there are infinitely many such subspaces.

Take $G$ to be the set of (two-sided) infinite sequences with entries in $\{0,1\}$, with component-wise addition as the operation (so it is an infinite direct product of $C_2$ with itself), and take $K$ to be the kernel of the projection onto one particular component. Clearly the right and left shifts are automorphisms of $G$, and under these $K$ has infinitely many images, so the stabilizer of $K$ in the automorphism group must have infinite index.
The '-' in 'masons-days' is not a minus sign but a dash that should be interpreted as multiplication; yes, math is a language that, like any other language, sometimes has ambiguities.

So now we only need to memorize the tables of 3, 4, 5, 6, 7, and 8, from 3 to 8: that is, $6 \times 6 = 36$ products instead of 110, which is much better. And since multiplication is commutative, e.g., $3 \times 7 = 7 \times 3$, we only need to memorize about half of these 36 products, i.e., roughly 18 products. Most people have little problem memorizing the tables of 3, 4, and 5, and memorizing the squares, i.e., the products of a digit by itself, like $7 \times 7$ and $8 \times 8$; we are left with only two 'difficult' products to memorize: $6 \times 7$ and $7 \times 8$.

$6 \times 7 = 42$ is a prestigious number since it is divisible by 1, 2, 3, 6, 7, 14, 21, and 42, which is a lot; plus, according to Douglas Adams's "The Hitchhiker's Guide to the Galaxy", 42 is "The Answer to the Ultimate Question of Life, the Universe, and Everything", as found by the supercomputer Deep Thought. And $7 \times 8 = 56$: yes… "five, six, seven, eight". Simple, right?

The standard multiplication algorithm is not an "in place" algorithm; instead we need to find the partial products separately and then add them up, and if there is an error, we have to look for it in a different place. If we need the partial products, we can add the terms generated by each digit and end up with the same set of sums as in the U.S. standard algorithm.

$5! = 5 \times 4 \times 3 \times 2 \times 1 = 120$, which we read as "five factorial". Now let's consider some factorials, expressing $n!$ in terms of the previous factorial $(n-1)!$: for example, $5! = 5 \times 4!$, and in general $n! = n \cdot (n-1)!$.

These laws are analogous to the laws of addition. Since regrouping does not affect the result of multiplication, we usually omit the parentheses, but in reality the parentheses are implicitly there; it just happens that it does not matter where they are. The existence of a multiplicative identity introduces the number 1; it says that there exists one unique number, namely 1, that does not change the value of $a$ when we multiply it by it. The number $a^{-1}$ is called the reciprocal, or the multiplicative inverse, of $a$.
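The recurrence $n! = n \cdot (n-1)!$ is easy to see in code; here is a tiny sketch (my own illustration, not part of the original post):

```python
def factorial(n: int) -> int:
    """Compute n! via the recurrence n! = n * (n-1)!, with 0! = 1."""
    if n == 0:
        return 1                      # base case: 0! = 1 by convention
    return n * factorial(n - 1)       # each factorial reuses the previous one

# Each factorial is the previous factorial times the next integer:
for n in range(1, 7):
    print(f"{n}! = {n} * {n - 1}! = {factorial(n)}")
```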
Let $\mathcal L$ be a formal language used in the field of symbolic logic. Then the well-formed formulas of $\mathcal L$ are often referred to as logical formulas. They are symbolic representations of statements, and often of compound statements in particular. In mathematics, the two most universal types of logical formula are propositional formulas, such as $p \land (q \implies r)$, and first-order formulas, such as $\forall x \, (P(x) \implies \exists y \, Q(x,y))$.
Abstract: The group-theoretical aspects of spontaneous breaking in linear $\Sigma$ models are discussed. General conditions are formulated which must be satisfied by a multiplet of the group $G$ (compact or noncompact) for the construction on it of a $\Sigma$ model with a given stability subgroup $H$ of the vacuum. It is shown that application of the general formalism of $\Sigma$ models to the case of spontaneously broken space-time symmetries requires the introduction of additional coordinates beyond the four coordinates $x_\mu$. An investigation is also made of the connection between $\Sigma$ models of internal symmetries and the corresponding nonlinear realizations.
has a finite number of solutions in integers $q$ (here $\|\alpha\|$ denotes the distance from $\alpha$ to the nearest integer). Mahler's problem was solved affirmatively in 1964 by V.G. Sprindzhuk, who also proved similar results for complex and $p$-adic numbers, as well as for power series over finite fields. The original paper of Sprindzhuk is [a1].
I'm having trouble trying to make a $3\times3$ magic square with magic number $12$, and I can't figure it out.

All you need to do is come up with a nine-number arithmetic sequence whose average is 4. Enter the sequence, in order, into the bottom cell, the top right cell, the left cell, the top left cell, the middle cell, the bottom right cell, the right cell, the bottom left cell, and the top cell. This process works for any nine-number arithmetic sequence; the magic number will be three times the number in the middle of the square. In my magic square, the sequence is 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6.
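To check the construction, here is a short sketch (my own illustration, not from the original answer) that fills the cells in the stated order with the answer's sequence and verifies that all rows, columns and diagonals sum to 12:

```python
# Fill a 3x3 grid in the order given in the answer: bottom, top-right,
# left, top-left, middle, bottom-right, right, bottom-left, top.
# Cells are (row, col) with row 0 at the top.
order = [(2, 1), (0, 2), (1, 0), (0, 0), (1, 1), (2, 2), (1, 2), (2, 0), (0, 1)]
seq = [2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6]   # arithmetic sequence, average 4

grid = [[0.0] * 3 for _ in range(3)]
for (r, c), v in zip(order, seq):
    grid[r][c] = v

lines = ([[(r, c) for c in range(3)] for r in range(3)] +            # rows
         [[(r, c) for r in range(3)] for c in range(3)] +            # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals
for line in lines:
    assert sum(grid[r][c] for r, c in line) == 12   # magic number = 3 * middle
for row in grid:
    print(row)
```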
We show that for a metric space with an even number of points there is a $1$-Lipschitz map to a tree-like space with the same matching number. This result gives the first basic version of an unoriented Kantorovich duality. The study of the duality gives a version of global calibrations for $1$-chains with coefficients in $\mathbb Z_2$. Finally, we extend the results to infinite metric spaces and present a notion of "matching dimension" which arises naturally.
PS or PDF files of the following papers can be sent upon request if there is no download link. The publications are sorted by type: book, journal papers, conference papers, book chapters and industrial patents. You can also consult Google Scholar, MathSciNet, or HAL CNRS. F. Bribiesca Argomedo, E. Witrant, and C. Prieur, Safety Factor Profile Control in a Tokamak, SpringerBriefs in Electrical and Computer Engineering: Control, Automation and Robotics, ISBN 978-3-319-01957-4, 2014. Preliminary version. C. Roman, D. Bresch-Pietri, C. Prieur, O. Sename, Robustness to in-domain viscous damping of a collocated boundary adaptive feedback law for an anti-damped boundary wave PDE, IEEE Trans. Aut. Control, to appear, 2019. V. Magron, and C. Prieur, Optimal Control of PDEs using Occupation Measures and SDP Relaxations, IMA Mathematical Control & Information, to appear, 2019. C.-I. Chesneau, R. Robin, H. Meier, M. Hillion, and C. Prieur, Calibration of a magnetometer array using motion capture equipment, Asian J. Control, to appear, 2019. A. Seuret, C. Prieur, S. Tarbouriech, A. R. Teel, and L. Zaccarian, A nonsmooth hybrid invariance principle applied to robust event-triggered design, IEEE Trans. Aut. Control, to appear, 2019. M. A. Davo, D. Bresch-Pietri, C. Prieur, and F. Di Meglio, Stability analysis of a 2x2 linear hyperbolic system with a sampled-data controller via backstepping method and looped-functionals, IEEE Trans. Aut. Control, vol. 64 (4), pp. 1718-1725, 2019. Liguo Zhang, C. Prieur, and Junfei Qiao, PI boundary control of linear hyperbolic balance laws with stabilization of ARZ traffic flow models, Systems and Control Letters, vol. 123, pp. 85-91, 2019. C. Prieur, I. Queinnec, S. Tarbouriech, and L. Zaccarian, Analysis and synthesis of reset control systems, NOW Foundations and Trends in Systems and Control, vol. 6 (2-3), pp. 117-338, 2019. C. Prieur, and E. Trélat, Feedback stabilization of a 1D linear reaction-diffusion equation with delay boundary control, IEEE Trans. Aut. Control, vol. 64 (4), pp. 1415-1425, 2019. S. Tarbouriech, I. Queinnec, and C. Prieur, Nonstandard use of anti-windup loop for systems with input backlash, IFAC Journal of Systems and Control, vol. 8, pp. 33-42, 2018. C. Roman, D. Bresch-Pietri, E. Cerpa, C. Prieur, and O. Sename, Backstepping control of a wave PDE with unstable source terms and dynamic boundary, IEEE Control Systems Letters, vol. 2 (3), pp. 459-464, 2018. N. Espitia, A. Girard, N. Marchand, and C. Prieur, Event-based boundary control of a linear $2\times2$ hyperbolic system via backstepping approach, IEEE Trans. Aut. Control, vol. 63 (8), pp. 2686-2693, 2018. A. Caldeira, C. Prieur, D. Coutinho, and V. Leite, Regional stability and stabilization of a class of linear hyperbolic systems with nonlinear quadratic dynamic boundary conditions, European Journal of Control, vol. 43, pp. 46-56, 2018. A. Janon, M. Nodet, Ch. Prieur, and Cl. Prieur, Goal-oriented error estimation for parameter-dependent nonlinear problems, ESAIM: Mathematical Modelling and Numerical Analysis, vol. 52 (2), pp. 705-728, 2018. B. Mavkov, E. Witrant, C. Prieur, B. Maljaars, F. Felici, and O. Sauter, Experimental validation of a Lyapunov-based controller for the plasma safety factor and plasma pressure in the TCV tokamak, Nuclear Fusion, doi: 10.1088/1741-4326/aab16a, 2018. A. Tanwani, B. Brogliato, and C. Prieur, Well-posedness and output regulation for implicit time-varying evolution variational inequalities, SIAM J. Control Opt., vol. 56 (2), pp.
751-781, 2018. D. Bresch-Pietri, C. Prieur, and E. Trélat, New formulation of predictors for finite-dimensional linear control systems with input delay, Systems and Control Letters, vol. 113, pp. 9-16, 2018. M. Davo, C. Prieur, M. Fiacchini, and D. Nesic, Enlarging the basin of attraction by a uniting output feedback controller, Automatica, vol. 90, pp. 73-80, 2018. C. Prieur, and J. Winkin, Boundary feedback control of linear hyperbolic systems: application to the Saint-Venant - Exner equations, Automatica, vol. 89, pp. 44-51, 2018. L. Zhang, C. Prieur, and J. Qiao, Local exponential stabilization of semi-linear hyperbolic systems by means of a boundary feedback control, IEEE Control Systems Letters, vol. 2 (1), pp. 55-60, 2018. M. A. Davo, C. Prieur, and M. Fiacchini, Stability analysis of output feedback control systems with memory-based event-triggering mechanism, IEEE Trans. Aut. Control, vol. 62 (12), pp. 6625-6632, 2017. I. Queinnec, S. Tarbouriech, J.-M. Biannic, and C. Prieur, Anti-windup algorithms for Pilot-Induced-Oscillation alleviation, Aerospace Lab Journal, vol. 13, 2017. S. Marx, V. Andrieu, and C. Prieur, Cone-bounded feedback laws for $m$-dissipative operators on Hilbert spaces, Math. Control Signals Systems, vol. 29 (18), 2017. L. Zhang, and C. Prieur, Stochastic stability of Markov jump hyperbolic systems with application to traffic flow control, Automatica, vol. 86, pp. 29-37, 2017. S. Marx, E. Cerpa, C. Prieur, and V. Andrieu, Global stabilization of a Korteweg-de Vries equation with saturating distributed control, SIAM J. Control Opt., vol. 55 (3), pp. 1452-1480, 2017. L. Zhang and C. Prieur, Necessary and sufficient conditions on the exponential stability of positive hyperbolic systems, IEEE Trans. Aut. Control, vol. 62 (7), pp. 3610-3617, 2017. B. Mavkov, E. Witrant, C. Prieur, D. Moreau, Multi-experiment state space identification of coupled magnetic and kinetic parameters in tokamak plasmas, Control Engineering Practice, vol. 60, pp. 28-38, 2017. A. Seuret, C. Prieur, S. Tarbouriech, and L. Zaccarian, LQ-based event-triggered co-design for saturated linear systems, Automatica, vol. 74, pp. 47-54, 2016. S. Marx, V. Andrieu, and C. Prieur, Semi-global stabilization by an output feedback law from a hybrid state controller, Automatica, vol. 74, pp. 90-98, 2016. A. Tanwani, C. Prieur, and M. Fiacchini, Observer-based feedback stabilization of linear systems with event-triggered sampling and dynamic quantization, Systems and Control Letters, vol. 94, pp. 46-56, 2016. P.-O. Lamare, A. Girard, and C. Prieur, An optimisation approach for stability analysis and controller synthesis of linear hyperbolic systems, ESAIM: Control, Optim. Cal. Var., vol. 22 (4), pp. 1236-1263, 2016. V. Andrieu, C. Prieur, S. Tarbouriech, and L. Zaccarian, A hybrid scheme for reducing peaking in high-gain observers for a class of nonlinear systems, Automatica, vol. 72, pp. 138-146, 2016. F. Fichera, C. Prieur, S. Tarbouriech, and L. Zaccarian, LMI-based reset Hinf analysis and design for linear continuous-time plants, IEEE Trans. Aut. Control, vol. 61 (12), pp. 4157-4163, 2016. N. Espitia, A. Girard, N. Marchand, and C. Prieur, Event-based control of linear hyperbolic systems of conservation laws, Automatica, vol. 70, pp. 275-287, 2016. A. Janon, M. Nodet, Ch. Prieur, and Cl. Prieur, Global sensitivity analysis for the boundary control of an open channel, Math. Control Signals Systems, vol. 28 (1), 2016. Y. Tang, C. Prieur, and A. 
Girard, Singular perturbation approximation of linear hyperbolic systems of balance laws, IEEE Trans. Aut. Control, vol. 61, 10, pp. 3031-3037, 2016. A. Tanwani, B. Brogliato, and C. Prieur, Observer design for unilaterally constrained Lagrangian systems: a passivity-based approach, IEEE Trans. Aut. Control, vol. 61, 9, pp. 2386-2401, 2016. Y. Tang, C. Prieur, and A. Girard, Singular perturbation approximation by means of a H2 Lyapunov function for linear hyperbolic systems, Systems and Control Letters, vol. 88, pp. 24-31, 2016. N. Meslem, and C. Prieur, Event-based controller synthesis by bounding methods, European Journal of Control, vol. 26, pp. 12-21, 2015. P.-O. Lamare, A. Girard, and C. Prieur, Switching rules for stabilization of linear systems of conservation laws, SIAM J. Control Opt., vol. 53, 3, pp. 1599-1624, 2015. Y. Tang, C. Prieur, and A. Girard, Tikhonov theorem for linear hyperbolic systems, Automatica, vol. 57, pp. 1-10, 2015. M. Fiacchini, C. Prieur, and S. Tarbouriech, On the computation of set-induced control Lyapunov functions for continuous-time systems, SIAM J. Control Opt., vol. 53, 3, pp. 1305-1327, 2015. H. Stein Shiromoto, V. Andrieu, and C. Prieur, Region-dependent gain condition for asymptotic stability, Automatica, vol. 52, pp. 309-316, 2015. A. Tanwani, B. Brogliato, and C. Prieur, Stability notions for a class of nonlinear systems with measure controls, Math. Control Signals Systems, vol. 27, 2, pp. 245-275, 2015. F. Castillo, E. Witrant, C. Prieur, L. Dugard, and V. Talon, Fresh air fraction control in engines using dynamic boundary stabilization of LPV hyperbolic systems, IEEE Trans. Control Syst. Technology, vol. 23, 3, pp. 963-974, 2015. A. Tanwani, B. Brogliato, and C. Prieur, Stability and observer design for Lur'e systems with multivalued, non-monotone, time-varying nonlinearities and state jumps, SIAM J. Control Opt., vol. 52, 6, pp. 3639-3672, 2014. D. Matignon, and C. Prieur, Asymptotic stability of Webster-Lokshin equation, Mathematical Control and Related Fields, vol. 4, 4, pp. 481-500, 2014. C. Prieur, A.R. Teel, and L. Zaccarian, Relaxed persistent flow/jump conditions for uniform global asymptotic stability, IEEE Trans. Aut. Control, vol. 59, 10, pp. 2766-2771, 2014. S. Tarbouriech, I. Queinnec, and C. Prieur, Stability analysis and stabilization of systems with input backlash, IEEE Trans. Aut. Control, vol. 59, 2, pp. 488-494, 2014. F. Castillo, E. Witrant, C. Prieur, and L. Dugard, Boundary observers for linear and quasi-linear hyperbolic systems with application to flow control, Automatica, vol. 49, 11, pp. 3180-3188, 2013. H. Stein Shiromoto, V. Andrieu, and C. Prieur, Relaxed and hybridized backstepping, IEEE Trans. Aut. Control, vol. 58, 12, pp. 3236-3241, 2013. R. G. Sanfelice, and C. Prieur, Robust supervisory control for uniting two output-feedback hybrid controllers with different objectives, Automatica, vol. 49, 7, pp. 1958-1969, 2013. F. Bribiesca Argomedo, C. Prieur, E. Witrant, and S. Bremond, A strict Control Lyapunov Function for a diffusion equation with time-varying distributed coefficients, IEEE Trans. Aut. Control, vol. 58, 2, pp. 290-303, 2013. F. Bribiesca Argomedo, E. Witrant, C. Prieur, S. Brémond, R. Nouailletas, and J.-F. Artaud, Lyapunov-based infinite-dimensional control of the safety factor profile in a Tokamak plasma, Nuclear Fusion, 53, 033005, 2013. C. Prieur, S. Tarbouriech, and L. Zaccarian, Lyapunov-based hybrid loops for stability and performance of continuous-time control systems, Automatica, vol. 
49, 2, pp. 577-584, 2013. F. Fichera, C. Prieur, S. Tarbouriech, and L. Zaccarian, Using Luenberger observers and dwell-time logic for feedback hybrid loops in continuous-time control systems, International Journal of Robust and Nonlinear Control, vol. 23, 10, pp. 1065-1086, 2013. L. Hetel, J. Daafouz, S. Tarbouriech, and C. Prieur, Stabilization of linear impulse systems through nearly-periodic reset, Nonlinear Analysis: Hybrid Systems, vol. 7, 1, pp. 4-15, 2013. M. Fiacchini, S. Tarbouriech, and C. Prieur, Quadratic stability for hybrid systems with nested saturations, IEEE Trans. Aut. Control, vol. 57, 7, pp. 1832-1838, 2012. B. Robu, L. Baudouin, and C. Prieur, Active vibration control of a fluid/plate system using a pole placement controller, Int. J. Control, vol. 85, 6, pp. 684-694, 2012. B. Robu, L. Baudouin, C. Prieur, and D. Arzelier, Simultaneous H_\infty vibration control of fluid/plate system via reduced-order controller, IEEE Trans. Control Syst. Technology, vol. 20, 3, pp. 700-711, 2012. C. Prieur, and F. Mazenc, ISS-Lyapunov functions for time-varying hyperbolic systems of balance laws, Math. Control Signals Systems, vol. 24, 1, pp. 111-134, 2012. F. Mazenc, and C. Prieur, Strict Lyapunov functions for semilinear parabolic partial differential equations, Mathematical Control and Related Fields, vol. 1, 2, pp. 231-250, 2011. S. Agarwal, G. Carbou, S. Labbé, and C. Prieur, Control of a network of magnetic ellipsoidal samples, Mathematical Control and Related Fields, vol. 1, 2, pp. 129-147, 2011. S. Tarbouriech, T. Loquen, and C. Prieur, Anti-windup strategy for reset control systems, International Journal of Robust and Nonlinear Control, vol. 21, 10, pp. 1159-1177, 2011. S. Tarbouriech, C. Prieur, and I. Queinnec, Stability analysis for linear systems with input backlash through sufficient LMI conditions, Automatica, vol. 46, pp. 1911-1915, 2010. C. Roos, J.-M. Biannic, S. Tarbouriech, C. Prieur, and M. Jeanneau, On-ground aircraft control design using a parameter-varying anti-windup approach, Aerospace Science & Technology, vol. 14, 7, pp. 459-471, 2010. R. Goebel, C. Prieur, and A.R. Teel, Smooth patchy control Lyapunov functions, Automatica, vol. 45, 3, pp. 675-683, 2009. C. Prieur, Control of systems of conservation laws with boundary errors, Networks and Heterogeneous Media, vol. 4, 2, pp. 393-407, 2009. V. F. Montagner, R. C. L. F. Oliveira, T. R. Calliero, R. A. Borges, P. L. D. Peres, and C. Prieur, Robust absolute stability and nonlinear state feedback stabilization based on polynomial Lur'e functions, Nonlinear Analysis: Theory, Methods & Applications, vol. 70, pp. 1803-1812, 2009. V. Dos Santos, and C. Prieur, Boundary control of open channels with numerical and experimental validations, IEEE Trans. Control Syst. Tech., vol. 16, 6, pp. 1252-1264, 2008. P. Le Gall, C. Prieur, and L. Rosier, Exact controllability and output feedback stabilization of a bimorph mirror, ESAIM Proc., vol. 25, pp. 19-28, 2008. C. Prieur, J. Winkin, and G. Bastin, Robust boundary control of systems of conservation laws, Math. Control Signals Systems, vol. 20, 2, pp. 173-197, 2008. J. B. Lasserre, D. Henrion, C. Prieur, and E. Trélat, Nonlinear optimal control via occupation measures and LMI-relaxations, SIAM J. Control Opt., vol. 47, 4, pp. 1643-1666, 2008. L. Baudouin, C. Prieur, F. Guignard, and D. Arzelier, Robust control of a bimorph mirror for adaptive optics system, J. of Applied Optics, vol. 47, 20, pp. 3637-3645, 2008. C. Prieur, R. Goebel, and A. R. 
Teel, Hybrid feedback control and robust stabilization of nonlinear systems, IEEE Trans. Auto. Control, vol. 52, 11, pp. 2103-2117, 2007. P. Le Gall, C. Prieur, and L. Rosier, Output feedback stabilization of a clamped-free beam, Internat. J. Control, vol. 80, 8, pp. 1201-1216, 2007. C. Prieur, and E. Trélat, On two hybrid robust optimal stabilization problems, Int. J. of Tomography Statistics, vol. 6 (special volume), pp. 86-91, 2007. P. Le Gall, C. Prieur, and L. Rosier, Stabilization of a clamped-free beam with collocated piezoelectric sensor/actuator, Int. J. of Tomography Statistics, vol. 6 (special volume), pp. 104-109, 2007. M. Lenczner, and C. Prieur, Asymptotic model of an active mirror, Int. J. of Tomography Statistics, vol. 5 (special volume), pp. 68-72, 2007. P. Le Gall, C. Prieur, and L. Rosier, On the control of a bimorph mirror, Int. J. of Tomography Statistics, vol. 5 (special volume), pp. 97-102, 2007. C. Prieur, and E. Trélat, Quasi-optimal robust stabilization of control systems, SIAM J. Control Opt., vol. 45, 5, pp. 1875-1897, 2006. C. Prieur, and E. Trélat, Hybrid robust stabilization in the Martinet case, Control and Cybernetics, vol. 35, 4, pp. 923-945, 2006. S. Tarbouriech, C. Prieur, J.M. Gomes da Silva Jr., Stability analysis and stabilization of systems presenting nested saturations, IEEE Trans. Auto. Control, vol. 51, 8, pp. 1364-1371, 2006. E. Crépeau, and C. Prieur, Control of a clamped-free beam by a piezoelectric actuator, ESAIM: Control, Optim. Cal. Var., vol. 12, pp. 545-563, 2006. C. Prieur, Robust stabilization of nonlinear control systems by means of hybrid feedbacks, Rend. Sem. Mat. Univ. Pol. Torino, vol. 64, 1, pp. 25-38, 2006. C. Prieur, and E. Trélat, Robust optimal stabilization of the Brockett integrator via a hybrid feedback, Math. Control Signals Systems, vol. 17, 3, pp. 201-216, 2005. D. Matignon, and C. Prieur, Asymptotic stability of linear conservative systems when coupled with diffusive systems, ESAIM: Control, Optim. Cal. Var., vol. 11, pp. 487-507, 2005. C. Prieur, Asymptotic controllability and robust asymptotic stabilizability, SIAM J. Control Opt., vol.43, 5, pp. 1888-1912, 2005. C. Prieur, and J. de Halleux, Stabilization of a 1-D tank containing a fluid modeled by the shallow water equations, Systems and Control Letters, vol 52, 3-4, pp. 167-178, 2004. J. de Halleux, C. Prieur, J.-M. Coron, B. d'Andréa-Novel, and G. Bastin, Boundary feedback control in networks of open channels, Automatica, vol. 39, 8, pp. 1365-1376, 2003. C. Prieur, and A. Astolfi, Robust stabilization of chained systems via hybrid control, IEEE Trans. Auto. Control, vol. 48, 10, pp. 1768-1772, 2003. C. Prieur, Uniting local and global controllers with robustness to vanishing noise, Math. Control Signals Systems, 14, pp. 143-172, 2001. C. Kitsos, G. Besancon, and C. Prieur, A high-gain observer for a class of 2x2 hyperbolic systems with C1 exponential convergence, 3rd IFAC Workshop on Control of Systems Governed by Partial Differential Equation, XI Workshop Control of Distributed Parameter Systems, Oaxaca, Mexico, 2019. F. Ferrante, and C. Prieur, Boundary control design for linear 1-D balance laws in the presence of in-domain disturbances, European Control Conference (ECC'18), Napoli, Italy, 2019. F. Ferrante, and C. Prieur, Boundary control design for linear conservation laws in the presence of energy-bounded measurement noise, IEEE Conf. on Dec. and Cont. (CDC'18), Miami Beach, FL, USA, 2018. C. Kitsos, G. Besancon, and C. 
Prieur, High-gain observer design for a class of hyperbolic systems of balance laws, IEEE Conf. on Dec. and Cont. (CDC'18), Miami Beach, FL, USA, 2018. C. Roman, D. Bresch-Pietri, E. Cerpa, C. Prieur, and O. Sename, Backstepping control of a wave PDE with unstable source terms and dynamic boundary, IEEE Conf. on Dec. and Cont. (CDC'18), Miami Beach, FL, USA, 2018. A. Tanwani, S. Marx, and C. Prieur, Local Input-to-State stabilization of 1-D linear reaction-diffusion equation with bounded feedback, Math. Theory Network Syst. (MTNS'18), Hong Kong, 2018. S. Marx, Y. Chitour, and C. Prieur, Stability results for infinite-dimensional linear control systems subject to saturations, European Control Conference (ECC'18), Limassol, Cyprus, 2018. D. Pilbauer, D. Bresch-Pietri, F. Di Meglio, C. Prieur, and T. Vyhlidal, Input shaping for infinite dimensional systems with application on oil well drilling, European Control Conference (ECC'18), Limassol, Cyprus, 2018. L. Zhang, C. Prieur, and J. Qiao, Boundary feedback stabilization for a class of semi-linear hyperbolic systems, IEEE Conf. on Dec. and Cont. (CDC'17), Melbourne, Australia, 2017. N. Espitia, A. Girard, N. Marchand, and C. Prieur, Dynamic boundary control synthesis of coupled PDE-ODEs for communication networks under fluid flow modeling, IEEE Conf. on Dec. and Cont. (CDC'17), Melbourne, Australia, 2017. E. Cerpa, and C. Prieur, Effect of time scales on stability of coupled systems involving the wave equation, IEEE Conf. on Dec. and Cont. (CDC'17), pp. 1236-1241, Melbourne, Australia, 2017. C.-I. Chesneau, M. Hillion, and C. Prieur, Improving magneto-inertial attitude and position estimation by means of a magnetic heading observer, 8th Conf. on Indoor Positioning and Indoor Navigation (IPIN'17), Sapporo, Japan, 2017. N. Espitia, A. Girard, N. Marchand, and C. Prieur, Fluid-flow modeling and stability analysis of communication networks, IFAC World Congress on Automatic Control, Toulouse, France, 2017. A. Vieira, B. Brogliato, and C. Prieur, Optimal control of linear complementarity systems, IFAC World Congress on Automatic Control, Toulouse, France, 2017. C. Roman, D. Bresch-Pietri, E. Cerpa, C. Prieur, and O. Sename, Backstepping observer based-control for an anti-damped boundary wave PDE in presence of in-domain viscous damping, IEEE Conf. on Dec. and Cont. (CDC'16), Las Vegas (NV), USA, 2016. A. Tanwani, C. Prieur, and S. Tarbouriech, Input-to-State Stabilization in $H^1$-Norm for Boundary Controlled Linear Hyperbolic PDEs with Application to Quantized Control, IEEE Conf. on Dec. and Cont. (CDC'16), Las Vegas (NV), USA, 2016. M.A. Davo, M. Fiacchini, and C. Prieur, Output memory-based event-triggered control, IEEE Conf. on Dec. and Cont. (CDC'16), Las Vegas (NV), USA, pp. 3106-3111, 2016. C.-I. Chesneau, M. Hillion, and C. Prieur, Motion estimation of a Rigid Body with an EKF using Magneto-Inertial Measurements, 7th Conf. on Indoor Positioning and Indoor Navigation (IPIN'16), Madrid, Spain, 2016. A. Seuret, C. Prieur, S. Tarbouriech, and L. Zaccarian, Event-triggered control via reset control systems framework, 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS'16), Monterey (CA), USA, 2016. N. Espitia, A. Girard, N. Marchand, and C. Prieur, Event-based stabilization of linear systems of conservation laws using a dynamic triggering condition, 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS'16), Monterey (CA), USA, 2016. S. Marx, E. Cerpa, C. Prieur, and V. 
Andrieu, Global stabilization of a Korteweg-de Vries equation with a distributed control saturated in $L^2$-norm, 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS'16), Monterey (CA), USA, 2016. N. Meslem, and C. Prieur, Using the monotone property and sensors placement as basic tools to design set-membership state estimators, 22nd International Symposium on Math. Theory Network Syst. (MTNS'16), Minneapolis (MN), USA, 2016. C. Roman, D. Bresch-Pietri, C. Prieur, and O. Sename, Robustness of an adaptive output feedback of an anti-damped boundary wave PDE in presence of in-domain viscous damping, American Control Conference (ACC'16), Boston (MA), USA, 2016. A. Tanwani, A.R. Teel, and C. Prieur, On using norm estimators for event-triggered control with dynamic output feedback, IEEE Conf. on Dec. and Cont. (CDC'15), Osaka, Japan, pp. 5500-5505, 2015. Y. Tang, C. Prieur, and A. Girard, Stability analysis of a singularly perturbed coupled ODE-PDE system, IEEE Conf. on Dec. and Cont. (CDC'15), Osaka, Japan, 2015. A.F. Caldeira, C. Prieur, D. Coutinho, and V.J.S. Leite, Modeling and control of flow with dynamical boundary actions, IEEE Multi-Conference on Systems and Control, Sydney, Australia, 2015. P.-O. Lamare, A. Girard, and C. Prieur, Numerical computation of Lyapunov function for hyperbolic PDE using LMI Formulation and polytopic embeddings, 1st IFAC Workshop on Linear Parameter Varying Systems (LPVS 2015), Grenoble, France, 2015. S. Marx, E. Cerpa, C. Prieur, and V. Andrieu, Stabilization of a linear Korteweg-de Vries equation with a saturated internal control, Eur. Cont. Conf. (ECC'15), Linz, Austria, 2015. N. Meslem and C. Prieur, Event-based stabilizing controller using a state observer, First IEEE International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP), Krakow, Poland, 2015. C. Prieur, S. Tarbouriech, and J. M. Gomes da Silva Jr, Well-posedness and stability of a 1D wave equation with saturating distributed input, IEEE Conf. on Dec. and Cont. (CDC'14), Los Angeles (CA), USA, pp. 2846-2851, 2014. Y. Tang, C. Prieur, and A. Girard, Boundary control synthesis for hyperbolic systems: a singular perturbation approach, IEEE Conf. on Dec. and Cont. (CDC'14), Los Angeles (CA), USA, 2014. A. Janon, M. Nodet, C. Prieur, and C. Prieur, Global sensitivity analysis for the boundary control of an open channel, IEEE Conf. on Dec. and Cont. (CDC'14), Los Angeles (CA), USA, 2014. A. Tanwani, B. Brogliato, and C. Prieur, On Output Regulation in Systems with Differential Variational Inequalities, IEEE Conf. on Dec. and Cont. (CDC'14), Los Angeles (CA), USA, 2014. Y. Tang, C. Prieur, and A. Girard, Approximation of singularly perturbed linear hyperbolic systems, 21st Symp. Mathematical Theory of Networks and Systems (MTNS'14), Groningen, The Netherlands, 2014. A. Tanwani, B. Brogliato, and C. Prieur, On output regulation in state-constrained systems: An application to polyhedral case, IFAC World Congress on Automatic Control, Cape Town, South Africa, 2014. N. Meslem, and C. Prieur, State Estimation Based on Self-Triggered Measurements, IFAC World Congress on Automatic Control, Cape Town, South Africa, 2014. S. Marx, V. Andrieu, and C. Prieur, Using a high-gain observer for a hybrid output feedback: finite-time and asymptotic cases for SISO affine systems, American Control Conference (ACC'14), Portland (OR), USA, 2014. Y. Tang, C. Prieur, and A. 
Girard, A new H2-norm Lyapunov function for the stability of a singularly perturbed system of two conservation laws, IEEE Conf. on Dec. and Cont. (CDC'13), Firenze, Italy, pp. 3026-3031, 2013. H. Stein Shiromoto, V. Andrieu, and C. Prieur, Interconnecting a System Having a Single Input-to-State Gain With a System Having a Region-Dependent Input-to-State Gain, IEEE Conf. on Dec. and Cont. (CDC'13), Firenze, Italy, pp. 624-629, 2013. P.-O. Lamare, A. Girard, and C. Prieur, Lyapunov techniques for stabilization of switched linear systems of conservation laws, IEEE Conf. on Dec. and Cont. (CDC'13), Firenze, Italy, pp. 448-453, 2013. S. Tarbouriech, I. Queinnec, and C. Prieur, Stability analysis for systems with saturation and backlash in the loop, IEEE Conf. on Dec. and Cont. (CDC'13), Firenze, Italy, pp. 6652-6657, 2013. N. Meslem, and C. Prieur, Event-triggered algorithm for continuous-time systems based on reachability analysis, IEEE Conf. on Dec. and Cont. (CDC'13), Firenze, Italy, pp.~2048-2053, 2013. M. Fiacchini, C. Prieur, and S. Tarbouriech, Necessary and sufficient conditions for invariance of convex sets for discrete-time saturated systems, IEEE Conf. on Dec. and Cont. (CDC'13), Firenze, Italy, pp.~3788-3793, 2013. A. Tanwani, B. Brogliato, and C. Prieur, Passivity-Based Observer Design for a Class of Lagrangian Systems with Perfect Unilateral Constraints, IEEE Conf. on Dec. and Cont. (CDC'13), Firenze, Italy, pp. 3338-3343, 2013. A. Tanwani, B. Brogliato, and C. Prieur, On Stability of Measure Driven Differential Equations, 9th IFAC Symposium on Nonlinear Control Systems (NOLCOS'13), Toulouse, France, pp. 241-246, 2013. A. Seuret, C. Prieur, S. Tarbouriech, and L. Zaccarian, Event-triggered control with LQ optimality guarantees for saturated linear systems, 9th IFAC Symposium on Nonlinear Control Systems (NOLCOS'13), Toulouse, France, 2013. Y. Tang, C. Prieur, and A. Girard, Lyapunov stability of a singularly perturbed system of two conservation laws, 1st IFAC Workshop on Control of Systems Governed by Partial Differential Equations (CPDE'13), Paris, France, 2013. F. Fichera, C. Prieur, S. Tarbouriech, and L. Zaccarian, Static anti-windup scheme for a class of homogeneous dwell-time hybrid controllers, Eur. Cont. Conf. (ECC'12), Zurich, Switzerland, pp. 1681-1686, 2013. F. Castillo, E. Witrant, C. Prieur, and L. Dugard, Dynamic boundary stabilization of hyperbolic systems, IEEE Conf. on Dec. and Cont. (CDC'12), Maui (HI), USA, pp. 2952-2957, 2012. C. Prieur, S. Tarbouriech, and L. Zaccarian, Hybrid high-gain observer without peaking for planar nonlinear systems, IEEE Conf. on Dec. and Cont. (CDC'12), Maui (HI), USA, 2012. F. Fichera, C. Prieur, S. Tarbouriech, and L. Zaccarian, A convex hybrid H_infty synthesis with guaranteed convergence rate, IEEE Conf. on Dec. and Cont. (CDC'12), Maui (HI), USA, pp. 4217-4222, 2012. C. Prieur, A. Girard, and E. Witrant, Lyapunov functions for switched linear hyperbolic systems, 4th IFAC Conference on Analysis and Design of Hybrid Systems (ADHS'12), Eindhoven, The Netherlands, pp. 382-387, 2012. F. Fichera, C. Prieur, S. Tarbouriech, and L. Zaccarian, Hybrid state-feedback loops based on a dwell-time logic, 4th IFAC Conference on Analysis and Design of Hybrid Systems (ADHS'12), Eindhoven, The Netherlands, 2012. F. Fichera, C. Prieur, S. Tarbouriech, and L. Zaccarian, Improving The Performance of Linear Systems by Adding a Hybrid Loop: The Output Feedback Case, American Control Conference (ACC'12), Montréal, Canada, pp. 3192-3197, 2012. F. 
Bribiesca Argomedo, E. Witrant, and C. Prieur, D1-Input-to-State Stability of a Time-Varying Nonhomogeneous Diffusive Equation Subject to Boundary Disturbances, American Control Conference (ACC'12), Montréal, Canada, pp. 2978-2983, 2012. C. Prieur, and F. Mazenc, ISS Lyapunov functions for time-varying hyperbolic partial differential equations, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. (CDC-ECC'11), Orlando (FL), USA, pp. 4915-4920, 2011. M. Fiacchini, S. Tarbouriech, and C. Prieur, Invariance of symmetric convex sets for discrete-time saturated systems, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. (CDC-ECC'11), Orlando (FL), USA, pp. 7343-7348, 2011. A. Seuret, and C. Prieur, Event-based sampling algorithms based on a Lyapunov function, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. (CDC-ECC'11), Orlando (FL), USA, pp. 6128-6133, 2011. F. Bribiesca Argomedo, C. Prieur, E. Witrant, and S. Bremond, Polytopic control of the magnetic flux profile in a Tokamak plasma, IFAC World Congress on Automatic Control, Milan, Italy, 2011. L. Hetel, J. Daafouz, S. Tarbouriech, and C. Prieur, Reset control systems: stabilization by nearly-periodic reset, IFAC World Congress on Automatic Control, Milan, Italy, 2011. F. Mazenc, and C. Prieur, Strict Lyapunov functionals for nonlinear parabolic partial differential equations, IFAC World Congress on Automatic Control, Milan, Italy, 2011. C. Casenave, and C. Prieur, Controllability of SISO Volterra models via diffusive representation, IFAC World Congress on Automatic Control, Milan, Italy, 2011. C. Prieur, S. Tarbouriech, and L. Zaccarian, Improving the performance of linear systems by adding a hybrid loop, IFAC World Congress on Automatic Control, Milan, Italy, pp. 6301-6306, 2011. H. Stein Shiromoto, V. Andrieu, and C. Prieur, Combining a backstepping controller with a local stabilizer, American Control Conference (ACC'11), San Francisco (CA), USA, 2011. M. Fiacchini, S. Tarbouriech, and C. Prieur, Ellipsoidal invariant sets for saturated hybrid systems, American Control Conference (ACC'11), San Francisco (CA), USA, pp. 1452-1457, 2011. M. Fiacchini, S. Tarbouriech, and C. Prieur, Polytopic control invariant sets for differential inclusion systems: a viability theory approach, American Control Conference (ACC'11), San Francisco (CA), USA, pp. 1218-1223, 2011. B. Robu, V. Pommier-Budinger, L. Baudouin, C. Prieur, and D. Arzelier, Simultaneous H infinity vibration control of fluid/plate system via reduced-order controller, IEEE Conf. on Dec. and Cont. (CDC'10), Atlanta (GA), USA, pp. 3146-3151, 2010. F. Bribiesca Argomedo, E. Witrant, C. Prieur, D. Georges, and S. Bremond, Model-based control of the magnetic flux profile in a tokamak plasma, IEEE Conf. on Dec. and Cont. (CDC'10), Atlanta (GA), USA, pp. 6926-6931, 2010. J. Boada, C. Prieur, S. Tarbouriech, C. Pittet, and C. Charbonnel, Extended model recovery anti-windup for satellite control, IFAC Symp. on Automatic Control in Aerospace (ACA'10), Nara, Japan, 2010. T. Loquen, D. Nesic, C. Prieur, S. Tarbouriech, A. R. Teel, and L. Zaccarian, Piecewise quadratic Lyapunov functions for linear control systems with First Order Reset Elements, 8th IFAC Symp. on Nonlinear Control Systems (NOLCOS'10), Bologna, Italy, 2010. C. Prieur, L. Zaccarian, and S. Tarbouriech, Guaranteed stability for nonlinear systems by means of a hybrid loop, 8th IFAC Symp. on Nonlinear Control Systems (NOLCOS'10), Bologna, Italy, pp. 72-77, 2010. R. G. Sanfelice, and C. 
Prieur, Uniting two output-feedback hybrid controllers with different objectives, American Control Conference (ACC'10), Baltimore (MD), USA, pp. 910-915, 2010. J. Boada, C. Prieur, S. Tarbouriech, C. Pittet, and C. Charbonnel, Multi-saturation anti-windup structure for satellite control, American Control Conference (ACC'10), Baltimore (MD), USA, pp. 5979-5984, 2010. S. Tarbouriech, C. Prieur, I. Queinnec, and T. Simoes dos Santos, Global Stability for systems with nested backlash and saturation operators, American Control Conference (ACC'10), Baltimore (MD), USA, pp. 2665-2670, 2010. C. Prieur, R. C.L.F. Oliveira, S. Tarbouriech, and P. L.D. Peres, Stability analysis and state feedback control design of discrete-time systems with a backlash, American Control Conference (ACC'10), Baltimore (MD), USA, pp. 2688-2693, 2010. B. Robu, L. Baudouin, and C. Prieur, A controlled distributed parameter model for a fluid-flexible structure system: numerical simulations and experiment validations, 48th IEEE Conf. on Dec. and Cont. and 28th Chinese Cont. Conf. (CDC'09), Shanghai, China, pp. 5532-5537, 2009. V. Andrieu, C. Prieur, S. Tarbouriech, and D. Arzelier, Synthesis of a global asymptotic stabilizing feedback law for a system satisfying two different sector conditions 48th IEEE Conf. on Dec. and Cont. and 28th Chinese Cont. Conf. (CDC'09), Shanghai, China, 2009. J. Boada, C. Prieur, S. Tarbouriech, C. Pittet, and C. Charbonnel, Anti-windup design for satellites control with microtrusters, Am. Inst. of Aeronautics and Astronautics Conf. (AIAA'09), Chicago (IL), USA, 2009. B. Robu, L. Baudouin, and C. Prieur, A distributed parameter model for a fluid-flexible structure system, 5th IFAC Workshop on Control of Distributed Parameter Systems (CDPS'09), Toulouse, France, 2009. S. Agarwal, G. Carbou, and C. Prieur, A network of ferromagnetic ellipsoidal samples, 5th IFAC Workshop on Control of Distributed Parameter Systems (CDPS'09), Toulouse, France, 2009. V. Andrieu, and C. Prieur, Uniting two control Lyapunov functions for affine systems, IEEE Conf. on Dec. and Cont. (CDC'08), Cancun, Mexico, pp. 622-627, 2008. T. Loquen, S. Tarbouriech, and C. Prieur, Stability of reset control systems with nonzero reference, IEEE Conf. on Dec. and Cont. (CDC'08), Cancun, Mexico, pp. 3386-3391, 2008. C. Pittet, N. Despré, S. Tarbouriech, and C. Prieur, Nonlinear controller design for satellite reaction wheels unloading using anti-windup techniques, American Inst. of Aeronautics and Astronautics (AIAA'08) Conference, Honolulu (HI), USA, 2008. L. Baudouin, C. Prieur, F. Guignard, and D. Arzelier, Control of adaptive optics system: an H-infinite approach, IFAC World Congress on Automatic Control, Seoul, Korea, 2008. E. Crépeau, and C. Prieur, Motion planning of a reaction-diffusion system with a nontrivial dispersion matrix, 18th Symp. Mathematical Theory of Networks and Systems (MTNS'08), Blacksburg (VA), USA, 2008. C. Prieur, and A.R. Teel, Uniting a high performance, local controller with a global controller: the output feedback case for linear systems with input saturation, American Control Conference (ACC'08), Seattle, WA, pp. 2901-2908, 2008. S. Tarbouriech, and C. Prieur, Stability analysis for systems with nested backlash and saturation operators, IEEE Conf. on Dec. and Cont. (CDC'07), New-Orleans (LA), USA, pp. 5892-5897, 2007. T. Loquen, S. Tarbouriech, and C. Prieur, Stability analysis for reset systems with input saturation, IEEE Conf. on Dec. and Cont. (CDC'07), New-Orleans (LA), USA, pp. 3272-3277, 2007. V. 
Dos Santos, and C. Prieur, Boundary control of a channel: practical and numerical studies, 3rd IFAC Symp. on Syst. Struc. and Control (SSSC'07), Foz do Iguacu, Brazil, 2007. E. Crépeau, and C. Prieur, Motion planning of reaction-diffusion system arising in combustion and electrophysiology, IFAC Workshop on Control of Distributed Parameter Systems (CDPS'07), Namur, Belgium, 2007. V. Dos Santos, C. Prieur, and J. Sau, Boundary control of a channel in presence of small perturbations: a Riemann approach, IFAC Workshop on Control of Distributed Parameter Systems (CDPS'07), Namur, Belgium, 2007. C. Prieur, and A.R. Teel, Uniting local and global output feedback controllers, IFAC: Symp. on Nonlinear Control System (NOLCOS'07), Pretoria, South-Africa, 2007. R. Goebel, C. Prieur, and A.R. Teel, Relaxed characterizations of smooth patchy control Lyapunov functions, IFAC: Symp. on Nonlinear Control System (NOLCOS'07), Pretoria, South-Africa, 2007. S. Tarbouriech, and C. Prieur, Stability analysis for systems with backlash and saturated actuator, IFAC: Symp. on Nonlinear Control System (NOLCOS'07), Pretoria, South-Africa, 2007. V. F. Montagner, R. C. L. F. Oliveira, T. R. Calliero, R. A. Borges, P. L. D. Peres, and C. Prieur, Robust absolute stability and stabilization based on homogeneous polynomially parameter-dependent Lur'e functions, American Control Conference (ACC'07), New-York (NY), USA, 2007. R. Goebel, C. Prieur, and A.R. Teel, Smooth patchy control Lyapunov functions, IEEE Conf. on Dec. and Cont. (CDC'06), San Diego (CA), USA, 2006. S. Tarbouriech, and C. Prieur, L2-performance analysis for sandwich systems with backlash, IEEE Conf. on Dec. and Cont. (CDC'06), San Diego (CA), USA, pp. 1364-1371, 2006. C. Prieur, Boundary control of non-homogeneous hyperbolic systems, 8th french-romanian Conf. of applied math., Chambéry, France, 2006. L. Baudouin, C. Prieur, and D. Arzelier, Robust control of a bimorph mirror for adaptive optics system, 17th Symp. Mathematical Theory of Networks and Systems (MTNS'06), Kyoto, Japan, 2006. S. Tarbouriech, and C. Prieur, Stability Analysis for Sandwich Systems with Backlash: an LMI approach, 5th IFAC Symposium on Robust Control Design (ROCOND'06), Toulouse, France, 2006. R.G. Sanfelice, A.R. Teel, R. Goebel, and C. Prieur, On the robustness to measurement noise and unmodeled dynamics of stability in hybrid systems, American Control Conference (ACC'06), Minneapolis, Minnesota, pp. 4061-4066, 2006. C. Prieur, and E. Trélat, On two hybrid robust optimal stabilization problems, 13th IFAC Workshop on Control applications of Optimization, Cachan, France, 2006. M. Lenczner, and C. Prieur, Asymptotic model of an active mirror, 13th IFAC Workshop on Control applications of Optimization, Cachan, France, 2006. P. Le Gall, C. Prieur, and L. Rosier, Stabilization of a clamped-free beam with collocated piezoelectric sensor/actuator, 13th IFAC Workshop on Control applications of Optimization, Cachan, France, 2006. P. Le Gall, C. Prieur, and L. Rosier, On the control of a bimorph mirror, 13th IFAC Workshop on Control applications of Optimization, Cachan, France, 2006. C. Prieur, R. Goebel, and A.R. Teel, Results on robust stabilization of asymptotically controllable systems by hybrid feedback, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. (CDC-ECC'05), Sevilla, Spain, pp. 2598-2603, 2005. C. Prieur, and E. Trélat, Semi-global minimal time hybrid robust stabilization of analytic driftless control-affine systems, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. 
(CDC-ECC'05), Sevilla, Spain, pp. 5438-5443, 2005. S. Tarbouriech, C. Prieur, and J. M. Gomes Da Silva Jr, L2 performance for systems presenting nested saturations, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. (CDC-ECC'05), Sevilla, Spain, pp. 5000-5005, 2005. C. Prieur, J. Winkin, and G. Bastin, Boundary control of non-homogeneous systems of two conservation laws, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. (CDC-ECC'05), Sevilla, Spain, pp. 1899-1904, 2005. J.B. Lasserre, C. Prieur, and D. Henrion, Nonlinear optimal control: Numerical approximations via moments and LMI-relaxations, IEEE Conf. on Dec. and Cont. and Eur. Cont. Conf. (CDC-ECC'05), Sevilla, Spain, pp. 1648-1653, 2005. S. Tliba, C. Prieur, and H. Abou-Kandil, Active Vibration Damping of a Smart Flexible Structure Using Piezoelectric Transducers: H-infinity Design and Experimental Results, IFAC World Congress on Automatic Control, Prague, Czech Republic, 2005. S. Tarbouriech, C. Prieur, and J.M. Gomes da Silva Jr., An anti-windup strategy for a flexible cantilever beam, IFAC World Congress on Automatic Control, Prague, Czech Republic, 2005. S.Tarbouriech, C. Prieur, and J.M.Gomes Da Silva, Stability analysis and stabilization of systems presenting nested saturations, IEEE Conf. on Dec. and Cont. (CDC'04), Nassau, Bahamas, pp. 5493-5498, 2004. J. de Halleux, C. Prieur, and G. Bastin, Boundary control design for cascades of hyperbolic 2x2 PDE systems via graph theory, IEEE Conf. on Dec. and Cont. (CDC'04), Nassau, Bahamas, pp. 3313-3318, 2004. C. Prieur, and L. Praly, A tentative direct Lyapunov design of output feedbacks, IFAC Symp. on Nonlinear Control Systems (NOLCOS'04), Stuttgart, Germany, pp. 1121-1126, 2004. C. Prieur, and E. Trélat, Robust optimal stabilization of the Brockett integrator via a hybrid feedback, invited by F. Lamnabhi-Lagarrigue, mini-symposium on Stabilization of uncertain hybrid and nonlinear systems, Symp. Mathematical Theory of Networks and Systems (MTNS'04), Leuven, Belgium, 2004. C. Prieur, P. Bendotti, and L. El-Ghaoui, Robust optimization-based control: an LMI approach, IEEE Conf. on Dec. and Cont. (CDC'00), Sydney, Australia, Vol. 3 , pp. 2317-2322, 2000. Y. Tang, C. Prieur, and A. Girard, Singular perturbation approach for linear coupled ODE-PDE systems, in "Delays and Interconnections: Methodology, Algorithms and Applications", G. Valmorbida, R. Sipahi, I. Boussaada and A. Seuret (eds.), Advances in Delays and Dynamic, Springer, pp. nn-nn, 2018. A. Tanwani, C. Prieur, and S. Tarbouriech, Stabilization of Linear Hyperbolic Systems of Balance Laws with Measurement Errors, in "Control subject to computational and communication constraints", S.Tarbouriech, A. Girard, and L. Hetel (eds.), Springer, pp. 357-374, 2018. S. Tarbouriech, A. Seuret, C. Prieur, and L. Zaccarian, Insights on event-triggered control for linear systems subject to norm-bounded uncertainty, in "Control subject to computational and communication constraints", S. Tarbouriech, A. Girard, L. Hetel (eds.), Springer, pp. 181-196, 2018. S. Tarbouriech, I. Queinnec, J.-M. Biannic, and C. Prieur, Pilot-Induced-Oscillations alleviation through anti-windup based approach, in "Space Engineering", G. Fasano and J. D. Pinter (eds.), Springer Optimization and Its Applications, vol. 114, Springer, pp. 401-424, 2016. F. Castillo, E. Witrant, C. Prieur and L. Dugard, Dynamic boundary stabilization of first order hyperbolic systems, in "Recent results on time-delay systems: analysis and control", E. Witrant, E. Fridman, O. Sename and L. 
Dugard (eds.), vol. 5 of the series Advances in Delays and Dynamics, pp. 169-190, Springer, 2016. M. Fiacchini, S. Tarbouriech, and C. Prieur, Exponential stability for hybrid systems with saturations, in "Hybrid Systems with Constraints", J. Daafouz, S. Tarbouriech, and M. Sigalotti (eds.), Wiley, 2013. J. Boada, C. Prieur, S. Tarbouriech, C. Pittet, and C. Charbonnel, Formation flying control for satellites: anti-windup based approach, in "Modeling and optimization in space engineering", G. Fasano and J. D. Pinter (eds.), Springer Optimization and Its Applications, vol. 73, Springer, 2012. B. De Schutter, W. P. M. H. Heemels, J. Lunze, and C. Prieur, Survey of modeling, analysis and control of hybrid systems, in "Handbook of Hybrid Systems Control, Theory-Tools-Applications", J. Lunze and F. Lamnabhi-Lagarrigue (eds.), Cambridge University Press, Cambridge, 2009. C. Roos, J.-M. Biannic, S. Tarbouriech, and C. Prieur, On-ground aircraft control design using an LPV anti-windup approach, in "Nonlinear analysis and synthesis techniques for aircraft control", Lecture Notes in Control and Information Sciences, vol. 365, Springer-Verlag, 2007. C. Prieur, Perturbed hybrid systems, applications in control theory, in Fourth Nonlinear Control Network (NCN) Workshop, Nonlinear and Adaptive Control, A. Zinober and D. Owens (eds.), Lecture Notes in Control and Information Sciences, 281, Springer, Berlin, 2002, pp. 285-294. C. Prieur, A robust globally asymptotically stabilizing feedback: the example of the Artstein's circles, in Nonlinear Control in the Year 2000, A. Isidori, F. Lamnabhi-Lagarrigue and W. Respondek (eds.), Lecture Notes in Control and Information Sciences, Vol. 258, Springer Verlag, London (2000), pp. 279-300. C.-I. Chesneau, M. Hillon, C. Prieur, and D. Vissière, Magnetic heading estimation with magnetic sensors, Sysnav, CNRS, 1757223, July 2017. C.-I. Chesneau, M. Hillon, D. Vissière, and C. Prieur, Navigation estimation in a perturbed magnetic field, Sysnav, CNRS, 1756958, July 2017.
CommonCrawl
The area of applied harmonic analysis provides a variety of multiscale systems such as wavelets, curvelets, shearlets, or ridgelets. A distinct property of each of those systems is the fact that it sparsely approximates a particular class of functions. Some of these systems even share similar approximation properties, such as curvelets and shearlets, which both optimally sparsely approximate functions governed by curvilinear features, a fact that is usually proven on a case-by-case basis for each different construction. The recently developed framework of parabolic molecules, which includes all known anisotropic frame constructions based on parabolic scaling, provides a unified concept for the sparse approximation results of such systems. In this talk we will introduce the novel concept of $\alpha$-molecules which allows for a unified framework encompassing most multiscale systems from the area of applied harmonic analysis, with the parameter $\alpha$ serving as a measure for the degree of anisotropy. The main result essentially states that the cross-Gramian of two systems with the same degree of anisotropy exhibits a strong off-diagonal decay. One main consequence we will discuss is that all such systems then share similar approximation properties, and desirable approximation properties of one can be deduced for virtually any other system with the same degree of anisotropy. Joint work with P. Grohs (ETH Zurich), S. Keiper (TU Berlin) and M. Schäfer (TU Berlin).
CommonCrawl
For a while now I've been looking for a LaTeX editor that allows real time preview. It's much more convenient and allows for a flexible and smooth workflow. So I wanted to give it a try. What I found was an Apple app called "Latexian" and I was a bit skeptical about it in the first place because its icon (a globe) looked a bit – well let's say unsuitable. Furthermore it is not free, but since $9.99 is still reasonable and it first gives you 30 days of free trial I went for it. And now I must say it's just great! I've been working with Texmaker and TeXShop before – they weren't bad but now that I have a really good app with live preview, I would not go back to these at all! The live preview is shown in the lower half of the window and allows you to instantly see the changes you make. Of course the live preview is just an option – so you don't have to use it all the time. I know a proverb is not meant to be taken literally, but in this case it was: never judge a book by its cover. I ordered a used copy of Reed and Simon's Volume 3 from Amazon and I was very happy when I got it because from the outside it looked very good for being a used book. The cover was the original blue and grey hardcover and the title said "Scattering theory". I flipped over the pages briefly to see whether I'd find markings, but no, there weren't any. At that point I just thought: Wow, this text looks very narrative! When I finally got to the first pages (I have the habit of always flipping the pages starting from the end of the book), the word "Africa" caught my attention. My first thought was: Since when do they advertise non-mathematical titles in a math text? But then I realized: it was not just an ad, it was actually the title of the book! Now that I went over the pages again, I saw that I had purchased 480 pages on the African Union and new strategies for development in Africa! I have always thought that double strike letters like $\mathbb{N}$, $\mathbb{R}$, $\mathbb{Q}$ were some smart invention to give sets we find so incredibly important a cleverly distinguished notation. It turns out the evolution of this notation was quite a coincidence. These letters originated from the blackboard – formerly the sets of natural, real, rational numbers and so on were denoted by bold letters (which you can actually still find in old textbooks). However, writing bold letters on the blackboard – well that's beyond our artistic skills (or just very inconvenient) and therefore people started to write double strike letters on the blackboard to represent bold letters. And since mathematicians are always short of symbols, people adapted these letters to printed texts too. Now this explains the command \mathbb in $\LaTeX$ – math blackboard bold. Some people think it is not OK to use this notation; there is even an entire Wikipedia article about this subject.
CommonCrawl
Recent progress in the Langlands program provides a significant step towards the understanding of the arithmetic of global fields. The geometric Langlands program provides a systematic way to construct l-adic sheaves (resp. D-modules) on algebraic curves which subsumes the construction of classical sheaves, like rigid local systems, used in inverse Galois theory (by Belyi, Malle, Matzat, Thompson, Dettweiler, Reiter) for the construction of field extensions of the rational function fields $\mathbb F_p(t)$ or $\mathbb Q(t)$ (recent work of Heinloth, Ngo, Yun and Yun). On the other hand, using the Langlands correspondence for the field $\mathbb Q$, Khare, Larsen and Savin constructed many new automorphic representations which lead to new Galois realizations for classical and exceptional groups over $\mathbb Q$. It was the aim of the workshop to bring together the experts working in the fields of Langlands correspondence and constructive Galois theory.
CommonCrawl
Rants about programming stuff, discussions about personal projects, or whatever else I feel like posting. A bounding box generally refers to an axis-aligned rectangular region of space used as a first, coarse step of collision detection. Every object is given a bounding box which covers all space the object could possibly occupy. If the bounding boxes of two objects overlap, the simulation needs to do a more precise but expensive collision check; but if they do not overlap, they certainly do not collide and the pair can be skipped. Since Hexeline uses an oblique coordinate system, the same approach results in bounding rhombi instead of boxes, though that distinction is not particularly interesting here. What is interesting is how bounding boxes/rhombi can be handled extremely efficiently with SIMD. I fully expect that this technique has been discovered by someone else previously, but it's still worth discussing. The rest of this post will just use "bounding box" and normal (X,Y) coordinates so familiarity with "hexagonal coordinates" is not necessary. There are a couple ways a bounding box can be represented. The one we're interested in here is the 2D interval representation, wherein we have two points $(x_0,y_0)$ and $(x_1,y_1)$ such that $x_0 \le x_1$ and $y_0 \le y_1$. We then say that any point $(x,y)$ is within the box if $x_0 \le x \le x_1$ and $y_0 \le y \le y_1$. [Code listing lost in extraction; it showed the bounding box stored as a single `i32x4` vector.] Before getting into how we check these for overlap, let's think about another operation: union. In order to efficiently find candidate objects for collision checks, a common approach is to build some form of tree of bounding boxes. This requires each node to have a bounding box that encompasses at least all space of its children, i.e., to be a union of its children. We can certainly implement union with the current representation. Unfortunately, this isn't great. With SSE4.1, it's just a pminsd, pmaxsd, pblendw sequence. But for systems without min and max instructions, these need to be emulated, and the fact that there are two of them causes a large portion of the register file to be used just for this operation. It could be improved by inlining the min and max emulation to remove a lot of the redundant work, but we can still make something better for both cases. The only reason we needed the extra instructions here is that the condition for choosing the lower-bound coordinate is different from that of choosing the upper-bound coordinate. With a simple tweak to the representation, we can make that condition the same. We simply negate $x_0$ and $y_0$, which allows finding the minimum $x_0$ and $y_0$ to be done by finding the maximum of their negated values. Thus, all lanes are selected by the same criterion. With SSE4.1, this produces a single pmaxsd instruction. ARM with Neon is also a single instruction. max still needs to be emulated elsewhere, but there's now a lot less to do. Checking whether two interval-based bounding boxes overlap is simply a matter of checking whether both one-dimensional intervals they comprise overlap. Suppose we have a pair of one-dimensional intervals, $(l_0,l_1)$ and $(r_0,r_1)$. It may be non-obvious at first, but they overlap if and only if $l_0 \le r_1$ and $r_0 \le l_1$. However, in our bounding box representation, we don't have the lower bounds directly; we have the negated lower bounds, so we would need to check $-(-l_0) \le r_1$ and $-(-r_0) \le l_1$. This looks pretty inconvenient from a SIMD perspective: we're doing a different operation on each lane.
But it can actually be simplified quite a bit. We can start by putting all the $l$ terms on the left-hand side: $-(-l_0) \le r_1$ and $l_1 \ge -(-r_0)$. One of the $r$ terms is negated here; the other is not, so multiply both sides of that equation by $-1$ (which also changes $\le$ to $\ge$): $(-l_0) \ge -r_1$ and $l_1 \ge -(-r_0)$. We now have something SIMD-friendly. Every lane of the right-hand side is negated, and we then perform the same comparison on every lane after a shuffle. We can view this as a two-step process: "invert" the right-hand side by negating all the terms and putting the upper bounds in the lower bounds' lanes and vice-versa, then perform the comparison. For the comparison step, we want to check that every lane of the left-hand side is greater than or equal to the corresponding lane of the right-hand side. Assuming space is not so large that overflow is a concern, we know that if $a \ge b$, the result of $a-b$ will never have its sign bit set. The "move mask" primitive (movmskps in SSE speak) will give us the sign bit of all lanes in a single integer. Putting these together, we can simply subtract the two vectors and test that all the sign bits are zero. That's about it for bounding boxes. It may seem like all this just makes already fast operations even faster, but both unions and overlap tests must be performed multiple times per update for every object and so count against Hexeline's 100 nanosecond budget, so making them as fast as possible is extremely important. Copyright © 2013, 2014, 2017, Jason Lingle. This work is licensed under CC-BY-4.0.
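The post's implementation is Rust with `i32x4`; as a language-neutral sketch of the same trick, here is the representation in NumPy, where a four-lane array stands in for the SIMD register and a sign test plays the role of the move mask (names are illustrative):

```python
import numpy as np

def make_box(x0, y0, x1, y1):
    # Store (-x0, -y0, x1, y1) so that both union lanes use "max".
    return np.array([-x0, -y0, x1, y1], dtype=np.int32)

def union(a, b):
    # One lane-wise max replaces the min/max/blend sequence.
    return np.maximum(a, b)

def overlaps(a, b):
    # "Invert" b: swap the lower/upper halves and negate every lane,
    # then check that no lane of (a - inverted) has its sign bit set.
    inverted = -np.concatenate([b[2:], b[:2]])
    return not np.any((a - inverted) < 0)

# Tiny check: [0,2]x[0,2] overlaps [1,3]x[1,3] but not [3,4]x[3,4].
a = make_box(0, 0, 2, 2)
assert overlaps(a, make_box(1, 1, 3, 3))
assert not overlaps(a, make_box(3, 3, 4, 4))
```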
CommonCrawl
"She was there every day, hour after hour." "...a mischievous damsel, whose hair was a handful..." He was in love with Rap-Punzel... as in love as can be, "...when the prince could scale the tower with a short little climb." "But he's tired of waiting, what a waste of days. He should evaluate expressions but he doesn't know the way." Let's help MC Prince. At the start of this tale, Rap-Punzel's hair is 50 inches long. Each month, it grows 5 inches. Let's write this as an expression… 50 is the constant number in this expression, meaning it does not change. We said before that her hair grows at a rate of 5 inches per month. The number of months is the unknown quantity, which we will call x. We can also use any other variable. The number five is the coefficient of the variable. So… the expression to describe the length of Rap-Punzel's hair after a variable number of months is 5x + 50. OK, let's use our expression to evaluate the length of Rap-Punzel's glorious hair over a period of a few months. Let's set up a function table… When evaluating expressions, always remember to use the correct order of operations. ... she cut her hair. Now it's just 10 inches long. To keep her hair healthy, Rap-Punzel must trim her hair two inches every month. Despondent, MC Prince wondered, how much longer must he wait to be with his beloved? Let's modify our expression. 5x for the growth per month minus 2x for the trimming per month plus ten for the starting length. Now, simplify the expression. 3x + 10. MC Prince wondered, '...after six more months, how long will her hair be?...' Evaluate the expression, this time, letting x equal six… use the correct order of operations, 3(6) = 18; plus ten equals twenty-eight. Her hair will only be twenty-eight inches long! "...he had an idea, on which he was keen." To evaluate expressions, it is helpful to review some words associated with this topic. First, what is an expression, and how does it differ from an equation? An expression does not have an equal sign while an equation does. An equation includes two expressions with an equal sign in between the expressions. Expressions are made up of terms. A term can be a number by itself, and this is called a constant. A term can also include a variable, which is a letter or other symbol that stands in for an unknown value, usually the letter x. Variables can be alone or attached to numbers by the operation of multiplication or division, such as 3x or x/3. This number attached to the variable is called a coefficient. Now that you know the words to describe expressions, you are ready to simplify them. To simplify expressions, follow PEMDAS then combine the like terms. Constants go with constants, and same variables go with same variables. When you are ready to evaluate the expressions, which means to solve them for a specific value of the variable, follow the correct order of operations, PEMDAS. To learn more about evaluating expressions and have a laugh, watch this video. If you need to get rid of the coefficient, as in 0.5x, it is best to divide. So divide both sides by 0.5 and that will give you the answer. Would you like to apply what you have learned? With the exercises for the video Evaluating Expressions you can review and practice it. Calculate the length of hair after a given number of months. You can use a linear function to represent the length of Rap-Punzel's hair. The unknown quantity is the number of months. Let's look at another example: The amount of money you receive on your birthday increases by $\$5$ each year.
You start with $\$15$. You have to multiply the number of months that have passed by the rate her hair grows, then add $50$. Label the different parts of the expression. A constant is independent of the variable. The term above represents the length of Rap-Punzel's hair after $x$ months. In the beginning, Rap-Punzel's hair is $50$ inches long. This expression represents the length of Rap-Punzel's hair after $x$ months. The coefficient to the variable is the rate of growth, $5$. $x$ represents the unknown number of months. Finally, our constant is the starting length of Rap-Punzel's hair in inches, $50$. Find an equation that represents Rap-Punzel's hair length after $x$ months. To avoid split ends, Rap-Punzel has to trim her hair. This means she has to cut her hair. Is her hair getting longer or shorter? You can simplify expressions and equations by combining like terms. For example: $2$ apples plus $3$ bananas plus $4$ apples results in $6$ apples and $3$ bananas. Poor Rap-Punzel and poor Prince MC. Rap-Punzel's hair grows $5$ inches per month. But she has split ends. To avoid getting split ends, she has to cut her hair $2$ inches per month. How can we write this as a mathematical expression? We can combine the like terms to get $3x+10$ as our final expression. Decide which function table belongs to which equation. To match to the correct function table, make sure more than one $x$-$y$ pair satisfies the equation. For each equation on the right, plug in several different $x$s and compare the answers. A function table is a useful tool used to set up a linear equation. You can plug in different values for the variable $x$ and check the corresponding $f(x)$. Evaluate how long Rap-Punzel's hair is after one year. Rap-Punzel's starting hair length is the constant. To write the simplified expression, the coefficient can be found by determining the difference between the growth rate of Rap-Punzel's hair and the monthly trimming needed to avoid split ends. To start, Rap-Punzel's hair is $10$ inches long. This is our constant. The variable is the unknown quantity of months, which we'll call $x$. The coefficient can be found by calculating the net growth per month of Rap-Punzel's hair. In this case, her hair grows $5$ inches per month, but she cuts $2$ inches each month as well. So our coefficient becomes $5-2=3$. Combining this information, we have the expression: $3x+10$. To determine the length of Rap-Punzel's hair after one year, we can plug in $x=12$. So after one year, Rap-Punzel's hair is $46$ inches long. The tower is a BIT taller than $46$ inches... Poor Prince MC. Determine how long it takes until the magic beanstalk grows to a height of $100$ inches. coefficient $\times x +$ constant. Did the expression simplify to $100$? So we can write the expression: $0.5x+20$. So, after $160$ days, more than $5$ months, the magic beanstalk will be $100$ inches high. Is this tall enough to reach Rap-Punzel?
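For readers who want to check the arithmetic, here is a tiny Python sketch of the function table and of solving the beanstalk equation (illustrative only):

```python
def hair_length(months, rate=3, start=10):
    # length = coefficient * variable + constant
    return rate * months + start

# Function table for 3x + 10:
for x in [1, 2, 6, 12]:
    print(x, hair_length(x))   # 1 -> 13, 2 -> 16, 6 -> 28, 12 -> 46

# Beanstalk: solve 0.5x + 20 = 100 by dividing out the coefficient.
x = (100 - 20) / 0.5
print(x)  # 160.0 days
```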
CommonCrawl
Takashi Takenouchi, Takafumi Kanamori; 18(56):1−26, 2017. In this paper, we focus on parameter estimation of probabilistic models in discrete space. A naive calculation of the normalization constant of a probabilistic model on discrete space is often infeasible, and statistical inference based on such probabilistic models is difficult. In this paper, we propose a novel estimator for probabilistic models on discrete space, which is derived from an empirically localized homogeneous divergence. The idea of the empirical localization makes it possible to ignore an unobserved domain on the sample space, and the homogeneous divergence is a discrepancy measure between two positive measures that has a weak coincidence axiom. The proposed estimator can be constructed without calculating the normalization constant and is asymptotically consistent and Fisher efficient. We investigate statistical properties of the proposed estimator and reveal a relationship between the empirically localized homogeneous divergence and a mixture of the $\alpha$-divergence. The $\alpha$-divergence is a non-homogeneous discrepancy measure that is frequently discussed in the context of information geometry. Using the relationship, we also propose an asymptotically consistent estimator of the normalization constant. Experiments showed that the proposed estimator performs comparably to the maximum likelihood estimator but with drastically lower computational cost.
CommonCrawl
The Bézier points $\mathbf b_i \in \mathbb R^d$ form the control polygon. Input: Bézier points $\mathbf b_i$ for $i = 0, \dots, n$, and parameter $t \in [0,1]$. Output: the point $\mathbf b_0^n$ on the curve. The algorithm iteratively forms $\mathbf b_i^0 = \mathbf b_i$ and $\mathbf b_i^k = (1-t)\,\mathbf b_i^{k-1} + t\,\mathbf b_{i+1}^{k-1}$ for $k = 1, \dots, n$ and $i = 0, \dots, n-k$. Visualisation of the steps of De Casteljau's algorithm, $t=0.5$. Animation of De Casteljau's algorithm for a quintic curve ($n=5$). Implement the De Casteljau algorithm and use it to evaluate the Bézier curves in the data folder. Visualise the curves together with their control polygons. Try varying the sampling density. How many samples are needed to give the impression of a smooth curve? Pick one dataset and visualise all intermediate polygons $\mathbf b_i^k$ from the De Casteljau algorithm for a fixed parameter, for instance $t=0.5$. Hint: each column in the above schema represents one such polygon.
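A minimal Python sketch of the algorithm (the data-folder file format is not specified in this excerpt, so the points are passed in as a plain list):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t via repeated interpolation.

    points: list of control points b_i, each a tuple of coordinates.
    Returns b_0^n together with all intermediate polygons b^k.
    """
    levels = [list(points)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        # b_i^k = (1 - t) * b_i^(k-1) + t * b_(i+1)^(k-1)
        nxt = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(prev, prev[1:])]
        levels.append(nxt)
    return levels[-1][0], levels

point, polygons = de_casteljau([(0, 0), (1, 2), (3, 3), (4, 0)], 0.5)
print(point)          # the point on the cubic at t = 0.5
print(len(polygons))  # n + 1 columns of the schema, incl. the input polygon
```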
CommonCrawl
One of the most fundamental structures in abstract algebra is called a magma. A magma is a pair $(M,\bullet )$, where $M$ is a nonempty set and $\bullet $ is a binary operation on $M$. This means that $\bullet $ maps each ordered pair of elements $(x,y)$ in $M$ to an element in $M$ (in other words, $\bullet $ is a function from $M \times M$ to $M$). The element to which $\bullet $ maps $(x,y)$ is denoted $x \bullet y$. P1 (associativity): For all $x,y,z \in M$, $(x \bullet y) \bullet z = x \bullet (y \bullet z)$. P2 (identity): There is an element $I \in M$ such that $x \bullet I = I \bullet x = x$ for all $x \in M$ ($I$ is called an identity). P3 [depends on P2] (inverses): There is an identity $I \in M$, and for every $x \in M$, there exists $x^* \in M$ such that $x \bullet x^* = x^* \bullet x = I$ ($x^*$ is called an inverse of $x$). A magma that satisfies P1 is called a semigroup. A semigroup that satisfies P2 is called a monoid. A monoid that satisfies P3 is called a group. Given a magma $(M,\bullet )$, determine its most specialized name. The input consists of a single test case specifying a magma $(M,\bullet )$. The first line contains an integer $n$ $(1 \leq n \leq 120)$, the size of $M$. Assume that the elements of $M$ are indexed $0,1,2,\ldots ,n-1$. Each of the next $n^2$ lines contains three integers $i~ j~ k$ $(0 \leq i,j,k \leq n-1)$, meaning that $x \bullet y = z$, where $x$ is the element indexed by $i$, $y$ is the element indexed by $j$, and $z$ is the element indexed by $k$. Each ordered pair $(i,j)$ will appear exactly once as the first two values on a line. The output consists of a single line. Output group if $(M,\bullet )$ satisfies P1, P2, and P3. Output monoid if $(M,\bullet )$ satisfies P1 and P2, but not P3. Output semigroup if $(M,\bullet )$ satisfies P1, but not P2 (and therefore not P3). Otherwise, output magma.
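A brute-force sketch of the classification in Python (an illustration, not an official solution; the $O(n^3)$ associativity check is the dominant cost and is fine for $n \leq 120$):

```python
import sys

def classify(n, op):
    # op[i][j] = index of x_i (bullet) x_j
    elems = range(n)
    assoc = all(op[op[i][j]][k] == op[i][op[j][k]]
                for i in elems for j in elems for k in elems)
    if not assoc:
        return "magma"
    identity = next((e for e in elems
                     if all(op[e][x] == x and op[x][e] == x for x in elems)),
                    None)
    if identity is None:
        return "semigroup"
    if all(any(op[x][y] == identity and op[y][x] == identity for y in elems)
           for x in elems):
        return "group"
    return "monoid"

def main():
    data = list(map(int, sys.stdin.read().split()))
    n = data[0]
    op = [[0] * n for _ in range(n)]
    for t in range(n * n):
        i, j, k = data[1 + 3 * t:4 + 3 * t]
        op[i][j] = k
    print(classify(n, op))

if __name__ == "__main__":
    main()
```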
CommonCrawl
You are on a rowboat in the middle of a large, perfectly circular lake. On the perimeter of the lake is a monster who wants to eat you, but fortunately, he can't swim. He can run (along the perimeter) exactly $4\times$ as fast as you can row, and he will always run towards the closest bit of shore to you. If you can touch shore even for a second without the monster already being upon you, you can escape. Suggest a strategy that will allow you to escape, and prove that it works. If two paths take the monster to this location equally quickly, he will arbitrarily choose one. The monster can reverse direction instantaneously, and you can turn your boat instantaneously. Follow up: What is the minimum speed of the monster (relative to your boat) such that escape becomes impossible? First of all, row out to a radius $R/4$ (where the lake has radius $R$) keeping you, the centre of the lake and the monster in a straight line, with you on the far side from the monster. This is always possible; radius $R/4$ is the largest radius at which the angular speed you can achieve still matches that of the monster as he runs round to get you. You are now a distance $3R/4$ away from the shore, directly opposite the monster, so he needs to run a distance $\pi R$ to get you. You will take time $3R/(4V)$ at speed $V$ if you now row directly towards the nearest shore, and he will take $\pi R/(4V)$, which is fractionally greater. For the followup: If instead of $4\times$, the monster runs $N\times$ your speed... then you row out to radius $R/N$, you then take time $(N-1)R/(NV)$ to reach shore and he takes $\pi R/(NV)$ to reach the same point. You escape provided that $N < \pi + 1 \approx 4.1416$. It is possible to escape a monster that is a little more than $4.6$ times faster. Given a monster running at speed $X$ times rowing speed and a lake with radius $R$, you must first row to the circle that is $R/X$ from the centre of the lake by spiralling out while keeping the centre of the lake between you and the monster. Thereafter you should take a route that is tangential to this circle, in the direction opposite to the one the monster is currently running. This is longer than going the direct route to shore, but the monster also gets a longer route to run. Your path in red, monster path in green. The shape of the inner spiral is not exact. Why is this exact solution best? Because of differential maths. That is a long explanation; personally, I find that stuffing some equations into Wolfram Alpha works. The monster wants to eat me, but he is also particularly fond of my boat since "he will always run towards the closest bit of shore to your boat." So I row the boat in one direction and then jump off and swim in the opposite direction. Even the fastest monsters would never catch me!
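For the improved tangent strategy, the critical speed ratio can be checked numerically. As a sketch (my own check, not part of the original answers, with $R = V = 1$): your tangent run has length $\sqrt{N^2-1}/N$, while the monster must cover the arc $\pi + \arccos(1/N)$ at speed $N$, so escape works exactly when $\pi + \arccos(1/N) > \sqrt{N^2-1}$.

```python
import math

def escape_margin(N):
    # Monster's time minus rower's time for the tangent strategy
    # from radius 1/N of a unit lake at unit rowing speed.
    rower = math.sqrt(N**2 - 1) / N
    monster = (math.pi + math.acos(1.0 / N)) / N
    return monster - rower  # positive => you make it to shore first

# Bisect for the critical speed ratio between 4 and 5.
lo, hi = 4.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if escape_margin(mid) > 0 else (lo, mid)
print(lo)  # ~4.6033, i.e. "a little more than 4.6"
```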
CommonCrawl
The number of $n \times n$ matrices whose entries are either $-1$, $0$, or $1$, whose row- and column- sums are all $1$, and such that in every row and every column the non-zero entries alternate in sign, is proved to be $$\frac{1!\,4!\cdots(3n-2)!}{n!\,(n+1)!\cdots(2n-1)!},$$ as conjectured by Mills, Robbins, and Rumsey.
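As a quick arithmetic check, the same count can be written as $\prod_{k=0}^{n-1} \frac{(3k+1)!}{(n+k)!}$ and evaluated directly; the first values 1, 2, 7, 42, 429 are the familiar alternating sign matrix numbers. A small illustrative sketch:

```python
from math import factorial

def asm_count(n):
    # prod_{k=0}^{n-1} (3k+1)! / (n+k)!  -- equivalent to the quotient above
    num = den = 1
    for k in range(n):
        num *= factorial(3 * k + 1)
        den *= factorial(n + k)
    return num // den  # the quotient is always an integer

print([asm_count(n) for n in range(1, 6)])  # [1, 2, 7, 42, 429]
```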
CommonCrawl
The lattice of a general crystal structure is determined by giving six lattice parameters, $a, b, c, \alpha, \beta,$ and $\gamma$. The first 3 parameters are connected with the length of the 3 primitive vectors of the crystal; the last 3 are the angles between the primitive vectors. The definition of the lattice parameters can be seen in the following figure taken from the Wikipedia page on lattice constants. The values of the parameter ngridk in the input file have been chosen in order to allow for a fast calculation. Of course, the complete optimization procedure presented in this tutorial should be repeated for more accurate values of ngridk and the other computational parameters up to the desired accuracy in the resulting lattice constants. In what follows, we illustrate the iterative procedure for performing the optimization of the crystal structure of hexagonal beryllium. The only relevant parameters in this case are the volume of the unit cell and the $c/a$ ratio. 0.03, the absolute value of the maximum strain for which we want to perform the calculation. Optimized lattice parameter saved into the file: "BM-optimized.xml". The BM_eos.out file, for instance, contains all final fit parameters and pressures which are calculated from the equation-of-state fitting procedure. Also, the script generates a plot, a PostScript (BM_eos.eps) or PNG (BM_eos.png) file, which looks like the following. On this plot, you can also find the optimized values of the parameters appearing in the equation of state (minimum energy, equilibrium volume, bulk modulus, and bulk modulus pressure derivative). The bulk modulus and bulk modulus pressure derivative which are derived here have to be interpreted only as fitting parameters. This is due to the fact that in this tutorial we are changing the volume while keeping all other lattice parameters constant. In order to obtain the real physical bulk modulus and bulk modulus pressure derivative, one has to fully optimize at each given volume with respect to all other lattice and internal parameters (e.g., in the case of hexagonal beryllium, with respect to the $c/a$ ratio, too). A file corresponding to an exciting input file for the optimized geometry is created with the name BM-optimized.xml. If you are interested in checking how accurate the calculated equilibrium parameters at this step are, you can find more information here. Optimized lattice parameter saved into the file: "coa-optimized.xml". Repeat now the procedure already explained in STEP1, running the script OPTIMIZE-lattice.py and using as entries the values 2-COA.xml, 1, 0.005, and 11 in the given order. After having performed the calculation (running the script OPTIMIZE-submit.sh inside the directory VOL), you run OPTIMIZE-lattice.py and get the following plot. Proceed in a similar way to STEP2. Run the script OPTIMIZE-lattice.py using as entries the values 3-VOL.xml, 2, 0.005, and 11 in the given order. Using the same procedure as in the previous steps, you will end up with the following plot. The equilibrium volume is converged within 2$\times$10$^{-1}$ Bohr$^3$. The $c/a$ ratio is converged within 4$\times$10$^{-4}$. The energy at the minimum seems to be converged within 3$\times$10$^{-4}$ mHa. Indeed, such a small value should be considered an artifact of the optimization procedure, which assumes that the calculated total energies are exact. However, the accuracy in the determination of the minimum energy cannot be smaller than the accuracy of the total energy in a single SCF calculation.
For the calculations performed in this tutorial, total energies are calculated with the default value of the accuracy, i.e., 10$^{-3}$ mHa.
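The BM in BM_eos.out refers to the Birch-Murnaghan equation of state that the script fits. As a rough illustration of what such a fit does (a stand-alone sketch with SciPy; the volume and energy values below are made up and are not exciting output):

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    # 3rd-order Birch-Murnaghan equation of state E(V).
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Volumes (Bohr^3) and total energies (Ha) from the strained runs;
# these particular numbers are purely illustrative.
V = np.array([100.0, 104.0, 108.0, 112.0, 116.0])
E = np.array([-29.340, -29.348, -29.351, -29.349, -29.344])

p0 = (E.min(), V[np.argmin(E)], 0.01, 4.0)   # rough starting guess
(E0, V0, B0, Bp), _ = curve_fit(birch_murnaghan, V, E, p0=p0)
print(f"V0 = {V0:.2f} Bohr^3, B0 = {B0:.4f} Ha/Bohr^3, B' = {Bp:.2f}")
```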
CommonCrawl
In this system, $a,b,c,d,e,f$ are constants and the sequences $x_n,y_n,z_n$ can run forever. Let's have $z_0 = x_0 = y_0 = 0$. Given this setup, can we imagine what the plot of $(x,y)$ will look like? To demonstrate this I've written a small app with canvas that does exactly this. You can set the values of $a,b,c,d,e,f$ manually with the sliders or press the randomize button to select random values. Once you use one of the sliders the pattern will be drawn in bulk. If you're interested in a more visual effect, you can also press the stream points button. Usually a lot of these generated images contain a lot of noise, but after a few random tries one might be surprised at how beautiful some of the results can be. You might be tempted to think that randomness plays a part here. Don't be fooled though: you are merely looking at sines, cosines and recursion. To show how diverse this system can be, I've kept track of some of my favorite outcomes. You may notice that each image has a form of symmetry. That is because of the relationship between $x_n$ and $y_n$: they influence each other, while the effect of $z_n$ is constant in comparison. Notice that $z_n$ is not influenced by $x_n,y_n$. If I were to change that, then the images show very different behavior. You will still be able to recognize repetition, but the symmetry tends to leave if $z_n$ is dependent on $x_n$ and $y_n$. It is worthwhile to note that where before I had the issue of 'too much noise' when generating images, I now have the issue of 'too much convergence' when sampling sequences where $z_n$ is dependent. These wondrous graphics are a result of dynamical systems. If $z_n$ is simply increasing, the recursive sines and cosines create a repetitive, yet counterintuitive, pattern which makes for pretty plotting. If this is not the case then the sequences still seem to follow an attractor pattern. Other systems that have more exponential aspects or more stochasticity in them might not have the same effect. The "art" of finding appropriate values for $a,b,c,d,e,f$ comes down to wanting enough variation such that there are a lot of states, but not so much that the results appear to be somewhat uniformly random. I'll look into making a VR demo or something 3d/WebGL/Unity out of this some time in the future. It seems like a fun experience to input a formula and tweak the parameters of such a system in real time in 3d.
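The post's exact update equations were not preserved in this extract, so the sketch below only assumes a representative shape: $x$ and $y$ feed back through sines and cosines of each other and of $z$, while $z$ simply increases. Treat the update rules as placeholders rather than the original system:

```python
import math, random

def trajectory(a, b, c, d, e, f, n=100_000):
    # Assumed update rules -- the original post's formulas differ in detail.
    x = y = z = 0.0
    pts = []
    for _ in range(n):
        x, y, z = (math.sin(a * y) + math.cos(b * x) + math.sin(c * z),
                   math.sin(d * x) + math.cos(e * y) + math.sin(f * z),
                   z + 0.1)  # z_n is independent of x_n and y_n
        pts.append((x, y))
    return pts

random.seed(0)
params = [random.uniform(-2, 2) for _ in range(6)]
pts = trajectory(*params)
print(pts[:3])  # scatter-plot these to see the attractor pattern
```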
CommonCrawl
Is there some analytical explanation of why this happens?
CommonCrawl
$\alpha_S(M_Z) = 0.1210 \pm 0.0007\,\text{(stat.)} \pm 0.0021\,\text{(expt.)} \pm 0.0044\,\text{(had.)} \pm 0.0036\,\text{(theo.)}$, and with the NNLO+NLLA calculations the combined value is $\alpha_S = 0.1172 \pm 0.0006\,\text{(stat.)} \pm 0.0020\,\text{(expt.)} \pm 0.0035\,\text{(had.)} \pm 0.0030\,\text{(theo.)}$. The stability of the NNLO and NNLO+NLLA results with respect to missing higher order contributions, studied by variations of the renormalisation scale, is improved compared to previous results obtained with NLO+NLLA or with NLO predictions only. The observed energy dependence of $\alpha_S$ agrees with the QCD prediction of asymptotic freedom and excludes the absence of running at the 99% confidence level.
CommonCrawl
A one-dimensional Sobolev-type inequality supplemented by a Prüfer transformation argument is used to derive upper and lower bounds for the eigenvalues of regular, self-adjoint second-order eigenvalue problems. These inequalities are shown to have applications to counting eigenvalues in the intervals $(-\infty,\lambda]$, estimating eigenvalue gaps, Liapunov inequalities, and de La Vallée Poussin-type inequalities. Differential Integral Equations, Volume 9, Number 3 (1996), 481-498.
CommonCrawl
with $8$ moves needed to swap the red and blue knights. What is the minimum number of moves to swap the knights on a $4\times4$ grid? All credit goes to @klm123 for demonstrating a good way to visualize this in a similar puzzle. 20 half moves, or 10 move pairs.
CommonCrawl
CommonCrawl
To compute the minor $M_{2,3}$ and the cofactor $C_{2,3}$, we find the determinant of the above matrix with row 2 and column 3 removed. [The worked numeric computation was lost in extraction.] So the cofactor of the (2,3) entry is $C_{2,3} = (-1)^{2+3} M_{2,3} = -M_{2,3}$. General definition: let $A$ be an $m \times n$ matrix and $k$ an integer with $0 < k \leq m$ and $k \leq n$. A $k \times k$ minor of $A$, also called a minor determinant of order $k$ of $A$ or, if... Enter a 4x4 matrix and press the "Execute" button; the cofactor matrix is then displayed. The cofactors of a $2\times 2$-matrix are $\pm$ determinants of $1 \times 1$-matrices, and the determinant of a $1\times 1$-matrix is its unique entry, so try calculating the cofactor matrix of a $2 \times 2$ matrix first. The formula for finding the inverse of a 2x2 matrix is the simplest case. The formula for finding the inverse of a 3x3 matrix requires finding its determinant, cofactors, and finally the adjoint matrix, and then applying the corresponding formula. The cofactor matrix replaces each element in the original matrix with its cofactor (plus or minus its minor, which is the determinant of the original matrix without that row and column; the plus-or-minus rule is the same as for determinant expansion: if the sum of the row and column indices is even, the sign is positive; if odd, it is negative).
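A minimal Python sketch of minors and cofactors via Laplace expansion (illustrative only, not tied to any particular online calculator):

```python
def minor(A, i, j):
    # Determinant of A with row i and column j removed (0-indexed).
    sub = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return det(sub)

def det(A):
    if len(A) == 1:
        return A[0][0]
    # Laplace expansion along the first row.
    return sum((-1) ** j * A[0][j] * minor(A, 0, j) for j in range(len(A)))

def cofactor_matrix(A):
    n = len(A)
    return [[(-1) ** (i + j) * minor(A, i, j) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
print(cofactor_matrix(A))  # [[4, -3], [-2, 1]]
```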
CommonCrawl
This may seem like a rather simple question, but I haven't been able to come up with an explanation myself or find one on the Internet, but there seems to be something missing in the equation that I've given. Would anybody be able to help me understand the derivation? The textbook that I'm using is Introduction to Probability (1e) - Blitzstein & Hwang. Conditional expectation has some very useful properties that often allow us to solve problems without having to go all the way back to the definition. The remainder of the textbook's treatment of the law is about how to prove the equality, which I understand how to do. Let $X$ and $Y$ be random variables with finite variances, and let $W = Y - E(Y|X)$. and this is the particular part that threw me off. The correct answer would be that $E(E(Y|X)|X) = E(Y|X)$, therefore giving us $0$. All you need is to read your book (the one by Blitzstein-Hwang you mentioned in the post) more thoroughly, really. The definition you need is that of conditional expectations given a random variable. The first thing you should know is what the notation $E(Y|X)$ means before discussing anything about it. Section 9.2: Conditional expectation given an r.v. is the place you should refer to in the first place. Applying Theorem 9.3.2, one has $$ E(E(Y|X)|X)=E(g(X)\cdot 1|X)=g(X)E(1|X)=g(X)\cdot 1=g(X), $$ where $g(X):=E(Y|X)$. A random variable $Z$ is (a version of) the conditional expectation $E(X\mid\mathcal F)$ if: 1) $Z$ is $\mathcal F$-measurable; 2) $\int_A XdP=\int_AZdP$ for all $A\in\mathcal F$. $$\int_AE(E(Y|X)|X)dP=\int_AE(Y|X)dP=\int_AYdP,$$ with first equality by applying property 2) to the random variable that results from taking the expectation of $E(Y|X)$, conditioning on $\sigma(X)$; the second equality follows from applying property 2) to the random variable that results from taking the expectation of $Y$, conditioning on $\sigma(X)$.
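As a complement to the two answers, a quick Monte Carlo sanity check (my own, not from the textbook) that conditioning $E(Y|X)$ on $X$ again changes nothing, using $Y = X + \text{noise}$ with $X \in \{0,1\}$:

```python
import random

random.seed(1)
pairs = [(random.choice([0, 1]), random.gauss(0, 1)) for _ in range(200_000)]
samples = [(x, x + eps) for x, eps in pairs]   # Y = X + noise

def cond_exp(pairs):
    # E(Y | X) as a function of x, estimated by group averages.
    sums, counts = {}, {}
    for x, y in pairs:
        sums[x] = sums.get(x, 0.0) + y
        counts[x] = counts.get(x, 0) + 1
    return {x: sums[x] / counts[x] for x in sums}

g = cond_exp(samples)                          # g(X) = E(Y | X)
h = cond_exp([(x, g[x]) for x, _ in samples])  # E(g(X) | X)
print(g)  # roughly {0: 0.0, 1: 1.0}
print(h)  # identical to g: conditioning again changes nothing
```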
CommonCrawl
Abstract: The large time asymptotics of the solutions of the sine-Gordon equation which tend to zero as $x\to\infty$ and tend to the finite-gap solution of this equation as $x\to-\infty$ are investigated. It is proved that as $t\to\infty$ these solutions split into an infinite series of solitons with variable phases. These solitons are generated by the continuous spectrum of the $L$-operator from the corresponding Lax representation.
CommonCrawl
Koyama, Y., R. H. Coker, J. C. Denny, B. D. Lacy, K. Jabbour, P. E. Williams, and D. H. Wasserman. 2001. Role of carotid bodies in control of the neuroendocrine response to exercise. American Journal of Physiology-Endocrinology And Metabolism 281:E742–E748. Coker, R. H., M. G. Krishna, B. D. Lacy, E. J. Allen, and D. H. Wasserman. 1997. Sympathetic drive to liver and nonhepatic splanchnic tissue during heavy exercise. Journal of Applied Physiology 82:1244–1249. Coker, R. H., N. P. Hays, R. H. Williams, A. D. Brown, S. A. Freeling, P. M. Kortebein, D. H. Sullivan, R. D. Starling, and W. J. Evans. 2006. Exercise-induced changes in insulin action and glycogen metabolism in elderly adults. Medicine and Science in Sports and Exercise 38:433. Coker, R. H., M. G. Krishna, B. D. Lacy, D. P. Bracy, and D. H. Wasserman. 1997. Role of hepatic $\alpha$- and $\beta$-adrenergic receptor stimulation on hepatic glucose production during heavy exercise. American Journal of Physiology-Endocrinology And Metabolism 273:E831–E838. Hays, N. P., P. R. Galassetti, and R. H. Coker. 2008. Prevention and treatment of type 2 diabetes: current role of lifestyle, natural product, and pharmacological interventions. Pharmacology & Therapeutics 118:181–191. Koyama, Y., R. H. Coker, E. E. Stone, B. D. Lacy, K. Jabbour, P. E. Williams, and D. H. Wasserman. 2000. Evidence that carotid bodies play an important role in glucoregulation in vivo. Diabetes 49:1434–1442.
CommonCrawl
Vaidyanathan, VV and Sastry, PS and Ramasarma, T (1993) Regulation of the activity of glyceraldehyde 3-phosphate dehydrogenase by glutathione and $H_2O_2$. In: Molecular and Cellular Biochemistry, 129 (1). pp. 57-65. The activity lost during storage of a solution of muscle glyceraldehyde 3-phosphate dehydrogenase was rapidly restored on adding a thiol, but not arsenite or azide. On treatment with $H_2O_2$, the enzyme was partially inactivated and complete loss of activity occurred in the presence of glutathione. Samples of the enzyme pretreated with glutathione followed by removal of the thiol by filtration on a Sephadex column showed both full activity and its complete loss on adding $H_2O_2$, in the absence of added glutathione. Most of the activity was restored when the $H_2O_2$-inactivated enzyme was incubated with glutathione (25 mM) or dithiothreitol (5 mM), whereas arsenite or azide were partially effective and ascorbate was ineffective. The need for incubation for a long time with a strong reducing agent for restoration of activity suggested that the oxidized group (disulfide or sulfenate) must be in a masked state in the $H_2O_2$-inactivated enzyme. Analysis by SDS-PAGE gave evidence for the formation of a small quantity of glutathione-reversible disulfide-form of the enzyme. CD spectra indicated a decrease in $\alpha$-helical content in the activated form of the enzyme. The evidence suggested that glutathione and $H_2O_2$ could regulate the active state of this enzyme.
CommonCrawl
I am looking for a postdoc for machine learning research. I have moved to the University of Tokyo (Apr/2017). I am interested in statistics and machine learning, especially the following research topics. Born in Japan, Oct. 1981. Bachelor of Engineering from the Mathematical Engineering Course, Department of Mathematical Engineering and Information Physics, Faculty of Engineering, The University of Tokyo, 2004. Taiji Suzuki: Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. ICLR2019, accepted. arXiv:1810.08033. Atsushi Nitanda, Taiji Suzuki: Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors. AISTATS2019, accepted. arXiv:1806.05438. Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, Tomoaki Nishimura: Spectral-Pruning: Compressing deep neural network via spectral analysis. arXiv:1808.08558. Atsushi Nitanda and Taiji Suzuki: Stochastic Particle Gradient Descent for Infinite Ensembles. arXiv:1712.05438. Ryota Tomioka and Taiji Suzuki: Spectral norm of random tensors. arXiv:1407.1870. Taiji Suzuki: Fast Learning Rate of Non-Sparse Multiple Kernel Learning and Optimal Regularization Strategies. Electronic Journal of Statistics, Volume 12, Number 2 (2018), 2141--2192. doi:10.1214/18-EJS1399. Yuichi Mori and Taiji Suzuki: Generalized ridge estimator and model selection criteria in multivariate linear regression. Journal of Multivariate Analysis, volume 165, pages 243--261, May 2018. arXiv:1603.09458. Song Liu, Taiji Suzuki, Raissa Relator, Jun Sese, Masashi Sugiyama, and Kenji Fukumizu: Support Consistency of Direct Sparse-Change Learning in Markov Networks. The Annals of Statistics, vol. 45, no. 3, 959–990, 2017. DOI: 10.1214/16-AOS1470. Song Liu, Kenji Fukumizu and Taiji Suzuki: Learning Sparse Structural Changes in High-dimensional Markov Networks: A Review on Methodologies and Theories. Behaviormetrika. 44(1):265–286, 2017. DOI: 10.1007/s41237-017-0014-z. Yoshito Hirata, Kai Morino, Taiji Suzuki, Qian Guo, Hiroshi Fukuhara, and Kazuyuki Aihara: System Identification and Parameter Estimation in Mathematical Medicine: Examples Demonstrated for Prostate Cancer. Quantitative Biology, 2016, 4(1): 13--19. DOI: 10.1007/s40484-016-0059-0. Taiji Suzuki, and Kazuyuki Aihara: Nonlinear System Identification for Prostate Cancer and Optimality of Intermittent Androgen Suppression Therapy. Mathematical Biosciences, vol. 245, issue 1, pp. 40--48, 2013. Taiji Suzuki: Improvement of Multiple Kernel Learning using Adaptively Weighted Regularization. JSIAM Letters, vol. 5, pp. 49--52, 2013. Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, M. C. du Plessis, Song Liu, Ichiro Takeuchi: Density Difference Estimation. Neural Computation, 25(10): 2734--2775, 2013. Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Masashi Sugiyama, Relative Density-Ratio Estimation for Robust Distribution Comparison. Neural Computation, vol. 25, number 5, pp. 1324--1370, 2013. Takafumi Kanamori, Taiji Suzuki, and Masashi Sugiyama: Computational complexity of kernel-based density-ratio estimation: A condition number analysis. Machine Learning, vol. 90, pp. 431-460, 2013. Takafumi Kanamori, Taiji Suzuki, and Masashi Sugiyama: Statistical analysis of kernel-based least-squares density-ratio estimation. Machine Learning, vol. 86, Issue 3, pp. 335-367, 2012.
Takafumi Kanamori, Taiji Suzuki, and Masashi Sugiyama: f-divergence estimation and two-sample homogeneity test under semiparametric density-ratio models. IEEE Transactions on Information Theory, Vol. 58, Issue 2, pp. 708-720, 2012. Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori: Density ratio matching under the Bregman divergence: A unified framework of density ratio estimation. Annals of the Institute of Statistical Mathematics, vol. 11, pp. 1--36, 2011. Masashi Sugiyama, Taiji Suzuki, Yuta Itoh, Takafumi Kanamori, and Manabu Kimura: Least-Squares Two-Sample Test. Neural Networks, vol.24, no.7, pp.735--751, 2011. Masashi Sugiyama, Makoto Yamada, Paul von Bunau, Taiji Suzuki, Takafumi Kanamori, and Motoaki Kawanabe: Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search. Neural Networks, vol.24, no.2, pp.183-198, 2011. Taiji Suzuki, Nicholas Bruchovsky, and Kazuyuki Aihara: Piecewise Affine Systems Modelling for Optimizing Hormonal Therapy of Prostate Cancer. Philosophical Transactions A of the Royal Society, 368 (2010), 5045--5059. Masashi Sugiyama, and Taiji Suzuki: Least-squares independence test. IEICE Transactions on Information and Systems, vol.E94-D, no.6, pp.1333-1336, 2011. Takafumi Kanamori, Taiji Suzuki, and Masashi Sugiyama: Theoretical analysis of density ratio estimation. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol.E93-A, no.4, pp.787--798, 2010. Masashi Sugiyama, Ichiro Takeuchi, Takafumi Kanamori, Taiji Suzuki, Hirotaka Hachiya, and Daisuke Okanohara: Least-squares conditional density estimation. IEICE Transactions on Information and Systems, vol.E93-D, no.3, pp.583-594, 2010. Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Shohei Hido, Jun Sese, Ichiro Takeuchi, and Liwei Wang: A density-ratio framework for statistical data processing. IPSJ Transactions on Computer Vision and Applications, 1 (2009), 183--208. Taiji Suzuki, Masashi Sugiyama, Takafumi Kanamori, and Jun Sese: Mutual information estimation reveals global associations between stimuli and biological processes. BMC Bioinformatics, 10(Suppl 1):S52, 2009. Masashi Sugiyama, Taiji Suzuki, Shinichi Nakajima, Hisashi Kashima, Paul von Bunau, and Motoaki Kawanabe: Direct importance estimation for covariate shift adaptation. Annals of the Institute of Statistical Mathematics. 60(4) (2008), 699--746. Taiji Suzuki, and Fumiyasu Komaki: On prior selection and covariate shift of $\beta$-Bayesian prediction under $\alpha$-divergence risk. Communications in Statistics --- Theory and Methods, 39(8) (2010), 1655--1673. Akimichi Takemura, and Taiji Suzuki: Game-Theoretic Derivation of Discrete Distributions and Discrete Pricing Formulas. Journal of Japan Statistical Society, 37 (1) (2006), 87--104. Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami, and Taiji Suzuki: Cross-domain Recommendation via Deep Domain Adaptation. 41st European Conference on Information Retrieval (ECIR2019), accepted. arXiv:1803.03018. Taiji Suzuki: Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. The 7th International Conference on Learning Representations (ICLR2019), accepted. [The proof of Proposition 4 can be found here (provided by Satoshi Hayakawa who pointed out the technical flaw).] (arXiv:1810.08033). 
Kazuo Yonekura, Hitoshi Hattori, and Taiji Suzuki: Short-term local weather forecast using dense weather station by deep neural network. In Proceedings of 2018 IEEE International Conference on Big Data (Big Data), pp.10--13, 2018. DOI: 10.1109/BigData.2018.8622195. Tomoya Murata, and Taiji Suzuki: Sample Efficient Stochastic Gradient Iterative Hard Thresholding Method for Stochastic Sparse Linear Regression with Limited Attribute Observation. Advances in Neural Information Processing Systems 31 (NeurIPS2018), pp.5312--5321, 2018. arXiv:1809.01765. Atsushi Yaguchi, Taiji Suzuki, Wataru Asano, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa: Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks. In Proceedings of IEEE 17th International Conference on Machine Learning and Applications (ICMLA 2018), pp.17--20, 2018. DOI: 10.1109/ICMLA.2018.00054. Atsushi Nitanda and Taiji Suzuki: Functional gradient boosting based on residual network perception. ICML2018, Proceedings of the 35th International Conference on Machine Learning, 80:3819--3828, 2018. arXiv:1802.09031. Atsushi Nitanda and Taiji Suzuki: Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models. AISTATS2018, Proceedings of Machine Learning Research, 84:454--463, 2018. arXiv:1801.02227. Masaaki Takada, Taiji Suzuki, and Hironori Fujisawa: Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables. AISTATS2018, Proceedings of Machine Learning Research, 84:1008--1016, 2018. arXiv:1711.01796. Taiji Suzuki: Fast generalization error bound of deep learning from a kernel perspective. AISTATS2018, Proceedings of Machine Learning Research, 84:1397--1406, 2018. arXiv:1705.10182. Song Liu, Akiko Takeda, Taiji Suzuki and Kenji Fukumizu: Trimmed Density Ratio Estimation. NIPS2017, 4518--4528, 2017. arXiv:1703.03216. Tomoya Murata and Taiji Suzuki: Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization. NIPS2017, 608--617, 2017. arXiv:1703.00439. Atsushi Nitanda and Taiji Suzuki: Stochastic Difference of Convex Algorithm and its Application to Training Deep Boltzmann Machines. The 20th International Conference on Artificial Intelligence and Statistics (AISTATS2017), Proceedings of Machine Learning Research, 54:470--478, 2017. Taiji Suzuki, Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, and Yukihiro Tagami: Minimax Optimal Alternating Minimization for Kernel Nonparametric Tensor Learning. The 30th Annual Conference on Neural Information Processing Systems (NIPS2016), pp. 3783-3791, 2016. Heishiro Kanagawa, Taiji Suzuki, Hayato Kobayashi, Nobuyuki Shimizu, and Yukihiro Tagami: Gaussian process nonparametric tensor estimator and its minimax optimality. Proceedings of The 33rd International Conference on Machine Learning, pp. 1632–1641, 2016. Song Liu, Taiji Suzuki, Masashi Sugiyama, and Kenji Fukumizu: Structure Learning of Partitioned Markov Networks. International Conference on Machine Learning (ICML2016), Proceedings of The 33rd International Conference on Machine Learning, pp. 439–448, 2016. Taiji Suzuki and Heishiro Kanagawa: Bayes method for low rank tensor estimation. International Meeting on "High-Dimensional Data Driven Science" (HD3-2015). Dec. 14th-17th/2015, Kyoto Japan. Oral presentation. Journal of Physics: Conference Series, 699(1), pp. 012020, 2016. Taiji Suzuki: Convergence rate of Bayesian tensor estimator and its minimax optimality. 
The 32nd International Conference on Machine Learning (ICML2015), JMLR Workshop and Conference Proceedings 37:pp. 1273--1282, 2015. Satoshi Hara, Tetsuro Morimura, Toshihiro Takahashi, Hiroki Yanagisawa, Taiji Suzuki: A Consistent Method for Graph Based Anomaly Localization. The 18th International Conference on Artificial Intelligence and Statistics (AISTATS2015), JMLR Workshop and Conference Proceedings 38:333--341, 2015. Song Liu, Taiji Suzuki, and Masashi Sugiyama: Support Consistency of Direct Sparse-Change Learning in Markov Networks. The Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI2015), 2015. (arXiv:1407.0581). Taiji Suzuki: Stochastic Dual Coordinate Ascent with Alternating Direction Method of Multipliers. International Conference on Machine Learning (ICML2014), JMLR Workshop and Conference Proceedings 32(1):736--744, 2014. supplementary. (arXiv version: arXiv:1311.0622) This paper was also presented in OPT2013, NIPS workshop "Optimization for Machine Learning". Source code (Matlab). Ryota Tomioka, and Taiji Suzuki: Convex Tensor Decomposition via Structured Schatten Norm Regularization. Advances in Neural Information Processing Systems (NIPS2013), 1331--1339, 2013. Taiji Suzuki: Dual Averaging and Proximal Gradient Descent for Online Alternating Direction Multiplier Method. International Conference on Machine Learning (ICML2013), 2013, JMLR Workshop and Conference Proceedings 28(1): 392--400, 2013. Source code (Matlab). Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus du Plessis, Song Liu, and Ichiro Takeuchi: Density-Difference Estimation . Advances in Neural Information Processing Systems (NIPS2012), 692--700, 2012. Takafumi Kanamori, Akiko Takeda and Taiji Suzuki: A Conjugate Property between Loss Functions and Uncertainty Sets in Classification Problems. Conference on Learning Theory (COLT2012), 2012, JMLR Workshop and Conference Proceedings 23: 29.1--29.23, 2012. Ryota Tomioka, Taiji Suzuki, Kohei Hayashi and Hisashi Kashima: Statistical Performance of Convex Tensor Decomposition. Advances in Neural Information Processing Systems 24 (NIPS2011). pp.972--980. Taiji Suzuki, Masashi Sugiyama, and Toshiyuki Tanaka: Mutual information approximation via maximum likelihood estimation of density ratio. 2009 IEEE International Symposium on Information Theory (ISIT2009). pp.463--467, Seoul, Korea, 2009. Taiji Suzuki, and Masashi Sugiyama: Estimating Squared-loss Mutual Information for Independent Component Analysis. ICA 2009. Paraty, Brazil, 2009. Lecture Notes in Computer Science, Vol. 5441, pp.130--137, Berlin, Springer, 2009. Taiji Suzuki, Masashi Sugiyama, Takafumi Kanamori and Jun Sese: Mutual information estimation reveals global associations between stimuli and biological processes. In Proceedings of the seventh asia pacific bioinformatics conference (APBC 2009). Beijing, China, 2009. Taiji Suzuki, Masashi Sugiyama, Jun Sese, and Takafumi Kanamori: Approximating mutual information by maximum likelihood density ratio estimation. In Proceedings of the 3rd workshop on new challenges for feature selection in data mining and knowledge discovery (FSDM2008), JMLR workshop and conference proceedings, Vol. 4, pp.5--20, 2008. Taiji Suzuki, Masashi Sugiyama, Jun Sese, and Takafumi Kanamori: A least-squares approach to mutual information estimation with application in variable selection. In Proceedings of the 3rd workshop on new challenges for feature selection in data mining and knowledge discovery (FSDM2008). Antwerp, Belgium, 2008. 
Taiji Suzuki, Takamasa Koshizen, Kazuyuki Aihara and Hiroshi Tsujino: Learning to estimate user interest utilizing the variational Bayes estimator. Intelligent Systems Design and Applications (ISDA) 2005, 94--99. Wroclaw, Poland, September 2005. Tetsuya Hoya, Gen Hori, Havagim Bakardjian, Tomoaki Nishimura, Taiji Suzuki, Yoichi Miyawaki, Arao Funase, and Jianting Cao: Classification of Single Trial EEG Signals by a Combined Principal + Independent Component Analysis and Probabilistic Neural Network Approach. Proc. ICA2003, pp. 197-202. Nara, Japan, January 2003. Masashi Sugiyama, Taiji Suzuki, & Takafumi Kanamori: Density Ratio Estimation in Machine Learning. Cambridge University Press, 2012. Taiji Suzuki: Generalization error bound of Bayesian deep learning: a kernel perspective. 2017 Probabilistic Graphical Model Workshop: Structure, Sparsity and High-dimensionality, 2017. Tachikawa, Japan. 22-24/Feb,2017 (presented in 22/Feb/2017). Taiji Suzuki: Statistical Performance and Computational Efficiency of Nonparametric Low Rank Tensor Estimators. 2016 International Workshop on Spatial and Temporal Modeling from Statistical, Machine Learning and Engineering perspectives (STM2016), 2016. Tachikawa, Japan. 20-23/Jul,2016 (presented in 23/Jul/2016). Taiji Suzuki: Statistical Performance and Computational Efficiency of Nonparametric Low Rank Tensor Estimators. The First Korea-Japan Machine Learning Symposium, 2016. Seoul, Korea. 2-3/Jun,2016 (presented in 3/Jun/2016). Taiji Suzuki: Stochastic Optimization. Machine Learning Summer School 2015 Kyoto, 2015. Kyoto, Japan. 23/Aug-4/Sep,2015 (presented in 2-4/Sep/2015). Taiji Suzuki: Stochastic Dual Coordinate Ascent with ADMM. SIAM Conference on Optimization (SIAM-OPT2014), 2014. San Diego, USA. 19-22/May,2014 (presented in 20/May/2014). Taiji Suzuki: Some convergence results on multiple kernel additive models. Nonparametric and High-dimensional Statistics. CIRM, Luminy (17/December/2012--21/December/2012), presented in 18/December/2012. Taiji Suzuki: Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies. The 2nd Institute of Mathematical Statistics Asia Pacific Rim Meeting (2/July/2012--4/July/2012), Tsukuba, Japan. 4th July, 2012. Taiji Suzuki, Ryota Tomioka, and Masashi Sugiyama: Sharp Convergence Rate and Support Consistency of Multiple Kernel Learning with Sparse and Dense Regularization. arXiv:1103.5201. Taiji Suzuki: Fast Learning Rate of lp-MKL and its Minimax Optimality. arXiv:1103.5202. Ryota Tomioka, Taiji Suzuki, & Masashi Sugiyama: Augmented Lagrangian methods for learning, selecting, and combining features. In S. Sra, S. Nowozin, and S. J. Wright (Eds.), Optimization for Machine Learning, MIT Press, Cambridge, MA, USA, 2011. Ryota Tomioka, Taiji Suzuki, & Masashi Sugiyama: Optimization algorithms for sparse regularization and multiple kernel learning and their applications to image recognition. Image Lab, vol.21, no.4, pp.5-11, 2010. Taiji Suzuki: ``Some convergence results of nonparametric tensor estimators.'' International Symposium on Statistical Analysis for Large Complex Data. Tsukuba University, Japan. 23rd/Nov/2016. Oral presentation. Taiji Suzuki, Nicholas Bruchovsky, and Kazuyuki Aihara: A computational method to predict PSA evolution for androgen deprivation therapy. Sixth International Symposium on Hormonal Oncogenesis (12/09/2010 -- 16/09/2010), Sheraton Grande Tokyo Bay Hotel, Tokyo, Japan. pp.46, poster session. 
Taiji Suzuki and Kazuyuki Aihara: Hybrid system therapy for prostate cancer. International Symposium on Complexity Modelling and Its Applications 2005, poster session. Taiji Suzuki: The Japan Society for Industrial and Applied Mathematics, Best Paper Award 2016, for "Improvement of Multiple Kernel Learning using Adaptively Weighted Regularization." Taiji Suzuki: IBISML (Information-Based Induction Sciences and Machine Learning), Best Paper Award 2012 (2012 IBISML Workshop Award), for "Dual Averaging and Proximal Gradient Descent for Online Alternating Direction Multiplier Method." MIRU Excellent Paper Award, Meeting on Image Recognition and Understanding 2008 (MIRU2008), 2008, for "Direct Importance Estimation - A New Versatile Tool for Statistical Pattern Recognition," Masashi Sugiyama (Tokyo Institute of Technology), Takafumi Kanamori (Nagoya University), Taiji Suzuki (University of Tokyo), Shohei Hido (IBM Research), Jun Sese (Ochanomizu University), Ichiro Takeuchi (Mie University), and Liwei Wang (Peking University).
Bessie likes downloading games to play on her cell phone, even though she does find the small touch screen rather cumbersome to use with her large hooves. She is particularly intrigued by the current game she is playing. The game starts with a sequence of $N$ positive integers ($2 \leq N \leq 262,144$), each in the range $1 \ldots 40$. In one move, Bessie can take two adjacent numbers with equal values and replace them with a single number of value one greater (e.g., she might replace two adjacent 7s with an 8). The goal is to maximize the value of the largest number present in the sequence at the end of the game. Please help Bessie score as highly as possible! The first line of input contains $N$, and the next $N$ lines give the sequence of $N$ numbers at the start of the game. In the sample case, the starting sequence is 1 1 1 2: Bessie first merges the second and third 1s to obtain the sequence 1 2 2, and then she merges the 2s into a 3. Note that it is not optimal to join the first two 1s.
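For readers who want to experiment, here is a minimal Python sketch of one standard approach (my own illustration, not the official analysis): `reach[v][i]` records the index just past the end of the segment starting at index `i` that can be merged into the single value `v`, built up value by value.

```python
def max_merge_value(a):
    """Largest value obtainable by repeatedly merging equal adjacent pairs.

    reach[v][i] = index one past the end of the segment starting at i that
    merges into the single value v, or 0 if no such segment exists.
    """
    n = len(a)
    # Merging a segment of length L can raise its value by at most log2(L),
    # so values never exceed max(a) + bit length of n.
    top = max(a) + n.bit_length()
    reach = [[0] * (n + 1) for _ in range(top + 1)]
    for i, x in enumerate(a):
        reach[x][i] = i + 1
    best = max(a)
    for v in range(2, top + 1):
        for i in range(n):
            if reach[v][i] == 0:
                # Two adjacent segments of value v-1 merge into one of value v.
                mid = reach[v - 1][i]
                if mid and reach[v - 1][mid]:
                    reach[v][i] = reach[v - 1][mid]
            if reach[v][i]:
                best = max(best, v)
    return best

print(max_merge_value([1, 1, 1, 2]))  # 3, as in the example above
```

The table has $O(N \log N)$ entries and each is filled in constant time, comfortably within the stated limits.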
Efficient way to solve equal sums $x_1^k+x_2^k+\dots+x_5^k=y_1^k+y_2^k+\dots+y_5^k$ with Mathematica? P.S. The system $S_1$ leads to an $11$th degree identity, in case one is wondering why I am interested in it. This isn't really an answer, but it's too big for a comment. Here's some code to brute-force the problem (since there wasn't any code provided in the question). It lists all the possible numbers; finds the ones whose product matches; then, of those, finds the ones whose sums of squares match, etc. It's quite fast, but runs into memory issues fairly quickly after 50. I've managed to run it up to $1 \leq x_i \leq 60$ and found no more solutions than in the question. I've got things to be and places to do, but hopefully somebody might find this useful as a starting point. On the other hand, if you want to check up to 300, there will be 20 billion combinations to sift through using this method.
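The original code is Mathematica; as a rough illustration of the same sieving idea in Python (bucket candidate tuples by cheap invariants and keep only the colliding buckets; the bound of 20 below is just for demonstration):

```python
from collections import defaultdict
from itertools import combinations

def power_sum_collisions(limit, ks=(1, 2, 3), r=5):
    """Bucket r-tuples from 1..limit by several power sums at once and
    return buckets containing more than one tuple, i.e. candidate pairs
    x != y whose k-th power sums agree for every k in ks."""
    buckets = defaultdict(list)
    for combo in combinations(range(1, limit + 1), r):
        key = tuple(sum(v ** k for v in combo) for k in ks)
        buckets[key].append(combo)
    return {key: tups for key, tups in buckets.items() if len(tups) > 1}

# Example run over a small range; C(limit, 5) grows quickly, which mirrors
# the memory blow-up mentioned above.
hits = power_sum_collisions(20)
for key, tuples in list(hits.items())[:3]:
    print(key, tuples)
```

This stores all $\binom{\text{limit}}{5}$ tuples at once, so like the posted method it is only practical for small ranges.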
Ned is trying to play music using a steel ruler extending beyond the table edge. When he strikes the ruler, a twang sound is produced. If he wants to produce a higher frequency note, then what should he do? A pen costs an integer number of cents. There are 100 cents in a dollar. 9 of these pens cost between $11 and $12. 13 of these pens cost between $15 and $16. What is the price of one pen in cents? A filament bulb is connected with a battery, a resistor, and two switches \(S_1\) and \(S_2,\) as shown in the diagram above. For which combination of the switches will the bulb glow? Assume the connecting wires have negligible resistance. The first few powers of 3 starting from \(3^3\) are 27, 81, 243, 729, and so on. Is it true that the second last digit (from the right) of \(3^n\) for integers \(n\,(> 2)\) is always an even number? There is a knight on the bottom left square of a \(101\times 101\) chess board. What is the minimum number of moves required for the knight to get to the top right corner? Note: A knight's valid moves are shown by the stars in the picture to the right.
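As an aside, the pen-price puzzle above is small enough to settle by exhaustive search; a short check (assuming "between" is strict) confirms a unique answer:

```python
# Integer cent prices p with 9p strictly between $11 and $12 (1100-1200
# cents) and 13p strictly between $15 and $16 (1500-1600 cents).
candidates = [p for p in range(1, 201)
              if 1100 < 9 * p < 1200 and 1500 < 13 * p < 1600]
print(candidates)  # [123]
```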
Single index linear models for binary response with random coefficients have been extensively employed in many settings under various parametric specifications of the distribution of the random coefficients. Nonparametric maximum likelihood estimation (NPMLE) as proposed by Kiefer and Wolfowitz (1956), in contrast, has received less attention in applied work due primarily to computational difficulties. We propose a new approach to computation of NPMLEs for binary response models that significantly increases their computational tractability, thereby facilitating greater flexibility in applications. Our approach, which relies on recent developments involving the geometry of hyperplane arrangements by Rada and Černý (2018), is contrasted with the recently proposed deconvolution method of Gautier and Kitamura (2013). We study predictive density estimation under Kullback-Leibler loss in $\ell_0$-sparse Gaussian sequence models. We propose proper Bayes predictive density estimates and establish asymptotic minimaxity in sparse models. A surprise is the existence of a phase transition in the future-to-past variance ratio $r$. For $r < r_0 = (\sqrt{5} - 1)/4$, the natural discrete prior ceases to be asymptotically optimal. Instead, for subcritical $r$, a `bi-grid' prior with a central region of reduced grid spacing recovers asymptotic minimaxity. This phenomenon seems to have no analog in the otherwise parallel theory of point estimation of a multivariate normal mean under quadratic loss. For spike-and-slab priors to have any prospect of minimaxity, we show that the sparse parameter space needs also to be magnitude constrained. Within a substantial range of magnitudes, spike-and-slab priors can attain asymptotic minimaxity. This is joint work with Gourab Mukherjee. The curse of dimensionality is a well-known phenomenon in nonparametric statistics, which is echoed by the phenomenon of noise accumulation that is common in high-dimensional statistics. Even for the null case of a standard Gaussian random vector, the empirical linear correlation between the first component and the set of all remaining components can shift toward one as the dimensionality grows, and it is unclear how to correct exactly for such a dimensionality-induced bias in finite samples in a simple generalizable way. In this talk, I will present a new theoretical study that reveals an opposite phenomenon of the blessing of dimensionality in high-dimensional nonparametric inference with distance correlation, in which a pair of large random matrices are observed. This is a joint work with Lan Gao and Qiman Shao. Let $X \sim N_d(\theta, \sigma^2 I)$, $Y \sim N_d(c\theta, \sigma^2 I)$, $U \sim N_k(0, \sigma^2 I)$ be independently distributed, or more generally let $(X, Y, U)$ have a spherically symmetric distribution with density $\eta^{d+k/2} f\big(\eta(\|x - \theta\|^2 + \|u\|^2 + \|y - c\theta\|^2)\big)$, with unknown parameters $\theta \in \mathbb{R}^d$ and $\eta > 0$, and with known density $f(\cdot)$ and constant $c > 0$. Based on observing $X = x, U = u$, we consider the problem of obtaining a predictive density $\hat{q}(\,\cdot\,; x, u)$ for $Y$ with risk measured by the expected Kullback-Leibler loss. A benchmark procedure is the minimum risk equivariant density $\hat{q}_{\mathrm{MRE}}$, which is generalized Bayes with respect to the prior $\pi(\theta, \eta) = 1/\eta$. In dimension $d \geq 3$, we obtain improvements on $\hat{q}_{\mathrm{MRE}}$, and further show that the dominance holds simultaneously for all $f(\cdot)$ subject to finite moment and finite risk conditions.
We also show that the Bayes predictive density with respect to the "harmonic prior" $\pi_h(\theta, \eta) = \|\theta\|^{2-d}/\eta$ dominates $\hat{q}_{\mathrm{MRE}}$ simultaneously for all $f(\cdot)$ that are scale mixtures of normals. The results hinge on a duality with a point prediction problem, as well as posterior representations for $(\theta, \eta)$, which are of independent interest. In particular, for $d \geq 3$, we obtain point predictors $\hat{Y}(X, U)$ of $Y$ that dominate the benchmark predictor $cX$ simultaneously for all $f(\cdot)$, and simultaneously for risk functions $\mathbb{E}_f\big[r(\|Y - \hat{Y}(X, U)\|^2 + (1 + c^2)\|U\|^2)\big]$, with $r(\cdot)$ increasing and concave on $\mathbb{R}^+$, including the squared error case $r(t)=t$. We consider sparse Bayesian estimation in the classical multivariate linear regression model with $p$ regressors and $q$ response variables. In univariate Bayesian linear regression with a single response $y$, shrinkage priors which can be expressed as scale mixtures of normal densities are a popular approach for obtaining sparse estimates of the coefficients. In this paper, we extend the use of these priors to the multivariate case to estimate a $p \times q$ coefficient matrix $B$. Our method can be used for any sample size $n$ and any dimension $p$. Moreover, we show that the posterior distribution can consistently estimate $B$ even when $p$ grows at a nearly exponential rate with the sample size $n$. Concentration inequalities are proved and our results are illustrated through simulation and data analysis. This talk will address the estimation of predictive densities and their efficiency as measured by frequentist risk. For Kullback-Leibler, $\alpha$-divergence, $L_1$ and $L_2$ losses, we review several recent findings that bring into play improvements by scale expansion, as well as duality relationships with point estimation and point prediction problems. A range of models is studied and includes multivariate normal with both known and unknown covariance structure, scale mixtures of normals, Gamma, as well as models with restrictions on the parameter space. A class of improper priors for nonhomogeneous Poisson intensity functions is proposed. The priors in the class have shrinkage properties. The nonparametric Bayesian predictive densities based on the shrinkage priors have reasonable properties, although improper priors have not been widely used for nonparametric Bayesian inference. In particular, the nonparametric Bayesian predictive densities are admissible under the Kullback-Leibler loss. We develop a novel shrinkage rule for prediction in a high-dimensional non-exchangeable hierarchical Gaussian model with an unknown spiked covariance structure. We propose a family of commutative priors for the mean parameter, governed by a power hyper-parameter, which encompasses scenarios ranging from perfect independence to high dependence. Corresponding to popular loss functions such as quadratic, generalized absolute, and linex losses, these prior models induce a wide class of shrinkage predictors that involve quadratic forms of smooth functions of the unknown covariance. By using uniformly consistent estimators of these quadratic forms, we propose an efficient procedure for evaluating these predictors which outperforms factor model based direct plug-in approaches. We further improve our predictors by introspecting possible reduction in their variability through a novel coordinate-wise shrinkage policy that only uses covariance level information and can be adaptively tuned using the sample eigenstructure.
We extend our methodology to aggregation based prescriptive analysis of generic multidimensional linear functionals of the predictors that arise in many contemporary applications involving forecasting decisions on portfolios or combined predictions from dis-aggregative level data. We propose an easy-to-implement functional substitution method for predicting linearly aggregative targets and establish asymptotic optimality of our proposed procedure. We present simulation experiments as well as real data examples illustrating the efficacy of the proposed method. This is joint work with Trambak Banerjee and Debashis Paul. Bayes factor is a widely used tool for Bayesian hypothesis testing and model comparison. However, it can be greatly affected by the prior elicitation for the model parameters. When the prior information is weak, people often use proper priors with large variances, but Bayes factors under convenient diffuse priors can be very sensitive to the arbitrary diffuseness of the priors. In this work, we propose an innovative method called calibrated Bayes factor, which uses training samples to calibrate the prior distributions, so that they reach a certain concentration level before we compute Bayes factors. This method provides reliable and robust model preferences under various true models. It makes no assumption on model forms (parametric or nonparametric) or on the integrability of priors (proper or improper), so is applicable in a large variety of model comparison problems. I will start by presenting some Hellinger accuracy results for the Nonparametric Maximum Likelihood Estimator (NPMLE) for Gaussian location mixture densities. I will then present two applications of the NPMLE: (a) empirical Bayes estimation of multivariate normal means, and (b) a multiple hypothesis testing problem involving univariate normal means. I will also talk about an extension to the mixture of linear regressions model. This is based on joint work with several collaborators who will be mentioned during the talk. We study several compound decision problems for a large scale teacher quality evaluation using detailed administrative data from North Carolina. The longitudinal data structure permits a rich Gaussian hierarchical model with heterogeneity in both the location and scale parameters. Optimal Bayes rules are derived for effect estimation and effect ranking under various loss functions. We focus on nonparametric empirical Bayes methods which reveal some interesting features of the teacher quality distribution and allow more flexible nonlinear shrinkage rules. These results are contrasted with those obtained using the commonly used linear shrinkage method in the teacher value-added literature. In addition, one of the proposed incentive schemes to maintain accountability of the education system involves replacing teachers in the lower tail of the student performance distribution. We investigate the implementation of such policies and discuss empirical Bayes ranking methods based on the nonparametric maximum likelihood methods for general mixture models of Kiefer and Wolfowitz (1956). In particular, a close connection is revealed between the ranking problem and the multiple testing problem. Nowadays a large amount of data is available, and the need for novel statistical strategies to analyze such data sets is pressing. This talk focuses on the development of statistical and computational strategies for a sparse regression model in the presence of mixed signals. 
The existing estimation methods have often ignored contributions from weak signals. However, in reality, many predictors altogether provide useful information for prediction, although the amount of such useful information in a single predictor might be modest. The search for such signals, sometimes called networks or pathways, is for instance an important topic for those working on personalized medicine. We discuss a new "post selection shrinkage estimation strategy" that takes into account the joint impact of both strong and weak signals to improve the prediction accuracy, and opens pathways for further research in such scenarios. Markov chain Monte Carlo (MCMC) algorithms are commonly used to fit complex hierarchical models to data. In this talk, we shall discuss some recent efforts to scale up Bayesian computation in high-dimensional and shape-constrained regression problems. The common underlying theme is to perturb the transition kernel of an exact MCMC algorithm to ease the computational cost per step while maintaining accuracy. The effects of such approximations are studied theoretically, and new algorithms are developed for the horseshoe prior and constrained Gaussian process priors in various applications. The Fama–French three factor models are commonly used in the description of asset returns in finance. Statistically speaking, the Fama–French three factor models imply that the return of an asset can be accounted for directly by the Fama–French three factors, i.e. market, size and value factor, through a linear function. A natural question is: would some kind of transformed Fama–French three factors work better? If so, what kind of transformation should be imposed on each factor in order to make the transformed three factors better account for asset returns? In this paper, we are going to address these questions through nonparametric modelling. We propose a data driven approach to construct the transformation for each factor concerned. A generalised maximum likelihood ratio based hypothesis test is also proposed to test whether transformations on the Fama–French three factors are needed for a given data set. Asymptotic properties are established to justify the proposed methods. Extensive simulation studies are conducted to show how the proposed methods perform with finite sample size. Finally, we apply the proposed methods to a real data set, which leads to some interesting findings. We show that any lower-dimensional marginal density obtained from truncating multivariate normal distributions to the positive orthant exhibits a mass-shifting phenomenon. Despite the truncated multivariate normal having a mode at the origin, the marginal density assigns increasingly small mass near the origin as the dimension increases. The phenomenon is accentuated as the correlation between the random variables increases; in particular we show that the univariate marginal assigns vanishingly small mass near zero as the dimension increases provided the correlation between any two variables is greater than $0.8$. En-route, we develop precise comparison inequalities to estimate the probability near the origin under the marginal distribution of the truncated multivariate normal. This surprising behavior has serious repercussions in the context of Bayesian shape constrained estimation and inference, where the prior, in addition to having a full support, is required to assign a substantial probability near the origin to capture flat parts of the true function of interest. 
Without further modifications, we show that commonly used priors are not suitable for modeling flat regions and propose a novel alternative strategy based on shrinking the coordinates using a multiplicative scale parameter. The proposed shrinkage prior guards against the mass shifting phenomenon while retaining computational efficiency. This is joint work with Shuang Zhou, Pallavi Ray and Anirban Bhattacharya. Since the invention of instrumental variable regression in 1928, its analysis has been predominately frequentist. In this talk we will explore whether Bayes or empirical Bayes may be more appropriate for this purpose. We will start with Mendelian randomization—-the usage of genetic variation as the instrument variable, and demonstrate how an empirical partially Bayes approach proposed by Lindsay (1985) is incredibly useful when there are many weak instruments. Selective shrinkage of the instrument strength estimates is crucial to improve the statistical efficiency. In a real application to estimate the causal effect of HDL cholesterol on heart disease, we find that the classical model with a homogeneous causal effect is not realistic. I will demonstrate evidence of this mechanistic heterogeneity and propose a Bayesian model/shrinkage prior to capture the heterogeneity. To conclude the talk, several other advantages of using (empirical) Bayes in instrumental variable regression will be discussed. In recent years, interest in spatial statistics has increased significantly. However, for large data sets, statistical computations for spatial models are a challenge, as it is extremely difficult to store a large covariance or an inverse covariance matrix, and compute its inverse, determinant or Cholesky decomposition. This talk will focus on spatial mixed models and discuss scalable matrix-free conditional samplings for their inference. The role of shrinkage in the estimation will be considered. Both Bayesian computations and frequentist method of inference will be considered. The work arose in collaboration with Somak Dutta at Iowa State University. I talk about two recent studies on singular value shrinkage. 1. We develop singular value shrinkage priors for the mean matrix parameters in the matrix-variate normal model with known covariance matrices. Our priors are superharmonic and put more weight on matrices with smaller singular values. They are a natural generalization of the Stein prior. Bayes estimators and Bayesian predictive densities based on our priors are minimax and dominate those based on the uniform prior in finite samples. In particular, our priors work well when the true value of the parameter has low rank. 2. We develop an empirical Bayes (EB) algorithm for the matrix completion problems. The EB algorithm is motivated from the singular value shrinkage estimator for matrix means by Efron and Morris. Numerical results demonstrate that the EB algorithm attains at least comparable accuracy to existing algorithms for matrices not close to square and that it works particularly well when the rank is relatively large or the proportion of observed entries is small. Application to real data also shows the practical utility of the EB algorithm. Spectral statistics play a central role in many multivariate testing problems. It is therefore of interest to approximate the distribution of functions of the eigenvalues of sample covariance matrices. 
Although bootstrap methods are an established approach to approximating the laws of spectral statistics in low-dimensional problems, these methods are relatively unexplored in the high-dimensional setting. The aim of this paper is to focus on linear spectral statistics as a class of prototype statistics for developing a new bootstrap in the high-dimensional setting, and we refer to this method as the "Spectral Bootstrap". In essence, the method originates from the parametric bootstrap, and is motivated by the notion that, in high dimensions, it is difficult to obtain a non-parametric approximation to the full data-generating distribution. From a practical standpoint, the method is easy to use, and allows the user to circumvent the difficulties of complex asymptotic formulas for linear spectral statistics. In addition to proving the consistency of the proposed method, we provide encouraging empirical results in a variety of settings. Lastly, and perhaps most interestingly, we show through simulations that the method can be applied successfully to statistics outside the class of linear spectral statistics, such as the largest sample eigenvalue and others. We discuss predictive densities for Poisson sequence models under sparsity constraints. Sparsity in count data implies situations where there exists an overabundance of zeros or near-zero counts. We investigate the exact asymptotic minimax Kullback-Leibler risks in sparse and quasi-sparse Poisson sequence models. We also construct a class of Bayes predictive densities that attain exact asymptotic minimaxity without knowledge of the true sparsity level. Our construction involves the following techniques: (i) using a spike-and-slab prior with an improper prior; (ii) calibrating the scaling of improper priors from the predictive viewpoint; (iii) plugging a convenient estimator into the hyperparameter. As an application, we also discuss the performance of the proposed Bayes predictive densities in settings where current observations are missing completely at random. The simulation studies as well as applications to real data demonstrate the efficiency of the proposed Bayes predictive densities. This talk is based on joint work with Fumiyasu Komaki (University of Tokyo) and Ryoya Kaneko (University of Tokyo). Bayesian models are increasingly fit to large administrative data sets and then used to make individualized predictions. In particular, Medicare's Hospital Compare webpage provides information to patients about specific hospital mortality rates for a heart attack or Acute Myocardial Infarction (AMI). Hospital Compare's current predictions are based on a random-effects logit model with a random hospital indicator and patient risk factors. Except for the largest hospitals, these predictions are not individually checkable against data, because data from smaller hospitals are too limited. Before individualized Bayesian predictions, people derived general advice from empirical studies of many hospitals; e.g., prefer hospitals of type 1 to type 2 because the observed mortality rate is lower at type 1 hospitals. Here we calibrate these Bayesian recommendation systems by checking, out of sample, whether their predictions aggregate to give correct general advice derived from another sample.
This process of calibrating individualized predictions against general empirical advice leads to substantial revisions in the Hospital Compare model for AMI mortality, revisions that hierarchically incorporate information about hospital volume, nursing staff, medical residents, and the hospital's ability to perform cardiovascular procedures. And for the ultimate purpose of meaningful public reporting, predicted mortality rates must then be standardized to adjust for patient-mix variation across hospitals. Such standardization can be accomplished with counterfactual mortality predictions for any patient at any hospital. It is seen that indirect standardization, as currently used by Hospital Compare, fails to adequately control for differences in patient risk factors and systematically underestimates mortality rates at the low volume hospitals. As a viable alternative, we propose a full population direct standardization which yields correctly calibrated mortality rates devoid of patient-mix variation. (This is joint research with Veronika Rockova, Paul Rosenbaum, Ville Satopaa and Jeffrey Silber). Characterizing the exact asymptotic distributions of high-dimensional eigenvectors for large structured random matrices poses important challenges yet can provide useful insights into a range of applications. To this end, in this paper we introduce a general framework of asymptotic theory of eigenvectors (ATE) for large structured symmetric random matrices with heterogeneous variances, and establish the asymptotic properties of the spiked eigenvectors and eigenvalues for the scenario of generalized Wigner matrix noise, where the mean matrix is assumed to have a low-rank structure. Under some mild regularity conditions, we provide the asymptotic expansions for the spiked eigenvalues and show that they are asymptotically normal after some normalization. For the spiked eigenvectors, we establish novel asymptotic expansions for the general linear combination and further show that it is asymptotically normal after some normalization, where the weight vector can be arbitrary. We also provide a more general asymptotic theory for the spiked eigenvectors using the bilinear form. Simulation studies verify the validity of our new theoretical results. Our family of models encompasses many popularly used ones such as the stochastic block models with or without overlapping communities for network analysis and the topic models for text analysis, and our general theory can be exploited for statistical inference in these large-scale applications. This talk is based on joint work with Jianqing Fan, Xiao Han and Jinchi Lv. We consider estimation of a heteroscedastic multivariate normal mean. Under heteroscedasticity, estimators shrinking more on the coordinates with larger variances seem desirable. However, they are not necessarily ordinary minimax. We show that such James-Stein type estimators can be ensemble minimax, minimax with respect to the ensemble risk, related to the empirical Bayes perspective of Efron and Morris. This is joint work with Larry Brown and Ed George. Estimation of the spectral distribution of large covariance matrices in the limit where the sample size as well as the dimension grow proportionally without bound has been studied by many authors in recent times. We consider a new methodology for spectrum estimation, where a suitable entropy is maximised under constraints mandated by the Marchenko-Pastur equation.
Initial studies demonstrate an improved performance of our method for discrete as well as continuous spectra. We further discuss some theoretical properties of our estimator. This work is joint with Debashis Paul, Department of Statistics, University of California Davis and Wen Jun, Department of Statistics and Applied Probability, National University of Singapore.
Mr. K. I. has a very big movie collection. He has organized his collection in a big stack. Whenever he wants to watch one of the movies, he locates the movie in this stack and removes it carefully, ensuring that the stack doesn't fall over. After he finishes watching the movie, he places it at the top of the stack. Since the stack of movies is so big, he needs to keep track of the position of each movie. It is sufficient to know for each movie how many movies are placed above it, since, with this information, its position in the stack can be calculated. Each movie is identified by a number printed on the movie box. Your task is to implement a program which will keep track of the position of each movie. In particular, each time Mr. K. I. removes a movie box from the stack, your program should print the number of movies that were placed above it before it was removed. The input contains one line with $r$ integers $a_1, \ldots, a_r$ ($1 \le a_i \le m$) representing the identification numbers of movies that Mr. K. I. wants to watch. For simplicity, assume that the initial stack contains the movies with identification numbers $1, 2, \ldots, m$ in increasing order, where the movie box with label $1$ is the top-most box. The output consists of one line with $r$ integers, where the $i$-th integer gives the number of movie boxes above the box with label $a_i$, immediately before this box is removed from the stack. Note that after each locate request $a_i$, the movie box with label $a_i$ is placed at the top of the stack.
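A minimal sketch of the standard technique for this task (my own illustration, not reference code): reserve $r$ empty slots in front of the initial stack and maintain occupancy counts in a Fenwick tree, so each query and each move-to-top costs $O(\log(m+r))$.

```python
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, delta):
        while i <= self.n:
            self.t[i] += delta
            i += i & (-i)

    def prefix(self, i):
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s


def track_positions(m, requests):
    """For each request, return how many boxes sit above it when removed."""
    r = len(requests)
    bit = Fenwick(m + r)
    pos = [0] * (m + 1)
    for movie in range(1, m + 1):    # movie 1 is the top-most box
        pos[movie] = r + movie       # initial stack occupies slots r+1..r+m
        bit.add(r + movie, 1)
    out = []
    top = r                          # next free slot in front of the stack
    for a in requests:
        out.append(bit.prefix(pos[a] - 1))   # boxes above = occupied slots before it
        bit.add(pos[a], -1)                  # remove from its current slot
        pos[a] = top
        bit.add(top, 1)                      # place at the new top
        top -= 1
    return out

print(track_positions(3, [3, 1, 1]))  # [2, 1, 0]
```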
Those who are familiar with the machine learning literature will probably recognize that this is what most machine learning problems deal with. In particular, if the output space $Y$ is the real line $R$, we call such a problem a regression problem, and this is what this article will mainly deal with. The seven black circles shown in the above figure are our training data, where the X axis and Y axis indicate the input and output spaces, respectively. Simply speaking, our goal is to find a curve interpolating the seven points. Perhaps the first method that you can think of would be a linear model. It is a very simple, very efficient method. As there is no free lunch, however, the result of linear regression is likely to be unsatisfactory. The blue line in the figure below is the result of a linear regression, and clearly, we might want something better than this. One natural extension would be using higher-order polynomial models such as second-, third-, or fourth-degree polynomials, which are shown with different colors (check the legend). However, these results aren't that satisfactory either. In this case, the representer theorem can give you a reasonable solution. The theorem applies to problems of the form $$\min_{g} \sum_{i=1}^{n} L\big(y_i, g(x_i)\big) + \lambda\, R(g),$$ where $L$ is a loss function, $\lambda > 0$ is a regularization weight, and $R(g)$ is a regularizer of the functional $g(\cdot)$ such as a Hilbert norm $\| g \|_H$. It states that a minimizer can be written as $$g(x) = \sum_{i=1}^{n} \alpha_i\, k(x, x_i),$$ where $x_i$ is the $i$th training input, $n$ is the number of training data, $k(x, x')$ is a kernel function, and the $\alpha_i$ are the parameters we wish to optimize. In other words, using the representer theorem, finding an arbitrary function $g(\cdot)$ reduces to finding a parameter vector $\alpha$ whose size equals the number of training data. I would like to emphasize that this analytic solution is equivalent to that of kernel ridge regression or Gaussian process regression. In other words, kernel ridge regression and GPR can be explained within the framework of the representer theorem. The red curve in the figure below is the regression result using the representer theorem.
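To make this concrete, here is a small NumPy sketch of the resulting estimator (the kernel, its bandwidth, and the regularization weight are illustrative choices, not values from the original post): with squared loss and a Hilbert-norm regularizer, the representer theorem reduces the search over functions to solving a linear system for $\alpha$, exactly the kernel ridge / GP regression solution.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_predict(X_train, y_train, X_test, lam=1e-2, gamma=1.0):
    K = rbf_kernel(X_train, X_train, gamma)
    # Closed-form alpha = (K + lam * I)^{-1} y from the squared-loss objective
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

X = np.linspace(-3, 3, 7)[:, None]            # seven training inputs
y = np.sin(X).ravel() + 0.1 * np.random.randn(7)
X_test = np.linspace(-3, 3, 200)[:, None]
y_hat = fit_predict(X, y, X_test)             # the smooth interpolating curve
```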
Abstract. We consider Schr\"odinger operators on the half-line, both discrete and continuous, and show that the absence of bound states implies the absence of embedded singular spectrum. More precisely, in the discrete case we prove that if $\Delta + V$ has no spectrum outside of the interval $[-2,2]$, then it has purely absolutely continuous spectrum. In the continuum case we show that if both $-\Delta + V$ and $-\Delta - V$ have no spectrum outside $[0,\infty)$, then both operators are purely absolutely continuous. These results extend to operators with finitely many bound states.
The seminar will be held in room 901 of Van Vleck Hall on Mondays from 3:30pm - 4:30pm, unless indicated otherwise. Abstract: The main goal of this talk is to discuss the classical (well known) versions of the strong maximum principle of Hopf and Oleinik, as well as the generalized maximum principle of Protter and Weinberger. These results serve as steps towards the theorem of characterization of the strong maximum principle of the speaker, Molina-Meyer and Amann, which substantially generalizes a popular result of Berestycki, Nirenberg and Varadhan. Abstract: The Ericksen-Leslie system is the governing equation that describes the hydrodynamic evolution of nematic liquid crystal materials, first introduced by J. Ericksen and F. Leslie back in the 1960s. It is a coupling system between the underlying fluid velocity field and the macroscopic average orientation field of the nematic liquid crystal molecules. Mathematically, this system couples the Navier-Stokes equation and the harmonic heat flow into the unit sphere. It is very challenging to analyze such a system by establishing the existence, uniqueness, and (partial) regularity of global (weak/large) solutions, with many basic questions to be further explored. In this talk, I will report some results we obtained over the last few years. Abstract: In this talk, I will present some recent results concerning the 1D isentropic Euler equations using the theory of compensated compactness in the framework of finite energy solutions. In particular, I will discuss the convergence of the vanishing viscosity limit of the compressible Navier-Stokes equations to the Euler equations in one space dimension. I will also discuss how the techniques developed for this problem can be applied to the existence theory for the spherically symmetric Euler equations and the transonic nozzle problem. One feature of these three problems is the lack of a priori estimates in the space $L^\infty$, which prevents the application of the standard theory for the 1D Euler equations. Abstract: I will discuss recent results on the analysis of the vanishing viscosity limit, that is, whether solutions of the Navier-Stokes equations converge to solutions of the Euler equations, for incompressible fluids when walls are present. At small viscosity, a viscous boundary layer arises near the walls where large gradients of velocity and vorticity may form and propagate in the bulk (if the boundary layer separates). A rigorous justification of the Prandtl approximation, in the absence of analyticity or monotonicity of the data, is available essentially only in the linear or weakly linear regime under no-slip boundary conditions. I will present in particular a detailed analysis of the boundary layer for an Oseen-type equation (linearization around a steady Euler flow) in general smooth domains. Abstract: Hydrodynamic limits concern the rigorous derivation of fluid equations from kinetic theory. In bounded domains, kinetic boundary corrections (i.e. boundary layers) play a crucial role. In this talk, I will discuss a fresh formulation to characterize the boundary layer with geometric correction, and in particular, its applications in 2D smooth convex domains with in-flow or diffusive boundary conditions. We will focus on some newly developed techniques to justify the asymptotic expansion, e.g. weighted regularity in Milne problems and boundary layer decomposition.
Abstract: This talk deals with the problem of global existence of solutions to a quadratic coupled wave-Klein-Gordon system in space dimension 2, when initial data are small, smooth and mildly decaying at infinity. Some physical models, especially related to general relativity, have shown the importance of studying such systems. At present, most of the existing results concern the 3-dimensional case or that of compactly supported initial data. We content ourselves here with studying the case of a model quadratic quasi-linear non-linearity that can be expressed in terms of "null forms". Our aim is to obtain some energy estimates on the solution when some Klainerman vector fields are acting on it, and sharp uniform estimates. The former ones are recovered by systematically making use of normal form arguments for quasi-linear equations, in their para-differential version, whereas we derive the latter ones by deducing a system of ordinary differential equations from the starting partial differential system. We hope this strategy will lead us in the future to treat the case of the most general non-linearities. A biological evolution model involving a trait as the space variable has an interesting feature: a phenomenon called Dirac concentration of the density as the diffusion coefficient vanishes. The limiting equation from the model can be formulated as a Hamilton-Jacobi equation with a maximum constraint. In this talk, I will present a way of constructing a solution to a constrained Hamilton-Jacobi equation together with some uniqueness and non-uniqueness properties. Abstract: We consider the local well-posedness of the Cauchy problem for the gravity water waves equations, which model the free interface between a fluid and air in the presence of gravity. It has been known that by using dispersive effects, one can lower the regularity threshold for well-posedness below that which is attainable by energy estimates alone. Using a paradifferential reduction of Alazard-Burq-Zuily and low regularity Strichartz estimates, we apply this idea to the well-posedness of the gravity water waves equations in arbitrary space dimension. Further, in two space dimensions, we discuss how one can apply local smoothing effects to further extend this result. Abstract: Many biological systems exhibit the property of self-organization, the defining feature of which is coherent, large-scale motion arising from underlying short-range interactions between the agents that make up the system. In this talk, we give an overview of some simple models that have been used to describe the so-called flocking phenomenon. Within the family of models that we consider (of which the Cucker-Smale model is the canonical example), writing down the relevant set of equations amounts to choosing a kernel that governs the interaction between agents. We focus on the recent line of research that treats the case where the interaction kernel is singular. In particular, we discuss some new results on the wellposedness and long-time dynamics of the Euler Alignment model and the Shvydkoy-Tadmor model. Abstract: In this talk we will give sufficient conditions for the local solvability of a class of degenerate second order linear partial differential operators with smooth coefficients. The class under consideration, inspired by some generalizations of the Kannai operator, is characterized by the presence of a complex subprincipal symbol.
By giving suitable conditions on the subprincipal part and using the technique of a priori estimates, we will show that the operators in the class are at least $L^2$ to $L^2$ locally solvable. Abstract: The calculus of variations asks us to minimize some energy and then describe the shape/properties of the minimizers. It is perhaps a surprising fact that minimizers of "nice" energies are more regular than one, a priori, assumes. A useful tool for understanding this phenomenon is the Euler-Lagrange equation, which is a partial differential equation satisfied by the critical points of the energy. However, as we teach our calculus students, not every critical point is a minimizer. In this talk we will discuss some techniques to distinguish the behavior of general critical points from that of minimizers. We will then outline how these techniques may be used to solve some central open problems in the field. We will then turn the tables, and examine PDEs which look like they should be an Euler-Lagrange equation but for which there is no underlying energy. For some of these PDEs the solutions will regularize (as if there were an underlying energy); for others, pathological behavior can occur. In this talk, we first introduce the inverse mean curvature flow and its well-known application in the proof of the Riemannian Penrose inequality by Huisken and Ilmanen. Then our main result on the existence and behavior of convex non-compact solutions will be discussed. The key ingredient is an a priori interior-in-time estimate on the inverse mean curvature in terms of the aperture of the supporting cone at infinity. This is joint work with P. Daskalopoulos, and I will also mention recent work with P.-K. Hung concerning the evolution of singular hypersurfaces.
Abstract: The highly radiopure $\simeq$ 250 kg NaI(Tl) DAMA/LIBRA set-up is running at the Gran Sasso National Laboratory of the I.N.F.N.. In this paper the first result obtained by exploiting the model independent annual modulation signature for Dark Matter (DM) particles is presented. It refers to an exposure of 0.53 ton$\times$yr. The collected DAMA/LIBRA data satisfy all the many peculiarities of the DM annual modulation signature. Neither systematic effects nor side reactions can account for the observed modulation amplitude and contemporaneously satisfy all the several requirements of this DM signature. Thus, the presence of Dark Matter particles in the galactic halo is supported also by DAMA/LIBRA and, considering the former DAMA/NaI and the present DAMA/LIBRA data all together (total exposure 0.82 ton$\times$yr), the presence of Dark Matter particles in the galactic halo is supported at 8.2 $\sigma$ C.L..
The question is inspired by a short passage on the LMM in Mark Joshi's book. The LMM cannot be truly Markovian in the underlying Brownian motions due to the presence of state-dependent drifts. Nevertheless, the drifts can be approximated 'in a Markovian way' by using predictor-corrector schemes to make the rates functions of the underlying increments across a single step. Ignoring the drifts, the LMM would be Markovian in the underlying Brownian motions if the volatility function is separable. The volatility function $\sigma_i(t)$ is called separable if it can be factored as follows $$\sigma_i(t)=v_i\times\nu(t),$$ where $v_i$ is a LIBOR-specific scalar and $\nu(t)$ is a deterministic function which is the same across all LIBOR rates. The separability condition above is sufficient for the LMM to be Markovian in the Brownian motion. How far is it from being a necessary one? What is the intuition behind the separability condition? Are there any weaker sufficient conditions? You can use a matrix-type separability condition as well. This is similar, but the equation has more flexibility. The rates are then Markovian in some combinations of the Brownian motions. See More Mathematical Finance for details. Re 2: It's a mathematical trick. Insisting on separability of the volatility function makes the LMM useless. Its power lies in its calibration abilities. If you constrain the vol function to a separable form, you throw that ability out of the window. You might just as well use LGM then, and it will be more intuitive and faster.
Is it true "If $X \subset Y$ then $\Bbb I(Y) \subset \Bbb I(X)$"(proper inclusion)? Is it true "If $X \subset Y$ then $\Bbb I(Y) \subset \Bbb I(X)$; here I am using proper inclusion. Couldn't prove it though. Trying for long time please help. Actually I saw here "https://people.maths.bris.ac.uk/~mp12500/teaching/Lectures5-7.pdf" in proposition 17 $\Rightarrow$ direction. Looks like it's actually Prop 11 (part 2) in the notes you link that shows inclusion. To see proper inclusion, suppose $I(Y) = I(X)$. Then $V(I(Y)) = V(I(X))$. But as $X$ and $Y$ are algebraic varieties, $V(I(X)) = X$ and $V(I(Y)) = Y$. So you must have had $X = Y$. Not the answer you're looking for? Browse other questions tagged abstract-algebra algebraic-geometry ring-theory commutative-algebra ideals or ask your own question. Is $\Bbb C$ a purely transcendental extension of a proper subfield? Let $I$ and $J$ be prime ideals. Then is $I.J=I\cap J$? Let $V$ be a 0-dimensional variety. If $Z$ is a proper closed subset of $V$, then $Z$ is discrete? If every proper overring of $R$ is a valuation domain, then $R$ is a valuation domain? Showing that if $\Bbb Z/ab\Bbb Z$ is isomorphic to $\Bbb Z/a\Bbb Z \times \Bbb Z/b\Bbb Z$, then $gcd(a,b)=1$ necessarily. Determining ideals, isomorphic rings of $\Bbb C[x, y]/(y^2 - x^3)$? Proper inclusion of ideals localizes to a proper inclusion at one of the generators of the ring.
Specify the start of the range in which to interpolate. Specify the end of the range in which to interpolate. Specify the value to use to interpolate between x and y. mix performs a linear interpolation between x and y using a to weight between them. The return value is computed as $x \times (1 - a) + y \times a$. The variants of mix where a is genBType select which vector each returned component comes from. For a component of a that is false, the corresponding component of x is returned. For a component of a that is true, the corresponding component of y is returned. Components of x and y that are not selected are allowed to be invalid floating-point values and will have no effect on the results.
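For reference, the same semantics are easy to express outside of GLSL; here is an illustrative NumPy re-implementation (not the GLSL source) covering both the float-weight and boolean-select variants:

```python
import numpy as np

def mix(x, y, a):
    """Componentwise linear interpolation x*(1-a) + y*a; if a is boolean,
    select y where a is True and x where it is False (genBType variant)."""
    x, y, a = np.asarray(x, float), np.asarray(y, float), np.asarray(a)
    if a.dtype == bool:
        return np.where(a, y, x)
    return x * (1.0 - a) + y * a

print(mix([0.0, 10.0], [1.0, 20.0], 0.25))           # [ 0.25 12.5 ]
print(mix([0.0, 10.0], [1.0, 20.0], [False, True]))  # [ 0. 20. ]
```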
A somewhat neglected model in the Data Science/Machine Learning community is Multivariate Adaptive Regression Splines (MARS). Despite being less popular than Neural Networks or Trees, I find the concept of MARS pretty interesting, especially because it comes as a fully linear model and thus can preserve model interpretability. One of the drawbacks of the standard MARS algorithm is the greedy selection scheme for basis functions. Like in CART-based algorithms, MARS only looks for the locally best term to add or remove. While this still leads to good results in many cases, it would be cool to also find a globally optimal solution. This is actually quite simple with Penalized Regression, although the computational complexity grows quickly with data size. The major increase in complexity for this approach comes from taking higher-order interactions into account. For that reason, I'll keep the data pretty simple for now. The test set simply consists of another 1000 points scattered evenly in $[-5,5]$. For the univariate case, we want to use every training example as a possible knot point for the spline and let the penalized regression find the optimal linear combination among all possible knots. The model is $$\hat{f}(x) = \beta_0 + \beta_1 x + \sum_{i=1}^{N} \beta_{i+1}\, \max(0,\, x - x_i),$$ where $N$ is the size of the training data. I chose Lasso here to make the result look more like Piecewise Linear Regression. In MARS, the mirrored hinge function $\max(0, x_i - x)$ would actually also be used as a regressor. To avoid an explosion of features, I left these mirrored hinge functions out, recognizing no disadvantages in the results. Now, we start by creating the feature matrices. As a test set, I used 1000 evenly spaced points on $[-5,5]$ to plot the resulting regression function (see above). The $\alpha$ (penalization) parameter should actually be selected via some sort of cross-validation to find its optimum given the data. For this toy example, I just tested a little bit until I found a feasible solution. /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems. To get even more smoothness, Ridge Regression could be used instead of Lasso. The reason why I reduced the interval for $x$ and $y$ was to reduce the additional spacing among datapoints that comes with adding one dimension (curse of dimensionality).
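For completeness, here is a condensed sketch of the feature construction described above (the toy target, the knot choice, and $\alpha$ are illustrative; the original post's exact data is not reproduced):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-5, 5, 30))
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(30)

def hinge_features(x, knots):
    # Column j is the hinge max(0, x - knot_j); a plain linear term is
    # prepended so the fit is not forced to be flat left of all knots.
    H = np.maximum(0.0, x[:, None] - knots[None, :])
    return np.column_stack([x, H])

knots = x_train.copy()           # every training point is a candidate knot
model = Lasso(alpha=1e-3, max_iter=100000)
model.fit(hinge_features(x_train, knots), y_train)

x_test = np.linspace(-5, 5, 1000)
y_hat = model.predict(hinge_features(x_test, knots))
```

Swapping `Lasso` for `sklearn.linear_model.Ridge` gives the smoother variant mentioned above, since the coefficients are shrunk rather than zeroed out.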
Abstract: A theory is constructed for attractors of all finite-energy solutions of conservative one-dimensional wave equations on the whole real line. The attractor of a non-degenerate (that is, generic) equation is the set of all stationary solutions. Each finite-energy solution converges as $t\to\pm\infty$ to this attractor in the Frechet topology determined by local energy seminorms. The attraction is caused by energy dissipation at infinity. Our results provide a mathematical model of Bohr transitions ("quantum jumps") between stationary states in quantum systems.
[astro-ph/9912499] RC J1148+0455 identification: gravitational lens or group of galaxies? Abstract: The structure of the radio source RC B1146+052 of the ``Cold'' catalogue is investigated using data from the MIT-GB-VLA survey at 4850 MHz. This source belongs to the steep-spectrum radio source subsample of the RC catalogue. Its spectral index is $\alpha = -1.04$. The optical image of this source obtained with the 6m telescope is analysed. The radio source center is situated in a group of 8 galaxies of about 24$^m$ in the R-filter. Possible explanations of the complex structure of the radio components are considered.
Abstract: We complete the study of static BPS, asymptotically AdS$_4$ black holes within N=2 FI-gauged supergravity where the scalar manifold is a homogeneous very special Kahler manifold. We find the analytic form for the general solution to the BPS equations; the horizon appears as a double root of a particular quartic polynomial, whereas in previous work this quartic polynomial further factored into a pair of double roots. A new and distinguishing feature of our solutions is that the phase of the supersymmetry parameter varies throughout the black hole. The general solution has $2n_v$ independent parameters; there are two algebraic constraints on $2n_v+2$ charges, matching our previous analysis of BPS solutions of the form $AdS_2\times \Sigma_g$. As a consequence we have proved that every BPS geometry of this form can arise as the horizon geometry of a BPS AdS$_4$ black hole. When specialized to the STU-model, our solutions uplift to M-theory and describe a stack of M2-branes wrapped on a Riemann surface in a Calabi-Yau fivefold with internal angular momentum.
In Ron Maimon's answer (linked below), he says that we should remove the "gadgets" at the top of the editor. I disagree that we should remove all the buttons, since this is a WYSIWYG editor, however, indeed some of them can go. So I have set this thread up to discuss exactly which buttons we don't need, etc. Also, I have noticed that it is difficult to understand all the buttons at first glance. I think some of the icons should be changed or replaced with text. So, basically, use this thread to suggest changes to the editor. @physicsnewbie That surely doesn't work on chrome/chromium. It is also useless to have a hoard of unhelpful buttons, which nobody will ever use, and to keep scrolling for simple tasks (in the styles). More than anything else, I would like to be able to write using markdown. It is slowly but almost surely becoming an internet-wide standard. Fwiw, I think the Phy.SE editor is quite excellent. @Siva It was my first idea to use this markdown editor with LaTeX. The issue with this editor is that Markdown is applied to the page first. For example, the underscore is used to indicate italics, and this usage will conflict with MathJax's use of the underscore to indicate a subscript. Before MathJax renders the page, the editor will convert the subscript markers into italics (inserting <i> tags into your mathematics, which will cause MathJax to ignore the math). Another example is escape sequences, as known from the programming languages C and C++. For instance, \n is an end-of-line and \t is a tab, while \\ is a backslash. During the transformation of an edited text into HTML-coded text, the Markdown editor "eats" one of these backslashes. Therefore, a \\\ was required to make the \\ survive in MathJax arrays. During the first developments I found a way of preprocessing the edited text by replacing LaTeX sequences with tags and reinserting these blocks at the end in a postprocessing step. This worked fine for the live preview, but for unknown reasons not for the real posting. After some days of frustration and anger, I gave up and developed the current editor plugin based on CKEditor with a MathJax plugin. The importing system uses the already formatted HTML code, as provided by the API. I'm so used to md+MathJax (I use stackedit.io) that I like to write in another editor first and finally paste the contents into the answer window. It is more comfortable than clicking the icons. @HelderVelez You're right; however, the markdown editor is unfortunately incompatible with PhysicsOverflow. The best you can do is use the HTML editor (using the Source button). @Siva I remember that PolarKernel told us that when one writes the double slashes \\ in arrays, one of the slashes is swallowed up by the editor because it interferes with the editor's internal code or something like that, so one would need to write \\\ instead of \\. This will be confusing for new users, and it will be a huge task to fix all the old imported questions. How does the importing system currently work? Does it import HTML text that is already formatted? @polarkernel Thanks; however, for me at least, I still see them. \(\Omega\) (symbols) is unnecessary, as most of these symbols are either not used in physics or can simply be added from the keyboard. (cut, copy, paste, paste2, undo, redo) can all be done trivially through the keyboard, and most browsers don't even allow the first 4, for obvious reasons. Get rid of "Font". One doesn't need so many font choices in a Q&A site.
You really can't have all users choosing their favourite font. Done. I also removed the font size select. OK? When one clicks on the editor (for answers), and anyway, for questions and comments, the "Format" is changed to "Normal". This conveys little information about what the dropdown is for. Instead, the first dropdown (Styles) should be labelled "Special Styles", and the second dropdown (Normal/Format) should be labelled "Level". Done. "Special Styles" was too long, so I used an abbreviation. "Level" does not yet work; it will come soon. With both IE and Chrome, I can hover the cursor over a symbol to get a short message explaining its function. The editor does a fine job as it is. No, it is too bulky. It is also extremely difficult for new users to understand what all the symbols mean. @dimension10 I'm pretty thick, yet had no problems when I first used it; I just hovered the cursor over a symbol to get quick info on what it does. There's also a very nice help button. @physicsnewbie At least on chrome/chromium, there is no hover text. @dimension10 For me, hover text works for both Chrome and IE. Q2A doesn't even allow me to give an answer with Firefox. @physicsnewbie Ah, I see, so it's a problem with chromium only. Still, a pretty major browser. Can the $\Sigma$ button please be changed to $\TeX$? It is more descriptive. I think that TeX is more intuitive. When I saw \(\Sigma\) for the first time I thought that it was a non-TeX equation editor. New users would likely write TeX equations of the kind $\alpha ^2 \sim 1/137$ if they don't see a TeX button. I guess it is not possible to directly use TeX like in SE, is it? Lol. I do see that it is indeed possible to directly write TeX. Why do we need that button then? @drake there's no preview if you use enclosing $$, so you can't check until you've posted, whereas the inbuilt LaTeX editor is WYSIWYG. Done. The 16x16 pixel icon is very small for these characters. Can you read \(\TeX\)? I can already see them. Maybe the old versions are still in the cache? @polarkernel Yes, that seems to be the case. I tried a browser I haven't used on PO before, and it can be seen. Thanks! Oh, and the fonts removal of course. I can't see any changes for the rest at all. Please get rid of the underline button. Even on SE, there is no underline option. Nobody uses it, and underlining makes the text look like a link. Please get rid of the remove text formatting (Tx) button. It can't take more than a few clicks to manually remove formatting. Please remove the list indentation buttons (after the unordered/bulleted list button and before the blockquote button). They can be triggered trivially with tab/shift-tab, and they look cryptic (they seem to do general indentation, but that doesn't work, and it isn't; it's list level changing), so nobody will understand them or use them. Can the icon for the button which adds the horizontal rule be changed to the text "hr"? It's more descriptive. I didn't get confused, but many may not recognise it, or think it is a template for adding a book citation, or something.
CommonCrawl
- How to find sets of polynomially bounded numbers whose subset sums are different?
- No common terms between polynomials: an efficient check?
- How can we prove Schwartz-Zippel PIT is applicable to natural polynomials?
- How to handle the generator polynomial in CRC if given in (x+1)(x^3+x^2+1) form?
- Is the following problem coNP-complete? Inputs: $p$, a possibly non-convex multivariate polynomial over $\mathbb Z$, and an integer $k\in \mathbb Z$. Question: Is $\forall x\in\mathbb Z: p(x)\geq k$?
- How does one generate all the terms of a multivariate polynomial algorithmically?
- Fitting a low-degree polynomial to a function on a finite 1d grid: a combinatorial problem?
- Can we evaluate a polynomial of degree N modulo M at all M points, faster than Θ(mn) time?
- Let $f$ be a boolean function whose minimum-degree real polynomial representation has degree $d$. Is there a relation between the number of zeros of $f$ or $1-f$ and the degree $d$?
- What is the use of Horner's method?
CommonCrawl
We assume that a base transmitter station (BTS) in a code division multiple access (CDMA) mobile network is equipped with a smart antenna. Each transmitting user is tracked by a beam of fixed width centered at the user. Mobiles are assumed to apply perfect power control such that each is received with the same unit power at the BTS. Under these assumptions outage occurs if the number of overlapping beams exceeds a certain threshold. The corresponding probability is calculated under different random traffic models. In particular, homogeneous Poisson fields and isotropic planar phase processes are considered. The investigations are strongly related to the first hitting time in an M/D/$\infty$ queue as well as to object recognition for a photon detector.
CommonCrawl
I'm trying to understand the advantages and disadvantages of using Direct Alpha versus Excess IRR for computing excess returns over a market index for private assets. Wikipedia references a highly informative paper that compares Direct Alpha against PME, PME+, mPME, and KS-PME, discusses the limitations of these, and analyzes the correlations between them. I'm looking for a similar resource that compares the two in my title. In the excess IRR/IPP formulation, $c_i$ and $d_i$ are the contribution and distribution at time $t_i$, respectively, $b$ is the annualized public market benchmark return over the relevant periods, $r$ is the annualized excess IRR/IPP, and $q$ is the compounding frequency (typically $q=1$ in the IPP setting). If you let $q\to\infty$ (i.e., continuous compounding) and with some redefinition (e.g., $e^\alpha = 1+a$), it is easy to show that excess IRR/IPP converges to direct alpha. So simply put, direct alpha is just the limiting case of excess IRR/IPP; conceptually they're the same thing. As such, they have the same advantages and disadvantages. IMO, the derivation of IPP/excess IRR is more direct (resembling the way OAS is defined in the fixed income market), and the use of discrete compounding makes the output more comparable to other reported returns.
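While hunting for such a reference, it can help to just compute the measures on toy cash flows. Below is a minimal sketch of the direct alpha calculation as I understand it from the Gredil-Griffiths-Stucke paper: compound every cash flow to the end date using the benchmark index, take the IRR of the adjusted flows, then take the log. The bisection IRR solver is a crude stand-in just to keep the sketch self-contained, and all the numbers are made up:

```python
import numpy as np

def irr(times, flows, lo=-0.99, hi=10.0, tol=1e-10):
    """Crude bisection IRR: find r with sum(flows * (1+r)^-t) = 0.
    Assumes the root is bracketed by [lo, hi]."""
    npv = lambda r: sum(f * (1.0 + r) ** (-t) for t, f in zip(times, flows))
    f_lo = npv(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(mid) * f_lo > 0:
            lo, f_lo = mid, npv(mid)   # same sign as lower end: move lo up
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def direct_alpha(times, flows, bench_level):
    """times: years from inception; flows: -contributions / +distributions,
    with the last flow including ending NAV; bench_level[i]: benchmark index
    level at times[i]. Returns the annualized (log) direct alpha."""
    fv = [f * bench_level[-1] / g for f, g in zip(flows, bench_level)]
    return np.log(1.0 + irr(times, fv))

# toy example: invest 100 at t=0, get back 150 at t=5, benchmark doubles
print(direct_alpha([0, 5], [-100, 150], [1.0, 2.0]))   # about -0.0575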
CommonCrawl
Paper summary davidstutz Schmidt et al. theoretically and experimentally show that training adversarially robust models requires a higher sample complexity compared to regular generalization. Theoretically, they analyze two very simple families of datasets, e.g., consisting of two Gaussian distributions corresponding to a two-class problem. On such datasets, they prove that "robust generalization", i.e., generalization to adversarial examples, requires much higher sample complexity compared to regular generalization, i.e., generalization to the test set. These results are interesting because they suggest that the sample complexity might be even worse for more complex and realistic data distributions, such as the ones we commonly tackle in computer vision. Experimentally, they show similar results on MNIST, CIFAR-10 and SVHN. Varying the size of the training set and plotting the accuracy on adversarially computed examples results in Figure 1. As can be seen, there seems to be a clear advantage to having larger training sets. Note that these models were trained using adversarial training with an $L_\infty$ adversary constrained by the given $\epsilon$. https://i.imgur.com/SriBAt4.png Figure 1: Training set size plotted against the adversarial test accuracy on MNIST, CIFAR-10 and SVHN. The models were trained using adversarial training. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
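The adversarial training referred to in the summary is the usual min-max procedure with an $L_\infty$-bounded PGD adversary (in the style of Madry et al.). A rough PyTorch sketch of one training step, with hypothetical `model` and `optimizer` objects, could look as follows:

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha, steps):
    """Projected gradient ascent on the loss inside the L-inf ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()               # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project back to the ball
        x_adv = x_adv.clamp(0.0, 1.0)                     # stay a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    x_adv = pgd_linf(model, x, y, eps=eps, alpha=eps / 4, steps=10)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # train on the worst-case inputs
    loss.backward()
    optimizer.step()
    return loss.item()
```

The sample-complexity point of the paper then shows up empirically as: the same loop, run on fewer training examples, yields markedly lower adversarial test accuracy.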
CommonCrawl
I have been trying to build up my intuition on finding rotational symmetries of shapes, and I have been looking at the dodecahedron from the platonic solids. I am convinced of each count I made, but I should only be finding $60$ rotational symmetries, and $120$ if we include inversion symmetries. The rotational symmetries I have found include $1$ identity. If we look at opposite edges and draw a line which bisects both of these opposite edges, then we find a $180°$ rotation for every pair of opposite edges, which introduces $15$ new axes of rotation. Next we have an axis going through every pair of opposite faces, with 4 non-trivial rotations for every pair of opposite faces. This adds $4\times12 = 48$ new rotations. Lastly, I looked at the opposite pairs of vertices: we may rotate about the axis which passes through both vertices and get $2$ more rotations for each pair of vertices. This adds $20$ more rotations. With this I think I have found $1 + 15 + 48 + 20 = 84$ rotational symmetries, which looks like I am overcounting somewhere, but I am not sure where. For "$4\times12=48$" read $4\times6=24$, since there are $6$ pairs of opposite faces, not $12$. With this correction you have $1+15+24+20=60$ rotations, the right number.
CommonCrawl
Recently there has been a lot of interest in the mathematics of the popular game Sudoku. In a typical Sudoku puzzle, a number of initial clues are given, and the solver uses strategies to fill in the remaining clues to complete the board. A well-known open problem is, "How many initial clues are necessary for the puzzle to have a unique completion?" In this talk, we shift the focus of study from clues to what we call packets. A packet gives information about what entries can NOT be in a cell. Introducing packets gives rise to some interesting questions about Sudoku and its $4\times4$ counterpart, Shidoku. One such question is "what is the minimum number of packets needed to describe a puzzle with a unique completion?" This question parallels the minimum clue question. Packets are also intimately related to the Boolean system of polynomial equations used to describe the constraints of a Sudoku puzzle. They can be used to more efficiently calculate a Gröbner basis of the ideal generated by this system of equations. Packets are also inherently related to human methods for solving Sudoku puzzles. To emulate human solving strategies we introduce the idea of solving symmetries -- functions which manipulate a puzzle while maintaining the same solutions. We show that these solving symmetries form a group which acts on the set of Sudoku puzzles. This research was performed at a summer REU at James Madison University and supported by NSF DMS-1004516. Foremost we want to thank our advisor Dr. Elizabeth Arnold for her continued dedication in helping us on this project. We would also like to thank our REU colleagues Bjorn Wastvedt, Eddie Tu, and Dr. Stephen Lucas for their help and support. Chapman, Harrison and Rupert, Malcolm E. (2012) "A Group-theoretic Approach to Human Solving Strategies in Sudoku," Colonial Academic Alliance Undergraduate Research Journal: Vol. 3, Article 3.
CommonCrawl
I think this is going to end up being a long one 1, and it possibly won't be the easiest post to follow; mostly because I will likely end up introducing a decent number of topics I haven't talked about here before. I guess we'll see how things turn out 2.

Historically, one topic of interest to number theorists has been diophantine equations. These are equations where you are looking for integer solutions. One famous example is $x^2+y^2=z^2$, where you look for integer solutions. In general, there's no overarching method to solve any diophantine equation 3, and so individual equations may be solved using ad hoc seeming methods. For example, the pythagorean equation can be solved by projecting points from the unit circle onto a line 4. Another (class of) well-known example(s) is due to Fermat: $x^n+y^n=z^n$, but we'll put off solving this one until a later post. The equations we'll tackle in this post are Pell's equations: $x^2-dy^2=1$, where $d$ is a positive integer.

Why do we require $d>0$? What happens if $d<0$?

I never mentioned this in the original post 5, but we also want to assume that $d$ is not a square number. If $d=m^2$, then the equation becomes $x^2-m^2y^2=1$, which means $(x-my)(x+my)=1$, so $(x,y)=(\pm1,0)$ are the only solutions.

Before solving Pell's equations, we'll start with a simpler task (although it may not be immediately obvious that this equation is any easier to solve): finding all integer solutions of $y^2=x^3-2$.

At this point, if it seems like things here will be really novel to you, then I recommend that you check out my previous post on number theory. It's not required to understand this post, and won't necessarily add a bunch to your knowledge of the ideas used here, but I think it could serve as good motivation for seeing that both geometric reasoning and working in number systems larger than $\mathbb Z$ can be helpful in tackling number theoretic problems 6.

If you were to decide to stop reading, leave this post, and go start finding and solving diophantine equations, one thing you will notice is that multiplication makes things so much easier. Hence, this particular diophantine equation has no solutions. If you can set your problem up as one thing times another thing equals a third thing, then since everything is an integer, the things on the left hand side must be factors of the right hand side! This vastly reduces the number of potential solutions 7, and often can lead directly to an actual solution (or show that none exist).

That being said, the key insight to solving our warmup problem is that we can rewrite it as $y^2+2=x^3$. I'll take a second to pause so you can let out a gasp 8 of amazement. Once things are in this form, we can see that the left hand side is almost a difference of squares. The only problem is that it's not a difference and $2$'s not a square, but motivated by the possibility of factoring the left hand side, we ignore these constraints, stop restricting ourselves to $\mathbb Z$, and from here on out, do our work in $\mathbb Z[\sqrt{-2}]$ instead 9. I don't know if this feels illegitimate, but it shouldn't because it's not, so I'm gonna move on.

We can now write our equation as $(y+\sqrt{-2})(y-\sqrt{-2})=x^3$. At this point, we really hope that $y+\sqrt{-2}$ and $y-\sqrt{-2}$ are coprime so that they must both be perfect cubes; this would be a fairly restrictive condition. However, hoping this would be getting ahead of ourselves. This line of thinking would work in $\mathbb Z$, but the reason it works (and the reason we can have a sensible definition of coprime in the first place) is because $\mathbb Z$ is a unique factorization domain 10, but we're working with $\mathbb Z[\sqrt{-2}]$ instead of just $\mathbb Z$. Luckily, it turns out that this is a UFD as well, but this is a non-trivial claim that could have failed if we had added a different square root instead 11.

Show that $\mathbb Z[\sqrt{-2}]$ is a UFD.
Hint: It suffices to show that it's a Euclidean domain, and you can do this by considering points closest to the "ambient quotient" 12. Also, you might want to read ahead a little before tackling this exercise.

To show that $y+\sqrt{-2}$ and $y-\sqrt{-2}$ are coprime, we'll introduce a norm: $N(a+b\sqrt{-2})=a^2+2b^2$. This definition should look familiar to anyone who read my previous post, and so it should not come as a surprise that this norm is multiplicative. That is, for any $\alpha,\beta\in\mathbb Z[\sqrt{-2}]$, we have $N(\alpha\beta)=N(\alpha)N(\beta)$. Let $\delta$ be a common factor of $y+\sqrt{-2}$ and $y-\sqrt{-2}$; this means that $\delta$ divides their difference $2\sqrt{-2}=-(\sqrt{-2})^3$, so $N(\delta)$ divides $N(2\sqrt{-2})=8$. Before proceeding, a quick note. When considering factoring and related concepts (like primality), we don't care about units (numbers dividing 1) because units are annoying and change nothing. Furthermore, a number $\alpha$ is a unit iff $N(\alpha)=1$. Proving this is left as an exercise to the reader. Now, back to our problem. The following proposition implies that $\delta=u(\sqrt{-2})^k$ for some unit $u$ and integer $k\geq0$.

In $\mathbb Z[\sqrt{-2}]$, $\sqrt{-2}$ is prime 13.

Show that the only units in $\mathbb Z[\sqrt{-2}]$ are $\pm1$.

Finally, the following proposition shows that $k\geq1$ is impossible (since $\sqrt{-2}\mid y+\sqrt{-2}$ would force $y$ to be even), so $\delta$ must be a unit, and hence $y+\sqrt{-2}$ and $y-\sqrt{-2}$ are coprime.

If $(x,y)$ satisfies $y^2+2=x^3$, then $y$ is odd.

I didn't feel like explaining all those implications, but the moral of the story is that we can write $y+\sqrt{-2}=(a+b\sqrt{-2})^3$, and comparing coefficients forces $b=1$ and $a=\pm1$. We thus have two solutions given by $(\pm1+\sqrt{-2})^3=\mp5+\sqrt{-2}$, which correspond to $y=-5$ and $y=5$. Both of these solutions for $y$ correspond to $x=3$, so our original equation has two solutions: $(x,y)=(3,\pm5)$.

A number field is a finite field extension 15 of $\mathbb Q$. Furthermore, if $K$ is of the form $\mathbb Q(\sqrt d)$ for squarefree $d$, then $K$ is called a (real or imaginary) quadratic number field. Number fields play the role of $\mathbb Q$ in the big picture. From these, we extract nice subsets of so-called algebraic integers. An algebraic integer is a root of a monic polynomial 16 with integer coefficients. Given a number field $K$, its ring of integers 17, denoted $\mathcal O_K$, is the set of algebraic integers in $K$. If this definition seems weird, then maybe the next exercise will help you see why it's actually reasonable. If that doesn't work, then try to come up with another definition that's well-defined for any number field and makes sense in the case of $\mathbb Q$.

Show that the ring of integers of $\mathbb Q$ is exactly $\mathbb Z$.

Because of this exercise, mathematicians sometimes refer to $\mathbb Z$ as the ring of "rational" integers in order to distinguish it from other rings of integers. Also, I keep calling these things rings, but it is in no way obvious that they actually do form rings (go ahead and try to prove that $\alpha+\beta$ and $\alpha\beta$ are algebraic integers if $\alpha$ and $\beta$ are. I won't cover the proof here, but the secret is Cramer's formula). For the purposes of this post, we'll only need to study quadratic number fields, but it's worth noting that number fields in general – and even arbitrary finite field extensions – have a norm: for $\alpha\in K$, the norm $N_{K/\mathbb Q}(\alpha)$ is the determinant of the $\mathbb Q$-linear map $K\to K$ given by multiplication by $\alpha$. This definition is quite a bit to digest, but we'll unpack it in the case of quadratic number fields. One thing of note we can quickly glean from this definition is that it makes the statement that the norm is multiplicative almost trivial (why?). Thus, for $K=\mathbb Q(\sqrt d)$ we get $N(x+y\sqrt d)=x^2-dy^2$, which turns out to also be $(x+y\sqrt d)(x-y\sqrt d)$ 19. The fact that this norm takes this multiplicative form means the following theorem is really easy to prove in the quadratic case 20.

An element $\alpha\in\mathcal O_K$ is a unit if and only if $N(\alpha)=\pm1$. (If $\alpha\beta=1$, then $N(\alpha)N(\beta)=1$ with both factors integers; conversely, if $N(\alpha)=\pm1$, then $\alpha\cdot(\pm\bar\alpha)=1$.)

The diligent reader will be somewhat bothered by the above proof. That's because it implicitly relies upon something I forgot to prove first, which is the following (where does the above proof rely on this theorem?).

Let $K$ be a quadratic number field. Then, $N(\mathcal O_K)\subseteq\mathbb Z$, which is to say that the norm of an algebraic integer is a rational integer 21.

Now, one last thing, and then we'll say how all of this discussion on integers and norms relates to Pell's equations 22.
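Before that, a quick computational aside (my own addition, not part of the original post): you can sanity-check the warmup solution by cubing elements of $\mathbb Z[\sqrt{-2}]$ directly and looking for cubes whose $\sqrt{-2}$-coefficient is exactly $1$:

```python
# Elements of Z[sqrt(-2)] represented as pairs (a, b) meaning a + b*sqrt(-2).
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c - 2 * b * d, a * d + b * c)   # since (sqrt(-2))^2 = -2

def cube(p):
    return mul(p, mul(p, p))

# Search small coefficients for cubes of the form y + 1*sqrt(-2).
for a in range(-10, 11):
    for b in range(-10, 11):
        y, s = cube((a, b))
        if s == 1:
            # x = N(a + b*sqrt(-2)) = a^2 + 2b^2, since x^3 = N(y + sqrt(-2))
            print(f"({a} + {b}*sqrt(-2))^3 = {y} + sqrt(-2)  ->  y = {y}, x = {a*a + 2*b*b}")
```

It prints exactly the two hits $a=\pm1$, $b=1$, i.e. $y=\mp5$ with $x=3$, matching the argument above.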
I've tried to be careful so far to be conscious of the fact that a priori, an element of $\mathcal O_K$ could look like a general member of $K$ in the sense that its coefficients could be general rational numbers. From here on out, we'll be a little more concrete because we're going to actually compute in the quadratic case.

Let $K=\Q(\sqrt d)$ with $d$ squarefree and $d\equiv2,3\pmod4$. Then $\ints K=\Z[\sqrt d]$.

Pf: Assume $$K,d$$ are as above, pick any $$\alpha\in\ints K$$, and write $$\alpha=x+y\sqrt d$$ with $$x,y\in\Q$$. Then, $$\conj\alpha=x-y\sqrt d\in\ints K$$ as well, and so is $$\alpha+\conj\alpha=2x$$. This means that $$2x\in\Q\cap\ints K=\Z$$ so $$x$$ is either an integer or half an integer. We also know that $$\knorm(\alpha)=\alpha\conj\alpha=x^2-dy^2\in\Z$$ so by taking a difference, this means that $$dy^2\in\Z$$. This means that $$y\in\Z$$. If it were not, then the denominator of $$y^2$$ would be divided by some prime $$p$$ more than once. However, $$d$$ is divisible by $$p$$ at most once, so the product $$dy^2$$ would contain a $$p$$ in the denominator and hence not be an integer. Thus, $$\alpha\in\Z[\sqrt d]$$.

so this was the right setting for the warmup problem.

At this point, I think we know everything about rings of integers that we'll need 23. In case you have forgotten, our goal is to find all integer solutions to Pell's equations, which are $x^2-dy^2=1$ for integers $x,y$ and positive integer $d$. As this discussion hinted at, for the time being, we'll further restrict $d$ to be square free. This has the advantage that Pell's equation can then be written as $N(x+y\sqrt d)=1$, which means that we're really just looking for units of $\mathbb Z[\sqrt d]$, which is convenient because $\mathcal O_K=\mathbb Z[\sqrt d]$ for square free $d$ (or at least it is 2 times out of 3) 24.

I debated whether I should talk about what comes next in one section or two. I ultimately decided on two because I didn't want to introduce too much stuff all at once. A priori, the material of this section isn't relevant to the larger discussion at hand, but in the next section, we'll see it play a crucial role. This is the point in the post where we open up to the possibility of me throwing in some pictures.

A lattice of a real vector space is the $\mathbb Z$-span of some $\mathbb R$-basis. If $L$ is a lattice of a real vector space $V$, then we say the rank of $L$ is the dimension of $V$ 25. One property of lattices that will come up is that they are discrete. A subset $S\subseteq\mathbb R^n$ is called discrete if only finitely many of its points are contained in any bounded region. That is, it is discrete if $S\cap B_r$ is finite for all $r>0$, where $B_r$ is the (solid) ball of radius $r$ centered at the origin.

I won't give a full, formal proof that all lattices are discrete, but I will sketch one direction you could take. The idea is that in a lattice, there exists some $\varepsilon>0$ such that any two lattice points are a distance greater than $\varepsilon$ apart. So, if you have some bounded set $B$, you can split it up into finitely many balls of radius $\varepsilon/2$. Each of these contains at most 1 lattice point, so $B$ contains a finite number of lattice points.

Now, lattices are discrete and look a lot like $\mathbb Z^n$, so it's natural to think that they have some connection with numbers 27. Because of this, and because they have applications in number theory, some results on or relating to lattices form the so-called geometry of numbers. Here, we'll prove and use one such theorem, but before that, we need to describe the volume of a lattice. Let $L$ be a rank-$n$ lattice in $\mathbb R^n$. Then, we say the volume of $L$ is the volume of a parallelogram 28 spanned by a $\mathbb Z$-basis of $L$. The standard lattice $\mathbb Z^2$ has volume $1$ since the basis spans a unit square. I know what you're thinking. What if I choose a different basis for my lattice?
Instead of writing $L=\mathbb Z(1,0)\oplus\mathbb Z(0,1)$, I might want to write $L=\mathbb Z(1,0)\oplus\mathbb Z(1,1)$. Well, it doesn't matter. The volume of a lattice is well-defined.

Now, a couple definitions just to make sure everyone is on the same page, and then the main theorem. We say $S\subseteq\mathbb R^n$ is compact if it is closed 29 and bounded. We say $S$ is convex if any line between points in $S$ is contained in $S$. That is, for any $x,y\in S$ and $t\in[0,1]$, the point $tx+(1-t)y\in S$. Fixing some point $p$, we say $S$ is symmetric about $p$ if for all $s\in S$, we also have $2p-s\in S$.

(Minkowski) Let $L$ be a rank-$n$ lattice in $\mathbb R^n$, and let $S\subseteq\mathbb R^n$ be compact, convex, and symmetric about the origin. Furthermore, assume $\operatorname{vol}(S)\geq2^n\operatorname{vol}(L)$. Then, $S$ contains a nonzero point of $L$.

The idea behind the proof is that $S$ is just too big to miss all of $L$. You essentially take a big parallelogram spanned by a basis of $2L$ and tile $\mathbb R^n$ with it. After that you move all the pieces touching $S$ back to the original piece about the origin, and if the volume of $S$ is greater than the volume of the original piece, then two points of $S$ must end up at the same point of the parallelogram. This means that their difference must be twice a lattice point, so their midpoint is a lattice point. I wasn't sure what the best way to visualize this without it being a mess was, so here's a picture of the parallelogram to keep in mind. You can add in $S$ and whatnot using your imagination.

If you follow the sketch above, in the end, it relies on $S$ having strictly greater volume, but the theorem doesn't. This is reconciled by the following.

If Minkowski's theorem holds for all $S$ with $\operatorname{vol}(S)>2^n\operatorname{vol}(L)$, then it holds for all $S$ with $\operatorname{vol}(S)=2^n\operatorname{vol}(L)$ as well.

Awesome, now we handle the main theorem with the further assumption that $S$ has strictly greater volume.

Now we know some things about quadratic number fields, and we know a little about the geometry of numbers, so let's put the two together. The bridge between abstract fields and more concrete geometric ideas will be embeddings. A real embedding of a number field $K$ is a ring homomorphism 30 $\sigma:K\to\mathbb R$. For a real quadratic number field $K$ with its two real embeddings $\sigma_1,\sigma_2$, the image of $\mathcal O_K$ under $(\sigma_1,\sigma_2):K\to\mathbb R^2$ is a lattice. The discriminant of a real quadratic number field $K=\mathbb Q(\sqrt d)$ is $d$ if $d\equiv1\pmod4$ and $4d$ if $d\equiv2,3\pmod4$. Depending on how we feel, we may denote this $\Delta_K$ or $\operatorname{disc}(K)$.

Thus, once we figure out what the unit group $\mathcal O_K^\times$ is, we will perfectly understand the structure of units in real quadratic fields. As it turns out, this group will be infinite cyclic (up to sign), but we'll first show it's discrete. I've already been kind of loose with this, but just keep in mind that $K$ is of the form $\mathbb Q(\sqrt d)$ for some square free $d>0$; I won't always specify this.

Now, remember earlier when I said that the image of the unit group under the logarithmic embedding $u\mapsto(\log|\sigma_1(u)|,\log|\sigma_2(u)|)$ lies on the line $x+y=0$ (because $|N(u)|=1$)? Well, the previous theorem says this image is a discrete subgroup of this line. This line is a 1-dimensional real vector space, and there aren't many discrete subgroups of such a space. In fact, there are two: the trivial group and a copy of $\mathbb Z$. Thus, if we can show that this image has more than one element, then we will show that it must be $\mathbb Z$, which will in turn show that Pell's equations have infinitely many solutions. Even more than this, this will show that there exists an $\varepsilon$ such that any unit has the form $\pm\varepsilon^n$ for some $n\in\mathbb Z$, so all solutions to Pell's equations are generated from a single solution! Getting ahead of myself because this is all still conjecture at this point, we will call such an $\varepsilon$ a fundamental unit of $\mathcal O_K$.

The idea is to find two elements of $\mathcal O_K$ that are equal in norm but not absolute value. Then, they must differ by a unit $u$ with $|u|\neq1$ (why?), so there's some nonzero element in the image above, which means the group must be $\mathbb Z$, and we win. When writing the below proof, I got kinda lost in the details. To help me remember what everything is, and what's going on, I quickly put together the following image.
It's not labelled or anything, but it illustrates the element (the x-coordinate of the green point) we are going to find, and why we can bound its absolute value both above and below. I should mention that by appealing to absolute value, the above proof implicitly fixes a choice of an embedding $K\hookrightarrow\mathbb R$. It doesn't really matter which one is used, but it's worth noting what's going on behind the scenes.

Well, we've gone over a lot, and if you're still here, kudos to you 36, but we're finally ready to actually solve Pell's equations. Fix any square free $d>1$. Integer solutions to the equation $x^2-dy^2=1$ are units of $\mathbb Z[\sqrt d]$, and these units are all of the form $\pm\varepsilon^n$ for some fundamental unit $\varepsilon$. In order to call this equation solved, we only need to find a fundamental unit. I'll handle the case that $d\equiv2,3\pmod4$. The other case can be done analogously, and figuring out its details is left as an exercise.

Assume $d\equiv2,3\pmod4$ and $\varepsilon$ is a fundamental unit of $\mathbb Z[\sqrt d]$. Then, $\varepsilon^{-1}$, $-\varepsilon$, and $-\varepsilon^{-1}$ are all fundamental units as well 37. Write $\varepsilon=a+b\sqrt d$ with $a,b>0$. We can always get positive coefficients by appropriately choosing one of the four fundamental units. Now let $\varepsilon^n=a_n+b_n\sqrt d$ be the positive powers of $\varepsilon$ and note that $a_{n+1}=aa_n+dbb_n>a_n$ and $b_{n+1}=ab_n+ba_n>b_n$, so the sequences of coefficients are increasing. Thus, if you want to find a fundamental unit, just guess and check. Start with $b=1$ and check to see if $db^2+1$ or $db^2-1$ is a perfect square. If not, move on to $b=2$ and repeat. Once you've found a value that works, write $a^2=db^2\pm1$, and your fundamental unit is $a+b\sqrt d$.

Let $d=7$. If we take $b=1$, then $7b^2\pm1\in\{6,8\}$, so no good. If we take $b=2$, then $7b^2\pm1\in\{27,29\}$, so still no luck. Now we try $b=3$ to get $7\cdot9+1=64=8^2$, and we have a winner. Our fundamental unit is $8+3\sqrt7$. Indeed, $(x,y)=(8,3)$ is a solution to Pell's equation. Now, take $d=2$ instead. If we let $b=1$, then $2b^2-1=1=1^2$, so our fundamental unit is $1+\sqrt2$. However, this has norm $1-2=-1$, so it's not a solution to Pell's equation. In cases like this, we instead focus our attention on $\varepsilon^2=3+2\sqrt2$ and use this to generate solutions.

I'd like to say that's everything, but I've left a few loose ends. These include what to do if $d$ isn't square free, and what about the case where $d\equiv1\pmod4$, so the fundamental unit can have non-integer coefficients. Honestly, I wanted to take care of them myself, but this post became much longer than I anticipated, so I'll leave them to you. I will say that they have similar resolutions. The main issue in both cases is that $\mathbb Z[\sqrt d]$ may not be all of $\mathcal O_K$. However, it can be shown that in general, $\mathbb Z[\sqrt d]^\times$ has finite index in $\mathcal O_K^\times$. This means in particular that it is still infinite cyclic (why?), and so we can still find a fundamental unit $\varepsilon$. Then, solutions to Pell's equation correspond either to powers of $\varepsilon$ or to even powers of $\varepsilon$, depending on whether $N(\varepsilon)=1$ or $-1$.
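The guess-and-check recipe is easy to automate. Here is a sketch of my own (not from the post) that finds the fundamental unit of $\mathbb Z[\sqrt d]$ for squarefree $d\equiv2,3\pmod4$ and then generates Pell solutions by taking powers, squaring first when the fundamental unit has norm $-1$:

```python
from math import isqrt

def fundamental_unit(d):
    """Smallest unit a + b*sqrt(d) with a, b > 0 (d squarefree, d = 2,3 mod 4)."""
    b = 1
    while True:
        for n in (1, -1):                  # allow norm a^2 - d*b^2 = +1 or -1
            a2 = d * b * b + n
            a = isqrt(a2)
            if a * a == a2:
                return a, b, n
        b += 1

def pell_solutions(d, count=5):
    a, b, n = fundamental_unit(d)
    if n == -1:                            # square it to land on norm +1
        a, b = a * a + d * b * b, 2 * a * b
    x, y = a, b
    for _ in range(count):
        yield x, y                         # invariant: x^2 - d*y^2 == 1
        x, y = a * x + d * b * y, a * y + b * x

for x, y in pell_solutions(7):
    assert x * x - 7 * y * y == 1
    print(x, y)
```

Running it with $d=7$ reproduces the fundamental unit found above, printing $(8,3)$, then $(127,48)$, $(2024,765)$, and so on; with $d=2$ it squares $1+\sqrt2$ and starts from $(3,2)$.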
CommonCrawl
Lemma 15.42.8. Let $A \to B$ be a flat local homomorphism of Noetherian local rings such that $\mathfrak m_ A B = \mathfrak m_ B$ and $\kappa (\mathfrak m_ A) = \kappa (\mathfrak m_ B)$. Then $A \to B$ induces an isomorphism $A^\wedge \to B^\wedge $ of completions.
CommonCrawl
The exact subject matter of the seminar will be determined by the participants and their interests. hypothesis, bicategories, higher Picard and Brauer groups...and beyond! Participants should have some familiarity with the theory of $\infty$-categories. Please consult the course website: http://nullplug.org/infty-seminar-wi-2016.html for more details. Master's and PhD students, postdocs, and beyond. By arrangement / by appointment.
CommonCrawl
Many decades of work culminating in Perelman's proof of Thurston's geometrisation conjecture showed that a closed, connected, orientable, prime 3-dimensional manifold $W$ is essentially determined by its fundamental group $\pi_1(W)$. This group consists of classes of based loops in $W$, and its multiplication corresponds to their concatenation. An important problem is to describe the topological and geometric properties of $W$ in terms of $\pi_1(W)$. For instance, geometrisation implies that $W$ admits a hyperbolic structure if and only if $\pi_1(W)$ is infinite, freely indecomposable, and contains no $\mathbb Z \oplus \mathbb Z$ subgroups. In this talk I will describe recent work which has determined a surprisingly strong correlation between the existence of a left-order on $\pi_1(W)$ (a total order invariant under left multiplication) and the following two measures of largeness for $W$: a) the existence of a co-oriented taut foliation on $W$ - a special type of partition of $W$ into surfaces which fit together locally like a deck of cards. b) the condition that $W$ not be an L-space - an analytically defined condition representing the non-triviality of its Heegaard-Floer homology. I will introduce each of these notions, describe the results which connect them, and state a number of open problems and conjectures concerning their precise relationship.
CommonCrawl
Mishka got a six-faced dice. It has integer numbers from $$$2$$$ to $$$7$$$ written on its faces (all numbers on faces are different, so this is an almost usual dice). Mishka wants to get exactly $$$x$$$ points by rolling his dice. The number of points is just a sum of numbers written at the topmost face of the dice for all the rolls Mishka makes. Mishka doesn't really care about the number of rolls, so he just wants to know any number of rolls he can make to be able to get exactly $$$x$$$ points for them. Mishka is very lucky, so if the probability to get $$$x$$$ points with chosen number of rolls is non-zero, he will be able to roll the dice in such a way. Your task is to print this number. It is guaranteed that at least one answer exists. Mishka is also very curious about different number of points to score so you have to answer $$$t$$$ independent queries. The first line of the input contains one integer $$$t$$$ ($$$1 \le t \le 100$$$) — the number of queries. Each of the next $$$t$$$ lines contains one integer each. The $$$i$$$-th line contains one integer $$$x_i$$$ ($$$2 \le x_i \le 100$$$) — the number of points Mishka wants to get. Print $$$t$$$ lines. In the $$$i$$$-th line print the answer to the $$$i$$$-th query (i.e. any number of rolls Mishka can make to be able to get exactly $$$x_i$$$ points for them). It is guaranteed that at least one answer exists. In the first query Mishka can roll a dice once and get $$$2$$$ points. In the second query Mishka can roll a dice $$$3$$$ times and get points $$$5$$$, $$$5$$$ and $$$3$$$ (for example). In the third query Mishka can roll a dice $$$8$$$ times and get $$$5$$$ points $$$7$$$ times and $$$2$$$ points with the remaining roll. In the fourth query Mishka can roll a dice $$$27$$$ times and get $$$2$$$ points $$$11$$$ times, $$$3$$$ points $$$6$$$ times and $$$6$$$ points $$$10$$$ times.
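Since the faces run from $2$ to $7$, a total of $x$ is achievable with $n$ rolls exactly when $2n \le x \le 7n$, and $n=\lceil x/7\rceil$ satisfies this for every $x\ge2$ (for $x\le7$ one roll suffices, and for larger $x$ one checks $2\lceil x/7\rceil\le x$). A sketch of a full solution, assuming the usual stdin/stdout format:

```python
import math

def rolls_needed(x: int) -> int:
    # n rolls can produce any total in [2n, 7n]; n = ceil(x/7) satisfies
    # 2n <= x <= 7n for every x >= 2, so it is always a valid answer.
    return math.ceil(x / 7)

def main() -> None:
    t = int(input())
    for _ in range(t):
        print(rolls_needed(int(input())))

if __name__ == "__main__":
    main()
```

Any other $n$ in the valid range would also be accepted, since the problem asks for any feasible number of rolls.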
CommonCrawl
Abstract: Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this 1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, like p-norms with p>1. Empirically, we demonstrate that the interleaved optimization strategies are much faster compared to the commonly used wrapper approaches. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse and $\ell_\infty$-norm MKL in various scenarios. Empirical applications of p-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies that go beyond the state-of-the-art.
CommonCrawl
In this talk based on arXiv:1602.01674, we provide evidence for universality in the low-energy expansion of tree-level string interactions. More precisely, in the $\alpha'$-expansion of tree-level scattering amplitudes, we conjecture that the leading transcendental coefficient at each order in $\alpha'$ is universal for all perturbative string theories. We have checked this universality up to seven points and trace its origin to the ability to restructure the disk integrals of open bosonic string into those of the superstring. The accompanying kinematic functions have the same low-energy limit and do not introduce any transcendental numbers in their $\alpha'$-corrections. Universality in the closed-string sector then follows from the Kawai-Lewellen-Tye-relations.
CommonCrawl
Let $\alpha_k(\lambda)$ denote the number of $k$-hooks in a partition $\lambda$ and let $b(n,k)$ be the maximum value of $\alpha_k(\lambda)$ among partitions of $n$. Amdeberhan posed a conjecture on the generating function of $b(n,1)$. We give a proof of this conjecture. In general, we obtain a formula that can be used to determine $b(n,k)$. This leads to a generating function formula for $b(n,k)$. We introduce the notion of nearly $k$-triangular partitions. We show that for any $n$, there is a nearly $k$-triangular partition which can be transformed into a partition of $n$ that attains the maximum number of $k$-hooks. The operations used in the transformation enable us to compute the number $b(n,k)$.
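For readers who want to experiment with $\alpha_k(\lambda)$ and $b(n,k)$ numerically, here is a small brute-force sketch of my own (not the authors' code). It uses the standard fact that $k$-hooks of $\lambda$ correspond to cells of hook length exactly $k$, where the 0-indexed cell $(i,j)$ has hook length $\lambda_i-j+\lambda'_j-i-1$:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def num_k_hooks(lam, k):
    """alpha_k(lambda): number of cells of lambda with hook length k."""
    conj = [sum(1 for part in lam if part > i) for i in range(lam[0])] if lam else []
    return sum(1 for i, part in enumerate(lam) for j in range(part)
               if part - j + conj[j] - i - 1 == k)

def b(n, k):
    """Maximum number of k-hooks over all partitions of n."""
    return max(num_k_hooks(lam, k) for lam in partitions(n))

print([b(n, 1) for n in range(1, 11)])   # small values to compare with the conjecture
```

This is exponential in $n$ (it enumerates all partitions), so it is only useful for checking small cases of the generating function.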
CommonCrawl
In the first research task of this theme we will build a new ultra-stable and low-noise optical resonator. We will build a new optical cavity with a 30-40 cm long spacer made of ULE glass and with passive thermal shielding to improve cavity temperature stability. The cavity will have a finesse greater than 200000, a short-term linewidth below 1 Hz, and active thermal stabilisation and passive thermal shielding designed for a thermal time constant of the order of days. We will explore Sr atoms coupled to an optical cavity in the bad-cavity regime to reduce the dominant clock laser frequency noise arising from reference cavity length fluctuations, and to make it possible to operate an active optical clock. The funding for most of the scientific equipment development for this task is already secured by a governmental grant. In the second research task of this theme we will consider alternative approaches to sensing variations in fundamental constants. In particular, we will combine the cavity-enhanced molecular spectroscopy setups (already operating in our group) with the ultra-stable optical cavity to develop a sensor of variations in the electron-to-proton mass ratio. We will also experimentally estimate the feasibility of using relativistic effects in atoms, molecules, ions and cavities to increase the sensitivity of such sensors. We will calculate the response of the usual (passive) optical atomic clock to short variations in $\alpha$, which is pivotal for a proper estimation of the sensitivity of our present approach. Finally, with our Hg trap set-up, we will broaden the range of sensitivity of our quantum sensors to other possible fundamental constants.
CommonCrawl
Neutrino Factory and Muon Collider R&D (Nov 05 2001): European, Japanese, and US Neutrino Factory designs are presented. The main R&D issues and associated R&D programs, future prospects, and the additional issues that must be addressed to produce a viable Muon Collider design, are discussed.
A cubical model of homotopy type theory (Jul 21 2016): We construct an algebraic weak factorization system $(L, R)$ on the cartesian cubical sets, in which the canonical path object factorization $A \to A^I \to A\times A$ induced by the 1-cube $I$ is an $L$-$R$ factorization for any $R$-object $A$.
How Many Muons Do We Need to Store in a Ring For Neutrino Cross-Section Measurements? (Apr 26 2012): Analytical estimate of the number of muons that must decay in the straight section of a storage ring to produce a neutrino & anti-neutrino beam of sufficient intensity to facilitate cross-section measurements with a statistical precision of 1%.
Muon Cooling R&D (Aug 15 2001): International efforts are under way to design and test a muon ionization cooling channel. The present R&D program is described, and future plans outlined.
Families of Cyclic Cubic Fields (Nov 15 2015): We describe a procedure for generating families of cyclic cubic fields with explicit fundamental units. This method generates all known families and gives new ones.
Quantum Ergodicity and Mixing (Mar 10 2005): This is an expository article for the Encyclopedia of Mathematical Physics on the subject in the title.
An introduction to upper half plane polynomials (Nov 26 2007): This is a straightforward introduction to the properties of polynomials in many variables that do not vanish in the open upper half plane. Such polynomials generalize many of the well-known properties of polynomials with all real roots.
CommonCrawl
How to dock aggregated structures composed of "elementary" protein units like LEGO pieces? I was wondering how to use transformations such as rotations, translations, and replications to construct possible structures using 2 types of proteins. Assume I found a binding site of the 2 proteins using molecular dynamics simulators. Can I use group theory to find possible configurations of the 2 proteins? How can I use the tensor product of the symmetry groups ($G_1 \times G_2$) to find possible structures? What graphics framework would you recommend to perform all the manipulations on the proteins? Is it possible to use these transformations to create some algebraic structure that will help to construct mechanical configurations composed of proteins? Is there any tool from Baker's lab I can use for this task? Any recommendation for a tool I could use to replicate, rotate, and shift structures would be appreciated as well. My goal is to use group theory, or a Monte Carlo simulation driven by group-theoretic insight, to create macromolecular structures using multiple (dozens of) instances of the two proteins (the LEGO building blocks). Any idea that relates symmetry, transformations, or group theory to creating structures is most welcome. Given a binding point between 2 proteins, which software can I use to assemble the dimers into macro-structures as with LEGO, namely only by replicating, translating and rotating the LEGO pieces? Predicting protein folding is an unsolved problem. Sometimes even small modifications to a sequence can change the folding propensity. Predicting protein-protein interactions is often difficult, due to challenges in understanding protein dynamics, intermolecular interactions, and accurate predictions of protein electrostatics. There has been some progress in terms of forcing particular protein-protein assembly through intentional side-chain modification and cross-linking, and reactive end-groups (e.g., protein-based metal-organic frameworks (MOFs)). That said, the use of proteins for supramolecular assemblies is far harder and less prevalent than using DNA assemblies. At the moment, DNA origami is far superior for these purposes. Edit: If you're really looking to just copy and displace atoms, I don't think you need any particular software. I'd write a Python (or other) script that reads in the PDB files and writes out duplicate atoms displaced in the XYZ directions as needed. Group theory can help predict the angles, but you'll still need to know the protein size to work out the geometry.
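Building on the script suggestion at the end of the answer, here is a minimal sketch (file names are hypothetical; the fixed columns 31-54 for the x, y, z coordinates come from the standard PDB format) that reads a structure and writes out translated copies of it. A rotation works the same way: apply a 3x3 matrix to each (x, y, z) before writing.

```python
def translate_pdb(lines, dx, dy, dz):
    """Return copies of ATOM/HETATM records shifted by (dx, dy, dz) angstroms."""
    out = []
    for line in lines:
        if line.startswith(("ATOM", "HETATM")):
            x = float(line[30:38]) + dx
            y = float(line[38:46]) + dy
            z = float(line[46:54]) + dz
            line = f"{line[:30]}{x:8.3f}{y:8.3f}{z:8.3f}{line[54:]}"
        out.append(line)
    return out

with open("dimer.pdb") as f:           # hypothetical input file
    lines = f.read().splitlines()

copies = []
for k in range(4):                     # e.g. four copies along the x-axis
    copies += translate_pdb(lines, 50.0 * k, 0.0, 0.0)

with open("assembly.pdb", "w") as f:   # naive concatenation; fix chain IDs/serials as needed
    f.write("\n".join(copies) + "\n")
```

Group elements of a chosen symmetry group would then just be a list of (rotation matrix, translation) pairs applied in the same loop.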
CommonCrawl
If $n \in \mathbb N$ and $p$ is a prime number, solve the equation $n^4+n^2+1=p$. I can write it like this: $$n^4+2n^2-n^2+1=p$$ $$(n^2+1)^2-n^2=(n^2+1-n)(n^2+1+n)=p=1 \cdot p$$ Since $n^2+1+n>1$, we need $n^2+1-n=1$, so $n=0$ or $n=1$. If I put $n=0$, then $p=1$, which is not allowed since one is not a prime number; so $n=1$ and $p=3$. Is this OK? Besides punctuation, which I'd use more of in your case, the proof is perfect. Perhaps elaborate on some things a little more (like emphasizing you're taking $n^4+2n^2+1$ and turning those terms specifically into $(n^2+1)^2$, or showing that $n^2+1+n>1$), but the level of detail and preciseness always depends on the context. In short, nicely done. Good work!
CommonCrawl
We show that every isoperimetric set in $\mathbb R^N$ with density is bounded if the density is continuous and bounded above and below. This improves the previously known boundedness results, which basically needed a Lipschitz assumption; on the other hand, the present assumption is sharp, as we show with an explicit example. To obtain our result, we observe that the main tool which is often used, namely a classical "$\varepsilon-\varepsilon$" property already discussed by Allard, Almgren and Bombieri, admits a weaker counterpart which is still sufficient for the boundedness, namely, an "$\varepsilon-\varepsilon^\beta$" version of the property. And in turn, while for the validity of the first property the Lipschitz assumption is essential, for the latter the sole continuity is enough. We conclude by deriving some consequences of our result about the existence and regularity of isoperimetric sets.
CommonCrawl
Current in a circuit is driven by voltage differences. Imagine you disconnect the point Q from ground. Then it is clear that the battery drives a current $I = 0.5$ A from "+" to "-" through the resistors, and there is a voltage drop $\Delta U = R\times I$ across each resistor in the circuit. However, in this setting, it is undefined what the potential difference between point Q and ground is. It could be any arbitrary value. When you connect Q to ground, you define the potential difference between this point and ground to be zero. This will not affect the voltage difference between "+" and "-" of the battery, hence the current in the circuit stays the same. But it defines the potential between "+" and ground, and between "-" and ground. For example, without the ground connection the voltage between "+" (point P) and ground could be 110 V, and that between "-" (point S) and ground could be 100 V. With the ground connection, the voltage between "+" (point P) and ground is +2 V, and the voltage between "-" (point S) and ground is -8 V, such that the voltage between point Q and ground is zero. It appears that you are asking why the answer is not option $C$, on the assumption that the current starts at the positive terminal of the battery (node $P$), flows through the $4\,\Omega$ resistor, and exits through the earth contact at node $Q$, so that no current flows through the other two resistors. If this is a conventional circuit, then the current is actually a flow of electrons coming out of the negative terminal of the battery, flowing from right to left through the resistors. Using your logic there would be no current through the $4\,\Omega$ resistor! However, it is the conservation of charge (Kirchhoff's first law) which gives the simple answer. The current out of the positive terminal of the battery must equal the current into the negative terminal of the battery, as there can be no accumulation or loss of charge within the battery. If some current leaked out of the circuit through the earth connection at node $Q$, how would that current get back into the circuit to make sure that the currents in and out of the battery are the same? In this problem the purpose of the earth label at node $Q$ is to give you a value for the potential of node $Q$, so that the potentials of the other nodes can then be assigned.
The point is that in metal (in fact the conductor), the electric field will drive the electrons move. In the classical view, electrons meet the ions and the interaction between them is similar to the damping of the water flow. The electric field driving and the damping force lead the system to static at last. It is similar to the behavior of 2nd order equation: $x''=F-x'$ , or the motion of an object under wind resistance. Finally, why there should be earth connects to the point Q? In high school exam or homework, it performs the reference level of the potential: 0V. Sometimes it is also an infinity charge pool. You may meet such exercise in the future study and it should not bother you. Not the answer you're looking for? Browse other questions tagged homework-and-exercises electric-circuits electrical-resistance voltage batteries or ask your own question. Can the Hall effect drive a current? Why is current the same in a series circuit? Can someone break down what exactly happens in this situation? Will any charge flow in this circuit? Does this kind of circuit work? Why does the LED turn off if the high current is going through the short? How Ground wire triggers breaker?
CommonCrawl
specified, a tab-delimited table without lines is produced. returns its results in e(). See the Introduction in the Examples section for an introduction on using estout. See help estimates for general information about managing estimation results. Furthermore, see help eststo for an alternative to the estimates store command. The default for estout is to produce a plain table containing point estimates. Producing a fully formatted end-product may involve specifying many options. to tabulate results from non-estimation commands such as summarize or tabulate. of providing a namelist of stored estimation sets. See the examples below. or an r()-matrix. The cells() option is disabled if tabulating a matrix. to be arranged. The default is for cells to report point estimates only, i.e. row beneath the point estimates. statistics that can be displayed. set the first level to 1 in the starlevels() option). defaults to %9.0g or the format for the first statistic in cells(). decimal places for coefficients and two decimal places for t-values. can be added by specifying a modelwidth() (see the Layout options). default is the name of the statistic. default is to leave such cells empty. global drop() option (see below). analogous to droplist in drop() (see below). t-statistics (relevant only if used with t()). transpose to specify that e(myel) be transposed for tabulation. equation name followed by a colon (e.g. mean:), or a full name (e.g. elements for which no match can be found. is the default. Type noomitted to drop omitted coefficients. keep(keeplist [, relax]) selects the coefficients to be included in the table. change the order of coefficients. table. orderlist is specified analogous to droplist in drop() (see above). rows are inserted for elements in orderlist that are not found in the table. and list is a list of coefficient specifications as defined in drop() above. and name is specified. Note that name may contain spaces. "Yes" for models in which at least one of the dummies is present, and "No" spaces, e.g. labels("in model" "not in model"). relabeling coefficients after matching models and equations. equations have different names) and match the remaining equations by name. first matches equation 1 in the third. specified or that equation 2 matches across all models specified. last transformation to be applied to all remaining coefficients. transformation to Models 1 and 3, but not Model 2. confidence intervals, t-statistics, and p-values are transformed as well. redundant rows. The equations() option might also be of help here. on tabulating results from margins. @discrete variable (see the Remarks on using @-variables). discrete variables unless nodiscrete is specified). equation names and not to the original equation names in the models. in place of the estimates. The default text is "(dropped)". of the coefficients (see help level). corresponds to the Likelihood-Ratio or Wald chi2 test. extra statistics and other information to the e()-returns. and @ is used as a placeholder for the statistics, one after another. for example, denote insignificant results, type starlevels(* 1 "" 0.05). t-statistics, summary statistics, etc.) although it may add blanks if needed. the functions in help file. the begin option above for further details. option for details. The default string is a single blank. help format) is replaced by string. in _cons or F_p) with its LaTeX equivalent \_. the left stub of the table. and/or a varwidth() is specified. ").
For style(tex) the default is interaction(" $\times$ "). legend adds a legend explaining the significance symbols and thresholds. the text lines specified in strlist (see the Remarks on using @-variables). see the hlinechar() option) or @M inserts the number of models in the table. hlinechar(`=char(151)') (Windows only; other systems may use other codes). varlabels(_cons Constant) to replace each occurrence of _cons with Constant. if the unstack option is specified. label_subopts, which are explained in their own section. multiple header rows), i.e. specify strlist as "line 1" "line 2" etc. default is the value of modelwidth(). information on the reference category of a categorical variable in the model. label(string) to specify the label that is printed in the table columns. variable of the model be used as model label. numbers to cause the model labels to be numbered consecutively. label_subopts are explained in their own section below. eqlabels("", none) to not replace _cons. physical column of the output for the group of models to which they apply. the span suboption might be of interest here. replace permits estout to overwrite an existing file. even if the file does not yet exist. notype to suppress the display of the table. tabs are written to the text file specified by using. substitute() does not apply to text inserted by topfile() or bottomfile(). table in Stata's results window and is the default unless using is specified. It includes SMCL formatting tags and horizontal lines to structure the table. tab is the default export style (i.e. if using is specified). noabbrev will turn abbreviation off if using the fixed style. See Defaults files in the Remarks section to make available your own style. prefix(string) specifies a common prefix to be added to each label. suffix(string) specifies a common suffix to be added to each label. the suffix will be repeated for each regressor or summary statistic. number of spanned columns if it is included in the label, prefix, or suffix. for the mgroups(), mlabels(), eqlabels(), and collabels() options. list. Also see the examples below. transpose causes the matrix to be transposed for tabulation. determines the basic formatting of the table. results window or the log). However, the table would look tidy if "example.txt" were opened, for example, in a spreadsheet program. option, see estout's Labeling options. column and p-values in the top row of the second column for each model. and to indicate the significance of individual coefficients with stars. The estout default is to display * for p<.05, ** for p<.01, and *** for p<.001. (see estout's Significance stars options). . label variable foreign "Foreign car type" . label variable forXmpg "Foreign*Mileage" as a synonym for %9.0g. cells(t(fmt(3))) would display t-statistics with three decimal places. large or very small absolute numbers are displayed in exponential format. Stata 8 (the \newline command in TeX), type \\\. in LaTeX I recommend using \(...\) instead of $...$. strfun). In particular, "`=char(9)'" results in a tab character and "`=char(13)'" that a tab character with a leading and a trailing blank be used as delimiter. estout features several variables that can be used within string specifications. The following list provides an overview of these variables. physical columns of the table. @M to return the number of models in the table. @E to return the total number columns containing separate equations. @width to return the total width of the table (number of characters). 
@title to return the title specified with the title() option. @note to return the note specified with the note() option. (provided that the margin option is activated). @starlegend to return a legend explaining the significance symbols. the text lines specified in strlist (see the Remarks on using @-variables). see the hlinechar() option) or @M inserts the number of models in the table. hlinechar(`=char(151)') (Windows only; other systems may use other codes). varlabels(_cons Constant) to replace each occurrence of _cons with Constant. if the unstack option is specified. label_subopts, which are explained in their own section. multiple header rows), i.e. specify strlist as "line 1" "line 2" etc. default is the value of modelwidth(). information on the reference category of a categorical variable in the model. label(string) to specify the label that is printed in the table columns. variable of the model be used as model label. numbers to cause the model labels to be numbered consecutively. label_subopts are explained in their own section below. eqlabels("", none) to not replace _cons. physical column of the output for the group of models to which they apply. the span suboption might be of interest here. replace permits estout to overwrite an existing file. even if the file does not yet exist. notype to suppress the display of the table. tabs are written to the text file specified by using. substitute() does not apply to text inserted by topfile() or bottomfile(). table in Stata's results window and is the default unless using is specified. It includes SMCL formatting tags and horizontal lines to structure the table. tab is the default export style (i.e. if using is specified). noabbrev will turn abbreviation off if using the fixed style. See Defaults files in the Remarks section to make available your own style. prefix(string) specifies a common prefix to be added to each label. suffix(string) specifies a common suffix to be added to each label. the suffix will be repeated for each regressor or summary statistic. number of spanned columns if it is included in the label, prefix, or suffix. for the mgroups(), mlabels(), eqlabels(), and collabels() options. list. Also see the examples below. transpose causes the matrix to be transposed for tabulation. determines the basic formatting of the table. results window or the log). However, the table would look tidy if "example.txt" were opened, for example, in a spreadsheet program. option, see estout's Labeling options. column and p-values in the top row of the second column for each model. and to indicate the significance of individual coefficients with stars. The estout default is to display * for p<.05, ** for p<.01, and *** for p<.001. (see estout's Significance stars options). . label variable foreign "Foreign car type" . label variable forXmpg "Foreign*Mileage" as a synonym for %9.0g. cells(t(fmt(3))) would display t-statistics with three decimal places. large or very small absolute numbers are displayed in exponential format. Stata 8 (the \newline command in TeX), type \\\. in LaTeX I recommend using \(...\) instead of $...$. strfun). In particular, "`=char(9)'" results in a tab character and "`=char(13)'" that a tab character with a leading and a trailing blank be used as delimiter. estout features several variables that can be used within string specifications. The following list provides an overview of these variables. physical columns of the table. @M to return the number of models in the table. @E to return the total number of columns containing separate equations. @width to return the total width of the table (number of characters). @title to return the title specified with the title() option. @note to return the note specified with the note() option. (provided that the margin option is activated). @starlegend to return a legend explaining the significance symbols. @span to return the number of spanned columns. file from SSC and store it in the working directory). as estout_newstyle.def (see help sysdir). specify option without an argument). in the defaults file (where args must not include suboptions; see below). type, is not available here. in the defaults file; the no form is allowed. suboptions cannot be included in the definition of a higher-level option. suboption is a concatenation of the option's name and the suboption's name, i.e. user. The Stata Journal 3(3): 245-269.
CommonCrawl
The independence number of a graph is the maximum number of vertices from the vertex set of the graph such that no two vertices are adjacent. We systematically examine a collection of upper bounds for the independence number to determine graphs for which each upper bound is better than any other upper bound considered. A similar investigation follows for lower bounds. In several instances a graph cannot be found. We also include graphs for which no bound equals $\alpha$ and bounds which do not apply to general graphs.
CommonCrawl
Korosensei is fast, but can he do better? Korosensei can move at Mach 20 and takes advantage of this fact to make frequent trips around the world. Obviously, in the interest of efficiency, he often takes the opportunity to make multiple stops per trip. But what is the most efficient way for him to do so? We can think about this using graphs. Suppose all the possible destinations are vertices, and edges with costs connect the vertices, representing the cost of traveling from some destination $u$ to another destination $v$. At first glance, it might make sense to use physical distance in our case, but that is not necessarily a measure of the "best" way to get there. For instance, taking a bus or train between two cities may get you to the same place, but the actual cost depends on the path that's taken, and you may choose to include other criteria like the cost of tickets and other stuff like that. So the problem is this: Korosensei has a list of places he wants to hit up during lunch hour. What is the most cost-effective way of getting to each of those places exactly once and getting back to school? This is what's known as the Traveling Salesman problem (TSP). As it turns out, this problem, like many graph theory problems, is NP-complete, meaning it's quite computationally hard and taxing for computers to solve. Formally, if we're given a graph $G = (V,E)$ and a cost function $d: V \times V \to \mathbb R$, the problem is to find a cycle that visits every vertex exactly once with the smallest possible cost. Usually, if we have $n$ vertices, then $G$ is the complete graph on $n$ vertices, that is, a graph where every vertex has an edge to every other vertex. Otherwise, it may not be possible to find such a cycle (and the problem of finding such a cycle is also an NP-complete problem, Hamiltonian cycle). It's called the Traveling Salesman problem because the idea is that if you're a traveling salesman, you'd want to find that least-cost cycle, because it'd be the most cost-effective way to hit up all the cities you need to go to in order to sell whatever it is that you're selling. Of course, the hardness of the problem means that our poor salesman or Korosensei are stuck having to check the cost of every cycle. How many are there? Well, since we're dealing with a complete graph, this means we can get to any vertex from any vertex, so we can take any permutation of our vertices and that's a cycle, and there are $n!$ permutations. This means that as $n$ gets larger, the difficulty of solving this problem grows factorially, which is even faster than exponentially. One way we can attempt to make NP-complete problems feasible to solve is to loosen our demands a bit. Right now, we want the absolute least-cost cycle, but what if it's not necessary for us to get the least-cost cycle? What if we're okay with getting something that's within a factor of 2 or 3 or $f(n)$ for some function $f$ that depends on the size of the input? These are called approximation algorithms, in that they approximate the optimal solution by some factor. Unfortunately, this doesn't work for TSP. Suppose we had an algorithm that approximates TSP within some factor $k$. Take any graph we want to test for a Hamiltonian cycle, complete it by pricing its original edges at $1$ and the missing edges at something enormous like $kn+1$, and run the algorithm. If the original graph had a Hamiltonian cycle, the optimal tour would cost exactly $n$, so the approximation would have to return a tour of cost at most $kn$, which can't afford even one expensive edge. So if the algorithm instead returns a tour costing more than $kn$, it must've used one of the new expensive edges I added, and so we know the original graph didn't have a Hamiltonian cycle. Since that would let us solve Hamiltonian cycle efficiently, no such approximation algorithm can exist (unless P = NP). Now this is a bit discouraging, but Korosensei would encourage us to keep on thinking about how to assassinate the hell out of this problem. In the most general case, the cost function $d$ has no constraints, and in the way that I've initially motivated Korosensei's problem, $d$ can be arbitrary, with costs adjusted to his needs.
However, some of the attempts to make TSP more computationally feasible have to do with making some reasonable restrictions on our cost function. This is another fairly standard approach to making computationally hard problems easier to deal with: figure out some cases of the problem that are still useful but that simplify or restrict the problem a bit. The restriction here is to require that for any three vertices $u$, $v$, and $w$, going directly from $u$ to $w$ is never more expensive than detouring through $v$; that is, $d(u,w) \leq d(u,v) + d(v,w)$. It's called the triangle inequality because of what I just described: draw a triangle out of $u$, $v$, and $w$, and the direct distance between any two of them should be no longer than going through the third point. I say should because for an arbitrary cost function, this isn't always the case. One example is flight pricing. Sometimes, it's more expensive to fly from $u$ to $w$ than it is to fly from $u$ to $v$, even if the flight stops over at $w$.

Anyhow, Korosensei doesn't have to deal with silly things like flight pricing, and so we impose this reasonable triangle inequality condition on his cost function. Does it help? It turns out it does, and we can get an approximation that'll be at most twice the cost of the optimal solution. How? Well, first we find a minimum spanning tree. A spanning tree of a graph is a tree (a graph that contains no cycles) that connects all of the vertices of the graph together. The minimum spanning tree is the spanning tree with the least weight out of all the spanning trees. The nice thing about MSTs is that we know how to find them efficiently. Then we walk around the tree, visiting the vertices in depth-first order and skipping over any vertex we've already seen. Walking around the whole tree crosses each edge twice, so it costs at most twice the weight of the MST, and by the triangle inequality, the skips only make the walk cheaper. Since deleting any single edge from the optimal tour leaves a spanning tree, the MST weighs no more than the optimal tour, and so our tour costs at most twice the optimum. Given how fast Korosensei is, this is probably good enough for him and is a decent tradeoff between the time it takes to solve the problem and his actual travel time.
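Here is a minimal sketch of that MST-walk idea, assuming a symmetric cost matrix that obeys the triangle inequality (the function names are mine, not from any particular library). It builds the MST with Prim's algorithm and then takes a preorder walk of the tree, which is exactly the doubled Euler tour with the repeated vertices shortcut away:

```python
import heapq
from collections import defaultdict

def tsp_2_approx(cost):
    """2-approximation for metric TSP on an n x n cost matrix.

    Builds a minimum spanning tree with Prim's algorithm, then takes a
    preorder walk of the tree. Assumes cost satisfies the triangle
    inequality; otherwise the factor-2 guarantee does not hold.
    """
    n = len(cost)
    # Prim's algorithm starting from vertex 0
    tree = defaultdict(list)
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(cost[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        tree[u].append(v)                # record the tree edge u -> v
        for x in range(n):
            if not in_tree[x]:
                heapq.heappush(heap, (cost[v][x], v, x))
    # Preorder walk of the tree: the Euler tour with repeats shortcut
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(tree[u]))
    tour.append(0)                       # return to the start
    total = sum(cost[u][v] for u, v in zip(tour, tour[1:]))
    return tour, total

# Reusing the made-up cost matrix from the brute-force example:
cost = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(tsp_2_approx(cost))  # ([0, 1, 3, 2, 0], 23)
```

On this tiny example the walk happens to find the optimal tour; in general the method only guarantees a tour within twice the optimum.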
In the beginning, mathematicians only used the natural numbers: $1, 2, 3, \ldots$. Then, negative numbers were invented to represent things like debt. For example, $+5$ means a profit of $5$ units, while $-5$ means $5$ units of debt. Multiplying a positive number by a negative number yields another negative number: $-3 \times 5 = -15$. Multiplying a negative number by another negative number yields a positive number: $-3 \times -5 = 15$. However, there is no real number that yields a negative number when multiplied by itself. Therefore, with positive and negative numbers alone, simple equations like $x^2 = -1$ can't be solved.

For this reason, the imaginary number $i$ was invented. This new number is defined by the property $i \times i = -1$. In this sense, $i$ fills the gap we discovered above, since now we have a number that yields something negative when multiplied by itself. In the beginning, $i$ was merely a convenient tool to help simplify calculations, and its introduction was criticised by many mathematicians. This early criticism is also where the name "imaginary numbers" comes from: it was introduced by Descartes as a derogatory term. Nowadays, imaginary numbers are an essential tool.

Combinations of real and imaginary numbers like, for example, $4 + 3i$, are known as complex numbers. Complex numbers are the standard number system used in physics. There are further expanded number systems like, for example, the quaternions and octonions, where multiple "complex units" are introduced. There is a theorem due to Hurwitz that the only "normed division algebras" are the real numbers, the complex numbers, the quaternions, and the octonions.

Imaginary numbers are essential in modern theories of physics like quantum mechanics and quantum field theory. In these theories, we describe a physical system using complex-valued functions. Moreover, imaginary numbers can often be used to make calculations simpler. For some further motivation, see the nice list here.
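As a quick illustration (not part of the original post), many programming languages support complex numbers directly; in Python the imaginary unit is written `j` rather than $i$, and the defining property $i \times i = -1$ can be checked immediately:

```python
# Python has complex numbers built in, written with j instead of i.
i = 1j
print(i * i)            # (-1+0j) -- the defining property i*i = -1
print((-1) ** 0.5)      # approximately 1j, up to floating-point error

# Complex arithmetic works like ordinary arithmetic:
z = 4 + 3j
w = 2 - 1j
print(z + w, z * w)     # (6+2j) (11+2j)
print(abs(z))           # 5.0, the modulus sqrt(4^2 + 3^2)
```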