1. Magnetic properties of ultrathin Ni81Fe19 films with Ta and Ru capping layers, J. Phys.: Condens. Matter 25 (2013) 476003 (6pp).
2. Mixed-solvent thermal synthesis and magnetic properties of flower-like microstructured nickel, Particuology 10 (2012) 392-396.
4. Hydrothermal synthesis of monodisperse $\alpha$-Fe2O3 hexagonal platelets, Particuology 8 (2010) 386-389.
5. Hydrothermal growth of octahedral Fe3O4 crystals, Particuology 7 (2009) 35-38.
CommonCrawl
Here are a few practical tips for training sequence-to-sequence models with attention. If you have experience training other types of deep neural networks, pretty much all of it applies here. This article focuses on a few tips you might not know about, even with experience training other models.

The architecture of a sequence-to-sequence model with attention. On the left is the encoder network. On the right is the decoder network predicting an output sequence. The hidden states of the encoder are attended to at each time-step of the decoder. Here $X = [x_1, \ldots, x_T]$ is the input sequence and $Y = [y_1, \ldots, y_U]$ is the output sequence.

The input $X$ is encoded into a sequence of hidden states, and the decoder network then incorporates information from these hidden states via the attention mechanism. If the decoder never learns to use that information, it is just a language model over the output sequences. Reasonable learning can actually happen in this case even if the model never learns to condition on $X$. That's one reason it's not always obvious whether the model is truly working.

Visualize Attention: This brings us to our first tip. A great way to tell if the model has learned to condition on the input is to visualize the attention. Usually it's pretty clear if the attention looks reasonable.

An example of the attention learned by two different models for a speech recognition task. Top: a reasonable looking "alignment" between the input and the output. Bottom: the model failed to learn how to attend to the input even though the training loss was slowly reduced over time (the loss didn't diverge).

I recommend setting up your model so that it's easy to extract the attention vectors as an early debugging step. Make a Jupyter notebook or some other simple way to load examples and visualize the attention.

Sequence-to-sequence models are trained with teacher forcing: instead of using the predicted output as the input at the next step, the ground-truth output is used. Without teacher forcing these models are much slower to converge, if they converge at all.

Teacher forcing: the input to the decoder is the ground-truth output instead of the prediction from the previous time-step.

Teacher forcing causes a mismatch between training the model and using it for inference. During training we always know the previous ground truth, but not during inference. Because of this, it's common to see a large gap between error rates on a held-out set evaluated with teacher forcing versus true inference.

Scheduled Sampling: A helpful technique to bridge the gap between training and inference is scheduled sampling [1]. The idea is simple: feed in the previous predicted output instead of the ground-truth output with probability $p$. The probability should be tuned for the problem; a typical range for $p$ is between 10% and 40%.

Scheduled sampling randomly chooses whether to use the predicted output or the ground-truth output as the input to the next time-step.
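To make the idea concrete, here is a minimal sketch of a decoder loop with scheduled sampling, written in PyTorch-style Python. The decoder interface (embed, rnn_cell, attend, output_proj, init_hidden) and the start-token convention are illustrative placeholders, not the code behind this post or any particular library's API.

```python
import torch

def decode_with_scheduled_sampling(decoder, encoder_states, targets, p=0.25):
    """Run the decoder over `targets` (batch, U), feeding back the model's own
    prediction instead of the ground truth with probability `p` at each step.
    `decoder` is assumed to expose embed, rnn_cell, attend, output_proj and
    init_hidden; these names are placeholders for your own modules."""
    batch_size, U = targets.shape
    hidden = decoder.init_hidden(batch_size)       # assumed helper
    prev_token = targets[:, 0]                     # assumed <sos> token in column 0
    logits = []
    for t in range(1, U):
        context = decoder.attend(hidden, encoder_states)        # attention over encoder states
        inp = torch.cat([decoder.embed(prev_token), context], dim=-1)
        hidden = decoder.rnn_cell(inp, hidden)
        step_logits = decoder.output_proj(hidden)
        logits.append(step_logits)
        # Scheduled sampling: with probability p feed back the model's own
        # prediction, otherwise feed the ground-truth token.
        if torch.rand(1).item() < p:
            prev_token = step_logits.argmax(dim=-1)
        else:
            prev_token = targets[:, t]
    return torch.stack(logits, dim=1)   # (batch, U - 1, vocab), aligned with targets[:, 1:]
```

In practice the coin flip can also be made per example rather than per batch step, and sampling from the softmax instead of taking the argmax is another common variant; the original scheduled sampling paper also ramps $p$ up over the course of training.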
Tune with Inference Rates: There can be a big gap between the teacher-forced loss and the error rate when properly inferring the output, and the correlation between the two metrics may not be perfect. Because of this, I recommend performing model selection and hyper-parameter tuning based on the inferred output error rates. If you save the model which performs best on a development set during training, use this error rate as the performance measure.

For example, in speech recognition tune directly with the word (or character) error rate computed on the predicted output. In machine translation, text summarization and other tasks where many correct output sentences exist, use the BLEU or ROUGE score. This tip is perhaps more important on smaller datasets, where there is likely to be more variance between the two metrics, and in these cases it can make a big difference. For example, on the phoneme recognition task above we see a 13% relative improvement by taking the model with the best inferred error rate instead of the best teacher-forced loss. This can be a key difference if you're trying to reproduce a baseline.

One downside to using these models is that they can be quite slow. The attention computation scales as the product of the input and output sequence lengths, i.e. $O(TU)$. If the input sequence doubles in length and the output sequence doubles in length, the amount of computation quadruples.

Bucket by Length: When optimizing a model with a minibatch size greater than 1, make sure to bucket the examples by length. For each batch, we'd like the inputs to all be the same length and the outputs to all be the same length. This won't usually be possible, but we can at least attempt to minimize the largest length mismatch in any given batch. One heuristic that works pretty well is to make buckets based on the input lengths. For example, all the inputs with lengths 1 to 5 go in the first bucket, inputs with lengths 6 to 10 go in the second bucket, and so on. Then sort the examples in each bucket by the output length followed by the input length. Naturally, the larger the training set, the more likely you are to have minibatches with inputs and outputs that are mostly the same length.

Striding and Subsampling: When the input and output sequences are long, these models can grind to a halt. With long input sequences, a good practice is to reduce the encoded sequence length by subsampling. This is common in speech recognition, for example, where the input can have thousands of time-steps [5]. You won't see it as much in word-based machine translation since the input sequences aren't as long. However, with character-based models subsampling is more common [6]. The subsampling can be implemented with a strided convolution and/or pooling operation, or simply by concatenating consecutive hidden states.

A pyramidal structure in the encoder. Here the stride or subsampling factor is 2 in each layer. The number of time-steps in the input sequence is reduced by a factor of 4 (give or take, depending on how you pad the sequence to each layer).

Often subsampling the input doesn't reduce the accuracy of the model. Even with a minor hit to accuracy, though, the speedup in training time can be worth it. When the RNN and attention computations are the bottleneck (which they usually are), subsampling the input by a factor of 4 can make training the model 4 times faster.

As you can see, getting these models to work well requires the right basket of tools. These tips are by no means comprehensive; my aim here is more for precision than recall. They certainly won't generalize to every problem. But as a few first ideas to try when training and improving a baseline sequence-to-sequence model, I strongly recommend all of them.

Thanks to Ziang Xie for useful feedback and suggestions.
CommonCrawl
If you know $E(X)$ and $SD(X)$ you can get some idea of how much probability there is in the tails of the distribution of $X$. In this section we are going to get upper bounds on probabilities such as the gold area in the graph below. That's $P(X \ge 20)$ for the random variable $X$ whose distribution is displayed in the histogram.

To do this, we will start with an observation about expectations of functions of $X$. Suppose $g$ and $h$ are functions such that $g(X) \ge h(X)$, that is, $P(g(X) \ge h(X)) = 1$. Then $E(g(X)) \ge E(h(X))$.

Now suppose $X$ is a non-negative random variable, and let $c$ be a positive number. Consider the two functions $g$ and $h$ graphed below. The function $h$ is the indicator defined by $h(x) = I(x \ge c)$. So $h(X) = I(X \ge c)$ and $E(h(X)) = P(X \ge c)$. The function $g$ is constructed so that the graph of $g$ is a straight line that is at or above the graph of $h$ on $[0, \infty)$, with the two graphs meeting at $x = 0$ and $x = c$. The equation of the straight line is $g(x) = x/c$. Thus $g(X) = X/c$ and hence $E(g(X)) = E(X/c) = E(X)/c$.

By construction, $g(x) \ge h(x)$ for $x \ge 0$. Since $X$ is a non-negative random variable, $P(g(X) \ge h(X)) = 1$, and therefore $P(X \ge c) = E(h(X)) \le E(g(X)) = E(X)/c$. This is Markov's inequality. It is called a "tail bound" because it puts an upper limit on how big the right tail at $c$ can be. It is worth noting that $P(X > c) \le P(X \ge c) \le E(X)/c$ by Markov's bound.

You can see that the bound is pretty crude. The gold area is clearly quite a bit less than 0.325.

Applying the bound with $c = k\mu_X$ for a positive constant $k$ gives $P(X \ge k\mu_X) \le \mu_X/(k\mu_X) = 1/k$. That is, $P(X \ge 2\mu_X) \le 1/2$, $P(X \ge 5\mu_X) \le 1/5$, and so on. The chance that a non-negative random variable is at least $k$ times the mean is at most $1/k$.

$k$ need not be an integer. For example, the chance that a non-negative random variable is at least 3.8 times the mean is at most $1/3.8$. If $k \le 1$, the inequality doesn't tell you anything you didn't already know: for $k \le 1$, Markov's bound is 1 or greater, and since all probabilities are bounded above by 1, the inequality is true but useless. When $k$ is large, the bound does tell you something. You are looking at a probability quite far out in the tail of the distribution, and Markov's bound is $1/k$, which is small.

Markov's bound only uses $E(X)$, not $SD(X)$. To get bounds on tails it seems better to use $SD(X)$ if we can. Chebyshev's Inequality does just that. It provides a bound on the two tails outside an interval that is symmetric about $E(X)$, as in the following graph. The red arrow marks $\mu_X$ as usual, and now the two blue arrows are at a distance of $SD(X)$ on either side of the mean. It is often going to be convenient to think of $E(X)$ as "the origin" and to measure distances in units of SDs on either side.
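For reference, the two bounds discussed in this section can be written compactly as follows. These are the standard statements, with $\mu_X = E(X)$ and $\sigma_X = SD(X)$; nothing here goes beyond the classical results.

```latex
% Markov's inequality, for non-negative X and c > 0:
P(X \ge c) \;\le\; \frac{E(X)}{c},
\qquad \text{equivalently} \qquad
P(X \ge k\mu_X) \;\le\; \frac{1}{k} \quad (k > 0).

% Chebyshev's inequality, bounding both tails in units of SDs:
P\big(|X - \mu_X| \ge k\,\sigma_X\big) \;\le\; \frac{1}{k^2}.
```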
CommonCrawl
Note that you have to be careful when calculating determinants of large matrices; for a 100x100 matrix, the determinant can easily overflow the maximum size of a float (or double). For this reason it's often better to calculate a log-determinant.

Once the matrix starts getting large, it can be easier to use row- or column-reduction to find the determinant, especially if there aren't many sparse rows or columns to take advantage of in iterated Laplace expansions.

I need to find a $3 \times 3$ matrix whose determinant is $0$, and I also need to be able to delete a randomly chosen column and row to make the determinant nonzero. Is it even possible?

For those people who need instant formulas: the general way to calculate the inverse of any square matrix is to append an identity matrix after the matrix (i.e. ...).

In general, if the matrix contains numerical values (especially values of not very high precision, since it takes time to plug in, for example, 1.03427*10^-28), you should be able to calculate the determinant fairly quickly.

What is a determinant and how do you find it? This lesson explains what a determinant is and shows you a step-by-step process for finding the determinant of a 3 x 3 matrix.

Probably the most straightforward way to do it for small matrices (such as a 3x3) is to use Gauss-Jordan elimination: subtract a multiple of the first row from the second and third rows so that you have a zero in the first column of those two rows.
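To illustrate the log-determinant point above, here is a small NumPy sketch; the matrix is just an arbitrary illustration. np.linalg.slogdet returns the sign and the log of the absolute determinant, which stays representable even when the determinant itself is far too large for a double.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 100)) * 1e3   # a 100x100 matrix with large entries

sign, logabsdet = np.linalg.slogdet(A)
print(sign, logabsdet)          # det(A) = sign * exp(logabsdet)

# Computing det(A) directly overflows to inf here: the log-determinant is
# far above log(float max) ~ 709, so exp() cannot represent it.
print(np.linalg.det(A))
```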
CommonCrawl
Abstract: We present a complex frame of eleven vectors in 4-space and prove that it defines injective measurements. That is, any rank-one $4\times 4$ Hermitian matrix is uniquely determined by its values as a Hermitian form on this collection of eleven vectors. This disproves a recent conjecture of Bandeira, Cahill, Mixon, and Nelson. We use algebraic computations and certificates in order to prove injectivity.
CommonCrawl
A permutation of integers $1,2,\ldots,n$ is called beautiful if there are no adjacent elements whose difference is $1$. Given $n$, construct a beautiful permutation if such a permutation exists. Print a beautiful permutation of integers $1,2,\ldots,n$. If there are several solutions, you may print any of them. If there are no solutions, print "NO SOLUTION".
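One well-known construction (a sketch in Python, not necessarily the intended reference solution): print all even numbers first and then all odd numbers. Inside each block adjacent values differ by 2, and the jump from the last even number down to 1 is at least 3 once $n \ge 4$; the only cases with no solution are $n = 2$ and $n = 3$.

```python
import sys

def beautiful_permutation(n):
    # For n = 2 and n = 3 no arrangement avoids an adjacent difference of 1.
    if n in (2, 3):
        return None
    # Evens first, then odds. For n = 1 this simply yields [1].
    return list(range(2, n + 1, 2)) + list(range(1, n + 1, 2))

def main():
    n = int(sys.stdin.readline())
    perm = beautiful_permutation(n)
    print("NO SOLUTION" if perm is None else " ".join(map(str, perm)))

if __name__ == "__main__":
    main()
```

For example, n = 5 gives "2 4 1 3 5", whose adjacent differences are 2, 3, 2, 2.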
CommonCrawl
Michael Artin and Barry Mazur's classical comparison theorem tells us that for a pointed connected finite type $\mathbb C$-scheme $X$, there is a map from the singular complex associated to the underlying topological space of the analytification of $X$ to the étale homotopy type of $X$, and that it induces an isomorphism on profinite completions. I'll begin with a brief review of Artin-Mazur's étale homotopy theory of schemes, and explain how I extended it to algebraic stacks using model category theory. Finally, I'll provide a formal proof of the comparison theorem for algebraic stacks using a new characterization of profinite completions.
CommonCrawl
University of Kalmar, Baltic Business School. Assessment and evaluation of incubators have been a topic of discussion for as long as incubators have existed, because there is no agreement on how to determine good performance. This paper demonstrates the use of Data Envelopment Analysis (DEA) when studying the performance of incubators. More specifically, it does so within the four dimensions of cooperation with universities, business networks, external funding and competence development, on a sample of 16 Swedish incubators. We show that DEA enables us to measure non-numerical dimensions and to simultaneously take into account both the efforts made by the incubator and the outcomes achieved. Moreover, DEA provides benchmarks and, based on a model that divides the incubators into four different groups, illustrates the difference between the benchmark and the incubators' current situation.

This paper explores incubator facilitation of technology transfer for their New Technology-Based Firms (NTBFs). Empirical evidence gathered from six interviews with incubator managers, together with a survey of 131 NTBFs in incubators in Sweden in 2005, and the findings of a survey of 273 NTBFs situated inside and outside Science Parks in 1999, are used for the exploration. It is suggested that incubators do facilitate technology transfer for their NTBFs. It is further suggested that the development towards an increased ability to facilitate technology transfer will continue as a result of the efforts made at the incubator and systemic levels.

School of Economics and Commercial Law, Department of Business Administration, Göteborg. The approach uses data from a sample of 183 small high-tech, new technology-based firms in Sweden (54 variables under the headings of work experience, board and advice, financing, motivation and performance priorities, technological innovation and strategy). This study identifies some core areas of importance in corporate governance. Few managers in this study had a strong background and experience in finance and business planning. Only 64 per cent of the managers had previous work experience before starting the firm. The survey makes it clear that the small high-tech firms are likely to have a strong link with banking institutions. The consequence of these links is that most of the firms' capital supply comes from banks, and that there are strong ownership links between banks and industry. The background of the founder does seem to have had an effect on the problems of financing and ownership. It is private sector organizations (banks) and families that are most frequently consulted by small high-tech firms (although with low mean values). It is also private and public sector organizations, in connection with external board membership, regional development agencies and banks, that are most frequently consulted. In the future, it is reasonable to search for factor patterns that can begin to explain and predict the direction of corporate governance in small new technology-based firms.

Linnaeus University, Faculty of Social Sciences, Department of Sport Science. The aim of this study is to investigate how students in ninth grade perceive health. The subject Physical Education and Health is taught in different ways, and the aim of the study is also to get a clearer picture of how the students feel that they work with health in the teaching.
Through interviews we want to hear the students' thoughts about and experiences of health, and what they would like to learn more about. We also want to know what they consider important in order to be able to influence their future health.

The aim of this study was to investigate why students in upper secondary school in Sweden do or do not participate in Physical Education and Health (PEH), and also to find out what changes to the subject could affect their participation. To answer this, we formulated questions about what preconceptions the students of a specific group have about participation and non-participation in PEH, and what changes the subject needs, according to the students, to become a subject for all students. The study was based on a qualitative method, using interviews with ten students from Swedish upper secondary schools. The students were from two schools in a medium-sized region and from different programmes. The results of the study show clearly that the content and the grades in PEH affect the students' participation, and were crucial to whether they felt they wanted to participate actively in the lessons or not. The grades were considered important and affected participation; even though some of the students were not interested in the content, they still did the things that the teacher demanded. Despite this, the attitudes towards the subject were positive among the interviewed students. The factors that the students think could lead to more active participation are to include more activities such as dance and individual training, and also to limit ball sports. The interviewed students seem to be searching for meaningfulness in the lessons, with a clear link between the curriculum and the grading criteria.

Linnaeus University, Faculty of Technology, Department of Forestry and Wood Technology. Sawmills in Sweden are continuously looking for solutions to make production more efficient. This study specifically analyses the log yard at the Derome Kinnared sawmill. The basis for the analysis was gathered through a downtime analysis, investigations of new technical solutions, and oral communication with staff, supervisors and other companies. Measures that can be taken to minimize production costs or increase production at the log yard are to implement new technology such as remote measurement, digitize measurement reports using SDC's archive, reorganize the workforce, and acquire an authorized measurer. The proposals indicate that implementing them can improve the Kinnared sawmill's economy. Depending on how much the company is willing to invest, efficiency can be improved to different degrees. In communication with company representatives, however, it was concluded that new technology may be better to implement in connection with new construction.

In Sweden, the state pharmacy monopoly has been questioned for a long time, and the government is now investigating the possibility of exposing pharmaceutical sales to competition. It has also been proposed in the government inquiry (SOU 2008:4, part 2) that a limited range of OTC drugs (over the counter, i.e. non-prescription drugs) should be allowed to be sold in grocery stores without any requirement for pharmaceutical competence. With correct use and access to proper advice, OTC drugs can be of great help to the individual in self-care and thereby also help relieve the burden on healthcare resources.
If OTC drugs are used incorrectly (over- or underdosing, the wrong indication, etc.), they can instead have the opposite effect. The aim of this questionnaire study was therefore to explore whether consumers of OTC drugs in Sweden want access to these drugs in, for example, grocery stores, where they do not have access to personal pharmaceutical advice; a further aim was to investigate how they would want to receive drug information in the grocery trade. In February 2008 a questionnaire study was carried out in Västervik, including 48 participants, of whom 29 were women and 19 men. The study showed that 71% of the participants had a positive attitude towards buying OTC drugs in grocery stores, and 58% would obtain information from the drug packaging and package leaflet in combination with having used the drug before. The wish for access to personal advice at the point of purchase was greatest among those aged ≤ 35 years, of whom 38% wanted it. The conclusion of the study is that the majority want to be able to buy OTC drugs in grocery stores, and that they would get information primarily from the drug packaging/package leaflet in combination with experience from previous use.

Linnaeus University, Faculty of Social Sciences, Department of Pedagogy. The aim of this thesis has been to shed light on therapists' views of siblings' and partners' experiences of having someone close to them with an eating disorder, and the importance of involving them in the affected person's treatment. The study was carried out with a qualitative method in which semi-structured interviews were conducted with 6 therapists at different eating disorder services in Sweden. The results and conclusion showed that siblings and partners, as well as other relatives, are important for the patient's recovery. It is important that siblings and partners are involved in the treatment, and that they receive information about the illness and the treatment. Siblings should not take on a role of responsibility, whereas partners need to take some responsibility in everyday life. Eating disorders have a clear impact on partners and siblings, and on relatives in general. Relatives should be given good tools for how they can best support the affected person. The therapist should meet relatives with understanding and respect, and without assigning blame. It is essential for the patient's motivation and recovery that there is a good and close relationship with the relatives who are involved in the treatment. There is a lack of research in this area, and further research is needed to strengthen the study's results and conclusions.

Linnaeus University, Faculty of Science and Engineering, School of Engineering. Our purpose with this study was to investigate whether there is any difference between high- and low-performing students in selected aspects such as choice of studies, achievement, self-image, motivation, confidence and career choices. Working as an education professional in today's schools requires knowledge of how young people think about their lives, so that we can help both high- and low-performing students develop. Students have different conditions, but regardless of good or bad conditions, each individual pupil must be allowed to develop. The concepts used in the study are choice of studies, self-image, motivation, achievement, the future and career choices. The background describes these concepts through a literature study. The method we used was semi-structured interviews, which means a fixed structure but with room for follow-up questions. The results of our study indicate that there were clear differences between high- and low-performing students.
The differences suggest that the high-performing students were more conscious of their study options than the low-performing students. There were also differences in the students' views of themselves: the high performers had a lower self-image than the low-performing students. The high performers compared themselves with other people more often and were more unsure of themselves; however, motivation was significantly higher among the high-performing students, and achievement showed the same pattern. The high-performing students had a clearer vision of the future and a clearer picture of their career choice than the low-performing students. A discussion of how we as educators can strengthen each student based on their individual needs is built on the results of the study.

Linnaeus University, Faculty of Business, Economics and Design, Linnaeus School of Business and Economics. Tourism in Kalmar has increased in recent years, but seemingly ends up in the shadow of Öland and the Kingdom of Crystal (Glasriket). How do you get visitors to notice Kalmar as a primary destination, and how do you get them to stay? In this thesis we set out to find out how a strong place brand is created and what is thus required to become a successful destination, and how place marketing can be used to reach potential target groups with the place brand.

Linnaeus University, Faculty of Health and Life Sciences, Department of Chemistry and Biomedical Sciences. The world market for pharmaceuticals was estimated at 900 billion US$ in 2011 according to IMS Health. The market for illegal pharmaceuticals is estimated to be worth between 75 and 200 billion dollars. In Sweden, the illegal pharmaceutical market is estimated at the equivalent of ≤0.5%. The penalty for smuggling pharmaceuticals into Sweden is a fine or at most 2 years' imprisonment. Swedish Customs estimates that only 10% of what is smuggled in is found. In other countries the penalty can vary from fines (economic crime in Africa) to the death penalty in China. In developing countries, 10-30% of all pharmaceuticals sold are estimated to be counterfeit, compared with about 1% in industrialized countries. The prevalence of counterfeit pharmaceuticals has many serious consequences for people, for example lack of effect, toxic reactions and poisonings, which in the worst case can lead to death. Another serious problem is the development of resistance and the increased spread of infectious diseases such as tuberculosis and/or HIV/AIDS. The aim of this degree project is to answer the question: what problems does the increasing prevalence of counterfeit pharmaceuticals create in society? The study focuses on lifelong medications, i.e. drugs a person must take for the rest of their life to treat a chronic disease. To deal with the problems that counterfeit pharmaceuticals create, more developed cooperation is required between drug regulatory authorities, pharmaceutical companies, international police organizations, customs and others. Work on developing packaging that is difficult to counterfeit should be intensified. Penalties should perhaps be reviewed. It is important to raise awareness among the general public about the risks of buying pharmaceuticals outside pharmacies (e.g. online).

This paper is a study of the use of the prepositions to/with after the verb to talk in British and American English. The research is based on material from the COBUILDDirect corpus, the Longman American Spoken Corpus and the New York Times CD-ROM.
The common and different features of the use of talk to/with in different genres of American and British English, as well as in written and spoken English, were studied; special attention was paid to the factors which influence the choice of preposition. The research has shown that talk with is generally used much less than talk to and is probably undergoing a process of narrowing of meaning. With after talk seems to be used most often to refer to two-way communication, while talk to is used to refer to both one- and two-way communication and is, therefore, more universal than talk with.

Växjö University, Faculty of Humanities and Social Sciences, School of Social Sciences. Solid waste management is a challenge for city authorities in developing countries, mainly due to the increasing generation of waste, the burden placed on the municipal budget by the high costs associated with its management, the lack of understanding of the diversity of factors that affect the different stages of waste management, and of the linkages necessary to keep the entire handling system functioning. An analysis of the literature reported mainly in publications from 2005 to 2011 and related to waste management in developing countries showed that few articles give quantitative information. The analysis was conducted in two of the major scientific journals, Waste Management and Waste Management and Research. The objective of this research was to determine the stakeholder actions and behaviour that play a role in the waste management process and to analyze influential factors on the system, in more than thirty urban areas in 22 developing countries on 4 continents. A combination of methods was used in this study in order to assess the stakeholders and the factors influencing the performance of waste management in the cities. Data were collected from the scientific literature, existing databases, observations made during visits to urban areas, structured interviews with relevant professionals, exercises provided to participants in workshops, and a questionnaire applied to stakeholders. Descriptive and inferential statistical methods were used to draw conclusions. The outcomes of the research are a comprehensive list of stakeholders that are relevant in waste management systems and a set of factors that reveal the most important causes of the systems' failure. The information provided is very useful when planning, changing or implementing waste management systems in cities.

Växjö University, Faculty of Humanities and Social Sciences, School of Management and Economics. Ansaldo STS, Italy; University of Naples "Federico II", Italy. The Unified Modeling Language (UML) is widely used as a high-level object-oriented specification language. In this paper we present a novel approach in which reverse engineering is performed using UML as the modelling language to achieve a representation of the implemented system. The target is the core logic of a complex critical railway control system, which was written in an application-specific legacy language. UML was perfectly suited to representing the nature of the core logic, made up of concurrent and interacting processes, using a bottom-up approach and proper modelling rules. Each process, in fact, was strictly related to the management of a physically (resp.
logically) well-distinguished railway device (resp. functionality). The obtained model greatly facilitated the static analysis of the logic code, allowing for at-a-glance verification of correctness and compliance with higher-level specifications, and opened the way to refactoring and other formal analyses. © 2006 IEEE.

Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Modern software systems are increasingly connected, pervasive, and dynamic; as such, they are subject to more runtime variations than legacy systems. Runtime variations affect system properties, such as performance and availability. The variations are difficult to anticipate and thus mitigate in the system design. Self-adaptive software systems were proposed as a solution to monitor and adapt systems in response to runtime variations. Research has established a vast body of knowledge on engineering self-adaptive systems. However, there is a lack of systematic process support that leverages such engineering knowledge and provides for systematic reuse in self-adaptive systems development. This thesis proposes the Autonomic Software Product Lines (ASPL), which is a strategy for developing self-adaptive software systems with systematic reuse. The strategy exploits the separation of a managed and a managing subsystem and describes three steps that transform and integrate a domain-independent managing system platform into a domain-specific software product line for self-adaptive software systems. Applying the ASPL strategy is, however, not straightforward, as it involves challenges related to variability and uncertainty. We analyzed variability and uncertainty to understand their causes and effects. Based on the results, we developed the Autonomic Software Product Lines engineering (ASPLe) methodology, which provides process support for the ASPL strategy. The ASPLe has three processes: 1) ASPL Domain Engineering, 2) Specialization and 3) Integration. Each process maps to one of the steps in the ASPL strategy and defines roles, work-products, activities, and workflows for requirements, design, implementation, and testing. The focus of this thesis is on requirements and design. We validate the ASPLe through demonstration and evaluation. We developed three demonstrator product lines using the ASPLe. We also conducted an extensive case study to evaluate key design activities in the ASPLe with experiments, questionnaires, and interviews. The results show a statistically significant increase in quality and reuse levels for self-adaptive software systems designed using the ASPLe compared to current engineering practices.

Example programs are well known as an important tool for learning computer programming. Recognizing the significance of example programs, this study was conducted with the goal of measuring and evaluating the quality of examples used in academia. We make a distinction between good and bad examples, as badly designed examples may prove harmful for novice learners. In general, students differ from expert programmers in their approach to reading and comprehending a program. How students understand example programs is explored in the light of classical theories and models of program comprehension. Key factors that impact program quality and comprehension are identified. To evaluate as well as improve the quality of examples, a set of quality attributes is proposed. The relationship between program complexity and quality is examined.
We rate readability as a prime quality attribute and hypothesize that example programs with low readability are difficult to understand. The Software Reading Ease Score (SRES), a program readability metric proposed by Börstler et al., is implemented to provide a readability measurement tool. SRES is based on lexical tokens and is easy to compute using static code analysis techniques. To validate the SRES metric, the results are statistically analyzed in correlation with existing, well-acknowledged software metrics.

Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. We envision an Autonomic Software Product Line (ASPL). The ASPL is a dynamic software product line that supports self-adaptable products. We plan to use a reflective architecture to model and develop the ASPL. To evaluate the approach, we have implemented three autonomic product lines which show promising results. The ASPL approach is at an initial stage and requires additional work. We plan to exploit online learning to realize more dynamic software product lines to cope with the problem of product line evolution. We propose on-line knowledge sharing among products in a product line to achieve continuous improvement of quality in product line products.

Software quality is critical in today's software systems. A challenge is the trade-off situation architects face in the design process. Designers often have two or more alternatives, which must be compared and put into context before a decision is made. The challenge becomes even more complex for dynamic software product lines, where domain designers have to take runtime variations into consideration as well. To address the problem we propose extensions to an architectural reasoning framework with constructs/artifacts to define and model a domain's scope and dynamic variability. The extended reasoning framework encapsulates knowledge to understand and reason about domain quality behavior and self-adaptation as a primary variability mechanism. The framework is demonstrated for a self-configuration property, self-upgradability, on an educational product line.

Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science. Software architecture serves as a foundation for the design and development of software systems. Designing an architecture requires extensive analysis and reasoning. The study presented herein focuses on architectural analysis and reasoning in support of engineering self-adaptive software systems with systematic reuse. Designing self-adaptive software systems with systematic reuse introduces variability along three dimensions, adding more complexity to the architectural analysis and reasoning process. To this end, the study presents an extended Architectural Reasoning Framework with dedicated reasoning support for self-adaptive systems and reuse. To evaluate the proposed framework, we conducted an initial feasibility case study, which concludes that the proposed framework assists domain architects in increasing reusability, reducing fault density, and eliminating differences in skills and experience among architects, which were our research goals and are decisive factors for a system's overall quality.

Advances in computing technologies are pushing software systems and their operating environments to become more dynamic and complex.
The growing complexity of software systems, coupled with uncertainties induced by runtime variations, leads to challenges in software analysis and design. Self-Adaptive Software Systems (SASS) have been proposed as a solution to address design-time complexity and uncertainty by adapting software systems at runtime. A vast body of knowledge on engineering self-adaptive software systems has been established. However, to the best of our knowledge, little or no work has considered systematic reuse of this knowledge. To that end, this study contributes an Autonomic Software Product Lines engineering (ASPLe) methodology. The ASPLe is based on a multi-product-lines strategy which leverages systematic reuse through separation of application and adaptation logic. It provides developers with repeatable process support to design and develop self-adaptive software systems with reuse across several application domains. The methodology is composed of three core processes, and each process is organized for requirements, design, implementation, and testing activities. To exemplify and demonstrate the use of the ASPLe methodology, three application domains are used as running examples throughout the report.

Designing a software architecture requires architectural reasoning, i.e., activities that translate requirements to an architecture solution. Architectural reasoning is particularly challenging in the design of product lines of self-adaptive systems, which involve variability both at development time and at runtime. In previous work we developed an extended Architectural Reasoning Framework (eARF) to address this challenge. However, evaluation of the eARF showed that the framework lacked support for rigorous reasoning, ensuring that the design complies with the requirements. In this paper, we introduce an analytical framework that enhances the eARF with such support. The framework defines a set of artifacts and a series of activities. Artifacts include templates to specify domain quality attribute scenarios, concrete models, and properties. The activities support architects in transforming requirement scenarios into architecture models that comply with the required properties. Our focus in this paper is on architectural reasoning support for a single product instance. We illustrate the benefits of the approach by applying it to an example client-server system, and outline challenges for future work. © 2016 IEEE.

We describe ongoing work on a variability mechanism for Autonomic Software Product Lines (ASPL). Autonomic software product lines have self-management characteristics that make product line instances more resilient to context changes and to some aspects of product line evolution. Instances sense the context, select, and bind the best component variants to variation points at run-time. The variability mechanism we describe is composed of a profile-guided dispatch based on off-line and on-line training processes. Together they form a simple yet powerful variability mechanism that continuously learns which variants to bind given the current context and system goals.

This report describes work in progress to develop Autonomic Software Product Lines (ASPL). The ASPL is a dynamic software product line approach with a novel variability handling mechanism that enables traditional software product lines to adapt themselves at runtime in response to changes in their context, requirements and business goals.
The ASPL variability mechanism is composed of three key activities: 1) context-profiling, 2) context-aware composition, and 3) online learning. Context-profiling is an offline activity that prepares a knowledge base for context-aware composition. The context-aware composition uses the knowledge base to derive a new product or adapt an existing product based on a product line's context attributes and goals. The online learning optimizes the knowledge base to remove errors and suboptimal information and to incorporate new knowledge. The three activities together form a simple yet powerful variability handling mechanism that learns and adapts a system at runtime in response to changes in system context and goals. We evaluated the ASPL variability mechanism on three small-scale software product lines and got promising results. The ASPL approach is, however, still at an initial stage and requires improved development support with more rigorous evaluation.

More than two decades of research have demonstrated an increasing need for software systems to be self-adaptive. Self-adaptation is required to deal with runtime dynamics which are difficult to predict before deployment. A vast body of knowledge to develop Self-Adaptive Software Systems (SASS) has been established. We, however, discovered a lack of process support to develop self-adaptive systems with reuse. To that end, we propose a domain-engineering based methodology, Autonomic Software Product Lines engineering (ASPLe), which provides step-by-step guidelines for developing families of SASS with systematic reuse. The evaluation results from a case study show positive effects on quality and reuse for self-adaptive systems designed using the ASPLe compared to state-of-the-art engineering practices.

We describe ongoing work in knowledge evolution management for autonomic software product lines. We explore how an autonomic product line may benefit from new knowledge originating from different source activities and artifacts at run time. The motivation for sharing run-time knowledge is that products may self-optimize at run time and thus improve quality faster compared to traditional software product line evolution. We propose two mechanisms that support knowledge evolution in product lines: online learning and knowledge sharing. We describe two basic scenarios for runtime knowledge evolution that involve these mechanisms. We evaluate online learning and knowledge sharing in a small product line setting, with promising results.

The concept of variability is fundamental in software product lines, and a successful implementation of a product line largely depends on how well domain requirements and their variability are specified, managed, and realized. While developing an educational software product line, we identified a lack of support to specify variability in quality concerns. To address this problem we propose an approach to model variability in quality concerns, which is an extension of quality attribute scenarios. In particular, we propose domain quality attribute scenarios, which extend standard quality attribute scenarios with additional information to support the specification of variability and the derivation of product-specific scenarios. We demonstrate the approach with scenarios for robustness and upgradability requirements in the educational software product line.
A search for muon neutrinos from Kaluza-Klein dark matter annihilations in the Sun has been performed with the 22-string configuration of the IceCube neutrino detector using data collected in 104.3 days of live time in 2007. No excess over the expected atmospheric background has been observed. Upper limits have been obtained on the annihilation rate of captured lightest Kaluza-Klein particle (LKP) WIMPs in the Sun and converted to limits on the LKP-proton cross sections for LKP masses in the range 250-3000 GeV. These results are the most stringent limits to date on LKP annihilation in the Sun.

We present the results of searches for high-energy muon neutrinos from 41 gamma-ray bursts (GRBs) in the northern sky with the IceCube detector in its 22-string configuration active in 2007/2008. The searches cover both the prompt and a possible precursor emission as well as a model-independent, wide time window of -1 hr to +3 hr around each GRB. In contrast to previous searches with a large GRB population, we do not utilize a standard Waxman-Bahcall GRB flux for the prompt emission but calculate individual neutrino spectra for all 41 GRBs from the burst parameters measured by satellites. For all of the three time windows, the best estimate for the number of signal events is zero. Therefore, we place 90% CL upper limits on the fluence from the prompt phase of $3.7 \times 10^{-3}$ erg cm$^{-2}$ (72 TeV-6.5 PeV) and on the fluence from the precursor phase of $2.3 \times 10^{-3}$ erg cm$^{-2}$ (2.2-55 TeV), where the quoted energy ranges contain 90% of the expected signal events in the detector. The 90% CL upper limit for the wide time window is $2.7 \times 10^{-3}$ erg cm$^{-2}$ (3 TeV-2.8 PeV) assuming an $E^{-2}$ flux.

We report on a search with the IceCube detector for high-energy muon neutrinos from GRB 080319B, one of the brightest gamma-ray bursts (GRBs) ever observed. The fireball model predicts that a mean of 0.1 events should be detected by IceCube for a bulk Lorentz boost of the jet of 300. In both the direct on-time window of 66 s and an extended window of about 300 s around the GRB, no excess was found above background. The 90% CL upper limit on the number of track-like events from the GRB is 2.7, corresponding to a muon neutrino fluence limit of $9.5 \times 10^{-3}$ erg cm$^{-2}$ in the energy range between 120 TeV and 2.2 PeV, which contains 90% of the expected events.

Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth. We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies.

Over 5000 PMTs are being deployed at the South Pole to compose the IceCube neutrino observatory.
Many are placed deep in the ice to detect Cherenkov light emitted by the products of high-energy neutrino interactions, and others are frozen into tanks on the surface to detect particles from atmospheric cosmic ray showers. IceCube is using the 10-in. diameter R7081-02 made by Hamamatsu Photonics. This paper describes the laboratory characterization and calibration of these PMTs before deployment. PMTs were illuminated with pulses ranging from single photons to saturation level. Parameterizations are given for the single photoelectron charge spectrum and the saturation behavior. Time resolution, late pulses and afterpulses are characterized. Because the PMTs are relatively large, the cathode sensitivity uniformity was measured. The absolute photon detection efficiency was calibrated using Rayleigh-scattered photons from a nitrogen laser. Measured characteristics are discussed in the context of their relevance to IceCube event reconstruction and simulation efforts. (C) 2010 Elsevier B.V. All rights reserved.

We have measured the speed of both pressure waves and shear waves as a function of depth between 80 and 500 m depth in South Pole ice with better than 1% precision. The measurements were made using the South Pole Acoustic Test Setup (SPATS), an array of transmitters and sensors deployed in the ice at the South Pole in order to measure the acoustic properties relevant to acoustic detection of astrophysical neutrinos. The transmitters and sensors use piezoceramics operating at ~5-25 kHz. Between 200 m and 500 m depth, the measured profile is consistent with zero variation of the sound speed with depth, resulting in zero refraction, for both pressure and shear waves. We also performed a complementary study featuring an explosive signal propagating vertically from 50 to 2250 m depth, from which we determined a value for the pressure wave speed consistent with that determined for shallower depths, higher frequencies, and horizontal propagation with the SPATS sensors. The sound speed profile presented here can be used to achieve good acoustic source position and emission time reconstruction in general, and neutrino direction and energy reconstruction in particular. The reconstructed quantities could also help separate neutrino signals from background. (C) 2010 Elsevier B.V. All rights reserved.

We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon neutrinos with an $E^{-2}$ spectrum is $E^2 \Phi_{\nu_\mu} < 1.4 \times 10^{-11}\ \mathrm{TeV\ cm^{-2}\ s^{-1}}$, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit by the AMANDA-II detector by a factor of 2.

A search for muon neutrinos from neutralino annihilations in the Sun has been performed with the IceCube 22-string neutrino detector using data collected in 104.3 days of live time in 2007. No excess over the expected atmospheric background has been observed.
Upper limits have been obtained on the annihilation rate of captured neutralinos in the Sun and converted to limits on the weakly interacting massive particle (WIMP) proton cross sections for WIMP masses in the range 250-5000 GeV. These results are the most stringent limits to date on neutralino annihilation in the Sun.

The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance or quantum decoherence. Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on violation of Lorentz invariance and quantum decoherence parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.

We present a search for point sources of high energy neutrinos using 3.8 yr of data recorded by AMANDA-II during 2000-2006. After reconstructing muon tracks and applying selection criteria designed to optimally retain neutrino-induced events originating in the northern sky, we arrive at a sample of 6595 candidate events, predominantly from atmospheric neutrinos with primary energy 100 GeV to 8 TeV. Our search of this sample reveals no indications of a neutrino point source. We place the most stringent limits to date on $E^{-2}$ neutrino fluxes from points in the northern sky, with an average upper limit of $E^2 \Phi_{\nu_\mu + \nu_\tau} \le 5.2 \times 10^{-11}\ \mathrm{TeV\ cm^{-2}\ s^{-1}}$ on the sum of $\nu_\mu$ and $\nu_\tau$ fluxes, assumed equal, over the energy range from 1.9 TeV to 2.5 PeV.

On 2006 December 13 the IceTop air shower array at the South Pole detected a major solar particle event. By numerically simulating the response of the IceTop tanks, which are thick Cherenkov detectors with multiple thresholds deployed at high altitude with no geomagnetic cutoff, we determined the particle energy spectrum in the energy range 0.6-7.6 GeV. This is the first such spectral measurement using a single instrument with a well-defined viewing direction. We compare the IceTop spectrum and its time evolution with previously published results and outline plans for improved resolution of future solar particle spectra.

IceCube is a km-scale neutrino observatory under construction at the South Pole with sensors both in the deep ice (InIce) and on the surface (IceTop). The sensors, called Digital Optical Modules (DOMs), detect, digitize and timestamp the signals from optical Cherenkov-radiation photons. The DOM Main Board (MB) data acquisition subsystem is connected to the central DAQ in the IceCube Laboratory (ICL) by a single twisted copper wire-pair and transmits packetized data on demand. Time calibration is maintained throughout the array by regular transmission to the DOMs of precisely timed analog signals, synchronized to a central GPS-disciplined clock. The design goals and consequent features, functional capabilities, and initial performance of the DOM MB, and the operation of a combined array of DOMs as a system, are described here. Experience with the first InIce strings and the IceTop stations indicates that the system design and performance goals have been achieved.
(c) 2009 Elsevier B.V. All rights reserved.

Linnaeus University, Faculty of Humanities and Social Sciences, School of Social Sciences. The aim of this thesis was to examine which communicative strategies people in the PR and communications industry use to build and strengthen their personal brands in social media. The main research question of the study is: "Which communicative strategies do people in the PR and communications industry use to strengthen their personal brand building in social media?" The purpose of the study was twofold: partly to contribute a deeper understanding of the driving forces behind personal brand building in social media, and partly to contribute new perspectives and new empirical material to the social science research front. The theoretical framework of the study is based on brand theory, Erving Goffman's theory of self-presentation, and Pierre Bourdieu's theory of capital and social positioning. The study was carried out with qualitative methods: first, we reviewed current research in the areas of social media and personal brand building; we also made a content mapping of ten digital personal brands and conducted interviews with the people behind the brands. When we use the term social media in the thesis, we refer only to blogs and the microblogging tool Twitter. The results of this study show that the respondents' strategies for digital personal brand building differ, and that the strategies also differ depending on the social medium. Some respondents state that they consistently work strategically both on Twitter and with their blog, while others say that they have no strategies at all. Regardless of how much room the strategic considerations are given, similarities can be discerned in all respondents' brand building. Based on these similarities, we have identified five ideal norms and strategies for personal brand building for people working in the PR and communications industry: to clearly identify one's professional identity and personal qualities, to leave out private and unfavourable information, to write about a delimited subject area, to demonstrate currency and knowledge within the industry, and to network with people in the industry. These formed the basis for constructing the following three ideal types for the respondents' brand building: the reporter, the commentator, and the columnist. In summary, we have found that personal brand building in social media is ascribed ever greater value within the PR and communications industry. The value lies in the possibility of strengthening one's position relative to others in the industry; to succeed in this, a well-honed strategy and knowledge of how best to package oneself are required.

Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Asynchronous machines are nowadays considered the most commonly used electrical machines; they are mainly used as induction motors. Starting the induction motor is the most important and most dangerous step. The theory behind this project is based on representing the real motor by a set of equations and values in Matlab using the subsystem feature, forming a corresponding idealized motor in which all the physical effects are similar. The motor is started under different loads using two methods: direct and soft starting.
Each method is studied and discussed using supporting simulations of the current, torque, speed, efficiency and power factor curves. Linnaeus University, School of Business and Economics, Department of Organisation and Entrepreneurship. Entrepreneurship is in full swing across the globe and is widely regarded as a sustainable solution to long-unresolved economic issues like unemployment and poverty. Immigration is also a growing reality, and immigrating individuals can contribute to their new societies either by settling for a job or by launching a business as an entrepreneur and in turn creating more jobs. However, social constraints are a key hurdle in the way of immigrant entrepreneurs. This paper aims not only to understand the social constraints faced by immigrant entrepreneurs but also to provide a set of guidelines on how to overcome these social constraints. A qualitative research study focused on immigrant entrepreneurs in Sweden was designed around this purpose and was conducted in two cities of Sweden. Entrepreneurs in the study were from diverse nationalities of origin and business sectors. Key social constraints identified through the study are cultural differences, differences in business practices, and language – all acting as a wall for foreign entrepreneurs. Networking – both business and social – is regarded as the main solution to overcome these barriers, and the weight for this lies equally on the state, the entrepreneurship industry, and the immigrant entrepreneurs. Immigrant entrepreneurs can also overcome the social constraints by researching their business area in detail as well as by marketing themselves and their businesses, especially by establishing a strong and trustworthy social media profile. Government needs to recognize the diversity of immigrant entrepreneurship communities and create tailor-made social interaction programs for different nationalities, educational backgrounds, and business sectors. It can also project a positive image of successful immigrant entrepreneurs, not only to inspire other immigrant entrepreneurs but also to increase trust in immigrant entrepreneurs among the native population. Another important step by government could be early orientation of immigrant entrepreneurs to the Swedish business market. The entrepreneurship advisory industry needs to understand immigrant entrepreneurs better and organize more multicultural events to lower the barriers between native and immigrant communities. Antipsychotic drugs are the basis for the treatment of schizophrenia, a mental illness that appears already in young people. The symptoms of schizophrenia are usually divided into positive symptoms (hallucinations, delusions, paranoid thoughts), negative symptoms (difficulty concentrating, impaired language and thinking ability, reduced interest in one's surroundings, and lack of initiative), and cognitive symptoms (memory problems, problems with attention and concentration). Antipsychotic drugs are divided into typical (the older generation) and atypical (the newer generation) antipsychotics. For both groups of antipsychotic drugs there is a risk of side effects. The most common side effects of treatment with the older generation of antipsychotics are extrapyramidal side effects. A side effect that appears more specific to the newer atypical preparations is weight gain, which in turn can cause the development of many serious medical conditions. The aim of this work was to compare typical and atypical antipsychotic drugs with respect to the development of weight gain.
To answer my research question, a literature study of five scientific articles was carried out. The scientific articles were found through database searches in PubMed, while other facts were obtained from other sources. The results of the scientific articles show that there are differences between traditional and newer generations of antipsychotics in their tendency to cause weight gain. With a few exceptions, several antipsychotic drugs belonging to the newer generation are associated with a higher risk of weight gain compared with the older generation of antipsychotics. Weight gain is caused most by clozapine, followed by olanzapine and risperidone. Quetiapine, like haloperidol, causes less weight gain. Because of this, research is now being conducted into the causes of this difference in order to improve the side-effect profile of future antipsychotics.
A search for new supernova remnants (SNRs) has been conducted using TeV gamma-ray data from the H.E.S.S. Galactic plane survey. As an identification criterion, shell morphologies that are characteristic for known resolved TeV SNRs have been used. Three new SNR candidates were identified in the H.E.S.S. data set with this method. Extensive multiwavelength searches for counterparts were conducted. A radio SNR candidate has been identified to be a counterpart to HESS J1534-571. The TeV source is therefore classified as a SNR. For the other two sources, HESS J1614-518 and HESS J1912+101, no identifying counterparts have been found, thus they remain SNR candidates for the time being. TeV-emitting SNRs are key objects in the context of identifying the accelerators of Galactic cosmic rays.
The TeV emission of the relativistic particles in the new sources is examined in view of possible leptonic and hadronic emission scenarios, taking the current multiwavelength knowledge into account. Aims.
Following the detection of the fast radio burst FRB 150418 by the SUPERB project at the Parkes radio telescope, we aim to search for very-high-energy gamma-ray afterglow emission. Methods. Follow-up observations in the very-high-energy gamma-ray domain were obtained with the H.E.S.S. imaging atmospheric Cherenkov telescope system within 14.5 h of the radio burst. Results. The obtained 1.4 h of gamma-ray observations are presented and discussed. At the 99% C.L. we obtained an integral upper limit on the gamma-ray flux of $\Phi_\gamma(E > 350\,\mathrm{GeV}) < 1.33 \times 10^{-8}\ \mathrm{m^{-2}\,s^{-1}}$. Differential flux upper limits as a function of the photon energy were derived and used to constrain the intrinsic high-energy afterglow emission of FRB 150418. Conclusions. No hints for high-energy afterglow emission of FRB 150418 were found. Taking absorption on the extragalactic background light into account and assuming a distance of $z = 0.492$ based on radio and optical counterpart studies and consistent with the FRB dispersion, we constrain the gamma-ray luminosity at 1 TeV to $L < 5.1 \times 10^{47}$ erg/s at 99% C.L.
CommonCrawl
After constructing a binary search tree, you can read off the key values in ascending order by performing an in-order traversal. Will the resulting sorted order be stable? If so, how would the tree have to be coded to ensure this? If it is not possible, why not? Stability ensures that if A and B share the same key and A comes before B originally, then A comes before B after sorting as well. Suppose the tree already contains several entries with key 1, the last of them being (1, d), and we insert a (1, e) into the tree. We can get that (1, e) as either the left or the right child of (1, d). We want to return (1, d) first and then (1, e) so that stability is retained. How do we change the code for the BST to do this? My suggestion was to make a linked list that returns the head of duplicate values whenever duplicates are encountered. However, I'm not sure this is the best method here. You can ensure that the in-order traversal is stable by modifying the insertion routine (if necessary) so that when it compares an element $x_1$ already in the tree to the new element $x_2$, if $x_1 = x_2$ then it answers that $x_1 < x_2$. Caveat: I'm not sure that this works for self-balancing trees.
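To make that tie-breaking rule concrete, here is a minimal sketch (my own illustration, not code from the thread): treating an equal key as "greater" sends duplicates into the right subtree, so the in-order traversal emits them in insertion order.

```python
# A BST whose insertion sends equal keys to the right, so an in-order
# traversal lists duplicates in insertion order, i.e. the sort is stable.

class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None

def insert(root, key, value):
    if root is None:
        return Node(key, value)
    if key < root.key:                    # strictly smaller -> left subtree
        root.left = insert(root.left, key, value)
    else:                                 # equal keys treated as "greater" -> right
        root.right = insert(root.right, key, value)
    return root

def inorder(root, out):
    if root is not None:
        inorder(root.left, out)
        out.append((root.key, root.value))
        inorder(root.right, out)
    return out

root = None
for key, value in [(1, "a"), (2, "x"), (1, "b"), (1, "c")]:
    root = insert(root, key, value)

print(inorder(root, []))   # [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'x')]
```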
CommonCrawl
1. What will come in place of the question mark (?) in the following number series? 2. What will come in place of the question mark (?) in the following number series? 3. What will come in place of the question mark (?) in the following number series? The series is +$1^2$ + 1, +$2^2$ + 1, +$3^2$ + 1, +$4^2$ + 1, +$5^2$ + 1, +$6^2$ + 1, ... 4. In each of these questions, an equation is given with a question mark (?) in place of the correct symbol. Based on the values on the right-hand side and the left-hand side of the question mark, you have to decide which of the following symbols will come in place of the question mark. 5. In each of these questions, an equation is given with a question mark (?) in place of the correct symbol. Based on the values on the right-hand side and the left-hand side of the question mark, you have to decide which of the following symbols will come in place of the question mark. 6. In each of these questions, an equation is given with a question mark (?) in place of the correct symbol. Based on the values on the right-hand side and the left-hand side of the question mark, you have to decide which of the following symbols will come in place of the question mark. 7. In each of these questions, an equation is given with a question mark (?) in place of the correct symbol. Based on the values on the right-hand side and the left-hand side of the question mark, you have to decide which of the following symbols will come in place of the question mark. 8. In each of these questions, an equation is given with a question mark (?) in place of the correct symbol. Based on the values on the right-hand side and the left-hand side of the question mark, you have to decide which of the following symbols will come in place of the question mark. 7.2 $\times$ 8.5 $\times$ 3.5 = ? $3\frac{2}{7} + 8\frac{1}{7} - 5\frac{2}{7} + 2\frac{1}{14}$ = ?
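For the record, here is a quick Python check (my own addition, not part of the original question bank) of the series rule quoted in question 3 and of the two arithmetic expressions at the end; the starting value of the series is an assumption, since the actual terms are not shown above.

```python
from fractions import Fraction

# Series rule quoted in Q3: successive differences are n^2 + 1.
# The real starting term is not given above, so 3 is just an assumed example.
term = 3
series = [term]
for n in range(1, 7):
    term += n**2 + 1
    series.append(term)
print(series)          # [3, 5, 10, 20, 37, 63, 100]

# The two arithmetic expressions at the end:
print(7.2 * 8.5 * 3.5)                 # 214.2 (up to float rounding)
total = (Fraction(3) + Fraction(2, 7)) + (Fraction(8) + Fraction(1, 7)) \
        - (Fraction(5) + Fraction(2, 7)) + (Fraction(2) + Fraction(1, 14))
print(total)                           # 115/14, i.e. 8 3/14
```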
CommonCrawl
In this talk we consider multivariate approximation of compact embeddings of periodic Sobolev spaces of dominating mixed smoothness into the $L_q,\ 2< q\leq \infty$ space by linear Monte Carlo methods that use arbitrary linear information. We construct linear Monte Carlo methods and obtain explicit-in-dimension upper estimates. These estimates catch up with the rate of convergence.
CommonCrawl
Structural and resistive changes in Ti-doped NiO resistive random access memory structures that occur upon electroforming have been investigated using hard X-ray microscopy with a spatial resolution of 50 nm. Analysis of 2D scans of the NiO (111) diffraction intensity across a $10\ \mu\mathrm{m} \times 10\ \mu\mathrm{m}$ patterned Pt/NiO:Ti/Pt structure shows that electroforming leads to structural changes in regions of size up to about one micrometer, which is much larger than the grain size of the structure (of the order of 15 nm). Such changes are consistent with a migration of ionic species or defects during electroforming over regions containing many crystalline grains. *Manuscript created by UChicago Argonne, LLC, Operator of Argonne National Lab, a U.S. DOE Office of Science Laboratory operated under Contract No. DE-AC02-06CH11357.
CommonCrawl
I redirected it from [[model structure on an under category]].
By the way: I keep seeing in the literature _overcategory_ instead of _over category_. For instance in the article by Hirschhorn linked to at [[model structure on an over category]]. Are we sure we want to have the entries named [[over category]] and so on?
Well, *I* like [[slice category]], but I remember putting it at [[over category]] in the days before redirects to help ensure that your links to it would work. I have put in redirects for [[overcategory]] and the like.
Now I am interested in the special case of Top with [[Strom's model structure]]. There are theorems on the connection between [[Dold fibration]]s and [[Hurewicz fibration]]s; one of them is that every Dold fibration p:E -> B is homotopy equivalent over B to a Hurewicz fibration p':E' -> B. Does this shed some light on the open question of whether there is a model category structure on Top where the fibrations are the Dold fibrations? Another important point is that you can repeat the definition of Hurewicz fibration verbatim to get Dold fibration if, instead of homotopies, you use delayed homotopies (this is a theorem). Is there a way to use delayed homotopies to nontrivially modify the notion of cofibration?
In the case of the under category, a relevant theorem may be Dold's theorem, which states that a map whose underlying map is a homotopy equivalence is already a homotopy equivalence under, provided its source and target are cofibrations. This is discussed in Kamps-Porter in quite a lot of detail.
These are still the usual cofibrations, and these do not form a model category with Dold fibrations, but maybe there is a good modified choice of cofibrations which would be "complementary" to Dold fibrations (maybe a silly idea if something specific bans this choice, but to me it still looks reasonable).
Modified the Idea section in [[over quasi-category]]; now it should be less evil.
That reminds me: we should add a discussion about if and how the [[model structure on an over category]] models the corresponding [[over quasi-category]]. I was about to make the obvious statement, but I'll need to check something first.
I have somewhat hastily added to [[model structure on an over category]] the argument that over a fibrant object this presents the correct over-$(\infty,1)$-category. However, I have to dash off now and go offline. Will try to look into this again later.
I added a new section <https://ncatlab.org/nlab/show/model+structure+on+an+over+category#quillen_adjunctions_between_slice_categories> about Quillen adjunctions between slice categories.
Re #14: I added Proposition 2.3, which shows that if C is a simplicial model category, then so is C/X.
CommonCrawl
To find a product using the distributive property, you take the coefficient--in this case the 2 outside of the parentheses--and multiply it by each term within the parentheses separately. The first term within the parentheses is $6x$, so the first term of the product is $2\times6x$, or $12x$. The second term within the parentheses is $5y$, so the second term of the product is $2\times5y$, or $10y$. The third term within the parentheses is $2z$, so the third term of the product is $2\times2z$, or $4z$. The terms within the parentheses are connected by plus signs, so the terms within the product will be too. The final product after distributing the 2, therefore, is $12x+10y+4z$.
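If you want to double-check an expansion like this mechanically, a one-line computer algebra check works; the sketch below uses SymPy (my choice of tool, not something implied by the text above).

```python
from sympy import symbols, expand

x, y, z = symbols("x y z")

# Distribute the 2 over each term inside the parentheses.
print(expand(2 * (6*x + 5*y + 2*z)))   # 12*x + 10*y + 4*z
```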
CommonCrawl
Why doesn't penalized cubic regression reduce the number of knots in a GAM? As far as I understand, cubic regression penalization prevents overfitting by reducing the number of knots, i.e. by penalizing wiggliness. The supplied parameter k serves only as a starting point for choosing the knots. In practice k-1 (or k) sets the upper limit on the degrees of freedom associated with an s smooth [...] However the actual effective degrees of freedom are controlled by the degree of penalization selected during fitting, by GCV, AIC, REML or whatever is specified. However, with my model specifications, penalization does not seem to have an effect. The demo below starts and ends with 114 knots. Obviously I'm mistaken somewhere. What am I missing? You do have a slight misunderstanding of how penalization works in the spline regression situation. Regression with (cubic, in this case) splines does not implement penalization by reducing the number of knots, but instead by reducing the magnitude of the coefficients of the spline terms. "Degrees of freedom" doesn't refer to the number of knots, but rather to "actual effective degrees of freedom", which take into account the fact that the regression wasn't allowed to fully optimize on the coefficients (unlike, say, least squares); therefore the effective loss in degrees of freedom is somewhat less than the number of coefficients estimated. Schematically, the fit minimizes an objective of the form $\sum_i \big(y_i - \sum_j \beta_j X_{ij} - \sum_k \gamma_k Z_{ik}\big)^2 + \lambda \sum_j \beta_j^2$. The $X_i$ are the cubic spline terms, the $Z_i$ are terms whose coefficients aren't being regularized. The last term is a penalty on the magnitude of the $\beta$ coefficients; the larger $\lambda$ is, the more the coefficients are shrunk towards zero. It's this last term that penalizes "wiggliness"; to see this, imagine what happens as $\lambda \rightarrow \infty$ - the fitted curve comes closer and closer to ignoring the $X_i$ altogether, which is about as non-wiggly (with respect to the $X_i$) as you can get - it's saying that the estimated function is constant with respect to changes in the $X_i$. You certainly could regularize by reducing the number of knots, but, from an optimization perspective, this is very difficult. If you are optimizing with respect to knots, not only do you have to choose the number of knots but also their location. As far as I know, there is no good, fast solution to the latter problem. With regularization of the coefficients, there are high-quality approximations and shortcuts that are very effective at speeding up the computations.
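To see the answer's point numerically, here is a small self-contained sketch (plain NumPy rather than mgcv, so only an analogy to the questioner's setup): the basis keeps all of its knots at every penalty level, but the trace of the hat matrix — the effective degrees of freedom — falls as $\lambda$ increases.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Cubic truncated-power basis with 30 interior knots (the knot count never changes).
knots = np.linspace(0, 1, 32)[1:-1]
X = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.clip(x - k, 0, None) ** 3 for k in knots])

# Penalize only the knot terms (columns 4 onward), like the beta's in the answer.
S = np.zeros((X.shape[1], X.shape[1]))
S[4:, 4:] = np.eye(len(knots))

for lam in [1e-6, 1e-3, 1e-1, 10.0]:
    H = X @ np.linalg.solve(X.T @ X + lam * S, X.T)   # hat matrix
    edf = np.trace(H)                                 # effective degrees of freedom
    print(f"lambda={lam:g}: {len(knots)} knots, edf={edf:.1f}")
# edf falls toward 4 (the unpenalized polynomial part) as lambda grows,
# while the number of knots stays fixed at 30 throughout.
```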
CommonCrawl
I have a question from the paper "More Abelian Dualities in 2+1 Dimensions". On page 3, it says that we flow towards the IR by sending $\alpha\rightarrow\infty$ and tuning the mass to zero. My question is why this limit takes the theory to the IR. $\alpha \to \infty$ in the IR because the $|\phi|^4$ term has dimension 1. Then there is a comma missing: the mass is tuned to zero to achieve criticality. The mass term is relevant as always, though. It has to be tuned by hand.
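A hedged way to unpack the dimension counting in that answer (my own gloss, not a quote from the paper): in $d = 2+1$ dimensions a real scalar has mass dimension $[\phi] = (d-2)/2 = 1/2$, so the quartic coupling in $\alpha|\phi|^4$ carries $[\alpha] = d - 4[\phi] = 1$. The dimensionless coupling at RG scale $\mu$ is then
$$\hat\alpha(\mu) = \frac{\alpha}{\mu} \;\xrightarrow{\;\mu \to 0\;}\; \infty,$$
so sending $\alpha \to \infty$ is the same as flowing to the deep IR, while the (also relevant) mass term must be tuned to zero by hand to sit at the critical point.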
CommonCrawl
For all our prospective med-school students (and others alike) who are currently enrolled in EPIB 507 here at McGill, you must have wondered why you took a course such as biostatistics in the first place, and as you have already experienced, our biostats course — as opposed to courses in the regular semesters — spans only one month, and covered an incredible amount of indigestible material. In a few words, biostatistics, as far as we are concerned, pertains mostly to applied statistics within an epidemiological and biological framework. It differs from the usual statistics in its emphasis on concepts such as specificity, false positives and randomized clinical trials, and on metrics such as odds ratios, prevalence and relative risks. While we haven't quite approached biostatistics from a mathematician's standpoint, we did cover a plethora of ways in which it is applied in a medical research setting, which — as it turns out — could be useful to some of you who are newly involved with pharmaceuticals or otherwise interested in public health. Similar to the way statistics is taught in other faculties, descriptive biostatistics is concerned with describing the gist of the data, usually via clever visual data representations such as stacked bar graphs and two-way frequency tables. Here, concepts such as the geometric mean and the coefficient of variation can be easily found in other non-medical domains as well, and the materials are, in general, pretty straightforward and intelligible. While seemingly unrelated to the pursuit of statistical analysis, having some chops in basic probability paves the way for understanding more advanced applications in inferential statistics. It's no surprise that virtually all stats courses offered in universities tend to ramble a bit on probability, with some courses covering more than others. In EPIB 507, our focus is on learning the basic laws used in counting and the computation of probability (e.g., law of union, law of intersection, Bayes' theorem, law of total probability). Equipped with the basic notions, we then moved on to the concept of a random variable and its ancillary metrics (e.g., expected value, standard deviation). We covered many examples of discrete distributions (e.g., binomial, geometric, hypergeometric and Poisson distributions), along with a few examples of continuous distributions (e.g., uniform and normal distributions), all the while avoiding calculus altogether. Inferential biostatistics is mainly concerned with drawing practical conclusions/recommendations about the populations from the sample data, and this is where the water starts to get murky pretty quickly. The inferences can be generally categorized into 2 types: confidence intervals and hypothesis testing. From there, a whole bunch of crazy formulas would pop up non-stop, and the procedures need to be followed and underlying assumptions respected. Parametric procedures are those with specific presuppositions about the nature and the inter-relationship of the populations, whereas non-parametric procedures are those that are to be used when the aforementioned presuppositions fail to hold. In any case, both procedures allow for a 1-sample test for a population mean (or median), and a 2-sample test for differences in means (distributions) — whether the 2 samples are matched or unmatched. For example, here's what Fisher's z-transformation does to r! Of course, there is much, much more to this.
Confidence intervals for the difference in population means, the two-sample proportion test, Fisher's z-transformation for the correlation coefficient, Wilcoxon's signed-rank test, the chi-square contingency test… you name it. And if you just throw in a bunch of Greek letters such as $\alpha, \beta, \rho$ and $\chi^2$, then you would pretty much have got it. While medicine emphasizes stress management, the process of creating medical practitioners induces conditions which would require stress management.
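Since Fisher's z-transformation comes up twice above, here is a small worked example (my own illustration, using made-up numbers) of how it turns a sample correlation into a confidence interval.

```python
import numpy as np

# Hypothetical numbers: a sample correlation r from n paired observations.
r, n = 0.62, 50

z = np.arctanh(r)              # Fisher's z-transformation, z = 0.5*ln((1+r)/(1-r))
se = 1 / np.sqrt(n - 3)        # approximate standard error of z
lo, hi = z - 1.96 * se, z + 1.96 * se

# Back-transform the interval to the correlation scale with tanh.
print(f"r = {r}, 95% CI = ({np.tanh(lo):.3f}, {np.tanh(hi):.3f})")
```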
CommonCrawl
In this paper we continue investigations of the Parker-shearing instability by performing numerical simulations of magnetic flux tube dynamics in the thin flux-tube approximation. We show that the evolution of flux tubes resulting from numerical simulations is very similar to that of the linear solutions if the vertical displacements are smaller than the vertical scale height $H$ of the galactic disc. If the vertical displacements are comparable to $H$, the vertical growth of perturbations is faster in the nonlinear range than in the linear one, and we observe a rapid inflation of the flux tube at its top, which leads to a singularity in the numerical simulations if only the cosmic rays are taken into account. We then perform simulations for the case of a nonuniform external medium, which show that the dominating wavelength of the Parker instability is the same as the wavelength of the modulations of the external medium. As a consequence of this fact, in the case of dominating cosmic ray pressure, the dynamo $\alpha$-effect related to these short-wavelength modulations is much more efficient than that related to the linearly most unstable long-wavelength modes of the Parker instability. Under the influence of differential forces resulting from differential rotation and the density waves, the $\alpha$-effect is essentially magnified in the spiral arms and diminished in the interarm regions, which confirms our previous results obtained in the linear approximation.
CommonCrawl
The aim of this paper is to characterize global dynamics of locally linearizable complex two dimensional cubic Hamiltonian systems. By finding invariants, we prove that their associated real phase space $\mathbb R^4$ is foliated by two dimensional invariant surfaces, which could be either simple connected, or double connected, or triple connected, or quadruple connected. On each of the invariant surfaces all regular orbits are heteroclinic ones, which connect two singularities, either both finite, or one finite and another at infinity, or both at infinity, and all these situations are realizable. Keywords: Complex cubic Hamiltonian system, linearization, heteroclinic orbits, global dynamics, invariants. Mathematics Subject Classification: Primary: 37J35, 37C10; Secondary: 37C27, 34C14.
CommonCrawl
You are in a book shop which sells $n$ different books. You know the price, the number of pages and the number of copies of each book. You have decided that the total price of your purchases will be at most $x$. What is the maximum number of pages you can buy? You can buy several copies of the same book. The first input line contains two integers $n$ and $x$: the number of books and the maximum total price. The second line contains $n$ integers $h_1,h_2,\ldots,h_n$: the price of each book. The third line contains $n$ integers $s_1,s_2,\ldots,s_n$: the number of pages of each book. The last line contains $n$ integers $k_1,k_2,\ldots,k_n$: the number of copies of each book. Explanation: You can buy three copies of book 1 and one copy of book 3. The price is $3 \cdot 2 + 3 = 9$ and the number of pages is $3 \cdot 8 + 4 = 28$.
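One way to solve this (my own sketch, since no reference solution is given above, and assuming the input format described) is a bounded knapsack DP; splitting each book's $k_i$ copies into binary bundles keeps the item count small.

```python
import sys

def solve():
    data = sys.stdin.read().split()
    n, x = int(data[0]), int(data[1])
    h = list(map(int, data[2:2 + n]))               # prices
    s = list(map(int, data[2 + n:2 + 2 * n]))       # pages
    k = list(map(int, data[2 + 2 * n:2 + 3 * n]))   # copies

    # Binary splitting: k_i copies become O(log k_i) bundles, so the problem
    # reduces to an ordinary 0/1 knapsack over the bundles.
    items = []
    for price, pages, count in zip(h, s, k):
        bundle = 1
        while count > 0:
            take = min(bundle, count)
            items.append((price * take, pages * take))
            count -= take
            bundle *= 2

    best = [0] * (x + 1)   # best[c] = max pages obtainable with total price <= c
    for cost, pages in items:
        for c in range(x, cost - 1, -1):
            best[c] = max(best[c], best[c - cost] + pages)
    print(best[x])

solve()
```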
CommonCrawl
Is this correct? Thanks in advance! Yes, that is a correct definition of the limit of $f$ at $x_0$.
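The asker's definition itself did not survive extraction; for context, the standard neighborhood formulation that the answer presumably refers to reads (my reconstruction, not the asker's exact wording):
$$\lim_{x \to x_0} f(x) = L \iff \text{for every neighborhood } V \text{ of } L \text{ there is a neighborhood } U \text{ of } x_0 \text{ with } f\big(U \setminus \{x_0\}\big) \subseteq V,$$
usually with the proviso that $x_0$ be an accumulation point of the domain of $f$.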
CommonCrawl
This observation, called the n + 1 rule, only applies when all of the neighboring protons are chemically equivalent to each other. The first statement implies that for the n + 1 rule to be valid, the protons must be chemically equivalent - suggesting that equivalent protons can couple - while the second statement directly contradicts this. Could my misunderstanding be clarified? The "n+1" rule refers to a situation where you have a proton of type A with $n$ protons of type B next to it. Proton A's signal will be split into $n+1$ peaks by the B's. However, none of the B's will split each other because they are equivalent. The chemically equivalent proton energy levels are split by the static and local magnetic fields, but NMR transitions out of some of these levels are forbidden by selection rules, and the signals that remain are only the allowed transitions, which gives the appearance as if splitting has not happened. The case for AX spectra is shown in the figure, where the spin-spin coupling moves degenerate levels apart. The signals that are observed all have the same energy, so they appear as if nothing has happened to them. (There are 2 spins labelled as $\alpha$ or $\beta$, $J$ is the spin-spin coupling constant and $h$ the Planck constant. The energy $A$ is given by $\gamma (1-\sigma)B_0/(4\pi)$ where $\gamma$ is the magnetogyric ratio, $\sigma$ the shielding constant and $B_0$ the static magnetic field).
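As a small aside (my own illustration, not part of the original exchange), the n + 1 line counts and the familiar 1:2:1, 1:3:3:1 intensity patterns of first-order multiplets just come from binomial coefficients over the spin states of the n equivalent spin-1/2 neighbors.

```python
from math import comb

# For n equivalent spin-1/2 neighbors, a proton's signal shows n + 1 lines
# with relative intensities given by the binomial coefficients C(n, k).
for n in range(4):
    intensities = [comb(n, k) for k in range(n + 1)]
    print(f"{n} neighbors -> {n + 1} line(s), intensities {intensities}")
```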
CommonCrawl
Abstract: In a recent contribution [arXiv:0904:4151] entanglement renormalization was generalized to fermionic lattice systems in two spatial dimensions. Entanglement renormalization is a real-space coarse-graining transformation for lattice systems that produces a variational ansatz, the multi-scale entanglement renormalization ansatz (MERA), for the ground states of local Hamiltonians. In this paper we describe in detail the fermionic version of the MERA formalism and algorithm. Starting from the bosonic MERA, which can be regarded both as a quantum circuit or in relation to a coarse-graining transformation, we indicate how the scheme needs to be modified to simulate fermions. To confirm the validity of the approach, we present benchmark results for free and interacting fermions on a square lattice with sizes between $6 \times 6$ and $162\times 162$ and with periodic boundary conditions. The present formulation of the approach applies to generic tensor network algorithms.
CommonCrawl
I was wondering how to do the following in Sage: let's say I have a number field $F$, an embedding $i$ of that field in the complex numbers, and an algebraic integer $\alpha\in F$, for which I know $i(\alpha)$ with good numerical precision. How can I find the exact value? I finally found the time to think about the matter and look for a solution. Let us say I'm interested in an element $z$ in the integer ring of a number field $\mathbb K$; then if that field is Galois and I have an approximation of all complex embeddings of $z$, I can get hold of the precise element by multiplying my approximations, as a column vector, on the left by the matrix of the Minkowski embedding of the field. Just take the nearest integer in each component and this gives an expression of the exact element which corresponds to the approximations. (Added later) I must add that this example shows that a single embedding/approximation won't be enough: as you see, the approximation $0.1$ could be an approximation of zero or of $239-169\sqrt2$, and there's no way to tell which. Instead of fixing one embedding upon creating your number field, you can also compute all the embeddings into a given field L with K.embeddings(L), and use one of them explicitly. The field QQbar represents the algebraic closure of the rationals and allows you to do exact arithmetic with arbitrary algebraic numbers. Depending on what you want to do, this may also be suitable for you. What you say is nice, but it explains how to obtain numerical approximations of known elements of a number field given an embedding. The question is about the converse: given a numerical approximation, how to find out the element?
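Here is a rough Sage sketch of that recipe (my own reconstruction, not code from the original thread): build the matrix of embedded integral-basis elements, solve against it — i.e. multiply the approximation vector by its inverse on the left — and round the resulting coordinates.

```python
# Sage code. Recover an exact algebraic integer from floating-point
# approximations of all of its complex embeddings.
K.<a> = NumberField(x^2 - 2)              # example field; substitute your own
B = K.integral_basis()                    # integral basis, e.g. [1, a]
sigmas = K.embeddings(CC)                 # all complex embeddings of K

# Simulate noisy "measurements" of sigma_i(z) for a hidden element z.
z_true = 239 - 169*a
approx = vector(CC, [s(z_true) + 1e-9 for s in sigmas])

# Column j of M is the integral-basis element B[j] under every embedding;
# solving M * coords = approx is multiplication by M^(-1) on the left.
M = matrix(CC, [[s(b) for b in B] for s in sigmas])
coords = M.solve_right(approx)

z = sum(c.real().round() * b for c, b in zip(coords, B))
print(z == z_true)                        # True
```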
CommonCrawl
Recently, high-precision vibration attenuation technology has become essential to the successful development of highly integrated and ultra-precision industries, and it is expected to continue playing a key role in the enhancement of manufacturing technology. Vibration isolation systems using an air-spring are widely employed owing to their excellent isolation characteristics in a wide frequency range. They have, however, some drawbacks such as low-stiffness and low-damping features and can be easily excited by exogenous disturbances, so that vibration of the table persists for a long time. Consequently, the need for active vibration control for an air-spring vibration isolation system becomes inevitable. Furthermore, for an air-spring isolation table to be successfully employed in a variety of manufacturing sites, it should have a guaranteed robust performance not only against exogenous disturbances but also against uncertainties due to the various equipment which might be put on the table. In this study, an active vibration suppression control system using $H_\infty$ theory is designed and experiments are performed to verify its robust performance. An air-spring vibration isolation table with voice-coil motors as its actuators is designed and built. The table is modeled as a 3 degree-of-freedom system. An active control system is designed based on $H_\infty$ control theory using frequency-shaped weighting functions. Analysis of its performance and frequency response properties is carried out through numerical simulations. The robust characteristics of $H_\infty$ control against disturbances and model uncertainties are experimentally verified through (i) the transient response to an impact excitation of the table, (ii) the steady-state response to a harmonic excitation, and (iii) the response to a mass change of the table itself. An LQG controller is also designed and its performance is compared with that of the $H_\infty$ controller.
CommonCrawl
Lopez, M. Carbo, Chavant P-Y., Molton F., Royal G., & Blandin V. (2017). Chiral Nitroxide/Copper-Catalyzed Aerobic Oxidation of Alcohols: Atroposelective Oxidative Desymmetrization.. ChemistrySelect. 2, 443–450. Lopez, M. Carbo, Royal G., Philouze C., Chavant P-Y., & Blandin V. (2014). Imidazolidinone Nitroxides as Catalysts in the Aerobic Oxidation of Alcohols, en Route to Atroposelective Oxidative Desymmetrization.. Eur. J. Org. Chem.. 2014, 4884–4896. Ranieri, K., Conradi M., Chavant P-Y., Blandin V., Barner-Kowollik C., & Junkers T.. (2012). Enhanced Spin-capturing Polymerization and Radical Coupling Mediated by Cyclic Nitrones.. Aust. J. Chem.. 65, 1110–1116. Demory, E., Farran D., Baptiste B., Chavant P-Y., & Blandin V. (2012). Fast Pd- and Pd/Cu-Catalyzed Direct C-H Arylation of Cyclic Nitrones. Application to the Synthesis of Enantiopure Quaternary α-Amino Esters.. J. Org. Chem.. 77, 7901–7912. Thiverny, M., Farran D., Philouze C., Blandin V., & Chavant P-Y. (2011). Totally diastereoselective addition of aryl Grignard reagents to the nitrone-based chiral glycine equivalent MiPNO.. Tetrahedron: Asymmetry. 22, 1274–1281. Demory, E., Blandin V., Einhorn J., & Chavant P-Y. (2011). Noncryogenic Preparation of Functionalized Arylboronic Esters through a Magnesium-Iodine Exchange with in Situ Quench.. Org. Process Res. Dev.. 15, 710–716. Thiverny, M., Demory E., Baptiste B., Philouze C., Chavant P-Y., & Blandin V. (2011). Inexpensive, multigram-scale preparation of an enantiopure cyclic nitrone via resolution at the hydroxylamine stage.. Tetrahedron: Asymmetry. 22, 1266–1273. PraveenGanesh, N., Demory E., Gamon C., Blandin V., & Chavant P-Y. (2010). Efficient borylation of reactive aryl halides with MPBH (4,4,6-trimethyl-1,3,2-dioxaborinane).. Synlett. 2403–2406. Thiverny, M., Philouze C., Chavant P-Y., & Blandin V. (2010). MiPNO, a new chiral cyclic nitrone for enantioselective amino acid synthesis: the cycloaddition approach.. Org. Biomol. Chem.. 8, 864–872. PraveenGanesh, N., & Chavant P-Y. (2008). Improved preparation of 4,6,6-trimethyl-1,3,2-dioxaborinane and its use in a simple [PdCl2(TPP)2]-catalyzed borylation of aryl bromides and iodides.. European J. Org. Chem.. 4690–4696. Burchak, O. N., Philouze C., Chavant P-Y., & Py S. (2008). A direct and versatile access to $\alpha$,$\alpha$-disubstituted 2-pyrrolidinylmethanols by SmI2-mediated reductive coupling.. Org. Lett.. 10, 3021–3023. PraveenGanesh, N., D'Hondt S., & Chavant P-Y. (2007). Methylpentanediolborane: Easy Access to New Air- and Chromatography-Stable, Highly Functionalized Vinylboronates.. J. Org. Chem.. 72, 4510–4514. Application of cooperative iron/copper catalysis to a palladium-free borylation of aryl bromides with pinacolborane. Chiral Nitroxide/Copper-Catalyzed Aerobic Oxidation of Alcohols: Atroposelective Oxidative Desymmetrization. Fast Pd- and Pd/Cu-Catalyzed Direct C-H Arylation of Cyclic Nitrones. Application to the Synthesis of Enantiopure Quaternary α-Amino Esters.
CommonCrawl
This week Pronto CycleShare, Seattle's Bicycle Share system, turned one year old. To celebrate this, Pronto made available a large cache of data from the first year of operation and announced the Pronto Cycle Share's Data Challenge, which offers prizes for different categories of analysis. There are a lot of tools out there that you could use to analyze data like this, but my tool of choice is (obviously) Python. In this post, I want to show how you can get started analyzing this data and joining it with other available data sources using the PyData stack, namely NumPy, Pandas, Matplotlib, and Seaborn. Here I'll take a look at some of the basic questions you can answer with this data. Later I hope to find the time to dig deeper and ask some more interesting and creative questions – stay tuned! For those who aren't familiar, this post is composed in the form of a Jupyter Notebook, which is an open document format that combines text, code, data, and graphics and is viewable through the web browser – if you have not used it before I encourage you to try it out! You can download the notebook containing this post here, open it with Jupyter, and start asking your own questions of the data. We'll start by downloading the data (available on Pronto's Website) which you can do by uncommenting the following shell commands (the exclamation mark here is a special IPython syntax to run a shell command). The total download is about 70MB, and the unzipped files are around 900MB. Each row of this trip dataset is a single ride by a single person, and the data contains over 140,000 rows! The big spike in short-term pass rides in April is likely due to the American Planning Association national conference, held in downtown Seattle that week. The only other time that gets close is the 4th of July weekend. Day pass users seem to show a steady ebb and flow with the seasons; the usage of annual users has not waned as significantly with the coming of fall. Both annual members and day-pass users seem to show a distinct weekly trend. We see a complementary pattern overall: annual users tend to use their bikes during Monday to Friday (i.e. as part of a commute) while day pass users tend to use their bikes on the weekend. This pattern didn't fully develop until the start of 2015, however, especially for annual members: it seems that for the first couple months, users had not yet adapted their commute habits to make use of Pronto! We see a clear difference between a "commute" pattern, which sharply peaks in the morning and evening (e.g. annual members during weekdays) and a "recreational" pattern, which has a broad peak in the early afternoon (e.g. annual members on weekends, and short-term users all the time). Interestingly, the average behavior of annual pass holders on weekends seems to be almost identical to the average behavior of day-pass users on weekdays! For those who have read my previous posts, you might recognize this as very similar to the patterns I found in the Fremont Bridge bicycle counts. Next let's take a look at the durations of trips. Pronto rides are designed to be up to 30 minutes; any single use that is longer than this incurs a usage fee of a couple dollars for the first half hour, and about ten dollars per hour thereafter. Here I have added a red dashed line separating the free rides (left) from the rides which incur a usage fee (right). It seems that annual users are much more savvy to the system rules: only a small tail of the trip distribution lies beyond 30 minutes. 
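A sketch of the setup described above might look like the following; the download URL, file names and column names are my guesses at the Pronto release rather than something quoted from the post, so adjust them to whatever the actual files use.

```python
from urllib.request import urlretrieve
from zipfile import ZipFile

import pandas as pd
import matplotlib.pyplot as plt

# Download and unpack the first-year data dump (URL assumed).
urlretrieve("https://s3.amazonaws.com/pronto-data/open_data_year_one.zip",
            "open_data_year_one.zip")
ZipFile("open_data_year_one.zip").extractall(".")

trips = pd.read_csv("2015_trip_data.csv",
                    parse_dates=["starttime", "stoptime"])

# Weekly ride counts, split by annual members vs. short-term pass holders.
weekly = (trips.set_index("starttime")
               .groupby([pd.Grouper(freq="W"), "usertype"])
               .size()
               .unstack("usertype", fill_value=0))
ax = weekly.plot()
ax.set_ylabel("rides per week")
plt.show()
```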
Around one in four Day Pass rides, on the other hand, is longer than the half hour limit and incurs additional fees. My hunch is that these day pass users aren't fully aware of this pricing structure ("I paid for the day, right?") and likely walk away unhappy with the experience. Now we need to find bicycling distances between pairs of lat/lon coordinates. Fortunately, Google Maps has a distances API that we can use for free. Reading the fine print, free use is limited to 2500 distances per day, and 100 distances per 10 seconds. With 55 stations we have $55 \times 54 / 2 = 1485$ unique nonzero distances, so we can just query all of them within a few minutes on a single day for free (if we do it carefully). To do this, we'll query one (partial) row at a time, waiting 10+ seconds between queries (Note: we might also use the googlemaps Python package instead, but it requires obtaining an API key). Keep in mind that this shows the shortest possible distance between stations, and thus is a lower bound on the actual distance ridden on each trip. Many trips (especially for day pass users) begin and end within a few blocks. Beyond this, trips peak at around 1 mile, though some extreme users are pushing their trips out to four or more miles. Interestingly, the distributions are quite different, with annual riders showing on average a higher inferred speed. You might be tempted to conclude here that annual members ride faster than day-pass users, but the data alone aren't sufficient to support this conclusion. This data could also be explained if annual users tend to go from point A to point B by the most direct route, while day pass users tend to meander around and get to their destination indirectly. I suspect that the reality is some mix of these two effects. Overall, we see that longer rides tend to be faster – though this is subject to the same lower-bound caveats as above. As above, for reference I have plotted the line separating free trips (above the red line) from trips requiring an additional fee (below the red line). Again we see that the annual members are much more savvy about not going over the half hour limit than are day pass users – the sharp cutoff in the distribution of points points to users keeping close track of their time to avoid an extra charge! Elevation data is not included in the data release, but again we can turn to the Google Maps API to get what we need; see this site for a description of the elevation API. We have plotted some shading in the background to help guide the eye. Again, there is a big difference between Annual Members and Short-term users: annual users definitely show a preference for downhill trips (left of the distribution), while daily users don't show this as strongly, but rather show a preference for rides which start and end at the same elevation (i.e. the same station). We see that the first year had 30,000 more downhill trips than uphill trips – that's about 60% more. Given current usage levels, that means that Pronto staff must be shuttling an average of about 100 bikes per day from low-lying stations to higher-up stations. The other common "Seattle is special" argument against the feasibility of cycle share is the weather. Let's take a look at how the number of rides changes with temperature and precipitation. For both temperature and precipitation, we see trends in the direction we might expect (people ride more on warm, sunny days).
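As a rough illustration of the rate-limited distance querying described above (my own sketch — the station list is made up, and while this was usable without a key when the post was written, today you would likely need to add a key parameter):

```python
import time
import requests

# Hypothetical station list: (station_id, "lat,lon") pairs.
stations = [("BT-01", "47.6184,-122.3510"),
            ("BT-03", "47.6154,-122.3544"),
            ("CBD-13", "47.6107,-122.3358")]

URL = "https://maps.googleapis.com/maps/api/distancematrix/json"
distances = {}

for i, (origin_id, origin) in enumerate(stations):
    dests = stations[i + 1:]
    if not dests:
        break
    params = {"origins": origin,
              "destinations": "|".join(latlon for _, latlon in dests),
              "mode": "bicycling",
              "units": "imperial"}
    resp = requests.get(URL, params=params).json()
    for (dest_id, _), elem in zip(dests, resp["rows"][0]["elements"]):
        distances[(origin_id, dest_id)] = elem["distance"]["value"]  # meters
    time.sleep(10)   # stay well under the 100-distances-per-10-seconds limit

print(distances)
```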
But there are some interesting details: during the work week, Annual and Short-term users are equally affected by the weather. On the weekend, however, annual members appear less influenced by weather, while short-term users appear more influenced. I don't have any good theories for why this would be the trend – please let me know if you have any ideas! Annual Members and Day Pass users show markedly different behavior in aggregate: annual members seem to use Pronto mostly for commuting from point A to point B on Monday-Friday, while short-term users use Pronto primarily on weekends to explore particular areas of town. While annual members seem savvy to the pricing structure, one out of four short-term-pass rides exceeds the half hour limit and incurs an additional usage fee. For the sake of the customer, Pronto should probably make an effort to better inform short-term users of this pricing structure. Elevation and weather affect use just as you would expect: there are 60% more downhill trips than uphill trips, and cold & rain significantly decrease the number of rides on a given day. There is much more we could do with this data, and I'm hoping to dig more into it over the next couple weeks to seek some more detailed insights, but I think this is a good start! If you're interested in entering the Pronto Data Challenge, feel free to use these scripts as a jumping-off point to answer your own questions about the data.
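If you do use this as a jumping-off point, a minimal Pandas sketch for loading the trip data and reproducing a couple of the aggregates above could look like this (the file name and column names such as usertype and tripduration are assumptions based on Pronto's 2015 data release and may need adjusting):

```python
import pandas as pd

# Load the trip data from Pronto's data release (file/column names assumed).
trips = pd.read_csv('2015_trip_data.csv', parse_dates=['starttime', 'stoptime'])
trips['minutes'] = trips['tripduration'] / 60          # durations are reported in seconds

# Share of rides exceeding the 30-minute free window, by user type.
over_limit = (trips.assign(over=trips['minutes'] > 30)
                   .groupby('usertype')['over'].mean())
print(over_limit)

# Weekly ride counts by user type, mirroring the seasonal-trend plot.
weekly = (trips.set_index('starttime')
               .groupby('usertype')
               .resample('W')
               .size()
               .unstack(level=0))
print(weekly.head())
```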
CommonCrawl
Volume 22, Number 1 (1994), 395-405. This paper extends the study of Wishart and multivariate beta distributions to the singular case, where the rank is below the dimensionality. The usual conjugacy is extended to this case. A volume element on the space of positive semidefinite $m \times m$ matrices of rank $n < m$ is introduced and some transformation properties established. The density function is found for all rank-$n$ Wishart distributions as well as the rank-1 multivariate beta distribution. To do that, the Jacobian for the transformation to the singular value decomposition of general $m \times n$ matrices is calculated. The results in this paper are useful in particular for updating a Bayesian posterior when tracking a time-varying variance-covariance matrix. Ann. Statist., Volume 22, Number 1 (1994), 395-405.
CommonCrawl
What do logicians mean by "type"? I enjoy reading about formal logic as an occasional hobby. However, one thing keeps tripping me up: I seem unable to understand what's being referred to when the word "type" (as in type theory) is mentioned. Now, I understand what types are in programming, and sometimes I get the impression that types in logic are just the same thing as that: we want to set our formal systems up so that you can't add an integer to a proposition (for example), and types are the formal mechanism to specify this. Indeed, the Wikipedia page for type theory pretty much says this explicitly in the lead section. The problem is that I have trouble reconciling these things. Types in programming seem to me quite simple, practical things (although the type system for any given programming language can be quite complicated and interesting). But in type theory it seems that somehow the types are the language, or that they are responsible for its expressive power in a much deeper way than is the case in programming. So I suppose my question is, for someone who understands types in (say) Haskell or C++, and who also understands first-order logic and axiomatic set theory and so on, how can I get from these concepts to the concept of type theory in formal logic? What precisely is a type in type theory, and what is the relationship between types in formal mathematics and types in computer science? tl;dr Types only have meaning within type systems. There is no stand-alone definition of "type" except vague statements like "types classify terms". The notion of type in programming languages and type theory is basically the same, but different type systems correspond to different type theories. Often the term "type theory" is used specifically for a particular family of powerful type theories descended from Martin-Löf Type Theory. Agda and Idris are simultaneously proof assistants for such type theories and programming languages, so in this case there is no distinction whatsoever between the programming language and type-theoretic notions of type. It's not the "types" themselves that are "powerful". First, you could recast first-order logic using types. Indeed, the sorts in multi-sorted first-order logic are basically the same thing as types. When people talk about type theory, they often mean specifically Martin-Löf Type Theory (MLTT) or some descendant of it like the Calculus of (Inductive) Constructions. These are powerful higher-order logics that can be viewed as constructive set theories. But it is the specific system(s) that are powerful. The simply typed lambda calculus viewed from a propositions-as-types perspective is basically the proof theory of intuitionistic propositional logic which is a rather weak logical system. On the other hand, considering the equational theory of the simply typed lambda calculus (with some additional axioms) gives you something that is very close to the most direct understanding of higher-order logic as an extension of first-order logic. This view is the basis of the HOL family of theorem provers. Set theory is an extremely powerful logical system. ZFC set theory is a first-order theory, i.e. a theory axiomatized in first-order logic. And what does set theory accomplish? Why, it's essentially an embedding of higher-order logic into first-order logic.
In first-order logic, we can't say something like $$\forall P.P(0)\land(\forall n.P(n)\Rightarrow P(n+1))\Rightarrow\forall n.P(n)$$ but, in the first-order theory of set theory, we can say $$\forall P.0\in P\land (\forall n.n\in P\Rightarrow n+1\in P)\Rightarrow\forall n.n\in P$$ Sets behave like "first-class" predicates. While ZFC set theory and MLTT go beyond just being higher-order logic, higher-order logic on its own is already a powerful and ergonomic system as demonstrated by the HOL theorem provers for example. At any rate, as far as I can tell, having some story for doing higher-order-logic-like things is necessary to provoke any interest in something as a framework for mathematics from mathematicians. Or you can turn it around a bit and say you need some story for set-like things and "first-class" predicates do a passable job. This latter perspective is more likely to appeal to mathematicians, but to me the higher-order logic perspective better captures the common denominator. At this point it should be clear there is no magical essence in "types" themselves, but instead some families of type theories (i.e. type systems from a programming perspective) are very powerful. Most "powerful" type systems for programming languages are closely related to the polymorphic lambda calculus aka System F. From the proposition-as-types perspective, these correspond to intuitionistic second-order propositional logics, not to be confused with second-order (predicate) logics. It allows quantification over propositions (i.e. nullary predicates) but not over terms which don't even exist in this logic. Classical second-order propositional logic is easily reduced to classical propositional logic (sometimes called zero-order logic). This is because $\forall P.\varphi$ is reducible to $\varphi[\top/P]\land\varphi[\bot/P]$ classically. System F is surprisingly expressive, but viewed as a logic it is quite limited and far weaker than MLTT. The type systems of Agda, Idris, and Coq are descendants of MLTT. Idris in particular and Agda to a lesser extent are dependently typed programming languages.1 Generally, the notion of type in a (static) type system and in type theory is essentially the same, but the significance of a type depends on the type system/type theory it is defined within. There is no real definition of "type" on its own. If you decide to look at e.g. Agda, you should be quickly disabused of the idea that "types are the language". All of these type theories have terms and the terms are not "made out of types". They typically look just like functional programs. 1 I don't want to give the impression that "dependently typed" = "super powerful" or "MLTT derived". The LF family of languages e.g. Elf and Twelf are intentionally weak dependently typed specification languages that are far weaker than MLTT. From a propositions-as-types perspective, they correspond more to first-order logic. I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a particular type theory. If I can understand the motivation better it should make it easier to follow the definitions. The basic idea: In ZFC set theory, there is just one kind of object - sets. In type theories, there are multiple kinds of objects. Each object has a particular kind, known as its "type". Type theories typically include ways to form new types from old types.
For example, if we have types $A$ and $B$ we also have a new type $A \times B$ whose members are pairs $(a,b)$ where $a$ is of type $A$ and $b$ is of type $B$. For the simplest type theories, such as higher-order logic, that is essentially the only change from ordinary first-order logic. In this setting, all of the information about "types" is handled in the metatheory. But these systems are barely "type theories", because the theory itself doesn't really know anything about the types. We are really just looking at first-order logic with multiple sorts. By analogy to computer science, these systems are vaguely like statically typed languages - it is not possible to even write a well-formed formula/program which is not type safe, but the program itself has no knowledge about types while it is running. More typical type theories include ways to reason about types within the theory. In many type theories, such as ML type theory, the types themselves are objects of the theory. So it is possible to prove "$A$ is a type" as a sentence of the theory. It is also possible to prove sentences such as "$t$ has type $A$". These cannot even be expressed in systems like higher-order logic. In this way, these theories are not just "first order logic with multiple sorts", they are genuinely "a theory about types". Again by analogy, these systems are vaguely analogous to programming languages in which a program can make inferences about types of objects during runtime (analogy: we can reason about types within the theory). Another key aspect of type theories is that they often have their own internal logic. The Curry-Howard correspondence shows that, in particular settings, formulas of propositional or first-order logic correspond to types in particular type theories. Manipulating the types in a model of type theory corresponds, via the isomorphism, to manipulating formulas of first order logic. The isomorphism holds for many logic/type theory pairs, but it is strongest when the logic is intuitionistic, which is one reason intuitionistic/constructive logic comes up in the context of type theory. In particular, logical operations on formulas become type-forming operations. The "and" operator of logic becomes the product type operation, for example, while the "or" operator becomes a kind of "union" type. In this way, each model of type theory has its own "internal logic" - which is often a model of intuitionistic logic. The existence of this internal logic is one of the motivations for type theory. When we look at first-order logic, we treat the "logic" as sitting in the metatheory, and we look at the truth value of each formula within a model using classical connectives. In type theory, that is often a much less important goal. Instead we look at the collection of types in a model, and we are more interested in the way that the type forming operations work in the model than in the way the classical connectives work. I am not looking for a formal definition of a type so much as the core idea behind it. I can find several formal definitions, but I've found they don't really help me to understand the underlying concept, partly because they are necessarily tied to the specifics of a particular type theory. If I can understand the motivation better it should make it easier to follow the definitions. Let me answer this from a more philosophical point of view, which perhaps would help to answer your implicit question of why logicians call type theories type theories and set theories set theories. 
A set theory is a system intended to describe sets, in the specific sense that a set is a collection of objects. This means that the intended universe (also called world/domain) is split by each set $S$ into two parts, the members of $S$ and the non-members of $S$. There are a variety of set theories such as ZF[C] and NF[U] and extensions of them. A type theory is a system intended to describe types, where a type is a kind of categorization or classification of objects. Whether an object $x$ is of type $S$ or not is typically not a boolean question, in the sense that one cannot form a sentence that is true if and only if $x$ is of type $S$. Instead, type theories have inference rules for typing judgements. Many type theories use syntax like "$x : S$" for the judgement that $x$ is of type $S$. It is not a boolean expression; "$\neg ( x : S )$" is simply syntactically malformed. A typing rule might look like "$x : S \ , \ f : S \to T \vdash f(x) : T$", and in some sense this is one of the fundamental rules that we can expect every type theory to have in some form or other. In any case, the notion of a type is only made precise via a type theory, just as the notion of a set is only made precise via a set theory. Note that ZFC and NFU are incompatible, and so they describe incompatible notions of sets. Likewise, there are numerous incompatible type theories that describe different notions of types. In the worst case, a notion of sets or types may be inconsistent, which we of course have to discard. Ideally, one would like a philosophically cogent or justifiable notion. There is nothing inherent in types that can prevent paradoxes. As has been pointed out, some attempts at foundational type theories have turned out to be inconsistent in a similar manner to naive set theory. But in type theory it seems that somehow the types are the language, or that they are responsible for its expressive power in a much deeper way than is the case in programming. In contrast, in classical mathematics we cannot have decidable typing in general, because we want to be able to perform classical arithmetic, which allows us to construct and reason about arbitrary programs (equivalently Turing machines), and Gödel's incompleteness theorem forces our hand. Therefore in every type theory that supports constructing partial functions representing programs, it cannot be computably determined for general $x,S$ whether the judgement "$x : S$" is provable or not. The trade-off is always there. You can never have both decidable type-checking and the ability to handle arbitrary (classical) arithmetic reasoning. In type-theoretic terms, you cannot have both normalization of every term and the ability to construct arbitrary programs before proving their properties. However, the sorts in higher-order logic can be viewed as types in simple type theory, and in turn the function types in simple type theory could be understood via currying as types of programs. Can I recommend a (text)book to you? Types and Programming Languages, by Benjamin C. Pierce. I come from a math background and work as a programmer. I picked up this book because I would look at papers by computer scientists (think Philip Wadler when he's not bothering to be beginner-friendly) and feel uneasy because I didn't know the basis for what they would say, and without that basis it all seemed a little loosey-goosey. The text is written at the level of a good first-year grad school course in type theory, and I found it to be clear and well written, an enjoyable read.
And even though I hardly worked my way through any of the exercises, I feel a lot more comfortable now when I read things that use or reference types. To be clear, the book won't answer the core questions you asked, but I think it'll put you in a much better position for understanding the questions and answers, plus you'll have a bunch of concrete examples to anchor everything in. The types of type theory are the same as the types of theoretical computer science. In type theory an object inhabits a type, but objects do not share types. The important thing about types is that there is a correspondence between types and intuitionistic logic. Sum/union types, which in Haskell you define by using the |, correspond to the logical or. You can translate any mathematical statement into a type using product, sum, function and a few other extra types. If you can then write a computer program that implements the type, that computer program is a proof of the mathematical statement.
CommonCrawl
Today, I'll be reviewing "FaceNet: A Unified Embedding for Face Recognition and Clustering"1, a paper by Schroff et al. from Google, published in CVPR 2015. Once this embedding has been produced, then … the tasks become straight-forward: face verification simply involves thresholding the distance between the two embeddings; recognition becomes a k-NN classification problem; and clustering can be achieved using off-the-shelf techniques such as k-means or agglomerative clustering. Deep neural networks learn representations which, especially at the top levels, correspond well with higher-level representations that form useful features by themselves, even when they are not explicitly trained to generate an embedding. This is precisely how previous approaches to facial recognition worked: they were trained to classify the faces of $K$ different people, and generalized beyond these $K$ people by picking an intermediary layer as an embedding. However, the authors thought this was indirect; furthermore, such an intermediary layer typically has thousands of dimensions, which is inefficient. They proposed to optimize the embedding space itself, picking a small representation dimension (128). This is done by defining the triplet loss. The intuition behind the triplet loss is simple: we want to minimize the distance between two faces belonging to the same person, and maximize the distance between two which belong to different identities. So we then just directly maximize the absolute difference between the two. for some hyperparameter $\alpha$ representing the margin between the two. which they call the hard positive and hard negative, respectively. But this is infeasible to do for the whole training set, and in fact "it might lead to poor training, since mislabelled and poorly imaged faces would dominate the hard positives and negatives". They also used all of the positive pairs in a minibatch, instead of just the hard positives, while only selecting the hard negatives. This was empirically shown to lead to more stable training and slightly faster convergence. Sometimes they also selected semi-hard negatives, i.e. negatives within the margin $\alpha$ which separated them, especially at the beginning of training, in order to avoid a collapsed model (i.e. $f(x) = 0$ for all $x$). For each minibatch, they selected 40 "exemplars" (faces) per identity, for a total of 1800 exemplars. They use two types of architectures: one based off of Zeiler&Fergus2 (Z&F) and the other based off of GoogLeNet/Inception3. Z&F results in a larger model, with 22 layers, 140 million parameters, requiring about 1.6 billion FLOPS per image; the Inception models are smaller, with up to 26M parameters and 220M FLOPS per image. They evaluate their model with the Labelled Faces in the Wild (LFW) dataset and the YouTube Faces Dataset (YTD), both standard for facial recognition. They evaluate on a hold-out test set (1M images). They also evaluate on a dataset manually verified to have very clean labels, of three personal photo collections (12K images total). On LFW, they achieve a record-breaking $99.63\%$ ($\pm$ standard error) using an extra face alignment step; similarly, a record-breaking $95.12\% \pm 0.39$ on YTD over the previous 93.2% state of the art. There was a direct correlation between number of FLOPS and accuracy, but the smaller Inception architectures did almost as well as the larger Z&F ones while reducing the number of parameters and FLOPS significantly.
Embedding dimensionality was cross-validated and picked to be 128; the performance differences between the dimensionalities they compared were statistically insignificant. They also demonstrate robustness (invariance) to image quality, including different image sizes, occlusion, lighting, pose, and age.
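To make the triplet loss and the semi-hard negative selection concrete, here is a small NumPy sketch (the shapes, the margin value, and the selection details are illustrative assumptions rather than the paper's exact implementation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Hinge-style triplet loss: pull same-identity pairs together and push
    different-identity pairs at least `alpha` further apart."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=1)
    neg_dist = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(pos_dist - neg_dist + alpha, 0.0).mean()

def semi_hard_negatives(anchor, positive, candidates, alpha=0.2):
    """For each anchor, pick a negative that is farther than the positive
    but still inside the margin (the 'semi-hard' case)."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=1, keepdims=True)
    neg_dist = ((anchor[:, None, :] - candidates[None, :, :]) ** 2).sum(axis=2)
    mask = (neg_dist > pos_dist) & (neg_dist < pos_dist + alpha)
    masked = np.where(mask, neg_dist, np.inf)
    # Fall back to the hardest negative when no semi-hard candidate exists.
    idx = np.where(np.isfinite(masked).any(axis=1),
                   masked.argmin(axis=1), neg_dist.argmin(axis=1))
    return candidates[idx]

# Example with random 128-d embeddings (the paper's chosen dimension).
rng = np.random.default_rng(0)
a, p = rng.normal(size=(32, 128)), rng.normal(size=(32, 128))
negs = rng.normal(size=(100, 128))
n = semi_hard_negatives(a, p, negs)
print(triplet_loss(a, p, n))
```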
CommonCrawl
Abstract : The ESA Rosetta spacecraft, currently orbiting around comet 67P, has already provided in situ measurements of the dust grain properties from several instruments, particularly OSIRIS and GIADA. We propose adding value to those measurements by combining them with ground-based observations of the dust tail to monitor the overall, time-dependent dust-production rate and size distribution. To constrain the dust grain properties, we take Rosetta OSIRIS and GIADA results into account, and combine OSIRIS data during the approach phase (from late April to early June 2014) with a large data set of ground-based images that were acquired with the ESO Very Large Telescope (VLT) from February to November 2014. A Monte Carlo dust tail code has been applied to retrieve the dust parameters. Key properties of the grains (density, velocity, and size distribution) were obtained from Rosetta observations: these parameters were used as input of the code to considerably reduce the number of free parameters. In this way, the overall dust mass-loss rate and its dependence on the heliocentric distance could be obtained accurately. The dust parameters derived from the inner coma measurements by OSIRIS and GIADA and from distant imaging using VLT data are consistent, except for the power index of the size-distribution function, which is $\alpha$=--3, instead of $\alpha$=--2, for grains smaller than 1 mm. This is possibly linked to the presence of fluffy aggregates in the coma. The onset of cometary activity occurs at approximately 4.3 au, with a dust production rate of 0.5 kg/s, increasing up to 15 kg/s at 2.9 au. This implies a dust-to-gas mass ratio varying between 3.8 and 6.5 for the best-fit model when combined with water-production rates from the MIRO experiment.
CommonCrawl
Gomoku or Five in a row is a board game played by two players on a $15 \times 15$ grid with black and white stones. Whoever is able to place $5$ stones in a row (horizontal, vertical or diagonal) wins the game. In this KoTH we'll play the Swap2 rule, meaning that a game consists of two phases: In the initial phase the two players determine who goes first/who plays black, after that they'll place one stone each round starting with the player who picked black. Each player places one stone of their colour on the board, starting with the player who plays black; this goes on until there are no more free spaces to play (in which case it's a tie) or one player manages to play $5$ stones in a row (in which case that player wins). A row means either horizontal, vertical or diagonal. A win is a win - it doesn't matter whether the player managed to score more than one row. To make this challenge accessible for as many languages as possible input/output will be via stdin/stdout (line-based). The judge program will prompt your program by printing a line to your bot's stdin and your bot will print one line to stdout. Once you receive an EXIT message you'll be given half a second to finish writing to files before the judge will kill the process. To make the tournaments verifiable the judge uses seeded randomization and your bot must do so too, for the same reason. The bot will be given a seed via command-line argument which it should use; please refer to the next section. Because your program will always be started new for each game you'll need to use files to persist any information you want to keep. You are allowed to read/write any files or create/remove sub-folders in your current directory. You are not allowed to access any files in any parent-directory! BOARD will denote a list of the current stones; it only lists the positions where a stone is placed and each entry will be of the form ((X,Y),COLOR) where X and Y will each be an integer in the range $[0,15)$ and COLOR will either be "B" (black) or "W" (white). Furthermore SP denotes a single space, XY a tuple (X,Y) of two integers each in the range $[0,15)$ and | denotes a choice. "C" SP BOARD -> "B" | "W" The first message asks for three tuples, the first two will be the positions of the black stones and the third one the position for the white one. where N is the number of the round (beginning with $0$) and XY will be the position where you place a stone. "EXIT" SP NAME | "EXIT TIE" where NAME is the name of the bot that won. The second message will be sent if the game ends due to nobody winning and no more free spaces to place stones (this implies that your bot can't be named TIE). Since messages from the bot can be decoded without any spaces, all spaces will be ignored (eg. (0 , 0) (0,12) is treated the same as (0,0)(0,12)). Messages from the judge only contain a space to separate different sections (ie. as noted above with SP), allowing you to split the line on spaces. Any invalid response will result in a loss of that round (you'll still receive an EXIT message), see rules. You can find the judge program here: To add a bot to it simply create a new folder in the bots folder, place your files there and add a file meta containing name, command, arguments and a flag 0/1 (disable/enable stderr) each on a separate line. To run a tournament just run ./gomoku and to debug a single bot run ./gomoku -d BOT. * You are encouraged to use Github to separately submit your bot directly in the bots directory (and potentially modify util.sh)!
** In case it becomes a problem you'll be notified, I'd say anything below 500ms (that's a lot!) should be fine for now. If you have questions or want to talk about this KoTH, feel free to join the Chat! It uses a very crude implementation of MiniMax principles. The depth of the search is also very low, because otherwise it takes way too long. Might edit to improve later. It also tries to play Black if possible, because Wikipedia seems to say that Black has an advantage. I have never played Gomoku myself, so I set up the first three stones randomly for lack of a better idea.
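For anyone who wants to try writing an entry, a bare-bones bot skeleton along the lines of the protocol above might look like the following; the exact message tags beyond "C" and "EXIT", the board encoding, and the swap2 opening-placement message are assumptions based on the description and should be checked against the judge:

```python
import re
import sys
import random

SEED = int(sys.argv[1]) if len(sys.argv) > 1 else 0   # seed supplied by the judge
rng = random.Random(SEED)

ENTRY = re.compile(r'\(\((\d+),(\d+)\),"?([BW])"?\)')  # assumed BOARD entry format

def parse_board(s):
    """Extract stones from a BOARD token such as ((0,0),B)((7,7),W)."""
    return [((int(x), int(y)), c) for x, y, c in ENTRY.findall(s)]

def free_cells(board):
    taken = {pos for pos, _ in board}
    return [(x, y) for x in range(15) for y in range(15) if (x, y) not in taken]

def main():
    for line in sys.stdin:
        parts = line.split(None, 1)
        if not parts:
            continue
        tag, rest = parts[0], parts[1] if len(parts) > 1 else ''
        if tag == 'EXIT':
            break                              # half a second left to flush state files
        if tag == 'C':                         # colour choice after the swap2 opening
            print('B', flush=True)
            continue
        # Otherwise assume a round message "N BOARD": place a random free stone.
        move = rng.choice(free_cells(parse_board(rest)))
        print('({},{})'.format(*move), flush=True)

if __name__ == '__main__':
    main()
```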
CommonCrawl
Uncertainty Quantification is the field of mathematics that focuses on the propagation and influence of uncertainties on models. Mostly complex numerical models are considered with uncertain parameters or uncertain model properties. Several methods exist to model the uncertain parameters of numerical models. Stochastic Collocation is a method that samples the random variables of the input parameters using a deterministic procedure and then interpolates or integrates the unknown quantity of interest using the samples. Because moments of the distribution of the unknown quantity are essentially integrals of the quantity, the main focus will be on calculating integrals. Calculating an integral using samples can be done efficiently using a quadrature or cubature rule. Both sample the space of integration in a deterministic way and several algorithms to determine the samples exist, each with its own advantages and disadvantages. In the one-dimensional case a method is proposed that has all relevant advantages (positive weights, nested points and dependency on the input distribution). The principle of the introduced quadrature rule can also be applied to a multi-dimensional setting. However, if negative weights are allowed in the multi-dimensional case a cubature rule can be generated that has a very small number of points compared to the methods described in literature. The new method uses the fact that the tensor product of several quadrature rules has many points with the same weight that can be considered as one group. The cubature rule is applied to the Genz test functions to compare the accuracy with already known methods. If the cubature rule is applied to $C_\infty$-functions, the number of points required to reach a certain error is a factor less than the number conventional methods need. However, if the cubature rule is applied to $C_0$-functions or to discontinuous functions, results are not good. Because no stringent upper bound of the number of points of this cubature rule can be determined, it is not possible to create a general multi-level method and balance all the relevant errors. Therefore, a new method is proposed that balances the errors for each sample point. It consists of determining the cost function of a cubature rule (which is essentially the sum of all the cost of all the points) and then minimizing this cost function given an maximum error bound. The solution can be found using either the Lagrange multipliers method or Karush-Kuhn-Tucker conditions. The new proposed method is applied to two different test cases to test the quality of the method. The first test case is the lid-driven cavity flow problem and is used to compare the new UQ method to conventional UQ methods. The second case is simulating the flow around an airplane with uncertain geometry and uncertain flow conditions. van den Bos, L.M.M. (2015, December). Non-Intrusive, High-Dimensional Uncertainty Quantification for the Robust Simulation of Fluid Flows. TU/e Master Report.
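As a generic illustration of the quadrature idea mentioned in the abstract (not the rule proposed in the thesis), here is how a one-dimensional Gauss-Legendre rule compares with plain Monte Carlo sampling on a smooth integrand:

```python
import math
import numpy as np

def f(x):
    return np.exp(-x ** 2)                       # a smooth integrand on [-1, 1]

exact = math.sqrt(math.pi) * math.erf(1.0)       # closed form of the integral

# Deterministic Gauss-Legendre rules of increasing order.
for n in (2, 4, 8):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    err = abs(np.sum(weights * f(nodes)) - exact)
    print(f"Gauss-Legendre, n={n:2d}: error = {err:.2e}")

# Plain Monte Carlo with the same sample budgets, for comparison.
rng = np.random.default_rng(1)
for n in (2, 4, 8):
    err = abs(2 * f(rng.uniform(-1, 1, size=n)).mean() - exact)
    print(f"Monte Carlo,    n={n:2d}: error = {err:.2e}")
```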
CommonCrawl
Your task is to find the $k$ shortest flight routes from Syrjälä to Metsälä. A route can visit the same city several times. The first input line has three integers $n$, $m$, and $k$: the number of cities, the number of flights, and the parameter $k$. The cities are numbered $1,2,\ldots,n$. City 1 is Syrjälä, and city $n$ is Metsälä. After this, the input has $m$ lines that describe the flights. Each line has three integers $a$, $b$, and $c$: a flight begins at city $a$, ends at city $b$, and its price is $c$. All flights are unidirectional. You may assume that there are at least $k$ distinct routes from Syrjälä to Metsälä. Print $k$ integers: the prices of the $k$ cheapest routes sorted according to their prices.
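One standard way to attack this problem (a sketch, not a reference solution) is a Dijkstra variant in which each city may be settled up to $k$ times; because routes may revisit cities, the first $k$ times Metsälä is popped from the priority queue yield the $k$ cheapest route prices:

```python
import heapq
import sys

def k_cheapest_routes(n, edges, k, source=1, target=None):
    """Prices of the k cheapest (possibly repeating) routes from source to target."""
    target = n if target is None else target
    adj = [[] for _ in range(n + 1)]
    for a, b, c in edges:
        adj[a].append((b, c))
    pops = [0] * (n + 1)            # how many times each city has been settled
    answers = []
    heap = [(0, source)]
    while heap and len(answers) < k:
        dist, u = heapq.heappop(heap)
        if pops[u] >= k:
            continue
        pops[u] += 1
        if u == target:
            answers.append(dist)
        for v, w in adj[u]:
            if pops[v] < k:
                heapq.heappush(heap, (dist + w, v))
    return answers

def main():
    data = list(map(int, sys.stdin.read().split()))
    n, m, k = data[:3]
    edges = [tuple(data[3 + 3 * i: 6 + 3 * i]) for i in range(m)]
    print(*k_cheapest_routes(n, edges, k))

if __name__ == '__main__':
    main()
```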
CommonCrawl
First, we met on the dance floor at this place playing loud music. Soon after we were madly in love! Before I knew it, we were engaged to be married. Now, here I am in the backyard turning over soil for her garden instead of playing cards with my friends or enjoying the multitudes of brain-bending puzzles on my favorite website. Oh well, she's worth it. Soon enough, of course, she peed on a plastic stick and it came back +. We were so excited! One day, after she had had a visit to her "special" doctor, I came home from work to find her sitting at my poker table with a big smile on her face. I said, "Do you have any news?" "When are we due, honey?" "Well," she said, "seeing as how you spend every waking moment that you can on that damn Puzzling website or playing cards with your buddies instead of spending time with me, I've decided to torture you." My smile lost some of its intensity. Through gritted yet still-smiling teeth I managed to ask, "Whatever do you mean, my little buttercup?" "Don't give me any of that crap, Buster! I'm not telling you the due date until you figure out my own little puzzle. Once you solve the puzzle I have made here, then I will tell you our due date. To reemphasize, this puzzle has nothing to do with the due date, do you understand?" What is the answer to her puzzle? The dates represent the 11th, 25th, 26th, 37th and 50th Tuesdays in 2019. Converting these numbers into cards (using the order of suits defined above) we get Jack of Clubs, Queen of Hearts, King of Hearts, Jack of Diamonds, Jack of Spades. As a poker hand this is Three of a Kind, so perhaps our couple are expecting triplets. The hand could also be taken as a representation of the family: King = dad, Queen = mum, 3 Jacks = 3 sons. Your wife has chosen a name for the baby. The first of these Tuesdays is the 11th Tuesday of the year; the second is the 25th, and so on resulting in the sequence: $$11\, 25\, 26\, 37\, 50$$ If you set $$1=A, 2=B,\ldots, 26=Z,\qquad 27=A, 28=B,\ldots$$ this gives the word $$KYZKX$$ which after a $-6$ Caesar shift gives $ESTER$. All these days fall on Tuesdays. So, she might be pointing out that the due date will be a "2's day" (Tuesday), as she is having twins. There are 5 major events mentioned, and 5 dates given to him by his wife. So, I would think those are the dates of those 5 events. The 5th date, that is the only one remaining, December 10, 2019, is the due date.
CommonCrawl
Abstract: A pure multipartite quantum state is called absolutely maximally entangled (AME), if all reductions obtained by tracing out at least half of its parties are maximally mixed. Maximal entanglement is then present across every bipartition. The existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correction and the generalized shadow inequalities, we obtain new bounds on the existence of AME states in dimensions larger than two. To complete the treatment on the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation. Finally, we consider AME states whose subsystems have different local dimensions, and present an example for a $2 \times3 \times 3 \times 3$ system that shows maximal entanglement across every bipartition.
CommonCrawl
Approximate computing is an emerging design paradigm that exploits the intrinsic ability of applications to produce acceptable outputs even when their computations are executed approximately. In this paper, we explore approximate computing for a key computation pattern, reduce-and-rank (RnR), which is prevalent in a wide range of workloads, including video processing, recognition, search, and data mining. An RnR kernel performs a reduction operation (e.g., distance computation, dot product, and L1-norm) between an input vector and each of a set of reference vectors, and ranks the reduction outputs to select the top reference vectors for the current input. We propose three complementary approximation strategies for the RnR computation pattern. The first is interleaved reduction-and-ranking, wherein the vector reductions are decomposed into multiple partial reductions and interleaved with the rank computation. Leveraging this transformation, we propose the use of intermediate reduction results and ranks to identify future computations that are likely to have a low impact on the output, and can, hence, be approximated. The second strategy, input-similarity-based approximation, exploits the spatial or temporal correlation of inputs (e.g., pixels of an image or frames of a video) to identify computations that are amenable to approximation. The third strategy, reference vector reordering, rearranges the order in which the reference vectors are processed such that vectors that are relatively more critical in evaluating the correct output, are processed at the beginning of RnR operation. The number of these critical reference vectors is usually small, which renders a substantial portion of the total computation to be amenable to approximation. These strategies address a key challenge in approximate computing—identification of which computations to approximate—and may be used to drive any approximation mechanism, such as computation skipping or precision scaling to realize performance and energy improvements. A second key challenge in approximate computing is that the extent to which computations can be approximated varies significantly from application to application, and across inputs for even a single application. Hence, input-adaptive approximation, or the ability to automatically modulate the degree of approximation based on the nature of each individual input, is essential for obtaining optimal energy savings. In addition, to enable quality configurability in RnR kernels, we propose a kernel-level quality metric that correlates well to application-level quality, and identify key parameters that can be used to tune the proposed approximation strategies dynamically. We develop a runtime framework that modulates the identified parameters during the execution of RnR kernels to minimize their energy while meeting a given target quality. To evaluate the proposed concepts, we designed quality-configurable hardware implementations of six RnR-based applications from the recognition, mining, search, and video processing application domains in 45-nm technology. Our experiments demonstrate a $1.13\times $ – $3.18\times $ reduction in energy consumption with virtually no loss in output quality (<0.5%) at the application level. The energy benefits further improve up to $3.43\times $ and $3.9\times $ when the quality constraints are relaxed to 2.5% and 5%, respectively.
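To illustrate the interleaved reduction-and-ranking strategy in software (a sketch of the general idea only, not the paper's hardware implementation), one can split each distance computation into partial sums and prune reference vectors whose partial distance already exceeds the current k-th best:

```python
import numpy as np

def interleaved_rnr(x, refs, k=5, n_chunks=4):
    """Approximate top-k nearest references to x under squared L2 distance.
    Partial reductions are interleaved with ranking so that clearly losing
    candidates can be skipped before their distance is fully computed."""
    d = x.shape[0]
    bounds = np.linspace(0, d, n_chunks + 1, dtype=int)
    active = np.arange(len(refs))
    partial = np.zeros(len(refs))
    for start, end in zip(bounds[:-1], bounds[1:]):
        diff = refs[active, start:end] - x[start:end]
        partial[active] += np.einsum('ij,ij->i', diff, diff)
        # Rank the partial sums and heuristically drop candidates already above
        # the current k-th best; this pruning is where the approximation enters,
        # since a dropped candidate is never revisited.
        if len(active) > k:
            kth = np.partition(partial[active], k - 1)[k - 1]
            active = active[partial[active] <= kth]
    order = active[np.argsort(partial[active])][:k]
    return order, partial[order]

rng = np.random.default_rng(0)
refs = rng.normal(size=(1000, 64))
x = rng.normal(size=64)
idx, dists = interleaved_rnr(x, refs, k=5)
print(idx, dists)
```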
CommonCrawl
What about the possible cardinality for the set of operations with elements numbering as N=2^Q, as studiot may be suggesting? If all arguments to the operation are rational, the number of possible outputs of the operation is at most countably infinite. This is because each set of arguments produces a single output and the set of arguments is the set of $n$-tuples $(q_1,q_2,\ldots,q_n)$. In fact, an operator can be viewed as a single-valued function $f(q_1,q_2,\ldots,q_n)$. Thanks, both. I think I understand.
CommonCrawl
Simulations of long-term lithospheric deformation involve post-failure analysis of high-contrast brittle materials driven by buoyancy and processes at the free surface. Geodynamic phenomena such as subduction and continental rifting take place over millions year time scales, thus require efficient solution methods. We present pTatin3D, a geodynamics modelling package utilising the material-point-method for tracking material composition, combined with a multigrid finite-element method to solve heterogeneous, incompressible visco-plastic Stokes problems. Here we analyze the performance and algorithmic tradeoffs of pTatin3D's multigrid preconditioner. Our matrix-free geometric multigrid preconditioner trades flops for memory bandwidth to produce a time-to-solution $>2\times$ faster than the best available methods utilising stored matrices (plagued by memory bandwidth limitations), exploits local element structure to achieve weak scaling at $30\%$ of FPU peak on Cray XC-30, has improved dynamic range due to smaller memory footprint, and has more consistent timing and better intra-node scalability due to reduced memory-bus and cache pressure.
CommonCrawl
This is a nice introduction to a complicated subject. I am no expert, so my comments pertain only to how understandable the explanation is. 1) Instead of referring to Pieper's MS Thesis for further references in the intro paragraph, please give the essential references here. .... RESPONSE TO COMMENT: We have addressed this point at the end of the article in the new section "Final Remarks and Further Readings." 2) The first use of |1> occurs two lines before it is defined! .... RESPONSE TO COMMENT: This was a typo, which is now corrected. (b) if \psi is to describe a probability, then one chooses to normalize ||\psi|| = 1. .... RESPONSE TO COMMENT: We agree with these observations. Since the article is not intended to overemphasize concepts from quantum physics, we have avoided using terms such as "wave-function." We have, however, emphasized point (b), saying: We note that quantum mechanics is inherently linear and stochastic; in particular, requiring that kets are unit vectors is just a way to normalize vectors so as to have a probabilistic interpretation. 4) The \theta used in the definition of T(\theta) is confusing since you used it in (1) for \psi too. Can you use a different symbol? .... RESPONSE TO COMMENT: For clarity of exposition, we have changed $\theta$ to $\alpha$. 5) The "addition mod 2" symbol \oplus is defined in a somewhat hidden way. As this is used later, can you set it off (as an equation) and expand upon it? .... RESPONSE TO COMMENT: To address this point, we added a new equation defining mod 2 addition, and provided an external link to clarify this concept further. .... RESPONSE TO COMMENT: We have replaced the terms "Pauli" and "Hadamard gates" with "Pauli" and "Hadamard matrices," respectively. 7) In the Multi-qubit Gates section, the CNOT gate equation is not compiled. Perhaps the blockmatrix construct is not defined for wikis? .... RESPONSE TO COMMENT: Unfortunately, the Wiki does not recognize the LaTEX "blockmatrix" command. Because of this, we display matrices that required the use of this command now as figures. 8) In Deutsch's Algorithm, it seems that the equations for U_f|x>|y> are defined for general |y>, not just H|1>? Is this true? Then why use this specific y? If it is not true, then the reader needs more help to understand. .... RESPONSE TO COMMENT: The identity uses that |y> = H|1>, and we have now made this explicit in the paper by showing the main steps in the calculation. .... RESPONSE TO COMMENT: We have created links to Wikipedia for terms like this which are not yet part of Scholarpedia. .... RESPONSE TO COMMENT: We have clarified that Deutsch's algorithm is not necessarily "faster"; however, it demonstrates how one could in principle determine if $f$ is constant or not, only via the manipulation of quantum gates. .... RESPONSE TO COMMENT: We have expanded on this point, citing an identity due to Boyer et al. (1998). C) A final section that says: "To learn more about X, see Y, about Z see W, etc." .... RESPONSE TO COMMENT: We have now added at the end of the article a new section entitled "Final Remarks and Further Readings." There appears to be an error in the definition of the operator ⊕ in this article. Shouldn't 1 ⊕ 1 = 0? .... RESPONSE TO COMMENT: Yes, thank you! The typo has been corrected!
CommonCrawl
Search for certificates and/or products by WaterMark licence number, licensee name, product specification, product type, brand name, model name or model identification. Filters enable refined searching by product category, product specification or brand name. From the search results select a specific certificate/product to view detailed information.... Over 10,000 products are registered to carry the Australian Made, Australian Grown logo. Search for Australian Made and Australian Grown products and produce at www.australianmade.com.au. You get the same total as when you multiply the three numbers in each column together and add the three products: $8\times 3\times 4+1\times 5\times 9+6\times 7\times 2=225$. This number is called the magic product of the square.... After receiving a GS1 Company Prefix, a company is ready to begin assigning identification numbers to their trade items (products or services), themselves (as a legal entity), locations, logistic units, individual company assets, returnable assets (pallets, kegs, tubs), and/or service relationships. Sellers make great use of product options when listing their products. Using model numbers can help ship orders with these more easily. Many sellers use the orders API or the CSV export to access information about their orders. By adjusting part types, default product types, prices, and suffixes, a company could use the associated product feature to fulfill various company needs, such as bundling products together (similar to a kit), adding service items or delivery fees (similar to associated pricing), or many other possibilities. A universal product code, or UPC, is a bar code that is typically found on retail products. The UPC bar codes are scanned at cash registers upon checkout. A UPC is made up of a company prefix that is unique to a production company, an item number that is from the manufacturer for the specific item and a … 13/02/2012 · The item number supports the need to be able to identify products based on numbers that make sense to users within a legal entity. In this post we will describe the purpose and the use of product numbers and item numbers. The last column "Product" represents the result of multiplying the two arrays together. The summed result, 275, is the value that SUMPRODUCT returns. 26/05/2018 · The content you'll need before you make the catalog includes images of the products, a list of products and product features, and a list of other content that needs to be written, such as information about the company, customer testimonials, and any other information that will help your customers make the right decision.
CommonCrawl
1) First, since $QCat$ is a model category it lives in $RelCat$, the category of small relative categories. So apply the hammock localization and get a category enriched in simplicial sets. 2) Fibrantly replace the category in the Bergner model structure on categories enriched in simplicial sets and take the homotopy coherent nerve to get an $(\infty, 1)$-category. Question 1: Does this procedure work? Question 2: Are there some size issues? For example, will $QCat$ really be an object of $RelCat$, and if I apply the hammock localization do I get a small simplicially enriched category?
CommonCrawl
I am developing a 2D CFD solver for fluid-particle interaction. To solve Navier-Stokes equations on a grid of size $10000\times 10000$ cells (or >1 million cells), a large linear system $Ax=b$ with $A$ being the $10000\times10000$ sparse coefficient matrix needs to be solved efficiently in each time-step. What I am looking for is a high-performance/parallel C++ linear algebra library to solve this large sparse linear system. An iterative approach such as the biconjugate gradient stabilized method is preferred. There are a lot of existing libraries out there like: Eigen3, PETSc, Trilinos, MTL4, GNU GSL, Armadillo, LAPACK++, and the list goes on. Among the well-known libraries, which one should I choose for my project, in terms of high performance (better with OpenMP/MPI support) and ease of use in C++? Eigen 3 is a nice C++ template library some of whose routines are parallelized; cf. the Eigen documentation. The parallelization is OMP only, so if you intend to parallelise using MPI (and OMP) it is probably not suitable for your purpose. The nice feature of Eigen is that you can swap in a high performance BLAS library (like MKL or OpenBLAS) for some routines by simply using #define EIGEN_USE_BLAS (and other macros). Similarly Armadillo allows for node-level parallelism only. In my experience it is better to use Eigen since it is easier to interface with the raw C++ arrays in Eigen, which facilitates use of other libraries (e.g. ARPACK++). In my experience I would advise against using GSL for linear algebra. I have found its performance to be lacking and the usability to be worse than that of Eigen. If you plan to execute linear solvers (e.g. BiCGstab) on multiple nodes I would advise you to use Trilinos. I have used it in my research codes with fairly little delving into its documentation due to the good examples available on the Trilinos homepage. Furthermore its performance is decent and can be fine-tuned by including good BLAS/LAPACK libraries during the compilation. Similar should hold for PETSc, although I have never actively used its LA routines. In my experience PETSc is dependency hell, if you want to use performance-optimized (e.g. optimized for the CPU architecture you're using) versions of the libraries it requires. The performance should be fairly decent, too, I think, since PETSc relies on common LA libraries (BLAS, LAPACK, SCALAPACK etc.). Long story short: For interoperability and good performance on a single node (using OpenMP) I advise using Eigen with OpenBLAS. If you want to use multiple nodes via MPI and let the library figure out how to solve a system using multiple nodes then use Trilinos.
CommonCrawl
A survey of Banach space properties of the simplest classical Sobolev spaces in $L^p$-norms ($1\le p\le\infty$) defined on (open) subsets of $R^n$ and compact manifolds, especially on tori, is given. While for $1<p<\infty$ the Sobolev spaces in question are isomorphic to the corresponding classical spaces $L^p$, the situation is different in the limit cases $p=1$ and $p=\infty$, for $k$-times continuously differentiable functions and the Sobolev measures in two or more variables. Pathological properties of these spaces related to the failure of the Grothendieck Theorem on absolutely summing operators are discussed. The proofs involve various analytic tools like Sobolev embedding type theorems, the theory of Fourier multipliers, and the Whitney and Jones simultaneous extension theorems. Various results due to Grothendieck, Henkin, Mityagin, Kislyakov, Sidorenko, Bourgain, Berkson, M. Wojciechowski and the author are discussed.
CommonCrawl
Let $\mathbb G$ be a free Carnot group (i.e. a connected simply connected nilpotent stratified free Lie group) of step 2. In this paper, we prove that the variational functional generated by ``intrinsic'' Maxwell's equations in $\mathbb G$ is the $\Gamma$-limit of a sequence of classical (i.e. Euclidean) variational functionals associated with strongly anisotropic dielectric permittivity and magnetic permeability in the Euclidean space.
CommonCrawl
We demonstrate the lattice QCD calculation of the long-distance contribution to $\epsilon_K$. Due to the singular, short-distance structure of $\epsilon_K$, we must perform a short-distance subtraction and introduce a corresponding subtraction term determined from perturbation theory, which we calculate at next-to-leading order (NLO). We perform the calculation on a $24^3 \times 64$ lattice with a pion mass of 329 MeV. This work is a complete calculation, which includes all connected and disconnected diagrams.
CommonCrawl
My previous post is here. Day 2 of SODA, and the tenth time I've been asked "are you chairing all the sessions"? No, just that many of my PC colleagues didn't (or couldn't) show up :), so those of us who did are doing more lifting. As a reward, we got a nice dinner in the French quarter, and I tasted boudin for the first time (and maybe the last). An interesting talk this morning by Dror Aiger on reporting near neighbors. They were able to show a not-super-exponential relation between the number of points at unit $\ell_\infty$ distance from a query, and the number of points at unit $\ell_2$ distance. This was wrapped into a fast algorithm for reporting Euclidean near neighbors in high dimensions that has some interesting (if preliminary) experimental backing as well in comparison with ANN, FLANN and LSH. Jan Vondrák gave the invited talk on submodular optimization. I mentioned Satoru Fujishige's talk at NIPS, and this was an excellent complement. Fujishige's talk focused on the geometric intuition behind submodular functions (especially the associated polymatroid). Vondrák's talk focused on the algorithmic implications of submodularity, and he gave very convincing arguments for why it can be viewed as discrete convexity OR discrete concavity, or even neither. He pointed out how the Lovasz extension is useful for minimization and the multilinear extension is more useful for maximization, and gave a number of "recipes" for designing algorithms that optimize submodular functions. I hope the slides go online at some point: they were very clear and well-balanced. There was some discussion over whether next year's SODA should adopt the two-tier PC that STOC is currently experimenting with. The jury's still out on that, and since the STOC PC is not done with their work, we don't yet have formal feedback. I will admit to being a little frustrated with the level of conservativeness on display here: it's not as if EVERY OTHER COMMUNITY IN CS doesn't do this and doesn't have best practices that we can learn from, and given our reviewing loads, it's really crazy that we aren't desperately trying things to alleviate the problem. Boudin for the last time: because you didn't like it or because you don't foresee an opportunity? P.S. you forgot a backslash before infty. I can't say I cared for it that much. Fixing the typo, thanks.
CommonCrawl
I will report on recent and ongoing work regarding self-avoiding polygons confined to narrow tubes in the cubic lattice. Two particular problems will be discussed: a classification scheme for "local" and "non-local" knots and their relative frequencies as found with Monte Carlo methods; and a proof of the critical exponent for certain knot types in the $2\times1$ tube. This is joint work with C. Soteros, J. Eng, K. Shimokawa and K. Ishihara.
CommonCrawl
We consider a clustering problem where we observe feature vectors $X_i \in R^p$, $i = 1, 2, \ldots, n$, from $K$ possible classes. The class labels are unknown and the main interest is to estimate them. We are primarily interested in the modern regime of $p \gg n$, where classical clustering methods face challenges. We propose Influential Features PCA (IF-PCA) as a new clustering procedure. In IF-PCA, we select a small fraction of features with the largest Kolmogorov-Smirnov (KS) scores, where the threshold is chosen by adapting the recent notion of Higher Criticism, obtain the first $(K-1)$ left singular vectors of the post-selection normalized data matrix, and then estimate the labels by applying the classical k-means to these singular vectors. It can be seen that IF-PCA is a tuning free clustering method. We apply IF-PCA to $10$ gene microarray data sets. The method has competitive performance in clustering. Especially, in three of the data sets, the error rates of IF-PCA are only $29\%$ or less of the error rates by other methods. We have also rediscovered a phenomenon on empirical null by Efron on microarray data. With delicate analysis, especially post-selection eigen-analysis, we derive tight probability bounds on the Kolmogorov-Smirnov statistics and show that IF-PCA yields clustering consistency in a broad context. The clustering problem is connected to the problems of sparse PCA and low-rank matrix recovery, but it is different in important ways. We reveal an interesting phase transition phenomenon associated with these problems and identify the range of interest for each.
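A rough sketch of the IF-PCA pipeline as described in the abstract, using SciPy and scikit-learn stand-ins (the data-driven Higher Criticism threshold and the paper's exact normalizations are replaced here by a fixed number of retained features):

```python
import numpy as np
from scipy.stats import kstest
from sklearn.cluster import KMeans

def if_pca(X, K, n_keep=100):
    """Cluster the n rows of X into K classes with a simplified IF-PCA.
    X: (n, p) feature matrix with p >> n."""
    # 1. Feature screening: KS distance of each standardized feature from N(0,1).
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    ks_scores = np.array([kstest(Z[:, j], 'norm').statistic for j in range(X.shape[1])])
    keep = np.argsort(ks_scores)[-n_keep:]          # fixed cutoff instead of HC threshold
    # 2. Post-selection PCA: first (K-1) left singular vectors of the reduced matrix.
    U, _, _ = np.linalg.svd(Z[:, keep], full_matrices=False)
    features = U[:, :K - 1]
    # 3. Classical k-means on the singular vectors.
    return KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(features)

# Toy example: two well-separated groups hidden among mostly noise features.
rng = np.random.default_rng(0)
n, p, useful = 60, 2000, 30
X = rng.normal(size=(n, p))
labels_true = np.repeat([0, 1], n // 2)
X[:, :useful] += 2.0 * labels_true[:, None]
print(if_pca(X, K=2))
```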
CommonCrawl
This research involves two experimental investigations into the behaviour of prestressed concrete members strengthened using two typical techniques: the first is by application of external post-tensioning using carbon fiber reinforced plastic (CFRP) cables, whereas the second is by bonding composite straps on the tension side of the members. In the first investigation, a total of twelve partially prestressed concrete beams were tested. Nine of the beams were first subjected to loads large enough to cause considerable cracking and deformations and then strengthened with the external CFRP cables and loaded up to failure. The CFRP cables used were CFCC 1 $\times$ 7-5 mm and 7.5 mm diameter, manufactured by Tokyo Rope, Japan. The remaining three beams were loaded monotonically from zero up to failure without strengthening and were used as control beams for comparison purposes. The twelve beams had the same concrete cross-section dimensions and were divided into three groups; each group consisted of four beams of the same length but with different levels of internal prestressing. The purpose was to study the effects of the span-to-depth ratio and the partial prestressing ratio or the reinforcing index on the performance of reinforced or prestressed concrete beams after being strengthened with the external CFCC cables. xix, 324 leaves : ill. ; 29 cm.
CommonCrawl
where $P^\bullet $ is bounded above and consists of projective objects, and $\alpha $ is a quasi-isomorphism. There exists a map of complexes $\beta $ making the diagram commute up to homotopy. If $\alpha $ is surjective in every degree then we can find a $\beta $ which makes the diagram commute. This is tag 0649.
CommonCrawl
Definition: A graph $G = (V(G), E(G))$ is considered Hamiltonian if and only if the graph has a cycle containing all of the vertices of the graph. Definition: A Hamiltonian cycle is a cycle that contains all vertices in a graph $G$. If a graph has a Hamiltonian cycle, then the graph $G$ is said to be Hamiltonian. For example, let's look at the following graphs (some of which were observed in earlier pages) and determine if they're Hamiltonian. Hamiltonian. Notice that a cycle can easily be formed since all vertices $x_i$ are connected to all other vertices in $V(G)$. Hamiltonian. Same case as above. Hamiltonian. Recall that a cycle of a complete bipartite graph uses the same number of vertices from vertex set $A$ and vertex set $B$. If we want a complete bipartite graph with $V(G) = A \cup B$ and $A \cap B = \varnothing$ to be Hamiltonian, then we need $|A| = |B|$. Not Hamiltonian. This should be rather obvious since a cycle cannot be formed containing all vertices of the graph! Not Hamiltonian. Every vertex in a cycle has degree $2$, but the vertex $x_1$, which is a leaf by definition, has $\deg (x_1) = 1$, so no cycle through all vertices can be formed. Hamiltonian. It should be obvious that a cycle graph in itself contains a Hamiltonian cycle. In fact, the graph is a Hamiltonian cycle. Hamiltonian. All Platonic solids are Hamiltonian. Unlike determining whether or not a graph is Eulerian, determining if a graph is Hamiltonian is much more difficult. Dirac's and Ore's theorems, however, provide suitable sufficient conditions.
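As a small illustration of that last point, here is a sketch (not from this page) of Dirac's sufficient condition, namely that every vertex has degree at least $n/2$ in a graph on $n \ge 3$ vertices, together with a brute-force Hamiltonian-cycle check for small graphs given as 0/1 adjacency matrices.

# Sketch: Dirac's sufficient condition and a brute-force Hamiltonian-cycle check
# for a small graph given as a 0/1 adjacency matrix A.
dirac_condition <- function(A) {
  n <- nrow(A)
  n >= 3 && all(rowSums(A) >= n / 2)      # sufficient, not necessary
}

is_hamiltonian <- function(A) {
  n <- nrow(A)
  extend <- function(path) {
    v <- path[length(path)]
    if (length(path) == n) return(A[v, path[1]] == 1)   # can we close the cycle?
    for (w in which(A[v, ] == 1)) {
      if (!(w %in% path) && extend(c(path, w))) return(TRUE)
    }
    FALSE
  }
  n >= 3 && extend(1)                      # backtracking search starting at vertex 1
}

A <- 1 - diag(4)          # adjacency matrix of the complete graph on 4 vertices
dirac_condition(A)        # TRUE
is_hamiltonian(A)         # TRUE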
CommonCrawl
Blood and/or Japanese yen are commonly used as wagers, with a conversion rate of 100 ccs of blood equaling 1,000 points. This gameplay section assumes you know the basic rules of Japanese mahjong. Chow: Claimed from the player to your left, or from any player for mahjong. Pung: Claimed from any player. Kong: Claimed from any player. A replacement tile is drawn afterwards. Another tile in the dead wall is turned face up to reveal the kan dora. Concealed and Small Melded Kong: Announced only after drawing a tile (i.e. not after claiming a tile). A replacement tile is drawn afterwards. Another tile is drawn and placed aside as the indicator for the kan dora. Discards are placed in rows of six in front of the respective players' wall. Claimed tiles are rotated 90 degrees within the set to show which player discarded them. 3-character is claimed from the person to our left. 5-dot is claimed from the person across from us. Concealed kongs have the two outer tiles face up and two tiles face down. This is to distinguish concealed and exposed kongs when scoring. A concealed kong of 3-dots. If the dealer wins the current hand or earns points for tenpai (one tile away from a complete hand), he/she deals again, retaining the East position; otherwise the deal passes. If the hand ends in a draw, the deal passes to the player to the right. A round is over when all players have had a chance to deal. If a primary player makes a tsumo or achieves a ron off the opposing primary player, payment is made immediately. The winner can (re)claim payment either in money or blood. After the round is over the prevalent wind changes. The order of the winds is East, then South. If a draw occurs, any player who is tenpai can show their hand. Players who are noten (not one tile away from winning) have to pay a total of 3000 points to the players who are tenpai. For example, if Player A is tenpai whereas B, C, and D are not, Player A receives 1000 points from each player ($1000 \times 3 = 3000$). Nobody wins after four kans are declared. All four of the same wind tile are discarded within the first go-around. A player has nine different terminal and honor tiles within the first uninterrupted go-around. Four players declare riichi. Hands are shown if nobody can win on the last discard. Under these conditions the noten penalty is not applied. A 100-point counter is placed to the dealer's right-hand side. A counter is placed on the dealer's right-hand side after a draw or after East wins. Each counter increases the payout to the winner by 300 points. In case of a self-drawn win each opponent pays 100 for each counter. Counters are removed when another player declares a win. When there are five or more counters on the table a minimum of two yaku is needed to win. If the deal passes, the previous dealer's counters are returned and the new dealer places the appropriate number of counters. Riichi bets are collected by the winner. Should a draw occur, the riichi bets remain on the table. The game ends when six hanchans are completed, a player loses more than 2000 ccs of blood, or a player becomes bankrupt, whichever occurs first. At the end of each session a scoring adjustment is made to reward the top players and punish the losing players.
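A small sketch of the payment arithmetic described above. The general split of the 3000-point noten penalty when the tenpai count differs from the worked example follows standard riichi conventions, which is my assumption rather than something stated on this page.

# Sketch of the noten penalty split and the per-counter bonus described above.
noten_payments <- function(tenpai) {        # tenpai: logical vector of length 4
  k <- sum(tenpai)
  if (k == 0 || k == 4) return(rep(0, 4))   # nobody pays
  ifelse(tenpai, 3000 / k, -3000 / (4 - k)) # 3000 total, split per standard riichi rules
}

counter_bonus <- function(n_counters, self_drawn = FALSE) {
  if (self_drawn) rep(100 * n_counters, 3)  # each of the 3 opponents pays 100 per counter
  else 300 * n_counters                     # the discarder pays 300 per counter
}

noten_payments(c(TRUE, FALSE, FALSE, FALSE))   # Player A tenpai: +3000, others -1000 each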
CommonCrawl
We investigate dimension-dependence estimates of the approximation error for linear algorithms of sampling recovery on Smolyak grids parametrized by $m$, of periodic $d$-variate functions from the space with Lipschitz-Hölder mixed smoothness $\alpha > 0$. For the subsets of the unit ball in this space of functions with homogeneous condition and of functions depending on $\nu$ active variables ($1 \le \nu \le d$), respectively, we prove some upper bounds and lower bounds (for $\alpha \le 2$) of the error of the optimal sampling recovery on Smolyak grids, explicit in $d$, $\nu$, $m$ when $d$ and $m$ may be large. This is a joint work with Mai Xuan Thao, Hong Duc University, Thanh Hoa, Vietnam.
CommonCrawl
Wound healing is a complex process initiated by the formation of fibrin fibers that are involved in clot formation and fibroblast migration. Normally this process is triggered by thrombin cleavage of the E domain on the fibrinogen molecules, which allows them to spontaneously self-assemble into the fibers. Here we demonstrate that this process can also be initiated in the absence of thrombin. We show that by simply placing the proteins in contact with hydrocarbon-functionalized clay surfaces, molecular reorientation occurs which allows fibers to form from the intact fibrinogen protein. Furthermore, using monoclonal antibodies, we determined which regions on the $\alpha $C domains are involved in the formation of the new fibrinogen fibers. This allowed us to extend these findings to general hydrophobic surfaces, such as those presented by most hydrocarbon polymers. On the other hand, the carboxyl-terminal part of the A$\alpha $ chain can interact with amine-containing polymers and suppress formation of the fibers.
CommonCrawl
Location: School of Mathematics, Statistics and Operations Research, Victoria University of Wellington. Even a casual observer cannot fail to be impressed by the dizzying speed and scope of the development of algebraic number theory for the past hundred years. Yet our search for mathematical truth and beauty always leaves us wanting more. In this workshop, we will discuss some of the most amazing developments in Iwasawa theory and related areas. We hope that this event will lead to deeper appreciation of many old and new results, and hopefully another breakthrough. and also, our speakers include editors of three mathematical journals by my quick count, including the prestigious Journal of AMS. It is also noteworthy that Karl Rubin, Massimo Bertolini, Cristian Popescu, and Daniel Delbourgo will all attend the workshop and give presentations. They are leading experts in what is called "Iwasawa Theory", a major subfield of number theory. I am close to their field, and familiar with their recent and past works. I expect that discussions and collaborations with them which will come out of this workshop will be fruitful. Abstract: Both a rank-one Euler system and a rank-one Kolyvagin system consist of families of cohomology classes with appropriate properties and interrelationships. Given such an Euler system, Kolyvagin's derivative construction produces a rank-one Kolyvagin system, and a rank-one Kolyvagin system gives a bound on the size of a Selmer group. Ideas and conjectures of Perrin-Riou show that in some situations (for example, starting with an abelian variety of dimension r, or the global units in a totally real field of degree r) an Euler system is more naturally a collection of elements in the r-th exterior powers of cohomology groups. In this situation, Barry Mazur and I define a Kolyvagin system of rank r also to be a suitable collection of elements in r-th exterior powers, and we show how a Kolyvagin system of rank r bounds the size of the corresponding Selmer group. Abstract: Eight years ago Jeon, Kim and Schweizer determined those finite groups which appear infinitely often as torsion groups of elliptic curves over cubic number fields. In this talk we will construct such elliptic curves and cubic number fields for each possible torsion group. Abstract: Let E be an elliptic curve defined over the rational field \Q with L-function L(E,s). We are interested in studying E(K)as K varies over finite extensions of \Q. Analytically this questions translates (under the Birch and Swinnerton-Dyer conjecture) into when L(E,1, \chi)=0 for Artin characters \chi. For [K:\Q]=2, there is an extensive literature on this question. We present our results and conjectures when [K:\Q]>2. L-values, for each fixed elliptic curve. Abstract: Often in Iwasawa theory, (for a fixed prime $p$) we want to relate some kind of $p$-Selmer groups (which can be roughly considered as the set of rational points plus the Shafarevich-Tate group) on the algebraic side and some kind of $p$-adic $L$-function (which can be thought of as a $p$-adic power series that incorporates the special values of an $L$-function) on the analytic side. It is well-known that the Iwasawa theoretic properties of the conventional Selmer groups and p-adic L-functions break down when the prime $p$ is non-ordinary/supersingular. 
Kobayashi and Pollack proposed the plus/minus Selmer groups and plus/minus $p$-adic $L$-functions respectively as alternatives, and it is known that they work well over the cyclotomic extensions of the rational number field $\mathbb Q$. By building upon their ideas, the works of Katz and Hida, and our previous works, we construct two-variable $p$-adic $L$-functions, and $\pm/\pm$-Selmer groups over the $\mathbb{Z}_p^2$-extension of imaginary quadratic fields where $p$ is non-ordinary/supersingular, and splits completely over the imaginary quadratic field. We will argue that these are good objects to study by illustrating their properties. We also present a conjecture in the spirit of the main conjecture of Iwasawa theory, which hypothetically connects the algebraic and analytic properties of elliptic curves by way of relating the two objects we construct. for every sufficiently large prime number $p$. We also obtain an effective bound of such $p$. This is an analogue of the study of rational points or points over quadratic fields on the modular curve $X_0(p)$ by Mazur and Momose. Abstract: I will report on recent work, in collaboration with Darmon and Prasanna, relating values of Rankin $p$-adic $L$-series to the $p$-adic Abel-Jacobi image of certain algebraic cycles. Arithmetic applications will be discussed. Abstract: Let $X$ be a Mumford curve. We say that an elliptic curve is an optimal quotient of $X$ if there is a finite morphism $X\to E$ such that the homomorphism $\pi: Jac(X)\to E$ induced by the Albanese functoriality has connected and reduced kernel. We consider the functorially induced map $\pi_\ast: \Phi_X\to \Phi_E$ on component groups of the Néron models of $Jac(X)$ and $E$. We show that in general this map need not be surjective, which answers negatively a question of Ribet and Takahashi. Using rigid-analytic techniques, we give some conditions under which $\pi_\ast$ is surjective, and discuss arithmetic applications to modular curves. This is a joint work with Joe Rabinoff. properties of this differential module. Abstract: Jeon, Kim and Schweizer determined those finite groups which appear infinitely often as torsion groups of elliptic curves over quartic number fields. In this talk we will construct infinite families of elliptic curves over quartic number fields for each possible torsion group. Abstract: I will describe applications of my joint work with Greither in equivariant Iwasawa theory to the construction of explicit models for Tate sequences and its consequences for the theory of special values of Artin L-functions. * Weta Workshop is the multi-Academy-Award-winning special effects company that helped create movies like the Lord of the Rings, Avatar, District 9, King Kong and The Hobbit. They are currently at work on movies including The Adventures of Tintin. The Weta Cave is a unique visitor experience showcasing the creativity of Weta. Opened one year ago, it has hosted over 70,000 people from all over the world, including VIPs from TV, movies, gaming and sports, such as Guillermo del Toro (Director, Hellboy), John Stevenson (Director, Kung Fu Panda), and the All Blacks (New Zealand's national rugby team). * Te Papa is New Zealand's national museum, renowned for being bicultural, scholarly, innovative, and fun. The collections span five areas: Art, History, Pacific, Māori, and Natural Environment. It provides exhibition highlights for first-time visitors, covering the story of Māori in New Zealand and the interactive exhibition OurSpace.
Te Papa also offers a daily 60-minute tour – Introducing Te Papa for individuals and small groups. You can simply buy your ticket ($14) on the day at the Information Desk, Level 1. * Transport will be provided from outside the cotton building near "Wishbone" to take you to Weta Workshop and Te Papa. You will need to make your own arrangements back to the hotel. All guests will be required to pay NZ$39 for their own dinner and drinks on Wed 19 Dec except the plenary speakers. Note: during the workshop and dinner catering for vegetarians will be provided. If you have a dietary request please email Ping (Kelly) Shen. There is a special rate of NZ$135 per night including breakfast. Note: There is a credit card surcharge paid in addition. The school of Mathematics, Statistics and Operations Research is located in the Cotton building, Kelburn campus of Victoria University. Please refer to the map of the Kelburn campus. The workshop will be held in the Cotton Building, Level 3, Room 350. New Zealand's unit of currency is the dollar (NZ$). There is no restriction on the amount of foreign currency that can be brought in or taken out of New Zealand. However, every person who carries more than NZ$10,000 in cash in or out of New Zealand is required to complete a Border Cash Report. All major credit cards can be used in New Zealand, with Visa and MasterCard accepted most widely, followed by American Express and Diners Club. If you're used to driving in the city, you should take care when driving on New Zealand's open country roads. We have a good motorway system but weather extremes, the terrain and narrow secondary roads and bridges require drivers to be very vigilant. New Zealand has a largely temperate climate. The average New Zealand temperature decreases as you travel south. In summer (December – February), the average maximum temperature ranges between 20-30ºC (70-90°F). In most places you can wear shorts and a t-shirt or singlet during the day, adding a light jumper at night. New Zealand weather can change unexpectedly. Be prepared for sudden changes in weather and temperature if you're going hiking or doing other outdoor activities. Situated at the southern end of the North Island, nestled between a sparkling harbour and rolling green hills, Wellington is New Zealand's capital city. Lonely Planet named Wellington 'the coolest little capital in the world' (2011), and the city is renowned for its arts, culture and native beauty.
CommonCrawl
Abstract: We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Prior to our work, lower bounds for parallel convex optimization algorithms were only known in a small fraction of the settings considered in this paper, mainly applying to Euclidean ($\ell_2$) and $\ell_\infty$ spaces. Our work provides a more general approach for proving lower bounds in the setting of parallel convex optimization.
CommonCrawl
Abstract: We present an X-ray spectral analysis of 21 low redshift quasars observed with XMM-Newton EPIC. All the sources are Palomar Green quasars with redshifts between 0.05 and 0.4 and have low Galactic absorption along the line-of-sight. A large majority of quasars in the sample (19/21) exhibit a significant soft excess below ~1-1.5 keV, whilst two objects (PG 1114+445 and I Zw1) show a deficit of soft X-ray flux due to the presence of a strong warm absorber. Indeed, contrary to previous studies with ASCA and ROSAT, we find that the presence of absorption features near 0.6-1.0 keV is common in our sample. At least half of the objects appear to harbor a warm absorber, as found previously in Seyfert 1 galaxies. We find significant detections of Fe K $\alpha$ emission lines in at least twelve objects, whilst there is evidence for some broadening of the line profile, compared to the EPIC-pn resolution, in five of these quasars. The determination of the nature of this broadening (e.g., Keplerian motion, a blend of lines, relativistic effects) is not possible with the present data and requires either higher S/N or higher resolution spectra. In seven objects the line is located between 6.7-7 keV, corresponding to highly ionized iron, whereas in the other five objects the line energy is consistent with 6.4 keV, i.e. corresponding to near neutral iron. The ionized lines tend to be found in the quasars with the steepest X-ray spectra. We also find a correlation between the continuum power law index $\Gamma$ and the optical H $\beta$ width, in both the soft and hard X-ray bands, whereby the steepest X-ray spectra are found in objects with narrow H $\beta$ widths, which confirms previous ROSAT and ASCA results. The soft and hard band X-ray photon indices are also strongly correlated, i.e. the steepest soft X-ray spectra correspond to the steepest hard X-ray spectra. We propose that a high accretion rate and a smaller black hole mass are likely to be the physical drivers responsible for these trends, with the steep spectrum objects likely to have smaller black hole masses accreting near the Eddington rate. Rights: Copyright © 2004 ESO. Reproduced with permission from Astronomy & Astrophysics, © ESO.
CommonCrawl
Calculus of Variations in $L^\infty$ has a long history; the scalar case was initiated by G. Aronsson in the 1960s and has been under active research ever since. Aronsson's motivation to study this problem was related to the optimisation of Lipschitz extensions of functions. Mathematically, minimising the supremum is very challenging because the associated equations are in non-divergence form and highly degenerate. However, it provides more realistic models, as opposed to the classical case of minimisation of the average (integral). Yet, due to fundamental difficulties, until the early 2010s the field was restricted to the scalar case. In this talk I will discuss the vectorial case, which has recently been initiated by the speaker. The analysis of the $L^\infty$-equations is based on a recently proposed general duality-free PDE theory of generalised solutions for fully nonlinear systems.
CommonCrawl
Abstract: Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (e.g., Gaussian). For the corresponding limit experiment, we characterize the frequentist size of a certain Bayesian hypothesis test of (possibly nonlinear) inequalities. If the null hypothesis is that the (possibly infinite-dimensional) parameter lies in a certain half-space, then the Bayesian test's size is $\alpha$; if the null hypothesis is a subset of a half-space, then size is above $\alpha$ (sometimes strictly); and in other cases, size may be above, below, or equal to $\alpha$. Two examples illustrate our results: testing stochastic dominance and testing curvature of a translog cost function.
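As a concrete illustration of the first claim, here is a Monte Carlo sketch in a deliberately simplified setting (mine, not the paper's general framework): a one-dimensional Gaussian limit experiment $X \sim N(\theta, 1)$ with a flat prior, where the Bayesian test rejects $H_0: \theta \le 0$ when its posterior probability drops below $\alpha$. The simulated frequentist size at the boundary $\theta = 0$ comes out close to $\alpha$.

# Monte Carlo sketch of the half-space case in a one-dimensional Gaussian toy model
# (a simplification, not the paper's setting): X ~ N(theta, 1), flat prior,
# reject H0: theta <= 0 when the posterior probability of H0 is below alpha.
set.seed(1)
alpha <- 0.05
x <- rnorm(1e5, mean = 0, sd = 1)           # data generated at the boundary theta = 0
post_prob_H0 <- pnorm(0, mean = x, sd = 1)  # posterior theta | x ~ N(x, 1)
mean(post_prob_H0 < alpha)                  # empirical size; should be close to 0.05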
CommonCrawl
Have you tried some simple cases to see if there is a pattern? If you have a number "$x$" the next number is "$x +1$". How could you write four consecutive numbers? If you can express $x^4 + \ldots + 16$ as a perfect square, the two factors will be of the form $(x^2 +\ldots + 4)$. Mathematical reasoning & proof. Creating and manipulating expressions and formulae. Integers. Networks/Graph Theory. Generalising. Number theory. Making and proving conjectures. Expanding and factorising quadratics. Quadratic equations. Inequalities.
CommonCrawl
Doroshenko V. M., Kruglov V. P., Kuznetsov S. P. The principle of constructing a new class of systems demonstrating hyperbolic chaotic attractors is proposed. It is based on using subsystems, the transfer of oscillatory excitation between which is provided resonantly due to the difference in the frequencies of small and large (relaxation) oscillations by an integer number of times, accompanied by phase transformation according to an expanding circle map. As an example, we consider a system where a Smale – Williams attractor is realized, which is based on two coupled Bonhoeffer – van der Pol oscillators. Due to the applied modulation of parameter controlling the Andronov – Hopf bifurcation, the oscillators manifest activity and suppression turn by turn. With appropriate selection of the modulation form, relaxation oscillations occur at the end of each activity stage, the fundamental frequency of which is by an integer factor $M = 2, 3, 4, \ldots$ smaller than that of small oscillations. When the partner oscillator enters the activity stage, the oscillations start being stimulated by the $M$-th harmonic of the relaxation oscillations, so that the transformation of the oscillation phase during the modulation period corresponds to the $M$-fold expanding circle map. In the state space of the Poincaré map this corresponds to an attractor of Smale – Williams type, constructed with $M$-fold increase in the number of turns of the winding at each step of the mapping. The results of numerical studies confirming the occurrence of the hyperbolic attractors in certain parameter domains are presented, including the waveforms of the oscillations, portraits of attractors, diagrams illustrating the phase transformation according to the expanding circle map, Lyapunov exponents, and charts of dynamic regimes in parameter planes. The hyperbolic nature of the attractors is verified by numerical calculations that confirm the absence of tangencies of stable and unstable manifolds for trajectories on the attractor ("criterion of angles"). An electronic circuit is proposed that implements this principle of obtaining the hyperbolic chaos and its functioning is demonstrated using the software package Multisim. A nonautonomous system with a uniformly hyperbolic attractor of Smale – Williams type in a Poincaré cross-section is proposed with generation implemented on the basis of the effect of oscillation death. The results of a numerical study of the system are presented: iteration diagrams for phases and portraits of the attractor in the stroboscopic Poincaré cross-section, power density spectra, Lyapunov exponents and their dependence on parameters, and the atlas of regimes. The hyperbolicity of the attractor is verified using the criterion of angles.
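The phase dynamics invoked above reduce, over one modulation period, to an $M$-fold expanding circle map. The following tiny sketch (mine, iterating only the map itself rather than the authors' coupled oscillator model) shows the stretching responsible for the chaos, whose Lyapunov exponent is $\ln M$.

# Iterate the M-fold expanding circle map phi_{n+1} = M*phi_n (mod 2*pi),
# the phase transformation invoked above (not the full oscillator model).
M <- 3
phi <- numeric(1000); phi[1] <- 0.1
for (n in 1:999) phi[n + 1] <- (M * phi[n]) %% (2 * pi)
plot(phi[-1000], phi[-1], pch = ".",
     xlab = expression(phi[n]), ylab = expression(phi[n + 1]),
     main = "Iteration diagram of the expanding circle map")
log(M)   # Lyapunov exponent of the map itself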
CommonCrawl
In Kim Stanley Robinson's new novel Red Moon the first few pages describe a method of Earth-to-Moon transportation that I had not encountered before. The idea is to use a magnetically levitated and accelerated train on the surface of the Moon to catch a spaceship from Earth flying by the Moon at thousands of kilometers per hour. The advantage of this system is that the spaceship does not need to bring fuel to decelerate itself and instead is decelerated by the train. In more detail, a ship is launched from Earth and is put on a course tangential to the surface of the Moon such that it would just brush past the surface at 8300 kilometers per hour (according to the novel). As it approaches the Moon a maglev train on a 200-kilometer long track is accelerated to match speeds with the incoming spaceship. As the ship comes closest to the surface of the Moon the train is there to catch it and hold on to it. The train then gradually decelerates with the ship using the long track. Because the train is magnetically levitated and there is practically no air resistance on the Moon the train can easily reach such fast speeds. Because the ship doesn't have to bring its own decelerating fuel much more weight can be dedicated to cargo. This system is very economically attractive and if practical would appear to cut costs of sending people and supplies to the moon significantly. However, I have never encountered this idea before and a cursory search doesn't find any other references for the system. Will this scheme work or are there practical difficulties that make it unfeasible? P.S. I'm also curious whether this is a novel idea of Kim Stanley Robinson's or if someone else has proposed this before? It's a clever reverse on the old railgun-up-a-mountainside launcher concept. There are a few practical difficulties. For one, we can barely build fixed railway infrastructure capable of 400 km/h, much less 8300. Most current maglevs run slower than that, and it's not all due to air resistance. Turns out that a 1-2 cm error in surveying and construction and otherwise minor variations in magnet strength leads to a really bumpy ride. There is no reason to expect kinetic problems to decrease as the speed increases. Maintenance gets much more expensive as speeds increase, too. Another other big practical difficulty is the same one faced by missile-interception: Two very fast masses that must meet precisely. 8300 km/h is 2.31 km per second (barely short of escape velocity 2.38 km/s, so a rounding error might make a big, er, impact). In order for a 1m docking grapple to catch properly, both craft must reach the same target spot less than 0.0004 seconds apart. Let's go back to the guideway. It must contain the train-plus-grappled-spaceship forces vertically and horizontally. And sometimes that vertical force might be high-impulse, or strongly oscillating as the combined vehicle stabilizes in the seconds after grapple. Seems like a big fraction of your power must go toward simply holding the train in place vertically on the guideway against those unexpected vertical forces...lest it get torn off and dragged into space (or smashed into the guideway) by that pesky rounding error in the spaceship's vertical vector. Finally, the biggest problem is that there's just no way to make this thing fail safely under lots of conditions. Any kind of guideway failure would be catastrophic. A power disruption while the train was in motion would be catastrophic. 
A tiny mistake measuring the spaceship's position or velocity would result in missed meets (and a massive waste of energy)...or a catastrophe. Fail to brake and the train will leap up many kilometers high. It will take some minutes to fall on the ground again at an extremely high speed. In fact, if the ship collides and imparts momentum to the train, the train may even escape the Moon. To be stuck to a planet while flying this close to or over its escape velocity requires an immoral amount of downwards force. Both the ship and the train must be made of an unobtanium-adamantium-uru alloy. Why do you need the train? The other answers make the very good point that even the smallest error would lead to disaster even if everything else could be made to work. But, unless you are aiming for an improvised Mission Impossible kind of scene (in which case anything goes), then the train is redundant. Here's what you could do: instead of a fixed-width maglev track, build a sequence of toroidal magnets along a very elongated horizontal cone. Give the first one a one kilometer diameter and make the last one just wider than the spaceship. The ship flies through the first one, gets slowed down slightly and its course is corrected towards the central axis. The next one slows it down further and corrects the course again, and so forth. By the time you're on the last one, your ship is centered and slow enough. Sadly, there are a bunch of reasons why this still wouldn't work. Just generating a magnetic field with meaningful strength over a large volume would be prohibitive. The energy density of a magnetic field is $B^2/(2\mu_0)$, which works out to $10^7/(8\pi)\ \mathrm{J/m^3}$ for a 1 T field. Calculating the total energy inside a $100\ \mathrm{km}\times 1\ \mathrm{km}$ cone is left as an exercise for the reader. As is the energy that would be deposited on the spaceship during the deceleration, and the stresses on the toroids, and the outcome of an approach that is just a bit too off-axis, and just how the deceleration and course corrections work, and... Nope. Sorry, no. I saw after posting that Kyle essentially proposed the same approach and even worked out the answers I was too lazy to calculate. I yield to you, sir. This is part of a larger system. The track is wrapped all the way around the equator. This gives you plenty of time to get the system up to speed and match with the target spacecraft. While you need sub-millisecond accuracy you have plenty of time to match the capture train to the spacecraft--the match should not be a problem. This also removes the failure-to-stop failure mode. If you have a problem you can just keep going. The spacecraft is grappled with long connectors. The spacecraft is in an orbit with a very low periapsis, but it's not going to plow into the moon if a mistake is made. Fail a grapple and you just go around again. Maglev trains are speed-limited by being in atmosphere. Here you have no issue of wheels on a track, no issue of pushing the air out of the way. Going 2000 m/s instead of 100 m/s isn't going to be a big problem. The whole system is safer if the train is actually moving above orbital velocity. (Note that the ends of the grapples inherently must be above orbital velocity; having the train itself above orbit is no problem.) The way you keep it from flying away is that it has 4 rails rather than the usual two. These can either be actually above the train, or the train can have a piece that reaches down between the rails and rides on downward-pointing rails underneath.
Mechanically the latter is simpler but I don't know if the magnets could be kept from interfering. In use, the train gets up to speed and then adjusts it's speed so that it will arrive under the spacecraft as it reaches periapsis. To be simple but wasteful you could simply keep the train directly under the spacecraft. The grapples are launched upwards. If they fail for whatever reason you just wind them back in and try again next orbit. If they grapple the spacecraft is first pulled into a circular orbit and then winched down onto the grapple car where it is more solidly connected for the deceleration phase. Note that I said this was part of a larger system: This track is useful for a lot more than landing spacecraft. Since the train exceeds orbital velocity it can be used for launch as well as landing. Not only that, but if you build it beefy it can generate some pretty high velocities. Orbit is 1.73 km/sec (at 100 km, I'm having zero luck finding it at 0km) which generates 1.62m/s of centrifugal force (matching the lunar gravity.) Lets speed our train up so the spacecraft feels 1g outward. Now it's moving 12.2 km/sec. Release it and it leaves the moon with more than 10 km/sec of velocity (remember Oberth, don't just subtract the escape velocity.) Very few NASA craft have exceeded this--but this is nowhere near the limit of this system. Lets take it up to 5g, about as much as we want for a manned launch. Now it ejects with 54 km/sec and loses almost none of that to the moon's gravity. That gives you anything from smacking the sun to solar escape. Unmanned missions can be launched even faster. This is literally every on-orbit docking maneuver ever. The above answers cover most of the important aspects, but the consensus that this is borderline impossible is slightly silly. You'll still need a fair bit of fuel on the spacecraft as it's primarily responsible for lining up the rendezvous, the train can only speed up or slow down and as the vehicle closes in it's going to have all six degrees of freedom. If any inclination changes were required you'd need to load up on fuel, but your particular use case addresses that. As for not seeing this anywhere, when I was looking into Launch Loops (basically using a self suspending linear accelerator - maglev train - to launch payloads directly into orbit) I discovered that on Earth this is possibly a worse idea than a space elevator. Which takes some doing. But on the Moon, you can have an accelerator on the surface and lob payloads directly into a very elliptical orbit, and just circularize at apogee. If you don't circularize, then the payload comes back and skims the surface at perigee - potentially a bad day. But that's the whole point of what you're up to, run the launch process in reverse and you're good to go. Some of the above answers brought up some valid concerns about the rail portion of your system. However, these problems are primarily control engineering challenges that, while very difficult, are probably solvable with enough time, money, and incentive (I can think of some big programs with plenty of time and money that don't seem to accomplish much, so #3 is important). I'd definitely recommend you consider why your people had both the reason and means to solve such a tricky problem. Partly a direct answer to the secondary question, P.S. I'm also curious whether this is a novel idea of Kim Stanley Robinson's or if someone else has proposed this before? This reportedly was first proposed as a concept of an "induction catapult". 
in the 1966 Robert A. Heinlein book The Moon Is a Harsh Mistress. However there is prior art most likely driven via The First Men in the Moon which in turn was based on De la terre à la lune (From the Earth to the Moon) an 1865 novel by Jules Verne. Thus I would say the theory "evolved" from that classic. However the closest previous physical example could be the one way usage in "When Worlds Collide" It reportedly is only intended to work from ground to "The Gateway" (a cisLunar Orbital Platform-Gateway) thus not Earth, such that the speed gravitational forces etc are MUCH lower than implied by the book. Also lunar spin may or may not be a significant factor. If the book is to be taken as an Earth to Lunar base journey then we have to go with those forces parameters etc. I did not notice anyone mention you can reduce the relative velocity required by the Lunar Surface rotational speed of very roughly 16 km/hr near the tilted equator (I can't find a given maximum kph, perhaps someone can confirm). "Suddenly their seats rotated 180 degrees, after which Fred felt pushed back into his seat. Not much reduction from 8300, I grant you, but in this case every bit helps. The greater factor is HOW we interpret the given parameters. From the book "but going as fast as they were, something like 8300 kilometers an hour at touchdown, their ship would have to decelerate pretty hard for the whole length of the track. And in fact they were still being decisively pushed back into their seats" I will "assume" in very simple terms, 8300 is the space craft Terra-Luna relative velocity. And one could be very generous and allow for the moon receding at an orbital average of approx. 3600 kph although that was not the tone of the statement. However it could be construed from the authors quote "it meant they had come in around forty times faster than a commercial jet on Earth landed" approximates to under 5000 kph, for this arguments sake lets reduce that to 4700 kph (=8300-3600), much more practical (and feasible) for these discussions. As a result of other comments and further corrections, this results in a very generous minimal velocity (only at some times in the elliptical year) of approx 1.3 km/s. So EVEN IF the train could start instantly at that speed (but it would need some significant energy input negating the supposed benefit), over the remainder of stated 200km (max) it would need to decelerate at -55 m/s^2 (well over. 5g), I don't know how to prove the g forces involved (since the book says gravity is only 16.5%) However I can now reduce my guestimate that by the time a human hit the station if they were very fit, a percentage would probably survive (But I'm not going). Cargo including eggs, should be ok, but that's another question 🙂. To reduce g forces to a couple of g My calcs suggest a minimum 500 Km landing track is needed. "they were landing on the moon! It was hard to believe they were really doing it. "Hard to believe," Fred said. Ta Shu smiled. "Hard to believe" While the other posters have provided excellent answers regarding the difficulty, what we are really looking at is a reversal of the mass driver on the moon used to launch payloads. Taken this way, and not as a train, it is doable, if somewhat hair rousingly challenging. Perhaps unmanned cargo pods would be a more reasonable use of the system (which also reduces constraints due to deceleration forces on the human passengers. 
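A quick back-of-the-envelope sketch in R (mine, not reproducing any single answer's rounding or unit choices) of two figures that keep coming up in this thread: the docking timing window for a 1 m grapple at the quoted flyby speed, and the constant deceleration implied by stopping from a given approach speed over a given track length.

# Back-of-the-envelope check of two figures discussed above.
kmh_to_ms <- function(v_kmh) v_kmh * 1000 / 3600   # unit conversion

v_flyby <- kmh_to_ms(8300)    # ~2306 m/s, the approach speed quoted from the novel
1 / v_flyby                   # time to cross a 1 m grapple: ~0.00043 s docking window

decel <- function(v_kmh, track_m, g = 9.81) {
  v <- kmh_to_ms(v_kmh)
  a <- v^2 / (2 * track_m)    # constant deceleration needed over the track
  c(m_per_s2 = a, in_g = a / g)
}
decel(8300, 200e3)            # the full 8300 km/h over the 200 km track
decel(4700, 200e3)            # the reduced relative speed argued for above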
Essentially, rather than trying to land a payload on a moving train platform, the pod simply is aimed at the throat of a large diameter funnel, which is the mass driver. As it passes each coil, the pod passes through a magnetic field and, much like the wire on a generator, generates electricity, which is gathered by the mass driver infrastructure and stored in giant capacitor banks, madly spinning flywheels or whatever other electrical system is in use at this time. Since it is an unmanned pod, the system could be as short as 200m (see here), while a manned pod might be tens or even 100km to reduce the deceleration or acceleration stresses. When the pod has slowed sufficiently, it can then "land" on a track, but for the majority of the journey it will be suspended in the magnetic field and not physically touching anything. Your massive maglev train is, obviously, levitated and accelerated by (electro) magnets, and must be many times the mass of the spaceship. The energy to power those magnets must come from somewhere on the planet, like a nuclear power plant, or perhaps by solar energy. There won't be any fossil fuels on Mars, and although nuclear fuel might be shipped from Earth, other forms of fuel would not be; that would defeat the purpose. The obvious solution, to me, is to put both the nuclear power plant and the electro magnets into space, a few million kilometers from Mars. The same magnetic energy used to lift and accelerate this giant train can be applied to the ship itself over a very long distance in space, and you won't have to deal with the problem of tolerances on an uneven and possibly shifting (and definitely rotating) planetary surface. In space, you don't have to deal with confounding force factors like the moving Martian atmosphere (it is thin but there is weather), terrain, Martian dust storms and compensating vectors for planetary rotation or curvature. It is a cleaner and simpler environment and for engineers, this allows much greater accuracy and closer approaches, tiny nudges from steering rockets can change course by a hundredth of a centimeter. So your ship can navigate to the same distance from the rails, the rails are the same length as those on Mars (and could be shorter, because in space you could have six rails circle the ship at the points of a hexagon), the power applied is the same. But the rails can be perfectly straight, the path of the spaceship perfectly straight and centered. The rails themselves can be linked into a ring, to keep them aligned. They can be as massive (or much more massive) than the train; iron is very cheap in space (asteroids). With the nuclear power plant, any shift of the deceleration rig can be corrected by nuclear powered magnetic propulsion (accelerating atoms at near-light speed in the opposite direction of desired travel). After slowing the ship enough, it lands by parachute, just like our probes have, or you could even guide it into orbit and (robotically) deploy only the supplies by parachute to the planetary surface. Then the ship could be turned around, and the exact same rail guns in space could accelerate it away from Mars back to Earth. It may be empty, or could carry crew and products back to Earth. However those return goods get to Mars orbit, it would certainly be less energy intensive to send just them into orbit, than it would be to send them AND the ship. 
By this scheme (which I invented here on the fly) the ship never leaves space, so the space ship can be just a space ship, it does not have to be engineered to work both on the ground (Earth or Mars), withstand launch stresses, have landing gear, or even be oriented for gravity, it can be, for example, a permanently rotating cylinder with centrifugal 0.25G gravity, more comfortable for human passengers (washing, sleeping, cooking, eliminating, exercising, working, etc) and more convenient for packing and storage (you don't have to net everything or tie it in place). Of course, this cylinder ship may have a centrifugal section and a non-rotating zero-G section, if zero-G is desired for storage or is useful for some scientific or technical operations.
CommonCrawl
Imagine you are a reinsurance pricing actuary tasked with pricing (or costing) an excess of loss contract. A typical method would be to determine the expected number of claims excess of some threshold, and then to also chose a severity distribution representing the probability of different sizes of loss above that threshold. A pareto curve would be a typical example or you also might use a semi-parametric mixed exponential distribution. Assuming these distributions represent only the ceding company's incurred loss, you can also apply their limit profile to get what's called an exposure estimate of the loss to the layer. But what about the ceding company's loss adjustment expenses, commonly known as ALAE? For many lines of business these expenses are covered in addition to the insured's policy limit, and that is the case we will assume here. Usually an excess of loss reinsurance contract will cover some of the ceding company's ALAE for claims in the layer, or even below the layer. There are two common reinsurance treatments: ALAE included which means you add the indemnity and ALAE together and the reinsurer is responsible for however much of that is in the layer, or ALAE pro-rata which means the reinsurer pays the same percentage of the ALAE as it paid of the loss. So we need to adjust our exposure loss cost estimate for ALAE. The traditional, and still very common, way this is done is to select an overall ratio of ALAE to loss, e.g. 5% or perhaps 20%, and then multiply each indemnity value by that amount to determine the amount of ALAE for that claim. For example, with a 20% ALAE load a $1M indemnity loss would have exactly $200k of ALAE, and every $1M claim would have exactly that same amount of ALAE. While this seems reasonable it actually makes two very strong implicit assumptions. It forces the distribution of ALAE to be a scaled copy of the distribution of indemnity and it forces the two to be 100% correlated. We might suspect that the ALAE distribution is not a scaled copy of the indemnity distribution especially if there is a significant effect of policy limit capping, which there often is. A $1M indemnity limit and a 20% ALAE load implies a maximum possible ALAE incurred of $200k. We will look at the data further down to evaluate the correlation assumption. A copula is simply a bivariate (or multivariate) distribution on the unit square (or cube, etc.). This means that we can fit univariate distributions to ALAE and indemnity each in isolation, something more actuaries are comfortable with, and then fit a copula to the bivariate ALAE-indemnity data transformed to $[0,1]\times[0,1]$ without worrying about any loss of information. See the references at the bottom for more information. Back to the premise: you are a reinsurance actuary trying to price an excess of loss contract. You already have a severity curve for indemnity, as discussed in the first paragraph, and an estimate of the expected number of claims excess of a certain threshold. You have a dataset of indemnity and ALAE amounts for a set of claims. We will not worry about trend or development (assume everything is trended and at ultimate already). What you are trying to do is refine the traditional assumption of ALAE as a fixed percent of indemnity and use a copula to model the bivariate nature of claims. What we will do in this blog is present and walk through R code that performs this analysis step by step. 
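To make the traditional assumption concrete before replacing it, here is a tiny illustration (mine, with made-up claim and layer figures) of how a fixed ALAE load feeds a layer calculation under ALAE-included treatment.

# Illustration of the traditional fixed-ALAE-load assumption under "ALAE included"
# treatment: every claim gets exactly alae_load * indemnity of ALAE.
layer_loss <- function(x, limit, attach) pmin(pmax(x - attach, 0), limit)

alae_load <- 0.20
indemnity <- c(250e3, 600e3, 1e6)                 # illustrative claims
combined  <- indemnity * (1 + alae_load)          # indemnity plus ALAE, 100% correlated
layer_loss(combined, limit = 1e6, attach = 1e6)   # loss to a 1M xs 1M layer
# A $1M policy-limit loss contributes exactly 0.2M to the layer under this assumption;
# with ALAE modeled separately, that contribution becomes a distribution, not a constant.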
If you download the .csv files attached (see link at bottom of page), you should be able to follow along and reproduce the results. We will discuss places where changes could be made as well. The full R code is embedded in the blog and also attached. Most of the functions we use have thankfully been programmed by somebody else. Loading these packages gives us access to all those functions. R can automatically download these or you might have to manually download and unzip them. We started by importing a file containing the loss data with a column for indemnity (I will refer to indemnity as loss in the code; hopefully it will be clear from the context) and ALAE. The first handful of rows are shown below the code. We removed any ALAE data points at exactly 0. This will allow us to fit curves to the logarithm of the data. An adjustment could be made at the end to add back in the probability of 0 ALAE but I have not done that here. #The ALAE data will have a point mass at 0 which our fitted distributions do not account for. Finally we transformed the data to be in $[0,1]\times[0,1]$ by using the rank function and then dividing by the number of data points plus one. Some of the copula fitting procedures break down if there are ties or repeats in the data, so we applied a tie-break procedure which just randomly selects one of the equal entries to be ranked ahead of the other. # This defines a function giving the negative log-likelihood of the data for a given set of distribution parameters. This line is for the log-normal distribution. #This finds the distribution parameters minimizing the function defined above, i.e. the maximum likelihood fit. # Same as above but for the Weibull distribution. # The following code creates a graph displaying each of the fitted curves against the empirical distribution of the data. The x-axis is ALAE amount and the y-axis is the cumulative probability that ALAE is below that amount. x<-seq(0,max(alae_data),max(alae_data)/1000) # This defines the x-axis range for the following graph to encompass all ALAE data. #We give each distribution a different color as indicated in the code below. # At this point the user may select which fit they think is the best. I typically found the pareto to be the best fit and so the rest of the code assumes the pareto distribution is chosen. The code can be modified to make a different selection. The graph shows each of the 4 fitted distributions against the actual cumulative distribution of ALAE amounts. From practice the pareto often seems to be the best fit, as it is here in green. There are two aspects to copula fitting. Just like with univariate distributions, there are different families of distributions and then within a family any given dataset will have a best fit member (according to some goodness of fit measure). From my own experimentation I found that the best fitting copula family to various datasets of liability indemnity and ALAE was the Gumbel copula. This was also chosen as the best family in Frees and Valdez, Micocci and Venter. The Gumbel has the desirable properties of being single parameter, an extreme-value copula (meaning it's appropriate for right-tailed, truncated data as we often work with in reinsurance), and has a closed form expression for traditional product-moment correlation and upper tail correlation. #We have assumed use of the Gumbel copula. # The upper tail correlation is one way of uniquely describing a member of a one-parameter copula family (e.g. Gumbel).
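Since only the comments from the data-prep and fitting step survive above, here is a minimal reconstruction of what that step might look like. It is a sketch rather than the attached code: the file name and the column names loss and alae are assumed from the description, and the Pareto density comes from the actuar package listed in the references.

# Sketch of the data prep and severity fitting described above (not the attached code).
library(actuar)                                   # for dpareto / ppareto / qpareto

dat <- read.csv("loss_data.csv")                  # assumed columns: loss, alae
dat <- dat[dat$alae > 0, ]                        # drop the point mass at zero ALAE
n   <- nrow(dat)
u   <- cbind(rank(dat$loss, ties.method = "random"),
             rank(dat$alae, ties.method = "random")) / (n + 1)   # pseudo-observations

# Negative log-likelihoods for three candidate ALAE distributions
nll_lnorm   <- function(p) -sum(dlnorm(dat$alae, p[1], exp(p[2]), log = TRUE))
nll_weibull <- function(p) -sum(dweibull(dat$alae, exp(p[1]), exp(p[2]), log = TRUE))
nll_pareto  <- function(p) -sum(dpareto(dat$alae, shape = exp(p[1]), scale = exp(p[2]), log = TRUE))

# Maximum likelihood fit of the pareto (the other two families are fit the same way)
fit_pareto <- optim(c(log(1), log(mean(dat$alae))), nll_pareto)
alae_shape <- exp(fit_pareto$par[1]); alae_scale <- exp(fit_pareto$par[2])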
You can see the upper tail correlation since there is a cluster of points in the upper right hand corner. That means that when indemnity is large, or in the upper quantiles, then ALAE also tends to be relatively large, or in the upper quantiles of the ALAE distribution. You can also see even more distinctly the lack of points in the upper-left and lower-right corners. This means that it is very rare to have a small ALAE amount accompanying a large loss amount and vice versa. If ALAE and indemnity were independent, then the points would be uniformly scattered across the entire square and you would see a similar number of points in each of the four corners. #This assumes the user already has an indemnity distribution, i.e. exposure curve, they want to use. #This just defines the cumulative distribution function of the exposure curve as a distribution object in R. As mentioned above, we are not fitting a curve to the indemnity data; we are assuming that you already have an empirical distribution representing projected indemnity amounts. This is usually based on the types of business written by the ceding company and at what limits. The minimum value in the indemnity severity distribution is $100,000. This is what we will call the model threshold. This just simplifies the analysis by not requiring us to know about the severity distribution far below the reinsurance attachment point. # This simulates as many random draws from the fitted copula as there are in the original dataset. # This uses the cumulative ALAE and loss distributions to transform the copula data, which is in terms of percentiles, into loss/ALAE amounts. # This is a plot of the actual data versus simulated data. Of course, the above plot may be of limited value because we've only simulated as many points from the fitted distribution as there are points in the original dataset, so even two such plots from the same fitted distribution may look very different. Interestingly, we need to simulate only from the copula distribution. These points live on the unit square; we then use the "q" function of the fitted distributions to convert from cumulative percentiles to x-values of ALAE and indemnity. #Define the number of points desired for the final empirical loss plus alae distribution. Should be orders of magnitude less than n_simulations for stability.
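A sketch of the copula step just described, continuing the reconstruction above and using the copula package from the references. Here q_indemnity() is a placeholder for the quantile function of whatever exposure curve the user supplies, and alae_shape/alae_scale are the Pareto parameters fitted in the earlier sketch.

# Sketch of the copula fit / simulation / transformation step (not the attached code).
library(copula)

fit   <- fitCopula(gumbelCopula(dim = 2), u, method = "mpl")  # u: pseudo-observations
theta <- coef(fit)
tail_dep <- 2 - 2^(1 / theta)       # upper tail dependence implied by the Gumbel copula

n_simulations <- 1e5
v <- rCopula(n_simulations, gumbelCopula(theta, dim = 2))     # draws on the unit square

# q_indemnity() is a placeholder for the quantile function of the user's exposure curve
# (empirical severity distribution with the $100k model threshold); qpareto is the ALAE fit.
sim_loss <- q_indemnity(v[, 1])
sim_alae <- qpareto(v[, 2], shape = alae_shape, scale = alae_scale)
simloss  <- cbind(sim_loss, sim_alae)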
#Reinsurance loss under ALAE pro-rata treatment which is layered indemnity plus an equal portion of the ALAE as the reinsured indemnity is of total indemnity. We should compare the final results of the copula method for modeling ALAE to the classical assumption we talked about before. The classical assumption is that ALAE is a fixed percent of indemnity for every loss. The first step is to determine what that fixed percentage should be. A very simple way is to take the total ALAE in our loss dataset and divide by the total indemnity. This is a commonly used method and so will give us a fair comparison. # We assume that the fixed ALAE load to be applied to each claim is the total ALAE in the dataset divided by the total loss (including below the threshold), which is a typical practice. #We load each indemnity amount by the ALAE load and then apply the layering. The rest of the calculations are identical. #Exercise: Work out that this is the correct formula for reinsurance loss in this case. Note that the frequency doesn't change because under ALAE pro-rata treatment, the frequency is determined by indemnity amount only which does not depend on the ALAE. Since the frequency is the same, it makes sense that the difference in loss cost is entirely due to differences in severity so we see these differences being equal. But why is it that the loss, or severity, is lower for the new method for every layer except the very highest layer where it is much higher? It could be that for very high indemnity amounts, the tail correlation of the copula tends to draw the very high ALAE amounts which, due to the heavy tailed-ness of the pareto distribution, are a much greater than the average ALAE based on the ALAE ratio. Or it could be simulation error since we have only a finite number of points. For very high indemnity amounts, the tail correlation of the copula tends to draw the very high ALAE amounts which, due to the heavy tailed-ness of the pareto distribution, are a much greater than the average ALAE based on the ALAE ratio. However, in this example, more simulations should be done to increase the stability of the top layer. This is interesting because we see the loss cost in the lowest layer increase and the highest layer decrease which is opposite of the ALAE pro-rata case. For the low layer, this may be because our model allows large ALAE amounts to occur for small indemnity amounts (with probability determined by the copula) and so we have additionally those indemnity amounts below the threshold divided by the ALAE load able to enter the layer, whereas under the classical assumption they would not. For the highest layer we may be getting the benefit of the partial correlation given by the copula, as opposed to 100% correlation in the classical assumptions. As you probably noticed from the comparison table, the classical method is doing a fine job most of the time (otherwise the alarm would have been sounded already!). What I would like you to take away from this, rather than just blindly implementing the method, is to think about how ALAE has its own distribution and is tail correlated with indemnity. This has implications for certain particular scenarios: a $1M layer attachment when all policy limits (applying only to indemnity) are $1M and the reinsurance covers ALAE included with the loss. What about layers that attach just above the ALAE load times a common policy limit, do losses from those policy limits really contribute no expected loss to the layer? 
Occasionally a reinsurance contract will say the ALAE treatment can either be included or pro-rata, whichever the client prefers. What should this cost and how does the distribution of ALAE and tail correlation with indemnity affect that cost? With this blogpost as a starting point, hopefully you are in a better position to answer those questions. 1. Camphausen, F. et al. "Package 'distr'". Cran.org. Version 2.4. February 7, 2013. 2. Dutang, C. et al. "actuar: An R Package for Actuarial Science". Journal of Statistical Software. March 2008, Volume 25, Issue 7. 3. Frees, E.; Valdez, E. "Understanding Relationships Using Copulas". North American Actuarial Journal, Volume 2, Number 1. 1998. 4. Genest, C.; MacKay, J. "The Joy of Copulas: Bivariate Distributions with Uniform Marginals". The American Statistician, Volume 40, Issue 4 (Nov., 1986),280-283. 5. Geyer, C. "Maximum Likelihood in R". www.stat.umn.edu/geyer. September 30, 2003. 6. Hofert, M. et al. "Package 'copula'". Cran.org. Version 0.999-5, November 2012. 7. Joe, Harry. Multivariate Models and Dependence Concepts. Monographs on Statistics and Probability 73, Chapman & Hall/CRC, 2001. 8. Kojadinovic, I.; Yan, J. "Modeling Multivariate Distributions with Continuous Margins Using the copula R Package". Journal of Statistical Software. May 2010, Volume 34, Issue 9. 9. Micocci, M.; Masala, G. "Loss-ALAE modeling through a copula dependence structure". Investment Management and Financial Innovations. Volume 6, Issue 4, 2009. 10. Ricci, V. "Fitting Distributions with R". Cran.r-project.org. Release 0.4-21, February 2005. 11. Ruckdeschel, P. et al. "S4 Classes for Distributions—a manual for packages "distr", "distrEx", "distrEllipse", "distrMod", "distrSim", "distrTEst", "distrTeach", version 2.4". r-project.org. February 5, 2013. 12. Venter, G. "Tails of Copulas". Proceedings of the Casualty Actuarial Society. Arlington, Virginia. 2002: LXXXIX, 68-113. 13. Yan, J. "Enjoy the Joy of Copulas: With a Package copula". Journal of Statistical Software. October 2007, Volume 21, Issue 4. I think Greg's blog would get more exposure and discussion if it were available as a shiny app. If you click on the "Files" link at the bottom of this wikidot page you will see Greg's R code. This would be loaded into RStudio, which would then "compile" it into a shiny app. Also at the bottom you will also see three csv files the program needs. Versions of those three files on one's computer could be selected using shiny drop-down boxes. It has been a few weeks/months since I read Greg's paper, but there may be one or two other defaults in his algorithm that could be changed with shiny selection widgets. RStudio will host the online app for free. I've had experience building csv file selection boxes in shiny online apps and have been intending to start this project for some time. But it would be more fun to work on this with other people — and might actually get done that way! Let me know if you're interested by replying to this post. Thanks. Dan: did anyone ever reply to this? I've recently tried to get back into shiny and this could be fun. Hi Brian, thanks for the reply. I think you are being modest — upon visiting the word-cloud example in the shiny gallery, I noted PirateGrunt lurking in the footnote! I've since started blogging about shiny-ing Greg's code on my tri-know-bits site, and have enough material for the next two weeks: displaying all plots, and uploading to shiny's free online hosting service. 
However, I note that you have also registered for shiny's free service, so it's up for discussion under whose name to upload mauc: yours, mine, or someone else on the committee who might want to give this a try. Ultimately, I believe mauc users will get a deeper appreciation of copule (Italian plural) in practice if the app could receive users' own data. But first things first.
CommonCrawl
The strong coupling constant $\alpha_S$ is the least well known of all constants of nature, which play a role in the Standard Model (SM) of particle physics and related fields such as cosmology and astrophysics. For many searches for new physics beyond the SM as well as for some important precision tests of the SM using collider data the uncertainty on the value of $\alpha_S$ is a limiting factor. In recent years progress in theoretical predictions of Quantum Chromodynamics (QCD), and the availability of collider data at the highest energies has led to many improved determinations of $\alpha_S$. The current world average quotes an uncertainty of less than 1%. However, there are noticeable discrepancies between different categories of determinations of $\alpha_S$, which may limit the ultimate precision of future world averages. We plan to bring together in this workshop the leading experts on determinations of $\alpha_S$ from theory and experiment and all important categories. With presentations of the latest results and intense discussion by all participants we will focus on a global view of advantages and problems of each method.
CommonCrawl
The image shows a grid on the surface. How do I calculate the connection points of this grid? I want to create a 3D mesh representation of it in other software (a video game actually). Display grid lines are almost always the isoparametric lines corresponding to some parameterization. Specifically, if the surface parameterization is $(u,v)\mapsto \mathbf S(u,v)$, then the grid lines are curves of the form $u\mapsto \mathbf S(u,v_0)$ and $v\mapsto \mathbf S(u_0,v)$, for fixed $u_0$ and $v_0$. So, if you have some choice about the kinds of surfaces you're going to use, choose ones that are easy to parameterize (at least piecewise, anyway). In your picture of the Cayley surface, it looks like it's parameterized by $x$ and $y$ (in each quadrant separately). So, for any given $x$ and $y$, you can "shoot" a ray in the vertical direction (parallel to the $z$-axis) and intersect it with the surface. This gives you a parameterization $(x,y)\mapsto \mathbf S(x,y)$. But this kind of intersection process is very slow, so, again, it is best to choose surfaces that have a nice simple parameterization, if you can. If parameterizations are difficult to construct, then another approach is direct tessellation of the implicit surface. This won't give you grid lines, but it will give you triangles, which is probably what you need for graphics. This is a fairly well researched topic. Many people use some variant of the "marching cubes" algorithm. If you search for "tessellation" and "implicit surfaces", you will find plenty of material.
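As a concrete illustration of the answer, here is a small Python sketch that samples a parameterization S(u, v) on a regular (u, v) grid, which yields both the isoparametric grid lines and a triangle mesh suitable for a game engine. The saddle surface used for S is just a placeholder assumption; substitute your own parameterization.

import numpy as np

def S(u, v):
    """Hypothetical parameterization; replace with your own surface."""
    return np.array([u, v, u**2 - v**2])

nu, nv = 20, 20
us = np.linspace(-1.0, 1.0, nu)
vs = np.linspace(-1.0, 1.0, nv)

# Vertices: sample the surface on the (u, v) grid; vertex index = j*nu + i.
verts = np.array([S(u, v) for v in vs for u in us])

# Isoparametric grid lines: polylines of vertex indices with v fixed (rows)
# or u fixed (columns); these are the "connection points" of the displayed grid.
rows = [[j * nu + i for i in range(nu)] for j in range(nv)]   # v = const
cols = [[j * nu + i for j in range(nv)] for i in range(nu)]   # u = const

# Triangles: split each grid cell into two triangles for the mesh.
tris = []
for j in range(nv - 1):
    for i in range(nu - 1):
        a, b = j * nu + i, j * nu + i + 1
        c, d = (j + 1) * nu + i, (j + 1) * nu + i + 1
        tris += [(a, b, d), (a, d, c)]

print(len(verts), "vertices,", len(rows) + len(cols), "grid lines,", len(tris), "triangles")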
CommonCrawl
Abstract: We show how to solve a generalised version of the Multi-sequence Linear Feedback Shift-Register (MLFSR) problem using minimisation of free modules over $\mathbb F[x]$. We show how two existing algorithms for minimising such modules run particularly fast on these instances. Furthermore, we show how one of them can be made even faster for our use. With our modeling of the problem, classical algebraic results tremendously simplify arguing about the algorithms. For the non-generalised MLFSR, these algorithms are as fast as what is currently known. We then use our generalised MLFSR to give a new fast decoding algorithm for Reed Solomon codes.
CommonCrawl
Dave Richeson tweeted about a puzzle from Futility Closet (original source a Russian mathematical olympiad): can you split the integers 1, 2, …, 15 into two groups A and B, with 13 elements in A and 2 elements in B, so that the sum of the elements of A is the product of the elements of B? Think about it for a moment. There's of course the temptation to brute-force it, which is doable, but there's a more elegant solution. This got me thinking – when can you split the integers 1, 2, …, n into two groups A and B, where B has two elements, so that the sum of the elements of A is the product of the elements of B? If B = {x, y}, the condition is that $n(n+1)/2 - x - y = xy$, which rearranges to $(x+1)(y+1) = n(n+1)/2 + 1$; x and y are both at most n. solutions takes an integer n as input and returns pairs [x, y] which are solutions to the problem. For example solutions(17) returns [[10, 13]]. So it appears that there's nothing particularly special about the number 15 in the initial puzzle. There are plenty of values n for which you can't do this, and plenty for which you can. Also, there are values of n for which there are multiple solution pairs (x, y), although not surprisingly they are rare. The smallest such n is 325, for which (171, 307) and (175, 300) are both solutions. In this case $n(n+1)/2 + 1 = 52976 = 2^4 \times 7 \times 11 \times 43$, from which 52976 has (5)(2)(2)(2) = 40 factors. A typical number of this size has about $\ln(52976) \approx 11$ factors. This abundance of factors makes it more likely that 52976 would have two factorizations of the sort we're looking for. And in fact $172 \times 308 = 176 \times 301 = 52976$. Solutions to this problem appear to have some interesting statistical properties… more on that in a future post. A very insightful investigation of a fun puzzle. I'm a math undergraduate and have been simply browsing. Your solution was very easy to follow, but I had a little trouble understanding your boundaries for x. I really like how this puzzle led you to find other values for n. Very interesting read!
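The post's solutions function itself is not shown; here is a minimal sketch of what such a function could look like, based on the identity (x+1)(y+1) = n(n+1)/2 + 1 used above. The function and variable names are my own, not the original code.

def solutions(n):
    """Pairs [x, y] with x < y <= n and sum(1..n) - x - y == x * y."""
    target = n * (n + 1) // 2 + 1          # (x + 1) * (y + 1) must equal this
    out = []
    for a in range(2, int(target**0.5) + 1):   # a = x + 1, so x >= 1
        if target % a == 0:
            b = target // a                    # b = y + 1
            x, y = a - 1, b - 1
            if x < y <= n:                     # distinct elements, both at most n
                out.append([x, y])
    return out

print(solutions(17))   # [[10, 13]], matching the example in the post
print(solutions(325))  # [[171, 307], [175, 300]]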
CommonCrawl
If I have a polynomial p in variables $x_0,...,x_n$, how do I specialize the algebra appropriately to substitute values for $x_i$'s? For example, how do I compute $p(1,1,...,1)$? Or replace $x_i$ by $q^i$ ($q$ a parameter) so to compute $p(1,q,...,q^n)$? In Mathematica, if the variables were x[[i]], one could do "./x[[i]] -> q^i //Simplify" and it is the equivalent of this replace and simplify that I am looking for. This is coming from symmetric polynomials/functions theory and I know some of the specializations are built in, but at the end of the day I want to try small examples with different specializations than what is already built in.
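The question concerns a computer algebra system; as one concrete illustration (not necessarily the system the poster is using), here is how both specializations could be done with SymPy in Python. The polynomial p chosen here is a placeholder, and the names x and q mirror the question.

from sympy import symbols, expand

n = 3
x = symbols(f"x0:{n + 1}")        # the variables x0, x1, ..., xn
q = symbols("q")

# A sample polynomial in x0, ..., xn (placeholder; use your own p).
p = sum(x) ** 2 + x[0] * x[n]

# Specialize every variable to 1, i.e. compute p(1, 1, ..., 1).
print(p.subs({xi: 1 for xi in x}))

# Principal specialization x_i -> q^i, then simplify/expand the result.
print(expand(p.subs({xi: q**i for i, xi in enumerate(x)})))

Sage and Mathematica have analogous substitution mechanisms (subs and ReplaceAll respectively), so the same pattern of building a dictionary/rule list of replacements and then simplifying carries over.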
CommonCrawl
Abstract:In the recent progress of the classi fication theorem of C*-algebras, we have seen connections with the regularity properties and associated conditions in the theory of injective von Neumann algebras, which is shown by A. Connes and U. Haagerup. In particular, it is known that the proof with Connes' approach is based on his results on automorphisms of injective factors. In this talk, along the recent evolution of Elliott's program, I will revisit classifi cation theorems of automorphisms on nuclear C*-algebras and discuss the connections between them. Abstract: The first half of my talk is based on joint work with Hiroki Matui. Isomorphism classes of Cuntz--Krieger algebras are closely related to continuous orbit equivalence classes of one-sided topological Markov shifts. For two-sided topological Markov shifts, Boyle--Handelman (1996 Israel J.) have shown that orbit equivalence, ordered cohomology groups, flow equivalence and Ruelle's dynamical zeta functions are closely related to each other. In this talk, I would like to show that they have deep connections with continuous orbit equivalence of one-sided topological Markov shifts and classification of gauge actions on Cuntz--Krieger algebras. Abstract: After Rosenblatt, a group is said to be supramenable if it has no paradoxical subsets. In the first part of the talk, we characterize supramenability in terms of existence of tracial states on partial crossed products. In the second part, we show that a group is locally finite if and only if its Roe algebra is finite. We also discuss the problem of characterizing the group C*-algebra of locally finite groups. Title: Ordered Bratteli diagrams and Cantor minimal systems. It is well known that simple dimension groups appear as complete isomorphism invariants for (simple) AF-algebras as well as for C*-crossed products associated to Cantor minimal systems.Furthermore,simple dimension groups also appear as complete invariants for orbit equivalence, respectively,strong orbit equivalence,of Cantor minimal systems. In this talk we will mention some fairly recent results how change of the ordering of a given Bratteli diagram yield entirely different Cantor minimal systems,while the systems themselves are orbit equivalent, respectively,strong orbit equivalent. Abstract: We discuss a generalization of the notion of strongly self-absorbing C*-algebras to the setting of C*-dynamical systems. The main result is an equivariant McDuff-type theorem that characterizes exactly when an action of a locally compact group on a separable C*-algebra absorbs a given strongly self-absorbing action tensorially up to cocycle conjugacy. I then demonstrate what kind of (equivariant) permanence properties carry over in this context, similar to how D-stability is closed under various C*-algebraic operations. If time permits, we also discuss some natural examples and/or a non-trivial application to actions on Kirchberg algebras. Abstract: Recently, we showed that Kirchberg algebras satisfying the UCT are semiprojective if and only if their K-groups are finitely generated. Here we will discuss how one can avoid the UCT-assumption and obtain a characterization of semiprojectivity for Kirchberg algebras purely in KK-theoretic terms. Title: Nuclear dimension of C*-algebras of homeomorphisms. Abstract: Suppose X is a compact metrizable space with finite covering dimension, and h a homeomorphism of X. Let A be the crossed product of C(X) by the induced automorphism. 
It was shown first by Toms and Winter, and in a different way by the speaker, Winter and Zacharias, that if h is a minimal homeomorphism then A has finite nuclear dimension. Szabo then showed that it suffices to assume that h is free. In this talk, I'll discuss a recent preprint which settles the issue for arbitrary homeomorphisms. As a special case, we show that group C*-algebras of certain non-nilpotent groups have finite nuclear dimension. This is joint work with Jianchao Wu. Title: On the normal subgroups of invertibles and unitaries of a C*-algebra. Abstract: I will discuss a number of results on the structure of the normal subgroups of the invertibles and the unitaries in the connected component of the identity in a C*-algebra. Asbtract: I will outline the interplay of quasidiagonality and the Universal Coefficient Theorem in the recent classification result for tracial, separable, unital, simple, nuclear C*-algebras with finite nuclear dimension. Abstract: Developments in the classification and the structure theory of C*-algebras in the past decade have highlighted the importance of an assortment of regularity properties, one of the most prominent of them being the property of having finite nuclear dimension. This has spurred growing interests in the advances of noncommutative dimension theories, for which a focal challenge is to find ways of bounding nuclear dimension for crossed product C*-algebras. To this end, various dimensions of dynamical nature have been developed, including Rokhlin dimension, dynamical asymptotic dimension, amenability dimension, etc. Roughly speaking, these dimensions measure the complexity of the topological or C*-dynamical system that gives rise to a given crossed product. We will discuss some of these concepts as well as their generalizations and applications. The talk is based on joint works with Ilan Hirshberg, Gabor Szabo, Wilhelm Winter and Joachim Zacharias. Abstract: Recently Downarowicz, Huczek, and Zhang proved that every discrete amenable group can be tiled by translates of finitely many Følner sets with prescribed approximate invariance. I will show how this can be used to strengthen the Rokhlin lemma of Ornstein and Weiss, with applications to topological dynamics and the classification program for simple separable nuclear C*-algebras. Title: Near unperforation, almost unperforation, and almost algebraic order. Abstract: In this talk I will present a general overview of structural properties of the category Cu of abstract Cuntz semigroups, focusing on the tensor product of a semigroup of a C*-algebra of real rank zero with the semigroup $Z$ of the Jiang-Su algebra $\mathcal Z$. In particular, a somewhat surprising connection between the condition of almost unperforation and the so-called axiom of almost algebraic order. The talk is based on joint work with R. Antoine, H.Thiel, and also R. Antoine, H. Petzka. Abstract: The Cuntz semigroup of a C* -algebra is an analogue for positive elements of the semigroup of Murray-von Neumann equivalence classes of projections. The Cuntz semigroup is deeply connected to the classification program of C*-algebras. It has been used to classify certain classes of nonsimple C*-algebras as well as to distinguish simple unital C*-algebras that can not be classified using K-theory and traces. In general, the Cuntz semigroup contains the tracial information of the algebra. Also, it is known that for stably finite unital C*-algebras the K$_0$-group of the algebra can be recovered from this semigroup. 
This however does not hold in the projectionless case. In this talk I will introduce a variant of the Cuntz semigroup that fixes this problem. This semigroup was introduce by Leonel Robert in order to classify certain classes of (not necessary simple) inductive limits of 1-dimensional noncommutative CW-complexes. In this talk I will discuss properties of this semigroup and give some computations of it. In particular, I will show that for simple stably projectionless algebras that are $\mathcal Z$-absorbing, this semigroup together with the K$_1$-group contains the same information as the Elliott invariant of the algebra. This is a joint work with Leonel Robert. Abstract: This talk is based on joint work with Kang Li, Fernando Lledó and Jianchao Wu. I will survey some amenability notions arising in different contexts. Generalizing amenability of groups, Block and Weinberger introduced the concept of (coarse) amenability for metric spaces with bounded geometry. We study also the concept of amenability for general algebras, introduced by Gromov, and we obtain a dichotomy result in this context, generalizing a result of Elek. Finally F\o lner nets for C*-algebras of operators will be introduced, and all the notions will be related through the consideration of the uniform Roe algebra of a metric space with bounded geometry. Abstract: A masa of a C*-algebra is said to be a Cartan subalgebra if it is the image of a faithful conditional expectation and if it is regular in the sense that its normalizer generates the ambient C*-algebra. By a remarkable result of Renault, a C*-algebra that admits a Cartan subalgebra can be realized as the reduced twisted groupoid C*-algebra of an étale, locally compact, Hausdorff groupoid. Applying this and Tu's striking results and techniques used in the proof of the Baum-Connes Conjecture for amenable groupoids, we will show that a separable, nuclear C*-algebra possessing a Cartan subalgebra satisfies the UCT. As an application, we shall see how the UCT for separable, nuclear C*-algebras KK-equivalent to their tensorial CAR-algebra stabilization relates to Cartan subalgebras and order two automorphisms of the Cuntz algebra O_2. This is joint work with Xin Li. Abstract: We propose a bivariant version of the Cuntz Semigroup based on equivalence classes of order zero maps rather than positive elements. The resulting theory contains the ordinary Cunz Semigroup as special case similarly to KK-theory containing K-theory and admits a composition product and a number of other useful properties. We explain a couple of examples and indicate how this bivariant Cuntz Semigroup can be used to classify stably finite algebras in analogy to the Kirchberg-Phillips classification of simple purely infinite algebras via KK-theory. Abstract: We study different comparison properties at the category $\Cu$ framework aiming to lift this information to the C*-algebraic setting. In particular, we give analogous characterizations for the so-called Corona Factorization property and the $\omega$-comparison property. These help to both determine whenever a C*-algebra has CFP and narrow the range of the category $\Cu$ as invariant for C*-algebras. In particular, we show that the well-known C*-algebra described by Rordam with a finite and an infinite projection does not enjoy CFP and that the so-called Elementary $\Cu$-semigroups can not derive from a C*-algebra. It is joint work with H. Petzka. 
Abstract: This summer, Eilers-Restorff-Ruiz-Sørensen showed that the class of Cuntz-Krieger algebras (including those not purely infinite) is classified up to stable isomorphism by reduced filtered K-theory with ordered K_0-groups. I will describe the range of their invariant. This is joint work with Rasmus Bentmann. Abstract: I will try to demonstrate how the machinery of homological algebra can be set up for categories of C*-algebras (such as Kasparov categories of C*-algebras over a topological space or of dynamical systems), and in fact is the right framework for deriving results like UCT or classification of subcategories. Abstract: We prove that separable, nuclear, purely infinite C*-algebras with the ideal property (in particular of real rank zero) have nuclear dimension 1. These C*-algebras are not assumed to be simple, but in the special case of simple C*-algebras, we obtain a new and short proof of the fact that Kirchberg algebras have nuclear dimension 1. Abstract: We discuss some results and some things we think are close to being proved about the conjecture that the radius of comparison of the crossed product by a minimal homeomorphism is equal to half the mean dimension of the homeomorphism.
CommonCrawl
Principal rank characteristic sequence; enhanced principal rank characteristic sequence; minor; rank; symmetric matrix; finite field. The enhanced principal rank characteristic sequence (epr-sequence) of an $n \times n$ symmetric matrix over a field $\F$ was recently defined as $\ell_1 \ell_2 \cdots \ell_n$, where $\ell_k$ is either $\tt A$, $\tt S$, or $\tt N$ based on whether all, some (but not all), or none of the order-$k$ principal minors of the matrix are nonzero. Here, a complete characterization of the epr-sequences that are attainable by symmetric matrices over the field $\Z_2$, the integers modulo $2$, is established. Contrary to the attainable epr-sequences over a field of characteristic $0$, this characterization reveals that the attainable epr-sequences over $\Z_2$ possess very special structures. For more general fields of characteristic $2$, some restrictions on attainable epr-sequences are obtained. Martínez-Rivera, Xavier. (2017), "The enhanced principal rank characteristic sequence over a field of characteristic 2", Electronic Journal of Linear Algebra, Volume 32, pp. 273-290.
CommonCrawl
mostly I added in a section on homotopical categories, using some paragraphs from Andre Joyal's message to the CatTheory mailing list. I removed in the introduction the link to the page "Why (oo,1)-categories" and instead expanded the Idea section a bit. Format: MarkdownItexLooking back at _[[(infinity,1)-category]]_ I found that lots of context was missing there. As a first step in an attempt to correct this, I created a subsection "Properties" with some pointers to relevant other entries. Looking back at (infinity,1)-category I found that lots of context was missing there. As a first step in an attempt to correct this, I created a subsection "Properties" with some pointers to relevant other entries. Format: MarkdownI added the reference * Omar Antolín Camarena, _A whirlwind tour of the world of $(\infty,1)$-categories_ ([arXiv](http://arxiv.org/abs/1303.4669)) > This introduction to higher category theory is intended to a give the reader an intuition for what $(\infty,1)$-categories are, when they are an appropriate tool, how they fit into the landscape of higher category, how concepts from ordinary category theory generalize to this new setting, and what uses people have put the theory to. It is a rough guide to a vast terrain, focuses on ideas and motivation, omits almost all proofs and technical details, and provides many references. This introduction to higher category theory is intended to a give the reader an intuition for what $(\infty,1)$-categories are, when they are an appropriate tool, how they fit into the landscape of higher category, how concepts from ordinary category theory generalize to this new setting, and what uses people have put the theory to. It is a rough guide to a vast terrain, focuses on ideas and motivation, omits almost all proofs and technical details, and provides many references. Format: MarkdownItexIn the entry on [[(infinity,1)-category]] there is the phrase:an (∞,1)-category is an internal to in ∞-groupoids/basic homotopy theory. I tried to see how to clear up the grammar, but it was not clear to me what the wording was intended to be. There was a previous version: >To some extent an (∞,1)-category can be thought of as a category enriched in (∞,0)-categories, namely in ∞-groupoids. That is vague, so needed changing, but there seem to be 'typos' in the current version. In the entry on (infinity,1)-category there is the phrase:an (∞,1)-category is an internal to in ∞-groupoids/basic homotopy theory. To some extent an (∞,1)-category can be thought of as a category enriched in (∞,0)-categories, namely in ∞-groupoids. That is vague, so needed changing, but there seem to be 'typos' in the current version. Format: MarkdownItexIt was probably supposed to be "an internal category in ..." or "a category internal to ...". It was probably supposed to be "an internal category in …" or "a category internal to …". Format: MarkdownItexYes, but is that completely correct? The `enriched' version to me was clearer. Is an (∞,1)-category really an internal category in ∞-groupoids, as that would mean the object of objects would be an ∞-groupoid, or am I mistaken? Yes, but is that completely correct? The 'enriched' version to me was clearer. Is an (∞,1)-category really an internal category in ∞-groupoids, as that would mean the object of objects would be an ∞-groupoid, or am I mistaken? 
Thanks for catching that, I have fixed the sentence now and expanded it such as to read as follows:

More precisely, this is the notion of category up to coherent homotopy: an $(\infty,1)$-category is equivalently

* an internal category in ∞-groupoids/basic homotopy theory (as such usually modeled as a complete Segal space).
* a category homotopy enriched over ∞Grpd (as such usually modeled as a Segal category).

> Is an (∞,1)-category really an internal category in ∞-groupoids, as that would mean the object of objects would be an ∞-groupoid,

Yes, it's the completeness condition of complete Segal spaces that takes care of this issue. Details are at internal category in an (∞,1)-category.

Added [pointer](http://ncatlab.org/nlab/show/%28infinity%2C1%29-category#AyalaFrancisRozenblyum15) to the new preprint by Ayala and Rozenblyum. Though it doesn't seem to have the previously announced statement about $(\infty,n)$-categories with duals yet.

Has anything more been made of the other approach to stratified spaces where one moves up through strata and back down again? You may remember that discussion at the Cafe [here](https://golem.ph.utexas.edu/category/2006/11/this_weeks_finds_in_mathematic_2.html#c006196). It gave rise to [Transversal homotopy theory](http://arxiv.org/abs/0910.3322) by Jon Woolf, who also wrote a paper mentioned by Ayala and Rozenblyum, [The fundamental category of a stratified space](http://arxiv.org/abs/0811.2580). The idea was to give fundamental categories with duals.

Added reference to Riehl-Verity's book.
CommonCrawl
I am using MATLAB to do an optimisation. The QP minimisation problem is set up in the standard form shown below. The optimisation is used to calculate the weights (the x vector in the equation below) of a portfolio. I am trying to follow an example but have two issues. Firstly, say the portfolio has 500 stocks; the x vector passed into the optimiser (x here is our initial guess) will have dimension 1000 x 1. The second 500 entries will have the opposite sign of the first 500, and I do not understand why this is. Also, the F matrix does something similar. Say I have a matrix R which contains some risk factors, which is 500 x 500. The solver is actually Tomlab (the user guide of the solver is linked here). Just stepping through the code: x0 is passed as an initial guess vector, 1000 x 1. The first 500 entries are the previous weights; the next 500 entries are all set to zero. x_up is obviously also a 1000 x 1 vector too. Looking further into the code: the first 500 entries are the upper bounds on the buys, and the next 500 are the upper bounds on the sells. x_low is the same but for the lower bounds: the first 500 entries are the lower bounds on the buys and the next 500 are the lower bounds on the sells. If you have other linear constraints for $x_i$, you simply plug in $x_i = x_i^+ + x_i^-$.
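To see why a stacked vector of twice the length with opposite-signed blocks can appear, here is a small self-contained sketch in Python rather than MATLAB/Tomlab, with a toy 3-asset problem instead of 500 stocks. One common convention is shown (buys in the first block, sells in the second); the exact convention in the Tomlab example may differ, and every number below is made up for illustration.

import numpy as np
from scipy.optimize import minimize

n = 3
w_prev = np.array([0.30, 0.50, 0.20])        # previous portfolio weights (toy)
R = np.array([[0.04, 0.01, 0.00],            # toy risk (covariance) matrix
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
alpha = np.array([0.02, 0.05, 0.01])         # toy expected returns
tc = 0.002                                   # linear cost per unit traded

# Split the trade into buys b >= 0 and sells s <= 0, so the new weights are
# w = w_prev + b + s and the stacked decision vector is x = [b; s] (2n x 1).
def new_weights(x):
    b, s = x[:n], x[n:]
    return w_prev + b + s

def objective(x):
    w = new_weights(x)
    risk = w @ R @ w                         # quadratic term, like x'Fx
    cost = tc * np.sum(x[:n] - x[n:])        # total traded amount, since s <= 0
    return risk - alpha @ w + cost

cons = [{"type": "eq", "fun": lambda x: new_weights(x).sum() - 1.0}]  # fully invested
bounds = [(0.0, 0.10)] * n + [(-0.10, 0.0)] * n   # buy/sell at most 10% per name

x0 = np.zeros(2 * n)                         # initial guess: no trades
res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=cons)
print("buys:", res.x[:n].round(4), "sells:", res.x[n:].round(4))
print("new weights:", new_weights(res.x).round(4))

The point of the split is that buys and sells can carry different bounds (and different costs) while the quadratic risk term still acts on their sum, which is why the bound vectors in the question have one block for buys and one block of opposite sign for sells.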
CommonCrawl
A queen can be either black or white, and there can be unequal numbers of each type. A queen must not be threatened by other queens of the same color. Queens threaten all squares in the same row, column, or diagonal (as in chess). Also, threats are blocked by other queens. Would this number change if rule 1 were changed to enforce equal numbers of black and white queens? Every other row is filled with alternating queens. Starting queens on each filled row alternate. Filling white queens into the spaces and repeating the pattern gives us the 32-queen solution.
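Here is a small Python sketch of a brute-force checker for the rules as stated: threats run along rows, columns and diagonals, and are blocked by any intervening queen. The 8x8 board size and the particular pattern being tested are assumptions for illustration (an interpretation of the answer's description, not necessarily the intended arrangement); a full solver would search over placements rather than verify a single one.

N = 8
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]   # row, column and the two diagonals

def peaceful(board):
    """board[r][c] in {'.', 'B', 'W'}; True if no queen sees a same-color queen."""
    for r in range(N):
        for c in range(N):
            if board[r][c] == '.':
                continue
            for dr, dc in DIRS:
                rr, cc = r + dr, c + dc
                while 0 <= rr < N and 0 <= cc < N:
                    if board[rr][cc] != '.':
                        if board[rr][cc] == board[r][c]:
                            return False      # first queen seen has the same color
                        break                 # any queen blocks the line of sight
                    rr, cc = rr + dr, cc + dc
    return True

# Example pattern in the spirit of the answer: queens on every other row,
# colors alternating along each filled row, with the starting color shifting
# from one filled row to the next.
board = [['.'] * N for _ in range(N)]
for r in range(0, N, 2):
    for c in range(N):
        board[r][c] = 'BW'[(c + r // 2) % 2]

queens = sum(row.count('B') + row.count('W') for row in board)
print(queens, "queens, peaceful:", peaceful(board))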
CommonCrawl
It is shown that if a point $x_0$ admits a bounded point derivation on $R^p(X)$, the closure of rational functions with poles off $X$ in the $L^p(dA)$ norm, for $p >2$, then there is an approximate derivative at $x_0$. A similar result is proven for higher-order bounded point derivations. This extends a result of Wang, which was proven for $R(X)$, the uniform closure of rational functions with poles off $X$. Bishop, E., A minimal boundary for function algebras, Pacific J. Math. 9 (1959), 629–642. Brennan, J. E., Invariant subspaces and rational approximation, J. Functional Analysis 7 (1971), 285–310. Browder, A., Point derivations on function algebras, J. Functional Analysis 1 (1967), 22–27. Dolženko, E. P., Construction on a nowhere dense continuum of a nowhere differentiable function which can be expanded into a series of rational functions, Dokl. Akad. Nauk SSSR 125 (1959), 970–973. Fernström, C. and Polking, J. C., Bounded point evaluations and approximation in $L^p$ by solutions of elliptic partial differential equations, J. Functional Analysis 28 (1978), no. 1, 1–20. Hedberg, L. I., Bounded point evaluations and capacity, J. Functional Analysis 10 (1972), 269–280. Sinanjan, S. O., The uniqueness property of analytic functions on closed sets without interior points, Sibirsk. Mat. Ž. 6 (1965), 1365–1381.
CommonCrawl
Abstract: Symmetry is an important for many academic disciplines. Objects that are symmetric are often seen as possessing an aesthetic quality. In this talk, we will consider the types of symmetry patterns that can be placed on a sphere and mathematically demonstrate the number of these patterns. We will also look at how these symmetries can be mapped onto general three-dimensional objects. Abstract: Green Fuse Films' award-winning documentary Between the Folds chronicles the stories of ten fine artists and intrepid theoretical scientists who have abandoned careers and scoffed at hard-earned graduate degrees—all to forge unconventional lives as modern-day paperfolders. As they converge on the unlikely medium of origami, these artists and scientists reinterpret the world in paper, and bring forth a bold mix of sensibilities towards art, expressiveness, creativity and meaning. And, together these offbeat and provocative minds demonstrate the innumerable ways that art and science come to bear as we struggle to understand and honor the world around us—as artists, scientists, creators, collaborators, preservers, and simply curious beings. "Luminously photographed", with a "haunting" original score featuring the Budapest Symphony Orchestra, the film paints an arresting portrait of the mysterious creative threads that bind us all–fusing science and sculpture, form and function, ancient and new. See www.greenfusefilms.com for more information. Abstract: Algebraic geometry has been at the center of much of mathematics for hundreds of years. Its applications range from number theory to modern physics. Yet, it begins quite humbly with the study of conic sections: circles, ellipses, hyperbolas, and parabolas. What is algebraic geometry and how did it grow beyond the scope of these familiar curves to become one of the most important branches of mathematics today? In this talk, which will not be able to completely answer these questions, we will focus on its growth from the study of conic sections to the exploration of algebraic varieties. Abstract: Color, texture, pattern - there's more than first meets the eye in my crocheting. Come for a hands-on experience and new insights into finding mathematics in unlikely objects and expressing math concepts in artful ways. Invite your knitting and crocheting friends, bring someone who claims "I can't do math," attend with a classmate who thinks math is too abstract to be interesting, bring along that education major or art student. Build a bridge between math and their world: enjoy this event together. Title: Building an Actuarial Program - an exciting, interdisciplinary adventure! Abstract: Since the early 2000's, MSU has offered students the opportunity to prepare for their career as an actuary as aprt of their academic training. In 2011, the Bachelor of Science in Actuarial Science was introduced as the newest major on campus. The program has developed into a highly prized major among students on campus, and we continue to expand as we seek to a world leader in actuarial education and research. In this talk, we will talk about the development of our program and how Albion students and faculty can develop one as well! All questions are encouraged during the session. Abstract: We live in an era where there has never been greater access to information. Being able to sift through and analyze this information to understand what is "noise" and what can actually lead to valuable insights has become a highly demanded commodity. In turn, so to have Data Analysts. 
For profit-seeking companies, the realization of business objectives through reporting of data to analyze trends, creating predictive models for forecasting and optimizing business processes for enhanced performance has become pivotal for sustainable success. In this talk, we will provide an introduction to data analytics and we will review how our employer, EY, uses data analytics to build a better working world. Abstract: A research problem concerning the Colin de Verdiere number of a graph recently led me on a journey that provides a great example of the interconnected nature of mathematics. We'll take a relaxing cruise through some of the topics involved, including ideas from Analysis, Algebra, Geometry, and Graph Theory, see how they all fit together, and talk about some of the mysteries that remain. Only knowledge of basic arithmetic is needed. Abstract: While games are ordinarily thought of as a means for entertainment and distraction, they are also inherently useful to accomplish all manner of other purposes. Among other things, games--both digital and physical--can be used to teach, modify behaviors, influence opinions, and improve physical and mental health. I will share some of the major heuristics that are useful in designing games for "serious" purposes, as well more general knowledge of game design and the game industry. Additionally, I will share my experiences as a graduate student in the serious game design MA program at Michigan State University. Abstract: Michigan uses an unusual formula in the calculation of child support payments. For divorced parents in Michigan, the base monetary support each parent is expected to contribute to raising their child is adjusted according to the number of (over)nights spent with the parents. Curiously, this adjustment is based on a rational polynomial function parameterized by $k$ that describes the amount of money that $A$ must pay $B$, where $B$ must pay $A$ if the result is negative. In the 2004 Michigan Child Support Formula Manual, $k = 2$, meaning the polynomials are quadratic; while $k=3$ (for cubic polynomials) in both the 2008 and 2013 editions. In this talk, we will brainstorm and collaborate in using calculus to examine this function, explain the effect of changing $k$, and point out an alternative form that stretches and translates a simpler function. This talk is based on joint work with Jennifer Wilson (New School University, New York). Abstract: "Discount and Turnover in Closed-End Funds" "Wheel theory: How to divide by zero" "Predicting Extreme Events: Critical Ruptures and Applications to Common Complex Situations." "A Markov Chain Analysis of the National Football League's Overtime Rule" Abstract: Anaerobic digestion is a biochemical process in which organic matter is broken down to biogas and various byproducts in an oxygen-free environment. When used in waste treatment facilities, the biogas is captured before it escapes into the atmosphere. It can then be used as renewable energy either by combusting the gas to produce electrical energy or by extracting the methane and using it as a natural gas fuel. In industrial applications anaerobic digestion appears to be difficult to control and reactors often experience break-down resulting in little or no biogas production. In this talk we describe a model for anaerobic digestion and illustrate how qualitative and numerical analysis give guidelines for how to control the system to (1) stabilize and (2) optimize biogas production. 
At the same time the model explains various possible pitfalls in industrial installations. Abstract: Question: What do sums of powers have to do with approximations of factorials? Answer: Integration by parts. No, really! In this talk we will see how a clever use of standard calculus techniques leads to the Euler-Maclaurin formula, a powerful way of connecting sums to integrals, and how this formula solves several classic problems. Abstract: Blackjack, or 21, is among the most popular casino table games. Since unlike most other games of chance, successive hands of blackjack are not independent, the mathematics behind blackjack is at once more complicated and more interesting than for games like craps or roulette, and there can be times during play when the gambler has an edge over the casino. This talk will briefly review the rules of the game and then describe some of the calculations--both theoretical and experimental--that led to blackjack basic strategy and the advantages derived from card counting. Abstract: The Colley method was one of the six computer-based ranking methods used to determine the top ten NCAA football teams to play in bowl games as part of the Bowl Championship Series. The Colley method uses a matrix to rank order the teams based on win-loss data; the method accounts for strength of schedule. The Borda Count is a well-known voting procedure with a long history. By viewing voters' preferences as win-loss data, the Colley method can be used to determine the winner of an election. Surprisingly, the Colley ranking agrees with the ranking from the Borda count. Title: What exactly is half a derivative anyway? Abstract: The theory of differentiation is well-known to any student who has taken calculus. However, to make sense of a non-integer order derivative takes considerably more work. Tools are needed from complex analysis, harmonic analysis and linear algebra to understand a half derivative. In this talk, we will begin by investigating what it means to take the square root of a matrix, and viewing a derivative as a "really large matrix" we can begin to make sense of a half derivative. With these simple tools, we can make sense of even crazier objects such as derivatives of imaginary order! Abstract: Partial geometries were first described in 1963 by R. C. Bose. They are finite point line geometries specified by three parameters that are defined by a set of four basic axioms. Each partial geometry has a strongly regular point graph. While some very simple shapes can be understood as partial geometries, the number of proper ones is actually limited. In this talk we will define both the geometries and the graphs and explore some connections between them. We will also look at how we can use a group of automorphisms acting on the geometry to classify it as one of three types. Finally we will see how this work enables us to generate a list of parameters for potential partial geometries and how we are beginning to investigate these possibilities. Abstract: Erdős asked: when does the base 3 expansion of a power of 2 omit the digit 2? His conjectured answer is that this only happens for 1, 4, and 256, but this conjecture is still open, and has proven to be very elusive. There underlies a deep relationship between the primes 2 and 3. Our attempt to understand this relationship has led to interesting connections among symbolic dynamical systems, graph theory, p-adic analysis, number theory, and fractal geometry. 
Despite the awesome variety of mathematics involved, linear algebra should be sufficient background knowledge for this talk. I report on joint work with Jeff Lagarias of the University of Michigan and Artem Bolshakov of the University of Texas at Dallas. Abstract: Patterns appear everywhere in the world around us from zebra stripes, to hexagonal honeycombs, to spiral arrangements of sunflower seeds, to the periodic ups and downs of a population size due to seasonal migration. Similar patterns also arise in experiments done in many disciplines, such as physics, chemistry, and biology. One goal in studying pattern formation is to understand why and how these patterns are created. Another goal is to determine whether similar patterns from vastly different systems can be described and understood through similar mathematical model equations. This talk will describe how a pattern can be represented mathematically and how basic knowledge of functions and derivatives can help determine when and where the patterns will exist. Analytical and numerical results will be compared with experimental observations. Finally, the connection between the underlying pattern and the observation of a single, isolated pulse, called an oscillon, will be described. Abstract: Operations Research is an area of applied math that deals with analyzing and optimizing many different systems: industrial, nonprofit, government, healthcare, etc. It operates at the intersection of math, engineering, statistics, computer science, and business. We will talk about common focus areas like minimizing waiting times for important public services, and scheduling staff in an optimal way. The methods are incredibly powerful--optimization decisions can often involve hundreds of thousands of variables, and sometimes millions or billions. Abstract: Teachers have to make decisions everyday- about what material to cover, what activities to plan, how those activities will be assessed- and good teachers make pedagogically based decisions. In math classrooms, abstract content can, at times, be seemingly inaccessible to some students. By including hands-on manipulatives into the classroom, students can kinetically explore content to help bridge their understanding. These manipulatives can be incorporated in a variety of ways depending on the intended purpose. Using an example of a movable integral model, some of those possibilities for activities will be shown and explained. Abstract: Many models of coral reef dynamics created before 2012 fell into one of two categories: they were either conceptual models not intended to give realistic descriptions of natural reef dynamics or were fully specified in that they assumed particular parameters. Both types of models tell what is possible, but do not necessarily tell what occurs in nature. My presentation will be analyzing the mathematical model developed in the paper "Data-driven models for regional coral-reef dynamics," by Kamila Zychaluk, John F. Bruno, Damian Clancy, Tim R. McClanahan, and Matthew Spencer (Ecology Letters, 15:151-158). This model differed from past models in that it was only partially specified, allowing greater flexibility. Abstract: As a young student, we initially learned that we could not take the square root of a negative number. Eventually, we learned about the set of complex numbers, which was invented in order to calculate all of the roots of any polynomial, including those involving square roots of negative numbers. 
Additionally, we learned that parallel lines never intersect. Upon looking at them from a new perspective, we learned that in projective geometry, parallel lines intersect at a point "at infinity." Currently, we are told that division by zero is a forbidden mathematical operation due to the lack of a multiplicative inverse. Alternately, there is no solution to the equation $0x = 1$. However, mathematicians recently defined $\perp$ and $\infty$, consistent with previous mathematics, representing the two cases that arise when attempting to divide by zero. These two new elements are placed into a field, yielding a special structure called a wheel. However, division by zero is not entirely without consequences. This talk will review the familiar algebraic structures, including rings and fields, and then proceed onto wheels and how incorporating these two new elements change the properties of polynomials.
CommonCrawl
Abstract: Many algorithms that are originally designed without explicitly considering incentive properties are later combined with simple pricing rules and used as mechanisms. The resulting mechanisms are often natural and simple to understand. But how good are these algorithms as mechanisms? Truthful reporting of valuations is typically not a dominant strategy (certainly not with a pay-your-bid, first-price rule, but it is likely not a good strategy even with a critical value, or second-price style rule either). Our goal is to show that a wide class of approximation algorithms yields this way mechanisms with low Price of Anarchy. The seminal result of Lucier and Borodin [SODA 2010] shows that combining a greedy algorithm that is an $\alpha$-approximation algorithm with a pay-your-bid payment rule yields a mechanism whose Price of Anarchy is $O(\alpha)$. In this paper we significantly extend the class of algorithms for which such a result is available by showing that this close connection between approximation ratio on the one hand and Price of Anarchy on the other also holds for the design principle of relaxation and rounding provided that the relaxation is smooth and the rounding is oblivious. We demonstrate the far-reaching consequences of our result by showing its implications for sparse packing integer programs, such as multi-unit auctions and generalized matching, for the maximum traveling salesman problem, for combinatorial auctions, and for single source unsplittable flow problems. In all these problems our approach leads to novel simple, near-optimal mechanisms whose Price of Anarchy either matches or beats the performance guarantees of known mechanisms.
CommonCrawl
Test for $\alpha$ and $α$. This renders only the first math symbol correctly. What am I missing exactly? Test for $\alpha$ and $^^^^03b1$.
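One common fix, assuming the goal is to type the literal character α in math mode with LuaLaTeX, is to load unicode-math together with an OpenType maths font, which maps Unicode math alphanumerics such as U+03B1 to the corresponding glyphs. A minimal example preamble is sketched below (the font choice is just one possibility).

% Compile with lualatex
\documentclass{article}
\usepackage{unicode-math}         % makes Unicode characters usable in math mode
\setmathfont{Latin Modern Math}   % any OpenType maths font should work here
\begin{document}
Test for $\alpha$ and $α$.        % both should now render as alpha
\end{document}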
CommonCrawl
A formal grammar that determines which collations belong to the formal language and which do not. Often, the collation system is left implicit, and taken simply to match the formal grammar. Let $\mathcal L$ be a formal language. The alphabet $\mathcal A$ of $\mathcal L$ is a set of symbols from which collations in $\mathcal L$ may be constructed. Depending on the specific nature of any particular formal language, these too may be subcategorized. A letter of a formal language is a more or less arbitrary symbol whose interpretation depends on the specific context. In building a formal language, letters are considered to be the undefined terms of said language. An important part of assigning semantics to a formal language is to provide an interpretation for its letters. A sign of a formal language $\mathcal L$ is a symbol whose primary purpose is to structure the language. In building a formal language, signs form the hooks allowing the formal grammar to define the well-formed formulae of the formal language. Common examples of signs are parentheses, "(" and ")", and the comma, ",". The logical connectives are also signs. Signs form part of the alphabet of a formal language. Unlike the letters, they must be the same for each signature for the language. A key feature of collations is the presence of methods to collate a number of collations into a new one. A collection of collations, together with a collection of such collation methods may be called a collation system. For example, words and the method of concatenation. Let $\mathcal L$ be a formal language whose alphabet is $\mathcal A$. The formal grammar of $\mathcal L$ comprises of rules of formation, which determine whether collations in $\mathcal A$ belong to $\mathcal L$ or not. Roughly speaking, there are two types of formal grammar, top-down grammar and bottom-up grammar.
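As a toy illustration of how rules of formation decide membership in a formal language, here is a Python sketch of a recursive checker for a small propositional language. The alphabet (letters p, q, r and signs for negation, conjunction and parentheses) and the three rules are invented for the example; they are not definitions taken from the page above.

LETTERS = {"p", "q", "r"}          # letters: interpretation left open
SIGNS = {"(", ")", "&", "-"}       # signs: they structure the language

def well_formed(s):
    """Decide whether the collation s belongs to the toy language."""
    ok, rest = parse(s)
    return ok and rest == ""

def parse(s):
    # Rule 1: every letter on its own is a well-formed formula.
    if s and s[0] in LETTERS:
        return True, s[1:]
    # Rule 2: if A is well formed, so is -A.
    if s.startswith("-"):
        return parse(s[1:])
    # Rule 3: if A and B are well formed, so is (A&B).
    if s.startswith("("):
        ok, rest = parse(s[1:])
        if ok and rest.startswith("&"):
            ok2, rest2 = parse(rest[1:])
            if ok2 and rest2.startswith(")"):
                return True, rest2[1:]
    return False, s

for w in ["p", "-(p&q)", "(p&-q)", "(p&&q)", "pq", ")p("]:
    print(w, well_formed(w))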
CommonCrawl
Abstract: A two-dimensional hydrogen atom is analyzed in elliptic coordinates. Separation of the variables reduces the problem to the solution of the Ince equation in the complex plane subject to certain boundary conditions. It is shown that in the limits $R\rightarrow 0$ and $R\rightarrow\infty$ ($R$ is a parameter that specifies the elliptic coordinates) the obtained solutions go over to polar and parabolic bases, respectively. The explicit form of the elliptic basis is given for the lowest quantum states.
CommonCrawl
She is particularly intrigued by the current game she is playing. The game starts with a sequence of $N$ positive integers ($2 \leq N \leq 248$), each in the range $1 \ldots 40$. In one move, Bessie can take two adjacent numbers with equal values and replace them with a single number of value one greater (e.g., she might replace two adjacent 7s with an 8). The goal is to maximize the value of the largest number present in the sequence at the end of the game. Please help Bessie score as highly as possible!
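This is a classic interval dynamic programming exercise. Below is a Python sketch of one standard approach (my own implementation, not an official solution). It relies on the observation that merging two copies of v into v+1 preserves the total of 2^value over the block, so if a block of adjacent numbers can be collapsed into a single number at all, that number is unique; we can therefore record, for every interval, the value it collapses to, or 0 if it cannot. With N at most 248 the O(N^3) loop is cheap.

def max_248(seq):
    """Largest value reachable by repeatedly merging equal adjacent numbers."""
    n = len(seq)
    # merged[i][j] = value that seq[i:j] collapses to, or 0 if it cannot collapse.
    merged = [[0] * (n + 1) for _ in range(n + 1)]
    best = max(seq)                      # doing nothing is always allowed
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            if length == 1:
                merged[i][j] = seq[i]
            else:
                for k in range(i + 1, j):
                    v = merged[i][k]
                    if v and v == merged[k][j]:
                        merged[i][j] = v + 1
                        break            # the collapsed value is unique
            best = max(best, merged[i][j])
    return best

print(max_248([1, 1, 1, 2, 2]))   # 3: merge the middle 1s into a 2, then 2 and 2 into a 3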
CommonCrawl
We study analytically and numerically the dynamics of the generalized Rosenzweig-Porter model, which is known to possess three distinct phases: ergodic, multifractal and localized phases. Our focus is on the survival probability $R(t)$, the probability of finding the initial state after time $t$. In particular, if the system is initially prepared in a highly-excited non-stationary state (wave packet) confined in space and containing a fixed fraction of all eigenstates, we show that $R(t)$ can be used as a dynamical indicator to distinguish these three phases. Three main aspects are identified in the different phases. The ergodic phase is characterized by the standard power-law decay of $R(t)$ with periodic oscillations in time, surviving in the thermodynamic limit, with frequency equal to the energy bandwidth of the wave packet. In the multifractal extended phase the survival probability shows an exponential decay, but the decay rate vanishes in the thermodynamic limit in a non-trivial manner determined by the fractal dimension of the wave functions. The localized phase is characterized by the saturation value $R(t\to\infty)=k$, finite in the thermodynamic limit $N\rightarrow\infty$, which approaches $k=R(t\to 0)$ in this limit.
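For readers who want to experiment numerically, here is a small Python sketch (all parameters invented for illustration) that builds one realization of a generalized Rosenzweig-Porter Hamiltonian, with i.i.d. diagonal entries and Gaussian off-diagonal entries scaled by $N^{-\gamma/2}$, and computes the return probability of a single basis state. The paper's $R(t)$ is defined for a wave packet spanning a fixed fraction of eigenstates, so this is only a simplified cousin of that quantity.

import numpy as np

rng = np.random.default_rng(0)
N, gamma = 512, 1.5          # gamma between 1 and 2: the fractal phase of the RP model

# One realization of the generalized Rosenzweig-Porter ensemble (GOE-like off-diagonal).
H = rng.normal(size=(N, N)) / np.sqrt(2.0)
H = (H + H.T) * N ** (-gamma / 2.0)
H[np.diag_indices(N)] = rng.normal(size=N)      # i.i.d. O(1) diagonal disorder

# Return probability of a basis state |n0>: R(t) = |<n0| exp(-iHt) |n0>|^2,
# evaluated through the eigendecomposition of H.
vals, vecs = np.linalg.eigh(H)
n0 = N // 2
c = vecs[n0, :]                                  # overlaps <n0|E_k>
ts = np.logspace(-1, 3, 60)
R = np.array([np.abs(np.sum(np.abs(c) ** 2 * np.exp(-1j * vals * t))) ** 2 for t in ts])

for t, r in zip(ts[::10], R[::10]):
    print(f"t = {t:8.2f}   R(t) = {r:.4f}")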
CommonCrawl
Wajapeyee, Narendra and Raut, Chandrashekhar Ganpat and Somasundaram, Kumaravel (2005) Activator Protein 2$\alpha$ Status Determines the Chemosensitivity of Cancer Cells: Implications in Cancer Chemotherapy. In: Cancer Research, 65 (19). pp. 8628-8634. Cancer chemotherapeutic drugs induce apoptosis by several pathways. Inactivation of proapoptotic genes, or activation of survival signaling, leads to chemoresistance. Activator protein 2$\alpha$ (AP-2$\alpha$), a developmentally regulated sequence-specific DNA-binding transcription factor, has been shown to function like a tumor suppressor. Here, we show that controlled expression of AP-2$\alpha$, using a tetracycline-inducible system, increased the chemosensitivity of cancer cells by severalfold by sensitizing cells to undergo apoptosis upon chemotherapy. Under these conditions, neither AP-2$\alpha$ expression nor drug treatment resulted in apoptosis induction, whereas in combination the cancer cells underwent massive apoptosis. We found that endogenous AP-2$\alpha$ protein is induced posttranscriptionally by various chemotherapeutic drugs. Blocking the endogenous AP-2$\alpha$ by small interfering RNA in human cancer cells lead to decreased apoptosis, increased colony formation, and chemoresistance irrespective of their p53 status upon chemotherapy. We further show that 5-aza-2'-deoxycytidine induced reexpression of AP-2$\alpha$ in MDA-MB-231 breast cancer cells (wherein AP-2$\alpha$ expression is silenced by hypermethylation), resulted in massive apoptosis induction, increased chemosensitivity, decreased colony formation, and loss of tumorigenesis upon chemotherapy. However, in MDA-MB-231 cells transfected with AP-2$\alpha$ small interfering RNA, 5-aza-2'-deoxycytidine treatment failed to increase apoptosis and chemosensitivity. The treatment also resulted in increased colony formation and efficient tumor formation upon chemotherapy. These results establish an important role for AP-2$\alpha$ in cancer cell chemosensitivity and provide new insights for modifying the chemosensitivity of cancer cells by activating apoptotic pathways. The Copyright belongs to American Association for Cancer Research.
CommonCrawl
Abstract: We prove that the liftings of a normal functor $F$ in the category of compact Hausdorff spaces to the categories of (abelian) compact semigroups (monoids) are determined by natural transformations $F(-)\times F(-)\to F(-\times-)$ satisfying requirements that correspond to associativity, commutativity, and the existence of a unity. In particular, the tensor products for normal monads satisfy (not necessarily all) these requirements. It is proved that the power functor in the category of compacta is the only normal functor that admits a natural lifting to the category of convex compacta and their continuous affine mappings. Keywords: compact semigroup, compact monoid, convex compactum, normal functor, lifting.
CommonCrawl
$\pi$ is the symbol used for a special number which we call pi (pronounced "pie"). $\pi$ comes from working with circles: it is the ratio of the circumference of a circle to its diameter. This means that you can work out $\pi$ by dividing the distance around a circle by the length of its diameter. For any circle, you can divide the circumference (the distance around the circle) by the diameter and always get exactly the same number. Pi is an extremely interesting number that is important to all sorts of mathematical calculations. Anytime you find yourself working with circles, arcs, pendulums (which swing through an arc), etc., you find pi popping up. We have run into pi when looking at...

26/02/2008 · Hi, I have a BA II Plus Prof. and there's no "pi" key; I tried using the trig functions to get the value but it always gives me something in degrees. Anthony - use pi, which returns the floating-point number nearest the value of $\pi$. So in your code, you could do something like sin(pi).

To verify your install worked, connect your Raspberry Pi to a TV via HDMI and make sure it boots into XBMC. Also, make sure that you can see the weather and RSS feeds so you know Internet access is working on the device. Navigate into the System: Info section and write down your device's IP address in case you need it in the future for SSH access or to set up your remote control.

26/01/2018 · In PI Vision 2017 I'm working on a prototype that includes a large list of attributes in a table symbol. The difficulty is that with a search that returns a large number of results, the Assets pane splits the results into pages of 50 items each.

The same warning applies to e, the square root of 2, Euler's constant, Phi, the cosine of any non-zero algebraic number, and the vast majority of all other real numbers. There's a reason why these numbers are always computed and shown in decimal, after all.

Given that $\pi$ can be estimated using the series $4(1 - 1/3 + 1/5 - 1/7 + \ldots)$, with more terms giving greater accuracy, write a function that calculates pi to an accuracy of 5 decimal places (a sketch of one such function appears below).

PI needs to execute the query: Select Sequence_Number.NEXTVAL from dual. This query is currently being used by Tibco to get the next value of the sequence number and insert data into the database. Tibco will be replaced by PI.

17/12/2014 · Components: first boot with the Raspberry Pi. This inexpensive microcomputer can be used for a variety of DIY projects. Here's what you need to know before you get started.
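For the exercise above about estimating $\pi$ from the series $4(1 - 1/3 + 1/5 - \ldots)$, here is a minimal sketch in Python (not part of the original page). It relies on the standard fact that for an alternating series the error of a partial sum is bounded by the first omitted term.

def estimate_pi(tolerance=0.5e-5):
    """Estimate pi with the Leibniz series 4*(1 - 1/3 + 1/5 - ...).

    The series alternates, so the error of a partial sum is bounded by
    the magnitude of the next term; we stop once that bound drops below
    the requested tolerance (0.5e-5 gives 5 correct decimal places).
    """
    total = 0.0
    k = 0
    while True:
        term = 4.0 / (2 * k + 1)
        if term < tolerance:
            break
        total += term if k % 2 == 0 else -term
        k += 1
    return total

if __name__ == "__main__":
    approx = estimate_pi()
    print(round(approx, 5))   # 3.14159, matching pi to 5 decimal places

The slow convergence is visible here: roughly 400,000 terms are needed for 5 decimal places, which is why this series is a teaching example rather than a practical way to compute pi.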
CommonCrawl
One fundamental interpretation of the derivative of a function is that it is the slope of the tangent line to the graph of the function. (Still, it is important to realize that this is not the definition of the thing, and that there are other possible and important interpretations as well). The precise statement of this fundamental idea is as follows. Let $f$ be a function. For each fixed value $x_o$ of the input to $f$, the value $f'(x_o)$ of the derivative $f'$ of $f$ evaluated at $x_o$ is the slope of the tangent line to the graph of $f$ at the particular point $(x_o,f(x_o))$ on the graph. The main conceptual hazard is to mistakenly name the fixed point '$x$', as well as naming the variable coordinate on the tangent line '$x$'. This causes a person to write down some equation which, whatever it may be, is not the equation of a line at all. Another popular boo-boo is to forget the subtraction $-f(x_o)$ on the left hand side. Don't do it. So the question of finding the tangent and normal lines at various points of the graph of a function is just a combination of the two processes: computing the derivative at the point in question, and invoking the point-slope form of the equation for a straight line. Write the equation for both the tangent line and normal line to the curve $y=3x^2-x+1$ at the point where $x=1$. Write the equation for both the tangent line and normal line to the curve $y=(x-1)/(x+1)$ at the point where $x=0$. Tangent and normal lines by Paul Garrett is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us.
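As a concrete illustration (a worked sketch of the first exercise above, not part of the original notes), recall the point-slope equations the text alludes to:

$$ y - f(x_o) = f'(x_o)\,(x - x_o) \qquad \text{(tangent line)}, $$
$$ y - f(x_o) = -\frac{1}{f'(x_o)}\,(x - x_o) \qquad \text{(normal line, when } f'(x_o) \neq 0\text{)}. $$

For $f(x)=3x^2-x+1$ at $x_o=1$ we have $f(1)=3$ and $f'(x)=6x-1$, so the slope of the tangent line is $f'(1)=5$. The tangent line is therefore $y-3=5(x-1)$, and the normal line, whose slope is the negative reciprocal $-1/5$, is $y-3=-\frac{1}{5}(x-1)$. Note the subtraction $-f(x_o)=-3$ on the left-hand side, the very term the text warns against forgetting.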
CommonCrawl
Let $X$ be the Klein bottle. One construction of the Klein bottle is as follows. Take the unit box $[0, 1] \times [0, 1]$. Identify one pair of opposite edges in the same orientation, and the other pair of opposite edges in the opposite orientation to obtain a quotient space (see the illustration below). Use the Seifert-van Kampen theorem to find $\pi_1(X, x)$.
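A sketch of the standard computation (added here as an illustration; the edge-labelling conventions are my own): label the two identified edge classes $a$ and $b$, take $U$ to be the interior of the square and $V$ the complement of the centre point. Then $V$ deformation retracts onto the boundary wedge $a \vee b$, and $U \cap V$ is homotopy equivalent to a circle whose inclusion into $V$ reads off the boundary word of the square. Seifert-van Kampen then gives

$$ \pi_1(X, x) \;\cong\; \langle a, b \mid a b a b^{-1} \rangle, $$

the Klein bottle group. The exact relator depends on the chosen edge orientations, but any consistent choice yields an isomorphic group.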
CommonCrawl
Results for "Evgeny M. Alexeev"
- ADE surfaces and their moduli (Dec 21 2017; Jan 12 2018). We define a class of surfaces and surface pairs corresponding to the ADE root lattices and construct compactifications of their moduli spaces, generalizing Losev-Manin spaces of curves.
- Majorana neutrinos and other Majorana particles: Theory and experiment (Dec 10 2014). This is a somewhat modified version of Chapter 15 of the book "The Physics of Ettore Majorana", by Salvatore Esposito with contributions by Evgeny Akhmedov (Ch. 15) and Frank Wilczek (Ch. 14), Cambridge University Press, 2014.
- Bispectrality for the quantum open Toda chain (Jun 03 2013). An alternative to Babelon's (2003) construction of dual variables for the quantum open Toda chain is proposed that is based on the 2x2 Lax matrix and the corresponding quadratic R-matrix algebra.
- Dehn surgeries on the figure eight knot: an upper bound for the complexity (Jul 05 2010). We establish an upper bound $\omega(p/q)$ on the complexity of manifolds obtained by $p/q$-surgeries on the figure eight knot. It turns out that if $\omega(p/q)\leqslant 12$, the bound is sharp.
- Non-rational centers of log canonical singularities (Sep 19 2011; Jun 28 2012). We show that if $(X,B)$ is a log canonical pair with $\dim X\geq d+2$, whose non-klt centers have dimension $\geq d$, then $X$ has depth $\ge d+2$ at every closed point.
- On decay of entropy solutions to multidimensional conservation laws (Apr 02 2019). Under a precise genuine nonlinearity assumption we establish the decay of entropy solutions of a multidimensional scalar conservation law with merely continuous flux.
- Graphs with multiple sheeted pluripolar hulls (Mar 16 2005). In this paper we study the pluripolar hulls of analytic sets. In particular, we show that hulls of graphs of analytic functions can be multiple sheeted and sheets can be separated by a set of zero dimension.
- On decay of almost periodic viscosity solutions to Hamilton-Jacobi equations (Nov 08 2017). We establish that a viscosity solution to a multidimensional Hamilton-Jacobi equation with a convex non-degenerate hamiltonian and Bohr almost periodic initial data decays to its infimum as time $t\to+\infty$.
- Simultaneous two-dimensional best Diophantine approximations in the Euclidean norm (Feb 13 2010). We prove a new lower bound for the exponent of growth of the best two-dimensional Diophantine approximations with respect to the Euclidean norm.
- Determinantal identities for flagged Schur and Schubert polynomials (Oct 25 2014; Sep 28 2015). We prove new determinantal identities for a family of flagged Schur polynomials. As a corollary of these identities we obtain determinantal expressions of Schubert polynomials for certain vexillary permutations.
CommonCrawl
Let $n \in \N$ and $n = p_1 \times p_2 \times \cdots \times p_j$, $j \ge 2$, where $p_1, \ldots, p_j \in \Bbb P$ are the prime factors of $n$. Then $\exists p_i \in \Bbb P$ such that $p_i \le \sqrt n$. That is, if $n \in \N$ is composite, then $n$ has a prime factor $p \le \sqrt n$. Let $n$ be composite. From Composite Number has Two Divisors Less Than It, we can write $n = a b$ where $a, b \in \Z$ and $1 < a, b < n$. Without loss of generality, suppose that $a \le b$. Suppose $a > \sqrt n$. Then $b \ge a > \sqrt n$, and so $a b > \sqrt n \cdot \sqrt n = n$, which contradicts $n = a b$. Hence $a \le \sqrt n$. From Positive Integer Greater than 1 has Prime Divisor it follows that there is some prime $p$ which divides $a$. Since $p \le a \le \sqrt n$ and $p$ divides $a$, which in turn divides $n$, we have found a prime factor of $n$ that is at most $\sqrt n$. Suppose we are testing a number to see whether it is prime, or so as to find all its divisors. One way to do this (which may not be the most efficient in all circumstances, but it gets the job done) is to divide it by successively larger primes until you find one that is a factor of the number. Eventually you're bound to find a prime that is a factor, by Positive Integer Greater than 1 has Prime Divisor. However, this result tells us that we don't need to go out that far. If we've tested all the primes up to the square root of our target number without finding a divisor, we don't need to go any further, because we know that our target number is prime after all.
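The trial-division idea in the last paragraph is easy to make concrete. Below is a minimal sketch in Python (not from the source text); it checks candidate divisors only up to $\sqrt n$, exactly as the theorem justifies. For simplicity it trials all integers from 2 upwards rather than only primes, which is harmless because the smallest non-trivial divisor found this way is automatically prime.

import math

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2).

    If no divisor is found up to sqrt(n), then by the result above
    n has no prime factor <= sqrt(n), so n itself must be prime.
    """
    if n < 2:
        raise ValueError("n must be at least 2")
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d          # the smallest divisor > 1 is necessarily prime
    return n                  # no divisor <= sqrt(n): n is prime

def is_prime(n):
    """A number n >= 2 is prime exactly when its smallest prime factor is n."""
    return n >= 2 and smallest_prime_factor(n) == n

if __name__ == "__main__":
    print(is_prime(97))                # True: no divisor up to 9 divides 97
    print(smallest_prime_factor(91))   # 7: 91 = 7 * 13, and 7 <= sqrt(91)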
CommonCrawl
The seminar will meet on Fridays in Room 507, Mathematics Building, from 10:30 am to noon according to the schedule below. I will explain a new proof of the geometric stabilization theorem for Hitchin fibers, a key ingredient in Ngô's proof of the fundamental lemma. Our approach relies on ideas of Denef-Loeser and Batyrev on p-adic integration and Langlands duality for generic Hitchin fibers. This is joint work with Michael Groechenig and Paul Ziegler. The Saito-Tunnell theorem is a local version of Waldspurger's formula, relating the existence of E^\times-invariant linear forms on representations of GL_2 to local root numbers. I present a generalization of this which relates the existence of GL_n(E)-invariant linear forms on GL_2n(F) to local root numbers. The proof relies on a "relative version" of Harish-Chandra's theory of local character expansions. In this talk, we introduce some Lefschetz-type theorems for Brauer groups of hyperplane sections of smooth projective varieties. This is more or less known when the dimension of the hyperplane section is at least 3, but we will also introduce a version which lowers the dimension from 3 to 2. As a consequence, we reduce the Tate conjecture for divisors on smooth projective varieties from general dimensions to dimension 2, and thus prove a result of Morrow by a different method. In this talk, I will introduce the functorial descent from cuspidal automorphic representations \pi of GL7(A) with L^S(s, \pi, \wedge^3) having a pole at s=1 to the split exceptional group G2(A), using Fourier coefficients associated to two nilpotent orbits of E7. We show that one descent module is generic and, under mild assumptions on the unramified components of \pi, it is cuspidal and has \pi as a weak functorial lift of each irreducible summand. However, we show that the other descent module supports not only the non-degenerate Whittaker integral on G2(A), but also every degenerate Whittaker integral. Thus it is generic, but not cuspidal. This is a new phenomenon, compared to the theory of functorial descent for classical and GSpin groups. This work is joint with Joseph Hundley. For a smooth scheme of finite type over a field, its motivic cohomology groups generalize the usual Chow groups and are an important algebraic invariant. In this talk we will explain how, for the special fibers of certain quaternionic Shimura varieties, the motivic cohomology can encode very rich arithmetic information. More precisely we will show that the cycle class map from motivic cohomology to étale cohomology gives a geometric realization of level raising between Hilbert modular forms. The main ingredient for this construction is a form of Ihara's Lemma for Shimura surfaces which we prove by generalizing a method of Diamond-Taylor. In 1984, Kazhdan and Patterson constructed what are now called the exceptional representations of metaplectic $r$-fold covers of $GL_n$. These representations, which include the Weil representation when $r=n=2$, have seen numerous applications; most spectacularly, when $r=2$ and $n$ is arbitrary, they appear in the integral representation of the symmetric square $L$-function on $GL_n$. Unfortunately they are somewhat difficult to work with in practice, even for $r=2$ and $n>2$. In this talk, I will explain how, for $r=2$ and $n=2q$, these representations can be described in terms of a model space akin to the Schrödinger model of the Weil representation.
I will also explain some of my motivations for this problem, and some possible applications of this result. I will discuss a new construction of Eisenstein cohomology classes on GL_N first introduced by Nori and Sczech. In our approach the corresponding cocycles appear as regularised theta lifts for the dual pair (GL_1, GL_N). This suggests interesting generalisations of this construction by considering more general pairs (GL_k, GL_N), and I will present some calculations regarding this generalisation. Joint work (in progress) with Nicolas Bergeron, Pierre Charollois and Akshay Venkatesh. The category of mixed Hodge-Tate structures over Q is a mixed Tate category of homological dimension one. By Tannakian formalism, it is equivalent to the category of graded comodules of a commutative graded Hopf algebra. In my recent joint work with A. Goncharov, we give a canonical description A(C) of the Hopf algebra. Such a construction can be generalized to A(R) for any dg-algebra R with a Tate line. We study the number of quadratic Dirichlet L-functions over the rational function field which vanish at the central point s=1/2. In the first half of my talk, I will give a lower bound on the number of such characters through a geometric interpretation. This is in contrast with the situation over the rational numbers, where a conjecture of Chowla predicts there should be no such L-functions. In the second half of the talk, I will discuss joint work with Ellenberg and Shusterman proving that, as the size of the constant field grows to infinity, the set of L-functions vanishing at the central point has density 0.
CommonCrawl
I am trying to understand singular points on a complex projective algebraic curve. I remember that singular points on an affine algebraic curve are determined by taking the partial derivatives and finding where they are equal to 0. But then I found this definition of a singular point on a complex projective algebraic curve which says that the multiplicity $v_p(C)$ of a curve C at p is the order of the lowest non-vanishing term in the Taylor expansion of f at p, and thus a point is singular when $v_p(C) >1$. This process just confuses me and I wondered if I can use the process for affine curves. I'm just confused about why these are different; for example, for the curve $x^2-y^3$, why can I not set this equal to 0, find the partial derivatives, and have that tell me this has a singular point at $(0,0)$? Lastly, the textbook I'm using mentions that examples of singular points are double points and simple cusps, and explains that the equation for a double point is $xy =0$, which I'm not sure how it would help if I saw a curve. You absolutely can use either method to detect singular points on affine curves. These two approaches are equivalent because the coefficients in the Taylor expansion are the various partial derivatives. In your example, looking at the Taylor expansion is easier: there are no linear terms in the Taylor expansion of $f = x^2 - y^3$, so you can immediately see that $(0,0)$, the center of the expansion, is a singular point and in fact a double point (i.e., a point of multiplicity 2). If $P$ is a double point of a curve $C$, then $C$ has two (not necessarily distinct) tangent lines at $P$. If these tangent lines are the same (a double tangent line), then $P$ is non-ordinary or a cusp; if the tangent lines are distinct, then $P$ is ordinary or a node. For instance, your example $C_1 : x^2 - y^3 = 0$ has a cusp because the lowest order homogeneous term factors as $x^2 = x\cdot x$, so $C_1$ has $x = 0$ as a double tangent line at the origin. By contrast, the curve $C_2 : y^2 = x^3 + x^2$ has a node at the origin because the tangent lines are $y=x$ and $y=-x$, as can be seen by factoring the lowest order homogeneous term $y^2-x^2=(y-x)(y+x)$. Try plotting them to get a picture of what this means visually. For more information I recommend Fulton's Algebraic Curves.
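To connect the two viewpoints concretely for the curve in the question (a short worked check, not from the original thread): for $f(x,y)=x^2-y^3$ we have $\partial f/\partial x = 2x$ and $\partial f/\partial y = -3y^2$, and both vanish at $(0,0)$ together with $f$ itself, so the partial-derivative criterion does detect the singular point. Equivalently, the Taylor expansion of $f$ at the origin has no constant or linear terms (those coefficients are exactly $f(0,0)$ and the first partial derivatives), and its lowest non-vanishing term $x^2$ has degree $2$, so $v_{(0,0)}(C)=2$: a double point, and indeed a cusp, since $x^2 = x\cdot x$ gives the repeated tangent line $x=0$.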
CommonCrawl
The goal is to tile rectangles as small as possible with the F pentomino. Of course this is impossible on its own, so we allow the addition of copies of a rectangle. For each rectangle $a\times b$, find the smallest-area larger rectangle that copies of $a\times b$ plus at least one F-pentomino will tile. As shown in the example, with the $1\times 1$ you can tile a $3\times 3$ as follows. Now we don't need to consider the $1\times 1$ any longer, as we have found the smallest rectangle tilable with copies of F plus copies of $1\times 1$. There are at least 17 more solutions, and more are expected. I tagged it 'computer-puzzle' but you can certainly work some of these out by hand. The larger ones might be a bit challenging. Note that these solutions also work for all $n$ not divisible by 5; e.g. two 1x7 rectangles fit in a single 1x14 rectangle, so we can just reuse that solution. There is a smaller one, though (see below). The solutions for $1 \times 10$, $1 \times 18$ and $1 \times 20$ (all below) also seem to form some kind of generalizable family. As @JaapScherphuis notes in the comment, there's a (probably non-optimal) generalizable solution for $1 \times n$ rectangles with $n$ even, just like for the W-pentomino. By halving the rectangles, we can also obtain solutions for odd $n$, and the parts with just rectangles and no F-pentominoes can be shortened. The last two are part of a generalizable solution, see the bottom of the post. In general, the generalizable solutions for $1 \times (10k+4)$ resp. $1 \times (10k+6)$ give rise to solutions for $2 \times (10k+9)$ resp. $2 \times (10k+11)$; the $2 \times 7$ solution mentioned above is basically a subdivision of the $2 \times 21$ one minus some extraneous padding. The length of the red line is $10k+4$ or $10k+6$; this makes the purple line $10k+9$ or $10k+11$ and the total solution $(3n+3) \times (3n+3)$. a $3 \times 6$ rectangle. A tiling exists with two F pentominoes and four $1 \times 2$ rectangles. It can be obtained by taking the $1 \times 1$ solution, reflecting it about the top edge, and joining together the pairs of $1 \times 1$ rectangles into $1 \times 2$ rectangles in an obvious way. The smallest dimension of the rectangle must be 3, because the diameter of the F pentomino is 3. Suppose there is only one F pentomino present. In this case, both dimensions must be odd: the pentomino covers an odd number of squares, and every $1 \times 2$ tile covers an even number, so the total number of squares in the grid must be odd. Suppose the smallest dimension is 3. It is obvious that the $3 \times 3$ case won't work. A little trial and error shows that the $3 \times 5$ case won't work either; effectively, we would have to extend the rectangle in the original diagram "upwards" to accomplish this, but a checkerboard coloring argument shows that one domino would have to cover two squares of the same color to accomplish this. For two F pentominoes, one or both dimensions must be even by the same parity argument as above. This means that the three smallest possible grids are $3 \times 4$, $4 \times 4$, and $3 \times 6$ (which we know to exist). It is not possible to fit two F pentominoes into a $3 \times 4$ grid; and while it is possible to fit them into a $4 \times 4$ grid, the remaining space can't be filled with dominoes. Thus $3 \times 6$ is minimal.
CommonCrawl
There has been growing concern about the validity of scientific findings. A multitude of journals, papers and reports have recognized the ever-shrinking number of replicable scientific studies. In 2016, one of the giants of scientific publishing, Nature, surveyed about 1,500 researchers across many different disciplines, asking for their stand on the status of reproducibility in their area of research. One of the many takeaways from the worrisome results of this survey is the following: 90% of the respondents agreed that there is a reproducibility crisis, and the overall top answer to boosting reproducibility was "better understanding of statistics". Indeed, many factors contributing to the explosion of irreproducible research stem from the neglect of the fact that statistics is no longer as static as it was in the first half of the 20th century, when statistical hypothesis testing came into prominence as a theoretically rigorous proposal for making valid discoveries with high confidence. When science first saw the rise of statistical testing, the basic idea was the following: you put forward competing hypotheses about the world, then you collect some data, and finally you use these data to validate your hypotheses. Typically, one was in a situation where they could iterate this three-step process only a few times; data was scarce, and the necessary computations were lengthy. Remember, this is the early to mid-20th century we are talking about. This forerunner of today's scientific investigations would hardly recognize its own field in 2019. Nowadays, testing is much more dynamic and is performed at a scale larger than ever before. Even within a single institution, thousands of hypotheses are tested in a short time interval, older test results inspire future potential analyses, and scientific exploration oftentimes becomes a never-ending stream of individual hypothesis tests. What enabled this explosion of exploratory research is the high-throughput technologies and large amounts of data that we started seeing only recently, at least relative to the era of statistical thinking. That said, as in any discipline with well-established and successful foundations, it is difficult to move away from classical paradigms in testing. Much of today's large-scale investigation still uses tools and techniques which, although powerful and supported by beautiful theory, do not take into account that each test might be just a little piece of a much bigger puzzle of exploratory research. Many disciplines have yet to acquire novel methodology for testing, one that promotes valid inferences at scale and thus limits grandiose publications comprised of irreplicable mirages. Let us analyze why classical hypothesis testing might lead to many spurious conclusions when the number of tests is large. We do so by elaborating on the three main steps of a test: "hypothesize", "collect data" and "validate". In the "hypothesize" step, a well-defined null hypothesis is formulated. For example, this could be "jelly beans do not cause acne"; we will use this as our running example. Notice that the null hypothesis, or simply null, is the opposite of what would be considered a discovery. In short, the null is the status quo. Also at the beginning of a test, a false positive rate (FPR) is chosen. This is the maximal allowed probability of making a false discovery, typically chosen around 0.05. In the context of our running example, this means the following: if the null is true, i.e.
if jelly beans do not cause acne, we will only have a 5% chance of proclaiming causation between jelly beans and acne. In a frequentist manner, we assume that there is a deterministic ground truth about the null hypothesis. That is, it is either true or not. We will refer to the null hypotheses that are true as true nulls, and to those that are false as non-nulls. In our example, if jelly beans do not cause acne, the null hypothesis is a true null. If it is a non-null, however, we would ideally like to proclaim a discovery. The second step is calculating a p-value based on collected data. This protagonist of many controversies around statistical testing is the probability of seeing the collected data, or something even more extreme, if the null is true. In our example, this is the probability of having some observed parameter of skin condition, or something "even more unusual", if jelly beans indeed do not cause acne. To illustrate this point, consider the plot below. Let the bell curve be the distribution of the skin parameter if jelly beans do not cause acne. Then, the p-value is the red shaded area under this curve, which is everything "right of" the observed data point. The smaller the p-value, the more unlikely it is that the observation can be explained purely by chance. The last step is validation. If the calculated p-value is smaller than the FPR, the null hypothesis is rejected, and a discovery is proclaimed. In our running example, if the red shaded area is less than 0.05, we say that jelly beans cause acne. Finally, let us lift the lid on why there are so many false discoveries in large-scale testing. By construction, valid p-values are uniformly distributed on $[0,1]$1, if the null is true. This means that, even if jelly beans do not really cause acne, there is still a 0.05 probability that a discovery is falsely proclaimed. Therefore, if testing N hypotheses that are truly null and hence should not be discovered, one is almost certain to proclaim some of them as discoveries if $N$ is large. For example, if all tests are independent, around 5% of $N$ will be discovered. Already after 20 tests of true nulls, even if they are completely arbitrary, one is expected to make a false discovery! And this is how science goes wrong. To recap, around 5% of the tested true null hypotheses unfortunately have to be discovered either way, simply by laws of probability. This wouldn't really be an issue if most of the tested hypotheses were legitimate potential discoveries, i.e. non-nulls. Then, 5% of a small-ish number of true nulls would be negligible. Typically, however, this is not the case. We test loads of crazy, out-there hypotheses, which would attract a lot of attention if confirmed, and we do so simply because we can. In many areas, both observations and computational resources are abundant, so there is little incentive to stay on the "safe side". So, how can one make scientific discoveries without the fear of reporting too many false ones? Controlling the false discovery rate (FDR) with no additional goal is an easy task; namely, making no discoveries trivially gives FDR = 0. The implicit goal behind the vast literature on FDR is discovering as many non-nulls as possible, while keeping FDR controlled under a pre-specified level $\alpha$. We collectively refer to all methods with this goal as FDR methods. Initially, FDR methods were offline procedures. This means that they required collecting a whole batch of p-values before deciding which tests to proclaim as discoveries.
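The arithmetic behind "around 5% of $N$ will be discovered" is easy to check empirically. The short simulation below is an illustrative sketch (not code from the original post); it tests $N$ hypotheses that are all truly null, so every "discovery" it reports is a false one.

import numpy as np

rng = np.random.default_rng(0)

N = 10_000          # number of tests, all of them truly null
fpr = 0.05          # per-test false positive rate

# Under the null, a valid p-value is uniform on [0, 1].
p_values = rng.uniform(0.0, 1.0, size=N)

# Classical per-test rule: reject whenever p < FPR.
false_discoveries = np.sum(p_values < fpr)

print(false_discoveries)            # about 500, i.e. roughly 5% of N
print(false_discoveries / N)        # close to 0.05

# Probability of at least one false discovery among 20 independent true nulls:
print(1 - (1 - fpr) ** 20)          # about 0.64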
The most notable example of this class is the successful Benjamini-Hochberg procedure, which has for a long time been the default of FDR methods. However, the scale and scope of modern testing have begun to outstrip this well-recognized methodology. It is far from convenient to wait for all the p-values one wants to test, especially at institutions where testing is a never-ending process. To be more precise, we typically want to make decisions during and between our tests, in particular because this allows us to shape future analyses based on outcomes of past tests. This inspired a new line of work on FDR control, in which decisions are made online. In online FDR control, p-values arrive one at a time, and the decision of whether or not to make a discovery is made as soon as a p-value is observed. Importantly, online FDR algorithms have enabled controlling FDR over a lifetime; even if the number of sequential tests tends to infinity, one would still have a guarantee that most of the proclaimed discoveries are indeed non-nulls. The basic principle of online FDR control is to track and control a dynamic quantity called wealth. The wealth represents the current error budget, and is a result of all previously performed tests. In particular, if a test results in a discovery, the wealth increases, while if a discovery is not made, the wealth decreases; note that this update is completely independent of whether the test is truly null or not. When a new test starts, its FPR is chosen based on the available wealth; the bigger the wealth, the bigger the FPR, and consequently the better the chance for a discovery. In fact, this idea has a perfect analogy with testing in a broader social context. To make scientific discoveries, you are awarded an initial grant (corresponding to the target FDR level $\alpha$). This initial funding decreases with every new experiment, and, if you happen to make a scientific discovery, you are again awarded some "wealth", which you can use toward the budget for subsequent tests. This is essentially the real-world translation of the mathematical expressions guiding online FDR algorithms. Although online FDR control has broadened the domain of applications where false discoveries can be controlled, it has failed to account for several important aspects of modern testing. The main observation is that large-scale testing is not only sequential, but "doubly sequential". Tests are run in a sequential fashion, but also each test internally is comprised of a sequence of atomic executions, which typically finish at an unpredictable time. This fact makes practitioners run multiple tests that overlap in time in order to gain time efficiency, allowing tests to start and finish at random times. For example, in clinical trials, it is common to test several different treatment variants against a common control. These trials are often called "perpetual", as multiple treatments are tested in parallel, and new treatments enter the testing platform at random times in an online manner. Similarly, A/B testing in industry is typically distributed across many individuals and research teams, and across time, with companies running hundreds of tests per day. This large volume of tests, as well as their complex distribution across many analysts, inevitably causes asynchrony in testing. This circumstance is a problem for standard online FDR methodology.
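The wealth mechanics described above can be made concrete with a deliberately simplified sketch in Python. This is not any specific published algorithm, and the particular constants are arbitrary illustrative choices; it only mirrors the bookkeeping in the text: the FPR of each new test is a fraction of the current wealth, the wealth pays for every test, and a discovery earns some wealth back.

def online_testing_with_wealth(p_values, initial_wealth=0.05,
                               spend_fraction=0.1, reward=0.025):
    """Toy wealth-based online testing loop (illustrative only).

    For each incoming p-value, the test level alpha_t is a fixed
    fraction of the remaining wealth.  The wealth decreases by alpha_t
    for every test and increases by `reward` whenever a discovery is
    made.  Real online FDR procedures use carefully derived schedules
    instead of these toy constants.
    """
    wealth = initial_wealth
    decisions = []
    for p in p_values:
        alpha_t = spend_fraction * wealth      # FPR grows with available wealth
        discovery = p <= alpha_t
        wealth -= alpha_t                      # every test costs its error budget
        if discovery:
            wealth += reward                   # a discovery replenishes the budget
        decisions.append(discovery)
    return decisions

if __name__ == "__main__":
    print(online_testing_with_wealth([0.001, 0.2, 0.004, 0.9, 0.03]))
    # [True, False, True, False, False]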
Namely, all existing online FDR algorithms assume tests are run synchronously, with no overlap in time; in other words, in order to determine a false positive rate for an upcoming test, online FDR methods need to know the outcomes of all previously started tests. The figure below depicts the difference between synchronous and asynchronous online testing. For each time step $t$, $W_t$, $P_t$ and $\alpha_t$ are respectively the available wealth at the beginning of the $(t+1)$-th test, the p-value resulting from the $t$-th test, and the FPR of the $t$-th test. Furthermore, the asynchronous nature of modern testing introduces patterns of dependence between p-values that do not conform to common assumptions. Prior work on online FDR either assumes perfect independence between p-values (overly optimistic), or arbitrary dependence between all tested p-values in the sequence (overly pessimistic). As data are commonly shared across different tests, the first assumption is clearly difficult to satisfy. In clinical trials, having a common control arm induces dependence; in A/B testing, many tests reuse data from the same shared pool, again causing dependence. On the other end, it is not natural to assume that dependence spills over the entire p-value sequence; older data and test outcomes with time become "stale," and no longer have direct influence on newly created tests. Modern testing calls for an intermediate notion of dependence, called local dependence, one that assumes p-values that are far enough apart in the sequence are independent, while any two that are close enough are likely to depend on each other. In a recent manuscript, we developed FDR methods that confront both of these difficulties of large-scale testing. Our methods control FDR in sequential settings that are arbitrarily asynchronous, and/or yield p-values that are locally dependent. Interestingly, from the point of view of our analysis, both local dependence and asynchrony are solved via the same technical instrument, which we call conflict sets. More formally, each new test has a conflict set, which consists of all previously started tests whose outcome is not known (e.g. if there is asynchrony so they are still running), or is known but might have some leverage on the new test (e.g. if there is dependence). We show that computing the FPR of a new test while assuming "unfavorable" outcomes of the conflicting tests is the right approach to guaranteeing FDR control (we call this the principle of pessimism). It is worth pointing out that FDR control under conflict sets has to be more conservative by construction; to account for dependence between tests, as well as the uncertainty about the tests in progress, the FPRs have to be chosen appropriately smaller. That said, our methods are a strict generalization of prior work on online FDR; they interpolate between standard online FDR algorithms, when the conflict sets are empty, and the Bonferroni correction (also known as alpha-spending), when the conflict sets are arbitrarily large. The latter controls the familywise error rate, which is a more stringent error metric than FDR, under any assumption on how tests relate. This interpolation has introduced the possibility of a tradeoff between the consideration of overall rate of discovery per unit of real time, and consideration of the complexity of careful coordination required to minimize dependence and asynchrony.
The replicability of hypothesis tests is largely in crisis, as the scale of modern applications has long outstripped classical testing methodology which is still in use. Moreover, prior efforts toward remedying this problem have neglected the fact that testing is massively asynchronous, and hence the existing solutions for boosting reproducibility have not been suitable for many common large-scale testing schemes. Motivated by this observation, we developed methods that control the false discovery rate in complex asynchronous scenarios, allowing statisticians to perform hypothesis tests with a small fraction of false discoveries, and with minimal explicit coordination between tests.
Zrnic, T., Ramdas, A., & Jordan, M. I. (2018). Asynchronous Online Testing of Multiple Hypotheses. arXiv preprint arXiv:1812.05068.
CommonCrawl
Electric charge is a physical quantity of matter which causes it to experience a force when near other electrically charged matter. Two positively charged bodies repel each other. Two negatively charged bodies repel each other. Positively charged and negatively charged bodies are attracted to one another. It is a scalar quantity which has been demonstrated to be quantized. Electric charge is frequently, at elementary levels at least, considered as one of the fundamental dimensions of physics. In dimensional analysis it is assigned the symbol $Q$ or $\mathbf Q$. The SI unit of electric charge is the coulomb $\mathrm C$.
CommonCrawl
In January 2016, I was honored to receive an "Honorable Mention" of the John Chambers Award 2016. This article was written for R-bloggers, whose builder, Tal Galili, kindly invited me to write an introduction to the rARPACK package. Eigenvalue decomposition is a commonly used technique in numerous statistical problems. For example, principal component analysis (PCA) basically conducts eigenvalue decomposition on the sample covariance of a data matrix: the eigenvalues are the component variances, and eigenvectors are the variable loadings. In R, the standard way to compute eigenvalues is the eigen() function. However, when the matrix becomes large, eigen() can be very time-consuming: the complexity to calculate all eigenvalues of an $n \times n$ matrix is $O(n^3)$. In real applications, however, we usually only need to compute a few eigenvalues or eigenvectors; for example, to visualize high-dimensional data using PCA, we may only use the first two or three components to draw a scatterplot. Unfortunately, eigen() has no option to limit the number of eigenvalues to be computed. This means that we always need to do the full eigen decomposition, which can cause a huge waste in computation. And this is why the rARPACK package was developed. As the name indicates, rARPACK was originally an R wrapper of the ARPACK library, a FORTRAN package that is used to calculate a few eigenvalues of a square matrix. However, ARPACK has not been under development for a long time, and it has some compatibility issues with the current version of LAPACK. Therefore, to maintain rARPACK in a good state, I wrote a new backend for it: the C++ library Spectra. The name of rARPACK was POORLY designed, I admit. Starting from version 0.8-0, rARPACK no longer relies on ARPACK, but due to CRAN policies and reverse dependencies, I have to keep using the old name. The usage of rARPACK is simple. If you want to calculate some eigenvalues of a square matrix A, just call the function eigs() and tell it how many eigenvalues you want (argument k), and which eigenvalues to calculate (argument which). By default, which = "LM" means to pick the eigenvalues with the largest magnitude (modulus for complex numbers and absolute value for real numbers). If the matrix is known to be symmetric, calling eigs_sym() is preferred since it guarantees that the eigenvalues are real. For really large data, the matrix is usually in sparse form. rARPACK supports several sparse matrix types defined in the Matrix package, and you can even pass an implicit matrix defined by a function to eigs(). See ?rARPACK::eigs for details. An extension of eigenvalue decomposition is the singular value decomposition (SVD), which works for general rectangular matrices. Take PCA as an example again. To calculate variable loadings, we can perform an SVD on the centered data matrix, and the loadings will be contained in the right singular vectors. This method avoids computing the covariance matrix, and is generally more stable and accurate than using cov() and eigen(). SVD has some interesting applications, and one of them is image compression. The basic idea is to perform a partial SVD on the image matrix, and then recover it using the calculated singular values and singular vectors. Even with a small number of singular pairs the recovered image, although quite blurred, already reveals the main structure of the original image. And if we increase the number of singular pairs to 50, then the difference is almost imperceptible, as is shown below.
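The image-compression idea can also be sketched outside of R. The snippet below is an illustrative Python translation (not the original R code from the post): it keeps only the largest k singular triplets of a grayscale image matrix and rebuilds a rank-k approximation; scipy's svds, which is itself ARPACK-based by default, plays the role of the partial SVD here, and the stand-in "image" is a made-up low-rank pattern.

import numpy as np
from scipy.sparse.linalg import svds

def compress(image, k):
    """Rank-k approximation of a 2-D grayscale image via partial SVD."""
    # svds returns the k largest singular values (in ascending order)
    # together with the corresponding singular vectors.
    u, s, vt = svds(image.astype(float), k=k)
    return u @ np.diag(s) @ vt

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # A stand-in "image": a smooth low-rank pattern plus a little noise.
    x = np.linspace(0, 1, 200)
    image = np.outer(np.sin(4 * x), np.cos(3 * x)) + 0.01 * rng.standard_normal((200, 200))
    approx = compress(image, k=5)
    # Relative reconstruction error drops quickly as k grows.
    print(np.linalg.norm(image - approx) / np.linalg.norm(image))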
There is also a nice Shiny app developed by Nan Xiao, Yihui Xie and Tong He that allows users to upload an image and visualize the effect of compression using this algorithm. The code is available on GitHub. Finally, I would like to use some benchmark results to show the performance of rARPACK. As far as I know, there are very few packages available in R that can do partial eigenvalue decomposition, so the results here are based on partial SVD. The first plot compares different SVD functions on a 1000x500 matrix, with dense format on the left panel, and sparse format on the right. The second plot shows the results on a 5000x2500 matrix. The code for the benchmark and the environment to run it can be found here.
CommonCrawl
Depending on the expected return model used, more assumptions need to be met. For the most common model, the market model, for example, the relationship between the stock and the market needs to remain stable throughout the estimation and the event window. Only then can the alpha and beta factors, which were established with a regression analysis during the estimation window, be used to predict expected returns for the event window (a minimal sketch of this estimation and the resulting cumulative abnormal return appears after the checklist below).
- Is the stock of the analyzed firm frequently traded? Infrequent trading of the firm's stock may lead to problems in deriving the estimation parameters $\alpha$ and $\beta$ of the market model. Further, infrequent trading suggests that the capital market might not be efficient, questioning the validity of the stock price reaction.
- Are the time series of prices of the stock and the reference index matching? Mismatches between the stock returns and the market returns throughout the estimation window may lead to overall shorter estimation periods and potentially biased parameters.
- Has information leakage taken place prior to the event? If information about the event has leaked to capital markets prior to the event window, the CAR of the event is not correct, since a certain part or the totality of the event has already been priced into the stock price during the estimation window.
- Have there been other events during the event window that could be responsible for the analyzed firm's stock price changes?
- Is the chosen reference index the best correlate to the firm's stock price? If the chosen reference index is not the best-correlated one available, analysis results may turn out biased.
- Has the relationship between the reference index and the firm's stock price changed over the estimation period? If so, the $\beta$ factor that is calculated from the estimation period would be biased. Predictions of normal returns would turn out incorrect, and with them also the abnormal returns.
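To make the market-model mechanics concrete, here is a minimal sketch with hypothetical data and variable names (not from the original text): $\alpha$ and $\beta$ are estimated by ordinary least squares on the estimation window, abnormal returns in the event window are the differences between realized returns and the returns predicted by the fitted model, and their sum is the CAR.

import numpy as np

def estimate_market_model(stock_returns, market_returns):
    """OLS fit of the market model: R_stock = alpha + beta * R_market + error."""
    beta, alpha = np.polyfit(market_returns, stock_returns, deg=1)
    return alpha, beta

def cumulative_abnormal_return(alpha, beta, stock_event, market_event):
    """Abnormal return = realized return - expected return from the market model."""
    expected = alpha + beta * market_event
    abnormal = stock_event - expected
    return abnormal.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical estimation window (e.g. 120 trading days before the event).
    market_est = rng.normal(0.0005, 0.01, size=120)
    stock_est = 0.0002 + 1.1 * market_est + rng.normal(0, 0.005, size=120)

    alpha, beta = estimate_market_model(stock_est, market_est)

    # Hypothetical event window (e.g. 5 trading days around the event).
    market_evt = rng.normal(0.0005, 0.01, size=5)
    stock_evt = 0.0002 + 1.1 * market_evt + 0.01   # +1% abnormal drift per day
    car = cumulative_abnormal_return(alpha, beta, stock_evt, market_evt)
    print(round(car, 4))   # close to 0.05, i.e. roughly 5 days of 1% abnormal return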
CommonCrawl
Abstract: Using an approach proposed by Lunin in 1989, upper bounds are found for the norms of large submatrices of a fixed $(N\times n)$-matrix which defines an operator from $l_2^n$ into $l_1^N$ with unit norm. Keywords: submatrix, operator norm, Lunin's lemma, Kadison-Singer problem. This work was supported by the Russian Science Foundation under grant no. 14-50-00005.
CommonCrawl
Because the case studies of Hydro-Québec and Manitoba Hydro contain sensitive information and cannot be shared publicly, we created a fictitious case study to showcase how the Robust Decision Making framework can be applied in the hydropower sector. This case study looks at different reservoir and generating station options to evaluate how their production and cost/benefit ratio are affected by long-term changes in streamflow. The problem is defined through the lens of the XMLR framework, a framework designed to help decision-makers lay out the variables influencing their decisions. As discussed in the About page, it includes four components: Uncertainties (X), Metrics (M), Levers (L) and Relations (R). Future streamflow values are modeled by multiplying the observed streamflow record by a factor varying linearly in time. This factor starts at 100% in 2016 and evolves to different values of dQ in 2050, ranging from -20% to +30% of the historical values: \(Q'=Q \times (1+dQ(t))\). This linear scaling is of course a rather crude simplification, as climate projections suggest changes to the annual cycle of streamflow, for example higher winter flows and lower summer flows. The change in temperature is used to drive changes in the electrical demand for heating and cooling. With each degree C of warming, power demand shifts from a pattern typical of southern Québec to a pattern that more closely resembles that of New England. The figure below shows the range of demand patterns that are explored. Reference and future power demand patterns varying with projected temperature. The price at which electricity will be sold in 2050 (dP) spans a large range of values, going from 10$/MWh to 110$/MWh. For comparison, the figure below shows the mean annual price in the Vermont market. Prices fluctuate from the annual scale to the hourly scale, but in this exercise we only account for annual changes and the mean monthly cycle. It's likely that with rising temperatures the pressure on winter prices would decrease, but this feedback is not included here. Mean annual electricity price in the Vermont market (left) and mean annual monthly cycle (right). Vertical bars describe the standard deviation of daily prices around the mean. The discount rate (dR) describes the expected returns brought in by an investment in the stock market with a risk profile similar to that of the hydropower investment under consideration. This comparison is the basis on which decision-makers decide whether to simply invest their capital in the stock market or to build new infrastructure. Since interest accrues over time, revenues that occur ten years into the future have a smaller net present value than the same revenues in five years. Here, we explore a range of discount rates going from 0% to 12%. Numerous metrics can be used to evaluate the performance of a hydropower investment, each with a relative importance that varies from person to person. Since the objective of decision-aiding tools is not to replace decision-makers but to provide them with easily accessible information, here we include seven different performance metrics that qualify the investment. Net Present Value: the current value of the investment, including costs and discounted future revenues. Among those metrics, energy would be used to plan resource adequacy, that is, making sure that future generation assets will be able to meet future estimated demand.
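A minimal sketch of the streamflow scaling \(Q'=Q \times (1+dQ(t))\) described above (hypothetical numbers, Python used purely for illustration): the scaling factor is interpolated linearly from 0% in 2016 to the chosen dQ value in 2050 and applied to the observed record. Holding the factor constant outside the 2016-2050 ramp is my own assumption.

import numpy as np

def scale_streamflow(q_obs, years, dq_2050, start_year=2016, end_year=2050):
    """Apply the linear scaling Q' = Q * (1 + dQ(t)) to an observed record.

    dQ(t) ramps linearly from 0 at start_year to dq_2050 at end_year
    (e.g. dq_2050 = -0.2 for -20%, +0.3 for +30%).
    """
    frac = (np.asarray(years) - start_year) / (end_year - start_year)
    frac = np.clip(frac, 0.0, 1.0)     # assumption: hold the factor constant outside the ramp
    dq_t = dq_2050 * frac
    return np.asarray(q_obs) * (1.0 + dq_t)

if __name__ == "__main__":
    years = np.array([2016, 2033, 2050])
    q_obs = np.array([1000.0, 1000.0, 1000.0])        # hypothetical flows, m3/s
    print(scale_streamflow(q_obs, years, dq_2050=0.3))  # [1000. 1150. 1300.]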
Firm energy is a reliability metric that generators use to convince buyers and regulators that they are able to meet demand and honor the terms of sales contracts. Spill flow measures the amount of water that cannot be used to generate power, which engineers try to minimize either by increasing capacity or by increasing reservoir storage. This storage volume varies depending on the flows to the reservoir and the energy-generating outflows. The level of the reservoir thus fluctuates daily and seasonally according to the balance between water inflows and energy demand. When these fluctuations are large, they can have impacts on shore erosion, biogeochemistry and habitats. Similarly, as the reservoir volume increases, so does the inundated area. Flooded area implies displacement of communities, loss of habitats and historical sites, and conversion of forest carbon sinks into aquatic carbon sources. It should be noted, however, that these impacts are not necessarily all proportional to the inundated area. Internal rate of return and net present value are two economic measures of the value of an investment over its amortization period. This case study compares four different options for a hydroelectric generating station located in a northern region. These four options constitute the levers (L) of the XMLR framework and combine different reservoir volumes and turbine capacities. They are denoted as the Small, Medium/Energy, Medium/Capacity and Large options respectively, referencing the reservoir volume and whether the focus is on generating energy or on responding to peak demand with high capacity. Large: a reservoir of 47,000 hm³ with a large capacity (~1200 m³/s). These four options of course perform differently with respect to the metrics described above, and our objective here is to be able to display and rapidly compare the results obtained from these options under different future conditions. In each of these four cases, we assume that construction starts in 2020, that the power station is in service in 2025, and that its costs are amortized over the next 50 years, that is, until 2065. Approximate costs have been set up as defaults, but can be changed within the application interface. The objective here being to display an example application of this decision-aiding tool, considerable simplifications are made to model the relations between climate change, hydropower generation and the different metrics. For one, the energy production model is extremely simple and based on the maximization of the firm energy. This energy model works on the assumption that the monthly energy production has to meet at all times the monthly demand pattern multiplied by the annual firm energy. The optimization algorithm then finds the highest firm energy that can be reached without breaching the reservoir's minimum and maximum operating levels. Real production models are vastly more complex, taking into account instantaneous energy prices, flow forecasts, network congestion and many other operational constraints. This optimization of the production is done for all values of future flow (dQ), future temperature (dT) and levers (L). The results in terms of energy production, spilled flow and reservoir fluctuations are saved to compute the different metrics over the amortization period. For decision-makers, it's often useful to consider the opinion of various experts on complex topics.
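A heavily simplified sketch of the firm-energy maximization described above (purely illustrative, with hypothetical units, data and constraints; the real model in the case study is more detailed and, among other things, accounts for turbine capacity): for a candidate firm energy, a monthly mass balance checks whether the reservoir stays within its operating levels, and a bisection search then finds the largest feasible value.

import numpy as np

def feasible(firm_energy, inflow, demand_pattern, s0, s_min, s_max, kwh_per_m3):
    """Check a monthly reservoir mass balance for a candidate firm energy."""
    storage = s0
    for q_in, d in zip(inflow, demand_pattern):
        release = firm_energy * d / kwh_per_m3     # water needed to meet that month's demand
        storage = storage + q_in - release
        storage = min(storage, s_max)              # excess water is spilled
        if storage < s_min:
            return False                           # minimum operating level breached
    return True

def max_firm_energy(inflow, demand_pattern, s0, s_min, s_max, kwh_per_m3,
                    hi=1e5, tol=1.0):
    """Bisection search for the largest firm energy the reservoir can sustain."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid, inflow, demand_pattern, s0, s_min, s_max, kwh_per_m3):
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # Hypothetical monthly inflows (volume units) and a demand pattern summing to 1.
    inflow = np.array([80, 90, 300, 500, 200, 120, 100, 90, 110, 130, 100, 80]) * 1.0
    demand = np.array([0.12, 0.11, 0.09, 0.07, 0.06, 0.06,
                       0.06, 0.06, 0.07, 0.09, 0.10, 0.11])
    print(max_firm_energy(inflow, demand, s0=500.0, s_min=200.0, s_max=1500.0,
                          kwh_per_m3=1.0))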
This application includes information about future runoff and temperature changes from climate models included in the CMIP5 ensemble used for the fifth IPCC report. It also includes information about discount rates used by different institutions, as well as energy price forecasts for New England from the U.S. Energy Information Administration. By clicking on an item name in the left-hand menu, you display points on the graphic and slider axes, their positions indicating the forecast values. The objective of this application is not to automatically find the best investment option, but rather to provide enough information for decision-makers to explore the consequences of each option so that they can make informed decisions. Try the application by selecting the metrics that you find the most relevant and comparing the different options over a range of futures that you think is plausible. Play with the sliders to evaluate the sensitivity of your decisions to changes in the uncertain variables. You can compare the metric values of multiple options over one dimension by clicking on Leverlines. You can also assess the regret of choosing one option, and not another, by clicking on the RegretMap button. Detailed explanations can be found by clicking on the "?" marker in the top menu of the application.
CommonCrawl
Now, imagine the simulator wants to simulate the real-world view in the ideal world. In order to simulate $x$ (the honest party's input), the simulator must send $1$ to the trusted party and then receive $x$ from it. Thus the simulator succeeds in simulating $x$, but now the simulator computes the output as $x$ && $(y = 1)$ = $x$, which is not equal to the output in the real view, $x$ && $0$ = $0$. Thus the simulator cannot simulate the output in the ideal model. The definition says that for every adversary A in the real model there must exist a simulator S for the ideal model. Considering this definition, I could find an adversary (who sends $y = 0$ in the real world) for which there is no simulator, so the protocol is not secure in the malicious model. In Lindell's book, page 27 (the proof below), it is said that this protocol is secure! I am so confused. (I found a scenario where the protocol is not secure). If I understand correctly, you consider an adversary $\mathcal A$ corrupting $P_2$ in the real world, which ignores $P_2$'s input $y$ and just outputs $0$ regardless of the value $x$ sent by $P_1$, is that right? And you claim that this adversary is not simulatable. Well, of course this adversary is simulatable: the simulator sends whatever to the trusted party, receives whatever, and outputs $0$. The proof, by the way, shows more: it shows how to construct a simulator for any adversary, as follows. In the real world, the adversary receives $x$, performs whatever computation based on $x$, $y$, and its auxiliary input $z$, and outputs the result. The simulator sends $1$ to the trusted party, receives $x$, performs the same computation as the real-world adversary, and outputs its result.
CommonCrawl
Parameters of wp_redirect():
- $location (string): The path or URL to redirect to.
- $status (int): The HTTP response status code to use.
Related filters:
- Filters the redirect HTTP response status code to use.
- Filters the X-Redirect-By header, which allows applications to identify themselves when they're doing a redirect. Filter parameters: $x_redirect_by (string), the application doing the redirect; $status (int), the status code to use; $location (string), the path to redirect to.
Changelog: 5.1.0 The $x_redirect_by parameter was added.
wp_redirect() does not validate that the $location is a reference to the current host. This means that this function is vulnerable to open redirects if you pass it a $location supplied by the user. For this reason, it is best practice to always use wp_safe_redirect() instead, since it will use wp_validate_redirect() to ensure that the $location refers to the current host. Only use wp_redirect() when you are specifically trying to redirect to another site, and then you can hard-code the URL. One documented example redirects to the parent post URL, which can be used to redirect attachment pages back to the parent.
CommonCrawl