Exam 2 will be in class on Thursday, 9 Nov. See Class 20 notes for details on the exam. Today we review the topics that we learned after Exam 1, with the exception of number theory (which will not be included in Exam 2): State Machines and how to argue about the correctness of programs; Recursive Definitions and how to prove statements about them using structural induction; and Infinite Sets and Cardinalities, and how to show sets are finite, infinite, countable, or uncountable.

$M = (S, G \subset S \times S, q_0 \in S)$ defines a state machine, with state set $S$, transition relation $G$, and start state $q_0$. Invariant Principle: if $P$ is a preserved invariant and $P(q_0)$ is true, then property $P$ is true for all reachable states. A typical correctness argument proceeds as follows: model the program as a state machine $M$; show that $M$ eventually terminates; find a suitable preserved invariant $P$ for $M$; and show $P(q_0)$, i.e., that the preserved invariant holds for the start state. A small illustration in code follows below.

A recursive data type $D$ is specified by defining one or more base objects, $d \in D$, and one or more constructor cases that specify how to construct a new object $d \in D$ from one or more previously constructed objects, $d_1, d_2, \ldots \in D$.
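To make the Invariant Principle concrete, here is a small illustrative sketch (not from the notes; the machine and invariant below are made-up examples) that checks a preserved invariant over a bounded portion of the reachable states:

```python
# Illustrative check of the Invariant Principle for a tiny state machine:
# explore reachable states with BFS and assert the invariant on each one.
from collections import deque

start = (0, 0)                      # q0

def transitions(state):
    """G: from (x, y) we may move to (x+2, y) or (x, y+2)."""
    x, y = state
    return [(x + 2, y), (x, y + 2)]

def invariant(state):
    """P: x + y is even (true at q0 and preserved by every transition)."""
    x, y = state
    return (x + y) % 2 == 0

# BFS over reachable states, bounded because the real state space is infinite.
seen, frontier = {start}, deque([start])
while frontier:
    q = frontier.popleft()
    assert invariant(q), f"invariant violated at {q}"
    for nxt in transitions(q):
        if nxt not in seen and max(nxt) <= 20:   # arbitrary exploration bound
            seen.add(nxt)
            frontier.append(nxt)

print(f"Invariant holds on all {len(seen)} explored reachable states.")
```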
CommonCrawl
Very good solutions came in from Christopher Kassam, age 13, Epsom; from Ian Green of Cooper's Coborn School; Lizzie and Sheli, Ruoyi Sun, Sarah Rogers, Arti Patel from the NLCS Puzzle Club; Alex Lam from St Peter's College, Adelaide; Beth Carroll, Sheila Luk, Alicia Maultby and Rachel Walker from the Mount School, York; Alex Filz from Ousedale School, Milton Keynes; Farhan Iskander, Foxford School, Coventry; and Christopher Dorrington and Lorn Tao, Stamford School. Well done all of you.

Multiplying the three given products together gives $(xyz)^2 = 36$ which, when square rooted, gives $xyz = \pm 6$. We know that $xy = 1$, which means $z$ must equal $\pm 6$. We then worked out that, as $xz = 9$, we have $x \times (\pm 6) = 9$, so $x$ must be $\pm 1.5$. Then $xy = 1$, that is $\pm 1.5 \times y = 1$, so $y$ must be $\pm 0.6$ recurring (or two thirds). So $x = 3/2$, $y = 2/3$ and $z = 6$, or $x = -3/2$, $y = -2/3$ and $z = -6$ (where $x$, $y$ and $z$ all have the same sign).

Selecting and using information. Working systematically. Creating and manipulating expressions and formulae. Other equations. Generalising. Integers. Simultaneous equations. Linear equations. Diophantine equations. Solving equations graphically.
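If you want to check the answer mechanically, here is a small sketch using sympy. It assumes the original system was $xy = 1$, $yz = 4$, $xz = 9$, which is what the worked solution above implies:

```python
# Hypothetical check of the solution with sympy (system inferred from the text).
import sympy as sp

x, y, z = sp.symbols('x y z')
solutions = sp.solve([x*y - 1, y*z - 4, x*z - 9], [x, y, z], dict=True)
print(solutions)  # expect the two sign-symmetric solutions (3/2, 2/3, 6) and (-3/2, -2/3, -6)
```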
CommonCrawl
Purpose: In this tutorial, you will learn how to optimize a general crystal structure. An explicit example is given for hexagonal structures. Here, you will set up and execute a series of calculations for different volumes (at constant $c/a$ ratio) and for different $c/a$ ratios (at constant volume) for Be in the hexagonal structure. The tools which are used in this tutorial are applicable to any crystal type. After completing it, you will have all the requirements to optimize the lattice parameters of any given crystal structure.

When setting up the calculation, the script asks for the Laue classification of the structure (Hexagonal I in this case), for which parameter you would like to optimize, and for the absolute value of the maximum strain for which we want to perform the calculation (0.01 here). To execute the calculations, you have to run the script OPTIMIZE-submit.sh. If you do so, the screen output will be similar to the following. At this point, the script is asking whether you desire to use a Murnaghan (M) or Birch-Murnaghan (B) equation of state for extracting equilibrium parameters such as the equilibrium volume and bulk modulus.

Optimized lattice parameters saved into the file: "BM-optimized.xml"

Moreover, the script generates a plot (PostScript file BM_eos.eps) which looks like the following. The bulk modulus and bulk-modulus pressure derivative which are derived here have to be interpreted only as fitting parameters. They do not coincide with the "exact" bulk modulus and bulk-modulus pressure derivative of the crystal. Indeed, these "exact" values should be obtained by fitting, with an equation of state (Birch or Murnaghan), the function E=E(V), where for each volume V, E(V) is the energy obtained by optimizing, at that given V, all other lattice and internal parameters.

The visual analysis of the plots is very important, and the user should always check them at each step. In particular, if the minimum lies outside the displayed region, the calculation should be restarted with more appropriate values of the initial parameters (e.g., volume or $c/a$ ratio). The optimal situation is when the minimum of the energy curves is located in the middle of the investigated region. If the difference in energy between the calculated points and the fit is larger than the final required accuracy in the energy, the calculation should be restarted with more appropriate computational parameters. In particular, one should consider the number of k points (ngridk), the value of rgkmax, and the accuracy in the calculated total energy (epsengy).

A file corresponding to an exciting input file for the optimized geometry is created with the name BM-optimized.xml. If you are interested in checking how accurate the calculated equilibrium parameters are at this step, you can find more information here. At this point, you have performed the first optimization step by varying only the volume. In order to be prepared for the next step, you should now move to the parent directory and rename the VOL directory to 1-VOL (first step, optimizing only the volume). Then, you should copy the BM-optimized.xml file to the current directory with the new name 1-VOL.xml. This file will be used as the input file in the next step.

Optimized lattice parameters saved into the file: "coa-optimized.xml"

In this case, the optimization is performed using a fourth-order polynomial fit for calculating the minimum energy and the corresponding strain. The resulting plot (also available as the PostScript file coa.eps) should look like the following.
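The Birch-Murnaghan fitting that the script performs can also be reproduced independently. The following is a minimal, self-contained sketch using numpy/scipy; it is not part of the exciting tools, and the data values and units below are placeholders only:

```python
# Minimal sketch: fit a third-order Birch-Murnaghan equation of state E(V).
# All numbers below are made-up placeholders, not results from this tutorial.
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

rng = np.random.default_rng(0)
V = np.linspace(100.0, 120.0, 11)                       # volumes (Bohr^3), placeholder
E = birch_murnaghan(V, -29.4, 110.0, 0.004, 3.5)        # synthetic energies (Ha)
E += 1e-6 * rng.standard_normal(V.size)                 # small noise

p0 = [E.min(), V[np.argmin(E)], 0.01, 4.0]              # rough initial guess
popt, _ = curve_fit(birch_murnaghan, V, E, p0=p0)
E0, V0, B0, B0p = popt
print(f"E0 = {E0:.6f} Ha, V0 = {V0:.2f} Bohr^3, B0 = {B0:.5f} Ha/Bohr^3, B0' = {B0p:.2f}")
```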
Repeat now the procedure already explained in STEP1, running the script OPTIMIZE-lattice.py and using as entries the values 2-COA.xml, 1, 0.005, and 5 in the given order. After having performed the calculation (running the script OPTIMIZE-submit.sh inside the directory VOL), you run OPTIMIZE-lattice.py and get the following plot. At this point, you have optimized the volume for the second time. Follow the last part of STEP1 and copy the file BM-optimized.xml to the parent directory under the name 3-VOL.xml.

Proceed in a similar way to STEP2. Run the script OPTIMIZE-lattice.py using as entries the values 3-VOL.xml, 2, 0.005, and 5 in the given order. Using the same procedure as in the previous steps, you will end up with the following plot.

The equilibrium volume is converged within $10^{-1}$ Bohr$^3$. The $c/a$ ratio is converged within $3\times10^{-4}$. The energy at the minimum seems to be converged within $10^{-4}$ mHa. Indeed, such a small value should be considered an artifact of the optimization procedure, which assumes that the calculated total energies are exact. However, the accuracy in the determination of the minimum energy cannot be smaller than the accuracy of the total energy in a single SCF calculation. For the calculations performed in this tutorial, total energies are calculated with the default value of the accuracy, i.e., $10^{-4}$ Ha. If these results correspond to the desired accuracy, you can stop the optimization procedure. Otherwise, you proceed with the next step and, using the new results, check again the convergence behavior of the equilibrium parameters. Perform a single calculation of exciting.
CommonCrawl
We present PROPS, a lightweight transfer learning mechanism for sequential data. PROPS learns probabilistic perturbations around the predictions of one or more arbitrarily complex, pre-trained black box models (such as recurrent neural networks). The technique pins the black-box prediction functions to "source nodes" of a hidden Markov model (HMM), and uses the remaining nodes as "perturbation nodes" for learning customized perturbations around those predictions. In this paper, we describe the PROPS model, provide an algorithm for online learning of its parameters, and demonstrate the consistency of this estimation. We also explore the utility of PROPS in the context of personalized language modeling. In particular, we construct a baseline language model by training an LSTM on the entire Wikipedia corpus of 2.5 million articles (around 6.6 billion words), and then use PROPS to provide lightweight customization into a personalized language model of President Donald J. Trump's tweeting. We achieve good customization after only 2,000 additional words, and find that the PROPS model, being fully probabilistic, provides insight into when President Trump's speech departs from generic patterns in the Wikipedia corpus. Python code (for both the PROPS training algorithm as well as experiment reproducibility) is available at https://github.com/cylance/perturbed-sequence-model.

There have been recent efforts to incorporate Graph Neural Network models for learning full-stack solvers for constraint satisfaction problems (CSP), and particularly Boolean satisfiability (SAT). Despite the unique representational power of these neural embedding models, it is not clear how the search strategy in the learned models actually works. On the other hand, by fixing the search strategy (e.g. greedy search), we would effectively deprive the neural models of learning better strategies than those given. In this paper, we propose a generic neural framework for learning CSP solvers that can be described in terms of probabilistic inference and yet learn search strategies beyond greedy search. Our framework is based on the idea of propagation, decimation and prediction (and hence the name PDP) in graphical models, and can be trained directly toward solving CSP in a fully unsupervised manner via energy minimization, as shown in the paper. Our experimental results demonstrate the effectiveness of our framework for SAT solving compared to both neural and state-of-the-art baselines.

The Straight-Through Estimator (STE) is widely used for back-propagating gradients through the quantization function, but the STE technique lacks a complete theoretical understanding. We propose an alternative methodology called alpha-blending (AB), which quantizes neural networks to low precision using stochastic gradient descent (SGD). Our method (AB) avoids the STE approximation by replacing the quantized weight in the loss function with an affine combination of the quantized weight $w_q$ and the corresponding full-precision weight $w$, with non-trainable scalar coefficients $\alpha$ and $1-\alpha$. During training, $\alpha$ is gradually increased from 0 to 1; the gradient updates to the weights are through the full-precision term, $(1-\alpha)w$, of the affine combination; the model is thus converted from full precision to low precision progressively.
To evaluate the method, a 1-bit BinaryNet on the CIFAR10 dataset and 8-bit and 4-bit MobileNet v1 and ResNet-50 v1/2 on the ImageNet dataset are trained using the alpha-blending approach, and the evaluation indicates that AB improves top-1 accuracy by 0.9%, 0.82% and 2.93%, respectively, compared to the results of STE-based quantization.

This paper addresses the problem of object discovery from unlabeled driving videos captured in a realistic automotive setting. Identifying recurring object categories in such raw video streams is a very challenging problem. Not only do object candidates first have to be localized in the input images, but many interesting object categories occur relatively infrequently. Object discovery will therefore have to deal with the difficulties of operating in the long tail of the object distribution. We demonstrate the feasibility of performing fully automatic object discovery in such a setting by mining object tracks using a generic object tracker. In order to facilitate further research in object discovery, we release a collection of more than 360,000 automatically mined object tracks from 10+ hours of video data (560,000 frames). We use this dataset to evaluate the suitability of different feature representations and clustering strategies for object discovery.

Deep structured-prediction energy-based models combine the expressive power of learned representations and the ability to embed knowledge about the task at hand into the system. A common way to learn the parameters of such models consists in a multistage procedure where different combinations of components are trained at different stages. The joint end-to-end training of the whole system is then done as the last fine-tuning stage. This multistage approach is time-consuming and cumbersome as it requires multiple runs until convergence and multiple rounds of hyperparameter tuning. From this point of view, it is beneficial to start the joint training procedure from the beginning. However, such approaches often unexpectedly fail and deliver results worse than the multistage ones. In this paper, we hypothesize that one reason for the joint training of deep energy-based models to fail is the incorrect relative normalization of different components in the energy function. We propose online and offline scaling algorithms that fix the joint training and demonstrate their efficacy on three different tasks.

Although convolutional neural networks (CNNs) currently dominate competitions on image segmentation, for neuroimaging analysis tasks more classical generative approaches based on mixture models are still used in practice to parcellate brains. To bridge the gap between the two, in this paper we propose a marriage between a probabilistic generative model, which has been shown to be robust to variability among magnetic resonance (MR) images acquired via different imaging protocols, and a CNN. The link is in the prior distribution over the unknown tissue classes, which are classically modelled using a Markov random field. In this work we model the interactions among neighbouring pixels by a type of recurrent CNN, which can encode more complex spatial interactions. We validate our proposed model on publicly available MR data, from different centres, and show that it generalises across imaging protocols. This result demonstrates a successful and principled inclusion of a CNN in a generative model, which in turn could be adapted by any probabilistic generative approach for image segmentation.
A growing number of state-of-the-art transfer learning methods employ language models pretrained on large generic corpora. In this paper we present a conceptually simple and effective transfer learning approach that addresses the problem of catastrophic forgetting. Specifically, we combine the task-specific optimization function with an auxiliary language model objective, which is adjusted during the training process. This preserves language regularities captured by language models, while enabling sufficient adaptation for solving the target task. Our method does not require pretraining or finetuning separate components of the network, and we train our models end-to-end in a single step. We present results on a variety of challenging affective and text classification tasks, surpassing well-established transfer learning methods of greater complexity.

End-to-end optimization has achieved state-of-the-art performance on many specific problems, but there is no straightforward way to combine pretrained models for new problems. Here, we explore improving modularity by learning a post-hoc interface between two existing models to solve a new task. Specifically, we take inspiration from neural machine translation, and cast the challenging problem of cross-modal domain transfer as unsupervised translation between the latent spaces of pretrained deep generative models. By abstracting away the data representation, we demonstrate that it is possible to transfer across different modalities (e.g., image-to-audio) and even different types of generative models (e.g., VAE-to-GAN). We compare to state-of-the-art techniques and find that a straightforward variational autoencoder is able to best bridge the two generative models through learning a shared latent space. We can further impose supervised alignment of attributes in both domains with a classifier in the shared latent space. Through qualitative and quantitative evaluations, we demonstrate that locality and semantic alignment are preserved through the transfer process, as indicated by high transfer accuracies and smooth interpolations within a class. Finally, we show this modular structure speeds up training of new interface models by several orders of magnitude by decoupling it from expensive retraining of base generative models.

In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant gap in performance between the detection of small and large objects. We analyze the current state-of-the-art model, Mask-RCNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) small objects do not appear often enough even within the images containing them. We thus propose to oversample those images with small objects and augment each of those images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects with that on small objects. We evaluate different pasting augmentation strategies, and ultimately we achieve a 9.7% relative improvement on the instance segmentation and 7.1% on the object detection of small objects, compared to the current state-of-the-art method on MS COCO.
We introduce a new beam search decoder that is fully differentiable, making it possible to optimize at training time through the inference procedure. Our decoder allows us to combine models which operate at different granularities (e.g. acoustic and language models). It can be used when target sequences are not aligned to input sequences by considering all possible alignments between the two. We demonstrate our approach scales by applying it to speech recognition, jointly training acoustic and word-level language models. The system is end-to-end, with gradients flowing through the whole architecture from the word-level transcriptions. Recent research efforts have shown that deep neural networks with attention-based mechanisms are powerful enough to successfully train an acoustic model from the final transcription, while implicitly learning a language model. Instead, we show that it is possible to discriminatively train an acoustic model jointly with an explicit and possibly pre-trained language model.

This introduction aims to tell the story of how we put words into computers. It is part of the story of the field of natural language processing (NLP), a branch of artificial intelligence. It targets a wide audience with a basic understanding of computer programming, but avoids a detailed mathematical treatment, and it does not present any algorithms. It also does not focus on any particular application of NLP such as translation, question answering, or information extraction. The ideas presented here were developed by many researchers over many decades, so the citations are not exhaustive but rather direct the reader to a handful of papers that are, in the author's view, seminal. After reading this document, you should have a general understanding of word vectors (also known as word embeddings): why they exist, what problems they solve, where they come from, how they have changed over time, and what some of the open questions about them are. Readers already familiar with word vectors are advised to skip to Section 5 for the discussion of the most recent advance, contextual word vectors.

Learning image representations to capture fine-grained semantics has been a challenging and important task enabling many applications such as image search and clustering. In this paper, we present Graph-Regularized Image Semantic Embedding (Graph-RISE), a large-scale neural graph learning framework that allows us to train embeddings to discriminate an unprecedented O(40M) ultra-fine-grained semantic labels. Graph-RISE outperforms state-of-the-art image embedding algorithms on several evaluation tasks, including image classification and triplet ranking. We provide case studies to demonstrate that, qualitatively, image retrieval based on Graph-RISE effectively captures semantics and, compared to the state-of-the-art, differentiates nuances at levels that are closer to human perception.

Deep neural networks (DNNs) have shown an inherent vulnerability to adversarial examples, which are maliciously crafted from real examples by attackers aiming at making target DNNs misbehave. The threats of adversarial examples exist widely in image, voice, speech, and text recognition and classification. Inspired by previous work, research on adversarial attacks and defenses in the text domain has developed rapidly. To the best of our knowledge, this article presents a comprehensive review of adversarial examples in text.
We analyze the advantages and shortcomings of recent adversarial example generation methods and elaborate on the efficiency and limitations of countermeasures. Finally, we discuss the challenges in adversarial texts and suggest research directions in this area.

Time-frequency (TF) representations provide powerful and intuitive features for the analysis of time series such as audio. But still, generative modeling of audio in the TF domain is a subtle matter. Consequently, neural audio synthesis widely relies on directly modeling the waveform, and previous attempts at unconditionally synthesizing audio from neurally generated TF features still struggle to produce audio at satisfying quality. In this contribution, focusing on the short-time Fourier transform, we discuss the challenges that arise in audio synthesis based on generated TF features and how to overcome them. We demonstrate the potential of deliberate generative TF modeling by training a generative adversarial network (GAN) on short-time Fourier features. We show that our TF-based network was able to outperform the state-of-the-art waveform-generating GAN, despite the two networks having similar architectures.

The recently proposed Unbiased Online Recurrent Optimization algorithm (UORO, arXiv:1702.05043) uses an unbiased approximation of RTRL to achieve fully online gradient-based learning in RNNs. In this work we analyze the variance of the gradient estimate computed by UORO, and propose several possible changes to the method which reduce this variance both in theory and in practice. We also contribute significantly to the theoretical and intuitive understanding of UORO (and its existing variance reduction technique), and demonstrate a fundamental connection between its gradient estimate and the one that would be computed by REINFORCE if small amounts of noise were added to the RNN's hidden units.
CommonCrawl
We covered three main techniques for doing this: local gradient-based search (providing a lower bound on the objective), exact combinatorial optimization (exactly solving the objective), and convex relaxations (providing a provable upper bound on the objective).

The order of the min-max operations is important here. Specifically, the max is inside the minimization, meaning that the adversary (trying to maximize the loss) gets to "move" second. We assume, essentially, that the adversary has full knowledge of the classifier parameters $\theta$ (this was implicitly assumed throughout the entire previous section), and that they get to specialize their attack to whatever parameters we have chosen in the outer minimization. The goal of the robust optimization formulation, therefore, is to ensure that the model cannot be attacked even if the adversary has full knowledge of the model. Of course, in practice we may want to make assumptions about the power of the adversary: maybe (or maybe not) it is reasonable to assume they could not solve the integer programs for models that are too large. But it can be difficult to pin down a precise definition of what we mean by the "power" of the adversary, so extra care should be taken in evaluating models against possible "realistic" adversaries.

This leads to two broad strategies: using lower bounds, and examples constructed via local search methods, to train an (empirically) adversarially robust classifier; and using convex upper bounds, to train a provably robust classifier. There are trade-offs between the two approaches: while the first method may seem less desirable, it turns out that it empirically creates strong models (with empirically better "clean" performance, as well as better robust performance against the best attacks that we can produce). Thus, both sets of strategies are important to consider in determining how best to build adversarially robust models.

Perhaps the simplest strategy for training an adversarially robust model is also the one which seems most intuitive. The basic idea (which was originally referred to as "adversarial training" in the machine learning literature, though it is also a basic technique from robust optimization when viewed through this lens) is to simply create and then incorporate adversarial examples into the training process. In other words, since we know that "standard" training creates networks that are susceptible to adversarial examples, let's just also train on a few adversarial examples.

Note, however, that Danskin's theorem only technically applies to the case where we are able to compute the maximum exactly. As we learned from the previous section, finding the maximum exactly is not an easy task. And it is very difficult to say anything formally about the nature of the gradient if we do not solve the problem optimally. Nonetheless, what we find in practice is the following: the "quality" of the robust gradient descent procedure is tied directly to how well we are able to perform the maximization. In other words, the better job we do of solving the inner maximization problem, the closer it seems that Danskin's theorem starts to hold. The key aspect of adversarial training is therefore to incorporate a strong attack into the inner maximization procedure. And projected gradient descent approaches (again, this includes simple variants like projected steepest descent) are the strongest attack that the community has found. A sketch of such an attack is shown below.
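The following is a hedged PyTorch sketch of such an $\ell_\infty$ PGD attack; the function name and arguments are illustrative and not necessarily the exact code used in these notes:

```python
# Sketch of an L-infinity PGD attack: projected steepest descent on the loss,
# keeping the perturbation delta inside the epsilon ball around X.
import torch
import torch.nn.functional as F

def pgd_linf(model, X, y, epsilon=0.1, alpha=0.01, num_iter=40, randomize=False):
    """Construct a PGD perturbation delta for the examples X with labels y."""
    if randomize:
        delta = torch.rand_like(X, requires_grad=True)
        delta.data = delta.data * 2 * epsilon - epsilon   # uniform start in the ball
    else:
        delta = torch.zeros_like(X, requires_grad=True)

    for _ in range(num_iter):
        loss = F.cross_entropy(model(X + delta), y)
        loss.backward()
        # steepest-ascent step w.r.t. the L-infinity norm, then project back
        delta.data = (delta + alpha * delta.grad.detach().sign()).clamp(-epsilon, epsilon)
        delta.grad.zero_()
    return delta.detach()
```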
Although this procedure approximately optimizes the robust loss, which is exactly the target we would like to optimize, in practice it is common to also include a bit of the standard loss (i.e., also take gradient steps at the original data points), as this tends to slightly improve the "standard" error of the task. It is also common to randomize over the starting positions for PGD, or else there can be issues with the procedure learning a loss surface such that the gradients exactly at those points point in a "shallow" direction, while very nearby there are points that have the more typical steep loss surfaces of deep networks.

Let's see how this all looks in code. To start with, we're going to clone a bunch of the code we used in the previous chapter, including the procedures for building and training the network and for producing adversarial examples (the routine whose docstring reads """Construct FGSM adversarial examples on the examples X"""). The only real modification we make is that we modify the adversarial function to also allow for training, giving a """Standard training/evaluation epoch over the dataset""" and an """Adversarial training/evaluation epoch over the dataset"""; a sketch of the adversarial epoch appears below.

Let's start by training a standard model and evaluating adversarial error. As we saw before, the clean error is quite low, but the adversarial error is quite high (and actually goes up as we train the model more). Let's now do the same thing, but with adversarial training. With adversarial training, we are able to get a model that has an error rate of just 2.8%, compared to the 71% that our original model had (and increased test accuracy as well, though this is one area where we want to emphasize that this better clean error is an artifact of the MNIST data set, and not something we expect in general). This seems like a resounding success!

Let's be very, very careful, though. Whenever we train a network against a specific kind of attack, it's incredibly easy to perform well against that particular attack in the future: in a sense, this is just the standard statement about deep network performance: they are incredibly good at predicting precisely the class of data they were trained against. What about if we run some other attack, like FGSM? What if we run PGD for longer? Or with randomization? Or what if someone in the future comes up with some amazing new optimization procedure that works even better (for attacks within the prescribed norm bound)? Let's get a sense of this by evaluating our model against some different attacks.

Let's try FGSM first. Ok, that is good news. FGSM indeed works worse than even the PGD attack we trained against, because FGSM is really just one step of PGD with a step size of $\alpha = \epsilon$. So it's not surprising it does worse. Let's try running PGD for longer. Also good! Error increases a little bit, but well within the bounds of what we might think is reasonable (you can try running for longer, and see that it doesn't change much … the examples have hit the boundaries of the $\ell_\infty$ ball in most cases, and taking more steps doesn't change things). But what about if we take more steps with a smaller step size, to try to get a more "fine-grained" attack? Ok, we're getting more confident now. Let's also add randomization. Alright, so at this point, we've done enough evaluations that maybe we are confident enough to put the model online and see if anyone else can actually break it (note: this is not actually the model that was put online, though it was trained in roughly the same manner).
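For concreteness, here is a hedged PyTorch sketch of an adversarial training/evaluation epoch of the kind described above; names and signatures are illustrative rather than the notes' exact code:

```python
# Sketch: one epoch of adversarial training (if opt is given) or evaluation.
# `attack` is any function like pgd_linf(model, X, y, **kwargs) -> delta.
import torch
import torch.nn.functional as F

def epoch_adversarial(loader, model, attack, opt=None, **kwargs):
    """Adversarial training/evaluation epoch over the dataset."""
    total_loss, total_err = 0.0, 0.0
    for X, y in loader:
        delta = attack(model, X, y, **kwargs)      # inner maximization (approx.)
        yp = model(X + delta)                      # predictions on perturbed inputs
        loss = F.cross_entropy(yp, y)
        if opt is not None:                        # outer minimization step
            opt.zero_grad()
            loss.backward()
            opt.step()
        total_err += (yp.max(dim=1)[1] != y).sum().item()
        total_loss += loss.item() * X.shape[0]
    return total_err / len(loader.dataset), total_loss / len(loader.dataset)
```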
But we should still probably try some different optimizers, try multiple randomized restarts (like we did in the past section), etc. Note: one evaluation which is not really relevant (except maybe out of curiosity), however, is to evaluate the performance of this robust model under some other perturbation region, say evaluating this $\ell_\infty$ robust model under an $\ell_2$ bounded attack. The model was trained under one single attack model; of course it will not work well to prevent some completely different attack model. If one does desire a kind of "generalization" across multiple attack models, then we need to formally define the set of attack models we care about, and train the model over multiple different draws from these attack models. This is a topic we won't get into, except to say that for some classes like multiple different norm bounds, it would be easy to extend the approach to simultaneously defend against e.g. $\ell_1$, $\ell_2$, and $\ell_\infty$ attacks, or something like this. Of course, the real set of attacks we care about (i.e., the set of all images that a human thinks "look reasonably the same") is extremely hard to characterize, and an excellent subject for future work.

What is happening with these robust models? So why do these models work well against robust attacks, and why have some other proposed methods for training robust models (in)famously come up short in this regard? There are likely many answers to this question, but one potential answer can be seen by looking at the loss surface of the trained classifier. Let's look at a projection of the loss function along two dimensions in the input space (one the direction of the actual gradient, and one a random direction); a sketch of how such a projection can be computed follows below. Let's look at the loss surface for the standard network first: very quickly the loss increases substantially. Let's then compare this to the robust model. The important point to compare here is the relative $z$ axes (the "bumpiness" in the second figure is just due to the much smaller scale; if put on the same scale, the second figure would be completely flat). The robust model has a loss that is quite flat both in the gradient direction (that is the steeper direction) and in the random direction, whereas the traditionally trained model varies quite rapidly both in the gradient direction and (after moving some distance in the gradient direction) in the random direction. Of course, this is no guarantee that there is no direction of steep cost increase, but it at least gives some hint of what may be happening.

In summary, these models trained with PGD-based adversarial training do appear to be genuinely robust, in that the underlying models themselves have smooth loss surfaces, and not just by a "trick" that hides the true direction of cost increase. Whether more can be said formally about the robustness is a question that remains to be seen, and a topic of current ongoing research.

As a final piece of the puzzle, let's try to use the convex relaxation methods not just to verify networks, but also to train them. To see why we might want to do this, we're going to focus here on the interval-based bounds, though all the same factors apply to the linear programming convex relaxation as well, just to a slightly smaller degree (and those methods are much more computationally intensive). To start, let's consider using our interval bound to try to verify robustness for the empirically robust classifier we just trained.
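A hedged sketch of how such a two-dimensional projection of the loss surface could be computed (illustrative only; the plotting code used for the figures in the original notes may differ):

```python
# Sketch: evaluate the loss on a 2-D grid spanned by the normalized gradient
# direction and a random direction around a single example x (shape (1, ...)).
import numpy as np
import torch
import torch.nn.functional as F

def loss_surface(model, x, y, extent=0.1, n=51):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    d1 = x.grad.detach()
    d1 = d1 / d1.norm()                    # gradient direction
    d2 = torch.randn_like(d1)
    d2 = d2 / d2.norm()                    # random direction
    ts = np.linspace(-extent, extent, n)
    Z = np.zeros((n, n))
    with torch.no_grad():
        for i, a in enumerate(ts):
            for j, b in enumerate(ts):
                Z[i, j] = F.cross_entropy(model(x + a * d1 + b * d2), y).item()
    return ts, Z   # Z can then be visualized, e.g. with matplotlib contourf/plot_surface
```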
Remember that a classifier is verified to be robust against an adversarial attack if the optimization objective is positive for all targeted classes. This is done by the following code (almost entirely copied from the previous chapter, but with an additional routine that computes the verified accuracy over batches); a sketch of the underlying bound propagation is given below. Let's see what happens if we try to use this bound to see whether we can verify that our robustly trained model is provably insusceptible to adversarial examples in some cases, rather than just empirically so.

Unfortunately, the interval-based bound is entirely vacuous for our (robustly) trained classifier. We'll save you the disappointment of checking ever smaller values of $\epsilon$, and just mention that in order to get any real verification with this method, we need values of $\epsilon$ less than 0.001. For example, for $\epsilon = 0.0001$, we finally achieve a "reasonable" bound. That doesn't seem particularly useful, and indeed, it is a property of virtually all the relaxation-based verification approaches that they are vacuous when evaluated upon a network trained without knowledge of these bounds. Additionally, these errors tend to accumulate with the depth of the network, precisely because the interval bounds as we have presented them also tend to get looser with each layer of the network (this is why the bounds were not so bad in the previous chapter, when we were applying them to a three-layer network).

The alternative is to incorporate the bound into training itself. To do this, we're going to use the interval bounds to upper bound the cross entropy loss of a classifier, and then minimize this upper bound. Specifically, if we form a "logit" vector where we replace each entry with the negative value of the objective for a targeted attack, and then take the cross entropy loss of this vector, it functions as a strict upper bound of the original loss. We can implement this as follows. Finally, let's train our model using this robust loss bound. Note that training provably robust models is a bit of a tricky business. If we start out immediately by trying to train our robust bound with the full $\epsilon=0.1$, the model will collapse to just predicting equal probability for all digits, and will never recover. Instead, to reliably train such models we need to schedule $\epsilon$ during the training process, starting with a small $\epsilon$ and gradually raising it to the desired level. The schedule we use below was picked rather arbitrarily, and we could do much better with a bit of tweaking, but it serves our basic purpose.

It's not going to set any records, but what we have here is an MNIST model where no $\ell_\infty$ attack of norm bounded by $\epsilon=0.1$ will ever be able to cause the classifier to experience more than 9.67% error on the test set of MNIST (achieving a "clean" error of 5.15%). And just how badly can a real adversarial attack do? It's of course hard to say for sure, but let's see what PGD does. So, somewhere right in the middle. Note also that training these provably robust models is a challenging task, and a bit of tweaking (even still using interval bounds) can perform quite a bit better. For now, though, this is sufficient to make our point that we can obtain non-trivial provable bounds for trained networks. Even on a dataset like CIFAR10, for example, the best known robust models that can handle a perturbation of $8/255 = 0.031$ color values achieve an (empirical) robust error of 55%, and the best provably robust models have an error greater than 70%.
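For reference, here is a hedged sketch of interval ("box") bound propagation and a simple, conservative certification check built on it; it is illustrative only, and the check is looser than the logit-difference objective used in the text:

```python
# Sketch of interval bound propagation for an alternating list of
# nn.Linear / nn.ReLU layers, plus a conservative certification check.
import torch
import torch.nn as nn

def interval_bounds(layers, X, epsilon):
    """Propagate elementwise lower/upper bounds through Linear/ReLU layers."""
    # assuming inputs live in [0, 1], clip the initial box to that range
    l, u = (X - epsilon).clamp(0, 1), (X + epsilon).clamp(0, 1)
    for layer in layers:
        if isinstance(layer, nn.Linear):
            W, b = layer.weight, layer.bias
            center, radius = (u + l) / 2, (u - l) / 2
            new_center = center @ W.t() + b
            new_radius = radius @ W.abs().t()     # |W| spreads the box
            l, u = new_center - new_radius, new_center + new_radius
        elif isinstance(layer, nn.ReLU):
            l, u = l.clamp(min=0), u.clamp(min=0)
    return l, u

def certify(layers, X, y, epsilon):
    """Conservative check: the true logit's lower bound beats every other
    logit's upper bound (looser than bounding logit differences directly)."""
    l, u = interval_bounds(layers, X, epsilon)
    lower_true = l.gather(1, y.view(-1, 1))
    u_others = u.clone()
    u_others.scatter_(1, y.view(-1, 1), float("-inf"))
    return (lower_true > u_others.max(dim=1, keepdim=True)[0]).squeeze(1)
```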
On the flip side, the choices we have with regard to training procedures, network architecture, regularization, etc., have barely been touched in the robust optimization context. All our architecture choices come from what has been best for standard training, but these are likely no longer the optimal architectures for robust training. Finally, as we will highlight in the next chapter, there is substantial benefit to be had from robust models right now, even if truly robust performance still remains elusive.
CommonCrawl
The Workshop on Operator Theory, Complex Analysis, and Applications 2018 / WOTCA 2018 aims to bring together researchers working in Operator Theory, Complex Analysis, and their applications, namely in Mathematical Physics, and to create an opportunity to highlight the current state of the art in these fields, present open problems and engage in fruitful discussions.

This meeting, whose scope will comprise Analysis and applications, aims to promote the integration of researchers of both IST and IME (and other institutions), with the presentation of research works and plenary lectures by the participants. This edition will honor Paulo Cordaro, on the occasion of his 65th birthday.

This seminar emerged in 2010 from informal meetings between mathematicians from the Universities of Coimbra, Lisbon and Porto.

The 6th Workshop on Representation Theory and Related Areas, to be held at Universidade do Algarve, aims to pursue and develop interactions between mathematicians working in Representation Theory, Geometry, Combinatorics and other relevant fields.

The aim of this meeting is to stimulate interaction between people who are working in higher structures interpreted in the broad sense (to mean higher stacks, $L_\infty$- and $A_\infty$-algebras, Batalin–Vilkovisky algebras, higher category theory, operad theory and related areas).

The Fall Workshops on Geometry and Physics have been held yearly since 1992, and bring together Spanish and Portuguese geometers and physicists, along with an ever-increasing number of participants from outside the Iberian peninsula. The meetings aim to provide a forum for the exchange of ideas between researchers from different backgrounds, and always include a substantial number of enthusiastic young researchers amongst the participants.

Patrícia Gonçalves, researcher at CAMGSD and an Associate Professor at the Instituto Superior Técnico since May 2016, has been awarded an ERC Starting Grant of 1,180,000€ for 5 years. This is the first time the European Research Council has granted this award to a mathematician working in Portugal.

Ricardo Schiappa is one of the organizers of the thematic period Resurgent Asymptotics in Physics and Mathematics at the Kavli Institute for Theoretical Physics, University of California, Santa Barbara.
CommonCrawl
There is a bit string that consists of $n$ bits. Then, there are some changes that invert bits. Your task is to report, after each change, the length of the longest substring in which every bit is the same.

The first input line has a bit string that consists of $n$ bits. The bits are numbered $1,2,\ldots,n$. The next line contains an integer $m$: the number of changes. The last line contains $m$ integers $x_1,x_2,\ldots,x_m$ which describe the changes.

After each change, print the length of the longest substring in which every bit is the same.

Explanation: The bit string first becomes 000011, then 010011, and finally 010001.
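A naive reference solution is sketched below; it recomputes the longest run from scratch after every change, so it is only suitable for small inputs (an efficient solution would instead maintain the runs with, e.g., a segment tree):

```python
# Naive O(n*m) reference sketch for the task (assumes small inputs).
import sys

def longest_equal_run(bits):
    best = cur = 1
    for i in range(1, len(bits)):
        cur = cur + 1 if bits[i] == bits[i - 1] else 1
        best = max(best, cur)
    return best

def main():
    data = sys.stdin.read().split()
    bits = list(data[0])
    m = int(data[1])
    out = []
    for x in map(int, data[2:2 + m]):
        bits[x - 1] = '1' if bits[x - 1] == '0' else '0'   # invert bit x
        out.append(str(longest_equal_run(bits)))
    print(' '.join(out))

main()
```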
CommonCrawl
In the paper "Cohomology of the complement of an elliptic arrangement", the authors (Levin and Varchenko) consider the complement to an arrangement of (elliptic) hyperplanes in a cartesian power of an elliptic curve and describe its cohomology with coefficients in a non-trivial rank one local system (actually, their result concerns generic rank one local systems). What about the case of the trivial rank one local system $\mathbb C$?

Is there a canonical map between the cohomology of orbifold Chiral de Rham on an orbifold and the cohomology of Chiral de Rham on a crepant resolution?

What do Hecke eigensheaves actually look like?

Why is there a factor $p$ in the definition of $T_p$ via Hecke correspondences on modular curves?
CommonCrawl
Neural network is no longer an uncommon phrase in the Computer Science community, or, let's say, in society in general. The main reason that makes it so cool is not just the number of real-world problems it is solving, but also the kind of problems it is solving. How can they be so varied? Be it in the field of Cognitive Psychology, be it in the domain of Cyber Security, be it in the area of Health-care (not considering Computer Vision, Computer Graphics, Natural Language Processing, etc. for the time being), let's name the more uncommon ones! Almost every industry is benefiting tremendously from the intelligence and automation a neural network has to offer. But why? This is the question that keeps coming up! Well, the answer is still under active research, because neural networks are quite black-box in nature, and their resemblance to the brain makes this question even more complicated. Anyway, answering that question is not the objective of this post.

One thing is for sure: to get the expected results from a neural network, the one thing that has to be ensured is its training. And by now, you might already have discovered that training very large networks calls for a tremendous amount of computation power. Without good GPUs and SSDs it is almost impossible to train a very large neural network. Now how much is very large? Well, large enough to produce good results on the ImageNet dataset, because that is kind of a benchmark. But this very idea of training vast neural networks was revolutionized entirely when a team of talented researchers from Fast.ai was able to beat Google's model, achieving an accuracy of 93% in just 18 minutes, and that too for only $40. But what were the key ingredients behind this? State-of-the-art GPUs? State-of-the-art TPUs? State-of-the-art SSDs? Absolutely not; the team's configuration was quite simple, as the cost of only $40 suggests. The key ingredient was the use of state-of-the-art algorithms to train the neural network. In this blog, the great researcher and educator Jeremy Howard has discussed the main reasons for this big win. This is a classic example where the power of costly hardware loses out to the power of good algorithms.

In this post, you are going to uncover the details of one such technique that can ensure a neural network is trained with the best possible learning rate. This technique is known as Cyclical Learning Rate (CLR). It was proposed way back in 2015 by Leslie N. Smith. You can check the original paper here. But why cover only the learning rate when there are other essential hyperparameters, like the dropout rate and the activation functions? Because the learning rate is the most important one among them. Just that! You will quickly revisit why learning rates are needed, and then look at the techniques available for finding the most suitable learning rate for a neural network.

Why are learning rates needed? Let's quickly revisit the primary purpose of using learning rates for training a neural net. The term in the rectangle (in the original figure) is the update rule with which the network starts to learn its parameter $\theta_1$, where $\alpha$ is the learning rate; it is written out below. In the first curve, the lowermost point is the minimum of the loss function. Suppose that, in the current iteration, your network is near the top-left point. Now, in order to converge to that lowest point, you take partial derivatives of the loss function $J(\theta_1)$ and compute the gradients to get the direction in which to move towards the lowest point.
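For reference, the boxed update rule referred to above is the standard gradient-descent step:

```latex
\theta_1 \;:=\; \theta_1 \;-\; \alpha \, \frac{\partial}{\partial \theta_1} J(\theta_1)
```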
To move down the slope, you scale the gradient by another term, $\alpha$, the learning rate. The lower the value, the slower you travel along the downward slope. While this might be a good idea (using a low learning rate) in terms of making sure that you do not miss any local minima, it could also mean that you'll take a long time to converge, especially if you get stuck in a plateau region. That was a quick recap of the purpose of learning rates, in simple words. Now you will study techniques for choosing a good learning rate for your neural network.

There is no fixed learning rate for a neural network. It depends on the kind of problem you are working on, the type of data you are feeding to your network, and, most importantly, the structure of the network, which varies from problem to problem. Handpicking a learning rate is a very painful task, because if you are training a large network you can incur massive costs, and it is very time-consuming as well. Should you run standard hyperparameter optimization methods like Grid Search or Random Search? That is again horrible for a large network. But why do you keep coming back to large networks? It's because almost any complex real-world problem will need an extensive neural network. Before CLR, adaptive learning rates were proposed, which can be thought of as a competitor to CLR, but experimentation with adaptive learning rates is computationally expensive, which CLR is not. Even now, the most common practice is to set the learning rate to a constant value and decrease it by an order of magnitude once the accuracy has plateaued. Therefore, there is a clear need for a systematic technique which can simplify the process of choosing a good learning rate for a particular neural network. Not only that, but there also have to be sufficient reasons supporting the approach, i.e., why it is trustworthy. It seems Cyclical Learning Rates (CLR) appeared just in time. CLR gives an approach for setting the global learning rates for training neural networks that eliminates the need to perform tons of experiments to find the best values, with no additional computation. CLR provides an excellent learning rate range (LR range) for an experiment by introducing the concept of the LR Range Test.

You will study this section merely to build your intuition of how CLR works; in the next section you will dive into more details. In the previous sections, you briefly saw why learning rates are used at all. Let's recall it again: an ideal learning rate would be one which produces a steep decrease in the network's loss. Here comes the wizardry of CLR. The original CLR paper describes an experiment wherein you can observe the behavior of the learning rate with respect to the loss. The experiment is straightforward to visualize: you gradually increase the learning rate after each mini-batch, recording the loss at each increment. This gradual increase can be either linear or exponential. And yes, this is essentially the LR Range Test. In the experiment, Leslie showed that for too-low learning rates the loss may decrease, but at a very shallow rate. When entering the optimal learning rate zone, you'll observe a quick drop in the loss function. If you increase the learning rate further, the updates can overshoot and throw the parameters off, which in turn might lead to an increase in the loss. A sketch of such an LR range test is shown below.
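Here is a hedged sketch (not from the original article) of such an LR range test as a keras callback; it assumes the tf.keras 2.x API for reading and writing the optimizer's learning rate:

```python
# Sketch of an LR range test: raise the learning rate a little after every
# mini-batch and record (learning rate, loss) pairs for later plotting.
import numpy as np
import tensorflow as tf

class LRRangeTest(tf.keras.callbacks.Callback):
    def __init__(self, min_lr=1e-5, max_lr=1.0, num_batches=1000):
        super().__init__()
        self.lrs = np.geomspace(min_lr, max_lr, num_batches)  # exponential ramp
        self.history = []                                      # (lr, loss) pairs

    def on_train_batch_begin(self, batch, logs=None):
        step = min(len(self.history), len(self.lrs) - 1)
        tf.keras.backend.set_value(self.model.optimizer.lr, self.lrs[step])

    def on_train_batch_end(self, batch, logs=None):
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        self.history.append((lr, logs["loss"]))

# usage (model defined elsewhere):
# range_test = LRRangeTest()
# model.fit(X_train, y_train, batch_size=128, epochs=2, callbacks=[range_test])
# then plot loss versus learning rate from range_test.history
```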
So, from this experiment, it is clear that you are interested in a steep decrease of the loss function, and for that you can analyze the gradients of the loss function at different stages of training. From the above graph, you can easily spot three different phases: the loss does not change much, then comes a stretch where a steep decrease happens, and then the loss slowly starts to increase again. Can you see the steepest decrease among all the other reductions? Yes, essentially you want to end up in that range, and CLR gives you a disciplined approach to finding it. Getting the hang of it, let's find out more. (Source: the original CLR paper mentioned at the beginning of the tutorial.)

The paper describes a few learning rate policies:
triangular - the learning rate varies linearly between a minimum and a maximum boundary within each cycle.
triangular2 - the same as the triangular policy, except that the learning rate difference is halved at the end of each cycle. This means the learning rate difference drops after each cycle.
exp_range - in this case, the learning rate varies between the minimum and maximum boundaries, and each boundary value declines by an exponential factor.

One question you might quickly ask yourself at this point is: how can one estimate reasonable minimum and maximum boundary values? Remember the LR Range Test that you studied just a moment ago? Now you should be able to see its relevance better. Run the model for several epochs while letting the learning rate increase linearly (i.e., use the triangular learning rate policy) between low and high learning rate values. Next, plot the accuracy versus learning rate curve. Note the learning rate value when the accuracy starts to increase and when the accuracy slows, becomes ragged, or starts to fall. These two learning rates are good choices for defining the range of the learning rates.

Let's do a quick case study now to see how CLR can give amazing results. You will be doing this using the classic MNIST dataset, which is probably the most popular dataset for getting started with Computer Vision and Deep Learning. Check out this blog if you want to learn about MNIST in a very detailed manner. You will use keras extensively for all purposes of the experiment; keras provides a built-in version of the dataset. You will start off your experiment by importing that and by performing some basic EDA. You have imported the dataset successfully. Now, you will do some basic visualizations of the dataset. That's great!

You will proceed straight to building a simple multi-layer neural network. But before that, you will do some basic data preprocessing. The images in the dataset have 28x28 dimensions, which is difficult to accommodate in a simple multilayer neural network. That is why you convert each image into a single 784-dimensional vector using the reshape() function. The pixel values in the images are in the range 0 - 255; a good idea is to scale them to the range 0 - 1. The output variable is an integer from 0 to 9, so this is a multi-class classification problem. You will one-hot encode the class labels, turning the vector of class integers into a binary matrix, a "binarization" of the categories so that they can be used as targets to train the neural network. You can easily do this using the built-in np_utils.to_categorical() helper function in keras. Let's see what you did in the above code (a sketch of the code cells described here is shown at the end of this section). You are sequentially constructing the network (which is a linear stack of layers).
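Below is a hedged reconstruction of the preprocessing and model-definition cells described in this section. The layer sizes, normal weight initialization, relu/softmax activations, and the adam optimizer follow the text; everything else (for example, using keras.Input and the random_normal initializer name) is an assumption:

```python
# Sketch of the MNIST preprocessing and the simple dense model described above.
from tensorflow import keras
from tensorflow.keras import layers

(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()

num_pixels = 28 * 28
X_train = X_train.reshape(-1, num_pixels).astype("float32") / 255.0  # flatten + scale to 0-1
X_test = X_test.reshape(-1, num_pixels).astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train)                        # one-hot encode labels
y_test = keras.utils.to_categorical(y_test)
num_classes = y_test.shape[1]

model = keras.Sequential([
    keras.Input(shape=(num_pixels,)),
    # weights drawn from a normal distribution, as described in the text
    layers.Dense(num_pixels, kernel_initializer="random_normal", activation="relu"),
    layers.Dense(num_classes, kernel_initializer="random_normal", activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()
```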
Then you added the layers of your network: in the first layer you added neurons equal to the number of pixels in an image (i.e., 784), and you specified the input dimension of the images, which in this case is the same as the number of pixels. You instructed your network to initialize its weights from a normal distribution. Finally, you supplied relu as the activation function for the first layer. In the final layer, you kept the number of neurons at 10 (the number of class labels), and you set the activation to softmax to turn the outputs into probability-like values, allowing one class of the 10 to be selected as the model's output prediction. You compiled the model, deciding the optimization method (ADAM in this case) and the kind of loss this method will optimize (categorical cross-entropy in this case).

Now you will train the model and record the time it took to get trained. You will also test its performance. This simple model did quite well, achieving an error rate of just 1.91% in approximately 87 seconds.

Now you will see the power of CLR. You will start off by cloning the keras implementation of CLR from GitHub. After a successful clone, you should have the following files in your local working directory. The CLR policy is implemented as a keras callback here. You will pass this clr_triangular to the callbacks parameter while fitting the network (a sketch of this call appears at the end of this section). You will use a larger batch_size this time. You will record the time as well. Can you spot the advantage? This is something amazing to see! Your model took only 44 seconds to get trained and yielded an even better error rate than the previous one. Very good!

You have made it to the end. In this tutorial, you studied the very crucial problem of finding a suitable learning rate and how CLR completely changed the way you used to approach this problem. You studied CLR in a good amount of detail and did small experiments to see how CLR can produce some excellent results in less time. Study the above two approaches to get even more insights on this topic. Also, after CLR, Leslie published a paper titled "A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay", which revisits CLR and discusses efficient methods for choosing the values of other important hyperparameters of a neural network. Leslie also revisited one of his techniques, called Super Convergence, in this paper. This paper is a must-read for anyone who thinks, eats and sleeps neural networks. Limited applicability is one of the significant shortcomings of CLR: it has to be made foolproof before one can use it at production level.
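Finally, here is a hedged sketch of how the CLR callback could be wired into training. The class and argument names follow the commonly used keras CLR implementation (clr_callback.py in the cloned repository) and should be checked against the version you cloned; the boundary values, epochs, and batch size below are placeholders that an LR range test should replace:

```python
# Sketch: training with the CLR triangular policy via the cloned callback.
from clr_callback import CyclicLR  # from the cloned repository

clr_triangular = CyclicLR(mode="triangular", base_lr=1e-3, max_lr=6e-3, step_size=2000)

model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          epochs=10,                    # placeholder
          batch_size=2048,              # the larger batch size mentioned in the text
          callbacks=[clr_triangular])
```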
CommonCrawl
The $p$-adic valuation, for a prime number $p$, of the field of rational numbers is an important tool in number theory, and it is therefore an interesting question whether over fields which are complete with respect to a non-archimedean valuation (such as the field $\mathbb Q_p$ of $p$-adic numbers or suitable extension fields of it), there is a good analogue of the theory of complex geometry (which is the study of the geometric structure of zero sets of families of convergent power series in several variables). It turns out that such a theory of rigid analytic geometry can in fact be developed (and has numerous interesting applications to arithmetic). Classical rigid geometry was developed by Tate in the early 1960s. Later, Berkovich gave a variant of the theory. In the early 1990s, Huber developed the theory of adic spaces, so far the most general and powerful version of rigid analytic geometry. In this course, we will study the basics of Huber's theory (and the connection with the other variants of rigid analytic geometry mentioned above). Although their invention dates back quite some time, adic spaces have seen a rise in interest in recent years. One important reason is that they are an important ingredient in Scholze's theory of perfectoid spaces, a notion which led to (sometimes quite spectacular) progress on a variety of questions in arithmetic geometry and related areas. There will also be a lecture course on perfectoid spaces next term, taught by A. Chatzistamatiou. We expect that attending both of these courses will give a good first impression of these exciting developments.

Date and time: Tue, 2pm – 4pm, N-U-4.04. On Tuesday, Dec. 12, there will be no lecture.
Exercise group: Mon, 2pm – 4pm, S-U-3.02.
Prerequisites: Basics of algebraic geometry (you should have seen the notions of locally ringed space and of scheme). Being familiar with non-archimedean absolute values would also be helpful.
CommonCrawl
N.Tsilevich, A.Vershik, "On the Fourier transform on the infinite symmetric group" Zapiski Nauchn. Semin. POMI, 325 (2005), 61-82. English translation: J. Math. Sci. (N.Y.), 138, No. 3 (2006), 5663-5673. Abstract. We present a sketch of the Fourier theory on the infinite symmetric group $S_\infty$. As a dual space to $S_\infty$, we suggest the space (groupoid) of Young bitableaux B. The Fourier transform of a function on the infinite symmetric group is a martingale with respect to the so-called full Plancherel measure on the groupoid of bitableaux. The Plancherel formula determines an isometry of the space $l^2(S_\infty,m)$ of square summable functions on the infinite symmetric group with the counting measure and the space $L^2(B,\tilde \mu)$ of square integrable functions on the groupoid of bitableaux with the full Plancherel measure.
CommonCrawl
What fraction of a star's hydrogen store will be fused over its lifespan? A main sequence star will fuse some of its hydrogen, but not all. In massive stars ($>1.5M_\odot$) the core is convective but the rest of the atmosphere is radiative and hence does not mix much: as such a star undergoes shell fusion it will produce an onion-like structure with unused hydrogen on top. Solar mass stars only do this up to helium, but again leave a mantle of unused hydrogen. Stars less than $0.35M_\odot$ are fully convective and can in principle use up all their hydrogen. However, I suspect this is not complete except for very low-mass M dwarfs that have trillions of years to mature. Looking at planetary nebulae, I have seen statements that they are about 90 percent hydrogen, 10 percent helium. This seems to fit with papers I have found (example, example), although in some cases helium may reach 29% (example). Given that for a G star about half of the mass is ejected, that would suggest a fraction $0.9\times 0.5 = 0.45$ of unused hydrogen. But surely the fractions will be different for other masses. So, to sum up, what is known about the fraction of hydrogen that is never fused over a stellar lifetime?
CommonCrawl
For simple inequalities such as x < 5, we know all the values less than 5 are in the solution set. But what if there is a coefficient on the variable, such as in this inequality: 2x > 10? This is a one-step inequality problem, and we can solve this type of inequality problem by using the inverse of multiplication or division. If a coefficient is multiplied with the variable, isolate the variable by dividing by the coefficient on both sides of the inequality. If the coefficient divides the variable, isolate the unknown value by multiplying by the coefficient on both sides of the inequality. Always keep the inequality balanced: whatever you do to one side of the sign you must do to the other side. Unlike solving one-step inequalities by addition and subtraction, for multiplication and division problems you must also consider the direction of the inequality sign. Remember the rule: if multiplying or dividing by a positive number, maintain the direction of the sign. When multiplying or dividing by a negative number, flip the sign over, so it points in the opposite direction. Explain the rules of the cavemen's game. Calculate the number of times at least the first caveman hit the bullseye. Analyze the scores of the cavemen's friends. Decide which sign has to be flipped to solve the inequality. Define the meaning of each inequality sign. Solve the inequality $-2x-2 \ge (-2x+1) \times 4$.
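For example, the two cases of the rule play out like this (the numbers are chosen for illustration and are not taken from the exercises above): $$2x > 10 \;\Rightarrow\; \frac{2x}{2} > \frac{10}{2} \;\Rightarrow\; x > 5 \quad \text{(dividing by a positive number keeps the direction of the sign),}$$ $$-3x \ge 9 \;\Rightarrow\; \frac{-3x}{-3} \le \frac{9}{-3} \;\Rightarrow\; x \le -3 \quad \text{(dividing by a negative number flips the sign).}$$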
CommonCrawl
Let $A$ be a $5 \times 5$ matrix with characteristic polynomial $(x-2)^3(x+1)^2$ and minimal polynomial $(x-2)^2(x+1)^2$. What are the possible Jordan forms for $A$? Now, I know that the Jordan form cannot be a diagonal matrix (as this is only true when the minimal polynomial is a product of distinct linear factors); so we can't have $a_i = 0$ for every $i = 1, 2, 3, 4$. Moreover, we can't have $a_i = 1$ for every $i = 1,2,3,4$, since there can be no Jordan blocks larger than $2 \times 2$. In fact, if we fix the diagonal entries as they appear above, it follows that $a_3 = 0$ and only one of $a_1, a_2$ can be 1. Is there anything else I can conclude? The knowledge of the characteristic and minimal polynomials completely determines the Jordan form only for matrices of dimension $3\times 3$ (or $2 \times 2$). In your case we could have, in principle, different Jordan forms for the given polynomials. From the characteristic polynomial we know that the diagonal elements are three values $\lambda = 2$ and two values $\lambda = -1$ (these numbers are the algebraic multiplicities of the eigenvalues). The minimal polynomial tells us that for the eigenvalue $\lambda = 2$, and also for the eigenvalue $\lambda = -1$, the largest Jordan block has dimension $2$. This means that we have a Jordan block with two eigenvalues $2$ on the diagonal and a value $1$ above them, and the same for the eigenvalue $-1$. So, apart from the position of the blocks, in this case we have one Jordan form, with your $a_1 = 1$ (or $a_2 = 1$) and $a_4 = 1$. Note that $a_3$ is not an element of a Jordan block and must be $0$.
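For concreteness, one way to write the resulting Jordan form (up to reordering the blocks) is $$J=\begin{pmatrix}2&1&0&0&0\\0&2&0&0&0\\0&0&2&0&0\\0&0&0&-1&1\\0&0&0&0&-1\end{pmatrix},$$ i.e. one $2\times 2$ block and one $1\times 1$ block for $\lambda=2$ (algebraic multiplicity 3, largest block of size 2 forced by the minimal polynomial) and a single $2\times 2$ block for $\lambda=-1$; in the notation of the question this corresponds to $a_1=1$ (or $a_2=1$), $a_3=0$, $a_4=1$.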
CommonCrawl
For a few years I got interested in some mathematical puzzles, like Rubik's cube, the knight's tour problem, the peaceful queens problem, and the $(n^2-1)$-puzzle. Only some of those research projects bore fruit, the results of which you can see below. My coauthors in this work include colleague Ingo Wegener and his student Oleg Kyek. See more knight's tour images here. Knight's tours are a fascinating subject. New lower bounds on the number of knight's tours and structured knight's tours on $n \times n$ chessboards and even $n$ are presented. For the natural special case $n = 8$ a new upper bound is proved. A real-time algorithm for the $(n^2-1)$-puzzle is designed using greedy and divide-and-conquer techniques. It is proved that (ignoring lower order terms) the new algorithm uses at most $5n^3$ moves, and that any such algorithm must make at least $n^3$ moves in the worst case, at least $2n^3/3$ moves on average, and with probability one, at least $0.264n^3$ moves on random configurations. Created April 21, 2010. Written in HTML 4.01 and CSS 3 using vi. Last updated October 17, 2014.
CommonCrawl
In the last couple of weeks we learned a lot about the Simplex algorithm. I hope you enjoyed this series so far. Leave some comments if you're interested in some special articles. I have at least one more article in mind about this topic. Now let's dive into this short article. In the last article I explained how to solve all different kinds of linear problems. If you missed the start of this series: Simplex: Solving linear problems. Both of these articles are about solving the problems using Python. How do you solve these problems by hand? Really? Who solves this by hand? Well, good point... but you'll be able to enhance your understanding by this. I promise! We can see that the argument is still true, so we have a better lower bound for our objective: 220. Let's improve it further by adding the third constraint divided by 8 on top. Maybe you already see what we are doing here. We try to combine our three constraints in a way that the combination is always smaller than or equal to the objective function. Therefore we can rewrite our initial problem as the following problem. This problem is called the dual problem. What we did in our first step was to set $y_1 = 0, y_2 = 1, y_3 = 0$. This fulfills all of the constraints and has an objective function of 55. This problem will give us the objective of our initial problem. That's somewhat awesome, I think. Well, how do we get our values for $x_1,x_2,\dots,x_6$, or in this case: how many oats should we eat??? That's the really awesome stuff... We just get them for free. We get the optimal values for $x_1,\dots,x_6$ by having a look at the right hand side and at which columns have only a one and zeros everywhere else. Now we can have a look at the objective row, which gives us some additional information. Actually, the last four values are interesting now. The simple part: 541 is our objective. Are you amazed or not? Probably not :D Well, this is nearly our objective (just rounding errors). That's ok but not really amazing. The amazing part is that these are our values for $y_1,y_2,y_3$. Does this seem amazing enough? Okay, we looked for the values of $x_1,\dots,x_6$ in our dual problem, right? Now, after solving the primal, that's the name of our initial problem (like the normal one), we got the solution of the dual. We wanted to have the solution of the primal after solving the dual. What did we actually do to get the dual problem? We transposed our tableau matrix! And we changed from minimization to maximization. If we have a maximization problem at the beginning, we would change to a minimization problem. Transposing a matrix twice will give us the normal matrix back. Therefore, we can transpose the dual matrix back into the primal one. What I want to say is that the dual of the dual is the primal. If we can get the solution of the dual by solving the primal, we can also get the solution of the primal by solving the dual. Why did we even do all of this creepy stuff and what is the advantage? In the last post I explained that we have a standard type of problem and that minimization is kind of abnormal. We had to convert the problem into a maximization problem. Then it was hard to find a basic feasible solution and all that takes time. Using the dual problem we can convert a minimization problem more easily into a maximization problem just by transposing the matrix. Wow! Well, wait a second... Why, just why, did I read the last post and why do I ever have to do the crap with converting a minimization the hard way into a maximization one? Good question.
In the last post I described not only how to get from a minimization problem to the standard form, but also how to use $\leq$ and $\geq$ in the same problem and what we do with a negative right hand side. It also isn't always best to transpose the matrix. This depends on how many variables and how many constraints we have. If the number of variables in our diet problem far exceeds the number of constraints, then it might be better to use the previous method. In other words, if we have way more ingredients than nutrient constraints, we may not want to solve the dual. By solving the problem as a dual in that case we would increase the number of constraints. Having more constraints is harder than having more variables. This time I'll not include any code in this blog post because the steps were pretty simple. Have a look at the repository to get the newest version. I'm trying to improve that script nearly every day at the moment. Next post: How to add constraints.
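Still, if you want to see the transposition trick in action outside of the tableau script, here is a small illustration (not from the original post): the numbers are made up, and scipy is used only as a convenient LP solver. It solves a tiny minimization problem and its transposed dual, and both return the same optimal objective, exactly as the primal/dual argument above says they should.

```python
# Sketch: primal vs. dual of a small LP (hypothetical numbers, not the diet problem).
import numpy as np
from scipy.optimize import linprog

# Primal: minimize c^T x  subject to  A x >= b, x >= 0
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([10.0, 15.0])
c = np.array([3.0, 4.0])

# linprog minimizes subject to A_ub x <= b_ub, so multiply the ">=" rows by -1.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None))

# Dual: maximize b^T y  subject to  A^T y <= c, y >= 0
# (maximize b^T y is the same as minimize -b^T y). Note the transposed matrix.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None))

print("primal optimum:", primal.fun)   # c^T x*
print("dual optimum:  ", -dual.fun)    # b^T y*, equal by strong duality
print("x* =", primal.x, " y* =", dual.x)
```

The transpose shows up directly in the call for the dual: the same matrix A, just flipped, with the roles of the objective vector and the right hand side exchanged.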
CommonCrawl
360 Threat Intelligence Center captured several lure Excel documents written in Arabic on January 9, 2019. A backdoor dropped by a macro in the lure documents can communicate with the C2 server through a DNS tunnel, as well as through the Google Drive API. We confirmed that this is a new attack by the DarkHydrus Group targeting the Middle East region. In July 2018, Palo Alto disclosed the DarkHydrus Group, which showed special interest in governments in the Middle East. Prior to that report, we published a detailed analysis of malware exploiting the CVE-2018-8414 vulnerability (remote code execution in SettingContent-ms), which is believed to be the work of DarkHydrus. Finally, the PowerShell script drops %TEMP%\\OfficeUpdateService.exe for execution by extracting Base64-encoded content. The PDB path has a project name 'DNSProject', which indicates that the malware may leverage some DNS techniques to achieve its goal. The backdoor checks whether 'st:off' and 'pd:off' are given as parameters. If 'st:off' is present, no persistence entry is added; the PDF file is not dropped if 'pd:off' exists. Then it detects the existence of a virtual machine or sandbox before the malicious payload is triggered. The backdoor sends collected information to the C2 server through a DNS tunnel. The queryTypesTest function is created for DNS tunnel communication. Then the backdoor tries to retrieve commands from the C2 server via the DNS tunnel, falling back to HTTP if that fails. After C2 commands are retrieved successfully, they are dispatched by taskHandler. The "^\\$x_mode" command sets the file server address, which is sent in the DNS tunnel. A DNS tunnel is a C2 communication technique in which malware sends data and retrieves commands through DNS query packets. This technique is very effective since most gateways or firewalls allow both ingress and egress DNS traffic. If the C2 server is given as an IP address in the malware body, the malware can contact the C2 directly. But the OfficeUpdateService.exe backdoor has its C2 server in the form of a DNS name, which requires a DNS resolution of the C2 domain name first. To do that, the backdoor queries the C2 domain on a specific name server. Then the backdoor communicates with the C2 server in the DNS tunnel. The malware retrieves a process ID as the victim ID, then uses the victim ID as the subdomain name in C2 communication. C2 commands are parsed out by regular expressions based on DNS record types. We manually sent out a DNS TXT query with the victim ID as an illustration. The malware uses the following regular expression to parse out commands: (\w+).(akdns.live|akamaiedge.live|edgekey.live|akamaized.live). Finally, the system configuration is sent to the C2 server over the DNS protocol. However, the malware will cancel the operation if the command is matched by the following regular expression: "216.58.192.174|2a00:1450:4001:81a::200e|2200::|download.microsoft.com|ntservicepack.microsoft.com|windowsupdate.microsoft.com|update.microsoft.com". We found some traces which lead us to believe that DarkHydrus is behind this attack. One interesting finding is that there is a Twitter user Steve Williams with the handle @darkhydrus2. It is notable that both 'darkhydrus' (the APT group name) and 'Williams' (the user name in the PDB path) appear in this Twitter account. In recent APT incidents, more and more threat actors tend to adopt Office VBA macros instead of Office 0-day vulnerabilities out of cost considerations. It is recommended that users avoid opening documents from untrusted sources, and Office macros should be disabled by default.
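As an illustration of the pattern matching described above — useful, for example, when writing detection rules over passive-DNS logs — the sketch below shows how the subdomain (the victim ID in this scheme) can be extracted from a DNS query name against the reported C2 domains. This is an analyst-side reconstruction, not code taken from the sample; the example query names are made up.

```python
# Analyst-side illustration: pull the subdomain (victim ID) out of DNS query
# names that target the C2 domains reported above. For detection purposes only.
import re

C2_PATTERN = re.compile(
    r"^(\w+)\.(akdns\.live|akamaiedge\.live|edgekey\.live|akamaized\.live)$"
)

def extract_victim_id(query_name: str):
    """Return the victim-ID subdomain if the query targets a known C2 domain."""
    match = C2_PATTERN.match(query_name)
    return match.group(1) if match else None

# Hypothetical query names, as they might appear in DNS logs.
for name in ["1234.akdns.live", "update.microsoft.com", "abcd.akamaized.live"]:
    print(name, "->", extract_victim_id(name))
```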
Products of 360 ESG can protect users from this new malware, including 360 Threat Intelligence Platform, SkyEye APT Detection, 360 NGSOC.
CommonCrawl
Formally, what is a mathematical construction? More formally (though not too formally, as I am pretty under-educated in mathematical logic), what does it mean when one "constructs" one mathematical object from another within the context of introducing a new theory or new definitions? For example, I've found that when one writes of, let's say, "constructions of the real numbers", what they mean is that they are going to use some deductive system that has no notion of a real number to define an algebraic structure that is isomorphic to the field of real numbers. Again, I've seen this also done in "constructions of the rationals", where they might take, let's say, ordered pairs of integers (again never referencing/using anything involving rational numbers) to create the rationals. So, to reiterate, what does "construction" mean formally in this context of introducing new objects? Generally speaking, in mathematics you work in a context where you have a universe of objects. These could be sets, or functions, or numbers, or all of them. When we construct something, we show that there is a way to define an object (or sometimes a collection of objects) which satisfies the properties making it "worth the name" of its construction. This formally validates our claim as to the existence of something. When we say that we "construct a sequence", then we mean to say that we define the sequence. When we say that we "construct the real numbers", then we argue that, given just the rational numbers and some background universe with "enough sets and functions", we can define an object which has the same behavior we expect from the real numbers. We can then show that this structure is indeed unique up to isomorphism, so the method of construction (Dedekind completion, Cauchy completion, etc.) is in fact irrelevant. The idea is that a construction usually involves some objects to start with. It could be the rationals, or a specific number used to bootstrap a sequence, or just the empty set. And from that object we define another object, in a reasonably explicit way. For example, you cannot define a function $f\colon\Bbb R\to\Bbb R$ such that $f(x+y)=f(x)+f(y)$ and $f$ is not continuous; we know that because there are universes of mathematics where no such function exists. However, if you are given a Hamel basis for $\Bbb R$ over $\Bbb Q$, then from that basis you can construct such a function (and it follows that in the aforementioned universes, no Hamel bases exist either). What does "construction" mean formally in this context of introducing new objects? If you were to set out to formally "construct," say, the set of real numbers, from scratch, you would have to begin with a list of axioms and rules of inference that would have to include at least one axiom that postulates the existence of some object, usually some set X with certain properties. Your axioms and rules of inference should allow you to infer the existence of (i.e. construct) new sets given the existence of others. So, starting with set X, you would construct one set after another in this way until you have the set of real numbers.
What "real numbers" (elements in $\mathbb R$), are people referring to? What is parameterization in general?
CommonCrawl
I've been tutoring for two months now, but I will assure anyone that it's gonna turn out well. Write a C++ code that recursively computes the nth power of a. How many distinct solutions (not necessarily real) exist for the equation $(x-1)(x^2-1)(x^3-1)\ldots(x^{10}-1)=0$? Recall that if we have two numbers or terms that, when multiplied together, result in 0, then either one of them is 0 or both of them are 0, i.e. if $AB = 0$, then $A = 0$ or $B = 0$. Applying this concept to the equation, we can take $A$ and $B$ (note that we can arbitrarily partition the equation into two parts) to be $A = (x - 1)(x^2 - 1)$ and the remaining part of the equation to be $B$, giving us either $(x - 1)(x^2 - 1) = 0$ or $(x^3 - 1)(x^4 - 1) \ldots (x^{10} - 1) = 0$. Consider $A$ to be 0, as stated in the concept above. Note that it can again be divided into two terms, $(x - 1)$ and $(x^2 - 1)$. Applying the concept again, we have either $(x - 1) = 0$ or $(x^2 - 1) = 0$. Solving the first one, we have $x = 1$. The second one gives us $x = \pm 1$. Applying the same concept repeatedly to $B$, we have two distinct solutions for $B$, which are $x = 1$ and $x = -1$. Thus, taking the solutions of $A$ and $B$ together, we have two distinct solutions for the equation.
CommonCrawl
Ph. D. Thesis Title: Characterization of the Gaussian processes equivalent to a Gaussian martingale and their application. Master's Degree Thesis Title: Some estimates obtained by using order statistics and their distributions. Lazarova M. and Minkova L. (2017). I-Delaporte process and applications, Mathematics and Computers in Simulation, 133, 135-141, (IF 1.124). Kostadinova K. and Minkova L. (2016). Type II family of Bivariate Inflated-parameter Generalized Power Series Distributions, Serdica Math. J., 42, 27-42. Lazarova M. and Minkova L.D. (2015). A Family of Bivariate Inflated-parameter Generalized Power Series Distributions, Compt. Randue Bulg. Acad. Sci., 68(5), 577-588, (IF (2014) 0.284). Chukova S. and Minkova L.D. (2015). Polya - Aeppli of order k Risk Model, Communications in Statistics-Simulation and Computation, 44(3), 551-564 (IF (2011) 0,387). Omey E. and Minkova L.D. (2014). Bivariate Geometric Distributions, Compt. Randue Bulg. Acad. Sci., 67(9), 1201-1210, (IF (2014) 0.284). Minkova L.D. and N.Balakrishnan (2014) Type II Bivariate Polya-Aeppli distribution, Statistics & Probability Letters, 88, 40-49, (IF(2012) 0.531). Minkova L.D. and N.Balakrishnan (2014). On a bivariate Polya-Aeppli distribution, Commun. Statist. -Theory and Methods, 43, 5026-5038, (IF (2012) 0,298). Minkova L.D. and Omey E. (2014). A new Markov Binomial distribution, Commun. Statist. -Theory and Methods, 43, 2674-2688, (IF (2012) 0,298). Kostadinova K. and Minkova L. (2014). On a Bivariate Poisson Negative Binomial Risk Process, BIOMATH, 3, 47--52. Minkova L.D. and N.Balakrishnan (2013). Compound weighted Poisson distributions, Metrika, 76(4), 543-558, (IF (2012) 0.724). Kostadinova K. and Minkova L.D. (2013). On the Poisson process of order k, Pliska Stud.Math.Bulgar., 22, 117--128. Chukova S. and Minkova L.D. (2013). Characterization of the Polya - Aeppli process, Stochastic Analysis and Applications, 31(4), 590-599, (IF (2012) 0,459). Minkova L. and Radkov P. (2012). Distributions related to a Markov chain and Applications in Finance, Proceedings of the 11th Iranian Statistical Conference, August 28-30, Tehran Iran, 337 - 345 (invited). Minkova L.D. (2011). The I - Polya process and Applications, Commun.Statist. -Theory and Methods, 40, 2847 – 2855 (IF (2010) 0.351). Radkov P. and L.D.Minkova (2011). Markovian Option Pricing Model, Proceedings of the 14th International Conference "Applied Stochastic Models and Data Analysis", June 07 10, 2011, Roma, Italy, 1137 – 1143. Radkov P. and L.D.Minkova (2010). Markov-Binomial Option Pricing Model, Proceedings of the 9th International Conference in Computing Data Analysis and Modeling:Compex Stochastic Data and Systems, Belarusian State University, September 07-11, Minsk, Belarus, vol. 2, 169-172. Minkova L.D.(2010). Compound Binomial Risk Model, Proceedings of the Stochastic Modeling Techniques and Data Analysis International Conference, June 8-11, 2010, Chania, Crete, Greece. Minkova L.D. (2010). Stochastic Processes - Applications in Finance and Insurance, in:International Encyclopedia of Statistical Science, Miodrag Lovric (Ed.), ISBN:978-3-642-04897-5. Minkova L.D. (2010). The Polya – Aeppli distribution of order K, Commun.Statist. -Theory and Methods, 39(3), 408-415 (IF 0,351). Minkova L.D. (2009). Compound Compound Poisson Risk Model, Serdica Math.J. 35, 301 – 310. Minkova L.D. (2009). 
I - Polya Process and Applications, Proceedings of the XIIIth International Conference "Applied Stochastic Models and Data Analysis", June 30 - July 3, 2009, Vilnius, Lithuania. Minkova L.D. (2009). Stochastic Processes in Finance and Insurance, Math. and Educ. in Math. 61-69 (invited). Minkova L.D. (2008). Reinsurance by the Polya-Aeppli risk model, Proceedings of the 2008 International Workshop on Applied Probability, July 7-10, 2008, Universite de Technologie de Compiegne, France. Minkova L.D. and Etemadi R. (2008). Compound Poisson Counting Distributions, Math. and Educ. in Math. 226 – 231. Minkova L.D. (2007). The Polya - Aeppli distribution of order k, Proceedings of the XIIth International Conference "Applied Stochastic Models and Data Analysis", May 29, 30, 31 and June 1, Chania, Crete, Greece. Minkova L.D. (2004). A modified model of risk business, Pliska Stud. Math. Bulgar., 16} 129- 135. Minkova L.D. (2004). The Polya - Aeppli process and ruin problems, J. Appl. Math. Stoch. Analysis, 3, 221 - 234. Minkova L.D. (2002). A generalization of the classical discrete distributions, Commun. Statist. - Theory and Methods, 31, 871 - 888, (IF 0.171). Minkova L.D. (2001). Inflated-parameter modification of the pure birth process, Compt. Randue Bulg. Acad. Sci., 54(11), 17 - 22.21. Minkova L.D. (2001). Mixed Polya - Aeppli process, Compt. Randue Bulg. Acad. Sci., 54(8), 9 - 12. Minkova L.D. (2001). A family of compound discrete distributions, Compt. Randue Bulg. Acad. Sci., 54(2), 9 - 12. Minkova L.D. (2001). The bond market with stochastic volatility in high level of inflation, Math. and Educ. in Math., 270 - 275. Kolev N., Minkova L.D. and Neytchev P. (2000). Inflated-Parameter Family of Generalized Power Series Distributions and Their Application in Analysis of Overdispersed Insurance Data, ARCH Research Clearing House, 2, 295 - 320. Kolev N. and Minkova L. (2000). A characterization of the negative binomial distribution, Pliska Stud. Math. Bulgar., 13, 151 - 154. Minkova L.D.(2000). Modelling of financial markets in high level of inflation, Math. and Educ. in Math., 198 - 204. Kolev N. and Minkova L. (1999). Run and frequency quotas in a multi-state Markov chain, Commun. Statist. - Theory and Methods, 28, 2223 - 2233, (IF 0.209). Kolev N. and Minkova L. (1999). Quotas on runs of successes and failures in a multi - state Markov chain, Communications in Statistics - Theory and Methods, 28, 2235 - 2248, (IF 0.209). Minkova L. and Danchev D. (1998). Modelling of financial markets in the currency board conditions, Applications of Mathematics in Engineering, (Sozopol, 1997), 130 - 132, Heron Press, Sofia. Kolev N. and Minkova L. (1997). Discrete distributions related to success runs of length K in a multi - state Markov chain, Communications in Statistics - Theory and Methods, 26, 1031 - 1049, (IF 0.194). Minkova L. (1997). A stochastic model for the financial market with discontinuous prices, J. Appl. Math. Stoch. Analysis, 9, 271-280. Kolev N. and Minkova L. (1995). On joint distribution of successes and failures related to success runs of length K in homogeneous Markov chain, Compt. Randue Bulg. Acad. Sci., 48, Vol. 9, 19 – 22. Minkova L. D. (1994). Innovation of Gaussian semimartingale, Technical University Annuals - Applied Mathematics, Sofia, 181 – 193. Minkova L. (1993). A stochastic model for the financial market, In: Proc. Applications of Mathematics in Engineering, 153 - 158, Varna. Kolev N. and Minkova L. (1986). Poisson distribution of order K and some of its properties, Compt. 
Randue Bulg. Acad. Sci., 39, 31 - 33, (IF 0.149). Minkova L. (1985). Stochastic equation for a generalized Ornstein-Uhlenbeck process. In: Proc. Third International Conference of Differential Equations and their Applications, Russe, 825 - 828. Minkova L. and Hadziev D. (1984). Equivalence and singularity of some Gaussian measures, Pliska Math. Bulgar, 7, 163 - 169, (In Russian). Minkova L. and Hadziev D. (1980). Representation of Gaussian processes equivalent to a Gaussian martingale, Stochastics, 3, 251 - 266. Minkova L. and Hadziev D. (1979). Theorem of Girsanov's type for Gaussian martingales, Compt. Randue Bulg. Acad. Sci., 32, 1465 - 1466. Minkova L. (1978). Asymptotic estimates for a parameter of spread, Plovdiv Univ. Nauchn. Trud., 16, 157 - 164. Minkova L. and Varbanova M. (1976). Parametrische schatzungen mit hilfe der Positionsstrichprobenelementen, Math. and Educ. in Math., 207 - 214. Minkova L.D. (1997). Discussion to G. Parker, "Stochastic analysis of interaction between Investment and Insurance risks", North American Actuarial Journal 1, 55-84; 75. Minkova L.D. and Kolev N. (1998). Discussion to E. W. Frees, "Relative importance of risk sources in insurance systems", North American Actuarial Journal 2, 34-52, 50-51. Minkova L.D. (1997). Review of "Mathematical Methods in Finance" by H. P. Howison, F. P. Kelly and P. Willmot (Eds.), J. Appl. Math. Stoch. Analysis, 10, 305-306. Minkova L.D. (1998). Review of "Some Aspects of Brownian motion" by M. Yor, The Statistician, 47, 561-562. Minkova L.D. (2000). Review of "Stochastic Processes for Insurance and Finance" by T.Rolski, H.Schmidli, V.Schmidt and J.Teugels, The Statistician, 49, 128-129. Smaili K., Kadri T. and Kadry S. (2016). Finding the PDF of the Hypoexponential random variable using the Kad matrix similar to the general Vandermond matrix, Communications in Statistics-Theory and Methods, 45, 1542-1549. Smaili K., Kadri T. and Kadry S. (2013). Hypoexponential Distribution with Different Parameters, Applied Mathematics, 4, 624-631. Smaili K., Kadri T. and Kadry S. (2014). A Modified-Form Expressions for the Hypoexponential Distribution, British Journal of Mathematics & Computer Science, 4(3), 322-332. Kadri T., Smaili K. and Kadry S. (2015). Markov Modeling for Reliability Analysis Using Hypoexponential Distribution, In: Numerical Methods for Reliability and Safety Assessment, Eds: Kadry S. and El Hami A., Springer, 599-620. Kokonendji C.C. (2014). Over- and Underdispersion Models, In: Encyclopedia of Clinical Trials--Methods and Applications of Statistics in Clinical Trials, Vol 2--Planning, Analysis and Inferential Methods, Editor N.Balakrishnan, John Wiley & Sons, Newark, NJ, 506--526. Macci C. and Pacchiarotti B. (2015). Large deviations for a class of counting processes and some statistical applications, Statistics & Probability Letters, 104, 36-48. Kostadinova K.Y. (2013). On a Poisson Negative Binomial Process,in: Advanced Research in Mathematics, and Computer Science, Doctoral Conference in Mathematics, Informatics and Education, September, 19-21, Sofia, 25-33. Alhejaili A.D. and Abd-Elfattah E.F. (2013). Saddlepoint Approximations for Stopped-Sum Distributions, Communications in Statistics: Theory and Methods, 42, 3735--3743 (IF 0.295). Bao Z., Song L. and Liu He (2013). A note on the Inflated-parameter binomial distribution, Statistics & Probability Letters, 83, 1911--1914 (IF 0.531). Yin Ch. (2013). Optimal divident problem for a generalizaed compound Poisson risk model, arXiv: 1305.1747.2013. Beghin L. 
and Macci C. (2014). Fractional discrete processes: Compound and mixed Poisson representations, Journal of Applied Probability, and arXiv:1303.2861v1, 12 Mar 2013. Dragieva V. (2011). Queueing system with Polya - Aeppli input process, Ann. UACEG, Sofia, 43-44, 7-14. Haydn N and Vaienti S. (2008). The compound Poisson distribution and return times in dynamical systems, Probability Theory and Related Fields, 144, 517-542, (IF 1.569). Omey E. and S.Van Gulck (2006). Markovian Black and Scholes,Publications de l'institut mathematique, 79, 65-72. Borges P. (2012). Novos Modelos de Sobrevivencia com Fracao da Cura Baseados no Processo da Carcinogenese, Doctoral Thesis, Federal University of Sao Carlos, Brazil. Borges P., Rodrigues J. and Balakrishnan N. (2012). Correlated destructive generalized power series cure rate models and associated inference with application to a cutaneous melanoma data, Computational Statistics and Data Analysis, 56, 1703 - 1713 (IF (2011) 1.028). Jazi M.A. and Alamatsaz M.H. (2011). Some contributions to Inflated Generalized Power Series Distributions, Pakistan J. Statist., 27(2), 139 - 157 (IF 0.286). Stoynov P. (2011). Mixed Negative Binomial distribution by weighted gamma mixing distribution, Math. and Educ. in Math., 327 – 331. Jazi M.A. and Alamatsaz M.H. (2010) Ordering Comparison of Logarithmic Series Random Variables with their Mixtures, Communications in Statistics:Theory and Methods, 39(18), 3255-3263 (IF 0,351). Jazi M.A. and Alamatsaz M.H. (2010). Some Extensions of Discrete $\alpha-$monotone Distributions, Proceedings of the 10th Iranian Statistical Conference, 21--29. Chadjiconstantinidis S. and Pitselis G. (2009) Further Improved Recursions for a Class of Compound Poisson distributions, Insurance: Mathematics and Economics, 44, 278 – 286 (IF 1.477). Kazuki Aoyama, Kunio Shimizu and Ong S.H. (2008) A first - passage time random walk distribution with five transition probabilities: a generalization of the shifted inverse trinomial, Annals of the Institute of Statistical Mathematics, 60(1), 1-20 (IF 0.565). Vinogradov V. (2007). On Strustural and Asymptotic Properties of Some Classes of Distributions, Acta Appl. Math. 97: 335 – 351 (IF 0.43). Masashi Kitano, Kazuki Aoyama and Kunio Shimizu (2006) Recursion Formulae for Discrete Probability Distributions, Proceedings of the Institute of Statistical Mathematics, 54(1), 147-175 (in Japanese). Aoyama K. and Shimizu K. (2005). A generalization of the inverse trinomial, KSTS/RR-05/003, Jun.23.2005. Omey E. and S.Van Gulck (2006) Markovian Black and Scholes, Publications de l'institut mathematique, 79, 65-72. Omey E. and S.Van Gulck (2006). Markovian Black and Scholes, Publications de l'institut mathematique, 79, 65-72. Momeni F. (2011). The Generalized Power Series Distributions and their Application, The Journal of Mathematics and Computer Science, 2, 691 - 697. Bao Z., Song L. and Liu He (2013). A note on the Inflated-parameter binomial distribution, Statistics & Probability Letters, DOI: http:/dx.doi.org/10.1016/j.spl.2013.04.026 (IF). Borges P., Rodrigues J. and Balakrishnan N. (2012). Correlated destructive generalized power series cure rate models and associated inference with application to a cutaneous melanoma data, Computational Statistics and Data Analysis, 56, 1703 - 1713, (IF (2011) 1.028). Mostajeran F. (2011). Statistical analysis of the number of galaxies in cubic cells in the universe, Proc.ICCS-11 Lahore, Pakistan, Vol 22, 151-160. Momeni F. (2011). 
The Generalized Power Series Distributions and their Application, The Journal of Mathematics and Computer Science, 2(4), 691 - 697. Jazi M.A. and Alamatsaz M.H. (2011). Some contributions to Inflated Generalized Power Series Distributions, Pakistan J. Statist., 27(2), 139 - 157, (IF 0.286). Kamalja K.K. (2013). On the joint distribution of success runs of several lengths in the sequence of MBT and its applications, Statistical papers, DOI 10.1007/s00362-013-0560-8, (IF 0,683). Chaderjian B.J., Ebneshahrashoob M. and Gao J. (2012). Exact Distributions of Waiting Time Problems of Mixed Frequencies and Runs in Markov Dependent Trials, Applied Mathematics, 3, 1689-1696. Michelle L. Deppoy Smith and William S. Griffith (2011). Multi - state start - up demonstration tests, International Journal of Reliability, Quality and Safety Engineering, 18, 99 – 117. Martin D. E. K. and Aston J. A. D. (2008). Waiting time distribution on generalized later patterns, Comp. Statist. Data Analysis, 52, 4879 - 4890, (5 - years IF (2011) 1.373). Aston J.A.D. and Martin D.E.K. (2005). Waiting time distributions of competing patterns in higher order Markovian sequences, J. Appl. Probab., 42, 977 - 988, (IF (2008) 0.739). Kolev N. (2005). Run and Frequency Quotas Under Markovian Fashion and their Application in Risk Analysis, Economic Quality Control, 20, 97 - 109. Balakrishnan N. and Koutras M.V. (2002). Runs and Scans with Applications, Wiley series in Probability and Statistics. Michelle L. Deppoy Smith and William S. Griffith (2011). Multi - state start - up demonstration tests, International Journal of Reliability, Quality and Safety Engineering, 18, 99 - 117. Aston J. A. D. and Martin D. E. K. (2005). Waiting time distributions of competing patterns in higher order Markovian sequences, J. Appl. Probab., 42, 977 - 988, (IF (2008) 0.739). Kolev N. (2005). Run and Frequency Quatas Under Markovian Fashion and their Application in Risk Analysis, Economic Quality Control, 20, 97 - 109. Antzoulakos D. L. (2003). Waiting Times and Number of Appearances of Runs: A Unified Approach, Communications in Statistics – Theory and Methods, 32, 1289 - 1315 (IF). Balakrishnan N. and Koutras M. V. (2002). Runs and Scans with Applications, Wiley series in Probability and Statistics. Seiichi Yasui, Yyoshikazu Ojima and Tomomichi Suzuki (2006). Generalization of the Run Rules for the Shewhart Control Charts, in: Frontiers in Statistical Control, 207-219, Physica – Verlag HD. Hadziev D.I. (1985). Some remarks on Gaussian Solutions and explicit filtering formulae, Lecture Notes in Control and Information Science, 69, Springer, 207 – 216. Hadziev D.I. (1983). An example of explicit filtering and extrapolation, Compt. Randue Bulg. Acad. Sci., 36, 1379 - 1382. Bogachev V. I. (1998). Gaussian measures, Amer. Math. Soc., Rhode Island. Hadziev D. I. (1985). Some remarks on Gaussian Solutions and explicit filtering formulae, Lecture Notes in Control and Information Science, 69, Springer, 207 - 216. Butov A. A. (1982). The equivalence of measures corresponding to canonical Gaussian processes, Russ. Math. Surv., 37, 162 - 163. Hadziev D. I. (1981). Gaussian Solutions of Some Stochastic Equations, Compt. Randue Bulg. Acad. Sci., 34, 1647 - 1649.
CommonCrawl
In this blog post I'm going to write a little bit about permutations, and some connections with information theory and sorting algorithms, including giving a sketch-proof of the \(n \log n\) lower bound on comparison-based sorting. A permutation is a re-ordering of a list. For example, (1, 3, 2), (3, 2, 1) and (2, 1, 3) are all permutations of the list (1, 2, 3). Let's say you have a list with \(n\) elements in it (e.g. the length of the list is \(n\)). How many different ways of ordering the list (e.g. how many permutations) are there of the list? When picking the first element of your permutation, you have n elements to choose from. Pick one. Then when picking the 2nd element of your permutation, you have one less to choose from, e.g. \(n-1\) elements. By the time you come to choose the last element of your permutation, you only have one element left. So the total number of different permutations is $$n \times (n - 1) \times (n - 2) \times ... \times 3 \times 2 \times 1 $$ This is the definition of n factorial: $$n! = n \times (n - 1) \times (n - 2) \times ... \times 3 \times 2 \times 1$$ So there are \(n!\) permutations of a list of n elements. Let's say we want to store which permutation we have chosen in computer memory. Since there are \(n!\) possibilities, we need to use \(\log_2(n!)\) bits to store this information. Let's have a look at the expression \(\log_2(n!)\), and substitute in the definition of \(n!\): $$\log_2(n!) = \log_2(n \times (n-1) \times (n-2) \times ... \times 2 \times 1)$$ Using the rule above to break up the log: $$\log_2(n!) = \\ \log_2(n \times (n-1) \times (n-2) \times ... \times 2 \times 1) = \\ \log_2(n) + \log_2(n-1) + \log_2(n-2) + ... + \log_2(2) + \log_2(1) $$ The last line can be interpreted in an interesting way - it gives the number of bits required when storing each element choice for the permutation separately. For example, \(\log_2(n)\) is the information needed to store the first choice, which was from \(n\) options. \(\log_2(n-1)\) is the information needed to store the second choice, which was from \(n-1\) options. \( \log_2(1) \) is the information needed to store the last choice, which only has 1 option, e.g. there was no choice for it. Happily, \(\log_2(1) = 0\). No information is needed for this null choice! This is one of the main reasons (perhaps the only reason?) why the log function pops up in information theory. When you have two independent choices to make, the number of possibilities is the product of the two numbers of possibilities for each choice. But the information required for both is the sum of the two individual pieces of information. The log function is the only function with this property. The task of a sorting algorithm is to take a permutation of some original sorted list, and return the original sorted list. Consider a comparison based sorting algorithm, that at each step can compare two elements together with the less than (<) operator. At each step it can eliminate up to half of the possible permutations. In other words, it can gather one bit of information with each comparison. So to gather all the bits of information needed to determine which permutation it is looking at, it needs to make \(\log_2(n!)\) comparisons. It could do it less efficiently, but this is the lower bound on the number of comparisons which can handle the worst case. Stirling's approximation tells us that $$\ln(n!) \approx n \ln(n) - n$$ Here \(\ln(x)\) is the natural logarithm function, e.g. \(\log\) with base \(e\). 
We can convert natural logarithms to log base 2 by dividing the log function by \(\ln(2)\), which is about 0.6931, i.e. $$\log_2(x) = \ln(x) / \ln(2)$$ or $$\log_2(x) \times \ln(2) = \ln(x)$$ Using this rule in the previous equation gives: $$\ln(n!) \approx n \ln(n) - n \\ \log_2(n!) \times 0.6931 \approx n \log_2(n) \times 0.6931 - n \\ \log_2(n!) \approx n \log_2(n) - n / 0.6931 $$ When you are doing asymptotic complexity analysis, constants don't matter. So the optimal asymptotic complexity of a comparison-based sort algorithm is \(n \log_2(n)\), which is the same asymptotic complexity as \(n \ln(n)\). One interesting thing to note is that \(n \log_2(n)\) is quite a poor estimate of the number of comparisons needed (it is quite a lot too large), at least for these relatively small n values. The (ceiling'd) \(\log_2(n!)\) values give the information-theoretic lower bound on the number of comparisons needed. So there you go, some curious things about permutations and sorting. I hope some people found this interesting. If there are any bits of maths that you don't understand, let me know and I will go into more depth. Corrections welcome of course.
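If you want to reproduce the comparison yourself, a few lines of Python (not part of the original post) will print \(\log_2(n!)\), the Stirling estimate, and \(n \log_2(n)\) side by side:

```python
# Compare log2(n!) (the information-theoretic lower bound on comparisons),
# Stirling's approximation in bits, and the looser n*log2(n) estimate.
import math

def log2_factorial(n):
    # Sum of log2(k) for k = 1..n; equals log2(n!) without computing n! itself.
    return sum(math.log2(k) for k in range(1, n + 1))

print(f"{'n':>6} {'ceil(log2(n!))':>15} {'n*log2(n)':>12} {'Stirling':>12}")
for n in [4, 8, 16, 64, 256, 1024]:
    exact = log2_factorial(n)
    stirling = (n * math.log(n) - n) / math.log(2)   # (n ln n - n) / ln 2
    print(f"{n:>6} {math.ceil(exact):>15} {n * math.log2(n):>12.1f} {stirling:>12.1f}")
```

Even at these small sizes the gap between \(n \log_2(n)\) and \(\log_2(n!)\) is clearly visible, while the Stirling line hugs the exact values much more closely.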
CommonCrawl
However, storing notes in plain-text is crucial to me, for all the reasons that can be found on the blog and in the forum. I believe I would perceive an inline-image preview in a Markdown editor rather as a hindrance than as a feature. The first requirement can be easily fulfilled by using the Markdown preview app Marked 2. With Marked set as the external editor for The Archive, a view of the note with images is only one Cmd+Shift+E away. The second and third requirements can be fulfilled by setting the (awesome) document converter pandoc as the custom processor in Marked. In Marked's advanced settings, I enabled the custom processor, setting the path to "/usr/local/bin/pandoc" and the args to "-f markdown -t html5 -s --filter pandoc-citeproc --bibliography /path/to/bibliography.bib". With this system, BibTeX references (e.g. @bibtexkey) will then be processed and formulas can be written using LaTeX syntax. I noticed some problems with inline formulas as in "we observe that $x_n$ is". Here, sometimes the indices are not set correctly. With "displayed formulas" surrounded by "$$", as seen in the attached image, everything seems to work perfectly. I am quite happy with this setup; maybe it is helpful for some of you as well. @cdguit: Nice setup, thanks for sharing! The only thing I would not be entirely happy with in the setup is that the bibliography data is not part of the stored zettel, but is just created when using the Markdown preview app. I would want the citation in the Bibliography section of the zettel. I'm trying to find out how to get that automated, possibly with a .bib-file as the source of my references (at the moment I'm using Zotero, but will probably switch soon). I am doing pretty much precisely this. I don't have as much need for formulae and citations, but for inline images I work in parallel in either The Archive or Emacs with Marked open next to it. It especially helps with things like making sure tables and other formatting look good for client work. I don't know how to do what you want, but there should be a solution that is not too complicated. I was looking for a CLI tool to query .bib files, but didn't find anything sophisticated. Yes, tables and syntax highlighting also look very nice with Marked! I really like this separation of plain-text content and a pretty HTML view. @cdguit: Yesterday I had a closer look at your described setup and I must say that I like a lot about it and decided to go for it (or something similar) as well. I decided that it's not necessary to have the bibliography data in the original plain text file – the citekey and the associated .bib-file make it resilient enough. Bibliography data is mainly important if I want to share zettels with others who don't have the .bib-file, so if I create a nice file to share with Marked 2 and it creates the bibliography for me automatically, that's perfect. @Vinho, I wouldn't recommend inventing your own entry types and fields, esp. not with BibTeX, whose format has been fragmented enough already (due to the lack of a proper spec and coordinated development of the format). Instead, it may be worth looking at richer bibliographic exchange formats such as BibLaTeX or CSL/Citeproc JSON (or its YAML variant). These formats should also be supported by pandoc. @msteffens: Thanks for your informative post! Am I right in saying that you advise against inventing one's own types and fields and instead would recommend using a format that already provides suitable types and fields out of the box?
BibLaTeX or CSL JSON offer more types, but neither of them offers all I want... In general, I'd much prefer it if I could freely invent my own types and fields and didn't have to try to fit my stuff into a system that wasn't really made for it. Do you know more specifically which problems arise when I invent my own types and fields and if these problems can be avoided somehow? In general, I'd much prefer it if I could freely invent my own types and fields and didn't have to try to fit my stuff into a system that wasn't really made for it. Yes, I can understand that. Still, I'd ask myself whether defining your own custom types & fields is really worth the hassle. What do you gain with this, and where exactly will you benefit by using your own types/fields? Could you maybe gain almost the same benefits by reusing general fields in a creative way? E.g., could you instead add special keywords to the notes or tags fields? Do you know more specifically which problems arise when I invent my own types and fields and if these problems can be avoided somehow? You lose any kind of interoperability – this not only concerns interoperability with other users but also with your future self and with using different tools in the future. With the standard bibliographic exchange formats, interoperability is a tricky thing and often quite hard to achieve. The bibliographic formats aren't standardized enough, and everybody likes to use or interpret them slightly differently. The world of bibliographic exchange formats is a huge mess already, and a PITA for everyone trying to improve interoperability (I've been working in the field for >15 years, so my opinion is probably biased). Sure, if you will only ever use this for yourself, then, of course, you're free to do whatever you want. But, as you noted, you also want to use tools like Zotero, BibDesk or Pandoc etc. And any bigger customization makes it harder to use such tools for your benefit. Thank you again for posting this and for your helpful advice. I've decided to go with the same system as you – so I am now working with Pandoc and Marked 2. I also created my own CSL style, so the citations look perfect (except for a few small problems that I still need to solve). Overall, it seems to me like a very elegant and easy-to-use solution – I'm very pleased with it. Thank you for your important advice as well. I've decided to use BibLaTeX with BibDesk for now, and I'm trying to use a small number of predefined entry types for everything I have – working with the field "type" to distinguish between different kinds of resources. If it were easier to find out how exactly pandoc-citeproc parses the BibLaTeX entry types and fields, I might refine this a bit more, but for now that'll do. Alright, I am glad the system works for you! Thank you for your contributions! I should have been more precise: when writing in LaTeX I am actually using BibLaTeX, I just (incorrectly) keep calling it BibTeX.
CommonCrawl
Derive the natural frequency $f_n$ of the system composed of two homogeneous circular cylinders, each of mass $M$, and the connecting link AB of mass $m$. Assume small oscillations. There is no slipping between the cylinders and the ground. I got the frequency by the energy method; can we find it by the force method? If yes, then how, given that there will be friction and hinge forces which are unknown? Please show your attempt to use the force method! The question does not mention work done against friction, so you can ignore it. Your energy method solution did not include it. The reaction forces at the hinge are more difficult. Draw FBDs for the cylinder and the link. The forces on the two cylinders are identical, so you need a diagram for only one. The hinge forces have horizontal and vertical components. Apply Newton's 2nd law to each body. As a further simplification, imagine that the cylinders overlap and become one. You can do this because their motion is identical. Then the link has zero length - i.e. it is only a point mass. (The question does not tell you the length of the link, only its mass. So the length is irrelevant, and could be zero.) Then you have one cylinder with mass 2M and an off-centre mass m, rocking on the table. Now there is no need to consider the forces at the hinge. But in this case, how do we find $F_2$?
CommonCrawl
We investigate the magnetic field generation in global solar-like convective dynamos in the framework of mean-field theory. We simulate a solar-type star in a wedge-shaped spherical shell, where the interplay between convection and rotation self-consistently drives large-scale dynamo. To analyze the dynamo mechanism we apply the test-field method for azimuthally (φ) averaged fields to determine the 27 turbulent transport coefficients of the electromotive force, of which 9 are related to the α effect tensor. This method has previously been used either in simulations in Cartesian coordinates or in the geodynamo context and it is applied here for the first time in simulations of solar-like dynamo action. We find that the φφ -component of the $\alpha$ tensor does not follow the profile expected from that of kinetic helicity. Beside the dominant $\alpha$-$\Omega$ dynamo, also an α dynamo is locally enhanced. The turbulent pumping velocities significantly alter the effective mean flows acting on the magnetic field and therefore challenge the flux transport dynamo concept. All coefficients are significantly affected due to dynamically important magnetic fields with quenching as well as enhancement being observed. This leads to a modulation of the coefficients with the activity cycle. The temporal variations are found to be comparable to the time-averaged value and seem to be responsible for a nonlinear feedback on the magnetic field generation. Furthermore, we quantify the validity of the Parker-Yoshimura rule for the equatorward propagation of the mean magnetic field in the present case.
CommonCrawl
We analyze LEP and SLC data from the 1995 Winter Conferences for signals of new physics. We compare the data with the Standard Model (SM) as well as a number of test hypotheses concerning the nature of new physics: (i) nonstandard Zbb couplings, (ii) nonstandard Zff couplings for the entire third generation, (iii) nonstandard oblique corrections, (iv) nonstandard lepton couplings, (v) general nonstandard W and Z couplings to all fermions, as well as combinations of the above. In most of our analyses, we leave the SM variables $\alpha_s$ and $m_t$ as free parameters to see how the various types of new physics can affect their inferred values. We find that the best fit ($\chi^2/d.o.f. = 8.4/10$) is obtained for the nonstandard Zbb couplings, which also give a `low' value (0.112) for $\alpha_s$. The SM also gives a good description of the Z data, having $\chi^2/d.o.f. = 12.4/12$. If $\alpha_s$ is held fixed to the low-energy value 0.112, then we find that a combination of the nonstandard Zbb couplings is fit to lie more than four standard deviations away from zero.
CommonCrawl
Multiplication Map, Is it invariant? Let $\pi:X\rightarrow Z$ a double cover of an elliptic curve with genus $g\geq 3$. Choose a general rank 2 and degree -1 vector bundle $F$ on $Z$, let $E=\pi^*F$ and fix $x\in X$. The involution $i$ of the cover induces an involution on $H^0(X,E)$. Consider the multiplication $$H^0(X,\mathcal O(x+i(x)))\otimes H^0(X,E(-x-i(x))\rightarrow H^0(X,E)$$ My question: Does the image of this multiplication lies in the invariant part of the action of $i$? Browse other questions tagged ag.algebraic-geometry cohomology vector-bundles or ask your own question. Is there a unique line bundle in the Kummer surface which pulls back to a totally symmetric line bundle?
CommonCrawl
Mirko buys a lot of candy in the candy shop. He cannot always pay the exact amount so the shopkeeper and he have an agreement. He tells the shopkeeper the smallest bill he has, and she rounds his amount to the nearest number he can pay. For example, if the smallest bill Mirko has is a hundred bill, and he wants to buy $150$ Kunas of candy, the shopkeeper rounds his amount to $200$ Kunas. If he wants to buy $149$ Kunas of candy, the shopkeeper rounds his amount to $100$ Kunas. Lately, Mirko suspects the shoopkeeper is trying to cheat him. He asked you to help him. Write a program that will help him. His mother only gives Mirko $1, 10, 100, 1\, 000, \ldots , 1\, 000\, 000\, 000$ Kuna bills. He never has bills that are not powers of $10$. The bills he does have, he has in large amounts. The first and only line of input contains two integers, $C$ ($0 \le C \le 1\, 000\, 000\, 000$), the price of candy Mirko is going to buy, and $K$ ($0 \le K \le 9$), number of zeros on the smallest bill Mirko has. The first and only line of output should contain one integer, $C$ rounded to the nearest amount Mirko can pay.
CommonCrawl
The Fair Nut lives in $$$n$$$ story house. $$$a_i$$$ people live on the $$$i$$$-th floor of the house. Every person uses elevator twice a day: to get from the floor where he/she lives to the ground (first) floor and to get from the first floor to the floor where he/she lives, when he/she comes back home in the evening. Moves from the $$$x$$$-th floor (initially it stays on the $$$x$$$-th floor) to the $$$a$$$-th and takes the passenger. Moves from the $$$a$$$-th floor to the $$$b$$$-th floor and lets out the passenger (if $$$a$$$ equals $$$b$$$, elevator just opens and closes the doors, but still comes to the floor from the $$$x$$$-th floor). Moves from the $$$b$$$-th floor back to the $$$x$$$-th. The elevator never transposes more than one person and always goes back to the floor $$$x$$$ before transposing a next passenger. The elevator spends one unit of electricity to move between neighboring floors. So moving from the $$$a$$$-th floor to the $$$b$$$-th floor requires $$$|a - b|$$$ units of electricity. Your task is to help Nut to find the minimum number of electricity units, that it would be enough for one day, by choosing an optimal the $$$x$$$-th floor. Don't forget than elevator initially stays on the $$$x$$$-th floor. The first line contains one integer $$$n$$$ ($$$1 \leq n \leq 100$$$) — the number of floors. The second line contains $$$n$$$ integers $$$a_1, a_2, \ldots, a_n$$$ ($$$0 \leq a_i \leq 100$$$) — the number of people on each floor. In a single line, print the answer to the problem — the minimum number of electricity units. In the first example, the answer can be achieved by choosing the second floor as the $$$x$$$-th floor. Each person from the second floor (there are two of them) would spend $$$4$$$ units of electricity per day ($$$2$$$ to get down and $$$2$$$ to get up), and one person from the third would spend $$$8$$$ units of electricity per day ($$$4$$$ to get down and $$$4$$$ to get up). $$$4 \cdot 2 + 8 \cdot 1 = 16$$$. In the second example, the answer can be achieved by choosing the first floor as the $$$x$$$-th floor. Server time: Apr/20/2019 00:30:21 (f2).
CommonCrawl
Abstract: Electroweak interactions based on a gauge group $\rm SU(3)_L \times U(1)_X$, coupled to the QCD gauge group $\rm SU(3)_c$, can predict the number of generations to be multiples of three. We first try to unify these models within SU(N) groups, using antisymmetric tensor representations only. After examining why these attempts fail, we continue to search for an SU(N) GUT that can explain the number of fermion generations. We show that such a model can be found for $N=9$, with fermions in antisymmetric rank-1 and rank-3 representations only, and examine the constraints on various masses in the model coming from the requirement of unification.
CommonCrawl
Abstract: We construct lump solutions of the Kadomtsev–Petviashvili-I equation using Grammian determinants in the spirit of the works by Ohta and Yang. We show that the peak locations depend on the real roots of the Wronskian of the orthogonal polynomials for the asymptotic behaviors in some particular cases. We also prove that if the time goes to $-\infty$, then all the peak locations are on a vertical line, while if the time goes to $\infty$, then they are all on a horizontal line, i.e., a $\pi/2$ rotation is observed after interaction. Keywords: Grammian determinant, lump solutions, orthogonal polynomials, Wronskian. This research is supported by the Ministry of Science and Technology, Taiwan (Grant No. 104-2115-M-606-001).
CommonCrawl
You are given a directed graph, and your task is to find out if it contains a negative cycle, and also give an example of such a cycle. The first input line has two integers $n$ and $m$: the number of nodes and edges. The nodes are numbered $1,2,\ldots,n$. After this, the input has $m$ lines that describe the edges. Each line has three integers $a$, $b$, and $c$: there is an edge from node $a$ to node $b$ whose length is $c$. If the graph contains a negative cycle, print first "YES", and then the nodes in the cycle in their correct order. If there are several negative cycles, you can print any of them. If there are no negative cycles, print "NO".
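One standard approach (not specified by the problem itself) is Bellman–Ford: if some edge can still be relaxed during the $n$-th round of relaxation, the graph contains a negative cycle, and the cycle can be recovered by walking predecessor pointers. A hedged Python sketch:

```python
# Bellman-Ford based negative-cycle detection. Starting all distances at 0
# makes every node effectively reachable; after n rounds, any node relaxed
# in the last round leads back to a negative cycle via parent pointers.
import sys

def find_negative_cycle(n, edges):
    dist = [0] * (n + 1)
    parent = [0] * (n + 1)
    last_relaxed = -1
    for _ in range(n):
        last_relaxed = -1
        for a, b, c in edges:
            if dist[a] + c < dist[b]:
                dist[b] = dist[a] + c
                parent[b] = a
                last_relaxed = b
    if last_relaxed == -1:
        return None
    v = last_relaxed
    for _ in range(n):          # step back n times to land on the cycle itself
        v = parent[v]
    cycle = [v]
    u = parent[v]
    while u != v:
        cycle.append(u)
        u = parent[u]
    cycle.append(v)
    cycle.reverse()
    return cycle

def main():
    data = sys.stdin.read().split()
    n, m = int(data[0]), int(data[1])
    edges = []
    idx = 2
    for _ in range(m):
        a, b, c = int(data[idx]), int(data[idx + 1]), int(data[idx + 2])
        edges.append((a, b, c))
        idx += 3
    cycle = find_negative_cycle(n, edges)
    if cycle is None:
        print("NO")
    else:
        print("YES")
        print(*cycle)

main()
```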
CommonCrawl
Weighted linear regression is one of those things that one needs from time to time, yet it is not a built-in function of many common packages, including spreadsheet programs. On the other hand, the problem is not sufficiently complicated to make it worth one's while to learn (or relearn!) more sophisticated statistical software packages, as with a modest effort, the formulae can be derived easily from first principles. For the fit $y = a x + b$, minimizing the weighted sum of squared residuals $\sum_i W_i (y_i - a x_i - b)^2$ yields $$a = \frac{\sum W_i \sum W_i x_i y_i - \sum W_i x_i \sum W_i y_i}{\sum W_i \sum W_i x_i^2 - \left(\sum W_i x_i\right)^2}, \qquad b = \frac{\sum W_i y_i - a \sum W_i x_i}{\sum W_i},$$ which can be readily computed if the values of $\sum W_i$, $\sum W_ix_i$, $\sum W_ix_i^2$, $\sum W_iy_i$ and $\sum W_ix_iy_i$ are available. These, in turn, can be calculated in a cumulative fashion, allowing a weighted least squares calculation to take place even on a handheld calculator that lacks sufficient memory to store all individual $x_i$, $y_i$, and $W_i$ values.
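A small Python sketch of this cumulative approach (an illustration of the idea described above, not code from the original article):

```python
# Weighted least squares for y = a*x + b using only running sums,
# so it can be updated one observation at a time.
class WeightedLinearFit:
    def __init__(self):
        self.sw = self.swx = self.swx2 = self.swy = self.swxy = 0.0

    def add(self, x, y, w=1.0):
        self.sw += w
        self.swx += w * x
        self.swx2 += w * x * x
        self.swy += w * y
        self.swxy += w * x * y

    def coefficients(self):
        denom = self.sw * self.swx2 - self.swx ** 2
        a = (self.sw * self.swxy - self.swx * self.swy) / denom
        b = (self.swy - a * self.swx) / self.sw
        return a, b

# usage: fit data roughly following y = 2x + 1 with unit weights
fit = WeightedLinearFit()
for x, y in [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)]:
    fit.add(x, y)
print(fit.coefficients())
```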
CommonCrawl
There will be 70 stores in the Orlando area after the increase. We can use the following guideline to solve this problem: current number of stores + increase in the number of stores = number of stores after the increase. So $50 + 40\% \times 50 = 50 + 0.4 \times 50 = 50 + 20 = 70$. There will be 70 stores in the Orlando area after the increase.
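The same arithmetic as a trivial Python check (the 50 stores and the 40% increase are the numbers from the problem):

```python
current_stores = 50
increase_rate = 0.40                      # 40% increase
after = current_stores * (1 + increase_rate)
print(after)                              # 70.0
```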
CommonCrawl
My concern is that if I sample, I might have some sort of convergence or error problem. And I wouldn't know where to look for further research in the issue. I am not a mathematician, although I wish I were, so please forgive any miscommunication on my part. Thank you for any help.
CommonCrawl
Find sufficient conditions under which the triangle $ABC$ is located inside the domain bounded by the triangle $DEF$. Let us consider two triangles $ABC$ and $DEF$ in the plane. My question is: find sufficient conditions under which the triangle $ABC$ is located inside the domain bounded by the triangle $DEF$.
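One simple sufficient condition (in fact equivalent, since the region bounded by a triangle is convex) is that all three vertices $A$, $B$, $C$ lie in the closed region bounded by $DEF$. A small Python sketch of this check, using signed areas and made-up coordinates:

```python
# ABC lies inside triangle DEF iff all three vertices of ABC are inside DEF
# (DEF is convex). Point-in-triangle is tested with signed areas.
def sign(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def point_in_triangle(p, d, e, f):
    s1, s2, s3 = sign(d, e, p), sign(e, f, p), sign(f, d, p)
    has_neg = (s1 < 0) or (s2 < 0) or (s3 < 0)
    has_pos = (s1 > 0) or (s2 > 0) or (s3 > 0)
    return not (has_neg and has_pos)      # boundary counts as inside

def triangle_inside(abc, DEF):
    return all(point_in_triangle(p, *DEF) for p in abc)

# hypothetical coordinates, just to illustrate the check
print(triangle_inside([(1, 1), (2, 1), (1, 2)],
                      [(0, 0), (5, 0), (0, 5)]))   # True
```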
CommonCrawl
Theorem 1: Let $H$ be an inner product space. If $H$ is separable and $E \subset H$ is an orthonormal subset of $H$, then $E$ is countable. Recall that a space is said to be separable if it contains a countable and dense subset. Proof: Suppose instead that $E$ is uncountable. Let $\mathcal F = \{ B(e, \frac{\sqrt{2}}{2}) : e \in E \}$ be the collection of open balls of radius $\frac{\sqrt{2}}{2}$ centred at the points of $E$. Since $E$ is assumed to be uncountable, so is $\mathcal F$. Furthermore, the open balls in $\mathcal F$ are disjoint, because any two distinct elements of $E$ are at distance $\sqrt{2}$ from each other. Now let $D$ be a countable dense subset of $H$, which exists because $H$ is separable. Since $D$ is dense in $H$, every ball in $\mathcal F$ must contain a point of $D$, and since the balls are disjoint these points are distinct. But $D$ is countable and so this is impossible since $\mathcal F$ is uncountable. So the assumption that $E$ is uncountable is false. Thus $E$ must be countable.
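For completeness, here is the short distance computation used in the proof, for any two distinct elements $e, e'$ of an orthonormal set: $$\|e - e'\|^2 = \langle e - e', e - e' \rangle = \|e\|^2 - 2\,\mathrm{Re}\,\langle e, e' \rangle + \|e'\|^2 = 1 - 0 + 1 = 2, \qquad \text{so } \|e - e'\| = \sqrt{2}.$$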
CommonCrawl
Suppose that $V_1$ and $V_2$ are vector spaces (over $\mathbb C$, say) with the same underlying set $X$. Suppose that the set of permutations of $X$ that are linear automorphisms of $V_1$ is the same as the analogous set for $V_2$. Then are $V_1$ and $V_2$ isomorphic? EDIT: Stefan Kohl points out that it doesn't matter how the question arose, so probably I should clarify that I was asking because I don't know if a random curiosity is research-level—as opposed to something that arose in my research, which more or less by definition is (hopefully!). Many people like to know where a problem came from. But don't be afraid to reply that you just had some idea and found it interesting. We mathematicians can identify with that. Of course we might not agree with you on how interesting the problem is, and some of us (not me) will be eager to tell you so. It matters what your question is and how you pose it, but not how you got to it. -- It doesn't make your question any worse or any better if you got to it in this or in that way. Questions turning up in one's research are sometimes even not the best-suited ones for MathOverflow. Answering such questions frequently tends to be more work than an answerer on this site is likely to do, and answers are often longer than one or two pages of text, and thus rather too long for the format of this site.
CommonCrawl
How does sampling rate impact Discrete-Time Kalman Filter state space modeling assumptions? The Kalman filter will be run with update interval $T_s$ such that $x_k$ represents position at time $t=kT_s$. Typically, the process noise $w_k$ is modeled as a zero-mean, white, stationary random process of variance $\sigma^2_w$. Now, assume that the position time series is strictly band-limited with bandwidth far lower than the Nyquist frequency $1/(2T_s)$. This corresponds to a high degree of oversampling. For a constant-position model $x_{k+1} = x_k + w_k$, the process noise can be recovered by applying a first difference, $w_k = x_{k+1} - x_k$, to the position time series. However, if the position time series is strictly band-limited, then the recovered process noise is strictly band-limited, which violates the white process noise model. Does this mean that the Kalman filter needs to use a correlated process noise model for highly oversampled systems? In my case, the state space model (which, in fact, is constant velocity, not constant position) describes the time evolution of a certain biological quantity. The discrete-time state space model has not been chosen from a precise description of the dynamics of the underlying physical (physiological) phenomena (this is because such descriptions are complex and require access to additional biological quantities which are rarely knowable in practice). Rather, the constant velocity model is a common choice in the literature, motivated primarily by expedience, simplicity, and good tracking results. In the literature, all results seem to be reported for near-Nyquist sampling. Some studies in the literature analyzed real biological data (again, sampled near Nyquist) and claimed that the process noise for this model was white. In my case, for historical reasons, I am oversampling by a good factor of 15. When I analyzed a very clean sample signal, I found that at the oversampled rate, the process noise was highly correlated. A decimated-to-near-Nyquist version of the signal gave rise to a significantly lower amount of temporal correlation in the process noise. This is what motivated my question. The answer to your question is: maybe. When one works with the discrete-time model, it is generally assumed that you obtained the state transition equations by solving the continuous-time differential equations in which the physics has already been appropriately modeled, which is where your answer lies. Time discretization is based on sampling the continuous-time state transition matrix, thus the sampling intervals aren't related to Nyquist; they correspond to the fidelity of the modeling. Shorter intervals tend to be more accurate approximations. The time intervals don't have to be uniform, but uniform intervals provide a less complicated implementation. While shorter time intervals correspond to a heuristic where shorter means better approximation, you raise a valid concern: sampling too closely may indeed introduce correlation in the process noise. This can be solved with an augmented state. These are all considerations. Kalman filters need to be tuned. The measurement noise has been neglected in this discussion. So, the answer to your question is that it really depends.
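To illustrate the questioner's observation numerically, here is a small, self-contained Python sketch (with made-up signal parameters, not the poster's data):

```python
# When a smooth, band-limited position signal is heavily oversampled, the
# process noise implied by the constant-position model, w_k = x_{k+1} - x_k,
# is far from white: its lag-1 autocorrelation is close to 1, whereas a white
# sequence would give a value near 0.
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorr(w):
    w = w - w.mean()
    return float(np.dot(w[:-1], w[1:]) / np.dot(w, w))

# band-limited "position": sum of sinusoids below 0.5 Hz, oversampled at 15 Hz
fs, duration = 15.0, 200.0
t = np.arange(0.0, duration, 1.0 / fs)
freqs = rng.uniform(0.05, 0.5, size=20)
phases = rng.uniform(0.0, 2.0 * np.pi, size=20)
x = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

w_implied = np.diff(x)                    # process noise implied by the model
w_white = rng.standard_normal(w_implied.size)

print("lag-1 autocorrelation, implied w:", round(lag1_autocorr(w_implied), 2))  # close to 1
print("lag-1 autocorrelation, white w  :", round(lag1_autocorr(w_white), 2))    # near 0
```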
CommonCrawl
Abstract: We show that pure quadratic gravity with quantum loop corrections yields a viable inflationary scenario. We also show that a large family of models in the Jordan frame, with softly-broken scale invariance, corresponds to the same theory with linear inflaton potential in the Einstein frame. It follows that all these quasi scale-invariant models have the same relation between the tensor-to-scalar ratio and the scalar spectral index, which is also consistent with the current data. Thus, they form a family of attractors, which is sharply distinct from the recently discovered $\alpha$-attractors of Kallosh, Linde et al.
CommonCrawl
The probability distribution of a random variable, to be contrasted with the conditional distribution of this random variable under certain additional conditions. Usually the term "a priori distribution" is used in the following way. Let $(\Theta,X)$ be a pair of random variables (random vectors or more general random elements). The random variable $\Theta$ is considered to be unknown, while $X$ is considered to be the result of an observation to be used for estimation of $\Theta$. The joint distribution of $\Theta$ and $X$ is given by the distribution of $\Theta$ (now called the a priori distribution) and the set of conditional probabilities $\mathbb P_\theta$ of the random variable $X$ given $\Theta=\theta$. According to the Bayes formula, one can calculate the conditional probability of $\Theta$ with respect to $X$ (which is now called the a posteriori distribution of $\Theta$). In statistical problems, the a priori distribution is often unknown (and even the assumption on its existence is not sufficiently founded). For the use of the a priori distribution, see Bayesian approach.
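For concreteness (this is not part of the original encyclopedia entry), in the case where all distributions admit densities, say the a priori distribution of $\Theta$ has density $\pi(\theta)$ and $\mathbb P_\theta$ has density $p(x \mid \theta)$, the a posteriori density is $$\pi(\theta \mid x) = \frac{p(x \mid \theta)\,\pi(\theta)}{\int p(x \mid \theta')\,\pi(\theta')\,\mathrm d\theta'}.$$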
CommonCrawl
If two sets $A$ and $B$ have equal power sets, can we conclude that $A=B$? I feel like they are equal, by intuition and by analyzing particular cases, but I don't know how to write a formal proof of this. Thanks in advance. Note that for any set $X$ we have $\bigcup \mathcal P(X) = X$ where $\mathcal P(X)$ is a power set of $X$. Thus $$A=\bigcup \mathcal P(A) = \bigcup \mathcal P(B) = B$$. Yes. Since A is in the power set of B, A is a subset of B. Likewise, B is a subset of A. Therefore, A = B. Yes. If $\mathcal P(A)=\mathcal P(B)$, as $A\in\mathcal P(A)$, $A\in\mathcal P(B)$, which means $A\subset B$. Similarly, $B\subset A$, so $A=B$.
CommonCrawl
I have two eigenvectors: $(2, 1, -1)'$ with eigenvalue $1$, and $(0, 1, 1)'$ with eigenvalue $2$. The corresponding determinant is $8$. How can I calculate the $3\times3$ symmetric matrix $A$ and $AP$? I cannot solve for several of the unknown entries of the matrix. The determinant is the product of the eigenvalues, hence you can compute the third eigenvalue. Moreover, you know two eigenvectors. As the matrix is assumed to be symmetric, you can complete the eigenvectors to an orthogonal basis. So you know the diagonalized form of $A$ and the transformation matrix. Can you take it from here?
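A numerical sketch of the answer's recipe with numpy (not from the original thread): the two given eigenvectors are orthogonal, so the third can be taken as their cross product, and $\det A = 8$ fixes the third eigenvalue as $8/(1 \cdot 2) = 4$.

```python
import numpy as np

v1 = np.array([2.0, 1.0, -1.0]);  lam1 = 1.0
v2 = np.array([0.0, 1.0,  1.0]);  lam2 = 2.0
v3 = np.cross(v1, v2)             # completes the orthogonal eigenbasis
lam3 = 8.0 / (lam1 * lam2)        # = 4, since det(A) = product of eigenvalues

# spectral decomposition A = sum_i lam_i * u_i u_i^T with unit eigenvectors
A = sum(lam * np.outer(v, v) / np.dot(v, v)
        for lam, v in [(lam1, v1), (lam2, v2), (lam3, v3)])

print(np.round(A, 6))
print(np.round(np.linalg.det(A), 6))      # should be 8
print(np.round(A @ v1 - lam1 * v1, 6))    # should be ~0
```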
CommonCrawl
There are various strict monoidal model categories of spectra (e.g. symmetric spectra) where the honestly commutative monoid objects model the "coherently commutative" ring spectra (which might otherwise be expressed using, say, operads). Is there an analog for spaces? That is, is there a monoidal model category, Quillen equivalent to spaces (in some monoidal sense), such that the category of commutative monoids in this category is (Quillen) equivalent to the category of algebras in spaces over some fixed and suitably free $E_\infty$-operad? In spaces, this is false; topological abelian groups are very far from modelling infinite loop spaces. Yes, such a model is developed in a paper of Blumberg, Cohen and Schlichtkrull about Thom spectra. You may also be interested in this paper.
CommonCrawl
Abstract: We construct a solvable group $G$ of order 5 648 590 729 620 such that the set of element orders of $G$ coincides with that of the simple group $\mathrm S_4(3)$. This completes the determination of finite simple groups isospectral to solvable groups. Keywords: finite simple groups, element orders, recognition.
CommonCrawl
Format: MarkdownItex"There exists a model category structure on a category of dg-coalgebras whose fibrant objects are precisely L-∞ algebras." Isn't there that the operad for L-∞ algebras is the cofibrant replacement of the operad for dg-algebras in a model structure on the category of dg-operads ? Now the algebras over this copfibrant replacements are "the same as" fibrant objects in a model structure on the category of algebras over the original dg-operad. Is there a general pattern here ? Isn't there that the operad for L-∞ algebras is the cofibrant replacement of the operad for dg-algebras in a model structure on the category of dg-operads ? Now the algebras over this copfibrant replacements are "the same as" fibrant objects in a model structure on the category of algebras over the original dg-operad. Is there a general pattern here ? I am finally working on adding some substance to the entry model structure for L-infinity algebras (long overdue). For a survey, check out the Summary table. I have added a brief commented cross-link between model structure for L-infinity algebras (in the section On cosimplicial algebras) and monoidal Dold-Kan correspondence (in the section dual monoidal DK – Quillen equivalences) on how Jonathan Pridham's article has a Quillen equivalence between dg-algebras and cosimplicial algebras by the inverse to the normalized-cochains-functor, relating two models for L ∞L_\infty-algebras. Format: MarkdownItex@Urs: Jon Pridham told me of the following notes that Stefano Maggiolo took from a course Jonathan gave in Rome: http://poisson.phc.unipi.it/~maggiolo/wp-content/uploads/2008/12/WDTII_Pridham.pdf I would put a link on the [[model structure for L-infinity algebras]] page but am not sure if that is the best place for it. I would put a link on the model structure for L-infinity algebras page but am not sure if that is the best place for it. Format: MarkdownItexHi Tim, ah, thanks for the link, that looks like a useful survey of his article. Sure, that would fit wll at [[model structure for L-infinity algebras]], also at [[deformation theory]] etc, I presume. ah, thanks for the link, that looks like a useful survey of his article. Sure, that would fit wll at model structure for L-infinity algebras, also at deformation theory etc, I presume. I have finally added to model structure for L-infinity algebras right at the beginning the statement of the "default" model structure, that induced by L ∞L_\infty-algebras being homotopy algebras over an operad. I have split the definition-section now into one piece with this "default" model structure and then all the other model structures together as ingredients of the Quillen equivalence of that to infinitesimal derived ∞\infty-stacks. Format: MarkdownItexadded 1. the [remark](https://ncatlab.org/nlab/show/model+structure+for+L-infinity+algebras#OndgCoAlgWEs) that the class of weak equivalences (for $L_\infty$) on $dgCoAlg$ is _not_ the evident one, 1. the [proposition](https://ncatlab.org/nlab/show/model+structure+for+L-infinity+algebras#dgLieQIsTodgCoAlgs) that instead it is a proper subclass; 1. the analogous [remark](https://ncatlab.org/nlab/show/model+structure+for+L-infinity+algebras#OnWEsOnLInfinity) for the weak equivalence on $dgAlg$. the analogous remark for the weak equivalence on dgAlgdgAlg. Format: MarkdownItexChevalley-Eilenberg assumes finite-dimensional graded components, isn't it ? What are the finiteness conditions on the graded components of the objects when definining the model categories in question ? 
I do not see any finiteness conditions when skimming the entry...(as well as for the entry [[model structure on dg-coalgebras]]). Chevalley-Eilenberg assumes finite-dimensional graded components, isn't it ? What are the finiteness conditions on the graded components of the objects when definining the model categories in question ? I do not see any finiteness conditions when skimming the entry…(as well as for the entry model structure on dg-coalgebras). Format: MarkdownItexPridham uses pro-dg-algebras to get around the finiteness condition, that's why. Pridham uses pro-dg-algebras to get around the finiteness condition, that's why. Format: MarkdownItexI think it's known since * [[Moss Sweedler]], _Hopf algebras_, 1969 that every coalgebra is the filtered colimit of its finite-dimensional sub-coalgebras, and hence that its linear dual is a pro-fin-dim-algebra. See for instance p. 7 of * [[Lowell Abrams]], [[Charles Weibel]], _Cotensor products of modules_ ([arXiv:math/9912211](http://arxiv.org/abs/math/9912211)) In * [[Ezra Getzler]], [[Paul Goerss]], _A model category structure for differential graded coalgebras_ ([ps](http://www.math.northwestern.edu/~pgoerss/papers/model.ps)) it was observed that this remains true in the presence of differentials. I learned this first from Todd Trimble some years back. I seemed to recall that then Todd had written some notes related to this, but now I don't find them. it was observed that this remains true in the presence of differentials. I learned this first from Todd Trimble some years back. I seemed to recall that then Todd had written some notes related to this, but now I don't find them. Format: MarkdownItex@Urs they might be in Todd's private web but not published to be viewable by the general public. they might be in Todd's private web but not published to be viewable by the general public. Format: MarkdownItexI've been meaning to complete a long post to the Café on this topic, since it was discussed recently between John and Mike (how are spaces of measures coalgebras). But in the meantime, here are some past scribblings, roughly in backwards chronological order: * (https://golem.ph.utexas.edu/category/2007/12/transgression_of_ntransport_an.html#c014560) and the post following, * (https://golem.ph.utexas.edu/category/2007/12/transgression_of_ntransport_an.html#c014524), * (https://golem.ph.utexas.edu/category/2007/11/on_bv_quantization_part_viii.html#c013564), * (https://golem.ph.utexas.edu/category/2007/11/on_bv_quantization_part_viii.html#c013516) Mainly these were in view of giving an explicit description of the closed structure of the category of cocommutative coalgebras (and the enrichment of algebras therein). Urs has mentioned in the nLab that this fact can be deduced cleanly and abstractly from the Gabriel-Ulmer description of CocommCoalg as the category of lex functors from the opposite of the category of finite-dimensional cocommutative algebras to $Set$ (i.e. from the category of finite-dimensional algebras to $Set$), which follows straightaway from what Urs just wrote on colimits of finite-dimensional pieces. This fact is also mentioned at [[cofree coalgebra]], and called the "Fundamental Theorem of Coalgebras", with a reference to a Handbook of Algebras article by Walter Michaelis often mentioned by Jim Stasheff. And also at [[CocommCoalg]] under Local Presentability. 
In my personal web I had begun looking more generally at the category of commutative comonoids for a locally presentable symmetric monoidal closed category, but that takes us a little further off-topic I guess. I can probably think of more instances; it's something I come back to with some frequency. Mainly these were in view of giving an explicit description of the closed structure of the category of cocommutative coalgebras (and the enrichment of algebras therein). Urs has mentioned in the nLab that this fact can be deduced cleanly and abstractly from the Gabriel-Ulmer description of CocommCoalg as the category of lex functors from the opposite of the category of finite-dimensional cocommutative algebras to SetSet (i.e. from the category of finite-dimensional algebras to SetSet), which follows straightaway from what Urs just wrote on colimits of finite-dimensional pieces. This fact is also mentioned at cofree coalgebra, and called the "Fundamental Theorem of Coalgebras", with a reference to a Handbook of Algebras article by Walter Michaelis often mentioned by Jim Stasheff. And also at CocommCoalg under Local Presentability. In my personal web I had begun looking more generally at the category of commutative comonoids for a locally presentable symmetric monoidal closed category, but that takes us a little further off-topic I guess. I can probably think of more instances; it's something I come back to with some frequency. Format: MarkdownItex13: Even with new moments coming from Sweedler the very duality you talk about is known far before Sweedler, in the works of Cartier and Dieudonné (instead of pro-finite dimensional vector spaces/modules they work with linearly compact topological vector spaces/modules/rings, some related notions include [[linearly compact module]], [[Cartier duality]], [[Gabriel-Oberst duality]]), but I did not know that it was effectively applied to the theory of $L_\infty$-algebras and model categories in question. * [[Jean Dieudonné]], _Introduction to the theory of formal groups_, Marcel Dekker, New York 1973. P.S. The Abrams-Weibel reference is new to me; I recorded it into [[dual gebra]]. 13: Even with new moments coming from Sweedler the very duality you talk about is known far before Sweedler, in the works of Cartier and Dieudonné (instead of pro-finite dimensional vector spaces/modules they work with linearly compact topological vector spaces/modules/rings, some related notions include linearly compact module, Cartier duality, Gabriel-Oberst duality), but I did not know that it was effectively applied to the theory of L ∞L_\infty-algebras and model categories in question. Jean Dieudonné, Introduction to the theory of formal groups, Marcel Dekker, New York 1973. P.S. The Abrams-Weibel reference is new to me; I recorded it into dual gebra. Format: MarkdownItex[[Cartier duality]] (in its original sense) is general if [[linearly compact module]]s are used, not only for finite group schemes as presently stated in the entry. This is equivalent to working with pro-finite dimensional vector spaces. Unfortunately, I will have no time today to write the amendment (which is nicely explained in the Dieudonné's book cited in 16. Cartier duality (in its original sense) is general if linearly compact modules are used, not only for finite group schemes as presently stated in the entry. This is equivalent to working with pro-finite dimensional vector spaces. 
Unfortunately, I will have no time today to write the amendment (which is nicely explained in the Dieudonné's book cited in 16.
CommonCrawl
Why does energy have to be emitted in quanta? I was reading some popular science book and there was this sentence. It says that energy has to be emitted in discrete portions called quanta. Otherwise the whole energy in the universe would be converted into high frequency waves. I'm not a physicist, so this conclusion seems to me like a huge leap. First, we assume that energy is emitted in a continuous way (not in quanta). And how do we get to the statement "the whole energy in the universe would be converted into high frequency waves"? The book is almost surely referring to the ultraviolet catastrophe. The classical Rayleigh-Jeans law predicts that the spectral energy density of black-body radiation is $$u(\nu, T) = \frac{8 \pi \nu^2}{c^3} k_B T,$$ where $\nu$ is frequency and $T$ is temperature. This is clearly a problem, since $u$ diverges as $\nu \to \infty$. The problem was solved when Max Planck made the hypothesis that light can be emitted or absorbed only in discrete "packets", called quanta, carrying energy $h\nu$; this hypothesis leads to Planck's law, $$u(\nu, T) = \frac{8 \pi \nu^2}{c^3} \, \frac{h\nu}{e^{h\nu / k_B T} - 1}.$$ You can verify that the low-frequency ($\nu \to 0$) approximation of Planck's law is the Rayleigh-Jeans law. To see why arbitrarily high frequencies are a problem, consider electromagnetic radiation enclosed in a cubic box of side $L$: only standing waves with frequencies $$\nu = \frac{c}{2L} \sqrt{n_x^2 + n_y^2 + n_z^2},$$ with $n_x, n_y, n_z$ integers, are allowed. This basically means that we can consider frequencies as high as we want to, which is a problem, since we have seen that when the frequency goes to infinity the energy density diverges. So, if we used the Rayleigh-Jeans law, we would end up concluding that a cubic box containing electromagnetic radiation has "infinite" energy. It is maybe this that your book is referring to when it says that "the whole energy in the universe would be converted into high frequency waves" (even if, if this is a literal quote, the wording is quite poor). I think the author is referring to the UV catastrophe, a historical problem in physics which first led physicists to discover that electromagnetic energy was quantized. The problem basically is this: for a system which is in thermal equilibrium, each object is radiating and absorbing energy. Since it's in equilibrium, the radiative energy emitted by any object in the system is equal to the energy absorbed by the rest of the objects, and the temperatures of all the objects are equal. When trying to calculate the distribution of this radiative energy over the EM spectrum, physicists found that theoretically the proportion of energy contained by radiation of frequency $\nu$ should be proportional to $\nu^2$! (see the Rayleigh-Jeans law). This meant that as you went to higher frequencies the energy contained by them would go on increasing without limit, so not only would practically all the energy be contained in the higher frequencies, but any system in equilibrium would have infinite energy. This is obviously not what we observe in real life, so something was wrong. It is only when they assumed that energy was quantized that they got a distribution law which not only made sense, but also fit the experimental data beautifully (see Planck's law). I believe that this explains the context of the statement the author was making, but answering why energy has to be quantized in reality is a deep, rather philosophical question to which no one really knows the answer. There is also proof from the photoelectric effect that the energy of an EM wave is transferred in discrete quanta. The classical wave theory of light was unable to explain why electrons were only emitted from a metal plate when the frequency of the incident light was above a certain frequency, and why they were emitted instantaneously above this frequency.
This could be explained by the photon model, which states that each photon has a discrete amount of energy $E=hf$ and interacts with only one electron; hence the instantaneous emission of electrons when the frequency of the incident light is greater than the threshold frequency. Without quanta, an electron attracted to the nucleus of an atom would be accelerated as it orbits around it. By accelerated, I don't mean the layman's meaning of speeding up, but that its velocity changes direction. Now classical electromagnetism tells us that an accelerated charge emits electromagnetic waves. This is how an antenna produces radio waves, for example: by accelerating the electrons inside the antenna (in that case speeding them up and slowing them down in turn). But then the energy radiated as electromagnetic waves implies that the electron loses energy to conserve the total energy, so the classical picture (that is, without quanta) predicts that an atom cannot be stable. So in that picture, all matter would nearly instantly collapse, leaving only a bath of electromagnetic waves. That is one possible answer to your question. See also my fellows' guess that it could refer to the black-body problem! I highly recommend the first chapter of the book Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles by Eisberg & Resnick. It's an easy read and its explanation of energy quanta is unparalleled. But I will still try to explain as briefly as possible. Classically, a large system of non-interacting entities is assumed to follow the Boltzmann distribution, which gives the probability of an entity possessing an energy in any given range. Combining this with the freedom of all possible continuous energies gives an average total energy of $kT$ for each entity. In the case of black-body radiation the entities are the standing waves with fixed wavelength satisfying the condition that they must have nodes at the walls of the black body. Since we allotted the same average total energy to each mode of standing waves, and every mode contributes to the total power spectrum, we get divergences at large frequencies, as the number of modes can just keep on increasing. This is called the ultraviolet catastrophe of the Rayleigh-Jeans formula. Experimentally, the Rayleigh-Jeans formula agreed well with observations at low frequencies but not at higher frequencies: whereas the power spectrum should go to 0 at higher frequencies, the Rayleigh-Jeans formula gave infinities. So to get rid of this problem, the average total energy of the modes should go to 0 at large frequencies and to $kT$ at small frequencies. There are two ways of fiddling with the average total energy for each mode: 1) Change the Boltzmann distribution itself. 2) Change the classical assumption that each mode gets the same average total energy $kT$. Now Planck was not aesthetically inclined to do the former, as that distribution law explained many other phenomena splendidly. So he did the latter. He tried to guess the function by noting that, instead of assuming a continuous range of energies, if he assumed that energy could only take values that are multiples of a minimum possible energy, say $E_0$, then he could get the desired behaviour, with $E_0 = h\nu$, where $h$ is the proportionality constant (called Planck's constant) and $\nu$ is the frequency.
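As a side note (not part of the original answers), a small Python sketch comparing the Rayleigh-Jeans and Planck spectral energy densities makes the ultraviolet catastrophe easy to see numerically; the temperature and frequencies below are arbitrary choices:

```python
# Compare the Rayleigh-Jeans and Planck spectral energy densities at T = 5000 K.
# The classical (Rayleigh-Jeans) curve keeps growing with frequency, while the
# Planck curve peaks and falls off.
import math

h  = 6.626e-34      # Planck constant, J s
kB = 1.381e-23      # Boltzmann constant, J/K
c  = 2.998e8        # speed of light, m/s

def rayleigh_jeans(nu, T):
    return 8 * math.pi * nu**2 / c**3 * kB * T

def planck(nu, T):
    return 8 * math.pi * nu**2 / c**3 * h * nu / math.expm1(h * nu / (kB * T))

T = 5000.0
for nu in (1e13, 1e14, 5e14, 1e15, 2e15):
    print(f"nu = {nu:8.1e} Hz   RJ = {rayleigh_jeans(nu, T):.3e}   Planck = {planck(nu, T):.3e}")
```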
CommonCrawl
Abstract: We analyse the evolution of a mildly inclined circumbinary disc that orbits an eccentric orbit binary by means of smoothed particle hydrodynamics (SPH) simulations and linear theory. We show that the alignment process of an initially misaligned circumbinary disc around an eccentric orbit binary is significantly different than around a circular orbit binary and involves tilt oscillations. The more eccentric the binary, the larger the tilt oscillations and the longer it takes to damp these oscillations. A circumbinary disc that is only mildly inclined may increase its inclination by a factor of a few before it moves towards alignment. The results of the SPH simulations agree well with those of linear theory. We investigate the properties of the circumbinary disc/ring around KH 15D. We determine disc properties based on the observational constraints imposed by the changing binary brightness. We find that the inclination is currently at a local minimum and will increase substantially before settling to coplanarity. In addition, the nodal precession is currently near its most rapid rate. The recent observations that show a reappearance of Star B impose constraints on the thickness of the layer of obscuring material. Our results suggest that disc solids have undergone substantial inward drift and settling towards the disc midplane. For disc masses $\sim 0.001 M_\odot$, our model indicates that the level of disc turbulence is low, $\alpha \ll 0.001$. Another possibility is that the disc/ring contains little gas.
CommonCrawl
Abstract: $\alpha$Check is a light-weight property-based testing tool built on top of $\alpha$Prolog, a logic programming language based on nominal logic. $\alpha$Prolog is particularly suited to the validation of the meta-theory of formal systems, for example correctness of compiler translations involving name-binding, alpha-equivalence and capture-avoiding substitution. In this paper we describe an alternative to the negation elimination algorithm underlying $\alpha$Check that substantially improves its effectiveness. To substantiate this claim we compare the checker performances w.r.t. two of its main competitors in the logical framework niche, namely the QuickCheck/Nitpick combination offered by Isabelle/HOL and the random testing facility in PLT-Redex.
CommonCrawl
Reflectance models by pumping up the albedo function. MGV vol. 8, no. 1, 1999, pp. 3-17. The paper introduces a method, called the albedo pumping-up, to derive new, physically plausible BRDFs from an existing one or from any symmetric function. This operation can be applied recursively an arbitrary number of times. An important application of this operation is the transformation of the Phong and Blinn models in order to make them produce metallic effects. The paper also examines the albedo function of reflectance models and comes to the conclusion that widely used models violate energy balance at grazing angles. Key words: reflectance function, BRDF representation, albedo function, energy balance, metal models, perceptual based fitting.
Hierarchical correspondence of gray connected components in stereo images using epipolar geometry. MGV vol. 8, no. 1, 1999, pp. 19-54. In this paper, we propose a new feature-based method for stereo matching. The matching primitives that we use are the boundaries of certain parameter-dependent connected components of images. Under certain assumptions, there is a one-to-one correspondence between subsets of the points on the boundaries of connected components of stereo pairs. This correspondence can be identified using the epipolar geometry. Using matched boundaries, we can identify the corresponding connected components determined by the boundaries. By changing the values of the parameters, we obtain a hierarchy of connected components in a gray image, which in turn provides us with a hierarchical stereo correspondence method. Key words: stereo correspondence, hierarchy, connected component, gray image, epipolar geometry.
A computer graphics system for the analysis of joint kinematics. MGV vol. 8, no. 1, 1999, pp. 55-62. The paper describes a system to diagnose disorders related to joint kinematics. By the use of a multidisciplinary approach, including medical imaging, three-dimensional reconstruction, kinematics and computer graphics, the system provides simultaneous visualization of both the three-dimensional morphology and the three-dimensional motion of a joint for which the kinematics parameters have been experimentally interpolated from medical imaging. Applications planned for the future include both an interactive diagnosis system for joint disorders and a system to illustrate normal and pathological joint kinematics to medical students. Key words: medical imaging, 3D reconstruction, diagnose disorders, joint kinematics.
Warping-based interactive visualization on PC. MGV vol. 8, no. 1, 1999, pp. 63-76. Image-based rendering produces realistic-looking 3D graphics at relatively low cost. In this paper, an original post-warping rendering system using more than two sample views to derive a new view is presented. Owing to the warp-based compression and incremental computation, the computational expense is no greater than that of conventional two-image synthesis approaches. The procedure consists of three steps. First, a set of sample images is selectively acquired with conventional geometry rendering or volume rendering, or from photographs of the real scene. Next, each neighboring image pair is compressed by a warping transformation based on the redundant pixels between them. Finally, the compressed sample images are directly re-projected to produce new images. To improve the speed further, an incremental warping flow is developed, which is computationally less expensive. With the method described above, animation faster than fifty frames ($300 \times 300$) per second is achieved on a PC. Key words: image-based rendering (IBR), warping transformation, field of view (FOV), compressing data.
Fractal images with inverse replicas. MGV vol. 8, no. 1, 1999, pp. 77-82. Fractals are famous for their beauty, and fractal techniques are employed to reduce the storage space required for images. The fractals contain their scaled down, rotated and skewed replicas embedded in them. The concept of the multiple reduction copy machine (MRCM) has long been used for creating fractals. A modified MRCM has been designed to generate fractal images having inverse replicas embedded in them along with the scaled down, rotated, translated and skewed replicas. We restrict our experiments to binary images in order to compare the results with the existing regular fractals. This paper only demonstrates the generation of fractal images with inverse replicas. Key words: multiple reduction copy machine, fractals, inverse replica.
Image coding based on flexible contour model. MGV vol. 8, no. 1, 1999, pp. 83-94. This paper presents a new model-based image coding scheme. First, a new image model called the Flexible Contour Model, which can extract features of nonrigid objects in images, is proposed; then we derive fast algorithms for calculating the parameters of the model and for matching the model to images. Furthermore, the combination of the model with multiscale analysis and the triangulation of the model has been studied. As a result, reconstruction of original images with a high compression rate and unnoticeable distortion was obtained. Key words: image compression, facial image compression, flexible contour model, model based image coding.
Fast, robust and adaptive lossless image compression. MGV vol. 8, no. 1, 1999, pp. 95-116. For applications like image transmission or storage we need fast and adaptive lossless compression algorithms. A speed improvement must not be achieved at the expense of significant compression ratio deterioration or excessive memory requirements. The robustness, which may be defined as the performance on the worst case of data, is very important in practical applications. The presented algorithm uses the traditional decorrelation-statistical compression scheme of adaptive image compression. We introduce many modifications to improve the speed and robustness of the algorithm. Firstly, we vastly increase the processing speed by altering the traditional statistical compression scheme. Instead of coding each symbol and updating the data model each time a symbol is coded, we update the model only after coding some symbols. We construct a robust family of codes based on the Golomb codes and adapted to real image data, that is, to the finite alphabet of a not ideally exponential symbol distribution. In order to quickly adapt to the specific image data characteristic, the data model uses a variable number of context buckets and is updated with a variable frequency, starting with a single collective context bucket and a full model update. The introduced modifications allow us to increase the processing speed by a factor of two or more at no or negligible compression ratio deterioration. Our algorithm limits worst-case local and global data expansion and has strictly bounded memory requirements. We present the experimental results of the introduced modifications and a comparison to some well-known algorithms. Key words: lossless image compression, image coding, adaptive algorithms, O(n), statistical compression.
Automatic generation of nearly optimal decision trees for handwritten character recognition. MGV vol. 8, no. 1, 1999, pp. 117-126. This paper presents results of applying decision trees to printed and handwritten character recognition. An automatic feature generation method was employed during the construction of the trees, which improved the recognition rate for the testing set. This learning technique significantly reduces the drawback of tree classifiers that is their rapid error accumulation with depth, while it does not influence the size of the trees. It was shown that the proposed approach gives better results than increasing the size of the training sets used for construction of the trees. A recognition rate above 97% was obtained by means of a parallel classifier built of multiple decision trees, even though no advanced preprocessing of input characters (like skeletonization or slant reduction) was performed. Key words: optical character recognition, automatic feature generation, feature extraction, decision tree, parallel classifier.
Recognition of cartographic symbols based on a structural model of a shape. MGV vol. 8, no. 1, 1999, pp. 129-142. This paper presents a method for recognition of cartographic symbols that is based on a structural model of a general 2D curve. The presented method utilizes an algebraic description of a curve structure described in . Feature extraction is based on the vectorized skeleton generated by the non-pixelwise thinning algorithm, . From such a representation, a structural description of a cartographic symbol is obtained. Finding a match between a model and a given, unknown 2D shape is performed through the devised hybrid procedure, consisting of the structural matching algorithm followed by a distance calculation in a parameter space. Experimental results show that the method gives satisfying recognition rates. Key words: cartographic symbol recognition, feature extraction, structural description, shape analysis, matching.
Optical recognition system for radioelectronic products. MGV vol. 8, no. 1, 1999, pp. 143-152. Nowadays optical recognition systems (ORS) are widely used in the recognition and inspection of various radioelectronic products. In this paper the principles of ORS creation are presented. The principles of fuzzy adaptation and fuzzy recognition are also reported. Our fuzzy algorithms yield new insight into the design of optical systems. These algorithms can process low-contrast images in real-time mode. They can recognise multicontour objects that have been rotated, moved from the frame centre or changed in scale. Key words: optical inspection systems, recognition, primary image processing, product quality control.
Special Issue on Graph Transformations in Pattern Generation and CAD. Special Issue Editor: Ewa Grabska.
Graph representation for the description and recognition of patterns: some issues. MGV vol. 8, no. 2, 1999, pp. 155-168. We propose a new approach for the representation and recognition of patterns. The primitives extraction process is based on the properties of an original contour profile. Corners, curves and line segments can easily be detected. The main characteristic of the method is the possible overlap among primitives, which allows multiple descriptions of ambiguous parts. The relation "is followed by" is the most important relation of the graph. As the graph representation is very simple, model graphs can be intuitively defined by hand. The recognition stage consists in finding the greatest common structure between a model graph and an input graph. Some results illustrate the method. Key words: pattern representation and recognition, primitives extraction, model graph, input graph.
Directed acyclic graph compression of labelled trees. MGV vol. 8, no. 2, 1999, pp. 169-174. A new algorithm for compressing labelled trees is proposed in this paper. This algorithm makes it possible to obtain a Directed Acyclic Graph (DAG) from a labelled tree in linear time. An experimental study is also given. Key words: algorithms, labelled tree, compression.
A programming environment for graphics based on graph grammars and Java. MGV vol. 8, no. 2, 1999, pp. 175-194. The paper reports on the system OOPAGGDE, which can be used in a wide area of visual programming. This field encompasses diagram techniques in the field of software engineering and the generation of "nice" patterns as well. The theoretical basis of OOPAGGDE is graph grammars enriched with features like object-orientation, programmability, and attributes. How OOPAGGDE works is shown by a small example, the creation of "Lindenmayer trees". Key words: programmed attributed graph grammars, object orientation, graphical modelling.
A graph-edit algorithm for hand-drawn graphical document recognition and their automatic introduction into CAD systems. MGV vol. 8, no. 2, 1999, pp. 195-211. In this work, a graph-based algorithm for symbol recognition in hand-drawn architectural plans is described. The algorithm belongs to a prototype of a man-machine interface consisting in the introduction of hand-drawn designs into a CAD system. Documents and symbol prototypes are represented in terms of a Region Adjacency Graph (RAG) structure. Hence, the localization of symbol instances in documents is performed by an error-tolerant subgraph isomorphism algorithm that looks for the minimum cost edit sequence that transforms a model graph into an input one. In this paper we describe this algorithm and the set of graph edit operations designed to transform RAGs. The main idea of the algorithm is to formulate the distance between two RAGs in terms of the string edit distance between the boundary strings of the corresponding graph regions. The main advantage of the algorithm is its ability to cope with distorted structures and its invariance to rotation, translation and scaling. Key words: graphics recognition, graph matching, edit distance, CAD systems, structural pattern recognition.
Application of attributed graph grammars to the synthesis of visual models of plants. MGV vol. 8, no. 2, 1999, pp. 213-229. Over the last years, a rapid development of modelling and visualization methods for biological structures has taken place. In this paper I propose a method based on graph grammars. A method of modelling as well as of visualization is described. The implementation of this method is also presented. The implementation and a number of examples are shown for the selected case of two families of cactuses. Key words: graph transformation, design, modelling plants, rendering.
Inexact graph matching for fingerprint classification. MGV vol. 8, no. 2, 1999, pp. 231-248. In this work we introduce a new structural approach to automatic fingerprint classification. The fingerprint directional image is partitioned into homogeneous connected regions according to the fingerprint topology. A relational graph is constructed in order to compactly summarize the fingerprint macro-structure resulting from the partitioning process. An inexact graph matching technique is adopted to compare this graph with a set of prototype graphs which have been a priori derived from a well-known classification scheme. Key words: fingerprint classification, directional image, partitioning algorithms, relational graph, inexact graph matching.
Model-based recognition of polyhedral objects from single intensity image using aspect graph. MGV vol. 8, no. 2, 1999, pp. 249-264. A method for the recognition of polyhedral objects composed of blocks and pyramids from a single 2D image is presented. The knowledge of an object's appearance is provided by an explicit model of a shape composed of a graph representation of aspects, faces and boundary groups. The recognition of an unknown object is performed through graph matching. To avoid a combinatorial explosion in the search process during recognition, statistical properties of the chosen primitives have been used. Key words: 3-D shape recognition, aspect modelling, recognition by parts.
Graphical representation of asymmetry in three-way dissimilarity data. MGV vol. 8, no. 2, 1999, pp. 265-279. Vector models for the representation of asymmetry in three-way (dis)similarity data are proposed. We evaluate several different data matrices corresponding to observations, individuals and so on. We then propose models for the representation of asymmetry on the basis of the INDSCAL (Carroll and Chang, 1970) and GEMSCAL (Young, 1984) models. Key words: asymmetry, data visualization, (dis)similarity, GEMSCAL, INDSCAL, MDS.
Logical functions of arbitrary vicinities of geometric objects. MGV vol. 8, no. 3, 1999, pp. 285-294. For Constructive Solid Geometry (CSG) models of complex geometric objects, a definition of the logical functions of arbitrary vicinities is proposed. These functions are equal to the initial functions in some vicinities from the viewpoint of membership tests. The new functions are much simpler than the initial ones and make it possible to significantly accelerate basic membership tests on CSG models. Key words: constructive solid geometry, boundary representation, three-valued calculus.
Finding outlines of objects in raster images. MGV vol. 8, no. 3, 1999, pp. 295-312. The object of the presented method is to extract structural information about a raster image. The structural information consists of strings of pixels describing outlines of objects. Various definitions of an object are used. Key words: outline extraction from color images.
Approximating outlines of objects in raster images. MGV vol. 8, no. 3, 1999, pp. 313-339. The object of the presented method is to approximate outlines of objects found in a raster image. The input outlines are represented by strings of pixels, and the approximated output outlines are represented by curves. This method is meant to reduce the distortion of outlines of objects caused by the raster structure of an image. Key words: raster curve approximation, outlines in color images.
A vectorized thinning algorithm for handwritten symbols recognition. MGV vol. 8, no. 3, 1999, pp. 341-352. In this paper, a non-pixelwise thinning algorithm for binary line images, called vectorized thinning, is proposed. The presented algorithm produces a skeleton of a 2D object in three steps: (1) link finding and simple region extraction, (2) complex region extraction and multiple point finding, (3) transformation into a vectorized skeleton. As opposed to other thinning algorithms, the skeleton is obtained in vector form, particularly suitable for further structural recognition of an object. The proposed vectorized thinning algorithm has been used for feature extraction in cartographic symbol recognition from scanned geodesic maps, with much better results than other, pixelwise thinning methods. The main advantages of the proposed thinning algorithm lie in better extraction of multiple points representing corners, branch or crossing regions of 2D objects, and less sensitivity to boundary noise, which is one of the main problems in pixelwise thinning algorithms. Key words: thinning algorithm, skeleton, medial axis, feature extraction, shape analysis.
MGV vol. 8, no. 3, 1999, pp. 353-365. The research upon which this paper is based focused upon the accurate acquisition of images that could be used for precision measurement processes. These precision measurements would either be used as a part of an inspection system or as a feedback mechanism to improve process quality. In this paper, a novel adaptive filtering method is proposed for the purpose of reducing noise and removing spurious pixel values in images acquired within a manufacturing environment, without blurring edges or displacing identified boundaries. The paper documents the experimental outcomes derived from testing this novel filtering technique. Key words: vision systems, image processing, filtering.
MGV vol. 8, no. 3, 1999, pp. 367-381. This paper considers defining the functional requirements of the designed object and transforming them into the object structure. The proposed Function Structure Editor (FSE) makes it possible to bridge the gap between the design specification and the object structure. When utilising FSE, the designer uses graph operations which are automatically transformed into graph rules, allowing one to generate potential solutions of a given design problem. Relations between graph operations and graph rules are formulated in the form of certain statements. The proposed methodology is illustrated by examples of designing a teapot and the floor layout of a house. Key words: design problem, object structure, design specification, graph rules, graph operations, function structure editor.
Evolutionary pattern grammars in artificial intelligence and design. MGV vol. 8, no. 3, 1999, pp. 383-394. Evolutionary algorithms are a creative way of finding new designs. Grammars are a precise, concise way of describing the structure of possible designs. Patterns capture the required symmetries in designs. In this paper we show that these three ideas fit neatly together and give a powerful tool for design and other AI problems.
Using graph transformations to support multilevel reasoning in engineering design. MGV vol. 8, no. 3, 1999, pp. 395-425. It is generally admitted that expert designers work with design entities - specifications and design solutions - described at different levels of abstraction, detail or generality, and can switch from one representation level to another in a very effective and flexible way. Moreover, they often use graphic representations such as sketches and diagrams to externalize their ideas about the designed artefact during the course of a design process. The paper proposes a general framework for multilevel representation of design products based on the use of plex structures. In this framework we illustrate a set of graph transmutations that can be used to generate or modify design solutions during the design development stage. Our aim is to address these issues with the expectation that the results would provide insights into what sort of computational tool should support the cognitive need for multilevel reasoning, and how. It is argued that by providing ICAD systems with a stronger cognitive foundation we can guarantee them greater success among end users. Key words: conceptual schemes, model transmutations, design.
Morphological method for extraction of microcalcifications in mammograms for breast cancer diagnosis. MGV vol. 8, no. 3, 1999, pp. 427-448. The paper presents a new morphological method for the extraction of microcalcifications in mammograms for breast cancer diagnosis. The proposed method is based on the use of a morphological detector together with a morphological pyramid for the detection of local irregularities of brightness in a wide range of sizes and shapes. The binary maps obtained from the pyramid indicate locations of the candidates for microcalcifications in the mammogram. Independently, a gray level reconstruction of the original mammogram is carried out in order to obtain the exact shape of h-domes, which depict regional maxima (hills) of brightness in the image. By thresholding the image of h-domes, one obtains a binary map of h-domes. Subsequently, a binary reconstruction is carried out, in which the binary map of h-domes is used as a mask, and the map obtained from the pyramid after some modification is used as the marker. As a result of the reconstruction, the required map of microcalcifications is extracted. A number of tests of the proposed method on various mammograms are presented. Key words: breast cancer diagnosis, mammograms, microcalcification detection, microcalcification extraction, morphological pyramid, morphological reconstruction.
Analysis of medical images by means of brain atlases. MGV vol. 8, no. 3, 1999, pp. 449-468. This paper: (i) introduces the taxonomy of the use of electronic brain atlases, (ii) identifies representations, features and tools available at various levels of this taxonomy structure, and (iii) demonstrates how brain atlases can be applied to the analysis of medical images, focusing on stereotactic functional neurosurgery and human brain mapping. Key words: brain atlas, neuroimaging, stereotactic functional neurosurgery, human brain mapping.
VirEn: a virtual endoscopy system. MGV vol. 8, no. 3, 1999, pp. 469-487. Virtual endoscopy systems are promising tools for the simplification of daily clinical procedures. In this paper, a conceptual framework for a virtual endoscopy system (VirEn) is proposed, which is intended to be an interactive system. So far, our efforts have concentrated on some elements of the system. The generation of an optimal path for automated navigation is one of them. Extensions to existing thinning algorithms used to generate the optimal path are presented and discussed. First results produced with VirEn are shown. Key words: volume visualization, virtual endoscopy, navigation, thinning.
Special Issue on Image Processing Methods in Applied Mechanics. Special Issue Editor: Tomasz A. Kowalewski. This special issue of Machine GRAPHICS & VISION presents a selection of 16 papers, preliminary versions of which were presented during the Euromech 406 colloquium on Image Processing Methods in Applied Mechanics, held in Warsaw on May 6-8, 1999. The intention of the colloquium was to create a forum in which both fluid and solid mechanics groups, working separately on the development and application of the same image processing and acquisition methods, would find a common ground.
Image processing problems in fluid dynamics: selected digital procedures. MGV vol. 8, no. 4, 1999, pp. 493-507. Image processing in fluid dynamics, often in conjunction with quantitative flow visualisation, is an important tool used, in both computational and experimental studies, for analysis and data presentation. The development of inexpensive, powerful image capture and processing hardware is being complemented by imaginative software development, utilising ideas often evolved from earlier analogue, optical, and electronic image processing methods while evolving new concepts based on advances in computation and digital image processing. Key words: image filtering, colour, the Fourier transform, wavelet transform, flow visualisation.
Quantitative infrared thermography and convective heat transfer measurements. MGV vol. 8, no. 4, 1999, pp. 509-528. When using infrared thermography to perform convective heat transfer measurements, it is necessary to restore the thermal images because of their degradation, which is due to the heat flux sensor, the environment and the temperature sensor. This problem is addressed herein. In addition, infrared thermography is employed to study three different fluid flow configurations, in particular: the heat transfer to a jet centrally impinging on a rotating disk; the complex heat transfer pattern associated with a jet in cross-flow; and the heat transfer distribution along a 180 degree turn channel. Attention is focused on the capability of infrared thermography to deal with complex flow dynamics, the interaction between the jet and the boundary layer linked to the disk rotation, heat transfer developing in the wake region of a jet in cross-flow, and high heat transfer regions and recirculation bubbles in a 180 degree turn channel. Key words: image restoration, infrared thermography, convective heat transfer.
Measurement of turbulent mixing using PIV and LIF. MGV vol. 8, no. 4, 1999, pp. 529-543. Experimental investigation of turbulent mixing requires the simultaneous measurement of the instantaneous velocity and concentration fields. The velocity is measured by means of particle image velocimetry (PIV), and the concentration by means of laser induced fluorescence (LIF). A combined measurement technique was developed in which we use PIV and LIF simultaneously, without influencing each other. To test the reliability and precision of the technique we took measurements on the mixing of a point source placed at the centerline of a fully-developed turbulent pipe flow. The experimental results are compared against the results of a direct numerical simulation, and against the analytical result for the mixing of a point source in homogeneous turbulence. The agreement with the experimental results is satisfactory, although there remains a small deficit in the mass-balance equation. It is conjectured that this is due to the finite resolution of the experimental data and the high intermittency of the concentration. Key words: PIV, LIF, turbulent mixing, velocity and concentration measurement.
Principal components analysis for PIV applications. MGV vol. 8, no. 4, 1999, pp. 545-552. An application of the PIV technique based on the cross-correlation method is described. In order to obtain single-exposed images from a double-exposed colour image, the separation into RGB components is performed and the effects on the PIV analysis are evaluated. A method to obtain uncorrelated colour bands, based on Principal Components Analysis, is proposed for PIV applications which utilise colour-coded information. Key words: RGB images, color PIV, cross-correlation.
Digital particle image velocimetry: a challenge for feature based tracking. MGV vol. 8, no. 4, 1999, pp. 553-569. Motion tracking is an important step of the analysis of flow image sequences. However, Digital Particle Image Velocimetry (DPIV) methods rarely use tracking techniques developed in computer vision: FFT and correlation are usually applied. Two major types of motion estimation algorithms exist in computer vision, namely, the optical flow and the feature based ones. Promising results have recently been obtained by optical flow techniques. In this paper, we examine the applicability of feature tracking algorithms to digital PIV. Two feature based and one optical flow based tracking algorithms are compared. Flow measurement and visualisation results for standard DPIV sequences are presented. Key words: digital PIV, computer vision, feature based motion tracking, optical flow.
Comparative study of correlation-based PIV evaluation methods. MGV vol. 8, no. 4, 1999, pp. 571-578. Several correlation-based PIV evaluation methods are compared by applying them to the evaluation of simulated PIV recordings, in which the particle images are distributed stochastically and have a Gaussian gray value distribution. The influence of particle image displacement and the influence of interrogation window size on the evaluation accuracy in uniform and in non-uniform flow were investigated. In all these cases the best results in terms of statistical error are obtained with the MQD method. Key words: PIV evaluation algorithm, cross-correlation.
Large-scale structures forming in a cross flow: particle image velocimetry conditional analysis. MGV vol. 8, no. 4, 1999, pp. 579-595. A conditional Particle Image Velocimetry (PIV) acquisition technique and averaging procedure are developed to study coherent structures formed by the interaction between a jet and a cross-stream. The experiment is conducted in a water tunnel; the transversal water jet is perturbed by a mechanical device. Measurements are performed at Reynolds number 100 and cross-flow velocity ratios ranging from 2.0 to 4.5. Sequences of images are acquired synchronously with the perturbation so that a statistical process may be applied to obtain the average velocity and vorticity in a selected cross-section of the flow. The averaged fields and the instantaneous images, together with flow visualizations by the Laser Induced Fluorescence (LIF) technique, are used to interpret the behaviour of the large-scale vortices generated in the cross-flow experiment. Key words: PIV, cross flow, coherent flow structures.
Visualization of heat transfer enhancement regions modified by the interaction of inclined impinging jets into crossflow. MGV vol. 8, no. 4, 1999, pp. 597-609. Visualization of heat transfer enhancement regions was made for a pair of jets obliquely discharged into a crossflow.
The examination of interaction between the two oblique jets and the comparison of different flow patterns caused by the vertically and obliquely issued jets were provided. The temperatures of the target surface were visualized with thermochromic liquid crystal sheets. The colours of the liquid crystal images taken by a CCD camera were transformed accurately and effectively into the temperatures by means of the neural network technique to obtain Nusselt number distributions on the target surface. Fluorescent dyes were added to the jet fluid to visualize the cross-sectional flow patterns with the light sheet of a laser. The most important parameter used in the present study was the velocity ratio VR of the jet to the crossflow besides the crossflow Reynolds number. Key words: heat transfer, impinging jet, visualization, neural network, thermochromic liquid crystal. Digital holography and holographic interferometry. MGV vol. 8, no. 4, 1999, pp. 611-624. Holography is a method for three-dimensional imaging frequently used in metrology and nondestructive testing. Up to now both the generation of the holograms as well as the reconstruction of the wavefields was performed optically. In digital holography optically generated Fresnel or Fraunhofer holograms are recorded by a CCD array. The reconstruction of the wavefields is done digitally by image processing methods based on the mathematical concept of the diffraction integral. Two approaches to its numerical solution are the finite discrete Fresnel transform and a procedure employing the convolution theorem. Both approaches result in a complex field from which intensity and phase can be determined. In digital holographic interferometry the sign-correct interference phase distribution is computed with high accuracy by subtraction of two numerically reconstructed phase distributions. Key words: digital holography, holographic interferometry, Fresnel transform, diffraction integral, three-dimensional imaging. Interferometric techniques for measuring flow velocity fields. MGV vol. 8, no. 4, 1999, pp. 625-636. Holographic interferometry and digital speckle pattern interferometry as techniques for measuring out-of-plane velocity fields are presented. The feasibility of introducing phase shifting techniques in order to improve the accuracy of holographic interferometry is investigated. The techniques are demonstrated in a Rayleigh-Bènard convective flow. Key words: fluid velocimetry, holographic interferometry, speckle pattern interferometry. Method and device for in-vivo measurement of elasto-mechanical properties of soft biological tissues. MGV vol. 8, no. 4, 1999, pp. 637-654. We present a method to determine elasto-mechanical properties of soft biological tissues, and a device able to perform the required measurements in-vivo. The device permits the controlled application of vacuum to small spots of organic tissue and registers the small deformation caused, during the whole measurement process. Deformation is measured with a vision based technique and the grabbed images are processed in real-time to avoid storage problems. We model biological tissue with a hyperelastic quasilinear viscoelastic material law and determine the unknown material parameters via inverse finite element methods. Key words: contour extraction, in-vivo measurement, elasto-mechanical properties, soft tissue aspiration, inverse finite element, hyperelastic. Educational applications of photoelastodynamics for solid dynamics and dynamics of structures. MGV vol. 8, no. 4, 1999, pp. 
655-666. This paper presents developments performed on photoelastodynamic bench of ENSICA's Department of Mechanical Engineering. Classical wave, vibrating, shock and rotating parts theories, were compared with colour pictures of isochromatic lines obtained with rapid camera and urethane resin specimens. For nonlinear shock and large deflection, explicit code LSDYNA has been used. Then, the facility has been used to analyse dynamic work of gears for power transmission, in comparison with numerical computations. These developments have lead to demonstrations, now included in engineering general courseware, about stress analysis, theory of elasticity and dynamics of structures. Gear visualisations have been included in integrated France-Canada developments concerning dynamics of transmission, as a complement to theoretical models and experimental acoustic analysis of functioning gears. Key words: dynamic photoelasticity, dynamics of solids, dynamics of plates, dynamics of beams gears. Tomographic measurement techniques - visualization of multiphase flows. MGV vol. 8, no. 4, 1999, pp. 667-679. A tomographic measurement technique is applied for visualization of the local void fraction in the two phase flow of air and water in the mixing chamber of a two-phase-nozzle. With this measurement technique a high spatial and temporal resolution can be achieved. The measured physical property is the electric conductivity of the water. The conductivity is measured with pairs of wires strained in the investigated cross section. The measurement values are proportional to the relative liquid fraction. With an algebraic reconstruction technique (ART) the field of the liquid fraction in the investigated cross-section is calculated from the measurement values. The quality of the reconstruction is increased by a-priori-knowledge. Key words: tomographic measurement, multiphase flow, void fraction, wire-mesh sensor. West R.M., Bennett M.A., Jia X., Ostrowski K.L., Williams R.A. Flow-regime discrimination in bubble columns using electrical capacitance tomography . MGV vol. 8, no. 4, 1999, pp. 681-690. Electrical capacitance tomography has been used to image a bubble column. Sets of linear back projection tomograms are then analysed to yield gas hold-up values and to determine flow regime in a traditional manner. Further analysis is performed producing a statistic (heterogeneity index for tomograms) that is independent of the average hold-up. This is used to provide an alternative and superior means to determine flow regime. Key words: electrical capacitance tomography, heterogeneity index for tomograms. Optimising ray tracing for visualisation of volumetric medical image data. MGV vol. 8, no. 4, 1999, pp. 691-697. The presented optimizations provide an approach to fast rendering of medical volume data. They are based on the ray casting algorithm, which is substantially speeded up with regard to voxel addressing and interpolation. Key words: interpolation, volume rendering, volumetric medical data. Morphological detection and feature-based classification of cracked regions in ferrites. MGV vol. 8, no. 4, 1999, pp. 699-712. Automatic quality inspection of ferrite products is difficult as their surfaces are dark and in many cases covered with traces of grinding. A two-stage vision system for detection and measurement of crack regions was devised. 
In the first stage the regions with strong evidence for cracks are found using a morphological detector of irregular brightness changes with subsequent morphological reconstruction. In the second stage the feature-based k-Nearest Neighbors classifier analyzes the pixels indicated in the first stage. The classifier is optimized by using procedures of reclassification and replacement carried out on the reference set of pattern pixels to achieve a low error rate and a maximum speed of computation. Key words: morphological defect detection, surface defects, morphological reconstruction, defect classification, k-Nearest Neighbors classification, parallel classifier. Space orientation based on image from mobile camera. MGV vol. 8, no. 4, 1999, p. 713. Segmentation methods of digital images based on the Hough transform. MGV vol. 8, no. 4, 1999, p. 714. Graph operations and graph rewriting in graphic design. MGV vol. 8, no. 4, 1999, p. 715.
CommonCrawl
Using the latest poll data, it seems clear that Clinton's win is almost certain. On average she will get between 320 (worst case) and 333 (best case) electoral votes. So if something is going to change, then it is not in the poll data yet. These are the updated probabilities obtained by running the Python code that computes the Bayesian posterior distribution over the electoral votes using near-ignorance priors. The worst-case and best-case distributions for Clinton are shown in red and blue, respectively. The winning probability is above 0.99 (for both the worst and the best scenario). A. Benavoli, A. Facchini and M. Zaffalon. Quantum mechanics: The Bayesian theory generalized to the space of Hermitian matrices. Phys. Rev. A, 94:042106, 1-27, 2016. It is worth pointing out that entangled states violate these Fréchet bounds. Entangled states exhibit a form of stochastic dependence stronger than the strongest classical dependence, and in fact they violate Fréchet-like bounds. Another example of violation of probabilistic bounds is provided by the famous Bell inequality.
The posterior is plotted by plot_simplex(points, names=('C1', 'C2')), where points is a sample returned by signtest_MC.
CommonCrawl
Bertoin, Jean (2008). Two-parameter Poisson-Dirichlet measures and reversible exchangeable fragmentation-coalescence processes. Combinatorics, Probability & Computing, 17(3):329-337. We show that for $0<\alpha<1$ and $\theta>-\alpha$, the Poisson-Dirichlet distribution with parameter $(\alpha, \theta)$ is the unique reversible distribution of a rather natural fragmentation-coalescence process. This completes earlier results in the literature for certain split and merge transformations and the parameter $\alpha =0$.
CommonCrawl
Is anti-matter matter going backwards in time? Some sources describe antimatter as just like normal matter, but "going backwards in time". What does that really mean? Is that a good analogy in general, and can it be made mathematically precise? Physically, how could something move backwards in time? To the best of my knowledge, most physicists don't believe that antimatter is actually matter moving backwards in time. It's not even entirely clear what would it really mean to move backwards in time, from the popular viewpoint. If I'm remembering correctly, this idea all comes from a story that probably originated with Richard Feynman. At the time, one of the big puzzles of physics was why all instances of a particular elementary particle (all electrons, for example) are apparently identical. Feynman had a very hand-wavy idea that all electrons could in fact be the same electron, just bouncing back and forth between the beginning of time and the end. As far as I know, that idea never developed into anything mathematically grounded, but it did inspire Feynman and others to calculate what the properties of an electron moving backwards in time would be, in a certain precise sense that emerges from quantum field theory. What they came up with was a particle that matched the known properties of the positron. Just to give you a rough idea of what it means for a particle to "move backwards in time" in the technical sense: in quantum field theory, particles carry with them amounts of various conserved quantities as they move. These quantities may include energy, momentum, electric charge, "flavor," and others. As the particles move, these conserved quantities produce "currents," which have a direction based on the motion and sign of the conserved quantity. If you apply the time reversal operator (which is a purely mathematical concept, not something that actually reverses time), you reverse the direction of the current flow, which is equivalent to reversing the sign of the conserved quantity, thus (roughly speaking) turning the particle into its antiparticle. For example, consider electric current: it arises from the movement of electric charge, and the direction of the current is a product of the direction of motion of the charge and the sign of the charge. Positive charge moving left ($+q\times -v$) is equivalent to negative charge moving right ($-q\times +v$). If you have a current of electrons moving to the right, and you apply the time reversal operator, it converts the rightward velocity to leftward velocity ($-q\times -v$). But you would get the exact same result by instead converting the electrons into positrons and letting them continue to move to the right ($+q\times +v$); either way, you wind up with the net positive charge flow moving to the right. By the way, optional reading if you're interested: there is a very basic (though hard to prove) theorem in quantum field theory, the TCP theorem, that says that if you apply the three operations of time reversal, charge conjugation (switch particles and antiparticles), and parity inversion (mirroring space), the result should be exactly equivalent to what you started with. We know from experimental data that, under certain exotic circumstances, the combination of charge conjugation and parity inversion does not leave all physical processes unchanged, which means that the same must be true of time reversal: physics is not time-reversal invariant. Of course, since we can't actually reverse time, we can't test in exactly what manner this is true. 
But in Feynman's particle path-integral picture, when you parametrize particles by their worldline proper time, and you renounce a global causal picture in favor of particles splitting and joining, the particle trajectories are consistent with relativity, but only if the trajectories include back-in-time trajectories, where coordinate time ticks in the opposite sense to proper time. Looked at in the Hamiltonian formalism, the coordinate time is the only notion of time. So those paths where the proper time ticks in the reverse direction look like a different type of particle, and these are the antiparticles. Sometimes there is an idenification, so that a particle is its own antiparticle. The "C" operator changes all particles to antiparticles, the P operator reflects all spatial directions, and the T operator reflects all motions (and does so by doing complex conjugation). It is important to understand that T is an operator on physical states, it does not abstractly flip time, it concretely flips all momenta and angular momenta (a spinning disk is spinning the other way), so that things are going backwards. The parity operator flips all directions, but not angular momenta. The CPT theorem says that any process involving matter happens exactly the same when done in reverse motion, in a mirror, to antimatter. The CPT operator is never the identity, aside from the case of a real scalar field. CPT acting on an electron produces a positron state, for example. CPT acting on a photon produces a photon going in the same direction with opposite polarization (if P is chosen to reflect all spatial coordinate axes, this is a bad convention outside of 3+1 dimensions). This theorem is proved by noting that a CPT operator corresponds to a rotation by 180 degrees in the Euclidean theory, as described on Wikipedia. Any amplitude involving particles A(k_1,k_2,...,k_n) is analytic in the incoming and outgoing momenta, aside from pole and cut singularities caused by producing intermediate states. In tree-level perturbation theory, these amplitudes are analytic except when creating physical particles, where you find poles. So the scattering amplitudes make sense for any complex value of the momenta, since going around poles is not a problem. In terms of mandelstam variables for 2-2 scattering, s,t,u (s is the CM energy, t is the momentum transfer and u the other momentum transfer, to the other created particle), the amplitude is an analytic function of s and t. The regions where the particles are on the mass shell are given by mandelstam plot, and there are three different regions, corresponding to A+B goes to C+D , Cbar + B goes to Abar+ D, and A + Dbar goes to C+Bbar. These three regimes are described by the exact same function of s,t,u, in three disconnected regions. In starker terms, if you start with pure particle scattering, and analyticaly continue the amplitudes with particles with incoming momentum k's (with positive energy) to negative k's, you find the amplitude for the antiparticle process. The antiparticle amplitude is uniquely determined by the analytic contination of the particle amplitude for the energy-momentum reversed. This corresponds to taking the outgoing particle with positive energy and momentum, and flipping the energy and momentum to negative values, so that it goes out the other way with negative energy. If you identify the lines in Feynman diagrams with particle trajectories, this region of the amplitude gives the contribution of paths that go back in time. 
So crossing is the other precise statement of "Antimatter is matter going back in time". The notion of going back in time is acausal, meaning it is excluded automatically in a Hamiltonian formulation. For this reason, it took a long time for this approach to be appreciated and accepted. Stueckelberg proposed this interpretation of antiparticles in the late 1930s, but Feynman's presentation made it stick. In Feynman diagrams, the future is not determined from the past by stepping forward timestep by timestep, it is determined by tracing particle paths proper-time by proper-time. The diagram formalism therefore is philosophically very different from the Hamiltonian field theory formalism, so much so Feynman was somewhat disappointed that they were equivalent. They are not as easily equivalent when you go to string theory, because string theory is an S-matrix theory formulated entirely in Feynman language, not in Hamiltonian language. The Hamiltonian formulation of strings requires a special slicing of space time, and even then, it is less clear and elegant than the Feynman formulation, which is just as acausal and strange. The strings backtrack in time just like particles do, since they reproduce point particles at infinite tension. If you philosophically dislike acausal formalisms, you can say (in field theory) that the Hamiltonian formalism is fundamental, and that you believe in crossing and CPT, and then you don't have to talk about going back in time. Since crossing and CPT are the precise manifestations of the statement that antimatter is matter going back in time, you really aren't saying anything different, except philosophically. But the philosophy motivates crossing and CPT. This refers to Feynman's 1949 theory. In 1949 Richard Feynman devised another theory of antimatter. The spacetime diagram for pair production and annihilation appears to the right. An electron is travelling along from the lower right, interacts with some light energy and starts travelling backwards in time. An electron travelling backwards in time is what we call a positron. In the diagram, the electron travelling backwards in time interacts with some other light energy and starts travelling forwards in time again. Note that throughout, there is only one electron. A friend of mine finds the image of an electron travelling backwards in time, interpreted by us as a positron, to be scary. Note that Feynman's theory is yet another echo of the fact, noted above, that a negatively charged object moving from left to right in a magnetic field has the same curvature as a positive object moving from right to left. Feynman's theory is mathematically equivalent to Dirac's, although the interpretations are quite different. Which formalism a physicist uses when dealing with antimatter is usually a matter of which form has the simplest structure for the particular problem being solved. Note that in Feynman's theory, there is no pair production or annihilation. Instead the electron is just interacting with electromagnetic radiation, i.e. light. Thus the whole process is just another aspect of the fact that accelerating electric charges radiate electric and magnetic fields; here the radiation process is sufficiently violent to reverse the direction of the electron's travel in time. 
"The time itself loses sense as the indicator of the development of phenomena; there are particles which flow down as well as up the stream of time; the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past." (Progress in Theoretical Physics 5, (1950) 82). About Formally Equivalent Descriptions ...." Then you mix in another very interesting problem, namely the origin of the apparent matter antimatter asymmetry in the observable universe (observed absence of annihilation radiation except in special circumstances) and point out that it may be related to a very very hard problem indeed, namely the origin of time asymmetry. One problem at a time, please. Maybe separate questions, but the answers will likely be more or less over your head since, to the extent that they are even partially understood, they are still being figured out. There is one technical inaccuracy in saying that antimatter moves back in time (whatever it might mean). In quantum field theory we get positive energy solutions (usual particles) and negative energy solutions. Negative energy solutions behave in time as if they were propagating backward in time. But they are not the antiparticles, they are just the "negative-energy particles". Antiparticles are positive energy solutions, and they are obtained by acting with charge conjugation operator on the negative-energy solutions. So, antiparticles move forward in time, as usual particles. Anti matter is such a misleading term. It's not the opposite of "real" matter. It is made whenever particles are made. But it's just a function of the conservation of properties. Is like saying when one particle going left in pair production will go backwards in time but the one going right is going forward. Anti matter and matter anihalite for similar reasons. A negatively charged particle that interacts with a positive cant have a charge. So... Boom. They go away and usually a photon comes out (this is being very simplistic. But that is the root issue). If we called anti matter "opposite charge matter". No one would think it was so special. Yes. According to the CPT theorem, antimatter is matter going backwards in time, but when viewed through a mirror. Correct me if I get this wrong. No. If antimatter is going backwards in time, where did it go at the beginning of time (if indeed there is a beginning of time)? I have been investigating if tachyon faster than light speed is just an illusion of perspective. I used a hypothetical approach which avoids breaking both the speed of light boundary and causality. One of the solutions I got is that antimatter is tachyons. So it is great fun to find this question here, backed up by so many good answers. It seems like antimatter indeed is going backwards in time, and according to my research I might state it even more accurately: Anti-matter particles carry a reversed arrow of time. The reason why matter and antimatter can't coexist seems to be because they have oppositely directed arrows of time, and will upon interaction annihilate back into energy. We might say it so simple that that time itself nulls out for both particles and they both dissolve into pure energy. A tachyon is said to have greater than light speed velocity, and then it has according to special relativity faster than light backwards time travel. Every tachyon is then constantly traveling backwards in time. 
And as our arrow of time propagates forward in time, the tachyons arrow of time propagates backwards in time. Normally we should then not be able to observe tachyons, as they are in an opposite time perspective with a reversed arrow of time. This led me to wonder if there could be time symmetry in the universe, where tachyons are existing in a backwards time perspective, while we exist in a forwards time perspective. Two perspectives of space-time with oppositely directed arrows of time. With such a time symmetry a tachyon with infinite speed will not really have infinite speed, as this is just an illusion of perspective, in reversed time a tachyon will instead have the opposite of infinite speed, which is being at rest. And tachyon theory already state that a tachyon with infinite speed have energy as it was at rest. Tachyon theory also state that a tachyon gains energy as it decelerate towards the speed of light boundary, but seen from a reversed time perspective the tachyon actually gains energy as it accelerates towards the speed of light boundary, which is just like normal particles in our time perspective. So the calculated faster than light speeds of tachyons might be an illusion of perspective. The imaginary tachyon mass which is a result of faster than light speed, is then also an illusion of perspective. And tachyons neither breaks causality, as cause is happening before effect in their reversed time perspective. By adding symmetric time to super symmetry it seems like the physical problems of tachyons get solutions. If a tachyon, in some way, comes into our observable reality, it is then likely to have a velocity corresponding to the velocity it had in reversed time. It will also carry with it a reversed arrow of time, and can't then coexist with the particles of this opposite time perspective. If two opposite time particles meet, time will null out, and they will both transform into pure energy. This is when it struck me, what if the reversed time particles actually are antimatter particles? I did not know much about antimatter, so I googled antimatter and backward time, and found this question, where many answers suggest there is a relation. Great fun! And if there is a whole lot of tachyon antimatter existing in a reversed time perspective that could also resolve the antimatter asymmetry problem. How these opposite time perspectives might interact is also fascinating. The instant speed of the quantum link, measured to be close to infinite speed, might for instance also be an illusion of reversed time. But interaction between the two time perspectives, if possible, may also create problems with causality. So this hypothetical approach seems to make some sense, and does not seem to be in conflict with physics. We only have to add symmetric time to super symmetry, so everything can become more symmetrical. There might be some conflict with some theories, like the Big Bang theory, which again has problems to explain the antimatter asymmetry. With symmetric time there might even be a possibility that the Big Bang is sort of happening all the time, when energy transforms into matter and antimatter which end up in their opposite perspectives of time. That might again explain why the universe is expanding with an accelerating speed. We might also wonder if matter that goes into a wormhole, might shift into antimatter. 
As in a one-way wormhole we get faster-than-light time travel, which shifts the direction of the arrow of time for matter, and may cause matter to shift into pure energy and then into antimatter. What comes out of such a wormhole through a white hole could then mostly be antimatter and/or pure energy. Maybe we can even talk about a sort of spectrum of matter, going between matter, pure energy and antimatter. So there are a lot of possibilities here for being carried away with excitement: if we can add symmetric time to super symmetry, this might open up a whole new avenue of physics, where we might find answers to many problems in physics and get a more fundamental understanding of our reality. It might boil down to the definition of time. By that same definition, a particle evolving a certain way in spacetime would be "traveling forwards in time", and the same particle evolving that certain way but "backwards" in spacetime would be "traveling backwards in time". It's not really that antiparticles are travelling backwards in time. But mathematically speaking, an antiparticle travelling forwards in time is indistinguishable from the corresponding particle travelling backwards in time. They're just different ways of understanding the same physical situation.
CommonCrawl
Old Library building at the main campus of the University of Warsaw, 26/28 Krakowskie Przedmiescie str. I present an update of the PDFs obtaining using the same overall framework as MSTW. In particular I concentrate on the effect of new data sets, particularly from the LHC, but compare the changes in the PDFs resulting from this source to changes due to some differences in theoretical procedure. An update on activities of the CTEQ-TEA global analysis project will be given. This includes discussion of the latest NNLO CT10 parton distribution functions and their implications for predictions at the LHC. We also present a study of QED effects, including photon PDFs, in the CT10 analysis, and show how the ZEUS DIS plus observed photon data can constraint the initial photon PDF. We consider impact of the recent data obtained by the LHC, Tevatron, and fixed-target experiments on the quark distributions in the nucleon with a particular focus on disentangling of different quark species. Improved determination of the poorly known strange sea distribution is obtained and the standard candle benchmarks for the Drell-Yan process at the LHC are updated. The HERAPDF1.5 PDF set, evolved in leading order (LO) $\alpha_s$ using DGLAP evolution equations, is presented. This LO PDF is particularly useful for Monte Carlo event generators, based on LO matrix elements plus parton showers. 93. Measurements of the production of vector bosons in association with heavy flavour quarks at ATLAS. 92. Are b-PDFs (and b-fragmentation functions) needed at the LHC?
CommonCrawl
ს. ფაღავა, Pagava S., Avtandilashvili A., Kakashvili P., Kharashvili G., Robakidze Z., Rusetski V., Togonidze G., Baratashvili D.. Environmental radioactivity investigations in the Georgian subtropical region. . Proceedings of the International Conference held in Vienna, 23-27 April 2001. IAEA, Vienna, 2002. pp. 480-481.. 2002წ. . K. Kachiashvili. Environmental water objects pollution level control and management systems. Collection of reports SMIA03, 4th-6th September, University of Geneva, UniMail. 2003წ. 477-481. Sh. Kekutia. Equation of motion for superfluid solutions filled porous media. J. Stat. Mech.. 2005წ. P12008 . L. Botsvadze, D. Sharabidze. Equation Optimization in Regional Agricultural Logistics Centers . International Journal, MTM, Year IX Essue 10/. 2015წ. ISSN 1313-0226, p. 17-19. Sh. Kekutia. Equations of motions and velocities of longitudinal waves for superfluid filled aerogel in the presence of finite magnetic field. . Fizika Nizkikh Temperatur. 2008წ. v. 34, p.215-218. ვ. ტარიელაძე, R. Vidal. Equicontinuity and quasi-uniformities. Georgian Math. J. . 2004წ. . L. Chkhartishvili. Equilibrium geometries of the boron nitride layered nanosystems. Proc. 4th Int. Boron Symp. Eskişehir: OGU. 2009წ. 161-170. J. Peradze. Error distribution in the solution of linear equations. Numerical Functional Analysis and Optimization. 1978წ. v.1, no.3, pp.281-287. დ. გორგიძე, გ. ჯავახაძე, ვ. ბურჯანაძე. Esatimation of Profit in Corporate Sistems. Georgian International Journal jf Science and Technology,New Yiork,. 2009წ. Vol.1,Issue #4. N. Fokina, K.O. Khutsishvili, and V.A. Atsarkin. ESR and Longitudinal Response in Metals Containing Localized Paramagnetic Centers with Spin S>1/2. . Applied Magnetic Resonance. 0წ. Vol. 24, pp. 197-123 (2003). T. Kacharava, Avtandil Korakhashvili, Nino Tsiklauri. Establishment and Upgrading of Phar-macological Gene Bank of Medicinal, Aromatic, Spice & Poisonous Plants. Bulleten of State Agrarian University of Armenia. 2010წ. ISSN 1829-0000, 2 (30) 2011, p 11-14 . Tamar Kacharava, Avtandil Korakhashvili. Establishment and Upgrading of Pharmacological Gene Bank of Medicinal, Aromatic, Spice plants,. Bulleten of State Agrarian University of Armenia,Yrevan,. 2010წ. Bulleten of State Agrarian University of Armenia,Yrevan, ISSN 1829-0000, N2,(30), p.11-15. D. Natroshvili. Estimate of Green's tensors of the theory of elasticity. Differentsial'nye Uravnenia. 1978წ. XIV, 7, 1272-1284. A. Meskhi, V. Kokilashvili, H. Rafeiro. Estimates for nondivergence elliptic equations with VMO coefficients in generalized grand Morrey spaces. Complex Variables and Elliptic Equations. 2014წ. Volume 59, Issue 8, 2014, pp. 1169-1184, DOI:10.1080/17476933.2013.831844. Z. Kiguradze, T. Jangveladze. Estimates of a Stabilization Rate as $t\to\infty$ of Solutions of a Nonlinear Integro-Diffe¬ren¬tial Equation. Georgian Math. J. 2002წ. V.9, N1, p.57-70. ჟ. წიკლაური-შენგელია, Tsiklauri-Shengelia, null, Shengelia natia. Estimation of Investment Activity Level in Different Transitional and High-Developed Countries. (Collected Articles-1(17)-Armenia,. 2013წ. 1(17. К. , Mamonova O.D. and Stepanov V.A. . Estimation of divergence between two empirical probability densities having different number of spacing out of random values giving intervals.. Is deposited in TsNIITEIpriboro¬stroenia No. 2413-B. DM 2413pr-D84. Bibl. Indicator VINITI "Deposited scientific works", . 1984წ. No. 8.. L. Chkhartishvili, D. L. Gabunia, O. A. Tsagareishvili. 
Estimation of the isotopic effect on the melting parameters of boron. Inorg. Mater.. 2007წ. 43, 6, 594-596. К. , Stepanishvili V.A. . Estimation of unknown parameters of some non-regularities probability distribution densities. Avtometria. 1988წ. No. 2, 109-111. K. Kachiashvili, Melikdzhanian D.I. . Estimators of the Parameters of Beta Distribution. Sankhya. 0წ. is accepted for publication.
CommonCrawl
We consider a model for complex networks that was introduced by Krioukov et al. In this model, $N$ points are chosen randomly inside a disk on the hyperbolic plane and any two of them are joined by an edge if they are within a certain hyperbolic distance. The $N$ points are distributed according to a quasi-uniform distribution, which is a distorted version of the uniform distribution. The model turns out to behave similarly to the well-known Chung-Lu model, but without the independence between the edges. Namely, it exhibits a power-law degree sequence and small distances but, unlike the Chung-Lu model and many other well-known models for complex networks, it also exhibits clustering. The model is controlled by two parameters $\alpha$ and $\nu$ where, roughly speaking, $\alpha$ controls the exponent of the power-law and $\nu$ controls the average degree. The present paper focuses on the evolution of the component structure of the random graph. We show that (a) for $\alpha > 1$ and $\nu$ arbitrary, with high probability, as the number of vertices grows, the largest component of the random graph has sublinear order; (b) for $\alpha < 1$ and $\nu$ arbitrary with high probability there is a "giant" component of linear order, and (c) when $\alpha=1$ then there is a non-trivial phase transition for the existence of a linear-sized component in terms of $\nu$. A corrigendum was added to this paper 29 Dec 2018.
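For concreteness, the model can be sampled directly. The sketch below uses the common parametrisation in which the disk radius is $R = 2\log(N/\nu)$ and the quasi-uniform radial density is proportional to $\alpha\sinh(\alpha r)$; these conventions are assumptions borrowed from the standard presentation of the model and may differ in detail from the paper.

```python
import numpy as np

def sample_hyperbolic_graph(n, alpha, nu, seed=None):
    """Illustrative sampler for a Krioukov-style random hyperbolic graph."""
    rng = np.random.default_rng(seed)
    R = 2.0 * np.log(n / nu)                  # assumed disk radius
    theta = rng.uniform(0.0, 2.0 * np.pi, n)  # angular coordinates are uniform
    # Radial coordinates with density ~ alpha*sinh(alpha*r) on [0, R],
    # sampled by inverting the CDF (cosh(alpha*r) - 1) / (cosh(alpha*R) - 1).
    u = rng.uniform(0.0, 1.0, n)
    r = np.arccosh(1.0 + u * (np.cosh(alpha * R) - 1.0)) / alpha
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # Hyperbolic distance via the hyperbolic law of cosines.
            cosh_d = (np.cosh(r[i]) * np.cosh(r[j])
                      - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(theta[i] - theta[j]))
            if np.arccosh(max(cosh_d, 1.0)) <= R:  # connect points within distance R
                edges.append((i, j))
    return r, theta, edges

# Per the result above: alpha > 1 should give only sublinear components,
# alpha < 1 a giant component of linear order.
r, theta, edges = sample_hyperbolic_graph(n=500, alpha=0.8, nu=1.0, seed=0)
print(len(edges))
```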
CommonCrawl
Abstract: For a connected pasting scheme $\mathcal G$, under reasonable assumptions on the underlying category, the category of $\mathfrak C$-colored $\mathcal G$-props admits a cofibrantly generated model category structure. In this paper, we show that, if $\mathcal G$ is closed under shrinking internal edges, then this model structure on $\mathcal G$-props satisfies a (weaker version) of left properness. Connected pasting schemes satisfying this property include those for all connected wheeled graphs (for wheeled properads), wheeled trees (for wheeled operads), simply connected graphs (for dioperads), unital trees (for symmetric operads), and unitial linear graphs (for small categories). The pasting scheme for connected wheel-free graphs (for properads) does _not_ satisfy this condition. We furthermore prove, assuming $\mathcal G$ is shrinkable and our base categories are nice enough, that a weak symmetric monoidal Quillen equivalence between two base categories induces a Quillen equivalence between their categories of $\mathcal G$-props. The final section gives illuminating examples that justify the conditions on base model categories.
CommonCrawl
A sequent is an expression of the form $A_1,\ldots,A_n \rightarrow B_1,\ldots,B_m$, where $A_1,\ldots,A_n,B_1,\ldots,B_m$ are formulas. It is read as follows: under the assumptions $A_1,\ldots,A_n$, at least one of $B_1,\ldots,B_m$ holds. The part of the sequent on the left of the arrow is called the antecedent, and the part on the right the succedent (consequent). The formula $(A_1\&\ldots\&A_n)\supset(B_1\lor\ldots\lor B_m)$ (note that an empty conjunction denotes truth, and an empty disjunction denotes falsity) is called the formula image of the sequent. The particular case $m=1$ of the above definition gives sequents with a single formula in the succedent. For a discussion of Gentzen's sequent calculi cf. Gentzen formal system; Sequent calculus and, e.g., [a2].
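As a simple illustration, the sequent $$A,\ A\supset B \rightarrow B$$ has antecedent $A,\ A\supset B$, succedent $B$, and formula image $(A\&(A\supset B))\supset B$, which is a tautology, reflecting the fact that $B$ does indeed follow from the assumptions $A$ and $A\supset B$.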
CommonCrawl
The fact that we can choose an $n\in D$ is just the fact that $D$ is assumed nonempty. This doesn't require choice. As a heuristic, you don't need the axiom of choice to choose a sock from a pair of socks (or one sock from each pair of finitely many pairs of socks), but if I give you an infinite collection of pairs, you need some form of the axiom of choice to select one from each pair simultaneously. You can grab one element from each set in a finite collection of sets from the axioms of set theory. If you want to grab one from infinitely many sets, you need a stronger axiom in general. We use the axiom of choice when choosing an element from infinitely many sets 'at the same time'. Otherwise, of course, we can choose an element one by one. Here I think it is the case that we choose an element from infinitely many sets at the same time.
CommonCrawl
I have a linear system with input $ x(t) $ and output $ y(t)$ given by $$ y(t) = \int_0^\infty K(t')x(t-t')dt', $$ where $K(t)$ is a known kernel, with some parameters. My question is: how is this related to the mean or maximum output amplitude $\langle|y(t)|\rangle$ or $\max|y(t)|$? For example, can we say something like, the amplitude is maximal, for a given set of parameters, when the correlation is maximal?
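For what it is worth, the quantities involved are easy to approximate numerically once the integral is discretised; the kernel and input below are arbitrary placeholders, not the actual system in question.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
K = np.exp(-t)                      # placeholder causal kernel K(t)
x = np.sin(2.0 * np.pi * 0.5 * t)   # placeholder input x(t)

# y(t) = integral_0^inf K(t') x(t - t') dt'  ->  discretised causal convolution
y = np.convolve(x, K)[:len(t)] * dt

mean_amp = np.mean(np.abs(y))       # <|y(t)|>
max_amp = np.max(np.abs(y))         # max |y(t)|
print(mean_amp, max_amp)
```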
CommonCrawl
The 2D Parker's mean-field dynamo equations with a various distributions of the $\alpha$- and $\omega$-effects are considered. We show that smooth profiles of $\alpha$ and $\omega$ can produce dipole configuration of the magnetic field with the realistic magnetic energy spectrum. We emphasize that fluctuating $\alpha$-effect leads to increase of the magnetic energy at the small scales, breaking the dipole configuration of the field. The considered geostrophic profiles of $\alpha$ and $\omega$ correspond to the small-scale polarwards/equatorwards travelling waves with the small dipole field contribution. The same result is observed for the dynamic form of the $\alpha$-quenching, where two branches of the weak and strong solution coexist. Received 28 July 2014; accepted 30 July 2014; published 8 August 2014. Citation: Reshetnyak M. Yu. (2014), The mean-field dynamo model in geodynamo, Russ. J. Earth Sci., 14, ES2001, doi:10.2205/2014ES000539.
CommonCrawl
Published September 1999,January 2004,February 2011. How can we solve equations like $13x+29y=42$ or $2x+4y=13$ with the solutions $x$ and $y$ being integers? Equations with integer solutions are called Diophantine equations after Diophantus who lived about 250 AD but the methods described here go back to Euclid (about 300 BC) and earlier. When people hear the name Euclid they think of geometry but the algorithm described here appeared as Proposition 2 in Euclid's Book 7 on Number Theory. First we notice that $13x+29y=42$ has many solutions, for example $x=1$, $y=1$ and $x=30$, $y=-12$. Can you find others (it has infinitely many solutions)? We also notice that $2x+4y=13$ has no solutions because $2x+4y$ must be even and $13$ is odd. Can you find another equation that has no solutions? If we can solve $3x+5y=1$ then we can also solve $3x+5y=456$. For example, $x=2$ and $y=-1$ is a solution of the first equation, so that $x=2\times 456$ and $y=-1\times 456$ is a solution of the second equation. The same argument works if we replace $456$ by any other number, so that we only have to consider equations with $1$ on the right hand side, for example $P x+Q y=1$. However if $P$ and $Q$ have a common factor $S$ then $P x+Q y$ must be a multiple of $S$ so we cannot have a solution of $P x+Q y=1$ unless $S=1$. This means that we should start by considering equations $P x+Q y=1$ where $P$ and $Q$ have no common factor. Let us consider the example $83x+19y=1$. There is a standard method, called Euclid's Algorithm, for solving such equations. It involves taking the pair of numbers $P=83$ and $Q=19$ and replacing them successively by other pairs $(P_k,Q_k)$. We illustrate this by representing each pair of integers $(P_k,Q_k)$ by a rectangle with sides of length $P_k$ and $Q_k$. Draw an $83$ by $19$ rectangle and mark off $4$ squares of side $19$, leaving a $19$ by $7$ rectangle. In a few steps we shall split this rectangle into 'compartments' to illustrate the whole procedure for solving this equation. (You may like to try the java applet Solving with Euclid's Algorithm which draws the rectangles and carries out all the steps automatically to solve equations of the form $P x+Q y=1$). We repeat this process using the $19$ by $7$ rectangle to obtain two squares of side $7$, and a $7$ by $5$ rectangle. Next, the $7$ by $5$ rectangle splits into a square of side $5$, and a $5$ by $2$ rectangle. and successively substitute the remainders from the other equations until we get back to the first one giving a combination of the two original values $P=83$ and $Q=19$. The method in this example has the following steps with the remainders given in square brackets. Thus a solution of $83x+19y=1$ is $x=-8$ and $y=35$. Can you find a solution of $83x+19y=7$? Can you now find a solution of $827x+191y=2$? You should first solve the equation $827x+191y=1$ (using the computer if you wish). For the next article in the series, click here .
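The back-substitution carried out above is exactly the extended Euclidean algorithm. The short sketch below (Python is used purely for illustration, and the function name is just a label) reproduces the worked example and answers the follow-up questions by scaling the solution.

```python
def extended_gcd(p, q):
    """Return (g, x, y) with p*x + q*y == g == gcd(p, q)."""
    if q == 0:
        return p, 1, 0
    g, x, y = extended_gcd(q, p % q)
    # Undo the division step p = (p // q) * q + (p % q) by back-substitution.
    return g, y, x - (p // q) * y

g, x, y = extended_gcd(83, 19)
print(x, y)                            # -8 35, the solution found above
print(83 * (7 * x) + 19 * (7 * y))     # scaling by 7 solves 83x + 19y = 7

g, x, y = extended_gcd(827, 191)
print(827 * (2 * x) + 191 * (2 * y))   # doubling a solution of 827x + 191y = 1 gives 2
```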
CommonCrawl
Citation: JIANG YUN (2010-08-04). The phases of supersymmetric black holes in five dimensions. ScholarBank@NUS Repository. Abstract: In contrast to four dimensions, in five dimensions black hole solutions can have event horizons with nonspherical topology and violate the uniqueness theorems. The supersymmetric solutions of single black hole (also called BMPV black hole because it was first discovered by Breckenridge, Myers, Peet and Vafa) and single black ring (with horizon topology $S^1 \times S^2$) have been discovered in minimal $D=5$ supergravity. The first part of the thesis is devoted to the recent developments in five-dimensional supersymmetric black holes: first I briefly describe minimal $N=1,D=5$ supergravity theory, next I review the well-known supersymmetric black hole solutions in five dimensions and study the physical properties of the BMPV black hole and the supersymmetric black ring. However, the BPS (Bogomol'nyi-Prasad-Sommerfield) equations solved by the black ring appear to be nonlinear, hence this obscures the construction of multiple ring solutions via simple superpositions.\par In order to construct solutions describing multiple supersymmetric black rings or superpositions of supersymmetric black rings with BMPV black holes, in the second part I review an alternative approach --- the harmonic function method. I first introduce four harmonic functions to characterize the single supersymmetric black ring solution. Via simple superpositions of harmonic functions for each ring, supersymmetric multiple concentric black rings are then constructed, in which the rings have a common center, and can lie either in the same plane or in orthogonal planes. In addition, this solution-generating method can also be applied to supersymmetric solutions with a black hole by taking the limit $R\rightarrow0$. As a result, I construct the most general supersymmetric solution --- black multiple bi-rings Saturn, which consists of multiple concentric black rings sitting in orthogonal planes with a BMPV black hole at the center. In the thesis I focus particularly on the bi-rings Saturn solution.\par In the last part, I present the phase diagram of the established supersymmetric bi-ring Saturn in five dimensions and show that its structure is similar to those for extremal vacuum ones: a semi-infinite open strip, whose upper bound on the entropy is equal to the entropy of a static BMPV black hole of the same total mass for any value of the angular momentum. Following this, I provide a detailed analysis of the configurations that approach its three boundaries. Remarkably, I argue that for any $j\geq0$ the phase with highest entropy is a black bi-ring Saturn configuration with a central, close to static, $S^3$ BMPV black hole (accounting for the high entropy) surrounded by a pair of very large and thin orthogonal black rings (carrying the angular momentum). Moreover, I also study the outstanding feature of non-uniqueness arising from this exotic configuration. Possible generalizations to more supersymmetric black hole solutions including black Saturn with an off-center hole, non-orthogonal ring configurations and rings on Eguchi-Hanson space are discussed at the end of the thesis.
CommonCrawl
Volume 7, Number 11 (2002), 1343-1376. We deal with a free boundary problem in $\mathbb R^2$ modelling propagation of premixed flames. We prove existence, uniqueness and regularity results of the solution near a travelling wave solution. In particular, we prove time analyticity of the free boundary. Adv. Differential Equations, Volume 7, Number 11 (2002), 1343-1376.
CommonCrawl
We have already seen many examples of the symbols available -- \alpha, \psi, and so on. There are some other symbols which the amstex and amssymb packages make available. These include fonts for blackboard bold capital letters, calligraphic letters and Fraktur letters. The commands are \mathbb, \mathcal and \mathfrak. They are valid only in math mode, and they operate only on the letter or group that immediately follows them. For example, the output 𝔖 = ℝ²/G is given by $\mathfrak S = \mathbb R^2/G$. To make the following table of symbols completely available, include the following package-loading command after your documentclass command.
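The command in question is presumably \usepackage{amssymb}, since that is the package named above; a minimal document showing it together with the three font commands:

```latex
\documentclass{article}
\usepackage{amssymb}   % provides \mathbb and \mathfrak (\mathcal is built in)

\begin{document}
Let $\mathbb R$ denote the reals, let $\mathcal F$ be a sheaf, and write
$\mathfrak S = \mathbb R^2/G$ for the quotient surface.
\end{document}
```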
CommonCrawl
In this limit problem, two trigonometric functions are involved, so the limit can be evaluated with the standard limit rule for the sine function, $\lim_{x\to 0}\frac{\sin x}{x}=1$. Let's try to transform both the numerator and the denominator into this form. Express the second factor of the function in its reciprocal form. By the product rule of limits, the limit of a product of functions can be written as the product of their limits. By the reciprocal rule of limits, the limit of a reciprocal can be written as the reciprocal of the limit. The sine limit rule cannot be applied yet, because the angle inside each sine function should also appear in its denominator, so let's repeat the same technique. Now use the constant multiple rule of limits to separate the constants from the functions. If $x \to 0$, then $3x \to 3 \times 0$ and $4x \to 4 \times 0$; therefore $3x \to 0$ and $4x \to 0$. In other words, if $x$ approaches $0$, then $3x$ and $4x$ also approach $0$. By the formula for the limit of $\sin(x)/x$ as $x$ approaches $0$, the limit of each such factor is equal to $1$.
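As a concrete worked example (an assumption on my part: the limit being evaluated appears to be $\lim_{x\to 0}\frac{\sin 3x}{\sin 4x}$, judging from the angles $3x$ and $4x$ used above), the steps described give:

```latex
\lim_{x \to 0} \frac{\sin 3x}{\sin 4x}
  = \lim_{x \to 0} \left( \frac{\sin 3x}{3x} \cdot \frac{4x}{\sin 4x} \cdot \frac{3x}{4x} \right)
  = \frac{3}{4} \cdot \lim_{3x \to 0} \frac{\sin 3x}{3x}
      \cdot \frac{1}{\displaystyle \lim_{4x \to 0} \frac{\sin 4x}{4x}}
  = \frac{3}{4} \cdot 1 \cdot \frac{1}{1}
  = \frac{3}{4}.
```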
CommonCrawl
Algebraic language theory studies the behaviour of finite automata of various kinds (e.g., regular languages of finite words, of infinite words, of trees, or their weighted variants) in a machine-independent way by relating them to finite algebraic structures. This has proved extremely fruitful. For example, regular languages can be described as the languages recognized by finite monoids, and the decidability of star-freeness rests on Schützenberger's theorem: a regular language is star-free iff it is recognized by a finite aperiodic monoid. The backbone of algebraic language theory is formed by more generic correspondences of this kind, called Eilenberg-type correspondences, which relate varieties of languages to pseudovarieties of finite algebras. There exist more than a dozen Eilenberg-type correspondences in the literature today. We show that they all arise from the same recipe: one models languages and the algebras recognizing them by monads on an algebraic category, and applies a Stone-type duality. I will present a variety theorem that covers, besides Eilenberg's classical result, e.g. Wilke's and Pin's work on $\infty$-languages, the recent variety theorem for cost functions of Daviaud, Kuperberg, and Pin, and unifies the two categorical approaches of Bojańczyk and of Adámek et al. In addition we derive new results, such as an extension of the local variety theorem of Gehrke, Grigorieff, and Pin from finite to infinite words.
CommonCrawl
Has anyone got a reference for the following fact? If $\mathcal X$ is a symmetric monoidal category, then $\_\otimes\_\colon\mathcal X\times\mathcal X \to \mathcal X$ is a strong monoidal functor. Moreover, the associators and unitors in $\mathcal X$ are monoidal natural transformations. The key point here is the existence of a natural transformation $$ (a \otimes b) \otimes (c \otimes d) \to (a \otimes c) \otimes (b \otimes d)\,, $$ which is why we need symmetry.
CommonCrawl
And calculate n according to Sn. If you mean $$S_n = (1 + 2 + 3 + \ldots + n) - (3 + 4 + 5 + 6 +7+8+9)$$ the first bracket has a well-known closed form in terms of $n$ and the second is a constant. because in a toy with infinite sums. If you're not sure of the formula, google "arithmetic series". Last edited by Denis; February 20th, 2019 at 09:28 AM. You wanted to calculate the sum "according to n", where n is an integer greater than 9. This can't be infinite. You didn't state originally that n is the number of terms. Why define Un? There is no Un in the question or in your solution. Last edited by skipjack; February 20th, 2019 at 10:15 AM.
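A quick numerical check of this reading of the thread (an assumption: the second bracket is the constant $3+4+\dots+9=42$, so $S_n = \frac{n(n+1)}{2} - 42$ for integers $n > 9$); a short Python sketch:

```python
def S(n):
    """S_n = (1 + 2 + ... + n) - (3 + 4 + ... + 9), for an integer n > 9."""
    assert n > 9
    return n * (n + 1) // 2 - 42  # closed form of the first bracket minus the constant 42

# Example: recover n from a given value of S_n by simple search.
def n_from_S(target):
    n = 10
    while S(n) < target:
        n += 1
    return n if S(n) == target else None

print(S(10))          # 13
print(n_from_S(13))   # 10
```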
CommonCrawl
Working methodology: In these problems, two persons' initial ages will be given, and the ratio of their ages before or after several years will be given. Multiply the ratio of their initial ages by x (or some variable) and take these as their initial ages. Now, if a final ratio has been given, equate this ratio with that ratio and find x, or proceed according to the problem. A worked sketch of this recipe appears after the list.

2. Five years ago, the total of the ages of a father and his son was 40 years. The ratio of their present ages is 4 : 1. What is the present age of the father? Let son's age = x. Then, father's age = 2x.

Let Jaya's age = 2x and Ravi's age = 5x.

7. Ten years ago A was half of B in age. If the ratio of their present ages is 3 : 4, what will be the total of their present ages?

9. Jayesh is as much younger to Anil as he is older to Prashant. If the sum of the ages of Anil and Prashant is 48 years, what is the age of Jayesh? The question says that the difference between Anil and Jayesh is the same as between Jayesh and Prashant. Given A + P = 48.

10. Three years ago the average age of A and B was 18 years. With C joining them, the average becomes 22 years. How old is C now? Sum of ages of A and B, 3 years ago = $(18 \times 2) = 36$ years. Sum of ages of A, B and C, now = $(22 \times 3) = 66$ years.

11. One year ago the ratio between Samir's and Ashok's ages was 4 : 3. One year hence the ratio of their ages will be 5 : 4. What is the sum of their present ages in years?

12. The ratio between the ages of A and B at present is 2 : 3. Five years hence the ratio of their ages will be 3 : 4. What is the present age of A? Let the ages of A and B be 2x and 3x years.

14. The ratio of Ashok's age to Pradeep's age is 4 : 3. Ashok will be 26 years old after 6 years. How old is Pradeep now?

Explanation: Let Chandravati's age 10 years ago be x years.

17. The age of Arvind's father is 4 times his age. If 5 years ago, the father's age was 7 times the age of his son at that time, what is Arvind's father's present age?

18. Pushpa is twice as old as Rita was two years ago. If the difference between their ages is 2 years, how old is Pushpa today? Let Rita's age 2 years ago be x years. Pushpa's present age = (2x) years.

19. Five years ago Vinay's age was one-third of the age of Vikas, and now Vinay's age is 17 years. What is the present age of Vikas?

The present ages of father and son are 36 years and 9 years respectively. So their ages are 12 years, 44 years respectively. Let the daughter's present age be x years. Then, mother's present age = (50 - x) years. So, their present ages are 40 years and 10 years.
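The "take the ratio times x and equate" recipe above is easy to mechanize. A sketch (assuming Python with sympy is available; problem 12 from the list is used as the example):

```python
from sympy import symbols, solve, Rational

x = symbols('x', positive=True)

# Problem 12: present ages are in the ratio 2 : 3, so take them as 2x and 3x.
a, b = 2 * x, 3 * x

# Five years hence the ratio will be 3 : 4 -- equate and solve for x.
sol = solve((a + 5) / (b + 5) - Rational(3, 4), x)

present_age_of_A = 2 * sol[0]
print(present_age_of_A)  # 10, so A is 10 years old now
```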
CommonCrawl
There are $n$ applicants and $m$ free apartments. Your task is to distribute the apartments so that as many applicants as possible will get an apartment. Each applicant has a desired apartment size, and they will accept any apartment whose size is close enough to the desired size. The first input line has three integers $n$, $m$, and $k$: the number of applicants, the number of apartments, and the maximum difference. The next line contains $n$ integers $a_1, a_2, \ldots, a_n$: the desired apartment size of each applicant. If the desired size of an applicant is $x$, they will accept any apartment whose size is between $x-k$ and $x+k$. The last line contains $m$ integers $b_1, b_2, \ldots, b_m$: the size of each apartment. Print one integer: the number of applicants who will get an apartment.
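The statement does not prescribe an algorithm, but a standard approach (a sketch, not an official solution) is to sort both lists and match them greedily with two pointers:

```python
import sys

def solve():
    data = sys.stdin.read().split()
    n, m, k = map(int, data[:3])
    a = sorted(map(int, data[3:3 + n]))          # desired apartment sizes
    b = sorted(map(int, data[3 + n:3 + n + m]))  # actual apartment sizes

    i = j = matched = 0
    while i < n and j < m:
        if b[j] < a[i] - k:      # apartment too small for this applicant
            j += 1
        elif b[j] > a[i] + k:    # apartment too big; this applicant can never be served
            i += 1
        else:                    # |a[i] - b[j]| <= k: match them
            matched += 1
            i += 1
            j += 1
    print(matched)

solve()
```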
CommonCrawl
John Dolittler comes home from work and reads his mail. One letter catches his interest. It's a letter from a distant aunt. What's this? The letter says that he will inherit all of her money and all of her beloved animals from all over the world. John really loves animals, and has always wanted to have a little zoo. This is his big chance! Animals needs space, but that shouldn't be a problem with his new-found fortune. To help with his planning, John will use the FOIL method. John finds an old map of his property and the nearby fields. But over time, the measurements of the fields have faded on the map. John knows that his property is 25 meters by 20 meters. He can't read the width of the field next to his property so he writes 'x' for its width. He also knows that the field above his property has a length that is twice as long as the other property's width, so it can be represented by 2x. If John buys both of these fields, the length of his new property would be 25 + 2x and the width would be 20 + x. For the total area, he multiplies these expressions by using the FOIL method. FOIL is an easy way to remember how to multiply two binomials. The 'F' in FOIL stands for First, this means that you should multiply the two numbers that come first in each parentheses, 25 times 20 is 500. The 'O' stands for Outer, this means that you should multiply the two numbers on the outside. 25 times 'x' is 25x. The 'I' stands for Inner, the two inner numbers 2x times 20 make 40x. The 'L' stands for Last. You should multiply the numbers that come last in each set of parentheses. 2x times x is 2x squared. After combining like terms and rearranging the terms in order of degree, we get 2x squared + 65x + 500. But FOIL only works when you are multiplying two binomials. So, why does it work anyway? When we use the FOIL method, we are really just using the Distributive Property. The overall goal is to multiply every term in the first set of parentheses by every term in the second set of parentheses. We can do this by using the Distributive Property twice. Our first 'a' is the quantity 25 plus 2x, our first 'b' is 20 and our first 'c' is 'x'. Distributing the quantity 25 plus 2x over the quantity 20 plus 'x' gives us the following. Now, let's rearrange our equation to look like the definition of the distributive property. What comes next? You guessed it! The distributive property comes to the rescue once again! 20 times 25 is 500. 20 times 2x is 40x. 'x' times 25 is 25x. 'x' times '2x' is 2x squared. After combining like terms and rearranging the terms by decreasing degree, we get 2x squared + 65x + 500. As you can see, it is the same as the FOIL method. But using FOIL cuts out a couple of steps. John isn't sure the area is big enough, so he's thinking about buying a third field. He knows that this field is three times as wide as the field next to his property. To calculate the bigger area, he rewrites the expression (25 + 2x) times (20 +x+3x). Since the FOIL method only works for multiplying two binomials, John has to use the Distributive Property. John breaks it up into three problems. First, he multiplies (25 + 2x) by 20, then by x, and at last by 3x. By using the Distributive Property here, we get the following: 25 times 20 is 500, 2x times 20 is 40x, 25 times 'x' is 25x, 2x times 'x' is 2x squared, 25 times 3x equals 75x and 2x times 3x is 6x squared. After combining like terms, the area is 8x squared + 140x + 500. Don't forget to write the terms in order of decreasing degree. 
But careful, sometimes you can simplify first. Since 'x' and 3x are like terms, John could have combined them to get (20 + 4x) in the second parentheses. Now we can use the FOIL method because it's the product of two binomials. First: 25 times 20 is 500. Outer: 25 times 4x is 100x. Inner: 2x times 20 is 40x. Last: 2x times 4x is 8x squared. Look how big the new plot will be! John decides to buy the whole area for his little zoo. John is just finishing up the zoo when the animals arrive. He's so excited that he can't wait to see what animals his aunt has collected from all over the world. What is this? Well, John tries to make the most out of his situation. One way of getting the product of two binomials is the FOIL method. The term "FOIL" represents the position of the terms of the binomials; i.e., it is an acronym for "First-Outer-Inner-Last". 1. Take the product of each pair of terms of the binomials - the first, outer, inner, and last terms. 2. Simplify the product by combining existing like terms. 3. Arrange the terms of the product in descending order according to their degree, making sure that we simplify by combining like terms, and rearrange terms from highest to lowest degree (by convention). The FOIL method is an essential tool, as it is needed in any instance involving multiplying polynomials. Would you like to apply what you have learned? With the exercises for the video FOILing and Explanation for FOIL you can review and practice it. Calculate $(25 + 2x)(20 + x)$ using the FOIL method. F: first multiply the first terms. O: next multiply the outer terms. I: now multiply the inner terms. L: last multiply the last terms. $25$ and $20$ are the first terms of the two binomials. Here you see an example using the FOIL method. FOIL is an easy way to remember how to multiply two binomials. F stands for first, meaning multiply the first terms $25\times 20=500$. O stands for outer, meaning multiply the outer terms $25\times x=25x$. I stands for inner, meaning multiply the inner terms $2x\times 20=40x$. L stands for last, meaning multiply the last terms $2x\times x=2x^2$. Decide when the FOIL method can be used. F stands for multiply the first terms. I stands for multiply the inner terms. L stands for multiply the last terms. $2x$ and $3$ are monomials. A trinomial is the result of adding or subtracting three monomials. And it only works when you are multiplying two binomials. Determine the area of the zoo via the distributive property and the FOIL method. The FOIL method only works for the multiplication of two binomials. The distributive property is $a(b\pm c)=ab\pm ac$. Remember to combine like terms. For example, $3x+5x=8x$. The total area of the fields on the old map can be represented as the product $(25+2x)(20+x+3x)$. F: multiply the first terms $25\times 20=500$. O: multiply the outer terms $25\times 4x=100x$. I: multiply the inner terms $2x\times 20=40x$. L: multiply the last terms $2x\times 4x=8x^2$. We can now see that with either method, we get the same result in the end. And John now knows the total area of the fields on the old map and can start planning the construction of his dream zoo! Find the mistakes in the calculations. You can use the FOIL method also for differences. Pay attention to the sign. If necessary, first simplify the terms. There are ten mistakes in total. To use the FOIL method we have to simplify the factors if necessary. You can also use the FOIL method for differences. But you have to pay attention to the sign.
Lastly we still have a multiplication where we first have to simplify both factors: $(25+2x-4x)(20+3x-10-7x)=(25-2x)(10-4x)$. Identify the right formula for calculating the area. For example the leftmost one is $(30+3x)(45+x)$. Next combine like terms if there are any. You can use the FOIL method for all these examples. O stands for multiply the outer terms. F multiply the first terms. O multiply the outer terms. I multiply the inner terms. L multiply the last terms. Calculate the new area of the zoo. Combine the like terms in order to use the FOIL method. You have to calculate the product $(25+2x)(20+6x)$. Here is an example using the FOIL method, combining like terms, and rearranging the powers. To calculate the total needed area for John's Zoo we have to multiply the length $25+2x$ with the width $20+x+3x+2x$. To use the FOIL method we first have to simplify the width term to $20+6x$.
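If you want to double-check a FOIL expansion like the ones above with a computer algebra system, here is a small sketch (assuming Python with sympy):

```python
from sympy import symbols, expand

x = symbols('x')

# The two products worked out in the lesson:
print(expand((25 + 2*x) * (20 + x)))    # 2*x**2 + 65*x + 500
print(expand((25 + 2*x) * (20 + 6*x)))  # 12*x**2 + 190*x + 500
```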
CommonCrawl
Two prevalent models in the data stream literature are the insertion-only and turnstile models. Unfortunately, many important streaming problems require a $\Theta(\log n)$ multiplicative factor more in their memory usage for turnstile streams than for insertion-only streams. This complexity gap often arises because the underlying frequency vector $f$ is very close to 0, after accounting for all insertions and deletions to items. Signal detection in such streams is difficult, given the large number of deletions. In this talk we propose an intermediate model which, given a parameter $\alpha \geq 1$, lower bounds the norm $\|f\|_p$ by a $1/\alpha$-fraction of the $L_p$ mass of the stream had all updates been positive. We show that for streams with this $\alpha$-property, for many fundamental problems we can replace a $\log(n)$ factor in algorithms in the turnstile model with a $\log(\alpha)$ factor. This is true for identifying heavy hitters, inner product estimation, $L_0$ estimation, $L_1$ estimation, $L_1$ sampling, and support sampling. Since in practice many important turnstile data streams are in fact $\alpha$-property streams for small values of $\alpha$, these results represent significant improvements in efficiency for these applications.
CommonCrawl
The recent detection of the first gravitational wave signal (GW150914) produced by a binary black hole merger represents the dawn of a new age in astronomy. While the detection is significant in and of itself, one of the numerous questions yet to be answered is how such a binary black hole (BBH) system formed in the first place. Previous models of BBH systems suggest that these are formed in the dense cores of globular clusters, where black holes left over from old, collapsed stars eventually become bound together through gravitational processes. After the most massive stars collapse, the resulting black holes tend to sink to the center of clusters due to their higher masses relative to other objects. This leads to a particularly high concentration of black holes near the cluster center, and also increases the rate of close gravitational encounters between these objects. BBH systems are continuously being formed and disrupted through complex interactions with other objects and systems in a globular cluster's dense central environment, as illustrated in Fig. 1. Fig. 1: Two possible formation histories for the GW150914 BBH system (indicated by the pair of orbiting black holes). Essentially, this is an example of the complex formation histories that BBH candidates go through within a globular cluster. Before two black holes become bound and merge together, they undergo numerous interactions with other objects (e.g., stars and other black holes, indicated by the blue and red spheres). The authors of today's paper explore the conditions and environment required to form the system that led to the detection of GW150914. To do this, the authors refer to their previous models of globular clusters that detail the distribution of stellar masses and number of binary systems. From these models, the authors select binary systems that have physical properties similar to those of GW150914 (e.g., in mass and redshift). They find that most of these binary systems come from globular clusters of relatively low metallicities and large cluster masses. Lower metallicity stars tend to have weaker stellar winds, which means that they lose less mass over their lifetimes. These stars then tend to produce more massive black holes once they collapse at the end of their lifetimes. Additionally, more massive clusters tend to have a large number of black holes, which then increases the probability of a BBH system forming. The paper concludes that the globular cluster hosting the GW150914 progenitor system must have been a low metallicity cluster with a mass between $3 \times 10^5$ and $6 \times 10^5$ solar masses. The masses of the BBH systems that could be detected by Advanced LIGO depend on the sensitivity of its detectors and the distribution of these types of BBH systems in mass and redshift. Based on their models, the authors find that the median total mass of a detectable BBH during Advanced LIGO's first observing run is 50 solar masses, which is consistent with the estimated mass of the GW150914 progenitor. While LIGO only has a single definitive BBH detection to its credit so far, future BBH merger detections from LIGO will help constrain the models and dynamical processes involved in these merger events.
CommonCrawl
The streets of Byte City form a regular, chessboard-like network - they are either north-south or west-east directed. We shall call them NS- and WE-streets. Furthermore, each street crosses the whole city. Every NS-street intersects every WE-street and vice versa. The NS-streets are numbered from $1$ to $n$, starting from the westernmost. The WE-streets are numbered from $1$ to $m$, beginning with the southernmost. Each intersection of the $i$'th NS-street with the $j$'th WE-street is denoted by a pair of numbers $(i,j)$ (for $1\le i\le n$, $1\le j\le m$). There is a bus line in Byte City, with intersections serving as bus stops. The bus begins its itinerary at the $(1,1)$ intersection, and finishes at the $(n,m)$ intersection. Moreover, the bus may only travel in the eastern and/or northern direction. Write a programme that: reads from the standard input a description of the road network and the number of passengers waiting at each intersection, finds how many passengers the bus can take at the most, and writes the outcome to the standard output. In short: Byte City's streets form a standard chessboard network - they run either north-south or west-east. The north-south streets are numbered from 1 to n, and the west-east streets from 1 to m. Each intersection is denoted by a pair of numbers (i, j) (1 <= i <= n, 1 <= j <= m). Byte City has a bus line with stops at some intersections. The bus departs from (1, 1) and ends at (n, m), and may only travel north or east. Some passengers are waiting at certain stops; the bus driver wants to pick up as many passengers as possible along the route. Help him figure out how to pick up the most passengers. The first line of the standard input contains three positive integers $n$, $m$ and $k$ - denoting the number of NS-streets, the number of WE-streets and the number of intersections by which the passengers await the bus, respectively ($1\le n\le 10^9$, $1\le m\le 10^9$, $1\le k\le 10^5$). The following $k$ lines describe the deployment of passengers awaiting the bus, a single line per intersection. In the $(i+1)$'st line there are three positive integers $x_i$, $y_i$ and $p_i$, separated by single spaces, $1\le x_i\le n$, $1\le y_i\le m$, $1\le p_i\le 10^6$. A triplet of this form signifies that at the intersection $(x_i,y_i)$, $p_i$ passengers await the bus. Each intersection is described in the input data once at the most. The total number of passengers waiting for the bus does not exceed $1\ 000\ 000\ 000$. Your programme should write to the standard output one line containing a single integer - the greatest number of passengers the bus can take.
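The statement does not include a solution; one common approach (a sketch, under the observation that the stops picked up along an east/north-only route are exactly those whose coordinates form a chain that is non-decreasing in both x and y) is a maximum-weight non-decreasing chain computed with a Fenwick (binary indexed) tree over compressed y-coordinates:

```python
import sys

def solve():
    data = sys.stdin.buffer.read().split()
    n, m, k = int(data[0]), int(data[1]), int(data[2])
    pts = [(int(data[3 + 3*t]), int(data[4 + 3*t]), int(data[5 + 3*t])) for t in range(k)]

    # Coordinate-compress y; Fenwick tree storing prefix maxima of dp values.
    ys = sorted({y for _, y, _ in pts})
    idx = {y: i + 1 for i, y in enumerate(ys)}
    tree = [0] * (len(ys) + 1)

    def update(i, val):
        while i <= len(ys):
            tree[i] = max(tree[i], val)
            i += i & -i

    def query(i):  # max dp over compressed y-indices 1..i
        best = 0
        while i > 0:
            best = max(best, tree[i])
            i -= i & -i
        return best

    best = 0
    # Sort by x, then y; any previously processed stop with y' <= y can precede this one.
    for x, y, p in sorted(pts):
        dp = query(idx[y]) + p
        update(idx[y], dp)
        best = max(best, dp)
    print(best)

solve()
```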
CommonCrawl
Abstract: We report complex dielectric and Raman spectroscopy measurements in four samples of $\alpha$-Fe$_2$O$_3$, consisting of crystallites which are either hexagonal-shaped plates or cuboids. All four samples exhibit the spin reorientation transition from a pure antiferromagnetic (AFM) to a weak-ferromagnetic (WFM) state at the Morin transition temperature (T$_M$) intrinsic to $\alpha$-Fe$_2$O$_3$. These samples, pressed and sintered in identical conditions for the dielectric measurements, reveal a moderate but clear enhancement in the real part of the dielectric constant ($\epsilon'$) in the WFM region. However, a relaxation-like behavior in the imaginary part ($\epsilon''$) is observed only in nano plates or big cuboids. Further still, this relaxation pattern is observed only in the lower frequency region, lasting up to a few kHz, and follows the Arrhenius law within this limited range. The activation energy deduced from the fitting is suggestive of polaronic conduction. Temperature-dependent Raman spectra reveal anomalies in all major phononic modes and also in the 2Eu mode in the vicinity of the Morin transition. A peak-like behavior in the Raman shifts, in conjunction with anharmonic fitting, reveals that the nature of the spin-phonon coupling is different in the pure AFM and WFM regions and that it is tied to the mild variations observed in the dielectric constant of $\alpha$-Fe$_2$O$_3$ near T$_M$.
CommonCrawl
Abstract: The aim of this paper is to give a higher dimensional equivalent of the classical modular polynomials $\Phi_\ell(X,Y)$. If $j$ is the $j$-invariant associated to an elliptic curve $E_k$ over a field $k$, then the roots of $\Phi_\ell(j,X)$ correspond to the $j$-invariants of the curves which are $\ell$-isogenous to $E_k$. Denote by $X_0(N)$ the modular curve which parametrizes the set of elliptic curves together with an $N$-torsion subgroup. It is possible to interpret $\Phi_\ell(X,Y)$ as an equation cutting out the image of a certain modular correspondence $X_0(\ell) \rightarrow X_0(1) \times X_0(1)$ in the product $X_0(1) \times X_0(1)$. Let $g$ be a positive integer and $\overline{n} \in \mathbb{N}^g$. We are interested in the moduli space, which we denote by $\mathcal{M}_{\overline{n}}$, of abelian varieties of dimension $g$ over a field $k$ together with an ample symmetric line bundle $\mathcal{L}$ and a symmetric theta structure of type $\overline{n}$. If $\ell$ is a prime and $\overline{\ell}=(\ell, \ldots , \ell)$, there exists a modular correspondence $\mathcal{M}_{\overline{\ell n}} \rightarrow \mathcal{M}_{\overline{n}} \times \mathcal{M}_{\overline{n}}$. We give a system of algebraic equations defining the image of this modular correspondence. We describe an algorithm to solve this system of algebraic equations which is much more efficient than a general purpose Gröbner basis algorithm. As an application, we explain how this algorithm can be used to speed up the initialisation phase of a point counting algorithm.
CommonCrawl
The cluster analysis of diurnal precipitation patterns is performed using daily precipitation of 59 stations in South Korea from 1973 to 1996 in four seasons of each year. The four seasons are shifted forward by 15 days compared to the general ones. The number of clusters is 15 in winter, 16 in spring and autumn, and 26 in summer, respectively. One of the classes is the totally dry day in each season, indicating that precipitation is never observed at any station. This is treated separately in this study. The distribution of the days among the clusters is rather uneven, with rather low area-mean precipitation occurring most frequently. These 4 (seasons) $\times$ 2 (wet and dry days) classes represent more than half (59%) of all days of the year. On the other hand, even the smallest seasonal clusters show at least $5\sim9$ members in the 24-year (1973-1996) period of classification. The cluster analysis is performed directly on the major $5\sim8$ non-correlated coefficients of the diurnal precipitation patterns obtained by factor analysis, in order to account for the spatial correlation. More specifically, hierarchical clustering based on Euclidean distance and Ward's method of agglomeration is applied. The relative variance explained by the clustering is 63% on average, with better capability in spring (66%) and winter (69%), but lower than average in autumn (60%) and summer (59%). Through applying weighted relative variances, i.e. dividing the squared deviations by the cluster averages, we obtain even better values, i.e. 78% on average, compared to the same index without clustering. This means that the highest variance remains in the clusters with more precipitation. Besides all statistics necessary for the validation of the final classification, 4 cluster centers are mapped for each season to illustrate the range of typical extremities, paired according to their area-mean precipitation or negative pattern correlation. Possible alternatives to the performed classification and reasons for their rejection are also discussed, with inclusion of a wide spectrum of recommended applications.
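A sketch of the pipeline described above (assuming Python with numpy, scikit-learn and scipy; the file name, the 6 retained factors and the 16 clusters are placeholder choices, not the study's actual settings):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

# precip: (n_days, 59) array of daily precipitation for one season,
# with totally dry days removed beforehand (they form their own class).
precip = np.load("season_precip.npy")   # hypothetical file name

# Step 1: factor analysis -> a few non-correlated coefficients per day.
fa = FactorAnalysis(n_components=6, random_state=0)
scores = fa.fit_transform(precip)

# Step 2: hierarchical clustering of the factor scores
# (Euclidean distance, Ward's method of agglomeration).
Z = linkage(scores, method="ward", metric="euclidean")
labels = fcluster(Z, t=16, criterion="maxclust")   # e.g. 16 clusters for spring

# Relative variance explained by the clustering.
total_ss = ((scores - scores.mean(axis=0)) ** 2).sum()
within_ss = sum(((scores[labels == c] - scores[labels == c].mean(axis=0)) ** 2).sum()
                for c in np.unique(labels))
print("relative variance explained:", 1 - within_ss / total_ss)
```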
CommonCrawl
Here are my notes on the paper titled above. Essentially this expands on previous works on phaseguides and shows the basic methods for making them and their physical properties (as well as various applications, which can be viewed in the linked video somewhere in the notes below). Constraints on the design result in a concession between its envisioned functionality and the sheer need to fill the channel network. A more complex use of capillary pressure is to use it both for passive valving and as a driving force. Introduced a technique to stepwise control the progress of a liquid-air interface (previous paper that they cite). Done by patterning stripes of material acting as a capillary pressure barrier, perpendicular to the advancement direction of the liquid-air meniscus. Do they also define design principles for phaseguides? For bumps with a vertical sidewall, $\theta_1 + \theta_2$ should be larger than 90 deg. to obtain complete pinning. The angle $\alpha_2$ in all of the above (the angle of the V) determines the stability of the phaseguide. A smaller angle means a higher likelihood of overflow occurring there? Channel connected to chamber in fig. BUT, the eq. does not hold for most designs… Instead, need to simulate breakthrough pressure at every point along the phaseguide? Used similar experimental procedure to prev. papers, so they seem to leave a lot out…. This video from the supplementary actually helped a lot in understanding what they were talking about. They actually have several phaseguides, one in the middle and several to fill dead angles and get rid of air bubbles.
CommonCrawl
$p \land q$ is true if and only if: $p$ is true and $q$ is true. This is called the conjunction of $p$ and $q$; $p$ and $q$ are the members of the conjunction. Let $p_1, p_2, \ldots, p_n$ be statements. The statement $p_1 \land p_2 \land \cdots \land p_n$ is true if and only if each of $p_1, p_2, \ldots, p_n$ is true, and is referred to as the conjunction of $P = \{p_1, p_2, \ldots, p_n\}$. The conjunction is used to symbolise any statement in natural language such that two substatements are held to be true simultaneously. Thus it is also used to symbolise the concept of but as well as and. The conjunction is also known as the logical product. The conjuncts are thence known as the factors of the logical product. Treatments which consider logical connectives as functions may refer to this operator as the conjunctive function.
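As a tiny illustration (a sketch, not part of the source definition; Python's built-in `and` and `all` play the roles of the binary and $n$-ary conjunction):

```python
from itertools import product

# Truth table of the binary conjunction p /\ q.
for p, q in product([True, False], repeat=2):
    print(p, q, p and q)

# The n-ary conjunction p_1 /\ p_2 /\ ... /\ p_n is true
# precisely when every member is true.
statements = [True, True, False]
print(all(statements))   # False
```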
CommonCrawl
Department of Mathematics, Faculty of Basic Science, Shahed University, Tehran, Iran. Let $S$ be a dense subsemigroup of $(0,+\infty)$. In this paper, we state the definition of thick near zero, and we also introduce a definition that is equivalent to the definition of piecewise syndetic near zero which was presented by Hindman and Leader. We define density near zero for subsets of $S$ by a collection of nonempty finite subsets of $S$, and we investigate conditions involving these concepts. D. De and N. Hindman, Image partition regularity near zero, Discrete Mathematics, 309 (2009) 3219-3232. D. De and R. K. Paul, Image partition regularity near zero with real entries, New York Journal of Mathematics, 17 (2011) 149-161. D. De and R. K. Paul, Combined Algebraic Properties of IP* and Central* Sets Near 0, International Journal of Mathematics and Mathematical Sciences, (2012) 1-7. E. Følner, On groups with full Banach mean values, Math. Scand., 17 (1955) 243-254. A. Frey, Studies on Amenable Semigroups, Thesis, University of Washington, 1960. N. Hindman and I. Leader, The Semigroup of Ultrafilters Near 0, Semigroup Forum, 59 (1999) 33-55. N. Hindman and D. Strauss, Algebra in the Stone-Čech Compactification: Theory and Applications, de Gruyter, Berlin, 2011. N. Hindman and D. Strauss, Density in Arbitrary Semigroups, Semigroup Forum, 73 (2006) 273-300. M. A. Tootkaboni and T. Vahed, The semigroup of ultrafilters near an idempotent of a semitopological semigroup, Topology and its Applications, 159, Issue 16 (2012), 3494-3503.
CommonCrawl
1. arXiv:1412.8520 Understanding and Designing Complex Systems: Response to "A framework for optimal high-level descriptions in science and engineering---preliminary report". James P. Crutchfield, Ryan G. James, Sarah Marzen, Dowman P. Varn. physics.stat-mech (cs.AI cs.CE cs.IT nlin.CD). 2. arXiv:1412.7737 Well-posedness of the Muskat problem with $H^2$ initial data. C. H. Arthur Cheng, Rafael Granero-Belinchón, Steve Shkoller. math.AP. 3. arXiv:1412.7488 Spectral gap for random-to-random shuffling on linear extensions. Arvind Ayyer, Anne Schilling, Nicolas M. Thiery. math.PR (math.CO). 4. arXiv:1412.3818 Linked determinantal loci and limit linear series. John Murray, Brian Osserman. math.AG. 5. arXiv:1412.4001 An Instability of the Standard Model Creates the Anomalous Acceleration Without Dark Energy. Joel Smoller, Blake Temple, Zeke Vogler. physics.gr-qc (math.AP physics.ACO physics.math-ph). 6. arXiv:1412.2690 Computational Mechanics of Input-Output Processes: Structured transformations and the $\epsilon$-transducer. Nix Barnett, James P. Crutchfield. physics.stat-mech (cs.IT math.DS). 7. arXiv:1412.2445 Fluctuations of Linear Eigenvalue Statistics of Random Band Matrices. Indrajit Jana, Koushik Saha, Alexander Soshnikov. math.PR. 8. arXiv:1412.0070 Improved bounds for the mixing time of the random-to-random insertion shuffle. Ben Morris, Chuan Qin. math.PR. 9. arXiv:1411.7666 No Quantum Brooks' Theorem. Steven Lu. physics.quant-ph (cs.IT). 10. arXiv:1411.5121 An Electronic Compendium of Extreme Functions for the Gomory--Johnson Infinite Group Problem. Matthias Köppe, Yuan Zhou. math.OC (math.CO). 11. arXiv:1411.4176 A Morse Lemma for quasigeodesics in symmetric spaces and euclidean buildings. Michael Kapovich, Bernhard Leeb, Joan Porti. math.GR (math.DG math.MG). 12. arXiv:1411.0231 Determining isotopy classes of crossing arcs in alternating links Anastasiia Tsvietkova math.GT. 13. arXiv:1410.8632 Three Ehrhart Quasi-polynomials. Velleda Baldoni, Nicole Berline, Jesús A. De Loera, Matthias Köppe, Michèle Vergne. math.CO. 14. arXiv:1410.8584 Light on the Infinite Group Relaxation. Amitabh Basu, Robert Hildebrand, Matthias Köppe. math.OC (math.CO). 15. arXiv:1410.8174 On the dynamics of lattice systems with unbounded on-site terms in the Hamiltonian. Bruno Nachtergaele, Robert Sims. physics.math-ph. 16. arXiv:1410.7842 Tree simplification and the 'plateaux' phenomenon of graph Laplacian eigenvalues. Naoki Saito, Ernest Woei. math.CO. 17. arXiv:1410.7810 On Burdet and Johnson's Algorithm for Integer Programming. Babak Moazzez, Kevin Cheung. math.OC. 18. arXiv:1410.6900 Generalizations of an Expansion Formula for Top to Random Shuffles. Roger Tian. math.CO. 19. arXiv:1410.5450 Dimension counts for limit linear series on curves not of compact type. Brian Osserman. math.AG. 20. arXiv:1410.5028 Diffraction Patterns of Layered Close-packed Structures from Hidden Markov Models. P. M. Riechers, D. P. Varn, J. P. Crutchfield. physics.mtrl-sci (cs.IT math.ST stat.TH). 21. arXiv:1410.0398 Product Vacua and Boundary State Models in d Dimensions. Sven Bachmann, Eman Hamza, Bruno Nachtergaele, Amanda Young. physics.math-ph. 22. arXiv:1409.8102 Boundedness of large-time solutions to a chemotaxis model with nonlocal and semilinear flux. JanBurczak, Rafael Granero-Belinchón. math.AP . 23. arXiv:1409.7299 A pattern avoidance criterion for free inversion arrangements. William Slofstra. math.CO. 24. arXiv:1409.6778 Metric projective geometry, BGG detour complexes and partially massless gauge theories. A. R. 
Gover, E. Latini, A. Waldron. physics.hep-th (math.DG math.RT physics.gr-qc physics.math-ph). 25. arXiv:1409.6049 On the numerical solution of second order differential equations in the high-frequency regime. James Bremer, Vladimir Rokhlin. math.NA. 26. arXiv:1409.5944 Gödel for Goldilocks: A Rigorous, Streamlined Proof of (a variant of) Gödel's First Incompleteness Theorem. Dan Gusfield. math.LO (cs.LO). 27. arXiv:1409.5930 Chaotic Crystallography: How the physics of information reveals structural order in materials. Dowman P. Varn, James P. Crutchfield. physics.mtrl-sci (cs.FL cs.IT nlin.CD physics.dis-nn). 28. arXiv:1409.4381 On the existence of nonoscillatory phase functions for second order differential equations in the high-frequency regime. Jhu Heitman, James Bremer, Vladimir Rokhlin. math.NA (physics.math-ph). 29. arXiv:1409.4100 On the asymptotics of Bessel functions in the Fresnel regime. Jhu Heitman, James Bremer, Vladimir Rokhlin, Bogdan Vioreanu. math.NA (math.CA physics.math-ph). 30. arXiv:1409.2920 Crystal structure on rigged configurations and the filling map. Anne Schilling, Travis Scrimshaw. math.CO (math.QA). 31. arXiv:1409.0971 Towards the Bertram-Feinberg-Mukai Conjecture. Naizhen Zhang. math.AG. 32. arXiv:1408.6876 Informational and Causal Architecture of Discrete-Time Renewal Processes. Sarah Marzen, James P. Crutchfield. physics.stat-mech (cs.IT math.ST nlin.CD stat.TH). 33. arXiv:1408.5949 Finding geodesics in a triangulated 2-sphere. Abigail Thompson. math.GT. 34. arXiv:1408.5339 Nonparametric estimation of dynamics of monotone trajectories. Debashis Paul, Jie Peng, Prabir Burman. math.ST (stat.TH). 35. arXiv:1408.4079 On the effect of boundaries in two-phase porous flow. Rafael Granero-Belinchón, Gustavo Navarro, Alejandro Ortega. math.AP. 36. arXiv:1408.2768 Global existence for some transport equations with nonlocal velocity. Hantaek Bae, Rafael Granero-Belinchón. math.AP. 37. arXiv:1408.2469 Solvability and regularity for an elliptic system prescribing the curl, divergence, and partial trace of a vector field on Sobolev-class domains. C. H. Arthur Cheng, Steve Shkoller. math.AP. 38. arXiv:1408.2311 Packing subgroups in solvable groups. Pranab Sardar. math.GT. 39. arXiv:1408.2020 On a nonlocal analog of the Kuramoto-Sivashinsky equation. Rafael Granero-Belinchón, John K. Hunter. math.AP. 40. arXiv:1408.0320 Crystal approach to affine Schubert calculus. Jennifer Morse, Anne Schilling. math.CO (math.AG math.QA). 41. arXiv:1408.0084 Billey-Postnikov decompositions and the fibre bundle structure of Schubert varieties. Edward Richmond, William Slofstra. math.AG (math.CO). 42. arXiv:1407.6742 Generalising the Willmore equation: submanifold conformal invariants from a boundary Yamabe problem. A. Rod Gover, Andrew Waldron. physics.hep-th (math.DG physics.gr-qc). 43. arXiv:1407.5977 Is Quantum Gravity a Chern-Simons Theory?. R. Bonezzi, O. Corradini, A. Waldron. physics.hep-th (math.DG physics.gr-qc). 44. arXiv:1407.5371 Analysis on an extended Majda--Biello system. Yezheng Li. math.AP (math.NA). 45. arXiv:1407.2793 On a generalized Keller-Segel system in one spatial dimension. Jan Burczak, Rafael Granero-Belinchón. math.AP. 46. arXiv:1407.2317 Bootstrap Percolation on the Hamming Torus with Threshold 2. Erik Slivken. math.PR. 47. arXiv:1407.2283 Group Testing under Sum Observations for Heavy Hitter Detection. Chao Wang, Qing Zhao, Chen-Nee Chuah. cs.IT (math.OC). 48. arXiv:1407.1479 On the impossibility of finite-time splash singularities for vortex sheets. 
Daniel Coutand, Steve Shkoller. math.AP. 49. arXiv:1406.6699 Limit linear series for curves not of compact type. Brian Osserman. math.AG. 50. arXiv:1406.5553 Stability of cellular automata trajectories revisited: branching walks and Lyapunov profiles. Jan M. Baetens, Janko Gravner. math.PR. 51. arXiv:1406.1590 Dynamics of Sound Waves in an Interacting Bose Gas. D. -A. Deckert, J. Fröhlich, P. Pickl, A. Pizzo. physics.math-ph (physics.quant-ph). 52. arXiv:1405.3314 Linked symplectic forms and limit linear series in rank 2 with special determinant. Brian Osserman, Montserrat Teixidor i Bigas. math.AG. 53. arXiv:1405.2966 Richard Stanley through a crystal lens and from a random angle. Anne Schilling. math.CO. 54. arXiv:1405.2937 Limit linear series moduli stacks in higher rank. Brian Osserman. math.AG. 55. arXiv:1405.2480 A Quantitative Doignon-Bell-Scarf Theorem. Iskander Aliev, Robert Bassett, Jesus A. De Loera, Quentin Louveaux. math.MG (math.CO). 56. arXiv:1405.1764 Supercharacter Theories and Semidirect Products. Alexander Lang. math.RT (math.GR). 57. arXiv:1404.2355 High-dimensional genome-wide association study and misspecified mixed model analysis. Jiming Jiang, Cong Li, Debashis Paul, Can Yang, Hongyu Zhao. math.ST (stat.TH). 58. arXiv:1404.1401 Dirac Equation with External Potential and Initial Data on Cauchy Surfaces. D. -A. Deckert, F. Merkl. physics.math-ph (math.AP physics.hep-th). 59. arXiv:1404.0065 Intermediate Sums on Polyhedra II: Bidegree and Poisson Formula. Velleda Baldoni, Nicole Berline, Jesús A. De Loera, Matthias Köppe, Michèle Vergne. math.CO. 60. arXiv:1403.7671 Morse actions of discrete groups on symmetric space. Michael Kapovich, Bernhard Leeb, Joan Porti. math.GR (math.DG math.GT). 61. arXiv:1403.5533 Lifschitz Tails for Random Schrödinger Operator in Bernoulli Distributed Potentials. Michael Bishop, Vita Borovyk, Jan Wehr. physics.math-ph. 62. arXiv:1403.4628 Equivariant Perturbation in Gomory and Johnson's Infinite Group Problem. III. Foundations for the k-Dimensional Case with Applications to k=2. Amitabh Basu, Robert Hildebrand, Matthias Köppe. math.OC (math.CO). 63. arXiv:1403.3966 On the Singularities in the Susceptibility Expansion for the Two-Dimensional Ising Model. Craig A. Tracy, Harold Widom. physics.math-ph. 64. arXiv:1403.3864 Information Anatomy of Stochastic Equilibria. Sarah Marzen, James P. Crutchfield. physics.stat-mech (cs.IT math.DS nlin.CD). 65. arXiv:1403.1603 Gevrey regularity for a class of dissipative equations with analytic nonlinearity. Hantaek Bae, Animikh Biswas. math.AP. 66. arXiv:1402.5582 Intercusp geodesics and the invariant trace field of hyperbolic 3-manifolds Walter Neumann, Anastasiia Tsvietkova math.GT. 67. arXiv:1402.2203 A uniform model for Kirillov-Reshetikhin crystals II. Alcove model, path model, and P=X. Cristian Lenart, Satoshi Naito, Daisuke Sagaki, Anne Schilling, Mark Shimozono. math.QA (math.RT). 68. arXiv:1401.7023 Severi degrees on toric surfaces. Fu Liu, Brian Osserman. math.AG (math.CO). 69. arXiv:1401.4250 Markov chains, $\mathscr R$-trivial monoids and representation theory. Arvind Ayyer, Anne Schilling, Benjamin Steinberg, Nicolas M. Thiery. math.CO (math.GR math.PR math.RA). 70. arXiv:1401.4237 The Cognitive Compressive Sensing Problem. Saeed Bagheri, Anna Scaglione. cs.IT (math.OC). 71. arXiv:1401.2111 The Kakimizu complex of a surface. Jennifer Schultens. math.GT. 72. arXiv:1401.1574 Quantum curves. Albert Schwarz. physics.math-ph (math.AG math.QA physics.hep-th). 73. 
arXiv:1401.0556 Stability of vector bundles on curves and degenerations. Brian Osserman. math.AG.
CommonCrawl
The purpose of this paper is to propose a model where trade has a direct and positive impact on growth rate of two trading nations beyond the level effect. We use the idea of virtual trade in intermediates induced by non- overlapping time zones following Marjit (2007) and Kikuchi and Marjit (2011) and show how trade can increase the equilibrium optimal rate of growth. In this structure the trade impact goes beyond the level effect. Unlike other contemporary models of trade and growth, it does not depend on trade induced innovations, learning by doing, heterogeneity etc. Typically standard models of trade cannot generate an automatic growth impact. Virtual trade may allow production to continue for 24x7 in separated time zones such as between US and India and that can lead to higher growth for both countries. Later we extend the model to include labor market. Does E-Verify Discriminate against Hispanics? The ratcheting up of immigration enforcement has resulted in a number of unintended consequences featured in the news, such as family separations. We focus on yet another potentially unintended consequence -- increased employment difficulties faced by Hispanics legally authorized to work following the enactment of employment verification (E-Verify) mandates. Using confidential data from the 2002-2012 National Latino Surveys, we exploit the temporal and spatial variation in the adoption of E-Verify mandates to assess how they have impacted perceptions of discrimination in the labor market held by native and naturalized Hispanics who are clearly authorized to work. While E-verify mandates should not adversely impact their job prospects, these individuals could be hurt if some employers avoid hiring them for fear they might be undocumented. We believe the analysis will enrich our understanding of the collateral damage of increased immigration enforcement. I will present some definitions of risk measures and relate them to current practice. The use of some risk measures increases the risk appetite. To avoid future financial crises, we need more than technical changes in capital requirements. The talk is a general talk without mathematics. Below is the schedule for the Economics Seminar Series for the academic year 2014-2015, along with the abstract for each seminar. A downloadable version of the full schedule can be found here. I examine the hypothesis that gender preferences significantly influence childbearing behavior and family size. Using micro data, I estimate a discrete hazard model, according to which the probability of a subsequent birth is dependent on the gender composition of surviving children, controlling for other characteristics, such as maternal age, education and religious affiliation. In this talk, I will address arbitrage theory when the horizon is random. This random horizon can represent the death time of an agent, a default time of a firm, or an occurrence time of an event that might affect the liquidity and the viability of the market. Thus, my ultimate goal is to explain --via quantitative and qualitative results-- the impact of the extra uncertainty on the market model. I will start by discussing the role of absence of arbitrage in portfolio analysis for discrete-time market models. Then, I will analyze the effect of additional information on the arbitrage theory in these models. In the continuous-time setting, the notion of non-arbitrage has many competing definitions. 
I will single out the one that plays a key role in extending the results of Arrow-Debreu and those of Long (about numeraire portfolio) to the continuous-time framework. Finally, if time permits it, I will expose numerous results that determine how the randomness in the horizon can affect the market's viability in continuous-time. We revisit a seminal life-cycle consumption/saving and labor/leisure model (Heckman 1974), and argue that the model, and its commonly used variations, have not been solved properly in the literature, thus missing many interesting solutions involving highly counterfactual findings. We solve the model completely, and resolve some of these counterfactual findings using recent evidences from the leisure studies, and we also discuss additional viable solutions and challenges. In this paper, we quantitatively evaluate the benefit of improving transportation infrastructure. We do so by developing a model of internal trade in which asymmetric states trade with each other. Firms compete oligopolistically at the industry level, allowing for markups to change with changes in transportation costs. We apply the model to measure the welfare effects of building a large road infrastructure project in India: the Golden Quadrilateral (GQ). After calibrating our model to rich plant-level and geospatial data, we find large gains: benefits exceed the initial investment in just two years. We also find that: (i) pro-competitive gains are approximately 20% of total gains and (ii) the size of welfare gains are very heterogeneous across states. In this paper we propose a model of political change in which a revolutionary vanguard interacts with a continuum of individuals. The vanguard can mobilize a sizable contingent to push for a new regime and it reaps the bulk of the benefits-compared to individuals-if change occurs. Although both types stand to gain when supporting the prevailing regime, the vanguard prefers overthrowing the current regime while individuals favor it. We show that higher benefits from power make the vanguard more aggressive and individuals more inclined to join the insurrection, thus, increasing the likelihood of political change. We also show that vanguard's power is less effective the more individuals favor the current regime. In our model, vanguard and individuals also differ in their information about the underlying strength of the current regime. Hence, we also provide results stemming from the interplay of mobilization power, differential benefits, and quality of information. In particular, vanguard's influence is increasing in mobilization power and in its share of benefits from change, even when it does not have superior information. A number of free trade agreements (FTAs) between the United States and the rest of the world have become a matter of political and economic interest. This study provides the first assessment of the impact of FTAs on individual U.S. states. We assess the average treatment effects of such FTAs on state-level imports and exports using a gravity model and state-to-country bilateral trade data over the 2008-2011 period. We find that imports of U.S. states from countries with which the United States has an FTA are about 45 percent lower than imports from other non-FTA countries, whereas exports of U.S. states to countries with which the United States has an FTA are 234 percent larger than exports to other non-FTA countries. Monetary utility functions are -- except for the expected value -- not of von Neumann-Morgenstern type. 
In case the utility function has convex level sets in the set of probability measures on the real line, we can give some characterisation that comes close to the vN-M form. For coherent utility functions this was solved by Ziegel. The general concave case under the extra assumptions of weak compactness, was solved by Stephan Weber. In the general case the utility functions are only semi continuous. Using the fact that law determined utility functions are monotone with respect to convex ordering, we can overcome most of the technical problems. The characterisation is similar to Weber's theorem except that we need vN_M utility functions that take the value $-\infty$. Having convex level sets can be seen as a weakened form of the independence axiom in the vN-M theorem. Alan Blinder (2004, 2007) has characterized the monetary policy committee (MPC) of Sweden's Riksbank as "individualistic," meaning that members of the committee tend to vote their true preferences, without deferring to the Governor (the committee chairman) and without regard for achieving consensus. Chappell, McGregor, and Vermilyea (2014) have questioned this conclusion, providing econometric evidence that the Riksbank's Governor has considerable influence over other committee members. Their evidence suggests that the MPC should be considered "autocratically collegial." A third possibility is that members consensually make decisions; in Blinder's typology, they are "genuinely collegial."We use updated voting information from the Riksbank to explore these alternative possible characterizations of committee behavior. This brief summarizes findings from our study of some important macroeconomic implications of South Africa's demographic transition. "Demographic transition" refers to the pattern of changes in fertility, mortality, and population growth that have been observed with great regularity around the world. Initially death rates and birth rates are high and roughly equal, implying low rates of population growth. Then follows a decline in death rates but birth rates typically remain high, thus generating population growth. Eventually birth rates fall, slowing the rate of population growth. The transition ends when birth rates and death rates have both stabilized at a new low level, implying a return to low (or zero) population growth. Based on some useful analytical constructs such as life cycle deficit, support ratio, and fiscal support, we consider (1) the sustainability of life cycle wealth and hence long‐run public sector solvency; (2) the prospects for reaping the first and second demographic dividends, and (3) policy implications of our findings. The first dividend (FD) occurs when the growth of effective producers exceeds the growth of effective consumers. The basis of the first dividend is that the extra income not consumed can be used to fund asset accumulation or transfers. On this basis, South Africa began enjoying the first dividend approximately in 1980 long before its demographic window of opportunity opened. Given that it is projected to exit the window in 2065 by UN estimates, the country is exposed to a first demographic dividend potential that could last for 85 years. The support ratio yields information about the potential for reaping demographic dividend. We find that consumers will continue effectively to outweigh producers substantially in the economy implying that demand for government transfers will become a permanent feature of South African economy. 
Also, we find that South Africa actually contributes to, rather than dampen, global financial imbalances. Even though South Africa remains within the demographic window of opportunity, a combination of low support ratio and high consumption rates will impede the second demographic dividend. To avoid this, the country will have to make tough policy choices, particularly in terms of growing income through job creation and in the reassignment of savings from transfer wealth to asset accumulation (radical aggregate portfolio choices). In conclusion, failure to "bite the bullet" would mean demand for increased intermediation from (external) surplus saving units to (domestic) deficit spending units thus leading to pressure on the current account. The current account imbalances translate into exchange rate pressure, to macroeconomic instability, political backlash and ultimately crisis of sorts. In this study we use the Households' Income and Expenditure Survey of Iran (HIES) database to investigate the incidence of overeducation. Our descriptive analysis shows that during 2001-2012 the ratio of workers with more than 12 years of education (one year or more of higher education) has steadily increased in many low skill jobs that require less than or equal to twelve years of schooling. Our econometric analysis showed that the odds of overeducation for women were higher than men. We also observed that likelihood of overeducation had a strong negative association with a worker's experience and a positive correlation with being a female worker. Additional econometric tests revealed that overeducation had a negative impact on a worker's wage in private sector but the opposite was true for the public sector jobs. This paper investigates whether gender imbalance may be conducive to domestic terrorism in developing countries. A female-dominant society may not provide sufficient law and order to limit political violence and terrorism, especially since societies in developing countries primarily turn to males for policing and paramilitary forces. Other considerations support female imbalance resulting in grievance-generated terrorism. Because male dominance may also be linked to terrorism, empirical tests are ultimately needed to support our prediction. Based on panel data for 128 developing countries for 1975-2011, we find that female gender imbalance results in more total and domestic terrorist attacks. This gender imbalance does not affect transnational terrorism or domestic terrorism in developed countries. Further tests show that the addition of males reduces terrorism only when institutions are weak. A strong political and institutional environment can provide the enduring checks against terrorism, which is known to have adverse effects on the economy. Sharp increases in volatility of the exchange rate and other financial indicators have become an issue in many countries that are exposed to globalized financial markets. As large inflows, sudden stops, and reversals of international capital often turn out to be an important cause of the problem, the international financial community has started reconsidering regulating international capital movement. This paper contributes to the literature and policy debates on the effectiveness of capital controls in Korea in three dimensions. First, we employ newly developed financial stress indexes (FSI) and National Financial Conditions Index (NFCI) in the analysis of the effects of capital mobility. 
Second, we investigate the extent of global and regional financial contagion using the FSIs and the transmission mechanism of external financial shocks represented by innovations of NFCI. We find strong evidence of structural breaks at the time of known breaks such as a big push for financial deregulations of the early 1990s and the East Asian financial crisis of 1997-8. We also find the pre- and post-break regimes are qualitatively very different. For instance, the notion that financial stresses are positively transmitted across the border applies only to the post-break regime but not to the pre-break regime. There is strong evidence that before the break - representing either financial deregulation or a major financial crisis - an external financial shock signals time to move capital into emerging market economies such as Korea while, after the break, the exact opposite is more likely. We thus employ a threshold VAR model in which the index of financial openness serves as a threshold variable to determine regime changes. The results support that financial deregulation and increased financial openness is related to the increase in cross-border contagion of financial shocks and the changes in financial transmission mechanism. This paper reviews the experience of the Middle East and North Africa (MENA) region in education attainment over the last four decades (1970-2010). It documents the following main findings: (a) all MENA countries experienced significant improvements in educational attainment over this period; (b) most MENA countries did better in this regard than comparators who had roughly the same education stocks in 1970; (c) collectively, the MENA region achieved a greater percentage increase in education than other regions; (d) the region's better performance was in part due to higher rates of public spending on education, better food sufficiency status and a lower initial stock of education in 1970 in comparison to most other developing country regions; and (e) the MENA region had among the lowest payoffs to public spending in terms of increments in education stock; the impressive advance in education was achieved at high cost. We experimentally study behavior in a common property renewable resource extraction game with multiple equilibria. In the experiment, pairs of subjects competitively extract and consume a renewable resource in continuous time. We find that play evolves over time into multiple steady states with heterogeneous extraction strategies that contain components predicted by equilibrium strategies. We find that simple rule-of-thumb strategies result in steady-state resource levels that are similar to the best equilibrium outcome. We also find that the sensitivity of more aggressive strategies to the starting resource level suggests that improvement in renewable resource extraction can be attained by ensuring a healthy initial resource level. Our experiment provides empirical evidence for equilibrium selection in this widely used differential game. Demand for disaster insurance is a topic of some interest in both economics and related fields of study. Low levels of observed demand are difficult to reconcile with the usual economically-orthodox models of consumer behavior centered on expected utility. Previous research based on surveys has suggested that consumers make insurance purchase decisions based predominantly on subjective, psychological factors as opposed to objective, economic or financial ones. 
In a forthcoming paper, I use a simple, structural approach to model consumers' insurance buying decisions. This model is calibrated to fit actual market share data for earthquake insurance in California, under different assumptions of financial benefits from insurance and consumer perceptions of earthquake risk. My results show that heterogeneity in the financial payoffs (derived from different levels of equity) is a more likely explanation of the observed low levels of demand. Further motivating a structural approach, I am able to evaluate counterfactual scenarios to changes in the menu of available insurance contracts, such as lowering the policy deductible. As stated in the UAE (Qatar) Vision 2021 (2025), one important strategic objective common to the two countries is to become knowledge-based economies. In order to assess their performance in this endeavor, this paper benchmarks them against 17 other countries having at least some (necessary, but not exclusive) similar characteristics in terms of the following criteria: not being too dissimilar in terms of size, location, being emerging economies, being natural resource rich, having an aspiration to become a knowledge-driven economy or already being one. The benchmarked list of countries is: Australia, Bahrain, Chile, Costa Rica, Finland, Israel, Kuwait, Malaysia, Norway, Oman, Poland, Saudi Arabia, Singapore, South Africa, South Korea, Tunisia and Turkey. The comparison between the selected countries is made in terms of indicators considered essential in order to be (or be on the right path to become) a knowledge-driven economy, which are subdivided into the four pillars suggested by the World Bank Institute: economy and regime, ICT, education, and innovation. It is clear that both the UAE and Qatar have achieved a great deal in a very short period of time, but they still lag behind top knowledge-based economies such as Finland and Korea, or even less advanced ones such as Costa Rica and Malaysia. On the stronger side, the UAE is in the top league in terms of firm-level technology absorption (World Economic Forum) and in terms of the number of telephones per capita. Similarly, Qatar is the leader in terms of intensity of local competition as well as internet access in schools. On the weaker side though, both countries show low technology exports as a proportion of GDP and have not gone far in terms of other indicators. Finding how far the UAE and Qatar are from achieving their ambitious goals and what needs to be improved has important policy implications. On the one hand, the identification of weaknesses would create awareness that eventually will support the countries' decision makers in the proper formulation of targeted policies. On the other hand, the identification of strengths will be useful not only for the policy makers, but also for foreign investors (or highly talented workers) who may consider establishing (pursuing) their business (career paths) in the country.
CommonCrawl
I'm trying to read (the introduction of) a survey by Toën on Derived Algebraic Geometry, specifically the "Simplicial Presheaves and Derived Algebraic Geometry" one. He motivates the introduction of DAG as a means to construct moduli spaces. His example is the moduli of linear representations of a group admitting a finite presentation. Now, DAG (AFAIU) enlarges the theory of stacks in two directions: a derived bit and a stacky bit. The derived bit concerns replacing rings with more general ring-like objects. The stacky bit comes from using stacks of $\infty$-categories. His motivation for the `stacky' direction comes from taking quotients of a 'rigidified' moduli problem, which admits a moduli space (the aforementioned tensor product). The problem here is the usual fact that you want to remember the isomorphism groups of objects. Unfortunately this only motivates us to introduce stacks, not $\infty$-stacks. So my question is: Can we complicate this example a bit more (but hopefully not too much) so that we need to use higher stacks? If we don't derive our geometry first, do we need to introduce higher stacks? Comments on the derived side are also very much appreciated, thanks! Thinking this way you might get the feeling that you'll probably never practically need more than 2-stacks. But once you start working in a homotopical context you run into higher stacks very quickly. There's a very natural example where you run into the full structure of $\infty$-stacks immediately, as explained by Toën and Vaquié: moduli of objects in a derived category. If you have a [nice] abelian category you can define a moduli stack of objects in it --- since objects have automorphisms, families of objects are naturally groupoids, so you find a stack. Suppose now you have a derived category [technically you need to "enhance" it to a dg or $A_\infty$ category or something equivalent, but let's ignore this]. In this context the presence of negative self-Exts of objects means that automorphisms may have automorphisms, which may have automorphisms..., i.e. you find higher homotopy groups of the natural moduli functor (which now lands in simplicial sets, or topological spaces). In any case you can make precise sense of moduli of objects in a derived category, and it is an $\infty$-stack! In summary, once you're in a derived context, your moduli functors naturally land not in sets, not in groupoids (which are the same as 1-truncated homotopy types), but in higher groupoids, aka homotopy types, aka simplicial sets. Such moduli problems may be representable by higher stacks. I think one place higher stacks come up is if you want to make a moduli space of objects in some derived category. Like if instead of moduli of vector bundles you want to do moduli of perfect complexes. Then your groups of automorphisms can have higher homotopy groups coming from possible negative self-Exts of your objects. Or again higher stacks could come up in the same classical way that higher categories come up: n-categories form an (n+1)-category. So for instance if you buy that stacks locally modelled on BG are interesting geometric objects, then if you want to consider a moduli of those it will have one more level of stackiness. I'll put this as a tentative answer, but I still hope someone who's actually worked with this stuff will come along and have a say. The moduli of linear categories (or abelian categories) is naturally a $2$-stack. You can take a look at the PhD thesis (in French) of Mathieu Anel (a former student of Bertrand Toën).
When you compute the tangent complex of this $2$-stack then you get the $2$-truncation of the Hochschild complex. If you consider the corresponding derived stack then you get the full Hochschild complex. By the way, higher stacks are introduced to allow quotients, while derived schemes are introduced to allow fiber products.
CommonCrawl
We consider semidefinite programs (SDPs) of size $n$ with equality constraints. In order to overcome scalability issues, Burer and Monteiro proposed a factorized approach based on optimizing over a matrix $Y$ of size $n\times k$ such that $X=YY^*$ is the SDP variable. The advantages of such a formulation are twofold: the dimension of the optimization variable is reduced, and positive semidefiniteness is naturally enforced. However, optimization in $Y$ is non-convex. In prior work, it has been shown that, when the constraints on the factorized variable regularly define a smooth manifold and $k$ is large enough, then for almost all cost matrices all second-order stationary points (SOSPs) are optimal. Importantly, in practice, one can only compute points which approximately satisfy necessary optimality conditions, leading to the question: are such points also approximately optimal? To this end, and under similar assumptions, we use smoothed analysis to show that approximate SOSPs for a randomly perturbed objective function are approximate global optima, with $k$ scaling like the square root of the number of constraints (up to log factors). We particularize our results to an SDP relaxation of phase retrieval.
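As a concrete, purely illustrative sketch of the factorized approach (not the algorithm analyzed in the paper), consider the max-cut-type SDP $\min \langle C, X\rangle$ subject to $\mathrm{diag}(X)=1$, $X\succeq 0$. The constraint $\mathrm{diag}(YY^\top)=1$ can be enforced by keeping the rows of $Y$ on the unit sphere, and plain projected gradient descent on $Y$ already illustrates the two advantages mentioned above; the cost matrix, step size and $k$ below are arbitrary demo choices.

```python
import numpy as np

def burer_monteiro(C, k, steps=2000, lr=1e-2, seed=0):
    """Minimize <C, YY^T> with the rows of Y constrained to the unit sphere."""
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n, k))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)        # diag(YY^T) = 1
    for _ in range(steps):
        grad = 2 * C @ Y                                  # gradient of <C, YY^T> for symmetric C
        Y -= lr * grad
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)     # project back onto the constraint set
    X = Y @ Y.T
    return X, float(np.sum(C * X))

if __name__ == "__main__":
    n, k = 30, 6                       # k on the order of sqrt(2n), as the theory suggests
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n))
    C = (A + A.T) / 2                  # a random symmetric cost matrix
    X, val = burer_monteiro(C, k)
    print("objective <C, X> =", val)
    print("max |diag(X) - 1| =", float(np.max(np.abs(np.diag(X) - 1))))
```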
CommonCrawl
Abstract: We investigate the following problem: When do two generalized real Bott manifolds of height 2 have isomorphic cohomology rings with $\mathbb Z/2$ coefficients, and when are they diffeomorphic? It turns out that in general cohomology rings with $\mathbb Z/2$ coefficients do not distinguish those manifolds up to diffeomorphism. This gives a negative answer to the cohomological rigidity problem for real toric manifolds posed earlier by Y. Kamishima and the present author. We also prove that generalized real Bott manifolds of height 2 are diffeomorphic if they are homotopy equivalent. Received in January 2009.
CommonCrawl
Abstract: A flexible (2 $\times$ 1) multiple-input, multiple-output (MIMO) antenna with an electromagnetic band gap (EBG) unit cell is designed and measured. The proposed MIMO antenna is based on a circular slotted patch antenna designed to work at 3.5 GHz. An EBG unit cell is used to reduce the effect of the inherent mutual coupling produced between the closely spaced MIMO elements. The results show that the measured mutual coupling has been reduced by 11.6 dB (from -10 dB to -21.6 dB), and the measured realized gain has been increased by 1.074 dBi (from 1.59 dBi to 2.66 dBi). An improvement of 2.4% in the simulated efficiency was also observed (from 78.7% to 81.1%). The proposed antenna also passed the flexibility test without a noticeable change in the S-parameters at the desired frequency.
CommonCrawl
Questions about algorithms that solve problems up to some bounded error. What is a good source for understanding the feedback edge/arc set problem? I tried Wikipedia and research papers; is there any easy tutorial? Delivering to two or more locations in one go while respecting deadlines? Clarification on NP-hardness and hardness of approximation results for set cover? Is this partitioning problem NP-complete? We know that minimum edge cover for ordinary graphs is polynomial-time solvable. Is the same true for hypergraphs? How to find a minimum cardinality maximal matching? I tried picking an edge at the highest-degree vertex, removing the other edges at that vertex, and so on. Is there an FPRAS for the number of min $st$-cuts in general graphs? Where can I find a list of problems reducible to max-flow and matching problems? I need such examples to learn and practice. What is a shearlet/shearlet transform and how can I use it? Different properties of Heavy-Hitters and Count-Min Sketch algorithms? Is deep learning appropriate to approximate dynamic programming problems? Are there matching upper bounds? What would be a generic strategy to prove $\alpha$-approximation guarantees? Why does economising the power series give me more error?
CommonCrawl
We design differentially private learning algorithms that are agnostic to the learning model, assuming access to a limited amount of unlabeled public data. First, we give a new differentially private algorithm for answering a sequence of $m$ online classification queries (given by a sequence of $m$ unlabeled public feature vectors) based on a private training set. Our private algorithm follows the paradigm of subsample-and-aggregate, in which any generic non-private learner is trained on disjoint subsets of the private training set, and then, for each classification query, the votes of the resulting ensemble of classifiers are aggregated in a differentially private fashion. Our private aggregation is based on a novel combination of the distance-to-instability framework [Smith & Thakurta 2013] and the sparse-vector technique [Dwork et al. 2009, Hardt & Talwar 2010]. We show that our algorithm makes a conservative use of the privacy budget. In particular, if the underlying non-private learner yields classification error at most $\alpha\in (0, 1)$, then our construction answers more queries, by at least a factor of $1/\alpha$ in some cases, than what is implied by a straightforward application of the advanced composition theorem for differential privacy. Next, we apply the knowledge transfer technique to construct a private learner that outputs a classifier, which can be used to answer an unlimited number of queries. In the PAC model, we analyze our construction and prove upper bounds on the sample complexity for both the realizable and the non-realizable cases. As in the non-private sample complexity, our bounds are completely characterized by the VC dimension of the concept class.
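To make the subsample-and-aggregate paradigm concrete, here is a minimal sketch of my own (it is not the paper's calibrated mechanism, which combines the distance-to-instability framework with the sparse-vector technique): the private set is split into disjoint chunks, one generic learner is trained per chunk, and each public query is answered by adding Laplace noise to the teachers' vote counts. The learner, the noise scale and the synthetic data are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ensemble(X_priv, y_priv, n_teachers=20, seed=0):
    """Train one generic (non-private) learner per disjoint chunk of the private data."""
    rng = np.random.default_rng(seed)
    chunks = np.array_split(rng.permutation(len(X_priv)), n_teachers)
    return [LogisticRegression(max_iter=1000).fit(X_priv[c], y_priv[c]) for c in chunks]

def private_predict(teachers, x_pub, eps=1.0, rng=None):
    """Aggregate the teachers' votes with Laplace noise (noisy argmax)."""
    rng = rng or np.random.default_rng()
    votes = np.zeros(2)
    for t in teachers:
        votes[int(t.predict(x_pub.reshape(1, -1))[0])] += 1
    noisy = votes + rng.laplace(scale=1.0 / eps, size=2)
    return int(np.argmax(noisy))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 5))
    y = (X[:, 0] + 0.3 * rng.standard_normal(2000) > 0).astype(int)
    teachers = train_ensemble(X, y)
    x_query = rng.standard_normal(5)          # an unlabeled public feature vector
    print("private label:", private_predict(teachers, x_query, eps=1.0))
```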
CommonCrawl
The solutions of a quadratic equation are called the roots of the quadratic equation. A quadratic equation is a second-degree polynomial equation, so it has two solutions, and these are called its roots. The roots are obtained by equating the quadratic expression to zero. $ax^2+bx+c = 0$ is a quadratic equation in standard algebraic form, and its solutions are given by the quadratic formula $x = \dfrac{-b \pm \sqrt{b^2-4ac}}{2a}$. The two roots of a quadratic equation are usually represented by the symbols alpha ($\alpha$) and beta ($\beta$) in mathematics.
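A small sketch of the formula in code, using complex arithmetic so that a negative discriminant still yields the two (possibly complex) roots $\alpha$ and $\beta$:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

alpha, beta = quadratic_roots(1, -5, 6)   # x^2 - 5x + 6 = 0
print(alpha, beta)                        # 3 and 2
```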
CommonCrawl
The first class of integrals involving the logarithm that I would like to explore is of the following form: $$\int_0^\infty \frac{\ln(x)}{ax^2+bx+c}\,dx,$$ that is, the integral from $x=0$ to $\infty$ of $\ln(x)$ times the reciprocal of any quadratic function of $x$. I choose to begin the post with this type of integral because it seems like it would be very difficult to evaluate (indeed, one may abandon all hope of finding an antiderivative of the integrand), but all steps taken in my solution of it are highly elementary and rely on nothing other than functional properties of the integrand and the basic properties of an integral. In other words, it is a perfect example of how a chain of trivial equalities can result in something that seems highly non-trivial. Now let us turn our attention to a different class of integrals, like the fourth and final example that I posed: Unfortunately, integrals like this will not yield to such elementary methods... to solve this one, we'll have to use the Gamma function. If you haven't already, go back and read this post - it's essential to what I'm about to do. Just one more note: I'm not going to go into as much depth with this type of integral as I did for the previous class of integrals. The evaluation of some integrals of this type is replete with messy algebra, and such integrals are best left to computer algebra software (like Wolfram) to evaluate. So I'll do this example, explain how to use the technique for other examples, and then end the post. Okay, time to tell the truth: this integral could have been evaluated in a much easier way. If you were paying attention, you'd have noticed that a simple substitution of $x\to\sin x$ would transform the integral into ...which we evaluated using very elementary methods in this post. No Gamma function necessary! Thus, the purpose of that unnecessary ordeal with the Gamma function was to demonstrate a more widely applicable technique of integration (namely, differentiating the Beta function) as opposed to the limited "trick" applied to the integral in my other post. And with that, I conclude this post!
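As a quick numerical sanity check on one member of this family (my own instance, not necessarily the one treated in the post), $\int_0^\infty \frac{\ln x}{x^2+1}\,dx = 0$, since the substitution $x\to 1/x$ maps the piece on $[1,\infty)$ onto the negative of the piece on $[0,1]$:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.log(x) / (x**2 + 1)
i1, _ = quad(f, 0, 1)          # negative contribution near 0
i2, _ = quad(f, 1, np.inf)     # equal and opposite contribution out to infinity
print(i1, i2, i1 + i2)         # the sum is ~0 up to quadrature error
```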
CommonCrawl
In the following, we present the setting for the numerical experiments.
Numerical grid: $N_x=300$ in space, $N_t=100$ in time.
Figure 1: $\alpha_c=\alpha_s=0$.
Figure 2: $\alpha_c=0.01$, $\alpha_s=0$.
Figure 3: $\alpha_c=0$, $\alpha_s=0.65$.
Figure 4a: Optimal control for $\alpha_c=\alpha_s=0$. In red, the controllable subdomain; in blue, the stationary optimal controls.
Figure 4b: Optimal control for $\alpha_c=0.01$, $\alpha_s=0$. In red, the controllable subdomain; in blue, the stationary optimal controls.
Figure 4c: Optimal control for $\alpha_c=0$, $\alpha_s=0.65$. In red, the controllable subdomain; in blue, the stationary optimal controls.
Figure 5a: Optimal state for $\alpha_c=\alpha_s=0$. In red, the target $z$; in blue, the stationary optimal states.
Figure 5b: Optimal state for $\alpha_c=0.01$, $\alpha_s=0$. In red, the target $z$; in blue, the stationary optimal states.
Figure 5c: Optimal state for $\alpha_c=0$, $\alpha_s=0.65$. In red, the target $z$; in blue, the stationary optimal states.
Figure 6a: Optimal adjoint for $\alpha_c=\alpha_s=0$.
Figure 6b: Optimal adjoint for $\alpha_c=0.01$, $\alpha_s=0$.
Figure 6c: Optimal adjoint for $\alpha_c=0$, $\alpha_s=0.65$.
CommonCrawl
This paper presents the solution for a fractional Bergman's minimal blood glucose-insulin model expressed by Atangana-Baleanu-Caputo fractional order derivative and fractional conformable derivative in Liouville-Caputo sense. Applying homotopy analysis method and Laplace transform with homotopy polynomial we obtain analytical approximate solutions for both derivatives. Finally, some numerical simulations are carried out for illustrating the results obtained. In addition, the calculations involved in the modified homotopy analysis transform method are simple and straightforward. Keywords: Bergman's model, fractional conformable derivative, Atangana-Baleanu fractional derivative, Laplace transform, modified homotopy analysis transform method. B. S. Alkahtani, O. J. Algahtani, R. S. Dubey and P. Goswami, The solution of modified fractional bergman's minimal blood glucose-insulin model, Entropy, 19 (2017), 114. doi: 10.3390/e19050114. A. Atangana, Derivative with a New Parameter: Theory, Methods and Applications, Academic Press, New York, 2016. doi: 10.1016/B978-0-08-100644-3.00001-5. A. Atangana, Fractional Operators with Constant and Variable Order with Application to Geo-Hydrology, Academic Press, London, 2018. A. Atangana and K. M. Owolabi., New numerical approach for fractional differential equations, Mathematical Modelling of Natural Phenomena, 13 (2018), 1-21. doi: 10.1051/mmnp/2018010. A. Atangana, Blind in a commutative world: Simple illustrations with functions and chaotic attractors, Chaos, Solitons & Fractals, 114 (2018), 347-363. doi: 10.1016/j.chaos.2018.07.022. A. Atangana and J. F. Gómez Aguilar, Decolonisation of fractional calculus rules: Breaking commutativity and associativity to capture more natural phenomena, The European Physical Journal Plus, 133 (2018), 1-23. A. Atangana and E. F. D. Goufo, On the mathematical analysis of Ebola hemorrhagic fever: Deathly infection disease in West African countries, BioMed Research International, 2014 (2014), Article ID 261383, 7 pages. doi: 10.1155/2014/261383. A. Atangana and B. S. T. Alkahtani, Modeling the spread of Rubella disease using the concept of with local derivative with fractional parameter, Complexity, 21 (2016), 442-451. doi: 10.1002/cplx.21704. A. Atangana and D. Baleanu, New fractional derivatives with nonlocal and non-singular kernel: Theory and application to heat transfer model, Therm Sci., 20 (2016), 763-769. doi: 10.2298/TSCI160111018A. R. N. Bergman, Y. Z. Ider, C. R. Bowden and C. Cobelli, Quantitative estimation of insulin sensitivity, American Journal of Physiology-Endocrinology And Metabolism, 236 (1979), 667-677. doi: 10.1152/ajpendo.1979.236.6.E667. A. Caumo, C. Cobelli and M. Omenetto, Overestimation of minimal model glucose effectiveness in presence of insulin response is due to under modeling, American Journal of Physiology, 278 (1999), 481-488. A. De Gaetano and O. Arino, Mathematical modelling of the intravenous glucose tolerance test, Journal of Mathematical Biology, 40 (2000), 136-168. doi: 10.1007/s002850050007. L. C. Gatewood, E. Ackerman, J. W. Rosevear, G. D. Molnar and T. W. Burns, Tests of a mathematical model of the blood-glucose regulatory system, Computional Biomedical Research, 2 (1968), 1-14. doi: 10.1016/0010-4809(68)90003-7. A. Fabre and J. Hristov, On the integral-balance approach to the transient heat conduction with linearly temperature-dependent thermal diffusivity, Heat and Mass Transfer, 53 (2017), 177-204. doi: 10.1007/s00231-016-1806-5. J. 
Hristov, Steady-state heat conduction in a medium with spatial non-singular fading memory: derivation of Caputo-Fabrizio space-fractional derivative with Jeffrey's kernel and analytical solutions, Thermal Science, 1 (2016), 115-115. R. Jain, K. Arekar and R. Shanker Dubey, Study of Bergman's minimal blood glucose-insulin model by Adomian decomposition method, Journal of Information and Optimization Sciences, 38 (2017), 133-149. doi: 10.1080/02522667.2016.1187919. F. Jarad, E. Ugurlu, T. Abdeljawad and D. Baleanu, On a new class of fractional operators, Advances in Difference Equations, 2017 (2017), Paper No. 247, 16 pp. doi: 10.1186/s13662-017-1306-z. R. Khalil, M. Al Horani, A. Yousef and M. Sababheh, A new definition of fractional derivative, Journal of Computational and Applied Mathematics, 264 (2014), 65-70. doi: 10.1016/j.cam.2014.01.002. S. Kumar, A. Kumar and I. K. Argyros, A new analysis for the Keller-Segel model of fractional order, Numerical Algorithms, 75 (2017), 213-228. doi: 10.1007/s11075-016-0202-z. S. Kumar, A new analytical modelling for telegraph equation via Laplace transform, Appl. Math. Modell, 38 (2014), 3154-3163. doi: 10.1016/j.apm.2013.11.035. S. Kumar and M. M. Rashidi, New analytical method for gas dynamic equation arising in shock fronts, Comput. Phy. Commun, 185 (2014), 1947-1954. doi: 10.1016/j.cpc.2014.03.025. G. A. Losa, On the fractal design in human brain and nervous tissue, Applied Mathematics, 5 (2014), 1725-1732. doi: 10.4236/am.2014.512165. V. F. Morales-Delgado, J. F. Gómez-Aguilar, S. Kumar and M. A. Taneco-Hernández, Analytical solutions of the Keller-Segel chemotaxis model involving fractional operators without singular kernel, The European Physical Journal Plus, 133 (2018), 200. doi: 10.1140/epjp/i2018-12038-6. Z. Odibat and A. S. Bataineh, An adaptation of homotopy analysis method for reliable treatment of strongly nonlinear problems: Construction of homotopy polynomials, Math. Meth. Appl. Sci, 38 (2015), 991-1000. doi: 10.1002/mma.3136. I. Podlubny, Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and some of Their Applications, Academic Press, an Diego, California, USA, 1999. Figure 2. Numerical simulations for the blood glucose concentration $G(t)$ , the effect of active insulin $X(t)$ and the blood insulin concentration $I(t)$ for several values of $\alpha_1-\beta$ , $\alpha_2-\beta$ and $\alpha_3-\beta$ . Figure 1. Numerical simulations for the blood glucose concentration G(t), the effect of active insulin X(t) and the blood insulin concentration I(t) for several values of α, β and γ.
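For orientation, the classical integer-order Bergman minimal model that these fractional formulations generalize can be integrated directly; the sketch below is only a baseline, with illustrative parameter values (not those used in the paper) and a simple exponential return of plasma insulin to its basal level.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters: glucose effectiveness, remote-insulin kinetics, insulin action gain
p1, p2, p3 = 0.03, 0.02, 1.3e-5          # 1/min, 1/min, 1/min per (uU/ml)
n_clear, Gb, Ib = 0.1, 90.0, 7.0         # insulin clearance rate, basal glucose, basal insulin

def bergman(t, y):
    G, X, I = y
    dG = -p1 * (G - Gb) - X * G          # blood glucose concentration G(t)
    dX = -p2 * X + p3 * (I - Ib)         # effect of active (remote) insulin X(t)
    dI = -n_clear * (I - Ib)             # blood insulin I(t) relaxing to basal
    return [dG, dX, dI]

y0 = [290.0, 0.0, 360.0]                 # post-bolus initial conditions (illustrative)
sol = solve_ivp(bergman, (0.0, 180.0), y0, max_step=1.0)
print("G(180 min) =", sol.y[0, -1])
```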
CommonCrawl
The following are very familiar and basic items, individually. (1) The number $a(n)$ of rectangles (parallel to the axes) in an $n\times n$ square grid. (2) The number $b(n)$ of cubes (parallel to the axes) in an $n\times n\times n$ cube. However, I could not find a reference to a direct bijective proof for $a(n)=b(n)$. Can you provide such an argument or reference? Let $h$ be the side of the inner cube, and let $(i,j,k)$ be its corner nearest the origin. Then we have $0\le i,j,k < n-h+1 \le n$. It is easy to see that these are inverse.
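Both counts have the same closed form $\left(\tfrac{n(n+1)}{2}\right)^2$: a rectangle is a choice of two of the $n+1$ grid lines in each direction, and a cube of side $h$ occupies one of $(n-h+1)^3$ positions. A quick brute-force confirmation of $a(n)=b(n)$ for small $n$:

```python
from math import comb

def rectangles(n):
    # choose 2 of the n+1 vertical and 2 of the n+1 horizontal grid lines
    return comb(n + 1, 2) ** 2

def cubes(n):
    # a cube of side h has (n - h + 1)^3 axis-parallel positions
    return sum((n - h + 1) ** 3 for h in range(1, n + 1))

for n in range(1, 11):
    assert rectangles(n) == cubes(n) == (n * (n + 1) // 2) ** 2
print("a(n) = b(n) = (n(n+1)/2)^2 checked for n = 1..10")
```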
CommonCrawl
where $x$ denotes the rank number of a gene, $x_0$ the expression level of the most abundant gene, and $k$ and $x_1$ are cell-type- and/or experiment-specific parameters. In the simulation, expression ranks are assigned uniformly at random to transcripts of the reference, and subsequently their expression level, in number of molecules and relative abundance, is determined according to the law in Formula 1. Naturally, depending on the corresponding settings, a more or less substantial part of the transcripts from the reference annotation will remain unexpressed in the simulated run. After the number of RNA molecules has been determined for each transcript, in silico expressed transcripts are assigned individual variations in transcription start and in the length of the attached poly-A tail. The FLUX SIMULATOR models differences in transcription start by random variables under an exponential model with a mean around 10 nt. During poly-adenylation in the nucleus, usually 200-250 adenine residues are added to the primary transcript. Disregarding other poly-adenylation mechanisms, such as cytoplasmic polyadenylation, and the exact mechanisms of degradation by exo- and endonucleases, our model describes poly-A lengths by random sampling under a Gaussian distribution with a mean of 125 nt and shape adapted such that >99.5% of the random variables fall in the interval [0;250].
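Since Formula 1 itself is not reproduced above, the sketch below uses a stand-in Zipf-like rank law with an exponential cutoff; $x_0$, $k$ and $x_1$ play the roles named in the text (top expression level, decay exponent, cutoff rank), but the functional form and the parameter values here are assumptions for illustration only.

```python
import numpy as np

def simulate_expression(n_transcripts, x0=1e6, k=0.6, x1=5000, seed=0):
    """Assign random ranks and convert them to molecule counts with a Zipf-like law (stand-in for Formula 1)."""
    rng = np.random.default_rng(seed)
    ranks = rng.permutation(n_transcripts) + 1                 # uniform random rank per transcript
    level = x0 * ranks ** (-k) * np.exp(-(ranks / x1) ** 2)    # assumed rank/expression law
    molecules = np.rint(level).astype(int)                     # number of RNA molecules
    rel_abundance = molecules / molecules.sum()                # relative abundance
    return ranks, molecules, rel_abundance

ranks, mols, rel = simulate_expression(20000)
print("transcripts left unexpressed:", int((mols == 0).sum()))
```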
CommonCrawl
Proceedings of The 28th Conference on Learning Theory, PMLR 40:364-390, 2015. Motivated by a sampling problem basic to computational statistical inference, we develop a toolset based on spectral sparsification for a family of fundamental problems involving Gaussian sampling, matrix functionals, and reversible Markov chains. Drawing on the connection between Gaussian graphical models and the recent breakthroughs in spectral graph theory, we give the first nearly linear time algorithm for the following basic matrix problem: Given an $n\times n$ Laplacian matrix $\mathbf{M}$ and a constant $-1 \leq p \leq 1$, provide efficient access to a sparse $n\times n$ linear operator $\tilde{\mathbf{C}}$ such that $\mathbf{M}^{p} \approx \tilde{\mathbf{C}} \tilde{\mathbf{C}}^{\top}$, where $\approx$ denotes spectral similarity. When $p$ is set to $-1$, this gives the first parallel sampling algorithm that is essentially optimal both in total work and randomness for Gaussian random fields with symmetric diagonally dominant (SDD) precision matrices. It only requires nearly linear work and $2n$ i.i.d. random univariate Gaussian samples to generate an $n$-dimensional i.i.d. Gaussian random sample in polylogarithmic depth. The key ingredient of our approach is an integration of spectral sparsification with a multilevel method: our algorithms are based on factoring $\mathbf{M}^{p}$ into a product of well-conditioned matrices, then introducing powers and replacing dense matrices with sparse approximations. We give two sparsification methods for this approach that may be of independent interest. The first invokes Maclaurin series on the factors, while the second builds on our new nearly linear time spectral sparsification algorithm for random-walk matrix polynomials. We expect these algorithmic advances will also help to strengthen the connection between machine learning and spectral graph theory, two of the most active fields in understanding large data and networks.
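For contrast with the nearly linear time sampler above, the textbook baseline draws $x \sim \mathcal{N}(0, \mathbf{M}^{-1})$ from a Cholesky factorization $\mathbf{M} = LL^\top$ and one triangular solve. The sketch below does this densely for a small shifted grid Laplacian purely as a reference point; it is not the paper's algorithm and it scales far worse.

```python
import numpy as np

def grid_laplacian(m, shift=1e-3):
    """Graph Laplacian of an m x m grid, shifted to be positive definite (an SDD matrix)."""
    n = m * m
    M = np.zeros((n, n))
    for i in range(m):
        for j in range(m):
            u = i * m + j
            for di, dj in ((1, 0), (0, 1)):
                if i + di < m and j + dj < m:
                    v = (i + di) * m + (j + dj)
                    M[u, u] += 1; M[v, v] += 1
                    M[u, v] -= 1; M[v, u] -= 1
    return M + shift * np.eye(n)

M = grid_laplacian(10)
L = np.linalg.cholesky(M)                      # M = L L^T
z = np.random.default_rng(0).standard_normal(M.shape[0])
x = np.linalg.solve(L.T, z)                    # x = L^{-T} z has covariance (L L^T)^{-1} = M^{-1}
print(x[:5])
```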
CommonCrawl
Esmaeelzadeh, F., Kamyabi Gol, R., Raisi Tousi, R. (2014). A Class of compact operators on homogeneous spaces. Sahand Communications in Mathematical Analysis, 01(2), 39-45. 1Department of Mathematics, Bojnourd Branch, Islamic Azad University, Bojnourd, Iran. 2Department of Mathematics, Center of Excellence in Analysis on Algebraic Structures (CEAAS), Ferdowsi University of Mashhad, P. O. Box 1159, Mashhad 91775, Iran. 3Department of Mathematics, Ferdowsi University of Mashhad, P. O. Box 1159, Mashhad 91775, Iran. Let $\varpi$ be a representation of the homogeneous space $G/H$, where $G$ is a locally compact group and $H$ is a compact subgroup of $G$. For an admissible wavelet $\zeta$ for $\varpi$ and $\psi \in L^p(G/H),\ 1\leq p <\infty$, we determine a class of bounded compact operators which are related to continuous wavelet transforms on homogeneous spaces; these are called localization operators. S. T. Ali, J-P. Antoine and J-P. Gazeau, Coherent States, Wavelets and Their Generalizations, Springer-Verlag, New York, 2000. F. Esmaeelzadeh, R. A. Kamyabi Gol and R. Raisi Tousi, On the continuous wavelet transform on homogeneous spaces, Int. J. Wavelets Multiresolut. Inf. Process., Vol. 10, No. 4 (2012). G. B. Folland, A Course in Abstract Harmonic Analysis, CRC Press, 1995. K. Zhu, Operator Theory in Function Spaces, Mathematical Surveys and Monographs, Vol. 138, 2007. M. W. Wong, Wavelet Transform and Localization Operators, Birkhäuser Verlag, Basel-Boston-Berlin, 2002.
CommonCrawl
The St. Čapek Crowd Control Academy, June, 2115. Commander Gall is worried. On one hand, the graduating class this year is the most promising in the institution's history, every cadet's reasoning skills honed to perfection. On the other, General Domin is visiting and has insisted on writing and administering the robostanchion exam himself, despite the fact that robostanchion squads as a crowd-control technique came after his time, and he doesn't always grasp their limitations. A cadet can deploy an inactive robostanchion, commanding it to teleport itself to any unoccupied point on the field—which happens to be an infinite integer lattice—and become active. A cadet can move an active robostanchion, commanding it to teleport to a different, unoccupied point on the field. The robostanchion remains active. A cadet can recall an active robostanchion, commanding it to teleport back to the hangar and become inactive. Ordinarily, there are only two ways to fail a robostanchion exam: (1) damaging the squad by running out of power or (2) not achieving the requested formation within the generous time limit. Now the general, to gratify his ego, has decided that his version of the exam should be more challenging, and he has added (3) ever failing to use all of the available power. Commander Gall, appropriately mortified, has, after much obsequious persuasion, gotten the general to agree to some concessions. First, the general will allow cadets four commands before he enforces his additional requirement. Second, the general has agreed to only ask for connected formations, i.e., formations where every robostanchion is joined to the others by at least one laser barrier. While further promises from the general are improbable, there is one more fact working in the cadets' favor: To prevent cheating, school policy is that cadets are always given squads of distinct sizes. For instance, if one cadet takes the exam with a squad of 90, no other cadet will have exactly 90 robostanchions at their command. Unsure whether these facts are enough to contain the general's caprice, Gall consults with Chief Engineer Fabry, asking for a worst-case analysis. What does Fabry reply? That is, in the worst case possible, how many cadets can Gall expect to fail? Clarifications: General Domin is sensible enough not to ask for formations that require more robostanchions than are in the squad, nor for formations that violate his condition (3). Answer: one. The only impossible formation is a $3\times3$ square of robots with the center removed. Since all legal moves after the first four are reversible, in order to show a formation can be created from nothing, it suffices to show that it can be reduced to a small square of four robots. Call a robot with exactly one robot next to it a loner. We first show that any formation with loners can be reduced to a small square. Starting with such a formation, remove all loners, and continue removing newly created loners until there are none. What remains is a loop. Unless the formation is now a small square, the top edge of the formation must have 3 robots in a row, say at (0,0), (1,0) and (2,0). Deploy one of the removed robots to (1,1), then move (0,0)'s other neighbor to (0,1). This creates a small square of robots: after removing all loners until there are none, only this square will remain. So, assume there are no loners. Suppose without loss of generality that the lowest row in the formation has a $y$-coordinate of $0$, and on that row, the lowest $x$ coordinate is $0$.
This implies that there are robots at (0,0), (1,0), and (0,1): the robot at (0,0) is not a loner, so it has two neighbors, and by the choice of (0,0) they cannot lie below it or to its left. If there is a robot at (1,1), that means we have the small square, so assume there isn't one. Furthermore, (1,0) can't have its second neighbor to the south, so there must be a robot at (2,0). Case 1: Neither (2,1) nor (1,2) has a robot. Move (2,0) to (1,1). This means that (2,0)'s old neighbor at (3,0) will now be a loner, so the formation is now reducible. Case 2: Exactly one of (2,1) or (1,2) has a robot. Move that robot to (1,1). The moved robot's two neighbors are now loners. Case 3: Both (1,2) and (2,1) have robots. (1,2) must have two neighbors. If they are both at (0,2) and (2,2), then this is the bad formation, so assume that only one of (0,2) and (2,2) is filled. By symmetry, we can assume (0,2) is unfilled. This means that the other neighbor of (0,1) must be at (-1,1). If there were a robot at (-1,0), there would be a small square of four robots, contradicting that the entire formation is a loop. So, (-1,0) is empty. Case 3a: (-2,0) has a robot. Move that robot to (-1,0). The two neighbors that (-2,0) had are now loners. Case 3b: (-2,0) doesn't have a robot. Move (2,0) to (-1,0). The two neighbors that (2,0) had are now loners. Thus, in all cases except for the $3\times 3$ square without its center, the formation can be made. To see why this formation cannot be made, you can check that no moves can be made from it. If you were only given the exact number of robots needed, no deployments are possible. No moves are possible: a move destroys two lasers, so it must create two as well, but every spot you can move a robot to would give it 0, 1, 3 or 4 lasers. Finally, no recalls are possible, since a recall would destroy two lasers but only one robot.
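The final claim, that no moves are available from the $3\times3$ ring, can also be checked by brute force. The sketch below assumes that the power in use equals the number of laser barriers (orthogonally adjacent pairs of robots), so that a legal mid-exam move must leave that count unchanged, and it tries every single-robot relocation within a generous window; deploys and recalls are ruled out separately by the counting argument above.

```python
from itertools import product

ring = {(x, y) for x, y in product(range(3), range(3))} - {(1, 1)}   # 3x3 square minus its center

def lasers(cells):
    """Number of laser barriers, i.e. orthogonally adjacent pairs of occupied points."""
    return sum((x + 1, y) in cells for (x, y) in cells) + \
           sum((x, y + 1) in cells for (x, y) in cells)

base = lasers(ring)                                                   # 8 barriers
window = set(product(range(-2, 5), range(-2, 5)))                     # generous search area
moves = [(r, t) for r in ring for t in window - ring
         if lasers((ring - {r}) | {t}) == base]
print("laser-preserving moves from the ring:", moves)                 # expect []
```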
CommonCrawl
Previously, I have created puzzles which used only one of the four mathematical operations ($+$, $-$, $\times$, and $\div$). In this Sudoku I have combined all four operations in a single puzzle. I call my new puzzle 'Bochap', which comes from a popular Hokkien word in Singapore. 'Bochap' can be interpreted as 'oblivious to everything else'. Thus, the name of this new puzzle suggests that puzzlers are so focused on finding a solution that they reach a point of being 'Bochap' or oblivious of everything going on around them. This reminds me of the story of Archimedes who, as he was drawing some diagrams on the ground, was approached by an invading Roman soldier who drew a sword over his head and asked him who he was. Too absorbed in the diagrams on the ground, Archimedes did not give his name but, protecting the dust with his hands, exclaimed: "Don't disturb my circles!" As a result, he was slaughtered by the Roman invader! The puzzle can be solved with the help of small clue-numbers which are either placed on the border lines between selected pairs of neighbouring squares of the grid or placed after slash marks on the intersections of border lines between two diagonally adjacent squares. Each small clue-number is the result produced in any order by a pair of digits in the two squares that are horizontally or vertically or diagonally adjacent to each other, using the mathematical operation indicated: addition ($+$), subtraction ($-$), multiplication ($\times$), and division ($\div$). The position of each pair of diagonally adjacent squares is indicated by either two forward slash marks // or two backward slash marks \\. The clue \\14+ on the intersection of border lines between the diagonally adjacent squares (r6c2) and (r7c3) means that possible pairs of numbers in the squares are: 7 and 7; 6 and 8, 8 and 6; 5 and 9, or 9 and 5. The clue 3$\div$ on the border line between the squares (r9c4) and (r9c5) means that possible pairs of numbers for these squares can be from the following combinations: 1 and 3, 3 and 1; 2 and 6, 6 and 2; 3 and 9, or 9 and 3. A Word document containing the Sudoku problem for classroom use can be found here.
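In the same spirit as the worked examples above, a small helper can enumerate the digit pairs compatible with a clue; the treatment of subtraction as an absolute difference and of division as larger over smaller is my reading of the 'in any order' rule stated above.

```python
def clue_pairs(target, op):
    """All ordered digit pairs (1-9) whose result under op equals the clue number."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: abs(a - b),
           'x': lambda a, b: a * b,
           '/': lambda a, b: max(a, b) / min(a, b)}
    return [(a, b) for a in range(1, 10) for b in range(1, 10) if ops[op](a, b) == target]

print(clue_pairs(14, '+'))   # [(5, 9), (6, 8), (7, 7), (8, 6), (9, 5)]
print(clue_pairs(3, '/'))    # [(1, 3), (2, 6), (3, 1), (3, 9), (6, 2), (9, 3)]
```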
CommonCrawl
Iodine 125 is a commonly used source for permanently implanted interstitial brachytherapy. Iodine 125 is manufactured into resin spheres which are encapsulated within a thin titanium shell. 125I has a half-life of 59.4 days, decaying through electron capture to 125Te, a stable isotope. Gamma photons are released following the decay, with a maximum energy of 35 keV; the mean energy is 28 keV. The specific activity of 125I is $6.4 \times 10^2$ TBq/g. 125I is always supplied as an encapsulated seed, with a 0.05 mm shell of titanium around the iodine source. The iodine is located on resin spheres within the capsule or adsorbed on a silver rod. The source capsule for 125I seeds is fragile and can be damaged. It is important to perform wipe tests before use of the seeds to ensure the capsule is intact. Iodine may become highly reactive if stored at cold temperatures, and it is important not to cold sterilise the seeds. Seeds should be stored within a lead safe at least 3 mm thick. 125I seeds are typically unsterile when delivered and must be sterilised prior to use. As they are a permanent implant, they do not require specific disposal, although if the patient dies within a year of insertion, cremation is not recommended unless the implants are removed from the body first. 125I has a relatively long half-life (compared with 103Pd), which may complicate dose calculations over time.
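With a 59.4-day half-life, the activity of a seed decays as $A(t) = A_0\, e^{-\ln(2)\, t / T_{1/2}}$. A small sketch (the initial activity below is only a typical order of magnitude, not a prescribed value):

```python
import math

T_HALF = 59.4                                   # days

def activity(a0, t_days):
    """Remaining activity after t_days, given initial activity a0."""
    return a0 * math.exp(-math.log(2) * t_days / T_HALF)

a0 = 0.4                                        # mCi per seed, illustrative only
for t in (0, 30, 59.4, 180, 365):
    print(f"day {t:>5}: {activity(a0, t):.3f} mCi")
```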
CommonCrawl
A binary operation on compatible matrices over a ring $R$. There are several such operations. The multiplication corresponds to composition of linear maps. If $A$ is the matrix of a linear map $\alpha : R^m \rightarrow R^n$ and $B$ is the matrix of a linear map $\beta : R^n \rightarrow R^p$, then $AB$ is the matrix of the linear map $\alpha\beta : R^m \rightarrow R^p$.
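The statement above composes maps left to right ($\alpha\beta$ meaning 'first $\alpha$, then $\beta$'), which matches the row-vector convention in which $A$ has shape $m\times n$ and acts by $v \mapsto vA$; under the column-vector convention the composite would instead have matrix $BA$. A quick numerical check of the row-vector reading:

```python
import numpy as np

m, n, p = 2, 3, 4
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(m, n))      # matrix of alpha : R^m -> R^n (row-vector convention)
B = rng.integers(-3, 4, size=(n, p))      # matrix of beta  : R^n -> R^p
v = rng.integers(-3, 4, size=(1, m))      # a row vector in R^m

assert np.array_equal((v @ A) @ B, v @ (A @ B))     # applying alpha then beta equals applying AB
print("matrix of the composite has shape", (A @ B).shape)   # (m, p)
```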
CommonCrawl