What defines a large gauge transformation, really? Consider SU(2) YMH theory without fermions. Three-space is compactified by adding the sphere at infinity, and configuration space is the space of all static finite-energy 3D gauge and Higgs fields in a particular gauge. Since we are looking for finite-energy solutions to the field equations, the SU(2) gauge field must tend to a pure gauge and the Higgs field to its vacuum value. This means we can map the sphere at infinity $S^2$ into the Higgs vacuum manifold $SU(2)\sim S^3$. Now $$S^1\wedge S^2\sim S^3,$$ and the map $$S^3\rightarrow S^3,$$ where the target space is the Higgs vacuum manifold three-sphere, leads to nontrivial topology. 1) What is a "loop" in configuration space in physical terms? 2) Is this a constraint in our theory? If so, what is this constraint? 3) Why does considering a loop in configuration space mean considering $$S^1\times S^2$$?
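For reference, the standard homotopy statement behind the question (my own summary, not part of the original post): a based loop of boundary maps $S^2\to S^3$ is the same thing as a single map from the smash product, so such loops are classified by

```latex
\pi_1\bigl(\operatorname{Maps}_*(S^2,S^3)\bigr)
  \;\cong\; \bigl[S^1\wedge S^2,\,S^3\bigr]
  \;=\; \bigl[S^3,\,S^3\bigr]
  \;=\; \pi_3(S^3)\;\cong\;\mathbb{Z},
```

and the integer attached to a map $g\colon S^3\to SU(2)$ is the usual winding number

```latex
n[g] \;=\; \frac{1}{24\pi^2}\int_{S^3}
  \epsilon^{ijk}\,\operatorname{tr}\!\bigl[(g^{-1}\partial_i g)(g^{-1}\partial_j g)(g^{-1}\partial_k g)\bigr]\,d^3x .
```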
CommonCrawl
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was "one of the two most important definitions in the entire course." Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go. A linear transformation $T\colon U\to V$ is a function that carries elements of the vector space $U$ (called the domain) to the vector space $V$ (called the codomain), and which has two additional properties: $T(\mathbf{u}_1+\mathbf{u}_2)=T(\mathbf{u}_1)+T(\mathbf{u}_2)$ for all $\mathbf{u}_1,\,\mathbf{u}_2\in U$, and $T(\alpha\mathbf{u})=\alpha T(\mathbf{u})$ for all $\mathbf{u}\in U$ and all $\alpha\in\mathbb{C}$. The two defining conditions in the definition of a linear transformation should "feel linear," whatever that means. Conversely, these two conditions could be taken as exactly what it means to be linear. As every vector space property derives from vector addition and scalar multiplication, so too, every property of a linear transformation derives from these two defining properties. While these conditions may be reminiscent of how we test subspaces, they really are quite different, so do not confuse the two. Here are two diagrams that convey the essence of the two defining properties of a linear transformation. In each case, begin in the upper left-hand corner, and follow the arrows around the rectangle to the lower right-hand corner, taking two different routes and doing the indicated operations labeled on the arrows. There are two results there. For a linear transformation these two expressions are always equal. A couple of words about notation. $T$ is the name of the linear transformation, and should be used when we want to discuss the function as a whole. $T(\mathbf{u})$ is how we talk about the output of the function; it is a vector in the vector space $V$.
When we write $T(\mathbf{u}+\mathbf{v})=T(\mathbf{u})+T(\mathbf{v})$, the plus sign on the left is the operation of vector addition in the vector space $U$, since $\mathbf{u}$ and $\mathbf{v}$ are elements of $U$. The plus sign on the right is the operation of vector addition in the vector space $V$, since $T(\mathbf{u})$ and $T(\mathbf{v})$ are elements of the vector space $V$. These two instances of vector addition might be wildly different. Let us examine several examples and begin to form a catalog of known linear transformations to work with. It can be just as instructive to look at functions that are not linear transformations. Since the defining conditions must be true for all vectors and scalars, it is enough to find just one situation where the properties fail. Linear transformations have many amazing properties, which we will investigate through the next few sections. However, as a taste of things to come, here is a theorem we can prove now and put to use immediately. Suppose $T\colon U\to V$ is a linear transformation. Then $T(\mathbf{0})=\mathbf{0}$. Return to Example NLT and compute $S(\mathbf{0})$ to quickly see again that $S$ is not a linear transformation, while in Example LTPM compute $T(\mathbf{0})=\begin{bmatrix}0&0\\0&0\end{bmatrix}$ as an example of Theorem LTTZZ at work. Throughout this chapter, and Chapter R, we will include drawings of linear transformations. We will call them "cartoons," not because they are humorous, but because they will only expose a portion of the truth. A Bugs Bunny cartoon might give us some insights on human nature, but the rules of physics and biology are routinely (and grossly) violated. So it will be with our linear transformation cartoons. Here is our first, followed by a guide to help you understand how these are meant to describe fundamental truths about linear transformations, while simultaneously violating other truths.
Here we picture a linear transformation $T\colon U\to V$, where this information will be consistently displayed along the bottom edge. The ovals are meant to represent the vector spaces, in this case $U$, the domain, on the left and $V$, the codomain, on the right. Of course, vector spaces are typically infinite sets, so you will have to imagine that characteristic of these sets. A small dot inside of an oval will represent a vector within that vector space, sometimes with a name, sometimes not (in this case every vector has a name). The sizes of the ovals are meant to be proportional to the dimensions of the vector spaces. However, when we make no assumptions about the dimensions, we will draw the ovals as the same size, as we have done here (which is not meant to suggest that the dimensions have to be equal). To convey that the linear transformation associates a certain input with a certain output, we will draw an arrow from the input to the output. So, for example, in this cartoon we suggest that $T(\mathbf{x})=\mathbf{y}$. Nothing in the definition of a linear transformation prevents two different inputs being sent to the same output, and we see this in $T(\mathbf{u})=\mathbf{v}=T(\mathbf{w})$. Similarly, an output may not have any input being sent its way, as illustrated by no arrow pointing at $\mathbf{t}$. In this cartoon, we have captured the essence of our one and only theorem about linear transformations, Theorem LTTZZ, $T(\mathbf{0}_U)=\mathbf{0}_V$. On occasion we might include this basic fact when it is relevant, at other times maybe not.
Note that the definition of a linear transformation requires that it be a function, so every element of the domain should be associated with some element of the codomain. This will be reflected by never having an element of the domain without an arrow originating there. These cartoons are of course no substitute for careful definitions and proofs, but they can be a handy way to think about the various properties we will be studying. If you give me a matrix, then I can quickly build you a linear transformation. Always. First a motivating example and then the theorem. So the multiplication of a vector by a matrix "transforms" the input vector into an output vector, possibly of a different size, by performing a linear combination. And this transformation happens in a "linear" fashion. This "functional" view of the matrix-vector product is the most important shift you can make right now in how you think about linear algebra. Here is the theorem, whose proof is very nearly an exact copy of the verification in the last example. Suppose that $A$ is an $m\times n$ matrix. Define a function $T\colon\mathbb{C}^n\to\mathbb{C}^m$ by $T(\mathbf{x})=A\mathbf{x}$. Then $T$ is a linear transformation. So Theorem MBLT gives us a rapid way to construct linear transformations. Grab an $m\times n$ matrix $A$, define $T(\mathbf{x})=A\mathbf{x}$, and Theorem MBLT tells us that $T$ is a linear transformation from $\mathbb{C}^n$ to $\mathbb{C}^m$, without any further checking. We can turn Theorem MBLT around. You give me a linear transformation and I will give you a matrix. Example MFLT was not an accident. Consider any one of the archetypes where both the domain and codomain are sets of column vectors (Archetype M through Archetype R) and you should be able to mimic the previous example.
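A matrix-based transformation of this sort is easy to check numerically. A minimal sketch in Python/NumPy (the matrix $A$ here is an arbitrary choice of mine, not one from the text):

```python
import numpy as np

# An arbitrary 2x3 matrix: T(x) = A x maps R^3 to R^2
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    return A @ x

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(3), rng.standard_normal(3)
alpha = 2.5

# The two defining properties of a linear transformation
print(np.allclose(T(u1 + u2), T(u1) + T(u2)))     # additivity
print(np.allclose(T(alpha * u1), alpha * T(u1)))  # homogeneity
# Theorem LTTZZ: a linear transformation sends zero to zero
print(np.allclose(T(np.zeros(3)), np.zeros(2)))
```

All three checks print True for any choice of $A$, vectors, and scalar, since matrix-vector multiplication distributes over addition and commutes with scaling.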
Here is the theorem, which is notable since it is our first occasion to use the full power of the defining properties of a linear transformation when our hypothesis includes a linear transformation. Suppose that $T\colon\mathbb{C}^n\to\mathbb{C}^m$ is a linear transformation. Then there is an $m\times n$ matrix $A$ such that $T(\mathbf{x})=A\mathbf{x}$. So if we were to restrict our study of linear transformations to those where the domain and codomain are both vector spaces of column vectors (Definition VSCV), every matrix leads to a linear transformation of this type (Theorem MBLT), while every such linear transformation leads to a matrix (Theorem MLTCV). So matrices and linear transformations are fundamentally the same. We call the matrix $A$ of Theorem MLTCV the matrix representation of $T$. We have defined linear transformations for more general vector spaces than just $\mathbb{C}^n$. Can we extend this correspondence between linear transformations and matrices to more general linear transformations (more general domains and codomains)? Yes, and this is the main theme of Chapter R. Stay tuned. For now, let us illustrate Theorem MLTCV with an example. It is the interaction between linear transformations and linear combinations that lies at the heart of many of the important theorems of linear algebra. The next theorem distills the essence of this. The proof is not deep, the result is hardly startling, but it will be referenced frequently. We have already passed by one occasion to employ it, in the proof of Theorem MLTCV. Paraphrasing, this theorem says that we can "push" linear transformations "down into" linear combinations, or "pull" linear transformations "up out" of linear combinations. We will have opportunities to both push and pull. Some authors, especially in more advanced texts, take the conclusion of Theorem LTLC as the defining condition of a linear transformation. This has the appeal of being a single condition, rather than the two-part condition of Definition LT.
(See Exercise LT.T20.) Our next theorem says, informally, that it is enough to know how a linear transformation behaves for inputs from any basis of the domain, and all the other outputs are described by a linear combination of these few values. Again, the statement of the theorem, and its proof, are not remarkable, but the insight that goes along with it is very fundamental. Suppose $U$ is a vector space with basis $B=\{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_n\}$ and the vector space $V$ contains the vectors $\mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_n$ (which may not be distinct). Then there is a unique linear transformation, $T\colon U\to V$, such that $T(\mathbf{u}_i)=\mathbf{v}_i$, $1\leq i\leq n$. You might recall facts from analytic geometry, such as "any two points determine a line" and "any three non-collinear points determine a parabola." Theorem LTDB has much of the same feel. By specifying the $n$ outputs for inputs from a basis, an entire linear transformation is determined. The analogy is not perfect, but the style of these facts is not very dissimilar from Theorem LTDB. Notice that the statement of Theorem LTDB asserts the existence of a linear transformation with certain properties, while the proof shows us exactly how to define the desired linear transformation. The next two examples show how to compute values of linear transformations that we create this way. Here is a third example of a linear transformation defined by its action on a basis, only with more abstract vector spaces involved. Informally, we can describe Theorem LTDB by saying "it is enough to know what a linear transformation does to a basis (of the domain)." The definition of a function requires that for each input in the domain there is exactly one output in the codomain. However, the correspondence does not have to behave the other way around.
An output from the codomain could have many different inputs from the domain which the transformation sends to that output, or there could be no inputs at all which the transformation sends to that output. To formalize our discussion of this aspect of linear transformations, we define the pre-image. Suppose that $T\colon U\to V$ is a linear transformation. For each $\mathbf{v}$, define the pre-image of $\mathbf{v}$ to be the subset of $U$ given by $T^{-1}(\mathbf{v})=\{\,\mathbf{u}\in U\mid T(\mathbf{u})=\mathbf{v}\,\}$. In other words, $T^{-1}(\mathbf{v})$ is the set of all those vectors in the domain $U$ that get "sent" to the vector $\mathbf{v}$. The pre-image is just a set; it is almost never a subspace of $U$ (you might think about just when $T^{-1}(\mathbf{v})$ is a subspace, see Exercise ILT.T10). We will describe its properties going forward, and it will be central to the main ideas of this chapter. We can combine linear transformations in natural ways to create new linear transformations. So we will define these combinations and then prove that the results really are still linear transformations. First the sum of two linear transformations. Suppose that $T\colon U\to V$ and $S\colon U\to V$ are two linear transformations with the same domain and codomain. Then their sum is the function $T+S\colon U\to V$ whose outputs are defined by $(T+S)(\mathbf{u})=T(\mathbf{u})+S(\mathbf{u})$. Notice that the first plus sign in the definition is the operation being defined, while the second one is the vector addition in $V$. (Vector addition in $U$ will appear just now in the proof that $T+S$ is a linear transformation.) Definition LTA only provides a function. It would be nice to know that when the constituents ($T$, $S$) are linear transformations, then so too is $T+S$. Suppose that $T\colon U\to V$ and $S\colon U\to V$ are two linear transformations with the same domain and codomain.
Then $T+S\colon U\to V$ is a linear transformation. Suppose that $T\colon U\to V$ is a linear transformation and $\alpha\in\mathbb{C}$. Then the scalar multiple is the function $\alpha T\colon U\to V$ whose outputs are defined by $(\alpha T)(\mathbf{u})=\alpha T(\mathbf{u})$. Given that $T$ is a linear transformation, it would be nice to know that $\alpha T$ is also a linear transformation. Suppose that $T\colon U\to V$ is a linear transformation and $\alpha\in\mathbb{C}$. Then $\alpha T\colon U\to V$ is a linear transformation. Now, let us imagine we have two vector spaces, $U$ and $V$, and we collect every possible linear transformation from $U$ to $V$ into one big set, and call it $LT(U,V)$. Definition LTA and Definition LTSM tell us how we can "add" and "scalar multiply" two elements of $LT(U,V)$. Theorem SLTLT and Theorem MLTLT tell us that if we do these operations, then the resulting functions are linear transformations that are also in $LT(U,V)$. Hmmmm, sounds like a vector space to me! A set of objects, an addition and a scalar multiplication. Why not? Suppose that $U$ and $V$ are vector spaces. Then the set of all linear transformations from $U$ to $V$, $LT(U,V)$, is a vector space when the operations are those given in Definition LTA and Definition LTSM. Suppose that $T\colon U\to V$ and $S\colon V\to W$ are linear transformations. Then the composition of $S$ and $T$ is the function $S\circ T\colon U\to W$ whose outputs are defined by $(S\circ T)(\mathbf{u})=S(T(\mathbf{u}))$. Given that $T$ and $S$ are linear transformations, it would be nice to know that $S\circ T$ is also a linear transformation. Theorem CLTLT (Composition of Linear Transformations is a Linear Transformation): Suppose that $T\colon U\to V$ and $S\colon V\to W$ are linear transformations. Then $S\circ T\colon U\to W$ is a linear transformation.
Here is an interesting exercise that will presage an important result later. In Example STLT compute (via Theorem MLTCV) the matrix of $T$, $S$ and $T+S$. Do you see a relationship between these three matrices? In Example SMLT compute (via Theorem MLTCV) the matrix of $T$ and $2T$. Do you see a relationship between these two matrices? Here is the tough one. In Example CTLT compute (via Theorem MLTCV) the matrix of $T$, $S$ and $S\circ T$. Do you see a relationship between these three matrices?
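The exercise above can be explored numerically. A sketch in Python/NumPy (all matrices are arbitrary choices of mine, not the ones from the cited examples): the matrix of a linear transformation is recovered, as in the proof of Theorem MLTCV, by applying it to the standard basis; the matrices of a sum, a scalar multiple, and a composition then turn out to be the matrix sum, scalar multiple, and product.

```python
import numpy as np

def matrix_of(f, n):
    # Theorem MLTCV construction: column i is f applied to the
    # i-th standard basis vector of R^n
    return np.column_stack([f(e) for e in np.eye(n)])

# Arbitrary matrices: T, S map R^3 -> R^2, and R maps R^2 -> R^4
A = np.array([[1.0, 0.0, 2.0], [3.0, -1.0, 0.0]])
B = np.array([[0.0, 1.0, 1.0], [2.0, 0.0, -2.0]])
C = np.array([[1.0, 1.0], [0.0, 2.0], [1.0, 0.0], [2.0, -1.0]])
T = lambda x: A @ x
S = lambda x: B @ x
R = lambda x: C @ x

print(np.allclose(matrix_of(lambda x: T(x) + S(x), 3), A + B))  # sum of matrices
print(np.allclose(matrix_of(lambda x: 2 * T(x), 3), 2 * A))     # scalar multiple
print(np.allclose(matrix_of(lambda x: R(T(x)), 3), C @ A))      # matrix product
```

All three checks print True, which rather gives away the relationships the exercise asks about.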
APS -2008 APS March Meeting - Event - Effect of compositional heterogeneity on the phase structure and crystallization behavior of polypropylene in-reactor alloys. Although the compositional heterogeneity and chain structure of PP/EPR in-situ blends have been extensively investigated, little is known about the conclusive relationship between the molecular/phase structure and the ultimate mechanical properties, owing to the complex composition of such systems. A systematic study was conducted on the compositional heterogeneity, phase structure, and the crystallization and subsequent melting behavior of two in-reactor alloys, EB-P and EP-P. The composition of the alloys and the chain structure of each component were characterized by preparative TREF and $^{13}$C-NMR techniques. The results showed that the excellent balance between toughness and rigidity of EB-P primarily benefits from the polyethylene homopolymer (HPE) phase and the ethylene-$\alpha$-olefin copolymer (EC) component, which is enriched at the interface between the dispersed phase (HPP) and the matrix (HPE). As for EP-P, the amorphous EC and the interpenetrating phase are mainly responsible for the outstanding low-temperature impact toughness. *Financial support from the Ministry of Science and Technology is gratefully acknowledged.
If $ p(x) $ and $ q(y) $ are two complete types over a set $ C $, $ p $ and $ q $ are said to be almost orthogonal if there is a unique complete type in the variables $ (x,y) $ extending $ p(x) \cup q(y) $. That is, if $ a, a' \models p $ and $ b, b' \models q $, then $ ab \equiv_C a'b' $. If $ p(x) $ and $ q(y) $ are stationary types in a stable theory, then one can easily check that $ p(x) $ and $ q(y) $ are almost orthogonal if and only if $ a \downarrow_C b $ for all $ a $ realizing $ p(x) $ and $ b $ realizing $ q(y) $. In a stable theory $ T $, two stationary types $ p $ and $ q $ are orthogonal if $ p|C $ and $ q|C $ are almost orthogonal for every set $ C $ containing the bases of $ p $ and of $ q $. Here $ p|C $ and $ q|C $ denote the unique non-forking extensions of $ p $ and $ q $ to $ C $. It turns out that if $ p|C $ and $ q|C $ fail to be almost orthogonal for some $ C $, then $ p|C' $ and $ q|C' $ also fail to be almost orthogonal for all $ C' \supseteq C $. Therefore, it suffices to check the orthogonality at sufficiently large sets $ C $, and orthogonality depends only on the parallelism class of $ p $ and $ q $. Roughly speaking, $ p $ and $ q $ are orthogonal if there are no interesting relations between realizations of $ p $ and realizations of $ q $. For example, if $ p $ and $ q $ are the generic types of two strongly minimal sets $ P $ and $ Q $, then $ p $ and $ q $ are orthogonal if and only if there are no finite-to-finite correspondences between $ P $ and $ Q $, i.e., no definable sets $ C \subset P \times Q $ with $ C $ projecting onto $ P $ and onto $ Q $ with finite fibers in both directions. The relation of non-orthogonality is an equivalence relation on strongly minimal sets, or more generally, on stationary types of U-rank 1. If $ p $ and $ q $ are two non-orthogonal types of rank 1, then $ p $ and $ q $ have the same underlying geometry.
A theory is uncountably categorical if and only if it is $\omega$-stable and unidimensional (i.e., every pair of stationary non-algebraic types is non-orthogonal).
I always thought that Hasse's bound is sharp (at least for elliptic curves). In other words, I always thought that given a prime number $p$, I can find two elliptic curves $E_1,E_2$ over $\mathbb F_p$ such that $\#E_1 = \lceil 1+p-2\sqrt p \rceil$ and $\#E_2 = \lfloor 1+p+2\sqrt p\rfloor$. But is this even true? If so, is there an easy way to construct these curves? I've seen the proof of how to obtain these bounds, but I don't think it gives me any information on the sharpness of the bound. This is only a very partial answer, reflecting only what I would do for a given rather small prime $p$. The character of the answer is experimental; the method is exhaustive enumeration. So for a small $p$ we may use computer assistance, below Sage, to validate or invalidate the claim, and the question wants a way to detect the two elliptic curves $y^2=x^3+4$ and $y^2=x^3+3$ that attain the bounds. I do not see a constructive method that hits with precision one of the values $(a,b)$ for minimal order $6$ and/or maximal order $18$. ....: print( "p = %3s :: order = %s :: %s solutions, first five are %s" So we have in most cases many solutions. Sometimes we can use in $y^2=x^3+ax+b$ the values $a=0,\pm1$, and we find a suitable pair. But to have a "straightforward decision"... then "something" in the structure of the elliptic curves must be predictable. At any rate, here comes the counterquestion: what exactly is expected as a "good answer", a "good constructive solution", to the OP?
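A brute-force version of the experiment in plain Python rather than Sage (the variable names and the choice $p=7$ are mine): enumerate all nonsingular curves $y^2=x^3+ax+b$ over $\mathbb{F}_p$, count points, and see whether both ends of the Hasse interval are attained.

```python
import math

def curve_orders(p):
    """Group orders of all elliptic curves y^2 = x^3 + a*x + b over F_p."""
    # number of square roots of each residue mod p
    sqrts = {}
    for y in range(p):
        sqrts[y * y % p] = sqrts.get(y * y % p, 0) + 1
    orders = set()
    for a in range(p):
        for b in range(p):
            if (4 * a ** 3 + 27 * b ** 2) % p == 0:
                continue  # singular, not an elliptic curve
            affine = sum(sqrts.get((x ** 3 + a * x + b) % p, 0) for x in range(p))
            orders.add(affine + 1)  # + point at infinity
    return orders

p = 7
orders = curve_orders(p)
lo = math.ceil(p + 1 - 2 * math.sqrt(p))   # 3
hi = math.floor(p + 1 + 2 * math.sqrt(p))  # 13
print(lo in orders, hi in orders)  # True True: both endpoints attained for p = 7
```

For $p=7$ the curves $y^2=x^3+4$ (order 3) and $y^2=x^3+3$ (order 13) hit the two endpoints, mirroring the $p=11$ pair mentioned in the answer.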
Syllabus: See the schedule of topics. Prof. Resnik is an awesome teacher! He is an expert in not merely natural language processing and linguistics, but he also teaches it in an easy-to-understand way. He is very nice to students and his class is organized very well. Go for his class and you'll enjoy it. Our final project is Learning Depression Patterns from Facebook and Reddit Data, which is a lot of fun. His homework has only 3 grades, high-pass, middle-pass and low-pass; of course you will get high-pass given enough effort! Great teacher! From ourumd.com he gives 72% A. Due to confidentiality concerns, I have put my homework on a separate page here. Radloff_The CES-D scale – a self-report depression scale for research in the general population. Summary: Dr. Gimpel created two datasets built upon PPDB for predicting similarity between bigrams and phrases, using crowdsourcing and expert-annotation approaches. They use recursive neural networks (RNNs) and an addition model, and train and test against Paragram (from Skip-gram) and Hashimoto et al. (2014). They showed that the RNN and addition models are better. How should we evaluate word similarity? Skip-gram Model (Mikolov et al., 2013): it's a bag-of-words model, according to Prof. Resnik. Motivation: Why focus on bigrams / phrases? Cohen's kappa reflects the agreement derived from a crosstab table. This metric is used when two people look at the same data and categorize them. The problem is that raw agreement does not remove the effects of randomness: you may have a good result just by chance. To claim reproducibility, we want a metric which corrects for chance agreement, and Cohen's kappa is such a metric. Average over three data splits: adj-noun, noun-noun, verb-noun. What traditional techniques are used in similarity estimation? Dr. Resnik: take a look at bioinformatics. What about taking context into account?
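The chance correction described above can be sketched directly (a minimal implementation of my own, not from the lecture):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Two annotators labeling four items: 3/4 raw agreement, 1/2 expected by chance
print(cohens_kappa(["y", "y", "n", "n"], ["y", "n", "n", "n"]))  # 0.5
```

Here raw agreement of 0.75 shrinks to a kappa of 0.5 once the 0.5 chance-agreement baseline is subtracted out, which is exactly the point made in the notes.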
This talk provides a great motivating story and history for Grammatical Error Correction (GEC). The speaker described popular methodologies for correcting language-learner errors, such as rule-based approaches, data-driven approaches and their corresponding features, using statistical machine translation, and so on. Finally, the speaker envisions the field moving toward more features, crowdsourcing, unsupervised learning, and applications. Well-formed text from ETS, dissertations, etc. Artificial error models can be competitive with real error models, if enough training data is generated. Take into account L1, user context, etc. How can these sources be leveraged? There's a workshop at ACL to combine both aspects. Good point that should be included in the Methodologies & Systems figure. Stupid methods with more data sometimes work better. Example tweets: "Spamhaus is currently under a DDoS attack against our website which we are working on mitigating. Our DNSbls are not affected." "On my way to JFK early in the…." For news, it usually does not work, because sentences are longer and reasoning over temporal clues sometimes does not correspond to the particular event. Combining a calendar view with a predictive NLP model is a really interesting idea. It can trigger many interesting visualizations of temporal events. Can we dynamically track Wikipedia? This is another interesting idea. But why does Wikipedia not support a live streaming API like Twitter? This seems more engineering than science. Can we use both spatial and temporal visualization with it? How do we learn the relation between a named entity and associated phrases? e.g. FACILITY: voodoo lounge, grand ballroom, crash mansion, sullivan hall, memorials. How about the correlation between Freebase and social media like Twitter for future work? More data? Social media in the past? 2 years of data? This is my second time attending this talk… Here's my summary. How does climate change affect extreme events?
According to the Intergovernmental Panel on Climate Change, there is a shifted mean in climate change: the PDF is something like a normal distribution, but with climate change the distribution is shifted a little. In addition, there is uncertainty in extremes, especially in regional areas. A warmer atmosphere can hold more water vapor, which induces heavier precipitation, storms, and flooding. Global warming may increase surface evaporation, contributing to heat waves and droughts. Possible changes in the El Niño–Southern Oscillation induce changes in floods in some regions, droughts in others. Insight: extreme events are rare by definition. It is very hard to do machine learning on them (not enough evidence). But climate change may affect their distribution. Augmenting historical data with climate model simulations may assist in predicting climate change. What did they do: they published several papers on adaptive average models, online learning with MRFs, HMMs and matrix completion, introduced (founded) the Workshop on Climate Informatics starting in 2011, provided a 2014 NIPS tutorial, and finally they want to exploit topic models from NLP to do transfer learning. Past, present, future: climate model simulations are very high-dimensional, encode scientific domain knowledge, lose some information in discretization, and their future predictions cannot be validated. Local: climate downscaling. What climate can I expect in my own backyard? Spatiotemporal: space and time. How to capture dependencies over space and time? Tails/impacts: extreme events. What are extreme events and how will climate change affect them? Human influence on climate: without human-induced greenhouse gases, the climate model simulations do not match the true observations well. No one model predicts best all the time, for all variables. The average prediction over all models is a better predictor than any single model. Bayesian approaches have been common in climate science since 2008.
Can we do better using machine learning? How should we predict future climates while taking into account multiple models? They propose to use adaptive, weighted-average prediction: e.g., model B dominates, model E follows, and models A, C, D contribute little. They also explore the tradeoff between "explore" and "exploit". They used a generalized Hidden Markov Model to do so. Compared with the multi-model average (baseline), their Learn-$\alpha$ algorithm fits the ground truth the best. They got a best-paper award for it. In 2012, they saw climate predictions being made at higher geospatial resolutions, and modeled neighborhood influences among geospatial areas. They propose incorporating neighborhood influence: neighborhood-augmented Learn-$\alpha$. That is online learning with some geospatial components; it is an MRF-based approach. But by introducing time $t$, the 2D MRF becomes a 3D cube. They call it regional Learn-$\alpha$. They reduced the cumulative annual loss compared with the naive MRF-based method in AAAI 2012. Goal: combine/improve the predictions of the multi-model ensemble of GCMs, using sparse matrix completion. They exploit past observations and the predictions of the multi-model ensemble of GCMs. Their learning approach is batch and unsupervised. They create a sparse (incomplete) matrix from climate model predictions and observed temperature data. Finally, they apply a matrix completion algorithm to recover the unobserved data, yielding predictions of the unobserved entries. Outlook: these results suggest some low intrinsic dimensionality. They induced some sparsity in the input matrix, which need not ensure low intrinsic dimensionality. Past research also suggests low intrinsic dimensionality, but only a small number (~2) of climatological "predictive components" determine the predictive "skill" of climate models. It also suggests future work on tracking a small subset of the ensemble. Next, how to define extremes is a problem.
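The flavor of adaptive weighted model averaging can be illustrated with a standard exponentially weighted forecaster (this is a generic online-learning sketch, not the actual Learn-$\alpha$ algorithm from the talk; the learning rate `eta` and the toy "models" are my own choices):

```python
import math

def weighted_forecast(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average of expert predictions, with weights
    updated after each round according to squared error."""
    k = len(expert_preds[0])
    w = [1.0] * k
    forecasts = []
    for preds, y in zip(expert_preds, outcomes):
        forecasts.append(sum(wi * p for wi, p in zip(w, preds)) / sum(w))
        # experts with larger error this round lose weight
        w = [wi * math.exp(-eta * (p - y) ** 2) for wi, p in zip(w, preds)]
    return forecasts

# Two "climate models": one always predicts 0, one always predicts 1;
# the truth is always 0, so the combined forecast drifts toward the better model
preds = [(0.0, 1.0)] * 3
print(weighted_forecast(preds, [0.0, 0.0, 0.0]))
```

The forecast starts at the plain average 0.5 and moves toward 0 each round, which is the "dominating model" behavior described above.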
So they learned from topic modeling and LDA. In analogy with bags of words, topics, and words, they cluster geo-locations into climate topics and predictions. The parameters include the number of spatial regions, the number of observations in a region, the climate topic, the climate descriptor (a discretized observed climate variable) and a Dirichlet prior. The next question is how to reconstruct paleoclimate. We have tree rings, ice cores and lake sediment cores to use. Can sparse matrix completion techniques play a role? Discover latent structure? The issue is that there are only many small data sets… shall we use data fusion techniques? Shall we use multi-view learning? Why is LDA the best approach to model climate change? How are climate topics similar to document topics? Are there any specific features in climate topics? Naive transfer learning might not be perfect. Prof. Jimmy Lin does not buy the idea of using topic modeling, and neither do I. The 3D rendering of climate change may trigger interesting topics in the virtual / augmented reality field. I would like to explore the geo-tagged climate data for future research if applicable. This is the first time I have written summaries for CLIP talks. Please correct me if I have misunderstood anything. Motivation: sometimes it is challenging to make sense of online reviews due to polarized discussion. To solve the problem, the author tries to discover semantic themes from 130k reviews (15M words) of 2k restaurants. Another important motivation is to investigate the closure of restaurants. Preprocessing uses stop-word removal, stemming, etc. Econometric model of restaurant closure: ~2,000 open, ~500 closures from 2005 to 2013. Challenges: variety of restaurant features. Sort all units into strata with control and case. Semantic structure can be extracted from online reviews. Real-life troubles (reading online reviews) may trigger interesting research problems (discovering semantic themes).
) to higher dimensions (DTM) might be a good contribution. Always ask "so what" of your models to seek significance. In this way, the author turns to the closure of restaurants. Vision + Geo + NLP + Psychology is a great future direction: here's something I found on Instagram.
Questions on dealing with the vector calculus functions of Mathematica such as Grad, Div, Curl, Laplacian and their representations in various coordinate systems.
How to add a vector variable to a vector value?
What is the definition of Curl in Mathematica?
I'm trying to verify the equality $ (\vec a \times \vec b)^2 = \left| \vec a \times \vec b \right| ^2 $ in Mathematica. How can I do it? Thank you for your time.
Why is TensorExpand so slow for vector operations?
How to take the curl of a vector function involving hypergeometric functions?
Is there a way to add my own coordinate chart?
How does one plot a three-dimensional electric field in spherical coordinates?
Does Mathematica 11 have spherical coordinate unit vectors?
How can I do integration with Green's theorem?
How to locate a stream line starting from a saddle point?
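The cross-product identity in the third question can be checked symbolically. Here is a sketch in SymPy rather than Mathematica (in Mathematica one would use Cross, Dot, and Norm); Lagrange's identity supplies an expanded closed form to compare against:

```python
import sympy as sp

a = sp.Matrix(sp.symbols('a1 a2 a3', real=True))
b = sp.Matrix(sp.symbols('b1 b2 b3', real=True))
c = a.cross(b)

# (a x b).(a x b) is |a x b|^2 by definition; Lagrange's identity
# |a x b|^2 = |a|^2 |b|^2 - (a.b)^2 gives a closed form to expand against.
lhs = c.dot(c)
rhs = a.dot(a) * b.dot(b) - a.dot(b) ** 2
```

Expanding `lhs - rhs` reduces to zero, confirming the identity.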
Weston, J., Elisseeff, A., Schölkopf, B., Pérez-Cruz, F., Guyon, I. Weston, J., Elisseeff, A., Schölkopf, B., Pérez-Cruz, F. Sonnenburg, S., Braun, M., Ong, C., Bengio, S., Bottou, L., Holmes, G., LeCun, Y., Müller, K., Pereira, F., Rasmussen, C., Rätsch, G., Schölkopf, B., Smola, A., Vincent, P., Weston, J., Williamson, R. Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not realized, since existing implementations are not openly shared, resulting in software with low usability, and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community. Bottou, L., Chapelle, O., DeCoste, D., Weston, J. Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. 
At the same time it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms. After a detailed description of state-of-the-art support vector machine technology, an introduction of the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically. Collobert, R., Sinz, F., Weston, J., Bottou, L. Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how nonconvexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs. Weston, J., Bakir, G., Bousquet, O., Mann, T., Noble, W., Schölkopf, B. BakIr, G., Schölkopf, B., Weston, J. In this chapter we are concerned with the problem of reconstructing patterns from their representation in feature space, known as the pre-image problem. We review existing algorithms and propose a learning based approach. All algorithms are discussed regarding their usability and complexity and evaluated on an image denoising application. We show how the Concave-Convex Procedure can be applied to the optimization of Transductive SVMs, which traditionally requires solving a combinatorial search problem. This provides for the first time a highly scalable algorithm in the nonlinear case. 
Detailed experiments verify the utility of our approach. Weston, J., Collobert, R., Sinz, F., Bottou, L., Vapnik, V. In this paper we study a new framework introduced by Vapnik (1998) and Vapnik (2006) that is an alternative capacity concept to the large margin approach. In the particular case of binary classification, we are given a set of labeled examples, and a collection of "non-examples" that do not belong to either class of interest. This collection, called the Universum, allows one to encode prior knowledge by representing meaningful concepts in the same domain as the problem at hand. We describe an algorithm to leverage the Universum by maximizing the number of observed contradictions, and show experimentally that this approach delivers accuracy improvements over using labeled data alone. Lal, T., Chapelle, O., Weston, J., Elisseeff, A. Embedded methods are a relatively new approach to feature selection. Unlike filter methods, which do not incorporate learning, and wrapper approaches, which can be used with arbitrary classifiers, in embedded methods the feature selection part cannot be separated from the learning part. Existing embedded methods are reviewed based on a unifying mathematical framework. Bakir, G., Bottou, L., Weston, J. We propose an algorithm for selectively removing examples from the training set using probabilistic estimates related to editing algorithms (Devijver and Kittler, 1982).
The procedure creates a separable distribution of training examples with minimal impact on the decision boundary position. It breaks the linear dependency between the number of SVs and the number of training examples, and sharply reduces the complexity of SVMs during both the training and prediction stages. Weston, J., Leslie, C., Ie, E., Zhou, D., Elisseeff, A., Noble, W. Weston, J., Schölkopf, B., Bousquet, O. We develop a methodology for solving high dimensional dependency estimation problems between pairs of data types, which is viable in the case where the output of interest has very high dimension, e.g., thousands of dimensions. This is achieved by mapping the objects into continuous or discrete spaces, using joint kernels. Known correlations between input and output can be defined by such kernels, some of which can maintain linearity in the outputs to provide simple (closed form) pre-images. We provide examples of such kernels and empirical results. We propose fast algorithms for reducing the number of kernel evaluations in the testing phase for methods such as Support Vector Machines (SVM) and Ridge Regression (RR). For non-sparse methods such as RR this results in significantly improved prediction time. For binary SVMs, which are already sparse in their expansion, the payoff is mainly in the cases of noisy or large-scale problems. However, we then further develop our method for multi-class problems where, after choosing the expansion to find vectors which describe all the hyperplanes jointly, we again achieve significant gains. Weston, J., Schölkopf, B., Bousquet, O., Mann, T., Noble, W. Eichhorn, J., Tolias, A., Zien, A., Kuss, M., Rasmussen, C., Weston, J., Logothetis, N., Schölkopf, B. We report and compare the performance of different learning algorithms based on data from cortical recordings. The task is to predict the orientation of visual stimuli from the activity of a population of simultaneously recorded neurons.
We compare several ways of improving the coding of the input (i.e., the spike data) as well as of the output (i.e., the orientation), and report the results obtained using different kernel algorithms. Zhou, D., Weston, J., Gretton, A., Bousquet, O., Schölkopf, B. The Google search engine has enjoyed a huge success with its web page ranking algorithm, which exploits global, rather than local, hyperlink structure of the web using random walks. Here we propose a simple universal ranking algorithm for data lying in the Euclidean space, such as text or image data. The core idea of our method is to rank the data with respect to the intrinsic manifold structure collectively revealed by a great amount of data. Encouraging experimental results from synthetic, image, and text data illustrate the validity of our method. Lal, T., Schröder, M., Hinterberger, T., Weston, J., Bogdan, M., Birbaumer, N., Schölkopf, B. Designing a Brain Computer Interface (BCI) system one can choose from a variety of features that may be useful for classifying brain activity during a mental task. For the special case of classifying EEG signals we propose the usage of the state of the art feature selection algorithms Recursive Feature Elimination and Zero-Norm Optimization which are based on the training of Support Vector Machines (SVM). These algorithms can provide more accurate solutions than standard filter methods for feature selection. We adapt the methods for the purpose of selecting EEG channels. For a motor imagery paradigm we show that the number of used channels can be reduced significantly without increasing the classification error. The resulting best channels agree well with the expected underlying cortical activity patterns during the mental tasks. Furthermore we show how time dependent task specific information can be visualized. Zhou, D., Bousquet, O., Lal, T., Weston, J., Schölkopf, B. 
We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data. Bakir, G., Weston, J., Schölkopf, B. We consider the problem of reconstructing patterns from a feature map. Learning algorithms using kernels to operate in a reproducing kernel Hilbert space (RKHS) express their solutions in terms of input points mapped into the RKHS. We introduce a technique based on kernel principal component analysis and regression to reconstruct corresponding patterns in the input space (aka pre-images) and review its performance in several applications requiring the construction of pre-images. The introduced technique avoids difficult and/or unstable numerical optimization, is easy to implement and, unlike previous methods, permits the computation of pre-images in discrete input spaces. Weston, J., Leslie, C., Zhou, D., Elisseeff, A., Noble, W. A key issue in supervised protein classification is the representation of input sequences of amino acids. Recent work using string kernels for protein data has achieved state-of-the-art classification performance. However, such representations are based only on labeled data --- examples with known 3D structures, organized into structural classes --- while in practice, unlabeled data is far more plentiful. In this work, we develop simple and scalable cluster kernel techniques for incorporating unlabeled data into the representation of protein sequences. 
We show that our methods greatly improve the classification performance of string kernels and outperform standard approaches for using unlabeled data, such as adding close homologs of the positive examples to the training data. We achieve equal or superior performance to previously presented cluster kernel methods while achieving far greater computational efficiency. Weston, J., Elisseeff, A., Zhou, D., Leslie, C., Noble, W. Biologists regularly search databases of DNA or protein sequences for evolutionary or functional relationships to a given query sequence. We describe a ranking algorithm that exploits the entire network structure of similarity relationships among proteins in a sequence database by performing a diffusion operation on a pre-computed, weighted network. The resulting ranking algorithm, evaluated using a human-curated database of protein structures, is efficient and provides significantly better rankings than a local network search algorithm such as PSI-BLAST. Chapelle, O., Weston, J., Schölkopf, B.
We propose a framework to incorporate unlabeled data in a kernel classifier, based on the idea that two points in the same cluster are more likely to have the same label. This is achieved by modifying the eigenspectrum of the kernel matrix. Experimental results assess the validity of this approach. Leslie, C., Eskin, E., Weston, J., Noble, W. We introduce a class of string kernels, called mismatch kernels, for use with support vector machines (SVMs) in a discriminative approach to the protein classification problem. These kernels measure sequence similarity based on shared occurrences of k-length subsequences, counted with up to m mismatches, and do not rely on any generative model for the positive training sequences. We compute the kernels efficiently using a mismatch tree data structure and report experiments on a benchmark SCOP dataset, where we show that the mismatch kernel used with an SVM classifier performs as well as the Fisher kernel, the most successful method for remote homology detection, while achieving considerable computational savings. Weston, J., Chapelle, O., Elisseeff, A., Schölkopf, B., Vapnik, V. The Google search engine has had a huge success with its PageRank web page ranking algorithm, which exploits global, rather than local, hyperlink structure of the World Wide Web using random walks. This algorithm can only be used for graph data, however. Here we propose a simple universal ranking algorithm for vectorial data, based on the exploration of the intrinsic global geometric structure revealed by a huge amount of data. Experimental results from image and text to bioinformatics illustrate the validity of our algorithm. We consider the learning problem in the transductive setting. Given a set of points of which only some are labeled, the goal is to predict the label of the unlabeled points.
A principled clue to solve such a learning problem is the consistency assumption that a classifying function should be sufficiently smooth with respect to the structure revealed by these known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data. Weston, J., Schölkopf, B., Eskin, E., Leslie, C., Noble, W. In kernel methods, all the information about the training data is contained in the Gram matrix. If this matrix has large diagonal values, which arises for many types of kernels, then kernel methods do not perform well: We propose and test several methods for dealing with this problem by reducing the dynamic range of the matrix while preserving the positive definiteness of the Hessian of the quadratic programming problem that one has to solve when training a Support Vector Machine, which is a common kernel approach for pattern recognition. Weston, J., Leslie, C., Elisseeff, A., Noble, W. A key tool in protein function discovery is the ability to rank databases of proteins given a query amino acid sequence. The most successful method so far is a web-based tool called PSI-BLAST which uses heuristic alignment of a profile built using the large unlabeled database. It has been shown that such use of global information via an unlabeled data improves over a local measure derived from a basic pairwise alignment such as performed by PSI-BLAST's predecessor, BLAST. In this article we look at ways of leveraging techniques from the field of machine learning for the problem of ranking. We show how clustering and semi-supervised learning techniques, which aim to capture global structure in data, can significantly improve over PSI-BLAST. Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K. 
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach. Weston, J., Perez-Cruz, F., Bousquet, O., Chapelle, O., Elisseeff, A., Schölkopf, B. Motivation: In drug discovery a key task is to identify characteristics that separate active (binding) compounds from inactive (non-binding) ones. An automated prediction system can help reduce resources necessary to carry out this task. Results: Two methods for prediction of molecular bioactivity for drug design are introduced and shown to perform well in a data set previously studied as part of the KDD (Knowledge Discovery and Data Mining) Cup 2001. The data is characterized by very few positive examples, a very large number of features (describing three-dimensional properties of the molecules) and rather different distributions between training and test data. Two techniques are introduced specifically to tackle these problems: a feature selection method for unbalanced data and a classifier which adapts to the distribution of the unlabeled test data (a so-called transductive method). We show both techniques improve identification performance and in conjunction provide an improvement over using only one of the techniques. Our results suggest the importance of taking into account the characteristics in this data which may also be relevant in other problems of a similar type. Weston, J., Elisseeff, A., Schölkopf, B., Tipping, M. We explore the use of the so-called zero-norm of the parameters of linear models in learning. Minimization of such a quantity has many uses in a machine learning context: for variable or feature selection, minimizing training error and ensuring sparsity in solutions.
We derive a simple but practical method for achieving these goals and discuss its relationship to existing techniques of minimizing the zero-norm. The method boils down to implementing a simple modification of vanilla SVM, namely via an iterative multiplicative rescaling of the training data. Applications we investigate which aid our discussion include variable and feature selection on biological microarray data, and multicategory classification. Perez-Cruz, F., Weston, J., Herrmann, D., Schölkopf, B. Schölkopf, B., Guyon, I., Weston, J. Chapelle, O., Schölkopf, B., Weston, J. We describe methods for taking into account unlabeled data in the training of a kernel-based classifier, such as a Support Vector Machine (SVM). We propose two approaches utilizing unlabeled points in the vicinity of labeled ones. Both of the approaches effectively modify the metric of the pattern space, either by using non-spherical Gaussian density estimates which are determined using EM, or by modifying the kernel function using displacement vectors computed from pairs of unlabeled and labeled points. The latter is linked to techniques for training invariant SVMs. We present experimental results indicating that the proposed technique can lead to substantial improvements of classification accuracy. We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, general class of objects. The objects can be for example: vectors, images, strings, trees or graphs. Such a task is made possible by employing similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. Output kernels also make it possible to encode prior information and/or invariances in the loss function in an elegant way. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images.
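The "iterative multiplicative rescaling" described in the zero-norm abstract above can be sketched as follows. This is only an illustration, with ridge regression standing in for the SVM and all names invented here:

```python
import numpy as np

def l0_approx_weights(X, y, iters=20, ridge=1e-8):
    """Approximate zero-norm feature selection by iterative multiplicative
    rescaling: fit a ridge-regularized linear model on rescaled features,
    then rescale each feature by the magnitude of its learned weight."""
    n, d = X.shape
    z = np.ones(d)                        # per-feature scaling factors
    for _ in range(iters):
        A = X * z                         # rescale the training data
        w = np.linalg.solve(A.T @ A + ridge * np.eye(d), A.T @ y)
        z = z * np.abs(w)                 # irrelevant features decay toward 0
    return z

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 3].copy()                        # only feature 3 matters
z = l0_approx_weights(X, y)
```

After a few iterations the scaling factors for irrelevant features shrink geometrically toward zero, leaving a sparse solution concentrated on the informative feature.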
Schölkopf, B., Weston, J., Eskin, E., Leslie, C., Noble, W. Chapelle, O., Weston, J., Bottou, L., Vapnik, V. The Vicinal Risk Minimization principle establishes a bridge between generative models and methods derived from the Structural Risk Minimization Principle such as Support Vector Machines or Statistical Regularization. We explain how VRM provides a framework which integrates a number of existing algorithms, such as Parzen windows, Support Vector Machines, Ridge Regression, Constrained Logistic Classifiers and Tangent-Prop. We then show how the approach implies new algorithms for solving problems usually associated with generative models. New algorithms are described for dealing with pattern recognition problems with very different pattern distributions and dealing with unlabeled data. Preliminary empirical results are presented. Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T., Vapnik, V. We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data. Weston, J., Elisseeff, A., Schölkopf, B. Weston, J., Chapelle, O., Guyon, I. Chapelle, O., Vapnik, V., Weston, J. We introduce an algorithm for estimating the values of a function at a set of test points $x_1^*,\dots,x_m^*$ given a set of training points $(x_1,y_1),\dots,(x_\ell,y_\ell)$ without estimating (as an intermediate step) the regression function.
We demonstrate that this direct (transductive) way of estimating values of the regression (or classification in pattern recognition) is more accurate than the traditional one based on two steps, first estimating the function and then calculating the values of this function at the points of interest. Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Müller, K.
This note provides an alternative derivation of Proposition 1 in Berk and Green (2004) for all those who do not have the referenced text at hand (like myself). All errors are my own. Note that $P_t$ is actually known, contrary to $x_t$, which may or may not be a reasonable assumption for particular applications of the basic Kalman filter. Given some initializations, we can use equations (3) and (4) to update our optimal estimate of the state(s) and the associated (co-)variance at every point in time. The resulting expression is their equation (5) and provides the optimal updating rule as a function of the underlying parameters. It is easy to see that the optimal estimate of a manager's ability consists of two terms: i) the previous perception of the manager's ability, and ii) a term that accounts for new information embedded in this period's return. The second component diminishes as more data is observed.
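As a sketch of that two-term structure (the symbols here are illustrative, not Berk and Green's notation), a scalar normal-normal update looks like:

```python
import numpy as np

# Scalar Kalman-style updating of perceived managerial ability.
# Assumed model: returns r_t = alpha + eps_t with eps_t ~ N(0, s2),
# and a N(phi, v) prior on the unknown ability alpha.
def update(phi, v, r, s2):
    k = v / (v + s2)                 # gain: weight put on the new observation
    phi_new = phi + k * (r - phi)    # old perception + news term
    v_new = (1.0 - k) * v            # uncertainty shrinks with each return
    return phi_new, v_new, k

phi, v = 0.0, 1.0                    # prior mean and variance
rng = np.random.default_rng(1)
gains = []
for _ in range(10):
    r = 0.5 + rng.standard_normal()  # true ability 0.5, noise variance 1
    phi, v, k = update(phi, v, r, 1.0)
    gains.append(k)
```

With prior variance and noise variance both equal to 1, the posterior variance after $t$ returns is $1/(t+1)$, so the weight on new information falls like $1/(t+1)$: the second component diminishes as more data is observed.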
We'll now discuss a few ideas for customizing these colorbars and using them effectively in various situations. But being able to choose a colormap is just the first step: more important is how to decide among the possibilities! The choice turns out to be much more subtle than you might initially expect. A full treatment of color choice within visualization is beyond the scope of this book, but for entertaining reading on this subject and others, see the article "Ten Simple Rules for Better Figures". Matplotlib's online documentation also has an interesting discussion of colormap choice.
Sequential colormaps: These are made up of one continuous sequence of colors (e.g., binary or viridis).
Divergent colormaps: These usually contain two distinct colors, which show positive and negative deviations from a mean (e.g., RdBu or PuOr).
Qualitative colormaps: These mix colors with no particular sequence (e.g., rainbow or jet).
The jet colormap, which was the default in Matplotlib prior to version 2.0, is an example of a qualitative colormap. Its status as the default was quite unfortunate, because qualitative maps are often a poor choice for representing quantitative data. Among the problems is the fact that qualitative maps usually do not display any uniform progression in brightness as the scale increases. For other situations, such as showing positive and negative deviations from some mean, dual-color colorbars such as RdBu (Red-Blue) can be useful. However, as you can see in the following figure, it's important to note that the positive-negative information will be lost upon translation to grayscale! We'll see examples of using some of these color maps as we continue. There are a large number of colormaps available in Matplotlib; to see a list of them, you can use IPython to explore the plt.cm submodule.
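One way to check a colormap's brightness progression is to convert it to grayscale. A rough helper for that (the luminance weights are one standard RGB-to-gray choice; treat this as a sketch):

```python
import numpy as np
from matplotlib import colormaps
from matplotlib.colors import LinearSegmentedColormap

def grayscale_cmap(name):
    """Return a grayscale version of the named colormap."""
    cmap = colormaps[name]
    colors = cmap(np.arange(cmap.N))          # N x 4 array of RGBA values
    # Convert RGB to perceived luminance; the alpha channel is left untouched.
    rgb_weight = np.array([0.299, 0.587, 0.114])
    luminance = np.sqrt(colors[:, :3] ** 2 @ rgb_weight)
    colors[:, :3] = luminance[:, np.newaxis]
    return LinearSegmentedColormap.from_list(name + "_gray", colors, cmap.N)
```

Plotting a colormap next to its grayscale version makes non-uniform brightness progressions (as in jet) immediately visible.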
For a more principled approach to colors in Python, you can refer to the tools and documentation within the Seaborn library (see Visualization With Seaborn). Notice that in the left panel, the default color limits respond to the noisy pixels, and the range of the noise completely washes out the pattern we are interested in. In the right panel, we manually set the color limits, and add extensions to indicate values which are above or below those limits. The result is a much more useful visualization of our data. The discrete version of a colormap can be used just like any other colormap. For an example of where this might be useful, let's look at an interesting visualization of some handwritten digits data. This data is included in Scikit-Learn, and consists of nearly 2,000 $8 \times 8$ thumbnails showing various handwritten digits. Because each digit is defined by the hue of its 64 pixels, we can consider each digit to be a point lying in 64-dimensional space: each dimension represents the brightness of one pixel. But visualizing relationships in such high-dimensional spaces can be extremely difficult. One way to approach this is to use a dimensionality reduction technique such as manifold learning to reduce the dimensionality of the data while maintaining the relationships of interest. Dimensionality reduction is an example of unsupervised machine learning, and we will discuss it in more detail in What Is Machine Learning?. The projection also gives us some interesting insights on the relationships within the dataset: for example, the ranges of 5 and 3 nearly overlap in this projection, indicating that some handwritten fives and threes are difficult to distinguish, and therefore more likely to be confused by an automated classification algorithm. Other values, like 0 and 1, are more distantly separated, and therefore much less likely to be confused. This observation agrees with our intuition, because 5 and 3 look much more similar than do 0 and 1.
We'll return to manifold learning and to digit classification in Chapter 5.
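The discrete colormap mentioned above can be produced by resampling a continuous map down to a fixed number of bins ("Blues" and 6 bins are arbitrary example choices here):

```python
from matplotlib import colormaps

# Resample a continuous colormap down to 6 discrete bins; the result
# can be passed anywhere a regular colormap is accepted.
discrete = colormaps["Blues"].resampled(6)
```

All values falling in the same bin then map to exactly the same color, which is what makes such a map useful for categorical data like digit labels.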
Does a convex hypersurface necessarily bound a convex domain? Let $H\subset M$ be a convex hypersurface, where $M$ is a complete Riemannian manifold and $H$ is an embedded hypersurface (complete in the induced metric) without boundary and with positive definite second fundamental form. Is it true that $H$ bounds a convex domain $D$ in $M$? I.e., any two points $x, y \in D$ can be connected by a minimal geodesic lying in $D$, and $\partial D=H$. I suspect this is not true in general, but can anyone provide a counterexample or any references?
Counterexample 1: If $D$ is a ball of radius $\frac13$ on the torus $T=\mathbb R^n/\mathbb Z^n$, you can take two points on opposite sides of the ball such that the minimal geodesic connecting them does not stay in $D$. The boundary $\partial D$ has a strictly positive definite second fundamental form.
Counterexample 2: Let $M$ be your favorite Riemannian manifold and $H$ a convex hypersurface that bounds a domain $D$. Take two points $x,y\in D$ so that $d(x,y)>d(x,H)+d(y,H)$. By rescaling the metric outside $\bar D$, you can force the minimal geodesic joining $x$ and $y$ to exit $\bar D$.
Counterexample 3: It can happen that the hypersurface $H$ does not bound a domain, meaning that $M\setminus H$ has only one connected component. If there are two components in the complement, you can always build a handle between them.
Perrier, R; Boscardin, E; Malsure, S; Sergi, C; Maillard, M P; Loffing, J; Loffing-Cueni, D; Sorensen, M V; Koesters, R; Rossier, B C; Frateschi, S; Hummler, E (2016). Severe salt-losing syndrome and hyperkalemia induced by adult nephron-specific knockout of the epithelial sodium channel $\alpha$-subunit. Journal of the American Society of Nephrology (JASN), 27(8):2309-2318.
I tested all of them and they produced the same results. Since $a + b=1$ the equations are exactly the same. Substituting $1$ for $a+b$ in the third and fourth equations gives the first and second equations. This is how you get from your first equation to your second. Your utility function is $u(x_1, x_2)=x_1^a x_2^b$; since $a+b=1$, I'll rewrite the exponents slightly as $a$ and $(1-a)$. In order to optimise these two choices, you need to maximise utility with respect to your choice variables, subject to $p_1x_1 + p_2x_2 = w$ by Walras' Law. Basically, in order to optimise utility, all money will be spent. Cobb-Douglas functions are typically difficult to work with in optimisation problems, so a monotonic transformation (such as taking logs), which preserves the ordinal properties of the function, can be used instead. The same budget constraint will be applied. Using these results, we can work out the optimal consumption bundles of $x_1$ and $x_2$ for a given price-wealth combination.
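A quick numerical check of the textbook Cobb-Douglas demands, $x_1^*=aw/p_1$ and $x_2^*=(1-a)w/p_2$, using a brute-force search along the budget line (the parameter values are arbitrary):

```python
import numpy as np

# With u = x1^a * x2^(1-a) and budget p1*x1 + p2*x2 = w, the optimum
# should land at x1* = a*w/p1 and x2* = (1-a)*w/p2.
a, p1, p2, w = 0.3, 2.0, 5.0, 100.0

x1 = np.linspace(1e-6, w / p1 - 1e-6, 200001)
x2 = (w - p1 * x1) / p2                     # exhaust the budget (Walras' Law)
u = a * np.log(x1) + (1 - a) * np.log(x2)   # monotonic log transform of utility
x1_star = x1[np.argmax(u)]
x2_star = (w - p1 * x1_star) / p2
```

Note that the log transform leaves the argmax unchanged, which is exactly why the monotonic-transformation trick is safe here.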
A segue out of a series of posts that started here, riffing on this paper. This post concludes a series I've been doing on generic model objects and their payoff. The big payoff is in what I think of as narrative modeling, in which we think about what actually transpired in the real world to cause the data we see to be the way we see it, and then write down a model following that narrative. I'm not the first to think of this, and hierarchical, Bayesian, and agent-based modeling all follow that sort of form. But it's a nontraditional way of doing statistics, and the tools aren't written around it. We need to be able to use RNG-based models and creatively compose models from simpler sub-models, so we had to start with developing a model object and then work our way forward from there. But most of the statistical world is built around simple pseudonarratives where there is a single (possibly multivariate) distribution expressible in closed form, or a function of the form $$f^0(Y_i) = \beta_0 + \beta_1 f^1(x_i^1) + \beta_2 f^2(x_i^2) + … + \beta_n f^n(x_i^n) + \epsilon,$$ where each $f^j(\cdot)$, $j=0,…, n$, is a deterministic function of one variable (on the outside, two), and $\epsilon$ has a distribution expressible in closed form. This generalized linear model (GLM) is a realistic narrative only in certain cases—certainly not in the immense range of situations in which it is used. So, I don't like it. I'm happy to see that it's largely falling out of fashion among research statisticians, who seem (from my perspective) decreasingly obsessed with stretching that form into more situations and more interested in trying alternative functional forms where randomness isn't just an $\epsilon$ tacked on at the end of a deterministic narrative. To most people, the GLM encompasses all of statistics. This is what they learned in the one or two stats classes they took, and they have better things to do than to find out that there's more in the statistical world. 
The GLM actually works pretty well in a lot of situations. It is an inaccurate tool, but if we just want to know whether $\log(A)$ goes up or down in proportion to $B$, and whether we should care more about the effect of $B$ on $A$ or the effect of $C$ on $A$, it will tell us. If you call me at 9AM and tell me you need to know how $A$, $B$, and $C$ relate by 11AM, I'm going to run linear models, not develop a realistic narrative. It would be wonderful if every first approximation using a GLM were followed by a model that the model author actually believes to be possibly true, but here in reality there is often not time, and see the above bullet point about what people have learned. The last several entries were about narrative modeling, and the demo code used Apophenia, the library of stats functions I wrote to make narrative modeling feasible and friendly. In certain parlance, the last several entries have been Rocket Science, except that ballistics is one of those cases where GLMs sometimes work very well. I tried to keep the stress on narrative modeling, not Apophenia. The next several entries will be about doing much more common tasks—even fitting linear models—using Apophenia. I'll cover both the mechanics of what's going on, and the design decisions behind it all. These are things that other languages and platforms also do, but with a different platform and some different underlying design decisions. Given that you can do these things using other platforms, the question Why use C for these things? comes up. One answer is embodied in the last dozen entries, which required real computational firepower and a system that's a little more structured and a little less do-what-I-mean than many existing systems. 
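To make the "run linear models by 11AM" scenario concrete, here is the kind of quick first approximation being described: an ordinary least squares fit of $A$ on $B$. (A generic sketch in Python with made-up data, not Apophenia's API; the point is only that the slope's sign and rough size arrive fast.)

```python
import random

def ols_fit(xs, ys):
    """Ordinary least squares for y = b0 + b1*x: the quick first look."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx
    return my - b1 * mx, b1                     # intercept, slope

random.seed(0)
B = [random.uniform(0, 10) for _ in range(1000)]
A = [2.0 + 0.5 * b + random.gauss(0, 0.1) for b in B]   # true model
b0, b1 = ols_fit(B, A)   # b0 near 2.0, b1 near 0.5
```

Whether that tacked-on Gaussian $\epsilon$ is a believable narrative for your data is exactly the question the post is raising.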
C is also better for embedding: if you have a program that is not about statistics, but has a data analysis component, and you need to do a quick correlation or calculate a line of best fit, C may be the right language to do that in, especially given that everything has a C API. But we could just as easily rephrase the question: Why not use C for these things? As will be shown, most common data-oriented tasks take about as many lines of C as Python, R, or whatever code. I have received more than enough communications from detractors who told me that it takes ten times as many lines of C code to do something as it takes in R, which just tells us that the detractor really has no idea how to write in C. If somebody tells you something like this, please point them to this series. Next time, I'll read in a data set, generate a simple $2\times 2$ crosstab, and run a simple test about the table. Depending on how you choose to count, it'll be about five lines of code. PS: I'll try to post every four days. Posting every three days was a bit of a challenge, especially given that over some three-day periods during which normal life transpired, I was developing a full agent-based model and applying relatively novel-for-ABMs analysis techniques or transformations to the model. I felt vindicated that I really had developed a platform where such heavy lifting could be done rapidly, but on the other hand, there's more to life than blogging….
For those who use the Chase Paymentech hosted payment gateway and have run into a rare error scenario: the user is shown a warning message and an email with the error details is also sent. On the submission page to Paymentech there is a field "x_fp_hash"; the value in this field is a hash generated from a combination of the transaction key and the x_fp_sequence, x_fp_timestamp, x_amount, and x_currency_code values of the request. These field values are passed through the PHP HASH_HMAC function. The value of x_fp_hash is cross-checked against the hash string on the Paymentech side; if a match is found, the transaction is accepted, otherwise the user is warned with an "x_fp_hash: Could not validate the integrity of the payment from the transaction" message. Sometimes a hosting provider doesn't enable the Hash extension, so the HASH_HMAC function may return a null value. In that case the "x_fp_hash" field is empty on submission, which causes the error mentioned above. // assign the value of variable $x_fp_hash to the "x_fp_hash" field of the submission form.
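The fingerprint computation itself is easy to reproduce outside PHP as a sanity check. The sketch below (using Python's standard hmac module) assumes an Authorize.Net-style recipe of caret-joined fields signed with HMAC-MD5; the exact field list and order for your gateway must come from its documentation, and the key and values here are made up:

```python
import hmac
import hashlib

def fingerprint(transaction_key, sequence, timestamp, amount, currency):
    """HMAC-MD5 over the caret-joined request fields (assumed recipe)."""
    message = "^".join([sequence, timestamp, amount, currency])
    return hmac.new(transaction_key.encode("ascii"),
                    message.encode("ascii"),
                    hashlib.md5).hexdigest()

# Hypothetical values: a healthy x_fp_hash is a 32-character hex string.
x_fp_hash = fingerprint("myTransactionKey", "12345",
                        "1537100000", "19.99", "USD")
assert len(x_fp_hash) == 32
```

If hash_hmac is unavailable on the host, the equivalent HMAC can be computed manually before the form is rendered, so the x_fp_hash field is never submitted empty.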
Showing that lines from this family do not intersect for different values of $\alpha$. This is a family of lines because each one is the intersection of two planes. I think they do not intersect because each line is perpendicular to the $z$ plane at the value of $z = -\alpha(x-y)$; is this enough to conclude? And this got me thinking: how would I represent a family of lines all intersecting in one point in three-dimensional Euclidean space? Could I have a link or a source where I can study the general derivation of parallel families of lines in three-dimensional space? Hence $\alpha_1=\alpha_2$. To summarize: if a point is on two members of your family of sets, those two members are the same set. The contrapositive of this is: two different members of your family of sets have no element in common. QED.
Here $a$ and $b$ are the initial and final values of the collective variable. TI is a general method, which can be applied to a variety of processes, e.g. phase transitions, electron transfer, etc. par_all27_prot_lipid.inp contains the force field parameters. You will be using CHARMM v.22, a popular force field for biologically relevant systems. Open the deca_ala.pdb protein data bank format file with VMD. Create a new representation for the protein, e.g. of type Ribbon, to observe the alpha-helix. Although the image below shows the deca-alanine in water, it is expensive to run thermodynamic integration for a solvated protein with many values of the constraints on small laptops, so we will run TI for the protein in the gas phase. Here you are asked to run several MD simulations for different values of the distance between atoms 11 and 91; in each run this distance will be constrained. In the original file md_std.inp the distance is set to $14.37$ Å, as it is in the deca_ala.pdb file. This is the first step in carrying out the thermodynamic integration, as described in the equation above. We have made the script run_ti_jobs.sh to run these simulations, which you can find inside the compressed file deca_ala.tar.gz. Take a look at the script and familiarize yourself with it. At which values are we constraining the distance between the carbon atoms? In this case we are performing 5 different simulations, each with a different value of the constraint. You can edit this script to use a larger or smaller number of constraints and to increase or reduce the upper and/or lower bound of integration. Can you guess where in the script we are specifying the values of the constraints? Be careful with the values chosen for the upper and lower bounds of the constraints, as the simulations might crash or the SHAKE algorithm for the computation of the constraints might not converge if the values of the constrained distances are unphysical. 
We have set the number of steps of each constrained MD to 5000. Try to increase this number if you want to achieve better statistics, or decrease it to get the results faster, at the expense of a less converged free energy. Look into the main input file of CP2K, md_std.inp, and try to understand the keywords used as much as possible; by now you should be able to understand most of it, and you can experiment with changing some of the keywords to see what happens. Look in particular at the definition of the section CONSTRAINT, where the target value of the distance between the two carbon atoms at the edges of the protein is constrained (for instance to 14.37), and at the COLVAR section, where the collective variable for the distance between the two C atoms is defined. The average Lagrange multiplier is the average force $F(x)$ required to constrain the atoms at the distance $x$. First of all, plot the force $F(x)$ with its standard error as a function of the collective variable to see whether the simulation carried out so far is statistically relevant or the relative error is too large. Discuss the form of the free energy profile and comment on what is the most stable state of the protein. Is it more stable when it is stretched or when it is in the $\alpha$-helix conformation? Is this result physical? Explain why or why not. How can the presence of water affect the conformation of the protein? Tip 1: the most stable state will be the one where the free energy is at its global minimum. Tip 2: in order to understand whether the result obtained from thermodynamic integration is physical or not, have a look at the .xyz files for some of the constrained MD trajectories and think about what are the fundamental interactions between the constituents of the protein that we are taking into account with the CHARMM force field (e.g. electrostatics, van der Waals, covalent bonds) and how these may contribute to stabilizing the protein in a given state. 
The two articles at the links below show how the free energy profile should look, using thermodynamic integration or a different enhanced sampling method. Compare the free energy profile obtained from your simulations to either of those papers. Most likely, the free energy profile you obtained will not be as converged as theirs. What are some possible reasons for this, and how can one obtain better converged free energy profiles? Paper 1: https://arxiv.org/pdf/0711.2726.pdf (see figure 2, solid line, obtained with thermodynamic integration using the same force field, CHARMM v.22, used here). This paper, however, uses a different collective variable, i.e. the distance between the N atoms at the opposite edges. Paper 2: https://pubs.acs.org/doi/pdf/10.1021/ct5002076 (see figure 1, obtained with umbrella sampling and adaptive biasing force sampling, for two versions of the CHARMM force field, v.22 and v.36). The collective variable in this case is the same as the one specified in our input. Finally, in principle we could have performed a direct MD simulation (as we did in the past exercises) to compute the free energy profile as a function of the distance between two of the atoms at the opposite edges of the protein (the collective variable we chose for this particular problem). Instead, we chose to perform an enhanced sampling technique. Can you think of a problem we would face if we had decided to perform a direct MD simulation? What could be a possible way to overcome this problem? We have provided you with a useful script called generate_plots.sh that extracts the average force and the standard error for each constrained MD simulation (see the grep command line above), and prints out the file av_force_vs_x.dat containing the force as a function of the collective variable, and the error on the force (third column). Take a look at the script and modify it if necessary, e.g. 
if you have changed the lower and upper bounds for the constraint or the number of constraints. In order to check the convergence of the free energy profile, one should look at the error on the average force for each constrained MD simulation. The error on the free energy profile can be obtained by propagating the error on the average force upon integration. From the file containing the average force as a function of the collective variable, you need to integrate $F(x)\,dx$ numerically to obtain $\Delta A$. You may use the trapezoidal rule (or equivalent) with EXCEL, ORIGIN or any scripting language. Make sure that you get the units right when performing the integration. The Lagrange multipliers are written in atomic units (Hartree/bohr, the dimensions of a force), while the distances are in Angstrom.
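The unit bookkeeping and the trapezoidal rule can be sketched as follows. This assumes the av_force_vs_x.dat conventions described above (distance in Angstrom, mean constraint force in Hartree/bohr) and the conversions 1 bohr ≈ 0.529177 Å and 1 Hartree ≈ 627.509 kcal/mol; the per-interval error propagation is a rough estimate that treats neighbouring force values as independent:

```python
import math

ANGSTROM_PER_BOHR = 0.529177
KCALMOL_PER_HARTREE = 627.509

def free_energy_profile(x_angstrom, force_au, err_au=None):
    """Trapezoidal integration of the mean constraint force.

    x_angstrom : constrained distances in Angstrom
    force_au   : mean forces (Lagrange multipliers) in Hartree/bohr
    Returns A(x) - A(x[0]) in kcal/mol (and propagated errors if given).
    """
    x = [xi / ANGSTROM_PER_BOHR for xi in x_angstrom]   # Angstrom -> bohr
    a, var = [0.0], [0.0]
    for i in range(1, len(x)):
        h = x[i] - x[i - 1]
        a.append(a[-1] + 0.5 * h * (force_au[i] + force_au[i - 1]))
        if err_au is not None:
            var.append(var[-1] + (0.5 * h) ** 2
                       * (err_au[i] ** 2 + err_au[i - 1] ** 2))
    prof = [ai * KCALMOL_PER_HARTREE for ai in a]
    if err_au is None:
        return prof
    return prof, [math.sqrt(v) * KCALMOL_PER_HARTREE for v in var]
```

As a quick check of the conversion, a constant force of 1 Hartree/bohr integrated over 1 bohr (about 0.529 Å) gives 627.509 kcal/mol.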
Abstract: We consider the localization of the $\infty$-category of spaces at the $v_n$-periodic equivalences, the case $n=0$ being rational homotopy theory. We prove that this localization is for $n\geq 1$ equivalent to algebras over a certain monad on the $\infty$-category of $T(n)$-local spectra. This monad is built from the Bousfield--Kuhn functor.
Is there any idea explaining why the electric charges of the electron and muon are equal? The total charge of a particle is proportional to the integral of its electric field flux through a sphere of large radius surrounding the particle at rest. The free Dirac equation describes a charged fermion. It contains the mass term $m$. If $m$ tends to zero, the Dirac equation tends to a pair of Weyl equations, which describe electrically neutral particles. Does this mean that charge somehow depends on mass? If yes, why do the electron and muon (both described by the Dirac equation, but with different mass terms) have the same electric charge? How do we know that there exists such a particle as the muon? From observing its decay into an electron plus two other neutral particles, which are an electron antineutrino and a muon neutrino. 2) Lepton number conservation ensures that the number of particles with muon leptonic number and the number of particles with electron leptonic number are conserved. These are observations, the accumulation of which, together with a large number of other observations, allows us to build up the Standard Model of particle physics. The Standard Model encapsulates our observations/data. The short answer to the question is: because that is what has been observed. The confusion is about the Weyl limit--- a massless Weyl fermion can be charged. All the fermions in the standard model are charged Weyl fermions. The Weyl fermion that can't be charged is the massive Weyl fermion. The reason is that the mass term in the Weyl reduction mixes up the field and its conjugate, so it isn't phase-invariant under multiplying the Weyl field by a complex phase. This type of mass, which is incompatible with charge, is more often called a Majorana mass in the literature, because it is easier to derive as the real part of the Dirac equation in a real basis. 
The fact that Weyl fermions can't have mass is important--- it is the reason we see Weyl fermions in nature--- if they could be massive, they would be Planck-mass massive. Instead, they are only Higgs-scale massive. A pair of massive Weyl fermions with the same mass can together be charged, with the charge symmetry rotating one into the other. So a pair of Weyl fermions can be charged and massive at the same time, and the reason is that this is exactly what the Dirac equation is. The answer to the title question, about the equality of charges, at least from the theoretical standpoint, is found here: What is "charge discreteness"? Anna v. has given the experimental reason.
Join in on this month's Family Workshop at the Hearst Museum! Get inspired by the Yupik technology of slatted snow goggles/sunglasses (iyegaatek) and make a pair of your own sunglasses to take home. This is a drop-in workshop for all ages. Bring the whole family for this activity, which blends science, culture, technology, art, and fashion. Please join us for the first Tell Her Story Gala 2018, where you will have the opportunity to hear stories from the top three Tell Her Story finalists, who continue to fight against the injustice they and their loved ones face. Meet with Oracle recruiters and university relations representatives and get yourself a complimentary cupcake while they last! Your science background can lead to a highly fulfilling career at the intersection of law and technology. Intellectual property (who owns innovation), privacy, the accountability of algorithms, the control of online speech, the financing of clean tech: these are among the many issues that lawyers will be deciding in coming years. Antoine Ferey - "Optimal taxation and tax complexity with misperceptions"; Malka Guillot - "Who Paid the 75% Tax on Millionaires?" RSVP info: RSVP online by September 13. Where better to bring our grief than the garden? In this contemplative photography workshop, we will observe Nature's teachings on letting go, and we will use the camera as a trusted guide who can navigate us safely through the rugged terrain of grief. THIS EVENT HAS BEEN CANCELLED DUE TO TRAVEL CHALLENGES RELATED TO HURRICANE FLORENCE. PLEASE STAY TUNED TO THE MATRIX WEBSITE FOR UPDATES: HTTP://MATRIX.BERKELEY.EDU. THANK YOU. What are the various social, legal and ethical implications of technological advances that make it easier to be watched, and how are different groups pushing for and against policies that would govern the limits of state surveillance? Attendance restrictions: Free and open to all on a first-come, first-seated basis. 
Join Frankie Liu, Consulting Hardware Engineer, from Oracle Labs. This talk will introduce the lifecycle of a project in the corporate world, viewed through the small lens of our speaker. We will raffle off a Fujifilm Instax Mini 9 Instant Camera and a Hydro Flask (you must be present to win). How can we build autonomous robots that operate in unstructured and dynamic environments such as homes or hospitals? "Phonography in Transit: Naples and New York" Thinking about purchasing your first or second home? Considering refinancing your current home loan? • Is now the right time to buy a home? • How much can I afford? • Are any low cash down payment options available? • What are the first-time homebuyer programs available? • How do I get pre-approved for a loan? We'll discuss the symplectic structure on representation varieties of surfaces. Then we'll discuss certain Hamiltonian actions on subsets determined by twists along simple curves. Sandia National Laboratories will be on campus! BRING RESUMES! For all interested B.S., M.S. and Ph.D. engineering and science students. Many positions open for internships, co-ops, and full-time employment. Seminar 281, International Trade and Finance: "Credit Allocation under Banking Globalization: Theory and Empirics". Seminar 221, Industrial Organization: "Attention Oligopoly". We study curves in $\mathbb P^3$ lying on hypersurfaces that arise as images of "general" maps from smooth surfaces. We describe the numerical invariants of a large class of curves on these surfaces, and study the families of such curves in the Hilbert scheme. Graduate Medievalists at Berkeley (GMB) will be holding our first meeting over drinks at Jupiter, located at 2181 Shattuck Ave., on Tuesday, September 18th from 5-7pm. We'd like to extend this invite to any graduate students interested in medieval or premodern topics. Prof. 
Valerie Francisco-Menchavez (Ph.D., CUNY Graduate Center) will discuss her new book, which explores the dynamics of gender and the technology of care work in Filipino transnational families in the Philippines and the U.S. Presentation by Xinxi Chen, Senior Software Engineer. Come say hello, be ready to ask questions, fill up on food, and get to know Uber! No matter what phase of life we are in, the varying weather temperatures of the Bay Area and typical patterns of the season can challenge our immune systems. Learn how to connect with the plants around you in order to support the seasonal transition. Please bring your student ID and plenty of paper copies of your resume. A panel discussion focused on the new textbook, Artificial Intelligence Safety and Security. The Comai lab studies the genetics and function of plant chromosomes and is interested in the mechanisms through which plants attain genome stability and in their manipulation for efficient genome engineering. I present the preliminary results of the Nemea Center's collaborative project with the Greek Archaeological Service (TAPHOS) at the LBA site of Aidonia in the Korinthia region of Greece. We'll introduce the curve complex of a surface and motivate it via its connections to hyperbolic 3-manifolds and to Teichmüller theory. The goal of this intro talk will be to discuss some of its geometric properties, on both large and small scales. We will discuss strict Dieudonné complexes, completions of Dieudonné complexes, Dieudonné algebras, and the de Rham complex. Fine particle (PM2.5) air pollution is a leading risk for mortality, resulting in more than 7% of all human deaths worldwide. Here, we present two analyses that help frame the global air pollution problem. Education forum focused on the role of teachers in shaping equitable education policies. Join Meraki Engineering for a Q+A session co-hosted by SWE and WICSE! 
Our panel includes Christine Garibian (Software Engineering, Class of 2014), Ian Fox (Software Engineering, Class of 2017), and Rahul Ramakrishnan (Product Management, Class of 2017).
1 IAC - Istituto per le Applicazioni del Calcolo "Mauro Picone" Abstract: Efficient recovery of smooth functions which are $s$-sparse with respect to the basis of so-called Prolate Spheroidal Wave Functions from a small number of random sampling points is considered. The main ingredient in the design of both the algorithms we propose here consists in establishing a uniform $L^\infty$ bound on the measurement ensembles which constitute the columns of the sensing matrix. Such a bound provides us with the Restricted Isometry Property for this rectangular random matrix, which leads to either the exact recovery property or the "best $s$-term approximation" of the original signal by means of the $\ell^1$ minimization program. The first algorithm considers only a restricted number of columns, for which the $L^\infty$ bound holds as a consequence of the fact that the eigenvalues of Bergman's restriction operator are close to 1, whereas the second one allows for a wider system of PSWFs by taking advantage of a preconditioning technique. Numerical examples are spread throughout the text to illustrate the results.
Path components of a monoidal category form a monoid? In Grayson's 'Higher Algebraic K-theory II', leading up to the categorical generalisation of the plus construction, he considers $\pi_0(S) = \pi_0(BS)$, where $S$ is a (small, symmetric) monoidal category and $BS$ is its classifying space. It is then tacitly assumed that $\pi_0(S)$ is itself an abelian monoid... but I can't see how this is true. How is $\pi_0(S)$ a monoid, explicitly? The result will follow if you can show that $\pi_0\colon Cat \to Set$ preserves finite products, because monoidal categories can be defined diagrammatically in the 2-category $Cat$, and these diagrams are sent, under the assumption of product preservation, to the diagrams defining a monoid in $Set$. But $\pi_0$ can be defined via the following sequence of functors: $N\colon Cat \to sSet$, followed by $|-|\colon sSet \to CGHaus$, followed by $\pi_0\colon CGHaus \to Set$. Here $CGHaus$ has the k-space product, namely $X\times_k Y := k(X\times Y)$. The functors $N$ and $|-|$ preserve finite products, so we only need to know that $\pi_0$ sends $\times_k$ to the product of sets. Since $I$ is compact Hausdorff, a function $I \to X\times_k Y$ is continuous iff the function $I \to X\times Y$ is continuous; hence two points in $X\times_k Y$ are in the same path component iff they are in the same path component in $X\times Y$. Then we use the fact that $\pi_0$ preserves the ordinary product of topological spaces. EDIT: In light of the subtle edit, here is some more detail. Don't try to write down the product of a pair of path components. An element of $\pi_0(S)$ is represented by an object of $S$. The product of $[a]$ and $[b]$ is then $[a\otimes b]$. That's it. The above two paragraphs serve to show that this is well-defined on equivalence classes, associative and unital.
The digital-filters tag has no usage guidance. Questions under the tag include:

- What is the type number of a discrete-time system given $H(z)$?
- Accepted delay in digital receiver?
- What is a simple smoothing filter for signals sampled at non-uniform times?
- Confused in practical example of shift invariance?
- How do I initialize the state of a digital filter in Direct Form II? Suppose I have a digital filter implemented in Direct Form II. How do I initialize the state of the filter as if the input $x[n]$ had a fixed value $x_0$ for all $n<0$?
- Please elaborate on why this mathematical transform can help in analyzing as well as designing any type of digital filter.
- What is it called when you have a single-pole recursive digital lowpass filter where the coefficients don't add up to unity?
- Is it possible to design a band-pass filter with fewer taps than a series HPF + LPF?
- How can I get the continuous-time transfer function coefficients (or poles and zeros) from the corresponding discrete-time TF and vice versa?
- Does the output signal from an FIR filter have a transient response? In MATLAB, I have shown the signal passed through the FIR filter in the time domain. The beginning of the signal looks like a transient response. Is the result appropriate?
- What's the logic behind the construction of Sobel's filter in image processing?
- What would be the frequency response of a backward finite difference differentiator filter, or what would be its error, analyzed as a function of the frequency of a signal?
- How does a digital filter work?
- Using white noise to test filter frequency response?
- I'm designing an FIR narrowband pass filter using the windowing technique. Is a bandpass filter the same as a narrowband pass filter? I'm not finding any good article on this.
- How to plot the poles and zeros of this bandpass filter in MATLAB?
The World Series has gone to a 7th game! What are the odds of this happening? Well, that depends on how good a chance each team has of winning each game. There are a couple of different approaches we can take: one simple and one a little more complicated. We can make the simplifying assumption that each game is independent and that the probability of each team winning a game is static. Or we can make it more complicated by updating the probability that a team wins the next game if they've won the previous one. In other words, given that a team has won a game, they will be more likely to win the next. This will involve Bayesian updating. There are a bunch of equivalent ways of writing those probabilities. I wrote them here in the way that seemed most elegant, not necessarily the way that is easiest to understand. Let's do it in Python! Now let's do away with the simplifying assumption and say that the probability of a team winning the next game increases if they win their last one. We come in with some prior belief about how good the teams are, then we update that belief as we see more data. In other words, we take into account the possibility of momentum in the series. Instead of being static, we now take $\theta$ to be a random variable with a Beta distribution described by some choice of the hyperparameters $\alpha$ and $\beta$. We could be silly, let $\alpha = \beta = 1$, meaning that we have no idea what $\theta$ is, and just "let the data decide". For fun, let's calculate the chances of the series being different lengths for a range of choices of hyperparameters. Maybe we can make some more interesting-looking plots. It makes intuitive sense that any amount of 'momentum' would make the series shorter given the same expectation of $\theta$. The less informative the prior, the more momentum there should be, and the smaller the chance of a long series. OK, that was not that bad. The next one, however, will not be as easy to write out. 
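The "Let's do it in Python" step isn't shown above, so here is a sketch of the static-$\theta$ calculation. It uses the standard best-of-7 counting argument: the series winner takes game $g$ and exactly 3 of the first $g-1$ games.

```python
from math import comb

def p_series_length(g, theta=0.5):
    """P(a best-of-7 series ends in exactly g games), g in {4,...,7},
    with i.i.d. games and P(team A wins any game) = theta."""
    total = 0.0
    for p in (theta, 1 - theta):                 # either team may win it all
        # the series winner takes 3 of the first g-1 games, then game g
        total += comb(g - 1, 3) * p**4 * (1 - p)**(g - 4)
    return total

probs = {g: p_series_length(g) for g in range(4, 8)}
# theta = 0.5 gives {4: 0.125, 5: 0.25, 6: 0.3125, 7: 0.3125}
```

With evenly matched teams, $P(g=7) = \binom{6}{3}/2^6 = 0.3125$, the roughly 31% figure for a seven-game series.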
The problem comes from the stopping rule: the series ends as soon as one team reaches four wins, so every admissible sequence of results ends with a win by the winning team, and we need to enumerate those sequences explicitly. (With a static estimate of $\theta$, the probability of the sequence of winners ABAAA is the same as that of the sequence AABAA; under Beta updating the model is in fact still exchangeable, so sequences with the same win counts remain equally probable, but the stopping rule means the bookkeeping is easiest if we simply list the valid sequences.) I won't write them all out, but I will do the calculation. Here are the sequences we need to consider for the series ending in 5 games. The trick is realizing that the series must end with a win from the winning team. I'll write a class to enclose the whole calculation. OK, cool. Now let's see what we calculate with a completely uninformative prior, meaning that before the series $\theta$ has a uniform distribution on $(0,1)$. So if we admit no prior knowledge about the strength of the teams, we calculate that the series has only a 14% chance of going to seven games instead of the 31% we got before. With this uninformative prior, even the small amount of data from previous games overwhelms the prior, and therefore we calculate that the series will end faster. If we maintain the prior belief that the strengths of the teams are equal, meaning $\alpha = \beta$, but increase our certainty in this belief, we should recover the estimate we made when we took $\theta$ to be static. Let's see if this is the case. This is indeed the case: in the limit, the probabilities converge to the ones previously arrived at. But that's no fun! The point of this Bayesian framework is the beauty of the middle ground. Just as it is silly to have a completely uninformative prior, it is also silly to have complete certainty in one's beliefs with no willingness to let data alter them. In a real application, the prior could be chosen in a variety of ways, using empirical data or some combination of data and expertise. 
I am not that big of a baseball fan. I am more of a math fan. So instead of arguing about the proper priors for this world series or ones in the past, I'll end this post by making a 3D plot. We will plot $\alpha$ vs. $\beta$ vs. $P(g=7)$. Ok, we are ready to plot.
CommonCrawl
Abstract: The real interpolation method is considered and it is proved that for general local Morrey-type spaces, in the case in which they have the same integrability parameter, the interpolation spaces are again general local Morrey-type spaces with appropriately chosen parameters. This result is a particular case of the interpolation theorem for much more general spaces defined with the help of an operator acting from some function space to the cone of nonnegative nondecreasing functions on $(0,\infty)$. It is also shown how the classical interpolation theorems due to Stein–Weiss, Peetre, Calderón, Gilbert, Lizorkin, Freitag and some of their new variants can be derived from this theorem.
We consider macroscopic descriptions of particles where repulsion is modelled by non-linear power-law diffusion and attraction by a homogeneous singular kernel leading to variants of the Keller-Segel model of chemotaxis. We analyse the regime in which diffusive forces are stronger than attraction between particles, known as the diffusion-dominated regime, and show that all stationary states of the system are radially symmetric decreasing and compactly supported. The model can be formulated as a gradient flow of a free energy functional for which the overall convexity properties are not known. We show that global minimisers of the free energy always exist. Further, they are radially symmetric, compactly supported, uniformly bounded and $C^\infty$ inside their support. Global minimisers enjoy certain regularity properties if the diffusion is not too slow, and in this case, provide stationary states of the system. In one dimension, stationary states are characterised as optimisers of a functional inequality which establishes equivalence between global minimisers and stationary states, and allows us to deduce uniqueness.
I will discuss a paper of Olga Y. Savchuk and Anton Schick. Consider a random sample $X_1,\ldots,X_n$ from a density $f$. For a positive $\alpha$, the density $g$ of $t(X_1) = |X_1|^\alpha \operatorname{sign}(X_1)$ can be estimated in two ways: by a kernel estimator based on the transformed data $t(X_1),\ldots,t(X_n)$, or by a plug-in estimator transformed from a kernel estimator based on the original data. In this paper, they compare the performance of these two estimators using MSE and MISE. For MSE, the plug-in estimator is better in the case $\alpha > 1$ when $f$ is symmetric and unimodal, and in the case $\alpha \ge 2.5$ when $f$ is right-skewed and/or bimodal. For $\alpha < 1$, the plug-in estimator performs better around the modes of $g$, while the transformed data estimator is better in the tails of $g$. For the global comparison by MISE, the plug-in estimator has a faster rate of convergence for $0.4 \le \alpha < 1$ and $1 < \alpha < 2$. For $\alpha < 0.4$, the plug-in estimator is preferable for a symmetric density $f$ with exponentially decaying tails, while the transformed data estimator has a better performance when $f$ is right-skewed or heavy-tailed. Applications to real and simulated data illustrate these theoretical findings.
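To make the two estimators concrete, here is a sketch (not the paper's code; `gaussian_kde` stands in for a generic kernel estimator, and the standard normal for $f$ is my choice):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
alpha = 2.0
x = rng.normal(size=2000)                  # sample from f (standard normal here)
t = np.abs(x)**alpha * np.sign(x)          # transformed data t(X_i)

grid = np.linspace(-9, 9, 401)
grid = grid[grid != 0]                     # avoid the Jacobian's singular point

# 1) transformed-data estimator: KDE built directly on t(X_1), ..., t(X_n)
g_hat_transformed = gaussian_kde(t)(grid)

# 2) plug-in estimator: KDE of f, pushed through the change of variables
#    g(y) = f(|y|^{1/alpha} sign(y)) * (1/alpha) * |y|^{1/alpha - 1}
f_hat = gaussian_kde(x)
back = np.abs(grid)**(1 / alpha) * np.sign(grid)
g_hat_plugin = f_hat(back) * (1 / alpha) * np.abs(grid)**(1 / alpha - 1)
```

For $\alpha = 2 > 1$ and this symmetric unimodal $f$, the paper's MSE comparison favors the plug-in estimator away from the singular point at $0$.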
The Gödel-Bernays axioms are a conservative extension of the Zermelo-Fraenkel axioms with the axiom of choice (ZFC) that allows comprehension of classes. Although not the standard axioms of set theory, particularly in category theory they spare us set-of-all-sets-type paradoxes. The first five axioms are identical to the axioms of the same names from ZFC. The quantified variables range over the universe of sets. The order of the elements in the sets is immaterial. Thus it is possible to create a set containing any two sets that you have already created. Otherwise known as the Axiom of the Unordered Pair. For every set, there exists a set of sets that contains amongst its elements all the subsets of the given set. There exists a set containing: $(1): \quad$ the empty set; and $(2): \quad$ the successor of each of its elements. In the remaining axioms, the quantified variables range over classes. The first two differ from the ZFC axioms with the same names in this way only. The last two have no analogue among the ZFC axioms. For any non-empty class, there is an element of the class that shares no element with the class. For any class $\mathcal C$, a set $x$ such that $x = \mathcal C$ exists if and only if there is no bijection between $\mathcal C$ and the universe.
Hardness of quantum circuit equivalence? Given two poly-sized quantum circuits $C_1$ and $C_2$ on $n$ qubits with a universal gate set generated by some finite set of one- and two-qubit gates. I'm thinking of the gates $\langle H, T, CNOT\rangle$, but other universal gate sets should work as well. Notice that $C_1$ and $C_2$ each correspond to a $2^n \times 2^n$ unitary matrix, $U_1$ and $U_2$, respectively. How hard is it to determine whether $U_1 = U_2$, given the circuits $C_1$ and $C_2$? Clearly the problem is in coQMA since if $U_1$ and $U_2$ are different there exists an input state $r$ such that $C_1(r)$ is not equal to $C_2(r)$, which can be checked with a quantum computer. Has this problem been studied? Is it complete for this class? Is this class known as something else in the literature, since I cannot find much about it? The problem isn't clearly in coQMA. If $U_1$ and $U_2$ are different, and you're given a state on which they act differently, it is not necessary that a polynomial-time quantum computer can check that the output states are different. In fact, it is easy to show that if these two quantum states are different, but exponentially close in trace norm, then no polynomial-time quantum algorithm can distinguish the two. This is analogous to how no polynomial-time classical algorithm can distinguish two probability distributions that are exponentially close. On the other hand, if we're promised that $U_1$ and $U_2$ are not too close when they are not equal, then the problem is in coQMA. The complexity of either variant is known. In the exact case, this problem is complete for coNQP, which is the same as the class $C_=P$. Yu Tanaka, "Exact non-identity check is NQP-complete", International Journal of Quantum Information 8(5), 807–819, http://arxiv.org/abs/0903.0675. In the other case, where we're promised that they're not too close when unequal, the problem is indeed coQMA-complete.
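For constant-sized instances one can, of course, just multiply out the $2^n \times 2^n$ matrices and compare up to global phase — the question is about the complexity of doing this for general circuits. A toy sketch using the standard identity that conjugating CNOT by $H\otimes H$ reverses control and target:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])   # control = first qubit
NOTC = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])   # control = second qubit
HH = np.kron(H, H)

def equal_up_to_phase(U, V, tol=1e-9):
    """Compare two unitaries up to a global phase."""
    k = np.unravel_index(np.argmax(np.abs(U)), U.shape)
    return np.allclose((V[k] / U[k]) * U, V, atol=tol)

# Two circuits with equal unitaries: (H x H) CNOT (H x H) = reversed CNOT.
C1 = HH @ CNOT @ HH
C2 = NOTC
```

The brute-force comparison costs time exponential in $n$, which is exactly why the complexity-theoretic question above is interesting.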
This notebook uses Python to generate a list of relations and check their properties (reflexive, symmetric, antisymmetric, transitive, and equivalence). Definitions and examples are taken from lectures 15 and 16 of a discrete math course taught by Professor Maltais at the University of Ottawa during the 2017 winter semester. A function is an assignment of elements from one set (the domain) to another set (the codomain) where every element of the domain is assigned only a single element from the codomain. In other words, every pre-image has one (and only one) image. Relations are similar, but looser in the sense that pre-images can have multiple images. If $1$ is an element of A, it can be related to $5$ and $6$. This kind of assignment is not possible with a function. A binary relation between two sets (A and B) is defined as "a subset of A x B" (i.e. the "Cartesian product" of A and B). Similarly, a relation between a set (A) and itself is defined as a subset of A x A. What does this mean? Note: I'm going to use lists instead of sets here due to complications with powerset. Though it may be tempting, the above is not the set of all possible relations between A and B. Rather, it is the superset of any relation between A and B--in other words, any relation between A and B will be a subset of this set. However, the powerset of AB is the set of all possible relations between A and B. pAB contains all possible subsets of A x B (and equivalently) all possible relations between A and B. Some are functions (e.g. #6), but most are not. How many relations are there? You could count them, but I'll save you the trouble. There are 16 relations between the set $A$ and itself. For all possible relations on the set $A$, which are reflexive? [(1, 1)] is NOT reflexive. [(1, 2)] is NOT reflexive. [(2, 1)] is NOT reflexive. [(2, 2)] is NOT reflexive. [(1, 1), (1, 2)] is NOT reflexive. [(1, 1), (2, 1)] is NOT reflexive. [(1, 2), (2, 1)] is NOT reflexive. [(1, 1), (2, 2)] is reflexive! 
[(1, 2), (2, 2)] is NOT reflexive. [(2, 1), (2, 2)] is NOT reflexive. [(1, 1), (1, 2), (2, 1)] is NOT reflexive. [(1, 1), (1, 2), (2, 2)] is reflexive! [(1, 1), (2, 1), (2, 2)] is reflexive! [(1, 2), (2, 1), (2, 2)] is NOT reflexive. [(1, 1), (1, 2), (2, 1), (2, 2)] is reflexive! While the symmetric definition could be implemented exactly as described, there is actually no reason to test $(u, v)$ pairs that are not in a particular relation. If a pair is not in the relation, then the left side of the implication is false and the implication is vacuously true (because false $\rightarrow$ anything is always true). Of course, if $(v, u)$ happens to be in the relation, $(u, v)$ will be sought (and not found) causing symmetric(r) to return false. # search for (v, u) <- order switched! For all possible relations on the set $A$, which are symmetric? [(1, 2)] is NOT symmetric. [(2, 1)] is NOT symmetric. [(1, 1), (1, 2)] is NOT symmetric. [(1, 1), (2, 1)] is NOT symmetric. [(1, 2), (2, 1)] is symmetric! [(1, 1), (2, 2)] is symmetric! [(1, 2), (2, 2)] is NOT symmetric. [(2, 1), (2, 2)] is NOT symmetric. [(1, 1), (1, 2), (2, 1)] is symmetric! [(1, 1), (1, 2), (2, 2)] is NOT symmetric. [(1, 1), (2, 1), (2, 2)] is NOT symmetric. [(1, 2), (2, 1), (2, 2)] is symmetric! [(1, 1), (1, 2), (2, 1), (2, 2)] is symmetric! For this property (and possibly subsequent properties), I will use a helper function that searches a relation for a particular pair returning True if found and False otherwise. This should be easier to read than copy and pasting try/except blocks everywhere. As with symmetric, rather than implement the definition (which requires assigning variables to elements in $A$) I've elected to only test pairs that are in the relation. Also, I worked from the contrapositive definition because it felt easier to code. For all possible relations on the set $A$, which are antisymmetric? [(1, 1), (1, 2)] is antisymmetric! [(1, 1), (2, 1)] is antisymmetric! 
[(1, 2), (2, 1)] is NOT antisymmetric. [(1, 1), (2, 2)] is antisymmetric! [(1, 2), (2, 2)] is antisymmetric! [(2, 1), (2, 2)] is antisymmetric! [(1, 1), (1, 2), (2, 1)] is NOT antisymmetric. [(1, 1), (1, 2), (2, 2)] is antisymmetric! [(1, 1), (2, 1), (2, 2)] is antisymmetric! [(1, 2), (2, 1), (2, 2)] is NOT antisymmetric. [(1, 1), (1, 2), (2, 1), (2, 2)] is NOT antisymmetric. I followed the definition exactly for this one, the implementation (with 3 loops!) is costly--$O(n^3)$. transitive([(1, 2), (2, 1), (2, 2)], A) # 1 -> 2 and 2 -> 1, but 1 -> 1 is missing! Can I do better? Spoiler: not really. It is possible that transitive2() is more efficient than transitive(), but it is hard to tell and the logic is definitely more convoluted. For all possible relations on the set $A$, which are transitive? [(1, 1), (1, 2)] is transitive! [(1, 1), (2, 1)] is transitive! [(1, 2), (2, 1)] is NOT transitive. [(1, 1), (2, 2)] is transitive! [(1, 2), (2, 2)] is transitive! [(2, 1), (2, 2)] is transitive! [(1, 1), (1, 2), (2, 1)] is NOT transitive. [(1, 1), (1, 2), (2, 2)] is transitive! [(1, 1), (2, 1), (2, 2)] is transitive! [(1, 2), (2, 1), (2, 2)] is NOT transitive. [(1, 1), (1, 2), (2, 1), (2, 2)] is transitive! A relation $R$ on a set $A$ is called an equivalence relation if R is reflexive, symmetric, and transitive. From a performance standpoint, this one is super expensive! For all possible relations on the set $A$, which are equivalence relations? is NOT an equivalence relation. [(1, 1)] is NOT an equivalence relation. [(1, 2)] is NOT an equivalence relation. [(2, 1)] is NOT an equivalence relation. [(2, 2)] is NOT an equivalence relation. [(1, 1), (1, 2)] is NOT an equivalence relation. [(1, 1), (2, 1)] is NOT an equivalence relation. [(1, 2), (2, 1)] is NOT an equivalence relation. [(1, 1), (2, 2)] is an equivalence relation! [(1, 2), (2, 2)] is NOT an equivalence relation. [(2, 1), (2, 2)] is NOT an equivalence relation. 
[(1, 1), (1, 2), (2, 1)] is NOT an equivalence relation. [(1, 1), (1, 2), (2, 2)] is NOT an equivalence relation. [(1, 1), (2, 1), (2, 2)] is NOT an equivalence relation. [(1, 2), (2, 1), (2, 2)] is NOT an equivalence relation. [(1, 1), (1, 2), (2, 1), (2, 2)] is an equivalence relation! This turned out to be a good exercise. I think I have a better understanding of these properties and how to "prove" they are held by an arbitrary relation. In a way, the formal definition for symmetry is misleading. I see "for all u, v in A" and think "for all possible pairs in the set $A \times A$...oh no, I have to generate a Cartesian Product". But really, this is never necessary. While the reflexive property does require looking at the actual set, each element is considered independently (so the Cartesian Product is not needed). Which says: "for all ordered pairs in the relation, this implication must be true in order for the relation to be symmetric". To finish this I should really identify the relations that are functions as well as their types (injective, surjective, bijective), but I think I'll save that for another notebook.
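The checks walked through above can be collected into one self-contained sketch (function names follow the notebook's usage; the counts match the outputs printed above):

```python
from itertools import chain, combinations

A = [1, 2]
AB = [(a, b) for a in A for b in A]            # the Cartesian product A x A

def powerset(s):
    """Every subset of s -- i.e. every possible relation on A when s = A x A."""
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def reflexive(r, a):
    """(x, x) must be in r for every x in a."""
    return all((x, x) in r for x in a)

def symmetric(r):
    """Only pairs actually in r need testing: for each (u, v), look for (v, u)."""
    return all((v, u) in r for (u, v) in r)

def antisymmetric(r):
    """Contrapositive form: if (u, v) and (v, u) are both in r, then u == v."""
    return all(u == v for (u, v) in r if (v, u) in r)

def transitive(r):
    """Straight from the definition: (u, v) and (v, w) in r force (u, w) in r."""
    return all((u, w) in r for (u, v) in r for (v2, w) in r if v == v2)

def equivalence(r, a):
    """Reflexive, symmetric and transitive -- all three at once."""
    return reflexive(r, a) and symmetric(r) and transitive(r)

relations = [list(r) for r in powerset(AB)]    # all 16 relations on A
counts = {
    "reflexive": sum(reflexive(r, A) for r in relations),
    "symmetric": sum(symmetric(r) for r in relations),
    "antisymmetric": sum(antisymmetric(r) for r in relations),
    "transitive": sum(transitive(r) for r in relations),
    "equivalence": sum(equivalence(r, A) for r in relations),
}
```

Tallying the printed outputs above gives the same totals: 4 reflexive, 8 symmetric, 12 antisymmetric, 13 transitive, and 2 equivalence relations.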
There are $n$ children who want to go to a Ferris wheel, and your task is to find a gondola for each child. Each gondola may have one or two children, and in addition, the total weight in one gondola can be at most $x$. You know the weight of every child. What is the minimum number of gondolas needed for the children? The first input line contains two integers $n$ and $x$: the number of children and the maximum allowed weight. The next line contains $n$ integers $p_1,p_2,\ldots,p_n$: the weight of each child. Print one integer: the minimum number of gondolas.
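One standard approach (a sketch, not necessarily the intended solution): sort the weights and greedily pair the lightest remaining child with the heaviest one, if they fit together; otherwise the heaviest rides alone:

```python
def min_gondolas(x, weights):
    """Greedy two-pointer pairing: after sorting, try to seat the lightest
    remaining child with the heaviest; the heaviest boards either way."""
    w = sorted(weights)
    i, j = 0, len(w) - 1
    gondolas = 0
    while i <= j:
        if i < j and w[i] + w[j] <= x:
            i += 1          # lightest child joins the heaviest
        j -= 1              # heaviest child boards
        gondolas += 1
    return gondolas
```

The exchange argument is the usual one: if the lightest child cannot share with the heaviest, no one can share with the heaviest, so the heaviest might as well ride alone.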
Can someone help me find all the permutations in $S_6$ that commute with $(1\,3\,4\,2)$? 2) Obviously, any permutation $x$ commutes with itself; hence the subgroup generated by $x$ is included in the subgroup you're looking for. There are $6\cdot5\cdot3=90$ 4-cycles in $S_6$; since any 4-cycle is conjugate to your 4-cycle $a=(1\,3\,4\,2)$, the index of the centralizer of $a$ in $S_6$ is 90. Thus the order of the centralizer is 8. Since $\langle a\rangle\times \langle(5\,6)\rangle$ has order 8, this subgroup is the centralizer.
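A brute-force sanity check of that order-8 centralizer (0-based indexing; names are mine):

```python
from itertools import permutations

# The 4-cycle (1 3 4 2) as a 0-based mapping on {0,...,5}:
# 1->3, 3->4, 4->2, 2->1 becomes 0->2, 2->3, 3->1, 1->0; 5 and 6 stay fixed.
a = (2, 0, 3, 1, 4, 5)

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

centralizer = [p for p in permutations(range(6))
               if compose(p, a) == compose(a, p)]
```

The search over all $6! = 720$ permutations confirms exactly 8 of them commute with $a$, including the transposition $(5\,6)$, in line with the index argument above.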
A common plot device in story-telling is the "All Just A Dream" trope. Typical symptoms of this trope being used are talking lions, main characters dying, yodeling aliens on monocycles, and a general plethora of weird events. Then, of course, someone wakes up and it is revealed that everything that happened during the entire season did in fact not happen at all. It was All Just A Dream (or some kind of hallucination), and the days of our lives spent watching all those episodes are lost forever. In order to cause further confusion and uncertainty, this can also be done in layers, with characters having dreams within dreams within dreams, and so on. When the All Just A Dream trick is taken too far and gets used too often, it can get difficult to keep track of what has actually happened. This is where you enter the picture. You will be given a list of events, dreams, and scenarios. Each scenario specifies some events that have happened and some others that have not happened. Your job is to determine for each scenario whether that scenario is possible (possibly using the All Just A Dream trick). An event line is of the form "E $e$", indicating that event $e$ happens (see below for format of $e$). A dream line is of the form "D $r$", indicating that the last $r$ events that happened were All Just A Dream. Note that these events are now considered to not have happened, so they should not be counted when processing subsequent D lines. A scenario line is of the form "S $k$ $e_1$ $\ldots $ $e_ k$", where $1 \le k \le 30$ is an integer giving the number of events and $e_1, \ldots , e_ k$ is the list of events of the scenario. In a scenario, each event may be prefixed with a '!', indicating that the event did not happen in this scenario. Events are strings containing at most $20$ characters and using only the characters 'a'-'z' and underscores ('_'). 
For 'D' lines, you can assume that $r$ is an integer between $1$ and $R$, where $R$ is the total number of events that have happened so far (and that have not turned out to be a dream). For 'E' lines, you can assume that $e$ is not an event that has already happened, except if the previous occurrence of the event turned out to be a dream, in which case it can happen again. This problem has somewhat large amounts of input and output. We recommend you to make sure that your input and output are properly buffered in order to make the most of the few seconds of execution time that we give you. "Yes" if the given scenario is consistent with what has happened so far. "$r$ Just A Dream" if the given scenario would be consistent with what has happened so far, provided a "D $r$" line had occurred just before the scenario. If there are many possible values of $r$, choose the smallest value. Note that you should not consider this hypothetical "D $r$" line to have occurred (as illustrated by sample input 2 below).
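A naive sketch of the bookkeeping (too slow for the stated limits, but it makes the semantics concrete; the required output when no $r$ works is not shown in this excerpt, so the sketch returns `None` in that case):

```python
def process(lines):
    """Keep the stack of events that have actually happened; for a scenario,
    find the smallest r >= 0 such that undoing the last r events makes every
    'e' present and every '!e' absent. O(R^2) per scenario -- a sketch only."""
    stack, out = [], []
    for line in lines:
        parts = line.split()
        if parts[0] == "E":
            stack.append(parts[1])
        elif parts[0] == "D":
            del stack[len(stack) - int(parts[1]):]
        else:  # "S": events prefixed with '!' must NOT have happened
            want = {e.lstrip("!"): not e.startswith("!") for e in parts[2:]}
            ans = None  # output for an impossible scenario not shown here
            for r in range(len(stack) + 1):
                happened = set(stack[:len(stack) - r])
                if all((e in happened) == v for e, v in want.items()):
                    ans = "Yes" if r == 0 else f"{r} Just A Dream"
                    break
            out.append(ans)
    return out
```

Note the sketch honors the rule that a hypothetical "D $r$" is not actually applied: the stack is only truncated by real 'D' lines.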
We show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels, for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of required samples for a given approximation error, leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. In particular, we show that the upper bound may be obtained from independent and identically distributed samples from a specific non-uniform distribution, while the lower bound is valid for any set of points. Applying our results to kernel-based quadrature, while our results are fairly general, we recover known upper and lower bounds for the special cases of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximation (beyond simply computing an integral), with results in $L_2$- and $L_\infty$-norm that match known results for special cases. Applying our results to random features, we show an improvement of the number of random features needed to preserve the generalization guarantees for learning with Lipschitz-continuous losses.
I noticed that in

* [[action]], [[∞-action]]
* [[module]], [[∞-module]]
* [[representation]], [[∞-representation]]

the [[∞-module]] was kind of missing (we had [[module over an algebra over an (∞,1)-operad]]). So I created something stubby.

Isn't just "module" the preferred term?

One day "category" will be the preferred term for "$(\infty,1)$-category". But we are not quite at the point yet that it would be useful to enforce the [[implicit infinity-category theory convention]] globally on the $n$Lab, I'd think.

No, I am not that radical (one could say that "category" would then mean $(\infty,\infty)$-category, if we take every notion in its absolutely most general form). I mean the CONTEXT should always be specified precisely. Once the context is known, no modifiers are needed. For example, by a ball one traditionally meant a certain kind of geometrical body in Euclidean space. Once one takes its equation, one can have a version in a metric space. Usually, if one just says "ball", one will hardly mean the generality of a metric space. But in the context of metric spaces one does not need to say "metric ball"; just "ball" is enough. So never mentioning infinity and just saying "category" is misleading. But once we say we work with infinity-categories, then "module" is just "module" in this context. This is my opinion. 
> one could say that category would be then (∞,∞)

No, I think the "$\infty$-" to be dropped is that which is equivalent to "homotopy-". So in 50 years we'll say "category" for $(\infty,1)$-category, "$n$-category" for $(\infty,n)$-category, and "$\infty$-category" for $(\infty,\infty)$-category.

Apart from this, I am not sure what you would like me to change. If the context is clear, then we can drop the "$\infty$-". But every $n$Lab entry is its own context, and so we can't drop it in the title of the entry [[∞-module]].
Suppose $X$ has a uniform distribution on the integers $-3, -2, -1, 0, 1, 2, 3$. Find the probability mass function for $Z = X^2$. Suppose $X$ and $Y$ are independent random variables, both with uniform distributions on the integers $-2, -1, 0, 1, 2$. Find the probability mass function for $T = X + Y$. Suppose $X$ and $Y$ are independent random variables, both with uniform distributions on the integers $-2, -1, 0, 1, 2$. Find the probability mass function for $S = X - Y$. Suppose $X$ has a uniform distribution on the integers $-2, -1, 0, 1, 2$. Find the probability mass function for $Z = X^2 - X$. Suppose $X$ and $Y$ are independent random variables, both with uniform distributions on the integers $1, 2, \ldots, 10$. Find the probability mass functions for $W = \max(X, Y)$ and $U = \min(X, Y)$. For $W$ and $U$ in the previous problem, what is $P(U < W)$? What is $P(U \le W)$? Are $U$ and $W$ independent? For Problem Set due 5 October: Problems 2 and 5 from the above list.
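All of these yield to the same brute-force pattern: enumerate the (equally likely) outcomes and tally the transformed values. For instance, for the first problem (a sketch for checking answers, with names of my choosing):

```python
from collections import Counter
from fractions import Fraction

# X uniform on {-3, ..., 3}; pmf of Z = X^2 by direct enumeration.
support = range(-3, 4)
p = Fraction(1, len(support))
pmf_Z = Counter()
for x in support:
    pmf_Z[x * x] += p
```

The same loop, over pairs `(x, y)` instead of single values, handles the two-variable problems ($T$, $S$, $W$, $U$).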
Abstract: We study complex Chern-Simons theory on a Seifert manifold $M_3$ by embedding it into string theory. We show that complex Chern-Simons theory on $M_3$ is equivalent to a topologically twisted supersymmetric theory, and its partition function can be naturally regularized by turning on a mass parameter. We find that the dimensional reduction of this theory to 2d gives the low energy dynamics of vortices in four-dimensional gauge theory, a fact apparently overlooked in the vortex literature. We also generalize the relations between 1) the Verlinde algebra, 2) quantum cohomology of the Grassmannian, 3) Chern-Simons theory on $\Sigma\times S^1$ and 4) the index of a spin$^c$ Dirac operator on the moduli space of flat connections to a new set of relations between 1) the "equivariant Verlinde algebra" for a complex group, 2) the equivariant quantum K-theory of the vortex moduli space, 3) complex Chern-Simons theory on $\Sigma \times S^1$ and 4) the equivariant index of a spin$^c$ Dirac operator on the moduli space of Higgs bundles.
and server hardware, and provided key corporate middleware. has been maybe a decade or more since I did. I suspect that they DO in fact need to buy RH, even for $34 \times 10^9. of privileged or protected information on ANY cloud, for good reason. many vendors. The market is almost insane at the moment. -- again, from a software point of view. able to ride the bleeding edge of it, open source or not. as much as for any other reason. Fedora is where it is at, not RHEL. real money from top to bottom. >>> it is not only that, but I saw something about Mellanox today as well. >>>> I wonder where that places us in the not too distant future.. >>>> compatible to say the least.
We study the complexity of a class of problems involving satisfying constraints which remain the same under translations in one or more spatial directions. In this paper, we show hardness of a classical tiling problem on an $N \times N$ $2$-dimensional grid and a quantum problem involving finding the ground state energy of a $1$-dimensional quantum system of $N$ particles. In both cases, the only input is $N$, provided in binary. We show that the classical problem is $\NEXP$-complete and the quantum problem is $\QMAEXP$-complete. Thus, an algorithm for these problems which runs in time polynomial in $N$ (exponential in the input size) would imply that $\EXP = \NEXP$ or $\BQEXP = \QMAEXP$, respectively. Although tiling in general is already known to be $\NEXP$-complete, the usual approach is to require that either the set of tiles and their constraints or some varying boundary conditions be given as part of the input. In the problem considered here, these are fixed, constant-sized parameters of the problem. Instead, the problem instance is encoded solely in the size of the system. A preliminary version of this paper was posted on the arXiv in 2009. An extended abstract appeared in FOCS'09.
Abstract: In this talk, we will first recall the definitions and the properties of the singular support and the characteristic cycle of a constructible \'etale sheaf on a smooth variety. The singular support, defined by Beilinson, is a closed conical subset of the cotangent bundle. The characteristic cycle, constructed by Saito, is a $\mathbb Z$-linear combination of irreducible components of the singular support. This theory is an algebraic analogue of that studied by Kashiwara and Schapira in a transcendental setting. In the second part of this talk we will focus on joint work with Umezaki. We prove a conjecture of Kato-Saito on a twist formula for the epsilon factor of a constructible \'etale sheaf on a projective smooth variety over a finite field. In our proof, Beilinson and Saito's theory plays an essential role.
Generates a String representation to identify a single instance of StreamProcessor. When more than one processor co-exist within the same JVM, the processor identifier can be of the format: $x_$y, where 'x' is a unique identifier for the executing JVM and 'y' is a unique identifier for the processor instance within the JVM. When there is only one processor within a JVM, 'x' should be sufficient to uniquely identify the processor instance. Note: In case of more than one processors within the same JVM, the custom implementation of ProcessorIdGenerator can contain a static counter, which is incremented on each call to generateProcessorId. The counter value can be treated as the identifier for the processor instance within the JVM.
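A minimal sketch of that pattern — in Python rather than the actual Java interface, with all names hypothetical: a per-process identifier plays the role of '$x$' and a static counter supplies '$y$':

```python
import itertools
import uuid

# Hypothetical sketch (not the real interface): the "$x_$y" scheme described
# above, with one 'x' per process and a counter incremented on each call.
_PROCESS_ID = uuid.uuid4().hex     # plays the role of 'x'
_COUNTER = itertools.count()       # plays the role of 'y'

def generate_processor_id():
    return f"{_PROCESS_ID}_{next(_COUNTER)}"
```

With a single processor per process, the `_PROCESS_ID` prefix alone would already be unique, matching the note above.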
Abstract: The Fibonacci index of a graph is the number of its stable sets. This parameter is widely studied and has applications in chemical graph theory. In this paper, we establish tight upper bounds for the Fibonacci index in terms of the stability number and the order of general graphs and connected graphs. Tur\'an graphs frequently appear in extremal graph theory. We show that Tur\'an graphs and a connected variant of them are also extremal for these particular problems. Keywords: Stable sets; Fibonacci index; Merrifield-Simmons index; Tur\'an graph; $\alpha$-critical graph.
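Since the Fibonacci index is just the count of stable (independent) sets, small cases are easy to check by brute force; path graphs yield Fibonacci numbers, which is where the name comes from (a sketch, function name mine):

```python
from itertools import combinations

def fibonacci_index(n, edges):
    """Number of stable (independent) sets of a graph on vertices 0..n-1,
    by brute force over all subsets -- fine for small n."""
    count = 0
    for k in range(n + 1):
        for s in combinations(range(n), k):
            chosen = set(s)
            if all(not (u in chosen and v in chosen) for u, v in edges):
                count += 1
    return count
```

For the path $P_n$ this returns $F_{n+2}$, e.g. $P_3$ has the 5 stable sets $\emptyset$, $\{0\}$, $\{1\}$, $\{2\}$, $\{0,2\}$.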
The Cartan group is the free nilpotent Lie group of step 3 with 2 generators. This paper studies the Cartan group endowed with the left-invariant sub-Finsler $\ell_\infty$ norm. We adopt the viewpoint of time-optimal control theory. By the Pontryagin maximum principle, all sub-Finsler length minimizers belong to one of the following types: abnormal, bang-bang, singular, and mixed. Bang-bang controls are piecewise constant controls with values in the vertices of the set of control parameters. In a previous work, it was shown that bang-bang trajectories have a finite number of patterns determined by values of the Casimir functions on the dual of the Cartan algebra. In this paper we consider, case by case, all patterns of bang-bang trajectories, and obtain detailed upper bounds on the number of switchings of optimal control. For bang-bang trajectories with low values of the energy integral, we show optimality for arbitrarily large times. The bang-bang trajectories with high values of the energy integral are studied via a second-order necessary optimality condition due to A. Agrachev and R. Gamkrelidze. This optimality condition provides a quadratic form whose sign-definiteness is related to optimality of bang-bang trajectories. For each pattern of these trajectories, we compute the maximum number of switchings of optimal control. We show that optimal bang-bang controls may have no more than 11 switchings. For particular patterns of bang-bang controls, we obtain better bounds. In this way we improve the bounds obtained in previous works. On the basis of the results of this work we can start to study the cut time along bang-bang trajectories, i.e., the time when these trajectories lose their optimality. This question will be considered in subsequent works.

Cowling, M. G. and Martini, A., "Sub-Finsler Geometry and Finite Propagation Speed", Trends in Harmonic Analysis, Springer INdAM Ser., 3, ed. M. A. Picardello, Springer, Milan, 2013, 147–205, xii+447 pp.

Pontryagin, L. S., Boltyanskii, V. G., Gamkrelidze, R. V., and Mishchenko, E. F., The Mathematical Theory of Optimal Processes, Wiley, New York, 1962, 360 pp.
[1604.02602] Can Uplink Transmissions Survive in Full-duplex Cellular Environments? Abstract: In-band full-duplex (FD) communication is considered a potential candidate to be adopted by the fifth generation (5G) cellular networks. FD communication renders the entire spectrum simultaneously accessible by uplink and downlink, and hence, is optimistically promoted to double the transmission rate. While this is true for a single communication link, cross-mode interference (i.e., interference between uplink and downlink) may diminish the full-duplexing gain. This paper studies FD operation in large-scale cellular networks with real base stations (BSs) locations and 3GPP propagation environment. The results show that the uplink is the bottleneck for FD operation due to the overwhelming cross-mode interference from BSs. Operating uplink and downlink on a common set of channels in an FD fashion improves the downlink rate but significantly degrades (over 1000-fold) the uplink rate. Therefore, we propose the $\alpha$-duplex scheme to balance the tradeoff between the uplink and downlink rates via adjustable partial overlap between uplink and downlink channels. The $\alpha$-duplex scheme can provide a simultaneous $30\%$ improvement in each of the uplink and downlink rates. To this end, we discuss the backward compatibility of the $\alpha$-duplex scheme with half-duplex user-terminals. Finally, we point out future research directions for FD enabled cellular networks.
This paper further develops the theory of operations on a topological space with the aim of producing a uniform framework for the study of generalized forms of continuity. To illustrate the utility of this approach, the results obtained are used to derive many new and known characterizations of strong $\Theta$-semi-continuity, weak continuity and almost continuity.

In General Topology, there exist useful theorems on the existence of partitions of unity for paracompact regular spaces (and also for normal spaces). In this paper, we define the notion of a fuzzy partition of unity and we obtain some results about this concept. Keywords: Fuzzy topology, partition of unity, $r$-paracompact and $S$-paracompact fuzzy topological spaces, weak inducement, normality.

In the complete eigenvalue problem for matrices of order $n$, the essential role is played by the expansion of the characteristic determinant $$ D(\lambda)=\det(A-\lambda E)$$ or of some other determinant which is essentially identical to this one. There is a series of different methods by which one arrives at the explicit form of this polynomial. In this paper, iterative formulas are derived for finding all eigenvalues of a real matrix without expanding the characteristic polynomial. The method is based on Newton's method for solving systems of nonlinear equations. Keywords: Iterative method, eigenvalues of matrices.

We present a new method for constructing a global solution to the Hopf equation. Propagation and formation of nonlinear waves are described. Keywords: Nonlinear waves, shock generation, Hopf equation.

Various types of connectedness and disconnectedness are introduced using fuzzy $\alpha$-open sets. Several properties and characterizations of such spaces are discussed. Keywords: Fuzzy $\alpha$-connected, fuzzy super $\alpha$-connected, fuzzy strongly $\alpha$-connected, extremally fuzzy $\alpha$-disconnected, totally fuzzy $\alpha$-disconnected.
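The Newton-based idea in the eigenvalue abstract above can be illustrated with a short sketch: instead of expanding $D(\lambda)$, apply Newton's method directly to the nonlinear system $(A-\lambda E)v=0$, $\tfrac12(v^\top v-1)=0$ in the unknowns $(v,\lambda)$. This is a generic illustration of the approach, not the paper's exact iterative formulas.

```python
def solve(M, b):
    """Tiny Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented copy
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def newton_eigenpair(A, v, lam, tol=1e-12, max_iter=50):
    """Newton's method on (A - lam I)v = 0, (v.v - 1)/2 = 0,
    never forming the characteristic polynomial."""
    n = len(A)
    nv = sum(x * x for x in v) ** 0.5
    v = [x / nv for x in v]
    for _ in range(max_iter):
        F = [sum(A[i][j] * v[j] for j in range(n)) - lam * v[i] for i in range(n)]
        F.append((sum(x * x for x in v) - 1.0) / 2.0)
        if max(abs(x) for x in F) < tol:
            break
        # Jacobian of F w.r.t. (v, lam): [[A - lam I, -v], [v^T, 0]]
        J = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)] + [-v[i]]
             for i in range(n)]
        J.append(v + [0.0])
        s = solve(J, [-x for x in F])
        v = [v[i] + s[i] for i in range(n)]
        lam += s[n]
    return v, lam

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 1 and 3
v, lam = newton_eigenpair(A, [1.0, 0.9], 2.9)
print(round(lam, 6))  # -> 3.0
```

Starting near an eigenpair, the iteration converges quadratically; for a full spectrum one would repeat from several starting guesses, with deflation.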
We establish an a priori estimates result for an inverse problem of transport theory. We refer to , where some existence and uniqueness results are proved. Keywords: A priori estimates, inverse problem, transport theory.
Are there any good references which explain the injectivity part of Grothendieck's section conjecture in detail? I heard that Grothendieck himself gave a proof in his letter to G. Faltings, but I need a gentler explanation. I have already read the book 'Galois Groups and Fundamental Groups' by T. Szamuely; I do not know whether it contains a proof of the injectivity part, since maybe my understanding is not sufficient. Any suggestion would be very helpful to me. You can find it in http://www.springer.com/mathematics/algebra/book/978-3-642-30673-0, Chapter 7 by Jakob Stix. Beware that this appendix was suppressed from the published version (see the eponymous article on Jakob's webpage). "The proof follows rather easily from the Mordell-Weil theorem stating that the group $A(K)$ is a finitely generated $\mathbb Z$-module, where $A$ is the "jacobienne généralisée" (generalized Jacobian) of $Y$, corresponding to the "universal" embedding of $Y$ into a torsor under a quasi-abelian variety."
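For readers who want the precise setting (a standard formulation, suppressing basepoints; not part of the original question): for a smooth curve $X$ over a field $K$ with algebraic closure $\bar K$, one has the fundamental exact sequence

$$1 \longrightarrow \pi_1(X_{\bar K}) \longrightarrow \pi_1(X) \longrightarrow \operatorname{Gal}(\bar K/K) \longrightarrow 1,$$

and every rational point $x \in X(K)$ induces a section $s_x \colon \operatorname{Gal}(\bar K/K) \to \pi_1(X)$, well defined up to conjugation by $\pi_1(X_{\bar K})$. The injectivity part of the conjecture asserts that $x \mapsto [s_x]$, from $X(K)$ to conjugacy classes of sections, is injective; this is the statement that, as in the quoted passage, follows from the Mordell-Weil theorem.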
I really think that "popular" questions are treated with inappropriate hostility here. Do I think that "What is the rule for constructing the sequence $3,4,6,10$?" is a mathematically interesting question? No, of course not, because I already have a lot of mathematical experience. Do I agree that trending YouTube videos on the sum of all natural numbers generate very naive questions here? Sure, I do, because I know Banach limits, zeta functions, renormalization, etc. But this is because I already know a lot about mathematics. Some questions about university mathematics seem no less naive to me, but many people who will jump on closing a "guess the sequence" question will approve the other question because they can empathize with it. The question is clearly a "guess the rule of the sequence 3,4,6,10" question. It was originally tagged "mathematical-physics, complex-analysis, contest-math". I do not find any of these tags appropriate, but on the one hand, the person who edited them not only left in "complex-analysis" but added "algorithms", which is not appropriate either, and on the other hand, this context clearly tells us that the asker has little background in mathematics, wanted to label the question "difficult" and found "complex-*" tags instead. The question was "put on hold as unclear what you're asking", but it is perfectly clear what is being asked. The comments asking "are you talking about a sequence where $a_n =$ function of $n$?" ignore that the OP does not have to know functions or sequences to ask or understand this question. A proper answer to this question would be: link to the Encyclopedia of Integer Sequences, explaining in what sense it is appropriate to use, and explain how the formula could be found by hand for this particular sequence. Ideally, we would have a detailed answer for the common types of question (recognize the differences in this case, or see that repeated differences are 0, look at the binary representation, ...).
If people think that this is a good place to post "guess the number" sequences as riddles, one can just explain to them that they should state that they know the answer, but otherwise, there is no harm at all. I am even more discontented with the angry closures of the $1+2+3+\dots$ threads. This video generated interest and was a perfect opportunity to write very different informative answers on different ways of assigning values to divergent sums and on the merits and flaws of the video. Sure, duplicates should have been closed and redirected to the thread with the model answers. But they should have been closed gracefully, with enjoyment of the enthusiasm of the askers. And I have seen no excellent comprehensive answers that would merit a link for people stumped by the video seeking background information. Since the first questions did not mention the video and the later questions were closed quickly and with hostile comments, it was impossible to actually write a good answer about the implied question "Is this video serious? How can this work?". "Because zeta functions, and you lack the math background to understand it, so go away." is really not a very good answer. So, I am strongly in favour of treating people with little background in a friendly way and answering their questions either directly or by linking to a generic question with excellent answers, especially if it is obvious that they came here to ask a question out of curiosity. I am posting this here to hear your opinions on these issues. I strongly agree with the sentiment of this question. The often-seen dismissal of guess-the-next-number/pattern-recognition questions is also strange to me. As Phira notes in comments, recognizing powers of $2$ is a basic skill, as is looking at successive differences in a sequence, and a question about $3,4,6,10,\ldots$ is intended to help develop them.
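To make the successive-differences heuristic concrete, here is a small illustrative snippet (not from the discussion itself) applied to the sequence in question:

```python
def successive_differences(seq):
    """Repeatedly take differences of consecutive terms."""
    rows = [list(seq)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

for row in successive_differences([3, 4, 6, 10, 18]):
    print(row)
# [3, 4, 6, 10, 18]
# [1, 2, 4, 8]   <- powers of 2, suggesting the rule a_{n+1} = a_n + 2^{n-1}
# [1, 2, 4]
# [1, 2]
# [1]
```

One glance at the first difference row reveals the powers of 2 that the question is designed to have students notice.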
An example of such guessing that could well come up in my own research: suppose you're computing the cohomology of something, and in one example you find, in successive degrees, the dimensions $1, 3, 3, 1$, and in another example the dimensions $1,4,6,4,1$. Wouldn't you suspect that there was a general dimension formula given by binomial coefficients? Regarding $1 + 2 + 3 + \cdots$, the fact that this can be assigned a meaningful value is an amazing fact, with deep implications. It's wonderful! It's not surprising that people find it striking and want to ask about it here. There are lots of ways to explain it, too. For professional mathematicians, edge cases/non-obvious counterexamples can be interesting, as can the detailed hypotheses necessary to make certain statements true/false. Indeed, understanding such things is part of the pleasure of mastering a theory. But not all questions have to be answered from that vantage point. I remember an early question I answered where the OP, coming from a quantum mechanics background, asked if commuting operators were necessarily simultaneously diagonalizable. Several of the initial answers emphasized the edge cases that make this literally false; on the other hand, it is typically true, and is a basic principle of quantum mechanics, and I don't think focusing on the subtleties of why it wouldn't always hold was necessarily the best answer for the OP. In general, I would hope that people are thoughtful about where an OP is coming from, and about what kind of answer they might be looking for. Let's try to encourage people's appreciation of mathematics. I hope that our site can show an enjoyment of mathematics as something wonderful, not just as something recondite and technical, doctrinaire, full of edge cases, counterexamples, and cautions against error. Added in response to some comments below: It's easy to find examples of (somewhat) intolerant or judgmental behaviour in any area of human activity.
It would be good if we could try to aim for the highest possible standards of tolerance, acceptance, and understanding here (even knowing that we will sometimes fall short, due to natural human fallibility), rather than dwell too much on others' failings as a justification for our own.
In learning theory we often talk about an environment which is oblivious to our algorithm, or an environment which is fully aware of our algorithm and attempting to cause it to do badly. What about the case where the environment is fully aware of our algorithm and attempting to help it succeed? Here's a concrete example. Suppose you are trying to communicate a message to a receiver, and this message is one of a finite set of hypotheses. You are forced to communicate to the receiver by sending a sequence of feature-label pairs $(X \times Y)^*$; the receiver will decode the message via ERM on the hypothesis set using the supplied data. How many examples does it take, and how should you choose them? If this sounds corny, consider that evolution works by reuse, so if the capability to learn from experience developed due to other selection pressure, it might be co-opted to service communication a la Cognitive Linguistics. Intuitively, even if the hypothesis being communicated was learned from experience, it's not a good strategy to retransmit the exact data used to learn the hypothesis. In fact, it seems like the best strategy would be not using real data at all; by constructing an artificial set of training examples, favorable structure can be induced, e.g., the problem can be made realizable. (Funny aside: I TA'd for a professor once who confided that he sometimes lies to undergraduates in introductory courses in order to get them "closer to the truth"; the idea was, if they took an upper-division class he would have the ability to refine their understanding, and if not they were actually better off learning a simplified distortion.) Some concepts from learning theory are backwards in this setup. For instance, Littlestone's dimension indicates the maximum number of errors made in a realizable sequential prediction scenario when the examples are chosen adversarially (and generalizes to the agnostic case).
We can choose the examples helpfully here (what's the antonym of adversarial?), but actually we want errors, so that many of the hypotheses are incorrect and can be quickly eliminated. Unfortunately we might encounter a condition where the desired-to-be-communicated hypothesis disagrees with at most one other hypothesis on any point. Littlestone finds this condition favorable, since a mistake would eliminate all but one hypothesis, and otherwise no harm no foul; but in our situation this is worst-case behaviour, because it makes it difficult to isolate the target hypothesis with examples. In other words, we can choose the data helpfully, but if the set of hypotheses is chosen adversarially this could still be very difficult. Inventing an optimal fictitious sequence of data might be computationally too difficult for the sender. In this case active learning algorithms might provide good heuristic solutions. Here label complexity corresponds to data compression between the original sequence of data used to learn the hypothesis and the reduced sequence of data used to transmit the hypothesis. There is fertile ground for variations, e.g., the communication channel is noisy, the receiver does approximate ERM, or the communication is scored on the difference in loss between communicated and received hypotheses rather than 0-1 loss on hypotheses. Wow, _exactly_ what I was looking for ... and a perfect example of "I couldn't figure out what to ask Google".
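A toy sketch of this helpful-teacher setup: greedily pick labeled examples so that ERM over the hypotheses consistent with the data can only return the target. This is one plausible heuristic in the spirit of the teaching-dimension literature, with illustrative names throughout; it is not an algorithm from the post.

```python
def greedy_teaching_set(hypotheses, target, domain):
    """Greedily pick labeled examples (x, target(x)) that eliminate as many
    non-target hypotheses as possible, until ERM can only return the target."""
    alive = [h for h in hypotheses if h is not target]
    examples = []
    while alive:
        # the point where the target disagrees with the most surviving hypotheses
        x = max(domain, key=lambda x: sum(h(x) != target(x) for h in alive))
        if all(h(x) == target(x) for h in alive):
            break  # target not uniquely identifiable on this domain
        examples.append((x, target(x)))
        alive = [h for h in alive if h(x) == target(x)]
    return examples

# toy hypothesis class: thresholds h_t(x) = (x >= t) on the domain {0, ..., 5}
domain = range(6)
hyps = [lambda x, t=t: x >= t for t in range(7)]
target = hyps[3]
examples = greedy_teaching_set(hyps, target, domain)
print(examples)  # -> [(2, False), (3, True)]
```

Two well-chosen examples pin down the threshold, whereas retransmitting a long noisy training sequence would not compress at all; the mismatch with Littlestone's adversarial ordering is exactly the point made above.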
The Orion facility at the Atomic Weapons Establishment in the United Kingdom has the capability to operate one of its two 500 J, 500 fs short-pulse petawatt beams at the second harmonic, the principal reason being to increase the temporal contrast of the pulse on target. This is achieved post-compression, using 3 mm thick type-I potassium dihydrogen phosphate crystals. Since the beam diameter of the compressed pulse is ${\sim}600$ mm, it is impractical to achieve this over the full aperture due to the unavailability of large-aperture crystals. Frequency doubling was originally achieved on Orion using a circular sub-aperture of 300 mm diameter. The reduction in aperture limited the output energy to 100 J. The second-harmonic capability has been upgraded by taking two square 300 mm $\times$ 300 mm sub-apertures from the beam and combining them at focus using a single paraboloidal mirror, thus creating a 200 J, 500 fs, i.e., 400 TW facility at the second harmonic.
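The quoted 400 TW figure follows directly from the combined energy and the pulse duration; a one-line sanity check:

```python
energy_j = 200.0       # combined second-harmonic energy of the two sub-apertures
duration_s = 500e-15   # 500 fs pulse duration
peak_power_tw = energy_j / duration_s / 1e12
print(round(peak_power_tw))  # -> 400
```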
This special issue focuses on security & privacy aspects of emerging trends and applications involving Machine-to-Machine Cyber Physical Systems (M2M CPSs) in both generic and specific domains of interest. We invite original research articles proposing innovative solutions to improve IoT security and privacy, taking into account the low-resource characteristics of CPS components, the distributed nature of CPSs, and the connectivity constraints of IoT devices. For more information, visit the Special Issue webpage. Timing is crucial for safety, security, and responsiveness of Cyber-Physical Systems (CPS). This special issue invites manuscripts that study any aspect of the interaction of CPS and its timing. For more information, visit the Special Issue webpage. This special issue focuses on user-centric security and safety aspects of cyber-physical systems (CPS), with the aim of filling gaps between user behaviour and the design of complex cyber-physical systems. For more information, visit the Special Issue webpage. This special issue focuses on fundamental problems involving human-interaction-aware data analytics with future CPS. The aim of this special issue is to provide a platform for researchers and practitioners from academia, government and industry to present their state-of-the-art research results in the area of human-interaction-aware data analytics for CPS. For more information, visit the Special Issue webpage. This special issue seeks original manuscripts which will cover recent developments on methods, architecture, design, validation and application of resource-constrained cyber-physical systems that exhibit a degree of self-awareness. For more information, visit the Special Issue webpage. This special issue invites original, high-quality work that reports the latest advances in real-time aspects of CPSs.
Featured articles should present novel strategies that address real-time issues in different aspects of CPS design and implementation, including theory, system software, middleware, applications, network, tool chains, test beds, and case studies. For more information, visit the Special Issue webpage. The aim of this special issue will be to feature articles on new technologies that will impact future transportation systems. They might span vehicular technologies (such as autonomous vehicles, vehicle platooning and electric cars), communication technologies to enable vehicle-to-vehicle and vehicle-to-infrastructure communication, security mechanisms, infrastructure-level technologies to support transportation, as well as management systems and policies such as traffic light control, intersection management, dynamic toll pricing and parking management. In addition to terrestrial transportation, traffic control and autonomous management of aerial vehicles and maritime ships are also of interest. For more information, visit the Special Issue webpage. Vehicular cyber-physical systems are implemented to share taxi resources efficiently using intensive algorithms running on telematics devices. However, due to the lack of social interactions, it is hard for conventional systems to improve user experience without considering passengers' inner connections. In this paper, we propose an optimization scheme for these vehicular cyber-physical systems which integrates social interaction with real-time street data to improve the sharing efficiency and user experience. To answer the sharing requirement from potential passengers, our system allocates the taxi resource under the trade-off between cost and social interactions. We state and solve the sharing arrangement problem by a heuristic algorithm called SONETS, designed to satisfy overwhelming requests from streets with limited taxi resources at peak time.
The simulation results show that our algorithm achieves a higher integrated benefit than other solutions. Travel time in urban centers is a significant contributor to the quality of living of its citizens. Mobility on Demand (MoD) services such as Uber and Lyft have revolutionized the transportation infrastructure, enabling new solutions for passengers. Shared MoD services have shown that a continuum of solutions can be provided between the traditional private transport for an individual and public mass-transit-based transport, by making use of the underlying cyber-physical substrate that provides advanced, distributed, and networked computational and communicational support. In this paper, we propose a novel shared mobility service using a dynamic framework. This framework generates a dynamic route for multi-passenger transport, optimized to reduce time costs for both the shuttle and the passengers, and is designed using a new concept of a space window. This concept introduces a degree of freedom that helps reduce the cost of the system involved in designing the optimal route. A specific algorithm based on the Alternating Minimization approach is proposed. Its analytical properties are characterized. Detailed computational experiments are carried out to demonstrate the advantages of the proposed approach and are shown to result in an order of magnitude improvement in computational efficiency with minimal optimality gap when compared to a standard Mixed Integer Quadratically Constrained Programming based algorithm. This article describes a system to facilitate dynamic en route formation of truck platoons with the goal of reducing fuel consumption. Safe truck platooning is a maturing technology which leverages modern sensor, control, and communication technology to automatically regulate the inter-vehicle distances. Truck platooning has been shown to reduce fuel consumption through slipstreaming by up to ten percent under realistic highway conditions.
In order to further benefit from this technology, a platoon coordinator is proposed, which interfaces with fleet management systems and suggests how platoons can be formed in a fuel-efficient manner over a large region. The coordinator frequently updates the plans to react to newly available information. This way, it requires a minimum of information about the logistic operations. We discuss the system architecture in detail and introduce important underlying methodological foundations. Plans are derived in computationally tractable stages, optimizing fuel savings from platooning. The effectiveness of this approach is verified in a simulation study. It shows that the coordinated platooning system can improve over spontaneously occurring platooning even in the presence of disturbances. A real demonstrator has also been developed. We present data from an experiment in which three vehicles were coordinated to form a platoon on public highways under normal traffic conditions. It demonstrates the feasibility of coordinated en route platoon formation with current communication and on-board technology. Simulations and experiments support that the proposed system is technically feasible and a potential solution to the problem of using truck platooning in an operational context. Pipelined control is an image-based control approach that uses parallel instances of its image-processing algorithm in a pipelined fashion to improve the quality of control. A higher number of pipes improves the controller settling time, resulting in a trade-off between resources and control performance. In real-life applications, it is common to have a continuous-time model with additive uncertainties in one or more parameters that may affect the controller performance and therefore the trade-off analysis. We consider models with uncertainties denoted by matrices with a single non-zero element, potentially caused by multiple uncertain parameters in the model.
We analyse the impact of such uncertainties on the aforementioned trade-off. To do so, we introduce a discretization technique for the uncertain model. Next, we use the discretized model with uncertainties to analyse the robustness of a pipelined controller designed to enhance performance. Such an analysis captures the relationship between resource usage, control performance, and robustness. Our results show that the tolerable uncertainties for a pipelined controller decrease when increasing the number of pipes. We also show the feasibility of our technique by implementing a realistic example in a hardware-in-the-loop simulation. Vehicular cyber-physical systems (VCPS), among several other applications, may help in addressing the ever-increasing problem of congestion in large cities. Nevertheless, this may be hindered by the problem of data falsification, which results from either wrong perception of a traffic event or generation of fake information by the participating vehicles. Such information fabrication may cause re-routing of vehicles and artificial congestion, leading to economic, public safety, environmental, and health hazards. Thus, it is imperative to infer truthful traffic information in real time for restoration of the operational reliability of the VCPS. In this work, we propose a novel reputation scoring and decision support framework, called Spoofed and False Report Eradicator (SAFE), which offers a cost-effective and efficient solution to handle the data falsification problem in the VCPS domain. It includes humans in the sensing loop by exploiting the paradigm of participatory sensing, a concept of mobile security agents (MSA) to nullify the effects of deliberate false contributions, and a variant of the distance-bounding mechanism to thwart location-spoofing attacks. A regression-based model integrates these effects to generate the expected truthfulness of a participant's contribution.
To determine whether a contribution is true or not, a generalized linear model is used to transform expected truthfulness into a Quality of Contribution (QoC) score. The QoC scores of different contributions are aggregated to compute the user reputation. Such reputation enables classification of different participation behaviors. Finally, an Expected Utility Theory (EUT)-based decision model is proposed which utilizes the reputation score to determine whether a piece of information should be published or dropped. To evaluate SAFE through experimental study, we compare the reputation-based user segregation performance achieved by our framework with that generated by state-of-the-art reputation mechanisms. Experimental results demonstrate that SAFE is able to better capture subtle differences in user behaviors based on quality, quantity and location accuracy, and significantly improves operational reliability through accurate publishing of only legitimate information. Smart cities can be viewed as large-scale Cyber-Physical Systems (CPS) in which different sensors and devices record the cyber and physical indicators of the urban environment. Those records are being used to improve urban life by offering improved efficiencies with accurate electric load forecasting, efficient traffic management, etc. Accurate forecasting is mostly dependent on sufficient and reliable data. Traditional data collection methods are necessary but not sufficient due to their limited coverage and the expensive cost of implementation and maintenance. For example, continuous traffic data collection is mostly limited to major highways in many cities, whereas secondary and local roadways are usually covered once or twice a year. The advances in sensor networks and recent technological developments, such as methods based on vehicle locations and in-vehicle devices through mobile phones or GPS-based systems in transportation networks, provide such an opportunity.
Although these technologies have the potential to connect the physical components and processes with the cyber world, leading to cyber-physical systems (CPS), they also have significant drawbacks. Specifically, they usually suffer from limited resolution due to limitations on time frame, cost, accuracy, and reliability. One way of improving the limited resolution is data fusion. Furthermore, a city should be considered as a collection of layers of tangled city infrastructure networks which connect people, places, and resources. Therefore, the study of traffic or electricity consumption forecasting should go beyond the transportation and electricity networks, and merge these with each other and even with other city networks such as environmental networks. As such, this paper proposes a traffic and electric load forecasting methodology which benefits from data fusion techniques in order to compensate for the lack of sufficient information in any of these aforementioned networks. For this purpose, a Bayesian spatiotemporal Gaussian Process model is proposed which employs the most informative spatiotemporal interdependency within its own network, and covariates from other city networks. The proposed load forecasting fusion method is compared with other state-of-the-art methods including Autoregressive Integrated Moving Average with Explanatory Variable (ARIMAX), Multivariate Linear Regression, Support Vector Regression and Neural Network Regression using real-life data obtained from the City of Tallahassee in Florida. Results show that the multi-network data fusion framework improves the accuracy of load forecasting, and the proposed Bayesian spatiotemporal Gaussian Process model outperforms all the above-mentioned methods. Model-based development is an important paradigm for developing cyber-physical systems (CPS). Early verification and validation of embedded software speeds up the development process and saves costs.
This is especially challenging, since CPSs interact with complex environments through sensors and actuators, requiring models of the relevant CPS and its context. Therefore, the strong underlying assumption is that models are adequate for the verification task. Conformance testing addresses this problem by checking that two models of the same CPS are conformant, i.e., produce equivalent behavior w.r.t. the verification task. Although conformance is in general undecidable, for the models of CPSs relevant in practice, non-formal conformance checking procedures typically succeed in verifying conformance. In this work, we survey conformance checking for CPS: we not only compare approaches for the evaluation of conformance, but also survey the required input generation. The rapid development of vehicular network and autonomous driving technologies provides opportunities to significantly improve transportation safety and efficiency. One promising application is centralized intelligent intersection management, where an intersection manager accepts requests from approaching vehicles (via vehicle-to-infrastructure communication messages) and schedules the order for those vehicles to safely cross the intersection. However, communication delays and packet losses may occur due to the unreliable nature of wireless communication or malicious security attacks (such as jamming and flooding), and could cause deadlocks and unsafe situations. In our previous work, we considered these issues and proposed a delay-tolerant intersection management protocol for intersections with a single lane in each direction. In this work, we address key challenges in efficiency and deadlock avoidance when there are multiple lanes in each direction, and propose a delay-tolerant protocol for general multi-lane intersection management. We prove that this protocol is deadlock-free, safe, and satisfies the liveness property.
Furthermore, we extend the traffic simulation suite SUMO with communication modules, implement our protocol in the extended simulator, and quantitatively analyze its performance with consideration of communication delays. Finally, we also model systems using smart traffic lights with back-pressure scheduling in SUMO, and compare our delay-tolerant intelligent intersection protocol with smart traffic lights in cases of a single intersection and a network of interconnected intersections. Simulation results demonstrate the effectiveness of our approach. Android users are increasingly concerned with the privacy of their data and the security of their devices. To improve the security awareness of users, recent automatic techniques produce security-centric descriptions by performing program analysis. However, the generated text does not always address users' concerns, as it is generally too technical to be understood by ordinary users. Moreover, different users have varied linguistic preferences, which do not match the text. Motivated by this challenge, we develop an innovative scheme to help users avoid malware and privacy-breaching apps by generating security descriptions that explain the privacy- and security-related aspects of an Android app in clear and understandable terms. We implement a prototype system, PERSCRIPTION, to generate personalised security-centric descriptions that automatically learn users' security concerns and linguistic preferences to produce user-oriented descriptions. We evaluate our scheme through experiments and user studies. The results clearly demonstrate the improvement in readability and users' security awareness of PERSCRIPTION's descriptions compared to existing description generators. Energy harvesters are becoming increasingly popular as power sources for IoT edge devices. However, one of the intrinsic problems of energy harvesters is that the harvested power is often weak and frequently interrupted.
Therefore, energy harvesting powered edge devices have to work intermittently. To maintain execution progress, execution states need to be checkpointed into non-volatile memory before each power failure. In this way, previous execution states can be resumed after power comes back again. Nevertheless, frequent checkpointing and low charging efficiency generate significant energy overhead. To alleviate these problems, this paper conducts a thorough energy efficiency analysis and proposes three algorithms to maximize the energy efficiency of program execution. First, a non-volatile processor (NVP) aware task scheduling (NTS) algorithm is proposed to reduce the size of checkpointing data. Second, a tentative checkpointing avoidance (TCA) technique is proposed to avoid checkpointing for further reduction of checkpointing overhead. Finally, a dynamic wake-up strategy (DWS) is proposed to wake up the edge device at proper voltages where the total hardware and software overhead is minimized for further energy efficiency maximization. The experiments on a real testbed demonstrate that, with the proposed algorithms, an edge device is resilient to extremely weak and intermittent power supply and its energy efficiency is up to $2\times$ that of the baseline technique. There is a growing trend of employing cyber-physical systems to help smart homes improve the comfort of residents. However, a residential cyber-physical system differs from a common cyber-physical system since it directly involves human interaction, which is full of uncertainty. The existing solutions can be effective for performance enhancement in some cases when no inherent and dominant human factors are involved. Besides, the rapidly rising interest in deploying cyber-physical systems at home does not normally integrate with energy management schemes, a central issue that smart homes have to face.
In this paper, we propose a cyber-physical system based energy management framework to enable a sustainable edge computing paradigm while meeting the needs of home energy management and residents. This framework aims to enable the full use of renewable energy while reducing electricity bills for households. A prototype system was implemented using real-world hardware. The experiment results demonstrated that renewable energy is fully capable of supporting the reliable running of home appliances most of the time and that electricity bills could be cut by up to 60% when our proposed framework was employed. Embedded computing devices play an integral role in the mechanical operations of modern-day vehicles. These devices exchange information that contains critical vehicle parameters reflecting the current state of operations. Such information can be captured for various purposes like diagnostics, fleet management, and even independent research. Although monitoring individual parameters can be useful for some applications, monitoring distinct combinations of parameters can reveal more complex and higher-level states that may be worth observing. Existing monitoring systems either lack user configurability and control or present simple user interfaces that make it difficult to monitor and collate different parameters in order to observe high-level vehicle states. In this work, we present TruckSTM, a novel application that realizes user-defined states from messages seen in the embedded networks of medium and heavy duty vehicles and displays state transitions on an interactive user interface. We begin by symbolically formulating some of the in-vehicle networking concepts and formally defining the concepts of operational states and state transitions. We then elaborate on the operations performed by TruckSTM in mapping network-obtained vehicle parameters to states that can be defined in standard JSON format.
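As a rough illustration of how parameter combinations could map to user-defined states, the sketch below uses a hypothetical JSON condition format; the abstract does not spell out TruckSTM's actual schema, so the state names, field names, and operators here are invented:

```python
import json

# Hypothetical state definitions; NOT TruckSTM's actual JSON schema.
STATE_SPEC = json.loads("""
[
  {"name": "HardBraking", "when": {"speed_kph": [">", 60], "brake_pedal": ["==", 1]}},
  {"name": "Cruising",    "when": {"speed_kph": [">", 60], "brake_pedal": ["==", 0]}},
  {"name": "Idle",        "when": {"speed_kph": ["<=", 5]}}
]
""")

OPS = {">": lambda a, b: a > b, "<=": lambda a, b: a <= b, "==": lambda a, b: a == b}

def current_state(params):
    """Return the first user-defined state whose conditions all hold, else None."""
    for spec in STATE_SPEC:
        if all(OPS[op](params[key], ref) for key, (op, ref) in spec["when"].items()):
            return spec["name"]
    return None
```

A real implementation would additionally track transitions between successive states as new network messages arrive.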
Finally, we evaluate TruckSTM's asymptotic performance and present the results for the worst-case scenario. Coordinated vehicles for intelligent traffic management are instances of cyber-physical systems with strict correctness requirements. A key building block for these systems is the ability to establish a group membership view that accurately captures the locations of all vehicles in a particular area of interest. We formally define view correctness in terms of soundness and completeness and establish theoretical bounds for the ability to verify view correctness. Moreover, we present an architecture for an online view detection and verification process that uses the information available locally to a vehicle. This architecture uses an SMT solver to automatically prove view correctness. We evaluate this architecture and demonstrate that the ability to verify view correctness is on par with the ability to detect view violations. Cyber-Physical-Social Systems (CPSS), integrating the cyber, physical and social worlds, are a key technology for providing proactive and personalized services for humans. In this paper, we study CPSS, taking human-interaction-aware big data (HIBD) as the starting point. However, the HIBD collected from all aspects of our daily lives are high-order and large-scale, which brings ever-increasing challenges for their cleaning, integration, processing and interpretation. Therefore, new strategies for representing and processing HIBD become increasingly important in the provision of CPSS services. As an emerging technique, the tensor is proving to be a suitable and promising representation and processing tool for HIBD. In particular, tensor networks, a significant kind of tensor decomposition, bring advantages in the computing, storage and application of HIBD.
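To make the tensor-decomposition idea concrete, here is a minimal single-machine NumPy sketch of the classical Tensor-Train construction (sequential truncated SVDs over matrix unfoldings). This is the textbook TT-SVD idea only, not the distributed DTT method the work proposes:

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose a dense tensor into a train of third-order cores via sequential SVDs."""
    shape = tensor.shape
    cores, r = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        rk = min(max_rank, len(S))                      # truncate to the rank budget
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        mat = (S[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of low-order cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(out.ndim - 1, 0))
    return out.reshape([c.shape[1] for c in cores])

# Small third-order example; with a generous rank budget the decomposition is exact.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))
cores = tt_decompose(T, max_rank=10)
T_hat = tt_reconstruct(cores)
```

Each core is a third-order tensor, which is exactly the "series of low-order tensors" property that makes TT attractive for high-order data.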
Furthermore, Tensor-Train (TT), another kind of tensor network, is particularly well suited for representing and processing high-order data by decomposing a high-order tensor into a series of low-order tensors. However, at present, there is still a need for an efficient Tensor-Train decomposition method for massive data. Therefore, for larger-scale HIBD, a highly efficient computational method for Tensor-Train is required. In this paper, a distributed Tensor-Train (DTT) decomposition method is proposed to process high-order and large-scale HIBD. The high performance of the proposed DTT, in terms such as execution time, is demonstrated with a case study on typical CPSS data: CT (Computed Tomography) image data. Furthermore, recognition, a typical CPSS application for HIBD, was carried out in TT form to illustrate the advantage of DTT. Modern trains rely on balises (communication beacons) located on the track to provide location information as they traverse a rail network. Balises, such as those conforming to the Eurobalise standard, were not designed with security in mind and are thus vulnerable to cyber attacks targeting data availability, integrity, or authenticity. In this work, we discuss data integrity threats to balise transmission modules and use high-fidelity simulation to study the risks posed by data integrity attacks. To mitigate such risk, we propose a practical two-layer solution: at the device level, we design a lightweight and low-cost cryptographic solution to protect the integrity of the location information; at the system layer, we devise a secure hybrid train speed controller to mitigate the impact under various attacks. Our simulation results demonstrate the effectiveness of our proposed solutions. It is challenging to design a secure and efficient multi-factor authentication scheme for real-time user data access in wireless sensor networks (WSNs).
On the one hand, such real-time applications are generally security-critical, and various security goals need to be met. On the other hand, sensor nodes and users' mobile devices are typically of a resource-constrained nature, and expensive cryptographic primitives cannot be used. In this work, we first revisit four foremost multi-factor authentication schemes, i.e., Srinivas et al.'s (IEEE TDSC'18), Amin et al.'s (JNCA'18), Li et al.'s (JNCA'18) and Li et al.'s (IEEE TII'18) schemes, and use them as case studies to reveal the difficulties and challenges in getting a multi-factor authentication scheme for WSNs right. We identify the root causes of their failures to achieve truly multi-factor security and forward secrecy. We further propose a robust multi-factor authentication scheme that makes use of the imbalanced computational nature of the RSA cryptosystem, particularly suitable for scenarios where sensor nodes (but not the user's device) are the main energy bottleneck. Comparison results demonstrate the superiority of our scheme. As far as we know, it is the first one that can satisfy all twelve criteria of the state-of-the-art evaluation metric under the harshest adversary model so far. The trend of connected/autonomous features adds significant complexity to traditional automotive systems in order to improve driving safety and comfort. Engineers are facing significant challenges in designing test environments that are more complex than ever. We propose a test framework that allows one to automatically generate various virtual road environments from a path specification and a behavior specification. The path specification characterizes the geometric paths that an environmental object (e.g., roadways or pedestrians) needs to be visualized on or move over. We characterize this aspect in the form of linear or nonlinear constraints on 3-dimensional coordinates.
Then, we introduce a test coverage, called area coverage, to quantify the quality of generated paths in terms of how wide an area the generated paths can cover. We propose an algorithm that automatically generates such paths using an SMT (Satisfiability Modulo Theories) solver. On the other hand, the behavioral specification characterizes how an environmental object changes its mode over time by interacting with other objects (e.g., a pedestrian waits for a signal or starts crossing). We characterize this aspect in the form of timed automata. Then, we introduce a test coverage, called edge/location coverage, to quantify the quality of the generated mode changes in terms of how many modes or transitions are visited. We propose a method that automatically generates many different mode changes using a model-checking method. To demonstrate the test framework, we developed a right-turn pedestrian warning system in intersection scenarios and generated many different types of pedestrian paths and behaviors to analyze the effectiveness of the system.
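The edge/location coverage notion can be made concrete with a small sketch; the pedestrian automaton below is invented for the example and is not taken from the work itself:

```python
# Hypothetical pedestrian automaton: locations (modes) and edges (transitions).
LOCATIONS = {"waiting", "crossing", "done"}
EDGES = {("waiting", "crossing"), ("crossing", "done"), ("crossing", "waiting")}

def coverage(covered, total):
    """Fraction of a model's locations (or edges) visited by the generated tests."""
    return len(set(covered) & total) / len(total)

# One generated behavior: wait, then cross, then finish.
trace = ["waiting", "crossing", "done"]
visited_edges = set(zip(trace, trace[1:]))

location_cov = coverage(trace, LOCATIONS)   # all 3 locations are visited
edge_cov = coverage(visited_edges, EDGES)   # only 2 of 3 edges are exercised
```

The gap between the two numbers illustrates why both metrics are reported: a trace can visit every mode while still missing transitions, such as the "give up and go back to waiting" edge here.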
Abstract: We study the following combinatorial problem. Given a planar graph $G=(V,E)$ and a set of simple cycles $\mathcal C$ in $G$, find a planar embedding $\mathcal E$ of $G$ such that the number of cycles in $\mathcal C$ that bound a face in $\mathcal E$ is maximized. We establish a tight border of tractability for this problem in biconnected planar graphs by giving conditions under which the problem is NP-hard and showing that relaxing any of these conditions makes the problem polynomial-time solvable. Moreover, we give a $2$-approximation algorithm for series-parallel graphs and a $(4+\varepsilon)$-approximation for biconnected planar graphs.
This work considers an optimal inventory control problem using a long-term average criterion. In the absence of ordering, the inventory process is modeled by a one-dimensional diffusion on some interval of $(-\infty, \infty)$ with general drift and diffusion coefficients and boundary points that are consistent with the notion that demands tend to reduce the inventory level. Orders instantaneously increase the inventory level and incur both positive fixed and level-dependent costs. In addition, state-dependent holding/backorder costs are incurred continuously. Examination of the steady-state behavior of $(s, S)$ policies leads to a two-dimensional nonlinear optimization problem for which a pair of optimizers establishes the levels for an optimal $(s_*, S_*)$ policy. Using average expected occupation and ordering measures and weak convergence arguments, weak conditions are given for the optimality of the $(s_*,S_*)$ ordering policy in the general class of admissible policies. The analysis involves an auxiliary $C^2$ function that solves a particular system of linear equations and inequalities related to but different from the long-term average Hamilton-Jacobi-Bellman equation. This approach provides an analytical solution to the problem rather than a solution involving intricate analysis of the stochastic processes. The utility of these results is illustrated on drifted and geometric Brownian motion inventory models under conventional and non-conventional cost structures.
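As a back-of-envelope illustration of the two-dimensional $(s, S)$ cost trade-off, consider a deterministic fluid caricature (constant demand rate $\mu$, fixed order cost $K$, holding rate $h$, no backorders). This toy model is not the diffusion model analyzed above, but it shows the same tension between ordering frequency and holding cost, and its optimizer reduces to the classical EOQ balance:

```python
import math

def avg_cost(s, S, K, h, mu):
    """Long-run average cost in a toy fluid model: order up to S when the level hits s.
    Cycle length (S - s) / mu; one fixed cost K per cycle; mean inventory (s + S) / 2."""
    return K * mu / (S - s) + h * (s + S) / 2.0

K, h, mu = 100.0, 2.0, 50.0

# With no backorders the holding term pushes s to 0, so grid-search over S only.
S_grid = [0.1 * i for i in range(1, 2001)]
S_star = min(S_grid, key=lambda S: avg_cost(0.0, S, K, h, mu))

eoq = math.sqrt(2 * K * mu / h)  # classical EOQ order quantity
```

The grid search lands on the EOQ value, confirming that in this degenerate deterministic case the nonlinear program collapses to the familiar one-dimensional trade-off.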
The main task is to implement a function that will fuse the laser scan data into the occupancy grid map. In the class GridMap.py, implement the fuse_laser_scan function. The purpose of the function is to fuse new data into the occupancy grid map as the robot traverses the environment. The occupancy grid map is initialized to the size of the VREP scene (10$\times$10 m). The laser scan measurements shall be fused into the grid using the Bayesian update described in Lab03 - Grid-based Path Planning. The obstacle sensing is achieved using the simulated laser range finder through the RobotHAL interface, via the self.robot object. The laser scan is in relative coordinates w.r.t. the hexapod base, where the $x$ axis corresponds to the heading of the robot and the $y$ axis is perpendicular to the robot heading. The recommended approach to occupancy map building using the Bayesian approach is described in Lab03 - Grid-based Path Planning. The GridMap is a probabilistic representation of the world. In particular, the variable "self.grid" holds the probability of each cell being occupied (self.grid['p']) and the derived binary information about the passability of the given cell (self.grid['free']). Access and update of the probabilities can be done using the functions self.get_cell_p(coord) and self.set_cell_p(coord), which will also automatically update the passability information when the probability changes. To further simplify and speed up the verification of the fuse_laser_scan function, it is recommended to construct an evaluation dataset by recording the data needed as input to the function during a single simulated run, and then simply read and process these data. The following figures visualize a possible sequence of map building given the following evaluation script.
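A minimal sketch of the per-cell Bayesian update (in log-odds form) that fuse_laser_scan has to perform for each cell touched by a beam. The sensor-model probabilities below are illustrative placeholders; the actual values, coordinate transforms, and ray casting come from Lab03:

```python
import math

def update_cell(p_prior, p_meas, p_min=0.05, p_max=0.95):
    """Fuse one measurement into a cell's occupancy probability via log-odds.

    p_meas > 0.5 means the measurement suggests 'occupied', < 0.5 suggests 'free'.
    The result is clamped so that cells never fully saturate and can still recover."""
    l = math.log(p_prior / (1.0 - p_prior)) + math.log(p_meas / (1.0 - p_meas))
    p = 1.0 / (1.0 + math.exp(-l))
    return min(max(p, p_min), p_max)

# Illustrative inverse sensor model: beam endpoint occupied, cells along the ray free.
P_OCC, P_FREE = 0.7, 0.3

p = 0.5                      # unknown prior
for _ in range(3):           # three consistent 'occupied' hits on the same cell
    p = update_cell(p, P_OCC)
```

Three agreeing measurements already push the cell's probability above 0.9, while a single contradictory reading only partially undoes that, which is exactly the smoothing behaviour wanted for noisy range data. In the real function, each update would go through self.get_cell_p and self.set_cell_p so the passability flag stays consistent.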
Radio continuum surveys of the Galactic plane can find and characterize HII regions, supernova remnants (SNRs), planetary nebulae (PNe), and extragalactic sources. A number of surveys at high angular resolution (<25") at different wavelengths exist to study the interstellar medium (ISM), but no comparable high-resolution and high-sensitivity survey exists at long radio wavelengths around 21cm. We observed a large fraction of the Galactic plane in the first quadrant of the Milky Way (l=14.0-67.4deg and |b| < 1.25deg) with the Karl G. Jansky Very Large Array (VLA) in the C-configuration covering six continuum spectral windows. These data provide a detailed view of the compact as well as extended radio emission of our Galaxy and thousands of extragalactic background sources. We used the BLOBCAT software and extracted 10916 sources. After removing spurious source detections caused by the sidelobes of the synthesised beam, we classified 10387 sources as reliable detections. We smoothed the images to a common resolution of 25" and extracted the peak flux density of each source in each spectral window (SPW) to determine the spectral indices $\alpha$ (assuming $I(\nu)\propto\nu^\alpha$). By cross-matching with catalogs of HII regions, SNRs, PNe, and pulsars, we found radio counterparts for 840 HII regions, 52 SNRs, 164 PNe, and 38 pulsars. We found 79 continuum sources that are associated with X-ray sources. We identified 699 ultra-steep spectral sources ($\alpha < -1.3$) that could be high-redshift galaxies. Around 9000 of the sources we extracted are not classified specifically, but based on their spatial and spectral distribution, a large fraction of them is likely to be extragalactic background sources. More than 7750 sources do not have counterparts in the SIMBAD database, and more than 3760 sources do not have counterparts in the NED database.
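Since the spectral index is defined through $I(\nu)\propto\nu^\alpha$, fitting $\alpha$ for a source amounts to a straight-line fit in log-log space across the spectral windows. A sketch with a synthetic source; the six frequencies are made-up placeholders, since the survey's actual SPW centres are not listed in this abstract:

```python
import numpy as np

# Illustrative SPW centre frequencies in GHz (placeholders, not the survey's values).
nu = np.array([1.05, 1.20, 1.34, 1.50, 1.68, 1.87])
alpha_true = -0.8
peak_flux = 12.0 * nu ** alpha_true          # synthetic power-law source, I(nu) ∝ nu^alpha

# alpha is the slope of log I versus log nu.
alpha_fit, log_amplitude = np.polyfit(np.log(nu), np.log(peak_flux), 1)

ultra_steep = alpha_fit < -1.3               # the abstract's threshold for high-z candidates
```

For real per-SPW peak flux densities one would weight the fit by the per-window noise, but the log-log slope is the core of the measurement.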
The idea is to work out the product of the numbers on these different routes from $A$ to $B$. Let's say that in a route you are not allowed to visit a point more than once. For example, we could have $3\times0.5$ but we couldn't have $3\times2\times5\times4\times1\times 0.1$ because that route passes through $A$ twice. Which route or routes give the largest product? Which route or routes give the smallest product? Do you have any quick ways of working out the products each time? Multiplying decimals is perceived as a tricky task but this problem is written to encourage children to make the link between multiplying by a decimal and dividing by a whole number. This challenge also requires learners to work in a systematic way. You might find it useful to leave pupils to have a go at this challenge without saying much at all by way of introduction. It would probably be helpful for them to have a paper copy of the grid. After a few minutes invite them to comment on how they are going about the task. It would be useful to focus on two aspects. Firstly, the way that they are approaching the problem - how will they know they have looked at all the routes? This would be worth a discussion on having a system, such as starting with all the routes that begin by going up from A along the $5$ path, then those that start with the $3$ etc. Secondly, you could talk about their methods for calculating the products in each case. Some learners might suggest that multiplying by $2$ 'cancels out' multiplying by $0.5$, for example, and you can encourage them to explain why. You could ask pupils to write each route (and its product) on a separate strip of paper which could then be stuck on the board in the plenary. Ask the class to organise the strips to help them decide whether any have been missed out. Answering the questions will then be very straightforward. How do you know you have looked at all the possible routes? How are you calculating the product each time? 
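The 'cancelling' observation can be checked exactly by rewriting the decimals as fractions, which is also one way to see why multiplying by $0.5$ is the same as dividing by $2$. The short route below reuses numbers mentioned in the problem rather than a full route from the grid:

```python
from fractions import Fraction

half = Fraction(1, 2)

# Multiplying by 0.5 is the same as dividing by 2:
assert 3 * half == Fraction(3) / 2

# Multiplying by 2 'cancels out' multiplying by 0.5:
assert Fraction(7) * 2 * half == 7

# So in a product of route values, each (2, 0.5) pair can be ignored:
route = [Fraction(3), Fraction(2), half, Fraction(1, 10)]
product = Fraction(1)
for value in route:
    product *= value
```

Working in fractions keeps every intermediate result exact, which matches the suggestion below of expressing the decimals as fractions before finding the products.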
As an extension activity you could invite children to create their own grid using some criteria. For example, could they make the product of all routes $1$? (Perhaps without using the number $1$?) They could start by using the route design given and then make up their own route maps. It might help some learners to express the decimals as fractions before finding the products. This may well come up in the whole-class discussion.
Hoang Viet Long, Nguyen Thi Kim Son, Ha Thi Thanh Tam, Bùi Công Cường, On the existence of fuzzy solutions for partial hyperbolic functional differential equations, International Journal of Computational Intelligence Systems, 7, (2014), 1159-1173, SCI(-E), Scopus. Đỗ Văn Lưu, Dinh Dieu Hang, Efficient solutions and optimality conditions for vector equilibrium problems. Mathematical Methods of Operations Research, 79 (2014), 163 - 177, SCI(-E), Scopus. Đỗ Văn Lưu, Dinh Dieu Hang, On optimality conditions for vector variational inequalities. Journal of Mathematical Analysis and Applications, 412 (2014), 792 - 804, SCI(-E), Scopus. Đỗ Văn Lưu, Necessary and sufficient conditions for efficiency via convexificators. Journal of Optimization Theory and Applications, 160 (2014), 510 - 526, SCI(-E), Scopus. Đỗ Văn Lưu, Convexificators and necessary conditions for efficiency. Optimization, 63 (2014), 321 - 335, SCI(-E), Scopus. Đỗ Văn Lưu, Higher-order efficiency conditions via higher-order tangent cones. Numerical Functional Analysis and Optimization, 35 (2014), 685 - 707, SCI(-E), Scopus. Bùi Công Cường, Picture Fuzzy Sets. Journal of Computer Science and Cybernetics, 30 (2014), 409 - 420. Bùi Công Cường, P.H.Phong, Max-min composition of linguistic intuitionistic fuzzy relations and application in medical diagnosis, VNU Journal of Science: Computer science and Communication engineering, 30 (2014), 57-66. Bùi Công Cường, P.H.Phong, Some intuitionistic linguistic aggregation operators. Journal of Computer Science and Cybernetics, 30 (2014), 216 - 226. Bùi Công Cường, P.V.Chien, A computing procedure combining fuzzy clustering with fuzzy inference system for financial index forecasting. In: Proceedings of the First NAFOSTED on Information and Computer Science, March 13-14-2014, (2014), 497 - 506. 
Pham Thi Lan, Hồ Đăng Phúc, Nguyen Quynh Hoa, Nguyen Thi Kim Chuc, Cecilia Stålsby Lundborg, Improved knowledge and reported practice regarding sexually transmitted infections among healthcare providers in rural Vietnam: a cluster randomised controlled educational intervention. BMC Infectious Diseases 2014, 14:646, SCI(-E), Scopus. Anna Nielsen, Pham Thi Lan, Gaetano Marrone, Hồ Đăng Phúc, Nguyen Thi Kim Chuc, Cecilia Stålsby Lundborg, Reproductive Tract Infections in Rural Vietnam, Women's Knowledge and Health Seeking Behaviour: A Cross-Sectional Study. Health Care For Women International 05/2014, 35, DOI: 10.1080/07399332.2014.920021, SCI(-E), Scopus. Phan Thành An, Nguyen Ngoc Hai, Tran Van Hoai, Le Hong Trang, On the performance of triangulation-based multiple shooting method for 2D shortest path problems, LNCS Transactions on Large Scale Data and Knowledge Centered Systems, Springer, 2014, 45-56, Scopus. Nguyễn Minh Chương, Ha Duy Hung, Bounds of weighted Hardy-Cesàro operators on weighted Lebesgue and BMO spaces, Integral Transforms and Special Functions, 25 (2014), 697 -- 710, SCI(-E), Scopus. Nguyen Minh Chuong, Tran Dinh Ke, and Nguyen Nhu Quan, Stability for a class of fractional partial integro-differential equations, Journal of Integral Equations and Applications, 26 (2014), 145 -- 170, SCI(-E), Scopus. Nguyễn Đình Công, Đoàn Thái Sơn, Hoàng Thế Tuấn, Stefan Siegmund, Structure of the Fractional Lyapunov Spectrum for Linear Fractional Differential Equations, Advances in Dynamical Systems and Applications, 9 (2014), 149-159, SCI(-E), Scopus. M.V. Thuan, Vũ Ngọc Phát, T.L. Fernando, H. Trinh, Exponential stabilization of time-varying delay systems with nonlinear perturbations, IMA Journal of Mathematical Control and Information, 31 (2014), 441-464, SCI(-E), Scopus. Ngo Thi Ngoan, Nguyễn Quốc Thắng, On some Hasse principle for algebraic groups over global fields, II. Proc. Jap. Acad. Ser. A, 8 (2014), 107 - 112, SCI(-E), Scopus. 
Do Thi Thuy Nga, Nguyen Thi Kim Chuc, Nguyen Phuong Hoa, Nguyen Quynh Hoa, Nguyen Thi Thuy Nguyen, Hoang Thi Loan, Tran Khanh Toan, Hồ Đăng Phúc, Peter Horby, Nguyen Van Yen, Nguyen Van Kinh, Heiman FL Wertheim, Antibiotic sales in rural and urban pharmacies in northern Vietnam: an observational study, BMC Pharmacology and Toxicology 2014, 15:6, SCI(-E), Scopus. Dinh T. Hoa, Hồ Minh Toàn, Hiroyuki Osaka, Matrix means of finite orders, RIMS Kokyuroku, 1893 (2014), 57-66. Dinh T. Hoa, Du T. H. Binh, Hồ Minh Toàn, On some inequalities with matrix means, RIMS Kokyuroku, 1893 (2014), 67-71. Lê Dũng Mưu, Le Quang Thuy, DC optimization algorithms for solving minimax flow problems, Mathematical Methods of Operations Research, 80 (2014), 83-97, SCI(-E), Scopus. M.V. Bulatov, M.N. Machkhina, Vũ Ngọc Phát, Existence and uniqueness of solutions to integral-algebraic equations with variable limits of integrations, Communications on Applied Nonlinear Analysis, 21(2014), 65-76, Scopus. Hà Huy Khoái, Vu Hoai An and Le Quang Ninh, Uniqueness Theorems for Holomorphic Curves with Hypersurfaces of Fermat–Waring Type, Complex Analysis and Operator Theory, 8 (2014), 1747-1759, SCI(-E), Scopus. Nguyễn Tất Thắng, Admissibility of local systems for some classes of line arrangements. Canadian Mathematical Bulletin 57 (2014), 658–672, SCI(-E), Scopus. Nguyen Ba Minh, Le Anh Tuan, Phạm Hữu Sách, Efficiency in vector quasi-equilibrium problems and applications. Positivity 18 (2014), 531–556, SCI(-E), Scopus. Nguyen Thi Thu Huong, Nguyễn Đông Yên, The Pascoletti-Serafini scalarization scheme and linear vector optimization. Journal of Optimization Theory and Applications 162 (2014), 559–576, SCI, Scopus. Nguyen Thanh Qui, Nguyễn Đông Yên, A class of linear generalized equations. SIAM Journal on Optimization 24 (2014), 210–231, SCI, Scopus. 
Nguyễn Đình Công, Stefan Siegmund, Nguyen Thi The, Adjoint equation and Lyapunov regularity for linear stochastic differential algebraic equations of index 1, Stochastics An International Journal of Probability and Stochastic Processes, 86 (2014), 776-802, SCI(-E). V. F. Chistyakov, Tạ Duy Phượng, On Qualitative Properties of Differential-Algebraic Equations, Mat. Zametki, 96 (2014), 596–608 (Mi mz9367) English version: Mathematical Notes, 2014, 96:4, 563–574, SCI(-E), Scopus. Đinh Sĩ Tiệp, Hà Huy Vui, Pham Tien Son, A Frank–Wolfe type theorem for nondegenerate polynomial programs, Mathematical Programming, 147(2014), 519-538, SCI(-E), Scopus. Dang Vu Giang, Beurling spectrum of a function in a Banach space, Acta Mathematica Vietnamica, 39 (2014), 305-312, Scopus. Mai Viet Thuan, Le Van Hien, Vũ Ngọc Phát, Exponential stabilization of non-autonomous delayed neural networks via Riccati equations, Applied Mathematics and Computation, 246(2014), 533-545, SCI(-E), Scopus. Vũ Thế Khôi, The Dijkgraaf–Witten invariants of circle bundles, Vietnam Journal of Mathematics, 42 (2014), 393-399, Scopus. Nguyễn Văn Châu, Jacobian Pairs of Two Rational Polynomials are Automorphisms, Vietnam Journal of Mathematics, 42 (2014), 401-406, Scopus. Nguyễn Việt Dũng, Tran Quoc Cong, The Homotopy Type of the Complement to a System of Complex Lines in $\mathbb C^2$, Vietnam Journal of Mathematics, 42(2014), 365-375, Scopus. Hoàng Lê Trường, Index of reducibility of parameter ideals and Cohen-Macaulay rings, Journal of Algebra, 415, 2014, pp. 35–49, SCI(-E), Scopus. Đinh Nho Hào, Tran Nhan Tam Quyen, Finite element methods for coefficient identification in an elliptic equation. Applicable Analysis 93 (2014), 1533–1566, SCI(-E), Scopus. Vũ Thế Khôi, Seifert volumes and dilogarithm identities, Journal of Knot Theory and Its Ramifications Vol. 23, (2014) 1450025 (11 pages), SCI(-E), Scopus. 
Bui Van Dinh, Pham Gia Hung, Lê Dũng Mưu, Bilevel optimization as a regularization approach to pseudomonotone equilibrium problems. Numerical Functional Analysis and Optimization 35 (2014), 539–563, SCI(-E), Scopus. Đoàn Trung Cường, Fibers of flat morphisms and Weierstrass preparation theorem. Journal of Algebra 411 (2014), 337–355, SCI(-E), Scopus. Vũ Ngọc Phát, Nguyen Huu Sau, On exponential stability of linear singular positive delayed systems, Applied Mathematics Letters, 38 (2014), 67–72, SCI(-E), Scopus. Hồ Đăng Phúc, Domains of operator semi-attraction of probability measures on Banach spaces, Brazilian Journal of Probability and Statistics, 28 (2014), 587-611, SCI(-E), Scopus. Ngô Đắc Tân, Vertex disjoint cycles of different lengths in d-arc-dominated digraphs, Operations Research Letters, 42 (2014), 351 - 354,SCI(-E), Scopus. Ngo Thi Ngoan, Nguyễn Quốc Thắng, On some Hasse principles for algebraic groups over global fields. Proceedings of the Japan Academy, Series A, Mathematical Sciences, 90 (2014), 73–78, SCI(-E), Scopus. Bùi Trọng Kiên, Nhu, V. H, Second-order necessary optimality conditions for a class of semilinear elliptic optimal control problems with mixed pointwise constraints. SIAM Journal on Control and Optimization 52 (2014), 1166–1202, SCI(-E); Scopus. Formenti Enrico, Phạm Văn Trung, Phan Thị Hà Dương, Tran Thi Thu Huong, Fixed-point forms of the parallel symmetric sandpile model. Theoretical Computer Science 533 (2014), 1–14, SCI(-E); Scopus. Vũ Ngọc Phát, T. Fernando, H. Trinh, Observer-based control for time-varying delay neural networks with nonlinear observation, Neural Computing and Applications, Vol. 24, 2014, 1639-1645, SCI(-E); Scopus. Ngô Đắc Tân, On d-arc-dominated oriented graphs, Graphs and Combinatorics, 30, (2014), 1045 - 1054, SCI(-E); Scopus. Nguyễn Thị Vân Hằng, The penalty functions method and multiplier rules based on the Mordukhovich subdifferential. 
Set-Valued and Variational Analysis 22 (2014), 299-321, SCI(-E); Scopus. Phan Thành An, N. N. Hai, and T. V. Hoai, The role of graph for solving some geometric shortest path problems in 2D and 3D, Proceedings of the 5th FTRA International Conference on Computer Science and its Applications (CSA-13), Danang, Vietnam, December 18 - 21, 2013, 2013, Lecture Notes in Electrical Engineering (LNEE), Springer, Vol. 279, pp. 179-184, 2014, Scopus. Đào Quang Khải, Nguyễn Minh Trí, Solutions in mixed-norm Sobolev–Lorentz spaces to the initial value problem for the Navier–Stokes equations, Journal of Mathematical Analysis and Applications 417 (2014) 819-833, SCI(-E); Scopus. Đinh Nho Hào, Phan Xuan Thanh, Lesnic, D., Ivanchov, M, Determination of a source in the heat equation from integral observations. Journal of Computational and Applied Mathematics 264 (2014), 82–98, SCI(-E); Scopus. Đoàn Trung Cường, Local rings with zero-dimensional formal fibers. Journal of Algebra 403 (2014), 76–92, SCI(-E); Scopus. Nguyễn Tự Cường, Nguyen Van Hoang, On the finiteness and stability of certain sets of associated prime ideals of local cohomology modules. Communications in Algebra 42 (2014), 1757–1768, SCI(-E); Scopus. Phan Thiên Thạch, T. V. Thang, Problems with resource allocation constraints and optimization over the efficient set, Journal of Global Optimization, 58(2014), 481-495, SCI(-E); Scopus. Phan Thuan Do, Dominique Rossin, Tran Thi Thu Huong, Permutations weakly avoiding barred patterns and combinatorial bijections to generalized Dyck and Motzkin paths, Discrete Mathematics 320 (2014), 40–50, SCI(-E); Scopus. P. N. Anh, Lê Dũng Mưu, A hybrid subgradient algorithm for nonexpansive mappings and equilibrium problems, Optimization Letters 8 (2014), 727–738, SCI(-E); Scopus. G.M. Lee, Nguyễn Đông Yên, Coderivatives of a Karush-Kuhn-Tucker point set map and applications, Nonlinear Analysis: Theory, Methods & Applications 95 (2014), 191–201, SCI; Scopus. 
Dao Phuong Bac, Nguyễn Quốc Thắng, On the topology on group cohomology of algebraic groups over complete valued fields, Journal of Algebra 399 (2014), 561–580, SCI(-E); Scopus. Nguyễn Đình Công, Đoàn Thái Sơn, Hoàng Thế Tuấn, On fractional lyapunov exponent for solutions of linear fractional differential equations, Fractional Calculus and Applied Analysis, 17 (2014), 285-306, SCI(-E); Scopus. Nguyễn Đình Công, Đoàn Thái Sơn, Stefan Siegmund, Hoàng Thế Tuấn, On stable manifolds for planar fractional differential equations, Applied Mathematics and Computation, 226 (2014), 1, 157-168, SCI(-E); Scopus. Đinh Sĩ Tiệp, Hà Huy Vui, Tiến Sơn Phạm, Nguyễn Thị Thảo, Global Łojasiewicz-type inequality for non-degenerate polynomial maps, Journal of Mathematical Analysis and Applications 410 (2014), 541–560, SCI(-E); Scopus. Naoki Terai, Ngô Việt Trung, On the associated primes and the depth of the second power of squarefree monomial ideals, Journal of Pure and Applied Algebra, 218 (2014), 1117–1129, SCI(-E); Scopus. Gregor Kemper, Ngô Việt Trung, Krull dimension and monomial orders, Journal of Algebra, 399(2014), 782–800, SCI(-E); Scopus. Đỗ Ngọc Diệp, Category of Noncommutative CW-Complexes. II, Vietnam J Math 42 (2014) 73–82, Scopus. Y. Katsov, Trần Giang Nam, On radicals of semirings and related problems, Communications in Algebra, Volume 42, 2014, 5065-5099, SCI(-E); Scopus. Yefim Katsov, Trần Giang Nam, Jens Zumbrägel, On simpleness of semirings and complete semirings, Journal of Algebra and Its Applications, Vol. 13 (2014), SCI(-E); Scopus. Nguyen Truong Thanh, Vũ Ngọc Phát, Decentralized stability for switched nonlinear large-scale systems with interval time-varying delays in interconnections, Nonlinear Analysis: Hybrid Systems, 11 (2014), 21-36, SCI(-E); Scopus.
CommonCrawl
Abstract: The method of Galerkin approximations is employed to prove the existence of a strong global (in time) solution of a doubly nonlinear parabolic equation in an unbounded domain. The second integral identity is established for Galerkin approximations, and by passing to the limit in it an estimate for the decay rate of the norm of the solution from below is obtained. The estimates characterizing the decay rate of the solution as $x\to\infty$ obtained here are used to derive an upper bound for the decay rate of the solution with respect to time; the resulting estimate is quite close to the lower one. Keywords: doubly nonlinear parabolic equation, rate of decay of the solution, lower estimate, existence of a strong global (in time) solution.
Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals. Bounds on the Size of Small Depth Circuits for Approximating Majority. An Alternative Cracking of The Genetic Code. The second weight of generalized Reed-Muller codes. On a problem of Frobenius in three numbers. The Complexity of Nash Equilibria in Simple Stochastic Multiplayer Games. New Algorithms and Lower Bounds for Sequential-Access Data Compression. Graph Sparsification in the Semi-streaming Model. (Withdrawn) The Ergodic Capacity of The MIMO Wire-Tap Channel. On Optimization of Local Histogram Equalization. Acoustic wave equation in the expanding universe. Sachs-Wolfe theorem. Hierarchical Triple-Modular Redundancy (H-TMR) Network For Digital Systems. Immunity and Pseudorandomness of Context-Free Languages. Malware Detection using Attribute-Automata to parse Abstract Behavioral Descriptions. Stability and Delay of Zero-Forcing SDMA with Limited Feedback. Non-monotone submodular maximization under matroid and knapsack constraints. Optimum Power and Rate Allocation for Coded V-BLAST. On the complexity of Nash dynamics and Sink Equilibria. Decoding Network Codes by Message Passing. On the Applicability of Combinatorial Designs to Key Predistribution for Wireless Sensor Networks. AxialGen: A Research Prototype for Automatically Generating the Axial Map. Formalization of malware through process calculi. Graphical Reasoning in Compact Closed Categories for Quantum Computation. An Optimal Multi-Unit Combinatorial Procurement Auction with Single Minded Bidders. Analysis of bandwidth measurement methodologies over WLAN systems. Binary Data Compression with and without Side Information at the Decoder: the Syndrome-Based Approach Using Off-the-Shelf Turbo Codecs. Beyond Zipf's law: Modeling the structure of human language. Degrees of Guaranteed Envy-Freeness in Finite Bounded Cake-Cutting Protocols. Efficient implementation of linear programming decoding. 
Application of the Weil representation: diagonalization of the discrete Fourier transform. Embedding Data within Knowledge Spaces. Interference and Congestion Aware Gradient Broadcasting Routing for Wireless Sensor Networks. A Simple Extraction Procedure for Bibliographical Author Field. Genetic algorithm based optimization and post optimality analysis of multi-pass face milling. A Multiobjective Optimization Framework for Routing in Wireless Ad Hoc Networks. Alleviating Media Bias Through Intelligent Agent Blogging. Bootstrapped Oblivious Transfer and Secure Two-Party Function Computation. Finding Exact Minimal Polynomial by Approximations. The Ergodic Capacity of Interference Networks. Matrix Graph Grammars and Monotone Complex Logics. A Unified Framework for Linear-Programming Based Communication Receivers. Comparative concept similarity over Minspaces: Axiomatisation and Tableaux Calculus. MicroSim: Modeling the Swedish Population. Multiple time-delays system modeling and control for router management. Design and performance evaluation of a state-space based AQM. On Designing Lyapunov-Krasovskii Based AQM for Routers Supporting TCP Flows. Towards a Theory of Requirements Elicitation: Acceptability Condition for the Relative Validity of Requirements. Robust control tools for traffic monitoring in TCP/AQM networks. On the Gaussian MAC with Imperfect Feedback. Beam Selection Gain Versus Antenna Selection Gain. New Confidence Measures for Statistical Machine Translation. Towards a Statistical Methodology to Evaluate Program Speedups and their Optimisation Techniques. Optimal design and optimal control of structures undergoing finite rotations and elastic deformations. Compressed Representations of Permutations, and Applications. Fast solving of Weighted Pairing Least-Squares systems. Kolmogorov Complexity and Solovay Functions. Weak Mso with the Unbounding Quantifier. Polynomial-Time Approximation Schemes for Subset-Connectivity Problems in Bounded-Genus Graphs. 
On finding a particular class of combinatorial identities. A Polynomial Kernel For Multicut In Trees. On the Average Complexity of Moore's State Minimization Algorithm. A Model for Managing Collections of Patterns. How happy is your web browsing? A probabilistic model to describe user satisfaction. The Complexity of Datalog on Linear Orders. Directed paths on a tree: coloring, multicut and kernel. Opportunistic Communications in Fading Multiaccess Relay Channels. Discovering general partial orders in event streams. On Why and What of Randomness. On Local Symmetries And Universality In Cellular Autmata. Almost-Uniform Sampling of Points on High-Dimensional Algebraic Varieties. Hardness and Algorithms for Rainbow Connectivity. Compilation of extended recursion in call-by-value functional languages. Extraction de concepts sous contraintes dans des données d'expression de gènes. Database Transposition for Constrained (Closed) Pattern Mining. Nonclairvoyant Speed Scaling for Flow and Energy. An Approximation Algorithm for l\infty-Fitting Robinson Structures to Distances. A Note on the Diagonalization of the Discrete Fourier Transform. Delay Performance Optimization for Multiuser Diversity Systems with Bursty-Traffic and Heterogeneous Wireless Links. Fountain Codes Based Distributed Storage Algorithms for Large-scale Wireless Sensor Networks. Multi-Label Prediction via Compressed Sensing. On the minimum distance graph of an extended Preparata code. A Note on Contractible Edges in Chordal Graphs. On the Additive Constant of the k-server Work Function Algorithm. Fundamental delay bounds in peer-to-peer chunk-based real-time streaming systems. The Price of Anarchy in Cooperative Network Creation Games. Personalised and Dynamic Trust in Social Networks. Correlated Sources over Broadcast Channels. An Order on Sets of Tilings Corresponding to an Order on Languages. A Comparison of Techniques for Sampling Web Pages. 
Lower Bounds for Multi-Pass Processing of Multiple Data Streams. Asymptotically Optimal Lower Bounds on the NIH-Multi-Party Information. Package upgrades in FOSS distributions: details and challenges. A baby steps/giant steps Monte Carlo algorithm for computing roadmaps in smooth compact real hypersurfaces. Perfect Matchings in Õ(n1.5) Time in Regular Bipartite Graphs. Improvements of real coded genetic algorithms based on differential operators preventing premature convergence. A bound on the size of linear codes. A competitive comparison of different types of evolutionary algorithms. Novel anisotropic continuum-discrete damage model capable of representing localized failure of massive structures. Part II: identification from tests under heterogeneous stress field. Back analysis of microplane model parameters using soft computing methods. Fast computation of interlace polynomials on graphs of bounded treewidth. A Simple Linear Time Split Decomposition Algorithm of Undirected Graphs. A New Achievable Rate for the Gaussian Parallel Relay Channel. Cover Time and Broadcast Time. A Robust Statistical Estimation of Internet Traffic. On the Dynamics of the Error Floor Behavior in (Regular) LDPC Codes. Counting Distinctions: On the Conceptual Foundations of Shannon's Information Theory. Distributionally Robust Stochastic Programming with Binary Random Variables. Matrix Graph Grammars with Application Conditions. Optimal Probabilistic Ring Exploration by Asynchronous Oblivious Robots. Polynomial Kernelizations for $\MINF_1$ and $\MNP$. A Unified Approach to Sparse Signal Processing. A Superpolynomial Lower Bound on the Size of Uniform Non-constant-depth Threshold Circuits for the Permanent. Local Multicoloring Algorithms: Computing a Nearly-Optimal TDMA Schedule in Constant Time. Abstraction and Refinement in Static Model-Checking. A Proof of Concept for Optimizing Task Parallelism by Locality Queues. NNRU, a noncommutative analogue of NTRU. 
Topological Centrality and Its Applications. Cooperative Spectrum Sensing based on the Limiting Eigenvalue Ratio Distribution in Wishart Matrices. Convergence and Tradeoff of Utility-Optimal CSMA. Modified Papoulis-Gerchberg algorithm for sparse signal recovery. Strong Completeness of Coalgebraic Modal Logics. Polynomial Size Analysis of First-Order Shapely Functions. Quantum Finite Automata with One-Sided Unbounded Error. Tableau-based decision procedure for full coalitional multiagent temporal-epistemic logic of linear time. Qualitative Concurrent Games with Imperfect Information. Tableau-based procedure for deciding satisfiability in the full coalitional multiagent epistemic logic. A formally verified compiler back-end. Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence.
Electrical circuits provide a way to control electricity. Electricity is the flow of charge (typically electrons) or the accumulation of charge (although in basic circuit theory we assume that charge does not accumulate—the net flow of charge into a device is equal to the net flow of charge out of the device). When you first start to think about electrical circuits, it can be helpful to think of them as being analogous to hydrological systems where water flows through pipes. (There are various ways in which this analogy ultimately breaks down, but the analogy is useful nonetheless.) When building a hydrological system, it is important to keep track of the flow rate and pressure of the water. We describe electrical circuits similarly: current is the rate at which charge flows past a point, while the voltage is effectively like an electrical "pressure." Voltage is always measured between two points and indicates the force "pushing" charges to flow from one point to another. The purpose of an electrical circuit is to accomplish some meaningful task such as generating light or heat, exerting a force (which may, for example, cause a motor's shaft to turn), or even sensing the physical surroundings with devices whose properties vary with changes in the surroundings (such as a temperature probe). Electricity provides the means to transport energy from a source to a load. The source of energy might be a battery while the load is a light-emitting diode (LED) and the desired task is to cause the LED to illuminate. If too much charge flows through a circuit, the circuit or its components can be damaged. To prevent this, voltage and/or current can be limited through the use of components known as resistors. As their name implies, resistors are characterized by the property of resistance, which is like a form of electrical friction. Resistors dissipate energy as heat. 
The remainder of this page provides a very brief overview of current, voltage, and resistance and how they are related. To obtain further details, please click on the boxes that are given below on the right side of the page. Water Analogy: Physical obstruction such as a wire mesh or bottleneck. Units (Symbol): Ohms (Ω, i.e., the Greek letter Omega). Brief Description: Electrical resistance is a measure of how easily charge can move through circuit elements. The greater the resistance, the more difficult it is for charge to flow. Although all materials have some resistance, wires (and metals) generally have very little resistance and are often approximated as having no resistance (a resistance of zero). Thus the assumption is that wires provide a resistance-free path to deliver energy from one point to another. On the other hand, circuit components known as resistors are designed to offer a specified resistance to the flow of charge. Resistors are often color coded to indicate their amount of resistance. There is more information regarding this topic available via the "Practical Resistors" link on the right that shows how you can determine resistance from the colors on a resistor. Equation Variable: $I$ or $i$. Equation Variable: $V$ or $v$. Brief Description: Voltage measures the relative difference in electric potential energy. This means that it is a comparison of the energy levels at two separate points. This difference is often called the voltage drop. If you imagine two pools of water that are at different heights and a pipe that links them, then, because of gravity, there is a difference in the potential energy of the water in the two pools. The water in the higher pool has greater potential energy. This water is willing to give up this stored energy by flowing to the lower pool. The pressure in the pipe that joins these two pools (caused by the difference in potential energy) is analogous to a voltage. 
It is important to remember that voltage is always measured relative to two selected points. In practice, we usually designate a single point of reference in a circuit to serve as the common reference point for all voltage measurements. We typically refer to this point as ground. The difference in voltage between ground and itself is, of course, zero. So ground is assigned a voltage of 0V. All other voltages that we specify for a circuit are the difference in electrical potential between that point and ground. The three basic electrical properties mentioned above are related to each other by Ohm's Law. Continuing with the water analogy: voltage (measured in volts) is the pressure that pushes charge (whose flow is the current, measured in amps) along the path of a hose, while resistance (measured in ohms) is what inhibits the charge's movement (the wall of the hose). Ohm's Law provides the mathematical relationship between these quantities. It states that voltage is equal to current times resistance, or $V = I\times R$. Or, solving for current, we can write $I = V/R$. This tells us that if the voltage is held fixed while the resistance is increased (so that we are dividing by a larger number), the current will decrease. Conversely, if the voltage is held fixed while the resistance is decreased, the current will increase. For the electrical components known as resistors, the resistance $R$ is fixed. Thus, the voltage varies linearly with changes in the current $I$. By "linearly," we mean that if you plot the voltage versus the current for a resistor, the plot is a straight line. A plot of voltage versus current is known as a VI plot or VI curve. However, it is often more useful to plot the current versus the voltage, and these plots are known as IV curves. In general, the relationship between current and voltage is known as the IV relationship. Figure 1. IV relationship for a "large" resistor. Figure 2. IV relationship for a "small" resistor. 
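Ohm's Law is easy to check numerically. A minimal sketch (the source voltage and resistor values below are made up for illustration):

```python
def current(voltage, resistance):
    """Ohm's Law solved for current: I = V / R."""
    return voltage / resistance

# A fixed 9 V source across resistors of different sizes:
for r in (100.0, 1_000.0, 10_000.0):  # ohms
    i = current(9.0, r)
    print(f"R = {r:>8.0f} ohm -> I = {i * 1000:.2f} mA")
```

As the text says, with the voltage held fixed, a ten-fold increase in resistance produces a ten-fold decrease in current.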
Figures 1 and 2 depict the IV relationship for a "large" and "small" resistor, respectively. When the resistance is large, as in Fig. 1, the slope of the line is relatively shallow because the voltage is being divided by a large value and thus changes in the voltage do not have a large effect on the current (recall that $I=V/R$). When the resistance is small, as in Fig. 2, the slope of the line is relatively steep because the voltage is being divided by a small value (you can think of this as the voltage being multiplied by a relatively large value) and thus changes in the voltage do have a relatively large effect on the current. Although current and voltage are linearly related in a resistor, many electrical components have nonlinear IV relationships. For example, the IV relationship of a component known as a diode is nonlinear, as shown in Fig. 3. With this particular nonlinear relationship, a small increase in voltage can cause a large increase in current (as is the case toward the right-hand side of the IV curve). On the other hand, if the voltage is negative, changes in voltage have very little effect on current. Ideally, a diode allows current to flow freely in one direction (when the voltage is positive) and does not allow current to flow in the opposite direction. Figure 3. IV relationship for a diode.
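The nonlinear diode IV relationship described above is commonly modeled by the Shockley diode equation, $I = I_S\,(e^{V/(nV_T)} - 1)$. That equation is a standard model rather than something given in the text, and the saturation current and thermal voltage below are illustrative values:

```python
import math

def diode_current(v, i_s=1e-12, n=1.0, v_t=0.0259):
    """Shockley diode equation: I = I_S * (exp(V / (n * V_T)) - 1).

    i_s: saturation current (A), n: ideality factor,
    v_t: thermal voltage (V) near room temperature.
    """
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

# Positive voltages: current grows very rapidly.
# Negative voltages: current is pinned near -I_S.
for v in (-0.5, 0.0, 0.3, 0.6, 0.7):
    print(f"V = {v:+.1f} V -> I = {diode_current(v):.3e} A")
```

This reproduces the behavior in Fig. 3: a small increase in positive voltage multiplies the current, while for negative voltages the current barely changes at all.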
Abstract: Generalizing the work of Sen, we analyze special points in the moduli space of compactifications of F-theory on elliptically fibered Calabi-Yau threefolds where the coupling remains constant. These include points which can be realized as orbifolds of the six-torus $T^6$ by $Z_m \times Z_n$ $(m, n = 2, 3, 4, 6)$. At various types of intersection points of singularities, we find that the enhancement of gauge symmetries arises from the intersection of two kinds of singularities. We also argue that when we take the Hirzebruch surface as a base for the Calabi-Yau threefold, the condition for constant coupling corresponds to the case where the pointlike instantons coalesce, giving rise to an enhanced gauge group $Sp(k)$.
Let $R$ be a commutative ring. Two idempotents $e$ and $f$ are called orthogonal if $ef = 0$. The archetypal example is $(0,1)$ and $(1,0)$ in a product ring $R\times S$. Since $f(e + f) = fe + f^2 = 0 + f = f$, we have $f \in (e + f)$. Switching $e$ and $f$ in this calculation shows that $e\in (e + f)$. Using the fact that $e + f$ is also an idempotent, we see that by induction, if $e_1,\dots,e_n$ are pairwise orthogonal idempotents, then the ideal $(e_1,\dots,e_n)$ is generated by the single element $e_1 + \dots + e_n$. If $e$ and $f$ are arbitrary idempotents, the computation $(e - ef)^2 = e - 2ef + ef^2 = e - ef$ shows that $e - ef$ is an idempotent. Furthermore, $(e,f) = (e - ef,f)$, and $e - ef$ and $f$ are orthogonal idempotents. By what we discussed in the previous paragraph, $(e,f) = (e-ef,f)$ is generated by $e - ef + f$. Everything we did assumed $R$ was commutative. But what if we foray into the land of noncommutative rings? Is it still true that a left-ideal generated by finitely many idempotents is also generated by a single idempotent? Any ideas? It might not help for the general problem, but the answer to your final question is true for C*-algebras, where "finitely generated" has the same meaning as in "purely algebraic" theory, i.e. we are not taking closed ideals.
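A concrete sanity check in $\mathbb{Z}/6\mathbb{Z}$ (an example chosen here for illustration, not from the post): $3$ and $4$ are orthogonal idempotents, and the ideal they generate is generated by the single element $3 + 4 = 1$.

```python
n = 6  # working in Z/6Z
e, f = 3, 4

# e and f are idempotents and orthogonal:
assert (e * e) % n == e and (f * f) % n == f
assert (e * f) % n == 0

# The single element e + f generates both e and f,
# so (e, f) = (e + f) as ideals:
s = (e + f) % n                   # 3 + 4 = 7 = 1 (mod 6)
assert (f * (e + f)) % n == f     # f = f(e + f)
assert (e * (e + f)) % n == e     # e = e(e + f)
print(f"e + f = {s} (mod {n}), which generates the unit ideal")
```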
Intuitively, why are the curves of exponential, log, and parabolic functions all smooth, even though the gradient is being changed at every point? Why are the curves of exponential, log, and parabolic functions all smooth, even though the gradient is being changed at every point? Shouldn't it be much more choppy? By the way, if possible, can this be explained intuitively (not too rigorously), and without Calculus? Because I want to understand this, but I haven't learnt Calculus yet. When you reduce the distance between the sampling points for $e^x, x^2, \ln(x), \sin(x)$, the gradient changes also become smaller. After a few iterations, the screen resolution isn't high enough to show any change anymore and the curves look smooth. On the other hand, the $|x|$ curve (absolute value, the green curve on the graph) doesn't change anymore as soon as $x = 0$ is plotted: there's an abrupt change of gradient around $x = 0$, even at a very high resolution. At $x = 0$, it's not possible to define a gradient for this curve. However small $\varepsilon$ is, going from $x=-\varepsilon$ to $x = \varepsilon$ will change the gradient of $|x|$ from $-1$ to $1$. Welcome to the subtleties of the real line. I believe your perplexity here comes from sources similar to those which generated the ancient struggle to understand the continuous vs. the discrete. It is true that many things we consider as smooth in real life are only smooth to our crude senses; when observed under sufficiently powerful microscopes one sees varying degrees of roughness. Eventually, it boils down to whether there are actual infinities in the real world. But let us come back to curves in the plane. You can't actually visualise these things, or imagine them precisely, because they involve infinity. For example, you cannot just comprehend the idea that there is no real number next to any fixed real number, say $0$. What real number is "next to" it? -- None. The question is meaningless in this context. 
This is what will keep you from having many headaches struggling to visualise inherently non-visual things. However, one can only understand these things logically. So, to get to your question. I believe, first of all, that you've not yet grasped the fact that for plane curves, there is no next point. Thus, the gradient doesn't really change -- we only usually think of these things in terms of motion, but it should be understood as a rough, heuristic image, which under thorough analysis doesn't stand. Instead think of the gradient of, say, $\exp x$ as being different at every point on it. Now if you try to imagine this, you can't help thinking discretely; that's all we've ever experienced -- one-by-one aggregates. But in the real line there is no next point. Well, one way I've personally tried to come close to visualising this (it's actually impossible, but one can try to have a rough visual) is to imagine it as an infinitely elastic string, so that between any two points you always have other points and there are no gaps (this is not true in real life, even for things that look continuous -- eventually you reach the atomic structure, where gaps abound, etc.). So, the word "smooth" here is much stronger than how we use it in real life. The scientist who approximates water as a continuum, for example, knows it is not really so. But in mathematics it is actually the case that the real line contains no gaps, no matter how deep you delve into it (this is known as completeness). So thinking of the slope as changing from point to point is only an approximate heuristic for imagining these things. Any mathematician knows that there's nothing like going from point to point in the way we usually mean that phrase in real life. 
It's simply impossible to move from point to point, passing through every intermediate point, in $\mathbb R$ -- you always have to jump over some (this was the key assumption in Zeno's famous arguments against motion -- he assumed, taking things at face value, that spacetime is continuous in the strong mathematical sense). So functions on $\mathbb R$ are actually way subtler than they seem -- this actually occupied the mathematicians who developed analysis in the 19th century. I don't know that these things can ever be understood intuitively, for no one has ever experienced the infinite in real life. We can only grasp it abstractly and be content with that -- or trouble oneself to no end. In summary, smooth functions on $\mathbb R$ can be approximated at every point (you can't visualise this -- don't even try) by straight-line functions, because as you zoom in on an arbitrary point on their graph, the graph looks more and more like a straight line -- this is what we mean by saying they are differentiable. If you want to think of it in terms of motion, then it is really impossible, for the second derivative measures the limiting rate of change of slope at each point. So, if you think of the slope as changing from point to point, you can't help imagining a broken curve, since we cannot just imagine the continuous. IOW, the graphs of those functions are "smooth" because they are differentiable at all points where they are defined. It doesn't matter that the gradients may be different at each point. The triangle wave, for example, is "choppy" because there are some points at which it cannot be approximated by a linear function (i.e., no matter how much you zoom into it about a peak or trough, it always looks like a bent line, not a straight one). Think of y as your car's position on a street. Think of x as time. Your speed will then be the derivative of position, so the steepness of the curve. 
Any function with 'breaks' will be like a ride with jolts, or even Star Trek beaming going on. But it is totally possible to have a smooth ride with your foot on the gas, changing speed all the time (accelerating). Because the derivative also changes smoothly. It's hard to put it intuitively, but imagine throwing a ball down a steep hill: it accelerates all the time (second derivative), so the speed is changing all the time (first derivative), but the position (function) is still changing smoothly (the ball doesn't suddenly teleport), unless it hits a rock (not smooth). In calculus you would later learn that second and higher order curves (whose representing function can also be a series of order two or more) are smooth, as they admit continuous derivatives. Smooth is often defined as "a function that has continuous derivatives up to some desired order over some domain," or similar. Often, the desired order is two, when we want a curve to look "smooth" to the human eye. A discontinuous function will have gaps, a function with a discontinuous first derivative will have sharp jagged edges, and a function with a discontinuous second derivative has obvious inflection points where it looks glued-together from different pieces. The exponential function is a good example: if $f(x) = e^x$, then $f = f' = f'' = \dots$ and the derivative of any order is positive and continuous at any real number. The result is a smooth curve. Or, if $f(x) = \sin x$ and $g(x) = \cos x$, then $f' = g$ and $f'' = g' = -f$. The first derivative, second derivative, third and so on are all continuous, so you get a smooth curve. So this might be "smooth enough" for some purposes (those where the "some desired order" in the definition I gave is one; that is, where a continuous first derivative is all you care about). But you can see that there's an inflection point at zero where the second derivative doesn't exist. 
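The zooming argument in these answers can be checked numerically: as the window around $x=0$ shrinks, the jump in the estimated gradient vanishes for $x^2$ but stays fixed at $2$ for $|x|$. (This sketch is an addition, not part of the original answers.)

```python
def grad(f, x, h=1e-6):
    """Two-point estimate of the gradient of f at x."""
    return (f(x + h) - f(x)) / h

def gradient_jump(f, eps):
    """Change in estimated gradient when stepping across x = 0."""
    return grad(f, eps) - grad(f, -eps)

for eps in (0.1, 0.01, 0.001):
    smooth = gradient_jump(lambda x: x * x, eps)  # parabola: shrinks with eps
    kinked = gradient_jump(abs, eps)              # absolute value: always ~2
    print(f"eps = {eps}: x^2 jump = {smooth:.4f}, |x| jump = {kinked:.4f}")
```

For the parabola, the jump is about $4\varepsilon$, so it disappears as you zoom in; for $|x|$ it never does, which is exactly the "choppiness" at the kink.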
One domain where this has practical applications is when we fit a curve to points on a computer. The standard method of doing this is with cubic-spline interpolation. (That is, divide the path into intervals between the points whose values we know, and solve for piecewise parametric equations $x(t)$ and $y(t)$ between those endpoints. Cubic-spline interpolation means that they are all cubic polynomials. Frequently, we do something else that is mathematically equivalent to this but faster, such as adding a weighted sum of control points.) In order for the result to look smooth, we want the first and second derivatives to be continuous, and this nearly always is enough to look good: software engineers very rarely use higher-degree polynomials to approximate a curve. This is probably how your computer is displaying the strokes of the letters in the font you are reading right now.
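A minimal sketch of the piecewise-cubic idea (plain Hermite segments rather than a full spline solver; the knot values and slopes below are made up for illustration):

```python
def hermite_coeffs(x0, x1, y0, y1, m0, m1):
    """Cubic on [x0, x1] matching endpoint values y0, y1 and slopes m0, m1."""
    h = x1 - x0
    a, b = y0, m0
    c = (3.0 * (y1 - y0) / h - 2.0 * m0 - m1) / h
    d = (m0 + m1 - 2.0 * (y1 - y0) / h) / (h * h)
    return a, b, c, d  # p(x) = a + b*t + c*t^2 + d*t^3, with t = x - x0

def eval_cubic(coeffs, x0, x):
    a, b, c, d = coeffs
    t = x - x0
    return a + t * (b + t * (c + t * d))

def eval_slope(coeffs, x0, x):
    _, b, c, d = coeffs
    t = x - x0
    return b + t * (2.0 * c + 3.0 * d * t)

# Two segments through the knots (0,0), (1,1), (2,0). Both segments are built
# with the same slope (0) at the shared knot x = 1, so the joined curve has a
# continuous first derivative there -- no visible kink.
seg1 = hermite_coeffs(0.0, 1.0, 0.0, 1.0, 1.0, 0.0)
seg2 = hermite_coeffs(1.0, 2.0, 1.0, 0.0, 0.0, -1.0)
print(eval_cubic(seg1, 0.0, 1.0), eval_slope(seg1, 0.0, 1.0))  # from the left
print(eval_cubic(seg2, 1.0, 1.0), eval_slope(seg2, 1.0, 1.0))  # from the right
```

Both prints show value 1.0 and slope 0.0 at the knot, which is the matching condition the paragraph describes; a full spline solver additionally chooses the knot slopes so that the second derivatives match too.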
I've been working on calculating parametric ES assuming the returns follow a Paretian stable law. Given the four parameters - $\alpha, \beta, \sigma, \mu$ - Stoyanov introduces a closed-form solution of the problem (assuming the returns, not the losses, are the random variable - hence VaR is the negative quantile and we are integrating over the left tail, as I understand it) - see below. Any ideas where I could possibly go wrong? Here are my results: I use integrate in R for the integral and it seems to give reasonable values of the integral, with absolute error around 1e-04.
where $\sum \beta_i=1$ and $\gamma_i$ is the minimal consumption of $x_i$. Basically, as you can see, something doesn't add up: the $\beta_i$ is missing from the numerator. Thus, Roy's identity is not verified. Where did I mess up? Roy's identity then gives you the Marshallian demand you've initially, and correctly, derived.
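For the Stone-Geary setup described here, the textbook Marshallian demand is $x_i = \gamma_i + \beta_i\,(m - \sum_j p_j\gamma_j)/p_i$. A quick numerical sanity check (the prices, income, and parameters below are invented for illustration):

```python
import math

def sg_demand(p, m, beta, gamma):
    """Marshallian demand for Stone-Geary utility U = sum_i beta_i*log(x_i - gamma_i)."""
    supernumerary = m - sum(pj * gj for pj, gj in zip(p, gamma))
    return [g + b * supernumerary / pi for pi, b, g in zip(p, beta, gamma)]

def sg_utility(x, beta, gamma):
    return sum(b * math.log(xi - g) for b, xi, g in zip(beta, x, gamma))

p, m = [2.0, 5.0], 100.0
beta, gamma = [0.4, 0.6], [3.0, 4.0]
x = sg_demand(p, m, beta, gamma)

# The demand exhausts the budget ...
assert abs(sum(pi * xi for pi, xi in zip(p, x)) - m) < 1e-9
# ... and beats nearby bundles on the same budget line:
for t in (-1.0, -0.1, 0.1, 1.0):
    y = [x[0] + t / p[0], x[1] - t / p[1]]  # shift spending between goods
    assert sg_utility(y, beta, gamma) < sg_utility(x, beta, gamma)
print("demand:", x)
```

Shifting a small amount of spending between the goods always lowers utility, which is consistent with the formula being the utility-maximizing bundle.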
I'm aware of Sigma notation, but is there a function/name for e.g. similar to $$4! = 4 \cdot 3 \cdot 2 \cdot 1 ,$$ which uses multiplication? Edit: I found what I was looking for, but is there a name for this type of summation? Actually, I've found what I was looking for.
This is problem 3.3 from Mandl & Shaw's QFT text. Hi, I just saw that you tried to insert a link like it is done on SE. With our editor you can just highlight the text you want to become a link, click on the button that looks like an $\infty$ symbol, which opens a popup window where you can insert the URL. I hope you do not mind that I did with the link to the book what I assumed you wanted to achieve. Compute the LHS of your 2nd equation by making use of your 1st equation. The answer is "differentiate the expression under the integral sign". The denominator is cancelled, and you get a delta function. I don't know what the confusion is.
What is the symmetric monoidal structure on the $(\infty,1)$-category of spectra? The $(\infty, 1)$-category $Sp$ of spectra as defined by Lurie in Higher Algebra has the structure of a symmetric monoidal category. Although I know the definition of a symmetric monoidal category in the $(\infty, 1)$ setting and can reasonably follow Lurie's arguments in Higher Algebra as to why $Sp$ has such a structure, I don't understand it well enough to think about it intuitively. So my question is, what does the symmetric monoidal structure on $Sp$ look like? How is this related to the symmetric monoidal structure on symmetric or orthogonal spectra in the ordinary categorical setting? How may I picture ring spectra and other such objects arising from the monoidal structure?
We have provided a fractional generalization of the Poisson renewal processes by replacing the first time derivative in the relaxation equation of the survival probability by a fractional derivative of order $\alpha ~(0 < \alpha \leq 1)$. A generalized Laplacian model associated with the Mittag-Leffler distribution is examined. We also discuss some properties of this new model and its relevance to time series. Distribution of gliding sums, regression behaviors and sample path properties are studied. Finally we introduce the $q$-Mittag-Leffler process associated with the $q$-Mittag-Leffler distribution.
CommonCrawl
Prove that if $R$ is not prime then $R$ must have a prime factor $q$ that is larger than $p_n$, where $R = p_1p_2\cdots p_n + 1$ and $p_1 < p_2 < \cdots < p_n$ are the first $n$ prime numbers. I understand that this question refers to Euclid's primes proof; however, I don't really know how to even tackle this problem. I am looking over Euclid's proof and will hopefully run into a 'eureka' moment. Any advice or tips on how to solve this problem would be very helpful and appreciated. Hint: If $R$ is not prime, then $R$ has a prime divisor. Convince yourself that this prime divisor isn't any of $p_1, \dots, p_n$. Now use how $p_1, \dots, p_n$ are defined.
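A quick numerical sanity check of the statement (not a substitute for the proof; the helper names below are ours):

```python
def first_primes(n):
    """Return the first n primes by trial division."""
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return primes

def smallest_prime_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

for n in range(1, 9):
    ps = first_primes(n)
    R = 1
    for p in ps:
        R *= p
    R += 1
    q = smallest_prime_factor(R)
    # Each p_i leaves remainder 1 when dividing R, so q must exceed p_n:
    assert q > ps[-1]
```

Note that $R$ itself need not be prime: for $n = 6$, $R = 30031 = 59 \times 509$, and indeed its smallest prime factor $59$ exceeds $p_6 = 13$.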
CommonCrawl
Karl Ludwig von Bertalanffy (September 19, 1901, Vienna, Austria - June 12, 1972, New York, USA) was a biologist and a founder of general systems theory, which, as he stated himself in his seminal work, he literally translated from the mathematization of Nicolai Hartmann's Ontology. An Austrian citizen, he did much of his work in Canada and the United States. The individual growth model published by von Bertalanffy in 1934 is widely used in biological models and exists in a number of permutations. It is usually written as $L(t) = L_\infty\left(1 - e^{-r_B t}\right)$, where $r_B$ is the von Bertalanffy growth rate and $L_\infty$ the ultimate length of the individual. This model was proposed earlier by Pütter in 1920 (Arch. Gesamte Physiol. Mensch. Tiere, 180: 298-340). The Dynamic Energy Budget theory provides a mechanistic explanation of this model in the case of isomorphs that experience a constant food availability. The inverse of the von Bertalanffy growth rate appears to depend linearly on the ultimate length when different food levels are compared. The intercept relates to the maintenance costs, the slope to the rate at which reserve is mobilized for use by metabolism. The ultimate length equals the maximum length at high food availabilities.
Bertalanffy, L. von (1934). Untersuchungen über die Gesetzlichkeit des Wachstums. I. Allgemeine Grundlagen der Theorie; mathematische und physiologische Gesetzlichkeiten des Wachstums bei Wassertieren. Arch. Entwicklungsmech., 131:613-652.
1937, Das Gefüge des Lebens, Teubner, Leipzig.
1940, Vom Molekül zur Organismenwelt, Akademische Verlagsgesellschaft Athenaion, Potsdam.
1949, Das biologische Weltbild, Europäische Rundschau, Bern. In English: Problems of Life, New York, 1952.
1968, General System Theory: Foundations, Development, Applications, Ludwig von Bertalanffy, George Braziller, New York.
1975, Perspectives on General Systems Theory. Scientific-Philosophical Studies, E. Taschdjian (ed.), George Braziller, New York.
1981, A Systems View of Man, P. A. LaViolette (ed.), Boulder.
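A minimal numerical sketch of the growth curve (parameter values are illustrative, not taken from the article):

```python
import math

def von_bertalanffy(t, L_inf=100.0, r_B=0.3):
    """Length at age t: L(t) = L_inf * (1 - exp(-r_B * t))."""
    return L_inf * (1.0 - math.exp(-r_B * t))

lengths = [von_bertalanffy(t) for t in range(0, 31, 5)]
# Growth is monotone and decelerating, approaching the asymptote L_inf:
assert all(b > a for a, b in zip(lengths, lengths[1:]))
assert abs(von_bertalanffy(1000.0) - 100.0) < 1e-9
```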
CommonCrawl
From her pasture on the farm, Bessie the cow has a wonderful view of a mountain range on the horizon. There are $N$ mountains in the range ($1 \leq N \leq 10^5$). If we think of Bessie's field of vision as the $xy$ plane, then each mountain is a triangle whose base rests on the $x$ axis. The two sides of the mountain are both at 45 degrees to the base, so the peak of the mountain forms a right angle. Mountain $i$ is therefore precisely described by the location $(x_i, y_i)$ of its peak. No two mountains have exactly the same peak location. Bessie is trying to count all of the mountains, but since they all have roughly the same color, she cannot see a mountain if its peak lies on or within the triangular shape of any other mountain. Please determine the number of distinct peaks, and therefore mountains, that Bessie can see. The first line of input contains $N$. Each of the remaining $N$ lines contains $x_i$ ($0 \leq x_i \leq 10^9$) and $y_i$ ($1 \leq y_i \leq 10^9$) describing the location of one mountain's peak. Please print the number of mountains that Bessie can distinguish. In this example, Bessie can see the first and last mountain. The second mountain is obscured by the first.
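A standard approach (a sketch, not official contest code): mountain $i$ covers the interval $[x_i - y_i,\, x_i + y_i]$ on the $x$-axis, and its peak is hidden exactly when that interval is contained in another mountain's interval. Sorting by left endpoint (breaking ties by larger right endpoint) and sweeping gives an $O(N \log N)$ count; the three peaks in the assertion are our own illustration, not the problem's sample input.

```python
def visible_mountains(peaks):
    """Count peaks not lying on or within any other mountain's triangle."""
    # Sort by left endpoint ascending, right endpoint descending.
    ivals = sorted((x - y, -(x + y)) for x, y in peaks)
    count, best_r = 0, float('-inf')
    for l, neg_r in ivals:
        r = -neg_r
        if r > best_r:       # not contained in any interval seen so far
            count += 1
            best_r = r
    return count

# The middle peak lies within the first mountain's triangle:
assert visible_mountains([(4, 6), (7, 3), (12, 4)]) == 2
```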
CommonCrawl
The untyped lambda calculus, as understood today, is not itself a system of logical reasoning, and therefore it makes little sense to declare it to be "an inconsistent formalism" in and of itself. What the untyped lambda calculus is, is just a term rewriting system: A set of strings that we call terms, plus a reduction relation between those terms. This relation has interesting properties that we can use for various purposes -- but those applications are not themselves a part of the calculus. Church initially developed the calculus with a particular application in mind, namely a certain scheme for expressing all mathematical reasoning as equational manipulations within the calculus, as an alternative to, say, ordinary first-order logic. Unfortunately it turned out that some of the possible manipulations in the calculus do not correspond to valid mathematical reasoning when interpreted in the way Church envisaged -- the Kleene-Rosser paradox is one early instance of this; Curry and others later found simpler ones -- and that sunk the hope of using "can be expressed in this calculus" as a formalization of "is a valid mathematical argument". (I don't know the exact details of how this was supposed to work. They are not easy to find nowadays -- because they didn't actually work, they don't get a lot of publicity). So the particular logical system that Church built using the untyped lambda calculus is unsound -- but that doesn't mean that the underlying rewrite system itself is "inconsistent". That might have been a fair way to describe it if, for example, the paradox implied that the reflexive transitive closure of the reduction relation relates everything to everything else. But that is not so; we have impeccable finitistic proofs that different terms in $\beta$-normal form are not $\beta$-equivalent. In fact, the calculus represents computation faithfully: for every effectively computable partial function $f:\mathbb N\to\mathbb N$ there is a term $M_f$ such that $M_f\,\overline x =_\beta \overline{f(x)}$ whenever $f(x)$ is defined, and whenever $f(x)$ is undefined, $M_f\,\overline x$ is not $\beta$-equivalent to any term in normal form.
Conversely, let $M$ be any untyped lambda term, and define $f_M$ by $$ f_M(x)=y \iff M\,\overline x =_\beta \overline y. $$ Then $f_M$ is a well-defined partial function $\mathbb N\to\mathbb N$, and is effectively computable (under the standard numeral coding, or any of a wide variety of other possible coding schemes). This fact also doesn't mean that a failed attempt to use the lambda calculus as a system of logic casts doubt on the concept of "effectively computable".
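The numeral coding $\overline n$ mentioned above can be illustrated with Church numerals; here is an informal sketch in Python (the lambda calculus itself is untyped term rewriting, so this only mimics the behaviour of the terms):

```python
# Church numerals: \overline{n} = lambda f. lambda x. f^n(x)
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def church(n):
    c = zero
    for _ in range(n):
        c = succ(c)
    return c

def to_int(c):
    # Decode by applying the numeral to the integer successor function.
    return c(lambda k: k + 1)(0)

add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
assert to_int(add(church(2))(church(3))) == 5
```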
CommonCrawl
Deep learning (DL) research yields accuracy and product improvements from both model architecture changes and scale: larger data sets and models, and more computation. For hardware design, it is difficult to predict DL model changes. However, recent prior work shows that as dataset sizes grow, DL model accuracy and model size grow predictably. This paper leverages the prior work to project the dataset and model size growth required to advance DL accuracy beyond human-level, to frontier targets defined by machine learning experts. Datasets will need to grow $33$–$971\times$, while models will need to grow $6.6$–$456\times$ to achieve target accuracies. We further characterize and project the computational requirements to train these applications at scale. Our characterization reveals an important segmentation of DL training challenges for recurrent neural networks (RNNs) that contrasts with prior studies of deep convolutional networks. RNNs will have comparatively moderate operational intensities and very large memory footprint requirements. In contrast to emerging accelerator designs, large-scale RNN training characteristics suggest designs with significantly larger memory capacity and on-chip caches.
CommonCrawl
Abstract. We consider the problem of metastability in a probabilistic cellular automaton (PCA) with a parallel updating rule which is reversible with respect to a Gibbs measure. The dynamical rules contain two parameters $\beta$ and $h$ which resemble, but are not identical to, the inverse temperature and external magnetic field in a ferromagnetic Ising model; in particular, the phase diagram of the system has two stable phases when $\beta$ is large enough and $h$ is zero, and a unique phase when $h$ is nonzero. When the system evolves, at small positive values of $h$, from an initial state with all spins down, the PCA dynamics give rise to a transition from a metastable to a stable phase when a droplet of the favored $+$ phase inside the metastable $-$ phase reaches a critical size. We give heuristic arguments to estimate the critical size in the limit of zero ``temperature'' ($\beta\to\infty$), as well as estimates of the time required for the formation of such a droplet in a finite system. Monte Carlo simulations give results in good agreement with the theoretical predictions.
CommonCrawl
A simple mechanical setup was used to polish a standard single-mode optical fiber in order to make it asymmetric. The polished fiber was tapered down maintaining the D-shaped transversal profile. Its broken symmetry, along with the extended evanescent field due to the dimensions of the microfiber, implies a potentially highly birefringent waveguide as well as a high-sensitivity external refractive index device. An experimental maximum sensitivity of $S\approx(3.0\pm0.2)\times10^4$ nm/RIU was achieved; other experimental and numerical results supporting our initial assumptions are also presented.
CommonCrawl
Abstract: In this paper we show that the sets of $F$-jumping coefficients of ideals form discrete sets in certain graded $F$-finite rings. We do so by giving a criterion based on linear bounds for the growth of the Castelnuovo-Mumford regularity of certain ideals. We further show that these linear bounds exist for one-dimensional rings and for ideals of (most) two-dimensional domains. We conclude by applying our technique to prove that all sets of $F$-jumping coefficients of all ideals in the determinantal ring given as the quotient by $2\times 2$ minors in a $2\times 3$ matrix of indeterminates form discrete sets.
CommonCrawl
Elliptic partial differential equations (PDEs) on surfaces are ubiquitous in science and engineering. We present several geometric flows governed by the Laplace-Beltrami operator. We design a new adaptive finite element method (AFEM) with arbitrary polynomial degree for such an operator on parametric surfaces, which are globally Lipschitz and piecewise in a suitable Besov class: the partitions thus match possible kinks. The idea is to have the surface sufficiently well resolved in $W^1_\infty$ relative to the current resolution of the PDE in $H^1$. This gives rise to a conditional contraction property of the PDE module and yields optimal cardinality of AFEM. Moreover, we relate the approximation classes to Besov classes. If the meshes do not match the kinks, or they are simply unknown beforehand, we end up with elliptic PDEs with discontinuous coefficients within elements. In contrast to the usual perturbation theory, we develop a new approach based on distortion of the coefficients in $L_q$ with $q<\infty$. We then use this new distortion theory to formulate a new AFEM for such discontinuity problems, show optimality of AFEM in the sense of distortion versus number of computations, and report insightful numerical results supporting our analysis. Joint work with A. Bonito (Texas A&M University, USA), M. Cascon (Universidad de Salamanca, Spain), R. DeVore (Texas A&M University, USA), K. Mekchay (Chulalongkorn University, Thailand) and P. Morin (Universidad Nacional del Litoral, Argentina).
CommonCrawl
Abstract: We study completeness properties of Sobolev metrics on the space of immersed curves and on the shape space of unparametrized curves. We show that Sobolev metrics of order $n\geq 2$ are metrically complete on the space $\mathcal I^n(S^1,\mathbb R^d)$ of Sobolev immersions of the same regularity and that any two curves in the same connected component can be joined by a minimizing geodesic. These results then imply that the shape space of unparametrized curves has the structure of a complete length space.
CommonCrawl
We present a sub-kiloparsec localization of the sites of supermassive black hole (SMBH) growth in three active galactic nuclei (AGNs) at $z \sim 3$ in relation to the regions of intense star formation in their hosts. These AGNs are selected from Karl G. Jansky Very Large Array (VLA) and Atacama Large Millimeter/submillimeter Array (ALMA) observations in the Hubble Ultra-Deep Field and COSMOS, with the centimetric radio emission tracing both star formation and AGN, and the sub/millimeter emission by dust tracing nearly pure star formation. We require radio emission to be $\geqslant 5\times$ more luminous than the level associated with the sub/millimeter star formation to ensure that the radio emission is AGN-dominated, thereby allowing localization of the AGN and star formation independently. In all three galaxies, the AGNs are located within the compact regions of gas-rich, heavily obscured, intense nuclear star formation, with $R_e = 0.4$–$1.1$ kpc and average star formation rates of $\simeq 100$–$1200\ M_\odot\,\mathrm{yr}^{-1}$. If the current episode of star formation continues at such a rate over the stellar mass doubling time of their hosts, $\simeq 0.2$ Gyr, the newly formed stellar mass will be of the order of $10^{11}\ M_\odot$ within the central kiloparsec region, concurrently and cospatially with significant growth of the SMBH. This is consistent with a picture of in situ galactic bulge and SMBH formation. This work demonstrates the unique complementarity of VLA and ALMA observations to unambiguously pinpoint the locations of AGNs and star formation down to $\simeq 30$ mas, corresponding to $\simeq 230$ pc at $z = 3$. © 2018. The American Astronomical Society. Received 2017 December 7; accepted 2018 January 21; published 2018 February 7.
CommonCrawl
In this paper we consider a nonlocal energy $I_\alpha$ whose kernel is obtained by adding to the Coulomb potential an anisotropic term weighted by a parameter $\alpha\in \mathbb R$. The case $\alpha=0$ corresponds to purely logarithmic interactions, minimised by the celebrated circle law for a quadratic confinement; $\alpha=1$ corresponds to the energy of interacting dislocations, minimised by the semi-circle law. We show that for $\alpha\in (0,1)$ the minimiser can be computed explicitly and is the normalised characteristic function of the domain enclosed by an ellipse. To prove our result we borrow techniques from fluid dynamics, in particular those related to Kirchhoff's celebrated result that domains enclosed by ellipses are rotating vortex patches, called Kirchhoff ellipses. Therefore we show a surprising connection between vortices and dislocations.
CommonCrawl
mscroggs.co.uk Blog: Is MEDUSA the new BODMAS? I wrote this post with, and after much discussion with, Adam Townsend. It also appeared on the Chalkdust Magazine blog. Recently, Colin "IceCol" Beveridge blogged about something that's been irking him for a while: those annoying social media posts that tell you to work out a sum, such as \(3-3\times6+2\), and state that only \(n\)% of people will get it right (where \(n\) is quite small). Or as he calls it "fake maths". A classic example of "fake maths". This got me thinking about everyone's least favourite primary school acronym: BODMAS (sometimes known as BIDMAS, or PEMDAS if you're American). As I'm sure you've been trying to forget, BODMAS stands for "Brackets, (to the power) Of, Division, Multiplication, Addition, Subtraction" and tells you in which order the operations should be performed. Now, I agree that we all need to do operations in the same order (just imagine trying to explain your working out to someone who uses BADSOM!) but BODMAS isn't the order mathematicians use. It's simply wrong. Take the sum \(4-3+1\) as an example. Anyone can tell you that the answer is 2. But BODMAS begs to differ: addition comes first, giving 0! The problem here is that in reality, we treat addition and subtraction as equally important, so sums involving just these two operations are calculated from left-to-right. This caveat is quite a lot more to remember on top of BODMAS, but there's actually no need: Doing all the subtractions before additions will always give you the same answer as going from left-to-right. The same applies to division and multiplication, but luckily these two are in the correct order already in BODMAS (but no luck if you're using PEMDAS). This is big news. MEDUSA vs BODMAS could be this year's pi vs tau... Although it's not actually the biggest issue when considering sums like \(3-3\times6+2\).
In the latter two, it is much harder to make a mistake in the order of operations, because the correct order is much closer to normal left-to-right reading order, helping the reader to avoid common mistakes. Good mathematics is about good communication, not tricking people. This is why questions like this are "fake maths": real mathematicians would never ask them. If we take the time to write clearly, then I bet more than \(n\)% of people will be able to get the correct answer. We use BEDMAS in Canada (Brackets, Exponents, Division, Multiplication, Addition, Subtraction). But we are taught that you do whichever comes first from left to right if they are the addition/subtraction or multiplication/division. So it could also be BEMDAS, or BEMDSA, or BEDMSA. It just uses the order that rolls off the tongue best. If we could just teach young children about positive and negative numbers, then this wouldn't be a problem. Subtraction is just the addition of negative numbers. Division is just multiplication by fractions. This is why BOMA/PEMA is the optimal method. I think MEDUSA is very creative, though.
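Programming languages implement exactly the convention described above: multiplication binds tighter than addition and subtraction, and same-precedence operators associate left-to-right. A quick check:

```python
assert 4 - 3 + 1 == 2           # left to right: (4 - 3) + 1
assert 4 - (3 + 1) == 0         # the naive "addition before subtraction" reading
assert 3 - 3 * 6 + 2 == -13     # multiplication first, then left to right: 3 - 18 + 2
```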
CommonCrawl
CommonCrawl
It is conjectured that for the Maxwell-Klein-Gordon equations having data with non-vanishing charge and arbitrarily large size, the global solutions disperse as linear waves and enjoy the so-called peeling properties for pointwise estimates. We give a gauge-independent proof of the conjecture. I'll introduce the basic theory of delta-rings, which are one of the key ingredients in prismatic cohomology. I will discuss compactifications of semisimple Lie groups, in particular SU(n) over the real and complex fields, as manifolds with corners. The commutator of the Riesz transform (Hilbert transform in dimension 1) and a symbol $b$ is bounded on $L^2(\mathbb R^n)$ if and only if $b$ is in the BMO space BMO$(\mathbb R^n)$ (Coifman--Rochberg--Weiss). It is natural to ask whether the same holds for the commutator of the Riesz transform on Heisenberg groups.
CommonCrawl
NLCertify is a software package for handling formal certification of nonlinear inequalities involving transcendental multivariate functions. The tool exploits sparse semialgebraic optimization techniques together with approximation methods for transcendental functions, as well as formal features. Given a box $K := [a_1, b_1] \times \dots \times [a_n, b_n]$ and an $n$-variate function $f$ as input, NLCertify provides OCaml libraries that produce nonnegativity certificates for $f$ over $K$. The certificate can be ultimately verified inside the Coq proof assistant. Install it or have a look at the examples.
CommonCrawl
Context: I am interested in Newton polygons of bivariate polynomials. The Newton polygon of $p\in \mathsf K[x,y]$ is the convex hull of the support of $p$. This polygon (obtained through p.newton_polygon()) is in SageMath a Polyhedron. My question is about Polyhedron, and is not (as far as I can tell) specific to Newton polygons. My question: Is there a way to get the list of vertices of a two-dimensional Polyhedron (that is, a polygon in mathematical terms), ordered clockwise or counterclockwise? That is, the list of vertices one gets by following the border of the polygon in one sense or the other. Equivalently, I would be satisfied to get the ordered list of edges (one-dimensional faces). Ordered adjacency between vertices is available, but the order has to be defined by a linear form (vertex_digraph). I guess it is not very hard to reconstruct what I need from this information, but I wonder if there is a very simple way of getting the ordered list of vertices. Note that there is an additional method that could be relevant, though I am not sure what the output represents: facet_adjacency_matrix. ¹ The (undirected) adjacency is not fully sufficient: It requires some (easy) computation to produce a directed adjacency from it, and some (somewhat less simple) additional computation to produce the (say) counterclockwise orientation. A search for "convex hull sagemath" pointed me to this question which in turn led me to the Sage manual. I hope this answers your question.
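Independently of any Sage-specific method, for a convex polygon (which a Newton polygon always is) the vertices can be ordered counterclockwise by sorting them by angle around their centroid; a plain-Python sketch:

```python
from math import atan2

def ccw_vertices(points):
    """Vertices of a convex polygon, ordered counterclockwise."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # The centroid of the vertices of a convex polygon lies inside it,
    # so sorting by angle around it walks the boundary counterclockwise.
    return sorted(points, key=lambda p: atan2(p[1] - cy, p[0] - cx))

assert ccw_vertices([(0, 0), (1, 1), (1, 0), (0, 1)]) == [(0, 0), (1, 0), (1, 1), (0, 1)]
```

In Sage one could feed it the coordinate tuples of the Polyhedron's vertices (e.g. from vertices_list(), assuming a two-dimensional Polyhedron).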
CommonCrawl
Abstract: We consider the probability that a function of multiple independent random variables deviates from a certain value, usually its expected value. In order to find upper bounds for these probabilities, one initially used approaches depending on martingales. In 1994, however, M. Talagrand showcased a new way to prove such concentration inequalities in his paper "Concentration of measure and isoperimetric inequalities in product spaces". This marked significant progress in this subject and in many cases also provided better results than previous methods. The paper at hand presents this approach and thereby Talagrand's convex distance inequality as well as two proofs of it. Moreover, the variety of application possibilities of Talagrand's convex distance inequality will be demonstrated with the help of several examples such as Bin Packing and the traveling salesman problem. Abstract: We consider the affine stochastic equation $X=AX+B$, where $A$ is an upper triangular matrix, $X$ and $B$ are vectors, $X$ is independent of $(A,B)$ and the equation is meant in law. Under appropriate assumptions $X$ has a heavy tail, but unlike the Kesten situation the tails of the components $X_1,\dots, X_d$ of $X$ decay at various speeds. What is more interesting, not only may the exponents be different, but nontrivial slowly varying functions may also appear in the asymptotics. Abstract: The aim of this presentation is to introduce a framework for the asymptotic enumeration of graph classes with many components. By "many" it is meant that the number of components grows linearly in the number of nodes. Firstly, existing results from the present literature covering the asymptotic enumeration of (connected) block-stable graph classes are presented. Therefore, exponential generating functions and the symbolic method are needed in order to translate combinatorial problems into analytic ones.
The second half of the presentation is devoted to discussing random sampling by Boltzmann samplers, which leads to the exact asymptotic behaviour of the number of graphs with certain properties taking into consideration the number of components. More precisely, Boltzmann samplers allow for transitioning into the field of probability theory by analysing sums of i.i.d. integer-valued random variables. Abstract: We define the model of two-dimensional random interlacements using simple random walk trajectories conditioned on never hitting the origin, and then obtain some of its properties. Also, for a random walk on a large torus conditioned on not hitting the origin up to some time proportional to the mean cover time, we show that the law of the vacant set around the origin is close to that of random interlacements at the corresponding level. Thus, this new model provides a way to understand the structure of the set of late points of the covering process from a microscopic point of view. Also, we discuss a continuous version of the model, built using the conditioned (on not hitting the unit disk) Brownian motion trajectories. This is joint work with Francis Comets and Marina Vachkovskaia. Abstract: Recent works on the structure of social, biological and internet networks have attracted much attention on random graphs G(D) chosen uniformly at random among all graphs with a fixed degree sequence D = (d_1,...,d_n), where the vertex i has degree d_i. On this topic, a big step forward is represented by the result achieved by Joos, Perarnau, Rautenbach and Reed (1). It determines whether such a random simple graph G(D) has a giant component or not by imposing only one condition: the sum of all degrees which are not 2 must go to infinity with n. Furthermore, if this is not the case, they show that both the probability that G(D) has a giant component and the probability that G(D) has no giant component lie between p and 1-p, for a positive constant p.
In this thesis we present their work, traveling through the main theorems and the generalizations of the previous results, adding some missing calculations and intermediate steps in order to elucidate it completely. Furthermore, we offer some examples and direct applications of these new criteria. Finally, we attach implementations and graphical illustrations of almost all the treated cases. Abstract: In Moran models the genealogy at a single locus of a constant size $N$ population in equilibrium is given by the well-known Kingman's coalescent. When considering multiple loci under recombination, the ancestral recombination graph encodes the genealogies at all loci. For a continuous genome we study the tree-valued process of genealogies along the genome in the limit $N\to\infty$. Encoding trees as metric measure spaces, we show convergence to a tree-valued process. Furthermore we discuss some mixing properties of the resulting process. This is joint work with Etienne Pardoux and Peter Pfaffelhuber. Title: "Towards biologically plausible deep learning" Abstract: "In recent years (deep) neural networks became the most prominent models for supervised machine learning tasks. They are usually trained based on stochastic gradient descent, where backpropagation is used for the gradient calculation. While this leads to efficient training, it is not very plausible from a biological perspective. We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. Backpropagated error gradients correspond to temporal derivatives with respect to the activation of hidden units. These lead to a weight update proportional to the product of the presynaptic firing rate and the temporal rate of change of the postsynaptic firing rate.
Simulations and a theoretical argument suggest that this rate-based update rule is consistent with those associated with spike-timing-dependent plasticity. These ideas could be an element of a theory for explaining how brains perform credit assignment in deep hierarchies as efficiently as backpropagation does, with neural computation corresponding to both approximate inference in continuous-valued latent variables and error backpropagation, at the same time." Abstract: A conductance graph on $\mathbb Z^d$ is a nearest-neighbor graph where all of the edges have positive weights assigned to them. In this talk, we will consider the spread of information between particles performing continuous-time simple random walks on a conductance graph. We do this by developing a general multi-scale percolation argument using a two-sided Lipschitz surface that can also be used to answer other questions of this nature. Joint work with Alexandre Stauffer. Abstract: Branching Brownian motion (BBM) is a classical process in probability, describing a population of particles performing independent Brownian motions and branching according to a Galton-Watson process. In this talk we present a one-dimensional diffusion process on BBM particles which is symmetric with respect to a certain random martingale measure. This process is obtained by a time-change of a standard Brownian motion in terms of the associated positive continuous additive functional. In a sense it may be regarded as an analogue of Liouville Brownian motion, which has recently been constructed in the context of a Gaussian free field. This is joint work with Lisa Hartung. Abstract: A planar set that contains a unit segment in every direction is called a Kakeya set. These sets have been studied intensively in geometric measure theory and harmonic analysis since the work of Besicovitch (1928); we find a new connection to game theory and probability. A hunter and a rabbit move on the integer points in $[0,n)$ without seeing each other.
At each step, the hunter moves to a neighboring vertex or stays in place, while the rabbit is free to jump to any node. Thus they are engaged in a zero sum game, where the payoff is the capture time. The known optimal randomized strategies for hunter and rabbit achieve expected capture time of order n log n. We show that every rabbit strategy yields a Kakeya set; the optimal rabbit strategy is based on a discretized Cauchy random walk, and it yields a Kakeya set K consisting of 4n triangles, that has minimal area among such sets (the area of K is of order 1/log(n)). Passing to the scaling limit yields a simple construction of a random Kakeya set with zero area from two Brownian motions. (Joint work with Y. Babichenko, Y. Peres, R. Peretz and P. Winkler). Abstract: In this presentation we deal with the existence of solutions to stochastic partial differential equations in scales of Hilbert spaces, and show how this is related to the existence of invariant manifolds. As a particular example, we will treat an equation in the space of tempered distributions; here the Hilbert scales are given by Hermite-Sobolev spaces.
CommonCrawl
I am using Chebyshev discretization to solve a system of PDEs. Now, I also want my grids to be clustered around some point ($x_c$) in the domain. Is there any standard mapping that can achieve this? Any suggestions or references to standard texts are greatly appreciated. In the section about adaptive methods (Chapter 16) of "Chebyshev and Fourier Spectral Methods" by John P. Boyd, several different coordinate transformations, together with their applications in different publications, are presented. I have not used any of those transformations myself, but they can serve as a starting point for your problem. Here $y$ is the physical (unmapped) coordinate and $x$ is the computational coordinate. I have found a simple algebraic mapping (Eq. 18.12 from here) and modified it to suit my need; $w\ (0 < w \ll 1)$ controls the width of the cluster.
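The exact form of Eq. 18.12 isn't reproduced in the thread, but a common sinh-type clustering map with the same role for $w$ looks like the following (a sketch under that assumption): Chebyshev nodes $x \in [-1,1]$ are remapped so that the spacing is finest near $x_c$, with the endpoints preserved.

```python
import numpy as np

def clustered_chebyshev(N, x_c=0.0, w=0.01):
    """Chebyshev points on [-1, 1], remapped to cluster around x_c.

    Uses y(x) = x_c + w*sinh(A + (B - A)*(x + 1)/2), with A and B chosen
    so that x = -1 and x = 1 map to y = -1 and y = 1; smaller w gives
    tighter clustering around x_c.
    """
    x = np.cos(np.pi * np.arange(N + 1) / N)   # standard Chebyshev nodes
    A = np.arcsinh((-1.0 - x_c) / w)
    B = np.arcsinh((1.0 - x_c) / w)
    return x_c + w * np.sinh(A + (B - A) * (x + 1.0) / 2.0)

y = clustered_chebyshev(64, x_c=0.3, w=0.01)
assert abs(y.min() + 1.0) < 1e-12 and abs(y.max() - 1.0) < 1e-12
```

The derivative $dy/dx$ is smallest where the sinh argument vanishes, i.e. at $y = x_c$, which is what concentrates the nodes there.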
CommonCrawl
The ratio of expense and savings of Ram is 5 : 6. What will be the percentage increment in the expenses so that the ratio of expense and savings becomes 6 : 5? The average age of 3 students = 15 yrs, so the sum of their ages = 15 $\times$ 3 = 45 yrs. A train running at an average speed of 48 km/hr performs a journey in 6 hrs 30 min. How long would the same journey take if, after crossing 180 km, the speed were reduced to 33 km/hr? If $x = k^3 - 3k^2$ and $y = 1 - 3k$, then for what value of $k$ will $x = y$?
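Quick checks for these (for the ratio question we assume Ram's income, i.e. expense plus savings, stays fixed, so the total of 11 parts is unchanged):

```python
# Ratio question: expense goes from 5 parts to 6 parts of a fixed total of 11.
assert (6 - 5) / 5 * 100 == 20.0          # a 20% increase turns 5:6 into 6:5

# Average age: 3 students averaging 15 yrs.
assert 15 * 3 == 45

# Train: total distance 48 km/hr * 6.5 hr = 312 km; 180 km at 48, the rest at 33.
total_km = 48 * 6.5
hours = 180 / 48 + (total_km - 180) / 33
assert hours == 7.75                       # 7 hrs 45 min

# Last question: x = y gives k^3 - 3k^2 + 3k - 1 = (k - 1)^3 = 0, so k = 1.
k = 1
assert k**3 - 3 * k**2 == 1 - 3 * k
```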
CommonCrawl
Suppose we have a Hamiltonian action of a torus $T = T^m = \mathbb{R}^m/\mathbb{Z}^m$ on a compact, connected symplectic manifold $M$. According to the convexity theorem, we know every fiber of the momentum map $\mu: M \to \mathbb{R}^m$ is connected. My question here is about the proof. We assume the Hamiltonian action is effective without loss of generality, i.e., only the zero element of $T$ fixes $M$. I already know that the set of regular values of $\mu$ is dense in $\mu(M)$, and also that the set of points $\eta \in \mu(M)$ with $(\eta_1, \ldots, \eta_{m-1})$ a regular value for the reduced momentum map $(\mu_1, \ldots, \mu_{m-1})$ is dense in $\mu(M)$. I also know that the fiber over $\eta$ is connected whenever $(\eta_1, \ldots, \eta_{m-1})$ is a regular value for the reduced momentum map. "Since the set of such points is dense in $\mu(M)$ it follows by continuity that the fiber over $\eta$ is connected for every regular value $\eta$" (in the book by McDuff and Salamon); this is my first question. My second question is whether we can then conclude that every fiber of $\mu$ is connected. Suppose $f: M \to N$ is a smooth map between two smooth manifolds, with $M$ compact and connected, and suppose there is a dense subset of $f(M)$ where each fiber is connected; must each fiber of $f$ then be connected? Example: consider a natural smooth surjection from $S^1$ to the figure eight. The fiber over the node of the figure eight has two points, but every other fiber is a single point. If anyone knows how to prove the connectedness part of the convexity theorem, could you please show us? Thank you very much! I think the key point is that given a Hamiltonian torus action, the components of the moment map are Morse-Bott functions whose critical submanifolds are all even-dimensional with even index. For example, suppose you have a Hamiltonian circle action on $X$. If $p$ is fixed by the action, the circle acts on the tangent space $T_pX$ at $p$, and so $T_pX$ decomposes into eigenspaces, each of which is necessarily even-dimensional.
The subspaces where the action is trivial are tangent to the fixed locus; those with negative weight are where the Hessian is negative definite, and those of positive weight are where the Hessian is positive definite. It follows that each component of the critical locus has even dimension and even index. To see why this implies that the fibres are connected, at least in the case of a circle action, one applies Morse-Bott theory. When $t$ passes a critical level, the level set $\mu^{-1}(t)$ changes by a certain type of surgery. The only surgeries which can alter the connectedness of the level set are those of index or coindex 1, but we have just seen that this never happens for a Hamiltonian circle action. So all the fibres are connected. To pass from a circle action to a torus action one can use induction. If you get stuck, searching for things like "Morse-Bott even index torus" should help you find the proof on Google somewhere. Alternatively, if you are feeling more traditional, you could look at Atiyah's original proof in your library: "Convexity and commuting Hamiltonians", Bulletin of the LMS 14(1), 1982. The first question is OK, i.e., the fiber over $\eta$ is indeed connected for every regular value $\eta$, since the map is like a product there; please refer to Ehresmann's theorem and also the related post "Can connectedness of fibers of a smooth map be checked on a dense set?". So the real issue is the second problem!
CommonCrawl
The first step in solving two-step inequalities is the same as in one-step inequalities. In other words, we first need to isolate the variable on one side of the inequality. Since the variable $x$ is the value we are looking for, there is still something in the way. The number $a$ multiplies $x$, so we have to deal with that before we can have our $x$. Obviously, we have to divide the whole inequality by the number $a$. Here is where you have to be careful. Remember that the inequality sign depends on whether the number $a$ is negative or positive. In this case, $x$ is multiplied by $-7$, so we should divide the whole inequality by $-7$. But $-7$ is a negative number, so the inequality sign will change from $\leq$ to $\geq$. Here we have that $x$ is greater than or equal to $1$. That means that $1$ will be included in the solution set. In interval form: $x \in \left[1, \infty\right>$.
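A quick numeric check of the sign flip, using $-7x \leq -7$ as the inequality implied above:

```python
# Candidate values: the inequality -7x <= -7 should hold exactly for x >= 1.
candidates = [-2, 0, 1, 3]
solutions = [x for x in candidates if -7 * x <= -7]
# Only the candidates with x >= 1 satisfy the inequality, confirming that
# dividing by the negative number -7 flips <= into >=.
```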
CommonCrawl
Abstract: We consider $N$ independent stochastic processes $(X_i(t), t\in [0,T_i])$, $i=1,\ldots, N$, defined by a stochastic differential equation with diffusion coefficients depending on a random variable $\phi_i$. The distribution of the random effect $\phi_i$ depends on unknown population parameters which are to be estimated from discrete observations of the processes $(X_i)$. The likelihood generally has no closed-form expression. Two estimation methods are proposed: one based on the Euler approximation of the likelihood and another based on estimates of the random effects. When the distribution of the random effects is Gamma, the asymptotic properties of the estimators are derived as both $N$ and the number of observations per subject tend to infinity. The estimators are computed on simulated data for several models and show good performance.
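As a hedged illustration of the setting (the model and all parameter values below are made up for the sketch, not taken from the paper), one can simulate $N$ processes whose diffusion coefficient carries a Gamma-distributed random effect with an Euler-Maruyama scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: dX_i = phi_i dW_i with phi_i ~ Gamma(shape, scale);
# the shape/scale values and step counts are assumptions for this sketch.
N, n_steps, T = 5, 1000, 1.0
dt = T / n_steps
phi = rng.gamma(shape=2.0, scale=0.5, size=N)     # random effects
X = np.zeros((N, n_steps + 1))
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=N)     # Brownian increments
    X[:, k + 1] = X[:, k] + phi * dW              # Euler-Maruyama step
```

From such simulated paths one would then build the discrete observations on which the two estimation methods are compared.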
CommonCrawl
This chapter develops a small language of numbers and booleans, serving as a straightforward vehicle for the introduction of several fundamental concepts: abstract syntax trees, evaluation, and runtime errors. Here t is called a metavariable: at every point where the symbol t appears, we may substitute any term. The set of terms is the smallest set $T$ s.t. the three definitions above are equivalent. Denotational semantics. The meaning of a term is taken to be some mathematical object, instead of a sequence of machine states. Giving a denotational semantics for a language consists of finding a collection of semantic domains and then defining an interpretation function mapping terms into elements of these domains. First we consider a simpler situation where only booleans are involved. A rule consists of one conclusion and zero or more premises. For example, in rule E-IF, t1 -> t1' is the premise and if t1 then t2 else t3 -> if t1' then t2 else t3 the conclusion. Note that in the textbook premises are written above the conclusion with a horizontal line in the middle, but here I use a notation that is more convenient to type. A subset of terms should be defined as the possible final results of evaluation, which are called values. Here they are just the constants true, false, 0. Note that $\rightarrow$ can be viewed as a binary relation over $T$, i.e., a subset of $T \times T$. The third rule E-IF specifies the evaluation order of an expression, i.e., clauses are always evaluated after their guards. DEFINITION instance of an inference rule An instance of an inference rule is obtained by consistently replacing each metavariable by the same term in the rule's conclusion and all its premises (if any). e.g., if true then true else (if false then false else false) -> true is an instance of E-IFTRUE.
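The three if-rules above can be turned into a tiny one-step evaluator. The following is an illustrative Python encoding (terms as nested tuples), not the book's ML implementation:

```python
# A minimal one-step evaluator for the boolean fragment sketched above.
TRUE, FALSE = "true", "false"

def step(t):
    """Return t' such that t -> t', or None if no rule applies (normal form)."""
    if isinstance(t, tuple) and t[0] == "if":
        _, guard, then_, else_ = t
        if guard == TRUE:            # E-IFTRUE
            return then_
        if guard == FALSE:           # E-IFFALSE
            return else_
        guard2 = step(guard)         # E-IF: evaluate the guard first
        if guard2 is not None:
            return ("if", guard2, then_, else_)
    return None

def evaluate(t):
    """Multi-step evaluation ->*: apply step until a normal form is reached."""
    while (t2 := step(t)) is not None:
        t = t2
    return t
```

With only booleans, every term reaches a value; once numbers are added, `evaluate` can also stop at a stuck term such as `if 0 then true else true`.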
DEFINITION satisfy A rule is satisfied by a relation if, for each instance of the rule, either the conclusion is in the relation or one of the premises is not. DEFINITION one-step evaluation relation denoted as $\rightarrow$, is the smallest binary relation on terms satisfying the three rules. When the pair (t, t') is in the relation, we say that t -> t' is derivable. ("Smallest" implies that t -> t' is derivable iff it is justified by the rules.) DEFINITION normal form A term $t$ is in normal form if no evaluation rule applies to it. Every value is in normal form. When only booleans are involved, every term in normal form is a value, and every term evaluates to a value. DEFINITION stuck term A closed term is stuck if it is in normal form but not a value, e.g. if 0 then true else true or iszero false. Stuckness gives us a simple notion of run-time error for this simple machine. Intuitively, it characterizes the situation where the operational semantics does not know what to do because the program has reached a "meaningless state". Stuckness can be prevented by introducing a new term called $wrong$ and augmenting the operational semantics with rules that explicitly generate $wrong$ in all the situations where the present semantics gets stuck. DEFINITION multi-step evaluation $\rightarrow^\star$ is the reflexive, transitive closure of one-step evaluation. If $t \rightarrow^\star t_1$ and $t \rightarrow^\star t_2$, where $t_1$ and $t_2$ are in normal form, then $t_1 = t_2$. For every term $t$ there is some normal form $t'$ s.t. $t \rightarrow^\star t'$. DEFINITION big-step evaluation (omitted) formulates the notion of "this term evaluates to that final value". Clone the repository from https://github.com/rofl0r/proxychains-ng, then make && sudo make install. Functor solves the problem of mapping regular one-parameter functions into a sub-category, but that's not easy for functions with more than one parameter.
Let's consider a function with two parameters, f :: a -> b -> c, which can also be read as a -> (b -> c). Applying fmap to f, we get fmap f :: m a -> m (b -> c). There's still some distance from what we want: f' :: m a -> m b -> m c. To get f', we need a transform from m (b -> c) to m b -> m c. Here we denote it as <*> :: m (b -> c) -> m b -> m c. We will later show that this transform works uniformly for functions with more parameters. Now consider a function with three parameters, f :: a -> b -> c -> d. We are going to transform it into a wrapped-value version with the help of fmap and <*>. In Haskell, fmap has an infix name <$>. So finally we get: f <$> a <*> b <*> c. Haskell pre-defines a type class Applicative, which captures the pattern <*>. Any type that implements Applicative works well with <$> and <*>. Note that an Applicative is also a Functor. Apart from <*>, there are some other helper functions and operators in Applicative. pure is equivalent to the default value constructor of f, e.g. (:) for List or Just for Maybe. This can be handy when lifting an unwrapped value to a wrapped one. liftA2 transforms a binary operator into the corresponding lifted version. This function exists because binary operators are frequently passed among higher-order functions. putStrLn "1" *> putStrLn "2" sequences the two actions and keeps only the second result; <* is similar but keeps the first result instead. Both will be revisited while studying Monad. First, let's state the universal property of foldr: if a function g is s.t. (the defining equations, elided here), then g list === foldr f v list.
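To make the <*> pattern concrete outside Haskell, here is a hedged Python analogue for an optional-style wrapper, where None plays the role of Nothing (an illustration of the idea only, not of Haskell's actual Applicative class):

```python
from functools import partial

def fmap(f, m):
    """Map f over an 'optional' value: None stays None."""
    return None if m is None else f(m)

def ap(mf, m):
    """Analogue of <*> :: m (b -> c) -> m b -> m c for the optional wrapper."""
    return None if mf is None or m is None else mf(m)

def lift2(f, ma, mb):
    """Analogue of liftA2: apply a binary function to two wrapped values."""
    return ap(fmap(lambda a: partial(f, a), ma), mb)
```

Note how `lift2` is literally `f <$> ma <*> mb` spelled out: `fmap` partially applies `f`, and `ap` feeds in the second wrapped argument.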
CommonCrawl
The non-commuting graph $\nabla(G)$ of a non-abelian finite group $G$ is defined as follows: its vertex set is $G - Z(G)$ and two distinct vertices $x$ and $y$ are joined by an edge if and only if the commutator of $x$ and $y$ is not the identity. In this paper we prove some new results about this graph. In particular, we will give a new proof of Theorem 3.24 of [A. Abdollahi, S. Akbari, H. R. Maimani, Non-commuting graph of a group, J. Algebra, 298 (2006) 468-492]. We also prove that if $G_1, G_2, \ldots, G_n$ are finite groups such that $Z(G_i) = 1$ for $i = 1, 2, \ldots, n$ and they are characterizable by their non-commuting graphs, then $G_1 \times G_2 \times \cdots \times G_n$ is characterizable by its non-commuting graph. Ron Solomon and Andrew Woldar (2012). All simple groups are characterized by their non-commuting graphs. Preprint.
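As a toy sanity check of the definition (illustrative code only, unrelated to the paper's proofs), one can build the non-commuting graph of $S_3$ explicitly:

```python
from itertools import permutations, combinations

def compose(p, q):
    """Composition of permutations given as tuples: (p . q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

G = set(permutations(range(3)))                  # the symmetric group S3
center = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
vertices = G - center                            # vertex set G - Z(G)
edges = {frozenset((x, y)) for x, y in combinations(vertices, 2)
         if compose(x, y) != compose(y, x)}      # edge iff x and y do not commute
```

For $S_3$ the center is trivial, so there are 5 vertices, and the only commuting pair of nontrivial elements is the two 3-cycles, giving 9 edges.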
CommonCrawl
Received January 3, 2019; Revised January 22, 2019; Accepted January 22, 2019. Based on first-principles electronic structure calculations, we estimated the variation of the thermodynamic free energy of lattice vibrations with temperature for various polymorph structures of Sn: we examined the $\alpha$ and $\beta$ polymorphs and two kinds of hexagonal structures, through which we are able to investigate their thermally induced phase-transition characteristics. Consistent with the experimental results, a phase transition from $\alpha$-Sn to $\beta$-Sn was observed with increasing temperature, because the entropy of $\beta$-Sn is larger than that of $\alpha$-Sn. Thus, the entropy of the phonons appears to have played an important role in the phase transition. The hexagonal structure can be more stable than the $\beta$ structure at high temperature, but the transition temperature is higher than the melting temperature of Sn; thus, hexagonal Sn is thought to be a metastable phase.
CommonCrawl
Our proposed method detects and segments phytoplankton cells from microscopic images of non-setae species. A saliency-based marker-controlled watershed method was proposed to detect and segment phytoplankton cells from microscopic images of non-setae species. This method first improves the IG saliency detection method by combining a saturation feature with color and luminance features to detect cells uniformly across microscopic images, and then produces effective internal and external markers by removing various image-specific noises so that watershed segmentation performs efficiently and automatically. We built the first benchmark dataset for cell detection and segmentation, including 240 microscopic images across multiple phytoplankton species with pixel-wise cell regions labeled by a taxonomist, to evaluate our method. We compared our cell detection method with seven popular saliency detection methods and our cell segmentation method with six commonly used segmentation methods. The quantitative comparison validates that our method performs better on cell detection in terms of robustness and uniformity and on cell segmentation in terms of accuracy and completeness. The qualitative results show that our improved saliency detection method can detect and highlight all cells, and that the following marker selection scheme can remove the corner noise caused by illumination and the small noise caused by specks and debris, as well as deal with blurred edges. We constructed a new dataset that contains 240 phytoplankton microscopic images, 225 with a single cell and 15 with multiple cells, together with human-labeled ground-truth cell regions. These images were acquired and selected with the help of phytoplankton experts, in sizes ranging from $256\times 256$ to $4080\times 3072$ and across different species of non-setae phytoplankton. The pixel-wise ground-truth masks were produced under the guidance of a phytoplankton taxonomist so as to capture the biomorphic characteristics of the cells.
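As a rough sketch of the saliency-plus-markers idea, here is a simplified NumPy toy (the actual pipeline uses the improved IG saliency method with saturation features and full watershed segmentation; everything below is an illustrative reduction):

```python
import numpy as np

def saliency(img):
    """IG-style saliency: per-pixel color distance from the mean image color."""
    mean = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return np.linalg.norm(img - mean, axis=-1)

def markers(sal, lo=0.3, hi=0.7):
    """Internal marker: confidently salient pixels; external: clear background."""
    s = (sal - sal.min()) / (np.ptp(sal) + 1e-12)
    return s > hi, s < lo

# Synthetic 16x16 image with a bright square "cell" on a dark background.
img = np.zeros((16, 16, 3))
img[4:12, 4:12] = 1.0
internal, external = markers(saliency(img))
```

In the real method these internal and external markers are then imposed on the gradient image before running watershed, so that the flooding starts from the cell interiors and the background rather than from noise.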
These data provide a useful resource for studying automatic detection and segmentation of phytoplankton. Some sample images are shown in the above figure. The data are available for downloading here. The MATLAB source code is available at https://github.com/zhenglab/ACDS. Quantitative comparison of cell detection by different methods on our proposed dataset. (a) 225 microscopic images with single cells. The yellow $\bigstar$, representing the actual threshold generated by our method for binarization, shows nearly the best segmentation on these PR curves. (b) 15 microscopic images with multiple cells. Quantitative comparison of cell segmentation by different methods on our proposed dataset. (a) 225 microscopic images with single cells. (b) 15 microscopic images with multiple cells. Shape matching using the modified Hausdorff distance (MHD). For the 225 microscopic images with a single cell (see S1 Table), the number of contours (shapes) matched most similarly to the ground truth is 12 by Canny, 2 by Ours1/Ours2, and 211 by our final segmentation (Ours3). For the 15 microscopic images with multiple cells (see S2 Table), the corresponding matching number is 2 by Ours2 and 13 by Ours3, indicating the better performance of our proposed method on shape similarity of phytoplankton cells. Cell counting comparison on 15 microscopic images with multiple cells. The numbers in bold font indicate the best results. Experimental results of our proposed method for single-cell detection and segmentation. The first column shows the original RGB microscopic images of the following non-setae species in each row: (a) Dictyocha fibula. (b) Chattonella marina.
Columns 2 to 6 present the image results of salient objects detected by the IG method, the salient objects detected by saturation, our combined salient objects, the binarization of combined salient objects, and the markers containing internal (black regions in the objects) and external (black regions outside the objects) markers imposed on the gray-level microscopic images, respectively. The last column shows the final segmentation results of our proposed method. Visual comparison with the commonly used segmentation methods on single cell segmentation. The first column shows the original RGB microscopic images of the following non-setae species in each row: (a) Dictyocha fibula. (b) Chattonella marina. (c) Ceratium tripos. (d) Scrippsiella trochoidea. For comparison, the remaining columns present the results obtained by the following segmentation methods consecutively: Canny, ITS, Otsu, Sauvola, MET, K-means, and our proposed method. Experimental results of our proposed method for multiple cell detection and segmentation. The first column shows the original RGB microscopic images of the following non-setae species in each row: (a) Prorocentrum triestinum. (b) Amphidinium carterae. (c)(d) Chattonella marina. Columns 2 to 6 present the image results of the salient objects detected by the IG method, the salient objects detected by saturation, our combined salient objects, the binarization of combined salient objects, and the markers containing internal (black regions on the objects) and external (black lines between the objects) markers imposed on the gray-level microscopic images, respectively. The last column shows the final segmentation results of our proposed method. 
We wish to thank the Algal Collection of Research Center for Harmful Algae and Aquatic Environment in Jinan University and Key Laboratory of Marine Environment and Ecology, Ministry of Education in Ocean University of China for providing the samples of phytoplankton species and the instruments to observe and acquire the corresponding microscopic images. This work was supported by the National Natural Science Foundation of China under grant numbers 61301240 and 61271406 and China Postdoctoral Science Foundation under grant number 2016M590658.
CommonCrawl
On the border of Metsälä and Syrjälä there is a fenced area known as "Area 50". It can be represented as a $50 \times 50$ grid whose squares are numbered $1,2,\ldots,50^2$. Each square has a distinct number. Kotivalo wants to get a better picture of the area and has sent $100$ parachute robots to investigate it. Each robot first lands on a random square. Then, $100$ times, it sends the number of the square where it is currently located and then randomly moves left, right, up or down (but never outside the area). After that, the robot destroys itself. Could you help Kotivalo create a map of the area based on the information given by the robots? The input contains $100$ lines, each of them with $100$ integers: the numbers of the squares sent by a robot in the order it visited them. Print $50$ lines, each of them with $50$ integers: the map of the area. Each number $1,2,\ldots,50^2$ must appear exactly once in the map. In this task, there is only one input file, which is available here. You have to submit an output file that corresponds to the input file. You will get a point for each robot whose information matches your map, i.e., your final score will be between $0$ and $100$ points.
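One hedged starting point for the reconstruction (a sketch only, not a full solution): since consecutive reports from a robot come from neighboring squares, the traces determine an adjacency graph over square numbers, which a map-building heuristic can then try to embed into the $50 \times 50$ grid.

```python
from collections import defaultdict

def adjacency_from_traces(traces):
    """Each consecutive pair of reported numbers is a grid-neighbor edge."""
    adj = defaultdict(set)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            adj[a].add(b)
            adj[b].add(a)
    return adj
```

A square's true degree in the grid is 2, 3 or 4 (corner, edge, interior), so vertices whose observed degree already reaches 4 are certainly interior, which gives the embedding heuristic something to anchor on.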
CommonCrawl
A finite group $G$ has uniform spread $k$ if there exists a fixed conjugacy class $C$ of elements in $G$ with the property that for any $k$ nontrivial elements $s_1, s_2, \ldots, s_k$ in $G$ there exists $y \in C$ such that $G = \langle s_i, y \rangle$ for $i = 1, 2, \ldots, k$. Further, the exact uniform spread of $G$ is the largest $k$ such that $G$ has uniform spread $k$. In this paper we give upper bounds on the exact uniform spreads of thirteen sporadic simple groups.
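For intuition only (a brute-force toy far from the sporadic groups of the paper), one can check that $S_3$ has uniform spread $1$, taking the class of transpositions as $C$:

```python
from itertools import permutations

def compose(p, q):
    """(p . q)(i) = p[q[i]] for permutations represented as tuples."""
    return tuple(p[i] for i in q)

def generated(gens, identity):
    """Subgroup generated by gens; in a finite group, closure under
    composition alone suffices."""
    elems = {identity} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

G = set(permutations(range(3)))                              # S3
e = (0, 1, 2)
C = {p for p in G if sum(p[i] == i for i in range(3)) == 1}  # transpositions
spread_one = all(any(generated({s, y}, e) == G for y in C)
                 for s in G - {e})
```

Any two distinct transpositions generate $S_3$, and a transposition together with a 3-cycle does too, so for every nontrivial $s$ some $y \in C$ works.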
CommonCrawl
» Steam hangs on "Installing breakpad exception handler..." It then hangs there indefinitely. For the record, it always says "appid(steam)/version(0_client)", never "appid(steam)/version($x_client)", where $x is some natural number greater than 0. I seem to get this regardless of how I launch steam. I've tried disabling STEAM_RUNTIME, I've tried seeing what strace would turn up (if you want me to post the output of "strace steam", I can), and I've also tried deleting the cache and even completely wiping steam off my machine and re-installing it fresh. Nothing seems to work. I'm pretty sure I've installed all the lib32 packages required - in fact, I don't think Steam is even reaching the point where it checks those dependencies. I'm really not sure where to go from here as Steam is not officially supported on Arch, the documentation is sparse, and yet I've gotten Steam to work on my machine about a year ago under an earlier version of my current setup. Re: Steam hangs on "Installing breakpad exception handler..." If there aren't any missing dependencies it might be worth a shot (re-)moving the .steam directory in your home folder, then reinstalling (may not be needed). Not sure what I'm looking for, it appears like it has all its dependencies. I tried completely removing steam from my system. When I re-installed it and launched it again, it gave me the license agreement and then got hung up on the same "appid(steam)/version(0_client)" as before. Did anybody solve the problem? I'm having exactly the same problem. Problem for me was with incorrect graphics drivers installed. I came back to arch after a bit of hiatus and the drivers for my card are in multilib/lib32-nvidia-340xx-libgl but i had the latest drivers installed instead (multilib/lib32-nvidia-libgl). Installing the right drivers fixed it. Hope this helps. 
For anyone else that finds this and still can't get it working (like me), you may also need to install lib32-nvidia-340xx-utils and lib32-nvidia-340xx-libgl. Those were not installed for me by default when I switched to the 340xx drivers. I also installed the 340xx versions of the opencl packages, but I doubt that had to do with steam working again. Hope that helps! lib32-nvidia-utils or lib32-nvidia-340xx-utils or lib32-nvidia-304xx-utils - match the version of the 64 bit package. Thank you for sharing your solution, ickognito. Since this a really old topic, and it seems to be resurrected every 12 months with a different (possible) solution, I'm going to go ahead and close it now.
CommonCrawl
Yun, H., Y. Wang, F. Zhang, Z. Lu, S. Lin, L. Chrostowski, and N. A. F. Jaeger, "Broadband $2\times 2$ adiabatic 3 dB coupler using silicon-on-insulator sub-wavelength grating waveguides", Optics Letters, vol. 41, no. 13: Optical Society of America, pp. 3041–3044, 2016. Wang, Y., X. Wang, J. Flueckiger, H. Yun, W. Shi, R. Bojko, N. A. F. Jaeger, and L. Chrostowski, "Focusing sub-wavelength grating couplers with low back reflections for rapid prototyping of silicon photonic circuits", Optics Express, vol. 22, no. 17: Optical Society of America, pp. 20652–20662, 2014.
CommonCrawl