| text (stringlengths 100 to 500k) | subset (stringclasses, 4 values) |
|---|---|
The idea of a digital pill that records when it has been consumed is a sound one, but as the FDA notes, there is no evidence that it actually increases the likelihood that patients with a history of inconsistent consumption will follow their prescribed course of treatment. There is also a very strange irony in schizophrenia being the first condition this technology is being used to target.
While the commentary makes effective arguments — that this isn't cheating, because cheating is based on what the rules are; that this is fair, because hiring a tutor isn't outlawed for being unfair to those who can't afford it; that this isn't unnatural, because humans with computers and antibiotics have been shaping what is natural for millennia; that this isn't drug abuse any more than taking multivitamins is — the authors seem divorced from reality in the examples they provide of effective stimulant use today.
Flow diagram of epidemiology literature search completed July 1, 2010. Search terms were nonmedical use, non-medical use, misuse, or illicit use, and prescription stimulants, dextroamphetamine, methylphenidate, Ritalin, or Adderall. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies of the extent of nonmedical prescription stimulant use by students and related questions addressed in the present article including students' motives and frequency of use.
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1,210 per 100μg, $470 per 500μg, $750 per 1,000μg, $1,000 per 1,000μg, $1,030 per 1,000μg, or $235 per 20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000.
Since my experiment had a number of flaws (non-blind, varying doses at varying times of day), I wound up doing a second better experiment using blind standardized smaller doses in the morning. The negative effect was much smaller, but there was still no mood/productivity benefit. Having used up my first batch of potassium citrate in these 2 experiments, I will not be ordering again since it clearly doesn't work for me.
The truth is that, almost 20 years ago when my brain was failing and I was fat and tired, I did not know to follow this advice. I bought $1000 worth of smart drugs from Europe, took them all at once out of desperation, and got enough cognitive function to save my career and tackle my metabolic problems. With the information we have now, you don't need to do that. Please learn from my mistakes!
A number of different laboratory studies have assessed the acute effect of prescription stimulants on the cognition of normal adults. In the next four sections, we review this literature, with the goal of answering the following questions: First, do MPH (e.g., Ritalin) and d-AMP (by itself or as the main ingredient in Adderall) improve cognitive performance relative to placebo in normal healthy adults? Second, which cognitive systems are affected by these drugs? Third, how do the effects of the drugs depend on the individual using them?
It's been widely reported that Silicon Valley entrepreneurs and college students turn to Adderall (without a prescription) to work late through the night. In fact, a 2012 study published in the Journal of American College Health showed that roughly two-thirds of undergraduate students were offered prescription stimulants for non-medical purposes by senior year.
But, if we find in 10 or 20 years that the drugs don't do damage, what are the benefits? These are stimulants that help with concentration. College students take such drugs to pass tests; graduates take them to gain professional licenses. They are akin to using a calculator to solve an equation. Do you really want a doctor who passed his boards as a result of taking speed — and continues to depend on that for his practice?
Smart Pill appears to be a powerful dietary supplement that blends ingredients with proven positive effect on the brain, thus promoting mental health. Some problems like attention disorders, mood disorders, or stress can be addressed with this formula. The high price relative to the amount provided for a month can be a minus, but the ingredients used have a strong link to brain health. Other supplements that provide the same effect can be found online, so a quick search is advised to find the best-suited supplement for your particular needs. If any problems arise, consult a medical doctor immediately.
Theanine can also be combined with caffeine as both of them work in synergy to increase memory, reaction time, and mental endurance. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand that consists of tea grown in the shade, because then Theanine would be abundantly present in it.
Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days ($0.2 \times 48 \times 7.25 \approx \$70$), it's still a clear profit to run a convincing experiment.
Adaptogens are plant-derived chemicals whose activity helps the body maintain or regain homeostasis (equilibrium between the body's metabolic processes). Almost without exception, adaptogens are available over-the-counter as dietary supplements, not controlled drugs. Well-known adaptogens include Ginseng, Kava Kava, Passion Flower, St. John's Wort, and Gotu Kola. Many of these traditional remedies border on being "folk wisdom," and have been in use for hundreds or thousands of years, and are used to treat everything from anxiety and mild depression to low libido. While these smart drugs work in many different ways (their commonality is their resultant function within the body, not their chemical makeup), it can generally be said that the cognitive boost users receive is mostly a result of fixing an imbalance in people with poor diets, body toxicity, or other metabolic problems, rather than directly promoting the growth of new brain cells or neural connections.
…Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.
These pills don't work. The reality is that MOST of these products don't work effectively. Maybe we're cynical, but if you simply review the published studies on memory pills, you can quickly eliminate many of the products that don't have "the right stuff." The active ingredients in brain and memory health pills are expensive and most companies sell a watered down version that is not effective for memory and focus. The more brands we reviewed, the more we realized that many of these marketers are slapping slick labels on low-grade ingredients.
Took random pill at 2:02 PM. Went to lunch half an hour afterwards, talked until 4 - more outgoing than my usual self. I continued to be pretty energetic despite not taking my caffeine+piracetam pills, and though it's now 12:30 AM and I listened to TAM YouTube videos all day while reading, I feel pretty energetic and am reviewing Mnemosyne cards. I am pretty confident the pill today was Adderall. Hard to believe placebo effect could do this much for this long or that normal variation would account for this. I'd say 90% confidence it was Adderall. I do some more Mnemosyne, typing practice, and reading in a Montaigne book, and finally get tired and go to bed around 1:30 AM or so. I check the baggie when I wake up the next morning, and sure enough, it had been an Adderall pill. That makes me 1 for 2.
It arrived as described, a little bottle around the volume of a soda can. I had handy a plastic syringe with milliliter units which I used to measure out the nicotine-water into my tea. I began with half a ml the first day, 1ml the second day, and 2ml the third day. (My Zeo sleep scores were 85/103/86 (▁▇▁), and the latter had a feline explanation; these values are within normal variation for me, so if nicotine affects my sleep, it does so to a lesser extent than Adderall.) Subjectively, it's hard to describe. At half a ml, I didn't really notice anything; at 1 and 2ml, I thought I began to notice it - sort of a cleaner caffeine. It's nice so far. It's not as strong as I expected. I looked into whether the boiling water might be breaking it down, but the answer seems to be no - boiling tobacco is a standard way to extract nicotine, actually, and nicotine's own boiling point is much higher than water; nor do I notice a drastic difference when I take it in ordinary water. And according to various e-cigarette sources, the liquid should be good for at least a year.
Talk to your doctor, too, before diving in "to ensure that they do not conflict with current meds or cause a detrimental effect," Hohler says. You also want to consider what you already know about your health and body – if you have anxiety or are already sensitive to caffeine, for example, you may find that some of the supplements work a little too well and just enhance anxiety or make it difficult to sleep, Barbour says. Finances matter, too, of course: The retail price for Qualia Mind is $139 for 22 seven-capsule "servings"; the suggestion is to take one serving a day, five days a week. The retail price for Alpha Brain is $79.95 for 90 capsules; adults are advised to take two a day.
CDP-Choline is also known as Citicoline or Cytidine Diphosphocholine. It has been enhanced to allow improved crossing of the blood-brain barrier. Your body converts it to Choline and Cytidine. The latter then gets converted to Uridine (which crosses the blood-brain barrier). CDP-Choline is found in meats (liver), eggs (yolk), fish, and vegetables (broccoli, Brussels sprouts).
When you drink tea, you're getting some caffeine (less than the amount in coffee), plus an amino acid called L-theanine that has been shown in studies to increase activity in the brain's alpha frequency band, which can lead to relaxation without drowsiness. These calming-but-stimulating effects might contribute to tea's status as the most popular beverage aside from water. People have been drinking it for more than 4,000 years, after all, but modern brain hackers try to distill and enhance the benefits by taking just L-theanine as a nootropic supplement. Unfortunately, that means they're missing out on the other health effects that tea offers. It's packed with flavonoids, which are associated with longevity, reduced inflammation, weight loss, cardiovascular health, and cancer prevention.
Following up on the promising but unrandomized pilot, I began randomizing my LLLT usage since I worried that more productive days were causing use rather than vice-versa. I began on 2 August 2014, and the last day was 3 March 2015 (n=167); this was twice the sample size I thought I needed, and I stopped, as before, as part of cleaning up (I wanted to know whether to get rid of it or not). The procedure was simple: by noon, I flipped a bit and either did or did not use my LED device; if I was distracted or didn't get around to randomization by noon, I skipped the day. This was an unblinded experiment because finding a randomized on/off switch is tricky/expensive and it was easier to just start the experiment already. The question is simple too: controlling for the simultaneous blind magnesium experiment & my rare nicotine use (I did not use modafinil during this period or anything else I expect to have major influence), is the pilot correlation of d=0.455 on my daily self-ratings borne out by the experiment?
But there are some potential side effects, including headaches, anxiety and insomnia. Part of the way modafinil works is by shifting the brain's levels of norepinephrine, dopamine, serotonin and other neurotransmitters; it's not clear what effects these shifts may have on a person's health in the long run, and some research on young people who use modafinil has found changes in brain plasticity that are associated with poorer cognitive function.
And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy.
Because these drugs modulate important neurotransmitter systems such as dopamine and noradrenaline, users take significant risks with unregulated use. There has not yet been any definitive research into modafinil's addictive potential, how its effects might change with prolonged sleep deprivation, or what side effects are likely at doses outside the prescribed range. | CommonCrawl |
An algebraically completely integrable Hamiltonian system defined on the cotangent bundle to the moduli space of stable vector bundles (of fixed rank and degree; cf. also Vector bundle) over a given Riemann surface $X$ of genus $g\geq2$. Hitchin's definition of the system [a9] greatly enhanced the theory of spectral curves [a8], which underlies the discovery of a multitude of algebraically completely integrable systems in the 1970s. Such systems are given by a Lax-pair equation: $\dot{L}=[M,L]$ with $(n\times n)$-matrices $L$, $M$ depending on a parameter $\lambda$; the spectral curve is an $n$-fold covering of the parameter space and the system lives on a co-adjoint orbit in a loop algebra, by the Adler–Kostant–Symes method of symplectic reduction, cf. [a1]. N.J. Hitchin defines the curve of eigenvalues on the total space of the canonical bundle of $X$, and linearizes the flows on the Jacobi variety of this curve.
The idea gave rise to a great amount of algebraic geometry: moduli spaces of stable pairs [a12]; meromorphic Hitchin systems [a3] and [a4]; Hitchin systems for principal $G$-bundles [a5]; and quantized Hitchin systems with applications to the geometric Langlands program [a2].
Moreover, by moving the curve $X$ in moduli, Hitchin [a10] achieved geometric quantization by constructing a projective connection over the spaces of bundles, whose associated heat operator generalizes the heat equation that characterizes the Riemann theta-function for the case of rank-one bundles. The coefficients of the heat operator are given by the Hamiltonians of the Hitchin systems.
Explicit formulas for the Hitchin Hamiltonian and connection were produced for the genus-two case [a7], [a6]. A connection of Hitchin's Hamiltonians with KP-flows (cf. also KP-equation) is given in [a4] and [a11].
| CommonCrawl |
There are $n$ pupils in a school, numbered $1,2,\ldots,n$. There will soon be a dance event where the pupils will form $n/2$ pairs.
Justiina has planned the event and created a list of dance pairs. The list consists of $n/2$ pairs, and each pupil belongs to exactly one pair.
However, Kotivalo has made a little prank and added somewhere in the list an extra pair. Your task is to find this pair.
The first input line contains an integer $n$: the number of pupils.
After this, there are $n/2+1$ lines: Justiina's plan with Kotivalo's addition. Each line contains two distinct integers between $1$ and $n$.
Print the line added by Kotivalo in the same way as in the input. You can assume that there is only one possible answer. | CommonCrawl |
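A minimal Python sketch of one way to solve the problem above (my own illustration, not part of the original statement): since every pupil appears exactly once in Justiina's plan, both members of Kotivalo's extra pair must appear twice in the input, and the statement guarantees the answer is unique.

```python
import sys
from collections import Counter

def solve(data: str) -> str:
    """Return the added pair: the first listed pair whose members both occur twice."""
    tokens = data.split()
    n = int(tokens[0])
    pairs = [(int(tokens[i]), int(tokens[i + 1])) for i in range(1, len(tokens), 2)]
    counts = Counter()
    for a, b in pairs:
        counts[a] += 1
        counts[b] += 1
    # In Justiina's plan every pupil appears once, so the extra pair is the one
    # whose members both appear twice in the whole list.
    for a, b in pairs:
        if counts[a] == 2 and counts[b] == 2:
            return f"{a} {b}"
    return ""

if __name__ == "__main__":
    print(solve(sys.stdin.read()))
```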
Abstract : We consider autonomous robots that are endowed with motion actuators and visibility sensors. The robots we consider are weak, i.e., they are anonymous, uniform, unable to explicitly communicate, and oblivious (they do not remember any of their past actions). In this paper, we propose an optimal (w.r.t. the number of robots) solution for the terminating exploration of a torus-shaped network by a team of $k$ such robots. In more detail, we first show that it is impossible to explore a simple torus of arbitrary size with (strictly) less than four robots, even if the algorithm is probabilistic. If the algorithm is required to be deterministic, four robots are also insufficient. This negative result implies that the only way to obtain an optimal algorithm (w.r.t. the number of robots participating in the algorithm) is to make use of probabilities. Then, we propose a probabilistic algorithm that uses four robots to explore all simple tori of size $\ell \times L$, where $7 \leq \ell \leq L$. Hence, in such tori, four robots are necessary and sufficient to solve the (probabilistic) terminating exploration. As a torus can be seen as a 2-dimensional ring, our result shows, perhaps surprisingly, that increasing the number of possible symmetries in the network (due to increasing dimensions) does not come at an extra cost w.r.t. the number of robots that are necessary to solve the problem. | CommonCrawl |
Suppose the volume of a Riemannian metric on the projective plane is $2\pi$ and the length of every non-contractible loop is greater than $\pi - \epsilon$ for some small, positive number $\epsilon$. Is this metric close to the canonical metric?
The question is somewhat vague on purpose. I'm mostly interested in the best constant for a bilipschitz equivalence in terms of $\epsilon$, but I also wonder whether for some sufficiently small $\epsilon$ one can conclude that the curvature is close to 1.
Stability of inequalities is a well-trodden research topic in convex geometry and I was wondering what was known about this in systolic geometry.
There is no Lipschitz or even Gromov-Hausdorff stability - just consider a round metric with long hairy tails of small area.
One can hope for stability with respect to intrinsic flat distance in the sense of Sormani-Wenger or some similar metric. This distance is basically Federer's flat distance between isometric images in $L^\infty$ (just like the Gromov-Hausdorff distance is the Hausdorff distance in $L^\infty$). The stability in this sense probably amounts to uniqueness of the equality case in the class of integral current spaces arising as limits of projective planes.
Is there a lower bound for variance in terms of curvature?
Riemannian metric on a space of "not-quite-smooth" (hyper)surfaces?
Does every smooth manifold admit a metric with bounded geometry and uniform growth? | CommonCrawl |
This page illustrates how to use Python to perform a simple but complete analysis: retrieve data, do some computations based on it, and visualise the results.
Don't worry if you don't understand everything on this page! Its purpose is to give you an example of things you can do and how to go about doing them - you are not expected to be able to reproduce an analysis like this in Python at this stage! We will be looking at the concepts and practices introduced on this page as we go along the course.
As we show the code for different parts of the work, we will be touching on various aspects you may want to keep in mind, either related to Python specifically, or to research programming more generally.
We can use programs for our entire research pipeline. Not just big scientific simulation codes, but also the small scripts which we use to tidy up data and produce plots. This should be code, so that the whole research pipeline is recorded for reproducibility. Data manipulation in spreadsheets is much harder to share or check.
You can see another similar demonstration on the software carpentry site. We'll try to give links to other sources of Python training along the way. Part of our approach is that we assume you know how to use the internet! If you find something confusing out there, please bring it along to the next session. In this course, we'll always try to draw your attention to other sources of information about what we're learning. Paying attention to as many of these as you need to, is just as important as these core notes.
Research programming is all about using libraries: tools other people have provided that do many cool things. By combining them we can feel really powerful while doing minimal work ourselves. The python syntax to import someone else's library is "import".
import geopy # A python library for investigating geographic information.
Now, if you try to follow along on this example in a Jupyter notebook, you'll probably find that you just got an error message.
You'll need to wait until we've covered installation of additional python libraries later in the course, then come back to this and try again. For now, just follow along and try to get a feel for how programming for data-focused research works.
The results come out as a list inside a list: [Name, [Latitude, Longitude]]. Programs represent data in a variety of different containers like this.
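For instance, a hedged sketch of what a geocoding call might look like with geopy's Nominatim backend (the exact geocoder class and result shape depend on the geopy version you have installed; the wrapper below just mimics the [Name, [Latitude, Longitude]] shape described above):

```python
from geopy.geocoders import Nominatim  # one of several geocoding backends geopy offers

geocoder = Nominatim(user_agent="research-demo")  # Nominatim asks for a user_agent string

def geolocate(place):
    """Return [Name, [Latitude, Longitude]] for a place name, or None if not found."""
    location = geocoder.geocode(place)
    if location is None:
        return None
    return [location.address, [location.latitude, location.longitude]]

print(geolocate("Cambridge"))
```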
Code after a # symbol doesn't get run.
print("This runs") # print "This doesn't"
We can wrap code up in a function, so that we can repeatedly get just the information we want.
The Yandex API allows us to fetch a map of a place, given a longitude and latitude. The URLs look like: https://static-maps.yandex.ru/1.x/?size=400,400&ll=-0.1275,51.51&z=10&l=sat&lang=en_US We'll probably end up working out these URLs quite a bit. So we'll make ourselves another function to build up a URL given our parameters.
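A sketch of what such a function might look like, built directly from the URL pattern above (the parameter names and defaults are my own choices and may differ from the course's version):

```python
def map_url(lat, lon, satellite=True, zoom=10, size=(400, 400)):
    """Build a Yandex static-map URL for the given latitude and longitude."""
    base = "https://static-maps.yandex.ru/1.x/?"
    params = dict(
        size="{},{}".format(size[0], size[1]),
        ll="{},{}".format(lon, lat),   # note the longitude-first order in the URL
        z=zoom,
        lang="en_US",
    )
    if satellite:
        params["l"] = "sat"
    return base + "&".join("{}={}".format(k, v) for k, v in params.items())

print(map_url(51.51, -0.1275))
```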
We can write automated tests so that if we change our code later, we can check the results are still valid.
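For example, a tiny assert-based test of the map_url sketch above (a real project would use a test framework; this is only to illustrate the idea):

```python
def test_map_url():
    url = map_url(51.51, -0.1275)
    assert url.startswith("https://static-maps.yandex.ru/1.x/?")
    assert "ll=-0.1275,51.51" in url   # longitude comes first in the ll parameter
    assert "size=400,400" in url
    assert "l=sat" in url

test_map_url()  # raises AssertionError if the function ever changes behaviour
```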
I'll need to do this a lot, so I'll wrap up our previous function in another function, to save on typing.
I can use a library that comes with Jupyter notebook to display the image. Being able to work with variables which contain images, or documents, or any other weird kind of data, just as easily as we can with numbers or letters, is one of the really powerful things about modern programming languages like Python.
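A hedged sketch of what that can look like, using the example URL from above and assuming the requests library is available for fetching the image bytes (IPython.display ships with Jupyter):

```python
import requests
from IPython.display import Image

url = "https://static-maps.yandex.ru/1.x/?size=400,400&ll=-0.1275,51.51&z=10&l=sat&lang=en_US"
response = requests.get(url)      # fetch the map image from the web
map_png = response.content       # the raw bytes are just another kind of variable
Image(data=map_png)              # the notebook renders the image inline
```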
Now we get to our research project: we want to find out how urbanised the world is, based on satellite imagery, along a line between two cities. We expect the satellite image to be greener in the countryside.
We'll use lots more libraries to count how much green there is in an image.
This code has assumed we have our pixel data for the image as a $400 \times 400 \times 3$ 3-d matrix, with each of the three layers being red, green, and blue pixels.
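A minimal numpy sketch of one way to count "green" pixels under that assumption (the 1.1 dominance threshold is an arbitrary illustrative choice, not necessarily the course's exact definition):

```python
import numpy as np

def count_green(pixels, threshold=1.1):
    """Count pixels whose green channel dominates both red and blue.

    `pixels` is assumed to be a (400, 400, 3) array with layers in R, G, B order.
    """
    red, green, blue = pixels[:, :, 0], pixels[:, :, 1], pixels[:, :, 2]
    greener = (green > threshold * red) & (green > threshold * blue)
    return int(np.sum(greener))

# Example with random data standing in for a decoded satellite image:
fake_image = np.random.randint(0, 256, size=(400, 400, 3))
print(count_green(fake_image))
```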
WARNING:root:Lossy conversion from int64 to uint8. Range [0, 1]. Convert image to uint8 prior to saving to suppress this warning.
So now we can count the green from London to Birmingham!
From a research perspective, of course, this code needs a lot of work. But I hope the power of using programming is clear.
And that's it! We've covered, very very quickly, the majority of the python language, and much of the theory of software engineering.
Now we'll go back, carefully, through all the concepts we touched on, and learn how to use them properly ourselves. | CommonCrawl |
Capillarity functionals are parameter invariant functionals defined on classes of two-dimensional parametric surfaces in $\mathbb R^3$ as the sum of the area integral and a non homogeneous term of suitable form. Here we consider the case of a class of non homogenous terms vanishing at infinity for which the corresponding capillarity functional has no volume-constrained $\mathbb S^2$-type minimal surface. Using variational techniques, we prove existence of extremals characterized as saddle-type critical points. | CommonCrawl |
Abstract: Quantum electrodynamics and electroweak corrections are important ingredients for many theoretical predictions at the LHC. This paper documents APFEL, a new PDF evolution package that allows for the first time to perform DGLAP evolution up to NNLO in QCD and to LO in QED, in the variable-flavor-number scheme and with either pole or MSbar heavy quark masses. APFEL consistently accounts for the QED corrections to the evolution of quark and gluon PDFs and for the contribution from the photon PDF in the proton. The coupled QCD+QED equations are solved in x-space by means of higher order interpolation, followed by Runge-Kutta solution of the resulting discretized evolution equations. APFEL is based on an innovative and flexible methodology for the sequential solution of the QCD and QED evolution equations and their combination. In addition to PDF evolution, APFEL provides a module that computes Deep-Inelastic Scattering structure functions in the FONLL general-mass variable-flavor-number scheme up to O($\alpha_s^2$). All the functionalities of APFEL can be accessed via a Graphical User Interface, supplemented with a variety of plotting tools for PDFs, parton luminosities and structure functions. Written in Fortran 77, APFEL can also be used via the C/C++ and Python interfaces, and is publicly available from the HepForge repository. | CommonCrawl |
Paper summary (fartash): Table 4, 5, with only $.5\%$ of the pixels, you can get to $90\%$ misclassification, and it is a blackbox attack.
#### LocSearchAdv Algorithm
For $R$ rounds, at each round find $t$ top pixels that if you were to perturb them without bounds they could affect the classification the most. Then perturb each of the $t$ pixels such that they stay within the bounds (the magnitude of perturbation is a fixed value $r$). The top $t$ pixels are chosen from a subset of $P$ which is around $10\%$ of pixels; at the end of each round $P$ is updated to be the neighborhood of size $d\times d$ around the last $t$ top pixels.
Abstract: Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world. In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (black-box) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation to a randomly selected single pixel or a small set of them. We then improve the effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally extend to a stronger notion of misclassification. Our extensive experimental results illustrate that even these elementary attacks can reveal a deep neural network's vulnerabilities. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test for designing robust networks.
Definition (k-misclassification). A neural network k-misclassifies an image if the true label is not among the k likeliest labels.
To this end, they propose a local search algorithm which, in each round, randomly perturbs individual pixels in a local search area around the last perturbation. If a perturbed image satisfies the k-misclassification condition, it is returned as an adversarial perturbation. While the approach is very simple, it is applicable to black-box models where gradients and/or internal representations are not accessible but only the final score/probability is available. Still, the approach seems to be quite inefficient, taking up to one or more seconds to generate an adversarial example. Unfortunately, the authors do not discuss qualitative results and do not give examples of multiple adversarial examples (except for the four in Figure 1).
Figure 1: Examples of adversarial attacks. Top: original image, bottom: perturbed image.
Table 4, 5, with only $.5\%$ of the pixels, you can get to $90\%$ misclassification, and it is a blackbox attack.
For $R$ rounds, at each round find $t$ top pixels that if you were to perturb them without bounds they could affect the classification the most. Then perturb each of the $t$ pixels such that they stay within the bounds (the magnitude of perturbation is a fixed value $r$). The top $t$ pixels are chosen from a subset of $P$ which is around $10\%$ of pixels; at the end of each round $P$ is updated to be the neighborhood of size $d\times d$ around the last $t$ top pixels. | CommonCrawl |
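A rough, hedged numpy sketch of that loop, treating the model as a black-box predict_proba(image) oracle that returns class probabilities; the function and parameter names are mine, and the scoring of candidate pixels is simplified relative to the paper:

```python
import numpy as np

def loc_search_adv(image, true_label, predict_proba,
                   rounds=10, t=5, r=1.0, d=2, init_frac=0.1, k=1):
    """Greedy local-search black-box attack, roughly following the description above."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    # Start from a random ~10% subset P of pixel coordinates.
    all_coords = [(i, j) for i in range(h) for j in range(w)]
    P = [all_coords[i] for i in rng.choice(len(all_coords),
                                           size=int(init_frac * h * w), replace=False)]
    adv = image.copy()
    for _ in range(rounds):
        # Score each candidate pixel by how much an unconstrained perturbation
        # lowers the probability of the true class.
        scores = []
        for (i, j) in P:
            trial = adv.copy()
            trial[i, j] = 1.0  # crude "unbounded" probe; the paper uses a signed extreme value
            scores.append(predict_proba(trial)[true_label])
        top = [P[idx] for idx in np.argsort(scores)[:t]]
        # Perturb the t best pixels by a bounded amount r (clipped to the valid range).
        for (i, j) in top:
            adv[i, j] = np.clip(adv[i, j] + r, 0.0, 1.0)
        probs = predict_proba(adv)
        if true_label not in np.argsort(probs)[-k:]:
            return adv  # k-misclassified: the attack succeeded
        # The next round searches the d x d neighbourhood around the last top pixels.
        P = [(i + di, j + dj) for (i, j) in top
             for di in range(-d, d + 1) for dj in range(-d, d + 1)
             if 0 <= i + di < h and 0 <= j + dj < w]
    return adv
```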
Abstract : Consider a regular triangulation of the convex-hull $P$ of a set $\mathcal A$ of $n$ points in $\mathbb R^d$, and a real matrix $C$ of size $d \times n$. A version of Viro's method allows to construct from these data an unmixed polynomial system with support $\mathcal A$ and coefficient matrix $C$ whose number of positive solutions is bounded from below by the number of $d$-simplices which are positively decorated by $C$. We show that all the $d$-simplices of a triangulation can be positively decorated if and only if the triangulation is balanced, which in turn is equivalent to the fact that its dual graph is bipartite. This allows us to identify, among classical families, monomial supports which admit maximally positive systems, i.e. systems all toric complex solutions of which are real and positive. These families give some evidence in favor of a conjecture due to Bihan. We also use this technique in order to construct fewnomial systems with many positive solutions. This is done by considering a simplicial complex with bipartite dual graph included in a regular triangulation of the cyclic polytope. | CommonCrawl |
Abstract: We study the predictions of holographic QCD for various observable four-point quark flavour current-current correlators. The dual 5-dimensional bulk theory we consider is a $SU(3)_L \times SU(3)_R$ Yang Mills theory in a slice of $AdS_5$ spacetime with boundaries. Particular UV and IR boundary conditions encode the spontaneous breaking of the dual 4D global chiral symmetry down to the $SU(3)_V$ subgroup. We explain in detail how to calculate the 4D four-point quark flavour current-current correlators using the 5D holographic theory, including interactions. We use these results to investigate predictions of holographic QCD for the $\Delta I = 1/2$ rule for kaon decays and the $B_K$ parameter. The results agree well in comparison with experimental data, with an accuracy of 25% or better. The holographic theory automatically includes the contributions of the meson resonances to the four-point correlators. The correlators agree well in the low-momentum and high-momentum limit, in comparison with chiral perturbation theory and perturbative QCD results, respectively. | CommonCrawl |
Abstract: Given a 0-1 integer programming problem, several authors have introduced sequential relaxation techniques --- based on linear and/or semidefinite programming --- that generate the convex hull of integer points in at most $n$ steps. In this paper, we introduce a sequential relaxation technique, which is based on $p$-order cone programming ($1 \le p \le \infty$). We prove that our technique generates the convex hull of 0-1 solutions asymptotically. In addition, we show that our method generalizes and subsumes several existing methods. For example, when $p = \infty$, our method corresponds to the well-known procedure of Lov\'asz and Schrijver based on linear programming (so that finite convergence is obtained by our method in special cases). Although the $p$-order cone programs in general sacrifice some strength compared to the analogous linear and semidefinite programs, we show that for $p = 2$ they enjoy a better theoretical iteration complexity. Computational considerations of our technique are also discussed. | CommonCrawl |
Let $\mathscr L (\phi(\mathbf x), \partial \phi(\mathbf x))$ denote the Lagrangian density of field $\phi(\mathbf x)$. Then the actual value of the field $\phi(\mathbf x)$ can be computed from the principle of least action. In the case of the motion of particles, I know that the principle of least action comes from Newton's second law. But why does the principle of least action also hold for classical fields like the EM field and the gravitational field? Is there any deep reason why it holds for both the EM and gravitational fields?
Is the principle of least action fully equivalent to the Euler-Lagrange equations?
Can we derive most fundamental laws from the Action Principle?
In the Principle of Least Action, how does a particle know where it will be in the future?
Can we justify the Principle of Least Action through considerations of symmetry?
Why is Fermat's principle not formulated as principle of least action?
Why are action principles so powerful and widely applicable? | CommonCrawl |
The dynamics of a quantum system with a large number $N$ of identical bosonic particles interacting by means of weak two-body potentials can be simplified by using mean-field equations in which all interactions to any one body have been replaced with an average or effective interaction in the mean-field limit $N \rightarrow \infty$. In order to show these mean-field equations are accurate, one needs to show convergence of the quantum $N$-body dynamics to these equations in the mean-field limit. Previous results on convergence in the mean field limit have been derived for certain initial conditions in the case of one species of bosonic particles, but no results have yet been shown for multi-species. In this thesis, we look at a quantum bosonic system with two species of particles. For this system, we derive a formula for the rate of convergence in the mean-field limit in the case of an initial coherent state, and we also show convergence in the mean-field limit for the case of an initial factorized state. The analysis for two species can then be extended to multiple species. | CommonCrawl |
What would be the way to generate them on a personalized dataset?
I think (not sure) it can be done using the V1 cmap technique, getting the dimensions from the cmap region, or is it done using Image Segmentation?
In short, the following 4 parameters (x1, y1, x2, y2) are what we are training for and trying to predict for the validation set!
Somebody who knows better, please correct me here if I'm wrong.
Yes, but we only use them for the training set. And, we compare/validate them for the validation set. Hope that clarifies.
Hi @ecdrib. I have the same problem. Will you share your understanding?
I am also very interested to try pytorch/fastai on windows.
Then what's the loss function or the metric involved then?
Is it Area of the two boxes then?
I'll have to rewatch the lectures later today / look at the notebook to give an exact answer.
But, I think you might be on the right track there. (probably more like some loss function on each feature/class) Perfect time to open the notebook and read some code.
Can someone tell me how to use open_image()? I am getting this type error from pathlib.py (TypeError: expected str, bytes or os.PathLike object, not dict). I've tried typecasting but it is not working out.
what are you passing? Looks like a dictionary.
I tried with this, open_image(IMG_PATH/im0_d[FILE_NAME]) . I tried typecasting as well, it's not working .
Just paste IMG_PATH/im0_d[FILE_NAME] in a cell and see what it is. Then paste type(IMG_PATH/im0_d[FILE_NAME]) in a cell and see what that is. I haven't run the code yet so don't know off the top of my head.
For PyCharm and Mac users - a list of the shortcuts Jeremy provided for Visual Studio Code.
Zen mode (Control + Command + F) and same to get out too.
Find them all with the (Shift + Command+ A) palette option for reference.
Probably not the best list (would love suggestions) and perhaps should create a new thread for it too. Just wanted to leave myself a note. Didn't use symbols/shorthand for keys because I had trouble with them as a new Mac user once when I didn't use shortcuts.
I tried with IMG_PATH/im0_d[FILE_NAME] , it gives the same error.
This is awesome, thanks! I've been using PyCharm and thought I should find the equivalents (especially "Go back"). Thanks for taking the time to write this up.
Markdown cells accept LaTeX math inside dollar symbols: typing \alpha between dollar signs renders as $\alpha$ (now it works in Discourse too).
There is an awesome interactive online service for converting drawings into LaTeX math symbols.
what is the output of im0_d[FILE_NAME]? | CommonCrawl |
While the traditional form of continued fractions is well-documented, a new form, designed to approximate real numbers between 1 and 2, is less well-studied. This report first describes prior research into the new form, describing the form and giving an algorithm for generating approximations for a given real number. It then describes a rational function giving the rational number represented by the continued fraction made from a given tuple of integers and shows that no real number has a unique continued fraction. Next, it describes the set of real numbers that are hardest to approximate; that is, given a positive integer $n$, it describes the real number $\alpha$ that maximizes the value $|\alpha - T_n|$, where $T_n$ is the closest continued fraction to $\alpha$ generated from a tuple of length $n$. Finally, it lays out plans for future work.
Wiyninger, Donald Lee III, "Continued Fractions: A New Form" (2011). HMC Senior Theses. 14. | CommonCrawl |
Graduated in physics in 2006 in Craiova, Romania. The Craiova school of theoretical physics has a strong reputation in the theory of BRST symmetry following in the footsteps of Marc Henneaux's team from Brussels.
6 Is the Maxwell Stress Tensor Coordinate Dependent?
5 How do I construct the $SU(2)$ representation of the Lorentz Group using $SU(2)\times SU(2)\sim SO(3,1)$ ? | CommonCrawl |
Marek and his schoolmates have just finished their studies at the university. They wanted to celebrate it with a game of paintball. After an hour of playing a very strange thing happened – everyone had exactly one bullet left. Marek, being a very curious person, wanted to know whether it's possible that everyone will be hit exactly once provided nobody moves.
You are given a description of the situation during a paintball game when every player has only one bullet. The description of the game consists of pairs of players who can see each other. If a player can see another player, he can fire at him. Your task is to find a target for each player such that everyone will be hit.
The first line of input contains two space separated integers $N$ and $M$, satisfying $2\leq N\leq 1\, 000$ and $0\leq M\leq 5\, 000$, where $N$ is the number of players. Players are numbered $1, 2, \ldots , N$. $M$ lines follow, each line containing two space separated integers $A$ and $B$ ($1\leq A < B\leq N$), denoting that players $A$ and $B$ can see each other. Each pair of players appears at most once in the input.
If there is no assignment of targets such that everyone will be hit, output Impossible. Otherwise output $N$ lines. The $i$-th line should contain the number of the target of the $i$-th player. If there is more than one solution, output any one. | CommonCrawl |
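A hedged Python sketch of one standard approach (my own, not the problem setter's): assigning targets so that everyone is hit exactly once is a perfect matching in the bipartite shooter-target graph, which Kuhn's augmenting-path algorithm finds comfortably within these limits.

```python
import sys

def solve():
    data = sys.stdin.read().split()
    n, m = int(data[0]), int(data[1])
    can_hit = [[] for _ in range(n + 1)]            # can_hit[shooter] = visible targets
    for i in range(m):
        a, b = int(data[2 + 2 * i]), int(data[3 + 2 * i])
        can_hit[a].append(b)                         # visibility is mutual, so both directions
        can_hit[b].append(a)

    target_of = [0] * (n + 1)                        # shooter -> matched target
    shooter_of = [0] * (n + 1)                       # target  -> matched shooter

    def augment(shooter, seen):
        for tgt in can_hit[shooter]:
            if tgt in seen:
                continue
            seen.add(tgt)
            # Take the target if it is free, or if its current shooter can be rerouted.
            if shooter_of[tgt] == 0 or augment(shooter_of[tgt], seen):
                shooter_of[tgt] = shooter
                target_of[shooter] = tgt
                return True
        return False

    sys.setrecursionlimit(10000)
    for s in range(1, n + 1):
        if not augment(s, set()):
            print("Impossible")
            return
    print("\n".join(str(target_of[s]) for s in range(1, n + 1)))

solve()
```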
The product rule for radicals tells us that $\sqrt[n]{a}\times\sqrt[n]{b}=\sqrt[n]{ab}$ (when $\sqrt[n]{a}$ and $\sqrt[n]{b}$ are real numbers and $n$ is a natural number). That is, the product of two nth roots is the nth root of the product. Therefore, $\sqrt{14}\times\sqrt{3pqr}=\sqrt{14\times 3pqr}=\sqrt{42pqr}$. | CommonCrawl |
After the compare-exchange operation, we know that $A[i] \le A[j]$.
The 0-1 sorting lemma provides a powerful way to prove that an oblivious compare-exchange algorithm produces a sorted result. It states that if an oblivious compare-exchange algorithm correctly sorts all input sequences consisting of only 0s and 1s, then it correctly sorts all inputs containing arbitrary values.
Argue that $A[q] > A[p]$, so that $B[p] = 0$ and $B[q] = 1$.
To complete the proof of the 0-1 sorting lemma, prove that algorithm X fails to sort array $B$ correctly.
When columnsort completes, the array is sorted in column-major order: reading down the columns, from left to right, the elements monotonically increase.
Transpose the array, but reshape it back to $r$ rows and $s$ columns. In other words, turn the leftmost column into the top $r/s$ rows, in order; turn the next column into the next $r/s$ rows, in order; and so on.
Perform the inverse of the permutation performed in step 2.
Shift the top half of each column into the bottom half of the same column, and shift the bottom half of each column into the top half of the next column to the right. Leave the top half of the leftmost column empty. Shift the bottom half of the last column into the top half of a new rightmost column, and leave the bottom half of this new column empty.
Perform the inverse of the permutation performed in step 6.
Argue that we can treat columnsort as an oblivious compare-exchange algorithm, even if we do not know what sorting method the odd-numbered steps use.
Although it might seem hard to believe that columnsort actually sorts, you will use the 0-1 sorting lemma to prove that it does. The 0-1 sorting lemma applies because we can treat columnsort as an oblivious compare-exchange algorithm. A couple of definitions will help you apply the 0-1 sorting lemma. We say that an area of an array is clean if we know that it contains either all 0s or all 1s. Otherwise, the area might contain mixed 0s and 1s, and it is dirty. From here on, assume that the input array contains only 0s and 1s, and that we can treat it as an array with $r$ rows and $s$ columns.
Prove that after step 4, the array, read in column-major order, starts with a clean area of 0s, ends with a clean area of 1s, and has a dirty area of at most $s^2$ elements in the middle.
Prove that steps 5-8 produce a fully sorted 0-1 output. Conclude that columnsort correctly sorts all inputs containing arbitrary values.
Now suppose that $s$ does not divide $r$. Prove that after steps 1-3, the array consists of some clean rows of 0s at the top, some clean rows of 1s at the bottom, and at most $2s - 1$ dirty rows between them. How large must $r$ be, compared with $s$, for columnsort to correctly sort when $s$ does not divide $r$?
Suggest a simple change to step 1 that allow us to maintain the requirement that $r \ge 2s^2$ when $s$ does not divide $r$, and prove that with your change, columnsort correctly sorts.
We know that $A[q] > A[p]$ by definition ($A[q]$ is misplaced, but it cannot be smaller than $A[p]$, since $A[p]$ is the smallest misplaced element). From this it follows that $B[p] = 0$ and $B[q] = 1$.
To prove the rest, we need to establish that a monotonic mapping and a compare-exchange operation commute, that is, they can be applied in either order. This makes sense, since if the mapping is applied first, the order of the elements would not change (because the mapping is monotonic) and the compare-exchange would have the same result.
An oblivious compare-exchange algorithm can be regarded as a sequence of compare-exchange operations. Thus, it doesn't matter if the monotonic mapping is applied before the first or after the last compare-exchange operation.
Applying that to $A$ and $B$, we conclude that $B[q] = 1$ and $B[p] = 0$. We know that $q < p$, since otherwise there would have been a smaller misplaced element. From this we gather that $B[q] > B[p]$ and $q < p$, which means that the array is unsorted.
There is a more formal proof in the first link.
Since the even-numbered steps perform things blindly, we can suspect that the algorithm has some elements of obliviousness in it.
If we perform the odd numbered steps with an oblivious compare-exchange algorithm, then columnsort is obviously oblivious and we can apply the 0-1 sorting lemma. Since we can treat those steps as "black boxes", we can replace the sorting algorithm with any other algorithm that produces the same result (that is, any sorting algorithm) and the resulting columnsort would still sort.
After the first step, each column becomes a sequence of 0s followed by a sequence of 1s. In this sense, there is at most one 0 → 1 transition in each column. Since $s$ divides $r$, each column will map to $r/s$ rows. At most one of those rows will contain the 0 → 1 transition; the others will contain only 0s or 1s. That is, each column will map to at most one dirty row and the rest will be clean.
After the transposition, and second sorting, the clean rows of 0s will move to the top and the clean rows of 1s will move to the bottom. We're left with at most $s$ dirty rows in the middle.
After the reversal of the permutation, the at most $s$ dirty rows map to a dirty sequence of at most $s^2$ elements in column-major order. All the other elements are clean.
The dirty sequence is at most half a column long now (since $s^2 \le r/2$). It either fits in one column or crosses over into the next one. All columns left of it contain only 0s and all columns right of it contain only 1s.
If the dirty sequence is contained in a single column, step 5 sorts the array in column-major order and the subsequent steps will not interfere with it.
If not, step 6 rearranges the elements so that the dirty subsequence fits within a single column. Sorting all columns in step 7 cleans it and we have a sorted array.
Note that sorting the half-columns is unnecessary - step 5 already sorted them.
If $s$ does not divide $r$, a row can contain not only a 0 → 1 transition, but also a 1 → 0 transition. There would be at most $s - 1$ of those, resulting in a dirty region of at most $2s - 1$ rows.
We can make $r$ to be at least $2(2s - 1)^2$. As for the change to step one, we can either pad the array with $+ \infty$ until $s$ divides $r$, or we can chop off a small part of the array and sort it separately. The latter will be more efficient, since it does not require moving the array.
Finally, all of that turns to be unnecessary - columnsort works without the divisibility restriction. Details can be found in the paper.
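A minimal numpy sketch of the eight steps (my own illustration, assuming $s$ divides $r$ and $r \ge 2s^2$; the "empty" cells in the half-column shift are padded with $\pm\infty$):

```python
import numpy as np

def columnsort(values, r, s):
    """Sort `values` (length r*s) via columnsort; returns the array in column-major order."""
    A = np.asarray(values, dtype=float).reshape(r, s, order="F")  # lay values out by column
    half = r // 2

    A.sort(axis=0)                                     # step 1: sort each column
    A = A.flatten(order="F").reshape(r, s)             # step 2: "transpose", reshape to r x s
    A.sort(axis=0)                                     # step 3: sort each column
    A = A.flatten(order="C").reshape(r, s, order="F")  # step 4: inverse of step 2
    A.sort(axis=0)                                     # step 5: sort each column

    C = np.full((r, s + 1), np.inf)                    # step 6: shift by half a column
    C[half:, :s] = A[:half, :]                         #   top halves -> bottom of same column
    C[:half, 1:] = A[half:, :]                         #   bottom halves -> top of next column
    C[:half, 0] = -np.inf                              #   "empty" cells padded with -inf / +inf
    C.sort(axis=0)                                     # step 7: sort each column
    A[:half, :] = C[half:, :s]                         # step 8: inverse of the shift
    A[half:, :] = C[:half, 1:]

    return A.flatten(order="F")                        # sorted when read in column-major order

# Quick self-check on a random permutation.
r, s = 32, 4                                           # r >= 2*s*s and s divides r
data = np.random.permutation(r * s)
assert np.array_equal(columnsort(data, r, s), np.sort(data))
```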
Surprising as it is, columnsort smokes the stdlib implementation of quicksort. I thought the overhead was too much, but it appears that it is not. Of course, the crossover point will vary.
* end, since that is not necessary for the correctness of the algorithm.
* A function that compares numbers, to be passed to the stdlib sort.
* Verified the dimensions of the passed array.
* A utility function to call with the array and a column.
* I never explored using locking mechanisms instead of joining the threads. | CommonCrawl |
Consider a 10 D spacetime which can be written as $E^1 \times X \times E^3 = Z\times E^3$ where $E^1$ is time, $X$ is the 6D (Calabi-Yau) space of extra dimensions, and $E^3$ are the 3 large spacial dimensions.
In his new book, Penrose claims that due to intrinsic perturbations of $X$ and extrinsic perturbations of how $X$ is embedded in $Z$ (which can lead away from the family of Calabi-Yau spaces!), the spacetime $Z$ would, in accordance with the corresponding Einstein vacuum equations $^7G = 0$, evolve into a singular spacetime $Z^*$ by his and Hawking's singularity theorems, provided $^7G$ satisfies the strong energy condition.
Can somebody roughly outline the technical details of this argument and explain why this is not a serious issue for string theory? | CommonCrawl |
We characterize all complex-valued (Lebesgue) integrable functions $f$ on $[0,1]^m$ such that $f$ vanishes when integrated over the product of $m$ measurable sets which partition $[0,1]$ and have prescribed Lebesgue measures $\alpha_1,\ldots,\alpha_m$. We characterize the Walsh expansion of such functions $f$ via a first variation argument. Janson and Sos asked this analytic question motivated by questions regarding quasi-randomness of graph sequences in the dense model. We use this characterization to answer a few conjectures from [S. Janson and V. Sos: More on quasi-random graphs, subgraph counts and graph limits]. There it was conjectured that certain density conditions of paths of length 3 define quasi-randomness. We confirm this conjecture by showing more generally that similar density conditions for any graph with twin vertices define quasi-randomness. The quasi-randomness results use the language of graph limits. No background on graph limit theory will be assumed, and we will spend a fraction of the talk introducing the graph limits approach in the study of quasi-randomness of graph sequences. The talk is based on joint work with Hamed Hatami and Yaqiao Li. | CommonCrawl |
Week 1 Power functions - when is $x^n>x^m$ and where do they intersect. Introduction to even and odd functions.
Cell size and nutrient balance: cell volume and area and the role of power functions in describing cell size limitations.
Week 2 Approximating a rational function near the origin.
Approximating a rational function for large x. Introduction to Hill functions.
Sketching Hill functions by hand and by Desmos (see Hill functions demo). Comparing Hill functions with different parameter values.
See video above for an introduction to even and odd functions and also Sec 1.4 and Appendix C.D of the course notes.
Average rate of change and secant lines. Instantaneous rate of change.
Continuity - definition and examples of three types of discontinuities.
Examples of computing the derivative of a function from the definition of the derivative.
Week 3 Derivatives: analytic, and geometric (zoom in on a point). Sketching $f'(x)$ given $f(x)$ (intro).
Using a spreadsheet to graph a function and its (approximate) derivative.
Rules of differentiation: Product and quotient rules.
Week 4 Rules of differentiation: Antiderivatives of polynomials.
Equation of a Tangent line.
Generic Tangent line and intro to Newton's method.
Tangent lines and linear approximation.
Introduction to Newton's method - how it works and the formula for successive estimates.
Week 5 Introduction to Newton's method - how to carry it out with a spreadsheet.
Introduction to Newton's method - how to choose a good $x_0$.
Increasing, decreasing and critical points.
Week 7 Least Squares - finding the best fitting line $y=ax$ through a set of data points. See also the Fitting data supplement to the course notes.
Chain Rule: an applications to optimization problems involving plovers and crocodiles.
Exponential functions: derivative of $a^x$.
Week 9 Inverse functions and logarithm, applications of logs.
Differential equations for growth and decay.
A differential equation for human population growth.
A simple differential equation problem.
Week 10 Geometry of change: (I) Slope fields.
Geometry of change: (II) State space. This one is more relevant to the pre-lecture questions but you should watch both.
The Logistic equation I (state space and slope field).
The Logistic equation II (state space and slope field).
Week 11 . Solving differential equations of the type $dy/dt=a-by$.
Solving differential equations approximately using Euler's Method - theory.
Solving differential equations approximately using Euler's Method - spreadsheet.
Introduction to Trigonometric Functions and review of trigonometric identities.
(LEK),EC Trigonometric Functions and cyclic processes, phase, amplitude, etc. (fitting a sin or cos to a cyclic process), Inverse trig functions.
Week 13 Derivatives of trig functions, related rates examples.
The Escape Response and trigonometric related rates. | CommonCrawl |
We derive a posteriori error estimates in the $L_\infty((0,T];L_\infty(\Omega))$ norm for approximations of solutions to linear parabolic equations. Using the elliptic reconstruction technique introduced by Makridakis and Nochetto and heat kernel estimates for linear parabolic problems, we first prove a posteriori bounds in the maximum norm for semidiscrete finite element approximations. We then establish a posteriori bounds for a fully discrete backward Euler finite element approximation. The elliptic reconstruction technique greatly simplifies our development by allowing the straightforward combination of heat kernel estimates with existing elliptic maximum norm error estimators. | CommonCrawl |
For a graph $G$, let $\alpha (G)$ be the cardinality of a maximum independent set, let $\mu (G)$ be the cardinality of a maximum matching and let $\xi (G)$ be the number of vertices belonging to all maximum independent sets. Boros, Golumbic and Levit showed that in connected graphs where the independence number $\alpha (G)$ is greater than the matching number $\mu (G)$, $\xi (G) \geq 1 + \alpha(G) - \mu (G)$. For any graph $G$, we will show there is a distinguished induced subgraph $G[X]$ such that, under weaker assumptions, $\xi (G) \geq 1 + \alpha (G[X]) - \mu (G[X])$. Furthermore $1 + \alpha (G[X]) - \mu (G[X]) \geq 1 + \alpha (G) - \mu (G)$ and the difference between these bounds can be arbitrarily large. Lastly, some results toward a characterization of graphs with equal independence and matching numbers are given. | CommonCrawl |
Maybe you've read every single article on Medium about avoiding procrastination or you're worried that those cute dog gifs are using up too much CPU power. Forsaking both, I've written a brief guide about how to implement Gibbs sampling for Bayesian linear regression in Python. However, we might also consider approximate decoding/inference/sampling methods where the conditional UGM is more complicated, but still simple enough that we can do exact calculations. We will refer to methods that use this simple but powerful idea as block approximate decoding/inference/sampling methods. If you find any mistakes or if anything is unclear, please get in touch: kieranc [at] Here we are interested in Gibbs sampling for normal linear regression with one independent variable. Consider the problem of sampling from $p(\mathbf, \mathbf)$ using the Metropolis or Metropolis-Hastings (MH) algorithm. I can either propose samples for $p(\mathbf, \mathbf)$ directly, or I could do a blocked version of that and alternately propose samples for $p(\mathbf \mid \mathbf)$ and $p(\mathbf \mid \mathbf)$, which I believe is also called .
We wish to find the posterior distributions of the coefficients \(\beta_0\) (the intercept), \(\beta_1\) (the gradient) and of the precision \(\tau\), which is the reciprocal of the variance.
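A condensed, hedged sketch of such a sampler, with conjugate normal priors on $\beta_0$, $\beta_1$ and a gamma prior on $\tau$; the hyperparameters and variable names are my own choices, and the updates follow the standard conjugate conditional formulas rather than any particular source code:

```python
import numpy as np

def gibbs_linear_regression(x, y, iters=5000,
                            mu0=0.0, tau0=1.0,    # prior on beta0: N(mu0, 1/tau0)
                            mu1=0.0, tau1=1.0,    # prior on beta1: N(mu1, 1/tau1)
                            alpha=2.0, beta=1.0): # prior on tau:   Gamma(alpha, rate=beta)
    n = len(y)
    b0, b1, tau = 0.0, 0.0, 1.0                   # arbitrary starting values
    trace = np.empty((iters, 3))
    rng = np.random.default_rng(42)
    for it in range(iters):
        # beta0 | beta1, tau, data  ~  Normal
        prec0 = tau0 + tau * n
        mean0 = (tau0 * mu0 + tau * np.sum(y - b1 * x)) / prec0
        b0 = rng.normal(mean0, 1.0 / np.sqrt(prec0))
        # beta1 | beta0, tau, data  ~  Normal
        prec1 = tau1 + tau * np.sum(x ** 2)
        mean1 = (tau1 * mu1 + tau * np.sum(x * (y - b0))) / prec1
        b1 = rng.normal(mean1, 1.0 / np.sqrt(prec1))
        # tau | beta0, beta1, data  ~  Gamma(alpha + n/2, rate = beta + SSR/2)
        resid = y - b0 - b1 * x
        tau = rng.gamma(alpha + n / 2.0, 1.0 / (beta + 0.5 * np.sum(resid ** 2)))
        trace[it] = (b0, b1, tau)
    return trace

# Synthetic data: y = 1 + 2x + noise, so the posteriors should centre near (1, 2).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=200)
print(gibbs_linear_regression(x, y)[1000:].mean(axis=0))  # posterior means after burn-in
```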
The approximate inference methods from the previous demo correspond to the special case where each variable forms its own block.
By conditioning on all variables outside the block, it is straightforward to do exact calculations within the block.
A key aspect of the approximate decoding/inference/sampling methods that we have discussed up to this point is that they are based on performing local calculations.
In most cases, this involved updating the state of a single node, conditioning on the values of its neighbors in the graph. | CommonCrawl |
Let $G(q)$ be a Chevalley group over a finite field $\mathbb F_q$. By Lusztig's and Shoji's work, the problem of computing the values of the unipotent characters of $G(q)$ is solved, in principle, by the theory of character sheaves; one issue in this solution is the determination of certain scalars relating two types of class functions on $G(q)$. We show that this issue can be reduced to the case where $q$ is a prime, which opens the way to use computer algebra methods. Here, and in a sequel to this article, we use this approach to solve a number of cases in groups of exceptional type which seemed hitherto out of reach. | CommonCrawl |
Does this imply that $ \int_E f_j g_j \, dx \to \int_E fg \, dx$?
Under what conditions on the space $X$ is every continuous operator completely continuous?
Strongly convergent subsequence $+$ point-wise convergence $\Rightarrow$ strong convergence?
Weak convergence implies strong convergence in $L^1$ for Fourier series?
Double Limit of operators converges weakly, does single limit converge?
Why does this sequence converge weakly but not strongly?
What distinguishes weak and strong convergence of bounded linear operator in Banach spaces?
If a subsequence of a weakly convergent sequence $x_n$ is strongly convergent to $x$, is the sequence $x_n$ strongly convergent to $x$?
If $f(X_n,y)\to g(y)$ almost surely for each fixed $y$, $\Pr[f(X_n,Y)\leq\alpha]\to\Pr[g(Y)\leq\alpha]$?
Family of projections in a Banach space.
Show that the following is a bounded linear operator on $L^2(R_+)$. Calculate the adjoint operator. | CommonCrawl |
Robot Turtles is one of Theta's favorite games. In this game, kindergarteners learn how to "code" by creating programs that move a turtle from a starting field to a diamond. Robot Turtles is reportedly the most successful game funded by the Kickstarter incubator.
Players must develop a program consisting of "instructions" that brings a turtle from a starting location to a goal (a diamond). An adult will then "execute" this program by moving the turtle based on the given instructions.
Robot Turtles is played on an $8 \times 8$ board. There is one turtle (marked with the letter T), which always starts out at the bottom-left field, facing right. The board contains empty squares (marked as .), castles made out of rock (C), and castles made out of ice (I). The diamond is marked with a D. The turtle may move only onto empty squares and the square on which the diamond is located.
A turtle program contains $4$ kinds of instructions, marked by a single letter.
F The turtle moves one field forward in the direction it is facing. If the turtle faces a castle or the border of the board, a program error occurs.
R The turtle turns $90$ degrees to the right (the turtle will just turn and stay on the same field).
L The turtle turns $90$ degrees to the left (the turtle will just turn and stay on the same field).
X The turtle fires a laser in the direction it is facing. If the square it is facing contains an ice castle, the ice castle will melt and the square will turn into an empty square. Otherwise, a program error occurs. The turtle will not move or change direction. It is a program error to fire the laser at empty squares, rock castles or outside the board.
The input consists of $8$ lines, which represents the board, with each line representing one row. The turtle will always start out at the bottom-left. There will be exactly $1$ diamond. There will be no more than $10$ ice castles.
Output the shortest valid turtle program whose execution (without program error) brings the turtle from the starting location to the diamond! If there are multiple such programs of equal length, you may output any of them!
Output no solution if it is not possible for the turtle to reach the diamond! | CommonCrawl |
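One standard way to attack the problem above (a sketch under the stated rules, not a reference solution) is a breadth-first search over states (row, column, direction, set of remaining ice castles), since every instruction costs exactly one step:

```python
from collections import deque

def solve(board):
    """BFS over (row, col, direction, frozenset of remaining ice castles)."""
    ice = frozenset((r, c) for r in range(8) for c in range(8) if board[r][c] == 'I')
    goal = next((r, c) for r in range(8) for c in range(8) if board[r][c] == 'D')
    dr, dc = [0, 1, 0, -1], [1, 0, -1, 0]   # 0=right, 1=down, 2=left, 3=up
    start = (7, 0, 0, ice)                  # bottom-left, facing right
    prev = {start: None}
    q = deque([start])
    while q:
        r, c, d, cur = q.popleft()
        if (r, c) == goal:
            prog, state = [], (r, c, d, cur)
            while prev[state] is not None:
                state, move = prev[state]
                prog.append(move)
            return ''.join(reversed(prog))
        for move in 'FRLX':
            nr, nc, nd, nice = r, c, d, cur
            if move == 'R':
                nd = (d + 1) % 4
            elif move == 'L':
                nd = (d - 1) % 4
            elif move == 'F':
                nr, nc = r + dr[d], c + dc[d]
                if not (0 <= nr < 8 and 0 <= nc < 8):
                    continue
                if board[nr][nc] == 'C' or (nr, nc) in cur:
                    continue
            else:  # 'X': melt the ice castle directly ahead, if there is one
                tr, tc = r + dr[d], c + dc[d]
                if (tr, tc) not in cur:
                    continue
                nice = cur - {(tr, tc)}
            ns = (nr, nc, nd, nice)
            if ns not in prev:
                prev[ns] = ((r, c, d, cur), move)
                q.append(ns)
    return 'no solution'

board = [input() for _ in range(8)]
print(solve(board))
```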
Consider an elliptic curve $E$ defined over $\mathbb Q$. Assume that the rank of $E(\mathbb Q)$ is $\geq2$. (Assume the Birch-Swinnerton-Dyer conjecture if needed, so that analytic rank $=$ algebraic rank.) How do you construct a point of infinite order on $E(\mathbb Q)$?
Implicit in a paper of Mazur and Swinnerton-Dyer ("Arithmetic of Weil curves", Invent. Math., 25, 1-61 (1974); see especially section 2.4) there is a construction that seems to work a positive proportion of the time, though not always. Here is what the construction would be according to my understanding: take a modular parametrisation $\phi:X_0(N)\to E(\mathbb C)$, consider its points of ramification on the imaginary axis (there is at least one), take the image $\phi(z)$ of one such point $z$; due to standard magic, $X_0(N)$ has an algebraic model that makes $\phi$ into an algebraic map; the trace of $\phi(z)$ is a point of $E(\mathbb Q)$ that might be non-torsion (and sometimes is).
Has any further work been done on this? (In particular, has it been proven that this works infinitely often?) Are there any other constructions for which similar statements have been conjectured or proven?
Is there an analog of the Birch/Swinnerton-Dyer conjecture for abelian varieties in higher dimensions? | CommonCrawl |
The modulus operator behaves differently for special values: if the operands are numbers, regular arithmetic division is performed, and the remainder of that division is returned. If the dividend is Infinity or the divisor is 0, the result is NaN. $$ n \times a = a + a + a + \cdots +a$$ Can we define division similarly using only addition or subtraction?
Perform Mathematical Operations. To perform mathematical operations such as addition, subtraction, multiplication and division of any two numbers in Java programming, you have to ask the user to enter the two numbers and then perform the action accordingly.
How do I write a formula to multiply and divide? I have a basic form and I need to take the dollar amount from one box and divide it by 12. I then need to add 15% in another box. | CommonCrawl |
With the use of variables in mathematical expressions, you can program the computer to do useful calculations for yourself. How about the tip in a restaurant? If you know the cost of a meal $(meal)$, and decide what percentage to leave the server $(per)$, the tip will be $tip=per/100\times meal$. With this, the total cost of the meal will be $total=meal+tip$. See if you can finish the code below, and have a "tip calculator."
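Here is one possible completion, sketched in Python (an assumption on my part; the language of the original exercise is not specified here):

```python
# One possible completion of the tip calculator described above.
meal = float(input("Cost of the meal: "))
per = float(input("Tip percentage: "))

tip = per / 100 * meal   # tip = per/100 x meal
total = meal + tip       # total = meal + tip

print("Tip:", round(tip, 2))
print("Total:", round(total, 2))
```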
Now you try. Try finishing this tip calculator based on the formulas discussed above. | CommonCrawl |
At some stage of our school life, most of us would have sent or received coded messages from our classmates.
General mathematical principle: To investigate a complicated situation apply simplifying transformations, investigate the simplified situation, and try to transfer this information back to the original situation.
I have been doing some work on the $n$'th prime number, but was impeded for some time because of the lack of prime numbers.
This game is a contest between two gladiators in an arena consisting of a 20 $\times$ 20 grid.
the value of the expression can be made equal to $7/10$. How?
J231 A man goes to an auction with \$100 and buys exactly 100 animals. | CommonCrawl |
For $n$ individuals, can a blocking coalition only be formed by at least $n/2$ individuals? For example, if there are 6 individuals, can less than 3 individuals form blocking coalitions?
Coalition formation is not the same as majority voting. Say, in a single good economy, individual 1 has the entire endowment, so $\mathbf e=(1,0,0,0,0,0)$. Then individual 1 forms a blocking coalition to any allocation $\mathbf x=(x_1,\dots,x_6)$ where $x_1<1$.
If US capital income taxation were eliminated, which ordinary income tax brackets would be equally progressive? | CommonCrawl |
Are there known accuracy issues between 2D axisymmetric and 3D solutions?
In my full 3D solutions I am solving for the potential throughout a $100\times 200\times 200$ grid. Inside is a ring electrode set to -5V via a Dirichlet boundary condition, and surrounded on all sides by Dirichlet boundary conditions at 0V.
When I solved for this setup in full 3D vs solving for the same setup in 2D axisymmetric form, I get different results. In the images below, one can note the darker blue color in the very center of each of the two solutions.
For the 2D axisymmetric setup, the left, top, and right all have the same 0V Dirichlet boundaries as the 3D setup except for the Neumann boundary on the bottom where the axis of symmetry for the problem resides.
Below you can see two images. In both images, the object on the left side is a 'slice' of the 3D solution, and the object(s) on the right is the solution for the 2D axisymmetric solution. The first image shows the 2D axisymmetric solution, whereas the second image has a 'mirror' image of that solution placed just below its counterpart, to make comparing the two solutions (3D vs 2D axisymmetric) that much easier to see.
So, is it in any way common for such a marked difference in the results of 3D vs 2D axisymmetric results for elliptic PDEs?
UPDATE: I would like to thank Bill and Wolfgang for their constructive questions.
Knowing that the 2D Axisymmetric and 3D solutions should be the same, I modified my 3D setup and used a $100\times199\times199$ grid instead, with $(i, 99, 99)$ as the axis for the ring electrode, and setting Dirichlet boundaries at a radius of $100$ from said axis. The solutions in 3D now match my 2D axisymmetric solution.
Below are the images showing the now matching solutions.
| CommonCrawl |
The aim of this paper is that of discussing closed graph theorems for bornological vector spaces in a self-contained way, hoping to make the subject more accessible to non-experts. We will see how to easily adapt classical arguments of functional analysis over $\mathbb R$ and $\mathbb C$ to deduce closed graph theorems for bornological vector spaces over any complete, non-trivially valued field, hence encompassing the non-Archimedean case too. We will end this survey by discussing some applications. In particular, we will prove De Wilde's Theorem for non-Archimedean locally convex spaces and then deduce some results about the automatic boundedness of algebra morphisms for a class of bornological algebras of interest in analytic geometry, both Archimedean (complex analytic geometry) and non-Archimedean. | CommonCrawl |
Abstract: Given an element in the first homology of a rational homology 3-sphere $Y$, one can consider the minimal rational genus of all knots in this homology class. This defines a function $\Theta$ on $H_1(Y;\mathbb Z)$, which was introduced by Turaev as an analogue of Thurston norm. We will give a lower bound for this function using the correction terms in Heegaard Floer homology. As a corollary, we show that Floer simple knots in L-spaces are genus minimizers in their homology classes, hence answer questions of Turaev and Rasmussen about genus minimizers in lens spaces. | CommonCrawl |
Let $G$ be a simple graph. An independent set is a set of pairwise non-adjacent vertices. The number of vertices in a maximum independent set of $G$ is denoted by $\alpha(G)$. In this paper, we characterize graphs $G$ with $n$ vertices and with maximum number of maximum independent sets provided that $\alpha(G)\leq 2$ or $\alpha(G)\geq n-3$. | CommonCrawl |
Abstract: While going deeper has been witnessed to improve the performance of convolutional neural networks (CNN), going smaller for CNN has received increasing attention recently due to its attractiveness for mobile/embedded applications. It remains an active and important topic how to design a small network while retaining the performance of large and deep CNNs (e.g., Inception Nets, ResNets). Albeit there are already intensive studies on compressing the size of CNNs, the considerable drop of performance is still a key concern in many designs. This paper addresses this concern with several new contributions. First, we propose a simple yet powerful method for compressing the size of deep CNNs based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies at different treatments of $1\times 1$ convolutions and $k\times k$ convolutions ($k>1$), where we only binarize $k\times k$ convolutions into binary patterns. The resulting networks are referred to as pattern networks. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type Nets can be compressed dramatically with marginal drop in performance. Second, in light of the different functionalities of $1\times 1$ (data projection/transformation) and $k\times k$ convolutions (pattern extraction), we propose a new block structure codenamed the pattern residual block that adds transformed feature maps generated by $1\times 1$ convolutions to the pattern feature maps generated by $k\times k$ convolutions, based on which we design a small network with $\sim 1$ million parameters. Combining with our parameter binarization, we achieve better performance on ImageNet than using similar sized networks including recently released Google MobileNets. | CommonCrawl |
Inference procedures in some lifetime models.
This thesis deals with inference procedures for some parametric lifetime models, involving single as well as multiple samples. In some situations censored (Type I and Type II) samples are considered. The thesis consists of two parts. Part I deals with homogeneity testing involving multiple samples from the gamma, exponential and the Weibull or the extreme value distributions. Part II deals with confidence interval procedures for the parameters of the two parameter exponential distribution and the extreme value models. Assuming the underlying distribution for several groups of data to be two parameter gamma with common shape parameter, various tests are developed for comparing the means of the groups. The performance of these test statistics is determined in terms of level and power by conducting simulations. A C($\alpha$) test and a likelihood ratio test are presented and compared for checking the validity of the assumption of common shape parameter. Under failure censoring, various test statistics for comparing the mean life times of several two parameter exponential distributions are derived and studied by performing Monte Carlo simulations. Considering failure censored data, homogeneity tests for extreme value location parameters with the assumption of a common scale parameter are studied. For this problem, a C($\alpha$) test is derived and compared with other existing methods through simulations. Also, for testing the assumption of common extreme value scale parameter, a C($\alpha$) statistic is derived and compared with other existing statistics. In single sample situations several confidence interval estimation procedures for the scale parameter of a two parameter exponential distribution under time censoring are discussed. Behaviours of the confidence intervals based on these procedures are examined by simulation study in terms of average lengths, coverage and tail probabilities. For extreme value failure censored data (with or without covariates), a simple method using orthogonality approach (Cox and Reid, 1987) to obtain explicit expression for the variance-covariance of the MLEs of the parameters is given. For obtaining confidence intervals for the parameters of interest various procedures, such as the procedure based on the likelihood ratio, the procedure based on the likelihood score corrected for bias and skewness and the procedure based on the likelihood ratio adjusted for mean and variance, are derived. The behaviours of these procedures are investigated in terms of average lengths, coverage and tail probabilities by conducting Monte Carlo simulations. The above procedures are extended to extreme value regression model. Confidence interval procedures are also derived and studied for the parameters of the extreme value model under time censoring. Source: Dissertation Abstracts International, Volume: 54-05, Section: B, page: 2583. Thesis (Ph.D.)--University of Windsor (Canada), 1992.
Thiagarajah, Kulathavaranee., "Inference procedures in some lifetime models." (1992). Electronic Theses and Dissertations. 2435. | CommonCrawl |
Abstract: Nonnegative matrices are important in many areas. Of particular importance are the spectral properties of square nonnegative matrices. Some spectral properties are given by the well-known Perron-Frobenius theory, which is about 100 years old. One of the most difficult problems in matrix theory is to determine the lists of $n$ complex numbers (respectively real numbers) which are the spectra of $ n \times n $ nonnegative (respectively symmetric nonnegative) matrices. In fact, this problem is open for any $ n \geq 5 $. Our work deals with the first open case, that is $ n = 5 $, for a list of real numbers. We have made significant progress towards the solution of this case. In particular, we obtain the solution when the sum of the five given numbers is zero or at least half of the largest one. | CommonCrawl |
Bingo is the game of chance where each player matches the numbers on their card with the numbers that the caller draws at random. When the first player has collected enough called numbers on their card, they declare that they have won, and their card is verified.
Imagine this game being played online with players allowed to choose their own card.
Question: Is there a way to play Bingo online and assert that your card won, without revealing what your card is, only that you chose this card at the beginning and that it does contain the called numbers?
Sub-question: Is this a practical application of an existing, generalised problem?
Own thoughts: In the first iteration of this game, we could ask that players publish, signed, which card they chose at the beginning of the game. When the first player has collected enough called numbers, the winner will be apparent. But I don't want for players to publish their card at the start of the game.
In the second iteration of this game, we could ask that players publish, encrypted, which card they chose at the beginning of the game. We could include salt to avoid reverse lookups, and we could include a checksum to ensure that what they publish can't easily be decrypted into any card. But I don't want for players to publish their card at the end of the game, either!
Once a game is won, we know which subset of cards could have won. Perhaps it is possible to say that your card belongs to this subset (with some probability?) without saying what member it is? Perhaps this involves generating a large amount of winning cards, of which perhaps one is the actual winning one, and then prove that your card is one of them, without saying which one it is.
Clarification 1: This problem does not deal with calling numbers in a fair way. Assume that the players cannot predict what numbers are called.
I will build upon Meir Maor's answer by constructing a simple $f$ for use in an R1CS like libsnark's or dalek's ``bulletproof'' library. Turns out this is pretty efficient!
Every player commits to the board. We assume the board $B(i,j)$ is of size $n\times m$. Make $nm$ Pedersen commitments, which we publish.
for some random challenge $z$. Using a Fiat-Shamir transformation, this can be made non-interactive. This scheme needs $(l-1)(n-1)$ private multiplications, which means that a system like dalek's can create a non-interactive proof in $\log(n\cdot l)$ space, so logarithmic in the number of rows and already-called numbers.
This seems entirely straight forward. Each player publishes a commitment, essentially a hash of their board. Possibly signed.
If the boards are not high enough entropy themselves we will tack on some random data to increase their entropy to make the result unguessable.
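A minimal sketch of just this commit step (the hash choice, encoding, and salt size are assumptions; the zero-knowledge part of the scheme is not covered here):

```python
import hashlib
import os

def commit(board):
    """Commit to a bingo board: hash of the board contents plus a random salt."""
    salt = os.urandom(32)  # extra entropy so low-entropy boards cannot be guessed
    data = ",".join(str(n) for row in board for n in row).encode()
    return hashlib.sha256(salt + data).hexdigest(), salt  # publish digest, keep salt

def verify(board, salt, digest):
    data = ",".join(str(n) for row in board for n in row).encode()
    return hashlib.sha256(salt + data).hexdigest() == digest
```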
When a player declares victory he will provide a Zero Knowledge proof that he knows of a winning board matching his commitment.
Note Zero Knowledge proofs are very powerful stuff, but we need to open the black box of the functions we use (such as hashing) in order to use them. For any efficiently computable predicate function $f$ we can prove we have $x$ so that $f(x)=true$ without revealing anything about $x$ beyond the statement.
In our case we would have $f(board)$ be the predicate which checks the board is a winning board and matches our commitment. Obviously trivial to compute. | CommonCrawl |
The central star of the Helix Nebula, the closest planetary nebula, is ringed by hundreds of Cometary Knots. Presented here are images taken with WFPC2 of a field in the northern portion of the bright ring and some physical parameters and models derived from the data. The field of view contains more than thirty well formed Cometary Knots and reveals that the entire bright ring is composed of knot-like structures. The data is in three narrow emission line filters, H$\alpha$, (OIII) and (NII). Optical thickness to ionizing photons for the knots is established, physical properties such as density in the ionized portions of the knots and dust mass are determined, and the beginning of a model to explain the detailed structure is set forth.
Handron, Kerry Dorinda Patrick. "Cometary knots in the Helix Nebula." (1996) Master's Thesis, Rice University. https://hdl.handle.net/1911/17047. | CommonCrawl |
There is a racetrack where $n$ players complete laps. Each player has their own maximum speed. In this racetrack, overtaking is only possible near the finish line at every lap: when a player approaches a slower player, she will stay behind him until at the finish line. At the finish line, all players crossing the line at the same time resume driving at their maximum speed (so faster players overtake slower ones). Initially, all players start at the finish line. Given the lap time and the number of laps to complete for each player, calculate the times they complete the race in.
The first line contains an integer $n$ ($1 \leq n \leq 5\, 000$), the number of players. The following $n$ lines contain the players' lap time and number of laps to complete: the $i$-th line contains two integers $t_ i$ and $c_ i$ ($1 \leq t_ i \leq 10^6$, $1 \leq c_ i \leq 1\, 000$), the lap time and the number of laps to complete for player $i$. The players are sorted in decreasing order of speed, that is, $t_1 \leq t_2 \leq \ldots \leq t_ n$.
Output $n$ lines; the $i$'th line must contain the time that player $i$ completes the race. | CommonCrawl |
The second displayed line of matrices is not correct. The idea will work, but there needs to be either a separation into cases, or a much larger matrix, I think a $10\times 10$, except that there will be stuff only near the diagonal, essentially a string of five $2\times 2$ Fibonacci matrices, with $0$'s elsewhere.
And it appears to follow the same relation as with the first one except it's now 3 to some power (the same powers) instead of 2 to that power. I'd say it's safe to assume that with a(2)=x you'd get x^same power with that relation.
cs504, S99/00 Solving Recurrence Relations - Step 1 Find the Homogeneous Solution. Begin by putting the equation in the standard form. That means all terms containing the sequence go on the left and everything else on the right. | CommonCrawl |
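As a small illustration of the $2\times 2$ Fibonacci matrices mentioned above, the recurrence can be advanced quickly by repeated squaring of the matrix (a sketch; the helper names are mine):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices of Python integers."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(M, k):
    """Fast power of a 2x2 matrix by repeated squaring."""
    result = [[1, 0], [0, 1]]
    while k:
        if k & 1:
            result = mat_mult(result, M)
        M = mat_mult(M, M)
        k >>= 1
    return result

def fib(n):
    # [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```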
Let $\mathcal R \subseteq S \times S$ be a relation on a set $S$.
Some sources call this a linear ordering, or a simple ordering.
If it is necessary to emphasise that a total ordering $\preceq$ is not strict, then the term weak total ordering may be used.
Results about total orderings can be found here. | CommonCrawl |
How are "the surroundings" and their temperature defined when calculating the entropy change during a reaction?
Let's say supercooled liquid water at $263\ \mathrm K$ isobarically changes to solid ice at the same temperature. I wish to calculate the change in entropy of the surroundings, and I happen to know the $\Delta H$ for the reaction.
Do we consider the temperature at which the reaction is taking place to be the temperature of the surroundings as well?
Thermodynamics deals with changes between systems at equilibrium, which is to say that the initial and final states of a process are regarded as equilibrium states. As explained in the comments, the surroundings are usually defined as an ideal reservoir of infinite size and thus infinite heat capacity and constant temperature, clearly an approximation, but the key point remains that stated in the previous sentence.
It does not make sense to define an equilibrium diathermal state in which the surroundings and the system have different temperatures. Since in the stated problem the system undergoes exchange of heat with its surroundings, and the system is at an initial and final T of 263 K after the transition, the surroundings have to be at 263 K throughout.
One potential point of confusion regarding this type of problem concerns the metastable nature of the supercooled state. This is not a true thermodynamic equilibrium but is regarded as sufficiently stable to be regarded as such. An additional point of confusion may be that heat is transferred to the surroundings even though system and surroundings are at the same T. This is possible because the surroundings have infinite size and heat capacity. One way to envision such surroundings is as a reservoir of a substance with a melting point of 263 K.
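To make that concrete: treating the surroundings as a reservoir at the same 263 K, the constant-pressure calculation implied above is

$$\Delta S_\mathrm{surr} = \frac{q_\mathrm{surr}}{T_\mathrm{surr}} = \frac{-\Delta H_\mathrm{sys}}{263\ \mathrm K},$$

which is positive here because freezing is exothermic ($\Delta H_\mathrm{sys} < 0$); the numerical value follows once the known $\Delta H$ for the transition is inserted.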
Which temperature is meant in the Gibbs free energy equation?
How can enthalpy change of a system be negative while entropy change is positive?
Is ΔS of a system related to temperature and change in enthalpy?
Why don't these Entropy equations contradict each other? | CommonCrawl |
Given an $n\times n$ grid, and $2\times 2$ checkered tiles (white in the upper left and bottom right corners, and black in the upper right and bottom left corners), what is the smallest number of black squares that can be showing for a tiling that covers the grid (overlaps allowed)?
Tiles can be rotated in their placement, are assumed to be infinitely thin, and are placed one at a time on top of the grid/tile configuration, that is, the most recent tile that has been placed must have all four squares uncovered. Tiles cannot hang off of the edges of the grid but must be placed entirely within the grid.
I've been able to achieve tilings such that only $n$ black squares are showing, but I can't seem to prove that this is the minimum (if it is).
Yes, $n$ black squares is minimal.
No matter how you tile your grid, there will always be at least one black square in each row and in each column, because adding a new tile always places a black square in both of the rows and columns in which the tile was added. The best you can do is to have one black line along the diagonal.
Easy proofs of the undecidability of Wang's tiling problem?
Why do matching rules make a substitution tiling aperiodic?
Are periodic tilings stable against defects outside of some region away from the defect? | CommonCrawl |
How to prove or disprove this?
What are nice (counter-)examples?
I think I have seen it on sci.math as well (also posted by him). I assume he knows the answer, but wants to see how people handle it.
Let $f(z),g_1(z),g_2(z),...$ be analytic on the closed unit circle.
I think replacing analytic with $C^\infty$ in Tommy's statement is interesting. After all, there are functions that are $C^\infty$ almost everywhere but analytic almost nowhere.
This problem seemed simple when I first saw it, but it appears deceptively complicated to me. Or maybe I am weak.
I considered Fourier series, Riemann's series theorem and contour integrals to find a counterexample, but I failed.
I think it is possible to show the hypothesis correct if all the $f,g_i$ are monotone.
What bothers me most is - unlike most statements - that I am not even sure if it is likely true or likely false!?
A further complication is summability methods, but for now let's assume we consider ordinary sums that converge.
Is this a hard question? Is this a new question?
| CommonCrawl |
111 Why did no student correctly find a pair of $2\times 2$ matrices with the same determinant and trace that are not similar?
94 Is there any conjecture that we know is provable/disprovable but we haven't found a proof of yet?
82 How to represent "not an empty set"?
55 Can anyone give me a good example of two interestingly different ordinary cohomology theories?
54 What are some reasonable-sounding statements that are independent of ZFC? | CommonCrawl |
# xc is in the trust region.
# Consider xd outside of the trust region.
The following example is not in the book.
We verify that $||d_C + \lambda (d_d - d_C)|| = \Delta$.
The point on the border is $x_C + \lambda (x_d - x_C)$.
The following figure illustrates the various points involved.
We illustrate the method on the same example as before. This is not reported in the book. Note that there is no negative curvature here. Also, we have a large trust region ($\Delta=10$) to illustrate the case when the CG algorithm converges without hitting the trust region boundaries. | CommonCrawl |
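A minimal numpy sketch of the boundary-intersection step described above, i.e. solving $||d_C + \lambda (d_d - d_C)|| = \Delta$ for $\lambda$ (the variable names follow the text; the example values are made up):

```python
import numpy as np

def boundary_lambda(d_c, d_d, delta):
    """Solve ||d_c + lam*(d_d - d_c)|| = delta for the root lam in [0, 1]."""
    diff = d_d - d_c
    a = diff @ diff
    b = 2.0 * (d_c @ diff)
    c = d_c @ d_c - delta ** 2
    return (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)  # positive root

# Made-up example: Cauchy point inside the region, dogleg point outside.
d_c, d_d, delta = np.array([0.5, 0.5]), np.array([3.0, 1.0]), 1.0
lam = boundary_lambda(d_c, d_d, delta)
x_border = d_c + lam * (d_d - d_c)
print(lam, np.linalg.norm(x_border))  # the norm equals delta
```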
This is a quick post exploring a process of upscaling images using neural nets with no training data other than the downscaled image itself.
Figure 1: The image was upscaled from 32x32 to 1024x1024, but one may as well see it as denoising the way it is presented above.
That is, given the two coordinates of any pixel, the network will output a colour for that pixel. The implications of having such a network is that one can then process colour values for arbitrarily large images by simply feeding the model a large variety of pixel coordinates.
The idea arose from looking at the samples on otoro.net, and wanting to simplify the system to only upscale images, rather than generate new large images. After simplifying the model, the core idea is the same as the one used by Andrej Karpathy here and by Thomas Schenker here. Further, a similar network was introduced by Kenneth O. Stanley in 2007 under the name of CPPN. However, I have not seen the network used for upscaling existing images, so I decided to give it a try and see how it performs.
The code I used for this work may be found in the format of a Jupyter Notebook here with additional comments to clarify each of the code sections. The network was implemented in TensorFlow, and the program is about 50 lines of code.
and for new coordinates, the network will estimate some plausible value to fill in.
Figure 2: On the first row, the training process uses a 2x2 image and the model learns to map from pairs of coordinates to pixel colours. On the second row, the network already knows how to map some of the coordinates (the corners), but it is unpredictable what the network will output for the other coordinate values in the 4x4 image.
Note that the coordinates are normalised between 0 and 1 such that the upper left corner is $(0,0)$. However, different ways of normalising may provide different results – for instance, setting the upper left corner to $(-1, 1)$ and the centre of the image to $(0,0)$ would ensure that the inputs are roughly centred around $0$. Further, one may standardise inputs such that batches have unit-variance, and potentially obtain different results.
For some lower level details about the exact network architecture, weight initialisation, and other hyperparameters, please check out the notebook where you can also clone the repository and upscale your own images.
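As a rough illustration of the idea (not the notebook's exact code: the layer sizes, activations and training schedule below are assumptions), the coordinate-to-colour network fits in a few lines of tf.keras:

```python
import numpy as np
import tensorflow as tf

# Hypothetical small image as an (H, W, 3) array of floats in [0, 1].
img = np.random.rand(32, 32, 3).astype("float32")
H, W, _ = img.shape

# Training pairs: normalised (row, col) coordinates -> RGB colour.
r, c = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
coords = np.stack([r / (H - 1), c / (W - 1)], axis=-1).reshape(-1, 2).astype("float32")
colors = img.reshape(-1, 3)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="sigmoid"),  # RGB in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(coords, colors, epochs=200, batch_size=256, verbose=0)

# Upscaling = querying the trained network on a denser coordinate grid.
H2 = W2 = 256
r2, c2 = np.meshgrid(np.linspace(0, 1, H2), np.linspace(0, 1, W2), indexing="ij")
big = np.stack([r2, c2], axis=-1).reshape(-1, 2).astype("float32")
upscaled = model.predict(big, verbose=0).reshape(H2, W2, 3)
```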
Some quick results I obtained can be observed in the table below. Please note that all samples are resized to $512\times 512$ but the original sizes are $1024\times 1024$ for the generated image, $32\times 32$ or $64\times 64$ for the training sample, and varying size for the original images.
Clicking on an image opens a slightly enlarged version.
No original image, as this was just uniformly generated noise, so all credit goes to the network.
No quantitative analysis has been performed yet, but it can be observed that details and texture are typically lost, while high-contrast regions seem to be well preserved after processing.
I believe the results look interesting and are worth investigating further. Importantly, it would be beneficial if one would not need to retrain the network for every image, but rather train only one autoencoder network on an entire dataset. Further, none of the networks were trained to full capacity to learn the mapping properly (i.e. MSE $\not\approx 0$ on the training set) even if overfitting may be desirable, so longer training time and more expressive networks may also help.
Other possible directions include focusing on denoising, analysing the linearity of networks as functions of inputs, considering regions instead of single pixel coordinates, quantitatively comparing upscaling capabilities, exploring potential from an artistic perspective, and packaging a web interface. | CommonCrawl |
The Dynamic Mode Decomposition (DMD) is a relatively recent mathematical innovation that, among other things, allows us to solve or approximate dynamical systems in terms of coherent structures that grow, decay, and/ or oscillate in time. We refer to the coherent structures as DMD modes. Each DMD mode has corresponding time dynamics defined in terms of a single eigenvalue.
In other words, the DMD converts a dynamical system into a superposition of modes whose dynamics are governed by eigenvalues.
Amazingly, although the mathematical procedure for identifying the DMD modes and eigenvalues is purely linear, the system itself can be nonlinear! I won't go into it here but there are sound theoretical underpinnings for the claim that a nonlinear system can be described by a set of mode and eigenvalue pairs. Read up on the Koopman operator and the DMD's connection to it for more information [1][2][3].
Not only is the DMD a useful diagnostic tool for analyzing the inner workings of a system, but it can also be used to predict the future state of the system. All that is needed are the modes and the eigenvalues. With very little effort, the modes and eigenvalues can be combined to produce a function that approximates the system state at any moment in time.
Given: a 1-dimensional scalar function evolving in time.
Given: a set of trajectories in 3 dimensions produced by an unknown vector field.
Given: a set of 3-dimensional vectors sampled from an unknown vector field.
Given: a 2-dimensional scalar function in spherical coordinates evolving in time.
It is important to fully understand the limitations of any modeling strategy. Thus, I also talk a bit about when and how the DMD can fail. Finally, I conclude the post with a mention of notable extensions to the DMD and a summary.
where $X^\dagger$ is the pseudo-inverse [4] of $X$, then the Dynamic Mode Decomposition of the pair $(X,Y)$ is given by the eigendecomposition of $A$. That is, the DMD modes and eigenvalues are eigenvectors and eigenvalues of $A$.
Clearly, $X$ is a set of inputs vectors and $Y$ is the corresponding set of output vectors. This particular interpretation of the DMD is extremely powerful, as it provides a convenient method for analyzing (and predicting) dynamical systems for which the governing equations are unknown. More on dynamical systems shortly.
There are a number of theorems that go along with this definition of the DMD [2]. One of the more useful theorems states that $Y=AX$ exactly if and only if $X$ and $Y$ are linearly consistent (i.e., whenever $Xv=0$ for some vector $v$, then $Yv=0$ too). Linear consistency is relatively straightforward to test, as we shall see. That being said, linear consistency is not a mandatory prerequisite for using the DMD. Even if the DMD solution for $A$ doesn't exactly satisfy the equation $Y=AX$, it is still a least-squares solution, minimizing error in an $L^2$ sense.
At first glance, the task of finding the eigendecomposition of $A=YX^\dagger$ doesn't appear to be too big of a deal. Indeed, when $X$ and $Y$ are reasonably sized, a couple calls to the pinv and eig methods from Numpy or MATLAB will do the trick. The problem comes when $A$ is truly large. Notice that $A$ is $m\times m$, so when $m$ (the number of signals in each time sample) is very large, finding the eigendecomposition can become unwieldy.
Fortunately, the problem can be broken down into smaller pieces with the help of the exact DMD algorithm.
More in-depth explanations of the algorithm's derivation can be found in the references [1][2]. It also might be of theoretical interest to note that $\Phi=UW$ is an alternate derivation of $\Phi$ referred to as the projected DMD modes. In this post, I only use exact DMD modes.
Now let's go through the algorithm step-by-step in Python. Start by installing and importing all necessary packages.
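The snippets below are a hedged reconstruction rather than the post's original code; in particular the play data here is made up (three spatial modes with simple oscillating/growing time behaviour) purely so the later steps have something to run on.

```python
import numpy as np

# Made-up play data: three spatiotemporal modes on a 1-D spatial domain.
xgrid = np.linspace(-10, 10, 400)       # space
tgrid = np.linspace(0, 6 * np.pi, 200)  # time
Xg, Tg = np.meshgrid(xgrid, tgrid)

D = (np.exp(-(Xg + 4) ** 2) * np.exp(1.0j * Tg)
     + np.exp(-Xg ** 2) * np.exp(2.3j * Tg) * np.exp(0.1 * Tg)   # a growing mode
     + np.exp(-(Xg - 4) ** 2) * np.exp(0.6j * Tg)).T             # shape (space, time)

# Time-shifted data matrices for the DMD, so that Y = A X.
X = D[:, :-1]
Y = D[:, 1:]
```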
Let's generate some play data. Keep in mind that in practice, one doesn't necessarily know the governing equations for a data. Here we're just inventing some equations to create a dataset. Forget they ever existed once the data has been generated.
Now compute the SVD of $X$. The first variable of major interest is $\Sigma$, the singular values of $X$. Taking the SVD of $X$ allows us to extract its "high-energy" modes and reduce the dimensionality of the system a la Proper Orthogonal Decomposition (POD). Looking at the singular values informs our decision of how many modes to truncate.
Given the singular values shown above, we conclude that the data has three modes of any significant interest. Therefore, we truncate the SVD to only include those modes. Then we build $\tilde A$ and find its eigendecomposition.
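Continuing the sketch (again, not the original code), the SVD, the rank-3 truncation, and the reduced operator $\tilde A$ look roughly like this:

```python
import numpy as np

# SVD of X, truncated to the r dominant modes suggested by the singular values.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 3
U_r = U[:, :r]             # (m, r)
s_r = s[:r]                # (r,)
V_r = Vh[:r, :].conj().T   # (n, r)

# Reduced operator A~ = U* Y V S^-1 and its eigendecomposition.
Atilde = U_r.conj().T @ Y @ V_r @ np.diag(1.0 / s_r)
mu, W = np.linalg.eig(Atilde)  # eigenvalues and eigenvectors of A~
```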
Each eigenvalue in $\Lambda$ tells us something about the dynamic behavior of its corresponding DMD mode. Its exact interpretation depends on the nature of the relationship between $X$ and $Y$. In the case of difference equations we can make a number of conclusions. If the eigenvalue has a non-zero imaginary part, then there is oscillation in the corresponding DMD mode. If the eigenvalue is inside the unit circle, then the mode is decaying; if the eigenvalue is outside, then the mode is growing. If the eigenvalue falls exactly on the unit circle, then the mode neither grows nor decays.
Now build the exact DMD modes.
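In the notation of the algorithm this is $\Phi = Y V \Sigma^{-1} W$; continuing from the variables above:

```python
# Exact DMD modes: one mode per column of Phi.
Phi = Y @ V_r @ np.diag(1.0 / s_r) @ W
```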
The columns of $\Phi$ are the DMD modes plotted above. They are the coherent structures that grow/ decay/ oscillate in the system according to different time dynamics. Compare the curves in the plot above with the rolling, evolving shapes seen in the original 3D surface plot. You should notice similarities.
This is where the DMD algorithm technically ends. Equipped with the eigendecomposition of $A$ and a basic understanding of the nature of the system $Y=AX$, it is possible to construct a matrix $\Psi$ corresponding to the system's time evolution. To fully understand the code below, study the function $x(t)$ for difference equations in the next section.
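One way to build $\Psi$ for the difference-equation case, continuing the sketch above (the amplitudes $b$ come from projecting the first snapshot onto the modes):

```python
import numpy as np

# Discrete-time dynamics: Psi[i, k] = b_i * mu_i**k.
b = np.linalg.pinv(Phi) @ X[:, 0]
k = np.arange(D.shape[1])                        # snapshot indices 0..n-1
Psi = (mu[:, None] ** k[None, :]) * b[:, None]   # shape (r, n)

# Reconstruction of the data matrix from modes and time dynamics.
D_dmd = Phi @ Psi
print(np.abs(D_dmd[:, :-1] - X).max())           # tiny for this toy data
```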
The three plots above are the time dynamics of the three DMD modes. Notice how all three are oscillating. Furthermore, the second mode appears to grow exponentially, which is confirmed by the eigenvalue plot.
If you wish to construct an approximation to the original data matrix, simply multiply $\Phi$ and $\Psi$. The original and the approximation match up exactly in this particular case.
What is truly amazing is that we have just defined explicit functions in time using nothing but data! This is a good example of equation-free modeling.
In this case, the operator $A$ computes the first derivative with respect to time of a vector $x_i$. Our matrices $X$ and $Y$ would then consist of $n$ samples of a vector field: the $i$-th column of $X$ is a position vector $x_i$; the $i$-th column of $Y$ is a velocity vector $\dot x_i$.
Here are a few of examples of how to use the DMD with different types of experimental data. For convenience, I condense the DMD code into a single method and define some helper methods to check linear consistency and confirm my solutions.
In this first example, we are given a set of trajectories in 3 dimensions produced by a vector field. I used the following system of differential equations to produce the trajectories.
Watch the video below for an animation of the dynamic behavior.
You can generate a dataset with the following code. Basically, it randomly chooses a set of initial points and simulates each of them forward in time. Then it concatenates all of the individual trajectories together to produce $X$ and $Y$ matrices.
Now compute the DMD, just like before.
The DMD computation was successful. Given an initial condition $x(0)$, you can simulate the solution forward in time with the following code.
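A sketch of that forward simulation (not the original code; here the eigenvalues are treated as discrete-time multipliers, so each step corresponds to the sampling interval used to build $X$ and $Y$):

```python
import numpy as np

def simulate(Phi, mu, x0, steps):
    """Advance an initial state forward using DMD modes Phi and eigenvalues mu."""
    b = np.linalg.pinv(Phi) @ x0            # mode amplitudes for this initial condition
    k = np.arange(steps)
    return np.real(Phi @ ((mu[:, None] ** k[None, :]) * b[:, None]))  # (dim, steps)

# Hypothetical usage, assuming Phi and mu come from the DMD of the trajectory data:
# states = simulate(Phi, mu, x0=np.array([1.0, 1.0, 1.0]), steps=500)
```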
This example is very, very similar to the previous example. We shall use the same "mystery" vector field to create the dataset. The only difference is that the following code directly samples the output of the system of differential equations instead of simulating trajectories. In other words, we generate the data by randomly generating a set of position vectors, $X$, and feeding them into the system to produce velocity vectors, $Y$.
The decomposition is successful. As extra confirmation, when we examine the eigenvalues, we notice that they are the same eigenvalues as the original system that produced our dataset. There are two oscillating-decaying modes (complex conjugates) and one exponentially growing mode. Watch the video again to witness these exact time dynamics.
The mystery dataset is that of a $r=4$ sphere with five oscillating modes. Watch the video for an animation.
It is important to note that during the spheroid investigation, I found that the existence of the stationary, non-zero mode (i.e., the constant radius of 4) caused the DMD to fail. One would think that the constant mode would be the easiest mode to identify, but it was not. Instead, the DMD tried to account for the influence of constant mode by injecting it into the other modes, thereby corrupting the results. My conclusion is that, under certain circumstances, the singular value corresponding to the constant mode can shrink so small that the SVD-based DMD misses it completely.
Given that this is a potential problem for some datasets, it is probably a good idea to pre-process your dataset by removing the mean of each signal in the data matrix. While you're at it, you could divide each signal by its variance to completely normalize the data.
Here are visualizations of the five DMD modes. Each pseudo-color plot represents a different scalar function $r=f(\theta,\phi)$. Each mode oscillates at some frequency (determined by the eigenvalues) in spherical coordinates.
And here are the time dynamics of the first three modes.
The DMD has several known limitations. First of all, it doesn't handle translational and rotational invariance particularly well. Secondly, it can fail completely when transient time behavior is present. The follow examples illustrate these problems.
The following dataset is very simple. It consists of a single mode (a Gaussian) translating along the spatial domain as the system evolves. Although one would think that the DMD would handle this cleanly, the opposite happens. Instead of picking up a single, well-defined singular value, the SVD picks up many.
The plot on the left shows the evolution of the system; the plot on the right shows its singular values. It turns out that close to ten DMD modes are needed to correctly approximate the system! Consider the following plots in which the original, true dynamics are compared to the superpositions of an increasing number of modes.
In this final example, we look at a dataset containing transient time dynamics. Specifically, the data shows a Gaussian popping in and out of existence. Unfortunately, the DMD cannot accurately decompose this data.
Although the DMD correctly identifies the mode, it fails completely to identify the time-behavior. This is understandable if we consider that the time-behavior of the DMD time series depends on eigenvalues, which are only capable of characterizing combinations of exponential growth (the real part of the eigenvalue) and oscillation (the imaginary part).
The interesting thing about this system is that an ideal decomposition could potentially consist of a superposition of a single mode (as shown in the figure) with various eigenvalues. Imagine the single mode being multiplied by a linear combination of many orthogonal sines and cosines (a Fourier series) that approximates the true time dynamics. Unfortunately, a single application of the SVD-based DMD is incapable of yielding the same DMD mode various times with different eigenvalues.
Furthermore, it is important to note that even if we were able to correctly extract the time behavior as a large set of eigenvalues, the solution's predictive capabilities would not be reliable without fully understanding the transient behavior itself. Transient behavior, by its very nature, is non-permanent.
The multi-resolution DMD (mrDMD) attempts to alleviate the transient time behavior problem by means of a recursive application of the DMD.
Despite its limitations, the DMD is a very powerful tool for analyzing and predicting dynamical systems. All data scientists from all backgrounds should have a good understanding of the DMD and how to apply it. The purpose of this post is to provide some theory behind the DMD and to provide practical code examples in Python that can be used with real-world data. After studying the formal definition of the DMD, walking through the algorithm step-by-step, and playing with several simple examples of its use – including examples in which it fails – it is my hope that this post has provided you an even clearer understanding of the DMD and how it might be applied in your research or engineering project.
There are many extensions to the DMD. Future work will likely result in posts on some of these extensions, including the multi-resolution DMD (mrDMD) and the sparse DMD (sDMD). | CommonCrawl |
Given $f(x) = x - \sin(x)$, I'm asked to solve this using the Newton-Raphson method and give the order of convergence. Now for me this is strange because I need to have a starting value $x_0$ in order to go through the Newton-Raphson method.
However, I really don't know how to calculate the order of convergence. Any help for this formula, and how to do this in general?
The only root is $x=0$ and it has multiplicity $3$. Now apply what you know about the convergence of Newton's method to multiple roots.
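A quick numerical check of this hint (a sketch, using the standard result that Newton's method converges only linearly on a root of multiplicity $m$, with error ratio approaching $(m-1)/m = 2/3$ here):

```python
import math

f = lambda x: x - math.sin(x)
df = lambda x: 1 - math.cos(x)

x = 1.0
for i in range(20):
    x_new = x - f(x) / df(x)
    print(i, x_new, x_new / x)  # ratio of successive errors tends to about 2/3
    x = x_new
```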
| CommonCrawl |
Abstract: Boykin and Jackson recently introduced a property of countable Borel equivalence relations called Borel boundedness, which they showed is closely related to the union problem for hyperfinite equivalence relations. In this paper, we introduce a family of properties of countable Borel equivalence relations which correspond to combinatorial cardinal characteristics of the continuum in the same way that Borel boundedness corresponds to the bounding number $\mathfrak b$. We analyze some of the basic behavior of these properties, showing for instance that the property corresponding to the splitting number $\mathfrak s$ coincides with smoothness. We then settle many of the implication relationships between the properties; these relationships turn out to be closely related to (but not the same as) the Borel Tukey ordering on cardinal characteristics. | CommonCrawl |
The critical behavior of a three real parameter class of solutions of the sixth Painlevé equation is computed, and parametrized in terms of monodromy data of the associated $2\times 2$ matrix linear Fuchsian system of ODE. The class may contain solutions with poles accumulating at the critical point. The study of this class closes a gap in the description of the transcendents in one to one correspondence with the monodromy data. These transcendents are reviewed in the paper. Some formulas that relate the monodromy data to the critical behaviors of the four real (two complex) parameter class of solutions are missing in the literature, so they are computed here. A computational procedure to write the full expansion of the four and three real parameter class of solutions is proposed. | CommonCrawl |
I am trying to write code that solves equations using the Newton-Raphson method. I want the iterations to stop when the error is smaller than the tolerance defined by the user. How can I validate that the error is smaller than the tolerance?
In general, you cannot. You don't know the true value of the solution.
Consider solving $x^2+0.00001 =0$ and $x^2-0.00001=0$; the first has no real solution, but a NR method will initially appear to be converging to a value close to zero, before diverging.
If you have an estimate for the root $\alpha$, and a tolerance $\epsilon$ then you can evaluate $f(\alpha\pm\epsilon)$. If these two values have different signs then you know that there is a root in the interval $\alpha\pm\epsilon$. If they have the same sign then either there is either no root in the range, a double root, or an even number of roots.
You can iterate the NR method until the difference between terms becomes small, then test by looking for a change of sign. If there is a change of sign then you have a proven root. If there is no change of sign, there is no certain way of telling whether a root even exists or not.
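A minimal sketch of that stopping-plus-validation strategy (names and the demo equation are mine):

```python
def newton_with_check(f, df, x0, tol=1e-8, max_iter=100):
    """Newton iteration, stopping when successive iterates are within tol,
    then validating the result by looking for a sign change on [x - tol, x + tol]."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        converged = abs(x_new - x) < tol
        x = x_new
        if converged:
            break
    proven = f(x - tol) * f(x + tol) < 0  # sign change => a root provably lies in the interval
    return x, proven

root, proven = newton_with_check(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
print(root, proven)  # ~1.41421356, True
```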
Finding reciprocal via Newton Raphson: How to determine initial guess? | CommonCrawl |
Can a division algebra have degree divisible by its characteristic?
I apologize in advance if this is easy, but I've tried Googling, and had no luck.
The characteristic of $D$ (as a normal ring) is $p$.
The degree of $D$ (the square root of its dimension over its center) is divisible by $p$.
I would really appreciate if anyone knows an example, or a proof that there are none. I could easily add hypotheses to rule out this case, but that would make things a bit messier, so I don't want to do that unless it's really necessary. Unfortunately, I know zippo about division algebras in characteristic p, other than that every finite one is actually just a field.
EDIT: Thanks for the references. Doing a little reading based on Mikhail's suggestion, I found a result which is good enough for my purposes: no such division algebra exists whose center is a perfect field, which was proven in 1934 by the remarkably named Abraham Adrian Albert. Those wanting more details on his life can read this detailed and impressive obituary by Jacobson. The best detail is that his Ukrainian father decided to ditch his last name (which is now unknown!) in Victorian England, and adopt the prince consort's name instead.
There are counterexamples for each $p$. The easiest maybe is the following: Let $F$ be the field of order $p^p$, and $\sigma$ be an automorphism of $F$ of order $p$. Let $D=F((t))$ be the set of Laurent series of the form $\sum a_it^i$ with the usual addition. Define multiplication by $ta=a^\sigma t$. Then $D$ is a division algebra with center $Z=\mathbb F_p((t^p))$, so $[D:Z]=p^2$.
Is $SL_1(D)$ toplogically finitely generated, for $D$ a division algebra over a local field?
Does every equivalence class in a Brauer-Wall group have a (graded) division algebra?
Is an associative division algebra required for this phenomenon?
When is a crossed-product algebra a division algebra? | CommonCrawl |
Abstract. The aim of this paper is to extend a class of potentials for which the absolutely continuous spectrum of the corresponding multidimensional Schr\"odinger operator is essentially supported by $[0,\infty)$. Our main theorem states that this property is preserved for slowly decaying potentials provided that there are some oscillations with respect to one of the variables. | CommonCrawl |
We describe here the main research areas the Kitchin Group has been active in. For each area we briefly describe what we have done, with reference to most of the publications that resulted from the work.
For a complete listing of our publications see ./publications.html.
Molecular simulation continues to drive scientific progress. In all simulations, one of the main challenges is how to get good computational models to describe the chemistry and physics of the system. For decades we have relied on physical insight to build these models, but even these often have computationally practical approximations. A new approach using models derived by machine learning has developed in the past decade. In this approach, we use large databases of calculations from density functional theory to build neural network models of atomistic systems.
You can see our work in these papers.
This is an exciting new area we will be publishing in for the next few years!
We are huge proponents of using org-mode to write scientific documents. This has enabled our group to develop novel data sharing strategies kitchin-2015-examp,kitchin-2015-data-surfac-scien and increase our productivity significantly.
We recently published this work on automated data sharing kitchin-2017-autom-data.
Alloy catalysts frequently do not have the same reactivity as the parent metals. We use density functional theory calculations to understand how the electronic structure of the alloy differs from the parent metals, and how those differences lead to different reactivities. We are especially interested in modeling heterogeneous site distributions boes-2015-estim-bulk, modeling XPS spectra of alloys boes-2015-core-cu, and the limit of single atom alloys tierney-2009-hydrog-dissoc.
We have used density functional theory to identify S-tolerant alloys inoglu-2011-ident-sulfur. Our current work will develop a new understanding of selectivity in acetylene hydrogenation.
We are generally interested in understanding the reactivity of metal oxides towards oxygen and fuels. We use computational tools to model the electronic structure and reactivity of metal oxides for applications in water splitting and chemical looping.
We derived a number of heuristic rules that describe the reactivity of perovskites akhade-2011-effec,akhade-2012-effec. These principles were expanded into a concept of outer electrons that was used to describe the reactivity of a broad range of oxides calle-vallejo-2013-number. These observations have been found to be robust across a broad range of assumptions in the models used, although some care must be taken in a few cases curnan-2014-effec-concen. We have found novel relationships between oxide electronic structure and reactivity for oxides with rocksalt structures xu-2014-relat.
We did some work in infiltrating solid oxide fuel cell electrodes with electrocatalysts to improve their activity chao-2011-prepar-mesop,chao-2012-struc-relat.
An outstanding challenge in modeling oxides is the highly correlated electrons especially in the 3d orbitals of transition metals. We have expanded an approach to use a linear response method to compute U, making a nearly predictive, first-principles method for computing oxide formation energies with pretty good accuracy xu-2015-accur-u.
One new direction we have taken is to identify oxide polymorphs that may be grown as epitaxial thin films mehta-2015-ident-poten. Subsequent work xu-2017-first-princ has shown some design principles for predicting epitaxial stabilization, including the successful synthesis of columbite SnO2 wittkamper-2017-compet-growt.
We have worked in these research areas in the past, but are not currently actively working on them.
CO2 capture remains an important tool in considering how to mitigate the role of fossil energy on climate change rubin-2012. We have investigated the use of supported amines alesi-2010-co2-adsor, and a commercially available ion exchange resin as a CO2 capture sorbent alesi-2012-evaluat-primar. We have shown that supported amines are poisoned by SO2, but may they may be partially regenerated in some cases hallenbeck-2013-effec-o2.
We have used density functional theory calculations to model amine-CO2 reactions lee-2012-chemic-molec,mao-2013-inter, as well as process modeling to show that different process conditions are required to optimize a capture process for different solvents lee-2013-compar-co2. We have considered ionic liquids as potential CO2 capture solvents thompson-2014-co2-react.
We also have developed a high pressure silica capillary cell for in situ Raman measurements of CO2 solubility in solvents for precombustion capture applications. Our current interests include developing a microfluidic device for measuring CO2 absorption rates in amine-based solvents. We collaborate with the Anna group on this.
We previously explored the electrochemical separation of CO2 from flue gas landon-2010-elect-concen,pennline-2010-separ-co2.
We are using Raman spectroscopy to probe the oxide/electrolyte interface under oxygen evolution conditions. We have focused most of our work on Ni-oxide containing materials, which are highly active when promoted by Fe impurities landon-2012-spect-charac,michael-2015-alkal-elect. We have also examined Fe-containing molecular electrocatalysts demeter-2014-elect-oxygen.
We have used density functional theory to show that there are correlations in the reactions of the oxygen evolution reaction that likely limit the activity of oxide-based electrocatalysts man-2011-univer-oxygen. These observations are robust, even with advanced computational methods such as the linear response DFT+U methods xu-2015-linear-respon.
Our earliest work was in modeling the coverage dependent adsorption energies of atomic adsorbates on late transition metal surfaces. We showed that there exist strong configurational correlations for many adsorbates on Pd(111) kitchin-2009-correl-pd, and for oxygen on late transition metal surfaces miller-2009-relat-au,miller-2011-config. These principles were generalized in a simple physical model inoglu-2010-simpl that showed the origin of the coverage dependence was an adsorbate-induced modification of the surface electronic structure. We wrote a review book chapter on this topic miller-2012-cover. We demonstrated that DFT can be used to interpret the coverage dependent desorption behavior of oxygen on Pt(111) miller-2014-simul-temper. Finally, we showed the generality of configurational correlations across many surfaces and for many adsorbates, demonstrating that geometric similarity is a requirement for correlation xu-2014-probin-cover.
[boes-2017-neural-networ] Jacob Boes & John Kitchin, Neural Network Predictions of Oxygen Interactions on a Dynamic Pd Surface, Molecular Simulation, 1-9 (2017). link. doi.
[boes-2017-model-segreg] Boes & Kitchin, Modeling Segregation on AuPd(111) Surfaces With Density Functional Theory and Monte Carlo Simulations, The Journal of Physical Chemistry C, 121(6), 3479-3487 (2017). link. doi.
[kitchin-2017-autom-data] Kitchin, Van Gulick & Zilinski, Automating Data Sharing Through Authoring Tools, International Journal on Digital Libraries, 18(2), 93-98 (2017). link. doi.
[boes-2015-estim-bulk] Jacob Boes, Gamze Gumuslu, James Miller, Andrew Gellman & John Kitchin, Estimating Bulk-Composition-Dependent \ceH2 Adsorption Energies on \ceCu_xPd_1-x Alloy (111) Surfaces, ACS Catalysis, 5(2), 1020-1026 (2015). link. doi.
[boes-2015-core-cu] Jacob Boes, Peter Kondratyuk, Chunrong Yin, James Miller, Andrew Gellman & John Kitchin, Core Level Shifts in Cu-Pd Alloys As a Function of Bulk Composition and Structure, Surface Science, 640, 127-132 (2015). link. doi.
[tierney-2009-hydrog-dissoc] Tierney, Baber, Kitchin & Sykes, Hydrogen Dissociation and Spillover on Individual Isolated Palladium Atoms, Physical Review Letters, 103(24), 246102 (2009). link. doi.
[inoglu-2011-ident-sulfur] Inoglu & Kitchin, Identification of Sulfur-Tolerant Bimetallic Surfaces Using DFT Parametrized Models and Atomistic Thermodynamics, ACS Catalysis, 399-407 (2011). doi.
[akhade-2011-effec] Akhade & Kitchin, Effects of Strain, d-Band Filling, and Oxidation State on the Bulk Electronic Structure of Cubic 3d Perovskites, The Journal of Chemical Physics, 135(10), 104702-6 (2011). link.
[akhade-2012-effec] Akhade & Kitchin, Effects of Strain, d-Band Filling, and Oxidation State on the Surface Electronic Structure and Reactivity of 3d Perovskite Surfaces , J. Chem. Phys., 137, 084703 (2012). link. doi.
[calle-vallejo-2013-number] Calle-Vallejo, Inoglu, Su, Martinez, Man, Koper, Kitchin & Rossmeisl, Number of Outer Electrons As Descriptor for Adsorption Processes on Transition Metals and Their Oxides, Chemical Science, 4, 1245-1249 (2013). link. doi.
[curnan-2014-effec-concen] Curnan & Kitchin, Effects of Concentration, Crystal Structure, Magnetism, and Electronic Structure Method on First-Principles Oxygen Vacancy Formation Energy Trends in Perovskites, The Journal of Physical Chemistry C, 118(49), 28776-28790 (2014). link. doi.
[xu-2014-relat] Zhongnan Xu & John Kitchin, Relating the Electronic Structure and Reactivity of the 3d Transition Metal Monoxide Surfaces, Catalysis Communications, 52, 60-64 (2014). link. doi.
[chao-2011-prepar-mesop] Chao, Kitchin, Gerdes, Sabolsky & Salvador, Preparation of Mesoporous \ceLa_0.8Sr_0.2MnO3 Infiltrated Coatings in Porous SOFC Cathodes Using Evaporation-Induced Self-Assembly Methods, ECS Transactions, 35(1), 2387-2399 (2011). link. doi.
[chao-2012-struc-relat] Chao, Munprom, Petrova, Gerdes, Kitchin & Salvador, Structure and Relative Thermal Stability of Mesoporous (La,Sr)MnO$_3$ Powders Prepared Using Evaporation-Induced Self-Assembly Methods, Journal of the American Ceramic Society, 2339-2346 (2012). link. doi.
[xu-2015-accur-u] Xu, Joshi, Raman & Kitchin, Accurate Electronic and Chemical Properties of 3d Transition Metal Oxides Using a Calculated Linear Response U and a DFT + U(V) Method, The Journal of Chemical Physics, 142(14), 144701 (2015). link. doi.
[mehta-2015-ident-poten] Prateek Mehta, Paul Salvador & John Kitchin, Identifying Potential \ceBO2 Oxide Polymorphs for Epitaxial Growth Candidates, ACS Appl. Mater. Interfaces, 6(5), 3630-3639 (2015). link. doi.
[xu-2017-first-princ] Xu, Salvador & Kitchin, First-Principles Investigation of the Epitaxial Stabilization of Oxide Polymorphs: \ceTiO2 on \ce(Sr,Ba)TiO3, ACS Applied Materials & Interfaces, 9(4), 4106-4118 (2017). link. doi.
[wittkamper-2017-compet-growt] Julia Wittkamper, Zhongnan Xu, Boopathy Kombaiah, Farangis Ram, Marc De Graef, John Kitchin, Gregory Rohrer & Paul Salvador, Competitive Growth of Scrutinyite ($\alpha$-\cePbO2) and Rutile Polymorphs of \ceSnO2 on All Orientations of Columbite \ceCoNb2O6 Substrates, Crystal Growth & Design, 17(7), 3929-3939 (2017). link. doi.
[rubin-2012] Rubin, Mantripragada, Marks, Versteeg & Kitchin, The Outlook for Improved Carbon Capture Technology, Progress in Energy and Combustion Science, 38, 630-671 (2012). link. doi.
[alesi-2010-co2-adsor] Richard Alesi, McMahan Gray & John Kitchin, \ceCO2 Adsorption on Supported Molecular Amidine Systems on Activated Carbon, ChemSusChem, 3(8), 948-956 (2010). link. doi.
[alesi-2012-evaluat-primar] Alesi & Kitchin, Evaluation of a Primary Amine-Functionalized Ion-Exchange Resin for \ceCO_2 Capture, Industrial & Engineering Chemistry Research, 51(19), 6907-6915 (2012). link. doi.
[hallenbeck-2013-effec-o2] Hallenbeck & Kitchin, Effects of O2 and SO2 on the Capture Capacity of a Primary-Amine Based Polymeric CO2 Sorbent, Industrial & Engineering Chemistry Research, 52(31), 10788-10794 (2013). link. doi.
[lee-2012-chemic-molec] Lee & Kitchin, Chemical and Molecular Descriptors for the Reactivity of Amines With \ceCO_2, Industrial & Engineering Chemistry Research, 51(42), 13609-13618 (2012). link. doi.
[mao-2013-inter] Mao, Lee, Kitchin, Nulwala, Luebke, Damodaran & Krishnan, Interactions in 1-ethyl-3-methyl Imidazolium Tetracyanoborate Ion Pair: Spectroscopic and Density Functional Study, Journal of Molecular Structure, 1038(0), 12-18 (2013). link. doi.
[lee-2013-compar-co2] Anita Lee, John Eslick, David Miller & John Kitchin, Comparisons of Amine Solvents for Post-Combustion \ceCO2 Capture: A Multi-Objective Analysis Approach, International Journal of Greenhouse Gas Control, 18, 68-74 (2013). link. doi.
[thompson-2014-co2-react] Thompson, Albenze, Shi, Hopkinson, Damodaran, Lee, Kitchin, Luebke & Nulwala, \ceCO2 Reactive Ionic Liquids: Effects of Functional Groups on the Anion and Its Influence on the Physical Properties, RSC Adv., 4, 12748-12755 (2014). link. doi.
[landon-2010-elect-concen] Landon & Kitchin, Electrochemical Concentration of Carbon Dioxide From an Oxygen/carbon Dioxide Containing Gas Stream, Journal of the Electrochemical Society, 157(8), B1149-B1153 (2010). link. doi.
[pennline-2010-separ-co2] Henry Pennline, Evan Granite, David Luebke, John Kitchin, James Landon & Lisa Weiland, Separation of \ceCO2 From Flue Gas Using Electrochemical Cells, Fuel, 89(6), 1307-1314 (2010). link. doi.
[landon-2012-spect-charac] James Landon, Ethan Demeter, Nilay Inoğlu, Chris Keturakis, Israel Wachs, Relja Vasić, Anatoly Frenkel & John Kitchin, Spectroscopic Characterization of Mixed Fe-Ni Oxide Electrocatalysts for the Oxygen Evolution Reaction in Alkaline Electrolytes, ACS Catalysis, 2(8), 1793-1801 (2012). link. doi.
[michael-2015-alkal-elect] John Michael, Ethan Demeter, Steven Illes, Qingqi Fan, Jacob Boes & John Kitchin, Alkaline Electrolyte and Fe Impurity Effects on the Performance and Active-Phase Structure of NiOOH Thin Films for OER Catalysis Applications, J. Phys. Chem. C, 119(21), 11475-11481 (2015). link. doi.
[demeter-2014-elect-oxygen] Ethan Demeter, Shayna Hilburg, Newell Washburn, Terrence Collins & John Kitchin, Electrocatalytic Oxygen Evolution With an Immobilized TAML Activator, J. Am. Chem. Soc., 136(15), 5603-5606 (2014). link. doi.
[man-2011-univer-oxygen] Man, Su, Calle-Vallejo, Hansen, Martinez, Inoglu, Kitchin, Jaramillo, Nørskov & Rossmeisl, Universality in Oxygen Evolution Electrocatalysis on Oxide Surfaces, ChemCatChem, 3(7), 1159-1165 (2011). doi.
[xu-2015-linear-respon] Xu, Rossmeisl & Kitchin, A Linear Response DFT+U Study of Trends in the Oxygen Evolution Activity of Transition Metal Rutile Dioxides, The Journal of Physical Chemistry C, 119(9), 4827-4833 (2015). link. doi.
[kitchin-2009-correl-pd] Kitchin, Correlations in Coverage-Dependent Atomic Adsorption Energies on Pd(111), Physical Review B, 79(20), 205412 (2009). doi.
[miller-2009-relat-au] Miller & Kitchin, Relating the Coverage Dependence of Oxygen Adsorption on Au and Pt Fcc(111) Surfaces Through Adsorbate-Induced Surface Electronic Structure Effects, Surface Science, 603(5), 794-801 (2009). doi.
[miller-2011-config] Spencer Miller, Nilay Inoglu & John Kitchin, Configurational Correlations in the Coverage Dependent Adsorption Energies of Oxygen Atoms on Late Transition Metal fcc(111) Surfaces, J. Chem. Phys., 134(10), 104709 (2011). link. doi.
[inoglu-2010-simpl] Inoglu & Kitchin, Simple Model Explaining and Predicting Coverage-Dependent Atomic Adsorption Energies on Transition Metal Surfaces, Physical Review B, 82(4), 045414 (2010). link. doi.
[miller-2014-simul-temper] Spencer Miller, Vladimir Pushkarev, Andrew Gellman & John Kitchin, Simulating Temperature Programmed Desorption of Oxygen on Pt(111) Using DFT Derived Coverage Dependent Desorption Barriers, Topics in Catalysis, 57(1-4), 106-117 (2014). link. doi.
[xu-2014-probin-cover] Zhongnan Xu & John Kitchin, Probing the Coverage Dependence of Site and Adsorbate Configurational Correlations on (111) Surfaces of Late Transition Metals, J. Phys. Chem. C, 118(44), 25597-25602 (2014). link. doi. | CommonCrawl |
Let $M$ be a Riemannian manifold with affine connection such that the metric is covariantly constant (so that the connection equals the Levi-Civita connection up to torsion).
I know the interpretation of torsion and curvature in terms of rolling without slipping (that is, its interpretation as the curvature of an underlying Cartan connection). What I am looking for is an interpretation in terms of geodesics (that means free-falling particles in Einstein-Cartan theory). Since, as is well known, the family of all geodesics does not depend on the torsion (two connections that are the same up to torsion have the same geodesics), this interpretation also has to use the concept of parallel displacement directly. For example, one could talk about geodesics starting parallel, etc.
In the case of vanishing torsion, the Jacobi equation for Jacobi fields (that is, infinitesimal families of parallel geodesics) gives me a complete description (and interpretation) of the curvature tensor as a relative acceleration of nearby geodesics. In the case of non-vanishing torsion, the equation becomes more complicated, as a covariant derivative of the torsion enters as well.
Is there a similar (probably first-order) equation for geodesics in which the torsion enters directly and gives me a direct interpretation? Can the Jacobi equation (or the underlying problem) be reformulated so that it stays the same independent of the torsion?
I have read: http://en.wikipedia.org/wiki/Torsion_%28differential_geometry%29#Twisting_of_reference_frames but I have difficulties interpreting this result, and it is lacking any reference.
The family of geodesics determines the connection up to a skew $\binom{1}{2}$-tensor field, and the torsion-free connection can be reconstructed from the geodesic spray (a vector field on $TM$). See 22.6 ff in here. If $\nabla$ is torsion-free, and $T:TM\times_M TM \to TM$ is a skew tensor field, then $\nabla'_XY:=\nabla_XY+T(X,Y)$ has the same geodesics and torsion $2T$. Compute the curvature $R'$ of $\nabla'$ in terms of the curvature $R$ of $\nabla$ and $T$ and write the Jacobi equation for $\nabla'$ (including the torsion) and expand it in terms of $\nabla$ and $T$. It is the same equation as the Jacobi equation for $\nabla$; all the extra terms cancel.
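For reference, here is a minimal sketch of the computation suggested above (my own addition, using the sign convention $R(X,Y)=\nabla_X\nabla_Y-\nabla_Y\nabla_X-\nabla_{[X,Y]}$): for $\nabla$ torsion-free and $T$ skew one finds
$$ R'(X,Y)Z = R(X,Y)Z + (\nabla_X T)(Y,Z) - (\nabla_Y T)(X,Z) + T(X,T(Y,Z)) - T(Y,T(X,Z)). $$
Expanding the $\nabla'$-Jacobi equation in terms of $\nabla$ and $T$ produces exactly the first-order terms needed to offset these corrections, which is the cancellation claimed above.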
In this sense there is no way to see the torsion just from the geodesics alone.
See also this question and its answers.
| CommonCrawl |
We will now state some important results regarding the linearity of measurable functions. These can be proven similarly to the analogous results proven for Lebesgue measurable functions.
Theorem 1: Let $(X, \mathcal A)$ be a measurable space, let $f$ and $g$ be measurable functions defined on a measurable set $E$, and let $c \in \mathbb{R}$. Then:
a) $f + g$ is a measurable function on $E$.
b) $cf$ is a measurable function on $E$.
Similarly, we can state a result for products of measurable functions.
Theorem 2: Let $(X, \mathcal A)$ be a measurable space and let $f$ and $g$ be measurable functions defined on a measurable set $E$. Then $fg$ is a measurable function on $E$. | CommonCrawl |
We will now review some of the recent material regarding Riemann-Stieltjes integrals with integrators of bounded variation.
We noted that this representation is not unique though since if $g$ is any increasing function on $[a, b]$ then $V + g$ and $V - f + g$ are also increasing functions and $f = (V + g) - (V - f + g)$.
On the Riemann-Stieltjes Integrals with Integrators of Bounded Variation page we saw that if $f$ is a bounded function on $[a, b]$, $\alpha$ is a function of bounded variation on $[a, b]$, and $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$, then $f$ is also Riemann-Stieltjes integrable with respect to $V$ and $V - \alpha$ on $[a, b]$.
On the Riemann-Stieltjes Integrability of Continuous Functions with Integrators of Bounded Variation page we saw that if $f$ is continuous on $[a, b]$ and $\alpha$ is of bounded variation on $[a, b]$ then $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. This is an extremely useful theorem that has the normal Riemann integrals that we're most familiar with as a subfamily satisfying these conditions.
On the Riemann Integrability of Continuous Functions and Functions of Bounded Variation page we applied the theorem above to Riemann integrals. We saw that if $f$ is continuous on $[a, b]$ then $\int_a^b f(x) \: dx$ exists. We also saw that if $\alpha$ is of bounded variation on $[a, b]$ then $\int_a^b f(x) \: dx$ exists too.
On the Riemann-Stieltjes Integrability of Functions on Subintervals with Integrators of Bounded Variation page we saw that if $f$ is any function defined on $[a, b]$, $\alpha$ is of bounded variation on $[a, b]$, and $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ then $f$ is also Riemann-Stieltjes integrable with respect to $\alpha$ on any subinterval $[c, d] \subseteq [a, b]$. | CommonCrawl |
The goal of this RAMP is to classify correctly handwritten digits. For each submission, you will have to provide an image classifier (versus the original setup that required a transformer and a batch classifier). The images are usually big so loading them into the memory at once may be impossible. The image classifier therefore will access them through an img_loader function which can load one image at a time.
Setting up an AWS instance is easy, just follow this tutorial.
For learning the nuts and bolts of convolutional nets, we suggest that you follow Andrej Karpathy's excellent course.
If the images are not yet in data/imgs, change the type of the next cell to "Code" and run it.
The class distribution is balanced.
It is worthwhile to look at some image panels, grouped by label.
All images have size 28 $\times$ 28.
In the first workflow element image_preprocessor.py you can resize, crop, or rotate the images. This is an important step. Neural nets need standard-size images defined by the dimension of the input layer. MNIST images are centered and resized, so these operations are unlikely to be useful but rotation may help.
Here we resize the images to different resolutions, then blow them up so the difference can be visible.
Here we rotate the image. Explore options in skimage.
All these transformations should be implemented in the transform function found in the image_preprocessor workflow element that you will submit.
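Purely as an illustration (this is not the starting kit's code), a transform along these lines could look as follows; the 28 x 28 target size and the 10-degree rotation are arbitrary choices:

import numpy as np
from skimage.transform import resize, rotate

def transform(x):
    # x: a single image (2D numpy array) as returned by img_loader.
    # A small rotation followed by a resize to the fixed input shape the
    # network expects; both skimage calls return float images in [0, 1].
    x = rotate(x, angle=10, mode='constant')   # the 10-degree angle is arbitrary
    x = resize(x, (28, 28), mode='reflect')
    return x.astype(np.float32)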
For submitting at the RAMP site, you will have to write a single ImageClassifier class implementing a fit and a predict_proba function.
The starting kit implements a simple keras neural net. Since MNIST is a small set of small images, we can actually load them into the memory. MNIST contains well-centered and aligned images so _transform only needs to scale the pixels into [0, 1].
x = x / 255.
# load the next minibatch in memory.
# `nb` is a multiple of `batch_size`, or `nb % batch_size`.
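For orientation only, a stripped-down ImageClassifier in the spirit of the starting kit could look like the sketch below. The layer sizes and number of epochs are illustrative, and it assumes, for illustration, that img_loader exposes len(), a load(i) method, and a y attribute of labels; check the actual workflow for the real interface.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.utils import to_categorical

class ImageClassifier(object):
    def _load_all(self, img_loader):
        # Load every image through img_loader and scale pixels into [0, 1].
        X = np.array([img_loader.load(i) for i in range(len(img_loader))],
                     dtype=np.float32) / 255.
        return X[..., np.newaxis]  # add a channel axis for keras

    def fit(self, img_loader):
        X = self._load_all(img_loader)
        y = to_categorical(img_loader.y, num_classes=10)  # assumed label attribute
        self.model = Sequential([
            Flatten(input_shape=X.shape[1:]),
            Dense(128, activation='relu'),
            Dense(10, activation='softmax'),
        ])
        self.model.compile(optimizer='adam', loss='categorical_crossentropy')
        self.model.fit(X, y, batch_size=64, epochs=2)

    def predict_proba(self, img_loader):
        X = self._load_all(img_loader)
        return self.model.predict(X)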
Once you have found a good feature extractor and classifier, you can submit them to ramp.studio. First, if it is your first time using RAMP, sign up, otherwise log in. Then find an open event on the particular problem, for example, the event MNIST for this RAMP. Sign up for the event. Both signups are controlled by RAMP administrators, so there can be a delay between asking for signup and being able to submit.
Once your signup request is accepted, you can go to your sandbox and copy-paste (or upload) image_preprocessor.py and batch_classifier.py from submissions/starting_kit. Save it, rename it, then submit it. The submission is trained and tested on our backend in the same way as ramp_test_submission does it locally. While your submission is waiting in the queue and being trained, you can find it in the "New submissions (pending training)" table in my submissions. Once it is trained, you get a mail, and your submission shows up on the public leaderboard. If there is an error (despite having tested your submission locally with ramp_test_submission), it will show up in the "Failed submissions" table in my submissions. You can click on the error to see part of the trace.
The official score in this RAMP (the first score column after "historical contributivity" on the leaderboard) is balanced accuracy aka macro-averaged recall, so the line that is relevant in the output of ramp_test_submission is valid acc = 0.132 ± 0.0. When the score is good enough, you can submit it at the RAMP. | CommonCrawl |
I have a high frequency time series of the bid and ask prices of a stock recorded on every tick. For each data point I also have certain indicators that predict the future movement of the price. The indicators have different horizons of prediction, some being optimal at few-second intervals and others at a few minutes. I need to assign these predictors weights, and based on whether the linear combination crosses a threshold, the decision will be taken to buy or sell the stock. So far I have tried the Differential Evolution (DE) method to figure out the weights. I use a black box model with the weights vector ($w_i$) and threshold as inputs. For each data point I have a vector of indicators ($\alpha _i$). $$ total\_alpha = \sum\alpha _i*w_i $$ If $$ total\_alpha > threshold, BUY $$ Else If $$ total\_alpha < -threshold, SELL $$ The output of the model is the sum of the differences between the prices of each consecutive buy and sell. This output is optimised by the DE algorithm. The issue is the computational cost: the data sets are large (~7e8 x 20) and the DE algorithm takes a long time. Is there a better and faster way to solve this problem?
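For illustration only (not part of the question), the setup described above can be prototyped with a fully vectorized backtest driven by scipy's differential evolution; the column layout, the toy P&L definition, and the bounds are assumptions of mine:

import numpy as np
from scipy.optimize import differential_evolution

def pnl(params, alphas, mid_price):
    # params = [w_1, ..., w_k, threshold]; alphas has shape (n_ticks, k).
    w, threshold = params[:-1], params[-1]
    total_alpha = alphas @ w                          # vectorized over all ticks
    position = np.where(total_alpha > threshold, 1.0,
               np.where(total_alpha < -threshold, -1.0, 0.0))
    # Toy P&L: hold the +1 / -1 / flat position at the mid price between ticks.
    return float(np.sum(position[:-1] * np.diff(mid_price)))

def fit_weights(alphas, mid_price, k):
    bounds = [(-1.0, 1.0)] * k + [(0.0, 1.0)]         # weights, then threshold
    result = differential_evolution(
        lambda p: -pnl(p, alphas, mid_price),         # DE minimizes, so negate
        bounds, maxiter=50, popsize=15, seed=0)
    return result.x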
| CommonCrawl |
"The H∞ Functional Calculus Based on the S-Spectrum for Quaternionic Op" by Daniel Alpay, Fabrizio Colombo et al.
In this paper we extend the H∞ functional calculus to quaternionic operators and to n-tuples of noncommuting operators using the theory of slice hyperholomorphic functions and the associated functional calculus, called S-functional calculus. The S-functional calculus has two versions, one for quaternionic-valued functions and one for Clifford algebra-valued functions, and can be considered the Riesz-Dunford functional calculus based on slice hyperholomorphicity because it shares with it the most important properties.
The S-functional calculus is based on the notion of S-spectrum which, in the case of quaternionic normal operators on a Hilbert space, is also the notion of spectrum that appears in the quaternionic spectral theorem.
The main purpose of this paper is to construct the H∞ functional calculus based on the notion of S-spectrum for both quaternionic operators and for n-tuples of noncommuting operators. We remark that the H∞ functional calculus for (n+1)-tuples of operators applies, in particular, to the Dirac operator.
D. Alpay, F. Colombo, I. Sabadini and T. Qian. The $H^\infty$ functional calculus based on the $S$-spectrum for quaternionic operators and for $n$-tuples of noncommuting operators. Journal of functional analysis, vol. 271, (2016), 1544-1584. | CommonCrawl |
Abstract: Recently developed neuromorphic vision sensors have become promising candidates for agile and autonomous robotic applications, primarily due to their high temporal resolution and low latency. Each pixel of this sensor independently fires an asynchronous stream of "retinal events" once a change in the light field is detected. Existing computer vision algorithms can only process periodic frames and so a new class of algorithms needs to be developed that can efficiently process these events for control tasks. In this paper, we investigate the problem of regulating a continuous-time linear time invariant (LTI) system to a desired point using measurements from a neuromorphic sensor. We present an $H_\infty$ controller that regulates the LTI system to a desired set-point and provide the set of neuromorphic sensor based cameras for the given system that fulfill the regulation task. The effectiveness of our approach is illustrated on an unstable system. | CommonCrawl |
We will now look at the number of $k$-combinations of a multiset containing $n$ distinct elements whose repetition numbers are $\infty$.
Recall that a $k$-combination of an ordinary finite $n$-element set $A$ is simply a selection of $k$ out of all $n$ elements in $A$. In other words, a $k$-combination of $A$ is a subset $B \subseteq A$ such that $\lvert B \rvert = k$. By extension, a $k$-combination of a multiset $A$ is simply a submultiset $B$ of $A$ that contains $k$ elements.
Set up positions to place all $k$ elements from $A$ and group like-elements together. Place $n - 1$ placeholders between each grouping of like-elements. There are now $k + n - 1$ positions total and the rearrangement of the placeholders signifies a new $k$-combination of the multiset $A$ as illustrated in the diagram below.
These $2$-combinations are given in the table below as submultisets.
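For a programmatic check of the same count (an illustration of mine, assuming the example multiset has the $n = 3$ distinct elements $a$, $b$, $c$, each with repetition number $\infty$):

from itertools import combinations_with_replacement
from math import comb

n, k = 3, 2
elements = ['a', 'b', 'c']
submultisets = list(combinations_with_replacement(elements, k))
print(submultisets)                             # the six 2-combinations as submultisets
assert len(submultisets) == comb(k + n - 1, k)  # stars and bars: C(k + n - 1, k) = C(4, 2) = 6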
We will look at the case of determining the number of $k$-combinations of a multiset containing $n$ distinct elements and whose repetition numbers are finite on the Combinations of Elements in Multisets with Finite Repetition Numbers page AFTER we have looked at The Inclusion-Exclusion Principle. | CommonCrawl |
But this was not the question.
Before asking the question let's first sport you with the following definition.
Definition: a peaceful area of size $k$ on a chessboard is a $k$ by $k$ square of squares containing $k$ queens in which no two queens threaten each other, i.e. no two queens are in the same row, column or diagonal.
Given an integer $N>7$, let $f(N)$ be the smallest number such that it is possible to place $N$ queens on an $N$ by $N$ chessboard in a way that for every $k$ with $f(N)\le k\le N$, there is a peaceful area of size $k$.
Is it the case that for all $N>16$, $f(N)=f(16)$?
Does there exist an $M$ such that for all $N>M$, $f(N)=f(M)$?
$f(n)\geq f(n-1)$ since any sub-region may be considered as a whole region.
Each time a queen is removed during the recursive process for finding $r$ referenced above the new sub-region may only have a queen occupying at most one of $2$ of the $4$ corners, since the queen that was previously removed was threatening the other $2$ corners of the sub-region.
The sub-region must be solvable without placing a queen on any of the diagonals the previously removed queens threatened.
Any solution with a queen on a corner is a member of a set of $8$ isomorphic solutions under symmetry (the only time the set is smaller is when rotation by quarter turns or reflections in the horizontal or vertical thereof are the same, which cannot be the case with a queen on the corner as it would be in conflict with the other queen to which it translated) thus we only need to consider one such arrangement.
Note: Certainly this sequence can never be fully utilised (making $r=n$) since at some point a position will be on a diagonal already used earlier (e.g. $n=5$ has $(5,n-3)=(5,5)$ sharing a diagonal with $(0,0)$).
A . . . . . . . . . . . . . . .
. C . . . . . . . . . . . . . .
. . . . . . . . . Q . . . . . .
. . Q . . . . . . . . . . . . .
. . . . . . Q . . . . . . . . .
. . . . . . . . . . . . . Q . .
. . . . . . . . . . . Q . . . .
. . . . . Q . . . . . . . . . .
. . . . . . . . . . . . . . Q .
. . . . . . . . . . . . Q . . .
. . . Q . . . . . . . . . . . .
. . . . . . . . Q . . . . . . .
. . . . Q . . . . . . . . . . .
. . . . . . . Q . . . . . . . .
. . . . . . . . . . Q . . . . .
Is it the case that for all $n>16$, $f(n)=f(16)$ ?
If my analysis is correct, no - a counter-example would be $f(18)=14$.
Does there exist an $m$ such that for all $n>m$, $f(n)=f(m)$ ?
I strongly suspect not. As we place queens in the sequence suggested the sub-region always allows the next placement to make $r$ potentially one bigger as would be required, but the sub-region seems to not always be solvable, some analysis of the regions excluded by the diagonals threatened by removed queens should be made here to show that this will always occur.
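To make the definition above concrete, here is a small helper (purely illustrative; the name and 0-indexed coordinate convention are my own) that tests whether the $k$ by $k$ sub-board in the top-left corner is a peaceful area:

def is_peaceful(queens, k):
    # queens: iterable of (row, col) positions, 0-indexed.
    # The k-by-k area is peaceful if it contains exactly k queens and no two
    # of them share a row, a column, or a diagonal.
    inside = [(r, c) for r, c in queens if r < k and c < k]
    if len(inside) != k:
        return False
    rows = {r for r, _ in inside}
    cols = {c for _, c in inside}
    diag1 = {r - c for r, c in inside}   # "\" diagonals
    diag2 = {r + c for r, c in inside}   # "/" diagonals
    return len(rows) == len(cols) == len(diag1) == len(diag2) == k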
whenever $n>3$ you can put $n$ non-attacking queens on an $n\times n$ chessboard: see e.g. this paper.
Let me add something that would help in a formal proof.
Any $n$ by $n$ peaceful region has either 3 or 4 queens living on its boundary of $4n-4$ squares. But only those which have just 3 queens on their border have a peaceful subregion of size $n-1$ by $n-1$.
If $l<m<n$, is it necessary that an $l$ by $l$ peaceful region be a subregion of an $m$ by $m$ region?
| CommonCrawl |
If I fix any parameter $\alpha$ (and, arbitrarily, set $z_0$ equal to $0$) then I can iterate starting from any point $z_1$ and look for cycles. This procedure yields Julia sets, or at least something very similar. I'm not sure what the correct name is in this context. For those running modern WebKit based browsers, here is an OpenGL Shading Language program for exploring the Julia sets of $f_\alpha$. Dragging the mouse changes the parameter.
What is the correct way to visualize the bifurcation locus of the family $f_\alpha$?
The technique that I suggested in my answer to the other question is to render a Julia set for every parameter $\alpha$ and to render a second Julia set for a perturbed parameter $\alpha + \epsilon$, and then to measure the average color change per pixel, to approximate the stability or instability of the Julia set for that parameter.
It yields muddy, unsatisfying renderings of the bifurcation locus.
Here is the bifurcation locus of $f_\alpha$, rendered by directly comparing Julia sets on a GPU (and even on dedicated graphics hardware I had to constrain the comparison region to keep the running time down). The bounding box runs from $-2$ to $+2$ on both the real and imaginary axes.
To demonstrate that the procedure is doing something reasonable here is the result for the family of quadratics $z\to z^2 + \alpha$.
I think the "usual" strategy is to identify the critical points of $f_\alpha$ and to iterate those points, then to color $\alpha$ according to the long term behavior of the critical points under iteration, but it isn't clear to me how to apply that strategy for this family, or even if that is a valid strategy for this type of multi-variable iteration scheme.
For example, both partial derivatives blow up when $w = -1$ and both partial derivatives are equal to zero when $w = \infty$, but iteration starting from $(z_0, z_1) = (0, -1)$ leads to a cycle between $(0, \infty)$ and $(\infty, 0)$, and starting from $(z_0, z_1) = (0, \infty)$ obviously leads to the same cycle. The parameter has no effect in either case.
Update: I was able to get some cleaner images by revising the algorithm slightly. The basic idea is the same, to directly compare colors in a rendering of the Julia set for $\alpha$ with colors (at the same points) in a rendering of the Julia set for a perturbed parameter, but now colors are compared for several perturbed parameters in a small neighborhood around $\alpha$. The resulting renderings show much better detail, but the additional Julia set color calculations exacerbate the algorithm's speed issues.
Update 2: I guess it's worth noting that I can produce a visualization of a set whose boundary is the bifurcation locus of the family $f_\alpha$, by iterating $(z_0, z_1) = (-\alpha, 0)$ or by iterating $(z_0, z_1) = (0, -\alpha)$ and looking for cycles. The "justification" for iterating starting at $(-\alpha, 0)$ is that one (but not the other) partial derivatives is zero at $(-\alpha, 0)$ so maybe it's "sort of like a critical point." The justification for iterating starting at $(0, -\alpha)$ is that I made a typo and noticed that I got the same picture anyway… I feel I'm missing something obvious.
To be precise, by computational experiments I have convinced myself that if $S$ is the set of parameters, $\alpha$, such that $(-\alpha, 0)$ is attracted to a cycle under iteration of $f_\alpha$, then $\partial S$ is the bifurcation locus of $f_\alpha$. I have no satisfying mathematical justification for this observation. In the following image the brightness of a point indicates the speed of convergence of $(-\alpha, 0)$ on a cycle. Brighter pixels indicate faster convergence (though I do not think there is much shading visible in the image).
| CommonCrawl |
A probability distribution is a special case of the more general notion of a probability measure, which is a function that assigns probabilities satisfying the Kolmogorov axioms to the measurable sets of a measurable space.
The rectangular distribution is a uniform distribution on [-1/2,1/2].
The triangular distribution on [a, b], a special case of which is the distribution of the sum of two uniformly distributed random variables (the convolution of two uniform distributions).
In mathematics, a degenerate distribution is the probability distribution of a discrete random variable which always has the same value.
As a discrete distribution, the degenerate distribution does not have a density.
The meaning given to it by Schwartz is not the meaning of the word distribution in probability theory.
In statistics, when a p-value is used as a test statistic for a simple null hypothesis, and the distribution of the test statistic is continuous, then the test statistic is uniformly distributed between 0 and 1 if the null hypothesis is true.
Although the uniform distribution is not commonly found in nature, it is particularly useful for sampling from arbitrary distributions.
The normal distribution is an important example where the inverse transform method is not efficient.
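As a concrete illustration of inverse transform sampling (example mine, not from the source): a uniform draw can be turned into a draw from any distribution with an easily inverted CDF, such as the exponential, whereas the normal CDF has no closed-form inverse, which is why other methods are usually preferred for it.

import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100000)           # Uniform(0, 1) samples

# Inverse transform sampling: if U ~ Uniform(0,1) and F is an invertible CDF,
# then F^{-1}(U) has CDF F.  For Exponential(rate) the inverse CDF is simple:
rate = 2.0
x = -np.log(1.0 - u) / rate
print(x.mean())                        # should be close to 1/rate = 0.5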
The Ewens sampling formula is a probability distribution on the set of all partitions of an integer n, arising in population genetics.
The F-distribution, the distribution of the ratio of two independent chi-squared random variables each divided by its degrees of freedom, used in the analysis of variance.
The Levy stable distribution is often used to characterize financial data and critical behavior.
The distribution of array elements is shown in Figure 22, along with an indication of which array elements may rest on which abstract processor (in array notation).
Block distributions such as this are good for problems which have a regular domain decomposition, such as fluid dynamics and Quantum Chromodynamics.
There is another type of distribution known as the cyclic distribution.
For angles, the uniform distribution is attractive as, among all distributions on a bounded range, the uniform distribution has maximum entropy, where entropy is defined to be Shannon's information, the expected value of the log of the density.
Whatever the distribution of the perimeter, we will get the same probability of an obtuse angle if, given the perimeter, the event "triangle is obtuse" is independent of the distribution of the perimeter.
However, it is easier to condition on A. The conditional distribution of B and C given A is normal and the 4 components are all independent.
An alternative approach to embeddability of a non-infinitely divisible $\mu$ is by considering non-classical convolution measure semigroups; for example embedding $\mu$ in a Boolean convolution measure semigroup and retaining the multinomial character of the moments.
As t->infty, it is perhaps evident that the distribution of Z(t) converges weakly to that of the sum of the integrals of v along the paths of two independent Brownian motions, starting at x and y and running forever. Here we prove a stronger result, namely convergence of the corresponding moment generating functions and of moments.
These limit distributions have been seen previously in analysis of the Poisson-Dirichlet distribution and elsewhere; they are expressed in terms of Dickman's function, and their properties are discussed in some detail.
We show that the covariance of the counters for overlapping M- tuples causes this asymptotic distribution to degenerate to a lower dimension.
In order to construct a useful test statistic for the degenerate multivariate normal distribution we compute a onedimensional fingerprint which is related to the distance between theoretical and empirical distribution function of the counter vector.
The technique used to derive the asymptotic distribution of the test statistic can be extended to a large class of statistical problems.
If F, the degenerate distribution, is chosen in the first stage, the observed value of the outcome is zero; otherwise the observed value is drawn from the other distribution.
The distribution of errors can, therefore, be viewed as a mixture of two distinguishable distributions, one with a discrete probability mass at zero and the other a continuous distribution of non-zero positive and/or negative error amounts.
The distribution is a mixture of a mass point at 0 and a nontrivial continuous distribution of decay score.
M stands for "Markovian", implying exponential distribution for service times or inter-arrival times.
D stands for "degenerate" distribution, or "deterministic" service times.
Ek stands for an Erlang distribution with k as the shape parameter.
, so the probability of sets not containing the degenerate point will tend to 0; large deviations is concerned with obtaining the exponential decay rate of these probabilities.
no longer exists), the same elementary computations are still applicable to the quasi-stationary distribution, and we show that the quasi-stationary distributions obey the same large deviations principle as in the recurrent case.
In addition, we address some questions related to the estimated time to absorption and obtain a large deviations principle for the invariant distribution in higher dimensions by studying a quasi-potential.
Distribution opportunities: Distribution provides a number of opportunities for the marketer that may normally be associated with other elements of the marketing mix.
In view of the need for markets to be balanced, the same distribution strategy is unlikely to be successful for each firm.
In general, for convenience products, intense distribution is desirable, but only brands that have a certain amount of power—e.g., an established brand name—can hope to gain national intense distribution.
Also, in the "standard" case, the OLS estimators have to be multiplied by some increasing function of T to obtain a non-degenerate limiting distribution.
The F statistic must be divided by T to obtain a non-divergent distribution (but one which is not F even then).
A DW statistic which is "significant" on usual criteria suggests that the regression model is mispecified, perhaps because it is a spurious regression.
The time taken by a station to process the customer can also be modeled with a probability distribution function.
Exponential distributions, with parameters depending on the length of the queue, model the queues.
The service time is exponentially distributed with the same mean for all classes of customer.
If E is included, D must be, to ensure that one is not confused between the two, but an infinity symbol is allowed for D.
D / M / n - This would describe a queue with a degenerate distribution for the interarrival times of customers, an exponential distribution for service times of customers, and n servers.
M / M / m / K / N - This would describe a queueing system with an exponential distribution for the interarrival times of customers and the service times of customers, m servers, a maximum of K customers in the queueing system at once, and N potential customers in the calling population.
The study of degenerate stars is important because it can in principle be used to reconstruct the entire history of star formation in the Galactic disc.
To do this both the luminosity function and the mass distribution of degenerate stars need to be determined precisely, and we need to understand the relationship between the initial mass and final mass of a star.
The width of this distribution puts constraints on the theories of stellar formation in the Solar neighbourhood (Fusi-Pecci and Renzini 1976), so accurate determinations are vital.
The normal distribution curve is drawn by using the observed mean and the expected variance under the null hypothesis of uniform neutral mutation rate among genes.
If we have the beta function b(A,B,p,q) and n (which presumably is given by one of p or q) must lie between 0 and 1, then am I correct in thinking that the upper and lower bounds A and B, must be set to 0 and 1 respectively?
1.25*x^0.25, from x = 0 to 1, is a degenerate Beta distribution with A = 1.25 and B = 0.
It is a degenerate one, with B = 0 and A = n, but none of the Beta distribution results will hold because you will find yourself dividing by zero.
The interaction terms are equal to zero under the independence hypothesis I.
denotes the degenerate prior on the interaction terms under the independence hypothesis.
The probability distribution for an observation from the population of individuals in q + 1 mutually exclusive and exhaustive categories is known as the multinomial distribution.
Then the multivariate vector $Y = (Y_1, \dots, Y_{q+1})$ is said to have a multinomial distribution and we write $Y = (Y_1, \dots, Y_{q+1}) \sim \text{Multinomial}(n, p_1, \dots, p_{q+1})$ with $P[Y = (y_1, \dots, y_{q+1})] = \frac{n!}{y_1! \cdots y_{q+1}!}\, p_1^{y_1} \cdots p_{q+1}^{y_{q+1}}$ (where $y_1 + \cdots + y_{q+1} = n$).
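A small numerical illustration of this pmf (my own example, using scipy; the probabilities and counts are arbitrary):

from scipy.stats import multinomial

n = 10
p = [0.2, 0.3, 0.5]                    # probabilities of the q + 1 = 3 categories
y = [2, 3, 5]                          # counts summing to n
print(multinomial.pmf(y, n=n, p=p))    # P[Y = (2, 3, 5)]
print(multinomial.rvs(n=n, p=p, size=3, random_state=0))  # a few random draws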
Control region for future observations: The goal is to use data, collected when a process is stable, to set a control region for a future observation X or future observations.
Typically, these distributions converge weakly to a degenerate distribution as N \rightarrow \infty, so the probability of sets not containing the degenerate point will tend to 0; large deviations is concerned with obtaining the exponential decay rate of these probabilities.
Table of contents for A primer on statistical distributions / N. Balakrishnan and V.B. Nevzorov. | CommonCrawl |
Standard facts about separating linear functionals will be used to determine how two cones $C$ and $D$ and their duals $C^*$ and $D^*$ may overlap. When $T\colon V\rightarrow W$ is linear and $K \subset V$ and $D\subset W$ are cones, these results will be applied to $C=T(K)$ and $D$, giving a unified treatment of several theorems of the alternative which explain when $C$ contains an interior point of $D$. The case when $V=W$ is the space $H$ of $n\times n$ Hermitian matrices, $D$ is the $n\times n$ positive semidefinite matrices, and $T(X) = AX + X^*A$ yields new and known results about the existence of block diagonal $X$'s satisfying the Lyapunov condition: $T(X)$ is an interior point of $D$. For the same $V$, $W$ and $D$, $ T(X)=X-B^*XB$ will be studied for certain cones $K$ of entry-wise nonnegative $X$'s.
A. N. Lyapunov: Le problème général de la stabilité du mouvement. Ann. Math. Studies 17 (1949), Princeton University Press. | CommonCrawl |
This page lists articles associated with the same title. If an internal link led you here, you may wish to change the link to point directly to the intended article.
Complementary angles: two angles whose measures add up to the measure of a right angle.
Complements of Parallelograms: The extra bits that need to be added to a pair of parallelograms sharing a line for their diagonals in order to make one big parallelogram.
Complement of Relation: for a relation $\mathcal R$, its complement is all those pairs which are not in $\mathcal R$.
Set Complement or Relative Complement: two related concepts: all the elements of a set which are not in a given subset.
Logical Complement: In logic, the negation of a statement.
Complement of Lattice Element: An element of a bounded lattice which is, in a strong and precise sense, incomparable to a given element of the lattice.
Complement of Graph: a graph with the same vertex set but whose edge set is all those edges not in that graph.
The word complement comes from the idea of complete-ment, it being the thing needed to complete something else.
It is a common mistake to confuse the words complement and compliment. Usually the latter is mistakenly used when the former is meant. | CommonCrawl |
> - can mtpro2.sty be extended in such a way that amsmath's cases environment automatically chooses 'ccases'?
> - what about a comparison of Times vs. TimesNR used together with MTProII? "($(a)$)" looks much better with TimesNR, perhaps there are more combinations worth noting?
effort. I have discussed the problem already with Mike Spivak.
Should I add them to mtpro2.tex ?
Zedler: These are the tex names the Lucida fonts use. If you look at http://www.hft.ei.tum.de/mz/MnSymbol.pdf they are called \largeemptyfilledspoon, \largefilledemptyspoon. They are used to denote the Fourier/Laplace transformation. Additional glyphs are not obligatory, this can be done at the macro level.
The interior product I asked for is called minushookup in the above mentioned pdf, I need it to be of type \mathbin. This one may better be created as an additional glyph. | CommonCrawl |
In the 2018-2019 AY, the Random Matrix and Probability Theory Seminar will take place on Thursdays from 4:30 – 5:30pm in CMSA, room G10. As the seminar will not occur on a regular weekly basis, the list below will reflect the dates of the scheduled talks. Room numbers and times will be announced as the details are confirmed.
The schedule will be updated as details are confirmed.
Abstract: Estimating low-rank matrices from noisy observations is a common task in statistical and engineering applications. Following the seminal work of Johnstone, Baik, Ben-Arous and Peche, versions of this problem have been extensively studied using random matrix theory. In this talk, we will consider an alternative viewpoint based on tools from mean field spin glasses. We will present two examples that illustrate how these tools yield information beyond those from classical random matrix theory. The first example is the two-groups stochastic block model (SBM), where we will obtain a full information-theoretic understanding of the estimation phase transition. In the second example, we will augment the SBM with covariate information at nodes, and obtain results on the altered phase transition.
This is based on joint works with Emmanuel Abbe, Andrea Montanari, Elchanan Mossel and Subhabrata Sen.
Abstract: In 1979, O.Heilmann and E.H. Lieb introduced an interacting dimer model with the goal of proving the emergence of a nematic liquid crystal phase in it. In such a phase, dimers spontaneously align, but there is no long range translational order. Heilmann and Lieb proved that dimers do, indeed, align, and conjectured that there is no translational order. I will discuss a recent proof of this conjecture. This is joint work with Elliott H. Lieb.
Abstract: Many problems in signal/image processing, and computer vision amount to estimating a signal, image, or tri-dimensional structure/scene from corrupted measurements. A particularly challenging form of measurement corruption is latent transformations of the underlying signal to be recovered. Many such transformations can be described as a group acting on the object to be recovered. Examples include the Simultaneous Localization and Mapping (SLaM) problem in Robotics and Computer Vision, where pictures of a scene are obtained from different positions and orientations; Cryo-Electron Microscopy (Cryo-EM) imaging where projections of a molecule density are taken from unknown rotations, and several others.
One fundamental example of this type of problems is Multi-Reference Alignment: Given a group acting in a space, the goal is to estimate an orbit of the group action from noisy samples. For example, in one of its simplest forms, one is tasked with estimating a signal from noisy cyclically shifted copies. We will show that the number of observations needed by any method has a surprising dependency on the signal-to-noise ratio (SNR), and algebraic properties of the underlying group action. Remarkably, in some important cases, this sample complexity is achieved with computationally efficient methods based on computing invariants under the group of transformations.
Abstract: We consider the dynamics of a heavy quantum tracer particle coupled to a non-relativistic boson field in R^3. The pair interactions of the bosons are of mean-field type, with coupling strength proportional to 1/N where N is the expected particle number. Assuming that the mass of the tracer particle is proportional to N, we derive generalized Hartree equations in the limit where N tends to infinity. Moreover, we prove the global well-posedness of the associated Cauchy problem for sufficiently weak interaction potentials. This is joint work with Avy Soffer (Rutgers University).
Abstract: The Graph Matching problem is a robust version of the Graph Isomorphism problem: given two not-necessarily-isomorphic graphs, the goal is to find a permutation of the vertices which maximizes the number of common edges. We study a popular average-case variant; we deviate from the common heuristic strategy and give the first quasi-polynomial time algorithm, where previously only sub-exponential time algorithms were known.
Abstract: The asymmetric simple exclusion process (ASEP) is a model of particles hopping on a one-dimensional lattice, subject to the condition that there is at most one particle per site. This model was introduced in 1970 by biologists (as a model for translation in protein synthesis) but has since been shown to display a rich mathematical structure. There are many variants of the model — e.g. the lattice could be a ring, or a line with open boundaries. One can also allow multiple species of particles with different "weights." I will explain how one can give combinatorial formulas for the stationary distribution using various kinds of tableaux. I will also explain how the ASEP is related to interesting families of orthogonal polynomials, including Askey-Wilson polynomials, Koornwinder polynomials, and Macdonald polynomials.
Abstract: We will present the Bourgain-Dyatlov theorem on the line, it's connection with other uncertainty principles in harmonic analysis, and my recent partial progress with Rui Han on the problem of higher dimensions.
11/14/2018 David Gamarnik (MIT) Title: Two Algorithmic Hardness Results in Spin Glasses and Compressive Sensing.
Abstract: I will discuss two computational problems in the area of random combinatorial structures. The first one is the problem of computing the partition function of a Sherrington-Kirkpatrick spin glass model. While the the problem of computing the partition functions associated with arbitrary instances is known to belong to the #P complexity class, the complexity of the problem for random instances is open. We show that the problem of computing the partition function exactly (in an appropriate sense) for the case of instances involving Gaussian couplings is #P-hard on average. The proof uses Lipton's trick of computation modulo large prime number, reduction of the average case to the worst case instances, and the near uniformity of the "stretched" log-normal distribution.
In the second part we will discuss the problem of explicit construction of matrices satisfying the Restricted Isometry Property (RIP). This challenge arises in the field of compressive sensing. While random matrices are known to satisfy the RIP with high probability, the problem of explicit (deterministic) construction of RIP matrices eluded efforts and hits the so-called "square root" barrier which I will discuss in the talk. Overcoming this barrier is an open problem explored widely in the literature. We essentially resolve this problem by showing that an explicit construction of RIP matrices implies an explicit construction of graphs satisfying a very strong form of Ramsey property, which has been open since the seminal work of Erdos in 1947.
Abstract: We consider the product of m independent iid random matrices as m is fixed and the sizes of the matrices tend to infinity. In the case when the factor matrices are drawn from the complex Ginibre ensemble, Akemann and Burda computed the limiting microscopic correlation functions. In particular, away from the origin, they showed that the limiting correlation functions do not depend on m, the number of factor matrices. We show that this behavior is universal for products of iid random matrices under a moment matching hypothesis. In addition, we establish universality results for the linear statistics for these product models, which show that the limiting variance does not depend on the number of factor matrices either. The proofs of these universality results require a near-optimal lower bound on the least singular value for these product ensembles.
Abstract: I will present results on the scaling limit and asymptotics of the balanced excited random walk and related processes. This is a walk the that moves vertically on the first visit to a vertex, and horizontally on every subsequent visit. We also analyze certain versions of "clairvoyant scheduling" of random walks.
Joint work with Mark Holmes and Alejandro Ramirez.
Abstract: Quantum many-body systems usually reside in their lowest energy states. This, among other things, motivates understanding the gap, which is generally an undecidable problem. Nevertheless, we prove that generically local quantum Hamiltonians are gapless in any dimension and on any graph with bounded maximum degree.
We then provide an applied and approximate answer to an old problem in pure mathematics. Suppose the eigenvalue distributions of two matrices M_1 and M_2 are known. What is the eigenvalue distribution of the sum M_1+M_2? This problem has a rich pure mathematics history dating back to H. Weyl (1912) with many applications in various fields. Free probability theory (FPT) answers this question under certain conditions. We will describe FPT and show examples of its powers for approximating physical quantities such as the density of states of the Anderson model, quantum spin chains, and gapped vs. gapless phases of some Floquet systems. These physical quantities are often hard to compute exactly (provably NP-hard). Nevertheless, using FPT and other ideas from random matrix theory excellent approximations can be obtained. Besides the applications presented, we believe the techniques will find new applications in fresh new contexts.
Abstract: The perceptron is a toy model of a simple neural network that stores a collection of given patterns. Its analysis reduces to a simple problem in high-dimensional geometry, namely, understanding the intersection of the cube (or sphere) with a collection of random half-spaces. Despite the simplicity of this model, its high-dimensional asymptotics are not well understood. I will describe what is known and present recent results.
Abstract: In this talk I present some variational problems of Aharonov-Bohm type, i.e., they include a magnetic flux that is entirely concentrated at a point. This is maybe the simplest example of a variational problem for systems, the wave function being necessarily complex. The functional is rotationally invariant and the issue to be discussed is whether the optimizers have this symmetry or whether it is broken.
Science Center 411 Ilya Kachkovskiy (Michigan State University) Title: Localization and delocalization for interacting 1D quasiperiodic particles.
Abstract: We consider a system of two interacting one-dimensional quasiperiodic particles as an operator on $\ell^2(\mathbb Z^2)$. The fact that particle frequencies are identical, implies a new effect compared to generic 2D potentials: the presence of large coupling localization depends on symmetries of the single-particle potential. If the potential has no cosine-type symmetries, then we are able to show large coupling localization at all energies, even if the interaction is not small (with some assumptions on its complexity). If symmetries are present, we can show localization away from finitely many energies, thus removing a fraction of spectrum from consideration. We also demonstrate that, in the symmetric case, delocalization can indeed happen if the interaction is strong, at the energies away from the bulk spectrum. The result is based on joint works with Jean Bourgain and Svetlana Jitomirskaya.
Science Center 232 Anna Vershynina (University of Houston) Title: How fast can entanglement be generated in quantum systems?
Abstract: We investigate the maximal rate at which entanglement can be generated in bipartite quantum systems. The goal is to upper bound this rate. All previous results in closed systems considered entanglement entropy as a measure of entanglement. I will present recent results, where entanglement measure can be chosen from a large class of measures. The result is derived from a general bound on the trace-norm of a commutator, and can, for example, be applied to bound the entanglement rate for Renyi and Tsallis entanglement entropies.
Abstract: We derive the 3D energy-critical quintic NLS from quantum many-body dynamics with 3-body interaction in the T^3 (periodic) setting. Due to the known complexity of the energy critical setting, previous progress was limited in comparison to the 2-body interaction case yielding energy subcritical cubic NLS. We develop methods to prove the convergence of the BBGKY hierarchy to the infinite Gross-Pitaevskii (GP) hierarchy, and separately, the uniqueness of large GP solutions. Since the sharp trace estimate used in the previous proofs of convergence is false in our setting, we instead introduce a new frequency interaction analysis and apply the finite dimensional quantum de Finetti theorem. For the large solution uniqueness argument, we discover the new HUFL (hierarchical uniform frequency localization) property for the GP hierarchy and use it to prove a new type of uniqueness theorem.
Abstract: Fyodorov, Hiary and Keating have predicted the size of local maxima of L-function along the critical axis, based on analogous random matrix statistics. I will explain this prediction in the context of the log-correlated universality class and branching structures. In particular I will explain why the Riemann zeta function exhibits log-correlations, and outline the proof for the leading order of the maximum in the Fyodorov, Hiary and Keating prediction. Joint work with Arguin, Belius, Radziwill and Soundararajan.
Abstract: I consider matrices formed by a random $N\times N$ matrix drawn from the Gaussian Orthogonal Ensemble (or Gaussian Unitary Ensemble) plus a rank-one perturbation of strength $\theta$, and focus on the largest eigenvalue, $x$, and the component, $u$, of the corresponding eigenvector in the direction associated to the rank-one perturbation. I will show how to obtain the large deviation principle governing the atypical joint fluctuations of $x$ and $u$. Interestingly, for $\theta>1$, in large deviations characterized by a small value of $u$, i.e. $u<1-1/\theta$, the second-largest eigenvalue pops out from the Wigner semi-circle and the associated eigenvector orients in the direction corresponding to the rank-one perturbation. These results can be generalized to the Wishart Ensemble, and extended to the first $n$ eigenvalues and the associated eigenvectors.
Finally, I will discuss motivations and applications of these results to the study of the geometric properties of random high-dimensional functions—a topic that is currently attracting a lot of attention in physics and computer science.
Abstract: We present a full analysis of the spectrum of graphene in magnetic fields with constant flux through every hexagonal comb. In particular, we provide a rigorous foundation for self-similarity by showing that for irrational flux, the spectrum of graphene is a zero measure Cantor set. We also show that for vanishing flux, the spectral bands have nontrivial overlap, which proves the discrete Bethe-Sommerfeld conjecture for the graphene structure. This is based on joint works with S. Becker, J. Fillman and S. Jitomirskaya.
Abstract: We present a pathwise well-posedness theory for stochastic porous media and fast diffusion equations driven by nonlinear, conservative noise. Such equations arise in the theory of mean field games, approximate the Dean-Kawasaki equation in fluctuating fluid dynamics, describe the fluctuating hydrodynamics of the zero range process, and model the evolution of a thin film in the regime of negligible surface tension. Motivated by the theory of stochastic viscosity solutions, we pass to the equation's kinetic formulation, where the noise enters linearly and can be inverted using the theory of rough paths. The talk is based on joint work with Benjamin Gess.
By using a formula relating topological entropy and cohomological pressure, we obtain several rigidity results about contact Anosov flows. For example, we prove the following result: Let $\varphi$ be a $C^\infty$ contact Anosov flow. If its Anosov splitting is $C^2$ and it is $C^0$ orbit equivalent to the geodesic flow of a closed negatively curved Riemannian manifold, then the cohomological pressure and the metric entropy of $\varphi$ coincide. This result generalizes a result of U. Hamenstädt for geodesic flows.
Keywords: Anosov flow, cohomological pressure, entropy.
Mathematics Subject Classification: Primary: 37A35, 34D20; Secondary: 37D35, 37D4.
Giovanni Forni. The cohomological equation for area-preserving flows on compact surfaces. Electronic Research Announcements, 1995, 1: 114-123.
I'm trying to understand the CAPM model and how we can use it to understand efficient portfolios. Specifically, I'm trying to use the CML line (mapping expected returns and standard deviations of portfolios) to value proposed portfolios.
In this scenario: risk free rate = 2%. Expected excess return on the market portfolio is 8% (so, I'm assuming, the expected return on the market portfolio is 10%). The last given value is that the standard deviation of the market portfolio is 20%.
Based on the Sharpe Ratio (ie: the slope of the CML), I deduced that portfolio A is unfeasible and C is inefficient, whereas B falls on the CML and must therefore be efficient for the level of risk.
The next question I'm posed with is "How can the expected return of the winning portfolio be achieved? Specify the amount invested in each asset/portfolio of assets?"
It is given that I have some number X to invest, but I'm not quite sure how to approach this problem. The question does not seem too clear to me.
It is really simple and probably not a question for this forum.
You just need: $\alpha \times 0.10 + (1-\alpha) \times 0.02 = 0.12$, where $\alpha$ is the fraction invested in the market portfolio. Solving gives $\alpha = 1.25$: invest $1.25X$ in the market portfolio and borrow $0.25X$ at the risk-free rate. Then check the standard deviation, which should be $1.25 \times 20\% = 25\%$, i.e. .25.
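A quick numerical check of that algebra (a sketch; returns are written as decimals and X stands for the total amount to invest):

```python
# Combine the market portfolio with the risk-free asset to hit the target return.
rf, r_market, sigma_market = 0.02, 0.10, 0.20
target_return = 0.12

alpha = (target_return - rf) / (r_market - rf)  # fraction of X in the market portfolio
print(alpha)                   # 1.25 -> invest 1.25*X in the market portfolio
print(1 - alpha)               # -0.25 -> borrow 0.25*X at the risk-free rate
print(alpha * sigma_market)    # 0.25, the standard-deviation check
```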
We consider anchored and ANOVA spaces of functions with mixed first order partial derivatives bounded in $L_1$ and $L_\infty$ norms. The spaces are weighted and we provide necessary and sufficient conditions so that the corresponding spaces have equivalent norms with constants independent of the number of variables. This includes the case of countably many variables.
Joint work with Grzegorz Wasilkowski.
The workshop ``Calculus of Variations'' took place from July 9 to 15, 2006, and was attended by almost fifty participants, mostly from European and North American universities and research institutes. There were 24 lectures on recent research topics, plus a review lecture on the Lieb-Thirring inequalities by Michael Loss (Georgia Tech, Atlanta). As the workshop had no specific focus, talks covered a wide range of topics, with the aim of featuring different research trends, bringing new problems to the fore, and stimulating interaction between mathematicians from different backgrounds. \smallskip Five lectures were focused on problems related to Continuum Mechanics and Materials Science. Gero Friesecke (Munich and Warwick) presented some results on a simplified model for molecules, where the aim is to give a rigorous explanation of the screening effect (i.e., the fact that the interaction of electrically balanced molecules due to electrostatic forces is short ranged); this problem is still open and presumably quite challenging in case of `realistic' models. L\'aszl\'o Sz\'ekelyhidy (ETH Z\"urich) presented new result about the structure of quasiconvex hulls for sets of $2\times2$ matrices, perhaps the most interesting development on this topic in recent years. Sergio Conti (Duisburg) considered the asymptotic behaviour of an energy functional that appears in the modeling of different physical problems, such as blistering in elastic films, magnetic thin films, etc.; the main result presented in his lecture adds one more piece to the work of many authors towards the proof of a conjecture by Aviles and Giga on the variational limit of such functional. The lecture by Felix Otto (Bonn) was focused on the rigorous analysis of pattern formation in micromagnetics: this type of pattern formation is particularly interesting because of the complexity of the observed behaviours -- not yet fully explained in rigorous terms -- and of the relative simplicity of the underlying continuum model. Related to this topic was also the lecture of Hans Kn\"upfer (Bonn). \smallskip Four lectures dealt with regularity problems of different sorts. G.~Rosario Mingione (Parma) reviewed some recent developments on the regularity of solutions of nonlinear parabolic systems. Michael Struwe (ETH Z\"urich) presented a new approach to regularity for harmonic maps valued in a hypersurfaces, yielding new results when the domain dimension is larger than $2$. The regularity of harmonic maps valued in Riemannian manifolds was also considered by Ernst Kuwert (Freiburg); these results stemmed from other results on the conformal structure of surfaces with suitable bounds on the Willmore energy. Mariel Saez (MPI for Gravitationl Physics, Potsdam) presented a Lipschitz regularity result for the pseudo-infinity Laplacian. \smallskip A certain number of lectures were related to shape optimization and optimal transport problems. Almut Burchard (Toronto) presented some partial results about the shape of closed curves in the three-dimensional space that minimize the first eigenvalue of the associated one-dimensional Schr\"odinger operator; it is conjectured that these curves are circles (among other things, the conjecture is related to the optimal constant in a particular Lieb-Thirring inequality). Jochen Denzler (Knoxville) and Giuseppe Buttazzo (Pisa) considered other optimization problems related to the first eigenvalue of (variants of) the Laplace operator on a given domain. 
Alexander Plakhov (Aveiro) studied bodies of minimal resistance moving through a rarefied particle gas. Francesco Maggi (Duisburg) and Aldo Pratelli (Pavia) presented some recent quantitative versions with optimal exponents of the classical isoperimetric inequality in the $n$-dimensional Euclidean space. Qinglan Xia (UC Davis) proposed a model for the shape formation in tree leaves which postulates a step-by-step optimized growth for the associated transport system (the venation of the leaf), where ``optimized'' refers to a given transport cost. Numerical simulations based on this simple model show that varying the two built-in parameters generates a wide variety of leaf shapes. Vladimir Oliker (Emory University, Atlanta) described a variational approach to the Aleksandrov problem about the existence of closed convex hypersurfaces with prescribed integral Gauss curvature. A similar approach is also used to design reflecting surfaces with prescribed irradiance properties; the functional underlying this variational principle is related to Monge-Kantorovich optimal transport theory. \smallskip Yann Brenier (Nice) considered the problem of foliating the three-dimensional Euclidean space and the four-dimensional Minkowski space by extremal surfaces (which in Minkowski space can be interpreted as classical relativistic strings). One way of obtaining such foliations is finding minimizers or critical points of suitable energy functionals, subject to certain nonlinear constraints; due to these constraints, standard methods do not apply in this case, and the existence of such minimizers is open. Pierre Cardaliaguet (Brest) studied a non-local geometric evolution problem for sets in the $n$-dimensional Euclidean space, which can be formally viewed as the gradient flow of a linear combination of volume and capacity. Since this flow preserves inclusion, it allows for a notion of weak solutions in the sense of viscosity; it is shown that such solutions agree with the limits of the the minimizing movements obtained by time discretization. Diogo Gomes (Instituto Superior Tecnico, Lisbon) reviewed some recent results on the viscosity solution of Hamilton-Jacobi equations and the relations with the associated Hamiltonian dynamics, and Aubrey-Mather theory. Olvier Druet (ENS Lyon) presented new results on the bubbling phenomenon for the solutions (and also the Palais-Smale sequences) of sequences of variational elliptic equations in dimension two with critical nonlinearities. Robert Jerrard (Toronto) described a version of the $\Gamma$-convergence method designed for saddle points instead of minima, and used this abstract tool to obtain non-trivial solutions to the Ginzburg-Landau system in dimension three. Reiner Sch\"atzle (T\"ubingen) gave a proof of (a modified version of) a conjecture by De Giorgi on the approximation of the Willmore functional for hypersurfaces in dimension three; the conjecture is still open in higher dimensions. Keith Ball (University College London) presented the proof of a long-standing conjecture (due to Lieb) on the entropy gap between the normalized sum of $N$ independent copies of a given random variable $X$ and its limit as $N\to\infty$, i.e., the Gaussian distribution. A key role in the proof is played by a new variational characterization of Fisher information. 
The lecture by Gerhard Huisken (MPI for Gravitational Physics, Potsdam) was focused on the problem of defining mass in general relativity; in particular, he presented a new definition based on the isoperimetric inequality (more precisely, on the asymptotic behaviour of the isoperimetric profile), and some results on the properties of this mass. One of the advantages of this definition, compared to others based on the notion of curvature, is the relatively simple calculus that is required for handling it. Furthermore, it can be adapted so as to obtain a notion of localized mass.
The posterior distribution is equal to the joint distribution divided by the marginal distribution of the evidence.
For many useful models the marginal distribution of the evidence is hard or impossible to calculate analytically.
To produce an interesting MCMC animation, we simulate a linear regression data set and animate samples from the posteriors of the regression coefficients.
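A minimal sketch of the kind of model being animated, written against the (older) PyMC3 API that produces the transform messages shown below; the data sizes, priors, and variable names are assumptions, not the original notebook's code:

```python
import numpy as np
import pymc3 as pm

# Simulate a small linear regression data set (sizes and true parameters are made up).
np.random.seed(0)
x = np.random.randn(100)
y = 1.0 + 2.0 * x + np.random.normal(scale=0.5, size=100)

with pm.Model() as model:
    intercept = pm.Normal('intercept', mu=0., sd=10.)
    slope = pm.Normal('slope', mu=0., sd=10.)
    tau = pm.Gamma('tau', alpha=1., beta=1.)       # precision of the observation noise
    mu = intercept + slope * x
    pm.Normal('y_obs', mu=mu, tau=tau, observed=y)
    trace = pm.sample(2000)                        # posterior samples used for the animation
```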
Applied log-transform to tau and added transformed tau_log_ to model.
WARNING (theano.tensor.blas): We did not found a dynamic library into the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
We observe three successes in ten trials, and want to infer the true success probability.
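A sketch of such a model in PyMC3 (the Uniform prior on p is an assumption; defining a bounded variable like this is what triggers the interval-transform message that follows):

```python
import pymc3 as pm

with pm.Model() as model:
    p = pm.Uniform('p', 0., 1.)                       # prior on the success probability
    obs = pm.Binomial('obs', n=10, p=p, observed=3)   # three successes in ten trials
    trace = pm.sample(1000)
```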
Applied interval-transform to p and added transformed p_interval_ to model.
from approximate distributions, but we can't calculate the true posterior distribution.
is equivalent to maximizing the Evidence Lower BOund (ELBO), which only requires calculating the joint distribution.
In this example, we minimize the Kullback-Leibler divergence between a full-rank covariance Gaussian distribution and a diagonal covariance Gaussian distribution.
The congressional ideal point model uses the 1984 congressional voting records data set from the UCI Machine Learning Repository.
Load and code the congressional voting data.
"No" votes ('n') are coded as zero, "yes" ('y') votes are coded as one, and skipped/unknown votes ('?') are coded as np.nan. The skipped/unknown votes will be dropped later.
Also, code the representative's parties.
Republicans ('republican') are coded as zero and democrats ('democrat') are coded as one.
Transform the voting data from wide form to "tidy" form; that is, one row per representative and bill combination.
If you haven't already, go read Hadley Wickham's Tidy Data paper.
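A hedged sketch of that coding and reshaping step with pandas (the column names and the tiny two-bill frame below are made up; the real data set has one column per bill):

```python
import numpy as np
import pandas as pd

# Toy wide-form data: one row per representative, one column per bill.
wide = pd.DataFrame({
    'rep': ['rep_1', 'rep_2'],
    'party': [0, 1],                       # 0 = republican, 1 = democrat
    'bill_1': ['y', 'n'],
    'bill_2': ['?', 'y'],
})

vote_codes = {'n': 0., 'y': 1., '?': np.nan}
tidy = (wide
        .replace({'bill_1': vote_codes, 'bill_2': vote_codes})
        .melt(id_vars=['rep', 'party'], var_name='bill', value_name='vote')
        .dropna(subset=['vote']))          # drop the skipped/unknown votes
```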
Note that we do not specify a hierarchical prior on $\alpha_1, \ldots, \alpha_K$ in order to ensure that the model is identifiable. See Practical issues in implementing and understanding Bayesian ideal point estimation for an in-depth discussion of identification issues in Bayesian ideal point models.
The dependent density regression uses LIDAR data from Larry Wasserman's book All of Nonparametric Statistics.
The stick-breaking process transforms an arbitrary set of values in the interval $[0, 1]$ to a set of weights in $[0, 1]$ that sum to one.
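A small NumPy illustration of one common stick-breaking convention (the truncation handling in the actual regression model may differ; here the leftover stick is assigned to a final weight so the result sums to one):

```python
import numpy as np

def stick_breaking(v):
    """Map values v_1, ..., v_K in [0, 1] to weights that sum to one.

    w_k = v_k * prod_{j<k} (1 - v_j); the last weight takes whatever stick remains.
    """
    v = np.asarray(v, dtype=float)
    remaining = np.concatenate([[1.], np.cumprod(1. - v)[:-1]])
    w = v * remaining
    return np.append(w, 1. - w.sum())      # leftover stick, so the weights sum to one

print(stick_breaking([0.5, 0.5, 0.5]))     # [0.5, 0.25, 0.125, 0.125]
```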
The NormalMixture class is a bit of a hack to marginalize over the categorical indicators that would otherwise be necessary to implement a normal mixture model in PyMC3. This also speeds convergence in both MCMC and variational inference algorithms. See the Stan User's Guide and Reference Manual for more information on the benefits of marginalization.
Proceedings of Machine Learning Research, PMLR 89:1215-1225, 2019.
We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework to simultaneously provide (1) resiliency against stragglers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. LCC, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. LCC significantly generalizes prior works to go beyond linear computations. It also enables secure and private computing in distributed settings, improving the computation and communication efficiency of the state-of-the-art. Furthermore, we prove the optimality of LCC by showing that it achieves the optimal tradeoff between resiliency, security, and privacy, i.e., in terms of tolerating the maximum number of stragglers and adversaries, and providing data privacy against the maximum number of colluding workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to $13.43\times$, and also achieves a $2.36\times$-$12.65\times$ speedup over the state-of-the-art straggler mitigation strategies.
the robot that is being presented is equipped with an acoustic speaker that can generate 10-100 kHz sound waves. I want to purchase and test this product. I tried contacting the author but got no response.
Can anyone identify the speaker shown below?
Browse other questions tagged mobile-robot sensors acoustic-rangefinder or ask your own question.
How to model the noise in a range sensor's return?
What's a good pose estimation method for high precision (<5mm per-axis) solutions at short range (<50cm)?
Can we use this line sensor as a proximity sensor?
What's the difference between $H_2$ and $H_\infty$ control?
What is the front mechanism of this robot's track called?
Consider the following reparametrization of $\mathbb R^3$: $$(x,y,z)\mapsto (x-\cos z,y-\sin z,z).$$ Note that the horizontal translations of your helix go to the vertical lines. So the pullback of the canonical metric on $\mathbb R^3$ is the metric you want.
I'm pretty sure that Nil geometry works, but I don't know a reference. I seem to remember that one may think of Nil geometry as fibering over the plane, and that geodesics connecting different points in the same fiber had projections to circles. I think then these give helices.
A more general criterion was given by Dennis Sullivan for when a foliation may be realized as the geodesics of a Riemannian metric.
Not the answer you're looking for? Browse other questions tagged euclidean-geometry dg.differential-geometry mg.metric-geometry or ask your own question.
Aren't Riemannian geodesics also geodesics of the associated Cartan geometry?
Abstract: In this paper we develop a Malliavin-Skorohod type calculus for additive processes in the $L^0$ and $L^1$ settings, extending the probabilistic interpretation of the Malliavin-Skorohod operators to this context. We prove calculus rules and obtain a generalization of the Clark-Hausmann-Ocone formula for random variables in $L^1$. Our theory is then applied to extend the stochastic integration with respect to volatility modulated Lévy-driven Volterra processes recently introduced in the literature. Our work yields substantially weaker conditions that permit, e.g., integration with respect to Volterra processes driven by $\alpha$-stable processes with $\alpha < 2$. The presentation focuses on jump type processes.
Fully Convolution Networks (FCN) have achieved great success in dense prediction tasks including semantic segmentation. In this paper, we start from discussing FCN by understanding its architecture limitations in building a strong segmentation network. Next, we present our Improved Fully Convolution Network (IFCN). In contrast to FCN, IFCN introduces a context network that progressively expands the receptive fields of feature maps. In addition, dense skip connections are added so that the context network can be effectively optimized. More importantly, these dense skip connections enable IFCN to fuse rich-scale context to make reliable predictions. Empirically, those architecture modifications are proven to be significant to enhance the segmentation performance. Without engaging any contextual post-processing, IFCN significantly advances the state-of-the-arts on ADE20K (ImageNet scene parsing), Pascal Context, Pascal VOC 2012 and SUN-RGBD segmentation datasets.
We adopt Convolutional Neural Networks (CNNs) to be our parametric model to learn discriminative features and classifiers for local patch classification. Based on the occurrence frequency distribution of classes, an ensemble of CNNs (CNN-Ensemble) are learned, in which each CNN component focuses on learning different and complementary visual patterns. The local beliefs of pixels are output by CNN-Ensemble. Considering that visually similar pixels are indistinguishable under local context, we leverage the global scene semantics to alleviate the local ambiguity. The global scene constraint is mathematically achieved by adding a global energy term to the labeling energy function, and it is practically estimated in a non-parametric framework. A large margin based CNN metric learning method is also proposed for better global belief estimation. In the end, the integration of local and global beliefs gives rise to the class likelihood of pixels, based on which maximum marginal inference is performed to generate the label prediction maps. Even without any post-processing, we achieve state-of-the-art results on the challenging SiftFlow and Barcelona benchmarks.
In image labeling, local representations for image units are usually generated from their surrounding image patches, thus long-range contextual information is not effectively encoded. In this paper, we introduce recurrent neural networks (RNNs) to address this issue. Specifically, directed acyclic graph RNNs (DAG-RNNs) are proposed to process DAG-structured images, which enables the network to model long-range semantic dependencies among image units. Our DAG-RNNs are capable of tremendously enhancing the discriminative power of local representations, which significantly benefits the local classification. Meanwhile, we propose a novel class weighting function that attends to rare classes, which phenomenally boosts the recognition accuracy for non-frequent classes. Integrating with convolution and deconvolution layers, our DAG-RNNs achieve new state-of-the-art results on the challenging SiftFlow, CamVid and Barcelona benchmarks.
In crowd counting datasets, people appear at different scales, depending on their distance to the camera. To address this issue, we propose a novel multi-branch scale-aware attention network that exploits the hierarchical structure of convolutional neural networks and generates, in a single forward pass, multi-scale density predictions from different layers of the architecture. To aggregate these maps into our final prediction, we present a new soft attention mechanism that learns a set of gating masks. Furthermore, we introduce a scale-aware loss function to regularize the training of different branches and guide them to specialize on a particular scale. As this new training requires ground-truth annotations for the size of each head, we also propose a simple, yet effective technique to estimate it automatically. Finally, we present an ablation study on each of these components and compare our approach against the literature on 4 crowd counting datasets: UCF-QNRF, ShanghaiTech A & B and UCF_CC_50. Without bells and whistles, our approach achieves state-of-the-art on all these datasets. We observe a remarkable improvement on the UCF-QNRF (25%) and a significant one on the others (around 10%).
Existing deep convolutional neural networks (CNNs) have shown their great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which are performed on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representation. In contrast, recurrent neural networks (RNNs) are well known for their ability of encoding contextual information among sequential data, and they only require a limited number of network parameters. General RNNs can hardly be directly applied on non-sequential data. Thus, we proposed the hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations. While the cross RNN scale connections target on modeling scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) hierarchical long-short term memory recurrent network (HLSTM), which performs better than HSRN with the price of more computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encodes spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, MIT indoor, and competitive results on ILSVRC 2012.
Given a target name, which can be a product aspect or entity, identifying its aspect words and opinion words in a given corpus is a fine-grained task in target-based sentiment analysis (TSA). This task is challenging, especially when we have no labeled data and we want to perform it for any given domain. To address it, we propose a general two-stage approach. Stage one extracts/groups the target-related words (call t-words) for a given target. This is relatively easy as we can apply an existing semantics-based learning technique. Stage two separates the aspect and opinion words from the grouped t-words, which is challenging because we often do not have enough word-level aspect and opinion labels. In this work, we formulate this problem in a PU learning setting and incorporate the idea of lifelong learning to solve it. Experimental results show the effectiveness of our approach.
While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence). Factorization Machine provides a possible approach to learning element-wise interaction for recommender systems, but they are not directly applicable to our task due to the inability to model contexts and word sequences. In this work, we develop two Position-aware Factorization Machines which consider word interaction, context and position information. Such information is jointly encoded in a set of sentiment-oriented word interaction vectors. Compared to traditional word embeddings, SWI vectors explicitly capture sentiment-oriented word interaction and simplify the parameter learning. Experimental results show that while they have comparable performance with state-of-the-art methods for document-level classification, they benefit the snippet/sentence-level sentiment analysis.
In order to encode the class correlation and class specific information in image representation, we propose a new local feature learning approach named Deep Discriminative and Shareable Feature Learning (DDSFL). DDSFL aims to hierarchically learn feature transformation filter banks to transform raw pixel image patches to features. The learned filter banks are expected to: (1) encode common visual patterns of a flexible number of categories; (2) encode discriminative information; and (3) hierarchically extract patterns at different visual levels. Particularly, in each single layer of DDSFL, shareable filters are jointly learned for classes which share the similar patterns. Discriminative power of the filters is achieved by enforcing the features from the same category to be close, while features from different categories to be far away from each other. Furthermore, we also propose two exemplar selection methods to iteratively select training data for more efficient and effective learning. Based on the experimental results, DDSFL can achieve very promising performance, and it also shows great complementary effect to the state-of-the-art Caffe features.
For visual tracking, most of the traditional correlation filters (CF) based methods suffer from the bottleneck of feature redundancy and lack of motion information. In this paper, we design a novel tracking framework, called multi-hierarchical independent correlation filters (MHIT). The framework consists of motion estimation module, hierarchical features selection, independent CF online learning, and adaptive multi-branch CF fusion. Specifically, the motion estimation module is introduced to capture motion information, which effectively alleviates the object partial occlusion in the temporal video. The multi-hierarchical deep features of CNN representing different semantic information can be fully excavated to track multi-scale objects. To better overcome the deep feature redundancy, each hierarchical features are independently fed into a single branch to implement the online learning of parameters. Finally, an adaptive weight scheme is integrated into the framework to fuse these independent multi-branch CFs for the better and more robust visual object tracking. Extensive experiments on OTB and VOT datasets show that the proposed MHIT tracker can significantly improve the tracking performance. Especially, it obtains a 20.1% relative performance gain compared to the top trackers on the VOT2017 challenge, and also achieves new state-of-the-art performance on the VOT2018 challenge.
Matching pedestrians across multiple camera views known as human re-identification (re-identification) is a challenging problem in visual surveillance. In the existing works concentrating on feature extraction, representations are formed locally and independent of other regions. We present a novel siamese Long Short-Term Memory (LSTM) architecture that can process image regions sequentially and enhance the discriminative capability of local feature representation by leveraging contextual information. The feedback connections and internal gating mechanism of the LSTM cells enable our model to memorize the spatial dependencies and selectively propagate relevant contextual information through the network. We demonstrate improved performance compared to the baseline algorithm with no LSTM units and promising results compared to state-of-the-art methods on Market-1501, CUHK03 and VIPeR datasets. Visualization of the internal mechanism of LSTM cells shows meaningful patterns can be learned by our method.
Recently, segmentation neural networks have been significantly improved by demonstrating very promising accuracies on public benchmarks. However, these models are very heavy and generally suffer from low inference speed, which limits their application scenarios in practice. Meanwhile, existing fast segmentation models usually fail to obtain satisfactory segmentation accuracies on public benchmarks. In this paper, we propose a teacher-student learning framework that transfers the knowledge gained by a heavy and better performed segmentation network (i.e. teacher) to guide the learning of fast segmentation networks (i.e. student). Specifically, both zero-order and first-order knowledge depicted in the fine annotated images and unlabeled auxiliary data are transferred to regularize our student learning. The proposed method can improve existing fast segmentation models without incurring extra computational overhead, so it can still process images with the same fast speed. Extensive experiments on the Pascal Context, Cityscape and VOC 2012 datasets demonstrate that the proposed teacher-student learning framework is able to significantly boost the performance of student network.
This paper proposes a new method called Multimodal RNNs for RGB-D scene semantic segmentation. It is optimized to classify image pixels given two input sources: RGB color channels and Depth maps. It simultaneously performs training of two recurrent neural networks (RNNs) that are crossly connected through information transfer layers, which are learnt to adaptively extract relevant cross-modality features. Each RNN model learns its representations from its own previous hidden states and transferred patterns from the other RNNs previous hidden states; thus, both model-specific and crossmodality features are retained. We exploit the structure of quad-directional 2D-RNNs to model the short and long range contextual information in the 2D input image. We carefully designed various baselines to efficiently examine our proposed model structure. We test our Multimodal RNNs method on popular RGB-D benchmarks and show how it outperforms previous methods significantly and achieves competitive results with other state-of-the-art works.
In this paper, we study the challenging problem of multi-object tracking in a complex scene captured by a single camera. Different from the existing tracklet association-based tracking methods, we propose a novel and efficient way to obtain discriminative appearance-based tracklet affinity models. Our proposed method jointly learns the convolutional neural networks (CNNs) and temporally constrained metrics. In our method, a Siamese convolutional neural network (CNN) is first pre-trained on the auxiliary data. Then the Siamese CNN and temporally constrained metrics are jointly learned online to construct the appearance-based tracklet affinity models. The proposed method can jointly learn the hierarchical deep features and temporally constrained segment-wise metrics under a unified framework. For reliable association between tracklets, a novel loss function incorporating temporally constrained multi-task learning mechanism is proposed. By employing the proposed method, tracklet association can be accomplished even in challenging situations. Moreover, a new dataset with 40 fully annotated sequences is created to facilitate the tracking evaluation. Experimental results on five public datasets and the new large-scale dataset show that our method outperforms several state-of-the-art approaches in multi-object tracking.
Removing rain streaks from a single image continues to draw attentions today in outdoor vision systems. In this paper, we present an efficient method to remove rain streaks. First, the location map of rain pixels needs to be known as precisely as possible, to which we implement a relatively accurate detection of rain streaks by utilizing two characteristics of rain streaks.The key component of our method is to represent the intensity of each detected rain pixel using a linear model: $p=\alpha s + \beta$, where $p$ is the observed intensity of a rain pixel and $s$ represents the intensity of the background (i.e., before rain-affected). To solve $\alpha$ and $\beta$ for each detected rain pixel, we concentrate on a window centered around it and form an $L_2$-norm cost function by considering all detected rain pixels within the window, where the corresponding rain-removed intensity of each detected rain pixel is estimated by some neighboring non-rain pixels. By minimizing this cost function, we determine $\alpha$ and $\beta$ so as to construct the final rain-removed pixel intensity. Compared with several state-of-the-art works, our proposed method can remove rain streaks from a single color image much more efficiently - it offers not only a better visual quality but also a speed-up of several times to one degree of magnitude.
This paper studies a Nystr\"om type subsampling approach to large kernel learning methods in the misspecified case, where the target function is not assumed to belong to the reproducing kernel Hilbert space generated by the underlying kernel. This case is less understood, in spite of its practical importance. To model such a case, the smoothness of target functions is described in terms of general source conditions. It is surprising that almost for the whole range of the source conditions, describing the misspecified case, the corresponding learning rate bounds can be achieved with just one value of the regularization parameter. This observation allows a formulation of mild conditions under which the plain Nystr\"om subsampling can be realized with subquadratic cost maintaining the guaranteed learning rates.
Rain streaks will inevitably be captured by some outdoor vision systems, which lowers the image visual quality and also interferes various computer vision applications. We present a novel rain removal method in this paper, which consists of two steps, i.e., detection of rain streaks and reconstruction of the rain-removed image. An accurate detection of rain streaks determines the quality of the overall performance. To this end, we first detect rain streaks according to pixel intensities, motivated by the observation that rain streaks often possess higher intensities compared to other neighboring image structures. Some mis-detected locations are then refined through a morphological processing and the principal component analysis (PCA) such that only locations corresponding to real rain streaks are retained. In the second step, we separate image gradients into a background layer and a rain streak layer, thanks to the image quasi-sparsity prior, so that a rain image can be decomposed into a background layer and a rain layer. We validate the effectiveness of our method through quantitative and qualitative evaluations. We show that our method can remove rain (even for some relatively bright rain) from images robustly and outperforms some state-of-the-art rain removal algorithms.
Sentence compression is an important problem in natural language processing. In this paper, we firstly establish a new sentence compression model based on the probability model and the parse tree model. Our sentence compression model is equivalent to an integer linear program (ILP) which can both guarantee the syntax correctness of the compression and save the main meaning. We propose using a DC (Difference of convex) programming approach (DCA) for finding local optimal solution of our model. Combing DCA with a parallel-branch-and-bound framework, we can find global optimal solution. Numerical results demonstrate the good quality of our sentence compression model and the excellent performance of our proposed solution algorithm.
In this paper, we propose a deep learning architecture that produces accurate dense depth for the outdoor scene from a single color image and a sparse depth. Inspired by indoor depth completion, our network estimates surface normals as the intermediate representation to produce dense depth, and can be trained end-to-end. With a modified encoder-decoder structure, our network effectively fuses the dense color image and the sparse LiDAR depth. To address outdoor specific challenges, our network predicts a confidence mask to handle mixed LiDAR signals near foreground boundaries due to occlusion, and combines estimates from the color image and surface normals with learned attention maps to improve the depth accuracy especially for distant areas. Extensive experiments demonstrate that our model improves upon the state-of-the-art performance on the KITTI depth completion benchmark. An ablation study shows the positive impact of each model component on the final performance, and comprehensive analysis shows that our model generalizes well to input with higher sparsity or from indoor scenes.
Suppose that there is an alien spacecraft travelling towards the Sun. This spacecraft is similar in design, size and power output to Voyager 1 and Voyager 2 as they were immediately after launch from Earth, and is coasting in its orbit (no powered maneuvers taking place).
Also suppose that a budding scientist on present day Earth just so happens to point their instruments (optical telescope, radio telescope, or something else; ground-based or space-based) in exactly the right direction at exactly the right time.
If the spacecraft is communicating at all at this point, it seems unlikely to be transmitting in the direction of Earth.
How far from the Sun (or Earth) could the spacecraft be where we'd still have a chance of detecting it, assuming for a moment that all events line up perfectly for detection? Would we be able to determine that it is likely an extraterrestrial spacecraft, as opposed to some natural interstellar object?
$\phi_0 = 1.22\,\lambda / D$, where $D$ is the diameter of the optics.
$R = 40\times10^6 \ m$, or 40 thousand km. This distance is about the height of the geosynchronous orbit.
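For reference, a rough reconstruction of that estimate in Python; the 500 nm wavelength, the ~6.5 m aperture, and the ~3.7 m (12 ft) probe size are assumptions filled in to roughly reproduce the quoted figure:

```python
# Diffraction-limited detection range for a small probe (all inputs are assumptions).
wavelength = 500e-9    # visible light, metres
aperture = 6.5         # telescope mirror diameter, metres
probe_size = 3.7       # ~12 ft probe, metres

theta = 1.22 * wavelength / aperture   # angular resolution, radians
max_range = probe_size / theta         # distance at which the probe spans one resolution element
print(f"{max_range / 1e3:.0f} km")     # on the order of the ~40,000 km quoted above
```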
If we instead use passive radio astronomy, the largest structure on Earth (the Chinese FAST) has a diameter of 500 meters and operates at a wavelength of 0.10 meters.
This would give a minimum detection distance of about 11000 meters. But I guess in this case we would first see the optical trail of the satellite burning in the atmosphere.
Optics are a bad choice, so maybe think about the radars used in tracking space debris, which have incredible resolution. Of course, that will be reduced the farther out you are looking. Detection of a 2 cm object at 1000 km is not out of the question, so detecting something 12 ft across (if you just want to see it, not gain any surface information) would work out to roughly 180,000 km. By using an active transmission component, you can double the detection range.
So around 400,000 km isn't out of the question with current equipment (optimized for a different purpose). It wouldn't be out of the question to use more power, more or larger receivers, different frequencies etc. to increase this range by a considerable amount. You eliminate the largest factor by allowing the 'lucky spotting' scenario. With this in mind, I see very little reason why detecting something at the edge of the solar system with a purpose built system is out of the question.
As for knowing if it is alien or not, I doubt this is all too feasible without receiving transmission from it. You would know its path, speed, and approximate size. Other than that, you'd have to wait for optics and the object to be much closer.
Instead of using reflected optical wave light from the sun, lets try to detect something that the probe itself is emitting. Any radio emissions are very unlikely to be targeted at Earth, so the most likely emission that we would capture would be black body radiation from the probe itself.
One possible alternate calculation of detection is to simply use the optical resolution equation that L.Dutch used, except substituting in a wavelength of 10,000 nm for 500 nm. This makes the detection range 800,000 km: greater than the distance to the moon.
I tried to calculate the difference between Voyager's IR emissions and the background IR, but couldn't find enough data on background spectra, Voyager's surface area, or several other quantities.
I did note that the cosmic IR background peaks in the 100-1000 $\mu$m range, significantly higher than the peak for Voyager. This suggests that we might be able to get good resolution at the lower wavelengths where Voyager's IR emissions will be maximized.
Not the answer you're looking for? Browse other questions tagged space spaceships hard-science or ask your own question.
How large does a spacecraft need to be to be visible from the surface of the Earth at 400 km altitude?
Could Jupiter moon Europa become habitable when the sun enters its red giant stage?
How many spaceships would it take to block the Sun from the daylight side of the Earth?
How far could a planet be from its star and still be kept habitable by intense greenhouse gases?
How could humanity live on the Sun?
Does anything like expectation of joint distribution exist?
I know how to find the expectation of a function of random variables; I was just wondering whether the expectation of a joint distribution exists.
Since an expectation is an average, which by definition is a statistic (a single value that describes a distribution), can we really capture the behaviour of the entire distribution in both x and y with a single number? Shouldn't we need two values for it (one for x and the other for y)?
We can do so for a function of x and y because that function outputs a single value, hence its expectation is a single number.
Here $x$ is a vector and $f(x)$ is a scalar. So we can integrate the vector $x f(x)$ with respect to the normal uniform measure on $\mathbb R^n$ to get a vector in $\mathbb R^n$.
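As a concrete illustration (a made-up joint distribution; the point is only that the expectation of a random vector is the vector of componentwise expectations), one can estimate it by averaging samples:

```python
import numpy as np

# Hypothetical joint distribution for (X, Y).
rng = np.random.default_rng(0)
z = rng.uniform(size=100_000)
x = z + rng.normal(scale=0.1, size=z.size)
y = 1.0 - z + rng.normal(scale=0.1, size=z.size)

samples = np.column_stack([x, y])   # one row per draw of the random vector (X, Y)
print(samples.mean(axis=0))         # componentwise expectation, roughly [0.5, 0.5]
```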
Not the answer you're looking for? Browse other questions tagged probability-theory probability-distributions random-variables average expected-value or ask your own question.
Why does marginalization of a joint probability distribution use sums?
finite joint distribution implies infinite joint distribution?
Evgeny Ferapontov (Loughborough) Date in autumn to be arranged (postponed from Monday April 16).
Kirill Mackenzie (Sheffield) Tuesday August 7th, 2018, 2pm, LT 7.
For a transitive Lie algebroid $A$, and an ideal in the adjoint bundle (= kernel of the anchor), there is a simple construction of a quotient Lie algebroid over the same base, and this has the usual properties.
When the base manifold is also quotiented, the situation is more complicated.
This talk will describe the general quotient construction, starting with the case of vector bundles.
I'll assume a basic familiarity with Lie algebroids.
Malte Heuer (Sheffield) Monday February 12th, 2018, 2pm, J 11.
I will prove that any triple vector bundle is non-canonically isomorphic to a decomposed one. The method relies on del Carpio-Marek's construction of local splittings of double vector bundles. Our method yields a useful definition of triple vector bundles via atlases of triple vector bundle charts. This is joint work with Madeleine Jotz Lean.
Magdalini Flari (Sheffield) Wednesday November 15th, 2pm, F 38.
Grids are a natural extension of the notion of section to double vector bundles. A grid consists of a pair of linear sections, and constitutes two non-commuting paths from the base manifold to the total space; the warp measures the lack of commutativity.
Well-known geometric objects can be expressed as warps: for example, the bracket of two vector fields is a warp, and, given a connection in a vector bundle, the covariant derivative of a section along a vector field is a warp.
In triple vector bundles, analysis of the six paths from the base manifold to the total space leads to identities among the warps of the constituent double vector bundles.
In this talk we will start with the concept of warp for double and triple vector bundles, build up to a general result for grids in triple vector bundles, and see some applications of this result for grids in the iterated tangent and cotangent bundles. This is joint work with Kirill Mackenzie.
Honglei Lang (MPIM Bonn) Wednesday July 5th, 3.00pm, J 11.
We define double principal bundles (DPBs), for which the frame bundle of a double vector bundle, double Lie groups and double homogeneous spaces are basic examples.
It is shown that a double vector bundle can be realized as the associated bundle of its frame bundle. Also dual structures, gauge transformations and connections in DPBs are investigated.
The visit of Dr Honglei Lang is supported by the Sheffield MSRC.
Drinfeld's double construction for a Lie bialgebra produces a unique Lie bialgebra suitable for quantisation. With the introduction of Lie bialgebroids as linearisations of Poisson-Lie groupoids came the same question of whether a double can be constructed. This proved to be not so straightforward, and indeed, can be considered to be only partially answered.
We will review these double constructions for Lie bialgebras and Lie bialgebroids using the language of supermathematics, and will discuss some of the problems encountered for the bialgebroid case. We will then define the Drinfeld double of a homotopy Lie bialgebra, or an $L_\infty$-bialgebra, and find a necessary condition for the existence.
In the third talk we will complete the construction of a double Lie algebroid of an LA-groupoid, and look at a specific example of an LA-groupoid arising naturally from a Poisson Lie group. We will finish by discussing the general notion of a double Lie algebroid of a double Lie groupoid.
In the second talk, we will briefly discuss some examples of Lie algebroids arising from Lie groupoids; this should tie in with the description of the Lie functor, given in the first seminar. We shall then continue the construction of a double Lie algebroid of an LA-groupoid.
We will complete the construction of a double Lie algebroid of an LA-groupoid, and look at a specific example of an LA-groupoid arising naturally from a Poisson Lie group. We will finish by discussing the general notion of a double Lie algebroid of a double Lie groupoid.
The series of two, possibly three, talks will consist of a precise formulation of the double Lie algebroid of a double Lie groupoid. We will also discuss some of the examples arising in Poisson geometry.
In the first talk we will consider the construction of the double Lie algebroid of an LA-groupoid. This will be a stepping stone in the general construction for a double Lie groupoid.
Knowledge of the standard formation of the Lie algebroid of a Lie groupoid will not be assumed, and the notions of a Lie groupoid and a Lie algebroid will be recalled.
We introduce a big class of Poisson manifolds, the "almost regular" ones. Roughly, these are the Poisson manifolds whose symplectic foliation is regular in a dense open subset. All regular Poisson manifolds are included in this class, as well as all the log-symplectic manifolds and certain Heisenberg-Poisson manifolds. We are looking for desingularizations of such structures. A natural candidate is the holonomy groupoid of the symplectic foliation, which is always smooth in this category. We show that, moreover, this is a regular Poisson groupoid. In the case of log-symplectic manifolds it coincides with the symplectic groupoid constructed by Gualtieri and Li. And for the Heisenberg-Poisson manifolds it is Connes' tangent groupoid. All this hints that various blow-up constructions in Poisson geometry might be replaced by the systematic construction of the holonomy groupoid of a singular foliation.
The visit of Professor Androulidakis is supported by the Sheffield MSRC.
When is a closed real-valued 2-form the curvature of a connection in a circle bundle (equivalently, a complex line bundle) ?
In what sense are coadjoint orbits the universal examples of Hamiltonian symplectic manifolds ?
The lectures are aimed at postgraduates having a nodding acquaintance (or better) with manifolds and Lie groups. All are welcome.
The course will start on Monday 31 October and will last about five weeks, with two lectures per week. The planned times are Mondays at 2pm and Thursdays at 10am.
The venue is Hicks J 11 unless otherwise stated.
Line bundles and cocycles: we define complex line bundles over manifolds and show that these can be described in terms of 1-cocycles of nowhere zero complex functions.
Cech cohomology: we introduce Cech cohomology with coefficients in a presheaf and prove some of its basic properties (long exact sequence, acyclicity of fine sheaves, double complex lemma). We deduce that line bundles are characterized by degree 2 cohomology classes with coefficients in (the constant presheaf given by) the integers.
Connections: after completing the proof of the double complex lemma which we didn't quite finish in the previous lecture, we will discuss some differential geometry of line bundles: connections, connection 1-forms, and curvature.
The integrality theorem: We will use (the proof of) the double complex to show that the de Rham cohomology class given by the curvature 2-form corresponds to the integral cocycle cohomology class of lecture 2. This will show that the de Rham cohomology class is integral.
In the fourth lecture, we did not get quite as far as described above. In the fifth lecture, everything that we did previously will come together, to allow for a comparison between the de Rham 2-form given by the curvature, and the integral 2-form given by the cocycle description of a line bundle. The integrality theorem will just drop out. We will conclude with a few remarks about to what extent the de Rham class determines the line bundle.
This will be designed to be accessible to people who have not come to Ieke's lectures, but who are familiar with the basics of manifolds and Lie groups.
Roughly speaking the question is: when is a 2-form on a manifold, which takes values in a `bundle of Lie algebras', the curvature of a connection in a principal bundle? The answer is recognizably similar to the answer in the case of complex line bundles, but the techniques are substantially different.
I will cover principal bundles and their connections at the start.
I'll conclude the account, begun on Thursday, of the non-abelian extension of the Weil Lemma. I will be able to go a bit slower than I did on Thursday.
I will begin the lectures on coadjoint orbits and Hamiltonian actions. I will start with concrete examples, at a leisurely pace. No knowledge of the preceding lectures is needed.
I will give an exposition of the two main results stated on Monday; that the coadjoint orbits of a connected Lie group are the symplectic leaves of the Lie algebra dual with its Poisson structure, and that the Marsden-Weinstein reduced spaces of a Hamiltonian action are the symplectic leaves of the quotient manifold (assumed to exist) with its Poisson structure.
The proofs won't be complete in every detail, but should give an idea of what is involved.
I intend to be faster than last Friday but slower than Monday.
All interested are welcome. The Hicks Building is 121 on the university map. | CommonCrawl |
To apply a transformation to every item in a list, and collect the transformed elements. This operation is called map.
For example, mapping the operation "add ten" across every item in the list [2, 4, 6] produces the list [12, 14, 16].
To narrow the list down to only those items that meet a certain criterion. This operation is called filter or select, or (if the criterion is expressed as which items not to include) reject.
For example, filtering or selecting the list [2, 4, 6, 8] against the criterion "is greater than 5" produces the list [6, 8]. Rejecting the items of [2, 4, 6, 8] that match the criterion "is greater than 5" produces the list [2, 4].
Combining the items in a list into a single value. This operation is called reduce or fold.
For example, using addition to reduce [2, 4, 6] produces the result $2 + 4 + 6 = 12$. Using multiplication produces $2 \times 4 \times 6 = 24$.
Note 1: If you want to show off, you can also call this a catamorphism.
Note 2: In general, it can matter whether you reduce left-to-right or right-to-left – whether you use a left fold or a right fold. With addition, and with multiplication of integers, it doesn't: $(2 + 4) + 6 = 2 + (4 + 6)$. Reduce and fold generally go left-to-right, unless otherwise specified.
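As a quick illustration (this subtraction example is my own, not from the reading), folding with a non-associative operation shows why the direction matters:

```python
from functools import reduce

nums = [2, 4, 6]

# Left fold: ((2 - 4) - 6) == -8
left = reduce(lambda acc, x: acc - x, nums)

# Python has no built-in right fold, but folding the reversed list with
# the arguments swapped gives the same effect: 2 - (4 - 6) == 4
right = reduce(lambda acc, x: x - acc, reversed(nums))

print(left, right)  # -8 4
```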
Mapping applies a modification to each element in a list.
These correspond to chop and middle from the first reading journal.
"""Add ten to each of the numbers in a list.
This function modifies its argument."""
print('add_ten returns', add_ten(lst)) # `add_ten` is a fruitless function. It returns None.
print('lst =', lst) # lst has been modified.
add_ten, above, modifies its argument. It's like chop from the reading journal, or like lst.sort() from the Python standard library.
added_ten, below, constructs a new list. It's like middle from the reading journal, or like sorted(lst) from the Python standard library.
These stages are labelled below.
"""Add ten to each of the numbers in the list of numbers `xs`.
This function returns a new list."""
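Since only the docstrings of the two functions survive above, here is a sketch of what the complete definitions plausibly look like (the bodies are reconstructions, with the accumulator stages labelled as promised):

```python
def add_ten(lst):
    """Add ten to each of the numbers in a list.

    This function modifies its argument."""
    for i in range(len(lst)):
        lst[i] += 10           # update in place; the function returns None


def added_ten(xs):
    """Add ten to each of the numbers in the list of numbers `xs`.

    This function returns a new list."""
    result = []                # 1. initialize the accumulator
    for x in xs:               # 2. loop over the input list
        result.append(x + 10)  # 3. append the modified element
    return result              # 4. return the accumulated new list


print(added_ten([2, 4, 6]))    # [12, 14, 16]; the argument list is untouched
```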
Mapping is such a common operation that the Python library provides a standard function for it.
This function takes another function as its argument.
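A small sketch of `map` in use (the helper name `plus_ten` is mine; note that in Python 3 `map` returns a lazy iterator, so we wrap it in `list` to see the values):

```python
def plus_ten(x):
    return x + 10

print(list(map(plus_ten, [2, 4, 6])))          # [12, 14, 16]
print(list(map(lambda x: x + 10, [2, 4, 6])))  # the same, written with a lambda
```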
The for statement replaces a number of different statements that initialize, test, use, and increment the loop variable. With for, Python handles this bookkeeping for us automatically.
Similarly, the map function replaces a number of different statements that initialize, modify, and return the accumulator variable.
Filtering returns a list that contains only some items from the original list.
We will only work with implementations that create a new list. See this Stack Overflow question for a discussion of some of the pitfalls with trying to delete items from a list from inside a for loop.
Like mapping and map, Python provides a built-in function for filtering.
There is also a list comprehension form of filter.
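A sketch of both forms, reusing the earlier example (the helper name `is_big` is mine):

```python
def is_big(x):
    return x > 5

nums = [2, 4, 6, 8]

print(list(filter(is_big, nums)))       # [6, 8]  -- select items meeting the criterion
print([x for x in nums if x > 5])       # [6, 8]  -- the list comprehension form
print([x for x in nums if not x > 5])   # [2, 4]  -- "reject": keep the non-matching items
```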
The reduce pattern combines the items in a list into a single value.
Unlike mapping and filtering, reduction does not have a list comprehension equivalent.
Also, you have to import the reduce function from the functools module.
Like map and filter, reduce takes a function argument. We'll create an add helper function to pass to it.
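A minimal sketch, assuming we are reducing a list of numbers:

```python
from functools import reduce

def add(x, y):
    return x + y

print(reduce(add, [2, 4, 6]))   # 12, i.e. (2 + 4) + 6
```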
Instead of writing our implementation of add, we could use the operator module to get a Python function that acts like the + operator. | CommonCrawl |
Input: A sequence of $ n $ numbers $ A = \langle a_1, a_2, \ldots, a_n \rangle $ and a value $\nu$.
Write the pseudocode for linear search, which scans through the sequence, looking for $\nu$. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.
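One possible answer, sketched in Python rather than CLRS-style pseudocode (CLRS arrays are 1-indexed while Python's are 0-indexed, so the invariant in the comment refers to A[0..i-1]; the test values are only illustrative):

```python
def linear_search(A, v):
    """Return an index i with A[i] == v, or None (standing in for NIL)."""
    for i in range(len(A)):
        # Invariant: A[0..i-1] contains no element equal to v.
        if A[i] == v:
            return i
    return None

print(linear_search([31, 41, 59, 26, 41, 58], 59))  # 2
print(linear_search([31, 41, 59, 26, 41, 58], 7))   # None
```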
At the start of each iteration of the for loop, the subarray $A[1..i - 1]$ consists of elements that are different than $\nu$.
Initially the subarray is the empty array, so the invariant holds trivially.
On each step, we know that $A[1..i-1]$ does not contain $\nu$. We compare it with $A[i]$. If they are the same, we return $i$, which is a correct result. Otherwise, we continue to the next step. We have already ensured that $A[1..i-1]$ does not contain $\nu$ and that $A[i]$ is different from $\nu$, so this step preserves the invariant.
The loop terminates when $i > A.length$. Since $i$ increases by $1$ on each iteration, at that point all the elements in $A$ have been checked and $\nu$ has not been found among them. Thus, we return $NIL$. | CommonCrawl
Tool to compute the trace of a matrix. The trace of a square matrix M is the sum of the values on its main diagonal, and is denoted Tr(M).
The trace of a square matrix is the sum of the values on its main diagonal (starting from the top left corner and moving one step to the right and one step down each time).
How to calculate a matrix trace?
For a rectangular matrix $ M $ of size $ m \times n $, the diagonal used is that of the largest square submatrix anchored at the top left corner.
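For instance, a quick sketch with NumPy (`numpy.trace` sums the main diagonal, and for a rectangular array it uses exactly this included square part):

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
print(np.trace(M))                                 # 1 + 5 + 9 = 15
print(sum(M[i, i] for i in range(min(M.shape))))   # the same sum, by hand

R = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])                       # rectangular, 2 x 4
print(np.trace(R))                                 # 1 + 6 = 7
```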
What are the mathematical properties of the trace?
The trace of an identity matrix $ I_n $ (of size $ n $) equals $ n $.
| CommonCrawl
Electromagnetic vector fields are composed of elaborate mathematical structures described with vector differential operators, and physicists cannot intuitively understand and distinguish them. If one intends to learn about the changes in the electric displacement D in a medium of polarization P, or the magnetic intensity H in a medium of magnetization M, knowing the features of the fields is impossible unless one calculates and visualizes them. Here, a vector field platform visualizing the most basic vector fields, E = $-\nabla\varphi$, B = $\nabla\times$ A, D = $\varepsilon_0$E+P, H = $-\nabla\varphi*$, and B = $\mu_0$(H+M), presents the fields together with their norms in the vector differential mode, implemented entirely in Mathematica.
The essentiality of metabolic reactions is estimated for hundreds of species by flux balance analysis of the growth rate after removal of individual reactions and of pairs of reactions. About 10% of reactions are essential, i.e., growth stops without them. This large-scale, cross-species study allows us to assign ad hoc ages to each reaction and species. We find that older reactions, and reactions in younger species, are more likely to be essential. Such correlations may be attributable to the recruitment of alternative pathways during evolution to ensure the stability of important reactions. | CommonCrawl
CORE is an "open-access platform" "for a changing world" and "for anyone who wants to understand the economics of innovation, inequality, environmental sustainability, and more".1 It attempts to readjust economics teaching towards taking the financial crisis into account. As of December 2017, its textbook – The Economy – is used to teach undergraduate economics at 64 universities, e.g. in the UK at Bangor Business School, Birkbeck (University of London), Cardiff Business School, Kings' College London, London Metropolitan University, Northampton University, University College London, University of Aberdeen, University of Bath, University of Bristol, University of Manchester and University of Plymouth.2 CORE has received praise from the New Statesman, the Guardian and the Financial Times to name but a few.
Thus, Figure 1.1a expresses living standard or wellbeing as the sum of what a society produces divided by the number of people in that society. As far as this metric is concerned, this much is clear already on the first pages of this economics textbook, and before we learned a single thing about how this mode of production actually works: capitalism is rather good at producing wellbeing as such. The rest of Unit 1 is dedicated to creating the impression that this claim makes sense. These notes are dedicated to showing that this is wrong.
CORE chooses to entertain the criticism of averages in the form of criticising another measure for wellbeing which it is not too keen on: disposable income.
"Consider a group in which each person initially has a disposable income of $5,000 a month, and imagine that, with no change in prices, income has risen for every individual in the group. Then we would say that average or typical wellbeing had risen.
Section 1.2 starts by asking "but is [the average, CC] the right way to measure their living standards, or wellbeing" and ends with the observation that we can communicate the average unambiguously; well, then.
If CORE were honest about its findings in Section 1.2 it would have to rephrase "Since the 1700s, increases in average living standards became a permanent feature of economic life in many countries" (Unit 1, Introduction) as "Since the 1700s, increases in GDP per capita became a permanent feature of economic life in many countries. This undoubtedly is telling us something about the differences in the availability of goods and services, but it is not quite clear what that means for the wellbeing of people in this society. The gaps between what we mean by wellbeing, and what GDP per capita measures, should make us cautious about the literal use of GDP per capita to measure how well off people are." But this is perhaps less unambiguous-née-catchy.
At the end of Section 1.2 we are left with an implicit confession that CORE itself draws no specific claim about wellbeing from its own graphs, paired with an explicit resolve to ignore this.
dividing the result by the number of members in society.
"GDP measures the output of the economy in a given period, such as a year. Diane Coyle, an economist, says it 'adds up everything from nails to toothbrushes, tractors, shoes, haircuts, management consultancy, street cleaning, yoga teaching, plates, bandages, books, and the millions of other services and products in the economy'.
CORE acknowledges the oddity of adding up toothbrushes, pigs and computers as the same thing. To write a text a computer is required and no amount of pigs can replace it.
But CORE discusses the problem as one of choice: economists must decide what to include and how to assign a value or a common dimension. CORE apparently does not find anything remarkable about declaring that its figures are products of its own subjective decisions rather than results derived from the object. As far as CORE is concerned, economists might as well add up weights (services get a zero), add up the water used up in production or how yellow things are. All of these choices would – in principle – work for counting stuff in some way. In practice, it is straightforward to estimate how much a thing weighs or how yellow it is.
Of course, these choices are easily recognised as silly. In capitalist economies, the worth of things is measured in money. Our thought experiment served to highlight that counting in money is in no way a mere pragmatic choice that economists make: instead, they, obviously, know what counts as wealth in capitalist societies. They do not choose prices but find them. Prices are not some convenient counting aid chosen by economists, but economic facts that economists need to explain. Somehow, the societies that CORE studies reduce toothbrushes, computers and yoga classes to a common dimension – money – "in practice".
Unit 1 more asserts than explains its neoclassical view on what a price is. When Unit 1 speaks of "value" and "worth" it does not merely mean the economic categories which are expressed in a price and which have yet to be explained. Instead, following its predecessors, CORE literally means how much people appreciate, like or want something, how good something is: Unit 1 presupposes some abstract notion of utility or pleasure.
It is a fact well-known to economists that poor people do not value haircuts, decent flats, food, clothing or entertainment as much as rich people, this is why they let their hair grow, live in small, cramped flats, eat junk food and do not go out as much. As absurd as it sounds when said out loud, CORE's identification of price and pleasure appeals to experience. The reader is invited to think about their day-to-day life where questions like "How much money is a haircut worth to me?" are common. However, when asking this question we already compare the amount of our money with the price of the things we would enjoy doing. We come to the conclusion that these magnitudes – our means and the price we are confronted with – do not match up and hence have to limit ourselves. Only after we have compared the respective magnitudes do we then limit our needs and desires, and make the comparison forced upon us: what can I do without. This way the daily grind appears as if the exchange relations were determined by our own individual needs and desires, or as if prices express preference and pleasure. In other words, despite what CORE appeals to, we are not actually comparing the utility of different goods and then assigning prices, but are merely comparing our means with what is available for them based on the prices we are confronted with.
The question of what is to be measured – what it means to be part of the overall economic output – is replaced by a valuation method, with the mere appeal of being feasible: all items to be counted are selected and redefined for this purpose.
Prices fail to live up to CORE's expectations as convenient counting aids in a second way: they keep changing, because they are the means by which different economic actors compete against each other. This is why CORE does not actually use the prices found "in practice" in a given society at a given point in time to compose its figures. In an "Einstein" section, CORE explains how the GDP is calculated and compared between different countries and times. The calculation starts by choosing a reference year and economy, e.g. 1990 in the UK. Then, the prices of all products and services produced that year are added up to calculate the GDP. For example, if a society produced 100 bottles of milk (£1 each), 10 tractors (£10k each), 2 nuclear weapons (£10M each) and 5 chickens (£10 each) that year, the GDP would be (100 × £1 + 10 × £10k + 2 × £10M + 5 × £10). Now, to compare this GDP with the GDP from 1450, economists estimate what products were produced that year. Let's say: 10 bottles of milk, no tractors but 1 horse plough, zero nuclear weapons, 15 chickens and 5 bibles. These products are then added up using the 1990 GBP prices, where the prices for products that are no longer in production in 1990 Britain are somehow estimated. Comparisons between different countries proceed analogously: estimate how much of each thing was produced and add up these things using their 1990 GBP prices.
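A minimal sketch of the constant-price arithmetic described above, using the made-up quantities from the example (the 1990 prices for a horse plough and a bible, no longer produced in 1990 Britain, are placeholders standing in for the "somehow estimated" figures):

```python
# 1990 reference prices in GBP; goods absent from 1990 production get estimated prices
prices_1990 = {
    'milk': 1, 'tractor': 10_000, 'nuke': 10_000_000, 'chicken': 10,
    'horse plough': 200,   # placeholder estimate
    'bible': 15,           # placeholder estimate
}

output_1990 = {'milk': 100, 'tractor': 10, 'nuke': 2, 'chicken': 5}
output_1450 = {'milk': 10, 'horse plough': 1, 'chicken': 15, 'bible': 5}

def gdp_at_1990_prices(quantities):
    return sum(qty * prices_1990[good] for good, qty in quantities.items())

print(gdp_at_1990_prices(output_1990))  # 100*1 + 10*10000 + 2*10000000 + 5*10 = 20100150
print(gdp_at_1990_prices(output_1450))  # comparable with the figure above only by construction
```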
As before, while the method of adding up and comparing is feasible, CORE does not explain what can be learned from expressing and comparing the economic output of China in 1500AD with the output of India in 1750AD using 1990 USD prices.
It is peculiar that CORE and other economists add up the product of self-sufficient respectively private producers and proprietors as one social product, despite the fact and celebration that these products are very much not at the collective disposal of society. Joining in with CORE's trip down history lane, why would a peasant in 1400 AD Britain be in the slightest bit interested in the production of a crop by some other peasant at the other end of the British Isles? Or why, for that matter, would it make sense to add up his crop and the castle their lord built as one gross domestic product? If anything, that castle represents a deduction from the peasant's wealth or free time, as its production is premised on the peasant's exploitation.
In modern times, how does it make sense to add up as one collective product the cars of one manufacturer, produced with the intent and effect of bankrupting their competitors, with the cars produced by those they bankrupted? How does it make sense to add up the management consultancy suggesting firing a bunch of workers with the products those workers produced before they got fired as one collective win for capitalism in the UK? In all these cases, the amounts added up belong to their respective, competing owners.
An analogy: when two states wage war against each other, they maintain military hospitals to tend to the wounds of their respective soldiers inflicted by the soldiers of the other side. These hospitals exist with the expressed purpose of breaking the power of the competing state by returning soldiers to the battlefield where their mission is to send the enemy's soldiers to a hospital or morgue. Certainly, these hospitals are not available for tending to the enemy. Moreover, the better one of the two parties is at patching up its wounded soldiers, the more precarious the situation becomes for the other. Just because there are two numbers with the same unit does not mean it makes sense to add them up: It would be absurd to add up and celebrate the hospitals of these two warring states as humanity's total sum of healing facilities. Unless you have the power to disregard the purpose for which these hospitals exist, that is.
Returning to private wealth, this power exists in the form of the capitalist state. No one bats an eye when CORE whips out its GDP sums because those kinds of calculations are anything but unworldly. Capitalist states commission GDP statistics on an annual basis, gauging the success of the economies they rule over. Thus, they express that from their standpoint, the wealth held in private hands in their respective societies indeed constitutes one social wealth that they can dispose over as the holders of the monopoly on force. Capitalist states partake in the success of their private economies, deciding on and collecting taxes. The better their economies thrive the better they can pursue their projects, such as looking after their economies, strong-arming other states into some favourable deal, etc. As such, the wealth accumulated in private hands determines the freedoms and limits of a state's rule.
That is to say, from the standpoint of the capitalist state, adding up the privately held wealth as if it were one wealth – its own – makes sense. This added up wealth, though, is not what economists count as the GDP, as the actions of the state itself are included in the GDP. Furthermore, the fact that the state can add up the private wealth in society does not imply it is an operative economic category. Just as the average marks of a class of pupils may be a metric for their performance in some test, this benchmark score itself does not say anything about why students did well or poorly in this test. Indeed, capitalist states engage with their economies not by abstractly measuring GDP but by tending to the specific needs they identify for boosting economic growth: change a regulation here, tweak a tax, provide some subsidies there etc.
When capitalist states add up the wealth under their rule as one wealth, they do not intend to expropriate this wealth in its entirety. They appreciate wealth in private hands as the economic foundation of their rule. Even more, capitalist states know that partaking in the success of capitalist economies is also to the detriment of these economies: every penny expropriated is a penny that cannot be reinvested. Thus, capitalist states turn to debt. To assess their creditworthiness, creditors routinely apply the debt to GDP ratio. While this measure is about as scientific as the GDP on which it is based, it is nonetheless used (until it isn't) as a standard.10 Consequently, GDP develops from a funny idea of economists into an economic datum to be reckoned with. In a modern capitalist economy, everything depends on credit, and credit decisions are routinely made taking the GDP into account.
CORE's intended learning outcome in Unit 1 is nationalism: the identification of personal wellbeing with the success of the nation and state. CORE first adds up the wealth in separate, competing hands as one wealth and then divides that sum by the members of society because it wants to think of the wealth that its students and most people are excluded from as our wealth: marvel at the success of our economy in the hands of others.11 When my competitor drives me into bankruptcy and uses their new position on the market to extend production, my average living standard rose. When my boss changes production, fires me and uses new machines to extend production, my average living standard rose. This pretends that the private wealth in society is a social good while affirming that it is in private hands. That is, CORE pretends that private property does not exist in its arithmetic to celebrate its successes. To square this circle, CORE appeals to potential or availability. The wealth of society is potentially ripe for the taking, something that each of us could, in principle, acquire if we apply ourselves – the meritocracy says "hi". In the world of CORE, that accumulated wealth I am excluded from represents an earning potential. CORE's response to those who point to the actual poverty in modern capitalist societies is that there is a lot of potential wealth in the hands of others to be competed for.
One or more individuals own a set of capital goods that are used in production.
They pay wages and salaries to employees.
They direct the employees (through the managers they also employ) in the production of goods and services.
The goods and services are the property of the owners.
CORE, secondly, knows of the driving motive for production in capitalist societies: profit. Not the mere provision of people with what they need and want but a principled more, a surplus counted in money over what was advanced is to be realised. While no actual economic actor pursues the aim of maximising GDP – CORE's mission assigned to the economy – firms do pursue the aim of maximising their own wealth, a fact which CORE celebrates in its first unit by adding up the successes of these shenanigans as one big win for us all.
and answers: specialisation and technology.
"When you hear the word 'market' what word do you think of? 'Competition' probably is what came to mind. And you would be right to associate the two words.
Observing that this society organises the production and distribution of goods in the form of competition, CORE invites the reader to only keep in mind that there is production and distribution going on. CORE illustrates the benefits of the market with an example which introduces us to Carlos and Greta. They have different resources when it comes to producing apples and wheat. In such a situation, we would say that it makes sense for the two of them to have a little chat about how many apples and wheat they want/need, how to best organise their production, how to make use of the resources available to them, whether they want to specialise and how etc.
CORE's example does not justify the praise. Here, Carlos happens to be the proprietor of less fertile land than Greta, who happens to have a legal title to a more fertile piece of land. Thus, Carlos has to make do with whatever resources he happens to have a legal right to. Under these restrictions of the regime of private property where Carlos is excluded from Greta's land and vice versa, CORE suggests the production of apples to Carlos, because his land is relatively less bad for the production of apples than wheat. That is, CORE suggests a way of working with the restrictions imposed by private property. Nothing in CORE's example engages with the constraints imposed by the labour process itself to achieve the best outcome but instead it deals with the effects of the very regime that it intended to promote. Somehow, this ought to convince the reader that a mode of production, where you may or may not happen to have a legal right to the adequate means of production and where you must make do regardless, is "a way of connecting people" "in many cases better than the alternatives".
On the other hand, CORE also knows "a way of connecting people" that deserves the same praise as the freedom of the market, namely its opposite: command in a firm.
"So when the owner of a firm interacts with an employee, he or she is 'the boss'.
While we are at it, you can also think of firms as a means for organising Christmas parties and, by the same logic, a slave driver as facilitating "a kind of cooperation […] among producers that increases productivity".
The sleight of hand here is the same as above: from the fact that in this economy production is organised under the command of capitalist firms and their agents, the reader is invited to keep in mind only that there is some sort of production going on. CORE's praise for the particular institutions of capitalism proceeds by ignoring their specific nature and praising a truism: this division of labour – markets – in society is a good division of labour because it is a division of labour, that other division of labour in a factory – command – is a good division of labour because it is a division of labour.
We may wonder why this particular mix accounts for capitalism's success, but CORE's explanation is as circular as the rest of its theory: capitalism is successful and we observe this mix.
In The Economy there is no distinction between productive and unproductive labour or any of that Marxist stuff,21 but somehow the authors know that people make all that wealth which is counted in money, and they do this by turning previously produced capital goods – machines, raw materials, etc. – into more products which are then sold for more money than was advanced for their production.
CORE is not coy about the role of workers in a capitalist firm: they eat if and only if the performance that their wage pays for contributes to a firm's profit. With its celebration of hiring and firing workers and their application by technology, CORE implicitly addresses the core of the matter that explains the wealth growth that it set out to explain: squeezing more work out of workers than is required for their own reproduction.
Now, if Greta follows CORE's advice to maximise production and expands her wheat production, more of this year's wheat needs to be set aside as seed. Since GDP in the form of capital goods is a central premise for future GDP growth, making many things which can be used next year to make even more things ought to be prioritised over making things which are consumed unproductively; machines not Xboxes. Put differently, as a society, simply eating up your GDP sins against the objective of maximising it. The demand for GDP growth is a demand against the members of society to consume little. Thus, when firms pursue growth through profits and optimise the wage for this purpose,27 they realise the objective of maximising GDP. Insofar as wages can be reduced without negatively affecting outputs, low wages are demanded from the standpoint of maximising GDP. Workers will merely consume their wages, profits can be reinvested to make even more profits, i.e. "living standard".28 A society which aims to maximise the GDP strives to waste as little wheat as possible for such unproductive goods as cake. Instead, it maximises the percentage of wheat it sows out again each year to produce even more wheat.
A society with the purpose of maximising GDP, i.e. the kind of society CORE advocates for, positions itself in opposition towards the mere consumption and free time of its workers. The more they work and the less they consume the better for the growth of the GDP.
This should read: the advent of capitalism allowed firms to accumulate wealth at an unprecedented rate and in their pursuit of profit they use technology to squeeze more work out of their workers. Great stuff.
The Economy is a critical textbook and does not fail to mention that there are problems: poverty and the depletion of resources.31 These, however, are not considered as results of private property, markets and firms and the economic laws they imply but express a lack of "effectiveness" of "institutions and government policy". Instead of investigating its object as what it is, it is neatly divided into its nice, essential and naughty, accidental parts.
With discussing poverty and pollution in this way, CORE manages to pigeon-hole them as secondary problems: they never pose any real questions to the capitalist mode of production, its beauty is presupposed from the beginning and where it is not, the object is redefined until it is. As far as CORE is concerned, capitalism is great, GDP measures living standard and when the reality of this mode of production rears its head: there lies a policy challenge.
CORE considers previous economics textbooks to be insufficient, fearing that students will find these expositions of capitalism absurd in light of their negative experiences. CORE does not address this with new arguments or ideas – all of those gathered in Unit 1 can also be found in the previous textbooks CORE wants to stand out from. CORE simply pulls certain considerations, which come later in other textbooks, into its first unit. The reason is that CORE considers these thoughts to be useful in directly and clearly communicating: "yes, there are problems, but capitalism is actually incredibly effective and good". Previous textbooks have tried this with the theorem of insatiability.32 CORE is now choosing GDP. That is all there is to this new approach.
Hence, those who read The Economy lured by its promise of an economics textbook committed to the "experience of real life" will be disappointed: it does not avoid the mistakes of previous textbooks and thus does not give an adequate account of the capitalist mode of production.
"[Private property] means that you can: enjoy your possessions in a way that you choose; exclude others from their use if you wish; dispose of them by gift or sale to someone else … ; … who becomes their owner" (The Economy, Section 1.6) This account of private property is straight from a "get of my land" wild west fantasy, where self-sufficient farmers tend to their respective lands independently of each other. As far as CORE is concerned, private property protects the enjoyment of possessions, but in modern capitalism no commodity enters this world with the purpose of being enjoyed by its proprietor. Rather, commodities are produced in order to be exchanged against money and private property protects this purpose. In other words, they are produced for the consumption of others under the little condition that those others can pay. Then, when everyone excludes everyone else from their products in order to extract money, it is in no way a personal, idiosyncratic choice to "exclude others from their use if you wish" or to dispose of what you have "by gift or by sale". Rather, insisting on sale over gift is the strategy implied by these social relations.
The Economy dedicates the whole of Section 1.5 to the environmental impact of capitalism: "Through most of their history, humans have regarded natural resources as freely available in unlimited quantities (except for the costs of extracting them). But as production has soared (see Figures 1.1a and 1.1b), so too have the use of our natural resources and degradation of our natural environment. Elements of the ecological system such as air, water, soil, and weather have been altered by humans more radically than ever before." But, once again, mentioning the "downsides" is not meant as the starting point for discussing why and how "capitalist production […] only develops the techniques and the degree of combination of the social process of production by simultaneously undermining the original sources of all wealth – the soil and the worker." (Karl Marx, Capital Vol 1, p.638).
In Section 1.5, the effects of capitalism are held against humanity as a whole and the solution is, naturally, more of the same: "But the permanent technological revolution – which brought about dependence on fossil fuels – may also be part of the solution to today's environmental problems. Look back at Figure 1.3, which showed the productivity of labour in producing light. The vast increases shown over the course of history and especially since the mid-nineteenth century occurred largely because the amount of light produced per unit of heat (for example from a campfire, candle, or light bulb) increased dramatically. In lighting, the permanent technological revolution brought us more light for less heat, which conserved natural resources – from firewood to fossil fuels – used in generating the heat. Advances in technology today may allow greater reliance on wind, solar and other renewable sources of energy." Thus, Unit 1's way of addressing climate change is hoping that the next generation of technology in the service of maximising "living standards"-née-accumulation of capital may not destroy the planet while simultaneously suggesting that the maximisation of GDP is bliss. Under this premise the light/heat ratio is irrelevant as any improved efficiency merely translates to the production of more light. | CommonCrawl |
The `InclinedNoDisplacementBC` Action is used to create a set of InclinedNoDisplacementBC for a string of displacement variables. See the description, example use, and parameters on the [InclinedNoDisplacementBC](/InclinedNoDisplacementBC/index.md) action system page.
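The residual expression itself appears not to have survived the conversion of this page. A plausible reconstruction, inferred from the penalty description below and written without the test function, is

\begin{equation}
r_c = \alpha \, (\vec{u} \cdot \hat{n}) \, n_c ,
\end{equation}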
where $\alpha$ is the penalty parameter and `component` corresponds to the direction in which to apply the residual. The normal comes directly from the surface normal defined on the mesh.
The InclinedNoDisplacementBCAction, given in the input file as simply `InclinedNoDisplacementBC`, is designed to simplify the input file when several displacement variables have the same [inclined no displacement boundary condition](PenaltyInclinedNoDisplacementBC.md) applied to their normal component.
* Weakly enforce an inclined BC ($\vec{u} \cdot \hat{n} = 0$) using a penalty method. | CommonCrawl
Both of the values above represent the 2-norm: $\|x\|_2$.
The $\infty$ norm represents a special case, because it's actually (in some sense) the limit of $p$-norms as $p\to\infty$.
Recall that $\|x\|_\infty = \max_i |x_i|$; for a 3-vector, that is $\max(|x_1|, |x_2|, |x_3|)$.
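A quick numerical check of this limiting behaviour (a sketch; the vector is arbitrary):

```python
import numpy as np

x = np.array([1.0, -4.0, 2.5])

for p in [1, 2, 4, 10, 100]:
    print(p, np.linalg.norm(x, p))       # decreases toward max(|x_i|) as p grows

print('inf', np.linalg.norm(x, np.inf))  # 4.0 == max(|1.0|, |-4.0|, |2.5|)
```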
Once you know the set of vectors for which $\|x\|=1$, you know everything about the norm, because of homogeneity (the scaling property $\|\alpha x\| = |\alpha|\,\|x\|$). The graphical version of this is called the 'unit ball'.
We'll make a bunch of vectors in 2D (for visualization) and then scale them so that $\|x\|=1$. | CommonCrawl |
Abstract: Multidimensional combinatorial substitutions are rules that replace symbols by finite patterns of symbols in $\mathbb Z^d$. We focus on the case where the patterns are not necessarily rectangular, which requires a specific description of the way they are glued together in the image by a substitution. Two problems can arise when defining a substitution in such a way: it can fail to be consistent, and the patterns in an image by the substitution might overlap.
We prove that it is undecidable whether a two-dimensional substitution is consistent or overlapping, and we provide practical algorithms to decide these properties in some particular cases. | CommonCrawl |
Strategic thinking in financial markets policy by Ajay Shah in Business Standard, July 24, 2017.
Indian poultry farms are breeding drug-resistant superbugs: study by Natalie Obiko Pearson in Mint, July 21, 2017.
Why Aadhaar transaction authentication is like signing a blank paper by Jayanth R. Varma in Jayanth R. Varma's Financial Markets Blog, July 19, 2017.
Data protection and privacy: choices before India by Rahul Sharma in Mint, July 18, 2017.
Secret Millions for 0x00A651D43B6e209F5Ada45A35F92EFC0De3A5184 by Tom Metcalf in Bloomberg, July 18, 2017.
Sadanand Dhume - A conservative's take on India by Sidin Vadukut in Mint, July 17, 2017.
The perils of endogamy: In South Asian Social Castes, a Living Lab for Genetic Disease by Steph Yin in The New York Times, July 17, 2017.
Patrick French, eminent writer, historian and biographer, joins as inaugural Dean of the School of Arts and Sciences on Ahmedabad University news, July 14, 2017.
Vibrant democracy, dormant Parliament by Deepak Nayyar in Mint, July 14, 2017.
Is CBI the handmaiden of the government? by Prashant Bhushan in The Hindu, July 14, 2017.
The Passion of Liu Xiaobo by Perry Link in New York Review of Books, July 13, 2017.
Photos: Life inside of China's massive and remote bitcoin mines by Johnny Simon in Quartz, July 12, 2017.
You asked for it, so the Bankruptcy Code is here to stay by Sridhar Ramachandran in The Economic Times, July 12, 2017.
I went on a hunter-gatherer diet to improve my gut health-and it worked by Tim Spector and Jeff Leach in Quartz, July 10, 2017.
The Uninhabitable Earth by David Wallace-Wells in Nymag, July 9, 2017.
Do Political Institutions Still Rule? Thoughts on Acemoglu and Trump by Jared Rubin in jaredcrubin.com, January 26, 2017.
How a Chinese Billionaire Built Her Fortune by David Barboza in The New York Times, July 30, 2015.
Ark of Taste products in India on Slow food foundation for biodiversity in fondazioneslowfood.
Prasanth Regy is a researcher at the National Institute for Public Finance and Policy.
RBI's proposal for a Public Credit Registry, 2 August 2017.
Issues with the regulation of Information Utilities, 12 July 2017.
Understanding judicial delays in debt tribunals, 16 May 2017.
Does the NCLT Have Enough Judges?, 6 April 2017.
Judicial Procedures will make or break the Insolvency and Bankruptcy Code, 24 January 2017.
Understanding judicial delays in India: Evidence from Debt Recovery Tribunals, 18 May 2016.
There is an urgency about bankruptcy reforms in India. The credit boom of the mid 2000s gave rise to many failed firms. There are 14,900 non-financial firms in the CMIE database with recent accounting data, and of these, there are 1,039 firms where there is not enough cash (PBIT) to pay interest. Firms under extreme financial stress are a drag upon the economy: they are unlikely to add capital or labour or obtain productivity growth. The exit of such zombie firms will free up capital and labour, and will help improve the financial strength of healthy firms. The economic purpose of the bankruptcy process is to close the circle of life; to recycle this labour and capital into healthy firms.
The numbers above are likely to be an under-estimate. CMIE only tracks the biggest companies. There are a very large number of smaller firms which are likely to be in default. There are other organisational forms used by firms, where we do not have data, and where there will be failed firms. Demonetisation and GST have stressed firms' health. Taken together, tens of thousands of cases are likely to be headed for the bankruptcy process.
When the Insolvency and Bankruptcy Code came into effect on 28 May 2016, we raised questions about the Indian bankruptcy reforms in an overview article (Shah & Thomas, 2016). A year has passed and we revisit those question. What is the state of the play in bankruptcy reforms? What of the new process is working well, and what are the areas of concern?
When the government and RBI decided to put 12 big defaults into the IBC, some felt this demonstrated the capabilities of the new bankruptcy code. When we speak with practitioners, we get a rush of war stories and practical detail. In this article we try to distill the hopes and fears, and try to see the woods for the trees.
In public policy, it is useful to think in terms of inputs, outputs and outcomes. As an example from the field of education, the inputs are schools and teachers. The outputs are kids who enroll. The outcome is what kids know, as measured through tools like OECD PISA or Pratham's ASER.
Laws (both Parliamentary law and subordinate legislation), the institutional infrastructure that is required for the IBC to work, and capabilities of various private persons.
Transactions that go through the system.
Recovery rates, the growth of broader credit markets, and the deeper changes in behaviour by private persons who borrow and lend, who will re-optimise as the bankruptcy process starts working smoothly.
An Amendment Act to fix the mistakes in the 2016 law.
The Insolvency and Bankruptcy Board of India (IBBI) has to achieve the scale required for a high performance regulator.
An array of well drafted regulations have to be issued by IBBI, with a feedback loop to feed from practical and statistical experience into a robust regulation-making process to refine the regulations.
A competitive industry of private Information Utilities (IUs) has to arise.
A competitive industry of private Insolvency Professional Agencies (IPAs) has to arise.
A competitive industry of private Insolvency Professionals (IPs) has to arise.
NCLT has to find its feet in dealing with corporate bankruptcy, and DRT has to do similarly for individual bankruptcy.
Financial firms have to develop capacity on how best to initiate the insolvency resolution process and participate in it so as to collectively ensure optimal restructuring plans.
Strategic investors, distressed asset funds and private equity funds have to gain confidence about expected outcomes, either when making a bid for a going concern or when buying assets in liquidation.
The IBC, 2016, suffers from conceptual errors. There are contradictions in definitions, ambiguous definitions, problems in the establishment of the IBBI, failure to establish sound processes at IBBI, the lack of legal foundations in the institutional infrastructure including insolvency professionals, insolvency professional agencies and information utilities, the lack of clear integration of secured credit (i.e. SARFAESI) into the main bankruptcy process, etc. As an example, the Working Group on Information Utilities, chaired by K. V. R. Murty (MCA, 2017) has a chapter on amendments required in the IBC in respect of information utilities.
The IBC has elementary drafting errors. Some examples of these are discussed in Malhotra & Sengupta, 2016, and Singh & Mishra, 2017. A fuller examination will reveal a larger list of such drafting errors.
The insolvency resolution and liquidation processes are procedural law where drafting errors can lead to litigation. As an example, Sibal & Shah, 2017, analyse an anomaly about antecedent transactions.
Many laws in India undergo a constitutional challenge in their initial days. Some founders/shareholders may go to the courts claiming that the IBC violates basic constitutional rights. Well funded attempts of this nature are likely with the 12 big cases. The Bombay HC dismissed an early challenge to the constitutionality of the IBC, but it did this without substantively deciding on its merits. We believe that, in the design of the IBC, sufficient thought was given towards ensuring due process and fairness to all creditors as well as the debtor. There may be certain drafting flaws in the IBC which the government may need to rapidly solve.
An Amendment Act is required which addresses these problems. In our knowledge, there is no drafting effort that is presently in motion to solve this.
Indian regulators suffer from low State capacity. Capacity in a regulator comes about through five processes: (a) The composition and working of the board; (b) The legislative process; (c) The executive process; (d) The quasi-judicial process; (e) Reporting and accountability. Hygiene in these world-facing processes should have been codified in the IBC, but was not.
Considerable new knowledge has developed in the last decade on how to achieve State capacity by setting up such processes. A good deal of this is discussed in the report of the Working Group on the Establishment of IBBI, chaired by Ravi Narain (MCA, 2016). Many aspects of this report have yet to be brought into IBBI.
The IBBI has been set up. It has an office and a team. It has been shouldering the effort of drafting regulations on a very tight timeline. IBBI has done a particularly good job in some respects, such as the recent unveiling of a mechanism to take feedback from the public in structured documents about the regulations that it has drafted. In this, IBBI is now ahead of SEBI and RBI on good governance practices.
The regulation-making process that has been used by IBBI so far has good features. The organisation structure used at IBBI respects the difficulties associated with setting up a regulator that violates the principle of separation of powers. However, these things are not in the IBC, or in rules under the IBC, or in legal instruments issued by the board. There is the danger that good practices may fall by the wayside at a future date.
IBBI is expected to perform a certain statistical system role, and a research capacity that can use this to strengthen regulatory capacity. The systematic process of using data to improve the working of the bankruptcy process is critical to the bankruptcy reform. Statistical system work at IBBI has, however, not yet commenced.
Thus, in critical aspects, IBBI is going down the route of conventional Indian regulators such as SEBI or RBI. This will reproduce the well known infirmities of conventional Indian regulators and runs counter to what the bankruptcy reform requires.
IBBI has issued 12 pieces of subordinate legislation. There are flaws in many of these regulations, some of which are described under the other sections in this article, which will hamper the working of the bankruptcy process and of the institutional infrastructure under the Act.
A critical part of the bankruptcy reform is individual insolvency. Advancing on this front will get India away from the recurrent loan waivers which spoil the repayment culture of borrowers, raise the cost of lending to individuals, and harm credit access to individuals as well as small entrepreneurs. This part of the law has not been notified, and IBBI has not released regulations that are required to operationalise the individual insolvency component of the IBC.
Under normal circumstances in an insolvency resolution process (IRP), a considerable amount of human effort is required in order to construct the list of creditors and the size of their claims. The BLRC design envisaged that this information would be stored in `information utilities (IUs)', as electronic records of credit contracts in computer systems, authenticated by creditor and debtor, which would thus eliminate delays and costs. In the Indian legal system, disputes about facts are well acknowledged as the source of delays, wastage of judicial time, and payments to lawyers. Irrepudiable records from IUs would eliminate these problems.
So far, no IU has come into operation.
One part of the problem lies in the IU regulations issued by IBBI. Prashant et al., 2017, point out their anti-competitive features. The licensing requirements for IUs are overly stringent, particularly for an industry where the costs of implementation are likely to be low because of the constantly decreasing costs of information technology. The regulations ask for capital requirements that are excessive, given the low levels of value at risk in the business. This is a new business with no precedent anywhere in the world. Entrants into this area are taking on the risk of failure of the business model, which must be compensated by sufficient returns on investment. The IU regulations simultaneously prescribe shareholding arrangements that deter entrepreneurs from viewing this as a viable business opportunity.
These barriers are likely to keep the Indian bankruptcy system from achieving a competitive industry of technologically capable IUs to serve multiple types of borrowers and lenders, as was visualised by the BLRC when designing the IBC. There are also other technical flaws in the IU regulations, as pointed out in Prashant et al., 2017.
Insolvency Professional Agencies were envisaged by the BLRC as a strategy to regulate professionals through the structure of a self-regulatory organisation. However, the IPA regulations issued by IBBI have many features that vitiate this objective. As a consequence, the key players who were envisaged in this industry by the BLRC have been barred from it.
The existing players are generally passive and are not performing the role that is required of IPAs in the bankruptcy reform. In most aspects, the IPAs are going down the route of conventional Indian regulators-of-professions such as ICAI or BCI. This is likely to reproduce the widely acknowledged infirmities of these organisations, and is counter to what the bankruptcy reform had attempted to achieve by way of well regulated insolvency professionals who act in the best interest of the stakeholders in the resolution process.
IPAs are expected to create certain supervisory databases. These would be extremely valuable in the process of diagnosing problems of the bankruptcy reform and addressing them. So far this has not come into being.
A set of insolvency professionals (IPs) are in place. During the IRP, if required, the IP is expected to put together a temporary management team and temporary financing (under the guidance and approval of the creditors' committee) that will stabilise the firm. These are new roles for the IPs in India, who have yet to develop capabilities to carry out these functions, either within their organisations or through an extended network. It will take time and competition for IPs to develop the teams through which they are able to fully discharge such functions, particularly in complex cases.
One way to jump-start getting such capability would have been to open the industry to foreign insolvency professionals. Their participation, particularly at the early stages of the reforms implementation, would have helped augment capacity and diffuse knowledge. Most Indian IPs are likely to be in a repeated game with promoters; foreign IPs would be particularly valuable in their ability to be harsh with promoters, which would help set the tone for the working of the bankruptcy process. The entry of foreign IPs was, however, blocked by IBBI through the subordinate legislation (Burman & Sengupta, 2016).
A critical factor in dealing with a going concern is lining up interim financing. The IBC and the subordinate legislation fail to clarify the priority of interim financing, in case the firm goes into liquidation. This has hampered the ease of access to interim finance while in the IRP.
IPs are given considerable power in the working of the IRP. There is a need for regulation of the profession in order to deal with various kinds of misbehaviour that can arise. BLRC had developed sophisticated thinking on how the IP and IPA industries should work (Burman & Roy, 2015). A lot of this did not make it into the IBC or the subordinate legislation. The IPAs as constructed today are not performing the roles required of them in regulation of IPs. While IBBI has enforced against IPs on relatively trivial violations, IBBI itself is not equipped to enforce against the real challenge, of malpractice by IPs: it cannot overcome the lack of IPA capacity.
When IPs step into the shoes of the board, and make vital decisions, they are exposed to a new level of legal risk. There is a need for insurance against these risks. This is not yet available in the market.
While the design of the IBC has many features that will yield speed under Indian conditions, the working of NCLT/DRT is still a key factor that will determine rapid resolution as part of the Indian bankruptcy reform (Datta and Regy, 2017).
At present, we see many problems, such as inconsistencies in the behaviour of NCLT across locations, a few orders that are wrong, the lack of orders organised as structured documents, low transparency on the web, and delays. These may be a small precursor of the difficulties to come, as the case load has thus far been mild. Most defaults are, as yet, not going to the NCLT as creditors are waiting to see how the IBC works out. If the bankruptcy reform progresses, we will go from a case load of 20 per month to $10\times$ or $100\times$ as much (Damle & Regy, 2017). At this level of load, the unreconstructed NCLT will experience an organisational rout and will become the chokepoint of the bankruptcy reform.
NCLT has gone down the route of conventional courts and tribunals, which has reproduced the well acknowledged infirmities of the judicial process in India. This is not commensurate with what the bankruptcy reform requires. New knowledge needs to be brought to bear on the working of NCLT (Datta & Shah, 2015).
New thinking by banks and insurance companies is required if they are to play their part in the new bankruptcy process. However, their thinking is greatly shaped by regulations. Errors in the present body of banking regulations, and the associated enforcement machinery, have created an incentive to hide bad news, to not initiate the IRP and to vote irrationally in the creditors' committee.
Sound micro-prudential regulation is one which would require that when a default takes place, the lender must rapidly mark down the value of the asset to zero. This loss should go into the Income and Expenditure statement immediately. Once this is done, there is no overhang of the past, and the lender will be rational about recoveries. Whether the asset is sold off, whether IRP is filed, how to vote on the creditors committee: All these decisions will be based on commercial considerations alone. Recoveries in the future would flow back into the Income and Expenditure statement.
These issues have yet to make their way into micro-prudential regulation of banks and insurance companies.
Assuming RBI and IRDA address these mistakes in micro-prudential regulation, we may see a new cycle being established within two years. New defaults should immediately show up as expenditure, and there would be a flow of cash from old defaults where the bankruptcy process has been completed. This is the opposite of the present arrangement, where it is claimed that all loans are profitable other than some old loans which induce losses.
In the case of NAV-based financial firms, such as mutual funds and pension funds, the event of default should influence the marked-to-market (MTM) prices even if the secondary market for the bond is not liquid. Here also, the bias in micro-prudential regulation should be to mark down the prices to near zero values quickly. This will create incentives for these funds to sell off these assets to distressed debt funds, when these transactions would yield a price that exceeds the MTM price that was used internally.
On 4 May 2017, an Ordinance was promulgated that gave RBI powers to push banks to initiate IRP. There is less to it than meets the eye (Datta & Sengupta, 2017). Backseat driving in a few cases, even if done wisely, cannot solve the regulation-induced bad behaviour of banks. There is no substitute for the slow hard work of reforms of bank regulation and supervision.
The reforms requires the presence of two key groups of buyers. These are strategic players in the same sector, such as Jet Airways for the Kingfisher bankruptcy, and private equity funds (Shah, 2017).
Small firms lack the ability to set up dedicated teams that focus on opportunities coming up in the bankruptcy process. However, there should be teams at the top 2000 companies that watch the bankruptcy process and look to buy up useful things that come along, either in the form of going concerns or the liquidation process. Private equity funds have not yet started looking at this area on a significant scale. A few pioneers will get started. As they reap strong returns, other funds will follow, and new money will be raised to pursue these opportunities.
We could have developed these capabilities through `asset reconstruction companies' from 2002-2016, but through mistakes in the RBI regulations for these (Shah et. al. 2014), that opportunity was wasted.
As long as the IBC and its institutions are unproven, there will be a shortage of buyers, which in turn will lead to very low prices for the stressed assets. At the early stage, buyers will fear legal risk in the IBC, and will shy away from investing in building organisational capital. The critical story in the evolution of insolvency institutions in India is the emergence of thousands of skilled professionals at private equity funds and the 2000 big companies, each surrounded by an ecosystem of lawyers, accountants and consultants, armed with capital, process manuals and authorisations, who are ready to go. We are at the early stages of that journey.
If completed transactions are the output of the bankruptcy process, we are still some way from observing outputs under the IBC. So far, roughly 100 transactions have begun on their journey in the IRP. None has completed it.
The intent of the law was that six months after the initiation, the IRP should end with either a successful vote for a restructuring plan or the start of the liquidation process. The threat of value destruction in the liquidation would create a focus in the minds of the creditors committee, and thus avoid the delaying tactics that are seen in India today.
The first case, Innoventive Industries, started on 17 January 2017. Six months from that date is 16 July 2017. The delay with which this first case is completed will be an important first milestone for the bankruptcy reform.
First case where interim financing is obtained.
First vote by a creditors' committee.
First case to complete IRP with a super-majority in favour of a restructuring plan.
First case to commence liquidation.
First case to complete liquidation.
First individual insolvency to commence and complete.
An analysis of the orders passed by the NCLT (Chatterjee et al., 2017, forthcoming) shows that some of these milestones will come soon. We will soon be talking in the language of cases per month and Rs. billion per month, in the aggregate and across categories of defaulters, which will be the output measures of the bankruptcy reform.
The proximate outcome of the bankruptcy process is the recovery rate. This expresses the NPV of recoveries on the date of default as a ratio to the face value of the debt on the date of default. What would give us a high recovery rate?
The extent to which buyers feel there is predictability and certainty in the IBC processes (i.e. the absence of a lemons problem).
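To make the definition above concrete, here is a small illustration (my own numbers and discount rate, purely hypothetical) of the recovery-rate calculation:

```python
# Recovery rate = NPV of recoveries at the date of default, divided by the
# face value of the debt outstanding on that date.

def recovery_rate(face_value, recoveries, discount_rate):
    """recoveries: list of (years_after_default, cash_received)."""
    npv = sum(cash / (1.0 + discount_rate) ** t for t, cash in recoveries)
    return npv / face_value

# Rs 100 of defaulted debt; Rs 20 received after 1 year and Rs 30 after 2 years,
# discounted at 12% per year.
print(round(recovery_rate(100.0, [(1, 20.0), (2, 30.0)], 0.12), 3))  # ≈ 0.418
```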
The value of a firm is greater than the liquidation value only when the firm is a going concern. Financial distress disrupts the smooth working of the firm and damages organisational capital. For this reason, it is optimal that the resolution process must commence a day after the first default. However, for the bulk of the cases which are now being brought to the IBC process, the first default is likely to have taken place a long time ago. The recovery rates obtained there are going to be low. This is not a comment on the bankruptcy reform.
We should focus on new defaults that are brought into the IBC. These are the real test of IBC, where there continues to be organisational capital, and there is an opportunity to obtain higher recovery rates. If the bankruptcy reform is successful, new defaults should result in restructuring plans that achieve the super-majority in the creditors committee, obtain a new management team, avoid liquidation, and achieve high recovery rates. An equally important issue for new defaults is the ability of the market to correctly distinguish between firms that are worth rescuing as going concerns versus those that should be put into liquidation.
In liquidation, good outcomes will be rapid sales through which the recovery rate is enhanced, and predictable flow of cash through the IBC waterfall.
Once a certain recovery rate is consistently obtained --- even if it is a low value --- this will bring a new level of confidence to lenders. The first milestone is to gain confidence that we will consistently get a (say) 20% recovery rate. The second milestone is to get the recovery rate up to better values. As these milestones are achieved, lenders will become more comfortable with a broader range of credit risk and maturity. Once many transactions are completed, we will be able to do statistical analysis of delays and recovery rates, differentiated between firms across size categories (small/medium/large) and date of default (recent/old). That will yield report cards of the Indian bankruptcy reform.
A story about organisational capability. Almost everything described here is a story about the capabilities of organisations: of MCA, IBBI, RBI, SEBI, NCLT, DRT, IPAs, IPs, IUs, the biggest 2000 companies, distressed debt and private equity funds. A well functioning bankruptcy process involves all these persons skillfully playing their own part. At first, these organisational capabilities do not exist. Nine months after being set up, IBBI has just five senior officers. We need to assess gaps in organisational capabilities, and undertake steps which foster this capability, both in individual organisations as well as jointly across them all.
There is a coordination problem here: organisation $i$ tends to underinvest in building organisational capacity when it sees that the remainder of the ecosystem lacks the requisite capabilities, as this lowers the return on its own investments in organisational capability. If organisation $i$ were sure that organisation $j$ was going to invest in building organisational capital, then the marginal product of investment in organisational capital for organisation $i$ would be higher.
Too often, new organisations in this field are emulating existing organisations. IBBI is slipping into the mores of SEBI and RBI, NCLT is built like a conventional Indian court, IPAs have become like their parents, the quality of drafting law and regulations is slipping closer to conventional Indian standards. This ends up in an environment of low expectations and low organisational capabilities. Instead, we must create a climate of excellence, and work to build high performance organisations.
The lack of data is shaping up as a big barrier. The BLRC design had envisioned a data-rich environment for bankruptcy reform. This included the NCLT issuing structured orders, IBBI building a statistical system, IPAs building supervisory databases and large-scale data capture at IUs. So far, a small part of this has begun. The fog of war is heightened, and all persons are faring poorly on strategy and tactics. The creation of data, and downstream research based on this, that has been done by projects such as Chatterjee et. al. (2017, forthcoming) is a critical element of the bankruptcy reform process.
From uncertainty to risk, and reduction of risk. As with the establishment of all markets, private persons avoid committing resources that are required for building human capital in the field. This leads to a chicken-and-egg situation: lenders are loath to take distressed firms into IRP as the buyers are missing. The buyers are missing as there are not enough available transactions.
The ex-ante legal risk as seen by buyers is considerable. They know the inputs that are required to make the bankruptcy process work. They worry about the legal challenges that may materialise even after they have supposedly got a transaction. They compensate for these risks by low-balling their bids.
As cases move through the IBC processes, uncertainty about the working of the process is reducing for private persons. In time, uncertainty will be replaced by risk: they will understand the contours of the problem and they will build some priors about the process. In due course, more private persons will gain confidence and show up as buyers, thus yielding the ultimate desired outcome: high recovery rates.
Load and load bearing capacity. Big cases present a bigger load upon the fledgling institutions (Shah, 2017). With big cases, private persons have more to lose, and will use every means, fair or foul, to avoid losses. The high powered legal teams that have come together around the Essar Steel default are consistent with this prediction. Their actions, the precedents that they establish, and the way these events reshape the prior distributions of all the players, may end up harming the Indian bankruptcy reforms. The 12 big cases are hotspots for the bankruptcy reform where many things can go wrong. We are likely to obtain particularly weak recovery rates for these, as a big load is juxtaposed against a weak load-bearing capacity.
Success is not assured. There is universal optimism about the Indian bankruptcy reform. We worry that failure has not been ruled out. If the present effort at bankruptcy reform fails, it will be a tremendous loss of confidence in the eyes of creditors, and there will be sustained cynicism on the part of the private sector about future reform efforts. The Indian bond market is an example of persistent failure of reform, leading to endemic cynicism on the part of private persons. Such cynicism leads to reduced investments by private persons in organisational capital, and thus reduces the probability of success in the future.
The Indian bankruptcy reform is work in progress and there are many areas for concern.
A lot of the policy work in the bankruptcy reform has been approached as business as usual. In the Indian policy process, business as usual results in endemic failure. The success stories of the Indian policy process, like the telecom reforms, the equity market reforms, and the New Pension System, did not come from business as usual. The visible difficulties, after the first year of the bankruptcy reforms, call for higher resourcing and improved organisation for the nine areas of inputs, and a shift away from business as usual.
As Dr. Sahoo has emphasised, we will never have a perfect law and perfect regulations and a perfect IBBI at the early stages of a reform. What matters most is the intellectual capacity in discerning incipient problems, diagnosing them swiftly and correctly, and coming out with effective solutions. The private sector is ultimately watching the policy apparatus and waiting for this feedback loop to emerge.
Resource page on the bankruptcy reforms, IGIDR Finance Research Group, 2014 onwards.
Burman, Anirudh and Shubho Roy, Building the institution of Insolvency Practitioners in India, IGIDR Working Paper, December 2015.
Burman, Anirudh and Rajeswari Sengupta, Ushering in insolvency professionals, Business Standard, 20 November 2016.
Chatterjee, Sreyan, Gausia Shaikh and Bhargavi Zaveri, Watching India's insolvency reforms: a new data-set of insolvency cases, FRG Working Paper, 2017.
Damle, Devendra and Prasanth Regy, Does the NCLT have enough judges?, Ajay Shah's blog, 6 April 2017.
Datta, Pratik and Prasanth Regy, Judicial procedures will make or break the Insolvency and Bankruptcy Code, Ajay Shah's blog, 24 January 2017.
Datta, Pratik and Rajeswari Sengupta, Understanding the recent Banking Regulation (Amendment) Ordinance, 2017, Ajay Shah's blog, 8 May 2017.
Datta, Pratik and Ajay Shah, How to make courts work?, Ajay Shah's blog, 22 February 2015.
Malhotra, Shefali, and Rajeswari Sengupta, Drafting hall of shame #2: Mistakes in the Insolvency and Bankruptcy Code, Ajay Shah's blog, 18 November 2016.
Ministry of Corporate Affairs, Building the Insolvency and Bankruptcy Board of India, Report of the Working Group on the Establishment of the IBBI, chaired by Ravi Narain, 21 October 2016.
Ministry of Corporate Affairs, Report of the Working Group on Information Utilities, chaired by K. V. R. Murty, 10 January 2017.
Prashant, Sumant, Prasanth Regy, Renuka Sane, Anjali Sharma and Shivangi Tyagi, Issues with the regulation of Information Utilities, Ajay Shah's blog, 12 July 2017.
Shah, Ajay, Anjali Sharma and Susan Thomas, NPAs processed by asset reconstruction companies -- where did we go wrong?, Ajay Shah's blog, 23 August 2014.
Shah, Ajay and Susan Thomas, Indian bankruptcy reforms: Where we are and where we go next, Ajay Shah's blog, 18 May 2016.
Shah, Ajay, The buy side in the bankruptcy process, Business Standard, 10 July 2017.
Shah, Ajay, Beware of premature load bearing, Business Standard, 26 June 2017.
Sibal, Rahul and Deep Shah, Antecedent Transactions: An Anomaly in the Insolvency and Bankruptcy Code, 2016, IndiaCorpLaw blog, 5 May 2017.
Singh, Jyoti and Arushi Mishra, Dichotomy In Opinions Of NCLT Benches -- Meaning Of "Dispute" Under The Code, Mondaq, 10 July 2017.
Ajay Shah is a researcher at the National Institute for Public Finance and Policy, and Susan Thomas is a researcher at the Indira Gandhi Institute for Development Research. The authors thank Prasanth Regy, Bhargavi Zaveri and Rajeswari Sengupta for extensive comments and discussion on these issues.
Sumant Prashant is a researcher at the National Institute for Public Finance and Policy.
by Pratik Datta, Radhika Pandey and Sumant Prashant.
Foreign investment into India has always been heavily regulated, requiring approvals from various government ministries. Post-liberalisation, a need was felt to create a single window for foreign investors applying for such approvals. As a result, the Foreign Investment Promotion Board (FIPB) was established in August 1991. Initially it was placed within the Prime Minister's Office (PMO), so that its credibility could be established quickly. It then shifted to the Department of Industrial Policy and Promotion (DIPP) and finally to the Department of Economic Affairs (DEA) in the Ministry of Finance. Here it functioned as an inter-ministerial body making recommendations to the Finance Minister for the grant of approval for foreign investments as per the regulations under the Foreign Exchange Management Act, 1999.
Although FIPB was a single window for foreign investors, at the back-end it was an agglomeration of various Ministries whose views were necessary. It comprised Secretaries from the Department of Economic Affairs (DEA), DIPP, Department of Commerce (DoC), Ministry of External Affairs (MEA), Ministry of Overseas Indian Affairs (MOIA), Department of Revenue (DoR) and the Ministry of Micro, Small and Medium Enterprises. Depending on the sector to which the investment proposal pertained, the concerned Ministry would also be asked to give comments. At times, the views of RBI would also be sought. Naturally, this inter-ministerial coordination took time and frequently delayed the approval process. Consequently, FIPB ended up being seen as another bureaucratic body delaying approvals. This fuelled the demand for a better substitute.
In a recent spate of reforms, the Cabinet finally decided to scrap FIPB. Now, foreign investment in any of the eleven notified sectors would require approval from the concerned Administrative Ministry. Last week the DIPP issued a Standard Operating Procedure (SoP) for processing FDI proposals under this new regime. The most promising feature of this SoP is the 8-10 week time-frame within which investment applications are required to be cleared by the government ministries. But will this reform ensure timely disposal of foreign investment approvals? To answer this, it would be useful to understand the legal institutional framework within which foreign investment approvals are processed in some advanced jurisdictions.
In Australia, the decision to approve a foreign investment proposal is with the Treasurer under the Foreign Acquisitions and Takeovers Act, 1975. When making such a decision, the Treasurer is advised by the Foreign Investment Review Board (FIRB), which examines foreign investment proposals and advises on the national interest implications. Section 77 of the 1975 Act requires the Treasurer to make his decision within 30 days, which can be extended by 90 days. The Treasurer has to give reasons for rejecting substantial commercial proposals, which are published in the Treasurer's press release.
In Canada, the decision to approve a foreign investment proposal is with the Minister under the Investment Canada Act, 1985. While taking the decision, the Minister is advised by the Director of Investments. A foreign investor is required to notify the Director before making an investment or within 30 days of making the investment. The investment proposal is subject to review only if the Director sends a notice for review to the foreign investor within 21 days. The Minister has 45 days to determine whether or not to allow the investment. The Minister can unilaterally extend the 45 day period by an additional 30 days by sending a notice to the investor prior to the expiration of the initial 45 day period. Further extensions are permitted if both the investor and the Minister agree. If no approval or notice of extension is received within the designated time, the investment is deemed approved. If a foreign investor's application is rejected, the law requires the rejection order to provide reasons for such rejection. Moreover, another opportunity is given to the applicant to reapply. If the applicant is unable to make its case stronger in the second attempt, the application is finally rejected.
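As a rough illustration of how such statutory time-limits discipline the process, here is a schematic sketch of the Canadian deemed-approval logic described above. The dates and the helper function are hypothetical; this is an illustration of the mechanism, not a statement of the law.

```python
# Sketch (based only on the description above) of the deemed-approval logic:
# if neither a decision nor a notice of extension arrives within the review
# window, the investment is treated as approved.

from datetime import date, timedelta

def review_status(review_start, today, decision=None, extension_notice_sent=False):
    deadline = review_start + timedelta(days=45)
    if extension_notice_sent:
        deadline += timedelta(days=30)           # unilateral 30-day extension
    if decision is not None:
        return decision                          # "approved" or "rejected with reasons"
    if today > deadline:
        return "deemed approved"                 # silence cannot stall the investor
    return "under review until %s" % deadline.isoformat()

print(review_status(date(2017, 7, 1), date(2017, 9, 1)))  # deemed approved
```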
Evidently, in these jurisdictions the primary law imposes time-limits on the Minister approving foreign investment proposals. The primary law is also clear on the precise purpose of the government approval. For instance, national security is a major concern while approving foreign investment proposals. Moreover, the primary law also lays down clear processes to handle rejection of investment applications, making the Minister more accountable. For instance, in the US, even the President, who has the final authority to reject an investment proposal, issues a presidential order providing justification for rejection. This institutional accountability hardwired within the legislative framework enables these jurisdictions to better handle foreign investment applications.
In contrast, the Indian primary law - the Foreign Exchange Management Act, 1999 (FEMA) - does not create any institutional accountability. It does not prescribe any time-limits for the Finance Minister to dispose of foreign investment applications. Neither is FEMA clear on the purpose of government approval itself. Further, the law does not require the government to give any reasons for rejecting an investment application. These are fundamental problems in the current Indian legal institutional framework around FDI approvals.
DIPP's new SoP does not resolve any of these fundamental issues. The timelines it imposes on the Ministries for various actions are not even binding. This is because the SoP is not a legal instrument. It is merely a pdf document uploaded as a public notice on the DIPP website. That does not make it a law binding on the different Ministries in the government. At most, it is an aspirational document laying out the good intentions of the government. But if a Ministry violates it, there are no consequences or sanctions. The SoP does not in any way change the internal incentive structure of the bureaucracy to ensure that they comply with the timelines. Therefore, the SoP fails to solve the root cause of delay at FIPB - lack of time-bound inter-ministerial coordination needed for timely grant of approvals.
Any reform to the Indian FDI approval regime must start with a complete rethinking and redesigning of the primary law - FEMA - from first principles. What is the market failure in the field of capital controls? Why is government approval even necessary? How can the government be made accountable to ensure that approval decisions are made in time-bound manner without prejudicing the main purpose of such approval?
The Financial Sector Legislative Reforms Commission (FSLRC) answered these fundamental questions based on a holistic review of international best practices. It recommended that the objective of capital controls should be to address national security concerns. In addition, the report envisaged controls of a temporary nature to address crisis situations. All these aspects are codified in the chapter on capital controls in the Indian Financial Code (IFC), the draft law prepared by the FSLRC. Even as we debate the objectives of capital controls, the law on capital controls must be unambiguous in laying down an effective procedure for processing FDI proposals. As an example, Clause 243 of the IFC provides for a 90-day time-bound process to be followed by the Central Government while approving or rejecting FDI proposals. Chapter 9 of the FSLRC Handbook released by the Ministry of Finance further elaborates this approval process.
The shortcomings of FIPB were merely symptoms of a deeper problem in the primary law, FEMA. This underlying problem can be resolved only by replacing FEMA with a coherent new primary law. The new law should require government approval only for foreign investment in sectors that are strategic from the viewpoint of national security considerations or to address emergency situations such as war, or balance of payments crisis. The law should also focus on accountability of the government. It should provide clear time-bound legal processes and require the government to give reasoned orders while rejecting an investment proposal. Only such fundamental legislative reforms can help create a better substitute to FIPB.
Pratik Datta, Radhika Pandey and Sumant Prashant are researchers at the National Institute of Public Finance and Policy, New Delhi.
by Sumant Prashant, Prasanth Regy, Renuka Sane, Anjali Sharma, and Shivangi Tyagi.
The Insolvency and Bankruptcy Code, 2016 (IBC) provides for the speedy resolution of insolvency. The process described in IBC hinges on the assumption that information will be easily accessible to the parties involved. It is for this purpose that IBC provides for the establishment of Information Utilities (IUs). As envisaged in IBC, as well as the Bankruptcy Law Reform Committee Report, IUs are repositories of information regarding debt and default, and are required to be able to produce this information quickly. This information can then be used for many purposes. For instance, the Courts can use it to decide whether to send a debtor company into a resolution process. But for this information to be widely used, it is essential that Judges, Insolvency Professionals, and other parties must be able to trust this information implicitly.
The timelines specified by IBC are quite strict. They can be met only if IUs stand ready to provide all requisite information quickly. IBC provides little guidance on how IUs are to function, leaving the details to subordinate regulation. A Working Group on IUs was set up by the Ministry of Corporate Affairs to draft the regulations governing IUs. The Working Group's suggestions (Draft Regulations) were put up for public comments. Subsequently, an Advisory Committee discussed the public comments, and the final regulations (IU Regulations) were notified on 31st March, 2017.
However, the regulations have some fundamental problems which are likely to impede the efficient and transparent functioning of IUs. In this article, we highlight some of these problems.
Authorised representatives: The IU Regulations provide that an information utility "shall provide a registered user a functionality to enable its authorised representatives to carry on the activities in sub-regulation (1) on its behalf."
This can be very dangerous, because of the possibility of misuse. For instance, can a bank be the authorised representative of its borrower? Imagine a situation where an individual takes a loan from a lender, and one of the many signatures in the paperwork authorises the lender to be his representative for filing information with an IU. The lender can now declare, on behalf of the borrower, that the borrower has defaulted. This is a clear conflict of interest.
This provision lends itself to misuse by the borrower, too. A borrower may borrow money and confirm this debt in an IU, but he could later claim that some authorised representative committed fraud.
If the functionality of enabling authorised representatives is desired, appropriate safeguards need to be added to the IU Regulations.
Registering users: According to section 214(e) of IBC, IUs are supposed to get financial information authenticated before storing it. But Regulation 18(1) of the IU Regulations requires registration only for submitting and accessing information. Does this mean that unregistered parties can authenticate information? If yes, there is a danger that IU records will have little sanctity in court.
Inter-IU sharing of information: The Regulations ask IUs to share the commercially sensitive information they hold with all other IUs. Can the destination IU store this information or use it in any way?
If incomplete information is provided at the time of financial information retrieval, it will not be clear whose fault it is: the primary IU's or some more distant IU's.
How is this to work? How are the other IUs to provide this info "directly" to the user? Presumably the intent is to avoid routing the financial information through the destination IU, but this is contradictory and unclear.
The Draft Regulations expect that the user (or software deployed by him) would be able to query multiple IUs inexpensively and in parallel, as happens every day in online air-ticket booking. This is a simple and straightforward architecture that avoids the problems above.
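A sketch of what that architecture could look like on the user's side. The IU endpoints and the query function below are hypothetical placeholders; nothing of the sort is specified in the Regulations.

```python
# Sketch of the Draft Regulations' architecture: the user's own software
# queries every IU in parallel, like a flight-search engine querying airlines.

from concurrent.futures import ThreadPoolExecutor

IU_ENDPOINTS = ["https://iu-one.example/api", "https://iu-two.example/api"]  # hypothetical

def query_iu(endpoint, debtor_id):
    # In a real client this would be an authenticated HTTPS request to the IU.
    # Here we return a stub record so the sketch is runnable.
    return {"iu": endpoint, "debtor": debtor_id, "records": []}

def query_all_ius(debtor_id):
    with ThreadPoolExecutor(max_workers=len(IU_ENDPOINTS)) as pool:
        return list(pool.map(lambda ep: query_iu(ep, debtor_id), IU_ENDPOINTS))

print(query_all_ius("CIN-U12345MH2010PTC000000"))
```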
Digitally signed records: The information stored in an IU should be digitally signed by the IU.
The IU Regulations do not contain such requirements. Without them, it is difficult to detect whether the IU has lost data or manipulated it in connivance with parties to the debt. The information in the IU loses its sanctity, and judges can no longer trust it.
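One possible way to make loss or tampering detectable is to hash-chain records and sign them under the IU's key. This is a sketch of the general idea, not a prescription from the Draft Regulations; the HMAC below merely stands in for a proper PKI-based digital signature.

```python
# Sketch of tamper-evident storage: each record is hash-chained to the previous
# one and each record's hash is signed (HMAC here, standing in for a digital
# signature under the IU's key). Any lost or altered record breaks the chain.

import hashlib, hmac, json

IU_SIGNING_KEY = b"demo-key-only"   # placeholder; a real IU would use PKI

def append_record(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "hash": digest,
                  "signature": hmac.new(IU_SIGNING_KEY, digest.encode(),
                                        hashlib.sha256).hexdigest()})
    return chain

chain = []
append_record(chain, {"debtor": "X Ltd", "creditor": "Bank A", "amount": 100})
append_record(chain, {"debtor": "X Ltd", "event": "default"})
print(all(hmac.new(IU_SIGNING_KEY, e["hash"].encode(), hashlib.sha256).hexdigest()
          == e["signature"] for e in chain))   # True while nothing is tampered with
```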
An information utility shall accept information submitted by a user in Form C of the Schedule.
Items 37, 50 and 56 of Form C lay down the documents to be attached as proof. This suggests that an IU has to accept documentary proof of the financial information being submitted.
This is a clear problem. The design of IBC intended IUs to be an electronic repository of financial information, and not a document management system. That is why the requirement of authentication and verification of information submitted to an IU was envisaged. While it may be possible for IUs to accept documents in electronic form, even this process creates two challenges. First, storing documents will add to the cost of the IU infrastructure, which will then be passed on to users. This may make storing credit contracts in an IU expensive and may disincentivise users from doing so. This in turn will pose fundamental viability challenges for the IU business model. Second, if an IU stores financial information as well as documents, it is not clear whether the two will need to be matched and, if so, on whom the responsibility for doing so will fall.
Information of default: Regulation 21(2)(a) of the IU Regulations places an obligation on the IU to communicate information of default to all creditors. The question arises, which creditors: the creditors on the same IU or the creditors of that debtor on other IUs as well? The IU that has learnt of default does not know of the creditors of that debtor on other IUs, but IBC clearly intends that all creditors of a debtor should learn of default, whichever IU they are on.
The Working Group Report thinks this through properly: an obligation is placed on each IU to inform all IUs about default, and also on each IU to inform all creditors of a defaulting debtor.
Marking records as erroneous: Regulation 25(2) suggests that a user can unilaterally mark any information as erroneous. This is very dangerous. A process needs to be specified for correcting errors, and it should involve confirmation by the counterparties, just like any other information that gets to the IU.
An information utility shall not outsource the provision of core services to a third party service provider.
This clause is problematic, as it can create operating inflexibility for IUs. It is unclear how broad this clause is. For example: does this mean that the IU platform cannot be outsourced, or does this outsourcing ban apply to data center and related services, technology maintenance contracts, technology support personnel, physical security including guards, etc?
Technically, it can be read as an IU having to create every component of its core services delivery entirely on its own. This will increase the time taken for an IU to be set up, as well as add to the costs of service delivery by an IU.
Due to this problem, the Working Group Report suggested that outsourcing of core services could be possible, subject to approval by the IBBI. Alternatively, instead of a blanket ban on outsourcing, the IBBI may consider a two-stage outsourcing structure. First, a narrow list of services, such as authentication and verification, may be classified as those that cannot be outsourced. For the remaining services, an outsourcing model similar to the one that the RBI follows for its regulated entities may be adopted. Under this model, the regulator takes two elements into consideration when allowing outsourcing: (1) the standards of service for the user remain the same, whether the components of service delivery are in-house or outsourced; and (2) the primary liability, even in the case of outsourced services, lies with the regulated entity.
Portability of records from one IU to another: Since IUs have pricing freedom, if they charge for storage there is a danger that they may raise fees substantially on users who cannot move their data to another IU. To prevent such price-gouging, the Draft Regulations provide that any user may migrate his information from one IU to another, and the source IU is prohibited from charging for this facility. The IU Regulations instead seek to avoid this problem through the declaration that fees should be a reasonable reflection of the service provided. But this requires the Board to form a view on what a reasonable fee is. The solution suggested by the Working Group uses the market to achieve the same outcome in a less intrusive manner.
IU records admissible as evidence: Regulation 30 mandates that an IU shall adopt "secure systems" (as defined in the Information Technology Act, 2000) for information flows. However, this alone does not ensure that the information stored in IUs will be admissible as evidence.
The Draft Regulations provided that the information stored in an IU should conform to the requirements laid down in section 65B of the Indian Evidence Act, 1872 (Evidence Act). This section of the Evidence Act lays down the standards for the storage and maintenance of electronic records so that they are admissible as evidence.
Authentication and Verification: The manner in which the terms "authentication" and "verification" are used in the IU Regulations creates confusion. It is not clear whether authentication and verification take place before or after the information is submitted.
As per Regulation 20(2)(ii), on receipt of information by an IU, the submitter of information will be provided with terms and conditions of authentication and verification of information. It is not clear why the terms and conditions should be provided once the information has been submitted. The user should be aware of these terms before the information is submitted to an IU.
Why is an IU required to act expeditiously only when information of default is received? The process of authentication and verification should be followed for any information received, not just for defaults.
Moreover, it is not clear what 'expeditious' means.
Regulation 23(2)(b) and (c) state that an IU shall enable the user to view the status of authentication and verification. It is not clear why the status of authentication would need to be viewed after information has been submitted: IBC mandates that information should be stored only after authentication, so an IU should never store unauthenticated information in the first place.
Once information is submitted, the IU should make the information available to concerned parties for authentication.
An IU should verify the identity of the concerned party before allowing it to authenticate the information, so that there is no unauthorised access.
Once information is authenticated, the IU should store the information in such a manner that it cannot be lost or tampered with.
Access to IU records by IBBI: Regulation 23(1)(e) provides that the IBBI will be allowed to access any information from any IU. It doesn't impose any restrictions or conditions for accessing this information.
Since the information submitted to an IU is highly confidential and commercially sensitive, it is essential that there should be some accountability for the access of information by the IBBI. The Draft Regulations provided that in order to access information stored in an IU, the IBBI must pass a written order.
Exit management plan: Regulation 39 mandates all IUs to have an exit management plan. Clause 1(a) of this regulation requires the IU to have mechanisms in place so that users can transfer their information to other IUs in case one or more IUs shut down. This regulation places the onus of transferring information on the users of IUs instead of on the IBBI or the IUs themselves. This onus is inappropriately placed, since an IU will probably have a large number of users, many of whom may not have the ability to ensure the transfer of their information.
The Draft Regulations put the onus of ensuring that information is transferred from one IU to another on the IBBI and the IU. This makes for a smoother transfer of information, since these entities are better equipped with the resources to perform this task.
Entry Barriers: Regulation 3 of the IU Regulations requires that IUs have a net worth of at least Rs 50 crores, and prevents foreign control of IUs. Regulation 8 prevents any single shareholder from holding more than 10 percent of the equity share capital of an IU. Together, these have a chilling effect on the entry of firms into this industry.
Annual fees: Regulation 6(2)(e) provides that an IU must pay a fee of fifty lakh rupees to the Board annually. This is a large sum of money, especially given that the business model is unproven. This will discourage entry, limit competition, and increase the fees charged to the users.
In-principle vs regular registration: Regulation 7 is unclear on the difference between a regular approval and an in-principle approval. It is also unclear as to why an applicant would choose one form of application over the other. In the Draft Regulations, the idea was that in-principle registrations would be granted faster than regular registration.
An IU shall provide services without discrimination in any manner.
An explanation follows that mentions specific kinds of discrimination. It is not clear whether the explanation is indicative or if it is exhaustive. If exhaustive, there is no need for the broad prohibition of all discrimination above. This creates confusion for potential IUs.
Fees: Regulation 32(1)(a) provides that IUs shall charge a uniform fee for providing the same service to different users. Does this mean that an IU has to charge the same to an individual lender who wants to submit information about one loan, and a large bank such as SBI, which might want to submit information about thousands of loans every day, on the basis that the service is the same?
Regulation 32(2)(a) provides that the fee charged for providing services shall be a reasonable reflection of the service provided. This is a very broad statement in the absence of any test of what a "reasonable reflection" is.
Many of the issues mentioned above can create serious problems for the new (and as yet unborn) industry of IUs. High entry barriers will lead to a monopolistic industry with high prices and poor service. If courts are not convinced of the accuracy of the records stored in IUs, the parties will get stuck in lengthy litigation to establish even the basic facts about the debt. IUs established under these regulations might not serve their basic purpose — the creation of an information-rich environment which would bolster speedy resolution in the country.
Government of India, The Report of the Bankruptcy Law Reforms Committee, chaired by Dr T K Vishwanathan, 4 November 2015.
Ministry of Corporate Affairs, Report of the Working Group on Information Utilities, chaired by K V R Murty, 11 January 2017.
Sumant Prashant, Prasanth Regy, Renuka Sane, and Shivangi Tyagi are researchers at the National Institute of Public Finance and Policy, New Delhi. Anjali Sharma is a researcher at Indira Gandhi Institute of Development Research, Mumbai.
The buy side in the bankruptcy process by Ajay Shah in Business Standard, July 10, 2017.
Why doesn't anybody know if Swachh Bharat Mission is succeeding? by Diane Coffey and Dean Spears in Ideas for India, July 10, 2017.
Internal insecurity by Prakash Singh in The Indian Express, July 10, 2017.
The fires of Bengal by Pratap Bhanu Mehta in The Indian Express, July 8, 2017.
Why Hindutva hates Aryan Invasion Theory by Devangshu Datta in Business Standard, July 7, 2017.
Gorkhaland protest: Darjeeling brew costs $1,800 a kg to European buyers by Reuters in Business Standard, July 6, 2017.
Opioid Prescriptions Falling but Remain Too High, CDC Says by Rob Stein in NPR, July 6, 2017.
Who's conducting India's wars? by Aakar Patel in Business Standard, July 6, 2017.
Why no 'paan' stains in our Metro stations? by Biju Dominic in Mint, July 6, 2017.
Should we have gone ahead with the GST, warts and all? by Rahul Khullar in The Indian Express, July 6, 2017.
Leaks, Lies, and Chinese Politics by Rush Doshi and George Yin in Foreign Affairs, July 5, 2017. When the domestic media is stifled, the overseas media becomes more important.
Lessons from milk for agriculture by Ashok K Lahiri in Business Standard, July 4, 2017.
A Little Piece of Hell by Don North in The New York Times, July 4, 2017.
Are robots taking over the world's finance jobs? by Nafis Alam and Graham Kendall in Business Standard, July 3, 2017.
Homophobia is back - it's no accident that nationalism is too by Zoe Williams in The Guardian, July 2, 2017.
GST rollout: Get Set for Turbulence by P Chidambaram in The Indian Express, July 2, 2017.
A tiny Indian publisher is translating hidden gems of world literature for global readers by Maria Thomas in Quartz, June 30, 2017.
After Air India, What About PSUs Bleeding Taxpayer Crores? by Shankkar Aiyar in Bloomberg, June 30, 2017.
Farm Prices After A Bumper Crop: Managing A Problem Of Plenty by Amey Sapre and Smriti Sharma in Bloomberg, June 29, 2017.
An Airbus that's typically used in India costs roughly $100 million and works for roughly 25 years. The pure capital cost is roughly $4 million a year for the depreciation and $9 million a year for interest (assuming the borrower pays 9% in USD). This adds up to $13 million a year for the privilege of owning the plane for 365 days. This maps to $35,616 per day.
In other words, each day of down time by the plane is a cost of $35,616 or roughly Rs.2.3 million.
If you were a dictator and ran the economy efficiently, you would be very focused on down time. The most important thing is to keep a plane working: you would try to ensure that it flies on every single day.
Airlines are vulnerable to fluctuations of fuel prices and fluctuations in traffic. The world over, private airlines fail a lot.
If an airline fails, and if the planes that belong to it sit idle on the tarmac, that is a substantial cost. Each day of down time for a plane is a cost of capital of Rs.2.3 million. As an example, it appears that 40 planes belonging to Kingfisher sat idle for a year. Going by our rough calculation, that's an opportunity cost of capital of Rs.33.8 billion. For a moment, let's not think about who paid this cost. A cost is a cost. Someone paid the cost; it was a cost to society. A plane that does not fly is capital that is wasted. The dictator would be laughing at our inefficiency - he would say that he would never waste capital like this.
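The back-of-the-envelope numbers above can be reproduced as follows. The Rs 65 per USD exchange rate is my assumption, chosen to match the article's Rs 2.3 million per day figure.

```python
# Reproducing the rough calculation above. The Rs 65/USD exchange rate is an
# assumption, not stated in the article.

price_usd = 100e6          # Airbus purchase price
life_years = 25
interest = 0.09            # USD borrowing cost assumed in the article
usd_inr = 65.0             # assumed exchange rate

annual_capital_cost = price_usd / life_years + interest * price_usd   # 4m + 9m = 13m
per_day_usd = annual_capital_cost / 365                               # ~ 35,616
per_day_inr = per_day_usd * usd_inr                                   # ~ Rs 2.3 million

kingfisher_idle = 40 * 365 * per_day_inr                              # ~ Rs 33.8 billion
print(round(per_day_usd), round(per_day_inr / 1e6, 1), round(kingfisher_idle / 1e9, 1))
# 35616 2.3 33.8
```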
In addition, airline bankruptcies can also be extremely disruptive if people have purchased tickets for flights, well ahead of time, and if the bankruptcy leads to flights being cancelled.
A market with multiple competing private airlines is better than a dictator as there is competition and innovation. But the efficiency of capital use is degraded if there are repeated bankruptcies, and if each airline failure gives a protracted patch of wastage of planes that are idle. The efficiency of the economy is hampered if people show up at the airport and find that the flight was cancelled.
If the bankruptcy process works well, the airline should be protected as a going concern.
No planes should be grounded. No flights should be cancelled.
Efficiency for the economy is about the physical planes performing their scheduled flights. Nothing should change on this.
The airline is an ephemeral legal person that owns the planes. The bankruptcy process should rip up the structure of liabilities of the airline, impose losses upon the previous holders of equity and debt, and replace them with a new configuration of equity and debt. A great churning would take place in the financial structure of the firm. The fact that this drama is taking place should not interfere with the efficient use of planes on the scheduled flights.
Capitalism is messy. A well functioning bankruptcy process protects the efficiency of capital use while the messy events play out.
A good bankruptcy process is one where fluctuations in the firm and its financing structure do not interfere with the core business of planes that fly continuously. Customers should never see a disruption of service, and the planes should never sit idle.
For a country to harness the efficiency and innovation that comes from multiple private competing airlines, a sound bankruptcy process is the secret sauce that achieves the normative ideal for the efficiency of capital utilisation -- the use of planes by a dictator.
RBI move to double award may not reduce mis-selling by Sanjay Kumar Singh in Business Standard, June 28, 2017. The two-part solution is prevention (IFC consumer protection) and cure (Financial Redress Agency).
Hacks Raise Fear Over N.S.A.'s Hold on Cyberweapons by Nicole Perlroth and David E. Sanger in The New York Times, June 28, 2017.
Took a decade for RBI to even accept that banks mis-sell by Monika Halan in Mint, June 28, 2017.
Is the staggeringly profitable business of scientific publishing bad for science? by Stephen Buranyi in The Guardian, June 27, 2017.
Why are Indian news channels so disappointing? by Ashok Malik in Hindustan Times, June 27, 2017.
Beware of premature load bearing by Ajay Shah in Business Standard, June 26, 2017.
Lessons to learn from the Emergency by Manu S. Pillai in Mint, June 24, 2017.
Officers have been made scapegoat for political failure, says P C Parakh by Aditi Phadnis in Business Standard, June 24, 2017.
This is what foreign spies see when they read President Trump's tweets by Nada Bakos in The Washington Post, June 23, 2017.
The Language Genie: Put It Back Into The Bottle For The Sake Of National Unity by Vikram Sampath in Swarajya, June 23, 2017.
The future of journalism: The secret lives of young IS fighters by Quentin Sommerville & Riam Dalati in BBC, June 23, 2017.
How Just 14 People Make 500,000 Tons of Steel a Year in Austria by Thomas Biesheuvel in Bloomberg, June 21, 2017.
Uber Can't Be Fixed - It's Time for Regulators to Shut It Down by Benjamin Edelman in Harvard Business Review, June 21, 2017.
Reversal on rupee-denominated debt by Bhargavi Zaveri & Radhika Pandey in Business Standard, June 19, 2017.
The Man Who Knew Too Much by Jane Bradley, Jason Leopold, Richard Holmes, Tom Warren, Heidi Blake and Alex Campbell in BuzzFeed, June 19, 2017.
Robert Mueller Chooses His Investigatory Dream Team by Garrett M. Graff in Wired, June 14, 2017.
Trump, Putin, and the New Cold War by Evan Osnos, David Remnick, and Joshua Yaffa in The New Yorker, March 6, 2017.
Now we don't need to consider $1\times 1$ any longer as we have found the smallest rectangle tilable with copies of X plus copies of $1\times 1$.
I've only found two other solutions. I tagged it 'computer-puzzle' but some people can probably work both of these out by hand.
This one has a rather interesting generalization (see the third spoiler block there) for a different pentomino and rectangle size.
We can tile a $5\times 6$ rectangle using the X pentomino and $1\times 2$ rectangles.
Abstract: A linear universal decay formula is presented starting from the microscopic mechanism of the charged-particle emission. It relates the half-lives of monopole radioactive decays with the $Q$-values of the outgoing particles as well as the masses and charges of the nuclei involved in the decay. This relation is found to be a generalization of the Geiger-Nuttall law in $\alpha$ radioactivity and explains well all known cluster decays. Predictions on the most likely emissions of various clusters are presented.
Let G(V, E) be a finite undirected graph and let κ be its longest undirected cycle.
Prove that it is always possible to obtain an orientation ω(κ) in which κ is topologically sorted and hence its last edge (from node 1 to node N) is inverted.
Prove that given ω(κ) it is always possible to orient the remainder of the graph as to build a DAG.
The proof of (1) seems intuitive: one can always direct the edges of an undirected cycle so as to form a directed cycle and then invert one of its edges at random. This yields a topological order of the cycle starting at the origin node of the inverted edge. I have no clue about the proof of (2). I'm new to graph theory and still lack the knowledge to write proofs formally, so I'm hoping to learn a bit more about this with this question too.
Assign different numbers as labels to the vertices, such that your chosen cycle visits vertices in increasing order.
It doesn't matter which labels you choose for vertices that are not in your cycle, as long as the labels are all distinct.
Orient each edge in the direction of increasing numbers (a short code sketch of this construction follows the list).
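A small code sketch (my own, following the answer above) of the labelling argument, assuming the chosen cycle is given as an ordered list of vertices:

```python
# Label vertices so the chosen cycle is visited in increasing order, then orient
# every edge from the lower label to the higher one. The result is acyclic, and
# the cycle's edges are all forward except the closing edge, which is inverted.

def orient(vertices, edges, cycle):
    """cycle: list of vertices in the order the undirected cycle visits them."""
    label = {v: i for i, v in enumerate(cycle)}                     # cycle gets 0..k-1
    rest = [v for v in vertices if v not in label]
    label.update({v: len(cycle) + i for i, v in enumerate(rest)})   # arbitrary but distinct
    return [(u, w) if label[u] < label[w] else (w, u) for u, w in edges]

V = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (3, 4), (4, 1), (4, 5), (5, 2)]
print(orient(V, E, [1, 2, 3, 4]))
# Every edge points from a smaller label to a larger one, so no directed cycle exists.
```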