Let $D\subset T\times X$, where $T$ is a measurable space, and $X$ a topological space. We study inclusions between three classes of extended real-valued functions on $D$ which are upper semicontinuous in $x$ and satisfy some measurability conditions.
Kucia A.: Some counterexamples for Carathéodory functions and multifunctions. submitted to Fund. Math.
Zygmunt W.: Scorza-Dragoni property (in Polish). UMCS, Lublin, 1990.
Consider the Newtonian 4-body problem in 3-space with zero angular momentum and any mass ratios. Theorem: every bounded solution defined on an infinite time interval suffers infinitely many coplanar instants: instants where all four bodies lie in the same plane. This generalizes a theorem asserting that for the zero angular momentum 3-body problem in the plane, every bounded solution suffers infinitely many collinear instants, or ``syzygies''. The obvious generalization holds for any dimension $d$, that is, for the zero angular momentum $d+1$-body problem in $d$-space. After stating the results, I give a heuristic physical ``proof'' of this theorem inspired by Mark Levi. I then proceed to sketch the mathematical proof. Among its main ingredients are that (i) the translation-reduced problem has for its configuration space the space of $d \times d$ matrices, (ii) the signed distance from the degeneration locus satisfies a Hamilton-Jacobi equation, (iii) the oriented shape space is homeomorphic to a Euclidean space, (iv) the oriented shape space has non-negative sectional curvature where smooth. The theorem gives a ``hunting license'' to set up symbolic dynamics for the spatial 4-body problem, in analogy with the idea of syzygy sequences for the planar three-body problem.
Two of these were rolled back today -- bond market regulation and PDMA. Here's a response by Ankur Bhardwaj in the Business Standard.
The lack of a bond market is going to weaken the monetary policy transmission, thus making it harder for the MPFA to work. The presence of debt management at RBI is a conflict of interest, which makes it harder for the MPFA to work. The reforms could have worked because they reinforce each other.
Now RBI will have excuses: lacking a monetary policy transmission, they were not able to do inflation targeting; owing to the problems of selling government bonds, they had to keep interest rates too low and SLR too high. The agent who has multiple objectives is accountable for none.
Can India leapfrog into decentralised energy?
India woke up to telecommunications through the reforms of the late 1990s: the power of DOT was curtailed, VSNL was privatised, private and foreign companies were permitted, new methods of working were permitted. At the time, wired lines were mainstream and wireless communications was novel. However, setting up wire lines in India is very hard. India leapfrogged, and jumped into the mobile revolution for both voice and data. The concept of not having a land line at home was exotic in the US when it was normal in India. In similar fashion, India was an early adopter of electronic order matching for financial trading, and of second generation pension reforms: these things became mainstream in the world after they were done in India.
Could similar leapfrogging take place in the field of electricity? An important milestone in this story will come about with the announcement by Tesla Motors on Thursday the 30th of April, 2015.
Electricity consumption fluctuates quite a bit within the day. More electricity is purchased when establishments are open (i.e. daytime), when it's too hot or too cold, and when humans are awake in the dark. The electricity system has to adjust its production to ensure that instantaneous consumption equals instantaneous generation.
If producers are inflexible and consumers are inflexible then generation will not equal consumption. The puzzle lies in creating mechanisms through which both sides adjust to the problems of the other in a way that minimises costs at a system level.
For producers, it is not easy to continually modify production to cater to changing demand. The two most important technologies -- coal and nuclear -- are most efficient in large scale plants which run round the clock. It may take as much as a day to switch off, or switch on, a plant. These plants are used to produce the `base load': the amount of electricity that is required in the deep of the night. Other technologies and modified plant designs are required to achieve flexibility of production within the day. This flexibility comes at a cost. Suppose the lowest demand of the day is $L$ and the highest is $H$. For the electricity system as a whole, a given level of average production is costlier when $H/L$ is higher. The cheapest electricity system is one where $H/L=1$; this runs base load all the time.
Matters have been made more complicated by renewables. Solar energy is only available when it's light, while peak demand of the day is generally in the late evening. Electricity generation from windmills is variable. Further, the planning and despatch management of the grid is made complicated when there is small scale production taking place at thousands of locations, as opposed to the few big generation plants of the old days.
The orange line shows consumption. This was lowest on Saturday night at around 70 GW. It peaked in the evening of Thursday at around 160 GW. This was $H/L> 2$! This gave huge fluctuations in the price, which is the blue line in the graph above. Base load production has no flexibility and was probably configured at 70 GW. When demand was 70 GW, the price was near zero, given the inelasticity of base load production. The price went all the way up to 450 \$/MWh at the Thursday peak.
From the viewpoint of both consumers and producers, these massive price fluctuations beg the question: How can we do things differently in order to fare better? The question for consumers is: How can purchase of electricity from the grid be moved from peak time to off-peak time? The question for producers is: How can more production be achieved at peak time?
All this is true of electricity worldwide. Turning to India, there are two key differences.
The first issue is that ubiquitous and reliable electricity from the grid has not been achieved. The mains power supply in India is unreliable. The euphemism `intermittent supply' is used in describing the electricity supplied by the grid in India. Households and firms are incurring significant expenses in dealing with intermittent supply (example). Intermittent power imposes costs including batteries, inverters, down time, burned out equipment, diesel generators, diesel, etc. Diesel generation seems to come at a cost of \$0.45/kWh. When power can be purchased from the grid, it isn't cheap, as a few buyers are cross-subsidising many others.
In large parts of India, the grid has just not been built out. There are numerous places where it would be very costly to scale out the conventional grid. There are places in India where calculations show that a large diesel generator in a village has strengths over the centralised system. There are small towns in Uttar Pradesh where private persons have illegally installed large generators and are selling electricity through the (non-functioning) grid, in connivance with the local utility staff.
Global discussions of energy systems talk about base load and peak load. In India, the existing generation capacity is not adequate even at base load! The apparent $H/L$ in the data is wrong; demand at the peak is much greater than $H$ -- we just get power cuts. Every little addition to capacity helps. There has been a large scale policy failure on the main energy system. Perhaps more decentralised solutions can help solve problems by being more immune to the mistakes of policy makers.
The second interesting difference is high insolation with high predictability of sunlight.
(source for these maps). Note that the deep blue for the European map is 800 kWh/m$^2$ while for the Indian map, the same deep blue is 1250 kWh/m$^2$. Arunachal Pradesh and Sikkim get more sunlight than Scotland.
Substantial technological progress is taking place in wind and in solar photovoltaics (SPV).
Wind energy is enjoying incremental gains through maturation of engineering, and also the gains from real time reconfiguration of systems using cheap CPUs and statistical analysis of historical data from sensors.
The price of crystalline silicon PV cells has dropped from \$77/watt in 1977 to \$0.77/watt in 2013: this is a decline at 13% per year, or a halving each 5 years, for 36 years. This is giving a huge surge in installed capacity (albeit a highly subsidised surge in most places).
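As a back-of-the-envelope check of that rate:
$$\left(\frac{0.77}{77}\right)^{1/36} = (0.01)^{1/36} \approx 0.88,$$
i.e. about a 12-13% decline per year, and since $0.88^{5.4} \approx 0.5$, the price halves roughly every five years.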
For decades, renewables have been a part of science fiction. Now, for the first time, massive scale renewable generation has started happening. The present pace of installation is, indeed, the child of subsidy programs, but the calculations now yield reasonable values even without subsidies. If and when the world gets going with some kind of carbon taxation, that will generate a new government-induced push in favour of renewables, which could replace the existing subsidies in terms of reshaping incentives.
Electricity generation using renewables is variable (wind) or peaks at the wrong times (solar). In addition, wind and solar production is naturally distributed; it is not amenable to a single 100 acre facility that makes 2000 MW. These problems hamper the use of renewables in the traditional centralised grid architecture. These problems would be solved if only we could have distributed storage.
What would a world with low cost storage look like? Imagine a group of houses who put PV on their roofs and run one or two small windmills. Imagine that these sources feed a local storage system. The renewable generation would take place all through the day. When electricity prices on the grid are at their intra-day peak, electricity would be drawn from the storage system.
For the centralised system, the cost of delivering electricity at a certain $(x,y,t)$ can be quite high: perhaps households at certain $(x,y,t)$ can sell electricity back to the grid.
This is the best of all worlds for everyone. The grid would get a reduced $H/L$ ratio and would be able to do what the grid does best -- highly efficient large-scale base load technologies. The grid would be able to deliver electricity to remote customers at lower cost. Consumers would be better off, as payments for expensive peak load electricity would be reduced.
This scenario requires low cost storage. For many years, we were stuck on the problem of storage. In recent years, important breakthroughs have come in scaling up lithium-ion batteries, which were traditionally very expensive and only used in portable electronics. Lithium Ion batteries have 2.3 times the storage per unit volume, and 3.1 times the storage per unit mass, when compared with the lead acid batteries being used with inverters in India today.
Tesla Motors is an American car company. They have established a very large scale contract with Panasonic to buy Lithium Ion batteries. Nobody quite knows, but their internal cost for Lithium Ion batteries is estimated to be between \$200/kWh and \$400/kWh. On Thursday (30 April 2015), they are likely to announce a 10 kWh battery for use in homes. Its cost is likely to be between \$2000 and \$4000 for the battery part, yielding a somewhat higher price as there will also be a non-battery part. (It is not yet certain that the part they announce will be 10 kWh. There are many stories which suggest this will cost \$13,000, which are likely to be wrong).
A 10 kWh battery can run for 10 hours at a load of 1000 Watts. Note that Tesla is only pushing innovations in manufacturing; they are not improving battery technology. Many others are on the chase for better battery technology.
Stupendous progress has happened with batteries in the last 20 years. Only two years ago, this price/performance was quite out of reach. It is a whole new game, to get a Lithium-Ion battery at between \$200 to \$400 per kWh. Suddenly, all sorts of design possibilities open up. Further, this is only the beginning.
Experts in this field in the US believe that when Lithium Ion batteries are below \$150/kWh, they will be fully ready for applications in the electricity industry in the US. These experts believe this number will be reached in 5 to 10 years.
The rise of storage links up to the rise of electric cars in two ways. First, electric cars are driving up demand for lithium-ion batteries and giving economies of scale in that industry. Second, a home which has an electric car has that battery! The present technology in electric cars -- Tesla's Model S -- has a 85 kWh battery, which is good capacity when compared with the requirements of a home.
Renewables have generated excitement among science geeks for a long time, but have disappointed in terms of their real world impact. Scientific progress in renewables, and in batteries, are coming together to the point of real world impact.
Storage is one method for coping with the intermittent generation from renewables. The other method is to make demand more flexible. As an example, a smart water heater or a smart air conditioner could do more when electricity is cheap, and vice versa. This would make consumption more price elastic.
The Indian environment with expensive and intermittent electricity from the grid is an ideal environment for renewables + batteries.
Distributed generation and distributed storage are seen as ambitious cutting edge technology in (say) Germany. Perhaps the natural use case for this is in India. In Germany, the grid works -- there is no problem with achieving high availability. In Germany, there isn't that much sun. In India, every customer of electricity suffers increased costs in getting up to high availability, and there is plentiful sunlight.
A weird thing that we do in India is to charge high prices for the biggest customers of electricity. For these customers, roof-top PV systems are already cheaper. Problems in the fuel supply have given a steep rise in base load prices, and have pushed the shift to renewables.
In the US, the cost of power varies between 7 and 20 cents/kWh. In this environment, grid parity requires that Lithium Ion batteries achieve \$150/kWh. In India, the break even point is much higher. The announcement on Thursday may yield a price that is viable for many applications in India.
The roofs are covered with PV.
There are a few windmills. Large-scale adoption would require windmill designs which cater to aesthetic sense and not just technical efficiency.
It would make sense to add one diesel generator into the mix, with the advantage that it would run at top efficiency as it would only be used to feed the battery. (This is similar to the efficiencies of running the engine in a hybrid car).
The campus would buy electricity from the grid when it's available and when it's cheap, and use this to charge the battery.
Electricity from the grid, the renewables and the diesel generator would feed the battery.
All consumption would happen from the battery. Users inside the campus would experience 100% uptime.
Electric cars and motorcycles could augment the battery capacity at the campus scale.
Cheap CPUs would give the intelligence required to seamlessly orchestrate this system, in real time, all the time.
As an example, the picture below is a pretty windmill, 3m diameter and 5m high, which has a nameplate rating of 6500 watts. The output would vary with the wind, but under normal circumstances in India, we might get average production of 1500 watts from this.
Sprinkling a few of these devices on a campus would be quite elegant. Here is another example, a device that is 1.5m wide, and costs 4000 Euro or Rs.260,000.
A large number of installations of this nature would change the elasticity of demand for electricity. When there is peaking load, and the price of electricity is high, these installations would switch to using their batteries. This would reduce the $H/L$ ratio and thus bring down the capital cost of the centralised electricity system.
A related development is taking place with rural mobile towers in India. These must grapple with the problem of intermittent electricity, and are starting to do distributed electricity generation for surrounding households. They are also pushing into new storage technologies.
Higher oil prices, ideally a carbon tax worldwide.
In industrial countries: continued government support for R&D and adoption of renewables and electric cars.
Continued worldwide scientific progress with batteries.
Sustained low interest rates, globally, for a long time.
Electricity policy in India which gives time-of-day pricing all the way to each household, and sets up an API through which a CPU at the household can query the price. Ideally, a mechanism for distributed producers to sell back to the grid at a cost which reflects the cost faced by the grid in delivering electricity at that location.
If these five things happen, then we're pretty much on our way to a new world of distributed generation and distributed storage, in India and in the world outside.
One interesting consequence of this scenario would be a sustained decline in crude oil prices. This would finally yield the outcome envisaged by Sheik Zaki Yamani who said in 1973: The stone age did not end because the world ran out of stones.
In an alternative scenario, these five things do not quite work out okay, and distributed generation + distributed storage do not shape up as the mainstream technology for every day use in the first world. However, this could still be compatible with the possibility that these are good technologies for a place like India where grid supply is untrusted and there is plenty of sunlight.
The malleability of a late starter. In the US, where the grid is well established, these new developments threaten the business model of the existing electricity industry where vast investments are already in place [example]. In India, high availability grid power has not yet come about; the grid is far from meeting the requirements of the people. Hence, there is greater malleability and an opportunity to change course, in a direction that favours decentralisation and reduced carbon.
Disrupting a broken system. If a lot of buyers in India defect from the excessive prices charged by the grid (owing to the cross subsidisation and theft), this will generate financial difficulties for the grid. As an example, see this submission in Maharashtra by the Prayas, Energy Group, and their response on the proposed amendments to the Electricity Act.
Allocative decisions for the capital that goes into distributed energy. On the scale of the country, capital would shift from centralised production of electricity to distributed production + distributed storage. In the Indian public policy environment, it's always better to have self-interested households and firms making distributed choices about capital expenditures, rather than capital being placed in the hands of regulated firms.
Industrial policy is not required. This article is not a call for industrial policy. We don't need to launch subsidy programs, or force car manufacturers to switch to electric, or force mobile phone towers to switch to renewables, etc. The Indian State has poor capacity on thinking and executing industrial policy. As a general principle, in public policy thinking in India, it's best to eschew industrial policy or planning, and just focus on getting the basics right.
No industrial policy was required in getting to the ubiquitous water tanks on every roof in India -- it came from private choices responding to the failures of public policy on water. The invisible hand is amply at work. Indian car manufacturers exported 542,000 cars in 2014-15. Hence, these firms have ample incentive to figure out electric cars. Unreliable and expensive electricity is giving ample incentive to customers to find better solutions. Indian software services and IT product companies have ample incentive to tune into this space, and build the software end of this emerging global environment. New technological possibilities will be rapidly taken up.
India should fix the grid. There are major economies of scale in making centralised electricity generation work. But we should see that we are coming at this from the opposite direction. In the West, we start from 100% centralised energy and will perhaps head towards 66% centralised energy. We in India may first overshoot to 40% centralised energy and then go up to 66% centralised energy through gradual improvements in public policy on centralised electricity.
Compare and contrast this with how we see water tanks on roofs. These water tanks are the physical manifestation of the failure of public policy in the field of water. When sound water utilities come up, they will do centralised production of 24 hour water pressure, and the water tanks will go away.
On one hand, the failures of public policy on electricity in India are exactly like the failures of public policy in water in India. Once it becomes possible to opt out of public systems to a greater extent, with generation and storage under the control of a campus, people will take to this. This will overshoot, going beyond what's technically sound. In the long run, when the policy frameworks on electricity become better, the share of centralised energy will go up. But there is good sense in distributed energy and it's not just a coping strategy. Even deep in the future, when policy failures are absent, there's a big role for distributed energy while there is no role for distributed water storage.
For an analogy, the wireless revolution came first to Indian telecom. But now that this is established, we know that laying fibre to the home is required in order to get good bandwidth. We will asymptotically end up converging on what's seen in the West; we'll just come at it from a different direction.
India has yet to reap the efficiencies of centralised generation, transmission and distribution. We need to end subsidies and combat theft. This is the slow process of improving policy frameworks in electricity. The main point of this article is that along the difficult journey to this destination, we'll first have an upsurge of Sintex water tanks on roofs.
Sound pricing rules are required. From an Indian public policy point of view, the key action point required is that moment to moment, supply and demand should clear, the spot price should fluctuate, buyers of electricity should be fully exposed to these fluctuating prices, and the spot price at all points of time should be made visible to each buyer through an API. This is not insuperably difficult. Even the present bad arrangement -- unpredictable grid outage where the price goes to $\infty$ -- is actually pushing private persons in the right direction.
The market failure: externalities. Knowledge spillovers benefit society at large, and self-interest favours under-investment in knowledge. In the face of this market failure (i.e. positive externalities), perhaps the government can fund a few research labs [example] so as to grow skills in this emerging landscape. See Rangan Banerjee's article at page 38 of the December 2014 issue of Energy Next which talks about renewables R&D and manufacturing in India. It would help if there was a large number of pilot projects which aim to build towards campus-scale adoption, so as to have a precise sense of how well things work, solve local problems, and diffuse knowledge.
The gap in knowledge in India on batteries is large. But it is feasible for India to get into manufacturing power units, solar cells, etc. We need to study the steps taken by Japan and China to build up their capabilities in this field.
The importance of the cost of capital. Renewables involve high capital cost and near-zero running cost. The use case is critically about the cost of capital. Successful inflation targeting, and capital account openness, will give lower rates of return for equity and debt, which is required for the adoption of these technologies.
There is no market failure in energy conservation. When customers are given high prices of electricity, they have ample incentive to adopt energy-efficient technologies. India is in good shape on pricing in some areas (electricity, petrol) though not in some others (kerosene, LPG). Once the price of energy is correct, the next price that shapes adoption of energy efficient technology is the cost of capital. The failures of monetary policy and finance in India are giving a high cost of capital. Once these are solved, there is no market failure in the adoption of demand side innovations. Low interest rates and low required rates of return on equity will shift the private sector calculation in favour of energy efficient technology.
If this scenario unfolds as described in India, there will be a loss of momentum in centralised energy, and sharp growth in distributed production and storage of energy.
Perhaps we will get a surge in imports of Lithium Ion batteries and slow growth of lead-acid battery production in India.
Brijesh Vyas helped me in understanding the issues and in getting the calculations right. He recommends that we read Linden's Handbook of batteries. I also thank Sanjay Arte, Ashwini Chitnis, Ashwin Gambhir, Sanjeev Gupta, Gopal Jain, Rajeev Kapoor and Anand Pai for useful discussions. All errors are, of course, mine.
Budget 2015 has been an important period for India's financial reforms. Indian Merchants Chamber, BSE and NIPFP had organised a meeting on these issues. From this show, here are two videos.
Both from our youtube channel.
Externalities -- e.g. a factory that pollutes.
Asymmetric information -- e.g. safety in food or medicines.
Public goods -- e.g. law and order.
In the standard approach we weigh market failures against the problems of obtaining effective State intervention. The barrier is `public choice theory': the State is not benevolent. The citizen is the principal, the State is the agent, and there is a principal-agent problem. It is not easy to obtain performance when setting up real world arrangements that will sally forth, intervene in the working of the economy, and address market failures. This limits the class of situations where intervention may be appropriate. In a world with a perfect and benevolent State, we'd do a lot more in terms of going after market failures. In India, where State capacity is low, we are very selective and only do a few things.
Alex Tabarrok and Tyler Cowen look at the proliferation of methods to create, store and transmit information, and say that there is an increased class of situations where asymmetric information has been conquered, thus reducing the extent of market failures.
Improved access to information also reduces the public choice problem. More information about the activities of politicians and bureaucrats is available to the citizen, which reduces the principal-agent problem.
It's a good article. I felt the case was a bit overstated. E.g. reputation measures on the Internet help people see more about you, and that's good in some settings (e.g. you know something about the Uber driver), but it's a small change in a vast gulf of lack of information. E.g. the authors say : Many public choice problems are really problems of asymmetric information. I don't agree. Yes, more information will help, but the principal-agent problem between citizen and State is vast and complicated. Merely monitoring some of the activities of civil servants better does not solve the public choice problem.
There is vast asymmetric information in the relationship between an employer and an employee in most complex work places, and nothing has changed which will make a dent to this. We can think of numerous other situations where the asymmetric information has not changed.
More or less government intervention?
The flavour of the article is that with less asymmetric information, you'd need less State. Yes, that is true, but it's also the case that with less of a public choice problem, you could use more State. In thinking about public policy, we are constantly watching the market failure that's worth addressing versus our ability to construct a State apparatus that would actually deliver the goods in trying to address this market failure. The new age of improved information cuts both ways: it reduces the places where we might want a State intervention, and it increases the class of places where we could pull off a successful State intervention.
In this new age of easy capture, storage and transmission of information, I have felt that there is a new kind of State intervention: one which rearranges the information set. The State can use its coercive power to force certain kind of information to be captured or released or transmitted. This is a beautiful intervention that directly addresses the root cause of the market failure, the lack of information.
As an example, in the old analysis of insurance, there were some good drivers and some bad drivers, but the insurance company did not know who fell into what category. There was adverse selection (bad drivers were more likely to sign up for insurance) which led to high prices of insurance and many good drivers got insurance which was not actuarially fair.
This is the standard description of the market failure, in the textbooks. We can now think of a new kind of State intervention: The government forces cars to be equipped with devices that measure how the person drives. This is an intervention that directly stabs at the asymmetric information.
Here is another example which nicely illustrates an information intervention. We have been working in the field of regulation of warehouses. There is an asymmetric information problem: I submit my goods for storage at the warehouse, but I don't know how much care will be exercised by the warehouseman. There are old style interventions which reduce this information asymmetry.
In some situations, we can directly attack the information asymmetry. As an example, consider frozen food. When you deposit 1000 kilograms of cheese into a cold storage, you worry that the warehouseman will not maintain the temperature at precisely 4 degrees. But now there are low cost devices that will measure the temperature every minute and thus tell you what your cheese experienced.
Now the customer and the warehouseman can enter into a private contract where the temperature of the cheese is monitored, and a set of payoffs calculated based on the extent to which the temperature of the cheese strays above 4 degrees. Good warehousemen would think: Why don't I release this information, so that prospective customers would trust me? The trouble is: this data could be tampered with, or data could be selectively released.
This suggests the design of a government intervention: The government could establish an inspection mechanism which ensures truthful release of comprehensive data about all transactions by the warehouse. This is a combination of the new age of devices (the data logger) plus a dose of State intervention (to ensure truthful and complete data release). We could also envision a valuable State intervention that standardises the XML files which are put out by all warehousemen, which would reduce the cost of processing this data for customers.
This is an example of what I call an `information intervention', which rearranges the structure of information, and thus combats the market failure that's rooted in asymmetric information.
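To make the contract mechanics concrete, here is a purely illustrative sketch of how payoffs could be computed from the logged temperature data. The function name, threshold and penalty rate below are made-up placeholders, not a real scheme:

```python
def excursion_penalty(readings, limit_c=4.0, rate_per_degree_minute=1.0):
    """Toy penalty rule: charge for each degree-minute that the logged
    temperature spent above the agreed limit.
    `readings` is a list of (minutes, temp_c) pairs from the data logger."""
    penalty = 0.0
    for minutes, temp_c in readings:
        if temp_c > limit_c:
            penalty += (temp_c - limit_c) * minutes * rate_per_degree_minute
    return penalty

# Example: an hour of one-minute readings, mostly at 4 C,
# with a 10-minute excursion to 6 C.
log = [(1, 4.0)] * 50 + [(1, 6.0)] * 10
print(excursion_penalty(log))  # 20.0 penalty units
```

Once the data release is truthful and standardised, a rule of this kind can be settled mechanically by both parties without dispute.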
I have a column in the Indian Express today on how Finance SEZs can matter.
Some people like a shirt if it's a good shirt. Some people are obsessed with the brand name on the shirt. To some extent, this could be rational: I know nothing about cars, but I've had good experiences with cars by Toyota in the past, so I have a bias in favour of cars by Toyota. And yet, why is it that in some places and some times, brand names matter more? The simplest idea seems to be one of exposure. If the stakes are very small, I'll go by a brand name, but if not, then it makes sense to see through the brand name to the underlying reality.
Similar problems are found in the academic version of the pursuit of brand names: connections with the top universities, and publications in the top journals. Research ought to be about following your curiosity, pursuing important questions, getting novel and persuasive answers, and doing research that matters. As I wrote in this post on Indian economics, the process of recruiting and promoting researchers in India has become centred on the filtering by North American editors and referees. This chase for external brand names is exerting a corrosive effect upon the Indian academic profession. Managers of research have absolved themselves of their responsibility to judge who is a good researcher. At too many places, it's turned into a stultifying chase for brand names.
A variation of the brand name problem is the `great man syndrome'. A person scores wins in field X, and starts talking about field Y, and is able to command credibility on field Y even though the actual knowledge on that field is low.
A person is a set of brand names (where have I studied, what organisations I have worked in, what journals I have published in), a set of capabilities (character, values, ethics, knowledge) and a set of outcomes (what have I done in life). While it appears obvious that the capabilities and outcomes should matter more, some people care more about the brand names. Why is it that in some places and some times, brand names matter more?
In an ideal world, we start with a young person of age 21 and we know little about the things that matter -- emotional endurance, values, ethics, knowledge, character. So we judge the person by the brand names: "She is an NTS scholar so she must be very smart". (I am showing my age; I believe KVPY is now the most elite club in India). As the person grows up, we can increasingly switch gears from the brand names to the person. What matters in the 20s is the brand names; in the 30s it's personality, and after that it's character.
Why might brand names matter disproportionately in India?
Let's use the notation B for the brand name, D for the data that we observe, and j for our judgement about a person. We start with a very flat prior P(j); we know very little. Now we get the minimal information packet -- the brand name. Under Bayesian learning, we should update P(j | B) = P(B | j) P(j) / P(B). Here we seem to do a lot of learning; for this to be good Bayesian behaviour, P(B|j)/P(B) would have to be a big number. And then we observe facts about the person, D. We update P(j | D) = P(D | j) P(j) / P(D). Here, we seem to do little learning; for this to be good Bayesian behaviour, P(D|j)/P(D) would have to be a small number.
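A toy numerical illustration of this asymmetry (all numbers invented for illustration): a rational updater who sees sharp data D should move much more than one who only sees the noisy signal B, which is the opposite of the behaviour described above.

```python
# Two hypotheses about a candidate: j = "strong" or j = "weak".
# Start from a flat prior, then update on B (has the brand name)
# and on D (observed work output), using Bayes' rule.
prior = {"strong": 0.5, "weak": 0.5}

# Assumed likelihoods (illustrative): the brand name is a noisy signal,
# the observed work is a much sharper one.
p_B = {"strong": 0.6, "weak": 0.4}   # P(B | j)
p_D = {"strong": 0.9, "weak": 0.2}   # P(D | j)

def update(prior, likelihood):
    """One step of Bayesian updating: multiply by the likelihood and renormalise."""
    unnorm = {j: likelihood[j] * p for j, p in prior.items()}
    z = sum(unnorm.values())
    return {j: v / z for j, v in unnorm.items()}

after_brand = update(prior, p_B)        # {'strong': 0.6, 'weak': 0.4} -- a small shift
after_data = update(after_brand, p_D)   # {'strong': ~0.87, 'weak': ~0.13} -- a big shift
print(after_brand, after_data)
```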
One could say: "This has nothing to do with India; this is just `confirmation bias', a well known bug in human decision making. We are not rational Bayesian updaters, we overweight the prior and do not attach enough weight to the data. Get used to it, this problem is everywhere." There is something to this argument. However, there does seem to be something going on: e.g. people in China seem to care more about the brand name on clothes; people in India seem to care more about the brand names on the resume. Maybe humans are the same everywhere, but are there some features of the stochastic environment which make things different across space and time?
In the West, the environment is stable. Trend GDP growth is 2.5%, which yields a doubling every 27 years. From age 20 to age 60, a person experiences a change from 100 to 268. This is a relaxed pace of change where people get satisfied with doing a few small steps, and can comprehend what is going on. In India, we are in an environment of far more hectic change. Trend GDP growth is 6.5%, which yields a doubling every 11 years. From age 20 to age 60, a person experiences a change from 100 to 1241. This is an environment of big decisions and big consequences; this is not a comfortable locale for the cautious climber of career ladders. It is far more difficult to figure out what is going on. This is a very noisy environment. Could this high volatility generate greater conservatism i.e. inadequate updating? E.g. in money management, great returns can be owing to dumb luck, and this can happen more in a high volatility environment. In a high volatility environment, a money manager who generated high returns is less likely to have intrinsic skill -- data about performance is less informative.
Inequality of knowledge could also be an issue. If I know nothing about cars, I will just fall back on brand names. When journalists know less, they will fall back on brand names -- they will think that the IIT guy must be right. If the seniors in decision making roles (who judge young people for promotions or appointments) know very little, there is a greater temptation to fall back on brand names and hire the IIT guy. Critical thinking on the part of person i about person j requires a low gap in knowledge between the two. A greater use of brand names may be inevitable in an environment of high inequality of knowledge. By this logic, the use of brand names in the world of business should be lower as the results (profit, measured in rupees) are visible for all to see.
Principal-Agent problems are at work. Nobody ever got fired for hiring the IIT guy. When faced with the prospect of failure, the Agent seeks deniability by purchasing the brand name.
The IIT guy may feel he has arrived. He may work less hard. He may take less risk. He doesn't have to score wins; he just has to be good enough and make it into his next job.
In academics, the research trajectory of myriad researchers is distorted by chasing brand names. A lot of people would use their lives much better if only they would dig in and research reality. This misdirection of effort results in waste. Similar things can be said all across the labour market, but it's particularly bad in academics. In the non-academic part of the world, the brand names fade away more rapidly as the person grows up.
In the US, it seems that the price paid for a brand name education is hard to justify in terms of the improvements that flow in a causal sense from that education.
One bizarre thing that I often see is an exaggerated cynicism. It's claimed that we're all clueless and ignorant and wrong. This is elaborately packaged as humility -- let's be careful to not think that thinking helps. The hidden subtext is: "Thinking is pointless, so let's just leave it to the IIT guy". By deprecating logic, we hand it over to the brand name.
The pervasive obsession with brand names has left undiscovered assets for me. I take effort to see the person rather than the brand name, and find hidden geniuses who are shunned by a brand-conscious establishment. Many heroes of the Macro/Finance Group at NIPFP fit this description. In this `security selection' process, it is relatively easy to shrug aside the brand names, but it is harder to look beyond personality and peer into character.
I find some of the most impressive people in leadership roles in India are those who got there without brand names. This may be similar to what's being conjectured about women CEOs: It is so hard for a woman to become a CEO, she's got to be really good.
Indian Merchants Chamber, BSE and NIPFP have organised a meeting at 4:30 PM tomorrow.
Tamal Bandyopadhyay in Mint, and MC Govardhana Rangan in the Economic Times, worry about the lack of effectiveness of monetary policy.
RBI officials have hinted at taking `tough actions' if banks do not respond to changes in the policy rate by RBI. This seems to be a bit odd to me. In a well functioning economy, changes in the policy rate should propagate out through a market process and not central planning. If that market process is not working out okay, this calls for reforms of the underlying problems and not more central planning.
The bond market: When the central bank changes the policy rate, the entire yield curve changes through yield curve arbitrage. This propagates all through the Bond-Currency-Derivatives Nexus. It impacts upon the exchange rate as currencies and bonds are tightly interlinked. It impacts upon the corporate bond market at all maturities as the corporate bond market is priced off the risk yield curve.
The banking system: When the central bank changes the policy rate, competition between banks forces changes in lending and borrowing rates.
The exchange rate: When the central bank changes the policy rate, this impacts upon capital flows and particularly debt flows. This changes the exchange rate. E.g. when we cut rates, less money comes in, which gives a rupee depreciation, which is expansionary.
RBI has failed on bond market development for 25 years. This has damaged the monetary policy transmission through the Bond-Currency-Derivatives Nexus.
RBI gives out two banking licenses every decade, and blocks foreign banks, so there is a lack of competition in banking. This has damaged the monetary policy transmission through the banking system: changes in the policy rate do not impact upon the rates at which banks borrow and lend.
RBI has blocked debt capital flows. This has damaged the monetary policy transmission through the exchange rate.
RBI has emphasised exchange rate objectives. This has damaged the monetary policy transmission through the exchange rate. Things have become much worse on this count after 2013.
These four elements of RBI strategy have made RBI ineffective as a central bank. The journey to a strong and effective RBI lies in changing course on these four questions.
These four elements of RBI strategy are the barriers to make inflation targeting work. It is one thing to sign a Monetary Policy Framework Agreement, it is another to actually succeed in delivering the goods. Until the quality of economic thinking at RBI improves, we will ricochet from failure to failure.
It is important to see the triad of the recent reforms as tightly interconnected. Setting up the PDMA is important as it takes away a key conflict of interest, and leaves RBI free to focus on the inflation objective. Shifting bond market regulation to SEBI is important as it gives RBI the monetary policy transmission. These two moves are integral to inflation targeting. The people who argue against these reforms are those who are perpetuating a weak and ineffective RBI. The Ministry of Finance has been kind to RBI by doing three things, as opposed to only doing the Agreement.
The RBI is now 80 years old and faces existential questions. All these years, RBI staff could mumble some mumbo jumbo, and get away with it, as most people could not understand the mistakes in thinking. Now RBI is accountable for delivering on CPI inflation, where the target and the performance are three simple numbers. This is a whole new game. If financial sector reforms are now not undertaken, failure will be visible in public.
Why is the adjugate of a $2\times 2$ matrix different from that of $3\times 3$ and larger matrices? The cofactors of a $2\times 2$-matrix are $\pm$ determinants of $1 \times 1$-matrices, and the determinant of a $1\times 1$-matrix is its unique entry. – darij grinberg. Try calculating the cofactor matrix of a $2 \times 2$ matrix.
Your task is to determine for each node the maximum distance to another node.
The first input line contains an integer $n$: the number of nodes. The nodes are numbered $1,2,\ldots,n$.
Print $n$ integers: for each node $1,2,\ldots,n$, the maximum distance to another node.
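A standard approach (a sketch, not part of the problem statement; it assumes the remaining $n-1$ input lines each describe an edge $a\ b$ of the tree, as in the usual version of this task): the farthest node from any node is always an endpoint of a diameter, so three breadth-first searches suffice.

```python
import sys
from collections import deque

def bfs(start, adj, n):
    """Return distances from `start` to every node (1-indexed)."""
    dist = [-1] * (n + 1)
    dist[start] = 0
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    adj = [[] for _ in range(n + 1)]
    idx = 1
    for _ in range(n - 1):
        a, b = int(data[idx]), int(data[idx + 1])
        idx += 2
        adj[a].append(b)
        adj[b].append(a)
    # Find the two endpoints of a diameter: the farthest node from node 1,
    # then the farthest node from that endpoint.
    d1 = bfs(1, adj, n)
    a = max(range(1, n + 1), key=lambda v: d1[v])
    da = bfs(a, adj, n)
    b = max(range(1, n + 1), key=lambda v: da[v])
    db = bfs(b, adj, n)
    # For every node, the maximum distance is attained at one of the two endpoints.
    print(" ".join(str(max(da[v], db[v])) for v in range(1, n + 1)))

if __name__ == "__main__":
    main()
```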
A zero has a "multiplicity", which refers to the number of times that its associated factor appears in the polynomial. For instance, the quadratic ( x + 3)( x � 2) �... 12/01/2013�� Find the zeros of the function. State the multiplicity of multiple zeros. y = (x - 4)2 A. x = 0 multiplicity 4 B. x = 4 multiplicity 2 C. x = -4 multiplicity 3 D. x = 4 multiplicity 3 3. A multiple zero has a multiplicity equal to the number of times the zero occurs. true or false?
In general, the algebraic multiplicity and geometric multiplicity of an eigenvalue can differ. However, the geometric multiplicity can never exceed the algebraic multiplicity . It is a fact that summing up the algebraic multiplicities of all the eigenvalues of an \(n \times n\) matrix \(A\) gives exactly \(n\).
The number of possible orientations, calculated as $2S+1$, of the spin angular momentum corresponding to a given total spin quantum number ($S$), for the same spatial electronic wavefunction. A state of singlet multiplicity has $S=0$ and $2S+1=1$.
The number line below shows all the zeroes of $\,P\,$. Recall that zeroes are the only type of place where a polynomial can change its sign, since there are no breaks in its graph. The interval highlighted in yellow doesn't contain any zero except the one under consideration.
Multiplicity. How many times a particular number is a zero for a given polynomial. For example, in the polynomial function $f(x) = (x - 3)^4 (x - 5)(x - 8)^2$, the zero 3 has multiplicity 4, 5 has multiplicity 1, and 8 has multiplicity 2.
After the dry, algebraic discussion of the previous section it is a relief to finally be able to compute some variances.
We say that the variance of the sum is the sum of all the variances and all the covariances.
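In symbols, writing $S_n = X_1 + X_2 + \cdots + X_n$ for the sum,
$$Var(S_n) ~=~ \sum_{i=1}^n Var(X_i) ~+~ \mathop{\sum\sum}_{1 \le i \ne j \le n} Cov(X_i, X_j).$$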
If $X_1, X_2 \ldots , X_n$ are independent, then all the covariance terms in the formula above are 0.
When the random variables are i.i.d., this simplifies even further.
Let $X_1, X_2, \ldots, X_n$ be i.i.d., each with mean $\mu$ and $SD$ $\sigma$. You can think of $X_1, X_2, \ldots, X_n$ as draws at random with replacement from a population, or the results of independent replications of the same experiment.
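In that case,
$$E(S_n) ~=~ n\mu, \qquad Var(S_n) ~=~ n\sigma^2, \qquad SD(S_n) ~=~ \sqrt{n}\,\sigma.$$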
This implies that as the sample size $n$ increases, the distribution of the sum $S_n$ shifts to the right and is more spread out.
Here is one of the most important applications of these results.
Here is the distribution of $X$. You can see that there is almost no probability outside the range $E(X) \pm 3SD(X)$.
This model describes the scattering from polymer chains subject to excluded volume effects and has been used as a template for describing mass fractals.
where $\nu$ is the excluded volume parameter (which is related to the Porod exponent $m$ as $\nu=1/m$ ), $a$ is the statistical segment length of the polymer chain, and $n$ is the degree of polymerization.
**SasView implements the 1993 expression**.
.. note:: This model applies only in the mass fractal range (i.e., $5/3 \le m \le 3$ ) and **does not apply** to surface fractals ( $3 < m \le 4$ ). It also does not reproduce the rigid rod limit ($m=1$) because it assumes chain flexibility from the outset. It may cover a portion of the semi-flexible chain range ( $1 < m < 5/3$ ).
Here $\Gamma(x) = \gamma(x,\infty)$ is the gamma function.
The special case when $\nu=0.5$ (or $m=1/\nu=2$ ) corresponds to Gaussian chains for which the form factor is given by the familiar Debye function.
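As a numerical illustration, here is a sketch written from the commonly quoted form of the 1993 expression, with $U = Q^2 a^2 n^{2\nu}/6$. This is not the SasView source code, and the parameter values below are arbitrary:

```python
import numpy as np
from scipy.special import gamma, gammainc

def lower_inc_gamma(a, x):
    """Lower incomplete gamma function gamma(a, x); scipy's gammainc is the
    regularized version, so multiply back by Gamma(a)."""
    return gammainc(a, x) * gamma(a)

def p_excluded_volume(q, a_seg, n_deg, nu):
    """Form factor P(q) of the excluded-volume chain expression.
    q: scattering vector, a_seg: statistical segment length,
    n_deg: degree of polymerization, nu: excluded volume parameter (= 1/m)."""
    u = q**2 * a_seg**2 * n_deg**(2 * nu) / 6.0
    return (lower_inc_gamma(1.0 / (2 * nu), u) / (nu * u**(1.0 / (2 * nu)))
            - lower_inc_gamma(1.0 / nu, u) / (nu * u**(1.0 / nu)))

q = np.logspace(-3, 0, 5)
print(p_excluded_volume(q, a_seg=60.0, n_deg=1000, nu=0.5))  # Debye-like for nu = 0.5
```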
An algebra $A$ with involution (cf. Involution algebra) over the field of complex numbers, equipped with a non-degenerate scalar product $(|)$, for which the following axioms are satisfied: 1) $(x|y)=(y^*|x^*)$ for all $x,y\in A$; 2) $(xy|z)=(y|x^*z)$ for all $x,y,z\in A$; 3) for all $x\in A$ the mapping $y\to xy$ of $A$ into $A$ is continuous; and 4) the set of elements of the form $xy$, $x,y\in A$, is everywhere dense in $A$. Examples of Hilbert algebras include the algebras $L_2(G)$ (with respect to convolution), where $G$ is a compact topological group, and the algebra of Hilbert–Schmidt operators (cf. Hilbert–Schmidt operator) on a given Hilbert space.
Let $A$ be a Hilbert algebra, let $H$ be the Hilbert space completion of $A$ and let $U_x$ and $V_x$ be the elements of the algebra of bounded linear operators on $H$ which are the continuous extensions of the multiplications from the left and from the right by $x$ in $A$. The mapping $x\to U_x$ (respectively, $x\to V_x$) is a non-degenerate representation of $A$ (respectively, of the opposite algebra), on $H$. The weak closure of the family of operators $U_x$ (respectively, $V_x$) is a von Neumann algebra in $H$; it is called the left (respectively, right) von Neumann algebra of the given Hilbert algebra $A$ and is denoted by $U(A)$ (respectively, $V(A)$); $U(A)$ and $V(A)$ are mutual commutators; they are semi-finite von Neumann algebras. Any Hilbert algebra unambiguously determines some specific normal semi-finite trace on the von Neumann algebra $U(A)$ (cf. Trace on a $C^*$-algebra). Conversely, if a von Neumann algebra $\mathfrak A$ and a specific semi-finite trace on $\mathfrak A$ are given, then it is possible to construct a Hilbert algebra such that the left von Neumann algebra of this Hilbert algebra is isomorphic to $\mathfrak A$ and the trace determined by the Hilbert algebra on $\mathfrak A$ coincides with the initial one . Thus, a Hilbert algebra is a means of studying semi-finite von Neumann algebras and traces on them; a certain extension of the concept of a Hilbert algebra makes it possible to study by similar means von Neumann algebras that are not necessarily semi-finite .
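For instance, for the Hilbert–Schmidt example one may check the first two axioms directly, using the scalar product $(x|y)=\operatorname{Tr}(y^*x)$, the adjoint as involution and the cyclicity of the trace:
$$(x|y)=\operatorname{Tr}(y^*x)=\operatorname{Tr}(xy^*)=\operatorname{Tr}\bigl((x^*)^*y^*\bigr)=(y^*|x^*),\qquad (xy|z)=\operatorname{Tr}(z^*xy)=\operatorname{Tr}\bigl((x^*z)^*y\bigr)=(y|x^*z).$$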
A right stochastic matrix is a nonnegative square matrix with each row summing to 1. A left stochastic matrix is a nonnegative square matrix with each column summing to 1. A doubly stochastic matrix is both right stochastic and left stochastic.
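A small numerical illustration of these definitions (the matrix below is made up; the last check also bears on the eigenvalue question raised further down):

```python
import numpy as np

# A right stochastic matrix: nonnegative entries, each row sums to 1.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)  # right stochastic
print(np.allclose(P.sum(axis=0), 1.0))                     # False: not doubly stochastic

# Row sums equal to 1 mean P @ ones = ones, so 1 is always an eigenvalue
# of a right stochastic matrix, with the all-ones vector as eigenvector.
ones = np.ones(3)
print(np.allclose(P @ ones, ones))                          # True
```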
If $A$ and $B$ are transition matrices such that $||A-B|| < c$, then what can we say about $||A^n-B^n||$ for a given $n$?
Does always a $B$ with orthonormal rows/column be found so that $BP=0$?
Is the tensor product of two column/row stochastic matrix is again a column/row stochastic? Thanks for helping.
Is the Birkhoff–von Neumann theorem true for infinite matrices?
Examples of stochastic matrices that are also unitary?
When are the inverses of stochastic matrices also stochastic matrices?
What are Markov matrices and can they be used to model migration?
Why is the eigenvalue of a stochastic matrix always $1$? I have found lots of articles simply saying it is obvious that the eigenvalue is $1$ but can't get my head around the proofs.
Is the convex combination of stochastic matrices also stochastic?
For a doubly stochastic matrix ($n \times n$) that is a linear combination of $N$ permutation matrices, how do we prove that $N = (n - 1)^2 + 1$ suffices?
Preposition about the Entries of the Product of Markov Matrices.
Do the matrices representing Markov chains need to be square?
Birkhoff-Neumann like result for stochastic matrices?
Is it true that for any square row-stochastic matrix one of the eigenvalues is $1$?
No solutions to a matrix inequality?
A subgroup of $S_\infty$, the group of all permutations of the natural numbers, is said to be cofinitary, if all of its non-identity elements have only finitely many fixed points. In 1988 Adeleke showed that every countable cofinitary group is a proper subgroup of a highly transitive cofinitary group. Thus in particular maximal cofinitary groups are uncountable, which initiated the study of the spectrum of possible (infinite) sizes of maximal cofinitary groups.
In addition, we will consider the relationship between cofinitary groups and other combinatorial objects on the real line. We will give an outline of a result of Brendle, Spinas and Zhang, which states that the minimal size of a maximal cofinitary group is not smaller than the minimal size of a non-meager set. The result implies an important distinction between maximal cofinitary groups and some of its close combinatorial relatives.
We will conclude with a brief discussion of open questions.
Daily, M. (2007). Proof of the double bubble curvature conjecture. Journal of Geometric Analysis, 17(1), 75-86.
Abstract: An area minimizing double bubble in $\mathbb R^n$ is given by two (not necessarily connected) regions which have two prescribed $n$-dimensional volumes whose combined boundary has least $(n\!-\!1)$-dimensional area. The double bubble theorem states that such an area minimizer is necessarily given by a standard double bubble, composed of three spherical caps. This has now been proven for $n=2,3,4$, but is, for general volumes, unknown for $n\ge 5$. Here, for arbitrary $n$, we prove a conjectured lower bound on the mean curvature of a standard double bubble. This provides an alternative line of reasoning for part of the proof of the double bubble theorem in $\mathbb R^3$, as well as some new component bounds in $\mathbb R^n$.
Can you make a $3\times 3$ cube with these shapes made from small cubes?
You can record your answer as either pictures or as a net of the cube.
Question 1 In a seminar, the number of participants in Hindi, English and Mathematics are 60, 84 and 108, respectively. Find the minimum number of rooms required if in each room the same number of participants are to be seated and all of them must be in the same subject.
Question 2 If the HCF of 657 and 963 is expressible in the form $657x + 963 \times (-15)$, find x.
Question 3 Find the greatest number which divides 285 and 1249 leaving remainders 9 and 7 respectively.
Question 4 Find the greatest number that will divide 445, 572 and 699 leaving remainders 4, 5 and 6 respectively.
Question 5 Find the greatest number which divides 2011 and 2623 leaving remainder 9 and 5 respectively.
Question 6 Find the smallest number which leaves remainders 8 and 12 when divided by 28 and 32 respectively.
Question 7 Find the smallest number which when increased by 17 is exactly divisible by both 520 and 468.
Question 8 A circular field has a circumference of 360km. Three cyclists start together and can cycle 48, 60 and 72 km a day, round the field. When will they meet again?
Question 9 If the sum of LCM and HCF of two numbers is 1260 and their LCM is 900 more than their HCF, then the product of two numbers is.
Question 10 The HCF of two numbers is 145 and their LCM is 2175. If one number is 725, find the other.
Question 14 If 3 is the least prime factor of number a and 7 is the least prime factor of numbers b, then the least prime factor of a + b, is.
Odd number + odd number = even number.
Question 16 Two tankers contain 583 litres and 242 litres of petrol respectively. A container with maximum capacity is used which can measure the petrol of either tanker in an exact number of litres. How many containers of petrol are there in the first tanker?
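As an illustration of the method behind Questions 3-5 (a worked sketch added for clarity, not part of the original question set): subtract each remainder first, then take the HCF of what is left.

```python
from math import gcd

# Question 3: greatest number dividing 285 and 1249 leaving remainders 9 and 7.
# Any such divisor must divide 285 - 9 = 276 and 1249 - 7 = 1242 exactly,
# so the answer is gcd(276, 1242).
print(gcd(285 - 9, 1249 - 7))  # 138

# Question 4: the same idea with three numbers.
print(gcd(gcd(445 - 4, 572 - 5), 699 - 6))  # 63
```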
Abstract: Let $E^0$ be a holomorphic vector bundle over $\mathbb P^1(\mathbb C)$ and $\nabla^0$ be a meromorphic connection of $E^0$. We introduce the notion of an integrable connection that describes the movement of the poles of $\nabla^0$ in the complex plane with integrability preserved. We show that such a deformation exists under sufficiently weak conditions on the deformation space. We also show that if the vector bundle $E^0$ is trivial, then the solutions of the corresponding nonlinear equations extend meromorphically to the deformation space.
Abstract: Two-qubit X-matrices have been the subject of considerable recent attention, as they lend themselves more readily to analytical investigations than two-qubit density matrices of arbitrary nature. Here, we maximally exploit this relative ease of analysis to formally derive an exhaustive collection of results pertaining to the separability probabilities of generalized two-qubit X-matrices endowed with Hilbert-Schmidt and, more broadly, induced measures. Further, the analytical results obtained exhibit interesting parallels to corresponding earlier (but, contrastingly, not yet fully rigorous) results for general 2-qubit states--deduced on the basis of determinantal moment formulas. Geometric interpretations can be given to arbitrary positive values of the random-matrix Dyson-index-like parameter $\alpha$ employed.
Luis A. Caffarelli, YanYan Li. Preface. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): i-ii. doi: 10.3934/dcds.2010.28.2i.
Gershon Kresin, Vladimir Maz'ya. Optimal estimates for the gradient of harmonic functions in the multidimensional half-space. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 425-440. doi: 10.3934/dcds.2010.28.425.
Changfeng Gui, Huaiyu Jian, Hongjie Ju. Properties of translating solutions to mean curvature flow. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 441-453. doi: 10.3934/dcds.2010.28.441.
Martin Schechter. Monotonicity methods for infinite dimensional sandwich systems. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 455-468. doi: 10.3934/dcds.2010.28.455.
Carlo Mercuri, Michel Willem. A global compactness result for the p-Laplacian involving critical nonlinearities. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 469-493. doi: 10.3934/dcds.2010.28.469.
Martin Fraas, David Krejčiřík, Yehuda Pinchover. On some strong ratio limit theorems for heat kernels. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 495-509. doi: 10.3934/dcds.2010.28.495.
Giovanni Bonfanti, Arrigo Cellina. The validity of the Euler-Lagrange equation. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 511-517. doi: 10.3934/dcds.2010.28.511.
Mariano Giaquinta, Paolo Maria Mariano, Giuseppe Modica. A variational problem in the mechanics of complex materials. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 519-537. doi: 10.3934/dcds.2010.28.519.
Italo Capuzzo Dolcetta, Antonio Vitolo. Glaeser's type gradient estimates for non-negative solutions of fully nonlinear elliptic equations. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 539-557. doi: 10.3934/dcds.2010.28.539.
Alessio Figalli, Young-Heon Kim. Partial regularity of Brenier solutions of the Monge-Ampère equation. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 559-565. doi: 10.3934/dcds.2010.28.559.
Joel Spruck, Yisong Yang. Charged cosmological dust solutions of the coupled Einstein and Maxwell equations. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 567-589. doi: 10.3934/dcds.2010.28.567.
Luigi Ambrosio, Michele Miranda jr., Diego Pallara. Sets with finite perimeter in Wiener spaces, perimeter measure and boundary rectifiability. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 591-606. doi: 10.3934/dcds.2010.28.591.
Zheng-Chao Han, YanYan Li. On the local solvability of the Nirenberg problem on $\mathbb S^2$. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 607-615. doi: 10.3934/dcds.2010.28.607.
Bernard Helffer, Thomas Hoffmann-Ostenhof, Susanna Terracini. Nodal minimal partitions in dimension $3$. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 617-635. doi: 10.3934/dcds.2010.28.617.
Cristian Bereanu, Petru Jebelean, Jean Mawhin. Radial solutions for Neumann problems with $\phi$-Laplacians and pendulum-like nonlinearities. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 637-648. doi: 10.3934/dcds.2010.28.637.
Alfonso Castro, Benjamin Preskill. Existence of solutions for a semilinear wave equation with non-monotone nonlinearity. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 649-658. doi: 10.3934/dcds.2010.28.649.
Sun-Yung Alice Chang, Yu Yuan. A Liouville problem for the Sigma-2 equation. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 659-664. doi: 10.3934/dcds.2010.28.659.
Helmut Hofer, Kris Wysocki, Eduard Zehnder. Sc-smoothness, retractions and new models for smooth spaces. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 665-788. doi: 10.3934/dcds.2010.28.665.
Baojun Bian, Pengfei Guan. A structural condition for microscopic convexity principle. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 789-807. doi: 10.3934/dcds.2010.28.789.
Kung-Ching Chang, Zhi-Qiang Wang, Tan Zhang. On a new index theory and non semi-trivial solutions for elliptic systems. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 809-826. doi: 10.3934/dcds.2010.28.809.
Giovanna Cerami, Riccardo Molle. On some Schrödinger equations with non regular potential at infinity. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 827-844. doi: 10.3934/dcds.2010.28.827.
Isabeau Birindelli, Stefania Patrizi. A Neumann eigenvalue problem for fully nonlinear operators. Discrete & Continuous Dynamical Systems - A, 2010, 28(2): 845-863. doi: 10.3934/dcds.2010.28.845.
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
Now, the GCD (greatest common divisor) of $3$ & $5$ is $1$, so the LCM (least common multiple) should be $3\times 5 = 15$.
This means every number that is divisible by $15$ was counted twice, and it should be counted only once. Because of this, you have an extra set of numbers starting with $15$ and going all the way up to $990$ that has to be removed from (b)&(c).
Simple but very fun problem.
The sum of the first n numbers 1+2+3+4+...+n is n(n+1)/2.
The sum of the first n multiples of k, say k+2k+3k+4k+...+nk, must be kn(n+1)/2.
Now you can just put these ingredients together to solve the problem.
To find n use 999/3 = 333 (ignoring the remainder), 999/5 = 199 (ignoring the remainder), 999/15 = 66 (ignoring the remainder), and then sum multiples of 3: $3\cdot 333(333+1)/2 = 166833$, multiples of 5: $5\cdot 199(199+1)/2 = 99500$, and subtract multiples of 15: $15\cdot 66(66+1)/2 = 33165$, to get 233168. (Note that 1000 itself is a multiple of 5 but is not below 1000, so n = 199 rather than 200.)
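As a sanity check, here is the same inclusion-exclusion written as a small Python sketch (my own, not from the original answer):

    # sum of the multiples of k strictly below `limit`, using k * n(n+1)/2 with n = (limit-1)//k
    def sum_of_multiples(k, limit):
        n = (limit - 1) // k
        return k * n * (n + 1) // 2

    def solve(limit=1000):
        # count multiples of 3 and of 5, then remove the multiples of 15 counted twice
        return (sum_of_multiples(3, limit)
                + sum_of_multiples(5, limit)
                - sum_of_multiples(15, limit))

    assert solve(10) == 23   # matches the worked example for numbers below 10
    print(solve(1000))       # 233168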
Well, the main equation is already given above. The only thing which gave me trouble is why I have to subtract the sum of the multiples of 15. Well, the answer is that 15 can be evenly divided by both 3 & 5. So the multiples of 15 can also be divided by those numbers as well! So, when you add the numbers with the Sum Of Three & Sum Of Five, there are some numbers (i.e. 15, 30, 45, 60, ...) which appear in both summations. So, you have to subtract them once from the total sum to get the answer!
Hope this helps someone like me:) !!
21 What are examples of D-modules that I should have in mind while learning the theory?
16 Why are elliptic curves important for elementary number theory?
14 What is the motivation behind the characteristic variety of a D-module and what does it's geometry tell me about the D-module?
14 Where am I suppose to actually learn how to compute hypercohomology?
9 What is integration along the fibers in D-module theory?
8 Is the $E_\infty$-structure on the cochain complex of a $K(G,n)$ readily understandable?
8 Does an oriented $S^3$ fiber bundle admit the structure of a principal $SU(2)$-bundle? | CommonCrawl |
Kiesel, T., Vogel, W., Hage, B., & Schnabel, R. (2011). Entangled Qubits in a non-Gaussian Quantum State. Physical Review. A, 83(6): 062319. doi:10.1103/PhysRevA.83.062319.
Abstract: We experimentally generate and tomographically characterize a mixed, genuinely non-Gaussian bipartite continuous-variable entangled state. By testing entanglement in 2$\times$2-dimensional two-qubit subspaces, entangled qubits are localized within the density matrix, which, firstly, proves the distillability of the state and, secondly, is useful to estimate the efficiency and test the applicability of distillation protocols. In our example, the entangled qubits are arranged in the density matrix in an asymmetric way, i.e. entanglement is found between diverse qubits composed of different photon number states, although the entangled state is symmetric under exchanging the modes. | CommonCrawl |
$\alpha$ is the angle between the axis of the ellipsoid and $\vec q$, $V = (4/3)\pi R_pR_e^2$ is the volume of the ellipsoid, $R_p$ is the polar radius along the rotational axis of the ellipsoid, $R_e$ is the equatorial radius perpendicular to the rotational axis of the ellipsoid and $\Delta \rho$ (contrast) is the scattering length density difference between the scatterer and the solvent.
For 2d data from oriented ellipsoids the direction of the rotation axis of the ellipsoid is defined using two angles $\theta$ and $\phi$ as for the `cylinder orientation figure <cylinder-angle-definition>`. For the ellipsoid, $\theta$ is the angle between the rotational axis and the $z$ -axis in the $xz$ plane followed by a rotation by $\phi$ in the $xy$ plane, for further details of the calculation and angular dispersions see `orientation` .
NB: The 2nd virial coefficient of the solid ellipsoid is calculated based on the $R_p$ and $R_e$ values, and used as the effective radius for $S(q)$ when $P(q) \cdot S(q)$ is applied.
The $\theta$ and $\phi$ parameters are not used for the 1D output.
Validation of the code was done by comparing the output of the 1D model to the output of the software provided by the NIST (Kline, 2006).
The implementation of the intensity for fully oriented ellipsoids was validated by averaging the 2D output using a uniform distribution $p(\theta,\phi) = 1.0$ and comparing with the output of the 1D calculation.
Comparison of the intensity for uniformly distributed ellipsoids calculated from our 2D model and the intensity from the NIST SANS analysis software. The parameters used were: *scale* = 1.0, *radius_polar* = 20 |Ang|, *radius_equatorial* = 400 |Ang|, *contrast* = 3e-6 |Ang^-2|, and *background* = 0.0 |cm^-1|.
The discrepancy above $q$ = 0.3 |cm^-1| is due to the way the form factors are calculated in the c-library provided by NIST. A numerical integration has to be performed to obtain $P(q)$ for randomly oriented particles. The NIST software performs that integration with a 76-point Gaussian quadrature rule, which will become imprecise at high $q$ where the amplitude varies quickly as a function of $q$. The SasView result shown has been obtained by summing over 501 equidistant points. Our result was found to be stable over the range of $q$ shown for a number of points higher than 500.
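To illustrate the quadrature-resolution point above, here is a rough numpy sketch of the 1D orientational average for an ellipsoid of revolution, comparing a 76-point Gauss-Legendre rule with a finer one (scale factors and unit conversions are omitted; this is not the NIST or SasView source):

    import numpy as np

    def amplitude(q, alpha, r_polar=20.0, r_equatorial=400.0, contrast=3e-6):
        # effective radius seen at angle alpha between the symmetry axis and q
        r = np.sqrt((r_equatorial * np.sin(alpha))**2 + (r_polar * np.cos(alpha))**2)
        u = q * r
        volume = 4.0 / 3.0 * np.pi * r_polar * r_equatorial**2
        return contrast * volume * 3.0 * (np.sin(u) - u * np.cos(u)) / u**3

    def p_of_q(q, n_points):
        # average of f^2 over alpha in [0, pi/2] with an n-point Gauss-Legendre rule
        x, w = np.polynomial.legendre.leggauss(n_points)
        alpha = 0.25 * np.pi * (x + 1.0)
        return 0.25 * np.pi * np.sum(w * amplitude(q, alpha)**2 * np.sin(alpha))

    q = 0.4   # in the high-q region where a 76-point rule starts to lose accuracy
    print(p_of_q(q, 76), p_of_q(q, 501))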
The model was also tested against the triaxial ellipsoid model with equal major and minor equatorial radii. It is also consistent with the cylinder model with polar radius equal to length and equatorial radius equal to radius.
16/04/2014: I assume P1 through P99 are the 1%ile through 99%ile, as we might calculate using Excel PERCENTILE. In that case, I believe you can only estimate ... The standard deviation is a statistic that tells you how tightly all the various examples are clustered around the mean in a set of data. The calculation of standard deviation is actually the root mean square (RMS) of the deviation of the values from the mean.
Percentage with only standard deviation and mean given. [closed] If you are at the 99th percentile, let's say, it means that you are doing better than 99% of the population. In other words, the percentage of people doing worse than you is 99. With relation to the normal curve, this would be the area from $-\infty$ to your z-score. Same exact thing - find the area from $-\infty$ to $0.7879$.
Definition 3: A weighted mean of the percentiles from the first two definitions. In the above example, here's how the percentile would be worked out using the weighted mean: Multiply the difference between the scores by 0.25 (the fraction of the rank we calculated above).
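Tying the pieces above together, a small Python sketch of the z-score/percentile conversion (the mean, standard deviation and score are made-up numbers used only for illustration):

    from scipy.stats import norm

    mean, sd = 100.0, 15.0        # hypothetical mean and standard deviation
    score = 111.8                 # hypothetical observed value

    z = (score - mean) / sd       # standardize
    print(norm.cdf(z))            # area from -infinity to z, i.e. the percentile (about 0.78 here)

    # the reverse direction: the value sitting at the 99th percentile
    print(mean + sd * norm.ppf(0.99))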
I think Boaz's preface/introduction to his own article is worth reproducing, although I don't agree with a few side points he makes (probably semi-jokingly and/or while being carried away).
Most importantly, I do not think we should accept the norms of other sciences that encourage blurring the difference between highly likely conjectures and facts. However, I indeed welcome any attempt to develop notions and measures of the plausibility of conjectures.
Theoretical Computer Science is blessed (or cursed?) with many open problems. For some of these questions, such as the P vs NP problem, it seems like it could be decades or more before they reach resolution. So, if we have no proof either way, what do we assume about the answer? We could remain agnostic, saying that we simply don't know, but there can be such a thing as too much skepticism in science. For example, Scott Aaronson once claimed that in other sciences "$P \neq NP$" would by now have been declared a law of nature. I tend to agree. After all, we are trying to uncover the truth about the nature of computation and this quest won't go any faster if we insist on discarding all evidence that is not in the form of mathematical proofs from first principles.
But what other methods can we use to get evidence for questions in computational complexity? After all, it seems completely hopeless to experimentally verify even a non-asymptotic statement such as "There is no circuit of size $2^{100}$ that can solve 3SAT on 10000 variables". There is in some sense only one tool us scientists can use to predict the answer to open questions, and this is Occam's Razor. That is, if we want to decide whether an answer to a certain question is Yes or No, we try to think of the simplest/nicest possible world consistent with our knowledge in which the answer is Yes, and the simplest such world in which the answer is No. If one of these worlds is much nicer than the other, that would suggest that it is probably the true one. For example, if assuming the answer to the question is "Yes" yields several implications that have been independently verified, while we must significantly contort the "No" world in order to make it consistent with current observations, then it is reasonable to predict that the answer is "Yes".
In this essay, I attempt to do this exercise for two fascinating conjectures for which, unlike the P vs NP problem, there is no consensus on their veracity: Khot's Unique Games Conjecture and Feige's Random 3SAT Hypothesis. This is both to illuminate the state of the art on these particular conjectures, and to discuss the general issue of what can be considered as valid evidence for open questions in computational complexity.
I wish to start by commending Boaz for writing an essay of the current nature, which is not only of a great value as an overview of important developments but also has the added value of offering well-articulated views regarding them.
I think that Boaz's suggestion for an Occam's Razor consideration (regarding how assumptions fit with known facts) makes much sense, but I would not present it as the only possibility. Also, in using this criterion, I'd focus on simplicity not nicety (which is a far less sound notion, let alone that its relevance here is unclear).
Again, none of the above ``inferences'' is logically sound, but each is rather supported by Occam's principle as stated above.
I like Boaz's assertion (at the beginning of Sec 1.3) by which a conjecture is useful (or good) if it has led to significant progress in the field.
Now, to my critiques, which are actually disagreements wrt some (important) issues that are secondary to Boaz's article. I do not like the title and the end of the 1st paragraph, since these give the impression of a gap between proof systems and truth (in models). I think this is not the issue; the issue is rather giving some weight to unsound inferences rather than totally dismissing them, and the question is how exactly to go along in such a shaky terrain.
Likewise, as stated up-front, I'm not enthusiastic (to say the least) about calling a widely held conjecture by the name ``law of nature'' (as other sciences may do). I am happy to belong to a field that carefully articulates, for each assertion, the quantifiers under which it holds. Indeed, one may warn against losing sight of the assertion due to the quantifiers, but that's different than suggesting that the latter be dropped (a suggestion Boaz does not make, but other sciences often do and Boaz's text may be read as advocating that practice).
I do not think that ``blessed (or cursed?) with many open problems'' is an accurate enough description of the state of TOC as compared with other sciences. The issue is not the number of open problems, but their positions within the field.
Typo on line 6 of page 3: $a_i$ should be $x_i$.
I found the beginning of the 1st paragraph of Sec 2.1 quite confusing, since it describes a hypothetical situation that we know not to hold (as indeed stated later). At the very least, I'd use here phrases that clearly indicate the situation (i.e., ``would be'' --> ``could have been'').
I don't understand why Levin's theory of average-case complexity (and its ramifications) is not even mentioned in Sec 2.3; in my opinion, the basic theory as well as some ramifications (see, e.g., Noam Livne's work (*)) conflict with some of the feelings expressed in Sec 2.3. | CommonCrawl |
With modern technology advancement, it is now possible to deliver mail with a robot! There is a neighborhood on a long horizontal road, on which there are $n$ houses numbered $1$ to $n$ from left to right. Every day a mail delivery robot receives a pile of letters with exactly one letter for each house. Due to mechanical restrictions, the robot cannot sort the letters. It always checks the letter on top of the pile, visits the house that should receive that letter and delivers it. The robot repeats this procedure until all the letters are delivered. As a result, each of the $n$ houses is visited by the robot exactly once during the mail delivery of a single day.
The mail delivery robot has a tracking device that records its delivery route. One day the device was broken, and the exact route was lost. However, the technical team managed to recover the moving directions of the robot from the broken device, which are represented as a string consisting of $n-1$ letters. The $i$-th letter of the string is 'L' (or 'R') if the $(i+1)$-th house visited by the robot is on the left (or right) of the $i$-th house visited. For example, if $n = 4$ and the robot visited the houses in the order of $2, 4, 3, 1$, its moving directions would be "RLL".
With the moving directions, it may be possible to determine the order in which the robot visited the houses. The technical team has asked you to write a program to do that. There can be multiple orders producing the same moving directions, among which you should find the lexicographically earliest order.
The input has a single integer $n$ ($2 \leq n \leq 2 \cdot 10^5$) on the first line. The second line has a string of length $n-1$ consisting of letters 'L' and 'R' giving the moving directions of the robot.
Output the lexicographically earliest order in which the robot may have visited the houses and delivered the letters according to the moving directions. Consider two different integer sequences of equal length $A = (a_1, a_2, \ldots , a_ k)$ and $B = (b_1, b_2, \ldots , b_ k)$, and let $1 \le i \le k$ be the lowest-numbered index where $a_ i \ne b_ i$. Then $A$ is lexicographically earlier than $B$ if $a_ i < b_ i$; otherwise $B$ is lexicographically earlier than $A$. | CommonCrawl |
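No reference solution accompanies the problem statement; the following Python sketch implements one standard approach (reading 'R' as "the next house has a larger number" and 'L' as "a smaller number"): start from the identity order 1, 2, ..., n and reverse the block of positions covered by each maximal run of 'L'. The exact output formatting expected by the judge is an assumption here.

    import sys

    def solve():
        data = sys.stdin.read().split()
        n = int(data[0])
        s = data[1]                      # the n-1 moving directions

        order = list(range(1, n + 1))    # lexicographically smallest candidate

        i = 0
        while i < n - 1:
            if s[i] == 'L':
                j = i
                while j < n - 1 and s[j] == 'L':
                    j += 1
                # positions i..j must be strictly decreasing; reversing this
                # block of consecutive numbers keeps the prefix as small as possible
                order[i:j + 1] = order[i:j + 1][::-1]
                i = j
            else:
                i += 1

        print(" ".join(map(str, order)))

    solve()

For the example "RLL" with n = 4 this prints 1 4 3 2, which produces the directions R, L, L and is lexicographically earlier than 2 4 3 1.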
One example is the standard Cartesian coordinates of the plane, where $X$ is the set of points on the $x$-axis, $Y$ is the set of points on the $y$-axis, and $X \times Y$ is the $xy$-plane.
If $X=Y$, we can denote the Cartesian product of $X$ with itself as $X \times X = X^2$. For example, since we can represent both the $x$-axis and the $y$-axis as the set of real numbers $\R$, we can write the $xy$-plane as $\R \times \R = \R^2$.
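To make the definition concrete, here is a tiny Python illustration (added here; the sets are arbitrary examples):

    from itertools import product

    X = [1, 2, 3]
    Y = ['a', 'b']

    # X x Y: all ordered pairs (x, y) with x in X and y in Y
    XY = list(product(X, Y))
    print(XY)        # [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]

    # X x X, written X^2
    X2 = list(product(X, repeat=2))
    print(len(X2))   # 9, i.e. |X| * |X|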
Cartesian product definition by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us. | CommonCrawl |
What Hermitian operators can be observables?
assign to each projector a unique real number $\lambda\in\mathbb R$.
Question: what restrictions that prevent $O$ from being an observable are known?
For example, we can't admit as observables the Hermitian operators having as eigenstates superpositions forbidden by the superselection rules.
a) Where can I find an exhaustive list of the superselection rules?
b) Are there other rules?
c) Is the particular case when the Hilbert space is the tensor product of two Hilbert spaces (representing two quantum systems), special from this viewpoint?
Without superselection rules to restrict the observables, any Hermitian operator is an admissible observable. The case of multiple identical systems is very important. Indeed, if the systems are really identical, only observables that are symmetric under the exchange of the systems are admissible. In such a case, technically speaking you should only consider observables that commute with all possible permutation operators (i.e., with the elements of the representation of the permutation group on the Hilbert space of the systems).
Thank you for the answer. Indeed, for identical particles one takes as the Hilbert space the quotient of the tensor product by the appropriate ideal. Do you know some bibliographic references showing that the only restrictions for a Hermitian operator to be an observable are these?
Proposition 15.49.6. A Noetherian complete local ring is a G-ring.
Proof. Let $A$ be a Noetherian complete local ring. By Lemma 15.49.2 it suffices to check that $B = A/\mathfrak q$ has geometrically regular formal fibres over the minimal prime $(0)$ of $B$. Thus we may assume that $A$ is a domain and it suffices to check the condition for the formal fibres over the minimal prime $(0)$ of $A$. Let $K$ be the fraction field of $A$.
We can choose a subring $A_0 \subset A$ which is a regular complete local ring such that $A$ is finite over $A_0$, see Algebra, Lemma 10.154.11. Moreover, we may assume that $A_0$ is a power series ring over a field or a Cohen ring. By Lemma 15.49.3 we see that it suffices to prove the result for $A_0$.
Currently, I am developing a transpiler for my own language; however, I am finding it difficult to transpile generic functions. For example, how would you translate the following Java-like code into C?
Note: You might also translate it to any language you like, as long as you do not use generics.
When compiling to a target language without generics, there are two main strategies.
Monomorphization: if you have a generic function, and it's instantiated with types $T_1\ldots T_n$ throughout your program, you make $n$ copies of the function, with the concrete type replacing the generic variable. This tends to result in fast but large code, since there's lots of duplication.
Boxing: you can "box" the generic variables behind some sort of reference type, and cast to and from this type when entering and exiting your generic function. The guarantees of the source language ensure that these casts are always safe. For example, in C you probably want to use void*.
If you're looking for resources, generics are known as "parametric polymorphism" in the Programming Languages literature, so that might help your search.
If you're compiling from a Java-like OO language, you'll also need to attach a VTable to your objects: a table of pointers to their methods, so that you get Java's dynamic dispatch (i.e. the method called on an object depends only on its runtime type, not its compile-time type).
Having "bounded polymorphism" with option 2 (i.e. the T extends Comparable<T>makes this a little more tricky, since you need to come up with some sort of boxed representation that lets you find your Comparable methods. You could do a dynamic lookup to find such methods, but this is slow. I'd search around for how to compile bounded polymorphism, you'll probably find a better answer.
As discussed by Prof. Wen in the context of the quantum orders of spin liquids, PSG is defined as all the transformations that leave the mean-field ansatz invariant, IGG is the so-called invariant gauge group formed by all the gauge transformations that leave the mean-field ansatz invariant, and SG denotes the usual symmetry group (e.g., lattice space symmetry, time-reversal symmetry, etc.), and these groups are related as follows: SG=PSG/IGG, where SG can be viewed as the quotient group.
However, in math, the name of projective group is usually referred to the quotient group, like the so-called projective special unitary group $PSU(2)=SU(2)/Z_2$, and here $PSU(2)$ is in fact the group $SO(3)$.
So, physically, why do we call the PSG projective rather than the SG? Thank you very much.
It depends on what group you consider the starting point, and that depends on context.
One context is the mathematical one (forget all you know about spins etc) where we start with the vector space $\mathbb C^2$ and the natural action of $SU(2)$ on it. If we then look at the projective vector space $\mathbb C^2/\mathbb C^* = \mathbb CP^1$, then the action of $SU(2)$ is given by $PSU(2) = SU(2)/ \mathbb Z_2$. Since this group now acts on the projective vector space, we can call it a projective group.
In the physical context (in the case of rotation symmetry), our starting group is not $SU(2)$ but rather $SO(3)$. The way this acts on the state space of a spin $1/2$ particle is by a nice representation $\rho: SO(3) \to PSU(2)$ acting on $\mathbb CP^1$ (this is in fact an isomorphism, since as you point out $PSU(2) \cong SO(3)$!). However physicists don't really like to think about projective Hilbert spaces, and so we prefer to think of our symmetry as acting on the linear Hilbert space $\mathbb C^2$ through some map $\tilde \rho$. However, it turns out that the best you can do is that $\tilde \rho$ is a projective representation (which means the group structure is only respected up to a complex scalar). Hence you can say we traded in the projective Hilbert space for a projective group action. Again, physicists don't like thinking about projective representations, and so instead we use a linear representation of the covering group. Indeed, if we first extend $SO(3)$ to $SU(2)$ then it can act linearly on the linear space $\mathbb C^2$.
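A standard concrete check of this (added here for illustration): for spin $1/2$, a rotation by $2\pi$ about any axis $\hat n$, which is the identity element of $SO(3)$, is represented by

$$ e^{-i\,2\pi\,\hat n\cdot\vec\sigma/2} \;=\; \cos\pi\,\mathbb{1} \;-\; i\sin\pi\,\hat n\cdot\vec\sigma \;=\; -\mathbb{1}, $$

so the group law of $SO(3)$ is respected only up to a sign, which is exactly the $\mathbb Z_2$ by which $SU(2)$ covers $PSU(2)\cong SO(3)$.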
I am sure you know that that is why we use $SU(2)$ instead of $SO(3)$, but I wanted to go through the reasoning explicitly, to demonstrate that our original symmetry group is $SO(3)$, but since using that induces ''projectiveness'' down the road (either on the space or in the way it acts), we instead use the extended symmetry group $SU(2)$, which we can call the projective symmetry group of our system since it encodes all the projective realizations of our original symmetry group.
In conclusion: it's not that one name is better than the other, and you are correct in noticing that they are not talking about the same thing, it's just that it depends on the context when you tag the label ''projective''. In the former case we call it the projective group since it is the way in which the original group acts on the projective vector space. In the latter case we might call the extended symmetry group projective because its (linear) representations correspond to all projective representations of our original symmetry group.
We want to determine all real $x$ values for which $f(x)=|3x-6|$ is defined. Starting inside the absolute values, we can always multiply any real number by 3 and then subtract 6 from the result, so $3x-6$ is defined for all real $x$. We can take the absolute value of any real number, so $|3x-6|$ is also defined for any real $x$. Thus, the domain of $f$ is all real numbers, written as $(-\infty,\infty)$ in interval notation. | CommonCrawl |
I'm working on a spatial project. I need to calculate the probability of a point being the closest to another. Say I'm given four points $y$, $x_1$,$x_2$ and $x_3$ in 2D plane, and let $Y'=y+Z$, where $Z$ is a bivariate normal with known mean and covariance. I want to know the probability that $x_1$ out of the three $x$'s is closest to $Y'$. How should I proceed? Thank you very much!
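One way to proceed (a suggestion, not from the original thread): the probability is the integral of the bivariate normal density of $Y'$ over the region of the plane that is closer to $x_1$ than to $x_2$ and $x_3$ (an intersection of half-planes), and it is easy to estimate by Monte Carlo. A Python sketch with made-up coordinates:

    import numpy as np

    rng = np.random.default_rng(0)

    y = np.array([0.0, 0.0])                    # hypothetical point y
    xs = np.array([[1.0, 0.5],                  # hypothetical x1, x2, x3
                   [2.0, -1.0],
                   [-1.5, 1.0]])
    mean = np.array([0.2, -0.1])                # known mean of Z (made up here)
    cov = np.array([[1.0, 0.3],
                    [0.3, 0.5]])                # known covariance of Z (made up here)

    n = 200_000
    Yp = y + rng.multivariate_normal(mean, cov, size=n)   # samples of Y' = y + Z

    # distance of every sample to each x_i; x1 "wins" when it attains the minimum
    d = np.linalg.norm(Yp[:, None, :] - xs[None, :, :], axis=2)
    print(np.mean(np.argmin(d, axis=1) == 0))   # estimate of P(x1 is closest to Y')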
But I do not understand how to derive or explain (59).
Abstract: We consider a discrete equation, defined on the two-dimensional square lattice, which is linearizable, namely, of the Burgers type, and depends on a parameter $\alpha$. For any natural number $N$ we choose $\alpha$ so that the equation becomes Darboux integrable and the minimal orders of its first integrals in both directions are greater than or equal to $N$.
Keywords: discrete equation, Darboux integrability, first integral.
Abstract: The main contribution of this paper is an integrated design of robust fault estimation (FE) and fault accommodation (FA), applied to uncertain linear time-invariant (LTI) systems. In this design, the robust $H_\infty$ proportional-integral (PI) observer allows a precise estimation of the actuator fault by dealing with system disturbances and uncertainties, while the feedback controller compensates for the actuator fault, thereby assuring closed-loop stability. Thanks to the application of majoration and Young inequalities, the observer-controller decoupling problem is solved and both of the above objectives are combined into a single linear matrix inequality (LMI), allowing a simultaneous solution for both observer and controller. Finally, an example of a vehicle suspension system is presented to highlight the performance of the proposed method.
Abstract: We calculate sensitivity coefficients to $\alpha$-variation for the fine-structure transitions (1,0) and (2,1) within $^3P_J[2s^2 2p^2]$ multiplet of the Carbon-like ions C I, N II, O III, Na VI, Mg VII, and Si IX. These transitions lie in the far infrared region and are in principle observable in astrophysics for high redshifts z~10. This makes them very promising candidates for the search for possible $\alpha$-variation on a cosmological timescale. In such studies one of the most dangerous sources of systematic errors is associated with isotope shifts. We calculate isotope shifts with the help of relativistic mass shift operator and show that it may be significant for C I, but rapidly decreases along the isoelectronic sequence and becomes very small for Mg VII and Si IX. | CommonCrawl |
This is a follow-up to this question; in particular, I'm wondering if anyone can expand upon the interesting answers given by Kevin McGerty and David Ben-Zvi there. (In particular, in this question I'm essentially quoting their answers).
Here's the setup: Let $k$ denote an algebraically closed field of positive characteristic and let $G$ be a semisimple algebraic group over $k$. Let $D$ denote the sheaf of ordinary differential operators on the flag variety $G/B$ of $G$; i.e., $D$ is the sheaf of divided-power differential operators. Also let $H$ denote the hyperalgebra of $G$.
Now, over $\mathbb C$ there is an equivalence of categories between $D$-modules and $H$-modules with a certain central character. My question is: Is there any sort of localization theorem like this in positive characteristic? Kashiwara and Lauritzen have shown that $G/B$ is not $D$-affine in general, so perhaps one should look for a derived equivalence. (Bezrukavnikov, Mirkovic, and Rumynin have answered a similar question, but instead of $D$ they take the sheaf of crystalline/PD differential operators, and instead of $H$ they take the enveloping algebra of the Lie algebra of $G$).
These articles have been peer-reviewed and accepted for publication in IJPMB, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.
Abstract: Performance parameters of an Environmental Management System (EMS), based on HR dimensions and the organizational dimension, are important measures for the successful implementation of EMS in any manufacturing industry. This paper identifies the various EMS variables for the successful implementation of EMS in the small and medium enterprises (SMEs) of India. Various measures of the collected survey data of Indian micro-scale industries and SMEs have been validated and analysed using SPSS software. The collected data have been analysed by statistical tools like the importance-index and corrected item minus total correlation (CIMTC) methods, and the reliability of these data has been tested by Cronbach's alpha (α) analysis. The computed results show that the identified measures are highly reliable for implementation of EMS in Indian SMEs. This study shows that 53 items out of 54 items of 9 variables have values of Cronbach's alpha (α) of more than 0.70, which shows that the collected data are highly reliable for Indian SMEs. It is also found that the values of Cronbach's alpha for 'employee empowerment' and 'training and awareness' are above 0.80, and the rest of the variables have values of Cronbach's alpha (α) in the range of 0.50-0.80 for micro-scale Indian industries.
Keywords: Small and Medium Enterprises (SMEs); Environmental Management System (EMS); Environmental Performance (EP); Cronbach's alpha; Importance-index and CIMTC.
Abstract: Globalization and modern lifestyle trends have posed a tremendous challenge to manufacturing industries. To meet the challenges imposed by today's dynamic market conditions, it is the right time for manufacturing companies to transition from traditional manufacturing systems to advanced manufacturing systems. This paper highlights four areas of manufacturing systems, i.e., humanised flexible manufacturing system (HFMS), flexible manufacturing system (FMS), computer integrated manufacturing (CIM) and computer numerical control (CNC) systems. The purpose of this paper is to provide a framework to proactively select the best manufacturing system. This study is an effort to identify the factors affecting the manufacturing system and to find out the best manufacturing system. For this purpose, a comparative study of the multi-criteria decision-making approaches, analytical network process (ANP) and analytical hierarchy process (AHP), has been used for prioritization of alternatives. The shortcomings of the research are stated and future directions are proposed.
Keywords: Measures; humanised flexible manufacturing system; HFMS; flexible manufacturing system; FMS; computer integrated manufacturing; CIM; computer numerical control; CNC; analytical network process; ANP; analytical hierarchy process; AHP;.
Abstract: In this paper, we develop optimistic and pessimistic models using fuzzy data envelopment analysis (FDEA). We find the fuzzy optimistic and fuzzy pessimistic efficiencies by using the $\alpha$-cut method. By using the super-efficiency technique, we develop models to obtain the complete ranking of the DMUs when the fuzzy optimistic and pessimistic models are considered separately, and further to rank the DMUs when both fuzzy optimistic and pessimistic models are taken simultaneously as a hybrid approach. To address the overall performance using fuzzy optimistic and pessimistic situations together in FDEA, we propose a hybrid FDEA decision model using $\alpha$-cut efficiencies. Finally, these developed optimistic and pessimistic FDEA models and ranking models are illustrated with two examples. The proposed model is applied to a real-world problem of the health sector. We determine the performance efficiencies of hospitals in Meerut district, Uttar Pradesh state, India for the calendar year 2013-2014 using the proposed model. The number of health superintendents and number of health workers are taken as input variables, and the number of inpatients and number of outpatients as output variables.
Keywords: Hospitals; Performance Efficiency; Optimistic and pessimistic; Super-efficiency; FDEA.
Abstract: The single vendor single-buyer integrated production inventory system has been an object of study for a long time but little is known about the effect of investing in reducing setup cost on the integrated inventory models with variable lead time and production lot size dependent lead time and transportation time. A mathematical model is derived to investigate the effects of setup cost reduction on the integrated inventory system by reducing the total cost system. A numerical example is included to illustrate the results of the model. Finally, Graphical representation is presented to illustrate the model. The solution procedure is furnished to determine the optimal solution and the sensitivity analysis has been carried out to illustrate the behaviours of the proposed model.
Keywords: Integrated inventory model; Supply chain management; Setup cost reduction.
Abstract: The objective behind the review is to have an insight of Technology Push (TP) activities to escalate the essential capabilities in manufacturing ventures and analyzing the experiences of the developed and developing countries. Themes and relevant articles have been identified from the various journals and linked with issues related to technology push for sustainable development in manufacturing industries. The study reveals that adjustment of viable TP strategies may endow towards reckoning key skills in the framework of a company. In addition to this, the review emphasizes that the TP strategies has significant impact on the sustainable development of the manufacturing industries. It has been observed that the role of TP in manufacturing companies has not been much reported. A synthesis can be established with satisfaction levels of technology push strategies, which may help to secure concrete objectives and interests of an organization. The paper reviews various attributes of TP through a number of publications in this field.
Keywords: Technology push; Sustainable development; Manufacturing industries; Technology management.
Abstract: To cater to dynamic and variable performance matrices, manufacturing industries use automation tools and technologies to impart manufacturing flexibility in its processes. To successfully cater to its performance, manufacturing requirements should be designed into the system and then appropriate planning and operational policies need to be followed to support it. This research work validates this using an industrial size Flexible Manufacturing System (FMS) as a multi node manufacturing framework having eight machines. While FMS design spectrum is modeled by different levels of its Routing Flexibility (RF) and number of pallets (NP), planning and operational strategies are implemented through various combinations of Dispatching and Sequencing rules (DR/SR). This work is a simulation study to evaluate FMS performance for which Make Span Time (MST) and machine utilization have been used as a performance metric. This research provides a framework for the decision makers in identifying the appropriate levels of routing flexibility, number of pallets and appropriate combination of dispatching and sequencing rules for optimizing system performance under a given manufacturing environment.
Keywords: Routing Flexibility; Sequencing Rule; Dispatching Rule; Simulation of FMS.
Abstract: This paper focuses on scheduling a permutation flow shop where setup times are sequence dependent. The objectives involved are minimisation of makespan and mean tardiness. A hybrid genetic algorithm is developed to obtain a set of non-dominated solutions to the multi-objective flow shop scheduling problem. The genetic algorithm is used in combination with a local search method to obtain the Pareto-optimal solutions. The best parameters of the proposed algorithm are determined using Taguchi's robust design method and the concept of utility index value. The set of parameters corresponding to the highest utility value is selected as the optimal parameters for the proposed algorithm. The algorithm is applied to the benchmark problems of flow shop scheduling with sequence-dependent setup times.
Keywords: permutation flow shop; hybrid genetic algorithm; Taguchi method; utility concept.
Abstract: Manufacturing organisations adopt Flexible Manufacturing Systems to meet the challenges imposed by today's volatile market standards. An FMS is designed to combine the efficiency of a mass production line and the flexibility of a job shop to produce a variety of products on a group of machines. Productivity is a key factor in flexible manufacturing system performance. Despite the advantages offered, the implementation of FMS has not been very popular, especially in developing countries, as it is very difficult to quantify the factors favouring FMS implementation. For its successful implementation, technological considerations, cost justification as well as strategic benefits are to be weighted. Therefore an attempt has been made in the present work to identify and categorize various productivity factors influenced by the implementation of FMS in a firm; further, these factors are quantitatively analysed to find their inhibiting strength using the Graph Theory Approach (GTA). GTA is a powerful approach which synthesizes the inter-relationship among different variables or subsystems and provides a synthetic score for the entire system. Using this approach, a numerical index is proposed in this work to evaluate and rank the various productivity factors so that practising managers can have better focus.
Keywords: Manufacturing; FMS; Productivity; GTA.
Abstract: Reconfigurable manufacturing systems (RMS) are considered as next generation manufacturing systems that are capable of providing the functionality and capacity as and when required. Products are classified into several part families as per customer requirement, and each of them is a set of similar products. On a shop floor, manufacturers have to deal with a varied number of orders for multiple part families, and after completing the orders of a particular family, they need to change over to the orders of a different part family. Shifting from one part family to another may require the system's reconfiguration, which is a complicated process and requires tremendous cost and effort. The complexity, effort and cost of changing one configuration to another depend on the existing initial configuration and the new configuration required for subsequent production of orders belonging to a different part family. There are different models available for process improvement. This paper focuses on determining the optimal sequencing of part family formation and configuration selection for process improvement in RMS on the basis of the minimum loss incurred for a given system configuration. The proposed methodology is explained and an example is given for illustration.
Keywords: RMS; configuration selection; part family; reconfiguration cost; process; order state.
Abstract: Developing new products is a highly dynamic, complex process. Managing such dynamic projects is not an easy task. In this research, different scenarios of NPD project work systems are examined, in particular the capability of these work systems to deal with the dynamic nature of NPD projects and to mitigate the negative impacts of this nature on NPD project performance. This study shows that the most suitable organizational work environment for moderately and highly complex projects is the highly flexible work system environment, while it is appropriate for low-complexity projects to provide a formal work system environment. This study makes an attempt to use system dynamics modeling to model the work system characteristics and managerial styles in interaction with project behavior. It provides a step towards the understanding of how both work system characteristics and managerial styles would influence project performance and probability of success. It serves as a guide for project managers, suggesting proper ways to better manage different NPD projects with different levels of complexity.
Keywords: project behavior; work system design; managerial styles; formality; system dynamics; scenario planning.
Abstract: In this paper, we use two metaheuristic algorithms, i.e., artificial bee colony (ABC) and covariance matrix adaptation evolution strategies (CMA-ES), for solving the generalized cell formation problem considering machine reliability. The purpose is to choose the best process routing for each part and to allocate the machines to the manufacturing cells in order to minimize the total cost, which is composed of intracellular movement cost, intercellular movement cost and machine breakdown cost. To evaluate the metaheuristic algorithms, eight numerical examples in three different sizes are solved. The results of the two algorithms are compared with each other and with the results of solving the MIP model. Both the MIP solver and the metaheuristics find the optimal solutions for the small-size problem instances, while as the problem size increases, the metaheuristics show higher performance. The results illustrate that the CMA-ES algorithm outperforms the ABC algorithm in both solution quality and CPU time.
Keywords: group technology; generalized cell formation problem; reliability; artificial bee colony; covariance matrix adaptation evolution strategies.
Abstract: In this paper an inventory model for deteriorating items with two warehouses has been developed, where both warehouses considered are rented. The demand rate for the two warehouses is different, since the demand for the products increases with stock up to a certain level and after that becomes constant. In this model it is assumed that the demand in the first warehouse is stock dependent, and when the extra stock is filled in the second warehouse the demand rate becomes constant for it. Due to different storage conditions the deterioration rate is also different in the two warehouses. A numerical example and sensitivity analysis are also presented to illustrate the study. The main objective of this paper is to find out the optimal total cost of the system.
Keywords: Rented Warehouses; Deterioration; Stock Dependent Demand; Partial Backlogging; Inventory.
Abstract: Good and interconnected demand and supply planning (D&SP) is essential for firms, as it gives better allocation of resources and is an enabler to stay competitive. The basic and simple applications of D&SP are forecasting and aggregate planning, which are mainly used in big corporations, but less in small and medium enterprises. The research was a case study in a small firm in Indonesia, which started as a small, family-owned company, but can enjoy the benefit of simple demand and supply planning to enhance its cost and productivity. Despite the main application of D&SP in every production environment, not much research has been done in the last couple of decades. The study and the case are thus a refresher on how the application of D&SP is still relevant. The case study examined 7 common forecasting tools for the demand planning side, and exercised the classical chase and level aggregate planning strategies for the supply planning side. It explored 3 possible scenarios, and recommended a better scenario with 3% cost saving for the company.
Keywords: Demand Planning; Supply Planning; Demand and Supply Planning (D&SP); Aggregate planning; Forecasting; Small enterprise.
Abstract: In order to address the manufacturing and machining requirements that have increased sharply due to the evolution of newer and difficult-to-machine materials being utilized by the manufacturing industries, a different class of manufacturing and machining processes has emerged over the past few decades, known as non-traditional machining (NTM) processes. These processes have the ability to produce intricate and typically shaped products with a high degree of accuracy and precision, close dimensional tolerance, and good surface finish. On the other hand, NTM processes also consume high power and are expensive; hence, this necessitates optimum selection of NTM processes and their related criteria. In this study, a model is developed using the analytic hierarchy process (AHP) methodology to help process engineers and decision makers in prioritizing and ranking the various NTM processes and their criteria identified from the literature review, and in selecting the most appropriate NTM process and its criteria for a typical machining process. The methodology also assists in identifying the most significant NTM processes and criteria that could help process engineers and decision makers achieve higher productivity and industry performance. The analysis of the present study shows that the NTM process classification "based on the source of energy" is ranked in first place, followed by "based on the medium of the energy transfer", "based on the type of energy used", and "based on the mechanism of material removal". Further, the study also prioritized the various NTM process criteria, consisting of 27 criteria. The priority criteria ranked on top are: voltage, current, ions, and ionised particles. The findings of the present study will help process engineers, decision makers and practitioners in selecting the best NTM process and its criteria, and then implementing them in their industry for improved performance.
Keywords: Non-traditional machining (NTM) processes; criteria; analytic hierarchy process (AHP); voltage; current; and energy.
Abstract: Computer Aided Testing (CAT) is the latest technique, as CAT is involved in different stages of manufacturing such as design, production and quality control by 3D measuring instruments, which is time consuming. If the object position is known before it is examined, time can be managed. Machine-vision-based inspection of mechanical CAD parts has become a demanding area in the field of industrial inspection. In this work we developed a procedure to detect mechanical CAD parts with edge-based algorithms. The data have been taken from a 3D model that has been designed using Solid Edge ST8 CAD/CAM PLM software and analyzed using MATLAB for an automated production checking system. Our proposed method uses edge-based recognition of the CAD object by a fuzzy-based approach in order to create image information of the shape before it can be used for pose estimation in a Computer Aided Testing system. From the experimental results, it has been found that with the proposed vision system more accurate and reliable products can be manufactured intelligently.
Keywords: Computer Aided Testing (CAT); Machine vision; Image processing; CAD/CAM; Fuzzy logic; Intelligent Manufacturing; Edge extraction; STL format.
Abstract: In a competitive economic environment, manufacturers are forced to face delay, quality and cost challenges. Hence, the lean manufacturing and total productive maintenance (TPM) concepts emerged to resolve this problem by improving productivity. However, for many manufacturers, especially small and medium-sized enterprises (SME), these concepts remain very difficult to implement and maintain. In this perspective, this work aims at clarifying some failure factors and proposes a model to improve both productivity and performance. This model has been implemented in the plastics industry through a case study relating basic concepts of TPM and Management by Objectives (MBO). Nevertheless, few research works have been interested in developing lean models for manufacturing SMEs. This paper aims to enrich this research axis with an industrial case study in a Moroccan SME. Further research can exploit the findings of this work in order to implement the proposed model in other industries.
Keywords: Total productive maintenance; lean manufacturing; key performance indicators; data collection; small and medium-sized enterprises.
Abstract: Creativity has found a leading strategic position in service industries, and companies have increasingly implemented knowledge management practices and dynamic capabilities in their organizations over the past decade. Realizing the importance of knowledge acquisition from social media, knowledge sharing, and dynamic capabilities in employee creativity, we have made an attempt in this research to propose a conceptual framework. Drawing from the organizational behavior literature, we hypothesize and test the relationship among knowledge acquisition, dynamic capabilities, knowledge sharing and employee creativity. We performed a structural equation model (SEM) test with maximum likelihood estimation to test the relationship among the research variables with a sample of 293 participants. The empirical results from the structural model suggest that knowledge acquisition positively influences both dynamic capabilities and knowledge sharing, and knowledge sharing has a direct effect on dynamic capabilities. Furthermore, dynamic capabilities and knowledge sharing were shown to be direct antecedents of employee creativity.
Keywords: Knowledge processes; Innovation; Creativity; Social media; Knowledge acquisition; Dynamic capabilities; Knowledge sharing; Small businesses.
Abstract: The concept of catastrophe, happening at random, forcing the abandonment of all present customers and data, and rendering the machines and service facilities instantly inoperative until a new customer arrives, is not uncommon in many practical problems of computer and communication systems. In this research article, we present a process to frame the membership functions of the queueing system characteristics of the classical single-server $M/M/1$ fuzzy queueing model having fuzzified exponentially distributed inter-arrival time and service time with fuzzy catastrophe. We employ the $\alpha$-cut approach to transform fuzzy queues into a family of conventional crisp intervals for the queueing characteristics, which are computed with a set of parametric non-linear programs using their membership functions. In an $FM/FM/1$ fuzzy queueing model with fuzzy catastrophe, we characterize the arrival rate and service rate with fuzzy numbers, with the fuzzy catastrophe also represented by a fuzzy number. We employ basic fuzzy arithmetic fundamentals, Zadeh's extension principle and Yager's ranking index to establish fuzzy relations among the different rates and to compute the corresponding defuzzified values. We present an illustrative example to show the practicality and tractability of the process in detail. We also derive the informative membership functions of the queueing system characteristics using fuzzy operations and arithmetic, and the results are depicted in tables and graphs to provide better insight to management along with their sensitivity to parameters.
Keywords: Fuzzy sets; $\alpha$-cut; FM/FM/1 queueing system; Catastrophe.
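A minimal sketch of the α-cut idea described above, assuming triangular fuzzy arrival and service rates and ignoring the catastrophe extension: each α-cut gives an interval for λ and μ, and the monotonicity of the M/M/1 mean system size in λ and μ yields interval bounds. All numbers are hypothetical.

```python
import numpy as np

def tfn_alpha_cut(a, b, c, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def mm1_L(lam, mu):
    """Mean number in system for a stable M/M/1 queue."""
    rho = lam / mu
    return rho / (1.0 - rho)

# Hypothetical triangular fuzzy arrival and service rates (per hour).
lam_tfn = (3.0, 4.0, 5.0)
mu_tfn = (8.0, 9.0, 10.0)

for alpha in (0.0, 0.5, 1.0):
    lam_lo, lam_hi = tfn_alpha_cut(*lam_tfn, alpha)
    mu_lo, mu_hi = tfn_alpha_cut(*mu_tfn, alpha)
    # L is increasing in lambda and decreasing in mu, so the interval
    # endpoints are attained at the corners of the alpha-cut box.
    L_lo = mm1_L(lam_lo, mu_hi)
    L_hi = mm1_L(lam_hi, mu_lo)
    print(f"alpha={alpha:.1f}: L in [{L_lo:.3f}, {L_hi:.3f}]")
```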
Abstract: The present paper deals with multi-objective optimization of wire EDM process parameters in a gear cutting process on Titanium alloy Grade 5 using a MOORA-based GA methodology. The objective of the present research is to optimize the response parameters for a gear tooth using the hybrid technique, in order to obtain an optimized setting and to study the effect of process parameters on the output responses. In the present study, a 0.25 mm diameter Combii wire (brass with diffused zinc) was used to machine a 2 mm thick titanium plate into the shape of a gear. The process was carried out using a full factorial design of experiments with 81 combinations. The ANOVA table reveals that pulse on time and wire feed rate are the significant process parameters affecting the responses in the WEDM process. The optimized setting obtained can be further used to produce high quality miniature gears.
Keywords: ANOVA; Brass; Full factorial; Gear; MOORA; Titanium; Pulse on time; Wire EDM; Wire feed rate.
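The MOORA step of such a hybrid approach can be sketched as follows; the decision matrix and the choice of beneficial and non-beneficial responses are illustrative assumptions, not the paper's data, and the GA coupling is not reproduced.

```python
import numpy as np

def moora_rank(X, beneficial):
    """MOORA: vector-normalise each criterion, then sum beneficial
    criteria and subtract non-beneficial ones for each alternative."""
    X = np.asarray(X, dtype=float)
    norm = X / np.sqrt((X ** 2).sum(axis=0))   # column-wise vector normalisation
    sign = np.where(beneficial, 1.0, -1.0)
    scores = (norm * sign).sum(axis=1)         # assessment value y_i
    ranking = np.argsort(-scores)              # best (highest y_i) first
    return scores, ranking

# Hypothetical data: rows = parameter settings, columns = responses
# (material removal rate: beneficial; surface roughness, kerf width: non-beneficial).
X = [[1.8, 2.1, 0.30],
     [2.4, 2.6, 0.33],
     [2.1, 1.9, 0.28]]
beneficial = np.array([True, False, False])

scores, ranking = moora_rank(X, beneficial)
print("scores:", np.round(scores, 4))
print("best setting index:", ranking[0])
```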
Abstract: Lean Six Sigma (LSS) is widely accepted as a successful quality improvement program. However, many studies have reported that companies struggle with LSS projects, and Indian manufacturing firms are no exception. This article therefore provides a description of LSS implementation at a manufacturing company. The company manufactures PTO (Power Take Off) shafts for tractors but was experiencing high non-conformance resulting in part rejection; the study therefore deals with defect reduction through LSS implementation. Using the DMAIC approach with suitable tools, statistical analyses were performed to identify the bottlenecks in the operations, and corrective measures were taken to improve the quality of the PTO shafts. In addition, the researchers proposed preventive methods for recurring issues and developed a plan to sustain the project and secure the required results. At the end of the project, considerable cost savings and defect reduction were achieved. The study concludes with the managerial implications and the contribution to society, along with the limitations of the study.
Keywords: Lean Six Sigma (LSS); Force Field Analysis; Cost Savings; Process Improvement.
Abstract: In this study, the authors consider the context of products whose component failures cannot be rectified through repair actions but can only be fixed by replacement. The authors develop a cost-effective, reliability-based decision model for the replacement of street lights. Guidance for benchmarking the maintenance time needed to operate the street lights with minimum expenditure is discussed. A replacement policy based on a fuzzy reliability index, which can effectively deal with uncertainty and vagueness, is proposed. The theoretical model is explained with a numerical procedure to illustrate the applicability of the proposed study. The study establishes the link between individual replacement and group replacement policies by defining the economic instant for replacing street lights. A decision support model incorporating uncertainty and impreciseness is presented to estimate and plan prior maintenance efforts. The individual and group replacement policies are examined within a theoretical framework that defines their monetary contribution. The study provides valuable insight into using real-time data for designing a replacement model with fuzzy logic, and the authors propose a fuzzy reliability approach to check the optimality of the group replacement policy.
Keywords: Maintenance; Generalized Trapezoidal Fuzzy Number set (GTFNs); Replacement; Uncertainty; Economic Life; Reliability.
Abstract: A batch arrival retrial queueing model with k optional stages of service, extended Bernoulli vacation and a stand-by server is studied. After completing the ith stage of service, the customer may choose the (i+1)th stage with probability θi or leave the system with probability 1-θi for i=1,2,...,k-1, and with probability 1 for i=k. After each service completion, the server may take a vacation with probability v1 and extend the vacation with probability v2, or rejoin the system after the first vacation with probability 1-v2. The busy server may break down, and the stand-by server provides service only during the repair times. At the completion of service, vacation or repair, the server searches for a customer in the orbit (if any) with probability α or remains idle with probability 1-α. Using the supplementary variable method, the steady state probability generating function for the system size and system performance measures such as Ls, Lq, Ws and Wq are derived. Simulation results are given using MATLAB.
Keywords: Retrial; k - optional service; orbital search; standby; extended vacation; Supplementary Variable Technique.
Abstract: Service-oriented architecture is a widely accepted approach that facilitates business agility by aligning IT with business. Service identification with the right level of granularity is the most critical aspect of service-oriented architecture and is considered one of the first steps in the service-oriented development life cycle. Services must be defined or identified with service reuse in different business contexts in mind. Service identification is challenging for application development teams for several reasons, such as lack of business process documentation, lack of expert analysts, lack of business executive involvement, lack of service reuse, and lack of sound criteria for choosing the appropriate service. Service-oriented architecture projects use large repositories in which services are stored randomly, which adds considerable search time when a service is looked up in the database. In the case of trigger-based applications, the current service storage process is not very effective owing to inadequacies in the service level agreement. In this regard, the author(s) explored the possibilities of service identification using k-means clustering. This paper suggests an approach to identify the appropriate service using a distributed repository based on an enhanced k-means algorithm. The creation of a distributed repository based on business service functionality reduces search time and service replication, and increases the performance and reliability of the service. The proposed model has been experimentally validated, and the author(s) found a significant decrease in service search time. The model will be helpful in building applications with a minimal service level agreement.
Keywords: Service-oriented architecture; K- Mean; Service Identification; Cluster.
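A minimal sketch of clustering a service repository with k-means over TF-IDF vectors, so that a lookup only searches within the nearest cluster; the service descriptions below are hypothetical and the paper's "enhanced" k-means refinements are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical service descriptions drawn from a repository.
services = [
    "create customer account", "update customer address",
    "process card payment", "refund card payment",
    "check order status", "cancel customer order",
]

vec = TfidfVectorizer()
X = vec.fit_transform(services)

# Partition the repository into functional clusters; at lookup time a
# query is vectorised and searched only within its predicted cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label, name in zip(km.labels_, services):
    print(label, name)

query = vec.transform(["customer payment refund"])
print("search within cluster:", km.predict(query)[0])
```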
Abstract: The number of natural disasters and the number of people affected by them have increased in recent years. In the last two decades, the field of disaster management and humanitarian logistics has received more attention. The design of the relief logistics network, a strategic decision, and relief distribution, an operational decision, are the most important activities of disaster operations management before and after a disaster. Since the relevant information can be updated after the disaster, we consider a relief helicopter to cover the lack of inventory in different depots. In the proposed mathematical model, pre-disaster decisions are determined according to different scenarios in a two-stage optimization scheme. Moreover, we present a meta-heuristic algorithm based on particle swarm optimization as a solution method. Finally, the model for the two stages of disaster management is tested on several instances. Computational results based on three approaches confirm that the proposed model performs well.
Keywords: Disaster management; Scenario based; Relief distribution; facility location problem.
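A generic particle swarm optimisation loop is sketched below on a toy objective standing in for the relief-distribution model; the scenario-based two-stage formulation itself is not reproduced, and all parameters are illustrative defaults.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lb=-10.0, ub=10.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy stand-in objective (e.g. a surrogate for unmet demand plus transport cost).
sphere = lambda z: float(np.sum(z ** 2))
best_x, best_f = pso(sphere, dim=5)
print(best_f)
```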
Abstract: Global competition in the international market is forcing industries to increase productivity and profitability. Time-driven activity-based costing (TDABC) is a newly developed approach in the field of cost accounting. Companies are more interested in increasing productivity and profitability than in merely assigning costs to products. This paper presents an approach to estimating the cost of a process industry using TDABC and shows how it is useful in increasing productivity and profitability. The approach is explained with the help of a case study in the process industry. The results indicate that TDABC provides useful information for identifying areas for improving the productivity of the industry and its effect on profitability. This paper also discusses how TDABC helps in decision making for improving productivity and profitability.
Keywords: TDABC; process industry; productivity; profitability.
Abstract: In this paper, a simulated annealing (SA) algorithm is employed to solve the flowshop scheduling problem with the objective of minimising the completion time variance (CTV) of jobs. Four variants of the two-phase SA algorithm (SA-I to SA-IV) are proposed to solve the flowshop scheduling problem with the objective of minimising the CTV of jobs without considering the right shifting of completion times of jobs on the last machine. The proposed SA algorithms have been tested on 90 benchmark flowshop scheduling problems. The solutions yielded by the proposed SA variants are compared with the best CTV of jobs reported in the literature, and the proposed SA-III is found to perform well in minimising the chosen performance measure (CTV), particularly in medium and large size problems.
Keywords: scheduling; permutation flowshop; simulated annealing algorithm; completion time variance; CTV.
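A single-phase SA sketch with a pairwise-swap neighbourhood minimising the CTV of jobs in a permutation flowshop; the two-phase variants SA-I to SA-IV and the benchmark instances are not reproduced, and the instance below is hypothetical.

```python
import math
import random

def completion_times(perm, p):
    """Completion times on the last machine for a permutation flowshop;
    p[j][k] = processing time of job j on machine k."""
    m = len(p[0])
    prev = [0.0] * m
    out = []
    for j in perm:
        cur = [0.0] * m
        cur[0] = prev[0] + p[j][0]
        for k in range(1, m):
            cur[k] = max(cur[k - 1], prev[k]) + p[j][k]
        prev = cur
        out.append(cur[-1])
    return out

def ctv(perm, p):
    c = completion_times(perm, p)
    mean = sum(c) / len(c)
    return sum((x - mean) ** 2 for x in c) / len(c)

def sa_ctv(p, T0=100.0, cooling=0.95, moves_per_T=50, T_min=1e-3, seed=1):
    random.seed(seed)
    n = len(p)
    cur = list(range(n))
    random.shuffle(cur)
    cur_val = ctv(cur, p)
    best, best_val, T = cur[:], cur_val, T0
    while T > T_min:
        for _ in range(moves_per_T):
            i, j = random.sample(range(n), 2)        # pairwise-swap neighbour
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]
            delta = ctv(cand, p) - cur_val
            if delta < 0 or random.random() < math.exp(-delta / T):
                cur, cur_val = cand, cur_val + delta
                if cur_val < best_val:
                    best, best_val = cur[:], cur_val
        T *= cooling
    return best, best_val

# Hypothetical 5-job, 3-machine instance.
p = [[4, 3, 6], [2, 7, 5], [6, 2, 4], [3, 5, 3], [5, 4, 2]]
print(sa_ctv(p))
```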
Abstract: Scheduling is a complex but important problem in real-world applications. Production scheduling with the objective of minimizing the makespan is an important task in manufacturing systems. In most scheduling studies to date, the processing time of each job on each machine has been assumed to be a crisp real number. In real-world applications, however, processing times are often imprecise and may vary dynamically because of human factors or operating faults. This paper considers an n-job, m-machine flow shop scheduling problem with the objective of minimizing the makespan. In this work, fuzzy numbers are used to represent the processing times in flow shop scheduling. Fuzzy and neural network based concepts are applied to the flow shop scheduling problem to determine an optimal job sequence that minimizes the makespan. The performance of the proposed hybrid model is compared with existing methods selected from different papers. Several problems are solved with the present method, and it is found suitable and workable in all cases. A comparison of the present method with existing methods is also provided.
Keywords: Flow shop scheduling; sequence; fuzzy number; defuzzification; artificial neural network; makespan.
Abstract: Business units, including large, medium and small-scale entities, conventionally conduct their activities through business processes. Globalization has resulted in the gradual introduction of automation systems at various levels of the business enterprise, specifically focussed on capturing essential business activities across the entity. These systems, including Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES) and Plant Systems (PS), have been adopted by larger corporates for executing and optimizing business functions. These large multinationals are complex entities with complex business structures and business processes, and the quantification of the automation, escalations and critical variables of these business processes has not been effectively conducted. A systems thinking approach addresses the complexity of integrating all enterprise functions and creates a framework for evaluating the limitations and synergies so as to optimize these processes. This research focuses on the development and configuration of a simulation model for modelling enterprise maturity via business processes. The research approach includes the hierarchical layout and segregation of these business processes, exploring enterprise operations using business process tools, techniques and methodologies aligned with a systems thinking approach. A simulation framework is configured and tested. The results show that a simulation model can benefit a complex organization, specifically in evaluating the time taken to execute business processes, and indicate that interdependent processes can be modelled together by determining the impacts of multiple critical variables in minimizing business process execution times.
Keywords: simulation-based approach; manufacturing execution system; enterprise resource planning; Plant Systems; business process optimization; systems thinking.
Abstract: The purpose of this paper is to explore and test certain assumptions concerning the fantasy sports industry as an enabler of quick, accurate, and descriptive information for its fans, from a gender perspective. There are, however, distinct disadvantages that fantasy sports gambling addiction and excessive wagering may have on seriously active users and on society in general, and this study should help provide a baseline for future studies. To explore the differences, the authors sought to provide statistical evidence that corroborates gender differences based on technology acceptance models. The sample consisted of relatively well-paid professionals, many of whom routinely engage in fantasy sports; a personal interview procedure was implemented, and the sample is highly representative of the service industry located in the metropolitan area of Pittsburgh, PA. Multivariate statistical analyses were used to test the hypotheses and determine significant gender differences. It was found that male professionals who were intensely engaged fantasy sports respondents, who spent considerable amounts of money on fantasy leagues, who found fantasy guides, expert opinions, and related information helpful in changing rosters, and who were intensive users of mobile technology for personal use were significantly and negatively related to the dependent variables, with significant gender differences. Such respondents did not perceive a global concern about fantasy gambling activities, although a considerable portion of the sample felt otherwise.
Keywords: customer relations management (CRM); empirical; fantasy sports; gender; information technology; market strategy.
Abstract: The aim of this research is to minimize the cutting force in turning by optimizing cutting parameters such as cutting speed, feed, and depth of cut. Cutting force is one of the important characteristic variables to be watched and controlled in cutting processes in order to optimize tool life and the surface roughness of the workpiece. The principal presumption is that cutting forces increase as the tool wears. The cutting force is optimized using metaheuristics, namely the genetic algorithm (GA) and the teaching-learning based optimization (TLBO) algorithm. The analysis of the results shows that the optimal combination for low resultant cutting force is low cutting speed, low feed and low depth of cut. The study finds that tool life can be enhanced by adjusting the machining parameters, because cutting forces increase rapidly as the tool approaches the end of its life; cutting force can therefore be used as an indicator to extend tool usage. As a result, production cost can be minimized, tool usage extended, and machining time reduced.
Keywords: Optimisation; cutting force; Tool life; metaheuristic; GA; TLBO.
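A compact TLBO loop is sketched below; since the paper's cutting-force model is not given here, a hypothetical linear surrogate in cutting speed, feed and depth of cut is used, for which the optimum indeed sits at the low ends of the ranges.

```python
import numpy as np

def tlbo(f, lb, ub, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, (pop, lb.size))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move learners towards the best solution.
        teacher = X[F.argmin()]
        mean = X.mean(axis=0)
        for i in range(pop):
            TF = rng.integers(1, 3)                  # teaching factor, 1 or 2
            cand = np.clip(X[i] + rng.random(lb.size) * (teacher - TF * mean), lb, ub)
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
        # Learner phase: learn from a randomly chosen peer.
        for i in range(pop):
            j = rng.integers(pop)
            while j == i:
                j = rng.integers(pop)
            direction = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(lb.size) * direction, lb, ub)
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return X[F.argmin()], F.min()

# Hypothetical cutting-force surrogate (N): increases with speed, feed and depth of cut.
force = lambda x: 50 + 0.8 * x[0] + 900 * x[1] + 300 * x[2]
best, val = tlbo(force, lb=[50, 0.05, 0.5], ub=[200, 0.30, 2.5])
print(best, val)
```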
To find the suitability of CMS in Indian industries in comparison with other manufacturing systems using the AHP technique.
Abstract: Cellular manufacturing (CM) is a well-known manufacturing technology that helps to improve manufacturing flexibility and productivity through maximum utilization of available resources. CM has proved to be a strong tool for increasing output in batch and job shop production. In cellular manufacturing, similar machines are grouped into machine cells and parts with similar processing requirements are gathered into part families. In this paper, factors of CMS are identified and evaluated through the AHP technique. Decisions involve many intangible factors that must be traded off; to do that, they have to be measured alongside tangible factors, whose evaluation must also assess how well they achieve the aims of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons that relies on the judgements of experts to derive priority scales. These are the scales that measure intangible factors in relative terms. The comparisons are made using a scale of absolute judgements that represents how much one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible, to obtain better consistency is a concern of the AHP. The derived priority scales are synthesized by multiplying them by the priority of their parent nodes and adding over all such nodes. The stability of any country depends on the stability of its production sector, and progress in industrial society has been accompanied by the development of new technologies. The AHP has the strength to structure complex, multi-person, multi-attribute and multi-period problems hierarchically. In this paper, a decision-making model is proposed for CMS evaluation and selection. This strategic decision-making model is based on both intangible and tangible criteria and uses the analytic hierarchy process (AHP) approach.
Keywords: AHP; Analytic hierarchy process; CMS; cellular manufacturing system.
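The core AHP computation referred to above, priorities from the principal eigenvector of a pairwise comparison matrix plus Saaty's consistency ratio, can be sketched as follows; the comparison values are hypothetical.

```python
import numpy as np

def ahp_priorities(A):
    """Priority vector (principal eigenvector) and consistency ratio of a
    pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = eigvals.real.argmax()
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    lam_max = eigvals[k].real
    CI = (lam_max - n) / (n - 1)
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
    return w, (CI / RI if RI > 0 else 0.0)

# Hypothetical comparison of three criteria (e.g. flexibility, cost, quality).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, CR = ahp_priorities(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(CR, 3))
```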
A model for operational budgeting by the application of interpretive structural modeling approach: A case of Saipa Investment Group Co.
Abstract: The operational budgeting system is a managerial system used by governments at national and local levels to improve the efficiency and effectiveness of resource consumption in organizations. This system bases budget credit allocation on the performance of organizational units, with the aim of generating outputs (products/services), i.e. short-term objectives, and/or achieving long-term goals. Executive agencies are thereby led towards transparency in resource consumption, towards pursuing their goals and strategies, and towards greater accountability. The present research aimed to present a model of operational budgeting based on interpretive structural modelling (ISM) in Saipa Co. Forty-six factors underpinning operational budgeting were identified through a literature review, experts were surveyed to derive the most important of these, resulting in 14 factors, and an ISM-based structured matrix questionnaire was designed to find out the interrelations of these factors. The data collected by the questionnaire were analyzed by ISM and mapped out at seven levels in an interactive network, in which shareholders and accounting and financial reporting systems were placed at the highest level. The driving power-dependence matrix revealed that the most influential factor was the annual budget and the five-year development plan, so that the operational budgeting of Saipa Investment Co. depends on this factor.
Keywords: budget; budgeting; operational budgeting; interpretive structural model.
Abstract: The specific objectives of this study are the following: to analyse the association between hotel features and the use of the Uniform System of Accounts for the Lodging Industry (USALI); and to analyse whether the use of this system is associated with the price charged by the hotels. Data collection began with a survey addressed to the financial managers of 241 4-star and 5-star hotels located in Portugal. To meet the proposed objectives, information on the price charged by the responding hotels was also collected from the Booking.com online platform. The results obtained allow us to indicate as main contributions the following: evidence of the existence of associations between hotel features and the use of USALI; and evidence that the use of USALI influences the price charged.
Keywords: Uniform System of Accounts for the Lodging Industry; USALI; hotel features; hotel price; contingency variables.
Abstract: This study examines the role of the marketing mix in brand switching among Malaysian smartphone users. It also investigates the mediating role of brand effect in the relationship between the marketing mix (namely product, price, place and promotion), service, and brand switching. It uses a self-administered questionnaire to collect data from 304 participants, and their responses were analysed using the Statistical Package for the Social Sciences (SPSS) and Partial Least Squares (Smart-PLS). Evidence from the study reveals that price and promotion have effects on brand switching towards smartphones. It also shows that brand effect partly mediates the connections between product, price and brand switching towards smartphones. Although the Samsung brand remains the market leader in the smartphone industry in Malaysia, empirical research on the nexus between the marketing mix and brand switching in Malaysia remains very scanty. Despite some limitations that could affect its generalisability, this study enhances marketers' understanding of the marketing mix, which should receive adequate attention to attract smartphone users to switch to their brands. It also provides marketers with knowledge of the usage behaviours of Malaysian smartphone users, which could be useful for launching new models that suit the preferences of smartphone users.
Keywords: Marketing Mix; Brand Effect; Brand Switch; Smartphone.
Abstract: In Thailand, the petroleum industry is a major contributor to the economy as a supplier of crucial petroleum products. Due to the country's geographical location, many multinational oil and gas companies have established their operations there. For Thai petroleum firms to remain competitive in the industry, effective strategies for commercializing inventions into innovations can bring competitive advantages in terms of financial returns, business opportunities for partnerships, and advanced operations. Nonetheless, commercialization of inventions is becoming increasingly difficult due to changes in the market, and firms operating without an explicitly established commercialization process are at a disadvantage. The objective of this research is to develop a formal commercialization model suitable for commercializing product and process inventions in Thailand's petroleum industry. The methodology used in this research consists of three stages: 1) develop a preliminary commercialization model, 2) refine the model, and 3) apply the model to a Thai petroleum firm as a case study. As a result, two commercialization models are developed, namely an industrial commercialization model and a company commercialization model.
Keywords: Commercialization Model; Innovative Products; Petroleum Firm.
Abstract: In this paper we develop an inventory model for deteriorating items in which demand for the product is a function of the expiration date; in such cases demand generally decreases as the product approaches its expiration date. Different cases based on the allowed trade credit period are discussed under an inflationary environment. Shortages are allowed and partially backlogged. To exemplify the model, numerical examples for all the cases and a sensitivity analysis with respect to the important parameters are provided. The convexity of the cost function and the variation in total cost with changes in the parameters are illustrated graphically. The objective of this paper is to find the optimal ordering quantity that minimizes the total associated cost of the system.
Keywords: Deterioration; Trade credit period; Inflation; Shortages; Partial Backlogging.
Abstract: The article provides an updated, extensive literature review on Process Maturity. On a general level, Process Maturity can be defined as the degree to which a process is defined, managed, measured, and continuously improved; the aim of working with maturity is twofold, either to measure the level of maturity in order to compare with other processes or to identify improvement areas. The findings of this study indicate areas in need of further research. Firstly, the fundamentals of Process Management seem to be missing when the work with Process Maturity is described, which could affect the quality of maturity models. A second area of further research relates to the absence of approaches and methods for improving Process Maturity, since improving maturity is often addressed in terms of what to do, not how to do it.
Keywords: maturity; process maturity; processes; maturity model; process management; quality management; total quality management; process improvement; business process management; quality maturity.
Abstract: In this study a non-Markovian queueing model is considered. In this queueing framework, customers arrive in batches according to a Poisson process. Each arriving customer may choose any one of the N types of service rendered, and the service time follows a general distribution. In addition, the system breaks down at random, after which the server undergoes a two-phase repair process: in the first phase a general repair is carried out, and in the second phase a repair specific to the category of fault is performed. After completing service, the server takes a mandatory vacation, which may be followed by an optional extended vacation; the vacation can be viewed as a maintenance period in which the support work required for the server is completed. Reneging, which occurs during breakdown and vacation periods due to the impatience of customers, is an additional aspect of this model. Using the birth and death queueing process, the equations governing the model are framed. The probability generating function of the queue size and the performance measures of the model are derived. The model is validated by means of a numerical example, and the graphical representation gives a clear picture of the process carried out in a mobile communication setting; it also offers useful insights for reducing system breakdowns and reneging and for speeding up the system.
Keywords: non-Markovian queue; optional types of service; service interference; optional extended vacation; maintenance work; revamp process of two stages.
Abstract: India has the second largest telecom network in the world after China, with about 1,206 million subscribers. The sharp increase in teledensity and the continued decline in tariffs in the telecom sector have contributed significantly to India's economic growth. Mobile number portability (MNP) was introduced in India in 2011 but was limited to the licensed service area (LSA); countrywide MNP was launched in 2015. By March 2017, MNP requests amounted to 23.3% of the mobile subscriber base in India. This paper discusses the Indian telecom sector, the progress of MNP in India, MNP implementation around the world, the benefits of MNP and various studies on MNP. Porting fees, porting speed, service quality, price, switching cost and satisfaction have been found to be the determinants of switching in studies carried out around the world.
Keywords: mobile number portability; MNP; switching intention; Indian telecom sector; mnp benefits; telecom operators; determinants of switching.
Abstract: Companies are always looking to optimize the use of their material and human resources. Nowadays, the issue of competency is important and crucial in industry. When the same task is performed by two operators with the same qualification, the performance achieved varies, which introduces the concept of individual competence level. This article presents an assessment method for a multi-skilled workforce and provides a useful model for solving workforce assessment problems. We discuss how to account for the differences and similarities between the acquired level and the required level. The objective of our method is to combine the AHP and TOPSIS techniques to assess the individual competence level.
Keywords: Multi-skilled workforce; activities; assessment; individual competence level; FMEA; AHP; TOPSIS.
Abstract: The paper proposes a framework for evaluating the organisational competitiveness attained through sustainable manufacturing. An assessment model using the analytic hierarchy process (AHP) has been adopted for the evaluation of organisational competitiveness. A conceptual model is developed to understand the interrelationships between seven sustainable manufacturing practices and the organisational competitiveness dimensions, with 14 sustainable manufacturing outputs at the intermediate level. The attainment of the organisational competitiveness dimensions is evaluated by computing the global priority scores of the practices and outputs of sustainable manufacturing. The linkage matrix developed between the practices and outputs of sustainable manufacturing, and between the outputs and organisational competitiveness, is the significant contribution of this work; it can be used by practising managers and researchers to understand the various interactions and hence promote research in the field of sustainable manufacturing.
Keywords: organizational competitiveness; sustainable manufacturing practices; sustainable manufacturing output; decision making; linkage matrix.
Abstract: Measuring the performance of banks and determining the key factors relating to their efficiency and effectiveness has been gaining popularity in recent times. An efficient operating system is crucially important for banks, and in this context the ranking of banks is becoming essential. Ranking helps a bank improve its performance both relative to itself and relative to others. The present paper seeks to measure performance and proposes a method to rank Indian public sector banks by combining multiple criteria decision making tools, namely AHP and TOPSIS. A correlation test is applied to determine the degree of relation between the different sets of variables. All public sector banks are considered for the purposes of the study. The results present an alternative ranking of the banks. Finally, the paper provides a better view of the areas in which each bank needs to improve in comparison with the other banks.
Keywords: Banking; Efficiency; Analytic Hierarchy Process; TOPSIS; Performance ranking.
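A TOPSIS sketch for ranking alternatives such as banks; the criteria, the weights (e.g. AHP-derived) and the figures below are hypothetical.

```python
import numpy as np

def topsis(X, w, beneficial):
    X = np.asarray(X, float)
    w = np.asarray(w, float)
    R = X / np.sqrt((X ** 2).sum(axis=0))      # vector normalisation
    V = R * w                                  # weighted normalised matrix
    ideal = np.where(beneficial, V.max(axis=0), V.min(axis=0))
    anti = np.where(beneficial, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)             # closeness to the ideal solution

# Hypothetical banks x criteria (ROA %, NPA %, CASA ratio %); NPA is non-beneficial.
X = [[0.9, 6.2, 38.0],
     [0.4, 9.8, 33.0],
     [0.7, 7.5, 41.0]]
w = [0.5, 0.3, 0.2]                            # e.g. AHP-derived weights
beneficial = np.array([True, False, True])

cc = topsis(X, w, beneficial)
print("closeness:", np.round(cc, 3), "ranking:", np.argsort(-cc) + 1)
```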
Abstract: In conventional car body process planning, the parts and spots of an assembly are evenly distributed across geometry fixtures (stations). The remaining spots, over and above the cycle time of the geometry stations, are moved to re-spot stations. The global trend of geometry and re-spot fixture distribution indicates that the percentage of geometry fixtures is higher in manual body shops with less than 40% automation, while the percentage of geometry stations is up to 14% in 100% automated body shops. This trend shows the inappropriate planning of geometry and re-spot stations for manual and automated lines. The main focus in conventional planning is to set geometry through a balanced distribution of parts and spots; productivity is often not considered a top priority. A geometry spot distribution model has been developed to resolve this concern. The proposed model can be used to calculate the appropriate number of geometry and re-spot stations required for a lean body shop. Improvements in quality establishment and in the productivity of the body shop are discussed in this paper with case studies. The developed methodology for appropriate spot weld distribution among the welding stations of a body shop is simple for product and process designers to use during any new product development phase, and can also be used for the optimization of existing body shop cells.
Keywords: Productivity; Geometry Station; Re- Spot Station; Body Shop; Weld Shop; Body-In-White (BIW); Floor space reduction; Process planning.
Abstract: Today there is a need to develop a unique market behaviour strategy that uses a company's own competitive advantages to achieve the highest level of real competitive ability in a sector. The driving force of competition requires companies to have thorough strategies to improve the quality of the goods produced, to decrease production costs and to increase labour efficiency. In this context, the use of benchmarking tools to improve innovation efficiency at companies is an important task. The present work considers decision-making models and methods for managing a company's competitiveness in terms of benchmarking. The model of competitive interaction introduced in the article, in the form of a system of coupled Van der Pol equations with time lag, was successfully tested in radio physics and is now successfully applied in economics. The article demonstrates the application of this system to describe benchmarking of different economic enterprises.
Keywords: benchmarking; digital economy; Big data; enterprise management system; systems of decision-making support; innovation activity; system of Van der Pol equations with time lag; mathematics.
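As an illustration only, two linearly coupled Van der Pol oscillators can be integrated numerically as below; the time lag of the paper's model is omitted here (a delay term would need a DDE solver), and the coupling constant and the economic interpretation are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_vdp(t, y, mu1=1.0, mu2=1.2, k=0.3):
    """Two Van der Pol oscillators with linear mutual coupling k.
    State y = [x1, v1, x2, v2]; each x_i could be read as the deviation of a
    company's performance indicator from its benchmark."""
    x1, v1, x2, v2 = y
    dv1 = mu1 * (1 - x1 ** 2) * v1 - x1 + k * (x2 - x1)
    dv2 = mu2 * (1 - x2 ** 2) * v2 - x2 + k * (x1 - x2)
    return [v1, dv1, v2, dv2]

sol = solve_ivp(coupled_vdp, (0, 50), [0.5, 0.0, -0.5, 0.0], max_step=0.05)
print("final state:", np.round(sol.y[:, -1], 3))
```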
Abstract: This study primarily aimed to identify the direct effect of perceived support on employees' voice behaviour in the workplace. It also examined the interactive impact of locus of control on the relationship between perceived support and work engagement. The study was built on the voice behaviour literature and employed a survey methodology, focusing on a government sector firm, specifically Basra Electricity Production. The data collection tool used was a questionnaire, which was distributed to 333 employees in the firm. The collected data were analysed using AMOS version 22. Based on the results, work engagement fully mediates the relationship between perceived support and employee voice behaviour, while external locus of control moderates the relationship between perceived support and work engagement. Suggestions are provided for several avenues for future studies.
Keywords: Perceived support; work engagement; employee voice behavior and locus of control.
How to increase organizational learning and knowledge sharing through human resource management processes?
Abstract: The purpose of this paper is to examine how human resource management (HRM) processes and knowledge sharing affect organisational learning within the context of steel industry. Drawing from the literature on HRM, this study hypothesises and tests the relationship between HRM processes, knowledge sharing and organisational learning. The authors used survey research to collect the data. The PLS path modelling approach was used to analyse the data and the conceptual model. The empirical results from the structural model suggest that three out of five HRM processes (i.e., training, job design and job quality) influenced knowledge sharing. Furthermore, knowledge sharing was a direct antecedent of organisational learning.
Keywords: Human resource management processes; knowledge sharing; organizational learning; structural equation modeling; steel industry.
Abstract: Due to the varying requirements of customers, it has become difficult for manufacturing firms to sustain their position in the market. Because of this, firms are adopting new technologies in their existing systems, and the flexible manufacturing system (FMS) is one solution to this problem. FMS is an automated manufacturing system capable of producing a wide variety of products with good quality; however, the cost of implementing FMS is high. Decisions related to the various parameters involved in implementing and managing the system are therefore among the crucial steps in FMS. The present work focuses on developing a methodology that can make an appropriate decision in selecting the best experimental level, using simple calculations that save money and time. To this end, a comparative study of decision-making approaches that combines the Shannon entropy and weighted aggregated sum product assessment (WASPAS) methods is performed to solve the selection problem of operating conditions. The criteria weights are first calculated by the Shannon entropy method, and the alternatives are then ranked by using the calculated weights in the WASPAS method. Based on the obtained ranking, management can take decisions for refining the performance parameters of the system.
Keywords: Selection; factor-level; shannon entropy method; weighted aggregated sum product assessment; flexible manufacturing system.
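A sketch of the entropy-WASPAS chain described above: Shannon entropy supplies objective criteria weights, and WASPAS combines the weighted sum and weighted product scores; the operating levels and criteria are hypothetical.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from Shannon entropy of the decision matrix."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    d = 1.0 - E                                   # degree of divergence
    return d / d.sum()

def waspas(X, w, beneficial, lam=0.5):
    X = np.asarray(X, float)
    norm = np.where(beneficial, X / X.max(axis=0), X.min(axis=0) / X)
    wsm = (norm * w).sum(axis=1)                  # weighted sum model
    wpm = np.prod(norm ** w, axis=1)              # weighted product model
    return lam * wsm + (1 - lam) * wpm            # joint generalised criterion

# Hypothetical FMS operating levels x criteria (throughput, utilisation %, WIP).
X = [[120, 78, 35],
     [140, 82, 48],
     [110, 90, 30]]
beneficial = np.array([True, True, False])

w = entropy_weights(X)
Q = waspas(X, w, beneficial)
print("weights:", np.round(w, 3))
print("ranking (best first):", np.argsort(-Q) + 1)
```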
Abstract: There exists insufficient literature on the classification and taxonomy of the Indian tea supply chain (TSC), so the basic objectives of this study are to identify the existing TSCs and classify them accordingly. The paper is based on a three-year detailed field study of the TSC in Assam, encompassing tea estates, small tea gardens (STGs), bought leaf factories (BLFs), a research institute, auction centres, branding companies, tea distributors, and retailers. The paper presents an integrated structure that includes all the stakeholders and their roles in the tea value chain, which has not been reported in previous research. The study also develops an integrated tea supply chain framework and, apart from this, classifies the tea supply chain in the context of Assam, India. The present study will help further research to optimise business operations and maximise the profit of the Indian tea industry.
Keywords: Tea Industry; Tea Research Center; Tea Estate; Small Tea Garden; Plucking; Tea Factory; Bought Leaf Factory; Tea Auction Centre; Tea Branding Company; Value Chain; Supply Chain Framework.
Abstract: This paper applies queuing models (M/G/N/N) to determine the right price for parking, given that the arrival rate and the staying time depend on the parking price. The models cover both a payment-per-hour scheme and an entrance-fee scheme, with one or several customer types, where each customer type may face different prices (price discrimination). The models are also applicable to cases where the objective is to set the park occupancy rate at a desired level. The analysis of the numerical examples demonstrates the applicability of the models and provides some interesting insights about setting the correct parking price under the appropriate pricing scheme. The conclusion is that parking pricing is one of the most important tools for process management and benchmarking of a car park.
Keywords: parking pricing; queuing models; M/G/N/N; parking occupancy; revenue; benchmarking.
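Because an M/G/N/N loss system is insensitive to the service-time distribution beyond its mean, its blocking probability is given by the Erlang-B formula; the sketch below couples that with a hypothetical linear price-demand curve to trace occupancy and pay-per-hour revenue against price.

```python
def erlang_b(a, n):
    """Blocking probability of an M/G/n/n system with offered load a = lambda * E[S]."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)                  # stable recursive form
    return b

def park_metrics(price, n_spaces=100, base_rate=80.0, elasticity=0.04, mean_stay=1.5):
    """Hypothetical linear price-demand curve: arrivals per hour fall with price."""
    lam = max(base_rate * (1 - elasticity * price), 0.0)
    a = lam * mean_stay                          # offered load (erlangs)
    pb = erlang_b(a, n_spaces)
    carried = lam * (1 - pb)                     # cars actually admitted per hour
    occupancy = carried * mean_stay / n_spaces
    revenue = carried * mean_stay * price        # pay-per-hour scheme, revenue per hour
    return occupancy, revenue

for price in (1, 2, 3, 4, 5):
    occ, rev = park_metrics(price)
    print(f"price={price}: occupancy={occ:.2f}, revenue/hour={rev:.1f}")
```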
Abstract: The arrival of new workforce generation would necessitate a leadership type transcending traditional boundaries. Organisations must strive to develop a supportive work climate which inspires, strengthens and connects employees to perform their tasks vigorously. This paper primarily seeks to investigate the relationship between engaging leadership and work engagement. With reference to empirical studies and existing research on leadership and work engagement, a positive association between engaging leadership and work engagement mediated by engaging job and engaging environment is confirmed. More importantly, this study deals with the paucity of the structured literature regarding work engagement determinants, and offers a holistic model providing a logical ground for identifying empirical indicators and hypotheses to verify the theory. Apart from extending engaging leadership dimensions, certain paradigms are proposed for jobs and work environment to facilitate work engagement.
Keywords: Work Engagement; Engaging Leadership; Engaging Environment; Job Meaningfulness; Job Resources.
Abstract: Cross-docking is an initiative taken in the 1930s by the US trucking industry. The aim behind the initiative was to achieve lower inventory levels and optimised lead times by having an unbroken flow of product/material from the supplier to the end-customer. In order to have this seamless flow, one must ensure that the operations are synchronised. Hence, optimising the overall supply chain to be more efficient in terms of lead time and cost requires an interconnected and explicable view of all the factors that have to be taken into consideration. The study takes a real-time model as an example and illustrates how cross-docking can be managed more holistically to synchronise cross-docking operations at the distribution centre with its inbound and outbound network logistics.
Keywords: Cross-Docking; lower inventory; lead time; inbound; outbound; operations; synchronize; optimization; efficient; holistically; distribution centre; logistics; management; supply chain.
Abstract: Malaysian community colleges play a significant role in achieving Malaysia's vision of becoming a developed country by 2020. For this reason, their efficiency should be appropriately measured. However, measuring their efficiency using a conventional data envelopment analysis (DEA) model is not appropriate, since some of their inputs, e.g., entrant and enrolled students, are non-discretionary, while part of their output, graduate employability, is discretionary. This paper thus proposes an alternative super-efficiency slack-based measure approach for the case of non-discretionary factors in DEA. The proposed approach was used to evaluate the efficiency of 25 main campuses of Malaysian community colleges from 2012 to 2013. The results help the decision makers of Malaysian community colleges to discriminate among and rank efficient and inefficient community colleges in the presence of both super efficiency and non-discretionary factors. The significance of the inputs and outputs for efficiency status was tested by sensitivity analysis.
Keywords: data envelopment analysis; DEA; non-discretionary factors; slack-based measure; SBM; super efficiency.
Abstract: In this paper, a two-stage production process is considered, in which there is only one machine M1 in the first stage and two machines M2′, M2″ in the second stage. It is assumed that the output of M1 is the input for M2′ and M2″. During the breakdown time of M1, a reserve inventory is suggested for M2′ and M2″ to prevent idle time, which is costlier. An inventory model is derived here, based on certain assumptions, to find the optimum size of the reserve inventory of the semi-finished product of machine M1 to supply the machines M2′ and M2″ until M1 resumes functioning. Numerical illustrations are provided.
Keywords: breakdowns; repair time; inventory; change point; SCBZ property; production; shortages.
Abstract: This paper deals with the steady state analysis of a single server priority retrial queue with Bernoulli working vacation, where the regular busy server is subject to breakdown and repair. Two types of customers are considered: priority customers and ordinary customers. As soon as the orbit becomes empty, the server goes on a working vacation (WV), during which it works at a lower service rate. If there are customers in the system at the end of a vacation, the server becomes idle and ready to serve new arrivals with probability p (single WV), or it remains on vacation with probability q (multiple WVs). Using the supplementary variable technique, we obtain the steady state probability generating functions for the system and its orbit. Important system performance measures, the mean busy period and the mean busy cycle are discussed. Finally, some numerical examples are presented.
Keywords: retrial queue; preemptive priority queue; working vacations; supplementary variable technique.
Abstract: Wire electrical discharge machining (WEDM) is an advanced non-conventional machining process specifically used for producing complex 3D shapes in hard materials with high accuracy. The present study investigates the effect of various process parameters, viz. pulse on time, pulse off time, wire feed and servo voltage, on response variables such as cutting speed, material removal rate, kerf width and surface roughness in the machining of high carbon high chromium steel. Taguchi's L9 orthogonal array for four factors and three levels has been used for designing the experiment. The technique for order preference by similarity to ideal solution (TOPSIS) has been applied to select the optimal level of machining parameters. Analysis of variance (ANOVA) has been conducted to investigate the effect of the process parameters on overall machining performance. The effectiveness of the proposed optimal condition is validated through a confirmatory test. The results of this study highlight that parameters such as pulse on time and servo voltage significantly influence machining performance.
Keywords: wire EDM; electrical discharge machining; high carbon high chromium steel; material removal rate; metal cutting; microscopic view; productivity; Taguchi; TOPSIS; ANOVA; multi-response optimisation.
Abstract: Poka-yoke has been recognised as a proven approach towards achieving an 'error free' environment, especially in manufacturing. In this paper, an attempt has been made to justify the importance of poka-yoke in SMEs and to identify and analyse important drivers for the successful implementation of the poka-yoke concept. A literature review methodology has been utilised to identify important drivers for successful implementation of the poka-yoke concept in Indian automobile SMEs. The analytical hierarchy process (AHP) methodology has been used for ranking the identified drivers; the 'management attitude' driver has been identified as the top-ranked driver for implementing poka-yoke in Indian automobile SMEs. Further, the DEMATEL methodology has been used to understand and categorise these identified drivers of poka-yoke implementation into a cause group and an effect group. This paper may help production engineers and managers to identify errors and rectify them in order to gain competitive advantage over domestic and international automobile market players.
Keywords: analytical hierarchy process; AHP; decision making trial and evaluation laboratory; DEMATEL; drivers of poka-yoke; Indian automobile industry; poka-yoke; small and medium enterprises; SMEs; India.
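The DEMATEL step can be sketched as follows: normalise the direct-influence matrix, form the total-relation matrix T = D(I - D)^{-1}, and split drivers into cause and effect groups by the sign of R - C; the influence scores below are hypothetical.

```python
import numpy as np

def dematel(A):
    """DEMATEL: A is the average direct-influence matrix (0-4 scale)."""
    A = np.asarray(A, float)
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    D = A / s                                      # normalised direct-relation matrix
    T = D @ np.linalg.inv(np.eye(len(A)) - D)      # total-relation matrix
    R, C = T.sum(axis=1), T.sum(axis=0)            # row and column sums
    prominence, relation = R + C, R - C
    return prominence, relation                    # relation > 0 means cause group

# Hypothetical direct-influence scores among four poka-yoke drivers.
A = [[0, 3, 4, 2],
     [1, 0, 2, 3],
     [1, 2, 0, 2],
     [2, 3, 3, 0]]
prom, rel = dematel(A)
for i, (p, r) in enumerate(zip(prom, rel)):
    group = "cause" if r > 0 else "effect"
    print(f"driver {i + 1}: prominence={p:.2f}, relation={r:+.2f} ({group})")
```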
Abstract: Due to the advancement of science and technology, a new concept called big data has emerged. Its utility has attracted not only private companies and organisations but also governmental authorities; at the same time, it has raised certain ethical, moral and legal concerns. This paper looks at the threats to individuals' privacy caused by big data. It compares US laws with Indian laws with respect to privacy and data protection and attempts to offer a solution to safeguard individuals' privacy against such threats. The research methodology used is the doctrinal method along with armchair, exploratory, analytical and comparative methods. The authors argue that individuals' privacy can be safeguarded through a holistic approach wherein apt technology standards and management practices are adopted along with the force of law. Companies and organisations should adopt self-regulating operating standards and practices too. The concept of privacy in this paper is restricted to individual privacy and does not include national security concerns or the confidential data of corporations. The authors also suggest a framework for the protection of individual privacy and its implementation in India.
Keywords: big data; privacy laws; data protection; sensitive personal information; right to privacy; USA; India. | CommonCrawl |
I am a professor of pure mathematics. I did my PhD at the University of Chicago and worked at universities in the US and Canada before joining the University of Adelaide in 2006. I do research in complex analysis and complex geometry and related areas of mathematics. I teach courses at all levels and supervise postgraduate students.
My main research area is holomorphic geometry. My research has involved several other fields of mathematics, in recent years including the theory of toric varieties, geometric invariant theory, and the theory of minimal surfaces. I regularly publish in international journals and have active collaborations with mathematicians in several countries.
Since 2009, I have co-organised nine conferences, conference sessions, and workshops in Europe, Canada, and Australia. From August to November 2016, I was a research fellow at the Centre for Advanced Study at the Norwegian Academy of Science and Letters in Oslo.
2018 Arosio, L., & Larusson, F. (2018). Chaotic Holomorphic Automorphisms of Stein Manifolds with the Volume Density Property. Journal of Geometric Analysis, Online, 1-19.
2018 Forstnerič, F., & Lárusson, F. (2018). The Oka principle for holomorphic Legendrian curves in C²ⁿ⁺¹. Mathematische Zeitschrift, 288(1-2), 643-663.
2018 Kutzschebauch, F., Lárusson, F., & Schwarz, G. (2018). An equivariant parametric Oka principle for bundles of homogeneous spaces. Mathematische Annalen, 370(1-2), 819-839.
2017 Lárusson, F., & Truong, T. (2017). Algebraic subellipticity and dominability of blow-ups of affine spaces. Documenta Mathematica, 22(2017), 151-163.
2017 Alarcón, A., & Lárusson, F. (2017). Representing de Rham cohomology classes on an open Riemann surface by holomorphic forms. International Journal of Mathematics, 28(9), 1740004-1-1740004-12.
2017 Kutzschebauch, F., Lárusson, F., & Schwarz, G. (2017). Homotopy principles for equivariant isomorphisms. Transactions of the American Mathematical Society, 369(10), 7251-7300.
2016 Kutzschebauch, F., Larusson, F., & Schwarz, G. (2016). Sufficient conditions for holomorphic linearisation. Transformation Groups, 22(2), 475-485.
2016 Lärkänd, R., & Lárusson, F. (2016). Extending holomorphic maps from stein manifolds into affine toric varieties. Proceedings of the American Mathematical Society, 144(11), 4613-4626.
2015 Lárusson, F. (2015). Absolute neighbourhood retracts and spaces of holomorphic maps from stein manifolds to oka manifolds. Proceedings of the American Mathematical Society, 143(3), 1159-1167.
2015 Kutzschebauch, F., Larusson, F., & Schwarz, G. W. (2015). An Oka principle for equivariant isomorphisms. Journal für die Reine und Angewandte Mathematik, 2015(706), 193-214.
2014 Forstnerič, F., & Lárusson, F. (2014). Holomorphic flexibility properties of compact complex surfaces. International Mathematics Research Notices, 2014(13), 3714-3734.
2014 Lárusson, F., & Ritter, T. (2014). Proper holomorphic immersions in homotopy classes of maps from finitely connected planar domains into ℂ × ℂ*. Indiana University Mathematics Journal, 63(2), 367-383.
2014 Forstnerič, F., & Lárusson, F. (2014). Oka properties of groups of holomorphic and algebraic automorphisms of complex affine space. Mathematical Research Letters, 21(5), 1047-1067.
2013 Larusson, F., & Poletsky, E. (2013). Plurisubharmonic subextensions as envelopes of disc functionals. Michigan Mathematical Journal, 62(3), 551-565.
2012 Larusson, F. (2012). Deformations of Oka manifolds. Mathematische Zeitschrift, 272(3-4), 1051-1058.
2011 Forstneric, F., & Larusson, F. (2011). Survey of Oka theory. The New York Journal of Mathematics, 17a, 11-38.
2010 Larusson, F. (2010). What is... an Oka Manifold?. American Mathematical Society. Notices, 57(1), 50-52.
2009 Larusson, F. (2009). Affine simplices in Oka manifolds. Documenta Mathematica, 14(1), 691-697.
2009 Larusson, F., & Sigurdsson, R. (2009). Siciak-Zahariuta extremal functions, analytic discs and polynomial hulls. Mathematische Annalen, 345(1), 159-174.
2009 Larusson, F., & Shafikov, R. (2009). Schlicht Envelopes of Holomorphy and Foliations by Lines. Journal of Geometric Analysis, 19(2), 373-389.
2008 Larusson, F., & Sadykov, T. (2008). Dessins d'enfants and differential equations. St Petersburg Mathematical Journal, 19(6), 1003-1014.
2008 Larusson, F., & Sadykov, T. (2008). A discrete version of the Riemann-Hilbert problem. Russian Mathematical Surveys, 63(5), 973-975.
2007 Larusson, F., & Sigurdsson, R. (2007). Siciak-zahariuta extremal functions and polynomial hulls. Annales Polonici Mathematici, 91(2-3), 235-239.
2005 Larusson, F., & Sigurdsson, R. (2005). The Siciak-Zahariuta extremal function as the envelope of disc functionals. Annales Polonici Mathematici, 86(2), 177-192.
2005 Larusson, F. (2005). Mapping Cylinders and the Oka Principle. Indiana University Mathematics Journal, 54(4), 1145-1159.
2004 Larusson, F. (2004). Model structures and the Oka Principle. Journal of Pure and Applied Algebra, 192(1-3), 203-223.
2003 Joita, C., & Larusson, F. (2003). The third Cauchy-Fantappiè formula of Leray. Michigan Mathematical Journal, 51(2), 339-350.
2003 Larusson, F., & Sigurdsson, R. (2003). Plurisubharmonicity of envelopes of disc functionals on manifolds. Journal fur die Reine und Angewandte Mathematik, 555(555), 27-38.
2003 Larusson, F. (2003). Excision for Simplicial Sheaves on the Stein Site and Gromov's Oka Principle. International Journal of Mathematics, 14(2), 191-209.
— Forstneric, F., & Larusson, F. (n.d.). Holomorphic Legendrian curves in projectivised cotangent bundles.
null curves in $\mathbb C^n$.
— Larusson, F. (n.d.). Smooth toric varieties are Oka.
— Larusson, F. (n.d.). Eight lectures on Oka manifolds.
— Larusson, F. (n.d.). Oka theory of blow-ups.
— Larusson, F. (n.d.). The homotopy theory of equivalence relations.
— Larusson, F., & Sigurdsson, R. (n.d.). The Jensen envelope is plurisubharmonic on all manifolds.
2010 Larusson, F. (2010). Applications of a parametric Oka principle for liftings. In P. Ebenfelt, N. Hungerbuhler, J. J. Kohn, N. Mok, & E. J. Straube (Eds.), Complex Analysis (pp. 205-211). Switzerland: Springer.
I was the sole chief investigator of a Discovery Grant entitled "Flexibility and symmetry in complex geometry" from the Australian Research Council in 2012-2014, and of a Discovery Grant entitled "Homotopical structures in algebraic, analytic, and equivariant geometry" in 2015-2017.
I teach courses at all levels, from large first-year courses to courses at the honours and master level. I take an interest in pedagogy and teaching innovations. I was the founder and convenor in 2014-2017 of the Learning and Teaching Working Group in the School of Mathematical Sciences. I was the Director of Teaching of the School in 2013-2016, and since 2017 I have been the Director of Teaching Development in the School. Since mid-2018, I have been the Director (Learning Quality) in the Faculty of Engineering, Computer and Mathematical Sciences. I take a hands-on interest in school education. I was an active member of Mathematicians in Schools from 2010 to 2017, and participated in the ChooseMaths national mentoring program for high-school girls in 2017-2018. | CommonCrawl |
For building question answering systems and natural language interfaces, semantic parsing has emerged as an important and powerful paradigm. Semantic parsers map natural language into logical forms, the classic representation for many important linguistic phenomena. The modern twist is that we are interested in learning semantic parsers from data, which introduces a new layer of statistical and computational issues. This article lays out the components of a statistical semantic parser, highlighting the key challenges. We will see that semantic parsing is a rich fusion of the logical and the statistical world, and that this fusion will play an integral role in the future of natural language understanding systems.
This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS) for representing logical forms in semantic parsing. By eliminating variables and making existential quantification implicit, lambda DCS logical forms are generally more compact than those in lambda calculus.
We tackle the problem of generating a pun sentence given a pair of homophones (e.g., "died" and "dyed"). Supervised text generation is inappropriate due to the lack of a large corpus of puns, and even if such a corpus existed, mimicry is at odds with generating novel content. In this paper, we propose an unsupervised approach to pun generation using a corpus of unhumorous text and what we call the local-global surprisal principle: we posit that in a pun sentence, there is a strong association between the pun word (e.g., "dyed") and the distant context, as well as a strong association between the alternative word (e.g., "died") and the immediate context. This contrast creates surprise and thus humor. We instantiate this principle for pun generation in two ways: (i) as a measure based on the ratio of probabilities under a language model, and (ii) a retrieve-and-edit approach based on words suggested by a skip-gram model. Human evaluation shows that our retrieve-and-edit approach generates puns successfully 31% of the time, tripling the success rate of a neural generation baseline.
Adversarial perturbations dramatically decrease the accuracy of state-of-the-art image classifiers. In this paper, we propose and analyze a simple and computationally efficient defense strategy: inject random Gaussian noise, discretize each pixel, and then feed the result into any pre-trained classifier. Theoretically, we show that our randomized discretization strategy reduces the KL divergence between original and adversarial inputs, leading to a lower bound on the classification accuracy of any classifier against any (potentially whitebox) $\ell_\infty$-bounded adversarial attack. Empirically, we evaluate our defense on adversarial examples generated by a strong iterative PGD attack. On ImageNet, our defense is more robust than adversarially-trained networks and the winning defenses of the NIPS 2017 Adversarial Attacks & Defenses competition.
Uncertainty sampling, a popular active learning algorithm, is used to reduce the amount of data required to learn a classifier, but it has been observed in practice to converge to different parameters depending on the initialization and sometimes to even better parameters than standard training on all the data. In this work, we give a theoretical explanation of this phenomenon, showing that uncertainty sampling on a convex loss can be interpreted as performing a preconditioned stochastic gradient step on a smoothed version of the population zero-one loss that converges to the population zero-one loss. Furthermore, uncertainty sampling moves in a descent direction and converges to stationary points of the smoothed population zero-one loss. Experiments on synthetic and real datasets support this connection.
While active learning offers potential cost savings, the actual data efficiency---the reduction in amount of labeled data needed to obtain the same error rate---observed in practice is mixed. This paper poses a basic question: when is active learning actually helpful? We provide an answer for logistic regression with the popular active learning algorithm, uncertainty sampling. Empirically, on 21 datasets from OpenML, we find a strong inverse correlation between data efficiency and the error rate of the final classifier. Theoretically, we show that for a variant of uncertainty sampling, the asymptotic data efficiency is within a constant factor of the inverse error rate of the limiting classifier.
In sequential hypothesis testing, Generalized Binary Search (GBS) greedily chooses the test with the highest information gain at each step. It is known that GBS obtains the gold standard query cost of $O(\log n)$ for problems satisfying the $k$-neighborly condition, which requires any two tests to be connected by a sequence of tests where neighboring tests disagree on at most $k$ hypotheses. In this paper, we introduce a weaker condition, split-neighborly, which requires that for the set of hypotheses two neighbors disagree on, any subset is splittable by some test. For four problems that are not $k$-neighborly for any constant $k$, we prove that they are split-neighborly, which allows us to obtain the optimal $O(\log n)$ worst-case query cost.
Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. In this adversarial setting, the accuracy of sixteen published models drops from an average of $75\%$ F1 score to $36\%$; when the adversary is allowed to add ungrammatical sequences of words, average accuracy on four models decreases further to $7\%$. We hope our insights will motivate the development of new models that understand language more precisely.
A core problem in learning semantic parsers from denotations is picking out consistent logical forms--those that yield the correct denotation--from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WikiTableQuestions dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.
We show how to estimate a model's test error from unlabeled data, on distributions very different from the training distribution, while assuming only that certain conditional independencies are preserved between train and test. We do not need to assume that the optimal predictor is the same between train and test, or that the true distribution lies in any parametric family. We can also efficiently differentiate the error estimate to perform unsupervised discriminative learning. Our technical tool is the method of moments, which allows us to exploit conditional independencies in the absence of a fully-specified model. Our framework encompasses a large family of losses including the log and exponential loss, and extends to structured output settings such as hidden Markov models.
Modeling crisp logical regularities is crucial in semantic parsing, making it difficult for neural models with no task-specific prior knowledge to achieve good results. In this paper, we introduce data recombination, a novel framework for injecting such prior knowledge into a model. From the training data, we induce a high-precision synchronous context-free grammar, which captures important conditional independence properties commonly found in semantic parsing. We then train a sequence-to-sequence recurrent network (RNN) model with a novel attention-based copying mechanism on datapoints sampled from this grammar, thereby teaching the model about these structural properties. Data recombination improves the accuracy of our RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.
Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality. While existing work trades off one aspect for another, this paper simultaneously makes progress on both fronts through a new task: answering complex questions on semi-structured tables using question-answer pairs as supervision. The central challenge arises from two compounding factors: the broader domain results in an open-ended set of relations, and the deeper compositionality results in a combinatorial explosion in the space of logical forms. We propose a logical-form driven parsing algorithm guided by strong typing constraints and show that it obtains significant improvements over natural baselines. For evaluation, we created a new dataset of 22,033 complex questions on Wikipedia tables, which is made publicly available.
Markov Chain Monte Carlo (MCMC) algorithms are often used for approximate inference inside learning, but their slow mixing can be difficult to diagnose and the approximations can seriously degrade learning. To alleviate these issues, we define a new model family using strong Doeblin Markov chains, whose mixing times can be precisely controlled by a parameter. We also develop an algorithm to learn such models, which involves maximizing the data likelihood under the induced stationary distribution of these chains. We show empirical improvements on two challenging inference tasks.
A classic tension exists between exact inference in a simple model and approximate inference in a complex model. The latter offers expressivity and thus accuracy, but the former provides coverage of the space, an important property for confidence estimation and learning with indirect supervision. In this work, we introduce a new approach, reified context models, to reconcile this tension. Specifically, we let the amount of context (the arity of the factors in a graphical model) be chosen "at run-time" by reifying it---that is, letting this choice itself be a random variable inside the model. Empirically, we show that our approach obtains expressivity and coverage on three natural language tasks.
How can we explain the predictions of a black-box model? In this paper, we use influence functions -- a classic technique from robust statistics -- to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.
How much is 131 million US dollars? To help readers put such numbers in context, we propose a new task of automatically generating short descriptions known as perspectives, e.g. "$131 million is about the cost to employ everyone in Texas over a lunch period". First, we collect a dataset of numeric mentions in news articles, where each mention is labeled with a set of rated perspectives. We then propose a system to generate these descriptions consisting of two steps: formula construction and description generation. In construction, we compose formulae from numeric facts in a knowledge base and rank the resulting formulas based on familiarity, numeric proximity and semantic compatibility. In generation, we convert a formula into natural language using a sequence-to-sequence recurrent neural network. Our system obtains a 15.2% F1 improvement over a non-compositional baseline at formula construction and a 12.5 BLEU point improvement over a baseline description generation.
Discriminative latent-variable models are typically learned using EM or gradient-based optimization, which suffer from local optima. In this paper, we develop a new computationally efficient and provably consistent estimator for a mixture of linear regressions, a simple instance of a discriminative latent-variable model. Our approach relies on a low-rank linear regression to recover a symmetric tensor, which can be factorized into the parameters using a tensor power method. We prove rates of convergence for our estimator and provide an empirical evaluation illustrating its strengths relative to local optimization (EM).
Despite their impressive performance on diverse tasks, neural networks fail catastrophically in the presence of adversarial inputs---imperceptibly but adversarially perturbed versions of natural inputs. We have witnessed an arms race between defenders who attempt to train robust networks and attackers who try to construct adversarial examples. One promise of ending the arms race is developing certified defenses, ones which are provably robust against all attackers in some family. These certified defenses are based on convex relaxations which construct an upper bound on the worst case loss over all attackers in the family. Previous relaxations are loose on networks that are not trained against the respective relaxation. In this paper, we propose a new semidefinite relaxation for certifying robustness that applies to arbitrary ReLU networks. We show that our proposed relaxation is tighter than previous relaxations and produces meaningful robustness guarantees on three different "foreign networks" whose training objectives are agnostic to our proposed relaxation.
Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large-scale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.
We consider negotiation settings in which two agents use natural language to bargain on goods. Agents need to decide on both high-level strategy (e.g., proposing \$50) and the execution of that strategy (e.g., generating "The bike is brand new. Selling for just \$50."). Recent work on negotiation trains neural models, but their end-to-end nature makes it hard to control their strategy, and reinforcement learning tends to lead to degenerate solutions. In this paper, we propose a modular approach based on coarse dialogue acts (e.g., propose(price=50)) that decouples strategy and generation. We show that we can flexibly set the strategy using supervised learning, reinforcement learning, or domain-specific knowledge without degeneracy, while our retrieval-based generation can maintain context-awareness and produce diverse utterances. We test our approach on the recently proposed DEALORNODEAL game, and we also collect a richer dataset based on real items on Craigslist. Human evaluation shows that our systems achieve higher task success rate and more human-like negotiation behavior than previous approaches.
Lemma 23.8.8. Let $S$ be a finite type algebra over a field $k$.
$S$ is a local complete intersection in the sense of Algebra, Definition 10.133.1 if and only if $S$ is a local complete intersection in the sense of Definition 23.8.5.
Proof. Proof of (1). Let $k[x_1, \ldots , x_ n] \to S$ be a surjection. Let $\mathfrak p \subset k[x_1, \ldots , x_ n]$ be the prime ideal corresponding to $\mathfrak q$. Let $I \subset k[x_1, \ldots , x_ n]$ be the kernel of our surjection. Note that $k[x_1, \ldots , x_ n]_\mathfrak p \to S_\mathfrak q$ is surjective with kernel $I_\mathfrak p$. Observe that $k[x_1, \ldots , x_ n]$ is a regular ring by Algebra, Proposition 10.113.2. Hence the equivalence of the two notions in (1) follows by combining Lemma 23.8.3 with Algebra, Lemma 10.133.7.
Borland and Dennis studied a system of three fermions with local rank six. That is, the wave function is an element of the Hilbert space $\mathcal H = \bigwedge^3 \mathbb C^6$. Using the results of arXiv:0806.4076, it is easy to show that there are four entanglement classes, with entanglement polytopes as displayed below. Intriguingly, this can be understood as a "symmetrization" of the three-qubit system (using the connection explained in that paper). Some of us have recently studied the implications of the Borland–Dennis inequalities (which describe the first polytope) on the ground state of a harmonic system. | CommonCrawl |
The grille cipher is a technique that dates back to 1550 when it was first described by Girolamo Cardano. The version we'll be dealing with comes from the late 1800's and works as follows. The message to be encoded is written on an $n \times n$ grid row-wise, top to bottom, and is overlaid with a card with a set of holes punched out of it (this is the grille).
The message is encrypted by writing down the letters that appear in the holes, row by row, then rotating the grille 90 degrees clockwise, writing the new letters that appear, and repeating this process two more times. Of course the holes in the grille must be chosen so that every letter in the message will eventually appear in a hole (this is actually not that hard to arrange).
If the message is larger than the $n \times n$ grid, then the first $n^2$ letters are written in the grid and encrypted, then the next $n^2$ are encrypted, and so on, always filling the last grid with random letters if needed. Here, we will only be dealing with messages of length $n^2$.
Your job, should you choose to accept it, is to take an encrypted message and the corresponding grille and decrypt it. And we'll add one additional twist: the grille given might be invalid, i.e., the holes used do not allow every location in the grid to be used during the encryption process. If this is the case, then you must indicate that you can't decrypt the message.
The input starts with a line containing a positive integer $n\le 10$ indicating the size of the grid and grille. The next $n$ lines will specify the grille, using '.' for a hole and 'X' for a non-hole. Following this will be a line containing the encrypted message, consisting solely of lowercase alphabetic characters. The number of characters in this line will always be $n^2$.
Output the decrypted text as a single string with no spaces, or the phrase "invalid grille" if the grille is invalid. | CommonCrawl |
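A minimal sketch of the decryption in Python (not a reference solution; it assumes the input format above and treats any cell exposed twice, or never, as an invalid grille): simulate the four orientations of the grille to recover which grid cell each ciphertext letter fills, then read the grid row-wise.

```python
def decrypt(n, grille_rows, cipher):
    holes = [[ch == '.' for ch in row] for row in grille_rows]
    seen = [[False] * n for _ in range(n)]
    order = []                            # grid cell filled by each ciphertext letter
    for _ in range(4):
        for r in range(n):
            for c in range(n):
                if holes[r][c]:
                    if seen[r][c]:
                        return None       # a cell is exposed twice: invalid
                    seen[r][c] = True
                    order.append((r, c))
        # rotate the grille 90 degrees clockwise
        holes = [[holes[n - 1 - c][r] for c in range(n)] for r in range(n)]
    if len(order) != n * n:
        return None                       # some cell is never exposed: invalid
    grid = [[''] * n for _ in range(n)]
    for ch, (r, c) in zip(cipher, order):
        grid[r][c] = ch
    return ''.join(''.join(row) for row in grid)

n = int(input())
grille_rows = [input().strip() for _ in range(n)]
cipher = input().strip()
result = decrypt(n, grille_rows, cipher)
print(result if result is not None else "invalid grille")
```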
where $F\colon \mathbb R^n\times[0,1] \to \mathbb R^n$ is a continuous vector field. The solution $x(t)=\varphi(t,y,\lambda)$ is uniquely determined by the initial condition $x(0)=y=\varphi(0,y,\lambda)$ and can be continued to the whole axis $(-\infty,+\infty)$ for all $\lambda\in[0,1]$. We obtain conditions ensuring the preservation of the property of global asymptotic stability of the stationary solution of such a system as the parameter $\lambda$ varies.
Keywords: matrix first-order differential equation, global asymptotic stability of solutions, deformation method, Lyapunov stability. | CommonCrawl |
This paper tries to identify waiting events that limit the maximal throughput of a multi-threaded application. To achieve this goal, we not only need to understand an event's impact on threads waiting for this event (i.e., local impact), but also need to understand whether its impact can reach other threads that are involved in request processing (i.e., global impact).
To address these challenges, wPerf computes the local impact of a waiting event with a technique called cascaded re-distribution; more importantly, wPerf builds a wait-for graph to compute whether such impact can indirectly reach other threads. By combining these two techniques, wPerf essentially tries to identify events with large impacts on all threads.
We apply wPerf to a number of open-source multi-threaded applications. By following the guide of wPerf, we are able to improve their throughput by up to 4.83$\times$. The overhead of recording waiting events at runtime is about 5.1% on average. | CommonCrawl |
How does the covariance matrix of a fit get computed?
I often have to fit data of physical experiments as a student. I always use (python's) functions like numpy.polyfit or scipy.optimize.curve_fit for that purpose. They also allow you to retrieve the covariance matrix of the parameters which has the variances of the parameters on its diagonal.
I'm not really good at statistics and this could be a very stupid question but I do not understand why there are non-zero variances at all. I thought the fit is well-defined because it is the fit with the least squares so there should be no variance. What do these variances of the parameters tell me? Do they come from numerical errors?
"fitting": iterating over a fitting algorithm (like gradient descent) to find the best vector (often called $\theta$) which will give you the smallest for the smallest "mean square error" (the sum of the squared difference between your estimation and the real value). This is what numpy.polyfit does ("poly" because it add polynomial features).
Feature scaling aims at accelerating the convergence of the "fitting algorithm".
Fitting aims at... Fitting (!!!) the vector which ought to minimize the mean square error.
"feature scaling" transforms the features $X_i$ into new $X'_i$ whose variances are equal to zero.
"fitting" finds the vector that minimize the mean square error through multiple iteration but without touching the variance.
Moreover, the covariance matrix of $n$ features measures the "link" between the $n$ different features and displays this link visually in an $n\times n$ matrix.
Each of these individuals has 3 observed characteristics (or features): age, size, gender.
If two features are independent from each other (like the age and the gender might be), their covariance will be equal to zero and you will find a zero on both sides of the diagonal (as $\Sigma$ is obviously symmetric: cov(X,Y) = cov(Y,X)).
If two features are not independent (like age and size might be), the covariance won't be equal to zero.
if two features are dependent and "move together" in the same direction (i.e. if $X$ increases, $Y$ increases too), then $Cov(X,Y)$ will be strictly positive.
if two features are dependent and "move together" in opposite directions (i.e. if $X$ increases, $Y$ decreases), then $Cov(X,Y)$ will be strictly negative.
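For what it is worth, the variances reported by these routines are not numerical error: they estimate how much the fitted parameters would scatter if the noisy data were drawn again, roughly the residual variance propagated through the model's Jacobian. A minimal sketch (synthetic data, illustrative numbers only):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)   # noisy straight line

def model(x, a, b):
    return a * x + b

popt, pcov = curve_fit(model, x, y)
print(popt, np.sqrt(np.diag(pcov)))       # fitted a, b and their one-sigma errors

# numpy.polyfit exposes the same covariance matrix via cov=True
coeffs, cov = np.polyfit(x, y, 1, cov=True)
print(coeffs, np.sqrt(np.diag(cov)))
```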
On Wednesday 21 February 2018 at 09:15, in Room 2BC30, Davide Buoso (Universidade de Lisboa) will give a seminar entitled "The Biharmonic operator with tension: eigenvalues' behavior and asymptotics".
The eigenvalues of the Biharmonic operator $\Delta^2$ have been extensively studied in the literature, and still today there are a number of open problems regarding e.g., shape optimization or positivity properties of the first eigenfunctions. In this talk we consider the Biharmonic operator with a tension term $\Delta^2+\alpha\Delta$, which recently gained interest in the mathematical community. After recalling the known results for this problem, we discuss the dependence of the eigenvalues upon the parameter $\alpha$ as well as the asymptotic expansions. If time permits, we will also discuss other properties that depend on the parameter $\alpha$.
Joint works with P. Antunes and P. Freitas.
I am not sure if I am right. Please correct me if I am wrong. Please help me; I am weak in probability problems.
We can also notice that $(x,y)$ is uniformly distributed over a $10\times20$ rectangle and the triangle over which $x+y\gt20$ is one quarter of that rectangle.
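A quick Monte Carlo check of that geometric argument (assuming, as in the comment above, that $x\sim U(0,10)$ and $y\sim U(0,20)$ independently):

```python
import random

trials = 10**6
hits = sum(random.uniform(0, 10) + random.uniform(0, 20) > 20 for _ in range(trials))
print(hits / trials)   # should be close to 1/4
```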
Currently I have a sudoku puzzle solver program and I've tried all the puzzles I can find that are labeled the "hardest" on various sudoku video games and puzzle books. My solver has solved them all. It currently uses the following 4 polynomial time rules (i.e. polynomial time if the board was $N^2 \times N^2$ instead of $9 \times 9$).
4) For each row/column and block that intersects it, see if there is a value that can only appear in the 3 intersection cells, either for the row/column or for the block. Then no other squares in the row/column or block can have that value.
After these rules are applied repeatedly, if the puzzle is not solved, then the program guesses a value for a cell and sees if this leads to a solution or contradiction when the above four rules are applied repeatedly. If a contradiction is found, then the value is removed from the possible values for the cell. This guessing is applied separately for each cell+possible value if the number of possible values for the cell is 3 or less. There is no simultaneous guessing for multiple cells. If at least one possible value is removed for some cell, then the above 4 rules are again applied repeatedly until guessing is required again. And repeat. If it ever happens that the 4 rules and guessing don't provide any additional information, then the program gives up and prints out the partial solution it found.
Now, there are about 10 or 20 different polynomial time rules that have been devised to rule out possibilities and infer cells' values in sudoku, and I only implemented 4. Furthermore, the guessing only guesses one cell's value at a time and merely removes the value from the possible values for the cell if a contradiction is found. So it's weird to me that this solves all the hardest puzzles I can find. Can someone produce a puzzle hard enough that my program can't solve it?
In response to the OP's request I'm posting this as an answer - although all I did was happen to know of a place where he found a puzzle that defeated his algorithm.
My solver uses no strategies at all but only guessing, and so the code is extremely short. There are plenty more that you can try from the second link, but you'll have to copy down the puzzles yourself.
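For reference, a guessing-only solver along those lines can be sketched in a few lines of Python (this is not the answerer's actual code; the board is a 9x9 list of lists with 0 for empty cells):

```python
def candidates(board, r, c):
    used = set(board[r]) | {board[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {board[br + i][bc + j] for i in range(3) for j in range(3)}
    return [v for v in range(1, 10) if v not in used]

def solve(board):
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for v in candidates(board, r, c):
                    board[r][c] = v
                    if solve(board):
                        return True
                    board[r][c] = 0
                return False        # dead end, backtrack
    return True                     # no empty cells left
```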
I started with a simple puzzle, and then made it harder and harder by removing the known field values. The main hardness of a sudoku puzzle is at the beginning.
What is a good approach to demonstrate solvability of this type of puzzle without use of brute-force? | CommonCrawl |
Can a model of P(Y|X) be trained via stochastic gradient descent from non-i.i.d. samples of P(X) and i.i.d. samples of P(Y|X)?
When training a parameterized model (e.g. to maximize likelihood) via stochastic gradient descent on some data set, it is commonly assumed that the training samples are drawn i.i.d. from the training data distribution. So if the goal is to model a joint distribution $P(X,Y)$, then each training sample $(x_i,y_i)$ should be drawn i.i.d. from that distribution.
If the goal is instead to model a conditional distribution $P(Y|X)$, then how does the i.i.d. requirement change, if at all?
Must we still draw each sample $(x_i,y_i)$ i.i.d. from the joint distribution?
Should we draw $x_i$ i.i.d. from $P(X)$, then draw $y_i$ i.i.d. from $P(Y|X)$?
Can we draw $x_i$ not i.i.d. from $P(X)$ (e.g. correlated over time), then draw $y_i$ i.i.d. from $P(Y|X)$?
I would like to do #3 if possible. My application is in reinforcement learning, where I'm using a parameterized conditional model as a control policy. The sequence of states $x_i$ is highly correlated, but the actions $y_i$ are sampled i.i.d. from a stochastic policy conditioned on the state. The resulting samples $(x_i,y_i)$ (or a subset of them) are used to train the policy. (In other words, imagine running a control policy for a long time in some environment, gathering a data set of state/action samples. Then even though the states are correlated over time, the actions are generated independently, conditioned on the state.) This is somewhat similar to the situation in this paper.
I found a paper, Ryabko, 2006, "Pattern Recognition for Conditionally Independent Data," which at first seemed relevant; however, there the situation is reversed from what I need, where $y_i$ (the label/category/action) can be drawn not i.i.d. from $P(Y)$, and $x_i$ (the object/pattern/state) is drawn i.i.d. from $P(X|Y)$.
Update: Two papers (here and here) mentioned in the Ryabko paper seem relevant here. They assume the $x_i$ come from an arbitrary process (e.g. not iid, possibly nonstationary). They show that nearest-neighbor and kernel estimators are consistent in this case. But I'm more interested in whether estimation based on stochastic gradient descent is valid in this situation.
I think you could do either 2 or 3. However, the problem with 3 is that in allowing arbitrary distributions for X you include distributions that would have all or almost all of the probability concentrated in a small interval in the x-space. This would hurt the overall estimation of P(Y|X) because you would have little or no data for certain values of X.
Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of 32-bit floating point format training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network and a generative adversarial network, using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.
At least two software packages---DARWIN, Eckerd College, and FinScan, Texas A&M---exist to facilitate the identification of cetaceans---whales, dolphins, porpoises---based upon the naturally occurring features along the edges of their dorsal fins. Such identification is useful for biological studies of population, social interaction, migration, etc. The process whereby fin outlines are extracted in current fin-recognition software packages is manually intensive and represents a major user input bottleneck: it is both time consuming and visually fatiguing. This research aims to develop automated methods (employing unsupervised thresholding and morphological processing techniques) to extract cetacean dorsal fin outlines from digital photographs thereby reducing manual user input. Ideally, automatic outline generation will improve the overall user experience and improve the ability of the software to correctly identify cetaceans. Various transformations from color to gray space were examined to determine which produced a grayscale image in which a suitable threshold could be easily identified. To assist with unsupervised thresholding, a new metric was developed to evaluate the jaggedness of figures ("pixelarity") in an image after thresholding. The metric indicates how cleanly a threshold segments background and foreground elements and hence provides a good measure of the quality of a given threshold. This research results in successful extractions in roughly 93% of images, and significantly reduces user-input time.
This paper introduces a fast, general method for dictionary-free parameter estimation in quantitative magnetic resonance imaging (QMRI) via regression with kernels (PERK). PERK first uses prior distributions and the nonlinear MR signal model to simulate many parameter-measurement pairs. Inspired by machine learning, PERK then takes these parameter-measurement pairs as labeled training points and learns from them a nonlinear regression function using kernel functions and convex optimization. PERK admits a simple implementation as per-voxel nonlinear lifting of MRI measurements followed by linear minimum mean-squared error regression. We demonstrate PERK for $T_1,T_2$ estimation, a well-studied application where it is simple to compare PERK estimates against dictionary-based grid search estimates. Numerical simulations as well as single-slice phantom and in vivo experiments demonstrate that PERK and grid search produce comparable $T_1,T_2$ estimates in white and gray matter, but PERK is consistently at least $23\times$ faster. This acceleration factor will increase by several orders of magnitude for full-volume QMRI estimation problems involving more latent parameters per voxel. | CommonCrawl |
How can natural numbers be represented to offer constant time addition?
Who invented proxy passing and when?
What sets of self-maps are the continuous self-maps under some topology?
Is there a Functor that cannot be a law-abiding Apply?
Are there non-trivial Foldable or Traversable instances that don't look like containers?
Why is empty set an open set?
Can we get just $3$ from $\pi$?
Why does GHC infer a monomorphic type here, even with MonomorphismRestriction disabled?
How is Data.Void.absurd different from ⊥?
Why does a simple Haskell function reject a Fractional argument expressed as a ratio?
Why there is no way to derive Applicative Functors in Haskell?
What is an unsafe function in Haskell?
Why is this unsafeCoerce not unsafe?
How can I show that a sequence of regular polygons with $n$ sides becomes more and more like a circle as $n \to \infty$?
Why does factoring eliminate a hole in the limit?
Does order of constructors / cases / guards / if-then-else matter to performance?
How does GHCi print partially-applied values created from "pure"?
What was wrong with Control.MonadPlus.Free?
Prove map id = id in idris?
How do I implicitly import commonly used modules?
Why `[1, "a"] :: [forall a. Show a => a]` is not allowed? | CommonCrawl |
Anabalon, A., Andrade, T., Astefanesei, D., & Mann, R. (2018). Universal Formula for the Holographic Speed of Sound. Physics Letters B, 781, 547-552. doi:10.1016/j.physletb.2018.04.028.
Abstract: We consider planar hairy black holes in five dimensions with a real scalar field in the Breitenlohner-Freedman window and show that is possible to derive a universal formula for the holographic speed of sound for any mixed boundary conditions of the scalar field. As an example, we locally construct the most general class of planar black holes coupled to a single scalar field in the consistent truncation of type IIB supergravity that preserves the $SO(3)\times SO(3)$ R-symmetry group of the gauge theory. We obtain the speed of sound for different values of the vacuum expectation value of a single trace operator when a double trace deformation is induced in the dual gauge theory. In this particular family of solutions, we find that the speed of sound exceeds the conformal value. Finally, we generalize the formula of the speed of sound to arbitrary dimensional scalar-metric theories whose parameters lie within the Breitenlohner-Freedman window. | CommonCrawl |
An exponential equation is an equation in which the unknown occurs as part of the exponent or index. For example, $2^x = 16$ and $25 \times 3^x = 9$ are both exponential equations. There are a number of methods we can use to solve exponential equations.
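For instance, when both sides can be written with the same base we simply equate the indices: $2^x = 16 = 2^4$, so $x = 4$. When that is not possible, taking logarithms of both sides of $a^x = b$ gives $x = \log(b)/\log(a)$.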
Indicial equation (noun, Mathematics): an equation that is obtained from a given linear differential equation and that indicates whether a solution in power series form exists for the differential equation.
An indicial equation is one in which the power is the unknown, e.g. $2^n = 8$, the solution of which would be $n = 3$ because $2^3 = 8$. Indicial equations often occur in the calculation of compound interest.
Now that you know how to draw maps, and set markers, let's do some "map math." Let's compute the distance between two points on the map. We'll reference the two points by their latitude and longitude, like this $(lat_1,lng_1)$ and $(lat_2,lng_2)$.
To start, let's use Los Angeles (34.1,-118.25) and New York (40.7,-74.02).
$\Delta lat=lat_2-lat_1$ and $\Delta lng=lng_2-lng_1$.
The distance $d$ will be $d=R\times c$.
Remember, your latitude/longitude angles must all be in radians (radians = degrees $\times\pi/180$). As a check, the LA-NY distance is about 2,445 miles or 3,934 km.
Make a nice map application here, that accepts your two locations as easy-to-change variables, draws markers at both points, then displays the distance.
Now you try. Translate the equations above into code, and compute some distances.
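Here is one possible translation, written in Python for brevity; the intermediate terms $a$ and $c$ are the standard haversine expressions that connect the $\Delta lat$, $\Delta lng$ above to $d = R\times c$:

```python
from math import radians, sin, cos, atan2, sqrt

def distance_km(lat1, lng1, lat2, lng2, R=6371.0):
    # radians = degrees * pi / 180
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    dlat, dlng = lat2 - lat1, lng2 - lng1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlng / 2) ** 2
    c = 2 * atan2(sqrt(a), sqrt(1 - a))
    return R * c

print(distance_km(34.1, -118.25, 40.7, -74.02))   # roughly 3,934 km
```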
This code should at least get you started.
Now in my periodic boundary conditions, I chose to add $+10$ but could have added some other number instead. How does this "size of the torus" affect the function of the toric code?
The Toric code is an error correcting code. The distance of the code (I.e. the number of local operations required to convert one logical state into an orthogonal one) is equal to $N$, where the Toric code is defined on an $N\times N$ grid.
One of the places that the performance of the Toric code really wins out is that although it is only distance $N$, the vast majority of sets of $N$ single qubit errors can be corrected, and it is only once you get $O(N^2)$ errors that you get killed. That means that as $N\rightarrow\infty$, those $O(N)$ terms vanish, and you get a finite per-qubit error rate as a threshold for error correction. For finite $N$, the error correcting threshold will be lower.
If $\times$ means -, $\div$ means + , - means $\div$, + means $\times$, then 64 $\div$ 32 - 8 $\times$ 4 + 6 = ?
In the following list of English alphabets, one alphabet has not been used. Identify the same.
If south-east becomes north, then what will north-east become ?
Pointing towards a girl, Sameer said, "She is the daughter of the only son of my paternal grandfather." How is the girl related to Sameer?
5 . In questions, a series is given, with one term missing. Choose the correct alternative from the given ones that will complete the series.
6 . In questions, a series is given, with one term missing. Choose the correct alternative from the given ones that will complete the series.
7 . In questions, a series is given, with one term missing. Choose the correct alternative from the given ones that will complete the series.
BCB, DED, FGF, HIH, ?
8 . In questions, a series is given, with one term missing. Choose the correct alternative from the given ones that will complete the series.
PBQ, QCR, RDS, SET, ?
9 . In questions, a series is given, with one term missing. Choose the correct alternative from the given ones that will complete the series.
O, 6, M, 13, K, 19, I , 26, G, ?, ?
If REASON is coded as 5 and BELIEVED as 7, what is the code for GOVERNMENT ? | CommonCrawl |
Jantsch, Jonathan and Chakravortty, Dipshikha and Turza, Nadine and Prechtel, Alexander T and Buchholz, Björn and Gerlach, Roman G and Volke, Melanie and Glasner, Joachim and Warnecke, Christina and Warnecke, Michael Sx and Eckardt, Kai-Uwe and Steinkasserer, Alexander and Hensel, Michael and Willam, Carsten (2008) Hypoxia and hypoxia-inducible factor-1 alpha modulate lipopolysaccharide-induced dendritic cell activation and function. In: Journal of Immunology, 180 (7). pp. 4697-4705.
Dendritic cells (DC) play a key role in linking innate and adaptive immunity. In inflamed tissues, where DC become activated, oxygen tensions are usually low. Although hypoxia is increasingly recognized as an important determinant of cellular functions, the consequences of hypoxia and the role of one of the key players in hypoxic gene regulation, the transcription factor hypoxia inducible factor 1$\alpha$ (HIF-1$\alpha$), are largely unknown. Thus, we investigated the effects of hypoxia and HIF-1$\alpha$ on murine DC activation and function in the presence or absence of an exogenous inflammatory stimulus. Hypoxia alone did not activate murine DC, but hypoxia combined with LPS led to marked increases in expression of costimulatory molecules, proinflammatory cytokine synthesis, and induction of allogeneic lymphocyte proliferation compared with LPS alone. This DC activation was accompanied by accumulation of HIF-1 protein levels, induction of glycolytic HIF target genes, and enhanced glycolytic activity. Using RNA interference techniques, knockdown of HIF-1$\alpha$ significantly reduced glucose use in DC, inhibited maturation, and led to an impaired capability to stimulate allogeneic T cells. Alltogether, our data indicate that HIF-1$\alpha$ and hypoxia play a crucial role for DC activation in inflammatory states, which is highly dependent on glycolysis even in the presence of oxygen.
Copyright for this article belongs to The American Association of Immunologists, Inc.
and obviously you'll want jupyter installed so you can run the notebook server. The Internet is full of opinions about how to set up your python environment. You should find one that works for you, but this guide is as good as any to get you started.
I'm going to start off by loading the required packages and suppressing some warnings that should be fixed in the next stable release of seaborn.
I'm going to assume that you are running this notebook out of a local copy of the SuchTree repository for any local file paths.
Loading tree data into SuchTree is pretty simple -- just give it a path to a valid Newick file. Under the hood, SuchTree uses dendropy for parsing. I figured it was better to introduce another dependency than to inflict yet another Newick parser on the world (the Newick file format has some slight ambiguities that can lead to annoying incompatibilities).
The SuchTree object has a dictionary called leafs that maps leaf names onto their node ids. We'll make extensive use of this as we put the utility through its paces.
SuchTree has two ways to calculate distances: one pair at a time, or in batches. Batches are more efficient because each calculation is done without the interpreter's overhead.
The distance() function will accept either node ids (which are integers), or taxon names (which are strings).
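A short sketch of both call styles (the tree path is a placeholder, not necessarily a file in this repository):

```python
from SuchTree import SuchTree

T = SuchTree('data/example.tree')          # placeholder path
taxa = list(T.leafs.keys())[:5]

# one pair at a time, by node id or by taxon name
print(T.distance(T.leafs[taxa[0]], T.leafs[taxa[1]]))
print(T.distance(taxa[0], taxa[1]))

# a small distance matrix for the clustermap below
D = [[T.distance(a, b) for b in taxa] for a in taxa]
```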
Let's look at the distance matrix using a nice seaborn clustermap plot.
It's worth noting that seaborn is using scipy's cluster.hierarchy functions to build those dendrograms from the distance matrix. They aren't going to have exactly the same topology as the input tree, which was built with RAxML.
To calculate the distances in a batch, we can use the distances() function, which takes an $n \times 2$ array of node ids (make sure your array is representing them as integers).
We should get the same distance matrix and clustermap as before.
If you want to use taxon names instead, distances_by_name() accepts an $n \times 2$ list of tuples of taxon names, and looks up the node ids for you.
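Continuing the sketch above, the two batch forms look like this:

```python
import numpy
from itertools import combinations

# an n x 2 array of integer node ids for distances()
ids = numpy.array([(T.leafs[a], T.leafs[b])
                   for a, b in combinations(taxa, 2)], dtype=int)
d = T.distances(ids)

# or the same pairs by taxon name for distances_by_name()
d_by_name = T.distances_by_name(list(combinations(taxa, 2)))
```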
SuchTree can also import data from the internets. Here is the distance matrix for the penguins, from the Global Phylogeny of Birds (I've copied some of their data into a Gist post because their data repository forbids programatic queries).
So far, we haven't done anything you couldn't do with other phylogenetics packages. SuchTree really shines when you have to do a lot of distance calculations on very large trees.
Here, we use SuchTree to compare the topologies of two trees containing the same taxa but constructed with different methods (FastTree and neighbor joining). One million random pairs are sampled from each tree, and the distances compared.
The distances() function, which uses node ids rather than taxa names, would be a little bit faster. However, because the trees have different topologies, the taxa have different node ids in each tree. SuchTree's distances_by_name() function untangles the leaf name to leaf node id mappings for you.
Another advantage of SuchTree's support for performing batches of distance calculations is that these calculations can run outside of Python's global interpreter lock. This makes it possible to parallelize with Threads. Python's Thread has less overhead than the multiprocessing package's Process, and Thread objects can access the same memory.
SuchTree intentionally does not allow the user to alter trees once they are created, and so distance calculations are always thread safe. This makes it possible to use only one instance of a tree for all threads, which ought to give you the best chance of keeping it within L3 cache.
First, let's create a Cython function that calls SuchTree outside of the GIL.
Let's set up two queues, one for uncompleted tasks, which we'll call work_q, and one for finished results, which we'll call done_q. The worker threads will pop tasks from the work_q, do some work, and push the results into the done_q.
Let's also push sentinels at the end of the work_q so the workers have a convenient way to know when to shut down.
The worker process takes the two queues as arguments, and returns True when it completes successfully.
Note that we're not using distances_by_name() because it requires a call to Python's dictionary object, which requires the GIL. Instead, we have looked up the node id for each taxon when we created the tasks we pushed onto work_q.
Now we can set up our thread pool, run it and print the results the threads pushed onto done_q.
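A simplified version of that setup, using plain Python threads and the batch distances() call instead of the Cython helper (T1 stands for one of the two shared trees, and batches_of_pairs for pre-computed lists of node-id pairs; both are assumed to exist already):

```python
import numpy
from queue import Queue
from threading import Thread

work_q, done_q = Queue(), Queue()
n_threads = 4

def worker(work_q, done_q):
    while True:
        task = work_q.get()
        if task is None:                   # sentinel: no more work
            done_q.put(None)
            return True
        ids = numpy.array(task, dtype=int)
        done_q.put(T1.distances(ids))      # the tree is shared by all threads

for batch in batches_of_pairs:             # prepared earlier, node ids only
    work_q.put(batch)
for _ in range(n_threads):
    work_q.put(None)                       # one sentinel per worker

threads = [Thread(target=worker, args=(work_q, done_q)) for _ in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```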
Threaded algorithms are difficult to implement, and this is a very minimalist approach. It's really only good enough to demonstrate that you can call SuchTree from your worker threads without tripping the interpreter lock.
If you need this capability, you will probably want to study the documentation carefully. | CommonCrawl |
Deep in the top-secret base of Keepers of the Sacred Postulates, there's The Wall with many rectangular posters depicting deep knowledge derived from their most valuable postulates. They are quite rare, thus they are placed so that they don't overlap.
Occasionally, a batch of new posters worthy of the honor of being placed on The Wall arrives. The Keepers then have to decide, where to place the new posters. It's a complicated process and you are to help with one of its steps.
The Keepers are currently picking candidate positions for the posters. You are to quickly compute the total area of currently hanging posters that would be shadowed by placing a new poster at a given position.
You are given a description of $n$ disjoint gray rectangles in a plane. You are to answer $q$ queries of the form: What is the total grey area in a given rectangle? Note that this doesn't affect the plane.
The first line contains five numbers, $r, c, n, q, m$, ($1 \leq r, c < m \leq 10^9 + 9$, $0 \leq n,q \leq 50\,000$), the height and width of the wall, number of posters on the wall, number of queries and a modulus for computing the queries (we will explain that below).
Each of the next $n$ lines contains four numbers, $x_1, y_1, x_2, y_2$ ($0 \leq x_1, x_2 \leq r$, $0 \leq y_1, y_2 \leq c$), the coordinates of two opposite corners of the rectangle.
The last $q$ lines contain five numbers $x_1', y_1', x_2', y_2', v$, each between $0$ and $m - 1$ (inclusive). You can compute the real coordinates $x_1, y_1, x_2, y_2$ using the formula below.
The decoded coordinates $x_1, y_1, x_2, y_2$ satisfy following conditions: $0 \leq x_1, x_2 \leq r$, $0 \leq y_1, y_2 \leq c$.
There are several subtasks. In offline subtasks, the value $v$ will be zero for each query.
For each query output one line containing a single number: Answer to the query.
Sample 1: You can view the whole plane below.
Sample 2: This is the same input as above, using online queries. | CommonCrawl |
Volume 20 (2015), paper no. 117, 23 pp.
We consider Lipschitz percolation in $d+1$ dimensions above planes tilted by an angle $\gamma$ along one or several coordinate axes. In particular, we are interested in the asymptotics of the critical probability as $d \to \infty$ as well as $\gamma \uparrow \pi/4.$ Our principal results show that the convergence of the critical probability to 1 is polynomial as $d\to \infty$ and $\gamma \uparrow \pi/4.$ In addition, we identify the correct order of this polynomial convergence and in $d=1$ we also obtain the correct prefactor.
Electron. J. Probab., Volume 20 (2015), paper no. 117, 23 pp.
This work is licensed under a Creative Commons Attribution 3.0 License.
Lemma 15.49.7. Let $R$ be a Noetherian ring. Then $R$ is a G-ring if and only if $R_\mathfrak m$ has geometrically regular formal fibres for every maximal ideal $\mathfrak m$ of $R$.
Though it might be hard to imagine, the inhabitants of the small country of Additivia do not know of such a thing as change, which probably has to do with them not knowing subtraction either. When they buy something, they always need to have the exact amount of addollars, their currency. The only other option, but not a really attractive one, is over-paying.
Professor Adem, one of the Additivian mathematicians, came up with an algorithm for keeping a balanced portfolio. The idea is the following. Suppose you have more coins of value $v_1$ than coins of value $v_2$. In this case you should try to spend at least as many coins of value $v_1$ as those of value $v_2$ on any buy you make. Of course spending too many $v_1$ coins is not a good idea either, but to make the algorithm simpler Professor Adem decided to ignore the problem. The algorithm became an instant hit and Professor Adem is now designing a kind of "electronic portfolio" with Adem's algorithm built in. All he needs now is software for these machines that will decide whether a given amount of addollars can be paid using a given set of coins according to the rules of Adem's algorithm. Needless to say, you are his chosen programmer for the task.
Write a program that reads the description of a set of coins and an amount of addollars to be paid, and determines whether you can pay that amount according to Professor Adem's rules.
The input starts with the amount of addollars to be paid $x$, where $1 \le x \le 100,000$. The number of different coin values $k$ follows, where $1 \le k \le 5$. The values of the coins $v_1, \ldots, v_k$ follow, where $1 \le v_i \le 10,000$.
Notice that the order among coin values is significant: you need to spend at least as many coins of value $v_1$ as coins of value $v_2$, at least as many coins of value $v_2$ as those of value $v_3$, and so on. You may assume that you have a sufficiently large number of coins of each value.
Your program should output for each test case either a single word "YES", if the given amount can be paid according to the rules, or a single word "NO" otherwise.
I got WA on test 10 because I didn't know that we could use only some of the coin values, not all of them.
Suppose you have coins of values $1, $2 and $3 and let the number of coins of each value used be n1, n2, n3 then n1 >= n2 >= n3. You have to check if you can pay the required amount using given coins and following above condition.
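Building on that hint: requiring n1 >= n2 >= ... >= nk is equivalent to paying with the "prefix coins" v1, v1+v2, v1+v2+v3, ..., each usable any number of times, so a plain coin-change reachability table answers the question (sketch only, input parsing omitted):

```python
def payable(x, values):
    prefix, s = [], 0
    for v in values:
        s += v
        prefix.append(s)              # v1, v1+v2, v1+v2+v3, ...
    reach = [False] * (x + 1)
    reach[0] = True
    for p in prefix:
        for amount in range(p, x + 1):
            if reach[amount - p]:
                reach[amount] = True
    return "YES" if reach[x] else "NO"
```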
Easy One! Should be moved to the tutorial section.
getting wrong answer in test case 10 any hints?
didn't understand the problem? can someone please simplify the problem stated? | CommonCrawl |
Usually when I see an inequality like $x^2 - 6x - 16 < 0$, I know that the answer is $-2 < x < 8$ because I can picture where the graph would lie below zero. However, for a problem like $x^2(x+5)^3(x-3) \ge 0$, I'm not sure how to set up the inequalities because I can't picture a graph like this as easily. I'm not supposed to use a calculator for this, and I highly doubt my teacher is expecting us to plot points. Is there a trick to figure out inequalities greater than the second power by using the number of exponents given?
This means that the roots of the polynomial on the left hand side are $-5$, $0$, and $3$, so all of these points are definitely in our solution set. Now, we need to check points from each of the four regions of the real line separated by these roots. Plugging in $-6$ gives us a positive value, so we know that $(-\infty,-5]$ will be a part of the solution set.
See if you can check the other three regions to finish off the problem. Remember that the solution set must contain the three roots.
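A quick numeric way to finish that check, in the same spirit as plugging in $-6$ (the test points are arbitrary picks from each of the four regions):

```python
f = lambda x: x**2 * (x + 5)**3 * (x - 3)

for t in (-6, -1, 1, 4):          # one point per region
    print(t, f(t), f(t) >= 0)
```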
The design of cryptographically strong Substitution Boxes (S-boxes) is an interesting problem from both a cryptographic perspective as well as the combinatorial optimization one. Here we introduce the concept of evolving cellular automata rules that can be then translated into S-boxes. With it, we are able to find optimal S-boxes for sizes from $4 \times 4$ up to $7 \times 7$. As far as we know, this is the first time a heuristic approach is able to find optimal S-boxes for sizes larger than $4$. | CommonCrawl |
Showkat Ahmad Ganie, Tanveer Ali Dar, Aashiq Hussain Bhat, Khalid B Dar, Suhail Anees, Mohammad Afzal Zargar and Akbar Masood.
Melatonin: A Potential Anti-Oxidant Therapeutic Agent for Mitochondrial Dysfunctions and Related Disorders.. Rejuvenation research 19(1):21–40, 2016.
Abstract Mitochondria play a central role in cellular physiology. Besides their classic function of energy metabolism, mitochondria are involved in multiple cell functions, including energy distribution through the cell, energy/heat modulation, regulation of reactive oxygen species (ROS), calcium homeostasis, and control of apoptosis. Simultaneously, mitochondria are the main producer and target of ROS with the result that multiple mitochondrial diseases are related to ROS-induced mitochondrial injuries. Increased free radical generation, enhanced mitochondrial inducible nitric oxide synthase (iNOS) activity, enhanced nitric oxide (NO) production, decreased respiratory complex activity, impaired electron transport system, and opening of mitochondrial permeability transition pores have all been suggested as factors responsible for impaired mitochondrial function. Because of these, neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), Huntington's disease (HD), and aging, are caused by ROS-induced mitochondrial dysfunctions. Melatonin, the major hormone of the pineal gland, also acts as an anti-oxidant and as a regulator of mitochondrial bioenergetic function. Melatonin is selectively taken up by mitochondrial membranes, a function not shared by other anti-oxidants, and thus has emerged as a major potential therapeutic tool for treating neurodegenerative disorders. Multiple in vitro and in vivo experiments have shown the protective role of melatonin for preventing oxidative stress-induced mitochondrial dysfunction seen in experimental models of PD, AD, and HD. With these functions in mind, this article reviews the protective role of melatonin with mechanistic insights against mitochondrial diseases and suggests new avenues for safe and effective treatment modalities against these devastating neurodegenerative diseases. Future insights are also discussed.
Giovanni Polimeni, Emanuela Esposito, Valentina Bevelacqua, Claudio Guarneri and Salvatore Cuzzocrea.
Role of melatonin supplementation in neurodegenerative disorders.. Frontiers in bioscience (Landmark edition) 19:429–46, January 2014.
Abstract Neurodegenerative diseases are chronic and progressive disorders characterized by selective destruction of neurons in motor, sensory and cognitive systems. Despite their different origin, free radicals accumulation and consequent tissue damage are importantly concerned for the majority of them. In recent years, research on melatonin revealed a potent activity of this hormone against oxidative and nitrosative stress-induced damage within the nervous system. Indeed, melatonin turned out to be more effective than other naturally occurring antioxidants, suggesting its beneficial effects in a number of diseases where oxygen radical-mediated tissue damage is involved. With specific reference to the brain, the considerable amount of evidence accumulated from studies on various neurodegeneration models and recent clinical reports support the use of melatonin for the preventive treatment of major neurodegenerative disorders. This review summarizes the literature on the protective effects of melatonin on Alzheimer disease, Parkinson disease, Huntington's disease and Amyotrophic Lateral Sclerosis. Additional studies are required to test the clinical efficacy of melatonin supplementation in such disorders, and to identify the specific therapeutic concentrations needed.
Yi Zhang, Anna Cook, Jinho Kim, Sergei V Baranov, Jiying Jiang, Karen Smith, Kerry Cormier, Erik Bennett, Robert P Browser, Arthur L Day, Diane L Carlisle, Robert J Ferrante, Xin Wang and Robert M Friedlander.
Melatonin inhibits the caspase-1/cytochrome c/caspase-3 cell death pathway, inhibits MT1 receptor loss and delays disease progression in a mouse model of amyotrophic lateral sclerosis. Neurobiology of disease 55:26–35, July 2013.
Abstract Caspase-mediated cell death contributes to the pathogenesis of motor neuron degeneration in the mutant SOD1(G93A) transgenic mouse model of amyotrophic lateral sclerosis (ALS), along with other factors such as inflammation and oxidative damage. By screening a drug library, we found that melatonin, a pineal hormone, inhibited cytochrome c release in purified mitochondria and prevented cell death in cultured neurons. In this study, we evaluated whether melatonin would slow disease progression in SOD1(G93A) mice. We demonstrate that melatonin significantly delayed disease onset, neurological deterioration and mortality in ALS mice. ALS-associated ventral horn atrophy and motor neuron death were also inhibited by melatonin treatment. Melatonin inhibited Rip2/caspase-1 pathway activation, blocked the release of mitochondrial cytochrome c, and reduced the overexpression and activation of caspase-3. Moreover, for the first time, we determined that disease progression was associated with the loss of both melatonin and the melatonin receptor 1A (MT1) in the spinal cord of ALS mice. These results demonstrate that melatonin is neuroprotective in transgenic ALS mice, and this protective effect is mediated through its effects on the caspase-mediated cell death pathway. Furthermore, our data suggest that melatonin and MT1 receptor loss may play a role in the pathological phenotype observed in ALS. The above observations indicate that melatonin and modulation of Rip2/caspase-1/cytochrome c or MT1 pathways may be promising therapeutic approaches for ALS.
Seithikurippu R Pandi-Perumal, Ahmed S BaHammam, Gregory M Brown, Warren D Spence, Vijay K Bharti, Charanjit Kaur, Rüdiger Hardeland and Daniel P Cardinali.
Melatonin antioxidative defense: therapeutical implications for aging and neurodegenerative processes. Neurotoxicity research 23(3):267–300, 2013.
Abstract The pineal product melatonin has remarkable antioxidant properties. It is secreted during darkness and plays a key role in various physiological responses including regulation of circadian rhythms, sleep homeostasis, retinal neuromodulation, and vasomotor responses. It scavenges hydroxyl, carbonate, and various organic radicals as well as a number of reactive nitrogen species. Melatonin also enhances the antioxidant potential of the cell by stimulating the synthesis of antioxidant enzymes including superoxide dismutase, glutathione peroxidase, and glutathione reductase, and by augmenting glutathione levels. Melatonin preserves mitochondrial homeostasis, reduces free radical generation and protects mitochondrial ATP synthesis by stimulating Complexes I and IV activities. The decline in melatonin production in aged individuals has been suggested as one of the primary contributing factors for the development of age-associated neurodegenerative diseases. The efficacy of melatonin in preventing oxidative damage in either cultured neuronal cells or in the brains of animals treated with various neurotoxic agents, suggests that melatonin has a potential therapeutic value as a neuroprotective drug in treatment of Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), Huntington's disease (HD), stroke, and brain trauma. Therapeutic trials with melatonin indicate that it has a potential therapeutic value as a neuroprotective drug in treatment of AD, ALS, and HD. In the case of other neurological conditions, like PD, the evidence is less compelling. Melatonin's efficacy in combating free radical damage in the brain suggests that it can be a valuable therapeutic agent in the treatment of cerebral edema following traumatic brain injury or stroke. Clinical trials employing melatonin doses in the range of 50-100 mg/day are warranted before its relative merits as a neuroprotective agent is definitively established.
Arabinda Das, Gerald Wallace, Russel J Reiter, Abhay K Varma, Swapan K Ray and Naren L Banik.
Overexpression of melatonin membrane receptors increases calcium-binding proteins and protects VSC4.1 motoneurons from glutamate toxicity through multiple mechanisms. Journal of pineal research 54(1):58–68, 2013.
Abstract Melatonin has shown particular promise as a neuroprotective agent to prevent motoneuron death in animal models of both amyotrophic lateral sclerosis (ALS) and spinal cord injuries (SCI). However, an understanding of the roles of endogenous melatonin receptors including MT1, MT2, and orphan G-protein receptor 50 (GPR50) in neuroprotection is lacking. To address this deficiency, we utilized plasmids for transfection and overexpression of individual melatonin receptors in the ventral spinal cord 4.1 (VSC4.1) motoneuron cell line. Receptor-mediated cytoprotection following exposure to glutamate at a toxic level (25 $\mu$m) was determined by assessing cell viability, apoptosis, and intracellular free Ca(2+) levels. Our findings indicate a novel role for MT1 and MT2 for increasing expression of the calcium-binding proteins calbindin D28K and parvalbumin. Increased levels of calbindin D28K and parvalbumin in VSC4.1 cells overexpressing MT1 and MT2 were associated with cytoprotective effects including inhibition of proapoptotic signaling, downregulation of inflammatory factors, and expression of prosurvival markers. Interestingly, the neuroprotective effects conferred by overexpression of MT1 and/or MT2 were also associated with increases in the estrogen receptor $\beta$ (ER$\beta$): estrogen receptor $\alpha$ (ER$\alpha$) ratio and upregulation of angiogenic factors. GPR50 did not exhibit cytoprotective effects. To further confirm the involvement of the melatonin receptors, we silenced both MT1 and MT2 in VSC4.1 cells using RNA interference technology. Knockdown of MT1 and MT2 led to an increase in glutamate toxicity, which was only partially reversed by melatonin treatment. Taken together, our findings suggest that the neuroprotection against glutamate toxicity exhibited by melatonin may depend on MT1 and MT2 but not GPR50.
Efthimios Dardiotis, Elena Panayiotou, Marianne L Feldman, Andreas Hadjisavvas, Stavros Malas, Ilia Vonta, Georgios Hadjigeorgiou, Kyriakos Kyriakou and Theodoros Kyriakides.
Intraperitoneal melatonin is not neuroprotective in the G93ASOD1 transgenic mouse model of familial ALS and may exacerbate neurodegeneration. Neuroscience letters 548:170–5, 2013.
Abstract In amyotrophic lateral sclerosis (ALS) reactive oxygen species and apoptosis are implicated in disease pathogenesis. Melatonin with its anti-oxidant and anti-apoptotic properties is expected to ameliorate disease phenotype. The aim of this study was to assess possible neuroprotection of melatonin in the G93A-copper/zinc superoxide dismutase (G93ASOD1) transgenic mouse model of ALS. Four groups of mice, 14 animals each, were injected intraperitoneally with 0mg/kg, 0.5mg/kg, 2.5mg/kg and 50mg/kg of melatonin from age 40 days. The primary end points were; disease onset, disease duration, survival and rotarod performance. No statistically significant difference in disease onset between the four groups was found. Survival was significantly reduced with the 0.5mg/kg and 50mg/kg doses and tended to be reduced with the 2.5mg/kg dose. Histological analysis of spinal cords revealed increased motoneuron loss in melatonin treated mice. Melatonin treated animals were associated with increased oxidative stress as assessed with 4-hydroxynonenal (4-HNE), a marker of lipid peroxidation. Histochemistry and Western blot data of spinal cord from melatonin treated mice revealed upregulation of human SOD1 compared to untreated mice. In addition, real-time PCR revealed a dose dependent upregulation of human SOD1 in melatonin treated animals. Thus, intraperitoneal melatonin, at the doses used, does not ameliorate and perhaps exacerbates phenotype in the G93ASOD1 mouse ALS model. This is probably due to melatonin's effect on upregulating gene expression of human toxic SOD1. This action presumably overrides any of its direct anti-oxidant and anti-apoptotic properties.
M E Camacho, M D Carrion, L C Lopez-Cara, A Entrena, M A Gallo, A Espinosa, G Escames and D Acuna-Castroviejo.
Melatonin synthetic analogs as nitric oxide synthase inhibitors. Mini reviews in medicinal chemistry 12(7):600–17, 2012.
Abstract Nitric oxide (NO), which is produced by oxidation of L-arginine to L-citrulline in a process catalyzed by different isoforms of nitric oxide synthase (NOS), exhibits diverse roles in several physiological processes, including neurotransmission, blood pressure regulation and immunological defense mechanisms. On the other hand, an overproduction of NO is related with several disorders as Alzheimer's disease, Huntington's disease and the amyotrophic lateral sclerosis. Taking melatonin as a model, our research group has designed and synthesized several families of compounds that act as NOS inhibitors, and their effects on the excitability of N-methyl-D-aspartate (NMDA)-dependent neurons in rat striatum, and on the activity on both nNOS and iNOS were evaluated. Structural comparison between the three most representative families of compounds (kynurenines, kynurenamines and 4,5-dihydro-1H-pyrazole derivatives) allows the establishment of structure-activity relationships for the inhibition of nNOS, and a pharmacophore model that fulfills all of the observed SARs were developed. This model could serve as a template for the design of other potential nNOS inhibitors. The last family of compounds, pyrrole derivatives, shows moderate in vitro NOS inhibition, but some of these compounds show good iNOS/nNOS selectivity. Two of these compounds, 5-(2-aminophenyl)-1H-pyrrole-2-carboxylic acid methylamide and cyclopentylamide, have been tested as regulators of the in vivo nNOS and iNOS activity. Both compounds prevented the increment of the inducible NOS activity in both cytosol (iNOS) and mitochondria (i-mtNOS) observed in a MPTP model of Parkinson's disease.
Jorge H Limón-Pacheco and María E Gonsebatt.
The glutathione system and its regulation by neurohormone melatonin in the central nervous system. Central nervous system agents in medicinal chemistry 10(4):287–97, 2010.
Abstract The glutathione system includes reduced (GSH) and oxidized (GSSG) forms of glutathione; the enzymes required for its synthesis and recycling, such as gamma-glutamate cysteine ligase ($\gamma$-GCL), glutathione synthetase (GS), glutathione reductase (GSR) and gamma glutamyl transpeptidase ($\gamma$-GGT); and the enzymes required for its use in metabolism and in mechanisms of defense against free radical-induced oxidative damage, such as glutathione s-transferases (GSTs) and glutathione peroxidases (GPxs). Glutathione functions in the central nervous system (CNS) include maintenance of neurotransmitters, membrane protection, detoxification, metabolic regulation, and modulation of signal transduction. A common pathological hallmark in various neurodegenerative disorders, such as amyotrophic lateral sclerosis and Alzheimer's and Parkinson's diseases is the increase in oxidative stress and the failure of antioxidant systems, such as the decrease in the GSH content. The administration of exogenous neurohormone melatonin at pharmacological doses has been shown not only to be an effective scavenger of reactive oxygen and nitrogen species but also to enhance the levels of GSH and the expression and activities of the GSH-related enzymes including $\gamma$-GCL, GPxs, and GSR. The exact mechanisms by which melatonin regulates the glutathione system are not fully understood. The main purpose of this short review is to discuss evidence relating to the potential common modulation signals between the glutathione system and melatonin in the CNS. The potential regulatory mechanisms and interactions between neurons and non-neuronal cells are also discussed.
The antiapoptotic activity of melatonin in neurodegenerative diseases. CNS neuroscience & therapeutics 15(4):345–57, January 2009.
Abstract Melatonin plays a neuroprotective role in models of neurodegenerative diseases. However, the molecular mechanisms underlying neuroprotection by melatonin are not well understood. Apoptotic cell death in the central nervous system is a feature of neurodegenerative diseases. The intrinsic and extrinsic apoptotic pathways and the antiapoptotic survival signal pathways play critical roles in neurodegeneration. This review summarizes the reports to date showing inhibition by melatonin of the intrinsic apoptotic pathways in neurodegenerative diseases including stroke, Alzheimer disease, Parkinson disease, Huntington disease, and amyotrophic lateral sclerosis. Furthermore, the activation of survival signal pathways by melatonin in neurodegenerative diseases is discussed.
Charanjit Kaur and Eng-Ang Ling.
Antioxidants and neuroprotection in the adult and developing central nervous system. Current medicinal chemistry 15(29):3068–80, January 2008.
Abstract Oxidative stress is implicated in the pathogenesis of a number of neurological disorders such as Alzheimer's disease (AD), Parkinson's disease (PD), multiple sclerosis and stroke in the adult as well as in conditions such as periventricular white matter damage in the neonatal brain. It has also been linked to the disruption of blood brain barrier (BBB) in hypoxic-ischemic injury. Both experimental and clinical results have shown that antioxidants such as melatonin, a neurohormone synthesized and secreted by the pineal gland and edaravone (3-methyl-1-phenyl-2-pyrazolin-5-one), a newly developed drug, are effective in reducing oxidative stress and are promising neuroprotectants in reducing brain damage. Indeed, the neuroprotective effects of melatonin in many central nervous system (CNS) disease conditions such as amyotrophic lateral sclerosis, PD, AD, ischemic injury, neuropsychiatric disorders and head injury are well documented. Melatonin affords protection to the BBB in hypoxic conditions by suppressing the production of vascular endothelial growth factor and nitric oxide which are known to increase vascular permeability. The protective effects of melatonin against hypoxic damage have also been demonstrated in newborn animals whereby it attenuated damage in different areas of the brain. Furthermore, exogenous administration of melatonin in newborn animals effectively enhanced the surface receptors and antigens on the macrophages/microglia in the CNS indicating its immunoregulatory actions. Edaravone has been shown to reduce oxidative stress, edema, infarct volume, inflammation and apoptosis following ischemic injury of the brain in the adult as well as decrease free radical production in the neonatal brain following hypoxic-ischemic insult. It can counteract toxicity from activated microglia. This review summarizes the clinical and experimental data highlighting the therapeutic potential of melatonin and edaravone in neuroprotection in various disorders of the CNS.
Serotonergic mechanisms in amyotrophic lateral sclerosis. The International journal of neuroscience 116(7):775–826, 2006.
Abstract Serotonin (5-HT) has been intimately linked with global regulation of motor behavior, local control of motoneuron excitability, functional recovery of spinal motoneurons as well as neuronal maturation and aging. Selective degeneration of motoneurons is the pathological hallmark of amyotrophic lateral sclerosis (ALS). Motoneurons that are preferentially affected in ALS are also densely innervated by 5-HT neurons (e.g., trigeminal, facial, ambiguus, and hypoglossal brainstem nuclei as well as ventral horn and motor cortex). Conversely, motoneuron groups that appear more resistant to the process of neurodegeneration in ALS (e.g., oculomotor, trochlear, and abducens nuclei) as well as the cerebellum receive only sparse 5-HT input. The glutamate excitotoxicity theory maintains that in ALS degeneration of motoneurons is caused by excessive glutamate neurotransmission, which is neurotoxic. Because of its facilitatory effects on glutaminergic motoneuron excitation, 5-HT may be pivotal to the pathogenesis and therapy of ALS. 5-HT levels as well as the concentrations 5-hydroxyindole acetic acid (5-HIAA), the major metabolite of 5-HT, are reduced in postmortem spinal cord tissue of ALS patients indicating decreased 5-HT release. Furthermore, cerebrospinal fluid levels of tryptophan, a precursor of 5-HT, are decreased in patients with ALS and plasma concentrations of tryptophan are also decreased with the lowest levels found in the most severely affected patients. In ALS progressive degeneration of 5-HT neurons would result in a compensatory increase in glutamate excitation of motoneurons. Additionally, because 5-HT, acting through presynaptic 5-HT1B receptors, inhibits glutamatergic synaptic transmission, lowered 5-HT activity would lead to increased synaptic glutamate release. Furthermore, 5-HT is a precursor of melatonin, which inhibits glutamate release and glutamate-induced neurotoxicity. Thus, progressive degeneration of 5-HT neurons affecting motoneuron activity constitutes the prime mover of the disease and its progression and treatment of ALS needs to be focused primarily on boosting 5-HT functions (e.g., pharmacologically via its precursors, reuptake inhibitors, selective 5-HT1A receptor agonists/5-HT2 receptor antagonists, and electrically through transcranial administration of AC pulsed picotesla electromagnetic fields) to prevent excessive glutamate activity in the motoneurons. In fact, 5HT1A and 5HT2 receptor agonists have been shown to prevent glutamate-induced neurotoxicity in primary cortical cell cultures and the 5-HT precursor 5-hydroxytryptophan (5-HTP) improved locomotor function and survival of transgenic SOD1 G93A mice, an animal model of ALS. | CommonCrawl |
Events for the day of Thursday, April 5, 2018.
Abstract: For each prime $p\le x$, remove from the set of integers a set $I_p$ of residue classes modulo $p$, and let $S$ be the set of remaining integers. As long as $I_p$ has average 1, we are able to improve on the trivial bound of $\gg x$, and show that for some positive constant c, there are gaps in the set $S$ of size $x(\log x)^c$ as long as $x$ is large enough. As a corollary, we show that any irreducible polynomial $f$, when evaluated at the integers up to $X$, has a string of $\gg (\log X)(\log\log X)^c$ consecutive composite values, for some positive $c$ (depending only on the degree of $f$). Another corollary is that for any polynomial $f$, there is a number $G$ so that for any $k\ge G$, there are infinitely many values of $n$ for which none of the values $f(n+1),\ldots,f(n+k)$ are coprime to all the others. For $f(n)=n$, this was proved by Erdos in 1935, and currently it is known only for linear, quadratic and cubic polynomials. This is joint work with Sergei Konyagin, James Maynard, Carl Pomerance and Terence Tao.
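As a toy illustration of the sieving setup in this abstract (the removed class for each prime and the numerical ranges below are arbitrary choices made for the example, not the construction used in the paper), one can compute the largest gap in such a sieved set directly:

```python
# Toy version of the sieved-set setup: for each prime p <= x remove one residue
# class I_p modulo p (here simply 0 mod p, an arbitrary illustrative choice),
# and look at the largest gap among the surviving integers up to N.

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def largest_gap(x, N):
    ps = primes_up_to(x)
    survivors = [n for n in range(1, N + 1) if all(n % p != 0 for p in ps)]
    return max(b - a for a, b in zip(survivors, survivors[1:]))

# With I_p = {0 mod p} the survivors are just the integers free of prime factors <= x,
# so the gaps found here are of the classical "prime gap" flavour; the theorem in the
# abstract allows an arbitrary family I_p of average size 1 for each p.
print(largest_gap(x=50, N=200000))
```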
Abstract: We discuss a recent theorem of the speaker and Kyle Kinneberg concerning rigidity for convex-cocompact actions on non-compact rank-one symmetric spaces, which generalizes a result of Bonk and Kleiner from real hyperbolic space. A key part of the proof concerns analysis on some non-Euclidean metric spaces (Cheeger's "Lipschitz differentiability spaces" and Carnot groups), and this will be the main focus of the talk.
1. The only non-trivial closed ideal in L(L_p), 1 ≤ p < ∞, that has a left approximate identity is the ideal of compact operators (joint with N. C. Phillips and G. Schechtman).
2. There are infinitely many (in fact, a continuum of) closed ideals in L(L_1) (joint with G. Pisier and G. Schechtman).
The second result answers a question from the 1978 book of A. Pietsch, "Operator ideals". | CommonCrawl |
Currently, stats, math, cstheory and physics have MathJax support, which turns LaTeX code into equations.
At least on math, occasionally there are users that do not know this feature or don't know it's possible to use LaTeX (MathJax), resulting in a badly formatted post.
in the "How to Format" on the right hand side.
This is basically completed, but I can't find a good help / demos page to link to for MathJaX.
but oddly it requires users to "view source" before showing them the markup required, which is ... annoying.
Let's write our own help page for our use of MathJax. Here is a start. It is based on Stack Overflow's own Markdown editing help, the FAQ for typing math on math.SE and Math Overflow, and "Using LaTeX" on ask NRICH.
Characters in bold italics indicate highlighting.
This site supports typesetting mathematical formulas with AMS-LaTeX markup, powered by the MathJax rendering engine.
The integers $x,y,z$ form a Pythagorean triplet when $x^2 + y^2 = z^2$.
where $n$ is a constant.
Spacing — a\ b (text space). Other kinds of spacing.
Symbols — \ne (≠), \ge (≥), \le (≤), \sim (∼), \pm (±), \to (→), \infty (∞), etc.
Function names — \sin, \cos, \log, \lim, etc.
Visit Detexify2 to look up the command for a symbol.
Check the MathJax documentation for the complete list of commands supported.
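For instance, the following short snippet (an illustrative example, assuming the usual $ ... $ delimiters for inline formulas and $$ ... $$ for displayed ones) combines several of the commands listed above:

```latex
The sequence $\left(1 + \frac{1}{n}\right)^n$ converges as $n \to \infty$:
$$\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e \approx 2.718,$$
and for any angle $\theta$ we have $\sin^2\theta + \cos^2\theta = 1$.
```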
Right-clicking on any equation should reveal a context menu. Clicking "Show source" will open up a new window showing the LaTeX markup that generates it.
The Not So Short Introduction to LaTeX2e is a good beginner's guide on the LaTeX system.
Let $G$ be a compact group. For $1\leq p\leq\infty$ we introduce a class of Banach function algebras $\mathcal A^p(G)$ on $G$ which are the Fourier algebras in the case $p=1$, and for $p=2$ are certain algebras discovered by Forrest, Samei and Spronk. In the case $p\neq 2$ we find that $\mathcal A^p(G)\cong\mathcal A^p(H)$ if and only if $G$ and $H$ are isomorphic compact groups. These algebras admit natural operator space structures, and also weighted versions, which we call $p$-Beurling–Fourier algebras. We study various amenability and operator amenability properties, Arens regularity and representability as operator algebras. For a connected Lie group $G$ and $p > 1$, our techniques of estimation of when certain $p$-Beurling–Fourier algebras are operator algebras rely more on the fine structure of $G$ than in the case $p=1$. We also study restrictions to subgroups. In the case that $G=$ SU(2), we restrict to a torus and obtain some exotic algebras of Laurent series. We study amenability properties of these new algebras, as well.
The Bayesian interpretation of quantum mechanics is correct. So there!
Toby, I see you are editing this very minute, so maybe the following may be resolved once you see this here.
But presently when I look at the entry it seems to me that the very explanation of what is Bayesian about it all is currently missing. Currently the section "Formalism" describes in which sense a suitable algebra may be thought of as a non-associative analog of an algebra dual to a space of probability measures.
There ought to be a line following this about what specifically Bayes is meant to have to do with this.
I added some more to Bayesian reasoning.
Format: MarkdownItexWhere the text says > People have implied [...] that this is what Niels Bohr meant all along when he put forth the Copenhagen interpretation I have appended > (for more on this suggestion see also at _[[Bohr topos]]_). I still feel the entry presently does not say what its title announces. I suppose you want to add a line roughly like the following, at the end of the section "Formalism": > This allows to systematically think of quantum observables as random variables. There have been many debates about what that means. In [[hidden variable theory]] one supposes that it means that there is an underlying ensemble of which these probability variables are coarse grained expectation values. This would give an actual "frequentist" interpretation to the probabilities appearing here. But it has also been argued that these probabilities are nothing but estimates of the available subjective knowledge of the system, and this may be related to the Bayesian interpretation to probability. Or something like this.
(for more on this suggestion see also at Bohr topos).
This allows to systematically think of quantum observables as random variables. There have been many debates about what that means. In hidden variable theory one supposes that it means that there is an underlying ensemble of which these probability variables are coarse grained expectation values. This would give an actual "frequentist" interpretation to the probabilities appearing here. But it has also been argued that these probabilities are nothing but estimates of the available subjective knowledge of the system, and this may be related to the Bayesian interpretation to probability.
My editing is done for today. What you were missing is now under Interpretation, although it is rather too brief if one isn't already familiar with Bayesianism in classical probability.
Is it really defensible to say that "the state vector is the map and not the territory" when there is no territory? For instance, saying "this collapse takes place in the map, not the territory; it is not a physical process" implies that "physical process" means something that takes place in the territory rather than the map; but then doesn't this argument show that there are no "physical processes" at all?
Mike, it is typical for these debates to remain inconclusive, but let's just isolate what the point of saying "Bayesian" here is supposed to be:

So the point is that a) when a Bayesian makes a measurement that decides the actual value of a probability variable, then he "updates his prior" to be a probability distribution delta-peaked at that value and then proceeds by time-evolving this probability distribution from there on by Bayes' law and b) this may seem to be exactly what happens to physicists when they see their "wavefunction collapse" and then proceed time-evolving it with Schrödinger's equation from there. So the idea is that b) is just an instance of a).

(I am not sure if this "interpretation of wave function collapse" deserves to be called an "interpretation of quantum mechanics", since the reason why there are just probabilities in the first place is untouched by invoking Bayes, and this is mostly what makes people invoke "interpretations of quantum mechanics".)

On the other hand, it is known that coupling a big quantum mechanical system (e.g. an observer) to a tiny one (e.g. a particle in a detector) DOES lead to a dynamical collapse-like reduction of the tiny system's state to a "classical mixture", and this happens purely dynamically by Schrödinger's equation applied to both the system and its observer at once, as it should be. This insight goes by the name "decoherence" and when it was figured out long ago some people thought this settles the issue. But debates about this do still continue, remain inconclusive and maybe always will. The Wikipedia entry Wave function collapse (http://en.wikipedia.org/wiki/Wave_function_collapse) is actually pretty good. It explains this decoherence-explanation of the "collapse".
I should add that decoherence only explains the reduction (collapse) from quantum probability to classical probability. To the classical probability distribution that remains after decoherence one will of course tend to want to apply somethingy like Bayesian reasoning again.
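In symbols, the standard textbook sketch of this reduction (assuming a pure system state expanded in the measurement basis, and environment states that become effectively orthogonal) is

$$\Big(\sum_i c_i\,|i\rangle\Big)\otimes|E_0\rangle \;\longrightarrow\; \sum_i c_i\,|i\rangle\otimes|E_i\rangle, \qquad \langle E_i|E_j\rangle \approx \delta_{ij},$$

so that the reduced density matrix of the small system,

$$\rho_{\mathrm{sys}} \;=\; \operatorname{Tr}_{\mathrm{env}}\,|\Psi\rangle\langle\Psi| \;\approx\; \sum_i |c_i|^2\,|i\rangle\langle i|,$$

is approximately a classical mixture: the off-diagonal interference terms are suppressed, but which outcome $i$ actually occurs is still only given probabilistically.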
I'm sorry, but I don't really see how what you said answers my question. (-: How about this for a more specific question: according to the point of view presented on the page, what is an example of a "physical process"?
I didn't try to answer your question, I tried to clarify what this thread is about (or ought to be about, judging from its title), namely the idea/suggestion that 'wave function collapse' is an instance of updating Bayesian priors.
For a decent discussion of what a physical process in (quantum) theory is I would invoke a decent formal foundations of (quantum) physics in higher topos theory and then see what that tells us.
Ok. Maybe Toby can answer my question, since he wrote the bit that confused me.
Reading only Mike #6 so far: I like to say that there is a territory, but it cannot be described as we would like, as a function from the space of observables to the real line.
One might say, if the territory is not a function from the space of observables to the real line, then what is it? But of course, even in the days of classical physics (or today using a different interpretation of quantum physics), when you could describe it with such a function, you wouldn't say that the territory is this function, or at least you might not. Just because you have a perfect mathematical description of reality (even if you really do have one), that doesn't mean that reality is that mathematics. (To be sure, some people, such as Max Tegmark, take precisely the opposite position here.) Even a perfect map is not the territory.
So it's very interesting that an obvious idea as to how to describe reality completely doesn't exist, but that doesn't mean that we don't have reality. Reality is what it is, meanwhile we are just doing our best to understand it.
Then again, perhaps the territory can still be described using a different mathematical construct, say something that lives in a Bohr topos (which I say only because I haven't really figured out how a Bohr topos works yet). Even so, that would still be a map, not the territory.
Edit to answer Mike #9: all of the usual things that we think of as physical processes are still physical processes. Although I usually prefer the Heisenberg picture when thinking philosophically, 'process' evokes time, so let's use the Schrödinger picture, as Urs was doing. Then we can say that physical processes comprise the time evolution of the state. However, changing which state you use when you gain information is not that.
Edit: I just realized that this comment used the word 'map' in two subtly similar but fundamentally different ways. If this had been intentional, then it would have been a pun, but I'm afraid that it must have just been confusing. So for one sense I have replaced it with 'function'.
I've decided that the formalism on this page is too precise. It's very natural, since probability theory is linked to measure spaces which are linked to their real $L^\infty$ spaces, which are $W^*$-algebras, or better $JBW$-algebras since only that structure is relevant even in the quantum case. But it's also fairly obscure, in two ways: using the Jordan multiplication of the real-valued observables instead of the full $W^*$-algebra with its commutators is rare (since the commutators are used to describe time evolution, if nothing else), and even using a $W^*$-algebra rather than an arbitrary $C^*$-algebra is rare (even if you don't restrict yourself to operators on a Hilbert space). So I should move that to JBW-algebras in quantum mechanics or something like that, then make the Formalism here more general.

Edit: Done.
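For reference, the Jordan multiplication mentioned here is the symmetrized product; under the usual convention it reads

$$a \circ b \;=\; \tfrac{1}{2}\,(a b + b a),$$

which, unlike the ordinary operator product or the commutator, sends pairs of self-adjoint operators to self-adjoint operators; this is why the Jordan structure alone suffices for talking about real-valued observables.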
Thanks for this. Toby, clicking through to localisable measurable space and measurable locale I am surprised not to find the connection to valuations on locales nor to the sheaf-theoretic approach to measure theory. We have some information at Boolean topos and at Measurable spaces. Maybe we should discuss a bit before revising.
That rearrangement makes very good sense to me now. I have added cross-links vigorously.
It can be nice to describe the kinematics of a quantum system using a JBW-algebra.
Not the greatest of opening lines here in JBW-algebraic quantum mechanics.
What's trying to be said? That there's some kind of advantage to this formalism? Can we be specific?
That's why I wrote

This is hastily copied from elsewhere and minimally edited. More work should be done to spell this out. Also motivation.

What you're asking for is the motivation. I've put some in now.
Sorry, it's just that after all these years I still have the words of my primary school teacher in my head, "Always look for an alternative to the word 'nice'." She wasn't too fond of 'got' either.
Re: #12, the current wording on the page implies to me that collapse is "not a physical process" because it "takes place in the map, not the territory". However, if the quantum state is a map and not the territory, then it seems to me that the time evolution of that state is also something taking place in the map and not the territory, and thus also ought not to be considered a "physical process". That's what I was trying to say.
The idea from the Bayesian point of view is that the evolution of that state/of the probability distribution is that which most closely accounts for the actual process. One uses Bayes' law to update a prior given certain information, such as to stay as close to reality as possible, and the idea is that this is what Schrödinger's equation does for you in quantum mechanics, to update the best possible of your knowledge of the actual system, to keep the map as close to the territory as possible. That's the idea.
I prefer the Heisenberg picture, in which the state doesn't evolve but the observables do. Then the time evolution is in the territory. [1] Even in the Schrödinger picture, the time evolution there is merely the automatic reflection of this physical process in the map, rather than a change of the map itself. Collapse, in contrast, reflects a change in the observer's information, a real updating of their map.

[1] Technically, philosophically, the algebra of observables is also part of the map, but it's a part that we generally pretend is perfect for the sake of doing physics. But we have to remember that it's also part of the map if what we're testing is the physical theory itself. Still, the time evolution that appears there is also supposed to be a reflection of a physical process in the territory, while collapse is not.
Okay, that makes a tiny bit of sense. Maybe the page could be edited to clarify this?
Wait, should we maybe distinguish between the Bayesian and the Bartelsian interpretation of quantum mechanics?

The Bayesian interpretation is simply this: the idea that a) quantum states are thought of as analogous to "Bayesian probabilities" (priors), that b) time evolution of quantum states is the analog of Bayes' rule for updating probabilities (for instance on slides 19-21 here: http://www.ee.ucr.edu/~korotkov/presentations/11-Taiwan-1w.pdf) and that c) collapse of the wave function is the analog of a Bayesian updating his prior to be delta-peaked.
Time evolution by the Schrödinger equation is not an analogue of Bayes's Rule; collapse is. Slide 19 says 'evolution due to measurement', which is unfortunately ambiguous between collapse and evolution by the time-dependent Schrödinger equation incorporating a physical interaction with a measurement device. The simplest way to tell the difference, in my opinion, is to consider the (von Neumann) entropy. Applying Bayes's Rule classically reduces the entropy of a probability distribution; collapse also reduces the entropy. However, time evolution by the Schrödinger equation is unitary and so conserves total entropy. [1]

Also, collapse is not generally collapse to a delta distribution; in fact, in quantum mechanics, there typically is no delta distribution (that's what makes it quantum rather than classical). Of course, if you measure an observable $O$ to have a particular value, then the probability distribution for that observable collapses to a delta distribution, but the distributions for the other observables generally won't. (And indeed, if $P$ does not commute with $O$, then the distributions for $O$ and $P$ cannot simultaneously be deltas.) Maybe this is all that you meant; but although people usually talk about measuring a particular value of an observable, you might also only measure that the value is within a certain range, and that still triggers the application of Bayes's Rule and a corresponding collapse of the wavefunction. Of course, measuring the value of $O$ to lie within the measurable set $E$ is equivalent to measuring the observable $\chi_E(O)$ to have the value $1$, so something is still collapsing to a delta; but I think that describing this as updating to a delta-peaked posterior [2] is misleading.

[1] The physical entropy that increases is not the von Neumann entropy but a coarse-grained entropy, as described at entropy#physical.

[2] This is not the prior, since it comes after the measurement, although it will be the prior for the next measurement.
So the Bayesian interpretation as I understand it (following Urs's points a, b, c) is this:

1. Quantum states (pure or mixed) are analogous to (indeed generalizations of) probability distributions, which are to be interpreted in a Bayesian way, as indicating knowledge, belief, etc.

2. Time evolution of quantum states by Schrödinger's equation is analogous to evolution of classical statistical systems by Liouville's equation (and von Neumann has an equation that generalizes both).

3. Collapse of the wave function is analogous to (indeed a generalization of) updating a probability distribution; Born's Rule and Bayes's Rule are analogues.

Maybe I should put this in the article.
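In formulas, the analogy in item 3 can be written out in the usual notation (a state $\rho$, a spectral projection $P_E = \chi_E(O)$ onto the event "the value of $O$ lies in $E$", and the Lüders update rule; this is the standard textbook form, given here only as an illustration):

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \qquad\text{versus}\qquad \rho \;\mapsto\; \frac{P_E\,\rho\,P_E}{\operatorname{Tr}(P_E\,\rho)}, \quad\text{with probability } \operatorname{Tr}(P_E\,\rho).$$

When everything commutes (a diagonal $\rho$ and diagonal projections), the quantum rule reduces to ordinary conditionalization, i.e. to Bayes's Rule with a sharp likelihood, which is the sense in which item 3 is a generalization and not merely an analogy.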
The phrase "analogous to" makes it make a whole lot more sense. I thought at first the claim was that quantum mechanics actually is a Bayesian probability system.
A useful distinction now common in quantum foundations is between psi-ontic and psi-epistemic interpretations.
Psi-ontic interpretations take the wavefunction to represent an actual physical thing, and the challenge is to explain why measurements appear to have non-deterministic outcomes governed by the Born rule. Examples include the Bohmian, Modal, Spontaneous Collapse, and Many Worlds interpretations.
Psi-epistemic interpretations take the wavefunction to fundamentally represent probabilities (in a suitably generalized sense). Two challenges to many such interpretations (expressed by John Bell as, "Whose information? Information about what?") are their fundamentally subjective nature, and the fact that one cannot think of wavefunctions as probability distributions over "real things" like positions, which seems to commit them to anti-realism, a point recently driven home by the PBR Theorem. How exactly one justifies (or at least conceives of the purport of) the rules of quantum probability, along with one's philosophical commitments about the status of "reality" and the meaning of probability, gives rise to different psi-epistemic interpretations. Some examples are Ballentine's Statistical Interpretation (frequentist view), Fuchs et al.'s "Quantum Bayesianism" (subjective Bayesian view), Healey's Pragmatic Interpretation, and Bub's recent work.
My main reason for posting this is one of naming: what is being called "The Bayesian Interpretation" here is in fact a family of interpretations that in the past few years has come to be called psi-epistemic, and what is called the Quantum Bayesian Interpretation in the literature is a psi-epistemic interpretation based on very specific philosophical commitments.
@ Mike #27: But keep in mind that 'analogous to' is an understatement. I took care to write 'generalization' wherever I could (which was at least once per item). So it is not merely an analogy. The Bayesian interpretation of classical probability theory is literally a special case of the Bayesian interpretation of quantum mechanics, not just something analogous.

Or another way to avoid having merely an analogy: Given a state and an observable, you get an honest-to-goodness probability distribution (not merely an analogue) on the spectrum of the observable. So in particular, you get a real number between $0$ and $1$ from a state $\psi$, an observable $O$, and a measurable set $E$, which is (at least naively) the probability that the value of $O$ belongs to $E$, given that the world is in the state $\psi$. The Bayesian interpretation says that this number is to be interpreted directly as a Bayesian probability (not just an analogue).

It actually says more than this, because so far we have left open the nature of $\psi$ itself. The Bayesian interpretation goes on to say that $\psi$ is nothing more than a record of all of the associated probabilities as we vary $O$ and $E$ (rather than, say, an objective feature of external reality that nevertheless gives these probabilities by some law of nature and epistemology).
@Justin: Thanks! We should consider renaming the page, or at least inserting some discussion of the literature on this point.
@Toby: Sure, but I think it's important to avoid giving the impression of the converse, i.e. that the "Bayesian interpretation of quantum mechanics" is a special case of the previously existent notion of Bayesian probability. The point is that seeing the way in which quantum mechanics generalizes Bayesian probability can help us to develop an intuition for QM based on our previous intuition for Bayesian probability — not that QM is somehow "explained" in terms of notions from ordinary probability, which the word "interpretation" may suggest.
I disagree (in part) with Justin #28, but my computer just lost it all. (I forgot the lesson that I give others, to always use Itsalltext!) So I will have to rewrite it another time.
References for lost comment: [Letter to Schack](http://perimeterinstitute.ca/personal/cfuchs/PhaseTransition.pdf#12), [objective Bayesians in 1994](http://www.math.ucr.edu/home/baez/bayes.html).
A really good exposition of psi-ontic vs. psi-epistemic and the import of the PBR Theorem is at Matt Leifer's [blog](http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/).
$$Quantum\; Bayesianism \subset Bayesian\; interpretation\; of\; quantum\; mechanics \subset epistemic\; interpretation\; of\; quantum\; mechanics .$$
But perhaps Justin's point is simply that the article could be easily generalized to cover epistemic interpretations generally. This may be true, and then it probably should be, but I don't feel qualified to judge. Somebody else could edit it thus.
Justin also seems to be claiming that the containment on the left is not strict. That depends on what one means by "Quantum Bayesianism", but with the capital letters (at least, the capital "Q"), I recognize that as a name used specifically by Christopher Fuchs and his collaborators. (Or they will even say "QBism", making the term even more theirs, or perhaps even specifically Fuchs's.) As Justin says, this is based on "very specific philosophical commitments", including (but not limited to) subjective Bayesianism. But as Bayesian interpretation of probability is much broader than this, so is Bayesian interpretation of quantum mechanics.
Again, perhaps Justin's claim is simply that the term "Bayesian", in the context of interpretation of quantum mechanics, should be reserved for QBism. But then there is no way to identify an interpretation of quantum mechanics more specific than epistemic and less specific than QBism. As an objective Bayesian myself, and one whose interpretation of quantum mechanics is a generalization of my objective Bayesian interpretation of probability, I need such a term! And while I'm willing to let Fuchs et al have "QBism" (and even "Quantum Bayesianism" if capitalized), I won't surrender the term "Bayesian" to them entirely, not even in this context.
In the 1994 Usenet conversation on Bayesianism in quantum mechanics, Bayesian interpretation of quantum mechanics is discussed, predating by several years the 2001 papers of Caves, Fuchs, and Schack. There are people in that conversation who explicitly identify as objective Bayesians. So subjective Bayesianism has no priority to the term "Bayesian" in the context of interpretation of quantum mechanics.
Above I objected to the last paragraph of Justin #28. But I like the other three paragraphs (as well as Justin #32), and we should perhaps put this in interpretation of quantum mechanics.
I've also added some stuff to Bayesian interpretation of quantum mechanics about how its topic relates to the other things in my proper containment diagram above.
@Mike #30:

> not that QM is somehow "explained" in terms of notions from ordinary probability, which the word "interpretation" may suggest

Of course QM as a whole cannot be thereby explained, but there is a sense in which I want to say that the interpretation is reduced entirely to the interpretation of ordinary probability. Specifically, the probability distributions $O_*\psi$ (for $O$ an observable and $\psi$ a state) are to be interpreted as ordinary probability distributions (which, for me, are interpreted as indicating one's knowledge of something, in this case of the value of $O$). The difference, of course, is that these cannot be combined into a single joint probability distribution; this makes QM non-classical. Thus, QM is not given by a notion in ordinary probability theory; nevertheless, QM is described by a net of such notions.
I like the perspective that

> QM is described by a net of ... notions [in ordinary probability theory]

but I don't think it implies that

> the interpretation [of QM] is reduced entirely to the interpretation of ordinary probability.

The point is exactly that a net of notions is harder to interpret than a single one.
Since the interpretation in any case comes down to interpreting what $O_*\psi$ means (what I know about the value of the observable $O$ when my state of knowledge is given by $\psi$), the same interpretation seems to me to work for the whole net. But I understand that this might not satisfy others.
Sorry for the belated reaction.
Re Toby's #25, #26 that only the collapse is analogous to Bayes law: true, all right.
One other quibble: does it really make sense to speak of "Bayesian probability"? It's no different from "Kolmogorov probability", except that one tends to accompany it by more of a story.
Yes, the term should really be 'Bayesian interpretation of probability' or 'probability, with a Bayesian interpretation,' or something like that.
I have a PDE which involves the predictable finite-variation parts of some semi-martingales and a quadratic-covariation process, and I tried to derive a Feynman-Kac style expectation from the PDE. However, the result looks somewhat strange: it is of the form $$ f(t, x) = \mathbb E \left[f(T, X_T)+\int^T_t d[Z, X]_s \,\bigg|\, X_t = x\right] $$ where $Z_t = f(t, X_t)$, $X_t$ is a semi-martingale and $[\cdot,\cdot]_t$ is the quadratic covariation. The reason I have doubts about this is that, evidently, the expectation contains itself. Typically, Feynman-Kac formulas take forms like $$ f(t, x) = \mathbb E \left[ f(T, X_T) \,\bigg|\, X_t = x\right] $$ where $f(T, X_T)$ corresponds to some terminal condition, but in my case, the integral inside the expectation includes the solution itself at every point between time $t$ and $T$.
Might it then be inferred that differential operators corresponding to the drift-terms of a quadratic-variation process do not lead to meaningful Feynman-Kac representations?
The susceptibilities of mixed antiferromagnetic/ferromagnetic rectangular Heisenberg lattices of $S = 1/2$ have been simulated using Quantum Monte Carlo techniques. These simulations include lattices in which the stronger interaction is ferromagnetic or antiferromagnetic along with the isotropically mixed lattice. The two exchange strengths, $J$ and $J'$, are related by $J'= \alpha J$, where $\alpha$ is the aspect ratio which ranges from $0 \le \alpha \le 1$. These simulations were done for $0 \le \alpha \le 1$ in .05 increments. The results are discussed and the models are used to fit suspected mixed antiferromagnetic/ferromagnetic rectangles such as $ Cu(pyz)(NO_3)(HCO_2) $ and $ Cu(pyz)(N_3)_2 $. | CommonCrawl |
23 Greatest number of non-attacking moves that queens can make on an $n \times n$ chess board.
17 Number of steps the path-avoiding snail must take before a step size of $(2n - 1)/2^k$?
15 Smallest region that can contain all free $n$-ominoes.
14 Combinatorial rule for a stable stack of bricks.
13 Intuition for why Costas arrays of order $n$ fail to undergo combinatorial explosion.
12 Does the "prime ant" ever backtrack? | CommonCrawl |
This repository contains an informal library of figures created in various ways and for various purposes which may be of use to others. These figures are typically described in TikZ, Inkscape or generated by Python (usually by outputting machine generated SVG or TikZ source).
Though the figures are intended to be aesthetically pleasing, their descriptions are often not. Please be aware of this if you are considering adapting a figure!
You can clone the repository containing these figures over on GitHub.
An illustration of a subsection of the hexagonal torus network used in SpiNNaker. This version is drawn with all edges of equal distance rather than the misleading projection often used with a normal 2D mesh augmented with diagonal links.
An illustration of a hexagonal torus network as used in SpiNNaker with the wrap-around links stubbed. This version is drawn with all edges of equal distance rather than the misleading projection often used with a normal 2D mesh augmented with diagonal links.
Shows how the torus network gets its name by transforming a torus in the conventional 2D form (with wrap-around links) and turning it into a torus.
Shows the range of Small-World networks generated by the Watts Strogatz model. See the (pleasingly short) sources for opportunities for tweaking.
A (horrific) script which generates examples of parallel signals with varying amounts of skew.
A (horrific) script which generates examples of serial signals with options to distort and encode the messages. Relies on eightbtenb.py, an equally horrific and known incorrect 8b10b encoder/decoder implementation (good enough for figures...).
An illustrative example of a spiking neuron model. Highly biologically unrealistic and so not the sort of thing really used in simulations but gets the jist across.
An illustrative example of a neural network. Mostly illustrates that it is a graph and can highlight a single neuron and its links which is handy while talking about fan-out.
Shows the steps to fold a network of $4\times4$ threeboards in SpiNNaker. Red, Green and Blue correspond to North, North-East, East respectively. Touching edges are implicitly connected.
An illustration of dimension order routing between two points in the SpiNNaker network including a set of unit vectors. This version is drawn with all edges of equal distance rather than the misleading projection often used with a normal 2D mesh augmented with diagonal links.
Shows the wiring for a version of the largest planned SpiNNaker machine with 1,200 boards of 48 chips with 18 cores each mapped into cabinets. Generated by the SpiNNer wiring guide generator using a LaTeX installation specifically configured to allow the use of insanely large diagrams.
Shows the long wires in a network of $4\times4$ threeboards in SpiNNaker. Red, Green and Blue correspond to North, North-East, East respectively. Touching edges are implicitly connected.
Various Watts Strogatz model style torus networks. Autogenerated by a script which has since gone missing -- sorry!
Shows the steps to fold a ring network.
A folded Closs network (also known as a fat tree).
A fat tree network. Thicker lines indicate higher-bandwidth links.
A very hand-wavy version of the pipeline taken by packets using the high-speed serial links in SpiNNaker.
A diagram showing how a three-board configuration can be used to form a toroid from three SpiNNaker boards.
An illustration of what the SpiNNaker chip kind-of-sort-of looks like from an extremely high-level and network-centric point of view.
Shows the logical arrangement of chips on a SpiNNaker board and the collections of connections assigned to each SpiNN-link connection.
Shows the arbiter tree in SpiNNaker.
Neurogrid chip topology showing the mechanism used to arbitrate the chip's shared output port.
An illustration of what the BrainScaleS wafer kind-of-sort-of looks like.
An illustration of what the BrainScaleS on-wafer mesh network kind-of-sort-of looks like. | CommonCrawl |
AlexNet won the ImageNet competition in 2012 by a large margin. It was the biggest network at the time. The network demonstrated the potential of training large neural networks quickly on massive datasets using widely available gaming GPUs; before that, neural networks had been trained mainly on CPUs. AlexNet also used the novel ReLU activation, data augmentation, dropout and local response normalization. All of these allowed it to achieve state-of-the-art performance in object recognition in 2012.
The benefits of ReLU (excerpt from the paper).
ReLU is a so-called non-saturating activation. This means that the gradient will never be close to zero for a positive activation, and as a result, training will be faster.
By contrast, sigmoid activations are saturating, which makes the gradient close to zero for large absolute values of activations. A very small gradient will make the network train slower or even stop, because the step size during gradient descent's weight update will be small or zero (the so-called vanishing gradient problem).
By employing ReLU, the training speed of the network was six times faster compared to classical sigmoid activations that had been popular before ReLU. Today, ReLU is the default choice of activation function.
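To make the saturation argument concrete, here is a small NumPy sketch (my own illustration, not from the original post) comparing the gradients of sigmoid and ReLU at a few arbitrary positive activation values:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)            # derivative of the sigmoid; saturates for large |x|

def relu_grad(x):
    return (x > 0).astype(float)    # 1 for positive inputs, 0 otherwise

x = np.array([0.5, 2.0, 5.0, 10.0])
print("sigmoid gradients:", sigmoid_grad(x))   # shrink towards 0 as x grows
print("ReLU gradients:   ", relu_grad(x))      # stay at 1 for all positive inputs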
Local response normalization formula from the paper. Color labeling is mine.
After layers C1 and C2, the activities of neurons were normalized according to the formula above. What this does is scale each activity down by taking into account the activities of 5 neurons at preceding and following feature channels at the same spatial position.
An example of local response normalization made in Excel by me.
These activities were squared and used together with parameters $n$, $k$, $\alpha$ and $\beta$ to scale down each neuron's activity. Authors argue that this created "competition for big activities amongst neuron outputs computed using different kernels". This approach reduced top-1 error by 1%. In the table above you can see an example of neuron activations scaled down by using this approach. Also note that the values of $n$, $k$, $\alpha$ and $\beta$ were selected using cross-validation.
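As a rough illustration of how the normalization formula above can be computed, here is a small NumPy sketch (my own, not the authors' code); the default parameter values k=2, n=5, alpha=1e-4, beta=0.75 are the ones usually quoted for AlexNet but should be treated as assumptions.

import numpy as np

def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """a: activations with shape (channels, height, width)."""
    C = a.shape[0]
    b = np.empty_like(a)
    for i in range(C):
        lo = max(0, i - n // 2)
        hi = min(C - 1, i + n // 2)
        # Sum of squared activities of neighbouring channels at the same (x, y).
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

activations = np.random.rand(16, 8, 8).astype(np.float32)
normalized = local_response_norm(activations)
print(normalized.shape)  # (16, 8, 8)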
Overlapping pooling of the kind used by AlexNet. Source.
AlexNet used max pooling of size 3 and stride 2. This means that the largest values were pooled from 3x3 regions, centers of these regions being 2 pixels apart from each other vertically and horizontally. Overlapping pooling reduced tendency to overfit and also reduced test error rates by 0.4% and 0.3% (for top-1 and top-5 error correspondingly).
Data augmentation is a regularization strategy (a way to prevent overfitting). AlexNet uses two data augmentation approaches.
The first takes random crops of input images, as well as horizontal flips, and uses them as inputs to the network during training. This vastly increases the size of the data; the authors mention an increase by a factor of 2048. Another benefit is the fact that augmentation is performed on the fly on the CPU while the GPUs train on the previous batch of data. In other words, this type of augmentation is essentially computationally free, and also does not require storing augmented images on disk.
The second data augmentation strategy is so-called PCA color augmentation. First, PCA is performed on all pixels of the ImageNet training data set (a pixel is treated as a 3-dimensional vector for this purpose). As a result, we get a 3x3 covariance matrix, as well as 3 eigenvectors and 3 eigenvalues. During training, a random intensity factor based on the PCA components is added to each color channel of an image, which is equivalent to changing the intensity and color of illumination. This scheme reduces the top-1 error rate by over 1%, which is a significant reduction.
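A minimal sketch of PCA color augmentation in NumPy follows (my own illustration, not the authors' code). The standard deviation 0.1 for the random factors is the value reported in the paper; the image shapes and the random "training pixels" are arbitrary assumptions.

import numpy as np

def pca_color_augment(image, eigvecs, eigvals, sigma=0.1):
    """image: float array (H, W, 3); eigvecs: (3, 3) with eigenvectors as columns;
    eigvals: (3,) eigenvalues of the RGB covariance of the training set."""
    alphas = np.random.normal(0.0, sigma, size=3)  # one random factor per component
    shift = eigvecs @ (alphas * eigvals)           # 3-vector added to every pixel
    return image + shift

# Pretend these came from PCA over all training pixels.
pixels = np.random.rand(10000, 3)
cov = np.cov(pixels, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

img = np.random.rand(224, 224, 3)
augmented = pca_color_augment(img, eigvecs, eigvals)
print(augmented.shape)  # (224, 224, 3)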
The authors do not explicitly mention this as a contribution of their paper, but they still employed this strategy. During test time, 5 crops of the original test image (4 corners and center) are taken, as well as their horizontal flips. Predictions are then made on these 10 images and averaged to make the final prediction. This approach is called test time augmentation (TTA). Generally, it does not need to be only corners, center and flips; any suitable augmentation will work. This improves testing performance and is a very useful tool for deep learning practitioners.
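A minimal NumPy sketch of the 10-view averaging (my own illustration; model is a placeholder for any function mapping a crop to class probabilities, and the 224-pixel crop size is an assumption):

import numpy as np

def predict_tta(model, image, crop=224):
    """model: callable mapping a (crop, crop, 3) array to class probabilities."""
    H, W, _ = image.shape
    corners = [(0, 0), (0, W - crop), (H - crop, 0), (H - crop, W - crop),
               ((H - crop) // 2, (W - crop) // 2)]  # 4 corners plus the center
    preds = []
    for y, x in corners:
        patch = image[y:y + crop, x:x + crop]
        preds.append(model(patch))
        preds.append(model(patch[:, ::-1]))         # horizontal flip
    return np.mean(preds, axis=0)                   # average over the 10 views

dummy_model = lambda patch: np.ones(1000) / 1000.0  # placeholder classifier
probs = predict_tta(dummy_model, np.random.rand(256, 256, 3))
print(probs.shape)  # (1000,)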
AlexNet used 0.5 dropout during training. This means that during the forward pass, 50% of the activations in the layers where dropout was applied were set to zero and did not participate in backpropagation. During testing, all neurons were active and were not dropped. Dropout reduces "complex co-adaptations" of neurons, preventing them from depending heavily on other neurons being present. Dropout is a very efficient regularization technique that makes the network learn more robust internal representations, significantly reducing overfitting.
AlexNet architecture from paper. Color labeling is mine.
Architecture itself is relatively simple. There are 8 trainable layers: 5 convolutional and 3 fully connected. ReLU activations are used for all layers, except for the output layer, where softmax activation is used. Local response normalization is used only after layers C1 and C2 (before activation). Overlapping max pooling is used after layers C1, C2 and C5. Dropout was only used after layers F1 and F2.
Due to the fact that the network resided on 2 GPUs, it had to be split in 2 parts that communicated only partially. Note that layers C2, C4 and C5 only received as inputs outputs of preceding layers that resided on the same GPU. Communication between GPUs only happened at layer C3 as well as F1, F2 and the output layer.
The network was trained using stochastic gradient descent with momentum and learning rate decay. In addition, during training, learning rate was decreased manually by the factor of 10 whenever validation error rate stopped improving.
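For readers who want to see what such a schedule looks like in code, here is a hedged PyTorch-style sketch (not the original implementation); the hyperparameter values (momentum 0.9, weight decay 5e-4, initial learning rate 0.01) are the commonly quoted AlexNet settings but should be treated as assumptions here, and the tiny network is only a stand-in for the real architecture.

import torch

model = torch.nn.Sequential(  # placeholder network standing in for AlexNet
    torch.nn.Conv2d(3, 96, kernel_size=11, stride=4),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(96, 1000),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
# Divide the learning rate by 10 whenever the monitored error stops improving;
# ReduceLROnPlateau automates the manual schedule described above.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min",
                                                       factor=0.1, patience=2)

for epoch in range(3):               # toy loop with random data
    x = torch.randn(8, 3, 224, 224)
    y = torch.randint(0, 1000, (8,))
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())      # in practice: pass the validation error here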
Quora: Why are GPUs well-suited to deep learning?
Data Augmentation: How to use Deep Learning when you have Limited Data.
Since PCA in the paper is done on the entirety of the ImageNet data set (or maybe a subsample, but that is not mentioned), the data most probably will not fit in memory. In that case, incremental PCA may be used, which performs PCA in batches. This thread is also useful in explaining how to do partial PCA without loading the whole data set in memory.
Like many languages, Swift offers enumerations as a first-class type. An enumeration defines a common type for a group of related values and allows us to work with those values in a type-safe way.
Among other things, enumerations are great to represent sets of options.
Let's take UIViewAnimationOptions as an example. This type describes the available options to animate a UIView. You can combine these options; sometimes it is even mandatory.
However, in Swift this type is defined and used differently: it is a struct conforming to the OptionSet protocol, and you use it like a set.
Enumerations have one problem: you can only set one option at a time. This is the soul of an enumeration. The Objective-C version of UIViewAnimationOptions is expressed in a hacky way, hijacking the prime goal of the enumeration.
OptionSet was designed to solve this very problem: a set where you can set multiple options at the same time. Under the hood, OptionSet is represented as a bit field but presented as an operation set. Basically, OptionSet enables us to represent bitset types and perform easy bit masks and bitwise operations.
Conforming to OptionSet only requires providing a rawValue property of an integer type. This type will be used as the underlying storage for the bit field. Indeed, integers are stored as a series of bits in memory. The size of the integer type will determine the maximum number of options you can define for your set to work accurately.
MyOptionSet uses Int8 and will be able to represent up to 8 options accurately.
Note that each option represents a single bit of the rawValue. In order to represent these options correctly, we need to assign ascending powers of two to each option: 1, 2, 4, 8, 16, etc.
Now, when combining two or more options (i.e. building a bit mask), there is no overlap.
OptionSet conforms to SetAlgebra, meaning you can manipulate it with multiple mathematical set operations: insert, remove, contains, intersection, etc.
A common bitwise operation is left shifting, noted <<.
Left shifting is equivalent to multiplication by a power of 2 given by the offset. Shifting 6 by three positions (6 << 3) results in the number 48 ($6 \times 2^3$).
Using left shifting is a pretty common good practice when describing OptionSet options; just increase the shifting position and let the math do the rest.
Let's look at a worked example. Feature flags is a technique allowing to modify system behavior without changing code. They can help us to deliver new functionality to users rapidly and safely.
For example, you could be in the process of rewriting a part of your app to improve its efficiency. This work will take some time, probably multiple weeks, but you don't want to impact your team, that will continue to work on other parts of the app. Branching is a no go, thanks to previous experiences of merging long-lived branches. Instead, the people working on that rewrite will use a specific feature flag to use the new implementation, while the other will continue to use the current one as usual.
Canary deployment is another great benefit of feature flags. Say your rewrite is ready and you would like to test it in real conditions. However, you don't want to deliver it to all of your users at once, and you want to be able to go back to the old implementation in case there is something wrong. With feature flags, you can activate the new implementation for only a small percentage of users.
Since WWDC 2017, Apple introduced "phased releases", the ability to gradually release new versions of an application. However, with your own implementation of feature flags you get fine-grained control over who is exposed to which feature and when. This is also useful when rolling out time-based functionalities where you need absolute control.
Feature flags can be implemented in many ways, but all of them will introduce additional complexity in your system. Our goal is to constrain this complexity by using a smart implementation.
Let's see how OptionSet can help us reduce this complexity.
Fundamentally, that's all what we need.
Usually, you retrieve these flags from an API, where each flag is represented by a boolean value. This is where the magic of OptionSet begins: instead of a list of boolean flags, you can use a single integer value representing all your flags!
The variable options now contains feature1, feature6 and feature7 ($1+32+64 = 97$).
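To make the decoding concrete, here is a small Python sketch of the same powers-of-two bit arithmetic (my own illustration mirroring the FeatureFlags type described above, not Swift code); it shows how a single integer such as 97 decodes back into individual options.

from enum import IntFlag

class FeatureFlags(IntFlag):
    FEATURE1 = 1 << 0   # 1
    FEATURE2 = 1 << 1   # 2
    FEATURE3 = 1 << 2   # 4
    FEATURE4 = 1 << 3   # 8
    FEATURE5 = 1 << 4   # 16
    FEATURE6 = 1 << 5   # 32
    FEATURE7 = 1 << 6   # 64

options = FeatureFlags(97)               # e.g. a single integer received from an API
print(FeatureFlags.FEATURE6 in options)  # True, since 97 = 1 + 32 + 64
print([f.name for f in FeatureFlags if f in options])
# ['FEATURE1', 'FEATURE6', 'FEATURE7']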
You even can have several FeatureFlags: a global one, one for each of your key functionalities, one specific to your user, etc. And of course, combine them!
OptionSet isn't a collection. You can't count its elements or iterate over them. However, since we only define them with integer values, we can extend the protocol to help us work with them.
all produces an instance of your type with all options, while members computes the list of all options of a particular instance, practical to iterate.
Let's update the conformance and add the new property to our type. We also add conformance to CustomStringConvertible for debug purpose. | CommonCrawl |
Abstract: Consider Young diagrams differing only by the length of the first row (i.e., the form of diagrams below the first row is fixed). We prove that the values of the irreducible characters of the groups $\mathrm S_n$ corresponding to these diagrams are given by a polynomial of a special form with respect to natural parameters related to the cycle notation of permutations. Bibl. 3 titles.
Key words and phrases: irreducible characters of the symmetric groups, induced characters, Kostka numbers. | CommonCrawl |
We characterize QB-ideals of exchange rings by means of quasi-invertible elements and annihilators. Further, we prove that every $2\times2$ matrix over such ideals of a regular ring admits a diagonal reduction by quasi-inverse matrices. Prime exchange QB-rings are studied as well. | CommonCrawl |
If we say fields $A$ and $B$ are isomorphic, does that just mean they are isomorphic as rings, or is there something else?
In a sense, yes, that is what it means. But not really. When we say two structures $S$ and $T$ of a certain type are isomorphic, we mean that there is a bijection $\varphi:S\rightarrow T$ which preserves the structure. So, for instance, if $\circ$ is a binary operation in the structure, then for $x,y\in S$, we have $\varphi(x\circ y)=\varphi(x)\circ \varphi(y)$.
It turns out that preserving the ring structure is enough to preserve the field structure; a field is just a commutative ring with inverses, so the property of being a field is preserved if the operations $+$ and $\times$ are preserved. Thus two fields are isomorphic if and only if they are isomorphic when considered as rings. But this is a contingent fact, and it's not really what we mean when we say that two fields are isomorphic.
I realise that this view verges on philosophy, and I wouldn't defend it to the death. I am just trying to give an idea of what mathematicians are thinking of when they say isomorphic.
They are just isomorphic as rings.
A ring isomorphism already preserves both operations of the field, and it's trivial to prove that a ring isomorphism "preserves inverses," so there's nothing else you could ask of an isomorphism between fields that isn't already there.
Recently proposed neural network architectures, including PointNets and PointSetGeneration networks, allow deep learning on unordered point clouds. In this article, I present a Torch implementation of a PointNet auto-encoder — a network that reconstructs point clouds through a lower-dimensional bottleneck. As the training loss, I implemented a symmetric Chamfer distance in C/CUDA and provide the code on GitHub.
In several recent papers, researchers proposed neural network architectures for learning on unordered point clouds. In the PointNet work, shape classification and segmentation are performed by computing per-point features using a succession of multi-layer perceptrons which are finally max- or average-pooled into one large vector. This pooling layer ensures that the network is invariant to the order of the input points. In the point set generation work, in contrast, a network predicting point clouds is proposed for single-image 3D reconstruction. Both approaches are illustrated in Figure 1.
(a) Illustration of the PointNet architecture for shape classification and segmentation.
(b) Illustration of the point set generation network.
Figure 1 (click to enlarge): Illustrations of the PointNet and point set generation networks.
Note that the code was not thoroughly tested; as discussed below, training is not necessarily stable.
In practice, the point-wise function $h$ can be learned using a multi-layer perceptron and the symmetric function $g$ may be a symmetric pooling operation such as max- or average-pooling.
-- Input is a N x 3 "image" with N being the number of points.
require('nn')

model = nn.Sequential()
-- The first few convolution layers basically represent the point-wise multi-layer perceptron.
-- (They are omitted here; see the full example linked below.)
-- Average pooling, i.e. the symmetric function g.
-- Can be replaced by max pooling.
model:add(nn.SpatialAveragePooling(1, 1000, 1, 1, 0, 0)) -- Note that we assume 1000 points.
Note that the point-wise function $h$ is implemented using several convolutional layers. With the correct kernel size, these are used to easily implement the point-wise multi-layer perceptron with shared weights across points. As a result, the input point cloud is provided as $N \times 3$ image; in the above example $1000$ points are assumed to be provided. As symmetric function $g$, an average pooling layer is used; a subsequent convolutional layer inflates the pooled features in order to predict a point cloud.
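For reference (this is a standard way of writing it; the notation $\hat{S}$ for the predicted point set and $S$ for the target set is mine), the symmetric Chamfer distance based on squared distances is

$$ d_{CD}(\hat{S}, S) = \sum_{\hat{x} \in \hat{S}} \min_{x \in S} \|\hat{x} - x\|_2^2 + \sum_{x \in S} \min_{\hat{x} \in \hat{S}} \|\hat{x} - x\|_2^2 . $$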
As alternative to the squared distance used above, any other differentiable distance can be used.
A full example can be found on GitHub.
Figure 2 (click to enlarge): qualitative results of targets (left) and predictions (right). Note that we predicted $1000$ points; for the ground truth, however, we show $5000$ points.
Figure 2 shows some qualitative results on a dataset of rotated and slightly translated cuboids. In general, I found that these auto-encoders are difficult to train. For example, when adding a linear layer after pooling — to compute a low-dimensional bottleneck — training becomes significantly more difficult, such that the network often predicts a "mean point cloud". Similarly, the network architecture has a significant influence on training. | CommonCrawl |
Hypercompactness is a large cardinal property that is a strengthening of supercompactness. A cardinal $\kappa$ is $\alpha$-hypercompact if and only if for every ordinal $\beta < \alpha$ and for every cardinal $\lambda\geq\kappa$, there exists a cardinal $\lambda'\geq\lambda$ and an elementary embedding $j:V\to M$ generated by a normal fine ultrafilter on $P_\kappa\lambda$ such that $\kappa$ is $\beta$-hypercompact in $M$. $\kappa$ is hypercompact if and only if it is $\beta$-hypercompact for every ordinal $\beta$.
Every cardinal is 0-hypercompact, and 1-hypercompactness is equivalent to supercompactness.
Abstract: $O(n) \times O(m)$ symmetric Landau-Ginzburg models in $d=3$ dimension possess a rich structure of the renormalization group and its understanding offers a theoretical prediction of the phase diagram in frustrated spin models with non-collinear order. Depending on $n$ and $m$, they may show chiral/anti-chiral/Heisenberg/Gaussian fixed points within the same universality class. We approach all the fixed points in the conformal bootstrap program by examining the bound on the conformal dimensions for scalar operators as well as non-conserved current operators with consistency crosschecks. For large $n/m$, we show strong evidence for the existence of four fixed points by comparing the operator spectrum obtained from the conformal bootstrap program with that from the large $n/m$ analysis. We propose a novel non-perturbative approach to the determination of the conformal window in these models based on the conformal bootstrap program. From our numerical results, we predict that for $m=3$, $n=7\sim 8$ is the edge of the conformal window for the anti-chiral fixed points. | CommonCrawl |
Results for "S. J. v. Gool"
The Density Matrix Renormalization Group Method for Realistic Large-Scale Nuclear Shell-Model Calculations (Jul 11 2002): The Density Matrix Renormalization Group (DMRG) method is developed for application to realistic nuclear systems. Test results are reported for 24Mg.
Comment on "Generalized black diholes" (Oct 01 2014; Nov 05 2015): We show that a recent solution published by Cabrera-Munguia et al. is physically inconsistent since the quantity $\sigma$ it involves does not have a correct limit $R\to\infty$.
b Moscow Institute of Physics and Technology (State University), Dolgoprudnyi, Moskovskaya obl.
Abstract: A diameter graph in $\mathbb R^d$ is a graph in which vertices are points of a finite subset of $\mathbb R^d$ and two vertices are joined by an edge if the distance between them is equal to the diameter of the vertex set. This paper is devoted to Schur's conjecture, which asserts that any diameter graph on $n$ vertices in $\mathbb R^d$ contains at most $n$ complete subgraphs of size $d$. It is known that Schur's conjecture is true in dimensions $d\le 3$. We prove this conjecture for $d=4$ and give a simple proof for $d=3$.
Keywords: diameter graph, Schur's conjecture, Borsuk's conjecture. | CommonCrawl |
Results for "Silvio C. Ferreira"
On the $3 \times 3$ magic square constructed with nine distinct square numbers (Apr 24 2015; Jun 26 2015): A proof that there is no $3 \times 3$ magic square constructed with nine distinct square numbers is given.
The Antinomy of the Liar and Provability (Jun 03 2008): This work evidences that a sentence cannot be denominated by P and written as P IS NOT TRUE. It demonstrates that in a system in which Q denominates the sentence Q IS NOT PROVABLE it is not provable that Q is true and not provable.
Retarded integral inequalities of Gronwall-Bihari type (Jun 28 2008): We establish two nonlinear retarded integral inequalities. Bounds on the solution of some retarded equations are then obtained.
Higher-Order Calculus of Variations on Time Scales (Jun 21 2007; Sep 30 2007): We prove a version of the Euler-Lagrange equations for certain problems of the calculus of variations on time scales with higher-order delta derivatives.
Complete commuting vector fields and their singular points in dimension 2 (Sep 24 2018): We classify degenerate singular points of $\mathbb{C}^2$-actions on complex surfaces.
Let $G$ be a finitely generated group of polynomial volume growth equipped with a word-length $|\cdot |$. The goal of this paper is to develop techniques to study the behavior of random walks driven by symmetric measures $\mu $ such that, for any $\epsilon >0$, $\sum |\cdot |^\epsilon \mu =\infty $. In particular, we provide a sharp lower bound for the return probability in the case when $\mu $ has a finite weak-logarithmic moment.
It was time for the 7th Nordic Cinema Popcorn Convention, and this year the manager Ian had a brilliant idea. In addition to the traditional film program, there would be a surprise room where a small group of people could stream a random movie from a large collection, while enjoying popcorn and martinis.
However, it turned out that some people were extremely disappointed, because they got to see movies like Ghosts of Mars, which instead caused them to tear out their hair in despair and horror.
To avoid this problem for the next convention, Ian has come up with a solution, but he needs your help to implement it. When the group enters the surprise room, they will type in a list of movies in a computer. This is the so-called horror list, which consists of bad movies that no one in the group would ever like to see. Of course, this list varies from group to group.
The first line of input contains three positive integers $N$, $H$, $L$ ($1 \leq H < N \leq 1\, 000,0 \leq L \leq 10\, 000$), where $N$ is the number of movies (represented by IDs, ranging from $0$ to $N-1$), $H$ is the number of movies on the horror list and $L$ is the number of similarities in the database.
The second line contains $H$ unique space-separated integers $x_ i$ ($0 \leq x_ i <N$) denoting the ID of the movies on the horror list.
The following $L$ lines contains two space-separated integers $a_ i,b_ i$ ($0 \leq a_ i < b_ i < N$), denoting that movie with ID $a_ i$ is similar to movie with ID $b_ i$ (and vice versa).
Output the ID of the movie in the collection with the highest Horror Index. In case of a tie, output the movie with the lowest ID. | CommonCrawl |
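Assuming the usual convention for this problem, in which the Horror Index of a movie is its shortest distance in the similarity graph to any movie on the horror list (0 for listed movies, effectively infinite for movies that cannot reach the list), a multi-source BFS solves it. The Python sketch below is my own illustration, not an official solution.

from collections import deque
import sys

def solve():
    data = sys.stdin.read().split()
    n, h, l = map(int, data[:3])
    horror = list(map(int, data[3:3 + h]))
    rest = list(map(int, data[3 + h:]))
    adj = [[] for _ in range(n)]
    for i in range(l):
        a, b = rest[2 * i], rest[2 * i + 1]
        adj[a].append(b)
        adj[b].append(a)

    INF = float("inf")
    dist = [INF] * n
    queue = deque()
    for x in horror:              # all horror-list movies start at distance 0
        dist[x] = 0
        queue.append(x)
    while queue:                  # multi-source breadth-first search
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == INF:
                dist[v] = dist[u] + 1
                queue.append(v)

    # Highest Horror Index wins; ties broken by the lowest movie ID.
    best = max(range(n), key=lambda i: (dist[i], -i))
    print(best)

if __name__ == "__main__":
    solve()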
In 2017, the Bitcoin price went from some $1,000 to $20,000 on December 18th. Since that moment, the price lost 66% while the Google searches for the Bitcoin plummeted by 92%.
In the last month or so, the Bitcoin price lost much of its volatility and seems to be stuck close to $6350. That's pretty much exactly where it was a year ago, sometime on November 12th, 2017. That factoid is just a coincidence but it allows us to make certain interpretations.
During the recent year or two, we've been bombarded by statements that the Bitcoin wasn't a bubble, it (and the underlying Blockchain meme) was a game-changing technology and the mechanisms deciding about your profits – if you bought some Bitcoins – are completely different than in the case of different bubbles. Just like every other time when a bubble was being inflated, this time was different, we were told all the time.
Even when the Bitcoin price crashed from $20,000 almost to $5,000, the Bitcoin champions were presenting that drop as a fluke – a standard fluctuation in their way towards the infinite price or "to the Moon", as the true Bitcoin fans like to say. Volatility was so normal and a 75% drop, while large, wasn't deadly. After all, during the frantic epoch of skyrocketing Bitcoin prices in late 2017, the price could double – and therefore more than undo the 75% drop – in a week or two. The "hodlers" were de facto proud about becoming the most aggressive class of investors. They expected the most spectacular yields and claimed to be ready for the most spectacular "temporary" losses, too.
However, this counting just doesn't seem to work in recent months. A 10% daily growth of the prices is almost impossible. Most of the days, the Bitcoin price changes by less than 1%. Someone surely "wants" to keep the price nearly stabilized – the only unknown is whether the "someone" is a collective spirit of most traders or some specific traders, hodlers, or their group, and whether they are big fish. The cryptocurrency cost-and-benefit analysis could have been viewed as an indeterminate \(0/0\) or \(\infty/\infty\) ratio but things are different. The price movements look tiny and the possibility that the price could return to the $20,000 high by the end of 2018, to mention a specific proposition, looks like a science-fiction story.
Just half a year ago, the Bitcoin fans could have said that "the Bitcoin goes up insanely in the long run". It looked so at time scales comparable to one year. After all, it was the year 2017 when the prices got multiplied by 20 or so. However, in the recent weeks, even that timescale-bound proposition seems to be wrong. The Bitcoin doesn't seem to go up at the timescale of one year! By November 2018, it hasn't gone up at any shorter timescale, either.
Is it going up at timescales longer than 1 year? We can't know for sure. Before December 2017, the prices went generally up for 4 years or so. Is a timescale like 2 or 4 years the right one for which you can still say "the Bitcoin price goes up at this timescale"? We don't know with certainty yet. It will depend on the future movements. But I personally find it unlikely that the Bitcoin will revisit the $20,000 peak in the next year or two or three. There are just too many people who have lost the unlimited overbloated belief in the infinity and who are waiting for somewhat higher prices to sell their holdings – sometimes large holdings.
So I think it is more likely than not that the Bitcoin price no longer goes up at any timescale. The meteoric rise of the price in late 2017 was something I was completely unable to predict – the intensity of mass delusions keeps on beating my expectations. But it was just a huge dose of group think and irrationality that was exponentially growing for a while, a giant fluke. I think that it won't get repeated.
Even the very stupid people who are the typical Bitcoin buyers can notice that exactly one year ago, the price went from $6,350 to $20,000 six weeks later, before it crashed back to $5,000 and then $6,350 where it was stuck for a rather long time. So they see that it's damn possible that they could buy the Bitcoin for $10,000 or $20,000 and then see the price crash. That's why I think that they're unlikely to buy the Bitcoin for $10,000 again – anytime soon. The demand for the Bitcoin would probably go down if the Bitcoin approached these level again. And the supply would go up. I am pretty sure that there must already be some rather big holders who are sort of unloading their holdings.
If the father of the Bitcoin nicknamed Satoshi Nakamoto is e.g. Nick Szabo, he sits on some $7 billion and he may have the private keys needed to send the coins. But he may be afraid of announcing he's the founder of the currency. That's too bad because this is the guy who would deserve a few billion for this influential paradigm.
The cryptocurrency markets have become too boring, too stable, and the potential for some new fast meteoric growth of the price has shrunk considerably. In an extremely non-dramatic, gradual way, the reality has proven that the Bitcoin price growth was just an insane bubble without any rational justification. Most of the bubble has burst, if we take the peak price as 100%. A young generation wanted to play their own Airplane Game or another pyramid game. Now their desires for this kind of activity have been mostly satisfied for a decade – when a new generation starts to arrive that wants to "play" a pyramid game, too.
So while the cryptocurrency mining consumes almost as much energy as Czechia, all the unbacked cryptocurrencies have become utterly pointless. They don't seem to move much, the trading volumes are tiny relative to the peak in early 2018, the chances to get rich quick are slim, but there is still some risk that you may lose a significant fraction soon. If you hold this stuff, you're still tempted to watch it even though nothing is happening. Nothing is happening but a Czechia worth of electricity is still being wasted.
Some champions of this stuff still don't seem to get how irrational this whole activity has gotten. Just to sustain these non-currencies that no one really uses for anything useful, and that don't even provide us with any interesting pyramid game or lottery anymore, lots of people work in the cryptocurrency exchanges and huge amounts of hardware are being produced just for this purpose and then they are swallowing insane amounts of electricity. The benefits are more or less self-evidently zero while the costs are huge.
How is it possible that a network whose costs exceed the benefits so clearly manages to survive? It survives for the same reason why the communist regimes survived for half a century or so. What is still powering the Bitcoin is no longer greed. It is some politically correct utopia about the moral duty to change the world in a certain way. And many people have already understood that they were wrong about their utopia but they're still afraid of admitting this serious blunder to themselves and others – which is why many of them keep on hodling, too.
Who is paying for the costs? Well, a small number of people who are still buying additional cryptocoins does. Why? Every year, over 600,000 Bitcoins or $4 billion are mined and added to the pool of 17 million Bitcoin in circulation. Those are stocks of a company that pays no dividends. The number of stocks increases by 4% (600k/17M) a year, nothing is changing about the company, so the stock price should go down 4% a year. The holders of the Bitcoin should normally see the dollar value of their holdings to drop by 4% a year in average – and by this drop, they would really be funding the electricity and hardware for the useless mining.
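As a quick check of the quoted dilution rate (my own arithmetic, not the author's):

$$ \frac{600{,}000}{17{,}000{,}000} \approx 0.035 , $$

i.e. roughly 4% of new coins per year relative to the existing supply.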
If you assume that the smoothed drop is less than 4% a year, it means that the people are still buying the newly mined Bitcoin – there is still some new demand for this stuff, some new inflow of real dollars to the game. But while all these people are slow, even that supply has to largely disappear.
But even now, when the Bitcoin price is already 68% below the peak, I recommend all slightly sensible people to sell most of the stuff because it has become so pointless, its costs exceed benefits very clearly, and the potential for this fad to be revived and generate growth rates similar to the frantic late 2017 rates looks extremely slim. The fact that the Bitcoin price hasn't collapsed 100+ times more quickly is a testimony of the irrationality of the people who are dealing with this stuff. But I am sure that many of them "fail" to be completely irrational, so even the current lower prices will turn out to be unsustainable. | CommonCrawl |
Abstract: Creating an image reflecting the content of a long text is a complex process that requires a sense of creativity. For example, creating a book cover or a movie poster based on their summary or a food image based on its recipe. In this paper we present the new task of generating images from long text that does not describe the visual content of the image directly. For this, we build a system for generating high-resolution 256 $\times$ 256 images of food conditioned on their recipes. The relation between the recipe text (without its title) to the visual content of the image is vague, and the textual structure of recipes is complex, consisting of two sections (ingredients and instructions) both containing multiple sentences.
We used the recipe1M dataset to train and evaluate our model that is based on the StackGAN-v2 architecture.
In a normal deck of cards, you can either reveal the top card or guess whether that card is black. If you reveal the top card, you get to see what the card is and the game continues with one less card in the deck. If you were to make a guess, the game ends, and you get paid out $\$100$ if the card is black and $\$0$ if it's red.
What is the optimal strategy of this game and its expected value?
The lower bound has to be $0.5\times \$100 = \$50$ since you can just guess on the first card.
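As a sanity check on that bound, one can compute the value function directly by dynamic programming. The sketch below is my own, with V(r, b) defined as the optimal expected payout when r red and b black cards remain: you either stop now and collect 100·b/(r+b), or reveal the top card and continue.

from functools import lru_cache

@lru_cache(maxsize=None)
def value(r, b):
    """Optimal expected payout with r red and b black cards remaining."""
    if r + b == 0:
        return 0.0
    stop = 100.0 * b / (r + b)               # guess now: P(top card is black) * 100
    reveal = 0.0
    if r > 0:
        reveal += r / (r + b) * value(r - 1, b)
    if b > 0:
        reveal += b / (r + b) * value(r, b - 1)
    return max(stop, reveal)

print(value(26, 26))  # approximately 50 for a standard deck: revealing never helps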
We review some of the significant generalizations and applications of the celebrated Douglas theorem on the equivalence of factorization, range inclusion, and majorization of operators. We then apply it to find a characterization of the positivity of $2\times 2$ block matrices of operators in Hilbert spaces and finally describe the nature of such block matrices and provide several ways for showing their positivity.
In this paper we obtain weighted higher order Rellich, weighted Gagliardo-Nirenberg, Caffarelli-Kohn-Nirenberg inequalities and the uncertainty principle for Dunkl operators. Moreover, we introduce an extension of the classical Caffarelli-Kohn-Nirenberg inequalities. Furthermore, we give an application of Gagliardo-Nirenberg inequality to the Cauchy problem for the nonlinear damped wave equations for the Dunkl Laplacian. | CommonCrawl |
Cornered at last, Bessie has gone to ground in a remote farm. The farm consists of $N$ barns ($2 \leq N \leq 7 \cdot 10^4$) and $N-1$ bidirectional tunnels between barns, so that there is a unique path between every pair of barns. Every barn which has only one tunnel is an exit. When morning comes, Bessie will surface at some barn and attempt to reach an exit.
But the moment Bessie surfaces at some barn, the law will be able to pinpoint her location. Some farmers will then start at various exit barns, and attempt to catch Bessie. The farmers move at the same speed as Bessie (so in each time step, each farmer can move from one barn to an adjacent barn). The farmers know where Bessie is at all times, and Bessie knows where the farmers are at all times. The farmers catch Bessie if at any instant a farmer is in the same barn as Bessie, or crossing the same tunnel as Bessie. Conversely, Bessie escapes if she reaches an exit barn strictly before any farmers catch her.
Bessie is unsure at which barn she should surface. For each of the $N$ barns, help Bessie determine the minimum number of farmers who would be needed to catch Bessie if she surfaced there, assuming that the farmers distribute themselves optimally among the exit barns.
Note that the time limit for this problem is slightly larger than the default: 4 seconds for C/C++/Pascal, and 8 seconds for Java/Python.
The first line of the input contains $N$. Each of the following $N-1$ lines specify two integers, each in the range $1 \ldots N$, describing a tunnel between two barns.
Please output $N$ lines, where the $i$th line of output tells the minimum number of farmers necessary to catch Bessie if she surfaced at the $i$th barn. | CommonCrawl |
If $f\in L^1(\Omega \times (0,T))$ then what assumption should we have on $u_0$ in order to deduce a bound for the solution $u$ in the parabolic Sobolev space?
To be honest, I'm not even sure if we can obtain any bound for the given $f$, but since we have one for the Poisson equation by Stampacchia (Proposition 4.3), I'm motivated to think that one should also exist for the heat equation (most of the time, parabolic PDEs have results analogous to the elliptic ones).
However I wasn't able to find anything, so this is why I'm asking here.
Any help or hint will be much appreciated.
Euclidean, hyperbolic, discrete, convex, coarse geometry, comparisons in Riemannian geometry, symmetric spaces.
Is a polytope with vertices on a sphere and all edges of same length already rigid?
Which cubic graphs can be orthogonally embedded in $\mathbb R^3$?
An isoperimetric inequality for curve in the plane?
What is the meaning of Conjugate radius and Injectivity radius?
Can 4-space be partitioned into Klein bottles? | CommonCrawl |
Volume 97, Number 1 (2014), 91-108.
The set of all error-correcting block codes over a fixed alphabet with $q$ letters determines a recursively enumerable set of rational points in the unit square with coordinates $(R, \delta):=$ (relative transmission rate, relative minimal distance). Limit points of this set form a closed subset, defined by $R \leq \alpha_q(\delta)$, where $\alpha_q(\delta)$ is a continuous decreasing function called the asymptotic bound. Its existence was proved by the first-named author in 1981 (Renormalization and computation I: Motivation and background), but no approaches to the computation of this function are known, and in A computability challenge: Asymptotic bounds and isolated error-correcting codes, it was even suggested that this function might be uncomputable in the sense of constructive analysis.
In this note we show that the asymptotic bound becomes computable with the assistance of an oracle producing codes in the order of their growing Kolmogorov complexity. Moreover, a natural partition function involving complexity allows us to interpret the asymptotic bound as a curve dividing two different thermodynamic phases of codes.
J. Differential Geom., Volume 97, Number 1 (2014), 91-108.
Search Results: 1 - 10 of 20852 matches for " Sanath Kumar "
Abstract: More than half of all cancer patients receive radiotherapy as a part of their treatment. With the increasing number of long-term cancer survivors, there is a growing concern about the risk of radiation induced second malignant neoplasm [SMN]. This risk appears to be highest for survivors of childhood cancers. The exact mechanism and dose-response relationship for radiation induced malignancy is not well understood, however, there have been growing efforts to develop strategies for the prevention and mitigation of radiation induced cancers. This review article focuses on the incidence, etiology, and risk factors for SMN in various organs after radiotherapy.
Abstract: Bacterial pathogens that are multi-drug resistant compromise the effectiveness of treatment when they are the causative agents of infectious disease. These multi-drug resistance mechanisms allow bacteria to survive in the presence of clinically useful antimicrobial agents, thus reducing the efficacy of chemotherapy towards infectious disease. Importantly, active multi-drug efflux is a major mechanism for bacterial pathogen drug resistance. Therefore, because of their overwhelming presence in bacterial pathogens, these active multi-drug efflux mechanisms remain a major area of intense study, so that ultimately measures may be discovered to inhibit these active multi-drug efflux pumps.
Abstract: Glioblastoma (GBM) is the most common malignant primary brain tumor in adults. However, the survival of patients with GBM has been dismal after multi-disciplinary treatment with surgery, radiotherapy, and chemotherapy. In the efforts to improve clinical outcome, anti-angiogenic therapy with bevacizumab (Avastin) was introduced to inhibit vascular endothelial growth factor (VEGF) mediated tumor neovascularization. Unfortunately, the results from clinical trials have not lived up to the initial expectations. Patients either fail to respond to anti-angiogenic therapy or develop resistance following an initial response. The failure of anti-angiogenic therapy has led to a frustration among physicians and research community. Recent evidence indicates that the dogma of tumor neovascularization solely dependent on VEGF pathways to be overly simplistic. A realistic model of tumor neovascularization should include alternative pathways that are independent of VEGF signaling. A better understanding of the underlying processes in tumor neovascularization would help in designing successful anti-angiogenic treatment strategies.
Abstract: HIV-1 forms infectious particles with Murine Leukemia virus (MLV) Env, but not with the closely related Gibbon ape Leukemia Virus (GaLV) Env. We have determined that the incompatibility between HIV-1 and GaLV Env is primarily caused by the HIV-1 accessory protein Vpu, which prevents GaLV Env from being incorporated into particles. We have characterized the 'Vpu sensitivity sequence' in the cytoplasmic tail domain (CTD) of GaLV Env using a chimeric MLV Env with the GaLV Env CTD (MLV/GaLV Env). Vpu sensitivity is dependent on an alpha helix with a positively charged face containing at least one Lysine. In the present study, we utilized functional complementation to address whether all the three helices in the CTD of an Env trimer have to contain the Vpu sensitivity motif for the trimer to be modulated by Vpu. Taking advantage of the functional complementation of the binding defective (D84K) and fusion defective (L493V) MLV and MLV/GaLV Env mutants, we were able to assay the activity of mixed trimers containing both MLV and GaLV CTDs. Mixed trimers containing both MLV and GaLV CTDs were functionally active and remained sensitive to Vpu. However, trimers containing an Env with the GaLV CTD and an Env with no CTD remained functional but were resistant to Vpu. Together these data suggest that the presence of at least one GaLV CTD is sufficient to make an Env trimer sensitive to Vpu, but only if it is part of a trimeric CTD complex.
Abstract: This paper presents the current status of technological implementation in the mushroom industry in Sri Lanka and compares it with the Japanese mushroom industry. Sri Lanka is a developing country located in South Asia. Almost all mushroom cultivators in the country grow Pleurotus ostreatus, Calocybe indica and Volvariella volvacea. These species are preferred because they are not difficult to cultivate using the low-cost cultivation methods practiced in the country. In 2017, mushroom cultivators were selling their product at prices ranging from LKR 240 (USD 1.47) to LKR 430 (USD 2.63) per kg. Mushroom cultivation is not very popular in Sri Lanka; this may be partly attributed to a lack of know-how, technological barriers, and limited awareness of the economic, nutritive and medicinal benefits of cultivated mushrooms. Some major supermarkets sell locally cultivated P. ostreatus, as well as Agaricus bisporus and Lentinula edodes mushrooms imported from the Republic of China and Thailand. At present, there are a few private and government institutions which produce spawn and offer knowledge to farmers. Their programs have mainly focused on mushroom cultivation as a woman's household business, but the industry should also be developed towards large-scale commercial cultivation. This study focuses on the main steps of mushroom production, with discussion and suggestions for increasing production efficiency through technological advancement.
Abstract: The occurrence of methicillin-resistant staphylococci was investigated in fresh seafood, seafood products and related samples. Staphylococci were isolated from 13 (68.42%) fresh seafood samples, while 3 (15.78%) samples harbored coagulase-positive S. aureus. Resistance to methicillin was observed in 16 isolates of Staphylococcus spp., 15 of which were methicillin-resistant coagulase-negative staphylococci (MR-CoNS) and one of which was a methicillin-resistant coagulase-positive S. aureus (MRSA). The mecA gene was detected by PCR in 10 MR-CoNS strains and in the MRSA strain. The lmrS gene, which codes for the multidrug efflux pump LmrS, was detected only in coagulase-positive isolates.
Abstract: After recognizing higher homotopy coherences, algebraic K-theory can be regarded as a functor from stable $\infty$-categories to $\infty$-categories. We establish the stability theorem, which states that the algebraic K-theory of a stable $\infty$-category is itself a stable $\infty$-category. This is a generalization of the statement that algebraic K-theory is a functor from spectra to spectra. We then prove a result which provides a simpler interpretation of the algebraic K-theory of ring spectra. In order to do this, we compute the algebraic K-theory of an $\infty$-category of modules, and establish that it is an $\infty$-category of modules itself. This result, known as the multiplicativity theorem, vastly generalizes results obtained by Elmendorf and Mandell. Since the algebraic K-theory of a ring spectrum $R$ is the algebraic K-theory of the $\infty$-category of perfect modules over $R$, this provides a simpler interpretation of the algebraic K-theory of ring spectra.
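As an informal gloss only (the precise hypotheses and the exact form of the equivalences are in the paper itself, and the module structure over $K(A)$ in the second display is a reading of the abstract rather than a quotation), the two results described above can be written schematically as

$$K(R) \;\simeq\; K\bigl(\mathrm{Perf}_R\bigr), \qquad K\bigl(\mathrm{Mod}_A(\mathcal{C})\bigr) \;\simeq\; \mathrm{Mod}_{K(A)}\bigl(K(\mathcal{C})\bigr),$$

where the first equivalence is the description of the K-theory of a ring spectrum via its perfect modules, and the second expresses the claim that the K-theory of an $\infty$-category of modules is itself an $\infty$-category of modules.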
Thus parallel transport is a measurement of the curvature of the surface.
I understand that for any given curve $\alpha$ and starting vector $v$ there is a unique parallel transport along $\alpha$ so that $v(t)$ is a parallel vector field. However, I am hung up on what the significance of the rotation of the tangent plane is. First, I do not know how I would ever incorporate this rotation into anything mathematically (for instance, what qualifies as a rotation?).
A natural way to make this precise is to ask whether every rotation can be realized as a parallel transport. For example, if $M$ is the plane, it turns out that only the identity matrix can be realized as a parallel transport. This is due to the fact that the plane is flat, i.e. has zero Gaussian curvature.
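A short aside may help make the "rotation" precise (this is a standard fact about surfaces, added here as an illustrative gloss; it is not part of the original question or answer). A vector field $v(t)$ along $\alpha$ on a surface $M \subset \mathbb{R}^3$ is parallel exactly when its covariant derivative vanishes, and for a loop $\alpha$ bounding a region $D$ the resulting parallel transport is a rotation of the tangent plane whose angle is the total Gaussian curvature enclosed:

$$\frac{Dv}{dt} = 0 \;\Longleftrightarrow\; \dot v(t) \perp T_{\alpha(t)}M, \qquad \theta_{\mathrm{hol}} = \iint_D K \, dA.$$

On the plane $K \equiv 0$, so the holonomy angle is always $0$ and only the identity rotation occurs; on a sphere, transporting a vector around a geodesic triangle rotates it by the triangle's angle excess.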
Abstract. We consider a scalar wave field translation-invariantly coupled to a single particle. This Hamiltonian system admits soliton-like solutions, in which the particle and the comoving field travel with constant velocity. We prove that a solution of finite energy converges, in suitable local energy seminorms, to soliton-like solutions in the long-time limit $t\to\pm\infty$.
Let $b$ be an even positive integer. We say that two different cells of an $n \times n$ board are neighboring if they have a common side. Find the minimal number of cells on the $n \times n$ board that must be marked so that any cell (marked or not marked) has a marked neighboring cell.
Correction: Instead of $b$ it should be $n$.
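The problem page does not include a solution, so as a convenience here is a small brute-force search for experimenting with small even board sizes (this sketch is an addition for illustration only, not part of the original problem page; it enumerates every subset of cells, which is feasible only up to roughly a 4 x 4 board).

from itertools import combinations

def neighbors(r, c, n):
    # Cells sharing a side with (r, c) on an n x n board.
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < n and 0 <= cc < n:
            yield (rr, cc)

def is_valid(marked, n):
    # Every cell (marked or not) must have at least one marked neighbor.
    marked = set(marked)
    return all(any(nb in marked for nb in neighbors(r, c, n))
               for r in range(n) for c in range(n))

def minimal_marking(n):
    # Smallest k for which some k-subset of cells is a valid marking.
    cells = [(r, c) for r in range(n) for c in range(n)]
    for k in range(1, n * n + 1):
        if any(is_valid(subset, n) for subset in combinations(cells, k)):
            return k

if __name__ == "__main__":
    for n in (2, 4):
        print(n, minimal_marking(n))

Running this prints the minimal counts for the 2 x 2 and 4 x 4 boards, which can be checked against whatever closed-form answer one conjectures for general even $n$.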
Let $M$ be an irreducible left $R$-module, where $R$ is an arbitrary ring and $rm \neq 0$ for some $r \in R$ and $m \in M$. Prove that if $T \colon M \to M$ is an $R$-homomorphism, then it is either the zero map or an isomorphism.
This exercise is marked as more difficult than others, but I don't understand why, or why all of the hypotheses are necessary. In particular, I don't understand why the $rm \neq 0$ for some $r \in R$ and $m \in M$ matters. I know that assuming that implies that $M$ is cyclic, but not why that would be helpful here. Disagreeing with the text makes me think I've completely missed some subtlety of modules.
Not having the book leaves me at a disadvantage, but I think I have a pertinent comment about the extra conditions. At any rate, I think your work in this exercise does not require the bit about $rm\neq 0$.
I believe I recognize the condition from other texts I've seen on module theory for rings without identity. The idea is to exclude modules with trivial action from being considered as simple modules. In rings with identity, of course, this happens automatically.
You will also see definitions of "simple ring (without identity)" as being "a nonzero ring $R$ having only trivial ideals, and also $R^2\neq 0$" or sometimes just "$R^2=R$."
So my feeling is that it is just a nondegeneracy condition that was included by habit.
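For reference, here is the standard argument the exercise is asking for, written out as a sketch (the textbook's intended solution may differ in details). Because $T$ is an $R$-homomorphism, $\ker T$ and $\operatorname{im} T$ are submodules of $M$, and irreducibility leaves only the extreme cases:

$$\ker T \in \{0, M\}, \qquad \operatorname{im} T \in \{0, M\}.$$

If $T \neq 0$, then $\ker T \neq M$ and $\operatorname{im} T \neq 0$, so $\ker T = 0$ and $\operatorname{im} T = M$; that is, $T$ is injective and surjective, hence an isomorphism. Otherwise $T = 0$. As noted above, the hypothesis that $rm \neq 0$ for some $r$ and $m$ does not enter this argument; it only rules out modules with trivial action from counting as irreducible.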