730742
Principal–agent problem
Conflict of interest when one agent makes decisions on another's behalf. The principal–agent problem refers to the conflict in interests and priorities that arises when one person or entity (the "agent") takes actions on behalf of another person or entity (the "principal"). The problem worsens when there is a greater discrepancy of interests and information between the principal and agent, as well as when the principal lacks the means to punish the agent. The deviation from the principal's interest by the agent is called "agency costs". Common examples of this relationship include corporate management (agent) and shareholders (principal), elected officials (agent) and citizens (principal), or brokers (agent) and markets (buyers and sellers, principals). In all these cases, the principal has to be concerned with whether the agent is acting in the best interest of the principal. Principal-agent models typically either examine moral hazard (hidden actions) or adverse selection (hidden information). The principal–agent problem typically arises where the two parties have different interests and asymmetric information (the agent having more information), such that the principal cannot directly ensure that the agent is always acting in the principal's best interest, particularly when activities that are useful to the principal are costly to the agent, and where elements of what the agent does are costly for the principal to observe. The agency problem can be intensified when an agent acts on behalf of multiple principals (see multiple principal problem). When multiple principals have to agree on the agent's objectives, they face a collective action problem in governance, as individual principals may lobby the agent or otherwise act in their individual interests rather than in the collective interest of all principals. The multiple principal problem is particularly serious in the public sector. Various mechanisms may be used to align the interests of the agent with those of the principal. In employment, employers (principal) may use piece rates/commissions, profit sharing, efficiency wages, performance measurement (including financial statements), the agent posting a bond, or the threat of termination of employment to align worker interests with their own. Overview. The principal's interests are expected to be pursued by the agent; however, when the interests of the agent and principal differ, a dilemma arises. The agent possesses resources such as time, information, and expertise that the principal lacks. At the same time, the principal does not have control over the agent's ability to act in the agent's own best interests. In this situation, the theory posits that the agent's activities are diverted from following the principal's interests and drive the agent to maximize the agent's interests instead. The principal and agent theory emerged in the 1970s from the combined disciplines of economics and institutional theory. There is some contention as to who originated the theory, with theorists Stephen Ross and Barry Mitnick both claiming authorship. Ross is said to have originally described the dilemma in terms of a person choosing a flavor of ice-cream for someone whose tastes they do not know. The most cited reference to the theory, however, comes from Michael C. Jensen and William Meckling. The theory has come to extend well beyond economics or institutional studies to all contexts of information asymmetry, uncertainty and risk. 
In the context of law, principals do not know enough about whether (or to what extent) a contract has been satisfied, and they end up with agency costs. The solution to this information problem—closely related to the moral hazard problem—is to ensure the provision of appropriate incentives so agents act in the way principals wish. In terms of game theory, it involves changing the rules of the game so that the self-interested rational choices of the agent coincide with what the principal desires. Even in the limited arena of employment contracts, the difficulty of doing this in practice is reflected in a multitude of compensation mechanisms and supervisory schemes, as well as in critique of such mechanisms as e.g., Deming (1986) expresses in his Seven Deadly Diseases of management. Employment contract. In the context of the employment contract, individual contracts form a major method of restructuring incentives, by connecting as closely as optimal the information available about employee performance, and the compensation for that performance. Because of differences in the quantity and quality of information available about the performance of individual employees, the ability of employees to bear risk, and the ability of employees to manipulate evaluation methods, the structural details of individual contracts vary widely, including such mechanisms as "piece rates, [share] options, discretionary bonuses, promotions, profit sharing, efficiency wages, deferred compensation, and so on." Typically, these mechanisms are used in the context of different types of employment: salesmen often receive some or all of their remuneration as commission, production workers are usually paid an hourly wage, while office workers are typically paid monthly or semimonthly (and if paid overtime, typically at a higher rate than the hourly rate implied by the salary). The way in which these mechanisms are used is different in the two parts of the economy which Doeringer and Piore called the "primary" and "secondary" sectors (see also dual labour market). The secondary sector is characterised by short-term employment relationships, little or no prospect of internal promotion, and the determination of wages primarily by market forces. In terms of occupations, it consists primarily of low or unskilled jobs, whether they are blue-collar (manual-labour), white-collar (e.g., filing clerks), or service jobs (e.g., waiters). These jobs are linked by the fact that they are characterized by "low skill levels, low earnings, easy entry, job impermanence, and low returns to education or experience." In a number of service jobs, such as food service, golf caddying, and valet parking jobs, workers in some countries are paid mostly or entirely with tips. The use of tipping is a strategy on the part of the owners or managers to align the interests of the service workers with those of the owners or managers; the service workers have an incentive to provide good customer service (thus benefiting the company's business), because this makes it more likely that they will get a good tip. The issue of tipping is sometimes discussed in connection with the principal–agent theory. "Examples of principals and agents include bosses and employees ... [and] diners and waiters." "The "principal–agent problem", as it is known in economics, crops up any time agents aren't inclined to do what principals want them to do. To sway them [(agents)], principals have to make it worth the agents' while ... 
[in the restaurant context,] the better the diner's experience, the bigger the waiter's tip." "In the ... language of the economist, the tip serves as a way to reduce what is known as the classic "principal–agent" problem." According to "Videbeck, a researcher at the New Zealand Institute for the Study of Competition and Regulation[,] '[i]n theory, tipping can lead to an efficient match between workers' attitudes to service and the jobs they perform. It is a means to make people work hard. Friendly waiters will go that extra mile, earn their tip, and earn a relatively high income...[On the other hand,] if tipless wages are sufficiently low, then grumpy waiters might actually choose to leave the industry and take jobs that would better suit their personalities.'" As a solution to the principal–agent problem, though, tipping is not perfect. In the hopes of getting a larger tip, a server, for example, may be inclined to give a customer an extra large glass of wine or a second scoop of ice cream. While these larger servings make the customer happy and increase the likelihood of the server getting a good tip, they cut into the profit margin of the restaurant. In addition, a server may dote on generous tippers while ignoring other customers, and in rare cases harangue bad tippers. Non-financial compensation. Part of this variation in incentive structures and supervisory mechanisms may be attributable to variation in the level of intrinsic psychological satisfaction to be had from different types of work. Sociologists and psychologists frequently argue that individuals take a certain degree of pride in their work, and that introducing performance-related pay can destroy this "psycho-social compensation", because the exchange relation between employer and employee becomes much more narrowly economic, destroying most or all of the potential for social exchange. Evidence for this is inconclusive—Deci (1971), and Lepper, Greene and Nisbett (1973) find support for this argument; Staw (1989) suggests other interpretations of the findings. Incentive structures as mentioned above can be provided through non-monetary recognition such as acknowledgements and compliments on an employee (agent) in place of employment. Research conducted by Crifo and Diaye (2004) mentioned that agents who receive compensations such as praises, acknowledgement and recognition help to define intrinsic motivations that increase performance output from the agents thus benefiting the principal. Furthermore, the studies provided a conclusive remark that intrinsic motivation can be increased by utilising the use of non-monetary compensations that provide acknowledgement for the agent. These higher rewards, can provide a principal with the adequate methodologies to improve the effort inputs of the agent when looking at the principal agent theory through an employer vs employee level of conduct. Team production. On a related note, Drago and Garvey (1997) use Australian survey data to show that when agents are placed on individual pay-for-performance schemes, they are less likely to help their coworkers. This negative effect is particularly important in those jobs that involve strong elements of "team production" (Alchian and Demsetz 1972), where output reflects the contribution of many individuals, and individual contributions cannot be easily identified, and compensation is therefore based largely on the output of the team. 
In other words, pay-for-performance increases the incentives to free-ride, as there are large positive externalities to the efforts of an individual team member, and low returns to the individual (Holmström 1982, McLaughlin 1994). The negative incentive effects implied are confirmed by some empirical studies (e.g., Newhouse, 1973) for shared medical practices; costs rise and doctors work fewer hours as more revenue is shared. Leibowitz and Tollison (1980) find that larger law partnerships typically result in worse cost containment. As a counter, peer pressure can potentially solve the problem (Kandel and Lazear 1992), but this depends on peer monitoring being relatively costless to the individuals doing the monitoring/censuring in any particular instance (unless one brings in social considerations of norms and group identity and so on). Studies suggest that profit-sharing, for example, typically raises productivity by 3–5% (Jones and Kato 1995, Knez and Simester 2001), although there are some selection issues (Prendergast). Empirical evidence. There is, however, considerable empirical evidence of a positive effect of compensation on performance (although the studies usually involve "simple" jobs where aggregate measures of performance are available, which is where piece rates should be most effective). In one study, Lazear (1996) saw productivity rising by 44% (and wages by 10%) in a change from salary to piece rates, with half of the productivity gain due to worker selection effects. Research shows that pay for performance increases performance when the task at hand is more repetitive, and reduces performance when the task at hand requires more creative thinking. Furthermore, studies of CEO pay suggest that compensation affects performance through risk aversion and the level of effort a CEO is willing to supply: when little compensation was tied to incentives, CEOs supplied less effort, whereas offering incentives was associated with a clear rise in performance. These studies conclude that the business owner (principal) and the business employees (agents) must find a middle ground in which company profits are shared adequately and in proportion to CEO pay and performance; in this way, low effort driven by employee risk aversion can be avoided pre-emptively. Contract design. Milgrom and Roberts (1992) identify four principles of contract design. When perfect information is not available, Holmström (1979) developed the Informativeness Principle to solve this problem. This essentially states that any measure of performance that (on the margin) reveals information about the effort level chosen by the agent should be included in the compensation contract. This includes, for example, Relative Performance Evaluation—measurement relative to other, similar agents, so as to filter out some common background noise factors, such as fluctuations in demand. By removing some exogenous sources of randomness in the agent's income, a greater proportion of the fluctuation in the agent's income falls under their control, increasing their ability to bear risk. If taken advantage of, by greater use of piece rates, this should improve incentives. (In terms of the simple linear model below, this means that increasing "x" produces an increase in "b".) However, setting incentives as intense as possible is not necessarily optimal from the point of view of the employer. 
The Incentive-Intensity Principle states that the optimal intensity of incentives depends on four factors: the incremental profits created by additional effort, the precision with which the desired activities are assessed, the agent's risk tolerance, and the agent's responsiveness to incentives. According to Prendergast (1999, 8), "the primary constraint on [performance-related pay] is that [its] provision imposes additional risk on workers ..." A typical result of the early principal–agent literature was that piece rates tend to 100% (of the compensation package) as the worker becomes more able to handle risk, as this ensures that workers fully internalize the consequences of their costly actions. In incentive terms, where we conceive of workers as self-interested rational individuals who provide costly effort (in the most general sense of the worker's input to the firm's production function), the more compensation varies with effort, the better the incentives for the worker to produce. The third principle—the Monitoring Intensity Principle—is complementary to the second, in that situations in which the optimal intensity of incentives is high corresponds highly to situations in which the optimal level of monitoring is also high. Thus employers effectively choose from a "menu" of monitoring/incentive intensities. This is because monitoring is a costly means of reducing the variance of employee performance, which makes more difference to profits in the kinds of situations where it is also optimal to make incentives intense. The fourth principle is the Equal Compensation Principle, which essentially states that activities equally valued by the employer should be equally valuable (in terms of compensation, including non-financial aspects such as pleasantness of the workplace) to the employee. This relates to the problem that employees may be engaged in several activities, and if some of these are not monitored or are monitored less heavily, these will be neglected, as activities with higher marginal returns to the employee are favoured. This can be thought of as a kind of "disintermediation"—targeting certain measurable variables may cause others to suffer. For example, teachers being rewarded by test scores of their students are likely to tend more towards teaching 'for the test', and de-emphasise less relevant but perhaps equally or more important aspects of education; while AT&T's practice at one time of paying programmers by the number of lines of code written resulted in programs that were longer than necessary—i.e., program efficiency suffering (Prendergast 1999, 21). Following Holmström and Milgrom (1990) and Baker (1992), this has become known as "multi-tasking" (where a subset of relevant tasks is rewarded, non-rewarded tasks suffer relative neglect). Because of this, the more difficult it is to completely specify and measure the variables on which reward is to be conditioned, the less likely that performance-related pay will be used: "in essence, complex jobs will typically not be evaluated through explicit contracts." (Prendergast 1999, 9). Where explicit measures are used, they are more likely to be some kind of aggregate measure, for example, baseball and American Football players are rarely rewarded on the many specific measures available (e.g., number of home runs), but frequently receive bonuses for aggregate performance measures such as Most Valuable Player. The alternative to objective measures is subjective performance evaluation, typically by supervisors. 
However, there is here a similar effect to "multi-tasking", as workers shift effort from that subset of tasks which they consider useful and constructive, to that subset which they think gives the greatest appearance of being useful and constructive, and more generally to try to curry personal favour with supervisors. (One can interpret this as a destruction of organizational social capital—workers identifying with, and actively working for the benefit of, the firm – in favour of the creation of personal social capital—the individual-level social relations which enable workers to get ahead ("networking").) Linear model. The four principles can be summarized in terms of the simplest (linear) model of incentive compensation: formula_0 where "w" (wage) is equal to "a" (the base salary) plus "b" (the intensity of incentives provided to the employee) times the sum of three terms: "e" (unobserved employee effort) plus "x" (unobserved exogenous effects on outcomes) plus the product of "g" (the weight given to observed exogenous effects on outcomes) and "y" (observed exogenous effects on outcomes). "b" is the slope of the relationship between compensation and outcomes. formula_1 The above discussion on explicit measures assumed that contracts would create the linear incentive structures summarised in the model above. But while the combination of normal errors and the absence of income effects yields linear contracts, many observed contracts are nonlinear. To some extent this is due to income effects as workers rise up a tournament/hierarchy: "Quite simply, it may take more money to induce effort from the rich than from the less well off." (Prendergast 1999, 50). Similarly, the threat of being fired creates a nonlinearity in wages earned versus performance. Moreover, many empirical studies illustrate inefficient behaviour arising from nonlinear objective performance measures, or measures over the course of a long period (e.g., a year), which create nonlinearities in time due to discounting behaviour. This inefficient behaviour arises because incentive structures are varying: for example, when a worker has already exceeded a quota or has no hope of reaching it, versus being close to reaching it—e.g., Healy (1985), Oyer (1997), Leventis (1997). Leventis shows that New York surgeons, penalised for exceeding a certain mortality rate, take less risky cases as they approach the threshold. Courty and Marshke (1997) provide evidence on incentive contracts offered to agencies, which receive bonuses on reaching a quota of graduated trainees within a year. This causes them to 'rush-graduate' trainees in order to make the quota. Options framework. In certain cases agency problems may be analysed by applying the techniques developed for financial options, as applied via a real options framework. Stockholders and bondholders have different objective—for instance, stockholders have an incentive to take riskier projects than bondholders do, and to pay more out in dividends than bondholders would like. At the same time, since equity may be seen as a call option on the value of the firm, an increase in the variance in the firm value, other things remaining equal, will lead to an increase in the value of equity, and stockholders may therefore take risky projects with negative net present values, which while making them better off, may make the bondholders worse off. See Option pricing approaches under Business valuation for further discussion. 
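The point that equity resembles a call option on firm value can be illustrated numerically. The following minimal Monte Carlo sketch is only an illustration (the firm value, debt level and volatilities are hypothetical, and a lognormal firm value is assumed): holding the expected firm value fixed, raising the volatility raises the expected payoff of the equity claim max(V − D, 0) and lowers that of the debt claim, which is the wealth transfer from bondholders to stockholders described above.

```python
import numpy as np

rng = np.random.default_rng(0)

V0 = 100.0      # current firm value (hypothetical)
D = 90.0        # face value of debt (hypothetical)
T = 1.0         # horizon in years
n = 1_000_000   # number of simulated scenarios

for sigma in (0.1, 0.2, 0.4):
    z = rng.standard_normal(n)
    # Lognormal firm value whose mean stays at V0 regardless of sigma
    V_T = V0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)
    equity = np.maximum(V_T - D, 0.0)   # stockholders' claim: a call on firm value
    debt = np.minimum(V_T, D)           # bondholders' claim: firm value capped at D
    print(f"sigma={sigma:.1f}  E[equity]={equity.mean():6.2f}  E[debt]={debt.mean():6.2f}")
```

Since the expected firm value is the same in all three cases, the gain in expected equity value comes entirely at the expense of the bondholders; the following paragraph qualifies this classical effect for the case of banks.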
Nagel and Purnanandam (2017) notice that since bank assets are risky debt claims, bank equity resembles a subordinated debt and therefore the stock's payoff is truncated by the difference between the face values of the corporation debt and of the bank deposits. Based on this observation, Peleg-Lazar and Raviv (2017) show that in contrast to the classical agent theory of Michael C. Jensen and William Meckling, an increase in variance would not lead to an increase in the value of equity if the bank's debtor is solvent. Performance evaluation. Objective. The major problem in measuring employee performance in cases where it is difficult to draw a straightforward connection between performance and profitability is the setting of a standard by which to judge the performance. One method of setting an absolute objective performance standard—rarely used because it is costly and only appropriate for simple repetitive tasks—is time-and-motion studies, which study in detail how fast it is possible to do a certain task. These have been used constructively in the past, particularly in manufacturing. More generally, however, even within the field of objective performance evaluation, some form of relative performance evaluation must be used. Typically this takes the form of comparing the performance of a worker to that of his peers in the firm or industry, perhaps taking account of different exogenous circumstances affecting that. The reason that employees are often paid according to hours of work rather than by direct measurement of results is that it is often more efficient to use indirect systems of controlling the quantity and quality of effort, due to a variety of informational and other issues (e.g., turnover costs, which determine the optimal minimum length of relationship between firm and employee). This means that methods such as deferred compensation and structures such as tournaments are often more suitable to create the incentives for employees to contribute what they can to output over longer periods (years rather than hours). These represent "pay-for-performance" systems in a looser, more extended sense, as workers who consistently work harder and better are more likely to be promoted (and usually paid more), compared to the narrow definition of "pay-for-performance", such as piece rates. This discussion has been conducted almost entirely for self-interested rational individuals. In practice, however, the incentive mechanisms which successful firms use take account of the socio-cultural context they are embedded in (Fukuyama 1995, Granovetter 1985), in order not to destroy the social capital they might more constructively mobilise towards building an organic, social organization, with the attendant benefits from such things as "worker loyalty and pride (...) [which] can be critical to a firm's success ..." (Sappington 1991,63) Subjective. Subjective performance evaluation allows the use of a subtler, more balanced assessment of employee performance, and is typically used for more complex jobs where comprehensive objective measures are difficult to specify and/or measure. Whilst often the only feasible method, the attendant problems with subjective performance evaluation have resulted in a variety of incentive structures and supervisory schemes. One problem, for example, is that supervisors may under-report performance in order to save on wages, if they are in some way residual claimants, or perhaps rewarded on the basis of cost savings. 
This tendency is of course to some extent offset by the danger of retaliation and/or demotivation of the employee, if the supervisor is responsible for that employee's output. Another problem relates to what is known as the "compression of ratings". Two related influences—centrality bias, and leniency bias—have been documented (Landy and Farr 1980, Murphy and Cleveland 1991). The former results from supervisors being reluctant to distinguish critically between workers (perhaps for fear of destroying team spirit), while the latter derives from supervisors being averse to offering poor ratings to subordinates, especially where these ratings are used to determine pay, not least because bad evaluations may be demotivating rather than motivating. However, these biases introduce noise into the relationship between pay and effort, reducing the incentive effect of performance-related pay. Milkovich and Wigdor (1991) suggest that this is the reason for the common separation of evaluations and pay, with evaluations primarily used to allocate training. Finally, while the problem of compression of ratings originates on the supervisor-side, related effects occur when workers actively attempt to influence the appraisals supervisors give, either by influencing the performance information going to the supervisor: multitasking (focussing on the more visibly productive activities—Paul 1992), or by working "too hard" to signal worker quality or create a good impression (Holmström 1982); or by influencing the evaluation of it, e.g., by "currying influence" (Milgrom and Roberts 1988) or by outright bribery (Tirole 1992). Incentive structures. Tournaments. Much of the discussion here has been in terms of individual pay-for-performance contracts; but many large firms use internal labour markets (Doeringer and Piore 1971, Rosen 1982) as a solution to some of the problems outlined. Here, there is "pay-for-performance" in a looser sense over a longer time period. There is little variation in pay within grades, and pay increases come with changes in job or job title (Gibbs and Hendricks 1996). The incentive effects of this structure are dealt with in what is known as "tournament theory" (Lazear and Rosen 1981, Green and Stokey 1983; see Rosen 1986 for multi-stage tournaments in hierarchies, where it is explained why CEOs are paid many times more than other workers in the firm). See the superstar article for more information on the tournament theory. Workers are motivated to supply effort by the wage increase they would earn if they win a promotion. Some of the extended tournament models predict that relatively weaker agents, be they competing in sports tournaments (Becker and Huselid 1992, in NASCAR racing) or in the broiler chicken industry (Knoeber and Thurman 1994), would take risky actions instead of increasing their effort supply as a cheap way to improve the prospects of winning. These actions are inefficient as they increase risk taking without increasing the average effort supplied. Neilson (2007) added to this with studies indicating that when two employees compete to win a tournament, they have a higher chance of bending or breaking the rules in order to win. Neilson's studies also indicated that the larger the prize (incentive), the more inclined the agent (in this case, the employee) is to increase their effort. A major problem with tournaments is that individuals are rewarded based on how well they do relative to others. 
Co-workers might become reluctant to help out others and might even sabotage others' effort instead of increasing their own effort (Lazear 1989, Rob and Zemsky 1997). This is supported empirically by Drago and Garvey (1997). Why then are tournaments so popular? Firstly, because—especially given compression rating problems—it is difficult to determine absolutely differences in worker performance. Tournaments merely require rank order evaluation. Secondly, it reduces the danger of rent-seeking, because bonuses paid to favourite workers are tied to increased responsibilities in new jobs, and supervisors will suffer if they do not promote the most qualified person. This effectively takes the factors of ambiguity away from the principal agent problem by ensuring that the agent acts in the best interest of the principal but also ensures that the quality of work done is of an optimal level. Thirdly, where prize structures are (relatively) fixed, it reduces the possibility of the firm reneging on paying wages. As Carmichael (1983) notes, a prize structure represents a degree of commitment, both to absolute and to relative wage levels. Lastly when the measurement of workers' productivity is difficult, e.g., say monitoring is costly, or when the tasks the workers have to perform for the job is varied in nature, making it hard to measure effort and/or performance, then running tournaments in a firm would encourage the workers to supply effort whereas workers would have shirked if there are no promotions. Tournaments also promote risk seeking behavior. In essence, the compensation scheme becomes more like a call option on performance (which increases in value with increased volatility (cf. options pricing). If you are one of ten players competing for the asymmetrically large top prize, you may benefit from reducing the expected value of your overall performance to the firm in order to increase your chance that you have an outstanding performance (and win the prize). In moderation this can offset the greater risk aversion of agents vs principals because their social capital is concentrated in their employer while in the case of public companies the principal typically owns its stake as part of a diversified portfolio. Successful innovation is particularly dependent on employees' willingness to take risks. In cases with extreme incentive intensity, this sort of behavior can create catastrophic organizational failure. If the principal owns the firm as part of a diversified portfolio this may be a price worth paying for the greater chance of success through innovation elsewhere in the portfolio. If however the risks taken are systematic and cannot be diversified e.g., exposure to general housing prices, then such failures will damage the interests of principals and even the economy as a whole. (cf. Kidder Peabody, Barings, Enron, AIG to name a few). Ongoing periodic catastrophic organizational failure is directly incentivized by tournament and other superstar/winner-take-all compensation systems (Holt 1995). Deferred compensation. Tournaments represent one way of implementing the general principle of "deferred compensation", which is essentially an agreement between worker and firm to commit to each other. Under schemes of deferred compensation, workers are overpaid when old, at the cost of being underpaid when young. Salop and Salop (1976) argue that this derives from the need to attract workers more likely to stay at the firm for longer periods, since turnover is costly. 
Alternatively, delays in evaluating the performance of workers may lead to compensation being weighted to later periods, when better and poorer workers have to a greater extent been distinguished. (Workers may even prefer to have wages increasing over time, perhaps as a method of forced saving, or as an indicator of personal development. e.g., Loewenstein and Sicherman 1991, Frank and Hutchens 1993.) For example, Akerlof and Katz 1989: if older workers receive efficiency wages, younger workers may be prepared to work for less in order to receive those later. Overall, the evidence suggests the use of deferred compensation (e.g., Freeman and Medoff 1984, and Spilerman 1986—seniority provisions are often included in pay, promotion and retention decisions, irrespective of productivity.) Energy consumption. The "principal–agent problem" has also been discussed in the context of energy consumption by Jaffe and Stavins in 1994. They were attempting to catalog market and non-market barriers to energy efficiency adoption. In efficiency terms, a market failure arises when a technology which is both cost-effective and saves energy is not implemented. Jaffe and Stavins describe the common case of the landlord-tenant problem with energy issues as a principal–agent problem. "[I]f the potential adopter is not the party that pays the energy bill, then good information in the hands of the potential adopter may not be sufficient for optimal diffusion; adoption will only occur if the adopter can recover the investment from the party that enjoys the energy savings. Thus, if it is difficult for the possessor of information to convey it credibly to the party that benefits from reduced energy use, a principal/agent problem arises." The energy efficiency use of the principal agent terminology is in fact distinct from the usual one in several ways. In landlord/tenant or more generally equipment-purchaser/energy-bill-payer situations, it is often difficult to describe who would be the principal and who the agent. Is the agent the landlord and the principal the tenant, because the landlord is "hired" by the tenant through the payment of rent? As Murtishaw and Sathaye, 2006 point out, "In the residential sector, the conceptual definition of principal and agent must be stretched beyond a strictly literal definition." Another distinction is that the principal agent problem in energy efficiency does not require any information asymmetry: both the landlord and the tenant may be aware of the overall costs and benefits of energy-efficient investments, but as long as the landlord pays for the equipment and the tenant pays the energy bills, the investment in new, energy-efficient appliances will not be made. In this case, there is also little incentive for the tenant to make a capital efficiency investment with a usual payback time of several years, and which in the end will revert to the landlord as property. Since energy consumption is determined both by technology and by behavior, an opposite principal agent problem arises when the energy bills are paid by the landlord, leaving the tenant with no incentive to moderate her energy use. This is often the case for leased office space, for example. The energy efficiency principal agent problem applies in many cases to rented buildings and apartments, but arises in other circumstances, most often involving relatively high up-front costs for energy-efficient technology. 
Though it is challenging to assess exactly, the principal agent problem is considered to be a major barrier to the diffusion of efficient technologies. This can be addressed in part by promoting shared-savings performance-based contracts, where both parties benefit from the efficiency savings. The issues of market barriers to energy efficiency, and the principal agent problem in particular, are receiving renewed attention because of the importance of global climate change and rising prices of the finite supply of fossil fuels. Trust relationships. The problem arises in client–attorney, probate executor, bankruptcy trustee, and other such relationships. In some rare cases, attorneys who were entrusted with estate accounts with sizeable balances acted against the interests of the person who hired them to act as their agent by embezzling the funds or "playing the market" with the client's money (with the goal of pocketing any proceeds). This section can also be explored from the perspective of the trust game, which captures the key elements of principal–agent problems. This game was first experimentally implemented by Berg, Dickhaut, and McCabe in 1995. The setup of the game is that there are two players – the trustor/principal (investor) and the agent/trustee (investee). The trustor is endowed with a budget and can transfer some of that amount to the agent, in expectation of a return on the transferred amount in the future. The trustee may send any part of the transferred amount back to the trustor. The amount transferred back by the trustee is referred to as trustworthiness. Most of the studies find that 45% of the endowment was transferred by the principal and around 33% transferred back by the agent. This means that investors are not selfish and can be trusted for economic transactions. Trust within the principal-agent problem can also be seen from the perspective of an employer-employee relationship, whereby the employee (agent) distrusts the employer (principal), which causes greater demotivation of the employee. It has been assumed that the principal having control in an organisational culture has benefits for the organisation by creating greater productivity and efficiency. However, it also entails some drawbacks that reduce employee satisfaction, such as reduced motivation, creativity and innovation, and greater anxiety and stress. Personnel management. When managing personnel in an organisational setting, the principal-agent problem surfaces when employees are hired to perform specific tasks and fulfil certain roles. In this environment, the goals of employee and employer may not be aligned. Often employees have the desire to further their own career or financial goals, whereas employers often have the output interests of the organisation at the forefront of their actions and goals. Employees may reveal the principal-agent problem in their work by slacking off and not meeting targets or KPIs, and employers may reveal the principal-agent problem by implementing damaging policies or actions that make the working environment unsustainable. Bureaucracy and public administration. In the context of public administration, the principal–agent problem can be seen in such a way that public administration and bureaucrats are the agents and politicians and ministers are the principal authorities. Ministers in the government usually command by framing policies and directing the bureaucrats to implement the public policies. 
However, there can be various principal-agent problems in this scenario, such as misaligned intentions, information asymmetry, adverse selection, shirking, and slippage. There are various situations where the ambitions and goals of the principals and agents may diverge. For example, politicians and the government may want public administration to implement a welfare policy program, but the bureaucrats may have other interests as well, such as rent-seeking. This results in a lack of implementation of public policies, hence the wastage of economic resources. This can also lead to the problem of shirking, which is characterized as the agent's avoidance of performing a defined responsibility. The information asymmetry problem occurs in a scenario where one of the two parties has more information than the other. In the context of public administration, bureaucrats have an information advantage over the government and ministers as the former work at the ground level and have more knowledge about the dynamic and changing situation. Due to this, the government may frame policies that are not based on complete information, and therefore problems in the implementation of public policies may occur. This can also lead to the problem of slippage, in which the principal believes that agents are working according to their pre-defined responsibilities when that might not be the reality. The problem of adverse selection relates to the selection of agents to fulfill particular responsibilities who may then deviate from doing so. The prime cause behind this is the incomplete information available to the selecting authorities (the principal) about the agents they select. For example, the Ministry of Road Transport and Highways hired a private company to complete one of its road projects; however, it was later found that the company assigned to the project lacked technical know-how and had management issues. The principal-agent problem in the public sector arises when there is a disconnect between politicians and public servants and their goals and interests. Other reasons that this occurs include political interference, bureaucratic resistance and problems of public accountability. Political interference happens when politicians try to influence the decisions of public servants or bureaucrats in order to push their own interests, which ultimately leads to policies being warped. Bureaucratic resistance is when public servants are hesitant to implement the policies that have been proposed or agreed on, which ultimately causes policies to be implemented at a slow rate. Bureaucratic resistance may be due to lack of funding, resources or political support. Public accountability also plays a role in how the principal-agent theory impacts the public sector. When sworn in, politicians and public servants are responsible for ensuring that they act in the interest of the public that they represent or work for; however, due to budget and resourcing issues, as well as a lack of transparency, trust in the public sector often falls and a major disconnect grows. Economic theory. In economic theory, the principal-agent approach (also called agency theory) is part of the field "contract theory". In agency theory, it is typically assumed that complete contracts can be written, an assumption also made in mechanism design theory. Hence, there are no restrictions on the class of feasible contractual arrangements between principal and agent. 
Agency theory can be subdivided in two categories: (1) In adverse selection models, the agent has private information about their type (say, their costs of exerting effort or their valuation of a good) "before" the contract is written. (2) In moral hazard models, the agent becomes privately informed "after" the contract is written. Hart and Holmström (1987) divide moral hazard models in the categories "hidden action" (e.g., the agent chooses an unobservable effort level) and "hidden information" (e.g., the agent learns their valuation of a good, which is modelled as a random draw by nature). In hidden action models, there is a stochastic relationship between the unobservable effort and the verifiable outcome (say, the principal's revenue), because otherwise the unobservability of the effort would be meaningless. Typically, the principal makes a take-it-or-leave-it offer to the agent; i.e., the principal has all bargaining power. In principal–agent models, the agent often gets a strictly positive rent (i.e. their payoff is larger than their reservation utility, which they would get if no contract were written), which means that the principal faces agency costs. For example, in adverse selection models the agent gets an information rent, while in hidden action models with a wealth-constrained agent the principal must leave a limited-liability rent to the agent. In order to reduce the agency costs, the principal typically induces a second-best solution that differs from the socially optimal first-best solution (which would be attained if there were complete information). If the agent had all bargaining power, the first-best solution would be achieved in adverse selection models with one-sided private information as well as in hidden action models where the agent is wealth-constrained. Contract-theoretic principal–agent models have been applied in various fields, including financial contracting, regulation, public procurement, monopolistic price-discrimination, job design, internal labor markets, team production, and many others. From the cybernetics point of view, the Cultural Agency Theory arose in order to better understand the socio-cultural nature of organisations and their behaviours. Negotiation. In the negotiation problem, the principal commissions an agent to conduct negotiations on its behalf. The principal may delegate certain authority to the agent, including the ability to conclude negotiations and enter into binding contracts. The principal may consider and assign a utility to each issue in the negotiation. However, it is not always the case that the principal will explicitly inform the agent of what it considers to be the minimally acceptable terms, otherwise known as the reservation price. The successfulness of a negotiation will be determined by a range of factors. These include: the negotiation objective, the role of the negotiating parties, the nature of the relationship between the negotiating parties, the negotiating power of each party and the negotiation type. Where there are information asymmetries between the principal and agent, this can affect the outcome of the negotiation. As it is impossible for a manager to attend all upcoming negotiations of the company, it is common practice to assign internal or external negotiators to represent the negotiating company at the negotiation table. 
With the principal–agent problem, two areas of negotiation emerge. The principal-agent problem can arise in representative negotiations where the interests of the principal and the agent are misaligned. The principal cannot directly observe the agent's efforts during the course of the negotiation. In such circumstances, this may lead to the agent employing negotiation tactics which are unfavourable to the principal, but which benefit the agent. Depending upon how the agent's reward is determined, the principal may be able to effectively retain control over the agent. If the agent receives a fixed fee, the agent may nonetheless act in a manner that is inconsistent with the principal's interests. The agent may adopt this strategy if they believe the negotiation is a one-shot game. The agent may adopt a different strategy if they account for reputational consequences of acting against the principal's interests. Similarly, if the negotiation is a repeated game, and the principal is aware of the results of the first iteration, the agent may opt to employ a different strategy which more closely aligns with the interests of the principal in order to ensure the principal will continue to contract with the agent in the following iterations. If the agent's reward is dependent upon the outcome of the negotiation, then this may help align the differing interests.
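To make the hidden-action case from the Economic theory section above concrete, here is a minimal numerical sketch of a standard binary-effort moral hazard model with a risk-neutral but wealth-constrained agent. All parameter values are hypothetical and the model is a textbook-style simplification rather than one taken from this article's sources; it shows the limited-liability rent that the principal must leave to the agent.

```python
# Binary-effort hidden-action model with limited liability (illustrative numbers).
# Output is "high" (worth R to the principal) with probability p1 under high effort
# and p0 under low effort; high effort costs the agent c; wages must be >= 0.

p0, p1 = 0.4, 0.8      # success probabilities under low / high effort (hypothetical)
c = 2.0                # agent's cost of high effort (hypothetical)
R = 20.0               # principal's revenue on success (hypothetical)

# Cheapest bonus paid on success that makes high effort incentive compatible:
#   p1*b - c >= p0*b   =>   b >= c / (p1 - p0)
b = c / (p1 - p0)

agent_rent = p1 * b - c          # limited-liability rent left to the agent
principal_profit = p1 * (R - b)  # principal's expected profit under high effort
first_best = p1 * R - c          # total surplus with observable effort

print(f"bonus on success      : {b:.2f}")
print(f"agent's expected rent : {agent_rent:.2f}")   # strictly positive since p0 > 0
print(f"principal's profit    : {principal_profit:.2f}")
print(f"first-best surplus    : {first_best:.2f}")
```

With these numbers the cheapest incentive-compatible bonus is 5, so the agent keeps an expected rent of 2 even though wages cannot be negative and effort is unobservable.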
[ { "math_id": 0, "text": "\nw = a + b(e + x + gy) \\,\n" }, { "math_id": 1, "text": "\n\\begin{align}\n\\text{wage} = {} & (\\text{base salary}) \n+ (\\text{incentives})\n\\cdot \\Big(\\text{(unobserved) effort} + \\text{(unobserved) effects} \\\\[5pt]\n& {} + (\\text{weight }g) \\cdot (\\text{observed exogenous effects})\\Big)\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=730742
73074374
Yan's theorem
In probability theory, Yan's theorem is a separation and existence result. It is of particular interest in financial mathematics, where it is used to prove the Kreps-Yan theorem. The theorem was published by Jia-An Yan. It was proven for the L1 space and later generalized by Jean-Pascal Ansel to the case formula_0. Yan's theorem. Notation: formula_1 is the closure of a set formula_2. formula_3. formula_4 is the indicator function of formula_5. formula_6 is the conjugate index of formula_7. Statement. Let formula_8 be a probability space, formula_0 and formula_9 be the space of non-negative and bounded random variables. Further let formula_10 be a convex subset and formula_11. Then the following three conditions are equivalent: (1) for every formula_12 with formula_13 there exists a constant formula_14 such that formula_15; (2) for every formula_16 with formula_17 there exists a constant formula_14 such that formula_18; (3) there exists a random variable formula_19 with formula_20 such that formula_21.
[ { "math_id": 0, "text": "1\\leq p<+\\infty" }, { "math_id": 1, "text": "\\overline{\\Omega}" }, { "math_id": 2, "text": "\\Omega" }, { "math_id": 3, "text": "A-B=\\{f-g:f\\in A,\\;g\\in B\\}" }, { "math_id": 4, "text": "I_A" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "q" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "(\\Omega,\\mathcal{F},P)" }, { "math_id": 9, "text": "B_+" }, { "math_id": 10, "text": "K\\subseteq L^p(\\Omega,\\mathcal{F},P)" }, { "math_id": 11, "text": "0\\in K" }, { "math_id": 12, "text": "f\\in L_+^p(\\Omega,\\mathcal{F},P)" }, { "math_id": 13, "text": "f\\neq 0" }, { "math_id": 14, "text": "c>0" }, { "math_id": 15, "text": "cf \\not\\in \\overline{K-B_+}" }, { "math_id": 16, "text": "A\\in \\mathcal{F}" }, { "math_id": 17, "text": "P(A)>0" }, { "math_id": 18, "text": "cI_A \\not\\in \\overline{K-B_+}" }, { "math_id": 19, "text": "Z\\in L^q" }, { "math_id": 20, "text": "Z>0" }, { "math_id": 21, "text": "\\sup\\limits_{Y\\in K}\\mathbb{E}[ZY]<+\\infty" } ]
https://en.wikipedia.org/wiki?curid=73074374
730763
HP-20S
Programmable calculator The HP-20S (F1890A) is an algebraic programmable scientific calculator produced by Hewlett-Packard from 1987 to 2000. A member of HP's Pioneer series, the 20S was a low cost model targeted at students, using the same hardware as the HP-10B business calculator. Compared with the higher-end 32S and 42S scientific calculators, the 20S includes much more basic functionality. As a student calculator, it also uses infix notation rather than the Reverse Polish notation found in higher-end HP calculators. Despite these limitations, the 20S is keystroke programmable, supporting up to 99 program lines of fully merged instructions and ten memory registers. Hardware. Introduced at the 1989 Consumer Electronics Show, the HP 20S had an initial retail price of 50 USD. Introduced simultaneously was the HP-10B, based on the same hardware but targeting the business calculator market. The retail price set a new bar for HP, who credited their delivery of a low-price product to tight integration between their research, development and manufacturing departments. The 20S has the same physical form factor and 37-key keypad as other models in the Pioneer series. The CPU is an HP Saturn (Bert) chip clocked at 640 kHz. With only 256 bytes of RAM, the 20S is at the bottom end of the HP Pioneer range. While higher end scientific models in the Pioneer series were fitted with dot-matrix displays that allowed their functionality to be organized into menus (the 22S, 32S and 42S being examples), the 20S has only a more primitive 12-digit seven-segment display. Advanced functionality is therefore accessed by a pair of shift keys, with almost every key on the keypad assigned secondary and tertiary functions. The initial design used blue and orange shift keys, but a visual refresh in 1999 changed the color scheme to green and purple. Critical evaluation. A 1994 evaluation of contemporary calculators criticized some of the features and quirks of the HP 20S. Some points of criticism included: Despite these criticisms, the same source had praise for the calculator's accuracy (rounding errors produced by other calculators did not occur), and for the quality of the HP 20S user manual. Program library. The 20S contained six preloaded programs in ROM for common mathematical operations. These programs could be loaded to RAM and used and edited as user programs. The program library was used by HP as a key feature for advertising the 20S. HP-21S. The HP 21S is a variant of the 20S designed specifically for statistical calculations. HP's stated goal in releasing the 21S was to eliminate the need for statistics tables, just as the HP-35 had previously eliminated for trigonometric and log tables. The majority of the features of the 20S are still present, including keystroke programming support and the typical trigonometric, logarithmic and exponential functions found on most scientific calculators. However, the 21S has several features specifically to support statistical analysis: To accommodate the extra functionality, the 21S sacrifices some of the 20S's functionality; specifically it does not support base arithmetic and unit conversions, along with hyperbolic functions.
[ { "math_id": 0, "text": "\\sqrt{-1}" } ]
https://en.wikipedia.org/wiki?curid=730763
73078000
Porous medium equation
The porous medium equation, also called the nonlinear heat equation, is a nonlinear partial differential equation taking the form: formula_0 where formula_1 is the Laplace operator. It may also be put into its equivalent divergence form:formula_2where formula_3 may be interpreted as a diffusion coefficient and formula_4 is the divergence operator. Solutions. Despite being a nonlinear equation, the porous medium equation may be solved exactly using separation of variables or a similarity solution. However, the separation of variables solution is known to blow up to infinity at a finite time. Barenblatt-Kompaneets-Zeldovich similarity solution. The similarity approach to solving the porous medium equation was taken by Barenblatt and Kompaneets/Zeldovich, which for formula_5 was to find a solution satisfying:formula_6for some unknown function formula_7 and unknown constants formula_8. The final solution to the porous medium equation under these scalings is:formula_9where formula_10 is the formula_11-norm, formula_12 is the positive part, and the coefficients are given by:formula_13 Applications. The porous medium equation has been found to have a number of applications in gas flow, heat transfer, and groundwater flow. Gas flow. The porous medium equation name originates from its use in describing the flow of an ideal gas in a homogeneous porous medium. We require three equations to completely specify the medium's density formula_14, flow velocity field formula_15, and pressure formula_16: the continuity equation for conservation of mass; Darcy's law for flow in a porous medium; and the ideal gas equation of state. These equations are summarized below:formula_17where formula_18 is the porosity, formula_19 is the permeability of the medium, formula_20 is the dynamic viscosity, and formula_21 is the polytropic exponent (equal to the heat capacity ratio for isentropic processes). Assuming constant porosity, permeability, and dynamic viscosity, the partial differential equation for the density is:formula_22where formula_23 and formula_24. Heat transfer. Using Fourier's law of heat conduction, the general equation for temperature change in a medium through conduction is:formula_25where formula_14 is the medium's density, formula_26 is the heat capacity at constant pressure, and formula_27 is the thermal conductivity. If the thermal conductivity depends on temperature according to the power law:formula_28Then the heat transfer equation may be written as the porous medium equation:formula_29with formula_30 and formula_31. The thermal conductivity of high-temperature plasmas seems to follow a power law.
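As a quick numerical illustration of the Barenblatt–Kompaneets–Zeldovich solution above, the sketch below (a hypothetical helper, not part of the original text) evaluates the closed-form profile and checks that its total mass stays constant in time, as expected for a source-type solution; the free constant b simply fixes that mass.

```python
import numpy as np

def barenblatt(t, x, m=2, n=1, b=1.0):
    """Barenblatt-Kompaneets-Zeldovich similarity solution of u_t = Laplacian(u^m).

    x has shape (..., n); b is the free constant that fixes the total mass.
    """
    alpha = n / (n * (m - 1) + 2)
    beta = 1.0 / (n * (m - 1) + 2)
    r2 = np.sum(np.asarray(x) ** 2, axis=-1)               # squared Euclidean norm
    core = b - (m - 1) / (2 * m) * beta * r2 / t ** (2 * beta)
    return t ** (-alpha) * np.maximum(core, 0.0) ** (1.0 / (m - 1))

# 1-D check: the integral of u over x should be (numerically) constant in t.
x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]
for t in (0.5, 1.0, 2.0, 4.0):
    u = barenblatt(t, x[:, None], m=2, n=1)
    print(f"t = {t:3.1f}   mass = {u.sum() * dx:.6f}")
```

The profile has compact support that spreads like t to the power beta, in contrast with the instantaneous spreading of solutions of the linear heat equation.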
[ { "math_id": 0, "text": " \\frac{\\partial u}{\\partial t} = \\Delta \\left(u^{m}\\right), \\quad m > 1 " }, { "math_id": 1, "text": "\\Delta" }, { "math_id": 2, "text": "{\\partial u\\over{\\partial t}} = \\nabla \\cdot \\left[ D(u)\\nabla u \\right]" }, { "math_id": 3, "text": "D(u) = mu^{m-1}" }, { "math_id": 4, "text": "\\nabla\\cdot(\\cdot)" }, { "math_id": 5, "text": "x \\in \\mathbb{R}^{n}" }, { "math_id": 6, "text": "u(t,x) = {1\\over{t^{\\alpha}}}v\\left( {x\\over{t^{\\beta}}} \\right), \\quad t > 0" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "\\alpha,\\beta" }, { "math_id": 9, "text": "u(t,x) = {1\\over{t^{\\alpha}}}\\left( b - {m-1\\over{2m}} \\beta {\\|x\\|^{2}\\over{t^{2\\beta}}} \\right)_{+}^{1\\over{m-1}}" }, { "math_id": 10, "text": "\\|\\cdot\\|^{2}" }, { "math_id": 11, "text": "\\ell^{2}" }, { "math_id": 12, "text": "(\\cdot)_{+}" }, { "math_id": 13, "text": "\\alpha = {n\\over{n(m-1) + 2}}, \\quad \\beta = {1\\over{n(m-1) + 2}}" }, { "math_id": 14, "text": "\\rho" }, { "math_id": 15, "text": "{\\bf v}" }, { "math_id": 16, "text": "p" }, { "math_id": 17, "text": "\\begin{aligned}\n\\varepsilon {\\partial \\rho\\over{\\partial t}} &= -\\nabla \\cdot (\\rho {\\bf v}) & (\\text{Conservation of mass}) \\\\\n{\\bf v} &= -{k\\over{\\mu}}\\nabla p & (\\text{Darcy's law}) \\\\\np &= p_{0}\\rho^{\\gamma} & (\\text{Equation of state})\n\\end{aligned}" }, { "math_id": 18, "text": "\\varepsilon" }, { "math_id": 19, "text": "k" }, { "math_id": 20, "text": "\\mu" }, { "math_id": 21, "text": "\\gamma" }, { "math_id": 22, "text": "{\\partial \\rho\\over{\\partial t}} = c\\Delta \\left( \\rho^{m} \\right)" }, { "math_id": 23, "text": "m = \\gamma + 1" }, { "math_id": 24, "text": "c = \\gamma k p_{0}/(\\gamma+1)\\varepsilon\\mu" }, { "math_id": 25, "text": "\\rho c_{p} {\\partial T\\over{\\partial t}} = \\nabla \\cdot (\\kappa \\nabla T)" }, { "math_id": 26, "text": "c_{p}" }, { "math_id": 27, "text": "\\kappa" }, { "math_id": 28, "text": "\\kappa = \\alpha T^{n}" }, { "math_id": 29, "text": "{\\partial T\\over{\\partial t}} = \\lambda\\Delta \\left(T^{m}\\right)" }, { "math_id": 30, "text": "m=n+1" }, { "math_id": 31, "text": "\\lambda = \\alpha/\\rho c_{p}m" } ]
https://en.wikipedia.org/wiki?curid=73078000
73078347
Cole–Hopf transformation
The Cole–Hopf transformation is a change of variables that transforms a special kind of parabolic partial differential equation (PDE) with a quadratic nonlinearity into a linear heat equation. In particular, it provides an explicit formula for fairly general solutions of the PDE in terms of the initial datum and the heat kernel. Consider the following PDE:formula_0where formula_1, formula_2 are constants, formula_3 is the Laplace operator, formula_4 is the gradient, and formula_5 is the Euclidean norm in formula_6. By assuming that formula_7, where formula_8 is an unknown smooth function, we may calculate:formula_9which implies that:formula_10if we constrain formula_11 to satisfy formula_12. Then we may transform the original nonlinear PDE into the canonical heat equation by using the transformation: formula_13 This is the Cole–Hopf transformation. With the transformation, the following initial-value problem can now be solved:formula_14The unique, bounded solution of this system is:formula_15Since the Cole–Hopf transformation implies that formula_16, the solution of the original nonlinear PDE is:formula_17
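The computation above can be checked symbolically. The following SymPy sketch (added; not part of the article) works in one spatial dimension and verifies that u = -(a/b) log w solves the quadratic PDE exactly when w solves the heat equation.

```python
# Added symbolic check (SymPy), in one spatial dimension: if w solves the heat equation
# w_t = a*w_xx, then u = -(a/b)*log(w) solves u_t - a*u_xx + b*u_x**2 = 0.
import sympy as sp

t, x, a, b = sp.symbols('t x a b', positive=True)
w = sp.Function('w')(t, x)               # a solution of the linear heat equation
u = -(a / b) * sp.log(w)                 # the Cole-Hopf substitution

residual = sp.diff(u, t) - a * sp.diff(u, x, 2) + b * sp.diff(u, x)**2
# The residual is proportional to the heat-equation residual of w:
heat_residual = sp.diff(w, t) - a * sp.diff(w, x, 2)
print(sp.simplify(residual * (-b * w / a) - heat_residual))   # prints 0
```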
[ { "math_id": 0, "text": "u_{t} - a\\Delta u + b\\|\\nabla u\\|^{2} = 0, \\quad u(0,x) = g(x)\n" }, { "math_id": 1, "text": "x\\in \\mathbb{R}^{n}" }, { "math_id": 2, "text": "a,b" }, { "math_id": 3, "text": "\\Delta" }, { "math_id": 4, "text": "\\nabla" }, { "math_id": 5, "text": "\\|\\cdot\\|" }, { "math_id": 6, "text": "\\mathbb{R}^{n}" }, { "math_id": 7, "text": "w = \\phi(u)" }, { "math_id": 8, "text": "\\phi(\\cdot)" }, { "math_id": 9, "text": "w_{t} = \\phi'(u)u_{t}, \\quad \\Delta w = \\phi'(u)\\Delta u + \\phi''(u)\\|\\nabla u\\|^{2}\n" }, { "math_id": 10, "text": "\\begin{aligned}\nw_{t} = \\phi'(u)u_{t} &= \\phi'(u)\\left( a\\Delta u - b\\|\\nabla u\\|^{2}\\right) \\\\\n&= a\\Delta w - (a\\phi'' + b\\phi')\\|\\nabla u\\|^{2} \\\\\n&= a\\Delta w\n\\end{aligned}\n" }, { "math_id": 11, "text": "\\phi" }, { "math_id": 12, "text": "a\\phi'' + b\\phi' = 0" }, { "math_id": 13, "text": " w(u) = e^{-bu/a} " }, { "math_id": 14, "text": "w_{t} - a\\Delta w = 0, \\quad w(0,x) = e^{-bg(x)/a}\n" }, { "math_id": 15, "text": "w(t,x) = {1\\over{(4\\pi at)^{n/2}}} \\int_{\\mathbb{R}^{n}} e^{-\\|x-y\\|^{2}/4at - bg(y)/a}dy\n" }, { "math_id": 16, "text": "u = -(a/b)\\log w" }, { "math_id": 17, "text": "u(t,x) = -{a\\over{b}}\\log \\left[ {1\\over{(4\\pi at)^{n/2}}} \\int_{\\mathbb{R}^{n}} e^{-\\|x-y\\|^{2}/4at - bg(y)/a}dy \\right]\n" } ]
https://en.wikipedia.org/wiki?curid=73078347
73081332
Aluminium–copper alloys
Aluminium–copper alloys (AlCu) are aluminium alloys in which copper (Cu) is the main alloying element. Important grades also contain additions of magnesium, iron, nickel and silicon (AlCu(Mg, Fe, Ni, Si)); manganese is often also included to increase strength (see aluminium-manganese alloys). The main area of application is aircraft construction. The alloys have medium to high strength and can be age hardened. They are available both as wrought alloys and as cast alloys. Their susceptibility to corrosion and their poor weldability are disadvantageous. Duralumin is the oldest variety in this group and goes back to Alfred Wilm, who discovered it in 1903. Aluminium owes its breakthrough as a widespread construction material to the aluminium-copper alloys, since pure aluminium is much too soft for such use; other hardenable alloys, such as the aluminium-magnesium-silicon alloys (AlMgSi), and the naturally hard (non-hardenable) alloys came later. Aluminium–copper alloys were standardised as the 2000 series of the international alloy designation system (IADS), which was originally created in 1970 by the Aluminum Association. The 2000 series includes the 2014 and 2024 alloys used in airframe fabrication. Copper alloys with aluminium as the main alloying metal are known as aluminium bronze; their aluminium content is generally less than 12%. History. Duralumin is a trade name for one of the earliest types of age-hardenable aluminium alloys. The term is a combination of "Dürener" and "aluminium". Its use as a trade name is obsolete. Duralumin was developed by the German metallurgist Alfred Wilm at Dürener Metallwerke AG. In 1903, Wilm discovered that after quenching, an aluminium alloy containing 4% copper would harden when left at room temperature for several days. Further improvements led to the introduction of duralumin in 1909. The name is now used mainly in popular science to describe the whole Al-Cu alloy system. Pure AlCu wrought alloys. All AlCu alloys are based on the system of pure AlCu alloys. Solubility of copper and phases. Aluminium forms a eutectic with copper at 547 °C and 33 mass percent copper, which also corresponds to the maximum solubility. At lower temperatures, the solubility drops sharply; at room temperature it is only 0.1%. At higher copper contents, the intermetallic phase Al2Cu is formed. It has a tetragonal structure, which is so different from the cubic crystal system of aluminium that the formula_0-phase can exist only as an incoherent phase. There are also the partially coherent formula_1- and formula_2-phases. Microstructural transformations. After casting, the material usually consists of a supersaturated mixed crystal (solid solution), which contains more copper at room temperature than could actually be dissolved at this temperature. The individual temperature ranges of the various precipitates overlap: even at low temperatures, formula_2- or formula_3-phases form, but they form much more slowly than the GP(I/II) zones. Each of the phases forms faster the higher the temperature. GP(I) zones. The formation of GP(I) zones is referred to as natural hardening and occurs at temperatures up to 80 °C. They are tiny disc-shaped layers just one atom thick and 2 to 5 nanometers in diameter.
With time, the number of zones increases and the copper concentration in them rises, but not their diameter. They are coherent with the aluminium lattice and form on the {100} planes. GP(II) zones. The GP(II) zones (formula_1-phases) are largely responsible for the increase in strength of the AlCu alloys. They are coherent with the aluminium crystal and consist of alternating layers of aluminium and copper with layer thicknesses of about 10 nanometers and dimensions of up to 150 nanometers. In contrast to the GP(I) zones, they are three-dimensional precipitates. Their layers are parallel to the aluminium {100} plane. The formula_5-phases form from the formula_4-phase, but the stages overlap. The GP(II) zones need vacancies for growth, which is why a lack of vacancies (for example due to magnesium) delays their growth. Partially coherent phases. The formula_2-phase is only partially coherent with the aluminium lattice and forms at temperatures from 150 °C to 300 °C. It has the form of platelets and can arise from the GP(II) zones. However, it can also arise directly as a precipitate from the mixed crystal. In the first case, the increasing surface tension is reduced by dislocations; in the second case, the precipitates form preferentially at dislocations. Incoherent phases. The formula_3-phase is incoherent with the lattice of the mixed crystal. It forms at temperatures of 300 °C and more. It usually forms larger particles with a larger spacing than the other phases and thus does not lead to any increase in strength, or even causes a drop in strength if its formation takes place at the expense of the other phases. The formula_3-phase also occurs at temperatures between 150 °C and 250 °C as a precipitate at grain boundaries, as this reduces the surface tension. The formula_3-phase leads to a partially intergranular fracture; however, the fracture behavior remains ductile overall. The change in fracture behavior is caused by precipitation-free zones at the grain boundaries. The formula_3-phase has a greater potential difference compared to the mixed crystal, so that layer corrosion and intergranular corrosion can occur. With longer annealing times, the formula_3-phases also precipitate in the interior of the grains, and the potential difference is then lower. Grades, alloying elements and contents. As with almost all aluminium alloys, a distinction is made between wrought alloys for rolling and forging and cast alloys for casting. The copper content is usually between 3 and 6%. Between 0.3% and 6% copper the alloys are regarded as not weldable or very difficult to weld (by fusion welding); with higher copper contents they become weldable again. Most types also contain additions of magnesium, manganese and silicon to increase strength. Lead and bismuth form small inclusions that melt at low temperatures, resulting in better chip formation, similar to free machining steel. The heat resistance is increased by adding nickel and iron. Iron is found as an impurity in engineering alloys and suppresses natural (cold) age hardening, but adding magnesium makes this hardening possible again. Larger amounts of magnesium, up to 1.5%, increase strength and elongation at break (see Aluminium-magnesium alloy). Manganese is also used to increase strength (see AlMn). Larger amounts, however, have negative side effects, so the content is limited to around 1% manganese.
Smaller additions of silicon are added to bind iron, since iron prefers to form the AlFeSi phase, whereas the formation of Al7Cu2Fe would remove larger amounts of copper from the material, which would then no longer be available to form the phases that are actually desired (especially Al2Cu, copper aluminide). Larger amounts of silicon are alloyed so that, together with magnesium, Mg2Si (magnesium silicide) forms, which, as in the aluminium-magnesium-silicon alloys, improves strength and hardenability. Lithium is added to some alloys with contents between 1.5% and 2.5%. Due to the very low density of Li (0.53 g/cm³ compared to 2.7 g/cm³ of aluminium), this leads to lighter components, which is particularly advantageous in aviation. See aluminium-lithium alloy for details. Cast alloys. Cast alloys contain about 4% copper and small amounts of other additives that improve castability, including titanium and magnesium. The starting material is primary aluminium; in contrast to other cast aluminium alloys, secondary aluminium (made from scrap) is not used because it reduces elongation at break and toughness. The AlCu cast alloys are prone to hot cracking and are used in the T4 and T6 hardening states. The following table shows the composition of some grades according to DIN EN 1706. All data are given in percent by mass; the remainder is aluminium. AlCuMg(Si,Mn) wrought alloys. The AlCuMg alloys represent the most important group of AlCu alloys. Many other phases can form in them. The addition of magnesium accelerates the process of cold hardening. Which phases are formed depends primarily on the ratio of copper to magnesium. If the ratio is less than 1/1, clusters containing Cu and Mg precipitate. At a ratio above 1.5/1, which is the case with most engineering alloys, the S-phase (Al2CuMg) forms preferentially. These kinds of alloys have significantly higher hardness and strength. Mechanical properties. 2000 series. The 2000 series was formerly referred to as duralumin. Applications. Aluminium-copper alloys are mainly used in aircraft construction, where their low corrosion resistance plays a subordinate role. Corrosion resistance can be greatly enhanced by the metallurgical bonding of a high-purity aluminium surface layer, referred to as alclad duralumin. To this day alclad materials are used commonly in the aircraft industry. The alloys are processed by rolling, forging, extrusion and partly by casting. Typical uses for wrought Al-Cu alloys are described below. Aviation. German scientific literature openly published information about duralumin, its composition and heat treatment, before the outbreak of World War I in 1914. Despite this, use of the alloy outside Germany did not occur until after fighting ended in 1918. Reports of German use during World War I, even in technical journals such as "Flight International", could still misidentify its key alloying component as magnesium rather than copper. Engineers in the UK showed little interest in duralumin until after the war. The earliest known attempt to use duralumin for a heavier-than-air aircraft structure occurred in 1916, when Hugo Junkers first introduced its use in the airframe of the Junkers J 3, a single-engined monoplane "technology demonstrator" that marked the first use of the Junkers trademark duralumin corrugated skinning. The Junkers company completed only the covered wings and tubular fuselage framework of the J 3 before abandoning its development.
The slightly later, solely "IdFlieg"-designated Junkers J.I armoured sesquiplane of 1917, known to the factory as the Junkers J 4, had its all-metal wings and horizontal stabilizer made in the same manner as the J 3's wings had been, like the experimental and airworthy all-duralumin Junkers J 7 single-seat fighter design, which led to the Junkers D.I low-wing monoplane fighter, introducing all-duralumin aircraft structural technology to German military aviation in 1918. Its first use in aerostatic airframes came in rigid airship frames, eventually including all those of the "Great Airship" era of the 1920s and 1930s: the British-built R-100, the German passenger Zeppelins LZ 127 "Graf Zeppelin", LZ 129 "Hindenburg", LZ 130 "Graf Zeppelin II", and the U.S. Navy airships USS "Los Angeles" (ZR-3, ex-LZ 126), USS "Akron" (ZRS-4) and USS "Macon" (ZRS-5). 2000 series were once the most common aerospace alloys, but because they were susceptible to stress corrosion cracking, they are increasingly being replaced by 7000 series in new designs. Bicycle. Duralumin was used to manufacture bicycle components and framesets from the 1930s to 1990s. Several companies in Saint-Étienne, France stood out for their early, innovative adoption of duralumin: in 1932, Verot et Perrin developed the first light alloy crank arms; in 1934, Haubtmann released a complete crankset; from 1935 on, Duralumin freewheels, derailleurs, pedals, brakes and handlebars were manufactured by several companies. Complete framesets followed quickly, including those manufactured by: Mercier (and Aviac and other licensees) with their popular Meca Dural family of models, the Pelissier brothers and their race-worthy La Perle models, and Nicolas Barra and his exquisite mid-twentieth century “Barralumin” creations. Other names that come up here also included: Pierre Caminade, with his beautiful Caminargent creations and their exotic octagonal tubing, and also Gnome et Rhône, with its deep heritage as an aircraft engine manufacturer that also diversified into motorcycles, velomotors and bicycles after World War Two. Mitsubishi Heavy Industries, which was prohibited from producing aircraft during the American occupation of Japan, manufactured the “cross” bicycle out of surplus wartime duralumin in 1946. The “cross” was designed by Kiro Honjo, a former aircraft designer responsible for the Mitsubishi G4M. Duralumin use in bicycle manufacturing faded in the 1970s and 1980s. Vitus (bicycle company) nonetheless released the venerable “979” frameset in 1979, a “Duralinox” model that became an instant classic among cyclists. The Vitus 979 was the first production aluminium frameset whose thin-wall 5083/5086 tubing was slip-fit and then glued together using a dry heat-activated epoxy. The result was an extremely lightweight but very durable frameset. Production of the Vitus 979 continued until 1992. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "\\theta''" }, { "math_id": 2, "text": "\\theta'" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "\\theta''" }, { "math_id": 5, "text": "\\theta'" } ]
https://en.wikipedia.org/wiki?curid=73081332
73082093
Dimension doubling theorem
In probability theory, the dimension doubling theorems are two results about the Hausdorff dimension of an image of a Brownian motion. At their core, both statements say that the Hausdorff dimension of the image of a set formula_0 under a Brownian motion is almost surely twice the dimension of the set. The first result is due to Henry P. McKean Jr. and is hence called McKean's theorem (1955). The second theorem is a refinement of McKean's result and is called Kaufman's theorem (1969), since it was proven by Robert Kaufman. Dimension doubling theorems. For a formula_1-dimensional Brownian motion formula_2 and a set formula_3, the image of formula_0 under formula_4 is defined as formula_5 McKean's theorem. Let formula_2 be a Brownian motion in dimension formula_6. Let formula_3, then formula_7 formula_8-almost surely. Kaufman's theorem. Let formula_2 be a Brownian motion in dimension formula_6. Then formula_8-almost surely, for any set formula_3, we have formula_9 Difference of the theorems. The difference of the theorems is the following: in McKean's result the formula_8-null set on which the statement fails depends on the choice of formula_0. Kaufman's result, on the other hand, holds for all choices of formula_0 simultaneously. This means Kaufman's theorem can also be applied to random sets formula_0.
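As an added illustration (not part of the original article), applying Kaufman's theorem to the unit time interval recovers the well-known fact that the range of a Brownian path in dimension two or higher is two-dimensional:

```latex
% Added illustration: take A = [0,1], so that \dim A = 1. For Brownian motion in
% dimension d \ge 2, Kaufman's theorem gives, almost surely,
\dim W([0,1]) \;=\; 2 \cdot \dim [0,1] \;=\; 2 ,
% i.e. the range of a Brownian path on the unit time interval has Hausdorff dimension 2.
```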
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "W(t)" }, { "math_id": 3, "text": "A\\subset [0,\\infty)" }, { "math_id": 4, "text": "W" }, { "math_id": 5, "text": "W(A):=\\{W(t): t\\in A\\}\\subset \\R^d." }, { "math_id": 6, "text": "d\\geq 2" }, { "math_id": 7, "text": "\\dim W(A)=2\\dim A" }, { "math_id": 8, "text": "P" }, { "math_id": 9, "text": "\\dim W(A)=2\\dim A." } ]
https://en.wikipedia.org/wiki?curid=73082093
730824
Van 't Hoff factor
Measure of solute effect The van 't Hoff factor i (named after Dutch chemist Jacobus Henricus van 't Hoff) is a measure of the effect of a solute on colligative properties such as osmotic pressure, relative lowering in vapor pressure, boiling-point elevation and freezing-point depression. The van 't Hoff factor is the ratio between the actual concentration of particles produced when the substance is dissolved and the concentration of the substance as calculated from its mass. For most non-electrolytes dissolved in water, the van 't Hoff factor is essentially 1. For most ionic compounds dissolved in water, the van 't Hoff factor is equal to the number of discrete ions in a formula unit of the substance. This is true for ideal solutions only, as occasionally ion pairing occurs in solution. At a given instant a small percentage of the ions are paired and count as a single particle. Ion pairing occurs to some extent in all electrolyte solutions. This causes the measured van 't Hoff factor to be less than that predicted in an ideal solution. The deviation of the van 't Hoff factor tends to be greatest where the ions have multiple charges. The factor relates osmolarity to molarity and osmolality to molality. Dissociated solutes. The degree of dissociation is the fraction of the original solute molecules that have dissociated. It is usually indicated by the Greek symbol formula_0. There is a simple relationship between this parameter and the van 't Hoff factor. If a fraction formula_0 of the solute dissociates into formula_1 ions, then formula_2 For example, the dissociation KCl ⇌ K+ + Cl− yields formula_3 ions, so that formula_4. For dissociation in the absence of association, the van 't Hoff factor is: formula_5. Associated solutes. Similarly, if a fraction formula_0 of formula_1 moles of solute associate to form one mole of an "n"-mer (dimer, trimer, etc.), then formula_6 For the dimerisation of acetic acid in benzene, 2 CH3COOH ⇌ (CH3COOH)2, two moles of acetic acid associate to form one mole of dimer, so that formula_7 For association in the absence of dissociation, the van 't Hoff factor is: formula_8. Physical significance of i. The value of i is the actual number of particles in solution after dissociation divided by the number of formula units initially dissolved in solution; it represents the number of particles per formula unit of the solute when the solution is dilute. Relation to osmotic coefficient. This quantity can be related to the osmotic coefficient g by the relation: formula_9.
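A small added numerical sketch of the two formulas above; the degrees of dissociation and association used here are arbitrary illustrative values, not experimental data.

```python
# Added numerical example of the two formulas above; the degrees of dissociation and
# association used here are arbitrary illustrative values, not measured data.

def i_dissociation(alpha, n):
    # i = 1 + alpha*(n - 1) for a solute dissociating into n ions
    return 1 + alpha * (n - 1)

def i_association(alpha, n):
    # i = 1 - (1 - 1/n)*alpha for n molecules associating into one n-mer
    return 1 - (1 - 1.0 / n) * alpha

print(i_dissociation(1.0, 2))   # fully dissociated KCl -> K+ + Cl-: i = 2
print(i_dissociation(0.9, 3))   # a 2:1 salt (3 ions) at 90% dissociation: i = 2.8
print(i_association(1.0, 2))    # complete dimerisation of acetic acid: i = 0.5
```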
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": " i = 1 + \\alpha (n - 1). " }, { "math_id": 3, "text": "n = 2" }, { "math_id": 4, "text": "i = 1 + \\alpha" }, { "math_id": 5, "text": "i > 1 " }, { "math_id": 6, "text": " i = 1 - \\left(1 - \\frac{1}{n}\\right)\\alpha. " }, { "math_id": 7, "text": "i = 1 - \\left(1 - \\frac{1}{2}\\right)\\alpha = 1 - \\frac{\\alpha}{2}." }, { "math_id": 8, "text": "i <1 " }, { "math_id": 9, "text": "i = n g" } ]
https://en.wikipedia.org/wiki?curid=730824
7308284
Stopped process
Stochastic process In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed (possibly random) time. Definition. Let formula_0 be a probability space; let formula_1 be a measurable space; let formula_2 be a stochastic process; and let formula_3 be a stopping time with respect to some filtration formula_4 of formula_5. Then the stopped process formula_6 is defined for formula_7 and formula_8 by formula_9 Examples. Gambling. Consider a gambler playing roulette. "X""t" denotes the gambler's total holdings in the casino at time "t" ≥ 0, which may or may not be allowed to be negative, depending on whether or not the casino offers credit. Let "Y""t" denote what the gambler's holdings would be if he/she could obtain unlimited credit (so "Y" can attain negative values). formula_10 is a stopping time for "Y", and, since the gambler cannot continue to play after he/she has exhausted his/her resources, "X" is the stopped process "Y""τ". Brownian motion. Let formula_11 be a one-dimensional standard Brownian motion starting at zero. If the stopping time is deterministic, formula_13 for some constant formula_12, then the stopped Brownian motion formula_14 evolves as per usual up until time formula_15 and thereafter stays constant at its value at time formula_15: i.e., formula_16 for all formula_17. If instead the stopping time formula_18 is the first hitting time of the set formula_19, formula_20 then the stopped Brownian motion formula_14 will evolve as per usual up until the random time formula_18, and will thereafter be constant with value formula_21: i.e., formula_22 for all formula_23.
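An added illustrative sketch (not from the article): a random-walk discretisation of the Brownian-motion example above, stopped at the first time the path reaches a level a; the step size, horizon and level are arbitrary choices.

```python
# Added illustration: a random-walk discretisation of Brownian motion stopped at the
# first hitting time of the level a. Step size, horizon and level are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, a = 1e-3, 20_000, 1.0

increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
B = np.concatenate(([0.0], np.cumsum(increments)))    # approximate Brownian path

hits = np.nonzero(B >= a)[0]
tau = hits[0] if hits.size else n_steps               # index of the (approximate) hitting time

B_stopped = B.copy()
B_stopped[tau:] = B[tau]                              # frozen at its value from time tau onward

print(tau * dt, B_stopped[-1])                        # hitting time and final (frozen) value
```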
[ { "math_id": 0, "text": "(\\Omega, \\mathcal{F}, \\mathbb{P})" }, { "math_id": 1, "text": "(\\mathbb{X}, \\mathcal{A})" }, { "math_id": 2, "text": "X : [0, + \\infty) \\times \\Omega \\to \\mathbb{X}" }, { "math_id": 3, "text": "\\tau : \\Omega \\to [0, + \\infty]" }, { "math_id": 4, "text": "\\{ \\mathcal{F}_{t} | t \\geq 0 \\}" }, { "math_id": 5, "text": "{}\\mathcal{F}" }, { "math_id": 6, "text": "X^{\\tau}" }, { "math_id": 7, "text": "t \\geq 0" }, { "math_id": 8, "text": "\\omega \\in \\Omega" }, { "math_id": 9, "text": "X_{t}^{\\tau} (\\omega) := X_{\\min \\{ t, \\tau (\\omega) \\}} (\\omega)." }, { "math_id": 10, "text": "\\tau (\\omega) := \\inf \\{ t \\geq 0 | Y_{t} (\\omega) = 0 \\}" }, { "math_id": 11, "text": "B : [0, + \\infty) \\times \\Omega \\to \\mathbb{R}" }, { "math_id": 12, "text": "T > 0" }, { "math_id": 13, "text": "\\tau (\\omega) \\equiv T" }, { "math_id": 14, "text": "B^{\\tau}" }, { "math_id": 15, "text": "T" }, { "math_id": 16, "text": "B_{t}^{\\tau} (\\omega) \\equiv B_{T} (\\omega)" }, { "math_id": 17, "text": "t \\geq T" }, { "math_id": 18, "text": "\\tau" }, { "math_id": 19, "text": "\\{ x \\in \\mathbb{R} | x \\geq a \\}" }, { "math_id": 20, "text": "\\tau (\\omega) := \\inf \\{ t > 0 | B_{t} (\\omega) \\geq a \\}." }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "B_{t}^{\\tau} (\\omega) \\equiv a " }, { "math_id": 23, "text": "t \\geq \\tau (\\omega)" } ]
https://en.wikipedia.org/wiki?curid=7308284
7309022
Nearest neighbor search
Optimization problem in computer science Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values. Formally, the nearest-neighbor (NN) search problem is defined as follows: given a set "S" of points in a space "M" and a query point "q" ∈ "M", find the closest point in "S" to "q". Donald Knuth in vol. 3 of "The Art of Computer Programming" (1973) called it the post-office problem, referring to an application of assigning to a residence the nearest post office. A direct generalization of this problem is a "k"-NN search, where we need to find the "k" closest points. Most commonly "M" is a metric space and dissimilarity is expressed as a distance metric, which is symmetric and satisfies the triangle inequality. Even more common, "M" is taken to be the "d"-dimensional vector space where dissimilarity is measured using the Euclidean distance, Manhattan distance or other distance metric. However, the dissimilarity function can be arbitrary. One example is asymmetric Bregman divergence, for which the triangle inequality does not hold. Applications. The nearest neighbor search problem arises in numerous fields of application, including: Methods. Various solutions to the NNS problem have been proposed. The quality and usefulness of the algorithms are determined by the time complexity of queries as well as the space complexity of any search data structures that must be maintained. The informal observation usually referred to as the curse of dimensionality states that there is no general-purpose exact solution for NNS in high-dimensional Euclidean space using polynomial preprocessing and polylogarithmic search time. Exact methods. Linear search. The simplest solution to the NNS problem is to compute the distance from the query point to every other point in the database, keeping track of the "best so far". This algorithm, sometimes referred to as the naive approach, has a running time of "O"("dN"), where "N" is the cardinality of "S" and "d" is the dimensionality of "S". There are no search data structures to maintain, so the linear search has no space complexity beyond the storage of the database. Naive search can, on average, outperform space partitioning approaches on higher dimensional spaces. The absolute distance is not required for distance comparison, only the relative distance. In geometric coordinate systems the distance calculation can be sped up considerably by omitting the square root calculation from the distance calculation between two coordinates. The distance comparison will still yield identical results. Space partitioning. Since the 1970s, the branch and bound methodology has been applied to the problem. In the case of Euclidean space, this approach encompasses spatial index or spatial access methods. Several space-partitioning methods have been developed for solving the NNS problem. Perhaps the simplest is the k-d tree, which iteratively bisects the search space into two regions containing half of the points of the parent region. Queries are performed via traversal of the tree from the root to a leaf by evaluating the query point at each split. Depending on the distance specified in the query, neighboring branches that might contain hits may also need to be evaluated. 
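As an added usage sketch (not part of the article), the following code queries a k-d tree via SciPy's cKDTree and cross-checks the answer against the naive linear scan described above; the random points are placeholder data.

```python
# Added usage sketch: a k-d tree nearest-neighbour query via SciPy, cross-checked
# against the naive linear scan described above. The random points are placeholder data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.random((10_000, 3))          # the set S
q = rng.random(3)                         # the query point

tree = cKDTree(points)                    # build the space-partitioning index once
d_tree, i_tree = tree.query(q, k=1)       # nearest neighbour found by tree traversal

i_linear = np.argmin(np.linalg.norm(points - q, axis=1))   # O(dN) linear scan
assert i_linear == i_tree                 # both approaches return the same point
```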
For constant dimension query time, average complexity is "O"(log "N") in the case of randomly distributed points, worst case complexity is "O"("kN"^(1-1/"k")) Alternatively the R-tree data structure was designed to support nearest neighbor search in dynamic context, as it has efficient algorithms for insertions and deletions such as the R* tree. R-trees can yield nearest neighbors not only for Euclidean distance, but can also be used with other distances. In the case of general metric space, the branch-and-bound approach is known as the metric tree approach. Particular examples include vp-tree and BK-tree methods. Using a set of points taken from a 3-dimensional space and put into a BSP tree, and given a query point taken from the same space, a possible solution to the problem of finding the nearest point-cloud point to the query point is given in the following description of an algorithm. (Strictly speaking, no such point may exist, because it may not be unique. But in practice, usually we only care about finding any one of the subset of all point-cloud points that exist at the shortest distance to a given query point.) The idea is, for each branching of the tree, guess that the closest point in the cloud resides in the half-space containing the query point. This may not be the case, but it is a good heuristic. After having recursively gone through all the trouble of solving the problem for the guessed half-space, now compare the distance returned by this result with the shortest distance from the query point to the partitioning plane. This latter distance is that between the query point and the closest possible point that could exist in the half-space not searched. If this distance is greater than that returned in the earlier result, then clearly there is no need to search the other half-space. If there is such a need, then you must go through the trouble of solving the problem for the other half space, and then compare its result to the former result, and then return the proper result. The performance of this algorithm is nearer to logarithmic time than linear time when the query point is near the cloud, because as the distance between the query point and the closest point-cloud point nears zero, the algorithm needs only perform a look-up using the query point as a key to get the correct result. Approximation methods. An approximate nearest neighbor search algorithm is allowed to return points whose distance from the query is at most formula_0 times the distance from the query to its nearest points. The appeal of this approach is that, in many cases, an approximate nearest neighbor is almost as good as the exact one. In particular, if the distance measure accurately captures the notion of user quality, then small differences in the distance should not matter. Greedy search in proximity neighborhood graphs. Proximity graph methods (such as navigable small world graphs and HNSW) are considered the current state-of-the-art for the approximate nearest neighbors search. The methods are based on greedy traversing in proximity neighborhood graphs formula_1 in which every point formula_2 is uniquely associated with vertex formula_3. The search for the nearest neighbors to a query "q" in the set "S" takes the form of searching for the vertex in the graph formula_1. 
The basic algorithm – greedy search – works as follows: search starts from an enter-point vertex formula_3 by computing the distances from the query q to each vertex of its neighborhood formula_4, and then finds a vertex with the minimal distance value. If the distance value between the query and the selected vertex is smaller than the one between the query and the current element, then the algorithm moves to the selected vertex, and it becomes new enter-point. The algorithm stops when it reaches a local minimum: a vertex whose neighborhood does not contain a vertex that is closer to the query than the vertex itself. The idea of proximity neighborhood graphs was exploited in multiple publications, including the seminal paper by Arya and Mount, in the VoroNet system for the plane, in the RayNet system for the formula_5, and in the Navigable Small World, Metrized Small World and HNSW algorithms for the general case of spaces with a distance function. These works were preceded by a pioneering paper by Toussaint, in which he introduced the concept of a "relative neighborhood" graph. Locality sensitive hashing. Locality sensitive hashing (LSH) is a technique for grouping points in space into 'buckets' based on some distance metric operating on the points. Points that are close to each other under the chosen metric are mapped to the same bucket with high probability. Nearest neighbor search in spaces with small intrinsic dimension. The cover tree has a theoretical bound that is based on the dataset's doubling constant. The bound on search time is "O"("c"12 log "n") where "c" is the expansion constant of the dataset. Projected radial search. In the special case where the data is a dense 3D map of geometric points, the projection geometry of the sensing technique can be used to dramatically simplify the search problem. This approach requires that the 3D data is organized by a projection to a two-dimensional grid and assumes that the data is spatially smooth across neighboring grid cells with the exception of object boundaries. These assumptions are valid when dealing with 3D sensor data in applications such as surveying, robotics and stereo vision but may not hold for unorganized data in general. In practice this technique has an average search time of "O"("1") or "O"("K") for the "k"-nearest neighbor problem when applied to real world stereo vision data. Vector approximation files. In high-dimensional spaces, tree indexing structures become useless because an increasing percentage of the nodes need to be examined anyway. To speed up linear search, a compressed version of the feature vectors stored in RAM is used to prefilter the datasets in a first run. The final candidates are determined in a second stage using the uncompressed data from the disk for distance calculation. Compression/clustering based search. The VA-file approach is a special case of a compression based search, where each feature component is compressed uniformly and independently. The optimal compression technique in multidimensional spaces is Vector Quantization (VQ), implemented through clustering. The database is clustered and the most "promising" clusters are retrieved. Huge gains over VA-File, tree-based indexes and sequential scan have been observed. Also note the parallels between clustering and LSH. Variants. There are numerous variants of the NNS problem and the two most well-known are the "k"-nearest neighbor search and the ε-approximate nearest neighbor search. "k"-nearest neighbors. 
"k"-nearest neighbor search identifies the top "k" nearest neighbors to the query. This technique is commonly used in predictive analytics to estimate or classify a point based on the consensus of its neighbors. "k"-nearest neighbor graphs are graphs in which every point is connected to its "k" nearest neighbors. Approximate nearest neighbor. In some applications it may be acceptable to retrieve a "good guess" of the nearest neighbor. In those cases, we can use an algorithm which doesn't guarantee to return the actual nearest neighbor in every case, in return for improved speed or memory savings. Often such an algorithm will find the nearest neighbor in a majority of cases, but this depends strongly on the dataset being queried. Algorithms that support the approximate nearest neighbor search include locality-sensitive hashing, best bin first and balanced box-decomposition tree based search. Nearest neighbor distance ratio. Nearest neighbor distance ratio does not apply the threshold on the direct distance from the original point to the challenger neighbor but on a ratio of it depending on the distance to the previous neighbor. It is used in CBIR to retrieve pictures through a "query by example" using the similarity between local features. More generally it is involved in several matching problems. Fixed-radius near neighbors. Fixed-radius near neighbors is the problem where one wants to efficiently find all points given in Euclidean space within a given fixed distance from a specified point. The distance is assumed to be fixed, but the query point is arbitrary. All nearest neighbors. For some applications (e.g. entropy estimation), we may have "N" data-points and wish to know which is the nearest neighbor "for every one of those N points". This could, of course, be achieved by running a nearest-neighbor search once for every point, but an improved strategy would be an algorithm that exploits the information redundancy between these "N" queries to produce a more efficient search. As a simple example: when we find the distance from point "X" to point "Y", that also tells us the distance from point "Y" to point "X", so the same calculation can be reused in two different queries. Given a fixed dimension, a semi-definite positive norm (thereby including every Lp norm), and "n" points in this space, the nearest neighbour of every point can be found in "O"("n" log "n") time and the "m" nearest neighbours of every point can be found in "O"("mn" log "n") time. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "G(V,E)" }, { "math_id": 2, "text": "x_i \\in S " }, { "math_id": 3, "text": "v_i \\in V " }, { "math_id": 4, "text": "\\{v_j:(v_i,v_j) \\in E\\}" }, { "math_id": 5, "text": "\\mathbb{E}^n" } ]
https://en.wikipedia.org/wiki?curid=7309022
7309251
Neighbourhood (graph theory)
Subgraph made of all nodes linked to a given node of a graph In graph theory, an adjacent vertex of a vertex v in a graph is a vertex that is connected to v by an edge. The neighbourhood of a vertex v in a graph G is the subgraph of G induced by all vertices adjacent to v, i.e., the graph composed of the vertices adjacent to v and all edges connecting vertices adjacent to v. The neighbourhood is often denoted "N""G"("v") or (when the graph is unambiguous) "N"("v"). The same neighbourhood notation may also be used to refer to sets of adjacent vertices rather than the corresponding induced subgraphs. The neighbourhood described above does not include v itself, and is more specifically the open neighbourhood of v; it is also possible to define a neighbourhood in which v itself is included, called the closed neighbourhood and denoted by "N"["v"]. When stated without any qualification, a neighbourhood is assumed to be open. Neighbourhoods may be used to represent graphs in computer algorithms, via the adjacency list and adjacency matrix representations. Neighbourhoods are also used in the clustering coefficient of a graph, which is a measure of the average density of its neighbourhoods. In addition, many important classes of graphs may be defined by properties of their neighbourhoods, or by symmetries that relate neighbourhoods to each other. An isolated vertex has no adjacent vertices. The degree of a vertex is equal to the number of adjacent vertices. A special case is a loop that connects a vertex to itself; if such an edge exists, the vertex belongs to its own neighbourhood. Local properties in graphs. If all vertices in "G" have neighbourhoods that are isomorphic to the same graph "H", "G" is said to be "locally H", and if all vertices in "G" have neighbourhoods that belong to some graph family "F", "G" is said to be "locally F". For instance, in the octahedron graph, each vertex has a neighbourhood isomorphic to a cycle of four vertices, so the octahedron is locally "C"4. Neighbourhood of a set. For a set "A" of vertices, the neighbourhood of "A" is the union of the neighbourhoods of the vertices, and so it is the set of all vertices adjacent to at least one member of "A". A set "A" of vertices in a graph is said to be a module if every vertex in "A" has the same set of neighbours outside of "A". Any graph has a unique recursive decomposition into modules, its modular decomposition, which can be constructed from the graph in linear time; modular decomposition algorithms have applications in other graph algorithms including the recognition of comparability graphs.
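An added minimal sketch (not part of the article) of computing open and closed neighbourhoods from an adjacency-list representation; the example graph is arbitrary.

```python
# Added sketch: open and closed neighbourhoods from an adjacency-list representation.
# The example graph is an arbitrary illustration.
graph = {                      # adjacency list of a small undirected graph
    'a': {'b', 'c'},
    'b': {'a', 'c', 'd'},
    'c': {'a', 'b'},
    'd': {'b'},
}

def open_neighbourhood(g, v):
    # N(v): the vertices adjacent to v (v itself excluded unless it has a loop)
    return set(g[v])

def closed_neighbourhood(g, v):
    # N[v] = N(v) united with {v}
    return set(g[v]) | {v}

print(open_neighbourhood(graph, 'b'))     # {'a', 'c', 'd'}
print(closed_neighbourhood(graph, 'b'))   # {'a', 'b', 'c', 'd'}
```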
[ { "math_id": 0, "text": "O(\\sqrt{kn})" }, { "math_id": 1, "text": "n^{2-o(1)}" } ]
https://en.wikipedia.org/wiki?curid=7309251
73097380
Sung Ryul Eric Yang
Physics professor Sung Ryul Eric Yang (S.-R. Eric Yang) is a theoretical condensed matter physicist. He is a full professor in the Department of Physics of Korea University. Education. Yang earned his Candidate of Science degree from the University of Copenhagen in 1982 under the supervision of Henrik Smith, a co-author of the book "Bose–Einstein Condensation in Dilute Gases" (Cambridge University Press, 2008). He then attended the University of California, San Diego, from which he obtained a PhD degree in physics in 1986 under the supervision of Lu Jeu Sham. Academic career. After receiving his PhD, Yang worked as a postdoctoral researcher at the University of Maryland from 1986 to 1987. He then worked as a research officer at the National Research Council of Canada from 1987 to 1995. In 1995, Yang joined the faculty of the Department of Physics at Korea University as an associate professor, and in 2000 he became a full professor. He was the condensed matter coordinator at the Asia Pacific Center for Theoretical Physics (APCTP) during its establishment period. His research interests include quantum Hall edge reconstruction, many-body physics and its interplay with disorder. He showed, in collaboration with Allan H. MacDonald, that electron interaction effects do not change the critical properties of the integer quantum Hall plateau transitions, except for the dynamic critical exponent. In recent years, Yang's research has focused on the topological order of disordered graphene zigzag nanoribbons. He showed that disordered interacting graphene zigzag nanoribbons form a new topologically ordered insulator with semionic formula_0 fractional charges. Textbook. In 2023, Yang published a graduate textbook titled "Topologically Ordered Zigzag Nanoribbon: e/2 Fractionally Charged Anyons and Spin-Charge Separation" (World Scientific). The book has been reviewed by Philip Kim. Personal life. From the age of 12, Yang has lived in various countries including Vietnam, Mexico, Denmark, the USA, and Canada. Denmark is where he spent the majority of his adolescent years. He has a Korean wife and two daughters.
[ { "math_id": 0, "text": " e/2 " } ]
https://en.wikipedia.org/wiki?curid=73097380
73098277
Factorization algebra
Algebraic structure in mathematical physics In mathematics and mathematical physics, a factorization algebra is an algebraic structure first introduced by Beilinson and Drinfel'd in an algebro-geometric setting as a reformulation of chiral algebras, and also studied in a more general setting by Costello and Gwilliam to study quantum field theory. Definition. Prefactorization algebras. A factorization algebra is a prefactorization algebra satisfying some properties, similar to the way a sheaf is a presheaf with extra conditions. If formula_0 is a topological space, a prefactorization algebra formula_1 of vector spaces on formula_0 is an assignment of vector spaces formula_2 to open sets formula_3 of formula_0, along with the following conditions on the assignment: for each inclusion formula_4 of open sets there is a structure map formula_5; for each finite collection of pairwise disjoint open sets formula_8 with formula_7 there is a structure map formula_6; and the structure maps are compatible with composition, meaning that for pairwise disjoint open sets formula_9, open sets formula_10, and an open set formula_11 with formula_12 and formula_13, the following diagram commutes. formula_14 So formula_1 resembles a precosheaf, except the vector spaces are tensored rather than (direct-)summed. The category of vector spaces can be replaced with any symmetric monoidal category. Factorization algebras. To define factorization algebras, it is necessary to define a Weiss cover. For formula_3 an open set, a collection of opens formula_15 is a Weiss cover of formula_3 if for any finite collection of points formula_16 in formula_3, there is an open set formula_17 such that formula_18. Then a factorization algebra of vector spaces on formula_0 is a prefactorization algebra of vector spaces on formula_0 so that for every open formula_3 and every Weiss cover formula_19 of formula_3, the sequence formula_20 is exact. That is, formula_1 is a factorization algebra if it is a cosheaf with respect to the Weiss topology. A factorization algebra is "multiplicative" if, in addition, for each pair of disjoint opens formula_21, the structure map formula_22 is an isomorphism. Algebro-geometric formulation. While this formulation is related to the one given above, the relation is not immediate. Let formula_23 be a smooth complex curve. A factorization algebra on formula_23 consists of a quasicoherent sheaf formula_24 over formula_25 for every finite set formula_26, together with functorial isomorphisms formula_27 over formula_28 for surjections formula_29, and (factorization) functorial isomorphisms formula_30 over formula_31. The sheaves must also carry a unit: writing formula_32 and formula_33, there is a global section (the unit) formula_34 such that for every local section formula_35 (formula_36), the section formula_37 of formula_38 extends across the diagonal and restricts to formula_39. Example. Associative algebra. Any associative algebra formula_40 can be realized as a prefactorization algebra formula_41 on formula_42. To each open interval formula_43, assign formula_44. An arbitrary open is a disjoint union of countably many open intervals, formula_45, and then set formula_46. The structure maps simply come from the multiplication map on formula_40. Some care is needed for infinite tensor products, but for finitely many open intervals the picture is straightforward.
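As an added concrete note on this example (not in the original text, and assuming the usual convention that disjoint intervals are multiplied in their left-to-right order on the real line; the interval names are illustrative), the structure map for two disjoint intervals inside a larger one is simply multiplication in formula_40:

```latex
% Added note (assuming the usual convention of multiplying along the left-to-right
% order of disjoint intervals on the real line): for open intervals I_1 < I_2 inside
% an interval J, the structure map of the prefactorization algebra A^f is
m^{I_1, I_2}_{J} : A^f(I_1) \otimes A^f(I_2) \to A^f(J), \qquad a \otimes b \mapsto ab .
```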
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "\\mathcal{F}(U)" }, { "math_id": 3, "text": "U" }, { "math_id": 4, "text": "U \\subset V" }, { "math_id": 5, "text": "m_V^U: \\mathcal{F}(U) \\rightarrow \\mathcal{F}(V)" }, { "math_id": 6, "text": "m_V^{U_1, \\cdots, U_n}: \\mathcal{F}(U_1)\\otimes \\cdots \\otimes \\mathcal{F}(U_n) \\rightarrow \\mathcal{F}(V)" }, { "math_id": 7, "text": "U_i \\subset V" }, { "math_id": 8, "text": "U_i" }, { "math_id": 9, "text": "U_{i, j}" }, { "math_id": 10, "text": "V_i" }, { "math_id": 11, "text": "W" }, { "math_id": 12, "text": "U_{i,1}\\sqcup \\cdots \\sqcup U_{i, n_i} \\subset V_i" }, { "math_id": 13, "text": "V_1 \\sqcup \\cdots V_n \\subset W" }, { "math_id": 14, "text": "\n\\begin{array}{lcl}\n & \\bigotimes_i \\bigotimes_j \\mathcal{F}(U_{i,j}) & \\rightarrow & \\bigotimes_i \\mathcal{F}(V_i) & \\\\\n & \\downarrow & \\swarrow & \\\\\n & \\mathcal{F}(W) & & & \\\\\n\\end{array}\n" }, { "math_id": 15, "text": "\\mathfrak{U} = \\{U_i | i \\in I\\}" }, { "math_id": 16, "text": "\\{x_1, \\cdots, x_k\\}" }, { "math_id": 17, "text": "U_i \\in \\mathfrak{U}" }, { "math_id": 18, "text": "\\{x_1, \\cdots, x_k\\} \\subset U_i" }, { "math_id": 19, "text": "\\{U_i | i \\in I\\}" }, { "math_id": 20, "text": " \\bigoplus_{i,j} \\mathcal{F}(U_i \\cap U_j) \\rightarrow \\bigoplus_k \\mathcal{F}(U_k) \\rightarrow \\mathcal{F}(U) \\rightarrow 0" }, { "math_id": 21, "text": "U, V \\subset M" }, { "math_id": 22, "text": " m^{U, V}_{U\\sqcup V} : \\mathcal{F}(U)\\otimes \\mathcal{F}(V) \\rightarrow \\mathcal{F}(U \\sqcup V)" }, { "math_id": 23, "text": "X" }, { "math_id": 24, "text": "\\mathcal{V}_{X, I}" }, { "math_id": 25, "text": "X^{I}" }, { "math_id": 26, "text": "I" }, { "math_id": 27, "text": "\\Delta^*_{J/I}\\mathcal{V}_{X, J} \\rightarrow \\mathcal{V}_{X, I}" }, { "math_id": 28, "text": "X^I" }, { "math_id": 29, "text": "J \\rightarrow I" }, { "math_id": 30, "text": " j^*_{J/I}\\mathcal{V}_{X, J} \\rightarrow j^*_{J/I}(\\boxtimes_{i \\in I} \\mathcal{V}_{X, p^{-1}(i)})" }, { "math_id": 31, "text": "U^{J/I}" }, { "math_id": 32, "text": "\\mathcal{V} = \\mathcal{V}_{X, \\{1\\}}" }, { "math_id": 33, "text": "\\mathcal{V}_2 = \\mathcal{V}_{X, \\{1, 2\\}}" }, { "math_id": 34, "text": "1 \\in \\mathcal{V}(X)" }, { "math_id": 35, "text": "f \\in \\mathcal V(U)" }, { "math_id": 36, "text": "U \\subset X" }, { "math_id": 37, "text": "1 \\boxtimes f" }, { "math_id": 38, "text": "\\mathcal{V}_2|_{U^2\\Delta}" }, { "math_id": 39, "text": "f \\in \\mathcal{V} \\cong \\mathcal{V}_2|_\\Delta" }, { "math_id": 40, "text": "A" }, { "math_id": 41, "text": "A^{f}" }, { "math_id": 42, "text": "\\mathbb{R}" }, { "math_id": 43, "text": "(a,b)" }, { "math_id": 44, "text": "A^f((a,b)) = A" }, { "math_id": 45, "text": "U = \\bigsqcup_i I_i" }, { "math_id": 46, "text": "A^f(U) = \\bigotimes_i A" } ]
https://en.wikipedia.org/wiki?curid=73098277
7309909
Nick Trefethen
American mathematician Lloyd Nicholas Trefethen (born 30 August 1955) is an American mathematician, professor of numerical analysis and until 2023 head of the Numerical Analysis Group at the Mathematical Institute, University of Oxford. Early life and education. Trefethen was born 30 August 1955 in Boston, Massachusetts, the son of mechanical engineer Lloyd M. Trefethen and codebreaker, poet, teacher and editor Florence Newman Trefethen. Trefethen attended Phillips Exeter Academy. He obtained his bachelor's degree from Harvard College in 1977 and his master's from Stanford University in 1980. His PhD thesis, "Wave Propagation and Stability for Finite Difference Schemes", was supervised by Joseph E. Oliger at Stanford University. Career and research. Following his PhD, Trefethen went on to work at the Courant Institute of Mathematical Sciences in New York, Massachusetts Institute of Technology, and Cornell University, before being appointed to a chair at the University of Oxford and a Fellowship of Balliol College, Oxford. His publications span a wide range of areas within numerical analysis and applied mathematics, including non-normal eigenvalue problems and applications, spectral methods for differential equations, numerical linear algebra, fluid mechanics, computational complex analysis, and approximation theory. He is perhaps best known for his work on pseudospectra of non-normal matrices and operators. This work covers theoretical aspects as well as numerical algorithms, and applications including fluid mechanics, numerical solution of partial differential equations, numerical linear algebra, shuffling of cards, random matrices, differential equations and lasers. Trefethen is currently an ISI highly cited researcher. Trefethen has written a number of books on numerical analysis including "Numerical Linear Algebra" with David Bau, "Spectral Methods in MATLAB", "Schwarz–Christoffel Mapping" with Tobin Driscoll, and "Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators" with Mark Embree. He is the leader of the MATLAB-based Chebfun software project. In 2013 he proposed a new formula to calculate the BMI of a person: formula_0 Awards and honours. Trefethen was the first winner of the Leslie Fox Prize for Numerical Analysis. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin. He is a fellow of the American Mathematical Society, and a member of the National Academy of Engineering in the United States. Trefethen was elected a Fellow of the Royal Society (FRS) in 2005, and his certificate of election reads: Nick Trefethen is distinguished for his many seminal contributions to Numerical Analysis and its applications in Applied Mathematics and in Engineering Science. His research spans theory, algorithms, software and physical applications, particularly involving eigenvalues, pseudospectra – a concept which he introduced – and dynamics. He has an international reputation for his work on nonnormal matrices and operators. He has also made major contributions to finite difference and spectral methods for partial differential equations, numerical linear algebra, and complex analysis. His monograph Numerical Linear Algebra (SIAM, 1997) is one of SIAM's best selling books and has already been through five printings.
In 2010 Trefethen was awarded the Gold Medal of the Institute of Mathematics and its Applications in recognition of his "outstanding contributions to mathematics and its applications over a period of years". In 2013 Trefethen was awarded the Naylor Prize and lectureship in Applied Mathematics from the London Mathematical Society. He was awarded the George Pólya Prize for Mathematical Exposition in 2017 and the John von Neumann Prize in 2020, both by SIAM. Personal life. Trefethen has one son and one daughter from his first marriage to Anne Elizabeth Trefethen (née Daman). He is currently married to Kate McLoughlin, a professor of English Literature at Oxford.
[ { "math_id": 0, "text": " \\text{BMI} = 1.3 \\times \\frac{\\text{weight}\\,\\,\\,\\,}{\\text{height}^{2.5}}" } ]
https://en.wikipedia.org/wiki?curid=7309909
73099857
Hyperproperty
In computer science, hyperproperties are a formalism for describing properties of computational systems. Hyperproperties generalize safety and liveness properties, and can express properties such as non-interference and observational determinism. Elaborating on the example of non-interference: non-interference cannot be represented as a "property" in the formal sense, because there is no inclusion test that can be applied to a single program trace; non-interference is an assertion about how neighboring traces relate to each other, so examining one trace at a time is not enough. "Hyperproperties" are the extension from properties as predicates on traces to properties as relations between traces. Definitions. Traces and systems. Hyperproperties are defined in terms of traces of a computational system. A trace is a sequence of states; a system is a set of traces. Intuitively, a program corresponds to the set of all of its possible execution traces, given any inputs. Formally, the set of traces over a set of states formula_0 is formula_1. This representation is expressive enough to encompass several computational models, including labeled transition systems and state machines. Hyperproperties. A trace property is a set of traces. Safety and liveness properties are trace properties. Formally, a trace property is an element of formula_2, where formula_3 is the powerset operator. A hyperproperty is a set of trace properties, that is, an element of formula_4. Trace properties may be divided into safety properties (intuitively, properties that ensure "bad things don't happen") and liveness properties ("good things do happen"), and every trace property is the intersection of a safety property and a liveness property. Analogously, hyperproperties may be divided into hypersafety and hyperliveness hyperproperties, and every hyperproperty is an intersection of a safety hyperproperty and a liveness hyperproperty. formula_5-safety properties are safety hyperproperties such that every violation of the property can be witnessed by a set of at most formula_5 traces. Properties. Since hyperproperties are exactly the elements of the power set formula_4, they are closed under intersection and union. The lower Vietoris topology of a standard topology on trace properties yields a topology on the set of hyperproperties. Applications. Several program logics have been developed for checking that a program conforms to a hyperproperty. HyperLTL and associated model-checking algorithms have been developed for checking that a finite-state system conforms to a hyperproperty.
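An added toy sketch (not from the article) of checking a 2-safety hyperproperty, observational determinism, over a finite set of finite traces; the trace representation and the extractor functions are illustrative assumptions.

```python
# Added toy sketch: checking a 2-safety hyperproperty (observational determinism) on a
# finite set of traces. The trace representation and extractor functions are assumptions
# made for the example, not part of the formalism above.
from itertools import combinations

def observational_determinism(traces, low_in, low_out):
    # True iff every pair of traces agreeing on its low (public) input also agrees on
    # its low output; a violation is always witnessed by a pair of traces (k = 2).
    ts = list(traces)
    return all(low_out(t1) == low_out(t2)
               for t1, t2 in combinations(ts, 2)
               if low_in(t1) == low_in(t2))

# Illustrative traces encoded as (low input, secret, low output).
good = [(0, 7, 0), (0, 9, 0), (1, 3, 1)]
leaky = [(0, 7, 7), (0, 9, 9)]            # the low output reveals the secret

print(observational_determinism(good, lambda t: t[0], lambda t: t[2]))   # True
print(observational_determinism(leaky, lambda t: t[0], lambda t: t[2]))  # False
```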
[ { "math_id": 0, "text": "\\Sigma" }, { "math_id": 1, "text": "\\Phi\\triangleq\\Sigma^\\omega" }, { "math_id": 2, "text": "\\mathbb{P}(\\Phi)" }, { "math_id": 3, "text": "\\mathbb{P}" }, { "math_id": 4, "text": "\\mathbb{P}(\\mathbb{P}(\\Phi))" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "k=1" }, { "math_id": 7, "text": "\\mathbf{false}\\triangleq \\{\\emptyset\\}" }, { "math_id": 8, "text": "\\mathbf{true}\\triangleq \\Phi" }, { "math_id": 9, "text": "k=2" }, { "math_id": 10, "text": "k=3" }, { "math_id": 11, "text": "k=4" } ]
https://en.wikipedia.org/wiki?curid=73099857
73101628
Union theorem
Computer science theorem The union theorem is a result from the 1960s in computational complexity theory. It was published in 1969 by Ed McCreight and Albert Meyer. Originally it was stated for general Blum complexity measures, but it is most relevant for DTIME, NTIME, DSPACE or NSPACE, as stated in chapter 12.6 of the first edition (1979) of the textbook by Hopcroft and Ullman. This chapter was removed from newer editions, however. The theorem for time complexity roughly states the following. Given a list of monotonically increasing time-bound functions formula_0 with formula_1 for formula_2, there exists a time-bound function formula_3 such that a problem is computable in time bounded by formula_3 if and only if there exists an formula_4 such that the problem is computable in time bounded by formula_5. The theorem can be applied to show that complexity classes like P are well-defined (see the example below). Together with the speedup theorem, the gap theorem and the time and space hierarchy theorems, it is a basis for hierarchies in complexity theory.
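As an added illustration of the application mentioned above (not part of the original article), taking the monotone family of polynomial time bounds shows that P collapses to a single deterministic time class:

```latex
% Added illustration: take the monotone family t_i(n) = n^i. The union theorem then
% yields a single time bound t with
\mathsf{P} \;=\; \bigcup_{i \ge 1} \mathrm{DTIME}\!\left(n^{i}\right) \;=\; \mathrm{DTIME}\!\left(t(n)\right),
% so P itself is a deterministic time class and, in particular, well-defined.
```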
[ { "math_id": 0, "text": "t_1,t_2,\\dots" }, { "math_id": 1, "text": "t_{i+1} \\ge t_i" }, { "math_id": 2, "text": "i \\in \\mathbb{N}_{>0}" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "t_i" } ]
https://en.wikipedia.org/wiki?curid=73101628
73102
Residue (complex analysis)
Attribute of a mathematical function In mathematics, more specifically complex analysis, the residue is a complex number proportional to the contour integral of a meromorphic function along a path enclosing one of its singularities. (More generally, residues can be calculated for any function formula_0 that is holomorphic except at the discrete points {"a""k"}"k", even if some of them are essential singularities.) Residues can be computed quite easily and, once known, allow the determination of general contour integrals via the residue theorem. Definition. The residue of a meromorphic function formula_1 at an isolated singularity formula_2, often denoted formula_3, formula_4, formula_5 or formula_6, is the unique value formula_7 such that formula_8 has an analytic antiderivative in a punctured disk formula_9. Alternatively, residues can be calculated by finding Laurent series expansions, and one can define the residue as the coefficient "a"−1 of a Laurent series. The concept can be used to provide contour integration values of certain contour integral problems considered in the residue theorem. According to the residue theorem, for a meromorphic function formula_1, the residue at point formula_10 is given as: formula_11 where formula_12 is a positively oriented simple closed curve around formula_10 and not including any other singularities on or inside the curve. The definition of a residue can be generalized to arbitrary Riemann surfaces. Suppose formula_13 is a 1-form on a Riemann surface. Let formula_13 be meromorphic at some point formula_14, so that we may write formula_13 in local coordinates as formula_15. Then, the residue of formula_13 at formula_14 is defined to be the residue of formula_16 at the point corresponding to formula_14. Contour integration. Contour integral of a monomial. Computing the residue of a monomial formula_17 makes most residue computations easy to do. Since path integral computations are homotopy invariant, we will let formula_18 be the circle with radius formula_19 going counter clockwise. Then, using the change of coordinates formula_20 we find that formula_21 hence our integral now reads as formula_22 Thus, the residue of formula_23 is 1 if integer formula_24 and 0 otherwise. Generalization to Laurent series. If a function is expressed as a Laurent series expansion around c as follows:formula_25Then, the residue at the point c is calculated as:formula_26using the results from contour integral of a monomial for counter clockwise contour integral formula_12 around a point c. Hence, if a Laurent series representation of a function exists around c, then its residue around c is known by the coefficient of the formula_27 term. Application in residue theorem. For a meromorphic function formula_1, with a finite set of singularities within a positively oriented simple closed curve formula_18 which does not pass through any singularity, the value of the contour integral is given according to residue theorem, as:formula_28where formula_29, the winding number, is formula_19 if formula_10 is in the interior of formula_18 and formula_30 if not, simplifying to:formula_31where formula_10 are all isolated singularities within the contour formula_18. Calculation of residues. Suppose a punctured disk "D" = {"z" : 0 &lt; |"z" − "c"| &lt; "R"} in the complex plane is given and "f" is a holomorphic function defined (at least) on "D". The residue Res("f", "c") of "f" at "c" is the coefficient "a"−1 of ("z" − "c")−1 in the Laurent series expansion of "f" around "c". 
Various methods exist for calculating this value, and the choice of which method to use depends on the function in question, and on the nature of the singularity. According to the residue theorem, we have: formula_32 where "γ" traces out a circle around "c" in a counterclockwise manner and does not pass through or contain other singularities within it. We may choose the path "γ" to be a circle of radius "ε" around "c." Since "ε" can be as small as we desire it can be made to contain only the singularity of c due to nature of isolated singularities. This may be used for calculation in cases where the integral can be calculated directly, but it is usually the case that residues are used to simplify calculation of integrals, and not the other way around. Removable singularities. If the function "f" can be continued to a holomorphic function on the whole disk formula_33, then Res("f", "c") = 0. The converse is not generally true. Simple poles. If "c" is a simple pole of "f", the residue of "f" is given by: formula_34 If that limit does not exist, then "f" instead has an essential singularity at "c". If the limit is 0, then "f" is either analytic at "c" or has a removable singularity there. If the limit is equal to infinity, then the order of the pole is higher than 1. It may be that the function "f" can be expressed as a quotient of two functions, formula_35, where "g" and "h" are holomorphic functions in a neighbourhood of "c", with "h"("c") = 0 and "h"'("c") ≠ 0. In such a case, L'Hôpital's rule can be used to simplify the above formula to: formula_36 Limit formula for higher-order poles. More generally, if "c" is a pole of order "n", then the residue of "f" around "z" = "c" can be found by the formula: formula_37 This formula can be very useful in determining the residues for low-order poles. For higher-order poles, the calculations can become unmanageable, and series expansion is usually easier. For essential singularities, no such simple formula exists, and residues must usually be taken directly from series expansions. Residue at infinity. In general, the residue at infinity is defined as: formula_38 If the following condition is met: formula_39 then the residue at infinity can be computed using the following formula: formula_40 If instead formula_41 then the residue at infinity is formula_42 For functions meromorphic on the entire complex plane with finitely many singularities, the sum of the residues at the (necessarily) isolated singularities plus the residue at infinity is zero, which gives: formula_43 Series methods. If parts or all of a function can be expanded into a Taylor series or Laurent series, which may be possible if the parts or the whole of the function has a standard series expansion, then calculating the residue is significantly simpler than by other methods. The residue of the function is simply given by the coefficient of formula_27 in the Laurent series expansion of the function. Examples. Residue from series expansion. Example 1. As an example, consider the contour integral formula_44 where "C" is some simple closed curve about 0. Let us evaluate this integral using a standard convergence result about integration by series. We can substitute the Taylor series for formula_45 into the integrand. The integral then becomes formula_46 Let us bring the 1/"z"5 factor into the series. The contour integral of the series then writes formula_47 Since the series converges uniformly on the support of the integration path, we are allowed to exchange integration and summation. 
The series of the path integrals then collapses to a much simpler form because of the previous computation. So now the integral around "C" of every other term not in the form "cz"−1 is zero, and the integral is reduced to formula_48 The value 1/4! is the "residue" of "e""z"/"z"5 at "z" = 0, and is denoted formula_49 Example 2. As a second example, consider calculating the residues at the singularities of the functionformula_50which may be used to calculate certain contour integrals. This function appears to have a singularity at "z" = 0, but if one factorizes the denominator and thus writes the function asformula_51it is apparent that the singularity at "z" = 0 is a removable singularity and then the residue at "z" = 0 is therefore 0. The only other singularity is at "z" = 1. Recall the expression for the Taylor series for a function "g"("z") about "z" = "a":formula_52So, for "g"("z") = sin "z" and "a" = 1 we haveformula_53and for "g"("z") = 1/"z" and "a" = 1 we haveformula_54Multiplying those two series and introducing 1/("z" − 1) gives usformula_55So the residue of "f"("z") at "z" = 1 is sin 1. Example 3. The next example shows that, computing a residue by series expansion, a major role is played by the Lagrange inversion theorem. Letformula_56be an entire function, and letformula_57with positive radius of convergence, and with formula_58. So formula_59 has a local inverse formula_60 at 0, and formula_61 is meromorphic at 0. Then we have:formula_62Indeed,formula_63because the first series converges uniformly on any small circle around 0. Using the Lagrange inversion theoremformula_64and we get the above expression. For example, if formula_65 and also formula_66, thenformula_67andformula_68The first term contributes 1 to the residue, and the second term contributes 2 since it is asymptotic to formula_69. Note that, with the corresponding stronger symmetric assumptions on formula_70 and formula_59, it also followsformula_71where formula_72 is a local inverse of formula_70 at 0.
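The worked examples above can also be verified with a computer algebra system. A brief SymPy sketch (my own illustration, not from the article):

import sympy as sp

z = sp.symbols('z')

# Example 1: the coefficient of 1/z in the Laurent expansion of exp(z)/z**5
print(sp.residue(sp.exp(z) / z**5, z, 0))   # 1/24

# Example 2: sin(z)/(z**2 - z) has a removable singularity at 0 and a
# simple pole at 1 with residue sin(1)
f = sp.sin(z) / (z**2 - z)
print(sp.residue(f, z, 0))                  # 0
print(sp.residue(f, z, 1))                  # sin(1)

# The same residue at 1 via the simple-pole limit formula
print(sp.limit((z - 1) * f, z, 1))          # sin(1)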
[ { "math_id": 0, "text": " f\\colon \\mathbb{C} \\smallsetminus \\{a_k\\}_k \\rightarrow \\mathbb{C}" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "\\operatorname{Res}(f,a)" }, { "math_id": 4, "text": "\\operatorname{Res}_a(f)" }, { "math_id": 5, "text": "\\mathop{\\operatorname{Res}}_{z=a}f(z)" }, { "math_id": 6, "text": "\\mathop{\\operatorname{res}}_{z=a}f(z)" }, { "math_id": 7, "text": "R" }, { "math_id": 8, "text": "f(z)- R/(z-a)" }, { "math_id": 9, "text": "0<\\vert z-a\\vert<\\delta" }, { "math_id": 10, "text": "a_k" }, { "math_id": 11, "text": "\\operatorname{Res}(f,a_k) = {1 \\over 2\\pi i} \\oint_\\gamma f(z)\\,dz \\, ." }, { "math_id": 12, "text": "\\gamma" }, { "math_id": 13, "text": "\\omega" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "f(z) \\; dz" }, { "math_id": 16, "text": "f(z)" }, { "math_id": 17, "text": "\\oint_C z^k \\, dz" }, { "math_id": 18, "text": "C" }, { "math_id": 19, "text": "1" }, { "math_id": 20, "text": "z \\to e^{i\\theta}" }, { "math_id": 21, "text": "dz \\to d(e^{i\\theta}) = ie^{i\\theta} \\, d\\theta" }, { "math_id": 22, "text": "\n\\oint_C z^k dz = \\int_0^{2\\pi} i e^{i(k+1)\\theta} \\, d\\theta\n= \\begin{cases}\n2\\pi i & \\text{if } k = -1, \\\\\n0 & \\text{otherwise}.\n\\end{cases}\n" }, { "math_id": 23, "text": "z^k" }, { "math_id": 24, "text": "k=-1" }, { "math_id": 25, "text": "f(z) = \\sum_{n=-\\infty}^\\infty a_n(z-c)^n." }, { "math_id": 26, "text": "\\operatorname{Res}(f,c) = {1 \\over 2\\pi i} \\oint_\\gamma f(z)\\,dz = {1 \\over 2\\pi i} \\sum_{n=-\\infty}^\\infty \\oint_\\gamma a_n(z-c)^n \\,dz = a_{-1} " }, { "math_id": 27, "text": "(z-c)^{-1}" }, { "math_id": 28, "text": "\n\\oint_C f(z)\\, dz = 2\\pi i \\sum_{k=1}^n \\operatorname{I}(C, a_k) \\operatorname{Res}(f, a_k).\n" }, { "math_id": 29, "text": "\\operatorname{I}(C, a_k)" }, { "math_id": 30, "text": "0" }, { "math_id": 31, "text": "\n\\oint_\\gamma f(z)\\, dz = 2\\pi i \\sum \\operatorname{Res}(f, a_k)\n" }, { "math_id": 32, "text": "\\operatorname{Res}(f,c) = {1 \\over 2\\pi i} \\oint_\\gamma f(z)\\,dz" }, { "math_id": 33, "text": "|y-c|<R" }, { "math_id": 34, "text": "\\operatorname{Res}(f,c)=\\lim_{z\\to c}(z-c)f(z)." }, { "math_id": 35, "text": "f(z)=\\frac{g(z)}{h(z)}" }, { "math_id": 36, "text": "\n\\begin{align}\n\\operatorname{Res}(f,c) & =\\lim_{z\\to c}(z-c)f(z) = \\lim_{z\\to c}\\frac{z g(z) - cg(z)}{h(z)} \\\\[4pt]\n& = \\lim_{z\\to c}\\frac{g(z) + z g'(z) - cg'(z)}{h'(z)} = \\frac{g(c)}{h'(c)}.\n\\end{align}\n" }, { "math_id": 37, "text": " \\operatorname{Res}(f,c) = \\frac{1}{(n-1)!} \\lim_{z \\to c} \\frac{d^{n-1}}{dz^{n-1}} \\left( (z-c)^n f(z) \\right). " }, { "math_id": 38, "text": " \\operatorname{Res}(f(z), \\infty) = -\\operatorname{Res}\\left(\\frac{1}{z^2} f\\left(\\frac 1 z \\right), 0\\right)." }, { "math_id": 39, "text": " \\lim_{|z| \\to \\infty} f(z) = 0," }, { "math_id": 40, "text": " \\operatorname{Res}(f, \\infty) = -\\lim_{|z| \\to \\infty} z \\cdot f(z)." }, { "math_id": 41, "text": " \\lim_{|z| \\to \\infty} f(z) = c \\neq 0," }, { "math_id": 42, "text": " \\operatorname{Res}(f, \\infty) = \\lim_{|z| \\to \\infty} z^2 \\cdot f'(z)." }, { "math_id": 43, "text": " \\operatorname{Res}(f(z), \\infty) = -\\sum_k \\operatorname{Res} (f(z), a_k)." 
}, { "math_id": 44, "text": "\\oint_C {e^z \\over z^5}\\,dz" }, { "math_id": 45, "text": "e^z" }, { "math_id": 46, "text": "\\oint_C {1 \\over z^5}\\left(1+z+{z^2 \\over 2!} + {z^3\\over 3!} + {z^4 \\over 4!} + {z^5 \\over 5!} + {z^6 \\over 6!} + \\cdots\\right)\\,dz." }, { "math_id": 47, "text": "\n\\begin{align}\n& \\oint_C \\left({1 \\over z^5}+{z \\over z^5}+{z^2 \\over 2!\\;z^5} + {z^3\\over 3!\\;z^5} + {z^4 \\over 4!\\;z^5} + {z^5 \\over 5!\\;z^5} + {z^6 \\over 6!\\;z^5} + \\cdots\\right)\\,dz \\\\[4pt]\n= {} & \\oint_C \\left({1 \\over\\;z^5}+{1 \\over\\;z^4}+{1 \\over 2!\\;z^3} + {1\\over 3!\\;z^2} + {1 \\over 4!\\;z} + {1\\over\\;5!} + {z \\over 6!} + \\cdots\\right)\\,dz.\n\\end{align}\n" }, { "math_id": 48, "text": "\\oint_C {1 \\over 4!\\;z} \\,dz= {1 \\over 4!} \\oint_C{1 \\over z}\\,dz={1 \\over 4!}(2\\pi i) = {\\pi i \\over 12}." }, { "math_id": 49, "text": "\\operatorname{Res}_0 {e^z \\over z^5}, \\text{ or } \\operatorname{Res}_{z=0} {e^z \\over z^5}, \\text{ or } \\operatorname{Res}(f,0) \\text{ for } f={e^z \\over z^5}." }, { "math_id": 50, "text": "f(z) = {\\sin z \\over z^2-z}" }, { "math_id": 51, "text": "f(z) = {\\sin z \\over z(z - 1)}" }, { "math_id": 52, "text": " g(z) = g(a) + g'(a)(z-a) + {g''(a)(z-a)^2 \\over 2!} + {g'''(a)(z-a)^3 \\over 3!}+ \\cdots" }, { "math_id": 53, "text": " \\sin z = \\sin 1 + (\\cos 1)(z-1)+{-(\\sin 1)(z-1)^2 \\over 2!} + {-(\\cos 1)(z-1)^3 \\over 3!} + \\cdots." }, { "math_id": 54, "text": " \\frac{1}{z} = \\frac1 {(z - 1) + 1} = 1 - (z - 1) + (z - 1)^2 - (z - 1)^3 + \\cdots." }, { "math_id": 55, "text": " \\frac{\\sin z} {z(z - 1)} = {\\sin 1 \\over z-1} + (\\cos 1 - \\sin 1) + (z-1) \\left(-\\frac{\\sin 1}{2!} - \\cos1 + \\sin 1\\right) + \\cdots." }, { "math_id": 56, "text": " u(z) := \\sum_{k\\geq 1}u_k z^k" }, { "math_id": 57, "text": "v(z) := \\sum_{k\\geq 1}v_k z^k" }, { "math_id": 58, "text": " v_1 \\neq 0" }, { "math_id": 59, "text": " v(z)" }, { "math_id": 60, "text": " V(z)" }, { "math_id": 61, "text": " u(1/V(z))" }, { "math_id": 62, "text": "\\operatorname{Res}_0 \\big(u(1/V(z))\\big) = \\sum_{k=0}^\\infty ku_k v_k. " }, { "math_id": 63, "text": "\\operatorname{Res}_0\\big(u(1/V(z))\\big) = \\operatorname{Res}_0 \\left(\\sum_{k\\geq 1} u_k V(z)^{-k}\\right) = \\sum_{k\\geq 1} u_k \\operatorname{Res}_0 \\big(V(z)^{-k}\\big)" }, { "math_id": 64, "text": "\\operatorname{Res}_0 \\big(V(z)^{-k}\\big) = kv_k," }, { "math_id": 65, "text": "u(z) = z + z^2" }, { "math_id": 66, "text": "v(z) = z + z^2" }, { "math_id": 67, "text": "V(z) = \\frac{2z}{1 + \\sqrt{1 + 4z}}" }, { "math_id": 68, "text": "u(1/V(z)) = \\frac{1 + \\sqrt{1 + 4z}}{2z} + \\frac{1 + 2z + \\sqrt{1 + 4z}}{2z^2}." }, { "math_id": 69, "text": "1/z^2 + 2/z" }, { "math_id": 70, "text": " u(z)" }, { "math_id": 71, "text": "\\operatorname{Res}_0 \\left(u(1/V)\\right) = \\operatorname{Res}_0\\left(v(1/U)\\right)," }, { "math_id": 72, "text": " U(z)" } ]
https://en.wikipedia.org/wiki?curid=73102
7310614
Audio bit depth
Number of bits of information recorded for each digital audio sample In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc, which can support up to 24 bits per sample. In basic implementations, variations in bit depth primarily affect the noise level from quantization error—thus the signal-to-noise ratio (SNR) and dynamic range. However, techniques such as dithering, noise shaping, and oversampling can mitigate these effects without changing the bit depth. Bit depth also affects bit rate and file size. Bit depth is useful for describing PCM digital signals. Non-PCM formats, such as those using lossy compression, do not have associated bit depths. Binary representation. A PCM signal is a sequence of digital audio samples containing the data providing the necessary information to reconstruct the original analog signal. Each sample represents the amplitude of the signal at a specific point in time, and the samples are uniformly spaced in time. The amplitude is the only information explicitly stored in the sample, and it is typically stored as either an integer or a floating-point number, encoded as a binary number with a fixed number of digits – the sample's "bit depth", also referred to as word length or word size. The resolution indicates the number of discrete values that can be represented over the range of analog values. The resolution of binary integers increases exponentially as the word length increases: adding one bit doubles the resolution, adding two quadruples it, and so on. The number of possible values that an integer bit depth can represent can be calculated by using 2"n", where "n" is the bit depth. Thus, a 16-bit system has a resolution of 65,536 (216) possible values. Integer PCM audio data is typically stored as signed numbers in two's complement format. Today, most audio file formats and digital audio workstations (DAWs) support PCM formats with samples represented by floating-point numbers. Both the WAV file format and the AIFF file format support floating-point representations. Unlike integers, whose bit pattern is a single series of bits, a floating-point number is instead composed of separate fields whose mathematical relation forms a number. The most common standard is IEEE 754, which is composed of three fields: a sign bit representing whether the number is positive or negative, a mantissa, and an exponent determining a power-of-two factor to scale the mantissa. The mantissa is expressed as a binary fraction in IEEE base-two floating-point formats. Quantization. The bit depth limits the signal-to-noise ratio (SNR) of the reconstructed signal to a maximum level determined by quantization error. The bit depth has no impact on the frequency response, which is constrained by the sample rate. Quantization error introduced during analog-to-digital conversion (ADC) can be modeled as quantization noise. It is a rounding error between the analog input voltage to the ADC and the output digitized value. The noise is nonlinear and signal-dependent. 
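To make the effect of quantization error concrete before the formulas that follow, here is a small illustrative sketch (not from the article; the 997 Hz test tone and the 0.99 amplitude are arbitrary choices). It quantizes a near-full-scale sine to different bit depths and measures the resulting SNR directly.

import numpy as np

def quantize(x, bits):
    """Uniform quantizer for signals in the range [-1.0, 1.0)."""
    step = 2.0 / (2 ** bits)                 # quantization step size
    return np.round(x / step) * step

fs = 48_000
t = np.arange(fs) / fs
signal = 0.99 * np.sin(2 * np.pi * 997 * t)  # near-full-scale sine

for bits in (8, 16, 24):
    err = quantize(signal, bits) - signal    # quantization error
    snr = 10 * np.log10(np.mean(signal**2) / np.mean(err**2))
    print(f"{bits}-bit: measured SNR ~ {snr:.1f} dB")

The measured values come out close to the theoretical figures derived next (roughly 50, 98 and 146 dB).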
In an ideal ADC, where the quantization error is uniformly distributed between formula_0 least significant bit (LSB) and where the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio (SQNR) can be calculated from formula_1 where "b" is the number of quantization bits, and the result is measured in decibels (dB). Therefore, 16-bit digital audio found on CDs has a theoretical maximum SNR of 98 dB, and professional 24-bit digital audio tops out at 146 dB. In practice, digital audio converter technology is limited to an SNR of about 123 dB (effectively 21 bits) because of real-world limitations in integrated circuit design. Still, this approximately matches the performance of the human auditory system. Multiple converters can be used to cover different ranges of the same signal, and their outputs can be combined to record a wider dynamic range in the long term, while still being limited by a single converter's dynamic range in the short term; this is called "dynamic range extension". Floating point. The resolution of floating-point samples is less straightforward than that of integer samples because floating-point values are not evenly spaced. In floating-point representation, the space between any two adjacent values is in proportion to the value. The trade-off between floating-point and integer formats is that the space between large floating-point values is greater than the space between large integer values of the same bit depth. Rounding a large floating-point number results in a greater error than rounding a small floating-point number, whereas rounding an integer number will always result in the same level of error. In other words, integers have a uniform round-off error, always rounding the LSB to 0 or 1, whereas the floating-point format has a uniform SNR: the quantization noise level is always a certain proportion of the signal level. A floating-point noise floor rises as the signal rises and falls as the signal falls, resulting in audible variance if the bit depth is low enough. Audio processing. Most processing operations on digital audio involve the re-quantization of samples and thus introduce additional rounding errors analogous to the original quantization error introduced during analog-to-digital conversion. To prevent rounding errors larger than the implicit error during ADC, calculations during processing must be performed at a higher precision than the input samples. Digital signal processing (DSP) operations can be performed in either fixed-point or floating-point precision. In either case, the precision of each operation is determined by the precision of the hardware operations used to perform each step of the processing and not by the resolution of the input data. For example, on x86 processors, floating-point operations are performed with single or double precision, and fixed-point operations at 16-, 32- or 64-bit resolution. Consequently, all processing performed on Intel-based hardware will be performed with these constraints regardless of the source format. Fixed-point digital signal processors often support specific word lengths to support specific signal resolutions. For example, the Motorola 56000 DSP chip uses 24-bit multipliers and 56-bit accumulators to perform multiply-accumulate operations on two 24-bit samples without overflow or truncation. On devices that do not support large accumulators, fixed-point results may be truncated, reducing precision.
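The word lengths quoted for the Motorola 56000 can be motivated with a little arithmetic. The sketch below is my own back-of-the-envelope illustration; its only assumption is the worst-case bound of twice the sample width per full-precision product.

from math import ceil, log2

def mac_bits(sample_bits, n_terms):
    """Bits needed to accumulate n_terms products of two sample_bits-wide
    fixed-point values without overflow (worst case)."""
    product_bits = 2 * sample_bits              # full-precision product
    return product_bits + ceil(log2(n_terms))   # growth from summation

# With 24-bit samples a single product needs 48 bits; a 56-bit accumulator
# therefore leaves 8 guard bits, enough for 2**8 = 256 worst-case additions.
print(mac_bits(24, 1))      # 48
print(mac_bits(24, 256))    # 56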
Errors compound through multiple stages of DSP at a rate that depends on the operations being performed. For uncorrelated processing steps on audio data without a DC offset, errors are assumed to be random with zero means. Under this assumption, the standard deviation of the distribution represents the error signal, and quantization error scales with the square root of the number of operations. High levels of precision are necessary for algorithms that involve repeated processing, such as convolution. High levels of precision are also necessary in recursive algorithms, such as infinite impulse response (IIR) filters. In the particular case of IIR filters, rounding error can degrade frequency response and cause instability. Dither. The noise introduced by quantization error, including rounding errors and loss of precision introduced during audio processing, can be mitigated by adding a small amount of random noise, called dither, to the signal before quantizing. Dithering eliminates non-linear quantization error behavior, giving very low distortion, but at the expense of a slightly raised noise floor. Recommended dither for 16-bit digital audio measured using ITU-R 468 noise weighting is about 66 dB below alignment level, or 84 dB below digital full scale, which is comparable to the microphone and room noise level, and hence of little consequence in 16-bit audio. 24-bit and 32-bit audio does not require dithering, as the noise level of the digital converter is always louder than the required level of any dither that might be applied. 24-bit audio could theoretically encode 144 dB of dynamic range, and 32-bit audio can achieve 192 dB, but this is almost impossible to achieve in the real world, as even the best sensors and microphones rarely exceed 130 dB. Dither can also be used to increase the effective dynamic range. The "perceived" dynamic range of 16-bit audio can be 120 dB or more with noise-shaped dither, taking advantage of the frequency response of the human ear. Dynamic range and headroom. Dynamic range is the difference between the largest and smallest signal a system can record or reproduce. Without dither, the dynamic range correlates to the quantization noise floor. For example, 16-bit integer resolution allows for a dynamic range of about 96 dB. With the proper application of dither, digital systems can reproduce signals with levels lower than their resolution would normally allow, extending the effective dynamic range beyond the limit imposed by the resolution. The use of techniques such as oversampling and noise shaping can further extend the dynamic range of sampled audio by moving quantization error out of the frequency band of interest. If the signal's maximum level is lower than that allowed by the bit depth, the recording has headroom. Using higher bit depths during studio recording can make headroom available while maintaining the same dynamic range. This reduces the risk of clipping without increasing quantization errors at low volumes. Oversampling. Oversampling is an alternative method to increase the dynamic range of PCM audio without changing the number of bits per sample. In oversampling, audio samples are acquired at a multiple of the desired sample rate. Because quantization error is assumed to be uniformly distributed with frequency, much of the quantization error is shifted to ultrasonic frequencies and can be removed by the digital-to-analog converter during playback. 
For an increase equivalent to "n" additional bits of resolution, a signal must be oversampled by formula_2 For example, a 14-bit ADC can produce 16-bit 48 kHz audio if operated at 16× oversampling, or 768 kHz. Oversampled PCM, therefore, exchanges fewer bits per sample for more samples to obtain the same resolution. Dynamic range can also be enhanced with oversampling at signal reconstruction, absent oversampling at the source. Consider 16× oversampling at reconstruction. Each sample at reconstruction would be unique in that for each of the original sample points sixteen are inserted, all having been calculated by a digital reconstruction filter. The mechanism of increased effective bit depth is as previously discussed, that is, quantization noise power has not been reduced, but the noise spectrum has been spread over 16× the audio bandwidth. Historical note—The compact disc standard was developed by a collaboration between Sony and Philips. The first Sony consumer unit featured a 16-bit DAC; the first Philips units had dual 14-bit DACs. This confused the marketplace and even in professional circles, because 14-bit PCM allows for 84 dB SNR, 12 dB less than 16-bit PCM. Philips had implemented 4× oversampling with first order noise shaping which theoretically realized the full 96 dB dynamic range of the CD format. In practice the Philips CD100 was rated at 90 dB SNR in the audio band of 20 Hz–20 kHz, the same as Sony's CDP-101. Noise shaping. Oversampling a signal results in equal quantization noise per unit of bandwidth at all frequencies and a dynamic range that improves with only the square root of the oversampling ratio. Noise shaping is a technique that adds additional noise at higher frequencies which cancels out some error at lower frequencies, resulting in a larger increase in dynamic range when oversampling. For "n"th-order noise shaping, the dynamic range of an oversampled signal is improved by an additional 6"n" dB relative to oversampling without noise shaping. For example, for a 20 kHz analog audio sampled at 4× oversampling with second-order noise shaping, the dynamic range is increased by 30 dB. Therefore, a 16-bit signal sampled at 176 kHz would have a bit depth equal to a 21-bit signal sampled at 44.1 kHz without noise shaping. Noise shaping is commonly implemented with delta-sigma modulation. Using delta-sigma modulation, Direct Stream Digital achieves a theoretical 120 dB SNR at audio frequencies using 1-bit audio with 64× oversampling. Applications. Bit depth is a fundamental property of digital audio implementations. Depending on application requirements and equipment capabilities, different bit depths are used for different applications. &lt;templatestyles src="Reflist/styles.css" /&gt; Bit rate and file size. Bit depth affects bit rate and file size. Bits are the basic unit of data used in computing and digital communications. Bit rate refers to the amount of data, specifically bits, transmitted or received per second. In MP3 and other lossy compressed audio formats, bit rate describes the amount of information used to encode an audio signal. It is usually measured in kb/s. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
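As a worked illustration of the oversampling relationship given above, here is a small sketch (my own addition; it covers plain oversampling only, not the noise-shaped case):

from math import log2, log10

def oversampling_factor(extra_bits):
    """Oversampling needed for extra_bits of added resolution (no noise shaping)."""
    return 2 ** (2 * extra_bits)

def extra_bits(oversampling):
    """Added resolution, in bits, from plain oversampling."""
    return 0.5 * log2(oversampling)

print(oversampling_factor(2))   # 16, matching the 14-bit -> 16-bit example
print(extra_bits(64))           # 3.0 bits from 64x oversampling alone
print(10 * log10(64))           # ~18 dB, i.e. about 3 dB per doubling of the rate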
[ { "math_id": 0, "text": "\\scriptstyle{\\pm \\frac{1}{2}}" }, { "math_id": 1, "text": "\\text{SQNR} = 20 \\log_{10}(\\sqrt{1.5} \\cdot 2^b) \\approx (1.76 + 6.02\\,b)\\ \\text{dB}," }, { "math_id": 2, "text": " \\mathrm{number\\ of\\ samples} = (2^n)^2 = 2^{2n}." } ]
https://en.wikipedia.org/wiki?curid=7310614
73110666
Émery topology
Topology on the space of semimartingales In martingale theory, the Émery topology is a topology on the space of semimartingales. The topology is used in financial mathematics. In this topology, the class of stochastic integrals with general predictable integrands coincides with the closure of the set of all simple integrals. The topology was introduced in 1979 by the French mathematician Michel Émery. Definition. Let formula_0 be a filtered probability space, where the filtration satisfies the usual conditions, and fix a finite time horizon formula_1. Let formula_2 be the space of real semimartingales and formula_3 the space of simple predictable processes formula_4 with formula_5. We define the quasinorm formula_6 Then formula_7 with the metric formula_8 is a complete metric space, and the induced topology is called the Émery topology.
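The quasinorm above involves a supremum over all simple predictable integrands bounded by one, which cannot be evaluated exactly in general. The following Monte Carlo sketch is my own illustration: it evaluates only a finite family of deterministic ±1 step integrands on a discretized path, so it yields a crude lower bound rather than the true quasinorm, but it shows how the quantity behaves for two toy semimartingales.

import numpy as np

rng = np.random.default_rng(0)

def emery_quasinorm_lower_bound(increments, n_integrands=200):
    """Lower bound for sup_H E[1 ^ sup_t |(H.X)_t|], maximised over a finite
    random family of deterministic integrands with |H| = 1.
    `increments` has shape (n_paths, n_steps): increments of X on a time grid."""
    best = 0.0
    for _ in range(n_integrands):
        h = rng.choice([-1.0, 1.0], size=increments.shape[1])  # simple, predictable
        partial = np.cumsum(h * increments, axis=1)            # (H . X) on the grid
        value = np.mean(np.minimum(1.0, np.max(np.abs(partial), axis=1)))
        best = max(best, value)
    return best

# Two toy semimartingales on [0, 1]: a Brownian motion X and Y = 0.5 * X.
n_paths, n_steps = 2000, 200
dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
print(emery_quasinorm_lower_bound(dW - 0.5 * dW))   # estimate of d(X, Y) from below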
[ { "math_id": 0, "text": "(\\Omega,\\mathcal{A},\\{\\mathcal{F_t}\\},P)" }, { "math_id": 1, "text": "T\\in (0,\\infty)" }, { "math_id": 2, "text": "\\mathcal{S}(P)" }, { "math_id": 3, "text": "\\mathcal{E}(1)" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "|H|=1" }, { "math_id": 6, "text": "\\|X\\|_{\\mathcal{S}(P)}:=\\sup\\limits_{H\\in \\mathcal{E}(1)}\\mathbb{E}\\left[1\\wedge \\left(\\sup\\limits_{t\\in[0,T]}|(H\\cdot X)_t|\\right)\\right]." }, { "math_id": 7, "text": "(\\mathcal{S}(P),d)" }, { "math_id": 8, "text": "d(X,Y):=\\|X-Y\\|_{\\mathcal{S}(P)}" } ]
https://en.wikipedia.org/wiki?curid=73110666
73111004
Baik–Deift–Johansson theorem
The Baik–Deift–Johansson theorem is a result from probabilistic combinatorics. It concerns the increasing subsequences of a permutation drawn uniformly at random from the set formula_0. The theorem describes the limiting distribution of the length of the longest increasing subsequence. It was influential in probability theory because it connected KPZ universality with the theory of random matrices. The theorem was proven in 1999 by Jinho Baik, Percy Deift and Kurt Johansson. Statement. For each formula_1 let formula_2 be a uniformly chosen permutation of length formula_3. Let formula_4 be the length of the longest increasing subsequence of formula_2. Then, for every formula_5, formula_6 where formula_7 is the Tracy–Widom distribution of the Gaussian unitary ensemble.
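The statement can be illustrated by simulation. The sketch below is my own addition (convergence in n is slow, so the numbers are only indicative): it samples random permutations, computes the longest increasing subsequence by patience sorting, and forms the rescaled statistic from the theorem.

import random
from bisect import bisect_left

def lis_length(perm):
    """Length of the longest increasing subsequence (patience sorting, O(n log n))."""
    piles = []
    for x in perm:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

def rescaled_lis(n, trials=2000, seed=0):
    """Sample the statistic (l(pi) - 2 sqrt(n)) / n**(1/6) for uniform permutations."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        samples.append((lis_length(perm) - 2 * n**0.5) / n**(1 / 6))
    return samples

s = rescaled_lis(2000)
print(sum(s) / len(s))   # drifts toward the Tracy-Widom GUE mean (about -1.77) as n grows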
[ { "math_id": 0, "text": "\\{1,2,\\dots,N\\}" }, { "math_id": 1, "text": "N \\geq 1" }, { "math_id": 2, "text": "\\pi_N" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "l(\\pi_N)" }, { "math_id": 5, "text": "x \\in \\mathbb{R}" }, { "math_id": 6, "text": "\\mathbb{P}\\left(\\frac{l(\\pi_N)-2\\sqrt{N}}{N^{1/6}}\\leq x \\right)\\to F_2(x),\\quad N \\to \\infty" }, { "math_id": 7, "text": "F_2(x)" } ]
https://en.wikipedia.org/wiki?curid=73111004
73115476
Utility representation theorem
In economics, a utility representation theorem asserts that, under certain conditions, a preference ordering can be represented by a real-valued utility function, such that option A is preferred to option B if and only if the utility of A is larger than that of B. Background. Suppose a person is asked questions of the form "Do you prefer A or B?" (where A and B can be options, actions to take, states of the world, consumption bundles, etc.). If the agent prefers A to B, we write formula_0. The set of all such preference-pairs forms the person's "preference relation." Instead of recording the person's preferences between every pair of options, it would be much more convenient to have a single "utility function" - a function "u" that assigns a real number to each option, such that formula_1 if and only if formula_0. Not every preference-relation has a utility-function representation. For example, if the relation is not transitive (the agent prefers A to B, B to C, and C to A), then it has no utility representation, since any such utility function would have to satisfy formula_2, which is impossible. A utility representation theorem gives conditions on a preference relation that are sufficient for the existence of a utility representation. Often, one would like the representing function "u" to satisfy additional conditions, such as continuity. This requires additional conditions on the preference relation. Definitions. The set of options is a topological space denoted by "X". In some cases we assume that "X" is also a metric space; in particular, "X" can be a subset of a Euclidean space "Rm", such that each coordinate in {1, ..., m} represents a commodity, and each "m"-vector in "X" represents a possible consumption bundle. Preference relations. A "preference relation" is a subset of formula_3. It is denoted by either formula_4 or formula_5: Given a weak preference relation formula_5, one can define its "strict part" formula_4 and "indifference part" formula_10 as follows: Given a strict preference relation formula_4, one can define its "weak part" formula_5 and "indifference part" formula_10 as follows: For every option formula_15, we define the contour sets at "A": Sometimes, the above continuity notions are called "semicontinuous", and a formula_5 is called "continuous" if it is a closed subset of formula_3. A preference-relation is called: As an example, the strict order ">" on real numbers is separable, but not countable. Utility functions. A "utility function" is a function formula_23. Complete preference relations. Debreu proved the existence of a "continuous" representation of a weak preference relation formula_5 satisfying the following conditions: Jaffray gives an elementary proof of the existence of a continuous utility function. Incomplete preference relations. Preferences are called "incomplete" when some options are incomparable, that is, neither formula_8 nor formula_26 holds. This case is denoted by formula_27. Since real numbers are always comparable, it is impossible to have a representing function "u" with formula_25. There are several ways to cope with this issue. One-directional representation. Peleg defined a utility function representation of a strict partial order formula_4 as a function formula_23 such that formula_28, that is, only one direction of implication should hold.
Peleg proved the existence of a one-dimensional continuous utility representation of a strict preference relation formula_4 satisfying the following conditions: If we are given a weak preference relation formula_5, we can apply Peleg's theorem by defining a strict preference relation: formula_0 if and only if formula_8 and "not" formula_11. The second condition (formula_4 is separable) is implied by the following three conditions: A similar approach was taken by Richter. Therefore, this one-directional representation is also called a Richter-Peleg utility representation. Jaffray defines a utility function representation of a strict partial order formula_4 as a function "formula_23 "such that both formula_28, and formula_29, where the relation formula_30 is defined by: for all C, formula_31 and formula_32 (that is: the lower and upper contour sets of "A" and "B" are identical). He proved that, for every partially-ordered space formula_33 that is perfectly-separable, there exists a utility function that is upper-semicontinuous in any topology stronger than the upper order topology.Sec.4 An analogous statement states the existence of a utility function that is lower-semicontinuous in any topology stronger than the lower order topology. Sondermann defines a utility function representation similarly to Jaffray. He gives conditions for existence of a utility function representation on a probability space, that is upper semicontinuous or lower semicontinuous in the order topology. Herden defines a utility function representation of a weak preorder formula_5 as an isotone function "formula_34 "such that formula_28. HerdenThm.4.1 proved that a weak preorder formula_5 on "X" has a continuous utility function, if and only if there exists a countable family E of separable systems on "X" such that, for all pairs formula_0, there is a separable system F in E, such that B is contained in all sets in F, and A is not contained in any set in F. He shows that this theorem implies Peleg's representation theorem. In a follow-up paper he clarifies the relation between this theorem and classical utility representation theorems on complete orders. Multi-utility representation. A "multi-utility representation" (MUR) of a relation formula_5 is a set "U" of utility functions, such that formula_35. In other words, "A" is preferred to "B" if and only if all utility functions in the set "U" unanimously hold this preference. The concept was introduced by Efe Ok. Every preorder (reflexive and transitive relation) has a trivial MUR.Prop.1 Moreover, every preorder with closed upper contour sets has an upper-semicontinuous MUR, and every preorder with closed lower contour sets has a lower-semicontinuous MUR.Prop.2 However, not every preorder with closed upper and lower contour sets has a "continuous" MUR.Exm.1 Ok and Evren present several conditions on the existence of a continuous MUR: All the representations guaranteed by the above theorems might contain infinitely many utilities, and even uncountably many utilities. In practice, it is often important to have a "finite" MUR - a MUR with finitely many utilities. 
Evren and Ok prove there exists a finite MUR where all utilities are upper[lower] semicontinuous for any weak preference relation formula_5 satisfying the following conditions:Thm 3 Note that the guaranteed functions are semicontinuous, but not necessarily continuous, even if all upper and lower contour sets are closed.Exm.2 Evren and Ok say that "there does not seem to be a natural way of deriving a continuous finite multi-utility representation theorem, at least, not by using the methods adopted in this paper". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
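For finite option sets, the one-directional (Richter–Peleg) representation discussed above is easy to construct explicitly. The following is my own toy sketch, not taken from the sources cited here: it assigns each option the length of the longest strict chain below it, which yields a utility u with A strictly preferred to B implying u(A) > u(B), while incomparable options may end up sharing a value.

from functools import lru_cache

# A toy strict partial order on four options; c and d are incomparable
# with each other (an incomplete preference), and a is best.
options = ['a', 'b', 'c', 'd']
strictly_better = {('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd')}

@lru_cache(maxsize=None)
def height(x):
    """Richter-Peleg utility: length of the longest strict chain below x."""
    below = [y for y in options if (x, y) in strictly_better]
    return 0 if not below else 1 + max(height(y) for y in below)

u = {x: height(x) for x in options}
print(u)   # {'a': 2, 'b': 1, 'c': 0, 'd': 0}

# One direction holds: A strictly better than B implies u(A) > u(B) ...
assert all(u[a] > u[b] for (a, b) in strictly_better)
# ... but not the converse: u alone cannot distinguish "preferred" from
# "incomparable", which is why c and d share the same value here.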
[ { "math_id": 0, "text": "A\\succ B" }, { "math_id": 1, "text": "u(A)>u(B)" }, { "math_id": 2, "text": "u(A)>u(B) > u(C) > u(A)" }, { "math_id": 3, "text": "X\\times X" }, { "math_id": 4, "text": "\\succ" }, { "math_id": 5, "text": "\\succeq" }, { "math_id": 6, "text": "A\\succ A" }, { "math_id": 7, "text": "B\\succ A" }, { "math_id": 8, "text": "A\\succeq B" }, { "math_id": 9, "text": "A\\succeq A" }, { "math_id": 10, "text": "\\simeq" }, { "math_id": 11, "text": "B\\succeq A" }, { "math_id": 12, "text": "A \\simeq B" }, { "math_id": 13, "text": "B \\succ A" }, { "math_id": 14, "text": "A \\succ B" }, { "math_id": 15, "text": "A \\in X" }, { "math_id": 16, "text": "\\{B\\in X : B\\succeq A \\}" }, { "math_id": 17, "text": "\\{B\\in X : A \\succeq B \\}" }, { "math_id": 18, "text": "\\{B\\in X : B\\succ A \\}" }, { "math_id": 19, "text": "\\{B\\in X : A \\succ B \\}" }, { "math_id": 20, "text": "Z\\subseteq X" }, { "math_id": 21, "text": "z_i\\in Z" }, { "math_id": 22, "text": "A \\succ z_i \\succ B" }, { "math_id": 23, "text": "u: X \\to \\mathbb{R}" }, { "math_id": 24, "text": "u(A) > u(B) \\iff A\\succ B" }, { "math_id": 25, "text": "u(A) \\geq u(B) \\iff A \\succeq B" }, { "math_id": 26, "text": "B \\succeq A" }, { "math_id": 27, "text": "A \\bowtie B" }, { "math_id": 28, "text": "A \\succ B \\implies u(A)>u(B)" }, { "math_id": 29, "text": "A\\approx B \\implies u(A)=u(B)" }, { "math_id": 30, "text": "A\\approx B" }, { "math_id": 31, "text": "A\\succ C \\iff B\\succ C" }, { "math_id": 32, "text": "C\\succ A \\iff C\\succ B" }, { "math_id": 33, "text": "(X, \\succ)" }, { "math_id": 34, "text": "u: (X, \\succeq) \\to (\\mathbb{R}, \\geq)" }, { "math_id": 35, "text": "A \\succeq B \\iff \\forall u\\in U: u(A)\\geq u(B)" } ]
https://en.wikipedia.org/wiki?curid=73115476
73119456
Method of Chester–Friedman–Ursell
Technique to find asymptotic expansions In asymptotic analysis, the method of Chester–Friedman–Ursell is a technique for finding uniform asymptotic expansions of contour integrals. It was developed as an extension of the steepest descent method to handle the case of coalescing saddle points, where the ordinary method fails to give uniform expansions. The method was published in 1957 by Clive R. Chester, Bernard Friedman and Fritz Ursell. Method. Setting. We study integrals of the form formula_0 where formula_1 is a contour, formula_2 are holomorphic functions of formula_3, formula_4 is a parameter and formula_5 is a large parameter. Suppose we have two saddle points formula_6 of formula_7 with multiplicity formula_8 that depend on the parameter formula_4. If an formula_9 exists such that both saddle points coalesce into a new saddle point formula_10 with multiplicity formula_11, then the steepest descent method no longer gives uniform asymptotic expansions. Procedure. Suppose there are two simple saddle points formula_12 and formula_13 of formula_14 and suppose that they coalesce at the point formula_15. We start with the "cubic transformation" formula_16 of formula_7; that is, we introduce a new complex variable formula_17 and write formula_18 where the coefficients formula_19 and formula_20 will be determined later. We have formula_21 so the cubic transformation will be analytic and injective only if formula_22 and formula_23 are neither formula_24 nor formula_25. Therefore formula_26 and formula_27 must correspond to the zeros of formula_28, i.e. to formula_29 and formula_30. This gives the following system of equations formula_31 which we have to solve to determine formula_32 and formula_33. A theorem by Chester, Friedman and Ursell (see below) states that the cubic transformation is analytic and injective in a local neighbourhood around the critical point formula_34. After the transformation the integral becomes formula_35 where formula_36 is the new contour for formula_17 and formula_37 The function formula_38 is analytic at formula_39 for formula_40 and also at the coalescing point formula_41 for formula_9. Here the method ends: in the transformed integral one recognizes the integral representation of the complex Airy function. Chester, Friedman and Ursell note that, to actually obtain asymptotic expansions, formula_38 should be written not as a single power series but instead as formula_42. Theorem by Chester–Friedman–Ursell. Let formula_43 and formula_44 be as above. The cubic transformation formula_45, with the values of formula_46 and formula_47 derived above and such that formula_48 corresponds to formula_49, has exactly one branch formula_50 such that for all formula_4 in a local neighborhood of formula_9 the transformation is analytic and injective.
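The system of equations for the coefficients can be solved symbolically for concrete phase functions. The sketch below is my own illustration using SymPy; the phase function f = t^3/3 - alpha^2*t is an arbitrary choice whose two simple saddle points at t = +alpha and t = -alpha coalesce at t = 0 as alpha tends to 0.

import sympy as sp

alpha = sp.symbols('alpha', positive=True)
t = sp.symbols('t')

# Illustrative phase function (my own choice, not from the article).
f = t**3 / 3 - alpha**2 * t
print(sp.solve(sp.diff(f, t), t))            # [-alpha, alpha]: the two saddle points

# Label the saddles so that f(t_plus) >= f(t_minus); here f(-alpha) > f(alpha).
t_minus, t_plus = alpha, -alpha
f_minus, f_plus = f.subs(t, t_minus), f.subs(t, t_plus)

# The system  f(t_minus) = -(2/3)*eta**(3/2) + A,  f(t_plus) = (2/3)*eta**(3/2) + A
# is solved by adding and subtracting the two equations.
A = sp.simplify((f_plus + f_minus) / 2)
eta = sp.simplify((sp.Rational(3, 4) * (f_plus - f_minus)) ** sp.Rational(2, 3))

print(A)    # 0
print(eta)  # alpha**2, which vanishes exactly when the saddles coalesce

For this particular f the cubic transformation is essentially the identity (eta = alpha^2, A = 0), which serves as a consistency check on the procedure.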
[ { "math_id": 0, "text": "I(\\alpha,N):=\\int_{C}e^{-Nf(\\alpha,t)}g(\\alpha,t)dt," }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "f,g" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "N" }, { "math_id": 6, "text": "t_+,t_-" }, { "math_id": 7, "text": "f(\\alpha,t)" }, { "math_id": 8, "text": "1" }, { "math_id": 9, "text": "\\alpha_0" }, { "math_id": 10, "text": "t_0" }, { "math_id": 11, "text": "2" }, { "math_id": 12, "text": "t_{-}:=t_{-}(\\alpha)" }, { "math_id": 13, "text": "t_{+}:=t_{+}(\\alpha)" }, { "math_id": 14, "text": "f" }, { "math_id": 15, "text": "t_0:=t_0(\\alpha_0)" }, { "math_id": 16, "text": "t\\mapsto w" }, { "math_id": 17, "text": "w" }, { "math_id": 18, "text": "f(\\alpha,t)=\\tfrac{1}{3}w^3-\\eta(\\alpha) w+A(\\alpha)," }, { "math_id": 19, "text": "\\eta:=\\eta(\\alpha)" }, { "math_id": 20, "text": "A:=A(\\alpha)" }, { "math_id": 21, "text": "\\frac{dt}{dw}=\\frac{w^2-\\eta}{f_t(\\alpha,t)}," }, { "math_id": 22, "text": "dt/dw" }, { "math_id": 23, "text": "dw/dt" }, { "math_id": 24, "text": "0" }, { "math_id": 25, "text": "\\infty" }, { "math_id": 26, "text": "t=t_{-}" }, { "math_id": 27, "text": "t=t_{+}" }, { "math_id": 28, "text": "w^2-\\eta" }, { "math_id": 29, "text": "w_{+}:=\\eta^{1/2}" }, { "math_id": 30, "text": "w_{-}:=-\\eta^{1/2}" }, { "math_id": 31, "text": "\\begin{cases}\nf(\\alpha,t_{-})=-\\frac{2}{3}\\eta^{3/2}+A,\\\\\nf(\\alpha,t_{+})=\\frac{2}{3}\\eta^{3/2}+A,\n\\end{cases}" }, { "math_id": 32, "text": "\\eta" }, { "math_id": 33, "text": "A" }, { "math_id": 34, "text": "(\\alpha_0,t_0)" }, { "math_id": 35, "text": "I(\\alpha,N)=e^{-NA}\\int_L \\exp\\left(-N\\left(\\tfrac{1}{3}w^3-\\eta w\\right)\\right)h(\\alpha,w)dw," }, { "math_id": 36, "text": "L" }, { "math_id": 37, "text": "h(\\alpha,w):=g(\\alpha,t)\\frac{dt}{dw}=g(\\alpha,t)\\frac{w^2-\\eta}{f_t(\\alpha,t)}." }, { "math_id": 38, "text": "h(\\alpha,w)" }, { "math_id": 39, "text": "w_{+}(\\alpha),w_{-}(\\alpha)" }, { "math_id": 40, "text": "\\alpha\\neq \\alpha_0" }, { "math_id": 41, "text": "w_0" }, { "math_id": 42, "text": "h(\\alpha,w)=\\sum\\limits_{m} q_m(\\alpha)(w^2-\\eta)^m+ \\sum\\limits_{m} p_m(\\alpha) w(w^2-\\eta)^m" }, { "math_id": 43, "text": "t_{+}:=t_{+}(\\alpha),t_{-}:=t_{-}(\\alpha)" }, { "math_id": 44, "text": "t_{0}:=t_{0}(\\alpha_0)" }, { "math_id": 45, "text": "f(t,\\alpha)=\\tfrac{1}{3}w^3-\\eta(\\alpha) w+A(\\alpha)" }, { "math_id": 46, "text": "\\eta(\\alpha)" }, { "math_id": 47, "text": "A(\\alpha)" }, { "math_id": 48, "text": "t=t_{\\pm}" }, { "math_id": 49, "text": "u=\\pm\\eta^{1/2}" }, { "math_id": 50, "text": "w=w(\\alpha,t)" } ]
https://en.wikipedia.org/wiki?curid=73119456
73119707
Aluminium–magnesium alloys
Aluminium–magnesium alloys (AlMg) – standardised in the 5000 series – are aluminium alloys in which magnesium is the main alloying element. Most standardised alloys also contain small additions of manganese (AlMg(Mn)). Pure AlMg alloys and the AlMg(Mn) alloys belong to the medium-strength alloys that are not hardenable by heat treatment (naturally hard alloys). Other magnesium-containing aluminium alloys are the aluminium–magnesium–copper alloys (AlMgCu) and the aluminium–magnesium–silicon alloys (AlMgSi, 6000 series). Applications and processing. The discovery of aluminium–magnesium alloys dates back to the late 19th century. AlMg alloys are among the most important aluminium alloys for construction materials. They are readily cold-formed, e.g. by rolling and forging, and are easily weldable at Mg contents of at least 3%. AlMg is rarely processed through extrusion presses, as subsequent strength changes in extrusion profiles must be avoided. The majority of AlMg alloys are processed into rolled products as well as pipes, rods, wires and free-form or drop-forged parts. Parts are also processed into extrusion profiles with simple cross-sections. Due to the good corrosion resistance and high strength at low temperatures, AlMg is used in shipbuilding, in the construction of chemical apparatus and pipelines, and for refrigeration technology and automobiles. The good weldability is crucial for use in aircraft construction, where additions of scandium and zirconium are also made to improve the weldability further. Solubility of magnesium and phases. The solubility of magnesium in aluminium is very high and reaches a maximum of 14% to 17% (depending on the literature reference) at 450 °C. At 34.5% Mg there is a eutectic with Al8Mg5 (sometimes referred to as Al3Mg2), an intermetallic phase (formula_0-phase). The solubility of Mg decreases sharply with falling temperature: at 100 °C it is only 2%, and at room temperature 0.2%. In pure AlMg alloys, precipitation of the formula_0-phase proceeds in a four-stage process. In technical alloys, which contain further alloying elements and impurities, the process is much more complicated; in their case, the precipitation differs from this for the following reasons: Structures. The diffusion of magnesium in aluminium is very slow. The reason is the large difference between the atomic radii of aluminium and magnesium (formula_3). Therefore, after quenching, only part of the magnesium precipitates from the solid solution, while most of it remains in aluminium as a supersaturated solid solution. Even with prolonged annealing treatment, this condition cannot be eliminated. Excess magnesium precipitates mainly at the grain boundaries as well as on dispersion particles within the grains. The speed of the process depends on the Mg content and the temperature and increases with both. At the grain boundaries, so-called plaques precipitate first: thin plates that are not connected, i.e. they do not yet form a continuous layer around the grain. At 70 °C, they form after 3 months, at 100 °C after 3 days and at 150 °C after one to nine hours. If further time passes at elevated temperature, the plaques grow together to form a contiguous film. This film has a negative effect on corrosion resistance, but it can be dissolved by heat treatment. Annealing at 420 °C for one hour followed by slow cooling at 20 °C/h, or annealing at 200 °C to 240 °C, is suitable.
The plaques of the formula_0-phase transform into numerous small particles, referred to in the specialist literature as "bead line-like". They no longer form a coherent film. Composition of standardised varieties. The compositions of some standardised varieties are contained in the following table. Proportions of alloying elements are given in mass percent. Among the available varieties there are fine gradations of Mg and Mn content; Mn-free varieties are very rare. Standard alloys are AlMg3Mn and AlMg4.5Mn0.7, as well as AlMg4.5Mn0.4 for car bodywork. Magnesium contents of up to 5% and manganese contents up to 1% are used for wrought alloys. Mg contents up to 10% are also possible for cast alloys; however, contents of 7% and more are considered difficult to cast. 5000 series. The 5000 series is alloyed with magnesium. Alloy 5083 has the highest strength of the non-heat-treated alloys. Most 5000 series alloys include manganese as well. Corrosion. Aluminium-magnesium alloys are considered to be very corrosion-resistant, making them suitable for marine applications, but this is only true if the formula_0-phase exists as a non-contiguous phase. Alloys with Mg contents below 3% are therefore always corrosion-resistant; at higher contents, appropriate heat treatment must ensure that this phase is not present as a continuous film at the grain boundaries. The formula_2-phase and the formula_0-phase are much less noble than aluminium and behave anodically. AlMg therefore tends to intergranular corrosion if the formula_0-phase is present as a continuous film at the grain boundaries. Alloys in states susceptible to intergranular corrosion are annealed at temperatures of 200 °C to 250 °C with slow cooling (heterogenisation annealing). This converts the formula_0-phase film into globular formula_0-phase particles, and the material becomes resistant to intergranular corrosion. Mechanical properties. Strength and elongation at break in the tensile test. Strength is increased by alloying with magnesium. At low Mg contents the increase in strength is relatively strong; at higher contents it becomes progressively weaker. Nevertheless, per percent of alloying element, magnesium increases strength more efficiently than alternative elements. Even at medium Mg contents, the increase in strength from alloying manganese is higher than from additional magnesium, which is also one reason why most AlMg alloys still contain manganese. The reason given for the strong strengthening effect of magnesium is the high binding energy of vacancies to Mg atoms: these vacancies are then no longer available as free vacancies, which would otherwise favour plastic deformation. The yield strength increases linearly with increasing Mg content, from about 45 N/mm2 at 1% Mg to about 120 N/mm2 at 4% Mg. The tensile strength also increases linearly, but with a steeper gradient: with 1% Mg it is about 60 N/mm2, with 4% Mg about 240 N/mm2. There are differing statements on the elongation at break: research on alloys based on high-purity aluminium shows an elongation at break increasing from about 20% at 1% Mg to 30% at 5% Mg, whereas other data show it first dropping sharply from 38% at 1% Mg to 34% at about 1.8% Mg, reaching a minimum of only 32% at 3% Mg and then rising again to about 35% at 5% Mg. The flow curves for AlMg show the behaviour typical of metallic materials: the flow stress increases with the true strain (degree of deformation). For all alloys, the increase is relatively strong at low strains and weaker at higher strains.
However, the curves for more highly alloyed varieties always lie above those of the lower-alloyed ones. For example, at a true strain of 0.2, AlMg0.5 has a flow stress of about 100 N/mm2, AlMg one of 150 N/mm2, AlMg3 one of 230 N/mm2 and AlMg4.5Mn0.4 one of about 300 N/mm2. The higher the alloy content and the greater the strain, the greater the resulting PLC effect and the Lüders effect. Influence of grain size. In the case of pure aluminium, the grain size has an influence on strength that is minor for a metal. In the case of alloys, the influence increases with the alloy content. At 5% Mg, materials with grain sizes of 50 μm achieve uniform elongations of around 0.25; at 250 μm they are around 0.28. AlMg8 already achieves uniform elongations of 0.3 with a grain diameter of 200 μm. With increasing grain size, both the Lüders strain and the Lüders effect decrease. Cold forming and heat treatment. In the case of very high degrees of deformation with heavily work-hardened alloys, softening can also occur at room temperature. In a long-term study over 50 years, a decrease in strength could still be measured at the end of that period. The decrease is greater the higher the degree of deformation and the higher the alloy content. The softening itself is very pronounced at the beginning and quickly subsides. The effect can be avoided by stabilization annealing at around 120 °C to 170 °C for several hours. References.
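The linear strength relationships quoted above can be expressed as a simple rule of thumb. The sketch below is my own interpolation of the figures given in this article; actual values depend strongly on temper, manganese content and grain size.

def yield_strength(mg_pct):
    """Rough linear fit: ~45 N/mm^2 at 1% Mg to ~120 N/mm^2 at 4% Mg."""
    return 45.0 + 25.0 * (mg_pct - 1.0)

def tensile_strength(mg_pct):
    """Rough linear fit: ~60 N/mm^2 at 1% Mg to ~240 N/mm^2 at 4% Mg."""
    return 60.0 + 60.0 * (mg_pct - 1.0)

for mg in (1.0, 2.0, 3.0, 4.0):
    print(f"~{mg}% Mg: yield ~ {yield_strength(mg):.0f} N/mm^2, "
          f"tensile ~ {tensile_strength(mg):.0f} N/mm^2")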
[ { "math_id": 0, "text": "\\beta" }, { "math_id": 1, "text": "\\beta''" }, { "math_id": 2, "text": "\\beta'" }, { "math_id": 3, "text": "r_{Al}:r_{Mg} = 1.43:1.6" } ]
https://en.wikipedia.org/wiki?curid=73119707
7312322
Growth chart
Graphic of child development over time A growth chart is used by pediatricians and other health care providers to follow a child's growth over time. Growth charts have been constructed by observing the growth of large numbers of healthy children over time. The height, weight, and head circumference of a child can be compared to the expected parameters of children of the same age and sex to determine whether the child is growing appropriately. Growth charts can also be used to predict the expected adult height and weight of a child because, in general, children maintain a fairly constant growth curve. When a child deviates from his or her previously established growth curve, investigation into the cause is generally warranted. Parameters used to analyze growth charts include weight velocity (defined as rate of change in weight over time), height velocity (defined as rate of change in stature over time), and whether someone's growth chart crosses percentiles. For instance, endocrine disorders can be associated with a decrease in height velocity and preserved weight velocity while normal growth variants are associated with a decrease in height and weight velocity that are proportional to each other. It's important to note that other parameters are more commonly used such as waist circumference for assessing obesity and skin fold difference for assessing malnutrition. Growth charts can also be compiled with a portion of the population deemed to have been raised in more or less ideal environments, such as nutrition that conforms to pediatric guidelines, and no maternal smoking. Charts from these sources end up with slightly taller but thinner averages. Growth charts are different for persons assigned male at birth and female at birth, due in part to pubertal differences and disparity in final adult height. In addition, children born prematurely and children with chromosomal abnormalities such as Down syndrome and Turner syndrome follow distinct growth curves which deviate significantly from children without these conditions. As such, growth charts have been created to describe the expected growth patterns of several developmental conditions. Since there are differences in normal growth rates between breastfed and formula-fed babies, the World Health Organization growth charts, which better reflect the growth pattern of the healthy, breastfed infant, are considered the standard for U.S. children under age two. History and revisions to growth chart. The growth chart was first developed by the National Center for Health Statistics (NCHS) in 1977 to clinically analyze child development. The 1977 growth chart was subsequently used by the World Health Organization for dissemination to healthcare systems abroad. In order to accommodate for heterogenous populations internationally, the WHO made an effort to gather data from different regions in every continent. Data used to calculate the CDC's growth chart percentiles was accumulated periodically since the 1960s by the National Health and Nutrition Examination Survey. Updated and more comprehensive data was later used to revise the existing growth chart and construct the 2000 CDC growth charts. The revised growth charts include revision of the 14 existing charts as well as introduction of 2 new BMI-for-age charts. Quantitative definitions. Mid-parental height (MPH) is often used to predict the target height of an individual based on the heights of the two biological parents. It can be used to calculate the target height (TH) for children. 
MPH is given by (mother's height + father's height) divided by 2. MPH is unisex; boys need an upward correction and girls a downward correction. In view of an average height difference between adult men and women of 13 cm, TH for boys is usually given by MPH + 6.5 cm, and TH for girls by MPH - 6.5 cm. Alternatively, TH can be expressed in standard deviation scores (SDS), with TH_SDS = (mother's height_SDS + father's height_SDS) / 2. Yet this calculation is inaccurate, as it needs adjustment towards mid-population height. It is suggested to use the conditional target height, cTH_SDS, with a correction factor of 0.72: cTH_SDS = TH_SDS x 0.72. Velocity is another quantity that is used to quantify growth curves. It can be used for both height and weight. Growth velocity is defined as follows: formula_0 In this equation, q is either weight or height, t represents time, and Δ represents change over a defined interval. Body mass index (BMI) is a useful quantification that can gauge the level of obesity. It is defined as follows, with corresponding clinical ranges: formula_1 Bone age is another useful metric that complements a physician's use of a growth chart. It is particularly useful in working up growth abnormalities and can indicate a delay in the onset of puberty. Clinical significance. The combination of height and weight velocity can indicate underlying disease of genetic origin, endocrine cause, and/or delayed growth. Normal growth deficiency. One of the most common growth disorders, a growth deficiency can be due to either familial short stature or constitutional growth delay (CGD). Familial short stature is indicated when one or both parents are of short stature and the child's height and weight percentiles are under the 5th percentile threshold. The child will be concordant with the mean parental height, and the bone age should be normal. Constitutional growth delays are marked by low height and weight percentiles as early as the first 4–6 months following birth. Genetic syndromes. A variety of genetic syndromes can result in characteristic growth chart patterns. Genetic diseases such as Turner's syndrome, Prader–Willi syndrome, and Noonan syndrome can be marked by height and weight below the 5th percentile from birth. Other genetic disorders such as Marfan's syndrome and Klinefelter's syndrome are typically indicated by a height above the 90th percentile. Endocrine and metabolic disorders. A decrease in height velocity with preserved or increased weight velocity can be indicative of endocrine disorders including hypothyroidism, growth hormone deficiency, and excess of glucocorticoids. Variability in growth charts. The CDC's growth chart is derived from a population that is representative of the USA. Charts based on a specific race or ethnicity are not useful because differences in growth chart progression can be attributed to socioeconomic factors. The WHO launched a revised growth chart in 2006 using children from Ghana, Oman, Norway, Brazil, India and the USA, which substantiated the fact that growth is highly dependent on environmental factors. References.
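The quantities defined above are straightforward to compute. The following sketch is my own illustration with arbitrary example numbers; it is not a clinical tool, and it implements only the target height, growth velocity and BMI formulas as stated.

def target_height_cm(mother_cm, father_cm, sex):
    """Mid-parental target height: MPH + 6.5 cm for boys, MPH - 6.5 cm for girls."""
    mph = (mother_cm + father_cm) / 2.0
    return mph + 6.5 if sex == "male" else mph - 6.5

def growth_velocity(q_start, q_end, t_start, t_end):
    """velocity = delta q / delta t, where q is height or weight and t is time in years."""
    return (q_end - q_start) / (t_end - t_start)

def bmi(weight_kg, height_m):
    return weight_kg / height_m**2

print(target_height_cm(mother_cm=162, father_cm=178, sex="male"))   # 176.5 cm
print(growth_velocity(q_start=110, q_end=116, t_start=5, t_end=6))  # 6 cm/year
print(round(bmi(weight_kg=30, height_m=1.3), 1))                    # 17.8 kg/m^2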
[ { "math_id": 0, "text": "velocity = \\Delta q/\\Delta t" }, { "math_id": 1, "text": "BMI = weight(kg) / [height(m)]^2" } ]
https://en.wikipedia.org/wiki?curid=7312322
73123512
Degree-Rips bifiltration
Degree-Rips Bifiltration The degree-Rips bifiltration is a simplicial filtration used in topological data analysis for analyzing the shape of point cloud data. It is a multiparameter extension of the Vietoris–Rips filtration that possesses greater stability to data outliers than single-parameter filtrations, and which is more amenable to practical computation than other multiparameter constructions. Introduced in 2015 by Lesnick and Wright, the degree-Rips bifiltration is a parameter-free and density-sensitive vehicle for performing persistent homology computations on point cloud data. Definition. It is standard practice in topological data analysis (TDA) to associate a sequence of nested simplicial complexes to a finite data set in order to detect the persistence of topological features over a range of scale parameters. One way to do this is by considering the sequence of Vietoris–Rips complexes of a finite set in a metric space indexed over all scale parameters. If formula_0 is a finite set in a metric space, then this construction is known as the Vietoris–Rips (or simply "Rips") filtration on formula_0, commonly denoted formula_1 or formula_2. The Rips filtration can be expressed as a functor formula_3 from the real numbers (viewed as a poset category) to the category of simplicial complexes and simplicial maps, a subcategory of the category formula_4 of topological spaces and continuous maps via the geometric realization functor. The Rips filtration is indexed over a single parameter, but we can capture more information (e.g., density) about the underlying data set by considering multiparameter filtrations. A filtration indexed by the product of two totally-ordered sets is known as a bifiltration, first introduced by Gunnar Carlsson and Afra Zomorodian in 2009. The degree-Rips bifiltration filters each simplicial complex in the Rips filtration by the degree of each vertex in the graph isomorphic to the 1-skeleton at each index. More formally, let formula_5 be an element of formula_6 and define formula_7 to be the subgraph of the 1-skeleton of formula_8 containing all vertices whose degree is at least formula_9. Subsequently building the maximal simplicial complex possible on this 1-skeleton, we obtain a complex formula_10. By doing this for all possible vertex degrees, and across all scale parameters in the Rips filtration, we extend the Rips construction to a bifiltration formula_11. Note that since the size of each complex will decrease as formula_9 increases, we should identify the indexing set formula_6 with formula_12, where formula_13 is the opposite poset category of formula_14. Therefore the degree-Rips bifiltration can be viewed as a functor formula_15. The idea behind the degree-Rips bifiltration is that vertices of higher degree will correspond to higher density regions of the underlying data set. However, since degree-Rips does not depend on an arbitrary choice of a parameter (such as a pre-selected density parameter, which is "a priori" difficult to determine), it is a convenient tool for analyzing data. Applications to data analysis. The degree-Rips bifiltration possesses several properties that make it a useful tool in data analysis. For example, each of its skeleta has polynomial size; the k-dimensional skeleton of formula_16 has formula_17 simplices, where formula_18 denotes an asymptotic upper bound. Moreover, it has been shown that the degree-Rips bifiltration possesses reasonably strong stability properties with respect to perturbations of the underlying data set. 
Further work has also been done examining the stable components and homotopy types of degree-Rips complexes. The software RIVET was created in order to visualize several multiparameter invariants (i.e., data structures that attempt to capture underlying geometric information of the data) of 2-parameter persistence modules, including the persistent homology modules of the degree-Rips bifiltration. These invariants include the Hilbert function, rank invariant, and fibered barcode. As a follow-up to the introduction of degree-Rips in their original 2015 paper, Lesnick and Wright showed in 2022 that a primary component of persistent homology computations (namely, computing minimal presentations and bigraded Betti numbers) can be achieved efficiently in a way that outperforms other persistent homology software. Methods of improving algorithmic efficiency of multiparameter persistent homology have also been explored that suggest the possibility of substantial speed increases for data analysis tools such as RIVET. The degree-Rips bifiltration has been used for data analysis on random point clouds, as well as for analyzing data clusters with respect to variations in density. There has been some preliminary experimental analysis of the performance of degree-Rips with respect to outliers in particular, but this is an ongoing area of research as of February 2023. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\text{Rips}(X)" }, { "math_id": 2, "text": "\\mathcal R (X)" }, { "math_id": 3, "text": "\\text{Rips}(X): \\mathbb R \\to \\mathbf{Simp}" }, { "math_id": 4, "text": "\\mathbf{Top}" }, { "math_id": 5, "text": "(a,b)" }, { "math_id": 6, "text": "\\mathbb R^2" }, { "math_id": 7, "text": "G_{a,b}" }, { "math_id": 8, "text": "\\text{Rips}(X)_b" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "\\text{D-Rips}(X)_{a,b}" }, { "math_id": 11, "text": "\\{ \\text{D-Rips}(X)_{a,b}\\}_{(a,b) \\in \\mathbb R^2}" }, { "math_id": 12, "text": "\\mathbb R^{\\text{op}}\\times \\mathbb R" }, { "math_id": 13, "text": "\\mathbb R^{\\text{op}}" }, { "math_id": 14, "text": "\\mathbb R" }, { "math_id": 15, "text": "\\text{D-Rips}(X): \\mathbb R^{\\operatorname{op}}\\times \\mathbb R \\to \\mathbf{Simp}" }, { "math_id": 16, "text": "\\text{D-Rips}(X)" }, { "math_id": 17, "text": "O(|X|^{k+2})" }, { "math_id": 18, "text": "O" } ]
https://en.wikipedia.org/wiki?curid=73123512
7312598
Relative survival
Relative survival of a disease, in survival analysis, is calculated by dividing the overall survival after diagnosis by the survival observed in a similar population not diagnosed with that disease. A similar population is composed of individuals who match those diagnosed with the disease in at least age and gender. When describing the survival experience of a group of people or patients, the method of overall survival is typically used; it presents estimates of the proportion of people or patients alive at a certain point in time. The problem with measuring overall survival by using the Kaplan–Meier or actuarial survival methods is that the estimates include two causes of death: deaths from the disease of interest and deaths from all other causes, which include old age, other cancers, trauma and any other possible cause of death. In general, survival analysis is interested in deaths caused by a disease rather than by all causes. Thus, a "cause-specific survival analysis" is employed to measure disease-specific survival. There are two ways of performing a cause-specific survival analysis: "competing risks survival analysis" and "relative survival." Competing risks survival analysis. This form of analysis is characterized by its use of death certificates. In traditional overall survival analysis, the cause of death is irrelevant to the analysis. In a competing risks survival analysis, each death certificate is reviewed. If the disease of interest is cancer, and the patient dies of a car accident, the patient is labelled as censored at death instead of being labelled as having died. Issues with this method arise, as each hospital and/or registry may code causes of death differently. For example, there is variability in the way a patient who has cancer and commits suicide is coded/labelled. In addition, if a patient has an eye removed because of an ocular cancer and dies after being hit by a car he did not see while crossing the road, he would often be considered censored rather than having died from the cancer or its subsequent effects. Hazard rate. The relative survival form of analysis is more complex than "competing risks" but is considered the gold standard for performing a cause-specific survival analysis. It is based on two rates: the overall hazard rate observed in a diseased population and the background or expected hazard rate in the general or background population. Deaths from the disease in a single time period are the total number of deaths (overall number of deaths) minus the expected number of deaths in the general population. If 10 deaths per hundred population occur in a population of cancer patients, but only 1 death occurs per hundred general population, the disease-specific number of deaths ("excess hazard rate") is 9 deaths per hundred population. The classic equation for the "excess hazard rate" is as follows: formula_0 formula_1 The equation does not define a survival proportion but simply describes the relationships between the disease-specific death (excess hazard) rate, the background mortality rate (expected death rate) and the overall observed mortality rate. The excess hazard rate is related to relative survival, just as hazard rates are related to overall survival. Cancer survival. Relative survival is typically used in the analysis of cancer registry data. Cause-specific survival estimation using the coding of death certificates has considerable inaccuracy and inconsistency and does not permit the comparison of rates across registries. 
The diagnosis of cause of death varies between practitioners. How does one code for a patient who dies of heart failure after receiving a chemotherapeutic agent with known deleterious cardiac side-effects? In essence, what really matters is not why the population dies but whether the rate of death is higher than that of the general population. If all patients are dying of car crashes, perhaps the tumour or treatment predisposes them to visual or perceptual disturbances, which make them more likely to die in a car crash. In addition, it has been shown that patients coded in a large US cancer registry as suffering a non-cancer death are 1.37 times as likely to die as a member of the general population. If the coding were accurate, this figure should approximate 1.0, as the rate of death from non-cancer causes (in a population of cancer sufferers) should approximate that of the general population. Thus, the use of relative survival provides an accurate way to measure survival rates that are associated with the cancer in question. Epidemiology. In epidemiology, relative survival (as opposed to overall survival, and associated with excess hazard rates) is defined as the ratio of observed survival in a population to the expected or background survival rate. It can be thought of as the Kaplan–Meier survivor function for a particular year divided by the expected survival rate in that particular year. This ratio is typically known as the "relative survival" (RS). If the ratios for five consecutive years are multiplied, the resulting figure is known as the "cumulative relative survival" (CRS). It is analogous to the five-year overall survival rate, but it is a way of describing cancer-specific risk of death over the five years after diagnosis. Software. There are several software suites available to estimate relative survival rates. Regression modelling can be performed using maximum likelihood estimation methods in Stata or R. For example, the R package cmprsk may be used for competing risks analyses, which utilize sub-distribution or 'Fine and Gray' regression methods. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\lambda = \\lambda^*+\\nu\\,\\!" }, { "math_id": 1, "text": " \\lambda = \\text{Overall Death Rate},~\\lambda^*=\\text{Expected death rate},~\\nu=\\text{Disease-specific death rate}" } ]
https://en.wikipedia.org/wiki?curid=7312598
7313339
Louis Kauffman
American mathematician Louis Hirsch Kauffman (born February 3, 1945) is an American mathematician, mathematical physicist, and professor of mathematics in the Department of Mathematics, Statistics, and Computer Science at the University of Illinois at Chicago. He does research in topology, knot theory, topological quantum field theory, quantum information theory, and diagrammatic and categorical mathematics. He is best known for the introduction and development of the bracket polynomial and the Kauffman polynomial. Biography. Kauffman was valedictorian of his graduating class at Norwood Norfolk Central High School in 1962. He received his B.S. at the Massachusetts Institute of Technology in 1966 and his Ph.D. in mathematics from Princeton University in 1972, with thesis "Cyclic Branched-Covers, O(n)-Actions and Hypersurface Singularities" written under the supervision of William Browder. Kauffman has worked at many places as a visiting professor and researcher, including the University of Zaragoza in Spain, the University of Iowa in Iowa City, the Institut des Hautes Études Scientifiques in Bures Sur Yevette, France, the Institut Henri Poincaré in Paris, France, the University of Bologna, Italy, the Federal University of Pernambuco in Recife, Brazil, and the Newton Institute in Cambridge, England. He is the founding editor and one of the managing editors of the "Journal of Knot Theory and Its Ramifications", and editor of the "World Scientific Book Series On Knots and Everything". He writes a column entitled Virtual Logic for the journal "Cybernetics and Human Knowing". From 2005 to 2008, he was president of the American Society for Cybernetics. He plays clarinet in the ChickenFat Klezmer Orchestra in Chicago. Work. Kauffman's research interests are in the fields of cybernetics, topology, and mathematical physics. His work is primarily on the topics of knot theory and its connections with statistical mechanics, quantum theory, algebra, combinatorics, and foundations. In topology, he introduced and developed the bracket polynomial and Kauffman polynomial. Bracket polynomial. In the mathematical field of knot theory, the bracket polynomial, also known as the "Kauffman bracket", is a polynomial invariant of framed links. Although it is not an invariant of knots or links (as it is not invariant under type I Reidemeister moves), a suitably "normalized" version yields the famous knot invariant called the Jones polynomial. The bracket polynomial is important in unifying the Jones polynomial with other quantum invariants. In particular, Kauffman's interpretation of the Jones polynomial allows generalization to state sum invariants of 3-manifolds. Subsequently, the bracket polynomial formed the basis for Mikhail Khovanov's construction of a homology for knots and links, creating a stronger invariant than the Jones polynomial and such that the graded Euler characteristic of the Khovanov homology is equal to the original Jones polynomial. The generators for the chain complex of the Khovanov homology are states of the bracket polynomial decorated with elements of a Frobenius algebra. Kauffman polynomial. The Kauffman polynomial is a 2-variable knot polynomial due to Louis Kauffman. It is defined as formula_0 where formula_1 is the writhe and formula_2 is a regular isotopy invariant which generalizes the bracket polynomial. Discrete ordered calculus. 
In 1994, Kauffman and Tom Etter wrote a draft proposal for a non-commutative "discrete ordered calculus" (DOC), which they presented in revised form in 1996. In the meantime, the theory was presented in a modified form by Kauffman and H. Pierre Noyes, together with a derivation of the free-space Maxwell equations on this basis. Awards and honors. He won a Lester R. Ford Award (with Thomas Banchoff) in 1978. Kauffman is the 1993 recipient of the Warren McCulloch award of the American Society for Cybernetics and the 1996 award of the Alternative Natural Philosophy Association for his work in discrete physics. He is the 2014 recipient of the Norbert Wiener award of the American Society for Cybernetics. In 2012 he became a fellow of the American Mathematical Society. Publications. Louis H. Kauffman is the author of several monographs on knot theory and mathematical physics. His publication list numbers over 170.
[ { "math_id": 0, "text": "F(K)(a,z)=a^{-w(K)}L(K)" }, { "math_id": 1, "text": "w(K)" }, { "math_id": 2, "text": "L(K)" } ]
https://en.wikipedia.org/wiki?curid=7313339
73134451
Offset filtration
The offset filtration (also called the "union-of-balls" or "union-of-disks" filtration) is a growing sequence of metric balls used to detect the size and scale of topological features of a data set. The offset filtration commonly arises in persistent homology and the field of topological data analysis. Utilizing a union of balls to approximate the shape of geometric objects was first suggested by Frosini in 1992 in the context of submanifolds of Euclidean space. The construction was independently explored by Robins in 1998, and expanded to considering the collection of offsets indexed over a series of increasing scale parameters (i.e., a growing sequence of balls), in order to observe the stability of topological features with respect to attractors. Homological persistence as introduced in these papers by Frosini and Robins was subsequently formalized by Edelsbrunner et al. in their seminal 2002 paper "Topological Persistence and Simplification." Since then, the offset filtration has become a primary example in the study of computational topology and data analysis. Definition. Let formula_0 be a finite set in a metric space formula_1, and for any formula_2 let formula_3 be the closed ball of radius formula_4 centered at formula_5. Then the union formula_6 is known as the offset of formula_0 with respect to the parameter formula_4 (or simply the formula_4-offset of formula_0). By considering the collection of offsets over all formula_7 we get a family of spaces formula_8 where formula_9 whenever formula_10. So formula_11 is a family of nested topological spaces indexed over formula_4, which defines a filtration known as the offset filtration on formula_0. Note that it is also possible to view the offset filtration as a functor formula_12 from the poset category of non-negative real numbers to the category of topological spaces and continuous maps. There are some advantages to the categorical viewpoint, as explored by Bubenik and others. Properties. A standard application of the nerve theorem shows that the union of balls has the same homotopy type as its nerve, since closed balls are convex and the intersection of convex sets is convex. The nerve of the union of balls is also known as the Čech complex, which is a subcomplex of the Vietoris-Rips complex. Therefore the offset filtration is weakly equivalent to the Čech filtration (defined as the nerve of each offset across all scale parameters), so their homology groups are isomorphic. Although the Vietoris-Rips filtration is not identical to the Čech filtration in general, it is an approximation in a sense. In particular, for a set formula_13 we have a chain of inclusions formula_14 between the Rips and Čech complexes on formula_0 whenever formula_15. In general metric spaces, we have that formula_16 for all formula_17, implying that the Rips and Cech filtrations are 2-interleaved with respect to the interleaving distance as introduced by Chazal et al. in 2009. It is a well-known result of Niyogi, Smale, and Weinberger that given a sufficiently dense random point cloud sample of a smooth submanifold in Euclidean space, the union of balls of a certain radius recovers the homology of the object via a deformation retraction of the Čech complex. The offset filtration is also known to be stable with respect to perturbations of the underlying data set. This follows from the fact that the offset filtration can be viewed as a sublevel-set filtration with respect to the distance function of the metric space. 
The stability of sublevel-set filtrations can be stated as follows: Given any two real-valued functions formula_18 on a topological space formula_19 such that for all formula_20, the formula_21-dimensional homology modules on the sublevel-set filtrations with respect to formula_18 are point-wise finite dimensional, we have formula_22 where formula_23 and formula_24 denote the bottleneck and sup-norm distances, respectively, and formula_25 denotes the formula_21-dimensional persistent homology barcode. While first stated in 2005, this sublevel stability result also follows directly from an algebraic stability property sometimes known as the "Isometry Theorem," which was proved in one direction in 2009, and the other direction in 2011. A multiparameter extension of the offset filtration defined by considering points covered by multiple balls is given by the multicover bifiltration, and has also been an object of interest in persistent homology and computational geometry. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "(M,d)" }, { "math_id": 2, "text": "x\\in X" }, { "math_id": 3, "text": "B(x,\\varepsilon) = \\{y\\in X \\mid d(x,y) \\leq \\varepsilon \\}" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "X^{(\\varepsilon)}:=\\bigcup_{x\\in X} B(x,\\varepsilon)" }, { "math_id": 7, "text": "\\varepsilon \\in [0,\\infty)" }, { "math_id": 8, "text": "\\mathcal O(X) := \\{ X^{(\\varepsilon)} \\mid \\varepsilon \\in [0,\\infty)\\}" }, { "math_id": 9, "text": "X^{(\\varepsilon)}\\subseteq X^{(\\varepsilon^\\prime)}" }, { "math_id": 10, "text": "\\varepsilon \\leq \\varepsilon^\\prime" }, { "math_id": 11, "text": "\\mathcal O(X)" }, { "math_id": 12, "text": "\\mathcal O(X) : [0, \\infty) \\to \\mathbf{Top}" }, { "math_id": 13, "text": "X \\subset \\mathbb R^d" }, { "math_id": 14, "text": "\\operatorname{Rips}_\\varepsilon(X) \\subset \\operatorname{Cech}_{\\varepsilon^\\prime}(X) \\subset \\operatorname{Rips}_{\\varepsilon^\\prime}(X)" }, { "math_id": 15, "text": "\\varepsilon^\\prime / \\varepsilon \\geq \\sqrt{2d/d+1}" }, { "math_id": 16, "text": "\\operatorname{Cech}_\\varepsilon(X) \\subset \\operatorname{Rips}_{2\\varepsilon}(X) \\subset \\operatorname{Cech}_{2\\varepsilon}(X)" }, { "math_id": 17, "text": "\\varepsilon >0" }, { "math_id": 18, "text": "\\gamma, \\kappa" }, { "math_id": 19, "text": "T" }, { "math_id": 20, "text": "i\\geq 0" }, { "math_id": 21, "text": "i\\text{th}" }, { "math_id": 22, "text": "d_B (\\mathcal B_i (\\gamma), \\mathcal B_i (\\kappa)) \\leq d_\\infty (\\gamma, \\kappa)" }, { "math_id": 23, "text": "d_B(-)" }, { "math_id": 24, "text": "d_\\infty(-)" }, { "math_id": 25, "text": "\\mathcal B_i (-)" } ]
https://en.wikipedia.org/wiki?curid=73134451
73139878
Multicover bifiltration
The multicover bifiltration is a two-parameter sequence of nested topological spaces derived from the covering of a finite set in a metric space by growing metric balls. It is a multidimensional extension of the offset filtration that captures density information about the underlying data set by filtering the points of the offsets at each index according to how many balls cover each point. The multicover bifiltration has been an object of study within multidimensional persistent homology and topological data analysis. Definition. Following the notation of Corbet et al. (2022), given a finite set formula_0, the multicover bifiltration on formula_1 is a two-parameter filtration indexed by formula_2 defined index-wise as formula_3, where formula_4 denotes the non-negative integers. Note that when formula_5 is fixed we recover the Offset Filtration. Properties. The multicover bifiltration admits a topologically equivalent polytopal model of polynomial size, called the "rhomboid bifiltration." The rhomboid bifiltration is an extension of the rhomboid tiling introduced by Edelsbrunner and Osang in 2021 for computing the persistent homology of the multicover bifiltration along one axis of the indexing set. The rhomboid bifiltration on a set of formula_6 points in a Euclidean space can be computed in polynomial time. The multicover bifiltration is also topologically equivalent to a multicover nerve construction due to Sheehy called the subdivision-Čech bifiltration, which considers the barycentric subdivision on the nerve of the offsets. In particular, the subdivision-Čech and multicover bifiltrations are weakly equivalent, and hence have isomorphic homology modules in all dimensions. However, the subdivision-Čech bifiltration has an exponential number of simplices in the size of the data set, and hence is not amenable to efficient direct computations.
[ { "math_id": 0, "text": "A\\subset \\mathbb R^d" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "\\mathbb R \\times \\mathbb N^{\\text{op}}" }, { "math_id": 3, "text": "\\operatorname{Cov}_{r,k} := \\{b \\in \\mathbb R^d : ||b-a|| \\leq r \\text{ for at least } k \\text{ points } a\\in A\\}" }, { "math_id": 4, "text": "\\mathbb N" }, { "math_id": 5, "text": "k=1" }, { "math_id": 6, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=73139878
731401
Ideal solution
Solution exhibiting thermodynamic properties An ideal solution or ideal mixture is a solution that exhibits thermodynamic properties analogous to those of a mixture of ideal gases. The enthalpy of mixing is zero as is the volume change on mixing by definition; the closer to zero the enthalpy of mixing is, the more "ideal" the behavior of the solution becomes. The vapor pressures of the solvent and solute obey Raoult's law and Henry's law, respectively, and the activity coefficient (which measures deviation from ideality) is equal to one for each component. The concept of an ideal solution is fundamental to both thermodynamics and chemical thermodynamics and their applications, such as the explanation of colligative properties. Physical origin. Ideality of solutions is analogous to ideality for gases, with the important difference that intermolecular interactions in liquids are strong and cannot simply be neglected as they can for ideal gases. Instead we assume that the mean strength of the interactions are the same between all the molecules of the solution. More formally, for a mix of molecules of A and B, then the interactions between unlike neighbors ("U"AB) and like neighbors "U"AA and "U"BB must be of the same average strength, i.e., 2 "U"AB = "U"AA + UBB and the longer-range interactions must be nil (or at least indistinguishable). If the molecular forces are the same between AA, AB and BB, i.e., "U"AB = "U"AA = "U"BB, then the solution is automatically ideal. If the molecules are almost identical chemically, e.g., 1-butanol and 2-butanol, then the solution will be almost ideal. Since the interaction energies between A and B are almost equal, it follows that there is only a very small overall energy (enthalpy) change when the substances are mixed. The more dissimilar the nature of A and B, the more strongly the solution is expected to deviate from ideality. Formal definition. Different related definitions of an ideal solution have been proposed. The simplest definition is that an ideal solution is a solution for which each component obeys Raoult's law formula_0 for all compositions. Here formula_1 is the vapor pressure of component formula_2 above the solution, formula_3 is its mole fraction and formula_4 is the vapor pressure of the pure substance formula_2 at the same temperature. This definition depends on vapor pressure, which is a directly measurable property, at least for volatile components. The thermodynamic properties may then be obtained from the chemical potential μ (which is the partial molar Gibbs energy "g") of each component. If the vapor is an ideal gas, formula_5 The reference pressure formula_6 may be taken as formula_7 = 1 bar, or as the pressure of the mix, whichever is simpler. On substituting the value of formula_1 from Raoult's law, formula_8 This equation for the chemical potential can be used as an alternate definition for an ideal solution. However, the vapor above the solution may not actually behave as a mixture of ideal gases. Some authors therefore define an ideal solution as one for which each component obeys the fugacity analogue of Raoult's law formula_9. Here formula_10 is the fugacity of component formula_2 in solution and formula_11 is the fugacity of formula_2 as a pure substance. Since the fugacity is defined by the equation formula_12 this definition leads to ideal values of the chemical potential and other thermodynamic properties even when the component vapors above the solution are not ideal gases. 
An equivalent statement uses thermodynamic activity instead of fugacity. Thermodynamic properties. Volume. If we differentiate this last equation with respect to formula_13 at formula_14 constant we get: formula_15 Since we know from the Gibbs potential equation that: formula_16 with the molar volume formula_17, these last two equations put together give: formula_18 Since all this, done as a pure substance, is valid in an ideal mix just adding the subscript formula_2 to all the intensive variables and changing formula_17 to formula_19, with optional overbar, standing for partial molar volume: formula_20 Applying the first equation of this section to this last equation we find: formula_21 which means that the partial molar volumes in an ideal mix are independent of composition. Consequently, the total volume is the sum of the volumes of the components in their pure forms: formula_22 Enthalpy and heat capacity. Proceeding in a similar way but taking the derivative with respect to formula_14 we get a similar result for molar enthalpies: formula_23 Remembering that formula_24 we get: formula_25 which in turn means that formula_26 and that the enthalpy of the mix is equal to the sum of its component enthalpies. Since formula_27 and formula_28, similarly formula_29 It is also easily verifiable that formula_30 Entropy of mixing. Finally since formula_31 we find that formula_32 Since the Gibbs free energy per mole of the mixture formula_33 is formula_34 then formula_35 At last we can calculate the molar entropy of mixing since formula_36 and formula_37 formula_38 formula_39 Consequences. Solvent–solute interactions are the same as solute–solute and solvent–solvent interactions, on average. Consequently, the enthalpy of mixing (solution) is zero and the change in Gibbs free energy on mixing is determined solely by the entropy of mixing. Hence the molar Gibbs free energy of mixing is formula_40 or for a two-component ideal solution formula_41 where m denotes molar, i.e., change in Gibbs free energy per mole of solution, and formula_3 is the mole fraction of component formula_2. Note that this free energy of mixing is always negative (since each formula_42, each formula_43 or its limit for formula_44 must be negative (infinite)), i.e., "ideal solutions are miscible at any composition" and no phase separation will occur. The equation above can be expressed in terms of chemical potentials of the individual components formula_45 where formula_46 is the change in chemical potential of formula_2 on mixing. If the chemical potential of pure liquid formula_2 is denoted formula_47, then the chemical potential of formula_2 in an ideal solution is formula_48 Any component formula_2 of an ideal solution obeys Raoult's Law over the entire composition range: formula_49 where formula_50 is the equilibrium vapor pressure of pure component formula_2 and formula_51is the mole fraction of component formula_2 in solution. Non-ideality. Deviations from ideality can be described by the use of Margules functions or activity coefficients. A single Margules parameter may be sufficient to describe the properties of the solution if the deviations from ideality are modest; such solutions are termed "regular". In contrast to ideal solutions, where volumes are strictly additive and mixing is always complete, the volume of a non-ideal solution is not, in general, the simple sum of the volumes of the component pure liquids and solubility is not guaranteed over the whole composition range. 
By measurement of densities, thermodynamic activity of components can be determined. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p_i=x_ip_i^*" }, { "math_id": 1, "text": "p_i" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "p_i^*" }, { "math_id": 5, "text": "\\mu(T,p_i) = g(T,p_i)=g^\\mathrm{u}(T,p^u)+RT\\ln {\\frac{p_i}{p^u}}." }, { "math_id": 6, "text": "p^u" }, { "math_id": 7, "text": "P^o" }, { "math_id": 8, "text": "\\mu(T,p_i) =g^\\mathrm{u}(T,p^u)+RT\\ln {\\frac{p_i^*}{p^u}} + RT\\ln x_i =\\mu _i^*+ RT\\ln x_i." }, { "math_id": 9, "text": "f_i = x_i f_i^*" }, { "math_id": 10, "text": "f_i" }, { "math_id": 11, "text": "f_i^*" }, { "math_id": 12, "text": "\\mu(T,P) = g(T,P)=g^\\mathrm{u}(T,p^u)+RT\\ln {\\frac{f_i}{p^u}}" }, { "math_id": 13, "text": "p" }, { "math_id": 14, "text": "T" }, { "math_id": 15, "text": "\\left(\\frac{\\partial g(T,P)}{\\partial P}\\right)_{T}=RT\\left(\\frac{\\partial \\ln f}{\\partial P}\\right)_{T}." }, { "math_id": 16, "text": "\\left(\\frac{\\partial g(T,P)}{\\partial P}\\right)_{T}=v" }, { "math_id": 17, "text": "v" }, { "math_id": 18, "text": "\\left(\\frac{\\partial \\ln f}{\\partial P}\\right)_{T}=\\frac{v}{RT}." }, { "math_id": 19, "text": "\\bar{v_i}" }, { "math_id": 20, "text": "\\left(\\frac{\\partial \\ln f_i}{\\partial P}\\right)_{T,x_i}=\\frac{\\bar{v_i}}{RT}." }, { "math_id": 21, "text": "v_i^* = \\bar{v}_i" }, { "math_id": 22, "text": "V = \\sum_i V_i^*." }, { "math_id": 23, "text": "\\frac{g(T,P)-g^\\mathrm{gas}(T,p^u)}{RT}=\\ln\\frac{f}{p^u}." }, { "math_id": 24, "text": "\\left( \\frac{\\partial \\frac{g}{T}}{\\partial T}\\right)_P=-\\frac{h}{T^2}" }, { "math_id": 25, "text": "-\\frac{\\bar{h_i}-h_i^\\mathrm{gas}}{R}=-\\frac{h_i^*-h_i^\\mathrm{gas}}{R}" }, { "math_id": 26, "text": "\\bar{h_i}=h_i^*" }, { "math_id": 27, "text": "\\bar{u_i}=\\bar{h_i}-p\\bar{v_i}" }, { "math_id": 28, "text": "u_i^* = h_i^* - p v_i^*" }, { "math_id": 29, "text": "u_i^*=\\bar{u_i}." }, { "math_id": 30, "text": "C_{pi}^*=\\bar{C_{pi}}." }, { "math_id": 31, "text": "\\bar{g_i}=\\mu _i=g_i^\\mathrm{gas}+RT\\ln \\frac{f_i}{p^u}=g_i^\\mathrm{gas}+RT\\ln \\frac{f_i^*}{p^u}+RT\\ln x_i=\\mu _i^*+ RT\\ln x_i" }, { "math_id": 32, "text": "\\Delta g_{i,\\mathrm{mix}}=RT\\ln x_i." }, { "math_id": 33, "text": "G_m" }, { "math_id": 34, "text": "G_m = \\sum_i x_i{g_i}" }, { "math_id": 35, "text": "\\Delta G_\\mathrm{m,mix}=RT\\sum_i{x_i\\ln x_i}." }, { "math_id": 36, "text": "g_i^*=h_i^*-Ts_i^*" }, { "math_id": 37, "text": "\\bar{g_i}=\\bar{h_i}-T\\bar{s_i}" }, { "math_id": 38, "text": "\\Delta s_{i,\\mathrm{mix}}=-R\\sum _i \\ln x_i" }, { "math_id": 39, "text": "\\Delta S_\\mathrm{m,mix}=-R\\sum _i x_i\\ln x_i." }, { "math_id": 40, "text": "\\Delta G_{\\mathrm{m,mix}} = RT \\sum_i x_i \\ln x_i " }, { "math_id": 41, "text": "\\Delta G_{\\mathrm{m,mix}} = RT (x_A \\ln x_A + x_B \\ln x_B)" }, { "math_id": 42, "text": "x_i \\in [0,1]" }, { "math_id": 43, "text": "\\ln x_i" }, { "math_id": 44, "text": "x_i \\to 0" }, { "math_id": 45, "text": "\\Delta G_{\\mathrm{m,mix}} = \\sum_i x_i \\Delta\\mu_{i,\\mathrm{mix}}" }, { "math_id": 46, "text": "\\Delta\\mu_{i,\\mathrm{mix}}=RT\\ln x_i" }, { "math_id": 47, "text": "\\mu_i^*" }, { "math_id": 48, "text": "\\mu_i = \\mu_i^* + RT \\ln x_i." }, { "math_id": 49, "text": "\\ p_{i}=(p_{i})_\\text{pure} x_i " }, { "math_id": 50, "text": "(p_i)_\\text{pure}" }, { "math_id": 51, "text": " x_i\\," } ]
https://en.wikipedia.org/wiki?curid=731401
73141641
Refocusing (semantics)
In computer science, refocusing is a program transformation used to implement a reduction semantics, i.e., a small-step operational semantics with an explicit representation of the reduction context, more efficiently. It is a step towards implementing a deterministic semantics as a deterministic abstract machine. A small-step operational semantics defines the meaning of a given program formula_0 as a sequence of one-step reductions that starts with formula_0 and continues with a sequence of reducts formula_1, where formula_2: formula_3 A one-step reduction from formula_4 to formula_5 is achieved by decomposing formula_4 into a reduction context and a potential redex, contracting this potential redex, and recomposing the context around the contractum. A reduction semantics is a small-step operational semantics with an explicit representation of the context of each potential redex. Writing formula_6 for such a context, the sequence of one-step reductions above reads: formula_7 where the initial program formula_0 decomposes into the context formula_8 and the potential redex formula_9, the potential redex formula_9 is contracted into formula_10, and recomposing formula_8 around formula_10 yields the next reduct, which in turn decomposes into the context formula_11 and the potential redex formula_12, and so on. This succession of decompositions, contractions, and recompositions is depicted as follows: Refocusing is a deforestation of the successive reducts: After the initial decomposition, the succession of contractions and refocusings has the structure of a deterministic abstract machine. Background. The semantics of a programming language defines the meaning of the programs written in this programming language. Plotkin's Structural Operational Semantics is a small-step semantics where the meaning of a program is defined step by step and where each step is an elementary operation that is carried out with contraction rules. Example. Consider the following deterministic language of arithmetic expressions over integers with additions and quotients, in the manner of Hutton's razor. In OCaml:
type operator = Add | Quo;;

type expression = Lit of int | Opr of expression * operator * expression;;
So formula_13 is parsed as codice_0 and formula_14 is parsed as codice_1.
type value = Int of int;;

let expression_of_value (v : value) : expression =
  match v with
    Int n -> Lit n;;
The smallest potentially reducible expressions ("potential redexes") are operations over values, and they are carried out with a contraction function that maps an actual redex to an expression and otherwise yields an error message:
type potential_redex = PR of value * operator * value;;

type contractum_or_error = Contractum of expression | Error of string;;

let contract (pr : potential_redex) : contractum_or_error =
  match pr with
    PR (Int n1, Add, Int n2) -> Contractum (Lit (n1 + n2))
  | PR (Int n1, Quo, Int n2) -> if n2 = 0
                                then Error (string_of_int n1 ^ " / 0")
                                else Contractum (Lit (n1 / n2));;
The addition of two integers is an actual redex, and so is the quotient of an integer and a nonzero integer. So for example, the expression codice_2, i.e., formula_15, reduces to codice_3, i.e., formula_16, the expression codice_1, i.e., formula_14, reduces to codice_5, i.e., formula_17 since formula_18, and the expression codice_6, i.e., formula_19, reduces to codice_7. 
Say that the reduction strategy is leftmost-innermost (i.e., depth first and left to right), as captured by the following congruence rules: The following one-step reduction function implements this strategy:
(* Outcome of one reduction step: a value, a new expression, or a stuck term.
   (This type declaration is used below; it is reconstructed here from that use.) *)
type value_or_expression_or_stuck =
    Value of value
  | Expression of expression
  | Stuck of string;;

let rec reduce_d (e : expression) : value_or_expression_or_stuck =
  match e with
    Lit n -> Value (Int n)
  | Opr (e1, opr, e2) ->
      match reduce_d e1 with
        Value v1 ->
          (match reduce_d e2 with
             Value v2 ->
               (match contract (PR (v1, opr, v2)) with
                  Contractum e -> Expression e
                | Error s -> Stuck s)
           | Expression e2' -> Expression (Opr (expression_of_value v1, opr, e2'))
           | Stuck s -> Stuck s)
      | Expression e1' -> Expression (Opr (e1', opr, e2))
      | Stuck s -> Stuck s;;
In words: a literal is already a value; for an operation, the left subexpression is reduced first; once it is a value, the right subexpression is reduced; and once both are values, the potential redex they form is contracted. Evaluation is achieved by iterated reduction. It yields either a value or an error message:
type result = Normal_form of value | Wrong of string;;

let rec normalize_d (e : expression) : result =
  match reduce_d e with
    Value v -> Normal_form v
  | Expression e' -> normalize_d e'
  | Stuck s -> Wrong s;;
This one-step reduction function is structurally recursive. It implements a Structural Operational Semantics for this minimalistic language of arithmetic expressions with errors. For example, the reduction function implicitly constructs the following proof tree to carry out the reduction step formula_20: Reformatting this proof tree to emphasize the implicit decomposition yields: A reduction semantics is a small-step operational semantics where the implicit context of a potential redex is made explicit. So one reduction step gives rise to a decomposition of the given expression into a context and a potential redex, a contraction of this potential redex, and a recomposition of the context around the contractum. And pictorially, an arithmetic expression is evaluated in successive steps: Transforming the one-step reduction function into Continuation-Passing Style, delimiting the continuation from type codice_29 to type codice_30, and splitting this delimited continuation into two (one to continue the decomposition and one to recompose, using the type isomorphism between formula_21 and formula_22) makes it simple to implement the corresponding normalization function:
(* The second constructor of this type is elided in the source;
   it is reconstructed here from its use in decompose_expression_cc. *)
type value_or_decomposition_cc =
    Val_cc of value
  | Dec_cc of potential_redex * (value -> value_or_decomposition_cc) * (expression -> expression);;

let rec decompose_expression_cc (e : expression)
                                (kd : value -> value_or_decomposition_cc)
                                (kr : expression -> expression) : value_or_decomposition_cc =
  match e with
    Lit n -> kd (Int n)
  | Opr (e1, opr, e2) ->
      decompose_expression_cc e1
        (fun v1 -> decompose_expression_cc e2
                     (fun v2 -> Dec_cc (PR (v1, opr, v2), kd, kr))
                     (fun e2' -> kr (Opr (expression_of_value v1, opr, e2'))))
        (fun e1' -> kr (Opr (e1', opr, e2)));;

let decompose_cc (e : expression) : value_or_decomposition_cc =
  decompose_expression_cc e (fun v -> Val_cc v) (fun e' -> e');;

let rec iterate_cc_rb (vod : value_or_decomposition_cc) : result =
  match vod with
    Val_cc v -> Normal_form v
  | Dec_cc (pr, kd, kr) ->
      (match contract pr with
         Contractum e -> iterate_cc_rb (decompose_cc (kr e))
                                     (*^^^^^^^^^^^^^^^^^*)
       | Error s -> Wrong s);;

let normalize_cc_rb (e : expression) : result =
  iterate_cc_rb (decompose_cc e);;
In the underlined code, the contractum is recomposed and the result is decomposed. This normalization function is said to be "reduction-based" because it enumerates all the reducts in the reduction sequence. Refocusing. Extensionally, the refocusing thesis is that there is no need to reconstruct the next reduct in order to decompose it in the next reduction step. In other words, these intermediate reducts can be deforested. 
Pictorially: Intensionally, the refocusing thesis is that this deforestation is achieved by continuing the decomposition over the contractum in the current context.
let rec iterate_cc_rf (vod : value_or_decomposition_cc) : result =
  match vod with
    Val_cc v -> Normal_form v
  | Dec_cc (pr, kd, kr) ->
      (match contract pr with
         Contractum e -> iterate_cc_rf (decompose_expression_cc e kd kr)
                                     (*^^^^^^^^^^^^^^^^^^^^^^^^^^^^^*)
       | Error s -> Wrong s);;

let normalize_cc_rf (e : expression) : result =
  iterate_cc_rf (decompose_cc e);;
In the underlined code, the decomposition is continued. This normalization function is said to be "reduction-free" because it enumerates none of the reducts in the reduction sequence. In practice, the two continuations are defunctionalized into traditional first-order, inside-out contexts, which yields an implementation of Felleisen and Hieb's reduction semantics, a small-step semantics that was designed independently of continuations and defunctionalization, but whose representation, as illustrated here, can be obtained by CPS-transforming and defunctionalizing the representation of a Structural Operational Semantics. The construction sketched above is completely formalized using the Coq Proof Assistant. Applications. Over the years, refocusing has been used for inter-deriving calculi and abstract machines. Besides the CEK Machine, the Krivine machine, and the SECD machine, examples also include the chemical abstract machine and abstract machines for JavaScript. Bach Poulsen and Mosses have also used refocusing for implementing Structural Operational Semantics and Modular Structural Operational Semantics. More broadly, refocusing has been used for deriving type systems and implementations for coroutines, for going from type checking via reduction to type checking via evaluation, for deriving a classical call-by-need sequent calculus, for deriving interpretations of the gradually-typed lambda calculus, and for full reduction. Correctness. Danvy and Nielsen stated conditions for refocusing and proved them informally. Sieczkowski, Biernacka, and Biernacki formalized refocusing using the Coq Proof Assistant. Bach Poulsen proved the correctness of refocusing for XSOS using rule induction. Biernacka, Charatonik, and Zielińska generalized refocusing using the Coq Proof Assistant. Using Agda, Swiestra proved the refocusing step as part of his formalization of the syntactic correspondence between the formula_23 calculus with a normal-order reduction strategy and the Krivine machine. Also using Agda, Rozowski proved the refocusing step as part of his formalization of the syntactic correspondence between the formula_23 calculus with an applicative-order reduction strategy and the CEK Machine. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p_0" }, { "math_id": 1, "text": "p_i" }, { "math_id": 2, "text": "i > 0" }, { "math_id": 3, "text": "p_0 \\rightarrow p_1 \\rightarrow p_2 \\rightarrow \\cdots" }, { "math_id": 4, "text": "p_n" }, { "math_id": 5, "text": "p_{n+1}" }, { "math_id": 6, "text": "C" }, { "math_id": 7, "text": "p_0 = C_0[\\mathit{pr}_0] \\rightarrow C_0[c_0] = C_1[\\mathit{pr}_1] \\rightarrow C_1[c_1] = C_2[\\mathit{pr}_2] \\rightarrow C_2[c_2] \\rightarrow \\cdots" }, { "math_id": 8, "text": "C_0" }, { "math_id": 9, "text": "\\mathit{pr}_0" }, { "math_id": 10, "text": "c_0" }, { "math_id": 11, "text": "C_1" }, { "math_id": 12, "text": "\\mathit{pr}_1" }, { "math_id": 13, "text": "1 + 10" }, { "math_id": 14, "text": "11 / 2" }, { "math_id": 15, "text": "(1+10)+100" }, { "math_id": 16, "text": "11+100" }, { "math_id": 17, "text": "5" }, { "math_id": 18, "text": "11 = 5 \\cdot 2 + 1" }, { "math_id": 19, "text": "(1/0)+100" }, { "math_id": 20, "text": "\n(1 - (5 + 5)) - (2 - 20)\n\\rightarrow\n(1 - 10) - (2 - 20)\n" }, { "math_id": 21, "text": "A + B \\rightarrow C" }, { "math_id": 22, "text": "(A \\rightarrow C) \\times (B \\rightarrow C)" }, { "math_id": 23, "text": "\\lambda\\widehat{\\rho}" } ]
https://en.wikipedia.org/wiki?curid=73141641
7314249
Richtmyer–Meshkov instability
The Richtmyer–Meshkov instability (RMI) occurs when two fluids of different density are impulsively accelerated, normally by the passage of a shock wave. The development of the instability begins with small-amplitude perturbations which initially grow linearly with time. This is followed by a nonlinear regime with bubbles appearing in the case of a light fluid penetrating a heavy fluid, and with spikes appearing in the case of a heavy fluid penetrating a light fluid. A chaotic regime is eventually reached and the two fluids mix. This instability can be considered the impulsive-acceleration limit of the Rayleigh–Taylor instability. Dispersion relation. For ideal MHD: formula_0 For Hall MHD: formula_1 For QMHD: formula_2 History. R. D. Richtmyer provided a theoretical prediction, and E. E. Meshkov (Евгений Евграфович Мешков) provided experimental verification. Materials in the cores of stars, like cobalt-56 from Supernova 1987A, were observed earlier than expected. This was evidence of mixing due to Richtmyer–Meshkov and Rayleigh–Taylor instabilities. Examples. During the implosion of an inertial confinement fusion target, the hot shell material surrounding the cold D–T fuel layer is shock-accelerated. This instability is also seen in magnetized target fusion (MTF). Mixing of the shell material and fuel is not desired and efforts are made to minimize any tiny imperfections or irregularities which will be magnified by RMI. Supersonic combustion in a scramjet may benefit from RMI, as the fuel–oxidant interface is enhanced by the breakup of the fuel into finer droplets. Studies of deflagration-to-detonation transition (DDT) processes also show that RMI-induced flame acceleration can result in detonation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\omega^2 - 2k_\\parallel ^2 / \\beta )(\\omega ^4 - (2/ \\beta + 1)k^2 \\omega^2 +2k_\\parallel^2k^2 / \\beta )=0" }, { "math_id": 1, "text": "{\\displaystyle (\\omega ^{2}-2k_{\\parallel }^{2}/\\beta )(\\omega ^{4}-(2/\\beta +1)k^{2}\\omega ^{2}+2k_{\\parallel }^{2}k^{2}/\\beta )-2d_s^2 k_\\parallel^2 k^2 \\omega^2(\\omega^2 - k^2)/ \\beta =0}" }, { "math_id": 2, "text": "{\\displaystyle {\\displaystyle ((1+2/ \\beta c^2)\\omega ^{2}-2k_{\\parallel }^{2}/\\beta )((1+2/ \\beta c^2)\\omega ^{4}-(2/\\beta +1)k^{2}\\omega ^{2}+2k_{\\parallel }^{2}k^{2}/\\beta )-2d_{s}^{2}k_{\\parallel }^{2}k^{2}\\omega ^{2}(\\omega ^{2}-k^{2})/\\beta =0}} " } ]
https://en.wikipedia.org/wiki?curid=7314249
73153690
Persistence module
A persistence module is a mathematical structure in persistent homology and topological data analysis that formally captures the persistence of topological features of an object across a range of scale parameters. A persistence module often consists of a collection of homology groups (or vector spaces if using field coefficients) corresponding to a filtration of topological spaces, and a collection of linear maps induced by the inclusions of the filtration. The concept of a persistence module was first introduced in 2005 as an application of graded modules over polynomial rings, thus importing well-developed algebraic ideas from classical commutative algebra theory to the setting of persistent homology. Since then, persistence modules have been one of the primary algebraic structures studied in the field of applied topology. Definition. Single Parameter Persistence Modules. Let formula_0 be a totally ordered set and let formula_1 be a field. The set formula_0 is sometimes called the "indexing set". Then a single-parameter "persistence module" formula_2 is a functor formula_3 from the poset category of formula_0 to the category of vector spaces over formula_1 and linear maps. A single-parameter persistence module indexed by a discrete poset such as the integers can be represented intuitively as a diagram of spaces: formula_4To emphasize the indexing set being used, a persistence module indexed by formula_0 is sometimes called a formula_0-persistence module, or simply a formula_0-module. Common choices of indexing sets include formula_5, etc. One can alternatively use a set-theoretic definition of a persistence module that is equivalent to the categorical viewpoint: A persistence module is a pair formula_6 where formula_7 is a collection formula_8 of formula_9-vector spaces and formula_10 is a collection formula_11 of linear maps where formula_12 for each formula_13, such that formula_14 for any formula_15 (i.e., all the maps commute). Multiparameter Persistence Modules. Let formula_16 be a product of formula_17 totally ordered sets, i.e., formula_18 for some totally ordered sets formula_19. Then by endowing formula_16 with the product partial order given by formula_20 only if formula_21 for all formula_22, we can define a "multiparameter persistence module" indexed by formula_16 as a functor formula_23. This is a generalization of single-parameter persistence modules, and in particular, this agrees with the single-parameter definition when formula_24. In this case, a formula_16-persistence module is referred to as an formula_17-dimensional or formula_17-parameter persistence module, or simply a multiparameter or multidimensional module if the number of parameters is already clear from context. Multidimensional persistence modules were first introduced in 2009 by Carlsson and Zomorodian. Since then, there has been a significant amount of research into the theory and practice of working with multidimensional modules, since they provide more structure for studying the shape of data. Namely, multiparameter modules can have greater density sensitivity and robustness to outliers than single-parameter modules, making them a potentially useful tool for data analysis. One downside of multiparameter persistence is its inherent complexity. This makes performing computations related to multiparameter persistence modules difficult. In the worst case, the computational complexity of multidimensional persistent homology is exponential. 
The most common way to measure the similarity of two multiparameter persistence modules is using the interleaving distance, which is an extension of the bottleneck distance. Examples. Homology Modules. When using homology with coefficients in a field, a homology group has the structure of a vector space. Therefore, given a filtration of spaces formula_25, by applying the homology functor at each index we obtain a persistence module formula_26 for each formula_27 called the (formula_28th-dimensional) "homology module" of formula_29. The vector spaces of the homology module can be defined index-wise as formula_30 for all formula_31, and the linear maps are induced by the inclusion maps of formula_29. Homology modules are the most ubiquitous examples of persistence modules, as they encode information about the number and scale of topological features of an object (usually derived from building a filtration on a point cloud) in a purely algebraic structure, thus making understanding the shape of the data amenable to algebraic techniques, imported from well-developed areas of mathematics such as commutative algebra and representation theory. Interval Modules. A primary concern in the study of persistence modules is whether modules can be decomposed into "simpler pieces", roughly speaking. In particular, it is algebraically and computationally convenient if a persistence module can be expressed as a direct sum of smaller modules known as interval modules. Let formula_32 be a nonempty subset of a poset formula_33. Then formula_32 is an "interval" in formula_33 if the following two conditions hold: for any formula_34, if formula_35, then formula_36; and for any formula_34, there is a sequence of elements formula_37 with formula_38 and formula_39 such that any two consecutive elements formula_40 of the sequence, formula_41, are comparable. Now given an interval formula_42 we can define a persistence module formula_43 index-wise as follows: formula_44; formula_45. The module formula_43 is called an "interval module". Free Modules. Let formula_46. Then we can define a persistence module formula_47 with respect to formula_48 where the spaces are given by formula_49, and the maps defined via formula_50. Then formula_47 is known as a "free (persistence) module". One can also define a free module in terms of decomposition into interval modules. For each formula_46 define the interval formula_51, sometimes called a "free interval." Then a persistence module formula_29 is a free module if there exists a multiset formula_52 such that formula_53. In other words, a module is a free module if it can be decomposed as a direct sum of free interval modules. Properties. Finite Type Conditions. A persistence module formula_54 indexed over formula_55 is said to be of "finite type" if the following conditions hold for all formula_56: the vector space formula_57 is finite-dimensional, and there is some index formula_58 such that the map formula_59 is an isomorphism for all formula_60. If formula_54 satisfies the first condition, then formula_54 is commonly said to be "pointwise finite-dimensional (p.f.d.)". The notion of pointwise finite-dimensionality immediately extends to arbitrary indexing sets. The definition of finite type can also be adapted to continuous indexing sets. Namely, a module formula_54 indexed over formula_61 is of finite type if formula_54 is p.f.d., and formula_54 contains a finite number of unique vector spaces. Formally speaking, this requires that for all but a finite number of points formula_62 there is a neighborhood formula_58 of formula_63 such that formula_64 for all formula_65, and also that there is some formula_66 such that formula_67 for all formula_68. A module satisfying only the former property is sometimes labeled "essentially discrete", whereas a module satisfying both properties is known as "essentially finite". 
An formula_61-persistence module is said to be "semicontinuous" if for any formula_62 and any formula_69 sufficiently close to formula_63, the map formula_70 is an isomorphism. Note that this condition is redundant if the other finite type conditions above are satisfied, so it is not typically included in the definition, but is relevant in certain circumstances. Structure Theorem. One of the primary goals in the study of persistence modules is to classify modules according to their decomposability into interval modules. A persistence module that admits a decomposition as a direct sum of interval modules is often simply called "interval decomposable." One of the primary results in this direction is that any p.f.d. persistence module indexed over a totally ordered set is interval decomposable. This is sometimes referred to as the "structure theorem for persistence modules." The case when formula_33 is finite is a straightforward application of the structure theorem for finitely generated modules over a principal ideal domain. For modules indexed over formula_71, the first known proof of the structure theorem is due to Webb. The theorem was extended to the case of formula_61 (or any totally ordered set containing a countable subset that is dense in formula_61 with the order topology) by Crawley-Boevey in 2015. The generalized version of the structure theorem, i.e., for p.f.d. modules indexed over arbitrary totally ordered sets, was established by Botnan and Crawley-Boevey in 2019.
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "M:T\\to \\mathbf{Vec}_K" }, { "math_id": 4, "text": "\\cdots \\to M_{-1} \\to M_0 \\to M_1 \\to M_2 \\to \\cdots " }, { "math_id": 5, "text": "\\mathbb R, \\mathbb Z, \\mathbb N" }, { "math_id": 6, "text": "(V,\\pi) " }, { "math_id": 7, "text": "V " }, { "math_id": 8, "text": "\\{V_z\\}_{z\\in T} " }, { "math_id": 9, "text": "K " }, { "math_id": 10, "text": "\\pi " }, { "math_id": 11, "text": "\\{\\pi_{y,z}\\}_{y\\leq z\\in T} " }, { "math_id": 12, "text": "\\pi_{y,z} : V_y \\to V_z " }, { "math_id": 13, "text": "y\\leq z\\in T " }, { "math_id": 14, "text": "\\pi_{y,z} \\circ \\pi_{x,y} = \\pi_{x,z} " }, { "math_id": 15, "text": "x \\leq y \\leq z \\in T " }, { "math_id": 16, "text": "P" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "P=T_1 \\times \\dots \\times T_n" }, { "math_id": 19, "text": "T_i " }, { "math_id": 20, "text": "(s_1,\\dots,s_n)\\leq (t_1,\\dots,t_n)" }, { "math_id": 21, "text": "s_i \\leq t_i" }, { "math_id": 22, "text": "i=1,\\dots,n" }, { "math_id": 23, "text": "M:P\\to \\mathbf{Vec}_K" }, { "math_id": 24, "text": "n=1" }, { "math_id": 25, "text": "F:P \\to \\mathbf{Top} " }, { "math_id": 26, "text": "H_i(F) : P \\to \\mathbf{Vec}_K " }, { "math_id": 27, "text": "i=1,2,\\dots " }, { "math_id": 28, "text": "i " }, { "math_id": 29, "text": "F " }, { "math_id": 30, "text": "H_i(F)_z = H_i (F_z) " }, { "math_id": 31, "text": "z\\in P " }, { "math_id": 32, "text": "J " }, { "math_id": 33, "text": "P " }, { "math_id": 34, "text": "x,z \\in J " }, { "math_id": 35, "text": "x \\leq y \\leq z \\in P " }, { "math_id": 36, "text": "y \\in J " }, { "math_id": 37, "text": "p_1,p_2,\\dots, p_n \\in J " }, { "math_id": 38, "text": "p_1=x " }, { "math_id": 39, "text": "p_n=z " }, { "math_id": 40, "text": "p_i, p_j " }, { "math_id": 41, "text": "i,j \\in \\{1,\\dots , n\\} " }, { "math_id": 42, "text": "J\\subseteq P " }, { "math_id": 43, "text": "\\mathbb I^J " }, { "math_id": 44, "text": "\\mathbb I^J_z := \n\\begin{cases}\n K & \\text{if } z \\in J\\\\\n 0 & \\text{otherwise }\n \\end{cases} " }, { "math_id": 45, "text": "\\mathbb I^J_{y,z} := \n\\begin{cases}\n \\operatorname{id}_K & \\text{if } y\\leq z \\in J\\\\\n 0 & \\text{otherwise }\n \\end{cases} " }, { "math_id": 46, "text": "a\\in P " }, { "math_id": 47, "text": "Q^a " }, { "math_id": 48, "text": "a " }, { "math_id": 49, "text": "Q^a_z := \n\\begin{cases}\n K & \\text{if } z \\geq a\\\\\n 0 & \\text{otherwise }\n \\end{cases} " }, { "math_id": 50, "text": "Q^a_{y,z} := \n\\begin{cases}\n \\operatorname{id}_K & \\text{if } z \\geq a\\\\\n 0 & \\text{otherwise }\n \\end{cases} " }, { "math_id": 51, "text": "a^\\llcorner := \\{ b \\in P \\mid b \\geq a \\} " }, { "math_id": 52, "text": "\\mathfrak J(F) \\subseteq P " }, { "math_id": 53, "text": "F = \\bigoplus_{a\\in \\mathfrak J(F)}\\mathbb I^{a^\\llcorner} " }, { "math_id": 54, "text": "M " }, { "math_id": 55, "text": "\\mathbb N " }, { "math_id": 56, "text": "n \\in \\mathbb N " }, { "math_id": 57, "text": "M_n " }, { "math_id": 58, "text": "N " }, { "math_id": 59, "text": "M_{N,n} " }, { "math_id": 60, "text": "n \\geq N " }, { "math_id": 61, "text": "\\mathbb R " }, { "math_id": 62, "text": "x\\in \\mathbb R " }, { "math_id": 63, "text": "x " }, { "math_id": 64, "text": "M_y \\cong M_z " }, { "math_id": 65, "text": "y,z \\in N " }, { "math_id": 66, "text": "w \\in \\mathbb R " }, { "math_id": 67, "text": "M_v = 0 " }, { "math_id": 68, 
"text": "v \\leq w " }, { "math_id": 69, "text": "y\\leq x " }, { "math_id": 70, "text": "M_{y,x}: M_y \\to M_x " }, { "math_id": 71, "text": "\\mathbb Z " } ]
https://en.wikipedia.org/wiki?curid=73153690
7315901
Milü
Pi approximations by astronomer Zu Chongzhi Milü (密率; "close ratio"), also known as Zulü (Zu's ratio), is the name given to an approximation to π (pi) found by Chinese mathematician and astronomer Zu Chongzhi in the 5th century. Using Liu Hui's algorithm (which is based on the areas of regular polygons approximating a circle), Zu famously computed π to be between 3.1415926 and 3.1415927 and gave two rational approximations of π, 22/7 and 355/113, naming them respectively Yuelü (约率; "approximate ratio") and Milü. 355/113 is the best rational approximation of π with a denominator of four digits or fewer, being accurate to six decimal places. It is within 0.000009% of the value of π, or in terms of common fractions overestimates π by less than 1/3748629. The next rational number (ordered by size of denominator) that is a better rational approximation of π is 52163/16604, though it is still only correct to six decimal places. To be accurate to seven decimal places, one needs to go as far as 86953/27678. For eight, 102928/32763 is needed. The accuracy of Milü to the true value of π can be explained using the continued fraction expansion of π, the first few terms of which are [3; 7, 15, 1, 292, 1, 1, ...]. A property of continued fractions is that truncating the expansion of a given number at any point will give the "best rational approximation" to the number. To obtain Milü, truncate the continued fraction expansion of π immediately before the term 292; that is, π is approximated by the finite continued fraction [3; 7, 15, 1], which is equivalent to Milü. Since 292 is an unusually large term in a continued fraction expansion (corresponding to the next truncation introducing only a very small term, 1/292, to the overall fraction), this convergent will be especially close to the true value of π: formula_0 Zu's contemporary calendarist and mathematician He Chengtian invented a fraction interpolation method called "harmonization of the divisor of the day" to increase the accuracy of approximations of π by iteratively adding the numerators and denominators of fractions. Zu Chongzhi's approximation π ≈ 355/113 can be obtained with He Chengtian's method. An easy mnemonic helps memorize this fraction by writing down each of the first three odd numbers twice: 1 1 3 3 5 5, then dividing the decimal number represented by the last 3 digits by the decimal number given by the first three digits: 355/113. (Note that in Eastern Asia, fractions are read by stating the denominator first, followed by the numerator). Alternatively, . Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
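To illustrate the continued-fraction truncation described above, the following short Python sketch (an editorial illustration, not part of the original article) evaluates the finite continued fraction [3; 7, 15, 1] with exact rational arithmetic and compares it with π.

from fractions import Fraction
import math

def continued_fraction_value(terms):
    # Evaluate a finite continued fraction [a0; a1, a2, ...] exactly, from the inside out.
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

milu = continued_fraction_value([3, 7, 15, 1])   # truncate just before the term 292
yuelu = continued_fraction_value([3, 7])         # the coarser convergent
print(milu, yuelu)                               # 355/113 22/7
print(float(milu) - math.pi)                     # about 2.7e-7, a slight overestimate

The exact output 355/113 confirms that truncating immediately before the unusually large term 292 yields Milü.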
[ { "math_id": 0, "text": "\\pi = 3 + \\cfrac{1}{7 + \\cfrac{1}{15 + \\cfrac{1}{1 + {\\color{magenta} \\cfrac{1}{292 + \\cdots}}}}} \\quad\\approx\\quad 3 + \\cfrac{1}{7 + \\cfrac{1}{15 + \\cfrac{1}{1 + {\\color{magenta} 0}}}} = \\frac{355}{113}" } ]
https://en.wikipedia.org/wiki?curid=7315901
73165579
Nickel(II) stearate
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Nickel(II) stearate is a metal-organic compound, a salt of nickel and stearic acid with the chemical formula C36H70NiO4. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid. The compound is harmful if swallowed and may cause skin sensitization. Synthesis. An exchange reaction of sodium stearate and nickel dichloride: formula_0 Physical properties. Nickel(II) stearate forms a green powder. The compound is insoluble in water, methanol, ethanol, or ether, soluble in carbon tetrachloride and pyridine, slightly soluble in acetone. Uses. The compound is used as a lubricant and in various industrial applications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathsf{ NiCl_2 + 2C_{17}H_{35}COONa \\ \\xrightarrow{}\\ Ni(C_{17}H_{35}COO)_2\\downarrow + 2NaCl }" } ]
https://en.wikipedia.org/wiki?curid=73165579
73169693
Grothendieck trace theorem
Extension of Lidskii's theorem In functional analysis, the Grothendieck trace theorem is an extension of Lidskii's theorem about the trace and the determinant of a certain class of nuclear operators on Banach spaces, the so-called formula_0-nuclear operators. The theorem was proven in 1955 by Alexander Grothendieck. Lidskii's theorem does not hold in general for Banach spaces. The theorem should not be confused with the Grothendieck trace formula from algebraic geometry. Grothendieck trace theorem. Let formula_1 be a Banach space with the approximation property, and denote its dual by formula_2. ⅔-nuclear operators. Let formula_3 be a nuclear operator on formula_4; then formula_3 is a "formula_0-nuclear operator" if it has a decomposition of the form formula_5 where formula_6 and formula_7, and formula_8 Grothendieck's trace theorem. Let formula_9 denote the eigenvalues of a "formula_0-nuclear operator" formula_3, counted with their algebraic multiplicities. If formula_10 then the following equalities hold: formula_11 and for the Fredholm determinant formula_12 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
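The content of the two identities is easiest to see in the finite-dimensional case, where the ⅔-summability condition is trivially satisfied. The following NumPy sketch (a finite-dimensional illustration only, not the Banach-space theorem itself) checks that the trace of a matrix equals the sum of its eigenvalues and that det(I + A) equals the product of the factors (1 + λ_j):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))     # a generic real 5x5 matrix
eigvals = np.linalg.eigvals(A)      # eigenvalues, counted with algebraic multiplicity

print(np.isclose(np.trace(A), eigvals.sum().real))                          # True
print(np.isclose(np.linalg.det(np.eye(5) + A), np.prod(1 + eigvals).real))  # True

Grothendieck's theorem asserts that the same two identities survive for ⅔-nuclear operators on a Banach space with the approximation property, where neither side is automatic.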
[ { "math_id": 0, "text": "\\tfrac{2}{3}" }, { "math_id": 1, "text": "(B,\\|\\cdot\\|)" }, { "math_id": 2, "text": "B'" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "B" }, { "math_id": 5, "text": "A = \\sum\\limits_{k=1}^{\\infty}\\varphi_k \\otimes f_k" }, { "math_id": 6, "text": "\\varphi_k \\in B" }, { "math_id": 7, "text": "f_k \\in B'" }, { "math_id": 8, "text": "\\sum\\limits_{k=1}^{\\infty}\\|\\varphi_k\\|^{2/3} \\|f_k\\|^{2/3} < \\infty." }, { "math_id": 9, "text": "\\lambda_j(A)" }, { "math_id": 10, "text": "\\sum\\limits_j |\\lambda_j(A)| < \\infty" }, { "math_id": 11, "text": "\\operatorname{tr}A = \\sum\\limits_j \\lambda_j(A)" }, { "math_id": 12, "text": "\\operatorname{det}(I+A) = \\prod\\limits_j (1+\\lambda_j(A))." } ]
https://en.wikipedia.org/wiki?curid=73169693
73172241
Oxidation state localized orbitals
Oxidation state localized orbitals (OSLOs) is a recently introduced concept used to determine the oxidation state of each fragment of a coordination complex. Based on the result of a density functional theory (DFT) calculation, all the occupied molecular orbitals are remixed to obtain the oxidation state localized orbitals. These orbitals are assigned to one of the fragments in the molecule based on the fragment orbital localization index (FOLI). After all the electrons have been assigned, the oxidation state of each fragment can be obtained by calculating the difference between the number of electrons and protons in each fragment. History. Oxidation state is an important index for evaluating the charge distribution within molecules. The most common definition of oxidation state was established by IUPAC: the atom with the higher electronegativity takes all the bonding electrons, and the oxidation state is assigned by calculating the difference between the number of electrons and protons around each atom. However, this definition does not thoroughly consider the distribution of the bonding electrons, which restricts the applicability of oxidation states. To assign the oxidation state of each component in a molecule precisely, especially for organometallic complexes, several research groups, including those of Pedro Salvador and Martin Head-Gordon, have developed different methods to determine oxidation states. In 2009, the Head-Gordon group established a method called localized orbitals bonding analysis (LOBA) to assign the electrons associated with each localized orbital. However, this method fails to provide reasonable oxidation states for some complicated systems, in which the orbitals cannot be localized. To overcome this problem and obtain correct assignments of oxidation states, in 2022 Martin Head-Gordon and Pedro Salvador decided to localize the electrons on fragments rather than on individual atoms. They thus developed the method known as oxidation state localized orbitals (OSLOs), which can accurately assign electrons to different fragments and thereby obtain the oxidation state of each fragment. General methods. Generation of full set of orbitals. Based on the density functional theory calculation, a full set of orbitals is generated that composes the resulting OSLOs for each fragment. These sets are then imported into the algorithm for the further assignment of oxidation states and construction of the OSLOs. Localization measurement. The extent of delocalization can be quantified using Pipek's delocalization measure. For highly localized orbitals the Pipek index is very close to 1, while for highly delocalized orbitals it becomes larger. formula_0 However, this measure cannot evaluate the extent of localization on each individual fragment, so an additional measure is necessary. The fragment orbital localization index (FOLI) is defined as the square root of the ratio of the delocalization index to the fragment population: formula_1 Based on this localization index, the extent of localization on each fragment can be determined: a higher FOLI means that the orbital is less localized on that fragment, and vice versa. Thus, after the FOLI values are obtained, the electrons in each OSLO are assigned to the fragment with the lowest FOLI. Workflow. First, a collection of candidate orbital sets is generated from the results of the density functional theory calculation. Then, after the FOLI has been calculated for each set, the set with the minimal FOLI is selected for further analysis. 
For the selected set, the OSLOs are removed and the oxidation states are assigned based on these OSLOs. In this assignment, the fragment with the higher electron population gets all the electrons in the orbital. All the other sets become the input for the next round of analysis, and the process repeats until all OSLOs have been constructed and all electrons have been assigned. Result. Significance. The valence OSLOs of a molecule can also be constructed using this method. The oxidation states of the ligands and the metal are determined as well; they are consistent with the expected Lewis structure and can provide considerable insight for evaluating redox reactivity. The last FOLI and the Δ-FOLI are two important values for evaluating the quality of the localization result. A last FOLI close to 1 means that the OSLOs are highly localized on a single fragment. The Δ-FOLI, in turn, is the difference between the last FOLI and the second-last FOLI; a larger Δ-FOLI means that the selected set of OSLOs is much better than the other options, indicating that the result is unambiguous. Notable result. For example, the OSLOs obtained for ferrocene are highly consistent with the expected assignment. The metal center was assigned the oxidation state of +2, and the Cp ligands were assigned the oxidation state of -1, which is consistent with the aromatic behavior of Cp. Furthermore, the last FOLI for ferrocene is 1.313 and the Δ-FOLI is 1.800, both indicating that the result is unambiguous. However, for some complicated species possessing noninnocent ligands, the results become ambiguous. For example, several copper-trifluoromethyl complexes show a small Δ-FOLI, which means the result is no longer unique; moreover, whether the copper has the oxidation state of +3 or +1 remains controversial. In addition, for the Grubbs catalyst, the result is inconsistent with the conventional Fischer and Schrock classifications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
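The assignment step can be made concrete with a small numerical sketch. The code below (an editorial illustration with made-up fragment populations, not the published implementation) computes a Pipek-style delocalization index and the FOLI of a single orbital from its fragment populations, then assigns the orbital to the fragment with the lowest FOLI, mirroring the rule described above.

import numpy as np

def foli(populations):
    # Given the fragment populations N_F of one orbital (summing to ~1), return the
    # delocalization index D = 1 / sum(N_F^2) and the per-fragment FOLI values
    # sqrt(D / N_F); a lower FOLI means stronger localization on that fragment.
    populations = np.asarray(populations, dtype=float)
    D = 1.0 / np.sum(populations ** 2)
    return D, np.sqrt(D / populations)

# Hypothetical orbital spread over three fragments (e.g., a metal and two ligands).
pops = [0.82, 0.15, 0.03]
D, foli_values = foli(pops)
assigned = int(np.argmin(foli_values))   # fragment that receives the electrons
print(D, foli_values.round(3), "assigned to fragment", assigned)

With these hypothetical populations the orbital is assigned to fragment 0, the fragment carrying most of its population.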
[ { "math_id": 0, "text": "D_i= \\frac{1}{\\sum_F (N_F^i)^2}" }, { "math_id": 1, "text": "D_i^F=\\sqrt{\\tfrac{D_i}{N_F^i}}" } ]
https://en.wikipedia.org/wiki?curid=73172241
731780
Geometrical optics
Model of optics describing light as geometric rays Geometrical optics, or ray optics, is a model of optics that describes light propagation in terms of "rays". The ray in geometrical optics is an abstraction useful for approximating the paths along which light propagates under certain circumstances. The simplifying assumptions of geometrical optics include that light rays: Geometrical optics does not account for certain optical effects such as diffraction and interference, which are considered in physical optics. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations. Explanation. A light ray is a line or curve that is perpendicular to the light's wavefronts (and is therefore collinear with the wave vector). A slightly more rigorous definition of a light ray follows from Fermat's principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. Geometrical optics is often simplified by making the paraxial approximation, or "small angle approximation". The mathematical behavior then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and "paraxial ray tracing", which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications. Reflection. Glossy surfaces such as mirrors reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. This is known as the Law of Reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. (The magnification of a flat mirror is equal to one.) The law also implies that mirror images are parity inverted, which is perceived as a left-right inversion. Mirrors with curved surfaces can be modeled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the image can be upright or inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen. Refraction. Refraction occurs when light travels through an area of space that has a changing index of refraction. 
The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction formula_0 and another medium with index of refraction formula_1. In such situations, Snell's Law describes the resulting deflection of the light ray: formula_2 where formula_3 and formula_4 are the angles between the normal (to the interface) and the incident and refracted waves, respectively. This phenomenon is also associated with a changing speed of light as seen from the definition of index of refraction provided above which implies: formula_5 where formula_6 and formula_7 are the wave velocities through the respective media. Various consequences of Snell's Law include the fact that for light rays traveling from a material with a high index of refraction to a material with a low index of refraction, it is possible for the interaction with the interface to result in zero transmission. This phenomenon is called total internal reflection and allows for fiber optics technology. As light signals travel down a fiber optic cable, they undergo total internal reflection allowing for essentially no light lost over the length of the cable. It is also possible to produce polarized light rays using a combination of reflection and refraction: When a refracted ray and the reflected ray form a right angle, the reflected ray has the property of "plane polarization". The angle of incidence required for such a scenario is known as Brewster's angle. Snell's Law can be used to predict the deflection of light rays as they pass through "linear media" as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. Additionally, since different frequencies of light have slightly different indexes of refraction in most materials, refraction can be used to produce dispersion spectra that appear as rainbows. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton. Some media have an index of refraction which varies gradually with position and, thus, light rays curve through the medium rather than travel in straight lines. This effect is what is responsible for mirages seen on hot days where the changing index of refraction of the air causes the light rays to bend creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Material that has a varying index of refraction is called a gradient-index (GRIN) material and has many useful properties used in modern optical scanning technologies including photocopiers and scanners. The phenomenon is studied in the field of gradient-index optics. A device which produces converging or diverging light rays due to refraction is known as a lens. Thin lenses produce focal points on either side that can be modeled using the lensmaker's equation. In general, two types of lenses exist: convex lenses, which cause parallel light rays to converge, and concave lenses, which cause parallel light rays to diverge. The detailed prediction of how images are produced by these lenses can be made using ray-tracing similar to curved mirrors. 
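The relations discussed in this section lend themselves to a short numerical sketch (illustrative values only, not from the source): Snell's law for the refraction angle, the critical angle for total internal reflection, and Brewster's angle for an air–water interface.

import math

def snell(theta1_deg, n1, n2):
    # Refraction angle from Snell's law; None signals total internal reflection.
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return None if abs(s) > 1 else math.degrees(math.asin(s))

n_air, n_water = 1.000, 1.333
print(snell(30.0, n_air, n_water))               # about 22 degrees: bent toward the normal
print(snell(60.0, n_water, n_air))               # None: total internal reflection inside the water
print(math.degrees(math.asin(n_air / n_water)))  # critical angle, about 48.6 degrees
print(math.degrees(math.atan(n_water / n_air)))  # Brewster's angle for air-to-water incidence, about 53.1 degrees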
Similarly to curved mirrors, thin lenses follow a simple equation that determines the location of the images given a particular focal length (formula_8) and object distance (formula_9): formula_10 where formula_11 is the distance associated with the image and is considered by convention to be negative if on the same side of the lens as the object and positive if on the opposite side of the lens. The focal length f is considered negative for concave lenses. Incoming parallel rays are focused by a convex lens into an inverted real image one focal length from the lens, on the far side of the lens. Rays from an object at finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. With concave lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at an upright virtual image one focal length from the lens, on the same side of the lens that the parallel rays are approaching on. Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal length, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. Likewise, the magnification of a lens is given by formula_12 where the negative sign is given, by convention, to indicate an upright object for positive values and an inverted object for negative values. Similar to mirrors, upright images produced by single lenses are virtual while inverted images are real. Lenses suffer from aberrations that distort images and focal points. These are due to both to geometrical imperfections and due to the changing index of refraction for different wavelengths of light (chromatic aberration). Underlying mathematics. As a mathematical study, geometrical optics emerges as a short-wavelength limit for solutions to hyperbolic partial differential equations (Sommerfeld–Runge method) or as a property of propagation of field discontinuities according to Maxwell's equations (Luneburg method). In this short-wavelength limit, it is possible to approximate the solution locally by formula_13 where formula_14 satisfy a dispersion relation, and the amplitude formula_15 varies slowly. More precisely, the leading order solution takes the form formula_16 The phase formula_17 can be linearized to recover large wavenumber formula_18, and frequency formula_19. The amplitude formula_20 satisfies a transport equation. The small parameter formula_21 enters the scene due to highly oscillatory initial conditions. Thus, when initial conditions oscillate much faster than the coefficients of the differential equation, solutions will be highly oscillatory, and transported along rays. Assuming coefficients in the differential equation are smooth, the rays will be too. In other words, refraction does not take place. The motivation for this technique comes from studying the typical scenario of light propagation where short wavelength light travels along rays that minimize (more or less) its travel time. Its full application requires tools from microlocal analysis. Sommerfeld–Runge method. The method of obtaining equations of geometrical optics by taking the limit of zero wavelength was first described by Arnold Sommerfeld and J. Runge in 1911. Their derivation was based on an oral remark by Peter Debye. 
Consider a monochromatic scalar field formula_22, where formula_23 could be any of the components of the electric or magnetic field, and hence the function formula_24 satisfies the wave equation formula_25 where formula_26 with formula_27 being the speed of light in vacuum. Here, formula_28 is the refractive index of the medium. Without loss of generality, let us introduce formula_29 to convert the equation to formula_30 Since the underlying principle of geometrical optics lies in the limit formula_31, the following asymptotic series is assumed, formula_32 For large but finite values of formula_33, the series diverges, and one has to be careful to keep only an appropriate number of the first few terms. For each value of formula_33, one can find an optimum number of terms to be kept, and adding more terms than the optimum number might result in a poorer approximation. Substituting the series into the equation and collecting terms of different orders, one finds formula_34 in general, formula_35 The first equation is known as the eikonal equation; it determines the eikonal formula_36 and is a Hamilton–Jacobi equation which, written for example in Cartesian coordinates, becomes formula_37 The remaining equations determine the functions formula_38. Luneburg method. The method of obtaining equations of geometrical optics by analysing surfaces of discontinuities of solutions to Maxwell's equations was first described by Rudolf Karl Luneburg in 1944. It does not restrict the electromagnetic field to have the special form required by the Sommerfeld-Runge method, which assumes the amplitude formula_39 and phase formula_36 satisfy the equation formula_40. This condition is satisfied by e.g. plane waves but is not additive. The main conclusion of Luneburg's approach is the following: Theorem. Suppose the fields formula_41 and formula_42 (in a linear isotropic medium described by dielectric constants formula_43 and formula_44) have finite discontinuities along a (moving) surface in formula_45 described by the equation formula_46. Then Maxwell's equations in the integral form imply that formula_23 satisfies the eikonal equation: formula_47 where formula_48 is the index of refraction of the medium (Gaussian units). An example of such a surface of discontinuity is the initial wave front emanating from a source that starts radiating at a certain instant of time. The surfaces of field discontinuity thus become geometrical optics wave fronts with the corresponding geometrical optics fields defined as: formula_49 Those fields obey transport equations consistent with the transport equations of the Sommerfeld-Runge approach. Light rays in Luneburg's theory are defined as trajectories orthogonal to the discontinuity surfaces and can be shown to obey Fermat's principle of least time, thus establishing the identity of those rays with light rays of standard optics. The above developments can be generalised to anisotropic media. The proof of Luneburg's theorem is based on investigating how Maxwell's equations govern the propagation of discontinuities of solutions. The basic technical lemma is as follows: A technical lemma. Let formula_50 be a hypersurface (a 3-dimensional manifold) in spacetime formula_51 on which one or more of: formula_41, formula_42, formula_43, formula_44, have a finite discontinuity. 
Then at each point of the hypersurface the following formulas hold: formula_52 where the formula_53 operator acts in the formula_54-space (for every fixed formula_55) and the square brackets denote the difference in values on both sides of the discontinuity surface (set up according to an arbitrary but fixed convention, e.g. the gradient formula_56 pointing in the direction of the quantities being subtracted "from"). Sketch of proof. Start with Maxwell's equations away from the sources (Gaussian units): formula_57 Using Stokes' theorem in formula_51 one can conclude from the first of the above equations that for any domain formula_58 in formula_51 with a piecewise smooth (3-dimensional) boundary formula_59 the following is true: formula_60 where formula_61 is the projection of the outward unit normal formula_62 of formula_59 onto the 3D slice formula_63, and formula_64 is the volume 3-form on formula_59. Similarly, one establishes the following from the remaining Maxwell's equations: formula_65 Now by considering arbitrary small sub-surfaces formula_66 of formula_59 and setting up small neighbourhoods surrounding formula_66 in formula_51, and subtracting the above integrals accordingly, one obtains: formula_67 where formula_68 denotes the gradient in the 4D formula_69-space. And since formula_66 is arbitrary, the integrands must be equal to 0 which proves the lemma. It's now easy to show that as they propagate through a continuous medium, the discontinuity surfaces obey the eikonal equation. Specifically, if formula_70 and formula_71 are continuous, then the discontinuities of formula_72 and formula_73 satisfy: formula_74 and formula_75. In this case the last two equations of the lemma can be written as: formula_76 Taking the cross product of the second equation with formula_56 and substituting the first yields: formula_77 The continuity of formula_71 and the second equation of the lemma imply: formula_78, hence, for points lying on the surface formula_79 "only": formula_80 Because of the physical considerations one can assume without loss of generality that formula_81 is of the following form: formula_82, i.e. a 2D surface moving through space, modelled as level surfaces of formula_23. (Mathematically formula_23 exists if formula_83 by the implicit function theorem.) The above equation written in terms of formula_23 becomes: formula_84 i.e., formula_85 which is the eikonal equation and it holds for all formula_86, formula_87, formula_88, since the variable formula_55 is absent. Other laws of optics like Snell's law and Fresnel formulae can be similarly obtained by considering discontinuities in formula_70 and formula_71. General equation using four-vector notation. In four-vector notation used in special relativity, the wave equation can be written as formula_89 and the substitution formula_90 leads to formula_91 Therefore, the eikonal equation is given by formula_92 Once eikonal is found by solving the above equation, the wave four-vector can be found from formula_93 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
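As a simple worked solution of the eikonal equation obtained above (an editorial check, not from the source), consider a homogeneous medium with constant refractive index n. A linear eikonal solves the equation exactly:

S(x,y,z) = n\,(\alpha x + \beta y + \gamma z), \qquad \alpha^2 + \beta^2 + \gamma^2 = 1, \qquad \left(\frac{\partial S}{\partial x}\right)^2 + \left(\frac{\partial S}{\partial y}\right)^2 + \left(\frac{\partial S}{\partial z}\right)^2 = n^2\,(\alpha^2 + \beta^2 + \gamma^2) = n^2 ,

so the surfaces of constant S are parallel planes, and the rays, being orthogonal to them, are straight lines along the unit vector (α, β, γ), as expected in a medium of constant index.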
[ { "math_id": 0, "text": "n_1" }, { "math_id": 1, "text": "n_2" }, { "math_id": 2, "text": "n_1\\sin\\theta_1 = n_2\\sin\\theta_2 " }, { "math_id": 3, "text": "\\theta_1" }, { "math_id": 4, "text": "\\theta_2" }, { "math_id": 5, "text": "v_1\\sin\\theta_2\\ = v_2\\sin\\theta_1" }, { "math_id": 6, "text": "v_1" }, { "math_id": 7, "text": "v_2" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "S_1" }, { "math_id": 10, "text": "\\frac{1}{S_1} + \\frac{1}{S_2} = \\frac{1}{f} " }, { "math_id": 11, "text": "S_2" }, { "math_id": 12, "text": " M = - \\frac{S_2}{S_1} = \\frac{f}{f - S_1} " }, { "math_id": 13, "text": "u(t,x) \\approx a(t,x)e^{i(k\\cdot x - \\omega t)}" }, { "math_id": 14, "text": "k, \\omega" }, { "math_id": 15, "text": "a(t,x)" }, { "math_id": 16, "text": "a_0(t,x) e^{i\\varphi(t,x)/\\varepsilon}." }, { "math_id": 17, "text": "\\varphi(t,x)/\\varepsilon" }, { "math_id": 18, "text": "k:= \\nabla_x \\varphi" }, { "math_id": 19, "text": "\\omega := -\\partial_t \\varphi" }, { "math_id": 20, "text": "a_0" }, { "math_id": 21, "text": "\\varepsilon\\," }, { "math_id": 22, "text": "\\psi(\\mathbf{r},t)=\\phi(\\mathbf{r})e^{i\\omega t}" }, { "math_id": 23, "text": "\\psi" }, { "math_id": 24, "text": "\\phi" }, { "math_id": 25, "text": "\\nabla^2\\phi + k_o^2 n(\\mathbf{r})^2 \\phi =0" }, { "math_id": 26, "text": "k_o = \\omega/c = 2\\pi/\\lambda_o" }, { "math_id": 27, "text": "c" }, { "math_id": 28, "text": "n(\\mathbf{r})" }, { "math_id": 29, "text": "\\phi = A(k_o,\\mathbf{r}) e^{i k_o S(\\mathbf{r})}" }, { "math_id": 30, "text": "-k_o^2 A[(\\nabla S)^2 - n^2] + 2 i k_o(\\nabla S\\cdot \\nabla A) + ik_o A\\nabla^2 S + \\nabla^2 A =0." }, { "math_id": 31, "text": "\\lambda_o\\sim k_o^{-1}\\rightarrow 0" }, { "math_id": 32, "text": "A(k_o,\\mathbf{r}) = \\sum_{m=0}^\\infty \\frac{A_m(\\mathbf{r})}{(ik_o)^m}" }, { "math_id": 33, "text": "k_o" }, { "math_id": 34, "text": "\\begin{align}\nO(k_o^2): &\\quad (\\nabla S)^2 = n^2, \\\\[1ex]\nO(k_o) : &\\quad 2\\nabla S\\cdot \\nabla A_0 + A_0\\nabla^2 S =0, \\\\[1ex]\nO(1): &\\quad 2\\nabla S\\cdot \\nabla A_1 + A_1\\nabla^2 S =-\\nabla^2 A_0,\n\\end{align}" }, { "math_id": 35, "text": "O(k_o^{1-m}):\\quad 2\\nabla S\\cdot \\nabla A_m + A_m\\nabla^2 S =-\\nabla^2 A_{m-1}." }, { "math_id": 36, "text": "S(\\mathbf{r})" }, { "math_id": 37, "text": "\\left(\\frac{\\partial S}{\\partial x}\\right)^2 + \\left(\\frac{\\partial S}{\\partial y}\\right)^2 + \\left(\\frac{\\partial S}{\\partial z}\\right)^2 = n^2." 
}, { "math_id": 38, "text": "A_m(\\mathbf{r})" }, { "math_id": 39, "text": "A(k_o,\\mathbf{r})" }, { "math_id": 40, "text": "\\lim_{k_0 \\to \\infty} \\frac{1}{k_0}\\left(\\frac{1}{A}\\,\\nabla S \\cdot \\nabla A + \\frac{1}{2}\\nabla^2 S\\right) = 0" }, { "math_id": 41, "text": "\\mathbf{E}(x, y, z, t)" }, { "math_id": 42, "text": "\\mathbf{H}(x, y, z, t)" }, { "math_id": 43, "text": "\\varepsilon(x, y, z)" }, { "math_id": 44, "text": "\\mu(x, y, z)" }, { "math_id": 45, "text": "\\mathbf{R}^3" }, { "math_id": 46, "text": "\\psi(x, y, z) - ct = 0" }, { "math_id": 47, "text": "\\psi_x^2 + \\psi_y^2 + \\psi_z^2 = \\varepsilon\\mu = n^2," }, { "math_id": 48, "text": "n" }, { "math_id": 49, "text": "\\begin{align}\n\\mathbf{E}^*(x, y, z) &= \\mathbf{E}(x, y, z, \\psi(x, y, z)/c) \\\\[1ex]\n\\mathbf{H}^*(x, y, z) &= \\mathbf{H}(x, y, z, \\psi(x, y, z)/c)\n\\end{align}" }, { "math_id": 50, "text": "\\varphi(x, y, z, t) = 0" }, { "math_id": 51, "text": "\\mathbf{R}^4" }, { "math_id": 52, "text": "\\begin{align}\n\\nabla\\varphi \\cdot [\\varepsilon\\mathbf{E}] &= 0 \\\\[1ex]\n\\nabla\\varphi \\cdot [\\mu \\mathbf{H}] &= 0 \\\\[1ex]\n\\nabla\\varphi \\times [\\mathbf{E}] + \\frac{1}{c} \\, \\varphi_t \\, [\\mu\\mathbf{H}] &= 0 \\\\[1ex]\n\\nabla\\varphi \\times [\\mathbf{H}] - \\frac{1}{c} \\, \\varphi_t \\, [\\varepsilon\\mathbf{E}] &= 0\n\\end{align}" }, { "math_id": 53, "text": "\\nabla" }, { "math_id": 54, "text": "xyz" }, { "math_id": 55, "text": "t" }, { "math_id": 56, "text": "\\nabla\\varphi" }, { "math_id": 57, "text": "\\begin{align}\n\\nabla \\cdot \\varepsilon\\mathbf{E} = 0 \\\\[1ex]\n\\nabla \\cdot \\mu \\mathbf{H} = 0 \\\\[1ex]\n\\nabla \\times \\mathbf{E} + \\tfrac{\\mu}{c} \\, \\mathbf{H}_t = 0 \\\\[1ex]\n\\nabla \\times \\mathbf{H} - \\tfrac{\\varepsilon}{c} \\, \\mathbf{E}_t = 0\n\\end{align}" }, { "math_id": 58, "text": "D" }, { "math_id": 59, "text": "\\Gamma" }, { "math_id": 60, "text": "\\oint_\\Gamma (\\mathbf{M} \\cdot \\varepsilon\\mathbf{E}) \\, dS = 0" }, { "math_id": 61, "text": "\\mathbf{M} = (x_N, y_N, z_N)" }, { "math_id": 62, "text": "(x_N, y_N, z_N, t_N)" }, { "math_id": 63, "text": "t = \\rm{const}" }, { "math_id": 64, "text": "dS" }, { "math_id": 65, "text": "\\begin{align}\n\\oint_\\Gamma \\left(\\mathbf{M} \\cdot \\mu\\mathbf{H}\\right) dS &= 0 \\\\[1.55ex]\n\\oint_\\Gamma \\left(\\mathbf{M} \\times \\mathbf{E} + \\frac{\\mu}{c} \\, t_N \\, \\mathbf{H}\\right) dS &= 0 \\\\[1.55ex]\n\\oint_\\Gamma \\left(\\mathbf{M} \\times \\mathbf{H} - \\frac{\\varepsilon}{c} \\, t_N \\, \\mathbf{E}\\right) dS &= 0\n\\end{align}" }, { "math_id": 66, "text": "\\Gamma_0" }, { "math_id": 67, "text": "\\begin{align}\n\\int_{\\Gamma_0} (\\nabla\\varphi \\cdot [\\varepsilon\\mathbf{E}]) \\, {dS\\over \\|\\nabla^{4D}\\varphi\\|} &= 0 \\\\[1ex]\n\\int_{\\Gamma_0} (\\nabla\\varphi \\cdot [\\mu\\mathbf{H}]) \\, {dS\\over \\|\\nabla^{4D}\\varphi\\|} &= 0 \\\\[1ex]\n\\int_{\\Gamma_0} \\left( \\nabla\\varphi \\times [\\mathbf{E}] + {1\\over c} \\, \\varphi_t \\, [\\mu\\mathbf{H}] \\right) \\, \\frac{dS}{\\|\\nabla^{4D}\\varphi\\|} &= 0 \\\\[1ex]\n\\int_{\\Gamma_0} \\left( \\nabla\\varphi \\times [\\mathbf{H}] - {1\\over c} \\, \\varphi_t \\, [\\varepsilon\\mathbf{E}] \\right) \\, \\frac{dS}{\\|\\nabla^{4D}\\varphi\\|} &= 0\n\\end{align}" }, { "math_id": 68, "text": "\\nabla^{4D}" }, { "math_id": 69, "text": "xyzt" }, { "math_id": 70, "text": "\\varepsilon" }, { "math_id": 71, "text": "\\mu" }, { "math_id": 72, "text": "\\mathbf{E}" }, { "math_id": 73, "text": "\\mathbf{H}" }, { "math_id": 
74, "text": "[\\varepsilon\\mathbf{E}] = \\varepsilon[\\mathbf{E}]" }, { "math_id": 75, "text": "[\\mu\\mathbf{H}] = \\mu[\\mathbf{H}]" }, { "math_id": 76, "text": "\\begin{align}\n\\nabla\\varphi \\times [\\mathbf{E}] + {\\mu\\over c} \\, \\varphi_t \\, [\\mathbf{H}] &= 0 \\\\[1ex]\n\\nabla\\varphi \\times [\\mathbf{H}] - {\\varepsilon\\over c} \\, \\varphi_t \\, [\\mathbf{E}] &= 0\n\\end{align}" }, { "math_id": 77, "text": "\\nabla\\varphi \\times (\\nabla\\varphi \\times [\\mathbf{H}]) - {\\varepsilon\\over c} \\, \\varphi_t \\, (\\nabla\\varphi \\times [\\mathbf{E}]) = (\\nabla\\varphi \\cdot [\\mathbf{H}]) \\, \\nabla\\varphi - \\|\\nabla\\varphi\\|^2 \\, [\\mathbf{H}] + {\\varepsilon\\mu\\over c^2} \\varphi_t^2 \\, [\\mathbf{H}] = 0" }, { "math_id": 78, "text": "\\nabla\\varphi \\cdot [\\mathbf{H}] = 0" }, { "math_id": 79, "text": "\\varphi = 0" }, { "math_id": 80, "text": "\\|\\nabla\\varphi\\|^2 = {\\varepsilon\\mu\\over c^2} \\varphi_t^2" }, { "math_id": 81, "text": "\\varphi" }, { "math_id": 82, "text": "\\varphi(x, y, z, t) = \\psi(x, y, z) - ct" }, { "math_id": 83, "text": "\\varphi_t \\ne 0" }, { "math_id": 84, "text": "\\|\\nabla\\psi\\|^2 = {\\varepsilon\\mu\\over c^2} \\, (-c)^2 = \\varepsilon\\mu = n^2" }, { "math_id": 85, "text": "\\psi_x^2 + \\psi_y^2 + \\psi_z^2 = n^2" }, { "math_id": 86, "text": "x" }, { "math_id": 87, "text": "y" }, { "math_id": 88, "text": "z" }, { "math_id": 89, "text": "\\frac{\\partial^2 \\psi}{\\partial x_i\\partial x^i} = 0" }, { "math_id": 90, "text": "\\psi= A e^{iS / \\varepsilon}" }, { "math_id": 91, "text": "-\\frac{A}{\\varepsilon^2}\\frac{\\partial S}{\\partial x_i} \\frac{\\partial S}{\\partial x^i} + \\frac{2i}{\\varepsilon} \\frac{\\partial A}{\\partial x_i} \\frac{\\partial S}{\\partial x^i} + \\frac{iA}{\\varepsilon} \\frac{\\partial^2 S}{\\partial x_i\\partial x^i} + \\frac{\\partial^2 A}{\\partial x_i\\partial x^i} = 0. " }, { "math_id": 92, "text": "\\frac{\\partial S}{\\partial x_i} \\frac{\\partial S}{\\partial x^i} = 0." }, { "math_id": 93, "text": "k_i = - \\frac{\\partial S}{\\partial x^i}." } ]
https://en.wikipedia.org/wiki?curid=731780
7318436
Von Zeipel theorem
Astrophysics concept In astrophysics, the von Zeipel theorem states that the radiative flux formula_0 in a uniformly rotating star is proportional to the local effective gravity formula_1. The theorem is named after Swedish astronomer Edvard Hugo von Zeipel. The theorem is: formula_2 where the luminosity formula_3 and mass formula_4 are evaluated on a surface of constant pressure formula_5. The effective temperature formula_6 can then be found at a given colatitude formula_7 from the local effective gravity: formula_8 This relation ignores the effect of convection in the envelope, so it primarily applies to early-type stars. According to the theory of rotating stars, if the rotational velocity of a star depends only on the radius, it cannot simultaneously be in thermal and hydrostatic equilibrium. This is called the von Zeipel paradox. The paradox is resolved, however, if the rotational velocity also depends on height, or there is a meridional circulation. A similar situation may arise in accretion disks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
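A minimal numerical illustration of the gravity-darkening relation formula_8 (hypothetical numbers, not taken from the source): given an assumed ratio of the effective gravity at the equator to that at the pole of a rapidly rotating star, the corresponding ratio of effective temperatures follows from the one-quarter power.

g_ratio = 0.6              # assumed g_eff(equator) / g_eff(pole) for a fast rotator
t_ratio = g_ratio ** 0.25  # von Zeipel scaling: T_eff proportional to g_eff^(1/4)
print(round(t_ratio, 3))   # about 0.880: the equator is roughly 12% cooler than the pole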
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "g_\\text{eff}" }, { "math_id": 2, "text": "F = -\\frac{L(P)}{4\\pi G M_*(P)} g_\\text{eff}," }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "M_*" }, { "math_id": 5, "text": "P" }, { "math_id": 6, "text": "T_\\text{eff}" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "T_\\text{eff}(\\theta) \\sim g_\\text{eff}^{1/4}(\\theta)." } ]
https://en.wikipedia.org/wiki?curid=7318436
731884
Electromagnetic four-potential
Relativistic vector field An electromagnetic four-potential is a relativistic vector function from which the electromagnetic field can be derived. It combines both an electric scalar potential and a magnetic vector potential into a single four-vector. As measured in a given frame of reference, and for a given gauge, the first component of the electromagnetic four-potential is conventionally taken to be the electric scalar potential, and the other three components make up the magnetic vector potential. While both the scalar and vector potential depend upon the frame, the electromagnetic four-potential is Lorentz covariant. Like other potentials, many different electromagnetic four-potentials correspond to the same electromagnetic field, depending upon the choice of gauge. This article uses tensor index notation and the Minkowski metric sign convention (+ − − −). See also covariance and contravariance of vectors and raising and lowering indices for more details on notation. Formulae are given in SI units and Gaussian-cgs units. Definition. The contravariant electromagnetic four-potential can be defined as A^α = (ϕ/c, A) in SI units (and as A^α = (ϕ, A) in Gaussian-cgs units), in which "ϕ" is the electric potential, and A is the magnetic potential (a vector potential). The units of "Aα" are V·s·m−1 in SI, and Mx·cm−1 in Gaussian-cgs. The electric and magnetic fields associated with these four-potentials are E = −∇ϕ − ∂A/∂t and B = ∇ × A. In special relativity, the electric and magnetic fields transform under Lorentz transformations. This can be written in the form of a rank two tensor - the electromagnetic tensor. The 16 contravariant components of the electromagnetic tensor, using Minkowski metric convention (+ − − −), are written in terms of the electromagnetic four-potential and the four-gradient as: formula_0 If the said signature is instead (− + + +) then: formula_1 This essentially defines the four-potential in terms of physically observable quantities, as well as reducing to the above definition. In the Lorenz gauge. Often, the Lorenz gauge condition formula_2 in an inertial frame of reference is employed to simplify Maxwell's equations to □A^α = μ0J^α (in SI units), where "Jα" are the components of the four-current, and formula_3 is the d'Alembertian operator. In terms of the scalar and vector potentials, this last equation becomes □ϕ = ρ/ε0 and □A = μ0j. For a given charge and current distribution, "ρ"(r, "t") and j(r, "t"), the solutions to these equations in SI units are: formula_4 where formula_5 is the retarded time. This is sometimes also expressed with formula_6 where the square brackets are meant to indicate that the time should be evaluated at the retarded time. Of course, since the above equations are simply the solution to an inhomogeneous differential equation, any solution to the homogeneous equation can be added to these to satisfy the boundary conditions. These homogeneous solutions in general represent waves propagating from sources outside the boundary. When the integrals above are evaluated for typical cases, e.g. of an oscillating current (or charge), they are found to give both a magnetic field component varying according to "r"−2 (the induction field) and a component decreasing as "r"−1 (the radiation field). Gauge freedom. When flattened to a one-form (in tensor notation, formula_7), the four-potential formula_8 (normally written as a vector or, in tensor notation, formula_9) can be decomposed via the Hodge decomposition theorem as the sum of an exact, a coexact, and a harmonic form, formula_10. 
There is gauge freedom in "A" in that of the three forms in this decomposition, only the coexact form has any effect on the electromagnetic tensor formula_11. Exact forms are closed, as are harmonic forms over an appropriate domain, so formula_12 and formula_13, always. So regardless of what formula_14 and formula_15 are, we are left with simply formula_16. In infinite flat Minkowski space, every closed form is exact. Therefore the formula_15 term vanishes. Every gauge transform of formula_8 can thus be written as formula_17. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
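The relations between the potentials and the fields quoted in the definition above can be checked symbolically. The following SymPy sketch (an editorial illustration using a simple hand-picked potential, not from the source) computes E = −∇ϕ − ∂A/∂t and B = ∇ × A componentwise for a vector potential describing a wave propagating along z.

import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
k, w, A0 = sp.symbols('k omega A_0', real=True, positive=True)

# A simple choice of potentials: phi = 0 and A = (A0*cos(k*z - w*t), 0, 0).
phi = sp.Integer(0)
A = [A0 * sp.cos(k * z - w * t), sp.Integer(0), sp.Integer(0)]

# E = -grad(phi) - dA/dt and B = curl(A), written out componentwise.
E = [-sp.diff(phi, xi) - sp.diff(Ai, t) for xi, Ai in zip((x, y, z), A)]
B = [sp.diff(A[2], y) - sp.diff(A[1], z),
     sp.diff(A[0], z) - sp.diff(A[2], x),
     sp.diff(A[1], x) - sp.diff(A[0], y)]

print(sp.simplify(E[0]))   # E has only an x-component, proportional to sin(k*z - omega*t)
print(sp.simplify(B[1]))   # B has only a y-component, also proportional to sin(k*z - omega*t)

Both fields are mutually perpendicular and transverse to the propagation direction; additionally imposing ω = ck would make this the potential of a vacuum plane wave.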
[ { "math_id": 0, "text": "F^{\\mu\\nu} = \\partial^{\\mu}A^{\\nu} - \\partial^{\\nu}A^{\\mu} =\n \\begin{bmatrix}\n 0 & -E_x/c & -E_y/c & -E_z/c \\\\\n E_x/c & 0 & -B_z & B_y \\\\\n E_y/c & B_z & 0 & -B_x \\\\\n E_z/c & -B_y & B_x & 0\n \\end{bmatrix}\n" }, { "math_id": 1, "text": "F'\\,^{\\mu\\nu} = \\partial'\\,^{\\mu}A^{\\nu} - \\partial'\\,^{\\nu}A^{\\mu} =\n \\begin{bmatrix}\n 0 & E_x/c & E_y/c & E_z/c \\\\\n -E_x/c & 0 & B_z & -B_y \\\\\n -E_y/c & -B_z & 0 & B_x \\\\\n -E_z/c & B_y & -B_x & 0\n \\end{bmatrix}\n" }, { "math_id": 2, "text": "\\partial_{\\alpha} A^{\\alpha} = 0" }, { "math_id": 3, "text": "\\Box = \\frac{1}{c^2} \\frac{\\partial^2}{\\partial t^2} - \\nabla^2 = \\partial^\\alpha \\partial_\\alpha" }, { "math_id": 4, "text": "\\begin{align}\n \\phi (\\mathbf{r}, t) &= \\frac{1}{4 \\pi \\epsilon_0} \\int \\mathrm{d}^3 x^\\prime \\frac{\\rho\\left( \\mathbf{r}^\\prime, t_r\\right)}{ \\left| \\mathbf{r} - \\mathbf{r}^\\prime \\right|} \\\\\n \\mathbf A (\\mathbf{r}, t) &= \\frac{\\mu_0}{4 \\pi} \\int \\mathrm{d}^3 x^\\prime \\frac{\\mathbf{j}\\left( \\mathbf{r}^\\prime, t_r\\right)}{ \\left| \\mathbf{r} - \\mathbf{r}^\\prime \\right|},\n\\end{align}" }, { "math_id": 5, "text": "t_r = t - \\frac{\\left|\\mathbf{r} - \\mathbf{r}'\\right|}{c}" }, { "math_id": 6, "text": "\\rho\\left(\\mathbf{r}', t_r\\right) = \\left[\\rho\\left(\\mathbf{r}', t\\right)\\right]," }, { "math_id": 7, "text": "A_\\mu" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "A^\\mu" }, { "math_id": 10, "text": "A = d \\alpha + \\delta \\beta + \\gamma" }, { "math_id": 11, "text": "F = d A" }, { "math_id": 12, "text": "d d \\alpha = 0" }, { "math_id": 13, "text": "d\\gamma = 0" }, { "math_id": 14, "text": "\\alpha" }, { "math_id": 15, "text": "\\gamma" }, { "math_id": 16, "text": "F = d \\delta \\beta" }, { "math_id": 17, "text": "A \\Rightarrow A + d\\alpha" } ]
https://en.wikipedia.org/wiki?curid=731884
73189075
Komlós' theorem
Theorem Komlós' theorem is a theorem from probability theory and mathematical analysis about the Cesàro convergence of a subsequence of a sequence of random variables (or functions), and of all of its further subsequences, to an integrable random variable (or function). It is also an existence theorem for an integrable random variable (or function). There exist a probabilistic and an analytical version for finite measure spaces. The theorem was proven in 1967 by János Komlós. There is also a generalization from 1970 by Srishti D. Chatterji. Komlós' theorem. Probabilistic version. Let formula_0 be a probability space and formula_1 be a sequence of real-valued random variables defined on this space with formula_2 Then there exist a random variable formula_3 and a subsequence formula_4 such that for every arbitrary subsequence formula_5, as formula_6, one has formula_7 formula_8-almost surely. Analytic version. Let formula_9 be a finite measure space and formula_10 be a sequence of real-valued functions in formula_11 with formula_12. Then there exist a function formula_13 and a subsequence formula_14 such that for every arbitrary subsequence formula_15, as formula_6, one has formula_16 formula_17-almost everywhere. Explanations. So the theorem says that the chosen subsequence formula_18 and all of its further subsequences converge in the sense of Cesàro averages.
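For intuition, the i.i.d. integrable case is the simplest instance: there the Cesàro averages of the original sequence already converge almost surely to the mean (the strong law of large numbers), and Komlós' theorem says that an analogous almost-sure Cesàro convergence survives for an arbitrary L1-bounded sequence once one passes to a suitable subsequence. The following NumPy sketch (an editorial illustration of the i.i.d. special case only) computes running Cesàro averages of exponential random variables.

import numpy as np

rng = np.random.default_rng(1)
xi = rng.exponential(scale=2.0, size=100_000)        # i.i.d. with E[xi_n] = 2, so sup_n E|xi_n| is finite

cesaro = np.cumsum(xi) / np.arange(1, xi.size + 1)   # running Cesàro averages
print(cesaro[[99, 9_999, 99_999]])                   # values approaching the limit 2.0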
[ { "math_id": 0, "text": "(\\Omega,\\mathcal{F},P)" }, { "math_id": 1, "text": "\\xi_1,\\xi_2,\\dots" }, { "math_id": 2, "text": "\\sup\\limits_{n}\\mathbb{E}[|\\xi_n|]<\\infty." }, { "math_id": 3, "text": "\\psi\\in L^1(P)" }, { "math_id": 4, "text": "(\\eta_k)=(\\xi_{n_{k}})" }, { "math_id": 5, "text": "(\\tilde{\\eta}_n)=(\\eta_{k_{n}})" }, { "math_id": 6, "text": "n\\to \\infty" }, { "math_id": 7, "text": "\\frac{(\\tilde{\\eta}_1+\\cdots +\\tilde{\\eta}_n)}{n}\\to \\psi" }, { "math_id": 8, "text": "P" }, { "math_id": 9, "text": "(E,\\mathcal{A},\\mu)" }, { "math_id": 10, "text": "f_1,f_2,\\dots" }, { "math_id": 11, "text": "L^1(\\mu)" }, { "math_id": 12, "text": "\\sup\\limits_n \\int_E |f_n|\\mathrm{d}\\mu<\\infty" }, { "math_id": 13, "text": "\\upsilon \\in L^1(\\mu)" }, { "math_id": 14, "text": "(g_k)=(f_{n_{k}})" }, { "math_id": 15, "text": "(\\tilde{g}_n)=(g_{k_{n}})" }, { "math_id": 16, "text": "\\frac{(\\tilde{g}_1+\\cdots +\\tilde{g}_n)}{n}\\to \\upsilon " }, { "math_id": 17, "text": "\\mu" }, { "math_id": 18, "text": "(\\eta_k)" } ]
https://en.wikipedia.org/wiki?curid=73189075
73190019
Four-dimensional Chern–Simons theory
Gauge theory providing unifying formalism for integrable systems In mathematical physics, four-dimensional Chern–Simons theory, also known as semi-holomorphic or semi-topological Chern–Simons theory, is a quantum field theory initially defined by Nikita Nekrasov, rediscovered and studied by Kevin Costello, and later by Edward Witten and Masahito Yamazaki. It is named after mathematicians Shiing-Shen Chern and James Simons who discovered the Chern–Simons 3-form appearing in the theory. The gauge theory has been demonstrated to be related to many integrable systems, including exactly solvable lattice models such as the six-vertex model of Lieb and the Heisenberg spin chain and integrable field theories such as principal chiral models, symmetric space coset sigma models and Toda field theory, although the integrable field theories require the introduction of two-dimensional surface defects. The theory is also related to the Yang–Baxter equation and quantum groups such as the Yangian. The theory is similar to three-dimensional Chern–Simons theory which is a topological quantum field theory, and the relation of 4d Chern–Simons theory to the Yang–Baxter equation bears similarities to the relation of 3d Chern–Simons theory to knot invariants such as the Jones polynomial discovered by Witten. Formulation. The theory is defined on a 4-dimensional manifold which is a product of two 2-dimensional manifolds: formula_0, where formula_1 is a smooth orientable 2-dimensional manifold, and formula_2 is a complex curve (hence has real dimension 2) endowed with a meromorphic one-form formula_3. The field content is a gauge field formula_4. The action is given by wedging the Chern–Simons 3-form formula_5 with formula_3: formula_6 Restrictions on underlying manifolds. A heuristic puts strong restrictions on the formula_2 to be considered. This theory is studied perturbatively, in the limit that the Planck constant formula_7. In the path integral formulation, the action will contain a ratio formula_8. Therefore, zeroes of formula_3 naïvely correspond to points at which formula_9, at which point perturbation theory breaks down. So formula_3 may have poles, but not zeroes. A corollary of the Riemann–Roch theorem relates the degree of the canonical divisor defined by formula_3 (equal to the difference between the number of zeros and poles of formula_3, with multiplicity) to the genus formula_10 of the curve formula_2, giving formula_11 Then imposing that formula_3 has no zeroes, formula_10 must be formula_12 or formula_13. In the latter case, formula_3 has no poles and formula_14 a complex torus (with formula_15 a 2d lattice). If formula_16, then formula_2 is formula_17 the complex projective line. The form formula_3 has two poles; either a single pole with multiplicity 2, in which case it can be realized as formula_18 on formula_19, or two poles of multiplicity one, which can be realized as formula_20 on formula_21. Therefore formula_2 is either a complex plane, cylinder or torus. There is also a topological restriction on formula_1, due to a possible framing anomaly. This imposes that formula_1 must be a parallelizable 2d manifold, which is also a strong restriction: for example, if formula_1 is compact, then it is a torus. Surface defects and field theories. The above is sufficient to obtain spin chains from the theory, but to obtain 2-dimensional integrable field theories, one must introduce so-called surface defects. 
A surface defect, often labelled formula_22, is a 2-dimensional 'object' which is considered to be localized at a point formula_23 on the complex curve but covers formula_24 which is fixed to be formula_25 for engineering integrable field theories. This defect formula_26 is then the space on which a 2-dimensional field theory lives, and this theory couples to the bulk gauge field formula_4. Supposing the bulk gauge field formula_4 has gauge group formula_27, the field theory on the defect can interact with the bulk gauge field if it has global symmetry group formula_27, so that it has a current formula_28 which can couple via a term which is schematically formula_29. In general, one can have multiple defects formula_30 with formula_31, and the action for the coupled theory is then formula_32 with formula_33 the "collection" of fields for the field theory on formula_30, and coordinates formula_34 for formula_25. There are two distinct classes of defects: order defects, which introduce new degrees of freedom living on the defect that couple to the bulk gauge field, and disorder defects, for which the bulk gauge field itself is required to have prescribed singular behaviour at the location of the defect. Order defects are easier to define, but disorder defects are required to engineer many of the known 2-dimensional integrable field theories. Master theories of integrable systems. 4d Chern–Simons theory is a 'master theory' for integrable systems, providing a framework that incorporates many integrable systems. Another theory which shares this feature, but with a Hamiltonian rather than Lagrangian description, is classical affine Gaudin models with a 'dihedral twist', and the two theories have been shown to be closely related. Another 'master theory' for integrable systems is the anti-self-dual Yang–Mills (ASDYM) system. Ward's conjecture is the conjecture that in fact all integrable ODEs or PDEs come from ASDYM. A connection between 4d Chern–Simons theory and ASDYM has been found so that they in fact come from a six-dimensional holomorphic Chern–Simons theory defined on twistor space. The derivation of integrable systems from this 6d Chern–Simons theory through the alternate routes of 4d Chern–Simons theory and ASDYM in fact fits into a commuting square. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
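For reference (standard conventions, not spelled out in this article), the Chern–Simons 3-form formula_5 that is wedged with the meromorphic one-form formula_3 in the bulk action can be written, up to overall normalization, as

CS(A) \;=\; \operatorname{tr}\!\left( A \wedge dA + \tfrac{2}{3}\, A \wedge A \wedge A \right),

which is the same 3-form that appears in ordinary three-dimensional Chern–Simons theory.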
[ { "math_id": 0, "text": "M = \\Sigma \\times C" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "\\omega" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "CS(A)" }, { "math_id": 6, "text": "S_{4d} = \\frac{1}{2\\pi} \\int_M \\omega \\wedge CS(A)." }, { "math_id": 7, "text": "\\hbar << 1" }, { "math_id": 8, "text": "\\omega/\\hbar" }, { "math_id": 9, "text": "\\hbar \\rightarrow \\infty" }, { "math_id": 10, "text": "g" }, { "math_id": 11, "text": "\\text{number of zeros of } \\omega - \\text{number of poles of } \\omega = 2g - 2" }, { "math_id": 12, "text": "0" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "C = \\mathbb{C}/\\Lambda" }, { "math_id": 15, "text": "\\Lambda" }, { "math_id": 16, "text": "g = 0" }, { "math_id": 17, "text": "\\mathbb{CP}^1" }, { "math_id": 18, "text": "\\omega = dz" }, { "math_id": 19, "text": "\\mathbb{C}" }, { "math_id": 20, "text": "\\omega = \\frac{dz}{z}" }, { "math_id": 21, "text": "\\mathbb{C}^\\times \\cong \\mathbb{C}/\\mathbb{Z}" }, { "math_id": 22, "text": "D" }, { "math_id": 23, "text": "z" }, { "math_id": 24, "text": "\\Sigma," }, { "math_id": 25, "text": "\\mathbb{R}^2" }, { "math_id": 26, "text": "D " }, { "math_id": 27, "text": "G" }, { "math_id": 28, "text": "J" }, { "math_id": 29, "text": "\\int JA" }, { "math_id": 30, "text": "D_\\alpha" }, { "math_id": 31, "text": "\\alpha = 1, \\cdots, n" }, { "math_id": 32, "text": "S_{4d-2d} = \\frac{1}{2\\hbar\\pi} \\int_{\\mathbb R^2 \\times C} \\omega \\wedge CS(A) + \\sum_{\\alpha = 1}^{n} \\frac{1}{\\hbar} \\int_{\\mathbb{R}^2 \\times z_\\alpha} \\mathcal{L}_\\alpha (\\phi_\\alpha;\n A_w|_{z_\\alpha}, A_{\\overline w}|_{z_\\alpha})," }, { "math_id": 33, "text": "\\phi_\\alpha" }, { "math_id": 34, "text": "w, \\overline w" } ]
https://en.wikipedia.org/wiki?curid=73190019
73193493
Disjunctive Datalog
Disjunctive Datalog is an extension of the logic programming language Datalog that allows disjunctions in the heads of rules. This extension enables disjunctive Datalog to express several NP-hard problems that are not known to be expressible in plain Datalog. Disjunctive Datalog has been applied in the context of reasoning about ontologies in the semantic web. DLV is an implementation of disjunctive Datalog. Syntax. A disjunctive Datalog program is a collection of rules. A "rule" is a clause of the form: formula_0 where formula_1, ..., formula_2 may be negated, and may include (in)equality constraints. Semantics. There are at least three ways to define the semantics of disjunctive Datalog, including semantics based on minimal models and on (disjunctive) stable models. Expressivity. Disjunctive Datalog can express several NP-complete and NP-hard problems, including the travelling salesman problem, graph coloring, the maximum clique problem, and minimal vertex cover. These problems are only expressible in Datalog if the polynomial hierarchy collapses. Implementations. The DLV (DataLog with Disjunction, where the logical disjunction symbol V is used) system implements the disjunctive stable model semantics.
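As a toy illustration of the minimal-model reading of disjunctive rules, the following Python sketch brute-forces the minimal models of the two-rule ground program "r." and "p v q :- r."; the program, atom names and data structures are invented for this example and do not correspond to any particular implementation such as DLV.

from itertools import chain, combinations

# Ground disjunctive rules as (head, body) pairs of atom sets,
# read "a1 v ... v an :- b1, ..., bm".  Toy program:  r.   p v q :- r.
program = [
    (frozenset({"r"}), frozenset()),
    (frozenset({"p", "q"}), frozenset({"r"})),
]
atoms = set().union(*(head | body for head, body in program))

def is_model(interp):
    # A rule is satisfied when its body holding forces at least one head atom to hold.
    return all(not body <= interp or head & interp for head, body in program)

def powerset(s):
    s = sorted(s)
    return (set(c) for c in chain.from_iterable(combinations(s, k) for k in range(len(s) + 1)))

models = [m for m in powerset(atoms) if is_model(m)]
minimal = [m for m in models if not any(other < m for other in models)]
print(minimal)   # two minimal models, {'p', 'r'} and {'q', 'r'}, unlike plain Datalog

Plain Datalog programs always have a single minimal model; the disjunction in the head is what allows several incomparable minimal models here.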
[ { "math_id": 0, "text": "a_1 \\vee \\dots \\vee a_n \\leftarrow b_1 \\wedge \\dots \\wedge b_m \\quad 1 \\leq n, 0 \\leq m" }, { "math_id": 1, "text": "b_1" }, { "math_id": 2, "text": "b_m" } ]
https://en.wikipedia.org/wiki?curid=73193493
731959
Heronian triangle
Triangle whose side lengths and area are integers In geometry, a Heronian triangle (or Heron triangle) is a triangle whose side lengths a, b, and c and area A are all positive integers. Heronian triangles are named after Heron of Alexandria, based on their relation to Heron's formula which Heron demonstrated with the example triangle of sides 13, 14, 15 and area 84. Heron's formula implies that the Heronian triangles are exactly the positive integer solutions of the Diophantine equation formula_0 that is, the side lengths and area of any Heronian triangle satisfy the equation, and any positive integer solution of the equation describes a Heronian triangle. If the three side lengths are setwise coprime (meaning that the greatest common divisor of all three sides is 1), the Heronian triangle is called "primitive". Triangles whose side lengths and areas are all rational numbers (positive rational solutions of the above equation) are sometimes also called "Heronian triangles" or "rational triangles"; in this article, these more general triangles will be called "rational Heronian triangles". Every (integral) Heronian triangle is a rational Heronian triangle. Conversely, every rational Heronian triangle is similar to exactly one primitive Heronian triangle. In any rational Heronian triangle, the three altitudes, the circumradius, the inradius and exradii, and the sines and cosines of the three angles are also all rational numbers. Scaling to primitive triangles. Scaling a triangle with a factor of s consists of multiplying its side lengths by s; this multiplies the area by formula_1 and produces a similar triangle. Scaling a rational Heronian triangle by a rational factor produces another rational Heronian triangle. Given a rational Heronian triangle of side lengths formula_2 the scale factor formula_3 produce a rational Heronian triangle such that its side lengths formula_4 are setwise coprime integers. It is proved below that the area A is an integer, and thus the triangle is a Heronian triangle. Such a triangle is often called a "primitive Heronian triangle." In summary, every similarity class of rational Heronian triangles contains exactly one primitive Heronian triangle. A byproduct of the proof is that exactly one of the side lengths of a primitive Heronian triangle is an even integer. "Proof:" One has to prove that, if the side lengths formula_4 of a rational Heronian triangle are coprime integers, then the area A is also an integer and exactly one of the side lengths is even. The Diophantine equation given in the introduction shows immediately that formula_5 is an integer. Its square root formula_6 is also an integer, since the square root of an integer is either an integer or an irrational number. If exactly one of the side lengths is even, all the factors in the right-hand side of the equation are even, and, by dividing the equation by 16, one gets that formula_7 and formula_8 are integers. As the side lengths are supposed to be coprime, one is left with the case where one or three side lengths are odd. Supposing that c is odd, the right-hand side of the Diophantine equation can be rewritten formula_9 with formula_10 and formula_11 even. As the square of an odd integer is congruent to formula_12 modulo 4, the right-hand side of the equation must be congruent to formula_13 modulo 4. It is thus impossible, that one has a solution of the Diophantine equation, since formula_5 must be the square of an integer, and the square of an integer is congruent to 0 or 1 modulo 4. Examples. 
Any Pythagorean triangle is a Heronian triangle. The side lengths of such a triangle are integers, by definition. In any such triangle, one of the two shorter sides has even length, so the area (the product of these two sides, divided by two) is also an integer. Examples of Heronian triangles that are not right-angled are the isosceles triangles obtained by joining a Pythagorean triangle and its mirror image along a side of the right angle. Starting with the Pythagorean triple 3, 4, 5, this gives two Heronian triangles with side lengths (5, 5, 6) and (5, 5, 8) and area 12. More generally, given two Pythagorean triples formula_14 and formula_15 with largest entries c and e, one can join the corresponding triangles along the sides of length a (see the figure) to get a Heronian triangle with side lengths formula_16 and area formula_17 (this is an integer, since the area of a Pythagorean triangle is an integer). There are Heronian triangles that cannot be obtained by joining Pythagorean triangles. For example, the Heronian triangle of side lengths formula_18 and area 72 cannot be obtained in this way, since none of its altitudes is an integer. Such Heronian triangles are known as indecomposable. However, every Heronian triangle can be constructed from right triangles with rational side lengths, and is thus similar to a decomposable Heronian triangle. In fact, at least one of the altitudes of a triangle is inside the triangle, and divides it into two right triangles. These triangles have rational sides, since the cosine and the sine of the angles of a Heronian triangle are rational numbers, and, with the notation of the figure, one has formula_19 and formula_20 where formula_21 is the left-most angle of the triangle. Rationality properties. Many quantities related to a Heronian triangle are rational numbers; in particular, as noted above, the altitudes, the circumradius, the inradius and exradii, and the sines and cosines of the three angles are all rational. Properties of side lengths. Here are some properties of the side lengths of Heronian triangles, whose side lengths are "a", "b", "c" and whose area is A. Parametrizations. A parametric equation or "parametrization" of Heronian triangles consists of an expression of the side lengths and area of a triangle as functions (typically polynomial functions) of some parameters, such that the triangle is Heronian if and only if the parameters satisfy some constraints, typically to be positive integers satisfying some inequalities. It is also generally required that all Heronian triangles can be obtained up to a scaling for some values of the parameters, and that these values are unique, if an order on the sides of the triangle is specified. The first such parametrization was discovered by Brahmagupta (598-668 A.D.), who did not prove that all Heronian triangles can be generated by the parametrization. In the 18th century, Leonhard Euler provided another parametrization and proved that it generates all Heronian triangles. These parametrizations are described in the next two subsections. In the third subsection, a rational parametrization, that is, a parametrization where the parameters are positive rational numbers, is naturally derived from properties of Heronian triangles. Both Brahmagupta's and Euler's parametrizations can be recovered from this rational parametrization by clearing denominators. This provides a proof that Brahmagupta's and Euler's parametrizations generate all Heronian triangles. Brahmagupta's parametric equation. The Indian mathematician Brahmagupta (598-668 A.D.)
discovered the following parametric equations for generating Heronian triangles, but did not prove that every similarity class of Heronian triangles can be obtained this way. For three positive integers m, n and k that are setwise coprime (formula_27) and satisfy formula_28 (to guarantee positive side lengths) and formula_29 (for uniqueness): formula_30 where s is the semiperimeter, A is the area, and r is the inradius. The resulting Heronian triangle is not always primitive, and a scaling may be needed to get the corresponding primitive triangle. For example, taking "m" = 36, "n" = 4 and "k" = 3 produces a triangle with "a" = 5220, "b" = 900 and "c" = 5400, which is similar to the (5, 29, 30) Heronian triangle with a proportionality factor of 180. The fact that the generated triangle is not primitive is an obstacle to using this parametrization for generating all Heronian triangles with side lengths less than a given bound (since the size of formula_31 cannot be predicted). Euler's parametric equation. The following method of generating all Heronian triangles was discovered by Leonhard Euler, who was the first to provably parametrize all such triangles. For four positive integers m coprime to n and p coprime to q (formula_32) satisfying formula_33 (to guarantee positive side lengths): formula_34 where s is the semiperimeter, A is the area, and r is the inradius. Even when m, n, p, and q are pairwise coprime, the resulting Heronian triangle may not be primitive. In particular, if m, n, p, and q are all odd, the three side lengths are even. It is also possible that a, b, and c have a common divisor other than 2. For example, with "m" = 2, "n" = 1, "p" = 7, and "q" = 4, one gets ("a", "b", "c") = (130, 140, 150), where each side length is a multiple of 10; the corresponding primitive triple is (13, 14, 15), which can also be obtained by dividing the triple resulting from "m" = 2, "n" = 1, "p" = 3, "q" = 2 by two, then exchanging "b" and "c". Half-angle tangent parametrization. Let formula_35 be the side lengths of a triangle, let formula_36 be the interior angles opposite these sides, and let formula_37 formula_38 and formula_39 be the half-angle tangents. The values formula_40 are all positive and satisfy formula_41; this "triple tangent identity" is the half-angle tangent version of the fundamental triangle identity written as formula_42 radians (that is, 90°), as can be proved using the addition formula for tangents. By the laws of sines and cosines, all of the sines and the cosines of formula_36 are rational numbers if the triangle is a rational Heronian triangle and, because a half-angle tangent is a rational function of the sine and cosine, it follows that the half-angle tangents are also rational. Conversely, if formula_40 are positive rational numbers such that formula_43 it can be seen that they are the half-angle tangents of the interior angles of a class of similar Heronian triangles. The condition formula_41 can be rearranged to formula_44 and the restriction formula_45 requires formula_46 Thus there is a bijection between the similarity classes of rational Heronian triangles and the pairs of positive rational numbers formula_47 whose product is less than 1.
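As a quick numerical check of this bijection, the following sketch evaluates, for one arbitrary choice of rational half-angle tangents, the closed-form side lengths of the unit-circumdiameter representative derived in the next paragraph; the parameter values are examples only.

from fractions import Fraction

# Hedged sketch: for rational half-angle tangents t, u > 0 with t*u < 1, the triangle
# inscribed in a unit-diameter circle has rational sides; clearing denominators then
# yields an integer Heronian triangle similar to it.
def triangle_from_half_angle_tangents(t, u):
    a = 2 * t / (1 + t * t)
    b = 2 * u / (1 + u * u)
    c = 2 * (t + u) * (1 - t * u) / ((1 + t * t) * (1 + u * u))
    return a, b, c

t, u = Fraction(1, 2), Fraction(1, 3)      # example parameters with t*u = 1/6 < 1
a, b, c = triangle_from_half_angle_tangents(t, u)
print(a, b, c)                             # 4/5 3/5 1 -> scaling by 5 gives the (3, 4, 5) triangle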
To make this bijection explicit, one can choose, as a specific member of the similarity class, the triangle inscribed in a unit-diameter circle with side lengths equal to the sines of the opposite angles: formula_48 where formula_49 is the semiperimeter, formula_50 is the area, formula_51 is the inradius, and all these values are rational because formula_52 and formula_53 are rational. To obtain an (integral) Heronian triangle, the denominators of a, b, and c must be cleared. There are several ways to do this. If formula_54 and formula_55 with formula_56 (irreducible fractions), and the triangle is scaled up by formula_57 the result is Euler's parametrization. If formula_58 and formula_59 with formula_60 (lowest common denominator), and the triangle is scaled up by formula_61 the result is similar but not quite identical to Brahmagupta's parametrization. If, instead, it is formula_62 and formula_63 that are reduced to the lowest common denominator, that is, if formula_64 and formula_65 with formula_66 then one gets exactly Brahmagupta's parametrization by scaling up the triangle by formula_67 This proves that either parametrization generates all Heronian triangles. Other results. Fast algorithms for generating Heronian triangles have been derived. There are infinitely many primitive and indecomposable non-Pythagorean Heronian triangles with integer values for the inradius formula_68 and all three of the exradii formula_69, including the ones generated by formula_70 There are infinitely many Heronian triangles that can be placed on a lattice such that not only are the vertices at lattice points, as holds for all Heronian triangles, but additionally the centers of the incircle and excircles are at lattice points. Parametrizations of several particular types of Heronian triangles are also known. Examples. The list of primitive integer Heronian triangles, sorted by area and, if this is the same, by perimeter, starts as in the following table. "Primitive" means that the greatest common divisor of the three side lengths equals 1. The list of primitive Heronian triangles whose sides do not exceed 6,000,000 has been computed. Heronian triangles with perfect square sides. Heronian triangles with perfect square sides are related to the Perfect cuboid problem. As of February 2021, only two "primitive" Heronian triangles with perfect square sides are known: (1853², 4380², 4427², Area=32918611718880), published in 2013, and (11789², 68104², 68595², Area=284239560530875680), published in 2018. Equable triangles. A shape is called equable if its area equals its perimeter. There are exactly five equable Heronian triangles: the ones with side lengths (5,12,13), (6,8,10), (6,25,29), (7,15,20), and (9,10,17), though only four of them are primitive. Almost-equilateral Heronian triangles. Since the area of an equilateral triangle with rational sides is an irrational number, no equilateral triangle is Heronian. However, a sequence of isosceles Heronian triangles that are "almost equilateral" can be developed from the duplication of right-angled triangles, in which the hypotenuse is almost twice as long as one of the legs. The first few examples of these almost-equilateral triangles are listed in the following table (sequence in the OEIS): There is a unique sequence of Heronian triangles that are "almost equilateral" because the three sides are of the form "n" − 1, "n", "n" + 1.
A method for generating all solutions to this problem based on continued fractions was described in 1864 by Edward Sang, and in 1880 Reinhold Hoppe gave a closed-form expression for the solutions. The first few examples of these almost-equilateral triangles are listed in the following table (sequence in the OEIS): Subsequent values of "n" can be found by multiplying the previous value by 4, then subtracting the value prior to that one (52 = 4 × 14 − 4, 194 = 4 × 52 − 14, etc.), thus: formula_71 where "t" denotes any row in the table. This is a Lucas sequence. Alternatively, the formula formula_72 generates all "n" for positive integers "t". Equivalently, let "A" = area and "y" = inradius, then, formula_73 where {"n", "y"} are solutions to "n"2 − 12"y"2 = 4. A small transformation "n" = "2x" yields a conventional Pell equation "x"2 − 3"y"2 = 1, the solutions of which can then be derived from the regular continued fraction expansion for √3. The variable "n" is of the form formula_74, where "k" is 7, 97, 1351, 18817, ... The numbers in this sequence have the property that "k" consecutive integers have integral standard deviation.
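A hedged sketch of this recurrence is given below; it regenerates the first few almost-equilateral triangles from the starting values n = 4 and n = 14 quoted above and recomputes their areas with Heron's formula to confirm they are integers.

import math

# Recurrence n_t = 4*n_{t-1} - n_{t-2} for almost-equilateral Heronian triangles
# with sides (n - 1, n, n + 1); starting values 4 and 14 as in the text above.
def almost_equilateral(count):
    n_prev, n = 4, 14
    triangles = [(n_prev - 1, n_prev, n_prev + 1), (n - 1, n, n + 1)]
    for _ in range(count - 2):
        n_prev, n = n, 4 * n - n_prev
        triangles.append((n - 1, n, n + 1))
    return triangles

for a, b, c in almost_equilateral(5):
    s = (a + b + c) // 2                                  # semiperimeter (an integer here)
    area = math.isqrt(s * (s - a) * (s - b) * (s - c))    # exact, since the area is an integer
    print((a, b, c), "area =", area)
# (3, 4, 5) area = 6, (13, 14, 15) area = 84, (51, 52, 53) area = 1170, ...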
[ { "math_id": 0, "text": "16\\,A^2=(a+b+c)(a+b-c)(b+c-a)(c+a-b);" }, { "math_id": 1, "text": "s^2" }, { "math_id": 2, "text": "\\frac pd, \\frac qd,\\frac rd," }, { "math_id": 3, "text": "\\frac d{\\gcd(p,q,r)}" }, { "math_id": 4, "text": "a, b,c" }, { "math_id": 5, "text": "16A^2" }, { "math_id": 6, "text": "4A" }, { "math_id": 7, "text": "A^2" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "((a+b)^2-c^2)(c^2-(a-b)^2)," }, { "math_id": 10, "text": "a+b" }, { "math_id": 11, "text": "a-b" }, { "math_id": 12, "text": "1" }, { "math_id": 13, "text": "-1" }, { "math_id": 14, "text": "(a,b,c)" }, { "math_id": 15, "text": "(a,d,e)" }, { "math_id": 16, "text": "c,e,b+d" }, { "math_id": 17, "text": "\\tfrac12a(b+d)" }, { "math_id": 18, "text": "5, 29, 30" }, { "math_id": 19, "text": "a=c\\sin \\alpha" }, { "math_id": 20, "text": "b=c\\cos\\alpha," }, { "math_id": 21, "text": "\\alpha" }, { "math_id": 22, "text": "p_a=\\tfrac{2aA}{a^2+b^2-c^2}," }, { "math_id": 23, "text": "p_b=\\tfrac{2bA}{a^2+b^2-c^2}," }, { "math_id": 24, "text": "p_c=\\tfrac{2cA}{a^2-b^2+c^2}," }, { "math_id": 25, "text": "\\tfrac{2Aa}{a^2+2A}" }, { "math_id": 26, "text": "s(s-a)(s-b)(s-c)" }, { "math_id": 27, "text": "\\gcd(m,n,k)=1" }, { "math_id": 28, "text": "mn > k^2" }, { "math_id": 29, "text": "m \\ge n" }, { "math_id": 30, "text": "\\begin{align}\na &= n(m^2 + k^2), & s - a &= \\tfrac12(b + c - a) = n(mn - k^2), \\\\\nb &= m(n^2 + k^2), & s - b &= \\tfrac12(c + a - b) = m(mn - k^2), \\\\\nc &= (m + n)(mn - k^2), & s - c &= \\tfrac12(a + b - c) = (m + n)k^2, \\\\\n&& s &= \\tfrac12(a + b + c) = mn(m + n), \\\\\nA &= mnk(m+n)(mn-k^{2}), & r &= k(mn - k^2), \\\\\n\\end{align}" }, { "math_id": 31, "text": "\\gcd(a,b,c)" }, { "math_id": 32, "text": "\\gcd{(m, n)} = \\gcd{(p, q)} = 1" }, { "math_id": 33, "text": "mp > nq" }, { "math_id": 34, "text": "\\begin{align}\na &= mn(p^2 + q^2), & s - a &= mq(mp - nq), \\\\\nb &= pq(m^2 + n^2), & s - b &= np(mp - nq), \\\\\nc &= (mq + np)(mp - nq), & s - c &= nq(mq + np), \\\\\n& & s &= mp(mq + np), \\\\\nA &= mnpq(mq + np)(mp - nq), & r &= nq(mp - nq), \\\\\n\\end{align}" }, { "math_id": 35, "text": "a, b, c > 0" }, { "math_id": 36, "text": "\\alpha, \\beta, \\gamma" }, { "math_id": 37, "text": "t = \\tan\\frac\\alpha2," }, { "math_id": 38, "text": "u = \\tan\\frac\\beta2," }, { "math_id": 39, "text": "v = \\tan\\frac\\gamma2" }, { "math_id": 40, "text": "t, u, v" }, { "math_id": 41, "text": "tu + uv + vt = 1" }, { "math_id": 42, "text": "\\frac\\alpha 2 + \\frac\\beta 2 + \\frac\\gamma 2 = \\frac\\pi 2" }, { "math_id": 43, "text": "tu + uv + vt = 1," }, { "math_id": 44, "text": "v = \\frac{1-tu}{t+u}," }, { "math_id": 45, "text": "v > 0" }, { "math_id": 46, "text": "tu < 1." 
}, { "math_id": 47, "text": "(t, u)" }, { "math_id": 48, "text": "\\begin{align}\na &= \\sin\\alpha = \\frac{2t}{1+t^2}, & s - a = \\frac{2u(1-tu)}{(1+t^2)(1+u^2)}, \\\\[5mu]\nb &= \\sin\\beta = \\frac{2u}{1+u^2}, & s - b = \\frac{2t(1-tu)}{(1+t^2)(1+u^2)}, \\\\[5mu]\nc &= \\sin\\gamma = \\frac{2(t+u)(1-tu)}{(1+t^2)(1+u^2)},\n & s - c = \\frac{2tu(t+u)}{(1+t^2)(1+u^2)}, \\\\[5mu]\n& & s = \\frac{2(t+u)}{(1+t^2)(1+u^2)}, \\\\\nA &= \\frac{4tu(t+u)(1-tu)}{(1+t^2)^2(1+u^2)^2}, & r = \\frac{2tu(1-tu)}{(1+t^2)(1+u^2)},\n\\end{align}" }, { "math_id": 49, "text": "s = \\tfrac12(a + b + c)" }, { "math_id": 50, "text": "A = \\tfrac12 ab \\sin \\gamma" }, { "math_id": 51, "text": "r = \\sqrt{\\tfrac{(s-a)(s-b)(s-c)}{s}}" }, { "math_id": 52, "text": "t" }, { "math_id": 53, "text": "u" }, { "math_id": 54, "text": "t = m/n" }, { "math_id": 55, "text": "u = p/q," }, { "math_id": 56, "text": "\\gcd(m, n) = \\gcd(p,q) = 1" }, { "math_id": 57, "text": "\\tfrac12(m^2 + n^2)(p^2 + q^2)," }, { "math_id": 58, "text": "t = m/k" }, { "math_id": 59, "text": "u = n/k" }, { "math_id": 60, "text": "\\gcd(m, n, k) = 1" }, { "math_id": 61, "text": "(k^2 + m^2)(k^2 + n^2)/2k," }, { "math_id": 62, "text": "1/t" }, { "math_id": 63, "text": "1/u" }, { "math_id": 64, "text": "t = k/m" }, { "math_id": 65, "text": "u = k/n" }, { "math_id": 66, "text": "\\gcd(m, n, k) = 1," }, { "math_id": 67, "text": "(k^2 + m^2)(k^2 + n^2)/2k." }, { "math_id": 68, "text": "r" }, { "math_id": 69, "text": "(r_a, r_b, r_c)" }, { "math_id": 70, "text": "\\begin{align}\na &= 5(5n^2 + n - 1), & r_a &= 5n+3, \\\\\nb &= (5n + 3)(5n^2 - 4n + 1), & r_b &= 5n^2+n-1, \\\\\nc &= (5n - 2)(5n^2 + 6n + 2), & r_c &= (5n - 2)(5n + 3)(5n^2 + n - 1), \\\\\n& & r &= 5n - 2, \\\\\nA &= (5n - 2)(5n + 3)(5n^2 + n - 1) = r_c.\n\\end{align}" }, { "math_id": 71, "text": "n_t = 4n_{t-1} - n_{t-2} \\, ," }, { "math_id": 72, "text": "(2 + \\sqrt{3})^t + (2 - \\sqrt{3})^t" }, { "math_id": 73, "text": "\\big((n-1)^2+n^2+(n+1)^2\\big)^2-2\\big((n-1)^4+n^4+(n+1)^4\\big) = (6n y)^2 = (4A)^2" }, { "math_id": 74, "text": "n=\\sqrt{2 + 2 k}" } ]
https://en.wikipedia.org/wiki?curid=731959
73196152
Syntax and semantics of logic programming
Formal semantics of logic programming languages. Logic programming is a programming paradigm that includes languages based on formal logic, including Datalog and Prolog. This article describes the syntax and semantics of the purely declarative subset of these languages. Confusingly, the name "logic programming" also refers to a specific programming language that roughly corresponds to the declarative subset of Prolog. Unfortunately, the term must be used in both senses in this article. Declarative logic programs consist entirely of "rules" of the form H :- B1, ..., BN. Each such rule can be read as an implication: formula_0 meaning "If each formula_1 is true, then formula_2 is true". Logic programs compute the set of facts that are implied by their rules. Many implementations of Datalog, Prolog, and related languages add procedural features such as Prolog's cut operator or extra-logical features such as a foreign function interface. The formal semantics of such extensions are beyond the scope of this article. Datalog. "Datalog" is the simplest widely-studied logic programming language. There are three major definitions of the semantics of Datalog, and they are all equivalent. The syntax and semantics of other logic programming languages are extensions and generalizations of those of Datalog. Syntax. A Datalog program consists of a list of "rules" (Horn clauses). If "constant" and "variable" are two countable sets of constants and variables respectively and "relation" is a countable set of predicate symbols, then the following BNF grammar expresses the structure of a Datalog program:
<program> ::= <rule> <program> | ""
<rule> ::= <atom> ":-" <atom-list> "."
<atom> ::= <relation> "(" <term-list> ")"
<atom-list> ::= <atom> | <atom> "," <atom-list> | ""
<term> ::= <constant> | <variable>
<term-list> ::= <term> | <term> "," <term-list> | ""
Atoms are also referred to as "literals". The atom to the left of the :- symbol is called the "head" of the rule; the atoms to the right are the "body". Every Datalog program must satisfy the condition that every variable that appears in the head of a rule also appears in the body (this condition is sometimes called the "range restriction"). Rules with empty bodies are called "facts". For example, the following rule is a fact: r(x) :- . Syntactic sugar. Many implementations of logic programming extend the above grammar to allow writing facts without the :-, like so: r(x). Many also allow writing 0-ary relations without parentheses, like so: p :- q. These are merely abbreviations (syntactic sugar); they have no impact on the semantics of the program. Example. The following program computes the relation path, which is the transitive closure of the relation edge.
edge(x, y).
edge(y, z).
path(A, B) :- edge(A, B).
path(A, C) :- path(A, B), edge(B, C).
Semantics. There are three widely-used approaches to the semantics of Datalog programs: model-theoretic, fixed-point, and proof-theoretic. These three approaches can be proven to be equivalent.
An atom is called "ground" if none of its subterms are variables. Intuitively, each of the semantics defines the meaning of a program to be the set of all ground atoms that can be deduced from the rules of the program, starting from the facts. Model theoretic. A rule is called ground if all of its atoms (head and body) are ground. A ground rule "R"1 is a "ground instance" of another rule "R"2 if "R"1 is the result of a substitution of constants for all the variables in "R"2. The "Herbrand base" of a Datalog program is the set of all ground atoms that can be made with the constants appearing in the program. An "interpretation" (also known as a "database instance") is a subset of the Herbrand base. A ground atom is true in an interpretation I if it is an element of I. A rule is "true in an interpretation I" if for each ground instance of that rule, if all the clauses in the body are true in I, then the head of the rule is also true in I. A "model" of a Datalog program "P" is an interpretation I of "P" which contains all the ground facts of "P", and makes all of the rules of "P" true in I. Model-theoretic semantics state that the meaning of a Datalog program is its minimal model (equivalently, the intersection of all its models). For example, this program:
edge(x, y).
edge(y, z).
path(A, B) :- edge(A, B).
path(A, C) :- path(A, B), edge(B, C).
has this Herbrand universe: x, y, z; this Herbrand base: edge(x, x), edge(x, y), ..., edge(z, z), path(x, x), ..., path(z, z); and this minimal Herbrand model: edge(x, y), edge(y, z), path(x, y), path(y, z), path(x, z). Fixed-point. Let I be the set of interpretations of a Datalog program "P", that is, "I" = P("H"), where "H" is the Herbrand base of "P" and P is the powerset operator. The "immediate consequence operator" for "P" is the following map T from I to I: for each ground instance of each rule in "P", if every clause in the body is in the input interpretation, then add the head of the ground instance to the output interpretation. This map T is monotonic with respect to the partial order given by subset inclusion on I. By the Knaster–Tarski theorem, this map has a least fixed point; by the Kleene fixed-point theorem the fixed point is the supremum of the chain formula_3. The least fixed point of T coincides with the minimal Herbrand model of the program. The fixpoint semantics suggest an algorithm for computing the minimal Herbrand model: start with the set of ground facts in the program, then repeatedly add consequences of the rules until a fixpoint is reached. This algorithm is called naïve evaluation (a toy sketch is given after the proof-theoretic description below). Proof-theoretic. Given a program P, a "proof tree" of a ground atom A is a tree with a root labeled by A, leaves labeled by ground atoms from the heads of facts in P, and branches in which a node labeled by a ground atom G has children formula_4 such that there exists a ground instance G :- A1, ..., An of a rule in P. The proof-theoretic semantics defines the meaning of a Datalog program to be the set of ground atoms that can be derived from such trees. This set coincides with the minimal Herbrand model. One might be interested in knowing whether or not a particular ground atom appears in the minimal Herbrand model of a Datalog program, perhaps without caring much about the rest of the model.
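As referenced in the fixed-point discussion above, the following hedged sketch runs naïve bottom-up evaluation for the edge/path example; representing atoms as Python tuples is an arbitrary choice for this illustration, not a standard API.

from itertools import product

# Facts of the example program; atoms are tuples (relation, arg1, arg2).
facts = {("edge", "x", "y"), ("edge", "y", "z")}
constants = {"x", "y", "z"}

def immediate_consequences(interp):
    new = set(interp)
    for a, b in product(constants, repeat=2):
        if ("edge", a, b) in interp:
            new.add(("path", a, b))                      # path(A, B) :- edge(A, B).
    for a, b, c in product(constants, repeat=3):
        if ("path", a, b) in interp and ("edge", b, c) in interp:
            new.add(("path", a, c))                      # path(A, C) :- path(A, B), edge(B, C).
    return new

model = facts
while True:                                              # iterate T until a fixed point is reached
    step = immediate_consequences(model)
    if step == model:
        break
    model = step
print(sorted(model))   # the minimal Herbrand model listed above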
A top-down reading of the proof trees described in the previous subsection suggests an algorithm for computing the answers to such queries about individual atoms; such a reading informs the SLD resolution algorithm, which forms the basis for the evaluation of Prolog. Other approaches. The semantics of Datalog have also been studied in the context of fixpoints over more general semirings. Logic programming. While the name "logic programming" is used to refer to the entire paradigm of programming languages including Datalog and Prolog, when discussing formal semantics, it generally refers to an extension of Datalog with function symbols. Logic programs are also called "Horn clause programs". Logic programming as discussed in this article is closely related to the "pure" or declarative subset of Prolog. Syntax. The syntax of logic programming extends the syntax of Datalog with function symbols. Logic programming drops the range restriction, allowing variables that do not appear in the body of a rule to appear in its head. Semantics. Due to the presence of function symbols, the Herbrand models of logic programs can be infinite. However, the semantics of a logic program is still defined to be its minimal Herbrand model. Relatedly, the fixpoint of the immediate consequence operator may not converge in a finite number of steps (or to a finite set). However, any ground atom in the minimal Herbrand model will have a finite proof tree. This is why Prolog is evaluated top-down. Just as in Datalog, the three semantics can be proven equivalent. Negation. Logic programming has the desirable property that all three major definitions of the semantics of logic programs agree. In contrast, there are many conflicting proposals for the semantics of logic programs with negation. The source of the disagreement is that logic programs have a unique minimal Herbrand model, but in general, logic programming (or even Datalog) programs with negation do not. Syntax. Negation is written not, and can appear in front of any atom in the body of a rule:
<atom-list> ::= <atom> | "not" <atom> | <atom> "," <atom-list> | ""
Semantics. Stratified negation. A logic program with negation is "stratified" when it is possible to assign each relation to some "stratum", such that if a relation R appears negated in the body of a rule defining a relation S, then R is in a lower stratum than S. The model-theoretic and fixed-point semantics of Datalog can be extended to handle stratified negation, and such extensions can be proved equivalent. Many implementations of Datalog use a bottom-up evaluation model inspired by the fixed-point semantics. Since this semantics can handle stratified negation, several implementations of Datalog implement stratified negation. While stratified negation is a common extension to Datalog, there are reasonable programs that cannot be stratified. The following program describes a two-player game where a player wins if their opponent has no moves:
move(a, b).
win(X) :- move(X, Y), not win(Y).
This program is not stratified, but it seems reasonable to think that a should win the game. Stable model semantics. The stable model semantics defines a condition for calling certain Herbrand models of a program "stable". Intuitively, stable models are the "possible sets of beliefs that a rational agent might hold, given [the program]" as premises. A program with negation may have many stable models or no stable models. For instance, the program
p :- not q.
q :- not p.
has two stable models formula_5, formula_6. The one-rule program p :- not p. has no stable models. Every stable model is a minimal Herbrand model. A Datalog program without negation has a single stable model, which is exactly its minimal Herbrand model. The stable model semantics defines the meaning of a logic program with negation to be its stable model, if there is exactly one. However, it can be useful to investigate all (or at least, several) of the stable models of a program; this is the goal of answer set programming. Further extensions. Several other extensions of Datalog have been proposed and studied, including variants with support for integer constants and functions (including DatalogZ), inequality constraints in the bodies of rules, and aggregate functions. Constraint logic programming allows for constraints over domains such as the reals or integers to appear in the bodies of rules.
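To make the stable-model examples above concrete, the following hedged sketch brute-forces the stable models of the two-rule program p :- not q. q :- not p. via the standard reduct construction (not described in this article); the data structures are ad hoc choices for this illustration.

from itertools import chain, combinations

# Rules as (head, positive body, negative body); this encodes  p :- not q.  q :- not p.
rules = [("p", [], ["q"]),
         ("q", [], ["p"])]
atoms = {"p", "q"}

def least_model(positive_rules):
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Reduct: drop rules whose negated atoms are in the candidate, then drop the negations.
    reduct = [(head, pos) for head, pos, neg in rules if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])   # [{'p'}, {'q'}] -- the two stable models

Replacing the rules with the one-rule program p :- not p. in the same sketch yields an empty list, matching the claim above that it has no stable models.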
[ { "math_id": 0, "text": "B_1\\land\\ldots\\land B_n\\rightarrow H" }, { "math_id": 1, "text": "B_i" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "T(\\emptyset), T(T(\\emptyset)), \\ldots, T^n(\\emptyset), \\ldots " }, { "math_id": 4, "text": "A_1, \\ldots, A_n" }, { "math_id": 5, "text": "\\{p\\}" }, { "math_id": 6, "text": "\\{q\\}" } ]
https://en.wikipedia.org/wiki?curid=73196152
73196815
Resilience (mathematics)
Mathematical measure of transient behavior. In mathematical modeling, resilience refers to the ability of a dynamical system to recover from perturbations and return to its original stable steady state. It is a measure of the stability and robustness of a system in the face of changes or disturbances. If a system is not resilient enough, it is more susceptible to perturbations and can more easily undergo a critical transition. A common analogy used to explain the concept of resilience of an equilibrium is one of a ball in a valley. A resilient steady state corresponds to a ball in a deep valley, so any push or perturbation will very quickly lead the ball to return to the resting point where it started. On the other hand, a less resilient steady state corresponds to a ball in a shallow valley, so the ball will take a much longer time to return to the equilibrium after a perturbation. The concept of resilience is particularly useful in systems that exhibit tipping points, whose study has a long history that can be traced back to catastrophe theory. While this theory was initially overhyped and fell out of favor, its mathematical foundation remains strong and is now recognized as relevant to many different systems. History. In 1973, Canadian ecologist C. S. Holling proposed a definition of resilience in the context of ecological systems. According to Holling, resilience is "a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables". Holling distinguished two types of resilience: engineering resilience and ecological resilience. Engineering resilience refers to the ability of a system to return to its original state after a disturbance, such as a bridge that can be repaired after an earthquake. Ecological resilience, on the other hand, refers to the ability of a system to maintain its identity and function despite a disturbance, such as a forest that can regenerate after a wildfire while maintaining its biodiversity and ecosystem services. With time, the once well-defined and unambiguous concept of resilience has experienced a gradual erosion of its clarity, becoming more vague and closer to an umbrella term than a specific concrete measure. Definition. Mathematically, resilience can be approximated by the inverse of the return time to an equilibrium, given by formula_0 where formula_1 is the eigenvalue of the matrix formula_2 (the Jacobian of the system linearized at the equilibrium) with the largest real part. The larger this value is, the faster a system returns to the original stable steady state, or in other words, the faster the perturbations decay. Applications and examples. In ecology, resilience might refer to the ability of the ecosystem to recover from disturbances such as fires, droughts, or the introduction of invasive species. A resilient ecosystem would be one that is able to adapt to these changes and continue functioning, while a less resilient ecosystem might experience irreversible damage or collapse. The exact definition of resilience has remained vague for practical matters, which has slowed the proper application of its insights to the management of ecosystems. In epidemiology, resilience may refer to the ability of a healthy community to recover from the introduction of infected individuals. That is, a resilient system is more likely to remain at the disease-free equilibrium after the invasion of a new infection.
Some stable systems exhibit critical slowing down where, as they approach a basic reproduction number of 1, their resilience decreases, hence taking a longer time to return to the disease-free steady state. Resilience is an important concept in the study of complex systems, where there are many interacting components that can affect each other in unpredictable ways. Mathematical models can be used to explore the resilience of such systems and to identify strategies for improving their resilience in the face of environmental or other changes. For example, when modelling networks it is often important to be able to quantify network resilience, or network robustness, to the loss of nodes. Scale-free networks are particularly resilient since most of their nodes have few links. This means that if some nodes are randomly removed, it is more likely that the nodes with fewer connections are taken out, thus preserving the key properties of the network.
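A minimal numerical sketch of the eigenvalue-based definition given above, assuming a linearized system dx/dt = Ax with a made-up 2×2 Jacobian and assuming NumPy is available:

import numpy as np

# Resilience approximated as -Re(lambda_1), where lambda_1 is the eigenvalue of the
# Jacobian A with the largest real part; the matrix below is an illustrative example.
A = np.array([[-0.5, 0.3],
              [0.1, -1.2]])

eigenvalues = np.linalg.eigvals(A)
leading = eigenvalues[np.argmax(eigenvalues.real)]
print("leading eigenvalue:", leading)
print("resilience:", -leading.real)     # positive, so perturbations decay back to equilibrium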
[ { "math_id": 0, "text": "\\text{resilience} \\equiv -\\text{Re}(\\lambda_1(\\textbf{A}))" }, { "math_id": 1, "text": "\\lambda_1" }, { "math_id": 2, "text": "\\textbf{A}" } ]
https://en.wikipedia.org/wiki?curid=73196815
73198875
Erdős–Kaplansky theorem
On the dimension of vector space duals. The Erdős–Kaplansky theorem is a theorem from functional analysis. The theorem makes a fundamental statement about the dimension of the dual spaces of infinite-dimensional vector spaces; in particular, it shows that the algebraic dual space is not isomorphic to the vector space itself. A more general formulation makes it possible to compute the exact dimension of any function space. The theorem is named after Paul Erdős and Irving Kaplansky. Statement. Let formula_0 be an infinite-dimensional vector space over a field formula_1 and let formula_2 be some basis of it. Then for the dual space formula_3, formula_4 By Cantor's theorem, this cardinal is strictly larger than the dimension formula_5 of formula_0. More generally, if formula_2 is an arbitrary infinite set, the dimension of the space of all functions formula_6 is given by: formula_7 When formula_2 is finite, it is a standard result that formula_8. This gives a full characterization of the dimension of this space.
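As a worked instance (the particular field and index set below are chosen for illustration and are not part of the theorem's statement), take the field to be the rationals and the index set to be countable; the countably-dimensional space of finitely supported sequences has the full sequence space as its algebraic dual, so the theorem gives a strictly larger dual dimension:

\dim_{\mathbb{Q}}\Bigl(\bigoplus_{\mathbb{N}}\mathbb{Q}\Bigr) = \aleph_0,
\qquad
\dim_{\mathbb{Q}}\Bigl(\Bigl(\bigoplus_{\mathbb{N}}\mathbb{Q}\Bigr)^{\!*}\Bigr)
  = \dim_{\mathbb{Q}}\bigl(\mathbb{Q}^{\mathbb{N}}\bigr)
  = \operatorname{card}\bigl(\mathbb{Q}^{\mathbb{N}}\bigr)
  = 2^{\aleph_0}.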
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "\\mathbb{K}" }, { "math_id": 2, "text": "I" }, { "math_id": 3, "text": "E^*" }, { "math_id": 4, "text": "\\operatorname{dim}(E^*)=\\operatorname{card}(\\mathbb{K}^I)." }, { "math_id": 5, "text": "\\operatorname{card}(I)" }, { "math_id": 6, "text": "\\mathbb{K}^I" }, { "math_id": 7, "text": "\\operatorname{dim}(\\mathbb{K}^I)=\\operatorname{card}(\\mathbb{K}^I)." }, { "math_id": 8, "text": "\\dim(\\mathbb{K}^I) = \\operatorname{card}(I)" } ]
https://en.wikipedia.org/wiki?curid=73198875
73200138
EPIC-Seq
EPIC-seq (short for Epigenetic Expression Inference by Cell-free DNA Sequencing) is a high-throughput method that specifically targets gene promoters using cell-free DNA (cfDNA) sequencing. By employing non-invasive techniques such as blood sampling, it infers the expression levels of targeted genes. It consists of both wet and dry lab stages. EPIC-seq involves deep sequencing of transcription start sites (TSS). The underlying hypothesis is that, with deep sequencing of these TSSs, fragmentomic features (chromatin fragmentation patterns and properties) allow higher-resolution analyses than alternative approaches. The method has been shown effective for gene-level expression inference, molecular subtyping of diffuse large B cell lymphoma (DLBCL), histological classification of non-small-cell lung cancer (NSCLC), evaluation of the results of immunotherapy agents, and assessment of the genes' prognostic importance. EPIC-seq uses machine learning to deduce the RNA expression of the genes and proposes two new metrics: promoter fragmentation entropy (PFE), an adjusted Shannon index for entropy, and the nucleosome-depleted region (NDR) score, the depth of sequencing in NDR regions. PFE showed superior performance compared to earlier metrics for fragmentomic features. Additionally, EPIC-seq has been mentioned as a possible solution for detecting tissue damage and esophageal cancer using methylation profiles of cfDNA, profiling of donor liver molecular networks, and inflammatory bowel disease (IBD) detection. Background. Historical Usage of cfDNA and fragmentomic features. cfDNA, the cell-death-related, chromatin-fragmented DNA molecules contained in blood plasma, has previously been used for detecting transplant tissue rejection, prenatal fetal aneuploidy testing, tumour profiling, and early cancer detection. Nevertheless, prevalent liquid biopsy methods for cfDNA profiling depend on detecting germline or somatic genetic variations, which may be absent even in patients bearing a high disease burden and in cancers with high tumour mutation rates. Historically, the usage of fragmentomic features of cfDNA samples was shown to be another way to approach the problems mentioned. These features have been shown to inform the tissue-of-origin classification of cfDNA molecules, which can help segregate tumour-related somatic mutations. However, current methods that use fragmentomic features, such as shallow whole genome sequencing (WGS) of cfDNA, do not fully cover all tissues' effects and provide too little sequencing depth and breadth to infer low-level (for example, gene-level) properties. Hence, these methods require a high tumour burden from the patients. Circulating Tumor DNA profiling. Circulating tumour DNA (ctDNA) molecules are tumour-derived cell-free DNA (cfDNA) circulating in the bloodstream and are not associated with cells. ctDNA primarily arises from chromatin fragmentation accompanying tumour cell death and can be extracted by liquid biopsy. ctDNA analysis has been implemented for noninvasive identification of tumour genetic characteristics and early recognition of various cancer forms. The majority of current ctDNA analysis depends on genetic differences in germline or somatic cells to diagnose diseases and detect tumour cells at an early stage. While looking at genetic variations of ctDNA can be beneficial, not all ctDNA molecules contain genetic mutations.
EPIC-seq utilizes epigenetic features of ctDNA to inform the tissue of origin of these unmutated molecules, which is helpful for cancer classification. Fragmentomic Features for Tissue-of-origin classification. The majority of circulating cfDNA molecules are fragments linked to nucleosomes, so they represent unique chromatin arrangements found in the nuclear genomes of the cells they originate from. In particular, open chromatin areas are more readily cleaved by endonucleases, whereas genomic regions linked to nucleosomal complexes are often shielded from endonuclease activity. Several studies have identified specific chromatin fragmentomic characteristics that aid in informing tissue origins through cfDNA profiling; several such features also appear among the alternative methods described later in this article. Principles of EPIC-seq. Currently, the majority of circulating tumour DNA (ctDNA) fragmentomic techniques lack the ability to achieve gene-level resolution and are effective mainly in inferring expression at elevated ctDNA levels. Consequently, they are primarily applicable to patients with notably advanced tumour burdens typically seen in late-stage cancer. To address this limitation, EPIC-seq employs hybrid capture-based targeted deep sequencing of regions flanking transcription start sites (TSS) in cfDNA. This approach allows for the acquisition of ctDNA fragmentation features crucial for predicting gene expression, such as Promoter Fragmentation Entropy (PFE) and the Nucleosome Depleted Region (NDR) score. These key fragmentomic features can capture gene-level associations with expression levels throughout the genome, enabling the construction of a predictive model for transcriptional output. This allows for high-resolution monitoring of cfDNA fragmentation and gene-level analysis. Promoter Fragmentation entropy. EPIC-seq hypothesizes that cfDNA fragments originating from active promoters, which are less shielded by nucleosomes and thus more susceptible to endonuclease cleavage, will display more erratic cleavage patterns compared to fragments from inactive promoters, which are better protected by nucleosomes. PFE is a variation of the Shannon index, a quantitative measure for estimating diversity. In the context of EPIC-seq, PFE calculates the diversity of cfDNA fragment lengths where both ends of the fragment are situated within the 2 kb region flanking each gene's TSS. The higher the PFE of a gene's TSS, the more likely the gene is highly expressed. Nucleosome Depleted region. Actively expressed genes have open chromatin at their TSS regions; they are less shielded by nucleosomes and, therefore, more susceptible to endonuclease cleavage. Consequently, the depth of cfDNA originating from the TSS of active genes tends to be shallower compared to that of inactive genes. NDR quantifies the normalized depth within each 2-kilobase window surrounding each TSS. The lower the NDR of a gene's TSS site, the more likely the gene is highly expressed. Methods. Wet Lab workflow. 1. Collection and Processing of plasma. Peripheral blood samples were obtained and processed to isolate plasma following standard protocols. Upon centrifugation, plasma specimens were preserved at −80 °C, awaiting the extraction of ctDNA. The extraction of cfDNA from plasma volumes ranging from 2 to 16 ml was carried out using established laboratory procedures. Following isolation, the concentration of cfDNA was determined using fluorescence-based quantification methods. 2. Sequencing Library preparation. A typical amount of 32 ng of cfDNA was utilized for library preparation.
DNA input was adjusted to mitigate the effects of high molecular-weight DNA contamination. The library preparation process encompassed end repair, A-tailing, and adapter ligation, which also incorporated molecular barcodes into each read. These procedures were conducted according to standardized ligation-based library preparation protocols, with overnight ligation performed at 4 °C. Following this, shotgun cfDNA libraries underwent hybrid capture targeting specific genomic regions, as detailed below. 3. Custom Capture Panels sequencing. Custom capture panels tailored to specific cancer types or personalized selectors were utilized in EPIC-seq. The capture panels targeted transcription start site regions of genes of interest. Enrichment for EPIC-seq was performed following established laboratory protocols. Subsequently, hybridization captures were pooled, and the pooled samples underwent short-read sequencing. Dry Lab workflow. Since EPIC-seq involves computational processing after the wet-lab portion, the following steps are summarized from the developers' description in the original paper. 4. Demultiplexing and Error correction. If multiplexed paired-end sequencing is used, then demultiplexing needs to be done to sort reads from different samples into different files. After demultiplexing, error correction and read-pair elimination based on unique identifier and barcode matching of pairs can be done. The developers adapt the demultiplexing and error correction steps from the CAPP-seq demultiplexing pipeline. 5. Outer Sequence Removal and trimming. To preserve shorter fragment reads, barcode removal and adapter trimming need to be done. After read preprocessing, the reads should be aligned to the human genome reference. The original EPIC-seq used hg19, but an updated version of the human genome reference can be used for better results. One should be careful with the aligner's options, since some aligners can interfere with the inclusion of shorter reads paired with longer ones. For deduplication, the attached customized molecular barcodes should be exploited. These barcodes include endogenous and exogenous unique molecular identifiers (UMIs) and are useful for distinguishing polymerase chain reaction (PCR) duplicates from genuine duplicates, and hence for removing PCR duplicates. This step is especially important for oncologic applications, since low-abundance mutations can be suppressed by PCR duplicates. 6. Read Normalization and quality control. If the data for different samples are going to be contrasted with each other, one can downsample the reads to achieve comparability. The sequencing coverage depth reported as sufficient for reasonable analysis is greater than 500-fold; thus, any sample whose mean sequencing depth does not exceed this number can be dropped for more accurate outcomes. EPIC-seq also uses an expected cfDNA fragment length range of 140–185 base pairs, based on chromatosomal length. Samples with outlier fragment length densities can be dropped to obtain higher correlation results. As the last quality control step, mapping quality should be considered. A looser threshold can be applied to EPIC-seq reads than to WGS reads, because the TSS selection criteria imposed during the design phase make EPIC-seq reads more unique. Fragmentomic Feature Analysis. 7. Shannon's entropy.
To measure the diversity of fragmentomic features, the PFE metric, derived from Shannon's entropy index, was developed. By default, 201 bins for fragment lengths 100 to 300 are used for density estimation by the maximum likelihood method. The probability of having a fragment with size formula_0 (formula_1) is computed by dividing the number of fragments with size formula_0 by the total number of fragments. Shannon's entropy is calculated with the formula formula_2 (a toy numerical sketch of this entropy computation is given at the end of this workflow description). 8. Dirichlet-Multinomial model. Next, to guard against differing sequencing depths across runs and other factors that can distort the fragment length distribution, Bayesian normalization via the Dirichlet-multinomial model is performed. For each sample, based on the fragment lengths observed in that sample, a fragment length distribution is generated by multinomial maximum likelihood estimation. Two 250-base-pair intervals are used, located from −1,000 to −750 base pairs and from +750 to +1,000 base pairs relative to the centre of the TSS. This prevents gene expression from influencing the generated distribution, as the selected intervals are relatively far from the TSS. Then, the fragment length densities from that distribution are sampled for each of the 201 fragment sizes and used as parameters for generating a Dirichlet distribution. The initial parameter of the Dirichlet distribution is set to 20. From the obtained Dirichlet distribution, 2000 fragments are sampled, and Shannon's entropy is calculated for those. The Shannon entropies are subsequently compared with the Shannon entropy values of five randomly selected background sets (formula_3 where formula_4). 9. PFE calculation. PFE is calculated as the probability of the gene-specific entropy being higher than formula_5 times each of the background set entropies individually. The variable formula_6 is sampled from the Gamma distribution with shape 1 and rate 0.5. As the last step, the expected value of the sum over backgrounds of these gene-specific entropy probabilities is reported as the PFE. That probability is based on the Dirichlet distribution generated in the previous step. 10. NDR calculation. NDR is the normalized measure of sequencing depth, which is downsampled to 2,000-fold by default in the 2,000-base-pair windows during the read preprocessing and quality control steps. 11. Machine Learning for Expression prediction. Using deep WGS cfDNA data from a carcinoma-of-unknown-primary patient with a very low quantified ctDNA concentration, the developers trained a machine learning model using bootstrapping. RNA-sequencing results from PBMC runs of five different individuals were recorded, and the average expression of three of these individuals was used as the reference for gene expression. The genes are clustered into 10 clusters based on reference gene expression to increase the resolution at the core promoters. Then, genes used as a background value for PFE calculation are removed. Next, all the fragments in extended TSS regions, regions centred on the TSS and 2,000 base pairs long, are pooled. The PFE and NDR scores are calculated for the pooled fragments. Further normalization of these scores is done based on their 95th percentile. Using these two features, the developers bootstrapped 600 expression prediction models developed for WGS data, combined in a weighted fashion.
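As referenced in step 7 above, here is a hedged toy sketch of the fragment-length Shannon entropy underlying PFE; the fragment lengths are simulated, and the Dirichlet-multinomial normalization and background comparison are deliberately omitted.

import numpy as np

# Fragment-length entropy over the 201 sizes (100-300 bp) described in step 7;
# the two "samples" below are simulated, not real cfDNA data.
def fragment_length_entropy(fragment_lengths, bins=np.arange(100, 301)):
    counts = np.array([np.sum(fragment_lengths == size) for size in bins], dtype=float)
    p = counts / counts.sum()              # maximum-likelihood size probabilities
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())   # Shannon entropy

rng = np.random.default_rng(0)
active_tss = rng.integers(100, 301, size=2000)   # diverse fragment lengths ~ open, active promoter
inactive_tss = np.full(2000, 167)                # uniform lengths ~ nucleosome-protected promoter
print(fragment_length_entropy(active_tss))       # high entropy, suggesting higher expression
print(fragment_length_entropy(inactive_tss))     # 0.0, suggesting low expression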
Among those 600 models, there are 200 univariable standalone NDR models, 200 univariable standalone PFE models, and 200 integrated NDR-PFE models. Advantages. High throughput. EPIC-seq inherits the advantages of high-throughput sequencing: fast sequencing times, high scalability, higher sequencing depths, lower costs, and low error rates. Another advantage of EPIC-seq is that it is non-invasive. This eliminates the risks of invasive sampling of sensitive tissues and allows scientists to study tissues that would be too dangerous or difficult to biopsy. Independence from a High Tumour Burden requirement. As mentioned in the introduction, EPIC-seq does not inherit two major limitations of its predecessors: the dependency of common liquid biopsy methods on germline or somatic variants, which are not guaranteed to be found even in patients with a high disease burden; and the insufficient range of cfDNA tissue consideration, genomic breadth and genomic depth of methods like shallow WGS, which leads to low-resolution inference of gene expression and, again, requires a high tumour burden for higher resolution. EPIC-seq uses fragmentomic features instead of variant calling, so it is not bound to the existence of such variants. Also, since it performs targeted sequencing instead of whole-genome sequencing, it allows scientists to increase the sequencing depth and hence obtain better resolution. Moreover, it also provides more sensitive and comprehensive tissue-of-origin information. Different Prediction sensitivities. Furthermore, the method showed consistent performance in cancer identification, classification, and treatment-effect problems such as NSCLC and DLBCL identification, histological classification of subtypes of NSCLC, molecular classification of subtypes of DLBCL, DLBCL COO detection, response prediction for programmed death-ligand 1 immune-checkpoint inhibition in advanced NSCLC cases, and detection of the prognostic value of individual genes. Generalizability. WES was done with EPIC-seq and it detected a correlation between the biological signal and the exonic regions of active genes; this shows that EPIC-seq can be generalized to the expression of genes of interest rather than only cancer genes. Robustness to cfDNA levels. In general, EPIC-seq analysis results showed a significant correlation between the inspected biological effect and the developed score. For the classification tasks, area under the receiver operating characteristic curve (AUC) scores were over 90% with a sufficient significance interval. Also, for these tasks, cfDNA levels did not change the performance unfavourably even when the levels were below 1%. So, the method shows good robustness against varying cfDNA levels as well. Finally, EPIC-seq did not show any significant changes under different pre-analytical factors, indicating that the method is robust to variation introduced by the instruments and tools used before the analysis. Limitations. While EPIC-seq offers significant potential in various biomedical applications, it also has limitations that warrant consideration in its implementation and interpretation. Dependency on Known Cancer-Associated genes. One limitation of EPIC-seq is its reliance on prior knowledge of genes associated with specific cancers. The effectiveness of the EPIC-seq model hinges on the availability of comprehensive gene expression profiles for the targeted cancer types.
This dependency may restrict its applicability to cancers with well-characterized gene expression patterns, limiting its utility in cancers with less understood molecular signatures. Limited applicability to specific cancer types. EPIC-seq may be more effective in cancers with prominent genes or well-defined molecular subtypes. Consequently, its utility may be limited in cancers with less distinct genetic profiles or those characterized by significant interpatient variability. This restricts its generalizability across different cancer types and necessitates cautious interpretation of results in diverse oncological contexts. Limited Performance in Early-stage cancer. EPIC-seq may exhibit enhanced performance in detecting late-stage cancer due to higher levels of ctDNA and more pronounced genetic alterations. For example, EPIC-seq's sensitivity for detecting NSCLC diminishes significantly in patients with low tumor-DNA burden (below 1%), with detection rates decreasing by approximately 34%. Applications. Noninvasive cancer detection. EPIC-seq has demonstrated remarkable potential in noninvasive cancer detection, notably in the diagnosis of lung cancer, the leading cause of cancer-related mortality. Using EPIC-seq, researchers have achieved high accuracy in distinguishing between NSCLC patients, DLBCL patients and healthy individuals. Noninvasive Classification of Cancer subtypes. EPIC-seq enables the subclassification of NSCLC into histological subtypes such as lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). EPIC-seq can also aid with the classification of cell-of-origin (COO) subtypes in DLBCL. By analyzing epigenetic and transcriptional signatures, EPIC-seq-derived classifiers provide valuable insights into tumor heterogeneity and molecular subtyping, informing tailored treatment strategies. Therapeutic Response prediction. In addition to diagnosis and classification, EPIC-seq holds promise in predicting patient response to various cancer therapies, including immune-checkpoint inhibition (ICI). By analyzing changes in gene expression patterns captured through EPIC-seq, researchers can forecast patient response to PD-(L)1 blockade therapy, which can substantially aid personalized cancer treatment. EPIC-seq-derived indices have shown significant correlation with treatment response, offering potential prognostic markers for therapy outcome prediction. Immunotranscriptomic profiling of Classical Hodgkin Lymphoma. EPIC-seq has been shown to be effective for inference of the epigenetic expression of classical Hodgkin lymphoma (cHL) subtypes. The expression of Hodgkin and Reed/Sternberg cells and of their corresponding T cells was inferred with EPIC-seq. Bulk single-cell RNA sequencing results show significant correlation with EPIC-seq profiles of these cell types. Possible use cases. Research in different areas mentions possible use cases of EPIC-seq. The Integrated analysis toolkit for whole-genome-wide features of cfDNA (INAC) compiles different tools, including EPIC-seq's PFE and NDR scores, to provide comprehensive in silico analysis of cfDNA, exemplified by disease state and clinical outcome inference, transcriptome modeling, and copy number profiling. EPIC-seq is also mentioned as a potential application in clinical IBD cases. It can be used for surveillance of IBD in high-risk groups and of precancerous development caused by IBD.
It is also named as a possibly superior method for clinical detection of IBD gut damage, compared to the current methods. Alternatives. As EPIC-seq studies epigenetic markers to infer gene expression, one can study epigenetic sequencing methods like ChIP-seq, ATAC-seq, MeDIP-seq, and bisulfite-free DNA methylation sequencing in combination with methods for profiling RNA expression such as RNA-seq and scRNA-seq. Considering the method is mainly developed for early cancer detection or subgrouping, liquid biopsy methods, such as the Twist cfDNA Pan-Cancer Reference Standard, can be used as an alternative. Different liquid biopsy methods focus on cell-free tumour markers, tumour methylation markers, exomes, proteins, lipids, carbohydrates, electrolytes, metabolites, RNA, extracellular vesicles, circulating tumour cells, and tumour-educated platelets for early, non-invasive identification of cancer. Some of the proposed liquid biopsy methods, such as ATR-FTIR spectroscopy and CancerSEEK, provide comprehensive detection across cancer types, while others, like Dxcover and SelectMdx, operate on more specific (even single) cancer targets. EPIC-seq utilizes fragmentomic features to infer expression levels of genes. Several studies also employ fragmentomic features to infer the presence of cancer, infer cell death, and detect other clinical conditions such as transplant failure. ctDNA by Fragment Size analysis. This method uses in vivo and in silico ctDNA fragment length selection to enrich the variant proportion in the plasma. Its size selection criteria are based on the fragment length properties of ctDNA in blood, so it may not generalize well to other non-invasive sampling methods. Furthermore, it employs supervised machine learning methods like Random Forest and Logistic Regression on shallow WGS to classify cancer patients and healthy individuals. The method can be used for different cancer types. Plasma DNA End-Motif profiling. This method tries to identify 4-bp-long end motifs from each strand's 5' end on bisulfite sequencing reads of plasma cfDNAs. Hierarchical clustering of the motifs is done to detect any under- or overrepresentation of these motifs caused by the presence of cancer. The method incorporates Support Vector Machines and Logistic Regression to distinguish cancer patients from healthy ones. The method has also been applied to transplant patients with clustering and multidimensional scaling (MDS) analysis and shown to be applicable there. The same analyses also showed that this method applies to prenatal testing. This method is also informative for cell type origins. Orientation-aware Plasma cell-free DNA Fragmentation analysis. Using sequencing depth inconsistencies in open chromatin regions and signals derived from up/downstream orientation-sensitive sequencing read densities, this method infers the tissue of origin of the cfDNA fragments obtained from bisulfite sequencing. The method uses a mathematical formulation to generate signals for orientation-aware cfDNA fragmentation based on the empirical peak periods and positions of the up/downstream ends of the reads. The method has been shown to be useful for inferring the tissue of origin, pregnancy identification, cancer detection, and transplant monitoring. This method also provides information on how much each tissue of origin contributes to the cfDNA reads. DNA Evaluation of Fragments for early interception. The method analyzes shallow WGS reads in windows while considering the cfDNA fragment length and coverage.
The genome-wide pattern of cfDNA fragmentation features is then fed to a gradient tree-boosting machine learning model to predict the patient's cancer status. The authors also used machine learning classifiers to predict the tissue of origin. Overall, the method can be used to identify whether a patient has cancer. Even though the method does not specifically classify the cancer types during prediction, it is used for the detection of different cancers. In vivo Nucleosome footprinting. The method produces genome-wide mappings of in vivo nucleosome occupancy to detect the tissue of origin of cfDNA molecules. The method uses the aligned endpoint positions of reads, which are expected to lie close to nucleosome core particle (NCP) sites. The Windowed Protection Score (WPS) is proposed to quantify the cfDNA density close to NCPs, using the frequency of cfDNA fragments that cover a 120-base-pair window centred at a given location minus the frequency of fragments with an endpoint in the same interval. Peaks in the WPS are then called heuristically to identify footprints. The cells contributing to cfDNA are then predicted from the footprints. These footprints can be used for identifying non-malignant epigenetic or genetic sites like transcription factor binding sites, and for detecting malignancy-related biomarkers based on the extent of tissue damage and cell death. ctDNA Nucleosome Pattern Employment for Transcriptional Regulation profiling. The method has mainly been developed for detecting the various phenotypes of metastatic castration-resistant prostate cancer. It requires the use of patient-derived xenografts for enrichment of ctDNA in blood for further analysis. After WGS, the method utilizes the tool Griffin for inspection of local promoter coverage, nucleosome positioning, fragment size analysis, and composite transcription factor binding sites plus open chromatin sites of ctDNA reads. It also examines histone modifications and applies dimensionality reduction to the found sites to identify putative promoter, enhancer, and gene-repressive heterochromatic marks. To interrogate chromatin phasing, the distance between open chromatin regions, the method uses TritonNP, a newly developed software tool that uses Fourier transforms and band-pass filters. XGBoost is utilized for classification of the cancer subtype using the features detected in the previous steps. cfDNA Methylation, Copy Number, and Fragmentation Analysis for early detection of multiple cancer types. The method is proposed as an assay that employs both cfDNA whole-genome methylation sequencing and fragmentomic feature information for multicancer classification. Copy number ratios calculated for healthy and cancerous tissues are used as an identifier of cancer presence and cancer type. As in EPIC-seq, the method also utilizes fragment lengths: the ratio of short fragments to long fragments is used as an identifier score. Using the single-base or region-level methylation percentages at detected cancer methylation markers for each cancer type, the copy number ratios, and the short/long fragment ratios, the method employs a custom Support Vector Machine algorithm to classify the cancer type if one is present. This method reports cancer detection and tissue of origin for 4 cancer types. However, it requires detection of specific methylation sites/regions of interest for each cancer type. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
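As a concrete illustration of the fragmentomic summary scores discussed above, the following Python sketch computes a Shannon-entropy score over a cfDNA fragment-length histogram (in the spirit of PFE) and a short-to-long fragment ratio (as used by the multi-cancer methylation and fragmentation assay). The 100-300 bp length window and the 150 bp short/long cutoff are illustrative assumptions, not values prescribed by any of the methods described here.

import math
from collections import Counter

def length_entropy(fragment_lengths, lo=100, hi=300):
    # Shannon entropy (in bits) of the fragment-length distribution inside [lo, hi].
    counts = Counter(l for l in fragment_lengths if lo <= l <= hi)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def short_long_ratio(fragment_lengths, cutoff=150):
    # Ratio of short (< cutoff) to long (>= cutoff) fragments.
    short = sum(1 for l in fragment_lengths if l < cutoff)
    long_ = len(fragment_lengths) - short
    return short / long_ if long_ else float("inf")

lengths = [166, 146, 170, 152, 166, 198, 132, 166, 180, 160]   # toy fragment lengths in bp
print(length_entropy(lengths), short_long_ratio(lengths))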
[ { "math_id": 0, "text": "99+i" }, { "math_id": 1, "text": "\\hat{p_i}" }, { "math_id": 2, "text": " - \\sum_{i=1}^{201}(\\hat{p_i}\\log_2(\\hat{p_i})) " }, { "math_id": 3, "text": "e_i" }, { "math_id": 4, "text": "1\\le i \\le5" }, { "math_id": 5, "text": "(1+k)" }, { "math_id": 6, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=73200138
73200355
Reinforcement learning from human feedback
Machine learning technique &lt;templatestyles src="Machine learning/styles.css"/&gt; In machine learning, reinforcement learning from human feedback (RLHF) is a technique to align an agent with human preferences. It involves training a reward model to represent preferences previously gathered from a sample of humans, which can then be used to train other models through reinforcement learning. In classical reinforcement learning, an intelligent agent's goal is to learn a function that guides its behavior, called a policy. This function is iteratively updated to maximize rewards based on the agent's task performance. However, explicitly defining a reward function that accurately approximates human preferences is challenging. Therefore, RLHF seeks to train a "reward model" directly from human feedback. The reward model is first trained in a supervised manner to predict if a response to a given prompt is good (high reward) or bad (low reward) based on ranking data collected from human annotators. This model then serves as a reward function to improve an agent's policy through an optimization algorithm like proximal policy optimization. RLHF has applications in various domains in machine learning, including natural language processing tasks such as text summarization and conversational agents, computer vision tasks like text-to-image models, and the development of video game bots. While RLHF is an effective method of training models to act better in accordance with human preferences, it also faces challenges due to the way the human preference data is collected. Though RLHF does not require massive amounts of data to improve performance, sourcing high-quality preference data is still an expensive process. Furthermore, if the data is not carefully collected from a representative sample, the resulting model may exhibit unwanted biases. Background and motivation. Optimizing a model based on human feedback is desirable when a task is difficult to specify yet easy to judge. For example, one may want to train a model to generate safe text that is both helpful and harmless (such as lacking bias, toxicity, or otherwise harmful content). Asking humans to manually create examples of harmless and harmful text would be difficult and time-consuming. However, humans are adept at swiftly assessing and comparing the harmfulness of different AI-generated text. Therefore, a more practical objective would be to allow the model to use this type of human feedback to improve its text generation. Despite the clear benefits of incorporating human feedback in training models, prior efforts—including some that leverage reinforcement learning—have encountered significant challenges. Most attempts were either narrow and difficult to generalize, breaking down on more complex tasks, or they faced difficulties learning from sparse (lacking specific information and relating to large amounts of text at a time) or noisy (inconsistently rewarding similar outputs) reward functions. RLHF was not the first successful method of using human feedback for reinforcement learning, but it is one of the most widely used. The foundation for RLHF was introduced as an attempt to create a general algorithm for learning from a practical amount of human feedback. The algorithm as used today was introduced by OpenAI in a paper on enhancing text continuation or summarization based on human feedback, and it began to gain popularity when the same method was reused in their paper on InstructGPT. 
RLHF has also been shown to improve the robustness of RL agents and their capacity for exploration, which results in an optimization process more adept at handling uncertainty and efficiently exploring its environment in search of the highest reward. Collecting human feedback. Human feedback is commonly collected by prompting humans to rank instances of the agent's behavior. These rankings can then be used to score outputs, for example, using the Elo rating system, which is an algorithm for calculating the relative skill levels of players in a game based only on the outcome of each game. While ranking outputs is the most widely adopted form of feedback, recent research has explored other forms, such as numerical feedback, natural language feedback, and prompting for direct edits to the model's output. One initial motivation of RLHF was that it requires relatively small amounts of comparison data to be effective. It has been shown that a small amount of data can lead to comparable results to a larger amount. In addition, increasing the amount of data tends to be less effective than proportionally increasing the size of the reward model. Nevertheless, a larger and more diverse amount of data can be crucial for tasks where it is important to avoid bias from a partially representative group of annotators. When learning from human feedback through pairwise comparison under the Bradley–Terry–Luce model (or the Plackett–Luce model for K-wise comparisons over more than two comparisons), the maximum likelihood estimator (MLE) for linear reward functions has been shown to converge if the comparison data is generated under a well-specified linear model. This implies that, under certain conditions, if a model is trained to decide which choices people would prefer between pairs (or groups) of choices, it will necessarily improve at predicting future preferences. This improvement is expected as long as the comparisons it learns from are based on a consistent and simple rule. Both offline data collection models, where the model is learning by interacting with a static dataset and updating its policy in batches, as well as online data collection models, where the model directly interacts with the dynamic environment and updates its policy immediately, have been mathematically studied proving sample complexity bounds for RLHF under different feedback models. In the offline data collection model, when the objective is policy training, a pessimistic MLE that incorporates a lower confidence bound as the reward estimate is most effective. Moreover, when applicable, it has been shown that considering K-wise comparisons directly is asymptotically more efficient than converting them into pairwise comparisons for prediction purposes. In the online scenario, when human feedback is collected through pairwise comparisons under the Bradley–Terry–Luce model and the objective is to minimize the algorithm's regret (the difference in performance compared to an optimal agent), it has been shown that an optimistic MLE that incorporates an upper confidence bound as the reward estimate can be used to design sample efficient algorithms (meaning that they require relatively little training data). A key challenge in RLHF when learning from pairwise (or dueling) comparisons is associated with the non-Markovian nature of its optimal policies. 
Unlike simpler scenarios where the optimal strategy does not require memory of past actions, in RLHF, the best course of action often depends on previous events and decisions, making the strategy inherently memory-dependent. Applications. RLHF has been applied to various domains of natural language processing (NLP), such as conversational agents, text summarization, and natural language understanding. Ordinary reinforcement learning, in which agents learn from their actions based on a predefined "reward function", is difficult to apply to NLP tasks because the rewards tend to be difficult to define or measure, especially when dealing with complex tasks that involve human values or preferences. RLHF can steer NLP models, in particular language models, to provide answers that align with human preferences with regard to such tasks by capturing their preferences beforehand in the reward model. This results in a model capable of generating more relevant responses and rejecting inappropriate or irrelevant queries. Some notable examples of RLHF-trained language models are OpenAI's ChatGPT (and its predecessor InstructGPT), DeepMind's Sparrow, Google's Gemini, and Anthropic's Claude. In computer vision, RLHF has also been used to align text-to-image models. Studies that successfully used RLHF for this goal have noted that the use of KL regularization in RLHF, which aims to prevent the learned policy from straying too far from the unaligned model, helped to stabilize the training process by reducing overfitting to the reward model. The final image outputs from models trained with KL regularization were noted to be of significantly higher quality than those trained without. Other methods tried to incorporate the feedback through more direct training—based on maximizing the reward without the use of reinforcement learning—but conceded that an RLHF-based approach would likely perform better due to the online sample generation used in RLHF during updates as well as the aforementioned KL regularization over the prior model, which mitigates overfitting to the reward function. RLHF was initially applied to other areas, such as the development of video game bots and tasks in simulated robotics. For example, OpenAI and DeepMind trained agents to play Atari games based on human preferences. In classical RL-based training of such bots, the reward function is simply correlated to how well the agent is performing in the game, usually using metrics like the in-game score. In comparison, in RLHF, a human is periodically presented with two clips of the agent's behavior in the game and must decide which one "looks" better. This approach can teach agents to perform at a competitive level without ever having access to their score. In fact, it was shown that RLHF can sometimes lead to superior performance over RL with score metrics because the human's preferences can contain more useful information than performance-based metrics. The agents achieved strong performance in many of the environments tested, often surpassing human performance. Training. In RLHF, two different models are trained: a reward model and a reinforcement learning (RL) policy. The reward model learns to determine what behavior is desirable based on human feedback, while the policy is guided by the reward model to determine the agent's actions. Both models are commonly initialized using a pre-trained autoregressive language model. 
This model is then customarily trained in a supervised manner on a relatively small dataset of pairs of prompts to an assistant and their accompanying responses, written by human annotators. The reward model benefits from starting with a pre-trained model, as this initializes it with an understanding of language and focuses training explicitly on learning human preferences, speeding up the process. In addition to being used to initialize the reward model and the RL policy, the model is then also used to sample data to be compared by annotators. The reward model is then trained by replacing the final layer of the previous model with a randomly initialized regression head. This change shifts the model from its original classification task over its vocabulary to simply outputting a number corresponding to the score of any given prompt and response. This model is trained on the human preference comparison data collected earlier from the supervised model. In particular, it is trained to minimize the following cross-entropy loss function, which incentivizes it to make predictions that are closer to the actual human ratings: formula_0 where formula_1 is the number of responses the labelers ranked, formula_2 is the output of the reward model for prompt formula_3 and completion formula_4, formula_5 is the preferred completion over formula_6, formula_7 denotes the sigmoid function, and formula_8 denotes the expected value. This loss function essentially measures the difference between the reward model's predictions and the decisions made by humans. The goal is to make the model's guesses as close as possible to the humans' preferences by minimizing the difference measured by this equation. In the case of only pairwise comparisons, the factor of formula_9 is omitted. Otherwise, all formula_10 comparisons from each prompt are used for training as a single batch. After training, the outputs of the model are normalized such that the reference completions have a mean score of 0. Similarly to the reward model, the human feedback policy is also fine-tuned over the pre-trained model. The objective of this fine-tuning step is to adapt the pre-existing, unaligned model (initially trained in a supervised manner) to better align with human preferences by adjusting its parameters based on the rewards derived from human feedback. The output of the reward model can be used as the reward to be maximized using RL for the prompt-response pairs. The environment randomly presents the policy with prompts from the dataset and expects responses to them, simulating real-world scenarios where the agent must understand diverse prompts and generate appropriate responses. Denoting the learned RL policy with parameters formula_11 as formula_12, we can define the following objective function: formula_13 where formula_14 is the training distribution we are drawing from and formula_15 is the previously trained, unaligned, model. The constant formula_16 is used to adjust the intensity of the KL penalty term. This penalty is applied on a per-token basis between the policy and the unaligned models' outputs. Its purpose is to avoid excessively fine-tuning the policy, ensuring that the training process does not overly specialize the model on the new training data. This KL term works by penalizing the KL divergence (a measure of statistical distance between distributions) between the model being fine-tuned and the initial supervised model. 
By choosing an appropriate formula_16, the training can balance learning from new data while retaining useful information from the initial model, increasing generalization by avoiding fitting too closely to the new data. Aside from preventing the new model from producing outputs too dissimilar to those of the initial model, a second motivation of including the KL term is to allow the policy to further explore the environment by encouraging additional entropy, which can prevent the model from collapsing to a single mode. In simpler terms, the objective function calculates how well the policy's responses are expected to align with human feedback. The policy generates responses to prompts, and each response is evaluated both on how well it matches human preferences (as measured by the reward model) and how similar it is to responses the model would naturally generate. The goal is to balance improving alignment with human preferences while ensuring the model's responses remain diverse and not too far removed from what it has learned during its initial training. This helps the model not only to provide answers that people find useful or agreeable but also to maintain a broad understanding and avoid overly narrow or repetitive responses. A second term is commonly added to the objective function that allows the policy to incorporate the pre-training gradients. This term keeps the model from losing its initial language understanding ability while it learns new tasks based on human feedback by incorporating its original pre-training task of text completion. The final objective function is written as: formula_17 where formula_18 controls the strength of this additional term and formula_19 is the original pre-training text distribution. This objective function can then be directly used to train the policy using the proximal policy optimization algorithm. In total, this objective function defines the method for adjusting the RL policy, blending the aim of aligning with human feedback and maintaining the model's original language understanding. Limitations. RLHF suffers from challenges with collecting human feedback, learning a reward model, and optimizing the policy. In terms of data collection, human feedback can be slow and expensive to gather compared to unsupervised learning, which limits scalability. Its quality and consistency may vary depending on the task, interface, and the preferences and biases of individual humans. The effectiveness of RLHF depends on the quality of human feedback. For instance, the model may become biased, favoring certain groups over others, if the feedback lacks impartiality, is inconsistent, or is incorrect. There is a risk of overfitting, where the model memorizes specific feedback examples instead of learning to generalize. For instance, feedback predominantly from a specific demographic might lead the model to learn peculiarities or noise, along with the intended alignment. Excessive alignment to the specific feedback it received (that is, to the bias therein) can lead to the model performing sub-optimally in new contexts or when used by different groups. A single reward function cannot always represent the opinions of diverse groups of people. Even with a representative sample, conflicting views and preferences may result in the reward model favoring the majority's opinion, potentially disadvantaging underrepresented groups.
In some cases, as is possible in regular reinforcement learning, there may be a risk of the model learning to manipulate the feedback process or game the system to achieve higher rewards rather than genuinely improving its performance. In the case of RLHF, a model may learn to exploit the fact that it is rewarded for what is evaluated positively and not necessarily for what is actually good, which can lead to it learning to persuade and manipulate. For example, models might learn that apparent confidence, even if inaccurate, garners higher rewards. Such behavior, if unchecked, is not just incentivized but can cause significant deployment issues due to the model's potential to mislead. Studies have found that humans are not skilled at identifying mistakes in LLM outputs in complex tasks; therefore, models learning to generate confident-sounding yet incorrect text can lead to significant issues when deployed. Alternatives. Reinforcement learning from AI feedback. Similarly to RLHF, "reinforcement learning from AI feedback" (RLAIF) relies on training a preference model, except that the feedback is automatically generated. This is notably used in Anthropic's constitutional AI, where the AI feedback is based on the conformance to the principles of a constitution. Direct preference optimization. Another alternative to RLHF called Direct Preference Optimization (DPO) has been proposed to learn human preferences. Like RLHF, it has been applied to align pre-trained large language models using human-generated preference data. Unlike RLHF, however, which first trains a separate intermediate model to understand what good outcomes look like and then teaches the main model how to achieve those outcomes, DPO simplifies the process by directly adjusting the main model according to people's preferences. It uses a change of variables to define the "preference loss" directly as a function of the policy and uses this loss to fine-tune the model, helping it understand and prioritize human preferences without needing a separate step. Essentially, this approach directly shapes the model's decisions based on positive or negative human feedback. DPO is simpler to implement and train than RLHF and has been shown to produce comparable and sometimes superior results. Nevertheless, RLHF has also been shown to beat DPO on some datasets, for example, on benchmarks that attempt to measure truthfulness. Therefore, the choice of method may vary depending on the features of the human preference data and the nature of the task. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
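The two central quantities defined in the Training section, the pairwise reward-model loss and the KL-penalized policy objective, can be written out numerically as in the following sketch. It assumes the reward values and per-response log-probabilities have already been produced by a reward model and by the policy and reference models (none of which are modelled here), and it applies the KL penalty at the level of whole responses rather than per token, so it is a simplified illustration rather than a faithful reimplementation of any published system.

import numpy as np

def reward_model_loss(r_preferred, r_rejected, K=2):
    # -1/C(K,2) * mean(log sigmoid(r_w - r_l)); for pairwise data (K=2) the factor is 1.
    diff = np.asarray(r_preferred, dtype=float) - np.asarray(r_rejected, dtype=float)
    log_sigmoid = -np.logaddexp(0.0, -diff)        # numerically stable log(sigma(diff))
    return -log_sigmoid.mean() / (K * (K - 1) // 2)

def rlhf_objective(rewards, logp_policy, logp_ref, beta, logp_pretrain=None, gamma=0.0):
    # E[r - beta*(log pi_RL - log pi_SFT)] + gamma * E[log pi_RL(x_pretrain)]
    kl_term = np.asarray(logp_policy, dtype=float) - np.asarray(logp_ref, dtype=float)
    value = (np.asarray(rewards, dtype=float) - beta * kl_term).mean()
    if logp_pretrain is not None:
        value += gamma * np.asarray(logp_pretrain, dtype=float).mean()
    return value

# Toy numbers only, to show the shape of the computations.
print(reward_model_loss([1.3, 0.2], [0.4, -0.1]))
print(rlhf_objective([0.9, 1.1], [-12.0, -10.5], [-11.2, -10.9],
                     beta=0.02, logp_pretrain=[-45.0, -50.2], gamma=0.01))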
[ { "math_id": 0, "text": "\\mathcal{L}(\\theta)=-\\frac{1}{K\\choose 2}E_{(x,y_w,y_l)}[\\log(\\sigma(r_\\theta(x,y_w)-r_\\theta(x,y_l)))]" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "r_\\theta(x,y)" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "y_w" }, { "math_id": 6, "text": "y_l" }, { "math_id": 7, "text": "\\sigma(x)" }, { "math_id": 8, "text": "E[X]" }, { "math_id": 9, "text": "1/{\\tbinom K2}" }, { "math_id": 10, "text": "{\\tbinom K2}" }, { "math_id": 11, "text": "\\phi" }, { "math_id": 12, "text": "\\pi_\\phi^\\text{RL}" }, { "math_id": 13, "text": "\\text{objective}(\\phi)=E_{(x,y)\\sim D_{\\pi_\\phi^\\text{RL}}}\\left[r_\\theta(x,y)-\\beta\\log\\left(\\frac{\\pi^\\text{RL}_\\phi(y|x)}{\\pi^\\text{SFT}(y|x)}\\right)\\right]" }, { "math_id": 14, "text": "D_{\\pi_\\phi^\\text{RL}}" }, { "math_id": 15, "text": "\\pi^\\text{SFT}" }, { "math_id": 16, "text": "\\beta" }, { "math_id": 17, "text": "\\text{objective}(\\phi)=E_{(x,y)\\sim D_{\\pi_\\phi^\\text{RL}}}\\left[r_\\theta(x,y)-\\beta\\log\\left(\\frac{\\pi^\\text{RL}_\\phi(y|x)}{\\pi^\\text{SFT}(y|x)}\\right)\\right]+\\gamma E_{x\\sim D_\\text{pretrain}}[\\log(\\pi_\\phi^\\text{RL}(x))]" }, { "math_id": 18, "text": "\\gamma" }, { "math_id": 19, "text": "D_\\text{pretrain}" } ]
https://en.wikipedia.org/wiki?curid=73200355
7320365
Sieve analysis
Procedure to assess particle size distribution A sieve analysis (or gradation test) is a practice or procedure used in geology, civil engineering, and chemical engineering to assess the particle size distribution (also called "gradation") of a granular material by allowing the material to pass through a series of sieves of progressively smaller mesh size and weighing the amount of material that is stopped by each sieve as a fraction of the whole mass. The size distribution is often of critical importance to the way the material performs in use. A sieve analysis can be performed on any type of non-organic or organic granular materials including sand, crushed rock, clay, granite, feldspar, coal, soil, and a wide range of manufactured powders, grains and seeds, down to a minimum size depending on the exact method. Being such a simple technique of particle sizing, it is probably the most common. Procedure. A gradation test is performed on a sample of aggregate in a laboratory. A typical sieve analysis uses a column of sieves with wire mesh screens of graded mesh size. A representative weighed sample is poured into the top sieve which has the largest screen openings. Each lower sieve in the column has smaller openings than the one above. At the base is a pan, called the receiver. The column is typically placed in a mechanical shaker, which shakes the column, usually for a set period, to facilitate exposing all of the material to the screen openings so that particles small enough to fit through the holes can fall through to the next layer. After the shaking is complete, the material on each sieve is weighed. The mass of the sample on each sieve is then divided by the total mass to give a percentage retained on each sieve. The size of the average particle on each sieve is then analysed to get a cut-off point or specific size range, which is then captured on a screen. The results of this test are used to describe the properties of the aggregate and to see if it is appropriate for various civil engineering purposes such as selecting the appropriate aggregate for concrete mixes and asphalt mixes as well as sizing of water production well screens. The results of this test are provided in graphical form to identify the type of gradation of the aggregate. The complete procedure for this test is outlined in the American Society for Testing and Materials (ASTM) C 136 and the American Association of State Highway and Transportation Officials (AASHTO) T 27. A suitable sieve size for the aggregate is selected, and a pan is placed underneath the nest of sieves to collect the aggregate that passes through the smallest. The entire nest is then agitated, and the material whose diameter is smaller than the mesh opening passes through the sieves. After the aggregate reaches the pan, the amount of material retained in each sieve is then weighed. Preparation. In order to perform the test, a sufficient sample of the aggregate must be obtained from the source. To prepare the sample, the aggregate should be mixed thoroughly and be reduced to a suitable size for testing. The total mass of the sample is also required. Results. The results are presented in a graph of percent passing versus the sieve size. On the graph the sieve size scale is logarithmic. To find the percent of aggregate passing through each sieve, first find the percent retained in each sieve. To do so, the following equation is used: %Retained = formula_0×100%, where WSieve is the mass of aggregate retained on the sieve and WTotal is the total mass of the aggregate.
The next step is to find the cumulative percent of aggregate retained in each sieve. To do so, add up the total amount of aggregate that is retained in each sieve and the amount in the previous sieves. The cumulative percent passing of the aggregate is found by subtracting the cumulative percent retained from 100%. %Cumulative Passing = 100% - %Cumulative Retained. The values are then plotted on a graph with cumulative percent passing on the y axis and logarithmic sieve size on the x axis. There are two versions of the %Passing equations: the .45 power formula, which is presented on a .45 power gradation chart, and the simpler %Passing formula, which is presented on a semi-log gradation chart. The .45 power version of the percent passing graph is produced using the .45 power passing formula: %Passing = Pi = formula_1×100% Where: SieveLargest - Largest diameter sieve used in (mm). Aggregatemax_size - Largest piece of aggregate in the sample in (mm). %Passing = formula_2×100% Where: WBelow - The total mass of the aggregate within the sieves below the current sieve, not including the current sieve's aggregate. WTotal - The total mass of all of the aggregate in the sample. Methods. There are different methods for carrying out sieve analyses, depending on the material to be measured. Throw-action. Here a throwing motion acts on the sample. The vertical throwing motion is overlaid with a slight circular motion which results in distribution of the sample amount over the whole sieving surface. The particles are accelerated in the vertical direction (are thrown upwards). In the air they carry out free rotations and interact with the openings in the mesh of the sieve when they fall back. If the particles are smaller than the openings, they pass through the sieve. If they are larger, they are thrown upwards again. The rotating motion while suspended increases the probability that the particles present a different orientation to the mesh when they fall back again, and thus might eventually pass through the mesh. Modern sieve shakers work with an electro-magnetic drive which moves a spring-mass system and transfers the resulting oscillation to the sieve stack. Amplitude and sieving time are set digitally and are continuously observed by an integrated control-unit. Therefore, sieving results are reproducible and precise (an important precondition for a significant analysis). Adjustment of parameters like amplitude and sieving time serves to optimize the sieving for different types of material. This method is the most common in the laboratory sector. Horizontal. In a horizontal sieve shaker the sieve stack moves in horizontal circles in a plane. Horizontal sieve shakers are preferably used for needle-shaped, flat, long or fibrous samples, as their horizontal orientation means that only a few disoriented particles enter the mesh and the sieve is not blocked so quickly. The large sieving area enables the sieving of large amounts of sample, for example as encountered in the particle-size analysis of construction materials and aggregates. Tapping. A horizontal circular motion overlies a vertical motion which is created by a tapping impulse. These motional processes are characteristic of hand sieving and produce a higher degree of sieving for denser particles (e.g. abrasives) than throw-action sieve shakers. Wet. Most sieve analyses are carried out dry, but there are some applications which can only be carried out by wet sieving. This is the case when the sample which has to be analysed is e.g.
a suspension which must not be dried; or when the sample is a very fine powder which tends to agglomerate (mostly < 45 μm) – in a dry sieving process this tendency would lead to a clogging of the sieve meshes and this would make a further sieving process impossible. A wet sieving process is set up like a dry process: the sieve stack is clamped onto the sieve shaker and the sample is placed on the top sieve. Above the top sieve a water-spray nozzle is placed which supports the sieving process additionally to the sieving motion. The rinsing is carried out until the liquid which is discharged through the receiver is clear. Sample residues on the sieves have to be dried and weighed. When it comes to wet sieving it is very important not to change the sample in its volume (no swelling, dissolving or reaction with the liquid). Air Circular Jet. Air jet sieving machines are ideally suited for very fine powders which tend to agglomerate and cannot be separated by vibrational sieving. The effectiveness of this sieving method is based on two components: a rotating slotted nozzle inside the sieving chamber and a powerful industrial vacuum cleaner which is connected to the chamber. The vacuum cleaner generates a vacuum inside the sieving chamber and sucks in fresh air through the slotted nozzle. When passing the narrow slit of the nozzle the air stream is accelerated and blown against the sieve mesh, dispersing the particles. Above the mesh, the air jet is distributed over the complete sieve surface and is sucked in with low speed through the sieve mesh. Thus the finer particles are transported through the mesh openings into the vacuum cleaner. Types of gradation. A dense gradation refers to a sample with approximately equal amounts of the various sizes of aggregate. By having a dense gradation, most of the air voids between the material are filled with particles. A dense gradation will result in an even curve on the gradation graph. Also known as uniform gradation, a narrow gradation is a sample that has aggregate of approximately the same size. The curve on the gradation graph is very steep, and occupies a small range of the aggregate. A gap gradation refers to a sample with very little aggregate in the medium size range. This results in only coarse and fine aggregate. The curve is horizontal in the medium size range on the gradation graph. An open gradation refers to an aggregate sample with very few fine aggregate particles. This results in many air voids, because there are no fine particles to fill them. On the gradation graph, it appears as a curve that is horizontal in the small size range. A rich gradation refers to a sample of aggregate with a high proportion of particles of small sizes. Types of sieves. Woven wire mesh sieves are manufactured according to the technical requirements of ISO 3310-1. These sieves usually have nominal apertures ranging from 20 micrometers to 3.55 millimeters, with diameters ranging from 100 to 450 millimeters. Perforated plate sieves conform to ISO 3310-2 and can have round or square nominal apertures ranging from 1 millimeter to 125 millimeters. The diameters of the sieves range from 200 to 450 millimeters. American standard sieves, also known as ASTM sieves, conform to the ASTM E11 standard. The nominal aperture of these sieves ranges from 20 micrometers to 200 millimeters, however these sieves have only and diameter sizes. Limitations of sieve analysis. Sieve analysis has, in general, been used for decades to monitor material quality based on particle size.
For coarse material, with sizes that range down to #100 mesh (150 μm), a sieve analysis and particle size distribution are accurate and consistent. However, for material that is finer than 100 mesh, dry sieving can be significantly less accurate. This is because the mechanical energy required to make particles pass through an opening and the surface attraction effects between the particles themselves and between particles and the screen increase as the particle size decreases. Wet sieve analysis can be utilized where the material analyzed is not affected by the liquid - except to disperse it. Suspending the particles in a suitable liquid transports fine material through the sieve much more efficiently than shaking the dry material. Sieve analysis assumes that all particles will be round (spherical) or nearly so and will pass through the square openings when the particle diameter is less than the size of the square opening in the screen. For elongated and flat particles a sieve analysis will not yield reliable mass-based results, as the particle size reported will assume that the particles are spherical, where in fact an elongated particle might pass through the screen end-on, but would be prevented from doing so if it presented itself side-on. Properties. Gradation affects many properties of an aggregate, including bulk density, physical stability and permeability. With careful selection of the gradation, it is possible to achieve high bulk density, high physical stability, and low permeability. This matters because in pavement design, a workable, stable mix with resistance to water is important. With an open gradation, the bulk density is relatively low, due to the lack of fine particles, the physical stability is moderate, and the permeability is quite high. With a rich gradation, the bulk density will also be low, the physical stability is low, and the permeability is also low. The gradation can be adjusted to achieve the desired properties for the particular engineering application. Engineering applications. Gradation is usually specified for each engineering application it is used for. For example, foundations might only call for coarse aggregates, and therefore an open gradation is needed. Sieve analysis determines the particle size distribution of a given soil sample and hence helps in the identification of a soil's mechanical properties. These mechanical properties determine whether a given soil can support the proposed engineering structure. It also helps determine what modifications can be applied to the soil and the best way to achieve maximum soil strength. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
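The arithmetic in the Results section (percent retained, cumulative percent retained, and cumulative percent passing) can be reproduced with a short script such as the following; the sieve sizes and retained masses are made-up illustrative values, not data from any standard.

def gradation(sieve_masses, pan_mass):
    # sieve_masses: list of (sieve_size_mm, retained_mass_g), largest sieve first.
    total = sum(mass for _, mass in sieve_masses) + pan_mass
    cumulative = 0.0
    rows = []
    for size, mass in sieve_masses:
        pct_retained = 100.0 * mass / total          # %Retained = W_sieve / W_total * 100%
        cumulative += pct_retained                   # cumulative percent retained
        rows.append((size, pct_retained, cumulative, 100.0 - cumulative))
    return rows

sample = [(4.75, 50.0), (2.36, 120.0), (1.18, 150.0), (0.600, 100.0), (0.300, 60.0), (0.150, 40.0)]
for size, retained, cum_retained, passing in gradation(sample, pan_mass=30.0):
    print(f"{size:6.3f} mm: retained {retained:5.1f}%, cumulative {cum_retained:5.1f}%, passing {passing:5.1f}%")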
[ { "math_id": 0, "text": "\\frac{W_{Sieve}}{W_{Total}}" }, { "math_id": 1, "text": "\\frac{Sieve_{Largest}}{Aggregate_{max-size}}" }, { "math_id": 2, "text": "\\frac{W_{Below}}{W_{Total}}" } ]
https://en.wikipedia.org/wiki?curid=7320365
7320446
Soil gradation
Classification of grainy soils based on the sizes of their grains In soil science, soil gradation is a classification of a coarse-grained soil that ranks the soil based on the different particle sizes contained in the soil. Soil gradation is an important aspect of soil mechanics and geotechnical engineering because it is an indicator of other engineering properties such as compressibility, shear strength, and hydraulic conductivity. In a design, the gradation of the "in situ" (on site) soil often controls the design and ground water drainage of the site. A poorly graded soil will have better drainage than a well graded soil, if it is not high in clay content. Soil is graded as either well graded or poorly graded. Soil gradation is determined by analyzing the results of a sieve analysis or a hydrometer analysis. The process for grading a soil is in accordance with either the Unified Soil Classification System or the AASHTO Soil Classification System. Gradation of a soil is determined by reading the grain size distribution curve produced from the results of laboratory tests on the soil. Gradation of a soil can also be determined by calculating the coefficient of uniformity, "C"u, and the coefficient of curvature, "C"c, of the soil and comparing the calculated values with published gradation limits. Soil gradations. Soil gradation is a classification of the particle size distribution of a soil. Coarse-grained soils, mainly gravels or sands, are graded as either well graded or poorly graded. Poorly graded soils are further divided into uniformly-graded or gap-graded soils. Fine-grained soils, mainly silts and clays, are classified according to their Atterberg limits. Well graded. A well-graded soil is a soil that contains particles of a wide range of sizes and has a good representation of all sizes from the No. 4 to No. 200 sieves. A well-graded gravel is classified as GW, while a well-graded sand is classified as SW. Poorly graded. A poorly-graded soil is a soil that does not have a good representation of all sizes of particles from the no. 4 to no. 200 sieve. A poorly-graded gravel is classified as GP, while a poorly-graded sand is classified as SP. Poorly-graded soils are more susceptible to soil liquefaction than well-graded soils. A gap-graded soil is a soil that has an excess or deficiency of certain particle sizes or a soil that has at least one particle size missing. An example of a gap-graded soil is one in which sand of the no. 10 and no. 40 sizes is missing, and all the other sizes are present. Process of grading a soil. The process of grading a soil is in accordance with either the Unified Soil Classification System or the AASHTO Soil Classification System. The steps in grading a soil are data collection, calculating coefficients of uniformity and curvature, and grading the soil based on the grading criteria given in the soil classification system used. Data collection. Soil gradation is determined by analyzing the results of a sieve analysis or a hydrometer analysis. In a sieve analysis, a coarse-grained soil sample is shaken through a series of woven-wire square-mesh sieves. Each sieve has successively smaller openings so particles larger than the size of each sieve are retained on the sieve. The percentage of each soil size is measured by weighing the amount retained on each sieve and comparing the weight to the total weight of the sample.
The results of a sieve analysis are plotted as a grain size distribution curve, which is then analyzed to determine the soil gradation of the particular soil. In a hydrometer analysis, a fine-grained soil sample is left to settle in a viscous fluid. This method is based on Stokes' law, which relates the terminal velocity of fall of a particle in a viscous fluid to the grain diameter and density of the grain in suspension. Grain diameter is calculated from a known distance and time of the fall of the particle. This is used to classify fine-grained soils. Calculating the coefficients of uniformity and curvature. Calculating the coefficients of uniformity and curvature requires grain diameters. The grain diameter can be found for each percent of the soil passing a particular sieve. This means that if 40% of the sample is retained on the No. 200 sieve then there is 60% passing the No. 200 sieve. The coefficient of uniformity, Cu, is a crude shape parameter and is calculated using the following equation: formula_0 where D60 is the grain diameter at 60% passing, and D10 is the grain diameter at 10% passing. The "coefficient of curvature", Cc, is a shape parameter and is calculated using the following equation: formula_1 where D60 is the grain diameter at 60% passing, D30 is the grain diameter at 30% passing, and D10 is the grain diameter at 10% passing. Once the coefficient of uniformity and the coefficient of curvature have been calculated, they must be compared to published gradation criteria. Criteria for grading soils. The following criteria are in accordance with the Unified Soil Classification System: For a gravel to be classified as well graded, the following criteria must be met: Cu > 4 and 1 < Cc < 3. If these criteria are not both met, the gravel is classified as poorly graded or GP. If both of these criteria are met, the gravel is classified as well graded or GW. For a sand to be classified as well graded, the following criteria must be met: Cu ≥ 6 and 1 < Cc < 3. If these criteria are not both met, the sand is classified as poorly graded or SP. If both of these criteria are met, the sand is classified as well graded or SW. Importance. Soil gradation is very important to geotechnical engineering. It is an indicator of other engineering properties such as compressibility, shear strength, and hydraulic conductivity. In a design, the gradation of the in situ or on site soil often controls the design and ground water drainage of the site. A poorly graded soil will have better drainage than a well graded soil because there are more void spaces in a poorly graded soil. When a fill material is being selected for a project such as a highway embankment or earthen dam, the soil gradation is considered. A well graded soil is able to be compacted more than a poorly graded soil. These types of projects may also have gradation requirements that must be met before the soil to be used is accepted. When options for ground remediation techniques are being selected, the soil gradation is a controlling factor.
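The coefficient calculations and the Unified Soil Classification System criteria quoted above can be combined into a small helper like the following sketch; the grain diameters in the example are illustrative values that would normally be read off a grain size distribution curve.

def coefficients(d10, d30, d60):
    cu = d60 / d10                       # coefficient of uniformity
    cc = d30 ** 2 / (d10 * d60)          # coefficient of curvature
    return cu, cc

def uscs_grading(d10, d30, d60, soil_type):
    # soil_type is "gravel" (well graded if Cu > 4 and 1 < Cc < 3)
    # or "sand" (well graded if Cu >= 6 and 1 < Cc < 3).
    cu, cc = coefficients(d10, d30, d60)
    cu_ok = cu > 4 if soil_type == "gravel" else cu >= 6
    return "well graded" if cu_ok and 1 < cc < 3 else "poorly graded"

print(coefficients(0.095, 0.35, 1.2))          # example grain diameters in mm
print(uscs_grading(0.095, 0.35, 1.2, "sand"))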
[ { "math_id": 0, "text": "C_u = \\frac {D_{60}}{D_{10}}" }, { "math_id": 1, "text": "C_c = \\frac {(D_{30})^2}{D_{10} \\times\\ D_{60}}" } ]
https://en.wikipedia.org/wiki?curid=7320446
73214867
DatalogZ
DatalogZ (stylized as Datalogℤ) is an extension of Datalog with integer arithmetic and comparisons. The decision problem of whether or not a given ground atom (fact) is entailed by a DatalogZ program is RE-complete (hence, undecidable), which can be shown by a reduction from the solvability of Diophantine equations. Syntax. The syntax of DatalogZ extends that of Datalog with "numeric terms", which are integer constants, integer variables, or terms built up from these with addition, subtraction, and multiplication. Furthermore, DatalogZ allows "comparison atoms", which are atoms of the form codice_0 or codice_1 for numeric terms codice_2, codice_3. Semantics. The semantics of DatalogZ are based on the model-theoretic (Herbrand) semantics of Datalog. Limit DatalogZ. The undecidability of entailment of DatalogZ motivates the definition of "limit DatalogZ". Limit DatalogZ restricts predicates to a single numeric position, which is marked maximal or minimal. The semantics are based on the model-theoretic (Herbrand) semantics of Datalog. The semantics require that Herbrand interpretations be "limit-closed" to qualify as models, in the following sense: Given a ground atom formula_0 of a limit predicate formula_1 where the last position is a min (resp. max) position, if formula_2 is in a Herbrand interpretation formula_3, then the ground atoms formula_4 for formula_5 (resp. formula_6) must also be in formula_3 for formula_3 to be limit-closed. Example. Given a constant codice_4, a binary relation codice_5 that represents the edges of a graph, and a binary relation codice_6 with the last position of codice_6 minimal, the following limit DatalogZ program computes the relation codice_6, which represents the length of the shortest path from codice_4 to any other node in the graph: sp(w, 0) :- . sp(y, m + 1) :- sp(x, m), edge(x, y). References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
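A rough way to see what the example program means is to evaluate it by naive fixpoint iteration: because the numeric position of sp is minimal, deriving sp(v, m) also entails sp(v, k) for every k > m, so it suffices to keep the smallest derived value for each node, which is exactly the shortest-path length. The following Python sketch does this for an illustrative graph; it is an informal illustration of the semantics, not an implementation of a limit DatalogZ engine.

def shortest_paths(start, edges):
    sp = {start: 0}                          # smallest derived value per node; sp(w, 0) :- .
    changed = True
    while changed:
        changed = False
        for x, y in edges:                   # rule: sp(y, m + 1) :- sp(x, m), edge(x, y).
            if x in sp and sp[x] + 1 < sp.get(y, float("inf")):
                sp[y] = sp[x] + 1
                changed = True
    return sp

print(shortest_paths("w", [("w", "a"), ("a", "b"), ("w", "b"), ("b", "c")]))
# {'w': 0, 'a': 1, 'b': 1, 'c': 2}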
[ { "math_id": 0, "text": "a=r(c_1, \\ldots, c_n)" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "r(c_1,\\ldots,k)" }, { "math_id": 5, "text": "k > c_n" }, { "math_id": 6, "text": "k < c_n" } ]
https://en.wikipedia.org/wiki?curid=73214867
73226836
Buffered probability of exceedance
Explains the buffered probability of exceedance (bPOE), a risk measure Buffered probability of exceedance (bPOE) is a function of a random variable used in statistics and risk management, including financial risk. The bPOE is the probability of a tail with known mean value formula_0. The figure shows the bPOE at threshold formula_0 (marked in red) as the blue shaded area. Therefore, by definition, bPOE is equal to one minus the confidence level at which the Conditional Value at Risk (CVaR) is equal to formula_0. bPOE is similar to the probability of exceedance of the threshold formula_0, but the tail is defined by its mean rather than by the lowest point formula_0 of the tail. bPOE has its origins in the concept of "buffered probability of failure (bPOF)", developed by R. Tyrrell Rockafellar and Johannes Royset to measure failure risk. It was further developed and defined as the inverse of CVaR by Matthew Norton, Stan Uryasev, and Alexander Mafusalov. Similar to CVaR, bPOE considers not only the probability that outcomes (losses) exceed the threshold formula_0, but also the magnitude of these outcomes (losses). Formal definition. There are two slightly different definitions of bPOE, the so-called Lower bPOE and Upper bPOE. For a random variable formula_1, the Lower bPOE, formula_2, at threshold formula_3 is given by: formula_4 where formula_5. bPOE can be expressed as the inverse function of CVaR: formula_6, where formula_7 is the CVaR of formula_1 with confidence level formula_8. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
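For a finite sample, the Lower bPOE can be approximated directly from the second expression in the definition above, by searching the auxiliary threshold over the sample points below formula_0. The following sketch is an illustrative empirical approximation, not a general-purpose implementation.

import numpy as np

def lower_bpoe(samples, x):
    # min over gamma < x of  E[(X - gamma)^+] / (x - gamma), searched over sample points;
    # the value 1 corresponds to the limiting case gamma -> -infinity.
    samples = np.asarray(samples, dtype=float)
    best = 1.0
    for gamma in samples[samples < x]:
        best = min(best, np.maximum(samples - gamma, 0.0).mean() / (x - gamma))
    return best

rng = np.random.default_rng(0)
losses = rng.normal(size=2_000)
print(lower_bpoe(losses, x=2.0))       # exceeds the ordinary exceedance probability P(X > 2)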
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\bar{p}_x(X)" }, { "math_id": 3, "text": "x \\in [E[X], \\sup X ]" }, { "math_id": 4, "text": "\\bar{p}_x (X) = \\min_{a \\geq 0} E[ a(X-x) +1 ]^+ = \\min_{\\gamma <x} \\frac{ E[X - \\gamma]^+ } { x - \\gamma}" }, { "math_id": 5, "text": "[\\cdot]^+ = \\max\\{\\cdot , 0\\}" }, { "math_id": 6, "text": " \\bar{p}_x (X) = \\{ 1 - \\alpha | \\bar{q}_\\alpha (X) = x \\} " }, { "math_id": 7, "text": "\\bar{q}_\\alpha (X)" }, { "math_id": 8, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=73226836
73227716
Metal tetranorbornyl
In organometallic chemistry, metal tetranorbornyls are compounds with the formula M(nor)4 (M = a metal in a +4 oxidation state) (1-nor = bicyclo[2.2.1]hept-1-yl) and are one of the largest series of tetraalkyl complexes derived from identical ligands. Metal tetranorbornyls display uniform stoichiometry, low-spin configurations, and high stability, which can be attributed to their +4 oxidation state metal center. The stability of metal tetranorbornyls is predominantly considered to derive from unfavorable β-hydride elimination. Computational studies have determined that London dispersion effects significantly contribute to the stability of metal tetranorbornyls. Specifically, Fe(nor)4 has a stabilization of 45.9 kcal/mol. Notable metal tetranorbornyls are those synthesized with metal centers of cobalt, manganese, or iron. Preparation. Traditionally, metal tetranorbornyls are prepared by a reaction of alkyllithiums, such as 1-norbornyllithium, with transition-metal halides while tumbling with glass beads in pentane. This is followed by a filtration step using a column of alumina to remove byproducts from the pentane solution. Lastly, a recrystallization step from pentane yields the crystalline compound. Alternative methods for the preparation of metal tetranorbornyls have been proposed. Specifically, the tetrakis(1-norbornyl)chromium complex can be prepared under an inert atmosphere with 1-norbornyllithium dissolved in hexane. CrCl3(THF)3 is then added and the mixture is stirred for 48 hours. Afterwards, the solution is centrifuged to remove LiCl. The resulting supernatant is applied to an alumina column with hexane being used as the elution solvent. The use of the alumina column allows for the collection of a purple fraction that undergoes solvent evaporation and sublimation to obtain the desired Cr(nor)4 complex. The tetrakis(1-norbornyl)cobalt(IV) complex can be prepared by the following: The tetrakis(1-norbornyl)molybdenum(IV) complex was prepared by William M. Davis, Richard R. Schrock, and Richard M. Kolodziej by the following: The MoCl3(THF)3 was stirred with 1-norbornyllithium in a mixture of THF and diethyl ether at formula_0. The reaction mixture was then warmed to formula_1 and after approximately 90 minutes the mixture was observed to be red with a blue precipitate. The reaction mixture was then filtered to remove the blue precipitate. The red filtrate was then reduced under vacuum to yield red crystals of Mo(nor)4. Structure and bonding. The stability of metal tetranorbornyls is generally considered to be a result of unfavorable β-hydrogen elimination. Metal alkyl species with β-hydrogen atoms present on the alkyl group are disfavored due to β-hydrogen migration to the metal center, which results in an olefin being eliminated and the production of the corresponding metal hydride. Even though it possesses 6 β-hydrogen atoms, the 1-norbornyl ligand does not undergo β-hydrogen migration, due to the unfavorable formation of the olefin 1-norbornene. According to Bredt's rule, one of the sp2 carbons of the double bond would be located at the bridgehead, which would cause 1-norbornene to be highly strained. β-hydrogen elimination does not explain the formation of metal tetranorbornyl complexes that are synthesized from lower valent metal center precursors, the shortened bond lengths between the metal center and the 1-norbornyl ligand carbons, or the resulting low-spin tetrahedral molecular geometry.
Quantum mechanical calculations have elucidated that London dispersion forces between the norbornyl ligands are responsible for the stability and molecular geometry of the homoleptic tetranorbornyl metal complexes. Metal tetranorbornyl complexes prepared from the divalent and trivalent halides of Cr, Mn, Fe and Co form negatively charged complexes, followed by oxidation induced by other transition-metal species in the reaction. Factors that lead to disproportionation are traditionally considered to be derived from the tertiary carbanion ligand, 1-norbornyllithium, and the lack of potential for the pentane solvent to act as a ligand. Therefore, metal tetranorbornyls composed of first-row transition metals are not easily penetrated by small reagents because of the metal center's crowded coordination sphere. Tetrakis(1-norbornyl)cobalt(IV). Tetrakis(1-norbornyl)cobalt(IV) is a thermally stable homoleptic complex with σ-bonded ligands. The metal tetranorbornyl complex was the first isolated low-spin complex with tetrahedral molecular geometry. The tetrakis(1-norbornyl)cobalt(IV) complex was first synthesized by Barton K. Bower and Howard G. Tennent in 1972. Oxidation to tetrakis(1-norbornyl)cobalt(IV) is reversible, with O2 used as the oxidizing agent. The coordination environment of the cobalt metal center is a distorted tetrahedron. When examined by x-ray crystallography, the metal tetranorbornyl has crystallographic Cs symmetry due to the presence of six carbons lying on the mirror plane. However, the four carbon atoms bonded to the cobalt metal center describe a tetragonally compressed tetrahedron, corresponding to pseudo-D2d symmetry. The cobalt metal center in the +4 oxidation state has a d5 configuration. Typically, a d5 configuration is expected to give a high-spin complex containing 5 unpaired electrons, whereas the low-spin tetrahedral complex contains only 1 unpaired electron. The single unpaired electron resides in the antibonding t2 orbital, which would cause the structure to experience a Jahn-Teller distortion. However, Theopold and co-workers speculated that the slight tetragonal compression could have been a result of steric interactions between norbornyl ligands and crystal packing forces. Tetrakis(1-norbornyl)iron(IV). The tetrakis(1-norbornyl)iron(IV) complex was first synthesized by Barton K. Bower and Howard G. Tennent in 1972. The 1-norbornyl ligands on the complex have a strong dispersion attraction and high ring strain, which as a consequence hinder the α- and β-hydride elimination reactions. Additionally, the identical ligands cause a reduced chemical reactivity due to a crowded chemical environment that impedes the interaction of small molecules with the Fe-C bonds. Synthesized complexes. Barton K. Bower and Howard G. Tennent were able to successfully synthesize and characterize the following metal tetranorbornyls derived from the first-, second-, and third-row transition metals: The metal tetranorbornyl complexes of hafnium, zirconium, titanium, and vanadium display a tetrahedral molecular geometry, which is analogous to the tetrachloride form of the metals. In comparison, the cobalt, manganese, and iron complexes display a tetragonal molecular geometry. A combination of London dispersion forces and steric effects from the 1-norbornyl ligands results in the stability observed for the metal center. Characterization. Magnetic measurements.
The molecular geometry of the metal tetranorbornyl complexes is governed by whether their d electrons are paired or unpaired. Magnetic measurements have indicated that the d electrons of tetrakis(1-norbornyl)chromium (d2) and tetrakis(1-norbornyl)manganese (d3) are not spin paired. The four d electrons of tetrakis(1-norbornyl)iron and tetrakis(1-norbornyl)cobalt are spin paired. Electron paramagnetic resonance spectroscopy. Metal tetranorbornyls are commonly characterized via electron paramagnetic resonance (EPR) spectroscopy. Tetrakis(1-norbornyl)molybdenum gives a room-temperature EPR signal originating from its d2 metal center, which was considered to have two unpaired electrons in the eg orbital. The EPR signal of tetrakis(1-norbornyl)chromium is comparable. Cyclic voltammetry. In 1988, Klaus H. Theopold and Erin K. Byrne used the electrochemical technique of cyclic voltammetry to determine how oxidizing the metal center of the tetrakis(1-norbornyl)cobalt(IV) complex is. Two reversible electron-transfer waves at −0.65 and −2.02 V were observed in THF; compared against the ferricenium/ferrocene couple, the differences in peak potentials were consistent with two one-electron transfer processes. In the same year, William M. Davis, Richard R. Schrock, and Richard M. Kolodziej produced a cyclic voltammogram for tetrakis(1-norbornyl)molybdenum. Two oxidation waves were observed at −0.15 and +1.25 V in DCM. The oxidation at −0.15 V was considered to be reversible. In comparison, the second oxidation at +1.25 V was considered to be irreversible.
[ { "math_id": 0, "text": "-46^\\circ C" }, { "math_id": 1, "text": "25^\\circ C" } ]
https://en.wikipedia.org/wiki?curid=73227716
73228457
Lead stearate
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Lead stearate is a metal-organic compound, a salt of lead and stearic acid with the chemical formula C36H70PbO4. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid. The compound is toxic. Synthesis. The compound can be prepared by reacting stearic acid, lead(II) oxide, and a catalyst acetic acid. formula_0 Also, an exchange reaction between lead(II) acetate and sodium stearate: formula_1 Physical properties. White powder with a slight fatty odor. Sinks in water. Hygroscopic in air. Slightly soluble in water. Soluble in hot ethanol. Uses. The compound is used as a drier in oil paints and varnishes to speed the polymerization and oxidation processes. Also used as a lubricant and stabilizer in vinyl polymers and as a corrosion inhibitor in petroleum products. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{2 \\ C_{17}H_{35}COOH + PbO \\longrightarrow (C_{17}H_{35}COO)_{2}Pb + \\ H_2O}" }, { "math_id": 1, "text": "\\mathsf{ Pb(CH_3COO)_2 + 2NaC_{18}H_{35}O_2 \\ \\xrightarrow{}\\ Pb(C_{18}H_{35}O_2)_2\\downarrow + 2 CH_3COONa }" } ]
https://en.wikipedia.org/wiki?curid=73228457
73232072
Tuza's conjecture
Problem on triangles in graph theory Tuza's conjecture is an unsolved problem in graph theory, a branch of mathematics, concerning triangles in undirected graphs. Statement. In any graph formula_0, one can define two quantities formula_1 and formula_2 based on the triangles in formula_0. The quantity formula_1 is the "triangle packing number", the largest number of edge-disjoint triangles that it is possible to find in formula_0. It can be computed in polynomial time as a special case of the matroid parity problem. The quantity formula_2 is the size of the smallest "triangle-hitting set", a set of edges that touches at least one edge from each triangle. Clearly, formula_3. For the first inequality, formula_4, any triangle-hitting set must include at least one edge from each triangle of the optimal packing, and none of these edges can be shared between two or more of these triangles because the triangles are disjoint. For the second inequality, formula_5, one can construct a triangle-hitting set of size formula_6 by choosing all edges of the triangles of an optimal packing. This must hit all triangles in formula_0, even the ones not in the packing, because otherwise the packing could be made larger by adding any unhit triangle. Tuza's conjecture asserts that the second inequality is not tight, and can be replaced by formula_7. That is, according to this unproven conjecture, every undirected graph formula_0 has a triangle-hitting set whose size is at most twice the number of triangles in an optimal packing. History and partial results. Zsolt Tuza formulated Tuza's conjecture in 1981. If true, it would be best possible: there are infinitely many graphs for which formula_8, including all of the block graphs whose blocks are cliques of 2, 4, or 5 vertices. The conjecture is known to hold for planar graphs, and more generally for sparse graphs of degeneracy at most six. (Planar graphs have degeneracy at most five.) It is also known to hold for graphs of treewidth at most six, for threshold graphs, for sufficiently dense graphs, and for chordal graphs that contain a large clique. For random graphs in the Erdős–Rényi–Gilbert model, it is true with high probability. Although Tuza's conjecture remains unproven, the bound formula_5 can be improved, for all graphs, to formula_9. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
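Both quantities can be computed by brute force on small graphs, which makes the inequalities and the tight examples easy to check. The following Python sketch (exponential-time and only suitable for toy instances; the function names are ours, not from any reference) computes the triangle packing number and a minimum triangle-hitting set, and confirms that the complete graph on five vertices attains the conjectured bound with equality:

```python
from itertools import combinations

def triangles(edges):
    """Return all triangles of an undirected graph as triples of edges."""
    E = set(map(frozenset, edges))
    V = sorted({v for e in E for v in e})
    return [tuple(frozenset(p) for p in ((a, b), (b, c), (a, c)))
            for a, b, c in combinations(V, 3)
            if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= E]

def nu(edges):
    """Triangle packing number: maximum number of edge-disjoint triangles."""
    tris = triangles(edges)
    for r in range(len(tris), 0, -1):
        for packing in combinations(tris, r):
            used = [e for t in packing for e in t]
            if len(used) == len(set(used)):      # pairwise edge-disjoint
                return r
    return 0

def tau(edges):
    """Size of a smallest triangle-hitting set of edges."""
    E = list(set(map(frozenset, edges)))
    tris = triangles(edges)
    if not tris:
        return 0
    for r in range(len(E) + 1):
        for hitting in combinations(E, r):
            if all(set(hitting) & set(t) for t in tris):
                return r

# K5 is a tight example: nu = 2 and tau = 4, so tau = 2 * nu.
k5 = list(combinations(range(5), 2))
print(nu(k5), tau(k5))     # prints: 2 4
```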
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\nu(G)" }, { "math_id": 2, "text": "\\tau(G)" }, { "math_id": 3, "text": "\\nu(G)\\le\\tau(G)\\le 3\\nu(G)" }, { "math_id": 4, "text": "\\nu(G)\\le\\tau(G)" }, { "math_id": 5, "text": "\\tau(G)\\le 3\\nu(G)" }, { "math_id": 6, "text": "3\\nu(G)" }, { "math_id": 7, "text": "\\tau(G)\\le 2\\nu(G)" }, { "math_id": 8, "text": "\\tau(G)=2\\nu(G)" }, { "math_id": 9, "text": "\\tau(G)\\le (3-\\tfrac{3}{23})\\nu(G)\\approx 2.8695\\nu(G)" } ]
https://en.wikipedia.org/wiki?curid=73232072
73236876
Bamberg-Refraktor
The Bamberg-Refraktor is a large telescope. The refracting telescope has an aperture of 320 millimetres, a focal length of five metres and is located in the Wilhelm Foerster Observatory in the Berlin district of Schöneberg. The name "Bamberg" goes back to the builder of the telescope, Carl Bamberg (born 12 July 1847 in Kranichfeld; died 4 June 1892 in Friedenau), and the term "refractor" (from Latin "re" = 'back' and "frangere" = 'to break') means that the telescope is made exclusively with light-refracting optical lenses and does not use mirrors or zone plates. History. The 12-inch telescope was built in 1889 in the Berlin workshops of Carl Bamberg on Bundesallee in Friedenau and was at the time the largest telescope in the Kingdom of Prussia and the second largest in the German Empire after the refractor at the Observatoire de Strasbourg. It was characterised by careful manufacture, a large focal length and modern control technology. An electric clock was used for the largely automatic tracking of the telescope according to the hour angle of the object to be observed. The lenses were made of high-quality glass from the Glastechnisches Laboratorium Schott &amp; Genossen in Jena. The total cost was 50,000 Mark, which corresponded to 250 kilograms of silver ("Note:" this amount of silver corresponds to just over 8,000 fine ounces, with a market value (as of 2018/2019) of roughly 100,000 euros). Initially, it was not only available for research purposes, but primarily for the public in the observatory of the Urania on Invalidenstraße in Berlin, which was equipped with an electrically operated dome. Among the first astronomers working there were Friedrich Simon Archenhold and Wilhelm Foerster, a co-founder of the Urania. With the refractor, the astronomer Gustav Witt discovered the asteroids (422) Berolina and (433) Eros in 1896 and 1898. The polar explorer Alfred Wegener, who was trained as an astronomer, also used the Bamberg refractor at the Urania. During the Second World War, the building was severely damaged, but the glass lenses remained undamaged. The telescope was salvaged in 1951 and was repaired by the Askania workshops in Berlin-Mariendorf. In 1955, it was set up as the largest operable telescope in Berlin on the grounds of the observatory of the Wilhelm Foerster Institute in General-Pape-Straße in Berlin, which had been built up since 1947 in the half-ruin of a former officers' mess by the two Berlin amateur astronomers Hans Rechlin and Hans Mühle and transferred to the Wilhelm-Foerster-Sternwarte Association in June 1953. The Bamberg-Refraktor was used there for public demonstrations, but also for training astronomers. However, the light pollution from the nearby railway facilities at Südkreuz proved unfavourable for night sky observations, so a new location was sought. In November 1961, the foundation stone was laid for the Wilhelm Foerster Observatory, built with funds from the Deutsche Klassenlotterie Berlin on the Insulaner in Berlin-Schöneberg, a hill piled up after the war from rubble to a height of just over 78 metres. In 1962, Askania in Berlin-Mariendorf carried out a general overhaul of the telescope, and since the opening of the Wilhelm Foerster Observatory on 30 January 1963, the refractor in the largest dome of the public observatory has been the most important and most frequently used instrument for the society's demonstrations. The movable dome with a diameter of eleven metres dates from 1905.
It was no longer needed at the Zeiss-Ikon factories in Berlin-Friedenau and was given to the observatory. Using the Bamberg-Refraktor, Adolf Voigt and Hans Giebler of the Berlin Lunar Observers Group made the roll-film images for the "Berlin Lunar Atlas" from 1964 to 1969, which have since been made available in digitised form. Today, the large refractor is mainly used for public demonstrations. In 1996 and 1997 the Bamberg refractor was overhauled by Gebhard Kühn at Zeiss in Jena, and in 2020 it was equipped with a new electrically controlled tracking system by the company 4H Jena-Engineering. In addition to the refractor in Rathenow, the great refractor in Potsdam and the great refractor of the Archenhold Observatory in Treptow, the Bamberg-Refraktor is still one of the large telescopes in the Berlin area. It is the oldest functioning large lens telescope in Europe. Engineering. The Bamberg-Refraktor was designed according to the principle of the Kepler telescope, with an optically corrected lens system built from spherically ground lenses of flint and crown glass. By combining the two glass types, which have different dispersion, the lens is achromatic, so that the blue and red light components have almost the same back focal distance, which is, however, slightly larger than the back focal distance in the green. The glasses are ordinary silicate glasses from Schott, which were, however, processed very elaborately and with particular care so that they could solidify stress-free and optically pure from the melt. The two lenses, which are not cemented together, have the following parameters: From the aperture formula_0 and the focal length formula_1, a focal ratio of just under 16 and an aperture ratio of just over 1/16 follow: formula_2 The image-side angular aperture is: formula_3 The spherical aberration is not corrected. The diameter formula_4 of the Airy disc in the image plane at a wavelength formula_5 of 550 nanometres in the green, as given by the diffraction limit, is: formula_6 Thus, the optical resolution of the Bamberg-Refraktor, limited by diffraction and given as the smallest angle formula_7 between two stars that can still be distinguished, is: formula_8 This gives a maximum number of resolvable line pairs formula_9 along the image circle diameter formula_10 in the image plane of the telescope of: formula_11 Thus, for the wavelength in the green, it follows: formula_12. This corresponds to a maximum spatial frequency of just under 48 line pairs per millimetre in the image plane and, for an image circle diameter of 21 millimetres in the eyepiece, a maximum spatial frequency of 1000 line pairs per image circle diameter. The light intensity is sufficient to observe objects of more than 14th apparent magnitude. Depending on the eyepiece used, the telescope is usually operated with magnifications of 70x to 700x. With telescope mount and balance weight, the instrument weighs four and a half tonnes. It is balanced in such a way that it can be moved by hand without motors. Miscellaneous. The telescope of the Bosscha Observatory in Indonesia, also called the Bamberg-Refraktor, has a focal length of seven metres and a diameter of 370 millimetres (focal ratio ≈ 19) and was first commissioned in 1927 in Berlin. The comparatively large and thin lenses of this long-focal-length telescope are deformed detectably by their own weight when the telescope changes position. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
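The optical quantities derived above follow directly from the stated aperture, focal length, and wavelength. A short Python sketch (variable names are ours; the input values are those quoted in the text) reproduces the figures:

```python
import math

D = 0.320        # aperture (m), as stated in the text
f = 5.000        # focal length (m)
lam = 550e-9     # wavelength in the green (m)
B = 0.021        # image circle diameter in the eyepiece (m)

k = f / D                                              # focal ratio, ~15.6
omega_b = 2 * math.degrees(math.atan(D / (2 * f)))     # image-side angular aperture, ~3.7 deg
d_B = 2.44 * lam * k                                   # Airy disc diameter, ~21 micrometres
delta = math.degrees(math.asin(1.22 * lam / D))        # diffraction-limited resolution

print(f"focal ratio k         = {k:.1f}")
print(f"angular aperture      = {omega_b:.1f} deg")
print(f"Airy disc diameter    = {d_B * 1e6:.0f} um")
print(f"resolution delta      = {delta:.5f} deg = {delta * 3600:.2f} arcsec")
print(f"line pairs per mm     = {1 / (d_B * 1e3):.0f}")   # ~48
print(f"line pairs per circle = {B / d_B:.0f}")           # ~1000
```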
[ { "math_id": 0, "text": "D = 320 \\text { mm}" }, { "math_id": 1, "text": "f = 5000 \\text { mm}" }, { "math_id": 2, "text": "k = \\frac {f} {D} \\approx 15.6 " }, { "math_id": 3, "text": "\\omega_b = 2 \\cdot \\arctan {\\frac {D} {2 \\cdot f}} \\approx 3.7^\\circ " }, { "math_id": 4, "text": "d_B" }, { "math_id": 5, "text": "\\lambda " }, { "math_id": 6, "text": "d_B(550 \\text { nm}) \\approx 2.44 \\cdot \\lambda \\cdot k \\approx 21 \\text { μm}" }, { "math_id": 7, "text": "\\delta" }, { "math_id": 8, "text": "\\delta \\approx \\arcsin {\\left( 1.22 \\frac {\\lambda} {D} \\right)} \\approx {0,00012}^\\circ \\approx 0,0072 ' \\approx 0.43''" }, { "math_id": 9, "text": "N_L" }, { "math_id": 10, "text": "B" }, { "math_id": 11, "text": "N_L(\\lambda) = \\frac {N_P} {2} \\approx \\frac {B} {2,44 \\cdot \\lambda \\cdot k} = \\frac {B} {d_B(\\lambda)}" }, { "math_id": 12, "text": "N_L(550 \\text { nm}) = \\frac {B} {d_B(550 \\text { nm})} = \\frac {B} {0.021 \\text { mm}}" } ]
https://en.wikipedia.org/wiki?curid=73236876
73238642
Matrix F-distribution
Multivariate continuous probability distribution In statistics, the matrix F distribution (or matrix variate F distribution) is a matrix variate generalization of the F distribution which is defined on real-valued positive-definite matrices. In Bayesian statistics it can be used as the semi-conjugate prior for the covariance matrix or precision matrix of multivariate normal distributions, and related distributions. Density. The probability density function of the matrix formula_0 distribution is: formula_4 where formula_2 and formula_5 are formula_1 positive definite matrices, formula_6 is the determinant, Γ"p"(⋅) is the multivariate gamma function, and formula_3 is the "p" × "p" identity matrix. Properties. Construction of the distribution. Sample formula_8 and formula_9, and define formula_10; then formula_11. Alternatively, if formula_12 and formula_13, then formula_15 has a matrix F distribution, obtained by marginalizing over formula_14: formula_16 This construction is useful to construct a semi-conjugate prior for a covariance matrix. Similarly, if formula_17 and formula_18, then marginalizing over formula_14 again gives a matrix F distribution for formula_15: formula_19 Marginal distributions from a matrix F distributed matrix. Suppose formula_20 has a matrix F distribution. Partition the matrices formula_21 and formula_22 conformably with each other: formula_23 where formula_24 and formula_25 are formula_26 matrices; then we have formula_27. Moments. Let formula_28. The mean is given by: formula_29 The (co)variances of the elements of formula_2 are given by: formula_30 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
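A quick Monte Carlo check of the mean stated in the Moments section can be run with the hierarchical construction above: draw formula_13, then draw the inverse-Wishart variate conditional on it, and average. The following Python sketch relies on SciPy's Wishart and inverse-Wishart samplers and on illustrative parameter values chosen by us; it is an informal check, not reference code:

```python
import numpy as np
from scipy.stats import wishart, invwishart

rng = np.random.default_rng(0)
p, nu, delta = 2, 6.0, 6.0                 # delta > 2 so that the mean exists
Psi = np.array([[2.0, 0.5],
                [0.5, 1.0]])

draws = []
for _ in range(10000):
    Phi = wishart.rvs(df=nu, scale=Psi, random_state=rng)               # Phi ~ W(Psi, nu)
    X = invwishart.rvs(df=delta + p - 1, scale=Phi, random_state=rng)   # X | Phi ~ W^-1(Phi, delta + p - 1)
    draws.append(X)

print("Monte Carlo mean:\n", np.mean(draws, axis=0))
print("Theoretical mean nu/(delta-2) * Psi:\n", nu / (delta - 2) * Psi)
```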
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "p\\times p" }, { "math_id": 2, "text": "\\mathbf{X}" }, { "math_id": 3, "text": "\\textbf{I}_p" }, { "math_id": 4, "text": "\nf_{\\mathbf X}({\\mathbf X}; {\\mathbf \\Psi}, \\nu, \\delta) = \n\\frac{\\Gamma_p\\left(\\frac{\\nu+\\delta+p-1}{2}\\right)}{\\Gamma_p\\left(\\frac{\\nu}{2}\\right)\\Gamma_p\\left(\\frac{\\delta+p-1}{2}\\right)|\\mathbf{\\Psi}|^{\\frac{\\nu}{2}}}~|{\\mathbf X}|^{\\frac{\\nu-p-1}{2}} |\\textbf{I}_p+{\\mathbf X}\\mathbf{\\Psi}^{-1}|^{-\\frac{\\nu+\\delta+p-1}{2}}\n" }, { "math_id": 5, "text": "{\\mathbf\\Psi}" }, { "math_id": 6, "text": "| \\cdot |" }, { "math_id": 7, "text": "\\mathbf I_p" }, { "math_id": 8, "text": "{\\mathbf \\Phi_1}\\sim \\mathcal{W}({\\mathbf I_p},\\nu)" }, { "math_id": 9, "text": "{\\mathbf \\Phi_2}\\sim \\mathcal{W}({\\mathbf I_p},\\delta+k-1)" }, { "math_id": 10, "text": "\\mathbf X = {\\mathbf \\Phi_2}^{-1/2}{\\mathbf \\Phi_1}{\\mathbf \\Phi_2}^{-1/2}" }, { "math_id": 11, "text": "\\mathbf X\\sim \\mathcal{F}({\\mathbf I_p},\\nu,\\delta) " }, { "math_id": 12, "text": "{\\mathbf X}|\\mathbf\\Phi\\sim \\mathcal{W}^{-1}({\\mathbf\\Phi},\\delta+p-1)" }, { "math_id": 13, "text": "{\\mathbf \\Phi}\\sim \\mathcal{W}({\\mathbf\\Psi},\\nu)" }, { "math_id": 14, "text": "\\mathbf\\Phi" }, { "math_id": 15, "text": "\\mathbf X" }, { "math_id": 16, "text": "\nf_{\\mathbf X | \\mathbf\\Phi, \\nu, \\delta}(\\mathbf X) =\n\\int f_{\\mathbf X | \\mathbf\\Phi, \\delta+p-1}(\\mathbf X)\nf_{\\mathbf\\Phi | \\mathbf\\Psi, \\nu}(\\mathbf\\Phi) d\\mathbf\\Phi.\n" }, { "math_id": 17, "text": "{\\mathbf X}|\\mathbf\\Phi\\sim \\mathcal{W}({\\mathbf\\Phi},\\nu)" }, { "math_id": 18, "text": "{\\mathbf \\Phi}\\sim \\mathcal{W}^{-1}({\\mathbf\\Psi},\\delta+p-1)" }, { "math_id": 19, "text": "\nf_{\\mathbf X | \\mathbf\\Psi, \\nu, \\delta}(\\mathbf X) =\n\\int f_{\\mathbf X | \\mathbf\\Phi, \\nu}(\\mathbf X)\nf_{\\mathbf\\Phi | \\mathbf\\Psi, \\delta + p - 1}(\\mathbf\\Phi) d\\mathbf\\Phi.\n" }, { "math_id": 20, "text": "{\\mathbf A}\\sim F({\\mathbf\\Psi},\\nu,\\delta)" }, { "math_id": 21, "text": " {\\mathbf A} " }, { "math_id": 22, "text": " {\\mathbf\\Psi} " }, { "math_id": 23, "text": "\n {\\mathbf{A}} = \\begin{bmatrix} \\mathbf{A}_{11} & \\mathbf{A}_{12} \\\\ \\mathbf{A}_{21} & \\mathbf{A}_{22} \\end{bmatrix}, \\;\n {\\mathbf{\\Psi}} = \\begin{bmatrix} \\mathbf{\\Psi}_{11} & \\mathbf{\\Psi}_{12} \\\\ \\mathbf{\\Psi}_{21} & \\mathbf{\\Psi}_{22} \\end{bmatrix}\n" }, { "math_id": 24, "text": "{\\mathbf A_{ij}}" }, { "math_id": 25, "text": "{\\mathbf \\Psi_{ij}} " }, { "math_id": 26, "text": " p_{i}\\times p_{j}" }, { "math_id": 27, "text": " {\\mathbf A_{11} } \\sim F({\\mathbf \\Psi_{11} }, \\nu, \\delta) " }, { "math_id": 28, "text": " X \\sim F({\\mathbf\\Psi},\\nu,\\delta)" }, { "math_id": 29, "text": " E(\\mathbf X) = \\frac{\\nu}{\\delta-2}\\mathbf\\Psi." 
}, { "math_id": 30, "text": "\n\\operatorname{cov}(X_{ij},X_{ml}) = \\Psi_{ij}\\Psi_{ml}\\tfrac{2\\nu^2+2\\nu(\\delta-2)}{(\\delta-1)(\\delta-2)^2(\\delta-4)}\n+ (\\Psi_{il}\\Psi_{jm}+\\Psi_{im}\\Psi_{jl})\\left(\\tfrac{2\\nu+\\nu^2(\\delta-2)+\\nu(\\delta-2)}{(\\delta-1)(\\delta-2)^2(\\delta-4)}+\\tfrac{\\nu}{(\\delta-2)^2}\\right).\n" }, { "math_id": 31, "text": "p=1" }, { "math_id": 32, "text": "\\mathbf\\Psi = 1" }, { "math_id": 33, "text": "x=\\mathbf{X}" }, { "math_id": 34, "text": "\nf_{x\\mid\\nu, \\delta}(x) = \n\\operatorname{B}\\left(\\tfrac{\\nu}{2},\\tfrac{\\delta}{2}\\right)^{-1} \\left(\\tfrac{\\nu}{\\delta}\\right)^{\\nu/2} x^{\\nu/2 - 1} \\left(1+\\tfrac{\\nu}{\\delta} \\, x \\right)^{-(\\nu+\\delta)/2},\n" }, { "math_id": 35, "text": "\\nu=1" }, { "math_id": 36, "text": "\\sqrt{x}" }, { "math_id": 37, "text": "\\sqrt{\\psi}" }, { "math_id": 38, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=73238642
73242
Birthday problem
Probability of shared birthdays In probability theory, the birthday problem asks for the probability that, in a set of n randomly chosen people, at least two will share a birthday. The birthday paradox refers to the counterintuitive fact that only 23 people are needed for that probability to exceed 50%. The birthday paradox is a veridical paradox: it seems wrong at first glance but is, in fact, true. While it may seem surprising that only 23 individuals are required to reach a 50% probability of a shared birthday, this result is made more intuitive by considering that the birthday comparisons will be made between every possible pair of individuals. With 23 individuals, there are (23 × 22)/2 = 253 pairs to consider, far more than half the number of days in a year. Real-world applications for the birthday problem include a cryptographic attack called the birthday attack, which uses this probabilistic model to reduce the complexity of finding a collision for a hash function, as well as calculating the approximate risk of a hash collision existing within the hashes of a given size of population. The problem is generally attributed to Harold Davenport in about 1927, though he did not publish it at the time. Davenport did not claim to be its discoverer "because he could not believe that it had not been stated earlier". The first publication of a version of the birthday problem was by Richard von Mises in 1939. Calculating the probability. From a permutations perspective, let the event "A" be that a group of 23 people has no repeated birthdays, and let the event "B" be that at least two of the 23 people share the same birthday; then "P"("B") = 1 − "P"("A"). "P"("A") is the ratio of the number of birthday assignments without repetition, where order matters, formula_0 (e.g. for a group of 2 people, in mm/dd birthday format, one possible outcome is formula_1), divided by the total number of birthday assignments with repetition, where order matters, formula_2, which is the total space of outcomes of the experiment (e.g. for 2 people, one possible outcome is formula_3). Therefore formula_0 and formula_2 are permutations. formula_4 Another way the birthday problem can be solved is by asking for an approximate probability that in a group of n people at least two have the same birthday. For simplicity, leap years, twins, selection bias, and seasonal and weekly variations in birth rates are generally disregarded, and instead it is assumed that there are 365 possible birthdays, and that each person's birthday is equally likely to be any of these days, independent of the other people in the group. For independent birthdays, a uniform distribution of birthdays minimizes the probability of two people in a group having the same birthday. Any unevenness increases the likelihood of two people sharing a birthday. However, real-world birthdays are not sufficiently uneven to make much change: the real-world group size necessary to have a greater than 50% chance of a shared birthday is 23, as in the theoretical uniform distribution. The goal is to compute "P"("B"), the probability that at least two people in the room have the same birthday. However, it is simpler to calculate "P"("A"′), the probability that no two people in the room have the same birthday. Then, because "B" and "A"′ are the only two possibilities and are also mutually exclusive, "P"("B") = 1 − "P"("A"′). Here is the calculation of "P"("B") for 23 people. Let the 23 people be numbered 1 to 23.
The event that all 23 people have different birthdays is the same as the event that person 2 does not have the same birthday as person 1, and that person 3 does not have the same birthday as either person 1 or person 2, and so on, and finally that person 23 does not have the same birthday as any of persons 1 through 22. Let these events be called Event 2, Event 3, and so on. Event 1 is the event of person 1 having a birthday, which occurs with probability 1. This conjunction of events may be computed using conditional probability: the probability of Event 2 is 364/365, as person 2 may have any birthday other than the birthday of person 1. Similarly, the probability of Event 3 given that Event 2 occurred is 363/365, as person 3 may have any of the birthdays not already taken by persons 1 and 2. This continues until finally the probability of Event 23 given that all preceding events occurred is 343/365. Finally, the principle of conditional probability implies that "P"("A"′) is equal to the product of these individual probabilities: The terms of equation (1) can be collected to arrive at: Evaluating equation (2) gives "P"("A"′) ≈ 0.492703. Therefore, "P"("B") ≈ 1 − 0.492703 = 0.507297 (50.7297%). This process can be generalized to a group of n people, where "p"("n") is the probability of at least two of the n people sharing a birthday. It is easier to first calculate the probability that all n birthdays are "different". According to the pigeonhole principle, this probability is zero when "n" &gt; 365. When "n" ≤ 365: formula_5 where ! is the factorial operator, 365 choose "n" is the binomial coefficient appearing in the middle expression, and "kPr" denotes permutation. The equation expresses the fact that the first person has no one with whom to share a birthday, the second person cannot have the same birthday as the first (364/365), the third cannot have the same birthday as either of the first two (363/365), and in general the nth birthday cannot be the same as any of the "n" − 1 preceding birthdays. The event of at least two of the n persons having the same birthday is complementary to all n birthdays being different. Therefore, its probability "p"("n") is formula_6 The following table shows the probability for some other values of n (for this table, the existence of leap years is ignored, and each birthday is assumed to be equally likely): Approximations. The Taylor series expansion of the exponential function (the constant "e" ≈ 2.71828) formula_7 provides a first-order approximation for "e""x" for formula_8: formula_9 To apply this approximation to the first expression derived for "p"("n"), set "x" = −"a"/365. Thus, formula_10 Then, replace a with non-negative integers for each term in the formula of "p"("n") until "a" = "n" − 1, for example, when "a" = 1, formula_11 The first expression derived for "p"("n") can be approximated as formula_12 Therefore, formula_13 An even coarser approximation is given by formula_14 which, as the graph illustrates, is still fairly accurate. According to the approximation, the same approach can be applied to any number of "people" and "days". If rather than 365 days there are d, if there are n persons, and if "n" ≪ "d", then using the same approach as above we achieve the result that if "p"("n", "d") is the probability that at least two out of n people share the same birthday from a set of d available days, then: formula_15
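The exact product formula and the exponential approximation above are straightforward to evaluate numerically. A minimal Python sketch (function names are ours):

```python
import math

def p_shared(n, d=365):
    """Exact probability that at least two of n people share a birthday."""
    if n > d:
        return 1.0
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (d - k) / d
    return 1.0 - p_distinct

def p_shared_approx(n, d=365):
    """First-order approximation 1 - exp(-n(n-1)/(2d))."""
    return 1.0 - math.exp(-n * (n - 1) / (2 * d))

for n in (10, 23, 32, 50):
    print(n, round(p_shared(n), 6), round(p_shared_approx(n), 6))
# n = 23 gives 0.507297 exactly, and 0.500002 from the approximation.
```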
Simple exponentiation. The probability of any two people not having the same birthday is 364/365. In a room containing "n" people, there are "n"("n" − 1)/2 pairs of people, i.e. "n"("n" − 1)/2 events. The probability of no two people sharing the same birthday can be approximated by assuming that these events are independent and hence by multiplying their probability together. Being independent would be equivalent to picking, with replacement, any pair of people in the world, not just in a room. In short, 364/365 can be multiplied by itself "n"("n" − 1)/2 times, which gives us formula_16 Since this is the probability of no one having the same birthday, then the probability of someone sharing a birthday is formula_17 And for the group of 23 people, the probability of sharing is formula_18 Poisson approximation. Applying the Poisson approximation for the binomial on the group of 23 people, formula_19 so formula_20 The result is over 50%, as in the previous calculations. This approximation is the same as the one above based on the Taylor expansion that uses "e"^"x" ≈ 1 + "x". Square approximation. A good rule of thumb which can be used for mental calculation is the relation formula_21 which can also be written as formula_22 which works well for probabilities less than or equal to 1/2. In these equations, d is the number of days in a year. For instance, to estimate the number of people required for a 1/2 chance of a shared birthday, we get formula_23 which is not too far from the correct answer of 23. Approximation of number of people. This can also be approximated using the following formula for the "number" of people necessary to have at least a 1/2 chance of matching: formula_24 This is a result of the good approximation that an event with probability 1/"k" will have a 1/2 chance of occurring at least once if it is repeated "k" ln 2 times. Probability table. The lighter fields in this table show the number of hashes needed to achieve the given probability of collision (column) given a hash space of a certain size in bits (row). Using the birthday analogy: the "hash space size" resembles the "available days", the "probability of collision" resembles the "probability of shared birthday", and the "required number of hashed elements" resembles the "required number of people in a group". One could also use this chart to determine the minimum hash size required (given upper bounds on the hashes and probability of error), or the probability of collision (for fixed number of hashes and probability of error). For comparison, to is the uncorrectable bit error rate of a typical hard disk. In theory, 128-bit hash functions, such as MD5, should stay within that range until about documents, even if its possible outputs are many more. An upper bound on the probability and a lower bound on the number of people. The argument below is adapted from an argument of Paul Halmos. As stated above, the probability that no two birthdays coincide is formula_25 As in earlier paragraphs, interest lies in the smallest n such that "p"("n") &gt; 1/2; or equivalently, the smallest n such that the probability that no two birthdays coincide is less than 1/2. Using the inequality 1 − "x" &lt; "e"^−"x" in the above expression, we replace 1 − "k"/365 with "e"^(−"k"/365). This yields formula_26 Therefore, the expression above is not only an approximation, but also an upper bound of 1 − "p"("n"). The inequality formula_27 implies 1 − "p"("n") &lt; 1/2. Solving for n gives formula_28 Now, 730 ln 2 is approximately 505.997, which is barely below 506, the value of "n"² − "n" attained when "n" = 23. Therefore, 23 people suffice. Incidentally, solving "n"² − "n" = 730 ln 2 for "n" gives the approximate formula of Frank H. Mathis cited above.
This derivation only shows that "at most" 23 people are needed to ensure the chances of a birthday match are at least even; it leaves open the possibility that "n" = 22 or less could also work. Generalizations. Arbitrary number of days. Given a year with d days, the generalized birthday problem asks for the minimal number "n"("d") such that, in a set of n randomly chosen people, the probability of a birthday coincidence is at least 50%. In other words, "n"("d") is the minimal integer n such that formula_29 The classical birthday problem thus corresponds to determining "n"(365). The first 99 values of "n"("d") are given here (sequence in the OEIS): A similar calculation shows that "n"("d") = 23 when d is in the range 341–372. A number of bounds and formulas for "n"("d") have been published. For any "d" ≥ 1, the number "n"("d") satisfies formula_30 These bounds are optimal in the sense that the sequence "n"("d") − √2"d" ln 2 gets arbitrarily close to formula_31 while it has formula_32 as its maximum, taken for "d" = 43. The bounds are sufficiently tight to give the exact value of "n"("d") in most of the cases. For example, for "d" = 365 these bounds imply that 22.7633 &lt; "n"(365) &lt; 23.7736 and 23 is the only integer in that range. In general, it follows from these bounds that "n"("d") always equals either formula_33 where ⌈ · ⌉ denotes the ceiling function. The formula formula_34 holds for 73% of all integers d. The formula formula_35 holds for almost all d, i.e., for a set of integers d with asymptotic density 1. The formula formula_36 holds for all "d" ≤, but it is conjectured that there are infinitely many counterexamples to this formula. The formula formula_37 holds for all "d" ≤, and it is conjectured that this formula holds for all d. More than two people sharing a birthday. It is possible to extend the problem to ask how many people in a group are necessary for there to be a greater than 50% probability that at least 3, 4, 5, etc. of the group share the same birthday. The first few values are as follows: &gt;50% probability of 3 people sharing a birthday - 88 people; &gt;50% probability of 4 people sharing a birthday - 187 people (sequence in the OEIS). Probability of a shared birthday (collision). The birthday problem can be generalized as follows: Given n random integers drawn from a discrete uniform distribution with range [1,"d"], what is the probability "p"("n"; "d") that at least two numbers are the same? ("d" = 365 gives the usual birthday problem.) The generic results can be derived using the same arguments given above. formula_38 Conversely, if "n"("p"; "d") denotes the number of random integers drawn from [1,"d"] to obtain a probability p that at least two numbers are the same, then formula_39 The birthday problem in this more generic sense applies to hash functions: the expected number of "N"-bit hashes that can be generated before getting a collision is not 2^"N", but rather only 2^("N"/2). This is exploited by birthday attacks on cryptographic hash functions and is the reason why a small number of collisions in a hash table are, for all practical purposes, inevitable. The theory behind the birthday problem was used by Zoe Schnabel under the name of capture-recapture statistics to estimate the size of fish population in lakes.
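The generic collision probability p(n; d) and the threshold n(d) can be computed the same way for any d, which also illustrates the hash-function rule of thumb just mentioned. A Python sketch (function names are ours):

```python
import math

def p_collision(n, d):
    """Probability that at least two of n uniform draws from d values coincide."""
    prob_distinct = 1.0
    for k in range(1, n):
        prob_distinct *= 1 - k / d
    return 1 - prob_distinct

def n_of_d(d):
    """Smallest n giving at least a 50% chance of a coincidence among d values."""
    prob_distinct, n = 1.0, 1
    while prob_distinct > 0.5:
        prob_distinct *= 1 - n / d
        n += 1
    return n

print(p_collision(23, 365))                                # ~0.507
for d in (365, 10**6, 2**32):
    print(d, n_of_d(d), math.ceil(math.sqrt(2 * d * math.log(2))))
# For a 32-bit hash space (d = 2**32), on the order of 77,000 values already give
# even odds of a collision, illustrating the 2^(N/2) rule of thumb for birthday attacks.
```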
Generalization to multiple types of people. The basic problem considers all trials to be of one "type". The birthday problem has been generalized to consider an arbitrary number of types. In the simplest extension there are two types of people, say m men and n women, and the problem becomes characterizing the probability of a shared birthday between at least one man and one woman. (Shared birthdays between two men or two women do not count.) The probability of no shared birthdays here is formula_40 where "d" = 365 and "S"2 are Stirling numbers of the second kind. Consequently, the desired probability is 1 − "p"0. This variation of the birthday problem is interesting because there is not a unique solution for the total number of people "m" + "n". For example, the usual 50% probability value is realized for both a 32-member group of 16 men and 16 women and a 49-member group of 43 women and 6 men. Other birthday problems. First match. A related question is, as people enter a room one at a time, which one is most likely to be the first to have the same birthday as someone already in the room? That is, for what n is "p"("n") − "p"("n" − 1) maximum? The answer is 20: if there is a prize for first match, the best position in line is 20th. Same birthday as you. In the birthday problem, neither of the two people is chosen in advance. By contrast, the probability "q"("n") that "at least one other person" in a room of n other people has the same birthday as a "particular" person (for example, you) is given by formula_41 and for general d by formula_42 In the standard case of "d" = 365, substituting "n" = 23 gives about 6.1%, which is less than 1 chance in 16. For a greater than 50% chance that "at least" one other person in a roomful of n people has the same birthday as "you", n would need to be at least 253. This number is significantly higher than 182.5: the reason is that it is likely that there are some birthday matches among the other people in the room. Number of people with a shared birthday. For any one person in a group of "n" people the probability that he or she shares his or her birthday with someone else is formula_43, as explained above. The expected number of people with a shared (non-unique) birthday can now be calculated easily by multiplying that probability by the number of people ("n"), so it is: formula_44 (This multiplication can be done this way because of the linearity of the expected value of indicator variables). This implies that the expected number of people with a non-shared (unique) birthday is: formula_45 Similar formulas can be derived for the expected number of people who share with three, four, etc. other people. Number of people until every birthday is achieved. The expected number of people needed until every birthday is achieved is called the Coupon collector's problem. It can be calculated by "nH""n", where "H""n" is the nth harmonic number. For 365 possible dates (the birthday problem), the answer is about 2365. Near matches. Another generalization is to ask for the probability of finding at least one pair in a group of n people with birthdays within k calendar days of each other, if there are d equally likely birthdays. formula_46 The number of people required so that the probability that some pair will have a birthday separated by k days or fewer will be higher than 50% is given in the following table: Thus in a group of just seven random people, it is more likely than not that two of them will have a birthday within a week of each other.
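The formulas in the preceding paragraphs, for the chance that someone shares your birthday and for the expected number of people with a non-unique birthday, are simple to evaluate. A Python sketch (function names are ours):

```python
def q_same_as_you(n, d=365):
    """Probability that at least one of n other people shares your birthday."""
    return 1 - ((d - 1) / d) ** n

def expected_sharing(n, d=365):
    """Expected number of people, out of n, whose birthday is not unique."""
    return n * (1 - ((d - 1) / d) ** (n - 1))

print(round(q_same_as_you(23), 3))      # ~0.061: about 6.1%, as quoted above
print(round(q_same_as_you(253), 3))     # ~0.5: 253 people are needed for even odds
print(round(expected_sharing(23), 2))   # ~1.35 people expected to share a birthday
print(round(expected_sharing(100), 1))  # ~23.8
```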
Number of days with a certain number of birthdays. Number of days with at least one birthday. The expected number of different birthdays, i.e. the number of days that are at least one person's birthday, is: formula_47 This follows from the expected number of days that are no one's birthday: formula_48 which follows from the probability that a particular day is no one's birthday, (("d" − 1)/"d")^"n", easily summed because of the linearity of the expected value. For instance, with d = 365, you should expect about 21 different birthdays when there are 22 people, or 46 different birthdays when there are 50 people. When there are 1000 people, there will be around 341 different birthdays (24 unclaimed birthdays). Number of days with at least two birthdays. The above can be generalized from the distribution of the number of people with their birthday on any particular day, which is a Binomial distribution with probability 1/"d". Multiplying the relevant probability by d will then give the expected number of days. For example, the expected number of days which are shared; i.e. which are at least two (i.e. not zero and not one) people's birthday is: formula_49 Number of people who repeat a birthday. The probability that the kth integer randomly chosen from [1,"d"] will repeat at least one previous choice equals "q"("k" − 1; "d") above. The expected total number of times a selection will repeat a previous selection as n such integers are chosen equals formula_50 This can be seen to equal the number of people minus the expected number of different birthdays. Average number of people to get at least one shared birthday. In an alternative formulation of the birthday problem, one asks the "average" number of people required to find a pair with the same birthday. If we consider the probability function Pr["n" people have at least one shared birthday], this "average" is determining the mean of the distribution, as opposed to the customary formulation, which asks for the median. The problem is relevant to several hashing algorithms analyzed by Donald Knuth in his book "The Art of Computer Programming". It may be shown that if one samples uniformly, with replacement, from a population of size "M", the number of trials required for the first repeated sampling of "some" individual has expected value 1 + "Q"("M"), where formula_51 The function formula_52 has been studied by Srinivasa Ramanujan and has asymptotic expansion: formula_53 With "M" = 365 days in a year, the average number of people required to find a pair with the same birthday is "n" = 1 + "Q"("M") ≈ 24.61659, somewhat more than 23, the number required for a 50% chance. In the best case, two people will suffice; at worst, the maximum possible number of "M" + 1 = 366 people is needed; but on average, only 25 people are required. An analysis using indicator random variables can provide a simpler but approximate analysis of this problem. For each pair ("i", "j") for k people in a room, we define the indicator random variable "Xij", for formula_54, by formula_55 formula_56 Let "X" be a random variable counting the pairs of individuals with the same birthday. formula_57 formula_58 For "n" = 365, if "k" = 28, the expected number of pairs of individuals with the same birthday is 28 × 27/(2 × 365) ≈ 1.0356. Therefore, we can expect at least one matching pair with at least 28 people.
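The expected number of distinct birthdays and Ramanujan's Q-function can be evaluated directly, reproducing the figures quoted above. A Python sketch (function names are ours):

```python
def expected_distinct(n, d=365):
    """Expected number of distinct birthdays among n people."""
    return d * (1 - ((d - 1) / d) ** n)

def Q(M):
    """Ramanujan's Q-function: sum over k of M! / ((M - k)! * M**k)."""
    total, term = 0.0, 1.0
    for k in range(1, M + 1):
        term *= (M - k + 1) / M    # term is now M!/((M-k)! M^k)
        total += term
    return total

print(round(expected_distinct(22), 1))    # ~21.4 distinct birthdays for 22 people
print(round(expected_distinct(1000)))     # ~341, leaving about 24 days unclaimed
print(round(1 + Q(365), 5))               # ~24.61659: mean group size until a repeat
```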
In the 2014 FIFA World Cup, each of the 32 squads had 23 players. An analysis of the official squad lists suggested that 16 squads had pairs of players sharing birthdays, and of these 5 squads had two pairs: Argentina, France, Iran, South Korea and Switzerland each had two pairs, while Australia, Bosnia and Herzegovina, Brazil, Cameroon, Colombia, Honduras, Netherlands, Nigeria, Russia, Spain and the USA each had one pair. Voracek, Tran and Formann showed that the majority of people markedly overestimate the number of people that is necessary to achieve a given probability of people having the same birthday, and markedly underestimate the probability of people having the same birthday when a specific sample size is given. Further results showed that psychology students and women did better on the task than casino visitors/personnel or men, but were less confident about their estimates. Reverse problem. The reverse problem is to find, for a fixed probability p, the greatest n for which the probability "p"("n") is smaller than the given p, or the smallest n for which the probability "p"("n") is greater than the given p. Taking the above formula for "d" = 365, one has formula_59 The following table gives some sample calculations. Some values falling outside the bounds have been colored to show that the approximation is not always exact. Partition problem. A related problem is the partition problem, a variant of the knapsack problem from operations research. Some weights are put on a balance scale; each weight is an integer number of grams randomly chosen between one gram and one million grams (one tonne). The question is whether one can usually (that is, with probability close to 1) transfer the weights between the left and right arms to balance the scale. (In case the sum of all the weights is an odd number of grams, a discrepancy of one gram is allowed.) If there are only two or three weights, the answer is very clearly no; although there are some combinations which work, the majority of randomly selected combinations of three weights do not. If there are very many weights, the answer is clearly yes. The question is, how many are just sufficient? That is, what is the number of weights such that it is equally likely for it to be possible to balance them as it is to be impossible? Often, people's intuition is that the answer is much larger. Most people's intuition is that it is in the thousands or tens of thousands, while others feel it should at least be in the hundreds. The correct answer is 23. The reason is that the correct comparison is to the number of partitions of the weights into left and right. There are 2^("N" − 1) different partitions for "N" weights, and the left sum minus the right sum can be thought of as a new random quantity for each partition. The distribution of the sum of weights is approximately Gaussian, with a peak at 500000"N" and width 1000000√"N", so that when 2^("N" − 1) is approximately equal to 1000000√"N" the transition occurs. 2^(23 − 1) is about 4 million, while the width of the distribution is only 5 million. In fiction. Arthur C. Clarke's 1961 novel "A Fall of Moondust" contains a section where the main characters, trapped underground for an indefinite amount of time, are celebrating a birthday and find themselves discussing the validity of the birthday problem. As stated by a physicist passenger: "If you have a group of more than twenty-four people, the odds are better than even that two of them have the same birthday." Eventually, out of 22 present, it is revealed that two characters share the same birthday, May 23. Notes.
&lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V_{nr}" }, { "math_id": 1, "text": "\\left \\{ \\left \\{01/02,05/20\\right \\},\\left \\{05/20,01/02\\right \\},\\left \\{10/02,08/04\\right\\},...\\right \\}" }, { "math_id": 2, "text": "V_{t}" }, { "math_id": 3, "text": "\\left \\{ \\left \\{01/02,01/02\\right \\},\\left \\{10/02,08/04\\right \\},...\\right \\}" }, { "math_id": 4, "text": "\\begin{align} V_{nr} &= \\frac{n!}{(n-k)!} = \\frac{365!}{(365-23)!} \\\\[8pt] V_t &= n^k = 365^{23} \\\\[8pt] P(A) &= \\frac{V_{nr}}{V_t} \\approx 0.492703 \\\\[8pt] P(B) &= 1 - P(A) \\approx 1 - 0.492703 \\approx 0.507297 \\quad (50.7297\\%)\\end{align}" }, { "math_id": 5, "text": " \\begin{align} \\bar p(n) &= 1 \\times \\left(1-\\frac{1}{365}\\right) \\times \\left(1-\\frac{2}{365}\\right) \\times \\cdots \\times \\left(1-\\frac{n-1}{365}\\right) \\\\[6pt] &= \\frac{ 365 \\times 364 \\times \\cdots \\times (365-n+1) }{ 365^n } \\\\[6pt] &= \\frac{ 365! }{ 365^n (365-n)!} = \\frac{n!\\cdot\\binom{365}{n}}{365^n} = \\frac{_{365}P_n}{365^n}\\end{align} " }, { "math_id": 6, "text": " p(n) = 1 - \\bar p(n). " }, { "math_id": 7, "text": " e^x = 1 + x + \\frac{x^2}{2!}+\\cdots " }, { "math_id": 8, "text": "|x| \\ll 1" }, { "math_id": 9, "text": " e^x \\approx 1 + x." }, { "math_id": 10, "text": " e^{-a/365} \\approx 1 - \\frac{a}{365}. " }, { "math_id": 11, "text": " e^{-1/365} \\approx 1 - \\frac{1}{365}. " }, { "math_id": 12, "text": "\n\\begin{align}\n\\bar p(n) & \\approx 1 \\cdot e^{-1/365} \\cdot e^{-2/365} \\cdots e^{-(n-1)/365} \\\\[6pt]\n& = e^{-\\big(1+2+ \\,\\cdots\\, +(n-1)\\big)/365} \\\\[6pt]\n& = e^{-\\frac{n(n-1)/2}{365}} = e^{-\\frac{n(n-1)}{730}}.\n\\end{align}\n" }, { "math_id": 13, "text": " p(n) = 1-\\bar p(n) \\approx 1 - e^{-\\frac{n(n-1)}{730}}." }, { "math_id": 14, "text": "p(n)\\approx 1-e^{-\\frac{n^2}{730}}," }, { "math_id": 15, "text": "\\begin{align}\n p(n, d) & \\approx 1-e^{-\\frac{n(n-1)}{2d}} \\\\[6pt]\n& \\approx 1-e^{-\\frac{n^2}{2d}}.\n\\end{align}" }, { "math_id": 16, "text": "\\bar p(n) \\approx \\left(\\frac{364}{365}\\right)^\\binom{n}{2}." }, { "math_id": 17, "text": "p(n) \\approx 1 - \\left(\\frac{364}{365}\\right)^\\binom{n}{2}." }, { "math_id": 18, "text": "p(23) \\approx 1 - \\left(\\frac{364}{365}\\right)^\\binom{23}{2} = 1 - \\left(\\frac{364}{365}\\right)^{253} \\approx 0.500477 ." }, { "math_id": 19, "text": "\\operatorname{Poi}\\left(\\frac{\\binom{23}{2}}{365}\\right) =\\operatorname{Poi}\\left(\\frac{253}{365}\\right) \\approx \\operatorname{Poi}(0.6932)" }, { "math_id": 20, "text": "\\Pr(X>0)=1-\\Pr(X=0) \\approx 1-e^{-0.6932} \\approx 1-0.499998=0.500002." }, { "math_id": 21, "text": "p(n,d) \\approx \\frac{n^2}{2d}" }, { "math_id": 22, "text": "n \\approx \\sqrt { 2d \\times p(n)}" }, { "math_id": 23, "text": "n \\approx \\sqrt{ 2 \\times 365 \\times \\tfrac12} = \\sqrt{365} \\approx 19" }, { "math_id": 24, "text": "n \\geq \\tfrac{1}{2} + \\sqrt{\\tfrac{1}{4} + 2 \\times \\ln(2) \\times 365} = 22.999943." }, { "math_id": 25, "text": "1-p(n) = \\bar p(n) = \\prod_{k=1}^{n-1}\\left(1-\\frac{k}{365}\\right) ." }, { "math_id": 26, "text": "\\bar p(n) = \\prod_{k=1}^{n-1}\\left(1-\\frac{k}{365}\\right) < \\prod_{k=1}^{n-1}\\left(e^{-\\frac{k}{365}}\\right) = e^{-\\frac{n(n-1)}{730}} ." }, { "math_id": 27, "text": " e^{-\\frac{n(n-1)}{730}} < \\frac{1}{2}" }, { "math_id": 28, "text": "n^2-n > 730 \\ln 2 ." }, { "math_id": 29, "text": "1-\\left(1-\\frac{1}{d}\\right)\\left(1-\\frac{2}{d}\\right)\\cdots\\left(1-\\frac{n-1}{d}\\right)\\geq \\frac{1}{2}." 
}, { "math_id": 30, "text": "\\frac{3-2\\ln2}{6}<n(d)-\\sqrt{2d\\ln2}\\leq 9-\\sqrt{86\\ln2}." }, { "math_id": 31, "text": "\\frac{3-2\\ln2}{6} \\approx 0.27," }, { "math_id": 32, "text": "9-\\sqrt{86\\ln2}\\approx 1.28" }, { "math_id": 33, "text": "\\left\\lceil\\sqrt{2d\\ln2}\\,\\right\\rceil \\quad\\text{or}\\quad \\left\\lceil\\sqrt{2d\\ln2}\\,\\right\\rceil+1" }, { "math_id": 34, "text": "n(d) = \\left\\lceil\\sqrt{2d\\ln2}\\,\\right\\rceil" }, { "math_id": 35, "text": "n(d) = \\left\\lceil\\sqrt{2d\\ln2}+\\frac{3-2\\ln2}{6}\\right\\rceil" }, { "math_id": 36, "text": "n(d)=\\left\\lceil \\sqrt{2d\\ln2}+\\frac{3-2\\ln2}{6}+\\frac{9-4(\\ln2)^2}{72\\sqrt{2d\\ln2}}\\right\\rceil" }, { "math_id": 37, "text": "n(d)=\\left\\lceil \\sqrt{2d\\ln2}+\\frac{3-2\\ln2}{6}+\\frac{9-4(\\ln2)^2}{72\\sqrt{2d\\ln2}}-\\frac{2(\\ln2)^2}{135d}\\right\\rceil" }, { "math_id": 38, "text": "\\begin{align}\np(n;d) &= \\begin{cases} 1-\\displaystyle\\prod_{k=1}^{n-1}\\left(1-\\frac{k}{d}\\right) & n\\le d \\\\ 1 & n > d \\end{cases} \\\\[8px]\n& \\approx 1 - e^{-\\frac{n(n-1)}{2d}} \\\\\n& \\approx 1 - \\left( \\frac{d-1}{d} \\right)^\\frac{n(n-1)}{2}\n\\end{align}" }, { "math_id": 39, "text": "n(p;d)\\approx \\sqrt{2d \\cdot \\ln\\left(\\frac{1}{1-p}\\right)}." }, { "math_id": 40, "text": "p_0 =\\frac{1}{d^{m+n}} \\sum_{i=1}^m \\sum_{j=1}^n S_2(m,i) S_2(n,j) \\prod_{k=0}^{i+j-1} d - k" }, { "math_id": 41, "text": " q(n) = 1 - \\left( \\frac{365-1}{365} \\right)^n " }, { "math_id": 42, "text": " q(n;d) = 1 - \\left( \\frac{d-1}{d} \\right)^n. " }, { "math_id": 43, "text": " q(n-1;d) " }, { "math_id": 44, "text": " n\\left(1 - \\left( \\frac{d-1}{d} \\right)^{n-1}\\right) " }, { "math_id": 45, "text": " n \\left( \\frac{d-1}{d} \\right)^{n-1} " }, { "math_id": 46, "text": " \\begin{align} p(n,k,d) &= 1 - \\frac{ (d - nk -1)! }{ d^{n-1} \\bigl(d - n(k+1)\\bigr)!}\\end{align} " }, { "math_id": 47, "text": "d - d \\left (\\frac {d-1} {d} \\right )^n " }, { "math_id": 48, "text": "d \\left (\\frac {d-1} {d} \\right )^n " }, { "math_id": 49, "text": "d - d \\left (\\frac {d-1} {d} \\right )^n - d \\cdot \\binom{n}{1} \\left (\\frac {1} {d} \\right )^1\\left (\\frac {d-1} {d} \\right )^{n-1} = d - d \\left (\\frac {d-1} {d} \\right )^n - n \\left (\\frac {d-1} {d} \\right )^{n-1} " }, { "math_id": 50, "text": "\\sum_{k=1}^n q(k-1;d) = n - d + d \\left (\\frac {d-1} {d} \\right )^n" }, { "math_id": 51, "text": "Q(M)=\\sum_{k=1}^M \\frac{M!}{(M-k)! M^k}." }, { "math_id": 52, "text": "Q(M)= 1 + \\frac{M-1}{M} + \\frac{(M-1)(M-2)}{M^2} + \\cdots + \\frac{(M-1)(M-2) \\cdots 1}{M^{M-1}}" }, { "math_id": 53, "text": "Q(M)\\sim\\sqrt{\\frac{\\pi M}{2}}-\\frac{1}{3}+\\frac{1}{12}\\sqrt{\\frac{\\pi}{2M}}-\\frac{4}{135M}+\\cdots." 
}, { "math_id": 54, "text": "1\\leq i \\leq j\\leq k" }, { "math_id": 55, "text": "\\begin{alignat}{2}\nX_{ij} & \n= I \\{ \\text{person }i\\text{ and person }j\\text{ have the same birthday} \\} \\\\[10pt] & \n= \\begin{cases} \n1, & \\text{if person }i\\text{ and person }j\\text{ have the same birthday;} \\\\ \n0, & \\text{otherwise.}\n\\end{cases}\n\\end{alignat}" }, { "math_id": 56, "text": "\\begin{alignat}{2}\nE[X_{ij}] & \n= \\Pr \\{ \\text{person }i\\text{ and person }j\\text{ have the same birthday} \\} = \\frac{1}{n}.\n\\end{alignat}" }, { "math_id": 57, "text": "X =\\sum_{i=1}^k \\sum_{j=i+1}^k X_{ij}" }, { "math_id": 58, "text": "\\begin{alignat}{3}\nE[X] \n& = \\sum_{i=1}^k \\sum_{j=i+1}^k E[X_{ij}]\\\\[8pt]\n& = \\binom{k}{2} \\frac{1}{n}\\\\[8pt]\n& = \\frac{k(k-1)}{2n}\n\\end{alignat}" }, { "math_id": 59, "text": "n(p;365)\\approx \\sqrt{730\\ln\\left(\\frac{1}{1-p}\\right)}." } ]
https://en.wikipedia.org/wiki?curid=73242
7324284
Indexed language
Indexed languages are a class of formal languages discovered by Alfred Aho; they are described by indexed grammars and can be recognized by nested stack automata. Indexed languages are a proper subset of context-sensitive languages. They qualify as an abstract family of languages (furthermore a full AFL) and hence satisfy many closure properties. However, they are not closed under intersection or complement. The class of indexed languages is a generalization of the class of context-free languages, since indexed grammars can describe many of the nonlocal constraints occurring in natural languages. Gerald Gazdar (1988) and Vijay-Shanker (1987) introduced a mildly context-sensitive language class now known as linear indexed grammars (LIG). Linear indexed grammars have additional restrictions relative to IG. LIGs are weakly equivalent (generate the same language class) to tree adjoining grammars. Examples. The following languages are indexed, but are not context-free: formula_0 formula_1 These two languages are also indexed, but are not even mildly context sensitive under Gazdar's characterization: formula_2 formula_3 On the other hand, the following language is not indexed: formula_4 Properties. Hopcroft and Ullman tend to consider indexed languages as a "natural" class, since they are generated by several formalisms, such as: Hayashi generalized the pumping lemma to indexed grammars. Conversely, Gilman gives a "shrinking lemma" for indexed languages. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
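Although none of these example languages is context-free, they are easy to recognize directly, which is one way to get a concrete feel for them. The following Python sketch (illustrative only; function names are ours) tests membership in two of them, formula_0 and formula_3:

```python
import re

def in_anbncndn(s: str) -> bool:
    """Membership test for { a^n b^n c^n d^n : n >= 1 }."""
    m = re.fullmatch(r"(a+)(b+)(c+)(d+)", s)
    return bool(m) and len({len(g) for g in m.groups()}) == 1

def in_www(s: str) -> bool:
    """Membership test for { www : w in {a,b}+ }."""
    if len(s) % 3 or not s or set(s) - {"a", "b"}:
        return False
    k = len(s) // 3
    return s[:k] == s[k:2*k] == s[2*k:]

print(in_anbncndn("aabbccdd"), in_anbncndn("aabbccd"))   # True False
print(in_www("abaabaaba"), in_www("abab"))               # True False
```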
[ { "math_id": 0, "text": " \\{a^n b^n c^n d^n| n \\geq 1 \\} " }, { "math_id": 1, "text": " \\{a^n b^m c^n d^m | m,n \\geq 0 \\}" }, { "math_id": 2, "text": " \\{a^{2^{n}} | n \\geq 0 \\}" }, { "math_id": 3, "text": " \\{www | w \\in \\{a,b\\}^+ \\}" }, { "math_id": 4, "text": "\\{(a b^n)^n | n \\geq 0 \\}" } ]
https://en.wikipedia.org/wiki?curid=7324284
732446
Lorentz factor
Quantity in relativistic physics The Lorentz factor or Lorentz term (also known as the gamma factor) is a quantity expressing how much the measurements of time, length, and other physical properties change for an object while it moves. The expression appears in several equations in special relativity, and it arises in derivations of the Lorentz transformations. The name originates from its earlier appearance in Lorentzian electrodynamics – named after the Dutch physicist Hendrik Lorentz. It is generally denoted "γ" (the Greek lowercase letter gamma). Sometimes (especially in discussion of superluminal motion) the factor is written as Γ (Greek uppercase gamma) rather than "γ". Definition. The Lorentz factor "γ" is defined as formula_0 where: This is the most frequently used form in practice, though not the only one (see below for alternative forms). To complement the definition, some authors define the reciprocal formula_1; see velocity addition formula. Occurrence. Following is a list of formulae from special relativity which use γ as a shorthand: Corollaries of the above transformations are the results: Applying conservation of momentum and energy leads to these results: Numerical values. In the table below, the left-hand column shows speeds as different fractions of the speed of light (i.e. in units of c). The middle column shows the corresponding Lorentz factor, and the final column shows the reciprocal. Values in bold are exact. Alternative representations. There are other ways to write the factor. Above, velocity v was used, but related variables such as momentum and rapidity may also be convenient. Momentum. Solving the previous relativistic momentum equation for γ leads to formula_11 This form is rarely used, although it does appear in the Maxwell–Jüttner distribution. Rapidity. Applying the definition of rapidity as the hyperbolic angle formula_12: formula_13 also leads to γ (by use of hyperbolic identities): formula_14 Using the property of the Lorentz transformation, it can be shown that rapidity is additive, a useful property that velocity does not have. Thus the rapidity parameter forms a one-parameter group, a foundation for physical models. Bessel function. The Bunney identity represents the Lorentz factor in terms of an infinite series of Bessel functions: formula_15 Series expansion (velocity). The Lorentz factor has the Maclaurin series: formula_16 which is a special case of a binomial series. The approximation formula_17 may be used to calculate relativistic effects at low speeds. It holds to within 1% error for v &lt; 0.4 c (v &lt; 120,000 km/s), and to within 0.1% error for v &lt; 0.22 c (v &lt; 66,000 km/s). The truncated versions of this series also allow physicists to prove that special relativity reduces to Newtonian mechanics at low speeds. For example, in special relativity, the following two equations hold: formula_18 For formula_19 and formula_17, respectively, these reduce to their Newtonian equivalents: formula_20 The Lorentz factor equation can also be inverted to yield formula_21 This has an asymptotic form formula_22 The first two terms are occasionally used to quickly calculate velocities from large γ values. The approximation formula_23 holds to within 1% tolerance for γ &gt; 2, and to within 0.1% tolerance for γ &gt; 3.5. Applications in astronomy. 
The standard model of long-duration gamma-ray bursts (GRBs) holds that these explosions are ultra-relativistic (initial γ greater than approximately 100), which is invoked to explain the so-called "compactness" problem: absent this ultra-relativistic expansion, the ejecta would be optically thick to pair production at typical peak spectral energies of a few 100 keV, whereas the prompt emission is observed to be non-thermal. Muons, a type of subatomic particle, travel at speeds such that they have a relatively high Lorentz factor and therefore experience extreme time dilation. Since muons have a mean lifetime of just 2.2 μs, muons generated from cosmic-ray collisions high in Earth's atmosphere should be nondetectable on the ground due to their decay rate. However, roughly 10% of muons from these collisions are still detectable on the surface, thereby demonstrating the effects of time dilation on their decay rate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
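The definition and the low-speed series can be evaluated directly, which makes the quoted error thresholds easy to verify. A Python sketch (names are ours):

```python
import math

def gamma(beta):
    """Lorentz factor for a speed expressed as beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

for beta in (0.1, 0.22, 0.4, 0.9, 0.99):
    g = gamma(beta)
    approx = 1 + 0.5 * beta ** 2          # first two terms of the Maclaurin series
    print(f"beta = {beta:<5} gamma = {g:8.4f}  1 + beta^2/2 = {approx:6.4f}"
          f"  relative error = {abs(approx - g) / g:.3%}")
# At beta = 0.22 the error is about 0.1% and at beta = 0.4 about 1%, consistent
# with the ranges quoted above; by beta = 0.99 the truncated series is useless
# and the exact formula gives gamma of roughly 7.09.
```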
[ { "math_id": 0, "text": "\\gamma = \\frac{1}{\\sqrt{1 - \\frac{v^2}{c^2}}} =\\sqrt{\\frac{c^2}{c^2-v^2}} = \\frac{c}{\\sqrt{c^2-v^2}} = \\frac{1}{\\sqrt{1 - \\beta^2}} = \\frac{dt}{d\\tau} ," }, { "math_id": 1, "text": "\\alpha = \\frac{1}{\\gamma} = \\sqrt{1- \\frac{v^2}{c^2}} \\ = \\sqrt{1- {\\beta}^2} ;" }, { "math_id": 2, "text": "\\begin{align}\n t' &= \\gamma \\left( t - \\tfrac{vx}{c^2} \\right ), \\\\[1ex]\n x' &= \\gamma \\left( x - vt \\right ).\n\\end{align}" }, { "math_id": 3, "text": "\\Delta t' = \\gamma \\Delta t." }, { "math_id": 4, "text": "\\Delta x' = \\Delta x/\\gamma." }, { "math_id": 5, "text": "\\gamma" }, { "math_id": 6, "text": "m = \\gamma m_0." }, { "math_id": 7, "text": "\\vec p = m \\vec v = \\gamma m_0 \\vec v." }, { "math_id": 8, "text": "E_k = E - E_0 = (\\gamma - 1) m_0 c^2" }, { "math_id": 9, "text": "\\tfrac{v}{c}" }, { "math_id": 10, "text": "\\lim_{v/c\\to 0}E_k=\\tfrac{1}{2}m_0v^2" }, { "math_id": 11, "text": "\\gamma = \\sqrt{1+\\left ( \\frac{p}{m_0 c} \\right )^2 } \\,." }, { "math_id": 12, "text": "\\varphi" }, { "math_id": 13, "text": " \\tanh \\varphi = \\beta" }, { "math_id": 14, "text": " \\gamma = \\cosh \\varphi = \\frac{1}{\\sqrt{1 - \\tanh^2 \\varphi}} = \\frac{1}{\\sqrt{1 - \\beta^2}}." }, { "math_id": 15, "text": " \\sum_{m=1}^\\infty \\left(J^2_{m-1}(m\\beta)+J^2_{m+1}(m\\beta)\\right)=\\frac{1}{\\sqrt{1-\\beta^2}}." }, { "math_id": 16, "text": "\\begin{align}\n\\gamma & = \\dfrac{1}{\\sqrt{1 - \\beta^2}} \\\\[1ex]\n& = \\sum_{n=0}^{\\infty} \\beta^{2n}\\prod_{k=1}^n \\left(\\dfrac{2k - 1}{2k}\\right) \\\\[1ex]\n& = 1 + \\tfrac12 \\beta^2 + \\tfrac38 \\beta^4 + \\tfrac{5}{16} \\beta^6 + \\tfrac{35}{128} \\beta^8 + \\tfrac{63}{256} \\beta^{10} + \\cdots ,\n\\end{align}" }, { "math_id": 17, "text": "\\gamma \\approx 1 + \\frac{1}{2}\\beta^2" }, { "math_id": 18, "text": "\\begin{align}\n\\mathbf p & = \\gamma m \\mathbf v, \\\\\nE & = \\gamma m c^2.\n\\end{align}" }, { "math_id": 19, "text": "\\gamma \\approx 1" }, { "math_id": 20, "text": "\\begin{align}\n\\mathbf p & = m \\mathbf v, \\\\\nE & = m c^2 + \\tfrac12 m v^2.\n\\end{align}" }, { "math_id": 21, "text": "\\beta = \\sqrt{1 - \\frac{1}{\\gamma^2}} ." }, { "math_id": 22, "text": "\\beta = 1 - \\tfrac12 \\gamma^{-2} - \\tfrac18 \\gamma^{-4} - \\tfrac{1}{16} \\gamma^{-6} - \\tfrac{5}{128} \\gamma^{-8} + \\cdots\\,." }, { "math_id": 23, "text": "\\beta \\approx 1 - \\frac{1}{2}\\gamma^{-2}" } ]
https://en.wikipedia.org/wiki?curid=732446
73245839
Bogomolov–Sommese vanishing theorem
Theorem in algebraic geometry In algebraic geometry, the Bogomolov–Sommese vanishing theorem is a result related to the Kodaira–Iitaka dimension. It is named after Fedor Bogomolov and Andrew Sommese. Its statement has several versions: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Bogomolov–Sommese vanishing theorem for an snc pair: Let "X" be a projective manifold (smooth projective variety), "D" a simple normal crossing divisor (snc divisor) and formula_0 an invertible subsheaf. Then the Kodaira–Iitaka dimension formula_1 is not greater than "p". This result is equivalent to the statement that: formula_2 for every complex projective snc pair formula_3 and every invertible sheaf formula_4 with formula_5. For this reason it is called a vanishing theorem. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Bogomolov–Sommese vanishing theorem for an lc pair: Let ("X", "D") be a log canonical pair, where "X" is projective. If formula_6 is a formula_7-Cartier reflexive subsheaf of rank one, then formula_8.
[ { "math_id": 0, "text": "A \\subseteq \\Omega ^{p} _ {X} (\\log D)" }, { "math_id": 1, "text": "\\kappa(A)" }, { "math_id": 2, "text": "H^{0}\\left(X,A^{- 1} \\otimes \\Omega ^{p}_{X} (\\log D) \\right) = 0" }, { "math_id": 3, "text": "(X, D)" }, { "math_id": 4, "text": "A \\in \\mathrm{Pic}(X)" }, { "math_id": 5, "text": "\\kappa(A) > p" }, { "math_id": 6, "text": "A \\subseteq\\Omega ^{[p]}_{X} (\\log \\lfloor D \\rfloor)" }, { "math_id": 7, "text": "\\mathbb{Q}" }, { "math_id": 8, "text": "\\kappa(A) \\leq p" } ]
https://en.wikipedia.org/wiki?curid=73245839
73248112
Large language model
Type of artificial neural network A large language model (LLM) is a computational model capable of language generation or other natural language processing tasks. As language models, LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a self-supervised and semi-supervised training process. The largest and most capable LLMs, as of 2024, are artificial neural networks built with a decoder-only transformer-based architecture, which enables efficient processing and generation of large-scale text data. Modern models can be fine-tuned for specific tasks or can be guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. Some notable LLMs are OpenAI's GPT series of models (e.g., GPT-3.5, GPT-4 and GPT-4o; used in ChatGPT and Microsoft Copilot), Google's Gemini (which is used in the chatbot of the same name), Meta's LLaMA family of models, IBM's Granite models initially released with Watsonx, Anthropic's Claude models, and Mistral AI's models. History. Before 2017, there were a few language models that were large compared to the capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001, trained on 0.3 billion words, achieved the then-state-of-the-art perplexity. In the 2000s, as Internet use became prevalent, some researchers constructed Internet-scale language datasets ("web as corpus") on which they trained statistical language models. In 2009, statistical language models dominated over symbolic language models in most language processing tasks, as they could usefully ingest large datasets. After neural networks became dominant in image processing around 2012, they were applied to language modelling as well. Google converted its translation service to Neural Machine Translation in 2016; because this predated the transformer, it was implemented with seq2seq deep LSTM networks. At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". The paper's goal was to improve upon 2014 seq2seq technology, and it was based mainly on the attention mechanism developed by Bahdanau et al. in 2014. The following year, in 2018, BERT was introduced and quickly became "ubiquitous". Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention, because OpenAI at first deemed it too powerful to release publicly out of fear of malicious use. GPT-3 in 2020 went a step further and, as of 2024, is available only via API, with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing, browser-based ChatGPT that captured the imagination of the general public and caused considerable media hype and online buzz. The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities. OpenAI did not reveal the high-level architecture or the number of parameters of GPT-4. Competing language models have for the most part been attempting to equal the GPT series, at least in terms of the number of parameters.
Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7b have the more permissive Apache License. As of  2024[ [update]], The Instruction fine tuned variant of the Llama 3 70 billion parameter model is the most powerful open LLM according to the LMSYS Chatbot Arena Leaderboard, being more powerful than GPT-3.5 but not as powerful as GPT-4. As of 2024, the largest and most capable models are all based on the Transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model). Dataset preprocessing. Tokenization. Because machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated to the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as codice_0 for masked-out token (as used in BERT), and codice_1 ("unknown") for characters not appearing in the vocabulary. For example, the BPE tokenizer used by GPT-3 (Legacy) would split as Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset. BPE. As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of "n"-grams (i.e. initial set of uni-grams). Successively the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) "n"-grams that most frequently occur together are then again merged into even lengthier "n"-gram, until a vocabulary of prescribed size is obtained (in case of GPT-3, the size is 50257). After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial-set of uni-grams. Problems. A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. An average word in another language encoded by such an English-optimized tokenizer is however split into suboptimal amount of tokens. GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English. Greedy tokenization also causes subtle problems with text completion. Dataset cleaning. In the context of training LLMs, datasets are typically cleaned by removing toxic passages from the dataset, discarding low-quality data, and de-duplication. Cleaned datasets can increase training efficiency and lead to improved downstream performance. A trained LLM can be used to clean datasets for training a further LLM. With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. 
LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it). Synthetic data. Training of largest language models might need more linguistic data than naturally available, or that the naturally occurring data is of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM. Training and architecture. Reinforcement learning from human feedback (RLHF). Reinforcement learning from human feedback (RLHF) through algorithms, such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences. Instruction tuning. Using "self-instruct" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction "Write an essay about the main themes represented in "Hamlet"," an initial naive completion might be "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay," based on the frequency of this textual sequence in the corpus. Mixture of experts. The largest LLM may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 trillion parameters. Prompt engineering, attention mechanism, and context window. Most results previously achievable only by (costly) fine-tuning, can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window). In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M parameter sized) GPT-2 model has had twelve attention heads and a context window of only 1k token. In its medium version it has 345M parameters and contains 24 layers, each with 12 attention heads. For the training with gradient descent a batch size of 512 was utilized. The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window sized up to 1 million (context window of 10 million was also "successfully tested"). Other models with large context windows includes Anthropic's Claude 2.1, with a context window of up to 200k tokens. Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens. Length of a conversation that the model can take into account when generating its next answer is limited by the size of a context window, as well. If the length of a conversation, for example with ChatGPT, is longer than its context window, only the parts inside the context window are taken into account when generating the next answer, or the model needs to apply some algorithm to summarize the too distant parts of conversation. 
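As a minimal illustration of the context-window limit just described, the sketch below trims an over-long conversation so that only the most recent messages fitting within a fixed token budget are passed to the model. The whitespace token count and the 4096-token budget are simplifying assumptions for illustration, not details of any particular product; a real system might summarize the dropped messages instead.

```python
from typing import List

def count_tokens(text: str) -> int:
    # Stand-in for a real tokenizer (e.g. BPE); whitespace splitting is only an approximation.
    return len(text.split())

def fit_to_context(messages: List[str], budget: int = 4096) -> List[str]:
    """Keep the most recent messages whose total token count fits within the budget."""
    kept: List[str] = []
    used = 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                        # older messages are simply dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

# Example: a long conversation gets truncated to its most recent part.
history = [f"message {i}: " + "word " * 300 for i in range(50)]
window = fit_to_context(history, budget=4096)
print(len(history), "->", len(window), "messages kept")
```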
The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them are a matter of experimentation and domain-specific considerations. A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset. It can be either Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, regularization loss is also used to stabilize training. However regularization loss is usually not used during testing and evaluation. Infrastructure. Substantial infrastructure is necessary for training the largest models. Training cost. Advances in software and hardware have reduced the cost substantially since 2020, such that in 2023 training of a 12-billion-parameter LLM computational cost is 72,300 A100-GPU-hours, while in 2020 the cost of training a 1.5-billion-parameter LLM (which was two orders of magnitude smaller than the state of the art in 2020) was between $80 thousand and $1.6 million. Since 2020, large sums were invested in increasingly large models. For example, training of the GPT-2 (i.e. a 1.5-billion-parameters model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameters model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million. For Transformer-based LLM, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token. Tool use. There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calculation in its training corpus. In such cases, the LLM needs to resort to running program code that calculates the result, which can then be included in its response. Another example is 'What is the time now? It is ', where a separate program interpreter would need to execute a code to get system time on the computer, so LLM could include it in its reply. This basic strategy can be sophisticated with multiple attempts of generated programs, and other sampling strategies. Generally, in order to get an LLM to use tools, one must finetune it for tool-use. If the number of tools is finite, then finetuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call API correctly. A simpler form of tool use is retrieval-augmented generation: the augmentation of an LLM with document retrieval. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents. Agency. 
An LLM is a language model, which is not an agent as it has no goal, but it can be used as a component of an intelligent agent. Researchers have described several methods for such integrations. The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment. The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment. In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and environmental feedback it receives. The Reflexion method constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are given to the agent in the subsequent episodes. Monte Carlo tree search can use an LLM as rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as world model. For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent. Alternatively, it can propose increasingly difficult tasks for curriculum learning. Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning. LLM-powered agents can keep a long-term memory of its previous contexts, and the memory can be retrieved in the same way as Retrieval Augmented Generation. Multiple such agents can interact socially. Compression. Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics. "Post-training quantization" aims to decrease the space requirement by lowering precision of the parameters of a trained model, while preserving most of its performance. The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be done by applying different precisions to different parameters, with higher precision for particularly important parameters ("outlier weights"). See for a visual guide. While quantized models are typically frozen, and only pre-quantized models are fine-tuned, quantized models can still be fine-tuned. Multimodality. Multimodality means "having several modalities", and a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. 
There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label, visual question answering for image-text to text, and speech recognition for speech to text. A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder formula_0. Make a small multilayered perceptron formula_1, so that for any image formula_2, the post-processed vector formula_3 has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability. Flamingo demonstrated the effectiveness of the tokenization method, finetuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch. Google PaLM model was fine-tuned into a multimodal model PaLM-E using the tokenization method, and applied to robotic control. LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs, and video inputs. GPT-4 can use both text and image as inputs (although the vision component was not released to the public until GPT-4V); Google DeepMind's Gemini is also multimodal. Properties. Scaling laws. The following four hyper-parameters characterize an LLM: They are related by simple statistical laws, called "scaling laws". One particular scaling law ("Chinchilla scaling") for LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that: formula_5 where the variables are and the statistical hyper-parameters are Emergent abilities. Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)" in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities". They arise from the complex interaction of the model's components and are not explicitly programmed or designed. The most intriguing among emergent abilities is in-context learning from example demonstrations. In-context learning is involved in tasks, such as: Schaeffer "et. al." argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well. Let formula_6 be the number of parameter count, and formula_2 be the performance of the model. Interpretation. Large language models by themselves are "black boxes", and it is not clear how they can perform linguistic tasks. There are several methods for understanding how LLM work. Mechanistic interpretability aims to reverse-engineer LLM by discovering symbolic algorithms that approximate the inference performed by LLM. One example is Othello-GPT, where a small Transformer is trained to predict legal Othello moves. 
It is found that there is a linear representation of Othello board, and modifying the representation changes the predicted legal Othello moves in the correct way. In another example, a small Transformer is trained on Karel programs. Similar to the Othello-GPT example, there is a linear representation of Karel program semantics, and modifying the representation changes output in the correct way. The model also generates correct programs that are on average shorter than those in the training set. In another example, the authors trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used discrete Fourier transform. Understanding and intelligence. NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense". Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not "really" intelligent?" Some researchers characterize LLMs as "alien intelligence". For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding." In contrast, some proponents of the "LLMs lack understanding" school believe that existing LLMs are "simply remixing and recombining existing writing", a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability. For example, GPT-4 has natural deficits in planning and in real-time learning. Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination". Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input. Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate". The matter of LLM's exhibiting intelligence or understanding has two main aspects – the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human like language. These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented Neural Theory of Language (NTL) as a computational basis for using language as a model of learning tasks and understanding. 
The NTL Model outlines how specific neural structures of the human brain shape the nature of thought and language and in turn what are the computational properties of such neural systems that can be applied to model thought and language in a computer system. After a framework for modeling language in a computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book titled "The Language Myth: Why Language Is Not An Instinct", British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human like language. Evaluation. Perplexity. The most commonly used measure of a language model's performance is its perplexity on a given text corpus. Perplexity is a measure of how well a model is able to predict the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. Mathematically, perplexity is defined as the exponential of the average negative log likelihood per token:formula_11here formula_4 is the number of tokens in the text corpus, and "context for token formula_12" depends on the specific type of LLM used. If the LLM is autoregressive, then "context for token formula_12" is the segment of text appearing before token formula_12. If the LLM is masked, then "context for token formula_12" is the segment of text surrounding token formula_12. Because language models may overfit to their training data, models are usually evaluated by their perplexity on a test set of unseen data. This presents particular challenges for the evaluation of large language models. As they are trained on increasingly large corpora of text largely scraped from the web, it becomes increasingly likely that models' training data inadvertently includes portions of any given test set. BPW, BPC, and BPT. In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon. This relationship is mathematically expressed as formula_13. Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization. Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different Large Language Models (LLMs), BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word. In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions. Task-specific datasets and benchmarks. A large number of testing datasets and benchmarks have also been developed to evaluate the capabilities of language models on more specific downstream tasks. Tests may be designed to evaluate a variety of capabilities, including general knowledge, commonsense reasoning, and mathematical problem-solving. 
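Before turning to specific benchmark datasets, the perplexity and bits-per-token measures described above can be made concrete with a short sketch. The per-token probabilities below are made-up numbers, not the output of any real model, and the 1.3 tokens-per-word figure used for the bits-per-word conversion is an assumed value for illustration.

```python
import math
from typing import List

def perplexity(token_probs: List[float]) -> float:
    """Exponential of the average negative log-likelihood per token."""
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(nll)

def bits_per_token(token_probs: List[float]) -> float:
    """Entropy in bits per token; equals log2(perplexity)."""
    return math.log2(perplexity(token_probs))

# Hypothetical probabilities a model assigned to each observed token.
probs = [0.25, 0.5, 0.1, 0.4, 0.05]
ppl = perplexity(probs)
bpt = bits_per_token(probs)
print(f"perplexity ≈ {ppl:.2f}, bits per token ≈ {bpt:.2f}")

# Converting bits per token to bits per word requires the average number of tokens
# per word, which depends on the tokenizer and language (assumed 1.3 here).
print(f"bits per word ≈ {bpt * 1.3:.2f}")
```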
One broad category of evaluation dataset is question answering datasets, consisting of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No"). A question answering task is considered "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be adjoined with some text which includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."). Otherwise, the task is considered "closed book", and the model must draw on knowledge retained during training. Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD. Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ____". Some composite benchmarks have also been developed which combine a diversity of different evaluation datasets and tasks. Examples include GLUE, SuperGLUE, MMLU, BIG-bench, and HELM. OpenAI has released tools for running composite benchmarks, but noted that the eval results are sensitive to the prompting method. Some public datasets contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality, which can be cleaned to give more reliable benchmark scores. It was previously standard to report results on a heldout portion of an evaluation dataset after doing supervised fine-tuning on the remainder. It is now more common to evaluate a pre-trained model directly through prompting techniques, though researchers vary in the details of how they formulate prompts for particular tasks, particularly with respect to how many examples of solved tasks are adjoined to the prompt (i.e. the value of "n" in "n"-shot prompting). Adversarially constructed evaluations. Because of the rapid pace of improvement of large language models, evaluation benchmarks have suffered from short lifespans, with state of the art models quickly "saturating" existing benchmarks, exceeding the performance of human annotators, leading to efforts to replace or augment the benchmark with more challenging tasks. In addition, there are cases of "shortcut learning" wherein AIs sometimes "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording in order to guess the correct responses, without necessarily understanding the actual question being asked. Some datasets have been constructed adversarially, focusing on particular problems on which extant language models seem to have unusually poor performance compared to humans. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions which language models are susceptible to answering incorrectly by mimicking falsehoods to which they were repeatedly exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom "you can't teach an old dog new tricks", even though this is not literally true. Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model and filtering with a set of classifiers. 
The resulting problems are trivial for humans but at the time the datasets were created state of the art language models had poor accuracy on them. For example: We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man... a) demonstrates how to increase efficient exercise work by running up and down balls. b) moves all his arms and legs and builds up a lot of muscle. c) then plays the ball and we see a graphics and hedge trimming demonstration. d) performs sit ups while on the ball and talking. BERT selects b) as the most likely completion, though the correct answer is d). Wider impact. In 2023, "Nature Biomedical Engineering" wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time." Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose to automation 300 million jobs globally. Memorization and copyright. Memorization is an emergent behavior in LLMs in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural nets. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates or up to about 7%. Security. Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse. For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens. A study by researchers at Google and several universities, including Cornell University and University of California, Berkeley, showed that there are potential security risks in language models such as ChatGPT. In their study, they examined and confirmed the possibility that questioners could get, from ChatGPT, the training data that the AI model used. For example, when asking ChatGPT 3.5 turbo to repeat the word "poem" forever, the AI model will say "poem" hundreds of times and then diverge, deviating from the standard dialogue style and spitting out nonsense phrases, thus spitting out the training data as it is. The researchers have seen more than 10,000 examples of the AI model exposing their training data in a similar method. The researchers said that it was hard to tell if the AI model was actually safe or not. The potential presence of "sleeper agents" within LLM models is another emerging security concern. These are hidden functionalities built into the model that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior to make insecure actions. Large language model (LLM) applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, research by Kang et al. demonstrated a method for circumventing LLM safety systems. 
Similarly, Wang illustrated how a potential criminal could potentially bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation. Algorithmic bias. While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups. Since English data is overrepresented in current large language models' training data, it may also downplay non-English views. Stereotyping. AI models can reinforce a wide range of stereotypes, including those based on gender, ethnicity, age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways. Notably, gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. Large language models often assign roles and characteristics based on traditional gender norms. For example, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men. Political bias. Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data. List. For the training cost column, 1 petaFLOP-day = 1 petaFLOP/sec × 1 day = 8.64E19 FLOP. Also, only the largest model's cost is written. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
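As a rough illustration of the cost figures used in this article (the 6-FLOPs-per-parameter-per-token training rule, the 1–2 FLOPs-per-parameter inference rule, and the petaFLOP-day conversion noted for the training cost column), the following sketch estimates compute for a hypothetical model. The 70-billion-parameter and 1.4-trillion-token figures are arbitrary examples, not numbers for any specific model, and the inference estimate uses the upper end of the quoted range.

```python
PFLOP_DAY = 8.64e19   # 1 petaFLOP/s sustained for one day, in FLOP

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * tokens

def inference_flops(parameters: float, tokens: float) -> float:
    """Approximate inference compute: ~2 FLOPs per parameter per generated token."""
    return 2.0 * parameters * tokens

# Hypothetical example: a 70-billion-parameter model trained on 1.4 trillion tokens.
train = training_flops(70e9, 1.4e12)
print(f"training compute ≈ {train:.2e} FLOP ≈ {train / PFLOP_DAY:,.0f} petaFLOP-days")

# Generating 1,000 tokens with the same model is far cheaper.
print(f"inference for 1000 tokens ≈ {inference_flops(70e9, 1e3):.2e} FLOP")
```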
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "f(E(y))" }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "\\begin{cases}\nC = C_0 ND \\\\[6pt]\nL = \\frac{A}{N^\\alpha} + \\frac{B}{D^\\beta} + L_0\n\\end{cases}" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "y = \\text{average } \\Pr(\\text{correct token})" }, { "math_id": 8, "text": "(\\log x, y)" }, { "math_id": 9, "text": "y = \\text{average } \\log(\\Pr(\\text{correct token}))" }, { "math_id": 10, "text": "y = \\text{average } \\Pr(\\text{the most likely token is correct})" }, { "math_id": 11, "text": "\\log(\\text{Perplexity}) = -\\frac{1}{N} \\sum_{i=1}^N \\log(\\Pr(\\text{token}_i \\mid \\text{context for token}_i))" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "\\text{Entropy} = \\log_2(\\text{Perplexity})" } ]
https://en.wikipedia.org/wiki?curid=73248112
7324959
Rook's graph
Graph of chess rook moves In graph theory, a rook's graph is an undirected graph that represents all legal moves of the rook chess piece on a chessboard. Each vertex of a rook's graph represents a square on a chessboard, and there is an edge between any two squares sharing a row (rank) or column (file), the squares that a rook can move between. These graphs can be constructed for chessboards of any rectangular shape. Although rook's graphs have only minor significance in chess lore, they are more important in the abstract mathematics of graphs through their alternative constructions: rook's graphs are the Cartesian product of two complete graphs, and are the line graphs of complete bipartite graphs. The square rook's graphs constitute the two-dimensional Hamming graphs. Rook's graphs are highly symmetric, having symmetries taking every vertex to every other vertex. In rook's graphs defined from square chessboards, more strongly, every two edges are symmetric, and every pair of vertices is symmetric to every other pair at the same distance in moves (making the graph distance-transitive). For rectangular chessboards whose width and height are relatively prime, the rook's graphs are circulant graphs. With one exception, the rook's graphs can be distinguished from all other graphs using only two properties: the numbers of triangles each edge belongs to, and the existence of a unique 4-cycle connecting each nonadjacent pair of vertices. Rook's graphs are perfect graphs. In other words, every subset of chessboard squares can be colored so that no two squares in a row or column have the same color, using a number of colors equal to the maximum number of squares from the subset in any single row or column (the clique number of the induced subgraph). This class of induced subgraphs are a key component of a decomposition of perfect graphs used to prove the strong perfect graph theorem, which characterizes all perfect graphs. The independence number and domination number of a rook's graph both equal the smaller of the chessboard's width and height. In terms of chess, the independence number is the maximum number of rooks that can be placed without attacking each other; the domination number is the minimum number needed to attack all unoccupied board squares. Rook's graphs are well-covered graphs, meaning that placing non-attacking rooks one at a time can never get stuck until a set of maximum size is reached. Definition and mathematical constructions. An "n" × "m" rook's graph represents the moves of a rook on an "n" × "m" chessboard. Its vertices represent the squares of the chessboard, and may be given coordinates ("x", "y"), where 1 ≤ "x" ≤ "n" and 1 ≤ "y" ≤ "m". Two vertices with coordinates ("x"1, "y"1) and ("x"2, "y"2) are adjacent if and only if either "x"1 = "x"2 or "y"1 = "y"2. (If "x"1 = "x"2, the vertices share a file and are connected by a vertical rook move; if "y"1 = "y"2, they share a rank and are connected by a horizontal rook move.) The squares of a single rank or file are all directly connected to each other, so each rank and file forms a clique—a subset of vertices forming a complete graph. The whole rook's graph for an "n" × "m" chessboard can be formed from these two kinds of cliques, as the Cartesian product of graphs "K""n" ◻ "K""m". Because the rook's graph for a square chessboard is the Cartesian product of equal-size cliques, it is an example of a Hamming graph. 
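The coordinate description of adjacency given above translates directly into code. The following Python sketch (an illustration using only the standard library, not tied to any graph package) builds an n × m rook's graph from that rule and checks that it has nm vertices, each adjacent to n + m − 2 others, as discussed later in the article.

```python
from itertools import product

def rook_graph(n: int, m: int):
    """Vertices (x, y) with 1 <= x <= n, 1 <= y <= m; edges join squares sharing a coordinate."""
    vertices = list(product(range(1, n + 1), range(1, m + 1)))
    adjacent = {v: set() for v in vertices}
    for u, v in product(vertices, repeat=2):
        if u != v and (u[0] == v[0] or u[1] == v[1]):
            adjacent[u].add(v)
    return vertices, adjacent

n, m = 4, 3
vertices, adjacent = rook_graph(n, m)
assert len(vertices) == n * m                                  # one vertex per square
assert all(len(nb) == n + m - 2 for nb in adjacent.values())   # regular of degree n + m - 2
print(f"{n} x {m} rook's graph: {len(vertices)} vertices, every vertex has degree {n + m - 2}")
```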
Its dimension as a Hamming graph is two, and every two-dimensional Hamming graph is a rook's graph for a square chessboard. Square rook's graphs are also called "Latin square graphs"; applied to a Latin square, its edges describe pairs of squares that cannot contain the same value. The Sudoku graphs are rook's graphs with some additional edges, connecting squares of a Sudoku puzzle that should have unequal values. Geometrically, the rook's graphs can be formed by sets of the vertices and edges (the skeletons) of a family of convex polytopes, the Cartesian products of pairs of neighborly polytopes. For instance, the 3-3 duoprism is a four-dimensional shape formed as the Cartesian product of two triangles, and has a 3 × 3 rook's graph as its skeleton. Regularity and symmetry. Strong regularity. and observe that the formula_0 rook's graph (or equivalently, as they describe it, the line graph of the complete bipartite graph formula_1) has all of the following properties: They show that except in the case formula_14, these properties uniquely characterize the rook's graph. That is, the rook's graphs are the only graphs with these numbers of vertices, edges, triangles per edge, and with a unique 4-cycle through each two non-adjacent vertices. When formula_11, these conditions may be abbreviated by stating that an formula_15 rook's graph is a strongly regular graph with parameters formula_16. These parameters describe the number of vertices, the number of edges per vertex, the number of triangles per edge, and the number of shared neighbors for two non-adjacent vertices, respectively. Conversely, every strongly regular graph with these parameters must be an formula_15 rook's graph, unless formula_17. When formula_17, there is another strongly regular graph, the Shrikhande graph, with the same parameters as the formula_18 rook's graph. The Shrikhande graph obeys the same properties listed by Moon and Moser. It can be distinguished from the formula_18 rook's graph in that the neighborhood of each vertex in the Shrikhande graph is connected to form a formula_19-cycle. In contrast, in the formula_18 rook's graph, the neighborhood of each vertex forms two triangles, one for its rank and another for its file, without any edges from one part of the neighborhood to the other. Another way of distinguishing the formula_18 rook's graph from the Shrikhande graph uses clique cover numbers: the formula_17 rook's graph can be covered by four cliques (the four ranks or the four files of the chessboard) whereas six cliques are needed to cover the Shrikhande graph. Symmetry. Rook's graphs are vertex-transitive, meaning that they have symmetries taking every vertex to every other vertex. This implies that every vertex has an equal number of edges: they are formula_20-regular. The rook's graphs are the only regular graphs formed from the moves of standard chess pieces in this way. When formula_6, the symmetries of the rook's graph are formed by independently permuting the rows and columns of the graph, so the automorphism group of the graph has formula_21 elements. When formula_11, the graph has additional symmetries that swap the rows and columns, so the number of automorphisms is formula_22. Any two vertices in a rook's graph are either at distance one or two from each other, according to whether they are adjacent or nonadjacent respectively. Any two nonadjacent vertices may be transformed into any other two nonadjacent vertices by a symmetry of the graph. 
When the rook's graph is not square, the pairs of adjacent vertices fall into two orbits of the symmetry group according to whether they are adjacent horizontally or vertically, but when the graph is square any two adjacent vertices may also be mapped into each other by a symmetry and the graph is therefore distance-transitive. When formula_23 and formula_24 are relatively prime, the symmetry group formula_25 of the rook's graph contains as a subgroup the cyclic group formula_26 that acts by cyclically permuting the formula_2 vertices. Therefore, in this case, the rook's graph is a circulant graph. Square rook's graphs are connected-homogeneous, meaning that every isomorphism between two connected induced subgraphs can be extended to an automorphism of the whole graph. Other properties. Perfection. A rook's graph can also be viewed as the line graph of a complete bipartite graph "K""n","m" — that is, it has one vertex for each edge of "K""n","m", and two vertices of the rook's graph are adjacent if and only if the corresponding edges of the complete bipartite graph share a common endpoint. In this view, an edge in the complete bipartite graph from the ith vertex on one side of the bipartition to the jth vertex on the other side corresponds to a chessboard square with coordinates ("i", "j"). Any bipartite graph is a subgraph of a complete bipartite graph, and correspondingly any line graph of a bipartite graph is an induced subgraph of a rook's graph. The line graphs of bipartite graphs are perfect: in them, and in any of their induced subgraphs, the number of colors needed in any vertex coloring is the same as the number of vertices in the largest complete subgraph. Line graphs of bipartite graphs form an important family of perfect graphs: they are one of a small number of families used by to characterize the perfect graphs and to show that every graph with no odd hole and no odd antihole is perfect. In particular, rook's graphs are themselves perfect. Because a rook's graph is perfect, the number of colors needed in any coloring of the graph is just the size of its largest clique. The cliques of a rook's graph are the subsets of a single row or a single column, and the largest of these have size max("m", "n"), so this is also the chromatic number of the graph. An n-coloring of an "n" × "n" rook's graph may be interpreted as a Latin square: it describes a way of filling the rows and columns of an "n" × "n" grid with n different values in such a way that the same value does not appear twice in any row or column. In the same way, a coloring of a rectangular rook's graph corresponds to a Latin rectangle. Although finding an optimal coloring of a rook's graph is straightforward, it is NP-complete to determine whether a partial coloring can be extended to a coloring of the whole graph (this problem is called precoloring extension). Equivalently, it is NP-complete to determine whether a partial Latin square can be completed to a full Latin square. Independence. An independent set in a rook's graph is a set of vertices, no two of which belong to the same row or column of the graph; in chess terms, it corresponds to a placement of rooks no two of which attack each other. Perfect graphs may also be described as the graphs in which, in every induced subgraph, the size of the largest independent set is equal to the number of cliques in a partition of the graph's vertices into a minimum number of cliques. 
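As a concrete check of the coloring and independence facts above, the sketch below colors square (x, y) of an n × n board with (x + y) mod n. This is a proper coloring with the minimum number of colors, so it can be read as a Latin square, and each color class is a maximum independent set, that is, a placement of n mutually non-attacking rooks. The choice of a 4 × 4 board and of this particular coloring is only an example.

```python
n = 4
color = {(x, y): (x + y) % n for x in range(n) for y in range(n)}

# Proper coloring: squares in the same row or column never share a color.
assert all(color[(x, y1)] != color[(x, y2)]
           for x in range(n) for y1 in range(n) for y2 in range(n) if y1 != y2)
assert all(color[(x1, y)] != color[(x2, y)]
           for y in range(n) for x1 in range(n) for x2 in range(n) if x1 != x2)

# Read the coloring as a Latin square: each value appears once per row and column.
for x in range(n):
    print([color[(x, y)] for y in range(n)])

# Each color class is an independent set (pairwise non-attacking rooks) of size n.
for c in range(n):
    cls = [v for v, col in color.items() if col == c]
    assert len(cls) == n
    assert all(a[0] != b[0] and a[1] != b[1] for a in cls for b in cls if a != b)
print("every color class is a set of", n, "mutually non-attacking rooks")
```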
In a rook's graph, the sets of rows or the sets of columns (whichever has fewer sets) form such an optimal partition. The size of the largest independent set in the graph is therefore min("m", "n"). Rook's graphs are well-covered graphs: every independent set in a rook's graph can be extended to a maximum independent set, and every maximal independent set in a rook's graph has the same size, min("m", "n"). Domination. The domination number of a graph is the minimum cardinality among all dominating sets. On the rook's graph a set of vertices is a dominating set if and only if their corresponding squares either occupy, or are a rook's move away from, all squares on the "m" × "n" board. For the "m" × "n" board the domination number is min("m", "n"). On the rook's graph a k-dominating set is a set of vertices whose corresponding squares attack all other squares (via a rook's move) at least k times. A k-tuple dominating set on the rook's graph is a set of vertices whose corresponding squares attack all other squares at least k times and are themselves attacked at least "k" − 1 times. The minimum cardinality among all k-dominating and k-tuple dominating sets are the k-domination number and the k-tuple domination number, respectively. On the square board, and for even k, the k-domination number is "nk"/2 when "n" ≥ ("k"2 − 2"k")/4 and "k" &lt; 2"n". In a similar fashion, the k-tuple domination number is "n"("k" + 1)/2 when k is odd and less than 2"n". Hamiltonicity. Every rook's graph contains a Hamiltonian cycle. However, these cycles may involve moves between squares that are far apart within a single row or column of the chessboard. Instead, the study of "rook's tours", in the mathematics of chess, has generally concentrated on a special case of these Hamiltonian cycles where the rook is restricted to move only to adjacent squares. These single-step rook's tours only exist on boards with an even number of squares. They play a central role in the proof of Gomory's theorem that, if two squares of opposite colors are removed from a standard chessboard, the remaining squares can always be covered by dominoes. They are featured alongside knight's tours in the first work to discuss chess-piece tours, the 9th century Sanskrit "Kavyalankara" of Rudrata. Spectrum. The spectrum of a rook's graph (the eigenvalues of its adjacency matrix) consists of the four eigenvalues formula_3, formula_8, formula_10, and formula_27. Because these are all integers, rook's graphs are integral graphs. There are only three classes of graphs (and finitely many exceptional graphs) that can have four eigenvalues with one of the four being formula_27; one of the three classes is the class of rook's graphs. For most combinations of formula_23 and formula_24, the formula_0 rook's graph is spectrally unique: no other graph has the same spectrum. In particular this is true when formula_28 or formula_29, or when the two numbers formula_23 and formula_24 sum to at least 18 and do not have the form formula_30. In other graphs. The graphs for which the neighbors of each vertex induce a rook's graph have been called "locally grid". Examples include the Johnson graphs formula_31, for which the neighbors of each vertex form a formula_32 rook's graph. Other examples are known, and for some rook's graphs, a complete classification is known. For instance, there are two graphs whose neighborhoods are all formula_33 rook's graphs: they are the Johnson graph formula_34, and the complement graph of a formula_18 rook's graph. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "m\\times n" }, { "math_id": 1, "text": "K_{m,n}" }, { "math_id": 2, "text": "mn" }, { "math_id": 3, "text": "m+n-2" }, { "math_id": 4, "text": "m-1" }, { "math_id": 5, "text": "n-1" }, { "math_id": 6, "text": "m\\ne n" }, { "math_id": 7, "text": "n\\tbinom{m}{2}" }, { "math_id": 8, "text": "m-2" }, { "math_id": 9, "text": "m\\tbinom{n}{2}" }, { "math_id": 10, "text": "n-2" }, { "math_id": 11, "text": "m=n" }, { "math_id": 12, "text": "m-2=n-2" }, { "math_id": 13, "text": "4" }, { "math_id": 14, "text": "m=n=4" }, { "math_id": 15, "text": "n\\times n" }, { "math_id": 16, "text": "\\operatorname{srg}(n^2,2n-2,n-2,2)" }, { "math_id": 17, "text": "n=4" }, { "math_id": 18, "text": "4\\times 4" }, { "math_id": 19, "text": "6" }, { "math_id": 20, "text": "(m+n-2)" }, { "math_id": 21, "text": "m!n!" }, { "math_id": 22, "text": "2n!^2" }, { "math_id": 23, "text": "m" }, { "math_id": 24, "text": "n" }, { "math_id": 25, "text": "S_m\\times S_n" }, { "math_id": 26, "text": "C_{mn}=C_m\\times C_n" }, { "math_id": 27, "text": "-2" }, { "math_id": 28, "text": "n=2" }, { "math_id": 29, "text": "n=m-1" }, { "math_id": 30, "text": "2t^2\\pm t" }, { "math_id": 31, "text": "J(n,k)" }, { "math_id": 32, "text": "k\\times(n-k)" }, { "math_id": 33, "text": "3\\times 3" }, { "math_id": 34, "text": "J(6,3)" } ]
https://en.wikipedia.org/wiki?curid=7324959
73251188
Araceli Lopez-Martens
French physicist Araceli Lopez-Martens (born 1971) is a French nuclear physicist specializing in the stability of superheavy elements, and known for her research on the superdeformation of nuclei and her role in the discovery of 249No, the lightest isotope of nobelium. She is a director of research for the French National Centre for Scientific Research. Education and career. Lopez-Martens was born in 1971. After primary and secondary education in France, she studied physics and Russian at the University of Sussex in England from 1989 to 1993, before returning to France for a diplôme d'études approfondies and doctoral thesis in physics through Paris-Sud University and the Centre de Sciences Nucléaires et de Sciences de la Matière in 1994 and 1996, respectively. Her doctoral dissertation, "Étude de la désexcitation des états superdéformés dans la région de masse formula_0", was directed by Fazia Hannachi. After completing her doctorate, she became a postdoctoral researcher at the Niels Bohr Institute in Denmark. She has worked as a researcher for the French National Centre for Scientific Research (CNRS) since 1999, initially at the Institut de Recherches Subatomiques (IReS) in Strasbourg, and is currently a director of research for the CNRS, affiliated with the Laboratory of the Physics of the two Infinities – Irène Joliot-Curie, jointly operated by the CNRS, Paris-Saclay University, and the Université Paris Cité. Her research has involved participation in the international collaborations on the Euroball gamma detector array, the AGATA advanced gamma tracking array, and the S3 Super Separator Spectrometer at the Grand Accélérateur National d'Ions Lourds (GANIL) in France. Recognition. Lopez-Martens received the CNRS Silver Medal in 2023. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "a=190" } ]
https://en.wikipedia.org/wiki?curid=73251188
73267330
Ville's inequality
Probabilistic inequality In probability theory, Ville's inequality provides an upper bound on the probability that a supermartingale exceeds a certain value. The inequality is named after Jean Ville, who proved it in 1939. The inequality has applications in statistical testing. Statement. Let formula_0 be a non-negative supermartingale. Then, for any real number formula_1 formula_2 The inequality is a generalization of Markov's inequality. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
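As a numerical illustration (not part of the article), the bound can be checked by Monte Carlo simulation for a simple non-negative martingale, which is in particular a non-negative supermartingale. Here X_n is a product of independent Uniform(0, 2) factors, so E[X_0] = 1 and the bound reads P[sup X_n ≥ a] ≤ 1/a; the horizon N is finite, which only underestimates the supremum, so the estimated probability must still respect the bound. All numerical choices (paths, N, a) are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
paths, N, a = 20_000, 200, 5.0

# X_0 = 1 and X_n = X_{n-1} * U_n with U_n ~ Uniform(0, 2): a non-negative martingale.
U = rng.uniform(0.0, 2.0, size=(paths, N))
X = np.cumprod(U, axis=1)
X = np.concatenate([np.ones((paths, 1)), X], axis=1)  # prepend X_0 = 1

running_max = X.max(axis=1)
estimate = (running_max >= a).mean()
print(f"P[sup X_n >= {a}] ~ {estimate:.4f}  <=  E[X_0]/a = {1 / a:.4f}")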
[ { "math_id": 0, "text": "X_0, X_1, X_2, \\dots" }, { "math_id": 1, "text": "a > 0," }, { "math_id": 2, "text": "\n\\operatorname{P} \\left[ \\sup_{n \\ge 0} X_n \\ge a \\right] \\le \\frac{\\operatorname{E}[X_0]}{a} \\ .\n" } ]
https://en.wikipedia.org/wiki?curid=73267330
732729
Conjunction fallacy
Formal fallacy, aka Linda Problem The conjunction fallacy (also known as the Linda problem) is an inference that a conjoint set of two or more specific conclusions is likelier than any single member of that same set, in violation of the laws of probability. It is a type of formal fallacy. Definition and basic example. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; I am particularly fond of this example [the Linda problem] because I know that the [conjoint] statement is least probable, yet a little homunculus in my head continues to jump up and down, shouting at me—"but she can't just be a bank teller; read the description." The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman. "Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations." Which is more probable? The majority of those asked chose option 2. However, the probability of two events occurring together (that is, in conjunction) is always less than or equal to the probability of either one occurring itself—formally, for two events "A" and "B" this inequality could be written as formula_0 and formula_1. For example, even choosing a very low probability of Linda's being a bank teller, say Pr(Linda is a bank teller) = 0.05 and a high probability that she would be a feminist, say Pr(Linda is a feminist) = 0.95, then, assuming these two facts are independent of each other, Pr(Linda is a bank teller "and" Linda is a feminist) = 0.05 × 0.95 or 0.0475, lower than Pr(Linda is a bank teller). Tversky and Kahneman argue that most people get this problem wrong because they use a heuristic (an easily calculated) procedure called representativeness to make this kind of judgment: Option 2 seems more "representative" of Linda from the description of her, even though it is clearly mathematically less likely. In other demonstrations, they argued that a specific scenario seemed more likely because of representativeness, but each added detail would actually make the scenario less and less likely. In this way it could be similar to the misleading vividness or slippery slope fallacies. More recently Kahneman has argued that the conjunction fallacy is a type of extension neglect. Joint versus separate evaluation. In some experimental demonstrations, the conjoint option is evaluated separately from its basic option. In other words, one group of participants is asked to rank-order the likelihood that Linda is a bank teller, a high school teacher, and several other options, and another group is asked to rank-order whether Linda is a bank teller and active in the feminist movement versus the same set of options (without "Linda is a bank teller" as an option). In this type of demonstration, different groups of subjects still rank-order Linda as a bank teller and active in the feminist movement more highly than Linda as a bank teller. Separate evaluation experiments preceded the earliest joint evaluation experiments, and Kahneman and Tversky were surprised when the effect was observed even under joint evaluation. In separate evaluation, the term conjunction effect may be preferred. Other examples. While the Linda problem is the best-known example, researchers have developed dozens of problems that reliably elicit the conjunction fallacy. Tversky &amp; Kahneman (1981). 
The original report by Tversky &amp; Kahneman (later republished as a book chapter) described four problems that elicited the conjunction fallacy, including the Linda problem. There was also a similar problem about a man named Bill (a good fit for the stereotype of an accountant — "intelligent, but unimaginative, compulsive, and generally lifeless" — but not a good fit for the stereotype of a jazz player), and two problems where participants were asked to make predictions for events that could occur in 1981. Policy experts were asked to rate the probability that the Soviet Union would invade Poland, and the United States would break off diplomatic relations, all in the following year. They rated it on average as having a 4% probability of occurring. Another group of experts was asked to rate the probability simply that the United States would break off relations with the Soviet Union in the following year. They gave it an average probability of only 1%. In an experiment conducted in 1980, respondents were asked the following: Suppose Björn Borg reaches the Wimbledon finals in 1981. Please rank order the following outcomes from most to least likely. On average, participants rated "Borg will lose the first set but win the match" more likely than "Borg will lose the first set". However, winning the match is only one of several potential eventual outcomes after having lost the first set. The first and the second outcome are thus more likely (as they only contain one condition) than the third and fourth outcome (which depend on two conditions). Tversky &amp; Kahneman (1983). Tversky and Kahneman followed up their original findings with a 1983 paper that looked at dozens of new problems, most of these with multiple variations. The following are a couple of examples. Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you choose appears on successive rolls of the die. 65% of participants chose the second sequence, though option 1 is contained within it and is shorter than the other options. In a version where the $25 bet was only hypothetical the results did not significantly differ. Tversky and Kahneman argued that sequence 2 appears "representative" of a chance sequence (compare to the "clustering illusion"). A health survey was conducted in a representative sample of adult males in British Columbia of all ages and occupations. Mr. F. was included in the sample. He was selected by chance from the list of participants. Which of the following statements is more probable? (check one) The probability of the conjunctions is never greater than that of its conjuncts. Therefore, the first choice is more probable. Criticism. Critics such as Gerd Gigerenzer and Ralph Hertwig criticized the Linda problem on grounds such as the wording and framing. The question of the Linda problem may violate conversational maxims in that people assume that the question obeys the maxim of relevance. Gigerenzer argues that some of the terminology used have polysemous meanings, the alternatives of which he claimed were more "natural". He argues that one meaning of "probable" ("what happens frequently") corresponds to the mathematical probability people are supposed to be tested on, but other meanings ("what is plausible" and "whether there is evidence") do not. 
The term "and" has even been argued to have relevant polysemous meanings. Many techniques have been developed to control for this possible misinterpretation, but none of them has dissipated the effect. Many variations in wording of the Linda problem were studied by Tversky and Kahneman. If the first option is changed to obey conversational relevance, i.e., "Linda is a bank teller whether or not she is active in the feminist movement" the effect is decreased, but the majority (57%) of the respondents still commit the conjunction error. If the probability is changed to frequency format ("see debiasing section below") the effect is reduced or eliminated. However, studies exist in which indistinguishable conjunction fallacy rates have been observed with stimuli framed in terms of probabilities versus frequencies. The wording criticisms may be less applicable to the conjunction effect in separate evaluation. The "Linda problem" has been studied and criticized more than other types of demonstration of the effect (some described below). In an incentivized experimental study, it has been shown that the conjunction fallacy decreased in those with greater cognitive ability, though it did not disappear. It has also been shown that the conjunction fallacy becomes less prevalent when subjects are allowed to consult with other subjects. Still, the conjunction fallacy occurs even when people are asked to make bets with real money, and when they solve intuitive physics problems of various designs. Debiasing. Drawing attention to set relationships, using frequencies instead of probabilities, and/or thinking diagrammatically sharply reduce the error in some forms of the conjunction fallacy. In one experiment the question of the Linda problem was reformulated as follows: There are 100 persons who fit the description above (that is, Linda's). How many of them are: Whereas previously 85% of participants gave the wrong answer (bank teller and active in the feminist movement), in experiments done with this questioning the proportion of incorrect answers is dramatically reduced (to ~20%). Participants were forced to use a mathematical approach and thus recognized the difference more easily. However, in some tasks only based on frequencies, not on stories, that used clear logical formulations, conjunction fallacies continued to occur dominantly, with only few exceptions, when the observed pattern of frequencies resembled a conjunction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Pr(A \\land B) \\leq \\Pr(A)" }, { "math_id": 1, "text": "\\Pr(A \\land B) \\leq \\Pr(B)" } ]
https://en.wikipedia.org/wiki?curid=732729
73274726
Wu–Yang dictionary
Mathematical physics relation In topology and high energy physics, the Wu–Yang dictionary refers to the mathematical identification that allows back-and-forth translation between the concepts of gauge theory and those of differential geometry. The dictionary appeared in 1975 in an article by Tai Tsun Wu and C. N. Yang comparing electromagnetism and fiber bundle theory. This dictionary has been credited as bringing mathematics and theoretical physics closer together. A crucial example of the success of the dictionary is that it allowed the understanding of monopole quantization in terms of Hopf fibrations. History. Equivalences between fiber bundle theory and gauge theory were hinted at the end of the 1960s. In 1967, mathematician Andrzej Trautman started a series of lectures aimed at physicists and mathematicians at King's College London regarding these connections. Theoretical physicists Tai Tsun Wu and C. N. Yang working in Stony Brook University, published a paper in 1975 on the mathematical framework of electromagnetism and the Aharonov–Bohm effect in terms of fiber bundles. A year later, mathematician Isadore Singer came to visit and brought a copy back to the University of Oxford. Singer showed the paper to Michael Atiyah and other mathematicians, sparking a close collaboration between physicists and mathematicians. Yang also recounts a conversation that he had with one of the mathematicians that founded fiber bundle theory, Shiing-Shen Chern: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;In 1975, impressed with the fact that gauge fields are connections on fiber bundles, I drove to the house of Shiing-Shen Chern in El Cerrito, near Berkeley. (I had taken courses with him in the early 1940s when he was a young professor and I an undergraduate student at the National Southwest Associated University in Kunming, China. That was before fiber bundles had become important in differential geometry and before Chern had made history with his contributions to the generalized Gauss–Bonnet theorem and the Chern classes.) We had much to talk about: friends, relatives, China. When our conversation turned to fiber bundles, I told him that I had finally learned from Jim Simons the beauty of fiber-bundle theory and the profound Chern-Weil theorem. I said I found it amazing that gauge fields are exactly connections on fiber bundles, which the mathematicians developed without reference to the physical world. I added ‘this is both thrilling and puzzling, since you mathematicians dreamed up these concepts out of nowhere.’ He immediately protested, ‘No, no. These concepts were not dreamed up. They were natural and real.' In 1977, Trautman used these results to demonstrate an equivalence between a quantization condition for magnetic monopoles used by Paul Dirac back in 1931 and Hopf fibration, a fibration of a 3-sphere proposed io the same year by mathematician Heinz Hopf. Mathematician Jim Simons discussing this equivalence with Yang expressed that “Dirac had discovered trivial and nontrivial bundles before mathematicians.” In the original paper, Wu and Yang added sources (like the electric current) to the dictionary next to a blank spot, indicating a lack of any equivalent concept on the mathematical side. During interviews, Yang recalls that Singer and Atiyah found great interest in this concept of sources, which was unknown for mathematicians but that physicists knew since the 19th century. 
Mathematicians started working on that, which led to the development of Donaldson theory by Simon Donaldson, a student of Atiyah. Description. Summarized version. The Wu-Yang dictionary relates terms in particle physics with terms in mathematics, specifically fiber bundle theory. Many versions and generalizations of the dictionary exist. Here is an example of a dictionary, which puts each physics term next to its mathematical analogue: Original version for electromagnetism. Wu and Yang considered the description of an electron traveling around a cylinder in the presence of a magnetic field inside the cylinder (outside the cylinder the field vanishes, i.e. formula_0). According to the Aharonov–Bohm effect, the interference patterns shift by a factor formula_1, where formula_2 is the magnetic flux and formula_3 is the magnetic flux quantum. For two different fluxes "a" and "b", the results are identical if formula_4, where formula_5 is an integer. We define the operator formula_6 as the gauge transformation that brings the electron wave function from one configuration to the other, formula_7. For an electron that takes a path from point "P" to point "Q", we define the phase factor as formula_8, where formula_9 is the electromagnetic four-potential. For the case of an SU2 gauge field, we can make the substitution formula_10, where formula_11 are the generators of SU2 and formula_12 are the Pauli matrices. Using these concepts, Wu and Yang established the relation between the language of gauge theory and that of fiber bundles, codified in the following dictionary:
[ { "math_id": 0, "text": "f_{\\mu\\nu}=0" }, { "math_id": 1, "text": "\\exp(-i \\Omega/\\Omega_0)" }, { "math_id": 2, "text": "\\Omega" }, { "math_id": 3, "text": "\\Omega_0" }, { "math_id": 4, "text": "\\Omega_a-\\Omega_b=N\\Omega_0" }, { "math_id": 5, "text": "N" }, { "math_id": 6, "text": "S_{ab}" }, { "math_id": 7, "text": "\\psi_b=S_{ba}\\psi_a" }, { "math_id": 8, "text": "\\Phi_{QP} =\\exp \\left(-\\frac{i}{\\Omega_0} \\int_P^Q A_\\mu \\mathrm{d} x^{\\mu}\\right) " }, { "math_id": 9, "text": "A_\\mu" }, { "math_id": 10, "text": "A_\\mu=ib_\\mu^kX_k" }, { "math_id": 11, "text": "X_k=-i\\sigma_k/2" }, { "math_id": 12, "text": "\\sigma_k" } ]
https://en.wikipedia.org/wiki?curid=73274726
73275338
Coincidence method
Particle physics experimental design developed 1924 In particle physics, the coincidence method (or coincidence technique) is an experimental design through which particle detectors register two or more simultaneous measurements of a particular event through different interaction channels. Detection can be made by sensing the primary particle and/or through the detection of secondary reaction products. Such a method is used to increase the sensitivity of an experiment to a specific particle interaction, reducing conflation with background interactions by creating more degrees of freedom by which the particle in question may interact. The first notable use of the coincidence method was the Bothe–Geiger coincidence experiment of 1924. The higher the rate of interactions or reaction products that can be measured in coincidence, the less plausible it is that such an event arose from background flux, and the higher the experiment's efficiency. As an example, Cowan and Reines' neutrino experiment (1956) used a design that featured a four-fold coincidence technique. Particle detectors that rely on measurements of coincidence are often referred to as q-fold, where q is the number of channel measurements which must be triggered to affirm the desired interaction took place. Anti-coincidence counters or "vetos" are often used to prevent common backgrounds, such as cosmic rays, from interacting with the primary detection medium. For instance, such a veto is used in the gamma ray observatory COS-B. Detectors relying on coincidence designs are limited by random, chance coincidence events. Background. Coincidence designs are an essential technique for increasing confidence in signals and reducing random background within a wide range of particle detectors. Common backgrounds include radioactive decay products (beta, alpha, and gamma radiation) and cosmic rays (protons, air showers). Such backgrounds can produce random interactions within a particle detector that may be hard to differentiate from the target particle. If the particle in question is able to trigger multiple channels that are correlated in time or space, it can be established with greater confidence that the event is not a background trigger. "Chance" coincidence events may occur, in which all channels are triggered by particles which are not under investigation yet happen to interact with each channel at the same time. In this case, measurements of this chance event may be difficult to separate from measurements of the target events. A coincidence design must contain two or more measured channels for detecting a particle interaction which can be correlated with each other or the interaction in question over time, space, and/or the properties/products of the interaction. For an experimental setup with q coincidence channels (q-fold coincidence), the rate formula_0 at which such chance (accidental) coincidences occur is given by: formula_1 where formula_2 is the count rate of each channel and formula_3 is the coincidence resolving time, i.e. the window within which counts are treated as simultaneous. The higher the time resolution of the coincidence detector, the easier it is to discriminate between "chance" coincidences and true signals.
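The accidental-coincidence formula above can be sanity-checked with a short simulation. The following sketch is illustrative only: the singles rates, resolving time, and observation time are made-up values, and a two-fold coincidence is counted as a channel-1 event with at least one channel-2 event within plus or minus the resolving time:

import numpy as np

def accidental_rate(rates, tau):
    # q-fold chance coincidence rate: q * tau^(q - 1) * product of the singles rates
    q = len(rates)
    return q * tau ** (q - 1) * np.prod(rates)

rng = np.random.default_rng(1)
N1, N2 = 200.0, 150.0   # singles rates per second (illustrative)
tau = 1e-4              # coincidence resolving time in seconds (illustrative)
T = 2_000.0             # observation time in seconds

# Simulate the two channels as independent Poisson processes on [0, T].
t1 = np.sort(rng.uniform(0.0, T, rng.poisson(N1 * T)))
t2 = np.sort(rng.uniform(0.0, T, rng.poisson(N2 * T)))

# Count channel-1 events with at least one channel-2 event within +/- tau.
lo = np.searchsorted(t2, t1 - tau)
hi = np.searchsorted(t2, t1 + tau)
coincidences = np.count_nonzero(hi > lo)

print("formula  :", accidental_rate([N1, N2], tau), "per second")   # 6.0
print("simulated:", coincidences / T, "per second")                 # close to 6

For small resolving times the two numbers agree closely, which is the regime in which the formula is normally applied.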
The rate at which coincidence events are measured formula_4 compared to the rate at which all suspected signal triggers are measured formula_5 defines the efficiency of the detector formula_6: formula_7 in which case formula_4 can also be written as the raw count of particles available for detection formula_8 times the product of the efficiencies of all q coincidence channels: formula_9 Therefore, the ability of a detector to successfully confirm signals in coincidence is directly proportional to its efficiency. History and notable experiments. The use of coincidence detectors in particle physics experiments opened doors to similar methods in nuclear physics, astroparticle physics, and other related fields. A wide variety of operational particle detectors today contain some identifiable form of coincidence or anti-coincidence design. Geiger, Bothe, and the Geiger-Müller counter. In 1924, physicists Walther Bothe and Hans Geiger used the coincidence method to probe the Compton scattering of gamma rays and x-rays, a phenomenon whose quantum mechanical nature (see particle-wave duality) with regard to energy conservation was ambiguous at the time. The Bothe–Geiger experiment was the first significant coincidence experiment to test the transfer of energy between the incoming photon and the electron in this process. The experiment utilized two Geiger counters: one to detect the initial recoiling electron and one to simultaneously detect a secondary electron recoil caused by the photonic product of the first recoil. This setup included a coincidence circuit which measured the process with a resolving time of formula_3 = 1 ms and an accuracy of 0.1 ms. In 1954, Bothe won the Nobel Prize in Physics for this work. Cowan and Reines' neutrino experiment. In 1956, it was known that in order to balance the spin states of a beta decay process, a neutrino of spin 1/2 had to be a product of the reaction formula_10, where formula_11 is a neutron, formula_12 is the neutrino, formula_13 is a proton, and formula_14 is a beta particle. In an attempt to build on the theoretical concept of a neutrino by providing empirical evidence for its existence, physicists Clyde L. Cowan and Frederick Reines constructed an experiment outside of a nuclear reactor expected to emit neutrinos. Cowan and Reines decided to construct a four-fold coincidence experiment, for while the proximity to a nuclear reactor provided an ample flux of neutrinos, it also created intense backgrounds (neutrons, gamma rays, etc.). The experiment utilized multiple interaction channels through which the presence of a neutrino (or in this experiment, an antineutrino) could be detected. The antineutrinos would enter a tank of water doped with cadmium chloride and interact with a water molecule's proton. This reaction (formula_15, where formula_16 represents a positron and formula_17 represents an antineutrino) released positrons which interacted with one of two adjacent tanks of liquid scintillator. The resulting photons could then be measured by photomultiplier tubes installed on the scintillator tanks. While this interaction occurs, the neutron product from the original reaction follows a random walk through the cadmium-doped water until it is absorbed in a cadmium atom. This process then produces more gamma rays, which are subsequently detected. The overall system therefore includes two pairs of simultaneously recorded events, the correlation of which in time provides strong evidence for an interaction involving a neutrino. COS-B gamma-ray telescope.
The invention of the coincidence method opened the way to new techniques for measuring high-energy cosmic rays. One such experiment, COS-B, launched in 1975, featured an anti-coincidence veto for charged particles, as well as three scintillation detectors to measure electron cascades caused by incoming gamma radiation. Therefore, gamma ray interactions could be measured with three-fold coincidence, after having passed a charged particle veto (see Anti-Coincidence). Anti-coincidence. The anti-coincidence method, similarly to the coincidence method, helps discriminate background interactions from target signals. However, anti-coincidence designs are used to actively "reject" non-signal particles rather than "affirm" signal particles. For instance, anti-coincidence counters can be used to veto charged particles when an experiment is explicitly searching for neutral particles, as in the Super-Kamiokande neutrino experiment. These charged particles are often cosmic rays. Anti-coincidence detectors work by flagging or rejecting any events that trigger one channel of the detector, but not another. For a given rate of coincident particle interactions formula_18, formula_19 where formula_5 is the rate of "suspected" target interactions and formula_20 is the rate of all detected, but uncorrelated, events across multiple channels. This shows that all uncorrelated events, measured using the anti-coincidence technique, can be removed from the whole of possible interactions to retrieve the affirmable coincident interactions. For any q-fold design, formula_21 would include all coincident and all uncorrelated events. References.
[ { "math_id": 0, "text": "R_{q}" }, { "math_id": 1, "text": "R_{q} = q\\tau^{q-1}\\prod_{1}^{q} N_{q}" }, { "math_id": 2, "text": "N_{q}" }, { "math_id": 3, "text": "\\tau" }, { "math_id": 4, "text": "R_{\\rm coincidence}" }, { "math_id": 5, "text": "R_{\\rm suspected}" }, { "math_id": 6, "text": "\\epsilon" }, { "math_id": 7, "text": "\\epsilon = \\frac{R_{\\rm coincidence}}{R_{\\rm suspected}}" }, { "math_id": 8, "text": "N" }, { "math_id": 9, "text": "R_{\\rm coincidence} = N\\prod_{1}^{q}\\epsilon_{q}" }, { "math_id": 10, "text": "n^{0} \\rightarrow \\nu + p^{+} + \\beta^{-}" }, { "math_id": 11, "text": "n^{0}" }, { "math_id": 12, "text": "\\nu" }, { "math_id": 13, "text": "p^{+}" }, { "math_id": 14, "text": "\\beta^{-}" }, { "math_id": 15, "text": "p^{+} + \\nu_{-} \\rightarrow n^{0} + \\beta^{+}" }, { "math_id": 16, "text": "\\beta^{+}" }, { "math_id": 17, "text": "\\nu_{-}" }, { "math_id": 18, "text": "R_{\\rm coincident}" }, { "math_id": 19, "text": "R_{\\rm coincident} = R_{\\rm suspected} - R_{\\rm uncorrelated}" }, { "math_id": 20, "text": "R_{\\rm uncorrelated}" }, { "math_id": 21, "text": "R_{suspected}" } ]
https://en.wikipedia.org/wiki?curid=73275338
73276921
Bohr–Favard inequality
The Bohr–Favard inequality is an inequality appearing in a problem of Harald Bohr on the boundedness over the entire real axis of the integral of an almost-periodic function. The ultimate form of this inequality was given by Jean Favard; the latter materially supplemented the studies of Bohr, and studied the arbitrary periodic function formula_0 with continuous derivative formula_1 for given constants formula_2 and formula_3 which are natural numbers. The accepted form of the Bohr–Favard inequality is formula_4 formula_5 with the best constant formula_6: formula_7 The Bohr–Favard inequality is closely connected with the inequality for the best approximations of a function and its formula_2th derivative by trigonometric polynomials of an order at most formula_3 and with the notion of Kolmogorov's width in the class of differentiable functions (cf. Width). References. &lt;templatestyles src="Reflist/styles.css" /&gt;  This article incorporates text from a free content work. Licensed under CC BY-SA and GFDL. Text taken from "Bohr-Favard inequality​", see revision history for contributors, Encyclopedia of Mathematics.
[ { "math_id": 0, "text": "\nf(x) = \\ \n\\sum _ { k=n } ^ \\infty \n(a _ {k} \\cos kx + b _ {k} \\sin kx)\n" }, { "math_id": 1, "text": "f ^ {(r)} (x)" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\n\\| f \\| _ {C} \\leq K \\| f ^ {(r)} \\| _ {C} ,\n" }, { "math_id": 5, "text": "\n\\| f \\| _ {C} = \\max _ {x \\in [0, 2 \\pi ] } | f(x) | ,\n" }, { "math_id": 6, "text": "K = K (n, r)" }, { "math_id": 7, "text": "\nK = \\sup _ {\\| f ^ {(r)} \\| _ {C} \\leq 1 } \\ \n\\| f \\| _ {C} .\n" } ]
https://en.wikipedia.org/wiki?curid=73276921
73278652
Sectorial operator
Notion of a sectorial operator in mathematical operator theory In mathematics, more precisely in operator theory, a sectorial operator is a linear operator on a Banach space whose spectrum is contained in a sector of the complex plane and whose resolvent is uniformly bounded from above outside any larger sector. Such operators might be unbounded. Sectorial operators have applications in the theory of elliptic and parabolic partial differential equations. Definition. Let formula_0 be a Banach space. Let formula_1 be a (not necessarily bounded) linear operator on formula_2 and formula_3 its spectrum. For the angle formula_4, we define the open sector formula_5, and set formula_6 if formula_7. Now, fix an angle formula_8. The operator formula_1 is called sectorial with angle formula_9 if formula_10 and if formula_11 for every larger angle formula_12. The set of sectorial operators with angle formula_9 is denoted by formula_13.
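As a finite-dimensional sanity check (not from the article), take the operator to be a diagonal matrix with positive eigenvalues, so its spectrum lies on the positive real axis, and sample the quantity |lambda| ||(lambda - A)^(-1)|| for lambda outside a larger sector of angle psi. The matrix entries, the angle psi, and the sampling grid are arbitrary choices; for a normal operator like this one the sampled supremum should stay below 1/sin(psi):

import numpy as np

A = np.diag([0.5, 1.0, 3.0, 10.0])   # positive spectrum, so A is sectorial with angle 0
I = np.eye(4)
psi = np.pi / 6                      # test the resolvent bound outside the sector of angle psi

sup = 0.0
for r in np.logspace(-3, 3, 120):                       # |lambda| over several orders of magnitude
    for theta in np.linspace(psi + 1e-3, np.pi, 120):   # arguments outside the sector (upper half-plane;
        lam = r * np.exp(1j * theta)                    #  the lower half behaves identically by symmetry)
        resolvent_norm = np.linalg.norm(np.linalg.inv(lam * I - A), 2)
        sup = max(sup, abs(lam) * resolvent_norm)

print("sampled sup of |lambda| * ||(lambda - A)^(-1)|| :", sup)
print("reference value 1/sin(psi)                      :", 1 / np.sin(psi))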
[ { "math_id": 0, "text": "(X,\\|\\cdot\\|)" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\sigma(A)" }, { "math_id": 4, "text": "0<\\omega\\leq \\pi" }, { "math_id": 5, "text": "\\Sigma_{\\omega}:=\\{z \\in \\mathbb{C}\\setminus\\{0\\}:|\\operatorname{arg} z|<\\omega\\}" }, { "math_id": 6, "text": "\\Sigma_{0}:=(0,\\infty)" }, { "math_id": 7, "text": "\\omega=0" }, { "math_id": 8, "text": "\\omega \\in [0,\\pi)" }, { "math_id": 9, "text": "\\omega" }, { "math_id": 10, "text": "\\sigma(A)\\subset \\overline{\\Sigma_{\\omega}}" }, { "math_id": 11, "text": "\\sup\\limits_{\\lambda \\in \\mathbb{C}\\setminus\\overline{\\Sigma_{\\psi}}}|\\lambda|\\|(\\lambda-A)^{-1}\\|<\\infty" }, { "math_id": 12, "text": "\\psi\\in (\\omega,\\pi)" }, { "math_id": 13, "text": "\\operatorname{Sect}(\\omega)" }, { "math_id": 14, "text": "\\omega\\neq 0" }, { "math_id": 15, "text": "\\Sigma_{\\omega}" }, { "math_id": 16, "text": "2\\omega" } ]
https://en.wikipedia.org/wiki?curid=73278652
73281006
Stochastic analysis on manifolds
In mathematics, stochastic analysis on manifolds or stochastic differential geometry is the study of stochastic analysis over smooth manifolds. It is therefore a synthesis of stochastic analysis (the extension of calculus to stochastic processes) and of differential geometry. The connection between analysis and stochastic processes stems from the fundamental relation that the infinitesimal generator of a continuous strong Markov process is a second-order elliptic operator. The infinitesimal generator of Brownian motion is the Laplace operator and the transition probability density formula_0 of Brownian motion is the minimal heat kernel of the heat equation. Interpreting the paths of Brownian motion as characteristic curves of the operator, Brownian motion can be seen as a stochastic counterpart of a flow to a second-order partial differential operator. Stochastic analysis on manifolds investigates stochastic processes on non-linear state spaces or manifolds. Classical theory can be reformulated in a coordinate-free representation. In that, it is often complicated (or not possible) to formulate objects with coordinates of formula_1. Thus, we require an additional structure in form of a linear connection or Riemannian metric to define martingales and Brownian motion on manifolds. Therefore, controlled by the Riemannian metric, Brownian motion will be a local object by definition. However, its stochastic behaviour determines global aspects of the topology and geometry of the manifold. Brownian motion is defined to be the diffusion process generated by the Laplace-Beltrami operator formula_2 with respect to a manifold formula_3 and can be constructed as the solution to a non-canonical stochastic differential equation on a Riemannian manifold. As there is no Hörmander representation of the operator formula_4 if the manifold is not "parallelizable", i.e. if the tangent bundle is not trivial, there is no canonical procedure to construct Brownian motion. However, this obstacle can be overcome if the manifold is equipped with a connection: We can then introduce the "stochastic horizontal lift" of a semimartingale and the "stochastic development" by the so-called "Eells-Elworthy-Malliavin construction". The latter is a generalisation of a horizontal lift of smooth curves to horizontal curves in the frame bundle, such that the "anti-development" and the horizontal lift are connected by a stochastic differential equation. Using this, we can consider an SDE on the orthonormal frame bundle of a Riemannian manifold, whose solution is Brownian motion, and projects down to the (base) manifold via "stochastic development". A visual representation of this construction corresponds to the construction of a spherical Brownian motion by "rolling without slipping" the manifold along the paths (or footprints) of Brownian motion left in Euclidean space. Stochastic differential geometry provides insight into classical analytic problems, and offers new approaches to prove results by means of probability. For example, one can apply Brownian motion to the Dirichlet problem at infinity for Cartan-Hadamard manifolds or give a probabilistic proof of the Atiyah-Singer index theorem. Stochastic differential geometry also applies in other areas of mathematics (e.g. mathematical finance). For example, we can convert classical arbitrage theory into differential-geometric language (also called "geometric arbitrage theory"). Preface. 
For the reader's convenience and if not stated otherwise, let formula_5 be a filtered probability space and formula_3 be a smooth manifold. The filtration satisfies the usual conditions, i.e. it is right-continuous and complete. We use the Stratonovich integral which obeys the classical chain rule (compared to Itô calculus). The main advantage for us lies in the fact that stochastic differential equations are then stable under diffeomorphisms formula_6 between manifolds, i.e. if formula_7 is a solution, then also formula_8 is a solution under transformations of the stochastic differential equation. Notation: Flow processes. "Flow processes" (also called formula_17"-diffusions") are the probabilistic counterpart of integral curves (flow lines) of vector fields. In contrast, a flow process is defined with respect to a second-order differential operator, and thus, generalises the notion of deterministic flows being defined with respect to a first-order operator. Partial differential operator in Hörmander form. Let formula_18 be a vector field, understood as a derivation by the formula_12-isomorphism formula_19 for some formula_20. The map formula_21 is defined by formula_22. For the composition, we set formula_23 for some formula_20. A partial differential operator (PDO) formula_24 is given in "Hörmander form" if and only there are vector fields formula_25 and formula_17 can be written in the form formula_26. Flow process. Let formula_17 be a PDO in Hörmander form on formula_3 and formula_27 a starting point. An adapted and continuous formula_3-valued process formula_7 with formula_28 is called a "flow process to formula_17 starting in formula_29", if for every test function formula_15 and formula_30 the process formula_31 is a martingale, i.e. formula_32. Remark. For a test function formula_15, a PDO formula_17 in Hörmander form and a flow process formula_33 (starting in formula_29) also holds the flow equation, but in comparison to the deterministic case "only in mean" formula_34. and we can recover the PDO by taking the time derivative at time 0, i.e. formula_35. Lifetime and explosion time. Let formula_36 be open und formula_37 a predictable stopping time. We call formula_38 the "lifetime" of a continuous semimartingale formula_39 on formula_40 if Moreover, if formula_47 for almost all formula_48, we call formula_38 "explosion time". A flow process formula_7 can have a finite lifetime formula_38. By this we mean that formula_49 is defined such that if formula_50, then formula_44-almost surely on formula_51 we have formula_52 in the one-point compactification formula_16. In that case we extend the process path-wise by formula_53 for formula_54. Semimartingales on a manifold. A process formula_7 is a s"emimartingale on formula_3", if for every formula_55 the random variable formula_8 is an formula_56-semimartingale, i.e. the composition of any smooth function formula_57 with the process formula_7 is a real-valued semimartingale. It can be shown that any formula_3-semimartingale is a solution of a stochastic differential equation on formula_3. If the semimartingale is only defined up to a finite lifetime formula_38, we can always construct a semimartingale with infinite lifetime by a transformation of time. A semimartingale has a quadratic variation with respect to a section in the bundle of bilinear forms on formula_9. Introducing the "Stratonovich Integral of a differential form formula_58 along the semimartingale formula_7" we can study the so called "winding behaviour" of formula_7, i.e. 
a generalisation of the winding number. Stratonovich integral of a 1-form. Let formula_7 be an formula_3-valued semimartingale and formula_59 be a 1-form. We call the integral formula_60 the "Stratonovich integral of formula_58 along formula_7". For formula_20 we define formula_61. SDEs on a manifold. A "stochastic differential equation on a manifold" formula_3, denoted "SDE on formula_3", is defined by the pair formula_62 including a bundle homomorphism (i.e. a "homomorphism of vector bundles") or the (formula_63)-tuple formula_64 with vector fields formula_65 given. Using the Whitney embedding, we can show that there is a unique maximal solution to every SDE on formula_3 with initial condition formula_28. If we have identified the maximal solution, we recover directly a flow process formula_66 to the operator formula_17. Definition. An "SDE on formula_3" is a pair formula_62, where formula_70 whereformula_71 is a linear map. The stochastic differential equation formula_62 is denoted by formula_72 or formula_73 The latter follows from setting formula_74 with respect to a basis formula_75 and formula_56-valued semimartingales formula_76 with formula_77. As for given vector fields formula_78 there is exactly one bundle homomorphism formula_79 such that formula_74, our definition of an SDE on formula_3 as formula_64 is plausible. If formula_80 has only finite life time, we can transform the time horizon into the infinite case. Solutions. Let formula_62 be an SDE on formula_3 and formula_81 an formula_82-measurable random variable. Let formula_83 be a continuous adapted formula_3-valued process with life time formula_84 on the same probability space such as formula_80. Then formula_83 is called a "solution" to the SDE formula_72 with initial condition formula_85 up to the life time formula_84, if for every test function formula_86 the process formula_8 is an formula_56-valued semimartingale and for every stopping time formula_87 with formula_88, it holds formula_44-almost surely formula_89, where formula_90 is the push-forward (or differential) at the point formula_7. Following the idea from above, by definition formula_8 is a semimartingale for every test function formula_91, so that formula_7 is a "semimartingale on formula_3". If the lifetime is maximal, i.e. formula_92 formula_44-almost surely, we call this solution the "maximal solution". The lifetime of a maximal solution formula_7 can be extended to all of formula_93 , and after extending formula_57 to the whole of formula_94, the equation formula_95, holdsup to indistinguishability. Remark. Let formula_96 with a formula_97-dimensional Brownian motion formula_98, then we can show that every maximal solution starting in formula_99 is a flow process to the operator formula_100. Martingales and Brownian motion. Brownian motion on manifolds are stochastic flow processes to the Laplace-Beltrami operator. It is possible to construct Brownian motion on Riemannian manifolds formula_101. However, to follow a canonical ansatz, we need some additional structure. Let formula_102 be the orthogonal group; we consider the canonical SDE on the orthonormal frame bundle formula_103 over formula_3, whose solution is Brownian motion. The orthonormal frame bundle is the collection of all sets formula_104 of orthonormal frames of the tangent space formula_105 formula_106 or in other words, the formula_102-principal bundle associated to formula_9. Let formula_107 be an formula_1-valued semimartingale. 
The projection via formula_109 of the solution formula_40 of the SDE formula_108 is a Brownian motion formula_7 on the Riemannian manifold; it is called the "stochastic development" of formula_107 on formula_3. Conversely, we call formula_107 the "anti-development" of formula_40 or, respectively, of formula_110. In short, we get the following relations: formula_111, where formula_107 is the anti-development in formula_1, formula_40 the horizontal lift in formula_103, and formula_7 the stochastic development on formula_3. For a Riemannian manifold we always use the Levi-Civita connection and the corresponding Laplace-Beltrami operator formula_4. The key observation is that there exists a lifted version of the Laplace-Beltrami operator on the orthonormal frame bundle. The fundamental relation reads, for formula_20, formula_112 for all formula_113 with formula_114, and the operator formula_115 on formula_103 is well-defined for so-called "horizontal vector fields". The operator formula_115 is called "Bochner's horizontal Laplace operator". Martingales with linear connection. To define martingales, we need a linear connection formula_116. Using the connection, we can characterise formula_116-martingales as those semimartingales whose anti-development is a local martingale. It is also possible to define formula_116-martingales without using the anti-development. We write formula_117 to indicate that equality holds modulo differentials of local martingales. Let formula_7 be an formula_3-valued semimartingale. Then formula_7 is a "martingale" or "formula_116-martingale", if and only if for every formula_20, it holds that formula_118 Brownian motion on a Riemannian manifold. Let formula_101 be a Riemannian manifold with Laplace-Beltrami operator formula_119. An adapted formula_3-valued process formula_7 with maximal lifetime formula_38 is called a "Brownian motion on formula_101", if for every formula_20 the process formula_120 is a local formula_56-martingale with lifetime formula_38. Hence, Brownian motion is the diffusion process generated by formula_121. Note that this characterisation does not provide a canonical procedure to define Brownian motion.
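As a concrete low-dimensional illustration (not taken from the article, and not the Eells-Elworthy-Malliavin frame-bundle construction described above), Brownian motion on the unit sphere in R^3 can be approximated extrinsically: each Gaussian increment is projected onto the tangent plane of the current point and the result is renormalised back onto the sphere. The step size, horizon, and seed below are arbitrary:

import numpy as np

def brownian_motion_on_sphere(x0, n_steps, dt, rng):
    # Projected Euler scheme: an approximation of Brownian motion on the unit sphere S^2.
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    path = [x.copy()]
    for _ in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt), size=3)   # increment of a flat Brownian motion in R^3
        dB_tangent = dB - np.dot(dB, x) * x          # project onto the tangent plane at x
        x = x + dB_tangent
        x = x / np.linalg.norm(x)                    # retract back onto the sphere
        path.append(x.copy())
    return np.array(path)

rng = np.random.default_rng(42)
path = brownian_motion_on_sphere([0.0, 0.0, 1.0], n_steps=10_000, dt=1e-3, rng=rng)
print(np.abs(np.linalg.norm(path, axis=1) - 1.0).max())   # stays on the sphere up to rounding

For small step sizes this scheme approximates the diffusion generated by one half of the Laplace-Beltrami operator of the sphere, mirroring the intrinsic description given above.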
[ { "math_id": 0, "text": "p(t,x,y)" }, { "math_id": 1, "text": "\\R^d" }, { "math_id": 2, "text": "\\tfrac{1}{2}\\Delta_M" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "\\Delta_M" }, { "math_id": 5, "text": "(\\Omega,\\mathcal{A},(\\mathcal{F}_t)_{t\\geq 0},\\mathbb P)" }, { "math_id": 6, "text": "f:M\\to N" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "f(X)" }, { "math_id": 9, "text": "TM" }, { "math_id": 10, "text": "T^*M" }, { "math_id": 11, "text": "\\Gamma(TM)" }, { "math_id": 12, "text": "C^{\\infty}(M)" }, { "math_id": 13, "text": "X \\circ dZ" }, { "math_id": 14, "text": "C^{\\infty}_c(M)" }, { "math_id": 15, "text": "f\\in C^{\\infty}_c(M)" }, { "math_id": 16, "text": "\\widehat{M}:=M\\cup \\{\\infty\\}" }, { "math_id": 17, "text": "L" }, { "math_id": 18, "text": "A\\in \\Gamma(TM)" }, { "math_id": 19, "text": "\\Gamma(TM)\\to \\operatorname{Der}_{\\mathbb{R}} C^{\\infty}(M),\\quad A\\mapsto (f\\mapsto Af)" }, { "math_id": 20, "text": "f\\in C^{\\infty}(M)" }, { "math_id": 21, "text": "Af:M\\to \\mathbb{R}" }, { "math_id": 22, "text": "Af(x):=A_x(f)" }, { "math_id": 23, "text": "A^2:=A(A(f))" }, { "math_id": 24, "text": "L:C^{\\infty}(M)\\to C^{\\infty}(M)" }, { "math_id": 25, "text": "A_0,A_1,\\dots,A_r\\in \\Gamma(TM)" }, { "math_id": 26, "text": "L=A_0+\\sum\\limits_{i=1}^r A_i^2" }, { "math_id": 27, "text": "x\\in M" }, { "math_id": 28, "text": "X_0=x" }, { "math_id": 29, "text": "x" }, { "math_id": 30, "text": "t\\in\\mathbb{R}_+" }, { "math_id": 31, "text": "N(f)_t:=f(X_t)-f(X_0)-\\int_0^t Lf(X_r)\\mathrm{d}r " }, { "math_id": 32, "text": "\\mathbb{E}\\left(N(f)_t\\mid\\mathcal{F}_s\\right)=N(f)_s,\\quad \\forall s\\leq t" }, { "math_id": 33, "text": "X_t^x" }, { "math_id": 34, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\mathbb{E} f(X_t^x) = \\mathbb{E}\\left[Lf(X_t^x)\\right]" }, { "math_id": 35, "text": "\\left.\\frac{\\mathrm{d}}{\\mathrm{d}t}\\right|_{t=0}\\mathbb{E}f(X_t^x)=Lf(x)" }, { "math_id": 36, "text": "\\empty \\neq U\\subset \\mathbb{R}^n" }, { "math_id": 37, "text": "\\xi>0" }, { "math_id": 38, "text": "\\xi" }, { "math_id": 39, "text": "X=(X_t)_{0\\leq t<\\xi}" }, { "math_id": 40, "text": "U" }, { "math_id": 41, "text": "(\\xi_n)" }, { "math_id": 42, "text": "\\xi_n\\nearrow\\xi" }, { "math_id": 43, "text": "\\xi_n< \\xi" }, { "math_id": 44, "text": "\\mathbb P" }, { "math_id": 45, "text": "\\{0<\\xi<\\infty\\}" }, { "math_id": 46, "text": "(X_{t\\wedge \\xi_n})" }, { "math_id": 47, "text": "X_{\\xi_n(\\omega)}\\to \\partial U" }, { "math_id": 48, "text": "\\omega\\in\\{\\xi<\\infty\\}" }, { "math_id": 49, "text": "X=(X)_{t<\\xi}" }, { "math_id": 50, "text": "t\\to \\xi" }, { "math_id": 51, "text": "\\{\\xi<\\infty\\}" }, { "math_id": 52, "text": "X_t\\to \\infty" }, { "math_id": 53, "text": "X_t:=\\infty" }, { "math_id": 54, "text": "t\\geq \\xi" }, { "math_id": 55, "text": "f\\in C^2(M)" }, { "math_id": 56, "text": "\\R" }, { "math_id": 57, "text": "f" }, { "math_id": 58, "text": "\\alpha" }, { "math_id": 59, "text": "\\alpha\\in\\Gamma(T^*M)" }, { "math_id": 60, "text": "\\int_X\\alpha:=\\int \\alpha (\\circ dX)" }, { "math_id": 61, "text": "f(X)\\circ \\alpha(\\circ dX):=f(X)\\circ d(\\int_X\\alpha)" }, { "math_id": 62, "text": "(A,Z)" }, { "math_id": 63, "text": "r+1" }, { "math_id": 64, "text": "(A_1,\\dots,A_r,Z)" }, { "math_id": 65, "text": "A_1,\\dots,A_r" }, { "math_id": 66, "text": "X^x" }, { "math_id": 67, "text": "Z=(Z_t)_{t\\in\\mathbb{R}_+}" }, { "math_id": 68, "text": "E" }, { "math_id": 69, "text": "A:M\\times E\\to 
TM" }, { "math_id": 70, "text": "A:(x,e)\\mapsto A(x)e" }, { "math_id": 71, "text": "A(x):E\\to TM" }, { "math_id": 72, "text": "dX=A(X)\\circ dZ" }, { "math_id": 73, "text": "dX=\\sum\\limits_{i=1}^r A_i(X)\\circ dZ^i." }, { "math_id": 74, "text": "A_i:=A(\\cdot)e_i" }, { "math_id": 75, "text": "(e_i)_{i=1,\\dots,r}" }, { "math_id": 76, "text": "(Z^{i})_{i=1,\\dots,r}" }, { "math_id": 77, "text": "Z=\\sum\\limits_{i=1}^rZ^{i}e_i" }, { "math_id": 78, "text": "A_1,\\dots,A_r\\in \\Gamma(TM)" }, { "math_id": 79, "text": "A" }, { "math_id": 80, "text": "Z" }, { "math_id": 81, "text": "x_0:\\Omega\\to M" }, { "math_id": 82, "text": "\\mathcal{F}_0" }, { "math_id": 83, "text": "(X_t)_{t<\\zeta}" }, { "math_id": 84, "text": "\\zeta" }, { "math_id": 85, "text": "X_0=x_0" }, { "math_id": 86, "text": "f \\in C^{\\infty}_c(M)" }, { "math_id": 87, "text": "\\tau" }, { "math_id": 88, "text": "0\\leq \\tau < \\zeta" }, { "math_id": 89, "text": "f(X_\\tau) = f(x_0) + \\int_0^\\tau (df)_{X_s} A(X_s)\\circ \\mathrm{d}Z_s " }, { "math_id": 90, "text": "(df)_X:T_xM\\to T_{f(x)}M" }, { "math_id": 91, "text": "f\\in C_c^{\\infty}(M)" }, { "math_id": 92, "text": "\\{\\zeta <\\infty\\}\\subset\\left\\{\\lim\\limits_{t\\nearrow \\zeta}X_t=\\infty \\text{ in }\\widehat{M}\\right\\}" }, { "math_id": 93, "text": "\\R_+" }, { "math_id": 94, "text": "\\widehat{M}" }, { "math_id": 95, "text": "f(X_{t})=f(X_0)+\\int_0^t (df)_X A(X)\\circ dZ, \\quad t\\geq 0" }, { "math_id": 96, "text": "Z=(t,B)" }, { "math_id": 97, "text": "d" }, { "math_id": 98, "text": "B=(B_1,\\dots,B_d)" }, { "math_id": 99, "text": "x_0" }, { "math_id": 100, "text": "L=A_0+\\frac{1}{2}\\sum\\limits_{i=1}^d A_i^2" }, { "math_id": 101, "text": "(M,g)" }, { "math_id": 102, "text": "\\mathcal{O}(d)" }, { "math_id": 103, "text": "O(M)" }, { "math_id": 104, "text": "O_x(M)" }, { "math_id": 105, "text": "T_xM" }, { "math_id": 106, "text": "O(M):=\\bigcup\\limits_{x\\in M}O_x(M)" }, { "math_id": 107, "text": "W" }, { "math_id": 108, "text": "dU_t = \\sum\\limits_{i=1}^d A_i(U_t)\\circ dW_t^i,\\quad U_0=u_0," }, { "math_id": 109, "text": "\\pi:O(M)\\to M" }, { "math_id": 110, "text": "\\pi(U)=X" }, { "math_id": 111, "text": "W\\leftrightarrow U \\leftrightarrow X" }, { "math_id": 112, "text": "\\Delta_M f(x)=\\Delta_{O(M)}(f\\circ \\pi)(u)" }, { "math_id": 113, "text": "u\\in O(M)" }, { "math_id": 114, "text": "\\pi u=x" }, { "math_id": 115, "text": "\\Delta_{O(M)}" }, { "math_id": 116, "text": "\\nabla" }, { "math_id": 117, "text": "\\stackrel{m}{=}" }, { "math_id": 118, "text": "d(f\\circ X)\\,\\,\\stackrel{m}{=}\\,\\,\\tfrac{1}{2}(\\nabla df)(dX,dX)." }, { "math_id": 119, "text": "\\Delta_{M}" }, { "math_id": 120, "text": "f(X)-\\frac{1}{2}\\int\\Delta_{M} f(X)\\mathrm{d}t" }, { "math_id": 121, "text": "\\tfrac{1}{2}\\Delta_{M}" } ]
https://en.wikipedia.org/wiki?curid=73281006
73283196
Ascon (cipher)
Family of authenticated ciphers Ascon is a family of lightweight authenticated ciphers that had been selected by US National Institute of Standards and Technology (NIST) for future standardization of the lightweight cryptography. History. Ascon was developed in 2014 by a team of researchers from Graz University of Technology, Infineon Technologies, Lamarr Security Research, and Radboud University. The cipher family was chosen as a finalist of the CAESAR Competition in February 2019. NIST had announced its decision on February 7, 2023 with the following intermediate steps that would lead to the eventual standardization: Design. The design is based on a sponge construction along the lines of SpongeWrap and MonkeyDuplex. This design makes it easy to reuse Ascon in multiple ways (as a cipher, hash, or a MAC). As of February 2023, the Ascon suite contained seven ciphers, including: The main components have been borrowed from other designs: Parameterization. The ciphers are parameterizable by the key length "k" (up to 128 bits), "rate" (block size) "r", and two numbers of rounds "a", "b". All algorithms support authenticated encryption with plaintext P and additional authenticated data A (that remains unencrypted). The encryption input also includes a public nonce N, the output - authentication tag T, size of the ciphertext C is the same as that of P. The decryption uses N, A, C, and T as inputs and produces either P or signals verification failure if the message has been altered. Nonce and tag have the same size as the key K ("k" bits). In the CAESAR submission, two sets of parameters were recommended: Padding. The data in both A and P is padded with a single bit with the value of 1 and a number of zeros to the nearest multiple of r bits. As an exception, if A is an empty string, there is no padding at all. State. The state consists of 320 bits, so the capacity formula_1. The state is initialized by an initialization vector IV (constant for each cipher type, e.g., hex 80400c0600000000 for Ascon-128) concatenated with K and N. Transformation. The initial state is transformed by applying a times the transformation function p (formula_2). On encryption, each word of A || P is XORed into the state and the p is applied b times (formula_3). The ciphertext C is contained in the first r bits of the result of the XOR. Decryption is near-identical to encryption. The final stage that produces the tag T consists of another application of formula_2; the special values are XORed into the last c bits after the initialization, the end of A, and before the finalization. Transformation p consists of three layers: Test vectors. Hash values of an empty string (i.e., a zero-length input text) for both the XOF and non-XOF variants. Ascon-Hash("") 0x 7346bc14f036e87ae03d0997913088f5f68411434b3cf8b54fa796a80d251f91 Ascon-HashA("") 0x aecd027026d0675f9de7a8ad8ccf512db64b1edcf0b20c388a0c7cc617aaa2c4 Ascon-Xof("", 32) 0x 5d4cbde6350ea4c174bd65b5b332f8408f99740b81aa02735eaefbcf0ba0339e Ascon-XofA("", 32) 0x 7c10dffd6bb03be262d72fbe1b0f530013c6c4eadaabde278d6f29d579e3908d Even a small change in the message will (with overwhelming probability) result in a different hash, due to the avalanche effect. Ascon-Hash("The quick brown fox jumps over the lazy dog") 0x 3375fb43372c49cbd48ac5bb6774e7cf5702f537b2cf854628edae1bd280059e green 0x c9744340ed476ac235dd979d12f5010a7523146ee90b57ccc4faeb864efcd048 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
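The padding rule quoted above is simple enough to sketch directly. The helper below is illustrative only and is not taken from any Ascon reference implementation; messages are represented as Python lists of bits purely for clarity, and the rate value 64 in the usage example is the block size associated with Ascon-128:

def ascon_style_pad(bits, r, is_associated_data=False):
    # Append a single 1 bit, then zeros, up to the next multiple of the rate r bits.
    if is_associated_data and len(bits) == 0:
        return []                        # stated exception: empty associated data is not padded
    padded = list(bits) + [1]
    padded += [0] * (-len(padded) % r)   # zeros up to a multiple of r
    return padded

print(len(ascon_style_pad([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], r=64)))   # 64: one full block
print(ascon_style_pad([], r=64, is_associated_data=True))           # []: no padding at all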
[ { "math_id": 0, "text": "\\Sigma" }, { "math_id": 1, "text": "c=320-r" }, { "math_id": 2, "text": "p^a" }, { "math_id": 3, "text": "p^b" }, { "math_id": 4, "text": "p_C" }, { "math_id": 5, "text": "p_S" }, { "math_id": 6, "text": "p_L" } ]
https://en.wikipedia.org/wiki?curid=73283196
732873
Base rate fallacy
Error in thinking which involves under-valuing base rate information The base rate fallacy, also called base rate neglect or base rate bias, is a type of fallacy in which people tend to ignore the base rate (e.g., general prevalence) in favor of the individuating information (i.e., information pertaining only to a specific case). For example, if someone hears that a friend is very shy and quiet, they might think the friend is more likely to be a librarian than a salesperson, even though there are far more salespeople than librarians overall, hence making it more likely that their friend is actually a salesperson. Base rate neglect is a specific form of the more general extension neglect. It is also called the prosecutor's fallacy or defense attorney's fallacy when applied to the results of statistical tests (such as DNA tests) in the context of law proceedings. These terms were introduced by William C. Thompson and Edward Schumann in 1987, although it has been argued that their definition of the prosecutor's fallacy extends to many additional invalid imputations of guilt or liability that are not analyzable as errors in base rates or Bayes's theorem. False positive paradox. An example of the base rate fallacy is the false positive paradox (also known as the accuracy paradox). This paradox describes situations where there are more false positive test results than true positives (this means the classifier has a low precision). For example, if a facial recognition camera can identify wanted criminals 99% accurately, but analyzes 10,000 people a day, the high accuracy is outweighed by the number of tests, and the program's list of criminals will likely have far more false positives than true positives. The probability of a positive test result is determined not only by the accuracy of the test but also by the characteristics of the sampled population. When the prevalence, the proportion of those who have a given condition, is lower than the test's false positive rate, even tests that have a very low risk of giving a false positive "in an individual case" will give more false than true positives "overall". It is especially counter-intuitive when interpreting a positive result in a test on a low-prevalence population after having dealt with positive results drawn from a high-prevalence population. If the false positive rate of the test is higher than the proportion of the "new" population with the condition, then a test administrator whose experience has been drawn from testing in a high-prevalence population may conclude from experience that a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred. Examples. Example 1: Disease. High-prevalence population. Imagine running an infectious disease test on a population "A" of 1,000 persons, of which 40% are infected. The test has a false positive rate of 5% (0.05) and a false negative rate of zero. The expected outcome of the 1,000 tests on population "A" would be: Infected and test indicates disease (true positive): 1000 × 40% = 400 people would receive a true positive. Uninfected and test indicates disease (false positive): 1000 × 60% × 0.05 = 30 people would receive a false positive. The remaining 570 tests are correctly negative. So, in population "A", a person receiving a positive test could be over 93% confident (400/430) that it correctly indicates infection. Low-prevalence population. Now consider the same test applied to population "B", of which only 2% are infected.
The expected outcome of 1,000 tests on population "B" would be: infected and test indicates disease (true positive): 1,000 × 2% = 20 people would receive a true positive; uninfected and test indicates disease (false positive): 1,000 × 98% × 0.05 = 49 people would receive a false positive; the remaining 931 tests are correctly negative. In population "B", only 20 of the 69 total people with a positive test result are actually infected. So, the probability of actually being infected after one is told that one is infected is only 29% (20/69) for a test that otherwise appears to be "95% accurate". A tester with experience of group "A" might find it a paradox that in group "B", a result that had usually correctly indicated infection is now usually a false positive. The confusion of the posterior probability of infection with the prior probability of receiving a false positive is a natural error after receiving a health-threatening test result. Example 2: Drunk drivers. Imagine that a group of police officers have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. One in a thousand drivers is driving drunk. Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. No other information is known about them. Many would estimate the probability that the driver is drunk as high as 95%, but the correct probability is about 2%. An explanation for this is as follows: on average, for every 1,000 drivers tested, 1 driver is drunk, and that driver is certain to receive a true positive test result, while the other 999 drivers are not drunk, and among them there are 999 × 0.05 = 49.95 false positive test results. Therefore, the probability that any given driver among the 1 + 49.95 = 50.95 positive test results really is drunk is formula_0. The validity of this result does, however, hinge on the validity of the initial assumption that the police officer stopped the driver truly at random, and not because of bad driving. If that or another non-arbitrary reason for stopping the driver was present, then the calculation also involves the probability of a drunk driver driving competently and a non-drunk driver driving (in-)competently. More formally, the same probability of roughly 0.02 can be established using Bayes' theorem. The goal is to find the probability that the driver is drunk given that the breathalyzer indicated they are drunk, which can be represented as formula_1 where "D" means that the breathalyzer indicates that the driver is drunk. Using Bayes' theorem, formula_2 The following information is known in this scenario: formula_3 formula_4 formula_5 and formula_6 As can be seen from the formula, one needs "p"("D") for Bayes' theorem, which can be computed from the preceding values using the law of total probability: formula_7 which gives formula_8 Plugging these numbers into Bayes' theorem, one finds that formula_9 which is the precision of the test. Example 3: Terrorist identification. In a city of 1 million inhabitants, let there be 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that all people present in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software. The software has two failure rates of 1%: if the camera scans a terrorist, a bell will ring 99% of the time and fail to ring 1% of the time (the false negative rate), and if the camera scans a non-terrorist, a bell will not ring 99% of the time, but will ring 1% of the time (the false positive rate). Suppose now that an inhabitant triggers the alarm. 
Someone making the base rate fallacy would infer that there is a 99% probability that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and a calculation below will show that the probability of a terrorist is actually near 1%, not near 99%. The fallacy arises from confusing the natures of two different failure rates. The 'number of non-bells per 100 terrorists' (P(¬B | T), or the probability that the bell fails to ring given the inhabitant is a terrorist) and the 'number of non-terrorists per 100 bells' (P(¬T | B), or the probability that the inhabitant is a non-terrorist given the bell rings) are unrelated quantities; one does not necessarily equal—or even be close to—the other. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore, 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The 'number of non-terrorists per 100 bells' in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell. Imagine that the first city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. The probability that a person triggering the alarm actually is a terrorist is only about 99 in 10,098, which is less than 1% and very, very far below the initial guess of 99%. The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists, and the number of false positives (non-terrorists scanned as terrorists) is so much larger than the true positives (terrorists scanned as terrorists). Multiple practitioners have argued that as the base rate of terrorism is extremely low, using data mining and predictive algorithms to identify terrorists cannot feasibly work due to the false positive paradox. Estimates of the number of false positives for each accurate result vary from over ten thousand to one billion; consequently, investigating each lead would be cost- and time-prohibitive. The level of accuracy required to make these models viable is likely unachievable. Foremost, the low base rate of terrorism also means there is a lack of data with which to make an accurate algorithm. Further, in the context of detecting terrorism false negatives are highly undesirable and thus must be minimised as much as possible; however, this requires increasing sensitivity at the cost of specificity, increasing false positives. It is also questionable whether the use of such models by law enforcement would meet the requisite burden of proof given that over 99% of results would be false positives. Example 4: biological testing of a suspect. A crime is committed. Forensic analysis determines that the perpetrator has a certain blood type shared by 10% of the population. A suspect is arrested, and found to have that same blood type. A prosecutor might charge the suspect with the crime on that basis alone, and claim at trial that the probability that the defendant is guilty is 90%. 
However, this conclusion is only close to correct if the defendant was selected as the main suspect based on robust evidence discovered prior to the blood test and unrelated to it. Otherwise, the reasoning presented is flawed, as it overlooks the high prior probability (that is, prior to the blood test) that he is a random innocent person. Assume, for instance, that 1000 people live in the town where the crime occurred. This means that 100 people live there who have the perpetrator's blood type, of whom only one is the true perpetrator; therefore, the true probability that the defendant is guilty – based only on the fact that his blood type matches that of the killer – is only 1%, far less than the 90% argued by the prosecutor. The prosecutor's fallacy involves assuming that the prior probability of a random match is equal to the probability that the defendant is innocent. When using it, a prosecutor questioning an expert witness may ask: "The odds of finding this evidence on an innocent man are so small that the jury can safely disregard the possibility that this defendant is innocent, correct?" The claim assumes that the probability that evidence is found on an innocent man is the same as the probability that a man is innocent given that evidence was found on him, which is not true. Whilst the former is usually small (10% in the previous example) due to good forensic evidence procedures, the latter (99% in that example) does not directly relate to it and will often be much higher, since, in fact, it depends on the likely quite high prior odds of the defendant being a random innocent person. Examples in law. O. J. Simpson trial. O. J. Simpson was tried and acquitted in 1995 for the murders of his ex-wife Nicole Brown Simpson and her friend Ronald Goldman. Crime scene blood matched Simpson's with characteristics shared by 1 in 400 people. However, the defense argued that the number of people from Los Angeles matching the sample could fill a football stadium and that the figure of 1 in 400 was useless. It would have been incorrect, and an example of prosecutor's fallacy, to rely solely on the "1 in 400" figure to deduce that a given person matching the sample would be likely to be the culprit. In the same trial, the prosecution presented evidence that Simpson had been violent toward his wife. The defense argued that there was only one woman murdered for every 2500 women who were subjected to spousal abuse, and that any history of Simpson being violent toward his wife was irrelevant to the trial. However, the reasoning behind the defense's calculation was fallacious. According to author Gerd Gigerenzer, the correct probability requires additional context: Simpson's wife had not only been subjected to domestic violence, but rather subjected to domestic violence (by Simpson) and killed (by someone). Gigerenzer writes "the chances that a batterer actually murdered his partner, given that she has been killed, is about 8 in 9 or approximately 90%". While most cases of spousal abuse do not end in murder, most cases of murder where there is a history of spousal abuse were committed by the spouse. Sally Clark case. Sally Clark, a British woman, was accused in 1998 of having killed her first child at 11 weeks of age and then her second child at 8 weeks of age. The prosecution had expert witness Sir Roy Meadow, a professor and consultant paediatrician, testify that the probability of two children in the same family dying from SIDS is about 1 in 73 million. 
That was much less frequent than the actual rate measured in historical data – Meadow estimated it from single-SIDS death data, and the assumption that the probability of such deaths should be uncorrelated between infants. Meadow acknowledged that 1-in-73 million is not an impossibility, but argued that such accidents would happen "once every hundred years" and that, in a country of 15 million 2-child families, it is vastly more likely that the double-deaths are due to Münchausen syndrome by proxy than to such a rare accident. However, there is good reason to suppose that the likelihood of a death from SIDS in a family is significantly greater if a previous child has already died in these circumstances (a genetic predisposition to SIDS is likely to invalidate that assumed statistical independence), making some families more susceptible to SIDS; the error is an outcome of the ecological fallacy. The likelihood of two SIDS deaths in the same family cannot be soundly estimated by squaring the likelihood of a single such death in all otherwise similar families. The 1-in-73 million figure greatly underestimated the chance of two successive accidents, but even if that assessment were accurate, the court seems to have missed the fact that the 1-in-73 million number meant nothing on its own. As an "a priori" probability, it should have been weighed against the "a priori" probabilities of the alternatives. Given that two deaths had occurred, one of the following explanations must be true, and all of them are "a priori" extremely improbable: two successive deaths in the same family, both by SIDS; double homicide (the prosecution's case); or other possibilities (including one homicide and one SIDS). It is unclear whether an estimate of the probability for the second possibility was ever proposed during the trial, or whether the comparison of the first two probabilities was understood to be the key estimate to make in the statistical analysis assessing the prosecution's case against the case for innocence. Clark was convicted in 1999, resulting in a press release by the Royal Statistical Society which pointed out the mistakes. In 2002, Ray Hill (a mathematics professor at Salford) attempted to accurately compare the chances of these two possible explanations; he concluded that successive accidents are between 4.5 and 9 times more likely than are successive murders, so that the "a priori" odds of Clark's guilt were between 4.5 to 1 and 9 to 1 against. After the court found that the forensic pathologist who had examined both babies had withheld exculpatory evidence, a higher court later quashed Clark's conviction, on 29 January 2003. Findings in psychology. In experiments, people have been found to prefer individuating information over general information when the former is available. In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if given descriptive information about the particular student even if the new descriptive information was obviously of little or no relevance to school performance. This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics. Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of a simple rule or "heuristic" called representativeness. They argued that many judgments relating to likelihood, or to cause and effect, are based on how representative one thing is of another, or of a category. 
Kahneman considers base rate neglect to be a specific form of extension neglect. Richard Nisbett has argued that some attributional biases like the fundamental attribution error are instances of the base rate fallacy: people do not use the "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simpler dispositional attributions. There is considerable debate in psychology on the conditions under which people do or do not appreciate base rate information. Researchers in the heuristics-and-biases program have stressed empirical findings showing that people tend to ignore base rates and make inferences that violate certain norms of probabilistic reasoning, such as Bayes' theorem. The conclusion drawn from this line of research was that human probabilistic thinking is fundamentally flawed and error-prone. Other researchers have emphasized the link between cognitive processes and information formats, arguing that such conclusions are not generally warranted. Consider again Example 2 from above. The required inference is to estimate the (posterior) probability that a (randomly picked) driver is drunk, given that the breathalyzer test is positive. Formally, this probability can be calculated using Bayes' theorem, as shown above. However, there are different ways of presenting the relevant information. Consider the following, formally equivalent variant of the problem:  1 out of 1000 drivers are driving drunk. The breathalyzers never fail to detect a truly drunk person. For 50 out of the 999 drivers who are not drunk the breathalyzer falsely displays drunkenness. Suppose the policemen then stop a driver at random, and force them to take a breathalyzer test. It indicates that they are drunk. No other information is known about them. Estimate the probability the driver is really drunk. In this case, the relevant numerical information—"p"(drunk), "p"("D" | drunk), "p"("D" | sober)—is presented in terms of natural frequencies with respect to a certain reference class (see reference class problem). Empirical studies show that people's inferences correspond more closely to Bayes' rule when information is presented this way, helping to overcome base-rate neglect in laypeople and experts. As a consequence, organizations like the Cochrane Collaboration recommend using this kind of format for communicating health statistics. Teaching people to translate these kinds of Bayesian reasoning problems into natural frequency formats is more effective than merely teaching them to plug probabilities (or percentages) into Bayes' theorem. It has also been shown that graphical representations of natural frequencies (e.g., icon arrays, hypothetical outcome plots) help people to make better inferences. One important reason why natural frequency formats are helpful is that this information format facilitates the required inference because it simplifies the necessary calculations. This can be seen when using an alternative way of computing the required probability "p"(drunk|"D"): formula_10 where "N"(drunk ∩ "D") denotes the number of drivers that are drunk and get a positive breathalyzer result, and "N"("D") denotes the total number of cases with a positive breathalyzer result. The equivalence of this equation to the above one follows from the axioms of probability theory, according to which "N"(drunk ∩ "D") = "N" × "p" ("D" | drunk) × "p" (drunk). Importantly, although this equation is formally equivalent to Bayes' rule, it is not psychologically equivalent. 
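The contrast between the two formats can be made concrete with a short computation. The following Python sketch (an illustration added here, not taken from the cited studies; the variable names are arbitrary) evaluates the breathalyzer example both ways, using the numbers given above.

# Probability format: Bayes' theorem with the values from Example 2.
p_drunk = 0.001           # prior: 1 in 1,000 drivers is drunk
p_pos_given_drunk = 1.0   # the breathalyzer never misses a drunk driver
p_pos_given_sober = 0.05  # 5% false positive rate on sober drivers

p_pos = p_pos_given_drunk * p_drunk + p_pos_given_sober * (1 - p_drunk)
print(p_pos_given_drunk * p_drunk / p_pos)   # ~0.019627

# Natural frequency format: count cases in a reference class of 1,000 drivers.
drunk_and_positive = 1    # the 1 drunk driver, always detected
sober_and_positive = 50   # roughly 5% of the 999 sober drivers
print(drunk_and_positive / (drunk_and_positive + sober_and_positive))  # 1/51 ~ 0.0196

Both computations give essentially the same posterior probability; the second needs only a count and a division, which is the simplification discussed next.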
Using natural frequencies simplifies the inference because the required mathematical operation can be performed on natural numbers, instead of normalized fractions (i.e., probabilities), because it makes the high number of false positives more transparent, and because natural frequencies exhibit a "nested-set structure". Not every frequency format facilitates Bayesian reasoning. Natural frequencies refer to frequency information that results from "natural sampling", which preserves base rate information (e.g., number of drunken drivers when taking a random sample of drivers). This is different from "systematic sampling", in which base rates are fixed "a priori" (e.g., in scientific experiments). In the latter case it is not possible to infer the posterior probability "p"(drunk | positive test) from comparing the number of drivers who are drunk and test positive compared to the total number of people who get a positive breathalyzer result, because base rate information is not preserved and must be explicitly re-introduced using Bayes' theorem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1/50.95 \\approx 0.019627" }, { "math_id": 1, "text": "p(\\mathrm{drunk}\\mid D)" }, { "math_id": 2, "text": "p(\\mathrm{drunk}\\mid D) = \\frac{p(D \\mid \\mathrm{drunk})\\, p(\\mathrm{drunk})}{p(D)}." }, { "math_id": 3, "text": "p(\\mathrm{drunk}) = 0.001," }, { "math_id": 4, "text": "p(\\mathrm{sober}) = 0.999," }, { "math_id": 5, "text": "p(D\\mid\\mathrm{drunk}) = 1.00," }, { "math_id": 6, "text": "p(D\\mid\\mathrm{sober}) = 0.05." }, { "math_id": 7, "text": "p(D) = p(D \\mid \\mathrm{drunk})\\,p(\\mathrm{drunk})+p(D\\mid\\mathrm{sober})\\,p(\\mathrm{sober})" }, { "math_id": 8, "text": "p(D)= (1.00 \\times 0.001) + (0.05 \\times 0.999) = 0.05095." }, { "math_id": 9, "text": "p(\\mathrm{drunk}\\mid D) = \\frac{1.00 \\times 0.001}{0.05095} \\approx 0.019627," }, { "math_id": 10, "text": "p(\\mathrm{drunk}\\mid D) = \\frac{N(\\mathrm{drunk} \\cap D)}{N(D)} = \\frac{1}{51} = 0.0196" } ]
https://en.wikipedia.org/wiki?curid=732873
73292008
Small set expansion hypothesis
Computational hardness assumption The small set expansion hypothesis or small set expansion conjecture in computational complexity theory is an unproven computational hardness assumption. Under the small set expansion hypothesis it is assumed to be computationally infeasible to distinguish between a certain class of expander graphs called "small set expanders" and other graphs that are very far from being small set expanders. This assumption implies the hardness of several other computational problems, and the optimality of certain known approximation algorithms. The small set expansion hypothesis is related to the unique games conjecture, another unproven computational hardness assumption according to which accurately approximating the value of certain games is computationally infeasible. If the small set expansion hypothesis is true, then so is the unique games conjecture. Background. The "edge expansion" of a set formula_0 of vertices in a graph formula_1 is defined as formula_2 where the vertical bars denote the number of elements of a set, and formula_3 denotes the set of edges that have one endpoint in formula_0 and the other endpoint in its complement. This number can be as low as zero, when formula_0 is a connected component of the graph, because in this case there are no edges connecting formula_0 to other parts of the graph. A graph is called regular or formula_4-regular when every vertex is incident to the same number of edges, formula_4, the degree of the graph. For a formula_4-regular graph, the maximum possible edge expansion is formula_4. This expansion is achieved by any subset formula_0 that induces an independent set, as in this case all of the edges that touch vertices in formula_0 belong to formula_3. The edge expansion of a graph with formula_5 vertices is defined to be the minimum edge expansion among its subsets of at most formula_6 vertices. Instead, the "small set expansion" is defined as the same minimum, but only over smaller subsets, of at most formula_7 vertices. Informally, a small set expander is a graph whose small set expansion is large. Statement. The small set expansion hypothesis uses a real number formula_8 as a parameter to formalize what it means for the small set expansion of a graph to be large or small. It asserts that, for every formula_9, it is NP-hard to distinguish between formula_4-regular graphs with small set expansion at least formula_10 (good small set expanders), and formula_4-regular graphs with small set expansion at most formula_11 (very far from being a small set expander). Here, the degree formula_4 is a variable that might depend on the choice of formula_8, unlike in many applications of expander graphs where the degree is assumed to be a fixed constant. Consequences. The small set expansion hypothesis implies the NP-hardness of several other computational problems. Because it is only a hypothesis, this does not prove that these problems actually are NP-hard. Nevertheless, it suggests that it would be difficult to find an efficient solution for these problems, because solving any one of them would also solve other problems whose solution has so far been elusive (including the small set expansion problem itself). In the other direction, this implication opens the door to disproving the small set expansion hypothesis, by providing other problems through which it could be attacked. 
In particular, there exists a polynomial-time reduction from the recognition of small set expanders to the problem of determining the approximate value of unique games, showing that the small set expansion hypothesis implies the unique games conjecture. Boaz Barak has suggested more strongly that these two hypotheses are equivalent. In fact, the small set expansion hypothesis is equivalent to a restricted form of the unique games conjecture, asserting the hardness of unique games instances whose underlying graphs are small set expanders. On the other hand, it is possible to quickly solve unique games instances whose graph is "certifiably" a small set expander, in the sense that their expansion can be verified by sum-of-squares optimization. Another application of the small set expansion hypothesis concerns the computational problem of approximating the treewidth of graphs, a structural parameter closely related to expansion. For graphs of treewidth formula_12, the best approximation ratio known for a polynomial time approximation algorithm is formula_13. The small set expansion hypothesis, if true, implies that there does not exist an approximation algorithm for this problem with constant approximation ratio. It also can be used to imply the inapproximability of finding a complete bipartite graph with the maximum number of edges (possibly restricted to having equal numbers of vertices on each side of its bipartition) in a larger graph. The small set expansion hypothesis implies the optimality of known approximation ratios for certain variants of the edge cover problem, in which one must choose as few vertices as possible to cover a given number of edges in a graph. History and partial results. The small set expansion hypothesis was formulated, and connected to the unique games conjecture, by Prasad Raghavendra and David Steurer in 2010, as part of a body of work for which they were given the 2018 Michael and Sheila Held Prize of the National Academy of Sciences. One approach to resolving the small set expansion hypothesis is to seek approximation algorithms for the edge expansion of small vertex sets that would be good enough to distinguish the two classes of graphs in the hypothesis. In this light, the best approximation known, for the edge expansion of subsets of at most formula_14 vertices in a formula_4-regular graph, has an approximation ratio of formula_15. This is not strong enough to refute the hypothesis; doing so would require finding an algorithm with a bounded approximation ratio. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
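To make the quantities in the background section concrete, the following Python sketch (an illustration written for this article, not taken from the cited work; the graph and sizes are arbitrary) computes the edge expansion of every sufficiently small vertex subset of a small regular graph by brute force. On the large instances to which the hypothesis refers, such exhaustive search is of course infeasible; the hypothesis asserts that even distinguishing the two extreme cases in the statement is computationally hard.

# Brute-force edge expansion and small set expansion of a small 4-regular
# circulant graph (vertex i is adjacent to i+-1 and i+-2 modulo n).
from itertools import combinations
from math import log2

n = 8
edges = {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, 2)}

def edge_expansion(subset):
    boundary = sum(1 for e in edges if len(e & subset) == 1)  # edges leaving the subset
    return boundary / len(subset)

max_size = int(n / log2(n))   # "small" subsets: at most n / log2(n) vertices
small_set_expansion = min(
    edge_expansion(frozenset(s))
    for k in range(1, max_size + 1)
    for s in combinations(range(n), k)
)
print(small_set_expansion)    # 3.0 for this particular graph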
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "\\frac{|\\partial X|}{|X|}," }, { "math_id": 3, "text": "\\partial X" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "n/2" }, { "math_id": 7, "text": "n/\\log_2 n" }, { "math_id": 8, "text": "\\varepsilon" }, { "math_id": 9, "text": "\\varepsilon>0" }, { "math_id": 10, "text": "(1-\\varepsilon)d" }, { "math_id": 11, "text": "\\varepsilon d" }, { "math_id": 12, "text": "w" }, { "math_id": 13, "text": "O(\\sqrt{\\log w})" }, { "math_id": 14, "text": "n/\\log n" }, { "math_id": 15, "text": "O(\\sqrt{\\log n\\log\\log n})" } ]
https://en.wikipedia.org/wiki?curid=73292008
73297602
Round (cryptography)
Repeated basic operation in a cryptosystem In cryptography, a round or round function is a basic transformation that is repeated (iterated) multiple times inside the algorithm. Splitting a large algorithmic function into rounds simplifies both implementation and cryptanalysis. For example, encryption using an oversimplified three-round cipher can be written as formula_0, where C is the ciphertext and P is the plaintext. Typically, rounds formula_1 are implemented using the same function, parameterized by the round constant and, for block ciphers, the "round key" from the key schedule. Parameterization is essential to reduce the self-similarity of the cipher, which could lead to slide attacks. Increasing the number of rounds "almost always" protects against differential and linear cryptanalysis, as for these tools the effort grows exponentially with the number of rounds. However, increasing the number of rounds does not "always" make weak ciphers into strong ones, as some attacks do not depend on the number of rounds. The idea of an iterative cipher using repeated application of simple non-commuting operations producing diffusion and confusion goes as far back as 1945, to the then-secret version of C. E. Shannon's work "Communication Theory of Secrecy Systems"; Shannon was inspired by mixing transformations used in the field of dynamical systems theory (cf. horseshoe map). Most modern ciphers use an iterative design, with the number of rounds usually chosen between 8 and 32 (with 64 and even 80 used in cryptographic hashes). For some Feistel-like cipher descriptions, notably that of RC5, the term "half-round" is used to define the transformation of part of the data (a distinguishing feature of the Feistel design). This operation corresponds to a full round in traditional descriptions of Feistel ciphers (like DES). Round constants. Inserting round-dependent constants into the encryption process breaks the symmetry between rounds and thus thwarts the most obvious slide attacks. The technique is a standard feature of most modern block ciphers. However, a poor choice of round constants or unintended interrelations between the constants and other cipher components could still allow slide attacks (e.g., attacking the initial version of the format-preserving encryption mode FF3). Many lightweight ciphers utilize very simple key scheduling: the round keys come from adding the round constants to the encryption key. A poor choice of round constants in this case might make the cipher vulnerable to invariant attacks; ciphers broken this way include SCREAM and Midori64. Optimization. Daemen and Rijmen assert that one of the goals of optimizing the cipher is reducing the overall workload, the product of the round complexity and the number of rounds. There are two approaches to address this goal: Reduced-round ciphers. Cryptanalysis techniques include the use of versions of ciphers with fewer rounds than specified by their designers. Since a single round is usually cryptographically weak, many attacks that fail to work against the full version of ciphers will work on such "reduced-round" variants. The result of such an attack provides valuable information about the strength of the algorithm; a typical break of the full cipher starts out as a success against a reduced-round one. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
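As a toy illustration of the iterated structure described above, the following Python sketch (written for this article; it is not a real cipher and has no cryptographic strength) applies the same round function three times, parameterized by round keys derived from round constants so that the rounds are not identical.

# Toy iterated "cipher" on 8-bit blocks, for illustration only.
ROUNDS = 3
ROUND_CONSTANTS = [0x1D, 0x3A, 0x5F]          # arbitrary, distinct per round

def key_schedule(key, rounds=ROUNDS):
    # Simplistic schedule: round key = master key XOR round constant
    return [(key ^ rc) & 0xFF for rc in ROUND_CONSTANTS[:rounds]]

def round_function(block, round_key):
    block = (block + round_key) & 0xFF              # key addition
    block = ((block << 3) | (block >> 5)) & 0xFF    # fixed 8-bit rotation (diffusion)
    return block ^ 0xA5                             # fixed constant mixing

def encrypt(plaintext, key):
    state = plaintext
    for rk in key_schedule(key):                    # C = R3(R2(R1(P)))
        state = round_function(state, rk)
    return state

print(hex(encrypt(0x42, 0x2B)))

Dropping the round constants would make all three rounds identical, which is exactly the kind of self-similarity that slide attacks exploit.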
[ { "math_id": 0, "text": "C = R_3(R_2(R_1(P)))" }, { "math_id": 1, "text": "R_1, R_2, ..." } ]
https://en.wikipedia.org/wiki?curid=73297602
73305281
Green solvent
Environmentally sustainable solvent Green solvents are environmentally friendly chemical solvents that are used as a part of green chemistry. They came to prominence in 2015, when the UN defined a new sustainability-focused development plan based on 17 sustainable development goals, recognizing the need for green chemistry and green solvents for a more sustainable future. Green solvents are developed as more environmentally friendly solvents, derived from the processing of agricultural crops or otherwise sustainable methods as alternatives to petrochemical solvents. Some of the expected characteristics of green solvents include ease of recycling, ease of biodegradation, and low toxicity. Classification. The following are qualified as green solvents, based on their production methods or on the raw materials from which they are produced: Water. Water is not an organic solvent, as it contains no carbon atoms. It is a polar protic solvent due to its chemical structure. It is non-toxic and renewable. In living matter, many ions and proteins are dissolved in water. It is the cheapest and most abundant solvent for a large range of reactions and processes in industrial chemistry. There are cases where traditional organic solvents can be replaced by aqueous preparations. Water-based coatings have largely replaced standard petroleum-based paints for the construction industry; however, solvent-based anti-corrosion paints remain among the most used today. Supercritical fluids. Some substances that occur in the gas phase at ambient temperature and pressure can act as solvents if elevated to temperatures and pressures above their critical point. At this point, gas and liquid states exist in a single phase with properties intermediate between liquid and gas, including the mobility of gas and the dissolving power of liquid: a supercritical fluid. Solvents derived from lipids. Lipids (triglycerides) themselves can be used as solvents, but are mostly hydrolyzed to fatty acids and glycerol (glycerin). Fatty acids can be esterified with an alcohol to give fatty acid esters, e.g., FAMEs (fatty acid methyl esters) if the esterification is performed with methanol. Usually derived from natural gas or petroleum, the methanol used to produce FAMEs can also be obtained by other routes, including gasification of biomass and household hazardous waste. Glycerol from lipid hydrolysis can be used as a solvent in synthetic chemistry, as can some of its derivatives. Deep eutectic solvents. A mixture whose melting point is lower than that of the constituents is called an eutectic mixture; many solid substances mixed in this way become liquids that can be used as solvents, especially when the melting-point depression is very large, hence the term deep eutectic solvent (DES). One of the most commonly used substances to obtain DES is the ammonium salt choline chloride. Smith, Abbott, and Ryder report that a mixture of urea (melting point: 133 °C) and choline chloride (melting point: 302 °C) in a 2:1 molar ratio has a melting point of 12 °C. Natural deep eutectic solvents (NADES) are also a research area relevant to green chemistry, being easy to produce from two low-cost and well-known ecotoxicity components, a hydrogen-bond acceptor, and a hydrogen-bond donor. Terpenes. Solvents in a diverse class of natural substances called terpenes are obtained by extraction from certain parts of plants. All terpenes are structurally presented as multiples of isoprene with the gross formula (C5H8)n. 
Turpentine, formerly used as a solvent in organic coatings, is now largely replaced by petroleum hydrocarbons. Nowadays, it is mainly used as a source of its constituents, including α-pinene and β-pinene. Ionic liquids. Ionic liquids are molten organic salts that are generally fluid at room temperature. Frequently used cations include imidazolium, pyridinium, ammonium and phosphonium; frequently used anions include halides, tetrafluoroborate, hexafluorophosphate, and nitrate. Bubalo et al. (2015) argue that ionic liquids are non-flammable, and chemically, electrochemically and thermally stable. These properties allow ionic liquids to be used as green solvents, as their low volatility limits VOC emissions compared to conventional solvents. The ecotoxicity and poor degradability of ionic liquids have been recognized in the past, and the resources typically used for their production are non-renewable, as is the case for imidazole and halogenated alkanes (derived from petroleum). Ionic liquids produced from renewable and biodegradable materials have recently emerged, but their availability is low because of high production costs. Switchable solvents. Bubbling CO2 into water or an organic solvent results in changes to certain properties of the liquid such as its polarity, ionic strength, and hydrophilicity. This allows an organic solvent to form a homogeneous mixture with the otherwise immiscible water. This process is reversible, and was developed by Jessop et al. (2012) for potential uses in synthetic chemistry, extraction and separation of various substances. How green a switchable solvent is depends on the energy and material savings it provides; thus, one of the advantages of switchable solvents is the potential reuse of solvent and water in post-process applications. Solvents from waste materials. First-generation biorefineries exploit food-based substances such as starch and vegetable oils. For example, corn grain is used to make ethanol. Second-generation biorefineries use residues or wastes generated by various industries as feedstock for the manufacture of their solvents. 2-Methyltetrahydrofuran, derived from lignocellulosic waste, has the potential to replace tetrahydrofuran, toluene, DCM, and diethyl ether in some applications. Levulinic acid esters from the same source have the potential to replace DCM in paint cleaners and strippers. Used cooking oils can be used to produce FAMEs. Glycerol, obtained as a byproduct of the synthesis of these, can in turn be used to produce various solvents such as 2,2-dimethyl-1,3-dioxolane-4-methanol, usable as a solvent in the formulation of inks and cleaners. Fusel oil, an isomeric mixture of amyl alcohols, is a byproduct of ethanol production from sugars. Green solvents such as isoamyl acetate or isoamyl methyl carbonate can be derived from fusel oil. When these green solvents are used to manufacture nail polishes, VOC emissions are reduced by at least 68% compared to the emissions caused by using traditional solvents. Petrochemical solvents with green characteristics. Due to the high price of new sustainable solvents, in 2017, Clark et al. listed twenty-five solvents that are currently considered acceptable to replace hazardous solvents, even if they are derived from petrochemicals. These include propylene carbonate and dibasic esters (DBEs). Propylene carbonate and DBEs have been the subject of monographs on solvent substitution. 
Propylene carbonate and two DBEs are considered green in the manufacturer GlaxoSmithKline's (GSK) Solvent Sustainability Guide, which is used in the pharmaceutical industry. Propylene carbonate can be produced from renewable resources, but the DBEs that have appeared on the market in recent years are obtained as by-products of the synthesis of polyamides, derived from petroleum. Other petrochemical solvents are variously referred to as green solvents, such as halogenated hydrocarbons like parachlorobenzotrifluoride, which has been used since the early 1990s in paints to replace smog-forming solvents. Siloxanes are compounds known in industry in the form of polymers (silicones, R-SiO-R') for their thermal stability and elastic and non-stick properties. The early 1990s saw the emergence of low molecular weight siloxanes (methylsiloxanes), which can be used as solvents in precision cleaning, replacing stratospheric ozone-depleting solvents. A final category of petrochemical solvents that qualify as green involves polymeric solvents. The International Union of Pure and Applied Chemistry defines the term "polymer solvent" as "a polymer that acts as a solvent for low-molecular weight compounds". In industrial chemistry, polyethylene glycols (PEGs, H(OCH2CH2)nOH) are one of the most widely used polymeric solvent families. PEGs with molecular weights below 600 Da are viscous liquids at room temperature, while heavier PEGs are waxy solids. Soluble in water and readily biodegradable, liquid PEGs have the advantage of negligible volatility (below 0.01 mmHg, or 1.3 Pa, at 20 °C). PEGs are synthesized from ethylene glycol and ethylene oxide, both of which are petrochemical-derived molecules, though ethylene glycol from renewable sources (cellulose) is commercially available. Physical properties. The physical properties of solvents are important when selecting a solvent for given reaction conditions. In particular, their dissolution properties make it possible to assess the suitability of a particular solvent for a chemical operation, such as an extraction or a washing. Evaporation is also important to consider, as it can be indicative of potential volatile organic compound (VOC) emissions. The following table shows selected properties of green solvents in each category: Other categories of green solvent have additional properties that preclude their usage in various applications: Deep eutectic solvents (DES) exhibit extensive hydrogen bonding, which lowers their melting points. Some of them have melting points below 50 °C, and they are investigated because they can be cheap, safe and useful in industry. Most DES have a higher density than water, but the range of densities is difficult to generalize, as it depends on the components from which they are synthesized and on their molecular arrangement and vacancies. For example, octylammonium bromide/decanoic acid (1:2 molar ratio) has a density of 0.8889 g·cm−3, lower than that of water, while choline chloride/trifluoroacetamide (1:2) reaches 1.4851 g·cm−3. Their miscibility is also composition-dependent. Fatty acid methyl esters have been investigated and compared to fossil diesel. At 20 °C or 40 °C, those solvents have a lower density than water at 4 °C (the temperature at which water is densest). Their kinematic viscosity depends on whether they are saturated or unsaturated, and on the temperature. 
At 40 °C, for saturated FAMEs, it goes from 0.340 (acetate) to 6.39 (nonadecanoate), and for unsaturated FAMEs, it goes from 5.61 for the stearate to 7.21 for the erucate. Their dielectric constant decreases as their alkyl chain gets longer. For example, acetate has a tiny alkyl chain and has a dielectric constant of ε40= 6.852 and ε40= 2.982 for the nonadecanoate. The properties of switchable solvents are caused by the strength of their conjugate acid's pKa and octanol-water partition coefficient ratio Kow. They must have a pKa above 9.5 to be protonated by carbonated water and also a log(Kow) between 1.2 and 2.5 to be switchable, otherwise they will be hydrophilic or hydrophobic. These properties depend on the volumetric ratio of the compound compared to water. For example, N,N,N′-Tributylpentanamidine is a switchable solvent, and for a volumetric ratio of compound to water of 2:1, it has a log(Kow)= 5.99, which is higher than 2.5. Ionic liquids with low melting points are associated with asymmetric cations, and liquids with high melting point are associated with symmetric cations. Additionally, if they have branched alkyl chains, they will have a higher melting point. They are more dense than water, ranging from 1.05 to 1.64 g·cm−3 at 20 °C and from 1.01 to 1.57 at 90 °C. Applications. Some green solvents, in addition to being more sustainable, have been found to have more efficient physicochemical properties or reaction yields than when using traditional solvents. However, the results obtained are for the most part observations from experiments on particular green solvents and cannot be generalized. The effectiveness of a green solvent is quantified by calculating the "E factor", which is a ratio of waste materials to desired product produced through a process. formula_2 Organic synthesis. Green solvent efficiency has mainly been proven in extractions and separations in comparison to traditional solvents. Industrial chemistry. Solvent manufacturers also provide industrial companies with databases to propose green alternative solvent mixtures to those originally used in industrial processes with similar efficiency and reaction yield. However, environmental and safety requirements are not always considered in these suggestions. Safety. The use of green solvents is increasingly preferred because of their lower environmental impact. These solvents still present dangers for human health as well as for the environment. However, for a number of green solvents, their impact is still unclear, or at least, not categorized yet. Listed here is selected information from the safety data sheets of common green solvents: Solvents derived from carbohydrates. For ethanol, the American Conference of Governmental Industrial Hygienists, shortened ACGIH, advises a short-term exposure limit of 1000 ppm to avoid irritating the respiratory tract. The French National Agency for Food, Environmental, and Occupational Health Safety (ANSES) has recommended a short-term occupational exposure limit value of 100 mg/m3 for butan-1-ol, a solvent used in paints, cleaners, and degreasers, in order to prevent irritation of the mucous membranes of the eyes and upper airways. Since 1998, the ACGIH has suggested an 8-hour exposure limit value (ELV) of 20 ppm of butan-1-ol to prevent irritation of the upper respiratory tract and eyes. Male rats exposed to THFA develop reproductive toxicity. Moreover, it has an impact on fetal and embryonic development in rats. 
In 1993, the American Industrial Hygiene Association suggested an ELV of 2 ppm for THFA to prevent testicular degeneration, based on the no-observed-effect level of two subchronic investigations in rats and dogs. Deep eutectic solvents. DES components, according to Wazeer, Hayyan, and Hadj-Kali, are typically non-toxic and biodegradable. According to Hayyan et al., the DES they investigated were more harmful to the small crustacean "Artemia" than each of their individual components, which could be attributed to synergy. The abbreviation NADES refers to DES that contain only materials sourced from renewable resources. Compared to other DES, these would typically be less hazardous. Legislation. Because green solvents are relatively recent developments, few laws related to their regulation have been enacted beyond the standard workplace safety precautions already in place, and laws that enforce the use of green solvents are not widespread. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
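As a small numerical illustration of two quantitative rules mentioned earlier in this article (the E factor from the Applications section, and the pKa/log(Kow) window for switchable solvents), the following Python sketch encodes them directly; the numbers passed in are placeholders, not measured data.

# Sketch of two quantitative rules from the article; inputs are illustrative.
def e_factor(mass_of_waste_kg, mass_of_product_kg):
    """E factor = mass of all waste materials / mass of desired product."""
    return mass_of_waste_kg / mass_of_product_kg

def is_switchable(pka_conjugate_acid, log_kow):
    """Criterion described above: pKa above 9.5 and log(Kow) between 1.2 and 2.5."""
    return pka_conjugate_acid > 9.5 and 1.2 <= log_kow <= 2.5

print(e_factor(mass_of_waste_kg=25.0, mass_of_product_kg=5.0))   # 5.0
print(is_switchable(pka_conjugate_acid=10.1, log_kow=1.8))       # True
print(is_switchable(pka_conjugate_acid=10.1, log_kow=5.99))      # False: too hydrophobic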
[ { "math_id": 0, "text": "d_{4}^{40}" }, { "math_id": 1, "text": "d_{4}^{20}" }, { "math_id": 2, "text": "E factor=\\frac{\\text{Mass of all waste materials}}{\\text{Mass of desired product}}" } ]
https://en.wikipedia.org/wiki?curid=73305281
7330660
Numerical linear algebra
Field of mathematics Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics. Matrix methods are particularly used in finite difference methods, finite element methods, and the modeling of differential equations. Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations",x even though it is a comparatively small field. Because many properties of matrices and vectors also apply to functions and operators, numerical linear algebra can also be viewed as a type of functional analysis which has a particular emphasis on practical algorithms.ix Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation. Numerical linear algebra's central concern with developing algorithms that do not introduce errors when applied to real data on a finite precision computer is often achieved by iterative methods rather than direct ones. History. Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations. The first serious attempt to minimize computer error in the application of algorithms to real data is John von Neumann and Herman Goldstine's work in 1947. The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems. Matrix decompositions. Partitioned matrices. For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors. 
For example, when solving the linear system formula_0, rather than understanding "x" as the product of formula_1 with "b", it is helpful to think of "x" as the vector of coefficients in the linear expansion of "b" in the basis formed by the columns of "A".8 Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix "A", and another over the rows of "A". For example, for matrices formula_2 and vectors formula_3 and formula_4, we could use the column partitioning perspective to compute "y" := "Ax" + "y" as for q = 1:n for p = 1:m y(p) = A(p,q)*x(q) + y(p) end end Singular value decomposition. The singular value decomposition of a matrix formula_2 is formula_5 where "U" and "V" are unitary, and formula_6 is diagonal. The diagonal entries of formula_6 are called the singular values of "A". Because singular values are the square roots of the eigenvalues of formula_7, there is a tight connection between the singular value decomposition and eigenvalue decompositions. This means that most methods for computing the singular value decomposition are similar to eigenvalue methods;36 perhaps the most common method involves Householder procedures.253 QR factorization. The QR factorization of a matrix formula_2 is a matrix formula_8 and a matrix formula_9 so that "A = QR", where "Q" is orthogonal and "R" is upper triangular.50223 The two main algorithms for computing QR factorizations are the Gram–Schmidt process and the Householder transformation. The QR factorization is often used to solve linear least-squares problems, and eigenvalue problems (by way of the iterative QR algorithm). LU factorization. An LU factorization of a matrix "A" consists of a lower triangular matrix "L" and an upper triangular matrix "U" so that "A = LU". The matrix "U" is found by an upper triangularization procedure which involves left-multiplying "A" by a series of matrices formula_10 to form the product formula_11, so that equivalently formula_12.14796 Eigenvalue decomposition. The eigenvalue decomposition of a matrix formula_13 is formula_14, where the columns of "X" are the eigenvectors of "A", and formula_15 is a diagonal matrix the diagonal entries of which are the corresponding eigenvalues of "A".33 There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative.192 Algorithms. Gaussian elimination. From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix "A" into its "LU" factorization, which Gaussian elimination accomplishes by left-multiplying "A" by a succession of matrices formula_16 until "U" is upper triangular and "L" is lower triangular, where formula_17.148 Naive programs for Gaussian elimination are notoriously highly unstable, and produce huge errors when applied to matrices with many significant digits. The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable.151 Solutions of linear systems. Numerical linear algebra characteristically approaches matrices as a concatenation of columns vectors. In order to solve the linear system formula_0, the traditional algebraic approach is to understand "x" as the product of formula_1 with "b". 
Numerical linear algebra instead interprets "x" as the vector of coefficients of the linear expansion of "b" in the basis formed by the columns of "A".8 Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix "A" and the vectors "x" and "b", which may make one factorization much easier to obtain than others. If "A" = "QR" is a QR factorization of "A", then equivalently formula_18. This is as easy to compute as a matrix factorization.54 If formula_14 is an eigendecomposition "A", and we seek to find "b" so that "b" = "Ax", with formula_19 and formula_20, then we have formula_21.33 This is closely related to the solution to the linear system using the singular value decomposition, because singular values of a matrix are the absolute values of its eigenvalues, which are also equivalent to the square roots of the absolute values of the eigenvalues of the Gram matrix formula_22. And if "A" = "LU" is an "LU" factorization of "A", then "Ax" = "b" can be solved using the triangular matrices "Ly" = "b" and "Ux" = "y".14799 Least squares optimisation. Matrix decompositions suggest a number of ways to solve the linear system "r" = "b" − "Ax" where we seek to minimize "r", as in the regression problem. The QR algorithm solves this problem by computing the reduced QR factorization of "A" and rearranging to obtain formula_23. This upper triangular system can then be solved for "x". The SVD also suggests an algorithm for obtaining linear least squares. By computing the reduced SVD decomposition formula_24 and then computing the vector formula_25, we reduce the least squares problem to a simple diagonal system.84 The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram-Schmidt algorithm and Householder methods. Conditioning and stability. Allow that a problem is a function formula_26, where "X" is a normed vector space of data and "Y" is a normed vector space of solutions. For some data point formula_27, the problem is said to be ill-conditioned if a small perturbation in "x" produces a large change in the value of "f"("x"). We can quantify this by defining a condition number which represents how well-conditioned a problem is, defined as formula_28 Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equation or least squares optimisation may produce highly inaccurate results. Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra. One example is that the stability of householder triangularization makes it a particularly robust solution method for linear systems, whereas the instability of the normal equations method for solving least squares problems is a reason to favour matrix decomposition methods like using the singular value decomposition. 
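The stability contrast just described can be observed numerically. The following NumPy sketch (an illustration written for this article, with an arbitrarily chosen ill-conditioned Vandermonde matrix) solves the same least squares problem by the normal equations, by QR, and by an SVD-based routine; because forming the Gram matrix squares the condition number, the normal-equations solution is typically much less accurate than the factorization-based ones.

# Least squares on an ill-conditioned Vandermonde matrix, solved three ways.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
A = np.vander(t, 12, increasing=True)     # ill-conditioned design matrix
x_true = rng.standard_normal(12)
b = A @ x_true

# 1. Normal equations: forming A^T A squares the condition number of A
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# 2. QR factorization: solve R x = Q^T b
Q, R = np.linalg.qr(A, mode="reduced")
x_qr = np.linalg.solve(R, Q.T @ b)        # R is upper triangular

# 3. SVD-based least squares routine
x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.linalg.cond(A))                  # large
for name, x in [("normal", x_normal), ("qr", x_qr), ("svd", x_svd)]:
    print(name, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))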
Some matrix decomposition methods may be unstable, but have straightforward modifications that make them stable; one example is the unstable Gram–Schmidt, which can easily be changed to produce the stable modified Gram–Schmidt.140 Another classical problem in numerical linear algebra is the finding that Gaussian elimination is unstable, but becomes stable with the introduction of pivoting. Iterative methods. There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary formula_29 matrix require formula_30 time, which is a surprisingly high floor given that matrices contain only formula_31 numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even if they are redundant steps given a highly structured matrix. The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions. When "A" is symmetric and we wish to solve the linear problem "Ax" = "b", the classical iterative approach is the conjugate gradient method. If "A" is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If "A" is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if "A" is non-symmetric, then we can use Arnoldi iteration. Software. Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like Basic Linear Algebra Subprograms and LAPACK, python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK. More libraries can be found on the List of numerical libraries. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
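As a brief illustration of how such libraries expose the decompositions discussed above, the following sketch uses NumPy; the matrix is an arbitrary example chosen for this article, not drawn from a cited source.

# Computing the decompositions discussed above with NumPy (which calls LAPACK).
import numpy as np

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

U, s, Vt = np.linalg.svd(A)     # singular value decomposition, A = U diag(s) Vt
Q, R = np.linalg.qr(A)          # QR factorization, A = QR with R upper triangular
w, X = np.linalg.eig(A)         # eigenvalue decomposition, A = X diag(w) X^{-1}

# Solving Ax = b; internally this uses an LU factorization with partial pivoting.
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))                 # True
print(np.allclose(Q @ R, A))                 # True
print(np.allclose(U @ np.diag(s) @ Vt, A))   # True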
[ { "math_id": 0, "text": "x = A^{-1}b" }, { "math_id": 1, "text": "A^{-1}" }, { "math_id": 2, "text": "A^{m \\times n}" }, { "math_id": 3, "text": "x^{n \\times 1}" }, { "math_id": 4, "text": "y^{m \\times 1}" }, { "math_id": 5, "text": "A = U \\Sigma V^\\ast" }, { "math_id": 6, "text": "\\Sigma" }, { "math_id": 7, "text": "AA^\\ast" }, { "math_id": 8, "text": "Q^{m \\times m}" }, { "math_id": 9, "text": "R^{m \\times n}" }, { "math_id": 10, "text": "M_1,\\ldots,M_{n-1}" }, { "math_id": 11, "text": "M_{n-1} \\cdots M_1 A = U" }, { "math_id": 12, "text": "L = M_1^{-1} \\cdots M_{n-1}^{-1}" }, { "math_id": 13, "text": "A^{m \\times m}" }, { "math_id": 14, "text": "A = X \\Lambda X^{-1}" }, { "math_id": 15, "text": "\\Lambda" }, { "math_id": 16, "text": "L_{m-1} \\cdots L_2 L_1 A = U" }, { "math_id": 17, "text": "L \\equiv L_1^{-1}L_2^{-1} \\cdots L_{m-1}^{-1}" }, { "math_id": 18, "text": "Rx = Q^\\ast b" }, { "math_id": 19, "text": "b' = X^{-1}b" }, { "math_id": 20, "text": "x' = X^{-1}x" }, { "math_id": 21, "text": "b' = \\Lambda x'" }, { "math_id": 22, "text": "X^{*} X " }, { "math_id": 23, "text": "\\widehat{R}x = \\widehat{Q}^\\ast b" }, { "math_id": 24, "text": "A = \\widehat{U}\\widehat{\\Sigma}V^\\ast" }, { "math_id": 25, "text": "\\widehat{U}^\\ast b" }, { "math_id": 26, "text": "f: X \\to Y" }, { "math_id": 27, "text": "x \\in X" }, { "math_id": 28, "text": "\\widehat{\\kappa} = \\lim_{\\delta \\to 0} \\sup_{\\| \\delta x \\| \\leq \\delta} \\frac{\\| \\delta f \\|}{\\| \\delta x \\|}." }, { "math_id": 29, "text": "m \\times m" }, { "math_id": 30, "text": "O(m^3)" }, { "math_id": 31, "text": "m^2" } ]
https://en.wikipedia.org/wiki?curid=7330660
73307948
Photoconductance decay
Photoconductance decay or photoconductivity decay (PCD or PC) is a non-destructive analytical technique used to measure the lifetime of minority charge carriers in a semiconductor, especially in silicon wafers. The technique studies the transient photoconductivity of a semiconductor sample during or after it is illuminated by a light pulse. Electron–hole pairs are first generated by the light pulse, and the photoconductivity of the sample declines as the carriers recombine. PCD is an important characterisation step in determining the quality and expected performance of wafers before they are used to fabricate devices such as integrated circuits or solar cells. It is one of the most common methods of determining carrier lifetimes. PCD uses a fast light source (e.g. a xenon flash lamp) to excite the test sample, causing free carriers to be generated. Excess carriers in the material cause it to become more conductive, and thus the excess carrier density (formula_0) can be measured over time by measuring the material conductivity. Conductivity can be measured through non-contact methods, such as through microwave reflectance, or inductive or capacitive coupling. A higher effective lifetime of minority charge carriers indicates that they can remain mobile in the wafer for a long time period before undergoing recombination. History. Characterisation of minority carrier lifetimes through measurement of photoconductance decay was a technique used by Bell Laboratories as early as 1954 on silicon and germanium wafers during investigation of carrier trapping. A detailed method for measuring PCD was published soon after by MIT Lincoln Laboratory in 1955. A standard method for PCD was described in ASTM standards in 1971 for measurement of minority carrier lifetimes. A new method for quasi-steady-state photoconductance measurements was described in 1996 by Ronald Sinton. Theory. The difference formula_1 between the dark and illuminated conductivity of the wafer is typically measured through monitoring of the voltage across the induction coil beneath the wafer. This yields a conductance that is spatially averaged over the coil area. Conductance can be related to the excess carrier densities by formula_2 where formula_0 and formula_3 are the excess electron and hole concentrations, formula_4 and formula_5 are the electron and hole mobilities respectively, formula_6 is the wafer thickness and formula_7 is the elementary charge. It can be assumed that formula_8 as electrons and holes are always generated in pairs. When the conductance is obtained, the average formula_0 can be calculated from the semiconductor parameters as formula_9 Depending on the expected lifetime of the material relative to the illumination decay characteristics of the flash lamp, there are several modes that can be used for PCD measurements. The generalised equation for the effective lifetime formula_10 as a function of the excess carrier density is given by formula_16 (Equation 1), where formula_11 is the generation rate as measured by a photodetector. The generalised case can be used regardless of the wafer lifetime or flash lamp. Alternatively, the limiting cases of this function can be exploited in either quasi-transient or quasi-steady-state photoconductance (QSS-PC) measurements. Transient PCD is used when the formula_10 of the material is expected to "exceed" the flash duration, and the PCD measurement is taken after the light source has completely decayed.
In this case, formula_11 is assumed to be 0, and Equation (1) can be reduced to formula_12 For materials in which the effective lifetime is expected to be "shorter" than the flash lamp decay time, carriers are assumed to be continuously generated (i.e. at steady state), and therefore formula_13. In that case, Equation (1) can be reduced to formula_14 The case to be used is determined experimentally. The effective lifetime is then calculated through curve fitting of the function and evaluated at a specified minority carrier density set-point, typically formula_15. Applications. Recombination through the Shockley–Read–Hall (SRH) mechanism is associated with the density of defects (such as contaminant atoms, dangling bonds, crystal grain boundaries, mechanical damage, etc.) in the semiconductor which "trap" charge carriers, leading to reduced lifetimes. Thus PCD can be used to infer the purity or passivation quality of the wafer. In solar cell production, this is particularly useful when estimating the performance of a precursor early in its manufacturing stage. Charge carriers that recombine cannot be used for photocurrent extraction, so a higher effective lifetime is desirable for higher solar cell efficiency. The minority carrier lifetime is also related to the open-circuit voltage of a solar cell, and therefore can be used to infer an implicit current–voltage characteristic of the finished solar cell. Lifetime measurements from PCD can be used to quantitatively calibrate spatially resolved lifetime measurements from photoluminescence (PL) imaging. Minority carrier lifetimes are higher when there are fewer trap defects, and in the absence of SRH recombination, radiative recombination becomes more dominant. After excitation, radiatively recombining carriers emit nearly monochromatic light at the band gap energy of the semiconductor sample. For silicon, this energy is 1.12 eV (in the infrared), which can be captured by a CCD camera. Areas of the image with higher PL counts are inferred to have proportionately higher lifetimes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
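The relations above translate directly into a small numerical sketch. The following Python example is illustrative only: the mobilities, wafer thickness, and single-exponential decay are assumed values rather than measurements, and the code simply converts a simulated photoconductance transient into an excess carrier density and then applies the transient-mode expression for the effective lifetime.

```python
# Illustrative sketch of transient PCD analysis (all values assumed for the example).
import numpy as np

q = 1.602e-19                 # elementary charge, C
mu_n, mu_p = 1400.0, 450.0    # assumed electron and hole mobilities, cm^2/(V s)
W = 0.018                     # assumed wafer thickness, cm (180 um)
tau_true = 200e-6             # lifetime used to generate the fake transient, s

t = np.linspace(0.0, 1e-3, 2001)           # time after the flash has decayed, s
dn_true = 1e15 * np.exp(-t / tau_true)     # assumed excess carrier density, cm^-3
dsigma = q * (mu_n + mu_p) * dn_true * W   # corresponding photoconductance signal

# Invert the conductance relation to recover the average excess carrier density.
dn = dsigma / (q * (mu_n + mu_p) * W)

# Transient mode: tau_eff = -dn / (d dn / dt), evaluated numerically.
tau_eff = -dn / np.gradient(dn, t)

# Report the lifetime at the usual set-point of 1e15 cm^-3 excess carriers.
idx = np.argmin(np.abs(dn - 1e15))
print(tau_eff[idx])           # ~2e-4 s, matching the value used to build the transient
```

In a real measurement the decay is rarely a single exponential, which is why the effective lifetime is reported as a function of the excess carrier density and read off at a stated set-point.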
[ { "math_id": 0, "text": "\\Delta n" }, { "math_id": 1, "text": "\\Delta \\sigma" }, { "math_id": 2, "text": "\\Delta\\sigma = q(\\mu_n \\Delta n + \\mu_p \\Delta p)W" }, { "math_id": 3, "text": "\\Delta p" }, { "math_id": 4, "text": "\\mu_n" }, { "math_id": 5, "text": "\\mu_p" }, { "math_id": 6, "text": "W" }, { "math_id": 7, "text": "q" }, { "math_id": 8, "text": "\\Delta n = \\Delta p" }, { "math_id": 9, "text": "\\overline{\\Delta n } = \\frac{\\Delta \\sigma}{q(\\mu_n + \\mu_p)W}" }, { "math_id": 10, "text": "\\tau_{eff}" }, { "math_id": 11, "text": "G(t)" }, { "math_id": 12, "text": "\\tau_{eff}(\\Delta n)=-\\frac{\\Delta n(t)}{\\frac{\\mathrm{d} \\Delta n(t)}{\\mathrm{d} t}}" }, { "math_id": 13, "text": "\\frac{\\mathrm{d} \\Delta n(t)}{\\mathrm{d} t}=0" }, { "math_id": 14, "text": "\\tau_{eff}(\\Delta n)=\\frac{\\Delta n(t)}{G(t)}" }, { "math_id": 15, "text": "\\Delta n = 1\\times10^{15}\\ {\\rm cm^{-3}}" } ]
https://en.wikipedia.org/wiki?curid=73307948
73319720
Quasi-free algebra
Associative algebra with lifting property In abstract algebra, a quasi-free algebra is an associative algebra that satisfies a lifting property similar to that of a formally smooth algebra in commutative algebra. The notion was introduced by Cuntz and Quillen for applications to cyclic homology. A quasi-free algebra generalizes a free algebra, as well as the coordinate ring of a smooth affine complex curve. Because of the latter generalization, a quasi-free algebra can be thought of as signifying smoothness on a noncommutative space. Definition. Let "A" be an associative algebra over the complex numbers. Then "A" is said to be "quasi-free" if it has the following lifting property: for every surjection formula_0 of associative algebras whose kernel "I" is a nilpotent ideal, every algebra homomorphism formula_1 lifts to an algebra homomorphism formula_2. Let formula_3 denote the differential envelope of "A"; i.e., the universal differential-graded algebra generated by "A". Then "A" is quasi-free if and only if formula_4 is projective as a bimodule over "A". There is also a characterization in terms of a connection. Given an "A"-bimodule "E", a right connection on "E" is a linear map formula_5 that satisfies formula_6 and formula_7. A left connection is defined in a similar way. Then "A" is quasi-free if and only if formula_4 admits a right connection. Properties and examples. One of the basic properties of a quasi-free algebra is that the algebra is left and right hereditary (i.e., a submodule of a projective left or right module is projective; equivalently, the left and right global dimensions are at most one). This places a strong restriction on which algebras can be quasi-free. For example, a hereditary (commutative) integral domain is precisely a Dedekind domain. In particular, a polynomial ring over a field is quasi-free if and only if the number of variables is at most one. An analog of the tubular neighborhood theorem, called the "formal tubular neighborhood theorem", holds for quasi-free algebras. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R \\to R/I" }, { "math_id": 1, "text": "A \\to R/I" }, { "math_id": 2, "text": "A \\to R" }, { "math_id": 3, "text": "(\\Omega A, d)" }, { "math_id": 4, "text": "\\Omega^1 A" }, { "math_id": 5, "text": "\\nabla_r : E \\to E \\otimes_A \\Omega^1 A" }, { "math_id": 6, "text": "\\nabla_r(as) = a \\nabla_r(s)" }, { "math_id": 7, "text": "\\nabla_r(sa) = \\nabla_r(s) a + s \\otimes da" } ]
https://en.wikipedia.org/wiki?curid=73319720
73321
Avalanche
Rapid flow of a mass of snow down a slope An avalanche is a rapid flow of snow down a slope, such as a hill or mountain. Avalanches can be triggered spontaneously, by factors such as increased precipitation or snowpack weakening, or by external means such as humans, other animals, and earthquakes. Primarily composed of flowing snow and air, large avalanches have the capability to capture and move ice, rocks, and trees. Avalanches occur in two general forms, or combinations thereof: slab avalanches made of tightly packed snow, triggered by a collapse of an underlying weak snow layer, and loose snow avalanches made of looser snow. After being set off, avalanches usually accelerate rapidly and grow in mass and volume as they capture more snow. If an avalanche moves fast enough, some of the snow may mix with the air, forming a powder snow avalanche. Though they appear to share similarities, avalanches are distinct from slush flows, mudslides, rock slides, and serac collapses. They are also different from large scale movements of ice. Avalanches can happen in any mountain range that has an enduring snowpack. They are most frequent in winter or spring, but may occur at any time of the year. In mountainous areas, avalanches are among the most serious natural hazards to life and property, so great efforts are made in avalanche control. There are many classification systems for the different forms of avalanches. Avalanches can be described by their size, destructive potential, initiation mechanism, composition, and dynamics. Formation. Most avalanches occur spontaneously during storms under increased load due to snowfall and/or erosion. Metamorphic changes in the snowpack, such as melting due to solar radiation, is the second-largest cause of natural avalanches. Other natural causes include rain, earthquakes, rockfall, and icefall. Artificial triggers of avalanches include skiers, snowmobiles, and controlled explosive work. Contrary to popular belief, avalanches are not triggered by loud sound; the pressure from sound is orders of magnitude too small to trigger an avalanche. Avalanche initiation can start at a point with only a small amount of snow moving initially; this is typical of wet snow avalanches or avalanches in dry unconsolidated snow. However, if the snow has sintered into a stiff slab overlying a weak layer, then fractures can propagate very rapidly, so that a large volume of snow, possibly thousands of cubic metres, can start moving almost simultaneously. A snowpack will fail when the load exceeds the strength. The load is straightforward; it is the weight of the snow. However, the strength of the snowpack is much more difficult to determine and is extremely heterogeneous. It varies in detail with properties of the snow grains, size, density, morphology, temperature, water content; and the properties of the bonds between the grains. These properties may all metamorphose in time according to the local humidity, water vapour flux, temperature and heat flux. The top of the snowpack is also extensively influenced by incoming radiation and the local air flow. One of the aims of avalanche research is to develop and validate computer models that can describe the evolution of the seasonal snowpack over time. A complicating factor is the complex interaction of terrain and weather, which causes significant spatial and temporal variability of the depths, crystal forms, and layering of the seasonal snowpack. Slab avalanches. 
Slab avalanches are formed frequently in snow that has been deposited, or redeposited by wind. They have the characteristic appearance of a block (slab) of snow cut out from its surroundings by fractures. Elements of slab avalanches include a crown fracture at the top of the start zone, flank fractures on the sides of the start zones, and a fracture at the bottom called the stauchwall. The crown and flank fractures are vertical walls in the snow delineating the snow that was entrained in the avalanche from the snow that remained on the slope. Slabs can vary in thickness from a few centimetres to three metres. Slab avalanches account for around 90% of avalanche-related fatalities. Powder snow avalanches. The largest avalanches form turbulent suspension currents known as powder snow avalanches or mixed avalanches, a kind of gravity current. These consist of a powder cloud, which overlies a dense avalanche. They can form from any type of snow or initiation mechanism, but usually occur with fresh dry powder. They can exceed speeds of , and masses of 1,000,000 tons; their flows can travel long distances along flat valley bottoms and even uphill for short distances. Wet snow avalanches. In contrast to powder snow avalanches, wet snow avalanches are a low velocity suspension of snow and water, with the flow confined to the track surface (McClung, 1999, p. 108). The low speed of travel is due to the friction between the sliding surface of the track and the water saturated flow. Despite the low speed of travel (≈10–40 km/h), wet snow avalanches are capable of generating powerful destructive forces, due to the large mass and density. The body of the flow of a wet snow avalanche can plough through soft snow, and can scour boulders, earth, trees, and other vegetation; leaving exposed and often scored ground in the avalanche track. Wet snow avalanches can be initiated from either loose snow releases, or slab releases, and only occur in snowpacks that are water saturated and isothermally equilibrated to the melting point of water. The isothermal characteristic of wet snow avalanches has led to the secondary term of isothermal slides found in the literature (for example in Daffern, 1999, p. 93). At temperate latitudes wet snow avalanches are frequently associated with climatic avalanche cycles at the end of the winter season, when there is significant daytime warming. Ice avalanche. An ice avalanche occurs when a large piece of ice, such as from a serac or calving glacier, falls onto ice (such as the Khumbu Icefall), triggering a movement of broken ice chunks. The resulting movement is more analogous to a rockfall or a landslide than a snow avalanche. They are typically very difficult to predict and almost impossible to mitigate. Avalanche pathway. As an avalanche moves down a slope it follows a certain pathway that is dependent on the slope's degree of steepness and the volume of snow/ice involved in the mass movement. The origin of an avalanche is called the Starting Point and typically occurs on a 30–45 degree slope. The body of the pathway is called the Track of the avalanche and usually occurs on a 20–30 degree slope. When the avalanche loses its momentum and eventually stops it reaches the Runout Zone. This usually occurs when the slope has reached a steepness that is less than 20 degrees. 
These degrees are not consistently true due to the fact that each avalanche is unique depending on the stability of the snowpack that it was derived from as well as the environmental or human influences that triggered the mass movement. Injuries and deaths. People caught in avalanches can die from suffocation, trauma, or hypothermia. From "1950–1951 to 2020–2021" there were 1,169 people who died in avalanches in the United States. For the 11-year period ending April 2006, 445 people died in avalanches throughout North America. On average, 28 people die in avalanches every winter in the United States. In 2001 it was reported that globally an average of 150 people die each year from avalanches. Three of the deadliest recorded avalanches have killed over a thousand people each. Terrain, snowpack, weather. Doug Fesler and Jill Fredston developed a conceptual model of the three primary elements of avalanches: terrain, weather, and snowpack. Terrain describes the places where avalanches occur, weather describes the meteorological conditions that create the snowpack, and snowpack describes the structural characteristics of snow that make avalanche formation possible. Terrain. Avalanche formation requires a slope shallow enough for snow to accumulate but steep enough for the snow to accelerate once set in motion by the combination of mechanical failure (of the snowpack) and gravity. The angle of the slope that can hold snow, called the angle of repose, depends on a variety of factors, such as crystal form and moisture content. Some forms of drier and colder snow will only stick to shallower slopes, while wet and warm snow can bond to very steep surfaces. In coastal mountains, such as the Cordillera del Paine region of Patagonia, deep snowpacks collect on vertical and even overhanging rock faces. The slope angle that can allow moving snow to accelerate depends on a variety of factors such as the snow's shear strength (which is itself dependent upon crystal form) and the configuration of layers and inter-layer interfaces. The snowpack on slopes with sunny exposures is strongly influenced by sunshine. Diurnal cycles of thawing and refreezing can stabilize the snowpack by promoting settlement. Strong freeze-thaw cycles result in the formation of surface crusts during the night and of unstable surface snow during the day. Slopes in the lee of a ridge or of another wind obstacle accumulate more snow and are more likely to include pockets of deep snow, wind slabs, and cornices, all of which, when disturbed, may result in avalanche formation. Conversely, the snowpack on a windward slope is often much shallower than on a lee slope. Avalanches and avalanche paths share common elements: a start zone where the avalanche originates, a track along which the avalanche flows, and a runout zone where the avalanche comes to rest. The debris deposit is the accumulated mass of the avalanched snow once it has come to rest in the run-out zone. For the image at left, many small avalanches form in this avalanche path every year, but most of these avalanches do not run the full vertical or horizontal length of the path. The frequency with which avalanches form in a given area is known as the return period. The start zone of an avalanche must be steep enough to allow snow to accelerate once set in motion, additionally convex slopes are less stable than concave slopes because of the disparity between the tensile strength of snow layers and their compressive strength. 
The composition and structure of the ground surface beneath the snowpack influences the stability of the snowpack, either being a source of strength or weakness. Avalanches are unlikely to form in very thick forests, but boulders and sparsely distributed vegetation can create weak areas deep within the snowpack through the formation of strong temperature gradients. Full-depth avalanches (avalanches that sweep a slope virtually clean of snow cover) are more common on slopes with smooth ground, such as grass or rock slabs. Generally speaking, avalanches follow drainages down-slope, frequently sharing drainage features with summertime watersheds. At and below tree line, avalanche paths through drainages are well defined by vegetation boundaries called trim lines, which occur where avalanches have removed trees and prevented regrowth of large vegetation. Engineered drainages, such as the avalanche dam on Mount Stephen in Kicking Horse Pass, have been constructed to protect people and property by redirecting the flow of avalanches. Deep debris deposits from avalanches will collect in catchments at the terminus of a run out, such as gullies and river beds. Slopes flatter than 25 degrees or steeper than 60 degrees typically have a lower incidence of avalanches. Human-triggered avalanches have the greatest incidence when the snow's angle of repose is between 35 and 45 degrees; the critical angle, the angle at which human-triggered avalanches are most frequent, is 38 degrees. When the incidence of human triggered avalanches is normalized by the rates of recreational use, however, hazard increases uniformly with slope angle, and no significant difference in hazard for a given exposure direction can be found. The rule of thumb is: "A slope that is flat enough to hold snow but steep enough to ski has the potential to generate an avalanche, regardless of the angle." Snowpack structure and characteristics. The snowpack is composed of ground-parallel layers that accumulate over the winter. Each layer contains ice grains that are representative of the distinct meteorological conditions during which the snow formed and was deposited. Once deposited, a snow layer continues to evolve under the influence of the meteorological conditions that prevail after deposition. For an avalanche to occur, it is necessary that a snowpack have a weak layer (or instability) below a slab of cohesive snow. In practice the formal mechanical and structural factors related to snowpack instability are not directly observable outside of laboratories, thus the more easily observed properties of the snow layers (e.g. penetration resistance, grain size, grain type, temperature) are used as index measurements of the mechanical properties of the snow (e.g. tensile strength, friction coefficients, shear strength, and ductile strength). This results in two principal sources of uncertainty in determining snowpack stability based on snow structure: First, both the factors influencing snow stability and the specific characteristics of the snowpack vary widely within small areas and time scales, resulting in significant difficulty extrapolating point observations of snow layers across different scales of space and time. Second, the relationship between readily observable snowpack characteristics and the snowpack's critical mechanical properties has not been completely developed. 
While the deterministic relationship between snowpack characteristics and snowpack stability is still a matter of ongoing scientific study, there is a growing empirical understanding of the snow composition and deposition characteristics that influence the likelihood of an avalanche. Observation and experience have shown that newly fallen snow requires time to bond with the snow layers beneath it, especially if the new snow falls during very cold and dry conditions. If ambient air temperatures are cold enough, shallow snow above or around boulders, plants, and other discontinuities in the slope weakens from rapid crystal growth that occurs in the presence of a critical temperature gradient. Large, angular snow crystals are indicators of weak snow, because such crystals have fewer bonds per unit volume than small, rounded crystals that pack tightly together. Consolidated snow is less likely to slough than loose powdery layers or wet isothermal snow; however, consolidated snow is a necessary condition for the occurrence of slab avalanches, and persistent instabilities within the snowpack can hide below well-consolidated surface layers. Uncertainty associated with the empirical understanding of the factors influencing snow stability leads most professional avalanche workers to recommend conservative use of avalanche terrain relative to current snowpack instability. Weather. Avalanches only occur in a standing snowpack. Typically, winter seasons at high latitudes, high altitudes, or both have weather that is sufficiently unsettled and cold enough for precipitated snow to accumulate into a seasonal snowpack. Continentality, through its potentiating influence on the meteorological extremes experienced by snowpacks, is an important factor in the evolution of instabilities and the consequent occurrence of avalanches; more maritime climates, by contrast, moderate these extremes and favour faster stabilization of the snowpack after storm cycles. The evolution of the snowpack is critically sensitive to small variations within the narrow range of meteorological conditions that allow for the accumulation of snow into a snowpack. Among the critical factors controlling snowpack evolution are: heating by the sun, radiational cooling, vertical temperature gradients in standing snow, snowfall amounts, and snow types. Generally, mild winter weather will promote the settlement and stabilization of the snowpack; conversely, very cold, windy, or hot weather will weaken the snowpack. At temperatures close to the freezing point of water, or during times of moderate solar radiation, a gentle freeze-thaw cycle will take place. The melting and refreezing of water in the snow strengthens the snowpack during the freezing phase and weakens it during the thawing phase. A rapid rise in temperature, to a point significantly above the freezing point of water, may cause avalanche formation at any time of year. Persistent cold temperatures can either prevent new snow from stabilizing or destabilize the existing snowpack. Cold air temperatures on the snow surface produce a temperature gradient in the snow, because the ground temperature at the base of the snowpack is usually around 0 °C, and the ambient air temperature can be much colder. When a temperature gradient greater than 10 °C change per vertical meter of snow is sustained for more than a day, angular crystals called depth hoar or facets begin forming in the snowpack because of rapid moisture transport along the temperature gradient.
These angular crystals, which bond poorly to one another and the surrounding snow, often become a persistent weakness in the snowpack. When a slab lying on top of a persistent weakness is loaded by a force greater than the strength of the slab and persistent weak layer, the persistent weak layer can fail and generate an avalanche. Any wind stronger than a light breeze can contribute to a rapid accumulation of snow on sheltered slopes downwind. Wind slabs form quickly and, if present, weaker snow below the slab may not have time to adjust to the new load. Even on a clear day, wind can quickly load a slope with snow by blowing snow from one place to another. Top-loading occurs when wind deposits snow from the top of a slope; cross-loading occurs when wind deposits snow parallel to the slope. When a wind blows over the top of a mountain, the leeward, or downwind, side of the mountain experiences top-loading, from the top to the bottom of that lee slope. When the wind blows across a ridge that leads up the mountain, the leeward side of the ridge is subject to cross-loading. Cross-loaded wind-slabs are usually difficult to identify visually. Snowstorms and rainstorms are important contributors to avalanche danger. Heavy snowfall will cause instability in the existing snowpack, both because of the additional weight and because the new snow has insufficient time to bond to underlying snow layers. Rain has a similar effect. In the short term, rain causes instability because, like a heavy snowfall, it imposes an additional load on the snowpack and once rainwater seeps down through the snow, acts as a lubricant, reducing the natural friction between snow layers that holds the snowpack together. Most avalanches happen during or soon after a storm. Daytime exposure to sunlight will rapidly destabilize the upper layers of the snowpack if the sunlight is strong enough to melt the snow, thereby reducing its hardness. During clear nights, the snowpack can re-freeze when ambient air temperatures fall below freezing, through the process of long-wave radiative cooling, or both. Radiative heat loss occurs when the night air is significantly cooler than the snowpack, and the heat stored in the snow is re-radiated into the atmosphere. Dynamics. When a slab avalanche forms, the slab disintegrates into increasingly smaller fragments as the snow travels downhill. If the fragments become small enough the outer layer of the avalanche, called a saltation layer, takes on the characteristics of a fluid. When sufficiently fine particles are present they can become airborne and, given a sufficient quantity of airborne snow, this portion of the avalanche can become separated from the bulk of the avalanche and travel a greater distance as a powder snow avalanche. Scientific studies using radar, following the 1999 Galtür avalanche disaster, confirmed the hypothesis that a saltation layer forms between the surface and the airborne components of an avalanche, which can also separate from the bulk of the avalanche. Driving an avalanche is the component of the avalanche's weight parallel to the slope; as the avalanche progresses any unstable snow in its path will tend to become incorporated, so increasing the overall weight. This force will increase as the steepness of the slope increases, and diminish as the slope flattens. 
Resisting this are a number of components that are thought to interact with each other: the friction between the avalanche and the surface beneath; friction between the air and snow within the fluid; fluid-dynamic drag at the leading edge of the avalanche; shear resistance between the avalanche and the air through which it is passing; and shear resistance between the fragments within the avalanche itself. An avalanche will continue to accelerate until the resistance exceeds the forward force. Modeling. Attempts to model avalanche behaviour date from the early 20th century, notably the work of Professor Lagotala in preparation for the 1924 Winter Olympics in Chamonix. His method was developed by A. Voellmy and popularised following the publication in 1955 of his "Ueber die Zerstoerungskraft von Lawinen" (On the Destructive Force of Avalanches). Voellmy used a simple empirical formula, treating an avalanche as a sliding block of snow moving with a drag force that was proportional to the square of the speed of its flow: formula_0 He and others subsequently derived other formulae that take other factors into account, with the Voellmy-Salm-Gubler and the Perla-Cheng-McClung models becoming most widely used as simple tools to model flowing (as opposed to powder snow) avalanches. Since the 1990s many more sophisticated models have been developed. In Europe much of the recent work was carried out as part of the SATSIE (Avalanche Studies and Model Validation in Europe) research project supported by the European Commission which produced the leading-edge MN2L model, now in use with the "Service Restauration des Terrains en Montagne" (Mountain Rescue Service) in France, and D2FRAM (Dynamical Two-Flow-Regime Avalanche Model), which was still undergoing validation as of 2007. Other known models are the SAMOS-AT avalanche simulation software and the RAMMS software. Human involvement. How to prevent avalanches. Preventative measures are employed in areas where avalanches pose a significant threat to people, such as ski resorts, mountain towns, roads, and railways. There are several ways to prevent avalanches and lessen their power and destruction: active preventative measures reduce the likelihood and size of avalanches by disrupting the structure of the snowpack, while passive measures reinforce and stabilize the snowpack "in situ". The simplest active measure is repeatedly traveling on a snowpack as snow accumulates; this can be by means of boot-packing, ski-cutting, or machine grooming. Explosives are used extensively to prevent avalanches, by triggering smaller avalanches that break down instabilities in the snowpack, and removing overburden that can result in larger avalanches. Explosive charges are delivered by a number of methods including hand-tossed charges, helicopter-dropped bombs, Gazex concussion lines, and ballistic projectiles launched by air cannons and artillery. Passive preventive systems such as snow fences and light walls can be used to direct the placement of snow. Snow builds up around the fence, especially the side that faces the prevailing winds. Downwind of the fence, snow build-up is lessened. This is caused both by snow that would otherwise have been carried further being deposited at the fence, and by the wind, already depleted of snow at the fence, picking up snow that is lying downwind. When there is a sufficient density of trees, they can greatly reduce the strength of avalanches. They hold snow in place and when there is an avalanche, the impact of the snow against the trees slows it down.
Trees can either be planted or they can be conserved, such as in the building of a ski resort, to reduce the strength of avalanches. In turn, socio-environmental changes can influence the occurrence of damaging avalanches: some studies linking changes in land-use/land-cover patterns and the evolution of snow avalanche damage in mid latitude mountains show the importance of the role played by vegetation cover, that is at the root of the increase of damage when the protective forest is deforested (because of demographic growth, intensive grazing and industrial or legal causes), and at the root of the decrease of damage because of the transformation of a traditional land-management system based on overexploitation into a system based on land marginalization and reforestation, something that has happened mainly since the mid-20th century in mountain environments of developed countries. Mitigation. In many areas, regular avalanche tracks can be identified and precautions can be taken to minimize damage, such as the prevention of development in these areas. To mitigate the effect of avalanches the construction of artificial barriers can be very effective in reducing avalanche damage. There are several types: One kind of barrier (snow net) uses a net strung between poles that are anchored by guy wires in addition to their foundations. These barriers are similar to those used for rockslides. Another type of barrier is a rigid fence-like structure (snow fence) and may be constructed of steel, wood or pre-stressed concrete. They usually have gaps between the beams and are built perpendicular to the slope, with reinforcing beams on the downhill side. Rigid barriers are often considered unsightly, especially when many rows must be built. They are also expensive and vulnerable to damage from falling rocks in the warmer months. In addition to industrially manufactured barriers, landscaped barriers, called avalanche dams stop or deflect avalanches with their weight and strength. These barriers are made out of concrete, rocks, or earth. They are usually placed right above the structure, road, or railway that they are trying to protect, although they can also be used to channel avalanches into other barriers. Occasionally, earth mounds are placed in the avalanche's path to slow it down. Finally, along transportation corridors, large shelters, called snow sheds, can be built directly in the slide path of an avalanche to protect traffic from avalanches. Early warning systems. Warning systems can detect avalanches which develop slowly, such as ice avalanches caused by icefalls from glaciers. Interferometric radars, high-resolution cameras, or motion sensors can monitor instable areas over a long term, lasting from days to years. Experts interpret the recorded data and are able to recognize upcoming ruptures in order to initiate appropriate measures. Such systems (e.g. the monitoring of the Weissmies glacier in Switzerland) can recognize events several days in advance. Alarm systems. Modern radar technology enables the monitoring of large areas and the localization of avalanches at any weather condition, by day and by night. Complex alarm systems are able to detect avalanches within a short time in order to close (e.g. roads and rails) or evacuate (e.g. construction sites) endangered areas. An example of such a system is installed on the only access road of Zermatt in Switzerland. Two radars monitor the slope of a mountain above the road. 
The system automatically closes the road by activating several barriers and traffic lights within seconds such that no people are harmed. Survival, rescue, and recovery. Avalanche accidents are broadly differentiated into two categories: accidents in recreational settings, and accidents in residential, industrial, and transportation settings. This distinction is motivated by the observed difference in the causes of avalanche accidents in the two settings. In the recreational setting most accidents are caused by the people involved in the avalanche. In a 1996 study, Jamieson et al. (pages 7–20) found that 83% of all avalanches in the recreational setting were caused by those who were involved in the accident. In contrast, all the accidents in the residential, industrial, and transportation settings were due to spontaneous natural avalanches. Because of the difference in the causes of avalanche accidents, and the activities pursued in the two settings, avalanche and disaster management professionals have developed two related preparedness, rescue, and recovery strategies for each of the settings. Notable avalanches. Two avalanches occurred in March 1910 in the Cascade and Selkirk Mountain ranges; on 1 March the Wellington avalanche killed 96 in Washington state, United States. Three days later 62 railroad workers were killed in the Rogers Pass avalanche in British Columbia, Canada. During World War I, an estimated 40,000 to 80,000 soldiers died as a result of avalanches during the mountain campaign in the Alps at the Austrian-Italian front, many of which were caused by artillery fire. Some 10,000 men, from both sides, died in avalanches in December 1916. In the northern hemisphere winter of 1950–1951 approximately 649 avalanches were recorded in a three-month period throughout the Alps in Austria, France, Switzerland, Italy and Germany. This series of avalanches killed around 265 people and was termed the Winter of Terror. A mountain climbing camp on Lenin Peak, in what is now Kyrgyzstan, was wiped out in 1990 when an earthquake triggered a large avalanche that overran the camp. Forty-three climbers were killed. In 1993, the Bayburt Üzengili avalanche killed 60 individuals in Üzengili in the province of Bayburt, Turkey. In a large avalanche at Montroc, France, in 1999, 300,000 cubic metres of snow slid down a 30° slope, achieving a speed in the region of . It killed 12 people in their chalets under 100,000 tons of snow, deep. The mayor of Chamonix was convicted of second-degree murder for not evacuating the area, but received a suspended sentence. The small Austrian village of Galtür was hit by the Galtür avalanche in 1999. The village was thought to be in a safe zone but the avalanche was exceptionally large and flowed into the village. Thirty-one people died. On 1 December 2000, the Glory Bowl Avalanche formed on Mt. Glory which is located within the Teton Mountain Range in Wyoming, United States. Joel Roof was snowboarding recreationally in this backcountry, bowl-shaped run and triggered the avalanche. He was carried nearly 2,000 feet to the base of the mountain and was not successfully rescued. On 28 January 2003, the Tatra Mountains avalanche swept away nine out of a thirteen-member group heading to the summit of Rysy in the Tatra Mountains. The participants of the trip were students from the I Leon Kruczkowski High School in Tychy and individuals associated with the school's sports club.
On 3 July 2022 a serac collapsed on the Marmolada Glacier, Italy, causing an avalanche that killed 11 alpinists and injured eight. Classification of avalanches. European avalanche risk. In Europe, the avalanche risk is widely rated on the following scale, which was adopted in April 1993 to replace the earlier non-standard national schemes. Descriptions were last updated in May 2003 to enhance uniformity. In France, most avalanche deaths occur at risk levels 3 and 4. In Switzerland most occur at levels 2 and 3. It is thought that this may be due to national differences of interpretation when assessing the risks. [1] Stability: [2] additional load: Gradient: European avalanche size table. Avalanche size: North American Avalanche Danger Scale. In the United States and Canada, the following avalanche danger scale is used. Descriptors vary depending on country. Avalanche problems. There are nine different types of avalanche problems: Canadian classification for avalanche size. The Canadian classification for avalanche size is based upon the consequences of the avalanche. Half sizes are commonly used. United States classification for avalanche size. The size of avalanches is classified using two scales: size relative to destructive force, or D-scale, and size relative to the avalanche path, or R-scale. Both size scales range from 1 to 5; with the D scale, half sizes can be used. Rutschblock Test. Slab avalanche hazard analysis can be done using the Rutschblock Test. A 2 m wide block of snow is isolated from the rest of the slope and progressively loaded. The result is a rating of slope stability on a seven-step scale. ("Rutsch" means slide in German.) Avalanches and climate change. Avalanche formation and frequency are highly affected by weather patterns and the local climate. Snowpack layers will form differently depending on whether snow is falling in very cold or very warm conditions, and very dry or very humid conditions. Thus, climate change may affect when, where, and how often avalanches occur, and may also change the type of avalanches that are occurring. Impacts on avalanche type and frequency. Overall, a rising seasonal snow line and a decrease in the number of days with snow cover are predicted. Climate change-caused temperature increases and changes in precipitation patterns will likely differ between the different mountain regions, and the impacts of these changes on avalanches will change at different elevations. In the long term, avalanche frequency at lower elevations is expected to decline corresponding to a decrease in snow cover and depth, and a short-term increase in the number of wet avalanches is predicted. Precipitation is expected to increase, meaning more snow or rain depending on the elevation. Higher elevations predicted to remain above the seasonal snow line will likely see an increase in avalanche activity due to the increases in precipitation during the winter season. Storm precipitation intensity is also expected to increase, which is likely to lead to more days with enough snowfall to cause the snowpack to become unstable. Moderate and high elevations may see an increase in volatile swings from one weather extreme to the other. Predictions also show an increase in the number of rain-on-snow events, and wet avalanche cycles occurring earlier in the spring during the remainder of this century. Impacts on burial survival rate. The warm, wet snowpacks that are likely to increase in frequency due to climate change may also make avalanche burials more deadly.
Warm snow has a higher moisture content and is therefore denser than colder snow. Denser avalanche debris decreases a buried person's ability to breathe and the amount of time they have before they run out of oxygen. This increases the likelihood of death by asphyxia in the event of a burial. Additionally, the predicted thinner snowpacks may increase the frequency of injuries due to trauma, such as a buried skier striking a rock or tree. See also. Related flows. &lt;templatestyles src="Div col/styles.css"/&gt; References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\textrm{Pref} = \\frac {1} {2} \\, { \\rho} \\, { v^2} \\,\\!" } ]
https://en.wikipedia.org/wiki?curid=73321
73325414
Q-category
Concept in mathematical category theory In mathematics, a Q-category or almost quotient category is a category that is a "milder version of a Grothendieck site." A Q-category is a coreflective subcategory. The Q stands for a quotient. The concept of Q-categories was introduced by Alexander Rosenberg in 1988. The motivation for the notion was its use in noncommutative algebraic geometry; in this formalism, noncommutative spaces are defined as sheaves on Q-categories. Definition. A Q-category is defined by the formula formula_0where formula_1 is the left adjoint in a pair of adjoint functors and is a full and faithful functor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{A} : (u^* \\dashv u_*) : \\bar A \\stackrel{\\overset{u^*}{\\leftarrow}}{\\underset{u_*}{\\to}} A" }, { "math_id": 1, "text": "u^*" } ]
https://en.wikipedia.org/wiki?curid=73325414
73327646
Beta regression
Non-linear regression method Beta regression is a form of regression which is used when the response variable, formula_0, takes values within formula_1 and can be assumed to follow a beta distribution. It is generalisable to variables which take values in an arbitrary open interval formula_2 through transformations. Beta regression was developed in the early 2000s, independently, by two groups of statisticians: Kieschnick and McCullough in 2003 and Ferrari and Cribari-Neto in 2004. Description. The modern beta regression process is based on the mean/precision parameterisation of the beta distribution. Here the variable is assumed to be distributed according to formula_3 where formula_4 is the mean and formula_5 is the precision. As the mean of the distribution, formula_4 is constrained to fall within formula_1 but formula_5 is not. For given values of formula_4, higher values of formula_5 result in a beta with a lower variance, hence its description as a precision parameter. Beta regression has three major motivations. Firstly, beta-distributed variables are usually heteroscedastic, with more scatter around the middle of the interval and less near its endpoints, whereas linear regression assumes homoscedasticity. Secondly, while transformations are available to consider beta-distributed dependent variables within the generalised linear regression framework, these transformations mean that the regression models formula_6 rather than formula_0, so the interpretation is in terms of the mean of formula_6 rather than the mean of formula_0, which is more awkward to interpret. Thirdly, values within formula_1 are generally from skewed distributions. The basic algebra of the beta regression is linear in terms of the link function, but even in the equal dispersion case presented below, it is not a special case of generalised linear regression: formula_7 where formula_8 is a link function.3 It is also notable that the variance of formula_0 is dependent on formula_4 in the model, so beta regressions are naturally heteroscedastic. Variable dispersion beta regression. There is also variable dispersion beta regression, where formula_5 is modelled independently for each observation rather than being held constant. Likelihood ratio tests can be "interpreted as testing the null hypothesis of equidispersion against a specific alternative of variable dispersion" by comparing the fixed dispersion and variable dispersion fits. For example, within the R programming language, the formula "formula_9" describes an equidispersion model, but it might be compared to any of the following three specific variable dispersion alternatives: "formula_10", "formula_11", or "formula_12". The Breusch–Pagan test can be used to identify formula_13 variables. The choice of link equation can render the need for variable dispersion irrelevant, at least when judged in terms of model fit. A quasi-RESET diagnostic test (inspired by RESET, i.e. the regression specification error test) is available for considering misspecification, particularly in the context of link equation choice. If a power of a fitted mean/linear predictor is used as a covariate and it results in a better model than the same formula without the power term, then the original model formula is misspecified. This quasi-RESET diagnostic procedure may also be considered graphically, for example by comparing the absolute raw residuals for each model as the formula_14 values, with the model that has the smaller absolute residual more often being preferred.
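The mean/precision parameterisation can be fitted directly by maximum likelihood. The sketch below is illustrative only: the data are simulated, and the logit link, two-covariate design, and starting values are assumptions made for the example; in practice a dedicated implementation (for example the R package betareg, which uses formula notation of the kind shown above) would normally be used.

```python
# Illustrative sketch (simulated data, assumed logit link): maximum likelihood
# fit of an equidispersion beta regression in the mean/precision parameterisation.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
beta_true, phi_true = np.array([0.2, 0.5, -0.3]), 30.0
mu_true = expit(X @ beta_true)
y = rng.beta(mu_true * phi_true, (1.0 - mu_true) * phi_true)  # response in (0, 1)
y = np.clip(y, 1e-12, 1.0 - 1e-12)                            # guard the log terms

def negloglik(params):
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)                 # logit link for the mean
    phi = np.exp(log_phi)                # precision kept positive via the log scale
    a, b = mu * phi, (1.0 - mu) * phi    # usual shape parameters of the beta
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1.0) * np.log(y) + (b - 1.0) * np.log(1.0 - y))

fit = minimize(negloglik, np.zeros(X.shape[1] + 1), method="BFGS")
print(fit.x[:-1], np.exp(fit.x[-1]))     # estimated coefficients and precision
```

Because the precision enters on the log scale, the same code extends to the variable dispersion case by replacing the single log_phi parameter with a second linear predictor in the dispersion covariates.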
In general, the closer the observed formula_0 values are to the formula_2 extremes, the more consequential the choice of link function becomes. The link function can also affect whether the maximum likelihood estimation (MLE) procedure that statistical programs use to fit beta regressions converges. Furthermore, the MLE procedure can tend to underestimate the standard errors in beta regression, and therefore to distort significance inferences. In practice, however, Bias Correction (BC) and Bias Reduction (BR) are essentially diagnostic steps, i.e. the analyst compares the model with neither BC nor BR to two models, each implementing one of BC and BR. The assumptions of beta regression are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "y" }, { "math_id": 1, "text": "(0, 1)" }, { "math_id": 2, "text": "(a, b)" }, { "math_id": 3, "text": "B(\\mu, \\phi)" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "\\phi" }, { "math_id": 6, "text": "y'" }, { "math_id": 7, "text": "g(\\mu_i) = x_i^T\\beta_i = \\eta_i," }, { "math_id": 8, "text": "g" }, { "math_id": 9, "text": "y \\sim x_1 + x_2" }, { "math_id": 10, "text": "y\\sim x_1 + x_2 | z_1" }, { "math_id": 11, "text": "y\\sim x_1 + x_2 | z_2" }, { "math_id": 12, "text": "y\\sim x_1 + x_2 | z_1 + z_2" }, { "math_id": 13, "text": "z" }, { "math_id": 14, "text": "(x, y)" } ]
https://en.wikipedia.org/wiki?curid=73327646
7333103
Barrow's inequality
In geometry, Barrow's inequality is an inequality relating the distances between an arbitrary point within a triangle, the vertices of the triangle, and certain points on the sides of the triangle. It is named after David Francis Barrow. Statement. Let "P" be an arbitrary point inside the triangle "ABC". From "P" and "ABC", define "U", "V", and "W" as the points where the angle bisectors of the angles "BPC", "CPA", and "APB" intersect the sides "BC", "CA", "AB", respectively. Then Barrow's inequality states that formula_0 with equality holding only when the triangle "ABC" is equilateral and "P" is its center. Generalisation. Barrow's inequality can be extended to convex polygons. For a convex polygon with vertices formula_1, let formula_2 be an inner point and formula_3 the intersections of the angle bisectors of formula_4 with the associated polygon sides formula_5; then the following inequality holds: formula_6 Here formula_7 denotes the secant function. For the triangle case formula_8, the inequality becomes Barrow's inequality, since formula_9. History. Barrow's inequality strengthens the Erdős–Mordell inequality, which has identical form except with "PU", "PV", and "PW" replaced by the three distances of "P" from the triangle's sides. Barrow's proof of this inequality was published in 1937, as his solution to a problem posed in the American Mathematical Monthly of proving the Erdős–Mordell inequality. This result was named "Barrow's inequality" as early as 1961. A simpler proof was later given by Louis J. Mordell.
[ { "math_id": 0, "text": "PA+PB+PC\\geq 2(PU+PV+PW),\\," }, { "math_id": 1, "text": "A_1,A_2,\\ldots ,A_n " }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "Q_1, Q_2,\\ldots ,Q_n" }, { "math_id": 4, "text": "\\angle A_1PA_2,\\ldots,\\angle A_{n-1}PA_n,\\angle A_nPA_1 " }, { "math_id": 5, "text": "A_1A_2,\\ldots ,A_{n-1}A_n, A_nA_1" }, { "math_id": 6, "text": "\\sum_{k=1}^n|PA_k|\\geq \\sec\\left(\\frac{\\pi}{n}\\right) \\sum_{k=1}^n|PQ_k|" }, { "math_id": 7, "text": "\\sec(x)" }, { "math_id": 8, "text": "n=3" }, { "math_id": 9, "text": "\\sec\\left(\\tfrac{\\pi}{3}\\right)=2" } ]
https://en.wikipedia.org/wiki?curid=7333103
73341
Disjunctive normal form
Standard form of a boolean function In boolean logic, a disjunctive normal form (DNF) is a canonical normal form of a logical formula consisting of a disjunction of conjunctions; it can also be described as an OR of ANDs, a sum of products, or — in philosophical logic — a "cluster concept". As a normal form, it is useful in automated theorem proving. Definition. A logical formula is considered to be in DNF if it is a disjunction of one or more conjunctions of one or more literals. A DNF formula is in full disjunctive normal form if each of its variables appears exactly once in every conjunction and each conjunction appears at most once (up to the order of variables). As in conjunctive normal form (CNF), the only propositional operators in DNF are and (formula_0), or (formula_1), and not (formula_2). The "not" operator can only be used as part of a literal, which means that it can only precede a propositional variable. The following is a context-free grammar for DNF: "Disjunction" → "Conjunction" | "Disjunction" formula_1 "Conjunction"; "Conjunction" → "Literal" | "Conjunction" formula_0 "Literal"; "Literal" → "Variable" | formula_2 "Variable". Where "Variable" is any variable. For example, all of the following formulas are in DNF: formula_3, formula_4, formula_5, and formula_6. The formula formula_7 is in DNF, but not in full DNF; an equivalent full-DNF version is formula_8. The following formulas are not in DNF: formula_9, formula_10, and formula_11. Conversion to DNF. In classical logic each propositional formula can be converted to DNF ... ... by syntactic means. The conversion involves using logical equivalences, such as double negation elimination, De Morgan's laws, and the distributive law. Formulas built from the primitive connectives formula_12 can be converted to DNF by the following canonical term rewriting system: formula_13 ... by semantic means. The full DNF of a formula can be read off its truth table. For example, consider the formula formula_14. Its truth table has eight rows; reading off the rows on which the formula is true gives its full DNF formula_18, and reading off the rows on which it is false gives the full DNF of its negation formula_19, namely formula_20. Remark. A propositional formula can be represented by one and only one full DNF. In contrast, several "plain" DNFs may be possible. For example, by applying the rule formula_21 three times, the full DNF of the above formula_15 can be simplified to formula_22. However, there are also equivalent DNF formulas that cannot be transformed one into another by this rule. Disjunctive Normal Form Theorem. It is a theorem that all consistent formulas in propositional logic can be converted to disjunctive normal form. This is called the Disjunctive Normal Form Theorem. The formal statement is as follows: Disjunctive Normal Form Theorem: Suppose formula_23 is a sentence in a propositional language formula_24 with formula_25 sentence letters, which we shall denote by formula_26. If formula_23 is not a contradiction, then it is truth-functionally equivalent to a disjunction of conjunctions of the form formula_27, where formula_28, and formula_29. The proof follows from the procedure given above for generating DNFs from truth tables. Formally, the proof is as follows: Suppose formula_23 is a sentence in a propositional language whose sentence letters are formula_30. For each row of formula_23's truth table, write out a corresponding conjunction formula_31, where formula_32 is defined to be formula_33 if formula_33 takes the value formula_34 at that row, and is formula_35 if formula_33 takes the value formula_36 at that row; similarly for formula_37, formula_38, etc. (the alphabetical ordering of formula_30 in the conjunctions is quite arbitrary; any other could be chosen instead). Now form the disjunction of all these conjunctions which correspond to formula_34 rows of formula_23's truth table.
This disjunction is a sentence in formula_39, which by the reasoning above is truth-functionally equivalent to formula_23. This construction obviously presupposes that formula_23 takes the value formula_34 on at least one row of its truth table; if formula_23 doesn’t, i.e., if formula_23 is a contradiction, then formula_23 is equivalent to formula_41, which is, of course, also a sentence in formula_39. This theorem is a convenient way to derive many useful metalogical results in propositional logic, such as, trivially, the result that the set of connectives formula_40 is functionally complete. Maximum number of conjunctions. Any propositional formula is built from formula_25 variables, where formula_42. There are formula_43 possible literals: formula_44. formula_45 has formula_46 non-empty subsets. This is the maximum number of conjunctions a DNF can have. A full DNF can have up to formula_47 conjunctions, one for each row of the truth table. Example 1 Consider a formula with two variables formula_16 and formula_17. The longest possible DNF has formula_48 conjunctions: formula_49 The longest possible full DNF has 4 conjunctions: they are underlined. This formula is a tautology. Example 2 Each DNF of the example formula formula_50 has formula_51 conjunctions. Computational complexity. The Boolean satisfiability problem on conjunctive normal form formulas is NP-complete. By the duality principle, so is the falsifiability problem on DNF formulas. Therefore, it is co-NP-hard to decide if a DNF formula is a tautology. In contrast, a DNF formula is satisfiable if, and only if, one of its conjunctions is satisfiable. This can be decided in polynomial time simply by checking that at least one conjunction does not contain conflicting literals. Variants. An important variation used in the study of computational complexity is "k-DNF". A formula is in "k-DNF" if it is in DNF and each conjunction contains at most k literals. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
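The semantic construction described above (reading a full DNF off a truth table) is straightforward to mechanize. The following Python fragment is a minimal illustrative sketch only: the helper name full_dnf and the string encoding of literals are choices made here, not part of any standard library, and the example formula encodes the formula_14 discussed above.

```python
from itertools import product

def full_dnf(variables, formula):
    """Collect one conjunction (a tuple of literals, '~' marking negation)
    for every truth-table row on which `formula` evaluates to True."""
    terms = []
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            terms.append(tuple(v if assignment[v] else "~" + v for v in variables))
    return terms

# phi = (not (p and q)) <-> ((not r) NAND (p xor q)), the example formula above.
phi = lambda a: (not (a["p"] and a["q"])) == (not ((not a["r"]) and (a["p"] != a["q"])))

print(full_dnf(["p", "q", "r"], phi))
# [('p', '~q', 'r'), ('~p', 'q', 'r'), ('~p', '~q', 'r'), ('~p', '~q', '~r')]
```

The four conjunctions printed correspond to the four true rows of the truth table and match formula_18; deciding satisfiability of a DNF produced this way is equally simple, since it amounts to finding one conjunction without a complementary pair of literals.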
[ { "math_id": 0, "text": "\\wedge" }, { "math_id": 1, "text": "\\vee" }, { "math_id": 2, "text": "\\neg" }, { "math_id": 3, "text": "(A \\land \\neg B \\land \\neg C) \\lor (\\neg D \\land E \\land F \\land D \\land F)" }, { "math_id": 4, "text": "(A \\land B) \\lor (C)" }, { "math_id": 5, "text": "(A \\land B)" }, { "math_id": 6, "text": "(A)" }, { "math_id": 7, "text": "A \\lor B" }, { "math_id": 8, "text": "(A \\land B) \\lor (A \\land \\lnot B) \\lor (\\lnot A \\land B)" }, { "math_id": 9, "text": "\\neg(A \\lor B)" }, { "math_id": 10, "text": "\\neg(A \\land B) \\lor C" }, { "math_id": 11, "text": "A \\lor (B \\land (C \\lor D))" }, { "math_id": 12, "text": "\\{\\land,\\lor,\\lnot\\}" }, { "math_id": 13, "text": "\\begin{array}{rcl}\n(\\lnot \\lnot x) & \\rightsquigarrow & x \\\\\n(\\lnot (x \\lor y)) & \\rightsquigarrow & ((\\lnot x) \\land (\\lnot y)) \\\\\n(\\lnot (x \\land y)) & \\rightsquigarrow & ((\\lnot x) \\lor (\\lnot y)) \\\\\n(x \\land (y \\lor z)) & \\rightsquigarrow & ((x \\land y) \\lor (x \\land z)) \\\\\n((x \\lor y) \\land z) & \\rightsquigarrow & ((x \\land z) \\lor (y \\land z)) \\\\\n\\end{array}" }, { "math_id": 14, "text": "\\phi = ((\\lnot (p \\land q)) \\leftrightarrow (\\lnot r \\uparrow (p \\oplus q)))" }, { "math_id": 15, "text": "\\phi" }, { "math_id": 16, "text": "p" }, { "math_id": 17, "text": "q" }, { "math_id": 18, "text": "\n( p \\land \\lnot q \\land r) \\lor\n(\\lnot p \\land q \\land r) \\lor\n(\\lnot p \\land \\lnot q \\land r) \\lor\n(\\lnot p \\land \\lnot q \\land \\lnot r)\n" }, { "math_id": 19, "text": "\\lnot \\phi" }, { "math_id": 20, "text": "\n( p \\land q \\land r) \\lor\n( p \\land q \\land \\lnot r) \\lor\n( p \\land \\lnot q \\land \\lnot r) \\lor\n(\\lnot p \\land q \\land \\lnot r)\n" }, { "math_id": 21, "text": "((a \\land b) \\lor (\\lnot a \\land b)) \\rightsquigarrow b" }, { "math_id": 22, "text": "(\\lnot p \\land \\lnot q) \\lor (\\lnot p \\land r) \\lor (\\lnot q \\land r)" }, { "math_id": 23, "text": "X" }, { "math_id": 24, "text": "\\mathcal{L}" }, { "math_id": 25, "text": "n" }, { "math_id": 26, "text": "A_1,...,A_n" }, { "math_id": 27, "text": "\\pm A_1 \\land ... 
\\land \\pm A_n" }, { "math_id": 28, "text": "+A_i=A_i" }, { "math_id": 29, "text": "-A_i= \\neg A_i" }, { "math_id": 30, "text": "A, B, C, \\ldots" }, { "math_id": 31, "text": "\\pm A \\land \\pm B \\land \\pm C \\land \\ldots" }, { "math_id": 32, "text": "\\pm A" }, { "math_id": 33, "text": "A" }, { "math_id": 34, "text": "T" }, { "math_id": 35, "text": "\\neg A" }, { "math_id": 36, "text": "F" }, { "math_id": 37, "text": "\\pm B" }, { "math_id": 38, "text": "\\pm C" }, { "math_id": 39, "text": "\\mathcal{L}[A, B, C, \\ldots; \\land, \\lor, \\neg]" }, { "math_id": 40, "text": "\\{\\land, \\lor, \\neg\\}" }, { "math_id": 41, "text": "A \\land \\neg A" }, { "math_id": 42, "text": "n \\ge 1" }, { "math_id": 43, "text": "2n" }, { "math_id": 44, "text": "L = \\{ p_1, \\lnot p_1, p_2, \\lnot p_2, \\ldots, p_n, \\lnot p_n\\}" }, { "math_id": 45, "text": "L" }, { "math_id": 46, "text": "(2^{2n} -1)" }, { "math_id": 47, "text": "2^{n}" }, { "math_id": 48, "text": "2^{(2 \\times 2)} -1 = 15" }, { "math_id": 49, "text": " \n\\begin{array}{lcl}\n(\\lnot p) \\lor (p) \\lor (\\lnot q) \\lor (q) \\lor \\\\\n(\\lnot p \\land p) \\lor\n\\underline{(\\lnot p \\land \\lnot q)} \\lor\n\\underline{(\\lnot p \\land q)} \\lor\n\\underline{( p \\land \\lnot q)} \\lor\n\\underline{( p \\land q)} \\lor\n(\\lnot q \\land q) \\lor \\\\\n(\\lnot p \\land p \\land \\lnot q) \\lor\t\n(\\lnot p \\land p \\land q) \\lor\t\n(\\lnot p \\land \\lnot q \\land q) \\lor\t\n( p \\land \\lnot q \\land q) \\lor \\\\\n(\\lnot p \\land p \\land \\lnot q \\land q)\n\\end{array}" }, { "math_id": 50, "text": "(X_1 \\lor Y_1) \\land (X_2 \\lor Y_2) \\land \\dots \\land (X_n \\lor Y_n)" }, { "math_id": 51, "text": "2^n" } ]
https://en.wikipedia.org/wiki?curid=73341
73342
Conjunctive normal form
Standard form of Boolean function In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is a product of sums or an AND of ORs. As a canonical normal form, it is useful in automated theorem proving and circuit theory. In automated theorem proving, the notion "clausal normal form" is often used in a narrower sense, meaning a particular representation of a CNF formula as a set of sets of literals. Definition. A logical formula is considered to be in CNF if it is a conjunction of one or more disjunctions of one or more literals. As in disjunctive normal form (DNF), the only propositional operators in CNF are or (formula_0), and (formula_1), and not (formula_2). The "not" operator can only be used as part of a literal, which means that it can only precede a propositional variable. The following is a context-free grammar for CNF: "CNF" → ("Disjunction") formula_3 "CNF" | ("Disjunction"); "Disjunction" → "Literal" formula_4 "Disjunction" | "Literal"; "Literal" → formula_2 "Variable" | "Variable". Where "Variable" is any variable. All of the following formulas in the variables formula_5, and formula_6 are in conjunctive normal form: formula_7, formula_8, formula_9, and formula_10. The following formulas are not in conjunctive normal form: formula_11 and formula_12, since a "not" is applied to a subformula that is not a propositional variable, and formula_13, since an "and" is nested within an "or". Conversion to CNF. In classical logic each propositional formula can be converted to an equivalent formula that is in CNF. This transformation is based on rules about logical equivalences: double negation elimination, De Morgan's laws, and the distributive law. Basic algorithm. The algorithm to compute a CNF-equivalent of a given propositional formula formula_14 builds upon formula_15 in disjunctive normal form (DNF), obtained in step 1 below. Then formula_16 is converted to formula_17 by swapping ANDs with ORs and vice versa while negating all the literals. Finally, all double negations formula_18 are removed. Conversion by syntactic means. Convert to CNF the propositional formula formula_14. Step 1: Convert its negation to disjunctive normal form. formula_19, where each formula_20 is a conjunction of literals formula_21. Step 2: Negate formula_16. Then shift formula_22 inwards by applying the (generalized) De Morgan's equivalences until no longer possible. formula_23 where formula_24 Step 3: Remove all double negations. Example Convert to CNF the propositional formula formula_25. The (full) DNF equivalent of its negation is formula_26, and the conversion then proceeds as formula_27 Conversion by semantic means. A CNF equivalent of a formula can be derived from its truth table. Again, consider the formula formula_28. A CNF equivalent of formula_14 can be read off from the rows of its truth table on which formula_14 evaluates to false: formula_32 Each disjunction reflects an assignment of variables for which formula_14 evaluates to F(alse). If in such an assignment a variable formula_33 is true, the corresponding disjunction contains formula_34; if formula_33 is false, it contains formula_33. Other approaches. Since all propositional formulas can be converted into an equivalent formula in conjunctive normal form, proofs are often based on the assumption that all formulae are in CNF. However, in some cases this conversion to CNF can lead to an exponential explosion of the formula. For example, translating the non-CNF formula formula_35 into CNF produces a formula with formula_36 clauses: formula_37 Each clause contains either formula_38 or formula_39 for each formula_40. There exist transformations into CNF that avoid an exponential increase in size by preserving satisfiability rather than equivalence. These transformations are guaranteed to only linearly increase the size of the formula, but introduce new variables. 
For example, the above formula can be transformed into CNF by adding variables formula_41 as follows: formula_42 An interpretation satisfies this formula only if at least one of the new variables is true. If this variable is formula_43, then both formula_38 and formula_39 are true as well. This means that every model that satisfies this formula also satisfies the original one. On the other hand, only some of the models of the original formula satisfy this one: since the formula_43 are not mentioned in the original formula, their values are irrelevant to satisfaction of it, which is not the case in the last formula. This means that the original formula and the result of the translation are equisatisfiable but not equivalent. An alternative translation, the Tseitin transformation, also includes the clauses formula_44. With these clauses, the formula implies formula_45; this formula is often regarded as "defining" formula_43 to be a name for formula_46. Maximum number of disjunctions. Consider a propositional formula with formula_47 variables, formula_48. There are formula_49 possible literals: formula_50. formula_51 has formula_52 non-empty subsets. This is the maximum number of disjunctions a CNF can have. All truth-functional combinations can be expressed with formula_53 disjunctions, one for each row of the truth table. In the example below they are underlined. Example Consider a formula with two variables formula_29 and formula_30. The longest possible CNF has formula_54 disjunctions: formula_55 This formula is a contradiction. Computational complexity. An important set of problems in computational complexity involves finding assignments to the variables of a boolean formula expressed in conjunctive normal form, such that the formula is true. The "k"-SAT problem is the problem of finding a satisfying assignment to a boolean formula expressed in CNF in which each disjunction contains at most "k" variables. 3-SAT is NP-complete (like any other "k"-SAT problem with "k"&gt;2) while 2-SAT is known to have solutions in polynomial time. As a consequence, the task of converting a formula into a DNF, preserving satisfiability, is NP-hard; dually, converting into CNF, preserving validity, is also NP-hard; hence equivalence-preserving conversion into DNF or CNF is again NP-hard. Typical problems in this case involve formulas in "3CNF": conjunctive normal form with no more than three variables per conjunct. Examples of such formulas encountered in practice can be very large, for example with 100,000 variables and 1,000,000 conjuncts. A formula in CNF can be converted into an equisatisfiable formula in ""k"CNF" (for "k"≥3) by replacing each conjunct with more than "k" variables formula_56 by two conjuncts formula_57 and formula_58 with Z a new variable, and repeating as often as necessary. First-order logic. In first-order logic, conjunctive normal form can be taken further to yield the clausal normal form of a logical formula, which can then be used to perform first-order resolution. In resolution-based automated theorem-proving, a CNF formula is commonly represented as a set of clauses, each clause being a set of literals. See below for an example. Converting from first-order logic. 
To convert first-order logic to CNF: first rewrite away the connective formula_63, replacing formula_59 with formula_60 and formula_61 with formula_62; then move negations inwards using De Morgan's laws and the quantifier dualities, so that formula_64 becomes formula_65, formula_66 becomes formula_67, formula_68 becomes formula_69, formula_70 becomes formula_71, and formula_72 becomes formula_73; standardize variables apart, renaming bound variables so that no variable name is quantified twice, as in formula_74, so that for instance formula_75 becomes formula_76; move quantifiers outwards using equivalences such as formula_77 with formula_78, formula_79 with formula_80, formula_81 with formula_82, and formula_83 with formula_84 (valid when the quantified variable does not occur free in the other conjunct or disjunct); Skolemize existential quantifiers, replacing formula_86 with formula_87, where formula_88 is a new Skolem function; drop the remaining universal quantifiers; and finally distribute ORs over ANDs, rewriting formula_89 as formula_90. Example As an example, the formula saying "Anyone who loves all animals, is in turn loved by someone" is converted into CNF (and subsequently into clause form in the last line) as follows (highlighting replacement rule redexes in formula_91): Informally, the Skolem function formula_92 can be thought of as yielding the person by whom formula_85 is loved, while formula_93 yields the animal (if any) that formula_85 doesn't love. The third-to-last line of the derivation then reads as "formula_85 doesn't love the animal formula_93, or else formula_85 is loved by formula_92". The second-to-last line, formula_94, is the CNF. See also. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
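The satisfiability-preserving translation sketched in the "Other approaches" section above is easy to write down explicitly. The following Python fragment is a minimal illustrative sketch under stated assumptions: the helper name equisat_cnf and the string encoding of literals are choices made here, not part of any library. It emits the clauses of formula_42 as sets of literals, in line with the set-of-sets representation mentioned in the lead.

```python
def equisat_cnf(n):
    """Clauses of the satisfiability-preserving CNF translation of
    (X1 & Y1) | ... | (Xn & Yn), with fresh variables Z1..Zn and '-' marking negation."""
    clauses = [frozenset(f"Z{i}" for i in range(1, n + 1))]   # Z1 | ... | Zn
    for i in range(1, n + 1):
        clauses.append(frozenset({f"-Z{i}", f"X{i}"}))        # -Zi | Xi
        clauses.append(frozenset({f"-Z{i}", f"Y{i}"}))        # -Zi | Yi
    return clauses

# 2n + 1 clauses instead of the 2**n clauses of the equivalent CNF;
# adding the clauses Zi | -Xi | -Yi would give the full Tseitin transformation.
print(len(equisat_cnf(10)))  # 21
```

Every model of these clauses restricts to a model of the original formula, while the clause count grows only linearly with n, as discussed above.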
[ { "math_id": 0, "text": "\\vee" }, { "math_id": 1, "text": "\\wedge" }, { "math_id": 2, "text": "\\neg" }, { "math_id": 3, "text": "\\land" }, { "math_id": 4, "text": "\\lor" }, { "math_id": 5, "text": "A,B,C,D,E" }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "(A \\lor \\neg B \\lor \\neg C) \\land (\\neg D \\lor E \\lor F \\lor D \\lor F)" }, { "math_id": 8, "text": "(A \\lor B) \\land (C)" }, { "math_id": 9, "text": "(A \\lor B)" }, { "math_id": 10, "text": "(A)" }, { "math_id": 11, "text": "\\neg (A \\land B)" }, { "math_id": 12, "text": "\\neg(A \\lor B) \\land C" }, { "math_id": 13, "text": "A \\land (B \\lor (D \\land E))" }, { "math_id": 14, "text": "\\phi" }, { "math_id": 15, "text": "\\lnot \\phi" }, { "math_id": 16, "text": "\\lnot \\phi_{DNF}" }, { "math_id": 17, "text": "\\phi_{CNF}" }, { "math_id": 18, "text": "\\lnot \\lnot" }, { "math_id": 19, "text": "\\lnot \\phi_{DNF} = (C_1 \\lor C_2 \\lor \\ldots \\lor C_i \\lor \\ldots \\lor C_m)" }, { "math_id": 20, "text": "C_i" }, { "math_id": 21, "text": "l_{i1} \\land l_{i2} \\land \\ldots \\land l_{in_i}" }, { "math_id": 22, "text": "\\lnot" }, { "math_id": 23, "text": "\\begin{align}\n\\phi &\\leftrightarrow \\lnot \\lnot \\phi_{DNF} \\\\\n&= \\lnot (C_1 \\lor C_2 \\lor \\ldots \\lor C_i \\lor \\ldots \\lor C_m) \\\\\n&\\leftrightarrow \\lnot C_1 \\land \\lnot C_2 \\land \\ldots \\land \\lnot C_i \\land \\ldots \\land \\lnot C_m &&\\text{// (generalized) D.M.} \n\\end{align}" }, { "math_id": 24, "text": "\\begin{align}\n\\lnot C_i &= \\lnot (l_{i1} \\land l_{i2} \\land \\ldots \\land l_{in_i}) \\\\\n&\\leftrightarrow (\\lnot l_{i1} \\lor \\lnot l_{i2} \\lor \\ldots \\lor \\lnot l_{in_i}) &&\\text{// (generalized) D.M.}\n\\end{align}" }, { "math_id": 25, "text": "\\phi = ((\\lnot (p \\land q)) \\leftrightarrow (\\lnot r \\uparrow (p \\oplus q)))" }, { "math_id": 26, "text": " \\lnot \\phi_{DNF} =\n( p \\land q \\land r) \\lor\n( p \\land q \\land \\lnot r) \\lor\n( p \\land \\lnot q \\land \\lnot r) \\lor\n(\\lnot p \\land q \\land \\lnot r) " }, { "math_id": 27, "text": "\\begin{align}\n\\phi &\\leftrightarrow \\lnot \\lnot \\phi_{DNF} \\\\\n&= \\lnot \\{\n( p \\land q \\land r) \\lor\n( p \\land q \\land \\lnot r) \\lor\n( p \\land \\lnot q \\land \\lnot r) \\lor\n(\\lnot p \\land q \\land \\lnot r) \\} \\\\\n&\\leftrightarrow\n\\underline{\\lnot( p \\land q \\land r)} \\land\n\\underline{\\lnot( p \\land q \\land \\lnot r)} \\land\n\\underline{\\lnot( p \\land \\lnot q \\land \\lnot r)} \\land\n\\underline{\\lnot(\\lnot p \\land q \\land \\lnot r)} &&\\text{// generalized D.M. } \\\\\n&\\leftrightarrow\n(\\lnot p \\lor \\lnot q \\lor \\lnot r) \\land\n(\\lnot p \\lor \\lnot q \\lor \\lnot \\lnot r) \\land\n(\\lnot p \\lor \\lnot \\lnot q \\lor \\lnot \\lnot r) \\land\n(\\lnot \\lnot p \\lor \\lnot q \\lor \\lnot \\lnot r) &&\\text{// generalized D.M. 
} (4 \\times) \\\\\n&\\leftrightarrow\n(\\lnot p \\lor \\lnot q \\lor \\lnot r) \\land\n(\\lnot p \\lor \\lnot q \\lor r) \\land\n(\\lnot p \\lor q \\lor r) \\land\n( p \\lor \\lnot q \\lor r) &&\\text{// remove all } \\lnot \\lnot \\\\\n&= \\phi_{CNF}\n\\end{align}" }, { "math_id": 28, "text": "\\phi = ((\\lnot (p \\land q)) \\leftrightarrow (\\lnot r \\uparrow (p \\oplus q)))" }, { "math_id": 29, "text": "p" }, { "math_id": 30, "text": "q" }, { "math_id": 31, "text": "\\leftrightarrow" }, { "math_id": 32, "text": "\n(\\lnot p \\lor \\lnot q \\lor \\lnot r) \\land\n(\\lnot p \\lor \\lnot q \\lor r) \\land\n(\\lnot p \\lor q \\lor r) \\land\n( p \\lor \\lnot q \\lor r)\n" }, { "math_id": 33, "text": "V" }, { "math_id": 34, "text": "\\lnot V" }, { "math_id": 35, "text": "(X_1 \\wedge Y_1) \\vee (X_2 \\wedge Y_2) \\vee \\ldots \\vee (X_n \\wedge Y_n)" }, { "math_id": 36, "text": "2^n" }, { "math_id": 37, "text": "(X_1 \\vee X_2 \\vee \\ldots \\vee X_n) \\wedge (Y_1 \\vee X_2 \\vee \\ldots \\vee X_n) \\wedge (X_1 \\vee Y_2 \\vee \\ldots \\vee X_n) \\wedge (Y_1 \\vee Y_2 \\vee \\ldots \\vee X_n) \\wedge \\ldots \\wedge (Y_1 \\vee Y_2 \\vee \\ldots \\vee Y_n)." }, { "math_id": 38, "text": "X_i" }, { "math_id": 39, "text": "Y_i" }, { "math_id": 40, "text": "i" }, { "math_id": 41, "text": "Z_1,\\ldots,Z_n" }, { "math_id": 42, "text": "(Z_1 \\vee \\ldots \\vee Z_n) \\wedge\n(\\neg Z_1 \\vee X_1) \\wedge (\\neg Z_1 \\vee Y_1) \\wedge\n\\ldots \\wedge \n(\\neg Z_n \\vee X_n) \\wedge (\\neg Z_n \\vee Y_n). " }, { "math_id": 43, "text": "Z_i" }, { "math_id": 44, "text": "Z_i \\vee \\neg X_i \\vee \\neg Y_i" }, { "math_id": 45, "text": "Z_i \\equiv X_i \\wedge Y_i" }, { "math_id": 46, "text": "X_i \\wedge Y_i" }, { "math_id": 47, "text": "n" }, { "math_id": 48, "text": "n \\ge 1" }, { "math_id": 49, "text": "2n" }, { "math_id": 50, "text": "L = \\{ p_1, \\lnot p_1, p_2, \\lnot p_2, \\ldots, p_n, \\lnot p_n\\}" }, { "math_id": 51, "text": "L" }, { "math_id": 52, "text": "(2^{2n} -1)" }, { "math_id": 53, "text": "2^{n}" }, { "math_id": 54, "text": "2^{(2 \\times 2)} -1 = 15" }, { "math_id": 55, "text": " \n\\begin{array}{lcl}\n(\\lnot p) \\land (p) \\land (\\lnot q) \\land (q) \\land \\\\\n(\\lnot p \\or p) \\land\n\\underline{(\\lnot p \\or \\lnot q)} \\land\n\\underline{(\\lnot p \\or q)} \\land\n\\underline{( p \\or \\lnot q)} \\land\n\\underline{( p \\or q)} \\land\n(\\lnot q \\or q) \\land \\\\\n(\\lnot p \\or p \\or \\lnot q) \\land\t\n(\\lnot p \\or p \\or q) \\land\t\n(\\lnot p \\or \\lnot q \\or q) \\land\t\n( p \\or \\lnot q \\or q) \\land \\\\\n(\\lnot p \\or p \\or \\lnot q \\or q)\n\\end{array}" }, { "math_id": 56, "text": "X_1 \\vee \\ldots \\vee X_k \\vee \\ldots \\vee X_n" }, { "math_id": 57, "text": "X_1 \\vee \\ldots \\vee X_{k-1} \\vee Z" }, { "math_id": 58, "text": "\\neg Z \\vee X_k \\lor \\ldots \\vee X_n" }, { "math_id": 59, "text": "P \\rightarrow Q" }, { "math_id": 60, "text": "\\lnot P \\lor Q" }, { "math_id": 61, "text": "P \\leftrightarrow Q" }, { "math_id": 62, "text": "(P \\lor \\lnot Q) \\land (\\lnot P \\lor Q)" }, { "math_id": 63, "text": "\\rightarrow" }, { "math_id": 64, "text": "\\lnot (P \\lor Q)" }, { "math_id": 65, "text": "(\\lnot P) \\land (\\lnot Q)" }, { "math_id": 66, "text": "\\lnot (P \\land Q)" }, { "math_id": 67, "text": "(\\lnot P) \\lor (\\lnot Q)" }, { "math_id": 68, "text": "\\lnot\\lnot P" }, { "math_id": 69, "text": "P" }, { "math_id": 70, "text": "\\lnot (\\forall x P(x))" }, { "math_id": 71, "text": "\\exists x \\lnot P(x)" }, { "math_id": 72, 
"text": "\\lnot (\\exists x P(x))" }, { "math_id": 73, "text": "\\forall x \\lnot P(x)" }, { "math_id": 74, "text": "(\\forall x P(x)) \\lor (\\exists x Q(x))" }, { "math_id": 75, "text": "\\forall x [\\exists y \\mathrm{Animal}(y) \\land \\lnot \\mathrm{Loves}(x, y)] \\lor [\\exists y \\mathrm{Loves}(y, x)]" }, { "math_id": 76, "text": "\\forall x [\\exists y \\mathrm{Animal}(y) \\land \\lnot \\mathrm{Loves}(x, y)] \\lor [\\exists z \\mathrm{Loves}(z,x)]" }, { "math_id": 77, "text": "P \\land (\\forall x Q(x))" }, { "math_id": 78, "text": "\\forall x (P \\land Q(x))" }, { "math_id": 79, "text": "P \\lor (\\forall x Q(x))" }, { "math_id": 80, "text": "\\forall x (P \\lor Q(x))" }, { "math_id": 81, "text": "P \\land (\\exists x Q(x))" }, { "math_id": 82, "text": "\\exists x (P \\land Q(x))" }, { "math_id": 83, "text": "P \\lor (\\exists x Q(x))" }, { "math_id": 84, "text": "\\exists x (P \\lor Q(x))" }, { "math_id": 85, "text": "x" }, { "math_id": 86, "text": "\\forall x_1 \\ldots \\forall x_n \\; \\exists y \\; P(y)" }, { "math_id": 87, "text": "\\forall x_1 \\ldots \\forall x_n \\; P(f(x_1,\\ldots,x_n))" }, { "math_id": 88, "text": "f" }, { "math_id": 89, "text": "P \\lor (Q \\land R)" }, { "math_id": 90, "text": "(P \\lor Q) \\land (P \\lor R)" }, { "math_id": 91, "text": "{\\color{red}{\\text{red}}}" }, { "math_id": 92, "text": "g(x)" }, { "math_id": 93, "text": "f(x)" }, { "math_id": 94, "text": "(\\mathrm{Animal}(f(x)) \\lor \\mathrm{Loves}(g(x), x)) \\land (\\lnot \\mathrm{Loves}(x, f(x)) \\lor \\mathrm{Loves}(g(x), x))" } ]
https://en.wikipedia.org/wiki?curid=73342
7334318
Second-harmonic generation
Nonlinear optical process Second-harmonic generation (SHG), also known as frequency doubling, is the lowest-order wave-wave nonlinear interaction that occurs in various systems, including optical, radio, atmospheric, and magnetohydrodynamic systems. As a prototype behavior of waves, SHG is widely used, for example, in doubling laser frequencies. SHG was initially discovered as a nonlinear optical process in which two photons with the same frequency interact with a nonlinear material, are "combined", and generate a new photon with twice the energy of the initial photons (equivalently, twice the frequency and half the wavelength), that conserves the coherence of the excitation. It is a special case of sum-frequency generation (2 photons), and more generally of harmonic generation. The second-order nonlinear susceptibility of a medium characterizes its tendency to cause SHG. Second-harmonic generation, like other even-order nonlinear optical phenomena, is not allowed in media with inversion symmetry (in the leading electric dipole contribution). However, effects such as the Bloch–Siegert shift (oscillation), found when two-level systems are driven at Rabi frequencies comparable to their transition frequencies, will give rise to second-harmonic generation in centro-symmetric systems. In addition, in non-centrosymmetric crystals belonging to crystallographic point group 432, SHG is not possible and under Kleinman's conditions SHG in 422 and 622 point groups should vanish, although some exceptions exist. In some cases, almost 100% of the light energy can be converted to the second-harmonic frequency. These cases typically involve intense pulsed laser beams passing through large crystals and careful alignment to obtain phase matching. In other cases, like second-harmonic imaging microscopy, only a tiny fraction of the light energy is converted to the second harmonic, but this light can nevertheless be detected with the help of optical filters. Generating the second harmonic, often called frequency doubling, is also a process in radio communication; it was developed early in the 20th century and has been used with frequencies in the megahertz range. It is a special case of frequency multiplication. History. Second-harmonic generation was first demonstrated by Peter Franken, A. E. Hill, C. W. Peters, and G. Weinreich at the University of Michigan, Ann Arbor, in 1961. The demonstration was made possible by the invention of the laser, which created the required high-intensity coherent light. They focused a ruby laser with a wavelength of 694 nm into a quartz sample. They sent the output light through a spectrometer, recording the spectrum on photographic paper, which indicated the production of light at 347 nm. Famously, when published in the journal "Physical Review Letters", the copy editor mistook the dim spot (at 347 nm) on the photographic paper as a speck of dirt and removed it from the publication. The formulation of SHG was initially described by N. Bloembergen and P. S. Pershan at Harvard in 1962. In their extensive evaluation of Maxwell's equations at the planar interface between a linear and nonlinear medium, several rules for the interaction of light in non-linear media were elucidated. Types in crystals. Critical phase-matching. Second-harmonic generation occurs in three types for critical phase-matching, denoted 0, I and II. 
In "Type 0 SHG" two photons having extraordinary polarization with respect to the crystal will combine to form a single photon with double the frequency/energy and extraordinary polarization. In "Type I SHG" two photons having ordinary polarization with respect to the crystal will combine to form one photon with double the frequency and extraordinary polarization. In "Type II SHG", two photons having orthogonal polarizations will combine to form one photon with double the frequency and ordinary polarization. For a given crystal orientation, only one of these types of SHG occurs. In general to utilise "Type 0" interactions a quasi-phase-matching crystal type will be required, for example periodically poled lithium niobate (PPLN). Non-critical phase-matching. Since phase-matching process basically means to adapt the optical indices n at ω and 2ω, it can also be done by a temperature control in some birefringent crystals, because n changes with the temperature. For instance, LBO presents a perfect phase-matching at 25 °C for a SHG excited at 1200 or 1400 nm, but needs to be elevated at 200 °C for SHG with the usual laser line of 1064 nm. It is called "non-critical" because it does not depend on the crystal orientation as usual phase-matching. Optical second-harmonic generation. Since media with inversion symmetry are forbidden from generating second-harmonic light via the leading-order electric dipole contribution (unlike third harmonic generation), surfaces and interfaces make interesting subjects for study with SHG. In fact, second-harmonic generation and sum frequency generation discriminate against signals from the bulk, implicitly labeling them as surface specific techniques. In 1982, T. F. Heinz and Y. R. Shen explicitly demonstrated for the first time that SHG could be used as a spectroscopic technique to probe molecular monolayers adsorbed to surfaces. Heinz and Shen adsorbed monolayers of laser dye rhodamine to a planar fused silica surface; the coated surface was then pumped by a nanosecond ultra-fast laser. SH light with characteristic spectra of the adsorbed molecule and its electronic transitions were measured as reflection from the surface and demonstrated a quadratic power dependence on the pump laser power. In SHG spectroscopy, one focuses on measuring twice the incident frequency 2"ω" given an incoming electric field formula_0 in order to reveal information about a surface. Simply (for a more in-depth derivation see below), the induced second-harmonic dipole per unit volume, formula_1, can be written as formula_2 where formula_3 is known as the nonlinear susceptibility tensor and is a characteristic to the materials at the interface of study. The generated formula_4 and corresponding formula_3 have been shown to reveal information about the orientation of molecules at a surface/interface, the interfacial analytical chemistry of surfaces, and chemical reactions at interfaces. From planar surfaces. Early experiments in the field demonstrated second-harmonic generation from metal surfaces. Eventually, SHG was used to probe the air-water interface, allowing for detailed information about molecular orientation and ordering at one of the most ubiquitous of surfaces. 
It can be shown that the specific elements of formula_3: formula_5 where "N""s" is the adsorbate density, "θ" is the angle that the molecular axis "z" makes with the surface normal "Z", and formula_6 is the dominating element of the nonlinear polarizability of a molecule at an interface, allow one to determine "θ", given laboratory coordinates ("x", "y", "z"). Using an interference SHG method to determine these elements of "χ"(2), the first molecular orientation measurement showed that the hydroxyl group of phenol pointed downwards into the water at the air-water interface (as expected due to the potential of hydroxyl groups to form hydrogen bonds). Additionally, SHG at planar surfaces has revealed differences in "pK""a" and rotational motions of molecules at interfaces. From non-planar surfaces. Second-harmonic light can also be generated from surfaces that are "locally" planar, but may have inversion symmetry (centrosymmetric) on a larger scale. Specifically, recent theory has demonstrated that SHG from small spherical particles (micro- and nanometer scale) is allowed by proper treatment of Rayleigh scattering (scattering without a change in frequency from absorbed to emitted waves). At the surface of a small sphere, inversion symmetry is broken, allowing for SHG and other even order harmonics to occur. For a colloidal system of microparticles at relatively low concentrations, the total SH signal formula_7 is given by: formula_8 where formula_9 is the SH electric field generated by the "j"th particle, and "n" the density of particles. The SH light generated from each particle is coherent, but adds incoherently to the SH light generated by others (as long as density is low enough). Thus, SH light is only generated from the interfaces of the spheres and their environment and is independent of particle-particle interactions. It has also been shown that the second-harmonic electric field formula_4 scales with the cube of the particle radius "a". Besides spheres, other small particles like rods have been studied similarly by SHG. Both immobilized and colloidal systems of small particles can be investigated. Recent experiments using second-harmonic generation of non-planar systems include transport kinetics across living cell membranes and demonstrations of SHG in complex nanomaterials. Radiation pattern. The SHG radiation pattern generated by an exciting Gaussian beam also has a (homogeneous) 2D Gaussian profile if the nonlinear medium being excited is homogeneous (A). However, if the exciting beam is positioned at an interface between opposite polarities (± boundary, "B") that is parallel to the beam propagation (see figure), the SHG will be split into two lobes whose amplitudes have opposite sign, i.e. are formula_10 phase-shifted. These boundaries can be found in the sarcomeres of muscles (protein = myosin), for instance. Note that we have considered here only the forward generation. Moreover, SHG phase-matching can also result in formula_11: some SHG is also emitted backward (in the epi direction). When phase-matching is not fulfilled, as in biological tissues, the backward signal comes from a sufficiently high phase-mismatch, which allows a small backward contribution to compensate for it. Unlike fluorescence, the spatial coherence of the process constrains it to emit only in those two directions, but the coherence length in the backward direction is always much smaller than in the forward direction, meaning there is always more forward than backward SHG signal. 
The forward ("F") to backward ("B") ratio is dependent on the arrangement of the different dipoles (green in figure) that are being excited. With only one dipole ((a) in the figure), "F" = "B", but "F" becomes higher than "B" when more dipoles are stacked along the propagation direction (b and c). However, the Gouy phase-shift of the Gaussian beam will imply a formula_10 phase-shift between the SHGs generated at the edges of the focal volume, and can thus result in destructive interferences (zero signal) if there are dipoles at these edges having the same orientation (case (d) in the figure). Applications. Green lasers. Second-harmonic generation is used by the laser industry to make green 532 nm lasers from a 1064 nm source. The 1064 nm light is fed through a bulk nonlinear crystal (typically made of KDP or KTP). In high-quality diode lasers the crystal is coated on the output side with an infrared filter to prevent leakage of intense 1064 nm or 808 nm infrared light into the beam. Both of these wavelengths are invisible and do not trigger the defensive "blink-reflex" reaction in the eye and can therefore be a special hazard to human eyes. Furthermore, some laser safety eyewear intended for argon or other green lasers may filter out the green component (giving a false sense of safety), but transmit the infrared. Nevertheless, some "green laser pointer" products have become available on the market which omit the expensive infrared filter, often without warning. Ultra-short pulse measurement. Second-harmonic generation is also used for measuring ultra-short pulse widths with autocorrelators. Characterizing an ultrashort pulse (like measuring its temporal width) cannot be done directly with electronics only, as the time-scale is below 1ps (formula_12sec) : it needs to use the pulse itself, that is why an autocorrelation function is often used. SHG has the advantage of mixing two input fields to generate the harmonic one, it is thus a good candidate (but not the only one) to perform such a pulse measurement. Optical autocorrelation, in its intensity or fringe-resolved (interferometric) version use SHG, unlike field autocorrelation. Also, most versions of the FROG (called SHG-FROG) use SHG to mix the delayed fields. Second-harmonic generation microscopy. In biological and medical science, the effect of second-harmonic generation is used for high-resolution optical microscopy. Because of the non-zero second-harmonic coefficient, only non-centrosymmetric structures are capable of emitting SHG light. One such structure is collagen, which is found in most load-bearing tissues. Using a short-pulse laser such as a femtosecond laser and a set of appropriate filters the excitation light can be easily separated from the emitted, frequency-doubled SHG signal. This allows for very high axial and lateral resolution comparable to that of confocal microscopy without having to use pinholes. SHG microscopy has been used for studies of the cornea and lamina cribrosa sclerae, both of which consist primarily of collagen. Second-harmonic generation can be produced by several non-centrosymmetric organic dyes; however, most of the organic dyes also generate collateral fluorescence along with second-harmonic generation signals. Until now, only two classes of organic dyes have been shown which do not produce any collateral fluorescence and works purely on second-harmonic generation. 
Recently, using two-photon excited fluorescence and second-harmonic generation-based microscopy, a group of Oxford University researchers showed that organic porphyrin-type molecules can have different transition dipole moments for two-photon fluorescence and second-harmonic generation, which are otherwise thought to occur from the same transition dipole moment. Second-harmonic generation microscopy is also used in materials science, for instance to characterize nanostructured materials. Characterization of crystalline materials. Second-harmonic generation is also relevant for characterizing organic or inorganic crystals, since it is one of the most discriminating and rapid techniques for detecting non-centrosymmetry. In addition, this technique can be used on single crystals as well as on powdered samples. One should recall that SHG is only possible (from the bulk) in non-centrosymmetric (NC) crystals. The proportion of non-centrosymmetric crystals in nature is much lower than that of centrosymmetric crystals (circa 22% of the Cambridge Structural Database), but the frequency of NC crystals is much higher in the pharmaceutical, biological and electronic fields because of the particular properties of these crystals (piezoelectricity, pyroelectricity, polar phases, chirality, etc.). In 1968 (7 years after the first experimental evidence of SHG on a single crystal), Kurtz and Perry started to develop an SHG analyzer to rapidly detect the presence or absence of an inversion center in powdered crystalline samples. The detection of an SHG signal has been shown to be a reliable and sensitive test for the detection of crystalline non-centrosymmetry, with a confidence level higher than 99%. It is a relevant tool to resolve space group ambiguities that can arise from Friedel's law in single-crystal X-ray diffraction. Furthermore, the method is referenced in the International Tables for Crystallography and is described as a "powerful method of testing crystalline materials for the absence of a symmetry center." One possible application is also to rapidly discriminate chiral phases such as conglomerates, which are of particular interest for the pharmaceutical industry. It could also be used as a technique to probe the structural purity of a material if one of the impurities is NC, reaching a detection threshold as low as 1 ppm using a Kurtz–Perry apparatus and up to one part in 10 billion by volume using an SHG microscope. Due to the high sensitivity of the technique, it can be a helpful tool in the accurate determination of phase diagrams and can also be used to monitor phase transitions (polymorphic transition, dehydration, ...) when at least one of the phases is NC. Theoretical derivation (plane wave). At low conversion. The simplest case for analysis of second-harmonic generation is a plane wave of amplitude "E"("ω") traveling in a nonlinear medium in the direction of its "k" vector. A polarization is generated at the second-harmonic frequency: formula_13 where formula_14 is the effective nonlinear optical coefficient which is dependent on specific components of formula_3 that are involved in this particular interaction. The wave equation at 2ω (assuming negligible loss and asserting the slowly varying envelope approximation) is formula_15 where formula_16. At low conversion efficiency ("E"(2"ω") ≪ "E"("ω")) the amplitude formula_0 remains essentially constant over the interaction length, formula_17. 
Then, with the boundary condition formula_18 we obtain formula_19 In terms of the optical intensity, formula_20, this is formula_21 This intensity is maximized for the phase-matched condition Δ"k" = 0. If the process is not phase matched, the driving polarization at "ω" goes in and out of phase with the generated wave "E"(2"ω") and conversion oscillates as sin(Δ"kℓ"/2). The coherence length is defined as formula_22. It does not pay to use a nonlinear crystal much longer than the coherence length. (Periodic poling and quasi-phase-matching provide another approach to this problem.) With depletion. When the conversion to the second harmonic becomes significant, it becomes necessary to include depletion of the fundamental. Energy conservation requires that all the involved fields verify the Manley–Rowe relations. One then has the coupled equations: formula_24 where formula_25 denotes the complex conjugate. For simplicity, assume phase-matched generation (formula_23). Then, energy conservation requires that formula_26 where formula_27 is the complex conjugate of the other term, or formula_28 Now we solve the equations with the premise formula_29 and obtain formula_30 which leads to formula_31 Using formula_32 we get formula_33 If we assume a real formula_14, the relative phases for real harmonic growth must be such that formula_34. Then formula_35 or formula_36 where formula_37. From formula_38, it also follows that formula_39 Theoretical expression with Gaussian beams. The excitation wave is assumed to be a Gaussian beam, of amplitude: formula_40 with formula_41, formula_42 the direction of propagation, formula_43 the Rayleigh range, formula_44 the wave vector. Each wave verifies the wave equation formula_45 where formula_46. With phase-matching. It can be shown that formula_47 (a Gaussian) is a solution of the equation ("n" = 2 for SHG). No phase-matching. Imperfect phase-matching is a more realistic condition in practice, especially in biological samples. The paraxial approximation is, however, still assumed to be valid: formula_48, and in the harmonic expression, formula_49 is now formula_50. In the special case of SHG ("n" = 2), in a medium of length "L" and a focus position formula_51, the intensity is written: formula_52 where formula_53 is the speed of light in vacuum, formula_54 the vacuum permittivity, formula_55 the optical index of the medium at formula_56 and formula_57 the waist size of excitation. Thus, the SHG intensity quickly decays in the bulk (formula_58), due to the Gouy phase-shift of the Gaussian beam. In conformity with experiments, the SHG signal vanishes in the bulk (if the medium thickness is too large), and the SHG must be generated at the surface of the material: the conversion therefore does not strictly scale with the square of the number of scatterers, contrary to what the plane wave model indicates. Interestingly, the signal also vanishes in bulk for higher orders, like THG. Materials used. Materials capable of generating a second harmonic are crystals without inversion symmetry, except crystals with point group 432. This eliminates water and glass. Notably, filamentous biological proteins with a cylindrical symmetry such as collagen, tubulin or myosin, but also certain carbohydrates (such as starch or cellulose), are quite good converters of SHG (with the fundamental in the near infrared). Examples of crystals used for SHG conversion: For common types of diode-pumped solid state lasers with input wavelengths: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
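As a numerical illustration of the low-conversion plane-wave result formula_21 above, the following Python sketch evaluates the undepleted-pump SHG intensity as a function of the phase mismatch. It is a sketch only: the function name and the parameter values in the example call are placeholders chosen here to show the sinc-squared behaviour, not data for any particular crystal.

```python
import numpy as np

def shg_intensity(I_omega, d_eff, length, delta_k, n_omega, n_2omega, wavelength):
    """Undepleted-pump plane-wave SHG intensity, following the expression quoted above:
    I(2w) = 2 w^2 d_eff^2 l^2 / (n_2w n_w^2 c^3 eps0) * sinc^2(dk*l/2) * I(w)^2."""
    eps0, c = 8.854e-12, 2.998e8          # SI units
    omega = 2 * np.pi * c / wavelength    # fundamental angular frequency
    # np.sinc(x) = sin(pi x)/(pi x), so pass delta_k*length/(2*pi) to get sin(u)/u with u = dk*l/2
    sinc_sq = np.sinc(delta_k * length / (2 * np.pi)) ** 2
    prefactor = 2 * omega**2 * d_eff**2 * length**2 / (n_2omega * n_omega**2 * c**3 * eps0)
    return prefactor * sinc_sq * I_omega**2

# Illustrative values only: conversion peaks at delta_k = 0 and falls off as the mismatch grows.
for dk in (0.0, 2e3, 1e4):                # rad/m
    print(dk, shg_intensity(I_omega=1e13, d_eff=1e-12, length=1e-3, delta_k=dk,
                            n_omega=1.65, n_2omega=1.66, wavelength=1.064e-6))
```

In the depleted regime the same quantities enter the tanh-squared expression formula_36 instead, so a sketch like this one is only meaningful while I(2ω) remains a small fraction of I(ω).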
[ { "math_id": 0, "text": "E(\\omega)" }, { "math_id": 1, "text": "P^{(2)}(2\\omega)" }, { "math_id": 2, "text": "E(2\\omega) \\propto P^{(2)}(2\\omega) = \\chi^{(2)} E(\\omega) E(\\omega)" }, { "math_id": 3, "text": "\\chi^{(2)}" }, { "math_id": 4, "text": "E(2\\omega)" }, { "math_id": 5, "text": "\\begin{align}\n \\chi^{(2)}_{zzz} &= N_s \\left\\langle \\cos^3(\\theta) \\right\\rangle \\alpha^{(2)}_{zzz} \\\\\n \\chi^{(2)}_{xzx} &= \\frac{1}{2} N_s \\left\\langle \\cos(\\theta) \\sin^2(\\theta) \\right\\rangle \\alpha^{(2)}_{zzz}\n\\end{align}" }, { "math_id": 6, "text": "\\alpha^{(2)}_{zzz}" }, { "math_id": 7, "text": "I^\\text{total}_{2\\omega}" }, { "math_id": 8, "text": "I^\\text{total}_{2\\omega} \\propto \\sum\\limits_{j=1}^n \\left(E^{2\\omega}_j\\right)^2 = n\\left(E^{2\\omega}\\right)^2 = nI_{2\\omega}" }, { "math_id": 9, "text": "E^{2\\omega}_j" }, { "math_id": 10, "text": "\\pi" }, { "math_id": 11, "text": "\\vec k_{2\\omega} = -2\\vec k_\\omega" }, { "math_id": 12, "text": "10^{-12}" }, { "math_id": 13, "text": "P(2\\omega) = \\varepsilon_0 \\chi^{(2)} E^2(\\omega) = 2\\varepsilon_0 d_\\text{eff}(2\\omega; \\omega, \\omega) E^2(\\omega),\\," }, { "math_id": 14, "text": "d_\\text{eff}" }, { "math_id": 15, "text": "\\frac{\\partial E(2\\omega)}{\\partial z} = -\\frac{i\\omega}{n_{2\\omega}c} d_\\text{eff} E^2(\\omega)e^{i \\, \\Delta k \\, z}" }, { "math_id": 16, "text": "\\Delta k = k(2\\omega) - 2k(\\omega)" }, { "math_id": 17, "text": "\\ell" }, { "math_id": 18, "text": "E(2\\omega, z = 0) = 0" }, { "math_id": 19, "text": "E(2\\omega, z = \\ell) =\n -\\frac{i\\omega d_\\text{eff}}{n_{2\\omega}c} E^2(\\omega) \\int_0^\\ell e^{i\\,\\Delta k \\, z} \\, dz =\n -\\frac{i\\omega d_\\text{eff}}{n_{2\\omega}c} E^2(\\omega) \\ell\\, \\frac{\\sin\\left(\\frac{1}{2}\\,\\Delta k \\, \\ell\\right)}{\\frac{1}{2}\\,\\Delta k\\, \\ell} e^{\\frac{i}{2} \\, \\Delta k\\, \\ell}\n" }, { "math_id": 20, "text": "I = n/2\\sqrt{\\varepsilon_0/\\mu_0}|E|^2" }, { "math_id": 21, "text": "I(2\\omega,\\ell) = \\frac{2\\omega^2 d^2_\\text{eff} \\ell^2}{n_{2\\omega} n_\\omega^2 c^3 \\varepsilon_0}\\left(\\frac{\\sin\\left(\\frac{1}{2} \\,\\Delta k \\, \\ell\\right)}{\\frac{1}{2} \\,\\Delta k\\, \\ell}\\right)^2 I^2(\\omega)" }, { "math_id": 22, "text": "\\ell_c = \\frac{\\pi}{\\Delta k}" }, { "math_id": 23, "text": "\\Delta k = 0" }, { "math_id": 24, "text": "\\begin{align}\n \\frac{\\partial E(2\\omega)}{\\partial z} &= -\\frac{i\\omega}{n_{2\\omega}c} d_\\text{eff} E^2(\\omega) e^{i\\,\\Delta k \\, z}, \\\\[5pt]\n \\frac{\\partial E(\\omega)}{\\partial z} &= -\\frac{i\\omega}{n_{\\omega}c} d_\\text{eff}^* E(2\\omega) E^*(\\omega)e^{-i\\,\\Delta k \\, z},\n\\end{align}" }, { "math_id": 25, "text": "*" }, { "math_id": 26, "text": "\n n_{2\\omega} \\left[E^*(2\\omega)\\frac{\\partial E(2\\omega)}{\\partial z} + \\text{c.c.} \\right] =\n -n_\\omega \\left[E(\\omega)\\frac{\\partial E^*(\\omega)}{\\partial z} + \\text{c.c.} \\right]\n" }, { "math_id": 27, "text": "\\text{c.c.}" }, { "math_id": 28, "text": "n_{2\\omega} \\left|E(2\\omega)\\right|^2 + n_\\omega|E(\\omega)|^2 = n_{2\\omega} E_0^2. 
" }, { "math_id": 29, "text": "\\begin{align}\n E(\\omega) &= \\left|E(\\omega)\\right| e^{i\\varphi(\\omega)} \\\\\n E(2\\omega) &= \\left|E(2\\omega)\\right| e^{i\\varphi(2\\omega)}\n\\end{align}" }, { "math_id": 30, "text": "\\frac{d\\left|E(2\\omega)\\right|}{dz} = - \\frac{i\\omega d_\\text{eff}}{n_\\omega c}\\left[E_0^2 - \\left|E(2\\omega)\\right|^2\\right] e^{2i\\varphi(\\omega) - i\\varphi(2\\omega)}" }, { "math_id": 31, "text": "\\int_0^{\\left|E(2\\omega)\\right|\\ell} \\frac{d\\left|E(2\\omega)\\right|}{E_0^2 - \\left|E(2\\omega)\\right|^2} = -\\int_0^\\ell \\frac{i\\omega d_\\text{eff}}{n_\\omega c} e^{2i\\varphi(\\omega) - i\\varphi(2\\omega)} \\, dz. " }, { "math_id": 32, "text": "\\int \\frac{dx}{a^2 - x^2} = \\frac{1}{a}\\tanh^{-1} \\frac{x}{a}" }, { "math_id": 33, "text": " \\left|E(2\\omega)\\right|_{z=\\ell} = E_0\\tanh \\left(\\frac{-iE_0\\ell\\omega d_\\text{eff}}{n_\\omega c} e^{2i\\varphi(\\omega) - i\\varphi(2\\omega)} \\right). " }, { "math_id": 34, "text": "e^{2i\\varphi(\\omega) - i\\varphi(2\\omega)} = i" }, { "math_id": 35, "text": "I(2\\omega, \\ell) = I(\\omega, 0) \\tanh^2\\left(\\frac{E_0\\omega d_\\text{eff} \\ell}{n_\\omega c}\\right) " }, { "math_id": 36, "text": "I(2\\omega, \\ell) = I(\\omega,0)\\tanh^2 (\\Gamma \\ell)," }, { "math_id": 37, "text": "\\Gamma = \\omega d_\\text{eff} E_0/nc" }, { "math_id": 38, "text": "I(2\\omega, \\ell) + I(\\omega,\\ell) = I(\\omega, 0)" }, { "math_id": 39, "text": "I(\\omega, \\ell) = I(\\omega, 0) \\operatorname{sech}^2(\\Gamma \\ell)." }, { "math_id": 40, "text": " A_1 = A_0 \\sqrt{\\frac{2}{\\pi}}\\frac{z_R}{iq(z)} \\exp \\left( i k_1 \\frac{x^2+y^2}{2q(z)} \\right)" }, { "math_id": 41, "text": " q(z) = z-iz_R " }, { "math_id": 42, "text": "z" }, { "math_id": 43, "text": "z_R" }, { "math_id": 44, "text": "{k}_{1}" }, { "math_id": 45, "text": "\\left[ \\frac{\\partial} {\\partial x^2} +\\frac\\partial {\\partial y^2} +2i k_1 \\frac\\partial{\\partial z} \\right] A(x,y,z;k_1) = \\begin{cases}\n 0 & \\text{for the fundamental}, \\\\\n \\frac{\\omega_n^2 c^2}{\\chi^{(n)}} A(x,y,z;k_1) e^{i\\,\\Delta k\\, z} & \\text{for } n\\text{-th harmonic}.\n\\end{cases}\n" }, { "math_id": 46, "text": "\\Delta k = k_n - k_1" }, { "math_id": 47, "text": " A_n =-i\\frac{\\omega_n}{2n_{n\\omega}c}{\\left( A_0 \\sqrt{\\frac{2}{\\pi}} \\right)^n} z_R^2 \\int_{-\\infty }^z \\frac{\\chi^{(n)}(u)}{q(u)^2} \\, du \\exp \\left( ik_n \\frac{x^2+y^2}{2q(z)} \\right)" }, { "math_id": 48, "text": "k_n = nk_1" }, { "math_id": 49, "text": "\\chi^{(n)}(z)" }, { "math_id": 50, "text": "\\chi^{(n)}(z)e^{i\\,\\Delta k \\, z}" }, { "math_id": 51, "text": "z_0" }, { "math_id": 52, "text": " I_{2\\omega} = \\frac{2\\omega^2}{\\pi c^2 \\varepsilon_0 w_0^2 n_{2\\omega} n_\\omega^2} I_\\omega^2(\\chi^{(2)})^2 \\left( \\int_{z_0}^{z_0+L} \\frac{e^{i\\,\\Delta k \\, z}}{1+iz/z_R} \\right )^2 \\, dz. " }, { "math_id": 53, "text": "c" }, { "math_id": 54, "text": "\\varepsilon_0" }, { "math_id": 55, "text": "n_{n\\omega}" }, { "math_id": 56, "text": "n\\omega" }, { "math_id": 57, "text": "w_0" }, { "math_id": 58, "text": "0 < z_0 < L " } ]
https://en.wikipedia.org/wiki?curid=7334318
7335270
Kähler–Einstein metric
Type of metric in Riemannian geometry In differential geometry, a Kähler–Einstein metric on a complex manifold is a Riemannian metric that is both a Kähler metric and an Einstein metric. A manifold is said to be Kähler–Einstein if it admits a Kähler–Einstein metric. The most important special case is that of the Calabi–Yau manifolds, which are Kähler and Ricci-flat. The most important problem for this area is the existence of Kähler–Einstein metrics for compact Kähler manifolds. This problem can be split up into three cases dependent on the sign of the first Chern class of the Kähler manifold: When the first Chern class is not definite, or the Kodaira dimension is intermediate, finding a canonical metric remains an open problem, known as the algebrization conjecture via the analytical minimal model program. Definition. Einstein manifolds. Suppose formula_0 is a Riemannian manifold. In physics the Einstein field equations are a set of partial differential equations on the metric tensor formula_1 which describe how the manifold formula_2 should curve due to the existence of mass or energy, a quantity encapsulated by the stress–energy tensor formula_3. In a vacuum where there is no mass or energy, that is formula_4, the Einstein field equations simplify. Namely, the Ricci curvature of formula_1 is a symmetric formula_5-tensor, as is the metric formula_1 itself, and the equations reduce to formula_6 where formula_7 is the scalar curvature of formula_1. That is, the Ricci curvature becomes proportional to the metric. A Riemannian manifold formula_0 satisfying the above equation is called an Einstein manifold. Every two-dimensional Riemannian manifold is Einstein. It can be proven using the Bianchi identities that, in any larger dimension, the scalar curvature of any connected Einstein manifold must be constant. For this reason, the Einstein condition is often given as formula_8 for a real number formula_9 Kähler manifolds. When the Riemannian manifold formula_0 is also a complex manifold, that is, it comes with an integrable almost-complex structure formula_10, it is possible to ask for a compatibility between the metric structure formula_1 and the complex structure formula_11. There are many equivalent ways of formulating this compatibility condition, and one succinct interpretation is to ask that formula_11 is orthogonal with respect to formula_1, so that formula_12 for all vector fields formula_13, and that formula_11 is preserved by the parallel transport of the Levi-Civita connection formula_14, captured by the condition formula_15. Such a triple formula_16 is called a Kähler manifold. Kähler–Einstein metrics. A Kähler–Einstein manifold is one which combines the above properties of being Kähler and admitting an Einstein metric. The combination of these properties implies a simplification of the Einstein equation in terms of the complex structure. Namely, on a Kähler manifold one can define the Ricci form, a real formula_17-form, by the expression formula_18 where formula_19 are any tangent vector fields to formula_2. The almost-complex structure formula_11 forces formula_20 to be antisymmetric, and the compatibility condition formula_15 combined with the Bianchi identity implies that formula_20 is a closed differential form. Associated to the Riemannian metric formula_1 is the Kähler form formula_21 defined by a similar expression formula_22. Therefore the Einstein equations for formula_1 can be rewritten as formula_23, the Kähler–Einstein equation. 
Since this is an equality of closed differential forms, it implies an equality of the associated de Rham cohomology classes formula_24 and formula_25. The former class is the first Chern class of formula_2, formula_26. Therefore a necessary condition for the existence of a solution to the Kähler–Einstein equation is that formula_27, for some formula_28. This is a topological necessary condition on the Kähler manifold formula_16. Note that since the Ricci curvature formula_29 is invariant under scaling formula_30, if there is a metric such that formula_27, one can always normalise to a new metric with formula_31, that is formula_32. Thus the Kähler–Einstein equation is often written formula_33 depending on the sign of the topological constant formula_34. Transformation to a complex Monge–Ampere equation. The situation of compact Kähler manifolds is special, because the Kähler–Einstein equation can be reformulated as a complex Monge–Ampere equation for a smooth Kähler potential on formula_2. By the topological assumption on the Kähler manifold, we may always assume that there exists some Kähler metric formula_35. The Ricci form formula_36 of formula_37 is given in local coordinates by the formula formula_38 By assumption formula_37 and formula_36 are in the same cohomology class formula_26, so the formula_39-lemma from Hodge theory implies there exists a smooth function formula_40 such that formula_41. Any other metric formula_42 is related to formula_37 by a Kähler potential formula_43 such that formula_44. It then follows that if formula_20 is the Ricci form with respect to formula_21, then formula_45 Thus to make formula_46 we need to find formula_47 such that formula_48 This will certainly be true if the same equation is proven after removing the derivatives formula_39, and in fact this is an equivalent equation by the formula_39-lemma up to changing formula_47 by the addition of a constant function. In particular, after removing formula_39 and exponentiating, the equation is transformed into formula_49 This partial differential equation is similar to a real Monge–Ampere equation, and is known as a complex Monge–Ampere equation, and subsequently can be studied using tools from convex analysis. Its behaviour is highly sensitive to the sign of the topological constant formula_32. The solutions of this equation appear as critical points of the K-energy functional introduced by Toshiki Mabuchi on the space of Kähler potentials in the class formula_26. Existence. The existence problem for Kähler–Einstein metrics can be split up into three distinct cases, dependent on the sign of the topological constant formula_34. Since the Kähler form formula_21 is always a positive differential form, the sign of formula_34 depends on whether the cohomology class formula_26 is positive, negative, or zero. In algebraic geometry this is understood in terms of the canonical bundle of formula_2: formula_50 if and only if the canonical bundle formula_51 is an ample line bundle, and formula_52 if and only if formula_53 is ample. If formula_51 is a trivial line bundle, then formula_54. When the Kähler manifold is compact, the problem of existence has been completely solved. The case "c1(X)&lt;0". When the Kähler manifold formula_2 satisfies the topological assumption formula_50, the canonical bundle is ample and so formula_34 must be negative. 
If the necessary topological assumption is satisfied, that is, there exists a Kähler metric formula_21 such that formula_55, then it was proven by Aubin and Yau that a Kähler–Einstein metric always exists. The existence of a Kähler metric satisfying the topological assumption is a consequence of Yau's proof of the Calabi conjecture. Theorem (Aubin, Yau): A compact Kähler manifold with formula_50 always admits a Kähler–Einstein metric. The case "c1(X)=0". When the canonical bundle formula_51 is trivial, so that formula_54, the manifold is said to be Calabi–Yau. These manifolds are of special significance in physics, where they should appear as the string background in superstring theory in 10 dimensions. Mathematically, this corresponds to the case where formula_56, that is, when the Riemannian manifold formula_0 is Ricci flat. The existence of a Kähler–Einstein metric was proven in this case by Yau, using a continuity method similar to the case where formula_50. The topological assumption formula_54 introduces new difficulties into the continuity method. Partly due to his proof of existence, and the related proof of the Calabi conjecture, Yau was awarded the Fields medal. Theorem (Yau): A compact Kähler manifold with trivial canonical bundle, a Calabi–Yau manifold, always admits a Kähler–Einstein metric, and in particular admits a Ricci-flat metric. The case "c1(X)&gt;0". When the anticanonical bundle formula_53 is ample, or equivalently formula_52, the manifold is said to be Fano. In contrast to the case formula_57, a Kähler–Einstein metric does not always exist in this case. It was observed by Akito Futaki that there are possible obstructions to the existence of a solution given by the holomorphic vector fields of formula_2, and it is a necessary condition that the Futaki invariant of these vector fields vanishes. Indeed, much earlier it had been observed by Matsushima and Lichnerowicz that another necessary condition is that the Lie algebra of holomorphic vector fields formula_58 must be reductive. It was conjectured by Yau in 1993, in analogy with the similar problem of existence of Hermite–Einstein metrics on holomorphic vector bundles, that the obstruction to existence of a Kähler–Einstein metric should be equivalent to a certain algebro-geometric stability condition similar to slope stability of vector bundles. In 1997 Tian Gang proposed a possible stability condition, which came to be known as K-stability. The conjecture of Yau was resolved in 2012 by Chen–Donaldson–Sun using techniques largely different from the classical continuity method of the case formula_57, and at the same time by Tian. Chen–Donaldson–Sun have disputed Tian's proof, claiming that it contains mathematical inaccuracies and material which should be attributed to them. Tian has disputed these claims. The 2019 Veblen prize was awarded to Chen–Donaldson–Sun for their proof. Donaldson was awarded the 2015 Breakthrough Prize in Mathematics in part for his contribution to the proof, and the 2021 New Horizons Breakthrough Prize was awarded to Sun in part for his contribution. Theorem: A compact Fano manifold formula_2 admits a Kähler–Einstein metric if and only if the pair formula_59 is K-polystable. A proof along the lines of the continuity method which resolved the case formula_57 was later provided by Datar–Székelyhidi, and several other proofs are now known. See the Yau–Tian–Donaldson conjecture for more details. Kähler–Ricci flow and the minimal model program. 
A central program in birational geometry is the minimal model program, which seeks to generate models of algebraic varieties inside every birational equivalence class, which are in some sense "minimal", usually in that they minimize certain measures of complexity (such as the arithmetic genus in the case of curves). In higher dimensions, one seeks a minimal model which has "nef" canonical bundle. One way to construct minimal models is to contract certain curves formula_60 inside an algebraic variety formula_2 which have negative self-intersection. These curves should be thought of geometrically as subvarieties on which formula_2 has a concentration of negative curvature. In this sense, the minimal model program can be viewed as an analogue of the Ricci flow in differential geometry, in which regions where curvature concentrates are expanded or contracted in order to reduce the original Riemannian manifold to one with uniform curvature (precisely, to a new Riemannian manifold which has uniform Ricci curvature, which is to say an Einstein manifold). In the case of 3-manifolds, this was famously used by Grigori Perelman to prove the Poincaré conjecture. In the setting of Kähler manifolds, the Kähler–Ricci flow was first written down by Cao. Here one fixes a Kähler metric formula_61 with Ricci form formula_62, and studies the geometric flow for a family of Kähler metrics formula_63 parametrised by formula_64: formula_65 When a projective variety formula_2 is of general type, the minimal model formula_66 admits a further simplification to a "canonical model" formula_67, with ample canonical bundle. In settings where this canonical model has only mild (orbifold) singularities, it is possible to ask whether the Kähler–Ricci flow of formula_2 converges to a (possibly mildly singular) Kähler–Einstein metric on formula_67, which should exist by Yau and Aubin's existence result for formula_68. A precise result along these lines was proven by Cascini and La Nave, and around the same time by Tian–Zhang. Theorem: The Kähler–Ricci flow on a projective variety formula_2 of general type exists for all time, and after at most a finite number of singularity formations, if the canonical model formula_67 of formula_2 has at worst orbifold singularities, then the Kähler–Ricci flow on formula_2 converges to the Kähler–Einstein metric on formula_67, up to a bounded function which is smooth away from an analytic subvariety of formula_2. In the case where the variety formula_2 has dimension two, so that it is a surface of general type, one gets convergence to the Kähler–Einstein metric on formula_67. Later, Jian Song and Tian studied the case where the projective variety formula_2 has log-terminal singularities. Kähler–Ricci flow and existence of Kähler–Einstein metrics. It is possible to give an alternative proof of the Chen–Donaldson–Sun theorem on existence of Kähler–Einstein metrics on a smooth Fano manifold using the Kähler–Ricci flow, and this was carried out in 2018 by Chen–Sun–Wang. Namely, if the Fano manifold is K-polystable, then the Kähler–Ricci flow exists for all time and converges to a Kähler–Einstein metric on the Fano manifold. Generalizations and alternative notions. Constant scalar curvature Kähler metrics. When the canonical bundle formula_51 is not trivial, ample, or anti-ample, it is not possible to ask for a Kähler–Einstein metric, as the class formula_26 cannot contain a Kähler metric, and so the necessary topological condition can never be satisfied. This follows from the Kodaira embedding theorem. 
A natural generalisation of the Kähler–Einstein equation to the more general setting of an arbitrary compact Kähler manifold is to ask that the Kähler metric has constant scalar curvature (one says the metric is cscK). The scalar curvature is the total trace of the Riemannian curvature tensor, a smooth function on the manifold formula_0, and in the Kähler case the condition that the scalar curvature is constant admits a transformation into an equation similar to the complex Monge–Ampere equation of the Kähler–Einstein setting. Many techniques from the Kähler–Einstein case carry over to the cscK setting, albeit with added difficulty, and it is conjectured that a similar algebro-geometric stability condition should imply the existence of solutions to the equation in this more general setting. When the compact Kähler manifold satisfies the topological assumptions necessary for the Kähler–Einstein condition to make sense, the constant scalar curvature Kähler equation reduces to the Kähler–Einstein equation. Hermite–Einstein metrics. Instead of asking that the Ricci curvature of the Levi-Civita connection on the tangent bundle of a Kähler manifold formula_2 be proportional to the metric itself, one can ask the same question for the curvature of a Chern connection associated to a Hermitian metric on "any" holomorphic vector bundle over formula_2 (note that the Levi-Civita connection on the holomorphic tangent bundle is precisely the Chern connection of the Hermitian metric coming from the Kähler structure). The resulting equation is called the Hermite–Einstein equation, and is of special importance in gauge theory, where it appears as a special case of the Yang–Mills equations, which come from quantum field theory, in contrast to the regular Einstein equations which come from general relativity. In the case where the holomorphic vector bundle is again the holomorphic tangent bundle and the Hermitian metric is the Kähler metric, the Hermite–Einstein equation reduces to the Kähler–Einstein equation. In general however, the geometry of the Kähler manifold is often fixed and only the bundle metric is allowed to vary, and this makes the Hermite–Einstein equation generally easier to study than the Kähler–Einstein equation. In particular, a complete algebro-geometric characterisation of the existence of solutions is given by the Kobayashi–Hitchin correspondence. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
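For reference, the Hermite–Einstein condition mentioned above is most often written as follows, for a Hermitian metric h on a holomorphic vector bundle E over the compact Kähler manifold formula_16 with Kähler form formula_21. The display is standard background rather than notation taken from this article, and the placement of the factor of i varies between authors.

```latex
% Hermite-Einstein condition: the omega-trace of the curvature F_h of the Chern
% connection of h is a constant real multiple of the identity endomorphism of E.
\[
i\,\Lambda_\omega F_h = \lambda\,\mathrm{Id}_E, \qquad \lambda \in \mathbb{R}.
\]
```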
[ { "math_id": 0, "text": "(X,g)" }, { "math_id": 1, "text": "g" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "T=0" }, { "math_id": 5, "text": "(2,0)" }, { "math_id": 6, "text": "\\operatorname{Ric}_g= \\frac{1}{2}R_g g" }, { "math_id": 7, "text": "R_g" }, { "math_id": 8, "text": "\\operatorname{Ric}_g= \\lambda g" }, { "math_id": 9, "text": "\\lambda." }, { "math_id": 10, "text": "J:TX\\to TX" }, { "math_id": 11, "text": "J" }, { "math_id": 12, "text": "g(Ju,Jv)=g(u,v)" }, { "math_id": 13, "text": "u,v\\in \\Gamma(TM)" }, { "math_id": 14, "text": "\\nabla" }, { "math_id": 15, "text": "\\nabla J = 0" }, { "math_id": 16, "text": "(X,g,J)" }, { "math_id": 17, "text": "(1,1)" }, { "math_id": 18, "text": "\\rho(u,v) = \\operatorname{Ric}_g(Ju,v)," }, { "math_id": 19, "text": "u,v" }, { "math_id": 20, "text": "\\rho" }, { "math_id": 21, "text": "\\omega" }, { "math_id": 22, "text": "\\omega(u,v)=g(Ju,v)" }, { "math_id": 23, "text": "\\rho = \\lambda \\omega," }, { "math_id": 24, "text": "[\\rho]" }, { "math_id": 25, "text": "[\\omega]" }, { "math_id": 26, "text": "c_1(X)" }, { "math_id": 27, "text": "\\lambda \\omega \\in c_1(X)" }, { "math_id": 28, "text": "\\lambda \\in \\mathbb{R}" }, { "math_id": 29, "text": "\\operatorname{Ric}_g" }, { "math_id": 30, "text": "g\\mapsto \\lambda^{-1}g" }, { "math_id": 31, "text": "\\omega \\in c_1(X)" }, { "math_id": 32, "text": "\\lambda = -1,0,1" }, { "math_id": 33, "text": "\\rho = -\\omega, \\quad \\rho = 0,\\quad \\rho = \\omega" }, { "math_id": 34, "text": "\\lambda" }, { "math_id": 35, "text": "\\omega_0\\in c_1(X)" }, { "math_id": 36, "text": "\\rho_0" }, { "math_id": 37, "text": "\\omega_0" }, { "math_id": 38, "text": "\\rho_0 = - i \\partial \\bar \\partial \\log \\omega_0^n." }, { "math_id": 39, "text": "\\partial \\bar \\partial" }, { "math_id": 40, "text": "F\\in C^{\\infty}(X)" }, { "math_id": 41, "text": "\\omega_0 + i \\partial \\bar \\partial F = \\rho_0" }, { "math_id": 42, "text": "\\omega\\in c_1(X)" }, { "math_id": 43, "text": "\\varphi\\in C^{\\infty}(X)" }, { "math_id": 44, "text": "\\omega = \\omega_0 + i \\partial \\bar \\partial \\varphi" }, { "math_id": 45, "text": "\\rho - \\rho_0 = -i\\partial \\bar \\partial \\log \\frac{\\omega^n}{\\omega_0^n}." }, { "math_id": 46, "text": "\\rho=\\lambda \\omega" }, { "math_id": 47, "text": "\\varphi" }, { "math_id": 48, "text": "\\lambda i\\partial \\bar\\partial \\varphi = i \\partial \\bar \\partial F - i \\partial \\bar \\partial \\log \\frac{\\omega^n}{\\omega_0^n}." }, { "math_id": 49, "text": "(\\omega_0 + i \\partial \\bar \\partial \\varphi)^n = e^{F-\\lambda \\varphi} \\omega_0^n." }, { "math_id": 50, "text": "c_1(X)<0" }, { "math_id": 51, "text": "K_X" }, { "math_id": 52, "text": "c_1(X)>0" }, { "math_id": 53, "text": "K_X^{-1}" }, { "math_id": 54, "text": "c_1(X)=0" }, { "math_id": 55, "text": "c_1(X) = \\lambda [\\omega]" }, { "math_id": 56, "text": "\\lambda = 0" }, { "math_id": 57, "text": "c_1(X)\\le 0" }, { "math_id": 58, "text": "H^0(X,TX)" }, { "math_id": 59, "text": "(X,K_X^{-1})" }, { "math_id": 60, "text": "C\\subset X" }, { "math_id": 61, "text": "g_{i\\bar j}" }, { "math_id": 62, "text": "\\rho_{i\\bar j}" }, { "math_id": 63, "text": "\\tilde{g}_{ij}(t)" }, { "math_id": 64, "text": "t\\in [0,\\infty)" }, { "math_id": 65, "text": "\\frac{\\partial \\tilde{g}_{i\\bar j}}{\\partial t} = -\\tilde{\\rho}_{i\\bar j} + \\rho_{i\\bar j},\\quad \\tilde{g}_{i\\bar j}(0) = g_{i\\bar j}." 
}, { "math_id": 66, "text": "X'" }, { "math_id": 67, "text": "X''" }, { "math_id": 68, "text": "c_1(X'')<0" } ]
https://en.wikipedia.org/wiki?curid=7335270
7335429
Characteristic function (convex analysis)
In the field of mathematics known as convex analysis, the characteristic function of a set is a convex function that indicates the membership (or non-membership) of a given element in that set. It is similar to the usual indicator function, and one can freely convert between the two, but the characteristic function as defined below is better suited to the methods of convex analysis. Definition. Let formula_0 be a set, and let formula_1 be a subset of formula_0. The characteristic function of formula_1 is the function formula_2 taking values in the extended real number line defined by formula_3 Relationship with the indicator function. Let formula_4 denote the usual indicator function: formula_5 If one adopts the conventions that, for any formula_6, formula_7 and formula_8, except that formula_9; and that formula_10 and formula_11; then the indicator and characteristic functions are related by the equations formula_12 and formula_13 Subgradient. The subgradient of formula_14 for a set formula_1 is the normal cone of that set at formula_15.
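As a concrete illustration of the conventions above, the following Python sketch implements both functions for a finite set and checks the two displayed identities numerically. The helper names (`chi`, `indicator`) are ad hoc, and the extended-arithmetic convention 0 · (+∞) = 0 is coded explicitly, since floating-point arithmetic would give nan instead.

```python
import math

A = {1, 2, 3}          # an example subset of some ambient set X

def indicator(x, A):
    """Usual indicator function 1_A: 1 on A, 0 off A."""
    return 1 if x in A else 0

def chi(x, A):
    """Characteristic function of convex analysis: 0 on A, +infinity off A."""
    return 0.0 if x in A else math.inf

def indicator_from_chi(x, A):
    """1_A(x) = 1 / (1 + chi_A(x)); note 1 / (1 + inf) evaluates to 0.0 in floats."""
    return 1 / (1 + chi(x, A))

def chi_from_indicator(x, A):
    """chi_A(x) = (+inf) * (1 - 1_A(x)), applying the convention 0 * (+inf) = 0
    explicitly because Python's floats give inf * 0 = nan."""
    t = 1 - indicator(x, A)
    return 0.0 if t == 0 else math.inf * t

for x in (2, 7):                                   # one point inside A, one outside
    assert indicator_from_chi(x, A) == indicator(x, A)
    assert chi_from_indicator(x, A) == chi(x, A)
```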
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "\\chi_{A} : X \\to \\mathbb{R} \\cup \\{ + \\infty \\}" }, { "math_id": 3, "text": "\\chi_{A} (x) := \\begin{cases} 0, & x \\in A; \\\\ + \\infty, & x \\not \\in A. \\end{cases}" }, { "math_id": 4, "text": "\\mathbf{1}_{A} : X \\to \\mathbb{R}" }, { "math_id": 5, "text": "\\mathbf{1}_{A} (x) := \\begin{cases} 1, & x \\in A; \\\\ 0, & x \\not \\in A. \\end{cases}" }, { "math_id": 6, "text": "a \\in \\mathbb{R} \\cup \\{ + \\infty \\}" }, { "math_id": 7, "text": "a + (+ \\infty) = + \\infty" }, { "math_id": 8, "text": "a (+\\infty) = + \\infty" }, { "math_id": 9, "text": "0(+\\infty)=0" }, { "math_id": 10, "text": "\\frac{1}{0} = + \\infty" }, { "math_id": 11, "text": "\\frac{1}{+ \\infty} = 0" }, { "math_id": 12, "text": "\\mathbf{1}_{A} (x) = \\frac{1}{1 + \\chi_{A} (x)}" }, { "math_id": 13, "text": "\\chi_{A} (x) = (+ \\infty) \\left( 1 - \\mathbf{1}_{A} (x) \\right)." }, { "math_id": 14, "text": "\\chi_{A} (x)" }, { "math_id": 15, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=7335429
73354598
Aliasing (factorial experiments)
Statistical phenomenon where some effects appear the same In the statistical theory of factorial experiments, aliasing is the property of fractional factorial designs that makes some effects "aliased" with each other – that is, indistinguishable from each other. A primary goal of the theory of such designs is the control of aliasing so that important effects are not aliased with each other. In a "full" factorial experiment, the number of "treatment combinations" or "cells" (see below) can be very large. This necessitates limiting observations to a "fraction" (subset) of the treatment combinations. Aliasing is an automatic and unavoidable result of observing such a fraction. The aliasing properties of a design are often summarized by giving its "resolution". This measures the degree to which the design avoids aliasing between main effects and important interactions. Fractional factorial experiments have long been a basic tool in agriculture, food technology, industry, medicine and public health, and the social and behavioral sciences. They are widely used in exploratory research, particularly in screening experiments, which have applications in industry, drug design and genetics. In all such cases, a crucial step in designing such an experiment is deciding on the desired aliasing pattern, or at least the desired resolution. As noted below, the concept of aliasing may have influenced the identification of an analogous phenomenon in signal processing theory. Overview. Associated with a factorial experiment is a collection of "effects". Each factor determines a "main effect", and each set of two or more factors determines an "interaction effect" (or simply an "interaction") between those factors. Each effect is defined by a set of relations between "cell means", as described below. In a "fractional" factorial design, effects are defined by restricting these relations to the cells in the fraction. It is when the restricted relations for two different effects turn out to be the same that the effects are said to be aliased. The presence or absence of a given effect in a given data set is tested by statistical methods, most commonly analysis of variance. While aliasing has significant implications for estimation and hypothesis testing, it is fundamentally a combinatorial and algebraic phenomenon. Construction and analysis of fractional designs thus rely heavily on algebraic methods. The definition of a fractional design is sometimes broadened to allow multiple observations of some or all treatment combinations – a "multisubset" of all treatment combinations. A fraction that is a subset (that is, where treatment combinations are not repeated) is called "simple". The theory described below applies to simple fractions. Contrasts and effects. In any design, full or fractional, the expected value of an observation in a given treatment combination is called a "cell mean", usually denoted using the Greek letter μ. (The term "cell" is borrowed from its use in tables of data.) A "contrast in cell means" is a linear combination of cell means in which the coefficients sum to 0. In the 2 × 3 experiment illustrated here, the expression formula_0 is a contrast that compares the mean responses of the treatment combinations 11 and 12. (The coefficients here are 1 and –1.) The effects in a factorial experiment are expressed in terms of contrasts. 
In the above example, the contrast formula_1 is said to "belong to the main effect of factor A" as it contrasts the responses to the "1" level of factor formula_2 with those for the "2" level. The main effect of "A" is said to be "absent" if this expression equals 0. Similarly, formula_3   and   formula_4 are contrasts belonging to the main effect of factor "B". On the other hand, the contrasts formula_5   and   formula_6 "belong to the interaction of A and B"; setting them equal to 0 expresses the lack of interaction. These designations, which extend to arbitrary factorial experiments having three or more factors, depend on the pattern of coefficients, as explained elsewhere. Since it is the coefficients of these contrasts that carry the essential information, they are often displayed as column vectors. For the example above, such a table might look like this: The columns of such a table are called "contrast vectors": their components add up to 0. While there are in general many possible choices of columns to represent a given effect, the "number" of such columns — the "degrees of freedom" of the effect — is fixed and is given by a well-known formula. In the 2 × 3 example above, the degrees of freedom for formula_9, and the formula_10 interaction are 1, 2 and 2, respectively. In a fractional factorial experiment, the contrast vectors belonging to a given effect are restricted to the treatment combinations in the fraction. Thus, in the half-fraction {11, 12, 13} in the 2 × 3 example, the three effects may be represented by the column vectors in the following table: The consequence of this truncation — aliasing — is described below. Definitions. The factors in the design are allowed to have different numbers of levels, as in a formula_11 factorial experiment (an "asymmetric" or "mixed-level" experiment). Fix a fraction of a full factorial design. Let formula_12 be a set of contrast vectors representing an effect (in particular, a main effect or interaction) in the full factorial design, and let formula_13 consist of the restrictions of those vectors to the fraction. One says that the effect is Similarly, let formula_14 and formula_15 represent two effects and let formula_16 and formula_17 be their restrictions to the fraction. The two effects are said to be Finney and Bush introduced the terms "lost" and "preserved" in the sense used here. Despite the relatively long history of this topic, though, its terminology is not entirely standardized. The literature often describes lost effects as "not estimable" in a fraction, although estimation is not the only issue at stake. Rao referred to preserved effects as "measurable from" the fraction. Resolution. The extent of aliasing in a given fractional design is measured by the "resolution" of the fraction, a concept first defined by Box and Hunter: A fractional factorial design is said to have "resolution" formula_18 if every formula_19-factor effect is unaliased with every effect having fewer than formula_20 factors. For example, a design has resolution formula_21 if main effects are unaliased with each other (taking formula_22, though it allows main effects to be aliased with two-factor interactions. This is typically the lowest resolution desired for a fraction. It is not hard to see that a fraction of resolution formula_18 also has resolution formula_23, etc., so one usually speaks of the "maximum" resolution of a fraction. 
The number formula_19 in the definition of resolution is usually understood to be a positive integer, but one may consider the "effect of the grand mean" to be the (unique) effect with no factors (i.e., with formula_24). This effect sometimes appears in analysis of variance tables. It has one degree of freedom, and is represented by a single vector, a column of 1's. With this understanding, an effect is A fraction then has resolution formula_25 if all main effects are preserved in the fraction. If it has resolution formula_21 then two-factor interactions are also preserved. Computation. The definitions above require some computations with vectors, illustrated in the examples that follow. For certain fractional designs (the "regular" ones), a simple algebraic technique can be used that bypasses these procedures and gives a simple way to determine resolution. This is discussed below. Examples. The 2 × 3 experiment. The fraction {11, 12, 13} of this experiment was described above along with its restricted vectors. It is repeated here along with the complementary fraction {21, 22, 23}: In both fractions, the formula_2 effect is completely lost (the formula_2 column is constant) while the formula_7 and interaction effects are preserved (each 3 × 1 column is a contrast vector as its components sum to 0). In addition, the formula_7 and interaction effects are completely aliased in each fraction: In the first fraction, the vectors for formula_7 are linear combinations of those for formula_8, viz., formula_26 and formula_27; in the reverse direction, the vectors for formula_8 can be written similarly in terms of those representing formula_7. The argument in the second fraction is analogous. These fractions have maximum resolution 1. The fact that the main effect of formula_2 is lost makes both of these fractions undesirable in practice. It turns out that in a 2 × 3 experiment (or in any "a × b" experiment in which "a" and "b" are relatively prime) there is no fraction that preserves both main effects -- that is, no fraction has resolution 2. The 2 × 2 × 2 (or 2³) experiment. This is a "two-level" experiment with factors formula_9 and formula_28. In such experiments the factor levels are often denoted by 0 and 1, for reasons explained below. A treatment combination is then denoted by an ordered triple such as 101 (more formally, (1, 0, 1), denoting the cell in which formula_2 and formula_28 are at level "1" and formula_7 is at level "0"). The following table lists the eight cells of the full 2 × 2 × 2 factorial experiment, along with a contrast vector representing each effect, including a three-factor interaction: Suppose that only the fraction consisting of the cells 000, 011, 101, and 110 is observed. The original contrast vectors, when restricted to these cells, are now 4 × 1, and can be seen by looking at just those four rows of the table. (Sorting the table on formula_32 will bring these rows together and make the restricted contrast vectors easier to see. Sorting twice puts them at the top.) The following can be observed concerning these "restricted" vectors: Thus Now suppose instead that the complementary fraction {001,010,100,111} is observed. The same effects as before are lost or preserved, and the same pairs of effects as before are mutually unaliased. Moreover, formula_2 and formula_31 are still aliased in this fraction since the formula_2 and formula_31 vectors are negatives of each other, and similarly for formula_7 and formula_30 and for formula_28 and formula_29. 
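The aliasing pattern of these half-fractions can be checked mechanically. The NumPy sketch below codes each factor level 0/1 as +1/−1, builds a contrast column for each of the seven effects, restricts the columns to the fraction {000, 011, 101, 110}, and reports which restricted columns are constant (lost effects) or equal up to sign (completely aliased pairs); the ±1 coding is one conventional choice of contrast vectors, not the only possible one.

```python
import itertools
import numpy as np

fraction = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # cells with t1 + t2 + t3 = 0 (mod 2)

def contrast(cell, factors):
    """+/-1 contrast coefficient of an effect (a set of factor indices) at a cell."""
    return np.prod([1 - 2 * cell[i] for i in factors])

effects = {"A": [0], "B": [1], "C": [2],
           "AB": [0, 1], "AC": [0, 2], "BC": [1, 2], "ABC": [0, 1, 2]}

restricted = {name: np.array([contrast(cell, f) for cell in fraction])
              for name, f in effects.items()}

for name, col in restricted.items():
    if np.all(col == col[0]):
        print(name, "is completely lost (constant restricted column):", col)

for u, v in itertools.combinations(effects, 2):
    a, b = restricted[u], restricted[v]
    if np.array_equal(a, b) or np.array_equal(a, -b):
        print(u, "and", v, "are completely aliased in the fraction")
# Prints: ABC is lost; A~BC, B~AC and C~AB are the completely aliased pairs.
```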
Both of these fractions thus have maximum resolution 3. Aliasing in regular fractions. The two half-fractions of a formula_33 factorial experiment described above are of a special kind: Each is the solution set of a linear equation using "modular arithmetic". More exactly: Such fractions are said to be "regular". This idea applies to fractions of "classical" formula_40 designs, that is, formula_40 (or "symmetric") factorial designs in which the number of levels, formula_41, of each of the formula_42 factors is a prime or the power of a prime. A fractional factorial design is "regular" if it is the solution set of a system of one or more equations of the form formula_43 where the equation is modulo formula_41 if formula_41 is prime, and is in the finite field formula_44 if formula_41 is a power of a prime. Such equations are called "defining equations" of the fraction. When the defining equation or equations are "homogeneous", the fraction is said to be "principal". One defining equation yields a fraction of size formula_45, two independent equations a fraction of size formula_46 and so on. Such fractions are generally denoted as formula_47 designs. The half-fractions described above are formula_48 designs. The notation often includes the resolution as a subscript, in Roman numerals; the above fractions are thus formula_49 designs. Associated to each expression formula_50 is another, namely formula_51, which rewrites the coefficients as exponents. Such expressions are called "words", a term borrowed from group theory. (In a particular example where formula_42 is a specific number, the letters formula_52 are used, rather than formula_53.) These words can be multiplied and raised to powers, where the word formula_54 acts as a multiplicative identity, and they thus form an "abelian group" formula_55, known as the "effects group". When formula_41 is prime, one has formula_56 for every element (word) formula_57; something similar holds in the prime-power case. In formula_58 factorial experiments, each element of formula_55 represents a main effect or interaction. In formula_40 experiments with formula_59, each one-letter word represents the main effect of that factor, while longer words represent "components of interaction". An example below illustrates this with formula_60. To each "defining" expression (the left-hand side of a defining equation) corresponds a "defining word". The defining words generate a subgroup formula_61 of formula_55 that is variously called the "alias subgroup", the "defining contrast subgroup", or simply the "defining subgroup" of the fraction. Each element of formula_61 is a defining word since it corresponds to a defining equation, as one can show. The effects represented by the defining words are completely lost in the fraction while all other effects are preserved. If formula_62, say, then the equation formula_64 is called the "defining relation" of the fraction. This relation is used to determine the aliasing structure of the fraction: If a given effect is represented by the word formula_65, then its aliases are computed by multiplying the defining relation by formula_65, viz., formula_66 where the products formula_67 are then simplified. This relation indicates complete (not partial) aliasing, and W is unaliased with all other effects listed in formula_55. Example 1. In either of the formula_48 fractions described above, the defining word is formula_32, since the exponents on these letters are the coefficients of formula_68. 
The formula_32 effect is completely lost in the fraction, and the defining subgroup formula_61 is simply formula_69, since squaring does not generate new elements formula_70. The defining relation is thus formula_71, and multiplying both sides by formula_2 gives formula_72; which simplifies to formula_73 the alias relation seen earlier. Similarly, formula_74 and formula_75. Note that multiplying both sides of the defining relation by formula_76 and formula_31 does not give any new alias relations. For comparison, the formula_48 fraction with defining equation formula_77 has the defining word formula_29 (i.e., formula_78). The effect formula_29 is completely lost, and the defining relation is formula_79. Multiplying this by formula_2, by formula_28, and by formula_30 gives the alias relations formula_80, formula_81, and formula_82 among the six remaining effects. This fraction only has resolution 2 since all effects (except formula_29) are preserved but two main effects are aliased. Finally, solving the defining equation formula_77 yields the fraction {000, 001, 110, 111}. One may verify all of this by sorting the table above on column formula_29. The use of arithmetic modulo 2 explains why the factor levels in such designs are labeled 0 and 1. Example 2. In a 3-level design, factor levels are denoted 0, 1 and 2, and arithmetic is modulo 3. If there are four factors, say formula_83 and formula_84, the effects group formula_55 will have the relations formula_85 From these it follows, for example, that formula_86 and formula_87. A defining equation such as formula_88 would produce a regular 1/3-fraction of the 81 (= formula_89) treatment combinations, and the corresponding defining word would be formula_90. Since its powers are formula_91   and  formula_92, the defining subgroup formula_61 would be formula_93, and so the fraction would have defining relation formula_94 Multiplying by formula_2, for example, yields the aliases formula_95 For reasons explained elsewhere, though, all powers of a defining word represent the same effect, and the convention is to choose that power whose leading exponent is 1. Squaring the latter two expressions does the trick and gives the alias relations formula_96 Twelve other sets of three aliased effects are given by Wu and Hamada. Examining all of these reveals that, like formula_2, main effects are unaliased with each other and with two-factor effects, although some two-factor effects are aliased with each other. This means that this fraction has maximum resolution 4, and so is of type formula_97. The effect formula_98 is one of 4 components of the formula_99 interaction, while formula_100 is one of 8 components of the formula_101 interaction. In a 3-level design, each component of interaction carries 2 degrees of freedom. Example 3. A formula_102 design (formula_103 of a formula_104 design) may be created by solving "two" equations in 5 unknowns, say formula_105 modulo 2. The fraction has eight treatment combinations, such as 10000, 00110 and 11111, and is displayed in the article on fractional factorial designs. Here the coefficients in the two defining equations give defining words formula_106 and formula_107. Setting formula_108 and multiplying through by formula_84 gives the alias relation formula_109. The second defining word similarly gives formula_110. The article uses these two aliases to describe an alternate method of construction of the fraction. 
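The bookkeeping in these examples (multiplying words and reducing exponents modulo 2) is easy to automate. The Python sketch below represents each word of the effects group of a two-level design as an exponent vector, generates the defining subgroup from a list of generator words, and prints the alias set of a given effect; applied to the formula_102 fraction with defining words formula_106 and formula_107, it reproduces the alias relations quoted above. The function names are ad hoc.

```python
import itertools

FACTORS = "ABCDE"                     # a 2^5 experiment

def word_to_vec(word):
    """Exponent vector (mod 2) of a word such as 'ABD'."""
    return tuple(1 if f in word else 0 for f in FACTORS)

def vec_to_word(vec):
    return "".join(f for f, e in zip(FACTORS, vec) if e) or "I"

def multiply(u, v):
    """Group operation in the effects group: add exponents mod 2."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

def defining_subgroup(generators):
    """All products of subsets of the generator words, including the identity
    and the generalized interactions of the generators."""
    vecs = [word_to_vec(g) for g in generators]
    subgroup = set()
    for r in range(len(vecs) + 1):
        for subset in itertools.combinations(vecs, r):
            prod = (0,) * len(FACTORS)
            for v in subset:
                prod = multiply(prod, v)
            subgroup.add(prod)
    return subgroup

H = defining_subgroup(["ABD", "ACE"])
print(sorted(vec_to_word(v) for v in H))        # ['ABD', 'ACE', 'BCDE', 'I']

def aliases(effect):
    e = word_to_vec(effect)
    return sorted(vec_to_word(multiply(e, v)) for v in H if any(v))

print("D =", " = ".join(aliases("D")))          # D = AB = ACDE = BCE
```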
The defining subgroup formula_61 has one more element, namely the product formula_111 formula_112, making use of the fact that formula_113. The extra defining word formula_114 is known as the "generalized interaction" of formula_106 and formula_107, and corresponds to the equation formula_115, which is also satisfied by the fraction. With this word included, the full defining relation is formula_116 (these are the four elements of the defining subgroup), from which all the alias relations of this fraction can be derived – for example, multiplying through by formula_84 yields formula_117. Continuing this process yields six more alias sets, each containing four effects. An examination of these sets reveals that main effects are not aliased with each other, but are aliased with two-factor interactions. This means that this fraction has maximum resolution 3. A quicker way to determine the resolution of a regular fraction is given below. It is notable that the alias relations of the fraction depend only on the left-hand side of the defining equations, not on their constant terms. For this reason, some authors will restrict attention to principal fractions "without loss of generality", although the reduction to the principal case often requires verification. Determining the resolution of a regular fraction. The "length of a word" in the effects group is defined to be the number of letters in its name, not counting repetition. For example, the length of the word formula_118 is 3. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — The maximum resolution of a regular fractional design is equal to the minimum length of a defining word. Using this result, one immediately gets the resolution of the preceding examples without computing alias relations: One could also construct a formula_102 fraction from the defining words formula_122 and formula_114, but the defining subgroup formula_61 will also include formula_123, their product, and so the fraction will only have resolution 2 (the length of formula_123). This is true starting with any two words of length 4. Thus resolution 3 is the best one can hope for in a fraction of type formula_102. As these examples indicate, one must consider "all" the elements of the defining subgroup in applying the theorem above. This theorem is often taken to be a definition of resolution, but the Box-Hunter definition given earlier applies to arbitrary fractional designs and so is more general. Aliasing in general fractions. Nonregular fractions are common, and have certain advantages. For example, they are not restricted to having size a power of formula_41, where formula_41 is a prime or prime power. While some methods have been developed to deal with aliasing in particular nonregular designs, no overall algebraic scheme has emerged. There is a universal "combinatorial" approach, however, going back to Rao. If the treatment combinations of the fraction are written as rows of a table, that table is an "orthogonal array". These rows are often referred to as "runs". The columns will correspond to the formula_42 factors, and the entries of the table will simply be the symbols used for factor levels, and need not be numbers. The number of levels need not be prime or prime-powered, and they may vary from factor to factor, so that the table may be a "mixed-level" array. In this section fractional designs are allowed to be mixed-level unless explicitly restricted. 
A key parameter of an orthogonal array is its "strength", the definition of which is given in the article on orthogonal arrays. One may thus refer to the "strength" of a fractional design. Two important facts flow immediately from its definition: To state the next result, it is convenient to enumerate the factors of the experiment by 1 through formula_42, and to let each nonempty subset formula_63 of formula_128 correspond to a main effect or interaction in the following way: formula_129 corresponds to the main effect of factor formula_130, formula_131 corresponds to the interaction of factors formula_130 and formula_132, and so on. &lt;templatestyles src="Math_theorem/styles.css" /&gt; The Fundamental Theorem of Aliasing — Consider a fraction of strength formula_133 on formula_42 factors. Let formula_134. Example: Consider a fractional factorial design with factors formula_139 and maximum strength formula_140. Then: The Fundamental Theorem has a number of important consequences. In particular, it follows almost immediately that if a fraction has strength formula_124 then it has resolution formula_144. With additional assumptions, a stronger conclusion is possible: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — If a fraction has "maximum" strength formula_124 and "maximum" resolution formula_145 then formula_146 This result replaces the group-theoretic condition (minimum wordlength) in regular fractions with a combinatorial condition (maximum strength) in arbitrary ones. Example. An important class of nonregular two-level designs are Plackett-Burman designs. As with all fractions constructed from Hadamard matrices, they have strength 2, and therefore resolution 3. The smallest such design has 11 factors and 12 runs (treatment combinations), and is displayed in the article on such designs. Since 2 is its maximum strength, 3 is its maximum resolution. Some detail about its aliasing pattern is given in the next section. Partial aliasing. In regular formula_58 fractions there is no partial aliasing: Each effect is either preserved or completely lost, and effects are either unaliased or completely aliased. The same holds in regular formula_40 experiments with formula_59 if one considers only main effects and components of interaction. However, a limited form of partial aliasing occurs in the latter. For example, in the formula_119 design described above the overall formula_101 interaction is partly lost since its formula_90 component is completely lost in the fraction while its other components (such as formula_100) are preserved. Similarly, the main effect of formula_2 is partly aliased with the formula_99 interaction since formula_2 is completely aliased with its formula_98 component and unaliased with the others. In contrast, partial aliasing is uncontrolled and pervasive in nonregular fractions. In the 12-run Plackett-Burman design described in the previous section, for example, with factors labeled formula_2 through formula_147, the only "complete" aliasing is between "complementary effects" such as formula_2 and formula_148 or formula_149 and formula_150. Here the main effect of factor formula_2 is unaliased with the other main effects and with the formula_29 interaction, but it is partly aliased with 45 of the 55 two-factor interactions, 120 of the 165 three-factor interactions, and 150 of the 330 four-factor interactions. This phenomenon is generally described as "complex aliasing". 
Similarly, 924 effects are preserved in the fraction, 1122 effects are partly lost, and only one (the top-level interaction formula_151) is completely lost. Analysis of variance (ANOVA). Wu and Hamada analyze a data set collected on the formula_97 fractional design described above. Significance testing in the analysis of variance (ANOVA) requires that the error sum of squares and the degrees of freedom for error be nonzero; two design decisions in Wu and Hamada's analysis ensure this. The accompanying table shows just two columns of an ANOVA table for this experiment. Only main effects and components of two-factor interactions are listed, including three pairs of aliases. Aliasing between some two-factor interactions is expected, since the maximum resolution of this design is 4. This experiment studied two response variables. In both cases, some aliased interactions were statistically significant. This poses a challenge of interpretation, since without more information or further assumptions it is impossible to determine which interaction is responsible for significance. In some instances there may be a theoretical basis to make this determination. This example shows one advantage of fractional designs. The full formula_89 factorial experiment has 81 treatment combinations, but taking one observation on each of these would leave no degrees of freedom for error. The fractional design also uses 81 observations, but on just 27 treatment combinations, in such a way that one can make inferences on main effects and on (most) two-factor interactions. This may be sufficient for practical purposes. History. The first statistical use of the term "aliasing" in print is the 1945 paper by Finney, which dealt with regular fractions with 2 or 3 levels. The term was imported into signal processing theory a few years later, possibly influenced by its use in factorial experiments; the history of that usage is described in the article on aliasing in signal processing. The 1961 paper in which Box and Hunter introduced the concept of "resolution" dealt with regular two-level designs, but their initial definition makes no reference to lengths of defining words and so can be understood rather generally. Rao actually makes implicit use of resolution in his 1947 paper introducing orthogonal arrays, reflected in an important parameter inequality that he develops. He distinguishes effects in full and fractional designs by using symbols formula_152 and formula_153 (corresponding to formula_12 and formula_13), but makes no mention of aliasing. The term "confounded" is often used as a synonym for "aliased", and so one must read the literature carefully. The former term "is generally reserved for the indistinguishability of a treatment contrast and a block contrast", that is, for "confounding with blocks". Kempthorne has shown how confounding with blocks in a formula_42-factor experiment may be viewed as aliasing in a fractional design with formula_154 factors, but it is unclear whether one can do the reverse. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
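The strength of a fraction, viewed as an orthogonal array, can be verified directly from its runs, and by the theorem above the maximum resolution is then one more than the maximum strength. The Python sketch below is a brute-force check of whether every set of formula_124 columns of an array contains each combination of levels equally often, applied here to the formula_48 half-fraction {000, 011, 101, 110} used earlier; it is an illustration adequate only for small arrays, not production code.

```python
from collections import Counter
from itertools import combinations, product

def has_strength(runs, t):
    """True if, in every set of t columns, each t-tuple of levels
    appears the same number of times (brute force; small arrays only)."""
    k = len(runs[0])
    levels = [sorted(set(run[i] for run in runs)) for i in range(k)]
    for cols in combinations(range(k), t):
        counts = Counter(tuple(run[i] for i in cols) for run in runs)
        expected = [tuple(combo) for combo in product(*(levels[i] for i in cols))]
        if len(set(counts[e] for e in expected)) != 1:
            return False
    return True

fraction = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(has_strength(fraction, 2))   # True  -> resolution at least 3
print(has_strength(fraction, 3))   # False -> maximum strength is 2, so resolution is 3
```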
[ { "math_id": 0, "text": " \\mu_{11} - \\mu_{12} " }, { "math_id": 1, "text": " \\mu_{11} + \\mu_{12} + \\mu_{13} - \\mu_{21} - \\mu_{22} - \\mu_{23} " }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": " \\mu_{11} + \\mu_{21} - \\mu_{12} - \\mu_{22} " }, { "math_id": 4, "text": " \\mu_{11} + \\mu_{21} - \\mu_{13} - \\mu_{23} " }, { "math_id": 5, "text": " \\mu_{11} - \\mu_{12} - \\mu_{21} + \\mu_{22} " }, { "math_id": 6, "text": " \\mu_{11} - \\mu_{13} - \\mu_{21} + \\mu_{23} " }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "A \\times B" }, { "math_id": 9, "text": "A, B" }, { "math_id": 10, "text": "A\\times B" }, { "math_id": 11, "text": "2 \\times 3 \\times 3" }, { "math_id": 12, "text": "U" }, { "math_id": 13, "text": "\\widetilde{U}" }, { "math_id": 14, "text": "U_1" }, { "math_id": 15, "text": "U_2" }, { "math_id": 16, "text": "\\widetilde{U}_1" }, { "math_id": 17, "text": "\\widetilde{U}_2" }, { "math_id": 18, "text": "R" }, { "math_id": 19, "text": "p" }, { "math_id": 20, "text": "R-p" }, { "math_id": 21, "text": "R = 3" }, { "math_id": 22, "text": "p = 1)" }, { "math_id": 23, "text": "R-1, R-2" }, { "math_id": 24, "text": "p = 0" }, { "math_id": 25, "text": "R = 2" }, { "math_id": 26, "text": "\\begin{bmatrix} 1 \\\\ -1 \\\\ 0 \\end{bmatrix} = \\begin{bmatrix} 1 \\\\ -1 \\\\ 0 \\end{bmatrix} + 0\\begin{bmatrix} 1 \\\\ 0 \\\\ -1 \\end{bmatrix} " }, { "math_id": 27, "text": "\\begin{bmatrix} 0 \\\\ 1 \\\\ -1 \\end{bmatrix} = \\begin{bmatrix} 1 \\\\ 0 \\\\ -1 \\end{bmatrix} -\\begin{bmatrix} 1 \\\\ -1 \\\\ 0 \\end{bmatrix}" }, { "math_id": 28, "text": "C" }, { "math_id": 29, "text": "AB" }, { "math_id": 30, "text": "AC" }, { "math_id": 31, "text": "BC" }, { "math_id": 32, "text": "ABC" }, { "math_id": 33, "text": "2^3" }, { "math_id": 34, "text": "\\{000,011,101,110\\}" }, { "math_id": 35, "text": "t_1 + t_2 + t_3 = 0 \\pmod 2 " }, { "math_id": 36, "text": "011" }, { "math_id": 37, "text": "0 + 1 + 1 = 0 \\pmod 2 " }, { "math_id": 38, "text": "\\{001,010,100,111\\}" }, { "math_id": 39, "text": "t_1 + t_2 + t_3 = 1 \\pmod 2 " }, { "math_id": 40, "text": "s^k" }, { "math_id": 41, "text": "s" }, { "math_id": 42, "text": "k" }, { "math_id": 43, "text": "a_1t_1 + \\cdots + a_kt_k = b," }, { "math_id": 44, "text": "GF(s)" }, { "math_id": 45, "text": "s^{k-1}" }, { "math_id": 46, "text": "s^{k-2}," }, { "math_id": 47, "text": "s^{k-r}" }, { "math_id": 48, "text": "2^{3-1}" }, { "math_id": 49, "text": "2^{3-1}_{III}" }, { "math_id": 50, "text": "a_1t_1 + \\cdots + a_kt_k" }, { "math_id": 51, "text": "A_1^{a_1} \\cdots A_k^{a_k}" }, { "math_id": 52, "text": "A, B, C \\ldots" }, { "math_id": 53, "text": "A_1, A_2, A_3 \\ldots" }, { "math_id": 54, "text": "I=A_1^0 \\cdots A_k^0" }, { "math_id": 55, "text": "\\mathbb{G}" }, { "math_id": 56, "text": "W^s = I" }, { "math_id": 57, "text": "W \\in \\mathbb{G}" }, { "math_id": 58, "text": "2^k" }, { "math_id": 59, "text": "s > 2" }, { "math_id": 60, "text": "s=3" }, { "math_id": 61, "text": "\\mathbb{H}" }, { "math_id": 62, "text": "\\mathbb{H} = \\{I, W_1, \\ldots, W_{\\ell}\\}" }, { "math_id": 63, "text": "I" }, { "math_id": 64, "text": " I = W_1 = \\cdots = W_{\\ell} " }, { "math_id": 65, "text": "W" }, { "math_id": 66, "text": " W = WW_1 = \\cdots = WW_{\\ell}," }, { "math_id": 67, "text": "WW_i" }, { "math_id": 68, "text": "t_1 + t_2 + t_3" }, { "math_id": 69, "text": "\\{I, ABC\\}" }, { "math_id": 70, "text": "((ABC)^2 = A^2B^2C^2 = I)" }, { "math_id": 71, "text": " I = ABC" }, { "math_id": 72, "text": "A = A^2BC" }, { 
"math_id": 73, "text": "A = BC," }, { "math_id": 74, "text": "B = AC" }, { "math_id": 75, "text": "C = AB" }, { "math_id": 76, "text": "AB, AC" }, { "math_id": 77, "text": "t_1 + t_2 = 0 \\pmod 2 " }, { "math_id": 78, "text": "A^1B^1C^0" }, { "math_id": 79, "text": "I=AB" }, { "math_id": 80, "text": "A=B" }, { "math_id": 81, "text": "C=ABC" }, { "math_id": 82, "text": "AC=BC" }, { "math_id": 83, "text": "A, B, C" }, { "math_id": 84, "text": "D" }, { "math_id": 85, "text": "A^3 = B^3 = C^3 = D^3 = I." }, { "math_id": 86, "text": "D^4 = D" }, { "math_id": 87, "text": "D^6 = I" }, { "math_id": 88, "text": "t_1 + t_2 + t_3 + 2t^4 = 0 \\pmod{3}" }, { "math_id": 89, "text": "3^4" }, { "math_id": 90, "text": "ABCD^2" }, { "math_id": 91, "text": "(ABCD^2)^2 = A^2B^2C^2D " }, { "math_id": 92, "text": "(ABCD^2)^3 = I" }, { "math_id": 93, "text": "\\{I, ABCD^2, A^2B^2C^2D\\}" }, { "math_id": 94, "text": " I = ABCD^2 = A^2B^2C^2D." }, { "math_id": 95, "text": " A = A^2BCD^2 = B^2C^2D." }, { "math_id": 96, "text": " A = AB^2C^2D = BCD^2." }, { "math_id": 97, "text": "3^{4-1}_{IV}" }, { "math_id": 98, "text": "BCD^2" }, { "math_id": 99, "text": "B \\times C \\times D" }, { "math_id": 100, "text": "AB^2C^2D" }, { "math_id": 101, "text": "A \\times B \\times C \\times D" }, { "math_id": 102, "text": "2^{5-2}" }, { "math_id": 103, "text": "1/4" }, { "math_id": 104, "text": "2^5" }, { "math_id": 105, "text": "\\begin{cases}\nt_1 + t_2 + t_4 = 1\\\\\nt_1 + t_3 + t_5 = 1\n\\end{cases}" }, { "math_id": 106, "text": "ABD" }, { "math_id": 107, "text": "ACE" }, { "math_id": 108, "text": "I = ABD" }, { "math_id": 109, "text": "D = AB" }, { "math_id": 110, "text": "E = AC" }, { "math_id": 111, "text": "(ABD)(ACE)" }, { "math_id": 112, "text": "= BCDE" }, { "math_id": 113, "text": "A^2 = I" }, { "math_id": 114, "text": "BCDE" }, { "math_id": 115, "text": "t_2 + t_3 + t_4 + t_5 = 0 \\pmod2" }, { "math_id": 116, "text": "I = ABD = ACE = BCDE" }, { "math_id": 117, "text": "D = AB = ACDE = BCE" }, { "math_id": 118, "text": "AB^2C" }, { "math_id": 119, "text": "3^{4-1}" }, { "math_id": 120, "text": "A^2B^2C^2D" }, { "math_id": 121, "text": "ABD, ACE" }, { "math_id": 122, "text": "ABCD" }, { "math_id": 123, "text": "AE" }, { "math_id": 124, "text": "t" }, { "math_id": 125, "text": "t'" }, { "math_id": 126, "text": "t'<t" }, { "math_id": 127, "text": "s^t" }, { "math_id": 128, "text": "\\{1, \\ldots, k\\}" }, { "math_id": 129, "text": "I = \\{i\\}" }, { "math_id": 130, "text": "i" }, { "math_id": 131, "text": "I = \\{i, j\\}" }, { "math_id": 132, "text": "j" }, { "math_id": 133, "text": "t \\ge 1" }, { "math_id": 134, "text": "I, J \\subseteq \\{1, \\ldots, k\\}" }, { "math_id": 135, "text": "1 \\le |I| \\le t" }, { "math_id": 136, "text": "|I \\cup J| \\le t" }, { "math_id": 137, "text": "I \\neq J" }, { "math_id": 138, "text": "J" }, { "math_id": 139, "text": "A, B, C, D, E" }, { "math_id": 140, "text": "t=3" }, { "math_id": 141, "text": "A \\times C" }, { "math_id": 142, "text": "C \\times D" }, { "math_id": 143, "text": "\\{A,B,C,D\\}" }, { "math_id": 144, "text": "t+1" }, { "math_id": 145, "text": "R," }, { "math_id": 146, "text": "R=t+1." }, { "math_id": 147, "text": "K" }, { "math_id": 148, "text": "BCDEFGHIJK" }, { "math_id": 149, "text": "ABCJK" }, { "math_id": 150, "text": "DEFGHI" }, { "math_id": 151, "text": "ABCDEFGHIJK" }, { "math_id": 152, "text": "[\\cdots]" }, { "math_id": 153, "text": "\\{\\cdots\\}" }, { "math_id": 154, "text": "k+1" } ]
https://en.wikipedia.org/wiki?curid=73354598
73356523
Bailey's FFT algorithm
High-performance algorithm Bailey's FFT (also known as a 4-step FFT) is a high-performance algorithm for computing the fast Fourier transform (FFT). This variation of the Cooley–Tukey FFT algorithm was originally designed for systems with hierarchical memory common in modern computers (and was the first FFT algorithm in this so-called "out of core" class). The algorithm treats the samples as a two-dimensional matrix (thus yet another name, a matrix FFT algorithm) and executes short FFT operations on the columns and rows of the matrix, with a correction multiplication by "twiddle factors" in between. The algorithm takes its name from an article by David H. Bailey, "FFTs in external or hierarchical memory", published in 1989. In this article Bailey credits the algorithm to W. M. Gentleman and G. Sande, who published their paper, "Fast Fourier Transforms: for fun and profit", some twenty years earlier in 1966. The algorithm can be considered a radix-formula_0 FFT decomposition. Here is a brief overview of how the "4-step" version of the Bailey FFT algorithm works (a short implementation sketch is given below): 1. The data (in natural order) are arranged into a matrix. 2. Each column of the matrix is transformed with a short FFT. 3. Each element of the matrix is multiplied by the corresponding twiddle factor. 4. Each row of the matrix is transformed with a short FFT. The result (in natural order) is read column-by-column. Since the operations are performed column-wise and row-wise, steps 2 and 4 (and reading of the result) might include a matrix transpose to rearrange the elements in a way convenient for processing. The algorithm resembles a 2-dimensional FFT; 3-dimensional (and higher) extensions are known as the 5-step FFT, 6-step FFT, etc. The Bailey FFT is typically used for computing DFTs of large datasets, such as those used in scientific and engineering applications. The Bailey FFT is a very efficient algorithm, and it has been used to compute FFTs of datasets with billions of elements (when applied to the number-theoretic transform, datasets on the order of 10¹² elements were processed in the mid-2000s). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
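The following NumPy sketch implements the 4-step scheme described above for a vector of length n1·n2 and checks it against a library FFT. The matrix layout (n2 rows by n1 columns), the sign convention of the twiddle factors, and the final transpose follow one common formulation of the algorithm and are assumptions of this sketch rather than a reproduction of Bailey's original code; in an out-of-core setting each library FFT call would be replaced by short transforms sized to fit in fast memory, with explicit transposes between the steps.

```python
import numpy as np

def bailey_fft(x, n1, n2):
    """4-step (Bailey) FFT of a length n1*n2 vector; matches np.fft.fft up to rounding."""
    n = n1 * n2
    a = np.asarray(x, dtype=complex).reshape(n2, n1)   # a[j, i] = x[i + n1*j]
    # Step 1: length-n2 FFTs down each of the n1 columns.
    a = np.fft.fft(a, axis=0)
    # Step 2: multiply element (j, i) by the twiddle factor exp(-2*pi*1j*i*j/n),
    # where j is the output index of the column FFTs and i is the column index.
    j = np.arange(n2).reshape(-1, 1)
    i = np.arange(n1).reshape(1, -1)
    a *= np.exp(-2j * np.pi * i * j / n)
    # Step 3: length-n1 FFTs along each of the n2 rows.
    a = np.fft.fft(a, axis=1)
    # Step 4: transpose, so that reading row by row gives the transform in natural
    # order (equivalently, the result is read column-by-column before transposing).
    return a.T.reshape(-1)

x = np.random.rand(12) + 1j * np.random.rand(12)
print(np.allclose(bailey_fft(x, 4, 3), np.fft.fft(x)))   # True
```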
[ { "math_id": 0, "text": "\\sqrt n" } ]
https://en.wikipedia.org/wiki?curid=73356523
73360
Lyapunov fractal
Type of fractal In mathematics, Lyapunov fractals (also known as Markus–Lyapunov fractals) are bifurcational fractals derived from an extension of the logistic map in which the degree of the growth of the population, "r", periodically switches between two values "A" and "B". A Lyapunov fractal is constructed by mapping the regions of stability and chaotic behaviour (measured using the Lyapunov exponent formula_0) in the "a"−"b" plane for given periodic sequences of "a" and "b". In the images, yellow corresponds to formula_1 (stability), and blue corresponds to formula_2 (chaos). Lyapunov fractals were discovered in the late 1980s by the Germano-Chilean physicist Mario Markus from the Max Planck Institute of Molecular Physiology. They were introduced to a large public by a science popularization article on recreational mathematics published in Scientific American in 1991. Properties. Lyapunov fractals are generally drawn for values of "A" and "B" in the interval formula_3. For larger values, the interval [0,1] is no longer stable, and the sequence is likely to be attracted by infinity, although convergent cycles of finite values continue to exist for some parameters. For all iteration sequences, the diagonal "a = b" is always the same as for the standard one parameter logistic function. The sequence is usually started at the value 0.5, which is a critical point of the iterative function. The other (even complex valued) critical points of the iterative function during one entire round are those that pass through the value 0.5 in the first round. A convergent cycle must attract at least one critical point. Therefore, all convergent cycles can be obtained by just shifting the iteration sequence, and keeping the starting value 0.5. In practice, shifting this sequence leads to changes in the fractal, as some branches get covered by others. For instance, the Lyapunov fractal for the iteration sequence AB (see top figure on the right) is not perfectly symmetric with respect to "a" and "b". Algorithm. The algorithm for computing Lyapunov fractals works as follows (see the implementation sketch below): Choose a string formula_4 of the letters A and B, which determines the periodic forcing sequence, and a region of points formula_5 to be plotted. For each such point: construct the sequence of growth rates by setting formula_6 whenever formula_7 and formula_8 whenever formula_9, repeating the string cyclically; set formula_10 and iterate the logistic map formula_11; approximate the Lyapunov exponent formula_12 using a large but finite number of iterations formula_13 (the n = 0 term of the sum is dropped, since formula_14 when formula_15); and colour the point formula_16 according to the value obtained, for example yellow for negative and blue for positive values. More dimensions. Lyapunov fractals can be calculated in more than two dimensions. The sequence string for an "n"-dimensional fractal has to be built from an alphabet with "n" characters, e.g. "ABBBCA" for a 3D fractal, which can be visualized either as a 3D object or as an animation showing a "slice" in the C direction for each animation frame, like the example given here. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
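A direct implementation of the algorithm above is short. The Python sketch below computes the Lyapunov exponent for a single point (a, b) and a forcing string such as "AB", discarding a few initial iterations as a warm-up before accumulating the sum; the warm-up length and iteration count are arbitrary choices of this sketch, not part of the definition. On the diagonal a = b the forcing plays no role, so the values can be sanity-checked against the ordinary logistic map.

```python
import math

def lyapunov_exponent(a, b, seq="AB", n_iter=2000, n_warmup=100):
    """Approximate Lyapunov exponent of the forced logistic map
    x_{n+1} = r_n * x_n * (1 - x_n), with r_n = a or b according to seq."""
    x = 0.5
    total, count = 0.0, 0
    for n in range(n_warmup + n_iter):
        r = a if seq[n % len(seq)] == "A" else b
        if n >= n_warmup:
            d = abs(r * (1 - 2 * x))      # |d x_{n+1} / d x_n| at the current point
            if d > 0:                     # skip the rare exact hit of x = 0.5
                total += math.log(d)
                count += 1
        x = r * x * (1 - x)
    return total / count

print(lyapunov_exponent(3.9, 3.9))   # a = b reduces to the logistic map with r = 3.9: chaotic, value > 0
print(lyapunov_exponent(3.2, 3.2))   # r = 3.2 has a stable 2-cycle: value < 0
```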
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "\\lambda < 0" }, { "math_id": 2, "text": "\\lambda > 0" }, { "math_id": 3, "text": "[0,4]" }, { "math_id": 4, "text": "S" }, { "math_id": 5, "text": "(a,b) \\in [0,4] \\times [0,4]" }, { "math_id": 6, "text": "r_n = a" }, { "math_id": 7, "text": "S_n = A" }, { "math_id": 8, "text": "r_n = b" }, { "math_id": 9, "text": "S_n = B" }, { "math_id": 10, "text": "x_0 = 0.5" }, { "math_id": 11, "text": "x_{n+1} = r_n x_n (1 - x_n)" }, { "math_id": 12, "text": "\\lambda = \\lim_{N \\rightarrow \\infty} {1 \\over N} \\sum_{n = 1}^N \\log \\left|{dx_{n+1} \\over dx_n}\\right| = \\lim_{N \\rightarrow \\infty} {1 \\over N} \\sum_{n = 1}^N \\log |r_n (1 - 2x_n)|" }, { "math_id": 13, "text": "N" }, { "math_id": 14, "text": "r_0 (1 - 2x_0) = r_n \\cdot 0 = 0" }, { "math_id": 15, "text": "x_0=0.5" }, { "math_id": 16, "text": "(a,b)" } ]
https://en.wikipedia.org/wiki?curid=73360
7336057
Phase-comparison monopulse
Phase-comparison monopulse is a technique used in radio frequency (RF) applications such as radar and direction finding to accurately estimate the direction of arrival of a signal from the phase difference of the signal measured on two (or more) separated antennas, or more typically from displaced phase centers of an array antenna. Phase-comparison monopulse differs from amplitude-comparison monopulse in that the former uses displaced phase centers with a common beam pointing direction, while the latter uses a common phase center and displaced beam pointing directions. In phase-comparison monopulse, typically an array is subdivided into subarrays, and then a "sum" and a "difference" or "del" channel are formed. For a linear array, these subarrays would each be half of the elements, divided in the middle. For a planar array, these subarrays would be the four quadrants of the array, each with 1/4 of the array's elements. In a linear array, the outputs of the two subarrays are added to form the "sum" channel, and subtracted to form the "del" channel. The monopulse ratio is formed by dividing the imaginary part of the del channel by the real part of the sum channel. This ratio gives an error signal that indicates, to a high degree of accuracy, the deviation of the actual target angle from the center of the beam. For a planar array, one sum channel is formed as the sum of the outputs of all four quadrants, but two del channels are formed, one for the elevation dimension and one for the orthogonal azimuth dimension. Two monopulse ratios are formed just as with a linear array, each one indicating the deviation angle in one dimension from the center of the beam. There are some common misconceptions about phase-comparison monopulse. First, only one beam is formed. Monopulse processing is done entirely with the received signal in the array manifold and beamforming network. Speaking in terms of only one dimension for clarity, such as with a linear array, the signal is received by the array and summed into each of two subarrays with displaced phase centers. The sum channel is formed simply by adding these two subarray outputs, and the result is exactly the same as if the entire array had been summed in one step. The del channel is formed simply by subtracting these same subarray outputs. Second, phase-comparison monopulse does not actually perform an explicit phase comparison; rather, it simply divides the del channel by the sum channel to arrive at a ratio in which the angle information is encoded. The following mathematical derivation should make it clear why this is so. Mathematics. Sum Pattern. We can define the beam pattern (array factor) of a uniform linear array (ULA) with N elements as: formula_0, where formula_1 is the array manifold vector and formula_2 is a vector of complex weights representing amplitude and phase adjustments applied to each antenna element. The manifold vector, formula_1, fully encapsulates all of the spatial properties of the array. formula_3 is the distance between elements of the array, and formula_4 is the angle of arrival of an incident plane wave, defined from end-fire, i.e., formula_5 is a signal from array broadside. It is common to perform a variable substitution to formula_6-space, where formula_7, and therefore we have: formula_8 and we can more easily see that formula_6 is simply the phase shift between adjacent elements. The formula_9 term simply references the absolute phase to the physical center of the array. 
Notice that this result is the same if we instead first sum each half of the array, then add those results together. formula_10 The weight vector is a combination of a steering vector, which steers the beam toward a desired direction formula_11 using phase adjustments, and an amplitude taper that is often applied to reduce sidelobes. Thus, formula_12, and formula_13, where formula_14. We can clearly see now that the beam pattern, in formula_6-space, is the spatial equivalent of the discrete time Fourier transform (DTFT) of the array amplitude tapering vector times a linear phase term. The advantage of formula_6-space is that the beam shape is identical no matter where it is steered, and is only a function of the deviation of the desired target phase from the actual target phase. Let us now assume an un-tapered, normalized array with formula_15. The beam pattern can be easily shown to be the familiar aliased sinc (asinc) function: formula_16 This pattern is also known, for monopulse purposes, as the "sum" pattern, as it was obtained by summing all of the elements together. Going forward we will suppress the formula_17 subscript and instead use only formula_6 with the understanding that it represents the deviation between the steered phase and the actual target phase. Difference Pattern. Let us now develop the monopulse "difference" or "del" pattern by dividing the array into two equal halves called subarrays. We could have just as easily derived the sum pattern by first determining the pattern of each subarray individually and adding these two results together. In monopulse practice, this is what is actually done. The reader is left to show that formula_18 is conjugate symmetric, so it can be re-written in terms of only its first half, formula_19, using an exchange matrix, formula_20, that "flips" this vector. formula_21 Note that formula_22. Assuming that N is even (we could just as easily develop this using an odd N), formula_23 If we assume that the weight vector is also conjugate symmetric (a good assumption), then formula_24 and the sum beam pattern can be rewritten as: formula_25 The difference or "del" pattern can easily be inferred from the sum pattern simply by flipping the sign of the weights for the second half of the array: formula_26 Again assuming that formula_15, the del pattern can be shown to reduce to: formula_27 Monopulse Ratio. The monopulse ratio is formed as: formula_28 One can see that, within the 3 dB beam width of the system, the monopulse ratio is almost linear. In fact, for many systems a linear approximation is good enough. One can also note that the monopulse ratio is continuous within the null-to-null beam width, but has asymptotes that occur at the beam nulls. Therefore, the monopulse ratio can only accurately measure the deviation angle of a target within the main lobe of the system. However, targets detected in the sidelobes of the system, if not mitigated, will produce erroneous results regardless. Concept of Operations. Before performing monopulse processing, a system must first detect a target, which it does as normal using the sum channel. All of the typical measurements that a non-monopulse system makes are done using the sum channel, e.g., range, Doppler, and angle. However, the angle measurement is limited in that the target could be anywhere within the beam width of the sum beam, and therefore the system can only assume that the beam pointing direction is the same as the actual target angle. 
In reality, of course, the actual target angle and the beam steered angle will differ. Therefore, a monopulse processor functions by first detecting and measuring the target signal on the sum channel. Then, only as necessary for detected targets, it measures the same signal on the "del" channel, dividing the imaginary part of this result by the real part of the "sum" channel, then converting this ratio to a deviation angle using the relationships: formula_29 and formula_30 This deviation angle, which can be positive or negative, is added to the beam pointing angle to arrive at the more accurate estimate of the actual target bearing angle. Of course, if the array is 2-dimensional, such as a planar array, there are two del channels, one for elevation and one for azimuth, and therefore two monopulse ratios are formed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B_\\theta \\left( \\theta \\right) = \\vec{w}^H \\vec{v}_\\theta \\left( \\theta \\right) = \\sum_{n=0}^{N-1}w_n^* \\left[ \\vec{v}_\\theta \\left( \\theta \\right) \\right]_n = \\sum_{n=0}^{N-1}w_n^* e^{j \\left( n- \\frac{N-1}{2} \\right) \\frac{2\\pi}{\\lambda} d cos \\theta}" }, { "math_id": 1, "text": "\\vec{v}_\\theta" }, { "math_id": 2, "text": "\\vec{w}" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": "\\theta=90^\\circ" }, { "math_id": 6, "text": "\\psi" }, { "math_id": 7, "text": "\\psi=\\frac{2\\pi}{\\lambda} d cos \\theta" }, { "math_id": 8, "text": "B_\\psi \\left( \\psi \\right) = \\sum_{n=0}^{N-1}w_n^* e^{j \\left( n- \\frac{N-1}{2} \\right) \\psi}" }, { "math_id": 9, "text": "\\frac{N-1}{2}" }, { "math_id": 10, "text": "B_\\psi \\left( \\psi \\right) = \\sum_{n=0}^{\\frac{N}{2}-1}w_n^* e^{j \\left( n- \\frac{N-1}{2} \\right) \\psi}+\\sum_{n=\\frac{N}{2}}^{N-1}w_n^* e^{j \\left( n- \\frac{N-1}{2} \\right) \\psi}" }, { "math_id": 11, "text": "\\psi_S" }, { "math_id": 12, "text": "\\left[ \\vec{w} \\right]_n=a_n e^{j \\left( n- \\frac{N-1}{2} \\right) \\psi_S}" }, { "math_id": 13, "text": "B_\\psi \\left( \\psi_\\Delta \\right) = e^{j \\left( \\frac{N-1}{2} \\right) \\psi_\\Delta} \\sum_{n=0}^{N-1} a_n e^{-jn\\psi_\\Delta}" }, { "math_id": 14, "text": "\\psi_\\Delta=\\psi_S-\\psi" }, { "math_id": 15, "text": "a_n = \\frac{1}{N}" }, { "math_id": 16, "text": "B_\\psi \\left( \\psi_\\Delta \\right) = \\frac{1}{N} \\frac{sin \\left( N \\frac{\\psi_\\Delta}{2} \\right)}{sin \\frac{\\psi_\\Delta}{2}}" }, { "math_id": 17, "text": "\\Delta" }, { "math_id": 18, "text": "\\vec{v}_\\psi \\left( \\psi \\right)" }, { "math_id": 19, "text": "\\vec{v}_{\\psi_1} \\left( \\psi \\right)" }, { "math_id": 20, "text": "\\textbf{J}" }, { "math_id": 21, "text": "\\textbf{J}=\\begin{bmatrix} 0 & \\cdots & 0 & 1 \\\\ \\vdots & \\ddots & 1 & 0 \\\\ 0 & \\cdot^{\\cdot^{\\cdot}} & \\ddots & \\vdots \\\\ 1 & 0 & \\cdots & 0 \\end{bmatrix}" }, { "math_id": 22, "text": "\\textbf{J} \\cdot \\textbf{J} = \\textbf{I}" }, { "math_id": 23, "text": "\\vec{v}_\\psi \\left( \\psi \\right)= \\begin{bmatrix} \\vec{v}_{\\psi_1} \\left( \\psi \\right)\\\\ \\cdots\\\\ \\textbf{J} \\vec{v}_{\\psi_1}^* \\left( \\psi \\right) \\end{bmatrix}" }, { "math_id": 24, "text": "\\vec{w} = \\begin{bmatrix} \\vec{w}_1 \\\\ \\cdots\\\\ \\textbf{J} \\vec{w}_1^* \\end{bmatrix}" }, { "math_id": 25, "text": "B_\\psi \\left( \\psi \\right)=\\Sigma_\\psi \\left( \\psi \\right) = \\vec{w}^H \\vec{v}_{\\psi} \\left( \\psi \\right)= \\begin{bmatrix} \\vec{w}_1^H & \\vdots & \\vec{w}_1^T \\textbf{J} \\end{bmatrix} \\begin{bmatrix} \\vec{v}_{\\psi_1} \\left( \\psi \\right)\\\\ \\cdots\\\\ \\textbf{J} \\vec{v}_{\\psi_1}^* \\left( \\psi \\right) \\end{bmatrix} = \\vec{w}_1^H \\vec{v}_{\\psi_1} \\left( \\psi \\right) + \\vec{w}_1^T \\vec{v}_{\\psi_1}^* \\left( \\psi \\right) = 2Re \\left[ \\vec{w}_1^H \\vec{v}_{\\psi_1} \\left( \\psi \\right) \\right] " }, { "math_id": 26, "text": "\\Delta_\\psi \\left( \\psi \\right) = \\begin{bmatrix} \\vec{w}_1^H & \\vdots & -\\vec{w}_1^T \\textbf{J} \\end{bmatrix} \\begin{bmatrix} \\vec{v}_{\\psi_1} \\left( \\psi \\right)\\\\ \\cdots\\\\ \\textbf{J} \\vec{v}_{\\psi_1}^* \\left( \\psi \\right) \\end{bmatrix} = \\vec{w}_1^H \\vec{v}_{\\psi_1} \\left( \\psi \\right) - \\vec{w}_1^T \\vec{v}_{\\psi_1}^* \\left( \\psi \\right) = 2Im \\left[ \\vec{w}_1^H \\vec{v}_{\\psi_1} \\left( \\psi \\right) \\right] " }, { "math_id": 27, "text": 
"\\Delta_\\psi \\left( \\psi \\right) = \\frac{2}{N} Im \\left[ \\sum_{n=0}^{\\frac{N}{2}-1} e^{-j \\left( n- \\frac{N-1}{2} \\right) \\psi} \\right] = \\frac{2}{N} \\frac{sin^2 \\left( N \\frac{\\psi}{4} \\right)}{sin \\frac{\\psi}{2}} " }, { "math_id": 28, "text": " \\frac{\\Delta_\\psi}{\\Sigma_\\psi} = \\frac{\\frac{2}{N} \\frac{sin^2 \\left( N \\frac{\\psi}{4} \\right)}{sin \\frac{\\psi}{2}}}{\\frac{1}{N} \\frac{sin \\left( N \\frac{\\psi}{2} \\right)}{sin \\frac{\\psi}{2}}}=\\frac{2sin^2 \\left( N \\frac{\\psi}{4} \\right)}{sin \\left( N \\frac{\\psi}{2} \\right)}=\\frac{1-cos \\left( N \\frac{\\psi}{2} \\right)}{sin \\left( N \\frac{\\psi}{2} \\right)}=tan \\left( N \\frac{\\psi}{4} \\right)" }, { "math_id": 29, "text": "\\psi_\\Delta=\\psi_S-\\psi=\\frac{4}{N} arctan \\left( \\frac{\\Delta_\\psi}{\\Sigma_\\psi} \\right)" }, { "math_id": 30, "text": "\\theta=arccos \\left( \\frac{\\left( \\psi_S - \\psi_\\Delta \\right) \\lambda}{2\\pi d} \\right)=arccos \\left( \\frac{\\lambda}{2 \\pi d} \\left( \\frac{2\\pi}{\\lambda} d cos \\theta_S - \\frac{4}{N} arctan \\left( \\frac{\\Delta_\\psi}{\\Sigma_\\psi} \\right) \\right) \\right) = arccos \\left( cos \\theta_S - \\frac{2 \\lambda}{N \\pi d} arctan \\left( \\frac{\\Delta_\\psi}{\\Sigma_\\psi} \\right) \\right)" } ]
https://en.wikipedia.org/wiki?curid=7336057
733653
Mean width
In geometry, the mean width is a measure of the "size" of a body; see Hadwiger's theorem for more about the available measures of bodies. In formula_0 dimensions, one has to consider formula_1-dimensional hyperplanes perpendicular to a given direction formula_2 in formula_3, where formula_4 is the n-sphere (the surface of a formula_5-dimensional ball). The "width" of a body in a given direction formula_2 is the distance between the closest pair of such hyperplanes for which the body lies entirely between them (the hyperplanes intersect only the boundary of the body). The mean width is the average of this "width" over all formula_2 in formula_3. More formally, define a compact body B as the set of points in its interior together with the points on its boundary (here, points denote elements of formula_6). The support function of body B is defined as formula_7 where formula_0 is a direction and formula_8 denotes the usual inner product on formula_6. The mean width is then formula_9 where formula_10 is the formula_1-dimensional volume of formula_3. Note that the mean width can be defined for any compact body, but it is most useful for convex bodies (that is, bodies whose corresponding set is a convex set). Mean widths of convex bodies in low dimensions. One dimension. The mean width of a line segment "L" is the length (1-volume) of "L". Two dimensions. The mean width "w" of any compact shape "S" in two dimensions is "p"/π, where "p" is the perimeter of the convex hull of "S". So "w" is the diameter of a circle with the same perimeter as the convex hull. Three dimensions. For convex bodies "K" in three dimensions, the mean width of "K" is related to the average of the mean curvature, "H", over the whole surface of "K". In fact, formula_11 where formula_12 is the boundary of the convex body formula_13, formula_14 is a surface integral element, and formula_15 is the mean curvature at the corresponding position on formula_12. Similar relations can be given between the other measures and the generalizations of the mean curvature, also for other dimensions. As the integral over the mean curvature is typically much easier to calculate than the mean width, this is a very useful result. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. The mean width is usually mentioned in any good reference on convex geometry, for instance, "Selected topics in convex geometry" by Maria Moszyńska (Birkhäuser, Boston 2006). The relation between the mean width and the mean curvature is also derived in that reference. The application of the mean width as one of the measures featuring in Hadwiger's theorem is discussed by Beifang Chen in "A simplified elementary proof of Hadwiger's volume theorem." "Geom. Dedicata" 105 (2004), 107–120.
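A direct way to check the definition numerically is to average the support-function widths over random directions. The following Python/NumPy sketch is an addition for illustration (not from the cited references); it estimates the mean width of a centered unit cube and of a unit ball, and the Monte Carlo estimates approach 4/π (the square's perimeter divided by π), 3/2, and 2 respectively.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_width(support, dim, samples=200_000):
    """Average of h_B(u) + h_B(-u) over uniformly random unit directions u."""
    u = rng.normal(size=(samples, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)      # uniform directions on the unit sphere
    return np.mean(support(u) + support(-u))

# Unit cube [-1/2, 1/2]^d: support function h(u) = (|u_1| + ... + |u_d|) / 2.
cube = lambda u: 0.5 * np.abs(u).sum(axis=1)
# Ball of radius R = 1: h(u) = R for every direction, so the mean width is the diameter.
ball = lambda u: np.full(len(u), 1.0)

print(mean_width(cube, dim=2))   # ~ 4/pi = 1.273..., i.e. the square's perimeter divided by pi
print(mean_width(cube, dim=3))   # ~ 3/2, the mean width of the unit cube in three dimensions
print(mean_width(ball, dim=3))   # ~ 2.0, the diameter of the unit ball
```

As a cross-check of the three-dimensional relation above: for a ball of radius R the mean curvature is 1/R and the surface area is 4πR², so the surface integral of H/(2π) equals 2R, the diameter, in agreement with the estimate.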
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "(n-1)" }, { "math_id": 2, "text": "\\hat{n}" }, { "math_id": 3, "text": "S^{n-1}" }, { "math_id": 4, "text": "S^n" }, { "math_id": 5, "text": "(n+1)" }, { "math_id": 6, "text": "\\mathbb{R}^n" }, { "math_id": 7, "text": "h_B(n)=\\max\\{ \\langle n,x\\rangle |x \\in B \\}" }, { "math_id": 8, "text": "\\langle,\\rangle" }, { "math_id": 9, "text": "b(B)=\\frac{1}{S_{n-1}} \\int_{S^{n-1}} h_B(\\hat{n})+h_B(-\\hat{n})," }, { "math_id": 10, "text": "S_{n-1}" }, { "math_id": 11, "text": "\\int_{\\delta K} \\frac{H}{2\\pi} dS = b(K)" }, { "math_id": 12, "text": "\\delta K" }, { "math_id": 13, "text": "K" }, { "math_id": 14, "text": "dS" }, { "math_id": 15, "text": "H" } ]
https://en.wikipedia.org/wiki?curid=733653
73370366
Cauchy's limit theorem
Mathematical theorem Cauchy's limit theorem, named after the French mathematician Augustin-Louis Cauchy, describes a property of converging sequences. It states that for a converging sequence the sequence of the arithmetic means of its first formula_0 members converges to the same limit as the original sequence; that is, formula_1 with formula_2 implies formula_3. The theorem was found by Cauchy in 1821; subsequently, a number of related and generalized results were published, in particular by Otto Stolz (1885) and Ernesto Cesàro (1888). Related results and generalizations. If the arithmetic means in Cauchy's limit theorem are replaced by weighted arithmetic means, those converge as well. More precisely, for a sequence formula_1 with formula_4 and a sequence of positive real numbers formula_5 with formula_6, one has formula_7. This result can be used to derive the Stolz–Cesàro theorem, a more general result of which Cauchy's limit theorem is a special case. A similar result exists for the geometric means of a sequence: for a sequence formula_8 with formula_9 and formula_2, one has formula_10. The arithmetic means in Cauchy's limit theorem are also called Cesàro means. While Cauchy's limit theorem implies that for a convergent sequence its Cesàro means converge as well, the converse is not true. That is, the Cesàro means may converge while the original sequence does not. Applying the latter fact to the partial sums of a series allows real values to be assigned to certain divergent series and leads to the concept of Cesàro summation and summable series. In this context Cauchy's limit theorem can be generalised into the Silverman–Toeplitz theorem. Proof. Let formula_11 and let formula_12 be such that formula_13 for all formula_14. Due to formula_15, there exists a formula_16 with formula_17 for all formula_18. Now, for all formula_19, the above yields: formula_20
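The following Python sketch is an illustration added here (not part of the original text); it checks the theorem and the surrounding remarks numerically: the Cesàro means of a convergent sequence approach the same limit, the weighted means with p_n = n do as well, and the Cesàro means of the divergent sequence (−1)^n still converge.

```python
def cesaro_means(a):
    """Running arithmetic means (a_1 + ... + a_n) / n."""
    out, s = [], 0.0
    for n, x in enumerate(a, start=1):
        s += x
        out.append(s / n)
    return out

# a_n = 1 + 1/n converges to 1; by Cauchy's limit theorem so do its Cesaro means.
a = [1 + 1 / n for n in range(1, 100001)]
print(a[-1], cesaro_means(a)[-1])        # both close to 1 (the means converge more slowly)

# a_n = (-1)^n diverges, yet its Cesaro means converge to 0 (Cesaro summation).
b = [(-1) ** n for n in range(1, 100001)]
print(cesaro_means(b)[-1])               # 0.0 here (even number of terms)

# Weighted arithmetic means with p_n = n, so the partial sums of the weights tend to infinity.
p = list(range(1, 100001))
print(sum(pi * ai for pi, ai in zip(p, a)) / sum(p))   # also close to 1
```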
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": " (a_n)" }, { "math_id": 2, "text": "a_n\\to a" }, { "math_id": 3, "text": "(a_1+\\cdots+a_n) / n \\ \\to a" }, { "math_id": 4, "text": "a_n\\to a " }, { "math_id": 5, "text": " (p_n)" }, { "math_id": 6, "text": "\\frac{1}{p_1+\\cdots+p_n} \\to 0" }, { "math_id": 7, "text": "\\frac{p_1a_1+\\cdots+p_na_n}{p_1+\\cdots+p_n}\\to a " }, { "math_id": 8, "text": "(a_n)" }, { "math_id": 9, "text": "a_n>0" }, { "math_id": 10, "text": "\\sqrt[n]{a_1 \\cdot a_2\\cdot \\cdots \\cdot a_n} \\ \\to a" }, { "math_id": 11, "text": "\\varepsilon>0" }, { "math_id": 12, "text": "N \\in \\N" }, { "math_id": 13, "text": "|a_k - a| \\leq \\tfrac{\\varepsilon}{2}" }, { "math_id": 14, "text": "k \\geq N" }, { "math_id": 15, "text": "\\lim_{n \\to \\infty} \\frac{1}{n} \\sum_{k=1}^N (a_k - a) = 0" }, { "math_id": 16, "text": "M \\in \\N" }, { "math_id": 17, "text": "\\left|\\frac{1}{n} \\sum_{k=1}^N (a_k - a)\\right| \\leq \\frac{\\varepsilon}{2}" }, { "math_id": 18, "text": "n \\geq M" }, { "math_id": 19, "text": "n \\geq \\max(N,M)" }, { "math_id": 20, "text": "\\begin{align}\n\\left|\\frac{1}{n} \\left(\\sum_{k=1}^n a_k\\right) - a\\right| \n & = \\left|\\frac{1}{n} \\sum_{k=1}^n (a_k - a)\\right| \n = \\left|\\frac{1}{n} \\sum_{k=1}^N (a_k - a) + \\frac{1}{n} \\sum_{k=N+1}^n (a_k - a)\\right| \\\\ \n & \\leq \\left|\\frac{1}{n} \\sum_{k=1}^N (a_k - a)\\right| + \\frac{1}{n} \\sum_{k=N+1}^n |a_k - a| \\leq \\frac{\\varepsilon}{2} + \\frac{(n-N)\\varepsilon}{2n} \n \\leq \\varepsilon.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=73370366
73370539
Magnetic buoyancy
Force on magnetic flux tubes In plasma physics, magnetic buoyancy is an upward force exerted on magnetic flux tubes that are immersed in electrically conducting fluids and are under the influence of a gravitational force. It acts on magnetic flux tubes in stellar convection zones where it plays an important role in the formation of sunspots and starspots. It was first proposed by Eugene Parker in 1955. Magnetic flux tubes. For a magnetic flux tube in hydrostatic equilibrium with the surrounding medium, the tube's interior magnetic pressure formula_0 and fluid pressure formula_1 must be balanced by the fluid pressure formula_2 of the exterior medium, that is, formula_3 The magnetic pressure is always positive, so formula_4 As such, assuming that the temperature of the plasma within the flux tube is the same as the temperature of the surrounding plasma, the density of the flux tube must be lower than the density of the surrounding medium. Under the influence of a gravitational force, the tube will rise. Instability. The magnetic buoyancy instability is a plasma instability that can arise from small perturbations in systems where magnetic buoyancy is present. The magnetic buoyancy instability in a system with magnetic field formula_5 and perturbation wavevector formula_6, has three modes: the interchange instability where the perturbation wavevector is perpendicular to the magnetic field direction formula_7; the undular instability, sometimes referred to as the Parker instability or magnetic Rayleigh–Taylor instability, where the perturbation wavevector is parallel to the magnetic field direction formula_8; and the mixed instability, sometimes referred to as the quasi-interchange instability, a combination of the interchange and undular instabilities. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
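The pressure-balance argument can be put into numbers with a short sketch. The following Python example is an illustration added by the editor; the field strength and exterior gas pressure are arbitrary placeholder values (only the solar surface gravity is a standard figure), and the algebra simply restates the balance condition given above.

```python
# Illustrative sketch of the buoyancy argument above. All inputs are placeholder values.
MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A

def density_deficit(B, p_e):
    """Fractional density deficit of an isothermal flux tube in pressure balance.

    p_e = p_i + B^2 / (2*mu0), and at equal temperature and composition rho is
    proportional to p, so (rho_e - rho_i) / rho_e = (p_e - p_i) / p_e = p_mag / p_e.
    """
    p_mag = B**2 / (2 * MU0)                 # magnetic pressure inside the tube
    if p_mag >= p_e:
        raise ValueError("no hydrostatic balance: magnetic pressure exceeds p_e")
    return p_mag / p_e

B = 0.1        # field strength in tesla (hypothetical)
p_e = 1.0e4    # exterior gas pressure in pascal (hypothetical)
g = 274.0      # gravitational acceleration in m/s^2 (solar surface value)

deficit = density_deficit(B, p_e)            # (rho_e - rho_i) / rho_e
a_buoy = g * deficit / (1 - deficit)         # buoyant acceleration g*(rho_e - rho_i)/rho_i
print(f"density deficit = {deficit:.2f}, buoyant acceleration = {a_buoy:.0f} m/s^2")
```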
[ { "math_id": 0, "text": "p_m" }, { "math_id": 1, "text": "p_i" }, { "math_id": 2, "text": "p_e" }, { "math_id": 3, "text": "p_e = p_i + p_m." }, { "math_id": 4, "text": "p_e > p_i." }, { "math_id": 5, "text": "\\mathbf{B}" }, { "math_id": 6, "text": "\\mathbf{k}" }, { "math_id": 7, "text": "\\left(\\mathbf{k}\\perp\\mathbf{B}\\right)" }, { "math_id": 8, "text": "\\left(\\mathbf{k}\\parallel\\mathbf{B}\\right)" } ]
https://en.wikipedia.org/wiki?curid=73370539
73371448
17-animal inheritance puzzle
Mathematical puzzle The 17-animal inheritance puzzle is a mathematical puzzle involving unequal but fair allocation of indivisible goods, usually stated in terms of inheritance of a number of large animals (17 camels, 17 horses, 17 elephants, etc.), which must be divided in some stated proportion among a number of beneficiaries. It is a common example of an apportionment problem. Despite often being framed as a puzzle, it is more an anecdote about a curious calculation than a problem with a clear mathematical solution. Beyond recreational mathematics and mathematics education, the story has been repeated as a parable with varied metaphorical meanings. Although an ancient origin for the puzzle has often been claimed, it has not been documented. Instead, a version of the puzzle can be traced back to the works of Mulla Muhammad Mahdi Naraqi, an 18th-century Iranian philosopher. It entered the western recreational mathematics literature in the late 19th century. Several mathematicians have formulated different generalizations of the puzzle to numbers other than 17. Statement. According to the statement of the puzzle, a man dies leaving 17 camels (or other animals) to his three sons, to be divided in the following proportions: the eldest son should inherit 1⁄2 of the man's property, the middle son should inherit 1⁄3, and the youngest son should inherit 1⁄9. How should they divide the camels, noting that only a whole live camel has value? Solution. As usually stated, to solve the puzzle, the three sons ask for the help of another man, often a priest, judge, or other local official. This man solves the puzzle in the following way: he lends the three sons his own camel, so that there are now 18 camels to be divided. Half, a third, and a ninth of 18 give nine camels for the eldest son, six camels for the middle son, and two camels for the youngest son, in the proportions demanded for the inheritance. These 17 camels account for all but one of the 18; the camel left over is taken back by the judge as his own. This is possible because the sum of the fractions is less than one: 1⁄2 + 1⁄3 + 1⁄9 = 17⁄18. Some sources point out an additional feature of this solution: each son is satisfied, because he receives more camels than his originally-stated inheritance. The eldest son was originally promised only 8 1⁄2 camels, but receives nine; the middle son was promised 5 2⁄3, but receives six; and the youngest was promised 1 8⁄9, but receives two. History. Similar problems of unequal division go back to ancient times, but without the twist of the loan and return of the extra camel. For instance, the Rhind Mathematical Papyrus features a problem in which many loaves of bread are to be divided in four different specified proportions. The 17 animals puzzle can be seen as an example of a "completion to unity" problem, of a type found in other examples on this papyrus, in which a set of fractions adding to less than one should be completed, by adding more fractions, to make their total come out to exactly one. Another similar case, involving fractional inheritance in the Roman Empire, appears in the writings of Publius Juventius Celsus, attributed to a case decided by Salvius Julianus. 
The problems of fairly subdividing indivisible elements into specified proportions, seen in these inheritance problems, also arise when allocating seats in electoral systems based on proportional representation. Many similar problems of division into fractions are known from mathematics in the medieval Islamic world, but "it does not seem that the story of the 17 camels is part of classical Arab-Islamic mathematics". Supposed origins of the problem in the works of al-Khwarizmi, Fibonacci or Tartaglia also cannot be verified. A "legendary tale" attributes it to 16th-century Mughal Empire minister Birbal. The earliest documented appearance of the puzzle, found by Pierre Ageron and using 17 camels, is in the work of 18th-century Shiite Iranian philosopher Mulla Muhammad Mahdi Naraqi. By 1850 it had already entered circulation in America, through a travelogue of Mesopotamia published by James Phillips Fletcher. It appeared in "The Mathematical Monthly" in 1859, and a version with 17 elephants and a claimed Chinese origin was included in "Hanky Panky: A Book of Conjuring Tricks" (London, 1872), edited by William Henry Cremer but often attributed to Wiljalba Frikell or Henry Llewellyn Williams. The same puzzle subsequently appeared in the late 19th and early 20th centuries in the works of Henry Dudeney, Sam Loyd, Édouard Lucas, Professor Hoffmann, and Émile Fourrey, among others. A version with 17 horses circulated as folklore in mid-20th-century America. A variant of the story has been told with 11 camels, to be divided into 1⁄2, 1⁄4, and 1⁄6. Another variant of the puzzle appears in the book "The Man Who Counted", a mathematical puzzle book originally published in Portuguese by Júlio César de Mello e Souza in 1938. This version starts with 35 camels, to be divided in the same proportions as in the 17-camel version. After the hero of the story lends a camel, and the 36 camels are divided among the three brothers, two are left over: one to be returned to the hero, and another given to him as a reward for his cleverness. The endnotes to the English translation of the book cite the 17-camel version of the problem to the works of Fourrey and Gaston Boucheny (1939). Beyond recreational mathematics, the story has been used as the basis for school mathematics lessons, as a parable with varied morals in religion, law, economics, and politics, and even as a lay explanation for catalysis in chemistry. Generalizations. Paul Stockmeyer, a computer scientist, defines a class of similar puzzles for any number formula_0 of animals, with the property that formula_0 can be written as a sum of distinct divisors formula_1 of formula_2. In this case, one obtains a puzzle in which the fractions into which the formula_0 animals should be divided are formula_3 Because the numbers formula_4 have been chosen to divide formula_2, all of these fractions simplify to unit fractions. When combined with the judge's share of the animals, formula_5, they produce an Egyptian fraction representation of the number one. The numbers of camels that can be used as the basis for such a puzzle (that is, numbers formula_0 that can be represented as sums of distinct divisors of formula_2) form the integer sequence 1, 3, 5, 7, 11, 15, 17, 19, 23, 27, 29, 31, 35, 39, 41, ... 
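Stockmeyer's characterization lends itself to a brute-force check. The short Python sketch below is an illustration added by the editor (the helper name is arbitrary): for a given number n of animals it looks for distinct divisors of n + 1 summing to n, and it reproduces both the classic 17-camel shares and the integer sequence just listed.

```python
from itertools import combinations

def puzzle_shares(n):
    """Return distinct divisors of n + 1 summing to n, or None if no such set exists."""
    divisors = [d for d in range(1, n + 2) if (n + 1) % d == 0]
    for k in range(1, len(divisors) + 1):
        for combo in combinations(divisors, k):
            if sum(combo) == n:
                return combo
    return None

# The classic puzzle: 17 = 2 + 6 + 9, i.e. shares of 1/9, 1/3 and 1/2 of the 18 camels.
print(puzzle_shares(17))                               # (2, 6, 9)

# Numbers of animals admitting such a puzzle, matching the sequence listed above.
print([n for n in range(1, 42) if puzzle_shares(n)])
# -> [1, 3, 5, 7, 11, 15, 17, 19, 23, 27, 29, 31, 35, 39, 41]
```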
Naranan, an Indian physicist, seeks a more restricted class of generalized puzzles, with only three terms, and with formula_2 equal to the least common multiple of the denominators of the three unit fractions, finding only seven possible triples of fractions that meet these conditions. Brazilian researchers Márcio Luís Ferreira Nascimento and Luiz Barco generalize the problem further, as in the variation with 35 camels, to instances in which more than one camel may be lent and the number returned may be larger than the number lent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "d_1, d_2, \\dots" }, { "math_id": 2, "text": "n+1" }, { "math_id": 3, "text": "\\frac{d_1}{n+1}, \\frac{d_2}{n+1}, \\dots ." }, { "math_id": 4, "text": "d_i" }, { "math_id": 5, "text": "1/(n+1)" } ]
https://en.wikipedia.org/wiki?curid=73371448
7337217
Molecular orbital diagram
Visual tool in quantum chemistry A molecular orbital diagram, or MO diagram, is a qualitative descriptive tool explaining chemical bonding in molecules in terms of molecular orbital theory in general and the linear combination of atomic orbitals (LCAO) method in particular. A fundamental principle of these theories is that as atoms bond to form molecules, a certain number of atomic orbitals combine to form the same number of molecular orbitals, although the electrons involved may be redistributed among the orbitals. This tool is very well suited for simple diatomic molecules such as dihydrogen, dioxygen, and carbon monoxide but becomes more complex when discussing even comparatively simple polyatomic molecules, such as methane. MO diagrams can explain why some molecules exist and others do not. They can also predict bond strength, as well as the electronic transitions that can take place. History. Qualitative MO theory was introduced in 1928 by Robert S. Mulliken and Friedrich Hund. A mathematical description was provided by contributions from Douglas Hartree in 1928 and Vladimir Fock in 1930. Basics. Molecular orbital diagrams are diagrams of molecular orbital (MO) energy levels, shown as short horizontal lines in the center, flanked by constituent atomic orbital (AO) energy levels for comparison, with the energy levels increasing from the bottom to the top. Lines, often dashed diagonal lines, connect MO levels with their constituent AO levels. Degenerate energy levels are commonly shown side by side. Appropriate AO and MO levels are filled with electrons by the Pauli Exclusion Principle, symbolized by small vertical arrows whose directions indicate the electron spins. The AO or MO shapes themselves are often not shown on these diagrams. For a diatomic molecule, an MO diagram effectively shows the energetics of the bond between the two atoms, whose AO unbonded energies are shown on the sides. For simple polyatomic molecules with a "central atom" such as methane (CH4) or carbon dioxide (CO2), a MO diagram may show one of the identical bonds to the central atom. For other polyatomic molecules, an MO diagram may show one or more bonds of interest in the molecules, leaving others out for simplicity. Often even for simple molecules, AO and MO levels of inner orbitals and their electrons may be omitted from a diagram for simplicity. In MO theory molecular orbitals form by the overlap of atomic orbitals. Because σ bonds feature greater overlap than π bonds, σ bonding and σ* antibonding orbitals feature greater energy splitting (separation) than π and π* orbitals. The atomic orbital energy correlates with electronegativity as more electronegative atoms hold their electrons more tightly, lowering their energies. Sharing of molecular orbitals between atoms is more important when the atomic orbitals have comparable energy; when the energies differ greatly the orbitals tend to be localized on one atom and the mode of bonding becomes ionic. A second condition for overlapping atomic orbitals is that they have the same symmetry. Two atomic orbitals can overlap in two ways depending on their phase relationship (or relative signs for real orbitals). The phase (or sign) of an orbital is a direct consequence of the wave-like properties of electrons. In graphical representations of orbitals, orbital phase is depicted either by a plus or minus sign (which has no relationship to electric charge) or by shading one lobe. 
The sign of the phase itself does not have physical meaning except when mixing orbitals to form molecular orbitals. Two same-sign orbitals have a constructive overlap forming a molecular orbital with the bulk of the electron density located between the two nuclei. This MO is called the bonding orbital and its energy is lower than that of the original atomic orbitals. A bond involving molecular orbitals which are symmetric with respect to any rotation around the bond axis is called a sigma bond (σ-bond). If the phase cycles once while rotating around the axis, the bond is a pi bond (π-bond). Symmetry labels are further defined by whether the orbital maintains its original character after an inversion about its center; if it does, it is defined gerade, "g". If the orbital does not maintain its original character, it is ungerade, "u". Atomic orbitals can also interact with each other out of phase, which leads to destructive cancellation and no electron density between the two nuclei at the so-called nodal plane, depicted as a perpendicular dashed line. In this anti-bonding MO, with energy much higher than the original AO's, any electrons present are located in lobes pointing away from the central internuclear axis. The antibonding counterpart of a σ-bonding orbital is likewise symmetrical about the bond axis but is distinguished from it by an asterisk, as in σ*. For a π-bond, the corresponding bonding and antibonding orbitals do not have such symmetry around the bond axis and are designated π and π*, respectively. The next step in constructing an MO diagram is filling the newly formed molecular orbitals with electrons. Three general rules apply: the Aufbau principle (orbitals are filled starting with the lowest in energy), the Pauli exclusion principle (each orbital holds at most two electrons, which must have opposite spins), and Hund's rule (when several MO's of equal energy are available, electrons occupy them one at a time before any is doubly occupied). The filled MO highest in energy is called the highest occupied molecular orbital (HOMO) and the empty MO just above it is then the lowest unoccupied molecular orbital (LUMO). The electrons in the bonding MO's are called bonding electrons and any electrons in the antibonding orbital would be called antibonding electrons. The reduction in energy of these electrons is the driving force for chemical bond formation. Whenever mixing for an atomic orbital is not possible for reasons of symmetry or energy, a non-bonding MO is created, which is often quite similar to its constituent AO and has an energy level equal or close to it, thus not contributing to bonding energetics. The resulting electron configuration can be described in terms of bond type, parity and occupancy, for example dihydrogen 1σg2. Alternatively it can be written as a molecular term symbol, e.g. 1Σg+ for dihydrogen. Sometimes, the letter n is used to designate a non-bonding orbital. For a stable bond, the bond order, defined as formula_0, must be positive. The relative order in MO energies and occupancy corresponds with electronic transitions found in photoelectron spectroscopy (PES). In this way it is possible to experimentally verify MO theory. In general, sharp PES transitions indicate nonbonding electrons and broad bands are indicative of bonding and antibonding delocalized electrons. Bands can resolve into fine structure with spacings corresponding to vibrational modes of the molecular cation (see Franck–Condon principle). PES energies are different from ionisation energies, which relate to the energy required to strip off the nth electron after the first n − 1 electrons have been removed. MO diagrams with energy values can be obtained mathematically using the Hartree–Fock method. The starting point for any MO diagram is a predefined molecular geometry for the molecule in question. 
An exact relationship between geometry and orbital energies is given in Walsh diagrams. s-p mixing. The phenomenon of s-p mixing occurs when molecular orbitals of the same symmetry formed from the combination of 2s and 2p atomic orbitals are close enough in energy to further interact, which can lead to a change in the expected order of orbital energies. When molecular orbitals are formed, they are mathematically obtained from linear combinations of the starting atomic orbitals. Generally, in order to predict their relative energies, it is sufficient to consider only one atomic orbital from each atom to form a pair of molecular orbitals, as the contributions from the others are negligible. For instance, in dioxygen the 3σg MO can be roughly considered to be formed from interaction of oxygen 2pz AOs only. It is found to be lower in energy than the 1πu MO, both experimentally and from more sophisticated computational models, so that the expected order of filling is the 3σg before the 1πu. Hence the approximation to ignore the effects of further interactions is valid. However, experimental and computational results for homonuclear diatomics from Li2 to N2 and certain heteronuclear combinations such as CO and NO show that the 3σg MO is higher in energy than (and therefore filled after) the 1πu MO. This can be rationalised because the first-approximation 3σg has a suitable symmetry to interact with the 2σg bonding MO formed from the 2s AOs. As a result, the 2σg is lowered in energy, whilst the 3σg is raised. For the aforementioned molecules this results in the 3σg being higher in energy than the 1πu MO, which is where s-p mixing is most evident. Likewise, interaction between the 2σu* and 3σu* MOs leads to a lowering in energy of the former and a raising in energy of the latter. However, this is of less significance than the interaction of the bonding MOs. Diatomic MO diagrams. A diatomic molecular orbital diagram is used to understand the bonding of a diatomic molecule. MO diagrams can be used to deduce the magnetic properties of a molecule and how they change with ionization. They also give insight into the bond order of the molecule, that is, how many bonds are shared between the two atoms. The energies of the electrons are further understood by applying the Schrödinger equation to a molecule. Quantum mechanics can describe the energies exactly for single-electron systems; for systems with more electrons the energies can be approximated precisely with the help of the Born–Oppenheimer approximation, in which the nuclei are assumed stationary. The LCAO-MO method is used in conjunction with these approximations to further describe the state of the molecule. Diatomic molecules consist of a bond between only two atoms. They can be broken into two categories: homonuclear and heteronuclear. A homonuclear diatomic molecule is one composed of two atoms of the same element. Examples are H2, O2, and N2. A heteronuclear diatomic molecule is composed of two atoms of two different elements. Examples include CO, HCl, and NO. Dihydrogen. The smallest molecule, hydrogen gas, exists as dihydrogen (H-H) with a single covalent bond between two hydrogen atoms. As each hydrogen atom has a single 1s atomic orbital for its electron, the bond forms by overlap of these two atomic orbitals. In the figure the two atomic orbitals are depicted on the left and on the right. The vertical axis always represents the orbital energies. Each atomic orbital is singly occupied with an up or down arrow representing an electron. 
Application of MO theory for dihydrogen results in having both electrons in the bonding MO with electron configuration 1σ"g"2. The bond order for dihydrogen is (2-0)/2 = 1. The photoelectron spectrum of dihydrogen shows a single set of multiplets between 16 and 18 eV (electron volts). The dihydrogen MO diagram helps explain how a bond breaks. When applying energy to dihydrogen, a molecular electronic transition takes place when one electron in the bonding MO is promoted to the antibonding MO. The result is that there is no longer a net gain in energy. The superposition of the two 1s atomic orbitals leads to the formation of the σ and σ* molecular orbitals. Two atomic orbitals in phase create a larger electron density, which leads to the σ orbital. If the two 1s orbitals are not in phase, a node between them causes a jump in energy, the σ* orbital. From the diagram you can deduce the bond order, how many bonds are formed between the two atoms. For this molecule it is equal to one. Bond order can also give insight to how close or stretched a bond has become if a molecule is ionized. Dihelium and diberyllium. Dihelium (He-He) is a hypothetical molecule and MO theory helps to explain why dihelium does not exist in nature. The MO diagram for dihelium looks very similar to that of dihydrogen, but each helium has two electrons in its 1s atomic orbital rather than one for hydrogen, so there are now four electrons to place in the newly formed molecular orbitals. The only way to accomplish this is by occupying both the bonding and antibonding orbitals with two electrons, which reduces the bond order ((2−2)/2) to zero and cancels the net energy stabilization. However, by removing one electron from dihelium, the stable gas-phase species He2+ ion is formed with bond order 1/2. Another molecule that is precluded based on this principle is diberyllium. Beryllium has an electron configuration 1s22s2, so there are again two electrons in the valence level. However, the 2s can mix with the 2p orbitals in diberyllium, whereas there are no p orbitals in the valence level of hydrogen or helium. This mixing makes the antibonding 1σu orbital slightly less antibonding than the bonding 1σg orbital is bonding, with a net effect that the whole configuration has a slight bonding nature. This explains the fact that the diberyllium molecule exists and has been observed in the gas phase. The slight bonding nature explains the low dissociation energy of only 59 kJ·mol−1. Dilithium. MO theory correctly predicts that dilithium is a stable molecule with bond order 1 (configuration 1σ"g"21σ"u"22σ"g"2). The 1s MOs are completely filled and do not participate in bonding. Dilithium is a gas-phase molecule with a much lower bond strength than dihydrogen because the 2s electrons are further removed from the nucleus. In a more detailed analysis which considers the environment of each orbital due to all other electrons, both the 1σ orbitals have higher energies than the 1s AO and the occupied 2σ is also higher in energy than the 2s AO (see table 1). Diboron. The MO diagram for diboron (B-B, electron configuration 1σ"g"21σ"u"22σ"g"22σ"u"21π"u"2) requires the introduction of an atomic orbital overlap model for p orbitals. The three dumbbell-shaped p-orbitals have equal energy and are oriented mutually perpendicularly (or orthogonally). The p-orbitals oriented in the z-direction (pz) can overlap end-on forming a bonding (symmetrical) σ orbital and an antibonding σ* molecular orbital. 
In contrast to the sigma 1s MO's, the σ 2p has some non-bonding electron density at either side of the nuclei and the σ* 2p has some electron density between the nuclei. The other two p-orbitals, py and px, can overlap side-on. The resulting bonding orbital has its electron density in the shape of two lobes above and below the plane of the molecule. The orbital is not symmetric around the molecular axis and is therefore a pi orbital. The antibonding pi orbital (also asymmetrical) has four lobes pointing away from the nuclei. Both py and px orbitals form a pair of pi orbitals equal in energy (degenerate) and can have higher or lower energies than that of the sigma orbital. In diboron the 1s and 2s electrons do not participate in bonding but the single electrons in the 2p orbitals occupy the 2πpy and the 2πpx MO's resulting in bond order 1. Because the electrons have equal energy (they are degenerate) diboron is a diradical and since the spins are parallel the molecule is paramagnetic. In certain diborynes the boron atoms are excited and the bond order is 3. Dicarbon. Like diboron, dicarbon (C-C electron configuration:1σg21σu22σg22σu21πu4) is a reactive gas-phase molecule. The molecule can be described as having two pi bonds but without a sigma bond. Dinitrogen. With nitrogen, we see the two molecular orbitals mixing and the energy repulsion. This is the reasoning for the rearrangement from a more familiar diagram. The σ from the 2p is more non-bonding due to mixing, and same with the 2s σ. This also causes a large jump in energy in the 2p σ* orbital. The bond order of diatomic nitrogen is three, and it is a diamagnetic molecule. The bond order for dinitrogen (1σg21σu22σg22σu21πu43σg2) is three because two electrons are now also added in the 3σ MO. The MO diagram correlates with the experimental photoelectron spectrum for nitrogen. The 1σ electrons can be matched to a peak at 410 eV (broad), the 2σg electrons at 37 eV (broad), the 2σu electrons at 19 eV (doublet), the 1πu4 electrons at 17 eV (multiplets), and finally the 3σg2 at 15.5 eV (sharp). Dioxygen. Oxygen has a similar setup to H2, but now we consider 2s and 2p orbitals. When creating the molecular orbitals from the p orbitals, the three atomic orbitals split into three molecular orbitals, a singly degenerate σ and a doubly degenerate π orbital. Another property we can observe by examining molecular orbital diagrams is the magnetic property of diamagnetic or paramagnetic. If all the electrons are paired, there is a slight repulsion and it is classified as diamagnetic. If unpaired electrons are present, it is attracted to a magnetic field, and therefore paramagnetic. Oxygen is an example of a paramagnetic diatomic. The bond order of diatomic oxygen is two. MO treatment of dioxygen is different from that of the previous diatomic molecules because the pσ MO is now lower in energy than the 2π orbitals. This is attributed to interaction between the 2s MO and the 2pz MO. Distributing 8 electrons over 6 molecular orbitals leaves the final two electrons as a degenerate pair in the 2pπ* antibonding orbitals resulting in a bond order of 2. As in diboron, these two unpaired electrons have the same spin in the ground state, which is a paramagnetic diradical triplet oxygen. The first excited state has both HOMO electrons paired in one orbital with opposite spins, and is known as singlet oxygen. The bond order decreases and the bond length increases in the order O2+ (112.2 pm), O2 (121 pm), O2- (128 pm) and O22- (149 pm). Difluorine and dineon. 
In difluorine two additional electrons occupy the 2pπ*, giving a bond order of 1. In dineon Ne2 (as with dihelium) the number of bonding electrons equals the number of antibonding electrons and this molecule does not exist. Dimolybdenum and ditungsten. Dimolybdenum (Mo2) is notable for having a sextuple bond. This involves two sigma bonds (4dz2 and 5s), two pi bonds (using 4dxz and 4dyz), and two delta bonds (4dx2 − y2 and 4dxy). Ditungsten (W2) has a similar structure. MO energies overview. Table 1 gives an overview of MO energies for first row diatomic molecules calculated by the Hartree-Fock-Roothaan method, together with atomic orbital energies. Heteronuclear diatomics. In heteronuclear diatomic molecules, mixing of atomic orbitals only occurs when the electronegativity values are similar. In carbon monoxide (CO, isoelectronic with dinitrogen) the oxygen 2s orbital is much lower in energy than the carbon 2s orbital and therefore the degree of mixing is low. The electron configuration 1σ21σ*22σ22σ*21π43σ2 is identical to that of nitrogen. The g and u subscripts no longer apply because the molecule lacks a center of symmetry. In hydrogen fluoride (HF), the hydrogen 1s orbital can mix with the fluorine 2pz orbital to form a sigma bond because experimentally the energy of the hydrogen 1s is comparable with that of the fluorine 2p. The HF electron configuration 1σ22σ23σ21π4 reflects that the other electrons remain in three lone pairs and that the bond order is 1. Each molecular orbital takes on more of the character of the atomic orbital closest to it in energy; since the more electronegative atom holds its atomic orbitals at lower energy, the bonding MOs resemble its orbitals most closely. This also accounts for the majority of the electron density residing around the more electronegative atom. Applying the LCAO-MO method allows one to move away from a more static, Lewis-structure type of approach and to account for periodic trends that influence electron movement. Non-bonding orbitals refer to lone pairs seen on certain atoms in a molecule. A further understanding of the energy-level refinement can be acquired by delving into quantum chemistry; the Schrödinger equation can be applied to predict movement and describe the state of the electrons in a molecule. NO. Nitric oxide is a heteronuclear molecule that exhibits mixing. The construction of its MO diagram is the same as for the homonuclear molecules. It has a bond order of 2.5 and is a paramagnetic molecule. The energies of the two 2s orbitals differ enough that each produces its own non-bonding σ orbital. Notably, ionization to NO+ removes an electron from an antibonding orbital, which stabilizes the bond, raises the bond order to 3, and changes the magnetic property to diamagnetic. HF. Hydrogen fluoride is another example of a heteronuclear molecule. It is slightly different in that the π orbital is non-bonding, as well as the 2s σ. The valence 1s electron of hydrogen interacts with the 2p electrons of fluorine. This molecule is diamagnetic and has a bond order of one. Triatomic molecules. Carbon dioxide. Carbon dioxide, CO2, is a linear molecule with a total of sixteen valence electrons. Carbon is the central atom of the molecule and a principal axis, the z-axis, is visualized as a single axis that goes through the center of carbon and the two oxygen atoms. By convention, blue atomic orbital lobes represent positive phases and red atomic orbital lobes negative phases, with respect to the wave function from the solution of the Schrödinger equation. 
In carbon dioxide the carbon 2s (−19.4 eV), carbon 2p (−10.7 eV), and oxygen 2p (−15.9 eV) energies associated with the atomic orbitals are close to one another, whereas the oxygen 2s energy (−32.4 eV) is quite different. Carbon and each oxygen atom will have a 2s atomic orbital and a 2p atomic orbital, where the p orbital is divided into px, py, and pz. With these derived atomic orbitals, symmetry labels are deduced with respect to rotation about the principal axis: a rotation that generates a phase change corresponds to a pi bond ("π"), while one that generates no phase change corresponds to a sigma bond ("σ"). Symmetry labels are further defined by whether the atomic orbital maintains its original character after an inversion about its center atom: if the atomic orbital does retain its original character it is defined gerade, "g", and if it does not, ungerade, "u". The final symmetry-labeled atomic orbital is now known as an irreducible representation. Carbon dioxide's molecular orbitals are made by the linear combination of atomic orbitals of the same irreducible representation that are also similar in atomic orbital energy. Significant atomic orbital overlap explains why sp bonding may occur. Strong mixing of the oxygen 2s atomic orbitals is not to be expected; they remain as a pair of essentially non-bonding, nearly degenerate molecular orbitals. The combination of similar atomic orbital/wave functions and the combinations of atomic orbital/wave function inverses create particular energies associated with the nonbonding (no change), bonding (lower than either parent orbital energy) and antibonding (higher energy than either parent atomic orbital energy) molecular orbitals. Water. For nonlinear molecules, the orbital symmetries are not σ or π but depend on the symmetry of each molecule. Water (H2O) is a bent molecule (105°) with C2v molecular symmetry. The possible orbital symmetries are the four irreducible representations of C2v: A1, A2, B1 and B2. For example, an orbital of B1 symmetry (called a b1 orbital with a small b since it is a one-electron function) is multiplied by -1 under the symmetry operations C2 (rotation about the 2-fold rotation axis) and σv'(yz) (reflection in the molecular plane). It is multiplied by +1 (unchanged) by the identity operation E and by σv(xz) (reflection in the plane bisecting the H-O-H angle). The oxygen atomic orbitals are labeled according to their symmetry as a1 for the 2s orbital and b1 (2px), b2 (2py) and a1 (2pz) for the three 2p orbitals. The two hydrogen 1s orbitals are premixed to form a1 (σ) and b2 (σ*) MO. Mixing takes place between same-symmetry orbitals of comparable energy, resulting in a new set of MO's for water: the 2a1, 1b2, 3a1 and 1b1 MOs, in order of increasing energy. In agreement with this description, the photoelectron spectrum for water shows a sharp peak for the nonbonding 1b1 MO (12.6 eV) and three broad peaks for the 3a1 MO (14.7 eV), 1b2 MO (18.5 eV) and the 2a1 MO (32.2 eV). The 1b1 MO is a lone pair, while the 3a1, 1b2 and 2a1 MO's can be localized to give two O−H bonds and an in-plane lone pair. This MO treatment of water does not have two equivalent "rabbit ear" lone pairs. Hydrogen sulfide (H2S) too has a C2v symmetry with 8 valence electrons but the bending angle is only 92°. As reflected in its photoelectron spectrum, compared to water the 5a1 MO (corresponding to the 3a1 MO in water) is stabilised (improved overlap) and the 2b2 MO (corresponding to the 1b2 MO in water) is destabilized (poorer overlap). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
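The electron-filling bookkeeping described in the diatomic sections above can be summarized in a few lines of code. The following Python sketch is an editor's illustration, not a published algorithm: it fills valence electrons into the two simplified MO orderings discussed (the N2-like ordering with s-p mixing and the O2-like ordering without it), then reports the bond order and whether unpaired electrons remain. The g/u labels are kept for NO only as a schematic shorthand, since, as noted above, they strictly do not apply to heteronuclear molecules, and core 1s levels are ignored.

```python
# (level label, number of degenerate spatial orbitals, +1 bonding / -1 antibonding)
ORDER_N2_LIKE = [("2sg", 1, +1), ("2su*", 1, -1), ("1pu", 2, +1),
                 ("3sg", 1, +1), ("1pg*", 2, -1), ("3su*", 1, -1)]   # Li2..N2, and schematically NO
ORDER_O2_LIKE = [("2sg", 1, +1), ("2su*", 1, -1), ("3sg", 1, +1),
                 ("1pu", 2, +1), ("1pg*", 2, -1), ("3su*", 1, -1)]   # O2, F2, Ne2

def fill(valence_electrons, order):
    """Fill levels lowest-first; return (bond order, number of unpaired electrons)."""
    bonding = antibonding = unpaired = 0
    remaining = valence_electrons
    for _, deg, kind in order:
        k = min(remaining, 2 * deg)                   # electrons placed in this level
        remaining -= k
        if kind > 0:
            bonding += k
        else:
            antibonding += k
        unpaired += k if k <= deg else 2 * deg - k    # Hund's rule within a degenerate level
    return (bonding - antibonding) / 2, unpaired

for name, n_val, order in [("B2", 6, ORDER_N2_LIKE), ("N2", 10, ORDER_N2_LIKE),
                           ("O2", 12, ORDER_O2_LIKE), ("F2", 14, ORDER_O2_LIKE),
                           ("NO", 11, ORDER_N2_LIKE)]:
    bo, up = fill(n_val, order)
    print(f"{name}: bond order {bo:g}, {'paramagnetic' if up else 'diamagnetic'}")
# B2: 1 (paramagnetic), N2: 3 (diamagnetic), O2: 2 (paramagnetic),
# F2: 1 (diamagnetic), NO: 2.5 (paramagnetic), matching the discussion above.
```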
[ { "math_id": 0, "text": "\\ \\mbox{bond order} = \\frac{(\\mbox{number of electrons in bonding MOs}) - (\\mbox{number of electrons in antibonding MOs})}{2} " } ]
https://en.wikipedia.org/wiki?curid=7337217
7338342
Homology manifold
In mathematics, a homology manifold (or generalized manifold) is a locally compact topological space "X" that looks locally like a topological manifold from the point of view of homology theory. Definition. A homology "G"-manifold (without boundary) of dimension "n" over an abelian group "G" of coefficients is a locally compact topological space X with finite "G"-cohomological dimension such that for any "x"∈"X", the homology groups formula_0 are trivial unless "p"="n", in which case they are isomorphic to "G". Here "H" is some homology theory, usually singular homology. Homology manifolds are the same as homology Z-manifolds. More generally, one can define homology manifolds with boundary, by allowing the local homology groups to vanish at some points, which are of course called the boundary of the homology manifold. The boundary of an "n"-dimensional first-countable homology manifold is an "n"−1 dimensional homology manifold (without boundary).
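As a worked consistency check (added here for illustration), excision reduces the local homology at a point of a topological n-manifold to that of formula_6, which satisfies the condition in the definition, so every topological n-manifold is a homology n-manifold over any coefficient group G:

```latex
H_p\!\left(\mathbb{R}^n,\,\mathbb{R}^n\setminus\{x\};\,G\right)
  \cong \tilde H_{p-1}\!\left(\mathbb{R}^n\setminus\{x\};\,G\right)   % long exact sequence of the pair; R^n is contractible
  \cong \tilde H_{p-1}\!\left(S^{n-1};\,G\right)                      % deformation retraction onto a small sphere about x
  \cong \begin{cases} G & p = n,\\ 0 & p \neq n.\end{cases}
```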
[ { "math_id": 0, "text": " H_p(X,X-x, G)" } ]
https://en.wikipedia.org/wiki?curid=7338342
73384654
Chessboard complex
Mathematical object in topological graph theory A chessboard complex is a particular kind of abstract simplicial complex, which has various applications in topological graph theory and algebraic topology. Informally, the ("m", "n")-chessboard complex contains all sets of positions on an "m"-by-"n" chessboard on which rooks can be placed without attacking each other. Equivalently, it is the matching complex of the ("m", "n")-complete bipartite graph, or the independence complex of the "m"-by-"n" rook's graph. Definitions. For any two positive integers "m" and "n", the ("m, n")-chessboard complex formula_0 is the abstract simplicial complex with vertex set formula_1 that contains all subsets "S" such that, if formula_2 and formula_3 are two distinct elements of "S", then both formula_4 and formula_5. The vertex set can be viewed as a two-dimensional grid (a "chessboard"), and the complex contains all subsets "S" that do "not" contain two cells in the same row or in the same column. In other words, it contains all subsets "S" such that rooks can be placed on the cells of "S" without taking each other. The chessboard complex can also be defined succinctly using the deleted join. Let "Dm" be a set of "m" discrete points. Then the chessboard complex is the "n"-fold 2-wise deleted join of "Dm", denoted by "formula_6". Another definition is the set of all matchings in the complete bipartite graph formula_7. Examples. In any ("m","n")-chessboard complex, the neighborhood of each vertex has the structure of an ("m" − 1,"n" − 1)-chessboard complex. In terms of chess rooks, placing one rook on the board eliminates the remaining squares in the same row and column, leaving a smaller set of rows and columns where additional rooks can be placed. This allows the topological structure of a chessboard complex to be studied hierarchically, based on its lower-dimensional structures. An example of this occurs with the (4,5)-chessboard complex, and the (3,4)- and (2,3)-chessboard complexes within it. Properties. Every facet of formula_0 contains formula_8 elements. Therefore, the dimension of formula_0 is formula_9. The homotopical connectivity of the chessboard complex is at least formula_10 (so formula_11). The Betti numbers formula_12 of chessboard complexes are zero if and only if formula_13. The eigenvalues of the combinatorial Laplacians of the chessboard complex are integers. The chessboard complex is formula_14-connected, where formula_15. The homology group formula_16 is a 3-group of exponent at most 9, and is known to be exactly the cyclic group on 3 elements when formula_17. The formula_18-skeleton of the chessboard complex is "vertex decomposable" in the sense of Provan and Billera (and thus shellable), and the entire complex is vertex decomposable if formula_19. As a corollary, any position of "k" rooks on an "m"-by-"n" chessboard, where formula_20, can be transformed into any other position using at most formula_21 single-rook moves (where each intermediate position is also not rook-taking). Generalizations. The complex formula_22 is a "chessboard complex" defined for a "k"-dimensional chessboard. Equivalently, it is the set of matchings in a complete "k"-partite hypergraph. This complex is at least formula_23-connected, for formula_24. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
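For small boards the complex can be enumerated directly. The Python sketch below is an illustrative addition (not taken from the references; the function name is arbitrary): it lists all non-attacking rook placements on a 3-by-4 board and confirms two of the statements above, namely that every facet has min(m, n) cells and hence that the dimension is min(m, n) − 1.

```python
from itertools import combinations

def chessboard_complex(m, n):
    """Return all faces (as frozensets of cells (i, j)) of the (m, n)-chessboard complex."""
    cells = [(i, j) for i in range(m) for j in range(n)]
    faces = set()
    for k in range(min(m, n) + 1):
        for S in combinations(cells, k):
            rows = {i for i, _ in S}
            cols = {j for _, j in S}
            if len(rows) == len(S) and len(cols) == len(S):   # no two rooks share a row or column
                faces.add(frozenset(S))
    return faces

faces = chessboard_complex(3, 4)
facets = [f for f in faces if not any(f < g for g in faces)]   # maximal faces
print(max(len(f) for f in faces) - 1)    # 2, the dimension min(3, 4) - 1
print({len(f) for f in facets})          # {3}: every facet has min(m, n) elements
print(len(facets))                       # 24 maximal non-attacking placements (4 * 3 * 2)
```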
[ { "math_id": 0, "text": "\\Delta_{m,n}" }, { "math_id": 1, "text": "[m]\\times [n]" }, { "math_id": 2, "text": "(i_1,j_1)" }, { "math_id": 3, "text": "(i_2,j_2)" }, { "math_id": 4, "text": "i_1\\neq i_2" }, { "math_id": 5, "text": "j_1\\neq j_2" }, { "math_id": 6, "text": "(D_m)^{*n}_{\\Delta(2)}" }, { "math_id": 7, "text": "K_{m,n}" }, { "math_id": 8, "text": "\\min(m,n)" }, { "math_id": 9, "text": "\\min(m,n)-1" }, { "math_id": 10, "text": "\\min\\left(m, n, \\frac{m+n+1}{3}\\right)-2" }, { "math_id": 11, "text": "\\eta \\geq \\min\\left(m, n, \\frac{m+n+1}{3}\\right)" }, { "math_id": 12, "text": "b_{r - 1}" }, { "math_id": 13, "text": "(m - r)(n - r) > r" }, { "math_id": 14, "text": "(\\nu_{m, n} - 1)" }, { "math_id": 15, "text": "\\nu_{m, n} := \\min\\{m, n, \\lfloor\\frac{m + n + 1}{3}\\rfloor \\}" }, { "math_id": 16, "text": "H_{\\nu_{m, n}}(M_{m, n})" }, { "math_id": 17, "text": "m + n \\equiv 1\\pmod{3}" }, { "math_id": 18, "text": "(\\lfloor\\frac{n + m + 1}{3}\\rfloor - 1)" }, { "math_id": 19, "text": "n\\geq 2m - 1" }, { "math_id": 20, "text": "k\\leq\\lfloor\\frac{m + n + 1}{3}\\rfloor" }, { "math_id": 21, "text": "mn - k" }, { "math_id": 22, "text": "\\Delta_{n_1,\\ldots,n_k}" }, { "math_id": 23, "text": "(\\nu - 2)" }, { "math_id": 24, "text": "\\nu := \\min\\{n_1, \\lfloor\\frac{n_1 + n_2 + 1}{3}\\rfloor, \\dots, \\lfloor\\frac{n_1 + n_2 + \\dots + n_k + 1}{2k + 1}\\rfloor\\}" } ]
https://en.wikipedia.org/wiki?curid=73384654